> No, it's an explicit state change. Which means when it is used, you know
> exactly where to find THE ERROR. What is difficult for you to understand
> about this non-obtuse approach?
Well, I personally am ambivalent about post-free NULL-setting. I don't
do it myself, for reasons I'm about to describe; but I understand why
some people do do it, for the reasons you've been stating.
(1) Nothing is a panacea. "THE ERROR" is usually two lines above the
segfault, in which case it's easy to spot, whether you assign NULL or
not; but often it's in a branch of the call graph that's no longer
even on the stack, or maybe not even in the same thread! Now, assigning
NULL to dead pointers can make debugging-with-a-debugger easier, sure,
but it's not a panacea. (This is more a response to your hyperbole
above than a reason not to assign NULL. But if it /were/ really a
cure-all, I'd hardly be able to object to it, would I?)
(2) Assigning NULL to dead pointers takes screen real estate. It can
also take up RAM real estate, on embedded platforms where the size of
the executable is critical. (Unless, of course, you use a smart compiler
that optimizes away the useless writes, in which case you're back where
you started, with less debuggability and more source code --- the
worst of both worlds.)
(3) Dead writes in general screw up static analysis tools (like 'lint',
to take a terrible example). Either you'll get a lot of false positives
("Variable 'p' written on line 42, but never read"), or else you'll
turn off that warning and miss a lot of true positives.
Static analysis is the reason I /do/ take a firm stand against
the practice of initializing all variables at their definitions, which
is essentially the flip side of re-assigning all pointers after their
last use. If you do that, you're deliberately crippling static analysis
by removing all the "Variable 'p' used without being initialized"
warnings. Computers are better at control-flow analysis than humans
are; let them do their jobs!
-Arthur