Charlie said:
So what!
There are other ways of getting infected, so why bother with condoms...
Because condoms don't provide 100% protection. If you
need 100% protection, the solution is to refrain from, er,
screwing around. And if you don't screw around, you don't
need condoms.
What it comes down to, I think, is a difference in
attitude about how to write programs that work. I've been
pondering how to express my ideas about The Right Way (tm)
to do things, and my misgivings about piecemeal approaches
like zeroing the argument to free(). Here's my poor best
at trying to explain my convoluted self:
Programs aren't life-like, in the sense that their
correctness[1] isn't a matter of probability[2]. An
incorrect program is incorrect even if it hasn't actually
failed yet; the error is latent, ready to cause a failure[3]
when the circumstances are right (or wrong, depending on
your point of view). In managing dynamic memory, failing
to keep proper track of which pointers are "live" and which
have "died" is an error. You must have a means to tell
whether a pointer is or isn't current before you try to
use[4] it -- and if you have such a means, you don't need
any special value in the pointer itself.
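To make the "latent error" point concrete, here's a minimal
sketch in C (my own illustration; the names are invented) of
a program that is incorrect from the moment free() returns,
even on runs where it never misbehaves:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *name = malloc(16);
        if (name == NULL)
            return EXIT_FAILURE;
        strcpy(name, "Charlie");

        char *alias = name;  /* a second copy of the pointer */
        free(name);          /* both copies just "died"      */

        /* Latent error: this may print "Charlie" for years,
           then print garbage (or crash) when circumstances
           change.  The program was incorrect the moment
           free() returned, whether or not it has failed.   */
        printf("%s\n", alias);
        return EXIT_SUCCESS;
    }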
Now, the "means" could perfectly well rest on the
assertion "All non-NULL pointers in my program are valid."
But clobbering just the one pointer handed to free() is
not enough to maintain the assertion: You need additional
effort and additional mechanisms to find and clobber all
the other copies that may be lying around. It's often
the case, for example, that the argument to free() is
a copy of an argument to free()'s caller, so zapping free()'s
argument is only the beginning of the job. A clear-the-
pointers scheme is far beyond the capabilities of
simplistic dodges like the one I illustrated; it must be
driven from the top down rather than from the bottom up.
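Here's a sketch of that aliasing problem (my own, with a
hypothetical FREE macro standing in for the bottom-up
clear-on-free dodge):

    #include <stdlib.h>

    /* Hypothetical dodge: free and clear in one step. */
    #define FREE(p) (free(p), (p) = NULL)

    static void discard(char *item)
    {
        FREE(item);   /* clears only the local copy 'item' */
    }

    void caller(void)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return;
        discard(buf); /* buf is now stale yet still
                         non-NULL, so "all non-NULL
                         pointers are valid" is broken */
    }

Only a scheme driven from the top, one that knows about both
item and buf, can keep the assertion true.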
Bottom-up schemes aren't good enough for serious use.
They can even be harmful: by preventing failures in a
chunk of erroneous code 99% of the time, they can make
it less likely that the error will be exposed in testing.
They are the Typhoid Marys of programming: asymptomatic
and yet deadly. And that's why I don't like 'em.
Notes:
[1] I'm using "correctness" in the weak sense: A "correct"
program is one that doesn't "go down in flames" by doing
something like following stale pointers. Such a program
may nevertheless compute the value of pi as -42 or state
that the square root of 3 is 9 or otherwise contravene its
specification, but that's not the kind of "correctness"
I'm writing about. "Well-behaved" might have been a better
word than "correct," but I'm not up for a rewrite.
[2] This isn't meant to rule out probabilistic computation
methods. A program that seeks a solution probabilistically
and sometimes fails to find one but "plays nice" and reports
the failure in a controlled manner is still correct in the
sense of [1], and in some stronger senses as well.
[3] "Failure" as in "going off the rails." A program can
produce incorrect results without "failing"; this is the
flip side of [1].
[4] Note that simply examining the value of an invalid
pointer constitutes a "use," even if the pointer is not
dereferenced. Clearing free()d pointers can avoid this
particular problem, but you've got to get 'em all or it's
of no use.
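Here's a sketch of that trap (again mine; the names are
invented):

    #include <stdlib.h>

    void example(void)
    {
        char *p = malloc(10);
        char *q = p;  /* an uncleared copy                */
        free(p);
        p = NULL;     /* p may now be examined safely...  */

        if (q != NULL) {  /* ...but merely comparing q is
                             a "use" of an invalid pointer
                             -- undefined behavior, though
                             q is never dereferenced      */
            /* trouble */
        }
    }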