I have to take your word for that, but I am not aware of having trouble
with that distinction myself.
Experienced programmers tend to be better about it. The main problem isn't
in the conscious awareness, though; it's in the brain's tendency to fill in
answers that are really partial results of something else.
Same general principle as a whole lot of social engineering attacks and
tricks. If you accuse someone of something they would like to deny, they are
likely to deny it. If you state something that necessarily implies that
thing, they may not notice that the implication is one they're trying to
deny.
So, someone who would easily spot something direct like "were you at the
nightclub at the time of the murder?" might easily fall for "did you see anyone
you knew at the nightclub at the time of the murder?"
Similarly, even if you *know* that EOF may have values other than -1, if
you've seen it as -1 and know it's -1 on a given implementation, you're
less likely to wonder whether something like:
/* we take a pointer to [1] so we can index by -1 */
int flagtable[] = {
    0xff,   /* EOF */
    0x00,   /* NUL */
}, *flags = flagtable + 1;
is safe. You know how it works.
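To make the dependency explicit, here is a minimal standalone version of that
idiom (the table size and the main() driver are my additions, and the whole
thing assumes EOF is -1, which is exactly the assumption at issue):

#include <stdio.h>

/* One slot for EOF at index -1, then one slot per character code.
   Pointing flags at [1] lets us index by -1 -- which lines up with
   EOF only on an implementation where EOF happens to be -1. */
static int flagtable[1 + 256] = {
    0xff,   /* reached as flags[-1]: meant for EOF */
    0x00,   /* flags[0]: NUL */
    /* the remaining slots default to zero */
};
static int *flags = flagtable + 1;

int main(void) {
    int c = getchar();
    /* Reads naturally where EOF is -1; if EOF were, say, -2, then
       flags[c] would index before the start of the table at end of
       input. */
    printf("flag for %d is %#x\n", c, (unsigned)flags[c]);
    return 0;
}

Nothing in it looks questionable if -1 is the only EOF you've ever seen; the
hazard only shows up on an implementation that picks a different negative
value.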
The trouble with this anecdote is that you don't know what the author of
the code knew. They may have had access to nothing but the value of
.TRUE. on one system at one time. They may not have been privy to what
.TRUE. *has* to be rather than what it *happens* to be. I.e. this is
probably an anecdote about knowing too little, not about knowing too much.
Well, that's the thing. If you asked the vendor, they told you that .FALSE.
was all bits zero, and .TRUE. wasn't.
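For a C analogue of that kind of promise (this sketch and its TRUE macro are
my own illustration, not part of the original story): isdigit() is only
specified to return nonzero for a digit, not any particular nonzero value,
much as .TRUE. was only specified as not-all-bits-zero.

#include <ctype.h>
#include <stdio.h>

#define TRUE 1   /* the value one implementation happened to produce */

int main(void) {
    int c = '7';
    if (isdigit(c) == TRUE)   /* leans on a value you once observed */
        puts("matched the assumed value");
    if (isdigit(c))           /* leans only on the documented guarantee */
        puts("tested against zero, which is all the guarantee promises");
    return 0;
}

The first test may or may not hold on any given library; the second is
portable.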
Maybe, but is anyone here arguing for giving people *only* that
information? I thought the debate was about knowing some actual value of
EOF in addition to the more general information about it.
See above; people tend to generalize a little bit unconsciously even when
they know better.
And if you did find out what it is, would that lead you to make the kind
of errors you've described?
I'd be more likely to than I am now.
If all you are saying is that there is no need to know, I suspect we all
agree on that, but I feel you are suggesting something else, and I am not
sure what it is.
I come to programming from an unusual background, which is that I studied
psychology. I've spent a lot of my time totally fascinated by the ways in
which human brains are unreliable. This is of particular interest because
my brain is very, very heavily optimized for speed over reliability; I'm
extremely fast, but I have a particularly noticeable tendency towards a lot
of the common cognitive errors. (Mostly not the socially-focused ones,
though.)
Human brains are... Well, frankly, they're *astoundingly* bad. Classic
example: You have a bunch of people on a subway car. A woman announces
that her radio has been stolen. The majority of the people present are
able to give a description of the person who took the radio.
There *wasn't* a radio. No radio was ever present. But they remember seeing
it taken.
In general, if you have information which is *nearly* the piece of information
you need, and the piece of information you need is unknown or even known to
be unknowable (such as "the" value of EOF for purposes of a portable program),
your brain is extremely likely to fill in something Useful.
Good programmers (and engineers, and mathematicians, and lawyers, and just
about everyone else who has to think for a living) are much better at
resisting this than people who haven't trained for it, but they are still
vulnerable to it.
-s