Aha. My thanks to the poster the other day who noted that the best
way to get an answer is to make some assertion and wait for people
to jump on you. I had previously made enquiries as to why people
bothered with '\0' instead of the easier-to-type 0, but had gotten no
answer.
I think Chris Torek's answer in this thread (roughly, "because it's
supposed to be a character, so make it look like one") is the best
rationale.
So this is just an idiom that you are supposed to have picked up
while learning the language (like using upper-case characters
for macro names vs. function names)?
Basically. It's an idiom that you're supposed to encounter *earlier*
in your language-learning career than the fact that chars are just
small integers anyway; thus it's supposed to make *more* sense to use
a character when you mean a character, and zero when you mean zero.
You see?
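
To make that concrete, here's a minimal sketch of the distinction
(my own made-up example, nothing canonical about it):

#include <string.h>

void sketch(const char *s)
{
    const char *end = strchr(s, '\0'); /* a character is meant: write a character */
    int mistakes = 0;                  /* a count is meant: write plain zero      */
    (void)end; (void)mistakes;         /* keep the compiler quiet about unused names */
}
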
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).
Personally, I think that's silly in the extreme. It doesn't help
readability any, since it's just substituting a programmer-specific
idiom for a language-wide idiom, and it makes the code longer. It
also requires either that you make a new header to #include this
#definition in every program you write, or that you duplicate the
code in every translation unit.
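
Here's the comparison in miniature, with a made-up string-length loop
(not the original poster's actual code):

#define END_OF_STRING '\0'

unsigned my_strlen(const char *p)
{
    unsigned n = 0;
    while (p[n] != END_OF_STRING)   /* vs.  while (p[n] != '\0')  */
        n++;
    return n;
}
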
Pedantically, it invokes undefined behavior by trying to re#define
an identifier reserved to the implementation, should the implementation
ever find the need to signal to you that it's encountered an Error
having something to do with ND_OF_STRING.
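
To spell the hazard out: names beginning with E followed by a digit or
an uppercase letter are set aside for <errno.h>'s error macros. No
implementation I know of actually uses this particular name, so take
the following as purely hypothetical:

#include <errno.h>          /* entitled to define macros named E<digit> or E<UPPERCASE> */

#define END_OF_STRING '\0'  /* lands in that reserved namespace, so strictly
                               speaking this definition is undefined behavior */
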
[My first objection is a little hypocritical, perhaps, as many of
my own programs use #define steq(x,y) (!strcmp(x,y)) to simplify
the argument parsing code: another programmer-specific idiom substituted
for a perfectly good language-wide idiom. But in my defense, I'm making
the code shorter and less error-prone, not longer and murkier.]
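
A hypothetical fragment of the kind of argument parsing I mean (the
option letters are invented for the example):

#include <string.h>

#define steq(x,y) (!strcmp(x,y))

int main(int argc, char **argv)
{
    int verbose = 0;
    int i;

    for (i = 1; i < argc; i++) {
        if (steq(argv[i], "-v"))        /* reads as "string-equal" */
            verbose = 1;
        else if (steq(argv[i], "-q"))
            verbose = 0;
    }
    (void)verbose;
    return 0;
}
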
How did this idiom originate historically?
By the need to be able to include embedded nulls in string literals.
All the string escape codes (\n, \r, \a, \0, \b, ...) are legitimate escape
codes for character literals, too. As for why the language designers
picked \ to be the escape character in literals, I couldn't say.
"Historical reasons" of some sort, no doubt.
-Arthur