Keith Thompson
David Brown said:
I have used processors with 16-bit "char", and no way to make an 8-bit
type (except as a bitfield). Nowhere, in any of the documentation,
manuals, datasheets, or anywhere else was there any reference to a
"byte" that is not 8 bits. It made clear that a /char/ was 16 bits
rather than the usual 8 bits, but they were never called "bytes".
I haven't used such devices much - but the vast majority of people who
use the term "byte" have never even heard of such devices, never mind
used them.
There are only two situations when "byte" does not automatically and
unequivocally mean "8 bits" - one is in reference to ancient computer
history (and documents from that era, such as network RFCs), and the
other is extreme pedantry. There is a time and place for both of these
- but you won't convince many people that you would ever /really/ think
a reference to a "byte" meant anything other than 8 bits.
(If you can give modern, or at least still-current, references to usage
of "byte" for something other than 8 bits, then I will recant and blame
the egg nog!)
A "char", as you say, has a well defined meaning - but not a well
defined size.
As I'm sure you know, the ISO C standard uses the term "byte" to refer
to the size of type char, which is CHAR_BIT bits. CHAR_BIT is required
to be *at least* 8, but may be larger. I can't think of anything in the
standard that even implies that 8 is the preferred value.
And yes, I understand that there are real-world systems with CHAR_BIT >
8 (DSPs, mostly), though I haven't used a C compiler on any of them.
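A minimal sketch of what the standard's terminology means in practice
(nothing here is specific to any one implementation; CHAR_BIT comes
from <limits.h>, and sizeof counts in bytes of CHAR_BIT bits each):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* An ISO C "byte" is CHAR_BIT bits, so this prints 8 on a
           typical hosted system, but could print 16 on a DSP with
           16-bit char. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("sizeof(int) = %zu bytes = %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }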
Even if CHAR_BIT were required to be exactly 8, I'd still prefer to
refer to CHAR_BIT rather than using the constant 8, since the macro name
makes it clearer just what I mean.
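For example, something like this (a sketch; the macro name BITS_IN is
mine, not anything standard):

    #include <limits.h>

    /* Number of bits in the object representation of type T.
       Writing CHAR_BIT rather than 8 says exactly what is meant,
       and stays correct even where CHAR_BIT != 8. */
    #define BITS_IN(T) (sizeof(T) * CHAR_BIT)

On a typical system BITS_IN(int) is 32; on a 16-bit-char DSP the same
expression still gives the right answer without editing the code.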
But if I have a need to write code that won't work unless CHAR_BIT==8,
I'll probably take a moment to ensure that it won't *compile* unless
CHAR_BIT==8. (Unless I'm working on existing code that has such
assumptions scattered through it; in that case, I probably won't bother.)
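One way to do that (a sketch; a C11 _Static_assert would serve equally
well):

    #include <limits.h>

    /* Refuse to compile on any implementation where the code's
       CHAR_BIT == 8 assumption does not hold. */
    #if CHAR_BIT != 8
    #error "This code assumes CHAR_BIT == 8"
    #endif

The #error directive works even with pre-C11 compilers, since CHAR_BIT
is a macro and can be tested by the preprocessor.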