> In the programming language that *is* C, the implementation *must*
> define CHAR_BIT. It's not optional.
> To complicate matters, char is not required by the C standard to be the
> machine "byte". It is only required to be at least 8 bits. And a
> machine "byte" is not always 8 bits....
True. However:
> So, given that it can't strictly be done in C
I don't think this is true. Consider that:
- integers must have a pure binary representation, which means
  value bits are in order
- you can inspect the representation of any object using a
  pointer to unsigned char
Given that, try this code:
#include <stdio.h>

int main(void)
{
    unsigned i, c;
    unsigned char *bp1, *bp2;

    /* need at least 3 bytes, so that neither watched byte can be
       the low-order byte itself */
    if (sizeof i > 2)
    {
        i = 0;
        bp1 = bp2 = (unsigned char *)&i;
        bp1++;                /* point to second byte */
        bp2 += sizeof i - 2;  /* point to second-to-last byte */

        /* shift a single bit upward until it crosses out of the
           low-order byte into one of the watched bytes */
        for (c = 0; *bp1 == 0 && *bp2 == 0; c++)
            i = 1u << c;
        printf("bytes have %d bits in this implementation\n", (int)c - 1);
    }
    return 0;
}
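(If I have it right, an implementation with 8-bit bytes should print
"bytes have 8 bits in this implementation".)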
This works for both big- and little-endian architectures, as far as I
can see, because it tests when the shift operator alters the second or
the second-to-last (penultimate) byte. I don't offhand see a way for a
conforming implementation (where sizeof(unsigned) > 2) to produce the
wrong answer.
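Of course, if all you want is the number of bits in a byte, no probing
is needed at all, since (as noted above) every conforming implementation
must define CHAR_BIT in <limits.h>. A minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is mandatory, so this is strictly portable */
    printf("bytes have %d bits in this implementation\n", CHAR_BIT);
    return 0;
}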
Extending the program to handle the cases where sizeof(unsigned) is 1
or 2 is left as an exercise for the reader.
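One way that exercise might go (a sketch, not the only approach): shift
a single bit through an unsigned char until it wraps to zero, which
sidesteps both endianness and sizeof entirely:

#include <stdio.h>

int main(void)
{
    unsigned char uc = 1;
    int bits = 0;

    /* the assignment back into uc discards the overflow, so uc
       becomes 0 after exactly one shift per bit in a byte */
    while (uc != 0)
    {
        bits++;
        uc = (unsigned char)(uc << 1);
    }
    printf("bytes have %d bits in this implementation\n", bits);
    return 0;
}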
--
Michael Wojcik (e-mail address removed)
"Well, we're not getting a girl," said Marilla, as if poisoning wells were
a purely feminine accomplishment and not to be dreaded in the case of a boy.
-- L. M. Montgomery, _Anne of Green Gables_