It's guaranteed that, for any integer type, all-bits-zero is a
representation of the value 0. (Neither the C90 nor the C99 standard
says this, but n1124 does; this change was made in TC2 in response to
DR #263. I've never heard of an implementation that doesn't have this
property anyway, and I'd be comfortable depending on it.) For
one's-complement and sign-magnitude representations, there are two
distinct representations of zero (+0 and -0), but you can avoid that
by using an unsigned type. But unsigned types *can* have padding
bits, so even if buf==0, you might still have missed a 1 bit.
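A minimal illustration of the all-bits-zero guarantee (just a sketch;
the variable name u is arbitrary):

    #include <string.h>
    #include <assert.h>

    int main(void)
    {
        unsigned int u;
        memset(&u, 0, sizeof u);  /* set every bit, padding included, to 0 */
        assert(u == 0);           /* guaranteed since TC2 (DR #263) */
        return 0;
    }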
Could you please explain this? If I have calculated (or even know) that
the underlying word size is 32 bits and I use an unsigned int to
represent this in C, then how do padding bits make any difference?
Isn't the variable guaranteed to go from 0 to 2^32-1?
No, it isn't. The guaranteed range of unsigned int is only 0 to 65535.
But, for example, an implementation could legally have:
CHAR_BIT == 8
sizeof(unsigned int) == 4
UINT_MAX == 65535
An unsigned int then has 32 bits: 16 value bits and 16 padding bits.
For details, see section 6.2.6.2 of the C99 standard.
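One way to check how this plays out on a given implementation is to
count the value bits directly; a minimal sketch (repeatedly
right-shifting UINT_MAX counts the value bits, and whatever is left
over out of CHAR_BIT * sizeof(unsigned int) is padding):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int max = UINT_MAX;
        int value_bits = 0;
        while (max != 0) {        /* shift until all value bits are gone */
            max >>= 1;
            value_bits++;
        }
        int object_bits = (int)(CHAR_BIT * sizeof(unsigned int));
        printf("value bits: %d, object bits: %d, padding bits: %d\n",
               value_bits, object_bits, object_bits - value_bits);
        return 0;
    }

On the hypothetical implementation above this would print "value bits:
16, object bits: 32, padding bits: 16"; on typical implementations the
padding count is 0.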
Wow. That would break a lot of code. Assuming some situation like
that, is there a constant like "MAX BITS FOR ULONG" or "MAX WORD SIZE
IN BITS", e.g. to get a platform-independent assurance of the number of
bits one can use in a single HW word? Having said that, in my sample
code in the initial reply to Eric, I would only need to recalculate my
"start mask" from 0x80000000 to "whatever" when I calculate "usable
bits per HW word" in program init. Since the overhead of doing that
calculation is negligible compared to the other computation, I could do
that. (See the sketch below.)
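Something like this init-time calculation would do it (a minimal sketch
of the idea described above, using unsigned long; the names
usable_bits, start_mask, and init_word_params are hypothetical):

    #include <limits.h>

    static int usable_bits;           /* value bits per HW word */
    static unsigned long start_mask;  /* highest usable bit; replaces
                                         the hard-coded 0x80000000 */

    static void init_word_params(void)
    {
        unsigned long max = ULONG_MAX;
        usable_bits = 0;
        while (max != 0) {            /* count value bits in unsigned long */
            max >>= 1;
            usable_bits++;
        }
        start_mask = 1UL << (usable_bits - 1);
    }

Calling init_word_params() once at program startup keeps the rest of
the code independent of how many padding bits, if any, the
implementation uses.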