[..]
What are "the first four bits" anyway? Those with the lowest value? Or
the highest? And what if there are padding bits?
"The first four bits" are the bits numbered 0 through 3, no? Aren't any
"first" items always the ones with lower indices? How can they be the
"highest"?
In the embedded world we are frequently concerned with endian issues. On
a big-endian machine numbers are stored as God intended, with the most
significant byte at the lowest address. Then Intel goes and mucks
everything up by making the 80x86 platform little endian, so that the
least significant byte comes first. On some architectures it is even
more archaic: a 32-bit integer representation on a 16-bit
microcontroller. Some of them split the number into 16-bit words and
swap the bytes within those words... so it's not as cut and dried, and I
would ask anyone looking for information about bits to give me the bit
numbers by weight, 2^n for n = [0..N-1].
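A small sketch of what I mean; the value 0x12345678 is arbitrary, and
the word-swapped layout in the comment is just one of the mixed
orderings you run into:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint32_t value = 0x12345678;
        uint8_t  bytes[4];

        /* Copy the value into a byte array to see how THIS machine lays it out. */
        memcpy(bytes, &value, sizeof value);

        printf("0x%08X is stored as: %02X %02X %02X %02X\n",
               (unsigned)value, bytes[0], bytes[1], bytes[2], bytes[3]);

        /* Typical layouts, lowest address first:
         *   big endian:                 12 34 56 78
         *   little endian (80x86):      78 56 34 12
         *   one word-swapped mix seen
         *   on some 16-bit parts:       34 12 78 56
         */
        return 0;
    }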
My headache with this issue currently is that some morons designed a
robotics messaging protocol around a little-endian format, and we must
design code that works on both big- and little-endian machines. I cannot
even use bitfields in my structs to pull out the data, because the
protocol violates the network byte ordering of Ethernet. I have to code
friggin accessor functions for every stinking field in every stinking
type of message I want to support... talk about archaic!
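For what it's worth, the accessors end up looking something like the
sketch below; the message layout and field names are made up, but
because the bytes are assembled with shifts rather than casts or
bitfields, the same code works on both big- and little-endian hosts:

    #include <stdio.h>
    #include <stdint.h>

    /* Little-endian field accessors that build the value byte by byte. */
    static uint16_t get_le16(const uint8_t *p)
    {
        return (uint16_t)(p[0] | ((unsigned)p[1] << 8));
    }

    static uint32_t get_le32(const uint8_t *p)
    {
        return  (uint32_t)p[0]
              | ((uint32_t)p[1] << 8)
              | ((uint32_t)p[2] << 16)
              | ((uint32_t)p[3] << 24);
    }

    int main(void)
    {
        /* A made-up message: 16-bit ID followed by a 32-bit position field. */
        const uint8_t msg[] = { 0x01, 0x00,              /* id = 1     */
                                0x78, 0x56, 0x34, 0x12   /* position   */ };

        printf("id       = %u\n",     get_le16(msg));                /* 1          */
        printf("position = 0x%08X\n", (unsigned)get_le32(msg + 2));  /* 0x12345678 */
        return 0;
    }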