E. Robert Tisdale said:
Nonsense!
Suppose that two machine architectures with different byte sizes
need to communicate with each other (over Ethernet for example).
Let's assume we have a machine where CHAR_BIT==32 and sizeof(int)==1,
and it needs to communicate with a machine with 8-bit bytes. (Feel
free to assume a 2's-complement representation with no padding bits.)
Assume that the communications protocol is defined in terms of
transmitting an octet at a time in a defined order. (I'm not familiar
with the Ethernet protocol, so I won't use it as an example.)  Then
*something* on the sending machine needs to break each 32-bit int down
into four octets. Since a 32-bit int on this machine is not
inherently composed of 8-bit chunks, the endianness of this conversion
is a matter of choice, and has nothing to do with any inherent
property of the type int.
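For concreteness, here's a minimal sketch of such a conversion in C,
assuming (as above) a 32-bit unsigned int with no padding bits.  The
octet order is whatever the protocol says it is; this sketch sends the
most significant octet first.  (The name put_octets_be is just mine,
not any standard API.)

    /* Split a 32-bit value into four octets for transmission,
     * most significant octet first.  The order is dictated by the
     * protocol we chose, not by the machine; the same shifts work
     * whether CHAR_BIT is 8 or 32, because they operate on the
     * value, not on its storage layout. */
    void put_octets_be(unsigned int value, unsigned char out[4])
    {
        out[0] = (value >> 24) & 0xFF;
        out[1] = (value >> 16) & 0xFF;
        out[2] = (value >>  8) & 0xFF;
        out[3] =  value        & 0xFF;
    }

Note that nothing here inspects how the int is laid out in memory.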
Suppose we have a machine with 8-bit bytes that wants to communicate
over a protocol that sends one 4-bit nybble at a time. We have to
decide which nybble to send first. This decision has nothing to do
with any inherent endianness of a single 8-bit byte.
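Again a sketch, assuming 8-bit bytes and picking (arbitrarily) the
high nybble first; put_nybbles is an illustrative name only:

    /* Emit an 8-bit byte as two 4-bit nybbles.  Which nybble goes
     * first is purely a protocol choice. */
    void put_nybbles(unsigned char byte, unsigned char out[2])
    {
        out[0] = (byte >> 4) & 0x0F;  /* high nybble first */
        out[1] =  byte       & 0x0F;  /* low nybble second */
    }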
Again, as I said earlier, "the type int has no meaningful endianness"
for such a system; you can impose whatever endianness you need for a
given interface.