(e-mail address removed) said:
Never, if you're careful and lucky.
If you stick to standard C and only ever have to process files that you
have yourself produced on a single implementation, the answer is - as near
as makes no odds - never.
Normally it starts to matter when you're reading integers from a binary
file (or writing them /to/ a binary file).
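One way to take endianness out of the picture in that situation is to read and write the bytes yourself, in an order you choose, using shifts. Here's a sketch (the four-byte width, the low-byte-first order in the file, and the names put_u32 and get_u32 are just my choices, not anything the Standard mandates):

#include <stdio.h>

/* Write v to fp as exactly four bytes, least significant byte
   first, regardless of the host's byte order. Assumes the value
   fits in 32 bits. */
int put_u32(unsigned long v, FILE *fp)
{
    int i;
    for (i = 0; i < 4; i++)
    {
        if (putc((int)((v >> (8 * i)) & 0xFF), fp) == EOF)
        {
            return EOF;
        }
    }
    return 0;
}

/* Read the value back, reassembling it with shifts rather than
   by overlaying memory. */
int get_u32(unsigned long *v, FILE *fp)
{
    int i, c;
    *v = 0;
    for (i = 0; i < 4; i++)
    {
        c = getc(fp);
        if (c == EOF)
        {
            return EOF;
        }
        *v |= (unsigned long)c << (8 * i);
    }
    return 0;
}

Because the code never looks at the representation of the integer in memory, the same file reads back correctly on any implementation.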
No, this operates purely on the value of x, not its representation.
To access individual bytes, use an unsigned char *.
"Big endian" means that the most significant values come first in the
underlying representation. A good example is prices in a shop: when we see
39.99 on a pair of jeans, we know that it's about 40 currency units, not
almost a hundred currency units. "Little endian" means that the least
significant values come first - and I suppose the obvious example would be
UK date format: day/month/year.
So if you do something like this:
int x = 1;
unsigned char *p = (unsigned char *)&x;
printf("%d\n", *p);
it is likely to print 1 on a little-endian system, but 0 on a big-endian
system (provided sizeof(int) is at least 2, which isn't actually
guaranteed), whereas if you had written:
int x = 1;
unsigned char *p = (unsigned char *)&x;
p += (sizeof x) - 1;
printf("%d\n", *p);
it is likely to print 0 on a little-endian system, but 1 on a big-endian
system.
Generally, such code is best avoided. If you need to know whether your
system is big- or little- (or middle-!) endian, try to redesign your
program so that you don't need to know this. If that's impossible, perhaps
because of some externally imposed data format, at least now you know how
to tell the difference.
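For completeness, here's that test folded into a full program you can compile and run. It's only a sketch - the use of unsigned int, the wording of the messages, and the "probably" hedging are my choices, since a sizeof of 1 or padding bits could still confuse it:

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char *p = (unsigned char *)&x;

    if (sizeof x == 1)
    {
        puts("can't tell - sizeof(unsigned int) is 1 here");
    }
    else if (p[0] == 1)
    {
        puts("probably little-endian");
    }
    else if (p[sizeof x - 1] == 1)
    {
        puts("probably big-endian");
    }
    else
    {
        puts("something more exotic (middle-endian, perhaps)");
    }
    return 0;
}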
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999