If you assume that *some* predefined signed integer type is 32-bit two's
complement, that's probably ok for the vast majority of current
(non-embedded) systems. Assuming that "int" is such a type is unwise
and unnecessary.
It's only unnecessary if you don't mind copying data to/from an actual "int"
every time you call something that wants an "int *".
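Something along these lines, say (a sketch; the frob_count interface is made
up, and the point only bites on platforms where int32_t isn't plain "int"):

    #include <stdint.h>

    /* Hypothetical third-party interface that traffics in plain "int *". */
    void frob_count(int *out)
    {
        *out = 42;
    }

    int32_t get_count(void)
    {
        int tmp;            /* temporary of the type the API wants */
        int32_t count;      /* our "known 32-bit" storage */

        frob_count(&tmp);   /* &count may not be an "int *", so go via tmp */
        count = tmp;
        return count;
    }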
Personally, if I'm writing for desktop/server/32-bit-embedded, I'll assume
that "int" is 32-bit two's complement. Anything else may as well not exist
until I see a concrete reason for considering its existence.
I'll also assume that "int" is the correct type for array indices (and
passing to malloc) unless support for more than 2^31 entries is at least
plausible, in which case I'll use "long" unless I'm explicitly required
to care about Win64.
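The Win64 caveat, for what it's worth, is that it's LLP64: "long" stays
32 bits even though pointers are 64 bits, so "long" doesn't actually buy
you past 2^31 there. A trivial illustration (the array and bound are
invented):

    #include <stddef.h>

    double a[1000];

    double sum(void)
    {
        double total = 0.0;
        /* "int" is fine while more than 2^31 - 1 entries is implausible;
           for bigger arrays, "long" works everywhere except Win64, where
           you'd need "long long" or size_t instead. */
        for (int i = 0; i < 1000; i++)
            total += a[i];
        return total;
    }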
I certainly wouldn't assume little-endian representation for anything to be
shared between different systems. x86 happens to be dominant today, but
there's no guarantee that it always will be; there are still a
significant number of SPARC systems out there. And it's a solvable
problem anyway; you don't *have* to depend on a particular endianness.
(This is why "network byte order" exists.)
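E.g., converting through network byte order (big-endian) on both ends
sidesteps the question entirely; a minimal sketch, assuming a POSIX-style
htonl()/ntohl():

    #include <arpa/inet.h>   /* htonl, ntohl */
    #include <stdint.h>
    #include <string.h>

    /* Serialize a 32-bit value in network byte order, whatever the host is. */
    void put_u32(unsigned char *buf, uint32_t value)
    {
        uint32_t be = htonl(value);
        memcpy(buf, &be, sizeof be);
    }

    uint32_t get_u32(const unsigned char *buf)
    {
        uint32_t be;
        memcpy(&be, buf, sizeof be);
        return ntohl(be);
    }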
I wouldn't normally rely upon it to the extent of sprinkling fread/fwrite
(etc) about the code. I'd isolate the calls so that adding support for a
big-endian system would amount to "add a version of this source file
which does byte swapping".
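Roughly like this (the record layout and file format are invented): the rest
of the program only ever calls read_record(), so a big-endian port replaces
just this one function with a byte-swapping version.

    #include <stdio.h>
    #include <stdint.h>

    struct record {
        uint32_t id;
        uint32_t length;
    };

    /* Little-endian host version: the on-disk layout matches memory, so a
       plain fread() will do.  A big-endian port would supply an alternative
       definition that swaps the fields after reading. */
    int read_record(FILE *fp, struct record *r)
    {
        return fread(r, sizeof *r, 1, fp) == 1;
    }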
OTOH, I wouldn't completely rule out:
    struct the_data *p = mmap(...);
simply because it would make the program impossible to port to an
"unusual" system without a major rewrite. Sometimes, portability just
costs too much.
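Fleshed out, that might look something like this (a sketch, assuming POSIX
and an invented layout for struct the_data; error handling kept minimal):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct the_data {
        int      version;
        unsigned count;
        /* ... */
    };

    struct the_data *map_the_data(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }

        /* The file's bytes are used as the in-memory representation
           directly: fast and simple, but tied to this system's layout,
           endianness and alignment. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);   /* the mapping outlives the descriptor */

        return p == MAP_FAILED ? NULL : (struct the_data *)p;
    }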
Alignment is an issue only for in-memory data; it's irrelevant for
reading and writing files.
It's relevant if your I/O model is "mmap() the file" and you don't want to
litter the code with "packed" qualifiers and take a performance hit.
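For illustration (non-standard GCC/Clang syntax; the struct is invented),
the "packed" escape hatch looks like this, at the cost of potentially
unaligned member accesses:

    #include <stdint.h>

    /* Matches the on-disk layout byte for byte, with no padding inserted,
       but members may end up misaligned, which some targets handle slowly
       (or only via compiler-generated fix-up code). */
    struct __attribute__((packed)) file_header {
        uint16_t magic;
        uint32_t size;    /* would normally be padded to a 4-byte boundary */
        uint8_t  flags;
    };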