C is backwards. It is backwards in this area of types with unknown sizes and
only minimum guaranteed sizes, and it is backwards in the area of undefined
behavior, where it should have defined behavior with overrides.
That's my position.
It appears you have a position about a written work (the C standard, in its
various editions).
Have you ever seen at least its cover page?
C has support for integer types of exact sizes. It didn't in 1990; that
support was added in the 1999 revision of the standard (C99).
There is a header <stdint.h> which declares various typedefs for them,
and you can test for their presence with macros.
If your program requires a 32 bit unsigned integer type, its name is
uint32_t. If you're concerned that it might not be available, you can test
for it. (Believe it or not, there are computers with no native 32 bit
integer type: historic systems like 36 bit IBM mainframes and some DEC PDP
models.)
Programs can be written such that the exact size doesn't matter.
For instance a library which implements bit vectors can use any unsigned
integer type for its "cell" size, and adjust its access methods accordingly.
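A minimal sketch of the idea (not any particular library's API; bitcell and
CELL_BITS are made-up names): derive all the index arithmetic from the cell
type's size, and the same code builds with 16, 32 or 64 bit cells.

    #include <limits.h>
    #include <stddef.h>

    typedef unsigned int bitcell;                 /* swap in any unsigned type */
    #define CELL_BITS (sizeof (bitcell) * CHAR_BIT)

    static void bit_set(bitcell *vec, size_t i)
    {
        vec[i / CELL_BITS] |= (bitcell) 1 << (i % CELL_BITS);
    }

    static int bit_get(const bitcell *vec, size_t i)
    {
        return (vec[i / CELL_BITS] >> (i % CELL_BITS)) & 1;
    }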
I use a multi-precision integer library whose basic "radix" can be any one
of various unsigned integer sizes, chosen at compile time. It will build
with 16 bit "digits", or 32 bit "digits", or 64 bit "digits".
In principle, it could work with 36 bit "digits".
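A hypothetical sketch of that kind of compile-time selection (DIGIT_BITS and
mp_digit are invented names for illustration, not that library's real
identifiers):

    #include <stdint.h>

    #ifndef DIGIT_BITS
    #define DIGIT_BITS 32            /* override with e.g. cc -DDIGIT_BITS=16 */
    #endif

    #if DIGIT_BITS == 16
    typedef uint16_t mp_digit;
    #elif DIGIT_BITS == 32
    typedef uint32_t mp_digit;
    #elif DIGIT_BITS == 64
    typedef uint64_t mp_digit;
    #else
    #error "unsupported DIGIT_BITS"
    #endif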
C is defined in such a way that it can be efficiently implemented in
situations that don't look like a SPARC or x86 box.
Thompson and Ritchie initially worked on a PDP-7, a machine with 18 bit
words. To mandate sizes like 16 and 8 would, ironically, have made C poorly
targetable to its own birthplace.
C is not Java; C targets real hardware. If the natural word size on some
machine is 17 bits, then that can readily be "int", and cleverly written
programs can take advantage of that while also working on other machines.
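One small illustration of what "cleverly written" can mean: derive the word
size from <limits.h> instead of assuming it, and let the rest of the program
adapt.

    #include <limits.h>
    #include <stdio.h>

    /* storage bits in an int on *this* machine; the standard only
       guarantees a range equivalent to at least 16 bits */
    #define INT_BITS (sizeof (int) * CHAR_BIT)

    int main(void)
    {
        printf("int occupies %u bits here\n", (unsigned) INT_BITS);
        return 0;
    }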
Just because you can't deal with it doesn't mean it's insane or wrong;
maybe you're just inadequate as a programmer.