Is there any mention in the standard of the number of bits
of the various built-in types, apart from the char/signed
char/unsigned char types? Or does it only give their minimum
value ranges?
No. But for integral types, the standard does require a pure
binary representation, so the minimum value ranges imply a
minimum number of bits. For floating point types, the standard
imposes minimum ranges and precision, which likewise pins down a
minimum amount of information; if a "bit" is a binary digit,
that again translates into a certain minimum number of bits.
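To make that concrete, here's a small sketch of my own (not from the
original posting): because the representation must be pure binary, the
guaranteed minimum ranges translate directly into guaranteed minimum
widths, all checkable at compile time with the standard <climits> and
<limits> constants.

    #include <climits>
    #include <limits>

    // Minimum widths implied by the minimum guaranteed ranges.
    static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");
    static_assert(std::numeric_limits<int>::digits >= 15,
                  "int has at least 15 value bits (plus the sign)");
    static_assert(std::numeric_limits<long>::digits >= 31,
                  "long has at least 31 value bits (plus the sign)");
    static_assert(sizeof(int) * CHAR_BIT >= 16,
                  "so an int object occupies at least 16 bits");

    int main() {}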
There's also a requirement that the size of an object be an
integral number of bytes (sizeof counts in units of char), and
that you can examine all of the bits of an object (regardless of
its type) using unsigned char. Independently of the requirement
that CHAR_BIT be at least 8, this would forbid the usual PDP-10
organization of 36-bit words, with five 7-bit bytes per word (and
one unused bit). It doesn't require that all bits participate
in the value representation, however: on a Unisys MCP, for
example, you can see all 48 bits of an int by accessing it as an
array of 6 unsigned char, but only 40 participate in the value
representation, and only 39 can be reached with shift and mask
operations on an unsigned int.
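For what it's worth, the access-through-unsigned-char guarantee is easy
to demonstrate; the little dump_bits function below is my own
illustration, not anything from the standard. It prints every bit of the
object representation, padding bits included (on the Unisys machine it
would show all 48).

    #include <climits>
    #include <cstddef>
    #include <cstdio>

    // Print every bit of the n bytes starting at p, most significant
    // bit of each byte first, one group of CHAR_BIT digits per byte.
    void dump_bits(const void* p, std::size_t n)
    {
        const unsigned char* bytes = static_cast<const unsigned char*>(p);
        for (std::size_t i = 0; i != n; ++i) {
            for (int bit = CHAR_BIT - 1; bit >= 0; --bit)
                std::putchar((bytes[i] >> bit) & 1 ? '1' : '0');
            std::putchar(' ');
        }
        std::putchar('\n');
    }

    int main()
    {
        int x = 42;
        dump_bits(&x, sizeof x);    // sizeof(int) * CHAR_BIT digits:
                                    // value bits and padding bits alike
    }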