James Kuyper
> Yes, these type names are suitable for that purpose (though I am not
> sure they are reserved for it by C99).
If <stdint.h> is #included, C99 defines what those names mean for all N,
while only making the least and fast types mandatory for N = 8, 16, 32,
and 64. Conversely, the standard prohibits <stdint.h> from defining
those names for values of N for which the corresponding types are
unsupported.
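To make that concrete, a strictly conforming C99 program can rely on
the least-width types unconditionally, but must test for an optional
exact-width type before using it:

#include <stdint.h>

int_least32_t counter = 0;   /* mandatory in every C99 implementation */

#ifdef INT32_MAX             /* defined only if int32_t itself exists */
int32_t exact = 0;
#endif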
Since those typedefs and macros do not have external linkage, but do
have file scope, the corresponding identifiers are reserved only if
<stdint.h> is #included; otherwise, they're in the name space reserved
for users, which means that an implementation is NOT free to give them
any other meaning.
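A minimal sketch of the flip side: in a translation unit that never
includes <stdint.h>, the identifier belongs to the program, so this
(hypothetical) user-defined type is legal C99:

/* no #include <stdint.h> anywhere in this translation unit */
typedef struct { unsigned long long lo; long long hi; } int128_t;

int128_t zero = { 0, 0 };   /* legal: the name is the user's to define */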
> - and it is the implementation here that has failed to include them
> in <stdint.h> (I am not sure whether this is the responsibility of
> the compiler or the library).
The library can't do anything to make it work unless the compiler
supports a type of the right size; but the corresponding typedef must
not be defined unless and until the standard header is #included, so it
seems to me that the compiler and the library must work together. Also,
support for 128-bit integers will probably affect intmax_t, and
therefore also the parts of the library that are defined in terms of it.
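As a hedged sketch of that ripple effect: if intmax_t grows to 128
bits, the library pieces defined in terms of it, such as imaxabs() and
the "j" length modifier for printf(), must handle the full 128-bit
range as well:

#include <inttypes.h>   /* intmax_t, imaxabs(), and the PRI macros */
#include <stdio.h>

void show_magnitude(intmax_t v)
{
    /* both imaxabs() and "%jd" must cope with 128-bit values here */
    printf("%jd\n", imaxabs(v));
}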
> However, it is possible to express a 32-bit zero as "0", and a 64-bit
> ("long long") zero as "0LL". But there is no way to write a literal
> 128-bit zero, without extending C to allow "0LLL".
That's what the INTN_C macros are for. They're clumsy, but they do the job:
#include <stdint.h>

#ifndef INT128_MAX
/* the limit macro is defined only if int128_t itself is provided */
#error 128 bit types not supported
#endif

int128_t i128 = INT128_C(0);
Unless int128_t is a typedef for a standard type, INT128_C() will have
to do something implementation-specific to mark it as an int128_t value.
Keeping in mind that the result must be suitable for use in #if
preprocessing directives, it can't be something like ((int128_t)0):
in such directives, any identifier that survives macro expansion is
replaced by 0, so that expression would parse as ((0)0), which is a
syntax error. It might very well, as an
implementation-specific extension, expand to 0LLL. However, your code
doesn't have to know about that; all it needs to know is whether or not
INT128_MAX is #defined. If so, it can use INT128_C().
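For illustration only, one way an implementation might define the
macro is by token pasting, relying on its own hypothetical LLL-suffix
extension (neither the suffix nor this definition is required by the
standard):

/* hypothetical definition inside the implementation's <stdint.h> */
#define INT128_C(value) value ## LLL   /* INT128_C(0) -> 0LLL */

This remains usable in #if directives, because the expansion is a
single preprocessing number rather than a cast.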
The "int128_t" and related types are enough to do most 128-bit integer
work in C. But there are unfortunately a number of places where the C
language and library specifications are tied to the poorly-defined
"int", "short", "long", "long long" types rather than size-specific
types. This includes the suffixes on literals, and the format
specifiers in printf() (the "PRId32" style macros help enormously, but
they are not exactly elegant - and since they expand to specifiers for
short, int, long or long long, they can't support 128-bit integers).
There's no requirement that PRId128 expand into a specifier for short,
int, long, or long long. There is, on the other hand, a requirement that
if PRId128 is #defined, it must expand into a format specifier "suitable
for use ... when converting" int128_t. That specifier might be
implementation-specific, unless int128_t happens to be a typedef for a
standard type, but it must exist.
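Putting those pieces together, code that prints an int128_t can be
written so that the 128-bit branch compiles only where the
implementation provides the type:

#include <stdio.h>
#include <inttypes.h>   /* includes <stdint.h>; PRId128 exists only if
                           int128_t does */

#ifdef INT128_MAX
void print128(int128_t v)
{
    /* PRId128 expands to whatever length modifier this implementation
       requires for int128_t */
    printf("%" PRId128 "\n", v);
}
#endif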
> On the other hand, I don't see how this would limit practical 128-bit
> usage much. After all, how often do you need to write 128-bit
> literals, or printf them out?
I'm sure that if I needed to work with 128-bit data, I would need both
to write 128-bit literals and to printf() the values out. Luckily, as
explained above, neither is a problem.