Tom said:
Supposing you could, why would you?
/* The high-precision calculations in this module
* need a 1000-bit signed integer, so ...
*/
#include <limits.h>
sizeof(int) = (1000 + CHAR_BIT - 1) / CHAR_BIT;
/*
* Ain't science wonderful?
*/
It doesn't work that way, though: sizeof merely reports on the
data types the implementation provides; it does not control
their characteristics. The same goes for CHAR_BIT and the rest
of <limits.h>: you cannot get "fat integers" by re-#defining
INT_MAX to a larger-than-usual value.
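For concreteness, here's a little sketch of my own (just an
illustration, not anything from the Standard or from Tom's
code) showing that sizeof and the <limits.h> macros only report
what the implementation has already chosen:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* These only *report* the implementation's choices: */
    printf("int is %zu bits wide, INT_MAX is %d\n",
           sizeof(int) * CHAR_BIT, INT_MAX);

    /* Re-#defining INT_MAX (itself undefined behavior) would
     * only change what the macro expands to; every int would
     * still be exactly as wide as the compiler made it.
     */
    return 0;
}

The numbers it prints are whatever the implementation decided;
no amount of #defining changes them. If you really need a
1000-bit integer, you have to build it out of the types you
are given.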
<off-topic>
The very first computer I ever programmed actually had
this ability! Integers were normally five decimal digits
wide, with values between -99999 and +99999, but at the drop
of an option card you could make them wider for more precision
or narrower for better storage economy. I'm pretty sure you
couldn't make them narrower than two digits (-99 to +99), but
I can no longer remember what the upper limit was. If there
was an upper limit, though, it was imposed by the compiler and
not by the underlying hardware: the machine itself was quite
happy to work on thousand-digit integers using the same
instructions as for five-digit values.
That was in the mid-1960's, using FORTRAN II on an IBM 1620.
It is a sign of how far we've advanced that we no longer have
this kind of flexibility and are instead made to lie in whatever
Procrustean bed some machine designer chooses to inflict on us ;-)
</off-topic>