The trouble is that, even if you know what you're doing, it can be very
easy to accidentally get outside the range in which the guarantees
apply; you can use double to represent exact integers, but there's no
warning when you exceed the range where that works.
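For instance, with the usual IEEE 754 binary64 "double" (53 significand
bits; an assumption, not a C guarantee), the first integer that silently
stops working is 2**53 + 1:

#include <stdio.h>

int main(void)
{
    double exact   = 9007199254740992.0; /* 2**53, representable exactly */
    double rounded = 9007199254740993.0; /* 2**53 + 1, silently rounded  */

    printf("%.0f\n%.0f\n", exact, rounded); /* prints the same value twice */
    return 0;
}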
For any unsigned type that has no more bits than 612,787,565,149,966
(that is, any conceivable unsigned type; the bound only ensures that the
arithmetic below stays under the minimum ULLONG_MAX), the following is a
sufficient condition for storing any value of said type in a "long
double":
((long long unsigned)sizeof(utype) * CHAR_BIT * 30103 + 99999) / 100000
<= LDBL_DIG
For uint32_t, the left side evaluates to 10, and both DBL_DIG and LDBL_DIG
must be at least 10 on any conformant platform.
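If C11 is available, the condition can even be checked at compile time; a
small sketch (the assertion message is mine):

#include <float.h>
#include <limits.h>
#include <stdint.h>

_Static_assert(((long long unsigned)sizeof(uint32_t) * CHAR_BIT * 30103
                + 99999) / 100000 <= LDBL_DIG,
               "uint32_t does not round-trip through long double");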
After the conversion to the chosen floating point type, e.g. "long
double", one must track the possible ranges of every floating point
expression involved, and make sure that no evaluation can exceed "limit",
which can be initialized like this:
#include <float.h>  /* LDBL_DIG */
#include <stdio.h>  /* sscanf() */
#include <string.h> /* memset() */

char lim_str[LDBL_DIG + 1] = ""; /* zero-filled: the terminator survives */
long double limit;
(void)sscanf(memset(lim_str, '9', LDBL_DIG), "%Lf", &limit);
(Of course, not exceeding this bound may not be sufficient for converting
back to "utype", but since "(utype)-1" itself was convertible, this final
condition is only a simple comparison away.)
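A hedged sketch of what that tracking and final comparison might look
like (the names "utype", "add_within_limit", "a" and "b" are all made up
for illustration):

/* Add two values already known to lie in [0, limit]; refuse results
 * outside the trusted range, then convert back to the integer type.
 */
int add_within_limit(long double a, long double b, utype *result)
{
    long double sum;

    if (a > limit - b)
        return -1; /* the sum would exceed "limit" */
    sum = a + b;

    if (sum > (long double)(utype)-1)
        return -1; /* fits in "long double", but not in "utype" */

    *result = (utype)sum;
    return 0;
}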
--o--
The number of full decimal digits needed to represent the C value
"(utype)-1" is given by the math expression
ceil(log_10(2 ** numbits - 1))
"numbits" being the number of value bits in "utype". (This agrees with
the usual digit count formula, floor(log_10(x)) + 1, because
2 ** numbits - 1 is never a power of 10 for numbits >= 2.) It is safe to
assume (or rather, we have to assume) that all bits are value bits.
Continuing
with math formulas, and exploiting log_10 being strictly monotonic and
ceil being monotonic,
ceil(log_10(2 ** numbits - 1))
<= ceil(log_10(2 ** numbits ))
== ceil(numbits * log_10(2))
<= ceil(numbits * (30103 / 100000))
== ceil(numbits * 30103 / 100000)
which, by the identity ceil(a / b) == floor((a + b - 1) / b) for
positive integers, equals the value of the math expression
floor( (numbits * 30103 + (100000 - 1)) / 100000 )
Therefore, this integer value is not less than the number of full decimal
digits needed. As "numbits" increases, this value eventually becomes
greater than the exact number of decimal places required. The speed of
divergence is determined by how closely 30103 / 100000 approximates
log_10(2): the excess is about 4.34 * 10**-9 per bit, so the bound picks
up a whole extra digit only once "numbits" nears 2.3 * 10**8.
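In C, the bound can be captured directly; a quick sanity check (the
function name is made up) against a few known digit counts:

#include <stdio.h>

/* ceil(numbits * 30103 / 100000), via the floor form derived above */
static unsigned long long digit_bound(unsigned long long numbits)
{
    return (numbits * 30103 + (100000 - 1)) / 100000;
}

int main(void)
{
    /* 2**8-1 = 255 (3 digits), 2**32-1 (10 digits), 2**64-1 (20 digits) */
    printf("%llu %llu %llu\n", digit_bound(8), digit_bound(32),
           digit_bound(64)); /* prints: 3 10 20 */
    return 0;
}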
BTW, 30103 and 100000 are coprime (30103 is a prime in its own right),
thus the smallest positive "numbits" where "numbits * 30103" is an
integral multiple of 100000 is 100000 itself, which would still make for
quite a big integer type. Hence we can assume that the remainder of the
integer division "numbits * 30103 / 100000" is always nonzero, and the
last ceiling math expression can be rewritten as
floor(numbits * 30103 / 100000) + 1
This simplifies the initial C expression to
(long long unsigned)sizeof(utype) * CHAR_BIT * 30103 / 100000 < LDBL_DIG
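As a convenience, that test could be wrapped in a macro (the name
"FITS_IN_LDBL" is invented):

#include <float.h>
#include <limits.h>

#define FITS_IN_LDBL(utype) \
    ((long long unsigned)sizeof(utype) * CHAR_BIT * 30103 / 100000 \
     < LDBL_DIG)

For uint32_t this asks whether 9 < LDBL_DIG, which holds everywhere; for
uint64_t it asks whether 19 < LDBL_DIG, which already hints at the
trouble below.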
Unfortunately, the entire approach falls on its face with uint64_t and an
extended precision (1 + 15 + 64 = 80 bits) "long double", even though the
significand has the required number of bits available. (As said above, the
condition is only sufficient, not necessary.)
The problem is that the method above works with entire base 10 digits.
The decimal representation of UINT64_MAX needs 20 places (64 * log_10(2)
is about 19.27; the "fractional place" rounds up to a whole one), but the
64 bit significand only provides for 19 whole decimal places, and the
comparison is done in whole
decimal places. What's worse, an extended precision "long double" can only
allow for an LDBL_DIG of 18 (as my platform defines it), presumably
because (and I'm peeking at C99 5.2.4.2.2 p8) "long double" must
"accomodate" not only integers with LDBL_DIG decimal places, but also any
decimal fraction with LDBL_DIG digits. The exponent of the "long double"
stores the position of the *binary* point, not that of the *decimal*
point, and this probably sacrifices a further decimal digit.
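Still, on implementations where LDBL_MANT_DIG >= 64 (e.g. the x87
extended format assumed here), every uint64_t value round-trips exactly,
condition or no condition; a quick demonstration:

#include <float.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
#if LDBL_MANT_DIG >= 64
    uint64_t before = UINT64_MAX;
    uint64_t after = (uint64_t)(long double)before;

    printf("%" PRIu64 " -> %" PRIu64 " (%s)\n", before, after,
           before == after ? "exact" : "rounded");
#else
    puts("the long double significand is too narrow for uint64_t");
#endif
    return 0;
}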
(I gave you some material to shred; please be gentle while shredding.)
Cheers,
lacos