Yevgen Muntyan said:
Keith Thompson wrote: [...]
Consider a CPU that supports signed but not unsigned arithmetic.
It's
not very realistic, but it's possible. Assume an N-bit word size.
Then int might have N-1 value bits and 1 sign bit, and unsigned int
might have the same layout, but treating the sign bit as a padding
bit. This gives us UINT_MAX == INT_MAX.
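For concreteness, such an implementation's <limits.h> might read
roughly as follows (a sketch only, assuming N == 32 and two's
complement for int; these are not any real system's values):

    /* Hypothetical <limits.h> excerpt: 31 value bits in both types. */
    #define INT_MAX   2147483647          /* 2**31 - 1 */
    #define INT_MIN   (-2147483647 - 1)   /* two's complement assumed */
    #define UINT_MAX  2147483647u         /* sign-bit position is padding,
                                             so UINT_MAX == INT_MAX */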
Yep (and we can add the same number of padding bits to both int and
unsigned for the same effect, and that would be the only possible way
to get UINT_MAX == INT_MAX). The question is: doesn't the standard
explicitly disallow it?
I don't believe so. (If it does, someone will quote chapter and verse
shortly.)
And the second question: if UINT_MAX == INT_MAX is possible, is two's
complement arithmetic then forbidden? (With two's complement,
-INT_MIN == INT_MAX + 1, which would exceed UINT_MAX, so -INT_MIN
would not be representable in unsigned.)
No, it's not forbidden; there's no specific requirement for -INT_MIN
to be representable in unsigned.
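To see why that matters, consider the usual absolute-value idiom (a
sketch; uabs is a hypothetical helper name):

    unsigned uabs(int x)
    {
        /* Classic idiom: relies on conversion to unsigned being
           reduced modulo UINT_MAX + 1. */
        return x < 0 ? -(unsigned)x : (unsigned)x;
    }

With the usual UINT_MAX == 2**32-1 this yields 2**31 for INT_MIN, but
with UINT_MAX == INT_MAX == 2**31-1 the conversion reduces INT_MIN
modulo 2**31 to 0, so uabs(INT_MIN) silently returns 0.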
If it's not forbidden, then signed overflow checking needs to
special-case INT_MIN multipliers (in addition to a fancy replacement
for the ABS macro).
Why would any special-case checking be needed? Signed overflow
invokes undefined behavior; no checking of any kind is required.
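For illustration, here is a minimal sketch of the kind of pre-check
user code might do for itself: a division-based test that never
relies on -INT_MIN being representable in unsigned (mul_overflows is
a hypothetical helper name):

    #include <limits.h>
    #include <stdbool.h>

    /* True if a * b would overflow int.  Only signed division is
       used, so the INT_MIN cases need no unsigned arithmetic. */
    bool mul_overflows(int a, int b)
    {
        if (a == 0 || b == 0)
            return false;
        if (a > 0) {
            if (b > 0)
                return a > INT_MAX / b;   /* positive * positive */
            return b < INT_MIN / a;       /* positive * negative */
        }
        if (b > 0)
            return a < INT_MIN / b;       /* negative * positive */
        return b < INT_MAX / a;           /* negative * negative */
    }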
Consider an existing implementation with 32-bit int, INT_MIN ==
-2**31, INT_MAX == 2**31-1, UINT_MAX == 2**32-1 (i.e., a very typical
32-bit two's-complement system). (I'm using "**" as a shorthand for
exponentiation.)
Now modify the implementation in *only* the following ways:

- Change UINT_MAX (in <limits.h>) to 2**31-1, equal to INT_MAX.

- Document that unsigned int has a single padding bit, and that any
  unsigned int representation in which that padding bit is set is a
  trap representation.
and leave *everything else* as it is. Arithmetic, logical, and
bitwise operations will yield exactly the same representations as they
did before (and the same values in cases other than the new trap
representations). The only difference is that some cases that were
well-defined are now undefined behavior.
The new implementation is still conforming. It's perverse, in that it
fails to define behaviors that it could perfectly well define, and it
could break some non-portable code that *assumes* unsigned int has no
padding bits, but I don't believe it would violate the standard in any
way.
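For example, code like this sketch, which assumes every bit of
unsigned int is a value bit, would report the wrong width under such
an implementation:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Non-portable: assumes unsigned int has no padding bits. */
        unsigned assumed = (unsigned)(CHAR_BIT * sizeof(unsigned int));

        /* Portable: count the value bits of UINT_MAX directly. */
        unsigned actual = 0;
        for (unsigned u = UINT_MAX; u != 0; u >>= 1)
            actual++;

        /* On the modified implementation: 32 assumed, 31 actual. */
        printf("%u assumed, %u actual\n", assumed, actual);
        return 0;
    }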