Keith Thompson
Malcolm McLean said: So you're saying we have to say
x = -(int) ux;
to negate an unsigned integer?
To negate an unsigned integer, you simply apply the unary "-" operator
to it. The result of negating an unsigned integer is, of course, an
unsigned integer.
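For example (a minimal sketch; the concrete value printed depends on the width of unsigned int on your implementation):

#include <stdio.h>

int main(void)
{
    unsigned int ux = 5;
    unsigned int nx = -ux;   /* wraps modulo UINT_MAX + 1; never overflows */
    printf("%u\n", nx);      /* prints 65531 if unsigned int is 16 bits */
    return 0;
}

For nonzero ux, -ux is UINT_MAX + 1 - ux, since unsigned arithmetic is defined to be modular.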
I think what you're trying to do is, given an unsigned int value, store
the (mathematical) negative of that value in a signed int. The obvious
way to do that is, as you say:
unsigned int ux = /* ... */;
int x = -(int)ux;
which works for values of ux in the range 0 to INT_MAX.
Malcolm McLean said: Which means that INT_MIN will break.
Yes. Let's assume 16-bit int and the common 2's-complement
representation, so:
INT_MIN is -32768
INT_MAX is +32767
UINT_MAX is +65535
If ux == +32768, you want to store the value -32768 in x.
I can't think of a good way to do that without either treating it as
a special case or depending on implementation-defined or undefined
behavior. If ux > INT_MAX, (int)ux either gives you an
implementation-defined result or raises an implementation-defined
signal.
(In practice, a simple "int x = -ux;" is likely to work because of
the implementation-defined behavior that most implementations happen
to have, but it's specifically not guaranteed by the standard.)
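Here's a sketch of the special-case approach; the function name
negate_to_int is mine, not anything standard, and the sketch assumes
you want a hard failure for values that have no representable negation:

#include <limits.h>
#include <stdlib.h>

int negate_to_int(unsigned int ux)
{
    if (ux <= INT_MAX) {
        return -(int)ux;    /* 0..INT_MAX converts and negates safely */
    }
    /* On 2's-complement systems, INT_MIN == -INT_MAX - 1, so exactly
       one more value, (unsigned int)INT_MAX + 1, has a representable
       negation: INT_MIN itself. */
    if (INT_MIN < -INT_MAX && ux == (unsigned int)INT_MAX + 1U) {
        return INT_MIN;
    }
    abort();    /* larger values have no representable negation */
}

This stays within fully defined behavior: the conversion and the unary
minus only ever see values in the range 0 to INT_MAX, and the one extra
2's-complement value is handled explicitly.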
Malcolm McLean said: I think you've found a bug in the standard.
I don't think I'd call it a bug; it's just something that's difficult
to do, and I'm not sure there's any reasonable way to correct it.
The presence of an extra negative value in the most common signed
integer representation (2's complement) can be a problem, but it's
imposed by the underlying behavior of most machines and the need for
C to cater to a wide variety of hardware.
Can you think of a way to fix the standard to make this easier?