Seebs said:
Can someone explain the following statement to me?
For unsigned types, the range is always 0 to <type>_MAX. On your system,
the chances are UCHAR_MAX is 255. The way numbers outside that range are
converted is by adding/subtracting multiples of (UCHAR_MAX+1) until you get
something in range.
So...
-1 => (-1) + (UCHAR_MAX + 1) => UCHAR_MAX
[...]
Seems to be related to 2's complement above.
Not in the least. It does not matter what representation the system uses;
-1, converted to an unsigned type, is TYPE_MAX.
Actually it *is* related to 2's complement in the least.
Conversion of a signed or unsigned integer to an unsigned type, where
the value isn't already within the target type's range, is defined as
follows (C99 6.3.1.3p2):
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
with a footnote:
The rules describe arithmetic on the mathematical value, not the
value of a given type of expression.
Note that the conversion is described in terms of values, not
representation.
Of course the implementation doesn't actually have to do these
repeated additions or subtractions. On a system that uses
2's-complement, converting from a signed type to an unsigned type
either copies the low-order bits, copies the entire representation,
or sign-extends the representation, depending on the relative sizes
of the source and target. This is a very simple and fast operation;
the compiler doesn't even need to take signedness into account.
(For sign-and-magnitude or ones'-complement representations, the
compiler has to do a little more work, but such systems are rare.)
The standard very carefully doesn't refer to representation when it
discusses the conversion rules, but the rules were almost certainly
originally *motivated* by the natural way to do conversions on a
2's-complement system.