James Kanze wrote:
On Mar 13, 11:05 pm, "Johannes Schaub (litb)" wrote:
Paavo Helde wrote:
[...]
Plain char can be signed or unsigned, as defined by the
implementation. If it is signed, then I think 4.7.3 holds:
"If the destination type is signed, the value is unchanged
if it can be represented in the destination type (and
bit-field width); otherwise, the value is
implementation-defined."
But where does it say that char can be signed? I only find
text where it says it could hold negative values. It does not
seem to say that an implementation can include it in the list
of signed or unsigned integer types.
Last sentence in §3.9.1/1: "a plain char object can take on
either the same values as a signed char or an unsigned char;
which one is implementation-defined."
I was thinking of the same, but I think the problem runs deeper
than defining the set of values. Plain char, signed char, and
unsigned char are three distinct types, and only signed char and
unsigned char are listed among the signed and unsigned integer
types. Plain char, like bool, is an integral type that is formally
neither signed nor unsigned. This poses a problem for interpreting
integer conversions to and from plain char: even if plain char
cannot hold negative values and behaves like unsigned char, there
is formally no guarantee that, e.g., arithmetic with plain char is
performed mod 2^N, or that integer values are converted to it mod
2^N. For that, one would need further-reaching provisions.
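For what it's worth, a small sketch of the distinction (assuming a C++11
compiler for <type_traits>; everything else is standard headers). The
is_same results are fixed by the standard, while the signedness check and
the last conversion are exactly the implementation-defined parts under
discussion:

    #include <climits>
    #include <iostream>
    #include <limits>
    #include <type_traits>

    int main()
    {
        std::cout << std::boolalpha;

        // char, signed char and unsigned char are three distinct types,
        // regardless of which values plain char can actually hold.
        std::cout << std::is_same<char, signed char>::value << '\n'    // always false
                  << std::is_same<char, unsigned char>::value << '\n'; // always false

        // Whether plain char takes the values of signed char or of
        // unsigned char is implementation-defined (3.9.1/1); on
        // implementations where it is signed, CHAR_MIN is negative.
        std::cout << std::numeric_limits<char>::is_signed << '\n'
                  << (CHAR_MIN < 0) << '\n';

        // The conversion quoted above (4.7.3): if char is signed and the
        // value does not fit, the result is implementation-defined; for
        // unsigned char the value is reduced mod 2^CHAR_BIT, so 300
        // becomes 44 when CHAR_BIT == 8.
        int i = 300;
        char c = static_cast<char>(i);
        unsigned char uc = static_cast<unsigned char>(i);
        std::cout << static_cast<int>(c) << ' '
                  << static_cast<int>(uc) << '\n';
    }

On a typical implementation with a signed 8-bit char you will likely see
44 for both, but for plain char that outcome is not something the wording
seems to guarantee.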