Denis McMahon said:
> No they didn't, at least not in the original code,
Incorrect; the types did match in the original code (ignoring the
size_t passed for the second "%d").
> or rather, the
> original was doing an implicit char to int cast, which meant that what
> the OP thought was an operation on an 8 bit char was (probably) actually
> taking place on a 32 bit int.
There is no such thing as an "implicit cast". You mean "implicit
conversion" or, more precisely, "promotion". (And it's from unsigned
char to int, not from char to int.)
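For illustration (this is my own sketch, not the code from the original
article, and it assumes an 8-bit unsigned char and an int wider than
8 bits):

    #include <stdio.h>

    int main(void)
    {
        unsigned char c = 0x80;
        /* c is promoted to int, so the shift is done in int arithmetic */
        printf("%d\n", c << 1);                   /* prints 256 */
        /* converting the result back to unsigned char drops the high bit */
        printf("%d\n", (unsigned char)(c << 1));  /* prints 0, given an
                                                     8-bit unsigned char */
        return 0;
    }

The first line shows the promoted arithmetic; the second shows what a
genuinely 8-bit operation would have produced.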
> Consider:
> char i = 0x80;
> printf("%d",i<<1);
> The << operator takes two ints and returns an int, so before the <<
> operation takes place, i is implicitly cast to an int.
The << operator takes two operands of integer type, not necessarily
int. The integer promotions are performed on each of the operands,
which is why the char value is implicitly converted to int, *not*
because "<<" requires an int operand (it doesn't). (Note that i
was unsigned char, not char, in the original article.)
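One way to see that it's the promotions doing the work, and not a
requirement of "<<" itself (again a sketch of my own, assuming nothing
beyond standard C):

    #include <stdio.h>

    int main(void)
    {
        unsigned char uc = 0x80;
        long          lg = 0x80;

        /* uc undergoes the integer promotions, so uc << 1 has type int */
        printf("%zu\n", sizeof (uc << 1));   /* same as sizeof (int)  */
        /* lg is already at least as wide as int and is not converted */
        printf("%zu\n", sizeof (lg << 1));   /* same as sizeof (long) */
        return 0;
    }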
> Next, the integer value 0x80 is shifted left 1 bit to 0x100;
Correct.
> Finally, printf is taking the integer value 0x100 (or binary 0000 0000
> 0000 0000 0000 0001 0000 0000) and displaying it as a decimal number, 256.
Correct (except that you're assuming int is 32 bits). Note that
"int" and "integer" are not synonymous. There are several integer
types in C; "int" is just one of them. Yes, 0x100 is an integer
value; more precisely, it's an int value.
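If you don't want to assume 32 bits, you can ask the implementation
(a quick sketch using <limits.h>):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* int is only required to be at least 16 bits wide */
        printf("int has %zu bits\n", sizeof (int) * CHAR_BIT);
        printf("INT_MAX = %d\n", INT_MAX);
        printf("%d\n", 0x100);   /* 0x100 is an int value; prints 256 */
        return 0;
    }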
And there is nothing wrong with the code. There is no type mismatch.
"%d" expects an int argument, and that's exactly what it gets.
(Except perhaps on exotic systems where CHAR_MAX > INT_MAX, which can
happen only if plain char is unsigned, CHAR_BIT >= 16, and some other
unlikely conditions apply.)
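To make that concrete (my sketch, assuming a non-exotic system where
unsigned char promotes to int):

    #include <stdio.h>

    int main(void)
    {
        unsigned char i = 0x80;
        /* i << 1 has type int, so "%d" is the matching specifier */
        printf("%d\n", i << 1);   /* prints 256 */
        /* Passing i by itself is also fine: the default argument
           promotions convert it to int before printf sees it. */
        printf("%d\n", i);        /* prints 128 */
        return 0;
    }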
The types might not have matched somebody's expectation of what
they should be, but compilers pay no attention to expectations.