Dann Corbit
Christian Christmann said:
Hi,
a question on castings.
My code example:
#include <stdio.h>
int main( void )
{
    unsigned int a = 4294967295U;
    signed int b = 4294967295U;
    signed int c = (signed int) a;
    printf( "a:%ud\n", a );
    printf( "b:%ud\n", b );
    printf( "c:%ud\n", c );
    return 0;
}
The output is:
a:4294967295d
b:4294967295d
c:4294967295d
I don't understand why "b" and "c" are also 32-bit values. Since I
defined "b" and "c" as signed int, there are only 31 bits that can be used
to represent the number, thus the value range is [-2147483648,
2147483647].
When assigning "b" the value 4294967295U, I thought that an implicit cast
is performed that converts the value to a 31-bit value plus 1 bit for the
sign.
In a similar way, "c = (signed int)a" is an explicit cast that should
convert the 32-bit value of "a" into a 31-bit value represented by "c".
However, printf indicates that casting is not performed. Why?
A cast just converts the value to another type; on a two's-complement
machine that conversion keeps the same bit pattern. I think in this
case it is not doing what you think it ought to do, but it is doing what it
is supposed to do.
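To make that concrete, here is a variation of your program (only a sketch,
assuming the common case of a 32-bit int with two's-complement
representation; the out-of-range conversions to signed int are
implementation-defined by the standard) that prints the same bits with
plain %u and %d instead of the "%ud" string:

#include <stdio.h>
int main( void )
{
    unsigned int a = 4294967295U;     /* UINT_MAX when int is 32 bits: all bits set */
    signed int b = 4294967295U;       /* out of range for signed int: implementation-
                                         defined result; two's-complement machines
                                         keep the bits, giving -1 */
    signed int c = (signed int) a;    /* the same conversion, made explicit by the cast */

    printf( "a with %%u: %u\n", a );                  /* 4294967295 */
    printf( "b with %%d: %d\n", b );                  /* typically -1 */
    printf( "c with %%d: %d\n", c );                  /* typically -1 */
    printf( "c with %%u: %u\n", (unsigned int) c );   /* same bits again: 4294967295 */
    return 0;
}

On such a system all four lines describe the same 32-bit pattern of all
ones; only the interpretation changes.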
You would probably find:
signed int d = -1;
unsigned int e = -1;
....
printf( "d:%ud\n", d );
printf( "e:%ud\n", e );
equally surprising.
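Completed into a runnable program (again just a sketch under the same
32-bit, two's-complement assumption, and printed with plain %d/%u rather
than the "%ud" string from your code), that fragment would behave like this:

#include <stdio.h>
int main( void )
{
    signed int d = -1;      /* all bits set under two's complement */
    unsigned int e = -1;    /* converting -1 to unsigned is well defined: UINT_MAX */

    printf( "d with %%d: %d\n", d );                  /* -1 */
    printf( "e with %%u: %u\n", e );                  /* 4294967295 when int is 32 bits */
    printf( "d with %%u: %u\n", (unsigned int) d );   /* same bits as e: 4294967295 */
    return 0;
}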
Have a look at this:
http://en.wikipedia.org/wiki/Two's_complement
http://www.azillionmonkeys.com/qed/2scomp.html
http://www.hal-pc.org/~clyndes/computer-arithmetic/twoscomplement.html