Bilgehan.Balban
Hi,
I have a basic question on signed and unsigned integers. Consider the
following code:
#define SOME_ADDR 0x10000000

// Some context
{
    unsigned int *x = (unsigned int *)(SOME_ADDR);
    *x = (1 << 10);
}
Here, how would (1 << 10) be interpreted in terms of sign? My
compiler does not give any warnings for a case like (1 << 10) assigned
to an unsigned int, but it does say "result of operation out of
range" for an assignment like (1 << 31). My interpretation was that
in (1 << 31) the "1" is a signed int by default, and shifting it 31
bits overflows the type because bit 31 is the sign bit, which is the
cause of the warning. But why does it not warn for the former case?
Is the sign determined by the lvalue?
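For example, marking the literal as unsigned seems like it should
avoid the warning (a minimal sketch, assuming a 32-bit unsigned int;
the 1U suffix is the only change from the code above):

#define SOME_ADDR 0x10000000

{
    unsigned int *x = (unsigned int *)(SOME_ADDR);
    /* 1U is an unsigned int, so bit 31 is not a sign bit and the
       shifted value fits in the unsigned result. */
    *x = (1U << 31);
}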
Finally, a bit off-topic, but does a cast between signed and unsigned
values generate (perhaps a handful of) instructions to convert
between two's complement signed notation and unsigned notation?
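To make that concrete, this is the kind of cast I mean (a minimal
sketch; the variable names are just for illustration):

int s = -1;
/* Same width, and on a two's complement machine the bit pattern is
   unchanged (u becomes UINT_MAX). Does the compiler need to emit any
   conversion instructions for this? */
unsigned int u = (unsigned int)s;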
Thanks,
Bahadir