J.W.
Hello, all,
I wanted to get some help understanding how to compute the sign of a number.
I tried this in my program:
#define CHAR_BIT 8
int v = -100;
int sign = v >> (sizeof(int) * CHAR_BIT - 1);
I expected the answer to be 1, since the highest bit stored in v will be 1;
however, sign ends up with the value -1.
I would appreciate it if somebody here could give me a hint as to why it turns
out to be -1, not 1.
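
In case it helps, here is a minimal complete program that reproduces what I am
seeing. It is just a sketch of my test: I pulled CHAR_BIT from <limits.h>
instead of defining it myself, and I am assuming int is 32 bits on my machine.

#include <stdio.h>
#include <limits.h>   /* provides CHAR_BIT */

int main(void)
{
    int v = -100;
    /* shift the highest (sign) bit down to the lowest bit position */
    int sign = v >> (sizeof(int) * CHAR_BIT - 1);

    printf("v = %d, sign = %d\n", v, sign);   /* prints sign = -1 on my machine */
    return 0;
}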
Thanks,
J.W.