> Suppose I'm using an implementation where an int is 16 bits.
That doesn't change anything. The problem isn't the "16" that is in
there but rather the "unsigned." Unsigned variables can only be
positive (they have no sign). A signed variable has the option to be
either positive or negative, though it can't hold as large a number as
an unsigned value can (explained more at the bottom).
> In the program below what function is called in the first case,
> and what is called in the second case?
Looks like bar() bar() to me, which means that either I or Falconer is
spacing out. An unsigned value can never be negative, so it will never
be less than 0; the condition is false in both cases and bar() is
called.
> Also, if there is a difference between C89 and C99, I would
> like to know.
I'm a lightweight on the specifications so no comment.
> I have tried with different compilers, and I see some differences.
If different compilers are giving different results, my best guess is
that some are doing what they should do and never letting an unsigned
value compare less than 0, while others are trying to be intelligent
and silently "correcting" what they assume is a programmer mistake.
While I wouldn't say that the second, "intelligent" compiler is buggy,
I certainly wouldn't want to use it. Bugs are best corrected by you,
not made to disappear by some particular quirk of your development
environment--otherwise, the instant your environment changes,
everything comes to a halt.
And back to the topic of signed versus unsigned, the way that these
values are represented is exactly the same:
16-bit value
1111111111111111 = 65535
1111111111111111 = -1
The only way your computer knows which one you meant is whether you
declared "int" or "unsigned int." Based on that, it will treat the
exact same bits in two entirely different ways.
To be specific, an unsigned number grows like this:
0 = 0
1 = 1
10 = 2
11 = 3
100 = 4
101 = 5
110 = 6
....
1111111111111101 = 65533
1111111111111110 = 65534
1111111111111111 = 65535
A signed number counts the same way until the 16th bit (the sign bit)
comes into play:
0 = 0
1 = 1
10 = 2
11 = 3
....
111111111111101 = 32765 //15 bits
111111111111110 = 32766
111111111111111 = 32767
1000000000000000 = -32768 //16 bits
1000000000000001 = -32767
1000000000000010 = -32766
....
1111111111111101 = -3
1111111111111110 = -2
1111111111111111 = -1
In the end, the word "hello" <- right there is just a bunch of ones and
zeroes. It is all just a matter of how you tell your computer to
interpret them.
0010 1101 0100 0011 0110 1000 0111 0010 0110 1001 0111 0011