Malcolm said:
> Because it is nonsense to try to express a negative number as an unsigned
> integer.
That's not what the snippet attempts to do. It *converts*
a negative number to an `unsigned int', and the conversion obeys
the rules of modular arithmetic, modulo `UINT_MAX+1'.
It is equally nonsensical to try to express a number with a
fractional part as an integer, but do you think `(int)(9.7 + 0.5)'
should be an error? Of course not: the *conversion* from `double'
to `int' is well-defined (for numbers in an appropriate range),
and is also useful. Should the compiler reject a useful and well-
defined operation as erroneous?
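A short program shows both conversions doing exactly what the language
promises. (A sketch only: nothing is assumed beyond `<limits.h>' and
`<stdio.h>'; the numeric value printed for `UINT_MAX' will of course
depend on the implementation.)

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int u = (unsigned int)-1;  /* -1 + (UINT_MAX + 1), i.e. UINT_MAX */
        int i = (int)(9.7 + 0.5);           /* fractional part discarded: 10 */

        printf("(unsigned int)-1 == %u, UINT_MAX == %u\n", u, UINT_MAX);
        printf("(int)(9.7 + 0.5) == %d\n", i);
        return 0;
    }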
> But the standard itself is a human construct. I would imagine that the main
> reason it allows it is to avoid breaking hundreds of pre-standard programs.
> The reason K and R allowed it was probably efficiency.
Perhaps. Have you asked them? Or "him," rather, because
K's role was to assist in describing R's invention. However,
until you can produce a statement from R to support your contention,
I'll continue to shave you with Occam's razor, and persist in my
outlandish supposition that the conversion is defined and not
erroneous because it is useful and well-behaved.
> Obviously since I say the construct "should ideally generate an error" there
> must be circumstances, non-ideal ones, in which the error is not generated.
This argument can, of course, justify or condemn absolutely
anything you like. As long as the arguer gets to control the
definition of "ideal," there's no externalizable content to the
debate. Solipsism rules -- and even the paranoid have enemies.
> So why does the standard specify that -1, of all things, must cast to
> UINT_MAX?
Look up "modular arithmetic" and "congruence."
> You need to look at the motivation. Read what I said to Jack Klein about
> natural language.
I read it, but I confess I didn't understand it. It seemed
entirely beside the point, a fog rather than an illumination. My
failing perhaps -- but when "everyone is out of step except
Johnny" it is reasonable to wonder about Johnny's sense of rhythm.
> -1 casts to 0xFFFF because that is a reinterpretation of bits on two's
> complement machines, and because the conversion can be accomplished in a
> single machine instruction.
Already refuted, multiple times by multiple people. Also
self-contradictory: if the representation is already correct,
it should be "convertible" in *zero* machine instructions.
> Natural language needs to be socially appropriate as well as literally
> accurate. In this case I am explaining C casting to someone who doesn't
> understand very much about it, so mention of one's complement machines,
> or the text of the standard, is not useful and only confuses matters.
You are explaining your own mistaken understanding of C.
Your original statement was
"With the exception of casts from floating point types
to integers, C casts are simple reinterpretations of bits.
If the bit pattern doesn't make sense for the type you are
casting to, you will get garbage results and the compiler
won't warn about them."
.... and this is demonstrably (and demonstratedly) false. Perhaps
it would have been true if you had invented the language, and perhaps
you feel it "should" be true -- but it is not true, has never been
true, and (I'll bet) will never be true. When your "explanation"
of casting is flat-out wrong, you do a disservice by propounding it.
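Anyone who cares to see the demonstration again can dump the bytes
before and after a cast that your statement covers -- say, `int' to
`double'. (A sketch: the example byte values in the comments assume a
little-endian machine with IEEE 754 doubles; the precise bytes don't
matter, only that they change.)

    #include <stdio.h>

    static void dump(const void *p, size_t n)
    {
        const unsigned char *b = p;
        while (n-- > 0)
            printf("%02x ", *b++);
        putchar('\n');
    }

    int main(void)
    {
        int i = 1;
        double d = (double)i;   /* not a floating-to-integer cast, so your
                                   rule claims this reuses the bits of `i' */

        dump(&i, sizeof i);     /* e.g. 01 00 00 00             */
        dump(&d, sizeof d);     /* e.g. 00 00 00 00 00 00 f0 3f */
        return 0;
    }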