David Schwartz said:
It would be obvious to anyone who reviews C code professionally that
the cast was put there to indicate to someone looking at the code that
the conversion was intended and the person who put the cast in there
is vouching for its safety.
As someone else already wrote: What you describe above is technically
a comment containing meta-information about the code. And using a
comment, it is actually possible to document this fact, eg
a = b; /* truncate to unsigned */
while this remains a mere assumption for the meaningless cast.
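To make the contrast concrete, here is a minimal sketch of the two
styles (hypothetical helper names; both perform exactly the same
value conversion):

```c
#include <limits.h>

/* Style 1: a cast. It requests the conversion explicitly but
   carries no verifiable claim about why it is safe. */
static unsigned char with_cast(int b)
{
	return (unsigned char)b;
}

/* Style 2: a plain assignment plus a comment carrying the
   meta-information. The implicit conversion is identical. */
static unsigned char with_comment(int b)
{
	unsigned char a;

	a = b; /* truncate to unsigned */
	return a;
}
```

A compiler treats both identically; only the reader can tell the
difference, and only the comment actually says anything.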
It says, "I know the range of the type on the right exceeds the
range of the type on the left, and I have made sure that the value
cannot actually be out of range."
The definition of 'cast operator' is
Preceding an expression by a parenthesized type name converts
the value of the expression to the named type. This
construction is called a cast.
[6.5.4|4]
This implies that 'casts' are supposed to be used to request that
value conversions be performed, not to document that value conversions
will never happen, as you said above.
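A short example of a cast used for its defined purpose, ie to request
a value conversion that would otherwise not happen (hypothetical
function name):

```c
/* Without the cast, sum / count would be integer division and
   discard the fractional part. The cast requests conversion of
   sum to double, so the division is done in floating point. */
static double average(int sum, int count)
{
	return (double)sum / count;
}
```

Here the cast changes the result; it does not merely annotate an
assignment that would behave the same without it.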
Now, if it doesn't mean that, then you are correct, it was put there
for some crazy, possibly erroneous, reason.
It is not 'erroneous' to do a 'lossy conversion' of values in
C. That's a feature the language has always had (AFAIK) since it was
invented. The assumption that no one could possibly want to do this,
and the even stronger assumption that an assignment causing such a
value conversion is prima facie evidence of 'the programmer' not
knowing that the conversion will happen, despite this being the exact
opposite of its defined meaning, is somewhat far-fetched. For
instance, one way to serialize an integer in little-endian byte order,
independent of host byte order (assuming 4-byte [unsigned] ints), would
be:
#include <stdint.h>

uint8_t *serialize_u32(uint32_t v, uint8_t *p)
{
	*p++ = v;
	*p++ = v >> 8;
	*p++ = v >> 16;
	*p++ = v >> 24;
	return p;
}
It is perfectly clear from the code that its author didn't expect a
complete uint32_t value to fit in a single byte (uint8_t object): each
assignment is intended to keep only the low byte.
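For completeness, a self-contained sketch pairing such a serializer
with a hypothetical inverse that reassembles the value, again
independent of host byte order (the shift-and-or form is one common
way to do this, not necessarily the only one):

```c
#include <stdint.h>

/* Serializer with the intentional lossy conversions: each
   assignment to *p keeps only the low byte of the shifted value. */
uint8_t *serialize_u32(uint32_t v, uint8_t *p)
{
	*p++ = v;
	*p++ = v >> 8;
	*p++ = v >> 16;
	*p++ = v >> 24;
	return p;
}

/* Hypothetical inverse: rebuild the uint32_t from four
   little-endian bytes. The casts here widen before shifting. */
uint32_t deserialize_u32(const uint8_t *p)
{
	return (uint32_t)p[0]
	     | (uint32_t)p[1] << 8
	     | (uint32_t)p[2] << 16
	     | (uint32_t)p[3] << 24;
}
```

Round-tripping any value through the pair yields the original,
precisely because the 'lossy' assignments each capture the intended
byte.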
[...]
There are multiple separate issues. One is whether the code does what
the author intended it to do. Another is whether the code does what
the code is supposed to do (based on a specification or common sense
or just 'it clearly shouldn't crash'). Both are important.
What the code author intended to do isn't important. For instance, it
is entirely possible that the author intended to do something which
would have been wrong, but failed at actually accomplishing it and,
by accident, wrote correct code. This can only be determined by
examining the code itself. And 'the code' has the nice property that
it is there and open to inspection, while reasoning about the
'intentions' of someone, beyond what he actually communicated, will
always necessarily involve (unverifiable) assumptions.
[...]
This is one of the reasons I generally prefer to hire programmers with
less experience. They have less unlearning to do.
This means little more than that you prefer to hire people without
the necessary practice to actually perform certain tasks, because such
practice might lead them to work in a way different from the way you
would like, or, in other words, that you are opposed to the idea that
'programming', like any other craft (not art), can be learnt (and
taught) independently of the people involved. I would assume this was
once true of more traditional professions, too, eg that there was a
time when doctors were generally believed to be wizards predetermined
for their profession by some mysterious, superhuman power (and
likewise builders, tool makers, carpenters, tailors and so on).
I hope to live long enough to still see the proto-scientific 'stone ages'
end in this particular case, too.