Sorry for not responding earlier, it's a busy time.
I'm responding here to similar comments in several postings,
rather than repetitively responding individually. The order
will be different but I will try to make sure there is enough
context so that doesn't matter.
The benefits are the same as using an extra variable, but without
needing to declare or name a variable.
Also it doesn't mask conversion warnings the way casts often do.
Thanks - I hadn't thought of that.
However, one of the reasons for using a cast is precisely that
the types are not assignment compatible, or that assignment will
give an extra warning (such as if gcc's "-Wconversion" is used,
and the assignment is to a smaller type).
That's not one reason, it's two, and the two situations should
not be lumped together.
If I see a cast that is required, in the sense that it cannot
be avoided through use of implicit conversions (not counting by
using (void *), which is a special case), I know why it's there -
there is some sort of skulduggery afoot, and there _has_ to be a
cast to get the compiler to tolerate the skulduggery. Probably
the person who wrote the cast made a conscious decision to engage
in said skulduggery, but whether that's true or not the presence
of a required cast is a red flag that calls for greater than
usual attention to what's going on there. Competent developers
try to write code that has as few red flags as possible (or at
least as few as reasonably possible); one reason for that is
such cases are generally more disastrous when they are done
wrongly, so naturally they merit a greater degree of scrutiny.
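(To give one concrete, if invented, illustration of a required
cast: getting at a memory-mapped peripheral register on a small
microcontroller. The register name and address below are made
up, but the shape is typical - the integer-to-pointer conversion
cannot be written without a cast, and that cast is a deliberate,
visible red flag.)

#include <stdint.h>

/* invented address, for illustration only */
#define TIMER_COUNT (*(volatile uint32_t *) 0x40001000u)

void wait_ticks(uint32_t ticks)
{
    uint32_t start = TIMER_COUNT;
    while (TIMER_COUNT - start < ticks) {
        /* busy-wait until 'ticks' timer counts have elapsed */
    }
}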
In the other case, if a section of code contains an unnecessary
cast, meaning it is not required in the sense described above,
there are lots of different reasons why it might be there. For
example:
a. a non-default conversion is necessary for the code to work
correctly (ie, a semantic difference is needed);
b. a non-default conversion has no effect on this platform,
but it is necessary on a different platform;
c. a non-default conversion may be necessary for the code to
work correctly, but it isn't obvious whether it is or not,
so a cast was added for safety;
d. a non-default conversion is not necessary for the code to
work correctly (and the author knows this), but a cast was
added to make it obvious that the code will work;
e. the author thinks a non-default conversion is necessary for
the code to work correctly (on this platform or some other
one), even though it isn't;
f. a redundant cast was added to call attention to the presence
of a non-obvious conversion;
g. a cast was added that is redundant on this platform but is
not redundant on a different platform;
h. a cast was put in to conform to a coding standard rule (or
code review, local practice, etc);
i. a cast was put in to make the code easier to understand for
inexperienced developers;
j. a cast was put in to suppress a compiler warning message;
k. a cast was put in to suppress a compiler warning message
not on this platform but a different platform (or compiler
version);
l. a cast was put in to suppress an expected future compiler
warning message (or expected alternate platform);
m. the author believes a cast is needed to suppress a compiler
warning message (on this platform or some other), but in
fact it is not; or
n. the cast was added earlier on for one of the above listed
reasons, but meanwhile the code has changed so the cast
now serves no current purpose.
The cast tells the compiler, and the programmer and readers, that
you know what you are doing here.
There are several things wrong with this statement, especially
for casts in the "unnecessary" category. First, even though a cast
specifies what operation is to take place, it doesn't say what it
is there to accomplish, or why. Second, and partly as a corollary
to the previous sentence, there is often no way to tell if an
unnecessary cast is really needed for its intended purpose.
Third, the implied assertion that the author knows what he/she is
doing is often wrong (and the more unnecessary casts there are
the more our attention is diluted away from the cases that need
it). Fourth, giving general advice to add casts in places where
casts are not necessary makes things worse - likely adopters of
such advice include inexperienced developers, who as a group
make more mistakes and would most benefit from receiving the
warning messages that unnecessary casts suppress.
Certainly explicit casts can mask some compiler warnings - but
sometimes that is exactly what you want.
That is exactly what I do NOT want. Casts written to suppress a
compiler warning message should _never_ be written in open code.
If there is no way to suppress a warning other than by using a
cast, at least the cast should be wrapped in a suitable macro, so
that suitable assertion checks, etc., can be conditionally
included.
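As a minimal sketch of the kind of thing I mean (the macro name
and the exact checks here are invented):

#include <assert.h>
#include <stdint.h>

/* For signed arguments only; 'x' is evaluated more than once, so
   don't pass expressions with side effects.  Defining NDEBUG
   reduces this to the plain conversion. */
#define TO_INT16(x) \
    ( assert((x) >= INT16_MIN && (x) <= INT16_MAX), (int16_t)(x) )

A later grep for TO_INT16 also finds every narrowing site, which
a bare '(int16_t)' sprinkled through the code does not.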
This is particularly true
when changing to lower range types, or swapping between signed
and unsigned types. If you write "an_int_16 = an_int_32", a
compiler with appropriate warnings can flag that as risky.
Adding an explicit cast, "an_int_16 = (int16_t) an_int_32" makes
it clear to the reader, the writer, and the compiler that the
assignment is safe.
The problem is it does not make that clear. /Maybe/ it means the
author thinks it's safe, but that doesn't mean it is safe. It
might not mean even that; it might be just a knee-jerk reaction
to previously getting a compiler warning message. Looking at
the cast, there's no way to tell the difference between those
two circumstances.
(Clearly the
programmer must ensure that it /is/ safe in this case.)
Putting in a cast (meaning a plain cast in open code) takes away
one of our best tools to help with that. Automated tools are
more reliable than developers' reasoning.
So yes, it can mask some problems - but I also think it can allow
other problems to be seen. There is a balance to be reached.
Surely you don't expect anyone to be convinced by this statement
until you say something about what those other things might be,
and offer some kind of evidence, even if anecdotal, that they
provide some positive benefit.
Perhaps I see the use of it more in the type of programming I do,
which involves small microcontrollers (often 8-bit or 16-bit),
and quite a lot of conversions between different sizes.
That makes it worse. If narrowing conversions (or similar) are
common and routine, they should be wrapped up in suitably safe and
type-safe macros or inline functions, not bludgeoned with an
open-code cast sledgehammer.
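Something along these lines, say (another sketch, with invented
names; the checking policy is only one possibility):

#include <assert.h>
#include <stdint.h>

/* One audited place where the narrowing happens; the assert
   records and checks the assumption instead of hiding it. */
static inline uint16_t narrow_u16(uint32_t v)
{
    assert(v <= UINT16_MAX);
    return (uint16_t) v;
}

Unlike an open-code cast, the prototype also constrains the
argument and result types, so passing in something wildly wrong
is a constraint violation rather than a silent reinterpretation.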
I don't see how it is a "maintenance nightmare", unless you
regularly change the types of your variables in "maintenance"
without due consideration of their use.
The problem is not the consideration, but making the attendant
changes at all the use sites. And using straight casts will make
those harder to find.
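A small invented example of what I mean: suppose a variable
started out as int16_t and a later change widens it.

#include <stdint.h>

int32_t sensor_raw;        /* was int16_t before the change */
int16_t display_value;

void update_display(void)
{
    /* The cast was harmless when both sides were int16_t.  After
       the widening it silently truncates, and it also suppresses
       the -Wconversion warning that would otherwise have pointed
       straight at this line. */
    display_value = (int16_t) sensor_raw;
}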
However, although I would definitely put at least one explicit
conversion in cases like the one mentioned, I don't put them
/everywhere/ - I use them when they make the code clearer and
leave no doubts as to what types are to be used. Too many casts
would make it hard to see the "real" code, and legibility is
vital.
Does this mean you don't have any specific guidelines (ie,
objective rather than subjective) for when/where casting should
be done? Criteria like "make the code clearer" or "too many
casts" might be good as philosophical principles but they are too
subjective to qualify even as guidelines.
The use of size-specific types is part of making code
platform-agnostic. "int", and "integer promotions" are /not/
platform independent - they are dependent on the size of the
target's "int". So when you need calculations with specific
sizes, using explicit conversions and casts rather than implicit
ones is part of avoiding platform-specific code.
Using size-specific types /in declarations/ is part of making code
less platform specific. It is never necessary to use casts to
accomplish this. Also, you missed my point about the particular
types used (which were types like 'uint32_t'). These types are
not available in all implementations. If what is wanted is
arithmetic that is at least 32 bits, and an integer conversion rank
at least as big as 'int', it's easy to get that without resorting
to casting, or even using any type names at all.
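For example (a sketch, with the variable names being mine):
adding 0UL to an operand forces the arithmetic to be carried out
in unsigned long, which every implementation must make at least
32 bits wide and whose rank is at least that of int, and no cast
or type name appears in the expression.

#include <stdint.h>

uint16_t a, b;

int over_limit(void)
{
    /* the multiply is done in (at least 32-bit) unsigned long,
       even on a target whose int is 16 bits */
    return (a + 0UL) * b > 100000UL;
}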
I think I over-stated my case in my first post in this branch - I
don't add explicit casts in /all/ cases when types change, but I
do add them when I need to be sure and clear exactly how and when
they change. This means I use some casts that are unnecessary, or
might be unnecessary depending on the target, but I don't do it
/all/ the time. For example, I would not cast a uint16_t to
uint32_t before assigning to a uint32_t variable without
particularly good reasons. But I /would/ be likely to cast a
uint32_t value to a uint16_t before assigning to a uint16_t
variable, as it makes it clear that I am reducing the range of the
variable. (I might alternatively use something like an "& 0xffff"
mask - again, it is not needed, but it can make the intention
clearer.)
I think I mostly agree with what you are trying to do. What I
disagree on is that using a cast is a good way to accomplish those
goals. An unnecessary cast does not reveal information but
conceals it, muddying the water rather than clarifying it.
Note that this is not just my idea - Misra coding standards have
a rule "Implicit conversions which may result in a loss of
information shall not be used."
The Misra coding rules are not best practices. Looking over the
list (the 2004 version), many of its rules are nothing more than
coding standard dogma, and often bad dogma. The rules they have
regarding casting are particularly egregious.
There is a balance to be achieved here - writing the casts
explicitly can make some code clearer, but make other code harder
to read. They can hide some diagnostic messages, but allow other
diagnostics to be enabled (by letting you hide the messages when
you need to).
Yes but it isn't necessary to use casts to avoid such messages.
And the habit of using casts weakens the value of the messages,
because people will start to add them reflexively in response
to getting the warnings.
In the case of the OP, he has:
uint16_t foo;
uint64_t bar;
bar = (foo << 16);
He needs to lengthen foo to at least 32 bits unsigned in order to
work correctly for all values of foo. I think - but I'm not
entirely sure - that for a target with 32-bit ints, and foo
having 0 in its MSB (as the OP said), then "bar = (foo << 16)"
will give the correct behaviour as it stands, with nothing
undefined or implementation-dependent [beyond the fact of int's
being 32 bits, presumably].
"In cases where you are uncertain, look up the rule and remove the
uncertainty." - paraphrased from Keith Thompson. Also, if you
think it will be the same but aren't sure, then write an assert()
to test it:
assert( (uint32_t) foo << 16 == foo << 16 );
The assert does a much better job of communicating what is in the
author's mind than a straight cast does. (Of course, another
formulation that avoids both casting and the need for a checking
assertion is another possibility.)
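(For instance, keeping the OP's declarations of 'foo' and 'bar',
one such formulation might be

bar = (foo + 0UL) << 16;

where the added 0UL guarantees the shift is carried out in
unsigned long, at least 32 bits wide, for every value of foo -
with no cast, no assert, and no type name in the expression.)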
However, I
/know/ that "bar = ((uint32_t) foo << 16)" will work as the user
expects, with no undefined or implementation-dependent behaviour.
Not so, because uint32_t is not present in all implementations,
not even in all those that are C99 implementations.
It
will also work with 16-bit ints, and it makes it clear to the
reader exactly what is going on.
This is sort of like saying 'goto' makes it clear where execution
will continue. That is true at one level, but in a more important
way it's wrong.
So is that cast strictly necessary? No, I don't think so. Does
it make the code better? Yes, I believe so.
It is arguably an improvement over 'foo << 16', but that doesn't
mean it passes muster. Find a way to write it that you're sure
will work and doesn't need any casts. After doing that, find a
way to write it that you're sure will work and doesn't need any
type names or declarations.
However, adding an extra "(uint64_t)" cast before the assignment,
as I first wrote, is probably excessive. It is a stylistic
choice whether it is included or not.
(I am sure I use unnecessary casts at other times that you will
disagree with more strongly. Style is always open to debate.)
Here I think you are using the word "style" in a way that's
inappropriate. A programming choice falls under the heading of
style when the various choices are directly and obviously
behaviorally equivalent, where "behavior" includes both the
semantics of the program code and how the compiler acts upon
processing that code. A style choice is a choice that makes a
difference /only/ to human readers, not to how the program works
or what the compiler does in the different cases. My objections
to unnecessary casting are that it isn't obvious whether or not
it makes a difference, and often the use of a cast /will/ make a
difference (specifically in terms of warning messages) that is
undesired. The issues here are not style issues -- at least, not
in their high-order bits. So I suspect the word "style" is being
used here to avoid having to defend the practice of unnecessary
casting.