Guest
I have the following expression coded in a macro:
((((__typeof__((x)))0) - ((__typeof__((x)))1)) < ((__typeof__((x)))0))
It uses GCC's __typeof__() extension, but I don't think that's an issue.
This expression is intended to evaluate to true whenever x is a signed integer
and to false whenever x is an unsigned integer. The purpose is to give the
compiler a good chance of optimizing out the expressions that follow in the
?: ternary operator. Since this expression is strictly a constant, it should
be easy for the compiler to detect that only one or the other of the ?: expressions is needed.
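For context, here is a rough sketch of how I intend to use it; the names
IS_SIGNED_TYPE, PRINT_VALUE, print_signed, and print_unsigned are just
illustrative stand-ins, not my actual code:

#include <stdio.h>

/* Hypothetical wrapper around the expression quoted above. */
#define IS_SIGNED_TYPE(x) \
    ((((__typeof__((x)))0) - ((__typeof__((x)))1)) < ((__typeof__((x)))0))

static void print_signed(long long v)            { printf("signed:   %lld\n", v); }
static void print_unsigned(unsigned long long v) { printf("unsigned: %llu\n", v); }

/* The ?: use mentioned above: the condition is a compile-time constant,
 * so the compiler should keep only one of the two calls. */
#define PRINT_VALUE(x)                          \
    (IS_SIGNED_TYPE(x)                          \
         ? print_signed((long long)(x))         \
         : print_unsigned((unsigned long long)(x)))

int main(void)
{
    int i = -3;
    unsigned int u = 3u;
    PRINT_VALUE(i);   /* takes the signed branch */
    PRINT_VALUE(u);   /* takes the unsigned branch */
    return 0;
}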
When I use int, long, or long long, either signed or unsigned, this compiles
without any warning. When I use signed short, it also compiles without a warning.
But when I use unsigned short, I get a warning.
Clearly, the result will be out of range for an unsigned type, although it will be
converted back into range. What puzzles me more than getting the warning for unsigned
short is NOT getting it for unsigned int. That doesn't seem consistent. Is
there something in the standard that would make such an expression, or the
conversions involved in its arithmetic, behave differently for int compared
to short?
Here's the expression with hard-coded types for easier reading (although for
no practical purpose):
((signed short)0) - ((signed short)1) < ((signed short)0)
((unsigned short)0) - ((unsigned short)1) < ((unsigned short)0)
((signed int)0) - ((signed int)1) < ((signed int)0)
((unsigned int)0) - ((unsigned int)1) < ((unsigned int)0)
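In case it helps, here is a minimal complete program that prints what each of
those expressions evaluates to; compile it with whatever warning flags you
normally use to reproduce the behavior I'm describing (assuming short is
narrower than int on your platform):

#include <stdio.h>

int main(void)
{
    /* Each comparison result is an int (0 or 1), so %d is fine. */
    printf("signed short:   %d\n",
           ((signed short)0) - ((signed short)1) < ((signed short)0));
    printf("unsigned short: %d\n",
           ((unsigned short)0) - ((unsigned short)1) < ((unsigned short)0));
    printf("signed int:     %d\n",
           ((signed int)0) - ((signed int)1) < ((signed int)0));
    printf("unsigned int:   %d\n",
           ((unsigned int)0) - ((unsigned int)1) < ((unsigned int)0));
    return 0;
}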