if(a);


Martin Johansen

if(a);

In this code, to what type does the if statement cast the variable "a" on
comparison?
 

Mark McIntyre

if(a);

In this code, to what type does the if statement cast the variable "a" on
comparison?

a isn't cast to any type. A cast is when the programmer deliberately types
extra syntax, e.g.
(sometype) foo;
which casts foo to a sometype value.

Back to your original question: 'a' has to be an integral type, or
something that can be converted to an integral type.
 

Ben Pfaff

Mark McIntyre said:
Back to your original question: 'a' has to be an integral type, or
something that can be converted to an integral type.

No, it only has to have scalar type. A scalar type is an
arithmetic type (either an integer or a floating-point type) or a
pointer type.

6.8.4.1 The if statement
Constraints
1 The controlling expression of an if statement shall have scalar type.
 

Arthur J. O'Dwyer

[NB: xpost to comp.std.c added]

a isn't cast to any type. A cast is when the programmer deliberately
types extra syntax, e.g.
(sometype) foo;
which casts foo to a sometype value.

For the OP's benefit: The correct term for something that "acts
like a cast" without the explicit "(sometype)" is an "implicit
conversion."
Back to your original question: 'a' has to be an integral type, or
something that can be converted to an integral type.

As Ben said, not quite. 'a' has to be a type which can be
validly compared for equality with 0. That is, the line
if (a) ;
is exactly equivalent to the line
if ((a) == 0) ;
no matter what the type of 'a' is. If 'a' doesn't have the right
type for that second line to make sense, then the first line is
invalid as well.


Crosspost to c.s.c added because of the following...

Question for the experts: Does the C standard clarify what it
means by "compare equal to 0" anywhere? Is that 0 the literal 0
of type 'int', or the value you get by converting 0 to the type
of 'a'? Is there any valid C program that could tell the difference?

-Arthur
 

Martin Johansen

For the OP's benefit: The correct term for something that "acts
like a cast" without the explicit "(sometype)" is an "implicit
conversion."

OK, I see and accept that, but just for the record, why not call it a cast? It
was implied that it was an implicit action.
 

Ben Pfaff

Martin Johansen said:
OK, I see and accept that, but just for the record, why not call it a cast? It
was implied that it was an implicit action.

A cast is explicit, but an implicit conversion is not explicit.
Thus, it is incorrect to call either one by the other's name.
 

Jack Klein

[NB: xpost to comp.std.c added]

a isn't cast to any type. A cast is when the programmer deliberately
types extra syntax, e.g.
(sometype) foo;
which casts foo to a sometype value.

For the OP's benefit: The correct term for something that "acts
like a cast" without the explicit "(sometype)" is an "implicit
conversion."
Back to your original question: 'a' has to be an integral type, or
something that can be converted to an integral type.

As Ben said, not quite. 'a' has to be a type which can be
validly compared for equality with 0. That is, the line
if (a) ;
is exactly equivalent to the line
if ((a) == 0) ;
no matter what the type of 'a' is. If 'a' doesn't have the right
type for that second line to make sense, then the first line is
invalid as well.


Crosspost to c.s.c added because of the following...

Question for the experts: Does the C standard clarify what it
means by "compare equal to 0" anywhere? Is that 0 the literal 0
of type 'int', or the value you get by converting 0 to the type
of 'a'? Is there any valid C program that could tell the difference?

-Arthur

What difference does it make? 0 is a valid value for any scalar type.

If you specifically code:

if (a == 0)

...where a is any scalar type, the following will occur.

The type of the octal signed int literal 0 will be compared to the
type of 'a'. Depending on the type of 'a', one of these things will
then happen in the virtual machine:

1. 'a' has type signed int, in which case no conversions are
performed.

2. 'a' has type unsigned int, in which case 0 is converted to
unsigned int.

3. 'a' has an integer type of lesser rank than signed int. If 'a' is
an unsigned type, and the entire range of values of that unsigned type
cannot be represented in a signed int, 'a' and 0 are both promoted to
unsigned int. (think 16 bit implementation where USHRT_MAX >
INT_MAX). Otherwise 0 is left alone and 'a' is promoted to signed
int.

4. 'a' has an integer type of greater rank than int. 0 is converted
to the signed or unsigned integer type of 'a'.

5. 'a' has a floating point type (or, I suppose, under C99, a complex
type). 0 is converted to 0.0F, 0.0, 0.0L, or whatever the syntax
might happen to be for a C99 complex literal (if there are such
things, I haven't used this feature).

6. 'a' is a pointer type, in which case 0 is converted to a null
pointer of the corresponding type.

With the exception of #6, these are lumped together in the C99
standard in 6.3.1.8 "Usual arithmetic conversions".

Note that in cases 1 through 5, the value of the signed int literal 0
is still 0 after undergoing conversion, if any.

In actuality, I think virtually any compiler in almost every
circumstance will use the "as-if" rule to avoid the conversion and use
the most efficient operation available for the underlying architecture
to directly test the value of 'a'. Most processors provide special
hardware to make a zero/not zero test simple and fast. But under the
"as-if" rule, it makes no difference.

Zero is a special value in mathematics, and it has a well-defined
meaning for every scalar type in C. The implementation is free to use
whatever method it deems appropriate so long as it correctly determines
whether or not the scalar being tested is exactly 0 (or NULL).
 

Ben Pfaff

Arthur J. O'Dwyer said:
As Ben said, not quite. 'a' has to be a type which can be
validly compared for equality with 0. That is, the line
if (a) ;
is exactly equivalent to the line
if ((a) == 0) ;
no matter what the type of 'a' is.

That's exactly wrong: == should be !=.
 

Douglas A. Gwyn

Arthur said:
Question for the experts: Does the C standard clarify what it
means by "compare equal to 0" anywhere?

It seems clear enough to everybody I know of:
a null pointer value compares equal to zero;
an arithmetic zero value compares equal to
zero.
 

Antoine Leca

[fu2 comp.std.c only]

In (e-mail address removed), Jack Klein wrote:
If you specifically code:

if (a == 0)

...where a is any scalar type, the following will occur.

The type of the octal signed int literal 0 will be compared to the
type of 'a'.

Sorry to nit-pick: what is the reasoning behind adding "octal" above? (I will
not argue that 0 is not an octal constant; it is, but I do not see why that
could interfere.)
3. 'a' has an integer type of lesser rank than signed int. If 'a' is
an unsigned type, and the entire range of values of that unsigned type
cannot be represented in a signed int, 'a' and 0 are both promoted to
unsigned int.

Again nit-picking (but since you did an exhaustive enumeration...): it was
my understanding that 'a' is first "promoted" to (in this case) unsigned;
there is no need to promote 0; then, in a subsequent step, 0 is _converted_
to unsigned; then the two unsigned values are compared.


Antoine
 

Dan Pop

Question for the experts: Does the C standard clarify what it
means by "compare equal to 0" anywhere? Is that 0 the literal 0
of type 'int', or the value you get by converting 0 to the type
of 'a'?

It means exactly what would happen if "(a) == 0" were explicitly written.
If a is subject to the integral promotions, it is promoted. Then, 0 is
converted to the type of a (conversion that is explicitly documented for
all scalar types) and the expression is evaluated.

Note, however, that the expression of relevance here is "(a) != 0" as this
is the actual test performed by "if (a)", but this doesn't change anything
in the above paragraph.
Is there any valid C program that could tell the difference?

Nope: the explicit "!= 0" can always be removed without affecting the
program correctness: if the expression was invalid before, it would be
invalid after and if it was correct before it would be correct after and
with identical semantics. As a matter of fact, even when the explicit
comparison is used, the abstract machine also performs an implicit
comparison, on the result of the != operator ;-)

Note to the newbies following this discussion: unless a is used as a
conceptual boolean (i.e. a flag), it's preferable NOT to omit the
explicit comparison, and to use NULL instead of 0 when testing pointers.

Dan
 

James Kuyper

Martin said:
OK, I see and accept that, but just for the record, why not call it a cast? It
was implied that it was an implicit action.

Because the word "cast" refers specifically to the (sometype) syntax. If
the conversion is occurring without use of that syntax, it would be
rather odd to call it by a name that refers to the missing syntax. Would
you describe a 5-mile hike along a road as a 5-mile car-ride just
because a car could also be used to travel the same path?
 

James Kanze

|> > For the OP's benefit: The correct term for something that "acts
|> > like a cast" without the explicit "(sometype)" is an "implicit
|> > conversion."

|> OK, I see and accept that, but just for the record, why not call it
|> a cast? It was implied that it was an implicit action.

A cast is syntax, to specify a desired conversion. What a cast does is
a conversion.
 

Mark F. Haigh

Dan said:
Note to the newbies following this discussion: unless a is used as a
conceptual boolean (i.e. a flag), it's preferable NOT to omit the
explicit comparison, and to use NULL instead of 0 when testing pointers.

In *your* opinion. My eyes parse if(p) quicker than if(p != NULL).

Quoth the FAQ, question 5.3:

``Abbreviations'' such as if(p), though perfectly legal, are considered
by some to be bad style (and by others to be good style; see question
17.10).

You misrepresent your style opinions as objective fact. Boo!

Mark F. Haigh
(e-mail address removed)
 

Dan Pop

In said:
In *your* opinion. My eyes parse if(p) quicker than if(p != NULL).

What is not a matter of opinion is that if(p != NULL) is *correctly*
parsed by *any* reader, while if(p) isn't (otherwise there wouldn't be
a FAQ question dedicated to this very issue in the first place).

Therefore, it is a *fact* that if(p != NULL) is more readable than if(p).

Dan
 

Wojtek Lerch

Dan said:
What is not a matter of opinion is that if(p != NULL) is *correctly*
parsed by *any* reader, while if(p) isn't (otherwise there wouldn't be
a FAQ question dedicated to this very issue in the first place).

Therefore, it is a *fact* that if(p != NULL) is more readable than if(p).

Yes, but only according to your definition of readability, based on the
number of people on the planet who can correctly parse the given piece
of code. Apparently, in your opinion that's an appropriate definition
of readability in this context. But in some other people's opinion, it
may be less important how easily a complete newbie can misunderstand
our code, and more important how efficiently an experienced person can
read it.

In short, whether something is preferable or not depends on who prefers it.
 

Casper H.S. Dik

Wojtek Lerch said:
Yes, but only according to your definition of readability, based on the
number of people on the planet who can correctly parse the given piece
of code. Apparently, in your opinion that's an appropriate definition
of readability in this context. But in some other people's opinion, it
may be less important how easily a complete newbie can misunderstand
our code, and more important how efficiently an experienced person can
read it.

I don't think "experience" enters into it unless you mean "person
experienced with that particular code". The "if (foo != NULL)" idiom
immediately conveys both the pointerness and the kind of test performed.
"if (foo)" requires the reader to know more of the context of the program,
which means the "experience" needs to be with the code in question
and not with coding in general; therefore the code is less readable to
others. So if more than one person ever needs to look at the code,
the longer form definitely adds to maintainability.

Casper
 

Dan Pop

In said:
Yes, but only according to your definition of readability, based on the
number of people on the planet who can correctly parse the given piece
of code. Apparently, in your opinion that's an appropriate definition
of readability in this context. But in some other people's opinion, it
may be less important how easily a complete newbie can misunderstand
our code, and more important how efficiently an experienced person can
read it.

I have yet to see proof that it makes *any* difference to the *competent*
programmers but I have already seen proof that *experienced* programmers
may not fully and correctly understand the short form. Furthermore, the
code may be written by a beginner who has the wrong idea about the effect
of omitting the explicit comparison... The explicit comparison simply
removes all kinds of doubts.
In short, whether something is preferable or not depends on who prefers it.

Not when objective criteria are used.

Furthermore, if "if (p)" is fine, then so must be "if (p = q)": either we
omit the explicit comparison systematically or we don't. Yet, the latter,
even if perfectly readable to the competent programmer, still makes him
uncomfortable when seen in someone else's code... No such adverse
effects related to the systematic use of the explicit comparison.

Dan
 

Wojtek Lerch

Dan said:
I have yet to see proof that it makes *any* difference to the *competent*
programmers but I have already seen proof that *experienced* programmers
may not fully and correctly understand the short form. Furthermore, the
code may be written by a beginner who has the wrong idea about the effect
of omitting the explicit comparison... The explicit comparison simply
removes all kinds of doubts.

Right; and in your opinion, removing all kinds of doubts seems to be
absolutely more important than anything else. Whereas in my opinion, it
may be a tradeoff between how the two different styles affect different
conflicting goals I want to achieve. For instance, adding superfluous
parentheses to expressions also removes all kind of doubt, but my
opinion is that "a+b*c > d+e && f > g" is more readable than "((a+(b*c))
> (d+e)) && (f > g)".

And it takes up less disk space! Maybe saving disk space is more
important to some people than removing all kinds of doubts that a
complete newbie may possibly have? Who are you to say that those people
are "objectively" wrong?
Not when objective criteria are used.

No objective criteria can tell you what you should prefer, except by
referring to something else that you prefer, too. If I prefer my code
to be confusing to newbies, what kind of objective criteria are you
going to use to prove that my preference is wrong?
 
