what will be the output


vaib

#include<stdio.h>
main()
{
short int i=0;
for(;++i;i=2)
printf("%u\n",i);
return 0;
}

I know it's simple and not worth asking, but the output should be
1
3
3
3.....infinite loop

but the output I'm receiving is
3
3
3.....infinite loop
what's amiss?
 

Jim Langston

vaib said:
#include<stdio.h>
main()
{
short int i=0;
for(;++i;i=2)
printf("%u\n",i);
return 0;
}

I know it's simple and not worth asking, but the output should be
1
3
3
3.....infinite loop

but the output I'm receiving is
3
3
3.....infinite loop
what's amiss?

For this modified program:

#include<stdio.h>
main()
{
int x = 0;
short int i=0;
for(;++i && x++ < 10 ;i=2)
printf("%u\n",i);
return 0;
}

the output is
1
3
3
3
3
3
3
3
3
3
as expected.

Perhaps the first 1 scrolls off the screen too fast for you to see it?
 

Alf P. Steinbach

* vaib:
#include<stdio.h>
main()

'main' must have result type 'int'.

Formally the program can do anything if it compiles, since it's invalid
as C or C++.

{
short int i=0;
for(;++i;i=2)
printf("%u\n",i);
return 0;
}

I know it's simple and not worth asking, but the output should be
1
3
3
3.....infinite loop

but the output I'm receiving is
3
3
3.....infinite loop
what's amiss?

Produces 1, 3, 3, 3 ... with g++ and msvc (when 'main' corrected).

Cheers & hth.,

- Alf
 

James Kanze

* Alf P. Steinbach:

'main' must have result type 'int'.
Formally the program can do anything if it compiles, since
it's invalid as C or C++.

Formally, if it compiles without a diagnostic, you didn't use a
C or a C++ compiler. But even with the return type...

Isn't it undefined behavior to pass a signed type to a %u format
specifier? (Or is there a special rule which allows it as long
as the actual value fits in both types?)
 

Alf P. Steinbach

* James Kanze:
Isn't it undefined behavior to pass a signed type to a %u format
specifier? (Or is there a special rule which allows it as long
as the actual value fits in both types?)

I don't think there is any special rule, but it can be inferred from the
required behavior of x>>n for non-negative values of x, namely that x>>n
yields x/2^n, with / denoting integer division, and also that this is the
bit-pattern shifted right n bit positions.

These requirements imply that except for possible padding bits to the
left, the bit-patterns for non-negative signed values are the same as
the bit-patterns for unsigned values, namely direct binary (for a long
time I erroneously believed that Gray code also fit the bill, but no).

Nit: the code shown in the original posting, using 'short', also relies
on implicit promotion to 'int' for the argument.


Cheers,

- Alf
 

Jack Klein

* James Kanze:
Formally, if it compiles without a diagnostic, you didn't use a
C or a C++ compiler. But even with the return type...


Isn't it undefined behavior to pass a signed type to a %u format
specifier? (Or is there a special rule which allows it as long
as the actual value fits in both types?)

The C99 standard specifically allows passing a signed integer type as
an unsigned integer type of the same size, or vice versa, as long as
the value is within the range of both types.

The C95 standard, on which the C++ standard is currently based, does
not have this exemption, which was added to C99.

I believe the intent of the addition in C99 was merely to document the
existing behavior of all C90 implementations, but technically the C++
standard is based on the C95/C90 spec, specifically making it
undefined.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
 

James Kanze

* Alf P. Steinbach:
I don't think there is any special rule, but it can be inferred from the
required behavior of x>>n for non-negative values of x, namely that x>>n
yields n/2^n with / denoting integer division, and also that this is the
bit-pattern shifted right n bit positions.

That explains why it works in practice. But a compiler is
allowed to implement variable args in such a way that it does
type checking. The normal situation in the case of va_arg is
that if the types mismatch, then in general, the code has
undefined behavior, according to the standard.

Anyway, as Jack Klein points out, it is undefined behavior in
C90, and thus in C++ today, but the C99 standard explicitly
added two exceptions: one for signed and unsigned, when the
value is in the common range, and one for void* and char*, and
presumably, the next version of the C++ standard will be based
on this. If we assume (which seems safe) 1) that it does work
with all current implementations, and 2) that it will be
required to work in the next version of the standard, I'd say
that it's a pretty safe bet. Like long long, for example.

I'm not sure I like having the exceptions, since it makes
implementing a checking implementation more difficult. In the
case of signed/unsigned, however, I'd be willing to bet that
they were introduced because of printf---I'm willing to bet that
there is a lot of code out there that occasionally passes an
integral literal, e.g. 0, to %u. And of course, such code does
work with all real implementations.

Just another example of why printf is bad, bad, bad.

These requirements imply that except for possible padding bits
to the left, the bit-patterns for non-negative signed values
are the same as the bit-patterns for unsigned values, namely
direct binary (for a long time I erronously believed that grey
code also fit the bill, but no).

Integral representation is irrelevant to the question. Just
because an implementation uses the same representation for long
and int, you can't pass a long to a "%d"---the results are
undefined behavior, and not implementation defined.

Nit: the code shown in the original posting, using 'short',
also relies on implicit promotion to 'int' for the argument.

Arguments to a var args parameter always undergo integral
promotion. (Or is this only so trivially obvious to me because
I wrote a lot of C back in the days when it was K&R C, and you
didn't have prototypes?)
 

Alf P. Steinbach

* James Kanze:
That explains why it works in practice. But a compiler is
allowed to implement variable args in such a way that it does
type checking. The normal situation in the case of va_arg is
that if the types mismatch, then in general, the code has
undefined behavior, according to the standard.

Anyway, as Jack Klein points out, it is undefined behavior in
C90, and thus in C++ today, but the C99 standard explicitly
added two exceptions: one for signed and unsigned, when the
value is in the common range, and one for void* and char*, and
presumably, the next version of the C++ standard will be based
on this. If we assume (which seems safe) 1) that it does work
with all current implementations, and 2) that it will be
required to work in the next version of the standard, I'd say
that it's a pretty safe bet. Like long long, for example.

I'm not sure I like having the exceptions, since it makes
implementing a checking implementation more difficult.

Considering how old these languages are, the non-existence of such a
checking implementation is pretty significant. ;-)

I think we must distinguish clearly between explicit UB, where the
standard explicitly and intentionally gives the implementation the
widest latitude possible in how to handle things (or not), and implicit
UB, where the standard is silent, which only with contorted logic can be
regarded as intentional: more likely an unintentional omission or simply
not regarded as practically relevant enough to waste time on specifying.

Regarding /implicit/ purely formal UB it's possible that it is, but
proving that is like proving a negative: one must prove conclusively
that there's nothing that directly or by implication makes it defined,
and anyway it is, as you hint, practically speaking irrelevant because
of C99 and C++0x -- the world moves on... :)

Integral representation is irrelevant to the question.

On the contrary, it's very relevant, because the C++ standard is not an
arbitrary collection of rules: it's a set of rules intended to specify a
practically useful language, capturing existing practice.

So that when you don't have a directly applicable formal rule at hand,
considering the practice, here the representation, can tell you what the
practical conclusion must be, without recourse to formal rules that are
inaccessible, very difficult to find, or very difficult to reason out.

Considering the integral representation thus (1) tells a developer what
the practical answer is for practical coding, and (2) tells a language
lawyer what the standard should be if it isn't already (and hence, a
possible improvement). And for (2) one doesn't even need to know
whether the formal already covers this. All one does need to know is
that it's so darn difficult to find, that explicit language that
formalizes the practice would be a great improvement.

Just
because an implementation uses the same representation for long
and int, you can't pass a long to a "%d"---the results are
undefined behavior, and not implementation defined.

Well, first, I can, and moreover anyone can, with in practice well
defined result, and second, you haven't proved that it's implicit UB or
explicit UB. The latter could be possible in a Usenet discussion, but I
doubt that there is explicit UB here. The former, proving implicit UB, well
then I think we need to stock up on beer and foodstuffs... :)

Arguments to a var args parameter always undergo integral
promotion. (Or is this only so trivially obvious to me because
I wrote a lot of C back in the days when it was K&R C, and you
didn't have prototypes?)

I just mentioned it because possibly it wasn't obvious to the OP.


Cheers,

- Alf
 

James Kanze

* Alf P. Steinbach:

[concerning var args...]
Considering how old these languages are, the non-existence of
such a checking implementation is pretty significant. ;-)

In what sense? There aren't very many implementations with
array bounds checking either. Both features would provide a
safety net, at some run-time cost. For various reasons, C and
C++ implementations always seem to prefer speed to any other
feature. Partially, of course, because it's easy to measure,
but only partially.

(FWIW: there have been, and maybe still are, checking
implementations. Centerline, for example.)
I think we must distinguish clearly between explicit UB, where
the standard explicitly and intentionally gives the
implementation the widest latitude possible in how to handle
things (or not), and implicit UB, where the standard is
silent, which only with contorted logic can be regarded as
intentional: more likely an unintentional omission or simply
not regarded as practically relevant enough to waste time on
specifying.

Again, I don't quite see your point. This is obviously a case
of explicit UB; the standard says explicitly that if the types
don't match, the behavior is undefined. The intent is obviously
to allow it to fail in undefined ways if you pass, say, an
integer where a pointer is expected. One "undefined way" it
might fail is, of course, a controlled error. The "intent" of
the standard here seems obvious: to allow the implementation to
do anything it wants if the types don't match. Anything it
wants includes, of course, intentionally failing.
Regarding /implicit/ purely formal UB it's possible that it is, but
proving that is like proving a negative: one must prove conclusively
that there's nothing that directly or by implication makes it defined,
and anyway it is, as you hint, practically speaking irrelevant because
of C99 and C++0x -- the world moves on... :)
On the contrary, it's very relevant,

When the standard explicitly says undefined behavior, it's
undefined behavior. When the standard says it must work, it
must work. Integral representation would only be relevant if
the standard didn't say.

[...]
Well, first, I can, and moreover anyone can, with in practice
well defined result, and second, you haven't proved that it's
implicit UB or explicit UB.

Jack Klein posted a quote from the standard. I didn't think it
necessary to repeat it. The standard says, explicitly, that: "If
there is no actual next argument, or if type is not compatible
with the type of the actual next argument (as promoted according
to the default argument promotions), the behavior is undefined".
(The C99 standard adds ", except for the following[...]" after
this sentence, with the two exceptions I mentioned.) You can't
get more explicit than that.
 

Alf P. Steinbach

* James Kanze:
* James Kanze:
[...]
Just
because an implementation uses the same representation for long
and int, you can't pass a long to a "%d"---the results are
undefined behavior, and not implementation defined.
Well, first, I can, and moreover anyone can, with in practice
well defined result, and second, you haven't proved that it's
implicit UB or explicit UB.

Jack Klein posted a quote from the standard. I didn't think it
necessary to repeat it. The standard says, explicitly, that: "If
there is no actual next argument, or if type is not compatible
with the type of the actual next argument (as promoted according
to the default argument promotions), the behavior is undefined".
(The C99 standard adds ", except for the following[...]" after
this sentence, with the two exceptions I mentioned.) You can't
get more explicit than that.

It seems that that posting with quote is not on my news-server.

However, there is a posting by Jack Klein where he referred to that
passage from the C standards, for "C95" and "C99". We need to go back
to C90 (I don't know about C95). Probably Jack meant to write "C90".

In C90 the passage is presumably the same as in C99 except without C99's
explicit exemption allowing a signed argument for an unsigned parameter
and vice versa when the value is in the range of both types.

The question is then whether and how C90 defines "compatible with".

As an example, in C99 in annex H it's stated[1] that int, long int and
long long int, and the corresponding unsigned types, are "compatible
with" the ISO/IEC 10967-1 standard. So this is one meaning of
"compatible with" used in C99, an operational view. Another possible
meaning, defined earlier, is essentially "same type", but not exactly,
otherwise "compatible" would not be different from "same" (I think this
meaning is about separately defined types, and C struct versus typedef etc.)

Anyways, if C++0x refers to C99 or adopts that wording, then it will be
explicitly defined behavior.

If the meaning of "compatible with" in the relevant paragraph of C90 is
conclusively established as one that does not include the case of signed
int for formal argument unsigned, then I'll grant you that formally, in
a very academic sense, it's currently formal UB, though it won't be in
the future. :)


Cheers,

- Alf (picking nits)


Notes:
[1] I don't have the C99 standard. I'm referring to a PDF I have on
disk, that I think was the latest draft of that standard, n869.
However, it doesn't say "draft"; I think it's a draft because a cover
page is missing. Anyway, it suffices for practical checking of what's C,
but not for formal language-lawyering...
 

James Kanze

* Alf P. Steinbach:

[...]
However, there is a posting by Jack Klein where he referred to that
passage from the C standards, for "C95" and "C99". We need to go back
to C90 (I don't know about C95). Probably Jack meant to write "C90".
In C90 the passage is presumably the same as in C99 except without C99's
explicit exemption allowing a signed argument for an unsigned parameter
and vice versa when the value is in the range of both types.
The question is then whether and how C90 defines "compatible with".

§6.1.2.6. Basically, for built-in types, "compatible with"
means "same as". The reason for "compatible with" rather than
"same as" is because C uses structural equivalence for structs,
etc., rather than name equivalence. Two structs have
compatible types if all of their members have the same types,
and occur in the same order. (There are a few other rules as
well, e.g. a function declared "int f();" is compatible with one
declared "int f(int);", for example, but "int f(void);" and "int
f(int);" aren't compatible.)

C++ doesn't have this notion. Which does raise questions when
C++ defines something by reference to the C standard, and the C
standard defines it using concepts which are foreign to C++.
Anyways, if C++0x refers to C99 or adopts that wording, then
it will be explicitly defined behavior.

And even if it doesn't, for some reason, can you imagine an
implementation where varargs worked differently in C++ than in
C?

If the meaning of "compatible with" in the relevant paragraph of
C90 is conclusively established as one that does not include
the case of signed int for formal argument unsigned, then I'll
grant you that formally, in a very academic sense, it's
currently formal UB, though it won't be in the future. :)

"Compatible type" is a very fundamental concept in C, and signed
and unsigned integral types are not "compatible".
 
