old style function definitions


Kobu

Does anyone know how old-style function definitions differ in
"behaviour" from new-style function definitions?

I heard that old-style function definitions cause integral promotion
and floating-point promotion on the formal parameters. Is this true?

Is it okay for me to set up a new-style function prototype when calling
old-style function definitions (for legacy code)?
 

TTroy

Kobu said:
Does anyone know how old-style function definitions differ in
"behaviour" from new-style function definitions?

I heard that old-style function definitions cause integral promotion
and floating-point promotion on the formal parameters. Is this true?

Is it okay for me to set up a new-style function prototype when calling
old-style function definitions (for legacy code)?


You should not mix prototypes with old-style function definitions,
because old-style definitions receive arguments under the default
promotion rules, while prototypes convert each argument directly to
the declared parameter type (as if by assignment).

Not a good idea.
 

SM Ryan

# Does anyone know how old-style function definitions differ in
# "behaviour" from new-style function definitions?
#
# I heard that old-style function definitions cause integral promotion
# and floating-point promotion on the formal parameters. Is this true?

The old-style argument passing is very similar to var-args passing. (If you
want to do some language archaeology you can find out more about the linkage.)

# Is it okay for me to set up a new-style function prototype when calling
# old-style function definitions (for legacy code)?

The declaration should match the definition.


#include <stdio.h>

int t();    /* non-prototype declaration: no argument checking */

int t(a, b, c)
int a, b, c;
{
    if (a == 1)
        printf("A %d\n", b);
    else
        printf("B %d %d %d\n", a, b, c);
    return 0;
}

int main(int N, char **P)
{
    t(1, 3);       /* too few arguments -- the compiler cannot catch this */
    t(4, 7, 11);
    return 0;
}
 

Eric Sosman

Kobu said:
Does anyone know how old-style function definitions differ in
"behaviour" from new-style function definitions?

I heard that old-style function definitions cause integral promotion
and floating-point promotion on the formal parameters. Is this true?

Yes: they undergo what are called the "default argument
promotions." See section 6.5.2.2 of the Standard for details.

Is it okay for me to set up a new-style function prototype when calling
old-style function definitions (for legacy code)?

It's okay, but only if you get the prototype correct --
and that's harder than it may seem at first. For example,
consider this old-style function

int f(x, y)
float x, y;
{...}

The prototype for this function is not `int f(float, float)'
but `int f(double, double)' because of the promotions.

Here's a harder one:

int g(s)
unsigned short s;
{...}

Applying the same principle as for f(), the prototype for g()
is certainly not `int g(unsigned short)' -- but what is it, then?
On some implementations `unsigned short' promotes to `int', but
on others it promotes to `unsigned int'. You would need to write
something like

#include <limits.h>
#if USHRT_MAX <= INT_MAX
int g(int);
#else
int g(unsigned int);
#endif

to be sure of getting the prototype right on all implementations.

Here's one that's harder still:

int h(t)
time_t t;
{...}

I cannot think of any way to write a prototype for h() that is
guaranteed to work on all implementations. All you know about
time_t is that it is "an arithmetic type" -- but it could be
signed or unsigned, integer or floating-point, and you do not
know whether it promotes, nor to what.

IMHO it is risky at best to mix old-style definitions with
new-style declarations. In simple cases you can write a correct
prototype with a modicum of care, but I think your effort would
be better spent updating the function definitions (if that's
possible) than in trying to hoodwink the compiler.
 

Luke Wu

Eric Sosman said:

Yes: they undergo what are called the "default argument
promotions." See section 6.5.2.2 of the Standard for details.


It's okay, but only if you get the prototype correct --
and that's harder than it may seem at first. For example,
consider this old-style function

int f(x, y)
float x, y;
{...}

The prototype for this function is not `int f(float, float)'
but `int f(double,double)' because of the promotions.

Isn't he better off using non-prototype declarations instead (same
thing, but less hassle)?

int f();

Actually, he wouldn't need a declaration at all in this case, because
it returns int (the implicit-int rule covers it).
 

Chris Torek

Isn't he better off using non-prototype declarations instead (same
thing, but less hassle)?

int f();

"Better off"? Possibly -- but these are different things. Given
the first example definition ("int f(a, b) float a, b; { ... }"),
the correct prototype declaration is:

int f(double, double);

Suppose we write that, and then include a call such as:

f(3, 4)

in the code. Any conforming C89 or C99 compiler must accept this,
and convert 3 and 4 to 3.0 and 4.0 in the call, because arguments
in a prototyped call "work like" ordinary assignments to the formal
parameters of the function. Given:

int f(a, b) float a, b; { ... }

the compiler has to implement the actual call-and-entry as if we
had written a prototyped version like this:

int f(double a0, double b0) { float a = a0, b = b0; ... }

so "a" will be set to 3.0f and b to 4.0f (ignoring any possible
error introduced via double-to-float conversion, anyway -- fortunately
both 3.0 and 4.0 are exact in all extant floating-point systems in
the world :) ).

If we remove the prototype declaration, and substitute in the
non-prototype declaration:

int f();

above the call to f(3, 4), the call suddenly becomes *incorrect*:
it passes two "int"s where two "double"s are required. The result
is likely to be disaster. (On typical x86 compilers, the two ints
will occupy the stack-space for one of the "double"s and the other
will be constructed from miscellaneous stack goop. On typical
register-parameter-machines, the "int"s will be delivered in integer
registers instead of floating-point registers. Some calling
conventions use the integer registers for floating-point anyway,
and if so, the behavior will be similar to that on the x86; otherwise
it will be even less predictable.)

Similarly, the lack of a prototype will prevent detection of
erroneous calls such as:

f("oops");

Clearly, prototypes are generally a good idea. This leads to the
desire to provide prototypes even for those old-style definitions,
which is generally also a good idea, but has the pitfalls that Eric
Sosman noted. The best solution, in my opinion, is to change the
code so that prototype declarations *and definitions* are used
everywhere, abandoning pre-C89 implementations entirely. (But
there are occasions where this is not possible, for whatever reason,
in which case, providing "optional prototypes", as 4.xBSD did with
its implementation-namespace __P macro, is a second-best solution.)
 

Ask a Question

Want to reply to this thread or ask your own question?

You'll need to choose a username for the site, which only take a couple of moments. After that, you can post your question and our members will help you out.

Ask a Question

Members online

Forum statistics

Threads
473,995
Messages
2,570,236
Members
46,825
Latest member
VernonQuy6

Latest Threads

Top