Function declaring style

dndfan

Hello,

In the short time I have spent reading this newsgroup, I have seen this
sort of declaration a few times:
int
func (string, number, structure)
char* string
int number
struct some_struct structure

Now, I am vaguely familiar with it; I did not read up on it because I
have read in an apparently misinformed book that such declarations were
very old and that no one used them any more. Can someone please explain
why they are still using this style and why the book said it was
obsolete? Is what I have written any different than:
int
func (char* string, int number, struct some_struct structure) ?

Sorry if I'm being too basic about this, but it's just something that
will probably never get mentioned in regular C courses.

Thanks in advance.
 
Mark McIntyre

In the short time I have spent reading this newsgroup, I have seen this
sort of declaration a few times:

actually you mean definition.

(snip example of pre-ANSI function definition)
Now, I am vaguely familiar with it; I did not read up on it because I
have read in an apparently misinformed book that such declarations were
very old and that no one used them any more.

The book is correct. However, legacy code is likely to still contain
such definitions, as will code written by people forced to use antique
compilers, or who are learning from very old books.
Is what I have written any different than:

Only slightly - this ISO/ANSI definition is also a prototype, and
gives the compiler more ability to check types I believe.
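
For instance (a made-up mismatch, just to show what that checking means):
with the prototype form, a call that passes the arguments in the wrong
order must draw a diagnostic; with only an old-style declaration such as
"int func();" in scope, the compiler is not required to notice.

struct some_struct { int x; };

int func(char *string, int number, struct some_struct structure)
{
    (void)string;                   /* unused in this sketch */
    return number + structure.x;
}

int main(void)
{
    struct some_struct s = {1};

    /* func(3, "hello", s); */      /* arguments swapped: with the prototype
                                       in scope the compiler must diagnose
                                       this call if the comment is removed */
    return func("hello", 3, s);     /* types match the prototype: fine */
}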
Mark McIntyre
 
Malcolm

Now, I am vaguely familiar with it; I did not read up on it because I
have read in an apparently misinformed book that such declarations were
very old and that no one used them any more. Can someone please explain
why they are still using this style and why the book said it was
obsolete? Is what I have written any different than:
I'm still using Fortran 77.
I would guess that the style is borrowed from Fortran, and of course it is
messy and hard to read and the modern syntax is better.

However you should be familiar with it. I failed a job interview because I
wasn't, and was presented with some pre-ANSI code to debug. The company
still used it, for some reason.
 
Keith Thompson

In the short time I have spent reading this newsgroup, I have seen this
sort of declaration a few times:


Now, I am vaguely familiar with it; I did not read up on it because I
have read in an apparently misinformed book that such declarations were
very old and that no one used them any more. Can someone please explain
why they are still using this style and why the book said it was
obsolete? Is what I have written any different than:

That old style of function definition has been basically obsolete
since the ANSI standard was approved in 1989. For several years after
that, there were still enough compilers in use that didn't support
prototypes (the superior alternative introduced by the ANSI standard)
that it was still sometimes necessary to use old-style definitions.
You'll still see a fair amount of old code that uses preprocessor
tricks to cater to pre-ANSI and ANSI compilers; there's also a tool
called "ansi2knr" that translates code using prototypes to code using
the old-style definitions. ("knr" refers to K&R, Kernighan &
Ritchie's _The C Programming Language_. The first edition describes
the pre-ANSI version of the language. The second edition describes
the newer language defined by the ANSI standard.)
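
One common shape of the preprocessor trick mentioned above looks roughly
like this (PARAMS is just a conventional macro name seen in old codebases,
not anything standard); the same declaration then reads as a prototype
under an ANSI compiler and as an old-style declaration under a pre-ANSI
one:

#ifdef __STDC__
#define PARAMS(arglist) arglist
#else
#define PARAMS(arglist) ()
#endif

struct some_struct;     /* forward declaration, or the full struct */

int func PARAMS((char *string, int number, struct some_struct structure));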

The 1989 ANSI standard (or the equivalent 1990 ISO C standard) has
caught on almost universally. I'm sure there are still pre-ANSI
compilers in use somewhere, but I don't think they exist on any of the
systems I currently use. There's no longer any need to use old-style
function definitions unless you have a specific requirement to support
an ancient system.

But the old-style definitions are still supported by the newer
standards for backward compatibility.

The newer 1999 ISO C standard has not caught on as quickly, probably
because it's less of an improvement over the previous standard than
the 1989 ANSI C standard was over the (non-)standard that it replaced.
 
Mark Brader

In the short time I have spent reading this newsgroup, I have seen this
sort of declaration a few times:

(There should be semicolons after each of the three declarations.)
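
Written out in full with the semicolons (and with a struct and a body
invented here just so it compiles), the old-style definition would look
like this:

struct some_struct { int x; };

int
func(string, number, structure)
char *string;                       /* each parameter declaration */
int number;                         /* ends with a semicolon      */
struct some_struct structure;
{
    (void)string;                   /* unused in this sketch */
    return number + structure.x;
}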
Now, I am vaguely familiar with it; I did not read up on it because I
have read in an apparently misinformed book that such declarations were
very old and that no one used them any more. Can someone please explain
why they are still using this style and why the book said it was
obsolete?

There is no good reason to use the old style in new code today.
Is what I have written any different than:

Yes, there is a difference: when you use the modern "prototype"
syntax, the compiler does automatic type conversions on each argument
the same way as it does in an ordinary assignment (=) expression.
With the old syntax, the types have to match or the behavior is
undefined. So these lines:

answer = func("hello", 3.1414, strucked);
answer = func("hello", 3, strucked);

are equivalent if func() was defined using a prototype, because the
double constant 3.1414 and the long constant 3L are safely converted
to int. (If func() was
defined in a separate file, of course, you also need a prototyped
declaration in scope when you call it.)
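
Such a declaration might look like this in a header that callers include
(the header name and struct contents here are only for illustration):

/* func.h -- gives callers a prototype for func() */
struct some_struct { int x; };

int func(char *string, int number, struct some_struct structure);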

If you used the old syntax, only the third form would work safely.
The others would cause undefined behavior -- in practice, what's
likely to happen is at least that some bits from the floating-point
value will be misinterpreted as an integer, and other arguments may
be misread as well. Similar issues arise if you use a long int
argument (3L) and int and long int are different sizes.

There are some other subtle differences relating to certain specific
types of arguments such as "float" and "short".
--
Mark Brader "If you design for compatibility with a
Toronto donkey cart, what you get is a donkey cart."
(e-mail address removed) -- ?, quoted by Henry Spencer

My text in this article is in the public domain.
 
dndfan

Mark said:
(There should be semicolons after each of the three declarations.)

Sorry about that. And sorry for using incorrect terminology.
There is no good reason to use the old style in new code today.


(snip explanation of the difference between prototype and old-style calls)

Thanks to everyone for the valuable advice and insight.
 
Dave Thompson

On 5 Feb 2006 15:28:27 -0800, in comp.lang.c, (e-mail address removed)
wrote:
Is [old-style function definition] any different than:
int
func (char* string, int number, struct some_struct structure) ?

Only slightly - this ISO/ANSI definition is also a prototype, and
gives the compiler more ability to check types I believe.

Almost. A prototype definition is also a (prototype) declaration and
the compiler _must_ diagnose any mismatch with calls made in the scope
of that declaration (which is the rest of the translation unit, i.e.,
intramodule calls) _and_ any mismatch with another/prior prototype
declaration (such as an interface in an #include'd .h file).

A nonprototype definition is also a nonprototype declaration and does
not require such checking*; but the definition does provide type
information that the compiler and/or linker _can_ check if they wish.
* If there is _another_ prototype declaration in scope _that_
declaration does require checking of calls -- but not matching with
the definition. This was (is?) a handy transition technique, because
it is easy to macroize a declaration to be either prototype or
oldstyle, but much harder to do this with the definition.
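
A rough sketch of that transition arrangement (struct contents and body
invented for illustration): the prototype declaration gets the calls
checked, while the definition itself stays old-style:

struct some_struct { int x; };

/* prototype declaration: calls in the rest of this file are
   checked against it */
int func(char *string, int number, struct some_struct structure);

/* old-style definition: the compiler need not match it against
   the prototype above */
int
func(string, number, structure)
char *string;
int number;
struct some_struct structure;
{
    (void)string;
    return number + structure.x;
}

int main(void)
{
    struct some_struct s = {2};
    return func("hi", 1, s);        /* checked against the prototype */
}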

There is also a subtle difference for some parameter types, though not
the ones in the OP's case. If a parameter in a K&R1 definition is a
float or an integer type narrower (lower rank) than int, it is
actually passed as double or (signed or unsigned) int respectively
and then 'narrowed' back to the declared type in the called body.
Thus to write a prototype declaration, and make prototyped calls, to a
K&R1-defined function with such parameter types, the prototype must
use the widened types.
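
For example (names invented for illustration): a K&R1 definition that
declares a float and a short actually receives a double and an int, so
the prototype that callers use has to name the widened types:

/* old-style definition: scale arrives as double and count as int,
   then each is converted back to the declared type inside the body */
double
scale_by(scale, count)
float scale;
short count;
{
    return scale * count;
}

/* the prototype for callers (e.g. in a header) must use the
   promoted types: */
double scale_by(double scale, int count);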

- David.Thompson1 at worldnet.att.net
 
