A scope problem??

Flash Gordon

Andy said:
I am a little confused...

For instance:

------------
* option 1 *
------------

---- type.h ----
typedef struct node{
int x;
int y;
}NODE;


---- foo.h ----
#include "type.h"

NODE foo(); /* need to see the definition of NODE here */


---- foo.c ----
#include "foo.h"

NODE foo(void){ NODE n = {0}; return n; } /* NODE is already visible via foo.h */

Personally, I prefer "option 1", since then I do not need to worry about
the order of the #includes. However, "option 1" could lead to the problem
I had before. Probably I should avoid looping #includes...

type.h does *not* include foo.h, so it does *not* lead to problems (once
include guards are added). You only hit a problem if you are foolish
enough to then add a direct (or indirect) include of foo.h to type.h,
and that will only happen if you don't design things properly.
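
For reference, here is "option 1" with include guards added; the guard
macro names below are just one common convention:

---- type.h ----
#ifndef TYPE_H_
#define TYPE_H_

typedef struct node{
int x;
int y;
}NODE;

#endif /* TYPE_H_ */

---- foo.h ----
#ifndef FOO_H_
#define FOO_H_

#include "type.h" /* safe: type.h never includes foo.h back */

NODE foo();

#endif /* FOO_H_ */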
 
Guest

I figured out the reason why I made this kind of mistake. Basically it
comes down to a bad coding habit.
Usually, I put the type definitions in header files, and then any other
file that wants to use these type definitions just #includes them. Done
this way, a file that uses a type definition may end up included before
the definition itself, so you get some weird error messages...
A good way (at least I think) to debug this error is to use "gcc -E
*.c"; then you can check the preprocessed output very easily :)
For instance,
----A.h-----
#ifndef A_H_
#define A_H_
#include "C.h"
typedef struct{
    int x;
    int y;
}NODE;
void bar();
#endif
----A.c----
#include "A.h"
void bar(){
    foo();
}
----C.h------
#ifndef C_H_
#define C_H_
#include "A.h"
Node foo(); /* need to use type NODE defined in A.h */
#endif
----C.c----
#include "C.h"
Node foo(){}
----main.c----
#include "A.h"
int main(){
    bar();
    return 1;
}
If you try to compile this code, you will get error messages like:
-----------------------------------
gcc -O2 -g -Wall -fmessage-length=0   -c -o main.o main.c
In file included from A.h:6,
                 from main.c:3:
C.h:6: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before
‘foo’
make: *** [main.o] Error 1
-----------------------------------
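
As a rough illustration of the "gcc -E" tip above, preprocessing main.c
shows where things go wrong (line markers trimmed, and exact output
varies by compiler version):

gcc -E main.c
...
Node foo(); /* expanded from C.h *before* the typedef below */
typedef struct{
    int x;
    int y;
}NODE;
void bar();
int main(){
    bar();
    return 1;
}

Because A_H_ is already defined when C.h tries to re-include A.h, the
declaration of foo() comes through before NODE exists (and "Node" is
misspelled besides).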
Is there some good way to avoid this bad practice? Currently, I just
put these definitions into type.h, which is shared as a common header...
Fundamentally, the problem is that A.h and C.h include each other.
Having a loop like this is never going to work; it's just that your
include guards make it fail in a slightly more mysterious way.

Note well: include guards DO NOT fix this problem.

In this case, there is no need for A.h to include C.h, so you could just
remove that #include and everything would be fine (also correct the
spelling of Node/NODE to be consistent).  If A.h really does need some things that
are in C.h, they should be broken out into a third header file which can
be included by both A.h and C.h.  If the dependencies are more
complicated, more files might be needed, but there should always be a
way to resolve it without loops.
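
A minimal sketch of that fix, with the shared type broken out into a
third header (the name node.h is just illustrative):

----node.h----
#ifndef NODE_H_
#define NODE_H_
typedef struct{
    int x;
    int y;
}NODE;
#endif
----A.h----
#ifndef A_H_
#define A_H_
#include "node.h"
void bar();
#endif
----C.h----
#ifndef C_H_
#define C_H_
#include "node.h" /* no loop: node.h includes nothing back */
NODE foo(); /* spelling fixed to match the typedef */
#endif

A.c should then #include "C.h" as well, since bar() calls foo().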
agreed
I see. I think I learned somewhere that "looping #includes do not
matter as long as you guard them using #ifndef".

This is wrong. Unlearn it.

I can't see why you would do this. Include guards guard against
multiple inclusion. The only protection against looping is careful
design.


Not sure what you mean.

I'd recommend keeping the guards, but removing all #includes from
all .h files.

No! I prefer it if all include files compile standalone. On one project
you couldn't check a header file into configuration control unless this
was so. It's a nightmare if they don't. The only book I've seen that
discusses this sort of stuff is Lakos's "Large-Scale C++ Software
Design". Yes it's C++, but much of it is still interesting.
If a header file requires definitions from another
header, it can have a brief comment to that effect.

No. Really NO. This involves you having to "open the box" to find
out how to use something. If a header file changes its dependencies,
you have to change every file that uses it! This might involve
*thousands* of files!
This forces
all the inclusions to take place in the .c file where they exist
on the same level and can be easily rearranged to follow the
dependencies. The circularity is relieved by the linear nature of
the text file: one of 'em's got to be first.

I mentioned I didn't agree, didn't I?

--
Nick Keighley

The world you perceive is a drastically simplified model of the real
world (Herbert Simon)
 
luserXtrog

luserXtrog said:
[...]
I'd recommend keeping the guards, but removing all #includes from
all .h files. If a header file requires definitions from another
header, it can have a brief comment to that effect. This forces
all the inclusions to take place in the .c file where they exist
on the same level and can be easily rearranged to follow the
dependencies. The circularity is relieved by the linear nature of
the text file: one of 'em's got to be first.
I disagree.  There's nothing wrong with having #includes in header
files.  If I need the functionality defined in foo.h, and foo.h needs
the functionality defined in btfsplk.h, it's foo.h's job to #include
"btfsplk.h" for itself.  If a later version of foo.h also needs
functionality defined in rxuqnxbh.h, I shouldn't have to change my
code.
There is at least one thing wrong with it. The very issue of this
thread.

Each header should, ideally at least, define a coherent interface to
be used by client code.  The header should, IMNSHO, do whatever it
needs to do to provide that interface in a usable form.  If I need to
use an interface defined by "foo.h", providing
    #include "foo.h"
should be sufficient (along with whatever linker option I need to load
the corresponding library).  As I said, if foo.h depends on btfsplk.h,
it's not the client's responsibility to write
    #include "btfsplk.h"
any more than it's the client's responsibility to write
    typedef struct foo { /* ... */ };

Of course, if the client code directly uses features defined in
btfsplk.h, it should include that header.
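
A sketch of the style being argued for, reusing the placeholder names
from the example above:

---- btfsplk.h ----
#ifndef BTFSPLK_H
#define BTFSPLK_H
struct btfsplk { int n; };
#endif

---- foo.h ----
#ifndef FOO_H
#define FOO_H
#include "btfsplk.h" /* foo.h satisfies its own dependency */
struct btfsplk make_foo(void);
#endif

---- client.c ----
#include "foo.h" /* sufficient by itself; the client never names btfsplk.h */

If foo.h later grows a dependency on rxuqnxbh.h, only foo.h changes;
client.c compiles untouched.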
So library headers are considered a good model for writing new code?
Isn't one of the commandments about this?

If you're referring to <http://www.lysator.liu.se/c/ten-commandments.html>,
no, it isn't.

I misremembered. It was Rob Pike's rule.
And why shouldn't library headers be considered a good model for
writing new code?  It's perfectly reasonable, and in fact probably
necessary, for a particular stdio.h to depend on system-specific
headers.  Code that uses stdio.h shouldn't need to be aware of those
system-specific headers.  Why shouldn't the same principles apply to
foo.h?

Are you recommending write-only code?
 
luserXtrog

I'm not familiar with that one.  Can you provide a citation?
Certainly.
http://www.lysator.liu.se/c/pikestyle.html



No, of course not.

Well, in this case, there do not appear to be thousands of source
files; nor do there appear to be any system-specifics. I think the
mechanism was getting in the way of understanding the problem, and
removal would lead to the desired correction.

I was not wholly endorsing the radical view that included files should
never themselves include other files. But as Rob Pike appears to hold
that view, it cannot be entirely indefensible.
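
For concreteness, a sketch of the comment-based convention being
defended here (the file names are invented for illustration):

---- vec.h ----
/* requires <stddef.h> (for size_t); include it first */
typedef struct { double *p; size_t n; } vec;
vec vec_make(size_t n);

---- main.c ----
#include <stddef.h> /* every include lives here, ordered by dependency */
#include "vec.h"

The headers stay flat; each .c file owns the include order.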
 
Keith Thompson

luserXtrog said:

Ok, thanks.

I know that disagreeing with someone like Rob Pike leads to a greater
than usual risk of being wrong, but I'm going to take the risk.
I disagree with this rule, quite strongly.

Here's what he wrote:

Simple rule: include files should never include include files.
If instead they state (in comments or implicitly) what files
they need to have included first, the problem of deciding
which files to include is pushed to the user (programmer)
but in a way that's easy to handle and that, by construction,
avoids multiple inclusions. Multiple inclusions are a bane
of systems programming. It's not rare to have files included
five or more times to compile a single C source file. The Unix
/usr/include/sys stuff is terrible this way.

There's a little dance involving #ifdef's that can prevent a file
being read twice, but it's usually done wrong in practice - the
#ifdef's are in the file itself, not the file that includes it.
The result is often thousands of needless lines of code passing
through the lexical analyzer, which is (in good compilers)
the most expensive phase.

Just follow the simple rule.

Note that the web page is dated February 21, 1989 (it's safe to assume
it wasn't originally a web page). Two decades later, compiler speed
is less of an issue. If making the compiler's preprocessor go through
a few thousand lines of code means I can write my code in a more
maintainable manner, it's well worth it.

As for putting the #ifdefs in the including file rather than in the
included file, I disagree with that as well. The #ifdef trick
requires inventing a unique macro name for each include file. Using a
consistent convention is impractical when include files come from
multiple sources.  If I need a #include "foo.h", don't make me
remember whether I have to write #ifdef FOO_H or #ifdef H_FOO
(or #ifdef __FOO, and remind myself to complain to the author about
using a reserved identifier).
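
The two placements being contrasted look like this; FOO_H stands for
whatever name the header's author happened to pick, which is exactly
the problem:

/* internal guard: lives inside foo.h itself */
#ifndef FOO_H
#define FOO_H
/* ... declarations ... */
#endif

/* external guard (the variant Pike describes): every includer
   must know and spell foo.h's macro */
#ifndef FOO_H
#include "foo.h"
#endif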

Now if rearranging the way headers are included could reduce the
time it takes to compile a large project from, say, 2 hours to 1
hour, it would be worth looking into -- but only if I can be sure it
won't hurt maintainability and cost me far more time and frustration
than it saves me. An hour spent waiting for a long compilation
to finish (letting me do other constructive work in the meantime) is
much better than an hour or more spent modifying every file that
depends on foo.h because one of its internal dependencies changed,
and another hour or two pulling out my hair because I made a subtle
error on one of the changes.

It should be possible for a preprocessor to recognize the #ifdef
trick, remember that it's already seen a header, and take a shortcut
that avoids re-scanning the entire file (*if* that's going to save
enough time to be worthwhile).

It's the compiler's job to make my job easier, not vice versa.

I wonder if Rob Pike still holds the same opinion.
 
Guest

luserXtrog wrote:


I disagree most strenuously. Every header file should #include
everything it needs to be a complete (though empty) translation unit
on its own - anything less than that just produces maintenance
nightmares in the long run.

In my own group, I insist that every header file must pass the
following test: create a source code file that does nothing but
#include the header file twice, and define a global int variable whose
name doesn't conflict with anything in the header file (to shut off
warnings about an empty file). That source code file should compile
without any important warnings or error messages, when the compiler is
set to its pickiest settings. This isn't the only requirement I
insist on for header files, it's just one of the simplest to verify.
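
A sketch of such a test file for a header foo.h (the file and variable
names here are arbitrary):

/* test_foo_h.c: foo.h must survive double inclusion and compile cleanly */
#include "foo.h"
#include "foo.h"
int test_foo_h_dummy; /* silences warnings about an empty translation unit */

compiled with something like "gcc -std=c99 -Wall -Wextra -pedantic -c
test_foo_h.c".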

or just make foo.h the first header in foo.c (including <> headers)
 
James Kuyper

or just make foo.h the first header in foo.c (including <> headers)

The test is not only of completeness, but also of protection against
double inclusion. I suppose I could #include foo.h twice at the top of
foo.c; I never thought of that before - but I'm not happy about the idea
of putting something like that in our delivered code. It looks like a
stupid mistake, rather than a clever header-testing technique.
 
