Meta-C question about header order

Kaz Kylheku

Allowing people to focus their limited resources on the important issues
instead of distracting them with irrelevant detail results, in my
opinion, in a greater chance of those "polished gems of engineering by
people who take pride" being produced.

Which is why I'm not arguing that this is the (or that there is a) One True Way
to structure C sources.

However, I have found this old-school approach to be very nice and workable
on a recent project of mine; none of the FUD against it is proving to be
a generalization. (FWIW, I have much more experience with the "every header
is guarded, and includes what it needs" approach.)

Anyway, in real engineering, there is more global awareness of changes
to a design. Maybe software is the way it is because everyone expects not
to be distracted with irrelevant details such as "how the hell does this
thing actually work as a coherent whole".

In electronics, you would never get away with bullshit like "I'm just going to
throw a little sub-circuit into this schematic and not care about anything
else, since the impedances obviously are such that it has no impact". (Let
someone else worry about board area and layout, noise and crosstalk issues,
emission, added power consumption and heat dissipation, etc).

It's not easy enough for some people that we can just type our product
from a keyboard and have a toolchain convert it into the final running
image. It additionally has to be possible to make changes without knowing
a whole lot of pesky context. Because our time is so expensive, and all.

Maybe this is why companies sometimes spend hundreds of thousands developing
a beautifully functioning device, and then it goes to hell
because the task of making drivers was given to some yahoos as an afterthought,
and the end product is a flop that blue-screens everyone's PC.
 
Keith Thompson

Sometimes, people go out of their way to please their lint.
When they have a lint that warns on multiple includes of the
same file, they deem that to be »dirty«. If they used a
lint that warned on the lack of include guards, they
would instead deem this to be »dirty«.

Is there a version of lint that warns about multiple includes of the
same file?
To please their lint, some - otherwise perfectly sane -
people write

if(( buf = malloc( bufsiz )))

instead of

if( buf = malloc( bufsiz ))

and then call this »good style«.

And I'd tend to agree with those perfectly sane people. The extra
parentheses make it more obvious that the "=" was meant to be an
assignment, not an "==" comparison.

Though I'd write:

if ((buf = malloc(bufsiz)) != NULL)

myself.
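Keith's point can be seen in a compilable sketch; `alloc_buffer` and the size used in the test are made-up names for illustration, not anything from the thread:

```c
#include <stdlib.h>

/* alloc_buffer is an illustrative helper: it allocates bufsiz bytes
   and returns the pointer, or NULL on failure. */
char *alloc_buffer(size_t bufsiz)
{
    char *buf;

    /* The extra parentheses, plus the explicit != NULL, make it obvious
       to both the reader and the compiler that the assignment inside
       the condition is intentional, not a mistyped "==". */
    if ((buf = malloc(bufsiz)) != NULL)
        return buf;

    return NULL;
}
```

With a bare `if (buf = malloc(bufsiz))`, gcc and clang under -Wall suggest "parentheses around assignment used as truth value"; either the doubled parentheses or the comparison form keeps that warning quiet.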
 
Ian Collins

Jens said:
#
# Have you ever written or had to maintain cross platform software?

While at Sun Microsystems, I maintained VirtualBox. Does that qualify?

Undoubtedly! What rule does that project follow?
....
# Probably quite a few of us have spent many decades developing and
# maintaining a variety of code bases. Sure there was a time when opening
# and reading headers took seconds rather than the micro-seconds it takes
# today.
# These days developer time is way more valuable than machine
# time.

The whole idea is not about editor loading time. I'd say it's not even
primarily about minimizing compile time, though that's a welcome
and free side effect. For me the biggest benefit is that it makes
refactoring opportunities, interface accumulation, and code collecting
fat easier to spot.

I guess it really is little more than a matter of taste and preferred
working practices. I can't really see how it improves refactoring
opportunities, but I'm open to suggestions given I spend as much time
refactoring as writing new code.
To all the people who think the "headers don't include headers" rule
is a folly, honestly, did you actually try it for a non-trivial code
base or are you possibly prejudiced? I encourage you to actually try
it. You might be in for a surprise. Granted, converting an existing
project is like making a pig fly. But with enough thrust...

I have and I found it painful. It probably doesn't suit the type of
work I do where the project requirements tend to be extremely vague and
the design evolves over time. I can see it working much better when
coding to an existing design.
 
Les Cargill

Kaz said:
Which is why I'm not arguing that this is the (or that there is a) One True Way
to structure C sources.

However, I have found this old-school approach to be very nice and workable
on a recent project of mine; none of the FUD against it is proving to be
a generalization. (FWIW, I have much more experience with the "every header
is guarded, and includes what it needs" approach.)

Anyway, in real engineering, there is more global awareness of changes
to a design. Maybe software is the way it is because everyone expects not
to be distracted with irrelevant details such as "how the hell does this
thing actually work as a coherent whole".

In electronics, you would never get away with bullshit like "I'm just going to
throw a little sub-circuit into this schematic and not care about anything
else, since the impedances obviously are such that it has no impact". (Let
someone else worry about board area and layout, noise and crosstalk issues,
emission, added power consumption and heat dissipation, etc).

Yep, yep and YUP!
It's not easy enough for some people that we can just type our product
from a keyboard and have a toolchain convert it into the final running
image. It additionally has to be possible to make changes without knowing
a whole lot of pesky context. Because our time is so expensive, and all.


There are other, more complex reasons for this. Time spent actually
coding is, what, 5% of total cost?

I do software much more than hardware, but I've never understood this
drive in these industries. It's verification that's hard; why pretend it
doesn't exist?
Maybe this is why companies sometimes spend hundreds of thousands developing
a beautifully functioning device, and then it goes to hell
because the task of making drivers was given to some yahoos as an afterthought,
and the end product is a flop that blue-screens everyone's PC.


Yep. Although some of that is that drivers come later,
and later is always on the short end of the stick.
 
Kaz Kylheku

Yep. Although some of that is that drivers come later,
and later is always on the short end of the stick.

Yes; if only one phase of a multi-stage development process blows past some
deadline, it is necessarily the chronologically last one.
 
BartC

Kaz Kylheku said:
The theme we are seeing from the naysayers is developers shouldn't have to
know this, or shouldn't care about that, and in general should be able to
focus on a small part of the program through a narrow peep-hole in order to
make an intended change while learning as little as possible about the
program.

That may be very well and fine in most of the industry, but in certain
programs that are developed to be polished gems of engineering by people
who take pride, this idea of "know as little as possible" is a poor
ideological fit.

You take that thinking a bit further, then it is better to have larger
numbers of much smaller include files that each define only one thing.
Because after all you can't have a single include file providing a
'peephole' into the 10 or 100 functions it might define.

Take it one step further, then perhaps you can dispense with include files
altogether; just discretely define each constant, variable, macro, typedef
and function that is referenced not only directly by this module, but
also indirectly.

Which means any program along these lines:

#include <windows.h>

int main(void) {
    MessageBox(0, "Hello World", "", 0);
}

would need to have at least 25,000 lines of preamble before you even get to
your own code.

Actually, that's not quite true. Maybe the programmer can use his/her
knowledge of the inner workings of windows.h to pick up only the 5,091
scattered lines that are actually necessary in any specific module. Of
course, a few edits later, it might need to be 6,620 lines, and the next
day, only 4,378 lines. With luck he/she might have a few minutes spare each
day to actually work on their code!

Or maybe one can choose the sensible approach and just use these headers as
they were intended.
 
Malcolm McLean

You take that thinking a bit further, then it is better to have larger
numbers of much smaller include files that each define only one thing.
Because after all you can't have a single include file providing a
'peephole' into the 10 or 100 functions it might define.

Take it one step further, then perhaps you can dispense with include files
altogether; just discretely define each constant, variable, macro, typedef
and function that is referenced not only directly by this module, but
also indirectly.

Which means any program along these lines:

#include <windows.h>

int main(void) {
    MessageBox(0, "Hello World", "", 0);
}

would need to have at least 25,000 lines of preamble before you even get to
your own code.
I had exactly that issue with Baby X.

There's a message box function. But all it actually needs to expose in its
public interface is the connection to the opaque BabyX system, and the
message box function itself, plus a few flags for options.

So it's a choice whether to wrap everything up into one big include, or
require users to #include headers for each component separately. Internally,
of course, the message box makes a lot of calls to the Baby X system; it
needs buttons and labels and modal access. So should there be private
components?

In fact I went for putting everything into one big header which is included
by all files.
 
BartC

Malcolm McLean said:
On Tuesday, April 15, 2014 9:51:42 AM UTC+1, Bart wrote:

I had exactly that issue with Baby X.

There's a message box function. But all it actually needs to expose in its
public interface is the connection to the opaque BabyX system, and the
message box function itself, plus a few flags for options.

So it's a choice whether to wrap everything up into one big include, or
require users to #include headers for each component separately. Internally,
of course, the message box makes a lot of calls to the Baby X system; it
needs buttons and labels and modal access. So should there be private
components?

In fact I went for putting everything into one big header which is included
by all files.

Which is the sensible approach I suggested. And I understand with Baby X,
then it might work on top of X Windows, or on top of Win32, or perhaps
something else altogether.

Those dependencies /do not belong/ in the user's code. The OP's entire
approach seems to be along the wrong lines. Maybe there are issues with too
many include files and too many poorly constructed ones, but I think the
solutions lie elsewhere.
 
Ken Brody

That strikes me as a stupid^H^H^H^H^H^H suboptimal rule.
Agreed.

You'll have headers that depend on other headers, but without that
relationship being expressed in the source.

Then require that every header which requires some other header to be
included first document such requirements at the top. Perhaps make up a
"#pragma" line (which must be ignored by the compiler if not recognized)
that tells the reader about this:

#pragma RequiresInclude(header.h)

Or (since the "rules" require that you must be in control of these headers,
anyway) start each header with a #define that tells that the header has been
included. For example:

#define __INCLUDED_HEADER_H

(Yes, yes, I know about the starting underscore.)

Then, for any header which requires it:

#ifndef __INCLUDED_HEADER_H
#error Sorry, but you need to include HEADER.H first.
#endif
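Assembled into one compilable sketch, Ken's second suggestion looks like this. The file boundaries are shown as comments, and all names here - `point`, `make_point`, `INCLUDED_HEADER_H` - are illustrative; the leading underscores are dropped because such identifiers are reserved for the implementation, as Ken's own aside acknowledges:

```c
/* ---- header.h (illustrative contents) ---- */
#define INCLUDED_HEADER_H          /* announces "header.h has been seen" */
typedef struct { int x, y; } point;

/* ---- dependent.h (illustrative contents) ---- */
#ifndef INCLUDED_HEADER_H
#error Sorry, but you need to include header.h first.
#endif
point make_point(int x, int y);    /* relies on a type from header.h */

/* ---- dependent.c (illustrative contents) ---- */
point make_point(int x, int y)
{
    point p = { x, y };
    return p;
}
```

Include dependent.h without header.h and the preprocessor stops at the #error line with a readable message, which is a far clearer diagnostic than a cascade of syntax errors about the unknown `point` type.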

[...]
Can you think of a lightweight way to solve this? Maybe using perl, the
unix tool box, make, gmake, gcc, a C lexer?

So you need to build a set of tools that would be unnecessary if you
were allowed to have #include directives in headers.

A not quite serious suggestion: Cheat.
:)

[...]
A more realistic method: Rather than having #include directives in
headers, add comments that specify the dependencies, and write a tool
that uses those comments.

Agreed. That was sort of my thinking with the "#pragma" method above.
Better yet: Drop the rule and use #include directives as needed.

I'm curious how this fits with implementation header files which #include
others?

I'm also curious as to the "logic" (assuming there is any) behind such a "rule"?
 
Ken Brody

I used to think so fresh out of school.

But actually, this style is superior because it leads to much cleaner code
organization and faster compilation. It keeps everything "tight".

I disagree there. Yes, perhaps including <my_entire_library_header.h> might
not be the best idea if you want a "tight" compile. (Though I see no
problem having such a header, and allowing the user to decide.) However,
there's no reason you can't break down headers into functional groupings
(which, admittedly, might contain some things you don't use), and which in
turn #include those other headers that are needed.

Why should you require that every source module which uses your library know
which other parts of your library are required? And, should an update to
your library mean that one of the headers now depends on an additional
header, why should the user have to go through every source module and add it?
I chose this approach in the TXR project, and am very pleased with it.
I understand now why some people recommend it.

If it works for you, go for it.

[...]
Also, the dependencies are easy to understand. If you look at the
list of #include directives at the top of a .c file, those are
the files which, if they are touched, will trigger a re-compile
of this file. And no others!

It's easy to generate a dependency makefile. If we have a foo.c
with these contents:

#include <stdio.h>
#include "a.h"
#include "b.h"
#include "foo.h"

then the dependency rule is precisely this:

foo.o: foo.c a.h b.h foo.h

My complaint, as noted above, is what happens if "foo.h" version 2.0 requires
"c.h" as well?
Done! At a glance we know all the dependencies. Our regular confrontation with
these dependencies in every source file prevents us from screwing up the
program with a spaghetti of creeping dependencies.

Honestly, I think it's just the opposite. When you add a dependency, "we"
need to now update every module that used the now-dependent header. The
"creeping dependencies" will happen as a project/library grows. It's just a
matter of whether "we" need to know about it, or if the implementation "just
does it".
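Ken's "c.h in version 2.0" complaint is the standard argument for generating dependency lists mechanically rather than by hand. gcc's -MM option scans the #include lines and emits exactly the rule Kaz wrote out; a typical make fragment wiring it up (a sketch - the variable and file names are illustrative) looks like:

```make
# For each foo.c, write a foo.d containing "foo.o: foo.c a.h b.h foo.h".
# When version 2.0 of foo.h pulls in c.h, the next build regenerates the
# rule automatically; nobody edits dependency lists by hand.
%.d: %.c
	$(CC) -MM -MF $@ $<

SRCS = foo.c bar.c
-include $(SRCS:.c=.d)
```

This works for either camp: with "headers don't include headers" the .c file lists everything anyway, and with nested includes -MM follows the nesting for you.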

[...]
 
Ken Brody

Kaz Kylheku wrote: [...]
The compiler diagnostics are simpler. None of this:

"Syntax error in line 32 of X
included from line 42 of Y,
included from line 15 of Z ..."

Also, the dependencies are easy to understand. If you look at the
list of #include directives at the top of a .c file, those are
the files which, if they are touched, will trigger a re-compile
of this file. And no others!

And?

And, I like it; I think it is beautiful to have an explicit view of the
dependencies laid out in the code.

Another benefit: no ugly #ifndef SYMBOL / #define SYMBOL ... #endif crap
in all the header files! Just a comment block and the definitions!

I assume that the comment block will explicitly state "you need to include
the following headers, in this order"?

[...]
Of course a library is a unit, and it should ideally provide one simple header
(or one for each major feature area).

Well, the OP said that that option was off the table. Unless, of course,
the "one simple header" was the only header used. (ie: it doesn't #include
any other headers.) However, unless it's a trivial library, it would no
longer qualify as a "simple" header.
If a library has some base definitions that are used by several features, then
I'd probably want to have a header for those base definitions which must be
included before the main features.

But all these headers can, internally, include the detailed internal headers:
all of the needed ones, in the correct order (which do not include other
headers).

Not according to the OP. Headers cannot be nested. Period.

[...]
 
Ken Brody

My last point was that in practice where the includes are included makes no
real difference to the build time.


I must admit I do miss being able to do crosswords during builds :)


Yes, but if there are platform or other outside dependencies which govern
which headers a particular configuration requires, they have to be written
out in each source file. If code were write-once, change-never, this
wouldn't be a problem. But it isn't (except for Perl, which we all know is
a write-only language).


OK, "the build system" will do this for you!


No argument there then.
 
Ken Brody

Kaz Kylheku wrote: [...]
The faster the machines get, the less I tolerate response and turnaround
time.

I must admit I do miss being able to do crosswords during builds :)

:)

"Way back when", a simple "make $module_name" would take 20 minutes for just
the *link* phase. (And we're talking about a binary of only a few hundred K
back then.) Now, "make clean all" takes less than 5 minutes for the entire
package, including several libraries and over a dozen modules.

[...]
 
Malcolm McLean

On 4/13/2014 1:02 AM, Kaz Kylheku wrote:


Well, the OP said that that option was off the table. Unless, of course,
the "one simple header" was the only header used. (ie: it doesn't #include
any other headers.) However, unless it's a trivial library, it would no
longer qualify as a "simple" header.
The complexity of the interface has little to do with the complexity of
the underlying code. Some libraries, like GUI libraries, expose a lot of
functions, most of which just set parameters in the GUI object and redraw
it. Others expose only a few functions, but ones which are very difficult
to write and have lots of subroutines. E.g., a grammar checker fundamentally
just takes a string and returns flags for words which aren't grammatical
English. But there's a lot of complexity in getting that result.
 
Kaz Kylheku

I assume that the comment block will explicitly state "you need to include
the following headers, in this order"?

No; the comment block gives the all important copyright notice and license. :)
 
Malcolm McLean

Which is the sensible approach I suggested. And I understand with Baby X,
then it might work on top of X Windows, or on top of Win32, or perhaps
something else altogether.

Those dependencies /do not belong/ in the user's code. The OP's entire
approach seems to be along the wrong lines. Maybe there are issues with too
many include files and too many poorly constructed ones, but I think the
solutions lie elsewhere.
Qt adopts the "include the class you need" approach. Of course it's a lot
bigger than Baby X, and it's C++ rather than C. I can't remember offhand
if Qt exposes header dependencies to the user.
 
Kaz Kylheku

Which is the sensible approach I suggested. And I understand with Baby X,
then it might work on top of X Windows, or on top of Win32, or perhaps
something else altogether.

That approach also supports "precompiled headers": a feature that some
compilers have. The included parts of a translation unit can be saved in a
"compiled" form for faster inclusion. If all translation units share the same
material (they all include the same common header) then it just has to be
lexically analyzed once; the binary form is then used by all source files.

The downside of the approach is that it doesn't express dependencies correctly.
Everything depends on every header file. Touch any header and you have to
recompile everything.

The gains obtained from a faster full rebuild will be erased by poor
incremental rebuild times, with interest.

In this regard, it is not sensible, except for very small programs.
Exact dependencies are superior (regardless of the specific approach).
 
Malcolm McLean

That approach also supports "precompiled headers": a feature that some
compilers have. The included parts of a translation unit can be saved in a
"compiled" form for faster inclusion. If all translation units share the same
material (they all include the same common header) then it just has to be
lexically analyzed once; the binary form is then used by all source files.
Unfortunately MS Visual Studio creates massive precompiled header files.
It's a real nuisance if you want to separate out your valuable work from
megabytes of binary junk that all computer systems generate. Then it
insists on adding stdafx.h to everything. So portable ANSI C source files
won't compile. And it doesn't save any meaningful time for a smallish
project.
 
Stefan Ram

Malcolm McLean said:
Unfortunately MS Visual Studio creates massive precompiled header files.

I think this is the default setting, so that it will not compile
standard C or C++ source code out of the box, IIRC. First
one has to change the compiler settings not to use precompiled
header files.
 
Jens Schweikhardt

Ian Collins wrote:
# Jens Schweikhardt wrote:
#> #
#> # Have you ever written or had to maintain cross platform software?
#>
#> While at Sun Microsystems, I maintained VirtualBox. Does that qualify?
#
# Undoubtedly! What rule does that project follow?

BYO - Bring your own :) Bring a libc. Bring a make. Bring a shell.
Bring half of the POSIX toolbox.

As for header inclusion, no explicit rule was stated. But it was (and
still is, AFAICT) not the "headers don't include headers" rule but the
approach preferred by most of the participants in this thread, basically
include whatever you need when you need it.

I really wonder what (big) projects would benefit from IWYU (which I
understand now to be not exactly equivalent to "headers don't include
headers"). Mozilla, I'm told, is experimenting with it. It could
possibly do wonders to {Libre,Open,Star,*}Office on the compile-time
front.


Regards,

Jens
 
