[...]
The errno construct is particularly bad ...
I didn't advocate using errno; that was another poster. I think
overloading errno for application error signalling is a poor plan.
And I specifically objected to the errno construct used by said other
poster. And then you objected to my objection ... I think we are getting
dragged into an unnecessary discussion about a style issue, and there is
no enlightenment whatsoever for anyone following this thread because
it's just too pointless (yes, I should have known better than to start
talking about a religious topic.)
[...]
Those who don't care about security are doomed to lose it. Remind me
never to use any software you've written.
I think you need to cool down a bit. Nothing in programming is either
good or bad; some things just suck less than others, and there are
always tradeoffs. I believe that being careful to pass valid arguments
and omitting the checks will pay off in the long run - through reduced
(source) code size and perhaps a little more speed, and in addition,
that the errno construct in particular is likely to mask bugs.
Furthermore, it does not even make sense to claim that such argument
validation buys any security in the presence of a bug, because what
``security'' means depends on the context of the application.
When writing a desktop or server application, an invalid pointer
argument will probably result in a crash, which is good in some ways
because it makes the bug obvious and helps you find and fix it.
On the other hand, if you're writing control software for a spaceship,
or anything else whose survival is ``mission-critical'', then you would
of course prefer sophisticated error detection and handling over a
crash. But then you probably wouldn't be using C either, and there would
be a
whole range of other potential problems to be handled. The choice of an
argument validation policy and how you can react to errors is not a
choice of the Dark Side vs. the Bright Side, but of whatever makes the
most sense in your circumstances.
Note that I'm not even arguing against argument validation in general,
I just prefer to omit it in cases like the one we are discussing, and I
think that the standard C library, including all of string.h, sets a
good example for library functions in general: if you pass an invalid
argument, you get undefined behavior.
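To make that concrete, here is a minimal sketch of the style I mean (the
function and its contract are invented for illustration): the comment
states the precondition, and the implementation simply relies on it,
exactly as strlen() and friends do:

#include <stddef.h>

/*
 * Counts the dots in a string.
 * Precondition: s points to a valid, nul-terminated string.
 * Behavior is undefined otherwise - just as with strlen().
 */
size_t count_dots(const char *s)
{
        size_t n = 0;

        while (*s != '\0') {
                if (*s == '.')
                        n++;
                s++;
        }
        return n;
}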
It does nothing "needlessly". The need is patently obvious from the
miserable security state of a vast number of C programs.
So your point is that there are many sloppy programmers who may use the
software, and that you would rather give them an error code than a
crash. This is fine with me, but it doesn't mean that coding with this
purpose in mind results in good software design, nor does it mean that
your choice of policy is better than ``mine'' (again, my comments were
restricted to a very specific case.) While there is a vast number of
broken C programs out there, there is also a vast number of well-working
(though certainly not completely bug-free) C programs that never pass
invalid arguments to library functions.
Do HP-UX, AIX and UnixWare with page zero mapped make your software
``more secure'' than, say, Linux or FreeBSD with page zero unmapped,
only because they will permit a program that incorrectly dereferences a
null pointer for reading to continue executing?
In the vast majority of cases, the increase in execution time and
program size you complain about are negligible.
Negligible but still unnecessary if the caller is written correctly. A
caller should take full responsibility for invoking the callee correctly.
This will reduce (source) code size and it may help you enforce a design
where the state of your objects is always well-known.
There are certainly times when this rule of thumb should be lifted
because an invalid invocation seems more likely, but low-level library
functions do not belong in this category, and bad null pointer
arguments should probably never be expected (though there are exceptions
where it makes sense to explicitly define, or overload, the meaning of a
null pointer - see free() and fflush() for examples).
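For the record, those two are cases where the standard itself defines
what a null pointer argument means, so the caller gets that behavior for
free:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        char *p = NULL;

        free(p);        /* defined: free(NULL) is a no-op */
        fflush(NULL);   /* defined: flushes all open output streams */
        return 0;
}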
but it also makes it harder to detect and handle the bug.
Nonsense. One possible consequence of undefined behavior is that
the program works correctly anyway, or appears to. [...]
The errno construct I objected to, and which triggered this pointless
subthread, requires explicit interaction from the caller. A dereferenced
null pointer will yield a crash on a vast number of implementations.
There are platforms where this is not the case, but perhaps that just
means that you should not be doing software development on them if you
can avoid them?
Don't get me wrong, I'm not saying ``let's trade program robustness for
lower code size and programming effort!'', but something more along the
lines of ``this simply does not affect robustness in a well-written
application, and even if the bug does occur at some point, it is
questionable whether the `defensive' approach actually saves the
day.''
Certainly, if the caller was written by someone incompetent. If
we're going to assume that, though, then your assumption that the
caller provided valid arguments looks a bit shaky, doesn't it?
When was the last time you actually adhered to what you are proposing
here? If you know what you're doing, then you also know with reasonable
certainty whether or not you are invoking a function correctly at any
point in time. Programming is not a lottery, and you always have to
make essential assumptions about the integrity of your program.
Many Unix functions set a wide variety of errno codes, but it is
impractical and nonsensical to test for all of them.
char buf[128];
int rc;
int fd;
/* ... */
rc = read(fd, buf, sizeof buf - 1);
if (rc == -1) {
        /*
         * Would you really test for:
         *   EBADF  (bad file descriptor)
         *   EFAULT (bad address)
         *   EINVAL (bad STREAM or multiplexer)
         * ... and a variety of other obscure
         * errors that you *know* cannot occur
         * in the particular program context?
         */
} else if (rc == 0) {
        /* EOF */
} else {
        buf[rc] = 0;
        /* Use buf */
}
If you ever find yourself writing something like
if (errno == EFAULT) {
.... then you should convince yourself that the pointer can *never* be
invalid and solve the actual problem, rather than coding around it.
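If you really want a safety net during development, an assertion at the
point where the precondition is established seems far more honest to me
than an after-the-fact errno check (the function below is made up purely
for illustration):

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Precondition: name points to a valid string. The assert documents
 * and enforces this during development, instead of trying to detect
 * a corrupted pointer after the call via errno. */
static size_t name_length(const char *name)
{
        assert(name != NULL);
        return strlen(name);
}

int main(void)
{
        printf("%lu\n", (unsigned long)name_length("example"));
        return 0;
}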
This is one of the silliest arguments I've heard in some time.
Programs do not "believe" anything. Programmers may believe they
have written correct code; they are often wrong.
See the example above.
This whole thing really boils down to the question of whether or not
compatibility with buggy code is desirable. Programmers may also be able
to get their implementation to let them dereference null pointers,
to emulate misaligned accesses, and to make string constants
writable. That doesn't mean that any development relying on these
features will result in stable and well-designed software.
the UNIX Operating System_, 6.4.2. The detection of invalid pointers
happens automatically by the memory management hardware during the
context switch process.
The kernel is also mapped into the process's address space, yet it
should not be accessed through a userland pointer, which is why address
translation does not suffice to ensure validity.
True, some OSes do make explicit checks (I see the Linux 2.4 kernel
does). Others do not. Your generalization was no more correct than
mine was, it appears.
I don't think I made any generalizations; while the pointer checks are
indeed for the most part covered by the hardware, all other arguments
still need to be checked explicitly. For example, an integer you pass to
the kernel may or may not be a valid file descriptor, but the kernel
cannot take its validity for granted.
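A deliberately simplified, user-space analogue of what I mean (the table,
its size and the functions are invented for illustration): an integer
handle says nothing about its own validity, so the callee has to
range-check it and look it up before use - the MMU cannot help with that:

#include <stddef.h>
#include <errno.h>

#define MAX_OBJECTS 64

static void *object_table[MAX_OBJECTS];

/* Returns the object for a handle, or NULL with errno set to EBADF if
 * the handle is out of range or unused - much like the kernel has to
 * validate a file descriptor it receives from userland. */
static void *lookup(int handle)
{
        if (handle < 0 || handle >= MAX_OBJECTS
            || object_table[handle] == NULL) {
                errno = EBADF;
                return NULL;
        }
        return object_table[handle];
}

int main(void)
{
        if (lookup(3) == NULL)  /* nothing stored yet, so this fails */
                return 1;
        return 0;
}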