Lawrence Kirby said:
On Thu, 03 Feb 2005 00:38:23 +0000, Keith Thompson wrote:
...
Ultimately everything that affects behaviour, including the operating
system, has to be considered part of the implementation.
Sure, but the visible interface has to conform to the requirements in
the standard, whether the underlying OS does or not.
Absolutely. I maintain, though, as I explained in another article, that the
standard does not prohibit memory overcommitment.
And that's the point of contention. Is there a DR on this topic? If
a committee response, or a future TC or version of the standard, says
that overcommitment is ok, I'll grit my teeth and accept it. But then
I wonder if there's any point in having malloc() ever return a null
pointer.
As long as the abort process doesn't generate anything that is considered
to be program output or a normal termination condition, it could. Of
course that may be unacceptable on QOI grounds, it depends on the
situation where the abort can happen. The standard has to be very loose in
this area to allow conforming implementations to exist at all.
I contend that the following program is strictly conforming (assuming
that any program that writes to stdout can be strictly conforming):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char *ptr = malloc(0);
    printf("Hello, world\n");
    return 0;
}
If it doesn't print "Hello, world" when I execute it, that's a bug in
the implementation. You seem to be arguing that if the statements that
follow the malloc(0) are *never* executed, that's acceptable. (If it
happens to fail because somebody pulled the power plug before it was
able to finish, that's a different story.)
Even if you do that, the OS can decide subsequently that it doesn't have
enough memory to go around and that yours is the program it is going to kill to
recover some. The kill action might be initiated by your program accessing
part of a malloc'd array that had been paged out. The point is that it
isn't always possible for a C compiler/library to work around what the OS
does.
But in this case, it's not only possible, it's easy. Regardless of
whether lazy allocation is conforming or not, an implementer who wants
to provide a non-lazy malloc() can do so.
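As a rough sketch of the user-level version of that idea (committed_malloc
and the fixed page size below are my own illustrative inventions, not
anything from the standard or from any real library): write to every page of
the block at allocation time, so that on an overcommitting OS the memory has
to be backed when the allocation call returns rather than at some arbitrary
later access. That only moves the point of failure, of course; turning a
later kill into a clean null return still requires cooperation from the OS.

#include <stdlib.h>

/* Illustrative only: assume a 4K page; a real allocator would ask the OS. */
#define PAGE_SIZE 4096

/* Touch every page of a freshly malloc'd block so that an overcommitting
   OS has to back it now rather than at some later access. */
void *committed_malloc(size_t size)
{
    unsigned char *p = malloc(size);
    if (p != NULL && size > 0) {
        size_t i;
        for (i = 0; i < size; i += PAGE_SIZE)
            p[i] = 0;
        p[size - 1] = 0;    /* make sure the last page is touched too */
    }
    return p;
}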
The requirement for malloc() is that it has to behave as the standard
requires, not as some OS routine happens to behave.
[...]
If I use an overcommitting system, I don't want C programs subverting that
to the detriment of non-C programs.
[...]
Is your concern that C programs using a non-overcommitting malloc()
would consume more resources, to the detriment of non-C programs
running simultaneously on the system? (When I first read that, I
thought you meant that non-C programs would be forced to do
non-overcommitting allocations, but I don't think that's what you
meant.)
In the abstract machine, yes, which is what the majority of the standard,
including this, describes (see 5.1.2.3p1). OTOH, show me the part
of the standard that says an actual implementation can't terminate the
execution of a strictly conforming program at any point it pleases for any
reason, IOW that the program will successfully execute to completion.
There is a requirement for (at least) one program as specified by 5.2.4.1.
But the "at least one program" wouldn't make sense there if it was already
the case for all strictly conforming programs.
Sure, a program can die at any time due to external influences
(somebody kills the process, the OS runs out of resources, somebody
pulls the power plug). And it's very difficult to define when this
constitutes a violation of the C standard and when it's just a case of
"oh well, stuff happens". If my program dies when my infinitely
recursive function call attempts to allocate a terabyte of memory, I
have no grounds for complaint. If it dies whenever I try to compute
2+2 in a strictly conforming program, that's a bug in the
implementation. If it dies when I try to access memory that was
allocated by an apparently successful malloc() call, I'm going to be
grumpy.
What you are guaranteed is that, while the execution of the program
continues, the object created by malloc() will behave correctly as an
object. The standard just doesn't guarantee continued execution of the
program.
What I am guaranteed is that if malloc() returns a non-null result,
the memory I requested was allocated. The question is what
"allocated" means.
There's also the issue of which behavior is more useful (which is
separate from the question of what the standard actually requires).
If C programs commonly use malloc() to allocate huge amounts of
memory and then only use part of it, overcommitment makes sense. If,
on the other hand, a program malloc()s a block of memory only if it's
actually going to use all of it, overcommitment merely causes certain
errors to be detected later and without any recourse. If I request a
megabyte of memory that the system can't or won't give me, I'd rather
have the malloc() fail cleanly than have my program abort later on.
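To make "fail cleanly" concrete, this is the sort of pattern I have in mind
(just an illustration, not code from any implementation discussed here):
request the megabyte, check the result, and only then use it. Under lazy
allocation the null check can't catch the failure this thread is about,
which is precisely the complaint.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = 1024 * 1024;         /* the megabyte in question */
    char *buf = malloc(n);

    if (buf == NULL) {
        /* The clean failure path: diagnose and bail out in an orderly way. */
        fprintf(stderr, "could not allocate %lu bytes\n", (unsigned long)n);
        return EXIT_FAILURE;
    }

    memset(buf, 0, n);  /* on an overcommitting system, a kill (if it comes)
                           happens here, long after the check above passed */
    free(buf);
    return 0;
}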
I would think that it's rare for C programs to malloc() memory that
they're not actually going to use, and I would argue that programs
that do so are misbehaving.