The question is specific - "how do *you* handle the exception?".
Note that there are two very distinct levels involved. Even if
you don't want to handle the error locally, you still have to
ensure that your objects are in a coherent state when an
exception passes through. Something like:
int* p1 = new int[ 100 ] ;
int* p2 = new int[ 100 ] ;
is a memory leak waiting to happen, for example: if the second
new throws bad_alloc, p1 is never freed.
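The usual cure is RAII. A minimal sketch of the same two
allocations, this time exception safe (using std::vector; the
function name is just for the example):
    #include <vector>

    void
    f()
    {
        std::vector< int > p1( 100 ) ;
        //  If this second allocation throws, p1's destructor
        //  still runs, and its memory is freed.
        std::vector< int > p2( 100 ) ;
    }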
Luckily, if you're using operator new (but not malloc), this is
easy to test. Just replace the global operator new with one
which "runs out" on command. (This is a standard part of my
memory checking operator new.)
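Roughly along these lines - a bare-bones sketch, not the real
MemoryCheck class; the trigger interface here is invented for
illustration, and the new_handler loop is omitted:
    #include <cstdlib>
    #include <new>

    static int failAfter = -1 ;         //  -1: never fail

    void
    setErrorTrigger( int count )        //  illustrative only
    {
        failAfter = count ;
    }

    void*
    operator new( std::size_t size ) throw ( std::bad_alloc )
    {
        if ( failAfter >= 0 && failAfter -- == 0 ) {
            throw std::bad_alloc() ;
        }
        void* p = std::malloc( size != 0 ? size : 1 ) ;
        if ( p == NULL ) {
            throw std::bad_alloc() ;
        }
        return p ;
    }

    void
    operator delete( void* p ) throw()
    {
        std::free( p ) ;
    }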
I think that most people think like you do - "I don't need to
worry about it, someone else will handle it" - when in fact
it's never tested and was never designed to be handled.
There's certainly no excuse for it never being tested. Just
insert an appropriate operator new, and write something like:
    int allocCount = 0 ;
    bool noMem = true ;
    while ( noMem ) {
        t.nextMinorCycle() ;
        ++ allocCount ;
        noMem = false ;
        Gabi::MemoryCheck memchk ;
        //  Fail the allocCount-th allocation with bad_alloc...
        memchk.setErrorTrigger( allocCount ) ;
        try {
            SetOfCharacter s( it.begin(), it.end() ) ;
        } catch ( std::bad_alloc& ) {
            noMem = true ;  //  still failing: next time, fail
                            //  one allocation later
        }
        memchk.resetErrorTrigger() ;
        //  ...and verify that nothing leaked because of it.
        t.verify( memchk.unfreedCount(), 0 ) ;
    }
Not that I do any better, mind you ... I make just as much of
a mess of it as anyone else does.
I know what you mean. I added the option to my MemoryCheck
class many years ago, but if you look at the code at my site,
none of the tests use it. I only started systematically testing
like this a couple of months ago. I might add that I found an
error the very first time I did such a test. And I consider
myself a fairly careful programmer.
The only point I am making is that expecting some higher-up
frame to deal with my lower-down frame's issue, without at
least making the right design choices, is a recipe for
failure.
Go ahead and try it. Set the process memory limit and run
various commands. Many of them crash and burn.
And those that don't crash and burn are likely to leak
resources, or end up in an inconsistent internal state.
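If you want to impose such a limit from inside a test driver,
something like the following works under POSIX (a sketch,
error checking omitted; the function name is invented):
    #include <sys/resource.h>

    //  Cap the address space, so that allocations beyond the
    //  limit start failing.
    void
    limitMemory( rlim_t maxBytes )
    {
        struct rlimit rl ;
        getrlimit( RLIMIT_AS, &rl ) ;
        rl.rlim_cur = maxBytes ;
        setrlimit( RLIMIT_AS, &rl ) ;
    }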
I'd at least expect them to
complain very early about needing more memory to run.
One way I handled this in the past was to initially reserve a
large chunk of memory (1MB or something like that); upon
failure to get more memory, I would set a flag, free the
block, and retry the memory allocation. The flag would then
cause the application to alert the user that they're out of
memory and should terminate other apps, etc. If they chose to
proceed, it would attempt to re-allocate its memory reserve so
it could do it again. This was the only successful low memory
handling scheme I ever wrote. It can't be implemented using
exceptions, because it needs to work before exceptions are
thrown.
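For what it's worth, that scheme maps fairly directly onto
std::set_new_handler, which is called on failure before any
bad_alloc is thrown. A rough sketch - every name is invented,
and the reserve size is as arbitrary as above:
    #include <cstdlib>
    #include <new>

    static char* reserve = NULL ;
    static bool lowMemory = false ;     //  polled by the app

    void
    outOfMemory()
    {
        if ( reserve != NULL ) {
            //  Free the reserve; operator new then retries
            //  the failed allocation.  Flag the application
            //  so it can warn the user.
            delete [] reserve ;
            reserve = NULL ;
            lowMemory = true ;
        } else {
            //  The reserve is already gone: give up.
            std::abort() ;
        }
    }

    void
    installReserve()
    {
        reserve = new char[ 1024 * 1024 ] ;     //  1MB
        std::set_new_handler( &outOfMemory ) ;
    }
If the user chooses to proceed, calling installReserve() again
re-arms the scheme.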
It's probably the best you can do in many cases. I work on
large scale servers. For the most part, they run on dedicated
machines, with what should be sufficient memory. If we run out
of memory, it's almost certainly because of a memory leak; we
free a pre-allocated block with hopefully enough memory to
ensure that logging will still work, log the problem, and abort
(which of course triggers a restart of the application). This
is done, of course, by means of the new_handler, and not by
catching exceptions.
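In outline, something like this - reserve being the
pre-allocated block, and log() standing in for whatever the
real logging mechanism is:
    #include <cstdlib>
    #include <new>

    extern char* reserve ;              //  allocated at startup
    void log( char const* message ) ;   //  hypothetical

    void
    emergencyShutdown()     //  installed as the new_handler
    {
        std::set_new_handler( NULL ) ;  //  don't loop if even
                                        //  logging fails
        delete [] reserve ;             //  room for the logging
        reserve = NULL ;
        log( "out of memory" ) ;
        std::abort() ;                  //  triggers the restart
    }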
There are exceptions. An LDAP server can receive arbitrarily
complex requests from the user, for example. Typically, it will
build such requests as a tree (with AND, OR and NOT nodes, as
well as predicates in leaf nodes). If one request fails because
of a lack of memory, it may simply be that it is too complex.
Catch bad_alloc, reject that request, and continue running. In
theory, at least; typically, the requests are parsed using
recursive descent, so where you run out of memory is when trying
to grow the stack. (There's no standard solution for that,
but I know how to handle it under Solaris.)
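In outline - every name here is invented for the example, and
it only covers the heap, not the stack overflow case just
mentioned:
    #include <memory>
    #include <new>

    //  Input, Request and Connection are application classes,
    //  assumed here.
    Request* parseRequest( Input& in ) ;    //  recursive descent

    void
    handleRequest( Input& in, Connection& conn )
    {
        try {
            std::auto_ptr< Request > request( parseRequest( in ) ) ;
            conn.execute( *request ) ;
        } catch ( std::bad_alloc& ) {
            //  Too complex for the available memory: reject
            //  this one request, but keep the server running.
            conn.sendError( "request too complex" ) ;
        }
    }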
The fact that there can be such cases, of course, means that in
library code, you have to do the right thing, even if it doesn't
matter in most applications.