For most code, on most platforms, the two will be one and the same.
The OS cleans up most resources when a process dies, and most
processes have no choice but to die.
I disagree (obviously). Here's the way I see it: it all depends on the
number of functions the program performs. A simple program that only
does one thing (e.g. a "main" function with no "until something
external says stop" in the call chain) benefits slightly, in C++, from
the "die on OOM" approach (in C, or in something else without
exceptions, the benefit is greater, because error checking there is
very labor-intensive). In fact, such a program benefits from a "die on
any problem" approach.
Programs that do more than one thing are at a net loss with the "die
on OOM" approach, and the loss grows with the number of functions (and
with how important they are). Imagine an image processing program. You
apply a transformation, and that OOMs. You die, and your user loses
his latest changes that worked. But if you unwind the stack, clean up
all the resources the transformation needed and say "sorry, OOM", he
could have saved (heck, you could have saved for him, given that you
hit OOM). And dig this: trying to do the same at the spot where you
hit OOM is a __mighty__ bad idea. Why? Because memory, and other
resources, are likely already scarce, and an attempt to do anything
there might fail due to that.
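To make that concrete, here's a minimal sketch; Image, applyTransform
and saveToDisk are names I'm making up for illustration, and the only
point is where the catch sits: far from the allocation, after the
stack has already unwound and freed the transformation's temporaries.

#include <iostream>
#include <new>
#include <vector>

struct Image {
    std::vector<unsigned char> pixels;  // the user's current, valid state
};

// Hypothetical transformation: needs big temporaries, commits only on success.
void applyTransform(Image& img) {
    std::vector<unsigned char> scratch(img.pixels.size() * 4 + 1024);  // may throw std::bad_alloc
    // ... compute into scratch ...
    img.pixels.swap(scratch);  // commit; if we never get here, img is untouched
}

// Stand-in for a real save routine; it should need little or no heap.
void saveToDisk(const Image& img) {
    std::cout << "saved " << img.pixels.size() << " bytes\n";
}

int main() {
    Image img;
    // ... load image, let the user edit it ...
    try {
        applyTransform(img);
    } catch (const std::bad_alloc&) {
        // The stack has unwound: the transformation's temporaries are gone,
        // so there is memory to work with, and img holds the last good state.
        std::cerr << "Sorry, out of memory; the transformation was skipped.\n";
        saveToDisk(img);  // or at least offer the user the chance to save
    }
}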
Or imagine an HTTP server. One request OOMs, you die. You terminate
and restart, and you cut off all the other requests being processed
concurrently; that is neither nice nor necessary. And so on.
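The same shape, sketched for the server case (Request, Response and
handleRequest are made-up stand-ins; a real server would do this per
connection or per worker): the catch goes around one request, so one
failed request doesn't take the rest of the service down with it.

#include <iostream>
#include <new>
#include <string>
#include <vector>

struct Request  { std::string body; };
struct Response { int status; std::string text; };

// Hypothetical handler; a big enough request can exhaust memory.
Response handleRequest(const Request& req) {
    std::vector<char> scratch(req.body.size() * 1000);  // may throw std::bad_alloc
    return Response{200, "ok"};
}

Response serveOne(const Request& req) {
    try {
        return handleRequest(req);
    } catch (const std::bad_alloc&) {
        // This request's temporaries were freed during unwinding, so building
        // a small error response is normally possible; other requests go on.
        return Response{500, "out of memory"};
    }
}

int main() {
    std::cout << serveOne(Request{std::string(1000, 'x')}).status << "\n";
}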
Straightforward C++ on most implementations will deallocate memory as
it goes, so when the application runs out of memory, there won't be
anything to free up: retrying the operation will cause the code to
fail in the same place. Making more memory available requires
rewriting the code to avoid unnecessarily holding on to resources that
it no longer needs.
That is true, but only if peak memory use actually goes into holding
program state (heap fragmentation plays its part, too). My contention
is that this is the case much less often than you make it out to be.
Even when there's memory to free up, writing an exception handler that
actually safely runs under an out-of-memory condition is impressively
difficult.
I disagree with that, too. First off, by the time you actually hit the
top-level exception handler, chances are you will already have freed
some memory. Second, the OOM-handling facilities are already made not
to allocate anything: e.g. throwing bad_alloc does not allocate in any
implementation I've seen, and I've seen OOM exception objects
pre-allocated statically in non-C++ environments, too (what else could
they do?). There is difficulty, I agree with that, but the rule is
actually trivial: keep in mind that, once you hit that OOM handler
(most likely, some top-level exception handler not necessarily tied to
OOM), you must already have everything you might need prepared
upfront. That's +/- all. For a "catastrophe", prepare the required
resources upfront.
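For example (just a sketch; the 64 KiB reserve block and the names
g_reserve and reportOutOfMemory are my own illustration, not some
standard facility), "prepared upfront" can be as little as a static
message plus a reserve allocation the handler releases before it does
its work:

#include <cstddef>
#include <cstdio>
#include <memory>
#include <new>

// Allocated at startup, while memory is still plentiful.
static std::unique_ptr<char[]> g_reserve(new char[64 * 1024]);
static const char g_oomMessage[] = "Out of memory, operation aborted.\n";

void reportOutOfMemory() {
    g_reserve.reset();                 // release the reserve: headroom for what follows
    std::fputs(g_oomMessage, stderr);  // a pre-built static string: nothing to allocate here
    // ... save user data, log, etc., using only what was prepared upfront ...
}

int main() {
    try {
        // Deliberately force an allocation failure for the sake of the example.
        char* p = new char[std::size_t(-1) / 2];
        delete[] p;  // not reached on any system I know of
    } catch (const std::bad_alloc&) {
        reportOutOfMemory();
    }
}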
Properly written, exception-safe C++ code will do the right thing when
std::bad_alloc is thrown, and most C++ code cannot sensibly handle
std::bad_alloc. As a result, the automatic behavior, which is to let
the exception propagate up to main() and terminate the program there,
is the correct behavior for the overwhelming majority of C++
applications. As programmers, we win anytime the automatic behavior
is the correct behavior.
Yeah, I agree that one cannot sensibly "handle" bad_alloc. It can
sensibly __report__ it, though. The thing is, though, that code can't
"handle" the vaaaast majority of exceptions. It can only report them
and, in rare cases, retry upon some sort of operator reaction (like:
check the network, then retry saving a file on a share). That makes
OOM much less special than any other exception, and less of a reason
to terminate.
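A sketch of that report-and-retry shape (the share path and both
function names here are invented for illustration):

#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical save routine; throws on any write failure.
void saveFile(const std::string& path, const std::string& data) {
    std::ofstream out(path, std::ios::binary);
    if (!out.write(data.data(), static_cast<std::streamsize>(data.size())))
        throw std::runtime_error("cannot write " + path);
}

bool operatorSaysRetry(const std::string& what) {
    std::cout << what << " -- retry? (y/n) ";
    char c = 'n';
    std::cin >> c;
    return c == 'y' || c == 'Y';
}

int main() {
    for (;;) {
        try {
            saveFile("//server/share/doc.txt", "contents");
            break;                                  // saved, we're done
        } catch (const std::exception& e) {
            // We can't "handle" this; report it, let the operator fix the
            // environment (plug the network back in, say), then retry.
            if (!operatorSaysRetry(e.what()))
                break;
        }
    }
}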
Goran.