> It may be, but you are less helpless than what you are making it
> out to be.
> Say that you want to save. If your file stream is already there,
> and given that saving is logically a "const" operation,
It is not logically a "const" operation and never will be one. The
very notion is absurd. How can I/O be const?
> there's little reason for things to go wrong (it's possible, but
> not likely). Or say that you want to log the error. Logging is
> something that must be a no-throw operation; therefore, logging
> facilities are already there and ready.
Providing logging as a no-throw operation is a logical impossibility
unless it is swallowing the errors for you. I/O can always fail,
period. Even when you reserve the descriptor and the buffer.
Moreover, it's generally impossible to detect failure without actually
performing the operation!
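To make that concrete: about the only "no-throw" logging one can
write in C++ looks something like the sketch below, and it is
no-throw precisely because it discards its own failures (the function
name is made up):

#include <cstdio>

// A "no-throw" logging call, which is only no-throw because it
// swallows its own failures: a full disk, a dead pipe, or an
// allocation failure while formatting is silently dropped.
void log_nothrow(const char* msg) noexcept
{
    try
    {
        std::fputs(msg, stderr);   // can fail (returns EOF); ignored
        std::fputc('\n', stderr);
    }
    catch (...)
    {
        // Nothing sane to do here: reporting the failure is what
        // just failed.
    }
}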
> I tried, a long time ago, to eat all my memory and then proceed to
> disk I/O. This works on e.g. Unix and Windows. Why wouldn't it?
Sure, if you're using read(2) and write(2) (or equivalents) and have
already allocated your buffers, then being out of memory won't require
any additional allocations on the part of your process. Of course,
performing I/O requires more effort than just the read and write
calls, and many (most?) people don't write code that uses such low-
level interfaces. The interfaces they do use (e.g., C++ iostreams)
frequently do not make it easy, or even possible, to ensure that any
given I/O operation will not cause memory allocation to occur.
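For what it's worth, the granted low-level case looks something like
this sketch (POSIX write(2), descriptor and buffer reserved up front;
the names are made up). Note that even here the call can still fail:

#include <cerrno>
#include <cstddef>
#include <unistd.h>

// Descriptor and buffer reserved up front, so the write itself makes
// no allocations in this process.  It can still fail, though:
// ENOSPC, EIO, EDQUOT, ...
static char reserve_buf[4096];

bool flush_reserve(int fd, std::size_t len)
{
    std::size_t done = 0;
    while (done < len)
    {
        ssize_t n = ::write(fd, reserve_buf + done, len - done);
        if (n < 0)
        {
            if (errno == EINTR)
                continue;          // interrupted; just retry
            return false;          // genuine I/O failure
        }
        done += static_cast<std::size_t>(n);
    }
    return true;
}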
Never mind that data is often stored in memory in a different format
from how it is stored on disk; converting between these formats often
requires allocating memory. If you truly believe the fact that
read(2) and write(2) do no allocations is somehow relevant in this
discussion, then you are truly clueless. There is more to doing I/O
than just the actual system calls that transfer data from your process
to the kernel or I/O device.
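A tiny illustration of that point, with a made-up Document type: even
a trivial text "save" builds an intermediate string, and that
construction can itself throw std::bad_alloc:

#include <ostream>
#include <string>

struct Document                    // hypothetical in-memory state
{
    int    revision;
    double zoom;
};

// Converting the in-memory form to the on-disk (text) form
// allocates: std::to_string and the concatenations below all take
// memory, so this "save" can throw std::bad_alloc -- exactly the
// condition it was supposed to report.
void save(const Document& d, std::ostream& out)
{
    std::string line = "revision=" + std::to_string(d.revision)
                     + " zoom="    + std::to_string(d.zoom) + '\n';
    out.write(line.data(), static_cast<std::streamsize>(line.size()));
}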
>> You can't be serious with this. I would really like to see a
>> codebase that saves state to disk prior to any allocation (any
>> failure condition, really).
> You don't. You save it before performing the complicated image
> processing operation that might fail, instead of trying to save the
> file after it failed. Plenty of codebases expect you to do this,
> and plenty of smart users do this automatically and out of habit,
> even if the application does it for them.
> Actually, what could be attempted is saving after any change. But
> that won't work well for many an editor either. The best you can
> reasonably do is to save recovery data from time to time.
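For concreteness, the ordering being proposed here is roughly the
following sketch; every name in it is hypothetical:

struct Document;                           // hypothetical application state
void save_recovery_file(const Document&);  // assumed cheap and pre-reserved
void run_complicated_filter(Document&);    // the step that may exhaust memory
void remove_recovery_file();

// Checkpoint *before* the risky work, while memory is still
// plentiful, instead of trying to save after the failure.
void apply_filter(Document& doc)
{
    save_recovery_file(doc);
    run_complicated_filter(doc);   // may throw std::bad_alloc
    remove_recovery_file();        // success: checkpoint no longer needed
}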
If that's the best I can do, then why the hell are you telling me to
handle OOM at all? You came up with the suggestion, and now you're
telling me what you originally suggested is not possible. So which is
it?
> It's not unnecessary caching, it's transient peaks in memory usage
> during some work.
What transient peaks? If the amount of memory allocated to my process
is more than what I actually need to perform my processing, it means
some sort of caching (e.g., pool or block allocator) must be
occurring. Writing those caches such that they support giving memory
back to the operating system may be difficult and not worth the effort
involved. In some cases, I may not even know they're occurring or be
able to influence them.
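The sort of cache meant here, in toy form (every name made up): a
free-list pool that recycles blocks internally and never hands them
back to the OS:

#include <cstddef>
#include <cstdlib>

// Toy block pool: freed blocks go on an internal free list and get
// recycled, but are never handed back to the operating system, so
// the process holds its peak usage indefinitely.
class BlockPool
{
    struct Node { Node* next; };
    Node*       free_list_ = nullptr;
    std::size_t block_size_;

public:
    explicit BlockPool(std::size_t size)
        : block_size_(size < sizeof(Node) ? sizeof(Node) : size) {}

    void* allocate()
    {
        if (free_list_)                    // reuse a cached block
        {
            Node* n = free_list_;
            free_list_ = n->next;
            return n;
        }
        return std::malloc(block_size_);   // otherwise grow
    }

    void deallocate(void* p)               // cache it; don't release it
    {
        Node* n = static_cast<Node*>(p);
        n->next = free_list_;
        free_list_ = n;
    }
};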
> You often don't know how much memory a given system has, nor do you
> know what e.g. other processes are doing wrt memory at the time you
> need more memory.
If the operating system's virtual memory allows memory allocation by
other processes to cause allocation failure in my own, then
ultimately I may be forced to crash anyway. Many operating systems
kernel panic (i.e., stop completely) if they reach their commit limit
and have no way of raising the limit (e.g., adding swap automatically
or expanding an existing swap file). Talking about other processes when
all mainstream systems provide robust virtual memory systems is
tomfoolery.
> But I have. I have been intentionally driving code up the wall with
> memory usage and looking at what happens. If you have your
> resources prepared up front, it's not hard to do something
> meaningful in that handler (it also depends on what one considers
> reasonable).
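The classic shape of "resources prepared up front" in C++ is a
reserve block plus std::set_new_handler, roughly as below; whether
freeing a fixed reserve buys enough room to do anything meaningful is
precisely what is being disputed:

#include <cstdlib>
#include <new>

// "Resources prepared up front": a reserve block that the new-handler
// releases, giving cleanup and notification code some room to run.
static char* emergency_reserve = nullptr;

void out_of_memory()          // installed via std::set_new_handler
{
    if (emergency_reserve)
    {
        delete[] emergency_reserve;   // free the reserve...
        emergency_reserve = nullptr;
        return;                       // ...and operator new retries
    }
    std::abort();                     // reserve already spent: give up
}

int main()
{
    emergency_reserve = new char[64 * 1024];
    std::set_new_handler(out_of_memory);
    // ... application work that may exhaust memory ...
}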
Your definition of reasonable is asinine, since it requires
programmers to write code that relies on low-level operating system
behaviors and system calls. Moreover, it assumes that doing such
things is possible without any further exceptions occurring! Finally, it
assumes that the behavior of the system calls themselves is somehow
the only relevant thing!
> Quick? Why? Because the way to write a critical piece of code is
> off the top of one's head? That's not serious.
> Meh. You are trying to construct a case of trying to do a lot in
> case of a resource shortage in order to prove that __nothing__ can
> be done in case of resource shortage. I find this dishonest.
You're the one who suggested that we write state out to a file when we
reach an out of memory condition, not I! I'm not suggesting that
any more be done than what is necessary to have a reasonable chance of
the operation succeeding, and I didn't even suggest everything
strictly necessary since it is application dependent.
> Realistically, here's what I'd do for the save case:
>
> try
> {
>     // throw zone, lotsa work
> }
> catch (const whatever& e)
> {
>     inform_operator(e, ...);      // nothrow zone
>     try { save(); }               // throw zone
>     catch (const whatever& e2)
>     {
>         inform_operator(e2, ...); // nothrow zone
>     }
> }
It is not possible to write 'inform_operator' generically in such a
way that it's nothrow unless it actively swallows exceptions. All of
the stuff that you said was 'Meh' is required to notify the operator!
Of course, assuming there is an operator is just icing on the cake.
> Then, I would specifically test inform_operator under load and try
> to make it reasonably resilient to it. But whatever happens, I
> would not allow an exception to escape out of it.
Doing this doesn't buy you a thing. It doesn't ensure the operator
(who doesn't exist) will see the message, and it doesn't ensure you
can safely save. Ensuring these things requires doing what I suggest,
at a minimum, if it's even possible to ensure notifications and
saving, which it is not.
> I disagree. For me, there's actually no such thing as "error
> handling". There's error reporting and there's __program state
> handling__ (in the face of errors). This is IMO a very important
> distinction.
Not when discussing out of memory conditions (and most exceptions),
there's not. It has no bearing on the relevant questions: will the
program terminate and how will it do it?
> Hmmm... We are most likely in disagreement about what exceptions
> are used for.
Clearly, but your disagreement isn't really with me but with language
designers and implementers the world over.
Adam