On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion usually
comes down to either a system-wide problem or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc.
No, you set upfront bounds on allowable inputs. This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different.
Because code runs in a more volatile environment
It does nothing of the sort! Code does not have to deal with the
physical environment: it doesn't have to be concerned with the external
temperature, humidity, shock, weather conditions, etc. It does not
care whether the computer has been placed in a server room or outside
in the middle of the desert. Hardware frequently has to care about
all of these factors, and many more. Operating systems provide
reasonable levels of isolation between components: software can
generally ignore the other software running on the same computer.
Hardware design has to frequently care about these factors: the mere
placement of ICs on a board can cause them to interfere with one
another!
The list goes on and on, and applies to all of the other engineering
disciplines too. This is by far the most absurd and ignorant thing
you've said yet.
and tends to handle
more complex (models of) systems.
Because it's cheaper and easier to do such things in software, in no
small part because many of the classical design considerations for
hardware simply disappear. However, that doesn't change my statement
on setting bounds in the least.
An obvious example: a program operates on a set of X-es in one part,
and on a set of Y-s in another. Both are added as the user goes along.
Given system limits, the code can operate on a range of A X-es and 0
Y-s, or 0 X-es and B Y-s, or many combinations in between. However you
decide on a limit on the maximum count of X or Y, some use will suffer.
I'm not sure how you think this is irrelevant to this portion of the
discussion, but it's not even true. The limit for both may be
excessively generous for any reasonable use case. Moreover, plenty of
hardware has to process two distinct inputs and still sets bounds, so
it is an accepted technique.
Compound this with the empirical observation
that, besides X and Y, there are U, V, W and many more, and there you
have it.
There is no such empirical observation. That windmill in front of you
is not a dragon.
Add a sprinkle of a volatile environment, as well as
differing environments, because one code base might run in all sorts
of them...
A simple answer to this is to (strive to ;-)) handle OOM gracefully.
Even if everything you just wrote were true, you're still making the
case for handling OOM by termination in reality. If the environment
were really as diverse and volatile as you claim, and I can't prevent
the condition by setting reasonable bounds, there's really no reason
to believe I can respond to the condition after the fact, either.
Adam