Douglas A. Gwyn said:
> You don't need multiple copies of read-only (I or D)
> segments. And the reason the rest of a process' space
> is R/W is that it will most likely be needed in the
> course of performing the algorithm.
Not necessarily: it could be data that is initialised
during the first few seconds of running and then remains
largely unchanged, and so could be shared on systems that
implement copy-on-write sharing when a process forks.
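For example (a sketch, assuming a Unix whose fork() is
copy-on-write; the table here stands in for any large,
initialise-once data):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define N (16 * 1024 * 1024)

    int main(void)
    {
        /* Built once, early in the run, then never written again. */
        int *table = malloc((size_t)N * sizeof *table);
        if (table == NULL)
            return 1;
        for (size_t i = 0; i < N; i++)
            table[i] = (int)i;

        pid_t pid = fork();
        if (pid == 0) {
            /* The child only reads the table, so its physical pages
               stay shared with the parent under copy-on-write. */
            long long sum = 0;
            for (size_t i = 0; i < N; i++)
                sum += table[i];
            printf("child sum = %lld\n", sum);
            _exit(0);
        }
        wait(NULL);
        free(table);
        return 0;
    }
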
> The only inherently *dynamic* RAM is the stack (which in a
> properly designed app should be bounded by a reasonable
> size) and the heap. The main purpose of the heap is
> specifically to share the limited RAM resource among
> competing processes. It is important for program
> reliability that each process be able to sense when a
> resource shortage occurs during execution and to retain
> control when that happens; the recovery strategy needs
> to be specific to the application, but typically would
> involve backing out a partially completed transaction,
> posting an error notification, scheduling a retry, etc.
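In code terms the discipline you describe looks something
like this (a sketch only; rollback_txn, commit_txn and
schedule_retry are hypothetical application routines,
stubbed so the fragment stands alone):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical application hooks, stubbed so this compiles. */
    static void rollback_txn(void)   { /* back out partial work    */ }
    static void commit_txn(void)     { /* make the update durable  */ }
    static void schedule_retry(void) { /* queue the work for later */ }

    static int process_record(const char *src, size_t len)
    {
        char *copy = malloc(len + 1);
        if (copy == NULL) {
            /* Heap exhausted: keep control, back out the partially
               completed transaction, report, and arrange a retry. */
            rollback_txn();
            fputs("out of memory: transaction backed out\n", stderr);
            schedule_retry();
            return -1;
        }
        memcpy(copy, src, len);
        copy[len] = '\0';
        /* ... perform the update using the copy ... */
        commit_txn();
        free(copy);
        return 0;
    }

    int main(void)
    {
        return process_record("example", 7) == 0 ? 0 : 1;
    }
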
But what others are saying is that many applications
are allocating much, much more memory than they need, "just
in case", and that lazy allocation by the OS results in
a large improvement in performance.
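A rough illustration of what lazy allocation buys (a sketch;
it assumes a 64-bit system with overcommitment enabled, and
the 8 GB request is an arbitrary oversized figure):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Ask for far more than will ever be used, "just in case". */
        size_t huge = (size_t)8 * 1024 * 1024 * 1024;
        char *p = malloc(huge);
        if (p == NULL) {
            fputs("request refused\n", stderr);
            return 1;
        }
        /* Under lazy allocation the kernel has only promised these
           pages; physical memory is committed as pages are touched,
           so this write costs one page, not 8 GB. */
        p[0] = 'x';
        free(p);
        return 0;
    }
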
> Ideally stack overflow would throw an exception, but
> anyway malloc is relied on to return a null pointer to
> indicate heap resource depletion. For a process to be
> abnormally terminated in the middle of a data operation
> instead of being given control over when and how to
> respond to the condition is unacceptable.
This depends on the purpose. For some uses, a large
improvement in performance may be worth the dangers of lazy
allocation. Also, my (possibly faulty) recollection is that
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.
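For what it is worth, catching it would look roughly like
this (a sketch only; it assumes the failure really does
arrive as a catchable signal such as SIGSEGV or SIGBUS,
whereas on some systems the process is simply killed, and a
handler can do little more than report and exit):

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_fault(int sig)
    {
        /* Only async-signal-safe calls are permitted in here. */
        static const char msg[] = "memory fault: giving up cleanly\n";
        (void)sig;
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(EXIT_FAILURE);
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_fault;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGSEGV, &sa, NULL);
        sigaction(SIGBUS, &sa, NULL);
        /* ... the application's normal work goes here ... */
        return 0;
    }
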
> If, as you claim, there is now a consensus among OS designers
> that it is okay for systems to behave that way, then that
> indicates a serious problem with the OS designers.
What David actually said was that the consensus
was that it was "unreasonably expensive", i.e. that the
performance degradation was too large to justify the improvement
in safety. Of course, this is a value judgement, and
different individuals will come to different conclusions,
and the performance/safety trade-off will be different
in a nuclear reactor controller, a critical database server,
and a user's desktop PC, to name a few extreme cases.
> It also indicates that Linux must *not* be used in any
> critical application, at least not without great pains
> being taken to determine the maximum aggregate RAM
> utilization and to provide enough RAM+swap space to
> accommodate it, and one still would be taking a chance.
Yes, you would have to ensure that the RAM+swap was
large enough for the worst case, but ensuring this should be
adequate. With overcommitment turned off, it may be the
case that you would have to supply so much RAM+swap that
the system became too expensive.
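One way for a critical process to approximate that
guarantee for itself is to claim and lock its worst-case
memory at startup, so any shortfall shows up before a
transaction is in flight (a sketch; the 64 MB worst-case
figure is invented for illustration, and mlockall() usually
needs privilege):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define WORST_CASE_BYTES (64UL * 1024 * 1024)

    int main(void)
    {
        char *pool = malloc(WORST_CASE_BYTES);
        if (pool == NULL) {
            fputs("cannot reserve worst-case memory: not starting\n", stderr);
            return 1;
        }
        /* Touch every page so the memory is really committed; note
           that under heavy overcommitment this step itself can still
           be the point of failure, so it only narrows the window. */
        memset(pool, 0, WORST_CASE_BYTES);
        /* Lock current and future allocations in RAM. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");
        /* ... serve requests from 'pool' ... */
        free(pool);
        return 0;
    }
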
> I suppose this is what comes of letting amateurs design
> and construct operating systems, and/or write the apps.
> There was a time when software engineers were in charge.
I suspect there may be some truth to the claim about
amateurs designing applications, as modern applications seem
to regard RAM as infinite. However, I think you are a little
unfair concerning OS designers. There seems to have been
a lot of debate and serious consideration of the performance
and safety trade-offs. The fact that some OS designers have
made decisions on these trade-offs that you disagree with
does not, in my opinion, mean that they are "amateurs".
Of course the best is to have both options. If
you need the safety, turn overcommitment off and pay the
price (either in money for more RAM or swap space or in
performance or both). If you do not need the safety,
allow overcommitment and get better performance for the
same cash. If you need safety, but cannot afford the price
of turning overcommitment off, either carefully analyse your
memory usage to ensure you are "reasonably" safe, or get
more money, or lower your target safety level, or lower
your performance standards, or compromise in some other
way.
Charles