On Sep 13, 1:06pm, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
Which is what I told you. It's still not sufficiently general to
support the counterargument, "You only have to handle some of them".
You are being obtuse and refuse to even try to understand.
I design an application.
I specifically design it so that I know where the "large" memory
allocations occur. In fact, I design it so that I purposefully make one
large allocation rather than allocating drip by drip.
There is no need to be able to recover from every allocation failure.
You seem to be the one claiming that because it is impossible to
recover from all possible allocation failures, you should never,
ever, under any circumstances even consider attempting to recover from
any allocation failure whatsoever. I disagree, and the code sample
posted demonstrates that recovery is possible.
You said it yourself: if memory is really exhausted and a small
allocation really fails, then attempting to recover is really hard. So
you are correct that attempting to recover from all allocation
failures is a bad idea.
However, I can design an application so that it can recover from some
specific allocation failures if I use my brain to design it correctly.
Besides, the reality of the matter is that only having to handle some of
them does not make things one iota easier, and I'm not sure why you
think it would. The difficulty is in figuring out
what goes inside the catch block, not in figuring out where to place
the damn thing (though it's hardly as simple as you think it is).
See the code sample supplied, which you purposefully ignore (well, almost
ignore).
1- It works.
2- Something is being done in the catch block.
No problem. Not difficult. I am not sure what your problem with
it is.
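For anyone who has not seen the posted sample, a minimal sketch of the
same pattern might look like this (process_input and the I/O here are
invented for illustration, not the original code):

#include <iostream>
#include <new>
#include <vector>

// Hypothetical per-input worker: the only "large", input-dependent
// allocation is the single reserve() below, done up front.
bool process_input(std::size_t n)
{
    std::vector<char> buffer;
    try {
        buffer.reserve(n);  // one large allocation, size driven by external input
    } catch (const std::bad_alloc&) {
        // Recovery point: this request was too big for current conditions.
        // A real handler would avoid doing anything here that itself needs
        // significant memory.
        std::cerr << "input rejected: cannot allocate " << n << " bytes\n";
        return false;
    }
    // ... process the input using buffer ...
    return true;
}

int main()
{
    std::size_t n;
    while (std::cin >> n)   // sizes arrive as external input
        process_input(n);
}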
Regardless, like it or not, the burden on you to support your claim is
as I described. It's not my fault you're making claims you cannot
support.
FFS! Code supplied. Claim supported. Did you run it? It works. More
than one person has told you that they have tested a similar thing in
the real world.
My claim that it is possible to recover from some allocation failures
is fully supported.
I have no idea what claim you claim that I made that you claim that I
cannot support.
Clarification before you try to claim that I claim other things:
My claims:
"It is possible to recover from some allocation failures"
"It is possible to design an application in such a way so that there
are suitable recovery points"
"It is possible to design an application in such a way that you plan
to do potentially large allocation in one particular area and
given this knowledge and this planning, it is safe to *attempt* to
recover"
"On a case by case basis, the cost of handling some OOM error may be
justified by the value"
As far as I can understand, you seem to make the generic claim that
recovering from any allocation failure whatsoever is impossible
and never worth doing. This is invalid: you can disprove an "it's
impossible to do" claim simply by supplying one example where it works.
BTW: I am not disputing that it is very difficult (or virtually
impossible) to write a generic allocation failure handler that will
always successfully recover from all allocation failures regardless of
the cause.
There are plenty of systems where such things are impossible or simply
more difficult than they are worth. This is why we have multiple
levels of exception safety guarantees. There may not be such a thing as
a "safe point of failure". All of my exception safety guarantees may
be weak.
I never claimed that it is always possible for all systems. I claim
that one can design a system where it is possible in restricted cases.
The fact that some systems may not have any "safe point of failure"
does not mean that it is impossible to design a specific system to
have safe points of failure.
Are you claiming that it is impossible to design any system with safe
points of failure?
I don't know why you keep returning to this point.
Because it seems you are refusing to understand.
'Too large' is not
the only reason to see an allocation failure.
In the system as designed, you *know* that in this particular location
the allocation size may be too large (because it depends on external
input). You *know* that because you designed it that way.
Hence two possibilities:
1- The allocation failure was due to the requested allocation being
too large. Then the recovery attempt will succeed and it is fine to
continue.
2- The allocation failure was not due to the requested allocation
being too large, but to allocation being totally impossible
on the system right now. Then the recovery attempt will fail and the
program will terminate.
Essentially you are advocating assuming that you always are, and always
will be, in situation #2. I am advocating that, given good
design, expertise, and knowledge, you can know where it is worth
checking whether you are in situation #1 and where it may be worth
attempting to recover.
Of course, that still leaves allocation failures that happen elsewhere in
the program. In that case, the result will be as you advocate and the
application will terminate. As designed.
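To make those two outcomes concrete, here is a rough sketch of such a
recovery point (handle_request and the recovery action are hypothetical,
not the posted code):

#include <new>
#include <string>
#include <vector>

// Hypothetical recovery point: the large, input-driven allocation happens here.
void handle_request(std::size_t n)
{
    try {
        std::vector<char> big(n);  // "large" request, size depends on the input
        // ... use big ...
    } catch (const std::bad_alloc&) {
        // Situation 1: only this request was too large.  The small allocation
        // below succeeds; we record the failure and return to the main loop.
        // Situation 2: the system is genuinely out of memory.  The allocation
        // below throws again, nothing catches it, and the program terminates,
        // exactly as it would have done without this catch block.
        std::string note = "request for " + std::to_string(n) + " bytes rejected";
        // ... log note, release per-request state, carry on ...
    }
}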
On modern operating
systems with virtual memory, it may not even be the primary reason
to see an allocation failure[1]. Heap fragmentation / allocator
limitations are far more likely to cause an OOM condition. Such
issues equally affect small and large allocations: it depends entirely
on the algorithm used to allocate memory. Do you know what your
std::allocator does? It probably doesn't do what you think it does.
Likewise, malloc() probably doesn't behave the way you think it does
either.
Can we at least agree that the application can't know whether an allocation
failed because of heap fragmentation, allocator limitations, OS-enforced
per-process limits, or the OS actually having run out of memory
altogether? The visible result for the application will be the same,
so this is irrelevant to the discussion. For what it's worth, the
posted code will work as advertised on a system with little physical
RAM and virtual memory disabled.
The included code demonstrates that, on a modern operating system, it is
possible to design an application that may have to allocate memory
depending on external input (unknown at compile time), on a system with
unknown currently available resources, in such
a way that in some particular location in the code, allocation failures are
likely to be caused primarily by "large" allocation requests.
The posted code does not in any way attempt to demonstrate that the
application will always be able to recover from all possible causes of
OOM errors at any possible place in the code.
I may never, ever see an error because the allocation was "too
large". It's poor justification for going through the effort of
writing an OOM handler. Size alone does not tell me which allocations
in the application will fail.
For your application, this may be the case. For other applications,
this may not be the case.
As I've stated many times: where one thinks the allocation failures
will happen and where they will actually happen are two entirely
different things.
You choose to quit at the first hurdle. I choose to at least attempt
to jump. If I fail, no loss. I am no worse than you. If I succeed,
I live to fight another day.
See the posted code. The application is designed so that allocation
failures due to input complexity happen where planned.
Other allocation failures may happen elsewhere. An allocation
failure happening elsewhere will be handled differently from one
happening in the purposefully designed "recoverable" section.
Especially when executing threaded code. Simply
*saying* design does not tell me what I need to know. You haven't yet
told me all the factors I need to consider in my design. There are
plainly more factors than how much memory a particular request asks for
and why the request is being made.
I am not the designer of your application, so I can't know all
the factors that need to be considered for *your* design. I have no
idea of *your* requirements.
I will leave it to you as an exercise to modify the code previously
posted so that it can run multithreaded and have recovery points.
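As a starting point, here is a hedged sketch of one way that could look
(worker and the sizes are invented; they assume a 64-bit build, and on
systems that overcommit memory the failure may surface differently):

#include <cstdio>
#include <new>
#include <thread>
#include <vector>

// Hypothetical worker: each thread performs its own "large", input-driven
// allocation inside its own recovery point.
void worker(const char* name, std::size_t n)
{
    try {
        std::vector<char> buffer;
        buffer.reserve(n);  // the one large allocation for this thread
        std::printf("%s: got %zu bytes\n", name, n);
        // ... process ...
    } catch (const std::bad_alloc&) {
        // This thread's request was too large; the other threads' smaller
        // allocations are unaffected and keep running.
        std::printf("%s: request for %zu bytes rejected, continuing\n", name, n);
    }
}

int main()
{
    std::thread big_thread(worker, "big", std::size_t(1) << 46);     // deliberately oversized
    std::thread small_thread(worker, "small", std::size_t(1) << 20); // modest request
    big_thread.join();
    small_thread.join();
}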
It's no different from optimization: just because you claim the
hotspot is 'X' does not actually mean the hotspot is 'X'. Just
because you say, "The program will run out of memory here" does not
mean that will ever actually happen in practice.
Do you like trying to put claims in other people's mouths?
I am saying: I design the program so that the potentially large allocation
happens here, and given this design I choose to attempt to recover *if*
the allocation failure happened there.
Question: in the posted simplistic example, does the hotspot happen
where planned?
If the program has an allocation failure elsewhere, the result is the
same as what you preach: terminate. So there is no loss whatsoever for other
allocation failures, and a gain for some specific allocation failures.
No, it doesn't highlight that memory is available in general. It
highlights memory was available in whatever asinine test cases you
came up with, or that memory was not allocated in that particular
situation. I'm not sure why you think an inductive proof has any
value here whatsoever. I'm also not sure why you think such a
simplistic example has any value whatsoever.
A simplistic example can disprove a generality.
You argue against ever attempting to recover from OOM because it's so
much more difficult than anyone can imagine. That it's so difficult
that it is never, ever worth even attempting.
The simplistic example demonstrates that it is perfectly possible, and
not necessarily complex, in specific cases if you design your
application this way.
If you choose to design your application in such a way that nowhere is
it possible to recover from an allocation failure, it is *your* choice.
The goal is to improve robustness by handling OOM. It was already
stated that using iostreams as-is will not do this, so I'm not sure
what you hoped to prove by writing this example. Write an example
that actually improves robustness. And since you claimed this can be
done in a multi-threaded program without impacting the other threads,
do that too. Otherwise, you haven't demonstrated anything of any
value whatsoever.
Bullshit! (sorry for the rudeness but you asked for it)
The example supplied improves robustness, since even after an OOM error
the program recovers and can keep processing further inputs.
The example can be extended to multithreading using the same principle.
It's simple to do. Can you do it?
You keep claiming that everyone else is making unsupported claims.
I gave you an example supporting my claims. The example demonstrates
that it is possible to recover from some OOM errors. What about you
proving your claims?
Just for grins, try allocating all that space one byte at a time (go
ahead and leak it), so you actually fill up the freestore before
making the failed allocation. Then see how much space you have
available, if you don't outright crash your computer[2][3].
So you are advocating that I should design my application in such a
way that OOM errors are always fatal. In such a way that I
purposefully micro-allocate lots and lots of memory so that failure
will most likely happen at totally random places. Euh?!? Well, the
consequence will be that OOM errors are always fatal. That is the
way the application was designed.
I chose to design the application so that some OOM errors can be
handled. Is there a law against good design and an obligation to
always enforce stupid design?
I am not sure I understand why, in order to support your argument that
it is never possible to recover from an OOM error, everyone
should always design their applications so that they are purposefully
unable to recover from an OOM error.
The key to the design is precisely that it does not allocate memory
one byte at a time. Because of this design, the application will never
run out of memory altogether purely due to input complexity. It
doesn't matter whether it is multi-threaded or single-threaded. The
application uses malloc/new/allocator in a way similar to how you
suggested using predefined arbitrary limits: instead of checking that
the input is of lower complexity than X, it checks "do I have enough
resources to process this input, and if so, reserve them immediately".
The std::bad_alloc is just the allocator answering "no" to the first
question. The failed "new" does not affect the available resources;
it fails to acquire them.
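A rough sketch of that "ask the allocator, then reserve" idea
(try_reserve is a made-up helper name and the sketch assumes C++17 for
std::optional; the posted code structures this differently, but the
principle is the same):

#include <new>
#include <optional>
#include <vector>

// Hypothetical helper: instead of comparing the input against a hard-coded
// complexity limit, ask the allocator whether the resources are available
// and, if so, reserve them all in one step.
std::optional<std::vector<char>> try_reserve(std::size_t n)
{
    try {
        std::vector<char> buf;
        buf.reserve(n);  // single up-front reservation, no drip-by-drip
        return buf;
    } catch (const std::bad_alloc&) {
        // The allocator answered "no".  The failed request does not reduce
        // the resources available to the rest of the program.
        return std::nullopt;
    }
}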
Such a simplistic handler will not protect other threads from failing
during stack unwind for the original std::bad_alloc. How can it
possibly do so?
Given your claimed superior knowledge of allocators, please enlighten us
as to why the same pattern would fail in a multithreaded setup.
Can you clarify why thread #1 *failing* to allocate a large block of
memory would directly stop thread #2 from being able to allocate a small
block of memory? Is your allocator not thread-safe?
It will work as advertised, assuming you design your multithreaded
application intelligently and build in places where recovery is
possible. If the allocation failure was due to the size of the allocation,
the other threads will keep working fine. If the allocation failure
was due to some other reason, the program will terminate.
You choose to believe it is not possible. I know it is possible (in
limited circumstances, for specific cases, if you design carefully,
even in multithreaded applications).
Obviously, if you design your application to leak memory one byte at a
time, the application is likely to fail at any random point and will
most probably not be able to recover. That's your design and your
choice.
You haven't yet designed a system for this purpose, so your opinion is
worth nothing.
This would be worth a rude reply, but I'll skip it.
BTW: all your claims so far are unsupported. Pot, kettle, black?
[3] This is of course one reason that restarting is inherently
superior to handling OOM: if your program does leak memory, then
restarting the program will get that memory back. Plus, you will
eventually have to take it out of service anyway to plug the leak for
good.
If your program leaks memory, you should fix it, not rely on periodic
restarts. Sorry, but IMO crashing and restarting is inherently
inferior to not crashing and staying in a fully stable state. We will
have to agree to disagree, but I doubt users who see the app crash will
be particularly happy.
Are you now recommending that, just in case an application may have
been written by an incompetent programmer and leaks memory, every
persistent service application in the world should always be restarted
periodically?
BTW: designing your application so that it attempts to recover from
some errors does not mean that you can't also have a monitor that
restarts the application if it crashes.
Sorry about the tone of some of my comments but the style of your
answer annoyed me. I think the discussion is worthwhile and
interesting.
Yannick