kathy said:
I want to use the std::vector::push_back function to keep data in RAM. What
will happen if no more memory is available?
Alf P. Steinbach said:
* kathy: What will happen if no more memory is available?
In practice, what happens on a modern system as free memory
starts to become exhausted and/or very fragmented, is that the
system slows to a crawl, so, you're unlikely to actually reach
that limit for a set of small allocations.
However, a very large allocation might fail.
In that case, the C++ standard guarantees a std::bad_alloc
exception as the default response, provided the allocation at
the bottom was via 'new'.
This default response can be overridden in three ways:
* By replacing the class' operator new (this is just an
unfortunate name for the allocation function; it's
*called* by a 'new' expression, which after that call
proceeds to call the specified constructor, i.e. you're
not overriding what a 'new' expression does by defining
operator new).
* By replacing the global namespace operator new (ditto
comment).
* By installing a new-handler (see set_new_handler, I think
the name was).
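To make the default response and the new-handler route concrete, here is a
minimal sketch (the handler name on_out_of_memory is mine, not anything
standard):

    #include <iostream>
    #include <new>
    #include <vector>

    // New-handler: operator new calls this when an allocation fails.
    // It must free some memory, throw, or terminate; resetting the
    // handler to null (as here) makes the retried allocation throw
    // std::bad_alloc instead.
    void on_out_of_memory()
    {
        std::set_new_handler(0);
    }

    int main()
    {
        std::set_new_handler(on_out_of_memory);

        std::vector<double> data;
        try {
            for (;;)
                data.push_back(42.0);   // the default allocator goes
                                        // through ::operator new
        } catch (std::bad_alloc const&) {
            std::cerr << "bad_alloc after " << data.size()
                      << " elements\n";
        }
        return 0;
    }

On a system that overcommits or pages heavily, the process may crawl or get
killed before the exception is ever seen, which is what the rest of this
thread is about.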
On modern systems, it's not rare for the actual memory to be the
same size as the virtual memory, which means that in practice,
you'll never page.
(4GB for both seems to be a common figure,
even on 64 bit systems.) The phenomenon you describe would
mainly apply to older systems.
Or not. Some OS's don't tell you when there's not enough
virtual memory, in which case, the allocation works, but you get
a core dump when you use the memory.
Example?
Which is required in the default allocator.
Interestingly enough, this has absolutely no effect on
std::vector---std::vector< T > will always use ::operator new,
even if T has a class specific allocator.
* By instantiating the vector with a custom allocator.
In all cases, however... The allocator must return a valid
pointer, if it returns. So the only possible ways of handling
an error are to throw an exception or to terminate the program.
Alf P. Steinbach wrote:
.. but be aware that (at least some of) the Microsoft compilers are
defective in this area, returning NULL instead of an exception.
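For what it's worth, standard C++ does offer a null-returning form
explicitly, so the two behaviors being contrasted look like this (the array
size is only illustrative):

    #include <cstddef>
    #include <new>

    int main()
    {
        // Plain new must throw std::bad_alloc on failure; the defect
        // mentioned above is that some compilers returned 0 there.
        // The nothrow form is the standard way to get a null pointer
        // back instead of an exception:
        double* p = new (std::nothrow) double[50 * 1000 * 1000];
        if (p == 0) {
            return 1;   // handle failure without exception machinery
        }
        delete[] p;
        return 0;
    }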
Actually there's a fourth: Write your own memory allocator and
give it to your std::vector instance as a template parameter.
(Not that this would be any easier than any of the above, but
just pointing out.)
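A skeleton of that fourth option, in C++03 style since that is the context
here (malloc_allocator is a made-up name; only the essentials carry any
logic, the rest is required boilerplate):

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    // Minimal malloc-based allocator. An allocator must return a
    // valid pointer if it returns at all, so the only error
    // responses are to throw or to terminate.
    template <class T>
    class malloc_allocator {
    public:
        typedef T               value_type;
        typedef T*              pointer;
        typedef T const*        const_pointer;
        typedef T&              reference;
        typedef T const&        const_reference;
        typedef std::size_t     size_type;
        typedef std::ptrdiff_t  difference_type;

        template <class U> struct rebind { typedef malloc_allocator<U> other; };

        malloc_allocator() {}
        template <class U> malloc_allocator(malloc_allocator<U> const&) {}

        pointer allocate(size_type n, void const* = 0)
        {
            void* p = std::malloc(n * sizeof(T));
            if (!p) throw std::bad_alloc();  // or try to free caches first
            return static_cast<pointer>(p);
        }
        void deallocate(pointer p, size_type) { std::free(p); }

        pointer address(reference r) const { return &r; }
        const_pointer address(const_reference r) const { return &r; }
        size_type max_size() const { return size_type(-1) / sizeof(T); }
        void construct(pointer p, T const& v) { new (p) T(v); }
        void destroy(pointer p) { p->~T(); }
    };

    template <class T, class U>
    bool operator==(malloc_allocator<T> const&, malloc_allocator<U> const&)
    { return true; }
    template <class T, class U>
    bool operator!=(malloc_allocator<T> const&, malloc_allocator<U> const&)
    { return false; }

    int main()
    {
        std::vector<int, malloc_allocator<int> > v;
        v.push_back(1);   // allocations now go through malloc_allocator
        return 0;
    }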
No, that theory isn't practice. I think what you would have meant to
write, if you thought about it, would have been "which means that,
provided there are no other processes using much memory, an ideal OS won't
page". And some systems may work like that, and when you're lucky you
don't have any other processes competing for actual memory.
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.
Well, my experience and interest in recent years has mostly been with
Windows.
AFAIK it's quite difficult to make Windows refrain from paging.
This is a problem for some people implementing Windows based servers.
Example?
"Alf P. Steinbach"
You set the swap file size to 0 and there is no swap, ever.
On a machine
that has 2GB or 4GB of RAM, it is a reasonable setting. (I definitely use such
a configuration with Windows, and only add swap after there is a clear
reason...)
Adding so much swap to gain your crawl effect is practical for what?
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.
That is another hard limit: very common OSes use a memory layout model that
only gives out 2GB or 3GB of address space, and then that is it. (When Win32
was introduced with this property back in the 90s, we were talking about the
new '640K shall be enough for everyone' problem, though the usual memory in
PCs of those days was 8-16MB, and filling a gig looked insane -- but bloatware
is like a gas, it fills every cubic bit of space.)
Yeah.
[snip] Example?
Linux. Google for memory overcommit. (Or there is a good description in
Exceptional C++ Style.) It is exactly as nasty as it sounds -- the OS just
gives you address space, and when you run out of pages you get shot on ANY
access. There is no way to write a conforming C++ implementation for such an
environment.
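A sketch of why such an environment defeats the standard's guarantee: under
overcommit, the allocation below can appear to succeed even when the pages
don't exist, and the failure only surfaces when the memory is first touched.
This assumes a 64-bit system, and the 8GB figure is only illustrative:

    #include <cstddef>
    #include <cstring>
    #include <iostream>
    #include <new>

    int main()
    {
        std::size_t const size =
            std::size_t(8) * 1024 * 1024 * 1024;   // 8GB, illustrative
        try {
            char* p = new char[size];   // may "succeed" under overcommit
            std::cout << "allocation succeeded\n";
            std::memset(p, 1, size);    // touching the pages is what can
                                        // get the process shot (OOM killer)
            delete[] p;
        } catch (std::bad_alloc const&) {
            std::cout << "bad_alloc\n"; // the conforming outcome
        }
        return 0;
    }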
* James Kanze:
No, that theory isn't practice.
I think what you would have meant to write, if you thought
about it, would have been "which means that, provided there
are no other processes using much memory, an ideal OS won't
page". And some systems may work like that, and when you're
lucky you don't have any other processes competing for actual
memory.
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of
the address space and get a std::bad_alloc before the OS
starts thrashing.
Well, my experience and interest in recent years has mostly
been with Windows.
AFAIK it's quite difficult to make Windows refrain from paging.
This is a problem for some people implementing Windows based
servers.
Example?
Yeah, I lost the context of std::vector, sorry.
"Alf P. Steinbach"
Fighting memory exhaustion is IME a lost battle on Windows, at
least it was a few years ago. You launch 'the elephant'
(a stress application from the SDK) and everything around you
crashes, while the rest is there unusable, as dialogs pop up
with broken text, ill buttons, etc.
In an MFC app, I abandoned hope of recovery from memory errors --
no matter what I did in my part, there were always portions of
the lib that failed; also, not using any memory on teardown is
too tricky. (And as mentioned, the GUI tends to break...)
If the whole memory management were done in a single place, it
could hibernate the process until memory appears, but that is
not the case: different libs use a mix of new, malloc in the
language, a couple of WIN32 APIs, and later some COM
interfaces too.
system/performance/virtual mem -> set to 0. (With some recovery
settings there is a ~2MB lower limit.)
As the paging strategy is 'interesting' to say the least -- the
system writes to swap even when you use just 15% of the RAM --
it is well advised to turn it off if you have 2GB for simple
'desktop' use...
A server is certainly another kind of thing: it needs much
memory, and swap is beneficial for surviving usage bursts; also
here the system has a better chance of fair swap use, keeping
preprocessed tables (indexes, compilations) offline until the
next need.
Linux. Google for memory overcommit. (Or there is a good
description in Exceptional C++ Style.) It is exactly as
nasty as it sounds -- the OS just gives you address space, and
when you run out of pages you get shot on ANY access.
There is no way to write a conforming C++ implementation for
such an environment.
In summary, standard C++ memory exhaustion detection is
unreliable on Linux and in general unreliable on Windows. ;-)
It is on two of the systems I have before me: Solaris and
Windows. Linux has some tweaks where it will overcommit, but
if the size of the virtual memory is equal to the size of the
real memory (using Unix terminology---I would have said that the
size of the virtual memory is 0, and all memory is real), then
you'll never page. It's quite possible to run Solaris (and
probably Windows) with no disk space allocated for virtual
memory.
Alf P. Steinbach said:
First, very few would use a program that modified the system page file
settings just to simplify its own internal memory handling.
Second, there's no way to do it in standard C++.
Third, the context wasn't about how to turn page file usage off, it was
about what happens on a system with enough memory but page file usage
enabled.
I'm sorry but that's a meaningless question, incorporating three false
assumptions.
First, not all systems have enough physical RAM to turn off use of page
file, even if a program could recommend to the user that he/she do that,
so this assumption is false.
Second, right now we're at a special moment in time where on most PCs the
physical RAM size matches the address space size available to and used by
most programs. But as you note below, "bloatware is like gas, fills every
cubic bit of space". It's also known as Wirth's law. And it means that in
a few years we'll be back at the usual situation where the processes running
in total use far, far more virtual memory than there is physical RAM.
So the crawl effect is what is observed in general, and it does not belong
to any particular person, so this assumption is also false.
Third, the assumption that most or all PCs are configured with use of page
file turned off, so that one would actively have to turn it on, is also
false.
In summary, standard C++ memory exhaustion detection is unreliable on
Linux and in general unreliable on Windows. ;-)
James Kanze said:
Note that for my applications, this is absolutely unacceptable.
It's better for the application to crash than for it to simply
hibernate seconds, minutes, days...
Worse: some process gets shot. One person (Gabriel Dos Reis, I
think) told me of a case where the "init" process was killed
(which meant that no one could log in---including the
administrator). It may be just a psychological effect, but I
have the impression that emacs is a favorite target.
At least under Linux, you can turn this behavior off. (Although
from what I understand, it isn't recommended.)
Err, I did not mean the program, I meant YOU. The user or admin of the
machine. A program certainly has no business with swap or the memory or
anything.
Possibly I misread; the context I was aware of was what happens when memory
is exhausted, as seen from the perspective of a C++ program that just keeps
adding items to a vector, to see what eventually happens.
You claimed that the system will most likely crawl before anything else is
observed. I didn't notice anyone inject a restriction of the system to
include a swap configured way beyond physical memory, and the physical
memory set way lower than the process' address limit.
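For concreteness, the experiment in question is just this; whether it ends
with a bad_alloc (address space or commit limit reached first), a paging
crawl, or the process getting shot depends on the system, which is the whole
point of contention. The 1MB chunk size is arbitrary:

    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <vector>

    int main()
    {
        // Grab memory in 1MB chunks and report how far we get.
        // Filling with 'x' actually touches the pages, so this
        // measures real commit, not just address space.
        std::vector<std::vector<char>*> chunks;
        std::size_t mb = 0;
        try {
            for (;;) {
                chunks.push_back(new std::vector<char>(1024 * 1024, 'x'));
                ++mb;
            }
        } catch (std::bad_alloc const&) {
            std::cout << "bad_alloc after " << mb << " MB\n";
        }
        return 0;
    }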
Is this a problem with Windows, per se, or with the majority of
Windows applications? Be it Windows or Unix, most applications
don't handle out of memory conditions gracefully---they crash,
or if they manage to avoid crashing, they are missing some vital
resources (like the text for a button).
Alf, there are quite a lot of 32-bit systems around with 2 and 4GB of RAM
these days. They'll run out of address space first, and the alloc will
fail before they've ground to a halt.