Memory issue


kathy

I want to use the std::vector::push_back function to keep data in RAM. What
will happen if no more memory is available?
 

Alf P. Steinbach

* kathy:
I want to use the std::vector::push_back function to keep data in RAM. What
will happen if no more memory is available?

In practice, what happens on a modern system as free memory starts to become
exhausted and/or very fragmented, is that the system slows to a crawl, so,
you're unlikely to actually reach that limit for a set of small allocations.

However, a very large allocation might fail.

In that case, the C++ standard guarantees a std::bad_alloc exception as the
default response, provided the allocation at the bottom was via 'new'.

This default response can be overridden in three ways:

* By replacing the class' operator new (this is just an unfortunate name
for the allocation function; it's *called* by a 'new' expression, which
after that call proceeds to call the specified constructor, i.e. you're
not overriding what a 'new' expression does by defining operator new).

* By replacing the global namespace operator new (ditto comment).

* By installing a new-handler (see set_new_handler, I think the name was).

In general it's difficult to handle memory exhaustion in a good way. What to do
depends on whether it was a small allocation (uh oh, skating on thin ice,
further execution will likely fail, best to terminate) or a very large
allocation (hm, perhaps inform the user of failure to do whatever it was that
caused the allocation), or something in between (what to do?). And so many, if
not most, programs simply assume that there will be no allocation failure, ever,
although that's of course just the ostrich policy adopted by default.
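
A minimal sketch of the new-handler route (the function is std::set_new_handler,
declared in <new>), assuming the default allocator underneath: the handler runs
when an allocation via 'new' fails, and must either free some memory, throw, or
not return.

#include <cstdlib>
#include <iostream>
#include <new>

void out_of_memory()
{
    std::cerr << "allocation failed\n";
    std::abort();               // or throw std::bad_alloc() to unwind instead
}

int main()
{
    std::set_new_handler(out_of_memory);
    // From here on, any failing allocation done via 'new' (including
    // std::vector growth with the default allocator) calls out_of_memory()
    // before giving up.
}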


Cheers & hth.,

- Alf
 

Jerry Coffin

I want to use the std::vector::push_back function to keep data in RAM. What
will happen if no more memory is available?

By default, it'll end up allocating memory via new, which will
(again, by default) throw an exception.
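
A minimal sketch of what that looks like from the calling side, assuming a
system that actually reports exhaustion via the exception (rather than
overcommitting, as discussed further down):

#include <iostream>
#include <new>
#include <vector>

int main()
{
    std::vector<char> data;
    try {
        for (;;)
            data.push_back('x');            // grows until allocation fails
    } catch (const std::bad_alloc&) {
        // push_back has the strong guarantee: 'data' is still valid here.
        std::cerr << "out of memory after " << data.size() << " bytes\n";
    }
}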
 

Juha Nieminen

Alf said:
This default response can be overridden in three ways:

* By replacing the class' operator new (this is just an unfortunate name
for the allocation function; it's *called* by a 'new' expression, which
after that call proceeds to call the specified constructor, i.e. you're
not overriding what a 'new' expression does by defining operator new).

* By replacing the global namespace operator new (ditto comment).

* By installing a new-handler (see set_new_handler, I think the name
was).

Actually there's a fourth: Write your own memory allocator and give it
to your std::vector instance as template parameter. (Not that this would
be any easier than any of the above, but just pointing out.)
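
A rough sketch of such an allocator (the name is mine, purely illustrative),
assuming C++11 allocator_traits, under which value_type plus allocate/deallocate
is enough for std::vector:

#include <cstddef>
#include <new>
#include <vector>

template <class T>
struct MyAllocator {
    using value_type = T;

    MyAllocator() = default;
    template <class U> MyAllocator(const MyAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        // If this fails it must throw (e.g. std::bad_alloc) or not return;
        // handing back an invalid pointer is not an option.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const MyAllocator<T>&, const MyAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const MyAllocator<T>&, const MyAllocator<U>&) { return false; }

int main()
{
    std::vector<int, MyAllocator<int>> v;
    v.push_back(42);    // element storage now comes from MyAllocator::allocate
}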
 

James Kanze

In practice, what happens on a modern system as free memory
starts to become exhausted and/or very fragmented, is that the
system slows to a crawl, so, you're unlikely to actually reach
that limit for a set of small allocations.

On modern systems, it's not rare for the actual memory to be the
same size as the virtual memory, which means that in practice,
you'll never page. (4GB for both seems to be a common figure,
even on 64 bit systems.) The phenomenon you describe would
mainly apply to older systems.
However, a very large allocation might fail.

Or not. Some OS's don't tell you when there's not enough
virtual memory, in which case, the allocation works, but you get
a core dump when you use the memory.
In that case, the C++ standard guarantees a std::bad_alloc
exception as the default response, provided the allocation at
the bottom was via 'new'.

Which is required in the default allocator.
This default response can be overridden in three ways:
* By replacing the class' operator new (this is just an
unfortunate name for the allocation function; it's
*called* by a 'new' expression, which after that call
proceeds to call the specified constructor, i.e. you're
not overriding what a 'new' expression does by defining
operator new).

Interestingly enough, this has absolutely no effect on
std::vector---std::vector< T > will always use ::operator new,
even if T has a class specific allocator.
* By replacing the global namespace operator new (ditto
comment).
* By installing a new-handler (see set_new_handler, I think
the name was).

* By instantiating the vector with a custom allocator.

In all cases, however... The allocator must return a valid
pointer, if it returns. So the only possible ways of handling
an error are to throw an exception or to terminate the program.
 

Alf P. Steinbach

* James Kanze:
On modern systems, it's not rare for the actual memory to be the
same size as the virtual memory, which means that in practice,
you'll never page.

No, that theory isn't practice. I think what you would have meant to write, if
you thought about it, would have been "which means that, provided there are no
other processes using much memory, an ideal OS won't page". And some systems may
work like that, and when you're lucky you don't have any other processes
competing for actual memory. :)

Andy Chapman's comment else-thread was more relevant, I think.

Because with enough RAM you can conceivably hit the limit of the address space
and get a std::bad_alloc before the OS starts thrashing.

(4GB for both seems to be a common figure,
even on 64 bit systems.) The phenomenon you describe would
mainly apply to older systems.

Well, my experience and interest in recent years has mostly been with Windows.

AFAIK it's quite difficult to make Windows refrain from paging.

This is a problem for some people implementing Windows based servers.

Or not. Some OS's don't tell you when there's not enough
virtual memory, in which case, the allocation works, but you get
a core dump when you use the memory.
Example?



Which is required in the default allocator.

That's what I wrote, yes.

Interestingly enough, this has absolutely no effect on
std::vector---std::vector< T > will always use ::operator new,
even if T has a class specific allocator.

Yeah, I lost the context of std::vector, sorry.

* By instantiating the vector with a custom allocator.

In all cases, however... The allocator must return a valid
pointer, if it returns. So the only possible ways of handling
an error are to throw an exception or to terminate the program.

Yes, that's correct.


Cheers,

- Alf
 

James Kanze

Alf P. Steinbach wrote:
.. but be aware that (at least some of) the Microsoft compilers are
defective in this area, returning NULL instead of an exception.

Be aware that a lot of compilers or systems have problems in
this regard. By default, for example, Linux will calmly tell
the program it has the memory, then cause the program to core
dump when it tries to use it. Some versions or configurations
of AIX or HP/UX also have this problem. And on at least one
configuration of Windows that I tried (NT, a long time ago),
when there wasn't enough memory, the system popped up a dialog
box asking you to kill other processes so it could retry;
operator new didn't return until you clicked on the dialog.

At any rate, the current Microsoft compilers (from Visual Studio
8) don't seem to have this problem. I also couldn't create
Alf's problem with my small tests. I was able to allocate 1917
blocks of 1MB, touching one byte every 1024 in each block in
order to be sure that it was really allocated, and I got a
bad_alloc, with no apparent thrashing or any other bad
effects. (I'm afraid I don't know the exact configuration.
It's just a machine on my desk, which I normally only use for
email.)
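
A rough reconstruction of that kind of probe (my sketch, not the code actually
used): allocate 1MB blocks, touch one byte per KB so the pages are really
committed, and stop at the first bad_alloc. On a Linux box with overcommit
enabled the process may be killed instead of seeing the exception.

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

int main()
{
    const std::size_t block = 1024 * 1024;          // 1MB per allocation
    std::vector<char*> blocks;
    try {
        for (;;) {
            char* p = new char[block];
            for (std::size_t i = 0; i < block; i += 1024)
                p[i] = 1;                           // touch one byte per KB
            blocks.push_back(p);
        }
    } catch (const std::bad_alloc&) {
        std::cout << "bad_alloc after " << blocks.size() << " MB\n";
    }
    for (char* p : blocks)
        delete[] p;                                 // give it all back
}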

A lot depends on how the OS manages its swap space, if there
isn't sufficient real memory. I remember doing the same test
under Solaris 2.2---the system literally hung for about 10
minutes... before I got a null pointer and could free the
memory. Solaris 2.4 was already a lot better, and while it
would thrash considerably (this was with 48MB real memory, and a
swap space of maybe 100 GB), the mouse would still react, and
you could use an xterm, albeit slowly.
 

James Kanze

Actually there's a fourth: Write your own memory allocator and
give it to your std::vector instance as template parameter.
(Not that this would be any easier than any of the above, but
just pointing out.)

It's not a question of easier; each solution does something
different. Replacing the class' operator new doesn't affect
allocations in an std::vector. Replacing the global operator
new, or installing a new handler, affects all allocations,
everywhere. Using a custom memory allocator affects only
allocations in the vector itself.
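
A small sketch of that difference, using a hypothetical Widget class of my own:
the class-specific operator new is used by 'new Widget', but std::vector<Widget>
obtains its storage from std::allocator, i.e. from ::operator new.

#include <cstdio>
#include <new>
#include <vector>

struct Widget {
    int value = 0;

    static void* operator new(std::size_t n)
    {
        std::puts("Widget::operator new");
        return ::operator new(n);
    }
    static void operator delete(void* p) { ::operator delete(p); }
};

int main()
{
    Widget* w = new Widget;          // prints "Widget::operator new"
    delete w;

    std::vector<Widget> v;
    v.push_back(Widget{});           // storage comes from ::operator new;
                                     // nothing is printed here
}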
 

Balog Pal

"Alf P. Steinbach"
No, that theory isn't practice. I think what you would have meant to
write, if you thought about it, would have been "which means that,
provided there are no other processes using much memory, an ideal OS won't
page". And some systems may work like that, and when you're lucky you
don't have any other processes competing for actual memory. :)

You set the swap file size to 0 and there is no swap, ever. On a machine
that has 2G or 4G RAM, it is a reasonable setting. (I definitely use such a
configuration with Windows, and only add swap after there is a clear
reason...)

Adding so much swap to gain your crawl effect is practical for what?
Andy Chapman's comment else-thread was more relevant, I think.

Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.

That is another hard limit: very common OSes use a memory layout model that
only gives out 2G or 3G of address space, and then that is it. (When WIN32 was
introduced with this property back in the 90s we were talking about the new
'640k shall be enough for everyone' problem, though the usual memory in PCs in
those days was 8-16M, and filling a gig looked insane -- but bloatware is like
a gas, it fills every cubic bit of space. ;)
Well, my experience and interest in recent years has mostly been with
Windows.

Fighting memory exhaustion is IME a lost battle on win, at least it was a
few years ago. You launch 'the elephant' (stress application from the sdk) and
everything around you crashes, while the rest sits there unusable, as dialogs
pop up with broken text, ill buttons, etc.

In an MFC app, I abandoned hope of recovery from memory errors -- no matter
what I did in my part, there were always portions of the lib that failed; also,
not using any memory during teardown is too tricky. (And as mentioned, the GUI
tends to break...)

If all the memory management were done in a single place, it could
hibernate the process until memory appears, but that is not the case:
different libs use a mix of new and malloc in the language, a couple of WIN32
APIs, and later some COM interfaces too.
AFAIK it's quite difficult to make Windows refrain from paging.

System/Performance/Virtual memory -> set to 0. (With some recovery settings
there is a ~2M lower limit.)

As the paging strategy is 'interesting', to say the least -- the system writes
to swap even when you use just 15% of the RAM -- it is well advised to turn it
off if you have 2G for simple 'desktop' use...
This is a problem for some people implementing Windows based servers.

A server is certainly another kind of thing: it needs much memory, and swap is
beneficial for surviving usage bursts; here the system also has a better chance
of making fair use of swap, keeping preprocessed tables (indexes, compilations)
offline until the next need.

Linux. Google for memory overcommit. (Or there is a good description in
Exceptional C++ Style.) It is exactly as nasty as it sounds -- the OS just
gives you address space, and when it runs out of pages you get shot on ANY
access. There is no way to write a conforming C++ implementation for such an
environment. :(
 

Alf P. Steinbach

* Balog Pal:
"Alf P. Steinbach"

You set the swap file size to 0 and there is no swap, ever.

First, very few would use a program that modified the system page file settings
just to simplify its own internal memory handling.

Second, there's no way to do it in standard C++.

Third, the context wasn't about how to turn page file usage off, it was about
what happens on a system with enough memory but page file usage enabled.

On a machine that has 2G or 4G RAM, it is a reasonable setting. (I definitely
use such a configuration with Windows, and only add swap after there is a clear
reason...)

Adding so much swap to gain your crawl effect is practical for what?

I'm sorry but that's a meaningless question, incorporating three false assumptions.

First, not all systems have enough physical RAM to turn off use of page file,
even if a program could recommend to the user that he/she do that, so this
assumption is false.

Second, right now we're at a special moment in time where on most PCs the
physical RAM size matches the address space size available to and used by most
programs. But as you note below, "bloatware is like gas, fills every cubic bit
of space". It's also known as Wirth's law. And it means that in a few years
we'll be back at the usual situation where the processes running in total use far,
far more virtual memory than there is physical RAM. So the crawl effect is what
is observed in general, and it does not belong to any particular person, so
also this assumption is false.

Third, the assumption that most or all PCs are configured with use of page file
turned off, so that one would actively have to turn it on, is also false.

Andy Chapman's comment else-thread was more relevant, I think.

Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.

That is another hard limit: very common OSes use a memory layout model that
only gives out 2G or 3G of address space, and then that is it. (When WIN32 was
introduced with this property back in the 90s we were talking about the new
'640k shall be enough for everyone' problem, though the usual memory in PCs in
those days was 8-16M, and filling a gig looked insane -- but bloatware is like
a gas, it fills every cubic bit of space. ;)
Yeah.


[snip]

Linux. Google for memory overcommit. (Or there is a good description in
Exceptional C++ Style.) It is exactly as nasty as it sounds -- the OS just
gives you address space, and when it runs out of pages you get shot on ANY
access. There is no way to write a conforming C++ implementation for such an
environment. :(

In summary, standard C++ memory exhaustion detection is unreliable on Linux and
in general unreliable on Windows. ;-)


Cheers,

- Alf
 

James Kanze

* James Kanze:
No, that theory isn't practice.

It is on two of the systems I have before me: Solaris and
Windows. Linux has some tweaks where it will over allocate, but
if the size of the virtual memory is equal to the size of the
real memory (using Unix terminology---I would have said that the
size of the virtual memory is 0, and all memory is real), then
you'll never page. It's quite possible to run Solaris (and
probably Windows) with no disk space allocated for virtual
memory.
I think what you would have meant to write, if you thought
about it, would have been "which means that, provided there
are no other processes using much memory, an ideal OS won't
page". And some systems may work like that, and when you're
lucky you don't have any other processes competing for actual
memory. :)

No. The system has a fixed resource: virtual memory. In the
usual Unix terminology, the "size" of the virtual memory is the
size of the main memory, plus the size(s) of the swap partitions
(called tmpfs under Linux). For various reasons, it's very
unusual to run without swap/temporary partition, but it's
possible to configure the system to do so. And if you do, the
size of the virtual memory is equal to the size of the real
memory, and you never page anything out.
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of
the address space and get a std::bad_alloc before the OS
starts thrashing.

That's a likely case with Windows, where you can only access
half of the available address space. But you should also be
able to configure the system so that it doesn't use more than
the available main memory.
Well, my experience and interest in recent years has mostly
been with Windows.
AFAIK it's quite difficult to make Windows refrain from paging.
This is a problem for some people implementing Windows based
servers.

I can imagine it would be. Perhaps the reason I'm so sure that
this can be done is that I work on large scale, high performance
servers, where we can't tolerate paging. But they're all Unix
systems---the only ones I've actually been involved in at that
level have been Sun OS and Solaris.

Linux. AIX in some configurations. Reportedly HP/UX. I seem
to recall some weird behavior from Windows NT as well.
Yeah, I lost the context of std::vector, sorry.

Actually, I think it's an interesting point. If I provide a
class specific allocator, it's because I want all of the memory
for objects of the class allocated with that allocator.
std::vector screws that.

It's something that hadn't occurred to me before.
 

James Kanze

"Alf P. Steinbach"
Fighting memory exhaustion is IME a lost battle on win, at
least it was a few years ago. You launch 'the elephant'
(stress application from the sdk) and everything around you
crashes, while the rest sits there unusable, as dialogs pop up
with broken text, ill buttons, etc.

Is this a problem with Windows, per se, or with the majority of
Windows applications? Be it Windows or Unix, most applications
don't handle out of memory conditions gracefully---they crash,
or if they manage to avoid crashing, they are missing some vital
resources (like the text for a button).
In an MFC app, I abandoned hope of recovery from memory errors --
no matter what I did in my part, there were always portions of
the lib that failed; also, not using any memory during teardown is
too tricky. (And as mentioned, the GUI tends to break...)
If all the memory management were done in a single place,
it could hibernate the process until memory appears, but that is
not the case: different libs use a mix of new and malloc in the
language, a couple of WIN32 APIs, and later some COM
interfaces too.

I've seen this on at least one Windows machine (many years back,
Windows NT professional, IIRC). Out of memory caused a dialog
box to pop up, asking you to kill other applications; when you
clicked to clear it, it retried the allocation. And popped up
the dialog again if the allocation failed.

Note that for my applications, this is absolutely unacceptable.
It's better for the application to crash than for it to simply
hibernate seconds, minutes, days...
System/Performance/Virtual memory -> set to 0. (With some recovery
settings there is a ~2M lower limit.)
As the paging strategy is 'interesting', to say the least -- the
system writes to swap even when you use just 15% of the RAM -- it
is well advised to turn it off if you have 2G for simple 'desktop'
use...
A server is certainly another kind of thing: it needs much
memory, and swap is beneficial for surviving usage bursts; here
the system also has a better chance of making fair use of swap,
keeping preprocessed tables (indexes, compilations) offline until
the next need.

The servers I work on generally run on dedicated machines (often
with no terminal attached), configured with enough memory for
them to run entirely in main memory. If we run out of memory,
it can only be due to a memory leak, so the best reaction is to
terminate the program, and have it start over.
Linux. Google for memory overcommit. (Or there is a good
description in Exceptional C++ Style.) It is exactly as
nasty as it sounds -- the OS just gives you address space, and
when it runs out of pages you get shot on ANY access.

Worse: some process gets shot. One person (Gabriel Dos Reis, I
think) told me of a case where the "init" process was killed
(which meant that no one could log in---including the
administrator). It may be just a psychological effect, but I
have the impression that emacs is a favorite target.
There is no way to write a conforming C++ implementation for
such an environment. :(

At least under Linux, you can turn this behavior off. (Although
from what I understand, it isn't recommended.)
 

James Kanze

[...]
In summary, standard C++ memory exhaustion detection is
unreliable on Linux and in general unreliable on Windows. ;-)

There are several issues. First, if you're writing important
software, it's not unreasonable to request that the client
configure his machine in consequence. Which may mean turning
off features of the "default configuration" if they can cause
problems. (See "man sysctl" and the variable
vm.overcommit_memory under Linux. Balog has already explained
the necessary steps under Windows. Solaris works correctly "out
of the box".)

Second, there is a difference: as far as I can tell, windows
will report the error to you. The machine may become very, very
slow (as did early Solaris), but it still works, and you still
get an std::bad_alloc. Under Linux, unless you've reconfigured
vm.overcommit_memory, no. You get no indication of a possible
error---the program just crashes.
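
For reference, the setting Kanze mentions is visible through procfs; a small
Linux-only sketch of my own (0 = heuristic overcommit, 1 = always overcommit,
2 = strict accounting, where allocation failures are reported reliably):

#include <fstream>
#include <iostream>

int main()
{
    // procfs view of the vm.overcommit_memory sysctl
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    if (f >> mode)
        std::cout << "vm.overcommit_memory = " << mode << '\n';
    else
        std::cout << "could not read the setting (not Linux?)\n";
}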
 

Alf P. Steinbach

* James Kanze:
It is on two of the systems I have before me: Solaris and
Windows. Linux has some tweaks where it will over allocate, but
if the size of the virtual memory is equal to the size of the
real memory (using Unix terminology---I would have said that the
size of the virtual memory is 0, and all memory is real), then
you'll never page. It's quite possible to run Solaris (and
probably Windows) with no disk space allocated for virtual
memory.

There are three sizes involved: total page file size, available-to-process
address space (which is the virtual memory, and is per process) and physical
memory size.

What you're describing is total page file size 0.

That's different from having physical memory size match the virtual memory size,
which contrary to your apparent claim doesn't much prevent page file usage, but
it seems from your clarification that it's just an issue of terminology.

[snip]


Cheers,

- Alf
 

Balog Pal

Alf P. Steinbach said:
First, very few would use a program that modified the system page file
settings just to simplify its own internal memory handling.

Err, I did not mean the program, I meant YOU. :) The user or admin of the
machine. A program certainly has no business with swap or the memory or
anything.
Second, there's no way to do it in standard C++.

Third, the context wasn't about how to turn page file usage off, it was
about what happens on a system with enough memory but page file usage
enabled.

Possibly I misread; the context I was aware of was what happens when memory
is exhausted, as seen from the perspective of a C++ program that just keeps
adding items to a vector -- and we see what eventually happens.

You claimed that the system will most likely crawl before anything else is
observed. I didn't notice anyone inject a restriction that the system have
a swap configured way beyond physical memory, with the physical memory set
way lower than the process' address limit.
I'm sorry but that's a meaningless question, incorporating three false
assumptions.

First, not all systems have enough physical RAM to turn off use of page
file, even if a program could recommend to the user that he/she do that,
so this assumption is false.

For a machine that fits the "On a machine that has 2G or 4G ram" description,
we can certainly assume that much RAM. :) The interesting question is then only
what the usual workload's memory footprint is, to make a fair recommendation.
There was neither a claim nor an assumption that every possible case is
covered by that.
Second, right now we're at a special moment in time where on most PCs the
physical RAM size matches the address space size available to and used by
most programs. But as you note below, "bloatware is like gas, fills every
cubic bit of space". It's also known as Wirth's law. And it means that in
a few years we'll be back at the usual situation where the processes running
in total use far, far more virtual memory than there is physical RAM.

The machines with much RAM will likely use a 64-bit OS, so the address space
thing will be gone. Then we'll be back to a situation similar to the one we had
5-10 years ago: the 2G limit was above the sky, yet the normal configs did not
set an arbitrary amount of swap -- just the practical 2-3x physical size, and
when the limit was reached once in a while, the machine was still responsive
enough to at least kill something. :)

As tracking the memory usage requirement is not hard, people working with
hungry apps certainly bought the next module -- and the next motherboard when
reaching a limit there. VM is only good if used lightly; with regular
overreach the price in time and nerves exceeds that of the hardware.
So the crawl effect is what is observed in general, and it does not belong
to any particular person, so also this assumption is false.

Third, the assumption that most or all PCs are configured with use of page
file turned off, so that one would actively have to turn it on, is also
false.

I did not claim or assume that either -- the default install of win does set
up some swap, so users who don't care probably live with that. That does not
mean it is good for the majority -- or even in a reasonable number of situations.

Those who care about their system's performance must do a massive amount of
configuration on win and keep their vigilance up, as almost every program you
install nowadays drops a resident update/remind/nag/whatever daemon.

In summary, standard C++ memory exhaustion detection is unreliable on
Linux and in general unreliable on Windows. ;-)

well, you can put it that way ;-/
 

Balog Pal

James Kanze said:
Note that for my applications, this is absolutely unacceptable.
It's better for the application to crash than for it to simply
hibernate seconds, minutes, days...

Sure, but for others it is perfectly sensible. Especially those used on a
really personal computer's desktop. And the program does not even have to
bother with the dialog itself; Windows has such a system popup. Too bad that by
the time it surfaces, most stuff has broken. (I guess one fair reason is that
calls to the USER module attempt allocation and return with failure, and those
functions are really not checked for their return value, or only in an assert...)

As a user, if I work in the editor and see the system dialog, I would
happily close something else (the problem is not with one program, but with
launching lots, or some zombies that failed to quit completely), and get back
to work.

On an unattended system, certainly, who will decide what to kill? And with the
VM+crawl it takes ages until some process progresses far enough to finish a
hungry request and release enough memory to catch a breath.
Worse: some process gets shot. One person (Gabriel Dos Reis, I
think) told me of a case where the "init" process was killed
(which meant that no one could log in---including the
administrator). It may be just a psychological effect, but I
have the impression that emacs is a favorite target.

That is another thing -- the system shooting at a random process to free
memory. It also happens. What I was talking about is the simple consequence of
overcommit, when you "got" the block, and at some later moment want to
actually use it. On the page fault the system fails to get a real page, and
the result is crashing that process on that access. From a C++ POV it looks
like a random crash where perfectly defined behavior would be due.
At least under Linux, you can turn this behavior off. (Although
from what I understand, it isn't recommended.)

I heard some more recent distribs no longer have overcommit as the default, but
it will probably be a long-standing problem.
 

Alf P. Steinbach

* Balog Pal:
Err, I did not mean the program, I meant YOU. :) The user or admin of the
machine. A program certainly has no business with swap or the memory or
anything.

In that case it's irrelevant.

Possibly I misread; the context I was aware of was what happens when memory
is exhausted, as seen from the perspective of a C++ program that just keeps
adding items to a vector -- and we see what eventually happens.

You claimed that the system will most likely crawl before anything else is
observed. I didn't notice anyone inject a restriction that the system have
a swap configured way beyond physical memory, with the physical memory set
way lower than the process' address limit.

A swap file (a.k.a. page file) isn't meaningful if physical memory alone is
enough. Any claim that the usual configuration is to not have a swap file is
just incorrect. At this point in time the usual PC configuration is perhaps not
the most sensible one, but it was, and will again be.

And before and after this point in time (except that we've had this transient
situation once before) most PCs have had and will again have far less physical
memory than the available address space for ordinary apps.

So both those that you call "restrictions", as if they were unusual and quite
unexpected features to consider, are the normal state of affairs.


Cheers & hth.,

- Alf
 

Andreas Dehmel

Is this a problem with Windows, per se, or with the majority of
Windows applications? Be it Windows or Unix, most applications
don't handle out of memory conditions gracefully---they crash,
or if they manage to avoid crashing, they are missing some vital
resources (like the text for a button).


Regarding Windows problems at the memory limit: we recently came
across a rather nasty property of the Windows CRT when you're
allocating a massive amount of relatively small blocks: the heap
gets reconfigured (address space normally reserved for medium and
large blocks is reassigned to small blocks) and can't get back,
so even if you free all the small blocks and have several GB of
free memory, trying to allocate something like a 10MB block will
fail. It doesn't only happen when you hit the memory limit, but
it's most obvious then. It's impossible to recover from this sort
of problem no matter how well-behaved your application is, so this
is basically a Windows problem ``per se''.

Don't underestimate the probability of running into this problem;
it's amazingly easy to hit in C++, where many classes will allocate small
blocks ``under the hood''. Just think of what most STL containers
(list/set/map) and similar structures will do with the heap...
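
A rough sketch of the scenario (my own illustration, with made-up sizes), for
platforms that report exhaustion via bad_alloc: fill the heap with small node
allocations, free them all, then ask for a single 10MB block. On the affected
CRTs that last step can still fail; elsewhere it normally succeeds.

#include <iostream>
#include <list>
#include <new>

int main()
{
    std::list<int> nodes;                   // every element is a small heap block
    try {
        for (;;)
            nodes.push_back(0);             // exhaust the heap with small blocks
    } catch (const std::bad_alloc&) {
        std::cout << "small blocks stopped at " << nodes.size() << " nodes\n";
    }
    nodes.clear();                          // give all the small blocks back

    try {
        char* big = new char[10 * 1024 * 1024];
        std::cout << "10MB block still works\n";
        delete[] big;
    } catch (const std::bad_alloc&) {
        std::cout << "10MB block fails despite the freed memory\n";
    }
}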



Andreas
 

Jorgen Grahn

Alf, there are quite a lot of 32-bit systems around with 2 and 4GB of RAM
these days. They'll run out of address space first, and the alloc will
fail before they've ground to a halt.

Not to mention a system where you have explicitly put a limit on how
much a certain process may allocate (like the 'limit' or 'ulimit'
commands in various Unix shells).
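
One way to reproduce that deterministically on POSIX systems (a sketch of mine;
setrlimit with RLIMIT_AS is the call behind the shells' 'ulimit -v'):

#include <iostream>
#include <new>
#include <vector>
#include <sys/resource.h>

int main()
{
    rlimit lim = {};
    lim.rlim_cur = lim.rlim_max = 256 * 1024 * 1024;  // cap address space ~256MB
    setrlimit(RLIMIT_AS, &lim);                       // same effect as 'ulimit -v'

    std::vector<char> v;
    try {
        for (;;)
            v.push_back('x');
    } catch (const std::bad_alloc&) {
        std::cout << "hit the configured limit, got bad_alloc\n";
    }
}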

/Jorgen
 
