Memory size?

  • Thread starter Joerg Schwerdtfeger

Larry Doolittle

Linux does exactly this. It makes for fun debugging.

echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.

- Larry
 

Alex Monjushko

Larry Doolittle said:
echo 0 > /proc/sys/vm/overcommit_memory
Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.

This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.
 

Giorgos Keramidas

Alex Monjushko said:
This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.

What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?

- Giorgos
 

Richard Bos

Giorgos Keramidas said:
What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?

That's just the point, isn't it? One program should not be able to force
the allocation strategy for other programs, nor require the user to do
so. If I know my program will need what it allocates, I should be able
to depend on malloc() behaving Standard-conformingly; this need not and
should not influence the way other programs get their memory.

Richard
 

Dan Pop

A previous poster said:
Until I see the name of the offending OS, I will take this to be an urban
legend.

An incomplete list includes AIX, Digital Unix in lazy swap allocation mode
and Linux. Unlike the other systems, Linux checks that the allocated
memory is available at the time when the allocation request is made (which
of course, doesn't guarantee that it will still be available when the
program actually needs to use it, but, at least, requests exceeding the
system's capabilities are immediately rejected).

There are perfectly good reasons for this strategy and, on platforms where
they have a choice, users prefer the unsafe mode. Here are some examples:

1. Most applications overallocate memory. Back when swap space was a
limited resource (the whole disk had less than 1 GB), it was quite
easy to run out of virtual memory after starting only a few
applications, although most of it was *unused* (but allocated).
Switching to lazy swap allocation mode made an impressive difference.

2. Sparse arrays can be handled as ordinary arrays (see the sketch
after this list). The unused parts of the array don't consume any
resource except virtual memory address space.

3. Large buffers come for free: the unused parts don't waste any
resources.
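
[A minimal C sketch, added for illustration and not part of Dan's post:
it assumes a lazily allocating system and shows point 2 above. The
1 GiB size is arbitrary; on a strict-commit system the malloc() itself
would simply fail instead.]

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* A "sparse" table of 2^27 doubles, i.e. 1 GiB of address space.
       Under lazy allocation this malloc() succeeds even on a machine
       with far less memory, because nothing is committed yet. */
    size_t n = (size_t)1 << 27;
    double *table = malloc(n * sizeof *table);
    if (table == NULL) {
        fprintf(stderr, "allocation refused up front\n");
        return EXIT_FAILURE;
    }
    /* Touch only a few widely separated entries; only the pages that
       contain them ever consume RAM or swap. */
    table[0] = 1.0;
    table[n / 2] = 2.0;
    table[n - 1] = 3.0;
    printf("%g %g %g\n", table[0], table[n / 2], table[n - 1]);
    free(table);
    return 0;
}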

From a pragmatic point of view, a system running out of (virtual) memory
becomes unusable, anyway. Lazy swap space allocation delays this moment,
sometimes by a significant factor. Which is why users prefer it.

Of course, on a high reliability server, lazy swap space allocation may
not be an acceptable option.

OTOH, the sizes of the current disks make the issue far less important
than it was a decade ago.

As for the conformance of the C implementations on such systems, I can
find no requirement that the one and only program that needs to be
correctly translated and executed *must* contain malloc and friends calls
;-)

Dan
 

Larry Doolittle

#!/bin/sh
if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
  echo "This program won't run on a Linux system with"
  echo "an over-commit memory policy."
  echo "As root, run the shell command:"
  echo " echo 0 > /proc/sys/vm/overcommit_memory"
  echo "and then try running this program again"
  exit 1
fi
echo "run your program here"
Richard Bos said:
That's just the point, isn't it? One program should not be able to force
the allocation strategy for other programs, nor require the user to do
so. If I know my program will need what it allocates, I should be able
to depend on malloc() behaving Standard-conformingly; this need not and
should not influence the way other programs get their memory.

The distinction is normally only important when programs are real
memory pigs. At that level, it is probably useful to suggest that
the user buy two machines, and segregate applications. If the
problem is theoretical rather than actual,
echo 0 > /proc/sys/vm/overcommit_memory,
give yourself a huge swap space, and be done with it. There is no way
a program's proper functionality can depend on memory overcommitment.

The lkml has thrashed through this territory innumerable times. The
only relevant observation here is that Linux _has_ a standard-conforming
mode, which can be set by the admin, and tested for by a mortal user or
program.

- Larry
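
[For illustration, a C sketch of Larry's "tested for by a mortal user
or program" point, not from the original thread: the program reads
Linux's overcommit knob at startup and backs out, the way his shell
wrapper does. Like the thread's scripts, it treats 0 as the conforming
setting.]

#include <stdio.h>
#include <stdlib.h>

/* Returns the value of /proc/sys/vm/overcommit_memory,
   or -1 if it cannot be read (e.g. not a Linux system). */
static int overcommit_mode(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode = -1;
    if (f != NULL) {
        if (fscanf(f, "%d", &mode) != 1)
            mode = -1;
        fclose(f);
    }
    return mode;
}

int main(void)
{
    if (overcommit_mode() > 0) {
        fprintf(stderr, "This program won't run with an "
                        "over-commit memory policy.\n");
        return EXIT_FAILURE;
    }
    puts("run your program here");
    return 0;
}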
 

Alex Monjushko

Giorgos Keramidas said:
What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?

You misunderstood. I want to have control over the semantics of
memory allocation in /my/ program. I don't want my program to
force these semantics for other programs or vice-versa.

For what it's worth, in some cases, I have found it useful to
explicitly use all allocated memory right off the bat, to make
sure that I would be able to safely use it later.
 

Alex Monjushko

Larry Doolittle said:
#!/bin/sh
if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
  echo "This program won't run on a Linux system with"
  echo "an over-commit memory policy."
  echo "As root, run the shell command:"
  echo " echo 0 > /proc/sys/vm/overcommit_memory"
  echo "and then try running this program again"
  exit 1
fi
echo "run your program here"

Right, and break it for everybody else? Suppose that another
program has this:

#!/bin/sh
if [ `uname -s` = "Linux" \
     -a `cat /proc/sys/vm/overcommit_memory` -eq 0 ]; then
  echo "This program won't run on a Linux system without"
  echo "an over-commit memory policy."
  echo "As root, run the shell command:"
  echo " echo 1 > /proc/sys/vm/overcommit_memory"
  echo "and then try running this program again"
  exit 1
fi
echo "run your program here"

Not nice, is it?

A global memory management policy with an esoteric default just
does not make much sense to me.
 

Michael Wojcik

Richard Bos said:
Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory),

[We just hashed this out - again - in February. And before that in
January. Google for the thread if you like; searching for "lazy
allocation" should do it.]

It doesn't guarantee that you can use it. If it did, all
implementations on virtual-memory OSes would potentially be non-conforming,
since the OS could lose backing store (eg due to disk failure)
between the time that malloc succeeded and the program tried to use
it.
is there _any_ good reason for this practice?

Sparse arrays, for one.
I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
successfully?

In Unix OSes with lazy allocation, malloc may still fail for other
reasons - most notably because a ulimit has been reached.

More generally, programs may fail *at any time* due to problems in
the environment - and running out of virtual storage capacity counts
as one of those. That doesn't absolve programs from performing
normal error detection and handling.
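
[An editorial C sketch of the "normal error detection and handling"
Michael describes; the 512 MiB figure is arbitrary. Even on a lazily
allocating system, a ulimit can make this malloc() fail cleanly.]

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t want = (size_t)512 * 1024 * 1024;
    void *p = malloc(want);
    if (p == NULL) {
        /* Back out cleanly instead of assuming success. */
        fprintf(stderr, "out of memory requesting %lu bytes\n",
                (unsigned long)want);
        return EXIT_FAILURE;
    }
    /* ... use the memory, with the usual caveat that a lazy
       allocator may still fail later, at first touch ... */
    free(p);
    return 0;
}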

--
Michael Wojcik (e-mail address removed)

[After the lynching of George "Big Nose" Parrot, Dr. John] Osborne
had the skin tanned and made into a pair of shoes and a medical bag.
Osborne, who became governor, frequently wore the shoes.
-- _Lincoln [Nebraska] Journal Star_
 

Sam Dennis

Alex Monjushko said:
A global memory management policy with an esoteric default just
does not make much sense to me.

Just FYI, the default is, at least on the version I run, not to
overcommit. (Except, AFAICS, under unlikely circumstances with
one architecture.)
 

James Kanze

(e-mail address removed) (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on AIX.

There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hindrance -- in
fact, it might even make life a very little bit simpler. But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.
 

James Kanze

|> >> Richard Bos writes:
|> >>> (e-mail address removed) (Richard Tobin) wrote:
|> >>> > It's worse than that. Many operating systems overcommit
|> >>> > memory, returning a valid pointer from malloc() and then
|> >>> > killing the program when it tries to access too much of it.

|> >>> Quite apart from this rendering any C implementation on that
|> >>> platform unconforming (after all, if malloc() succeeds, the
|> >>> Standard says you own that memory), is there _any_ good reason
|> >>> for this practice? I'd have thought simply telling your user
|> >>> that he can't have that much memory is preferable to pretending
|> >>> that he can, and then unceremoniously dumping him in it when he
|> >>> tries to use it. I mean, under OSes like that, what's the use of
|> >>> all our precautions of checking that malloc() returned
|> >>> successfully?

|> >> Until I see the name of the offending OS, I will take this to be
|> >> an urban legend.

|> > Linux does exactly this. It makes for fun debugging.

AIX used to, and can still be made to do so as well.

|> echo 0 > /proc/sys/vm/overcommit_memory

Does this turn lazy commit on, or off? A quick check on my Linux box
(Mandrake 10.0, default installation) shows that there is such a file,
and it contains 0. Has Mandrake corrected something, or is this the
default, or...

|> Linux represents a perpetual battleground for
|> standards-compliant-pedants vs. get-the-job-done-pragmatists.
|> Linus's biggest successes are when he manages to satisfy both
|> simultaneously; cases like this are second place, when each can
|> configure the system to their liking at a whim.

The real problem here is what is the job that needs getting done: a
program that works reliably, or one that pushes the limit, working most
of the time, failing unaccountably on rare occasions, but being able on
the average to handle bigger data sets than it otherwise could.

For 99% of commercial applications, the program has to work reliably,
and the programs don't have to deal with large data sets (at least not
in memory). None of my customers would ever knowingly accept
overcommitting. They want to be sure that the job gets done, or that
the program backs out cleanly, freeing such resources as file locks, if
the resources aren't present. Customers like these were the commercial
pressure which forced IBM to change the default mode for AIX.

As to configurability: it is worthless at the machine level. AIX
still retains the ability to overcommit, but it only does so if a
specific shell variable is set in the process. So you have to
explicitly ask for it, and one process asking for it doesn't affect
other processes.
 

Richard Bos

James Kanze said:
(e-mail address removed) (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on AIX.

There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hindrance -- in
fact, it might even make life a very little bit simpler. But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.

Neither would I, nor would my users. Telling them that I'm very
sorry, but their last hour's entered text is completely lost because
their system doesn't have a big enough disk is simply not acceptable.

Richard
 

Richard Bos

Michael Wojcik said:
Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory),

[We just hashed this out - again - in February. And before that in
January. Google for the thread if you like; searching for "lazy
allocation" should do it.]

It doesn't guarantee that you can use it. If it did, all
implementations on virtual-memory OSes would potentially be non-conforming,
since the OS could lose backing store (eg due to disk failure)
between the time that malloc succeeded and the program tried to use
it.

And as I said back then, that's no argument - failing hardware can
render _everything_ unconforming if you allow it to count. After all, at
any moment a cosmic ray could flip a bit in your program's memory and
turn that valid double you held there into a trap representation.
Sparse arrays, for one.

Rare enough that one should not make _all_ programs unsafe because of
it. OTOH, a facility whereby a program that uses directly allocated
sparse arrays could tell the OS that _it_ can tolerate over-committing
would be useful.
In Unix OSes with lazy allocation, malloc may still fail for other
reasons - most notably because a ulimit has been reached.

More generally, programs may fail *at any time* due to problems in
the environment - and running out of virtual storage capacity counts
as one of those.

Yes, but those are unavoidable. This is by design - if the system didn't
over-commit, this would be one less unnecessary worry for the user.

Richard
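
[Editorial aside: something close to the per-program facility Richard
wishes for above does exist outside the C Standard. On Linux and
Solaris, mmap() with MAP_NORESERVE asks for address space without
reserved swap, so a program can opt in to over-commit for its own
sparse arrays without changing the system-wide policy. A sketch,
assuming a system that provides MAP_ANONYMOUS and MAP_NORESERVE:]

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)1 << 30;   /* 1 GiB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }
    p[0] = 1;          /* only the touched pages are committed */
    p[len - 1] = 2;
    munmap(p, len);
    return 0;
}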
 

Harti Brandt

On Wed, 2 Jun 2004, Alex Monjushko wrote:

AM> [...]
AM>> #!/bin/sh
AM>> if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
AM>> echo "This program won't run on a Linux system with"
AM>> echo "an over-commit memory policy."
AM>> echo "As root, run the shell command:"
AM>> echo " echo 0 > /proc/sys/vm/overcommit_memory"
AM>> echo "and then try running this program again"
AM>> exit 1
AM>> fi
AM>> echo "run your program here"
AM>
AM>Right, and break it for everybody else? Suppose that another
AM>program has this:
AM>
AM>#!/bin/sh
AM>if [ `uname -s` = "Linux" \
AM> -a `cat /proc/sys/vm/overcommit_memory` -eq 0 ]; then
AM> echo "This program won't run on a Linux system without"
AM> echo "an over-commit memory policy."
AM> echo "As root, run the shell command:"
AM> echo " echo 1 > /proc/sys/vm/overcommit_memory"
AM> echo "and then try running this program again"
AM> exit 1
AM>fi
AM>echo "run your program here"
AM>
AM>Not nice, is it?
AM>
AM>A global memory management policy with an esoteric default just
AM>does not make much sense to me.

Just make your malloc() touch all pages it allocates, and make this
behaviour settable via an environment variable.

harti
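
[An editorial C sketch of harti's suggestion; the variable name
MALLOC_TOUCH and the 4096-byte page size are invented for illustration
(sysconf(_SC_PAGESIZE) would be the portable query):]

#include <stdlib.h>

/* malloc() wrapper that, when MALLOC_TOUCH is set in the environment,
   writes to one byte per page so a lazy allocator must commit the
   whole block now. Note Dan Pop's caveat below: on an overcommitting
   system the touch itself can still kill the process instead of
   failing cleanly. */
void *my_malloc(size_t size)
{
    const size_t page = 4096;       /* assumed page size */
    char *p = malloc(size);
    if (p != NULL && getenv("MALLOC_TOUCH") != NULL) {
        size_t i;
        for (i = 0; i < size; i += page)
            p[i] = 0;
        if (size > 0)
            p[size - 1] = 0;
    }
    return p;
}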
 

Harti Brandt

On Wed, 2 Jun 2004, Michael Wojcik wrote:

MW>
MW>> (e-mail address removed) (Richard Tobin) wrote:
MW>>
MW>> > It's worse than that. Many operating systems overcommit memory,
MW>> > returning a valid pointer from malloc() and then killing the program
MW>> > when it tries to access too much of it.
MW>>
MW>> Quite apart from this rendering any C implementation on that platform
MW>> unconforming (after all, if malloc() succeeds, the Standard says you own
MW>> that memory),
MW>
MW>[We just hashed this out - again - in February. And before that in
MW>January. Google for the thread if you like; searching for "lazy
MW>allocation" should do it.]
MW>
MW>It doesn't guarantee that you can use it. If it did, all
MW>implementations on virtual-memory OSes would potentially be non-conforming,
MW>since the OS could lose backing store (eg due to disk failure)
MW>between the time that malloc succeeded and the program tried to use
MW>it.

I don't think that this is a valid argument since POSIX doesn't address
faulty hardware. I don't think that there is a place in POSIX that
explicitly says that hardware must be non-faulty, but I'd say this should
be clear.

MW>
MW>> is there _any_ good reason for this practice?
MW>
MW>Sparse arrays, for one.
MW>
MW>> I mean, under OSes like that, what's
MW>> the use of all our precautions of checking that malloc() returned
MW>> successfully?
MW>
MW>In Unix OSes with lazy allocation, malloc may still fail for other
MW>reasons - most notably because a ulimit has been reached.

The difference is that you can usually handle malloc() returning NULL.
In the case of running out of swap space after malloc() has returned
a non-NULL value, this is generally harder. Last year it took us an entire
month to find out why www.berlioz.de was periodically frozen to the point
that only the power switch helped (this was a 2-CPU Linux box running
Apache). The problem was that the Apache processes consumed all memory and
swap space and the system could not get out of this situation. Now it is a
Sun Solaris box with Apache, and no problems so far. Other systems, instead
of freezing, start to more or less randomly kill processes, but it is not
so easy for a well-behaved process (one that is just trying to use the
malloc()ed space) to react to this.

MW>More generally, programs may fail *at any time* due to problems in
MW>the environment - and running out of virtual storage capacity counts
MW>as one of those. That doesn't absolve programs from performing
MW>normal error detection and handling.

According to POSIX this is not an environmental condition.

Note that I'm not arguing that overcommitting is bad, just that your
arguments are not really good.

harti
 

Dik T. Winter

About lazy memory allocation:

>
> Neither would I, nor would my users. Telling them that I'm very
> sorry, but their last hour's entered text is completely lost because
> their system doesn't have a big enough disk is simply not acceptable.

I know that on my desktop the X server had regular crashes until we
switched off lazy memory allocation.
 

Dan Pop

Richard Bos said:
Neither would I, nor would my users. Telling them that I'm very
sorry, but their last hour's entered text is completely lost because
their system doesn't have a big enough disk is simply not acceptable.

You must be really dense if you still haven't figured out that, in most
cases, the programmer doesn't have control over the system's behaviour.
It's either the implementor, or the sysadmin or even the user that
controls this aspect of the execution environment. And statically
allocated memory is affected just as well as dynamically allocated
memory.

Dan
 

Dan Pop

Harti Brandt said:
[...]
Just make your malloc() touch all pages it allocates, and make this
behaviour settable via an environment variable.

Doesn't help much if the program crashes while doing that.

Dan
 
