C exe on winXP memory limitation


Richard Heathfield

fermineutron said:
The posts between my last one and Bill's post, which is addressed in my
last one, were made while I was typing the last post, so the question in
my last post is already answered.

THANK YOU everyone who posted something useful; currently I can think
of only 1 exception to that group, not going to say who.

Then I will. The one who failed to provide a useful reply was Bill Medland.

Actually, to be fair, Bill's reply was partly useful, because it included
the important information that your requests for information about stacks
and stuff are off-topic. The non-useful part was the part where he gave
platform-specific advice, which few if any here will trouble themselves to
check over for accuracy. You'd have done better to ask platform-specific
questions in a newsgroup dealing specifically with your platform.
 

Bart

fermineutron said:
Probably because he wants to shut me up without actually answering my
question.

See. You did it again.
The question about the .data Segment of compiled C code still stands,
unless it was answered while I am typing this message.

The question was already answered, but you didn't notice.

C doesn't need to have a stack, only "automatic storage". When an
implementation does have a stack it creates automatic objects on the
stack so that recursive function calls work correctly. Just look at the
disassembly output of your compiler to see what I mean.
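Bart's point can be illustrated with a short sketch (not from the original post): recursion works precisely because every active call gets its own copies of its automatic variables, conventionally in a fresh stack frame.

```c
/* Sketch: each call to fact() gets its own automatic n and r,
   typically in a fresh stack frame, which is why the recursive
   calls don't clobber one another. */
unsigned long fact(unsigned long n)
{
    unsigned long r;     /* automatic: one instance per active call */
    if (n <= 1)
        return 1;
    r = fact(n - 1);     /* the inner call gets its own n and r */
    return n * r;
}
```

Compiling this and inspecting the disassembly, as Bart suggests, shows the per-call frame being set up on implementations that use a stack.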

But as you should already know if you read more carefully, this is all
off-topic in comp.lang.c.

Regards,
Bart.
 

Iwo Mergler

fermineutron said:
OK, I want to figure this out:

According to Kip R. Irvine, "Assembly Language for Intel-Based Computers":

An executable file has to have 3 Segments (notice the capital S):
data Segment
code Segment
stack Segment

Furthermore, depending on the .model directive, the Segments could be
equal to or greater than 1 segment (notice the lowercase s).

Now a segment is a chunk of RAM equal to 64KB or less; this way an
offset is a 16-bit address within a segment and a direct memory address
is segment:offset = 32 bits.

This is ancient history. The 64KB segments were needed to
access more than 64K of total memory on 16-bit computers
(8088, 8086, 80186, 80286). The problem went away with
the introduction of the 80386 processor in 1985.
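For reference, the real-mode scheme being described combines a 16-bit segment with a 16-bit offset into a 20-bit physical address (segment * 16 + offset), so segment:offset is written with 32 bits but addresses only 1 MB; a minimal sketch:

```c
/* Real-mode 8086 address arithmetic: physical = segment * 16 + offset.
   Two 16-bit values, but only a 20-bit (1 MB) address space. */
unsigned long phys_addr(unsigned int seg, unsigned int off)
{
    return ((unsigned long)seg << 4) + (unsigned long)off;
}
```

For example, the classic VGA text buffer at B800:0000 works out to physical address 0xB8000.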
So, when the compiler translates the C code into asm, does it create the
data Segment, to store variables which are declared in functions?

No. Variables declared in functions (automatic variables) are
implicit in the code of the function and are created on the
stack at runtime, just before each function call.

Only global and static variables end up in a data area.
Or does it not create the .data Segment at all?

It's normally called a .data 'section' these days. It is created when
you have global variables in your program.
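As a hedged illustration of where things typically land (section names and exact placement vary by platform and compiler):

```c
int counter = 42;       /* global with an initializer: .data (typically)   */
int zeroed;             /* zero-initialized global: usually .bss instead   */

int next(void)
{
    static int calls;   /* static storage duration: also in a data section */
    int tmp = counter;  /* automatic: created anew on every call, not in
                           .data at all                                    */
    calls++;
    return tmp + calls; /* 43 on the first call, 44 on the second */
}
```

Running `objdump -t` (or your platform's equivalent) on the object file shows `counter` (and usually a compiler-renamed `calls`) in data sections, while `tmp` appears nowhere in the symbol table.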

Kind regards,

Iwo
 

Simon Biber

Richard said:
fermineutron said:


No C90 implementation is obliged to support objects larger than 32767 bytes.
No C99 implementation is obliged to support objects larger than 65535
bytes.

To determine whether you can get a larger object, without crashing the
program if you can't, the best way is to get the space via malloc.

Probably true; unfortunately many systems have non-compliant malloc
implementations. They will over-commit the available memory and indicate
success. Later when your program tries to use the memory, if there is
not enough storage available at that time, your process may be killed.
 

Richard Heathfield

Simon Biber said:
Probably true; unfortunately many systems have non-compliant malloc
implementations. They will over-commit the available memory and indicate
success. Later when your program tries to use the memory, if there is
not enough storage available at that time, your process may be killed.

Probably true, but that's a QoI issue. :)
 

Jordan Abel

Richard Heathfield said:
Simon Biber said:


Probably true, but that's a QoI issue. :)

I think it's a conformance issue (one could argue that conformance
issues are a subset of QoI issues, but one would be wrong.)

void *malloc(size_t);
const size_t N = 10000;
int main(void) {
    char *x;
    x = malloc(N);
    if (x) {
        size_t i;
        for (i = 0; i < N; i++)
            x = 'a';
    }
    return 0;
}

The above program is NOT permitted to crash for any N.

(The only reason this behavior is tolerated is that it rarely comes up,
and for systems where it would be an issue I believe it can be turned
off, at least on Linux.)
 

Richard Heathfield

Jordan Abel said:
I think it's a conformance issue

If it's down to malloc (which I missed on first reading), yes, it is. If
it's the *operating system* that is over-committing, which is what I
assumed (perhaps wrongly) that he was talking about, then it's not a
conformance issue. The implementation cannot be blamed for believing the
OS.
 

Jordan Abel

Richard Heathfield said:
Jordan Abel said:


If it's down to malloc (which I missed on first reading), yes, it is. If
it's the *operating system* that is over-committing, which is what I
assumed (perhaps wrongly) that he was talking about, then it's not a
conformance issue. The implementation cannot be blamed for believing the
OS.

The OS is part of the implementation.

The compiler can't be blamed for the libraries, either, but that doesn't
mean you can have a conforming implementation on which %Lf does not
handle long doubles (a problem on a certain Windows implementation that
throws together a library with one idea of the size of type long double
and a compiler with another). This would be an issue even if the library
and the compiler could each be 100% conforming when paired with a
different counterpart (as it happens, they're not, but that's not the
point).

It's a system, and when it runs out of memory, it fails to handle it
correctly. It doesn't matter whose "fault" it is, all that matters is
that in the end, the whole does The Wrong Thing.

In this case, the library (one part of the implementation) believes the
OS (another part of the implementation), when the OS, as shipped by
default, is configured to lie.
 

Keith Thompson

Richard Heathfield said:
Jordan Abel said:

If it's down to malloc (which I missed on first reading), yes, it is. If
it's the *operating system* that is over-committing, which is what I
assumed (perhaps wrongly) that he was talking about, then it's not a
conformance issue. The implementation cannot be blamed for believing the
OS.

The OS is part of the implementation. If the OS provides a memory
allocation function that doesn't report errors, then a conforming
implementation cannot use that function to implement malloc().

<OT>As I understand it, if you over-commit and then attempt to use
it, *any* process can be killed.</OT>
 

Spiros Bousbouras

Jordan said:
void *malloc(size_t);
const size_t N = 10000;
int main(void) {
char *x;
x = malloc(N);
if(x) {
size_t i;
for(i=0;i<N;i++)
x='a';
}
return 0;
}

The above program is NOT permitted to crash for any N.


The above programme does not compile.
 

Gordon Burditt

No C90 implementation is obliged to support objects larger than 32767 bytes.
Probably true; unfortunately many systems have non-compliant malloc
implementations. They will over-commit the available memory and indicate
success. Later when your program tries to use the memory, if there is
not enough storage available at that time, your process may be killed.

My recommendation is to try to use the memory immediately. memset()
the memory to something. If you're really paranoid, memset() it
to something other than 0 so some smart-alec OS won't decide not
to bother allocating all-zero pages. Or just use calloc().
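A minimal sketch of that advice, using a hypothetical wrapper name (`touch_malloc` is not from the thread):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper: allocate, then immediately write a nonzero
   byte pattern over the whole block, forcing a lazily-committing OS
   to back the pages now.  If the system over-committed, the failure
   surfaces here rather than deep in unrelated code later. */
void *touch_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL && n > 0)
        memset(p, 0xAA, n);  /* nonzero, so all-zero-page tricks don't help */
    return p;
}
```

Note this only moves the failure earlier; on an over-committing system the process may still simply be killed inside the memset() with no chance to recover.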

If you want to get really, really clever, make your own memory
allocation wrapper function which sets up a signal handler, does a
setjmp(), calls malloc(), and memset()s the memory. The signal handler,
called when the over-commit is discovered (if it's a catchable signal),
longjmp()s back to the wrapper, which then free()s the allocated
memory and returns NULL. Warning: longjmp() out of a signal handler
just screams for the wrath of undefined behavior.
 

Andrew Poelstra

Jordan said:
void *malloc(size_t);
const size_t N = 10000;
int main(void) {
char *x;
x = malloc(N);
if(x) {
size_t i;
for(i=0;i<N;i++)
x='a';
}
return 0;
}

The above program is NOT permitted to crash for any N.


The above programme does not compile.


It took me a while to figure this one out. The problem is that size_t
has not been defined.

#including <stdlib.h> and then removing the now-unnecessary malloc()
prototype would fix it.
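Applying Andrew's fix, and also indexing the buffer (the posted loop assigned 'a' to the pointer x itself rather than to x[i]), gives a compilable sketch; the loop is pulled into a helper function here purely for illustration:

```c
#include <stdlib.h>   /* defines size_t and declares malloc()/free() */

/* The loop from the posted program, with x[i] fixed; returns 1 if the
   whole buffer was allocated and written, 0 if malloc() failed. */
int fill_buffer(size_t n)
{
    char *x = malloc(n);
    if (x) {
        size_t i;
        for (i = 0; i < n; i++)
            x[i] = 'a';
        free(x);
        return 1;
    }
    return 0;
}
```

With <stdlib.h> included, the hand-written malloc() prototype is unnecessary and should be dropped, as Andrew says.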
 

Keith Thompson

Gordon said:
My recommendation is to try to use the memory immediately. memset()
the memory to something. If you're really paranoid, memset() it
to something other than 0 so some smart-alec OS won't decide not
to bother allocating all-zero pages. Or just use calloc().

Not an entirely bad idea, but if malloc() overcommitted the memory
allocation, there's no good way to handle the error.
 

J. J. Farrell

Spiros said:
Jordan said:
void *malloc(size_t);
const size_t N = 10000;
int main(void) {
char *x;
x = malloc(N);
if(x) {
size_t i;
for(i=0;i<N;i++)
x='a';
}
return 0;
}

The above program is NOT permitted to crash for any N.


The above programme does not compile.


Well, that sure stops it crashing ...
 

Richard Bos

Keith Thompson said:
Not an entirely bad idea, but if malloc() overcommitted the memory
allocation, there's no good way to handle the error.

True, but at least this way you know about it _now_, rather than after
you've spent half an hour entering data, when it appears that the memory
set aside for the report based on that data isn't there after all.

Richard
 

Jordan Abel

Richard Bos said:
True, but at least this way you know about it _now_, rather than after
you've spent half an hour entering data, when it appears that the memory
set aside for the report based on that data isn't there after all.

You may know about it, but your program doesn't - after a SIGKILL it's
not in a position to know much of anything (and that's if you're lucky
enough that your program is the one to be killed).
 

Richard Bos

Jordan Abel said:
You may know about it, but your program doesn't - after a SIGKILL it's
not in a position to know much of anything (and that's if you're lucky
enough that your program is the one to be killed)

True. So it's far from an ideal situation. All the same, if you have to
program for such broken implementations, crashing immediately is better
than crashing with a delay.

Richard
 
