Matthias said:
Say that I have a class X, such that:
class X {
A a;
B b;
C c;
...
K k;
};
with each of the objects a through k being quite large (that is,
much larger than just a pointer or perhaps even a string).
Now X is instantiated in the program entry point,
that is, it resides on the stack of main().
I think that most comp.lang.c++ subscribers would prefer that
you use the term *automatic storage* instead of stack
even though the typical implementation
(and, in fact, every ANSI/ISO C++ compliant implementation)
allocates automatic storage from the program stack.
How large can x be without causing a stack overflow?
For the typical implementation, you can think of [virtual] memory
as a sequence of addresses:
top
--------
00000000
00000001
00000002
.
.
.
FFFFFFFE
FFFFFFFF
--------
bottom
The program stack grows upward [into free storage] from the bottom
of [virtual] memory and "dynamic memory" is allocated starting
somewhere near the top of [virtual] memory
(usually just after the .text and .data [static data] segments)
and grows downward into free storage.
You run out of stack space only when you run out of free storage.
Stack overflow is almost always the result of exceeding
some artificial limit on the stack size set by your operating system
or by your program. For example, on my Linux workstation:
  limit stacksize
  stacksize       10240 kbytes

which I can easily reset:

  limit stacksize unlimited
  limit stacksize
  stacksize       unlimited
This limit serves as a check on runaway processes
such as a call to a recursive function which never returns.
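If you want to query the limit from inside the program rather than
from the shell, you can call the POSIX getrlimit() function
(a platform facility, not part of standard C++,
so this sketch assumes a POSIX system such as Linux):

#include <cstdio>
#include <sys/resource.h>   // POSIX getrlimit(), not standard C++

int main()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            std::printf("stacksize unlimited\n");
        else
            std::printf("stacksize %llu kbytes\n",
                        (unsigned long long)rl.rlim_cur / 1024);
    }
    return 0;
}

The corresponding setrlimit() call can raise the soft limit,
subject to the hard limit set by the system.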
[QUOTE]
Is it (in those cases) generally a better idea
to just let X hold pointers and allocate memory dynamically?
In my applications,
I almost never use dynamic memory allocated by 'new'.
Are there any guidelines for when to do that, and when not?[/QUOTE]
Usually, dynamic memory allocation should be reserved
for objects such as arrays
for which the size is not known until runtime.
Most objects allocated from automatic storage are small
but reference much larger objects through pointers
into dynamically allocated memory (using new in their constructors).
Usually, the compiler emits code to allocate automatic storage
for all of the local objects including large arrays
upon entry into the function.
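A minimal sketch of that distinction
(the function f and the sizes are made up for illustration):

#include <cstddef>

void f(std::size_t n)
{
    double fixed[256];                // size known at compile time:
                                      // automatic storage reserved on entry to f
    double* runtime = new double[n];  // size known only at run time:
                                      // must come from free storage
    // ... use the arrays ...
    delete [] runtime;                // free storage is released explicitly
}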
Right now, it appears that
C++ will adopt C99-style variable-size arrays,
which will complicate the situation a little.
Now, to answer your question,
you should generally avoid new
unless the object must survive the scope where it was created.
For example:
#include <cstddef>  // for std::size_t

class doubleVector {
private:
    double* P;       // points to an array in free storage
    std::size_t N;   // number of elements
public:
    doubleVector(std::size_t n): P(new double[n]), N(n) { }
    ~doubleVector() { delete [] P; }
    // (copying is not handled here; a real class would also need
    //  a copy constructor and assignment operator)
};
When you create a doubleVector:
doubleVector v(n);
automatic storage is allocated for v.P and v.N [on the program stack]
but the array itself is allocated from free storage by the constructor
using new.
The destructor is called and frees this storage
when the thread of execution passes out of the scope
where v was declared.
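A small sketch of that lifetime
(useVector is just a made-up name for illustration):

void useVector(std::size_t n)
{
    doubleVector v(n);   // v.P and v.N occupy automatic storage;
                         // the n doubles are allocated with new by the constructor
    // ... use v ...
}                        // ~doubleVector() runs here and delete[]s the array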
Is it wise to allocate very large objects from automatic storage?
Probably not. It will probably interfere with other stack operations
by causing an extra page fault even before you access the object.
But, because it depends upon the implementation,
there is very little general guidance that we can give you.
The distinction between large and small objects
depends upon how much memory you have -- 256MByte? 2GByte? More?
Many new platforms have enough physical memory
to store *all* of virtual memory!
The best advice that we can give you is to test both
automatic and dynamic storage for large objects
and use automatic storage if you don't find an appreciable difference.
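For what it's worth, a minimal sketch of such a test
(the size BIG, the work() loop, and the use of std::clock()
are placeholders you would replace with something representative of your X):

#include <cstddef>
#include <cstdio>
#include <ctime>

const std::size_t BIG = 100000;        // made-up size; keep it well below your
                                       // stack limit, or raise the limit first

double work(double* p, std::size_t n)  // placeholder workload
{
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        p[i] = double(i);
        sum += p[i];
    }
    return sum;
}

int main()
{
    std::clock_t t0 = std::clock();
    {
        double a[BIG];                 // automatic storage
        work(a, BIG);
    }
    std::clock_t t1 = std::clock();
    {
        double* p = new double[BIG];   // free storage
        work(p, BIG);
        delete [] p;
    }
    std::clock_t t2 = std::clock();
    std::printf("automatic: %ld ticks, dynamic: %ld ticks\n",
                (long)(t1 - t0), (long)(t2 - t1));
    return 0;
}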
That was an insightful read, thanks Robert.