When to use 'new' and when not


Matthias Kaeppler

Hi,

say I have a class X, such that:

class X
{
A a;
B b;
C c;
...
K k;
};

with each object a-k being of quite some size (that means, it's way
larger than just a pointer or maybe even a string).
Now X is instantiated in the program entry point, that is, it resides on
the stack of main().

How large can X be before it leads to a stack overflow? Is it (in those
cases) generally a better idea to just let X hold pointers and allocate
memory dynamically?

In my applications, I almost never use dynamic memory allocated by
'new'. Are there any guidelines when to do that, and when not?
 

Rolf Magnus

Matthias said:
Hi,

say I have a class X, such that:

class X
{
A a;
B b;
C c;
...
K k;
};

with each object a-k being of quite some size (that means, it's way
larger than just a pointer or maybe even a string).
Now X is instantiated in the program entry point, that is, it resides on
the stack of main().

How large can X be before it leads to a stack overflow?


That's system dependent. Actually, it's even system dependent whether there
is a stack (in the sense you're using the word) at all. Also, the amount of
usable stack space depends on lots of other things, and it typically is
possible to specify the stack size at compile time with a compiler command
line option.
Is it (in those cases) generally a better idea to just let X hold pointers
and allocate memory dynamically?

I'd allocate dynamically only if I really need to. If your object really
happens to be too big (like - say - half a megabyte), you can still
allocate the X object dynamically and keep the members as they are.
In my applications, I almost never use dynamic memory allocated by
'new'. Are there any guidelines when to do that, and when not?

Dynamically allocated objects are often needed if the size is not known at
compile time (e.g. std::string, which doesn't know how many characters
there will be), or when using polymorphic types, where you don't know at
compile time which type to instantiate. Another use is if you want to avoid
copying the object: you can allocate it dynamically and copy only the
pointer.
 

Dennis Jones

Matthias Kaeppler said:
Hi,

say I have a class X, such that:

class X
{
A a;
B b;
C c;
...
K k;
};

with each object a-k being of quite some size (that means, it's way
larger than just a pointer or maybe even a string).
Now X is instantiated in the program entry point, that is, it resides on
the stack of main().

How large can X be before it leads to a stack overflow?

That depends on the system's or application's stack size.
Is it (in those
cases) generally a better idea to just let X hold pointers and allocate
memory dynamically?

That depends on whether you want to run the risk of running out of stack
space. If the objects are huge, I'd say yes, allocate them dynamically.
In my applications, I almost never use dynamic memory allocated by
'new'.

And you've never run into stack overflow problems?
Are there any guidelines when to do that, and when not?

Why do you like stack-based objects so much? Is it because you don't like
having to worry about deleting them? If so, then you could use auto_ptr,
which is a stack-based object that holds a pointer to a dynamically
allocated object. Thus, auto_ptr offers best of both worlds.

For example:

#include <memory>

int main( void )
{
    // 'x' will be destroyed automatically when exiting the current scope
    std::auto_ptr<X> x( new X );

    return 0;
}

You can even use auto_ptr's for class members:

#include <memory>

class X
{
    std::auto_ptr<A> a;

public:
    X();
};

// 'a' will be destroyed automatically when this instance of X is destroyed
X::X()
    : a( new A )
{
}

- Dennis
 

Matthias Kaeppler

Dennis said:
That depends on the system's or application's stack size.

And how can I figure that out?
That depends on whether you want to run the risk of running out of stack
space. If the objects are huge, I'd say yes, allocate them dynamically.

What does huge mean, 1MB, 10MB, 100MB?
And you've never run into stack overflow problems?

No.



Why do you like stack-based objects so much? Is it because you don't like
having to worry about deleting them? If so, then you could use auto_ptr,
which is a stack-based object that holds a pointer to a dynamically
allocated object. Thus, auto_ptr offers best of both worlds.

I don't like them any better than an object whose memory was allocated
dynamically. However, calls to new and delete produce additional code,
and if I'm not 100% certain that I need dynamic allocation, why pay for it?
I'm well aware of smart pointers, by the way. The reason I ask is that
I've never had any problems with stack overflows or such, so I never saw
much reason to use new and delete. However, I agree this is kind of
short-sighted :)
 

E. Robert Tisdale

Matthias said:
Say that I have a class X, such that:

class X {
A a;
B b;
C c;
...
K k;
};

with each object a-k being of quite some size (that means that
it's way larger than just a pointer or maybe even a string).
Now X is instantiated in the program entry point,
that is, it resides on the stack of main().

I think that most comp.lang.c++ subscribers would prefer that
you use the term *automatic storage* instead of stack
even though the typical implementation
(and, in fact, every ANSI/ISO C++ compliant implementation)
allocates automatic storage from the program stack.
How large can X be before it leads to a stack overflow?

For the typical implementation, you can think of [virtual] memory
as a sequence of addresses:

top
 

Matthias Kaeppler

E. Robert Tisdale said:
Matthias said:
Say that I have a class X, such that:

class X {
A a;
B b;
C c;
...
K k;
};

with each object a-k being of quite some size (that means that
it's way larger than just a pointer or maybe even a string).
Now X is instantiated in the program entry point,
that is, it resides on the stack of main().


I think that most comp.lang.c++ subscribers would prefer that
you use the term *automatic storage* instead of stack
even though the typical implementation
(and, in fact, every ANSI/ISO C++ compliant implementation)
allocates automatic storage from the program stack.
How large can X be before it leads to a stack overflow?


For the typical implementation, you can think of [virtual] memory
as a sequence of addresses:

top
--------
00000000
00000001
00000002
.
.
.
FFFFFFFE
FFFFFFFF
--------
bottom

The program stack grows upward [into free storage] from the bottom
of [virtual] memory and "dynamic memory" is allocated starting
somewhere near the top of [virtual] memory
(usually just after the .text and .data [static data] segments)
and grows downward into free storage.
You run out of stack space only when you run out of free storage.
Stack overflow is almost always the result of exceeding
some artificial limit on the stack size set by your operating system
or by your program.  For example, on my Linux workstation:
> limit stacksize
stacksize       10240 kbytes

which I can easily reset:
> limit stacksize unlimited
> limit stacksize
stacksize       unlimited

This limit serves as a check on runaway processes
such as a call to a recursive function which never returns.
Is it (in those cases) generally a better idea
to just let X hold pointers and allocate memory dynamically?

In my applications,
I almost never use dynamic memory allocated by 'new'.
Are there any guidelines when to do that, and when not?


Usually, dynamic memory allocation should be reserved
for objects such as arrays
for which the size is not known until runtime.

Most objects allocated from automatic storage are small
but reference much larger objects through pointers
into dynamically allocated memory (using new in their constructors).

Usually, the compiler emits code to allocate automatic storage
for all of the local objects including large arrays
upon entry into the function.
Right now, it appears that
C++ will adopt C99 style variable size arrays
which will complicate the situation a little.


Now, to answer your question,
you should generally avoid new
unless the object must survive the scope where it was created.
For example:

#include <cstddef>  // for size_t

class doubleVector {
private:
    double* P;
    size_t  N;
public:
    doubleVector(size_t n): P(new double[n]), N(n) { }
    ~doubleVector(void) { delete [] P; }
    // Note: a complete class would also define a copy constructor and
    // assignment operator, since copying this one would double-delete P.
};

When you create a doubleVector:

doubleVector v(n);

automatic storage is allocated for v.P and v.N [on the program stack]
but the array itself is allocated from free storage by the constructor
using new.
The destructor is called and frees this storage
when the thread of execution passes out of the scope
where v was declared.

Is it wise to allocate very large objects from automatic storage?
Probably not.  It will probably interfere with other stack operations
by causing an extra page fault even before you access the object.
But, because it depends upon the implementation,
there is very little general guidance that we can give you.
The distinction between large and small objects
depends upon how much memory you have -- 256MByte?  2GByte?  More?
Many new platforms have enough physical memory
to store *all* of virtual memory!

The best advice that we can give you is to test both
automatic and dynamic storage for large objects
and use automatic storage if you don't find an appreciable difference.

That was an insightful read, thanks Robert.
 
