Is this valid and moral C++?

  • Thread starter Filimon Roukoutakis

werasm

Ivan said:
Yes, this is one of several ways to address the problem.
All I am saying is that this issue is worth attention.

Yes, I cannot understand Roland's point either. Obviously reserve
might solve the problem, but I think it could cause unnecessary memory
allocation. I would prefer:

std::auto_ptr<Object> instance( new Object );
myVector.push_back( instance.get() ); //X
instance.release(); //No X

Werner
 

Kai-Uwe Bux

werasm said:
Yes, I cannot understand Roland's point either. Obviously reserve
might solve the problem, but I think it could cause unnecessary memory
allocation. I would prefer:

std::auto_ptr<Object> instance( new Object );
myVector.push_back( instance.get() ); //X
instance.release(); //No X

Alternatively,

myVector.push_back( 0 );
myVector.back() = new Object;

looks exception safe, too.


Best

Kai-Uwe Bux
 

Roland Pibinger

Yes, I cannot understand Roland's point either. Obviously reserve
might solve the problem, but I think it could cause unnecessary memory
allocation.

OOM can be handled by a global new_handler if the operating system
supports it. Linux e.g. uses 'optimistic memory allocation' so
checking for OOM exceptions is not useful there. See also Moral #2
here: http://www.gotw.ca/publications/mill16.htm
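For illustration, here is a minimal sketch of installing such a handler
with std::set_new_handler (the handler below just reports and aborts; a
real one might instead release a reserve block and return, in which case
operator new retries the allocation):

#include <cstdlib>
#include <iostream>
#include <new>

// Called by operator new whenever an allocation request cannot be satisfied.
void out_of_memory()
{
    std::cerr << "out of memory\n";
    std::abort();   // alternatively: free a reserve block and return to retry
}

int main()
{
    std::set_new_handler(out_of_memory);
    // ... the rest of the program allocates as usual
}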
I would prefer:

std::auto_ptr<Object> instance( new Object );
myVector.push_back( instance.get() ); //X
instance.release(); //No X

This is impractical for any real-world program. In general you need
not check within your program for OOM, stack overflow, int overflow,
etc. You can reasonably assume that you are in 'secure territory'.

Best wishes,
Roland Pibinger
 

Ivan Vecerina

: On 24 Mar 2007 08:04:48 -0700, "werasm" wrote:
: >> Yes, this is one of several ways to address the problem.
: >> All I am saying is that this issue is worth attention.
: >>
: >Yes, I cannot understand Roland's point either. Obviously reserve
: >might solve the problem, but I think it could cause unnecessary
: >memory allocation.
:
: OOM can be handled by a global new_handler if the operating system
: supports it. Linux e.g. uses 'optimistic memory allocation' so
: checking for OOM exceptions is not useful there. See also Moral #2
: here: http://www.gotw.ca/publications/mill16.htm

I rarely check for new/memory allocations in my code. I only do my
best to write code that is exception-safe. New-ing an object can
fail in any case because of a constructor failure.
Granted, push_back of a pointer is unlikely to fail, yet formally
it is a container operation that is allowed to fail and throw
an exception. So I'll write my code "to the spec".
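To make the failure mode concrete, here is a small sketch of the
single-statement form being discussed (Object and myVector are simply the
names used in the snippets above):

#include <cstddef>
#include <vector>

struct Object {};

int main()
{
    std::vector<Object*> myVector;

    // If push_back has to grow the vector and that allocation throws,
    // the freshly new-ed Object has no owner left and is leaked.
    myVector.push_back(new Object);

    for (std::size_t i = 0; i < myVector.size(); ++i)
        delete myVector[i];   // manual cleanup is needed either way
}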

: >I would prefer:
: >
: >std::auto_ptr<Object> instance( new Object );
: >myVector.push_back( instance.get() ); //X
: >instance.release(); //No X
:
: This is impractical for any real-world program.

Indeed. The real issue is that it is illegal to create a
container of auto_ptr. vector<T*> is brittle by nature.

When I have a container of polymorphic objects, I find
that the overhead of vector< shared_ptr<T> > is acceptable.
When not using polymorphic objects, I do not use containers
of pointers, but a container of <T> - with possibly some
accessory "index" containers storing pointers that refer
into an "allocation" container (a list or deque).

In C++0x, with the introduction of R-value references, I expect
that an equivalent of vector< auto_ptr<T> > will become available
in the standard library.
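For reference, the facility that eventually shipped is std::unique_ptr
(C++11); a sketch of the resulting container, reusing the Object name from
the snippets above:

#include <memory>
#include <vector>

struct Object {};

int main()
{
    std::vector< std::unique_ptr<Object> > v;

    // One statement, still exception-safe: if push_back throws, the
    // temporary unique_ptr deletes the Object it owns.
    v.push_back(std::unique_ptr<Object>(new Object));

    // The Objects are deleted automatically when the vector is destroyed.
}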

: In general you need
: not check within your program for OOM, stack overflow, int overflow,
: etc. You can reasonably assume that you are in 'secure territory'.

I write software for medical devices.

You think that one can just assume that int overflows never happen?

void on_decrement_radiation_power()
{
    --ray_power; // safe? what if unsigned ray_power was zero?
}
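A guarded version of the same handler, as a sketch (ray_power is the name
from the example above):

static unsigned ray_power = 0;   // as in the example above

void on_decrement_radiation_power()
{
    if (ray_power > 0)
        --ray_power;             // cannot wrap around to UINT_MAX
    // else: already at the minimum; report or ignore, but never wrap
}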

Depending on the type of device your code runs on, you will also
ensure that stack overflows can't be triggered by excessive
recursion, or ensure graceful failure if sufficient memory
isn't available for new incoming data.


Cheers -Ivan
 

werasm

Ivan said:
: >I would prefer:
: >
: >std::auto_ptr<Object> instance( new Object );
: >myVector.push_back( instance.get() ); //X
: >instance.release(); //No X
:
Indeed. The real issue is that it is illegal to create a
container of auto_ptr. vector<T*> is brittle by nature.

Yes, but you of course realise that the container was not a container
of auto_ptrs. Notice the get() in the call to push_back. The container
only contained normal pointers, hence the call to release() afterwards.

That said, I have read the article Roland mentioned. For me it is
actually concerning that a call to something like buffer_[x] = 'y'
might fail with something much worse than a bad_alloc exception (like
an access violation). Even when one does get to the situation where
memory becomes a problem (and this is certainly a possibility,
especially when writing applications where resources are constrained,
such as embedded Linux), one would like to have the means of taking
action.

Take your medical applications, for example:

In the event of memory failure, one would possibly want to disable a
less critical portion of the program, freeing its memory and allowing
a more critical portion to try again - especially when a life depends
on the execution of the more critical part.

I suppose for these kinds of programs one will have to resort to
pre-allocated memory only (possibly overloading new/delete to use
nothing but this). How do you even know whether the pre-allocated
memory was really allocated?
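As a very rough sketch of the "pre-allocated memory only" idea - a fixed
static pool behind a replaced global operator new; the pool size is
arbitrary, the sketch is not thread-safe and never recycles freed blocks,
and on an over-committing OS one would additionally touch every page of
the pool at start-up to make sure it is really backed:

#include <cstddef>
#include <new>

namespace {
    unsigned char pool[1024 * 1024];   // reserved up front; size is arbitrary
    std::size_t   used = 0;
}

void* operator new(std::size_t n)      // replaces the global allocator
{
    if (n == 0) n = 1;

    // keep subsequent blocks aligned for any object type
    const std::size_t align = alignof(std::max_align_t);
    n = (n + align - 1) & ~(align - 1);

    if (used + n > sizeof pool)
        throw std::bad_alloc();        // deterministic failure when the pool is gone

    void* p = pool + used;
    used += n;
    return p;
}

void operator delete(void* /*p*/) noexcept
{
    // a real pool would recycle blocks; this sketch deliberately does not
}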

Although memory failure is very rare, if it is a possibility at all,
one would like it to fail deterministically.

Regards,

Werner
 

Roland Pibinger

I rarely check for new/memory allocations in my code. I only do my
best to write code that is exception-safe. New-ing an object can
fail in any case because of a constructor failure.

Yes, constructor failure is something you expect. It's entirely
different from OOM, stack overflow, ...
Granted, push_back of a pointer is unlikely to fail, yet formally
it is a container operation that is allowed to fail and throw
an exception. So I'll write my code "to the spec".

Allowed, but not required. So maybe the spec is defective?
The real issue is that it is illegal to create a
container of auto_ptr. vector<T*> is brittle by nature.
When I have a container of polymorphic objects, I find
that the overhead of vector< shared_ptr<T> > is acceptable.

I don't think that 'smart pointers' solve any problem (BTW, a container
for auto_ptrs is not difficult, but why should one use it? See:
http://www.relisoft.com/resource/auto_vector.html).
Depending on the type of device your code runs on, you will also
ensure that stack overflows can't be triggered by excessive
recursion, or ensure graceful failure if sufficient memory
isn't available for new incoming data.

IMO, the question is which resources you can rely on within your
program and which you cannot. Checking for stack overflow, OOM, ...
would severely impair the creation of reusable components. You must
rely on some solid ground for your programming.

Best wishes,
Roland Pibinger
 

Ivan Vecerina

: On Sun, 25 Mar 2007 10:48:07 +0200, "Ivan Vecerina" wrote:
: >I rarely check for new/memory allocations in my code. I only do my
: >best to write code that is exception-safe. New-ing an object can
: >fail in any case because of a constructor failure.
:
: Yes, constructor failure is something you expect. It's entirely
: different from OOM, stack overflow, ...

As I previously pointed out, what you expect is dependent on
the platform and application that you are working on.

: >Granted, push_back of a pointer is unlikely to fail, yet formally
: >it is a container operation that is allowed to fail and throw
: >an exception. So I'll write my code "to the spec".
:
: Allowed, but not required. So maybe the spec is defective?

You mean, maybe the experience of the designers of the C++
language and libraries pales in comparison to yours?
You mean, it should not be possible to write an allocator
that can fail to allocate memory on a platform with limited
resources, and throw an exception to allow graceful recovery?
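For what it's worth, a sketch of such an allocator: a minimal C++11-style
allocator with an arbitrary byte quota whose allocate() throws bad_alloc,
so that vector::push_back really can fail and be caught (the quota
mechanism and its size are invented purely for illustration):

#include <cstddef>
#include <new>
#include <vector>

// Arbitrary quota shared by all instantiations of the allocator (illustrative).
inline std::size_t& bytes_left() { static std::size_t n = 1024 * 1024; return n; }

template <class T>
struct quota_allocator
{
    typedef T value_type;

    quota_allocator() {}
    template <class U> quota_allocator(const quota_allocator<U>&) {}

    T* allocate(std::size_t n)
    {
        const std::size_t bytes = n * sizeof(T);
        if (bytes > bytes_left())
            throw std::bad_alloc();          // graceful, catchable failure
        bytes_left() -= bytes;
        return static_cast<T*>(::operator new(bytes));
    }

    void deallocate(T* p, std::size_t n)
    {
        bytes_left() += n * sizeof(T);
        ::operator delete(p);
    }
};

template <class T, class U>
bool operator==(const quota_allocator<T>&, const quota_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const quota_allocator<T>&, const quota_allocator<U>&) { return false; }

int main()
{
    // push_back here can throw bad_alloc once the quota is exhausted.
    std::vector<int, quota_allocator<int> > v;
    v.push_back(42);
}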

: >The real issue is that it is illegal to create a
: >container of auto_ptr. vector<T*> is brittle by nature.
: >When I have a container of polymorphic objects, I find
: >that the overhead of vector< shared_ptr<T> > is acceptable.
:
: I don't think that 'smart pointers' solve any problem

Smart pointers (seek to) address the problem of memory leaks.
In your opinion, so many people have been working on smart
pointers for no reason?

: (BTW, a container
: for auto_ptrs is not difficult but why should one use it?
: See: http://www.relisoft.com/resource/auto_vector.html).

The difficulty of implementing a solution is proportional
to the breadth of its applicability.
Have you considered, for example, that:
- you'd have to rewrite a whole separate container to have an auto_list
- it is unsafe to use auto_vector::iterator with many standard
  algorithms, such as std::unique

I would suggest reading this introduction to R-value references
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1377.htm
[ here's a convenient index into more info about R-value
references and other C++ features that are in development:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2169.html ]

: >Depending on the type of device your code runs on, you will also
: >ensure that stack overflows can't be triggered by excessive
: >recursion, or ensure graceful failure if sufficient memory
: >isn't available for new incoming data.
:
: IMO, the question is which resources you can rely on within your
: program and which you cannot.


This is simple: what can be relied on or not, for portable C++ code,
is defined in the C++ standard. And according to this specification,
vector::push_back is allowed to throw an exception.

: Checking for stack overflow, OOM, ... would
: severely impair the creation of reusable components.

I have never suggested checking for stack overflow. But if I write
a recursive algorithm, I will make sure, for example, that its
worst-case recursion depth is logarithmic in the size of its input.
I do not specifically check for out-of-memory exceptions, but I use
RAII-related idioms to write code that is (hopefully) exception-safe
and immune to resource leaks.
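As an aside, here is a sketch of what bounding the recursion depth looks
like for quicksort: recurse only into the smaller partition and loop on
the larger one, which keeps the stack depth at O(log n) even in the worst
case (random-access iterators assumed, pivot choice kept naive for
brevity):

#include <algorithm>
#include <iterator>
#include <vector>

template <class It>
void qsort_bounded_depth(It first, It last)
{
    while (last - first > 1)
    {
        typedef typename std::iterator_traits<It>::value_type value_type;
        const value_type pivot_value = *(last - 1);

        It pivot = std::partition(first, last - 1,
            [&](const value_type& x) { return x < pivot_value; });
        std::iter_swap(pivot, last - 1);       // move the pivot into place

        if (pivot - first < last - (pivot + 1)) {
            qsort_bounded_depth(first, pivot); // smaller side: recurse
            first = pivot + 1;                 // larger side: iterate
        } else {
            qsort_bounded_depth(pivot + 1, last);
            last = pivot;
        }
    }
}

int main()
{
    std::vector<int> v;
    for (int i = 0; i < 1000; ++i) v.push_back((i * 37) % 101);
    qsort_bounded_depth(v.begin(), v.end());
}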

: You must rely on some solid ground for your programming.

Isn't this exactly what I am doing by not making
platform-specific assumptions?


Take care -Ivan
 

Ivan Vecerina

Hi, Werner,
:
: Ivan Vecerina wrote:
:
: > : >I would prefer:
: > : >
: > : >std::auto_ptr<Object> instance( new Object );
: > : >myVector.push_back( instance.get() ); //X
: > : >instance.release(); //No X
: > :
: > Indeed. The real issue is that it is illegal to create a
: > container of auto_ptr. vector<T*> is brittle by nature.
:
: Yes, but you of course realise that the container was not a container
: of auto_ptrs. Notice the get() in the call to push_back. The container
: only contained normal pointers, hence the call to release() afterwards.

My comment was not about the correctness of the solution
you posted (no question about this). I was only regretting
that it required 3 lines of code instead of one.

: That said, I have read the article Roland mentioned. For me it is
: actually concerning that a call to something like buffer_[x] = 'y'
: might fail with something much worse than a bad_alloc exception (like
: an access violation). Even when one does get to the situation where
: memory becomes a problem (and this is certainly a possibility,
: especially when writing applications where resources are constrained,
: such as embedded Linux), one would like to have the means of taking
: action.

This implementation approach ("lazy commit") might be a reasonable
choice for desktop systems - assuming that the user is alerted when
swap memory nears exhaustion, and that the user will first notice
system thrashing and look into releasing memory.
I am not a Linux expert, but on many platforms it is possible to
specify the maximum memory that can be allocated to a process,
and I wouldn't be surprised if this were the case in Linux as well.
Critical applications and servers may use their own allocators,
and impose hard limits on their total resource allocations
(or use other hard limits, such as the maximum number of sessions
that a server may open simultaneously).
Tuning these hard limits is part of the job of a competent
server system administrator...
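On POSIX systems (including Linux) this can be done with setrlimit; a
sketch, using an arbitrary 256 MiB cap, so that exhausting the cap
surfaces as bad_alloc inside the process rather than as system-wide
thrashing:

#include <sys/resource.h>   // POSIX, not standard C++
#include <cstdio>

int main()
{
    rlimit lim;
    lim.rlim_cur = 256UL * 1024 * 1024;   // soft limit: 256 MiB (arbitrary)
    lim.rlim_max = 256UL * 1024 * 1024;   // hard limit

    if (setrlimit(RLIMIT_AS, &lim) != 0)  // cap the process's address space
        std::perror("setrlimit");

    // ... from here on, allocations beyond the cap make new throw bad_alloc
}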

: Take your medical applications, for example:
:
: In the event of memory failure, one would possibly want to disable a
: less critical portion of the program, freeing its memory and allowing
: for a more critical portion to try again - especially when a life
: depends on the execution of a more critical part.

Small embedded critical systems simply avoid making any memory
allocations at all.
As applications and devices become more complex, the emphasis
is not as much on avoiding any failure (difficult to prove) as
on making sure that failure is handled gracefully.
On an X-ray machine, an external watchdog will immediately stop
all irradiation if the software stops responding, or if a certain
output threshold is reached. In your car, a shut-down of the
ABS electronics will not prevent the (mechanically-transmitted)
braking force from functioning (unless you have one of those
specific models that attempted brake-by-wire).

: Although memory failure is very rare, if it is a possibility at all,
: one would like it to fail deterministically.

Yes.

Kind regards,
Ivan
 

Filimon Roukoutakis

Jim said:
std::vector<someclass*> is usually bad design unless you are working with
polymorphism. Even then some would say use a smart pointer (although I use
naked pointers myself).

I am working with polymorphism. At some point I was reading strong
statements about not using std::smart_ptr with std::containers. Did I
misunderstand?
 

Jim Langston

Filimon Roukoutakis said:
I am working with polymorphism. At some point I was reading strong
statements about not using std::smart_ptr with std::containers. Did I
misunderstand?

Probably not. There are a few people who don't like smart pointers for
various reasons.

The question becomes, then: why do you need a someclass** instead of a
class*? Unless you are changing where the pointer points, in which case
wouldn't it just be better to overwrite the value?
 

Richard Herring

Filimon Roukoutakis said:
Jim Langston wrote:
[...]
std::vector<someclass*> is usually bad design unless you are working
with polymorphism. Even then some would say use a smart pointer
(although I use naked pointers myself).

I am working with polymorphism. At some point I was reading strong
statements about not using std::smart_ptr with std::containers. Did I
misunderstand?

There's no such animal as std::smart_ptr, so the question is what kind
of pointer they really meant. Probably they meant that you can't have
std::containers of std::auto_ptr, because it doesn't model normal copy
semantics.
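A two-line illustration of those copy semantics (this compiles as C++03;
auto_ptr was later deprecated and finally removed in C++17):

#include <cassert>
#include <memory>

int main()
{
    std::auto_ptr<int> a(new int(42));
    std::auto_ptr<int> b = a;      // the "copy" actually transfers ownership

    assert(a.get() == 0);          // a has been nulled, unlike a genuine copy
    assert(*b == 42);
}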
 

Ask a Question

Want to reply to this thread or ask your own question?

You'll need to choose a username for the site, which only take a couple of moments. After that, you can post your question and our members will help you out.

Ask a Question

Members online

No members online now.

Forum statistics

Threads
474,301
Messages
2,571,549
Members
48,295
Latest member
JayKillian
Top