C++ standard library and exceptions


ittium

Groups,
I have been using C++ for the last couple of years, and the only exception
from the standard library (+STL) that I ever worried about was bad_alloc
from new. Lately I have been reading about exceptions, and it appears a
large number of routines can throw. I was wondering what is the best way to
find out the list of exceptions a standard library routine can throw. I am
actually looking for **some kind** of standard man pages that talk about
exceptions in addition to method signatures and return values.
thanks
Ittium
 

PS: I am aware of the document
http://www2.research.att.com/~bs/3rd_safe.pdf
I am actually looking for an online resource that I can refer to before
using any standard library routine (like we do for system call error codes).
 

Goran

Groups,
I have been using C++ for the last couple of years, and the only exception
from the standard library (+STL) that I ever worried about was bad_alloc
from new. Lately I have been reading about exceptions, and it appears a
large number of routines can throw.

Yes; however, AFAIK, all such throws indicate a serious bug in your
program. Do you want to continue running even though you stepped on a
bug? I think not.

The only potential exception I know of is vector resize, where you
might get length_error because elem_size*elem_count is over the address
space size.
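
(For illustration, a minimal sketch of that length_error case; whether a
given request produces length_error or bad_alloc depends on the
implementation's max_size() and on available memory.)

#include <iostream>
#include <new>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v;
    try {
        v.resize(v.max_size() + 1);   // request exceeds max_size()
    } catch (const std::length_error& e) {
        std::cout << "length_error: " << e.what() << '\n';
    } catch (const std::bad_alloc& e) {
        std::cout << "bad_alloc: " << e.what() << '\n';
    }
}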
I was wondering what is the best way to
find out the list of exceptions a standard library routine can throw.

I don't know, but I know this: that's the wrong way of working with
exceptions. In the vast majority of cases you simply don't need to know
what the exception types are. What you do need to know, though, is whether
a routine can throw or not (and even that, not that often; only when you
have a piece of no-throw code and you work with some STL element in
it).

Could you give an example of why you think it's important to know the
type of the exception? I'll try to show you how to do it so that it
isn't.
I am actually looking for **some kind** of standard man pages that talk
about exceptions in addition to method signatures and return values.

I know of none. But, due to what I think about that (see above), I
didn't look very hard.

Further, I believe that specifying such a list would hinder further
development of the library, and would also hinder what's available to the
implementation to, e.g., improve its debugging facilities. If such a list
were prescribed, you might easily end up with non-compliant error-checking
facilities in a debug build (or even in release), or make the
creation/use of such facilities harder.

Goran.
 

ittium


Thanks, Goran, for a very detailed explanation. Please correct me if I am
wrong. This is what you are saying:

It is not a good idea to catch an exception thrown from a standard
library routine; such an exception is a condition that you cannot
recover from, except in the following cases:
1. bad_alloc (although handling this may also be very difficult; you have
to employ things like reserving memory for such a condition, as in the
rough sketch below. Maybe in a few cases you can recover.)
2. vector length_error
We should not try to catch other exceptions (using catch). When such an
exception occurs, the default behavior (terminate) will take place and the
program will dump core.
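
(A rough sketch of the reserved-memory idea from point 1, not from the
original posts; the 64 KiB figure and the deliberately oversized request
are arbitrary, and whether the big allocation actually fails depends on the
platform.)

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>

// Emergency reserve, released only when bad_alloc is seen, so that the
// cleanup/reporting path itself still has some memory to work with.
std::unique_ptr<char[]> emergency(new char[64 * 1024]);

int main() {
    try {
        // Deliberately absurd request; on most systems this throws bad_alloc.
        char* p = new char[static_cast<std::size_t>(-1) / 4];
        delete[] p;
    } catch (const std::bad_alloc&) {
        emergency.reset();  // give the reserve back before doing anything else
        std::cerr << "out of memory, shutting down cleanly\n";
        return 1;
    }
    return 0;
}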

Ittium
 

Goran

Thanks, Goran, for a very detailed explanation. Please correct me if I am
wrong. This is what you are saying:

It is not a good idea to catch an exception thrown from a standard
library routine,

Actually, I would say "it's a bad idea to catch any exception" ;-),
for the very reason you state just below (there's no recovery). I
usually say: when you think you need to catch an exception, don't; see
what happens when it's thrown and where you end up. Only if you don't
like that, catch. I don't see a big difference between an exception
thrown by the standard library and one of your own.

Here's how I see things: code runs, doing some operation. During that
operation, an exception happens. In the vast (VAST) majority of cases, that
operation is dead in the water. So... it makes no sense to catch the
exception anywhere inside that operation. It makes sense to clean up,
get out, and inform the user. So, for example, a command-line utility
will only catch in main(). Some kind of server will catch on some
request-processing boundary (e.g. in order to (try to) produce a
"request failed because: e.what()" response). And so on.
such an exception is a condition that you cannot recover from, except in
the following cases:
1. bad_alloc (although handling this may also be very difficult; you have
to employ things like reserving memory for such a condition. Maybe in a
few cases you can recover.)

You can find many elaborate and heated discussions about catching
bad_alloc on this newsgroup. ;-)
2. vector length_error
We should not try to catch other exceptions (using catch). When such an
exception occurs, the default behavior (terminate) will take place and the
program will dump core.

I don't agree with that (see above). Exceptions are not (necessarily)
about terminating the process. Almost always, they are about getting
out, cleaning up, and informing the user about the error. IOW, I
don't like dumping core. That's not "informing the user about the
error". Dumping core is "informing the developer about the error".
Those are two different things: informing the user is about things that
went wrong at runtime; informing the developer is about bugs in the code.

Goran.

Yes, however, "will dump core" is system specific. You probably should
have a giant try/catch in your main.
 

ittium

I am a little confused now. If I sum up, you are saying that **almost all**
the exceptions thrown by the standard library are non-recoverable, so there
is no use in catching them (looks scary). Programmers are helpless when the
standard library throws an exception (assuming they have checked all the
error codes, but software is still rarely 100% correct).

If there are some exceptions that may be recoverable, would it not be
good to list them (a catch-all with ... will not help much) so that
programmers can catch them and try to recover (if possible)?

As far as user-defined exceptions are concerned, you add **most of**
them in the hope that the software will recover from the exceptional
condition, so not catching any exception is probably not a good idea.
 

André Gillibert

io_x said:
+comp.lang.c


I actually think:
all exceptions in the C or C++ library, or whatsoever library,
except division by 0, have to be turned off.

All behaviour of segfault or "abort()" in the C or C++ library,
or in any library, has to be minimized to zero.

For this I say every function has to return a possible
error code;
every function in C that has no error code is badly thought out.

This is valid for OS API libraries too.

Followup-To: comp.lang.c++

Do you mean that the behavior of a segfault should be defined as throwing
a standard C++ exception?

Well-defined programs cannot have any segfault... Any program having a
segfault is in an inconsistent state. Important data may have been
damaged, and there is no guarantee of any sort of recovery.

For the sake of security and data integrity, a good system detecting
such an error should kill/abort/terminate the program immediately.
 

Nobody

Well-defined programs cannot have any segfault... Any program having a
segfault is in an inconsistent state. Important data may have been
damaged, and there is no guarantee of any sort of recovery.

For the sake of security and data integrity, a good system detecting
such an error should kill/abort/terminate the program immediately.

That's a slight over-generalisation. There are situations where it's
possible to handle a segfault. But such cases aren't the norm, and
handling them is complex and requires non-portable code. They certainly
shouldn't be converted to exceptions automatically.
 

Kaz Kylheku

Followup-To: comp.lang.c++

Do you mean that the behavior of a segfault should be defined as throwing
a standard C++ exception?

Well-defined programs cannot have any segfault... Any program having a
segfault is in an inconsistent state. Important data may have been
damaged, and there is no guarantee of any sort of recovery.

Sorry, that's a ridiculous assertion. Catching segfaults is a useful
software technique that can be used to do cool things. A software emulator for
a CPU can catch a segfault to detect that writes are taking place to simulated
I/O space. Garbage collectors can use segfaults. Some Lisp implementations
catch SIGSEGV as part of their garbage collector implementation. For instance
CLISP has a "libsigsegv" library for handling access violations and fixing them
up:

http://libsigsegv.sourceforge.net/
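
(A minimal, deliberately non-portable sketch of the general idea, assuming
POSIX sigaction and sigsetjmp/siglongjmp: it probes whether an address is
readable by fielding the SIGSEGV. This only illustrates the technique; it
is not how libsigsegv itself is implemented.)

#include <csetjmp>
#include <csignal>
#include <cstdio>

static sigjmp_buf probe_env;

extern "C" void on_segv(int) {
    siglongjmp(probe_env, 1);   // jump back to the probe point
}

bool address_is_readable(const volatile char* p) {
    struct sigaction sa{}, old{};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    volatile bool ok = false;
    if (sigsetjmp(probe_env, 1) == 0) {
        (void)*p;               // may fault; if so, we land in on_segv
        ok = true;
    }
    sigaction(SIGSEGV, &old, nullptr);   // restore the previous handler
    return ok;
}

int main() {
    char c = 'x';
    std::printf("&c readable: %d\n", int(address_is_readable(&c)));
    std::printf("nullptr readable: %d\n", int(address_is_readable(nullptr)));
}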
For the sake of security and data integrity, a good system detecting
such an error should kill/abort/terminate the program immediately.

That is a silly view. Detecting the error already provides security and
integrity.

In all advanced CPU architectures, exceptions store enough information so that
the handler can recover by fixing the situation and resuming the program (if
that makes sense).

Terminating the program isn't the only possibility.

Since architectures provide the capability, it means that higher level
languages which do not map the capability are crippled with respect to machine
language.

I.e. you have some better exception handling features at the instruction set
level than you do in C++.
 

Joshua Maurice

The main problem with segfaults is that they are not guaranteed. A memory
page which was not present in some run may be present in another run,
maybe because the overall load of the computer has changed, or because the
program itself was altered slightly. Thus it is not guaranteed that
dereferencing an invalid pointer causes a segfault. If it does not, the
program may easily silently misbehave, which is much worse by far. Ergo,
dereferencing an invalid pointer is a bug in the program which must be
fixed.

Portably speaking, there is no such thing as a segfault. On some
systems which define "segfault", you can guarantee that a segfault
will occur in some situations. The specific situation in mind is when
you dereference a null pointer. Again entirely non-portable, but
sometimes incredibly useful.
If some (part of the) program, such as the garbage collector, has been
carefully coded to not misbehave when accessing potentially invalid memory
locations, then yes, it is possible to work with segfaults. I suspect such
garbage collectors will have non-deterministic behavior by themselves and
will not always release all memory blocks they could (erring on the safe
side). This does not mean that relying on segfaults is a good idea for an
ordinary program.

A much simpler example to fathom is Sun's JVM. It is very carefully
coded to catch segfaults; on a segfault it checks whether it just tried to
dereference a pointer, specifically a user Java reference, and then
checks whether that Java reference / pointer is null. This is an
implementation of Java that has basically zero overhead for null-pointer
checking, while still guaranteeing that a Java exception will be
thrown when a null Java reference is dereferenced.
 

Wolfgang.Draxinger

yes, but it is better that a function return an error code than segfault
the program, e.g.

The problem is that a segfault may corrupt the very error-detection
logic. A segfault in a process means that there is no way the execution
of the program can continue with any assurance that something will not
eventually break somewhere else.

The only sane reaction to a segfault is the process's equivalent of
biological apoptosis: http://en.wikipedia.org/wiki/Apoptosis

If you want to react gracefully to a segfault, fork the process at program
start and install a handler in the parent for the case where the child
terminates with a segfault error condition. You could even go as far as
attaching to the process as a debugger and extracting the state of the
process to give a sensible error report.
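
(A rough POSIX-only sketch of that fork-and-watch idea; run_real_program()
is a hypothetical stand-in for the actual application, and error handling
is omitted.)

#include <csignal>
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int run_real_program() {
    std::puts("doing the real work");
    return 0;
}

int main() {
    pid_t child = fork();
    if (child == 0)
        return run_real_program();    // child: do the real work

    int status = 0;                   // parent: act as the watchdog
    waitpid(child, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV) {
        std::fprintf(stderr, "child died with SIGSEGV; writing error report\n");
        // here one could inspect a core file, or have attached as a debugger
        return 128 + SIGSEGV;         // conventional "killed by signal" exit code
    }
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}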

If you were really crazy -- and I mean in the sense of a lunatic, Joker-like
madness -- you could even try to implement a system that restores
the process to the last sane state recorded before commencing the
action that ultimately led to the segfault (however, it's very likely
the process will segfault again).

A process segfaulting always means that there is something
fundamentally broken in the program itself, which cannot be fixed
by handling an error condition, but only by fixing the erroneous
code itself.


Wolfgang
 

Joshua Maurice


Again, as explained else-thread, several very popular and very
successful programs disagree with you. Examples include Sun's/Oracle's
JVM.
 

hanukas

Again, as explained else-thread, several very popular and very
successful programs disagree with you. Examples include Sun's/Oracle's
JVM.

JVM.. uh huh.. okay
 

Tobias Müller

Joshua Maurice said:
Again, as explained else-thread, several very popular and very
successful programs disagree with you. Examples include Sun's/Oracle's
JVM.

Such programs are a special case.

1. You can only reliably handle a small subset of segfaults (namely null
pointer dereferences).

2. And only if you /know/ that no other segfaults can appear.

3. It is probably quite platform-specific and therefore not portable.

It is also OK to create a website with many IE-specific features if you
/know/ that only IE users will use it, but I still wouldn't recommend it.

Tobi
 

Joshua Maurice


I agree with most of that. I disagree that "not portable" is always
"bad" or "must be avoided". Sometimes non-portable solutions are the
correct solutions, like in the case of the JVM. Thus I corrected the
earlier poster when he posted the following, as he was mistaken.

On Dec 15, 3:14 am, "Wolfgang.Draxinger"
 

Joshua Maurice

"Joshua Maurice" <[email protected]> ha scritto nel messaggio






Again, as explained else-thread, several very popular and very
successful programs disagree with you. Examples include Sun's/Oracle's
JVM.

# If these programs do that without detecting and correcting the error in
# new versions of the programs, they are doing it wrong, because a wrong
# write to memory can produce an indefinite result...
#
# But I am only saying it is better to have some detection, before a write
# to memory that may be the wrong memory for the program, so that there is
# no wrong write but there is a detection of the error.

Null pointer dereferences are undefined behavior by the C++ standard.
Linux, for example, defines the behavior.

Your proposed solution of checking the pointer for null before
accessing carries a cost in the "not thrown" code path - the "test if
0 and branch" instruction(s). The solution that Sun's/Oracle's JVM
uses, that of using segfaults, carries no overhead in the "not thrown"
code path.
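
(A tiny sketch of the two code paths being compared; Object and the
read_field_* functions are made-up names. The second function only
illustrates the idea: a managed runtime that fields the access violation
can turn it into an exception, whereas in portable C++ the null
dereference is simply undefined behavior.)

#include <stdexcept>

struct Object { int field; };

// Explicit check: a "test if 0 and branch" on every access, even when the
// pointer is never actually null.
int read_field_checked(const Object* obj) {
    if (obj == nullptr)
        throw std::runtime_error("null dereference");
    return obj->field;
}

// JVM-style (conceptual): no branch in the common, non-null path; the rare
// faulting access is caught by a SIGSEGV / access-violation handler and
// resurfaced as a language-level exception. Plain C++ cannot portably do this.
int read_field_trapping(const Object* obj) {
    return obj->field;
}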
 

Sektor van Skijlen

Sorry, that's a ridiculous assertion. Catching segfaults is a useful
software technique that can be used to do cool things.  A software emulator for
a CPU can catch a segfault to detect that writes are taking place to simulated
I/O space.  Garbage collectors can use segfaults. Some Lisp implementations
catch SIGSEGV as part of their garbage collector implementation. For instance
CLISP has a "libsigsegv" library for handling access violations and fixing them
up:

http://libsigsegv.sourceforge.net/

That's an implementation detail, strongly cooperating with the system
level. It has nothing to do with SEGV occurring in ordinary C++ programs.
That is a silly view. Detecting the error already provides security and
integrity.

SEGV is not "detecting the error". SEGV is detecting the situation
that shouldn't
have occurred - it means that, of course, reporting SEGV means error,
but this
can quickly lead us to a statement that "not reporting SEGV means no
error in
data integrity", which is not true anyway - SEGV might have been only
one of
possible realizations of "undefined behavior". Others are e.g. writing
to a
memory not expected to be written.

What I mean is that SEGV doesn't just mean that "well, some error
happened".
It means that the runtime code went totally out of control and caused
much
more damage in the runtime data than just one stupid SEGV.

Note that virtual machines with strict memory assignment have many more
possibilities to catch this kind of error. In Java, for example, you are
not allowed to cast pointers, so writing to a different object than you
intended ends up with an exception (although you can still write to an
incorrect cell in an array, or to an incorrect object in a list).
In all advanced CPU architectures, exceptions store enough information so that
the handler can recover by fixing the situation and resuming the program (if
that makes sense).

Terminating the program isn't the only possibility.

When the code has run out of control, terminating the program is the only
sensible way to continue. Terminating the program may even save the user's
data from being lost (think what would happen if a corrupted program next
wrote the data to a file, overwriting the previous data).
Since architectures provide the capability, it means that higher level
languages which do not map the capability are crippled with respect to machine
language.

I.e. you have some better exception handling features at the instruction set
level than you do in C++.

SEGV should be treated strictly as "the code ran out of control". So the
only situation in which you'd like to handle this error is when you are
running code contained in a kind of plugin or an explicitly separated part
of the code, where you know that you didn't let this code operate on any
global data; in other words, you know that whatever happened in that code
happened in its private pool. In this case you can catch the SEGV
exception and immediately kill the plugin or that part (the part can be,
for example, a tab in a web browser), then continue with the rest of the
application.

In any other case you should let your program crash, because that is the
best you can do to stop any further damage.

Please find more detailed information in the following article:
http://sektorvanskijlen.wordpress.com/2011/01/05/exceptions-konsidered-harmfool/


Regards,
Sektor
 

88888 Dihedral


In a true virtual memory system on a robust OS, a process is not allowed
to write directly to some wrong address outside those it has been allowed.

But this does not solve any dead cycle of indirect calls formed by
lousy programs.
 
