8 years of C++ Exception Handling

Frank Puck

So what do you consider post-C++-Exception-Handling?


Being aware that certain statements may throw, and writing code which
does not ignore this fact.
Writing code that relies on the fact that it will be executed only if
all previous statements have executed successfully.
Matching design philosophies, including e.g.
* the notion of non-corrupting fail of an operation
* the notion that the end user can expect to see a descriptive error
message, which includes any system error message and maybe even
stack-trace information (see the sketch below), e.g.:
  Cannot parse input file "test.cpp", because of
    Cannot start preprocessor, because of
      maximum number of processes reached
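
A minimal sketch of how such a layered "because of" message can be built,
by catching and rethrowing with added context at each level (the function
names here are illustrative, not from the thread):

#include <stdexcept>
#include <string>
#include <iostream>

void spawn_process()
{
    // imagine the underlying OS call failing here
    throw std::runtime_error("maximum number of processes reached");
}

void start_preprocessor()
{
    try {
        spawn_process();
    }
    catch (const std::exception& e) {
        throw std::runtime_error(
            std::string("Cannot start preprocessor, because of\n  ") + e.what());
    }
}

void parse_input_file(const std::string& name)
{
    try {
        start_preprocessor();
    }
    catch (const std::exception& e) {
        throw std::runtime_error(
            "Cannot parse input file \"" + name + "\", because of\n  " + e.what());
    }
}

int main()
{
    try {
        parse_input_file("test.cpp");
    }
    catch (const std::exception& e) {
        std::cerr << e.what() << '\n';   // the layered message reaches the end user
    }
}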
I don't tolerate exceptions other than from data input from files or
other external data sources. Once data has been accepted, there should
be no exceptional conditions, and data created internally should not
cause an exceptional condition.


so you ignore that new may throw?
so you ignore that opening a file for writing may fail?
After that everything else failing is a bug/algorithm failure, and
outside the scope of legitimate handling by exceptions.


assertions are still useful
 
lilburne

Frank said:
Being aware that certain statements may throw, and writing code which
does not ignore this fact.

I'd much rather not have any throws, thank you very much.
Writing code that relies on the fact that it will be executed only if
all previous statements have executed successfully.

They damn well ought to have or the CPU needs replacing.
Matching design philosophies including e.g.
* the notion of non-corrupting fail of an operation


Unless it is permissible for an operation to fail, a failing
operation is a coding/design bug.

* the notion that the end user can expect to see a descriptive error
message,
which includes any system error message and maybe even stack-trace
information, e.g.:
Cannot parse input file "test.cpp", because of
Cannot start preprocessor, because of
maximum number of processes reached


Exceptions aren't a prerequisite for informative system
error messages.

What is the user going to do with a stack trace?

so you ignore that new may throw?


Pretty much. If a calculation's memory requirements exceed the
virtual address space, all the messing about in the world ain't
going to obtain more. Now in our new-handler we'll release a few MB
which we reserved for such eventualities, and we'll set a flag to
abort the current calculation, but of course at this point no
exception has fired.
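
For what it's worth, a rough sketch of that scheme (illustrative names, not
lilburne's actual code): reserve a few MB up front, and in the handler
installed with std::set_new_handler release the reserve and set an abort
flag rather than letting bad_alloc propagate:

#include <new>
#include <cstdlib>

static char* g_reserve = 0;
static bool  g_abort_calculation = false;

void out_of_memory_handler()
{
    if (g_reserve) {
        delete [] g_reserve;          // hand the reserved block back to the allocator
        g_reserve = 0;
        g_abort_calculation = true;   // main loop polls this and abandons the calculation
    } else {
        std::abort();                 // reserve already spent; nothing more to release
    }
}

int main()
{
    g_reserve = new char[4 * 1024 * 1024];    // a few MB kept back for emergencies
    std::set_new_handler(out_of_memory_handler);
    // ... run calculations, checking g_abort_calculation periodically ...
}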

so you ignore that opening a file for writing may fail?

We don't have any file IO that throws. At least not as far as
application code is concerned. If there is any throwing by
any particular OS it doesn't propagate outside of the wrapper
classes.
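
Something along these lines, presumably (a sketch only, not lilburne's actual
wrapper): the wrapper's interface reports failure through return values, and
anything the underlying stream or OS layer throws is absorbed inside it:

#include <fstream>
#include <string>

class OutputFile
{
public:
    bool open(const std::string& name)
    {
        try {
            m_stream.open(name.c_str());
            return m_stream.is_open();
        }
        catch (...) {
            return false;    // nothing thrown below escapes to application code
        }
    }

    bool write(const std::string& text)
    {
        try {
            m_stream << text;
            return m_stream.good();
        }
        catch (...) {
            return false;
        }
    }

private:
    std::ofstream m_stream;
};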

assertions are still useful

Indeed they are.
 
Jorge Rivera

lilburne said:
You'd probably hate mine too. Minimal runtime checking but heavily laced
with assertions. Violate a precondition and in a debug run - kerboom.

This is probably the fundamental disagreement between our approaches.

Consider this:

{
    BaseClass* base = someFunction(...);
    DerivedClass* derived = dynamic_cast<DerivedClass*>(base);

    // I assume you use something like this for a 'debug' release
#ifdef DEBUG
    assert(derived);
#endif // DEBUG
}

This code definitely helps developers make sure that after the cast,
derived is not NULL. Therefore developers are forced to add runtime
checking code. With properly designed exception code, you will not
execute code after an 'invalid' state, therefore simplifying code
construction.

For example if operation_<n> functions throw exceptions, the following
code will break if obj is in an 'invalid' state anywhere in the process.

Object obj;
operation_1_throws(obj);
operation_2_throws(obj);
operation_3_throws(obj);
operation_4_throws(obj);

This is much more readable and simple to encapsulate in try/catch than
the following code.

Object obj;

if (STATUS_OK != operation_1_ret(obj))
{
    // failed here, handle this in some way
}
else if (STATUS_OK != operation_2_ret(obj))
{
    // failed at operation_2, handle here
}
else if (STATUS_OK != operation_3_ret(obj))
{
    // failed at operation_3, handle here
}
else if (STATUS_OK != operation_4_ret(obj))
{
    // failed at operation_4, handle here
}

I do understand that there is a place for this type of code; all I'm
saying is that exceptions can be very useful and are, in some instances
(I'm pretty sure this topic has been beaten to death in this or some
other forum...), better design solutions than return codes and even your
beloved assertions.

Cheers,

Jorge
 
Jonathan Turkanis

Jorge Rivera said:
Consider this:

{
    BaseClass* base = someFunction(...);
    DerivedClass* derived = dynamic_cast<DerivedClass*>(base);

    // I assume you use something like this for a 'debug' release
#ifdef DEBUG
    assert(derived);
#endif // DEBUG
}

If you're willing to assume in release mode that base can be
static_cast'd to DerivedClass, then you should use static_cast. The
dynamic_cast, together with the assertion, can be used in debug mode.
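
One common way to package that idiom (a sketch, not Jonathan's actual code):
a helper that verifies the downcast with dynamic_cast plus assert in debug
builds and collapses to a plain static_cast when NDEBUG is defined:

#include <cassert>

template <class Derived, class Base>
Derived down_cast(Base* base)    // Derived is a pointer type, e.g. DerivedClass*
{
    // Checked in debug builds; with NDEBUG defined only the static_cast remains.
    assert(base == 0 || dynamic_cast<Derived>(base) != 0);
    return static_cast<Derived>(base);
}

// Usage:
//   DerivedClass* derived = down_cast<DerivedClass*>(base);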

Jonathan
 
Jorge Rivera

Jonathan said:
If you're willing to assume in release mode that base can be
static_cast'd to DerivedClass, then you should use static_cast. The
dynamic_cast, together with the assertion, can be used in debug mode.

I guess you are an MSVC++ developer only, right? There is no definition
in C++ of 'debug' or 'release' versions. Whatever optimizations
Microsoft uses do not change what the standard behavior is. The use of
dynamic_cast over static_cast is a design decision, not a build decision.

I do appreciate the comment, as I did not know this about MSVC++ (if you
use a different compiler, let me know).

Jorge L.
 
tom_usenet

That's your definition of an error, and I guess I have the freedom of
designing my classes to define error as something different, right?


But that's exactly what I want. I do not allow you to continue without
checking for the error. You CANNOT ignore it; even if it causes you
the pain of processing the error in a catch block, it is better (and
this is, of course, just my perspective and programming approach) than
just allowing you to ignore the return code.

You can create return values that have to be checked.

#include <exception>   // for std::terminate

template <class T>
struct ret_value
{
    ret_value(T const& value)
        : m_value(value), m_checked(false) {}

    ~ret_value()
    {
        if (!m_checked)
            std::terminate();
    }

    T& get()
    {
        m_checked = true;
        return m_value;
    }

    operator T&()
    {
        return get();
    }

private:
    T m_value;
    bool m_checked;
};

or similar. Exceptions aren't there to force you to handle errors;
they're there to allow you to handle an error at the point where you
have enough information available to handle it, without manually
returning it up the call stack.
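
A hypothetical usage sketch, assuming the ret_value template above is in
scope (the function and status names are made up; returning by value relies
on copy elision, guaranteed since C++17, so the unchecked temporary isn't
destroyed on the way out):

#include <cstdio>

enum Status { STATUS_OK, STATUS_FAILED };

ret_value<Status> operation_1(int& x)
{
    x = 42;
    return ret_value<Status>(STATUS_OK);
}

int main()
{
    int x = 0;
    if (operation_1(x).get() != STATUS_OK)   // get() marks the result as checked
        std::printf("operation_1 failed\n");

    // operation_1(x);   // result silently ignored: the destructor calls terminate()
}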

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
 
Jorge Rivera

tom_usenet said:
You can create return values that have to be checked.

#include <exception>   // for std::terminate

template <class T>
struct ret_value
{
    ret_value(T const& value)
        : m_value(value), m_checked(false) {}

    ~ret_value()
    {
        if (!m_checked)
            std::terminate();
    }

    T& get()
    {
        m_checked = true;
        return m_value;
    }

    operator T&()
    {
        return get();
    }

private:
    T m_value;
    bool m_checked;
};

or similar. Exceptions aren't there to force you to handle errors;
they're there to allow you to handle an error at the point where you
have enough information available to handle it, without manually
returning it up the call stack.

Even if that was not the original intention of exceptions, that is the way
I like using them. Although the approach you present serves the purpose
of making sure that clients check status variables, there are design
disadvantages to this approach (OK, that was a stupid comment, as that
applies to everything in this discussion...).

I hadn't thought of this approach though, and it appears very useful.
In the future I may do more things like this to reduce dependency on
exception handling mechanisms. Thanks,

Jorge L.
 
lilburne

Jorge said:
This is probably the fundamental disagreement between our approaches.

Consider this:

{
    BaseClass* base = someFunction(...);
    DerivedClass* derived = dynamic_cast<DerivedClass*>(base);

    // I assume you use something like this for a 'debug' release
#ifdef DEBUG
    assert(derived);
#endif // DEBUG
}

This code definitely helps developers make sure that after the cast,
derived is not NULL. Therefore developers are forced to add runtime
checking code. With properly designed exception code, you will not
execute code after an 'invalid' state, therefore simplifying code
construction.


Yes this is the fundamental difference.

We would say that if someFunction is returning BaseClass* then the
calling code should handle a BaseClass. Dynamic casting might help to
select efficient methods for processing different types of DerivedClass,
but it is a bug not to handle the general case too.

Alternatively it might be a post-condition that someFunction always
returns DerivedClass; if someFunction subsequently changes so that it
returns other things, that is an API change, and I don't think that
calling code should be testing for API changes in the code it calls.
Where would it end?

If the assumption is that someFunction always returns a DerivedClass
then the assert is sufficient (BTW no conditional required).
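
That is, with <cassert> the check needs no #ifdef DEBUG wrapper at all;
defining NDEBUG in a release build already removes it (a fragment reusing
the BaseClass/DerivedClass names from the example above):

#include <cassert>

void use(BaseClass* base)
{
    DerivedClass* derived = dynamic_cast<DerivedClass*>(base);
    assert(derived != 0);    // active in debug builds, compiled out with NDEBUG
    // ... use derived ...
}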

The technical problem we have with exceptions is the overhead of stack
unwinding. A simple test I wrote a month or so ago had gcc 3.3 running
5 times slower in the presence of a try, throw, catch than with an
assert. This seems to be a particular problem with 3.3, as our previous
tests have only shown a 20% performance degradation. Performance is a
critical factor for our customers, as calculations can take several
hours and even days in some cases, and whilst we tend to be the fastest
in the industry, taking a hit for something that should never occur
seems to be too much to ask. I'd rather that 20% was spent on improving
surface quality than on trapping bugs at runtime.

A philosophical problem we have is that it assumes that the calling
code is going to catch the exception. But if the caller couldn't be
bothered to ensure that the arguments being passed were such that the
called code could proceed, then it is unlikely the caller will bother to
catch an exception either. Probably somewhere up in the main loop there
is a catch(...), but this seems like an awful cop-out.

For example if operation_<n> functions throw exceptions, the following
code will break if obj is in an 'invalid' state anywhere in the process.

Object obj;
operation_1_throws(obj);
operation_2_throws(obj);
operation_3_throws(obj);
operation_4_throws(obj);


Well again we'd say that a precondition of each of these functions is
that 'obj' is not invalid.

void operation_1(Object &obj) {
    assert(obj.invalid() == false);
    // perform operation1
}


Perhaps our applications are such that we have full control over the
objects we create; outside of I/O our objects don't become invalid.

We certainly wouldn't expect that a consequence of calling operation_1
would be to cause a perfectly good 'obj' to become invalid. Perhaps
operation_1 initializes 'obj'? In which case we wouldn't expect
operation_2 to cause a perfectly good 'obj' to become invalid.

I don't see the additional clarity of:

try {
    Object obj;
    operation_1_throws(obj);
    operation_2_throws(obj);
    operation_3_throws(obj);
    operation_4_throws(obj);
}
catch (...) {
}

over

Object obj;
if (MY_OK == operation_1(obj)) {
    operation_2(obj);
    operation_3(obj);
    operation_4(obj);
} else {
}

(I'm pretty sure this topic has been beaten to death in this or some
other forum...), better design solutions than return codes and even your
beloved assertions.

Better design solutions are always preferable.
 
Jorge Rivera

lilburne said:
The technical problem we have with exceptions is the overhead of stack
unwinding. A simple test I wrote a month or so ago had gcc 3.3 running
5 times slower in the presence of a try, throw, catch than with an
assert. This seems to be a particular problem with 3.3, as our previous
tests have only shown a 20% performance degradation. Performance is a
critical factor for our customers, as calculations can take several
hours and even days in some cases, and whilst we tend to be the fastest
in the industry, taking a hit for something that should never occur
seems to be too much to ask. I'd rather that 20% was spent on improving
surface quality than on trapping bugs at runtime.

I understand your point and your design considerations. I agree that
exceptions should not be used everywhere just because, and that
exception handling is expensive. Sounds like your design is perfectly
valid.

Some of my systems are much more unreliable (not my code, the system it
acts upon), and external factors come into play much more often than
not. In these cases, the use of exceptions becomes somewhat more
important.

I am not sure exception handling is much more expensive if you are
already using RTTI, which some of my clients already use, though.

Thanks,

Jorge L.
 
