Jorge said:
This is probably the fundamental disagreement between our approaches.
Consider this:
{
    BaseClass* base = someFunction(/* ... */);
    DerivedClass* derived = dynamic_cast<DerivedClass*>(base);
    // I assume you use something like this for a 'debug' release
#ifdef DEBUG
    assert(derived);
#endif // DEBUG
}
This code certainly helps developers make sure that derived is not NULL
after the cast, but it forces them to write runtime checking code. With
properly designed exception code you never execute code after an
'invalid' state is reached, which simplifies code construction.
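For concreteness, C++ already offers this behaviour for casts: the
reference form of dynamic_cast throws std::bad_cast on failure, so
nothing executes past an invalid cast. A minimal sketch, reusing the
placeholder types above (useDerived and doSomething are hypothetical
names):

#include <typeinfo>   // std::bad_cast

void useDerived(BaseClass& base) {
    // Throws std::bad_cast if base is not really a DerivedClass;
    // the line after it is reached only when the cast succeeded.
    DerivedClass& derived = dynamic_cast<DerivedClass&>(base);
    derived.doSomething();
}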
Yes, this is the fundamental difference.
We would say that if someFunction is returning BaseClass* then the
calling code should handle a BaseClass. Dynamic casting might help to
select efficient methods for processing different types of DerivedClass,
but it is a bug not to handle the general case too.
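A sketch of what we mean, reusing the placeholder types from above
(handle, fastProcess and process are hypothetical names):

void handle(BaseClass* base) {
    if (DerivedClass* derived = dynamic_cast<DerivedClass*>(base)) {
        derived->fastProcess();   // efficient path for the derived type
    } else {
        base->process();          // the general case still gets handled
    }
}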
Alternatively, it might be a post-condition that someFunction always
returns a DerivedClass. If someFunction subsequently changes so that it
returns other things, that is an API change, and I don't think that
calling code should be testing for API changes in the code it calls.
Where would it end?
If the assumption is that someFunction always returns a DerivedClass
then the assert is sufficient (BTW, no conditional required).
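That is, assert already compiles to nothing when NDEBUG is defined, so
the #ifdef buys nothing:

#include <cassert>

BaseClass* base = someFunction(/* ... */);
DerivedClass* derived = dynamic_cast<DerivedClass*>(base);
assert(derived);   // checked in debug builds, compiled away under NDEBUG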
The technical problem we have with exceptions is the overhead of stack
unwinding. A simple test I wrote a month or so ago showed gcc 3.3
running five times slower in the presence of a try, throw, catch than
with an assert. This seems to be a particular problem with 3.3, as our
previous tests had only shown a 20% performance degradation. Performance
is a critical factor for our customers, as calculations can take several
hours and even days in some cases, and whilst we tend to be the fastest
in the industry, taking a hit for something that should never occur
seems too much to ask. I'd rather that 20% was spent on improving
surface quality than on trapping bugs at runtime.
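(The original test isn't reproduced here; a minimal sketch of that kind
of comparison, with made-up function names and iteration count, might
look like this:)

#include <cassert>
#include <cstdio>
#include <ctime>

static volatile int sink = 0;

static void op_throws(int v) {
    if (v < 0) throw v;   // failure path, never taken in this run
    sink = v;
}

static void op_asserts(int v) {
    assert(v >= 0);       // compiled out entirely under NDEBUG
    sink = v;
}

int main() {
    const long N = 100000000L;

    std::clock_t t0 = std::clock();
    for (long i = 0; i < N; ++i) {
        try { op_throws((int)i); } catch (int) { /* unreachable here */ }
    }
    std::clock_t t1 = std::clock();

    for (long i = 0; i < N; ++i) op_asserts((int)i);
    std::clock_t t2 = std::clock();

    std::printf("try/catch: %.2fs  assert: %.2fs\n",
                (double)(t1 - t0) / CLOCKS_PER_SEC,
                (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}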
A philosophical problem we have is that it assumes the calling code is
going to catch the exception. But if the caller couldn't be bothered to
ensure that the arguments being passed were such that the called code
could proceed, then it is unlikely the caller will bother to catch an
exception either. Probably somewhere up in the main loop there is a
catch(...), but this seems like an awful cop-out.
For example, if the operation_<n> functions throw exceptions, the
following code will break off as soon as obj enters an 'invalid' state
anywhere in the process.
Object obj;
operation_1_throws(obj);
operation_2_throws(obj);
operation_3_throws(obj);
operation_4_throws(obj);
Well again we'd say that a precondition of each of these functions is
that 'obj' is not invalid.
void operation_1(Object &obj) {
    assert(obj.invalid() == false);   // precondition: obj must be valid on entry
    // perform operation 1
}
Perhaps our applications are such that we have full control over the
objects we create; outside of I/O, our objects don't become invalid.
We certainly wouldn't expect that a consequence of calling operation_1
would be to cause a perfectly good 'obj' to become invalid. Perhaps
operation_1 initializes 'obj'? In that case we wouldn't expect
operation_2 to cause a perfectly good 'obj' to become invalid.
I don't see the additional clarity of:
try {
    Object obj;
    operation_1_throws(obj);
    operation_2_throws(obj);
    operation_3_throws(obj);
    operation_4_throws(obj);
}
catch (...) {
}
over
Object obj;
if (MY_OK == operation_1(obj)) {
    operation_2(obj);
    operation_3(obj);
    operation_4(obj);
} else {
}
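(And if every return code matters, the checks chain readily; a sketch
using MY_OK and the operation names from above:)

Object obj;
if (MY_OK == operation_1(obj) &&
    MY_OK == operation_2(obj) &&
    MY_OK == operation_3(obj) &&
    MY_OK == operation_4(obj)) {
    // all four operations succeeded
} else {
    // handle the failure here
}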
(I'm pretty sure this topic has been beaten to death in this or some
other forum...) There are better design solutions than return codes and
even your beloved assertions.
Better design solutions are always preferable.