Hi.
I've heard about this [Error codes vs. exceptions] debate, and wonder when it
is right to use codes, and when to use exceptions, for reporting errors?
Use whichever is clearer and more practical in any given case.
generally agreed.
And note that "codes" include the case of zero information about what
failed, where there's only one success indicating code (like boolean
"true") and one failure indicating code (like boolean "false")...
Also note that there are many other techniques for handling failures
that occur in a function f:
* let f return a possibly empty result
e.g. check out Barton/Nackman "Fallible" and Boost "optional"
possibly.
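a sketch of this style, using std::optional (C++17; Boost.optional and
Barton/Nackman's Fallible are the pre-standard equivalents). the function
name and the choice of parsing as the example are just for illustration:

```cpp
#include <charconv>
#include <optional>
#include <string>

// report failure via an empty result: the caller gets either a value
// or "nothing", with no separate error code to check
std::optional<int> parse_int(const std::string& s)
{
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc() || ptr != s.data() + s.size())
        return std::nullopt;  // failure: empty result
    return value;
}
```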
* let f call an error handling routine, that's possibly configurable
a great many libraries do this
seems good.
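the shape of such a configurable handler, in the style many C libraries
use (all names here are made up for illustration):

```cpp
#include <cstdio>
#include <string>

typedef void (*error_handler_fn)(const char* msg);

// default behavior if the application installs nothing
static void default_handler(const char* msg)
{
    std::fprintf(stderr, "error: %s\n", msg);
}

static error_handler_fn g_handler = default_handler;

// install a new handler, returning the old one (NULL restores the default)
error_handler_fn set_error_handler(error_handler_fn h)
{
    error_handler_fn old = g_handler;
    g_handler = h ? h : default_handler;
    return old;
}

// library internals report failures through the hook
static void report_error(const char* msg)
{
    g_handler(msg);
}

// example replacement handler: records the last message for the app
static std::string g_last_error;
static void capture_handler(const char* msg) { g_last_error = msg; }
```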
* let f terminate the program
a bit harsh, but e.g. a JPEG library used by ImageMagick did this
I personally consider this to be an unreasonable strategy in the general
case, since then a "reasonable" or even "trivial" issue may cause the
application to die, possibly without any way to recover short of the
application not using the library in the first place.
I have ended up dropping libraries in the past for pulling this sort of
thing (well, among other things).
like:
call library to load something from a file;
file fails to open;
library calls "abort()".
a related annoyance is libraries which can only work with files, when the
data in question (in the application) is held in buffers or similar.
* let f involve the user and try to fix the problem
this was used in Windows for missing removable media, missing
DLLs and so on, with a dialog box that, much as DOS did earlier, said
"abort, retry, ignore" (or the like)
in most cases, this probably shouldn't be done in library code (but
should probably be left up to the application).
it was an extra annoyance a while back trying to figure out how to keep
Windows from throwing up a dialog box whenever a "LoadLibrary()" call
failed (Windows just assumed that a failing "LoadLibrary()" call was a
serious problem, rather than, say, the call being used to check for an
optional component).
luckily, there was an option to disable said dialog.
* let f simply have undefined behavior
may sound pretty stupid, but is used by e.g. the C++ standard library
for functions where failures are caused only by failed preconditions
possibly, in this case.
I generally prefer code which doesn't just randomly blow up though, if
possible (usually, about the only real case where it is a good idea to
skip basic "sanity checks" is "potentially performance critical" code).
fairly common in my case is that, if the code for whatever reason can't
do what is requested, it will function as a no-op and indicate that it
has failed (and sometimes log the error if likely relevant).
in many cases, "assert" or similar can also be useful, for "clearly wrong
behavior which should never occur during operation".
for example, if a required parameter is NULL or argument values are
invalid, very likely the caller has done something wrong, and it is
likely a good idea to trap into the debugger.
this isn't IMO a good strategy for "general issues" though, but more for
cases where "the caller has clearly done something wrong" or similar.
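the "no-op and indicate failure" pattern combined with an assert for
clear caller bugs might look like this (the function and its behavior
are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>

// on bad input: assert traps the caller bug in debug builds; in release
// builds (NDEBUG) the function acts as a no-op and reports failure
bool copy_buffer(char* dst, const char* src, size_t n)
{
    assert(dst != NULL && src != NULL);  // caller has clearly done something wrong
    if (!dst || !src)
        return false;                    // release build: no-op, indicate failure
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
    return true;                         // success
}
```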
Regardless, you have the option of letting f log the incident, or not.
I actually do a fair amount of logging.
typically my apps will spit out a fair amount of information to logs as
part of their start-up process, and typically this will consist largely
of debugging-related messages.
this is fairly useful as, sadly, not everything can be easily tracked
down in a debugger (very often the case with bugs resulting from
interactions between components, where much of the effort comes down to
tracking down just where the bug actually is, or the state of the system
when it manifests).
But generally, e.g. when I'm going to call a C library function or
Windows API function that (as they typically do) has some ad hoc fault
indication + some error code scheme, then I usually wrap it in a
function or expression that throws a C++ exception. That's generally
more convenient even for local handling of the failure. And one main
reason is that it encapsulates and gets rid of all the myriad ad hoc
schemes for checking whether the function failed and for responding to
it -- it's a standardization and simplification.
yep, seems reasonable as such.
decided mostly to leave out descriptions of some of the hassles of
trying to use exception handling in C code via hand-rolled mechanisms
(it can be done, but isn't very pretty), or dealing with cross-language
error-handling (extra fun when the project is split between 3 or more
languages).
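for reference, the hand-rolled C mechanism alluded to is usually built
on setjmp/longjmp; a minimal single-level sketch (real schemes keep a
stack of jump buffers, and mixing this with C++ objects that have
destructors is unsafe, which is part of why it "isn't very pretty"):

```cpp
#include <csetjmp>

static std::jmp_buf g_catch;  // single-level "catch" context

// "throw": jump back to the most recent setjmp with a nonzero value
void might_fail(int x)
{
    if (x < 0)
        std::longjmp(g_catch, 1);
}

// "try/catch": 0 on success, -1 if might_fail "threw"
int run(int x)
{
    if (setjmp(g_catch) == 0) {
        might_fail(x);
        return 0;
    }
    return -1;
}
```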
As shown by the various exception messages there are (at least) 4 ways
that "strtol" can fail.
I would speculate that in many programs using the "strtol" function
directly, not all failure paths are checked -- but the wrapper replaces
the sequence of four ad hoc checks with simple standard C++ exception
handling. Of course, with a C++11 conforming compiler there's little
need to call "strtol" because in C++11 iostreams do that for you (C++03
instead used the "scanf" family, with possible Undefined Behavior). But
I think it illustrates the case for wrapping in C++.
fair enough.
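a wrapper of the sort described might be sketched like this (the name
and exact checks are illustrative); the four failure paths it folds into
exceptions are: no digits found, trailing garbage, overflow of long
(errno == ERANGE), and a value outside the target type's range:

```cpp
#include <cerrno>
#include <cstdlib>
#include <limits>
#include <stdexcept>
#include <string>

int to_int(const std::string& s, int base = 10)
{
    const char* p = s.c_str();
    char* end = nullptr;
    errno = 0;
    long v = std::strtol(p, &end, base);
    if (end == p)
        throw std::invalid_argument("to_int: no digits in \"" + s + "\"");
    if (*end != '\0')
        throw std::invalid_argument("to_int: trailing garbage in \"" + s + "\"");
    if (errno == ERANGE)
        throw std::out_of_range("to_int: value out of range for long");
    if (v < std::numeric_limits<int>::min() || v > std::numeric_limits<int>::max())
        throw std::out_of_range("to_int: value out of range for int");
    return static_cast<int>(v);
}
```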
So, C = preference for codes, error handlers functions etc., and C++ =
preference for exceptions. But in some cases, as with e.g. calls across
binary module boundaries, you can't assume C++ exception support. Then
you have to design for the code to be callable as C, i.e. no exceptions.
yep.
it is better not to require a particular language for a library interface.
sometimes optional wrapper interfaces may make sense though, such as
wrapper classes or overloaded operators, ...
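the combination described can be sketched as a C-callable core that
reports via codes, with a thin optional C++ wrapper translating codes
into exceptions (all names here are made up for illustration):

```cpp
#include <stdexcept>

// C-callable interface: no exceptions may cross this boundary,
// so failures are reported via a return code
extern "C" int lib_compute(int input, int* result)
{
    if (input < 0)
        return -1;        // error code: invalid argument
    *result = input * 2;
    return 0;             // success
}

// optional C++ convenience wrapper for C++ clients
int compute(int input)
{
    int r = 0;
    if (lib_compute(input, &r) != 0)
        throw std::invalid_argument("compute: negative input");
    return r;
}
```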
All that said, I'm primarily responding because I'm really curious about
where this great need for MECHANICAL RULES comes from?
If programming could be reduced to mechanical rules, if that was a good
idea, then, you know, it could be automated, and then you would not have
this problem of finding the rules because you wouldn't be programming:
it would be done by machine... So, instead of seeking mechanical rules
to be applied mindlessly, I suggest seeking out concrete EXAMPLES of
failure handling. Then apply INTELLIGENCE and understanding of the
concrete situations that you face, and just strive to Keep Things Simple. ;-)
I actually have little idea where it comes from, but I have seen lots of
this in a number of language groups, namely people who believe that
"language X must be used this particular way", following certain little
rules (often trivial, sometimes arcane), and also that they represent
"authority" on "what everyone using the language does and believes".
sometimes, there are good "rules of thumb", but ultimately it comes down
to whether or not the rule will be beneficial in a given situation,
rather than it being something which must always be followed (even in
situations where it doesn't make sense or would do more harm than good).
although, sometimes there are edge cases, for example, "goto":
there is a rule to never use "goto";
then again, there are some potential edge cases where it could be
useful, and it seems disagreeable to ban a feature outright;
however, for code that is not already a mess, it is very rare to
actually have much need to use a "goto" to begin with (so, it seems more
like it is a symptom of spaghetti code, rather than the cause of said
spaghetti code).
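the classic edge case where "goto" earns its keep is unified cleanup on
the error paths of a multi-step acquisition in C (the function and file
names are made up for illustration):

```cpp
#include <cstdio>

// open two files, do some work, and release everything on any path;
// the single "done" label replaces nested if/else cleanup chains
int process_two_files(const char* a, const char* b)
{
    FILE* fa = NULL;
    FILE* fb = NULL;
    int rc = -1;  // assume failure until proven otherwise

    fa = std::fopen(a, "rb");
    if (!fa) goto done;
    fb = std::fopen(b, "rb");
    if (!fb) goto done;

    /* ... work with both files ... */
    rc = 0;

done:
    if (fb) std::fclose(fb);
    if (fa) std::fclose(fa);
    return rc;
}
```

(in C++ one would normally use destructors/RAII instead, which is part
of why the need for "goto" rarely comes up in non-messy code.)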
many other cases appear to be similar, with people more often focusing
on the symptoms of a problem rather than on its cause. ( they see the
symptoms and the cause as one and the same? )
or such...