[comp.sources.d snipped; this seems very C-specific]
Keith said:
Eric Sosman said:
Dan said:
[...]
Another popular use is for error handling when the error was detected
deep in the function call chain. Instead of propagating the error back
up through the whole call chain to the function that is supposed to do
something about it, you get there with a single longjmp call.
Try to implement both examples without using longjmp and you'll see that,
although you can live without it, it's a lot more comfortable to live
with it.
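A bare-bones sketch of that second use; every name in it
(parse_document(), parse_expr(), parse_error) is invented for
illustration:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf parse_error;      /* where the error unwinds to */

static void parse_expr(void)     /* imagine this several calls deep */
{
    /* ... something goes wrong ... */
    longjmp(parse_error, 1);     /* back to parse_document() in one hop */
}

static int parse_document(void)
{
    if (setjmp(parse_error) != 0) {
        fputs("parse failed\n", stderr);
        return -1;               /* the one place that handles the error */
    }
    parse_expr();                /* may longjmp() from any depth below */
    return 0;
}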
Idly wondering ...
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

jmp_buf bailout;
time_t started_at;

int compare(const void *p, const void *q) {
    if (difftime(time(NULL), started_at) > 30.0)
        longjmp (bailout, 1);
    ...
}
...
started_at = time(NULL);
if (setjmp(bailout) == 0)
    qsort (array, count, sizeof array[0], compare);
else
    fputs ("Sort was too slow: aborted\n", stderr);
Conforming? (I think so.) Leaking? (I wonder ...)
I don't see a problem with it as far as conformance is concerned.
Performance, however, is likely to be a problem. Unless your
compare() function's normal operation takes a very long time, the time
spent in qsort() is likely to be dominated by calls to time() and
difftime(). A sort that takes too long would likely have worked if
you hadn't tried to measure it.
If I were going to do something like this in real life, I'd probably
add a counter variable and do the time check every N comparisons, for
some suitable value of N.
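Reusing bailout and started_at from the example above, that might
look something like this (1024 is an arbitrary choice of N, and the
int comparison is only a stand-in):

int compare(const void *p, const void *q) {
    static unsigned long ncalls = 0;
    if (++ncalls % 1024 == 0 &&    /* check the clock every 1024 calls */
        difftime(time(NULL), started_at) > 30.0)
        longjmp(bailout, 1);
    /* the real comparison (here pretending the elements are ints): */
    return (*(const int *)p > *(const int *)q)
         - (*(const int *)p < *(const int *)q);
}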
For a toy example like this, of course, leaving out such bells and
whistles is perfectly appropriate.
Yes, it's a toy example: I was just trying to concoct some
semi-plausible reason to longjmp() from a callback function to
the original caller, without giving the intermediate function a
chance to clean itself up. For example, a qsort() implementation
might malloc() some memory or acquire other implementation-specific
resources, and if the comparator called longjmp() these would not
be released.
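For instance, a merge-sort-flavored qsort() might do something like
this internally (my_qsort() and its innards are pure invention, just
to show where the leak would come from):

#include <stdlib.h>

void my_qsort(void *base, size_t nmemb, size_t size,
              int (*cmp)(const void *, const void *))
{
    void *scratch = malloc(nmemb * size);  /* temporary merge buffer */
    if (scratch == NULL)
        return;                            /* (error handling elided) */
    /* ... merge passes that call cmp() many, many times ... */
    free(scratch);      /* never reached if cmp() longjmp()s past us */
}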
In my opinion, that's the chief drawback of longjmp(): it lets
you "blow past" intermediate levels in the call history without
knowing whether they need to free() memory, fclose() files, or
whatever. Two general approaches seem useful in this regard:
- Wrap up setjmp() and longjmp() with macros and functions
that enforce a more disciplined structure. In particular,
a function that needs to clean things up should have a way
to "intercept" the abnormal unwinding, do its cleaning up,
and then allow the unwind to proceed upwards through other
interceptors (possibly) to the ultimate catcher. You want
not only try/catch, but try/catch/finally. (A rough sketch of
such a wrapper follows this list.)
- Use setjmp() and longjmp() as they are, but only among
functions that are "intimately connected." Perhaps they
should exist in the same source file, or at least share
the same design (cf. recursive descent parsers). It's
important in this sort of usage to limit the "scope" of
the extraordinary control transfer.
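Here is one crude, single-threaded sketch of the first approach;
every name in it (unwind_push(), unwind_throw(), and the example
functions) is invented, and a real wrapper would hide the push/pop
bookkeeping behind TRY/CATCH-style macros:

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf *handlers[16];  /* stack of active catch points        */
static int nhandlers = 0;      /* (no overflow check, for brevity)    */

static void unwind_push(jmp_buf *env) { handlers[nhandlers++] = env; }
static void unwind_pop(void)          { nhandlers--; }
static void unwind_throw(int code)
{
    longjmp(*handlers[--nhandlers], code);
}

static void deep_work(void)
{
    /* ... trouble strikes several levels down ... */
    unwind_throw(1);
}

static void middle(void)              /* must clean up no matter what */
{
    char *buf = malloc(100);
    jmp_buf env;

    unwind_push(&env);
    if (setjmp(env) != 0) {           /* intercept the unwind ...        */
        free(buf);                    /* ... do the local cleanup ...    */
        unwind_throw(1);              /* ... then let it continue upward */
    }
    deep_work();                      /* may unwind_throw() from anywhere */
    unwind_pop();
    free(buf);                        /* normal-path cleanup */
}

static void top_level(void)           /* the ultimate catcher */
{
    jmp_buf env;

    unwind_push(&env);
    if (setjmp(env) == 0) {
        middle();
        unwind_pop();
    } else {
        fputs("operation aborted\n", stderr);
    }
}

The "finally" effect is in middle(): it catches the unwind just long
enough to free its buffer, then rethrows, so the ultimate catcher in
top_level() still sees the error.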
Keep in mind the phenomenon of bit rot, and the problems of
large software edifices. As the code size grows it eventually
reaches the point where nobody, no matter how competent, is able
to understand all the subsystem-local conventions and practices.
Somebody, sometime, *will* change the behavior of some function
and introduce an apparently benign initialize-operate-cleanup
sequence without realizing that somebody else expects to be able
to longjmp() right past the whole shebang:
void do_something(void) {
    first_thing();
    second_thing();
    third_thing();
}
in version 1.0 becomes
void do_something(void) {
    DBHandle *db = connect_to_database();
    start_transaction(db);
    first_thing();
    second_thing();
    third_thing();
    commit_transaction(db);
    close_database_connection(db);
}
with the addition of the Sooper Dooper Data Snooper in version 3.0,
and if third_thing() decides to call longjmp() ...
It's a tool. It's a tool with no safety catches or blade
guards, because the compiler is able to diagnose only
a very few of its possible misuses. Thus, IMHO, it's a tool to
be used only with great care, and at great need.