Exception Misconceptions

tanix

Could be 1000 times.
Could be I do it for a hobby, but I like to follow what
a function really does after it is written, from the moment
it is born.

Who does not?

Source code debuggers were one of the most important innovations.
They made it possible to increase the speed and complexity of
programs by orders of magnitude.

This brings up an interesting issue about design.

There are two alternatives:

1) You do "good" design, meaning use all sorts of charts, diagrams
or what have you. Then specify things down to such a low level
that you even know all your return codes and all the arguments.

2) Intuitively see the overall structure and your idea.
Then, use this "top down design and bottom up implementation"
trick and lay down the lowest level, most fundamental "worker"
code, keeping in mind expandability and the universality
of your code.

You see, whatever code you need to write, there will always
be this code that needs to deal with basic things, such
as file system access, network access, your basic data
record structure, or your low-level video or sound functionality.

You'd have to do that no matter how you design the rest.

If you design an audio app, no matter what you do, you'd need
to have the lowest level code that deals with device drivers
and assures you can play whatever you need to play.

So, defining that low-level code that you can immediately
use to test at least something is probably the most profitable
investment of your energy into that project.

Then, you can go higher with your design and design some
code that sits on the top of low level stuff, keeping in mind
your overall architecture, that is still vague at this point,
even though you know what you are basically interested in
doing.

And so you SCULPT your program, instead of spending a tremendous
amount of time on the initial design. Because at that time
you can hardly see all the subtle interactions, and you have to
unnecessarily waste lots of energy trying to foresee what kind
of potential issues you might have in the future.

I had an experience once with one guy, an extremist,
evangelical type. He just kept forcing everyone he had to
deal with to first spend months on design, without writing
a single line of code. So you forever grope in the
darkness.

There was one project we had to do. What HE did was sit
there for months, scratching his head and writing down tons
of things in his grand design.

After a few months, he had a bible-sized book. When I looked
inside, I had goose bumps. Because the whole thing was
"designed" down to exact algorithms, parameters and you
name it.

His idea was this: it may take me half a year on design.
But then it will take me 15 minutes to code that stuff.

How many people around would agree with that approach?

We had another project and I was the only one around who
could handle it. It was a relatively large database
application. The bosses asked: "how long do you think
it is going to take you to get something workable?"

I said: well, about a month.

They did not believe it. They said: OK, go ahead.
My programmer friend, who happened to be under the influence
of that grand architect, said: you must be crazy.
You won't be able to do it even in two months,
and even if you ARE able to get something to work,
I can break your code in seconds.

I said: OK. Let us see.
Within a month I had it working, pretty much to the day.
When I showed it to him and he wanted to humiliate me,
implying that my program was full of bugs and wouldn't survive
a few seconds of heavy punishment, he said:

Well, for me to break your program, I don't even have to
do much. I simply sit at your keyboard and simultaneously
push ALL sorts of buttons non-stop until your program
completely conks out. And so he did.

Yes, there was one moment when things stopped responding,
just because he jammed so much shit into all the buffers
that even the O/S could lock up.

But, after a couple of seconds, we saw the program giving
a prompt again. Everything was working even under this
kind of abuse. His face went pale. He could not
believe it.

Now, that grand architect would probably have taken AT LEAST
6 months to do it, and I am not sure that whatever he
produced at the end would not have needed revision
even before he finished coding his 1st version.

The bottom line: it is a matter of personal style.
I, personally, like to see something alive pretty much
from the day one, and I do debug that thing from any
conceivable angle, pretty much from the start.

So yes, debuggers do help.
:--}

--
Programmer's Goldmine collections:

http://preciseinfo.org

Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
 
James Kanze

In message <a65d3221-02ca-4a18-907f-9027cce07...@c34g2000yqn.googlegroups.com>:
[...]
The rate of coding is orders of magnitude higher nowadays
and complexities also.

Actually, I think in a lot of cases, the reverse is true. The
applications I see today are generally a lot simpler than those
of yesterday. For one thing, there's true runtime support for a
lot more functionality, so you don't have to implement it in the
application. And there are a lot of applications doing very
simple things---computers are cheap enough that you don't have
a complex operation to justify using one.

But both now and in previous times, there is a large interval
between the extremes. It wouldn't surprise me if the most
complex applications today were more complex than in previous
times. The least complex are certainly less complex. About all
I'd say is that the average application is less complex than
previously.
Well, I would simply feel uncomfortable doing ANY job,
unless it is a kernel mode driver, where rules of the game
are quite different.

Kernel mode drivers were one of the cases I was thinking about.
I've also written for embedded processors with only 2 KB of ROM,
and 64 bytes of RAM. That was a long time ago, but I'm sure
that there are still embedded processors with very limited
resources (many of which probably don't even have a C++
compiler---just C).

For an application on a general purpose machine, I'd have
serious doubts about a company which used C, or banned
exceptions in C++. But there's a lot more out there than just
general purpose machines.

[...]
Once you decide to go with return codes, that's it.
You have to test EVERY SINGLE return code.

That is definitely true. (Provided the "return code" returns
something useful or important. I wouldn't normally bother with
checking the return code of printf, for example, because it's
really meaningless.)
You can not make ANY assumptions.
If I EVER find something that does not need to return the
return code, because, for example, I solved it differently,
I immediately rewrite the called routine so it does not
return any return codes, just for the purity of code's sake.

The possible problem here is virtual functions: if one of the
derived classes can fail, all of the derived classes need a
return value (or to be able to throw an exception, or whatever).
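The virtual-function point can be sketched like this (the Sink/MemorySink/FileSink names are hypothetical, not from the thread): if even one derived class can fail, the base interface must allow every override to signal failure.

```cpp
#include <stdexcept>
#include <string>

// MemorySink itself can never fail, but because FileSink can, the
// shared interface must let every override signal failure -- here by
// permitting write() to throw.
class Sink {
public:
    virtual ~Sink() = default;
    virtual void write(const std::string& data) = 0;  // may throw
};

class MemorySink : public Sink {
    std::string buffer_;
public:
    void write(const std::string& data) override { buffer_ += data; }
    const std::string& contents() const { return buffer_; }
};

class FileSink : public Sink {
public:
    void write(const std::string&) override {
        // Simulated unconditional failure (e.g. disk full): the failure
        // has to fit the same interface all derived classes share.
        throw std::runtime_error("disk full");
    }
};
```

With return codes instead, every override would need a status result, even the ones that cannot fail.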

[...]
But I am just curious: what is the essence of the argument
that exception processing, under non-exception conditions,
can possibly incur a significant enough overhead to even
bother about it?

That it once did, for some primordial compiler? I don't know,
otherwise. It's certainly false today.
Is it a memory allocation/deallocation issue?
Sorry to mention Java again, but since I started working with
Java as my primary language, I just do not recall a single
case where exceptions were inefficient.

Exceptions are easier to make efficient when you don't have
value semantics, and objects with non-trivial destructors. It's
also easier to write incorrect code in such cases---as I said
elsewhere, one of the most common errors in Java is a missing
finally block. Still, except for some very early, experimental
implementations, I don't know of a case where an exception that
wasn't thrown has caused performance problems in C++ either.
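As a side note to the missing-finally point: in C++ the equivalent safety usually comes from destructors rather than a finally block. A minimal sketch (Resource is an illustrative stand-in for a file handle or lock):

```cpp
#include <stdexcept>

// A destructor runs during stack unwinding, so cleanup happens even
// when an exception propagates -- there is no finally block to forget.
struct Resource {
    static int open_count;           // resources currently held
    Resource()  { ++open_count; }
    ~Resource() { --open_count; }    // runs even during unwinding
};
int Resource::open_count = 0;

void work_that_throws() {
    Resource r;                          // acquired here
    throw std::runtime_error("boom");    // r is still released
}
```

After the exception escapes `work_that_throws`, `open_count` is back to zero: the cleanup the Java programmer has to remember is automatic here.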

It's also possible to misuse exceptions, so that they get thrown
in the normal logic of program execution. I've heard of such
cases (although I've never seen them in code I've had to deal
with).

Still, any argument against exceptions based on performance is
simply FUD, at least IMHO.
Well, that comes as a skill. Sooner or later they will learn,
or lose their job. THEN they WILL learn for sure.

The problem is that they don't necessarily learn, since the
problem only occurs in "exceptional" cases, and a lot of people
neglect testing those.

It's interesting to note that in the Java runtime, when opening
a file fails, they run garbage collection and try again.
Because a lot of people forget a finally block when they've
opened a file, and in case of an exception, never close it.
(Whereas in my code, most of the time, if it's an output file,
and there is an exception, I not only close the file, but delete
it. So there's no inconsistent file lying around after the
program has run.)
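The delete-the-output-file-on-exception idea could be sketched with a guard object (names here are illustrative, not the poster's actual code):

```cpp
#include <cstdio>
#include <fstream>
#include <stdexcept>
#include <string>

// The guard removes the output file in its destructor unless commit()
// was called, so an exception anywhere in between leaves no
// half-written file behind.
class OutputFileGuard {
    std::string path_;
    bool committed_ = false;
public:
    explicit OutputFileGuard(std::string path) : path_(std::move(path)) {}
    void commit() { committed_ = true; }
    ~OutputFileGuard() {
        if (!committed_)
            std::remove(path_.c_str());   // discard the partial file
    }
};

void write_report(const std::string& path, bool fail) {
    OutputFileGuard guard(path);   // declared first, destroyed last
    std::ofstream out(path);       // closed before the guard runs
    out << "partial data\n";
    if (fail)
        throw std::runtime_error("write failed");  // guard deletes the file
    guard.commit();                                // success: keep the file
}
```

On failure the caller sees an exception and no file; on success the file survives because `commit()` disarmed the guard.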

[...]
Do you have MORE logic in your program if you use return codes?
Or LESS?

You have the same amount of logic. Or else there's an error
somewhere. The only difference is that with exceptions, some of
the logic isn't visible, and doesn't have to be manually
written. (It does still have to be considered when analysing
program correctness.)

Don't confuse logic with lines of code.
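One way to picture "same logic, fewer visible lines": the same three-step operation written both ways (the step functions are hypothetical; step 2 fails).

```cpp
#include <stdexcept>

// With return codes the error handling is spelled out at every call
// site; with exceptions the identical logic is carried invisibly by
// stack unwinding. Same amount of logic, different amount of text.

bool step_rc(int i) { return i != 2; }            // false = failure

bool run_rc() {                                    // return-code style
    if (!step_rc(0)) return false;                 // every call checked
    if (!step_rc(1)) return false;
    if (!step_rc(2)) return false;
    return true;
}

void step_ex(int i) {
    if (i == 2) throw std::runtime_error("step failed");
}

bool run_ex() {                                    // exception style
    try {
        step_ex(0);                                // no per-call checks:
        step_ex(1);                                // the unwinding does the
        step_ex(2);                                // same work implicitly
        return true;
    } catch (const std::runtime_error&) {
        return false;
    }
}
```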
 
tanix

On Dec 22, 12:48 am, tanix wrote, and James Kanze replied:
[...]
The rate of coding is orders of magnitude higher nowadays
and complexities also.

Actually, I think in a lot of cases, the reverse is true.

I know, I know. That is your style after all.
:--}

Zo... Let us see here the next jewel.
The
applications I see today are generally a lot simpler than those
of yesterday.

Wut?
That bites, I tellya!
For one thing, there's true runtime support for a
lot more functionality,

True, which means?
so you don't have to implement it in the
application.
Kinda.

And there are a lot of applications doing very
simple things---computers are cheap enough that you don't have
a complex operation to justify using one.

Huh?
I'll skip that one. Just don't get what you are talking about.
But both now and in previous times, there is a large interval
between the extremes. It wouldn't surprise me if the most
complex applications today were more complex than in previous
times. The least complex are certainly less complex. About all
I'd say is that the average application is less complex than
previously.

Well, sorry, I'd have to spend way too much time arguing this.
Seems like you are stretching things WAY too far for my taste.
Kernel mode drivers were one of the cases I was thinking about.
I've also written for embedded processors with only 2 KB of ROM,
and 64 bytes of RAM.

Hey, what a time that was!

When you have to write the most compact code for your BIOS.
Otherwise it won't fit into a 4 K ROM!
:--}

THOSE were the times!
That was a long time ago, but I'm sure
that there are still embedded processors with very limited
resources (many of which probably don't even have a C++
compiler---just C).

Sure, for some reason, C is STILL the most popular language
around, at least looking at the amount of traffic on C groups.
For an application on a general purpose machine, I'd have
serious doubts about a company which used C, or banned
exceptions in C++.

I said that just for the sake of argument.
Not that anyone in his right mind would go as far
as BANNING exception code, even though I have seen things
not too far from it.
But there's a lot more out there than just
general purpose machines.
[...]
Once you decide to go with return codes, that's it.
You have to test EVERY SINGLE return code.
That is definitely true. (Provided the "return code" returns
something useful or important. I wouldn't normally bother with
checking the return code of printf, for example, because it's
really meaningless.)

Yep, some of them are funny.
But what about scanf? :--}
You can not make ANY assumptions.
If I EVER find something that does not need to return the
return code, because, for example, I solved it differently,
I immediately rewrite the called routine so it does not
return any return codes, just for the purity of code's sake.

The possible problem here is virtual functions: if one of the
derived classes can fail, all of the derived classes need a
return value (or to be able to throw an exception, or whatever).

[...]
But I am just curious: what is the essence of the argument
that exception processing, under non-exception conditions,
can possibly incur a significant enough overhead to even
bother about it?
That it once did, for some primordial compiler? I don't know,
otherwise. It's certainly false today.

Cool. Makes me feel better. That is ALL I want to hear
about this issue.
Exceptions are easier to make efficient when you don't have
value semantics, and objects with non-trivial destructors.

Oh, don't tell me about those non-trivial destructors.
It makes me shiver! :--}
It's
also easier to write incorrect code in such cases---as I said
elsewhere, one of the most common errors in Java is a missing
finally block. Still, except for some very early, experimental
implementations, I don't know of a case where an exception that
wasn't thrown has caused performance problems in C++ either.

See?
Then my argument stands.
And that is, once the exception IS hit,
take as much time as you want to do as good of a job of
handling or reporting it as you can imagine.

Because, first of all, it is going to happen in about 0.00001%
of all cases, and the overall impact of it is less than you
know what by now, hopefully.

Even in some local exceptions I try to do error reporting
and logging to the point where you know EXACTLY at which
point what happened, and what the "wrong" parameters
or values were.

In my main app, I designed a pretty cool mechanism
of error reporting.

What it does is this:

The main class, which is the master of everything,
has a listbox displayed at the bottom of the main program
window. And there is a checkbox for error reporting that
triggers finer granularity of reporting.

Every single operation that is performed by ANY module
is passed to the main class via an interface.

It is automatically shown in the history listbox.
The contents of this box are automatically logged into
perpetual logs, with file names time-stamped to the day.

Which means what?

Well, which means that if I ever find something funky,
not only can I see it immediately in the history box,
but I can even dig through those logs, going MONTHS back, and
see if some of this funk already happened in the past.

Now, the way things are logged is that you have
leading characters and keywords that allow you to basically
find anything you want by opening a history log for
your current session and searching for those lil hooks,
or things like *** Error: .....

The history listbox holds about 1000 entries of history,
configurable to anything you want by simply modifying
your main program config file.

So, you can scroll back in history and see ALL sorts
of nice things, such as performance of your program,
the amount of records processed, the time it took,
the amounts of duplicates you found, the amount of
records you had to skip and ALL sorts of "nice to know"
things.

That is probably the best solution I have seen to date
that utilizes the concept of logging vs. debugging
and makes your logs as clear and as simple to work with
as you can imagine. So far, I have not had to regret
this design even once. It is probably one of the most
powerful aspects of the whole program, and I was able
to get more help out of it than if I had debugged
this thing for days.

You like that one? :--}
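The core of the logging scheme just described (one append-only log per day, entries prefixed with searchable keywords) could be sketched as follows; all names are illustrative, not the actual code from the post.

```cpp
#include <ctime>
#include <fstream>
#include <string>

// Build the "perpetual log" file name, time-stamped to the day.
std::string log_file_for_today() {
    std::time_t t = std::time(nullptr);
    char name[40];
    std::strftime(name, sizeof name, "history-%Y-%m-%d.log",
                  std::localtime(&t));
    return name;
}

// Append one entry, prefixed with a searchable keyword hook.
void log_line(const std::string& prefix, const std::string& message) {
    std::ofstream out(log_file_for_today(), std::ios::app);
    out << prefix << ' ' << message << '\n';
}
```

Then `log_line("*** Error:", "record 17 damaged");` leaves a line you can find months later by searching the daily files for `*** Error:`.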
It's also possible to misuse exceptions, so that they get thrown
in the normal logic of program execution.

Well, again, my classic example of string-to-number conversion.
The conversion error is quite "normal" flow,
and the exception simply replaces the result with a default value.
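That conversion-with-default pattern, sketched in C++ with std::stoi:

```cpp
#include <stdexcept>
#include <string>

// A failed parse is treated as normal flow: the exception simply
// substitutes the caller's default value.
int to_int_or(const std::string& s, int default_value) {
    try {
        return std::stoi(s);
    } catch (const std::exception&) {   // invalid_argument / out_of_range
        return default_value;
    }
}
```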

I did see one guy, who is a super programmer,
or world-class programmer, who even designed a whole
language that used exceptions as the equivalent of
return codes.

I don't want to tell you his name or the name of that language.

But the whole thing looked a bit insane to me.

His language was an event-driven language.

Jeeez!

Yep, I'd have to agree with you on that one.
Using exceptions in this way is a bit sick.
I've heard of such
cases (although I've never seen them in code I've had to deal
with).

Lucky you! :--}
Still, any argument against exceptions based on performance is
simply FUD, at least IMHO.

That is what I thought. I was just curious.
Who knows, they might have invented some trans-galactic
exception mechanism that allows you to FULLY recover all the
intermediate what-have-you once the exception is hit.
The problem is that they don't necessarily learn. Since the
problem only occurs in "exceptional" cases, and a lot of people
neglect testing those.

Well, then who knows, that manager may lose his job.

For several years, when I was cranking a lot of code out,
in the middle of a major development phase, I had some situations
where I basically wrote the code to the point where
"it should never happen", while handling some weird errors.

It ran fine for a day or two, and then BOOM!

I had to go back and handle that "impossible to happen" thing.

And this thing happened to me several times during the next
few months. And EVERY SINGLE TIME when I thought, hey this
thing works like a champ, BOOM!

Since then, I never try to forget about some funky
condition I was too lazy to handle, or was too itchy just to
get the whole thing to work. But hey, it is more pleasure
to see the whole thing working than to deal with all those
nasty lil lice that get under your skin.

Zo...

There is a fine balance, a tradeoff.
You want to get the kicks out of seeing your thing working?
Then write the major stuff and look at how it runs.

But remember one thing: design your code in such a way,
that there are nice hooks left so that you can easily
plugin the exception or error handling code in without
needing to rewrite some major pieces of code.

Otherwise, you are screwed for good,
and it is going to cost you 10 times more time and effort
to finally make it all hum like a Bentley.
It's interesting to note that in the Java runtime, when opening
a file fails, they run garbage collection and try again.

Actually, I am impressed with Java gc.

It turns out that it garbage collects your local scope
allocations almost as efficiently as your normal stack unwind
code.

And it garbage collects adjacent stack slots almost as
fast.

I, personally, think C++ BADLY needs this.

Just to make sure you don't have even theoretical leaks,
all you have to do is to set some object "pointer" to null
at the end, in that finally block or at the end of your code,
which should ALWAYS have try/catch/finally blocks if that
code is ANY good.

ALL my major code has it. Several levels deep.
NEVER had to regret it.
Outstanding logging and error reporting is a piece of cake
with this design, for one thing.
Because a lot of people forget a finally block when they've
opened a file, and in case of an exception, never close it.

An interesting thing about the finally block is that it is executed
no matter what, even if there are no exceptions.

Nice.
(Whereas in my code, most of the time, if it's an output file,
and there is an exception, I not only close the file, but delete
it. So there's no inconsistent file lying around after the
program has run.)

Well, I do not bother to go THAT far.
When an operation STARTS, that is when I need to know
that all my files are good, if they are input.

I cannot just delete the output file if some error occurs.
Because that file already has several hundred perfectly
good records, and even if it is damaged as a result of some
funk, the file is a text file. I don't use any other formats.
A matter of principle.

So what I do instead is to have the archive maintenance
module, that is as powerful as it gets. It allows you
to do magic with your files, including sorting the records,
removing duplicates, normalizing the structure to increase
the processing efficiency, filtering the archive with the
most sophisticated filters, merging archives, appending
archives and verifying the archive integrity.

If some record, even in the middle of the archive, is damaged,
we report and log exactly where it happened, so it is
a matter of seconds to find the exact place and manually
fix the record, or delete it if it is screwed up beyond
repair.

So, after I run the archive clean operation,
I am guaranteed to have the cleanest data you can imagine
in your wildest dreams. Even cleaner than it was in the
original, non-error input, which sometimes still has
errors because of some funky servers serving some funky
non-RFC-compliant data.

Cool eh?

:--}

We are sitting pretty here with this thing.
[...]
Do you have MORE logic in your program if you use return codes?
Or LESS?
You have the same amount of logic.

Not true. But I know you are going to come up with something
kinky. Let us see here.
Or else there's an error
somewhere.

I'd like to see a more substantial argument on this.
The only difference is that with exceptions, some of
the logic isn't visible, and doesn't have to be manually
written. (It does still have to be considered when analysing
program correctness.)

You ARE a pervert! :--}

But fine, I appreciate THIS kind of thing.
Don't confuse logic with lines of code.

Well, I bet this argument is SO subtle
that you'll crack your skull sooner than you can prove it.

:--}

Enjoy the trip.

Ian Collins

tanix said:
I know, I know. That is your style after all.
:--}


Wut?
That bites, I tellya!


True, which means?


Huh?
I'll skip that one. Just don't get what you are talking about.

Oh come on, even you must realise there are millions of tiny CPUs in all
sorts of trivial gadgets these days.

<ramblings snipped>
Well, I bet this argument is SO subtle,
that you'll crack your scull sooner than you can prove it.

Bullshit. Try implementing automatic object destruction in C and see
how many lines of code that adds that you don't have to write in C++.
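Ian's challenge can be illustrated by writing the same function twice: once with C-style manual cleanup (the common goto-cleanup idiom) and once relying on C++ destructors. Both are compiled as C++ here only so they can sit side by side; the first is what the C code looks like.

```cpp
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <vector>

int process_c_style(const char* path) {      // C: manual cleanup
    int rc = -1;
    std::FILE* f = std::fopen(path, "r");
    char* buf = nullptr;
    if (!f) goto cleanup;                    // every failure path must
    buf = static_cast<char*>(std::malloc(4096));
    if (!buf) goto cleanup;                  // route through the cleanup
    /* ... real work elided ... */
    rc = 0;
cleanup:
    std::free(buf);                          // free(NULL) is a no-op
    if (f) std::fclose(f);
    return rc;
}

int process_cpp_style(const char* path) {    // C++: destructors clean up
    std::ifstream f(path);                   // closed automatically
    if (!f) return -1;
    std::vector<char> buf(4096);             // freed automatically
    /* ... real work elided ... */
    return 0;                                // no cleanup code at all
}
```

Every extra resource in the C version adds another cleanup line and another goto target; in the C++ version it adds nothing, whether the function returns normally or an exception unwinds through it.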

Your sig delimiter is broken.
 
