Exception Misconceptions


Nick Keighley

He means you're a total douchebag, and he's right.  I do think the
responses that actually answer your question are interesting and
worthwhile, however.

If I'd meant it I'd have said it. Please don't over-interpret what I
say.
 

Nick Keighley

??? What are you talking about?

you seem to switch from technical discussion to insult at the drop of
a hat. And sometimes treat people who have a genuine disagreement with
you as if they were idiots. See later in this post...
Though the one who I quoted was thinking, surely, at the standard level, I
was relating it to the implementation/mechanism level

yes, but the implementation must implement the standard. So if you
want to know what destructors are invoked you read the standard! If an
implementation can be detected doing something different then it isn't
an implementation of C++. Java for instance may trigger its
"destructors" long after the object has gone out of scope.
and is still where the
focus of discussion is.

I think you are wrong to be obsessed with implementation. Perhaps you
should phrase your question "how do you typically implement
exceptions" rather than "what destructors should be invoked when
objects go out of scope".
All else is off-topic, for it is not relevant to
the "question" posed. Do you see?
nope

a jump to the end of the function *is* a "mechanism" in my book. And
too simple to boot.

How does your mechanism handle this: [example omitted]

you don't think the example illustrated a point (that jumping to the
end of a function isn't enough)
(Joke spared, but it was waaay funny!). (OK, I was about to write: "_I_
don't have a mechanis...", and then I had to package it differently.) (See,
it never ends! he he he he! ). (Waaay funny IMO).

this isn't attitude? Actually it's just rudeness.

Seriously now, I'm not a compiler writer so if such a thing as I IMAGINED
exists, you'll have to ask the one who implemented it.



You can't extract just that portion of the sentence and get the same
meaning.

Why did you do that? It is REQUIRED to read to the sentence
terminator to comprehend the thought. Periods and other sentence terminators
are not there only for decorative purposes, you know.

I don't think breaking your sentence at a conjunction seriously mangled
its meaning. I certainly didn't intend to misrepresent you. The
previous poster described how exceptions are typically implemented.
That really does look like a "mechanism" to *me*.
When stack class objects go out of scope, their destructors are called. You
know that. I know you do. (Don't you?).

I'm not getting your point. In the no-throw case dtors are invoked
when items go out of scope, yes.

he seems to be saying that in many cases there is a special mechanism
to invoke dtors in the case of an exception.
The topic is the underlying mechanisms of implementation, NOT the
standard-level operation. You must have missed that very important element:
the topic of discussion. (It could be my bad English?).

but you seem to be asking which dtors are invoked and when, and *that*
is answered by the standard.
And if you have nothing to say that is relevant to the discussion, you know
what not to do.



The "question" had nothing to do with the standard. Hello?

yes it does. We are descending towards pantomime mode... "Look out
behind you!"
Some
implementations use what you call "a mechanism". I don't understand
why you care or what [point] you are trying to make.

JK was doing a fine job at describing the underpinnings of a typical
implementation.
ok

I think this is an area where you should bow out and let JK
continue, for he has the requisite knowledge and information being sought:
aka, "the answer".

I think you changed the question part way through. But you may have a
point in that I am no longer adding useful information to the thread.
Until I see 2 distinct mechanisms, I am right.

what? Microsoft's and most other people's?
 

Nick Keighley

   [...]
this hypothesis is wrong, at least
for the compilers I know.  You also said "An explicit mechanism
that is part of the exception machinery that calls destructors?
I don't think so."  I'm not sure what you mean by that,
How convenient for you, since that IS the hypothesis.
but the compiler definitely does generate code which is
specific to exception handling, and it is that code which
calls the destructors.
I guess at this point, nothing else will do except a
side-by-side comparison analysis of the actual processes and
mechanisms of an implementation (or a few implementations).
Thanks for trying to explain it though.
The real problem I'm having, I think, is in understanding what
you mean by "an explicit mechanism".  Within the compiler (and
the compiler's runtime library), there is a lot of code which
deals exclusively in exceptions: the tables generated in my
explanation are only used in exception handling, the code
which walks back the stack, looking for the return addresses in
the tables is only used for exception handling, and the separate
clean-up routines called by that code are only used for
exception handling.  To me, that's a "mechanism", and it is
explicitly used for exceptions.  But maybe you're thinking of
something else with regards to "mechanism".

Yes, you understand "mechanism" the same way as I used it (and no need to
get into the plural of the term, as this is not an English language learning
newsgroup though at times it sure feels like it). "Umpteen" posts ago, you
told of that mechanism and I asked then what is the mechanism and processing
like in the case where there are no exceptions, say the compiler switch that
turns off exception handling is on. So far, we have the one mechanism
"defined" (for some whatever implementation) and now I'm in search of the
other one for comparison. I meant "explicit" as "separate from the mechanism
that calls destructors and controls program flow in the case where there are
no exceptions". 'explicit' was probably a bad word choice, though how the
info being sought could be unclear, given all the context, baffles me.
(Aside: "mechanism" applied to software code may be a bit of a stretch for
some minds uninitiated to anything other than the narrow world of software
development. Another hypothesis in the making.)

you speak of "multiple mechanisms" by this do you mean

1. the code invoked when objects go out of scope in the "normal
fashion", that is by reaching the end of the enclosing block or
executing a return statement (probably a few more like gotos and
breaks)

2. the code invoked when objects go out of scope due to an exception
being thrown

And your hypothesis is that 1 and 2 are the same code.

It appears that most current compilers use different code (mechanisms)
for 1 and 2. Cfront may have done it your way, and MS may do it your
way, but GCC probably doesn't.

Is that a fair summary of your position (and a response to it)?
 

dragan

Jorgen said:
Your first posting

Oh. After all this time, we are back at the first posting? "What's wrong
with this picture?".
started out with mentioning "commonly held/observed
misconceptions"

And don't think I don't have follow up things. I was soliciting other
peoples' observations, but maybe it isn't accessible enough of a question
FOR PEOPLE WITH IQs OVER 120! ;) :p
so I guess most of us assumed you were talking about
the things that matter when *using* the language.

Hmm. I'd have to go back and reread my OP to see if it was not obvious that
I was on the IMPLEMENTATION (HOW IT WORKS) track. Or maybe you can tell me
if it says that when YOU go back and reread it? I already said that the
passage I quoted could have been, and probably was, from that perspective,
but the quoted passage is ambiguous/incorrect no matter how you assimilate
it (or so I remember). (And yes, I avoid going back in time. "Forward young
man!". or... "Go West young man! Go West!").
That's why you got
the kinds of answers you got.

Not at all. JK was on the right track immediately. So if all the other
responses were what you are referring to (by "virtue" of their volume), you
have no explanation to convey.
I also don't see how you can interpret those answers as attempts to
curb your appetite for knowledge.

You're taking quite a "liberty" to extrapolate one tiny passage to the whole
thread in general. I'm like a junkyard dog: someone corners me, I bite.
("junkyard dog": I'm not good with "your" "this is like..." things, but I'm
getting better at them and understanding them, but it's somewhat close. Is
there a thesaurus-like book/site for such?).
 

dragan

Nick said:
you seem to switch from technical discussion to insult at the drop of
a hat.

Examples please (in private please, this isn't a chatroom).
And sometimes treat people who have a genuine disagreement with
you as if they were idiots.

You think? "It ain't me".
See later in this post...

Cool: A USENET post "hyperlink"! (Only "slightly" less effective than the
HTML kind ;) ). Will there be a commercial break before each "cliff hanger"?
I actually "watch" some TV now that I can skip over the commercials via my
PC DVR. :) (Aside).
yes, but the implementation must implement the standard.

Please, just stop. Are you trying to "push my buttons"? Just stop it.
So if you
want to know what destructors are invoked you read the standard!

I said stop it.
If an
implementation can be detected doing something different then it isn't
an implementation of C++. Java for instance may trigger its
"destructors" long after the object has gone out of scope.

"The Templars of the C++ Standard"?
I think you are wrong to be obsessed with implementation.

Oh, I'm "obsessed" now. Interesting.
Perhaps you
should phrase your question "how do you typically implement
exceptions" rather than "what destructors should be invoked when
objects go out of scope".

"The thing is", now, that "this is now and that was then". You KNOW I'm not
going to entertain childish antics. I didn't post a question at the start.
(Was there a question mark in the OP?). If there is a question NOW, surely
it will escape all those just learning the English language. But, you
knowing the English language.. OK, that isn't enough. The ability to
assimilate information and ... blah, blah. You're making me feel "bad"... I
always HATED being discriminated against via labelers: "technical person"
just because my BS degree is of Science.

Then sit back and observe the thread and not interject noise, thank you. Is
this your thread? No, it is mine. All "ancillaries" are just noise. you
don't really wanna be noise do you? Do you post in every thread? Why? Why
not?
a jump to the end of the function *is* a "mechanism" in my book. And
too simple to boot.

Let's analyze this. I "baby-ishly" said something very definitively and
purposefully naively and you're all on this remote, tiny irrelevant series
of words instead of the main point. Childish Antic No. 9? (No. 9, No. 9. No.
9....).
How does your mechanism handle this: [example omitted]

you don't think the example illustrated a point (that jumping to the
end of a function isn't enough)

You are snipping-n-pasting weirdly. Who are you asking and what are you
asking? If it's not important (it's not), stop making NOISE.
this isn't attitude? Actually it's just rudeness.

Show me. I don't know what you are talking about. Tell me. Explain. How is
what you quoted me saying rude? Please respond to just this in a separate
post, because this I am interested in. (Here in group, not in private). (Was
this the "USENET post hyperlink"? I think it is!).
I don't think breaking your sentence at a conjunction seriously mangled
its meaning.

What you "think" (quotes required) and what is true are chasms apart then,
surely. I'm not going to play childish antics with you. Newsgroups posts are
not source code. (I may be on to something here with that!). I know what I
said. I'm not going back in time. Stop pushing my buttons.
I certainly didn't intend to misrepresent you.

You don't represent me at all.
The
previous poster described how exceptions are typically implemented.
That really does look like a "mechanism" to *me*.

You're way behind "the times". JK is tasked with explaining the contrasting
case. He presented one "mechanism". Of course, he said NOTHING, until he
shows the OTHER one. I don't think he'd want you as a lawyer though. (Are
you a thug?). :p
I'm not getting your point.

Patience. I believe JK has the knowledge or can get the info, and I don't
care from where it comes. I don't care if JK sees a problem to be solved and
utilizes all the resources at his disposal (unless they are other people) to
craft an answer. I sent out an RFP and the only one to get a second look was
JK. Do you understand that you have already been eliminated?
In the no-throw case dtors are invoked
when items go out of scope, yes.

You have other stuff. You didn't get this contract, and you probably won't
get others if you keep going after ones that you are not qualified for.
he seems to be saying

Did he assign you as his spokesperson? If not, shut up.
but you seem to be asking which dtors are invoked and when, and *that*
is answered by the standard.

"You're fired". JK is not hired though. Wanna know why? Because I am leading
him to the answer. That he can get it is one thing, that I have to lead him
there is quite another. At this point, I am indeed "building it myself" (no
offense JK).
yes it does.

No, read the RFP.
I think you changed the question part way through.

I know you failed to keep up with the client's (me) need. VERY few are "cut
out" to be consultants. I'm not eliminating you out of the realm, but you
have to stay well within your level of capability (and I ENCOURAGE people to
"offer their wares"). It's just business.
 

dragan

Nick said:
James said:
James Kanze wrote:
James Kanze wrote: [...]
this hypothesis is wrong, at least
for the compilers I know. You also said "An explicit mechanism
that is part of the exception machinery that calls destructors?
I don't think so." I'm not sure what you mean by that,
How convenient for you, since that IS the hypothesis.
but the compiler definitely does generate code which is
specific to exception handling, and it is that code which
calls the destructors.
I guess at this point, nothing else will do except a
side-by-side comparison analysis of the actual processes and
mechanisms of an implementation (or a few implementations).
Thanks for trying to explain it though.
The real problem I'm having, I think, is in understanding what
you mean by "an explicit mechanism". Within the compiler (and
the compiler's runtime library), there is a lot of code which
deals exclusively in exceptions: the tables generated in my
explanation are only used in exception handling, the code
which walks back the stack, looking for the return addresses in
the tables is only used for exception handling, and the separate
clean-up routines called by that code are only used for
exception handling. To me, that's a "mechanism", and it is
explicitly used for exceptions. But maybe you're thinking of
something else with regards to "mechanism".

Yes, you understand "mechanism" the same way as I used it (and no
need to get into the plural of the term, as this is not an English
language learning newsgroup though at times it sure feels like it).
"Umpteen" posts ago, you told of that mechanism and I asked then
what is the mechanism and processing like in the case where there
are no exceptions, say the compiler switch that turns off exception
handling is on. So far, we have the one mechanism "defined" (for
some whatever implementation) and now I'm in search of the other one
for comparison. I meant "explicit" as "separate from the mechanism
that calls destructors and controls program flow in the case where
there are no exceptions". 'explicit' was probably a bad word choice,
though how the info being sought could be unclear, given all
the context, baffles me. (Aside: "mechanism" applied to software
code may be a bit of a stretch for some minds uninitiated to
anything other than the narrow world of software development.
Another hypothesis in the making.)

you speak of "multiple mechanisms" by this do you mean

Are you soliciting a Consultant? :)
[snipped further inquiries]

This is a "PAINFUL" thread (and I had OTHER "misconceptions" to follow!).
 

Joshua Maurice

First, from a certain purely technical perspective, how things are
implemented does not matter. In the real world, how things are
implemented does matter. I care whether or not my functions are expanded
inline, and how exceptions are implemented.

To clarify dragan's interest, I think he's curious if it's commonly
implemented like the following. Specifically, whether there are multiple
execution paths, one for when an exception is thrown and one for when it
is not, i.e., multiple call sites to the destructor of a local stack object.

//original code
{
    A x;
    B y;
    C z;
}

//pseudo code
{
    new (address_of_x) A;
    if (no pending exception)
    {
        new (address_of_y) B;
        if (no pending exception)
        {
            new (address_of_z) C;
            if (no pending exception)
            {
                address_of_z->~C();
            }
            address_of_y->~B();
        }
        address_of_x->~A();
    }
}

However, ideally, exceptions should not be implemented this way. It
was originally intended to offer (near) zero overhead when the
exception is not thrown. The above translation has a lot of overhead
(and is probably not how Microsoft Win32 does it either).

Instead, something like the following was intended: The Program
Counter register, or PC, is a register that most processors have. Its
sole purpose is to hold the location of the next instruction to
execute. On a good implementation, a throw statement will save the PC,
then consult a giant lookup table built at compile/link time that maps
PC values to exception handlers; those handlers clean up the
local objects, then pass control to another exception handler or to a
user-written catch block. Specifically, if you know exactly where in
the code you threw an exception, and you know your current stack, then
you know precisely how to unwind the stack. You know all of the local
objects you need to destroy, and in what order, and you know where to
put control back to user written code.
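
To make the table idea concrete, here is a minimal sketch of that kind of
lookup, assuming hypothetical names (CleanupEntry, cleanup_table,
unwind_one_frame); it is not any real compiler's ABI, just the shape of the
data a table-driven unwinder consults. Each entry maps a range of return
addresses to a routine that destroys the locals live in that range; a real
unwinder repeats this frame by frame until it reaches a frame with a
matching catch handler.

#include <cstddef>

struct CleanupEntry {
    unsigned long pc_begin;        // first instruction address covered by this entry
    unsigned long pc_end;          // one past the last instruction covered
    void (*cleanup)(void* frame);  // destroys the locals live in that range
};

// Table emitted at compile/link time, one entry per region of code
// (hypothetical; real formats are more compact).
extern const CleanupEntry cleanup_table[];
extern const std::size_t  cleanup_table_size;

// Process one stack frame: find the entry covering the saved return address
// and run its cleanup routine.
inline void unwind_one_frame(unsigned long return_pc, void* frame)
{
    for (std::size_t i = 0; i != cleanup_table_size; ++i) {
        const CleanupEntry& e = cleanup_table[i];
        if (return_pc >= e.pc_begin && return_pc < e.pc_end) {
            e.cleanup(frame);      // calls the destructors for this frame's locals
            return;
        }
    }
    // No entry: nothing with a destructor is live in this frame.
}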
 

James Kanze

James said:
James Kanze wrote:
James Kanze wrote:
[...]
this hypothesis is wrong, at least
for the compilers I know. You also said "An explicit mechanism
that is part of the exception machinery that calls destructors?
I don't think so." I'm not sure what you mean by that,
How convenient for you, since that IS the hypothesis.
but the compiler definitely does generate code which is
specific to exception handling, and it is that code which
calls the destructors.
I guess at this point, nothing else will do except a
side-by-side comparison analysis of the actual processes and
mechanisms of an implementation (or a few implementations).
Thanks for trying to explain it though.
The real problem I'm having, I think, is in understanding
what you mean by "an explicit mechanism". Within the
compiler (and the compiler's runtime library), there is a
lot of code which deals exclusively in exceptions: the
tables generated in my explanation are only used in
exception handling, the code which walks back the stack,
looking for the return addresses in the tables is only used
for exception handling, and the separate clean-up routines
called by that code are only used for exception handling.
To me, that's a "mechanism", and it is explicitly used for
exceptions. But maybe you're thinking of something else
with regards to "mechanism".
Yes, you understand "mechanism" the same way as I used it (and
no need to get into the plural of the term, as this is not an
English language learning newsgroup though at times it sure
feels like it). "Umpteen" posts ago, you told of that
mechanism and I asked then what is the mechanism and
processing like in the case where there are no exceptions, say
the compiler switch that turns off exception handling is on.
So far, we have the one mechanism "defined" (for some whatever
implementation) and now I'm in search of the other one for
comparison. I meant "explicit" as "separate from the mechanism
that calls destructors and controls program flow in the case
where there are no exceptions".

OK. In that case, the mechanism I explained is explicit. In
all of the compilers I've seen, destructors are simply called at
the end of scope when no exceptions are raised. Exactly as they
were before exceptions were added to the language. The extra
tables and the associated stack walkback only comes into play
when an exception is thrown.

The goal here is "you don't pay for what you don't use". As
long as no exception is thrown, the code executes as fast as if
exceptions weren't in the language (or almost---added control
flow paths may affect optimization, and the additional tables
may affect locality).

Not all compilers use this technique. G++ does, as does Sun CC,
but Microsoft does seem to insert some extra calls here and
there (although I've not studied its mechanism enough to know
exactly how it works).

Basically, at least with Sun CC (and except for the optimizer
considering the additional flow paths), code is generated
exactly as if exceptions didn't exist. Plus the additional
tables are generated. When you throw an exception, the compiler
generates special code to allocate memory in a reserved area and
copy the exception into it, then calls a special runtime
function which does the stack walkback, which can be relatively
expensive at runtime because of all of the table lookups it is
doing. (I don't know offhand whether it has to do a linear
search each time, or if the tables are sorted and it can do a
binary search.)
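
As a rough sketch of what that throw path amounts to (hypothetical names and
a stub "runtime"; this is not Sun CC's or g++'s actual interface, though the
Itanium C++ ABI analogues are __cxa_allocate_exception and __cxa_throw), a
throw expression lowers to roughly allocate, copy-construct, then hand off to
the walkback routine:

#include <cstdio>
#include <new>

// Stub "runtime", just to keep the sketch self-contained. A real runtime
// would use a reserved area with a heap fallback, and would walk the stack
// using the compiler-generated tables.
static unsigned char reserved_exception_area[256];

extern "C" void* rt_allocate_exception(unsigned long)
{
    return reserved_exception_area;              // storage in the reserved area
}

extern "C" void rt_throw(void* exc, void (*destroy)(void*))
{
    std::printf("walking back the stack (table lookups happen here)\n");
    destroy(exc);                                // destroy the exception object
}

struct MyError { int code; explicit MyError(int c) : code(c) {} };

static void destroy_my_error(void* p) { static_cast<MyError*>(p)->~MyError(); }

void raise_error()
{
    // "throw MyError(42);" becomes, roughly:
    void* mem = rt_allocate_exception(sizeof(MyError));  // 1. allocate storage
    new (mem) MyError(42);                               // 2. copy/construct the exception
    rt_throw(mem, destroy_my_error);                     // 3. hand off to the walkback routine
}

int main() { raise_error(); }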
 

dragan

James Kanze said:
James said:
James Kanze wrote:
James Kanze wrote:
[...]
this hypothesis is wrong, at least
for the compilers I know. You also said "An explicit mechanism
that is part of the exception machinery that calls destructors?
I don't think so." I'm not sure what you mean by that,
How convenient for you, since that IS the hypothesis.
but the compiler definitely does generate code which is
specific to exception handling, and it is that code which
calls the destructors.
I guess at this point, nothing else will do except a
side-by-side comparison analysis of the actual processes and
mechanisms of an implementation (or a few implementations).
Thanks for trying to explain it though.
The real problem I'm having, I think, is in understanding
what you mean by "an explicit mechanism". Within the
compiler (and the compiler's runtime library), there is a
lot of code which deals exclusively in exceptions: the
tables generated in my explanation are only used in
exception handling, the code which walks back the stack,
looking for the return addresses in the tables is only used
for exception handling, and the separate clean-up routines
called by that code are only used for exception handling.
To me, that's a "mechanism", and it is explicitly used for
exceptions. But maybe you're thinking of something else
with regards to "mechanism".
Yes, you understand "mechanism" the same way as I used it (and
no need to get into the plural of the term, as this is not an
English language learning newsgroup though at times it sure
feels like it). "Umpteen" posts ago, you told of that
mechanism and I asked then what is the mechanism and
processing like in the case where there are no exceptions, say
the compiler switch that turns off exception handling is on.
So far, we have the one mechanism "defined" (for some whatever
implementation) and now I'm in search of the other one for
comparison. I meant "explicit" as "separate from the mechanism
that calls destructors and controls program flow in the case
where there are no exceptions".

OK. In that case, the mechanism I explained is explicit. In
all of the compilers I've seen, destructors are simply called at
the end of scope when no exceptions are raised. Exactly as they
were before exceptions were added to the language. The extra
tables and the associated stack walkback only comes into play
when an exception is thrown.

The goal here is "you don't pay for what you don't use".

I wonder if a much simpler implementation is possible if not being
"hell-bent" on _zero_ overhead. One that reuses the same destructor
call/stack walk as in the non-exceptional case. I would proceed in that
direction first, if I was implementing the language.
As
long as no exception is thrown, the code executes as fast as if
exceptions weren't in the language (or almost---added control
flow paths may affect optimization, and the additional tables
may affect locality).

Not all compilers use this technique. G++ does, as does Sun CC,
but Microsoft does seem to insert some extra calls here and
there (although I've not studied its mechanism enough to know
exactly how it works).

Basically, at least with Sun CC (and except for the optimizer
considering the additional flow paths), code is generated
exactly as if exceptions didn't exist. Plus the additional
tables are generated. When you throw an exception, the compiler
generates special code to allocate memory in a reserved area and
copy the exception into it, then calls a special runtime
function which does the stack walkback, which can be relatively
expensive at runtime because of all of the table lookups it is
doing. (I don't know offhand whether it has to do a linear
search each time, or if the tables are sorted and it can do a
binary search.)

So overall, the answer is that in practice compiler implementors opt for the
zero-overhead goal, which may require (or make attractive) machinery that is
separate from the machinery used in the normal processing case, but other
simpler schemes are probably possible. Since there is nothing inherently
tying exceptions to dedicated/explicit mechanisms, any statement worded so as
to imply that is wrong (a misconception).
 

James Kanze

James Kanze wrote:
James Kanze wrote:
James Kanze wrote:
[...]
OK. In that case, the mechanism I explained is explicit. In
all of the compilers I've seen, destructors are simply called at
the end of scope when no exceptions are raised. Exactly as they
were before exceptions were added to the language. The extra
tables and the associated stack walkback only comes into play
when an exception is thrown.
The goal here is "you don't pay for what you don't use".
I wonder if a much simpler implementation is possible if not being
"hell-bent" on _zero_ overhead. One that reuses the same destructor
call/stack walk as in the non-exceptional case. I would proceed in
that direction first, if I was implementing the language.

I'm not sure what you're describing here? That every function have an
additional, hidden return value, which is tested on return from every
function?

Other mechanisms are possible. I believe some earlier compilers did use
a system of objects automatically registering themselves on
construction, and deregistering themselves on destruction (with try
blocks registering and deregistering their catch clauses as well). The
registry is organized more or less as a stack, and the exception handler
just pops until it encounters a catch clause which handles the
exception.

Such mechanisms have a very noticeable impact on performance in the case
where an exception isn't thrown. Probably acceptable in most
applications, but certainly not in all.
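
A minimal sketch of that registration scheme (hypothetical names; not any
particular compiler's actual output): each object with a destructor pushes a
cleanup record on construction and pops it on destruction, and the throw path
pops and runs records down to the level recorded by the nearest enclosing try
block. The push/pop on every construction and destruction is exactly the
overhead paid in the non-exceptional case.

#include <cstddef>
#include <vector>

struct CleanupRecord {
    void (*destroy)(void*);   // runs the registered object's destructor
    void*  object;
};

static std::vector<CleanupRecord> cleanup_registry;   // organized as a stack

template <typename T>
void destroy_object(void* p) { static_cast<T*>(p)->~T(); }

// Compiler-generated code would call this after each construction...
template <typename T>
void register_cleanup(T* obj)
{
    CleanupRecord r = { &destroy_object<T>, obj };
    cleanup_registry.push_back(r);
}

// ...and this just before each normal destruction.
void deregister_cleanup() { cleanup_registry.pop_back(); }

// The throw path: pop and run cleanups until the registry is back at the
// level recorded by the nearest enclosing try block, then enter its handler.
void unwind_to(std::size_t try_block_level)
{
    while (cleanup_registry.size() > try_block_level) {
        CleanupRecord r = cleanup_registry.back();
        cleanup_registry.pop_back();
        r.destroy(r.object);      // run the destructor of the abandoned object
    }
}
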
So overall, the answer is that in practice compiler implementors opt
for the zero-overhead goal, which may require (or make attractive)
machinery that is separate from the machinery used in the normal
processing case, but other simpler schemes are probably possible.
Since there is nothing inherently tying exceptions to
dedicated/explicit mechanisms, any statement worded so as to imply
that is wrong (a misconception).

The machinery isn't that complicated. After all, you need to be able to
walk back the stack in other cases as well (e.g. in a debugger---and
what compiler doesn't come with a debugger). The alternatives are
relatively expensive, and some people do choose their compiler based on
benchmark results (and those benchmarks rarely test the performance when
an exception is thrown). For better or worse, performance is an issue
for compiler vendors---lower performance means less sales.
 

dragan

James said:
James Kanze wrote:
James Kanze wrote:
James Kanze wrote:
[...]
OK. In that case, the mechanism I explained is explicit. In
all of the compilers I've seen, destructors are simply called at
the end of scope when no exceptions are raised. Exactly as they
were before exceptions were added to the language. The extra
tables and the associated stack walkback only comes into play
when an exception is thrown.
The goal here is "you don't pay for what you don't use".
I wonder if a much simpler implementation is possible if not being
"hell-bent" on _zero_ overhead. One that reuses the same destructor
call/stack walk as in the non-exceptional case. I would proceed in
that direction first, if I was implementing the language.

I'm not sure what you're describing here?

Nothing specific because I'm not currently programming such stuff. Having
never implemented such stuff, I would try to find one design first before
conceding to 2 separate mechanisms that do "the same thing".
That every function have an
additional, hidden return value, which is tested on return from every
function?

Where that came from is baffling.
Other mechanisms are possible. I believe some earlier compilers did use
a system of objects automatically registering themselves on
construction, and deregistering themselves on destruction (with try
blocks registering and deregistering their catch clauses as well). The
registry is organized more or less as a stack, and the exception handler
just pops until it encounters a catch clause which handles the
exception.

Such mechanisms have a very noticeable impact on performance in the
case where an exception isn't thrown. Probably acceptable in most
applications, but certainly not in all.

I wonder if there are papers on such. I haven't been an ACM member for a
long time. It may be worth it to join again.
The machinery isn't that complicated.

Something I'll have to look at in the future maybe (as if there wasn't other
stuff I should be doing!). I'd rather read about it in papers though if
there are some/any. I'd be interested in the early implementations and how
they evolved and why ("geez Louise", I'm getting geekier by the minute!).
 

Brian

James Kanze said:
James Kanze wrote:
James Kanze wrote:
James Kanze wrote:
   [...]
OK.  In that case, the mechanism I explained is explicit.  In
all of the compilers I've seen, destructors are simply called at
the end of scope when no exceptions are raised.  Exactly as they
were before exceptions were added to the language.  The extra
tables and the associated stack walkback only comes into play
when an exception is thrown.
The goal here is "you don't pay for what you don't use".
I wonder if a much simpler implementation is possible if not being
"hell-bent" on _zero_ overhead. One that reuses the same destructor
call/stack walk as in the non-exceptional case. I would proceed in
that direction first, if I was implementing the language.

I'm not sure what you're describing here?  That every function have an
additional, hidden return value, which is tested on return from every
function?

Other mechanisms are possible.  I believe some earlier compilers did use
a system of objects automatically registering themselves on
construction, and deregistering themselves on destruction (with try
blocks registering and deregistering their catch clauses as well).  The
registry is organized more or less as a stack, and the exception handler
just pops until it encounters a catch clause which handles the
exception.

Such mechanisms have a very noticeable impact on performance in the
case where an exception isn't thrown.  Probably acceptable in most
applications, but certainly not in all.


So overall, the answer is that in practice compiler implementors opt
for the zero-overhead goal, which may require (or make attractive)
machinery that is separate from the machinery used in the normal
processing case, but other simpler schemes are probably possible.
Since there is nothing inherently tying exceptions to
dedicated/explicit mechanisms, any statement worded so as to imply
that is wrong (a misconception).

The machinery isn't that complicated.  After all, you need to be able to
walk back the stack in other cases as well (e.g. in a debugger---and
what compiler doesn't come with a debugger).  The alternatives are
relatively expensive, and some people do choose their compiler based on
benchmark results (and those benchmarks rarely test the performance when
an exception is thrown).  For better or worse, performance is an issue
for compiler vendors---lower performance means less sales.


When buying a car I care about how fast it goes from
0 to 60. There's flexibility about the range for that
with me, but if a car is two to three times slower
than others in that regard, it's a big red flag --
http://webEbenezer.net/comparison.html


Brian Wood
http://webEbenezer.net
 

James Kanze

On Dec 17, 12:27 pm, James Kanze <[email protected]> wrote:

[...]
When buying a car I care about how fast it goes from
0 to 60. There's flexibility about the range for that
with me, but if a car is two to three times slower
than others in that regard, it's a big red flag

So you won't consider cars like the VW Golf, since a Ferrari does
accelerate two or three times faster.

Actually, I'm sure you didn't think that statement out. Performance is
only one of many issues---I'd guess that correctness would be the most
important one (but even there---correctness is only important for the
features you use). Or even availability: early implementations of
exceptions used the slower mechanism because they could get the
implementation out the door quicker that way. And so on.
 

tanix

[...]
The machinery isn't that complicated. After all, you need to be
able to walk back the stack in other cases as well (e.g. in a
debugger---and what compiler doesn't come with a debugger). The
alternatives are relatively expensive, and some people do choose
their compiler based on benchmark results (and those benchmarks
rarely test the performance when an exception is thrown). For
better or worse, performance is an issue for compiler
vendors---lower performance means less sales.
When buying a car I care about how fast it goes from
0 to 60. There's flexibility about the range for that
with me, but if a car is two to three times slower
than others in that regard, it's a big red flag

So you won't consider cars like the VW Golf, since a Ferrari does
accelerate two or three times faster.

Well, what to do if you are zombified with this "power" trip since
the cradle. "Power" is the only thing that counts to most biorobots,
most of whom are driven by an inferiority complex and zombified with violence
since the cradle. Long story.
Actually, I'm sure you didn't think that statement out. Performance
is only one of many issues

Yep, important one, but not the MOST important one in the scheme of things.
---I'd guess that correctness would be the most
important one (but even there---correctness is only important for the
features you use).

Well, correctness is a vague concept overall.
Correctness of what?

Correctness of your powerful and flexible logging and program state
monitoring system that dynamically displays the intermediate states your
program takes while doing some timely and important operations?

Correctness of your user interface design that turns out to be one of
the most important characteristics of any program because it allows you
to do the maximum number of things with the most ease and a conceptually
clear user interface design?

Correctness in terms of being able to recover from any kind of error
and continue on some time expensive operations?

Or simply dumb correctness of the low level algorithms, which,
even being correct, still make your program suck big time overall
as it is totally unintuitive, cumbersome and you name it?

"Correctness" of your documentation and making it available on all
user interface panels, dialog boxes, etc. as context-sensitive
help buttons, clearly describing the functionality of any GUI
element, or well-tagged and interlinked documentation, describing
your program with a fine enough degree of detail, while, at the
same time, not overloading you with tons of crap, just like almost
all the Microsoft documentation does?

Or just describing in a single sentence things that take at least a
couple of good paragraphs to describe? Again, just like
Microsoft does? Those things cannot be described in a single
sentence, and yet, that is all you get.

Is THAT correctness?

Or, the program being compatible for generations and running
on any version of OS, no matter what they invent, more or less?
Does THAT count as "correctness"?

Does ANY kind of error handling and reporting count as "correctness"?
Or even availability: early implementations of
exceptions used the slower mechanism because they could get the
implementation out the door quicker that way. And so on.

Well, I thought the early versions of exceptions were the fastest
possible, even in principle, because they utilized the setjmp/longjmp
design and would simply abandon the stack below a certain
depth by long jumping several levels up the stack to the
setjmp label.
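
For reference, a minimal sketch of the setjmp/longjmp pattern being alluded
to here. Note that longjmp by itself only abandons the stack; it does not run
C++ destructors for the frames it skips, which is why setjmp/longjmp-based
C++ exception implementations also had to keep some registration of pending
cleanups.

#include <csetjmp>
#include <cstdio>

static std::jmp_buf handler;             // where control resumes after the "throw"

static void deeply_nested_work(int value)
{
    if (value < 0)
        std::longjmp(handler, 1);        // "throw": abandon the frames below
    std::printf("value = %d\n", value);
}

int main()
{
    if (setjmp(handler) == 0) {          // "try": setjmp returns 0 on the first pass
        deeply_nested_work(42);
        deeply_nested_work(-1);          // this one triggers the longjmp
    } else {
        std::printf("caught: back at the setjmp site\n");   // "catch"
    }
    return 0;
}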

An interesting aspect of exception performance I saw in this thread
is the issue of exceptions "being expensive" in terms of processing
overhead. Well, it does count for exceptions in string-to-number
conversion and things like that, though it isn't even clear it matters
that much, because the very conversion itself is already expensive.

But my opinion is that the exception processing overhead is a non-issue,
since your program runs without exceptions in 99.999% of cases.
But once you do hit some exception, what does it matter how fast
exception processing is, if your program
is basically screwed at that point? The most critical
and most important issue would be the informational aspect of it:
to give the user as much and as clear information about it as
possible, presenting this error in a listbox of running status
that could be scrolled back to see what happened to your program
even hours ago.

Plus automatic error logging into rotating log files, that are
automatically updated and time stamped ad infinitum.

I think using exceptions in a way that they become a normal flow
of your code is a misuse of exceptions. You should not rely on
exceptions to do program logic. Otherwise the very notion of an
exception becomes totally distorted.

Once you hit ANY exception, take as much time as you want
to do anything you want, even going out to disk, reporting the error,
or anything like that. The more thorough a job you do processing
exceptions, the better. If you can manage to deallocate the heap
memory, great; it matters in C++. Luckily, in Java, it does not
matter, because the heap deallocation is automatic and really
fine grained and optimized for local scope. So its performance
is a non-issue.

It is surprising to see that to this day, there is no GC in C++.
In any kind of more or less complex program the chance of memory
leaks is so high that I doubt there are that many programs that
do not leak memory.

The same goes for threading, GUI and other things that C++ is still
missing even after generations.

It's a pity Sun decided to take on Microsoft with their JVM
legal case. That simply killed Java, because Microsoft just dropped
it totally from their product line and stopped ANY development
with anything that even sounds Java-ish.

That is probably the biggest tragedy in the software industry,
because the non-dynamically scoped languages that are the bread
and butter of the software industry are not being developed.

C++ is a dead end as it stands right now in my opinion.

Today, with all the dynamically scoped languages there is no issue
of portability. Take Python, PHP, Ruby, SQL, which are running
the world more or less. There is no issue of "does it run
on Linux and Windows?"

And look at this pathetic C++ with all these bizarre syntax
complications. How many people in their right mind, writing
those Egyptian hieroglyphs with templates, think that anyone
else, looking at their code would be willing to spend half
an hour trying to understand that most unintuitive meaning
of those 3 mile long template definitions and all the side
effects of it?

People have milliseconds to look at some code.

I was using Java for several years and, because Microsoft
does not support anything beyond JDK 1.3, which is at least
10 years old, you cannot use generics (the equivalent of C++
templates, more or less), and you cannot even use Swing
functionality (GUI stuff).

But the best development environment I know of is MS
Visual Studio, and I mean BY FAR. As a result, I was forced
never to use generics, and to this day I have not seen
ANY problems with not using them. Yes, casting is not nice,
but in the scheme of things, at least from the standpoint
of code clarity, it is MUCH better to use casting than
templates or generics. First of all, I can understand the
code in milliseconds. Try to understand someone else's code,
written by a "smart" egghead, who is basically a pervert,
trying to make the simplest looking thing look like
a grand unifying theory.

Just look at 4 things that make Java what it is:

1) Built in threads - TOP notch idea. Helps portability
like nothing else.

2) Built-in GC. - TOP notch idea. Memory leaks will prevent
any program from running more than a couple of days in one
go.

3) Built-in GUI. Outstanding idea.

4) Binary compatibility. - The MOST important criteria for me,
BY FAR.

I don't want to write several versions of my programs to be
able to run them on ANY O/S. First of all, all that stuff is
out of the window nowadays because dynamically scoped languages
are already taking up the majority of the software business.

WHO in his right mind would like to EVEN BOTHER about maintaining
all this non-portable crap?

And what I am seeing is these C++ purists sitting here and
running their mealy mouths about "purity of language" to the
point of obscenity, chasing away even guys who would like to
discuss threading, GUI, or other issues on this group.

What are you guys doing here? Trying to dig yourself down
into the ground as fast as you can manage? Digging your own
grave with a back hoe?

Do you want me to talk on 5 different groups about C++
based programming issues? And what does THAT buy you?

Fragmentation of discussions and issues?

Schizophrenic view of system design?

Totally non-portable code?

Never being able to write a single GUI code for any O/S?

Never being able to run a binary compatible code,
while ALL dynamically scoped languages do it all day long,
ALL over the place?

Wasting weeks on cleaning up memory leaks?
I have wasted MONTHS trying to catch all the subtle memory
leaks in a sophisticated async-based program, because some
network availability issues make you maintain all sorts of
queues, depending on user interaction, causing such headaches
that you cannot even begin to imagine from the standpoint
of memory leaks.

And still, to this day, I do have a few memory leaks.
Nothing major really, and no one will even notice it, even
restarting the program several times. But still, there should be
no memory leaks PERIOD.

How do you do it with C++?

Either you wake up, or smell the flowers on the graveyard.

That is the verdict for you, C++ "gurus", writing the most
convoluted crap and presenting it as a particle science
and a grand unifying theory.

It makes me puke seeing most of your code.
Every time I have to deal with someone else's code,
it makes me shiver. Because I know your cunning and your
inferiority complex. You'd waste hours writing some most
confusing spaghetti, just to make sure it is going to take them
days to understand your code and will make you one of the
"irreplaceable". Job security trip.
Instead of writing the simplest code that does exactly the same
thing and can be understood within milliseconds by anyone who
knows what he is doing.

Is THAT progress?

Just look at all these "inventions" and "improvements" in C++?

Where do you think it is going to take you?

And the bottom line: with all that "great" improvement,
your programs still suck balls in the vast majority of cases.

What a pity.

 

Brian

On Dec 17, 12:27 pm, James Kanze <[email protected]> wrote:

    [...]
When buying a car I care about how fast it goes from
0 to 60.  There's flexibility about the range for that
with me, but if a car is two to three times slower
than others in that regard, it's a big red flag

So you won't consider cars like the VW Golf, since a Ferrari does
accelerate two or three times faster.

Actually, I'm sure you didn't think that statement out.  Performance is
only one of many issues---I'd guess that correctness would be the most
important one (but even there---correctness is only important for the
features you use).  Or even availability: early implementations of
exceptions used the slower mechanism because they could get the
implementation out the door quicker that way.  And so on.

I'm not arguing with that, just saying that with the software
in question, both are free. If price/cost isn't a factor,
I'd definitely take a Ferrari. This reminds me of something
C. S. Lewis said: "We are half-hearted creatures, fooling
about with drink and sex and ambition when infinite joy is
offered us, we are like ignorant children who want to
continue making mud pies in a slum because we cannot imagine
what is meant by the offer of a vacation at the sea. We are
far too easily pleased.” To some extent I think users of
some well-known serialization libraries are making mud pies
in the slums.

Brian Wood
http://webEbenezer.net
 

tanix

On Dec 17, 12:27 pm, James Kanze <[email protected]> wrote:

    [...]
The machinery isn't that complicated.  After all, you need to be
able to walk back the stack in other cases as well (e.g. in a
debugger---and what compiler doesn't come with a debugger).  The
alternatives are relatively expensive, and some people do choose
their compiler based on benchmark results (and those benchmarks
rarely test the performance when an exception is thrown).  For
better or worse, performance is an issue for compiler
vendors---lower performance means less sales.
When buying a car I care about how fast it goes from
0 to 60.  There's flexibility about the range for that
with me, but if a car is two to three times slower
than others in that regard, it's a big red flag

So you won't consider cars like the VW Golf, since a Ferrari does
accelerate two or three times faster.

Actually, I'm sure you didn't think that statement out.  Performance is
only one of many issues---I'd guess that correctness would be the most
important one (but even there---correctness is only important for the
features you use).  Or even availability: early implementations of
exceptions used the slower mechanism because they could get the
implementation out the door quicker that way.  And so on.

I'm not arguing with that, just saying that with the software
in question, both are free. If price/cost isn't a factor,
I'd definitely take a Ferrari.

And I'd take a Cadillac Seville.
Do you mind?
I would not take a Ferrari even if you paid me.
I'd sell it for all it's worth.
What a sick zombie machine!

Have you ever driven a 1991 Cadillac Seville?
This reminds me of something
C. S. Lewis said: "We are half-hearted creatures, fooling
about with drink and sex and ambition when infinite joy is
offered us, we are like ignorant children who want to
continue making mud pies in a slum because we cannot imagine
what is meant by the offer of a vacation at the sea. We are
far too easily pleased." To some extent I think users of
some well-known serialization libraries are making mud pies
in the slums.

Brian Wood
http://webEbenezer.net

 

James Kanze

James Kanze <[email protected]> wrote:

[...]
Well, correctness is a vague concept overall.
Correctness of what?

Of the compiler, since that's the tool we're talking about. If
the code generated by the compiler doesn't do what the language
says it should, then you have a very big problem. It's a low
level correctness, but it's still an essential one.

(I won't bother replying to the rest. The problems of the
poster should be obvious to any reasonably mature person who
reads it.)
 

James Kanze

"tanix" <[email protected]> wrote in message
news:[email protected]...
do you know, there exist wrappers for malloc that, at the end of the
program, check if there are "memory leaks" and report the result
to the screen

If the program ends, it doesn't leak memory, and practically,
there's no way any tool can tell whether there is a leak or not.
(Well, they can tell that some things are definitively leaked.
If there are no pointers to the memory, for example, it has
leaked.) What the tools do is suggest possible leaks.
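
As a sketch of what such a wrapper amounts to (hypothetical names; not a real
tool): record every allocation, forget it on release, and report whatever is
still recorded at the end as a possible leak, for exactly the reason given
above.

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <map>

namespace leakcheck {

typedef std::map<void*, std::size_t> Table;

Table& table() { static Table t; return t; }      // live block -> allocation size

void* allocate(std::size_t n)
{
    void* p = std::malloc(n);
    if (p) table()[p] = n;                        // remember the live block
    return p;
}

void release(void* p)
{
    if (!p) return;
    table().erase(p);                             // forgotten: not a leak
    std::free(p);
}

// Call at the end of the program: anything still recorded *might* be a leak,
// or might simply never have been freed before exit.
void report()
{
    for (Table::const_iterator it = table().begin(); it != table().end(); ++it)
        std::fprintf(stderr, "possible leak: %lu bytes at %p\n",
                     static_cast<unsigned long>(it->second), it->first);
}

} // namespace leakcheck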

With regards to the first statement, of course: I've worked on a
fairly large number of applications which didn't leak. Some of
them, you may have used, without knowing it, since many of the
programs I've worked on aren't visible to the user---they do
things like routing your telephone call to the correct
destination. (And the proof that there isn't a leak: the
program has run over five years without running out of memory.)
so memory leaks cannot be a problem if one uses these special
"malloc" functions (like I always use, with all the bells that
sound)

I'll say it again: there's no silver bullet. In the end, good
software engineering is the only solution. Thus, for example, I
prefer using garbage collection when I can, but it's a tool
which reduces my workload, not something which miraculously
eliminates all memory leaks (and some of the early Java
applications were noted for leaking, fast and furious).
 

James Kanze

On Dec 17, 12:27 pm, James Kanze <[email protected]> wrote:
[...]
The machinery isn't that complicated. After all, you
need to be able to walk back the stack in other cases as
well (e.g. in a debugger---and what compiler doesn't
come with a debugger). The alternatives are relatively
expensive, and some people do choose their compiler
based on benchmark results (and those benchmarks rarely
test the performance when an exception is thrown). For
better or worse, performance is an issue for compiler
vendors---lower performance means less sales.
When buying a car I care about how fast it goes from 0 to
60. There's flexibility about the range for that with me,
but if a car is two to three times slower than others in
that regard, it's a big red flag
So you won't consider cars like the VW Golf, since a Ferrari
does accelerate two or three times faster.
Actually, I'm sure you didn't think that statement out.
Performance is only one of many issues---I'd guess that
correctness would be the most important one (but even
there---correctness is only important for the features you
use). Or even availability: early implementations of
exceptions used the slower mechanism because they could get
the implementation out the door quicker that way. And so
on.
I'm not arguing with that, just saying that with the software
in question, both are free. If price/cost isn't a factor, I'd
definitely take a Ferrari.

You don't have three children. :) I wouldn't mind having a
Ferrari, either, but for day to day use, there are more
practical cars. Regardless of price. With three children and
two dogs, a van beats a Ferrari hands down. If you really live
out in the wilderness, with only dirt roads, you'll probably
prefer a Jeep. And if you have to drive and park a lot in
Paris or London, you'll want something small. As with software,
there's no silver bullet.

With regards to exception handling and C++ (to get back on
subject), if some of the early implementations of exceptions
used slower mechanisms than is considered necessary today, it's
generally because it was a question of supporting exceptions now
with the slower mechanism, or supporting them in two years time
with an optimal mechanism.

Similarly, with regards to optimizing application code: you've
only got so much time to write it in, so it's often a question
of making it take 5 milliseconds less time (over an hour), or
adding a feature. I think all really competent computer
scientists are purists, and would like for every line to be
"optimal" (foremostly in elegance and readability, but also in
terms of performance). I also think that all really competent
software engineers know how and when to make engineering trade
offs.
 

tanix

do you know, there exist wrappers for malloc that, at the end of the program,
check if there are "memory leaks" and report the result to the screen

Well, I AM getting the leak dumps at the end of the program.
The problem is that we have an issue of the same object being
passed around and saved into several queues, and it is not easy
to say who exactly did not do deallocation. There are several
customers.

I had to add a specific stamp to tag the packet information
allocated by the driver interface code. So, when a buffer is placed
on the unknown packet list and it is not possible to process it until
the user decides what he wants to do with it, there is nothing
I can do. The unknown packet dialog is async code. We cannot
block until the user decides to respond to an unknown packet.
We have to allow other non-issue traffic and allow the user to
continue on with ANYTHING related to the program, even if he does
not know what to do with that packet for quite a while,
including the need to issue a whois from the same program and
to see whether he wants to allow this traffic now and in the future.
And that means potential delays in processing of up to minutes.
What if the user went to the kitchen meanwhile and someone
attacks his box? The unknown packet dialog is up, and not only is
it up, but there may be an entire queue of packets that
are waiting to be processed if you are being attacked.

On top of it, all the processing is asynchronous.
So, the packet buffer (and we carry the whole packet buffer
because the user may want to dump the packet buffer to see the
specifics of it) has to be carried around and either be attached
to the monitor listbox, or held pending.

And all these mechanisms are not, and cannot be, common.
So, WHO did not release the buffer, when and why?
Try to figure that out in a totally async environment with several
consumers.

Finally, when I stamped the buffers with the allocator ID tag,
I was able to get rid of those memory leaks.

But the whole point is that it took months to even decide to
go after this issue, and it took days to rewrite the code,
including the NDIS driver, before the solution was finally there.

You see the issue?

And I NEVER EVER saw the issue of this kind with Java.
One heavy duty program I wrote in Java works like a champ
for years without a SINGLE issue with memory leak and gc is
so efficient that it is not far from automatic stack deallocation.

And to me, personally, the memory leak issues are some of the
top priority items and it is very unfortunate that this issue
is not resolved to this day in C++.

Sure, you don't have the JVM or MVM to rely upon and the whole
thing becomes quite a trip. But these kinds of things eventually
kill the language for all practical purposes.

Take, for example, the issue of writing portable GUI code.
It is a nightmare in the C++ environment. Microsoft does it their
way, Linux/Unix does it their way, all sorts of graphics toolkits
do it their way. It is a literal nightmare.

With Java, it is a non-issue. And GUI power is one of the most
important criteria in determining program "correctness".
I do not take the notion of program correctness to be
formal correctness. Formal correctness can only be proven
mathematically. So, for the vast majority of programs out there,
it is nothing more than a pipe dream.

And I am not even using the Swing version of the GUI.
AWT is just fine for what I am doing, and the user interface
is probably the best you can imagine in your wildest dreams.
I couldn't care less about Swing. Everything is just fine.

But, because I can use VC, even though it does not support
the "latest and greatest" JDK, I can crank out my code
several times faster than they can do with other development
environments. Plus I can run on any O/S without even recompiling,
and in one case, when my Windows box got rooted to the point
I could not use it for more than a month, because that rootkit
was the most sophisticated thing I ever knew existed, that
saved my skin. I just switched to Linux, copied some config
files from the Windows version, and restarted the operations at the
same exact point where I lost my Windows.

How much does THAT kind of thing count to you?
Well, I don't know about you, but this is the number one criterion
for me, and I mean number one, not two or three.

And it is a pity that C++ cannot satisfy either of these most
critical elements for me.

And I saw plenty of posts by different people and their opinions
on it, and it basically led nowhere. No progress so far that
I know of in any of the most critical areas of modern programming
languages. Even stinky Javascript does not have as many portability
problems as C++. What a pity!
so memory leaks cannot be a problem if one uses these special "malloc"
functions (like I always use, with all the bells that sound)

Not to worry, I did all of that.
The point is how much time and energy you have to waste to bother
with stoopid things like memory leaks?

 
