Exception Misconceptions: Exceptions are for unrecoverable errors.


Kaz Kylheku

James said:
tanix wrote:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.

Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow because it does not have
to lock all threads

Reference counting blocks the calling thread on an atomic increment
or decrement operation whenever the reference count must be manipulated.

Reference counting generates bus traffic. Whenever a refcount field is
written, the corresponding cache line is now dirty on the local
processor and must be updated to the other processors.

Refcounting forces garbage objects to be visited one by one before they
are reclaimed, in a random order, and possibly more than once! An object
with refcount 42 will in general be visited 42 times before 42
references can be dropped.
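To make that cost concrete, here is a minimal sketch (not any particular
library's implementation) of an intrusive, atomically reference-counted
handle; note that every copy and every destruction touches the shared
count with an atomic read-modify-write:

#include <atomic>

struct RefCounted {
    std::atomic<long> refs{1};
};

template <typename T>
class Handle {
    T* p_;
public:
    explicit Handle(T* p) : p_(p) {}
    Handle(const Handle& other) : p_(other.p_) {
        // Atomic increment: a locked bus operation, and the cache line
        // holding `refs` becomes dirty on this core.
        p_->refs.fetch_add(1, std::memory_order_relaxed);
    }
    Handle& operator=(const Handle&) = delete;  // kept minimal for the sketch
    ~Handle() {
        // Atomic decrement on every destruction, even when the object
        // is not actually going away.
        if (p_->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p_;
    }
};

struct Data : RefCounted { int x = 0; };

int main() {
    Handle<Data> a(new Data);
    Handle<Data> b(a);   // atomic increment here
}                        // two atomic decrements here; the last one frees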

Someone recently remarked in the comp.lang.lisp newsgroup that a
particular historic Lisp implementation, which used reference counting
rather than real GC, sometimes fell into such a long pause in
processing a dropped reference that frustrated programmers would just
reboot the system!!!

See when you drop a refcount on an object and it reaches zero, you are
not done. That object has pointers to other objects, and /their/
references have to be dropped. Refcounting does not eliminate the
fact that a graph is being walked.
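For illustration, a small C++ sketch of that cascade: dropping the last
reference to the head of a shared_ptr chain walks and frees every node,
one by one (and a deep enough chain can even overflow the stack):

#include <memory>

struct Node {
    int value = 0;
    std::shared_ptr<Node> next;
};

int main() {
    auto head = std::make_shared<Node>();
    Node* cur = head.get();
    for (int i = 0; i < 10000; ++i) {        // build a 10,000-node chain
        cur->next = std::make_shared<Node>();
        cur = cur->next.get();
    }
    head.reset();   // last reference dropped: ~Node runs ~shared_ptr runs
                    // ~Node... the whole chain is visited node by node
}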

But reference counting is the dumbest, slowest way of hunting down garbage.
 

Kaz Kylheku

Looks like it is a matter of life and death to you.
But I doubt you can win this argument.

I doubt you can convince this village bohunk that he can possibly
lose an argument.
 

Branimir Maksimovic

Kaz said:
How it can be is that surprising truths in the world don't take a pause
so that morons can catch up.


Bulk-stopping threads using the scheduler is more efficient than
throwing locks or atomic instructions in their execution path.
Well, stopping threads by using the scheduler or any other means
while they work is the same as or worse than throwing locks or atomic
instructions in the execution path....
Actually, this is what I said already. The simplest way to perform garbage
collection is to pause the program, scan references, then continue the program...
What's more efficient: pausing a thread once in a long while, or having
it constantly trip over some atomic increment or decrement, possibly
millions of times a second?

Atomic increment/decrement costs nothing if nothing is locked....
So there is actually a small probability that that will happen,
because usually there are not many objects referenced from multiple
threads.

The job of GC is to find and reclaim unreachable objects.
Exactly.


When an object becomes unreachable, it stays that way. A program does
not lose a reference to an object, and then magically recover the
reference. Thus, in general, garbage monotonically increases as
computation proceeds.

This means that GC can in fact proceed concurrently with the
application.

Only after it finds unreferenced objects...

The only risk is that the program will generate more
garbage while GC is running, which the GC will miss---objects which GC
finds to be reachable became unreachable before it completes.
But that's okay; they will be found next time.

So you will have 500 megabytes more RAM used than with manual
deallocation.... ;)
This is hinted at in the ``snapshot mark-and-sweep'' paragraph
in the GC algorithms FAQ.

http://www.iecc.com/gclist/GC-algorithms.html
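A toy stop-the-world mark-and-sweep pass, only to make the snapshot
argument concrete (real collectors are far more sophisticated): anything
that becomes garbage while the pass runs is simply picked up by the next
cycle.

#include <unordered_set>
#include <vector>

struct Obj {
    bool marked = false;
    std::vector<Obj*> children;   // outgoing references
};

void mark(Obj* o) {
    if (o == nullptr || o->marked) return;
    o->marked = true;
    for (Obj* c : o->children) mark(c);
}

// One stop-the-world collection cycle over `heap` starting from `roots`.
void collect(std::unordered_set<Obj*>& heap, const std::vector<Obj*>& roots) {
    for (Obj* o : heap) o->marked = false;   // reset marks
    for (Obj* r : roots) mark(r);            // mark everything reachable
    for (auto it = heap.begin(); it != heap.end();) {
        if (!(*it)->marked) { delete *it; it = heap.erase(it); }  // sweep
        else ++it;
    }
}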


Go study garbage collection. There is lots of literature there.

It's not a small, simple topic.


WTF, are you stupid?
Stupid?


Firstly, any comparison between GC and manual deallocation is moronic.

Of course, manual deallocation does not have to pause the
complete program...
In order to invoke manual deallocation, the program has to be sure
that the object is about to become unreachable, so that it does
not prematurely delete an object that is still in use.

Hey, delete p just frees a block of memory. That work is not that
complicated...


Moreover,
the program has to also ensure that it eventually identifies all objects
that are no longer in use. I.e. by the time it calls the function, the
program has already done exactly the same job that is done by the
garbage collector: that of identifying garbage.

What are you talking about?

/Both/ manual deallocation and garbage collection have to recycle
objects somehow; the deallocation part is a subset of what GC does.

Yup. GC does much more; the deallocation part can be done concurrently.
That's why manual deallocation will always be more efficient
and faster.
Look, when I say free(p) it is just a simple routine....

(Garbage collectors integrated with C in fact call free on unreachable
objects; so in that case it is obvious that the cost of /just/ the call
to free is lower than the cost of hunting down garbage /and/ calling
free on it!)
?


The computation of an object lifetime is not cost free, whether it
is done by the program, or farmed off to automatic garbage collection.
Your point about locking is naively wrong, too. Memory allocators which
are actually in widespread use have internal locks to guard against
concurrent access by multiple processors.

Of course.

Even SMP-scalable allocators
like Hoard have locks.

Of course.


See, the problem is that even if you shunt
allocation requests into thread-local heaps, a piece of memory may be
freed by a different thread from the one which allocated it.

I have written such an allocator...

Thread A
allocates an object, thread B frees it. So a lock on the heap has to be
acquired to re-insert the block into the free list.
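A sketch of that situation, with hypothetical names (this is not Hoard's
actual API): the owner's free path is unsynchronized, but a cross-thread
free must take the owner heap's lock.

#include <mutex>
#include <vector>

struct ThreadHeap {
    std::vector<void*> free_list;     // fast path: touched only by the owner
    std::mutex remote_lock;
    std::vector<void*> remote_frees;  // blocks handed back by other threads
};

void deallocate(ThreadHeap& owner, void* block, bool called_by_owner) {
    if (called_by_owner) {
        owner.free_list.push_back(block);        // no lock on the fast path
    } else {
        std::lock_guard<std::mutex> g(owner.remote_lock);  // cross-thread free
        owner.remote_frees.push_back(block);
    }
}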

The lock operation is very short, and a collision between two threads may
pause one or two threads. But the others will continue to work, unlike with
GC, which will completely pause the program for sure.

Greets!
 

Branimir Maksimovic

Kaz said:
James said:
tanix wrote:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.
Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow because it does not have
to lock all threads

Reference counting blocks the calling thread on an atomic increment
or decrement operation whenever the reference count must be manipulated.

Hm, it blocks nothing; actually that can only happen if two or more
threads share the same object, which is in practice a rare case,
and add/release references at the same time.
Actually the typical scenario is auto_ptr, not shared_ptr, when
one thread passes an object to the other.
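That handoff scenario, sketched with std::unique_ptr (the modern
replacement for auto_ptr): ownership moves from one thread to another,
and no reference count is touched along the way.

#include <memory>
#include <mutex>
#include <queue>

struct Work { int payload = 0; };

std::mutex mtx;
std::queue<std::unique_ptr<Work>> inbox;

void producer() {
    auto w = std::make_unique<Work>();
    std::lock_guard<std::mutex> g(mtx);
    inbox.push(std::move(w));      // ownership transferred, no refcount
}

void consumer() {
    std::unique_ptr<Work> w;
    {
        std::lock_guard<std::mutex> g(mtx);
        if (!inbox.empty()) { w = std::move(inbox.front()); inbox.pop(); }
    }
    // w owns the object exclusively; it is freed when w goes out of scope.
}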
Reference counting generates bus traffic. Whenever a refcount field is
written, the corresponding cache line is now dirty on the local
processor and must be updated to the other processors.

Reference counting generates much less bus traffic than stopping the
program and scanning memory. In order for GC to reach pointers,
it must ensure that every pointer in the application
is flushed to memory, which is... much, much worse than
reference counting... Actually, scanning memory generates
bus traffic if the scan area is larger than the cache, which
is usually the case, since a real-world application heap
is usually larger than the available cache.
Refcounting forces garbage objects to be visited one by one before they
are reclaimed, in a random order, and possibly more than once! An object
with refcount 42 will in general be visited 42 times before 42
references can be dropped.

42 references, unlikely... If it had a million references, we could
be worried, but 42 ;)

Someone recently remarked in the comp.lang.lisp newsgroup that a
particular historic Lisp implementation, which used reference counting
rather than real GC, sometimes fell into such a long pause in
processing a dropped reference that frustrated programmers would just
reboot the system!!!

Hm. Lisp? This is shared_ptr we're talking about...

See when you drop a refcount on an object and it reaches zero, you are
not done. That object has pointers to other objects, and /their/
references have to be dropped. Refcounting does not eliminate the
fact that a graph is being walked.

You are wrong. shared_ptrs are usually used externally, auto_ptrs are
usually used internally, or simply new in the constructor / delete
in the destructor.

But reference counting is the dumbest, slowest way of hunting down garbage.

?

Greets
 

peter koch

Yup.

You have to unwind EVERYTHING, no matter how small it is.
Otherwise, sooner or later your box will run out of steam.

You have to unwind everything in any language. By unwinding it in the
destructor you only write it once and you don't have to have
special code for exceptions. Contrast this with Java: there you need the
code explicitly present in every function using the feature.

/Peter
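What Peter describes, in a minimal sketch: the cleanup is written once in
the destructor and runs on every exit path, exceptional or not; in Java
the equivalent try/finally has to appear in every function that uses the
resource.

#include <cstdio>

class File {
    std::FILE* f_;
public:
    explicit File(const char* name) : f_(std::fopen(name, "r")) {}
    ~File() { if (f_) std::fclose(f_); }  // written once; runs on unwind too
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};

void parse(const char* name) {
    File f(name);
    // ... code that may throw; f is closed no matter how we leave.
}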
 

tanix

Well, that depends... I was born in '68.

Let me show you one thing:

Number of nodes: 5855000
Timer: initial randomize: 0.818023
Timer: merge_sort: 5.558880
Timer: randomize after merge: 1.201952
Timer: radix_sort: 2.021901
Timer: randomize after radix: 1.470415
Timer: quick_sort after radix: 3.805699
vnode size : 5855000
Timer: quick_sort nodes by address: 0.730361
Timer: quick_sort: 0.505779
Timer: randomize after quick: 0.911052
cumulative result:
----------------------------------
initial randomize: 3.470863
merge: 21.176704 randomize: 5.611846
radix: 8.425690 randomize: 6.397170
quick sort nodes by address: 3.909824  > vector<void*> quick sort, then fill linked list with nodes sorted by address
quick no address sort after radix: 15.180166  > unoptimized linked list, nodes are not sorted by address
quick: 2.297783 randomize: 4.075525  > nodes are sorted by address: 7 times faster, same algorithm, almost 6 million nodes
Pt count created: 999999
true
true
qsort: 0.06568  > cache-optimized quick sort of a million-element vector
sort: 0.134283  > sort from gcc's lib
lqsort: 0.19908  > cache-optimized quick sort of a million-element linked list
lsort: 0.437295  > linked list sort from gcc's lib
Pt count: 0

Hey, cool. I like that one. If only I could understand
what it means.
Which virtual machine can perform crucial cache optimizations?

Sorry, I dunno. You just blew my stack!
:--}

 

James Kanze

Yes it is. A mutex held is also a resource, and so is a
transaction. Both should be wrapped in a class having
appropriate destructor semantics.

A mutex probably should be considered a resource, but a
transaction? If the transaction is really a concrete object,
perhaps, but what if it is just an invariant that is temporarily
broken? (A good example of this would be a simple
implementation of shared_ptr. Boost goes to a lot of effort to
ensure that shared_ptr always leaves the program in a coherent
state, but given that there are two dynamic allocations
involved, it requires careful consideration to ensure exception
safety.)
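The hazard alluded to here, in a minimal sketch (illustrative only, not
Boost's code): a counted handle needs two allocations, the object and the
control block, and if the second one throws, the first must not leak.

struct Count { long refs; };

template <typename T>
class CountedPtr {
    T* obj_;
    Count* count_;
public:
    explicit CountedPtr(T* obj) : obj_(obj), count_(nullptr) {
        try {
            count_ = new Count{1};  // the second allocation may throw...
        } catch (...) {
            delete obj;             // ...so the first must be released here
            throw;
        }
    }
    ~CountedPtr() {
        if (count_ != nullptr && --count_->refs == 0) {
            delete obj_;
            delete count_;
        }
    }
    CountedPtr(const CountedPtr&) = delete;  // copying elided in this sketch
};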
 

tanix

How it can be is that surprising truths in the world don't take a pause
so that morons can catch up.

Wow! That looks like a wrestling arena!
:--}
Bulk-stopping threads using the scheduler is more efficient than
throwing locks or atomic instructions in their execution path.

Kewl argument.
What's more efficient: pausing a thread once in a long while, or having
it constantly trip over some atomic increment or decrement, possibly
millions of times a second?
The job of GC is to find and reclaim unreachable objects.

That's what I thought. But who knows. Maybe there is some magic to it.
:--}
When an object becomes unreachable, it stays that way. A program does
not lose a reference to an object, and then magically recover the
reference. Thus, in general, garbage monotonically increases as
computation proceeds.
This means that GC can in fact proceed concurrently with the
application. The only risk is that the program will generate more
garbage while GC is running, which the GC will miss---objects which GC
finds to be reachable became unreachable before it completes.
But that's okay; they will be found next time.

This is hinted at in the ``snapshot mark-and-sweep'' paragraph
in the GC algorithms FAQ.

http://www.iecc.com/gclist/GC-algorithms.html


Go study garbage collection. There is lots of literature there.

It's not a small, simple topic.
WTF, are you stupid?
Firstly, any comparison between GC and manual deallocation is moronic.

Well, kinda blunt way of putting it, but I'd have to agree.
In order to invoke manual deallocation, the program has to be sure
that the object is about to become unreachable, so that it does
not prematurely delete an object that is still in use. Moreover,
the program has to also ensure that it eventually identifies all objects
that are no longer in use. I.e. by the time it calls the function, the
program has already done exactly the same job that is done by the
garbage collector: that of identifying garbage.
/Both/ manual deallocation and garbage collection have to recycle
objects somehow; the deallocation part is a subset of what GC does.
(Garbage collectors integrated with C in fact call free on unreachable
objects; so in that case it is obvious that the cost of /just/ the call
to free is lower than the cost of hunting down garbage /and/ calling
free on it!)

The computation of an object lifetime is not cost free, whether it
is done by the program, or farmed off to automatic garbage collection.

Your point about locking is naively wrong, too. Memory allocators which
are actually in widespread use have internal locks to guard against
concurrent access by multiple processors. Even SMP-scalable allocators
like Hoard have locks. See, the problem is that even if you shunt
allocation requests into thread-local heaps, a piece of memory may be
freed by a different thread from the one which allocated it. Thread A
allocates an object, thread B frees it. So a lock on the heap has to be
acquired to re-insert the block into the free list.

 

tanix

Kaz said:
Well, stopping threads by using the scheduler or any other means
while they work is the same as or worse than throwing locks or atomic
instructions in the execution path....
Actually, this is what I said already. The simplest way to perform garbage
collection is to pause the program, scan references, then continue the program...


Atomic increment/decrement costs nothing if nothing is locked....
So there is actually a small probability that that will happen,
because usually there are not many objects referenced from multiple
threads.



Only after it finds unreferenced objects...

The only risk is that the program will generate more

So you will have 500 megabytes more RAM used than with manual
deallocation.... ;)


Of course, manual deallocation does not have to pause the
complete program...


Hey, delete p just frees a block of memory. That work is not that
complicated...

Yes it is.

What happens AFTER it has been "freed" as it looks to you?
Can you tell me?
Moreover,
What are you talking about?
Yup. GC does much more; the deallocation part can be done concurrently.
That's why manual deallocation will always be more efficient
and faster.
Look, when I say free(p) it is just a simple routine....

But what is happening on the O/S level AFTER that?
What is your overall performance as a SYSTEM
and not just some local view of it?

You see, what counts is the END result.
How long does it take the user to wait for a response.
How long does it take for your program to continue its main operation.
And NOT how long it takes YOU to return from the free() call.
That is just a very local and primitive view of the system,
I'd have to say. I simply have no choice.
?

Of course.

Even SMP-scalable allocators

Of course.

Well, so it means that you cannot just look at the pinhole
(of free() call return time) as overall performance.
See, the problem is that even if you shunt
I have written such an allocator...
The lock operation is very short, and a collision between two threads may
pause one or two threads. But the others will continue to work, unlike with
GC, which will completely pause the program for sure.

Well, too bad we are still at it.
You see, to me the program performance translates into the run time
of some more or less complex operation to complete.

What I care about is not how many times my program "freezes" for so
many microseconds, but how long it will take me to complete my run.

If it takes me 4 hours, it is one thing.
If it takes me 4 hrs. and 10 minutes, that is nothing to even mention.
But if it takes me 5 hrs, I'd start scratching my cockpit.
But not yet.
But when it takes me 6 hrs vs. 4, I'd definitely start looking
at some things.

 

tanix

James said:
tanix wrote:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.

Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow because it does not have
to lock all threads

Reference counting blocks the calling thread on an atomic increment
or decrement operation whenever the reference count must be manipulated.

Reference counting generates bus traffic. Whenever a refcount field is
written, the corresponding cache line is now dirty on the local
processor and must be updated to the other processors.

Refcounting forces garbage objects to be visited one by one before they
are reclaimed, in a random order, and possibly more than once! An object
with refcount 42 will in general be visited 42 times before 42
references can be dropped.

Someone recently remarked in the comp.lang.lisp newsgroup that a
particular historic Lisp implementation, which used reference counting
rather than real GC, sometimes fell into such a long pause in
processing a dropped reference that frustrated programmers would just
reboot the system!!!

See when you drop a refcount on an object and it reaches zero, you are
not done. That object has pointers to other objects, and /their/
references have to be dropped. Refcounting does not eliminate the
fact that a graph is being walked.

But reference counting is the dumbest, slowest way of hunting down garbage.

What a pleasure to read this kind of stuff, I tellya.

 

tanix

Kaz said:
James Kanze wrote:
tanix wrote:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.
Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.

Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow because it does not have
to lock all threads

Reference counting blocks the calling thread on an atomic increment
or decrement operation whenever the reference count must be manipulated.

Hm, it blocks nothing; actually that can only happen if two or more
threads share the same object, which is in practice a rare case,
and add/release references at the same time.
Actually the typical scenario is auto_ptr, not shared_ptr, when
one thread passes an object to the other.
Reference counting generates bus traffic. Whenever a refcount field is
written, the corresponding cache line is now dirty on the local
processor and must be updated to the other processors.

Reference counting generates much less bus traffic than stopping the
program and scanning memory. In order for GC to reach pointers,
it must ensure that every pointer in the application
is flushed to memory, which is... much, much worse than
reference counting... Actually, scanning memory generates
bus traffic if the scan area is larger than the cache, which
is usually the case, since a real-world application heap
is usually larger than the available cache.
Refcounting forces garbage objects to be visited one by one before they
are reclaimed, in a random order, and possibly more than once! An object
with refcount 42 will in general be visited 42 times before 42
references can be dropped.

42 references, unlikely... If it had a million references, we could
be worried, but 42 ;)

Someone recently remarked in the comp.lang.lisp newsgroup that a
particular historic Lisp implementation, which used reference counting
rather than real GC, sometimes fell into such a long pause in
processing a dropped reference that frustrated programmers would just
reboot the system!!!

Hm. Lisp? This is shared_ptr we're talking about...

See when you drop a refcount on an object and it reaches zero, you are
not done. That object has pointers to other objects, and /their/
references have to be dropped. Refcounting does not eliminate the
fact that a graph is being walked.

You are wrong. shared_ptrs are usually used externally, auto_ptrs are
usually used internally, or simply new in the constructor / delete
in the destructor.

But reference counting is the dumbest, slowest way of hunting down garbage.

?

Greets

Jeez. I feel jealous. I wish this kind of stuff were the kind
of thing I'd have to worry about.

 

tanix

I doubt you can convince this village bohunk that he can possibly
lose an argument.

Jeez. Time to get a beer, I guess, to mellow out the edges!
:--}

 

tanix

Kaz said:
Argument can be won by argument.

Well. Not necessarily. Not that I am trying to argue with this.

Some arguments are much more complex than they might look when
you look at the whole picture and your main interests or points
of view, and there are ALL sorts of ways you may look at some "problem".

The salesman wants sales, the VP wants performance, the CEO wants reduced cost,
the engineer wants code "correctness", the marketing guy wants a big bang,
the accountant wants his books to reconcile because you guys are wasting
too much, and on and on and on.

Which "argument" is "correct"?

Secondly, do you even WANT to see something that differs
from what you are used to?

 

tanix

Yup.

You have to unwind EVERYTHING, no matter how small it is.
Otherwise, sooner or later your box will run out of steam.

You have to unwind everything in any language. By unwinding it in the
destructor you only write it once

Looks nice on paper. I agree.
and you don't have to have
special code for exceptions.
Huh?

Contrast this with Java: there you need the
code explicitly present in every function using the feature.

Sorry. I don't follow this.

 

Balog Pal

James Kanze said:
A mutex probably should be considered a resource, but a
transaction?

Sure. Actually I tend to replace the R in acronyms like RAII from "resource" to
"responsibility". So everything looks uniform: resource alloc comes with the
responsibility to dealloc, mutex lock requires unlock, transaction opening
requires rollback or commit...

So you use the same tech -- the destructor sits there to carry out the
responsibility that is left over.
If the transaction is really a concrete object,
perhaps, but what if it is just an invariant that is temporarily
broken?

For API-based transactions it is simple: call BeginTrans, and keep a bool to
track whether explicit Rollback or Commit was called.

For internal state transactions it is more complicated -- you record the
steps taken so rollback can be arranged. Though that is the less advisable
method; I try to use create-then-swap wherever possible.
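The API-based case, sketched as a guard class; BeginTrans/CommitTrans/
RollbackTrans are hypothetical stand-ins for whatever the real database
API provides. The destructor carries out the leftover responsibility
unless Commit() was called.

#include <cstdio>

// Hypothetical stand-ins for a real database API.
void BeginTrans()    { std::puts("BEGIN"); }
void CommitTrans()   { std::puts("COMMIT"); }
void RollbackTrans() { std::puts("ROLLBACK"); }

class Transaction {
    bool committed_ = false;
public:
    Transaction()  { BeginTrans(); }
    ~Transaction() { if (!committed_) RollbackTrans(); }  // leftover duty
    void Commit()  { CommitTrans(); committed_ = true; }
    Transaction(const Transaction&) = delete;
    Transaction& operator=(const Transaction&) = delete;
};

void update() {
    Transaction t;
    // ... work that may throw; leaving by exception triggers the rollback.
    t.Commit();
}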
(A good example of this would be a simple
implementation of shared_ptr. Boost goes to a lot of effort to
ensure that shared_ptr always leaves the program in a coherent
state, but given that there are two dynamic allocations
involved, it requires careful consideration to ensure exception
safety.)

Err, what is that "lot of effort"? Placing allocations into local scoped_ptrs,
then swapping or releasing them into members afterwards?
 

tanix

Hey guys, the thread:

Re: Exception Misconceptions: Exceptions are for unrecoverable errors.

has been fragmented, as it has been split by some zombies
by inserting CR/LF and blanks into the subject line.

As a result, there are several threads, not one.

Would you change the subject when you follow up on these threads
so they all merge into one thread again?

Just remove all white spaces after "unrecoverable".
One thread has two blanks after "unrecoverable", which makes it
a different thread.


 

James Kanze

Sure. Actually I tend to replace the R in acronyms like RAII from
"resource" to "responsibility". So everything looks uniform:
resource alloc comes with the responsibility to dealloc, mutex lock
requires unlock, transaction opening requires rollback or
commit...
So you use the same tech -- the destructor sits there to carry
out the responsibility that is left over.

There is a similarity: additional actions may be needed when an
exception has been thrown. But the word "resource" (or even
"responsibility") seems too limiting. I prefer talking in terms
of program coherence (in which invariants are maintained); the
implication runs in the other direction: when the program is
coherent, for example, no one holds a mutex lock, or other
resources that won't be used.
For API-based transactions it is simple: call BeginTrans, and
keep a bool to track whether explicit Rollback or Commit was called.
For internal state transactions it is more complicated -- you
record the steps taken so rollback can be arranged. Though
that is the less advisable method; I try to use
create-then-swap wherever possible.

In general: it's best to do anything that might throw before
changing any state. (That principle predates the swap idiom by
some decades :-).) The swap idiom is just a fairly simple way of
expressing it in C++ (and letting destructors handle the
clean-up in both the error cases and the normal case). But if
you're interested, an analysis of the constructor code for
boost::shared_ptr is illuminating; doing things correctly
requires some thought. Independently of the language:
destructors do help in the implementation details, but the
initial analysis is still necessary.
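The principle in miniature, as a sketch of the classic copy-and-swap
assignment: everything that can throw happens on a temporary, and the
visible state changes only through a non-throwing swap.

#include <utility>
#include <vector>

class Widget {
    std::vector<int> data_;
public:
    Widget() = default;
    Widget(const Widget&) = default;
    Widget& operator=(const Widget& other) {
        Widget tmp(other);   // may throw: no state has been changed yet
        swap(*this, tmp);    // cannot throw: the commit point
        return *this;
    }
    friend void swap(Widget& a, Widget& b) noexcept {
        std::swap(a.data_, b.data_);
    }
};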
Err, what is that "lot of effort"? Placing allocations into
local scoped_ptrs, then swapping or releasing them into members
afterwards?

Defining clearly what pointers have to be freed, when. In
general, the implementation doesn't require much effort, once
you've defined clearly what has to be done. (Even a finally
block in Java isn't that much effort. Once you know what needs
to go in it.)
 

Balog Pal

James Kanze said:
There is a similarity: additional actions may be needed when an
exception has been thrown. But the word "resource" (or even
"responsibility") seems too limiting. I prefer talking in terms
of program coherence (in which invariants are maintained); the
implication runs in the other direction: when the program is
coherent, for example, no one holds a mutex lock, or other
resources that won't be used.

Hm, to me that is what sounds strange. To me mutex locking means a critical
section of code, and has nothing to do with invariants or coherence.

And an invariant means something that holds. And is supposed to. It may be broken
for some special technical reason (like not having atomic multi-assign), but
it had better be avoided. While critical sections are totally meant to be
entered, and perfectly natural.
In general: it's best to do anything that might throw before
changing any state. (That principle predates the swap idiom by
some decades :-).)

Sure, that just comes from the fact that an UNDO action is often far from
trivial even in theory, and doing it may fail just like the operation that
went forward. We'd rather avoid such potentially hopeless situations. ;-)
The swap idiom is just a fairly simple way of
expressing it in C++ (and letting destructors handle the
clean-up in both the error cases and the normal case). But if
you're interested, an analysis of the constructor code for
boost::shared_ptr is illuminating; doing things correctly
requires some thought.

I did study that constructor when it was a new thing (I guess like a decade
ago), and it was definitely illuminating. IIRC it was before Herb's
Exceptional... books, and the Abrahams guarantees were either in the future
or new stuff.

By today the scene hopefully is different; that material has been considered
fundamental for a long time, and gets high attention in interviews for a
C++ position and in code reviews.
Independently of the language:
destructors do help in the implementation details, but the
initial analysis is still necessary.

I didn't say otherwise -- just that dtors are a great tool that works like pyro
seat belts. And they are good to have even if you stop the car in time.
Defining clearly what pointers have to be freed, when. In
general, the implementation doesn't require much effort, once
you've defined clearly what has to be done.

In normal cases we want to avoid that very problem. In the ctor, and related
stuff -- copy ctor, op=, possibly others. The way to avoid it is to NOT use
raw pointers as members like shared_ptr does, but to use smart members or a
base class.
That side-steps the problem for good.

The smart pointer suite itself is IMO quite a special case. :)

Not to be done -- or to be trusted to a real expert, who certainly spent his
time studying the existing implementations and is aware of those problems.
 

aku ankka

And why do you think weakly typed languages are gaining ground?

Well, because you don't have to worry about all those nasty
things like argument types. They can be anything at run time.
And nowadays, the power of the underlying hardware is such
that it no longer makes such a drastic difference whether you
run a strongly typed, compiled language or interpret it on
the fly, even though performance is orders of magnitude worse.

You need to put things in perspective.

What does it matter to me if a web page renders in 100 ms
versus 1 ms?

For a lot of tasks that is so true. But then there are things where
this argument doesn't work. If you have a heavy workload and budget
hardware, you will have to work on the optimization really hard. In
some workloads a good application architecture won't be any good
without a razor-sharp inner loop.

Think of 1080p mpeg-4 or h.264 decoding on an Intel Atom; you don't have
the luxury of saying "well, hardware is fucking fast, I'll just do
this decoder in Perl".

If the decoder skips frames, you suck. If you write 2% of the decoder
in assembler, for example, so that you can use instructions you know
the Atom has -- after you tried to trick the compiler into using them
from C/C++ (for example) without success -- you just say "**** THIS SHIT"
and get the job done. You squeeze an extra 8% of performance out of your
code and meet the target performance (full framerate = no dropped frames
and some headroom for higher-bitrate files). Job well done.
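The sort of thing meant by dropping below what the compiler emits on its
own, as a hedged sketch: a hand-vectorized inner loop using SSE2
intrinsics (which the Atom supports). _mm_avg_epu8 averages 16 byte pairs
with rounding in one instruction, a common building block in pixel
interpolation.

#include <emmintrin.h>  // SSE2
#include <cstdint>

// Average two rows of pixels, 16 bytes per iteration.
void average_rows(const std::uint8_t* a, const std::uint8_t* b,
                  std::uint8_t* out, int n) {
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out + i),
                         _mm_avg_epu8(va, vb));  // (a + b + 1) >> 1 per byte
    }
    for (; i < n; ++i)  // scalar tail
        out[i] = static_cast<std::uint8_t>((a[i] + b[i] + 1) >> 1);
}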

If the Atom platform has a GPU, you might want to write CUDA / OpenCL /
GLSL / CS / etc. code to use the graphics processor to do some last
stages of the decoding, so that you can write directly into a
texture / framebuffer object. If nothing else, the YUV-to-RGB
conversion at least can be done on the GPU. For that kind of task, you
use the languages that you must.

But yeah, for some simple flow control logic and stuff like that, we've
got practically endless CPU cycles. But as always, absolute statements
are inaccurate and misleading. Different strokes for different tasks
and all that, right sirs?
 
