Exception Misconceptions: Exceptions are for unrecoverable errors.

James Kanze

Deallocation matters for long-running programs.
A program that runs only a short time might
never need to actually reclaim memory. Otherwise,
I agree that this does indeed take some time.

It has to be considered when making comparisons, however...

In general, with most manual memory management schemes, total
time is proportional to the number of blocks allocated and
freed. With the most classical garbage collection algorithm
(mark and sweep), total time is proportional to the total amount
of memory in use when the garbage collector is run. If you're
allocating a lot of small, short-lived blocks, then garbage
collection is faster. (It's no accident that the benchmarks
prepared by people favoring garbage collection tend to
manipulate very dynamic graph structures, where nodes are
constantly being allocated and freed.) When threading is
involved, it is significantly faster, since the typical malloc/free
will use a lock for each call. (There are, of course, faster
implementations of malloc/free available. The fastest I know of
for a multithreaded environment in fact uses some of the
techniques of garbage collection, at least for memory which is
freed in a different thread from the one it was allocated in.)
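To make the proportionality concrete, here is a deliberately naive
mark-and-sweep sketch in C++ (a toy illustration with made-up types, not
how a production collector works): the mark phase touches only the objects
that are still reachable, while an explicit new/delete scheme pays once for
every block allocated and once for every block freed.

    #include <cstddef>
    #include <vector>

    // Toy object-graph node. Real collectors work on raw heap blocks;
    // this only shows what the cost is proportional to.
    struct Obj {
        bool marked = false;
        std::vector<Obj*> children;   // outgoing references
    };

    // Mark phase: cost proportional to the number of *reachable* objects.
    void mark(Obj* o) {
        if (o == nullptr || o->marked) return;
        o->marked = true;
        for (Obj* c : o->children) mark(c);
    }

    // Sweep phase: walks the allocated objects, reclaims the unmarked ones.
    void sweep(std::vector<Obj*>& heap) {
        std::vector<Obj*> survivors;
        for (Obj* o : heap) {
            if (o->marked) {
                o->marked = false;        // reset for the next cycle
                survivors.push_back(o);
            } else {
                delete o;                 // reclaim the garbage
            }
        }
        heap.swap(survivors);
    }

    // One collection cycle over a set of roots.
    void collect(const std::vector<Obj*>& roots, std::vector<Obj*>& heap) {
        for (Obj* r : roots) mark(r);
        sweep(heap);
    }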
 
James Kanze

You're missing the point. Comparisons of Java and C++ are
supposed to make Java look good. It doesn't matter whether
they make sense so long as they meet that goal.

As I said before, never trust a benchmark you haven't falsified
yourself :). On the other hand, why should a Java proponent
design a benchmark which makes his language look bad? And
since C++ doesn't have any vested interests ready to pay to make
it look good, and the others look bad, there aren't many C++
advocates designing benchmarks. (If someone's ready to pay me,
I'll design you a benchmark. Just tell me which language you
want to win, and it will. I know both languages well enough for
that.)

On the other hand, the fact that there are a large number of
Java applications which run and are sufficiently fast is more or
less a proof that performance isn't (always) a problem with the
language. (The fact that they almost all run on Intel
architectures may indicate that the JVMs available on other
systems aren't all that good.)
Hmm, could it be that Java proponents are all Republicans?

They're not that bad. (At least, not all of them.) There's a
difference between not presenting the whole picture and just
lying (see "if Stephen Hawking had lived in Britain").
 
Stefan Ram

James Kanze said:
As I said before, never trust a benchmark you haven't falsified

Yes, of course, it all starts with the fact that one cannot
compare the »speed of languages« but only the speed of
specific programs running under a specific /implementation/
of a language running under a specific operating system
running on specific hardware.

What is language-specific is only the fact that some
language features make some kinds of optimization possible
(like »restrict« in C) or impossible (e.g., when aliasing by
pointers is possible).
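A small illustration of that aliasing point (function and parameter names
are made up): when the compiler cannot prove that two pointers refer to
different objects, it has to re-read memory inside the loop; a no-aliasing
promise (C's restrict, commonly spelled __restrict as a C++ compiler
extension) lets it hoist the load.

    #include <cstddef>

    // Without an aliasing guarantee, the compiler must assume that writing
    // through dst may modify *scale, so *scale is reloaded every iteration.
    void scale_all(double* dst, const double* scale, std::size_t n) {
        for (std::size_t i = 0; i != n; ++i)
            dst[i] *= *scale;
    }

    // With a no-aliasing promise (__restrict is a common compiler extension
    // in C++; 'restrict' is standard in C), *scale can be loaded once.
    void scale_all_noalias(double* __restrict dst,
                           const double* __restrict scale, std::size_t n) {
        for (std::size_t i = 0; i != n; ++i)
            dst[i] *= *scale;
    }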

So, if I had to implement some algorithm, I would not rule out
Java from the start because it is »slow«, but would do some
benchmarking with code along the lines of that algorithm.

After all, /if/ Java is sufficiently fast for my purpose,
it gives me some conveniences, such as run-time array index
checking, automatic memory management and freedom from
the need for (sometimes risky) pointer arithmetic.
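The index-checking convenience has a direct opt-in counterpart in C++, for
what it's worth (a minimal sketch, nothing from this thread assumed):
std::vector::at() checks the index and throws, while operator[] performs
no check.

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // v[10] would be undefined behaviour: operator[] performs no check.
        // at() performs the run-time index check Java does by default.
        try {
            std::cout << v.at(10) << '\n';
        } catch (const std::out_of_range& e) {
            std::cout << "index checked and rejected: " << e.what() << '\n';
        }
    }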
 
tanix

GC is a heavy performance killer, especially on multiprocessor systems
in combination with threads... it is slow, complex and inefficient...

Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.

[...]
A virtual machine is also a heavy performance killer...

Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)

Yep. And the higher level some abstraction is,
the more performance it can gain, and the less of even a theoretical
advantage any other approach may claim.
First, Java is a compiled language, and second, it's not slower
than any of the other compiled languages, globally. (Specific
programs may vary, of course.)

And that is exactly what I am seeing in my own situation.

--
Programmer's Goldmine collections:

http://preciseinfo.org

Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
 
tanix

Hm, explain to me how any thread can access or change any
pointer in memory without a lock while the GC is collecting...
There is no way for the GC to collect without stopping all threads
or without locking... because the GC is just another thread (or
threads) in itself...

Maybe. I've not actually studied the implementations in
detail. I've just measured actual time. And the result is that
over a wide variety of applications, garbage collection is, on
the average, slightly faster. (With some applications where it
is radically faster, and others where it is noticeably slower.)
Of course. GC is a complex program that has only one purpose:
to let the programmer not write free(p). But the programmer
still has to write close(fd).
What's the purpose of that?

Fewer lines of code to write.

If you're paid by the line, garbage collection is a bad thing.
Otherwise, it's a useful tool, to be used when appropriate.
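The close(fd) point is really about non-memory resources. A minimal C++
sketch of the usual answer (POSIX open/close assumed available, the path is
hypothetical): RAII ties the release of any resource, not just memory, to
scope, which a collector by itself does not give you.

    #include <fcntl.h>    // open  (POSIX, assumed available)
    #include <unistd.h>   // close (POSIX, assumed available)

    // Minimal RAII wrapper: the descriptor is closed when the object goes
    // out of scope, whether we leave normally or via an exception.
    class FileDescriptor {
    public:
        explicit FileDescriptor(const char* path)
            : fd_(::open(path, O_RDONLY)) {}
        ~FileDescriptor() { if (fd_ >= 0) ::close(fd_); }

        FileDescriptor(const FileDescriptor&) = delete;
        FileDescriptor& operator=(const FileDescriptor&) = delete;

        int get() const { return fd_; }

    private:
        int fd_;
    };

    void useFile() {
        FileDescriptor f("/tmp/example.txt");   // hypothetical path
        // ... read from f.get() ...
    }   // no explicit close(fd) needed here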
Refcounts are negligible in comparison to what gc is doing.

Reference counting is very expensive in a multithreaded
environment.

And in the end, measurements trump abstract claims.
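A minimal sketch of why the multithreading point matters (made-up names; not
a benchmark, just the shape of the cost): every copy of a shared_ptr is an
atomic increment of a shared counter, and every destruction an atomic
decrement, so threads that merely pass the same object around contend on
that one cache line.

    #include <memory>
    #include <thread>
    #include <vector>

    struct Node { int value = 0; };

    int main() {
        auto shared = std::make_shared<Node>();

        std::vector<std::thread> workers;
        for (int t = 0; t != 4; ++t) {
            workers.emplace_back([shared] {
                for (int i = 0; i != 1000000; ++i) {
                    // Each copy atomically bumps the reference count and
                    // each destruction atomically decrements it: four
                    // threads end up hammering the same counter.
                    std::shared_ptr<Node> local = shared;
                    (void)local;
                }
            });
        }
        for (auto& w : workers) w.join();
    }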

[...]
Manual memory deallocation is simple, fast and efficient.
Nothing so complex as GC. The cost of new and delete is nothing
in comparison to GC.

That's definitely not true in practice.

[...]
GC cannot be implemented efficiently since it has to mess with
memory...

What you mean is that you don't know how to implement it
efficiently. Nor do I, for that matter, but I'm willing to
accept that there are people who know more about the issues than
I do. And I've measured the results of their work.

[...]
I don't want to discuss this, but it is obvious that nothing
in Java is designed with performance in mind. Quite the
opposite...

You don't want to discuss it, so you state some blatant lie, and
expect everyone to just accept it at face value. Some parts of
Java were definitely designed with performance in mind (e.g.
using int, instead of a class type). Others less so. But the
fact remains that with a good JVM, Java runs just as fast as C++
in most applications. Speed is not an argument against Java
(except for some specific programs), at least on machines which
have a good JVM.

I happen to have looked at some Java source code, and in those
places I looked at, I could not see a way of getting more
performance out of it.

These fools think that those who wrote Java and its various packages
are just lazy bums, while in reality those are the cream-of-the-crop
programmers, and these fools would probably not have a chance to
pass the interview at Sun, regardless of whether they are C++ big
mouths or whatever.

This is the CREAM OF THE CROP of nothing less than Silicon Valley,
and most of what these fools have to say is nothing more than
sucking sounds.

 
Branimir Maksimovic

tanix said:
tanix wrote:
C++ would probably benefit tremendously if it adopted some
of the central Java concepts, such as GC, threads and a GUI.
GC is a heavy performance killer, especially on multiprocessor systems
in combination with threads... it is slow, complex and inefficient...
Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.

[...]
Except that would require an equivalent of a virtual machine
underneath.
A virtual machine is also a heavy performance killer...
Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)

Yep. And the higher level some abstraction is,
the more performance it can gain, and the less of even a theoretical
advantage any other approach may claim.

You believe in fairy tales... ;0)

Greets
 
Branimir Maksimovic

James said:
Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.

How can that possibly be? Doesn't GC stop all threads when it has
to collect? Or can it magically sweep through heap, stack, bss,
etc. and scan without locking all the time or stopping the program?
Explain to me how.
Manual deallocation does not have to lock at all....
[...]
A virtual machine is also a heavy performance killer...

Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)

The best optimization is when you can manually control memory
management and have direct access to the hardware. Everything else
is algorithm optimization... which can be done in any language.

First, Java is a compiled language, and second, it's not slower
than any of the other compiled languages, globally. (Specific
programs may vary, of course.)

Java is a compiled language in the sense that any interpreted
language is run-time compiled... but that does not make those
languages compiled...
For example, PHP with popen calling a C executable is, in my
experience, about three times faster as a server than
jetty/solr...

Greets
 
Branimir Maksimovic

James said:
Stefan Ram wrote:
[...]
Allocation is not where GC fails; rather, deallocation...

It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)
Because there is no faster and simpler way to perform
collection than to stop the program, perform collection in
multiple threads, then let the program work....

Try Googling for "incremental garbage collection".
Incremental garbage collection is a form of collection where
you don't free everything immediately, but this does not
change the fact that whenever you have to see whether something is
referenced or not, you have to stop the program and examine pointers,
which of course kills the performance of threads...

Greets
 
tanix

tanix said:
C++ would probably benefit tremendously if it adopted some
of the central Java concepts, such as GC, threads and a GUI.
GC is a heavy performance killer, especially on multiprocessor systems
in combination with threads... it is slow, complex and inefficient...
Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.

[...]
Except that would require an equivalent of a virtual machine
underneath.
A virtual machine is also a heavy performance killer...
Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it come to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)

Yep. And the higher level some abstraction is,
the more performance it can gain, and the less of even a theoretical
advantage any other approach may claim.

You believe in fairy tales... ;0)

I don't have to believe. It is pretty much self-evident.
Why?

Well, because the higher the level of your abstraction,
the less impact the language has. Because you have a virtual machine
underneath that can do anything you please, and as efficiently
as anything else under the sun.

Basically, you are running machine code at that level.

About the only thing you can claim is: well, but what are those
additional calls? Well, yep, there IS a theoretical overhead.
But once you start looking at the nasty details of it, it all
becomes pretty much a pipe dream.

There are ALL sorts of things that happen under the hood, and
in plenty of cases, your low level details become insignificant
in the scheme of things.

Simple as that.


 
Branimir Maksimovic

tanix said:
I don't have to believe. It is pretty much self-evident.
Why?

Well, because the higher the level of your abstraction,
the less impact the language has. Because you have a virtual machine
underneath that can do anything you please, and as efficiently
as anything else under the sun.

Virtual machines are always slower than real machines...
No matter what, one can optimize, as far as it goes, only
simple cases. Anything non-trivial is very difficult to
optimize, like Java code...

Greets
 
Branimir Maksimovic

James said:
tanix said:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.

Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow, because it does not have
to lock all threads; neither is incrementing and decrementing a
variable slow in comparison with a complete memory scan every time
the GC has to chase pointers through the cyclic graph inside the
application. No matter what you do, it is impossible
to make it faster than refcounting...
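For the cycle issue raised above, a minimal sketch (standard library only)
of how two reference-counted objects can keep each other alive, and the
usual weak_ptr fix:

    #include <memory>

    struct Child;

    struct Parent {
        std::shared_ptr<Child> child;
    };

    struct Child {
        // A shared_ptr back to the parent would form a cycle: both counts
        // stay at one after the last external reference disappears, and
        // neither destructor ever runs.
        // std::shared_ptr<Parent> parent;   // would leak
        std::weak_ptr<Parent> parent;        // breaks the cycle
    };

    int main() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;
    }   // with weak_ptr both objects are destroyed here;
        // with the shared_ptr variant neither would be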
[...]
And somebody tried to convince me that conservative GC is
faster than shared_ptr/auto_ptr (what a ... ;)

And you refused to even look at actual measurements. I'm aware
of a couple of programs where the Boehm collector significantly
outperforms boost::shared_ptr. (Of course, I'm also aware of
cases where it doesn't. There is no global perfect solution.)

Hm, how can a complex algorithm possibly outperform simple
reference counting? Try to measure deallocation speed.
Allocation in GC is the same as manual allocation. But
deallocation is where it performs a complex algorithm.

Greets
 
tanix

James said:
tanix wrote:



Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.

How can that possibly be? Doesn't GC stop all threads when it has
to collect? Or can it magically sweep through heap, stack, bss,
etc. and scan without locking all the time or stopping the program?
Explain to me how.
Manual deallocation does not have to lock at all....
[...]
Except that would require an equivalent of a virtual machine
underneath.
A virtual machine is also a heavy performance killer...

Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)

The best optimization is when you can manually control memory
management and have direct access to the hardware. Everything else
is algorithm optimization... which can be done in any language.

Looks appealing in the local scope of things.
If you get too obsessed with trying to save some machine cycles,
then yes, you do have a point.

The problem is that in an application of any complexity, even one
worth mentioning, you are no longer dealing with machine instructions,
however appealing that might look.

You are dealing with SYSTEMS.

You are dealing with structures and higher level logical constructs.
By simply changing your architecture, you may achieve orders of
magnitude more performance. And performance is not the only thing
that counts in the real world, although it probably counts
more than other things.

Except stability.

And the other things are functionality, flexibility, configurability,
the power and clarity of your user interface (which turns out to be
one of the most important criteria), and plenty of other things.

Yes, if you think of your code as an assembly-level set of instructions,
and no matter which instruction you are looking at, you are trying
to squeeze every single machine cycle out of it, then you are not
"seeing the forest for the trees".

What I see using my program is not how efficient some subcomponent
is, but how many hours it takes me to process vast amounts
of information. I couldn't care less whether GC exists, except that
it helps me more than it creates problems for me, and I don't even
need to prove it to anybody. It is self-evident to me. After a while,
you stop questioning certain things if you have seen a large enough
history.

What is the point of forever flipping those bits?

Let language designers think about these things; I assume they
have done as good a job at it as the state of the art allows,
especially if they are getting paid tons of money to do it.

I trust them. I may not agree with some things, and my primary
concerns nowadays are not whether GC is more or less efficient,
but how fast I can model my app, how easy it is to do that,
how supportive my IDE is, how powerful my debugger is, how easy it
is for me to move my app to a different platform, and things
like this.

You can nitpick all you want, but I doubt you will be able to
prove anything of substance by doing that kind of thing.
To me, it is just a royal waste of time. Totally unproductive.
Java is a compiled language in the sense that any interpreted
language is run-time compiled...

Not true.
but that does not make those languages
compiled...

Java IS compiled. Period.

Would you argue with the concept of a P-Machine on the basis that
it is "interpretive", just because it uses a higher level
abstraction sitting on top of the O/S?

Java does not evaluate strings at run time, and it is a strongly
typed language, and that IS the central difference between
what I call dynamically scoped languages and statically
scoped languages.

It does not matter to me whether Java runs bytecodes or P-Machine
code. It is just another layer on top of the O/S, and that layer,
by the sheer fact that it is a higher level abstraction,
can optimize things under the hood MUCH better than you can
optimize things in languages with lower levels of abstraction.

For some reason, people have gotten away from coding in
assembly languages for most applications.
This is exactly the same thing.

What is the difference between C++ and C?

Well, the ONLY difference I know of is a higher level of abstraction.
And that is ALL there is to it.
The exact same thing as Java using the JVM to provide it with the
underlying mechanisms, efficient enough and flexible enough
for you to be able to express yourself on a more abstract level.

And that is ALL there is to it.

And why do you think weakly typed languages are gaining ground?

Well, because you don't have to worry about all those nasty
things such as argument types. They can be anything at run time.
And nowadays, the power of the underlying hardware is such
that it no longer makes such a drastic difference whether you
run a strongly typed, compiled language or interpret on
the fly, even though performance is orders of magnitude worse.

You need to put things in perspective.

What does it matter to me if a web page renders in 100 ms
versus 1 ms?

NONE.

My brain cannot work fast enough to read anything in those 99 ms
anyway.

I think the whole argument is simply a waste of time, or rather,
a waste of creative potential that could be used for something
WAY more constructive and WAY more "revolutionary".
For example, PHP with popen calling a C executable is, in my
experience, about three times faster as a server than
jetty/solr...

Well, if you use even PHP as some kind of argument, then you
are obviously not seeing the forest. Because PHP is one of the
worst dogs overall. Because it is a weakly typed language.

Even Python beats it hands down.

 
Branimir Maksimovic

tanix said:
Well, if you use even PHP as some kind of argument, then you
are obviously not seeing the forest. Because PHP is one of the
worst dogs overall. Because it is a weakly typed language.

Even Python beats it hands down.

No. The catch is that it is not in PHP; rather, a C executable
for every request initializes about 256 MB of RAM of data every time
and uses simple printfs to return the result through a pipe to PHP,
and it performs three times faster than Java jetty/solr, which
holds everything initialized in memory...
as a search engine...
 
tanix

Virtual machines are always slower than real machines...

This is a blanket statement by someone who is obsessed with
machine cycles while his grand piece of work is not even worth
mentioning, I'd say.
No matter what, one can optimize, as far as it goes, only
simple cases. Anything non-trivial
Correct.

is very difficult to optimize, like Java code...

I don't have to optimize Java code in any special way.
It is the same way no matter WHAT language it is.

One more time: to me, a program is a SYSTEM.

And the MOST critical parameter in the system is:
STABILITY.

Why? Because if your program is not stable, you are dead.

Yes, everyone wants performance. No question about it.
You don't want to sit there for 30 seconds waiting for your
frozen GUI to get unfrozen so you can enter some parameters
or type something somewhere.

And the reason it is frozen for that long a time is not a
matter of machine instructions or the "efficiency" of your
code. It is a matter of TOTALLY wrong design.

A program is not just a hack and tons of "efficient" spaghetti
code. It is a HIGHLY complex system with billions of interactions,
and MANY subsystems cooperating under the hood.

It is not some fancy hex calculator where you flip some bits.

Unless programs are viewed as a system, you will be trying
to pick a piece of crap from some output hole and look at it
with a magnifying glass, trying to draw conclusions about
the human being.


 
tanix

James said:
tanix wrote:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.

Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
Hm, have you ever had cycles? In my long experience I never
had cyclic references...
Reference counting can't be slow, because it does not have
to lock all threads; neither is incrementing and decrementing a
variable slow in comparison with a complete memory scan every time
the GC has to chase pointers through the cyclic graph inside the
application. No matter what you do, it is impossible
to make it faster than refcounting...

Looks like it is a matter of life and death to you.
But I doubt you can win this argument.
[...]
And somebody tried to convince me that conservative GC is
faster than shared_ptr/auto_ptr (what a ... ;)

And you refused to even look at actual measurements. I'm aware
of a couple of programs where the Boehm collector significantly
outperforms boost::shared_ptr. (Of course, I'm also aware of
cases where it doesn't. There is no global perfect solution.)

Hm, how can a complex algorithm possibly outperform simple
reference counting? Try to measure deallocation speed.
Allocation in GC is the same as manual allocation. But
deallocation is where it performs a complex algorithm.

And so it goes "till your nose goes blue"...
:--}

 
Branimir Maksimovic

tanix said:
This is a blanket statement by someone who is obsessed with
machine cycles while his grand piece of work is not even worth
mentioning, I'd say.

Well, I've started and stopped the application which controlled the
Shanghai airport back in 1993. Two Stratus engineers besides me;
I worked on 4 terminals in Emacs, with the C language and the VOS
operating system...
I was hired by Stratus then as an expert for the C programming language...

Greets
 
tanix

James said:
Stefan Ram wrote:
[...]
Allocation is not where GC fails; rather, deallocation...

It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)

I DO like that one. What a master stroke!

:--}
Incremental garbage collection is a form of collection where
you don't free everything immediately, but this does not
change the fact that whenever you have to see whether something is
referenced or not, you have to stop the program and examine pointers,

Yes. This IS becoming a life-and-death issue, it seems.

:--}
which of course kills the performance of threads...

Greets

 
Branimir Maksimovic

tanix said:
James said:
Stefan Ram wrote:
[...]
Allocation is not where GC fails; rather, deallocation...
It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)

I DO like that one. What a master stroke!

:--}
Incremental garbage collection is a form of collection where
you don't free everything immediately, but this does not
change the fact that whenever you have to see whether something is
referenced or not, you have to stop the program and examine pointers,

Yes. This IS becoming a life-and-death issue, it seems.

:--}

OK, I give. Merry Christmas! ;)
Greets
 
tanix

Well, I've started and stopped the application which controlled the
Shanghai airport back in 1993.

Don't know what you mean by that, but yes, sounds impressive.
Two Stratus engineers besides me; I worked
on 4 terminals in Emacs, with the C language and the VOS operating
system...

Wooo! That's definitely impressive.

Good. Then fix C++ so I can go back to it.
After all, it is one of the first "higher level" languages
I had to deal with. It kinda haunts you...
I was hired by Stratus then as an expert for the C programming language...

Good. I don't remember what Stratus stands for, but I do recall
hearing of it somewhere on a more or less big scale. What did they do?

 
tanix

No. The catch is that it is not in PHP; rather, a C executable
for every request initializes about 256 MB of RAM of data every time
and uses simple printfs to return the result through a pipe to PHP,
and it performs three times faster than Java jetty/solr, which
holds everything initialized in memory...
as a search engine...

Well, I'd be curious to see more specifics on this.

 
