optimizing Java compiler

Timo Nentwig

Hi!

As the -O option of javac does nothing and Sun obviously thinks it can afford
not to optimize, which compiler is known to optimize bytecode well? Jikes?
Eclipse? What about AspectJ?

Regards
Timo
 
Mark Thornton

Timo said:
Hi!

As the -O option of javac does nothing and Sun obviously thinks it can afford
not to optimize, which compiler is known to optimize bytecode well? Jikes?
Eclipse? What about AspectJ?

What benefit are you expecting to get from optimised bytecode? It could
even be counterproductive for JVMs which include a JIT (which is most of
them these days). Class file size would be smaller, but once the classes are
put into a jar file that benefit might also be reduced or even eliminated.

Mark Thornton
 
Neal Gafter

Timo said:
As the -O option of javac does nothing and Sun obviously thinks it can afford
not to optimize, which compiler is known to optimize bytecode well? Jikes?
Eclipse? What about AspectJ?

HotSpot.
 
Chris Uppal

Timo said:

I don't think much of that article -- anyone reading it would come away with
the impression that there was only one JVM, that its implementation was set in
stone, and that it did no optimisation. None of these things are even
approximately true.

There may be a good case to be made for an optimising javac for generating
bytecodes for PDAs etc (at least for as long as their space limitations are
significant). But the case for optimisation in javac for normal apps is far
less clear. The arguments against it are that

(a) it makes the "hotspot" (and similar) optimisations in the JVM harder
(to be honest, I don't know whether this is *really* a problem -- it's
something you often hear asserted, and it sounds plausible, but I've never seen
an authoritative reference).

(b) it means that the compiler is harder to write, takes longer to add
features, is much harder to make *correct*, and is only doing optimisations
that the runtime may be, or is, doing anyway.

E.g. why would Sun put effort into optimisations in javac that duplicate what
their HotSpot technology is doing? There may be people compiling code with
Sun's javac that then run it on JVMs without optimisation. Optimisations in
javac would be beneficial to *them*, but why should Sun (or indeed anyone else)
care much about them?

If you can quantify the benefit you expect to get from an optimising javac,
then you'll be in a position to decide how much time/money/effort you are
willing to put into finding one. So it comes back to the question that Mark
asked:
What benefit are you expecting to get from optimised bytecode?

This is a serious suggestion: if this is important to you, then try
hand-optimising the generated bytecodes for a few realistic sample cases, then
run the optimised and non-optimised versions on a modern JVM (Sun's 'server'
JVM would be my choice) and see how much difference it really makes. My guess
is it will make very little difference, perhaps even have a negative effect,
but that *is* only a guess, and I may well be wrong. In any case I'd be
interested to hear the result.
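
For what it's worth, a minimal timing harness for such a comparison might look
something like the sketch below. (The workload() method is just a stand-in for
whatever code you are actually measuring; run the same class once built from
the normal bytecodes and once from the hand-optimised ones, ideally under the
'server' JVM.)

class TimingSketch
{
    // Stand-in for the code being compared; replace with something realistic.
    static int workload()
    {
        int sum = 0;
        for (int i = 0; i < 1000000; i++)
            sum += i % 7;
        return sum;
    }

    public static void main(String[] args)
    {
        // Warm-up pass so the JIT has a chance to compile the hot methods first.
        for (int i = 0; i < 100; i++)
            workload();

        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++)
            workload();
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("1000 iterations took " + elapsed + " ms");
    }
}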

-- chris
 
Chris Uppal

I said:
If you can quantify the benefit you expect to get from an optimising
javac, then you'll be in a position to decide how much time/money/effort
you are willing to put into finding one.

I forgot to mention that bytecode optimisers do exist. The only one that I
personally know of (though I've never tried it) is JavaGO by Konstantin
Knizhnik (http://www.garret.ru/~knizhnik/javago/ReadMe.htm), which does
whole-program analysis and optimisation. There are others. You may also be
able to find optimisers that work at a more local level, but I can't think of
any offhand -- you could always write one if you think it'd be worthwhile ;-)

-- chris
 
Dale King

Chris said:
[snip]
E.g. why would Sun put effort into optimisations in javac that duplicate what
their HotSpot technology is doing? There may be people compiling code with
Sun's javac that then run it on JVMs without optimisation. Optimisations in
javac would be beneficial to *them*, but why should Sun (or indeed anyone else)
care much about them?
[snip]

I would also add that deferring the optimization to runtime allows the
possibility that the optimization can be specifically targeted to the runtime
environment. You might want to optimize differently for one architecture than
for another.

I have no idea whether this actually happens, but delaying the optimization
until runtime certainly means that it can.
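
As a rough illustration of the sort of thing only the runtime can know (this is
just a sketch, not a claim about what any particular JVM actually does): a
static compiler has to emit an ordinary interface call in code like the
fragment below, because it cannot know which implementations will ever be
loaded, whereas a JIT that observes only one implementation at runtime is free
to devirtualize and inline the call.

interface Codec {
    int encode(int value);
}

class ShiftCodec implements Codec {
    public int encode(int value) { return value << 1; }
}

class Caller {
    // javac can only emit a generic interface call here; a JIT that sees
    // nothing but ShiftCodec at runtime can turn it into a direct,
    // inlinable call.
    static int encodeAll(Codec codec, int[] values) {
        int total = 0;
        for (int i = 0; i < values.length; i++)
            total += codec.encode(values[i]);
        return total;
    }
}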
 
Larry Barowski

Chris Uppal said:
[snip]
(b) it means that the compiler is harder to write, takes longer to add
features, is much harder to make *correct*, and is only doing optimisations
that the runtime may be, or is, doing anyway.

The newer Sun Java compilers (starting in 1.4.1 or 1.4.2 - I can't remember
which) do some optimizations whether -O is supplied or not. This can confuse
debugging. It used to be that you could step from a return in nested try/catch
blocks to the inner "finally" block, back to the return, to the outer "finally"
block, and back to the return again. Now this will only happen if there is a
significant amount of code in the "finally"s; otherwise they are inlined and
you will step from the return, to the first "finally", directly to the second
"finally", and directly out from there, which is less clear.

Some other compile-time optimizations were added that can make debugging even
harder to follow, but I can't remember what they are at the moment. The
problem is that there is no way to turn them off.
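
To make the nested-finally case above concrete, the shape of code I am
describing is roughly this sketch (the println calls just mark the two
"finally" blocks):

static void example() {
    try {
        try {
            return;    // stepping over this return is where the behavior changed
        }
        finally {
            System.out.println("inner finally");
        }
    }
    finally {
        System.out.println("outer finally");
    }
}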


-Larry Barowski
 
Neal Gafter

Larry said:
The newer Sun Java compilers (starting in 1.4.1 or 1.4.2 - I can't remember
which) do some optimizations whether -O is supplied or not. This can confuse
debugging. It used to be that you could step from a return in nested try/catch
blocks to the inner "finally" block, back to the return, to the outer "finally"
block, and back to the return again. Now this will only happen if there is a
significant amount of code in the "finally"s; otherwise they are inlined and
you will step from the return, to the first "finally", directly to the second
"finally", and directly out from there, which is less clear.

This isn't an optimization; it is simply an improved code generation strategy.
Why would you expect the debugger to go back to the return statement? This is
the first I've heard of someone being unhappy with the new code sequence. I've
taken note of your concern, and I'll arrange for a future javac to generate a
line number entry at the appropriate point in the code to emulate the old
debugger behavior with the new code sequence.

Some other compile-time optimizations were added that can make
debugging even harder to follow, but I can't remember what they are at
the moment. The problem is that there is no way to turn them off.

I'd like to know about them. I don't think we did any "optimizations" in the
sense of transforming the code that the user wrote. But the line number tables
might not be set up properly to imitate the old behavior. Tell me where you see
a problem and I'll fix it.
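
(If you want to check exactly what a given compiler emitted, the bytecode and
the LineNumberTable can be dumped with the standard javap tool, e.g.

javap -c -l SomeClass

where SomeClass is whichever class you compiled; comparing the output from the
old and new compilers should show which line number entries changed.)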
 
Chris Uppal

Neal said:
This isn't an optimization, it is simply an improved code generation
strategy. Why would you expect the debugger to go back to the return
statement? This is the first I've heard of someone being unhappy with
the new code sequence.

I'm hesitant to mention this because I wouldn't want it to seem that I'm
suggesting this is in any way your fault or your problem, but the new code
sequences have broken a few previously useful tools.

For instance neither Jlint 2.3 nor JAD 1.5.8e2 will cope with the result of
compiling either of the two appended classes with a 1.4.2 compiler. Perhaps
the same code sequences will cause problems for other tools too.

(I'd have emailed bug reports to the authors but I don't have an address for
JAD, and I'm not sure that Jlint is actively maintained.)

-- chris

========================

import java.io.*;

class BreakJlintAndJAD
{
    public boolean save(String filename)
    {
        ObjectOutputStream out = null;
        try
        {
            out = new ObjectOutputStream(new FileOutputStream(filename));
            out.writeObject(new Object());
        }
        catch (Exception e)
        {
            return false;
        }
        finally
        {
            try
            {
                if (out != null)
                    out.close();
            }
            catch (IOException e)
            {
                return false;
            }
        }

        return true;
    }
}

class BreakJlintAndJADAnotherWay
{
    Object read() throws IOException, ClassNotFoundException
    {
        ObjectInputStream in = null;
        try
        {
            in = new ObjectInputStream(new FileInputStream("tmp"));
            return in.readObject();
        }
        finally
        {
            if (in != null)
                in.close();
        }
    }
}
 
Tim Tyler

Timo Nentwig said:
which compiler is known to optimize bytecode well?

You can optimise bytecode for *size* - resulting in smaller and
faster downloads.

These tools may help with that:

<A href="http://www.geocities.com/marcoschmidt.geo/java-class-file-optimizers.html">Java class file optimizers</a><BR>
<a href="http://dmoz.org/Computers/Programming/Languages/Java/Development_Tools/Obfuscators/">DMOZ</a><BR>
<A href="http://www.nq4.de/">JoGa</a><BR>
<A href="http://proguard.sourceforge.net/">ProGuard</a><BR>
<a href="http://www.utdallas.edu/~gxz014000/jcmp/">JCMP</a><BR>
<A href="http://jode.sourceforge.net/">JODE</a><BR>
<A href="http://www.alphaworks.ibm.com/tech/jax/">JAX</a><BR>
<A href="http://www.cs.purdue.edu/homes/hosking/bloat/">BLOAT</a><BR>
<a href="http://jarg.sourceforge.net/">JARG</a><BR>
<a href="http://www.optimizeit.com/">OptimizeIt</a> - $s<BR>
<a href="http://www.condensity.com/">Condensity</a> - $s<BR>
<a href="http://www.e-t.com/jshrink.html">jshrink</a> - $s<BR>
<A href="http://importscrubber.sourceforge.net/">Import Scrubber</a><BR>
<a href="http://www.garret.ru/~knizhnik/java.html">JavaGO</a><BR>
 
Larry Barowski

Here is one that could be a bit confusing. When stepping, control goes from
the if-statement directly to the "finally" block without ever hitting the
return statement.

try {
    for (int i = 0; i < 3; i++) {
        if (i != 1)
            ;
        else
            return;
    }
}
finally {
    System.out.println("finally");
}


-Larry Barowski
 
