Automatic way to test performance optimizations


Philipp

Hello,
In my code I must sometimes implement performance optimizations. By
definition, the functional behavior does not change, so it can still be
verified through the public interface (regression tests). But I would
also like to test that the optimization is actually used in the right
places. Even using reflection to gain access to private fields, I find
this difficult without introducing code (e.g. flags) whose only use is
testing.
As an example, consider the code below, from Arrays.mergeSort in the
Sun JDK. How would you go about testing that the insertion sort is used
in the correct cases?

Is there any tool for JUnit (or any other framework) to test
performance automatically? Can we test that execution passes through
certain points, or that certain methods have been called? If not, is
there a nice way to check the log for a sign that the optimization was
used?

Best regards
Phil
PS: Does Sun remove all log statements from its JDK source before
distribution, or do they develop without logs (and should they thus be
considered half-gods)?



--- Code from Arrays.java in Sun JDK 1.6 ---

private static void mergeSort(Object[] src,
                              Object[] dest,
                              int low,
                              int high,
                              int off) {
    int length = high - low;

    // Insertion sort on smallest arrays
    if (length < INSERTIONSORT_THRESHOLD) {
        for (int i=low; i<high; i++)
            for (int j=i; j>low &&
                     ((Comparable) dest[j-1]).compareTo(dest[j])>0; j--)
                swap(dest, j, j-1);
        return;
    }

    // Recursively sort halves of dest into src
    int destLow  = low;
    int destHigh = high;
    low  += off;
    high += off;
    int mid = (low + high) >>> 1;
    mergeSort(dest, src, low, mid, -off);
    mergeSort(dest, src, mid, high, -off);

    // If list is already sorted, just copy from src to dest. This is an
    // optimization that results in faster sorts for nearly ordered lists.
    if (((Comparable)src[mid-1]).compareTo(src[mid]) <= 0) {
        System.arraycopy(src, low, dest, destLow, length);
        return;
    }

    // Merge sorted halves (now in src) into dest
    for(int i = destLow, p = low, q = mid; i < destHigh; i++) {
        if (q >= high || p < mid && ((Comparable)src[p]).compareTo(src[q]) <= 0)
            dest[i] = src[p++];
        else
            dest[i] = src[q++];
    }
}
 

Eric Sosman

Philipp said:
Hello,
In my code I must sometimes implement performance optimizations. By
definition, the functional behavior does not change, so it can still be
verified through the public interface (regression tests). But I would
also like to test that the optimization is actually used in the right
places. Even using reflection to gain access to private fields, I find
this difficult without introducing code (e.g. flags) whose only use is
testing.
As an example, consider the code below, from Arrays.mergeSort in the
Sun JDK. How would you go about testing that the insertion sort is used
in the correct cases?

It depends on what I decide "correct" means. If I'm
satisfied that the INSERTIONSORT_THRESHOLD criterion is an
adequate definition of "correctness," I think I'd check it
by code inspection (perhaps even by formal proof methods)
rather than by testing. If the question is really more like
"Has INSERTIONSORT_THRESHOLD's value been chosen correctly?"
the job would be much more complicated, and probably wouldn't
lead to a hard-and-fast answer but to something with a lot
of "In typical cases" and "Most of the time" and so on.
Is there any tool for JUnit (or any other framework) to test
performance automatically? Can we test that execution passes through
certain points, or that certain methods have been called? If not, is
there a nice way to check the log for a sign that the optimization was
used?

Take a look at the java.lang.instrument package (I haven't
used it myself, but the name looks promising).
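
For what it's worth, the bare bones of such an agent look roughly like
this (a sketch from the javadoc only; actually rewriting method bodies
to count calls needs a bytecode library such as ASM on top of it, and
core classes like java.util.Arrays may already be loaded before the
agent gets a look):

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

/**
 * Skeleton of a java.lang.instrument agent. Package it in a jar whose
 * manifest contains "Premain-Class: WatchAgent" and launch the program
 * with: java -javaagent:watch.jar YourMainClass
 *
 * This transformer only reports classes as they load; injecting real
 * counters into method bodies would require rewriting the bytes.
 */
public class WatchAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // "com/example/" is just a hypothetical package of interest.
                if (className != null && className.startsWith("com/example/"))
                    System.err.println("loading: " + className);
                return null;   // null = leave the class bytes unchanged
            }
        });
    }
}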

With unit testing (JUnit or whatever) I think you're out
of luck from the outset. The underlying philosophy of unit
testing is to check that a unit behaves as advertised, not to
study the means by which the behavior is achieved. The unit
under test is a "black box," whose external manifestations are
the matter of interest and whose inner workings are a mystery.

Other testing schemes exist, but I don't think you'll find
them in a unit-testing framework.
PS: Does Sun remove all log statements from its JDK source before
distribution, or do they develop without logs (and should they thus be
considered half-gods)?

I don't know. Ask the oracle.
 

charlesbos73

Indeed. Unit testing something whose performance
is meant to be improved by a cache is miserable.

The "black box" can perform perfectly (give
correct results) with a completely broken
cache that misses 100% of the time.

Of course, we need to BREACH black-box testing
and look inside the object to see the hit percentage!

I agree...

As a tradeoff, I've used the Decorator design pattern
several times, with great benefit, to test such cases.

It's far from perfect, but at least I'm not
completely breaching my OOD to deal with
problems that are purely related to the
computational domain.


interface Doable {
    xxx doIt();   // xxx: whatever result type the domain calls for
}

Then:

interface FastlyDoable extends Doable {
    // (hits, misses); Tuple being a small pair class of your own
    Tuple<Integer,Integer> getHitAndMiss();
}

The nice thing is that due to the recursive
nature of the Decorator pattern you can keep
referring to the Doable interface in your
application's code.

You don't pollute your domain with computational
concerns that much.

Sure, it's not perfect: there are computational concerns
at play that simply cannot be ignored.

But I'm not completely breaking my original
Doable interface.
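
Fleshed out, the wrapper might look something like this. (A sketch
only: I've narrowed doIt to take a key, so there's something to cache
on, and supplied a trivial Tuple since there's none in the JDK.)

import java.util.HashMap;
import java.util.Map;

// Minimal pair class, standing in for whatever Tuple you use.
class Tuple<A, B> {
    final A first;
    final B second;
    Tuple(A first, B second) { this.first = first; this.second = second; }
}

// Narrowed to an int-keyed, String-valued operation purely for the sketch.
interface Doable {
    String doIt(int key);
}

interface FastlyDoable extends Doable {
    Tuple<Integer, Integer> getHitAndMiss();   // (hits, misses)
}

// The decorator: wraps any Doable, adds a cache, counts hits and misses.
// Production code sees it only as a Doable; tests downcast to FastlyDoable.
class CachingDoable implements FastlyDoable {
    private final Doable wrapped;   // the "wrapped subject"
    private final Map<Integer, String> cache = new HashMap<Integer, String>();
    private int hits, misses;

    CachingDoable(Doable wrapped) { this.wrapped = wrapped; }

    public String doIt(int key) {
        String cached = cache.get(key);
        if (cached != null) {
            hits++;
            return cached;
        }
        misses++;
        String result = wrapped.doIt(key);
        cache.put(key, result);
        return result;
    }

    public Tuple<Integer, Integer> getHitAndMiss() {
        return new Tuple<Integer, Integer>(hits, misses);
    }
}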

I guess that just like you can have a src/ and test/ hierarchy
duplicating your package hierarchy, you can have
a computational/ hierarchy mimicking your src/ hierarchy,
making it clear that classes there are unrelated to
the customer's problem space.

I then instantiate objects implementing the FastlyDoable
interface in my code (production code included), but the rest
of the code never knows that the objects are actually "FastlyDoable".

All the production code ever sees are Doable objects, which
is the interface both the wrapper and the "wrapped subject"
(as it is called in that terminology) exhibit.

In the unit test, I refer to these objects as FastlyDoable
when I want to verify that I get at least a certain number
of hits/misses in this or that case, etc.

It's not perfect, but for us it was a good tradeoff from
an OOD point of view, and it sure helped give us some
confidence that our caches are working as they should.
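
Such a test might read roughly like this (JUnit 4 syntax, reusing the
CachingDoable sketch above):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class CachingDoableTest {

    // Stands in for the real, expensive computation.
    private static final Doable SLOW = new Doable() {
        public String doIt(int key) { return "value-" + key; }
    };

    @Test
    public void repeatedKeysMostlyHitTheCache() {
        FastlyDoable fast = new CachingDoable(SLOW);

        // 100 calls spread over only 10 distinct keys:
        // the first call per key misses, the other 90 should hit.
        for (int i = 0; i < 100; i++) {
            fast.doIt(i % 10);
        }

        Tuple<Integer, Integer> hm = fast.getHitAndMiss();
        assertEquals(90, hm.first.intValue());
        assertEquals(10, hm.second.intValue());

        // Or, as a ratio, when exact counts are too brittle to pin down:
        double ratio = hm.first / (double) (hm.first + hm.second);
        assertTrue("expected at least 80% hits, got " + ratio, ratio >= 0.8);
    }
}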



P.S.: As usual, I won't entertain, nor consider interesting,
remarks made by the "JLS-nazi bot" (it shall recognize itself);
they shall be sent to /dev/null.
 

charlesbos73

Hi,

I replied to BugBear elsewhere in this thread...

Regarding the title of this thread, I forgot
to mention that with the Decorator approach
one can write quite a lot of unit tests, covering
plenty of corner cases and usual cases, and make
sure that at least x% cache hits are
obtained, etc. I've used this technique both
with dumb pre-filled caches (for the cases I knew
were going to be the most common) and with regular
caches/memoized caches/whatnot-caches ;)

This helped me find bugs and helped me quickly
catch regression bugs.

It can be both very helpful and have
very little impact on the OO design of the app.

It's not perfect but it works.
 

Tom Anderson

Philipp said:
In my code I must sometimes implement performance optimizations. By
definition, the functional behavior does not change, so it can still be
verified through the public interface (regression tests). But I would
also like to test that the optimization is actually used in the right
places. Even using reflection to gain access to private fields, I find
this difficult without introducing code (e.g. flags) whose only use is
testing.
As an example, consider the code below, from Arrays.mergeSort in the
Sun JDK. How would you go about testing that the insertion sort is used
in the correct cases?

In this specific case, i'd write a class which implemented Comparable and
counted the number of times compareTo was called. Something like:

public class Counter {
    private int count;

    public void increment() {
        ++count;
    }

    public int getCount() {
        return count;
    }
}

public class CountingInteger implements Comparable<CountingInteger> {
    private Integer i;
    private Counter ctr;

    public CountingInteger(int i, Counter ctr) {
        this.i = i;
        this.ctr = ctr;
    }

    public int compareTo(CountingInteger that) {
        ctr.increment();
        return this.i.compareTo(that.i);
    }
}

Then i'd make an array of CountingInteger objects sharing a common
Counter, do the sort, and assert that the count is what i expected. I'd
have to work out what that would be ahead of time somehow, but that's not
too hard.
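
Putting it together, the harness might go like this. The expected count
assumes the 1.6 code quoted above, where a sorted array shorter than
INSERTIONSORT_THRESHOLD (7) takes the insertion-sort branch and makes
exactly n - 1 comparisons; on a different JDK you'd have to re-derive
the number:

import java.util.Arrays;

public class SortCountHarness {
    public static void main(String[] args) {
        Counter ctr = new Counter();
        CountingInteger[] a = new CountingInteger[6];
        for (int i = 0; i < a.length; i++)
            a[i] = new CountingInteger(i, ctr);   // already in order

        Arrays.sort(a);

        // Insertion sort over an already-sorted run of length n makes
        // exactly n - 1 comparisons: one failed inner-loop test per i.
        int expected = a.length - 1;
        if (ctr.getCount() != expected)
            throw new AssertionError("expected " + expected
                    + " comparisons, got " + ctr.getCount());
        System.out.println("OK: " + ctr.getCount() + " comparisons");
    }
}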

If you were sorting ints, or Strings, rather than objects, though, you'd
be stuffed. I imagine a lot of the cases you have in real life will be
ones that aren't amenable to the above mocking approach.

tom
 
