Jochen Riekhof
Hi...
I am finally approaching the 1.0 release of my freeware (LGPL) PTask library,
so if you are interested in implementing parallel algorithms I would like to
encourage you to give it a try. You can find it at
http://ptask.sourceforge.net as well as on my English website
http://www.jocware.com/en/jocptask/. There are also some fractal samples
(as Java Web Start apps) on my site.
The library also offers exception handling and Future support, including
cancellation.
I would very much like to hear your opinions, suggestions and criticism!
The basic idea is to always use only as many threads as there are CPUs
available on the current machine, in order to minimize context switching.
Algorithms are implemented by putting tasks (implementations of the
interface PTask) into nested serial and parallel queues. All PTasks
operate on the same data, which is passed in when the root queue is
actually executed. Normally you will use the PTaskQueue.process(data)
method, which waits until all computation has completed, but a
startProcessing(data) method that returns a Future is also available.
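
To make that flow concrete, here is a minimal, self-contained sketch. Only
the ideas taken from the description above are real: a PTask-style task
interface, a queue whose process(data) blocks, and a startProcessing(data)
that returns a Future. All class names, method signatures and the sequential
stand-in execution below are my own placeholders, so please look at the
samples on the project pages for the actual API.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Stand-ins that mimic the shape described above; names and signatures are
// invented so this snippet compiles on its own -- they are NOT PTask's API.
interface PTaskStandIn {
    void run(double[] sharedData);                  // every task sees the same data object
}

class QueueSketch {
    private final List<PTaskStandIn> tasks = new ArrayList<>();

    QueueSketch add(PTaskStandIn t) { tasks.add(t); return this; }

    // Analogous to PTaskQueue.process(data): runs everything and waits.
    // (Here the tasks run sequentially; PTask would fan parallel sub-queues
    // out over one thread per CPU.)
    void process(double[] data) {
        for (PTaskStandIn t : tasks) t.run(data);
    }

    // Analogous to startProcessing(data): returns immediately with a Future
    // that can be waited on or cancelled.
    Future<Void> startProcessing(double[] data) {
        return CompletableFuture.runAsync(() -> process(data));
    }
}

public class PTaskUsageSketch {
    public static void main(String[] args) throws Exception {
        double[] data = new double[8];               // the shared data passed to the root queue

        QueueSketch root = new QueueSketch()
            .add(d -> { for (int i = 0; i < d.length; i++) d[i] = i; })   // fill step
            .add(d -> { for (int i = 0; i < d.length; i++) d[i] *= 2; }); // post-processing step

        root.process(data);                          // blocking variant

        Future<Void> f = root.startProcessing(data); // non-blocking variant
        f.get();                                     // wait; f.cancel(true) would cancel instead
        System.out.println(Arrays.toString(data));
    }
}

The point of this shape is that tasks never hold their own data: everything
they touch comes in through the shared data object handed to the root queue.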
Behind the scenes, a bookkeeping scheme with quite small overhead is used.
My limited tests (see the sample code) showed computations about 30% faster
than naive multi-threaded implementations, and still a 15% improvement
compared to an ExecutorService approach.
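
For readers who have not written that baseline themselves, an ExecutorService
version of such a computation looks roughly like the sketch below: a fixed
pool sized to the CPU count (the same thread-count policy described above),
tasks splitting a shared array into blocks, and Futures whose get() rethrows
task exceptions and which can be cancelled. This is a generic illustration of
mine, not the benchmark code shipped with the samples.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorBaseline {
    public static void main(String[] args) throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);

        double[] data = new double[1 << 20];      // shared data, split into blocks
        int block = data.length / cpus;

        List<Callable<Void>> tasks = new ArrayList<>();
        for (int c = 0; c < cpus; c++) {
            final int from = c * block;
            final int to = (c == cpus - 1) ? data.length : from + block;
            tasks.add(() -> {
                for (int i = from; i < to; i++) data[i] = Math.sqrt(i);
                return null;
            });
        }

        // invokeAll blocks until all tasks finish; each Future.get() rethrows
        // task exceptions, and Future.cancel(true) is available if needed.
        List<Future<Void>> futures = pool.invokeAll(tasks);
        for (Future<Void> f : futures) f.get();

        pool.shutdown();
        System.out.println("done, last value = " + data[data.length - 1]);
    }
}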
Have a lot of fun!
Jochen Riekhof