G.W. Lucas
I have an application that performs some specialized image processing
which is simple, but not supported by JAI or the other Java imaging
APIs. I pull data from an existing image, process it, and store it
back in a new image.
For my application, I am using the Java BufferedImage class for images
that are about 2 megapixels in size. The processing takes about
250 milliseconds, which isn't bad, though my application will be
processing a LOT of images and is user-interactive, so I'd like to
trim that if I can.
Anyway, I added some more instrumentation to my time measurements and
realized that of that 250 milliseconds, 200 or so was happening in the
BufferedImage.getRGB() method that I was using to extract the raw data
from the image:
long time0 = System.currentTimeMillis();
int w = image.getWidth();
int h = image.getHeight();
int n = w * h;
int[] rgb = image.getRGB(0, 0, w, h, new int[n], 0, w);
long time1 = System.currentTimeMillis();
long accessTime = time1 - time0;  // elapsed time in milliseconds
The fact that the access took so much longer than my own processing
came as a surprise... Ordinarily, when I see an unexpected result like
this, it's usually an indication that I'm doing the wrong thing or
using the wrong tool.
So, I was wondering if I might be using the wrong approach in pulling
out the raw data. Perhaps BufferedImage isn't even the right class to
use? I've read the API document on the many Java image classes and
find the nuances of "which one to use when" to be rather non-obvious.
Could anyone point me in the direction of the best way to do this? Is
there a web page that provides information about the design of the
image classes that might clarify this issue?
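For reference, the one alternative I've come across so far is reading
the image's backing data buffer directly instead of calling getRGB(),
which avoids the per-pixel color-model conversion that getRGB()
performs. I'm not sure it's the right approach, so here is just a
sketch, assuming the image type is TYPE_INT_RGB or TYPE_INT_ARGB
(other types would need the getRGB() fallback):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class RasterAccess {

    // Returns packed int pixels. For INT_RGB / INT_ARGB images this reads
    // the backing DataBufferInt directly: no copy and no per-pixel color
    // conversion. Caveat: touching the raw buffer may cause Java2D to stop
    // hardware-accelerating this image.
    static int[] pixels(BufferedImage image) {
        int type = image.getType();
        if (type == BufferedImage.TYPE_INT_RGB
                || type == BufferedImage.TYPE_INT_ARGB) {
            // The returned array is the live backing store of the image.
            return ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
        }
        // Fallback for other image types: the slower but general getRGB() path.
        int w = image.getWidth();
        int h = image.getHeight();
        return image.getRGB(0, 0, w, h, new int[w * h], 0, w);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        img.setRGB(1, 1, 0x00FF00);                    // one green pixel
        int[] data = pixels(img);
        System.out.println(data.length);               // 16
        System.out.println(Integer.toHexString(data[1 * 4 + 1])); // ff00
    }
}
```

(RasterAccess and pixels() are just names I made up for the sketch.)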
Thanks for your help.
Gary