leebarman
In a simple data import mechanism for our application I set up Digester
to load data defined in an XML file into Java beans, which are then saved
to the persistence tier. This works fine for a reasonable number of
data objects.
Our data files can be really big (over a million objects), and I can't
load all the bean objects into memory at once and then process each bean.
Is there a way for Digester to process the data file incrementally, so
that it returns one bean at a time and then moves on to the
next?
Thanks,
Lee
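
[Editor's note] Digester is built on SAX, so the incremental pattern being asked about can at least be illustrated with the JDK's streaming SAX parser. The sketch below is not Digester's own API; the element name `item`, the bean class, and its `name` property are all hypothetical stand-ins for the poster's data. Each bean is handed to a callback (e.g. a save-to-the-persistence-tier method) as soon as its closing tag arrives and is then discarded, so memory use stays constant regardless of file size.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

/** Hypothetical bean; the field is illustrative, not from the original post. */
class ItemBean {
    final String name;
    ItemBean(String name) { this.name = name; }
}

/**
 * Streams <item> elements one at a time: each finished bean goes to the
 * sink callback and is then dropped, instead of being accumulated in a list.
 */
class StreamingItemHandler extends DefaultHandler {
    private final Consumer<ItemBean> sink;
    private ItemBean current;

    StreamingItemHandler(Consumer<ItemBean> sink) { this.sink = sink; }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        if ("item".equals(qName)) {
            current = new ItemBean(attrs.getValue("name"));
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("item".equals(qName)) {
            sink.accept(current); // persist immediately, then forget the bean
            current = null;
        }
    }
}

public class StreamingImport {
    /** Parses the stream, invoking the sink once per completed bean. */
    static void parse(InputStream in, Consumer<ItemBean> sink) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(in, new StreamingItemHandler(sink));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<items><item name=\"a\"/><item name=\"b\"/></items>";
        List<String> seen = new ArrayList<>();
        // In real use the sink would call the persistence tier; here it just records names.
        parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
              bean -> seen.add(bean.name));
        System.out.println(seen);
    }
}
```

A Digester-based equivalent would hang a rule on the per-record pattern whose action persists the bean and pops it from the stack rather than attaching it to a root object, which is the same idea: never keep more than one bean alive at a time.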