Incrementally digest big XML data files using Commons Digester


leebarman

In a simple data import mechanism for our application, I set up Digester
to load data defined in an XML file into Java beans, which are then saved
to the persistence tier. This works fine for a reasonable number of
data objects.
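
Roughly, my current setup looks like the sketch below (simplified; Item,
the items/item element names, and data.xml are placeholders rather than
our real schema):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.digester.Digester;

public class BatchImport {

    // Placeholder bean standing in for our real data objects.
    public static class Item {
        private String name;
        public void setName(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        Digester digester = new Digester();
        digester.push(new ArrayList<Item>());          // root object collects every bean

        digester.addObjectCreate("items/item", Item.class);
        digester.addBeanPropertySetter("items/item/name", "name");
        digester.addSetNext("items/item", "add");      // calls ArrayList.add(bean)

        List<Item> items = (List<Item>) digester.parse(new File("data.xml"));
        for (Item item : items) {
            // save each bean to the persistence tier
        }
    }
}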

Our data files can be really big (over a million objects), so I can't
load all the bean objects into memory at once and then process each bean.


Is there a way for Digester to process the data file incrementally, so
that it hands back one bean at a time and then moves on to the
next?
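
What I'm imagining is something along these lines: push a handler
object as the root of Digester's stack, so each bean is handed off and
released as soon as its closing tag is parsed, instead of being
collected into a list (again just a sketch; ItemHandler and its process
method are names I made up):

import java.io.File;

import org.apache.commons.digester.Digester;

public class StreamingImport {

    // Same placeholder bean as above.
    public static class Item {
        private String name;
        public void setName(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Receives each bean the moment its </item> end tag is reached.
    public static class ItemHandler {
        public void process(Item item) {
            // persist the bean here; once this returns, Digester pops
            // it off the stack and nothing else references it
        }
    }

    public static void main(String[] args) throws Exception {
        Digester digester = new Digester();
        digester.push(new ItemHandler());              // handler is the stack root

        digester.addObjectCreate("items/item", Item.class);
        digester.addBeanPropertySetter("items/item/name", "name");
        digester.addSetNext("items/item", "process");  // calls handler.process(bean)

        digester.parse(new File("data.xml"));          // SAX-driven, so the file streams
    }
}

Since Digester sits on top of SAX, the parse itself is already
incremental; as far as I can tell, the memory problem only comes from
rules that accumulate the beans. Is handing each one off like this the
right pattern, or is there a better supported way?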

Thanks,

Lee
 
