Rob Z
Hi all,
I am working with MLDBM to access a static "database file". (Written
once, never altered, only read.) The file is ~75MB and is a 4-level
HoH, i.e. a hash of hashes of hashes of hashes. It is running on Linux
on a 2-CPU Xserve with Perl 5.8.
The trouble is that the tie() call takes ~10 seconds when first
connecting to the database file. I would like to shorten this as much
as possible. I don't need the file read into memory at the beginning;
I can read each entry as it is needed later. In fact, I would like to
keep as much data out of memory as I can until it is really needed.
As far as I can tell, the whole file isn't being read into memory
(the process uses ~50MB after the tie()), but a good portion of it
is. My concern is that this file will grow by about 8x over the next
few months, to 500+MB.
Anyway, I am looking for alternatives or options for speeding up that
initial tie() and keeping the up-front memory commitment as small as
possible. Any ideas?
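For reference, here is the kind of setup I mean. This is a minimal sketch, assuming a DB_File back end with Storable serialization; the file name and key are made up for illustration, and the actual code may differ:

```perl
use strict;
use warnings;
use Fcntl qw(O_RDONLY);
use MLDBM qw(DB_File Storable);   # underlying DBM, then serializer

# Tie read-only. DB_File pages records in on demand, so tying
# should not require slurping the whole file into memory.
tie my %db, 'MLDBM', 'static.db', O_RDONLY, 0644
    or die "Cannot tie static.db: $!";

# Fetching one top-level key deserializes only that key's record.
my $entry = $db{some_key};

untie %db;
```

One caveat I am aware of: MLDBM stores everything below each top-level key as a single serialized record, so fetching $db{some_key} deserializes that entire subtree. If the second-level hashes are large, that alone could account for a lot of the memory use.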
Thanks,
Rob