gdtrob
I am slurping a series of large (6 MB) .csv files directly into an array,
one at a time (then querying). The first time I slurp a file it is
incredibly quick. The second time I do it, the slurping is very slow,
despite the fact that I close the filehandle and undef the array.
Here is the relevant code:
open (TARGETFILE, "CanRPT"."$chromosome".".csv") || die "can't open targetfile: $!";
print "opened";
@chrfile = <TARGETFILE>; # slurp the chromosome-specific repeat file into memory
print "slurped";
(and after each loop)
close (TARGETFILE);
undef @chrfile;
If it is possible to quickly/simply fix this, I would much rather keep
this method than set up line-by-line input to the array. The
first slurp is very efficient.
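For reference, the line-by-line version I would rather avoid would look
something like this (just a sketch, using the same filename and array as
above, not tested):

open (TARGETFILE, "CanRPT"."$chromosome".".csv") || die "can't open targetfile: $!";
@chrfile = ();
while (my $line = <TARGETFILE>) {
    push @chrfile, $line; # append one record at a time instead of slurping
}
close (TARGETFILE);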
I am using ActiveState Perl 5.6 on a Win32 system with 1 GB of RAM.