hash table usage questions

C.DeRykus

x> If the OP's worry about size is justified, I'd go with this:

x> /\.cc$/ and delete $h{$_} while defined ($_=each %h);

x> since your method builds two in-memory lists: one with all keys and
x> one with the keys meeting the criteria.

sure, that could be faster, but it may not be as fast as you think. you
make N calls to delete and N calls to each in a loop, and perl ops are
the major bottleneck in perl speed. if the slice lists are short enough
(and they could still be pretty long), IMO my solution should be
faster, but only benchmarking can tell for sure. my method does use
more ram, but ram is cheaper than speed. :)
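
for what it's worth, here's a minimal Benchmark sketch of that
comparison. the data set, its 100_000-key size, and the file-name
pattern are assumptions of mine; note the two one-liners above actually
delete opposite key sets, so both candidates here delete the non-.cc
keys to keep it apples to apples, and the %template copy adds the same
fixed cost to both:

  use strict;
  use warnings;
  use Benchmark qw(cmpthese);

  # hypothetical data set: 100_000 keys, half of them ending in ".cc"
  my %template =
      map { ( $_ % 2 ? "file$_.cc" : "file$_.h" ) => 1 } 1 .. 100_000;

  cmpthese( -3, {
      slice => sub {
          my %h = %template;
          delete @h{ grep !/\.cc$/, keys %h };
      },
      each_loop => sub {
          my %h = %template;
          local $_;
          # deleting the key most recently returned by each() is
          # explicitly allowed (perldoc -f each)
          !/\.cc$/ and delete $h{$_} while defined( $_ = each %h );
      },
  } );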

I don't know how meaningful a raw opcode count is, but the
hash-slice-delete does have fewer:

perl -MO=Concise -e 'delete @h{ grep !/\.cc$/, keys %h }'|wc -l
17

perl -MO=Concise -e '/\.cc$/ and delete $h{$_} while defined ($_=each %h);'|wc -l
25

"Perl Hacks" (#82) cites B::COP timing info to see how long the
opcodes are taking.
 

Uri Guttman

CD> I don't know how meaningful a raw opcode count is, but the
CD> hash-slice-delete does have fewer:

CD> perl -MO=Concise -e 'delete @h{ grep !/\.cc$/, keys %h }'|wc -l
CD> 17

CD> perl -MO=Concise -e '/\.cc$/ and delete $h{$_} while defined ($_=each %h);'|wc -l
CD> 25

the raw op count is nice, but it doesn't take into account how the
while loop executes versus the grep and slice. the while loop will
execute many more perl ops because it stays at the perl op level most
of the time, whereas the grep/slice stays inside perl's core (c) more
of the time, so it will likely be faster (at least as long as the key
lists fit comfortably in ram).
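
one way to eyeball that (an illustrative command; the exact output
varies by perl version) is to ask B::Concise for the ops in execution
order. the ops making up the loop's condition and body appear once in
the dump but run once per key at runtime, while the grep and the hash
slice are each a single op no matter how many keys they chew through:

perl -MO=Concise,-exec -e '/\.cc$/ and delete $h{$_} while defined ($_=each %h);'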

uri
 
