Looking for fast string hash searching


Thomas Christmann

Hi!

First, let me apologize for asking this question when there are so many answers
to it on Google, but most of them are contradictory, and making what I
want to do very fast is crucial to my project. So, here's what I have:

My C program connects to a database and gets ca. 50-100K domain name/file path
pairs. Those pairs have to be cached by my application. Building the cache may
take a second or two, but retrieving from it must be very fast. Since I get
the data from a database, I'd be able to order by domain name (which will be
my key, and is guaranteed to be unique), so I thought something like a B-tree
search for strings might be a good idea. I only have to look up by domain name
from the cache; searching by path is not needed.
Since I'm far from being an expert on the subject of hashing and search
algorithms, your opinion on how to make this fast is humbly requested :)

TIA,

Thomas
 

Stephen L.

Thomas said:
Hi!

First, let me apologize for asking this question when there are so many answers
to it on Google, but most of them are contradictory, and making what I
want to do very fast is crucial to my project. So, here's what I have:

My C program connects to a database and gets ca. 50-100K domain name/file path
pairs. Those pairs have to be cached by my application. Building the cache may
take a second or two, but retrieving from it must be very fast. Since I get
the data from a database, I'd be able to order by domain name (which will be
my key, and is guaranteed to be unique), so I thought something like a B-tree
search for strings might be a good idea. I only have to look up by domain name
from the cache; searching by path is not needed.
Since I'm far from being an expert on the subject of hashing and search
algorithms, your opinion on how to make this fast is humbly requested :)

TIA,

Thomas

This isn't _really_ a `C' question...

If the distribution of "domain" names is pretty even across the alphabet,
then you could use the 1st letter of the name as an index into an array of
"pointers" to name/path pairs that you can `bsearch()'. 100,000 entries
isn't that much nowadays, and dividing by 26 (about 4,000 entries per
bucket) should provide a very fast lookup.


Stephen
 

Thomas Christmann

This isn't _really_ a `C' question...

I know, I know, and I'm sorry to post here, but you guys usually
help me very much (not knowingly, I suppose) with your posts. Also,
there isn't really an alt.hash.maps :)
If the distribution of "domain" names is pretty even across the alphabet,
then you could use the 1st letter of the name as an index into an array of
"pointers" to name/path pairs that you can `bsearch()'. 100,000 entries
isn't that much nowadays, and dividing by 26 (about 4,000 entries per
bucket) should provide a very fast lookup.

Sounds good, I'll give that a try.

Thanks,

Thomas
 

August Derleth

I know, I know, and I'm sorry to post here, but you guys usually
help me very much (not knowingly, I suppose) with your posts. Also,
there isn't really an alt.hash.maps :)

comp.programming or something like that (I've forgotten the exact name)
handles language-agnostic algorithm questions. You should get the
algorithm ironed out first before trying a specific implementation anyway.
 

James Kanze

|> Thomas Christmann wrote:

|> > First, let me apologize for asking this question when there are so
|> > many answers to it on Google, but most of them are contradictory,
|> > and making what I want to do very fast is crucial to my project.
|> > So, here's what I have:

|> > My C program connects to a database and gets ca. 50-100K domain
|> > name/file path pairs. Those pairs have to be cached by my
|> > application. Building the cache may take a second or two, but
|> > retrieving from it must be very fast. Since I get the data from a
|> > database, I'd be able to order by domain name (which will be my
|> > key, and is guaranteed to be unique), so I thought something like
|> > a B-tree search for strings might be a good idea. I only have to
|> > look up by domain name from the cache; searching by path is not
|> > needed. Since I'm far from being an expert on the subject of
|> > hashing and search algorithms, your opinion on how to make this
|> > fast is humbly requested :)

|> If the distribution of "domain" names
|> is pretty even across the alphabet,
|> then you could use the 1st letter of
|> the name as an index to an array of
|> "pointers" to name/path pairs that
|> you can `bsearch()'.

They aren't. I'll bet that well over half of all domains start with
"www.". Also, the alphabet for domain names isn't limited to letters.

I think that for this application, nothing will beat a good hash code.
The trick, of course, is to avoid a bad one :); for some reason, URLs
seem to be very sensitive to bad hash codes. A Google search for FNV
hashing should turn up what you need -- if performance of the hash
itself turns out to be an issue, and your hardware doesn't handle
arbitrary multiplies very rapidly, I've also used Mersenne-prime-based
hash codes in the past with good results. (The basic algorithm is the
same as for FNV hashing, but the multiplier is a Mersenne prime, so the
multiplication reduces to a shift and a subtraction.)
 
