Is there any library for indexing binary data?

甜瓜 (ShenLei)

Howdy,

Recently I have been looking for a good library to build an index on binary data.
The Python bindings for Xapian and Lucene focus on text indexing rather than
binary data. Could anyone give me a recommendation? Is there any library for
indexing binary data, regardless of whether it is written in Python?

In my case, there is a very big data table which stores structured
binary data, e.g.:

struct Item
{
    long id;        // used as key
    double value;
};

I want to build an index on the "id" field to speed up searching. Since
this data table is not constant, the library should support incremental
indexing. If there is no suitable library, I will have to build the index
myself...

Thank you in advance.
 

Irmen de Jong


Put it into an SQLite database? Or something else from
http://docs.python.org/library/persistence.html.
Or maybe http://www.pytables.org/ is more suitable for your needs (I have
never used that one myself, though).
Or install a bank or two of memory in your box and read everything into
memory in one big hash table.
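For instance, a minimal (untested) sketch of the SQLite idea, with made-up
file/table/column names, which gives you an indexed key and incremental
inserts for free:

    import sqlite3

    conn = sqlite3.connect("items.db")   # file name is just a placeholder
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value REAL)")

    def add_item(item_id, value):
        # new items can be inserted at any time, so the index grows incrementally
        conn.execute("INSERT OR REPLACE INTO items (id, value) VALUES (?, ?)",
                     (item_id, value))
        conn.commit()

    def lookup(item_id):
        row = conn.execute("SELECT value FROM items WHERE id = ?", (item_id,)).fetchone()
        return row[0] if row else None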

By the way, if you already have a big data table in which the data is stored,
I'm guessing it is already in some form of database format. Can't you write
something that understands that format?

But I think you need to provide some more details about your data set.

-irmen
 
甜瓜 (ShenLei)

Thank you, Irmen. I will take a look at PyTables.
FYI, let me explain the case more clearly.

Originally, my big data table is simply an array of Item:

struct Item
{
    long id;             // used as key
    BYTE payload[LEN];   // corresponding value, fixed length
};

All items are stored in one file using the stdio.h function:

    fwrite(itemarray, sizeof(Item), num_of_items, fp);
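In Python terms (just for illustration; I am assuming a 64-bit id, no struct
padding, and LEN filled in with whatever the build actually uses), one record
can be read back like this:

    import struct

    LEN = 2048                               # assumed payload size
    RECORD = struct.Struct("<q%ds" % LEN)    # 8-byte id followed by the fixed-size payload

    def read_record(f, n):
        # read the n-th Item from the already-open binary file f
        f.seek(n * RECORD.size)
        item_id, payload = RECORD.unpack(f.read(RECORD.size))
        return item_id, payload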

Note that the "id" values are unique but in random order. To speed up
searching, I regrouped / sorted them into two-level hash tables (stored as
files). I want to employ some library to help me index this table.

Since the table contains about 10^9 items and LEN is about 2 KB, it is
impossible to hold all of the data in memory. Furthermore, new items may
be inserted into the array, so an incremental indexing feature is needed.

I hope this helps you understand my case.
 

Irmen de Jong


I see, I thought the payload data was small as well. What about this idea:
build a hash table where the keys are the ids from your Item structs and
the values are the file seek offsets of the Item 'records' in your original
data file (although that might generate values of type long, which take
more memory than ints, so maybe we should use file_offset / sizeof(Item)
instead). This way you can just keep your original data file (you only have
to scan it once to build the hash table) and you avoid a lengthy conversion
process.
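Roughly like this (a sketch only, assuming the same 8-byte id and fixed
LEN-byte payload layout as in your message, with a plain dict standing in
for the hash table):

    import struct

    LEN = 2048           # assumed payload size
    RECSIZE = 8 + LEN    # sizeof(Item), assuming no padding

    def build_index(path):
        # one sequential scan: map id -> record number (file_offset / sizeof(Item))
        index = {}
        recno = 0
        with open(path, "rb") as f:
            while True:
                id_bytes = f.read(8)
                if len(id_bytes) < 8:
                    break
                index[struct.unpack("<q", id_bytes)[0]] = recno
                f.seek(LEN, 1)    # skip the payload; only the id is needed
                recno += 1
        return index

    def lookup(path, index, item_id):
        # one seek + one read per query
        with open(path, "rb") as f:
            f.seek(index[item_id] * RECSIZE)
            return f.read(RECSIZE)[8:]    # the payload bytes

With 1e9 ids a plain dict will be too heavy, which is where the sparse array
or database fallback below comes in.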

If this hash table still doesn't fit in memory, use a sparse array
implementation of some sort that is more efficient at storing simple
integers, or just put it into one of the database solutions mentioned in
the earlier responses.

Another thing: I think your requirement of 1e7 lookups per second is a bit
steep for any solution where the data set is not entirely in core memory,
though.

Irmen.
 
甜瓜 (ShenLei)

Many thanks for your kind reply. As you mentioned, a sparse array may be
the best choice. Storing the offset rather than the payload itself greatly
reduces the memory needed.

1e7 queries per second is my ideal aim, but 1e6 must be achieved. Currently
I have reached 5e6 on one PC (without incremental indexing, and with all
incoming queries coming from a local data stream). Since the table is very
big and responses are time-critical, the final system will definitely use
distributed computing. I hope the Judy algorithm can simplify the indexing,
so I can focus on implementing data persistence and the distributed
computing work.

--
ShenLei
