zuerrong
Hi members,
Maybe this is not a question specific to Ruby, but since my Ruby app
has run into this algorithm problem, please offer suggestions if you'd like.
I have this data in a MySQL table:
mysql> select startNum,endNum from ip_data limit 10;
+----------+----------+
| startNum | endNum   |
+----------+----------+
| 16777216 | 16777471 |
| 16843008 | 16843263 |
| 16909056 | 16909311 |
| 17367040 | 17498111 |
| 17498112 | 17563647 |
| 17563648 | 17825791 |
| 17825792 | 18153471 |
| 18153472 | 18219007 |
| 18219008 | 18350079 |
| 18350080 | 18874367 |
+----------+----------+
(Those are actually IP addresses stored as bigint values.)
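For example, with Ruby's standard ipaddr library (the dotted address
below is just my own decoding of the sample number I use next, so
treat it as an illustration):

  require 'ipaddr'
  IPAddr.new('1.11.0.88').to_i   # => 17498200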
Given a number, say 17498200, I want to find which row of the table it
falls in.
I could run this SQL through Ruby's MySQL API (DBI with the Mysql DBD):
select * from ip_data where startNum <= #{number} and endNum >= #{number}
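Roughly how I call it from Ruby (just a sketch; I'm assuming the DBI
API with the Mysql driver, and the DSN and credentials below are
placeholders):

  require 'dbi'

  # Placeholder DSN and credentials -- adjust for the real database.
  dbh = DBI.connect('DBI:Mysql:mydb:localhost', 'user', 'password')

  number = 17498200
  row = dbh.select_one(
    'SELECT startNum, endNum FROM ip_data WHERE startNum <= ? AND endNum >= ?',
    number, number)
  p row   # => [17498112, 17563647] when a matching range exists

  dbh.disconnect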
But I found that to be quite slow.
The table is not that large; it has only 339542 rows in total.
So I was thinking of loading the whole data set into memory,
discarding MySQL, and finding an algorithm to do the lookup quickly.
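What I had in mind is something like a binary search over the ranges
once they are loaded and sorted by startNum (this assumes the ranges
never overlap, which seems to be the case for this table); a rough
sketch with the sample rows above hard-coded:

  # Sample rows from above; in the real app these would be loaded
  # once from ip_data ORDER BY startNum.
  ranges = [
    [16777216, 16777471], [16843008, 16843263], [16909056, 16909311],
    [17367040, 17498111], [17498112, 17563647], [17563648, 17825791],
    [17825792, 18153471], [18153472, 18219007], [18219008, 18350079],
    [18350080, 18874367]
  ]

  # Binary search over sorted, non-overlapping ranges.
  def find_range(ranges, number)
    lo, hi = 0, ranges.size - 1
    while lo <= hi
      mid = (lo + hi) / 2
      s, e = ranges[mid]
      if number < s
        hi = mid - 1        # number is before this range
      elsif number > e
        lo = mid + 1        # number is after this range
      else
        return ranges[mid]  # startNum <= number <= endNum
      end
    end
    nil                     # number is not covered by any range
  end

  p find_range(ranges, 17498200)   # => [17498112, 17563647]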
Any suggestions, please? Thanks.
Regards,
zuerrong