parsing large text file and pairwise comparison

Alan

Hi. I have programmed in C++ before, but I'm a couple of years out
of practice. I am seeking some advice on getting started on a quickie
project. . . .

I have to read a 54MB text file and do a pairwise comparison
among 2500 items or so in the file. Each of those items has to be
compared to every other item. Many of the comparisons will only require
comparing one field of the items. I will probably sort on this field
before I do the comparison, to speed things up. There are about 1000
sets of these items, so I'll do this about 1000 times.

Would you recommend using strings to read it into vectors for
this? Is there a completely other, better way to do this?

Thanks, Alan
 
Jim Langston

Alan said:
Hi. I have programmed in C++ before, but I'm a couple of years out
of practice. I am seeking some advice on getting started on a quickie
project. . . .

I have to read a 54MB text file and do a pairwise comparison
among 2500 items or so in the file. Each of those items has to be
compared to every other item. Many of the comparisons will only require
comparing one field of the items. I will probably sort on this field
before I do the comparison, to speed things up. There are about 1000
sets of these items, so I'll do this about 1000 times.

Would you recommend using strings to read it into vectors for
this? Is there a completely other, better way to do this?

Thanks, Alan

Hard to say without knowing the data. Is it string data, integer data,
floating-point data, or something else?
And what do you need to do with the comparisons: build another file? Output
something?
Is the data unique?

54MB is quite a lot of data to try to fit into memory all at once.
Can you break the data up into some logical units?
 
Robert Mabee

Alan said:
I have to read a 54MB text file and do a pairwise comparison
among 2500 items or so in the file.

That sounds like a lot of data. Unless you want to wrestle with
resource limitations (for practice or fun) you'd be better off
using an existing tool such as GNU sort for the heavy work. If the
file needs some pre-processing (such as combining multiple related
lines into one) then write a small filter program, to be invoked
in a command script, or let the main program do the pre-processing
and invoke sort using system() or more OS-specific means.

I'd use explicit temporary files with sort, rather than pipes, as
pipes often have amazingly bad performance for large quantities of
data. This also leaves evidence you can check for debugging.
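
Something along these lines would do for invoking the external sort (the
file names and the -k key position are only placeholders; this assumes a
POSIX-style sort is available on the PATH):

#include <cstdlib>
#include <stdexcept>

// Sort the pre-processed temporary file with the external sort utility.
void sort_items_file()
{
    int rc = std::system("sort -n -k1,1 items.tmp -o items.sorted.tmp");
    if (rc != 0)
        throw std::runtime_error("external sort failed");
}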

Once the data are sorted on the relevant fields, checking for
uniqueness is easily done with just two one-line buffers.
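
For instance, a minimal sketch of that two-buffer scan (the file name is a
placeholder, and a real check would compare only the key field rather than
the whole line):

#include <fstream>
#include <iostream>
#include <string>

// Walk a sorted file with just two line buffers and report adjacent duplicates.
int main()
{
    std::ifstream in("items.sorted.tmp");
    std::string prev, cur;
    if (!std::getline(in, prev))
        return 0;                       // empty file, nothing to check
    while (std::getline(in, cur)) {
        if (cur == prev)
            std::cout << "duplicate: " << cur << '\n';
        prev.swap(cur);
    }
    return 0;
}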

Of course, if your purpose really requires comparing or otherwise
combining pairs outside a linear sort order you'll have to build up
an appropriate in-memory structure. Vectors of strings are a fine
way to store the data. If memory is limited you can store just the
part that's needed globally, together with a file position so you
can read the same line again when and if you need to copy the rest
of the line to the output.

If performance is too slow (which would be an issue only if this
isn't a one-time application) you can speed up the file read using
a large buffer, up to the file size, then making C strings by
writing 0 over the line separators.
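
Roughly like this (the file name is illustrative only):

#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

// Slurp the whole file into one buffer, then turn each line into a C string
// by overwriting the '\n' separators with 0.
std::vector<char*> load_lines(std::vector<char>& buf)
{
    std::ifstream in("items.txt", std::ios::binary);
    buf.assign(std::istreambuf_iterator<char>(in),
               std::istreambuf_iterator<char>());
    if (buf.empty() || buf.back() != '\n')
        buf.push_back('\n');            // make sure the last line is terminated

    std::vector<char*> lines;
    char* p = buf.data();
    char* end = p + buf.size();
    while (p < end) {
        lines.push_back(p);
        char* nl = static_cast<char*>(std::memchr(p, '\n', end - p));
        *nl = '\0';                     // 0 over the line separator
        p = nl + 1;
    }
    return lines;
}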

Make a smaller test input file so your debugging time won't be spent
waiting for disk I/O.
 
Alan

It consists entirely of numbers (some integer, some real). The
primary "screening" comparison involves one integer field. For 90+% of
the data, this field will not match in the comparison. This will
reduce the data considerably.

The data does come in sequential, logical units (output of a time
cycle of a simulation). To clarify, I have to read in something like
2500 (variable) of the items at a time and compare them. Then I
calculate results and move on to the next group of 2500. If it might
be wise, I could create 1000 separate files instead of one big honker.


Any thoughts?

I will output a small file with summarized results (CSV), which
will be used in Excel for further analysis.
Thanks for the advice,
Alan

Reply to:

Hard to say without knowing the data. Is it string data, integer data,
floating-point data, or something else?
And what do you need to do with the comparisons: build another file? Output
something?
Is the data unique?

54MB is quite a lot of data to try to fit into memory all at once.
Can you break the data up into some logical units?
 
Jacek Dziedzic

Alan said:
It consists entirely of numbers (some integer, some real). The
primary "screening" comparison involves one integer field. For 90+% of
the data, this field will not match in the comparison. This will
reduce the data considerably.

The data does come in sequential, logical units (output of a time
cycle of a simulation). To clarify, I have to read in something like
2500 (variable) of the items at a time and compare them. Then I
calculate results and move on to the next group of 2500. If it might
be wise, I could create 1000 separate files instead of one big honker.


Any thoughts?

I will output a small file with summarized results (CSV), which
will be used in Excel for further analysis.
Thanks for the advice,

If these are numbers, then I would not read them into strings,
for you will have to convert them to int or double anyway
to compare them. String comparison will not suffice, because
0, -0.0 and 0.0000E+00 are all the same number.

I'd go for a vector of structs, if you know the struct in
advance. If you can reasonably guess the number of elements,
or if it is in a header of this file, then .reserve() the
vector in advance; this will reduce the penalty of reallocating
it during .push_back().
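
For example (the field layout here is only a guess at your real data):

#include <cstddef>
#include <vector>

// A guess at the record layout -- the real fields are unknown to me.
struct Item {
    int    key;      // the integer "screening" field
    double a, b;     // whatever real-valued fields follow
};

std::vector<Item> make_item_vector(std::size_t expected)
{
    std::vector<Item> items;
    items.reserve(expected);    // e.g. 2500; avoids reallocation in push_back()
    return items;
}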

My experience shows that the bottleneck of such calculations
is usually the string to number conversion hidden behind the
scenes in

input_stream >> some_number;

i.e. the program becomes CPU-bound, not I/O-bound, because of
the conversion. I suggest trying the stream-based approach first
and, only if the performance is unsatisfactory, re-coding it to
use sscanf()/atof() or the like, being extremely
careful.
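
A sketch of both approaches, reusing the guessed Item layout from above:

#include <cstdio>
#include <istream>
#include <string>
#include <vector>

struct Item { int key; double a, b; };   // same guessed layout as above

// Stream-based reading first; the commented-out sscanf() variant is the kind
// of rewrite worth trying only if profiling shows the conversion dominates.
std::vector<Item> read_items(std::istream& in)
{
    std::vector<Item> items;
    items.reserve(2500);
    Item it;
    while (in >> it.key >> it.a >> it.b)
        items.push_back(it);

    // Possible faster variant, to be used with care:
    //   std::string line;
    //   while (std::getline(in, line))
    //       if (std::sscanf(line.c_str(), "%d %lf %lf",
    //                       &it.key, &it.a, &it.b) == 3)
    //           items.push_back(it);
    return items;
}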

On the other hand, if you intend to compare all pairs
of data, I suspect you'll have O(n^2) complexity, or
O(n log n) if you sort it -- that way, for large sets of
data, the cost of reading it in, which scales as O(n),
will become less and less of a hassle. Still, 2500 numbers
and a 54MB file do not sound like an awful lot.
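
If you do end up comparing in memory, something like this would exploit
the sort (again with the guessed layout):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Item { int key; double a, b; };   // same guessed layout as above

// Sort on the screening field, then do the expensive pairwise work only
// inside runs of equal keys: the O(n log n) sort prunes most of the O(n^2)
// pairs, since 90%+ of the keys will not match anyway.
void compare_group(std::vector<Item>& items)
{
    std::sort(items.begin(), items.end(),
              [](const Item& x, const Item& y) { return x.key < y.key; });

    for (std::size_t i = 0; i < items.size(); ) {
        std::size_t j = i + 1;
        while (j < items.size() && items[j].key == items[i].key)
            ++j;
        for (std::size_t p = i; p < j; ++p)
            for (std::size_t q = p + 1; q < j; ++q) {
                // full comparison of items[p] and items[q] goes here
            }
        i = j;
    }
}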

I would not split the input into many smaller files;
IMHO it will only require more file opens and closes.

HTH,
- J.
 
Alan

Just to follow up, for anyone who is interested . . . I followed
most of the good advice provided, reading in the data as integers,
doubles, etc. and putting it into a vector of structures. This worked
great. On my Dell Latitude D600 laptop (not the latest and greatest),
it takes less than a minute to read in all this data and iterate
through the vector for a pairwise comparison.

Thanks again for all the great advice. Alan
 