jason.cipriani
Except that the loading takes place before he starts the tight
loop, so it can have no impact on the time spent in the loop. (I
presume he has profiled the code and established that the
bottleneck is the find.)
My comment was unclear. I meant to imply that the text file format
could be modified to make it easier and more efficient to generate
optimized data structures on load. I did not mean to imply that
improving load times would have a direct effect on his particular
bottleneck.
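For example (purely a sketch; the one-"id payload"-record-per-line
format, Record, and load_sorted are illustrative assumptions, not his
actual format), writing the file out pre-sorted by user ID would let
the loader fill an indexable structure in a single pass, with no
post-load sort or map building:

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Record {
        int user_id;
        std::string payload;
    };

    // Assumes one "id payload" record per line, pre-sorted by user_id,
    // so the loader can simply append as it reads.
    std::vector<Record> load_sorted(const std::string& path) {
        std::vector<Record> records;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream fields(line);
            Record r;
            if (fields >> r.user_id &&
                std::getline(fields >> std::ws, r.payload))
                records.push_back(r);
        }
        return records;
    }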
That's what he said. Presumably based on profiler output.
I was acknowledging what he said. Now you are just picking on me.
Or have a second pass after loading which establishes them.
Given that there was not enough information to determine how many
times he went through his loop per run (and therefore per data load),
I took the approach of suggesting optimizations for both loading *and*
analysis. Adding a second pass on load (which already takes 10+
seconds) is only acceptable if the extra load time is less than the
time saved during analysis. If that's the case, or if the text file
can no longer be modified for some reason, then sure, a second pass
could make sense as well.
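For example (again just a sketch, reusing the hypothetical Record type
from above; build_index is an illustrative name), the second pass
would be a single walk over the already-loaded records to build an
id-to-record index:

    #include <map>
    #include <vector>

    // Second pass: one insertion per loaded record, giving the tight
    // loop an O(log n) lookup instead of a linear find. The pointers
    // stay valid as long as the records vector is not resized.
    std::map<int, const Record*>
    build_index(const std::vector<Record>& records) {
        std::map<int, const Record*> index;
        for (const Record& r : records)
            index[r.user_id] = &r;
        return index;
    }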
If the set of user IDs is more or less dense (say, at least half
of the values between 0 and the largest ID allocated), then he can
get the same effect by using std::vector instead of std::map,
without any extra set-up. Maybe his best solution is to ensure
that the user IDs are dense.
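Roughly (a sketch under the same assumptions as above; max_id would
come from the data):

    #include <vector>

    // Dense IDs: index directly into a vector; unused slots hold null.
    // Lookup in the tight loop is then just index[id]: O(1) and
    // cache-friendly, with no tree traversal at all.
    std::vector<const Record*>
    make_dense_index(const std::vector<Record>& records, int max_id) {
        std::vector<const Record*> index(max_id + 1, nullptr);
        for (const Record& r : records)
            index[r.user_id] = &r;
        return index;
    }

The trade-off is memory proportional to max_id rather than to the
number of records, which is why density matters.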
Yes, that is another good idea.
Jason