Search algorithm to use

Richard Harter

Gosh folks...

A linear search will take about 25 compares on the average.

A binary search will take about 6 compares on the average.

Roughly speaking, a reasonable guess is that the binary search will
run about 3 times as fast as the linear search. Whether that makes
any perceptible difference in the application or not is another
question altogether.

This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.
Programming a binary search should be something any competent
programmer can do almost as fast as he/she can write it down:

I haven't checked your code; however, coding a binary search completely
correctly is notoriously trickier than it seems.
/* Return the smallest index of an element greater than
 * the supplied argument x. If there is no such element,
 * return the index where the element would be if there
 * were one (which will be 'how_many' if how_many < 0
 * is true).
 *
 * NEEDS: argument 'v' array be sorted in increasing
 * order. NOT CHECKED.
 *
 * Disclaimer: not tested or compiled in any way.
 */

int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;

    if( i >= j || v[0] > x ) return 0;

    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else            j = m;
    }

    return j;
}
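
As a quick illustration of the contract (an editorial sketch added to
this archive, not part of the original posting; SomeType is assumed
here to be int):

#include <assert.h>

typedef int SomeType;   /* assumption for this sketch only */

int first_larger( SomeType v[], int how_many, SomeType x );  /* as above */

int main(void){
    SomeType v[] = { 1, 3, 3, 7 };
    assert( first_larger(v, 4, 0) == 0 );  /* v[0] == 1 is already > 0    */
    assert( first_larger(v, 4, 3) == 3 );  /* v[3] == 7 is the first > 3  */
    assert( first_larger(v, 4, 9) == 4 );  /* nothing larger: == how_many */
    return 0;
}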

And, just a few meta-comments.

The comp.lang.c group isn't normally the place to ask this kind of
question; comp.programming is better.
True.


If you asked this question because you got assigned it as homework,
then look over the comments and the code until you think you see how
it works, then stop looking at it and try to write the code yourself.
If you don't, you're cheating yourself (and besides which karma will
get you :).

IMO it doesn't sound like a homework question; rather it sounds like
the sort of question that an inexperienced programmer would ask.

Richard Harter, (e-mail address removed)
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
 
Tim Rentsch

This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.

The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.

The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".

I haven't checked your code; however, coding a binary search completely
correctly is notoriously trickier than it seems.

I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.

Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.

/* Return the smallest index of an element greater than
 * the supplied argument x. If there is no such element,
 * return the index where the element would be if there
 * were one (which will be 'how_many' if how_many < 0
 * is true).
 *
 * NEEDS: argument 'v' array be sorted in increasing
 * order. NOT CHECKED.
 *
 * Disclaimer: not tested or compiled in any way.
 */

int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;

    if( i >= j || v[0] > x ) return 0;

    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else            j = m;
    }

    return j;
}
 
Lawrence Kirby

Hm. Had thought exponential growth referred to _any_ curve which
increased "steepness" as you went along the x axis.

Exponential typically refers to something of the form X^N where ^ is the
power operator. It is sometimes loosely used to refer to something that
grows faster than polynomial (i.e. N^X for arbitrarily large X). For example
2^N will exceed N^X with sufficiently large N, no matter how large you
make X.
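
To make that concrete, an editorial sketch (not from the original
thread) that finds where 2^N overtakes N^10:

#include <stdio.h>
#include <math.h>

/* Find the first N >= 2 where the exponential 2^N exceeds the
 * polynomial N^10. (N = 1 is skipped: 2^1 > 1^10 trivially, before
 * the polynomial has pulled ahead.) Doubles avoid integer overflow.
 */
int main(void){
    int n;
    for( n = 2; n <= 200; n++ ){
        if( pow(2.0, n) > pow((double)n, 10.0) ){
            printf("2^N first exceeds N^10 at N = %d\n", n);  /* N = 59 */
            break;
        }
    }
    return 0;
}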

Lawrence
 
Michael Wojcik

However, a linear search will correctly predict nearly all the
branches, while a binary search will mispredict half of them, on
average.

Unless comparison time dominates, I wouldn't call your guess
"reasonable", for that reason alone - and there are others.
(Consider we know nothing of the size of items and the memory-
management features of the platform, so we can't guess whether the
binary search will tend to cause more cache misses, for example.)
The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.

Any decent optimizer will unroll the loop, so loop index comparisons
will only add N/O comparisons, where O is the unrolling factor.
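
For instance, a 4-way unrolled linear scan pays one index test per
four element tests. An editorial sketch, assuming for brevity that n
is a multiple of 4:

/* First index with v[i] > x in a sorted array, else n. */
int
first_larger_unrolled( const int v[], int n, int x ){
    int i;
    for( i = 0; i < n; i += 4 ){       /* one index compare ...       */
        if( v[i]   > x ) return i;     /* ... per four value compares */
        if( v[i+1] > x ) return i+1;
        if( v[i+2] > x ) return i+2;
        if( v[i+3] > x ) return i+3;
    }
    return n;
}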

And if loop index comparisons take about as long as value comparisons,
then the latter must be very quick - so you fall foul of the branch-
misprediction penalty of binary search.

And this is why premature optimization is a poor idea.

Now, in this case, the OP did follow up by stating that profiling had
indicated his search function was a hot spot; and since that's the
case, he might indeed want to try various alternatives. If comparisons
are very expensive, the binary search might even help - though
I suspect he'd be better off revisiting his data structure, as
various people suggested.

--
Michael Wojcik (e-mail address removed)

Every allegiance to some community eventually involves such a fetish,
which functions as the disavowal of its founding crime: is not 'America'
the fetish of an infinitely open space enabling every individual to
pursue happiness in his or her own way? -- Slavoj Zizek
 
Richard Harter

The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.

One comparison suffices if you use a sentinel. The following code
fragment is illustrative (but unchecked).

if (data[0] >= testval) return 0;
ptr = data + data_length;
while (*--ptr >= testval);
return ptr-data;

One decrement, one compare per element.
The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".

It's a reasonable guess if you don't do the arithmetic and count
operations, and if you've never benchmarked the two approaches. For a
linear search the costs per iteration are

1 data compare & branch
1 increment/decrement

For the binary search the costs per iteration are:

1 data compare & branch
1/2 branch (then clause)
1 index compare and branch
1 addition
1 assignment
1 divide by 2
1 addition by 1 (avoidable)

If we just count operations, that's 2 for a linear search and 6.5 for a
binary search, which is just about a wash. However the code for the
linear search is a better candidate for optimization. Some years ago
I ran benchmarks on just this question; the break even point was
somewhere around 50. I expect that the results would be about the
same on more modern machines.
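
For anyone who wants to reproduce that kind of measurement, a minimal
harness along these lines would do. (An editorial sketch, not
Richard's original benchmark; the absolute numbers will vary with
compiler and machine.)

#include <stdio.h>
#include <time.h>

#define N    50         /* array size near the claimed break-even point */
#define REPS 1000000    /* repetitions, so the times are measurable     */

/* first index with v[i] > x, else n -- linear version */
static int linear_first_larger( const int v[], int n, int x ){
    int i;
    for( i = 0; i < n; i++ )
        if( v[i] > x ) return i;
    return n;
}

/* same contract, binary version (as in the code posted above) */
static int binary_first_larger( const int v[], int n, int x ){
    int i = 0, j = n;
    if( i >= j || v[0] > x ) return 0;
    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else            j = m;
    }
    return j;
}

int main(void){
    int v[N], r, sum = 0;
    clock_t t0, t1, t2;

    for( r = 0; r < N; r++ ) v[r] = 2*r;          /* sorted test data */

    t0 = clock();
    for( r = 0; r < REPS; r++ )
        sum += linear_first_larger( v, N, r % (2*N) );
    t1 = clock();
    for( r = 0; r < REPS; r++ )
        sum += binary_first_larger( v, N, r % (2*N) );
    t2 = clock();

    /* printing sum keeps the optimizer from discarding the searches */
    printf( "linear %ld ticks, binary %ld ticks (checksum %d)\n",
            (long)(t1 - t0), (long)(t2 - t1), sum );
    return 0;
}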

I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.

It's not just me or you; errors in writing binary searches are (or
were) common. IIRC Bentley makes just that observation in
"Programming Pearls" and, of course, Dijkstra had some pungent
comments on the subject.
Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.

Well you see, "looks right" is not quite as reassuring as "proven
right". I expect it probably is - most of us reuse validated code
schemas for these little exercises.
/* Return the smallest index of an element greater than
 * the supplied argument x. If there is no such element,
 * return the index where the element would be if there
 * were one (which will be 'how_many' if how_many < 0
 * is true).
 *
 * NEEDS: argument 'v' array be sorted in increasing
 * order. NOT CHECKED.
 *
 * Disclaimer: not tested or compiled in any way.
 */

int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;

    if( i >= j || v[0] > x ) return 0;

    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else            j = m;
    }

    return j;
}

Richard Harter, (e-mail address removed)
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
 
Tim Rentsch

The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.

One comparison suffices if you use a sentinel. The following code
fragment is illustrative (but unchecked).

if (data[0] >= testval) return 0;
ptr = data + data_length;
while (*--ptr >= testval);
return ptr-data;

One decrement, one compare per element.
The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".

It's a reasonable guess if you don't do the arithmetic and count
operations, and if you've never benchmarked the two approaches. For a
linear search the costs per iteration are

1 data compare & branch
1 increment/decrement

For the binary search the costs per iteration are:

1 data compare & branch
1/2 branch (then clause)
1 index compare and branch
1 addition
1 assignment
1 divide by 2
1 addition by 1 (avoidable)

If we just count operations, that's 2 for a linear search and 6.5 for a
binary search, which is just about a wash. However the code for the
linear search is a better candidate for optimization. Some years ago
I ran benchmarks on just this question; the break even point was
somewhere around 50. I expect that the results would be about the
same on more modern machines.

I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.

It's not just me or you; errors in writing binary searches are (or
were) common. IIRC Bentley makes just that observation in
"Programming Pearls" and, of course, Dijkstra had some pungent
comments on the subject.
Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.

Well you see, "looks right" is not quite as reassuring as "proven
right". I expect it probably is - most of us reuse validated code
schemas for these little exercises.
/* Return the smallest index of an element greater than
 * the supplied argument x. If there is no such element,
 * return the index where the element would be if there
 * were one (which will be 'how_many' if how_many < 0
 * is true).
 *
 * NEEDS: argument 'v' array be sorted in increasing
 * order. NOT CHECKED.
 *
 * Disclaimer: not tested or compiled in any way.
 */

int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;

    if( i >= j || v[0] > x ) return 0;

    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else            j = m;
    }

    return j;
}

Actually I think we are mostly in agreement here, despite the apparent
differences in the postings. There are some different assumptions and
the language used may have been misleading in places. Let's see if
those can be straightened out.

The phrase "is a reasonable guess" (for the performance factor of
three) probably is better put as "is a reasonable back of the envelope
estimate". That's meant with the usual error bars and ignoring of
(unknown or unspecified) second-order effects.

I admit, the estimate was skewed toward the case where the element
compare was "expensive" relative to other operations. That assumption
is reasonably good if the elements being compared are strings; fair
if the elements being compared are floating point or mixed-mode; poor
if the elements being compared are int's. I still think it's a fair
assumption for calculating a BOTE estimate, since it's standard in
comparing algorithm performance; but you're right that it's not a
good assumption when the elements being compared are int's.

You're also right that, for int arrays of 50 elements, a straight
linear search (not with sentinel) runs only a little slower than a
binary search like the one I illustrated. Out of curiosity I ran a
few benchmarks, and the simple linear search was only 25% slower than
the simple binary search. (In case anyone was wondering, the platform
here is an x86 architecture, with gcc -O2 doing the compiles.)

Using a sentinel on the linear search is a nice technique, and
definitely an improvement. (Incidentally, the code snippet posted has
two errors - the two >= comparisons should be > comparisons, and the
return value should be 'ptr-data + 1' rather than 'ptr-data'.) A
linear search using the sentinel technique ran roughly 1.7 times as
fast as a simple linear search, or 40% faster than a simple binary
search. Completely unrolling the linear search -- which is reasonable
for only 50 elements -- gives a 1.2x speedup relative to the sentinel
linear, or a factor of almost 2.1 over the original simple linear
search. Nice!
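
Folding the two corrections noted above back into the sentinel
fragment gives something like the following editorial sketch (same
contract as first_larger above; it assumes a non-decreasing array and
data_length > 0, neither checked):

/* Sentinel variant: scan backward from the end. The initial check
 * on data[0] guarantees the loop terminates, acting as the sentinel.
 */
int
first_larger_sentinel( const int data[], int data_length, int testval ){
    const int *ptr;

    if( data[0] > testval ) return 0;      /* was >=, per the fix       */
    ptr = data + data_length;
    while( *--ptr > testval ) {}           /* was >=, per the fix       */
    return (int)(ptr - data) + 1;          /* was ptr-data, per the fix */
}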

I will grant you that many programmers will get binary search wrong
(and that this observation may have been made by messieurs Bentley and
Dijkstra). However, I believe this result obtains not because binary
search is inherently difficult, but because it is practiced wrongly by
less competent programmers, and because the myth of it being difficult
is perpetuated by instructors who don't teach it very well. When I
said the binary search code I wrote "looks right" I meant it looked
like there were no clerical errors; I'm sure the invariants were
right, as I was sure they were when I first posted it. It's become
second nature for me to make sure loop invariants are right, and
anyone who adopts this discipline can program binary searches with
confidence. (Come to think of it, asking a candidate to program a
simple binary search might make a good interview question.)
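
To illustrate that discipline, here is the same loop with its
invariant written out; the annotations are editorial, not Tim's:

int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;

    if( i >= j || v[0] > x ) return 0;

    /* Invariant from here on: v[i] <= x, and either j == how_many
     * or v[j] > x. The answer therefore always lies in (i, j].
     */
    while( i + 1 != j ){
        int m = (i + j) / 2;    /* i < m < j, so the range must shrink;
                                 * note i + j could overflow on huge arrays */
        if( v[m] <= x ) i = m;  /* preserves v[i] <= x */
        else            j = m;  /* preserves v[j] >  x */
    }

    /* i + 1 == j: v[j-1] <= x and j is the first index > x (or how_many) */
    return j;
}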

Fortunately for the poor binary search, it still has something to
offer in response to the performance of linear/sentinel search and
unrolled linear search in our 'int array of 50 element' benchmarks.

Combining two levels of binary comparison and then doing a simple
linear search (that is, not using the sentinel technique) runs a tad
faster (roughly 10%) than a linear search with sentinel.

The function

uint
binary_sentinel( int v[], uint n, int x ){
    /** NEEDS: n > 0 **/
    if( v[n/2] > x ){
        uint i = 0;
        while( v[i] <= x ) i++;
        return i;
    } else {
        while( v[--n] > x ) {}
        return n+1;
    }
}

which combines a single binary split with a linear/sentinel search,
outperforms even the fully unrolled linear search. (Here also the
speedup is roughly 10%.)

Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.

That all fit with what you were saying?
 
Tim Rentsch

However, a linear search will correctly predict nearly all the
branches, while a binary search will mispredict half of them, on
average.

Unless comparison time dominates, I wouldn't call your guess
"reasonable", for that reason alone - and there are others.
(Consider we know nothing of the size of items and the memory-
management features of the platform, so we can't guess whether the
binary search will tend to cause more cache misses, for example.)


Any decent optimizer will unroll the loop, so loop index comparisons
will only add N/O comparisons, where O is the unrolling factor.

And if loop index comparisons take about as long as value comparisons,
then the latter must be very quick - so you fall foul of the branch-
misprediction penalty of binary search.

And this is why premature optimization is a poor idea.

Now, in this case, the OP did follow up by stating that profiling had
indicated his search function was a hot spot; and since that's the
case, he might indeed want to try various alternatives. If comparisons
are very expensive, the binary search might even help - though
I suspect he'd be better off revisiting his data structure, as
various people suggested.

I have the sense you think I was trying to provide some sort of
detailed performance analysis. I wasn't. There was no optimization -
premature or otherwise - in my posting. There were only observations,
no recommendation.

What I was trying to do is give a not-very-deep, but helpful, answer
to a question about whether binary searching would be an improvement
over linear searching. The answer I gave was intended to address the
question asked, as it was stated.

I was also trying to illustrate a way of thinking that would likely be
helpful to someone thinking about whether it might be worth some
effort trying to improve the performance of this particular function.
I don't specialize in performance enhancement; but engineers that I
know who do (and who are good at what they do) say that big gains in
performance are almost always due to algorithmic improvements rather
than fussing with low-level machine details. I think that view is
borne out here; please see my response to Richard Harter's posting
for details.
 
Richard Harter

Actually I think we are mostly in agreement here, despite the apparent
differences in the postings. There are some different assumptions and
the language used may have been misleading in places. Let's see if
those can be straightened out.

Agreed. (Mostly :))
The phrase "is a reasonable guess" (for the performance factor of
three) probably is better put as "is a reasonable back of the envelope
estimate". That's meant with the usual error bars and ignoring of
(unknown or unspecified) second-order effects.

Yes and no. The problem here is that we are comparing algorithms with
different cost functions on a restricted range. If we just use number
of iterations for our reasonable guess we're ignoring cost per
iteration. The "reasonable guess" will be off by some constant
factor. This doesn't matter for large n; big O dominates. It's a
different matter for a small restricted range of n. Still, it's a
reasonable guess.
I admit, the estimate was skewed toward the case where the element
compare was "expensive" relative to other operations. That assumption
is reasonably good if the elements being compared are strings; fair
if the elements being compared are floating point or mixed-mode; poor
if the elements being compared are int's. I still think it's a fair
assumption for calculating a BOTE estimate, since it's standard in
comparing algorithm performance; but you're right that it's not a
good assumption when the elements being compared are int's.

Granted. As a caveat, comparison costs on strings are distribution
dependent. For quasi-uniform distributions, comparison costs for
binary searches will be less than those of linear searches. Surprise!
You're also right that, for int arrays of 50 elements, a straight
linear search (not with sentinel) runs only a little slower than a
binary search like the one I illustrated. Out of curiosity I ran a
few benchmarks, and the simple linear search was only 25% slower than
the simple binary search. (In case anyone was wondering, the platform
here is an x86 architecture, with gcc -O2 doing the compiles.)

Using a sentinel on the linear search is a nice technique, and
definitely an improvement. (Incidentally, the code snippet posted has
two errors - the two >= comparisons should be > comparisons, and the
return value should be 'ptr-data + 1' rather than 'ptr-data'.)

Oops.
A linear search using the sentinel technique ran roughly 1.7 times as
fast as a simple linear search, or 40% faster than a simple binary
search. Completely unrolling the linear search -- which is reasonable
for only 50 elements -- gives a 1.2x speedup relative to the sentinel
linear, or a factor of almost 2.1 over the original simple linear
search. Nice!

I will grant you that many programmers will get binary search wrong
(and that this observation may have been made by messieurs Bentley and
Dijkstra). However, I believe this result obtains not because binary
search is inherently difficult, but because it is practiced wrongly by
less competent programmers, and because the myth of it being difficult
is perpetuated by instructors who don't teach it very well. When I
said the binary search code I wrote "looks right" I meant it looked
like there were no clerical errors; I'm sure the invariants were
right, as I was sure they were when I first posted it. It's become
second nature for me to make sure loop invariants are right, and
anyone who adopts this discipline can program binary searches with
confidence. (Come to think of it, asking a candidate to program a
simple binary search might make a good interview question.)

Ah, okay. I agree, programming a simple binary search would make a
good interview question.
Fortunately for the poor binary search, it still has something to
offer in response to the performance of linear/sentinel search and
unrolled linear search in our 'int array of 50 element' benchmarks.

Combining two levels of binary comparison and then doing a simple
linear search (that is, not using the sentinel technique) runs a tad
faster (roughly 10%) than a linear search with sentinel.

The function

uint
binary_sentinel( int v[], uint n, int x ){
    /** NEEDS: n > 0 **/
    if( v[n/2] > x ){
        uint i = 0;
        while( v[i] <= x ) i++;
        return i;
    } else {
        while( v[--n] > x ) {}
        return n+1;
    }
}

which combines a single binary split with a linear/sentinel search,
outperforms even the fully unrolled linear search. (Here also the
speedup is roughly 10%.)

Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.

You can use a switch for the unrolled linear search. Still, I like
the unrolled binary search.
That all fit with what you were saying?

Pretty much. Whether or not optimization is premature, successful
optimization requires a full assessment of alternatives, a full
understanding of the data being operated upon, and careful
measurement.


Richard Harter, (e-mail address removed)
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
 
Tim Rentsch

On 23 Feb 2005 02:41:17 -0800, Tim Rentsch <[email protected]>
wrote:
[snip]
Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.

You can use a switch for the unrolled linear search.

I tried code that used a 'switch' to vector into an unrolled linear
search. On this problem at least, the simple loop-using-sentinel ran
faster, by about 20%.
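
For reference, the switch-dispatch idea might look like the following
editorial sketch (one plausible reconstruction, not Tim's actual test
code, and shown for n <= 4 rather than 50):

/* First index with v[i] > x in a sorted array of n <= 4 elements,
 * else n. The switch vectors to the right entry point and the cases
 * fall through, checking elements from the back toward the front.
 */
int
first_larger_switch( const int v[], int n, int x ){
    switch( n ){
    case 4: if( v[3] <= x ) return 4;   /* fall through */
    case 3: if( v[2] <= x ) return 3;   /* fall through */
    case 2: if( v[1] <= x ) return 2;   /* fall through */
    case 1: if( v[0] <= x ) return 1;   /* fall through */
    default: return 0;
    }
}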

Still, I like the unrolled binary search.

I have an interesting story about that. I tried two versions of the
unrolled binary search - one version like this:

if( v[3] > x )
    if( v[1] > x ) return v[0] > x ? 0 : 1;
    else           return v[2] > x ? 2 : 3;
else
    if( v[5] > x ) return v[4] > x ? 4 : 5;
    else           return v[6] > x ? 6 : 7;

(only of course more so, to cover all 50 cases), where the number of
elements in the array must match exactly the unrolling in the code;
and another version like this:

if( n < 4 || v[3] > x )
    if( n < 2 || v[1] > x ) return n < 1 || v[0] > x ? 0 : 1;
    else                    return n < 3 || v[2] > x ? 2 : 3;
else
    if( n < 6 || v[5] > x ) return n < 5 || v[4] > x ? 4 : 5;
    else                    return n < 7 || v[6] > x ? 6 : 7;

where the number of array elements is checked in parallel construction
with the tests of the values, so now any array size up to the unrolled
size will work. To my surprise the second version actually ran faster
than the first - not by much, only a few percent, but it was nice to
get the more general functionality and have it be "better than free".

Incidentally, I have no explanation for why this might be so. The
code in the second case had roughly twice as many instructions down
each path. I guess branch prediction was doing its thing.

Pretty much. Whether or not optimization is premature, successful
optimization requires a full assessment of alternatives, a full
understanding of the data being operated upon, and careful
measurement.

Roger that.
 
