The linear search also needs two compares per loop iteration - one to
test the value, and one to check the loop index.
One comparison suffices if you use a sentinel. The following code
fragment is illustrative (but unchecked).
if (data[0] >= testval) return 0;
ptr = data + data_length;
while (*--ptr >= testval);
return ptr-data;
One decrement, one compare per element.
The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. For 50
elements that works out to about 25 probes for the average linear
search against about 6 for the binary search, and since 25/6 > 4, I
think "about 3 times as fast" is a fair statement, especially since it
was given as a "reasonable guess".
It's a reasonable guess if you don't do the arithmetic and count
operations, and if you've never benchmarked the two approaches. For a
linear search the costs per iteration are:
1 data compare & branch
1 increment/decrement
For the binary search the costs per iteration are:
1 data compare & branch
1/2 branch (then clause)
1 index compare and branch
1 addition
1 assignment
1 divide by 2
1 addition by 1 (avoidable)
If we just count operations that's 2 for a linear search and 6.5 for a
binary search; over a 50-element array that works out to roughly
25 x 2 = 50 operations against about 6 x 6.5 = 39, which is just about
a wash. However, the code for the linear search is a better candidate
for optimization. Some years ago I ran benchmarks on just this
question; the break-even point was somewhere around 50 elements. I
expect that the results would be about the same on more modern
machines.
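
To make the operation counting concrete, the two inner loops I have in
mind are roughly the following (a sketch only, not the code I
benchmarked; the names are illustrative):

    /* Linear search: per element, one data compare & branch and one
     * increment (plus the index test, unless a sentinel removes it). */
    static int linear_first_larger( const int v[], int n, int x ){
        int i = 0;
        while( i < n && v[i] <= x )
            i++;
        return i;
    }

    /* Binary search: per iteration, an index compare & branch, an add,
     * a divide by 2 (usually a shift), an assignment, a data compare &
     * branch, and the then/else assignment.                           */
    static int binary_first_larger( const int v[], int n, int x ){
        int i = -1, j = n;    /* v[i] <= x and v[j] > x, taken
                                 "virtually" at i == -1 and j == n */
        while( i + 1 != j ){
            int m = (i + j) / 2;
            if( v[m] <= x ) i = m;
            else j = m;
        }
        return j;             /* smallest index with v[j] > x, or n */
    }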
I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.
It's not just me or you; errors in writing binary searches are (or
were) common. IIRC Bentley makes just that observation in
"Programming Pearls" and, of course, Dijkstra had some pungent
comments on the subject.
Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - it should read 'how_many >= 0' rather
than 'how_many < 0'. And for those who notice nits, "increasing"
should be "non-decreasing" instead.
Well you see, "looks right" is not quite as reassuring as "proven
right". I expect it probably is - most of us reuse validated code
schemas for these little exercises.
/* Return the smallest index of an element greater than
* the supplied argument x. If there is no such element,
* return the index where the element would be if there
* were one (which will be 'how_many' if how_many < 0
* is true).
*
* NEEDS: argument 'v' array be sorted in increasing
* order. NOT CHECKED.
*
* Disclaimer: not tested or compiled in any way.
*/
int
first_larger( SomeType v[], int how_many, SomeType x ){
    int i = 0, j = how_many;
    if( i >= j || v[0] > x ) return 0;
    while( i + 1 != j ){
        int m = (i + j) / 2;
        if( v[m] <= x ) i = m;
        else j = m;
    }
    return j;
}
Actually I think we are mostly in agreement here, despite the apparent
differences in the postings. There are some different assumptions and
the language used may have been misleading in places. Let's see if
those can be straightened out.
The phrase "is a reasonable guess" (for the performance factor of
three) probably is better put as "is a reasonable back of the envelope
estimate". That's meant with the usual errors bars and ignoring of
(unknown or unspecified) second-order effects.
I admit, the estimate was skewed toward the case where the element
compare was "expensive" relative to other operations. That assumption
is reasonably good if the elements being compared are strings; fair
if the elements being compared are floating point or mixed-mode; poor
if the elements being compared are int's. I still think it's a fair
assumption for calculating a BOTE estimate, since it's standard in
comparing algorithm performance; but you're right that it's not a
good assumption when the elements being compared are int's.
You're also right that, for int arrays of 50 elements, a straight
linear search (not with sentinel) runs only a little slower than a
binary search like the one I illustrated. Out of curiosity I ran a
few benchmarks, and the simple linear search was only 25% slower than
the simple binary search. (In case anyone was wondering, the platform
here is an x86 architecture, with gcc -O2 doing the compiles.)
Using a sentinel on the linear search is a nice technique, and
definitely an improvement. (Incidentally, the code snippet posted has
two errors - the two >= comparisons should be > comparisons, and the
return value should be 'ptr-data + 1' rather than 'ptr-data'.) A
linear search using the sentinel technique ran roughly 1.7 times as
fast as a simple linear search, or 40% faster than a simple binary
search. Completely unrolling the linear search -- which is reasonable
for only 50 elements -- gives a 1.2x speedup relative to the sentinel
linear, or a factor of almost 2.1 over the original simple linear
search. Nice!
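
For reference, the fragment with those two fixes applied would
presumably look something like this (a hedged sketch; the wrapper
function and its name are mine, not from the original posting):

    /* Sentinel linear search: returns the smallest index of an element
     * greater than testval, or data_length if there is none.
     * NEEDS: data_length > 0 and data[] sorted in non-decreasing order;
     * the early test makes data[0] the sentinel for the downward scan. */
    int
    sentinel_first_larger( int data[], int data_length, int testval ){
        int *ptr;
        if( data[0] > testval ) return 0;
        ptr = data + data_length;
        while( *--ptr > testval )     /* was >= in the original fragment */
            ;
        return ptr - data + 1;        /* was ptr - data                  */
    }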
I will grant you that many programmers will get binary search wrong
(and that this observation may have been made by messieurs Bentley and
Dijkstra). However, I believe this result obtains not because binary
search is inherently difficult, but because it is practiced wrongly by
less competent programmers, and because the myth of it being difficult
is perpetuated by instructors who don't teach it very well. When I
said the binary search code I wrote "looks right" I meant it looked
like there were no clerical errors; I'm sure the invariants were
right, as I was sure they were when I first posted it. It's become
second nature for me to make sure loop invariants are right, and
anyone who adopts this discipline can program binary searches with
confidence. (Come to think of it, asking a candidate to program a
simple binary search might make a good interview question.)
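
To illustrate what I mean about invariants, here is the same search
with the invariant written out explicitly (a sketch with illustrative
asserts, not the code as originally posted; SomeType is stood in by
int so the sketch is self-contained):

    #include <assert.h>

    typedef int SomeType;             /* stand-in for the sketch only */

    int
    first_larger_checked( SomeType v[], int how_many, SomeType x ){
        int i = 0, j = how_many;
        if( i >= j || v[0] > x ) return 0;
        /* Invariant: 0 <= i < j <= how_many, v[i] <= x, and every
         * element at an index >= j (if there is one) is greater than x. */
        while( i + 1 != j ){
            int m = (i + j) / 2;
            assert( i < m && m < j ); /* strict progress, so the loop ends */
            if( v[m] <= x ) i = m;    /* preserves v[i] <= x               */
            else j = m;               /* preserves v[j] >  x               */
        }
        return j;                     /* smallest index with v[j] > x      */
    }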
Fortunately for the poor binary search, it still has something to
offer in response to the performance of linear/sentinel search and
unrolled linear search in our 'int array of 50 element' benchmarks.
Combining two levels of binary comparison and then doing a simple
linear search (that is, not using the sentinel technique) runs a tad
faster (roughly 10%) than a linear search with sentinel.
The function
uint
binary_sentinel( int v[], uint n, int x ){
    /** NEEDS: n > 0 **/
    if( v[n/2] > x ){
        uint i = 0;
        while( v[i] <= x ) i++;
        return i;
    } else {
        while( v[--n] > x ) {}
        return n+1;
    }
}
which combines a single binary split with a linear/sentinel search,
outperforms even the fully unrolled linear search. (Here also the
speedup is roughly 10%.)
Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.
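
For the curious, the kind of fully unrolled binary search I mean looks
roughly like this (a sketch, not the code I actually timed; it returns
the same 'first element greater than x' index and handles any n up to
63):

    /* Unrolled binary search: six fixed-size probes cover any n <= 63.
     * Returns the smallest index whose element is greater than x, or n
     * if there is no such element.  NEEDS: v sorted non-decreasingly.  */
    static unsigned first_larger_unrolled( const int v[], unsigned n, int x ){
        unsigned i = 0;
        if( i + 32 <= n && v[i + 31] <= x ) i += 32;
        if( i + 16 <= n && v[i + 15] <= x ) i += 16;
        if( i +  8 <= n && v[i +  7] <= x ) i +=  8;
        if( i +  4 <= n && v[i +  3] <= x ) i +=  4;
        if( i +  2 <= n && v[i +  1] <= x ) i +=  2;
        if( i +  1 <= n && v[i]      <= x ) i +=  1;
        return i;
    }

The bounds test at each level is what lets one routine serve every
size up to 63; dropping those tests would tie the probes to one fixed
size but save a compare per level.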
That all fit with what you were saying?