arnuld
This code is from chapter 2 of Algorithms in C:
Sequential Search:
int search(int a[], int v, int l, int r)
{
    int i;
    for (i = l; i <= r; i++)        /* scan a[l..r] left to right */
        if (v == a[i]) return i;
    return -1;
}
Binary Search:
int search(int a[], int v, int l, int r)
{
    while (r >= l)                  /* search the interval a[l..r] */
    {
        int m = (l + r) / 2;
        if (v == a[m]) return m;
        if (v < a[m]) r = m - 1;
        else l = m + 1;
    }
    return -1;
}
Now the author gives the average cost of S-Search as:
(1 + 2 + 3 + ... + N)/N = (N + 1)/2
and the worst-case cost of B-Search as:
T_N <= T_(N/2) + 1 for N >= 2, with T_1 = 1
I am going to ask a very simple question. Just by looking at the code of
both searches, anyone can conclude that B-Search, on a sorted input, is
always cheaper than S-Search. If it is so clear from the code, why do I
have to go through the incomprehensible junk of N, T, C_N, etc.? The book
is about Algorithms in C, but when I started to read, the author was
teaching me a whole recurrence series (which is of course an obtuse thing
to understand):
C_N = C_(N-1) + N for N >= 2, with C_1 = 1

C_N = C_(N-2) + (N - 1) + N
    = C_(N-3) + (N - 2) + (N - 1) + N
    = ...
    = C_1 + 2 + ... + (N - 2) + (N - 1) + N
    = 1 + 2 + 3 + ... + (N - 2) + (N - 1) + N
    = N(N+1)/2
Is it necessary to understand exponential series, recurrences,
permutations and combinations, and logarithmic series before you can even
start to comprehend a search, a sort, or any algorithm at all? If that is
so, why would I use Sedgewick? I could just read Hall & Knight's "Higher
Algebra".
My experience says it's enough to know that N is less than N^2, which is
less than 2^N, and that log N is the smallest of them all; if B-Search
takes log N on a sorted input and S-Search takes N, then we can go with a
B-Search-based algorithm, provided we have first estimated the cost of
sorting.
So should I go with reading all the maths, or just jump to some random
chapter and start working on an algorithm in C?