William Hughes said:
As an introduction to recursion I like Towers of Hanoi. Here is a
problem that is very hard to solve iteratively but easy to solve
recursively, yet is simple and does not require complex data
structures.
Towers of Hanoi is relatively straightforward to solve nonrecursively.
The important observation is that the binary representation of the step
number encodes the state of the game and the action to take in a fairly
simple way.
/* Determine what to do in step I of a Towers of Hanoi problem.
*
* N is the number of discs; we must have 1 <= I < 2^N. Store
* in *DD the number of the disc to move (numbered 1 up to N)
* and in *FF and *TT the numbers of the source (`from') and
* destination (`to') spindles (numbered 0, 1, 2) respectively.
* It is assumed that the overall task is to move all N discs
* from spindle 0 to spindle 2.
*/
static void hanoi_step(unsigned n, unsigned long i,
                       unsigned *dd, unsigned *ff, unsigned *tt)
{
  unsigned m;
  unsigned f = 0, t = 2, s = 1;
#define XCH(a, b) do { unsigned x = a; a = b; b = x; } while (0)
  m = 1 << (n - 1);
  for (;;) {
    if (i == m) break;                      /* this step moves disc n */
    else if (i & m) { XCH(f, s); i &= ~m; } /* second half: disc n has moved */
    else { XCH(s, t); }                     /* first half: disc n not yet moved */
    m >>= 1; n--;
  }
  *dd = n; *ff = f; *tt = t;
#undef XCH
}
Being able to determine a single step in the game without computing the
rest is handy if you're using a Towers of Hanoi backup schedule, for
example: you probably know how long it is since you took a full dump,
and would like to know what kind of incremental dump to do now.
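For instance, here's a rough, untested sketch of how that might look,
assuming hanoi_step above is in scope and treating the disc number as
the tape-set number in a five-set rotation:

#include <stdio.h>

int main(void)
{
  unsigned n = 5;                 /* five tape sets */
  unsigned long day = 11;         /* days since the last full dump */
  unsigned d, f, t;

  /* Requires 1 <= day < 2^n. */
  hanoi_step(n, day, &d, &f, &t);
  printf("day %lu: write to tape set %u\n", day, d);
  return (0);
}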
If the introduction to recursion is done using Fibonacci (or
even worse, factorial!) students are likely to think, "So what?".
Recursion is an even worse way to compute Fibonacci numbers. The
following algorithm computes F(n) using a number of multiplications
proportional to log(n), by exploiting the doubling identities
F(2k) = F(k) (F(k) + 2 F(k - 1)) and F(2k - 1) = F(k)^2 + F(k - 1)^2.
/* Return the Nth Fibonacci number F(N). F(0) is defined to be
* 0 and F(1) to be 1; F(I) = F(I - 1) + F(I - 2).
*/
static unsigned long fib(unsigned n)
{
  unsigned long u = 0, v = 1;           /* (u, v) = (F(k), F(k - 1)); */
                                        /* k = 0, taking F(-1) = 1 */
  unsigned long u2, v2, iiuv, uu, vv;
  unsigned m = ~0u ^ (~0u >> 1);        /* highest bit of an unsigned */

  while (m) {
    /* Double: uu = F(2k) = F(k) (F(k) + 2 F(k - 1)); vv = F(2k - 1). */
    u2 = u*u; v2 = v*v; iiuv = 2*u*v;
    uu = u2 + iiuv; vv = u2 + v2;
    if (n & m) { u = uu + vv; v = uu; } /* bit set: k = 2k + 1 */
    else { u = uu; v = vv; }            /* bit clear: k = 2k */
    m >>= 1;
  }
  return (u);
}
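A quick sanity check, assuming the function above is in the same file
(note that the result wraps modulo the width of unsigned long once n
gets large):

#include <stdio.h>

int main(void)
{
  /* F(10) = 55 and F(20) = 6765. */
  printf("F(10) = %lu, F(20) = %lu\n", fib(10), fib(20));
  return (0);
}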
A good algorithm to compute factorials is a little harder. If we assume
we're going to use arbitrary-precision arithmetic, then multiplication
is fastest when the operands are roughly the same length, which suggests
a simple algorithm that maintains a stack with the invariant that
entries higher up the stack are smaller. You push the numbers 1 up to
n, each time restoring the invariant by popping and multiplying as
necessary. When you're done, you simply multiply together the entries
remaining on the stack. Again, this is not naturally recursive.
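Here's a rough, untested sketch of what I mean, using GMP for the
arbitrary-precision arithmetic. Comparing bit lengths is just one
plausible way of deciding when to pop and multiply, and the fixed-size
stack is sized generously rather than carefully:

#include <assert.h>
#include <gmp.h>

/* Set RESULT (already initialized) to N factorial. */
static void factorial(mpz_t result, unsigned long n)
{
  mpz_t stack[64];      /* entries grow going down, so depth stays small */
  size_t sp = 0;
  unsigned long k;

  for (k = 1; k <= n; k++) {
    /* Push k ... */
    assert(sp < sizeof(stack)/sizeof(stack[0]));
    mpz_init_set_ui(stack[sp], k); sp++;

    /* ... then restore the invariant: each entry must be smaller
     * (here, strictly fewer bits) than the one below it.
     */
    while (sp >= 2 &&
           mpz_sizeinbase(stack[sp - 1], 2) >=
             mpz_sizeinbase(stack[sp - 2], 2)) {
      mpz_mul(stack[sp - 2], stack[sp - 2], stack[sp - 1]);
      mpz_clear(stack[sp - 1]); sp--;
    }
  }

  /* Multiply together whatever is left, smallest entries first. */
  mpz_set_ui(result, 1);
  while (sp > 0) {
    mpz_mul(result, result, stack[sp - 1]);
    mpz_clear(stack[sp - 1]); sp--;
  }
}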
The factorial computation is probably now much faster than the
algorithm used to print the answer in decimal. A simple-minded base
conversion will repeatedly take
quotient and remainder by (say) R, building up the base-R digits in
reverse order. A cleverer approach is to find the largest power R^(2^k)
smaller than the number we're meant to convert, take quotient and
remainder, and convert both separately and stitch the results together
(being careful with leading zeroes; note that R^(2^(k-1)) is an upper
bound for the power of the base needed to split the remaining parts).
This /is/ a naturally recursive algorithm, and it's very efficient; it
also has the advantage that it produces output in more or less the right
order. (There are faster ways if R is a power of 2, obviously.)
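Again, a rough and untested sketch of that, using GMP and assuming
2 <= R <= 10 so that emitting a digit is trivial; the function names
and the fixed-size table of powers are just for illustration:

#include <gmp.h>

/* Convert X, with 0 <= X < R^(2^K), into exactly 2^K base-R digits
 * (leading zeroes included), writing them at P; POW[j] holds R^(2^j).
 * Return the updated output pointer.
 */
static char *conv_padded(char *p, const mpz_t x, unsigned r,
                         unsigned k, const mpz_t *pow)
{
  mpz_t q, rem;

  if (!k)
    *p++ = (char)('0' + mpz_get_ui(x));
  else {
    mpz_init(q); mpz_init(rem);
    mpz_tdiv_qr(q, rem, x, pow[k - 1]);       /* split into two halves */
    p = conv_padded(p, q, r, k - 1, pow);     /* high half first, */
    p = conv_padded(p, rem, r, k - 1, pow);   /* then the padded low half */
    mpz_clear(q); mpz_clear(rem);
  }
  return (p);
}

/* Convert X >= 0 with no leading zeroes; requires POW[K] > X. */
static char *conv(char *p, const mpz_t x, unsigned r,
                  unsigned k, const mpz_t *pow)
{
  mpz_t q, rem;

  if (mpz_cmp_ui(x, r) < 0)
    return (conv_padded(p, x, r, 0, pow));
  while (mpz_cmp(pow[k], x) > 0) k--;         /* largest R^(2^k) <= X */
  mpz_init(q); mpz_init(rem);
  mpz_tdiv_qr(q, rem, x, pow[k]);
  p = conv(p, q, r, k, pow);                  /* leading part, unpadded */
  p = conv_padded(p, rem, r, k, pow);         /* trailing 2^k digits */
  mpz_clear(q); mpz_clear(rem);
  return (p);
}

/* Write X in base R into BUF (which must be big enough) and
 * null-terminate it.
 */
static void big_to_string(char *buf, const mpz_t x, unsigned r)
{
  mpz_t pow[40];                              /* pow[j] = R^(2^j) */
  unsigned j, maxk = 0;
  char *p;

  mpz_init_set_ui(pow[0], r);
  while (mpz_cmp(pow[maxk], x) <= 0) {        /* stop once R^(2^maxk) > X */
    maxk++;
    mpz_init(pow[maxk]);
    mpz_mul(pow[maxk], pow[maxk - 1], pow[maxk - 1]);
  }
  p = conv(buf, x, r, maxk, pow);
  *p = 0;
  for (j = 0; j <= maxk; j++) mpz_clear(pow[j]);
}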
I prefer looking at the problem of computing finite Fourier
transforms to the problem of sorting.
I can see the merits of that. Of course, Fourier transforms require a
certain amount of explaining in order to motivate them and explain why
the recursive algorithm is a plausible approach, but this is useful in
itself. (My university course didn't cover Fourier transforms at all.)
-- [mdw]