Fibonacci series recursion error


Steven D'Aprano

: Changing your definition slightly to the following:

: def r(n):
:     if n==0 or n==1: return 0
:     else: return r(n-1) + r(n-2) + 1

Except that's not the same as my "R(n)". The base cases should be 1, not
0. That makes rather a big difference to the value: by n = 35, you have

R(35) = 29860703
fib(35) = 9227465

or nearly 2.25 times greater. And the difference just keeps getting
bigger...
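
For anyone wanting to check those figures, here is a minimal sketch (not
part of the original post; modern Python, memoised so the doubly recursive
calls stay fast):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # standard base cases: fib(0) = 0, fib(1) = 1
    return n if n < 2 else fib(n-1) + fib(n-2)

@lru_cache(maxsize=None)
def R(n):
    # base cases are 1, not 0, as noted above
    return 1 if n < 2 else R(n-1) + R(n-2) + 1

print(R(35), fib(35))  # 29860703 9227465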

: We see it is the same as fib (off by 1) [...]
: So it does not need a new name :)

I see your smiley, but there are a number of series similar to Fibonacci,
with the same recurrence but different starting values, or similar but
slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
Perrin numbers.

(The fact that most of those start with P is almost certainly a
coincidence.)
 

Hans Georg Schaathun

On 02 May 2011 01:09:21 GMT, Steven D'Aprano
: Ah, I see where you're coming from now! You think I'm arguing *against*
: the use of recursion. Not at all. Earlier in this thread, I said:

Fair enough. Somebody said something about recursion mainly being
a beginner's error. I don't think it was you, but I felt that your
argument in context mainly served to reinforce such a view, whether
intended or not.

: "Consequently, the naive recursive function is ridiculously slow and
: memory-hungry.
:
: This seems to have given rise to a myth that recursion should be avoided.
: What needs to be avoided is *badly thought out* recursion."

Then we agree.

And badly thought-out iteration is as bad as badly thought-out
recursion.

: To be honest, I don't know what Python does with local variables. But I
: *guess* it uses a constant-sized record which points to the locals,
: because that's how I'd do it :)

Maybe, but would it be able to give special treatment to C API functions
that use a large chunk of stack memory for local variables?

: Given a choice between a complicated iterative algorithm and a simple
: recursive version, there's no prima facie reason to expect the recursive
: version to *necessarily* be slower than iteration in Python *merely*
: because it uses recursion. As always, if speed is an issue, profile and
: identify the bottlenecks before deciding how to fix them.

Then we are on the same page.

And it is becoming increasingly clear how bizarre this discussion is in
a python context. The overhead which may be caused by recursion in
hardware is only one of many sources of overhead which one accepts when
opting to use python in order to gain other benefits.
 

Hans Georg Schaathun

On 02 May 2011 08:56:57 GMT, Steven D'Aprano
: I see your smiley, but there are a number of series similar to Fibonacci,
: with the same recurrence but different starting values, or similar but
: slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
: Perrin numbers.

Well, Fibonacci isn't one unique sequence. Any sequence satisfying
f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of
starting values. At least according to some authors.

Ian Anderson (A First Course in Combinatorial Mathematics) prefers
the sequence 1,2,3,5 ...

Cormen, Leiserson, Rivest (Introduction to Algorithms) prefer
0,1,1,2, ... (although they also start the indexing at 0).

The Penguin Dictionary of Mathematics prefers 1,1,2,3,5 but also
suggests 0,1,1,2,3, ...

In short, don't assume that a person talking about Fibonacci
numbers assumes the same base cases as you do.
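
A small sketch (not from the thread) of how the one recurrence yields
different named sequences depending on the base cases:

def rec2(a0, a1, n):
    # f(k) = f(k-1) + f(k-2), with author-dependent base cases f(0), f(1)
    a, b = a0, a1
    for _ in range(n):
        a, b = b, a + b
    return a

print([rec2(0, 1, n) for n in range(8)])  # Fibonacci: [0, 1, 1, 2, 3, 5, 8, 13]
print([rec2(2, 1, n) for n in range(8)])  # Lucas:     [2, 1, 3, 4, 7, 11, 18, 29]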
 

rusi

: : I see your smiley, but there are a number of series similar to Fibonacci,
: : with the same recurrence but different starting values, or similar but
: : slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
: : Perrin numbers.

: Well, Fibonacci isn't one unique sequence.  Any sequence satisfying
: f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence.  Regardless of
: starting values.  At least according to some authors.

And they all will, when expressed in closed form, be of the form
f(n) = c*phi^n + something of a smaller order
where phi = (1 + sqrt(5))/2 and c depends on the starting values.

Since phi > 1 they are exponential, ignoring lower order terms.
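
To make that concrete, a quick sketch (not from the thread; it assumes the
standard 0, 1 starting values, which only fixes the coefficients):

from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2   # |psi| < 1: the "something of a smaller order"

def fib_closed(n):
    # Binet's formula; exact for moderate n despite the floating point
    return round((phi**n - psi**n) / sqrt(5))

print([fib_closed(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]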
 

Dave Angel

Sure. Serialize this Python object in a way that can be given to, say, PHP:
: foo = {"asdf":"qwer","zxcv":"1234"}; foo["self"] = [1,2,3,foo]

Wouldn't cyclic references give infinite recursion? And remain
infinite if you recode it iteratively?

Well, I don't know of a decent non-recursive way to process a
recursive structure. Incidentally, this example is almost directly
from some working code of ours; I have a C function that recurses over
a Python dictionary and aborts as soon as it's found "too much" data
(for a fairly arbitrary definition of "too much", but one that's
deliberately designed to prevent infinite recursion).
<snip>


When iterating over a repeatable, potentially infinite sequence of
distinguishable values, you can safely detect infinitude ;-) (cycles)
with a linear overhead (rather than n-squared).

Given a sequence, create two iterators for it. Start them both at the
same point, but iterate the second one twice for every time you iterate
the first one. If the two ever get the same value, you have a cycle.

In the case of Python objects, you probably want to use an 'is'
comparison, rather than comparing id() values for equality. But the concept
works, and easily.
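
A minimal sketch of that two-iterator scheme (Floyd's tortoise-and-hare, in
effect; make_iter is a hypothetical zero-argument factory returning a fresh
iterator over the same repeatable sequence each call):

def has_cycle(make_iter):
    slow, fast = make_iter(), make_iter()
    sentinel = object()
    while True:
        s = next(slow, sentinel)
        next(fast, sentinel)           # fast advances twice per step
        f = next(fast, sentinel)
        if f is sentinel:
            return False               # the sequence ends, so no cycle
        if s is f:                     # identity test, as suggested above
            return True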

DaveA
 

Hans Georg Schaathun

: This does not make linear recursion 'bad', just impractical for general
: use on finite-memory machines. While I consider it very useful for
: learning, it is unnecessary because it is easy to write an iterative
: version. So called tail-recursion optimization saves memory by REMOVING
: RECURSION and replacing it with iteration.
: (...)
: Python does not do this automatically because 1) it can be a semantic
: change under some circumstances; 2) one who wants the iterative version
: can just as easily write it directly;

That's the silliest argument I ever heard. The programmer should write
the code the way he and application domain experts think, i.e. very
often recursively. The compiler should generate code the way the CPU
thinks (most optimally), i.e. iteratively.

Of course, I am not saying that one should always model problems
recursively, and thus also not that one should implement them
recursively. Just that when a model is agreed recursively between
the stakeholders, one should get a compiler to do the optimisation,
not a programmer.
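
For concreteness, a minimal sketch (not from the thread) of the
transformation under discussion: a tail-recursive fib and the loop that a
tail-call-optimising compiler could reduce it to.

def fib_tail(n, a=0, b=1):
    # tail call: the recursive call is the very last thing that happens,
    # so nothing needs to survive on the caller's stack frame
    return a if n == 0 else fib_tail(n-1, b, a+b)

def fib_iter(n):
    # the loop a tail-call optimisation would produce
    a, b = 0, 1
    while n:
        n, a, b = n-1, b, a+b
    return a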

I always thought of python as a step forward, in the way it allows
direct expression of so many alternative modes of thinking (imperative,
functional, recursion, iterators, etc). If what you say is python
philosophy, it is a step backward in requiring the programmer to
think more about low-level technical matters which even C has managed
to leave for the compiler.

: and 3) Python has a better way to
: process collections that removes essentially all the boilerplate in the
: recursive-call and while-loop versions:
 

Steven D'Aprano

: On 02 May 2011 08:56:57 GMT, Steven D'Aprano
: : I see your smiley, but there are a number of series similar to
: : Fibonacci, with the same recurrence but different starting values, or
: : similar but slightly different recurrences. E.g. Lucas, primefree,
: : Pell, Padovan and Perrin numbers.
:
: Well, Fibonacci isn't one unique sequence. Any sequence satisfying f(n)
: = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of starting
: values. At least according to some authors.

According to the references I have, there is one and only one Fibonacci
sequence, the usual one invented by Fibonacci to talk about rabbits.
(Actually, we should talk about Fibonacci *numbers*, rather than
sequence.)

Wolfram Mathworld is typical, defining *the* Fibonacci numbers as those
with starting conditions f(1)=1, f(2)=1 (or f(0)=0 if you prefer).

http://mathworld.wolfram.com/FibonacciNumber.html

The Collins Dictionary of Mathematics agrees with Mathworld.

The Lucas numbers (related to, but not the same as, the Lucas sequence)
obey the same recurrence as the Fibonacci numbers, but with a different
set of starting values. So there is certainly precedent in the
mathematical literature for giving different names to the same recurrence
with different starting values.

In any case, whatever you call them, what I call R(n) is not one, as the
recurrence is different:

R(n) = R(n-1) + R(n-2) + 1

: The Penguin Dictionary of Mathematics prefers 1,1,2,3,5 but also
: suggests 0,1,1,2,3, ...

This depends on whether you start counting from n=0 or n=1.

Even the sequence you quote, from Anderson:

1,2,3,5...

is just the usual Fibonacci numbers starting at n=2 instead of 1 or 0.
(No matter how confusing something is, there's always at least one person
who will try to make it *more* confusing.)

: In short, don't assume that a person talking about Fibonacci numbers
: assumes the same base cases as you do.

As far as I'm concerned, there are only two definitions of Fibonacci
numbers that have widespread agreement in the mathematical community:

0,1,1,2,3 ... (starting from n=0)
1,1,2,3,5 ... (starting from n=1)

Any other definition is rather, shall we say, idiosyncratic.
 

Steven D'Aprano

: : This does not make linear recursion 'bad', just impractical for
: : general use on finite-memory machines. While I consider it very
: : useful for learning, it is unnecessary because it is easy to write an
: : iterative version. So called tail-recursion optimization saves memory
: : by REMOVING RECURSION and replacing it with iteration.
: : (...)
: : Python does not do this automatically because 1) it can be a semantic
: : change under some circumstances; 2) one who wants the iterative
: : version can just as easily write it directly;

: That's the silliest argument I ever heard.

You must be new to the Internet then :)

: The programmer should write
: the code the way he and application domain experts think, i.e. very
: often recursively. The compiler should generate code the way the CPU
: thinks (most optimally), i.e. iteratively.

Perhaps so, but there can be a huge gulf between what "should" happen and
what does happen. The CPython implementation is deliberately a naive,
not-very-clever compiler. (PyPy seems to be the implementation aiming at
cutting-edge cleverness.)

Even such simple optimizations as constant folding are very controversial
among the CPython developers. They have good reasons for their caution:
optimizing compilers are renowned for breaking code, especially floating
point code, and there have been cases in Python's history where
optimizations have broken existing code and needed to be backed out.

The problem is, once you include side-effects, recursion and iteration
are *not* identical. Specifically, the opposition to tail-recursion
optimization in Python is that it plays havoc with the tracebacks you get
in the event of an exception. The argument goes:

* Debugging is harder than writing code in the first place, so it is more
important to simplify debugging, even at the cost of making some code
slightly harder to write.

* Recursion doesn't just mean simple recursion where a function calls
itself, but also mutually recursive functions. You could have (say) five
functions that mutually call each other recursively, and in the event
of a traceback you need to see the exact calling chain; but that's
precisely the information that is blown away by tail-recursion
optimization (see the sketch after this list).

* Tail-recursion is only a special case of recursion, and not a very
common special case either, so it is hardly worth the extra complexity of
the compiler to treat it specially.
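
A small sketch (not from the thread, with hypothetical functions) of that
traceback point: these two functions call each other in tail position, and
the recursion-limit traceback shows the whole even/odd/even... calling
chain, which is exactly what tail-call elimination would discard.

def even(n):
    return True if n == 0 else odd(n - 1)    # tail call

def odd(n):
    return False if n == 0 else even(n - 1)  # tail call

even(-1)  # never reaches a base case; the traceback lists every frame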

Regardless of whether you agree with the arguments or not, they're hardly
"silly".
 

Terry Reedy

: : Python does not do this automatically because 1) it can be a semantic
: : change under some circumstances; 2) one who wants the iterative version
: : can just as easily write it directly;

: That's the silliest argument I ever heard.

Calling something silly when you have not really read it, nor asked
questions to really understand it, is, to me, silly.

If you want a compiler that can introduce bugs into your program by
doing what it thinks you meant, rather than what you said, don't use
CPython.
: : and 3) Python has a better way to
: : process collections that removes essentially all the boilerplate in the
: : recursive-call and while-loop versions:

To properly use modern Python, one should know and use for loops and
iterators.
 

Hans Georg Schaathun

On 02 May 2011 16:41:37 GMT, Steven D'Aprano
: You must be new to the Internet then :)

OK. Maybe I heard something worse the last time I was an active news
user, years ago.

Anyway, most of the silly things I hear do not qualify as arguments :)

: The problem is, once you include side-effects, recursion and iteration
: are *not* identical. Specifically, the opposition to tail-recursion
: optimization in Python is that it plays havoc with the tracebacks you get
: in the event of an exception. The argument goes:

Thanks for the comprehensive background. I can see the point that
python may be harder to optimise correctly during compilation than
many other languages.

: Regardless of whether you agree with the arguments or not, they're hardly
: "silly".

I only called one argument silly, namely the one about iterative
rewriting being so simple that it is not an issue. Sometimes it isn't,
but sometimes it is.

The other arguments are valid. And they make me lean more towards
more static, compiled languages without the excessive run-time
dynamism of python.
 

Mark Dickinson

: As far as I'm concerned, there are only two definitions of Fibonacci
: numbers that have widespread agreement in the mathematical community:
:
: 0,1,1,2,3 ... (starting from n=0)
: 1,1,2,3,5 ... (starting from n=1)
:
: Any other definition is rather, shall we say, idiosyncratic.

And a concrete reason for preferring the above definition (in either
form) is that divisibility properties of the sequence are much neater
with this choice:

gcd(F_m, F_n) = F_{gcd(m, n)}
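
A quick numeric check of that identity (a sketch, not from the thread;
math.gcd needs Python 3.5+, and indexing is from F_0 = 0):

from math import gcd

F = [0, 1]
while len(F) < 40:
    F.append(F[-1] + F[-2])

m, n = 12, 18
assert gcd(F[m], F[n]) == F[gcd(m, n)] == 8   # F_6 = 8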
 

harrismh777

Thomas said:
: yes - but they are called one after the other, so the "twice" call
: counts only for execution speed, not for recursion depth.

They *are not* called one after the other in the sense that there is
ever only one level of recursion with each call (not at all). Take a
closer look. Each call to fib() forces a double head, and each of those
forces another double head (now four), and so on... the recursion depth
exception of the OP testifies against your theory.

The thing about this problem that puzzles me, is why we might consider
recursion for a possible solution in the first place. In other words,
normally we consider using recursion when we need information further
down the stack than we have now... so that recursion is necessary
because each head call will not have the information it needs for
completion until the tail clean-up (coming back up the stack) provides
that information.

In the case of the fib() numbers (or the fib sequence) recursion is not
necessary for correct handling of the problem. The simple,
straightforward, obvious way to handle the problem is to calculate the
sequence outright in a straight-forward way. On the other hand, Otten's
use of yield (making a generator) makes some sense too, in the case that
we want to get the fib numbers over time without taking the time to
calculate the entire sequence up-front.
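
Otten's generator isn't reproduced in this archive, so the following is
only a guess at the idea:

from itertools import islice

def fib_gen():
    a, b = 0, 1
    while True:        # lazily yields the sequence, one number at a time
        yield a
        a, b = b, a + b

print(list(islice(fib_gen(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]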
Just saying...

kind regards,
m harris
 

Chris Angelico

: That's the silliest argument I ever heard.  The programmer should write
: the code the way he and application domain experts think, i.e. very
: often recursively.  The compiler should generate code the way the CPU
: thinks (most optimally), i.e. iteratively.

The CPU can function iteratively or recursively. The main difference
between the programmer and the CPU there is that the CPU is always
imperative, while a programmer might want to think functionally, or in
an event-driven model, and then "under the covers" the compiler
translates that into imperative code.

But in my opinion, the programmer and the interpreter/compiler are
teammates. If you allow programmers to be stupid, you will get a lot
of stupid programmers writing code. (I think that's one of PHP's
biggest downsides.) If the human and the machine know each other well
enough, the resulting code can be orders of magnitude more efficient
to run, *without taking any more time to code* because the programmer
already knew the right things to do.

Perhaps it will take you four hours to read through something, study
it, maybe eyeball the compiler's source code, maybe do some
disassembly. If you have a few years of career ahead of you, you'll
benefit many times over from those few hours, and without spending
extra time, you'll produce better code. THAT is what distinguishes the
master from the novice.

Chris Angelico
 

Ian Kelly

: They *are not* called one after the other in the sense that there is ever
: only one level of recursion with each call (not at all). Take a closer look.
: Each call to fib() forces a double head, and each of those forces another
: double head (now four), and so on...

The branching factor has nothing to do with the maximum stack depth.
If you don't believe me, consider this little script:

import inspect

def maxstack(n):
    if n <= 0:
        return len(inspect.stack())
    else:
        return max(maxstack(n-1), maxstack(n-2))

print maxstack(15)

This prints "17".

Now consider this script, which is similar but singly-recursive:

import inspect

def maxstack(n):
    if n <= 0:
        return len(inspect.stack())
    else:
        return maxstack(n-1)

print maxstack(15)

This also prints "17". Note: they're the same.
: the recursion depth exception of the
: OP testifies against your theory.

The OP's recursion depth exception was because a malformed base case
made the recursion infinite. It had nothing to do with the branching
factor.
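
The OP's code isn't shown in this archive, so as a hedged illustration of
that failure mode: a base case that some arguments never reach makes the
recursion unbounded, whatever the branching factor.

def fib_bad(n):
    if n == 1 or n == 2:   # n = 0 (or any negative n) never gets here...
        return 1
    return fib_bad(n-1) + fib_bad(n-2)
# ...so fib_bad(0) recurses until it hits the interpreter's recursion limit.

def fib_good(n):
    if n < 2:              # every chain of calls now bottoms out
        return n
    return fib_good(n-1) + fib_good(n-2)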
 

Chris Angelico

: The branching factor has nothing to do with the maximum stack depth.

Side point: In a massively parallel environment, branching like this
COULD be implemented as a fork. I just don't know of any
implementations of Python that do so. (And of course, it works for
fib() because it needs/uses no global state, which makes the
recursions completely independent. Not all functions are so courteous,
and the compiler can't necessarily know the difference.)

Chris Angelico
 

Steven D'Aprano

: The other arguments are valid. And they make me lean more towards more
: static, compiled languages without the excessive run-time dynamism of
: python.

If you value runtime efficiency over development time, sure. There are
plenty of languages which have made that decision: Pascal, C, Java, Lisp,
Forth, and many more.

Python aims at acceptable performance (between 10 and 100 times slower
than C) and much easier development (between 10 and 100 times faster than
C *wink*). If that tradeoff doesn't suit you, perhaps Python is not the
language for you.
 

rusi

: The thing about this problem that puzzles me, is why we might consider
: recursion for a possible solution in the first place....

This can be answered directly but a bit lengthily.
Instead let me ask a seemingly unrelated (but actually much the same)
question.

Here are two power functions:

def powI(x,n):
    result = 1
    for i in range(0,n):
        result = result * x
    return result

def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

What are their space/time complexities?
Which do you prefer?
 

Chris Angelico

: What are their space/time complexities?
: Which do you prefer?

They're pretty similar actually. If you rework the first one to not
use range() but instead have a more classic C style of loop, they'll
be almost identical:

def powI(x,n):
    result = 1
    while n > 0:
        result *= x
        n -= 1
    return result

There's a subtle difference in that powR as you've coded it will
infinite-loop if given a negative exponent, while powI in any form
will simply return 1 (neither is technically correct, fwiw). Other
than that, the only difference is that one keeps its state on the call
stack, the other uses a temporary variable.
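
For what it's worth, both versions cost O(n) multiplications, and the
recursive one also O(n) stack frames. The classic refinement (a sketch,
not from the thread) is exponentiation by squaring, where recursion cuts
the work to O(log n):

def powS(x, n):
    # squaring halves the exponent: O(log n) multiplies and stack frames
    if n == 0:
        return 1
    half = powS(x, n // 2)
    return half * half if n % 2 == 0 else half * half * x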

Performance would probably benefit from a special syntax meaning
"recurse", which would avoid the LOAD_GLOBAL for the function's name
(plus, things break if you pass recursive functions around after
compiling them - for instance, powR2=powR; def powR(x,n): os.exit() -
but if you do that, then you should expect to see nice bullet-holes in
your foot). However, I do not believe that Python would overall
benefit from any such thing.

Chris Angelico
 

rusi

: If you value runtime efficiency over development time, sure. There are
: plenty of languages which have made that decision: Pascal, C, Java, Lisp,
: Forth, and many more.

Or Haskell:

Succinctness like python
Type safety better than C, C++, Java with a minimum of type
declarations
 

Hans Georg Schaathun

On 03 May 2011 00:21:45 GMT, Steven D'Aprano
: Python aims at acceptable performance (between 10 and 100 times slower
: than C) and much easier development (between 10 and 100 times faster than
: C *wink*). If that tradeoff doesn't suit you, perhaps Python is not the
: language for you.

It isn't the trade-off per se which bothers me, but certain features
which seem to make compilation harder without making development any
easier.

But then, it probably depends on what kind of development you are doing.
 
