Chapter 18. Dynamic Programming: Figure 18.2 Tree of Calls For Recursive Fibonacci

This document discusses the inefficiency of naively implementing the Fibonacci sequence recursively and introduces dynamic programming as a solution. It shows that the recursive Fibonacci implementation redundantly recomputes values. Dynamic programming addresses this through memoization, which stores previously computed values in a table to look up rather than recomputing. The document provides an example recursive call tree and presents an efficient memoized Fibonacci implementation using a dictionary to store computed values.

Chapter 18. Dynamic Programming


While this implementation of the recurrence is obviously correct, it is terribly
inefficient. Try, for example, running fib(120), but don't wait for it to complete.
The complexity of the implementation is a bit hard to derive, but it is roughly
O(fib(n)). That is, its growth is proportional to the growth in the value of the
result, and the growth rate of the Fibonacci sequence is substantial. For
example, fib(120) is 8,670,007,398,507,948,658,051,921. If each recursive call took a
nanosecond, fib(120) would take roughly 275 million years to finish.
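The implementation in question appears earlier in the chapter and is not reproduced in this excerpt; a minimal sketch consistent with the quoted value of fib(120), assuming the base cases fib(0) = fib(1) = 1, would be:

```python
def fib(n):
    """Assumes n is an int >= 0.
       Returns the Fibonacci number of n."""
    # Base cases chosen so that fib(0) == fib(1) == 1, consistent
    # with the value of fib(120) quoted in the text.
    if n == 0 or n == 1:
        return 1
    # Each call spawns two further calls, so the total number of
    # calls grows roughly as fast as the result itself.
    return fib(n - 1) + fib(n - 2)
```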
Let's try to figure out why this implementation takes so long. Given the tiny
amount of code in the body of fib, it's clear that the problem must be the
number of times that fib calls itself. As an example, look at the tree of calls
associated with the invocation fib(6).
fib(6)
├── fib(5)
│   ├── fib(4)
│   │   ├── fib(3)
│   │   │   ├── fib(2)
│   │   │   │   ├── fib(1)
│   │   │   │   └── fib(0)
│   │   │   └── fib(1)
│   │   └── fib(2)
│   │       ├── fib(1)
│   │       └── fib(0)
│   └── fib(3)
│       ├── fib(2)
│       │   ├── fib(1)
│       │   └── fib(0)
│       └── fib(1)
└── fib(4)
    ├── fib(3)
    │   ├── fib(2)
    │   │   ├── fib(1)
    │   │   └── fib(0)
    │   └── fib(1)
    └── fib(2)
        ├── fib(1)
        └── fib(0)

Figure 18.2 Tree of calls for recursive Fibonacci


Notice that we are computing the same values over and over again. For example,
fib gets called with 3 three times, and each of these calls provokes four
additional calls of fib. It doesn't require a genius to think that it might be a
good idea to record the value returned by the first call, and then look it up
rather than compute it each time it is needed. This is called memoization, and
it is the key idea behind dynamic programming.
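The redundancy is easy to verify by instrumenting the recurrence with a counter. This instrumentation is not part of the chapter's code, just a quick check of the claim above:

```python
from collections import Counter

calls = Counter()  # maps argument -> number of times fib was called with it

def fib(n):
    calls[n] += 1
    if n == 0 or n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

fib(6)
print(calls[3])             # fib(3) is computed three separate times
print(sum(calls.values()))  # 25 calls in total just to evaluate fib(6)
```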
Figure 18.3 contains an implementation of Fibonacci based on this idea. The
function fastFib has a parameter, memo, that it uses to keep track of the
numbers it has already evaluated. The parameter has a default value, the
empty dictionary, so that clients of fastFib don't have to worry about supplying
an initial value for memo. When fastFib is called with an n > 1, it attempts to
look up n in memo. If n is not there (because this is the first time fastFib has
been called with that value), an exception is raised. When this happens,
fastFib uses the normal Fibonacci recurrence to compute the value, and then
stores it in memo.
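Figure 18.3 itself is not reproduced in this excerpt, but an implementation matching the description above, using a try/except around the dictionary lookup, would look like:

```python
def fastFib(n, memo={}):
    """Assumes n is an int >= 0; memo is used only by recursive calls.
       Returns the Fibonacci number of n."""
    if n == 0 or n == 1:
        return 1
    try:
        # Succeeds if fib of n has already been computed.
        return memo[n]
    except KeyError:
        # First call with this value of n: fall back on the normal
        # recurrence, then record the result before returning it.
        result = fastFib(n - 1, memo) + fastFib(n - 2, memo)
        memo[n] = result
        return result
```

With memoization each value is computed at most once, so fastFib(120) returns almost instantly. One Python detail worth noting: the mutable default argument memo={} is created once and persists across top-level calls. That is harmless here, since cached Fibonacci values never change, but it is a pattern to use with care.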
