Unit 11 Dynamic Programming - 2: Structure
11.1 Introduction
In the previous unit we studied some concepts of dynamic programming. As
you know, dynamic programming is an optimization technique used for
problems that can be decomposed into overlapping sub problems, which a
naive recursive or backtracking solution would otherwise solve repeatedly.
The basic steps for dynamic programming are given below:
1) Develop a mathematical notation that is used to find a solution and sub
solutions for the problem.
2) Prove the solution using the Principle of Optimality.
3) Derive a recurrence relation that expresses a solution in terms of its sub
solutions, using the mathematical notation developed in step 1.
4) Write an algorithm to compute the recurrence relation.
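As a simple illustration of these four steps on a problem not otherwise discussed in this unit (the Fibonacci numbers), the notation F(n), the recurrence F(n) = F(n-1) + F(n-2) and a bottom-up algorithm might look as follows in Python; the function name and the problem choice are illustrative only.

# A minimal sketch of the four steps applied to a simple example
# (Fibonacci numbers). Names and problem choice are illustrative.
def fibonacci(n):
    """Step 1: notation F(n); Step 3: recurrence F(n) = F(n-1) + F(n-2);
    Step 4: compute the recurrence bottom-up, storing sub-solutions."""
    if n < 2:
        return n
    table = [0] * (n + 1)        # table[k] holds the sub-solution F(k)
    table[1] = 1
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]   # reuse stored sub-solutions
    return table[n]

print(fibonacci(10))             # 55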
Sometimes we have to solve a problem optimally; at other times a
non-optimal solution is also good enough. We cannot judge a problem by a
single criterion. Optimization is useful in almost every type of problem
we have to solve.
This unit defines the Principle of Optimality and analyzes binary search
trees using dynamic programming. It also introduces the Knapsack problem
and its solution using dynamic programming and memory functions.
In the equation Eq: 11.1, V_N(x0) is the value function, where N is the number
of decision steps. We are aware that the value function gives the best
possible value of the objective as a function of the state x0. Here
p1(x1 | x0, a0) is the probability of moving to state x1 when action a0 is taken
in state x0, and r1(x0, a0, x1) is the corresponding one-step reward, which is
added to the remaining value V_(N-1)(x1).
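Assuming Eq: 11.1 has the standard form V_N(x0) = max over a0 of the sum over x1 of p1(x1 | x0, a0) [ r1(x0, a0, x1) + V_(N-1)(x1) ], one step of this computation might be sketched in Python as below; the dictionaries, states and function names are illustrative assumptions, not part of the unit.

# A hedged sketch of one value-function step V_N(x0), assuming Eq: 11.1
# has the standard form described above. All names are illustrative.
def value_step(x0, actions, states, p, r, v_prev):
    """p[(x1, x0, a0)] is the transition probability p1(x1 | x0, a0),
    r[(x0, a0, x1)] is the one-step reward, v_prev[x1] is V_(N-1)(x1)."""
    best = float("-inf")
    for a0 in actions:
        total = sum(p[(x1, x0, a0)] * (r[(x0, a0, x1)] + v_prev[x1])
                    for x1 in states)
        best = max(best, total)
    return best

# Tiny illustrative instance: two states, two actions.
states, actions = ["s0", "s1"], ["a", "b"]
p = {(x1, x0, a): 0.5 for x1 in states for x0 in states for a in actions}
r = {(x0, a, x1): 1.0 for x0 in states for a in actions for x1 in states}
v_prev = {"s0": 0.0, "s1": 0.0}
print(value_step("s0", actions, states, p, r, v_prev))   # 1.0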
Self Assessment Questions
1. Principle of __________ is defined as a basic dynamic programming
principle which helps us to view problems as a sequence of sub
problems.
2. ______________, a mathematician, invented the Principle of Optimality.
3. All optimization problems tend towards minimizing cost and time and
maximizing ________.
Binary search trees are node-based data structures used in many system
programming applications for managing dynamic sets. An example of a
binary search tree is given in figure 11.1.
We use binary search trees for applications such as sorting, searching and
in–order traversal.
The four main operations that we perform on binary search trees are:
Searching – Here we compare the search key with the root node first and
then continue into the left or the right sub-tree, until we find the matching
node or no nodes are left. We can search a binary search tree using the
pseudocode given below.
Pseudocode for Searching a Binary Search Tree
find(Y, node){
if(node = NULL) //Y is not in the tree
return NULL
if(Y = node:data)
return node
else if(Y < node:data)
return find(Y, node:leftChild)
else if(Y > node:data)
return find(Y, node:rightChild)
}
Insertion – If the tree is empty, we insert the new node as the root node. To
insert a new element in an existing binary search tree, we first compare the
value of the new node with the value of the current node. If the value of the
new node is less than the current node's value, we continue in the left
sub-tree; if it is greater, we continue in the right sub-tree. When we reach an
empty position, we insert the new node there as a left or right child
accordingly.
Let us now discuss the pseudocode for inserting a new element in a binary
search tree.
Pseudocode for Inserting a Value in a Binary Search Tree
//Purpose: insert data object Y into the Tree
//Inputs: data object Y (to be inserted), binary-search-tree node
//Effect: do nothing if tree already contains Y;
// otherwise, update binary search tree by adding a new node containing data object Y
insert(Y, node){
if(node = NULL){
node = new binaryNode(Y,NULL,NULL)
return
}
if(Y = node:data)
return
else if(Y < node:data)
insert(Y, node:leftChild)
else // Y > node:data
insert(Y, node:rightChild)
}
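The same search and insertion logic can also be written as a runnable sketch; the Python version below is illustrative (the Node class and field names are assumptions, not part of the pseudocode above).

# A runnable Python sketch of the search and insertion pseudocode above.
# The Node class and the function names are illustrative assumptions.
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def find(y, node):
    """Return the node containing y, or None if y is not in the tree."""
    if node is None:
        return None
    if y == node.data:
        return node
    return find(y, node.left) if y < node.data else find(y, node.right)

def insert(y, node):
    """Insert y and return the (possibly new) root of this subtree."""
    if node is None:
        return Node(y)                 # empty spot found: create a new node
    if y < node.data:
        node.left = insert(y, node.left)
    elif y > node.data:
        node.right = insert(y, node.right)
    return node                        # return (possibly unchanged) subtree root

root = None
for key in [8, 3, 10, 1, 6]:
    root = insert(key, root)
print(find(6, root) is not None)       # True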
Deletion – If the node to be removed has no children, we can just delete it.
If the node to be removed has one child, then the node is deleted and the
child is connected directly to the parent node.
To remove a node which has two children, we adopt the following
procedure:
1) We find the minimum value in the right sub-tree
2) We replace the node to be removed with the minimum value found.
3) We then remove the duplicate value from the right sub-tree.
We can delete an existing element from a binary search tree using the
following pseudocode:
Pseudocode for Deleting a Value from a Binary Search Tree
//Purpose: delete data object Y from the Tree
//Inputs: data object Y (to be deleted), binary-search-tree node
//Effect: do nothing if tree does not contain Y;
// else, update binary search tree by deleting the node containing data object Y
delete(Y, node){
if(node = NULL) //nothing to do
return
if(Y < node:data)
delete(Y, node:leftChild)
else if(Y > node:data)
delete(Y, node:rightChild)
else { // found the node to be deleted; take action based on the number of children
if(node:leftChild = NULL and node:rightChild = NULL){
delete node
node = NULL
return}
else if(node:leftChild = NULL){
tempNode = node
node = node:rightChild
delete tempNode}
else if(node:rightChild = NULL){
tempNode = node
node = node:leftChild
delete tempNode}
else { //replace node:data with minimum data from right sub-tree
tempNode = findMin(node:rightChild)
node:data = tempNode:data
delete(node:data,node:rightChild)
}
}
}
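A compact runnable sketch of the same deletion procedure is given below in Python, reusing the illustrative Node class from the earlier sketch; names are assumptions, not part of the pseudocode above.

# A Python sketch of the deletion pseudocode above, reusing the
# illustrative Node class from the earlier sketch.
def find_min(node):
    """The leftmost node of a non-empty subtree holds its minimum value."""
    while node.left is not None:
        node = node.left
    return node

def delete(y, node):
    """Delete y from the subtree rooted at node and return the new root."""
    if node is None:                      # nothing to do
        return None
    if y < node.data:
        node.left = delete(y, node.left)
    elif y > node.data:
        node.right = delete(y, node.right)
    elif node.left is None:               # zero or one child: bypass the node
        return node.right
    elif node.right is None:
        return node.left
    else:                                 # two children: use the right subtree's minimum
        successor = find_min(node.right)
        node.data = successor.data
        node.right = delete(successor.data, node.right)
    return node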
Traversal – Traversing a binary search tree involves visiting all the nodes in
the tree exactly once. In a pre-order traversal we visit the root node first and
then its left and right sub-trees; in an in-order traversal we visit the left
sub-tree, then the node itself, then the right sub-tree, which for a binary
search tree yields the keys in ascending order. We can traverse a binary
search tree recursively using the following pseudocode:
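The sketch below shows an in-order traversal, written in the same style as the earlier pseudocode; the visit(node) step stands for whatever processing (for example, printing node:data) is required.
Pseudocode for Traversing a Binary Search Tree (In-order)
inOrder(node){
if(node = NULL) //empty sub-tree: nothing to visit
return
inOrder(node:leftChild)
visit(node) //process node:data, for example print it
inOrder(node:rightChild)
}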
For the trees in figure 11.2, we can find the average number of comparisons
from the equation Eq: 11.2, where n is the value of the node and x is the level
of the node in the tree.
For the tree of figure 11.2, the average number of comparisons is given as
(1/16 * 1) + (1/32 * 2) + (1/4 * 2) + (1/8 * 3) + (1/32 * 3) + (1/2 * 3) = 83/32 ≈ 2.6.
Here we can see that the tree is not optimal. We could generate all 132
binary search trees on these keys and analyze them to find the optimal one,
but such an exhaustive search quickly becomes impractical.
The total number of binary search trees with n elements is equal to the nth
Catalan number, c(n), given in Eq 11.3.
c(n) = C(2n, n) / (n + 1) for n > 0, c(0) = 1                    Eq: 11.3
where C(2n, n) is the binomial coefficient "2n choose n". The value of c(n)
grows to infinity as fast as 4^n / n^1.5.
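As a quick illustrative check of Eq: 11.3 (the Python function below is not part of the unit's algorithms), c(6) = 132, the number of trees mentioned above:

from math import comb

def catalan(n):
    """c(n) = C(2n, n) / (n + 1), with c(0) = 1 (Eq: 11.3)."""
    return comb(2 * n, n) // (n + 1)

print(catalan(6))   # 132 binary search trees on 6 keys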
Let us use the dynamic programming approach to solve this problem. Let a1,
a2, …, an be the distinct elements given in ascending order and let p1, p2, …, pn
be the probabilities of searching for these elements. Let C[i, j] be the smallest
average number of comparisons made in a binary search tree T_i^j of
elements ai, …, aj, where i and j are integer indices with 1 ≤ i ≤ j ≤ n.
Now let us find the values of C[i, j] for its sub instances. To derive the
recurrence relation using dynamic programming, we have to choose a root ak
among the keys ai, …, aj. For such a binary search tree, the root contains the
key ak, the left sub-tree T_i^{k-1} contains the keys ai, …, ak-1 optimally arranged,
and the right sub-tree T_{k+1}^{j} contains the keys ak+1, …, aj, also optimally
arranged.
Here we are taking advantage of the Principle of Optimality. If we start
counting tree levels at 1 then we can derive the following recurrence
relation:
C[i, j] = min over i ≤ k ≤ j of { p_k · 1
          + Σ (s = i to k-1) p_s · (level of a_s in T_i^{k-1} + 1)
          + Σ (s = k+1 to j) p_s · (level of a_s in T_{k+1}^{j} + 1) }

        = min over i ≤ k ≤ j of { Σ (s = i to k-1) p_s · level of a_s in T_i^{k-1}
          + Σ (s = k+1 to j) p_s · level of a_s in T_{k+1}^{j} } + Σ (s = i to j) p_s

        = min over i ≤ k ≤ j of { C[i, k-1] + C[k+1, j] } + Σ (s = i to j) p_s,
          for 1 ≤ i ≤ j ≤ n.                                        Eq: 11.4
In the recurrence relation given by Eq: 11.4, let us assume that C[i, i-1] = 0
for 1 ≤ i ≤ n+1; we can interpret this as the number of comparisons in the
empty tree. Figure 11.3 shows the values required to compute C[i, j].
Figure 11.3: Dynamic Programming Algorithm for Optimal Binary Search Tree
In figure 11.3, the values needed to compute C[i, j] are found in row i, in the
columns to the left of column j, and in column j, in the rows below row i. The
arrows shown point to the pairs of entries that are added up; the smallest
sum is recorded as the value of C[i, j] (after the probability sum is added).
We fill the table along its diagonals, starting with the zeroes on the main
diagonal and the probabilities pi, 1 ≤ i ≤ n, just above them, and moving
toward the upper right corner.
Let us next discuss the dynamic programming algorithm for binary search
tree optimization.
Dynamic Programming Algorithm for Binary Search Tree
Optimization
//Input: An array P[1..n] of search probabilities for a sorted list of n keys
//Output: Average number of comparisons in successful searches in the
//optimal binary search tree and table of sub trees’ roots in the optimal
//binary search tree
for i ← 1 to n do
    C[i, i-1] ← 0
    C[i, i] ← P[i]
    R[i, i] ← i
C[n+1, n] ← 0
for d ← 1 to n-1 do //diagonal count
    for i ← 1 to n-d do
        j ← i + d
        minval ← ∞
        for k ← i to j do
            if C[i, k-1] + C[k+1, j] < minval
                minval ← C[i, k-1] + C[k+1, j]; kmin ← k
        R[i, j] ← kmin
        sum ← P[i]; for s ← i+1 to j do sum ← sum + P[s]
        C[i, j] ← minval + sum
return C[1, n], R
Let us now trace the dynamic programming algorithm for binary search tree
optimization.
Algorithm Tracing for Binary Search Tree Optimization
P[5]={1,2,3,4,5}, n=5;
C[5,5]=0//array for comparisons in successful search
R[5,5]=0//root array
for i = 1 to 5 do //this loop will occur from i = 1 to i = 5
C[1,0]=0;
The space efficiency of this algorithm is quadratic and the time efficiency is
cubic. We can also see that the values of the root table are always
non-decreasing along each row and column. This limits the values for R[i,j] to
the range R[i,j-1], …, R[i+1,j] and makes it possible to reduce the running
time of the algorithm to Θ(n²).
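The pseudocode above can also be turned into a runnable sketch; the Python version below is an illustrative implementation (function and variable names are not from the unit), and the usage at the end applies it to the probabilities used in this unit.

# A runnable Python sketch of the optimal BST algorithm above.
# P holds the search probabilities p_1..p_n; position 0 is padding,
# so the arrays can be indexed starting from 1 as in the pseudocode.
from fractions import Fraction

def optimal_bst(P):
    n = len(P) - 1
    INF = float("inf")
    # C[i][j]: smallest average number of comparisons for keys a_i..a_j
    C = [[0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]   # R[i][j]: root of that subtree
    for i in range(1, n + 1):
        C[i][i] = P[i]
        R[i][i] = i
    for d in range(1, n):                       # diagonal count
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = INF, i
            for k in range(i, j + 1):
                if C[i][k - 1] + C[k + 1][j] < minval:
                    minval, kmin = C[i][k - 1] + C[k + 1][j], k
            R[i][j] = kmin
            C[i][j] = minval + sum(P[i:j + 1])
    return C[1][n], R

# Probabilities used in this unit's example.
P = [0, Fraction(1, 8), Fraction(1, 32), Fraction(1, 32),
     Fraction(1, 16), Fraction(1, 4), Fraction(1, 2)]
cost, R = optimal_bst(P)
print(cost)       # 63/32, the value computed in table 11.3
print(R[1][6])    # index of the optimal tree's root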
11.3.1 Solving binary search trees using dynamic programming
Let us illustrate the above mentioned algorithm using the six keys that we
used in the previous section. The keys and their probabilities are given in
table 11.1.
Table 11.1: Table of Keys and Probabilities

Key          P      Q      R      S      T      U
Probability  1/8    1/32   1/32   1/16   1/4    1/2
The initial table has zeros in the entries C[i, i-1] and the probabilities p_i in
the entries C[i, i]:

     j=0     1      2      3      4      5      6
i=1   0     1/8
i=2          0     1/32
i=3                 0     1/32
i=4                        0     1/16
i=5                               0     1/4
i=6                                      0     1/2
i=7                                             0
Let us compute C[1,2] as shown in equation Eq: 11.5:

C[1,2] = min over k of:
  k = 1: C[1,0] + C[2,2] + Σ (s = 1 to 2) p_s = 0 + 1/32 + (1/8 + 1/32) = 6/32
  k = 2: C[1,1] + C[3,2] + Σ (s = 1 to 2) p_s = 1/8 + 0 + (1/8 + 1/32) = 9/32
       = 6/32 = 3/16                                              Eq: 11.5

Thus, of the two possible binary search trees containing the keys P and Q,
the optimal one has the key with index 1 (that is, P) as its root, and the
average number of comparisons in a successful search in that tree is 3/16.
Let us complete the above given table. The completed table 11.3 is the main
table.
Table 11.3: Main Table
     j=0     1      2      3      4      5      6
i=1   0     1/8    3/16   9/32  15/32  31/32  63/32
i=2          0     1/32   3/32   7/32  19/32  47/32
i=3                 0     1/32   1/8   15/32  21/16
i=4                        0     1/16   3/8   19/16
i=5                               0     1/4     1
i=6                                      0     1/2
i=7                                             0
Thus we can compute the average number of key comparisons in the
optimal tree to be 63/32. For these probabilities, the optimal tree is shown
in figure 11.4.
Figure 11.4: Optimal Binary Search Tree
Consider a situation where a thief breaks into a store and tries to fill his
knapsack with goods that are as valuable as possible. Figure 11.5 shows the
available goods with their values and weights. There are 3 items given, with
weights 10 kg, 20 kg and 30 kg and values Rs. 60, Rs. 100 and Rs. 120
respectively. The capacity of the knapsack is given as 50 kg. We have to fill
the knapsack with the items so as to get the maximum value without
exceeding the weight limit of 50 kg.
Let us try to fill the knapsack using different items as shown in the
figure 11.6.
First we try to fill it using items 2 and 3; their values add up to Rs 220.
Next we try to fill the knapsack using items 1 and 2, but these weights do
not fill the knapsack completely. Finally, we try items 1 and 3; this also
does not fill the knapsack.
Now let us see the best possible solution for this problem from the
figure 11.7.
Here we take items 1 and 2 in full and two-thirds (20 kg of the 30 kg) of
item 3. The values then add up to Rs 240 (60 + 100 + 80), which is the
maximum value when items may be taken fractionally. Now let us formally
define the Knapsack problem.
Given n items with known weights w1, …, wn and values v1, …, vn and a
knapsack of capacity W, we have to find the most valuable subset of the
items that fits into the knapsack. Let V[i, j] denote the value of an optimal
solution to the sub instance consisting of the first i items and knapsack
capacity j. If the i-th item does not fit into the knapsack, then the value of an
optimal solution composed from the first i items is the same as the value of
an optimal subset selected from the first i-1 items. If it does fit, an optimal
solution either excludes the item (value V[i-1, j]) or includes it (value v_i plus
the value of an optimal subset of the first i-1 items that fits into the remaining
capacity j - w_i), whichever is larger.
Thus we can arrive at a recurrence relation as given in equation Eq: 11.6.
V[i, j] = max{ V[i-1, j], v_i + V[i-1, j - w_i] }    if j - w_i ≥ 0
V[i, j] = V[i-1, j]                                  if j - w_i < 0
Eq: 11.6
The table is filled row by row: row 0 and column 0 are filled with zeros, and
each entry V[i, j] is computed from the two entries V[i-1, j - w_i] and V[i-1, j]
in the previous row. The goal entry is V[n, W]:

       j=0   ...   j-w_i      ...      j      ...    W
 0      0            0                 0             0
 i-1    0      V[i-1, j-w_i]       V[i-1, j]
 i      0                          V[i, j]
 n      0                                           goal
We can find both the time efficiency and the space efficiency of this
algorithm to be Θ(nW). The time required to find the composition of an
optimal solution is in O(n + W).
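The recurrence of Eq: 11.6 translates directly into a bottom-up table computation. Below is a minimal Python sketch (the function name is illustrative); note that Eq: 11.6 selects whole items only, so on the 3-item instance above it returns Rs 220 rather than the fractional Rs 240 discussed earlier.

# A bottom-up Python sketch of Eq: 11.6. V[i][j] is the value of an
# optimal subset of the first i items fitting into capacity j.
def knapsack(weights, values, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]     # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j - wi >= 0:
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][W]

# The instance from figure 11.5: weights in kg, values in rupees, W = 50 kg.
print(knapsack([10, 20, 30], [60, 100, 120], 50))   # 220 (items 2 and 3)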
Activity 1

Item             1    2    3    4
Weight           3    5    2    4
Value (in Rs.)  10   15   25   45

Knapsack capacity is given as W = 10. Analyze the Knapsack problem
using dynamic programming with the help of the values given above.
The memory function technique solves a given problem using a top-down
approach, but maintains a table of the kind that is used by bottom-up
dynamic programming algorithms. We initialize all the table entries to a
'null' symbol. When we have to compute a new value:
The method checks the corresponding entry in the table.
If this entry is not 'null', it is simply retrieved.
If this entry is 'null', the value is computed by the recursive calls and
the result is entered in the table.
The algorithm for solving Knapsack problem using memory functions is
given below.
Algorithm for Solving Knapsack Problem Using Memory Functions
//Input: A nonnegative integer i indicating the number of the first items
//used and a nonnegative integer j indicating the knapsack's capacity
//Output: The value of an optimal feasible subset of the first i items
//Note: uses as global variables the input arrays Weights[1..n], Values[1..n],
//and the table V[0..n, 0..W] whose entries are initialized with -1's except
//for row 0 and column 0, which are initialized with 0's
MFKnapsack(i, j)
if V[i, j] < 0
    if j < Weights[i]
        value ← MFKnapsack(i-1, j)
    else
        value ← max(MFKnapsack(i-1, j),
                    Values[i] + MFKnapsack(i-1, j - Weights[i]))
    V[i, j] ← value
return V[i, j]
Let us now trace the above algorithm that uses memory functions.
Algorithm tracing for Knapsack Problem Using Memory Functions
i=2, j=2, Weights[3]={1,2,3}, Values[3]={3,5,4}, V[5,5]= -1
n=5, W=5, V[0,0]=0
If V[2,2] < 0 //value of V[2,2] is -1, which is less than 0
If 2 < Weights[2] //2 < 2 is false, so we jump to else
value ← MFKnapsack(2-1, 2) //this branch is skipped
else
value ← max(MFKnapsack(2-1, 2), Values[2] + MFKnapsack(2-1, 2 - Weights[2]))
// value ← max(MFKnapsack(1,2), 5 + MFKnapsack(1,0)) //recursive calls
V[2,2] ← value
return V[2,2]
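The same memory-function idea can be written as a runnable Python sketch; the wrapper function below is an illustrative addition, and -1 plays the role of the 'null' marker from the pseudocode.

# A Python sketch of the memory function (memoization) approach above.
# V entries start at -1 ('null'); row 0 and column 0 are set to 0.
def mf_knapsack(weights, values, W):
    n = len(weights)
    V = [[0] * (W + 1)] + [[-1] * (W + 1) for _ in range(n)]
    for row in V:
        row[0] = 0                              # column 0 initialized to 0

    def solve(i, j):
        if V[i][j] < 0:                         # value not computed yet
            if j < weights[i - 1]:              # item i does not fit
                value = solve(i - 1, j)
            else:
                value = max(solve(i - 1, j),
                            values[i - 1] + solve(i - 1, j - weights[i - 1]))
            V[i][j] = value                     # store it in the table
        return V[i][j]

    return solve(n, W)

print(mf_knapsack([10, 20, 30], [60, 100, 120], 50))   # 220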
11.6 Summary
Let us summarize what we have discussed in this unit.
In this unit we recollected the dynamic programming principle. Next we
defined the Principle of Optimality. Principle of Optimality is defined as a
basic dynamic programming principle which helps us to view problems as a
sequence of sub problems. Next we defined the binary search tree and
explained the various operations performed on the tree. We studied the
applicability of the Principle of Optimality on the binary search trees.
In the next section we studied the Knapsack problem. The problem is
defined for a set of items where each item has a weight and a value, and it
asks for the subset of items that maximizes the total value while keeping the
total weight within the knapsack's capacity. We solved the Knapsack
problem using the dynamic programming technique. Next we discussed
memory functions. This technique uses memoization to reduce the
inefficiency caused by solving the same sub problems repeatedly in a plain
recursive (top-down) approach. We also solved the Knapsack problem using
memory functions.
11.7 Glossary
Terms Description
Recurrence relation   A recurrence relation is an equation that recursively
                      defines a sequence, where each term of the sequence is
                      defined as a function of the preceding terms.
NP hard problem       NP-hard problems are problems that are at least as hard
                      as the hardest problems in NP (the class of problems
                      solvable in non-deterministic polynomial time); no
                      polynomial-time algorithms are known for solving them.
11.9 Answers
Self Assessment Questions
1. Optimality
2. Richard Ernest Bellman
3. Profits
4. Binary search trees
5. O(log n)
6. Height
7. Recurrence relation
8. Bounded Knapsack problem
9. Weight
10. Memory functions
11. Memoization
12. Top down
Terminal Questions
1. Refer section 11.2 – Principle of Optimality
2. Refer section 11.3 – Optimal binary search trees
3. Refer section 11. 3 – Optimal binary search trees
4. Refer section 11.3.1 – Solving binary search trees using dynamic
programming
5. Refer section 11.5.1 – Solving Knapsack problem using memory
functions
References
Anany Levitin (2009). Introduction to the Design and Analysis of Algorithms.
Dorling Kindersley, India
E-References
www.cecs.csulb.edu/~ebert/teaching/spring2002/cecs328/.../bst.pdf
www.cs.ubc.ca/~nando/550-2006/lectures/l3.pdf
http://lcm.csa.iisc.ernet.in/dsa/node91.html
http://www.ehow.com/way_5172387_binary-tree-traversal-methods.html
http://www.itl.nist.gov/div897/sqg/dads/HTML/knapsackProblem.html