
Home Exercise 3: Dynamic Programming and Randomized Algorithms
Francesco Buzzi, Roshan Velpula, Haoran Xiong, Jiuduo Wang, Shubh Jain

1 1-Dynamic Programming for the Knapsack Problem (5 points)

Assume a 0-1-knapsack problem instance with weight restriction W = 10 and 5
items with the following profits and weights:

item    1  2  3  4  5
profit  4  3  5  6  2
weight  4  3  4  2  3
Follow the dynamic programming algorithm from the lecture and fill out the
table below with the corresponding profit values of the subproblems of packing
the first i items into a knapsack of capacity j.
What is the optimal packing?

Solution:
i/j 0 1 2 3 4 5 6 7 8 9 10
0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 4 4 4 4 4 4 4
2 0 0 0 3 4 4 4 7 7 7 7
3 0 0 0 3 5 5 5 8 9 9 9
4 0 0 6 6 6 9 11 11 11 14 15
5 0 0 6 6 6 9 11 11 11 14 15
Let the table (matrix) above be V.
Since V[5,10] = V[4,10], item 5 has no effect on the optimal packing; since
V[4,10] ≠ V[3,10], item 4 is in the optimal packing.
Item 4 has weight 2 and profit 6, and V[4−1, 10−2] = V[3,8] = 9 = V[4,10] − 6,
so we move to V[3,8].
Since V[3,8] ≠ V[2,8], item 3 is in the optimal packing.
Item 3 has weight 4 and profit 5, and V[3−1, 8−4] = V[2,4] = 4 = V[3,8] − 5,
so we move to V[2,4].
Since V[2,4] = V[1,4], item 2 has no effect on the optimal packing; since
V[1,4] ≠ V[0,4] = 0, item 1 is in the optimal packing.

So the optimal packing is:

i1 i2 i3 i4 i5
1 0 1 1 0

with a maximum profit of 15.
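The table fill and the backtracking described above can be sketched in Python (a minimal sketch, not the lecture's exact pseudocode; function and variable names are ours):

```python
def knapsack(profits, weights, W):
    """0-1 knapsack: V[i][j] = best profit using the first i items with
    capacity j; then backtrack through V to recover the packing."""
    n = len(profits)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                      # skip item i
            if weights[i - 1] <= j:                    # or take it, if it fits
                V[i][j] = max(V[i][j],
                              V[i - 1][j - weights[i - 1]] + profits[i - 1])
    # Backtrack: item i is packed iff V[i][j] != V[i-1][j]
    packing, j = [0] * n, W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            packing[i - 1] = 1
            j -= weights[i - 1]
    return V[n][W], packing

print(knapsack([4, 3, 5, 6, 2], [4, 3, 4, 2, 3], 10))  # (15, [1, 0, 1, 1, 0])
```

Running it on the instance above reproduces the table's last entry V[5,10] = 15 and the packing (1, 0, 1, 1, 0).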

2 2-Matrix Chain Multiplications (5 points)


Consider the multiplication of n matrices A1 ·A2 · · · An where matrix Ai is an ai -
by- bi matrix. The number of all possible orders of multiplications is exponential
in n and, thus, an enumeration/brute force approach will not be feasible. We
consider dynamic programming instead here. Let C(i, j) be the optimal cost
(in number of basic multiplications) to compute Ai · Ai+1 · · · Aj .
1. Which values of C(i, j) are easy to compute ("initialization of the dynamic
programming") and which value of C(i, j) corresponds to the optimal solution
(the cost of the entire matrix chain multiplication)?

• Solution:
C(i, j) is easy to compute for i = j: a chain of a single matrix needs no
multiplication, so C(i, i) = 0 for all i ("initialization of the dynamic
programming").
The value C(1, n) corresponds to the optimal solution, the cost of the entire
matrix chain multiplication.

2. Consider that the corresponding solution for the subproblem C(i, j) first
computes Ai · · · Ak , then Ak+1 · · · Aj , and finally multiplies the two results,
i.e., Ai · Ai+1 · · · Aj is computed as (Ai · · · Ak ) · (Ak+1 · · · Aj ). Note that the
splitting point k is unknown in advance.
Write down the Bellman equation to compute C(i, j).

• Solution:
Let matrix Ai be a p_{i−1}-by-p_i matrix (so p_{i−1} = a_i and p_i = b_i).
The Bellman equation to compute C(i, j) is

C(i, j) = 0                                                            if i = j
C(i, j) = min over i ≤ k < j of ( C(i, k) + C(k+1, j) + p_{i−1} p_k p_j )   if i < j

3. Consider the example of five matrices A1 (5-by-2), A2 (2-by-10), A3 (10 -
by-1), A4 (1-by-10), and A5 (10-by-2) and complete a table like the following
one with the values of C(i, j) as the dynamic programming approach would do.
What is the actual minimum number of basic multiplications needed?
Solution:
Applying the Bellman equation bottom-up, we obtain the following table with
the minimum numbers of multiplications C(i, j), together with a second table
recording the optimal split point k for each subchain:

i/j 1 2 3 4 5
1 0 100 30 80 60
2 − 0 20 40 44
3 − − 0 100 40
4 − − − 0 20
5 − − − − 0

k 1 2 3 4 5
1 − − 1 3 3
2 − − − 3 3
3 − − − − 3
4 − − − − −
5 − − − − −

In this case, p0 = 5, p1 = 2, p2 = 10, p3 = 1, p4 = 10, p5 = 2


Let the first table above be M and the second table be K, recording the
location of k.
By calculating we get M[1, 5] = 60, which is the minimum number of
multiplications needed.
For C(1, 5), the optimal split is k = 3 (K[1, 5] = 3), so the chain is divided
into (A1 · A2 · A3 ) · (A4 · A5 );
for C(1, 3), the optimal split is k = 1 (K[1, 3] = 1), so the chain is further
divided into A1 · (A2 · A3 ).
The optimal parenthesization is therefore (A1 · (A2 · A3 )) · (A4 · A5 ).

Check:

C(1, 5) = p1 p2 p3 + p3 p4 p5 + p0 p1 p3 + p0 p3 p5 = 20 + 20 + 10 + 10 = 60
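The computation of both tables can be sketched as follows (our own minimal Python sketch of the recurrence, with 1-based indices as in the text):

```python
def matrix_chain(p):
    """Matrix A_i has dimensions p[i-1] x p[i], for i = 1..n.
    Returns (C, K): C[i][j] = minimal number of basic multiplications
    for A_i..A_j, K[i][j] = the optimal split point k."""
    n = len(p) - 1
    INF = float("inf")
    C = [[0] * (n + 1) for _ in range(n + 1)]
    K = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = INF
            for k in range(i, j):             # split (A_i..A_k)(A_{k+1}..A_j)
                cost = C[i][k] + C[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < C[i][j]:
                    C[i][j], K[i][j] = cost, k
    return C, K

C, K = matrix_chain([5, 2, 10, 1, 10, 2])
print(C[1][5], K[1][5], K[1][3])  # 60 3 1
```

The output matches the tables above: 60 multiplications, with splits k = 3 for C(1, 5) and k = 1 for C(1, 3).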

3 Once again: Coin Change Problem (5 points)
Let us consider the coin change problem from the greedy algorithms part of the
lecture: Given a set of n coins with (arbitrary!) values v1 < v2 < . . . < vn and
a change V , the problem consists of finding the minimum number of coins that
sum up to the given change. In addition, we would also like to know which coins
are actually needed to reach this sum. Assume for the remainder that N (s) is
the subproblem of finding the optimal number of coins to reach a certain change
s and that C(s) is the corresponding set of coins used.
a) Which values of N and C do we want to compute to solve the problem?
b) Which values of N and C are easy to compute in the beginning (and how)?
c) Write down the Bellman equation to compute N (s) and C(s) from already
computed optimal values N (s∗ ) and C (s∗ ) (with s∗ < s).

Solution:

To better explain the algorithm and find the Bellman equation, we start with
an example: take as total change V = 9 and as coins the set {1, 4, 5, 6}.
The data structure we use to illustrate the subproblems is a pair of tables,
one for N and one for C, each with a row i for the coin value vi (plus an
initialization row 0) and a column j for each change value 0, . . . , 9. Entry
(i, j) holds the solution of the subproblem with change j using only coins of
values v1 to vi : in the N table the number of coins used, in the C table the
set of coins used.

a) After completing the full tables, the values we want are in position (n, V ),
i.e., the last row and last column of each table: N(n, V ) gives the minimum
number of coins, and C(n, V ) gives a coin set of that minimum cardinality;
the coordinates of the two solutions are the same.

b) The values to initialize are the entries in column 0 and in row 0. Column
0 (including entry (0, 0)) gets the value 0 for N and the empty set for C: no
coins are needed when the change is 0. Row 0, except for (0, 0), represents
subproblems with no coins available; these cannot be solved, which we indicate
with ∞.

In our example, these are our initialized tables:

N 0 1 2 3 4 5 6 7 8 9
0 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
1 0
4 0
5 0
6 0

C 0 1 2 3 4 5 6 7 8 9
0 ∅ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
1 ∅
4 ∅
5 ∅
6 ∅

The Bellman equations to compute C(i, j) and N(i, j) are

C(i, j) = C(i − 1, j)                                  if vi > j
C(i, j) = min { C(i − 1, j), {vi} ∪ C(i, j − vi) }     if vi ≤ j

N(i, j) = N(i − 1, j)                                  if vi > j
N(i, j) = min { N(i − 1, j), 1 + N(i, j − vi) }        if vi ≤ j

In the equation for C, min refers to the minimum cardinality of the two sets.

In our example, these are two full tables, with the solutions highlighted:

N 0 1 2 3 4 5 6 7 8 9
0 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
1 0 1 2 3 4 5 6 7 8 9
4 0 1 2 3 1 2 3 4 2 3
5 0 1 2 3 1 1 2 3 2 2
6 0 1 2 3 1 1 1 2 2 2

C  0  1    2      3        4    5      6        7          8      9
0  ∅  ∞    ∞      ∞        ∞    ∞      ∞        ∞          ∞      ∞
1  ∅  {1}  {1,1}  {1,1,1}  {1,1,1,1}  {1,1,1,1,1}  {1,1,1,1,1,1}  {1,1,1,1,1,1,1}  {1,1,1,1,1,1,1,1}  {1,1,1,1,1,1,1,1,1}
4  ∅  {1}  {1,1}  {1,1,1}  {4}  {4,1}  {4,1,1}  {4,1,1,1}  {4,4}  {4,4,1}
5  ∅  {1}  {1,1}  {1,1,1}  {4}  {5}    {5,1}    {5,1,1}    {4,4}  {5,4}
6  ∅  {1}  {1,1}  {1,1,1}  {4}  {5}    {6}      {6,1}      {4,4}  {5,4}

From the tables we can see that, in this particular case, the greedy algorithm
would not find an optimal solution (it picks the largest coin 6 first and
returns {6, 1, 1, 1}), while dynamic programming finds the optimum {5, 4}.
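A row-by-row computation of N and C can be sketched in Python (our own sketch of the recurrences; for brevity it returns only the final entries and one optimal coin multiset as a list):

```python
def coin_change(coins, V):
    """Row i uses only the first i coin values; column j is the change.
    N[i][j] = minimal number of coins, C[i][j] = one optimal coin multiset
    (None where the subproblem is unsolvable)."""
    INF = float("inf")
    n = len(coins)
    N = [[0 if j == 0 else INF for j in range(V + 1)] for _ in range(n + 1)]
    C = [[[] if j == 0 else None for j in range(V + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        v = coins[i - 1]
        for j in range(1, V + 1):
            N[i][j], C[i][j] = N[i - 1][j], C[i - 1][j]   # coin i not used
            if v <= j and N[i][j - v] + 1 < N[i][j]:      # coin i used
                N[i][j] = N[i][j - v] + 1
                C[i][j] = C[i][j - v] + [v]
    return N[n][V], C[n][V]

print(coin_change([1, 4, 5, 6], 9))  # (2, [4, 5]) — greedy would need 4 coins
print(coin_change([1, 4, 5, 6], 8))  # (2, [4, 4])
```

For V = 9 the result (2, [4, 5]) matches the bottom-right entries of the tables, beating the greedy answer 6 + 1 + 1 + 1.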

4 4-Pure Random Search (6 points)
The first stochastic optimization algorithm, introduced before any genetic algo-
rithm (GA) or evolution strategy (ES), is the so-called pure random search, or
PRS for short.
In the following, we consider the optimization of the following two functions,
both defined on bitstrings of length n :
Example 1 For x ∈ {0, 1}^n, the function OM is defined as

fOM(x) = Σ_{i=1}^{n} x_i

Example 2 For x ∈ {0, 1}^n, the function TZ is defined as

fTZ(x) = Σ_{i=1}^{n} Π_{j=1}^{i} (1 − x_j)

a) Describe in words what the functions fOM and fTZ compute.

• Solution:

For fOM:

As x ∈ {0, 1}^n, each x_i contributes 1 to the sum exactly when x_i = 1, and
0 otherwise.

So fOM(x) is the total number of elements x_i that equal 1, i.e., the number
of ones in the bitstring.

For fTZ:

fTZ(x) = (1 − x1) + (1 − x1)(1 − x2) + · · · + (1 − x1)(1 − x2) · · · (1 − xn)

If x1 = 1, then fTZ(x) = 0 + 0 + · · · + 0 = 0.
If x1 = 0 and x2 = 1, then fTZ(x) = 1 + 0 + · · · + 0 = 1.
In general, if x1 = x2 = · · · = x_{k−1} = 0 and x_k = 1, then
fTZ(x) = 1 + 1 + · · · + 0 = k − 1.
So fTZ(x) is the number of zeros before the first 1 in the bitstring (and n
if x contains no 1 at all).
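The two definitions can be written down directly in Python (a small sketch; the function names are ours):

```python
import math

def f_om(x):
    # OneMax: the sum of the bits counts the ones in the bitstring
    return sum(x)

def f_tz(x):
    # Sum over i of the product of (1 - x_j) for j <= i: each prefix term
    # is 1 exactly while no 1 has appeared yet, so the value equals the
    # length of the leading run of zeros
    return sum(math.prod(1 - x[j] for j in range(i + 1)) for i in range(len(x)))

print(f_om([0, 1, 1, 0, 1]))   # 3
print(f_tz([0, 0, 1, 0, 1]))   # 2
print(f_tz([0, 0, 0, 0, 0]))   # 5
```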

b) What are the maxima of the two functions? What are the values of fOM
and of fT Z at those optima?

• Solution:
fOM is maximized when all elements of the bitstring equal 1, i.e., x_i = 1
for 1 ≤ i ≤ n:

Max(fOM) = Σ_{i=1}^{n} 1 = n

fTZ is maximized when all elements of the bitstring equal 0, i.e., x_i = 0
for 1 ≤ i ≤ n:

Max(fTZ) = 1 + 1 + · · · + 1 = n

c) Compute the expected time (in number of function evaluations) to reach
the optimum as a function of the search space dimension n.
Hint: show that the time to reach the optimum follows a geometric distri-
bution with a parameter to determine.

• Solution:
For the search space Ω = {0, 1}^n, pure random search samples each of the 2^n
bitstrings uniformly at random, so the probability of hitting the (unique)
optimum in a single evaluation is p = 1/2^n.
The probability that the k-th sample (k ≥ 1) is the first to succeed is

Pr(X = k) = (1 − p)^{k−1} p = (1 − 1/2^n)^{k−1} · (1/2^n)

so the number of evaluations X follows a geometric distribution with parameter
p = 1/2^n. Its expected value, and thus the expected time to find the optimum,
is

E(X) = 1/p = 2^n
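A quick simulation supports the geometric-distribution argument (our own sketch; for small n, the empirical mean number of evaluations should be close to 2^n):

```python
import random

def prs_hitting_time(n, is_optimum, rng):
    """Pure random search: sample uniform random bitstrings until the
    optimum is hit; return the number of function evaluations used."""
    t = 0
    while True:
        t += 1
        x = [rng.randrange(2) for _ in range(n)]
        if is_optimum(x):
            return t

rng = random.Random(42)
n, runs = 8, 1000
# Optimum of fOM: the all-ones string; one success has probability 1/2^n
avg = sum(prs_hitting_time(n, lambda x: all(x), rng) for _ in range(runs)) / runs
print(avg, 2 ** n)  # empirical mean of the hitting time vs. 2^n = 256
```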
