5CS4-AOA-Unit-2
Uploaded by MAYANK SAINI

Unit-2

Dynamic Programming
Definition
Dynamic programming (DP) is a general algorithm design technique for solving
problems with overlapping sub-problems. The technique was invented by the American
mathematician Richard Bellman in the 1950s.

Key Idea
The key idea is to save the answers to overlapping smaller sub-problems to avoid
re-computation.

Dynamic Programming Properties


• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their
results in a table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.

Dynamic Programming vs. Divide & Conquer


LIKE divide & conquer, dynamic programming solves problems by combining solutions
to sub-problems. UNLIKE divide & conquer, sub-problems are NOT independent in
dynamic programming.

Divide & Conquer:
1. Partitions a problem into independent smaller sub-problems.
2. Does not store solutions of sub-problems. (Identical sub-problems may arise, so the same computations are performed repeatedly.)
3. Top-down: logically progresses from the initial instance down to the smallest sub-instances via intermediate sub-instances.

Dynamic Programming:
1. Partitions a problem into overlapping sub-problems.
2. Stores solutions of sub-problems, thus avoiding re-computation of the same quantity.
3. Bottom-up: the smallest sub-problems are explicitly solved first, and their results are used to construct solutions to progressively larger sub-instances.
Dynamic Programming vs. Divide & Conquer: EXAMPLE
Computing Fibonacci Numbers

1. Using standard recursive formula:

F(n) = 0                  if n = 0
       1                  if n = 1
       F(n-1) + F(n-2)    if n > 1

Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definitions
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n==0 || n==1 then
return n
else
return F(n-1) + F(n-2)

Algorithm F(n): Analysis


• Is too expensive as it has repeated calculation of smaller Fibonacci numbers.
• Exponential order of growth.

(Recursion tree: F(n) calls F(n-1) and F(n-2); F(n-1) in turn calls F(n-2) and F(n-3),
and F(n-2) calls F(n-3) and F(n-4), so the same values are recomputed many times.)


2. Using Dynamic Programming:
Algorithm F(n)
// Computes the nth Fibonacci number by using the dynamic programming method
// Input: A non-negative integer n
// Output: The nth Fibonacci number
A[0] ← 0
A[1] ← 1
for i ← 2 to n do
    A[i] ← A[i-1] + A[i-2]
return A[n]

Algorithm F(n): Analysis


• Since it caches previously computed values, it avoids repeated computation of the
same sub-instance.
• Linear order of growth.
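The tabulated algorithm above can be sketched in Python (a minimal illustration; the list `a` plays the role of the table A in the pseudocode):

```python
def fib(n):
    """Compute the nth Fibonacci number bottom-up in O(n) time."""
    if n < 2:
        return n
    a = [0] * (n + 1)              # table of answers to smaller sub-problems
    a[1] = 1
    for i in range(2, n + 1):
        a[i] = a[i - 1] + a[i - 2]  # each value is computed exactly once
    return a[n]
```

For example, fib(10) returns 55. Since only the last two entries are ever read, the table can be replaced by two variables, reducing the extra space to O(1).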

Rules of Dynamic Programming


1. OPTIMAL SUB-STRUCTURE: An optimal solution to a problem contains
optimal solutions to sub-problems
2. OVERLAPPING SUB-PROBLEMS: A recursive solution contains a “small”
number of distinct sub-problems repeated many times
3. BOTTOM UP FASHION: Computes the solution in a bottom-up fashion in the
final step

Three basic components of Dynamic Programming solution


The development of a dynamic programming algorithm must have the following three
basic components
1. A recurrence relation
2. A tabular computation
3. A backtracking procedure

Example problems that can be solved using the dynamic programming method
1. Computing binomial co-efficient
2. Compute the longest common subsequence
3. Warshall’s algorithm for transitive closure
4. Floyd’s algorithm for all-pairs shortest paths
5. Some instances of difficult discrete optimization problems like
knapsack problem
traveling salesperson problem
Binomial Co-efficient
Definition:
The binomial co-efficient C(n,k) is the number of ways of choosing a subset of k
elements from a set of n elements.
1. Factorial definition
For non-negative integers n & k, we have
C(n,k) = n!/ k! (n-k)!
= (n(n-1)… (n-k+1))/ k(k-1)…1 if k ∈ {0,1,…,n}
and
C(n,k) = 0 if k > n

2. Recursive definition

1 if k=0
C(n,k) = 1 if n = k
C(n-1,k-1) + C(n-1, k) if n>k>0

Solution
Using crude DIVIDE & CONQUER method we can have the algorithm as follows:

Algorithm binomial(n, k)
if k == 0 OR k == n
    return 1
else
    return binomial(n-1, k) + binomial(n-1, k-1)

Algorithm binomial (n, k): Analysis


• Re-computes values a large number of times.
• In the worst case (k = n/2), the number of additions grows exponentially, roughly as 2^n/√n.
(Recursion tree for C(4,2): C(4,2) = C(3,2) + C(3,1); C(3,2) = C(2,2) + C(2,1);
C(3,1) = C(2,1) + C(2,0); note that C(2,1) is computed twice.)
Using DYNAMIC PROGRAMMING method: This approach stores the value of
C(n,k) as they are computed i.e. Record the values of the binomial co-efficient in a table
of n+1 rows and k+1 columns, numbered from 0 to n and 0 to k respectively.

Table for computing binomials is as follows:

0 1 2 … k-1 k
0 1
1 1 1
2

n-1 C(n-1, k-1) C(n-1, k)
n C(n, k)

Algorithm binomial(n, k)
// Computes C(n, k) using dynamic programming
// Input: integers n ≥ k ≥ 0
// Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j == 0 or j == i
            then A[i, j] ← 1
        else
            A[i, j] ← A[i-1, j-1] + A[i-1, j]
return A[n, k]

Algorithm binomial (n, k): Analysis


• Input size: n, k
• Basic operation: Addition
• Let A(n, k) be the total number of additions made by the algorithm in computing C(n, k).
• The first k+1 rows of the table form a triangle while the remaining n-k rows form
a rectangle. Therefore we have two parts in A(n, k):
A(n, k) = Σ_{i=1}^{k} (i-1) + Σ_{i=k+1}^{n} k = (k-1)k/2 + k(n-k) ∈ Θ(nk)
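The table-filling algorithm above can be sketched in Python (an illustrative sketch; `a` is the (n+1)-by-(k+1) table from the pseudocode):

```python
def binomial(n, k):
    """Compute C(n, k) by filling Pascal's triangle row by row."""
    a = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                a[i][j] = 1                               # C(i, 0) = C(i, i) = 1
            else:
                a[i][j] = a[i - 1][j - 1] + a[i - 1][j]   # Pascal's rule
    return a[n][k]
```

For example, binomial(4, 2) returns 6, matching the recursion tree example above.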
Warshall’s Algorithm
-to find TRANSITIVE CLOSURE

Some useful definitions:


• Directed graph: A graph whose every edge is directed is called a directed graph,
or digraph.
• Adjacency matrix: The adjacency matrix A = {aij} of a directed graph is the
boolean matrix that has
  o 1 - if there is a directed edge from the ith vertex to the jth vertex
  o 0 - otherwise
• Transitive closure: The transitive closure of a directed graph with n vertices can be
defined as the n-by-n matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n)
and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a
directed path of positive length) from the ith vertex to the jth vertex; otherwise
tij is 0.
The transitive closure provides reachability information about a digraph.

Computing Transitive Closure:


• We can perform DFS/BFS starting at each vertex.
• A traversal starting at the ith vertex gives information about the vertices reachable from the ith vertex.
• Drawback: This method traverses the same graph several times.
• Efficiency: O(n(n+m))
• Alternatively, we can use dynamic programming: Warshall's algorithm.

Underlying idea of Warshall’s algorithm:


• Let A denote the initial boolean matrix.
• The element R(k)[i, j] in the ith row and jth column of matrix R(k) (k = 0, 1, …, n) is
equal to 1 if and only if there exists a directed path from the ith vertex to the jth vertex
with intermediate vertices, if any, numbered not higher than k.
• Recursive definition:
• Case 1:
A path from vi to vj restricted to using only vertices from {v1, v2, …, vk} as
intermediate vertices does not use vk. Then
R(k)[i, j] = R(k-1)[i, j].
• Case 2:
A path from vi to vj restricted to using only vertices from {v1, v2, …, vk} as
intermediate vertices does use vk. Then
R(k)[i, j] = R(k-1)[i, k] AND R(k-1)[k, j].
We conclude:
R(k)[i, j] = R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
NOTE:
• If an element rij is 1 in R(k-1), it remains 1 in R(k).
• If an element rij is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the
element in its row i and column k and the element in its column j and row k are
both 1's in R(k-1).

Algorithm:

Algorithm Warshall(A[1..n, 1..n])
// Computes the transitive closure matrix
// Input: Adjacency matrix A
// Output: Transitive closure matrix R

R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
return R(n)
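Warshall's algorithm translates almost line for line into Python (a sketch; the matrix is updated in place, a common space-saving variant of keeping a separate R(k) for each k):

```python
def warshall(adj):
    """Transitive closure of a digraph given as an n-by-n 0/1 adjacency matrix."""
    n = len(adj)
    r = [row[:] for row in adj]        # R(0) = A; work on a copy
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if not r[i][j] and r[i][k] and r[k][j]:
                    r[i][j] = 1        # i reaches j through k
    return r
```

Running it on the adjacency matrix of the four-vertex example below (edges A→C, B→A, B→D, D→B) yields the same transitive closure computed step by step in the solution.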

Example:
Find Transitive closure for the given digraph using Warshall’s algorithm.

(Digraph on vertices A, B, C, D with edges A→C, B→A, B→D and D→B, as recorded in the
adjacency matrix below.)

Solution:

R(0) = A B C D
A 0 0 1 0
B 1 0 0 1
C 0 0 0 0
D 0 1 0 0
k = 1 (vertex 1 can be an intermediate node):

R(0) =  A B C D        R(1) =  A B C D
     A  0 0 1 0             A  0 0 1 0
     B  1 0 0 1             B  1 0 1 1
     C  0 0 0 0             C  0 0 0 0
     D  0 1 0 0             D  0 1 0 0

R1[2,3] = R0[2,3] OR (R0[2,1] AND R0[1,3]) = 0 OR (1 AND 1) = 1
k = 2 (vertices {1,2} can be intermediate nodes):

R(1) =  A B C D        R(2) =  A B C D
     A  0 0 1 0             A  0 0 1 0
     B  1 0 1 1             B  1 0 1 1
     C  0 0 0 0             C  0 0 0 0
     D  0 1 0 0             D  1 1 1 1

R2[4,1] = R1[4,1] OR (R1[4,2] AND R1[2,1]) = 0 OR (1 AND 1) = 1
R2[4,3] = R1[4,3] OR (R1[4,2] AND R1[2,3]) = 0 OR (1 AND 1) = 1
R2[4,4] = R1[4,4] OR (R1[4,2] AND R1[2,4]) = 0 OR (1 AND 1) = 1

k = 3 (vertices {1,2,3} can be intermediate nodes): NO CHANGE

R(2) = R(3) =  A B C D
            A  0 0 1 0
            B  1 0 1 1
            C  0 0 0 0
            D  1 1 1 1
k = 4 (vertices {1,2,3,4} can be intermediate nodes):

R(3) =  A B C D        R(4) =  A B C D
     A  0 0 1 0             A  0 0 1 0
     B  1 0 1 1             B  1 1 1 1
     C  0 0 0 0             C  0 0 0 0
     D  1 1 1 1             D  1 1 1 1

R4[2,2] = R3[2,2] OR (R3[2,4] AND R3[4,2]) = 0 OR (1 AND 1) = 1

R(4) =  A B C D
     A  0 0 1 0
     B  1 1 1 1
     C  0 0 0 0
     D  1 1 1 1

R(4) is the TRANSITIVE CLOSURE for the given graph.

Efficiency:
• Time efficiency is Θ(n^3).
• Space efficiency: Requires extra space for separate matrices for
recording intermediate results of the algorithm.
Floyd’s Algorithm to find
-ALL PAIRS SHORTEST PATHS

Some useful definitions:


• Weighted graph: Each edge has a weight (an associated numerical value). Edge
weights may represent costs, distances/lengths, capacities, etc., depending on the
problem.
• Weight matrix: W(i, j) is
  o 0 if i = j
  o ∞ if there is no edge between i and j
  o the weight of the edge if there is an edge between i and j

Problem statement:
Given a weighted graph G = (V, E), the all-pairs shortest paths problem is to find
the shortest path between every pair of vertices vi, vj ∈ V.

Solution:
A number of algorithms are known for solving All pairs shortest path problem
• Matrix multiplication based algorithm
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd's algorithm

Underlying idea of Floyd’s algorithm:


• Let W denote the initial weight matrix.
• Let D(k) [ i, j] denote cost of shortest path from i to j whose intermediate vertices
are a subset of {1,2,…,k}.
• Recursive Definition
Case 1:
A shortest path from vi to vj restricted to using only vertices from
{v1,v2,…,vk} as intermediate vertices does not use vk. Then
D(k) [ i, j ] = D(k-1) [ i, j ].
Case 2:
A shortest path from vi to vj restricted to using only vertices from
{v1, v2, …, vk} as intermediate vertices does use vk. Then
D(k)[i, j] = D(k-1)[i, k] + D(k-1)[k, j].

We conclude:
D(k)[ i, j ] = min { D(k-1) [ i, j ], D(k-1) [ i, k ] + D(k-1) [ k, j ] }
Algorithm:

Algorithm Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix D of shortest-path lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
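Floyd's algorithm differs from Warshall's only in using min/+ in place of OR/AND (a Python sketch; `float('inf')` stands for ∞, and the matrix is updated in place):

```python
INF = float('inf')

def floyd(w):
    """All-pairs shortest-path lengths from an n-by-n weight matrix w."""
    n = len(w)
    d = [row[:] for row in w]          # D(0) = W; work on a copy
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

On the three-vertex example below it produces the same distance matrix D(3) derived step by step in the solution.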

Example:
Find All pairs shortest paths for the given weighted connected graph using
Floyd’s algorithm.

(Weighted digraph on vertices A, B, C with edges A→B of weight 2, A→C of weight 5,
B→A of weight 4 and C→B of weight 3, as recorded in the weight matrix below.)

Solution:

D(0) = A B C
A 0 2 5
B 4 0 ∞
C ∞ 3 0
k = 1 (vertex 1 can be an intermediate node):

D(0) =  A B C        D(1) =  A B C
     A  0 2 5             A  0 2 5
     B  4 0 ∞             B  4 0 9
     C  ∞ 3 0             C  ∞ 3 0

D1[2,3] = min { D0[2,3], D0[2,1] + D0[1,3] } = min { ∞, 4 + 5 } = 9

k = 2 (vertices {1,2} can be intermediate nodes):

D(1) =  A B C        D(2) =  A B C
     A  0 2 5             A  0 2 5
     B  4 0 9             B  4 0 9
     C  ∞ 3 0             C  7 3 0

D2[3,1] = min { D1[3,1], D1[3,2] + D1[2,1] } = min { ∞, 3 + 4 } = 7

k = 3 (vertices {1,2,3} can be intermediate nodes): NO CHANGE

D(2) = D(3) =  A B C
            A  0 2 5
            B  4 0 9
            C  7 3 0

D(3) =  A B C
     A  0 2 5
     B  4 0 9
     C  7 3 0

D(3) gives the ALL PAIRS SHORTEST PATHS for the given graph.
0/1 Knapsack Problem
Memory function
Definition:
Given a set of n items of known weights w1,…,wn and values v1,…,vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fit into the
knapsack.
Knapsack problem is an OPTIMIZATION PROBLEM

Dynamic programming approach to solve knapsack problem

Step 1:
Identify the smaller sub-problems. If items are labeled 1..n, then a sub-problem would be
to find an optimal solution for Sk = {items labeled 1, 2, .. k}

Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller
problems.
Initial conditions:
V[ 0, j ] = 0 for j ≥ 0
V[ i, 0 ] = 0 for i ≥ 0

Recursive step:
V[i, j] = max { V[i-1, j], vi + V[i-1, j - wi] }   if j - wi ≥ 0
          V[i-1, j]                                 if j - wi < 0
Step 3:
Bottom up computation using iteration
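Steps 1-3 can be sketched as a bottom-up Python routine (an illustrative sketch; V[i][j] is the table from the recurrence, with item i having weight weights[i-1] and value values[i-1]):

```python
def knapsack(weights, values, W):
    """0/1 knapsack: V[i][j] = best value using items 1..i within capacity j."""
    n = len(weights)
    # Initial conditions: row 0 and column 0 are all 0.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j - wi >= 0:   # item i fits: take the better of skipping or including it
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:             # item i does not fit
                V[i][j] = V[i - 1][j]
    return V[n][W]
```

On the instance solved below (weights 2, 3, 4, 5; values 3, 4, 5, 6; W = 5) it returns 7, matching V[4, 5] in the worked table.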

Question:
Apply bottom-up dynamic programming algorithm to the following instance of the
knapsack problem Capacity W= 5

Item # Weight (Kg) Value (Rs.)


1 2 3
2 3 4
3 4 5
4 5 6

Solution:
Using dynamic programming approach, we have:
Step Calculation Table
1 Initial conditions:
V[ 0, j ] = 0 for j ≥ 0 V[i,j] j=0 1 2 3 4 5
V[ i, 0 ] = 0 for i ≥ 0 i=0 0 0 0 0 0 0
1 0
2 0
3 0
4 0
2 W1 = 2,
Available knapsack capacity = 1 V[i,j] j=0 1 2 3 4 5
W1 > WA, CASE 1 holds: i=0 0 0 0 0 0 0
V[ i, j ] = V[ i-1, j ] 1 0 0
V[ 1,1] = V[ 0, 1 ] = 0 2 0
3 0
4 0
3 W1 = 2,
Available knapsack capacity = 2 V[i,j] j=0 1 2 3 4 5
W1 = WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3
vi +V[ i-1, j - wi ] } 2 0
V[ 1,2] = max { V[ 0, 2 ], 3 0
3 +V[ 0, 0 ] }
4 0
= max { 0, 3 + 0 } = 3
4 W1 = 2,
Available knapsack capacity = V[i,j] j=0 1 2 3 4 5
3,4,5 i=0 0 0 0 0 0 0
W1 < WA, CASE 2 holds: 1 0 0 3 3 3 3
V[ i, j ] = max { V[ i-1, j ], 2 0
vi +V[ i-1, j - wi ] } 3 0
V[ 1,3] = max { V[ 0, 3 ],
4 0
3 +V[ 0, 1 ] }
= max { 0, 3 + 0 } = 3
5 W2 = 3,
Available knapsack capacity = 1 V[i,j] j=0 1 2 3 4 5
W2 >WA, CASE 1 holds: i=0 0 0 0 0 0 0
V[ i, j ] = V[ i-1, j ] 1 0 0 3 3 3 3
V[ 2,1] = V[ 1, 1 ] = 0 2 0 0
3 0
4 0
6 W2 = 3,
Available knapsack capacity = 2 V[i,j] j=0 1 2 3 4 5
W2 >WA, CASE 1 holds: i=0 0 0 0 0 0 0
V[ i, j ] = V[ i-1, j ] 1 0 0 3 3 3 3
V[ 2,2] = V[ 1, 2 ] = 3 2 0 0 3
3 0
4 0
7 W2 = 3,
Available knapsack capacity = 3 V[i,j] j=0 1 2 3 4 5
W2 = WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4
V[ 2,3] = max { V[ 1, 3 ], 3 0
4 +V[ 1, 0 ] } 4 0
= max { 3, 4 + 0 } = 4
8 W2 = 3,
Available knapsack capacity = 4 V[i,j] j=0 1 2 3 4 5
W2 < WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4 4
V[ 2,4] = max { V[ 1, 4 ], 3 0
4 +V[ 1, 1 ] } 4 0
= max { 3, 4 + 0 } = 4

9 W2 = 3,
Available knapsack capacity = 5 V[i,j] j=0 1 2 3 4 5
W2 < WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4 4 7
V[ 2,5] = max { V[ 1, 5 ], 3 0
4 +V[ 1, 2 ] }
4 0
= max { 3, 4 + 3 } = 7

10 W3 = 4,
Available knapsack capacity = V[i,j] j=0 1 2 3 4 5
1,2,3 i=0 0 0 0 0 0 0
W3 > WA, CASE 1 holds: 1 0 0 3 3 3 3
V[ i, j ] = V[ i-1, j ] 2 0 0 3 4 4 7
3 0 0 3 4
4 0
11 W3 = 4,
Available knapsack capacity = 4 V[i,j] j=0 1 2 3 4 5
W3 = WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4 4 7
V[ 3,4] = max { V[ 2, 4 ], 3 0 0 3 4 5
5 +V[ 2, 0 ] }
4 0
= max { 4, 5 + 0 } = 5

12 W3 = 4,
Available knapsack capacity = 5 V[i,j] j=0 1 2 3 4 5
W3 < WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4 4 7
V[ 3,5] = max { V[ 2, 5 ], 3 0 0 3 4 5 7
5 +V[ 2, 1 ] }
4 0
= max { 7, 5 + 0 } = 7

13 W4 = 5,
Available knapsack capacity = V[i,j] j=0 1 2 3 4 5
1,2,3,4 i=0 0 0 0 0 0 0
W4 > WA, CASE 1 holds: 1 0 0 3 3 3 3
V[ i, j ] = V[ i-1, j ] 2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5
14 W4 = 5,
Available knapsack capacity = 5 V[i,j] j=0 1 2 3 4 5
W4 = WA, CASE 2 holds: i=0 0 0 0 0 0 0
V[ i, j ] = max { V[ i-1, j ], 1 0 0 3 3 3 3
vi +V[ i-1, j - wi ] } 2 0 0 3 4 4 7
V[ 4,5] = max { V[ 3, 5 ], 3 0 0 3 4 5 7
6 +V[ 3, 0 ] }
4 0 0 3 4 5 7
= max { 7, 6 + 0 } = 7

Maximal value is V [ 4, 5 ] = 7/-

What is the composition of the optimal subset?


The composition of the optimal subset is found by tracing back the computations
for the entries in the table.
Step Table Remarks
1
V[i,j] j=0 1 2 3 4 5 V[ 4, 5 ] = V[ 3, 5 ]
i=0 0 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 4 NOT included in the
2 0 0 3 4 4 7 subset
3 0 0 3 4 5 7
4 0 0 3 4 5 7
2
V[i,j] j=0 1 2 3 4 5 V[ 3, 5 ] = V[ 2, 5 ]
i=0 0 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 3 NOT included in the
2 0 0 3 4 4 7 subset
3 0 0 3 4 5 7
4 0 0 3 4 5 7
3
V[i,j] j=0 1 2 3 4 5 V[ 2, 5 ] ≠ V[ 1, 5 ]
i=0 0 0 0 0 0 0
1 0 0 3 3 3 3 ITEM 2 included in the subset
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
4 Since item 2 is included in the knapsack:
Weight of item 2 is 3kg, therefore,
remaining capacity of the knapsack is
(5 - 3 =) 2kg V[ 1, 2 ] ≠ V[ 0, 2 ]

V[i,j] j=0 1 2 3 4 5 ITEM 1 included in the subset


i=0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
5 Since item 1 is included in the knapsack: Optimal subset: { item 1, item 2 }
Weight of item 1 is 2kg, therefore,
remaining capacity of the knapsack is Total weight is: 5kg (2kg + 3kg)
(2 - 2 =) 0 kg. Total profit is: 7/- (3/- + 4/-)
Efficiency:
• Running time of Knapsack problem using dynamic programming algorithm is:
O( n * W )
• Time needed to find the composition of an optimal solution is: O( n + W )

Memory function

• Memory function combines the strength of top-down and bottom-up approaches


• It solves ONLY sub-problems that are necessary and does it ONLY ONCE.

The method:
• Uses top-down manner.
• Maintains table as in bottom-up approach.
• Initially, all the table entries are initialized with special “null” symbol to
indicate that they have not yet been calculated.
• Whenever a new value needs to be calculated, the method checks
the corresponding entry in the table first:
• If entry is NOT “null”, it is simply retrieved from the table.
• Otherwise, it is computed by the recursive call whose result is then recorded in
the table.

Algorithm:

Algorithm MFKnap(i, j)
if V[i, j] < 0
    if j < Weights[i]
        value ← MFKnap(i-1, j)
    else
        value ← max { MFKnap(i-1, j),
                      Values[i] + MFKnap(i-1, j - Weights[i]) }
    V[i, j] ← value
return V[i, j]
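The memory-function algorithm can be sketched in Python (a sketch; -1 is the "null" marker, and row 0 and column 0 are pre-filled with the initial conditions):

```python
def mf_knapsack(weights, values, W):
    """Top-down 0/1 knapsack with a memoization table V; -1 = not yet computed."""
    n = len(weights)
    # Row 0 and column 0 hold the initial conditions (0); everything else is "null".
    V = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]

    def mfknap(i, j):
        if V[i][j] < 0:                          # entry not yet computed
            if j < weights[i - 1]:               # item i cannot fit
                value = mfknap(i - 1, j)
            else:
                value = max(mfknap(i - 1, j),
                            values[i - 1] + mfknap(i - 1, j - weights[i - 1]))
            V[i][j] = value                      # record the result
        return V[i][j]

    return mfknap(n, W)
```

On the instance below it returns 7 while filling in only the handful of table entries traced in the solution.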

Example:
Apply memory function method to the following instance of the knapsack problem
Capacity W= 5

Item # Weight (Kg) Value (Rs.)


1 2 3
2 3 4
3 4 5
4 5 6

Solution:
Using memory function approach, we have:
Computation (table snapshots after each newly computed entry):

1. Initially, all the table entries are initialized with a special "null" symbol (here -1)
to indicate that they have not yet been calculated; row 0 and column 0 hold the initial
conditions (all 0).
2. The call MFKnap(4, 5) recurses down to MFKnap(1, 5) = max { MFKnap(0, 5),
3 + MFKnap(0, 3) } = max { 0, 3 + 0 } = 3, so V[1, 5] = 3.
3. MFKnap(1, 2) = max { MFKnap(0, 2), 3 + MFKnap(0, 0) } = max { 0, 3 + 0 } = 3,
so V[1, 2] = 3.
4. MFKnap(2, 5) = max { MFKnap(1, 5), 4 + MFKnap(1, 2) } = max { 3, 4 + 3 } = 7,
so V[2, 5] = 7.
5. MFKnap(2, 1) reduces to MFKnap(1, 1) = MFKnap(0, 1) = 0, so V[2, 1] = 0; then
MFKnap(3, 5) = max { MFKnap(2, 5), 5 + MFKnap(2, 1) } = max { 7, 5 + 0 } = 7,
so V[3, 5] = 7.
6. MFKnap(4, 5) = max { MFKnap(3, 5), 6 + MFKnap(3, 0) } = max { 7, 6 + 0 } = 7,
so V[4, 5] = 7.

Final table (entries still -1 were never needed):

V[i,j]  j=0  1   2   3   4   5
i=0      0   0   0   0   0   0
1        0   0   3  -1  -1   3
2        0   0  -1  -1  -1   7
3        0  -1  -1  -1  -1   7
4        0  -1  -1  -1  -1   7

The composition of the optimal subset is found by tracing back the computations for the
entries in the table, as done with the earlier knapsack problem.

Conclusion:
Optimal subset: { item 1, item 2 }

Total weight is: 5kg (2kg + 3kg)


Total profit is: 7/- (3/- + 4/-)

Efficiency:
• Time efficiency same as bottom up algorithm: O( n * W ) + O( n + W )
• Just a constant factor gain by using memory function
• Less space efficient than a space efficient version of a bottom-up algorithm
Matrix-chain multiplication
• Matrix-chain multiplication is an example of a problem solved by dynamic programming.
We are given a sequence (chain) <A1, A2, …, An> of n matrices to be multiplied, and we
wish to compute the product A1A2…An.
• We can evaluate the above expression using the standard algorithm for multiplying pairs
of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in
how the matrices are multiplied together.
• Matrix multiplication is associative, so all parenthesizations yield the same product.
A product of matrices is fully parenthesized if it is either a single matrix or the product of
two fully parenthesized matrix products, surrounded by parentheses. For example, if the
chain of matrices is <A1, A2, A3, A4> then we can fully parenthesize the product
A1A2A3A4 in five distinct ways:
(A1(A2(A3A4))),
(A1((A2A3)A4)),
((A1A2)(A3A4)),
((A1(A2A3))A4),
(((A1A2)A3)A4).
• How we parenthesize a chain of matrices can have a dramatic impact on the cost of
evaluating the product. Consider first the cost of multiplying two matrices; the standard
algorithm generalizes the SQUARE-MATRIX-MULTIPLY procedure.

• We can multiply two matrices A and B only if they are compatible: the number of
columns of A must equal the number of rows of B.
• If A is a p x q matrix and B is a q x r matrix, the resulting matrix C is a p x r matrix.
• To illustrate the different costs incurred by different parenthesizations of a matrix
product, consider the problem of a chain <A1, A2, A3> of three matrices.
• Suppose that the dimensions of the matrices are 10 x 100, 100 x 5, and 5 x 50, respectively.
If we multiply according to the parenthesization ((A1A2)A3), we perform 10 · 100 · 5 = 5000
scalar multiplications to compute the 10 x 5 matrix product A1A2, plus another 10 · 5 · 50 =
2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar
multiplications.
• According to the parenthesization (A1(A2A3)), we perform 100 · 5 · 50 = 25,000 scalar
multiplications to compute the 100 x 50 matrix product A2A3, plus another 10 · 100 · 50
= 50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar
multiplications. Thus, computing the product according to the first parenthesization is 10
times faster.
• We state the matrix-chain multiplication problem as follows: given a chain
<A1, A2, …, An> of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension
p_{i-1} x p_i, fully parenthesize the product A1A2…An in a way that minimizes the
number of scalar multiplications.

1. Counting the number of parenthesizations


Before solving the matrix-chain multiplication problem by dynamic programming, let us
convince ourselves that exhaustively checking all possible parenthesizations does not
yield an efficient algorithm. Denote the number of alternative parenthesizations of a
sequence of n matrices by P(n). When n = 1, we have just one matrix and therefore only
one way to fully parenthesize the matrix product. When n ≥ 2, a fully parenthesized
matrix product is the product of two fully parenthesized matrix subproducts, and the split
between the two subproducts may occur between the kth and (k+1)st matrices for any
k = 1, 2, …, n-1. Thus, we obtain the recurrence

P(n) = 1                              if n = 1
       Σ_{k=1}^{n-1} P(k) P(n-k)      if n ≥ 2

The solution of this recurrence is related to the Catalan numbers and grows
exponentially in n, so exhaustive checking is infeasible.

2. Applying dynamic programming


We shall use the dynamic-programming method to determine how to optimally
parenthesize a matrix chain.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution.
4. Construct an optimal solution from computed information.
The m and s tables computed by MATRIX-CHAIN-ORDER for n = 6 and the following matrix
dimensions (the standard textbook instance): A1: 30 x 35, A2: 35 x 15, A3: 15 x 5,
A4: 5 x 10, A5: 10 x 20, A6: 20 x 25.

The tables are rotated so that the main diagonal runs horizontally. The m table uses only the
main diagonal and upper triangle, and the s table uses only the upper triangle. The minimum
number of scalar multiplications to multiply the 6 matrices is m[1,6] = 15,125. Of the darker
entries, the pairs that have the same shading are taken together in line 10 when computing.

3. Constructing optimal solution


Although MATRIX-CHAIN-ORDER determines the optimal number of scalar multiplications
needed to compute a matrix-chain product, it does not directly show how to multiply the
matrices. The following recursive procedure prints an optimal parenthesization of <Ai, Ai+1, ..
Aj i>, given the s table computed by MATRIX-CHAINORDER and the indices i and j . The
initial call PRINT-OPTIMAL-PARENS S(1,…n)prints an optimal parenthesization of <A1;A2;
…;An>.
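The two procedures can be sketched in Python (a sketch following the standard textbook pseudocode; p[0..n] holds the dimensions, so matrix Ai is p[i-1] x p[i], and the tables are indexed from 1):

```python
from math import inf

def matrix_chain_order(p):
    """Return (m, s): m[i][j] = min scalar multiplications for Ai..Aj,
    s[i][j] = index k of an optimal split (Ai..Ak)(Ak+1..Aj)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):               # length of the chain Ai..Aj
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = inf
            for k in range(i, j):                # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def print_optimal_parens(s, i, j):
    """Return an optimal parenthesization of Ai..Aj as a string."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(%s%s)" % (print_optimal_parens(s, i, k),
                       print_optimal_parens(s, k + 1, j))
```

For the three-matrix example above (dimensions 10 x 100, 100 x 5, 5 x 50, i.e. p = [10, 100, 5, 50]), m[1][3] is 7500 and the optimal parenthesization is ((A1A2)A3).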
Longest Common Subsequence (LCS)
• Biological applications often need to compare the DNA of two (or more) different
organisms. A strand of DNA consists of a string of molecules called bases, where the
possible bases are adenine, guanine, cytosine, and thymine. Representing each of these
bases by its initial letter, we can express a strand of DNA as a string over the finite set
{A, C, G, T}.
• For example, the DNA of one organism may be
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA, and the DNA of another
organism may be S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA.
• One reason to compare two strands of DNA is to determine how "similar" the two strands
are, as some measure of how closely related the two organisms are.
• For example, we can say that two DNA strands are similar if one is a substring of the
other. In our example, neither S1 nor S2 is a substring of the other. Alternatively, we
could say that two strands are similar if the number of changes needed to turn one into
the other is small. Yet another way to measure the similarity of strands S1 and S2 is by
finding a third strand S3 in which the bases of S3 appear in each of S1 and S2; these bases
must appear in the same order, but not necessarily consecutively. The longer the strand
S3 we can find, the more similar S1 and S2 are. In our example, the longest such strand
S3 is GTCGTCGGAAGCCGGCCGAA.
• A subsequence of a given sequence is just the given sequence with zero or more elements
left out. Formally, given a sequence X = <x1, x2, …, xm>, another sequence
Z = <z1, z2, …, zk> is a subsequence of X if there exists a strictly increasing sequence
<i1, i2, …, ik> of indices of X such that for all j = 1, 2, …, k, we have x_{ij} = zj. For
example, Z = <B, C, D, B> is a subsequence of X = <A, B, C, B, D, A, B> with
corresponding index sequence <2, 3, 5, 7>.
• In the longest-common-subsequence problem, we are given two sequences
X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> and wish to find a maximum-length common
subsequence of X and Y. This section shows how to efficiently solve the LCS problem
using dynamic programming.

1. Characterizing a longest common subsequence


In a brute-force approach to solving the LCS problem, we would enumerate all
subsequences of X and check each subsequence to see whether it is also a subsequence of
Y, keeping track of the longest subsequence we find. Each subsequence of X
corresponds to a subset of the indices {1, 2, …, m} of X. Because X has 2^m
subsequences, this approach requires exponential time, making it impractical for long
sequences.

2. A recursive solution
• We should examine either one or two subproblems when finding an LCS of
X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>. If xm = yn, we must find an LCS of
X_{m-1} and Y_{n-1}; appending xm = yn to this LCS yields an LCS of X and Y. If
xm ≠ yn, then we must solve two subproblems: finding an LCS of X_{m-1} and Y, and
finding an LCS of X and Y_{n-1}. Whichever of these two LCSs is longer is an LCS of
X and Y. Because these cases exhaust all possibilities, we know that one of the optimal
subproblem solutions must appear within an LCS of X and Y.
• We can readily see the overlapping-subproblems property in the LCS problem. To find
an LCS of X and Y, we may need to find the LCSs of X and Y_{n-1} and of X_{m-1} and Y.
But each of these subproblems has the subproblem of finding an LCS of X_{m-1} and
Y_{n-1}. Many other subproblems share subproblems.
• As in the matrix-chain multiplication problem, the optimal substructure of the LCS
problem gives the recursive formula

c[i, j] = 0                              if i = 0 or j = 0
          c[i-1, j-1] + 1                if i, j > 0 and xi = yj
          max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj

3. Computing the length of an LCS


The algorithm to compute the length of an LCS of two sequences is written here.
Procedure LCS-LENGTH takes two sequences X = <x1, x2, …, xm> and
Y = <y1, y2, …, yn> as inputs. It stores the c[i, j] values in a table c[0..m, 0..n], and it
computes the entries in row-major order. The procedure also maintains the table
b[1..m, 1..n] to help us construct an optimal solution. Intuitively, b[i, j] points to the table
entry corresponding to the optimal subproblem solution chosen when computing c[i, j].
The procedure returns the b and c tables; c[m, n] contains the length of an LCS of X and Y.
4. Constructing an LCS
The b table returned by LCS-LENGTH enables us to quickly construct an LCS of
X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>. We simply begin at b[m, n] and trace
through the table by following the arrows. Whenever we encounter a diagonal arrow "↖"
in entry b[i, j], it implies that xi = yj is an element of the LCS that LCS-LENGTH found.
With this method, we encounter the elements of this LCS in reverse order. The following
recursive procedure prints out an LCS of X and Y in the proper, forward order. The initial
call is PRINT-LCS(b, X, X.length, Y.length).

The procedure takes time O(m + n), since it decrements at least one of i and j in each recursive
call.
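LCS-LENGTH and the traceback can be sketched in Python (a sketch; here the c table alone drives the traceback instead of a separate b table, which is a standard space-saving variant):

```python
def lcs_length(x, y):
    """c[i][j] = length of an LCS of x[:i] and y[:j], filled in row-major order."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def backtrack_lcs(c, x, y):
    """Trace back through c from (m, n), collecting matched symbols in reverse."""
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:       # the "diagonal arrow" case: part of the LCS
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For example, with X = ABCBDAB and Y = BDCABA, c[m][n] is 4 and the traceback yields a common subsequence of length 4.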
BRANCH AND BOUND
• The design technique known as branch and bound is very similar to backtracking in that
it searches a tree model of the solution space and is applicable to a wide variety of
discrete combinatorial problems.
• Each node in the combinatorial tree generated in the last Unit defines a problem state. All
paths from the root to other nodes define the state space of the problem.
• Solution states are those problem states 's' for which the path from the root to 's' defines a
tuple in the solution space. The leaf nodes in the combinatorial tree are the solution
states.
• Answer states are those solution states 's' for which the path from the root to 's' defines a
tuple that is a member of the set of solutions (i.e., it satisfies the implicit constraints) of
the problem.
• The tree organization of the solution space is referred to as the state space tree.
• A node which has been generated, and all of whose children have not yet been generated,
is called a live node.
• The live node whose children are currently being generated is called the E-node (node
being expanded).
• A dead node is a generated node which is not to be expanded further, or all of whose
children have been generated.
• Bounding functions are used to kill live nodes without generating all their children.
• Depth-first node generation with bounding functions is called backtracking. State
generation methods in which the E-node remains the E-node until it is dead lead to the
branch-and-bound method.
• The term branch-and-bound refers to all state space search methods in which all children
of the E-node are generated before any other live node can become the E-node.
• In branch-and-bound terminology, a breadth-first-search (BFS)-like state space search is
called FIFO (First In First Out) search, as the list of live nodes is a first-in-first-out
list (a queue).
• A D-search (depth search) state space search is called LIFO (Last In First Out)
search, as the list of live nodes is a last-in-first-out list (a stack).
• Bounding functions are used to help avoid the generation of subtrees that do not contain
an answer node.
• Branch-and-bound algorithms search a tree model of the solution space to get the
solution. However, this type of algorithm is oriented more toward optimization. An
algorithm of this type specifies a real-valued cost function for each of the nodes that
appear in the search tree.

Usually, the goal here is to find a configuration for which the cost function is minimized.
Branch-and-bound algorithms are rarely simple; they tend to be quite complicated in
many cases.
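The FIFO search described above can be written as a generic skeleton. This is an illustrative sketch, not from the text: `children`, `is_answer`, and `bound` are hypothetical problem-specific callbacks, and `bound(node)` returns False when the node can be killed without expansion.

```python
from collections import deque

def fifo_branch_and_bound(root, children, is_answer, bound):
    live = deque([root])                # FIFO list of live nodes (a queue)
    while live:
        e_node = live.popleft()         # this live node becomes the E-node
        for child in children(e_node):  # generate ALL children of the E-node
            if not bound(child):        # killed by the bounding function
                continue
            if is_answer(child):
                return child            # first answer node reached
            live.append(child)          # child joins the queue of live nodes
    return None

# Toy instance: search the tree of bit strings of length <= 2 for "11".
ans = fifo_branch_and_bound(
    root="",
    children=lambda s: [s + "0", s + "1"] if len(s) < 2 else [],
    is_answer=lambda s: s == "11",
    bound=lambda s: True,               # no pruning in this toy example
)
print(ans)  # "11"
```

Note that the E-node generates all of its children before any other live node is expanded, which is exactly what distinguishes branch and bound from backtracking.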
• Let us see how a FIFO branch-and-bound algorithm would search the state space tree for
the 4-queens problem.

Fig A. Tree organization of the 4-queens solution space. Nodes are numbered as in depth-first
search.

• Initially, there is only one live node, node 1. This represents the case in which no queen
has been placed on the chessboard. This node becomes the E-node.
• It is expanded and its children, nodes 2, 18, 34 and 50, are generated.
• These nodes represent a chessboard with queen 1 in row 1 and columns 1, 2, 3, and 4
respectively.
• The only live nodes are 2, 18, 34, and 50. If the nodes are generated in this order, then the
next E-node is node 2.
• It is expanded and nodes 3, 8, and 13 are generated. Node 3 is immediately killed
using the bounding function. Nodes 8 and 13 are added to the queue of live nodes.
• Node 18 becomes the next E-node. Nodes 19, 24, and 29 are generated. Nodes 19 and 24
are killed as a result of the bounding functions. Node 29 is added to the queue of live
nodes.
• Now the E-node is node 34. Fig B shows the portion of the tree of Fig A that is generated
by a FIFO branch-and-bound search. Nodes that are killed as a result of the bounding
functions have a "B" under them.
• Numbers inside the nodes correspond to the numbers in Fig A. Numbers outside the
nodes give the order in which the nodes are generated by FIFO branch-and-bound.
• At the time the answer node, node 31, is reached, the only live nodes remaining are nodes
38 and 54.

Fig B. Portion of the 4-queens state space tree generated by FIFO branch and bound.
Solving the 8-queens problem by backtracking

The 8-queens problem is a case of a more general set of problems, namely the "n-queens
problem". The basic idea: how to place n queens on an n-by-n board so that they don't attack
each other. As we can expect, the complexity of solving the problem increases with n. We
will briefly introduce the solution by backtracking.

First, let's explain what backtracking is. The board should be regarded as a set of
constraints, and the solution is simply satisfying all constraints. For example: Q1 attacks
some positions, therefore Q2 has to comply with these constraints and take a place not
directly attacked by Q1. Placing Q3 is harder, since we have to satisfy the constraints of Q1
and Q2. Going the same way, we may reach a point where the constraints make the
placement of the next queen impossible. Then we need to relax the constraints and
find a new solution. To do this we go backwards and find a new admissible
solution. To keep everything in order we keep a simple rule: last placed, first displaced.
In other words, if we successfully place a queen in the i-th column but cannot find a solution
for the (i+1)-th queen, then, going backwards, we first try to find another admissible position
for the i-th queen. This process is called backtracking.

Let's discuss this with an example. For the purpose of this handout we will find a solution of
the 4-queens problem.
Algorithm:

• Start with one queen at the first column, first row.
• Continue with the second queen, from the second column, first row.
• Move up until a permissible position is found.
• Continue with the next queen.
We place the first queen on A1:

Note the positions which Q1 is attacking. The next queen Q2 therefore has two options: B3 or
B4. We choose the first one, B3.

Again the prohibited positions are shown in red. It turns out that we cannot place the
third queen in the third column (we have to have a queen in each column!). In other
words, we have imposed a set of constraints in such a way that we can no longer satisfy them
in order to find a solution. Hence we need to revise the constraints, rolling the board back
to the state where we got stuck. Now we may ask what we have to change.
Since the problem appeared after placing Q2, we try first with this queen.

We know that there were two possible places for Q2. B3 causes a problem for the third
queen, so there is only one position left: B4.

As you can see from the new set of constraints (the red positions), we now have an
admissible position for Q3, but it makes it impossible to place Q4, since its only candidate
square, D3, is attacked. Hence placing Q2 on the only remaining position, B4, didn't help,
so one backtracking step was not enough. We need to go for a second backtrack. Why?
Because there is no position for Q2 which will admit any position for Q3 and Q4. Hence we
need to deal with the position of Q1.

We started from Q1, so we continue upward, placing the queen at A2. Now it is easy to see
that Q2 goes to B4, Q3 goes to C1 and Q4 takes D3:
To find this solution we had to perform two backtracks. So what now? In order to find all
solutions we use, as you can guess, backtracking!
Starting again in reverse order, we try to place Q4 somewhere further up, which is not
possible. We backtrack to Q3 and try to find an admissible place different from C1. Again we
need to backtrack. Q2 has no other choice, and finally we reach Q1. We place Q1 on A3:
Continuing further we will reach the solution on the right. Is this a distinct solution? No, it is
the first solution rotated. In fact, for the 4x4 board there is only one unique solution. Placing Q1
on A4 has the same effect as placing it on A1. Hence we have explored all solutions.
How do we implement backtracking in code? Remember that we backtrack when we cannot find
an admissible position for a queen in a column; otherwise we go on to the next column, until we
place a queen in the last column. Therefore your code must contain a fragment like:

int PlaceQueen(int board[8], int row)

    If (a queen can be placed in the ith column)
        PlaceQueen(newboard, 0)             // next column, starting from row 0
    Else
        PlaceQueen(oldboard, oldplace + 1)  // backtrack: row above the previous placement
End

If you can place a queen in the ith column, try to place a queen in the next one; otherwise
backtrack and try to place a queen in the position above the one found for the (i-1)th column.
BACKTRACKING
For many real-world problems, the solution process consists of working your way through a
sequence of decision points in which each choice leads you further along some path. If you make
the correct set of choices, you end up at the solution. On the other hand, if you reach a dead end
or otherwise discover that you have made an incorrect choice somewhere along the way, you
have to backtrack to a previous decision point and try a different path. Algorithms that use this
approach are called backtracking algorithms.
1. N-QUEENS PROBLEM
• The n-queens problem is to place n queens on an n x n chessboard in such a manner that no
two queens attack each other by being in the same row, column or diagonal.
• It can be seen that for n=1 the problem has a trivial solution, and that no solution exists for
n=2 and n=3. So first we will consider the 4-queens problem and then generalize it to the
n-queens problem.
• We have to place 4 queens q1, q2, q3, q4 on the chessboard such that no two
queens attack each other. Under this condition each queen must be placed in a different
row, i.e., we place queen i in row i.
• Now place queen q1 in the first acceptable position, (1,1).
• Next we place queen q2 in such a way that the two queens do not attack each other. We find
that placing q2 in columns 1 and 2 leads to a dead end. Thus the first acceptable
position for q2 is column 3, i.e. (2,3), but then no position is left for placing queen q3
safely.
• So we backtrack one step and place q2 in (2,4), the next best possible
position. Then q3 is placed in (3,2).
• Later this position also leads to a dead end, so we backtrack all the way to q1 and place it in
(1,2); then q2 goes in (2,4), q3 in (3,1) and q4 in (4,3).
• Thus the solution is <2,4,1,3>, one of the feasible solutions for the 4-queens problem. The
other solution is <3,1,4,2>.

Place(k, i) returns a Boolean value that is true if the kth queen can be placed in column i. It tests
both whether i is distinct from all previous values x1, x2, ....., x(k-1) and whether there is no
other queen on the same diagonal.
The implicit tree for the 4-queens problem for the solution <2,4,1,3> is clear from the above
diagram.
ALGORITHM

Algorithm Place( k, i )
{
    for j := 1 to k-1 do
        if (( x[j] = i )                          // in the same column
            or ( Abs( x[j] - i ) = Abs( j - k ) ))  // or on the same diagonal
        then return false;
    return true;
}

Algorithm NQueens( k, n )  // Prints all solutions to the n-queens problem
{
    for i := 1 to n do
    {
        if Place( k, i ) then
        {
            x[k] := i;
            if ( k = n ) then write ( x[1:n] );
            else NQueens( k+1, n );
        }
    }
}
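The two procedures can be transliterated into Python as a sketch (the 1-indexed array x of the pseudocode is simulated by padding index 0; the inner helper name is illustrative):

```python
# Python transliteration of Place / NQueens; x[1..n] holds the column of the
# queen placed in each row, x[0] is unused padding.

def place(x, k, i):
    # True if the kth queen can go in column i: no earlier queen shares
    # the column or a diagonal.
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n):
    solutions = []
    x = [0] * (n + 1)

    def extend(k):
        for i in range(1, n + 1):
            if place(x, k, i):
                x[k] = i
                if k == n:
                    solutions.append(x[1:])   # record a complete placement
                else:
                    extend(k + 1)
        x[k] = 0   # undo before returning (implicit in the pseudocode)

    extend(1)
    return solutions

print(n_queens(4))  # [[2, 4, 1, 3], [3, 1, 4, 2]]
```

For n = 4 this finds exactly the two solutions <2,4,1,3> and <3,1,4,2> named above.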
GRAPH COLORING

Assign colors to the vertices of a graph so that no two adjacent vertices share the same color.
• Vertices i and j are adjacent if there is an edge from vertex i to vertex j.

m-colorings problem
Find all ways to color a graph with at most m colors.

Problem formulation:
• Represent the graph with an adjacency matrix G[1:n, 1:n].
• The colors are represented by the integers 1, 2, ...., m.
• A solution is represented by an n-tuple (x1, ...., xn), where xi is the color of node i.

Solution space tree for mColoring when n=3 and m=3
Algorithm: finding all m-colorings of a graph. Function mColoring is begun by first assigning
the graph to its adjacency matrix, setting the array x[ ] to zero, and then invoking the statement
mColoring( 1 );

Algorithm mColoring( k )
// k is the index of the next vertex to color.
{
    repeat
    { // Generate all legal assignments for x[k]
        NextValue( k );               // Assign to x[k] a legal color
        if ( x[k] = 0 ) then return;  // No new color possible
        if ( k = n ) then             // At most m colors have been used to color the n vertices
            write( x[1:n] );
        else mColoring( k+1 );
    } until ( false );
}
GENERATING THE NEXT COLOR

Algorithm NextValue( k )
// x[1], ....., x[k-1] have been assigned integer values in the range [1, m].
// A value for x[k] is determined in the range [0, m].
{
    repeat
    {
        x[k] := ( x[k] + 1 ) mod ( m+1 );  // Next highest color.
        if ( x[k] = 0 ) then return;       // All colors have been used.
        for j := 1 to n do
        {
            if (( G[k, j] ≠ 0 ) and ( x[k] = x[j] ))  // adjacent to j and the same color
            then break;
        }
        if ( j = n+1 ) then return;        // New color found
    } until ( false );
}
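A runnable Python sketch of mColoring and NextValue (1-indexed adjacency matrix with padding at index 0; names are illustrative):

```python
# Backtracking enumeration of all m-colorings, following the pseudocode above.

def m_colorings(G, n, m):
    colorings = []
    x = [0] * (n + 1)

    def next_value(k):
        # Advance x[k] to the next color unused by any adjacent vertex;
        # leave x[k] = 0 when no color remains.
        while True:
            x[k] = (x[k] + 1) % (m + 1)
            if x[k] == 0:
                return
            if all(not (G[k][j] and x[k] == x[j]) for j in range(1, n + 1)):
                return  # legal color found

    def m_coloring(k):
        while True:
            next_value(k)
            if x[k] == 0:
                return            # no new color possible: backtrack
            if k == n:
                colorings.append(x[1:])
            else:
                m_coloring(k + 1)

    m_coloring(1)
    return colorings

# Triangle K3 with m = 3: every permutation of the three colors is proper.
G = [[0] * 4,
     [0, 0, 1, 1],
     [0, 1, 0, 1],
     [0, 1, 1, 0]]
print(len(m_colorings(G, 3, 3)))  # 6
```

For the triangle, the 6 colorings are exactly the 3! assignments of three distinct colors.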

Hamiltonian cycle

A Hamiltonian cycle is a round-trip path along n edges of a connected undirected graph G that
visits every vertex once and returns to its starting position.

The above graph contains the Hamiltonian cycle

1, 2, 8, 7, 6, 5, 4, 3, 1

The above graph does not contain a Hamiltonian cycle.

Find all possible Hamiltonian cycles

Problem formulation:
• Represent the graph with an adjacency matrix G[1:n, 1:n].
• A solution is represented by an n-tuple (x1, ...., xn), where xi represents the ith visited
vertex of the cycle.
• Start by setting x[2:n] to zero and x[1] = 1, and then executing Hamiltonian( 2 );
Algorithm Hamiltonian( k )
{
    repeat
    { // Generate values for x[k]
        NextValue( k );               // Assign a legal next value to x[k]
        if ( x[k] = 0 ) then return;  // No new value possible
        if ( k = n ) then write( x[1:n] );
        else Hamiltonian( k+1 );
    } until ( false );
}

Algorithm NextValue( k )
{
    repeat
    {
        x[k] := ( x[k] + 1 ) mod ( n+1 );  // Next vertex.
        if ( x[k] = 0 ) then return;
        if ( G[ x[k-1], x[k] ] ≠ 0 )       // there is an edge from x[k-1] to x[k]
        {
            for j := 1 to k-1 do
                if ( x[j] = x[k] ) then break;
            if ( j = k ) then              // if true, then the vertex is distinct
                if ( ( k < n ) or ( ( k = n ) and G[ x[n], x[1] ] ≠ 0 ) )
                then return;
        }
    } until ( false );
}
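A Python sketch of the two procedures (1-indexed adjacency matrix with padding at index 0; the graph below is a plain 4-cycle, chosen purely for illustration):

```python
# Find all Hamiltonian cycles rooted at vertex 1, as in the pseudocode above.

def hamiltonian_cycles(G, n):
    cycles = []
    x = [0] * (n + 1)
    x[1] = 1                                # the cycle is rooted at vertex 1

    def next_value(k):
        while True:
            x[k] = (x[k] + 1) % (n + 1)     # next candidate vertex
            if x[k] == 0:
                return                      # no vertex left to try
            if not G[x[k - 1]][x[k]]:       # no edge from x[k-1]
                continue
            if x[k] in x[1:k]:              # vertex already on the path
                continue
            if k < n or G[x[n]][x[1]]:      # last vertex must close the cycle
                return

    def hamiltonian(k):
        while True:
            next_value(k)
            if x[k] == 0:
                return
            if k == n:
                cycles.append(x[1:] + [1])  # record cycle with the return step
            else:
                hamiltonian(k + 1)

    hamiltonian(2)
    return cycles

# 4-cycle 1-2-3-4-1: one Hamiltonian cycle per traversal direction.
G = [[0] * 5,
     [0, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 1, 0, 1, 0]]
print(hamiltonian_cycles(G, 4))  # [[1, 2, 3, 4, 1], [1, 4, 3, 2, 1]]
```

The two results are the same cycle traversed in opposite directions, which is what enumerating all tuples rooted at vertex 1 produces.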

4.9 Sum of Subsets Problem

In the sum of subsets problem we have to find a subset s' of a given set S = <s1, s2, s3, ....., sn>,
where the elements of S are n positive integers, such that s' ⊆ S and the sum of the elements
of s' equals some given positive integer X.

The sum of subsets problem can be solved using the backtracking approach. Here the implicit
tree is a binary tree. The root of the tree represents the state in which no decision has yet been
taken on any input. We assume that the elements of the given set are arranged in increasing
order:

S1 < S2 < S3 < ..... < Sn

The left child of the root node indicates that we include S1 from the set S, and the right child
indicates that we exclude S1. Each node stores the sum of the elements chosen so far (the partial
solution). If at any stage this sum equals X, the search is successful and terminates.

A dead end occurs in the tree at level i when either of the following inequalities holds
(where s' denotes the sum of the elements chosen so far):
• The sum of s' is too large, i.e., s' + S(i+1) > X
• The sum of s' is too small, i.e., s' + Σ (j = i+1 to n) Sj < X
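A Python sketch of this backtracking scheme, using the two bounding tests above to prune (variable names are illustrative; the instance used below is the classic S = {5, 10, 12, 13, 15, 18}, X = 30):

```python
# Sum of subsets by backtracking: s = sum of elements already included,
# r = sum of the elements S[i..] still available for inclusion.

def sum_of_subsets(S, X):
    S = sorted(S)                     # elements assumed in increasing order
    solutions = []

    def backtrack(i, chosen, s, r):
        if s == X:
            solutions.append(list(chosen))
            return
        if i == len(S):
            return
        if s + S[i] <= X:             # not "too large": include S[i]
            chosen.append(S[i])
            backtrack(i + 1, chosen, s + S[i], r - S[i])
            chosen.pop()
        if s + r - S[i] >= X:         # not "too small": X still reachable without S[i]
            backtrack(i + 1, chosen, s, r - S[i])

    backtrack(0, [], 0, sum(S))
    return solutions

print(sum_of_subsets([5, 10, 12, 13, 15, 18], 30))
# [[5, 10, 15], [5, 12, 13], [12, 18]]
```

The two `if` guards are exactly the "too large" and "too small" dead-end tests: a branch is explored only when neither inequality kills it.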

TRAVELLING SALESMAN PROBLEM

INTRODUCTION
TSP involves a salesperson who has to visit a number of cities during a tour, with the condition
of visiting all the cities exactly once and returning to the city where the tour started.

Steps:
Let G = (V, E) be a directed graph defining an instance of TSP.
• The graph is represented by a cost matrix, where
  Cij = the cost of the edge, if there is a path between vertex i and vertex j
  Cij = ∞, if there is no path.
• Convert the cost matrix into a reduced matrix, i.e., every row and column should contain
at least one zero entry.
• The cost of the reduced matrix is the sum of the elements that are subtracted from the rows
and columns of the cost matrix to make it reduced.
• Build the state space tree for the reduced matrix.
• To find the next E-node, find the least-cost node by calculating the reduced cost matrix
for every node.
• If edge <i,j> is to be included, three operations accomplish this task:
  I. Change all entries in row i and column j of A to ∞.
  II. Set A[j,i] = ∞.
  III. Reduce all rows and columns in the resulting matrix, except for rows
       and columns containing only ∞.
• Calculate the cost of the node as
  cost = L + cost(i,j) + r
  where L = the cost of the parent's reduced cost matrix
  and r = the cost of reducing the new matrix.
• Repeat the above steps until all the required nodes are generated and we obtain a complete
tour.
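The reduction step and its cost L can be sketched in Python (the 4-city cost matrix below is purely illustrative):

```python
# Row/column reduction used by TSP branch and bound: subtract each row's
# minimum, then each column's minimum; the total subtracted is the node's
# reduction cost (a lower bound contribution).

INF = float("inf")

def reduce_matrix(A):
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    cost = 0
    for i in range(n):                 # row reduction
        m = min(A[i])
        if 0 < m < INF:
            cost += m
            A[i] = [x - m for x in A[i]]
    for j in range(n):                 # column reduction
        m = min(A[i][j] for i in range(n))
        if 0 < m < INF:
            cost += m
            for i in range(n):
                A[i][j] -= m
    return A, cost

# Small symmetric 4-city example; diagonal entries are ∞ (no self-loops).
A = [[INF, 10, 15, 20],
     [10, INF, 35, 25],
     [15, 35, INF, 30],
     [20, 25, 30, INF]]
R, L = reduce_matrix(A)
print(L)  # 70: the lower bound contributed by the root node
```

Here the row minima 10, 10, 15, 20 and the column minima 0, 0, 5, 10 are subtracted, so L = 70; after reduction every row and column of R contains at least one zero, as the steps above require.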
