5CS4-AOA-Unit-2
Dynamic Programming
Definition
Dynamic programming (DP) is a general algorithm design technique for solving
problems with overlapping sub-problems. The technique was invented by the American
mathematician Richard Bellman in the 1950s.
Key Idea
The key idea is to save answers of overlapping smaller sub-problems to avoid re-
computation.
Example: the nth Fibonacci number F(n) is defined by
F(n) = 0                       if n = 0
F(n) = 1                       if n = 1
F(n) = F(n-1) + F(n-2)         if n > 1
1. Using simple recursion:
Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definitions
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n==0 || n==1 then
return n
else
return F(n-1) + F(n-2)
[Recursion tree for F(n): the call F(n) spawns F(n-1) and F(n-2), and the same smaller sub-problems are recomputed many times.]
2. Using Dynamic Programming:
Algorithm F(n)
// Computes the nth Fibonacci number by using dynamic programming method
// Input: A non-negative integer n
// Output: The nth Fibonacci number
A[0] ← 0
A[1] ← 1
for i ← 2 to n do
A[i] ← A[i-1] + A[i-2]
return A[n]
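For comparison, a runnable Python version of the bottom-up computation above (the function name fib_dp is ours):

```python
def fib_dp(n):
    """Compute the nth Fibonacci number bottom-up, storing sub-results in a list."""
    if n < 2:
        return n
    a = [0] * (n + 1)
    a[1] = 1
    for i in range(2, n + 1):
        a[i] = a[i - 1] + a[i - 2]   # each value is computed once and reused
    return a[n]

print(fib_dp(10))  # 55
```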
Computing a Binomial Coefficient
Recursive definition:
C(n, k) = 1                                if k = 0 or k = n
C(n, k) = C(n-1, k-1) + C(n-1, k)          if n > k > 0
Solution
Using the crude DIVIDE & CONQUER method, C(n, k) is computed directly from the recursive definition, e.g. C(4, 2) = C(3, 1) + C(3, 2), which recomputes the same coefficients many times.
Using dynamic programming, the values are instead recorded in a table (Pascal's triangle) that is filled row by row; each entry is the sum of the two entries above it:

        0   1   2   …   k-1          k
 0      1
 1      1   1
 2      1   2   1
 …
 n-1                    C(n-1, k-1)  C(n-1, k)
 n                                   C(n, k)
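A Python sketch of the table-based computation of C(n, k) (the function name binomial_dp is ours):

```python
def binomial_dp(n, k):
    """Compute C(n, k) by filling Pascal's triangle row by row."""
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                c[i][j] = 1                              # base cases C(i, 0) = C(i, i) = 1
            else:
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]  # recurrence
    return c[n][k]

print(binomial_dp(4, 2))  # 6
```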
Warshall's Algorithm to find TRANSITIVE CLOSURE
Algorithm:
R(0) ← A
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
R(k)[i, j] ← R(k-1)[i, j] OR ( R(k-1)[i, k] AND R(k-1)[k, j] )
return R(n)
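A Python sketch of Warshall's algorithm over a 0/1 adjacency matrix (0-based indices, unlike the 1-based pseudocode; the function name warshall is ours), checked against the example below:

```python
def warshall(adj):
    """Return the transitive closure of a digraph given as a 0/1 adjacency matrix."""
    n = len(adj)
    r = [row[:] for row in adj]          # R(0) = A
    for k in range(n):                   # allow vertex k as an intermediate node
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# The digraph from the example: A->C, B->A, B->D, D->B
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)      # matches R(4) in the worked example
```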
Example:
Find Transitive closure for the given digraph using Warshall’s algorithm.
[Digraph with vertices A, B, C, D and edges A→C, B→A, B→D, D→B.]
Solution:
R(0) = A B C D
A 0 0 1 0
B 1 0 0 1
C 0 0 0 0
D 0 1 0 0
k = 1 (vertex A may be an intermediate node). Starting from R(0), the only new entry is
R1[2,3] = R0[2,3] OR ( R0[2,1] AND R0[1,3] ) = 0 OR ( 1 AND 1 ) = 1
R(1) = A B C D
A 0 0 1 0
B 1 0 1 1
C 0 0 0 0
D 0 1 0 0
k = 2 (vertices {A, B} may be intermediate nodes). The new entries come from paths through B:
R2[4,1] = R1[4,1] OR ( R1[4,2] AND R1[2,1] ) = 0 OR ( 1 AND 1 ) = 1
R2[4,3] = R1[4,3] OR ( R1[4,2] AND R1[2,3] ) = 0 OR ( 1 AND 1 ) = 1
R2[4,4] = R1[4,4] OR ( R1[4,2] AND R1[2,4] ) = 0 OR ( 1 AND 1 ) = 1
R(2) = A B C D
A 0 0 1 0
B 1 0 1 1
C 0 0 0 0
D 1 1 1 1
k = 3 (vertices {A, B, C} may be intermediate nodes). Row C of R(2) contains only zeros, so there is NO CHANGE: R(3) = R(2).
k = 4 (vertices {A, B, C, D} may be intermediate nodes). The new entry is
R4[2,2] = R3[2,2] OR ( R3[2,4] AND R3[4,2] ) = 0 OR ( 1 AND 1 ) = 1
R(4) = A B C D
A 0 0 1 0
B 1 1 1 1
C 0 0 0 0
D 1 1 1 1
R(4) is the TRANSITIVE CLOSURE of the given graph.
Efficiency:
• Time efficiency is Θ(n³).
• Space efficiency: requires extra space for the separate matrices that record the intermediate results of the algorithm.
Floyd's Algorithm to find ALL PAIRS SHORTEST PATHS
Problem statement:
Given a weighted graph G(V, E) with edge weights w, the all-pairs shortest paths problem is to find
the shortest path between every pair of vertices vi, vj ∈ V.
Solution:
A number of algorithms are known for solving the all-pairs shortest paths problem:
• Matrix multiplication based algorithm
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd's algorithm
Using dynamic programming, where D(k)[i, j] denotes the length of the shortest path from vi to vj that uses only the first k vertices as intermediate vertices, we conclude:
D(k)[ i, j ] = min { D(k-1) [ i, j ], D(k-1) [ i, k ] + D(k-1) [ k, j ] }
Algorithm:
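A Python sketch of Floyd's algorithm based on the recurrence above (same triple-loop structure as Warshall's algorithm; INF marks a missing edge; the function name floyd is ours):

```python
INF = float('inf')

def floyd(w):
    """All-pairs shortest path lengths for a weight matrix w (INF = no edge)."""
    n = len(w)
    d = [row[:] for row in w]            # D(0) = W
    for k in range(n):                   # allow vertex k as an intermediate node
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# The weighted digraph from the example below
W = [[0, 2, 5],
     [4, 0, INF],
     [INF, 3, 0]]
for row in floyd(W):
    print(row)    # [0, 2, 5], [4, 0, 9], [7, 3, 0]
```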
Example:
Find All pairs shortest paths for the given weighted connected graph using
Floyd’s algorithm.
[Weighted digraph with vertices A, B, C and edges A→B (2), A→C (5), B→A (4), C→B (3).]
Solution:
D(0) = A B C
A 0 2 5
B 4 0 ∞
C ∞ 3 0
k = 1 (vertex A may be an intermediate node). The only improved entry is
D1[2,3] = min { D0[2,3], D0[2,1] + D0[1,3] } = min { ∞, 4 + 5 } = 9
D(1) = A B C
A 0 2 5
B 4 0 9
C ∞ 3 0
k = 2 (vertices {A, B} may be intermediate nodes). The only improved entry is
D2[3,1] = min { D1[3,1], D1[3,2] + D1[2,1] } = min { ∞, 3 + 4 } = 7
D(2) = A B C
A 0 2 5
B 4 0 9
C 7 3 0
k = 3 (vertices {A, B, C} may be intermediate nodes). NO CHANGE: D(3) = D(2).
D(3) = A B C
A 0 2 5
B 4 0 9
C 7 3 0
D(3) gives the ALL PAIRS SHORTEST PATH lengths for the given graph.
0/1 Knapsack Problem and Memory Functions
Definition:
Given a set of n items of known weights w1,…,wn and values v1,…,vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fit into the
knapsack.
The knapsack problem is an OPTIMIZATION PROBLEM.
Step 1:
Identify the smaller sub-problems. If items are labeled 1..n, then a sub-problem would be
to find an optimal solution for Sk = {items labeled 1, 2, .. k}
Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller
problems.
Initial conditions:
V[ 0, j ] = 0 for j ≥ 0
V[ i, 0 ] = 0 for i ≥ 0
Recursive step:
V[i, j] = max { V[i-1, j], vi + V[i-1, j - wi] }    if j - wi ≥ 0
V[i, j] = V[i-1, j]                                 if j - wi < 0
Step 3:
Bottom up computation using iteration
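A Python sketch of the bottom-up computation described in Steps 1-3 (the function name knapsack_bottom_up is ours; weights and values are stored 0-indexed):

```python
def knapsack_bottom_up(weights, values, W):
    """Fill the table V[i][j] = best value using the first i items with capacity j."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]    # V[0][j] = V[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:          # item i fits: take the better option
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:                                # item i does not fit
                V[i][j] = V[i - 1][j]
    return V

table = knapsack_bottom_up([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(table[4][5])   # 7, as in the worked example below
```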
Question:
Apply bottom-up dynamic programming algorithm to the following instance of the
knapsack problem Capacity W= 5
Solution:
Using dynamic programming approach, we have:
Step-by-step calculation (table V[i, j] with rows i = 0..4 and columns j = 0..5):

Step 1. Initial conditions: V[0, j] = 0 for j ≥ 0 and V[i, 0] = 0 for i ≥ 0, so row 0 and column 0 of the table are filled with zeros.
Step 2. w1 = 2, available capacity j = 1. Since w1 > j, CASE 1 holds: V[1, 1] = V[0, 1] = 0.
Step 3. w1 = 2, j = 2. Since w1 = j, CASE 2 holds: V[1, 2] = max { V[0, 2], 3 + V[0, 0] } = max { 0, 3 + 0 } = 3.
Step 4. w1 = 2, j = 3, 4, 5. Since w1 < j, CASE 2 holds, e.g. V[1, 3] = max { V[0, 3], 3 + V[0, 1] } = max { 0, 3 + 0 } = 3; similarly V[1, 4] = V[1, 5] = 3.
Step 5. w2 = 3, j = 1. Since w2 > j, CASE 1 holds: V[2, 1] = V[1, 1] = 0.
Step 6. w2 = 3, j = 2. Since w2 > j, CASE 1 holds: V[2, 2] = V[1, 2] = 3.
Step 7. w2 = 3, j = 3. Since w2 = j, CASE 2 holds: V[2, 3] = max { V[1, 3], 4 + V[1, 0] } = max { 3, 4 + 0 } = 4.
Step 8. w2 = 3, j = 4. Since w2 < j, CASE 2 holds: V[2, 4] = max { V[1, 4], 4 + V[1, 1] } = max { 3, 4 + 0 } = 4.
Step 9. w2 = 3, j = 5. Since w2 < j, CASE 2 holds: V[2, 5] = max { V[1, 5], 4 + V[1, 2] } = max { 3, 4 + 3 } = 7.
Step 10. w3 = 4, j = 1, 2, 3. Since w3 > j, CASE 1 holds: V[3, 1] = 0, V[3, 2] = 3, V[3, 3] = 4.
Step 11. w3 = 4, j = 4. Since w3 = j, CASE 2 holds: V[3, 4] = max { V[2, 4], 5 + V[2, 0] } = max { 4, 5 + 0 } = 5.
Step 12. w3 = 4, j = 5. Since w3 < j, CASE 2 holds: V[3, 5] = max { V[2, 5], 5 + V[2, 1] } = max { 7, 5 + 0 } = 7.
Step 13. w4 = 5, j = 1, 2, 3, 4. Since w4 > j, CASE 1 holds: row 4 copies row 3 for these capacities.
Step 14. w4 = 5, j = 5. Since w4 = j, CASE 2 holds: V[4, 5] = max { V[3, 5], 6 + V[3, 0] } = max { 7, 6 + 0 } = 7.

Completed table:
V[i, j]  j=0   1   2   3   4   5
i=0       0    0   0   0   0   0
1         0    0   3   3   3   3
2         0    0   3   4   4   7
3         0    0   3   4   5   7
4         0    0   3   4   5   7

The maximal value of the knapsack is V[4, 5] = 7.
Memory function
The method:
• Works in a top-down manner.
• Maintains a table of the same kind as the bottom-up approach.
• Initially, all the table entries are initialized with a special "null" symbol to
indicate that they have not yet been calculated.
• Whenever a new value needs to be calculated, the method first checks
the corresponding entry in the table:
• If the entry is NOT "null", it is simply retrieved from the table.
• Otherwise, it is computed by the recursive call, and the result is then recorded in
the table.
Algorithm:
Algorithm MFKnap( i, j )
// The table V[0..n, 0..W] is initialized with 0 in row 0 and column 0, and "null" (-1) elsewhere
if V[ i, j ] < 0 then
    if j < Weights[ i ] then
        value ← MFKnap( i-1, j )
    else
        value ← max { MFKnap( i-1, j ),
                      Values[ i ] + MFKnap( i-1, j - Weights[ i ] ) }
    V[ i, j ] ← value
return V[ i, j ]
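A runnable Python counterpart of MFKnap, using -1 as the "null" marker exactly as in the example that follows (the function names are ours):

```python
def mf_knapsack(weights, values, W):
    """Top-down 0/1 knapsack with a memo table; only needed entries get computed."""
    n = len(weights)
    V = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)] for i in range(n + 1)]

    def mfknap(i, j):
        if V[i][j] < 0:                              # not yet computed
            if j < weights[i - 1]:                   # item i does not fit
                value = mfknap(i - 1, j)
            else:
                value = max(mfknap(i - 1, j),
                            values[i - 1] + mfknap(i - 1, j - weights[i - 1]))
            V[i][j] = value                          # record the result
        return V[i][j]

    return mfknap(n, W)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))    # 7
```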
Example:
Apply the memory function method to the same instance of the knapsack problem as above (capacity W = 5).
Solution:
Using memory function approach, we have:
Computation and remarks:
1. Initially, all table entries except row 0 and column 0 are initialized with the special "null" symbol (here written as -1) to indicate that they have not yet been calculated:
V[i, j]  j=0   1    2    3    4    5
i=0       0    0    0    0    0    0
1         0   -1   -1   -1   -1   -1
2         0   -1   -1   -1   -1   -1
3         0   -1   -1   -1   -1   -1
4         0   -1   -1   -1   -1   -1
2. The call MFKnap(4, 5) unwinds recursively:
MFKnap(4, 5) compares MFKnap(3, 5) with 6 + MFKnap(3, 0);
MFKnap(3, 5) compares MFKnap(2, 5) with 5 + MFKnap(2, 1);
MFKnap(2, 5) compares MFKnap(1, 5) with 4 + MFKnap(1, 2);
MFKnap(1, 5) = max { MFKnap(0, 5), 3 + MFKnap(0, 3) } = max { 0, 3 + 0 } = 3, so the entry V[1, 5] = 3 is recorded.
3. Similarly, MFKnap(1, 2) = max { MFKnap(0, 2), 3 + MFKnap(0, 0) } = max { 0, 3 + 0 } = 3, so V[1, 2] = 3 is recorded.
4. Returning up the recursion, V[2, 5] = max { V[1, 5], 4 + V[1, 2] } = max { 3, 7 } = 7 is recorded; the remaining calls give V[1, 1] = 0, V[2, 1] = 0, V[3, 5] = max { 7, 5 + 0 } = 7 and finally V[4, 5] = max { 7, 6 + 0 } = 7. Only the entries actually needed are ever computed; all other entries remain "null":
V[i, j]  j=0   1    2    3    4    5
i=0       0    0    0    0    0    0
1         0    0    3   -1   -1    3
2         0    0   -1   -1   -1    7
3         0   -1   -1   -1   -1    7
4         0   -1   -1   -1   -1    7
Conclusion:
Optimal subset: { item 1, item 2 }
Efficiency:
• Time efficiency same as bottom up algorithm: O( n * W ) + O( n + W )
• Just a constant factor gain by using memory function
• Less space efficient than a space efficient version of a bottom-up algorithm
Matrix-chain multiplication
Matrix-chain multiplication is a classic example of dynamic programming. We are given a
sequence (chain) <A1, A2, …, An> of n matrices to be multiplied, and we wish to compute the product
A1 A2 … An
We can evaluate the above expression using the standard algorithm for multiplying pairs
of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in
how the matrices are multiplied together.
Matrix multiplication is associative, and so all parenthesizations yield the same product.
A product of matrices is fully parenthesized if it is either a single matrix or the product of
two fully Parenthesized matrix products, surrounded by parentheses. For example, if the
chain of matrices is <A1,A2,A3,A4> then we can fully parenthesize the product
A1A2A3A4 in five distinct ways: (A1(A2(A3A4)))
, (A1((A2A3)A4)) ,
((A1A2)(A3A4)) ,
((A1(A2A3))A4) ,
(((A1A2)A3)A4).
How we parenthesize a chain of matrices can have a dramatic impact on the cost of
evaluating the product. Consider first the cost of multiplying two matrices. The standard
algorithm is given by the following pseudocode, which generalizes the SQUARE-
MATRIX-MULTIPLY procedure.
We can multiply two matrices A and B only if they are compatible: the number of
columns of A must equal the number of rows of B.
If A is a p x q matrix and B is a q x r matrix, the resulting matrix C is a p x r matrix.
To illustrate the different costs incurred by different parenthesizations of a matrix product,
consider the problem of a chain <A1, A2, A3> of three matrices.
Suppose that the dimensions of the matrices are 10 x 100, 100 x 5, and 5 x 50, respectively. If
we multiply according to the parenthesization ((A1A2)A3), we perform 10 · 100 · 5 = 5000
scalar multiplications to compute the 10 x 5 matrix product A1A2, plus another 10 · 5 · 50 =
2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar
multiplications.
According to the parenthesization (A1(A2A3)), we perform 100 · 5 · 50 = 25,000 scalar
multiplications to compute the 100 x 50 matrix product A2A3, plus another 10 · 100 · 50 =
50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar
multiplications. Thus, computing the product according to the first parenthesization is 10
times faster.
We state the matrix-chain multiplication problem as follows: given a
chain <A1, A2, …, An> of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension
p(i-1) x p(i), fully parenthesize the product A1 A2 … An in a way that minimizes the number of
scalar multiplications.
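A Python sketch of the standard dynamic-programming solution, following the CLRS recurrence m[i, j] = min over i ≤ k < j of m[i, k] + m[k+1, j] + p(i-1)·p(k)·p(j) (the function name matrix_chain_order and the dims parameter are ours):

```python
import math

def matrix_chain_order(dims):
    """dims[i-1] x dims[i] is the size of matrix A_i, for i = 1..n."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]        # m[i][j]: min cost for A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]        # s[i][j]: best split point k
    for length in range(2, n + 1):                   # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):                    # try every split A_i..A_k | A_k+1..A_j
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])          # the three-matrix example above
print(m[1][3], s[1][3])                              # 7500, split after A2 -> ((A1 A2) A3)
```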
In the standard presentation, the results are recorded in two tables, m and s, drawn rotated so that the
main diagonal runs horizontally. The m table uses only the main diagonal and upper triangle, and the
s table uses only the upper triangle. For the six-matrix example instance used there, the minimum
number of scalar multiplications to multiply the 6 matrices is m[1, 6] = 15,125.
Longest Common Subsequence (LCS)
A recursive solution
We examine either one or two sub-problems when finding an LCS of X = <x1,
x2, …, xm> and Y = <y1, y2, …, yn>. If xm = yn, we must find an LCS of Xm-1 and Yn-1;
appending xm = yn to this LCS yields an LCS of X and Y. If xm is not equal to yn, then we
must solve two sub-problems: finding an LCS of Xm-1 and Y, and finding an LCS of X and Yn-1.
Whichever of these two LCSs is longer is an LCS of X and Y. Because these cases exhaust all possibilities,
we know that one of the optimal sub-problem solutions must appear within an LCS of X and Y.
We can readily see the overlapping sub-problems property in the LCS problem. To find
an LCS of X and Y, we may need to find the LCSs of X and Yn-1 and of Xm-1 and Y.
But each of these sub-problems has the sub-problem of finding an LCS of Xm-1 and
Yn-1, and many other sub-problems share sub-problems.
As in the matrix-chain multiplication problem, our recursive solution defines the value of an optimal
solution in terms of optimal solutions to sub-problems. Let c[i, j] be the length of an LCS of the
prefixes Xi and Yj. The optimal substructure of the LCS problem gives the recursive formula
c[i, j] = 0                               if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                 if i, j > 0 and xi = yj
c[i, j] = max { c[i, j-1], c[i-1, j] }    if i, j > 0 and xi ≠ yj
The procedure that reconstructs the LCS from the completed table takes time O(m + n), since it
decrements at least one of i and j in each recursive call.
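A Python sketch of the bottom-up computation built on this recurrence (the function name lcs_length is ours):

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of sequences x and y."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]        # c[i][j]: LCS length of x[:i], y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1        # matching symbols extend the LCS
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. "BCBA"
```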
BRANCH AND BOUND
The design technique known as branch and bound is very similar to backtracking in that
it searches a tree model of the solution space and is applicable to a wide variety of
discrete combinatorial problems.
Each node in the combinatorial tree generated in the last Unit defines a problem state. All
paths from the root to other nodes define the state space of the problem.
Solution states are those problem states 's' for which the path from the root to 's' defines a
tuple in the solution space. The leaf nodes in the combinatorial tree are the solution
states.
Answer states are those solution states 's' for which the path from the root to 's' defines a
tuple that is a member of the set of solutions of the problem (i.e., it satisfies the implicit
constraints).
The tree organization of the solution space is referred to as the state space tree.
A node which has been generated, and all of whose children have not yet been generated,
is called a live node.
The live node whose children are currently being generated is called the E-node (node
being expanded).
A dead node is a generated node, which is not to be expanded further or all of whose
children have been generated.
Bounding functions are used to kill live nodes without generating all their children.
Depth-first node generation with bounding functions is called backtracking. State
generation methods in which the E-node remains the E-node until it is dead lead to
the branch-and-bound method.
The term branch-and-bound refers to all state space search methods in which all children
of the E-node are generated before any other live node can become the E-node.
In branch-and-bound terminology, a breadth-first-search (BFS)-like state space search is
called FIFO (First In First Out) search, as the list of live nodes is a first-in-first-out
list (a queue).
A D-search (depth search) state space search is called LIFO (Last In First Out)
search, as the list of live nodes is a last-in-first-out list (a stack).
Bounding functions are used to help avoid the generation of sub trees that do not contain
an answer node.
The branch-and-bound algorithms search a tree model of the solution space to get the
solution. However, this type of algorithm is oriented more toward optimization. An
algorithm of this type specifies a real-valued cost function for each of the nodes that
appear in the search tree.
Usually, the goal here is to find a configuration for which the cost function is minimized.
The branch-and-bound algorithms are rarely simple. They tend to be quite complicated in
many cases.
Let us see how a FIFO branch-and-bound algorithm would search the state space tree for
the 4-queens problem.
Fig A. Tree organization of the 4-queens solution space. Nodes are numbered as in depth-first
search.
Initially, there is only one live node, node 1. This represents the case in which no queen
has been placed on the chessboard. This node becomes the E-node.
It is expanded and its children, nodes 2, 18, 34 and 50, are generated.
These nodes represent a chessboard with queen 1 in row 1 and columns 1, 2, 3, and 4
respectively.
The only live nodes now are 2, 18, 34, and 50. If the nodes are generated in this order, then the
next E-node is node 2.
It is expanded and the nodes 3, 8, and 13 are generated. Node 3 is immediately killed
using the bounding function. Nodes 8 and 13 are added to the queue of live nodes.
Node 18 becomes the next E-node. Nodes 19, 24, and 29 are generated. Nodes 19 and 24
are killed as a result of the bounding functions. Node 29 is added to the queue of live
nodes.
Now the E-node is node 34. Fig B shows the portion of the tree of Fig A that is generated
by a FIFO branch-and-bound search. Nodes that are killed as a result of the bounding
functions have a "B" under them.
Numbers inside the nodes correspond to the numbers in Fig A. Numbers outside the
nodes give the order in which the nodes are generated by FIFO branch-and-bound.
At the time the answer node, node 31, is reached, the only live nodes remaining are nodes
38 and 54.
Fig B. Portion of the 4-queens state space tree generated by FIFO branch and bound.
The 8-queens problem is a case of a more general set of problems, namely the "n-queens
problem". The basic idea: how do we place n queens on an n by n board so that they don't attack
each other? As we can expect, the complexity of solving the problem increases with n. We
will briefly introduce a solution by backtracking.
First, let's explain what backtracking is. The board should be regarded as a set of
constraints, and the solution is simply satisfying all constraints. For example: Q1 attacks
some positions, therefore Q2 has to comply with these constraints and take a place not
directly attacked by Q1. Placing Q3 is harder, since we have to satisfy the constraints of Q1
and Q2. Going the same way we may reach a point where the constraints make the
placement of the next queen impossible. Therefore we need to relax the constraints and
find a new solution. To do this we go backwards and find a new admissible
solution. To keep everything in order we keep the simple rule: last placed, first displaced.
In other words, if we successfully place a queen on the i-th column but cannot find a solution
for the (i+1)-th queen, then going backwards we will try to find another admissible solution for
the i-th queen first. This process is called backtracking.
Let's discuss this with an example. For the purpose of this handout we will find a solution of the
4-queens problem.
Algorithm:
Note the positions which Q1 is attacking. The next queen Q2 then has two options: B3 or B4.
We choose the first one, B3.
Again, with red we show the prohibited positions. It turns out that we cannot place the
third queen in the third column (we have to have a queen in each column!). In other
words, we imposed a set of constraints in such a way that we can no longer satisfy them in
order to find a solution. Hence we need to revise the constraints, or rearrange the board up
to the state where we were stuck. Now we may ask what we have to change.
Since the problem happened after placing Q2, we try first with this queen.
We know that there were two possible places for Q2. B3 gives a problem for the third
queen, so there is only one position left, B4:
As you can see from the new set of constraints (the red positions), now we have an
admissible position for Q3, but it will make it impossible to place Q4, since the only place is
D3. Hence placing Q2 on the only remaining position B4 didn't help, and the one-step
backtrack was not enough. We need to go for a second backtrack. Why? The reason is that
there is no position for Q2 which will allow any position for Q3 and Q4. Hence we need
to deal with the position of Q1.
We started from Q1, so we continue upward and place the queen at A2.
Now it is easy to see that Q2 goes to B4, Q3 goes to C1 and Q4 takes D3:
[Board: queens at A2, B4, C1, D3.]
To find this solution we had to perform two backtracks. So what now? In order to find all
solutions we use, as you can guess, backtracking again!
Starting again in reverse order, we try to place Q4 somewhere further up, which is not possible. We
backtrack to Q3 and try to find an admissible place different from C1. Again we need to
backtrack. Q2 has no other choice and finally we reach Q1. We place Q1 on A3:
[Board: Q1 placed on A3.]
Continuing further we will reach the other solution. Is this a distinct solution? No, it is the
first solution rotated. In fact, for the 4x4 board there is only one unique solution up to symmetry:
placing Q1 on A4 has the same effect as placing it on A1. Hence we have explored all solutions.
How do we implement backtracking in code? Remember that we backtrack when we cannot find an
admissible position for a queen in a column; otherwise we go further with the next column until
we place a queen on the last column. Therefore your code must contain a fragment of the form:
if you can place a queen on the i-th column, try to place a queen on the next one; otherwise backtrack
and try to place a queen in a position above the one found for column i-1.
BACKTRACKING
For many real-world problems, the solution process consists of working your way through a
sequence of decision points in which each choice leads you further along some path. If you make
the correct set of choices, you end up at the solution. On the other hand, if you reach a dead end
or otherwise discover that you have made an incorrect choice somewhere along the way, you
have to backtrack to a previous decision point and try a different path. Algorithms that use this
approach are called backtracking algorithms.
1. N-QUEENS PROBLEM
The n-queens problem is to place n queens on an n x n chessboard in such a manner that no
two queens attack each other by being in the same row, column or diagonal.
It can be seen that for n = 1 the problem has a trivial solution, and that no solution exists for n = 2
and n = 3. So first we will consider the 4-queens problem and then generalize it to the n-queens
problem.
We have to place 4 queens q1, q2, q3, q4 on the chessboard such that no two
queens attack each other. Under this condition each queen must be placed in a different
row, i.e., we place queen i in row i.
Now place queen q1 in the first acceptable position (1,1).
Next we place queen q2 in such a way that the two queens do not attack each other. We find that if
we place q2 in columns 1 or 2 then a dead end is encountered. Thus the acceptable
position for q2 is column 3, i.e. (2,3), but then no position is left for placing queen q3
safely.
So we backtrack one step and place q2 in (2,4), the next best possible position. q3 is then
placed in (3,2).
This position also leads to a dead end, so we backtrack all the way to q1 and place it in (1,2);
then q2 goes in (2,4), q3 in (3,1) and q4 in (4,3).
Thus the solution is <2,4,1,3>, one of the feasible solutions of the 4-queens problem; the other
solution is <3,1,4,2>.
Place(k, i) returns a Boolean value that is true if the k-th queen can be placed in column i. It tests both
whether i is distinct from all previous values x[1], x[2], …, x[k-1] and whether there is no other queen
on the same diagonal.
The implicit state space tree for the 4-queens problem, leading to the solution <2,4,1,3>, follows from
the discussion above.
ALGORITHM
Place( k, i )
{
    for j := 1 to k-1 do
        if ( ( x[ j ] = i )                              // two queens in the same column
             or ( Abs( x[ j ] - i ) = Abs( j - k ) ) )   // or on the same diagonal
        then return false;
    return true;
}

NQueens( k, n )   // Prints all solutions to the n-queens problem
{
    for i := 1 to n do
    {
        if Place( k, i ) then
        {
            x[ k ] := i;
            if ( k = n ) then write( x[1:n] );
            else NQueens( k+1, n );
        }
    }
}
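A runnable Python version of Place/NQueens above, printing every solution as a list of column numbers (1-based, as in the pseudocode; helper names are ours):

```python
def place(x, k, i):
    """True if a queen can go in row k, column i, given queens x[1..k-1]."""
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):   # same column or same diagonal
            return False
    return True

def n_queens(k, n, x):
    """Try every column for the queen in row k and recurse; print complete placements."""
    for i in range(1, n + 1):
        if place(x, k, i):
            x[k] = i
            if k == n:
                print(x[1:n + 1])
            else:
                n_queens(k + 1, n, x)

n_queens(1, 4, [0] * 5)     # prints [2, 4, 1, 3] and [3, 1, 4, 2]
```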
GRAPH COLORING
Assign colors to the vertices of a graph so that no two adjacent vertices share the same color.
Vertices i and j are adjacent if there is an edge from vertex i to vertex j.
m-colorings problem:
Find all ways to color a graph with at most m colors.
Problem formulation:
Represent the graph with an adjacency matrix G[1:n, 1:n].
The colors are represented by the integers 1, 2, …, m.
A solution is represented by the n-tuple (x1, …, xn), where xi is the color of node i.
Figure: solution space tree for mColoring when n = 3 and m = 3.
Algorithm for finding all m-colorings of a graph: the function mColoring is begun by first assigning
the graph to its adjacency matrix G, setting the array x[ ] to zero, and then invoking the statement
mColoring( 1 );
Algorithm
mColoring( k )
// k is the index of the next vertex to color.
{
repeat
{ // Generate all legal assignments for x[k]
NextValue( k ); // Assign to x[k] a legal color
if ( x[k]=0 ) then return; // No new color possible
if ( k=n ) then // At most m colors have been used to color the n vertices
write( x[1:n] );
else mColoring( k+1);
} until ( false );
}
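A Python sketch of the same backtracking idea, with the NextValue logic folded into the color loop (0-based indices; function names are ours):

```python
def m_coloring(G, m):
    """Print all colorings of the graph (adjacency matrix G) using at most m colors."""
    n = len(G)
    x = [0] * n                                      # x[k]: color of vertex k (0 = uncolored)

    def color(k):
        for c in range(1, m + 1):
            # the color is legal if no adjacent vertex already has it
            if all(not (G[k][j] and x[j] == c) for j in range(n)):
                x[k] = c
                if k == n - 1:
                    print(x)                         # a complete legal coloring
                else:
                    color(k + 1)
                x[k] = 0                             # undo and try the next color

    color(0)

# Triangle 0-1-2 plus a pendant vertex 3 attached to 2; 3 colors suffice
m_coloring([[0, 1, 1, 0],
            [1, 0, 1, 0],
            [1, 1, 0, 1],
            [0, 0, 1, 0]], 3)
```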
Hamiltonian cycle
A Hamiltonian cycle is a round-trip path along n edges of a connected undirected graph G that
visits every vertex once and returns to its starting position.
Algorithm
Hamiltonian( k )
{
repeat
{ // Generate values for x[k]
NextValue( k );                       // Assign a legal next value to x[k]
if ( x[k] = 0 ) then return;          // No new value possible
if ( k = n ) then write( x[1:n] );
else Hamiltonian( k+1);
} until ( false);
}
Algorithm
NextValue( k )
{
repeat
{
x[k] := ( x[k] + 1 ) mod ( n + 1 );   // Next vertex
if ( x[k] = 0 ) then return;
if ( G[ x[k-1], x[k] ] ≠ 0 ) then     // Is there an edge from the previous vertex?
{
for j := 1 to k-1 do if ( x[ j ] = x[ k ] ) then break;
if ( j = k ) then                     // if true, then the vertex is distinct
if ( ( k < n ) or ( ( k = n ) and G[ x[n], x[1] ] ≠ 0 ) )
then return;
}
} until ( false );
}
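A Python sketch of the same backtracking search for Hamiltonian cycles (0-based indices; vertex 0 is fixed as the start; function names are ours):

```python
def hamiltonian_cycles(G):
    """Print every Hamiltonian cycle of the graph G (adjacency matrix), starting at vertex 0."""
    n = len(G)
    x = [0] * n                                       # x[k]: k-th vertex on the path

    def extend(k):
        for v in range(1, n):
            if G[x[k - 1]][v] and v not in x[:k]:     # edge exists and vertex not used yet
                x[k] = v
                if k == n - 1:
                    if G[v][x[0]]:                    # close the cycle back to the start
                        print(x + [x[0]])
                else:
                    extend(k + 1)
                x[k] = 0                              # undo the choice

    extend(1)

# A 4-cycle 0-1-2-3-0 with one chord 1-3
hamiltonian_cycles([[0, 1, 0, 1],
                    [1, 0, 1, 1],
                    [0, 1, 0, 1],
                    [1, 1, 1, 0]])
```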
SUM OF SUBSETS
Given a set of positive numbers S1 < S2 < S3 < … < Sn and a target sum X, the problem is to find all
subsets whose elements add up to X. The state space tree is binary: the left child of the root indicates
that S1 is included in the subset, and the right child of the root indicates that S1 is excluded. Each node
stores the sum s' of the partial solution elements. If at any stage this sum equals X, then the search is
successful and terminates.
A dead end in the tree occurs only when either of the following two inequalities holds:
• the sum s' is too large, i.e. s' + S(i+1) > X, or
• the sum s' is too small even if all remaining elements are added, i.e. s' + Σ (j = i+1 to n) Sj < X.
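A Python sketch of the sum-of-subsets backtracking that uses exactly these two bounding tests (s is the partial sum, r the sum of the remaining elements; names and the example data are ours):

```python
def sum_of_subsets(S, X):
    """Print every subset of the sorted list S whose elements sum to X."""
    S = sorted(S)
    chosen = []

    def search(i, s, r):
        if s == X:
            print(chosen)                         # a subset summing to X
            return
        if i == len(S):
            return
        # bounding tests: prune if even the smallest remaining element overshoots,
        # or if taking every remaining element still falls short of X
        if s + S[i] > X or s + r < X:
            return
        chosen.append(S[i])                       # left branch: include S[i]
        search(i + 1, s + S[i], r - S[i])
        chosen.pop()                              # right branch: exclude S[i]
        search(i + 1, s, r - S[i])

    search(0, 0, sum(S))

sum_of_subsets([3, 5, 6, 7], 15)    # prints [3, 5, 7]
```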
TRAVELLING SALESMAN PROBLEM (Branch and Bound)
• Convert the cost matrix into a reduced matrix, i.e. every row and column should contain
at least one zero entry.
• The cost of the reduced matrix is the sum of the elements that are subtracted from the rows and
columns of the cost matrix to make it reduced.
• Make the state space tree for the reduced matrix.
• To find the next E-node, find the least-cost node by calculating the reduced cost matrix
for every node.
• If the edge <i, j> is to be included, then three steps accomplish this task:
I. Change all entries in row i and column j of A to ∞.
II. Set A[j, i] = ∞.
III. Reduce all rows and columns in the resulting matrix except for rows
and columns containing only ∞.
• Calculate the cost of the node as
Cost = L + cost(i, j) + r
where L = cost of the original reduced cost matrix
and r = reduction cost of the new reduced cost matrix.
• Repeat the above steps for all the nodes until all the nodes are generated and we get a
path.
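A Python sketch of the matrix-reduction step (the first two bullets above): subtract each row minimum and then each column minimum, returning the reduced matrix together with the total amount subtracted, which is the reduction cost used in the bound (INF marks forbidden entries; the example matrix and names are ours):

```python
INF = float('inf')

def reduce_matrix(cost):
    """Reduce every row and column of the cost matrix; return (reduced matrix, reduction cost)."""
    n = len(cost)
    A = [row[:] for row in cost]
    total = 0
    for i in range(n):                                   # row reduction
        m = min(A[i])
        if m not in (0, INF):
            total += m
            A[i] = [x - m if x != INF else INF for x in A[i]]
    for j in range(n):                                   # column reduction
        m = min(A[i][j] for i in range(n))
        if m not in (0, INF):
            total += m
            for i in range(n):
                if A[i][j] != INF:
                    A[i][j] -= m
    return A, total

# Small example cost matrix; diagonal entries are forbidden (INF)
C = [[INF, 20, 30, 10],
     [15, INF, 16, 4],
     [3, 5, INF, 2],
     [19, 6, 18, INF]]
R, L = reduce_matrix(C)
print(L)          # 35: lower bound contributed by the reduction
```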