Daa Endsem Paper Sol Unit IV

2017-18

1. Define Graph Coloring. (2018-19)

 Graph coloring is the problem of assigning a color to each vertex of a graph such that no two
adjacent vertices have the same color.
 If d is the maximum degree of a given graph, then it can be colored with d + 1 colors.
 Chromatic number: the minimum number of colors needed to color the graph such that no two
adjacent vertices have the same color. The chromatic number depends on the structure of the
graph, e.g., whether the graph is a complete graph, wheel graph, star graph or cycle graph.
 A direct application: map coloring.
 The ‘m-colorability’ optimization problem asks for the smallest integer m for which
the graph G can be colored. This integer is referred to as the chromatic number of the
graph.
 Explicit Constraint: xi ∈ Si = {1..m}, where m is the number of colors available.
 Implicit Constraint: color all vertices of the given graph such that no two adjacent
vertices get the same color.
 As an example: consider the 5-vertex graph with vertices enumerated in order x1 – x5
(figure omitted). The minimum number of colors needed for it is 3.
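A minimal backtracking sketch of m-coloring in Python (an illustrative sketch; the function and the example graph below are my own, not taken from the notes):

def m_coloring(graph, m):
    # Return one valid assignment of colors 1..m to the vertices of `graph`
    # (an adjacency matrix), or None if the graph is not m-colorable.
    n = len(graph)
    colors = [0] * n                      # colors[v] = 0 means "not yet colored"

    def safe(v, c):
        # c must differ from the colors of all already-colored neighbours of v
        return all(not (graph[v][u] and colors[u] == c) for u in range(n))

    def solve(v):
        if v == n:                        # every vertex colored
            return True
        for c in range(1, m + 1):         # try each color in turn
            if safe(v, c):
                colors[v] = c
                if solve(v + 1):
                    return True
                colors[v] = 0             # undo and backtrack
        return False

    return colors if solve(0) else None

# Example: a 4-cycle needs only 2 colors.
cycle4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
print(m_coloring(cycle4, 2))              # e.g. [1, 2, 1, 2]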

2. What is backtracking? Discuss sum of subset problem with the help of an example.

 Backtracking is a method of building up solutions incrementally through systematic enumeration.
 The term was coined by D. H. Lehmer in the 1950s.
 If allowed to run to completion, it eventually examines every candidate, so it finds all feasible
solutions (and hence an optimal one, if one is asked for).
 Backtracking represents one of the most general techniques.
 Many problems which deal with searching for a set of solutions, or which ask for an
optimal solution satisfying some constraints, can be solved using the backtracking
formulation.
 We make sure that if the problem is finite, we will eventually try all possibilities
(assuming there is enough computing power to try all possibilities).
 In many applications of the backtrack method, the desired solution is expressible as an
n-tuple (x1, x2, …, xn), where the xi are chosen from some finite set Si.



 E.g., sorting the array of integers A[1..n] is a problem whose solution is expressible
as an n-tuple, where xi is the index in A of the ith smallest element.
 The criterion function P is the inequality A[xi] ≤ A[xi+1] for 1 ≤ i < n. The set Si is
finite and consists of the integers 1..n.
 Suppose mi is the size of set Si. Then there are m = m1·m2·…·mn n-tuples that are
possible candidates for satisfying the function P.
 Backtracking finds the same answer as would be found by the brute-force
approach, but it finds the solution with far fewer than m trials.
 The basic idea is to build up the solution vector one component at a time and to use
modified criterion functions Pi(x1, …, xi) to test whether the vector being formed has
any chance of success.
 The major advantage is that if it is observed that the partial vector (x1, …, xi) can in no
way lead to an optimal solution, then we can ignore the mi+1·…·mn possible test vectors
that extend it.
 Basic Algorithm:
 Based on a depth-first recursive search.
 Approach:
o Test whether a solution has been found; if so, return it.
o Else, for each choice that can be made:
o make that choice,
o recur,
o and if the recursion returns a solution, return it.
o If no choices remain, return failure.
 The tree of partial solutions explored in this way is sometimes called the “search tree”.
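A direct translation of this scheme into Python (a generic sketch; is_solution, choices, and apply_choice are placeholder callables of my own, not functions given in the notes):

def backtrack(partial, is_solution, choices, apply_choice):
    if is_solution(partial):              # solution found: return it
        return partial
    for c in choices(partial):            # otherwise try every available choice
        result = backtrack(apply_choice(partial, c),
                           is_solution, choices, apply_choice)
        if result is not None:            # a recursive call found a solution
            return result
    return None                           # no choice remains: report failure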

Sum of Subset Problem:

 Definition: Suppose we are given n distinct positive numbers (usually called weights)
and we desire to find all combinations of these numbers whose sum is m. This is
called the Sum of Subset problem.
 The backtracking algorithm determines the solutions by systematically searching the
solution space for the given problem instance.
 For the given solution space, many tree organizations may be possible.
 Explicit Constraint: xi ∈ Si = {0, 1}
 Implicit Constraint: Σ (i = 1 to n) wi·xi = m
Example

Sum = 5 and w = {1,3,4}



State-space tree (each node is labelled (sum so far, next index, sum of remaining weights)):

(0,1,8)
  x1 = 1 → (1,2,7)
    x2 = 1 → sum 4: cannot be extended to 5, backtrack
    x2 = 0 → (1,3,4)
      x3 = 1 → sum 1 + 4 = 5: subset found, x = (1,0,1)
  x1 = 0 → (0,2,7)
    x2 = 1 → sum 3: cannot be extended to 5, backtrack
    x2 = 0 → sum 0 with only 4 remaining: backtrack
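A runnable sketch of this search in Python (my own code following the (sum so far, index, remaining) bookkeeping used above; it assumes the weights are given in non-decreasing order):

def sum_of_subsets(w, m):
    # print every 0/1 vector x with sum(w[i] * x[i]) == m
    n = len(w)
    x = [0] * n

    def solve(s, k, r):
        # s = sum chosen so far, k = current index, r = sum of w[k:]
        if s == m:
            print(x[:])                          # a subset has been found
            return
        if k == n or s + w[k] > m or s + r < m:
            return                               # dead end: backtrack
        x[k] = 1                                 # include w[k]
        solve(s + w[k], k + 1, r - w[k])
        x[k] = 0                                 # exclude w[k]
        solve(s, k + 1, r - w[k])

    solve(0, 0, sum(w))

sum_of_subsets([1, 3, 4], 5)                     # prints [1, 0, 1]  (1 + 4 = 5)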

3. Write down an algorithm to compute Longest Common Subsequence (LCS) of two


given strings and analyze its time complexity.

X = < A, B, C, B, D, A, B >

Y = < B, D, C, A, B, A >

< B, C, A > is a common subsequence of both X and Y.

X = < A, B, C, B, D, A, B >

Y = < B, D, C, A, B, A >

< B, C, B, A > or < B, C, A, B >

is a longest common subsequence of X and Y (of length 4).

Application: in DNA matching, to compare two strands by finding the longest strand that appears in
both given strands in the same order.

Given two sequences X = <x1 , . . . , xm> and Y = <y1, . . . , yn>,

a subsequence which is common to both and whose length is longest is called a longest
common subsequence (LCS).

Step 1: Optimal Substructure

 Theorem (Optimal substructure of an LCS):

o Let X = <x1 , . . . , xm> and Y = <y1, . . . , yn>, and let Z = <z1, . . . , zk> be any
LCS of X and Y.
o If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
o If xm ≠ yn, then zk ≠ xm ⇒ Z is an LCS of Xm-1 and Y.
o If xm ≠ yn, then zk ≠ yn ⇒ Z is an LCS of X and Yn-1.

Step 2: A Recursive Solution



c[i, j] =
    0                               if i = 0 or j = 0
    c[i-1, j-1] + 1                 if i, j > 0 and xi = yj
    max( c[i, j-1], c[i-1, j] )     if i, j > 0 and xi ≠ yj

Step 3: Computing the LCS and its Length

c[i, j] =
    0                                          if i = 0 or j = 0
    c[i-1, j-1] + 1                            if i, j > 0 and xi = yj
    max( c[i, j-1] "←", c[i-1, j] "↑" )        if i, j > 0 and xi ≠ yj

(the arrow recorded with each entry tells which neighbouring entry supplied the value, so that
an LCS itself can be traced back from c[m, n])

Complexity = O(mn), as each symbol of X (1 to m) is compared with each symbol of
Y (1 to n).

Example: LCS of X=<A,B,C,B,D,A,B> and Y=<B,D,C,A,B,A>. LCS = <B,C,B,A>
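A bottom-up Python implementation of this recurrence, with the traceback folded in (a sketch of my own; the notes give only the recurrence and the table-filling idea):

def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # trace back through the table to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))     # prints "BCBA" (length 4)

Filling the (m+1) × (n+1) table dominates the work, which is where the O(mn) bound comes from.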



2018-19
4. Define principle of optimality. When and how is dynamic programming applicable?

 Principle of optimality: an optimal solution to a problem contains within it optimal solutions
to its sub-problems; whatever the first decision is, the remaining decisions must be optimal
with respect to the state that results from it.
 Dynamic programming solves optimization problems by combining the solutions of sub-
problems. Dynamic programming is applicable when the sub-problems are not
independent, that is, when sub-problems share sub-sub-problems.
 It solves every sub-problem only once and stores the answer in a table for use when it
reappears. The key is to store the solutions of sub-problems so they can be reused in the future.

E.g.: Fibonacci numbers:

Recurrence: F(n) = F(n-1) + F(n-2)

Boundary conditions: F(1) = 0, F(2) = 1

Compute: F(3) = 1, F(4) = 2, F(5) = 3

 A divide and conquer approach would repeatedly solve the common subproblems. -
Top down approach.
 Dynamic programming solves every subproblem just once and stores the answer in a
table.- Bottom up approach.
 The first step in solving an optimization problem by dynamic programming (or by a greedy
method) is to characterize the structure of an optimal solution.
 We say that a problem exhibits optimal substructure if an optimal solution to the
problem contains within it optimal solutions to subproblems.
 Investigating the optimal substructure of a problem by iterating on subproblem
instances is a good way to infer a suitable space of subproblems for dynamic
programming.
 For example, the Shortest Path problem has following optimal substructure property:
If a node x lies in the shortest path from a source node u to destination node v then the
shortest path from u to v is combination of shortest path from u to x and shortest path
from x to v.
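A bottom-up (tabulated) computation of the Fibonacci example above, using the F(1) = 0, F(2) = 1 convention stated there (an illustrative sketch, not code from the notes):

def fib_table(n):
    if n <= 2:
        return n - 1                     # F(1) = 0, F(2) = 1
    F = [0] * (n + 1)
    F[1], F[2] = 0, 1
    for k in range(3, n + 1):            # fill the table bottom-up
        F[k] = F[k - 1] + F[k - 2]       # each value computed exactly once
    return F[n]

print([fib_table(k) for k in range(1, 6)])   # [0, 1, 1, 2, 3]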

5. Solve the subset sum problem using Backtracking where n=4, m=18, w[4] =
{5,10,8,13}

Sum of Subset Problem:

 Definition : Suppose we are given n distinct positive numbers(usually called weights)


and we desire to find all combinations of these numbers whose sums are m. This is
called the Sum of Subset problem.
 The backtracking algorithm determines the solutions by systematically searching the
solution space for the given problem instance.
 For the given solution space many tree organizations may be possible.
 Explicit Constraint: xi ∈ Si = {0, 1}

 Implicit Constraint: Σ (i = 1 to n) wi·xi = m



 As the algorithm works on an input array sorted in non-decreasing order of the
given weights, we first rearrange the weights:
 So w[1..4] = {5, 8, 10, 13}, with m = 18.

State-space tree (nodes labelled (sum so far, next index, sum of remaining weights); the total of all weights is 36):

(0,1,36)
  x1 = 1 → (5,2,31)
    x2 = 1 → sum 13: cannot be extended to 18, backtrack
    x2 = 0 → (5,3,23)
      x3 = 1 → sum 15: cannot be extended to 18, backtrack
      x3 = 0 → (5,4,13)
        x4 = 1 → sum 5 + 13 = 18: subset found, x = (1,0,0,1)
  x1 = 0 → (0,2,31)
    x2 = 1 → (8,3,23)
      x3 = 1 → sum 8 + 10 = 18: subset found, x = (0,1,1,0)
      x3 = 0 → sum 8 with only 13 remaining: backtrack
    x2 = 0 → no remaining combination reaches 18: backtrack

The two subsets are therefore {5, 13} and {8, 10}.

6. Give Floyd-Warshall Algorithm to find the shortest path for all pair of vertices in a
graph. Give the complexity of the algorithm. Explain with example. (2019-20)

 Given:
o Directed graph G = (V, E)
o Weight function w : E → R
 Compute:
o The shortest paths between all pairs of vertices in a graph
o Representation of the result: an n × n matrix of shortest-path distances δ(u, v)

FLOYD-WARSHALL(W)

1. n ← rows[W]
2. D(0) ← W
3. for k ← 1 to n
4. for i ← 1 to n
5. for j ← 1 to n
6. dij(k) ← min (dij(k-1), dik(k-1) + dkj(k-1))
7. return D(n)

Running time: Θ(n³), as there are three nested for loops, each having n iterations.
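The same triple loop in Python (a sketch; INF marks a missing edge, and the example matrix is the one worked out below):

INF = float("inf")

def floyd_warshall(W):
    n = len(W)
    D = [row[:] for row in W]                 # D(0) = W
    for k in range(n):                        # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   INF, INF, INF],
     [INF, 0,   1,   1],
     [1,   INF, 0,   1],
     [INF, 1,   INF, 0]]
for row in floyd_warshall(W):
    print(row)                                # should match D(4) in the example below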



Example (graph figure omitted; its weight matrix is D(0) = W):

D(0) = W:
        1   2   3   4
   1    0   ∞   ∞   ∞
   2    ∞   0   1   1
   3    1   ∞   0   1
   4    ∞   1   ∞   0

D(1) = D(0)

D(2):
        1   2   3   4
   1    0   ∞   ∞   ∞
   2    ∞   0   1   1
   3    1   ∞   0   1
   4    ∞   1   2   0

D(3):
        1   2   3   4
   1    0   ∞   ∞   ∞
   2    2   0   1   1
   3    1   ∞   0   1
   4    3   1   2   0

D(4):
        1   2   3   4
   1    0   ∞   ∞   ∞
   2    2   0   1   1
   3    1   2   0   1
   4    3   1   2   0

7. Define feasible and optimal solution.

A feasible solution is one that satisfies all of the problem's constraints. If we are given n inputs
and are required to form a subset that satisfies some given constraints, then any such
subset is called a feasible solution.

An optimal solution is a feasible solution that optimizes the given objective function,
i.e., one that gives the largest possible objective value when maximizing (or the
smallest when minimizing).

8. What is dynamic programming? How is it different from recursion? Explain with


example.

Dynamic programming is a technique in which the result of a previous calculation is stored to
avoid calculating it again. DP is basically a memoization technique: it uses a table to store the
results of sub-problems so that if the same sub-problem is encountered again in the future, the
stored result can be returned directly instead of being re-calculated.

During plain recursion, the same sub-problems may be solved multiple times (for example, a
naive recursive Fibonacci recomputes the same smaller values over and over). Dynamic
programming removes this repeated work by solving each sub-problem once and reusing the
stored answer.
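A side-by-side sketch of the difference (my own illustration, using the F(1) = 0, F(2) = 1 convention from question 4):

from functools import lru_cache

def fib_naive(n):                    # plain recursion: re-solves the same sub-problems
    return n - 1 if n <= 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):                     # top-down DP: each sub-problem solved once, then cached
    return n - 1 if n <= 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))   # both print 34, but the naive version makes
                                     # exponentially many recursive calls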

9. Solve the following knapsack problem using dynamic programming. P={11,21,31,33}


and w={2,11,22,15} and capacity= 40

S0 = {(0,0)} S0’ = {(11,2)}

S1 = {(0,0),(11,2)} S1’ = {(21,11),(32,13)}

S2 = {(0,0),(11,2),(21,11),(32,13)} S2’ = {(31,22),(42,24),(52,33),(63,35)}

S3 = {(0,0),(11,2),(21,11),(32,13),(42,24),(52,33),(63,35)} [(31,22) is purged, because

(32,13) gives more profit at a lower weight]

S3' = {(33,15),(44,17),(54,26),(65,28),(75,39),(85,48),(96,50)} [when forming S4, pairs whose
weight exceeds the capacity 40, i.e. (85,48) and (96,50), are discarded]

S4 = {(0,0),(11,2),(21,11),(32,13),(33,15),(42,24),(44,17),(52,33),(54,26),(63,35),(65,28),

(75,39)}

Final (P,W) =(75,39)

Tracing back, for i = n down to 1:

i = 4: (75,39) ∈ S3 → No; (75-33, 39-15) = (42,24) ∈ S3 → Yes, so x4 = 1

i = 3: (42,24) ∈ S2 → No; (42-31, 24-22) = (11,2) ∈ S2 → Yes, so x3 = 1

i = 2: (11,2) ∈ S1 → Yes, so x2 = 0

i = 1: (11,2) ∈ S0 → No; (11-11, 2-2) = (0,0) ∈ S0 → Yes, so x1 = 1

Answer = (1,0,1,1). Items included in the knapsack = (I1, I3, I4).
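A quick dynamic-programming cross-check in Python (a sketch; it uses the standard capacity-indexed table rather than the ordered (profit, weight) pair method above):

def knapsack(P, W, C):
    # dp[c] = best profit achievable with capacity c using the items seen so far
    dp = [0] * (C + 1)
    for p, w in zip(P, W):
        for c in range(C, w - 1, -1):        # iterate capacities downwards for 0/1 items
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[C]

print(knapsack([11, 21, 31, 33], [2, 11, 22, 15], 40))   # 75, matching (P, W) = (75, 39)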



2019-20
10. Define Floyd Warshall algorithm for all pair shortest path and apply the same on
following graph: (2021-22)

(The graph figure is omitted; its weight matrix is the initial matrix D0 below. '-' marks the diagonal and ∞ a missing edge.)

D0:
        1   2   3   4   5
   1    -   ∞   6   3   ∞
   2    3   -   ∞   ∞   ∞
   3    ∞   ∞   -   2   ∞
   4    ∞   1   1   -   ∞
   5    ∞   4   ∞   2   -

D1:
        1   2   3   4   5
   1    -   ∞   6   3   ∞
   2    3   -   9   6   ∞
   3    ∞   ∞   -   2   ∞
   4    ∞   1   1   -   ∞
   5    ∞   4   ∞   2   -

D2 = D3:
        1   2   3   4   5
   1    -   ∞   6   3   ∞
   2    3   -   9   6   ∞
   3    ∞   ∞   -   2   ∞
   4    4   1   1   -   ∞
   5    7   4   13  2   -

D4 = D5:
        1   2   3   4   5
   1    -   4   4   3   ∞
   2    3   -   7   6   ∞
   3    6   3   -   2   ∞
   4    4   1   1   -   ∞
   5    6   3   3   2   -
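Feeding this matrix to the floyd_warshall sketch from question 6 (my own code, not part of the original answer) reproduces D5:

INF = float("inf")
W = [[0,   INF, 6,   3,   INF],
     [3,   0,   INF, INF, INF],
     [INF, INF, 0,   2,   INF],
     [INF, 1,   1,   0,   INF],
     [INF, 4,   INF, 2,   0]]
for row in floyd_warshall(W):   # floyd_warshall as sketched in question 6
    print(row)                  # matches D5 above (column 5 stays INF: no edge enters vertex 5)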

11. What is sum of subset problem? Draw a state space tree for sum of subset problem
using backtracking. N=6, m=30 and w[1..6] = {5,10,12,13,15,18}



For the definition of the sum of subset problem, see question 5 (2018-19) above.
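The state-space tree is not reproduced here, but the sum_of_subsets backtracking sketch from question 2 (my own code) can be used as a check on this instance:

sum_of_subsets([5, 10, 12, 13, 15, 18], 30)
# prints [1, 1, 0, 0, 1, 0]   (5 + 10 + 15 = 30)
#        [1, 0, 1, 1, 0, 0]   (5 + 12 + 13 = 30)
#        [0, 0, 1, 0, 0, 1]   (12 + 18 = 30)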

12. What is the travelling salesman problem (TSP)? Find the solution of the following TSP
using the Branch & Bound method.

The travelling salesman problem: given a cost matrix over n cities, find a minimum-cost tour that
starts at one city, visits every other city exactly once, and returns to the starting city.

Cost matrix (diagonal entries are treated as ∞), with the minimum of each row on the right:

        1    2    3    4    Min.
   1    0    1    15   6    1
   2    2    0    7    3    2
   3    9    6    0    12   6
   4    10   4    8    0    4

After row reduction, with the minimum of each column at the bottom:

        1    2    3    4
   1    ∞    0    14   5
   2    0    ∞    5    1
   3    3    0    ∞    6
   4    6    0    4    ∞
Min.    0    0    4    1



Subtract the minimum element of each row from that row, and then the minimum element of each
column from that column. Total reduction, row-wise plus column-wise, = 13 + 5 = 18, therefore
Lower Bound = 18. The resulting matrix is:

        1    2    3    4
   1    ∞    0    10   4
   2    0    ∞    1    0
   3    3    0    ∞    5
   4    6    0    0    ∞

This reduced matrix, with lower bound 18, is the matrix we proceed with.

Search summary (cost of each node):

1-2 = 21    1-3 = 28    1-4 = 22
1-2-3 = 24  1-2-4 = 21
1-2-4-3-1 = 21

Node 1-2: make row 1 and column 2 infinity, then reduce the matrix (subtract the minimum of each
row, followed by the minimum of each column). The reduced matrix is:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    0    ∞    1    0
   3    0    ∞    ∞    2
   4    6    ∞    0    ∞
Reduction cost = 3

Total cost of 1-2 = w[1,2] + 18 + 3 = 0 + 21 = 21

Node 1-3: make row 1 and column 3 infinity, then reduce the matrix. No reduction is needed:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    0    ∞    ∞    0
   3    3    0    ∞    5
   4    6    0    ∞    ∞
Reduction cost = 0

Total cost of 1-3 = w[1,3] + 18 + 0 = 10 + 18 = 28

Node 1-4: make row 1 and column 4 infinity, then reduce the matrix. No reduction is needed:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    0    ∞    1    ∞
   3    3    0    ∞    ∞
   4    6    0    0    ∞
Reduction cost = 0

Total cost of 1-4 = w[1,4] + 18 + 0 = 4 + 18 = 22

Node 1-2-3: starting from the matrix of node 1-2, make row 2 and column 3 infinity, then reduce.
The reduced matrix is:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞
   3    0    ∞    ∞    0
   4    6    ∞    ∞    ∞
Reduction cost = 2

Total cost of 1-2-3 = w[2,3] + 21 + 2 = 1 + 23 = 24

Node 1-2-4: starting from the matrix of node 1-2, make row 2 and column 4 infinity, then reduce.
No reduction is needed:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞
   3    0    ∞    ∞    ∞
   4    6    ∞    0    ∞
Reduction cost = 0

Total cost of 1-2-4 = w[2,4] + 21 + 0 = 0 + 21 = 21

Node 1-2-4-3-1: only vertex 3 remains, so the tour is completed with the edges 4-3 and 3-1:

        1    2    3    4
   1    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞
   3    0    ∞    ∞    ∞
   4    ∞    ∞    ∞    ∞
Reduction cost = 0

Total cost of 1-2-4-3-1 = w[3,1] + 21 + 0 = 0 + 21 = 21

Prune nodes (1-3) and (1-4), as both partial tours (28 and 22 respectively) cost more than the
complete tour 1-2-4-3-1 of cost 21 (node 1-2-3, with bound 24, is pruned for the same reason).

So, Optimal tour = 1-2-4-3-1 with cost = 21
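As a cross-check (not part of the branch-and-bound method itself), a brute-force enumeration of all tours confirms the optimum; brute_force_tsp below is my own helper, written for this check:

from itertools import permutations

def brute_force_tsp(cost):
    n = len(cost)
    best = (float("inf"), None)
    for perm in permutations(range(1, n)):           # tours fixed to start/end at city index 0
        tour = (0,) + perm + (0,)
        total = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, (total, tour))
    return best

cost = [[0, 1, 15, 6],
        [2, 0, 7, 3],
        [9, 6, 0, 12],
        [10, 4, 8, 0]]
print(brute_force_tsp(cost))   # (21, (0, 1, 3, 2, 0)), i.e. tour 1-2-4-3-1 of cost 21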

13. Discuss N-Queen’s Problem. Solve 4 Queen’s problem using backtracking method. (2021-
22)

N-Queen’s Problem
 The objective is to place n queens on an n x n chess board in such a way that no queen can capture
another one in a single move.
o Recall that a queen can move horizontally, vertically, or diagonally any distance.
 This implies that no two queens can be on the
 same row,
 same column, or
 same diagonal.

4-Queen’s Problem:
 Place four queens on a 4x4 chessboard so that no two attack, that is, so that no two of
them are on the same row, column or diagonal.
 Let the rows and columns be numbered 1 through 4. Since each queen must be on a different row,
we can in general assume that queen i is placed on row i.
 All solutions to the 4-Queens problem can therefore be represented as 4-tuples (x1,x2,x3,x4),
where xi is the column in which queen i is placed.
 Explicit constraints are Si = {1,2,3,4}, 1 ≤ i ≤ 4.
 Therefore, the solution space consists of 4^4 = 256 4-tuples.
 The implicit constraints are that no two xi's can be the same (i.e., all queens must be on
different columns) and that no two queens can be on the same diagonal.

NQueens(k, n)
    for i = 1 to n
        if Place(k, i)
            x[k] = i
            if (k = n)
                write(x[1:n])
            else
                NQueens(k+1, n)

Place(k, i)
    // can a queen be placed in row k, column i, given the queens in rows 1..k-1?
    for j = 1 to k-1
        if ((x[j] = i) or (abs(x[j]-i) = abs(j-k)))
            return false
    return true

// Initial call: NQueens(1, n). For n = 4 the search backtracks until it prints the two
// solutions (2,4,1,3) and (3,1,4,2).
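The same scheme as runnable Python (a sketch of my own, printing every solution):

def n_queens(n):
    x = [0] * (n + 1)                  # x[k] = column of the queen placed in row k (1-indexed)

    def place(k, i):
        # column i is free and not on a diagonal of any earlier queen
        return all(x[j] != i and abs(x[j] - i) != abs(j - k) for j in range(1, k))

    def solve(k):
        for i in range(1, n + 1):
            if place(k, i):
                x[k] = i
                if k == n:
                    print(x[1:])
                else:
                    solve(k + 1)

    solve(1)

n_queens(4)        # prints [2, 4, 1, 3] and [3, 1, 4, 2]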

2021-22
14. Explain Branch and Bound in brief.

Branch and Bound is an enhancement of backtracking.
a. Similarity
i. A state-space tree is used to solve the problem.
b. Difference
i. The branch-and-bound algorithm does not limit us to any particular way of
traversing the tree.
ii. It is used only for optimization problems (whereas the backtracking algorithm
uses DFS traversal and is used for decision / non-optimization problems).
The B&B technique is a systematic method for solving optimization problems.
The idea is to set up a bounding function, which is used to compute a bound (for the value
of the objective function) at a node of the state-space tree and determine whether the node is promising:
c. Promising (the bound is better than the value of the best solution so far): expand
beyond the node.
d. Nonpromising (the bound is no better than the value of the best solution so far):
do not expand beyond the node (prune the state-space tree).
Backtracking is useful and effective for decision problems, but it is not designed for optimization
problems, where we also have a cost function f(n) that we wish to maximize or minimize.



The B&B design has all the elements of backtracking, except that rather than simply
stopping the entire search process any time a solution is found, we continue processing till
the best solution is found.
Backtracking uses DFS, whereas B&B is based on BFS; more precisely, it is best-first
search.
The core of this approach is the simple observation that if a lower bound for a sub-region A
of the search tree is greater than the upper bound of some other sub-region B, then sub-region A
may be safely discarded from the search. This step is called pruning.
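A generic best-first branch-and-bound skeleton in Python for a minimization problem (an abstract sketch of my own; the callables bound, is_complete, children and cost must be supplied by the caller):

import heapq

def branch_and_bound(root, bound, is_complete, children, cost):
    best_value, best_solution = float("inf"), None
    heap = [(bound(root), 0, root)]               # ordered by bound: best-first search
    counter = 1                                   # tie-breaker so heapq never compares states
    while heap:
        b, _, s = heapq.heappop(heap)
        if b >= best_value:                       # bound no better than best so far: prune
            continue
        if is_complete(s):
            if cost(s) < best_value:              # record an improved complete solution
                best_value, best_solution = cost(s), s
            continue
        for child in children(s):                 # branch
            cb = bound(child)
            if cb < best_value:                   # keep only promising nodes
                heapq.heappush(heap, (cb, counter, child))
                counter += 1
    return best_solution, best_value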

15. What is the travelling salesman problem (TSP)? Find the solution of the following TSP
using the Branch & Bound method.

Cost matrix, row-wise reduction, and column-wise reduction (row minima on the right, column
minima at the bottom):

Cost matrix:
        1    2    3    4    5    Min.
   1    -    20   30   10   11   10
   2    15   -    16   4    2    2
   3    3    5    -    2    4    2
   4    19   6    18   -    3    3
   5    16   4    7    16   -    4

After row-wise reduction:
        1    2    3    4    5
   1    -    10   20   0    1
   2    13   -    14   2    0
   3    1    3    -    0    2
   4    16   3    15   -    0
   5    12   0    3    12   -
Min.    1    0    3    0    0

Final matrix after column-wise reduction, with Lower Bound = 21 + 4 = 25:
        1    2    3    4    5
   1    -    10   17   0    1
   2    12   -    11   2    0
   3    0    3    -    0    2
   4    15   3    12   -    0
   5    11   0    0    12   -

Node 1-2: make row 1 and column 2 infinity, then select the minimum row-wise followed by
column-wise and reduce the matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    12   ∞    11   2    0
   3    0    ∞    -    0    2
   4    15   ∞    12   -    0
   5    11   ∞    0    12   -
Reduction cost = 0

Total cost of (1,2) = w[1,2] + lower bound + reduction cost = 10 + 25 + 0 = 35

Node 1-3: make row 1 and column 3 infinity, then reduce the matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    12   -    ∞    2    0
   3    0    3    ∞    0    2
   4    15   3    ∞    -    0
   5    11   0    ∞    12   -
Reduction cost = 0

Total cost of (1,3) = w[1,3] + lower bound + reduction cost = 17 + 25 + 0 = 42

Node 1-4: make row 1 and column 4 infinity, then reduce the matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    12   -    11   ∞    0
   3    0    3    -    ∞    2
   4    15   3    12   ∞    0
   5    11   0    0    ∞    -
Reduction cost = 0

Total cost of (1,4) = w[1,4] + lower bound + reduction cost = 0 + 25 + 0 = 25

Node 1-5: make row 1 and column 5 infinity, then reduce the matrix. Rows 2 and 4 reduce by 2
and 3 respectively, giving:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    10   -    9    0    ∞
   3    0    3    -    0    ∞
   4    12   0    9    -    ∞
   5    11   0    0    12   ∞
Reduction cost = 5

Total cost of (1,5) = w[1,5] + lower bound + reduction cost = 1 + 25 + 5 = 31

Node 1-4-2: make row 1 and column 4, and then row 4 and column 2, infinity, and reduce the
matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    12   ∞    11   ∞    0
   3    0    ∞    -    ∞    2
   4    ∞    ∞    ∞    ∞    ∞
   5    11   ∞    0    ∞    -
Reduction cost = 0

Total cost of (1,4,2) = w[4,2] + cost of node 1-4 + reduction cost = 3 + 25 + 0 = 28

Node 1-4-3: make row 1 and column 4, and then row 4 and column 3, infinity, and reduce the
matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    12   -    ∞    ∞    0
   3    0    3    ∞    ∞    2
   4    ∞    ∞    ∞    ∞    ∞
   5    11   0    ∞    ∞    -
Reduction cost = 0

Total cost of (1,4,3) = w[4,3] + cost of node 1-4 + reduction cost = 12 + 25 + 0 = 37

Node 1-4-5: make row 1 and column 4, and then row 4 and column 5, infinity, and reduce the
matrix. Row 2 reduces by 11, giving:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    1    -    0    ∞    ∞
   3    0    3    -    ∞    ∞
   4    ∞    ∞    ∞    ∞    ∞
   5    11   0    0    ∞    ∞
Reduction cost = 11

Total cost of (1,4,5) = w[4,5] + cost of node 1-4 + reduction cost = 0 + 25 + 11 = 36

Node 1-4-2-3: make row 1 and column 4, row 4 and column 2, and then row 2 and column 3,
infinity, and reduce the matrix. Row 5 reduces by 11 and column 5 by 2, giving:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞    ∞
   3    0    ∞    ∞    ∞    0
   4    ∞    ∞    ∞    ∞    ∞
   5    0    ∞    ∞    ∞    -
Reduction cost = 13

Total cost of (1,4,2,3) = w[2,3] + cost of node 1-4-2 + reduction cost = 11 + 28 + 13 = 52

Node 1-4-2-5: make row 1 and column 4, row 4 and column 2, and then row 2 and column 5,
infinity, and reduce the matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞    ∞
   3    0    ∞    -    ∞    ∞
   4    ∞    ∞    ∞    ∞    ∞
   5    11   ∞    0    ∞    ∞
Reduction cost = 0

Total cost of (1,4,2,5) = w[2,5] + cost of node 1-4-2 + reduction cost = 0 + 28 + 0 = 28



Node 1-4-2-5-3: make row 1 and column 4, row 4 and column 2, row 2 and column 5, and then
row 5 and column 3, infinity, and reduce the matrix. No reduction is needed:

        1    2    3    4    5
   1    ∞    ∞    ∞    ∞    ∞
   2    ∞    ∞    ∞    ∞    ∞
   3    0    ∞    ∞    ∞    ∞
   4    ∞    ∞    ∞    ∞    ∞
   5    ∞    ∞    ∞    ∞    ∞
Reduction cost = 0

Total cost of (1,4,2,5,3) = w[5,3] + cost of node 1-4-2-5 + reduction cost = 0 + 28 + 0 = 28

Total cost of (1,4,2,5,3,1) = w[3,1] + cost of node 1-4-2-5-3 + reduction cost = 0 + 28 + 0 = 28

So, Optimal Tour = 1-4-2-5-3-1 and optimal tour cost = 28
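As with the 4-city instance, a brute-force enumeration (using the brute_force_tsp helper sketched earlier, my own code) confirms the answer:

cost = [[0, 20, 30, 10, 11],
        [15, 0, 16, 4, 2],
        [3, 5, 0, 2, 4],
        [19, 6, 18, 0, 3],
        [16, 4, 7, 16, 0]]
print(brute_force_tsp(cost))   # (28, (0, 3, 1, 4, 2, 0)), i.e. tour 1-4-2-5-3-1 of cost 28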

16. Explain the method of finding Hamiltonian cycles in a graph using backtracking
method with suitable example.

 Let G = (V,E) be a connected graph with n vertices. A Hamilton cycle is a round


trip path along n edges of G that visits every vertex once and returns to its starting
position.
 If a Hamilton cycle begins at some vertex V1 ∈ V and the vertices of G are visited
in the order V1, V2, …, Vn+1, then the edges (Vi, Vi+1) are in E for 1 ≤ i ≤ n, and the
Vi are distinct except for V1 and Vn+1, which are equal.
 The backtracking solution vector (x1…… xn) is defined so that xi represents the
ith visited vertex of the proposed cycle.
 Explicit Constraint : Si ∈ {1..n} where n = no. of vertices.
 Implicit Constraint: Starting from vertex Vi , visit all vertices of the given graph
exactly once and return to the starting vertex.
Hamilton(k)
    repeat
    {
        NextValue(k)               // assign to x[k] the next vertex that legally extends the path
        if (x[k] = 0)
            return                 // no vertex is left to try: backtrack
        if (k = n)
            write(x[1:n])          // a Hamiltonian cycle has been found
        else
            Hamilton(k+1)
    } until (false)

NextValue(k)
    repeat
    {
        x[k] = (x[k] + 1) mod (n+1)               // next candidate vertex; 0 means "none left"
        if (x[k] = 0)
            return
        if (G[x[k-1], x[k]] != 0)                 // there is an edge from x[k-1] to x[k]
        {
            for j = 1 to k-1                      // check that the vertex is distinct from x[1..k-1]
                if (x[j] = x[k])
                    break
            if (j = k)                            // if true then the vertex is distinct
                if ((k < n) or ((k = n) and G[x[n], x[1]] != 0))
                    return                        // accept x[k]
        }
    } until (false)
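A suitable example as runnable Python (my own sketch; the graph below is a 4-cycle 0-1-2-3-0 with the extra chord 0-2, and cycles are reported starting from vertex 0):

def hamiltonian_cycles(G):
    n = len(G)
    x = [0] * n                    # x[k] = k-th vertex of the proposed cycle (x[0] = 0)
    cycles = []

    def solve(k):
        if k == n:                 # all vertices placed: close the cycle if the last edge exists
            if G[x[n - 1]][x[0]]:
                cycles.append(x[:] + [x[0]])
            return
        for v in range(1, n):      # try every vertex as the k-th stop
            if G[x[k - 1]][v] and v not in x[:k]:
                x[k] = v
                solve(k + 1)
                x[k] = 0           # undo and backtrack

    solve(1)
    return cycles

G = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [1, 0, 1, 0]]
print(hamiltonian_cycles(G))       # [[0, 1, 2, 3, 0], [0, 3, 2, 1, 0]]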
