
DESIGN AND ANALYSIS OF ALGORITHMS
MODULE-IV
DYNAMIC PROGRAMMING
From,
Ashwini Janagal
Associate Professor
Department of ISE,
JNN College of Engineering,
Shivamogga-577204,
Karnataka,
India.
Dynamic Programming - General Method

 Dynamic programming can be used when the solution to a problem is a "SEQUENCE OF

DECISIONS".

 Example 1: Knapsack Problem

 We make a decision for the value of each FRACTION xi, 1<=i<=n.

 The optimal set of decisions maximizes Σ pi·xi, such that Σ wi·xi ≤ m and 0 ≤ xi ≤ 1.


Dynamic Programming - General Method

 This can be used when the solution to a problem is a

"SEQUENCE OF DECISIONS".

 Example 2: Shortest Path

 Decide which vertex should be considered next.


Dynamic Programming - General Method

 This can be used when the solution to a problem is a "SEQUENCE OF

DECISIONS".

 Example 3: Shortest Path from ‘i’ to ‘j’

 Can you compare finding the shortest path from 'i' to 'j' with finding

the shortest paths from 'i' to all other vertices?


Dynamic Programming - General Method

What if, for a given problem, we are not able to make a stepwise

decision?

 We take all possible decisions and try to find which one is better!!

 It takes a lot of time and space!!!!

 If we apply the "Principle of Optimality", we can discard the many

decision sequences which do not lead to an optimal solution.


Dynamic Programming - General Method

 Definition of Principle of optimality

 An optimal Sequence of Decisions has the property that

"Whatever the initial state and decision are, the remaining

decisions must constitute an optimal decision sequence with

regard to the state resulting from the first decision".


Dynamic Programming - General Method

Differentiate Greedy and Dynamic Programming.

 Greedy: Only one decision sequence is ever generated, based on a

locally optimal (greedy) choice.

 Dynamic Programming: Many decision sequences may be generated, but

sequences containing suboptimal subsequences are discarded.


Dynamic Programming - General Method

Example: Shortest Path.

 Let us say i, i1, i2, i3, ......, ik, j is a shortest path from i to j.

 Decision 1: the edge from i to i1 is chosen.

 Remaining subproblem: find the shortest path from i1 to j.

 If some path i1, r1, r2, ....., rq, j were shorter than i1, i2, ....., ik, j, then

i, i1, r1, r2, ....., rq, j would be shorter than the chosen path, a contradiction.

 So this problem satisfies the Principle of Optimality.


Dynamic Programming - General Method

Example: 0/1 Knapsack

 Problem: Same as the Knapsack Problem, but each xi must be 0 or 1.

 Maximize Σ pi·xi subject to Σ wi·xi ≤ m, xi ∈ {0,1}.

 Let y1, y2, y3, ..., yn be an optimal sequence of 0/1 values for

x1, x2, ....., xn respectively.

 If y1=0, then y2, y3, ...., yn must be an optimal solution to KNAP(2, n, m).

 Else y1, y2, y3, ....., yn is not optimal.


Dynamic Programming - General Method

Example: 0/1 Knapsack

 If y1=0, then y2, y3, ...., yn must be an optimal solution to KNAP(2, n, m).

 Else y1, y2, y3, ....., yn is not optimal.

 If y1=1, then y2, y3, ...., yn must be optimal for the problem KNAP(2, n, m-w1).

 Else there is another sequence z2, z3, ..., zn such that Σi≥2 wi·zi ≤ m−w1 and

Σi≥2 pi·zi > Σi≥2 pi·yi, i.e. the z sequence is better than the y sequence.



Dynamic Programming - General Method

Example: Shortest Path

 Ai --> Set of vertices adjacent to i.

 For each k ∈ Ai, let Pk be a shortest path from k to j.

 Then the shortest path from i to j is the minimum, over k ∈ Ai, of {edge (i,k) followed by Pk}.


Dynamic Programming - General Method

Example: 0/1 Knapsack
 Let y1, y2, ..., yn be an optimal solution to KNAP(1, n, m).

 Let gj(y) --> value of an optimal solution to KNAP(j+1, n, y).

 g0(m)=max{g1(m), g1(m-w1)+p1}.

 In general, gi(y)=max{gi+1(y), gi+1(y-wi+1)+pi+1}, with gn(y)=0.
Dynamic Programming - General Method

Problem: Consider the case in which n=3, w1=2, w2=3, w3=4

and p1=1, p2=2, and p3=5 and m=6, then Compute g0(6).

g0(6)=max{g1(6),g1(6-2)+p1}=max{g1(6),g1(4)+1}

g1(6)=max{g2(6),g2(6-3)+p2}=max{g2(6),g2(3)+2}

g2(6)=max{g3(6),g3(6-4)+p3}=max{g3(6),g3(2)+5}=max{0,5}=5
Dynamic Programming - General Method

Problem: Consider the case in which n=3, w1=2, w2=3, w3=4

and p1=1, p2=2, and p3=5 and m=6, then Compute g0(6).

g1(6)=max{g2(6),g2(6-3)+p2}=max{g2(6),g2(3)+2}

g2(3)=max{g3(3),g3(3-4)+5}=max{0,-INF}=0 --> g1(6)=max{5,0+2}=5


Dynamic Programming - General Method

Problem: Consider the case in which n=3, w1=2, w2=3, w3=4

and p1=1, p2=2, and p3=5 and m=6, then Compute g0(6).

g0(6)=max{g1(6),g1(6-2)+p1}=max{g1(6),g1(4)+1}

g1(4)=max{g2(4),g2(4-3)+2}=max{g2(4),g2(1)+2}

g2(4)=max{g3(4),g3(4-4)+5}=max{0,5}=5

g2(1)=max{g3(1),g3(1-4)+5}=max{0,-INF}=0 --> g1(4)=max{5,0+2}=5

Therefore g0(6)=max{g1(6),g1(4)+1}=max{5,5+1}=6, i.e. the optimal profit is 6 (take items 1 and 3).
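As a sanity check, here is a minimal Python sketch of the gi(y) recurrence above, evaluated on the same instance (n=3, w=[2,3,4], p=[1,2,5], m=6). The memoisation via lru_cache and the 0-based item index are implementation choices, not part of the slides.

from functools import lru_cache

w = [2, 3, 4]   # weights w1..wn
p = [1, 2, 5]   # profits p1..pn
n, m = 3, 6

@lru_cache(maxsize=None)
def g(i, y):
    """Value of an optimal solution to KNAP(i+1, n, y), with items indexed from 0."""
    if y < 0:
        return float('-inf')   # infeasible: capacity exceeded
    if i == n:
        return 0               # no items left to decide
    # either skip item i+1 or take it (xi = 0 or 1)
    return max(g(i + 1, y), g(i + 1, y - w[i]) + p[i])

print(g(0, m))   # g0(6) -> 6, matching the hand computation above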
Dynamic Programming - MULTISTAGE GRAPH

Definition and constraints of a MULTISTAGE GRAPH:

1. A multistage graph G=(V,E) is a

2. Directed Graph

3. The vertices are partitioned into k≥2 disjoint sets Vi, 1≤i≤k.

4. Every edge (u,v) in E has to satisfy the condition that if

u ∈ Vi then v ∈ Vi+1, for some i, 1≤i<k.

Dynamic Programming - MULTISTAGE GRAPH

Definition and constraints of MULTISTAGE GRAPH:

5. |V1| = |Vk| = 1. The vertex in V1 is known as the SOURCE. (Usually written as 's')

6. The vertex in Vk is known as the SINK. (Usually written as 't')


Dynamic Programming - MULTISTAGE GRAPH

MULTISTAGE GRAPH PROBLEMS:

 To find a MINIMUM-COST PATH from 's' to 't' (Source to

Sink).
Dynamic Programming - MULTISTAGE GRAPH
Dynamic Programming - MULTISTAGE GRAPH

Cost(i,j)=min{c(j,l)+cost(i+1,l) : l ∈ Vi+1 and (j,l) ∈ E}
Cost(k-1,j)=c(j,t)
Cost(4,9)=4 ; cost(4,10)=2 ; cost(4,11)=5;
Cost(k-2,j)
cost(3,6)=min{6+cost(4,9), 5+cost(4,10)}
cost(3,7)=min{5+cost(4,9),3+cost(4,10)}
cost(3,8)=min{5+cost(4,10),6+cost(4,11)}
Dynamic Programming - MULTISTAGE GRAPH

Cost(i,j)=min{c(j,l)+cost(i+1,l)}
Cost(k-3,j)
Cost(2,2)=min{4+cost(3,6),2+cost(3,7),1+cost(3,8)}
cost(2,3)=min{2+cost(3,6),7+cost(3,7)}
cost(2,4)=min{11+cost(3,8)}
cost(2,5)=min{11+cost(3,7),8+cost(3,8)}
Dynamic Programming - MULTISTAGE GRAPH

Cost(i,j)=min{c(j,l)+cost(i+1,l)}
Cost(k-1,j)=c(j,t)
Cost(4,9)=4 ; cost(4,10)=2 ; cost(4,11)=5;
Cost(k-2,j)
cost(3,6)=min{6+cost(4,9), 5+cost(4,10)} = min{6+4, 5+2} = min{10,7} = 7
cost(3,7)=min{5+cost(4,9), 3+cost(4,10)} = min{5+4, 3+2} = min{9,5} = 5
cost(3,8)=min{5+cost(4,10), 6+cost(4,11)} = min{5+2, 6+5} = min{7,11} = 7
Dynamic Programming - MULTISTAGE GRAPH

Cost(3,6)=7 ; Cost(3,7)=5 ; Cost(3,8)=7
Cost(i,j)=min{c(j,l)+cost(i+1,l)}
Cost(k-3,j)
Cost(2,2)=min{4+cost(3,6), 2+cost(3,7), 1+cost(3,8)} = min{4+7, 2+5, 1+7} = min{11,7,8} = 7
cost(2,3)=min{2+cost(3,6), 7+cost(3,7)} = min{2+7, 7+5} = min{9,12} = 9
cost(2,4)=min{11+cost(3,8)} = min{11+7} = 18
cost(2,5)=min{11+cost(3,7), 8+cost(3,8)} = min{11+5, 8+7} = min{16,15} = 15
Cost(k-4,j)
cost(1,1)=min{9+cost(2,2), 7+cost(2,3), 3+cost(2,4), 2+cost(2,5)}
         =min{9+7, 7+9, 3+18, 2+15} = min{16,16,21,17} = 16
So the minimum cost from 's' to 't' is 16.
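A minimal Python sketch of this backward computation. Since the slides' figure is not reproduced here, the edge costs c(j,l) below are reconstructed from the cost() formulas above and should be read as an assumption.

# Edge costs c[j][l] as they appear in the cost() formulas above
# (vertices 1..12 in a 5-stage graph; 1 = source s, 12 = sink t).
c = {
    1: {2: 9, 3: 7, 4: 3, 5: 2},
    2: {6: 4, 7: 2, 8: 1},
    3: {6: 2, 7: 7},
    4: {8: 11},
    5: {7: 11, 8: 8},
    6: {9: 6, 10: 5},
    7: {9: 5, 10: 3},
    8: {10: 5, 11: 6},
    9: {12: 4}, 10: {12: 2}, 11: {12: 5},
    12: {},
}
t = 12
cost = {t: 0}          # cost[j] = cheapest way to reach the sink from j
decision = {}          # decision[j] = next vertex on a cheapest path

# Process vertices in decreasing order so every successor is already solved.
for j in sorted(c, reverse=True):
    if j == t:
        continue
    l = min(c[j], key=lambda v: c[j][v] + cost[v])
    decision[j] = l
    cost[j] = c[j][l] + cost[l]

print(cost[1])         # 16, the minimum s-to-t cost computed above

# Recover one minimum-cost path by following the decisions.
path, v = [1], 1
while v != t:
    v = decision[v]
    path.append(v)
print(path)            # e.g. [1, 2, 7, 10, 12]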
Dynamic Programming - MULTISTAGE GRAPH

APPLICATION OF MULTISTAGE GRAPH PROBLEM:


RESOURCE ALLOCATION

 'n' units of a resource are to be allocated to 'r' projects.

 If 'j' UNITS OF RESOURCE are allocated to PROJECT 'i',

the PROFIT gained is N(i,j).

 PROBLEM --> Allocate the n units to the 'r' projects such that

the total profit is maximum.
Dynamic Programming - MULTISTAGE GRAPH

APPLICATION OF MULTISTAGE GRAPH PROBLEM:


RESOURCE ALLOCATION

 Formulate it as an (r+1)-stage graph (see the sketch below).

 Except for the 1st and the (r+1)th stage, each stage has at most n+1 vertices.

 Vertex V(i,j) --> 'j' units of the resource have been allocated to projects 1, 2, ..., i-1.

 The edge from V(i,j) to V(i+1,l), j ≤ l ≤ n, is assigned weight N(i, l-j).
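A minimal Python sketch of this reduction, written directly as the multistage recurrence over "units still available". The profit table N and the sizes r=3, n=3 are illustrative assumptions, not values from the slides.

# Profit table N[i][j]: profit from giving j units to project i (illustrative values only).
N = [
    [0, 2, 5, 6],   # project 1 with 0..3 units
    [0, 3, 4, 4],   # project 2 with 0..3 units
    [0, 1, 6, 7],   # project 3 with 0..3 units
]
r, n = len(N), 3     # r projects, n units of resource

# best[i][j] = maximum profit from projects i..r-1 (0-based i) when j units are still available.
best = [[0] * (n + 1) for _ in range(r + 1)]
for i in range(r - 1, -1, -1):
    for left in range(n + 1):
        # decide how many units to give to project i; this mirrors choosing the
        # edge V(i, spent) -> V(i+1, spent+give) of weight N(i, give) in the graph
        best[i][left] = max(N[i][give] + best[i + 1][left - give]
                            for give in range(left + 1))

print(best[0][n])    # maximum total profit for allocating the n units -> 9 here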


Dynamic Programming - MULTISTAGE GRAPH
WARSHALLS and FLOYD ALGORITHM

 To Find the Transitive Closure of a Graph.

 Given a directed graph, find out if a vertex j is

reachable from another vertex i for all vertex pairs (i,j)

in the given graph.


WARSHALLS and FLOYD ALGORITHM

Ex: Transitive Closure of the Given

Graph is
WARSHALLS ALGORITHM

Defn 1: Adjacency Matrix


A=[aij] is the boolean adjacency matrix of the given Directed Graph,

where aij=1 if and only if there is a directed edge from i to j.


WARSHALLS ALGORITHM

Defn : Transitive Closure


The Transitive Closure of a Directed Graph with 'n' vertices
can be defined as an n-by-n boolean matrix T=[tij] that has 1
in the 'i'th row and 'j'th column if there exists a nontrivial
directed path from i to j. Otherwise tij=0.
WARSHALLS ALGORITHM

Find the Adjacency Matrix and Transitive Closure


of the following Graph.
WARSHALLS ALGORITHM

 A simple method to find the Transitive Closure is to run Depth First

Search (DFS) or Breadth First Search (BFS) from each vertex.

 But the traversal has to be applied repeatedly, once per vertex.

 Therefore, we apply Warshall's Algorithm instead.


WARSHALLS ALGORITHM

 Warshall's algorithm constructs the Transitive Closure of

a given digraph with ‘n’ vertices through a series of ‘n-by-n

Boolean Matrices’ : R(0), R(1), ........., R(k-1),R(k)....R(n).

 Each of these matrices gives information about directed paths

whose intermediate vertices are restricted to a prefix of the vertex set:

 R(k)[i,j] = 1 if there is a path from i to j with each

intermediate vertex numbered not higher than k.

WARSHALLS ALGORITHM

 R(0) is just the adjacency matrix, and R(n) is the transitive closure.

 R(k)[i,j] = R(k-1)[i,j] or ( R(k-1)[i,k] and R(k-1)[k,j] )
WARSHALLS ALGORITHM

 In Common Man's Language

 If an element is 1 in R(k-1), it remains 1 in R(k).

 If an element is 0 in R(k-1), it is changed to 1 in

R(k) if and only if the element in its row i and column k

and the element in its row k and column j are both 1's in

R(k-1).
WARSHALLS ALGORITHM
 Working of Warshall's Algorithm

 The digraph in the example has vertices 1(a), 2(b), 3(c), 4(d) and

edges a->b, b->d, d->a, d->c, so its adjacency matrix is

         1 2 3 4
R(0) = 1 [0 1 0 0]
       2 [0 0 0 1]
       3 [0 0 0 0]
       4 [1 0 1 0]

 k=1 (intermediate vertices numbered ≤1): column 1 has a 1 only in row 4,

so row 4 picks up the 1's of row 1 (the new path 4->1->2).

         1 2 3 4
R(1) = 1 [0 1 0 0]
       2 [0 0 0 1]
       3 [0 0 0 0]
       4 [1 1 1 0]

 k=2: rows 1 and 4 have a 1 in column 2, so each picks up the 1's of

row 2 (the new paths 1->2->4 and 4->1->2->4).

         1 2 3 4
R(2) = 1 [0 1 0 1]
       2 [0 0 0 1]
       3 [0 0 0 0]
       4 [1 1 1 1]

 k=3: row 3 of R(2) contains no 1's, so nothing changes and R(3)=R(2).

 k=4: rows 1, 2 and 4 have a 1 in column 4, so each picks up the 1's of row 4.

         1 2 3 4
R(4) = 1 [1 1 1 1]
       2 [1 1 1 1]
       3 [0 0 0 0]
       4 [1 1 1 1]

 R(4) is the Transitive Closure: every vertex except 3 can reach every

vertex, and no vertex is reachable from 3.
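A minimal Python sketch of Warshall's algorithm, run on the adjacency matrix R(0) of the example above.

# Warshall's algorithm: R(k)[i][j] = R(k-1)[i][j] or (R(k-1)[i][k] and R(k-1)[k][j]).
def warshall(adj):
    n = len(adj)
    r = [row[:] for row in adj]          # start from R(0) = adjacency matrix
    for k in range(n):                   # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Adjacency matrix of the example digraph (edges a->b, b->d, d->a, d->c).
R0 = [[0, 1, 0, 0],
      [0, 0, 0, 1],
      [0, 0, 0, 0],
      [1, 0, 1, 0]]

for row in warshall(R0):
    print(row)   # rows 1, 2 and 4 become all 1's; row 3 stays all 0's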
FLOYD’s ALGORITHM - ALL PAIR SHORTEST PATH

Given a weighted connected graph (undirected or directed),

the all-pairs shortest-paths problem finds the distances (the

lengths of the shortest paths) from each vertex to all other

vertices.
FLOYD’s ALGORITHM - ALL PAIR SHORTEST PATH

Input: Weight Matrix / Cost Matrix W=[wij]

Output: Distance Matrix D=[dij]

dij indicates the LENGTH of the SHORTEST PATH from

'i' to 'j'.
FLOYD’s ALGORITHM - ALL PAIR SHORTEST PATH
FLOYD’s ALGORITHM - ALL PAIR SHORTEST PATH
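A minimal Python sketch of Floyd's algorithm. The 4-vertex weight matrix below is illustrative, since the slides' example figure is not reproduced here.

INF = float('inf')

# Floyd's algorithm: D(k)[i][j] = min(D(k-1)[i][j], D(k-1)[i][k] + D(k-1)[k][j]).
def floyd(w):
    n = len(w)
    d = [row[:] for row in w]            # D(0) = weight matrix
    for k in range(n):                   # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative 4-vertex weight matrix (0 on the diagonal, INF = no edge).
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]

for row in floyd(W):
    print(row)   # the distance matrix D, one row per source vertex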
Optimal Binary Search Trees
Optimal Binary Search Tree

 Very Important Data Structure.

 Example Application : Dictionary.

 Remember Huffman Coding....... Let's refresh some

concepts of binary search trees.


Optimal Binary Search Tree

 4 Keys : A, B, C and D with search probabilities, 0.1, 0.2,

0.4 and 0.3.

 The average number of comparisons for a tree is Σ pi · (level of key i),

counting the root as level 1; for the tree shown in the figure it

= 2.9
Optimal Binary Search Tree

 4 Keys : A, B, C and D with search probabilities, 0.1, 0.2,

0.4 and 0.3.

 Average number of comparisons for this tree is

= 2.1

 But is either of these trees optimal????


Optimal Binary Search Tree

 With n keys we can generate C(2n,n)/(n+1) different binary search

trees (the nth Catalan number).

 This function grows very fast (exponentially in n).

Optimal Binary Search Tree

 Let a1 < a2 < ... < an be distinct keys ordered from the smallest to the

largest.

 Let p1, p2, ..., pn be the probabilities of searching for them.

 Let C[i, j] be the smallest average number of comparisons

made in a successful search in a binary search tree built from

the keys ai, ..., aj.


Optimal Binary Search Tree

 Then apply Dynamic Programming in the following way:

 Choose some key ak, i ≤ k ≤ j, as the root.

 The left subtree contains ai, ai+1, ..., ak-1.

 The right subtree contains ak+1, ........, aj.

 This gives the recurrence C[i,j] = min over i≤k≤j of { C[i,k-1] + C[k+1,j] } + Σ s=i..j ps.


Optimal Binary Search Tree

 C[i, i-1]=0 for 1≤i≤n+1 (empty tree).

 C[i, i]=pi for 1≤i≤n (single-key tree).


Optimal Binary Search Tree

GO TO NOTES
Optimal Binary Search Tree
Optimal Binary Search Tree
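A minimal Python sketch of the C[i,j] recurrence, evaluated on the 4-key example above (keys A, B, C, D with probabilities 0.1, 0.2, 0.4, 0.3).

# Optimal BST: C[i][j] = min_{i<=k<=j} (C[i][k-1] + C[k+1][j]) + sum(p[i..j]).
p = [0.1, 0.2, 0.4, 0.3]          # search probabilities of keys A, B, C, D
n = len(p)

# 1-based key indices; C[i][i-1] = 0 represents the empty tree.
C = [[0.0] * (n + 2) for _ in range(n + 2)]
root = [[0] * (n + 2) for _ in range(n + 2)]

for i in range(1, n + 1):
    C[i][i] = p[i - 1]
    root[i][i] = i

for length in range(2, n + 1):            # subproblem size
    for i in range(1, n - length + 2):
        j = i + length - 1
        prob_sum = sum(p[i - 1:j])        # p_i + ... + p_j
        best = float('inf')
        for k in range(i, j + 1):         # try every key as the root
            cost = C[i][k - 1] + C[k + 1][j]
            if cost < best:
                best, root[i][j] = cost, k
        C[i][j] = best + prob_sum

print(round(C[1][n], 2))   # 1.7 : smallest average number of comparisons
print(root[1][n])          # 3   : key C is the root of the optimal tree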
KNAPSACK PROBLEM - 0/1
KNAPSACK PROBLEM - 0/1
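The DP table for this section appears in the slides' figures, which are not reproduced here; below is a minimal Python sketch of the usual bottom-up table V[i][j] (best profit using items 1..i with capacity j), run on the same instance as before (w=[2,3,4], p=[1,2,5], m=6).

# Bottom-up 0/1 knapsack: V[i][j] = max(V[i-1][j], V[i-1][j-w[i]] + p[i]) when item i fits.
w = [2, 3, 4]
p = [1, 2, 5]
n, m = len(w), 6

V = [[0] * (m + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    for j in range(m + 1):
        V[i][j] = V[i - 1][j]                                            # skip item i
        if j >= w[i - 1]:
            V[i][j] = max(V[i][j], V[i - 1][j - w[i - 1]] + p[i - 1])    # take item i

print(V[n][m])   # 6, the same optimum found with the g() recurrence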
BELLMAN FORD

 SINGLE SOURCE SHORTEST PATHS with GENERAL

WEIGHTS (edges MAY BE NEGATIVE).

 Dijkstra's algorithm will not work with negative edge weights.


BELLMAN FORD

 Let dist^l[u] be the length of a shortest path from the source vertex 'v' to 'u' using at most 'l'

edges.

 Goal: compute dist^(n-1)[u] for all 'u' (when there is no negative cycle,

a shortest path uses at most n-1 edges).


BELLMAN FORD

 Initial --> dist^1[u] = cost[v, u], the cost of the direct edge from the

source 'v' to 'u' (INF if there is no such edge).

 STEP k --> dist^k[u] = min{ dist^(k-1)[u], min over i of { dist^(k-1)[i] + cost[i, u] } },

applied to every vertex 'u' of the example graph shown in the figure.
BELLMAN FORD
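A minimal Python sketch of the dist^k[u] recurrence. The edge list below, including one negative-weight edge, is an illustrative assumption, since the slides' example figure is not reproduced here.

INF = float('inf')

# Bellman-Ford: dist^k[u] = min(dist^(k-1)[u], min_i(dist^(k-1)[i] + cost[i][u])).
def bellman_ford(n, edges, source):
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # at most n-1 edges on any shortest path
        for (i, u, c) in edges:
            if dist[i] + c < dist[u]:
                dist[u] = dist[i] + c
    # one more pass detects a negative cycle reachable from the source
    for (i, u, c) in edges:
        if dist[i] + c < dist[u]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Illustrative digraph: vertex 0 is the source; two edges have negative weight.
edges = [(0, 1, 6), (0, 2, 5), (0, 3, 5), (1, 4, -1),
         (2, 1, -2), (2, 4, 1), (3, 2, -2), (4, 5, 3)]
print(bellman_ford(6, edges, 0))   # [0, 1, 3, 5, 0, 3]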
TRAVELLING SALESMAN PROBLEM

 Let G=(V,E) be a directed graph with edge costs cij, where cij > 0 for

(i,j) ∈ E and cij = ∞ if (i,j) ∉ E.

 Let |V| = n, with n > 1.

 TOUR : a simple cycle that includes every vertex in V.

 COST OF A TOUR: the sum of the costs of the edges on the tour.

 TSP (Travelling Salesman Problem): To find a TOUR with

MINIMUM COST.
TRAVELLING SALESMAN PROBLEM

 For example if we consider a TOUR from vertex 1, then

 Every tour consists of an edge (1,k) for some k in V-{1} and a path

from k to 1 that goes through each vertex in V-{1,k} exactly once. (FOR AN

OPTIMAL TOUR, THE PATH FROM k TO 1 MUST BE MINIMUM.)

 Let g(i,S) be length of SHORTEST PATH from vertex i to 1, going

THROUGH ALL VERTICES in S.

 We are interested in g(1, V-{1}).


TRAVELLING SALESMAN PROBLEM

g(1, V−{1}) = min over 2≤k≤n of { c1k + g(k, V−{1,k}) }
TRAVELLING SALESMAN PROBLEM

 In general, for i ∉ S: g(i, S) = min over j∈S of { cij + g(j, S−{j}) }, with g(i, ∅) = ci1.
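A minimal Python sketch of the g(i, S) recurrence, memoised over subsets. The 4-city cost matrix is illustrative; the slides do not give a numeric instance here.

from functools import lru_cache

# Illustrative cost matrix for 4 cities (c[i][j] = cost of edge i -> j).
c = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
n = len(c)

@lru_cache(maxsize=None)
def g(i, S):
    """Length of a shortest path from i back to city 0 going through every city in frozenset S."""
    if not S:
        return c[i][0]
    return min(c[i][j] + g(j, S - {j}) for j in S)

# Optimal tour length starting and ending at city 0.
print(g(0, frozenset(range(1, n))))   # 35 for this cost matrix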
