Daa 3
Dynamic Programming: Applications – Multistage graphs – Optimal binary search trees – 0/1 knapsack
problem – All pairs shortest path problem – Traveling salesperson problem – Reliability design – Chained
matrix multiplication.
1. Mrs.R.Deepa Asst.Prof
2. Mr.S.Kumarakrishnan Asst.Prof
3.Mrs.S.Jagadeeswari Asst.Prof
PART A
1. Write the difference between the Greedy method and Dynamic programming.
Greedy method:
1. A choice of what looks best for the moment is made at each stage; solutions to the subproblems do not have to be known.
2. Only one sequence of decisions is generated.
Dynamic programming:
1. A decision is made at each stage using the solutions of the subproblems.
2. Many decision sequences are generated.
10. What is the formula to calculate optimal solution in 0/1 knapsack problem?
The formula to calculate the optimal solution is g0(m) = max{g1(m), g1(m−w1) + p1}.
13. Give the time complexity and space complexity of traveling salesperson problem.
Time complexity is O(n² 2ⁿ).
Space complexity is O(n 2ⁿ).
14. List different traversal technique in graph (UQ APRIL’12/ APRIL 2015)
1. DFS and
2. BFS.
17. Write about multistage graph with example (UQ APRIL’13 & Nov’12,APRIL/MAY’14)
A multistage graph G = (V,E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint
sets Vi, 1 ≤ i ≤ k. In addition, if <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i,
1 ≤ i < k.
22. Give the commonly used design steps for a dynamic programming algorithm.
Dynamic programming design involves 4 major steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution using the bottom-up technique.
4. Construct an optimal solution from the computed information.
23. What does dynamic programming have in common with divide and conquer?
o Both the divide and conquer and dynamic programming solve the problem by breaking it into
number of sub-problems.
o In both these methods solutions from sub-problems are collected together to form a solution to
given problem.
25.Compare divide and conquer with dynamic programming and Dynamic programming with
greedy technique.
It is an algorithm design method that can be used when the solution to a problem can be viewed
as the result of a sequence of decisions.
The idea of dynamic programming is thus quite simple: avoid calculating the same thing twice,
usually by keeping a table of known results that fills up as subinstances are solved.
Divide and conquer is a top-down method. When a problem is solved by divide and conquer, we
immediately attack the complete instance, which we then divide into smaller and smaller sub-
instances as the algorithm progresses.
Dynamic programming on the other hand is a bottom-up technique. We usually start with the
smallest and hence the simplest sub- instances. By combining their solutions, we obtain the
answers to sub-instances of increasing size, until finally we arrive at the solution of the original
instances.
The essential difference between the greedy method and dynamic programming is that in the greedy
method only one decision sequence is ever generated. In dynamic programming, many decision
sequences may be generated. However, sequences containing sub-optimal sub-sequences cannot
be optimal and so will not be generated.
Because of the principle of optimality, decision sequences containing subsequences that are
suboptimal are not considered. Although the total number of different decision sequences is
exponential in the number of decisions (if there are d choices for each of the n decisions to be made,
then there are d^n possible decision sequences), dynamic programming algorithms often have a
polynomial complexity.
1. A multistage graph G = (V,E) is a directed graph in which the vertices are partitioned into k ≥ 2
disjoint sets Vi, 1 ≤ i ≤ k.
2. In addition, if <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.
3. The sets V1 and Vk contain only one vertex each, i.e., |V1| = |Vk| = 1.
4. Let 's' and 't' be the source and destination respectively.
5. The cost of a path from source (s) to destination (t) is the sum of the costs of the edges on the path.
6. The MULTISTAGE GRAPH problem is to find a minimum cost path from‘s’ to‘t’.
7. Each set Vi defines a stage in the graph. Every path from ‘s’ to ‘t’ starts in stage-1, goes to stage-2
then to stage-3, then to stage-4, and so on, and terminates in stage-k.
8. This MULTISTAGE GRAPH problem can be solved in 2 ways.
a) Forward Method.
b) Backward Method.
FORWARD METHOD
Assume that there are 'k' stages in the graph. In this FORWARD approach, we will find out the cost of
each and every node starting from the k-th stage back to the 1st stage. We will then find the
minimum cost path from the source to the destination (i.e.) [Stage-1 to Stage-k].
PROCEDURE:
Maintain a cost matrix cost (n) which stores the distance from any vertex to the destination.
If a vertex is having more than one path, then we have to choose the minimum distance path and
the intermediate vertex which gives the minimum distance path will be stored in the distance array
‘D’.
In this way we will find out the minimum cost path from each and every vertex.
Finally cost (1) will give the shortest distance from source to destination.
For finding the path, start from vertex-1 then the distance array D(1) will give the minimum cost
neighbor vertex which in turn give the next nearest vertex and proceed in this way till we reach
the Destination.
For a 'k'-stage graph, there will be 'k' vertices in the path.
In the above graph V1……V5 represent the stages. This 5 stage graph can be solved by using
forward approach as follows,
ALGORITHM:
When j=6
cost(6) = min (c (6,9) + cost(9),c (6,10) +cost(10))
= min(6+4 , 5 +2)
= min(10,7)
=7
cost(6) = 7 d(6) = 10
When j=5
cost(5) = min (c (5,7) + cost(7),c (5,8) +cost(8))
= min(11+5 , 8 +7)
= min(16,15)
= 15
cost(5) = 15 d(5) = 8
When j=4
cost(4) = min (c (4,8) + cost(8))
= min(11+7)
= 18
cost(4) = 18 d(4) = 8
When j=3
cost(3) = min (c (3,6) + cost(6),c (3,7) +cost(7))
= min(2+7 , 7 +5)
= min(9,12)
=9
cost(3) = 9 d(3) = 6
When j=2
cost(2) = min (c (2,6) + cost(6),c (2,7) +cost(7) ,c (2,8) +cost(8))
= min(4+7 , 2+5 , 1+7 )
= min(11,7,8)
=7
cost(2) = 7 d(2) = 7
When j=1
cost(1) = min (c (1,2)+cost(2), c (1,3)+cost(3), c (1,4)+cost(4), c (1,5)+cost(5))
        = min(9+7 , 7+9 , 3+18 , 2+15)
        = min(16,16,21,17)
        = 16
cost(1) = 16 d(1) = 2
p[1]=1 p[5]=12
for j=2 to 4
When j=2 p[2]=d[p[1]]=d[1]=2
When j=3 p[3]=d[p[2]]=d[2]=7
When j=4 p[4]=d[p[3]]=d[7]=10
The path through which you have to find the shortest distance from vertex 1 to vertex 12.
(ie)
D ( 1) = 2
D ( 2) = 7
D ( 7) = 10
D (10) = 12
ANALYSIS:
The time complexity of this forward method is O(|V| + |E|).
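The forward recurrence traced above (cost(j) = min over successors r of c(j,r) + cost(r), with d(j) recording the chosen successor) can be sketched in Python. The edge costs below are read off the worked 5-stage, 12-vertex trace, so treat the adjacency map as an illustration reconstructed from the example rather than a definitive figure:

```python
# Forward-method multistage graph: cost(j) = min over successors r of
# c(j, r) + cost(r); d(j) records the successor achieving the minimum.
graph = {
    1: {2: 9, 3: 7, 4: 3, 5: 2},
    2: {6: 4, 7: 2, 8: 1},
    3: {6: 2, 7: 7},
    4: {8: 11},
    5: {7: 11, 8: 8},
    6: {9: 6, 10: 5},
    7: {9: 4, 10: 3},
    8: {10: 5, 11: 6},
    9: {12: 4},
    10: {12: 2},
    11: {12: 5},
}

def fgraph(graph, n):
    cost = {n: 0}                          # cost of the destination is 0
    d = {}
    for j in range(n - 1, 0, -1):          # from stage k-1 back to stage 1
        succ, edge = min(graph[j].items(), key=lambda e: e[1] + cost[e[0]])
        d[j] = succ
        cost[j] = edge + cost[succ]
    path, v = [1], 1                       # follow d from the source
    while v != n:
        v = d[v]
        path.append(v)
    return cost[1], path

print(fgraph(graph, 12))   # (16, [1, 2, 7, 10, 12])
```

The minimum cost 16 and the path 1 → 2 → 7 → 10 → 12 match the trace.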
BACKWARD METHOD
If there are 'k' stages in the graph, then using the backward approach we will find out the cost of each and
every vertex starting from the 1st stage up to the k-th stage.
We will find out the minimum cost path from destination to source (i.e.) [from stage k to stage 1].
PROCEDURE:
1. It is similar to forward approach, but differs only in two or three ways.
2. Maintain a cost matrix to store the cost of every vertices and a distance matrix to store the
minimum distance vertex.
3. Find out the cost of each and every vertex starting from vertex 1 up to vertex k.
4. To find out the path, start from vertex 'k'; then the distance array D(k) will give the minimum cost
neighbor vertex, which in turn gives the next nearest neighbor vertex, and proceed till we reach the
destination.
ALGORITHM :
TRACE OUT:
n=12 k=5
bcost (1)=0 d (1)=0
for j = 2 to 12
When j=2
bcost(2) = 9 d(2)=1
When j=3
bcost(3) = 7 d(3)=1
When j=4
bcost(4) = 3 d(4)=1
When j=5
bcost(5) = 2 d(5)=1
When j=6
bcost(6) =min(c (2,6) + bcost(2),c (3,6) + bcost(3))
=min(13,9)
bcost(6) = 9 d(6)=3
When j=7
bcost(7) =min(c (3,7) + bcost(3),c (5,7) + bcost(5) ,c (2,7) + bcost(2))
=min(14,13,11)
bcost(7) = 11 d(7)=2
When j=8
bcost(8) =min(c (2,8) + bcost(2),c (4,8) + bcost(4) ,c (5,8) +bcost(5))
=min(10,14,10)
bcost(8) = 10 d(8)=2
When j=9
bcost(9) =min(c (6,9) + bcost(6),c (7,9) + bcost(7))
=min(15,15)
bcost(9) = 15 d(9)=6
When j=10
bcost(10)=min(c(6,10)+bcost(6),c(7,10)+bcost(7)),c(8,10)+bcost(8))
=min(14,14,15)
bcost(10)= 14 d(10)=6
When j=11
bcost(11) =c (8,11) + bcost(8))
bcost(11) = 16 d(11)=8
When j=12
bcost(12)=min(c(9,12)+bcost(9),c(10,12)+bcost(10),c(11,12)+bcost(11))
=min(19,16,21)
bcost(12) = 16 d(12)=10
p[1]=1 p[5]=12
for j=4 to 2
When j=4 p[4]=d[p[5]]=d[12]=10
When j=3 p[3]=d[p[4]]=d[10]=6
When j=2 p[2]=d[p[3]]=d[6]=3
The path through which you have to find the shortest distance from vertex 1 to vertex 12.
p[5]=12
p[4]= =10
p[3]= =6
p[2]= =3
p[1]=1
So the minimum cost path is
1 → 3 → 6 → 10 → 12, with edge costs 7 + 2 + 5 + 2 = 16.
Let G(V,E) be a directed graph with n vertices , ‘E’ is the set of edges & V is the set of n vertices.
Each edge has an associated non-negative length.
We want to calculate the length of the shortest path between each pair of nodes.
i.e., the shortest path from every vertex to all other vertices.
Suppose the nodes of G are numbered from 1 to n, so V = {1,2,...,n}.
cost(i,j) is the length of edge <i,j>, and the matrix cost is called the cost adjacency matrix:
cost(i,i) = 0 for i = 1,2,...,n,
cost(i,j) = cost of edge <i,j> for all i and j, and
cost(i,j) = ∞ if the edge <i,j> does not exist.
The all pairs shortest path problem is to determine a matrix A such that A(i,j) is the length of a
shortest path from i to j.
The principle of optimality:
If k is the node on the shortest path from i to j then the part of the path from i to k and the
part from k to j must also be optimal, that is shorter.
Fig: Floyd's algorithm at work.
ALGORITHM :
Algorithm Allpaths(cost,A,n)
//cost[1…n,1…n] is the cost adjacency matrix of graph with n vertices
//A[i,j] is the cost of shortest path from vertex i to j
//cost[i,i]=0.0 for 1<=i<=n
{
for i:=1 to n do
for j:=1 to n do
A[i,j]:=cost[i,j]; //copy cost into A
for k:=1 to n do
for i:=1 to n do
for j:=1 to n do
A[i,j]:=min{A[i,j], A[i,k]+ A[k,j]};
}
At the 0th iteration this gives the direct distances between any 2 nodes:

cost(i,j) = A(i,j) = D0 =
 0   5   ∞   ∞
 50  0   15  5
 30  ∞   0   15
 15  ∞   5   0
for k= 1 to 4
When k=1
for i= 1 to 4
when i=1
for j= 1 to 4
when j=1
A[1,1]=min(A[1,1],A[1,1]+A[1,1])=min(0,0)=0
When j=2
A[1,2]=min(A[1,2],A[1,1]+A[1,2])=min(5,0+5)=5
When j=3
A[1,3]=min(A[1,3],A[1,1]+A[1,3])=min(∞,0+∞)=∞
When j=4
A[1,4]=min(A[1,4],A[1,1]+A[1,4])=min(∞,0+∞)=∞
When i=2
for j= 1 to 4
when j=1
A[2,1]=min(A[2,1],A[2,1]+A[1,1])=min(50,50+0)=50
When j=2
A[2,2]=min(A[2,2],A[2,1]+A[1,2])=min(0,50+5)=0
When j=3
A[2,3]=min(A[2,3],A[2,1]+A[1,3])=min(15,50+∞)=15
When j=4
A[2,4]=min(A[2,4],A[2,1]+A[1,4])=min(5,50+∞)=5
When i=3
for j= 1 to 4
when j=1
A[3,1]=min(A[3,1],A[3,1]+A[1,1])=min(30,30+0)=30
When j=2
A[3,2]=min(A[3,2],A[3,1]+A[1,2])=min(∞,30+5)=35
When j=3
A[3,3]=min(A[3,3],A[3,1]+A[1,3])=min(0,30+∞)=0
When j=4
A[3,4]=min(A[3,4],A[3,1]+A[1,4])=min(15,30+∞)=15
When i=4
for j= 1 to 4
when j=1
A[4,1]=min(A[4,1],A[4,1]+A[1,1])=min(15,15+0)=15
When j=2
A[4,2]=min(A[4,2],A[4,1]+A[1,2])=min(∞,15+5)=20
When j=3
A[4,3]=min(A[4,3],A[4,1]+A[1,3])=min(5,15+∞)=5
When j=4
A[4,4]=min(A[4,4],A[4,1]+A[1,4])=min(0,15+∞)=0
After k=1 the matrix A (i.e., D1) is:

 i\j  1   2   3   4
 1    0   5   ∞   ∞
 2    50  0   15  5
 3    30  35  0   15
 4    15  20  5   0
 D1 =  0   5   ∞   ∞      P[3,2] = 1
       50  0   15  5      P[4,2] = 1
       30  35  0   15
       15  20  5   0

 D2 =  0   5   20  10     P[1,3] = 2
       50  0   15  5      P[1,4] = 2
       30  35  0   15
       15  20  5   0

 D3 =  0   5   20  10     P[2,1] = 3
       45  0   15  5
       30  35  0   15
       15  20  5   0

 D4 =  0   5   15  10     P[1,3] = 4
       20  0   10  5      P[2,3] = 4
       30  35  0   15
       15  20  5   0
If you want the exact path, then we have to refer to the matrix P.
Finally we see that the trips from 1 to 2 and from 2 to 4 are direct.
ALGORITHM :
D = L
for k = 1 to n do
  for i = 1 to n do
    for j = 1 to n do
      D[i,j] = min(D[i,j], D[i,k] + D[k,j])
return D
ANALYSIS:
This algorithm takes Θ(n³) time.
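The Allpaths procedure above can be sketched in Python; the cost matrix is the 4-vertex example reconstructed from the trace, with float('inf') standing in for ∞:

```python
# Floyd's all-pairs shortest path (the Allpaths algorithm above).
INF = float('inf')     # marks a missing edge
cost = [
    [0,   5,   INF, INF],
    [50,  0,   15,  5],
    [30,  INF, 0,   15],
    [15,  INF, 5,   0],
]

def all_paths(cost):
    n = len(cost)
    A = [row[:] for row in cost]           # copy cost into A
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

print(all_paths(cost))
```

The result equals the matrix D4 of the trace: [[0,5,15,10], [20,0,10,5], [30,35,0,15], [15,20,5,0]].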
Let G(V,E) be a directed graph with edge costs cij, defined such that cij > 0 for all i and j and
cij = ∞ if <i,j> ∉ E.
o Let |V| = n and assume n > 1.
The cost of a tour is the sum of the costs of the edges on the tour.
A tour is a cycle that starts and ends at the same vertex (i.e., vertex 1) and visits every other vertex exactly once.
Application:
1. Suppose we have to route a postal van to pick up mail from the mail boxes located at 'n' different
sites.
2. One vertex represents the post office from which the postal van starts and returns.
3. Edge <i,j> is assigned a cost equal to the distance from site 'i' to site 'j'.
4. The route taken by the postal van is a tour, and we are finding a tour of minimum length.
5. Every tour consists of an edge <1,k> for some k ∈ V−{1} and a path from vertex k to vertex 1.
6. The path from vertex k to vertex 1 goes through each vertex in V−{1,k} exactly once.
7. Let g(i,S) be the length of a shortest path starting at vertex i, going through all vertices in S, and
terminating at vertex 1.
1. Find g(i,∅) = ci1, 1 ≤ i < n; hence we can use equation (2) to obtain g(i,S) for all S of size 1.
2. That is, we have to start with |S| = 1, i.e., there will be only one vertex in set S.
Cost matrix
0 10 15 20
5 0 9 10
6 13 0 12
8 8 9 0
Starting position
g(1,{2,3,4})=min{c12+g(2{3,4}),c13+g(3,{2,4}),c14+g(4,{2,3})}
min{10+25,15+25,20+23}
min{35,35,43}
=35
STEP 2:
g(2,{3,4}) = min{c23+g(3{4}),c24+g(4,{3})}
min{9+20,10+15}
min{29,25}
=25
g(3,{2,4}) =min{c32+g(2{4}),c34+g(4,{2})}
min{13+18,12+13}
min{31,25}
=25
g(4,{2,3}) = min{c42+g(2{3}),c43+g(3,{2})}
min{8+15,9+18}
min{23,27}
=23
STEP 3:
g(3,{4}) = c34 + g(4,∅) = 12+8 = 20
g(2,{3}) = c23 + g(3,∅) = 9+6 = 15
g(2,{4}) = c24 + g(4,∅) = 10+8 = 18
g(4,{2}) = min{c42 + g(2,∅)} = 8+5 = 13
g(4,{3}) = c43 + g(3,∅) = 9+6 = 15
g(3,{2}) = c32 + g(2,∅) = 13+5 = 18
STEP 4:
g(4,∅) = c41 = 8
g(3,∅) = c31 = 6
g(2,∅) = c21 = 5
s = 0:
g(i,∅) = ci1 for i = 2 to n.
s = 1:
for i = 2 to 4:
g(2,{3}) = c23 + g(3,∅) = 9+6 = 15
g(2,{4}) = c24 + g(4,∅) = 10+8 = 18
g(3,{2}) = c32 + g(2,∅) = 13+5 = 18
g(3,{4}) = c34 + g(4,∅) = 12+8 = 20
g(4,{2}) = c42 + g(2,∅) = 8+5 = 13
g(4,{3}) = c43 + g(3,∅) = 9+6 = 15
s = 2:
i ≠ 1, 1 ∉ S and i ∉ S.
g(2,{3,4}) = min{c23+g(3{4}),c24+g(4,{3})}
min{9+20,10+15}
min{29,25}
=25
g(3,{2,4}) =min{c32+g(2{4}),c34+g(4,{2})}
min{13+18,12+13}
min{31,25}
=25
g(4,{2,3}) = min{c42+g(2{3}),c43+g(3,{2})}
min{8+15,9+18}
min{23,27}
=23
s =3
g(1,{2,3,4})=min{c12+g(2{3,4}),c13+g(3,{2,4}),c14+g(4,{2,3})}
min{10+25,15+25,20+23}
min{35,35,43}
=35
The optimal cost is 35, and the corresponding optimal tour is 1 → 2 → 4 → 3 → 1.
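The g(i,S) recursion above is the Held-Karp dynamic programming scheme. A Python sketch on the same 4-city cost matrix (vertices are 0-indexed here, with vertex 0 playing the role of vertex 1):

```python
from itertools import combinations

# Held-Karp DP for the 4-city TSP instance above.
# g(i, S) = length of a shortest path from i through all vertices of S to
# the start vertex 0 (vertex 1 in the text's 1-based numbering).
c = [
    [0, 10, 15, 20],
    [5,  0,  9, 10],
    [6, 13,  0, 12],
    [8,  8,  9,  0],
]

def tsp(c):
    n = len(c)
    g = {}
    for i in range(1, n):
        g[(i, frozenset())] = c[i][0]          # g(i, {}) = c_i1
    for size in range(1, n - 1):               # subsets of increasing size
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min(c[i][k] + g[(k, S - {k})] for k in S)
    full = frozenset(range(1, n))
    return min(c[0][k] + g[(k, full - {k})] for k in full)

print(tsp(c))   # 35
```

The intermediate values match the trace: g(2,{3,4}) = 25, g(3,{2,4}) = 25, g(4,{2,3}) = 23 in the text's numbering.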
If we have a matrix A of size p×q and a matrix B of size q×r, the product C of these two matrices has
size p×r and takes p·q·r scalar multiplications.
Matrix multiplication is associative, so we may want to calculate the product of more than 2 matrices, M =
M1M2……Mn, in the cheapest order.
For example,
A: 13×5
B: 5×89
C: 89×3
D: 3×34
1st sequence:
M = (((A.B).C).D)
A.B : 13 * 5 * 89 = 5785 multiplications, giving a 13×89 matrix
(A.B).C : 13 * 89 * 3 = 3471 multiplications, giving a 13×3 matrix
((A.B).C).D : 13 * 3 * 34 = 1326 multiplications
i.e., total = 13*5*89 + 13*89*3 + 13*3*34
            = 10,582 multiplications required for that sequence.
2nd sequence:
M = (A.B).(C.D)
  = 13*5*89 + 89*3*34 + 13*89*34
  = 54,201 multiplications.
3rd sequence:
M = (A.(B.C)).D
  = 5*89*3 + 13*5*3 + 13*3*34
  = 2856 multiplications.
Comparing all these sequences, (A.(B.C)).D requires the least number of multiplications.
For finding the no. of multiplication directly we are going to the Dynamic programming method.
Our aim is to find the total no. of scalar multiplication required to compute the matrix product.
Let (M1,M2,……Mi) and Mi+1,Mi+2,……Mn be the chain of matrix to be calculated using the dynamic
programming method.
In dynamic programming we always start with the smallest instances and continue till we reach the
required size.
We build the table diagonal by diagonal; diagonal s contains the elements m(i,j) such that j − i = s, for
s = 0,1,……,n−1 and
i = 1,2,……,n−s.
A: 13×5, B: 5×89, C: 89×3, D: 3×34, so
d[0]=13
d[1]=5
d[2]=89
d[3]=3
d[4]=34
if s=0,
m(1,1)=0
m(2,2)=0
m(3,3)=0
m(4,4)=0
if s=1,
m(1,2) = d0*d1*d2 = 13*5*89 = 5785
m(2,3) = d1*d2*d3 = 5*89*3 = 1335
m(3,4) = d2*d3*d4 = 89*3*34 = 9078
if s=2, using m(i,i+s) = min over i ≤ k < i+s of (m(i,k) + m(k+1,i+s) + d(i−1)*d(k)*d(i+s)):
m(1,3) = min(m(1,1)+m(2,3)+d0d1d3 , m(1,2)+m(3,3)+d0d2d3)
       = min(0+1335+(13*5*3) , 5785+0+(13*89*3))
       = min(1530,9256)
       = 1530
m(2,4) = min(m(2,2)+m(3,4)+d1d2d4 , m(2,3)+m(4,4)+d1d3d4)
       = min(0+9078+(5*89*34) , 1335+0+(5*3*34))
       = min(24208,1845)
       = 1845
if s=3,
m(1,4) = min(m(1,1)+m(2,4)+d0d1d4 , m(1,2)+m(3,4)+d0d2d4 , m(1,3)+m(4,4)+d0d3d4)
       = min(0+1845+2210 , 5785+9078+39338 , 1530+0+1326)
       = min(4055,54201,2856)
       = 2856
So the minimum number of scalar multiplications is m(1,4) = 2856, obtained by the sequence (A.(B.C)).D.
ALGORITHM:
Procedure cmatrix(n, d[0..n])
for s = 0 to n−1 do
{
  if (s == 0) then
    for i = 1 to n do
      m(i,i) = 0
  else if (s == 1) then
    for i = 1 to n−1 do
      m(i,i+1) = d(i−1)*d(i)*d(i+1)
  else
  {
    for i = 1 to n−s do
    {
      min = ∞
      for k = i to i+s−1 do
        if (min > m(i,k) + m(k+1,i+s) + d(i−1)*d(k)*d(i+s)) then
          min = m(i,k) + m(k+1,i+s) + d(i−1)*d(k)*d(i+s)
      m(i,i+s) = min
    }
  }
}
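The same diagonal-by-diagonal computation can be sketched in Python (matrix_chain is a hypothetical helper name; the recurrence mirrors the cmatrix procedure above):

```python
# Minimum scalar multiplications for a matrix chain with dimension
# vector d: matrix i has size d[i-1] x d[i].
def matrix_chain(d):
    n = len(d) - 1                     # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n):              # diagonal s: m(i, i+s)
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m[1][n]

# Dimensions of A: 13x5, B: 5x89, C: 89x3, D: 3x34 from the example.
print(matrix_chain([13, 5, 89, 3, 34]))   # 2856
```

The value 2856 agrees with m(1,4) in the worked trace.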
Given n items of known weights W1….. Wn and values a knapsack of capacity W, find the most valuable
subset of the items that fit into the knapsack.
Assume that the weights and the knapsack capacity are positive integers.
The item values do not have to be integers.
To design a dynamic programming algorithm, it is necessary to derive a recurrence relation that
expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller sub
instances.
The recurrence relation is:
F[i,j] = max(F[i−1,j], vi + F[i−1, j−wi])  if j ≥ wi
F[i,j] = F[i−1,j]                          if j < wi …… (1)
Our goal is to find F[n,w], the maximum value of a subset of the n items that fit into the knapsack of
capacity w, and an optimal subset itself.
F[0,j] = 0 for j ≥ 0, as well as F[i,0] = 0 for i ≥ 0.
The table can be filled by either row by row or column by column.
(Entry F[i,j] is computed from the entries F[i−1,j] and F[i−1, j−wi] in the previous row, together
with the pair (wi, vi).)
Example
For the given instance of the problem, obtain the optimal solution for the knapsack problem
(capacity W = 5):

item   weight   value
 1       2        3
 2       3        4
 3       4        5
 4       5        6

Let us start filling the table row by row using the above formula:

 i\j   0  1  2  3  4  5
 0     0  0  0  0  0  0
 1     0  0  3  3  3  3
 2     0  0  3  4  4  7
 3     0  0  3  4  5  7
 4     0  0  3  4  5  7
To find the items included, trace back from F[4,5]:
As F[i,k] = F[i−1,k], i.e., F[4,5] = F[3,5], do not select the 4th item; set i = i − 1 = 3.
As F[3,5] = F[2,5], do not select the 3rd item; set i = i − 1 = 2.
As F[i,k] ≠ F[i−1,k], i.e., F[2,5] ≠ F[1,5], select the 2nd item; set i = i − 1 = 1 and k = k − w2,
i.e., k = 5 − 3 = 2.
As F [i,k]≠ F[ i – 1, k]
i.e F[1,2] ≠ F[0,2]
select ith item
That is , select 1st item.
Set i = i - 1 and k = k – wi
i = 0 and k=2-2=0
Thus item 1 and item 2 are selected for the knapsack.
Algorithm
Algorithm DynamicKnapsack(n, W, w[], v[])
// Problem description: this algorithm obtains the knapsack
// solution using dynamic programming
// Input: n is the total number of items, W is the capacity of the
// knapsack, w[] stores the weight of each item and v[] stores
// the value of each item
// Output: returns the total value of the selected items for the
// knapsack
for i ← 0 to n do
  F[i,0] ← 0 // table initialization
for j ← 0 to W do
  F[0,j] ← 0
for i ← 1 to n do
{
  for j ← 1 to W do
  {
    if (j < w[i]) then
      F[i,j] ← F[i−1, j]
    else // j ≥ w[i]
      F[i,j] ← max(F[i−1,j], v[i] + F[i−1, j−w[i]])
  }
}
return F[n,W]
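A runnable Python sketch of the algorithm, including the traceback described earlier; the instance (weights 2, 3, 4, 5; values 3, 4, 5, 6; W = 5) is the one reconstructed from the table above:

```python
# Bottom-up 0/1 knapsack with traceback of the selected items.
def dynamic_knapsack(w, v, W):
    n = len(w)
    # F[i][j]: best value using items 1..i with capacity j
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j < w[i - 1]:
                F[i][j] = F[i - 1][j]
            else:
                F[i][j] = max(F[i - 1][j], v[i - 1] + F[i - 1][j - w[i - 1]])
    # Traceback: item i is selected whenever F[i][j] != F[i-1][j].
    items, j = [], W
    for i in range(n, 0, -1):
        if F[i][j] != F[i - 1][j]:
            items.append(i)
            j -= w[i - 1]
    return F[n][W], sorted(items)

print(dynamic_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # (7, [1, 2])
```

This reproduces the worked result: total value 7 with items 1 and 2 selected.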
Analysis:
In this algorithm the basic operation is the if…else statement within two nested for loops. Hence

C(n) = Σ (i=0 to n) Σ (j=0 to W) 1
C(n) = Σ (i=0 to n) (W − 0 + 1)
C(n) = Σ (i=0 to n) (W + 1)
C(n) = (W + 1)(n − 0 + 1)
C(n) = Wn + W + n + 1
C(n) ∈ Θ(nW)
MEMORY FUNCTIONS:
The bottom-up dynamic programming approach fills the whole table, including entries for subproblems
that may never actually be needed. The memory function technique solves the problem top down, as
dictated by its recursive structure, but maintains the same table as the bottom-up method: before solving
a subproblem it checks the table, returns the stored value if that subproblem has already been solved, and
otherwise computes the value recursively and records it in the table. In this way only the necessary
subproblems are solved, and each of them only once.
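The memory-function idea can be sketched as a memoized recursion for the 0/1 knapsack (mf_knapsack is a hypothetical name; the table convention matches the bottom-up F[i,j]):

```python
# Memory-function (top-down memoized) 0/1 knapsack: recurse as needed,
# but cache each subproblem so it is computed only once.
def mf_knapsack(w, v, W):
    n = len(w)
    F = [[-1] * (W + 1) for _ in range(n + 1)]   # -1 marks "not yet computed"

    def value(i, j):
        if i == 0 or j == 0:
            return 0
        if F[i][j] < 0:                          # compute only on first request
            if j < w[i - 1]:
                F[i][j] = value(i - 1, j)
            else:
                F[i][j] = max(value(i - 1, j),
                              v[i - 1] + value(i - 1, j - w[i - 1]))
        return F[i][j]

    return value(n, W)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # 7
```

It returns the same optimal value as the bottom-up version while touching only the needed table entries.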
(APRIL’13,APRIL15,DEC21)
In its simplest form, it requires us to determine whether there exists a path in the given graph
G = (V,E) such that this path starts at vertex 'v' and ends at vertex 'u'.
A more general form is to determine, for a given starting vertex v ∈ V, all vertices 'u' such that there
is a path from v to u.
This problem can be solved by starting at vertex ‘v’ and systematically searching the graph ‘G’
for vertex that can be reached from ‘v’.
In Breadth First Search we start at a vertex ‘v’ and mark it as having been reached (visited).
A vertex is said to have been explored by an algorithm when the algorithm has visited all vertices
adjacent from it.
All unvisited vertices adjacent from 'v' are visited next. These are new unexplored vertices.
Vertex 'v' has now been explored. The newly visited vertices haven't been explored and are put on
the end of a list of unexplored vertices.
The first vertex on this list is the next to be explored. Exploration continues until no unexplored
vertex is left.
The list of unexplored vertices operates as a queue and can be represented using any of the standard
queue representations.
ALGORITHM:
Algorithm BFT(G, n)
{
  for i := 1 to n do
    visited[i] := 0;
  for i := 1 to n do
    if (visited[i] = 0) then BFS(i);
}
Here the time and space required by BFT on an n-vertex, e-edge graph are O(n+e) and O(n) respectively
if adjacency lists are used. If an adjacency matrix is used, then the bounds are O(n²) and O(n) respectively.
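A Python sketch of BFS using an explicit queue of unexplored vertices, as described above; the sample digraph is hypothetical:

```python
from collections import deque

# BFS from start vertex v: mark v visited, then repeatedly explore the
# front of the queue by visiting all unvisited vertices adjacent from it.
def bfs(adj, v):
    visited = {v}
    order = []
    q = deque([v])                 # the list of unexplored vertices is a queue
    while q:
        u = q.popleft()            # the first vertex on the list is explored next
        order.append(u)
        for w in adj.get(u, []):
            if w not in visited:   # newly reached vertices go to the end
                visited.add(w)
                q.append(w)
    return order

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}   # a small made-up digraph
print(bfs(adj, 1))   # [1, 2, 3, 4]
```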
Depth first search and traversal
A depth first search of a graph differs from a breadth first search in that the exploration of a vertex
v is suspended as soon as a new vertex u is reached. At this time the exploration of the new vertex u begins.
When this new vertex has been explored, the exploration of v continues. The search terminates when all
reached vertices have been fully explored. This search process is best described recursively.
ALGORITHM DFS(V)
//given an undirected (directed graph)
//G=(V,E) with n vertices and an array visited[]
//initially set to zero, this algorithm of visits all the vertices reachable
//from v.
//G and visited[] are global
{
visited[v]:=1;
for each vertex w adjacent from v do
{
if(visited[w]=0) then DFS(w);
}
}
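The recursive DFS(v) procedure above translates directly to Python, with a visited set in place of the visited[] array; the sample digraph is hypothetical:

```python
# Recursive DFS matching DFS(v) above: exploration of v is suspended
# as soon as a new vertex w is reached, and resumed afterwards.
def dfs(adj, v, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(v)                 # visited[v] := 1
    order.append(v)
    for w in adj.get(v, []):       # for each vertex w adjacent from v
        if w not in visited:
            dfs(adj, w, visited, order)
    return order

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}   # a small made-up digraph
print(dfs(adj, 1))   # [1, 2, 4, 3]
```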
Biconnected Graph:
A graph ‘G’ is biconnected, iff it contains no articulation points.
Real Time Example:
‘G’ is a graph representing a communication network, vertices represent communication station,
edges represent communication lines.
(i) if communication station has articulation point :
if any communication station ‘i’ is failed , then we cannot communicate with another station.
(ii) if the graph has no articulation point:
if any station ‘i’ is failed, then we can communicate with another station, other than ‘i’.
Figure– Graph G
In the above graph G, if we delete the vertex 2, then it creates 2 non-empty components.
If we delete any vertex other than 2, 3, and 5, then we get exactly one component. So the above
graph is not biconnected.
The figure represents the graph which is biconnected. Because it has no articulation points.
If a graph is not biconnected, then
we may include set of edges that makes the graph biconnected .we can determine that edge if we know
the maximal subgraphs of G that are biconnected.
Let G(V,E) be a graph that is not biconnected. To make it biconnected , first identify the articulation
points by the algorithm. Once it has been determined that a connected graph G is not biconnected, it may
be desirable to determine the set of edges whose inclusion makes the graph biconnected. Determining
such a set of edges is facilitated if we know the maximal subgraphs of G that are biconnected. G'(V',E')
is a maximal biconnected subgraph of G iff G has no biconnected subgraph G''(V'',E'') such that V' ⊆ V''
and E' ⊆ E''. A maximal biconnected subgraph is a biconnected component.
Biconnected components should contain atleast 2 vertices. Biconnected component itself has no
articulation point. The graph G can be transformed into a biconnected graph by using the edge addition
scheme of Algorithm to construct a biconnected graph. Note that once the edges (vi,vi+1) of line 6 are
added, vertex a is no longer an articulation point. Hence following the addition of edges corresponding to
all articulation points, G has no articulation points and so it is biconnected. If G has p articulation points
and b biconnected components, the scheme of algorithm introduces exactly b-p new edges into G.
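Articulation points (and hence biconnectivity) can be detected with a single DFS that computes discovery times and low values; this is a sketch on a made-up 3-vertex path graph, not the figure's graph:

```python
# DFS-based articulation point detection: a non-root vertex u is an
# articulation point iff some DFS child w has low[w] >= disc[u];
# the root is one iff it has two or more DFS children.
def articulation_points(adj):
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for w in adj.get(u, []):
            if w not in disc:
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= disc[u]:
                    points.add(u)
            elif w != parent:              # back edge to an ancestor
                low[u] = min(low[u], disc[w])
        if parent is None and children > 1:
            points.add(u)

    for v in adj:
        if v not in disc:
            dfs(v, None)
    return points

# Path 1-2-3: vertex 2 is an articulation point, so this graph is
# not biconnected.
adj = {1: [2], 2: [1, 3], 3: [2]}
print(articulation_points(adj))   # {2}
```

A graph is biconnected iff this set is empty (and the graph is connected with at least 2 vertices).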
Fig(b): Topological orders of the AOV-network, e.g. V1,V2,V5,V3,V6,V4 and V1,V4,V3,V6,V2,V5.
This can be efficiently done if for each vertex a count of the no of its immediate predecessor is kept.
This can be easily implemented if the network is represented by its adjacency lists.
The following algorithm assumes that the network is represented by adjacency lists.
ALGORITHM:
Procedure Topological_order(COUNT, VERTEX, LINK, n)
// The n vertices of an AOV-network are listed in topological order.
// The network is represented as a set of adjacency lists with
// COUNT(i) = the in-degree of vertex i.
1) top ← 0 // initialize stack
2) for i ← 1 to n do // create a linked stack of vertices with no predecessors
3)   if COUNT(i) = 0 then [COUNT(i) ← top; top ← i]
4) end
5) for i ← 1 to n do // print the vertices in topological order
6)   if top = 0 then [print('network has a cycle'); stop]
7)   j ← top; top ← COUNT(top); print(j) // unstack a vertex
8)   ptr ← LINK(j)
9)   while ptr <> 0 do
10)    // decrease the count of successor vertices of j
11)    k ← VERTEX(ptr) // k is a successor of j
12)    COUNT(k) ← COUNT(k) − 1 // decrease count
13)    if COUNT(k) = 0 // add vertex k to stack
14)    then [COUNT(k) ← top; top ← k]
15)    ptr ← LINK(ptr)
16)  end
17) end
18) end TOPOLOGICAL_ORDER
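The COUNT-based scheme above can be sketched in Python. A queue replaces the linked stack, which changes only the order among simultaneously ready vertices, not correctness; the sample AOV-network is hypothetical:

```python
from collections import deque

# Topological order by repeatedly removing vertices of in-degree 0,
# mirroring the COUNT scheme of Topological_order above.
def topological_order(adj, n):
    count = [0] * (n + 1)                  # COUNT(i) = in-degree of vertex i
    for u in range(1, n + 1):
        for w in adj.get(u, []):
            count[w] += 1
    ready = deque(i for i in range(1, n + 1) if count[i] == 0)
    order = []
    while ready:
        j = ready.popleft()                # output a vertex with no predecessors
        order.append(j)
        for w in adj.get(j, []):           # decrease counts of successors of j
            count[w] -= 1
            if count[w] == 0:
                ready.append(w)
    if len(order) != n:
        raise ValueError('network has a cycle')
    return order

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}  # a small made-up AOV-network
print(topological_order(adj, 4))   # [1, 2, 3, 4]
```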
OPTIMAL BINARY SEARCH TREES (OBST)
Or: Obtain an optimal binary search tree for the following nodes (do, if, int, while) with the following
probabilities (0.1, 0.2, 0.4, 0.3).
A binary search tree is one of the most important data structures in computer science.
One of its principal application is to implement a dictionary, a set of
elements with the operations of searching, insertion, and deletion.
Example:
Consider four keys A, B, C, and D to be searched for with probabilities 0.1, 0.2, 0.4, and 0.3, respectively.
Figure depicts two out of 14 possible binary search trees containing these keys.
The average number of comparisons in a successful search in the first of these trees is
0. 1+ 0.2 . 2 + 0.4 . 3+ 0.3 . 4 = 2.9,
for the second one it is
0. 2 + 0.2 . 1+ 0.4 . 2 + 0.3 . 3= 2.1.
Neither of these two trees is, in fact, optimal.
For our tiny example, we could find the optimal tree by generating all 14 binary search trees with these
keys.
As a general algorithm, this exhaustive-search approach is unrealistic: the total number of binary search
trees with n keys is equal to the nth Catalan number,
Two tables are maintained: a cost table C and a root table R.
The cost table is constructed as follows.
EXAMPLE
Obtain a optimal binary search tree for following nodes (do, if , int ,while) with following
probabilities ( 0.1, 0.2 , 0.4, 0.3)
Cost Table
C[ 1, 0] = 0
C[ 2, 1] = 0
C[ 3, 2] = 0 using formulae C[ i, i – 1] =0 and C[ n + 1,n] =0
C[ 4, 3] = 0
C[ 5, 4] = 0
C[ 1, 1] = 0.1
C[ 2, 2] = 0.2 using formulae C[ i, i ] =p[i]
C[ 3, 3] = 0.4
C[ 4, 4] = 0.3
Compute C[ 1, 2 ]
The value of k can be either 1 or 2
Let i = 1 , j = 2 , use formula in equation (1),
k= 1
C[1,2]= C[ 1,0] + C[2 ,2] + p[1] +p[2]
= 0 + 0.2 + 0.1 + 0.2
= 0.5
k=2
C[1,2]=C[ 1,1] + C[3 ,2] + p[1] +p[2]
= 0.1 + 0 + 0.1 + 0.2
= 0.4-> minimum value therefore consider k = 2
Therefore in cost table C[1,2] = 0.4 and R[1,2]=2
Compute C[2, 3]
The value of k can be 2 or 3.
Let i = 2, j = 3, use formula equation (1)
k=2
C[2,3] =C[ 2,1] + C[3 ,3] + p[2] +p[3]
=0 + 0.4+ 0.2+ 0.4
=1.0
k=3
C[2,3]= C[ 2,2] + C[4 ,3] + p[2] +p[3]
= 0.2 + 0 + 0.2 + 0.4
= 0.8 -> minimum value therefore consider k = 3
Therefore in cost table C[2,3] = 0.8 and R[2,3]=3
Compute C[3, 4]
The value of k can be 3 or 4.
Let , i = 3, j = 4, use formula equation (1)
k=3
C[3,4]= C[ 3,2] + C[4 ,4] + p[3] +p[4]
= 0 + 0.3+ 0.4+ 0.3
= 1.0 -> minimum value therefore consider k = 3
k=4
C[3,4]= C[ 3,3] + C[5 ,4] + p[3] +p[4]
= 0.4 + 0 + 0.4 + 0.3= 1.1
Therefore in cost table C[3,4] = 1.0 and R[3,4]=3
The tables contain the values obtained up to this calculation:
Cost table: C[1,1]=0.1, C[2,2]=0.2, C[3,3]=0.4, C[4,4]=0.3, C[1,2]=0.4, C[2,3]=0.8, C[3,4]=1.0.
Root table: R[1,1]=1, R[2,2]=2, R[3,3]=3, R[4,4]=4, R[1,2]=2, R[2,3]=3, R[3,4]=3.
Compute C[1,3]
The value of k can be 1, 2 or 3.
Consider i=1, j=3.
k=1
C[1,3] = C[1,0] + C[2,3] + p[1] + p[2] + p[3]
       = 0 + 0.8 + 0.1 + 0.2 + 0.4
       = 1.5
k=2
C[1,3] = C[1,1] + C[3,3] + p[1] + p[2] + p[3]
       = 0.1 + 0.4 + 0.1 + 0.2 + 0.4
       = 1.2
k=3
C[1,3] = C[1,2] + C[4,3] + p[1] + p[2] + p[3]
       = 0.4 + 0 + 0.1 + 0.2 + 0.4
       = 1.1 -> minimum value, therefore consider k = 3
Therefore in the cost table C[1,3] = 1.1 and R[1,3] = 3.
Compute C [2,4]
The value of k can be 2,3 or 4
Consider i=2, j=4 and using equation (1)
k=2
C[2,4]= C[ 2,1] + C[3 ,4] + p[2] +p[3] + p[4]
= 0 + 1.0 + 0.2 + 0.4 + 0.3
= 1.9
k =3
C[2,4]= C[ 2,2] + C[4 ,4] + p[2] +p[3] + p[4]
= 0.2 + 0.3 + 0.2 + 0.4 + 0.3
= 1.4 -> minimum value therefore consider k = 3
k =4
C[2,4]= C[ 2,3] + C[5 ,4] + p[2] +p[3] + p[4]
= 0 .8+ 0 + 0.2 + 0.4 + 0.3
= 1.7
Therefore C[2,4]= 1.4 and R[2,4]= 3
The tables now additionally contain C[1,3] = 1.1 and C[2,4] = 1.4, with R[1,3] = 3 and R[2,4] = 3.
Compute C[1,4]
The value of k can be 1, 2, 3 or 4.
Consider i=1, j=4 and using equation (1)
k=1
C[1,4] = C[ 1,0] + C[2 ,4] + p[1] + p[2] +p[3] + p[4]
= 0 + 1.4 + 0.1+ 0.2 + 0.4 +0.3
= 2.4
k =2
C[1,4]= C[ 1,1] + C[3 ,4] + p[1] + p[2] +p[3] + p[4]
= 0.1 + 1.0 + 0.1 + 0.2 + 0.4 +0.3
= 2.1
k =3
C[1,4]= C[ 1,2] + C[4 ,4] + p[1] + p[2] +p[3] + p[4]
= 0.4 + 0.3 + 0.1 + 0.2 + 0.4 +0.3
= 1.7-> minimum value therefore consider k = 3
k=4
C[1,4] = C[1,3] + C[5,4] + p[1] + p[2] + p[3] + p[4]
       = 1.1 + 0 + 0.1 + 0.2 + 0.4 + 0.3
       = 2.1
Therefore C[1,4]= 1.7 and R[1,4]= 3
The completed tables give C[1,4] = 1.7 and R[1,4] = 3.
To build the tree, R[1][n] = R[1][4] = 3 becomes the root, i.e., key 3 ('int'), with i = 1, j = 4 and k = 3.
The keys 1..2 form its left subtree with root R[1,2] = 2 ('if'), whose left child is key 1 ('do');
key 4 ('while') forms its right subtree.
The average number of comparisons in this optimal tree is C[1,4] = 1.7.
ALGORITHM
ALGORITHM OptimalBST(P [1..n])
//Finds an optimal binary search tree by dynamic //programming
//Input: An array P[1..n] of search probabilities for a sorted //list of n keys
//Output: Average number of comparisons in successful //searches in the
//optimal BST and table R of subtrees’ roots in the optimal //BST
for i ← 1 to n do
{
  C[i, i − 1] ← 0
  C[i, i] ← P[i]
  R[i, i] ← i
}
C[n + 1, n] ← 0
for d ← 1 to n − 1 do // diagonal count
{
  for i ← 1 to n − d do
  {
    j ← i + d
    minval ← ∞
    for k ← i to j do
    {
      if C[i, k − 1] + C[k + 1, j] < minval
      {
        minval ← C[i, k − 1] + C[k + 1, j]
        kmin ← k
      }
    }
    R[i, j] ← kmin
    sum ← P[i]
    for s ← i + 1 to j do
      sum ← sum + P[s]
    C[i, j] ← minval + sum
  }
}
return C[1, n], R
Analysis:
The basic operation in the above algorithm is the computation of C[i,j] by finding the minimum-valued k.
This operation is located within three nested for loops, hence the time complexity C(n) is
C(n) = Σ (d=1 to n−1) Σ (i=1 to n−d) Σ (k=i to i+d) 1
which is of order n³, so
C(n) = Θ(n³)
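The OptimalBST algorithm above, run on the example probabilities (0.1, 0.2, 0.4, 0.3), can be sketched in Python:

```python
# OptimalBST with 1-based tables; C[i][i-1] = 0 are the empty-subtree costs.
def optimal_bst(p):
    n = len(p)
    P = [0.0] + p                                   # 1-based probabilities
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = P[i]
        R[i][i] = i
    for d in range(1, n):                           # diagonal count
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = float('inf'), i
            for k in range(i, j + 1):               # try each root k
                if C[i][k - 1] + C[k + 1][j] < minval:
                    minval = C[i][k - 1] + C[k + 1][j]
                    kmin = k
            R[i][j] = kmin
            C[i][j] = minval + sum(P[i:j + 1])
    return C[1][n], R[1][n]

cost, root = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(round(cost, 1), root)   # 1.7 3  -> key 3 ('int') is the root
```

This reproduces the worked result: average cost C[1,4] = 1.7 with root R[1,4] = 3.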
Pondicherry University Questions
2 Marks
11 Marks
1. Explain Travelling salesman problem (UQ APRIL’13 & NOV’12) (Ref.Pg.No.13 Qn.No.4)
2. Explain chained matrix multiplication (UQ NOV’12 & NOV’10) (Ref.Pg.No.18 Qn.No.5)
3. Explain 0/1 knapsack problem (UQ APRIL’12) (Ref.Pg.No.21 Qn.No.6)
4. Explain basic search and traversal technique (UQ APRIL’13) (Ref.Pg.No.24 Qn.No.8)
5. Explain topological sorting (UQ NOV’10, APRIL/MAY’14) (Ref.Pg.No. 30 Qn.No.10)
6. Explain multistage graph with example (UQ :APRIL/MAY’14) (Ref.Pg.No.3,Qn.No.2)
7. Explain Biconnected components (UQ: APRIL/MAY'14) (Ref.Pg.No.26, Qn.No.9)
8. Solve the following instance of the 0/1, Knapsack problem given the knapsack capacity in W = 5 using
dynamic programming and explain it.
9. Solve the all-pair shortest-path problem for the digraph with the following weight matrix(16)