
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Course Code : U20CST409


Course Name : DESIGN AND ANALYSIS OF ALGORITHMS

UNIT III DYNAMIC PROGRAMMING (12 Hrs)

Dynamic Programming: Applications – Multistage graphs – Optimal binary search trees – 0/1 knapsack

problem, All pairs shortest path problem – Traveling sales person problem – Reliability design – Chained

Matrix Multiplication – Graph Applications: AND/OR graphs – Connected components – Identification of

articulation points – Bi-connected components.

PREPARED BY                         VERIFIED BY                         APPROVED BY

1. Mrs.R.Deepa, Asst.Prof
2. Mr.S.Kumarakrishnan, Asst.Prof
3. Mrs.S.Jagadeeswari, Asst.Prof
PART A
1. Write the difference between the Greedy method and Dynamic programming.
Greedy Method:
 It is used to find an optimal solution among all feasible solutions.
 A choice is made of what looks best at the moment; solutions to the sub-problems do not have to be known at each stage.
 Only one sequence of decisions is generated.
 It does not guarantee an optimal solution always.

Dynamic Programming:
 It enumerates all decision sequences and then picks the best.
 Solutions to the sub-problems have to be known at each stage.
 Many decision sequences are generated.
 It definitely gives an optimal solution always.

2. Define dynamic programming. (UQ APRIL’12,APRIL 2015,DEC 2021)


Dynamic programming is an algorithm design method that can be used when the solution to a problem can be
viewed as the result of a sequence of decisions.

3. What are the features of dynamic programming?


Optimal solutions to sub problems are retained so as to avoid re-computing their values. Decision
sequences containing subsequences that are sub optimal are not considered. It definitely gives the optimal
solution always.

4. What are the drawbacks of dynamic programming?


Time and space requirements are high, since storage is needed for all levels. Optimality should be checked
at all levels.

5. Write the general procedure of dynamic programming. (UQ APRIL’13)


The development of dynamic programming algorithm can be broken into a sequence of 4 steps.
1. Characterize the structure of an optimal solution.
2. Recursively defines the value of the optimal solution.
3. Compute the value of an optimal solution in the bottom-up fashion.
4. Construct an optimal solution from the computed information.

6. Define principle of optimality.


It states that an optimal sequence of decisions has the property that whatever the initial state and decision are,
the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

7. Give an example of dynamic programming and explain.


An example of dynamic programming is the knapsack problem. The solution to the knapsack problem can be
viewed as the result of a sequence of decisions. We have to decide the value of xi for 1 ≤ i ≤ n. First we make a
decision on x1, then on x2, and so on. An optimal sequence of decisions maximizes the objective function.

8. Explain any one method of finding the shortest path.(DEC 2021)


One way of finding a shortest path from vertex i to j in a directed graph G is to decide which vertex should
be the second, which is the third, which is the fourth, and so on, until vertex j is reached. An optimal
sequence of decisions is one that results in a path of least length.
9. Define 0/1 knapsack problem(JUL 21).
The solution to the knapsack problem can be viewed as the result of a sequence of decisions. We have to
decide the value of xi, where xi is restricted to the values 0 or 1. Using the function knap(l, j, y) we can
represent the problem as: maximize Σ pi·xi subject to Σ wi·xi ≤ y, where l is the iteration, j the number of
objects and y the capacity.

10. What is the formula to calculate optimal solution in 0/1 knapsack problem?
The formula to calculate the optimal solution is g0(m) = max{ g1(m), g1(m − w1) + p1 }.

11. Write about traveling salesperson problem.


Let G = (V, E) be a directed graph. A tour of G is a directed simple cycle that includes every vertex in V. The
cost of a tour is the sum of the costs of the edges on the tour. The traveling salesperson problem is to find a
tour of minimum cost.

12. Write some applications of traveling salesperson problem.


 Routing a postal van to pick up mail from boxes located at n different sites.
 Using a robot arm to tighten the nuts on some piece of machinery on an assembly line.
 A production environment in which several commodities are manufactured on the same set of machines.

13. Give the time complexity and space complexity of traveling salesperson problem.
Time complexity is O(n² 2ⁿ).
Space complexity is O(n 2ⁿ).
14. List different traversal technique in graph (UQ APRIL’12/ APRIL 2015)
1. DFS and
2. BFS.

15. What is biconnected component? (UQ Nov’12)


A graph with no articulation points is called a biconnected graph, and a maximal biconnected subgraph is a biconnected component.
16. What is articulation point?
When a node in a graph is deleted and the graph is divided into two or more components, then that node is called an
articulation point.

17. Write about multistage graph with example (UQ APRIL’13 & Nov’12,APRIL/MAY’14)
A multistage graph G = (V,E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint
sets Vi, 1 ≤ i ≤ k. In addition, if <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.

18. What is all pair shortest path problem?(APRIL-MAY’14,DEC 21,JUL 21)


Let G(V,E) be a directed graph with n vertices , ‘E’ is the set of edges & V is the set of n vertices.
Each edge has an associated non-negative length. Calculate the length of the shortest path between each
pair of nodes i.e Shortest path between every vertex to all other vertices. The all pairs shortest path
problem is to determine a matrix A such that A(i,j) is the length of a shortest path from i to j.
The principle of optimality: If k is the node on the shortest path from i to j then the part of the path
from i to k and the part from k to j must also be optimal, that is shorter.

19.Write an algorithm to find shortest path between all pairs of nodes.

20.What are optimal binary search trees OBST?


 Let {a1, a2, …, an} be a set of identifiers such that a1 < a2 < … < an. Let p(i) be the probability with
which we search for ai (successful search).
 Let q(i) be the probability of searching for an element x such that ai < x < ai+1, where 0 ≤ i ≤ n
(unsuccessful search). Thus p(i) is the probability of a successful search and q(i) is the probability of an
unsuccessful search.
 A tree that is built with optimum cost from Σ_{i=1..n} p(i) and Σ_{i=0..n} q(i) is called an optimal binary search tree.

21.Give any two properties of dynamic programming approaches.


Optimal substructure:
The dynamic programming technique makes use of the principle of optimality to find the optimal solution from
sub-problems.
Overlapping sub-problems:
 Dynamic programming is a technique in which the problem is divided into sub-problems.
 The solutions of the sub-problems are shared to get the final solution to the problem.
 It avoids repetition of work.

22. Give the commonly used designing steps for dynamic programming algorithm?
Dynamic programming design involves 4 major steps
 Characterize the structure of optimal solution.
 Recursively defines the value of an optimal solution.
 By using bottom up technique compute value of optimal solution.
 Construct an optimal solution from the computed information.

23. What does dynamic programming have in common with divide and conquer?
o Both the divide and conquer and dynamic programming solve the problem by breaking it into
number of sub-problems.
o In both these methods solutions from sub-problems are collected together to form a solution to
given problem.

24. State time and space efficiency of OBST.

 Space Efficiency : Quadratic

 Time Efficiency : Cubic

25.Compare divide and conquer with dynamic programming and Dynamic programming with

greedy technique.

Divide and Conquer:
 The divide-and-conquer paradigm involves three steps at each level of the recursion: divide the problem into a number of sub-problems; conquer the sub-problems by solving them recursively (if the sub-problem sizes are small enough, just solve them in a straightforward manner); combine the solutions to the sub-problems into the solution for the original problem.
 It calls itself recursively one or more times to deal with closely related sub-problems.
 D&C does more work on the sub-problems and hence has more time consumption.
 In D&C the sub-problems are independent of each other.
 Example: Merge Sort, Binary Search.

Dynamic Programming:
 The development of a dynamic-programming algorithm can be broken into a sequence of four steps: characterize the structure of an optimal solution; recursively define the value of an optimal solution; compute the value of an optimal solution in a bottom-up fashion; construct an optimal solution from computed information.
 Dynamic programming is not recursive.
 DP solves each sub-problem only once and then stores the result in a table.
 In DP the sub-problems are not independent.
 Example: Matrix chain multiplication.

Dynamic Programming:
 Focuses on the principle of optimality.
 It provides specific (optimal) answers.
 Less efficient.

Greedy Technique:
 Focuses on expanding partially constructed solutions.
 It provides many results, such as feasible solutions.
 More efficient.


PART – B
1. Explain dynamic program general method

 It is an algorithm design method that can be used when the solution to a problem can be viewed
as the result of a sequence of decisions.
 The idea of dynamic programming is thus quite simple: avoid calculating the same thing twice,
usually by keeping a table of known results that is filled up as sub-instances are solved.
 Divide and conquer is a top-down method. When a problem is solved by divide and conquer, we
immediately attack the complete instance, which we then divide into smaller and smaller sub-
instances as the algorithm progresses.
 Dynamic programming on the other hand is a bottom-up technique. We usually start with the
smallest and hence the simplest sub- instances. By combining their solutions, we obtain the
answers to sub-instances of increasing size, until finally we arrive at the solution of the original
instances.
 The essential difference between the greedy method and dynamic programming is that the greedy
method only one decision sequence is ever generated. In dynamic programming, many decision
sequences may be generated. However, sequences containing sub-optimal sub-sequences cannot
be optimal and so will not be generated.
 Because of the principle of optimality, decision sequences containing subsequences that are
suboptimal are not considered. Although the total number of different decision sequences is
exponential in the number of decisions (if there are d choices for each of the n decisions to be made
then there are d^n possible decision sequences), dynamic programming algorithms often have a
polynomial complexity.
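As a minimal illustration of this bottom-up idea (the example itself is assumed for illustration and is not one of the syllabus problems), the following Python sketch fills a table of sub-instance results so that nothing is computed twice:

def fib_bottom_up(n):
    """Bottom-up dynamic programming: solve the smallest sub-instances
    first and combine them, storing every result in a table."""
    if n < 2:
        return n
    table = [0] * (n + 1)      # table of known results
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # reuse stored answers
    return table[n]

print(fib_bottom_up(10))   # 55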

2. Explain multistage graph with example (UQ :APRIL/MAY’14)

1. A multistage graph G = (V,E) is a directed graph in which the vertices are partitioned into k ≥ 2
disjoint sets Vi, 1 ≤ i ≤ k.
2. In addition, if <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.
3. The sets V1 and Vk contain only one vertex each, i.e. |V1| = |Vk| = 1.
4. Let ‘s’ and ‘t’ be the source and destination respectively.
5. The cost of a path from source (s) to destination (t) is the sum of the costs of the edger on the path.
6. The MULTISTAGE GRAPH problem is to find a minimum cost path from‘s’ to‘t’.
7. Each set Vi defines a stage in the graph. Every path from ‘s’ to ‘t’ starts in stage-1, goes to stage-2
then to stage-3, then to stage-4, and so on, and terminates in stage-k.
8. This MULISTAGE GRAPH problem can be solved in 2 ways.
a) Forward Method.
b) Backward Method.

FORWARD METHOD
Assume that there are ‘k’ stages in a graph. In this FORWARD approach, we will find out the cost of
each and every node starting from the ‘k’ th stage to the 1st stage. We will find out the path (i.e.)
minimum cost path from source to the destination (i.e.) [ Stage-1 to Stage-k ].

PROCEDURE:
 Maintain a cost matrix cost (n) which stores the distance from any vertex to the destination.
 If a vertex is having more than one path, then we have to choose the minimum distance path and
the intermediate vertex which gives the minimum distance path will be stored in the distance array
‘D’.
 In this way we will find out the minimum cost path from each and every vertex.
 Finally cost (1) will give the shortest distance from source to destination.
 For finding the path, start from vertex-1 then the distance array D(1) will give the minimum cost
neighbor vertex which in turn give the next nearest vertex and proceed in this way till we reach
the Destination.
 For a ‘k’ stage graph, there will be ‘k’ vertex in the path.
 In the above graph V1……V5 represent the stages. This 5 stage graph can be solved by using
forward approach as follows,

ALGORITHM:

Algorithm FGraph (G,k,n,p)


// The i/p is a k-stage graph G=(V,E) with ‘n’ vertex indexed in order of stages.
// E is a set of edges.
// c[i,j] is the cost of <i,j>
// p[1:k] is a minimum cost path.
{
cost[n]=0.0;
for j=n-1 to 1 step-1 do
{
//compute cost[j],
Let ‘r’ be the vertex such that <j,r> is an edge of ‘G’ &
c[j,r]+cost[r] is minimum.
cost[j] = c[j,r] + cost[r];
d[j] =r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j=2 to k-1 do
p[j]=d[p[j-1]];
}
TRACE OUT:
n=12 k=5
cost (12)=0 d (12)=0
for j = 11 to 1
When j=11
cost (11)=5 d (11)=12
When j=10
cost (10)=2 d (10)=12
When j=9
cost ( 9)=4 d ( 9)=12
When j=8
Note:
If there is more than one path from vertex j to next stage vertices,then we find the closest vertex
by the following formula:

 For forward approach,

cost(8) = min {C (8,10) +


Cost (10), C (8,11) + Cost (11) }
= min (5 + 2, 6 + 5)
= min (7,11)
=7
cost(8) =7 d(8)=10
When j=7
cost(7) = min(c (7,9)+ cost(9),c (7,10)+ cost(10))
= min(4+4,3+2)
= min(8,5)
=5
cost(7) = 5 d(7) = 10

When j=6
cost(6) = min (c (6,9) + cost(9),c (6,10) +cost(10))
= min(6+4 , 5 +2)
= min(10,7)
=7
cost(6) = 7 d(6) = 10
When j=5
cost(5) = min (c (5,7) + cost(7),c (5,8) +cost(8))
= min(11+5 , 8 +7)
= min(16,15)
= 15
cost(5) = 15 d(5) = 8

When j=4
cost(4) = min (c (4,8) + cost(8))
= min(11+7)
= 18
cost(4) = 18 d(4) = 8

When j=3
cost(3) = min (c (3,6) + cost(6),c (3,7) +cost(7))
= min(2+7 , 7 +5)
= min(9,12)
=9
cost(3) = 9 d(3) = 6

When j=2
cost(2) = min (c (2,6) + cost(6),c (2,7) +cost(7) ,c (2,8) +cost(8))
= min(4+7 , 2+5 , 1+7 )
= min(11,7,8)
=7
cost(2) = 7 d(2) = 7

When j=1
cost(1)= min (c (1,2)+cost(2) ,c (1,3)+cost(3) ,c (1,4)+cost(4) c(1,5)+cost(5))

= min(9+7 , 7 +9 , 3+18 , 2+15)


= min(16,16,21,17)
= 16
cost(1) = 16 d(1) = 2

p[1]=1 p[5]=12
for j=2 to 4
When j=2 p[2]=d[p[1]]=d[1]=2
When j=3 p[3]=d[p[2]]=d[2]=7
When j=4 p[4]=d[p[3]]=d[7]=10

The path through which you have to find the shortest distance from vertex 1 to vertex 12.

(ie)

p[1] p[2] p[3] p[4]

Start from vertex - 1

D ( 1) = 2
D ( 2) = 7
D ( 7) = 10
D (10) = 12

So, the minimum-cost path is

1 --9--> 2 --2--> 7 --3--> 10 --2--> 12

 The cost is 9 + 2 + 3 + 2 = 16.

ANALYSIS:
The time complexity of this forward method is O (V + E)
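A Python sketch of the forward approach is given below; the edge costs are the ones read off from the trace above, and the function names are illustrative only:

import math

def multistage_forward(n, cost):
    """Forward approach: cost[j] = min over edges (j, r) of c(j, r) + cost[r].
    `cost` is a dict {(j, r): edge_cost}; vertices are 1..n, numbered in
    order of stages, with 1 the source and n the destination."""
    c = [math.inf] * (n + 1)   # c[j] = minimum cost from j to n
    d = [0] * (n + 1)          # d[j] = next vertex on that minimum-cost path
    c[n] = 0
    for j in range(n - 1, 0, -1):
        for (u, r), w in cost.items():
            if u == j and w + c[r] < c[j]:
                c[j], d[j] = w + c[r], r
    # recover the path 1 -> ... -> n
    path, v = [1], 1
    while v != n:
        v = d[v]
        path.append(v)
    return c[1], path

# Edge costs reconstructed from the 5-stage, 12-vertex trace above.
edges = {(1,2):9,(1,3):7,(1,4):3,(1,5):2,(2,6):4,(2,7):2,(2,8):1,(3,6):2,
         (3,7):7,(4,8):11,(5,7):11,(5,8):8,(6,9):6,(6,10):5,(7,9):4,
         (7,10):3,(8,10):5,(8,11):6,(9,12):4,(10,12):2,(11,12):5}
print(multistage_forward(12, edges))   # (16, [1, 2, 7, 10, 12])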

BACKWARD METHOD

 If there are 'k' stages in a graph, then using the backward approach we will find out the cost of each and
every vertex starting from the 1st stage to the kth stage.

 We will find out the minimum cost path from destination to source (i.e.) [from stage k to stage 1].

PROCEDURE:
1. It is similar to forward approach, but differs only in two or three ways.
2. Maintain a cost matrix to store the cost of every vertices and a distance matrix to store the
minimum distance vertex.
3. Find out the cost of each and every vertex starting from vertex 1 up to vertex k.
4. To find out the path star from vertex ‘k’, then the distance array D (k) will give the minimum cost
neighbor vertex which in turn gives the next nearest neighbor vertex and proceed till we reach the
destination.

ALGORITHM :

Algorithm BGraph (G,k,n,p)


// The i/p is a k-stage graph G=(V,E) with ‘n’ vertex indexed in order of stages
// E is a set of edges.
// c[i,j] is the cost of<i,j>
// p[1:k] is a minimum cost path.
{
bcost[1]=0.0;
for j=2 to n do
{
//compute bcost[j],
Let ‘r’ be the vertex such that <r,j> is an edge of ‘G’ & bcost[r]+c[r,j] is
minimum.
bcost[j] = bcost[r] + c[r,j];
d[j] =r;
}
// find a minimum cost path.
p[1]=1;
p[k]=n;
for j= k-1 to 2 do
p[j]=d[p[j+1]];
}

TRACE OUT:
n=12 k=5
bcost (1)=0 d (1)=0
for j = 2 to 12
When j=2
bcost(2) = 9 d(2)=1
When j=3
bcost(3) = 7 d(3)=1
When j=4
bcost(4) = 3 d(4)=1
When j=5
bcost(5) = 2 d(5)=1
When j=6
bcost(6) =min(c (2,6) + bcost(2),c (3,6) + bcost(3))
=min(13,9)
bcost(6) = 9 d(6)=3
When j=7
bcost(7) =min(c (3,7) + bcost(3),c (5,7) + bcost(5) ,c (2,7) + bcost(2))
=min(14,13,11)
bcost(7) = 11 d(7)=2

When j=8
bcost(8) =min(c (2,8) + bcost(2),c (4,8) + bcost(4) ,c (5,8) +bcost(5))
=min(10,14,10)
bcost(8) = 10 d(8)=2
When j=9
bcost(9) =min(c (6,9) + bcost(6),c (7,9) + bcost(7))
=min(15,15)
bcost(9) = 15 d(9)=6
When j=10
bcost(10)=min(c(6,10)+bcost(6),c(7,10)+bcost(7)),c(8,10)+bcost(8))
=min(14,14,15)
bcost(10)= 14 d(10)=6
When j=11
bcost(11) =c (8,11) + bcost(8))
bcost(11) = 16 d(11)=8
When j=12
bcost(12)=min(c(9,12)+bcost(9),c(10,12)+bcost(10),c(11,12)+bcost(11))
=min(19,16,21)
bcost(12) = 16 d(12)=10

p[1]=1 p[5]=12
for j=4 to 2
When j=4 p[4]=d[p[5]]=d[12]=10
When j=3 p[3]=d[p[4]]=d[10]=6
When j=2 p[2]=d[p[3]]=d[6]=3

The path through which you have to find the shortest distance from vertex 1 to vertex 12.
p[5]=12
p[4]= =10
p[3]= =6
p[2]= =3
p[1]=1
So the minimum cost path is

1 --7--> 3 --2--> 6 --5--> 10 --2--> 12

The cost is 7 + 2 + 5 + 2 = 16.

3. Explain all pairs shortest path.

 Let G(V,E) be a directed graph with n vertices , ‘E’ is the set of edges & V is the set of n vertices.
 Each edge has an associated non-negative length.
 We want to calculate the length of the shortest path between each pair of nodes.
 i.e)Shortest path between every vertex to all other vertices.
 Suppose the nodes of G are numbered from 1 to n, so n={1,2,...n}
 cost (i,j) is length of edge <i,j> and it is called as cost adjacency matrix.
 cost(i,i)=0 for i=1,2...n,
 cost(i,j)=cost of edge <i,j> for all i&j
 cost(i,j) = infinity, if the edge (i,j) does not exist.

 The all pairs shortest path problem is to determine a matrix A such that A(i,j) is the length of a
shortest path from i to j.
 The principle of optimality:
If k is the node on the shortest path from i to j then the part of the path from i to k and the
part from k to j must also be optimal, that is shorter.

 First create a cost adjacency matrix for the given graph.


 Copy the above matrix to matrix D, which will give the direct distance between nodes.
 We have to perform n*n iterations for each iteration of k. The matrix D will give you the distance
between nodes with only (1,2...,k)as intermediate nodes.
 At the iteration k, we have to check for each pair of nodes (i,j) whether or not there exists a path
from i to j passing through node k.
 Likewise we have to find the value for n iterations (ie) for n nodes.

Fig: Floyd's algorithm at work

ALGORITHM :
Algorithm Allpaths(cost,A,n)
//cost[1…n,1…n] is the cost adjacency matrix of graph with n vertices
//A[i,j] is the cost of shortest path from vertex i to j
//cost[i,i]=0.0 for 1<=i<=n
{
for i:=1 to n do
for j:=1 to n do
A[i,j]:=cost[i,j]; //copy cost into A
for k:=1 to n do
for i:=1 to n do
for j:=1 to n do
A[i,j]:=min{A[i,j], A[i,k]+ A[k,j]};
}

COST ADJACENCY MATRIX :

 At the 0th iteration it will give you the direct distances between any 2 nodes.

cost(i,j) = A(i,j) = D0 =

  0    5    ∞    ∞
 50    0   15    5
 30    ∞    0   15
 15    ∞    5    0

It gives the direct distances between the two nodes.


 At the 1st iteration we have to check, for each pair (i,j), whether there is a path through node 1. If so, we
have to check whether it is shorter than the previous value; if it is, then the distance through 1 becomes the
value of D1(i,j). At the same time we store the intermediate node in the matrix position P(i,j).

for k= 1 to 4
When k=1
for i= 1 to 4
when i=1
for j= 1 to 4
when j=1
A[1,1]=min(A[1,1],a[1,1]+A[1,1])=min(0,0)=0
When j=2
A[1,2]=min(A[1,2],A[1,1]+A[1,2])=min(5,0+5)=5
When j=3
A[1,3]=min(A[1,3],A[1,1]+A[1,3])=min(∞,0+∞)=∞
When j=4
A[1,4]=min(A[1,4],A[1,1]+A[1,4])=min(∞,0+∞)=∞

When i=2
for j= 1 to 4
when j=1
A[2,1]=min(A[2,1],A[2,1]+A[1,1])=min(50,50+0)=50
When j=2
A[2,2]=min(A[2,2],A[2,1]+A[1,2])=min(0,50+5)=0
When j=3
A[2,3]=min(A[2,3],A[2,1]+A[1,3])=min(15,50+∞)=15
When j=4
A[2,4]=min(A[2,4],A[2,1]+A[1,4])=min(5,50+∞)=5

When i=3
for j= 1 to 4
when j=1
A[3,1]=min(A[3,1],A[3,1]+A[1,1])=min(30,30+0)=30
When j=2
A[3,2]=min(A[3,2],A[3,1]+A[1,2])=min(∞,30+5)=35
When j=3
A[3,3]=min(A[3,3],A[3,1]+A[1,3])=min(0,30+∞)=0
When j=4
A[3,4]=min(A[3,4],A[3,1]+A[1,4])=min(15,30+∞)=15

When i=4
for j= 1 to 4
when j=1
A[4,1]=min(A[4,1],A[4,1]+A[1,1])=min(15,15+0)=15
When j=2
A[4,2]=min(A[4,2],A[4,1]+A[1,2])=min(∞,15+5)=20
When j=3
A[4,3]=min(A[4,3],A[4,1]+A[1,3])=min(5,15+∞)=5
When j=4
A[4,4]=min(A[4,4],A[4,1]+A[1,4])=min(0,15+∞)=0

After the iteration k = 1 the values A[i,j] are:

 i\j   1    2    3    4
 1     0    5    ∞    ∞
 2    50    0   15    5
 3    30   35    0   15
 4    15   20    5    0

so that

D1 =
  0    5    ∞    ∞
 50    0   15    5
 30   35    0   15
 15   20    5    0

with P[3,2] = 1 and P[4,2] = 1, giving

P1 =
 0 0 0 0
 0 0 0 0
 0 1 0 0
 0 1 0 0

When k=2,we have obtained the matrix D2 and P2

0 5 20 10 P[1,3] = 2 0 0 2 2
D2= 50 0 15 5 P[1,4] = 2 P2= 0 0 0 0
30 35 0 15 0 1 0 1
15 20 5 0 0 1 0 0

0 5 20 10
D3= 45 0 15 5 P[2,1]=3
30 35 0 15
15 20 5 0

0 5 15 10
20 0 10 5 P[1,3]=4
D4= 30 35 0 15 P[2,3]=4
15 20 5 0

 D4 will give the shortest distance between any pair of nodes.

 If you want the exact path then we have to refer the matrix p.The matrix will be,

P =
 0 0 4 2
 3 0 4 0
 0 1 0 0
 0 1 0 0

(A 0 entry means a direct path.)

 Since, p[1,3]=4,the shortest path from 1 to3 passes through 4.


 Looking now at p[1,4]&p[4,3] we discover that between 1 & 4, we have to go to node 2 but that
from 4 to 3 we proceed directly.

 Finally we see the trips from 1 to 2, & from 2 to 4, are also direct.

 The shortest path from 1 to 3 is 1,2,4,3.

ALGORITHM :

Function Floyd (L[1..r,1..r]):array[1..n,1..n]


array D[1..n,1..n]

D=L
For k = 1 to n do
For i = 1 to n do
For j = 1 to n do

D[i,j] = min(D[i,j], D[i,k] + D[k,j])

Return D

ANALYSIS:
This algorithm takes a time of Θ(n³).
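For reference, a direct Python rendering of Floyd's algorithm (a sketch, run here on the 4-vertex cost matrix of the trace, with INF marking a missing edge):

INF = float('inf')

def floyd(cost):
    """All-pairs shortest paths. cost is an n x n matrix with cost[i][i] = 0
    and INF where there is no edge; returns the distance matrix A."""
    n = len(cost)
    A = [row[:] for row in cost]          # copy cost into A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

# The 4-vertex example used in the trace above.
cost = [[0, 5, INF, INF],
        [50, 0, 15, 5],
        [30, INF, 0, 15],
        [15, INF, 5, 0]]
for row in floyd(cost):
    print(row)   # the final rows match D4 in the trace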

4. Explain Travelling salesman problem (UQ APRIL’13 & NOV’12)

 Let G(V,E) be a directed graph with edge costs cij, defined such that cij > 0 for all i and j and cij = ∞ if <i,j> ∉ E.
o Let |V| = n and assume n > 1.

 The traveling salesman problem is to find a tour of minimum cost.

 A tour of G is a directed cycle that include every vertex in V.

 The cost of the tour is the sum of cost of the edges on the tour.

 The tour is the shortest path that starts and ends at the same vertex (ie) 1.

Application:

1. Suppose we have to route a postal van to pick up mail from the mail boxes located at ‘n’ different
sites.

2. An n+1 vertex graph can be used to represent the situation.

3. One vertex represents the post office from which the postal van starts and return.

4. Edge <i,j> is assigned a cost equal to the distance from site ‘i’ to site ‘j’.
5. The route taken by the postal van is a tour and we are finding a tour of minimum length.

6. Every tour consists of an edge <1,k> for some k ∈ V−{1} and a path from vertex k to vertex 1.

7. The path from vertex k to vertex 1 goes through each vertex in V-{1,k} exactly once.

8. The function which is used to find the length of an optimal tour is

g(1, V−{1}) = min over k in V−{1} of { c1k + g(k, V−{1,k}) }

9. g(i,s) be the length of a shortest path starting at vertex i, going through all vertices in S, and
terminating at vertex 1.

10. The function g(1,v-{1}) is the length of an optimal tour.

STEPS TO FIND THE PATH:

1. Find g(i,) =ci1, 1<=i<n, hence we can use equation(2) to obtain g(i,s) for all s to size 1.

2. That we have to start with s=1,(ie) there will be only one vertex in set ‘s’.

3. Then s=2, and we have to proceed until |s| <n-1.

4. for example consider the graph.


Cost matrix

0 10 15 20
5 0 9 10
6 13 0 12
8 8 9 0

g(i,S) is the length of a shortest path starting at vertex i, going through all the vertices in S, and terminating at vertex 1.

The recurrence used is

g(i,S) = min over j in S of { cij + g(j, S−{j}) }


STEP 1:

g(1,{2,3,4})=min{c12+g(2{3,4}),c13+g(3,{2,4}),c14+g(4,{2,3})}

min{10+25,15+25,20+23}

min{35,35,43}
=35

STEP 2:

g(2,{3,4}) = min{c23+g(3{4}),c24+g(4,{3})}

min{9+20,10+15}

min{29,25}

=25

g(3,{2,4}) =min{c32+g(2{4}),c34+g(4,{2})}

min{13+18,12+13}

min{31,25}

=25

g(4,{2,3}) = min{c42+g(2{3}),c43+g(3,{2})}

min{8+15,9+18}

min{23,27}

=23

STEP 3:

1. g(3,{4}) = min{c34 + g(4,∅)}
   = 12 + 8 = 20

2. g(4,{3}) = min{c43 + g(3,∅)}
   = 9 + 6 = 15

3. g(2,{4}) = min{c24 + g(4,∅)}
   = 10 + 8 = 18

4. g(4,{2}) = min{c42 + g(2,∅)}
   = 8 + 5 = 13

5. g(2,{3}) = min{c23 + g(3,∅)}
   = 9 + 6 = 15

6. g(3,{2}) = min{c32 + g(2,∅)}
   = 13 + 5 = 18

STEP 4:

g{4,} =c41 = 8

g{3,} =c31 = 6

g{2,} =c21 = 5

s = 0.

i =1 to n.

g(1,) = c11 => 0

g(2,) = c21 => 5

g(3,) = c31 => 6

g(4,) = c41 => 8

|S| = 1

i = 2 to 4

g(2,{3}) = c23 + g(3,∅) = 9 + 6 = 15

g(2,{4}) = c24 + g(4,∅) = 10 + 8 = 18

g(3,{2}) = c32 + g(2,∅) = 13 + 5 = 18

g(3,{4}) = c34 + g(4,∅) = 12 + 8 = 20

g(4,{2}) = c42 + g(2,∅) = 8 + 5 = 13

g(4,{3}) = c43 + g(3,∅) = 9 + 6 = 15

|S| = 2

i ≠ 1, 1 ∉ S and i ∉ S.

g(2,{3,4}) = min{c23+g(3{4}),c24+g(4,{3})}

min{9+20,10+15}

min{29,25}

=25

g(3,{2,4}) =min{c32+g(2{4}),c34+g(4,{2})}

min{13+18,12+13}

min{31,25}

=25

g(4,{2,3}) = min{c42+g(2{3}),c43+g(3,{2})}

min{8+15,9+18}

min{23,27}

=23
|S| = 3

g(1,{2,3,4})=min{c12+g(2{3,4}),c13+g(3,{2,4}),c14+g(4,{2,3})}

min{10+25,15+25,20+23}

min{35,35,43}
=35

optimal cost is 35

the shortest path is,

g(1,{2,3,4}) = c12 + g(2,{3,4}) => 1->2

g(2,{3,4}) = c24 + g(4,{3}) => 1->2->4

g(4,{3}) = c43 + g(3{}) => 1->2->4->3->1

So the optimal tour is 1  2  4 3  1
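The same computation can be expressed compactly in Python. The sketch below is a straightforward Held-Karp style implementation (not tuned for large n, and the function name is illustrative); it reproduces the optimal cost 35 for the cost matrix above:

from itertools import combinations

def tsp_dp(c):
    """g(i, S) = min over j in S of c[i][j] + g(j, S - {j}), with g(i, {}) = c[i][0].
    Vertices are 0..n-1 and the tour starts and ends at vertex 0."""
    n = len(c)
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}   # base case g(i, empty set)
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    rest = frozenset(range(1, n))
    return min(c[0][j] + g[(j, rest - {j})] for j in rest)

cost = [[0, 10, 15, 20],
        [5, 0, 9, 10],
        [6, 13, 0, 12],
        [8, 8, 9, 0]]
print(tsp_dp(cost))   # 35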

5. Explain chained matrix multiplication (UQ NOV’12 & NOV’10)

If we have a matrix A of size p×q and a matrix B of size q×r, the product C of these two matrices is given by

cij = Σ (k = 1 to q) aik · bkj,  1 ≤ i ≤ p, 1 ≤ j ≤ r.

It requires a total of p·q·r scalar multiplications.

Matrix multiplication is associative, so if we want to calculate the product of more than two matrices,
M = M1·M2·…·Mn, it can be parenthesized in several ways.

For example,
A = 13 × 5
B = 5 × 89
C = 89 × 3
D = 3 × 34

Now, we will see some of the sequences.

1st Sequence,

M = (((A·B)·C)·D)
A·B costs 13 × 5 × 89, (A·B)·C costs 13 × 89 × 3, and ((A·B)·C)·D costs 13 × 3 × 34.
i.e. = 13·5·89 + 13·89·3 + 13·3·34
= 10,582 multiplications are required for that sequence.

2nd Sequence,

M = (A * B) * (C * D)
= 13 * 5 * 89 + 89 * 3 * 34 + 13 * 89 * 34
= 54201 no. of Multiplication

3rd Sequence,

M = (A.(BC)) . D
= 5 * 89 * 3 + 13 * 5 * 3 + 13 * 3 *34
= 2856

Comparing all these sequences, (A·(B·C))·D requires the fewest multiplications.

To find the best number of multiplications directly, we use the dynamic programming method.

STRAIGHT FORWARD METHOD:

Our aim is to find the total no. of scalar multiplication required to compute the matrix product.

Let (M1,M2,……Mi) and Mi+1,Mi+2,……Mn be the chain of matrix to be calculated using the dynamic
programming method.

In dynamic programming we always start with the smallest instances and continue till we reach the
required size.

We maintain a table m(i,j), 1 ≤ i ≤ j ≤ n,

where m(i,j) gives the optimal solution.

The sizes of all the matrices are stored in the array d[0..n].

We build the table diagonal by diagonal; diagonal s contains the elements m(i,j) such that j − i = s.

RULES TO FILL THE TABLE m(i,j):

s = 0,1,……,n−1

If s = 0 => m(i,i) = 0, i = 1,2,……,n

If s = 1 => m(i,i+1) = d(i−1)·d(i)·d(i+1), i = 1,2,……,n−1

If 1 < s < n => m(i,i+s) = min over i ≤ k < i+s of ( m(i,k) + m(k+1,i+s) + d(i−1)·d(k)·d(i+s) ), i = 1,2,……,n−s
Apply this to the example,

A=>135
B=>589
C=>893
D=>334

Single dimension array is used to store the sizes.

d[0]=13
d[1]=5
d[2]=89
d[3]=3
d[4]=34

if s=0,

m(1,1)=0
m(2,2)=0
m(3,3)=0
m(4,4)=0

if s=1,

mi,i+1 = d(i-1)*di *d(i+1)

m12 = d0*d1*d2 =13*5*89=5785


m23 = d1*d2*d3 =5*89*3=1335
m34 = d2*d3*d4 =89*3*34=9078

if s=2,

m(i,i+s) = min over i ≤ k < i+s of ( m(i,k) + m(k+1,i+s) + d(i−1)·d(k)·d(i+s) )

m13 =min(m11+m23+d0d1d3 , m12+m33+d0d2d3)

=(0+1335+(13*5*3),5785+0+(13*89*3))

=min(1530,9256)

=1530
m24 =min(m22+m34+d1d2d4 , m23+m44+d1d3d4)

=(0+9078+(5*89*34),1335+0+(5*3*34))

=min(24208,1845)

=1845

m14 =min(m11+m24+d0d1d4 , m12+m34+d0d2d4, m13+m44+d0d3d4)

=min(4055,54201,2856)

=2856

The matrix table m(i,j) will look like:

 i\j    1      2      3      4
  1     0    5785   1530   2856
  2            0    1335   1845
  3                   0    9078
  4                          0

(Diagonal s = 0 is the main diagonal of zeros; the single entry on diagonal s = 3 is m14.)

Here there is no need to fill the lower diagonal.

ALGORITHM:

Procedure cmatrix(n, d[0..n])
{
  for s = 0 to n-1 do
  {
    if (s = 0) then
      for i = 1 to n do
        m(i,i) = 0
    else if (s = 1) then
      for i = 1 to n-1 do
        m(i,i+1) = d(i-1)*d(i)*d(i+1)
    else
      for i = 1 to n-s do
      {
        min = ∞
        for k = i to i+s-1 do
          if (min > m(i,k) + m(k+1,i+s) + d(i-1)*d(k)*d(i+s)) then
            min = m(i,k) + m(k+1,i+s) + d(i-1)*d(k)*d(i+s)
        m(i,i+s) = min
      }
  }
}
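The same table-filling procedure can be written in Python as a short sketch (d holds the dimensions exactly as in the example above; the result matches m14 = 2856):

def matrix_chain(d):
    """d[0..n] holds the dimensions: matrix i has size d[i-1] x d[i].
    m[i][j] = minimum number of scalar multiplications for M_i ... M_j."""
    n = len(d) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n):                      # diagonal index, j - i = s
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([13, 5, 89, 3, 34]))   # 2856, matching m14 above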

6. Explain 0/1 knapsack problem (UQ APRIL’12,JUL 21,DEC 21)

Given n items of known weights W1….. Wn and values a knapsack of capacity W, find the most valuable
subset of the items that fit into the knapsack.
Assume that the weights and the knapsack capacity are positive integers.
The item values do not have to be integers.
To design a dynamic programming algorithm, it is necessary to derive a recurrence relation that
expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller sub
instances.
The recurrence relation is

F[i,j] = max{ F[i−1,j], vi + F[i−1, j−wi] }  if j ≥ wi     ...(1)
F[i,j] = F[i−1,j]                             if j < wi

The initial conditions are F[0,j] = 0 for j ≥ 0 and F[i,0] = 0 for i ≥ 0.     ...(2)

Our goal is to find F[n,w], the maximum value of a subset of the n items that fit into the knapsack of
capacity w, and an optimal subset itself.
F[0,j] = 0 as well as F[i,0] = 0 when j≥0 and i≥0.
The table can be filled by either row by row or column by column.

[Table schematic: the entry F[i,j] is computed from F[i−1,j] and F[i−1, j−wi] in the previous row, where item i has weight wi and value vi.]

Example

For the given instance of problem obtain the optimal solution for the knapsack problem

Item Weight Value


1 2 $3
2 3 $4
3 4 $5
4 5 $6

The capacity of knapsack is W=5


Solution :
Initially, F[0,j] = 0 and F[i,0] = 0.
There are rows 0 to n and columns 0 to W (here 0 to 5) in the table.

Let us start filling the table row by row using following formula:

Compute F[1 , 1] with i = 1 , j= 1 , wi = 2 and vi =3


As j< wi we will obtain F[1,1] as
F[1,1] = F[ i – 1, j]
= F[ 0, 1]
F[1,1] = 0
Compute F[1 , 2] with i = 1 , j= 2 , wi = 2 and vi =3
As j≥ wi we will obtain F[1,2] as
F[1,2]= Max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max {F[ 0,2]) , ( 3 + F[ 0,0])}
= Max{ 0, 3 + 0}
F[1,2] = 3
Compute F[1 , 3] with i = 1 , j= 3 , wi = 2 and vi =3
As j≥ wi we will obtain F[1,3] as
F[1,3]= Max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max {F[ 0,3]) , ( 3 + F[ 0,1])}
= Max{0, 3 + 0}
F[1,3] = 3
Compute F[1 , 4] with i = 1 , j= 4 , wi = 2 and vi =3
As j≥ wi we will obtain F[1,4] as
F[1,4] = Max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max {F[ 0,4]) , ( 3 + F[ 0,2])}
= Max{0, 3 + 0}
F[1,4]= 3
Compute F [1 , 5] with i = 1 , j= 5, wi = 2 and vi =3
As j≥ wi we will obtain F[1,5] as
F[1,5]= Max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max {F[ 0,5]) , ( 3 + F[ 0,3])}
= Max{0, 3 + 0}
F[1, 5]= 3

The table with these values can be

        0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0   3   3   3   3

Compute F[2 , 1] with i = 2 , j= 1, wi = 3 and vi =4


As j < wi we will obtain F[2,1] as
F[2,1] = F[i – 1, j] = F[1, 1]
F[2,1] = 0

Compute F [2 , 2] with i = 2 , j= 2, wi = 3 and vi =4


As j < wi we will obtain F[2,2] as
F[2,2] = F[i – 1, j]
= F[1, 2]
F[2,2] = 3

Compute F [2 , 3] with i = 2 , j= 3, wi = 3 and vi =4


As j≥wi we will obtain F[2,3] as
F [2 , 3]= max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 1,3]) , ( 4 + F[ 1,0])}
= Max{ 3, 4+ 0}
F[2,3] = 4

Compute F [2 , 4] with i = 2 , j= 4, wi = 3 and vi =4


As j≥wi , we will obtain F[2,4] as
F [2,4] = max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 1,4]) , ( 4 + F[ 1,1])}
= Max{ 3, 4+ 0}
F[2,4] = 4

Compute F [2 , 5] with i = 2 , j= 5, wi = 3 and vi =4


As j≥wi , we will obtain F[2,5] as
F [2 , 5] = max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 1,5]) , ( 4 + F[ 1,2])}
= Max{ 3, 4+ 3}
F[2,5] = 7
The table with these values can be

        0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0   3   3   3   3
  2     0   0   3   4   4   7

Compute F [3 , 1] with i = 3 , j= 1, wi = 4 and vi =5


As j< wi we will obtain F[3,1] as
F[3,1] = F[i – 1, j]
= F[2, 1]
F[3,1] = 0
Compute F [3 , 2] with i = 3 , j=2, wi = 4 and vi =5
As j<wi we will obtain F[3,2] as
F[3,2]= F[i – 1, j]
= F[2, 2]
F[3,2]= 3
Compute F [3 , 3] with i = 3 , j=3, wi = 4 and vi =5
As j<wi we will obtain F[3,3] as
F[3,3]= F[i – 1, j]
= F[2, 3]
F[3,3] = 4
Compute F [3 , 4] with i = 3 , j=4, wi = 4 and vi =5
As j ≥ wi we will obtain F[3,4] as
F [3,4]= max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 2,4]) , ( 5 + F[ 2,0])}
= Max{ 4, 5+ 0}
F[3,4] = 5
Compute F [3 , 5 ] with i = 3 , j=5 wi = 4 and vi =5
As j ≥ wi we will obtain F[3,5] as
F [3,5]= max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 2,5]) , ( 5 + F[ 2,1])}
= Max{ 7, 5+ 0}
F[3,5] = 7
The table with these values can be

        0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0   3   3   3   3
  2     0   0   3   4   4   7
  3     0   0   3   4   5   7

Compute F [4 , 1] with i = 4 , j=1, wi = 5 and vi =6


As j<wi we will obtain F[4,1] as
F[4,1] = F[i – 1, j]
= F[3, 1]
F[4,1]= 0
Compute F [4 , 2] with i = 4 , j=2, wi = 5 and vi =6
As j<wi we will obtain F[4,2] as
F[4,2] = F[i – 1, j]
= F[3, 2]
F[4,2]= 3
Compute F [4 , 3] with i = 4 , j=3 wi = 5 and vi =6
As j<wi we will obtain F[4,3] as
F[4,3] = F[i – 1, j]
= F[3, 3]
F[4,3]= 4
Compute F [4 , 4] with i = 4 , j=4 wi = 5 and vi =6
As j<wi we will obtain F[4,4] as
F[4,4] = F[i – 1, j]
= F[3, 4]
F[4,4] = 5
Compute F [4 , 5] with i = 4 , j=5 wi = 5 and vi =6
As j ≥ wi we will obtain F[4,5] as
F[4,5]= Max{ F[i -1,j],vi + F [i-1, j-wi]}
= Max { F[ 3,5]) , ( 6 + F[ 3,0])}
= Max{ 7, 6+ 0}
F[4,5]=7

The table with these values can be

        0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0   3   3   3   3
  2     0   0   3   4   4   7
  3     0   0   3   4   5   7
  4     0   0   3   4   5   7

Identification of Knapsack Items:

F[n, W] is the total value of the selected items that can be placed in the knapsack. The following steps are used
repeatedly, starting from i = n and k = W:
 If F[i,k] = F[i−1,k], item i is not selected; set i = i − 1.
 If F[i,k] ≠ F[i−1,k], item i is selected; set i = i − 1 and k = k − wi.

Let us apply these steps to the problem we have in final table.


        0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0   3   3   3   3
  2     0   0   3   4   4   7
  3     0   0   3   4   5   7
  4     0   0   3   4   5   7

Start: Let i = 4 and k = 5.

As F[i,k] = F[i−1,k], i.e. F[4,5] = F[3,5], don't select the 4th item.

Now set i = i − 1, i = 3.

As F[3,5] = F[2,5], don't select the 3rd item. Now set i = i − 1, i = 2.

As F[i,k] ≠ F[i−1,k], i.e. F[2,5] ≠ F[1,5], select the 2nd item. Set i = i − 1 and k = k − wi,
i.e. i = 1 and k = 5 − 3 = 2.

As F [i,k]≠ F[ i – 1, k]
i.e F[1,2] ≠ F[0,2]
select ith item
That is , select 1st item.
Set i = i - 1 and k = k – wi
i = 0 and k=2-2=0
Thus item 1 and item 2 are selected for the knapsack.
Algorithm
Algorithm DynamicKnapsack(n, W, w[], v[])
// Problem description: obtains the 0/1 knapsack solution using dynamic programming
// Input: n is the total number of items, W is the capacity of the knapsack,
//        w[] stores the weight of each item and v[] stores the value of each item
// Output: returns the total value of the items selected for the knapsack
{
  for i ← 0 to n do          // table initialization
    F[i,0] ← 0
  for j ← 0 to W do
    F[0,j] ← 0
  for i ← 1 to n do
  {
    for j ← 1 to W do
    {
      if (j < w[i]) then
        F[i,j] ← F[i−1,j]
      else                     // j ≥ w[i]
        F[i,j] ← max(F[i−1,j], v[i] + F[i−1, j−w[i]])
    }
  }
  return F[n,W]
}
Analysis:
In this algorithm the basic operation is the if…else statement inside two nested for loops. Hence

C(n) = Σ (i = 0 to n) Σ (j = 0 to W) 1 = Σ (i = 0 to n) (W − 0 + 1) = (W + 1)(n − 0 + 1)

C(n) = Wn + W + n + 1

C(n) ≈ Wn, i.e. C(n) ∈ Θ(nW)
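For reference, a short Python sketch of the same table-filling algorithm; it returns the optimal value together with the selected items using the back-tracing rule described above (the function name is illustrative):

def knapsack_dp(weights, values, W):
    """0/1 knapsack by dynamic programming.
    F[i][j] = best value using the first i items with capacity j."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j < weights[i - 1]:
                F[i][j] = F[i - 1][j]
            else:
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    # back-trace the selected items
    items, j = [], W
    for i in range(n, 0, -1):
        if F[i][j] != F[i - 1][j]:      # item i was selected
            items.append(i)
            j -= weights[i - 1]
    return F[n][W], sorted(items)

print(knapsack_dp([2, 3, 4, 5], [3, 4, 5, 6], 5))   # (7, [1, 2])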

7. Explain approaches using recursion


Although dynamic programming algorithms give efficient results, there is something
unsatisfactory about the bottom-up approach, because it may compute values that are completely
irrelevant. So we go for the top-down approach to achieve the same efficiency.
For matrix multiplication this means replacing the table m by a function fm which is calculated only as
required. In other words, we would like a function fm such that fm(i,j) equals m(i,j) for 1 ≤ i ≤ j ≤ n but is
calculated recursively. Writing such a function is simple: we just program the rules for calculating m.
Function fm(i,j)
if i = j then return 0;   {only one matrix is involved}
m = ∞
for k = i to j-1 do
    m = min(m, fm(i,k) + fm(k+1,j) + d(i-1)*d(k)*d(j));
return m

MEMORY FUNCTIONS:

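A memory function keeps the recursive, top-down structure of fm but stores every computed entry in a table so that it is evaluated only once. The following Python sketch applies this idea to the chained matrix multiplication problem (an illustrative implementation, assuming the dimensions are supplied in d[0..n]):

def matrix_chain_memo(d):
    """Top-down matrix-chain cost with a memory function:
    fm(i, j) is computed recursively but cached in table m."""
    n = len(d) - 1
    m = {}                                   # the "memory" table

    def fm(i, j):
        if i == j:                           # only one matrix is involved
            return 0
        if (i, j) not in m:                  # compute the entry only once
            m[(i, j)] = min(fm(i, k) + fm(k + 1, j) + d[i - 1] * d[k] * d[j]
                            for k in range(i, j))
        return m[(i, j)]

    return fm(1, n)

print(matrix_chain_memo([13, 5, 89, 3, 34]))   # 2856, same as the bottom-up table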

8. Explain basic search and traversal technique

(APRIL’13,APRIL15,DEC21)

9. TECHNIQUES FOR GRAPHS:

 The fundamental problem concerning graphs is the reach-ability problem.

 In its simplest form it requires us to determine whether there exists a path in the given graph
G = (V,E) such that this path starts at vertex 'v' and ends at vertex 'u'.

 A more general form is to determine, for a given starting vertex v ∈ V, all vertices 'u' such that there
is a path from v to u.

 This problem can be solved by starting at vertex 'v' and systematically searching the graph 'G'
for vertices that can be reached from 'v'.

 We describe 2 search methods for this.

i. Breadth first Search and Traversal.


ii. Depth first Search and Traversal.

Breadth first search and traversal:

 In Breadth First Search we start at a vertex ‘v’ and mark it as having been reached (visited).

 The vertex ‘v’ is at this time said to be unexplored.

 A vertex is said to have been explored by an algorithm when the algorithm has visited all vertices
adjacent from it.

 All unvisited vertices adjacent from 'v' are visited next. These are new unexplored vertices.
 Vertex 'v' has now been explored. The newly visited vertices haven't been explored and are put on
the end of a list of unexplored vertices.

 The first vertex on this list is the next to be explored. Exploration continues until no unexplored
vertex is left.

 The list of unexplored vertices operates as a queue and can be represented using any of the standard
queue representations.

ALGORITHM:

Algorithm BFS(v)

// A breadth first search of 'G' is carried out
// beginning at vertex v. For any node i, visited[i] = 1
// if 'i' has already been visited. The graph 'G'
// and array visited[] are global; visited[] is
// initialized to zero.
{
    u := v; visited[v] := 1;   // q is a queue of unexplored vertices
    repeat
    {
        for all vertices 'w' adjacent from u do
        {
            if (visited[w] = 0) then
            {
                Add w to q;
                visited[w] := 1;
            }
        }
        if q is empty then return;   // no unexplored vertex left
        delete u from q;             // get the next unexplored vertex
    } until (false)
}

Algorithm: breadth first traversal

Algorithm BFT(G,n)

for i = 1 to n do
    visited[i] = 0;
for i = 1 to n do
    if (visited[i] = 0) then BFS(i)

Here the time and space required by BFT on an n-vertex, e-edge graph are O(n+e) and O(n) respectively
if adjacency lists are used. If an adjacency matrix is used then the bounds are O(n²) and O(n) respectively.
Depth first search and traversal

A depth first search of a graph differs from a breadth first search in that the exploration of a vertex
v is suspended as soon as a new vertex is reached. At this time the exploration of the new vertex u begins.
When this new vertex has been explored, the exploration of v continues. The search terminates when all
reached vertices have been fully explored. This search process is best described recursively.

ALGORITHM DFS(V)
//given an undirected (directed graph)
//G=(V,E) with n vertices and an array visited[]
//initially set to zero, this algorithm of visits all the vertices reachable
//from v.
//G and visited[] are global
{
visited[v]:=1;
for each vertex w adjacent from v do
{
if(visited[w]=0) then DFS(w);
}
}
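A small Python sketch of both traversals over an adjacency-list graph (the dictionary representation is an assumption used for illustration):

from collections import deque

def bfs(graph, v, visited):
    """Breadth first search from v; graph is a dict {vertex: [adjacent vertices]}."""
    visited[v] = True
    q = deque([v])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if not visited[w]:
                visited[w] = True
                q.append(w)

def dfs(graph, v, visited):
    """Depth first search: exploration of v is suspended as soon as a new vertex is reached."""
    visited[v] = True
    for w in graph[v]:
        if not visited[w]:
            dfs(graph, w, visited)

g = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
seen = {v: False for v in g}
bfs(g, 1, seen)
print(all(seen.values()))   # True: every vertex reachable from 1 was visited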

10. Explain biconnected components (UQ:APRIL/MAY’14)(APR/MAY 21)


Articulation Point:
A vertex 'v' in a connected graph 'G' is an articulation point iff the deletion of vertex v, together
with all edges incident to v, disconnects the graph into 2 or more
non-empty components.

Biconnected Graph:
A graph ‘G’ is biconnected, iff it contains no articulation points.
Real Time Example:
‘G’ is a graph representing a communication network, vertices represent communication station,
edges represent communication lines.
(i) if communication station has articulation point :
 if any communication station ‘i’ is failed , then we cannot communicate with another station.
(ii) if the graph has no articulation point:
if any station ‘i’ is failed, then we can communicate with another station, other than ‘i’.
Figure– Graph G

In the above graph G, if we delete the vertex 2, then it creates 2 non-empty components (Component 1 and Component 2, shown in the figure).
If we delete any other vertex other than 2, 3, and 5, then we get only one component exactly. So the above
graph is not biconnected.

The figure represents a graph which is biconnected, because it has no articulation points.
If a graph is not biconnected, then we may include a set of edges that makes the graph biconnected. We can
determine those edges if we know the maximal subgraphs of G that are biconnected.
Let G(V,E) be a graph that is not biconnected. To make it biconnected, first identify the articulation
points by the algorithm. Once it has been determined that a connected graph G is not biconnected, it may
be desirable to determine the set of edges whose inclusion makes the graph biconnected. Determining
such a set of edges is facilitated if we know the maximal subgraphs of G that are biconnected. G′(V′,E′)
is a maximal biconnected subgraph of G iff G has no biconnected subgraph G′′(V′′,E′′) such that V′ ⊆ V′′
and E′ ⊆ E′′. A maximal biconnected subgraph is a biconnected component.
Biconnected components should contain at least 2 vertices. A biconnected component itself has no
articulation point. The graph G can be transformed into a biconnected graph by using the edge addition
scheme of the algorithm below. Note that once the edges (vi,vi+1) added by the algorithm are in place,
vertex a is no longer an articulation point. Hence, following the addition of edges corresponding to
all articulation points, G has no articulation points and so it is biconnected. If G has p articulation points
and b biconnected components, the scheme of the algorithm introduces exactly b − p new edges into G.

Algorithm To Construct A Biconnected Graph:


1 for each articulation point 'a' do
2 {
3   let B1, B2, ……, Bk be the biconnected components containing vertex a;
4   let vi, vi != a, be a vertex in Bi, 1 <= i <= k;
5   add to 'G' the edges (vi, vi+1), 1 <= i < k;
6 }

Pseudo Code To Determine Bicomponents :


Algorithm Bicomp (u,v)
{
//u is the start vertex for depth first search.
// v is its parents ,if any in the depth first spanning tree .
//it is assumed that ,the global array ‘dfn’ is initially zero and that the global variable
// ‘num’ is initialized to 1.
//’n’ is the number of vertices
{
dfn[u]:=num;L[u]:=num;num=num+1;
for each vertex 'w' adjacent from 'u' do
{
if((v!=w)&&(dfn[w]<dfn[u])) then
add [u,w] to the top of the stack S;
if(dfn[w]=0) then
{
if(L[w]>=dfn[u]) then
{
write(“New Bicomponents”);
repeat
{
delete an edge from top of stack s;
let the edge be (x,y);
write(x,y);
}
until(((x,y)=(u,w))||((x,y)=(w,u)));
}
Bicomp(w,u);//w is unvisited
L(u):=min(L(u),L(w));
}
else if(w!=v) then
L(u):=min(L(u),dfn[w]);
}
}
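A hedged Python sketch of the underlying dfn/L computation used to detect articulation points follows; biconnected components themselves could be emitted by additionally stacking edges, as in the pseudo code above (function and variable names are illustrative):

def articulation_points(graph):
    """graph: dict {vertex: [adjacent vertices]} of a connected undirected graph.
    Returns the set of articulation points using dfn (discovery number) and
    L (lowest dfn reachable through the subtree plus one back edge)."""
    dfn, L, points = {}, {}, set()
    counter = [1]

    def dfs(u, parent):
        dfn[u] = L[u] = counter[0]
        counter[0] += 1
        children = 0
        for w in graph[u]:
            if w not in dfn:                 # tree edge
                children += 1
                dfs(w, u)
                L[u] = min(L[u], L[w])
                # a non-root u is an articulation point if no back edge from
                # w's subtree climbs above u
                if parent is not None and L[w] >= dfn[u]:
                    points.add(u)
            elif w != parent:                # back edge
                L[u] = min(L[u], dfn[w])
        # the root is an articulation point iff it has two or more children
        if parent is None and children >= 2:
            points.add(u)

    start = next(iter(graph))
    dfs(start, None)
    return points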

11. Explain topological sorting (UQ NOV’10, APRIL/MAY’14)(SEP 22)


A directed graph G in which the vertices represent tasks/events and the edges represent
activities to move from one event to another is known as an activity-on-vertex
network/AOV-network/PERT graph.
PERT graph is made to analyze interrelated activities of complete projects. The purpose of a
topological sort is to order the events in a linear manner, with the restriction that an event cannot precede
other events that must first take place.
A topological sort defines a linear ordering on these orders of a directed graph that have the
following property: If node P as a predecessor of node Q, then Q cannot be the predecessor of node P. A
topological sort cannot order the nodes in a cycle.

Fig(b): Two possible topological orders for the example AOV network:
V1,V2,V5,V3,V6,V4
V1,V4,V3,V6,V2,V5

ALGORITHM: To sort the tasks into topological order:


Let n be the number of vertices.
1. For i ← 1 to n do   //output the vertices//
2. If every vertex has a predecessor, then the network has a cycle; stop.
3. Pick a vertex V which has no predecessor.
4. Output V.
5. Delete V and all edges leading out of V from the network.
6. End.
Apply this for the above given graph, we will get:
a) Initial

b) V1

Topological order generated is:


V1,V4,V3,V6,V2,V5.
Here on step (b) we have, 3 vertices V2,V3&V4 which has no predecessor and any of these can be
the next vertex in topological order.
In this problem, the functions are:
i) Decide whether a vertex has any predecessor.
ii) Decide a vertex together with all its incident edges.

 This can be efficiently done if for each vertex a count of the no of its immediate predecessor is kept.
 This can be easily implemented if the network is represented by its adjacency lists.
The following algorithm assumes that the network is represented by adjacency lists.

ALGORITHM:
Procedure Topological_order (count,vertex,link,n)
//the n vertices of an AOV-network are listed in topological order .
the network is represented as a set of adjacency lists with
COUNT(i)=the in-degree of vertex i//
1) top ← 0   //initialize stack//
2) for i ← 1 to n do   //create a linked stack of vertices with no predecessors//
3)     if COUNT(i) = 0 then [COUNT(i) ← top; top ← i]
4) end
5) for i ← 1 to n do   //print the vertices in topological order//
6)     if top = 0 then [print ('network has a cycle'); stop]
7)     j ← top; top ← COUNT(top); print(j)   //unstack a vertex//
8)     ptr ← LINK(j)
9)     while ptr <> 0 do
10)        //decrease the count of successor vertices of j//
11)        K ← VERTEX(ptr)   // K is a successor of j//
12)        COUNT(K) ← COUNT(K) − 1   // Decrease count//
13)        If COUNT(K) = 0   // Add vertex K to stack//
14)        Then [COUNT(K) ← top; top ← K]
15)        ptr ← LINK(ptr)
16)    End
17) End
18) End TOPOLOGICAL_ORDER.

[Fig: adjacency-list representation of the AOV network. Each head node has two fields, COUNT and LINK; each list node has two fields, VERTEX and LINK.]
The head nodes of these lists contain two fields: COUNT & LINK. The COUNT field contains the
in-degree of that vertex and LINK is a pointer to the first node on the adjacency list. Each list node has 2
fields: VERTEX & LINK.
COUNT Fields can be set at the time of i/p. When edge < i ,j > is i/p the count of vertex j is
incremented by 1. The list of vertices with zero count is maintained as a stack.
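The same idea can be sketched in Python using in-degree counts and a stack of zero-predecessor vertices (an illustrative rendering of the procedure above):

def topological_order(n, edges):
    """n vertices numbered 1..n; edges is a list of (i, j) pairs of the AOV network.
    Returns a topological order or reports a cycle."""
    count = [0] * (n + 1)                 # in-degree of each vertex
    adj = {v: [] for v in range(1, n + 1)}
    for i, j in edges:
        adj[i].append(j)
        count[j] += 1
    stack = [v for v in range(1, n + 1) if count[v] == 0]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)
        for w in adj[v]:                  # decrease the count of successor vertices
            count[w] -= 1
            if count[w] == 0:
                stack.append(w)
    if len(order) < n:
        raise ValueError("network has a cycle")
    return order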
11. Write a pseudo code to find Optimal binary search trees using dynamic programming

(OBST) Or
Obtain a optimal binary search tree for following nodes (do, if , int , while) with following
probabilities ( 0.1, 0.2 , 0.4, 0.3)
 A binary search tree is one of the most important data structures in computer science.
 One of its principal application is to implement a dictionary, a set of
elements with the operations of searching, insertion, and deletion.
Example:
Consider four keys A, B, C, and D to be searched for with probabilities 0.1, 0.2, 0.4, and 0.3, respectively.

Figure depicts two out of the 14 possible binary search trees containing these keys.
The average number of comparisons in a successful search in the first of these trees is
0.1·1 + 0.2·2 + 0.4·3 + 0.3·4 = 2.9,
and for the second one it is
0.1·2 + 0.2·1 + 0.4·2 + 0.3·3 = 2.1.
Neither of these two trees is, in fact, optimal.
For our tiny example, we could find the optimal tree by generating all 14 binary search trees with these
keys.
As a general algorithm, this exhaustive-search approach is unrealistic: the total number of binary search
trees with n keys is equal to the nth Catalan number, C(n) = (1/(n+1))·C(2n,n), which grows exponentially.

Optimal binary search tree


 For such a binary search tree, the root contains key ak, the left subtree T(i, k−1) contains keys ai, . . .,
ak−1 optimally arranged, and the right subtree T(k+1, j) contains keys ak+1, . . ., aj also optimally
arranged.
 If we count tree levels starting with 1, to make the comparison numbers equal the keys' levels, the
following recurrence relation is obtained:

C[i,j] = min over i ≤ k ≤ j of { C[i,k−1] + C[k+1,j] } + Σ (s = i to j) p[s]

Initially we assume that C[i, i−1] = 0 for i ranging from 1 to n+1.

Then set C[i,i] = p[i] where 1 ≤ i ≤ n.

That means we maintain two tables in the optimal binary search tree construction:

a table for cost, i.e. the cost table C, and a table for roots, i.e. the root table R.
The cost table is constructed as follows.

Fig :Table of the dynamic programming algorithm for constructing an


optimal binary search tree.
1. Fill up C[i ,i-1 ] by 0 and C[ n+1, n] by 0
2. Fill up C[i ,i ] by p[i]
3. Fill up C[i ,j] using formula

EXAMPLE
Obtain a optimal binary search tree for following nodes (do, if , int ,while) with following
probabilities ( 0.1, 0.2 , 0.4, 0.3)

Solution: There are 4 nodes.

Hence n = 4.

a[i]:  do    if    int   while
p[i]:  0.1   0.2   0.4   0.3

Initial tables: the cost table C has rows 1 to 5 and columns 0 to 4; the root table R has rows 1 to 4 and
columns 1 to 4.
Cost Table
C[ 1, 0] = 0
C[ 2, 1] = 0
C[ 3, 2] = 0 using formulae C[ i, i – 1] =0 and C[ n + 1,n] =0
C[ 4, 3] = 0
C[ 5, 4] = 0

C[ 1, 1] = 0.1
C[ 2, 2] = 0.2 using formulae C[ i, i ] =p[i]
C[ 3, 3] = 0.4
C[ 4, 4] = 0.3

The root table


R[1, 1] =1
R[2, 2] =2 using formulae R[ i, i ] =i
R[3, 3] =3
R[4, 4] =4

Now let us compute C[i, j] diagonally using the formula

C[i,j] = min over i ≤ k ≤ j of { C[i,k−1] + C[k+1,j] } + Σ (s = i to j) p[s]   --- (1)

Compute C[ 1, 2 ]
The value of k can be either 1 or 2
Let i = 1 , j = 2 , use formula in equation (1),
k= 1
C[1,2]= C[ 1,0] + C[2 ,2] + p[1] +p[2]
= 0 + 0.2 + 0.1 + 0.2
= 0.5
k=2
C[1,2]=C[ 1,1] + C[3 ,2] + p[1] +p[2]
= 0.1 + 0 + 0.1 + 0.2
= 0.4-> minimum value therefore consider k = 2
Therefore in cost table C[1,2] = 0.4 and R[1,2]=2

Compute C[2, 3]
The value of k can be 2 or 3.
Let i = 2, j = 3, use formula equation (1)
k=2
C[2,3] =C[ 2,1] + C[3 ,3] + p[2] +p[3]
=0 + 0.4+ 0.2+ 0.4
=1.0
k=3
C[2,3]= C[ 2,2] + C[4 ,3] + p[2] +p[3]
= 0.2 + 0 + 0.2 + 0.4
= 0.8 -> minimum value therefore consider k = 3
Therefore in cost table C[2,3] = 0.8 and R[2,3]=3

Compute C[3, 4]
The value of k can be 3 or 4.
Let , i = 3, j = 4, use formula equation (1)
k=3
C[3,4]= C[ 3,2] + C[4 ,4] + p[3] +p[4]
= 0 + 0.3+ 0.4+ 0.3
= 1.0 -> minimum value therefore consider k = 3
k=4
C[3,4]= C[ 3,3] + C[5 ,4] + p[3] +p[4]
= 0.4 + 0 + 0.4 + 0.3= 1.1
Therefore in cost table C[3,4] = 1.0 and R[3,4]=3
The table contains values obtained upto this calculations

Cost Table
        0     1     2     3     4
  1     0    0.1   0.4
  2           0    0.2   0.8
  3                 0    0.4   1.0
  4                       0    0.3
  5                             0

Root Table
        1     2     3     4
  1     1     2
  2           2     3
  3                 3     3
  4                       4

Compute C[1,3]
The value of k can be 1,2 or 3
Consider i=1 , j=3.
k=1
C[1,3] = C[ 1,0] + C[2 ,3] + p[1] +p[2] + p[3]
= 0 + 0.8 + 0.1 + 0.2 + 0.4
= 1.5

k =2

C[1,3]= C[ 1,1] + C[3 ,3] + p[1] +p[2] + p[3]


= 0.1 + 0.4 + 0.1 + 0.2 + 0.4
= 1.2
k =3
C[1,3]= C[ 1,2] + C[4 ,3] + p[1] +p[2] + p[3]
= 0 .4+ 0 + 0.1 + 0.2 + 0.4
= 1.1-> minimum value therefore consider k = 3
Therefore C[1,3] = 1.1 and R[1,3] = 3

Compute C [2,4]
The value of k can be 2,3 or 4
Consider i=2, j=4 and using equation (1)
k=2
C[2,4]= C[ 2,1] + C[3 ,4] + p[2] +p[3] + p[4]
= 0 + 1.0 + 0.2 + 0.4 + 0.3
= 1.9
k =3
C[2,4]= C[ 2,2] + C[4 ,4] + p[2] +p[3] + p[4]
= 0.2 + 0.3 + 0.2 + 0.4 + 0.3
= 1.4 -> minimum value therefore consider k = 3
k =4
C[2,4]= C[ 2,3] + C[5 ,4] + p[2] +p[3] + p[4]
= 0 .8+ 0 + 0.2 + 0.4 + 0.3
= 1.7
Therefore C[2,4]= 1.4 and R[2,4]= 3
Cost Table
        0     1     2     3     4
  1     0    0.1   0.4   1.1
  2           0    0.2   0.8   1.4
  3                 0    0.4   1.0
  4                       0    0.3
  5                             0

Root Table
        1     2     3     4
  1     1     2     3
  2           2     3     3
  3                 3     3
  4                       4

Compute C[1,4]
The value of k can be 1, 2, 3 or 4.
Consider i=1, j=4 and using equation (1)
k=1
C[1,4] = C[ 1,0] + C[2 ,4] + p[1] + p[2] +p[3] + p[4]
= 0 + 1.4 + 0.1+ 0.2 + 0.4 +0.3
= 2.4

k =2
C[1,4]= C[ 1,1] + C[3 ,4] + p[1] + p[2] +p[3] + p[4]
= 0.1 + 1.0 + 0.1 + 0.2 + 0.4 +0.3
= 2.1
k =3
C[1,4]= C[ 1,2] + C[4 ,4] + p[1] + p[2] +p[3] + p[4]
= 0.4 + 0.3 + 0.1 + 0.2 + 0.4 +0.3
= 1.7-> minimum value therefore consider k = 3

k=4
C[1,4]= C[1,3] + C[5,4] + p[1] + p[2] +p[3] + p[4]
= 1.1 + 0 + 0.1 + 0.2 + 0.4 +0.3
= 2.1
Therefore C[1,4]= 1.7 and R[1,4]= 3

Cost Table
        0     1     2     3     4
  1     0    0.1   0.4   1.1   1.7
  2           0    0.2   0.8   1.4
  3                 0    0.4   1.0
  4                       0    0.3
  5                             0

Root Table
        1     2     3     4
  1     1     2     3     3
  2           2     3     3
  3                 3     3
  4                       4
To build the tree, R[1][n] = R[1][4] = 3 becomes the root.

Key index:  1    2    3    4
a[i]:       do   if   int  while

Key 3 means "int".

Therefore "int" becomes the root of the optimal binary search tree.
The tree is built with i = 1, j = 4 and k = 3, the remaining keys being split between the left and right
subtrees using the root table.

The resulting tree has optimum cost C[1,4] = 1.7.

ALGORITHM
ALGORITHM OptimalBST(P[1..n])
//Finds an optimal binary search tree by dynamic programming
//Input: An array P[1..n] of search probabilities for a sorted list of n keys
//Output: Average number of comparisons in successful searches in the
//optimal BST and table R of subtrees' roots in the optimal BST
for i ← 1 to n do
{
    C[i, i − 1] ← 0
    C[i, i] ← P[i]
    R[i, i] ← i
}
C[n + 1, n] ← 0
for d ← 1 to n − 1 do   //diagonal count
{
    for i ← 1 to n − d do
    {
        j ← i + d
        minval ← ∞
        for k ← i to j do
        {
            if C[i, k − 1] + C[k + 1, j] < minval then
            {
                minval ← C[i, k − 1] + C[k + 1, j]
                kmin ← k
            }
        }
        R[i, j] ← kmin
        sum ← P[i]
        for s ← i + 1 to j do
            sum ← sum + P[s]
        C[i, j] ← minval + sum
    }
}
return C[1, n], R

Analysis:

The basic operation in the above algorithm is the computation of C[i,j] by finding the minimum-valued k.
This operation is located within three nested for loops, hence the time complexity C(n) can be written as

C(n) = Σ (d = 1 to n−1) Σ (i = 1 to n−d) Σ (k = i to i+d) 1 ≈ n³

C(n) = Θ(n³)
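A Python sketch of OptimalBST, checked against the example above (p holds the probabilities of the keys do, if, int, while; the names are illustrative):

import math

def optimal_bst(p):
    """p[1..n] are success probabilities of the sorted keys (p[0] unused).
    Returns the minimum average comparison count and the root table R."""
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                      # diagonal count
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = math.inf, i
            for k in range(i, j + 1):
                if C[i][k - 1] + C[k + 1][j] < minval:
                    minval = C[i][k - 1] + C[k + 1][j]
                    kmin = k
            R[i][j] = kmin
            C[i][j] = minval + sum(p[i:j + 1])
    return C[1][n], R

cost, root = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])
print(round(cost, 1), root[1][4])   # 1.7 3  -> key 3 ("int") is the root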
Pondicherry University Questions

2 Marks

1 Define dynamic programming. (UQ APRIL’12) (Ref.Pg.No.1 Qn.No.2)


2. Write the general procedure of dynamic programming. (UQ APRIL’13)
(Ref.Pg.No.1 Qn.No.5)
3. List different traversal technique in graph (UQ APRIL’12) (Ref.Pg.No.77 Qn.No.2)
4. What is biconnected component? (UQ Nov’12) (Ref.Pg.No.2 Qn.No.15)
5. Write about multistage graph with example (UQ APRIL’13 & Nov’12,Apr/May’14) (Ref.Pg.No.2
Qn.No.17)
6. What is all pair shortest path problem?(APRIL/MAY’14) (Ref.Pg.No.2 Qn.No.18)

11 Marks

1. Explain Travelling salesman problem (UQ APRIL’13 & NOV’12) (Ref.Pg.No.13 Qn.No.4)
2. Explain chained matrix multiplication (UQ NOV’12 & NOV’10) (Ref.Pg.No.18 Qn.No.5)
3. Explain 0/1 knapsack problem (UQ APRIL’12) (Ref.Pg.No.21 Qn.No.6)
4. Explain basic search and traversal technique (UQ APRIL’13) (Ref.Pg.No.24 Qn.No.8)
5. Explain topological sorting (UQ NOV’10, APRIL/MAY’14) (Ref.Pg.No. 30 Qn.No.10)
6. Explain multistage graph with example (UQ :APRIL/MAY’14) (Ref.Pg.No.3,Qn.No.2)
7.Explain Biconnected components (UQ:APRIL/MAY’14) ) (Ref.Pg.No.26,Qn.No.9)
8. Solve the following instance of the 0/1, Knapsack problem given the knapsack capacity in W = 5 using
dynamic programming and explain it.

Item Weight Value


1 4 10
2 3 20
3 2 15
4 5 25

9. Solve the all-pair shortest-path problem for the digraph with the following weight matrix(16)
