Ch-8-Daa-Mb Students

Dynamic programming is an algorithm design technique for solving problems with overlapping subproblems by storing solutions to already solved subproblems in a table to avoid recomputing them. It involves breaking problems down into smaller subproblems and storing the solutions to subproblems to build up the solution to the original problem. Examples of dynamic programming algorithms include Warshall's algorithm for finding the transitive closure of a relation and Floyd's algorithm for finding all-pairs shortest paths in a weighted graph.


Dynamic Programming

• Dynamic Programming is a general algorithm


design technique for solving problems with
overlapping subproblems.
• “Programming” here means “planning”
• Main idea:
• solve several smaller (overlapping) subproblems
• record solutions in a table so that each
subproblem is only solved once
• final state of the table will be (or contain)
solution
Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) [defined by recurrence relation]

• Computing the nth Fibonacci number recursively (top-down):

                      f(n)
                 /           \
             f(n-1)         f(n-2)
             /    \         /    \
         f(n-2) f(n-3)  f(n-3) f(n-4)

Note the overlapping subproblems: f(n-2), f(n-3), and all smaller values are recomputed many times.
Example: Fibonacci numbers
Computing the nth Fibonacci number using bottom-up
iteration:
• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5…………..

• f(n) = f(n-1) + f(n-2)

0 | 1 | 1 | ... | f(n-2) | f(n-1) | f(n)
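The bottom-up scheme above can be sketched in Python (the function name `fib` and the list used as the table are my own choices, not from the slides):

```python
def fib(n):
    """Bottom-up dynamic programming: fill a table f[0..n] left to right."""
    if n < 2:
        return n
    f = [0] * (n + 1)  # table of subproblem solutions
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # each subproblem is solved exactly once
    return f[n]
```

Because each table entry is computed once from two earlier entries, this runs in O(n) time, versus the exponential time of the naive top-down recursion.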
Examples of Dynamic Programming Algorithms

• Warshall’s algorithm for transitive closure

• Floyd’s algorithm for all-pairs shortest paths

Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation

• (Alternatively: which pairs of vertices in a directed graph are connected by a path)

• Example of transitive closure:

Digraph with vertices 1–4 and edges 1→3, 2→1, 2→4, 4→2:

Adjacency matrix A        Transitive closure T
0 0 1 0                   0 0 1 0
1 0 0 1                   1 1 1 1
0 0 0 0                   0 0 0 0
0 1 0 0                   1 1 1 1
Adjacency Matrix and Transitive-Closure
Adjacency Matrix A = {aij} of a directed graph is a Boolean
matrix that has a 1 in its ith row and jth column if and only if
there is a directed edge from the ith vertex to the jth vertex.

Transitive closure of a directed graph with n vertices can be
defined as an n-by-n Boolean matrix T = {tij}, in which the element
in the ith row (1 ≤ i ≤ n) and jth column (1 ≤ j ≤ n) is 1 if there
exists a nontrivial directed path (i.e., a directed path of
positive length) from the ith vertex to the jth vertex;
otherwise, tij = 0.
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of
n-by-n matrices R(0), …, R(k), …, R(n), where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k
vertices (those numbered not higher than k) allowed as intermediates
k = 0: no intermediate vertices allowed
k = 1: only vertex 1 allowed as an intermediate
k = 2: only vertices 1 and 2 allowed as intermediates, …
Note that R(0) = A (the adjacency matrix) and
R(n) = T (the transitive closure).
The central point of the algorithm is that we can compute all the
elements of each matrix R(k) from its immediate predecessor R(k-1) in the
series.
Warshall’s Algorithm
This implies the following rules for generating R(k) from
R(k-1):
Rule 1: If an element in row i and column j is 1 in R(k-1),
it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1),
it has to be changed to 1 in R(k) if and only if the
element in its row i and column k and the element in its
column j and row k are both 1's in R(k-1).

Recurrence relating elements R(k) to elements of R(k-1) is:


R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
Warshall’s Algorithm

• In the kth stage, determine whether a path exists between
vertices i and j using only vertices among 1,…,k as intermediates:

R(k)[i,j] = R(k-1)[i,j]                        (path from i to j using just 1,…,k-1)
            or
            (R(k-1)[i,k] and R(k-1)[k,j])      (path from i to k and from k to j,
                                                each using just 1,…,k-1)
Warshall’s Algorithm: Transitive Closure

[Pseudocode figure omitted]
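A minimal Python sketch of Warshall's algorithm as described above (the function name and the list-of-lists matrix representation are my assumptions, not from the slides):

```python
def warshall(A):
    """Transitive closure of a digraph given by Boolean adjacency matrix A.

    Applies the recurrence
        R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
    for k = 1..n, updating one matrix in place.
    """
    n = len(A)
    R = [row[:] for row in A]  # R(0) = A; copy so A is not modified
    for k in range(n):         # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Running it on the 4-vertex example digraph above (edges 1→3, 2→1, 2→4, 4→2) reproduces the transitive closure matrix T shown there.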
Warshall’s Algorithm

Time efficiency: Θ(n³)
Floyd’s Algorithm:
All-Pairs Shortest Paths
• In a weighted graph, find the shortest paths between every
pair of vertices; the results are recorded in a distance matrix.

• Same idea: construct the solution through a series of matrices
D(0), D(1), …, using an initial subset of the vertices as
intermediaries.

• Example: [a weighted digraph on 4 vertices; figure omitted]
Digraph and its Weight Matrix

[Figure omitted: the example digraph and its weight matrix W, with ∞ marking absent edges]
Floyd’s Algorithm
On the kth iteration, the algorithm determines the shortest paths
between every pair of vertices i, j that use only vertices among
1,…,k as intermediates:

D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}


(Compare the old best path from i to j with the path that goes from i to k
and then from k to j, both computed with intermediates among 1,…,k-1.)
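A minimal Python sketch of Floyd's algorithm from the recurrence above (the function name and use of `float("inf")` for absent edges are my assumptions):

```python
INF = float("inf")  # stands in for the ∞ entries of the weight matrix

def floyd(W):
    """All-pairs shortest path distances from weight matrix W.

    Applies D(k)[i,j] = min(D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j])
    for k = 1..n, updating one matrix in place.
    Assumes no negative-weight cycles.
    """
    n = len(W)
    D = [row[:] for row in W]  # D(0) = W; copy so W is not modified
    for k in range(n):         # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```

Note the identical triple-loop structure to Warshall's algorithm; only the elementwise operation changes (min/+ instead of or/and).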
Floyd’s Algorithm

Time efficiency: Θ(n³)
Greedy Approach
Construct a solution through a sequence of
steps, each expanding the partially constructed
solution obtained so far, until a complete
solution to the problem is reached. On each
step the following requirements should be met:

Feasible: it has to satisfy the problem's constraints

Locally optimal: it has to be the best local choice
Irrevocable: once made, it cannot be changed on
subsequent steps of the algorithm
Minimum Spanning Trees
• Spanning tree of a connected graph G is a connected
acyclic subgraph of G that includes all of G’s vertices
• Minimum spanning tree of a weighted, connected
graph G is a spanning tree of G of minimum total weight,
where the weight of a tree is defined as the sum of the
weights of all its edges.
Prim's Algorithm
Given n points, connect them in the cheapest
possible way so that there will be a path between
every pair of points. We can represent the points by
vertices of a graph, possible connections by edges
of the graph, and connection costs by edge weights.

Prim's algorithm constructs a minimum spanning
tree through a sequence of expanding subtrees. The
initial subtree in such a sequence consists of a single
vertex selected arbitrarily from the set V of the
graph's vertices. On each iteration, we expand the
current tree by attaching to it the nearest vertex not
in that tree.
Prim's Algorithm
Start with tree T1 consisting of one (any) vertex and
"grow" the tree one vertex at a time to produce the MST through
a series of expanding subtrees T1, T2, …, Tn

On each iteration, construct Ti+1 from Ti by adding the vertex
not in Ti that is closest to those already in Ti (this is the
"greedy" step!)

Stop when all vertices are included
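A sketch of Prim's algorithm in Python using a priority queue to find the nearest outside vertex (the adjacency-dict representation and the names `prim`, `in_tree` are my own; the sample graph in the usage is hypothetical):

```python
import heapq

def prim(graph, start):
    """Grow an MST from `start`, repeatedly attaching the nearest vertex
    not yet in the tree (the greedy step).

    `graph` maps each vertex to a dict {neighbor: weight} (undirected).
    Returns (list of MST edges as (u, v, weight), total weight).
    """
    in_tree = {start}
    mst_edges = []
    total = 0
    # candidate edges (weight, tree_vertex, outside_vertex)
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale entry: v was attached via a cheaper edge
        in_tree.add(v)
        mst_edges.append((u, v, w))
        total += w
        for x, wx in graph[v].items():  # new candidate edges from v
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst_edges, total
```

For example, on a 4-vertex graph with edges a-b(3), a-c(5), b-c(1), b-d(4), c-d(6), starting from a, the MST uses edges a-b, b-c, b-d with total weight 8.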
Example

[Figures omitted: Prim's algorithm traced on a sample graph]
Kruskal's Algorithm
Sort the edges in nondecreasing order of their
lengths

"Grow" the tree one edge at a time to produce the
MST through a series of expanding forests
F1, F2, …, Fn-1

On each iteration, add the next edge on the
sorted list unless this would create a cycle.
(If it would, skip the edge.)
Kruskal's Algorithm
// Returns the MST by Kruskal's algorithm
// Input: A weighted connected graph G = (V, E)
// Output: ET, the set of edges comprising an MST
sort the edges in E in nondecreasing order of their weights
ET ← ∅
while |ET| + 1 < |V| do
    e ← next edge in E
    if ET ∪ {e} does not have a cycle
        then ET ← ET ∪ {e}
return ET
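The pseudocode above can be sketched in Python; the cycle check is done with a union-find (disjoint-set) structure, which the slides only hint at. Function names and the edge representation are my assumptions:

```python
def kruskal(vertices, edges):
    """MST by Kruskal: scan edges in nondecreasing weight order, skipping
    any edge that would close a cycle.

    `edges` is a list of (weight, u, v) tuples.
    Returns the MST as a list of (weight, u, v) tuples.
    """
    parent = {v: v for v in vertices}  # union-find forest, each vertex its own root

    def find(v):
        """Root of v's component, with path halving."""
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):       # nondecreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: no cycle created
            parent[ru] = rv             # union the two components
            mst.append((w, u, v))
        if len(mst) == len(vertices) - 1:
            break                       # |ET| = |V| - 1: tree is complete
    return mst
```

On the same hypothetical 4-vertex graph as before (a-b 3, a-c 5, b-c 1, b-d 4, c-d 6), the edges b-c, a-b, b-d are accepted and a-c is skipped because it would close a cycle.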
Example of Kruskal's Algorithm

[Figures omitted: Kruskal's algorithm traced on a sample graph]
Efficiency
The algorithm looks easier than Prim's but is
harder to implement (checking for cycles!).
It repeatedly chooses the smallest-weight
edge that does not form a cycle.
Kruskal's algorithm runs in O(E log V) time
using efficient cycle detection (union-find).
