
Algorithms for Problem Solving

1
Reference

Introduction to Algorithms
by T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein

2
Graphs
• A graph G = (V, E)
– V = set of vertices
– E = set of edges = subset of V × V
– Thus |E| = O(|V|²)

3
Graphs
• We will typically express running times in
terms of |E| and |V| (often dropping the |·|'s)
– If |E| ≈ |V|² the graph is dense
– If |E| ≈ |V| the graph is sparse
• If you know you are dealing with dense or
sparse graphs, different data structures may
make sense

4
Graph Searching
• Given: a graph G = (V, E), directed or
undirected
• Goal: methodically explore every vertex and
every edge
• Ultimately: build a tree on the graph
– Pick a vertex as the root
– Choose certain edges to produce a tree
– Note: might also build a forest if graph is not
connected

5
Representing Graphs
• Assume V = {1, 2, …, n}
• An adjacency matrix represents the graph as an
n × n matrix A:
– A[i, j] = 1 if edge (i, j) ∈ E (or the weight of the edge)
= 0 if edge (i, j) ∉ E

6
Graphs: Adjacency Matrix
• Example:
[Figure: a directed graph on vertices 1–4 with edges labeled a, b, c, d]

A | 1 2 3 4
--+--------
1 | ? ? ? ?
2 | ? ? ? ?
3 | ? ? ? ?
4 | ? ? ? ?

7
Graphs: Adjacency Matrix
• Example:
[Figure: the same directed graph]

A | 1 2 3 4
--+--------
1 | 0 1 1 0
2 | 0 0 1 0
3 | 0 0 0 0
4 | 0 0 1 0

8
Graphs: Adjacency Matrix
• How much storage does the adjacency matrix
require?
• A: O(V²)
• What is the minimum amount of storage
needed by an adjacency matrix representation
of an undirected graph with 4 vertices?
• A: 6 bits
– Undirected graph → matrix is symmetric
– No self-loops → don’t need diagonal

9
Graphs: Adjacency Matrix
• The adjacency matrix is a dense
representation
– Usually too much storage for large graphs
– But can be very efficient for small graphs
• Most large interesting graphs are sparse
– E.g., planar graphs, in which no edges cross, have
|E| = O(|V|) by Euler’s formula
– For this reason the adjacency list is often a more
appropriate representation

10
Graphs: Adjacency List
• Adjacency list: for each vertex v ∈ V, store a
list of vertices adjacent to v
• Example (for the directed graph above):
– Adj[1] = {2,3}
– Adj[2] = {3}
– Adj[3] = {}
– Adj[4] = {3}
• Variation: can also keep a list of edges coming
into each vertex
11
Graphs: Adjacency List
• How much storage is required?
– The degree of a vertex v = # incident edges
• Directed graphs have in-degree, out-degree
– For directed graphs, the # of items in the adjacency lists is
Σ out-degree(v) = |E|
so they take Θ(V + E) storage (Why?)
– For undirected graphs, the # of items is
Σ degree(v) = 2|E| (handshaking lemma)
so also Θ(V + E) storage
• So: Adjacency lists take O(V+E) storage
12
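The two representations are easy to compare concretely. Below is a minimal sketch in Python (the slides use pseudocode; Python is chosen here for brevity, and the names n, edges, A, and Adj are mine) for the directed graph of slides 7–8:

    n = 4
    edges = [(1, 2), (1, 3), (2, 3), (4, 3)]    # directed edges, read off slide 8's matrix

    # Adjacency matrix: Theta(V^2) storage regardless of |E|
    A = [[0] * (n + 1) for _ in range(n + 1)]   # row/col 0 unused; 1-indexed like the slides
    for (i, j) in edges:
        A[i][j] = 1

    # Adjacency list: Theta(V + E) storage
    Adj = {v: [] for v in range(1, n + 1)}
    for (i, j) in edges:
        Adj[i].append(j)

    print(A[1][3], Adj[4])    # 1 [3]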
Graph Searching
• Given: a graph G = (V, E), directed or
undirected
• Goal: methodically explore every vertex and
every edge
• Ultimately: build a tree on the graph
– Pick a vertex as the root
– Choose certain edges to produce a tree
– Note: might also build a forest if graph is not
connected

13
Breadth-First Search
• “Explore” a graph, turning it into a tree
– One vertex at a time
– Expand frontier of explored vertices across the
breadth of the frontier
• Builds a tree over the graph
– Pick a source vertex to be the root
– Find (“discover”) its children, then their children,
etc.

14
Breadth-First Search
• Again will associate vertex “colors” to guide the
algorithm
– White vertices have not been discovered
• All vertices start out white
– Grey vertices are discovered but not fully explored
• They may be adjacent to white vertices
– Black vertices are discovered and fully explored
• They are adjacent only to black and grey vertices
• Explore vertices by scanning adjacency list of grey
vertices

15
Breadth-First Search
BFS(G, s) {
    initialize vertices;            // all white, d = ∞
    Q = {s};                        // Q is a queue; initialize to s
    while (Q not empty) {
        u = RemoveTop(Q);
        for each v ∈ u->adj {
            if (v->color == WHITE) {
                v->color = GREY;
                v->d = u->d + 1;    // What does v->d represent?
                v->p = u;           // What does v->p represent?
                Enqueue(Q, v);
            }
        }
        u->color = BLACK;
    }
}

16
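Before the trace, here is a runnable Python sketch of BFS (my adaptation, not the slides' code; the graph below is my reconstruction of the r..y example, whose exact edges aren't fully recoverable, but the distances it produces match the trace):

    from collections import deque

    def bfs(adj, s):
        d = {s: 0}          # d[v] = edges on a shortest path from s to v
        p = {s: None}       # p[v] = v's parent in the breadth-first tree
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for v in adj[u]:
                if v not in d:          # "white": not yet discovered
                    d[v] = d[u] + 1     # v->d = u->d + 1
                    p[v] = u            # v->p = u
                    Q.append(v)
        return d, p

    # Undirected graph consistent with the following trace (assumed edges)
    adj = {'r': ['s', 'v'], 's': ['r', 'w'], 'v': ['r'],
           'w': ['s', 't', 'x'], 't': ['u', 'w', 'x'],
           'x': ['t', 'u', 'w', 'y'], 'u': ['t', 'x', 'y'], 'y': ['u', 'x']}
    d, _ = bfs(adj, 's')
    print(d)   # r=1 s=0 w=1 t=2 x=2 v=2 u=3 y=3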
Breadth-First Search: Example
r: ∞  s: ∞  t: ∞  u: ∞
v: ∞  w: ∞  x: ∞  y: ∞

17
Breadth-First Search: Example
r: ∞  s: 0  t: ∞  u: ∞
v: ∞  w: ∞  x: ∞  y: ∞

Q: s
18
Breadth-First Search: Example
r: 1  s: 0  t: ∞  u: ∞
v: ∞  w: 1  x: ∞  y: ∞

Q: w r
19
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: ∞
v: ∞  w: 1  x: 2  y: ∞

Q: r t x
20
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: ∞
v: 2  w: 1  x: 2  y: ∞

Q: t x v
21
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: 3
v: 2  w: 1  x: 2  y: ∞

Q: x v u
22
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: 3
v: 2  w: 1  x: 2  y: 3

Q: v u y
23
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: 3
v: 2  w: 1  x: 2  y: 3

Q: u y
24
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: 3
v: 2  w: 1  x: 2  y: 3

Q: y
25
Breadth-First Search: Example
r: 1  s: 0  t: 2  u: 3
v: 2  w: 1  x: 2  y: 3

Q: Ø
26
BFS: The Code Again
BFS(G, s) {
    initialize vertices;      // Touch every vertex: O(V)
    Q = {s};
    while (Q not empty) {
        u = RemoveTop(Q);     // u = every vertex, but only once (Why?)
        for each v ∈ u->adj { // so v = every vertex that appears in some
            if (v->color == WHITE) {   // other vertex's adjacency list
                v->color = GREY;
                v->d = u->d + 1;
                v->p = u;
                Enqueue(Q, v);
            }
        }
        u->color = BLACK;
    }
}
What will be the running time?
Total running time: O(V+E)
27
BFS: The Code Again
BFS(G, s) {
    initialize vertices;
    Q = {s};
    while (Q not empty) {
        u = RemoveTop(Q);
        for each v ∈ u->adj {
            if (v->color == WHITE) {
                v->color = GREY;
                v->d = u->d + 1;
                v->p = u;
                Enqueue(Q, v);
            }
        }
        u->color = BLACK;
    }
}
What will be the storage cost in addition to storing the tree?
Total space used: O(max(degree(v))) = O(E)
28
Breadth-First Search: Properties
• BFS calculates the shortest-path distance to
the source node
– Shortest-path distance δ(s,v) = minimum number
of edges from s to v, or ∞ if v not reachable from s
• BFS builds breadth-first tree, in which paths to
root represent shortest paths in G
– Thus can use BFS to calculate shortest path from
one vertex to another in O(V+E) time

29
Depth-First Search
• Depth-first search is another strategy for
exploring a graph
– Explore “deeper” in the graph whenever possible
– Edges are explored out of the most recently
discovered vertex v that still has unexplored edges
– When all of v’s edges have been explored,
backtrack to the vertex from which v was
discovered

30
DFS Code
DFS(G)
{
    for each vertex u ∈ G->V
    {
        u->color = WHITE;
    }
    time = 0;
    for each vertex u ∈ G->V
    {
        if (u->color == WHITE)
            DFS_Visit(u);
    }
}

DFS_Visit(u)
{
    u->color = GREY;
    time = time + 1;
    u->d = time;
    for each v ∈ u->Adj[]
    {
        if (v->color == WHITE)
            DFS_Visit(v);
    }
    u->color = BLACK;
    time = time + 1;
    u->f = time;
}

31
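A runnable Python counterpart (my adaptation; the helper names and the example graph are mine) that records the discovery and finish times d[u] and f[u]:

    def dfs(adj):
        color = {u: 'WHITE' for u in adj}
        d, f = {}, {}       # discovery / finish times
        time = [0]          # boxed int so the nested function can update it

        def visit(u):
            color[u] = 'GREY'
            time[0] += 1
            d[u] = time[0]
            for v in adj[u]:
                if color[v] == 'WHITE':
                    visit(v)
            color[u] = 'BLACK'
            time[0] += 1
            f[u] = time[0]

        for u in adj:                    # restart from every undiscovered vertex
            if color[u] == 'WHITE':
                visit(u)
        return d, f

    adj = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
    d, f = dfs(adj)
    print(d, f)   # a's interval [1, 8] encloses those of its descendants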
Single-Source Shortest Path
• Problem: given a weighted directed graph G,
find the minimum-weight path from a given
source vertex s to another vertex v
– “Shortest-path” = minimum weight
– Weight of path is sum of edges
– E.g., a road map: what is the shortest path from
Kolkata to Jaipur?

32
Shortest Path Properties
• In graphs with negative weight cycles, some
shortest paths will not exist (Why?):
[Figure: a path from s through a cycle whose total weight is < 0]

33
Relaxation
• A key technique in shortest path algorithms is
relaxation
– Idea: for all v, maintain an upper bound d[v] on δ(s,v)
Relax(u,v,w) {
    if (d[v] > d[u]+w) then d[v] = d[u]+w;
}
Example: with d[u] = 5 and w(u,v) = 2, relaxing
lowers d[v] = 9 to 7, while d[v] = 6 is left unchanged.
34
Bellman-Ford Algorithm
BellmanFord()
    for each v ∈ V               // Initialize d[], which will
        d[v] = ∞;                //   converge to shortest-path values
    d[s] = 0;
    for i = 1 to |V|-1           // Relaxation: make |V|-1 passes,
        for each edge (u,v) ∈ E  //   relaxing each edge
            Relax(u, v, w(u,v));
    for each edge (u,v) ∈ E      // Test for solution: under what
        if (d[v] > d[u] + w(u,v))//   condition do we get a solution?
            return "no solution";

Relax(u,v,w): if (d[v] > d[u]+w) then d[v] = d[u]+w

35
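A runnable sketch in Python (my adaptation; the edge-list format and names are mine). It returns None exactly when the final test fails, i.e., a negative-weight cycle is reachable from s:

    def bellman_ford(vertices, edges, s):
        INF = float('inf')
        d = {v: INF for v in vertices}
        d[s] = 0
        for _ in range(len(vertices) - 1):       # |V|-1 relaxation passes
            for (u, v, w) in edges:
                if d[u] + w < d[v]:              # Relax(u, v, w)
                    d[v] = d[u] + w
        for (u, v, w) in edges:                  # test for solution
            if d[u] + w < d[v]:
                return None                      # an edge is still relaxable: no solution
        return d

    edges = [('s', 'a', 2), ('a', 'b', -2), ('s', 'b', 1)]
    print(bellman_ford(['s', 'a', 'b'], edges, 's'))   # {'s': 0, 'a': 2, 'b': 0}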
DAG Shortest Paths
• Problem: finding shortest paths in a DAG
– Bellman-Ford takes O(VE) time.
– How can we do better?
– Idea: use topological sort
• If we were lucky and processed the vertices on each
shortest path from left to right, we would be done in one pass
• Every path in a dag is a subsequence of the topologically
sorted vertex order, so processing vertices in that order
handles each path in forward order (we never relax an edge
out of a vertex before relaxing all edges into it).
• Thus: just one pass. What will be the running time?
(See the sketch after this slide.)

36
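A sketch of the one-pass idea in Python (my adaptation, using Kahn's algorithm for the topological sort). The answer to the slide's question: O(V + E), since the sort and the single relaxation pass each touch every vertex and edge once:

    from collections import deque

    def dag_shortest_paths(adj, s):
        # adj: {u: [(v, w), ...]}, assumed acyclic
        indeg = {u: 0 for u in adj}
        for u in adj:
            for (v, _) in adj[u]:
                indeg[v] += 1
        q = deque(u for u in adj if indeg[u] == 0)
        order = []
        while q:                                   # Kahn's topological sort
            u = q.popleft()
            order.append(u)
            for (v, _) in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    q.append(v)
        d = {u: float('inf') for u in adj}
        d[s] = 0
        for u in order:                            # one pass: relax each edge once
            for (v, w) in adj[u]:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
        return d

    adj = {'s': [('a', 2), ('b', 6)], 'a': [('b', 3)], 'b': []}
    print(dag_shortest_paths(adj, 's'))   # {'s': 0, 'a': 2, 'b': 5}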
Dijkstra’s Algorithm
• If no negative edge weights, we can beat BFS
• Similar to breadth-first search
– Grow a tree gradually, advancing from vertices
taken from a queue
• Also similar to Prim’s algorithm for MST
– Use a priority queue keyed on d[v]

37
Dijkstra’s Algorithm
Dijkstra(G)
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0; S = ∅; Q = V;
    while (Q ≠ ∅)
        u = ExtractMin(Q);
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u] + w(u,v))    // relaxation step; note: this is
                d[v] = d[u] + w(u,v);    //   really a call to Q->DecreaseKey()

[Figure: example graph on vertices A–D with edge weights 10, 4, 3, 2, 5, 1]
Ex: run the algorithm
38
Dijkstra’s Algorithm
Dijkstra(G)
    for each v ∈ V                   // How many times is
        d[v] = ∞;                    //   ExtractMin() called?
    d[s] = 0; S = ∅; Q = V;
    while (Q ≠ ∅)                    // How many times is
        u = ExtractMin(Q);           //   DecreaseKey() called?
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u] + w(u,v))
                d[v] = d[u] + w(u,v);

What will be the total running time?
39
Dijkstra’s Algorithm
Dijkstra(G)
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0; S = ∅; Q = V;
    while (Q ≠ ∅)
        u = ExtractMin(Q);           // called |V| times
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u] + w(u,v))// DecreaseKey() called O(E) times
                d[v] = d[u] + w(u,v);
A: O(E lg V) using binary heap for Q
Can achieve O(V lg V + E) with Fibonacci heaps
40
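A runnable Python sketch (my adaptation). Python's heapq has no DecreaseKey, so this uses the common lazy-deletion variant: push a fresh entry on every relaxation and skip stale ones at extraction; the bound is still O(E lg V). The A–D graph below is only loosely based on the slide's figure, whose exact edges aren't recoverable:

    import heapq

    def dijkstra(adj, s):
        # adj: {u: [(v, w), ...]}, all weights non-negative
        d = {u: float('inf') for u in adj}
        d[s] = 0
        S = set()                          # finalized vertices
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)      # ExtractMin
            if u in S:
                continue                   # stale heap entry
            S.add(u)
            for (v, w) in adj[u]:
                if du + w < d[v]:          # relaxation step
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))   # stands in for DecreaseKey
        return d

    adj = {'A': [('B', 10), ('C', 5)], 'B': [('D', 2)],
           'C': [('B', 4), ('D', 1)], 'D': []}
    print(dijkstra(adj, 'A'))   # {'A': 0, 'B': 9, 'C': 5, 'D': 6}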
Dijkstra’s Algorithm
Dijkstra(G)
    for each v ∈ V
        d[v] = ∞;
    d[s] = 0; S = ∅; Q = V;
    while (Q ≠ ∅)
        u = ExtractMin(Q);
        S = S ∪ {u};
        for each v ∈ u->Adj[]
            if (d[v] > d[u] + w(u,v))
                d[v] = d[u] + w(u,v);

Correctness: we must show that when u is
removed from Q, it has already converged
41
Correctness Of Dijkstra's Algorithm
[Figure: source s, the set S, vertex u, and the first vertex y outside S on a shortest s→u path, with y's predecessor x ∈ S]

• Note that d[v] ≥ δ(s,v) for all v
• Let u be the first vertex picked such that there is a
shorter path than d[u], i.e., d[u] > δ(s,u)
• Let y be the first vertex in V−S on the actual shortest
path from s to u; then d[y] = δ(s,y)
– Because d[x] is set correctly for y's predecessor x ∈ S
on the shortest path, and
– When we put x into S, we relaxed (x,y), giving d[y] the
correct value
42
Djikstra’s Algorithm
with Negative Weights
• What will happen if Djikstra’s algorithm is run
a graph with negative weight edges?

s 1
b

2 -2

43
Djikstra’s Algorithm with
Negative Weights
• Actual shortest path to b is of cost zero but
not discovered by Djikstra’s algorithm.

s 1
b

2 -2

44
Disjoint-Set Union Problem
• Want a data structure to support disjoint sets
– Collection of disjoint sets S = {Si}, Si ∩ Sj = ∅ for i ≠ j
• Need to support the following operations:
– MakeSet(x): S = S ∪ {{x}}
– Union(Si, Sj): S = S − {Si, Sj} ∪ {Si ∪ Sj}
– FindSet(x): return the Si ∈ S such that x ∈ Si
• Before discussing implementation details, we look
at an example application: MSTs (one standard
implementation is sketched below)

45
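One standard implementation, sketched in Python (my code; the slides don't specify one): a forest with union by rank and path compression, giving near-constant amortized time per operation:

    class DisjointSet:
        def __init__(self):
            self.parent = {}
            self.rank = {}

        def make_set(self, x):              # MakeSet(x)
            self.parent[x] = x
            self.rank[x] = 0

        def find_set(self, x):              # FindSet(x)
            if self.parent[x] != x:
                self.parent[x] = self.find_set(self.parent[x])   # path compression
            return self.parent[x]

        def union(self, x, y):              # Union of the sets containing x and y
            rx, ry = self.find_set(x), self.find_set(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:    # union by rank: attach shorter tree
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1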
Kruskal’s Algorithm
Kruskal()
{
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}

46
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

47
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

48
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1?
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

49
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

50
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2? 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

51
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

52
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5?
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

53
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

54
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8? 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

55
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

56
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9?
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

57
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

58
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13? 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

59
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

60
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14? 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

61
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

62
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17?
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

63
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19?
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

64
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21? 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

65
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25?
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

66
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

67
Kruskal’s Algorithm
Run the algorithm:
Kruskal() 2 19
{ 9
14 17
T = ; 8 25
5
for each v  V
MakeSet(v); 21 13 1
sort E by increasing edge weight w
for each (u,v)  E (in sorted order)
if FindSet(u)  FindSet(v)
T = T U {{u,v}};
Union(FindSet(u), FindSet(v));
}

68
Kruskal’s Algorithm
Kruskal()                      What will affect the running time?
{
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}
69
69
Kruskal’s Algorithm
Kruskal()                      What will affect the running time?
{                              1 Sort
    T = ∅;                     O(V) MakeSet() calls
    for each v ∈ V             O(E) FindSet() calls
        MakeSet(v);            O(V) Union() calls
    sort E by increasing edge weight w    (Exactly how many Union()s?)
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}
70
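Putting it together, a runnable sketch (my adaptation, reusing the DisjointSet class sketched after slide 45). Sorting dominates at O(E lg E) = O(E lg V), and there are exactly |V| − 1 successful Union()s on a connected graph:

    def kruskal(vertices, edges):
        # edges: list of (w, u, v) triples; returns the MST edge list
        ds = DisjointSet()
        for v in vertices:                        # O(V) MakeSet() calls
            ds.make_set(v)
        T = []
        for (w, u, v) in sorted(edges):           # sort E by increasing weight
            if ds.find_set(u) != ds.find_set(v):  # O(E) FindSet() calls
                T.append((u, v, w))
                ds.union(u, v)
        return T

    edges = [(4, 'a', 'b'), (8, 'a', 'h'), (8, 'b', 'c'), (2, 'c', 'f')]
    print(kruskal(['a', 'b', 'c', 'f', 'h'], edges))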
Hash Tables
• Motivation: symbol tables
– A compiler uses a symbol table to relate symbols
to associated data
• Symbols: variable names, procedure names, etc.
• Associated data: memory location, call graph, etc.
– For a symbol table (also called a dictionary), we
care about search, insertion, and deletion
– We typically don’t care about sorted order

71
Hash Tables
• More formally:
– Given a table T and a record x, with key (= symbol)
and satellite data, we need to support:
• Insert (T, x)
• Delete (T, x)
• Search(T, x)
– We want these to be fast, but don't care about
keeping the records sorted
• The structure we will use is a hash table
– Supports all the above in O(1) expected time!
72
Direct Addressing
• Suppose:
– The range of keys is 0..m-1
– Keys are distinct
• The idea:
– Set up an array T[0..m-1] in which
• T[i] = x if x ∈ T and key[x] = i
• T[i] = NULL otherwise
– This is called a direct-address table
• Operations take O(1) time!
• So what’s the problem?

73
The Problems
• Direct addressing works well when the range
m of keys is relatively small
• But what if the keys are 32-bit integers?
– Problem 1: direct-address table will have
232 entries, more than 4 billion
– Problem 2: even if memory is not an issue, the
time to initialize the elements to NULL may be
• Solution: map keys to smaller range 0..m-1
• This mapping is called a hash function

74
Hash Functions
• Next problem: collision
[Figure: a hash function h maps keys from the universe U (actual keys K = {k1, …, k5}) into slots 0..m-1 of table T; here h(k2) = h(k5) — a collision]
75
Resolving Collisions
• How can we solve the problem of collisions?
• Solution 1: chaining
• Solution 2: open addressing

76
Open Addressing
• Basic idea:
– To insert: if slot is full, try another slot, …, until an
open slot is found (probing)
– To search, follow same sequence of probes as would
be used when inserting the element
• If reach element with correct key, return it
• If reach a NULL pointer, element is not in table
• Good for fixed sets (adding but no deletion)
– Example: spell checking
• Table needn’t be much bigger than n

77
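A minimal sketch of open addressing with linear probing in Python (my code and names; the slides don't fix a probe sequence). It supports insert and search only, matching the fixed-set use case above:

    class OpenAddressTable:
        def __init__(self, m=16):
            self.m = m
            self.keys = [None] * m
            self.vals = [None] * m

        def _probe(self, key):
            i = hash(key) % self.m
            while True:
                yield i
                i = (i + 1) % self.m        # linear probing: try the next slot

        def insert(self, key, value):       # assumes the table never fills up
            for i in self._probe(key):
                if self.keys[i] is None or self.keys[i] == key:
                    self.keys[i], self.vals[i] = key, value
                    return

        def search(self, key):
            for i in self._probe(key):
                if self.keys[i] is None:    # reached a NULL slot: not in table
                    return None
                if self.keys[i] == key:
                    return self.vals[i]

    t = OpenAddressTable()
    t.insert('apple', 1)
    print(t.search('apple'), t.search('pear'))   # 1 None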
Chaining
• Chaining puts elements that hash to the same
slot in a linked list:

[Figure: table T with chained slots, e.g., k1 → k4 in one slot, k5 → k2 → k7 in another, k8 → k6 in another, and k3 alone]
78
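A chained hash table sketched in Python (my code and names), with each slot holding a list of (key, value) pairs; with a good hash function, each operation runs in O(1) expected time:

    class ChainedHashTable:
        def __init__(self, m=8):
            self.m = m
            self.slots = [[] for _ in range(m)]   # one chain per slot

        def _h(self, key):
            return hash(key) % self.m             # hash into 0..m-1

        def insert(self, key, value):
            chain = self.slots[self._h(key)]
            for i, (k, _) in enumerate(chain):
                if k == key:
                    chain[i] = (key, value)       # key already present: overwrite
                    return
            chain.append((key, value))

        def search(self, key):
            for (k, v) in self.slots[self._h(key)]:
                if k == key:
                    return v
            return None

        def delete(self, key):
            i = self._h(key)
            self.slots[i] = [(k, v) for (k, v) in self.slots[i] if k != key]

    t = ChainedHashTable()
    t.insert('x', 42)
    print(t.search('x'))   # 42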
Design Paradigms
• Greedy
• Dynamic Programming
• Divide and Conquer

79
Greedy algorithms
• A greedy algorithm always makes the choice that
looks best at the moment
• The hope: a locally optimal choice will lead to a
globally optimal solution
• For some problems, it works
• Greedy algorithms tend to be easier to code
• Many examples in scheduling
• Huffman Coding

80
Divide and Conquer vs. Dynamic Programming

• Divide-and-conquer: a top-down approach.
– Many smaller instances are computed more than once.
• Dynamic programming: a bottom-up approach.
– Solutions for smaller instances are stored in a table for later use.
81
Why Dynamic Programming?

• Sometimes the natural way of dividing an instance,
suggested by the structure of the problem, leads us to
consider several overlapping subinstances.
• If we solve each of these independently, a large
number of identical subinstances result.
• If we pay no attention to this duplication, it is likely
that we will end up with an inefficient algorithm.

82
Bottom-up technique
• Avoid calculating the same thing twice,
usually by keeping a table of known results,
which we fill up as subinstances are solved.

• Dynamic programming is a bottom-up


technique.

• Memoization is a variant of dynamic


programming that offers the efficiency of
dynamic programming (by avoiding solving
common subproblems more than once) but
maintains a top-down flow
83
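A small illustration in Python (my example; the slides name no particular problem): Fibonacci numbers computed bottom-up with an explicit table, and top-down with memoization:

    from functools import lru_cache

    def fib_bottom_up(n):
        # Dynamic programming: fill the table of known results from the bottom
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    @lru_cache(maxsize=None)
    def fib_memo(n):
        # Memoization: top-down flow, but each subproblem is solved only once
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_bottom_up(30), fib_memo(30))   # 832040 832040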
Minimum Spanning Tree
• Problem: given a connected, undirected,
weighted graph, find a spanning tree using
edges that minimize the total weight
[Figure: example undirected graph with edge weights 6, 4, 5, 9, 14, 2, 10, 15, 3, 8]
84
Minimum Spanning Tree
• Which edges form the minimum spanning tree
(MST) of the below graph?

[Figure: graph on vertices A–H with edge weights 6, 4, 5, 9, 14, 2, 10, 15, 3, 8]
85
85
Minimum Spanning Tree
• Answer:

A
6 4
5 9
H B C

14 2
10
15
G E D
3 8
F
86
Prim’s Algorithm
MST-Prim(G, w, r)
    Q = V[G];
    for each u ∈ Q
        key[u] = ∞;
    key[r] = 0;
    p[r] = NULL;
    while (Q not empty)
        u = ExtractMin(Q);
        for each v ∈ Adj[u]
            if (v ∈ Q and w(u,v) < key[v])
                p[v] = u;
                key[v] = w(u,v);

87
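A runnable Python sketch of MST-Prim (my adaptation): as with Dijkstra above, heapq replaces DecreaseKey with lazy re-insertion. The small triangle graph is my own example:

    import heapq

    def prim(adj, r):
        # adj: {u: [(v, w), ...]} for an undirected graph; returns parent map p
        key = {u: float('inf') for u in adj}
        key[r] = 0
        p = {r: None}
        in_tree = set()
        pq = [(0, r)]
        while pq:
            _, u = heapq.heappop(pq)               # ExtractMin
            if u in in_tree:
                continue                           # stale entry
            in_tree.add(u)
            for (v, w) in adj[u]:
                if v not in in_tree and w < key[v]:
                    p[v] = u                       # p[v] = u
                    key[v] = w                     # key[v] = w(u,v)
                    heapq.heappush(pq, (w, v))
        return p

    adj = {'a': [('b', 4), ('c', 3)],
           'b': [('a', 4), ('c', 2)],
           'c': [('a', 3), ('b', 2)]}
    print(prim(adj, 'a'))   # {'a': None, 'b': 'c', 'c': 'a'}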
Prim’s Algorithm
MST-Prim(G, w, r) 6 4
5 9
Q = V[G];
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15

p[r] = NULL;
3 8
while (Q not empty)
u = ExtractMin(Q); Run on example graph
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

88
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G];   
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
  
p[r] = NULL;
3  8
while (Q not empty)
u = ExtractMin(Q); Run on example graph
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

89
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G];   
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
r 0  
p[r] = NULL;
3  8
while (Q not empty)
u = ExtractMin(Q); Pick a start vertex r
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

90
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G];   
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
u 0  
p[r] = NULL;
3  8
while (Q not empty)
u = ExtractMin(Q);Dark vertices have been removed from Q
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

91
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G];   
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
u 0  
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q); Bold arrows indicate parent pointers
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

92
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 14  
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
u 0  
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

93
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 14  
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0  
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

94
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 14  
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

95
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 10  
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

96
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 10  
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

97
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 10 2 
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

98
Prim’s Algorithm
MST-Prim(G, w, r) 6  4
5 9
Q = V[G]; 10 2 
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty) u
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

99
Prim’s Algorithm
 u
MST-Prim(G, w, r) 6 4
5 9
Q = V[G]; 10 2 
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

100
Prim’s Algorithm
 u
MST-Prim(G, w, r) 6 4
5 9
Q = V[G]; 10 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

101
Prim’s Algorithm
4 u
MST-Prim(G, w, r) 6 4
5 9
Q = V[G]; 10 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

102
Prim’s Algorithm
4 u
MST-Prim(G, w, r) 6 4
5 9
Q = V[G]; 5 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

103
Prim’s Algorithm
u
MST-Prim(G, w, r) 6 4 4
5 9
Q = V[G]; 5 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

104
Prim’s Algorithm
u 4
MST-Prim(G, w, r) 6 4
5 9
Q = V[G]; 5 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

105
Prim’s Algorithm
u
MST-Prim(G, w, r) 6 4 4
5 9
Q = V[G]; 5 2 9
for each u  Q
key[u] = ; 14 2
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

106
Prim’s Algorithm
MST-Prim(G, w, r) 6 4 4
5 9
Q = V[G]; 5 2 9
for each u  Q
key[u] = ; 14 2 u
10
key[r] = 0; 15
0 8 15
p[r] = NULL;
3 3 8
while (Q not empty)
u = ExtractMin(Q);
for each v  Adj[u]
if (v  Q and w(u,v) < key[v])
p[v] = u;
key[v] = w(u,v);

107