Dynamic Programming About Design and Analysis of Algorithm

The document discusses concepts related to graphs, including subgraphs, spanning trees, and various graph search algorithms such as Depth First Search (DFS) and Breadth First Search (BFS). It explains the properties of spanning trees, including minimum spanning trees (MST) and algorithms like Kruskal's and Prim's for finding MSTs. Additionally, it touches on dynamic programming as a method for solving optimization problems through decision sequences.

Graphs

Subgraphs and Spanning Trees:

Subgraphs: A graph G' = (V', E') is a subgraph of a graph G = (V, E) iff V' ⊆ V and E' ⊆ E.

An undirected graph G is connected

• if for every pair of vertices u, v there exists a path from u to v.
• If a graph is not connected, the vertices of the graph can be divided into connected components.
• Two vertices are in the same connected component iff they are connected by a path.

A tree is a connected acyclic graph.


A spanning tree of a graph G = (V, E) is a tree that
contains all vertices of V and is a subgraph of G. A single
graph can have multiple spanning trees.

Lemma 1: Let T be a spanning tree of a graph G. Then

1. Any two vertices in T are connected by a unique simple path.
2. If any edge is removed from T, then T becomes disconnected.
3. If we add any edge to T, then the new graph will contain a cycle.
4. The number of edges in T is n - 1, where n is the number of vertices.

Graph search algorithms:

Given a graph G = (V, E) and a vertex v in V(G), traversal can be done in three ways:

1. Depth first search
2. Breadth first search
3. D-search (Depth Search)

Depth first search:

• With depth first search, the start state is chosen to begin,
o then some successor of the start state,
▪ then some successor of that state,
• then some successor of that, and so on,
• trying to reach a goal state.

• If depth first search reaches a state S without successors,
• or if all the successors of a state S have been chosen (visited) and a goal state has not yet been found, then it "backs up";
• that means it goes to the immediately previous state or predecessor: formally, the state whose successor was 'S' originally.

For example, consider the figure. The circled letters are states and the arrows are branches.
• Suppose S is the start and G is the only goal state.
• Depth first search will first visit S, then A then D.
• But D has no successors, so we must back up to
A and try its second successor, E.
• But this doesn’t have any successors either, so we
back up to A again.
• But now we have tried all the successors of A and haven't found the goal state G, so we must back up to 'S'.
• Now ‘S’ has a second successor, B.
o But B has no successors, so we back up to S
again and choose its third successor, C.
o C has one successor, F.
o The first successor of F is H, and the first of H is J.
o J doesn’t have any successors, so we back up to
H and try its second successor.
o And that’s G, the only goal state. So the solution
path to the goal is S, C, F, H and G and the states
considered were in order S, A, D, E, B, C, F, H, J,
G.

Disadvantages:

• It works very well when the search graphs are trees or lattices, but can get stuck in an infinite loop on graphs.
o This is because depth first search can travel around a cycle in the graph forever.
o To eliminate this, keep a list of states previously visited, and never permit the search to return to any of them.

• Another problem is that the state space tree may be of infinite depth;
• to prevent consideration of paths that are too long, a maximum is often placed on the depth of nodes to be expanded, and any node at that depth is treated as if it had no successors.

• We cannot guarantee the shortest solution to the problem.
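The two fixes above (a list of visited states plus a depth bound) can be sketched in a few lines. This is a minimal illustration, not code from the text, and the example graph is an assumed reconstruction of the figure's state graph:

```python
def depth_first_search(graph, start, goal, max_depth=20):
    """Iterative DFS keeping a set of previously visited states
    (so cycles cannot trap the search) and a depth bound (so
    infinitely deep paths are cut off)."""
    stack = [(start, [start])]            # (state, path so far)
    visited = set()
    while stack:
        state, path = stack.pop()
        if state == goal:
            return path
        if state in visited or len(path) > max_depth:
            continue                      # never return to a visited state
        visited.add(state)
        # push successors in reverse so the first-listed one is tried first
        for succ in reversed(graph.get(state, [])):
            stack.append((succ, path + [succ]))
    return None                           # no goal state reachable

# Assumed reconstruction of the figure: S with successors A, B, C;
# A with successors D, E; C -> F; F -> H; H with successors J, G.
graph = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'],
         'C': ['F'], 'F': ['H'], 'H': ['J', 'G']}
```

With this graph, `depth_first_search(graph, 'S', 'G')` returns the solution path S, C, F, H, G found in the walkthrough.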

Breadth first search:

• Given a graph G = (V, E),
• breadth-first search starts at some source vertex S and "discovers" which vertices are reachable from S.
• Define the distance between a vertex V and S to be the minimum number of edges on a path from S to V.
• Breadth-first search discovers vertices in increasing order of distance, and hence can be used as an algorithm for computing shortest paths
o (where the length of a path = number of edges on the path).
• Breadth-first search is so named because it visits vertices across the entire breadth of a level before going deeper.

To illustrate this let us consider the following tree:


• Breadth first search finds states level by level.
• Here we first check all the immediate successors
of the start state.
o Then all the immediate successors of these,
▪ then all the immediate successors of
these, and so on until we find a goal
node.
• Suppose S is the start state and G is the goal
state.
o In the figure, the start state S is at level 0; A, B and C are at level 1;
o D, E and F at level 2;
o H and I at level 3; and
o J, G and K at level 4.
o So breadth first search will consider, in order, S, A, B, C, D, E, F, H, I, J and G, and then stop because it has reached the goal node.
• Breadth first search does not have the danger of
infinite loops as we consider states in order of
increasing number of branches (level) from the
start state.

• One simple way to implement breadth first search is to use a queue data structure, initially containing just the start state.
• Any time we need a new state, we pick it from the front of the queue, and
o any time we find successors, we put them at the end of the queue.
o That way we are guaranteed not to try (find successors of) any states at level 'N' until all states at level 'N - 1' have been tried.
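The queue-based implementation just described can be sketched as follows; the level-by-level graph used here is an assumed reconstruction of the figure:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS: new states are picked from the front of the queue and
    successors are appended at the back, so no level-N state is
    expanded before every level N-1 state has been tried."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        state, path = queue.popleft()       # pick from the front
        if state == goal:
            return path                     # found with fewest edges
        for succ in graph.get(state, []):
            if succ not in visited:
                visited.add(succ)
                queue.append((succ, path + [succ]))  # put at the end
    return None

# Assumed reconstruction of the level diagram: S at level 0; A, B, C
# at level 1; D, E, F at level 2; H, I at level 3; J, G, K at level 4.
graph = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'C': ['F'],
         'F': ['H', 'I'], 'H': ['J', 'G'], 'I': ['K']}
```

On this graph the search expands S, A, B, C, D, E, F, H, I, J and then reaches G, exactly the order listed above.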

Depth Search (D-search):

• The exploration of a new node cannot begin until the node currently being explored is fully explored.
• D-search is a LIFO (Last In, First Out) search, which uses a stack data structure.
• To illustrate D-search let us consider the following tree:

The search order for the goal node (G) is as follows: S, A, B, C, F, H, I, J, G.

Spanning Trees (MST):

• A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph; i.e., any connected graph has a spanning tree.

The weight of a spanning tree, w(T), is the sum of the weights of all edges in T.

Minimum spanning tree

The minimum spanning tree (MST) is a spanning tree with the smallest possible weight.
Application of minimum spanning trees

real-world examples:

• One practical application of an MST is in the design of a network. For instance, a group of individuals, who are separated by varying distances, wish to be connected together in a telephone network. Although an MST cannot do anything about the distance from one connection to another, it can be used to determine the least-cost set of links with no cycles in this network, thereby connecting everyone at minimum cost.

• Another useful application of MST is finding airline routes. The vertices of the graph would represent cities, and the edges would represent routes between the cities. Obviously, the further one has to travel, the more it will cost, so an MST can be applied to optimize airline routes by finding the least costly set of routes with no cycles.

BFS and DFS Spanning Trees

• Using BFS and DFS you can build spanning trees.
• BFS and DFS impose a tree (the BFS/DFS tree), along with some auxiliary edges (cross edges), on the structure of the graph.
• Therefore we can compute a spanning tree of a graph.
• The computed spanning tree is not necessarily a minimum spanning tree.

Why spanning trees?
o Trees are much more structured objects than graphs.
▪ For example, trees break up nicely into subtrees, upon which subproblems can be solved recursively.
o For directed graphs the other edges of the graph can be classified as follows:

Example

Depth first search

Recall

• Depth first search of an undirected graph proceeds as follows.
• The start vertex V is visited.
• Next, an unvisited vertex 'W' adjacent to 'V' is selected and a depth first search from 'W' is initiated.
• When a vertex 'u' is reached such that all its adjacent vertices have been visited, we back up to the last visited vertex which has an unvisited vertex 'W' adjacent to it,
o and initiate a depth first search from W (recursively).
• The search terminates when no unvisited vertex can be reached from any of the visited ones.

Let us consider the following Graph (G):

The adjacency list for G is:


• If the depth first search is initiated from vertex 1, then the vertices of G are visited in the order: 1, 2, 4, 8, 5, 6, 3, 7. The depth first spanning tree is as follows:

• The spanning trees obtained using depth first searches are called depth first spanning trees.
• The edges rejected in the context of depth first search are called back edges.
• A depth first spanning tree has no cross edges.
Breadth first search and traversal:

• Starting at vertex 'V' and marking it as visited,
• BFS differs from DFS in that all unvisited vertices adjacent to V are visited next.
o Then unvisited vertices adjacent to these vertices are visited, and so on.

Consider the same graph as above.

o A breadth first search beginning at vertex 1 of the graph would first visit 1 and then 2 and 3.
Graph algorithms

There are two classic algorithms for finding minimum spanning trees:
• Kruskal's algorithm
• Prim's algorithm.

Kruskal's Algorithm
• This is a greedy algorithm. A greedy algorithm chooses some local optimum (i.e., picking the edge with the least weight at each step when building an MST).
• Kruskal's algorithm works as follows:
o Take a graph with 'n' vertices, keep on adding the
shortest (least cost) edge, while avoiding the creation of
cycles, until (n - 1) edges have been added.
o Sometimes two or more edges may have the same cost.
▪ The order in which the edges are chosen, in this
case, does not matter.
o Different MSTs may result, but they will all have the
same total cost, which will always be the minimum cost.
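A sketch of Kruskal's algorithm in Python. The union-find (disjoint set) structure used for the cycle check is an implementation choice, not something spelled out in the text:

```python
def kruskal(n, edges):
    """Kruskal's algorithm: sort edges by weight, add each edge unless
    it would close a cycle, and stop after n - 1 edges.  Cycle detection
    uses a union-find structure: two endpoints in the same component
    means the edge would create a cycle.
    `edges` is a list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                       # find component root, compressing paths
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: no cycle
            parent[ru] = rv            # merge the two components
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:
                break
    return mst, total

# Tiny example: triangle 0-1 (weight 1), 1-2 (weight 2), 0-2 (weight 3);
# the weight-3 edge is rejected because it would close a cycle.
mst, total = kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)])
```

The `break` after n - 1 edges mirrors the stopping rule stated above.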

Example

To understand Kruskal's algorithm let us consider the following


example −
Step 1 - Remove all loops and Parallel Edges
Remove all loops and parallel edges from the given graph.

In case of parallel edges, keep the one which has the least cost
associated and remove all others.

Step 2 - Arrange all edges in increasing order of weight

The next step is to create a set of edges with their weights, and arrange them in ascending order of weight (cost).

Step 3 - Add the edge which has the least weight

• Now we start adding edges, beginning with the one which has the least weight.
• Throughout, we keep checking that the spanning-tree properties remain intact.
• If adding an edge would violate the spanning tree property (i.e., create a cycle), we do not include that edge.

• The least cost is 2 and edges involved are B,D and D,T.
• We add them. Adding them does not violate spanning tree
properties, so we continue to our next edge selection.
Next cost is 3, and associated edges are A,C and C,D. We add them
again −

The next cost in the table is 4, and we observe that adding it would create a circuit in the graph.

We ignore it. In the process we shall ignore/avoid all edges that


create a circuit.
We observe that edges with cost 5 and 6 also create circuits. We
ignore them and move on.

Now we are left with only one node to be added. Between the two
least cost edges available 7 and 8, we shall add the edge with cost
7.

By adding edge S,A we have included all the nodes of the graph, and we now have a minimum cost spanning tree.

MINIMUM-COST SPANNING TREES:


PRIM'S ALGORITHM
• Like Kruskal’s algorithm, Prim’s algorithm is
also a Greedy algorithm. It starts with an
empty spanning tree.
• The idea is to maintain two sets of vertices.
The first set contains the vertices already
included in the MST, the other set contains the
vertices not yet included.
• At every step, it considers all the edges that
connect the two sets, and picks the minimum
weight edge from these edges.
o After picking the edge, it moves the other
endpoint of the edge to the set containing
MST.
Algorithm
1) Create a set mstSet that keeps track of vertices
already included in MST.
2) Assign a key value to all vertices in the input graph.
Initialize all key values as INFINITE. Assign key value as
0 for the first vertex so that it is picked first.
3) While mstSet doesn't include all vertices:
a) Pick a vertex u which is not in mstSet and has the minimum key value.
b) Include u in mstSet.
c) Update the key values of all adjacent vertices of u. To update the key values, iterate through all adjacent vertices. For every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v, update the key value to the weight of u-v.
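Steps 1-3 above can be sketched as follows. Using a binary heap in place of the repeated "scan for the minimum key" is an implementation shortcut, not part of the stated algorithm:

```python
import heapq

def prim(graph, start=0):
    """Prim's algorithm following the steps above: all key values start
    at infinity, the start vertex gets key 0 so it is picked first, and
    at each step the minimum-key vertex outside mstSet is added.
    `graph[u]` is a list of (v, weight) pairs."""
    key = {u: float('inf') for u in graph}   # step 2: all keys INFINITE
    key[start] = 0                           # start vertex gets key 0
    mst_set = set()
    heap = [(0, start)]
    total = 0
    while heap:
        k, u = heapq.heappop(heap)           # a) pick minimum-key vertex
        if u in mst_set:
            continue                         # stale heap entry, skip it
        mst_set.add(u)                       # b) include u in mstSet
        total += k
        for v, w in graph[u]:                # c) update keys of u's neighbours
            if v not in mst_set and w < key[v]:
                key[v] = w
                heapq.heappush(heap, (w, v))
    return total

# Tiny example: triangle with weights 0-1: 1, 1-2: 2, 0-2: 3.
example = {0: [(1, 1), (2, 3)], 1: [(0, 1), (2, 2)], 2: [(0, 3), (1, 2)]}
```

On the triangle the MST uses edges 0-1 and 1-2, for a total weight of 3.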

Example

The vertices included in MST are shown in green color.


• Pick the vertex with minimum key value and not already
included in MST (not in mstSET).
• The vertex 1 is picked and added to mstSet.
o So mstSet now becomes {0, 1}. Update the key values
of adjacent vertices of 1. The key value of vertex 2
becomes 8.

• Pick the vertex with minimum key value and not already included in
MST (not in mstSET).
o We can either pick vertex 7 or vertex 2; let vertex 7 be picked. So mstSet now becomes {0, 1, 7}.
o Update the key values of the adjacent vertices of 7. The key values of vertices 6 and 8 become finite (1 and 7 respectively).

• Pick the vertex with minimum key value and not already included in
MST (not in mstSET). Vertex 6 is picked.
o So mstSet now becomes {0, 1, 7, 6}. Update the key values of the adjacent vertices of 6. The key values of vertices 5 and 8 are updated.

• We repeat the above steps until mstSet includes all vertices of the given graph. Finally, we get the following graph.

Dynamic Programming

• Dynamic programming, like the greedy method, is a powerful algorithm design technique that can be used when the solution to the problem may be viewed as the result of a sequence of decisions.
o In the greedy method we make irrevocable decisions one at a time, using a greedy criterion.
o In dynamic programming, however, we examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.

• When optimal decision sequences contain optimal decision subsequences, we can establish recurrence equations, called dynamic-programming recurrence equations,
o that enable us to solve the problem in an efficient way.

• Dynamic programming is based on the principle of optimality (coined by Bellman).

• The principle of optimality states that no matter what the initial state and initial decision are, the remaining decision sequence must constitute an optimal decision sequence with regard to the state resulting from the first decision.

• The principle implies that an optimal decision sequence is comprised of optimal decision subsequences.
o Since the principle of optimality may not hold for some formulations of some problems, it is necessary to verify that it does hold for the problem being solved.
▪ Dynamic programming cannot be applied when this principle does not hold.

The steps in a dynamic programming solution are:

1. Verify that the principle of optimality holds.
2. Set up the dynamic-programming recurrence equations.
3. Solve the dynamic-programming recurrence equations for the value of the optimal solution.
4. Perform a traceback step in which the solution itself is constructed.

• Dynamic programming differs from the greedy method: the greedy method produces only one feasible solution, which may or may not be optimal,
o while dynamic programming solves every sub-problem at most once, and one of the resulting solutions is guaranteed to be optimal.
o Optimal solutions to sub-problems are retained in a table, thereby avoiding the work of recomputing the answer every time a sub-problem is encountered.
• The divide and conquer principle solves a large problem by breaking it up into smaller problems which can be solved independently.
o In dynamic programming this principle is carried to an extreme:
o when we don't know exactly which smaller problems to solve, we simply solve them all, then store the answers away in a table to be used later in solving larger problems.
• Care must be taken to avoid recomputing previously computed values; otherwise the recursive program will have prohibitive complexity.
• In some cases the solution can be improved, and in other cases the dynamic programming technique is the best approach.
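The tabling idea described above can be seen in a few lines. This toy example (Fibonacci numbers, not a problem from the text) retains each sub-answer in a table so it is computed at most once:

```python
memo = {}  # table of already-solved sub-problems

def fib(n):
    """Each sub-problem is solved at most once; repeated calls read
    the stored answer instead of recomputing it, turning exponential
    recursion into linear work."""
    if n in memo:
        return memo[n]                 # answer retained in the table
    memo[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return memo[n]
```

Without the table, `fib(30)` would make over a million recursive calls; with it, each of the 31 sub-problems 0..30 is solved exactly once.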

Two difficulties may arise in any application of dynamic


programming:

1. It may not always be possible to combine the solutions of smaller problems to form the solution of a larger one.

2. The number of small problems to solve may be unacceptably large.

Dynamic programming can be demonstrated using a multistage graph.
Example
Consider the following example to understand the concept of multistage graph.

According to the formula, we have to calculate Cost(i, j) using the following steps.

Step-1: Cost (K-2, j)

In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three options to choose the minimum cost at this step.

Cost(3, 4) = min {c(4, 7) + Cost(7, 9),c(4, 8) + Cost(8, 9)} = 7

Cost(3, 5) = min {c(5, 7) + Cost(7, 9),c(5, 8) + Cost(8, 9)} = 5

Cost(3, 6) = min {c(6, 7) + Cost(7, 9),c(6, 8) + Cost(8, 9)} = 5

Step-2: Cost (K-3, j)

Two nodes are selected as j because at stage k - 3 = 2 there are two nodes, 2 and 3. So i = 2, and j takes the values 2 and 3.

Cost(2, 2) = min {c(2, 4) + Cost(3, 4), c(2, 6) + Cost(3, 6)} = 8

Cost(2, 3) = min {c(3, 4) + Cost(3, 4), c(3, 5) + Cost(3, 5), c(3, 6) + Cost(3, 6)} = 10

Step-3: Cost (K-4, j)

Cost(1, 1) = min {c(1, 2) + Cost(2, 2), c(1, 3) + Cost(2, 3)} = 12

Hence, the path having the minimum cost is 1→ 3→ 5→ 8→ 9.
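The whole computation can be sketched as a backward dynamic program. The edge weights below are assumptions (the original figure is not reproduced here), chosen so that the stage costs match the worked example, e.g. c(1, 3) = 2 and c(3, 5) = 5:

```python
# Assumed edge weights, consistent with Cost(3,4)=7, Cost(3,5)=5,
# Cost(3,6)=5, Cost(2,2)=8, Cost(2,3)=10 and Cost(1,1)=12 above.
c = {(1, 2): 9, (1, 3): 2,
     (2, 4): 1, (2, 6): 4,
     (3, 4): 6, (3, 5): 5, (3, 6): 8,
     (4, 7): 3, (4, 8): 5,
     (5, 7): 6, (5, 8): 2,
     (6, 7): 3, (6, 8): 2,
     (7, 9): 4, (8, 9): 3}

def multistage_shortest_path(c, source=1, sink=9):
    """Backward DP: Cost(u) = min over edges (u, v) of c(u, v) + Cost(v),
    computed from the sink back towards the source, then the stored
    decisions are traced forward to recover the path."""
    cost = {sink: 0}
    decision = {}
    for u in range(sink - 1, source - 1, -1):   # reverse topological order
        best = min((c[(u, v)] + cost[v], v) for (a, v) in c if a == u)
        cost[u], decision[u] = best
    path, u = [source], source
    while u != sink:                            # traceback step
        u = decision[u]
        path.append(u)
    return cost[source], path
```

With these weights the function reproduces the result above: minimum cost 12 along the path 1 → 3 → 5 → 8 → 9.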


The complexity analysis
The complexity analysis of the algorithm is fairly straightforward. Here, if G has E edges, then the time for the first for loop is Θ(V + E).

Applied example
Travelling Salesman Problem

Problem Statement
• A traveler needs to visit all the cities from a list, where distances between all the
cities are known and each city should be visited just once.

• What is the shortest possible route that he visits each city exactly once and returns
to the origin city?

Solution
• The travelling salesman problem is one of the most notorious computational problems. We can use a brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph, there are (n - 1)! possible tours.
Brute force approach

Example
Dynamic programming
• Using the dynamic programming approach instead of brute force, the solution can be obtained in less time, although there is still no polynomial-time algorithm.

• Let us consider a graph G = (V, E), where V is a set of cities and E is a set of
weighted edges.

• An edge e(u, v) represents that vertices u and v are connected. The distance between vertices u and v is d(u, v), which should be non-negative.
• Suppose we have started at city 1 and, after visiting some cities, we are now in city j. Hence, this is a partial tour.
• We certainly need to know j, since this will determine which cities are most convenient to visit next.
• We also need to know the set of all cities visited so far, so that we don't repeat any of them. Hence, (j, set of visited cities) is an appropriate sub-problem.

Example
Consider the problem above

g(2, ø) = 5    g(3, ø) = 6    g(4, ø) = 8
g(2, {3}) = 15    g(2, {4}) = 18
g(3, {2}) = 18    g(3, {4}) = 20
g(4, {2}) = 13    g(4, {3}) = 15
g(2, {3, 4}) = 25    g(3, {2, 4}) = 25    g(4, {2, 3}) = 23
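These values follow the recurrence g(i, S) = min over j in S of {d(i, j) + g(j, S − {j})} (the Held-Karp method). A sketch follows; the distance matrix is an assumption chosen to be consistent with the g-values above, since the original matrix is not reproduced in the text:

```python
from itertools import combinations

# Assumed distance matrix: d[i][j] = distance from city i+1 to city j+1,
# consistent with g(2, ø) = 5, g(3, ø) = 6, g(4, ø) = 8, g(2, {3}) = 15,
# g(2, {3,4}) = 25, g(3, {2,4}) = 25 and g(4, {2,3}) = 23.
d = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]

def tsp(d):
    """Held-Karp DP for TSP.  g(i, S) is the length of the shortest
    path that starts at city i, visits every city in S exactly once,
    and ends at city 1 (index 0)."""
    n = len(d)
    # Base case: g(i, ø) = d(i, 1) -- return straight to the start city.
    g = {(i, frozenset()): d[i][0] for i in range(1, n)}
    # Fill the table in order of increasing subset size.
    for size in range(1, n - 1):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for i in range(1, n):
                if i not in S:
                    g[(i, S)] = min(d[i][j] + g[(j, S - {j})] for j in S)
    # Close the tour: leave city 1, cover all other cities, return to 1.
    full = frozenset(range(1, n))
    return min(d[0][j] + g[(j, full - {j})] for j in full)
```

With this matrix, `tsp(d)` returns 35, the length of the tour 1 → 2 → 4 → 3 → 1 (10 + 10 + 9 + 6).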
