
Design & Analysis of Algorithms

Unit-2: Greedy Method

The greedy method is one of the strategies, like divide and conquer, used to solve problems. It is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand this through some terms.

The greedy method is the simplest and most straightforward approach. It is not a single algorithm but a technique. Its main feature is that each decision is made on the basis of the currently available information, without worrying about the effect of that decision in the future.

This technique is basically used to determine a feasible solution that may or may not be optimal. A feasible solution is one that satisfies the given constraints. The optimal solution is the best, most favorable solution among the feasible ones. If more than one solution satisfies the given constraints, all of them are feasible, whereas the optimal solution is the best solution among them.

Characteristics of Greedy method:


The following are the characteristics of a greedy method:
● To construct the solution in an optimal way, this algorithm maintains two sets: one set contains the chosen items, and the other contains the rejected items.
● A greedy algorithm makes locally good choices in the hope that they lead to a feasible or optimal overall solution.

Components of Greedy Algorithm:


Greedy algorithms have the following five components −
● A candidate set − A solution is created from this set.
● A selection function − Used to choose the best candidate to be added to the
solution.
● A feasibility function − Used to determine whether a candidate can be used to
contribute to the solution.


● An objective function − Used to assign a value to a solution or a partial solution.


● A solution function − Used to indicate whether a complete solution has been reached (a generic sketch combining these components follows below).
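
Putting these five components together, a generic greedy procedure might be sketched in Python as follows. This is a minimal sketch; the function and parameter names are illustrative and not part of the original notes.

def greedy(candidates, select, is_feasible, objective, is_solution):
    """Generic greedy loop assembled from the five components above.

    candidates  - the candidate set
    select      - selection function: picks the best remaining candidate
    is_feasible - feasibility function: can this candidate extend the solution?
    objective   - objective function: value of a (partial) solution
    is_solution - solution function: is the solution complete?
    """
    solution, rejected = [], []          # chosen items vs. rejected items
    remaining = list(candidates)
    while remaining and not is_solution(solution):
        best = select(remaining)         # greedy choice on current information only
        remaining.remove(best)
        if is_feasible(solution, best):
            solution.append(best)
        else:
            rejected.append(best)
    return solution, objective(solution)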

Applications of Greedy Algorithm

● It is used in finding shortest paths.
● It is used to find the minimum spanning tree, using Prim's algorithm or Kruskal's algorithm.
● It is used in job sequencing with deadlines.
● It is also used to solve the fractional knapsack problem.

Let's understand through an example

Suppose there is a problem 'P'. I want to travel from A to B shown as below:

P:A→B

The problem is that we have to travel from A to B. There are various ways to go from A to B: on foot, by car, bike, train, aeroplane, etc. There is a constraint on the journey: we have to complete it within 12 hrs. Only by train or by aeroplane can the distance be covered within 12 hrs, so although there are many solutions to this problem, only two of them satisfy the constraint.

Now suppose we also have to cover the journey at minimum cost. Because the problem asks for the minimum, it is a minimization problem. So far we have two feasible solutions, one by train and one by air. Since travelling by train gives the lower cost, it is the optimal solution. An optimal solution is also a feasible solution, but it is the one providing the best result, here the minimum cost. There is only one optimal solution.

A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.


Suppose we have to travel from the source to the destination at minimum cost, and there are three feasible paths with costs 10, 20, and 5. The path of cost 5 is the cheapest, so it is the optimal solution. This is the local optimum, and in this way we find a local optimum at each stage in order to construct the globally optimal solution.

Fractional Knapsack Problem


The fractional knapsack problem is a variant of the knapsack problem. In the fractional knapsack, items may be broken into fractions in order to maximize the profit; the version of the problem in which we are allowed to break items is known as the fractional knapsack problem.

This problem can be solved with the help of two techniques:

● Brute-force approach: The brute-force approach tries all the possible solutions with all the different fractions, but it is a time-consuming approach.
● Greedy approach: In the greedy approach, we calculate the profit/weight ratio of each item and select items accordingly. The item with the highest ratio is selected first.

There are basically three approaches to solve the problem:

● The first approach is to select the item based on the maximum profit.
● The second approach is to select the item based on the minimum weight.
● The third approach is to calculate the ratio of profit/weight.


Consider the below example:

Objects   :  1   2   3   4   5   6   7
Profit (P):  5  10  15   7   8   9   4
Weight (w):  1   3   5   4   1   3   2
W (capacity of the knapsack): 15
n (number of items): 7

First approach:
The first approach is to select the item based on the maximum profit.

Object   Profit            Weight   Remaining weight
3        15                5        15 - 5 = 10
2        10                3        10 - 3 = 7
6        9                 3        7 - 3 = 4
5        8                 1        4 - 1 = 3
4        7 * 3/4 = 5.25    3        3 - 3 = 0

The total profit would be equal to (15 + 10 + 9 + 8 + 5.25) = 47.25

Second approach:

The second approach is to select the item based on the minimum weight.

Object   Profit            Weight   Remaining weight
1        5                 1        15 - 1 = 14
5        8                 1        14 - 1 = 13
7        4                 2        13 - 2 = 11
2        10                3        11 - 3 = 8
6        9                 3        8 - 3 = 5
4        7                 4        5 - 4 = 1
3        15 * 1/5 = 3      1        1 - 1 = 0

In this case, the total profit would be equal to (5 + 8 + 4 + 10 + 9 + 7 + 3) = 46

Third approach:
In the third approach, we will calculate the ratio of profit/weight.

Objects: 1 2 3 4 5 6 7
Profit (P): 5 10 15 7 8 9 4
Weight(w): 1 3 5 4 1 3 2

In this case, we first calculate the profit/weight ratio.

Object 1: 5/1 = 5
Object 2: 10/3 = 3.33
Object 3: 15/5 = 3
Object 4: 7/4 = 1.75
Object 5: 8/1 = 8
Object 6: 9/3 = 3
Object 7: 4/2 = 2

P/W ratios:  5   3.33   3   1.75   8   3   2

In this approach, we select the objects based on the maximum profit/weight ratio. Since the P/W ratio of object 5 is the maximum, we select object 5 first.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14


After object 5, object 1 has the maximum profit/weight ratio, i.e., 5. So, we select object
1 shown in the below table:
Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

After object 1, object 2 has the maximum profit/weight ratio, i.e., 3.3. So, we select
object 2 having profit/weight ratio as 3.3.
Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

After object 2, object 3 has the maximum profit/weight ratio, i.e., 3. So, we select object
3 having profit/weight ratio as 3.
Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5


After object 3, object 6 has the maximum profit/weight ratio, i.e., 3. So we select object
6 having profit/weight ratio as 3.
Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5

6 9 3 5-3=2

After object 6, object 7 has the maximum profit/weight ratio, i.e., 2. So we select object
7 having profit/weight ratio as 2.
Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5

6 9 3 5-3=2

7 4 2 2-2=0

As we can observe in the above table, the remaining weight is zero, which means that the knapsack is full and we cannot add any more objects. Therefore, the total profit is (8 + 5 + 10 + 15 + 9 + 4) = 51.


In the first approach, the total profit is 47.25; in the second approach, 46; and in the third approach, 51. Therefore, the third approach, selecting by maximum profit/weight ratio, is the best approach among the three.
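
The third approach is easy to express in code. The following Python sketch (the function and variable names are mine, not from the notes) implements the greedy profit/weight strategy and reproduces the total of 51 for the data above.

def fractional_knapsack(profits, weights, capacity):
    # Sort item indices by profit/weight ratio, highest first (greedy choice).
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        if remaining == 0:
            break
        take = min(weights[i], remaining)        # take the whole item, or a fraction
        total += profits[i] * (take / weights[i])
        remaining -= take
    return total

profits = [5, 10, 15, 7, 8, 9, 4]
weights = [1, 3, 5, 4, 1, 3, 2]
print(fractional_knapsack(profits, weights, 15))   # expected: 51.0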

Job Sequencing with Deadline

In the job sequencing problem, the objective is to find a sequence of jobs that can be completed within their deadlines and gives maximum profit.

Consider a set of n given jobs, each associated with a deadline, where a profit is earned only if the job is completed by its deadline. These jobs need to be ordered in such a way that the total profit is maximum. It may happen that not all of the given jobs can be completed within their deadlines.

Assume the deadline of the i-th job Ji is di and the profit received from this job is pi. The optimal solution of this algorithm is a feasible solution with maximum profit. Here, di > 0 for 1 ⩽ i ⩽ n.

Initially, these jobs are ordered according to profit, i.e.

p1⩾p2⩾p3⩾...⩾pn

Algorithm: Job-Sequencing-With-Deadline (D, J, n, k)

D(0) := J(0) := 0
k := 1
J(1) := 1                      // the first job is selected
for i = 2 … n do
    r := k
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        r := r - 1
    if D(J(r)) ≤ D(i) and D(i) > r then
        for l = k … r + 1 by -1 do
            J(l + 1) := J(l)
        J(r + 1) := i
        k := k + 1


Analysis
In this algorithm, we use two nested loops, one within the other. Hence, the complexity of this algorithm is O(n²).

Example
Let us consider a set of given jobs as shown in the following table. We have to find a
sequence of jobs, which will be completed within their deadlines and will give maximum
profit. Each job is associated with a deadline and profit.

Job J1 J2 J3 J4 J5

Deadline 2 1 3 2 1

Profit 60 100 20 40 20

To solve this problem, the given jobs are sorted according to their profit in a descending
order. Hence, after sorting, the jobs are ordered as shown in the following table.

Job J2 J1 J4 J3 J5

Deadline 1 2 2 3 1

Profit 100 60 40 20 20

From this set of jobs, first we select J2, as it can be completed within its deadline and
contributes maximum profit.
● Next, J1 is selected as it gives more profit compared to J4.
● In the next time slot, J4 cannot be selected as its deadline is over, hence J3 is selected as it executes within its deadline.
● Job J5 is discarded as it cannot be executed within its deadline.

Thus, the solution is the sequence of jobs (J2, J1, J3), which are executed within their deadlines and give the maximum profit.
Total profit of this sequence is 100 + 60 + 20 = 180.
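
As a cross-check on this example, here is a short Python sketch. It is not a transcription of the pseudocode above but a simpler slot-based greedy with the same idea: sort jobs by profit and place each one in the latest free slot on or before its deadline.

def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit). Returns (schedule, total_profit)."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # highest profit first
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)                     # slots[1..max_deadline]
    total = 0
    for name, deadline, profit in jobs:
        # place the job in the latest free slot not after its deadline
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break
    return [s for s in slots[1:] if s is not None], total

jobs = [("J1", 2, 60), ("J2", 1, 100), ("J3", 3, 20), ("J4", 2, 40), ("J5", 1, 20)]
print(job_sequencing(jobs))   # expected: (['J2', 'J1', 'J3'], 180)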


Spanning Tree

A spanning tree is a subgraph of an undirected graph that has all the vertices connected by the minimum number of edges.
If all the vertices of a graph are connected, then there exists at least one spanning tree. A graph may have more than one spanning tree.
Example
In the following graph, the highlighted edges form a spanning tree.

Minimum Spanning Tree


A Minimum Spanning Tree (MST) is a subset of edges of a connected weighted
undirected graph that connects all the vertices together with the minimum possible total
edge weight.

Properties of Minimum Spanning Tree

● If we remove any edge from a spanning tree, it becomes disconnected. Therefore, we cannot remove any edge from a spanning tree.
● If we add an edge to a spanning tree, it creates a cycle. Therefore, we cannot add any edge to a spanning tree.
● If each edge of the graph has a distinct weight, then there exists a single, unique minimum spanning tree. If the edge weights are not distinct, there can be more than one minimum spanning tree.


● A complete undirected graph on n vertices has n^(n-2) spanning trees.
● Every connected, undirected graph contains at least one spanning tree.
● A disconnected graph does not have any spanning tree.
● In a connected graph with e edges and n vertices, we can remove at most (e - n + 1) edges while constructing a spanning tree.

Methods of Minimum Spanning Tree

There are two methods to find Minimum Spanning Tree

1. Kruskal's Algorithm
2. Prim's Algorithm

As we have discussed, one graph may have more than one spanning tree. If there are n vertices, a spanning tree has n - 1 edges. In this context, if each edge of the graph is associated with a weight and there exists more than one spanning tree, we need to find the minimum spanning tree of the graph.

Kruskal's Algorithm:

Kruskal's algorithm constructs a Minimum Spanning Tree for a connected weighted graph. It is a greedy algorithm: the greedy choice is to add the smallest-weight edge that does not cause a cycle in the MST constructed so far.

Steps for finding MST using Kruskal's Algorithm:

1. Arrange the edges of G in order of increasing weight.
2. Starting with only the vertices of G, proceed sequentially and add each edge that does not result in a cycle, until (n - 1) edges are used.


For Example: Find the Minimum Spanning Tree of the following graph using Kruskal's
algorithm.

First, we initialize the set A to the empty set and create |V| trees, one containing each vertex, with the MAKE-SET procedure. Then we sort the edges in E into non-decreasing order by weight.

There are 9 vertices and 12 edges, so the MST formed has (9 - 1) = 8 edges.

Now, for each edge (u, v) we check whether the endpoints u and v belong to the same tree. If they do, then the edge (u, v) cannot be added without creating a cycle. Otherwise, the two vertices belong to different trees, the edge (u, v) is added to A, and the vertices of the two trees are merged by the UNION procedure.


Step 1: First, take edge (h, g).

Step 2: Then edge (g, f).

Step 3: Then edges (a, b) and (i, g) are considered, and the forest becomes:


Step 4: Now consider edge (h, i). Both h and i are in the same set, so it would create a cycle and this edge is discarded.
Then edges (c, d), (b, c), (a, h), (d, e), and (e, f) are considered, and the forest becomes:

Step 5: For edge (e, f), both endpoints e and f are already in the same tree, so this edge is discarded. Edge (b, h) is discarded next, as it also creates a cycle.

Step 6: After that, edge (d, f) is discarded, and the final spanning tree is shown in dark lines.

Step 7: The result is the required Minimum Spanning Tree, because it contains all 9 vertices and (9 - 1) = 8 edges.
Discarded edges: e → f, b → h, d → f [a cycle would be formed]


Analysis:

Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently O(E log V) time, all with simple data structures. These running times are equivalent because:

● E is at most V², and log V² = 2 log V is O(log V).
● If we ignore isolated vertices, which will each form their own component of the minimum spanning forest, V ≤ 2E, so log V is O(log E).

Thus the total time is O(E log E) = O(E log V).
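
A compact way to realize these steps is with a union-find (disjoint-set) structure. The sketch below is a generic Python version of Kruskal's algorithm, assuming the graph is given as a list of (weight, u, v) edges; the names are illustrative, not taken from the notes.

def kruskal(n_vertices, edges):
    """edges: list of (weight, u, v) with vertices labelled 0 .. n_vertices-1."""
    parent = list(range(n_vertices))

    def find(x):                       # find the root of x's tree (with path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # edges in non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different trees: no cycle
            parent[ru] = rv            # union the two trees
            mst.append((u, v, w))
            total += w
            if len(mst) == n_vertices - 1:
                break
    return mst, total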

Prim's Algorithm
Prim's algorithm is a greedy approach to finding the minimum spanning tree. In this algorithm, we can start building the MST from an arbitrary vertex. The idea is to maintain two sets of vertices:

● vertices already included in the MST, and
● vertices not yet included in the MST.

Steps for finding MST using Prim's Algorithm:


1. Create an MST set that keeps track of vertices already included in the MST.
2. Assign key values to all vertices in the input graph. Initialize all key values as INFINITE (∞), and assign key value 0 to the first vertex so that it is picked first.
3. While the MST set doesn't include all vertices:
   a. Pick the vertex u which is not in the MST set and has the minimum key value, and include u in the MST set.
   b. Update the key values of all vertices adjacent to u. To update, iterate through all adjacent vertices: for every adjacent vertex v, if the weight of edge (u, v) is less than the current key value of v, update the key value of v to the weight of (u, v).


MST-PRIM (G, w, r)
 1. for each u ∈ V [G]
 2.     do key [u] ← ∞
 3.        π [u] ← NIL
 4. key [r] ← 0
 5. Q ← V [G]
 6. while Q ≠ ∅
 7.     do u ← EXTRACT-MIN (Q)
 8.        for each v ∈ Adj [u]
 9.            do if v ∈ Q and w (u, v) < key [v]
10.               then π [v] ← u
11.                    key [v] ← w (u, v)

Example: Generate minimum cost spanning tree for the following graph using Prim's
algorithm.

Solution: In Prim's algorithm, first we initialize the priority queue Q to contain all the vertices, and the key of each vertex to ∞ except for the root, whose key is set to 0. Suppose vertex 0 is the root r. By the EXTRACT-MIN (Q) procedure, u = r and Adj [u] = {5, 1}.
We remove u from the set Q and add it to the set V - Q of vertices in the tree. Then we update the key and π fields of every vertex v adjacent to u but not in the tree.


1. Taking 0 as the starting vertex
2. Root = 0
3. Adj [0] = {5, 1}
4. Parent: π [5] = 0 and π [1] = 0
5. Key [5] = ∞ and key [1] = ∞
6. w (0, 5) = 10 and w (0, 1) = 28
7. w (u, v) < key [5] and w (u, v) < key [1]
8. Key [5] = 10 and key [1] = 28
9. So the updated key values of 5 and 1 are:

Now EXTRACT_MIN (Q) removes 5 because key [5] = 10 is minimum, so u = 5.

1. Adj [5] = {0, 4}, and 0 is already in the tree
2. Taking 4: key [4] = ∞, π [4] = 5
3. w (5, 4) = 25
4. w (5, 4) < key [4]
5. So update the key value of 4 to 25 and its parent to 5.


Now remove 4 because key [4] = 25 is minimum, so u = 4.

1. Adj [4] = {6, 3}
2. Key [3] = ∞, key [6] = ∞
3. w (4, 3) = 22, w (4, 6) = 24
4. w (u, v) < key [v]: w (4, 3) < key [3] and w (4, 6) < key [6]

Update the key values as key [3] = 22 and key [6] = 24, and the parent of 3 and 6 as 4:

1. π [3] = 4, π [6] = 4
2. u = EXTRACT_MIN (3, 6)  [key [3] < key [6], i.e., 22 < 24]
3. u = 3


Now remove 3 because key [3] = 22 is minimum, so u = 3.

1. Adj [3] = {4, 6, 2}
2. 4 is already in the tree, so it is not in Q
3. Key [2] = ∞, key [6] = 24
4. w (3, 2) = 12, w (3, 6) = 18
5. w (3, 2) < key [2] and w (3, 6) < key [6], so key [6] = 24 now becomes key [6] = 18

Now in Q, key [2] = 12, key [6] = 18, key [1] = 28, and the parent of 2 and 6 is 3:

1. π [2] = 3, π [6] = 3

Now EXTRACT_MIN (Q) removes 2, because key [2] = 12 is minimum.

1. u = EXTRACT_MIN (2, 6)
2. u = 2   [key [2] < key [6], i.e., 12 < 18]
3. Now the current vertex is 2
4. Adj [2] = {3, 1}
5. 3 is already in the tree
6. Taking 1: key [1] = 28
7. w (2, 1) = 16
8. w (2, 1) < key [1]


So update key value of key [1] as 16 and its parent as 2.

1. π[1]= 2

Now EXTRACT_MIN (Q) removes 1 because key [1] = 16 is minimum.

1. Adj [1] = {0, 6, 2}
2. 0 and 2 are already in the tree
3. Taking 6: key [6] = 18
4. w (1, 6) = 14
5. w (1, 6) < key [6]

Update the key value of 6 to 14 and its parent to 1:

1. π [6] = 1


Now all the vertices have been spanned. Using the table above, we get the Minimum Spanning Tree:

1. 0 → 5 → 4 → 3 → 2 → 1 → 6
2. [Because π [5] = 0, π [4] = 5, π [3] = 4, π [2] = 3, π [1] = 2, π [6] = 1]

Thus the final spanning Tree is

Total Cost = 10 + 25 + 22 + 12 + 16 + 14 = 99
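
To see these updates in code, here is a heap-based Python sketch of Prim's algorithm. The edge list is reconstructed from the weights quoted in the walkthrough above (any edges not mentioned there are omitted), so treat it as an illustrative reconstruction rather than the original figure.

import heapq

def prim(adj, root=0):
    """adj: dict mapping vertex -> list of (neighbour, weight). Returns (parent, cost)."""
    key = {v: float('inf') for v in adj}
    parent = {v: None for v in adj}
    key[root] = 0
    in_mst = set()
    pq = [(0, root)]                        # (key, vertex) min-heap plays the role of Q
    while pq:
        k, u = heapq.heappop(pq)            # EXTRACT-MIN
        if u in in_mst:
            continue
        in_mst.add(u)
        for v, w in adj[u]:
            if v not in in_mst and w < key[v]:
                key[v], parent[v] = w, u    # decrease the key of v
                heapq.heappush(pq, (w, v))
    return parent, sum(key.values())

# Edge weights as quoted in the walkthrough above.
edges = [(0, 1, 28), (0, 5, 10), (1, 2, 16), (1, 6, 14), (2, 3, 12),
         (3, 4, 22), (3, 6, 18), (4, 5, 25), (4, 6, 24)]
adj = {v: [] for v in range(7)}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))
print(prim(adj)[1])   # expected total cost: 99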

Single Source Shortest Paths

In a shortest-paths problem, we are given a weighted, directed graph G = (V, E) with weight function w: E → R mapping edges to real-valued weights. The weight of a path p = (v0, v1, ..., vk) is the sum of the weights of its constituent edges:

w(p) = w(v0, v1) + w(v1, v2) + ... + w(v(k-1), vk)


We define the shortest-path weight from u to v by δ(u, v) = min { w(p) : p is a path from u to v } if there is a path from u to v, and δ(u, v) = ∞ otherwise.

A shortest path from vertex s to vertex t is then defined as any path p with weight w(p) = δ(s, t).

The breadth-first-search algorithm is a shortest-path algorithm that works on unweighted graphs, that is, graphs in which each edge can be considered to have unit weight. In the Single Source Shortest Paths problem, we are given a graph G = (V, E) and we want to find the shortest path from a given source vertex s ∈ V to every vertex v ∈ V.

Variants:

There are some variants of the shortest path problem.

● Single-destination shortest-paths problem: Find a shortest path to a given destination vertex t from every vertex v. By reversing the direction of each edge in the graph, we can reduce this problem to a single-source problem.
● Single-pair shortest-path problem: Find a shortest path from u to v for given vertices u and v. If we solve the single-source problem with source vertex u, we solve this problem as well. Furthermore, no algorithms for this problem are known that run asymptotically faster than the best single-source algorithms in the worst case.
● All-pairs shortest-paths problem: Find a shortest path from u to v for every pair of vertices u and v. Running a single-source algorithm once from each vertex solves this problem, but it can generally be solved faster, and its structure is of interest in its own right.

Shortest Path
Given a graph G = (V, E), we maintain for each vertex v ∈ V a predecessor π [v] that is either another vertex or NIL. During the execution of shortest-paths algorithms, however, the π values need not indicate shortest paths. As in breadth-first search, we shall be interested in the predecessor subgraph Gπ = (Vπ, Eπ) induced by the values π. Here again, we define the vertex set Vπ to be the set of vertices of G with non-NIL predecessors, plus the source s:

Vπ = {v ∈ V : π [v] ≠ NIL} ∪ {s}


The directed edge set Eπ is the set of edges induced by the π values for vertices in Vπ:
Eπ = {(π [v], v) ∈ E : v ∈ Vπ - {s}}

A shortest-paths tree rooted at s is a directed subgraph G' = (V', E'), where V' ⊆ V and E' ⊆ E, such that
1. V' is the set of vertices reachable from s in G,
2. G' forms a rooted tree with root s, and
3. for all v ∈ V', the unique simple path from s to v in G' is a shortest path from s to v in G.

Shortest paths are not necessarily unique, and neither are shortest-paths trees.

Properties of Shortest Path:

1. Optimal substructure property: All subpaths of shortest paths are shortest paths.

Let P1 be the x–y subpath of a shortest s–v path P, and let P2 be any x–y path. Then cost(P1) ≤ cost(P2); otherwise P would not be a shortest s–v path.

2. Triangle inequality: Let d (v, w) be the length of the shortest path from v to w. Then,
d (v, w) ≤ d (v, x) + d (x, w)


3. Upper-bound property: We always have d[v] ≥ δ(s, v) for all vertices v ∈ V, and once d[v] reaches the value δ(s, v), it never changes.
4. No-path property: If there is no path from s to v, then we always have d[v] = δ(s, v) = ∞.
5. Convergence property: If s → u → v is a shortest path in G for some u, v ∈ V, and if d[u] = δ(s, u) at any time prior to relaxing edge (u, v), then d[v] = δ(s, v) at all times thereafter.

Relaxation

The single-source shortest-paths algorithms are based on a technique known as relaxation, a method that repeatedly decreases an upper bound on the actual shortest-path weight of each vertex until the upper bound equals the shortest-path weight. For each vertex v ∈ V, we maintain an attribute d [v], which is an upper bound on the weight of the shortest path from source s to v. We call d [v] the shortest-path estimate.

INITIALIZE - SINGLE - SOURCE (G, s)

1. for each vertex v ∈ V [G]


2. do d [v] ← ∞
3. π [v] ← NIL
4. d [s] ← 0

After initialization, π [v] = NIL for all v ∈ V, d [v] = 0 for v = s, and d [v] = ∞ for v ∈ V - {s}.

The process of relaxing an edge (u, v) consists of testing whether we can improve the shortest path to v found so far by going through u and, if so, updating d [v] and π [v]. A relaxation step may decrease the value of the shortest-path estimate d [v] and update v's predecessor field π [v].

Fig: Relaxing an edge (u, v) with weight w (u, v) = 2. The shortest-path estimate of each vertex appears within the vertex.


(a) Because v. d > u. d + w (u, v) prior to relaxation, the value of v. d decreases

(b) Here, v. d < u. d + w (u, v) before relaxing the edge, and so the relaxation step
leaves v. d unchanged.

The subsequent code performs a relaxation step on edge (u, v)

RELAX (u, v, w)
1. If d [v] > d [u] + w (u, v)
2. then d [v] ← d [u] + w (u, v)

3. π [v] ← u
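
In Python, the same relaxation step could look like the sketch below. The dictionaries d and pi stand in for the d and π attributes, and the example weights are chosen purely for illustration; none of these names come from the original notes.

import math

def relax(u, v, w, d, pi):
    """Relax edge (u, v) with weight w[(u, v)]: improve d[v] via u if possible."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]   # tighter shortest-path estimate for v
        pi[v] = u                 # record u as v's predecessor

# Illustrative values: d[u] = 5, d[v] = 9, w(u, v) = 2.
d = {'u': 5, 'v': 9}
pi = {'u': None, 'v': None}
relax('u', 'v', {('u', 'v'): 2}, d, pi)
print(d['v'], pi['v'])   # 7 u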


Dijkstra Algorithm

Dijkstra's algorithm is a single-source shortest-path algorithm. Here, single-source means that only one source is given, and we have to find the shortest path from that source to all the nodes.

Let's understand the working of Dijkstra's algorithm. Consider the below graph.

First, we have to consider any vertex as a source vertex. Suppose we consider vertex 0
as a source vertex.
We assume 0 as the source vertex and set the distance to all the other vertices to infinity; initially, we do not know the distances. First, we find the vertices which are directly connected to vertex 0. As we can observe in the above graph, two vertices are directly connected to vertex 0.


Let's assume that the vertex 0 is represented by 'x' and the vertex 1 is represented by 'y'.
The distance between the vertices can be calculated by using the below formula:

d(x, y) = d(x) + c(x, y) < d(y)


= (0 + 4) < ∞
=4<∞

Since 4 < ∞, we will update d(1) from ∞ to 4.

Therefore, we come to the conclusion that the formula for calculating the distance
between the vertices:

if (d(u) + c(u, v) < d(v))
    d(v) = d(u) + c(u, v)
Now we consider vertex 0 as 'x' and vertex 4 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)


= (0 + 8) < ∞
=8<∞

Therefore, the value of d(y) is 8. We replace the infinity value of vertices 1 and 4 with the
values 4 and 8 respectively. Now, we have found the shortest path from the vertex 0 to 1
and 0 to 4. Therefore, vertex 0 is selected. Now, we will compare all the vertices except
the vertex 0. Since vertex 1 has the lowest value, i.e., 4; therefore, vertex 1 is selected.

Since vertex 1 is selected, so we consider the path from 1 to 2, and 1 to 4. We will not
consider the path from 1 to 0 as the vertex 0 is already selected.

First, we calculate the distance between the vertex 1 and 2. Consider the vertex 1 as 'x',
and the vertex 2 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)


= (4 + 8) < ∞
= 12 < ∞

Since 12<∞ so we will update d(2) from ∞ to 12.


Now, we calculate the distance between the vertex 1 and vertex 4. Consider the vertex 1
as 'x' and the vertex 4 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)


= (4 + 11) < 8
= 15 < 8
Since 15 is not less than 8, we will not update the value d(4) from 8 to 15.
Till now, two nodes have been selected, i.e., 0 and 1. Now we have to compare the
nodes except the node 0 and 1. The node 4 has the minimum distance, i.e., 8. Therefore,
vertex 4 is selected.
Since vertex 4 is selected, so we will consider all the direct paths from the vertex 4. The
direct paths from vertex 4 are 4 to 0, 4 to 1, 4 to 8, and 4 to 5. Since the vertices 0 and 1
have already been selected so we will not consider the vertices 0 and 1. We will
consider only two vertices, i.e., 8 and 5.
First, we consider the vertex 8. First, we calculate the distance between the vertex 4 and
8. Consider the vertex 4 as 'x', and the vertex 8 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (8 + 7) < ∞

= 15 < ∞

Since 15 is less than the infinity so we update d(8) from infinity to 15.
Now, we consider the vertex 5. First, we calculate the distance between the vertex 4 and
5. Consider the vertex 4 as 'x', and the vertex 5 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (8 + 1) < ∞
= 9 < ∞
Since 9 is less than infinity, we update d(5) from infinity to 9.
Till now, three nodes have been selected, i.e., 0, 1, and 4. Now we have to compare the
nodes except the nodes 0, 1 and 4. The node 5 has the minimum value, i.e., 9. Therefore,
vertex 5 is selected.
Since the vertex 5 is selected, so we will consider all the direct paths from vertex 5. The
direct paths from vertex 5 are 5 to 8, and 5 to 6.
First, we consider the vertex 8. First, we calculate the distance between the vertex 5 and
8. Consider the vertex 5 as 'x', and the vertex 8 as 'y'.


d(x, y) = d(x) + c(x, y) < d(y)


= (9 + 15) < 15
= 24 < 15
Since 24 is not less than 15 so we will not update the value d(8) from 15 to 24.
Now, we consider the vertex 6. First, we calculate the distance between the vertex 5 and
6. Consider the vertex 5 as 'x', and the vertex 6 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (9 + 2) < ∞
= 11 < ∞
Since 11 is less than infinity, we update d(6) from infinity to 11.
Till now, nodes 0, 1, 4 and 5 have been selected. We will compare the nodes except the
selected nodes. The node 6 has the lowest value as compared to other nodes.
Therefore, vertex 6 is selected.
Since vertex 6 is selected, we consider all the direct paths from vertex 6. The direct
paths from vertex 6 are 6 to 2, 6 to 3, and 6 to 7.
First, we consider the vertex 2. Consider the vertex 6 as 'x', and the vertex 2 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (11 + 4) < 12
= 15 < 12

Since 15 is not less than 12, we will not update d(2) from 12 to 15
Now we consider the vertex 3. Consider the vertex 6 as 'x', and the vertex 3 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (11 + 14) < ∞
= 25 < ∞
Since 25 is less than ∞, so we will update d(3) from ∞ to 25.
Now we consider the vertex 7. Consider the vertex 6 as 'x', and the vertex 7 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (11 + 10) < ∞
= 21 < ∞

Since 21 is less than ∞, we will update d(7) from ∞ to 21.


Till now, nodes 0, 1, 4, 5, and 6 have been selected. Now we have to compare all the unvisited nodes, i.e., 2, 3, 7, and 8. Node 2 has the minimum value, i.e., 12, among all the unvisited nodes; therefore, node 2 is selected.
Since node 2 is selected, so we consider all the direct paths from node 2. The direct
paths from node 2 are 2 to 8, 2 to 6, and 2 to 3.
First, we consider the vertex 8. Consider the vertex 2 as 'x' and 8 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (12 + 2) < 15
= 14 < 15
Since 14 is less than 15, we will update d(8) from 15 to 14.
Now, we consider the vertex 6. Consider the vertex 2 as 'x' and 6 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (12 + 4) < 11
= 16 < 11
Since 16 is not less than 11 so we will not update d(6) from 11 to 16.
Now, we consider the vertex 3. Consider the vertex 2 as 'x' and 3 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (12 + 7) < 25
= 19 < 25
Since 19 is less than 25, we will update d(3) from 25 to 19.
Till now, nodes 0, 1, 2, 4, 5, and 6 have been selected. We compare all the unvisited nodes, i.e., 3, 7, and 8. Among them, node 8 has the minimum value, i.e., 14; therefore, node 8 is selected. The nodes directly connected to node 8 are 2, 4, and 5. Since all of these have already been selected, no values are updated.
The unvisited nodes are now 3 and 7. Among the nodes 3 and 7, node 3 has the minimum value, i.e., 19; therefore, node 3 is selected. The nodes which are directly connected to node 3 are 2, 6, and 7. Since nodes 2 and 6 have already been selected, we will not consider them. Now, we consider the vertex 7. Consider the vertex 3 as 'x' and 7 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
= (19 + 9) < 21
= 28 < 21
Since 28 is not less than 21, we will not update d(7) from 21 to 28.
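
The same pattern continues until every node is finalized. The sketch below is a heap-based Python version of Dijkstra's algorithm, with the edge costs reconstructed from the values quoted in the walkthrough above (edges not mentioned there are omitted), so it is an illustrative reconstruction of the example rather than the original figure.

import heapq

def dijkstra(adj, source=0):
    """adj: dict vertex -> list of (neighbour, cost). Returns dict of shortest distances."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    visited = set()
    pq = [(0, source)]
    while pq:
        d_u, u = heapq.heappop(pq)          # pick the unvisited vertex with minimum d
        if u in visited:
            continue
        visited.add(u)
        for v, c in adj[u]:
            if d_u + c < dist[v]:           # the relaxation test d(u) + c(u, v) < d(v)
                dist[v] = d_u + c
                heapq.heappush(pq, (dist[v], v))
    return dist

# Edge costs as quoted in the walkthrough above (treated as undirected).
edges = [(0, 1, 4), (0, 4, 8), (1, 2, 8), (1, 4, 11), (4, 8, 7), (4, 5, 1),
         (5, 8, 15), (5, 6, 2), (6, 2, 4), (6, 3, 14), (6, 7, 10),
         (2, 8, 2), (2, 3, 7), (3, 7, 9)]
adj = {v: [] for v in range(9)}
for u, v, c in edges:
    adj[u].append((v, c))
    adj[v].append((u, c))
print(dijkstra(adj))
# expected: {0: 0, 1: 4, 2: 12, 3: 19, 4: 8, 5: 9, 6: 11, 7: 21, 8: 14}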
