Design & Analysis of Algorithm – 3

The document provides an overview of Greedy Algorithms, detailing their components, applications, and examples such as the Knapsack problem and Selection Sort. It explains the greedy method as a technique for solving optimization problems by making decisions based on current information without considering future consequences. Additionally, it discusses the advantages and disadvantages of using greedy algorithms, emphasizing their effectiveness in certain scenarios while noting potential pitfalls in achieving optimal solutions.

Parul University

Session: July-Dec. 2024


203105374: Design & Analysis of Algorithm

Unit-3 Greedy Algorithm
● Greedy Methods: Introduction and Problem statements
● Selection Sort
● Greedy methods with examples such as Optimal Reliability Allocation
● Knapsack problem
● Minimum Spanning Trees – Prim’s and Kruskal’s algorithms
● Single-source shortest paths – Dijkstra’s and Bellman-Ford algorithms

Greedy Methods
● The greedy method is one of the strategies, like divide and conquer, used to solve problems.
● This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand this through some terms.
● The greedy method is the simplest and most straightforward approach. It is not a single algorithm but a technique.
● The defining feature of this approach is that each decision is taken on the basis of the currently available information.
● Whatever information is currently present, the decision is made without worrying about the effect of that decision in the future.

Cont…
● This technique is used to determine a feasible solution, which may or may not be optimal.
● A feasible solution is a subset that satisfies the given criteria. The optimal solution is the best and most favorable solution among the feasible ones.
● If more than one solution satisfies the given criteria, all of them are considered feasible, whereas the optimal solution is the best solution among all of them.
● Feasible solution: most problems have n inputs, and a solution contains a subset of the inputs that satisfies a given constraint (condition). Any subset that satisfies the constraint is called a feasible solution.
● Optimal solution: the goal is to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called the optimal solution.

Components of Greedy Algorithm
● The components used in a greedy algorithm are:
1) Candidate set: the set of inputs from which a solution is created.
2) Selection function: chooses the candidate or subset that can be added to the solution.
3) Feasibility function: determines whether a candidate or subset can contribute to the solution or not.
4) Objective function: assigns a value to the solution or partial solution.
5) Solution function: indicates whether a complete solution has been reached or not.

Algorithm for Greedy method
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs
{
    solution := ∅;    // initialize the solution to empty
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
● Select: a function that selects an input from a[] and removes it. The selected input’s value is assigned to x.
● Feasible: a Boolean-valued function that determines whether x can be included in the solution vector.
● Union: a function that combines x with the solution and updates the objective function.
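To make the roles of Select, Feasible, and Union concrete, here is a minimal runnable Python sketch of the skeleton above. The change-making example and the three helper functions are illustrative assumptions, not part of the slides.

# A minimal Python sketch of the abstract Greedy(a, n) skeleton above.
# select, feasible, and union are problem-specific callables
# (hypothetical helpers supplied by the caller).

def greedy(a, select, feasible, union):
    a = list(a)                      # working copy; select() removes items from it
    solution = []                    # empty partial solution
    while a:
        x = select(a)                # pick (and remove) the best remaining input
        if feasible(solution, x):
            solution = union(solution, x)
    return solution

# Illustration: greedy change-making, always trying the largest coin first.
coins = [25, 25, 10, 10, 10, 5, 1, 1, 1, 1, 1]
target = 48

def select(a):
    x = max(a)                       # locally best choice: the largest coin
    a.remove(x)
    return x

def feasible(sol, x):
    return sum(sol) + x <= target    # adding x must not overshoot the target

def union(sol, x):
    return sol + [x]

print(greedy(coins, select, feasible, union))  # [25, 10, 10, 1, 1, 1] -> sums to 48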

Applications of Greedy Algorithm
● It is used in finding the shortest path.
● It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
● It is used in job sequencing with deadlines.
● It is also used to solve the fractional knapsack problem.

Disadvantages of using Greedy algorithm


● A greedy algorithm makes decisions based on the information available at each phase without considering the broader problem. Hence, there is a possibility that the greedy solution does not give the best solution for every problem.

Cont..
● It makes the locally optimal choice at each stage with the intent of finding the global optimum. Let's understand this through an example.
● Consider a graph with three feasible paths from a source to a destination:
We have to travel from the source to the destination at minimum cost. Suppose the three feasible paths have costs 10, 20, and 5. Since 5 is the minimum cost, that path is the optimal solution. This is the local optimum, and in this way we find the local optimum at each stage in order to build the globally optimal solution.

Example
● Let's understand this through an example. Suppose there is a problem 'P': I want to travel from A to B, shown as below:
● P: A → B
● The problem is that we have to make this journey from A to B. There are various ways to go from A to B: we can go by walking, car, bike, train, aeroplane, etc. There is a constraint on the journey: it must be completed within 12 hrs. Only if I go by train or aeroplane can I cover this distance within 12 hrs. There are many solutions to this problem, but only two solutions satisfy the constraint.

● Suppose we also have to cover the journey at the minimum cost. This means we have to travel this distance as cheaply as possible, so this problem is known as a minimization problem. So far, we have two feasible solutions: one by train and one by air. Since travelling by train leads to the minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but it is the one providing the best result, so the solution with the minimum cost is the optimal solution. There is only one optimal solution.

● A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.

Greedy Algorithm Examples
● Most networking algorithms use the greedy approach. Here is a list of a few of them:
1. Travelling Salesman Problem
2. Prim's Minimal Spanning Tree Algorithm
3. Kruskal's Minimal Spanning Tree Algorithm
4. Dijkstra's Shortest Path Algorithm
5. Graph - Map Coloring
6. Knapsack Problem
7. Job Scheduling Problem

Selection Sort
● In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and placed at its appropriate position in the array. It is also one of the simplest algorithms. It is an in-place comparison sorting algorithm. In this algorithm, the array is divided into two parts: the sorted part and the unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.

● Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
● First pass:
● For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. 64 is currently stored at the first position; after traversing the whole array, it is clear that 11 is the lowest value.
● Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the array, appears in the first position of the sorted list.

Second Pass:
● For the second position, where 25 is present, again traverse the rest of the
array in a sequential manner.
● After traversing, we found that 12 is the second lowest value in the array
and it should appear at the second place in the array, thus swap these
values.

Third Pass:
● Now, for the third place, where 25 is present, again traverse the rest of the array and find the third least value present in the array.
● While traversing, 22 came out to be the third least value, and it should appear at the third place in the array; thus swap 22 with the element present at the third position.

Fourth pass:
● Similarly, for the fourth position, traverse the rest of the array and find the fourth least element in the array.
● As 25 is the 4th lowest value, it is placed at the fourth position.

Fifth Pass:
● At last, the largest value present in the array automatically gets placed at the last position in the array.
● The resulting array is the sorted array.

Algorithm
1. Step 1 − Set MIN to location 0
2. Step 2 − Search the minimum element in the list
3. Step 3 − Swap with value at location MIN
4. Step 4 − Increment MIN to point to next element
5. Step 5 − Repeat until list is sorted

procedure selectionSort(list, n)
   // list: array of items, n: size of list
   for i = 1 to n - 1
      /* set current element as minimum */
      min = i
      /* check the element to be minimum */
      for j = i + 1 to n
         if list[j] < list[min] then
            min = j
         end if
      end for
      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if
   end for
end procedure
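As a runnable counterpart to the pseudocode above, here is a short Python version (a sketch; it reuses the array from the walkthrough above).

# Python version of the selection sort pseudocode above.

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_idx = i                      # assume the current element is the minimum
        for j in range(i + 1, n):        # scan the unsorted part for a smaller value
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                 # swap the minimum into position i
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]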

Complexity Analysis of Selection Sort
● Time Complexity: The time complexity of selection sort is O(N²), as there are two nested loops:
● One loop to select an element of the array one by one = O(N)
● Another loop to compare that element with every other array element = O(N)
● Therefore the overall complexity = O(N) × O(N) = O(N²)

Knapsack Problem
● Suppose you have been given a knapsack or bag with a limited weight capacity, and each item has some weight and value. The problem is: “Which items should be placed in the knapsack such that the weight limit is not exceeded and the total value of the items is as large as possible?”

● Consider a real-life example. Suppose a thief enters a museum carrying a knapsack, or we can say a bag, that has limited weight capacity. The museum contains various items of different values. The thief must decide which items to keep in the bag so that the profit becomes maximum.

Some important points related to the knapsack problem are:
● It is a combinatorial optimization problem.
● Given a set of N items, usually numbered from 1 to N, each of these items has a mass w_i and a value v_i.
● It determines the number of each item to include in a collection so that the total weight is less than or equal to a given limit M and the total value is as large as possible.

● Knapsack Problem Variants:


1. 0/1 knapsack problem
2. Fractional knapsack problem

1. 0/1 knapsack problem
● A knapsack means a bag. This problem is solved by using a dynamic programming approach. In this problem, an item is either placed completely in the knapsack or not at all: x = 1 means the item is taken completely, and x = 0 means it is not taken.
● For example, suppose we have two items having weights of 12 kg and 13 kg, respectively. If we pick the 12 kg item, we cannot take just 10 kg of it (because the item is not divisible); we have to pick the 12 kg item completely or leave it.
● In this problem, we cannot take fractions of the items. Here, we have to decide whether to take the item (x = 1) or not (x = 0).
● The greedy approach does not provide the optimal result for this problem. For example, with capacity 50 and items (weight, value) = (10, 60), (20, 100), (30, 120), greedy selection by value/weight ratio takes the first two items for a total value of 160, while the optimal choice is the last two items with a total value of 220.
2. Fractional knapsack problem
● This problem is also used for solving real-world problems. It is solved by using the greedy approach. In this problem we can divide the items, i.e., we can take a fractional part of an item; that is why it is called the fractional knapsack problem.
● For example, if we have an item of 13 kg, we can take 12 kg of it and leave the remaining 1 kg. To solve the fractional problem, we first compute the value per unit weight of each item.

Fractional Knapsack Problem
● Given the weights and profits of N items, in the form of {profit, weight} put
these items in a knapsack of capacity W to get the maximum total profit in
the knapsack. In Fractional Knapsack, we can break items for maximizing the
total value of the knapsack.
● Greedy approach: In the greedy approach, we calculate the profit/weight ratio of each item and select items accordingly. The item with the highest ratio is selected first.
● There are basically three approaches to solve the problem:
○ The first approach is to select the item based on the maximum profit.
○ The second approach is to select the item based on the minimum weight.
○ The third approach is to calculate the ratio of profit/weight.

Pseudocode of the fractional knapsack problem:

GREEDY_FRACTIONAL_KNAPSACK(X, V, W, M)
// X: items, V: profits, W: weights, M: knapsack capacity
S ← Φ        // set of selected items, initially empty
SW ← 0       // weight of selected items
SP ← 0       // profit of selected items
i ← 1
while i ≤ n do
    if (SW + W[i]) ≤ M then
        S ← S ∪ X[i]
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M – SW) / W[i]
        S ← S ∪ X[i] * frac     // add fraction of item X[i]
        SP ← SP + V[i] * frac   // add fraction of profit
        SW ← SW + W[i] * frac   // add fraction of weight
    end
    i ← i + 1
end
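Below is a runnable Python sketch of this pseudocode. It assumes items are considered in decreasing profit/weight order (the ratio-based approach evaluated in the example that follows) and returns only the total profit for brevity.

# Greedy fractional knapsack: take items in decreasing profit/weight
# order, splitting the last item if it does not fit completely.

def fractional_knapsack(profits, weights, capacity):
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i],
                   reverse=True)
    total_profit = 0.0
    remaining = capacity
    for i in order:
        if remaining == 0:
            break
        if weights[i] <= remaining:          # the whole item fits
            total_profit += profits[i]
            remaining -= weights[i]
        else:                                # take only the fitting fraction
            frac = remaining / weights[i]
            total_profit += profits[i] * frac
            remaining = 0
    return total_profit

# Data from the worked example that follows; the expected answer is 51.
print(fractional_knapsack([5, 10, 15, 7, 8, 9, 4],
                          [1, 3, 5, 4, 1, 3, 2], 15))  # 51.0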

Example
● Objects: 1 2 3 4 5 6 7

● Profit (P): 5 10 15 7 8 9 4

● Weight(w): 1 3 5 4 1 3 2

● W (Weight of the knapsack): 15

● n (no of items): 7

First approach:
The first approach is to select the item based on the maximum profit.

Object   Profit          Weight   Remaining weight
3        15              5        15 - 5 = 10
2        10              3        10 - 3 = 7
6        9               3        7 - 3 = 4
5        8               1        4 - 1 = 3
4        7 × ¾ = 5.25    3        3 - 3 = 0

(Object 4 weighs 4 kg; only 3 kg of capacity remains, so we take a ¾ fraction of it.)

● The total profit would be equal to (15 + 10 + 9 + 8 + 5.25) = 47.25

Second approach:
The second approach is to select the item based on the minimum weight.

Object   Profit          Weight   Remaining weight
1        5               1        15 - 1 = 14
5        8               1        14 - 1 = 13
7        4               2        13 - 2 = 11
2        10              3        11 - 3 = 8
6        9               3        8 - 3 = 5
4        7               4        5 - 4 = 1
3        15 × 1/5 = 3    1        1 - 1 = 0

In this case, the total profit would be equal to (5 + 8 + 4 + 10 + 9 + 7 + 3) = 46.
Third approach:
In the third approach, we calculate the profit/weight ratio of each object and select objects based on the maximum ratio.

Objects:    1  2  3  4  5  6  7
Profit (P): 5  10 15 7  8  9  4
Weight (w): 1  3  5  4  1  3  2

First calculate the profit/weight ratios:
● Object 1: 5/1 = 5
● Object 2: 10/3 = 3.33
● Object 3: 15/5 = 3
● Object 4: 7/4 = 1.75
● Object 5: 8/1 = 8
● Object 6: 9/3 = 3
● Object 7: 4/2 = 2

P/W: 5  3.33  3  1.75  8  3  2

Since the P/W of object 5 is the maximum, we select object 5 first.

Third approach (continued):
P/W: 5  3.33  3  1.75  8  3  2

Object   Profit   Weight   Remaining weight
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2
7        4        2        2 - 2 = 0

Conclusion
● As we can observe in the above table, the remaining weight is zero, which means that the knapsack is full. We cannot add more objects to the knapsack.
● Therefore, the total profit would be equal to (8 + 5 + 10 + 15 + 9 + 4), i.e., 51.
● In the first approach, the maximum profit is 47.25. The maximum profit in the second approach is 46. The maximum profit in the third approach is 51.
● Therefore, we can say that the third approach, i.e., the maximum profit/weight ratio, is the best approach among the three.

Complexity Analysis
● The fractional knapsack problem can be solved by first sorting the items according to their value-to-weight ratio, which can be done in O(N log N).
● This approach starts with the item of highest ratio and takes as much of it as possible, then considers the next item from the sorted list, and so on; this is a linear scan taking O(N) time.
● Therefore, the overall running time is O(N log N) plus O(N), which equals O(N log N). We can say that the fractional knapsack problem can be solved much faster than the 0/1 knapsack problem.
● Time Complexity: O(N log N)
Auxiliary Space: O(N)

Minimum Spanning Tree
A minimum spanning tree (MST) is defined as a spanning tree that has the minimum weight among all the possible spanning trees.

A spanning tree is defined as a tree-like subgraph of a connected, undirected graph that includes all the vertices of the graph. It is a subset of the edges of the graph that forms a tree (acyclic) where every node of the graph is a part of the tree.

The minimum spanning tree has all the properties of a spanning tree, with the added constraint of having the minimum possible total weight among all possible spanning trees. Like spanning trees, there can be many possible MSTs for a graph.

Properties of a Spanning Tree:
● The number of vertices (V) in the graph and the spanning tree is the same.
● There is a fixed number of edges in the spanning tree which is equal to one less
than the total number of vertices ( E = V-1 ).
● The spanning tree should not be disconnected; there should be only a single connected component, not more than that.
● The spanning tree should be acyclic, which means there would not be any cycle
in the tree.
● The total cost (or weight) of the spanning tree is defined as the sum of the edge
weights of all the edges of the spanning tree.
● There can be many possible spanning trees for a graph.

Algorithms to find Minimum Spanning Tree:

1. Kruskal’s Minimum Spanning Tree Algorithm

2. Prim’s Minimum Spanning Tree Algorithm

Kruskal’s Minimum Spanning Tree Algorithm:
The algorithm workflow is as below:
1. First, it sorts all the edges of the graph by their weights.
2. Then it starts the iterations of finding the spanning tree.
3. At each iteration, the algorithm adds the next lowest-weight edge, provided that the edges picked so far do not form a cycle.

● This algorithm can be implemented efficiently using a DSU (Disjoint Set Union) data structure to keep track of the connected components of the graph.
● It is used in a variety of practical applications such as network design, clustering, and data analysis.

Algorithm
MST-KRUSKAL(G, w)
1. A ← ∅
2. for each vertex v ∈ V[G]
3.     do MAKE-SET(v)
4. sort the edges of E into non-decreasing order by weight w
5. for each edge (u, v) ∈ E, taken in non-decreasing order by weight
6.     do if FIND-SET(u) ≠ FIND-SET(v)
7.         then A ← A ∪ {(u, v)}
8.              UNION(u, v)
9. return A
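The following Python sketch implements MST-KRUSKAL with a small DSU helper (find with path compression, union by rank). The edge list reproduces the 9-vertex example below, on the assumption that it matches the slides' figure.

# Kruskal's MST using a disjoint-set (DSU) to detect cycles.

def kruskal(num_vertices, edges):
    """edges: iterable of (weight, u, v); returns the list of MST edges."""
    parent = list(range(num_vertices))
    rank = [0] * num_vertices

    def find(x):                          # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                      # UNION by rank; False if already joined
        rx, ry = find(x), find(y)
        if rx == ry:
            return False
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1
        return True

    mst = []
    for w, u, v in sorted(edges):         # non-decreasing order by weight
        if union(u, v):                   # accept the edge only if it adds no cycle
            mst.append((u, v, w))
    return mst

# Edges (weight, source, destination) from the 9-vertex example below.
edges = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5),
         (6, 8, 6), (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2),
         (9, 3, 4), (10, 5, 4), (11, 1, 7), (14, 3, 5)]
print(sum(w for _, _, w in kruskal(9, edges)))  # total MST weight: 37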

● Example 1

● The graph contains 9 vertices and 14 edges, so the minimum spanning tree formed will have (9 – 1) = 8 edges.
After sorting, the edges are:

● Now pick all edges one by one from the sorted list of edges
● Step 1: Pick edge 7-6. No cycle is formed, include it.
Weight Source Destination
1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5

● Step 2: Pick edge 8-2. No cycle is formed, include it.

● Step 3: Pick edge 6-5. No cycle is formed, include it.

● Step 4: Pick edge 0-1. No cycle is formed, include it.


● Step 5: Pick edge 2-5. No cycle is formed, include it.

● Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge 2-3: no cycle is formed, include it.
Step 7: Pick edge 7-8. Since including this edge results in a cycle, discard it. Pick edge 0-7. No cycle is formed, include it.

Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge 3-4. No cycle is formed, include it.

Complexity Analysis
● Analysis: Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently O(E log V) time, with simple data structures. These running times are equivalent because:
● E is at most V², and log V² = 2 × log V, which is O(log V).
● If we ignore isolated vertices, which will each be their own component of the minimum spanning tree, V ≤ 2E, so log V is O(log E).
● Thus the total time is
● O(E log E) = O(E log V).

● O(E log E) or O(E log V)
● Sorting of edges takes O(E log E) time.
● After sorting, we iterate through all edges and apply the find-union algorithm. The find and union operations can take at most O(log V) time.
● So the overall complexity is O(E log E + E log V) time.
● The value of E can be at most O(V²), so O(log V) and O(log E) are the same. Therefore, the overall time complexity is O(E log E) or O(E log V).
● Auxiliary Space: O(V + E), where V is the number of vertices and E is the number of edges in the graph.

Prim’s algorithm:
● Like Kruskal’s algorithm, Prim’s algorithm is also a greedy algorithm. This algorithm always starts with a single node and moves through several adjacent nodes, in order to explore all of the connected edges along the way.
● The algorithm starts with an empty spanning tree.
● The idea is to maintain two sets of vertices. The first set contains the vertices already included in the MST, and the other set contains the vertices not yet included.
● At every step, it considers all the edges that connect the two sets and picks the minimum-weight edge among them. After picking the edge, it moves the edge’s other endpoint into the MST set.
● A group of edges that connects two sets of vertices in a graph is called a cut in graph theory.

● So, at every step of Prim’s algorithm, find a cut, pick the minimum-weight edge from the cut, and include the edge’s new endpoint in the MST set (the set that contains the already included vertices).

How does Prim’s Algorithm Work?
● The working of Prim’s algorithm can be described by using the following
steps:
1. Step 1: Determine an arbitrary vertex as the starting vertex of the MST.

2. Step 2: Follow steps 3 to 5 till there are vertices that are not included in the MST (known as fringe vertices).
3. Step 3: Find edges connecting any tree vertex with the fringe vertices.

4. Step 4: Find the minimum among these edges.

5. Step 5: Add the chosen edge to the MST if it does not form any cycle.

6. Step 6: Return the MST and exit

Illustration of Prim’s Algorithm:
● Consider the following graph as an example for which we need to find the
Minimum Spanning Tree (MST).

Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of
the Minimum Spanning Tree. Here we have selected vertex 0 as the starting
vertex.

● Step 2: All the edges connecting the incomplete MST and other
vertices are the edges {0, 1} and {0, 7}. Between these two the edge
with minimum weight is {0, 1}. So include the edge and vertex 1 in the
MST.

Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1, 7} and
{1, 2}. Among these edges the minimum weight is 8 which is of the edges {0, 7} and {1,
2}. Let us here include the edge {0, 7} and the vertex 7 in the MST. [We could have also
included edge {1, 2} and vertex 2 in the MST].

Step 4: The edges that connect the incomplete MST with the fringe vertices are
{1, 2}, {7, 6} and {7, 8}. Add the edge {7, 6} and the vertex 6 in the MST as it has
the least weight (i.e., 1).

Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include
edge {6, 5} and vertex 5 in the MST as the edge has the minimum weight (i.e.,
2) among them.

Step 6: Among the current connecting edges, the edge {5, 2} has the minimum
weight. So include that edge and the vertex 2 in the MST.

Step 7: The connecting edges between the incomplete MST and the other
edges are {2, 8}, {2, 3}, {5, 3} and {5, 4}. The edge with minimum weight is edge
{2, 8} which has weight 2. So include this edge and the vertex 8 in the MST.

Step 8: See here that the edges {7, 8} and {2, 3} both have the same minimum weight. But both endpoints of {7, 8} (vertices 7 and 8) are already part of the MST, so including it would create a cycle. So we consider the edge {2, 3} and include that edge and vertex 3 in the MST.

Step 9: Only the vertex 4 remains to be included. The minimum weighted edge
from the incomplete MST to 4 is {3, 4}.

The final structure of the MST is as follows and the weight of
the edges of the MST is (4 + 8 + 1 + 2 + 4 + 2 + 7 + 9) = 37.

Note: If we had selected the edge {1, 2} in the third step then the MST would
look like the following.

How to implement Prim’s Algorithm?
● Follow the given steps to utilize the Prim’s Algorithm mentioned above for finding
MST of a graph:
1. Create a set mstSet that keeps track of vertices already included in MST.

2. Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE.
Assign the key value as 0 for the first vertex so that it is picked first.

3. While mstSet doesn’t include all vertices

a) Pick a vertex u that is not there in mstSet and has a minimum key value.

b) Include u in the mstSet.

c) Update the key value of all adjacent vertices of u. To update the key values, iterate
through all adjacent vertices.

1) For every adjacent vertex v, if the weight of edge u-v is less than the previous key
value of v, update the key value as the weight of u-v.
● The idea of using key values is to pick the minimum-weight edge from the cut. The key values are used only for vertices that are not yet included in the MST; the key value for such a vertex indicates the minimum weight of an edge connecting it to the set of vertices already included in the MST.

Time Complexity:
● O(V²) with an adjacency matrix. If the input graph is represented using an adjacency list, then the time complexity of Prim’s algorithm can be reduced to O(E log V) with the help of a binary heap. In this implementation, we always consider the spanning tree to start from the root of the graph.
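As a hedged sketch of the heap-based implementation (lazy deletion: stale heap entries are skipped on pop), the Python code below achieves the O(E log V) bound noted above. The adjacency list is an assumption reconstructed from what appears to be the same 9-vertex example graph.

import heapq

# Prim's MST with a binary heap and lazy deletion of stale entries.

def prim(adj, start=0):
    """adj: {u: [(weight, v), ...]}; returns the total MST weight."""
    in_mst = set()
    total = 0
    heap = [(0, start)]                  # (key, vertex) candidates crossing the cut
    while heap and len(in_mst) < len(adj):
        key, u = heapq.heappop(heap)
        if u in in_mst:
            continue                     # stale entry; skip it
        in_mst.add(u)
        total += key
        for w, v in adj[u]:              # push edges crossing the new cut
            if v not in in_mst:
                heapq.heappush(heap, (w, v))
    return total

# Assumed 9-vertex example graph, as undirected (u, v, weight) edges.
edge_list = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
             (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
             (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
adj = {v: [] for v in range(9)}
for u, v, w in edge_list:
    adj[u].append((w, v))
    adj[v].append((w, u))
print(prim(adj))  # 37, matching the MST weight computed above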

Other Implementations of Prim’s Algorithm:
● Prim’s Algorithm for Adjacency Matrix Representation – implementing Prim’s algorithm when the graph is represented by an adjacency matrix.
● Prim’s Algorithm for Adjacency List Representation – Prim’s algorithm for graphs represented by an adjacency list.
● Prim’s Algorithm using Priority Queue – a time-efficient approach to implementing Prim’s algorithm.

Dijkstra's Algorithm
● Dijkstra's algorithm is a graph algorithm that finds the shortest path from a source vertex to all other vertices in the graph (single-source shortest path). It is a type of greedy algorithm that only works on weighted graphs with non-negative weights.
● The time complexity of Dijkstra's algorithm is O(V²) with the adjacency matrix representation of the graph.
● This time complexity can be reduced to O((V + E) log V) with the adjacency list representation of the graph, where V is the number of vertices and E is the number of edges in the graph.

● Given a graph and a source vertex in the graph, find the shortest paths from the source to all vertices in the given graph.
● Example:
● Input: src = 0, the graph is shown below.
● Output: 0 4 12 19 21 11 9 8 14
Explanation:
The minimum distance from 0 to 1 = 4 (path 0->1).
The minimum distance from 0 to 2 = 12 (path 0->1->2).
The minimum distance from 0 to 3 = 19 (path 0->1->2->3).
The minimum distance from 0 to 4 = 21 (path 0->7->6->5->4).
The minimum distance from 0 to 5 = 11 (path 0->7->6->5).
The minimum distance from 0 to 6 = 9 (path 0->7->6).
The minimum distance from 0 to 7 = 8 (path 0->7).
The minimum distance from 0 to 8 = 14 (path 0->1->2->8).

Dijkstra's shortest path algorithm for adjacency matrix in O(V²):
● The idea is to generate an SPT (shortest-path tree) with the given source as root. Maintain an adjacency matrix along with two sets:
● one set contains vertices included in the shortest-path tree,
● the other set includes vertices not yet included in the shortest-path tree.
● At every step of the algorithm, find a vertex that is in the other set (not yet included) and has the minimum distance from the source.

Follow the steps below to solve the problem:
● Create a set sptSet (shortest path tree set) that keeps track of vertices included in the
shortest path tree, i.e., whose minimum distance from the source is calculated and
finalized. Initially, this set is empty.
● Assign a distance value to all vertices in the input graph. Initialize all distance values
as INFINITE. Assign the distance value as 0 for the source vertex so that it is picked
first.
● While sptSet doesn’t include all vertices
○ Pick a vertex u that is not there in sptSet and has a minimum distance value.

○ Include u to sptSet.

○ Then update the distance value of all adjacent vertices of u.

■ To update the distance values, iterate through all adjacent vertices.

■ For every adjacent vertex v, if the sum of the distance value of u (from source) and weight of
edge u-v, is less than the distance value of v, then update the distance value of v.
Pseudocode:
function Dijkstra_Algorithm(Graph, source_node)
    // iterate through the nodes in Graph and set their distances to INFINITY
    for each node N in Graph:
        distance[N] = INFINITY
        previous[N] = NULL
        if N != source_node, add N to Priority Queue G

    // set the distance of the source node of the Graph to 0
    distance[source_node] = 0

    // iterate until the Priority Queue G is empty
    while G is NOT empty:
        // select the node Q having the least distance and mark it as visited
        Q = node in G with the least distance[]
        mark Q visited

        // iterate through the unvisited neighbouring nodes of Q and relax them
        for each unvisited neighbour node N of Q:
            temporary_distance = distance[Q] + distance_between(Q, N)

            // if the temporary distance is less than the current distance to N,
            // update the distance of N with the minimum value
            if temporary_distance < distance[N]:
                distance[N] = temporary_distance
                previous[N] = Q

    // return the final lists of distances and predecessors
    return distance[], previous[]

Illustration:
● We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a
value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is
used to store the shortest distance values of all vertices.
● To understand Dijkstra's algorithm, let's take a graph and find the shortest path from the source to all nodes.
● Consider the below graph and src = 0.

Step 1:
● The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF, INF, INF,
INF, INF} where INF indicates infinite.
● Now pick the vertex with a minimum distance value. The vertex 0 is picked, include it in sptSet.
So sptSet becomes {0}. After including 0 to sptSet, update distance values of its adjacent
vertices.
● Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as 4 and 8.
● The following subgraph shows vertices and their distance values, only the vertices with finite
distance values are shown. The vertices included in SPT are shown in green colour.

● Step 2:
● Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). The vertex 1 is picked and added to sptSet.
● So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices
of 1.
● The distance value of vertex 2 becomes 12.

● Step 3:
● Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}.
● Update the distance values of adjacent vertices of 7. The distance values of vertices 6 and 8 become finite (9 and 15 respectively).

Step 4:
Pick the vertex with minimum distance value and not already included in SPT (not
in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.
Update the distance values of adjacent vertices of 6. The distance value of vertex
5 and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the
given graph. Finally, we get the following Shortest Path Tree (SPT)

Example 2:

Output:-

● Distance from the Source Node to 1: 4


● Distance from the Source Node to 2: 12
● Distance from the Source Node to 3: 19
● Distance from the Source Node to 4: 12
● Distance from the Source Node to 5: 8
● Distance from the Source Node to 6: 10

Advantages of Dijkstra's Algorithm
1. One primary advantage of using Dijkstra's algorithm is that it has an almost linear time and space complexity.
2. We can use this algorithm to calculate the shortest path from a single vertex to all other vertices, and from a single source vertex to a single destination vertex by stopping the algorithm once the shortest distance to the destination vertex is found.
3. The algorithm works on weighted graphs (directed or undirected) as long as all the edge weights are non-negative.

Disadvantages of Dijkstra's Algorithm
1. Dijkstra's algorithm performs an essentially blind exploration of the graph, which can consume a lot of time during the process.
2. It is unable to handle negative edge weights.
3. Because it finalizes a vertex's distance as soon as the vertex is settled, it can fail to compute the exact shortest path when negative edges are present.
4. It also requires bookkeeping to keep a record of the vertices that have been visited.

Applications of Dijkstra's Algorithm
1. Digital Mapping Services in Google Maps
2. Social Networking Applications
3. Telephone Network
4. Flight Program
5. IP routing to find Open Shortest Path First
6. Robotic Path
7. Designate the File Server

Dijkstra’s shortest path algorithm for Adjacency List using Heap in
O(E logV):
● For Dijkstra’s algorithm, it is always recommended to use heap (or priority queue) as the
required operations (extract minimum and decrease key) match with the specialty of the
heap (or priority queue).
● However, the problem is that priority_queue doesn’t support the decrease-key operation. To resolve this, do not update a key; instead, insert one more copy of it. So we allow multiple instances of the same vertex in the priority queue. This approach doesn’t require decrease-key operations and has the following important properties.
● Whenever the distance of a vertex is reduced, we add one more instance of that vertex to the priority queue. Even if there are multiple instances, we only consider the instance with minimum distance and ignore the others.
● The time complexity remains O(E log V), as there will be at most O(E) entries in the priority queue, and O(log E) is the same as O(log V).
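A Python sketch of exactly this lazy-deletion strategy, using the standard heapq module (one possible way to structure it):

import heapq

# Dijkstra with a binary heap; instead of a decrease-key operation we
# push a new (distance, vertex) pair and discard stale pairs when popped.

def dijkstra_heap(adj, src):
    """adj: {u: [(v, weight), ...]}; returns {vertex: shortest distance}."""
    dist = {u: float('inf') for u in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                     # stale copy of u; ignore it
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w          # "decrease key" by inserting a fresh copy
                heapq.heappush(pq, (dist[v], v))
    return dist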

Complexity
● Time Complexity: O(E * logV), Where E is the number of edges and
V is the number of vertices.
● Auxiliary Space: O(V)

Bellman ford algorithm
● The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the shortest distance from a single vertex to all the other vertices of a weighted graph.
● There are various other algorithms used to find the shortest path, like Dijkstra's algorithm.
● If the weighted graph contains negative weight values, Dijkstra's algorithm is not guaranteed to produce the correct answer.
● In contrast to Dijkstra's algorithm, the Bellman-Ford algorithm guarantees the correct answer even if the weighted graph contains negative weight values, as long as no negative cycle is reachable from the source.

Steps for finding the shortest distance to all vertices from the source using the
Bellman-Ford algorithm:
● This step initializes distances from the source to all vertices as infinite and the distance to the source itself as 0. Create an array dist[] of size |V| with all values infinite except dist[src], where src is the source vertex.
● This step calculates the shortest distances. Repeat the following |V|-1 times, where |V| is the number of vertices in the given graph. For each edge u-v:
○ If dist[v] > dist[u] + weight of edge uv, then update dist[v] to
○ dist[v] = dist[u] + weight of edge uv

● This step reports if there is a negative weight cycle in the graph. Traverse every edge once more and do the following for each edge u-v:
○ If dist[v] > dist[u] + weight of edge uv, then report “Graph contains negative weight cycle”

Cont…
● The idea of step 3 is, step 2 guarantees the shortest distances if the graph
doesn’t contain a negative weight cycle. If we iterate through all edges one more
time and get a shorter path for any vertex, then there is a negative weight cycle
● Below is the illustration of the above algorithm:
○ Step 1: Let the given source vertex be 0. Initialize all distances as infinite, except the
distance to the source itself. Total number of vertices in the graph is 5, so all edges
must be processed 4 times.

● Step 2: Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A, B), (A, C), (D, C), (B, C), (E, D).
● We get the following distances when all edges are processed the first time. The
first row shows initial distances.
● The second row shows distances when edges (B, E), (D, B), (B, D) and (A, B) are
processed.
● The third row shows distances when (A, C) is processed. The fourth row shows
when (D, C), (B, C) and (E, D) are processed.

Step 3: The first iteration guarantees to give all shortest paths which are at
most 1 edge long. We get the following distances when all edges are
processed second time (The last row shows final values).

● Step 4: The second iteration guarantees to give all shortest paths
which are at most 2 edges long.
● The algorithm processes all edges 2 more times.
● The distances are minimized after the second iteration, so third
and fourth iterations don’t update the distances.

Example 2
● The following are the distances of vertices:
○ A: 0
○ B: 1
○ C: 3
○ D: 5
○ E: 0
○ F: 3

Complexity Analysis
● Time Complexity: O(V * E), where V is the number of vertices in
the graph and E is the number of edges in the graph
● Auxiliary Space: O(V), for the distance array of size |V|

Important Points
● Negative weights are found in various applications of graphs. For example,
instead of paying the cost for a path, we may get some advantage if we follow
the path.
● Bellman-Ford works better (better than Dijkstra’s) for distributed systems. Unlike
Dijkstra’s where we need to find the minimum value of all vertices, in Bellman-
Ford, edges are considered one by one.
● Bellman-Ford does not work with an undirected graph with negative edges, as each such edge behaves like a two-edge negative cycle and will be reported as one.

Algorithm
● function bellmanFord(G, S) ● previous[V] <- U
● for each vertex V in G ●
● distance[V] <- infinite ● for each edge (U,V) in G
● previous[V] <- NULL ● If distance[U] + edge_weight(U, V) < dis
● distance[S] <- 0 tance[V}
● ● Error: Negative Cycle Exists
● for each vertex V in G ●
● for each edge (U,V) in G ● return distance[], previous[]
● tempDistance <- distance[U] + edge_w
eight(U, V)
● if tempDistance < distance[V]
● distance[V] <- tempDistance
05/28/2025 Slide No.
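A runnable Python sketch of this pseudocode over an edge-list representation; the small test graph at the bottom is an illustrative assumption, not from the slides.

# Bellman-Ford over an edge list; raises if a negative cycle is detected.

def bellman_ford(num_vertices, edges, src):
    """edges: list of (u, v, weight); returns shortest distances from src."""
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[src] = 0
    for _ in range(num_vertices - 1):    # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass detects negative cycles
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("Graph contains a negative weight cycle")
    return dist

# Hypothetical example with one negative (but non-cyclic) edge.
test_edges = [(0, 1, 4), (0, 2, 5), (1, 3, 7), (2, 1, -2), (3, 2, 9)]
print(bellman_ford(4, test_edges, 0))  # [0, 3, 5, 10]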
Drawbacks of the Bellman-Ford algorithm
● The Bellman-Ford algorithm does not produce shortest-path distances if the sum of the edge weights around a cycle is negative, because no finite shortest path exists in that case; the algorithm can only detect and report such a negative cycle.

Source: https://www.geeksforgeeks.org/



Time for a Break!



Any Doubts/Questions



Thank You
