Design & Analysis of Algorithms – 3
Parul University
Unit-3 Greedy Algorithm
● Greedy Methods: Introduction and Problem Statements
● Selection Sort
● Greedy methods with examples such as Optimal Reliability Allocation
● Knapsack Problem
● Minimum Spanning Trees – Prim's and Kruskal's algorithms
● Single Source Shortest Paths – Dijkstra's and Bellman-Ford algorithms
Greedy Methods
● The greedy method is one of the strategies, like divide and conquer, used to solve problems.
● This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand this through some terms.
● The greedy method is the simplest and most straightforward approach. It is not an algorithm, but a technique.
● The main idea of this approach is that each decision is taken on the basis of the currently available information.
● Whatever information is currently present, the decision is made without worrying about the effect of that decision on the future.
Cont…
● This technique is basically used to determine a feasible solution that may or may not be optimal.
● A feasible solution is a subset that satisfies the given criteria. The optimal solution is the best and most favorable solution among the feasible ones.
● If more than one solution satisfies the given criteria, all of them are considered feasible, whereas the optimal solution is the best solution among all of them.
● Feasible solution: Most problems have n inputs, and a solution contains a subset of the inputs that satisfies a given constraint (condition). Any subset that satisfies the constraint is called a feasible solution.
● Optimal solution: The goal is to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.
Components of Greedy Algorithm
● The components that are used in a greedy algorithm are:
1) Candidate set: The set of elements from which a solution is created.
2) Selection function: This function is used to choose the candidate that can be added to the solution.
3) Feasibility function: A function that is used to determine whether a candidate can be used to contribute to the solution or not.
4) Objective function: A function used to assign a value to the solution or a partial solution.
5) Solution function: This function is used to indicate whether a complete solution has been reached or not.
Algorithm for Greedy method
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution := Ø;    // initialize the solution
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
● Select: a function that selects an input from a[] and removes it. The selected input's value is assigned to x.
● Feasible: a Boolean-valued function that determines whether x can be included in the solution vector.
● Union: a function that combines x with the solution and updates the objective function.
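To make this skeleton concrete, here is a minimal runnable Python sketch; the coin-change instance and the select/feasible/union helpers are assumptions chosen for illustration, not part of the original pseudocode.

# Generic greedy loop, mirroring Algorithm Greedy(a, n) above.
def greedy(candidates, select, feasible, union, solution):
    """Repeatedly select a candidate and, if feasible, merge it into the solution."""
    while candidates:
        x = select(candidates)              # choose the locally best candidate
        if feasible(solution, x):
            solution = union(solution, x)
        else:
            candidates.remove(x)            # this candidate can never help again
    return solution

# Assumed example: making change with canonical coin denominations.
def make_change(amount, coins=(25, 10, 5, 1)):
    candidates = list(coins)
    select = lambda cs: max(cs)                       # largest coin first
    feasible = lambda sol, c: sum(sol) + c <= amount  # must not overshoot the amount
    union = lambda sol, c: sol + [c]
    return greedy(candidates, select, feasible, union, [])

print(make_change(63))  # [25, 25, 10, 1, 1, 1]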
Applications of Greedy Algorithm
● It is used for finding the shortest path.
● It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
● It is used in job sequencing with deadlines.
● This algorithm is also used to solve the fractional knapsack problem.
Cont..
● It follows the locally optimal choice at each stage with the intent of finding the global optimum. Let's understand this through an example.
● Consider a graph with three paths from a source to a destination (figure omitted), with path costs 10, 20, and 5.
We have to travel from the source to the destination at the minimum cost. Since we have three feasible solutions, with path costs 10, 20, and 5, the path of cost 5 is the minimum-cost path, so it is the optimal solution. This is the local optimum, and in this way we find the local optimum at each stage in order to reach the global optimal solution.
Example
● Let's understand this through an example. Suppose there is a problem 'P': I want to travel from A to B, as shown below:
● P: A → B
● The problem is that we have to make this journey from A to B. There are various ways to go from A to B: on foot, or by car, bike, train, aeroplane, etc. There is a constraint on the journey: we have to complete it within 12 hrs. Only if I go by train or aeroplane can I cover this distance within 12 hrs. There are many solutions to this problem, but only two of them satisfy the constraint.
● Now suppose we have to make the journey at the minimum cost, i.e., travel as cheaply as possible; such a problem is known as a minimization problem. So far we have two feasible solutions, one by train and the other by air. Since travelling by train leads to the minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but one that provides the best result, so the solution with the minimum cost is the optimal solution. There would be only one optimal solution.
● A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.
Greedy Algorithm Examples
● Most networking algorithms use the greedy approach. Here is a list of a few of them:
1. Travelling Salesman Problem
2. Prim's Minimal Spanning Tree Algorithm
3. Kruskal's Minimal Spanning Tree Algorithm
4. Dijkstra's Shortest Path Algorithm
5. Graph - Map Coloring
6. Knapsack Problem
7. Job Scheduling Problem
Selection Sort
● In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and moved to its appropriate position in the array. It is one of the simplest sorting algorithms and is an in-place comparison sort. In this algorithm, the array is divided into two parts: the first is the sorted part, and the other is the unsorted part. Initially, the sorted part of the array is empty and the unsorted part is the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.
● Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
● First pass:
● For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. The first position currently stores 64; after traversing the whole array, it is clear that 11 is the lowest value.
● Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the array, appears at the first position of the sorted list.
Second Pass:
● For the second position, where 25 is currently present, again traverse the rest of the array in a sequential manner.
● After traversing, we find that 12 is the second lowest value in the array and should appear at the second place, so we swap these values.
Third Pass:
● Now, for the third place, where 25 is present, again traverse the rest of the array and find the third smallest value in it.
● While traversing, 22 turns out to be the third smallest value, and it should appear at the third place, so swap 22 with the element present at the third position.
Fourth Pass:
● Similarly, for the fourth position, traverse the rest of the array and find the fourth smallest element.
● As 25 is the fourth smallest value, it is placed at the fourth position.
Fifth Pass:
● At last, the largest value present in the array is automatically placed at the last position.
● The resulting array is the sorted array.
Algorithm
1. Step 1 − Set MIN to location 0
2. Step 2 − Search the minimum element in the list
3. Step 3 − Swap with value at location MIN
4. Step 4 − Increment MIN to point to next element
5. Step 5 − Repeat until list is sorted
procedure selection_sort
    list : array of items
    n    : size of list

    for i = 1 to n - 1
        /* set current element as minimum */
        min = i

        /* check the element to be minimum */
        for j = i + 1 to n
            if list[j] < list[min] then
                min = j
            end if
        end for

        /* swap the minimum element with the current element */
        if min != i then
            swap list[min] and list[i]
        end if
    end for
end procedure
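For concreteness, a minimal runnable Python version of the above procedure (using 0-based indexing instead of the 1-based pseudocode):

def selection_sort(arr):
    """In-place selection sort: grow a sorted prefix by repeatedly
    swapping the minimum of the unsorted suffix into position i."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        # find the index of the smallest element in arr[i+1:]
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]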
Complexity Analysis of Selection Sort
● Time Complexity: The time complexity of selection sort is O(N²), as there are two nested loops:
● One loop to select an element of the array one by one = O(N)
● Another loop to compare that element with every other element of the array = O(N)
● Therefore, the overall complexity = O(N) × O(N) = O(N²)
Knapsack Problem
● Suppose you have been given a knapsack or bag with a limited weight capacity, and each item has some weight and value. The problem is: "Which items should be placed in the knapsack so that the weight limit is not exceeded and the total value of the items is as large as possible?"
● Consider a real-life example. Suppose a thief enters a museum carrying a knapsack, i.e., a bag with a limited weight capacity. The museum contains various items of different values. The thief has to decide which items to keep in the bag so that the profit becomes maximum.
Some important points related to the knapsack problem are:
● It is a combinatorial optimization problem.
● Given a set of N items, usually numbered from 1 to N, each item has a weight wᵢ and a value vᵢ.
● It determines the number of each item to be included in a collection so that the total weight is less than or equal to a given limit M and the total value is as large as possible.
1. 0/1 Knapsack Problem
● A knapsack means a bag. In the 0/1 version, each item is either placed in the knapsack completely or not at all: x = 1 means the item is taken completely and x = 0 means it is not taken. This problem is solved using a dynamic programming approach.
● For example, suppose we have two items weighing 12 kg and 13 kg. If we pick the 12 kg item, we cannot take only 10 kg of it (because the item is not divisible); we have to pick the 12 kg item completely.
● In this problem, we cannot take fractions of items. For each item we have to decide whether to take it (x = 1) or not (x = 0).
● The greedy approach does not provide the optimal result for this problem.
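Since the 0/1 knapsack is solved by dynamic programming, here is a minimal DP sketch; the item data is an assumed example, not from the slides.

def knapsack_01(values, weights, capacity):
    """Classic 0/1 knapsack DP: dp[c] holds the best value achievable
    with capacity c using the items processed so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# assumed example instance
print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220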
2. Fractional Knapsack Problem
● This problem is also used for solving real-world problems, and it is solved using the greedy approach. In this problem we can divide the items, i.e., we can take a fractional part of an item; that is why it is called the fractional knapsack problem.
● For example, if we have an item of 13 kg, we can pick 12 kg of it and leave the remaining 1 kg. To solve the fractional problem, we first compute the value per unit weight of each item.
Fractional Knapsack Problem
● Given the weights and profits of N items in the form {profit, weight}, put these items in a knapsack of capacity W to obtain the maximum total profit in the knapsack. In the fractional knapsack, we can break items to maximize the total value in the knapsack.
● Greedy approach: we calculate the ratio of profit/weight for each item and select items accordingly; the item with the highest ratio is selected first.
● There are basically three approaches to solving the problem:
○ The first approach is to select the item based on the maximum profit.
○ The second approach is to select the item based on the minimum weight.
○ The third approach is to calculate the ratio of profit/weight.
Pseudocode of the fractional knapsack problem:

GREEDY_FRACTIONAL_KNAPSACK(X, V, W, M)
// X[1..n]: items sorted in decreasing order of V[i]/W[i]; M: knapsack capacity
S ← Φ      // set of selected items, initially empty
SW ← 0     // weight of selected items
SP ← 0     // profit of selected items
i ← 1
while i ≤ n do
    if (SW + W[i]) ≤ M then
        S ← S ∪ X[i]           // add whole item X[i]
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M – SW) / W[i]
        S ← S ∪ X[i] * frac    // add fraction of item X[i]
        SP ← SP + V[i] * frac  // add fraction of profit
        SW ← SW + W[i] * frac  // add fraction of weight
    end
    i ← i + 1
end
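A runnable Python transcription of this pseudocode; the tiny item instance is an assumption for illustration, with items pre-sorted by decreasing profit/weight ratio.

def greedy_fractional_knapsack(X, V, W, M):
    """X: item labels, V: profits, W: weights (sorted by V[i]/W[i] descending),
    M: knapsack capacity. Returns (selected fractions, total weight, total profit)."""
    S, SW, SP = [], 0.0, 0.0
    for i in range(len(X)):
        if SW + W[i] <= M:
            S.append((X[i], 1.0))      # add the whole item X[i]
            SW += W[i]
            SP += V[i]
        else:
            frac = (M - SW) / W[i]
            S.append((X[i], frac))     # add a fraction of item X[i]
            SP += V[i] * frac          # add the fraction of its profit
            SW += W[i] * frac          # add the fraction of its weight
            break                      # the knapsack is now full
    return S, SW, SP

# assumed tiny instance, already sorted by profit/weight ratio
print(greedy_fractional_knapsack(X=['a', 'b', 'c'], V=[60, 40, 30], W=[10, 40, 30], M=40))
# ([('a', 1.0), ('b', 0.75)], 40.0, 90.0)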
Example
● Number of items n = 7; knapsack capacity M = 15.

Object:     1    2    3    4    5    6    7
Profit P:   5   10   15    7    8    9    4
Weight w:   1    3    5    4    1    3    2
First approach:
The first approach is to select the item based on the maximum profit.

Object    Profit            Weight    Remaining capacity
3         15                5         15 - 5 = 10
2         10                3         10 - 3 = 7
6         9                 3         7 - 3 = 4
5         8                 1         4 - 1 = 3
4         7 × ¾ = 5.25      3         3 - 3 = 0

Total profit = 15 + 10 + 9 + 8 + 5.25 = 47.25
Second approach:
The second approach is to select the item based on the minimum weight.

Object    Profit    Weight    Remaining capacity
5         8         1         15 - 1 = 14
1         5         1         14 - 1 = 13
2         10        3         13 - 3 = 10
3         15        5         10 - 5 = 5
6         9         3         5 - 3 = 2
7         4         2         2 - 2 = 0

Total profit = 8 + 5 + 10 + 15 + 9 + 4 = 51
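The third approach (highest profit/weight ratio first) is the greedy strategy that is optimal for the fractional knapsack. A short Python check on the same data (capacity 15, as in the tables above) reproduces a total profit of 51:

# Third approach: pick items by decreasing profit/weight ratio.
profits = [5, 10, 15, 7, 8, 9, 4]
weights = [1, 3, 5, 4, 1, 3, 2]
capacity = 15

items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1], reverse=True)
total, remaining = 0.0, capacity
for p, w in items:
    take = min(w, remaining)   # the whole item if it fits, else the leftover capacity
    total += p * take / w
    remaining -= take
    if remaining == 0:
        break
print("Total profit:", total)  # Total profit: 51.0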
Minimum Spanning Tree
● The minimum spanning tree (MST) has all the properties of a spanning tree, with the added constraint of having the minimum possible total weight among all possible spanning trees. Like a spanning tree, there can also be many possible MSTs for a graph.

Kruskal's Algorithm: example walkthrough (graph figures omitted)
● The example graph contains 9 vertices and 14 edges, so the minimum spanning tree formed will have (9 – 1) = 8 edges.
● After sorting the edges in increasing order of weight, they are picked one by one (steps 1-5 omitted):
● Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge 2-3: no cycle is formed, include it.
● Step 7: Pick edge 7-8. Since including this edge results in a cycle, discard it. Pick edge 0-7: no cycle is formed, include it.
● Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge 3-4: no cycle is formed, include it.
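A minimal Python sketch of Kruskal's algorithm with union-find cycle detection; the small edge list is an assumed example, not the 9-vertex graph from the omitted figures.

def kruskal(num_vertices, edges):
    """Kruskal's MST: sort edges by weight and include each edge that
    does not create a cycle, detected via union-find."""
    parent = list(range(num_vertices))

    def find(x):
        # find the root of x's component, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # edges given as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                       # no cycle is formed: include the edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# assumed example graph with 4 vertices
edges = [(10, 0, 1), (6, 0, 2), (5, 0, 3), (15, 1, 3), (4, 2, 3)]
print(kruskal(4, edges))  # ([(2, 3, 4), (0, 3, 5), (0, 1, 10)], 19)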
Prim's Algorithm
1. Step 1: Determine an arbitrary vertex as the starting vertex of the MST.
2. Step 2: Follow steps 3 to 5 till there are vertices that are not included in the MST (known as fringe vertices).
3. Step 3: Find edges connecting any tree vertex with the fringe vertices.
4. Step 4: Find the minimum among these edges.
5. Step 5: Add the chosen edge to the MST if it does not form any cycle.
6. Step 6: Return the MST and exit.

Implementation using key values:
1. Create a set mstSet that keeps track of the vertices already included in the MST.
2. Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE. Assign the key value 0 to the first vertex so that it is picked first.
3. While mstSet does not include all vertices:
   a) Pick a vertex u that is not in mstSet and has the minimum key value.
   b) Include u in mstSet.
   c) Update the key value of all adjacent vertices of u. To update the key values, iterate through all adjacent vertices: for every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v, update the key value to the weight of u-v.
● The idea of using key values is to pick the minimum-weight edge from the cut. The key values are used only for vertices that are not yet included in the MST; for these vertices, the key value indicates the minimum-weight edge connecting them to the set of vertices already included in the MST.
Time Complexity:
● O(V²) with an adjacency-matrix representation. If the input graph is represented using an adjacency list, the time complexity of Prim's algorithm can be reduced to O(E log V) with the help of a binary heap. In this implementation, the spanning tree is always started from the root of the graph. A minimal sketch of this heap-based variant follows.
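The sketch below uses Python's heapq as the binary heap over an adjacency list; the graph representation and example data are assumptions for illustration.

import heapq

def prims_mst(graph, root=0):
    """Prim's algorithm with a binary heap over an adjacency list.
    graph: dict mapping vertex -> list of (weight, neighbor) pairs."""
    mst_set = set()
    mst_edges, total = [], 0
    heap = [(0, root, None)]                  # (key value, vertex, parent)
    while heap and len(mst_set) < len(graph):
        key, u, parent = heapq.heappop(heap)
        if u in mst_set:
            continue                          # stale entry: u is already in the MST
        mst_set.add(u)
        total += key
        if parent is not None:
            mst_edges.append((parent, u, key))
        for w, v in graph[u]:
            if v not in mst_set:
                heapq.heappush(heap, (w, v, u))  # candidate edge across the cut
    return mst_edges, total

# assumed example graph (same 4-vertex graph as in the Kruskal sketch)
graph = {0: [(10, 1), (6, 2), (5, 3)],
         1: [(10, 0), (15, 3)],
         2: [(6, 0), (4, 3)],
         3: [(5, 0), (15, 1), (4, 2)]}
print(prims_mst(graph))  # ([(0, 3, 5), (3, 2, 4), (0, 1, 10)], 19)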
Dijkstra's Algorithm (Single Source Shortest Path)
● Maintain a set sptSet of vertices whose shortest distance from the source has been finalized, and repeat until sptSet includes all vertices:
   ○ Pick a vertex u that is not in sptSet and has the minimum distance value.
   ○ Include u in sptSet.
   ○ For every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, then update the distance value of v.
Pseudocode:

function Dijkstra_Algorithm(Graph, source_node)
    // iterate through the nodes in Graph and set their distances to INFINITY
    for each node N in Graph:
        distance[N] = INFINITY
        previous[N] = NULL
        add N to Priority Queue G

    // set the distance of the source node of the Graph to 0
    distance[source_node] = 0

    // iterate until the Priority Queue G is empty
    while G is NOT empty:
        // select the node Q with the minimum distance from the Priority Queue and remove it
        Q = node in G with the smallest distance[]
        remove Q from G

        // iterate through the unvisited neighbouring nodes of Q and perform relaxation
        for each unvisited neighbour node N of Q:
            temporary_distance = distance[Q] + distance_between(Q, N)

            // if the temporary distance is less than the current distance to N,
            // update N's distance with the smaller value
            if temporary_distance < distance[N]:
                distance[N] = temporary_distance
                previous[N] = Q

    // return the final lists of distances and predecessors
    return distance[], previous[]
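A runnable Python version of this pseudocode, using heapq as the priority queue; the example graph is an assumption for illustration.

import heapq

def dijkstra(graph, source):
    """Dijkstra's single-source shortest paths with a binary heap.
    graph: dict mapping node -> list of (neighbor, edge_weight) pairs."""
    distance = {node: float('inf') for node in graph}
    previous = {node: None for node in graph}
    distance[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > distance[u]:
            continue                       # stale queue entry: skip it
        for v, w in graph[u]:
            if d + w < distance[v]:        # relaxation step
                distance[v] = d + w
                previous[v] = u
                heapq.heappush(heap, (distance[v], v))
    return distance, previous

# assumed example graph
graph = {'A': [('B', 4), ('C', 1)],
         'B': [('D', 1)],
         'C': [('B', 2), ('D', 5)],
         'D': []}
dist, prev = dijkstra(graph, 'A')
print(dist)  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}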