
DESIGN AND ANALYSIS OF ALGORITHMS

(BTECCE21501)
Prof. Jayendra Jadhav
Vishwakarma University, Pune
UNIT 3
Greedy Technique and Dynamic
Programming
Outline

 Greedy Method:
– Applications: Fractional Knapsack problem, 0/1 Knapsack problem,
Coin changing problem, Container loading problem, Job sequencing
with deadlines.
– Minimum Cost Spanning Trees: Prim’s algorithm and Kruskal’s
Algorithm,
– Single Source Shortest path problem: Dijkstra’s algorithm &
Bellman Ford Algorithm, Optimal Merge pattern, Huffman Trees.
 Dynamic Programming:
– Principle of optimality, Strassen’s method for Matrix multiplication,
Floyd’s algorithm, Multistage graph, Optimal Binary Search Trees,
Knapsack Problem.
Greedy Method
Greedy Method

 The greedy method is an algorithmic approach used for


solving optimization problems by making a series of
choices, each of which looks the best at the moment.
 The method is called "greedy" because it makes the
choice that seems the best at the moment, without
worrying about the overall problem's complexity or
future consequences.
 It does not check whether the current best choice will
lead to the overall optimal result.
Greedy Method – Key Principles

 Greedy Choice Property

 Optimal Substructure
Greedy Method – Key Principles

 Greedy Choice Property:

– If an optimal solution to the problem can be reached by making the
locally best choice at each step, without reconsidering previous steps
once they are made, the problem can be solved using a greedy approach.
This property is called the greedy choice property.
 Optimal Substructure:

– A problem exhibits an optimal substructure if an optimal solution


to the problem contains optimal solutions to its sub-problems. This
means the greedy choice at each step should contribute to the
overall optimal solution.
Characteristics of the Greedy Method

 Local Optimization (Greedy Choice Property)

 No Backtracking

 Iterative Process

 Feasibility

 Efficiency of Greedy Algorithms


Characteristics of the Greedy Method
 Local Optimization (Greedy Choice Property)
– The greedy method makes decisions based on what seems best at the
moment, choosing the option that provides the most immediate benefit
or value.
– The algorithm doesn't look ahead to future consequences or try to
globally optimize; instead, it aims for a quick win with every choice.
 No Backtracking
– Once a decision is made, it is not reconsidered or revised.
– This makes the greedy method simple and efficient but can also lead to
suboptimal solutions if the initial choices were not ideal.
 Iterative Process
– The solution is built incrementally, step-by-step.
– At each step, the algorithm adds the best available choice to the current
solution until a final solution is reached.
Characteristics of the Greedy Method
 Feasibility
– The algorithm only makes choices that maintain a feasible solution at each
step.
– It avoids options that would make the solution invalid according to the
problem constraints.
 Efficiency of greedy algorithms
– Greedy algorithms are typically efficient in terms of time complexity,
often faster than other approaches like dynamic programming or
exhaustive search.
– Many greedy algorithms run in linear or near-linear time (e.g., 𝑂(𝑛) or
𝑂(𝑛 log 𝑛), often dominated by a sorting step), depending on the problem.
Greedy Algorithm
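In general, a greedy algorithm repeatedly selects the locally best candidate and keeps it only if the partial solution stays feasible. Below is a minimal illustrative Python template (the helper names select_best and is_feasible are placeholders, not from the slides):

# A minimal sketch of the general greedy template.
def greedy(candidates, select_best, is_feasible):
    """Build a solution by repeatedly taking the locally best feasible choice."""
    solution = []
    remaining = list(candidates)
    while remaining:
        best = select_best(remaining)          # greedy choice: local optimum
        remaining.remove(best)
        if is_feasible(solution + [best]):     # feasibility check
            solution.append(best)              # no backtracking afterwards
    return solution

Every application in this unit (knapsack, coin change, job sequencing, MSTs, shortest paths) is an instance of this template with a different selection rule and feasibility test.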
Applications of Greedy Method
 Fractional Knapsack Problem
 0/1 Knapsack Problem
 Coin Changing Problem
 Container loading Problem
 Job sequencing with Deadlines
 Minimum Cost Spanning Trees
– Prim’s algorithm
– Kruskal’s Algorithm
 Single Source Shortest Path Problem
– Dijkstra’s algorithm & Bellman Ford Algorithm
– Optimal Merge Pattern
– Huffman Trees
Knapsack problem
Knapsack problem

 The Knapsack Problem is a classic problem in combinatorial optimization:
Given a set of items, each with a weight and a value, determine
which items to include in the collection (Knapsack) so that the
total weight is less than or equal to a given limit and the total
value is as large as possible.
Knapsack problem

 Versions of the Knapsack Problem

– Fractional Knapsack Problem

– 0/1 Knapsack Problem


Knapsack problem

 Fractional Knapsack Problem

– Items are divisible; you can take any fraction of an item.

– Solved using Greedy Approach

 0/1 Knapsack Problem

– Items are indivisible; you either take them or not.

– Solved using Dynamic Programming.


Fractional Knapsack problem

 The fractional knapsack problem is one of the

techniques used to solve the knapsack problem.

 In fractional knapsack, Items are divisible; you can take

any fraction of an item.

 Solved using Greedy Approach


Fractional Knapsack problem

Problem Statement

 Given:

– A set of 𝒏 items, each with a weight 𝒘𝒊 and a value 𝐯𝒊 .

– A knapsack with a maximum weight capacity 𝑾.

 Objective:

– To maximize the total value of the items in the knapsack


without exceeding the weight capacity 𝑾.

– You are allowed to take fractional parts of any item.


Fractional Knapsack problem – Greedy Approach

 To solve the Fractional Knapsack Problem, the greedy

approach involves the following steps:

1. Calculate the Value-to-Weight Ratio

2. Sort Items

3. Select Items for the Knapsack

4. Stop when the Knapsack is Full


Fractional Knapsack problem – Greedy Approach

1. Calculate the Value-to-Weight Ratio

For each item, compute its value-to-weight ratio:

ratioᵢ = Vi / Wi

2. Sort Items

Sort all items in descending order of their value-to-weight ratio (Vi/Wi).
Fractional Knapsack problem – Greedy Approach

3. Select Items for the Knapsack


o Initialize the total value of the knapsack to 0.
o Iterate through the sorted list of items:
– If adding the whole item does not exceed the knapsack's remaining
capacity, take it completely.
– If adding the whole item exceeds the capacity, take only the fraction
that fits.
o Update the total value of the knapsack accordingly.
4. Stop when the Knapsack is Full
o The process stops when the knapsack is full or all items have been
considered.
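The four steps translate directly into code. Below is a minimal Python sketch (an illustration, not from the slides), checked against Example 1 that follows:

def fractional_knapsack(weights, values, capacity):
    """Greedy fractional knapsack: returns the maximum achievable value."""
    # Steps 1-2: sort items by value-to-weight ratio, descending.
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total_value = 0.0
    remaining = capacity
    # Step 3: take each item whole if it fits, otherwise the fraction that fits.
    for w, v in items:
        if remaining <= 0:                  # Step 4: the knapsack is full
            break
        take = min(w, remaining)            # whole item, or the fitting fraction
        total_value += v * (take / w)
        remaining -= take
    return total_value

# Example 1 below: knapsack capacity of 10 kg.
print(fractional_knapsack([3, 3, 2, 5, 1], [10, 15, 10, 20, 8], 10))   # 49.0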
Example1

 For the given set of items and the knapsack capacity

of 10 kg, find the subset of the items to be added in


the knapsack such that the profit is maximum.

Items 1 2 3 4 5
Weights (Kg) 3 3 2 5 1
Values 10 15 10 20 8
Fractional Knapsack Problem Example

 Given, n = 5

Wi = {3, 3, 2, 5, 1}

Vi = {10, 15, 10, 20, 8}

 Step1 : Calculate the Value-to-Weight Ratio

Items 1 2 3 4 5
Weights (Kg) 3 3 2 5 1
Values 10 15 10 20 8
Vi/Wi   3.3   5   5   4   8
Fractional Knapsack Problem Example

 Step2 : Sort all items in descending order of their value-to-weight ratio

(𝐯𝒊/𝒘𝒊).

Items 5 2 3 4 1
Weights (Kg) 1 3 2 5 3
Values 8 15 10 20 10
Vi/Wi   8   5   5   4   3.3
Fractional Knapsack Problem Example

 Step3 : Select Items for the Knapsack

Initially, Knapsack = 0

Items 5 2 3 4 1

Weights (Kg) 1 3 2 5 3

Values 8 15 10 20 10
Vi/Wi   8   5   5   4   3.3
Knapsack 1 1 1 4/5 0
Remaining Weight 10-1=9 9-3=6 6-2=4 4-4=0 0
Fractional Knapsack Problem Example

 Hence, the knapsack holds a total weight of
Weight = [(1 * 1) + (1 * 3) + (1 * 2) + (4/5 * 5)] = 10
 with maximum value
Value = [(1 * 8) + (1 * 15) + (1 * 10) + (4/5 * 20)] = 49

Items 5 2 3 4 1
Weights (Kg) 1 3 2 5 3
Values 8 15 10 20 10
Vi/Wi   8   5   5   4   3.3
Knapsack 1 1 1 4/5 0
Example2

 For the given set of items and the knapsack capacity

of 16 kg, find the subset of the items to be added in


the knapsack such that the profit is maximum.

Items 1 2 3 4 5 6
Weights (Kg) 6 10 3 5 1 3
Values 6 2 1 8 3 5
Example2

 Step1 : Calculate the Value-to-Weight Ratio

Items 1 2 3 4 5 6
Weights (Kg) 6 10 3 5 1 3
Values 6 2 1 8 3 5
Vi/Wi 1.00 0.20 0.33 1.60 3.00 1.66
Example2

 Step2 : Sort all items in descending order of their value-to-weight ratio

(𝐯𝒊/𝒘𝒊).

Items 5 6 4 1 3 2
Weights (Kg) 1 3 5 6 3 10
Values 3 5 8 6 1 2
Vi/Wi 3.00 1.66 1.60 1.00 0.33 0.20
Example2

 Step3 : Select Items for the Knapsack

Initially, Knapsack = 0

Total Capacity = 16

Items 5 6 4 1 3 2

Weights (Kg) 1 3 5 6 3 10

Values 3 5 8 6 1 2
Vi/Wi 3.00 1.66 1.60 1.00 0.33 0.20
Knapsack 1 1 1 1 1/3 0

Remaining Wi 16-1=15 15-3=12 12-5=7 7-6=1 1-1=0 0


Example2

 Weight = [(1*1)+(1*3)+(1*5)+(1*6)+(1/3*3)] = 16

 Value = [(1*3)+(1*5)+(1*8)+(1*6)+(1/3*1)] = 22.33

Items 5 6 4 1 3 2

Weights (Kg) 1 3 5 6 3 10

Values 3 5 8 6 1 2
Vi/Wi 3.00 1.66 1.60 1.00 0.33 0.20
Knapsack 1 1 1 1 1/3 0

Remaining Wi 16-1=15 15-3=12 12-5=7 7-6=1 1-1=0 0


Example3

 For the given set of items and the knapsack capacity

of 5 kg, find the subset of the items to be added in


the knapsack such that the profit is maximum.

Items 1 2 3 4 5
Weights (Kg) 2 1 3 2 4
Values 12 10 25 15 25
Fractional Knapsack Problem Complexity

Sorting the items by value-to-weight ratio dominates the running time, so the greedy algorithm takes O(n log n) time overall; the selection pass itself is O(n).
Coin Changing Problem
Coin Changing Problem

Given: Coins of different denominations and a target amount.

Problem: Find the minimum number of coins needed to make change for the given amount.
o Use the fewest coins possible (from the available denominations) to make change for amount P.
o Each denomination is available in unlimited quantity.
Coin Changing Problem

 The Coin Change Problem using the greedy method

involves finding the minimum number of coins needed


to make a given amount of change using a set of
available coin denominations.

 The greedy algorithm attempts to achieve this by

always picking the largest denomination coin that


does not exceed the remaining amount.
Coin Changing Problem
1. Sort the coin denominations in descending order (if they are not already
sorted).
2. Initialize:
– `Total number of coins` = 0.
– `Remaining amount` = Target amount.
3. Iterate through each coin denomination:
– For each coin, determine how many of that coin can fit into the
remaining amount without exceeding it.
– Subtract the equivalent value from the remaining amount.
– Increment the total number of coins by the count used for that
coin.
4. Repeat the process until the remaining amount is zero.
5. Output the total number of coins used.
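A minimal Python sketch of these steps (illustrative, not from the slides). Note that this greedy method is optimal for canonical coin systems such as [1, 5, 10, 25], but it can fail to find the minimum number of coins for arbitrary denominations:

def greedy_coin_change(denominations, amount):
    """Greedy change-making: returns coin counts and any unchangeable remainder."""
    coins_used = {}
    remaining = amount
    for coin in sorted(denominations, reverse=True):   # largest denomination first
        count, remaining = divmod(remaining, coin)     # how many of this coin fit
        if count:
            coins_used[coin] = count
    return coins_used, remaining                       # remaining > 0: no exact change

print(greedy_coin_change([1, 5, 10, 25], 63))   # ({25: 2, 10: 1, 1: 3}, 0) -> 6 coins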
Coin Changing Problem Example1

 For Example

Coin denominations: [1,5,10,25]

Target amount: 63
Coin Changing Problem Example1

Coin denominations: [1, 5, 10, 25], target amount: 63
– Coin 25: take 2 (value 50), remaining amount = 13
– Coin 10: take 1, remaining amount = 3
– Coin 5: take 0, remaining amount = 3
– Coin 1: take 3, remaining amount = 0
Total number of coins = 2 + 1 + 3 = 6
Container Loading Problem

 The Container Loading Problem involves packing items into


containers or bins with a fixed capacity, aiming to use the
minimum number of containers.
 The goal is to efficiently utilize the available space while
minimizing the number of containers used.
Container Loading Problem

Problem Statement :

A large ship is to be loaded with containers of cargos.


Different containers, although of equal size, will have
different weights. Let 𝒘𝒊 be the weight of the i-th
container, 1 ≤ 𝑖 ≤ 𝑛, and the capacity of the ship is 𝒄.
We need to load the ship with the maximum number
of containers.
Container Loading Problem Using Greedy
 Sort the containers in ascending order of weight.
 Load containers in this order while the ship's remaining capacity allows; stop when the next container no longer fits.
 Loading the lightest containers first leaves the most capacity for the rest, which maximizes the number of containers loaded.
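A minimal Python sketch of this strategy (the container weights in the example call are made up for illustration):

def container_loading(weights, capacity):
    """Greedy container loading: maximize the number of containers loaded."""
    loaded = []
    remaining = capacity
    for w in sorted(weights):      # lightest containers first
        if w > remaining:
            break                  # every remaining container is at least as heavy
        loaded.append(w)
        remaining -= w
    return loaded

print(container_loading([100, 200, 50, 90, 150, 50, 20, 80], 400))
# [20, 50, 50, 80, 90, 100] -> 6 containers loaded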
Job Sequencing with Deadlines

 Job Sequencing with Deadlines is an optimization problem


that involves scheduling a set of jobs to maximize the
total profit.
 Each job has a deadline by which it must be completed,
and a profit associated with it if it is completed on time.
 The objective is to find the optimal sequence in which the
jobs should be performed to achieve the maximum profit
without violating any job deadlines.
Job Sequencing with Deadlines
Job Sequencing with Deadlines

 Sort the jobs in descending order of profit.

 Find the maximum deadline among the jobs and create that many empty time slots.
Slots: [None, None, None, …]
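Together with the slot-filling rule used in the examples below (place each job in the latest free slot on or before its deadline), the method can be sketched in Python (illustrative; checked against Example 1 below):

def job_sequencing(jobs):
    """Greedy job sequencing. jobs: list of (name, deadline, profit) tuples."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # highest profit first
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * max_deadline       # slots[i] = job executed in time slot i+1
    for name, deadline, profit in jobs:
        # Try the latest free slot on or before the job's deadline.
        for s in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[s] is None:
                slots[s] = name
                break                   # scheduled; move on to the next job
    return slots

print(job_sequencing([("A", 2, 100), ("B", 1, 19), ("C", 2, 27),
                      ("D", 1, 25), ("E", 3, 15)]))   # ['C', 'A', 'E']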
Job Sequencing with Deadlines Example1

Consider the following example with 5 jobs:

Job Deadline Profit


A 2 100
B 1 19
C 2 27
D 1 25
E 3 15
Job Sequencing with Deadlines Example1

Sort Jobs by Profit in Descending Order:


 The first step is to sort the jobs based on their profit in
descending order to ensure we always consider the most
profitable jobs first.

Job Deadline Profit


A 2 100
C 2 27
D 1 25
B 1 19
E 3 15
Job Sequencing with Deadlines Example1

Initialize a Schedule:
 Determine the maximum deadline to decide the number of
available slots.
 Here, the maximum deadline is 3 (from job E).
 Create a schedule table of size equal to the maximum
deadline.
 Create a schedule array of size 3 initialized to None:
 Initially, all slots are free.
Slots: [None, None, None]
Job Sequencing with Deadlines Example1

Place Jobs in Slots:


 Place each job in the latest available slot before its deadline. If a slot is
occupied, try the previous slots.

Job  Deadline  Profit  Action                                                              Slots after Action
A    2         100     Place in slot 2 (latest available before or on deadline).           [None, A, None]
C    2         27      Place in slot 1 (slot 2 is taken; slot 1 is the latest available).  [C, A, None]
D    1         25      Slot 1 is taken; skip D.                                            [C, A, None]
B    1         19      Slot 1 is taken; skip B.                                            [C, A, None]
E    3         15      Place in slot 3 (latest available before or on deadline).           [C, A, E]
Job Sequencing with Deadlines Example1

Final schedule: Slots = [C, A, E], i.e. job C in slot 1, job A in slot 2, job E in slot 3.
Maximum total profit = 27 + 100 + 15 = 142.
Job Sequencing with Deadlines Example2

Consider the following example with 5 jobs:

Job Deadline Profit


J1 2 20
J2 2 60
J3 1 40
J4 3 100
J5 4 80
Job Sequencing with Deadlines Example2

Sort Jobs by Profit in Descending Order:

Job Deadline Profit


J4 3 100
J5 4 80
J2 2 60
J3 1 40
J1 2 20
Job Sequencing with Deadlines Example2

Initialize a Schedule Table:


 Determine the maximum deadline to decide the number of
available slots.
 The maximum deadline is 4.
 Create a schedule array of size 4 initialized to None.
 Initially, all slots are free.
Slots: [None, None, None, None]
Job Sequencing with Deadlines Example2

Place Jobs in Slots:


 Iterate over the sorted jobs and place each job in the latest available slot
before its deadline.

Job Deadline Profit Action Slots after Action

J4 3 100 Place in slot 3 (latest available before or on deadline). [None, None, J4, None]

J5 4 80 Place in slot 4 (latest available before or on deadline). [None, None, J4, J5]

J2 2 60 Place in slot 2 (latest available before or on deadline). [None, J2, J4, J5]

J3 1 40 Place in slot 1 (latest available before or on deadline). [J3, J2, J4, J5]

J1 2 20 Slots 2 and 1 are both occupied; skip J1. [J3, J2, J4, J5]


Job Sequencing with Deadlines Example2

Final schedule: Slots = [J3, J2, J4, J5].
Maximum total profit = 40 + 60 + 100 + 80 = 280.
Job Sequencing with Deadlines

Complexity: sorting the jobs takes O(n log n), and placing each job may scan up to n slots, so the overall worst-case running time is O(n²).
Minimum Cost Spanning Trees

 Prim’s algorithm

 Kruskal’s Algorithm
Undirected & Connected Graphs

An undirected graph is a graph in which the edges do not point in any direction (i.e. the edges are bidirectional).
A connected graph is a graph in which there is always a path from a vertex to any other vertex.
Spanning Trees

 A spanning tree is a sub-graph of an undirected

connected graph, which includes all the vertices of the


graph with a minimum possible number of edges.

 In other words,

“A spanning tree is a tree that spans (includes) all the


vertices of the original graph.”
Spanning Trees

 The total number of spanning trees that can be created from a complete graph with n vertices is n^(n-2).

 If we have n = 4, the maximum number of possible spanning trees is 4^(4-2) = 16.

 Thus, 16 spanning trees can be formed from a complete graph with 4 vertices.


Spanning Trees

 To understand the concept of spanning tree, consider


the below graph:
– The graph can be represented as G(V, E),
where
– 'V' is the number of vertices, and
– 'E' is the number of edges.
Spanning Trees

– The spanning tree of the graph would


be represented as G`(V`, E`).
– V` = V : The number of vertices in the
spanning tree would be the same as the
number of vertices in the graph, but the
number of edges would be different.
– The edge set of the spanning tree is a subset of the edge set of the
original graph, and it contains exactly |V| - 1 edges:
E` ⊆ E
E` = |V| - 1
Spanning Trees

 Key Characteristics of Spanning Trees:


– Contains All Vertices: A spanning tree of a graph must include every
vertex of the graph.
V` = V
– Minimum Number of Edges: A spanning tree with n vertices has
exactly n - 1 edges.
E` = |V| - 1
– No Cycles: A spanning tree cannot have any cycles; if it did, it
would not be a tree.
– Connected: A spanning tree must be connected; it must be
possible to travel from any vertex to any other vertex in the graph
using the edges of the spanning tree.
Spanning Trees

 Consider the graph:


– The graph contains 5 vertices.
– The vertices in the spanning tree are the same as in the graph: V` = 5.
– The number of edges in the spanning tree is E` = |V| - 1 = 4.
– The possible spanning trees:
Spanning Trees

(Figure: the possible spanning trees of the graph.)
Minimum Cost Spanning Trees

 The cost of the spanning tree is the sum of the

weights of all the edges in the tree.

 A Minimum Spanning Tree (MST) is a special type of

spanning tree where the sum of the weights of the


edges is minimized.
Minimum Cost Spanning Trees

 A Minimum Spanning Tree (MST) is commonly constructed using two greedy algorithms:

– Kruskal's Algorithm

– Prim's Algorithm
Minimum Cost Spanning Trees

 Kruskal's Algorithm:

– Builds the MST by sorting edges by weight and adding


the smallest edge to the tree, provided it does not form
a cycle.

 Prim's Algorithm:

– Builds the MST by starting with a single vertex and


adding the smallest edge that connects a vertex inside
the tree to a vertex outside the tree.
Prim’s Algorithm

 Prim's algorithm is a minimum spanning tree


algorithm that takes a graph as input and finds the
subset of the edges of that graph which

– form a tree that includes every vertex

– has the minimum sum of weights among all the trees

that can be formed from the graph


Prim’s Algorithm

 The algorithm may informally be described as performing


the following steps:
– Initialize a tree with a single vertex, chosen arbitrarily
from the graph.
– Grow the tree by one edge: Of the edges that connect
the tree to vertices not yet in the tree, find the
minimum-weight edge, and transfer it to the tree.
– Repeat step 2 (until all vertices are in the tree).
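A minimal Python sketch of these steps. The priority queue (heap) of crossing edges and the adjacency-list format are implementation choices, and the small test graph is a made-up illustration:

import heapq

def prim_mst(graph, start):
    """Prim's algorithm. graph: {vertex: [(weight, neighbour), ...]}, undirected."""
    visited = {start}
    edges = [(w, start, v) for w, v in graph[start]]   # edges leaving the tree
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)     # minimum-weight edge crossing the cut
        if v in visited:
            continue                       # both ends already in the tree: skip
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for w2, x in graph[v]:             # new crossing edges from v
            if x not in visited:
                heapq.heappush(edges, (w2, v, x))
    return mst, total

g = {"A": [(4, "B"), (2, "C")], "B": [(4, "A"), (5, "D"), (1, "C")],
     "C": [(2, "A"), (1, "B"), (8, "D")], "D": [(5, "B"), (8, "C")]}
print(prim_mst(g, "A"))   # ([('A', 'C', 2), ('C', 'B', 1), ('B', 'D', 5)], 8)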
Prim’s Algorithm

 Consider a weighted graph


Prim’s Algorithm

 Step 1 - First, we have to choose a


vertex from the graph. Let's choose B.
 Step 2 - Now, we have to choose and
add the shortest edge from vertex B.
There are two edges from vertex B
that are B to C with weight 10 and
edge B to D with weight 4. Among the
edges, the edge BD has the minimum
weight. So, add it to the MST.
Prim’s Algorithm

 Step 3 - Now, again, choose the edge with the minimum
weight among all the other edges. In this case, the edges DE
and CD are such candidates. Select the edge DE and add it to the
MST; the vertices adjacent to C, i.e. E and A, will be explored next.
Prim’s Algorithm

 Step 4 - Now, select the edge CD, and add it to the MST.
Prim’s Algorithm

 Step 5 - Now, choose the edge CA. Here, we cannot select the
edge CE, as it would create a cycle in the graph. So, choose the
edge CA and add it to the MST.
Prim’s Algorithm

 The graph produced in step 5 is the minimum spanning tree of
the given graph. The cost of the MST is given below:
 Cost of MST = 4 + 2 + 1 + 3 = 10 units.
Prim’s Algorithm

6
B D

7 5

4
A 3 2 F

8 2
C E
3
Kruskal’s Algorithm

 Kruskal's Algorithm is a greedy algorithm used to find

the Minimum Spanning Tree (MST) of a connected,


undirected graph.

 It works by selecting the edges in increasing order of

weight and adding them to the MST, as long as they


do not form a cycle.
Kruskal’s Algorithm

 Below are the steps for finding MST using Kruskal’s


algorithm:
– Sort all the edges in increasing order of their weight.

– Pick the smallest edge. Check if it forms a cycle with the


spanning tree formed so far. If the cycle is not formed,
include this edge. Else, discard it.
– Repeat step#2 until there are (V-1) edges in the spanning
tree.
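A minimal Python sketch of these steps. Cycle detection uses a disjoint-set (union-find) structure, the standard companion of Kruskal's algorithm even though the slides do not name it; the edge list in the example call is the 9-vertex graph from the following slides:

def kruskal_mst(vertices, edges):
    """Kruskal's algorithm. edges: list of (weight, u, v), undirected graph."""
    parent = {v: v for v in vertices}        # disjoint-set (union-find) forest

    def find(v):                             # root of v's set, with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):            # edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                         # different trees, so no cycle is formed
            parent[ru] = rv                  # union the two trees
            mst.append((u, v, w))
            total += w
            if len(mst) == len(parent) - 1:
                break                        # V - 1 edges: the MST is complete
    return mst, total

edges = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5), (6, 8, 6),
         (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2), (9, 3, 4), (10, 5, 4),
         (11, 1, 7), (14, 3, 5)]
mst, cost = kruskal_mst(range(9), edges)
print(cost)   # 37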
Kruskal’s Algorithm

 Consider a weighted graph


Kruskal’s Algorithm
 Sort all the edges in increasing order of their weight.
SR. NO.  Edge  Weight
1    7↔6   1
2    8↔2   2
3    6↔5   2
4    0↔1   4
5    2↔5   4
6    8↔6   6
7    2↔3   7
8    7↔8   7
9    0↔7   8
10   1↔2   8
11   3↔4   9
12   5↔4   10
13   1↔7   11
14   3↔5   14
Kruskal’s Algorithm
 Now pick the edges one by one from the sorted list:
 Step 1: Pick edge 7-6. No cycle is formed, include it.
 Step 2: Pick edge 8-2. No cycle is formed, include it.
 Step 3: Pick edge 6-5. No cycle is formed, include it.
 Step 4: Pick edge 0-1. No cycle is formed, include it.
 Step 5: Pick edge 2-5. No cycle is formed, include it.
 Step 6: Pick edge 8-6. Including this edge results in a cycle, so discard it. Pick edge 2-3: no cycle is formed, include it.
 Step 7: Pick edge 7-8. Including this edge results in a cycle, so discard it. Pick edge 0-7: no cycle is formed, include it.
 Step 8: Pick edge 1-2. Including this edge results in a cycle, so discard it. Pick edge 3-4: no cycle is formed, include it. The tree now has V - 1 = 8 edges, so the algorithm stops.
 Cost of Spanning Tree = 4 + 8 + 1 + 2 + 4 + 2 + 7 + 9 = 37
Kruskal’s Algorithm

6
B D

7 5

4
A 3 2 F

8 2
C E
3
Solve using Prim’s & Kruskal’s Algorithm
Single Source Shortest Path Problem

 The Single Source Shortest Path (SSSP) Problem is a

classic problem in graph theory.

 The goal is to find the shortest paths from a given

source node to all other nodes in a weighted graph,


where the edges have non-negative weights (in some
algorithms, negative weights can also be handled).
Single Source Shortest Path Problem

 In SSSP Problem:
– You are given a weighted graph (which can be
directed or undirected).
– The graph has vertices connected by edges with
non-negative or negative weights.
– You are asked to determine the shortest (minimum
weight) path from a single source vertex to every
other vertex in the graph.
Single Source Shortest Path Problem

 Key Terms:
– Shortest Path: The path between two vertices such
that the sum of the weights of the edges in the path
is minimized.
– Source Vertex: The starting point of the shortest
path search.
– Weighted Graph: A graph where edges have weights
representing the cost or distance between vertices.
Single Source Shortest Path Problem

 Dijkstra’s algorithm

 Bellman Ford Algorithm

 Optimal Merge Pattern

 Huffman Trees
Dijkstra’s algorithm

 Dijkstra’s Algorithm is a famous greedy algorithm

used to solve the Single Source Shortest Path


Problem in a graph with non-negative edge weights.

 It finds the shortest path from a given source vertex

to all other vertices in the graph.

 It was conceived by Dutch computer scientist Edsger

W. Dijkstra in 1956.
Dijkstra’s algorithm

 The algorithm maintains a set of visited vertices and a set of


unvisited vertices.
 It starts at the source vertex and iteratively selects the
unvisited vertex with the smallest tentative distance from the
source.
 It then visits the neighbours of this vertex and updates their
tentative distances if a shorter path is found.
 This process continues until the destination vertex is reached,
or all reachable vertices have been visited.
Dijkstra’s algorithm
 Steps of Dijkstra’s Algorithm:
– Initialize distances: Set the distance to the source vertex as 0 and to all other
vertices as infinity (∞).
– Mark all vertices as unvisited: The algorithm will visit each vertex only once.
– Start at the source vertex: Choose the unvisited vertex with the smallest
distance (initially the source) and explore its neighbours.
– Relax the edges: For each neighbouring vertex, check if a shorter path to that
vertex exists through the current vertex. If so, update the shortest known
distance.
– Mark the current vertex as visited: Once all the neighbouring vertices are
processed, mark the current vertex as visited (it won’t be visited again).
– Repeat: Continue to the next unvisited vertex with the smallest known
distance, and repeat the process until all vertices are visited or the shortest
path to the target vertex is found.
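A minimal Python sketch of these steps, using a min-heap for the "smallest tentative distance" selection (an implementation choice not specified in the slides); the test graph is the A-F example worked out below:

import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm. graph: {u: [(v, weight), ...]}, non-negative weights."""
    dist = {v: float("inf") for v in graph}    # tentative distances
    dist[source] = 0
    heap = [(0, source)]                       # (distance, vertex) priority queue
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)             # closest unvisited vertex
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:                  # relax every edge leaving u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"A": [("B", 4), ("C", 2)], "B": [("A", 4), ("C", 1), ("D", 5)],
     "C": [("A", 2), ("B", 1), ("D", 8), ("E", 10)],
     "D": [("B", 5), ("C", 8), ("E", 2), ("F", 6)],
     "E": [("C", 10), ("D", 2), ("F", 6)], "F": [("D", 6), ("E", 6)]}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 3, 'C': 2, 'D': 8, 'E': 10, 'F': 14}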
Dijkstra’s algorithm
 For a vertex u and its neighbour v, the algorithm checks whether the path
from the source to v through u is shorter than the currently known
shortest path to v. If it is, the distance to v is updated.
 The Relaxation Formula:
if d(u) + c(u, v) < d(v):  d(v) = d(u) + c(u, v)
 Where:
– d(u) is the current shortest distance from the source to vertex u.
– d(v) is the current shortest distance from the source to vertex v.
– c(u, v) is the cost (weight) of the edge between vertices u and v.
 If going from the source to v via u gives a shorter path than the current
known distance to v, then the distance to v is updated to the new shorter
distance: d(v) = d(u) + c(u, v).
Dijkstra’s algorithm

Consider the weighted undirected graph with vertices A-F and edges:
A-B = 4, A-C = 2, B-C = 1, B-D = 5, C-D = 8, C-E = 10, D-E = 2, D-F = 6, E-F = 6
Source vertex: A.
Dijkstra’s algorithm

(The algorithm is applied to this graph step by step: at each iteration the unvisited vertex with the smallest tentative distance is selected, and each of its edges is relaxed using d(v) = d(u) + c(u, v). The resulting distance table is shown below.)
Dijkstra’s algorithm

Distance table (each row shows the tentative distances after visiting the listed vertex):

Visited  A  B  C  D   E   F
start    0  ∞  ∞  ∞   ∞   ∞
A        0  4  2  ∞   ∞   ∞
C        0  3  2  10  12  ∞
B        0  3  2  8   12  ∞
D        0  3  2  8   10  14
E        0  3  2  8   10  14
F        0  3  2  8   10  14
Dijkstra’s algorithm

Final shortest distances from source A:
A = 0, B = 3, C = 2, D = 8, E = 10, F = 14
Dijkstra’s Algorithm
 The goal is to find the shortest path from vertex 0 to all other vertices.
Dijkstra’s Algorithm

 Initialize distances
Distance from 0 to 0 = 0
Distance from 0 to 1 = ∞
Distance from 0 to 2 = ∞
Distance from 0 to 3 = ∞
Distance from 0 to 4 = ∞
Distance from 0 to 5 = ∞
Distance from 0 to 6 = ∞
Negative Weight Cycle

 A negative weight cycle in a graph is a cycle where the sum of the

edge weights is negative.

 If a graph contains a negative weight cycle, the total cost of

traveling around the cycle can decrease indefinitely, meaning there


is no well-defined shortest path.
 For Example:
Negative Weight Cycle
 Why do we need to be careful with negative weights?

– Negative weight edges can create negative weight cycles i.e. a cycle that

will reduce the total path distance by coming back to the same point.

– Negative weight cycles can give an incorrect result when trying to find

out the shortest path


 Shortest path algorithms like Dijkstra's

Algorithm that aren't able to detect such a


cycle can give an incorrect result because
they can go through a negative weight cycle
and reduce the path length.
Bellman Ford Algorithm

 The Bellman-Ford Algorithm is an algorithm used to solve the

Single Source Shortest Path Problem.

 It is particularly useful for graphs where edge weights can be

negative, unlike Dijkstra's algorithm which only works for non-


negative weights.

 Additionally, the Bellman-Ford algorithm can detect negative

weight cycles in a graph.


Bellman Ford Algorithm

 The Bellman-Ford algorithm’s primary principle is that it


starts with a single source and calculates the distance to
each node.
 The distance is initially unknown and assumed to be
infinite; as the algorithm proceeds, it relaxes these
estimates by repeatedly finding shorter paths.
 Hence it is said that Bellman-Ford is based on “Principle
of Relaxation”.
Bellman Ford Algorithm

 Key Features:

– Can handle negative weights.

– Detects if a graph contains a negative weight cycle (a

cycle whose total weight is negative).

– Works for both directed and undirected graphs.


Bellman Ford Algorithm

Worked example. Consider a directed graph with source A and the following edges (as used in the relaxation steps below):
A→B = 5, B→C = 1, B→D = 2, C→E = 1, D→F = 2, E→D = -1, F→E = -3

Step 1:
 Initialize a distance array Dist[] to store the shortest distance for each vertex from
the source vertex.
 Initially distance of source will be 0 and Distance of other vertices will be ∞.
Bellman Ford Algorithm
Step 2: 1st Relaxation
 Start relaxing the edges, during 1st Relaxation:
 Current Distance of B > (Distance of A) + (Weight of A to B)
Dist[B] : ∞ > (0 + 5) Dist[B] = 5
Bellman Ford Algorithm
Step 3: 2nd Relaxation
 Current Distance of D > (Distance of B) + (Weight of B to D)
Dist[D] : ∞ > (5 + 2) Dist[D] = 7
 Current Distance of C > (Distance of B) + (Weight of B to C)
Dist[C] : ∞ > 5 + 1 Dist[C] = 6
Bellman Ford Algorithm
Step 4: 3rd Relaxation
 Current Distance of F > (Distance of D ) + (Weight of D to F)
Dist[F] : ∞ > 7 + 2 Dist[F] = 9
 Current Distance of E > (Distance of C ) + (Weight of C to E)
Dist[E] : ∞ > 6 + 1 Dist[E] = 7
Bellman Ford Algorithm
Step 5: 4th Relaxation
 Current Distance of D > (Distance of E) + (Weight of E to D)
Dist[D] : 7 > 7 + (-1) Dist[D] = 6
 Current Distance of E > (Distance of F ) + (Weight of F to E)
Dist[E] : 7 > 9 + (-3) Dist[E] = 6
Bellman Ford Algorithm
Step 6: 5th Relaxation
 Current Distance of F > (Distance of D) + (Weight of D to F)
Dist[F] : 9 > 6 + 2 Dist[F] = 8
 Current Distance of D > (Distance of E ) + (Weight of E to D)
Dist[D] : 6 > 6 + (-1) Dist[D] = 5
 Since the graph has 6 vertices, the shortest distances to all vertices should be
final after the 5th relaxation (|V| - 1 = 5 passes).
Bellman Ford Algorithm

Step 7: 6th Relaxation

 Now the final relaxation, i.e. the 6th pass, indicates the presence of a
negative cycle if it changes any distance computed in the 5th relaxation.

 During the 6th relaxation, following changes can be seen:

 Current Distance of E > (Distance of F) + (Weight of F to E)

Dist[E] : 6 > 8 + (-3) Dist[E]=5

 Current Distance of F > (Distance of D ) + (Weight of D to F)

Dist[F] : 8 > 5 + 2 Dist[F]=7

 Since we observe changes in the distance array, we can conclude that the
graph contains a negative cycle.


Bellman Ford Algorithm

Result: A negative cycle (D->F->E) exists in the graph.
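A minimal Python sketch of the algorithm (illustrative); the edge list is the example graph from the walkthrough above:

def bellman_ford(vertices, edges, source):
    """Bellman-Ford. edges: list of (u, v, weight) for a directed graph.
    Returns (distances, has_negative_cycle)."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):         # relax all edges |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement implies a negative cycle.
    has_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_cycle

edges = [("A", "B", 5), ("B", "C", 1), ("B", "D", 2), ("C", "E", 1),
         ("D", "F", 2), ("E", "D", -1), ("F", "E", -3)]
print(bellman_ford("ABCDEF", edges, "A"))   # (..., True): negative cycle detected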


Bellman Ford Algorithm

Time complexity: O(V · E), since each of the |V| - 1 relaxation passes scans every edge; the extra cycle-detection pass adds O(E).
Optimal Merge Pattern

 The Optimal Merge Pattern problem is a classic

example in greedy algorithms and is related to the


optimal way of merging multiple files (or lists, data
sets, etc.) together.

 This problem arises in data compression and file

merging scenarios.
Optimal Merge Pattern

 Problem Definition:

– Given a set of files with different sizes, the goal is to

merge these files into one single file with the least
possible merging cost.

– The cost of merging two files is the sum of their

sizes, and the goal is to find the optimal sequence of


merges that minimizes this cost.
Optimal Merge Pattern

 Approach:

– To solve this problem efficiently, we can use a greedy

approach.

– The idea is to always merge the two smallest files

first to minimize the merging cost at every step.


Optimal Merge Pattern

Consider the set files:

20, 30, 10, 5, 30


Optimal Merge Pattern

Consider the set files:

20, 30, 10, 5, 30

Arrange these elements in ascending order:

5, 10, 20, 30, 30

After this, pick the two smallest numbers, merge them, and repeat

until we are left with only one number.
Optimal Merge Pattern

Normal
20, 30, 10, 5, 30
M1 = 20+30 = 50
M2 = 50+10 = 60
M3 = 60+5 = 65
M4 = 65+30 = 95
Merge Cost = 50+60+65+95 = 270
Optimal Merge Pattern

Greedy Approach
Merge first two files: (5, 10, 20, 30, 30) => (15, 20, 30, 30)
Merge next two files: (15, 20, 30, 30) => (30, 30, 35)
Merge next two files: (30, 30, 35) => (35, 60)
Merge next two files: (35, 60) => (95)
Total Merge Cost = 15+35+60+95 = 205

Normal (for comparison)
20, 30, 10, 5, 30
M1 = 20+30 = 50
M2 = 50+10 = 60
M3 = 60+5 = 65
M4 = 65+30 = 95
Merge Cost = 50+60+65+95 = 270
Optimal Merge Pattern

Consider the set files:

5, 3, 2, 7, 9, 13
Optimal Merge Pattern

Consider the set files:

5, 3, 2, 7, 9, 13

Merge Cost = 5 + 10 + 16 + 23 + 39 = 93
Optimal Merge Pattern

Complexity: with a min-heap, each of the n - 1 merges costs O(log n), so the total running time is O(n log n).
Huffman Trees

 A Huffman Tree is a special type of binary tree used in Huffman


coding, a compression technique that assigns variable-length
codes to input characters based on their frequencies.
 The primary objective of using a Huffman Tree is to generate
the most efficient binary codes for data, so frequently
occurring characters are represented by shorter codes, and
less frequent ones are represented by longer codes.
 This approach minimizes the total number of bits required to
represent a set of data.
Huffman Trees : Key Points

 Binary Tree: Huffman trees are always binary, meaning that


each node has at most two children.
 Optimal Coding: The tree helps generate a prefix-free
encoding, where no code is a prefix of another, ensuring
unambiguous decoding.
 Greedy Algorithm: Huffman coding uses a greedy
approach to build the tree, minimizing the total size of the
encoded message.
Huffman Trees : Key Concepts
 Leaf Nodes: Each leaf node of the Huffman Tree represents a character
from the input data, along with its frequency (or weight).
 Internal Nodes: Internal nodes represent the sum of the frequencies of
two child nodes. These nodes do not correspond to characters but help
in defining the tree structure.
 Binary Codes Assignment: The binary codes are assigned by traversing
the tree from the root to the leaves. Each left edge is assigned a 0, and
each right edge is assigned a 1. The binary code for a character is the
sequence of 0s and 1s on the path from the root to the corresponding
leaf node.
 Prefix-Free Property: Huffman Trees ensure that no character's binary
code is a prefix of another. This means that the encoding is prefix-free,
which allows for unambiguous decoding of the compressed data.
Steps to build a Huffman Trees
 Frequency of Characters:

– Begin by calculating the frequency of each character in the input data.

 Create Leaf Nodes:

– For each character, create a leaf node with the character and its frequency.

 Build the Tree:

– Combine the two nodes with the smallest frequencies into a new internal node
whose frequency is the sum of the two. These nodes become children of the new
node.
– Repeat this process until all nodes are merged into a single tree. The final root
node will represent the combined frequency of all characters.
 Assign Codes:

– Once the tree is built, assign binary codes to each character. The left branch
typically represents a 0, and the right branch represents a 1.
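A minimal Python sketch of these steps (illustrative; the tie-breaking counter is an implementation detail that keeps heap entries comparable). The exact codes depend on how frequency ties are broken, but the resulting code lengths are optimal either way:

import heapq

def huffman_codes(frequencies):
    """Build a Huffman tree and return {character: binary code}."""
    # Heap entries: (frequency, tie_breaker, tree); a tree is a char or (left, right).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)      # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(node, code):                      # left edge = 0, right edge = 1
        if isinstance(node, tuple):
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            codes[node] = code or "0"          # single-character edge case
    walk(heap[0][2], "")
    return codes

# Frequencies of "ENGINEERING" (the example below):
print(huffman_codes({"E": 3, "N": 3, "G": 2, "I": 2, "R": 1}))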
Huffman Trees

https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/Huffman.html
Huffman Trees

Step1 : Create Leaf Nodes:

Sort Leaf Nodes:



0 1

0 1

0 1 0 1

0 1

0 1

0 1
Huffman Trees

Huffman Codes
E  0
U  100
D  101
L  110
C  1110
Z  111100
K  111101
M  11111
Huffman Trees

Consider the Example


“ENGINEERING”
Huffman Trees

Step 1 : Find the Frequency of occurrences

ENGINEERING
E N G I R
3 3 2 2 1

Step2 : Sort the Frequency

R G I E N
1 2 2 3 3
Huffman Trees

Initial leaf nodes (sorted by frequency): R:1, G:2, I:2, E:3, N:3

(Figure: the tree is built by repeatedly merging the two lowest-frequency nodes until a single tree remains.)
Huffman Trees

 Applications of Huffman Tree:

– Data Compression/File Compression: Used in file


compression formats like ZIP and GZIP.

– Multimedia Encoding: Applied in JPEG, PNG, and MP3

encoding.

– Network Protocols: Reduces bandwidth usage by minimizing

the number of bits used to transmit frequent data.


THANK YOU !!!
