
CMPE232: Data Structures and Algorithms

Lecture Notes
Özgür Özdemir
Istanbul Bilgi University
[email protected]
Week 04

Minimum Spanning Tree


The Minimum Spanning Tree (MST) of an undirected edge-weighted graph is a subgraph that connects all vertices without any cycle and with the least possible sum of edge weights. Intuitively, the graph must be connected for a spanning tree to exist. However, the algorithms can still be executed on a disconnected graph to build the spanning forest of the graph, one tree per connected component.
MST algorithms build the tree by gradually extending connected subgraphs of least weight. Whenever a cheaper edge to a vertex is discovered, the stored distance and connecting edge for that vertex are updated so that the partial tree stays minimal. This update step is called relaxation. One of the fundamental MST approaches, Prim’s Algorithm, utilizes relaxation to form the tree by growing it step by step. Later, the same idea will be the basis of shortest-path algorithms for finding the minimum distance between two vertices.
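As a minimal illustration in Python (the names dist, edge_to and the weight attribute are assumptions for this sketch, not notation from the notes), relaxing a candidate edge e that reaches a vertex v outside the current tree looks like:

def relax_edge(e, v, dist, edge_to):
    # Keep e only if it reaches v more cheaply than the best edge found so far.
    if e.weight < dist[v]:
        dist[v] = e.weight
        edge_to[v] = e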
The MST problem is mostly associated with applications in distributed systems. Cargo delivery systems may compute the MST over recipient locations to plan efficient deliveries. The airline industry can utilize MST solutions to assess the efficiency of airports with respect to flight routes. Moreover, the power distribution of a power plant through transmission lines can be analysed and designed accordingly.

Greedy Algorithms
Greedy algorithms choose the locally optimal option at each step. The greedy paradigm is efficient compared to brute-force approaches; however, it does not guarantee the global optimum in all cases. The example below searches for the maximum value of a tree by greedily descending to the larger child at each depth.

[Figure: searching for the maximum value (30) in two example trees. Left: a binary search tree with root 13; right: an unstructured tree with root 7 that contains the same maximum.]

                  Binary Search Tree        Unstructured Tree
Greedy search     result: 30 ✓, O(logn)     result: 27 ✗, O(logn)
Brute force       result: 30 ✓, O(n)        result: 30 ✓, O(n)
Prim’s Algorithm
Prim’s Algorithm is a greedy MST algorithm that starts from an arbitrary vertex and, at each step, relaxes the vertices adjacent to the growing tree through their lowest-weight connecting edges. Prim’s Algorithm essentially relies on a priority queue to access the minimum-weight edges efficiently.

Priority Queues
A Priority Queue (PQ) dequeues elements according to their priority rather than their insertion order. Fundamentally, two types of priority queues exist: the Minimum-PQ and the Maximum-PQ. Elementary data structures, e.g. arrays, linked lists, trees, etc., can be adapted to support prioritized dequeuing. For example, the following alteration implements minimum-priority dequeuing on a binary search tree.
Algorithm 1: Minimum Priority Dequeuing using Binary Search Trees
Input: Q : queue stored in a BST
function dequeue(Q):
    x ← Q.findMin()        ▷ O(logn)
    Q ← Q.remove(x)        ▷ O(depth)
    return x
end function

As shown in Algorithm 1, the dequeuing operation in PQs consists of two steps: finding the element with the highest priority and removing it from the queue. Compared to regular arrays or linked lists, utilizing a BST reduces the time complexity to O(logn). However, it is not the optimal solution, since the cost is bound to the depth of the tree. That is, if the elements are inserted in already-sorted order, the tree degenerates into a chain and the worst-case complexity increases to O(n).
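A minimal Python sketch of the BST-based approach described above (the node class and helper names are illustrative assumptions):

class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def bst_dequeue_min(root):
    # Find and remove the leftmost (minimum) node; the cost is O(depth).
    if root.left is None:
        return root.key, root.right          # the root itself is the minimum
    parent = root
    while parent.left.left is not None:
        parent = parent.left
    mn = parent.left
    parent.left = mn.right                   # splice the minimum out
    return mn.key, root

# Inserting keys in already-sorted order (1, 2, 3, ...) produces a chain,
# so the O(depth) cost of bst_dequeue_min degrades to O(n).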
Therefore, the Binary Heap data structure is commonly exploited for priority queues, since it guarantees O(logn) complexity, unlike an unbalanced BST. A binary heap is a complete binary tree built under a single constraint: every parent node is no greater (in a min-heap) or no smaller (in a max-heap) than its children. Thus, the highest-priority element is always located at the root node. The dequeuing operation therefore removes the root node and then restores the heap structure on the remaining tree.

Algorithm 2: Minimum Priority Queue using Minimum Binary Heap
Input: H : queue stored in a min-binary heap, n : node
function enqueue(H, n):
    H.insert(n)                          ▷ insert n at the next free leaf position
    curr ← n
    while curr.parent > curr do          ▷ sift up while n is smaller than its parent
        H.swap(curr.parent, curr)
        curr ← curr.parent
end function

function dequeue(H):
    x ← H.root
    H.root ← H.tail                      ▷ the last leaf replaces the root and is removed from its old position
    heapify(H, H.root)                   ▷ O(logn)
    return x
end function

function heapify(H, n):                  ▷ sift down: restore the heap property below n
    min ← n
    if n.left < min then
        min ← n.left
    if n.right < min then
        min ← n.right
    if min ≠ n then
        H.swap(min, n)
        heapify(H, n)                    ▷ n now sits at min's former position
end function

Example of a minimum binary heap: root 5; second level 7 and 12; leaves 13, 9, 20 and 17. Every parent is smaller than its children.
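A minimal array-based Python sketch of the same minimum priority queue (index arithmetic replaces the parent and child pointers of the pseudocode; this is an illustrative implementation, not the one assumed by the course):

class MinHeapPQ:
    def __init__(self):
        self.a = []                                  # array-backed complete binary tree

    def enqueue(self, key):
        self.a.append(key)                           # insert at the next free leaf
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
            # Sift up while the new key is smaller than its parent.
            self.a[(i - 1) // 2], self.a[i] = self.a[i], self.a[(i - 1) // 2]
            i = (i - 1) // 2

    def dequeue(self):
        top = self.a[0]
        self.a[0] = self.a[-1]                       # the last leaf replaces the root
        self.a.pop()
        self._heapify(0)                             # sift down, O(logn)
        return top

    def _heapify(self, i):
        smallest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < len(self.a) and self.a[l] < self.a[smallest]:
            smallest = l
        if r < len(self.a) and self.a[r] < self.a[smallest]:
            smallest = r
        if smallest != i:
            self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
            self._heapify(smallest)

In practice, Python's built-in heapq module provides the same enqueue/dequeue behaviour through heappush and heappop.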

Algorithm 3: Prim’s Algorithm for MST
Input: G : undirected weighted graph
function PrimAlg(G):
    pq ← initialize a priority queue
    marked ← initialize a boolean array of vertices
    mst ← initialize a queue
    relax(G, G.V[0])
    while pq ≠ ∅ do
        e ← pq.dequeue()
        if each v ∈ e.V is marked then       ▷ both endpoints already in the tree: discard e
            continue
        else
            mst.enqueue(e)
            foreach v ∈ e do
                if not marked[v] then
                    relax(G, v)
end function

function relax(G, v):
    marked[v] ← true
    foreach edge e ∈ G.Adj(v) do
        if not marked[e.other(v)] then
            pq.enqueue(e)
end function

Algorithm 4: Eager Prim’s Algorithm for MST
Input: G : undirected weighted graph
function EagerPrimAlg(G):
    edges ← initialize an edge array
    dist ← initialize a double array over the vertices and set each entry to ∞
    marked ← initialize a boolean array of vertices
    pq ← initialize a priority queue with (vertex, ∞) pairs
    v ← G.V[0]
    dist[v] ← 0
    pq.enqueue(v, 0)
    while pq ≠ ∅ do
        v ← pq.dequeue()
        relax(G, v)
end function

function relax(G, v):
    marked[v] ← true
    foreach edge e ∈ G.Adj(v) do
        n ← e.other(v)
        if marked[n] then
            continue
        if e.weight < dist[n] then           ▷ a cheaper edge to n is found: relax
            edges[n] ← e
            dist[n] ← e.weight
            if n ∈ pq then
                pq.change(n, dist[n])        ▷ decrease n's key to the new distance
end function
While the original Prim’s Algorithm given in Algorithm 3 runs in O(E.logE) time, Eager Prim’s Algorithm given in Algorithm 4 runs in O(E.logV). The eager version keeps at most one entry per vertex in the priority queue instead of one entry per edge, which reduces the logarithmic factor from logE to logV; it is therefore more efficient than the original version on dense graphs, i.e. graphs with a high number of edges.
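A minimal Python sketch of the lazy version (Algorithm 3), assuming the graph is given as a dictionary mapping each vertex to a list of (neighbour, weight) pairs and using heapq as the priority queue; both choices are illustrative assumptions:

import heapq

def prim_mst(graph, start):
    # graph: dict {vertex: [(neighbour, weight), ...]} for an undirected graph
    marked, mst, pq = set(), [], []

    def relax(v):
        # Mark v and enqueue every edge leaving v towards an unmarked vertex.
        marked.add(v)
        for n, w in graph[v]:
            if n not in marked:
                heapq.heappush(pq, (w, v, n))

    relax(start)
    while pq and len(mst) < len(graph) - 1:
        w, v, n = heapq.heappop(pq)
        if n in marked:                   # lazily discard edges that became obsolete
            continue
        mst.append((v, n, w))
        relax(n)
    return mst

g = {'A': [('B', 2), ('D', 4)], 'B': [('A', 2), ('C', 1)],
     'C': [('B', 1)], 'D': [('A', 4)]}
print(prim_mst(g, 'A'))                   # [('A', 'B', 2), ('B', 'C', 1), ('A', 'D', 4)]

The eager version in Algorithm 4 instead keeps a single (distance, vertex) entry per vertex and updates it with a decrease-key (change) operation, which is what shrinks the priority queue from E entries to V entries.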

[Figure: an example undirected edge-weighted graph on vertices A–F. Example of the Minimum Spanning Tree of the given graph; the edges of the MST are coloured red.]

Kruskal’s Algorithm
Kruskal’s Algorithm is an alternative approach to find the MST on a given graph. Unlike visiting
connected vertices step-by-step starting from an arbitrary node in Prim’s Algorithm, Kruskal’s Algorithm
aims to connect the disconnected minimal edges until they form a tree. Kruskal’s Algorithm stops
growing the tree once a cycle is observed. The cycle assumption falls down in directed graphs because
the algorithm fails to detect unreachable vertices. Along with the Priority Queues, Kruskal’s Algorithm
exploits the Union-Find approach to detect cycles.

Union-Find

Union-Find is a methodology for representing disjoint sets of elements that share a common characteristic. A tree structure can be exploited to represent each set, and the root node is utilized as the representative (rank) of the unified set. The necessary functions for performing Union-Find operations are given in Algorithm 5. Provided the trees are kept shallow (for example, by always attaching the smaller tree below the larger one), the time complexity of running the find (and accordingly union) function is O(logn). Note that only the root node (representative) is significant; the rest of the elements can be placed in child nodes without any constraints.

Algorithm 5: Union-Find
Input: T : trees representing the sets, x : queried element
function find(T, x):
    if x ≠ x.parent then
        return find(T, x.parent)        ▷ follow parent links up to the root
    else
        return x
end function

function union(T, x, y):
    x ← find(T, x)
    y ← find(T, y)
    if x = y then
        return                          ▷ already in the same set
    else
        y.parent ← x                    ▷ attach one root below the other
end function
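A minimal dictionary-based Python sketch of the same structure (the parent-map representation and the class name are illustrative assumptions):

class UnionFind:
    def __init__(self, elements):
        self.parent = {x: x for x in elements}   # every element starts as its own root

    def find(self, x):
        # Follow parent links until reaching the representative (root).
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:                             # do nothing if already in the same set
            self.parent[ry] = rx                 # attach one root below the other

uf = UnionFind("ABCD")
uf.union("A", "B")
print(uf.find("A") == uf.find("B"))              # True: A and B are now in the same set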

Algorithm 6: Kruskal’s Algorithm for MST
Input: G : undirected weighted graph
function KruskalAlg(G):
    uf ← initialize a set of trees for union-find
    pq ← initialize a priority queue
    mst ← initialize a queue
    foreach e ∈ G.E do                              ▷ O(E.logE)
        pq.enqueue(e)
    while pq ≠ ∅ and mst.size < G.V − 1 do          ▷ O(V)
        e ← pq.dequeue()
        v, w ← e.V
        if find(uf, v) = find(uf, w) then           ▷ O(logV)
            continue
        else
            union(uf, v, w)                         ▷ O(logV)
            mst.enqueue(e)
    return mst
end function

As given in Algorithm 6, most of the effort in Kruskal’s Algorithm goes into enqueuing all edges into the priority queue, so the algorithm is bound by O(E.logE) time complexity. If the graph has a high number of edges, i.e. a dense graph, its effectiveness therefore decreases. On the other hand, Kruskal’s Algorithm may operate more efficiently than Prim’s Algorithm on sparse graphs.
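A minimal Python sketch of Algorithm 6, reusing the UnionFind class sketched in the previous section and sorting the edge list instead of maintaining an explicit priority queue (an equivalent O(E.logE) step); the edge-list input format is an assumption:

def kruskal_mst(vertices, edges):
    # edges: list of (weight, u, v) tuples describing an undirected graph
    uf = UnionFind(vertices)
    mst = []
    for w, u, v in sorted(edges):                # O(E.logE), replaces the priority queue
        if uf.find(u) == uf.find(v):             # adding (u, v) would close a cycle
            continue
        uf.union(u, v)
        mst.append((u, v, w))
        if len(mst) == len(vertices) - 1:        # a spanning tree has V - 1 edges
            break
    return mst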

Prim’s Algorithm | Kruskal’s Algorithm
starts from a single arbitrary vertex and grows the tree by neighbouring vertices at each step | starts from the edges and connects the minimal edges at each step while preventing cycles
anomalies occur in disconnected graphs | spanning forests are obtained in disconnected graphs
O(E.logV) | O(E.logE)
efficient on dense graphs | efficient on sparse graphs

Table: Comparison between Prim’s and Kruskal’s Algorithms for searching the MST.
