DAA Unit-4


Heap & Heap Sort Algorithm

Source: Internet, reference books, NPTEL


Introduction
 A heap data structure is a binary tree with the following two properties.
1. It is a complete binary tree: Each level of the tree is completely filled, except
possibly the bottom level. At this level it is filled from left to right.
2. It satisfies the heap order property: the data item stored in each node is
greater than or equal to the data items stored in its child nodes.

(Figure: three examples: a binary tree that is not a complete binary tree, a complete binary tree that violates the heap order property and so is not a heap, and a complete binary tree that satisfies both properties and is a heap.)
Array Representation of Heap
 Heap can be implemented using an Array.
 An array 𝐴 that represents a heap is an object with two attributes:
1. 𝑙𝑒𝑛𝑔𝑡ℎ[𝐴], which is the number of elements in the array, and
2. ℎ𝑒𝑎𝑝−𝑠𝑖𝑧𝑒[𝐴], the number of elements in the heap stored within array 𝐴

(Figure: the heap with keys 16, 14, 10, 8, 7, 9, 3, 2, 4, 1 shown both as a tree and as the array [16, 14, 10, 8, 7, 9, 3, 2, 4, 1].)
Array Representation of Heap
 In the array 𝐴, that represents a heap
1. length[𝐴] = heap-size[𝐴]
2. For any node i, its parent node is ⌊i/2⌋
3. For any node i, its left child is 2i and its right child is 2i + 1

(Figure: the heap [16, 14, 10, 8, 7, 9, 3, 2, 4, 1] with node indices 1 to 10. For example, for node 2 the parent is node 1, the left child is node 4, and the right child is node 5.)
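These index formulas can be sketched in Python (1-based indexing as on the slides; the function names and the padding at index 0 are illustrative choices, not part of the original material):

```python
def parent(i):
    # For a 1-based heap, the parent of node i is floor(i / 2).
    return i // 2

def left(i):
    # Left child of node i.
    return 2 * i

def right(i):
    # Right child of node i.
    return 2 * i + 1

# The heap from the slide, padded at index 0 so positions are 1-based.
A = [None, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1]

print(A[parent(5)], A[left(5)])  # parent of node 5 is node 2 (key 14); left child is node 10 (key 1)
```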
Types of Heap
1. Max-Heap: the value of each node is greater than or equal to the values of its children; e.g. root 9 with children 6 and 7, where 6 has children 2 and 4, and 7 has child 1.

2. Min-Heap: the value of each node is less than or equal to the values of its children; e.g. root 1 with children 2 and 4, where 2 has children 6 and 7, and 4 has child 9.
Introduction to Heap Sort
1. Build the complete binary tree using given elements.
2. Create Max-heap to sort in ascending order.
3. Once the heap is created, swap the last node with the root node and
delete the last node from the heap.
4. Repeat steps 2 and 3 until the heap is empty.
Heap Sort – Example 1
Sort the following elements in ascending order: 4, 10, 3, 5, 1

Step 1: Create a complete binary tree
Place the elements level by level, left to right: root 4 with children 10 and 3; node 10 with children 5 and 1. Array: [4, 10, 3, 5, 1]. Now a binary tree is created and we have to convert it into a heap.

Step 2: Create a max-heap
In a max-heap, a parent node is always greater than or equal to its child nodes.
 10 is greater than 4, so swap 10 and 4: [10, 4, 3, 5, 1].
 5 is greater than 4, so swap 5 and 4: [10, 5, 3, 4, 1]. The max-heap is created.

Step 3: Apply heap sort
Repeatedly swap the first (root) node with the last node, delete the last node, and restore the max-heap:
 Swap 10 and 1 and delete 10: [1, 5, 3, 4 | 10]. The max-heap property is violated, so create a max-heap again: [5, 4, 3, 1].
 Swap 5 and 1 and delete 5: [1, 4, 3 | 5, 10]. Create a max-heap again: [4, 1, 3].
 Swap 4 and 3 and delete 4: [3, 1 | 4, 5, 10]. Already a max-heap.
 Swap 3 and 1 and delete 3: [1 | 3, 4, 5, 10]. Already a max-heap.
 Remove the last element from the heap and the sorting is over: [1, 3, 4, 5, 10].
Heap Sort – Example 2
 Sort the given elements in ascending order using heap sort: 19, 7, 16, 1, 14, 17

Step 1: Create a binary tree: [19, 7, 16, 1, 14, 17] (root 19; children 7 and 16; node 7 has children 1 and 14; node 16 has left child 17).
Step 2: Create a max-heap: swap 17 and 16, then 14 and 7: [19, 14, 17, 1, 7, 16].
Step 3: Swap 19 and 16 and remove the last element: [16, 14, 17, 1, 7 | 19].
Step 4: Create a max-heap: [17, 14, 16, 1, 7].
Step 5: Swap 17 and 7 and remove the last element: [7, 14, 16, 1 | 17, 19].
Step 6: Create a max-heap: [16, 14, 7, 1].
Step 7: Swap 16 and 1 and remove the last element: [1, 14, 7 | 16, 17, 19].
Step 8: Create a max-heap: [14, 1, 7].
Step 9: Swap 14 and 7 and remove the last element: [7, 1 | 14, 16, 17, 19]. Already a max-heap.
Step 10: Swap 7 and 1 and remove the last element: [1 | 7, 14, 16, 17, 19].
Step 11: Remove the last element; the entire array is sorted now: [1, 7, 14, 16, 17, 19].
Exercises
 Sort the following elements using Heap Sort Method.
1. 34, 18, 65, 32, 51, 21
2. 20, 50, 30, 75, 90, 65, 25, 10, 40

 Sort the following elements in descending order using the Heap Sort algorithm.
1. 65, 77, 5, 23, 32, 45, 99, 83, 69, 81
Heap Sort – Algorithm
# Input: Array A
# Output: Sorted array A

Algorithm: Heap_Sort(A[1,…,n])
BUILD-MAX-HEAP(A)
for i ← length[A] downto 2
    do exchange A[1] ↔ A[i]
       heap-size[A] ← heap-size[A] – 1
       MAX-HEAPIFY(A, 1, i – 1)
Heap Sort – Algorithm
Algorithm: BUILD-MAX-HEAP(A)
heap-size[A] ← length[A]
for i ← ⌊length[A]/2⌋ downto 1
    do MAX-HEAPIFY(A, i)

Example: A = [4, 1, 3, 2, 9, 7], heap-size[A] = 6.
 i = 3: 7 > 3, so A becomes [4, 1, 7, 2, 9, 3].
 i = 2: 9 > 1, so A becomes [4, 9, 7, 2, 1, 3].
 i = 1: 9 > 4, so A becomes [9, 4, 7, 2, 1, 3].
Heap Sort – Algorithm
After BUILD-MAX-HEAP, A = [9, 4, 7, 2, 1, 3]: root 9 has children 4 and 7; node 4 has children 2 and 1; node 7 has left child 3. Heap_Sort now repeatedly exchanges A[1] with the last heap element and calls MAX-HEAPIFY.
Heap Sort – Algorithm
Algorithm: MAX-HEAPIFY(A, i, n)
l ← LEFT(i)
r ← RIGHT(i)
if l ≤ n and A[l] > A[i]
    then largest ← l
    else largest ← i
if r ≤ n and A[r] > A[largest]
    then largest ← r
if largest ≠ i
    then exchange A[i] ↔ A[largest]
         MAX-HEAPIFY(A, largest, n)

Example: after the first exchange in Heap_Sort, A = [3, 4, 7, 2, 1 | 9] with n = 5. MAX-HEAPIFY(A, 1, 5): l = 2, r = 3; A[2] = 4 > 3, so largest = 2; A[3] = 7 > 4, so largest = 3; exchange A[1] ↔ A[3] and recurse.
Heap Sort – Algorithm
After MAX-HEAPIFY(A, 1, 5), A = [7, 4, 3, 2, 1 | 9]: root 7 has children 4 and 3; node 4 has children 2 and 1; 9 is already in its final position.
Heap Sort Algorithm – Analysis
BUILD-MAX-HEAP makes ⌊n/2⌋ calls to MAX-HEAPIFY, each costing O(log n), so it runs in O(n log n). The for loop of Heap_Sort executes n − 1 times; each iteration does an O(1) exchange and one O(log n) call to MAX-HEAPIFY, for O((n − 1) log n) in total.

Running time of the heap sort algorithm is therefore O(n log n) + O((n − 1) log n) = O(n log n).
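The pseudocode above can be rendered as a short Python sketch (0-based indexing rather than the slides' 1-based arrays; the function names are illustrative):

```python
def max_heapify(a, i, n):
    # Sift a[i] down until the subtree rooted at i obeys the max-heap property.
    # n is the current heap size; children of i are 2i+1 and 2i+2 (0-based).
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def build_max_heap(a):
    # Heapify every internal node, bottom-up.
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)

def heap_sort(a):
    build_max_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # move the current max to its final place
        max_heapify(a, 0, end)        # restore the heap on the shrunk prefix

data = [4, 10, 3, 5, 1]
heap_sort(data)
print(data)  # [1, 3, 4, 5, 10]
```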
Binomial Heap
Binomial Tree
 A binomial heap is a collection of binomial trees.

 A binomial tree Bk is an ordered tree defined recursively. The binomial tree B0 has a single node. The binomial tree Bk consists of two binomial trees Bk-1 that are linked together: the root of one becomes the leftmost child of the root of the other.

 Binomial tree properties:
 Bk has 2^k nodes.
 Bk has height k.
 There are exactly C(k, i) nodes at depth i, for i = 0, 1, 2, …, k.
 The root has degree k, which is greater than that of any other node in the tree. The children of the root are roots of subtrees Bk-1, Bk-2, …, B0.
Binomial Tree Example
(Figure: example binomial trees B0 (a single node), B1 (two nodes), B2 (four nodes), B3 (eight nodes), and B4 (sixteen nodes).)
Binomial Heap Properties
 Each binomial tree in H obeys the min-heap property: the key of a node is greater than or equal to the key of its parent. The root has the smallest key in the tree.
 There is at most one binomial tree whose root has a given degree.
 The binomial trees in the binomial heap are arranged in increasing order of degree.
Example:
(Figure: a binomial heap whose root list links trees in increasing order of degree; within each tree every parent key is no larger than its children's keys.)
Binomial Heap Implementation
 Each node has the following fields:
 p: parent
 child: leftmost child
 sibling
 degree
 key

 Roots of the trees are connected using a linked list.
Binomial Heap Implementation

(Figure: (a) the node layout with fields p, key, degree, child, and sibling; (b) and (c) a binomial heap and its pointer-level representation, with NIL marking absent links.)
Binomial Heap Operations
 Create heap
 Find minimum key
 Union two binomial heap
 Insert a node
 Extract minimum node
 Decrease a key
 Delete a node

Create A New Binomial Heap
 The operation simply creates a new pointer and sets it to NIL.

 Pseudocode:
Binomial-Heap-Create()
1 head[H] <- NIL
2 return head[H]

 Run time is Θ(1).
Find Minimum Key
 Since each tree in a binomial heap is min-heap-ordered, the minimum key of each binomial tree must be at its root. This operation checks all the roots to find the minimum key.
 Pseudocode: this implementation assumes that there are no keys with value ∞.

Binomial-Heap-Minimum(H)
1 y <- NIL
2 x <- head[H]
3 min <- ∞
4 while x is not NIL
5 do if key[x] < min then
6 min <- key[x]
7 y <- x
8 x <- sibling[x]
9 return y

 Run time: O(log n), since the maximum number of roots in a binomial heap is ⌊log n⌋ + 1.
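The root-list scan above can be sketched in Python. This is a minimal illustration, not a full binomial heap: the hypothetical BinomialNode class models only the key and sibling fields that the loop touches.

```python
class BinomialNode:
    # Minimal node: only the key and the sibling pointer used by the scan.
    def __init__(self, key, sibling=None):
        self.key = key
        self.sibling = sibling

def binomial_heap_minimum(head):
    # Walk the root list and remember the node with the smallest key.
    y, x = None, head
    smallest = float("inf")
    while x is not None:
        if x.key < smallest:
            smallest = x.key
            y = x
        x = x.sibling
    return y

# Root list with keys 2 -> 5 -> 1 (degrees omitted for brevity).
head = BinomialNode(2, BinomialNode(5, BinomialNode(1)))
print(binomial_heap_minimum(head).key)  # 1
```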
Find Minimum Key Example
(Figure: panels a) to d): the scan walks the root list, updating the running minimum pointer y as it passes each root.)
Union Two Binomial Heaps
This operation consists of the following steps
 Merge two binomial heaps. The resulting heap has the roots in increasing order of
degree

 For each tree in the merged heap, if it has the same degree as another tree, link the two trees together such that the resulting tree obeys min-heap order.

 Case 1: degree[x] ≠ degree[next-x].
The pointers move one position further down the list.
 Case 2: degree[x] = degree[next-x] = degree[sibling[next-x]].
Again the pointers move one position further down the list, and the next iteration executes either case 3 or case 4.
 Case 3: degree[x] = degree[next-x] ≠ degree[sibling[next-x]] and key[x] ≤ key[next-x].
We remove next-x from the root list and link it to x.
 Case 4: degree[x] = degree[next-x] ≠ degree[sibling[next-x]] and key[x] > key[next-x].
We remove x from the root list and link it to next-x.
Union Two Binomial Heaps

(Figure: panels a) and b): the root lists of H1 and H2 are merged into a single root list in nondecreasing order of degree.)
Union Two Binomial Heaps

(Figure: panels c) and d): equal-degree roots are repeatedly linked until at most one tree of each degree remains.)
Union two Binomial Heap
Binomial-Heap-Union(H1, H2)
1  H ← Make-Binomial-Heap()
2  head[H] ← Binomial-Heap-Merge(H1, H2)
3  free the objects H1 and H2 but not the lists they point to
4  if head[H] = NIL
5      then return H
6  prev-x ← NIL
7  x ← head[H]
8  next-x ← sibling[x]
9  while next-x ≠ NIL
10     do if (degree[x] ≠ degree[next-x]) or
             (sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
11        then prev-x ← x                          ▹ Cases 1 and 2
12             x ← next-x
13        else if key[x] ≤ key[next-x]
14             then sibling[x] ← sibling[next-x]   ▹ Case 3
15                  Binomial-Link(next-x, x)
16             else if prev-x = NIL                ▹ Case 4
17                     then head[H] ← next-x
18                     else sibling[prev-x] ← next-x
19                  Binomial-Link(x, next-x)
20                  x ← next-x
21        next-x ← sibling[x]
22 return H
Union Two Binomial Heaps
Pseudocode:
Binomial-Link(y, z)
1 p[y] ← z
2 sibling[y] ← child[z]
3 child[z] ← y
4 degree[z] ← degree[z] + 1

 Example: linking node 5 (a B1 with child 7) to node 1 (a B1 with child 12) makes 5 the leftmost child of 1: p[5] = 1, sibling[5] = 12, child[1] = 5, and degree[1] becomes 2.
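The link step can be sketched with a minimal Python node (illustrative; the fields mirror the slide's p, child, sibling, and degree):

```python
class Node:
    # Minimal binomial-tree node with the four fields Binomial-Link updates.
    def __init__(self, key):
        self.key = key
        self.p = None
        self.child = None
        self.sibling = None
        self.degree = 0

def binomial_link(y, z):
    # Make root y the leftmost child of root z (both roots should have equal degree).
    y.p = z
    y.sibling = z.child
    z.child = y
    z.degree += 1

# Link node 5 under node 1, as in the slide's example (children omitted here).
five, one = Node(5), Node(1)
binomial_link(five, one)
print(one.child.key, one.degree)  # 5 1
```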
Union Two Binomial Heaps
 Binomial-Heap-Merge(H1, H2)
P ← head[H]
P1 ← head[H1]
P2 ← head[H2]
while P1 ≠ NIL and P2 ≠ NIL do
    if degree[P1] ≤ degree[P2] then
        sibling[P] ← P1
        P1 ← sibling[P1]
    else
        sibling[P] ← P2
        P2 ← sibling[P2]
    P ← sibling[P]
append whichever of P1, P2 is non-NIL to the end of the merged list

 Run time: O(log n). The total number of combined roots is at most ⌊log n1⌋ + ⌊log n2⌋ + 2; Binomial-Heap-Merge takes O(log n) and the while loop of Binomial-Heap-Union takes O(log n), so the total time is O(log n).
Insert New Node
Create a new heap H’ and set head[H’] to the new node.
Union the new heap H’ with the existing heap H.

Pseudocode:
Binomial-Heap-Insert(H,x)
1 H’ <- Make-Binomial-Heap()
2 p[x] <- NIL
3 child[x] <- NIL
4 sibling[x] <- NIL
5 degree[x] <- 0
6 head[H’] <- x
7 H <- Binomial-Heap-Union(H,H’)
Run time: O(log n)
Insert New Node Example

New node: 5
(Figure: a one-node heap H′ containing 5 is created and united with H; in the resulting heap the root list is 5 followed by the original tree rooted at 1.)
Extract Node With Minimum Key
 This operation starts by finding and removing the node x with the minimum key from the binomial heap H. A new binomial heap H′ is created and set to the list of x's children in reverse order. Uniting H and H′ gives the resulting binomial heap.
 Pseudocode
Binomial-Heap-Extract-Min(H)
1 find the root x with the minimum key in the root list of H,
and remove x from the root list of H.
2 H’ <- Make-Binomial-Heap()
3 reverse the order of the linked list of x’s children,
and set head[H’] to point to the head of the resulting list.
4 H <- Binomial-Heap-Union(H,H’)
5 Return x
 Run time: O(log n)

Extract Minimum Key Example
(Figure: the root with the minimum key is located in the root list of H and removed; its children will form the new heap H′.)
Extract Minimum Key Example
(Figure: the removed root's child list is reversed to form head[H′]; H and H′ are then united to produce the final binomial heap.)
Decreasing a key
 The current key is replaced with a new key. To maintain the min-heap property, it is then compared with the key of its parent. If the parent's key is greater, the keys (and any satellite data) are exchanged. This process continues until the new key is greater than or equal to the parent's key or the new key reaches the root.

 Pseudocode:
Binomial-Heap-Decrease-Key(H, x, k)
1  if k > key[x]
2      then error "new key is greater than current key"
3  key[x] ← k
4  y ← x
5  z ← p[y]
6  while z ≠ NIL and key[y] < key[z]
7      do exchange key[y] ↔ key[z]
8         if y and z have satellite fields, exchange them, too
9         y ← z
10        z ← p[y]
Decreasing a key
 Execution time: this procedure takes O(log n), since the maximum depth of x is ⌊log n⌋.

(Figure: a key is decreased and exchanged upward with its ancestors until the min-heap property is restored.)
Delete a Node
 We assume that no node in H has a key of −∞.
 The key of the node being deleted is first decreased to −∞.
 The node is then deleted using the extract-min procedure.
Pseudocode: (from book)
Binomial-Heap-Delete(H, x)
1 Binomial-Heap-Decrease-Key(H, x, −∞)
2 Binomial-Heap-Extract-Min(H)

 Run time: O(log n), since the run times of both Binomial-Heap-Decrease-Key and Binomial-Heap-Extract-Min are O(log n).
Example
(Figure: panels a) to g): a node's key is decreased to −∞ and bubbles to the root, the minimum is extracted, the extracted root's children are reversed into H′, and the remaining trees are united into the final heap.)
Compare With Binary Heap
Procedure Binomial Heap Binary Heap

Make-Heap O (1) O (1)

Insert O (log n) O (log n)

Minimum O (log n) O (1)

Extract-Min O (log n) O (log n)

Union O (log n) O (n)

Decrease-Key O (log n) O (log n)

Delete O (log n) O (log n)

Fibonacci Heap
Heaps are mainly used for implementing priority queues. We have discussed the following heaps:
 Binary Heap
 Binomial Heap
 In terms of Time Complexity, Fibonacci Heap beats both Binary and
Binomial Heaps.

1) Find Min: Θ(1) [Same as both Binary and Binomial]


2) Delete Min: O(Log n) [Θ(Log n) in both Binary and Binomial]
3) Insert: Θ(1) [Θ(Log n) in Binary and Θ(1) in Binomial]
4) Decrease-Key: Θ(1) [Θ(Log n) in both Binary and Binomial]
5) Merge: Θ(1) [Θ(m Log n) or Θ(m+n) in Binary and Θ(Log n) in Binomial]

Priority Queues Performance Cost Summary
(Table omitted in the source: per-operation costs of the priority-queue implementations compared in this unit.)
 Original motivation: improve Dijkstra's shortest path algorithm from O(E
log V ) to O(E + V log V ).

 Basic idea.
 Similar to binomial heaps, but less rigid structure.
 Binomial heap: eagerly consolidate trees after each insert.
 Fibonacci heap: lazily defer consolidation until next delete-min.

 Fibonacci Heap is a collection of trees with min-heap or max-heap


property. In Fibonacci Heap, trees can have any shape even all trees can
be single nodes (This is unlike Binomial Heap where every tree has to be
Binomial Tree).

Fibonacci Heaps: Structure
Fibonacci heap.
 Set of heap-ordered trees (each parent no larger than its children).
 Maintain a pointer to the minimum element (find-min takes O(1) time).
 Set of marked nodes.
Memory Representation of the Nodes in a Fibonacci Heap
 The roots of all the trees are linked together for faster access. The child nodes of a parent node are connected to each other through a circular doubly linked list, as shown below.
 In a Fibonacci heap, each tree of degree n has at least F(n+2) nodes in it.
 There are two main advantages of using a circular doubly linked list:
 Deleting a node from the tree takes O(1) time.
 The concatenation of two such lists takes O(1) time.

(Figure: each node stores the fields p, key, degree, mark, L-sibling, R-sibling, and a pointer to any one child. E.g. mark(18) = 1 means node 18 has lost one of its children; mark(38) = 0 means it has lost no child.)
Fibonacci heap: Notations
 Notation.
 n = number of nodes in heap.
 rank(x) or degree(x) = number of children of node x.
 rank(H) = max rank of any node in heap H.
 trees(H) = number of trees in heap H.
 marks(H) = number of marked nodes in heap H

Fibonacci heap operation
 Insert
 Link
 Delete min
 Decrease Key
 Union
 Delete

1. Fibonacci Heaps: Insert => O(1)
 Insert.
 Create a new singleton tree.
 Add it to the root list (as a left sibling of min[H]); update the min pointer if necessary.
2. Fibonacci Heap: Linking Operation
Linking operation: make the larger root a child of the smaller root.
3. Fibonacci Heaps: Delete Min
Delete min.
 Delete the minimum node; meld its children into the root list; update min.
 Consolidate trees so that no two roots have the same rank.
4. Fibonacci Heaps: Decrease Key
 Intuition for decreasing the key of node x.
 If heap order is not violated, just decrease the key of x.
 Otherwise, cut the tree rooted at x and meld it into the root list.
 To keep trees flat: as soon as a node has its second child cut, cut it off too and meld it into the root list (and unmark it).
Decrease Key
Case 1. [heap order not violated]
 Decrease key of x.
 Change heap min pointer (if necessary).

Decrease Key
Case 2a. [heap order violated]
 Decrease key of x.
 Cut the tree rooted at x, meld it into the root list, and unmark it.
 If parent p of x is unmarked (it hasn't yet lost a child), mark it.
 Otherwise, cut p, meld it into the root list, and unmark it (and do so recursively for all ancestors that lose a second child).
Decrease Key
Case 2b. [heap order violated]
 Decrease key of x.
 Cut the tree rooted at x, meld it into the root list, and unmark it.
 If parent p of x is unmarked (it hasn't yet lost a child), mark it; otherwise, cut p, meld it into the root list, and unmark it (and do so recursively for all ancestors that lose a second child).
5. Fibonacci Heaps: Union
 Union: combine two Fibonacci heaps by concatenating their root lists; update the min pointer.
6. Fibonacci Heaps: Delete
Delete node x.
 Decrease the key of x to −∞.
 Delete the minimum element in the heap.
Disjoint set data structure
 What is a disjoint set data structure?
 Two sets are called disjoint if they don't have any element in common; the intersection of disjoint sets is the empty set.
Disjoint set data structure
Consider a situation with a number of persons and the following tasks to be performed on them:
• Add a new friendship relation, i.e. a person x becomes the friend of another person y (adding a new element to a set).
• Find whether individual x is a friend of individual y (a direct or indirect friend).

We are given 10 individuals say, a, b, c, d, e, f, g, h, i, j


Following are the relationships to be added:
 a ↔ b, b ↔ d, c ↔ f, c ↔ i, j ↔ e, g ↔ j

Given queries like whether a is a friend of d or not, we basically need to create the following 4 groups and maintain a quickly accessible connection among group items:
 G1 = {a, b, d}
 G2 = {c, f, i}
 G3 = {e, g, j}
 G4 = {h}

 Find whether x and y belong to the same group or not, i.e. to find if x and y are
direct/indirect friends.

 Partitioning the individuals into different sets according to the groups in which they fall.
This method is known as a Disjoint set Union which maintains a collection of Disjoint
sets and each set is represented by one of its members.

 To answer the above question two key points to be considered are:


 How to resolve sets? Initially, all elements belong to different sets. After working on the given relations, we select a member as a representative. There can be many ways to select a representative; a simple one is to select the member with the biggest index.
 Check if two persons are in the same group: if the representatives of the two individuals are the same, then they are in the same group of friends.
A disjoint–set is a data structure that keeps track of a set of elements partitioned into
several disjoint (non-overlapping) subsets. Operations on Disjoint set are:

 Find: It determines in which subset a particular element is in and returns the


representative of that particular set. An item from this set typically acts as a
“representative” of the set.

 Union: It merges two different subsets into a single subset, and the representative of one set becomes the representative of the other.

 The disjoint–set also supports one other important operation called MakeSet, which
creates a set containing only a given element in it.

How to Implement Disjoint Sets?
 Disjoint-set forests are data structures in which each set is represented by a tree: each node holds a reference to its parent, and the representative of each set is the root of that set's tree.
 Find follows parent pointers until it reaches the root.
 Union combines two trees into one by attaching one tree's root to the root of the other.
Example
For example, consider five disjoint sets S1, S2, S3, S4, and S5 represented by a tree.
Each set initially contains only one element each, so their parent pointer points to
itself or NULL.

 S1 = {1}, S2 ={2}, S3 = {3}, S4 ={4} and S5 = {5}

 The Find operation on element i will return representative of Si, where 1 <= i <= 5,
i.e., Find(i) = i.

 If we do Union (S3, S4), S3 and S4 will be merged into one disjoint set, S3. Now,

 S1 = {1}, S2 ={2}, S3 = {3, 4} and S5 = {5}.

 Find(4) will return representative of set S3, i.e., Find(4) = 3

 If we do Union (S1, S2), S1 and S2 will be merged into one disjoint set, S1. Now,

 S1 = {1, 2}, S3 = {3, 4} and S5 = {5}.

 Find(2) or Find(1) will return the representative of set S1, i.e., Find(2) = Find(1) = 1

 If we do Union (S3, S1), S3 and S1 will be merged into one disjoint set, S3. Now,

 S3 = {1, 2, 3, 4} and S5 = {5}.

Implementation
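The slide's implementation figure did not survive conversion. A minimal naive sketch of the forest described above (no optimizations yet; names are illustrative):

```python
# Naive disjoint-set forest: parent[x] == x marks a root (the set's representative).
parent = {}

def make_set(x):
    parent[x] = x

def find(x):
    # Follow parent pointers up to the root.
    while parent[x] != x:
        x = parent[x]
    return x

def union(x, y):
    # Attach the root of y's tree under the root of x's tree.
    rx, ry = find(x), find(y)
    if rx != ry:
        parent[ry] = rx

for v in [1, 2, 3, 4, 5]:
    make_set(v)
union(3, 4)              # S3 = {3, 4}
union(1, 2)              # S1 = {1, 2}
print(find(4), find(2))  # 3 1
```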

Complexity analysis
 Find – The time complexity of the Find operation is O(n) in the worst case
(consider the case when all the elements are in the same set and we need
to find the parent of a given element then we may need to make n
recursive calls).
 Union - For the union query (say Union(u, v)) we need to find the parents
of u and v making its time complexity O(n).

 This approach is not very efficient, so to get the full advantage of Disjoint Set Union we will look at some minor modifications that make our code efficient.
Optimization-1: Path compression
 Let u be a node and p be its parent. When we are finding parent of u
recursively, we are also finding parent of all those intermediate nodes
which we visit in the path u → p.
 The trick here is to shorten the path to the parent of all those intermediate
nodes by directly connecting them to the leader of the set.

You can see that in the process of finding


the parent of node 8 we have attached all
the intermediate nodes to 1.

 To achieve this, we just need to modify our find function a bit. The minor modification: during the recursive call that finds the parent of parent[u], we also assign the result to parent[u] (see the last line of the pseudocode below).

Find(u):
    if parent[u] == u:
        return u
    return parent[u] = Find(parent[u])

 This first finds the root of the set, and in the process of stack unwinding it attaches all the intermediate nodes directly to their representative. By modifying that one line we reduce the time complexity from O(n) to O(log n).
Optimization-2: Union by size
 In this optimization we will change the union_set operation. To be precise,
we will change which tree gets attached to the other one. In the naive
implementation the second tree always got attached to the first one. In
practice that can lead to trees containing chains of length O(n) .
With this optimization we avoid this by carefully choosing which tree gets attached: the smaller tree is attached to the root of the larger one.
Time complexity: O(log n)
 If we combine both optimizations, path compression with union by size, we reach nearly constant-time queries. It turns out that the final amortized time complexity is O(α(n)), where α(n) is the inverse Ackermann function, which grows very slowly. In fact it grows so slowly that it doesn't exceed 4 for all reasonable n (approximately n < 10^600).

 Amortized complexity is the total time per operation, evaluated over a sequence of multiple operations. The idea is to guarantee the total time of the entire sequence, while allowing single operations to be much slower than the amortized time. E.g. in our case a single call might take O(log n) in the worst case, but if we do m such calls back to back we end up with an average time of O(α(n)).
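Both optimizations together can be sketched in Python (array-based; names are illustrative):

```python
# Disjoint-set with path compression and union by size.
parent = list(range(10))   # parent[i] == i means i is a root
size = [1] * 10            # size of the tree rooted at i

def find(u):
    # Path compression: point every visited node straight at the root.
    if parent[u] != u:
        parent[u] = find(parent[u])
    return parent[u]

def union(u, v):
    # Union by size: attach the smaller tree under the larger one.
    ru, rv = find(u), find(v)
    if ru == rv:
        return
    if size[ru] < size[rv]:
        ru, rv = rv, ru
    parent[rv] = ru
    size[ru] += size[rv]

union(0, 1); union(1, 3)   # {0, 1, 3}
union(2, 5); union(2, 8)   # {2, 5, 8}
print(find(3) == find(0))  # True
print(find(3) == find(5))  # False
```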
Applications of Disjoint Set
 Detecting a cycle in a graph, as in Kruskal's algorithm, where DSUs are used.
 Checking for connected components in a graph.
 Searching for connected components in an image.
Median-of-Medians Finding Algorithm
 Median-finding algorithms (also called linear-time selection algorithms) use a divide and conquer strategy to efficiently compute the ith smallest number in an unsorted list of size n, where i is an integer between 1 and n.

 Given an array A[1, …, n] of n numbers and an index i, where 1 ≤ i ≤ n, find the ith smallest element of A (the median is the ⌈n/2⌉th element).

 Solution 1: This problem can certainly be solved by sorting the list and returning the value at the ⌈n/2⌉th index. But this naïve approach takes O(n log n) to sort the array first and then get the median.

 Solution 2: Another solution solves the problem in O(n) time on average, based on the Quick sort partitioning idea.
 1. Choose a pivot.
 2. Rearrange the array so that numbers smaller than the pivot are on its left and greater numbers on its right.
 3. Check if the pivot index is the median index.
 4. If the pivot index is less than the median index, recursively search for the median in the right subarray.
 5. If the pivot index is greater than the median index, recursively search for the median in the left subarray.
 In the worst case the pivot splits the array into 1 and n − 1 elements, which eventually forms a chain and takes O(n²).
 Median-of-Medians(A, i)
 1. Divide the list into sublists, each of length five (if fewer than five elements are available for the last list, that is fine).
 2. Sort each sublist and determine its median. (Sorting these very small lists takes constant time each, since every sublist has at most five elements, so this step takes O(n) time overall.)
 3. Use the median-of-medians algorithm recursively to determine the median of the set of all the medians.
 4. Use this median as the pivot element, x. The pivot is an approximate median of the whole list, and each recursive step approximates the true median more closely.
Example:
A = [25, 21, 98, 100, 76, 22, 43, 60, 89, 87]
 First, we break the list into sublists of five elements:
 A1 = [25, 21, 98, 100, 76] and A2 = [22, 43, 60, 89, 87]
 Sort each list:
 A1 = [21, 25, 76, 98, 100] and A2 = [22, 43, 60, 87, 89]
 Then take the median of each list and put them in a list of medians: M = [76, 60]
 Sort this: M = [60, 76]; pick the median at index 2/2 = 1 (0-indexed), so M[1] = 76
 Use this as the pivot element and put all elements of A that are less than 76 to its left and all elements greater than 76 to its right:
 A = [25, 22, 43, 60, 21, 76, 100, 89, 87, 98]
 The index of 76 is 5, which is the median position, so the median is 76.
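The procedure above can be sketched in Python. This sketch favors readability over in-place partitioning, using list comprehensions for the partition step (0-based indices; the function name is illustrative):

```python
def median_of_medians(a, i):
    # Return the i-th smallest element of a (i is 0-based).
    if len(a) <= 5:
        return sorted(a)[i]
    # 1) Split into groups of at most five and take each group's median.
    medians = [sorted(a[j:j + 5])[len(a[j:j + 5]) // 2]
               for j in range(0, len(a), 5)]
    # 2) Recursively pick the median of the medians as the pivot.
    pivot = median_of_medians(medians, len(medians) // 2)
    # 3) Partition around the pivot and recurse into the side holding rank i.
    lesser  = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if i < len(lesser):
        return median_of_medians(lesser, i)
    if i < len(lesser) + len(equal):
        return pivot
    return median_of_medians(greater, i - len(lesser) - len(equal))

A = [25, 21, 98, 100, 76, 22, 43, 60, 89, 87]
print(median_of_medians(A, len(A) // 2))  # 76
```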
Thank You!
