Unit - 5 Priority Queue
In the normal queue data structure, insertion is performed at the rear of the queue and deletion is performed at the front based on the FIFO principle. This queue implementation may not be suitable for all applications.
Consider a networking application where the server has to respond to requests from multiple clients using a queue data structure. Assume four requests arrive at the queue in the order R1, R2, R3 & R4, where R1 requires 20 units of time, R2 requires 2 units, R3 requires 10 units and R4 requires 5 units. The queue is served as follows...
1. R1 : 20 units of time
2. R2 : 22 units of time (R2 must wait until R1 completes 20 units and R2 itself requires 2 units. Total 22
units)
3. R3 : 32 units of time (R3 must wait until R2 completes 22 units and R3 itself requires 10 units. Total 32
units)
4. R4 : 37 units of time (R4 must wait until R3 completes 32 units and R4 itself requires 5 units. Total 37
units)
Here, the average waiting time for all requests (R1, R2, R3 and R4) is (20+22+32+37)/4 = 27.75 units of time.
That means, if we use a normal queue data structure to serve these requests, the average waiting time per request is 27.75 units of time.
Now, consider another way of serving these requests. If we serve them according to their required amount of time, we first serve R2, which has the minimum time requirement (2 units), then R4, which has the second minimum (5 units), then R3, which has the third minimum (10 units), and finally R1, which has the maximum time requirement (20 units). The waiting times are then as follows...
1. R2 : 2 units of time
2. R4 : 7 units of time (R4 must wait until R2 completes 2 units and R4 itself requires 5 units. Total 7
units)
3. R3 : 17 units of time (R3 must wait until R4 completes 7 units and R3 itself requires 10 units. Total 17
units)
4. R1 : 37 units of time (R1 must wait until R3 completes 17 units and R1 itself requires 20 units. Total 37
units)
Here, the average waiting time for all requests (R1, R2, R3 and R4) is (2+7+17+37)/4 = 15.75 units of time.
From the above two situations, it is clear that with the second method the server completes all four requests in much less time than with the first. This is exactly what a priority queue does.
A priority queue is a variant of the queue data structure in which insertion is performed in the order of arrival and deletion is performed based on the priority of the elements. A priority queue can be implemented using
an array,
a linked list,
a heap data structure or a binary search tree.
Among these data structures, the heap provides the most efficient implementation of priority queues.
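The waiting-time arithmetic above can be reproduced with a small helper function (a sketch; averageWait is a name chosen here, not part of the notes):

```c
/* Average waiting time when service times are handled in the given
   order: each request waits for everything before it plus itself. */
double averageWait(int times[], int n) {
    int elapsed = 0, total = 0, i;
    for (i = 0; i < n; i++) {
        elapsed += times[i];   /* completion time of request i */
        total += elapsed;      /* add it to the running sum */
    }
    return (double)total / n;
}
```

With the arrival order {20, 2, 10, 5} this returns 27.75, and with the shortest-first order {2, 5, 10, 20} it returns 15.75, matching the calculations above.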
For example, assume that elements are inserted in the order of 8, 5, 3 and 2. And they are removed in the order 8, 5, 3
and 2.
insert() - The new element is added at a particular position so that the elements stay in decreasing order. This requires shifting the existing elements to make room, so the insert() operation takes O(n) time.
findMax() - Finding the maximum element in the queue is very simple because the maximum element is at the beginning of the queue. This findMax() operation requires O(1) time complexity.
remove() - To remove an element from the max priority queue, first we need to find the largest element
using findMax() operation which requires O(1) time complexity, then that element is deleted with constant time
complexity O(1) and finally we need to rearrange the remaining elements in the list which requires O(n) time
complexity. This remove() operation requires O(1) + O(1) + O(n) ≈ O(n) time complexity.
For example, assume that elements are inserted in the order of 2, 3, 5 and 8. And they are removed in the order of 8, 5,
3 and 2.
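The operations above can be sketched as a sorted-array max priority queue (a sketch; the function names follow the operations described, removeMax stands in for remove(), and the fixed capacity of 20 is an assumption):

```c
#define MAX 20

/* Sorted-array max priority queue: elements are kept in decreasing
   order, so the maximum is always at index 0. */
int pq[MAX];
int pqSize = 0;

/* insert() - O(n): shift smaller elements right so the array stays
   in decreasing order, then drop the new value into the gap. */
void insert(int value) {
    int i = pqSize - 1;
    while (i >= 0 && pq[i] < value) {
        pq[i + 1] = pq[i];
        i--;
    }
    pq[i + 1] = value;
    pqSize++;
}

/* findMax() - O(1): the maximum sits at the front. */
int findMax(void) {
    return pq[0];
}

/* removeMax() - O(n): take the front element, shift the rest left. */
int removeMax(void) {
    int max = pq[0], i;
    for (i = 1; i < pqSize; i++)
        pq[i - 1] = pq[i];
    pqSize--;
    return max;
}
```

Inserting 2, 3, 5 and 8 and then calling removeMax() four times yields 8, 5, 3, 2, matching the example.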
There are two types of heap data structures and they are as follows...
1. Max Heap
2. Min Heap
Max Heap
A max heap is a specialized complete binary tree data structure in which the nodes are arranged based on their values.
A max heap is a complete binary tree in which every parent node contains a value greater than or equal to the values of its child nodes.
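Heaps are usually stored compactly in an array, with the children of the node at index i at indexes 2i+1 and 2i+2. Under that convention, the max heap property can be checked with a short function (a sketch; isMaxHeap is a name chosen here):

```c
/* Returns 1 if arr[0..n-1] satisfies the max heap property: every
   parent is greater than or equal to both of its children. */
int isMaxHeap(int arr[], int n) {
    int i;
    for (i = 0; 2 * i + 1 < n; i++) {
        if (arr[2 * i + 1] > arr[i])
            return 0;                     /* left child beats parent */
        if (2 * i + 2 < n && arr[2 * i + 2] > arr[i])
            return 0;                     /* right child beats parent */
    }
    return 1;
}
```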
The following operations are performed on a max heap...
1. Finding Maximum
2. Insertion
3. Deletion
Insertion Operation in Max Heap
Example
Consider the above max heap. Insert a new node with value 85.
Step 1 - Insert the newNode with value 85 as last leaf from left to right. That means newNode is added as a
right child of node with value 75. After adding max heap is as follows...
Step 2 - Compare the newNode value (85) with its parent node value (75). Here 85 > 75.
Step 3 - Since the newNode value (85) is greater than its parent value (75), swap the two nodes. After swapping, the max heap is as follows...
Step 4 - Now, again compare newNode value (85) with its parent node value (89).
Here, newNode value (85) is smaller than its parent node value (89). So, we stop insertion process. Finally, max heap
after insertion of a new node with value 85 is as follows...
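The insertion steps above (add as the last leaf, then swap upward while larger than the parent) can be sketched in the array representation, where the parent of index i sits at (i-1)/2 (heapInsert is a hypothetical helper name):

```c
/* Inserts value into the max heap stored in arr[0..(*n)-1]:
   place it as the last leaf, then sift it up by swapping with
   its parent while it is larger. */
void heapInsert(int arr[], int *n, int value) {
    int i = (*n)++;              /* new node goes in the last position */
    arr[i] = value;
    while (i > 0 && arr[(i - 1) / 2] < arr[i]) {
        int temp = arr[i];       /* swap with the parent */
        arr[i] = arr[(i - 1) / 2];
        arr[(i - 1) / 2] = temp;
        i = (i - 1) / 2;         /* continue from the parent's slot */
    }
}
```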
Deletion Operation in Max Heap
In a max heap, deleting the last node is very simple as it does not disturb the max heap properties.
Deleting the root node from a max heap is a little more difficult, as it disturbs the max heap properties. We use the following steps to delete the root node from a max heap...
Step 1 - Swap the root node with the last node in the max heap.
Step 2 - Delete the last node (which now holds the old root value).
Step 3 - Now, compare the root value with its left child value.
Step 4 - If the root value is smaller than its left child, then compare the left child with its right sibling. Else go to Step 6.
Step 5 - If the left child value is larger than its right sibling, then swap the root with the left child; otherwise swap the root with its right child.
Step 6 - If the root value is larger than its left child, then compare the root value with its right child value.
Step 7 - If the root value is smaller than its right child, then swap the root with the right child; otherwise stop the process.
Step 8 - Repeat the same until the root node settles at its exact position.
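The deletion steps above amount to moving the last node to the root and sifting it down; in the array representation this can be sketched as follows (heapDeleteMax is a name chosen here):

```c
/* Removes and returns the maximum (root) of the max heap in
   arr[0..(*n)-1]: the last node replaces the root, which is then
   sifted down past any larger child. */
int heapDeleteMax(int arr[], int *n) {
    int max = arr[0], i = 0;
    arr[0] = arr[--(*n)];            /* last node replaces the root */
    while (1) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < *n && arr[l] > arr[largest]) largest = l;
        if (r < *n && arr[r] > arr[largest]) largest = r;
        if (largest == i)
            break;                   /* heap property restored */
        int temp = arr[i];           /* swap with the larger child */
        arr[i] = arr[largest];
        arr[largest] = temp;
        i = largest;
    }
    return max;
}
```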
Example
Consider the above max heap. Delete root node (90) from the max heap.
Step 1 - Swap the root node (90) with last node 75 in max heap. After swapping max heap is as follows...
Step 2 - Delete last node. Here the last node is 90. After deleting node with value 90 from heap, max heap is as
follows...
Step 3 - Compare root node (75) with its left child (89).
Here, root value (75) is smaller than its left child value (89). So, compare left child (89) with its right sibling (70).
Step 4 - Here, left child value (89) is larger than its right sibling (70), So, swap root (75) with left child (89).
Step 5 - Now, again compare 75 with its left child (36).
Here, node with value 75 is larger than its left child. So, we compare node 75 with its right child 85.
Step 6 - Here, node with value 75 is smaller than its right child (85). So, we swap both of them. After swapping max
heap is as follows.
Step 7 - Now, compare node with value 75 with its left child (15).
Here, node with value 75 is larger than its left child (15) and it does not have right child. So we stop the process.
35 33 42 10 14 19 27 44 26 31
Quick sort
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array
around the picked pivot. There are many different versions of quickSort that pick pivot in different ways.
1. Always pick the first element as pivot (implemented below).
2. Always pick the last element as pivot.
3. Pick a random element as pivot.
4. Pick the median as pivot.
Although there are many different ways to choose the pivot value, we will simply use the first item in the list. The role of
the pivot value is to assist with splitting the list. The actual position where the pivot value belongs in the final sorted list,
commonly called the split point, will be used to divide the list for subsequent calls to the quick sort.
Quicksort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. This algorithm is quite efficient for large data sets.
Algorithm
Quicksort is a divide and conquer algorithm.
It divides the large array into smaller sub-arrays and then recursively sorts the sub-arrays.
Pivot
1. Picks an element called the "pivot".
Partition
2. Rearrange the array elements in such a way that all values less than the pivot come before the pivot and all values greater than the pivot come after it.
This method is called partitioning the array. At the end of the partition function, the pivot element will be
placed at its sorted position.
Recursive
3. Do the above process recursively to all the sub-arrays and sort the elements.
Base Case
If the array has zero or one element, there is no need to call the partition method.
So we need to stop the recursive call when the array size is less than or equal to 1.
Quick sort program
#include <stdio.h>

void quicksort(int x[], int first, int last);

int main() {
    int x[20], size, i;
    scanf("%d", &size);
    for (i = 0; i < size; i++)
        scanf("%d", &x[i]);
    quicksort(x, 0, size - 1);
    for (i = 0; i < size; i++)
        printf(" %d", x[i]);
    return 0;
}

void quicksort(int x[], int first, int last) {
    int pivot, j, temp, i;
    if (first < last) {
        pivot = first;                 /* first element as pivot */
        i = first + 1;
        j = last;
        while (i < j) {
            while (x[i] <= x[pivot] && i < last)
                i++;
            while (x[j] > x[pivot])
                j--;
            if (i < j) {               /* swap the out-of-place pair */
                temp = x[i];
                x[i] = x[j];
                x[j] = temp;
            }
        }
        temp = x[pivot];               /* place pivot at its sorted position */
        x[pivot] = x[j];
        x[j] = temp;
        quicksort(x, first, j - 1);
        quicksort(x, j + 1, last);
    }
}
Merge Sort
Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves, calls itself for the two halves and then merges the two sorted halves. The merge() function is used for merging the two halves. The call merge(arr, l, m, r) is the key process: it assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one. See the following C implementation for details.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
The following diagram (from Wikipedia) shows the complete merge sort process for an example array {38, 27, 43, 3, 9, 82, 10}.
Merge sort program
#include <stdio.h>

/* merge the sorted halves arr[l..m] and arr[m+1..r] into one */
void merge(int arr[], int l, int m, int r)
{
    int n1 = m - l + 1;          /* size of the left half  */
    int n2 = r - m;              /* size of the right half */
    int temp[n1 + n2];
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)     /* pick the smaller front element */
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m)               /* copy any leftovers */
        temp[k++] = arr[i++];
    while (j <= r)
        temp[k++] = arr[j++];
    for (k = 0; k < n1 + n2; k++)
        arr[l + k] = temp[k];
}

void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

int main()
{
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(arr) / sizeof(arr[0]);
    int i;
    mergeSort(arr, 0, n - 1);
    for (i = 0; i < n; i++)
        printf(" %d", arr[i]);
    return 0;
}
Radix sort
The lower bound for Comparison based sorting algorithm (Merge Sort, Heap Sort, Quick-Sort .. etc) is
Ω(nLogn), i.e., they cannot do better than nLogn.
Counting sort is a linear time sorting algorithm that sorts in O(n+k) time when elements are in the range 1 to k.
Radix sort is a sorting technique that sorts the elements by first grouping the individual digits of the
same place value. Then, sort the elements according to their increasing/decreasing order.
Algorithm:
For each digit i where i varies from the least significant digit to the most significant digit of a number
Sort input array using countsort algorithm according to ith digit.
After sorting by every digit position, the array becomes 10, 11, 17, 21, 34, 44, 123, 654, which is sorted. This is how the algorithm works.
// The main function to that sorts arr[] of size n using Radix Sort
void radixsort(int arr[], int n) {
int m = getMax(arr, n);
int exp;
for (exp = 1; m / exp > 0; exp *= 10)
countSort(arr, n, exp);
}
int main() {
int arr[] = { 170, 45, 75, 90, 802, 24, 2, 66 };
int n = sizeof(arr) / sizeof(arr[0]);
radixsort(arr, n);
print(arr, n);
return 0;
}
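The helper functions getMax, countSort and print used by this program are not shown in the notes; a minimal sketch of them, assuming the standard counting-sort-by-digit approach:

```c
#include <stdio.h>

/* Largest value in arr[]: tells us how many digit passes are needed */
int getMax(int arr[], int n) {
    int i, mx = arr[0];
    for (i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}

/* Stable counting sort of arr[] by the digit selected by exp
   (exp = 1 for units, 10 for tens, 100 for hundreds, ...) */
void countSort(int arr[], int n, int exp) {
    int output[n], count[10] = {0}, i;
    for (i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];          /* prefix sums give positions */
    for (i = n - 1; i >= 0; i--)           /* backwards for stability */
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    for (i = 0; i < n; i++)
        arr[i] = output[i];
}

void print(int arr[], int n) {
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
```

With these helpers in place, the program above sorts {170, 45, 75, 90, 802, 24, 2, 66} into 2 24 45 66 75 90 170 802.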
// main function
int main()
{
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(arr) / sizeof(arr[0]);
    int i;
    heapSort(arr, n);
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
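The heapSort function called by this main is not shown in the notes; a minimal sketch using the standard build-heap-then-extract approach (heapify is a helper name chosen here):

```c
/* Sift the subtree rooted at index i of arr[0..n-1] down until the
   max heap property holds there. */
void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        int temp = arr[i];            /* swap with the larger child */
        arr[i] = arr[largest];
        arr[largest] = temp;
        heapify(arr, n, largest);     /* continue sifting down */
    }
}

/* Build a max heap, then repeatedly move the root (the current
   maximum) to the end of the array and shrink the heap. */
void heapSort(int arr[], int n) {
    int i;
    for (i = n / 2 - 1; i >= 0; i--)  /* build the heap bottom-up */
        heapify(arr, n, i);
    for (i = n - 1; i > 0; i--) {
        int temp = arr[0];            /* max goes to its final slot */
        arr[0] = arr[i];
        arr[i] = temp;
        heapify(arr, i, 0);           /* restore heap on the rest */
    }
}
```

With this sketch, the main above prints the array {12, 11, 13, 5, 6, 7} in sorted order: 5 6 7 11 12 13.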