
UNIT II: Divide and Conquer

Basic strategy, matrix operations, Strassen's matrix multiplication, binary search, quick sort, merge sort,
amortized analysis, application of amortized analysis, advanced data structures like Fibonacci heap,
binomial heap, disjoint set representation

Basic strategy:
A problem-solving strategy that involves breaking a problem into smaller subproblems, solving each subproblem
recursively, and then combining their solutions to solve the original problem.
The basic strategy of the Divide and Conquer paradigm involves three main steps for solving a problem:
1. Divide: Break the original problem into smaller subproblems that are similar to the original problem but
smaller in size. The size of these subproblems decreases with each recursive step.
2. Conquer: Solve the subproblems recursively. If the subproblem sizes are small enough, solve them directly
using base case solutions (e.g., simple arithmetic operations).
3. Combine: Merge the solutions of the subproblems to form the solution of the original problem.
Example: Merge Sort (Divide and Conquer Strategy)
• Divide: Split the array into two halves.
• Conquer: Recursively sort each half.
• Combine: Merge the two sorted halves to produce a single sorted array.
The divide and conquer strategy is effective when a problem can be broken down into independent or nearly
independent subproblems that can be solved recursively and then combined.
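As an illustration of the three steps, the following is a minimal sketch (the class and method names are illustrative, not from a library) that finds the maximum element of an array with divide and conquer: the range is split in half (divide), each half is solved recursively (conquer), and the larger of the two results is returned (combine).

class DivideAndConquerMax
{
    // Returns the maximum element in arr[low..high] using divide and conquer
    static int findMax(int arr[], int low, int high)
    {
        if (low == high)                 // Base case: a single element is its own maximum
            return arr[low];
        int mid = low + (high - low) / 2;            // Divide: split the range in half
        int leftMax = findMax(arr, low, mid);        // Conquer: solve the left half
        int rightMax = findMax(arr, mid + 1, high);  // Conquer: solve the right half
        return Math.max(leftMax, rightMax);          // Combine: take the larger result
    }

    public static void main(String args[])
    {
        int arr[] = { 7, 2, 9, 4, 6 };
        System.out.println("Maximum = " + findMax(arr, 0, arr.length - 1));
    }
}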
Advantages:
• Efficiency: By dividing the problem, the solution often achieves better time complexity (e.g., O(n log n) in
merge sort).
• Parallelization: Subproblems can often be solved in parallel, making this approach well-suited for
distributed computing.

Matrix operations:
Basic operations like matrix addition, subtraction, and multiplication. This forms the foundation for more complex
algorithms.
Strassen's matrix multiplication
An algorithm that multiplies two matrices faster than the standard matrix multiplication method, reducing the time
complexity from O(n^3) to approximately O(n^2.81).
Divide and Conquer:
Following is a simple Divide and Conquer method to multiply two square matrices.
1. Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 each, say A = [[a, b], [c, d]] and B = [[e, f], [g, h]].
2. Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh; these four blocks form the quadrants of the product C = A x B.
Strassen's algorithm, developed by Volker Strassen in 1969, is a fast algorithm for matrix multiplication. It is an
efficient divide-and-conquer method that reduces the number of arithmetic operations required to multiply two
matrices compared to the conventional matrix multiplication algorithm (the naive approach).
The traditional matrix multiplication algorithm has a time complexity of O(n^3) for multiplying two n x n matrices.
However, Strassen's algorithm improves this to O(n^(log2 7)), which is approximately O(n^2.81). The algorithm
achieves this improvement by recursively breaking down the matrix multiplication into smaller subproblems and
combining the results.

The efficiency of Strassen's algorithm comes from the fact that it reduces the number of recursive multiplications
from eight to seven, so fewer multiplication operations are needed overall. However, due to its higher constant factors and increased
overhead, Strassen's algorithm is sometimes slower than the naive algorithm for small matrices or practical
implementations. For huge matrices, it can provide a significant speedup. Additionally, further optimized
algorithms like Coppersmith-Winograd algorithm have been developed to improve matrix multiplication even
more, especially for huge matrices.
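The sketch below shows where the seven multiplications come from, for one level of recursion on 2 x 2 blocks, using the quadrant names a, b, c, d and e, f, g, h from above. It is a minimal illustration, not the full recursive implementation; the class and method names are assumptions made here for clarity.

class StrassenSketch
{
    // One level of Strassen's method: A = [[a, b], [c, d]], B = [[e, f], [g, h]]
    static int[][] multiply2x2(int a, int b, int c, int d,
                               int e, int f, int g, int h)
    {
        // The seven products (recursive half-size multiplications in the full algorithm)
        int m1 = (a + d) * (e + h);
        int m2 = (c + d) * e;
        int m3 = a * (f - h);
        int m4 = d * (g - e);
        int m5 = (a + b) * h;
        int m6 = (c - a) * (e + f);
        int m7 = (b - d) * (g + h);

        // Combine the products into the four quadrants of C = A * B
        int c11 = m1 + m4 - m5 + m7;   // equals ae + bg
        int c12 = m3 + m5;             // equals af + bh
        int c21 = m2 + m4;             // equals ce + dg
        int c22 = m1 - m2 + m3 + m6;   // equals cf + dh
        return new int[][] { { c11, c12 }, { c21, c22 } };
    }
}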

Binary search:
A search algorithm that works on sorted arrays by repeatedly dividing the search interval in half. Its time
complexity is O(log n).
Binary search is a search algorithm used to find the position of a target value within a sorted array. It works by
repeatedly dividing the search interval in half until the target value is found or the interval is empty. The search
interval is halved by comparing the target element with the middle value of the search space.
Binary search follows the divide and conquer approach in which the list is divided into two halves, and the item is
compared with the middle element of the list. If the match is found then, the location of the middle element is
returned. Otherwise, we search into either of the halves depending upon the result produced through the match.
To apply Binary Search algorithm:
• The data structure must be sorted.
• Access to any element of the data structure should take constant time.
Below is the step-by-step algorithm for Binary Search:
• Divide the search space into two halves by finding the middle index “mid”.
• Compare the middle element of the search space with the key.
• If the key is found at middle element, the process is terminated.
• If the key is not found at middle element, choose which half will be used as the next search space.
 If the key is smaller than the middle element, then the left side is used for next search.
 If the key is larger than the middle element, then the right side is used for next search.
• This process is continued until the key is found or the total search space is exhausted.

The Binary Search Algorithm can be implemented in the following two ways
• Iterative Binary Search Algorithm
• Recursive Binary Search Algorithm

Iterative Binary Search Algorithm:
Here we use a while loop to continue the process of comparing the key and splitting the search space into two halves.
Implementation of Iterative Binary Search Algorithm:
import java.io.*;

class BinarySearch
{
    // Returns the index of x in arr[] if present, otherwise -1
    int binarySearch(int arr[], int x)
    {
        int low = 0, high = arr.length - 1;
        while (low <= high)
        {
            int mid = low + (high - low) / 2;
            if (arr[mid] == x)       // Check if x is present at mid
                return mid;
            if (arr[mid] < x)        // If x is greater, ignore the left half
                low = mid + 1;
            else                     // If x is smaller, ignore the right half
                high = mid - 1;
        }
        return -1;                   // If we reach here, the element was not present
    }

    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = { 2, 3, 4, 10, 40 };
        int x = 10;
        int result = ob.binarySearch(arr, x);
        if (result == -1)
            System.out.println("Element is not present in array");
        else
            System.out.println("Element is present at index " + result);
    }
}
Output:
Element is present at index 3
Time Complexity: O(log N)
Auxiliary Space: O(1)

Recursive Binary Search Algorithm:
Create a recursive function and compare the mid of the search space with the key. Based on the result, either return the index where the key is found or call the recursive function for the next search space.
Implementation of Recursive Binary Search Algorithm:
class BinarySearch
{
    // Returns the index of x in arr[low..high] if present, otherwise -1
    int binarySearch(int arr[], int low, int high, int x)
    {
        if (high >= low)
        {
            int mid = low + (high - low) / 2;
            if (arr[mid] == x)      // If the element is present at the middle itself
                return mid;
            if (arr[mid] > x)       // If the element is smaller than mid, it can only be in the left subarray
                return binarySearch(arr, low, mid - 1, x);
            // Else the element can only be present in the right subarray
            return binarySearch(arr, mid + 1, high, x);
        }
        return -1;                  // Element is not present in the array
    }

    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = { 2, 3, 4, 10, 40 };
        int n = arr.length;
        int x = 10;
        int result = ob.binarySearch(arr, 0, n - 1, x);
        if (result == -1)
            System.out.println("Element is not present in array");
        else
            System.out.println("Element is present at index " + result);
    }
}
Output
Element is present at index 3

Complexity Analysis of Binary Search Algorithm:
Time Complexity: O(1) in the best case (the key is at the middle index), and O(log N) in the average and worst cases.
Auxiliary Space: O(1). If the recursive call stack is considered, then the auxiliary space will be O(log N).
Applications of Binary Search Algorithm:
• Binary search can be used as a building block for more complex algorithms used in machine learning, such
as algorithms for training neural networks or finding the optimal hyperparameters for a model.
• It can be used for searching in computer graphics such as algorithms for ray tracing or texture mapping.
• It can be used for searching a database.
Advantages of Binary Search:
• Binary search is faster than linear search, especially for large arrays.
• More efficient than other searching algorithms with a similar time complexity, such as interpolation search
or exponential search.
• Binary search is well-suited for searching large datasets that are stored in external memory, such as on a
hard drive or in the cloud.
Disadvantages of Binary Search:
• The array should be sorted.
• Binary search requires that the data structure being searched be stored in contiguous memory locations.
• Binary search requires that the elements of the array be comparable, meaning that they must be able to be
ordered.

Quick sort
A highly efficient sorting algorithm that uses a divide-and-conquer approach. It works by selecting a 'pivot' and
partitioning the array around the pivot, then recursively sorting the subarrays.
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an element as a pivot and
partitions the given array around the picked pivot by placing the pivot in its correct position in the sorted array.
Quicksort picks an element as pivot, and then it partitions the given array around the picked pivot element. In quick
sort, a large array is divided into two arrays in which one holds values that are smaller than the specified value
(Pivot), and another array holds the values that are greater than the pivot.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-arrays such
that each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-
array is larger than the pivot element.
Conquer: Recursively, sort two subarrays with Quicksort.
Combine: Combine the already sorted array.
Choosing the Pivot:
Picking a good pivot is necessary for a fast implementation of quicksort; however, it is difficult to determine a good
pivot in advance. There are many different choices for picking pivots; some of them are as follows:
➢ Pivot can be random, i.e., select a random element of the given array as the pivot (a sketch of this choice is given below).
➢ Pivot can be either the rightmost element or the leftmost element of the given array.
➢ Select the median as the pivot element.
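A minimal sketch of the random-pivot choice (the class and method names are illustrative): a randomly chosen element is moved to the last position, after which the usual last-element partition routine from the code below can be applied unchanged.

import java.util.Random;

class RandomPivotSketch
{
    static final Random rand = new Random();

    // Move a randomly chosen element to index 'high' so that the standard
    // last-element partition routine (see the Code section below) can be used as-is.
    static void placeRandomPivotLast(int[] arr, int low, int high)
    {
        int pivotIndex = low + rand.nextInt(high - low + 1); // random index in [low, high]
        int temp = arr[pivotIndex];
        arr[pivotIndex] = arr[high];
        arr[high] = temp;
    }
}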
Code:

import java.io.*;

class GFG
{
    // A utility function to swap two elements
    static void swap(int[] arr, int i, int j)
    {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    /* This function takes the last element as pivot, places the pivot element at its correct
       position in the sorted array, and places all smaller elements to the left of the pivot
       and all greater elements to the right of the pivot */
    static int partition(int[] arr, int low, int high)
    {
        int pivot = arr[high];   // Choosing the pivot
        int i = (low - 1);       // Index of smaller element; indicates the right position of pivot found so far
        for (int j = low; j <= high - 1; j++)
        {
            if (arr[j] < pivot)  // If current element is smaller than the pivot
            {
                i++;             // Increment index of smaller element
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return (i + 1);
    }

    static void quickSort(int[] arr, int low, int high)
    {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }

    public static void printArr(int[] arr)
    {
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
    }

    public static void main(String[] args)
    {
        int[] arr = { 10, 7, 8, 9, 1, 5 };
        int N = arr.length;
        quickSort(arr, 0, N - 1);
        System.out.println("Sorted array:");
        printArr(arr);
    }
}
Complexity Analysis of Quick Sort Algorithm:
Time Complexity: O(N log N) in the best and average cases, O(N^2) in the worst case (when the pivot is chosen poorly).
Auxiliary Space: O(log N) on average for the recursion stack, O(N) in the worst case.
Advantages of Quick Sort:
• It is a divide-and-conquer algorithm that makes it easier to solve problems.
• It is efficient on large data sets.
• It has a low overhead, as it only requires a small amount of memory to function.
• It is Cache Friendly as we work on the same array to sort and do not copy data to any auxiliary array.
• Fastest general purpose algorithm for large data when stability is not required.
• It is tail recursive, and hence tail call optimization can be applied.
Disadvantages of Quick Sort:
• It has a worst-case time complexity of O(N^2), which occurs when the pivot is chosen poorly.
• It is not a good choice for small data sets.
• It is not a stable sort, meaning that if two elements have the same key, their relative order will not be
preserved in the sorted output in case of quick sort, because here we are swapping elements according to
the pivot’s position (without considering their original positions).

Merge Sort:
Another divide-and-conquer algorithm that splits the array into halves, recursively sorts them, and then merges the
sorted halves. Its time complexity is O(n log n).
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer approach to sort the elements. It is
one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for
the two halves, and then merges the two sorted halves. We have to define the merge() function to perform the
merging.
The sub-lists are divided again and again into halves until the list cannot be divided further. Then we combine the
pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element pairs are merged
into four-element lists, and so on until we get the fully sorted list.
1. Divide: Divide the list or array recursively into two halves until it can no more be divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process continues until all
elements from both subarrays have been merged.

Code:

import java.io.*;

class GfG
{
    // Merges two subarrays of arr[].
    // First subarray is arr[l..m]
    // Second subarray is arr[m+1..r]
    static void merge(int arr[], int l, int m, int r)
    {
        // Find sizes of two subarrays to be merged
        int n1 = m - l + 1;
        int n2 = r - m;

        // Create temp arrays
        int L[] = new int[n1];
        int R[] = new int[n2];

        // Copy data to temp arrays
        for (int i = 0; i < n1; ++i)
            L[i] = arr[l + i];
        for (int j = 0; j < n2; ++j)
            R[j] = arr[m + 1 + j];

        // Merge the temp arrays
        // Initial indices of first and second subarrays
        int i = 0, j = 0;

        // Initial index of merged subarray
        int k = l;
        while (i < n1 && j < n2) {
            if (L[i] <= R[j]) {
                arr[k] = L[i];
                i++;
            }
            else {
                arr[k] = R[j];
                j++;
            }
            k++;
        }

        // Copy remaining elements of L[] if any
        while (i < n1) {
            arr[k] = L[i];
            i++;
            k++;
        }

        // Copy remaining elements of R[] if any
        while (j < n2) {
            arr[k] = R[j];
            j++;
            k++;
        }
    }

    // Main function that sorts arr[l..r] using merge()
    static void sort(int arr[], int l, int r)
    {
        if (l < r) {
            // Find the middle point
            int m = l + (r - l) / 2;

            // Sort first and second halves
            sort(arr, l, m);
            sort(arr, m + 1, r);

            // Merge the sorted halves
            merge(arr, l, m, r);
        }
    }

    static void printArray(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    public static void main(String args[])
    {
        int arr[] = { 12, 11, 13, 5, 6, 7 };

        System.out.println("Given array is");
        printArray(arr);

        sort(arr, 0, arr.length - 1);

        System.out.println("\nSorted array is");
        printArray(arr);
    }
}

Complexity Analysis of Merge Sort Algorithm:
Time Complexity: O(N log N) in the best, average, and worst cases.
Auxiliary Space: O(N) for the temporary arrays used during merging.
Applications of Merge Sort:
• Sorting large datasets
• External sorting (when the dataset is too large to fit in memory)
• Inversion counting
• Merge Sort and its variations are used in the library methods of programming languages. For example,
its variation TimSort is used in Python, Android Java, and Swift. The main reason it is preferred for
sorting non-primitive types is stability, which QuickSort does not provide. For
example, Arrays.sort in Java uses QuickSort (for primitives) while Collections.sort uses MergeSort.
• It is a preferred algorithm for sorting Linked lists.
• It can be easily parallelized as we can independently sort subarrays and then merge.
• The merge function of merge sort can be used to efficiently solve problems like the union and intersection of
two sorted arrays.
Advantages of Merge Sort:
• Stability : Merge sort is a stable sorting algorithm, which means it maintains the relative order of
equal elements in the input array.
• Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(N
logN) , which means it performs well even on large datasets.
• Simple to implement: The divide-and-conquer approach is straightforward.
• Naturally Parallel : We independently merge subarrays that makes it suitable for parallel
processing.
Disadvantages of Merge Sort:
• Space complexity: Merge sort requires additional memory to store the merged sub-arrays during
the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires additional
memory to store the sorted data. This can be a disadvantage in applications where memory usage
is a concern.
• Slower than Quicksort in general: Quicksort is more cache friendly because it works in-place.

Amortized Analysis
A method for analysing the average time complexity of an algorithm over a sequence of operations, rather than a
single operation.
Amortized analysis is a powerful technique for data structure analysis, involving the total runtime of a sequence of
operations, which is often what we really care about.
In amortized analysis, one averages the total time required to perform a sequence of data-structure operations over
all operations performed. Upshot of amortized analysis: worst-case cost per query may be high for one particular
query, so long as overall average cost per query is small in the end!
Amortized analysis is a worst-case analysis. That is, it measures the average performance of each operation in the
worst case.
In Amortized Analysis, we analyze a sequence of operations and guarantee a worst-case average time that is lower
than the worst-case time of a particularly expensive operation.
The example data structures whose operations are analyzed using Amortized Analysis are Hash Tables, Disjoint
Sets, and Splay Trees.
Amortized analysis is a technique used in computer science to analyze the average-case time complexity of
algorithms that perform a sequence of operations, where some operations may be more expensive than others. The
idea is to spread the cost of these expensive operations over multiple operations, so that the average cost per
operation remains small.
Types of amortized analyses:
Three common types of amortized analyses:
1. Aggregate Analysis: determine upper bound T(n) on total cost of sequence of n operations. So amortized
complexity is T(n)/n.
2. Accounting Method: assign certain charge to each operation (independent of the actual cost of the
operation). If operation is cheaper than the charge, then build up credit to use later.
3. Potential Method: one comes up with potential energy of a data structure, which maps each state of entire
data-structure to a real number (its “potential”). Differs from accounting method because we assign credit to
the data structure as a whole, instead of assigning credit to each operation.
Aggregate Method:
The aggregate method is the simplest approach: just add up the cost of all the operations and then divide by the
number of operations.

In aggregate analysis, there are two steps. First, we must show that a sequence of n operations takes T(n) time in
the worst case. Then, we show that each operation takes T(n)/n time, on average. Therefore, in aggregate analysis,
each operation has the same cost.
A common example of aggregate analysis is a modified stack. Stacks are a linear data structure that have two
constant-time operations. push(element) puts an element on the top of the stack, and pop() takes the top element
off of the stack and returns it. These operations are both constant-time, so a total of n operations (in any order)
will result in O(n) total time.

Because the aggregate method is so simple, however, it may not be able to analyse more complicated algorithms.
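Another standard example of aggregate analysis is a binary counter. The following is a minimal sketch (the class name BinaryCounter and the fixed width k are assumptions made here): bit 0 flips on every increment, bit 1 on every second increment, and bit i on every 2^i-th increment, so the total number of flips over n increments is less than 2n, giving an amortized cost of O(1) per increment even though a single increment may flip many bits.

class BinaryCounter
{
    private final int[] bit;   // bit[0] is the least significant bit

    BinaryCounter(int k) { bit = new int[k]; }

    // Increment the counter by 1; the cost is the number of bits flipped
    void increment()
    {
        int i = 0;
        while (i < bit.length && bit[i] == 1) {  // clear the run of trailing 1s
            bit[i] = 0;
            i++;
        }
        if (i < bit.length)
            bit[i] = 1;                          // set the first 0 bit
    }
}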

Accounting Method:
This method allows an operation to store credit into a bank for future use, if its assigned amortized cost > its
actual cost; it also allows an operation to pay for its extra actual cost using existing credit, if its assigned
amortized cost < its actual cost.
Potential Method:
The potential method is similar to the accounting method. However, instead of thinking about the analysis in
terms of cost and credit, the potential method thinks of work already done as potential energy that can pay for
later operations. This is similar to how rolling a rock up a hill creates potential energy that then can bring it back
down the hill with no effort. Unlike the accounting method, however, potential energy is associated with the data
structure as a whole, not with individual operations.
This method defines a potential function Φ that maps a data structure (DS) configuration to a real number. This function
Φ is equivalent to the total unused credit stored up by all past operations (the bank account balance). The amortized cost of the i-th operation is then defined as

amortized cost_i = actual cost_i + Φ(DS after operation i) - Φ(DS before operation i)

and, summing over a sequence of operations,

total amortized cost = total actual cost + Φ(final DS) - Φ(initial DS).

In order for the amortized bound to hold, Φ should never go below Φ(initial DS) at any point. If Φ(initial DS) = 0,
which is usually the case, then Φ should never go negative (intuitively, we cannot "owe the bank").
Advantages of amortized analysis:
1. More accurate predictions: Amortized analysis provides a more accurate prediction of the average-case
complexity of an algorithm over a sequence of operations, rather than just the worst-case complexity of
individual operations.
2. Provides insight into algorithm behavior: By analyzing the amortized cost of an algorithm, we can gain
insight into how it behaves over a longer period of time and how it handles different types of inputs.
3. Helps in algorithm design: Amortized analysis can be used as a tool for designing algorithms that are
efficient over a sequence of operations.
4. Useful in dynamic data structures: Amortized analysis is particularly useful in dynamic data structures like
heaps, stacks, and queues, where the cost of an operation may depend on the current state of the data
structure.

Disadvantages of amortized analysis:


1. Complexity: Amortized analysis can be complex, especially when multiple operations are involved, making
it difficult to implement and understand.
2. Limited applicability: Amortized analysis may not be suitable for all types of algorithms, especially those
with highly unpredictable behavior or those that depend on external factors like network latency or I/O
operations.
3. Lack of precision: Although amortized analysis provides a more accurate prediction of average-case
complexity than worst-case analysis, it may not always provide a precise estimate of the actual
performance of an algorithm, especially in cases where there is high variance in the cost of operations.
Application of Amortized analysis
A classic example is the Fibonacci heap, a heap data structure whose operations run in better amortized time than
other heaps; it is particularly efficient for algorithms like Dijkstra's or Prim's.
Amortized analysis is a powerful technique used in analysing algorithms, especially in cases where an individual
operation might be expensive, but the average cost of operations over a sequence of actions is lower. Below are
some key applications of amortized analysis:
1. Dynamic Array (Vector) Operations:
• Application: Dynamic arrays, such as std::vector in C++, automatically resize themselves when they run
out of space.
• Amortized Analysis: Inserting an element into a dynamic array may require the array to resize (typically
doubling in size), which is an expensive operation. However, since the resizing happens only occasionally,
the amortized cost of inserting an element over many operations is constant, i.e., O(1); a sketch of this
behaviour is given at the end of this list.
2. Stack Operations with Multipop:
• Application: Stacks that allow both push and pop operations, and a special "multipop" operation that pops
several elements at once.
• Amortized Analysis: Even though a multipop operation can remove several elements at once and seems
expensive, the total cost of all pop operations (including multipop) over a series of push operations is still
O(1) per operation.
3. Splay Trees (Self-Adjusting Binary Search Trees):
• Application: Splay trees are a type of self-adjusting binary search tree used to maintain dynamic sets.
• Amortized Analysis: The amortized time complexity of search, insert, and delete operations in splay trees
is O(log n), even though an individual operation may take longer.
4. Union-Find (Disjoint Set Union with Path Compression and Union by Rank):
• Application: Union-Find data structures are used to manage disjoint sets, commonly applied in network
connectivity and Kruskal’s algorithm for minimum spanning trees.
• Amortized Analysis: With path compression and union by rank, the amortized time complexity for union
and find operations becomes nearly constant, i.e., O(α(n)), where α(n) is the inverse Ackermann function,
which grows very slowly.
5. Incremental Resizing (Binary Counter or Accounting):
• Application: Consider a binary counter that increments by 1. Each bit flip seems to be expensive.
• Amortized Analysis: Although some increments cause several bits to flip, the amortized cost of each
increment is O(1): bit i flips only once every 2^i increments, so the total number of flips over n increments is less than 2n.
6. Fibonacci Heap:
• Application: Fibonacci heaps are a type of priority queue that supports a variety of operations, such as
insertion, deletion, decrease key, and meld (merge).
• Amortized Analysis: While the delete operation takes O(log n) time, most other operations like insert and
decrease key have an amortized cost of O(1).
7. Splay Trees in Data Compression (Move-to-Front Heuristic):
• Application: The move-to-front heuristic is used in data compression, where frequently accessed elements
are moved to the front of a list.
• Amortized Analysis: The amortized cost of accessing an element in such a list is O(1) when applying a
self-adjusting strategy like splaying.
8. Deque Operations (Double-Ended Queue):
• Application: Deques support insertion and deletion at both ends.
• Amortized Analysis: Similar to dynamic arrays, the resizing of deques is amortized, and hence insertion
and deletion at both ends typically have an O(1) amortized cost.
Amortized analysis ensures that over a long sequence of operations, the average cost per operation is small, even if
individual operations are expensive.
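As an illustration of the first application above, the following is a minimal sketch of a doubling dynamic array (the class name DynamicArray and its growth policy are assumptions made here, not a specific library's implementation): most appends copy nothing, and the occasional doubling copy is paid for, on average, by the cheap appends that preceded it, so append runs in O(1) amortized time.

class DynamicArray
{
    private int[] data = new int[1];
    private int size = 0;

    // Amortized O(1): doubling happens only when the array is full
    void append(int value)
    {
        if (size == data.length) {
            int[] bigger = new int[2 * data.length];    // expensive, but rare
            System.arraycopy(data, 0, bigger, 0, size); // copy existing elements
            data = bigger;
        }
        data[size++] = value;                           // the common, cheap case
    }
}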

Advanced data structures:

A heap is an abstract data type used to represent the relationship between parent and child nodes. Heaps are categorized into Min-Heap and Max-Heap:
1. A min-heap is a tree in which, for all the nodes, the key value of the parent must be smaller than the key values of its children.
2. A max-heap is a tree in which, for all the nodes, the key value of the parent must be greater than the key values of its children.

1. Fibonacci heap:
A Fibonacci heap is defined as a collection of rooted trees in which every tree holds the min-heap property: for every node, the key value of the parent node is smaller than or equal to the key values of its children.
As an example, consider a Fibonacci heap consisting of five rooted min-heap-ordered trees with 14 nodes (a min-heap-ordered tree is simply a tree that holds the property of a min-heap). The roots of the trees are linked together in a root list, and the minimum node of the heap, say the node containing the key 3, is pointed to by the pointer FH-min.
Suppose 18, 39, 26, and 35 are marked nodes, meaning they have each lost one child. Then the potential of the
Fibonacci heap = number of rooted trees + twice the number of marked nodes = 5 + 2 * 4 = 13.
Each node contains four pointers: the parent pointer points to the parent (upward), the child pointer points to a child (downward), and the left and right pointers link the siblings (sideways).
Properties of Fibonacci Heap:
1. It can have multiple trees of equal degrees, and each tree doesn't need to have 2^k nodes.
2. All the trees in the Fibonacci Heap are rooted but not ordered.
3. All the roots and siblings are stored in a separate circular doubly linked list.
4. The degree of a node is the number of its children: Node X -> degree = number of X's children.
5. Each node has a mark attribute that is either TRUE or FALSE. FALSE indicates that the node has not
lost any of its children, while TRUE indicates that the node has lost one child. A newly
created node is marked FALSE.
6. The potential function of the Fibonacci heap is F(FH) = t[FH] + 2 * m[FH]
7. The Fibonacci Heap (FH) has some important technicalities listed below:
1. min[FH] - Pointer points to the minimum node in the Fibonacci Heap
2. n[FH] - Determines the number of nodes
3. t[FH] - Determines the number of rooted trees
4. m[FH] - Determines the number of marked nodes
5. F(FH) - Potential Function.

2. Binomial heap:
A heap similar to a binary heap, but with a more complex structure that allows faster merging of two heaps. It is
used in applications requiring fast merging.
What is a Binomial tree?
A Binomial tree Bk is an ordered tree defined recursively, where k is defined as the order of the binomial tree.

• If the binomial tree is represented as B0, then the tree consists of a single node.
• In general, Bk consists of two binomial trees Bk-1 that are linked together such that one tree becomes the
left subtree of the root of the other.
We can understand it with the examples given below:
i. For B0, where k is 0, there exists only one node in the tree.
ii. For B1, where k is 1, there are two binomial trees B0 in which one B0 becomes the left subtree of the other B0.
iii. For B2, where k is 2, there are two binomial trees B1 in which one B1 becomes the left subtree of the other B1.
iv. For B3, where k is 3, there are two binomial trees B2 in which one B2 becomes the left subtree of the other B2.

What is a Binomial Heap?


A binomial heap can be defined as the collection of binomial trees that satisfies the heap properties, i.e., min-heap.
The min-heap is a heap in which each node has a value lesser than the value of its child nodes. Mainly, Binomial
heap is used to implement a priority queue. It is an extension of binary heap that gives faster merge or union
operations along with other operations provided by binary heap.
Properties of Binomial heap:
There are following properties for a binomial heap with n nodes -

• Every binomial tree in the heap must follow the min-heap property, i.e., the key of a node is greater than or
equal to the key of its parent.
• For any non-negative integer k, there is at most one binomial tree in the heap whose root has degree k.
The first property ensures that the min-heap property holds throughout the heap, whereas the second
property ensures that a binomial heap with n nodes contains at most 1 + log2 n binomial trees,
where log2 is the binary logarithm.
For example, a binomial heap may consist of three binomial trees B0, B2, and B3 (13 nodes in total). All three
binomial trees satisfy the min-heap property, as every node has a smaller value than its child nodes. Such a heap
also satisfies the second property, since for every degree k there is at most one binomial tree of that degree in the heap.

Operations on Binomial Heap


The operations that can be performed on binomial heap are listed as follows -

• Creating a binomial heap


• Finding the minimum key
• Union or merging of two binomial heaps
• Inserting a node
• Extracting minimum key
• Decreasing a key
• Deleting a node

Creating a new binomial heap

When we create a new binomial heap, it simply takes O(1) time because creating a heap will create the head
of the heap in which no elements are attached.

Finding the minimum key

As stated above, binomial heap is the collection of binomial trees, and every binomial tree satisfies the min-
heap property. It means that the root node contains a minimum value. Therefore, we only have to compare
the root node of all the binomial trees to find the minimum key. The time complexity of finding the
minimum key in binomial heap is O(logn).
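A minimal sketch of this idea, assuming the root keys have been collected into a simple list (a real implementation walks the linked root list directly; the class and method names are illustrative): scan the O(log n) roots and keep the smallest key.

import java.util.List;

class BinomialHeapSketch
{
    // Each root of a binomial tree holds the minimum key of that tree,
    // so the heap minimum is the smallest key among the roots.
    static int findMinimum(List<Integer> rootKeys)
    {
        int min = Integer.MAX_VALUE;
        for (int key : rootKeys)      // at most O(log n) roots
            min = Math.min(min, key);
        return min;
    }
}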

Union or merging of two binomial heaps


It is the most important operation performed on the binomial heap. Merging two heaps is done by comparing
the keys at the roots of two trees; the root with the larger key becomes the child of the root with the
smaller key. The time complexity of the union operation is O(log n).
To perform the union of two binomial heaps, we have to consider the cases below -
Case 1: If degree[x] is not equal to degree[next x], then move pointer ahead.
Case 2: if degree[x] = degree[next x] = degree[sibling(next x)] then,
Move the pointer ahead.
Case 3: If degree[x] = degree[next x] but not equal to degree[sibling[next x]]
and key[x] < key[next x] then remove [next x] from root and attached to x.
Case 4: If degree[x] = degree[next x] but not equal to degree[sibling[next x]]
and key[x] > key[next x] then remove x from root and attached to [next x].
Now, let's understand the merging or union of two binomial heaps with the help of an example. Consider two
binomial heaps -

We can see that there are two binomial heaps, so, first, we have to combine both heaps. To combine the heaps, first,
we need to arrange their binomial trees in increasing order.

Now, first apply Case1 that says 'if degree[x] ≠ degree[next x] then move pointer ahead' but in the above
example, the degree[x] = degree[next[x]], so this case is not valid.
Now, apply Case2 that says 'if degree[x] = degree[next x] = degree[sibling(next x)] then Move pointer
ahead'. So, this case is also not applied in the above heap.
Now, apply Case3 that says ' If degree[x] = degree[next x] ≠ degree[sibling[next x]] and key[x] < key[next x]
then remove [next x] from root and attached to x'. We will apply this case because the above heap follows the
conditions of case 3 -
Insert an element in the heap
Inserting an element in the heap can be done by simply creating a new heap only with the element to be inserted,
and then merging it with the original heap. Due to the merging, the single insertion in a heap takes O(logn) time.
Now, let's understand the process of inserting a new node in a heap using an example.

First, we have to combine both of the heaps. As both node 12 and node 15 are trees of degree 0, node 15 (the larger
key) is attached as a child of node 12.
Disjoint set representation
Also known as union-find, this data structure manages a partition of a set into disjoint (non-overlapping) subsets.
It's commonly used in network connectivity and minimum spanning tree algorithms.
The disjoint set data structure is also known as the union-find data structure or merge-find set. It is a data structure
that maintains a collection of disjoint (non-overlapping) sets, i.e., a set partitioned into disjoint subsets. Various
operations can be performed on these subsets: we can add new sets, we can merge sets, and we can find the
representative member of a set. It also allows us to efficiently find out whether two elements are in the same set
or not.
The disjoint set can be defined as the subsets where there is no common element between the two sets. Let's
understand the disjoint sets through an example.

s1 = {1, 2, 3, 4}
s2 = {5, 6, 7, 8}
We have two subsets named s1 and s2. The s1 subset contains the elements 1, 2, 3, 4, while s2 contains the
elements 5, 6, 7, 8. Since there is no common element between these two sets, we will not get anything if we
consider the intersection between these two sets. This is also known as a disjoint set where no elements are
common. Now the question arises how we can perform the operations on them. We can perform only two
operations, i.e., find and union.
In the case of the find operation, we have to check which set an element belongs to, using the two sets s1 and s2
defined above.
Suppose we want to perform the union operation on these two sets. First, we have to check whether the elements on
which we are performing the union operation belong to different sets or the same set. If they belong to different sets,
then we can perform the union operation; otherwise, not. For example, suppose we want to perform the union
operation between 4 and 8. Since 4 and 8 belong to different sets, we apply the union operation, and an edge is
added between 4 and 8, merging the two sets.
When the union operation is applied, the set would be represented as:

s1Us2 = {1, 2, 3, 4, 5, 6, 7, 8}
Suppose we add one more edge between 1 and 5. Now the final set can be represented as:
s3 = {1, 2, 3, 4, 5, 6, 7, 8}
If we consider any element from the above set, then all the elements belong to the same set; it means that the cycle
exists in a graph.
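A minimal sketch of the find and union operations with path compression and union by rank (the class and method names are illustrative); with both optimizations, the amortized cost per operation is nearly constant, O(α(n)).

class DisjointSet
{
    private final int[] parent;
    private final int[] rank;

    DisjointSet(int n)
    {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++)
            parent[i] = i;               // each element starts in its own set
    }

    // Find the representative of x's set, compressing the path along the way
    int find(int x)
    {
        if (parent[x] != x)
            parent[x] = find(parent[x]); // path compression
        return parent[x];
    }

    // Merge the sets containing x and y, attaching the shallower tree under the deeper one
    void union(int x, int y)
    {
        int rx = find(x), ry = find(y);
        if (rx == ry)
            return;                      // already in the same set
        if (rank[rx] < rank[ry]) {
            parent[rx] = ry;
        } else if (rank[rx] > rank[ry]) {
            parent[ry] = rx;
        } else {
            parent[ry] = rx;
            rank[rx]++;
        }
    }
}

For example, after union(4, 8), the calls find(4) and find(8) return the same representative; this is exactly how Kruskal's algorithm checks whether adding an edge would create a cycle.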
