Sorting
A sorting algorithm arranges the elements of an array or list in a specific order. Common sorting algorithms include:
• Bubble Sort
• Selection Sort
• Insertion Sort
• Merge Sort
• Quicksort
• Counting Sort
• Radix Sort
• Bucket Sort
• Heap Sort
• Shell Sort
The efficiency of any sorting algorithm is determined by the time complexity and space
complexity of the algorithm.
1. Time Complexity: Time complexity refers to the time taken by an algorithm to complete its
execution with respect to the size of the input. It can be represented in different forms:
• Big-O notation (O)
• Omega notation (Ω)
• Theta notation (Θ)
2. Space Complexity: Space complexity refers to the total amount of memory used by the
algorithm for a complete execution. It includes both the auxiliary memory and the input.
The auxiliary memory is the additional space occupied by the algorithm apart from the input
data. Usually, auxiliary memory is considered for calculating the space complexity of an
algorithm.
A sorting algorithm is considered stable if two or more items with the same value maintain
their relative positions even after sorting.
For example, suppose an array contains two items with the same value 3. An unstable
sorting algorithm allows two possible outcomes: the relative order of the two 3s may or may
not be maintained. After a stable sorting algorithm, however, there is only one possible
outcome: the two 3s keep the same relative positions they had in the original array.
Stability of some common sorting algorithms:
Quicksort: not stable
Heap Sort: not stable
Shell Sort: not stable
Bubble Sort
Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them until
they are in the intended order.
Just like air bubbles in water rise up to the surface, each element of the array moves toward
the end in each iteration. Therefore, it is called bubble sort.
Working of Bubble Sort
1. First Iteration (Compare and Swap)
a. Starting from the first index, compare the first and the second elements.
b. If the first element is greater than the second element, they are swapped.
c. Now, compare the second and the third elements. Swap them if they are not in order.
d. The above process goes on until the last element.
2. Remaining Iterations
The same process goes on for the remaining iterations. After each iteration, the largest
element among the unsorted elements is placed at the end. In each iteration, the comparison
takes place only up to the last unsorted element. The array is sorted when all elements are
kept in the right order.
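The steps above can be sketched in Python. This is a minimal illustration; the function name and the early-exit `swapped` flag are our own additions, not part of the source:

```python
def bubble_sort(arr):
    """Sort a list in place by repeatedly swapping adjacent out-of-order elements."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # The comparison only goes up to the last unsorted element.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # No swaps in a full pass: the array is already sorted.
            break
    return arr
```

For example, `bubble_sort([5, 1, 4, 2, 8])` returns `[1, 2, 4, 5, 8]`.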
Selection Sort
Selection sort is a sorting algorithm that selects the smallest element from an unsorted list in
each iteration and places that element at the beginning of the unsorted list.
Working of Selection Sort
1. Set the first element as minimum.
2. Compare minimum with the second element. If the second element is smaller than
minimum, assign the second element as minimum. Then compare minimum with the third
element. Again, if the third element is smaller, assign minimum to the third element;
otherwise do nothing. The process goes on until the last element.
3. After each iteration, minimum is placed at the front of the unsorted list.
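A minimal Python sketch of these steps (the name `selection_sort` is illustrative):

```python
def selection_sort(arr):
    """Sort in place: each pass moves the smallest unsorted element to the front."""
    n = len(arr)
    for i in range(n - 1):
        minimum = i  # Assume the first unsorted element is the minimum.
        for j in range(i + 1, n):
            if arr[j] < arr[minimum]:
                minimum = j  # Found a smaller element; remember its index.
        # Place the minimum at the front of the unsorted portion.
        arr[i], arr[minimum] = arr[minimum], arr[i]
    return arr
```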
Insertion Sort
Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each
iteration.
Insertion sort works similarly to the way we sort cards in our hand in a card game.
We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted
card is greater than the card in hand, it is placed on the right; otherwise, to the left. In the same
way, other unsorted cards are taken and put in their right place.
Working of Insertion Sort
1. The first element in the array is assumed to be sorted. Take the second element and
store it separately in key. Compare key with the first element. If the first element is
greater than key, then key is placed in front of the first element.
2. Take the third element and compare it with the elements on its left. Place it just
behind the element smaller than it. If there is no element smaller than it, then place it at
the beginning of the array.
3. Similarly, place every unsorted element at its correct position.
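These steps translate into a short Python sketch (names are illustrative):

```python
def insertion_sort(arr):
    """Sort in place by inserting each element into the sorted portion on its left."""
    for i in range(1, len(arr)):
        key = arr[i]  # The element to be placed at its correct position.
        j = i - 1
        # Shift elements greater than key one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # Insert key just behind the first smaller element.
    return arr
```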
Merge Sort
Merge Sort is one of the most popular sorting algorithms and is based on the principle of the
Divide and Conquer Algorithm.
Here, a problem is divided into multiple sub-problems. Each sub-problem is solved individually.
Finally, the sub-problems are combined to form the final solution.
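A compact divide-and-conquer sketch in Python (a recursive version that returns a new list; the source does not prescribe this particular structure):

```python
def merge_sort(arr):
    """Return a new sorted list: split in half, sort each half, merge the results."""
    if len(arr) <= 1:
        return arr  # A list of 0 or 1 elements is already sorted.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Divide: sort the left half.
    right = merge_sort(arr[mid:])   # Divide: sort the right half.
    # Conquer: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable.
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```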
Shell Sort
Shell sort is a generalized version of the insertion sort algorithm. It first sorts elements that are
far apart from each other and successively reduces the interval between the elements to be
sorted.
Working of Shell Sort
1. Suppose we need to sort an array of size N = 8.
2. We use Shell's original sequence (N/2, N/4, ..., 1) as the intervals in our algorithm.
In the first loop, the elements lying at an interval of N/2 = 4 are compared and
swapped if they are not in order.
a. The 0th element is compared with the 4th element.
b. If the 0th element is greater than the 4th one, then the 4th element is first
stored in a temp variable, the 0th element (i.e. the greater element) is stored
in the 4th position, and the element stored in temp is stored in the 0th
position.
3. In the second loop, an interval of N/4 = 2 is taken, and the elements lying at this
interval are sorted. The elements at the 4th and 2nd positions are compared. The
elements at the 2nd and 0th positions are also compared. All the elements in the
array lying at the current interval are compared.
4. The same process goes on for the remaining elements.
5. Finally, when the interval is N/8 = 8/8 = 1, the array elements lying at an
interval of 1 are sorted. The array is now completely sorted.
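The gapped comparisons above can be sketched in Python using Shell's original sequence (function name is illustrative):

```python
def shell_sort(arr):
    """Sort in place using Shell's original gap sequence N/2, N/4, ..., 1."""
    n = len(arr)
    interval = n // 2
    while interval > 0:
        # Gapped insertion sort: compare elements `interval` positions apart.
        for i in range(interval, n):
            temp = arr[i]  # Store the current element, as in the steps above.
            j = i
            while j >= interval and arr[j - interval] > temp:
                arr[j] = arr[j - interval]  # Shift the greater element forward.
                j -= interval
            arr[j] = temp
        interval //= 2  # Reduce the interval: N/2, N/4, ..., 1.
    return arr
```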
Heap Sort
Heap sort is a comparison-based sorting algorithm that is based on the binary heap data
structure. It works by creating a max-heap from the input array, and then repeatedly removing
the largest element from the heap and adding it to the sorted output array.
Heap sort is a very efficient sorting algorithm, and it has a time complexity of O(n log n) for all
cases (best, average, and worst). However, it is not a stable sorting algorithm: the relative
order of equal elements in the input array is not necessarily preserved.
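A sketch of heap sort in Python, using the standard sift-down `heapify` helper (the sorted output is produced in place at the end of the array rather than in a separate array):

```python
def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at index i is a max-heap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)  # Continue sifting down the swapped element.

def heap_sort(arr):
    """Build a max-heap, then repeatedly move the root (maximum) to the end."""
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):  # Build the max-heap bottom-up.
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]  # Largest element goes to the end.
        heapify(arr, end, 0)  # Restore the heap over the remaining prefix.
    return arr
```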
External Sort
External sorting is a term for a class of sorting algorithms that can handle massive amounts of
data. External sorting is required when the data being sorted does not fit into the main memory
of a computing device (usually RAM) and instead, must reside in the slower external memory
(usually a hard drive).
External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data
small enough to fit in the main memory are read, sorted, and written out to a temporary file. In
the merge phase, the sorted sub-files are combined into a single larger file.
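The sort-merge strategy can be illustrated with a toy Python sketch. This is only a model of the idea: real external sorts stream data from disk in large blocks, while here the input is a small in-memory list of strings, the tiny `chunk_size` stands in for the memory limit, and `heapq.merge` performs the k-way merge of the sorted runs:

```python
import heapq
import tempfile

def external_sort(lines, chunk_size):
    """Sort-merge sketch: sort small chunks into temp files, then k-way merge them."""
    chunk_files = []
    # Sorting phase: each chunk is small enough to sort in memory.
    for start in range(0, len(lines), chunk_size):
        chunk = sorted(lines[start:start + chunk_size])
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(value + "\n" for value in chunk)
        f.seek(0)  # Rewind so the merge phase can read the run from the top.
        chunk_files.append(f)
    # Merge phase: heapq.merge lazily combines the sorted runs.
    merged = [line.rstrip("\n") for line in heapq.merge(*chunk_files)]
    for f in chunk_files:
        f.close()
    return merged
```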
NP Hard
NP-hard is a term used in computer science to describe problems that are at least as hard as
any problem in the NP complexity class. The NP complexity class is the set of all decision
problems whose solutions can be verified in polynomial time.
NP-hard problems are also known as computationally intractable problems. This means that
there is no known polynomial-time algorithm for solving these problems. However, there may be
exponential-time algorithms or approximation algorithms for solving these problems.
NP-hard problems are often encountered in real-world applications, such as scheduling, routing,
and optimization. However, the lack of polynomial-time algorithms for solving these problems
can make it difficult to find optimal solutions.
Some examples of NP-hard problems in real-world applications are the traveling salesman
problem (routing), job-shop scheduling, and the knapsack problem (resource optimization).
Despite the lack of polynomial-time algorithms for solving NP-hard problems, there are a
number of techniques that can be used to find good solutions to these problems. These
techniques include:
• Approximation algorithms
• Heuristic algorithms
• Metaheuristic algorithms
Approximation algorithms find solutions that are guaranteed to be within a certain factor of the
optimal solution. Heuristic algorithms find solutions that are not guaranteed to be optimal, but
that are often good solutions. Metaheuristic algorithms are a type of heuristic algorithm that are
often used to solve complex optimization problems.
NP Complete
NP-complete is a term used in computer science to describe problems that are in the NP
complexity class and are also NP-hard. This means that they are both difficult to solve and at
least as hard as any problem in the NP complexity class.
NP-complete problems are also known as the hardest problems in NP. This is because if any
NP-complete problem could be solved in polynomial time, then all NP problems could be solved
in polynomial time.
The main difference between NP-hard and NP-complete problems is that NP-complete
problems are also in the NP complexity class. This means that there is a polynomial-time
algorithm for verifying solutions to NP-complete problems.
NP-hard problems, on the other hand, may or may not be in the NP complexity class. The
halting problem, for example, is NP-hard but is not in NP, since it is undecidable.
Significance of NP-completeness
The significance of NP-completeness is that a polynomial-time algorithm for any single
NP-complete problem would immediately give polynomial-time algorithms for every problem in
NP; whether such an algorithm exists is the famous P versus NP question.
Shortest Path
There are a number of different shortest path algorithms, but some of the most common include:
• Dijkstra's algorithm
• Bellman-Ford algorithm
• Floyd-Warshall algorithm
Dijkstra's algorithm
Dijkstra's algorithm is a greedy algorithm that works by iteratively selecting the node with the
shortest distance from the source node and updating the distances of its neighboring nodes.
The algorithm terminates when the destination node is reached or when there are no more
nodes to update.
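A sketch of Dijkstra's algorithm in Python using a binary heap as the priority queue. It assumes all edge weights are non-negative (a requirement of the algorithm), and the adjacency-list format is our own choice:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    heap = [(0, source)]  # Priority queue keyed by current best distance.
    while heap:
        d, node = heapq.heappop(heap)  # Greedily take the closest unsettled node.
        if d > dist.get(node, float("inf")):
            continue  # Stale entry: a shorter path was already found.
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist  # Update (relax) the neighbor's distance.
                heapq.heappush(heap, (new_dist, neighbor))
    return dist
```

For example, with `graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}`, `dijkstra(graph, "A")` returns `{"A": 0, "B": 1, "C": 3}`.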
Bellman-Ford algorithm
The Bellman-Ford algorithm is a dynamic programming algorithm that can handle graphs with
negative edge weights. The algorithm works by iteratively relaxing the edges of the graph, until
the shortest path distances to all nodes have been found.
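A sketch of edge relaxation in Python. The edge-list format and the negative-cycle check (returning `None`) are our own conventions:

```python
def bellman_ford(num_nodes, edges, source):
    """edges: list of (u, v, weight). Returns distances, or None on a negative cycle."""
    dist = [float("inf")] * num_nodes
    dist[source] = 0
    # Relax every edge num_nodes - 1 times; that suffices for shortest paths.
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement signals a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist
```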
Floyd-Warshall algorithm
The Floyd-Warshall algorithm is an all-pairs shortest path algorithm, meaning that it finds the
shortest paths between all pairs of nodes in the graph. The algorithm works by iteratively
updating a distance matrix that stores the shortest path distances between all pairs of nodes.
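The iterative matrix update can be sketched in a few lines of Python. The input is assumed to be an n x n matrix of direct edge weights, with `float("inf")` where no edge exists and 0 on the diagonal:

```python
def floyd_warshall(dist):
    """Return the matrix of all-pairs shortest path distances."""
    n = len(dist)
    d = [row[:] for row in dist]  # Copy so the input matrix is not modified.
    for k in range(n):            # Allow node k as an intermediate stop.
        for i in range(n):
            for j in range(n):
                # Is going through k shorter than the best path known so far?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```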
Shortest path algorithms are used in many real-world applications, such as:
• Routing in networks
• Navigation systems
• Supply chain management
• Social network analysis
• Recommendation systems
Minimum Spanning Tree
A minimum spanning tree (MST) of a graph is a tree that connects all of the vertices in the
graph with the minimum total weight of edges. MSTs are useful for a variety of problems, such
as network design, clustering, and routing.
There are a number of different algorithms for finding the MST of a graph. Two of the most
common algorithms are Prim's algorithm and Kruskal's algorithm.
Prim's algorithm
Prim's algorithm works by starting at a single vertex and greedily adding edges to the MST until
all of the vertices in the graph are connected. The algorithm chooses the edge with the
minimum weight that connects a vertex in the MST to a vertex that is not yet in the MST.
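A sketch of Prim's algorithm in Python, keeping candidate edges in a heap. For simplicity it returns only the total MST weight, and assumes the graph is connected and undirected:

```python
import heapq

def prim_mst(graph, start):
    """graph: node -> list of (neighbor, weight). Returns total weight of the MST."""
    visited = {start}
    # Heap of candidate edges leaving the current tree, keyed by weight.
    heap = [(w, n) for n, w in graph[start]]
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        weight, node = heapq.heappop(heap)  # Greedily take the lightest edge.
        if node in visited:
            continue  # This edge would create a cycle; skip it.
        visited.add(node)
        total += weight
        for neighbor, w in graph[node]:
            if neighbor not in visited:
                heapq.heappush(heap, (w, neighbor))
    return total
```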
Kruskal's algorithm
Kruskal's algorithm works by sorting all of the edges in the graph by weight. The algorithm then
iteratively adds edges to the MST in order of increasing weight, as long as the addition of the
edge does not create a cycle in the MST.
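Kruskal's algorithm can be sketched with a small union-find structure to detect cycles (again returning only the total MST weight; the edge-list format is our own choice):

```python
def kruskal_mst(num_nodes, edges):
    """edges: list of (weight, u, v). Returns total weight of the MST."""
    parent = list(range(num_nodes))  # Union-find: each node starts in its own set.

    def find(x):
        """Find the root of x's component, with path compression."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for weight, u, v in sorted(edges):  # Consider edges in increasing weight.
        root_u, root_v = find(u), find(v)
        if root_u != root_v:  # Adding the edge does not create a cycle.
            parent[root_u] = root_v  # Merge the two components.
            total += weight
    return total
```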
Common applications of minimum spanning trees include:
• Network design: MSTs can be used to design networks with the minimum total cost.
• Clustering: MSTs can be used to cluster data points by finding the edges that connect the
data points with the minimum total weight.
• Routing: MSTs can be used to find low-cost routes between destinations.
Depth-first Search
Depth-first search (DFS) is a recursive algorithm that traverses a graph by visiting all of the
nodes in a given branch before moving on to the next branch. It is a powerful algorithm that can
be used to solve a variety of problems, such as finding a path between two nodes,
finding connected components in a graph, and detecting cycles in a graph.
DFS works by starting at a root node and recursively exploring each of its neighbors. If a
neighbor has not already been visited, it is explored in the same way; the recursion implicitly
uses a stack, and an iterative version uses an explicit stack instead. The algorithm terminates
when there are no more nodes to explore.
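A minimal recursive DFS sketch in Python (the adjacency-list format and function names are illustrative):

```python
def dfs(graph, start):
    """Return nodes in depth-first order; graph maps node -> list of neighbors."""
    visited = []

    def explore(node):
        if node in visited:
            return  # Already visited; do not explore again.
        visited.append(node)
        for neighbor in graph.get(node, []):
            explore(neighbor)  # Recursion plays the role of the stack.

    explore(start)
    return visited
```

For example, with `graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}`, `dfs(graph, "A")` returns `["A", "B", "D", "C"]`: the whole branch through B is finished before C is visited.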