
Sorting

A sorting algorithm is used to arrange the elements of an array/list in a specific order, for example, in ascending or descending order.

Different Sorting Algorithms

• Bubble Sort
• Selection Sort
• Insertion Sort
• Merge Sort
• Quicksort
• Counting Sort
• Radix Sort
• Bucket Sort
• Heap Sort
• Shell Sort

Complexity of Sorting Algorithms

The efficiency of any sorting algorithm is determined by the time complexity and space
complexity of the algorithm.

1. Time Complexity: Time complexity refers to the time taken by an algorithm to complete its
execution with respect to the size of the input. It can be represented in different forms:
Big-O notation (O)
Omega notation (Ω)
Theta notation (Θ)

2. Space Complexity: Space complexity refers to the total amount of memory used by the
algorithm for a complete execution. It includes both the auxiliary memory and the input.

The auxiliary memory is the additional space occupied by the algorithm apart from the input
data. Usually, auxiliary memory is considered for calculating the space complexity of an
algorithm.

Let's see a complexity analysis of different sorting algorithms.

Sorting Algorithm   Time (Best)   Time (Worst)   Time (Average)   Space Complexity
Bubble Sort         n             n^2            n^2              1
Selection Sort      n^2           n^2            n^2              1
Insertion Sort      n             n^2            n^2              1
Merge Sort          n log n       n log n        n log n          n
Quicksort           n log n       n^2            n log n          log n
Counting Sort       n+k           n+k            n+k              max
Radix Sort          n+k           n+k            n+k              max
Bucket Sort         n+k           n^2            n                n+k
Heap Sort           n log n       n log n        n log n          1
Shell Sort          n log n       n^2            n log n          1

(Here k is the range of the input values and max is the largest element in the input.)

Stability of Sorting Algorithm

A sorting algorithm is considered stable if two or more items with the same value maintain their relative order even after sorting.

For example, in the image below, there are two items with the same value 3. An unstable
sorting algorithm allows two possibilities where the two positions of 3 may or may not be
maintained.
Unstable sorting with two possible outcomes

However, after a stable sorting algorithm, there is always one possibility where the positions are
maintained as in the original array.

Stable sorting with the positions preserved

Here's a table showing the stability of different sorting algorithms.

Sorting Algorithm   Stability
Bubble Sort         Yes
Selection Sort      No
Insertion Sort      Yes
Merge Sort          Yes
Quicksort           No
Counting Sort       Yes
Radix Sort          Yes
Bucket Sort         Yes
Heap Sort           No
Shell Sort          No
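Stability can be seen directly in Python, whose built-in sorted (Timsort, a merge-sort variant) is guaranteed stable; the sample records here are illustrative:

```python
# Records with duplicate keys: two entries with the value 3.
records = [("a", 3), ("b", 1), ("c", 3), ("d", 2)]

# A stable sort keeps the two 3s in their original relative
# order: "a" still comes before "c" after sorting by value.
by_value = sorted(records, key=lambda r: r[1])
print(by_value)  # [('b', 1), ('d', 2), ('a', 3), ('c', 3)]
```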

Bubble Sort

Bubble sort is a sorting algorithm that repeatedly compares two adjacent elements and swaps them until they are in the intended order.

Just like air bubbles rise to the surface of water, the largest unsorted element of the array moves to the end in each iteration. Hence the name bubble sort.

Working of Bubble Sort

Suppose we are trying to sort the elements in ascending order.

1. First Iteration (Compare and Swap)

1. Starting from the first index, compare the first and the second elements.
2. If the first element is greater than the second element, they are swapped.
3. Now, compare the second and the third elements. Swap them if they are not in order.
4. The above process goes on until the last element.

Compare the Adjacent Elements

2. Remaining Iteration

The same process goes on for the remaining iterations.

After each iteration, the largest element among the unsorted elements is placed at the end.
Put the largest element at the end

In each iteration, the comparison takes place up to the last unsorted element.

Compare the adjacent elements


The array is sorted when all the unsorted elements are placed at their correct positions.

The array is sorted if all elements are kept in the right order
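The iterations above can be sketched in Python (a minimal in-place version; the early-exit swapped flag is an optional optimization on top of the basic algorithm):

```python
def bubble_sort(arr):
    """Sort arr in place in ascending order using bubble sort."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Compare adjacent pairs up to the last unsorted element.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # If no swaps occurred, the array is already sorted.
        if not swapped:
            break
    return arr
```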

Selection Sort Algorithm

Selection sort is a sorting algorithm that selects the smallest element from an unsorted list in
each iteration and places that element at the beginning of the unsorted list.

Working of Selection Sort

1. Set the first element as minimum.


Select first element as minimum
2. Compare minimum with the second element. If the second element is smaller than
minimum, assign the second element as minimum.

Compare minimum with the third element. Again, if the third element is smaller, assign it as minimum; otherwise do nothing. The process goes on until the last element.
Compare minimum with the remaining elements
3. After each iteration, minimum is placed in the front of the unsorted list.

Swap the first with minimum


4. For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are
repeated until all the elements are placed at their correct positions.
The first iteration
The second iteration

The third iteration


The fourth iteration
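The steps above translate to a short Python sketch:

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum."""
    n = len(arr)
    for i in range(n - 1):
        # Assume the first unsorted element is the minimum.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap the found minimum to the front of the unsorted part.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```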

Insertion Sort Algorithm

Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each
iteration.

Insertion sort works similarly to the way we sort cards in our hand in a card game.

We assume that the first card is already sorted; then we select an unsorted card. If the unsorted card is greater than the card in hand, it is placed on the right; otherwise, to the left. In the same way, other unsorted cards are taken and put in their right place.

A similar approach is used by insertion sort.

Working of Insertion Sort

Suppose we need to sort the following array.

Initial array

1. The first element in the array is assumed to be sorted. Take the second element and
store it separately in key.

Compare key with the first element. If the first element is greater than key, then key is
placed in front of the first element.

If the first element is greater than key, then key is placed in front of the first element.
2. Now, the first two elements are sorted.

Take the third element and compare it with the elements to its left. Place it just
behind the element smaller than it. If there is no smaller element, place it at
the beginning of the array.
Place 1 at the beginning
3. Similarly, place every unsorted element at its correct position.

Place 4 behind 1
Place 3 behind 1 and the array is sorted
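A minimal Python version of the steps above; key holds the element being inserted:

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted prefix."""
    for i in range(1, len(arr)):
        key = arr[i]          # the element to insert
        j = i - 1
        # Shift larger sorted elements one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # place key just after the smaller element
    return arr
```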

Merge Sort Algorithm

Merge Sort is one of the most popular sorting algorithms. It is based on the principle of divide and conquer.

Here, a problem is divided into multiple sub-problems. Each sub-problem is solved individually.
Finally, sub-problems are combined to form the final solution.
Merge Sort example
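A minimal Python sketch of the divide-and-conquer idea; the <= comparison in the merge step is what keeps the sort stable:

```python
def merge_sort(arr):
    """Return a sorted copy of arr using merge sort."""
    if len(arr) <= 1:
        return arr            # a list of 0 or 1 elements is sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide: sort each half
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:     # <= keeps equal elements in order
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```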

Shell Sort
Shell sort is a generalized version of the insertion sort algorithm. It first sorts elements that are
far apart from each other and successively reduces the interval between the elements to be
sorted.
Working of Shell Sort

1. Suppose we need to sort the following array.

Initial array
2. We are using Shell's original sequence (N/2, N/4, ..., 1) as the intervals in our
algorithm.

In the first loop, if the array size is N = 8, then the elements lying at the interval of
N/2 = 4 are compared and swapped if they are not in order.
a. The 0th element is compared with the 4th element.
b. If the 0th element is greater than the 4th, the two are swapped: the 4th
element is first stored in a temp variable, the 0th element (i.e., the greater
one) is moved to the 4th position, and the value in temp is moved to the 0th
position.

Rearrange the elements at n/2 interval


This process goes on for all the remaining elements.
Rearrange all the elements at n/2 interval
3. In the second loop, an interval of N/4 = 8/4 = 2 is taken and again the elements
lying at these intervals are sorted.

Rearrange the elements at n/4 interval
You might get confused at this point: all the elements in the array lying at the
current interval are compared. The elements at the 4th and 2nd positions are
compared, and the elements at the 2nd and 0th positions are also compared.
4. The same process goes on for remaining elements.

Rearrange all the elements at n/4 interval
5. Finally, when the interval is N/8 = 8/8 = 1, the array elements lying at the
interval of 1 are sorted. The array is now completely sorted.

Rearrange the elements at n/8 interval
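The interval-based passes above can be sketched in Python using Shell's original gap sequence; each pass is a gapped insertion sort:

```python
def shell_sort(arr):
    """Sort arr in place using Shell's original gap sequence n/2, n/4, ..., 1."""
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: compare elements gap positions apart.
        for i in range(gap, n):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]   # shift the greater element forward
                j -= gap
            arr[j] = temp
        gap //= 2                       # shrink the interval
    return arr
```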

Heap Sort
Heap sort is a comparison-based sorting algorithm that is based on the binary heap data
structure. It works by creating a max-heap from the input array, and then repeatedly removing
the largest element from the heap and adding it to the sorted output array.
Heap sort is a very efficient sorting algorithm, with a time complexity of O(n log n) in all
cases (best, average, and worst). However, it is not a stable sorting algorithm: it does not
necessarily preserve the order of equal elements in the input array.
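The two phases described above can be sketched in Python: build a max-heap, then repeatedly move the root (the largest element) to the end of the array:

```python
def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at i is a max-heap of size n."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    """Sort arr in place using heap sort."""
    n = len(arr)
    # Phase 1: build a max-heap from the input array.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Phase 2: repeatedly move the largest element to the end.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr
```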
External Sort
External sorting is a term for a class of sorting algorithms that can handle massive amounts of
data. External sorting is required when the data being sorted does not fit into the main memory
of a computing device (usually RAM) and instead, must reside in the slower external memory
(usually a hard drive).

External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data
small enough to fit in the main memory are read, sorted, and written out to a temporary file. In
the merge phase, the sorted sub-files are combined into a single larger file.
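A minimal Python sketch of the sort-merge strategy; the file format (one integer per line) and the chunk_size memory budget are illustrative assumptions:

```python
import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    """Sort a file of one integer per line that may not fit in memory.

    chunk_size is an illustrative memory budget (lines per chunk).
    """
    chunk_paths = []
    # Sorting phase: read chunks that fit in memory, sort each,
    # and write it out as a sorted run in a temporary file.
    with open(input_path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as tmp:
                tmp.writelines(f"{x}\n" for x in chunk)
            chunk_paths.append(path)
    # Merge phase: k-way merge of the sorted runs into one file.
    runs = [open(p) for p in chunk_paths]
    try:
        with open(output_path, "w") as out:
            for x in heapq.merge(*((int(line) for line in fh) for fh in runs)):
                out.write(f"{x}\n")
    finally:
        for fh in runs:
            fh.close()
        for p in chunk_paths:
            os.remove(p)
```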

NP Hard
NP-hard is a term used in computer science to describe problems that are at least as hard as
any problem in the NP complexity class. The NP complexity class is the set of all decision
problems whose solutions can be verified in polynomial time.

NP-hard problems are widely regarded as computationally intractable: no polynomial-time
algorithm is known for solving them. However, there may be exponential-time algorithms or
approximation algorithms for these problems.

Some examples of NP-hard problems include:

The traveling salesman problem (TSP)
The knapsack problem
The subset sum problem
The Hamiltonian circuit problem
The clique problem

NP-hard problems are often encountered in real-world applications, such as scheduling, routing,
and optimization. However, the lack of polynomial-time algorithms for solving these problems
can make it difficult to find optimal solutions.
Here are some examples of NP-hard problems in real-world applications:

Scheduling airline flights
Routing delivery trucks
Designing VLSI circuits
Finding the optimal portfolio of investments
Cracking cryptographic algorithms

Despite the lack of polynomial-time algorithms for solving NP-hard problems, there are a
number of techniques that can be used to find good solutions to these problems. These
techniques include:

Approximation algorithms
Heuristic algorithms
Metaheuristic algorithms

Approximation algorithms find solutions that are guaranteed to be within a certain factor of the
optimal solution. Heuristic algorithms find solutions that are not guaranteed to be optimal, but
that are often good solutions. Metaheuristic algorithms are a type of heuristic algorithm that are
often used to solve complex optimization problems.

NP Complete
NP-complete is a term used in computer science to describe problems that are in the NP
complexity class and are also NP-hard. This means that they are both difficult to solve and at
least as hard as any problem in the NP complexity class.

NP-complete problems are also known as the hardest problems in NP. This is because if any
NP-complete problem could be solved in polynomial time, then all NP problems could be solved
in polynomial time.

Some examples of NP-complete problems include:

The Boolean satisfiability problem (SAT)
The independent set problem
The vertex cover problem
The Hamiltonian circuit problem
The clique problem

NP-complete problems are often encountered in real-world applications, such as scheduling,
routing, and optimization. However, the lack of polynomial-time algorithms for solving these
problems can make it difficult to find optimal solutions.

Difference between NP-hard and NP-complete

The main difference between NP-hard and NP-complete problems is that NP-complete
problems are also in the NP complexity class. This means that there is a polynomial-time
algorithm for verifying solutions to NP-complete problems.

NP-hard problems, on the other hand, may or may not be in the NP complexity class. For
example, the halting problem is NP-hard but is not in NP, since it is undecidable.

Significance of NP-completeness

NP-completeness is a significant concept in computer science because it allows us to classify
problems by how hard they are to solve. If a problem is NP-complete, then no polynomial-time
algorithm for it is known, and finding one would show that every problem in NP can be solved
in polynomial time.

Shortest Path Algorithm


A shortest path algorithm is an algorithm that finds the shortest path between two nodes in a
graph. Graphs are a data structure used to represent networks of nodes and edges, where
edges represent connections between nodes.

There are a number of different shortest path algorithms, but some of the most common include:

Dijkstra's algorithm
Bellman-Ford algorithm
Floyd-Warshall algorithm

Dijkstra's algorithm

Dijkstra's algorithm is a greedy algorithm that works by iteratively selecting the unvisited node
with the shortest known distance from the source node and updating the distances of its
neighboring nodes. It requires all edge weights to be non-negative. The algorithm terminates
when the destination node is reached or when there are no more nodes to update.
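A minimal Python sketch using a binary heap; the adjacency-list format (node mapped to a list of (neighbor, weight) pairs) is an illustrative choice:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source. graph maps node -> [(neighbor, weight)].

    Assumes all edge weights are non-negative.
    """
    dist = {source: 0}
    pq = [(0, source)]            # (distance, node) priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # relax the edge u -> v
                heapq.heappush(pq, (nd, v))
    return dist
```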

Bellman-Ford algorithm

The Bellman-Ford algorithm is a dynamic programming algorithm that can handle graphs with
negative edge weights. The algorithm works by iteratively relaxing the edges of the graph, until
the shortest path distances to all nodes have been found.
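A minimal Python sketch of edge relaxation, including the extra pass that detects negative cycles; the edge-list format is an illustrative choice:

```python
def bellman_ford(num_nodes, edges, source):
    """edges is a list of (u, v, weight); nodes are 0..num_nodes-1.

    Returns shortest distances from source, or None if a negative
    cycle is reachable from the source.
    """
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    # Relax every edge |V| - 1 times.
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist
```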

Floyd-Warshall algorithm

The Floyd-Warshall algorithm is an all-pairs shortest path algorithm, meaning that it finds the
shortest paths between all pairs of nodes in the graph. The algorithm works by iteratively
updating a distance matrix that stores the shortest path distances between all pairs of nodes.
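The distance-matrix update can be sketched in a few lines of Python; the matrix convention (infinity for missing edges, zero on the diagonal) is the usual one:

```python
def floyd_warshall(dist):
    """All-pairs shortest paths. dist is an n x n matrix where dist[i][j]
    is the weight of edge i -> j, float('inf') if absent, 0 on the diagonal.
    The matrix is updated in place and returned.
    """
    n = len(dist)
    for k in range(n):            # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```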

Applications of shortest path algorithms

Shortest path algorithms have a wide range of applications, including:

Routing in networks
Navigation systems
Supply chain management
Social network analysis
Recommendation systems

Minimum Spanning Tree
A minimum spanning tree (MST) of a graph is a tree that connects all of the vertices in the
graph with the minimum total weight of edges. MSTs are useful for a variety of problems, such
as network design, clustering, and routing.

There are a number of different algorithms for finding the MST of a graph. Two of the most
common algorithms are Prim's algorithm and Kruskal's algorithm.

Prim's algorithm

Prim's algorithm works by starting at a single vertex and greedily adding edges to the MST until
all of the vertices in the graph are connected. The algorithm chooses the edge with the
minimum weight that connects a vertex in the MST to a vertex that is not yet in the MST.
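A minimal Python sketch of Prim's algorithm using a heap of candidate edges; the adjacency-list format is an illustrative choice:

```python
import heapq

def prim_mst(graph, start):
    """Total MST weight of start's component.

    graph maps node -> [(neighbor, weight)]; list each undirected edge twice.
    """
    visited = {start}
    # Candidate edges leaving the current tree, ordered by weight.
    pq = [(w, v) for v, w in graph[start]]
    heapq.heapify(pq)
    total = 0
    while pq:
        w, v = heapq.heappop(pq)
        if v in visited:
            continue              # both endpoints already in the tree
        visited.add(v)
        total += w                # cheapest edge crossing the cut
        for u, wu in graph[v]:
            if u not in visited:
                heapq.heappush(pq, (wu, u))
    return total
```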

Kruskal's algorithm

Kruskal's algorithm works by sorting all of the edges in the graph by weight. The algorithm then
iteratively adds edges to the MST in order of increasing weight, as long as the addition of the
edge does not create a cycle in the MST.
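A minimal Python sketch of Kruskal's algorithm; cycle detection uses a simple union-find (disjoint-set) structure:

```python
def kruskal_mst(num_nodes, edges):
    """edges is a list of (weight, u, v); nodes are 0..num_nodes-1.

    Returns the MST (or forest) as a list of chosen edges.
    """
    parent = list(range(num_nodes))

    def find(x):                  # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    # Consider edges in order of increasing weight.
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # different components: no cycle created
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```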

Applications of minimum spanning trees

Minimum spanning trees have a wide range of applications, including:

Network design: MSTs can be used to design networks with the minimum total cost.
Clustering: MSTs can be used to cluster data points by finding the edges that connect the
data points with the minimum total weight.
Routing: MSTs can be used to find optimal routes between destinations.

Depth-first Search
Depth-first search (DFS) is a recursive algorithm that traverses a graph by visiting all of the
nodes in a given branch before moving on to the next branch. It is a powerful algorithm that can
be used to solve a variety of problems, such as finding a path between two nodes, finding
connected components in a graph, and detecting cycles in a graph. (For shortest paths in an
unweighted graph, breadth-first search is the right tool.)

DFS works by starting at a root node and recursively exploring all of its neighbors. If a neighbor
has not already been visited, it is marked as visited and explored in the same way (an iterative
version keeps unexplored nodes on an explicit stack). The algorithm terminates when there are
no more nodes to explore.
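A minimal recursive Python sketch; the adjacency-list format is an illustrative choice:

```python
def dfs(graph, node, visited=None):
    """Recursive depth-first traversal; returns nodes in visit order.

    graph maps node -> list of neighbors.
    """
    if visited is None:
        visited = []
    visited.append(node)                  # mark the node as visited
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)  # explore the branch fully
    return visited
```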
