Searching and Sorting

The document discusses the selection sort and binary search algorithms. Selection sort works by iterating through an array, finding the minimum value, and swapping it into the current sorting position. This results in a time complexity of O(n²) in all cases. Binary search works on a sorted array by comparing the search value to the middle element and recursively searching either the left or right half. This results in an average time complexity of O(log n).

Selection Sort Algorithm

In this article, we will discuss the selection sort algorithm. Its working procedure is simple, and it is a common examination question, so it is worth discussing in detail.

In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and swapped into its appropriate position in the array. It is an in-place comparison sorting algorithm and one of the simplest sorting algorithms to implement. The array is conceptually divided into two parts: a sorted part and an unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is the entire given array. The sorted part sits at the left, while the unsorted part sits at the right.

In selection sort, the smallest element is selected from the unsorted array and placed at the first position. After that, the second smallest element is selected and placed at the second position. The process continues until the array is entirely sorted.

The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due to this, it is not suitable for large data sets.

Selection sort is generally used when -

o A small array is to be sorted
o Swapping cost doesn't matter
o It is compulsory to check all elements

Now, let's see the algorithm of selection sort.

Algorithm
SELECTION SORT(arr, n)

Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2:    CALL SMALLEST(arr, i, n, pos)
Step 3:    SWAP arr[i] with arr[pos]
        [END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)

Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n-1
           if (SMALL > arr[j])
              SET SMALL = arr[j]
              SET pos = j
           [END OF if]
        [END OF LOOP]
Step 4: RETURN pos

Working of Selection sort Algorithm


Now, let's see the working of the Selection sort Algorithm.

To understand the working of the selection sort algorithm, let's take an unsorted array. It will be easier to understand selection sort via an example.

Let the elements of the array be -

Now, for the first position in the sorted array, the entire array is scanned sequentially. At present, 12 is stored at the first position; after searching the entire array, it is found that 8 is the smallest value.

So, swap 12 with 8. After the first iteration, 8 will appear at the first position in
the sorted array.

For the second position, where 29 is currently stored, we again sequentially scan the rest of the items of the unsorted array. After scanning, we find that 12 is the second lowest element in the array and should appear at the second position.

Now, swap 29 with 12. After the second iteration, 12 will appear at the second
position in the sorted array. So, after two iterations, the two smallest values
are placed at the beginning in a sorted way.

The same process is applied to the rest of the array elements. Now, we are
showing a pictorial representation of the entire sorting process.
Now, the array is completely sorted.

Selection sort complexity


Now, let's see the time complexity of selection sort in best case, average case,
and in worst case. We will also see the space complexity of the selection sort.

1. Time Complexity

Case            Time Complexity
Best Case       O(n²)
Average Case    O(n²)
Worst Case      O(n²)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of selection sort is still O(n²), because every pass scans the whole unsorted part regardless.
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of selection sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order, i.e. you have to sort the elements in ascending order, but they are given in descending order. The worst-case time complexity of selection sort is O(n²).

Implementation of selection sort


#include <stdio.h>

void selection(int arr[], int n)
{
    int i, j, small;

    for (i = 0; i < n-1; i++)   // one by one move the boundary of the unsorted subarray
    {
        small = i;              // index of the minimum element in the unsorted part

        for (j = i+1; j < n; j++)
            if (arr[j] < arr[small])
                small = j;

        // swap the minimum element with the first unsorted element
        int temp = arr[small];
        arr[small] = arr[i];
        arr[i] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    selection(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}

Output:

Before sorting array elements are - 
12 31 25 8 32 17 
After sorting array elements are - 
8 12 17 25 31 32 

Another compact implementation of selection sort:
#include <stdio.h>
int main() {
    int arr[10] = {6, 12, 0, 18, 11, 99, 55, 45, 34, 2};
    int n = 10;
    int i, j, position, swap;
    for (i = 0; i < (n - 1); i++) {
        position = i;
        for (j = i + 1; j < n; j++) {
            if (arr[position] > arr[j])
                position = j;
        }
        if (position != i) {
            swap = arr[i];
            arr[i] = arr[position];
            arr[position] = swap;
        }
    }
    for (i = 0; i < n; i++)
        printf("%d\t", arr[i]);
    return 0;
}

Output
0 2 6 11 12 18 34 45 55 99
Binary Search Algorithm
In this article, we will discuss the Binary Search Algorithm. Searching is the
process of finding some particular element in the list. If the element is present
in the list, then the process is called successful, and the process returns the
location of that element. Otherwise, the search is called unsuccessful.

Linear Search and Binary Search are the two popular searching techniques.
Here we will discuss the Binary Search Algorithm.

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in a list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is
divided into two halves, and the item is compared with the middle element of
the list. If the match is found then, the location of the middle element is
returned. Otherwise, we search into either of the halves depending upon the
result produced through the match.

NOTE: Binary search can only be applied to sorted array elements. If the list elements are not arranged in sorted order, we first have to sort them.

Now, let's see the algorithm of Binary Search.

Algorithm
Binary_Search(a, lower_bound, upper_bound, val)
// 'a' is the given array, 'lower_bound' is the index of the first array element,
// 'upper_bound' is the index of the last array element, 'val' is the value to search

Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3:    set mid = (beg + end)/2
Step 4:    if a[mid] = val
              set pos = mid
              print pos
              go to step 6
           else if a[mid] > val
              set end = mid - 1
           else
              set beg = mid + 1
           [end of if]
        [end of loop]
Step 5: if pos = -1
           print "value is not present in the array"
        [end of if]
Step 6: exit

Working of Binary search


Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted
array. It will be easy to understand the working of Binary search with an
example.

There are two methods to implement the binary search algorithm -

o Iterative method
o Recursive method

The recursive method of binary search follows the divide and conquer approach.

Let the elements of the array be -

Let the element to search be K = 56.

We have to use the below formula to calculate the mid of the array -

mid = (beg + end)/2

So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array.

The search then continues in one half of the array at a time, halving the range at each step, until the element is found. The algorithm then returns the index of the matched element.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average
case, and worst case. We will also see the space complexity of Binary search.
1. Time Complexity

Case            Time Complexity
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)

o Best Case Complexity - In binary search, the best case occurs when the element to search is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of binary search is O(1).
o Average Case Complexity - The average case time complexity of binary search is O(log n).
o Worst Case Complexity - In binary search, the worst case occurs when we have to keep reducing the search space until it has only one element. The worst-case time complexity of binary search is O(log n).

Implementation of Binary search

Program: Write a program to implement binary search in C language.

#include <stdio.h>
int binarySearch(int a[], int beg, int end, int val)
{
    int mid;
    if(end >= beg)
    {
        mid = (beg + end)/2;
        /* if the item to be searched is present at the middle */
        if(a[mid] == val)
        {
            return mid+1;
        }
        /* if the item to be searched is greater than the middle, then it can only be in the right subarray */
        else if(a[mid] < val)
        {
            return binarySearch(a, mid+1, end, val);
        }
        /* if the item to be searched is smaller than the middle, then it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid-1, val);
        }
    }
    return -1;
}
int main() {
    int a[] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
    int val = 40; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = binarySearch(a, 0, n-1, val); // store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}

Output

The elements of the array are - 11 14 25 30 40 41 52 57 70 
Element to be searched is - 40
Element is present at 5 position of array

An iterative implementation of binary search:

#include <stdio.h>

// An iterative binary search function. It returns the location of x in
// the given array arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r)
    {
        int m = l + (r-l)/2;

        // check if x is present at mid
        if (arr[m] == x)
            return m;

        // if x is greater, ignore the left half
        if (arr[m] < x)
            l = m + 1;
        // if x is smaller, ignore the right half
        else
            r = m - 1;
    }

    // if we reach here, the element was not present
    return -1;
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int n = sizeof(arr)/ sizeof(arr[0]);
    int x = 10;
    int result = binarySearch(arr, 0, n-1, x);
    (result == -1) ? printf("Element is not present in array")
                   : printf("Element is present at index %d", result);
    return 0;
}

Output
Element is present at index 3
Bubble sort program in C
Bubble sort is a simple and intuitive sorting algorithm. It repeatedly swaps
adjacent elements if they are in the wrong order until the array is sorted. In
this algorithm, the largest element "bubbles up" to the end of the array in
each iteration. Bubble sort is inefficient for large data sets, but it is useful for
educational purposes and small data sets. In this article, we will implement the
bubble sort algorithm in C programming language.

The first step is to define the bubble sort function. This function takes an
integer array and the size of the array as its parameters. The function returns
nothing as it modifies the original array. Here is the function definition:

void bubble_sort(int arr[], int n) {
    int i, j;
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

The function has two loops. The outer loop runs from the first element to the
second-last element of the array. The inner loop runs from the first element to
the second-last element of the unsorted part of the array. The condition of the
inner loop is n - i - 1 because the last i elements of the array are already
sorted.

In each iteration of the inner loop, we compare adjacent elements. If the left
element is greater than the right element, we swap them. After the inner loop
completes, the largest element is guaranteed to be at the end of the unsorted
part of the array.

Now, we can write the main function to test our bubble sort implementation.
Here is the main function along with the previous part:

C Program:

#include <stdio.h>

void bubble_sort(int arr[], int n) {
    int i, j;
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}
int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubble_sort(arr, n);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    return 0;
}
The main function creates an integer array arr of size 7 and initializes it with sample values. We then calculate the number of elements by dividing the size of the array by the size of one integer element. Next, we call the bubble_sort function to sort the array. Finally, we print the sorted array using a for loop.

When we run the program, we should see the following output:


Sorted array: 11 12 22 25 34 64 90

This output shows that our bubble sort implementation correctly sorted the
array in ascending order.

To run the program, we need to compile it using a C compiler. Here is an


example compilation command for GCC:

1. gcc -o bubble_sort bubble_sort.c

This command compiles the bubble_sort.c file and produces an executable file
named bubble_sort.

In summary, the bubble sort algorithm repeatedly swaps adjacent elements


until the array is sorted. The algorithm has a time complexity of O(n²), which
makes it inefficient for large data sets. However, it is useful for educational
purposes and small data sets. We implemented the bubble sort algorithm in C
programming language and tested it using a simple example.

Characteristics:
o Bubble sort is a simple sorting algorithm.
o It works by repeatedly swapping adjacent elements if they are in the
wrong order.
o The algorithm sorts the array in ascending or descending order.
o It has a time complexity of O(n 2) in the worst case, where n is the size of
the array.
Usage:
o Bubble sort is useful for educational purposes and small data sets.
o It is not suitable for large data sets because of its time complexity.

Advantages:
o Bubble sort is easy to understand and implement.
o It requires minimal additional memory space to perform the sorting.

Disadvantages:
o It is not efficient for large data sets because of its time complexity.
o It has poor performance compared to other sorting algorithms, such as
quicksort and mergesort.

Conclusion:
Bubble sort is a simple and intuitive sorting algorithm that is useful for
educational purposes and small data sets. However, its time complexity makes
it inefficient for large data sets. Therefore, it is not commonly used in real-
world applications. Other sorting algorithms, such as quicksort and mergesort,
are more efficient for large data sets.
Linear Search Algorithm
In this article, we will discuss the Linear Search Algorithm. Searching is the
process of finding some particular element in the list. If the element is present
in the list, then the process is called successful, and the process returns the
location of that element; otherwise, the search is called unsuccessful.

Two popular search methods are Linear Search and Binary Search. So, here we
will discuss the popular searching technique, i.e., Linear Search Algorithm.

Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found. If the match is found, then the location of the item is returned; otherwise, the algorithm returns NULL.

It is widely used to search an element from the unordered list, i.e., the list in
which items are not sorted. The worst-case time complexity of linear search
is O(n).

The steps used in the implementation of Linear Search are listed as follows -

o First, we have to traverse the array elements using a for loop.
o In each iteration of for loop, compare the search element with the
current array element, and -
o If the element matches, then return the index of the
corresponding array element.
o If the element does not match, then move to the next element.
o If there is no match or the search element is not present in the given
array, return -1.

Now, let's see the algorithm of linear search.


Algorithm
Linear_Search(a, n, val)
// 'a' is the given array, 'n' is the size of the given array, 'val' is the value to search

Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat step 4 while i <= n
Step 4:    if a[i] == val
              set pos = i
              print pos
              go to step 6
           [end of if]
           set i = i + 1
        [end of loop]
Step 5: if pos = -1
           print "value is not present in the array"
        [end of if]
Step 6: exit

Working of Linear search


Now, let's see the working of the linear search Algorithm.

To understand the working of the linear search algorithm, let's take an unsorted array. It will be easy to understand the working of linear search with an example.

Let the elements of the array be -

Let the element to be searched be K = 41.

Now, start from the first element and compare K with each element of the
array.

The value of K, i.e., 41, does not match the first element of the array. So, move to the next element, and follow the same process until the respective element is found.

Now, the element to be searched is found. So the algorithm will return the index of the matched element.

Linear Search complexity


Now, let's see the time complexity of linear search in the best case, average
case, and worst case. We will also see the space complexity of linear search.

1. Time Complexity

Case            Time Complexity
Best Case       O(1)
Average Case    O(n)
Worst Case      O(n)

o Best Case Complexity - In linear search, the best case occurs when the element we are searching for is at the first position of the array. The best-case time complexity of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are looking for is at the end of the array, or is not present in the array at all, so the entire array has to be traversed. The worst-case time complexity of linear search is O(n).

The time complexity of linear search is O(n) because every element in the
array is compared only once.

2. Space Complexity
Space Complexity O(1)

o The space complexity of linear search is O(1).

Implementation of Linear Search


Now, let's see the programs of linear search in different programming
languages.
Program: Write a program to implement linear search in C language.

#include <stdio.h>
int linearSearch(int a[], int n, int val) {
    // go through the array sequentially
    for (int i = 0; i < n; i++)
    {
        if (a[i] == val)
            return i+1;
    }
    return -1;
}
int main() {
    int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52}; // given array
    int val = 41; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = linearSearch(a, n, val); // store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}

Output

The elements of the array are - 70 40 30 11 57 41 25 14 52 
Element to be searched is - 41
Element is present at 6 position of array
Insertion Sort Algorithm
In this article, we will discuss the insertion sort algorithm. Its working procedure is simple, and it is a common examination question, so it is worth discussing in detail.

Insertion sort works similarly to sorting playing cards in your hands. It is assumed that the first card is already sorted, and then we pick an unsorted card. If the selected unsorted card is greater than the first card, it is placed to the right; otherwise, it is placed to the left. Similarly, all remaining unsorted cards are taken and put in their exact place.

The same approach is applied in insertion sort. The idea is to take one element at a time and insert it into its correct position within the already-sorted part of the array. Although it is simple to use, it is not appropriate for large data sets, as the time complexity of insertion sort in the average case and worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort, and merge sort.

Insertion sort has various advantages such as -

o Simple implementation
o Efficient for small data sets
o Adaptive, i.e., it is appropriate for data sets that are already substantially
sorted.

Now, let's see the algorithm of insertion sort.


Algorithm
The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted.
Return 1.

Step 2 - Pick the next element, and store it separately in a key.

Step 3 - Now, compare the key with all elements in the sorted array.

Step 4 - If the element in the sorted array is smaller than the current element,
then move to the next element. Else, shift greater elements in the array
towards the right.

Step 5 - Insert the value.

Step 6 - Repeat until the array is sorted.

Working of Insertion sort Algorithm


Now, let's see the working of the insertion sort Algorithm.

To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier to understand insertion sort via an example.

Let the elements of the array be -

Initially, the first two elements are compared in insertion sort.


Here, 31 is greater than 12. That means both elements are already in
ascending order. So, for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31, so 31 is not at the correct position. Now, swap 31 with 25. Along with swapping, insertion sort also checks the element against all elements in the sorted sub-array.

For now, the sorted sub-array has only one element, 12. Since 25 is greater than 12, the sorted sub-array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the
next elements that are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.


So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next
items that are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and
31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.


Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Insertion sort complexity


Now, let's see the time complexity of insertion sort in best case, average case,
and in worst case. We will also see the space complexity of insertion sort.

1. Time Complexity

Case            Time Complexity
Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order, i.e. you have to sort the elements in ascending order, but they are given in descending order. The worst-case time complexity of insertion sort is O(n²).

2. Space Complexity

Space Complexity   O(1)
Stable             YES

o The space complexity of insertion sort is O(1), because only a single extra variable is required for the insertion.

Implementation of insertion sort


Now, let's see the programs of insertion sort in different programming
languages.

Program: Write a program to implement insertion sort in C language.

#include <stdio.h>

void insert(int a[], int n) /* function to sort an array with insertion sort */
{
    int i, j, temp;
    for (i = 1; i < n; i++) {
        temp = a[i];
        j = i - 1;

        while(j >= 0 && temp <= a[j]) /* move elements greater than temp one position ahead of their current position */
        {
            a[j+1] = a[j];
            j = j-1;
        }
        a[j+1] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);

    return 0;
}

Output:

Before sorting array elements are - 
12 31 25 8 32 17 
After sorting array elements are - 
8 12 17 25 31 32 
Quick Sort Algorithm
In this article, we will discuss the quicksort algorithm. Its working procedure is simple, and it is a common examination question, so it is worth discussing in detail.

Sorting is a way of arranging items in a systematic manner. Quicksort is a widely used sorting algorithm that makes n log n comparisons in the average case for sorting an array of n elements. It is a fast and highly efficient sorting algorithm. This algorithm follows the divide and conquer approach. Divide and conquer is a technique of breaking a problem down into subproblems, solving the subproblems, and combining the results back together to solve the original problem.

Divide: In Divide, first pick a pivot element. After that, partition or rearrange
the array into two sub-arrays such that each element in the left sub-array is
less than or equal to the pivot element and each element in the right sub-
array is larger than the pivot element.

Conquer: Recursively, sort two subarrays with Quicksort.

Combine: Combine the already sorted array.

Quicksort picks an element as the pivot and then partitions the given array around the picked pivot element. In quick sort, a large array is divided into two arrays: one holds values smaller than the specified value (the pivot), and the other holds values greater than the pivot. After that, the left and right sub-arrays are partitioned using the same approach. This continues until only a single element remains in each sub-array.

Choosing the pivot

Picking a good pivot is necessary for a fast implementation of quicksort. However, it can be difficult to determine a good pivot. Some of the ways of choosing a pivot are as follows -

o Pivot can be random, i.e. select a random pivot from the given array.
o Pivot can be either the rightmost element or the leftmost element of the given array.
o Select the median as the pivot element.

Algorithm
Algorithm:

QUICKSORT (array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT (A, start, p - 1)
        QUICKSORT (A, p + 1, end)
    }
}

Partition Algorithm:

The partition algorithm rearranges the sub-arrays in place.

PARTITION (array A, start, end)
{
    pivot ← A[end]
    i ← start - 1
    for j ← start to end - 1 {
        if (A[j] < pivot) {
            i ← i + 1
            swap A[i] with A[j]
        }
    }
    swap A[i+1] with A[end]
    return i + 1
}

Working of Quick Sort Algorithm


Now, let's see the working of the Quicksort Algorithm.

To understand the working of quick sort, let's take an unsorted array. It will make the concept clearer and more understandable.

Let the elements of the array be -

In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24, a[right] = 27 and a[pivot] = 24.

Since the pivot is at the left, the algorithm starts from the right and moves towards the left.

Now, a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. -

Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to the right, as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm starts from the left and moves to the right.

As a[pivot] > a[left], the algorithm moves one position to the right as -

Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves one position to the right as -

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and a[left]; now the pivot is at the left, i.e. -

Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now, a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one position to the left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; now the pivot is at the right, i.e. -

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm starts from the left and moves to the right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the same element, which marks the termination of the procedure.

Element 24, the pivot element, is now placed at its exact position. Elements to the right of 24 are greater than it, and elements to the left of 24 are smaller than it.

Now, in a similar manner, the quick sort algorithm is separately applied to the left and right sub-arrays. After sorting is done, the array will be -

Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and
in worst case. We will also see the space complexity of quicksort.
1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n²)

o Best Case Complexity - In Quicksort, the best case occurs when the
pivot element is the middle element or near to the middle element. The
best-case time complexity of quicksort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of quicksort
is O(n*logn).
o Worst Case Complexity - In quick sort, the worst case occurs when the
pivot element is either the greatest or the smallest element. For example, if the
pivot element is always chosen as the last element of the array, the worst case
occurs when the given array is already sorted in ascending or
descending order. The worst-case time complexity of quicksort is O(n²).

Though the worst-case complexity of quicksort is higher than that of other sorting
algorithms such as merge sort and heap sort, it is still faster in practice.
The worst case rarely occurs in quicksort, because the choice of pivot can be
varied from implementation to implementation; the worst case can be
avoided by choosing the pivot element well.

2. Space Complexity
Space Complexity O(log n)
Stable NO

o The space complexity of quicksort is O(log n) on average, for the
recursion stack; in the worst case, the recursion depth (and hence the
stack space) can grow to O(n).

Implementation of quicksort
Now, let's see a program implementing quicksort in the C language.

Program: Write a program to implement quicksort in C language.

#include <stdio.h>

/* Function that considers the last element as pivot,
   places the pivot at its exact position, and places
   smaller elements to the left of the pivot and greater
   elements to the right of the pivot. */
int partition (int a[], int start, int end)
{
    int pivot = a[end]; // pivot element
    int i = (start - 1);

    for (int j = start; j <= end - 1; j++)
    {
        // If the current element is smaller than the pivot
        if (a[j] < pivot)
        {
            i++; // increment index of smaller element
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
    int t = a[i+1];
    a[i+1] = a[end];
    a[end] = t;
    return (i + 1);
}

/* Function to implement quick sort */
void quick(int a[], int start, int end) /* a[] = array to be sorted, start = starting index, end = ending index */
{
    if (start < end)
    {
        int p = partition(a, start, end); // p is the partitioning index
        quick(a, start, p - 1);
        quick(a, p + 1, end);
    }
}

/* Function to print an array */
void printArr(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 24, 9, 29, 14, 19, 27 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    quick(a, 0, n - 1);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);

    return 0;
}

Output:

Before sorting array elements are -
24 9 29 14 19 27
After sorting array elements are -
9 14 19 24 27 29

Merge Sort Algorithm


In this article, we will discuss the merge sort Algorithm. Merge sort is the
sorting technique that follows the divide and conquer approach. This article
will be very helpful and interesting to students as they might face merge sort
as a question in their examinations. In coding or technical interviews for
software engineers, sorting algorithms are widely asked. So, it is important to
discuss the topic.

Merge sort is similar to the quick sort algorithm in that it uses the divide and
conquer approach to sort the elements. It is one of the most popular and
efficient sorting algorithms. It divides the given list into two equal halves, calls
itself for the two halves, and then merges the two sorted halves. We have to
define the merge() function to perform the merging.
The sub-lists are divided again and again into halves until the lists cannot be
divided further. Then we combine the pairs of one-element lists into
two-element lists, sorting them in the process. The sorted two-element lists are
merged into four-element lists, and so on, until we get the fully sorted list.

Now, let's see the algorithm of merge sort.

Algorithm
In the following algorithm, arr is the given array, beg is the index of the first
element, and end is the index of the last element of the array.

MERGE_SORT(arr, beg, end)

if beg < end
    set mid = (beg + end)/2
    MERGE_SORT(arr, beg, mid)
    MERGE_SORT(arr, mid + 1, end)
    MERGE (arr, beg, mid, end)
end of if

END MERGE_SORT

The important part of the merge sort is the MERGE function. This function
performs the merging of two sorted sub-arrays, A[beg…mid] and
A[mid+1…end], to build one sorted array A[beg…end]. So, the
inputs of the MERGE function are A[], beg, mid, and end.

The implementation of the MERGE function is given as follows -

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; // temporary arrays

    /* copy data to temp arrays */
    for (int i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (int j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0; /* initial index of first sub-array */
    j = 0; /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}

Working of Merge sort Algorithm


Now, let's see the working of merge sort Algorithm.

To understand the working of the merge sort algorithm, let's take an unsorted
array. It will be easier to understand the merge sort via an example.

Let the elements of array are -

According to the merge sort, first divide the given array into two equal halves.
Merge sort keeps dividing the list into equal parts until it cannot be further
divided.

As there are eight elements in the given array, it is divided into two arrays
of size 4.
Now, again divide these two arrays into halves. As they are of size 4, divide
them into new arrays of size 2.

Now, again divide these arrays to get the atomic values that cannot be further
divided.

Now, combine them in the same manner they were broken.

In combining, first compare the elements of each array and then combine them
into another array in sorted order.

So, first compare 12 and 31; both are already in sorted positions. Then compare 25
and 8, and in the list of two values, put 8 first, followed by 25. Then compare
32 and 17, sort them, and put 17 first, followed by 32. After that, compare 40
and 42, and place them sequentially.

In the next iteration of combining, compare the arrays with two data
values each and merge them into arrays of four values in sorted order.
Now, there is a final merging of the arrays. After the final merging of the above
arrays, the array will look like -

Now, the array is completely sorted.

Merge sort complexity


Now, let's see the time complexity of merge sort in the best, average,
and worst cases. We will also see the space complexity of the merge sort.

1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n*logn)

o Best Case Complexity - It occurs when there is no sorting required, i.e.
the array is already sorted. The best-case time complexity of merge sort
is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of merge sort
is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are
required to be sorted in reverse order. That means suppose you have to
sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of merge sort
is O(n*logn).

2. Space Complexity
Space Complexity O(n)

Stable YES

o The space complexity of merge sort is O(n). It is because, in merge sort,
temporary arrays of total size n are required to hold the two halves
while merging them.

Implementation of merge sort


Now, let's see a program implementing merge sort in the C language.

Program: Write a program to implement merge sort in C language.

#include <stdio.h>

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; // temporary arrays

    /* copy data to temp arrays */
    for (int i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (int j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0; /* initial index of first sub-array */
    j = 0; /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}

void mergeSort(int a[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

/* Function to print the array */
void printArray(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArray(a, n);
    mergeSort(a, 0, n - 1);
    printf("After sorting array elements are - \n");
    printArray(a, n);
    return 0;
}

Output:

Before sorting array elements are -
12 31 25 8 32 17 40 42
After sorting array elements are -
8 12 17 25 31 32 40 42
