
M8.1: Search Operation:
Searching is the process of finding the location of a given element in a linear array.
The search is said to be successful if the given element is found, i.e. the element
exists in the array; otherwise it is unsuccessful.
There are basically two approaches to the search operation:
• Linear Search
• Binary Search
The algorithm one chooses generally depends on the organization of the array
elements. If the elements are in random order, then one has to use the linear search
technique; if the array elements are sorted, then it is preferable to use the binary
search technique. These two search techniques are described as follows:

A. Linear Search:
Given no information about the array ‘a’, the only way to search for a given
element ‘item’ is to compare ‘item’ with each element of ‘a’ one by one. This
method, which traverses ‘a’ sequentially to locate ‘item’, is called ‘linear search’
or ‘sequential search’.

Algorithm LinearSearch (a, n, item, loc)

Here ‘a’ is a linear array of size ‘n’. This algorithm finds the location of the element ‘item’ in the linear
array ‘a’. If the search ends in success it sets ‘loc’ to the index of the element; otherwise it sets ‘loc’ to -1.
Step1: Begin
Step2: Set i := 0
Step3: Repeat Step 4 to 7 while i<n
Step4: If a[i] = item
then goto Step 5
else goto Step 7
Step5: Set loc := i
Step6: EXIT
Step7: Increment the value of i by 1
Step8: Set loc := -1
Step9: END.

Page 1 www.magix.in
Program function for linear search:
// This C program finds the location of a given element within an array. It is
// also a simple example of linear search on a linear array.

#include<stdio.h>
#include<conio.h>

int LinearSearch(int *a, int n, int item); /* prototype */

void main()
{
    int position, fnd = 0;
    int num[ ] = {24, 34, 12, 44, 56, 17};
    clrscr();
    printf("Enter the int number to find from array: \t");
    scanf("%d", &fnd);
    position = LinearSearch(&num[0], 6, fnd); // position = LinearSearch(num, 6, fnd);
    clrscr();
    printf("Your search for number %d results in position \n%d", fnd, position);
    getch();
}

int LinearSearch(int *a, int n, int item)
{
    int k;
    for (k = 0; k < n; k++)
    {
        if (a[k] == item)
            return k;   /* found: return the index */
    }
    return -1;          /* not found */
}

Output:
(1) Enter the int number to find from array: 5
You search for number 5 results in position -1

(2) Enter the int number to find from array: 44
You search for number 44 results in position 3

Analysis of Linear Search:

In the best case, the ‘item’ occurs at the first position. In this case, the
search operation terminates in success with just one comparison. The worst case
occurs when either the item is present at the last position or is missing from the
array. In the former case, the search terminates in success with ‘n’ comparisons; in
the latter case, it terminates in failure with ‘n’ comparisons. Thus, in the worst
case linear search takes ‘O(n)’ operations, while in the best case it takes ‘O(1)’
operations.
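The best-case and worst-case behaviour described above can be checked with an instrumented version of the search. The ‘comparisons’ out-parameter and the function name below are illustrative additions, not part of the program given earlier:

```c
/* Linear search that also reports how many comparisons were made. */
int LinearSearchCount(const int *a, int n, int item, int *comparisons)
{
    *comparisons = 0;
    for (int k = 0; k < n; k++) {
        (*comparisons)++;        /* one comparison per element visited */
        if (a[k] == item)
            return k;            /* best case: found at k = 0 after 1 comparison */
    }
    return -1;                   /* worst case: failure after n comparisons */
}
```

Searching for the first element costs exactly one comparison, while an unsuccessful search always costs ‘n’ comparisons.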
Advantages of linear search:
Ø Very simple approach.
Ø Works well for small arrays.
Ø Used to search when the elements are not sorted.
Disadvantages of linear search:
Ø Less efficient if the array size is large.
Ø If the elements are already sorted, linear search is not efficient.

B. Binary Search:
Suppose the elements of the array ‘A’ are sorted in ascending order (if the
elements are numbers) or dictionary order (if the elements are strings). The best
searching algorithm, called binary search, is used to find the location of the given
element. We use this approach in our daily life. For example, suppose we want
to find the meaning of the term ‘modem’ in a computer dictionary. Obviously, we
don’t search page by page. We open the dictionary (roughly) in the middle to
determine which half contains the term being sought. Then one half is discarded,
and we search in the other half. This process continues until either we have located
the required term or we find that it is missing from the dictionary, which is
indicated by the fact that at the end we are left with only one page.
So in computer science, a binary search is an algorithm for locating the
position of an element in a sorted list by checking the middle element, eliminating half of
the list from consideration, and then performing the search on the remaining half.
If the middle element is equal to the sought value, then the position has been
found; otherwise, the upper half or lower half is chosen for the search based on
whether the sought element is greater than or less than the middle element. The method
reduces the number of elements to be checked by a factor of two each
time, and finds the target value, if it exists, in logarithmic time. Binary search is a
divide-and-conquer search algorithm.
Example:
To illustrate the working of the binary search technique, consider the
following sorted array ‘a’ with 7 elements:
2, 9, 12, 22, 27, 40, 60
Suppose we want to search for the element 12.
Solution:
Given array ‘a’:

To start with, we take beg = 0, end = 6, and compute the location of the middle
element as:
mid = (beg + end) / 2 = (0 + 6) / 2 = 3
Since a[mid], i.e. a[3] != 12, and beg < end, we start the next iteration.
As a[mid] = 22, which is greater than the element being searched (22 > 12),
we take
end = mid - 1 = 3 - 1 = 2
whereas beg remains unchanged.
Thus mid = (beg + end) / 2 = (0 + 2) / 2 = 1
Since a[mid], i.e. a[1] != 12, and beg < end, we start the next iteration.
As a[mid] = 9, which is less than the element being searched (9 < 12), we
take
beg = mid + 1 = 1 + 1 = 2
whereas ‘end’ remains unchanged. Since beg = end, we again compute the location
of the middle element as
mid = (beg + end) / 2 = (2 + 2) / 2 = 2
Since a[mid], i.e. a[2] = 12, the search terminates in success.
This algorithm can be implemented in two ways: iterative and recursive.

Algorithm BinarySearchIterative (a, n, item, loc)


Here ‘a’ is a sorted linear array of size ‘n’. This algorithm finds the location of the element ‘item’ in linear array ‘a’.
If search ends in success it sets ‘loc’ to the index of the element; otherwise it sets ‘loc’ to -1.
Step1: Begin
Step2: Set beg:=0 & end:=n-1
Step3: Set mid := (beg + end)/2
Step4: Repeat Steps 5 to 8 while beg<=end AND a[mid] != item
Step5: if ( item < a[mid] )
then goto step 6
else goto step 7
Step6: Set end := mid-1 and goto step 8
Step7: Set beg := mid+1 and goto step 8
Step8: Set mid = (beg+end)/2;
Step9: if(beg>end)
then goto step 10
else goto step 11
Step10: Set loc := -1 and goto Step 12
Step11: Set loc := mid and goto Step 12
Step12: End

Program for iterative binary search:

//This C program searches for the location of a given element within an array
// using the iterative binary search algorithm.
#include<stdio.h>
#include<conio.h>

int BinarySearch(int *a, int n, int item); /* prototype */

void main( )
{
    int position, fnd = 0;
    int num[ ] = {3, 10, 15, 20, 35, 40, 60};
    clrscr();
    printf("Enter the int number to find from array: \t");
    scanf("%d", &fnd);
    position = BinarySearch(num, 7, fnd);
    clrscr();
    printf("Your search for number %d results in position \n%d", fnd, position);
    getch();
}
int BinarySearch(int *a, int n, int item)
{
int beg, end, mid;
beg=0; end=n-1;
mid = (beg+end)/2;
while( (beg<=end) && (a[mid] != item) )
{
if(item < a[mid])
end=mid-1;
else
beg=mid+1;
mid=(beg+end)/2;
}
if(beg > end)
return -1;
else
return mid;
}

Output:
(1) Enter the int number to find from array: 5
Your search for number 5 results in position -1
(2) Enter the int number to find from array: 10
Your search for number 10 results in position 1

Algorithm BinarySearchRecursive (a, Lb, UB, item)
Here ‘a’ is a sorted linear array. This algorithm finds the location of the element ‘item’ in linear array ‘a’. If the
search ends in success it returns the index of the element; otherwise it returns -1. The base criterion for the recursive
algorithm is that the search terminates either when the element is located or when ‘Lb’ becomes greater than ‘UB’. Here Lb
stands for LowerBound and UB for UpperBound.

Step1: Begin
Step2: if (Lb > UB)
then goto step 3
else goto step 4
Step3: Return -1 and EXIT
Step4: Set mid := (Lb + UB)/2
Step5: if ( item = a[mid])
then goto step 6
else goto step 7
Step6: Return mid and exit
Step7: if ( item < a[mid] )
then goto step 8
else goto step 9
Step8: Return BinarySearchRecursive(a, Lb, mid-1, item)
Step9: Return BinarySearchRecursive(a, mid+1, UB, item)
Step10: End

Function for recursive binary search:

//This C function searches for the location of a given element within array ‘a’
// using the recursive binary search algorithm.

int BinarySearchRecursive(int a[ ], int lb, int ub, int item)
{
int mid;
if (lb > ub)
{
return -1;
}
else
{
mid = (lb + ub) / 2;
if ( item == a[mid] )
return mid;
else if ( item < a[mid])
return BinarySearchRecursive( a, lb, mid-1, item);
else
return BinarySearchRecursive( a, mid+1, ub, item);
}
}

Analysis of Binary Search:
In each iteration or recursive call, the search is reduced to one half of the
array. Therefore, for ‘n’ elements in the array, there will be at most log2n iterations or
recursive calls. Thus the complexity of binary search is O(log2n). The complexity
is the same irrespective of the position of the element, even if it is not present in
the array.
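The bound of roughly log2n iterations can be observed directly by counting loop iterations. The ‘iterations’ out-parameter below is an illustrative addition to the iterative routine shown earlier:

```c
/* Iterative binary search instrumented with an iteration counter. */
int BinarySearchCount(const int *a, int n, int item, int *iterations)
{
    int beg = 0, end = n - 1;
    *iterations = 0;
    while (beg <= end) {
        int mid = (beg + end) / 2;
        (*iterations)++;         /* one halving step per loop iteration */
        if (a[mid] == item)
            return mid;
        else if (item < a[mid])
            end = mid - 1;       /* discard upper half */
        else
            beg = mid + 1;       /* discard lower half */
    }
    return -1;
}
```

For n = 7, no search takes more than 3 iterations, matching log2(7) rounded up.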

Advantages of Binary Search:
Ø Simple technique.
Ø Very efficient searching technique.

Disadvantages of Binary Search:
Ø The list of elements to be searched must be sorted.
Ø It is necessary to access the middle element directly, which is possible only if
the elements are stored in an array. If the elements are stored in a linked list,
this method cannot be used.

Difference between binary search and linear search:

Binary Search:
• Works only on sorted items.
• Very efficient if the items are sorted.
• Works well with arrays but not with linked lists.
• Fewer comparisons are required.

Linear Search:
• Works on sorted as well as unsorted items.
• Very efficient if the items are few and the required item is present at the
beginning of the list.
• Works with arrays as well as linked lists.
• More comparisons are required if the item is present in the later part of the
array or if there are many elements.
M8.2: Sorting Operation:
Sorting is the process of arranging the elements of the array in some logical order.
This logical order may be ascending or descending in the case of numeric values, or
dictionary order in the case of alphanumeric values.
There are many algorithms to do this task efficiently. But at this stage, we will
discuss the easiest algorithm first, known as “Bubble Sort”.

M8.2.1: Bubble Sort:

The Bubble sort method requires (n-1) passes to sort an array. In pass ‘k’, every
element a[j] is compared with a[j+1], for j = 0 to (n-k-1), and if they are out of
order, i.e. if a[j] > a[j+1], they are swapped. This causes the largest element to
move, or bubble, up. Thus, at the end of the first pass, the largest element in the
array is placed at the last position; on each successive pass, the next largest
element is placed at the second-last position, the third-last position, and so on. For more
clarity, examine the following steps:
Pass 1:
Step 1: if a[0] > a[1] then swap a[0] and a[1]
Step 2: if a[1] > a[2] then swap a[1] and a[2]
.
.
Step n-1 : if a[n-2] > a[n-1] then swap a[n-2] and a[n-1]

Pass 2:
Step 1: if a[0] > a[1] then swap a[0] and a[1]
Step 2: if a[1] > a[2] then swap a[1] and a[2]
.
.
Step n-2 : if a[n-3] > a[n-2] then swap a[n-3] and a[n-2]
.
.
.
Pass k:
Step 1: if a[0] > a[1] then swap a[0] and a[1]
Step 2: if a[1] > a[2] then swap a[1] and a[2]
.
.
.
Step n-k : if a[n-k-1] > a[n-k] then swap a[n-k-1] and a[n-k]
.
.
Pass n-1:
Step 1: if a[0] > a[1] then swap a[0] and a[1]

Example of Bubble Sort:
Consider the sorting of the following array in ascending order:
12, 40, 3, 2, 15
Note that the output of a given pass becomes the input for the next pass.

(Illustration of Bubble sort method)


Function for Bubble Sort Basic:
void BubbleSortBasic (int *a, int n)
{
int j, k, temp;
for ( k =1; k<n; k++)
{
for ( j=0; j<n-k; j++)
{
if ( a[ j ] > a[ j+1] )
{
temp = a[ j ] ;
a[ j ] = a[ j + 1] ;
a[ j+1] = temp ;
}
}
}
}
Algorithm for Basic Bubble Sort:
Algorithm BubbleSortBasic (a, n)
Here ‘a’ is a linear array of size ‘n’. This algorithm sorts the array using the bubble sort technique.
Variables ‘j’ and ‘k’ are used as loop control variables.
Step1: Begin
Step2: Set k := 1
Step3: Repeat Step 4, 5 & 8 while ( k < n )
Step4: Set j := 0
Step5: Repeat Step 6 to 7 while ( j < n – k )
Step6: if ( a[ j ] > a[ j+1] )
then Exchange the a[j] element with a[j+1]
Step7: Set j := j + 1
Step8: Set k := k + 1
Step9: END

The bubble sort technique possesses one very important property:

“Once there is no swapping of elements in a particular pass, there will be no further
swapping of elements in the subsequent passes.”
This property can be used to eliminate unnecessary (redundant) passes. For this
purpose, we can use a flag to determine whether any interchange has taken place; if
yes, only then proceed with the next pass, otherwise stop. The modified version of the
basic technique is given below:
Function for Bubble Sort Modified:

void BubbleSortModified(int *a, int n)
{
int j, k, temp, exchange_flag=1 ;
for(k=1; (k<n) && (exchange_flag); k++)
{
exchange_flag=0 ;
for(j=0; j<n-k; j++)
{
if(a[j] > a[j+1])
{
exchange_flag = 1;
temp =a[j];
a[j] = a[j+1];
a[j+1]= temp;
}
}
}
}

Algorithm for Modified Bubble Sort:
Algorithm BubbleSortModified (a, n)
Here ‘a’ is a linear array of size ‘n’. This algorithm sorts the array using the bubble sort technique with an
early-exit flag. Variable ‘Ex_flag’ records whether any exchange took place in a pass, and variables ‘j’ and
‘k’ are used as loop control variables.
Step1: Begin
Step2: Set k := 1 and Ex_flag := 1
Step3: Repeat Step 4, 5 & 11 while ( k < n ) and (Ex_flag = 1)
Step4: Set Ex_flag :=0 and j := 0
Step5: Repeat Step 6 and 10 while ( j < n – k )
Step6: if ( a[ j ] > a[ j+1] )
Step7: Exchange the a[j] element with a[j+1]
Step8: Set Ex_flag := 1
Step9: End if
Step10: Set j := j + 1
Step11: Set k := k + 1
Step12: END
Analysis of Bubble Sort Method:
In the bubble sort method, after (n-1) passes the array will be sorted in ascending
order. In this algorithm, the first pass requires (n-1) comparisons, the second pass
requires (n-2), …., the kth pass requires (n-k), and the last pass requires only one
comparison. Therefore, the total number of comparisons is:
f(n) = (n-1) + (n-2) + (n-3) + …….. + (n-k) + ….. + 3 + 2 + 1
= n(n-1) / 2
f(n) = O(n2)
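The count n(n-1)/2 can be verified by instrumenting the basic function. The comparison counter below is an illustrative addition, not part of the original routine:

```c
/* Basic bubble sort instrumented to count comparisons.
   The basic version always performs n*(n-1)/2 comparisons. */
long BubbleSortCount(int *a, int n)
{
    long comparisons = 0;
    for (int k = 1; k < n; k++) {
        for (int j = 0; j < n - k; j++) {
            comparisons++;                 /* one comparison per inner step */
            if (a[j] > a[j + 1]) {
                int temp = a[j];           /* swap out-of-order neighbours */
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
    return comparisons;
}
```

For the example array 12, 40, 3, 2, 15 (n = 5), the function performs 4 + 3 + 2 + 1 = 10 = 5(5-1)/2 comparisons.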

Advantages of bubble sort:
Ø Very simple and easy to program.
Ø Straightforward approach.

Disadvantages of bubble sort:
Ø It runs slowly and hence is not efficient; more efficient sorting techniques
exist.
Ø Even if the elements are already sorted, the basic bubble sort requires n-1
passes.

M8.2.2: Selection Sort:
As the name indicates, we first find the smallest item in the list and exchange it with
the first item. We then find the second smallest item in the list and exchange it with the
second element, and so on. Finally, all the items are arranged in ascending order. This
technique is called “Selection sort”.
Selection sort is a comparison sort that works as follows:
1. Find the minimum value in the list.
2. Swap it with the value in the first position.
3. Sort the remainder of the list (excluding the first value).

It has O(n2) complexity, making it inefficient on large lists; it generally performs worse
than the similar insertion sort. Selection sort is noted for its simplicity, and also has
performance advantages over more complicated algorithms in certain situations.
The following figure shows the sorting of the list “ 45 20 40 5 15 ” using the selection
sort method:

Analysis of Selection Sort:

Selection sort is not difficult to analyze compared to other sorting algorithms, since none of
the loops depends on the data in the array. Selecting the lowest element requires scanning
all n elements (this takes n-1 comparisons) and then swapping it into the first position.
Finding the next lowest element requires scanning the remaining (n-1) elements (this
takes n-2 comparisons), and so on.
Therefore, the total number of comparisons is:
f(n) = (n-1) + (n-2) + (n-3) + . . . . . + 3 + 2 + 1
= n(n-1) / 2
f(n) = O(n2)
This complexity holds for all cases, i.e. best case, worst case and average case.
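Since the loops do not depend on the data, the comparison count is the same for every input of a given size. The sketch below instruments the selection sort with a counter; the counter is an illustrative addition to the function that follows:

```c
/* Selection sort instrumented to count comparisons.
   The count is n*(n-1)/2 regardless of the initial order. */
long SelectionSortCount(int a[], int n)
{
    long comparisons = 0;
    for (int i = 1; i <= n - 1; i++) {
        int small = a[i - 1];              /* current candidate minimum */
        int loc = i - 1;
        for (int j = i; j <= n - 1; j++) {
            comparisons++;                 /* every scan step is one comparison */
            if (a[j] < small) {
                small = a[j];
                loc = j;
            }
        }
        if (loc != i - 1) {                /* swap the minimum into place */
            int temp = a[i - 1];
            a[i - 1] = a[loc];
            a[loc] = temp;
        }
    }
    return comparisons;
}
```

An already-sorted list and a reverse-sorted list of the same length both cost exactly n(n-1)/2 comparisons.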
Function in ‘C’ to implement selection sort:
void SelectionSort ( int a[ ] , int n )
{
    int temp, small, loc, i, j ;
    for ( i = 1 ; i <= ( n-1 ) ; i++ )
    {
        small = a[ i - 1 ] ;       /* assume first unsorted element is smallest */
        loc = i - 1 ;
        for ( j = i ; j <= ( n-1 ) ; j++ )
        {
            if ( a[ j ] < small )
            {
                small = a[ j ] ;
                loc = j ;
            }
        }
        if ( loc != ( i - 1 ) )    /* swap only if a smaller element was found */
        {
            temp = a[ i - 1 ] ;
            a[ i - 1 ] = a[ loc ] ;
            a[ loc ] = temp ;
        }
    }
}
Algorithm for selection sort:
SelectionSort ( a , n )
Here ‘a’ is a linear array with n elements in memory. This algorithm sorts the elements in ascending
order. Temporary variable ‘small’ holds the current smallest element, temporary variable ‘temp’
facilitates the exchange of two values, and variables ‘i’ and ‘j’ are loop control variables.
Step1: Begin
Step2: for i = 1 to (n – 1 ) by 1 do
Step2.1: Set small := a [ i – 1]
Step2.2: Set loc := i – 1
Step2.3: for j = i to ( n – 1 ) by 1 do
Step2.3.1: if ( a[ j ] < small ) then
Step2.3.1.1: Set small := a [ j ]
Step2.3.1.2: Set loc := j
Step2.3.2: End if
Step2.4: End for
Step2.5: if ( loc <> ( i – 1) ) then
Step2.5.1: Set temp := a [ i – 1 ]
Step2.5.2: Set a [ i – 1 ] := a [ loc ]
Step2.5.3: Set a [ loc ] := temp
Step2.5.4: end if
Step3: End for
Step4: END
Advantages of using selection sort:
Ø Easy to understand.
Ø And hence, easy to implement.
Ø In-place sort (requires no additional storage space).

Disadvantages of using selection sort:

Ø Selection sort is unable to take advantage of already-sorted parts of a
partially sorted list.
Ø The method uses internal sorting (i.e. it requires the entire array to be in
memory, which can be a large amount of memory).
Ø Doesn't scale well: O(n2).

M8.2.3: Insertion Sort:

Among simple sorting algorithms, insertion sort is one of the best (it uses fewer
comparisons than bubble sort and selection sort), and it is sometimes used as a
stage in more complex sorting algorithms when the subsequence to be sorted shrinks to a
certain size (for example, it can be used with Quicksort).

The insertion sort works just like its name suggests: it inserts each item into its
proper place in the final list. The algorithm considers the elements one at a time,
inserting each into its suitable place among those already considered (keeping them
sorted). An example of an insertion sort occurs in everyday life while playing cards. To
sort the cards in one’s hand, one extracts a card, shifts the remaining cards, and then
inserts the extracted card in the correct place. This process is repeated until all the
cards are in the correct sequence.
To illustrate the working of the insertion sort method, consider the following array ‘a’
with 7 elements:
35, 20, 40, 100, 3, 10, 15

Since a[1] < a[0], insert element a[1] before a[0], giving the following array:

Since a[2] > a[1], no action is performed.

Since a[3] > a[2], again no action is performed.

Since a[4] is less than a[3], a[2], a[1] and a[0], insert a[4] before a[0],
giving the following array:

Since a[5] is less than a[4], a[3], a[2] and a[1], insert a[5] before a[1],
giving the following array:

Since a[6] is less than a[5], a[4], a[3] and a[2], insert a[6] before a[2],
giving the following sorted array:

(Successive passes of insertion sort method)

Function in ‘C’ to implement insertion sort:


void insertionSort(int a[ ], int array_size)
{
int i, j, current;
for ( i=1; i < array_size; i++)
{
current = a[i];
j = i - 1;
while ( ( j >= 0 ) && ( a [ j ] > current ) )
{
a[j+1]=a[j];
j = j - 1;
}
a [ j + 1 ] = current;
}
}

Algorithm for insertion sort:
The various steps required to implement insertion sort are summarized in the
algorithm given below:
InsertionSort ( a , array_size )
Here ‘a’ is a linear array with ‘array_size’ elements in memory. This algorithm sorts the elements in
ascending order. It uses temporary variable ‘current’ to hold the element being inserted, and variables ‘i’
and ‘j’ as loop control and index variables.
Step1: Begin
Step2: for i = 1 to (array_size - 1) by 1 do
Step2.1: Set current := a[ i ]
Step2.2: Set j := i – 1
Step2.3: while ( ( j >= 0 ) AND ( current < a[ j ] ) ) do
Step2.3.1: Set a [ j + 1 ] := a [ j ]
Step2.3.2: Set j := j - 1
Step2.4: end while
Step2.5: Set a [ j + 1 ] = current
Step3: End for
Step4: END

Analysis of Insertion Sort:

In the best case, when the array is already sorted in ascending order, each pass requires
only one comparison. Therefore, the complexity will be:
f(n) = O ( n )
It is a linear function of ‘n’.
The worst case occurs when the elements of the input array are in
descending order. In that case, the first pass requires one comparison, the second
pass requires two comparisons, the third pass requires three comparisons, . . . . , and
finally the last pass requires (n-1) comparisons. Therefore, the total number of
comparisons is:
f(n) = 1 + 2 + 3 + . . . . + (n-3) + (n-2) + (n-1)
= n(n-1) / 2
f(n) = O ( n2 )
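Both bounds can be checked by counting comparisons. This instrumented variant is an illustrative sketch (the shift loop is restructured slightly so that each element comparison can be counted) and behaves like the function given above:

```c
/* Insertion sort instrumented to count element comparisons:
   about n-1 on sorted input, n*(n-1)/2 on reverse-sorted input. */
long InsertionSortCount(int a[], int n)
{
    long comparisons = 0;
    for (int i = 1; i < n; i++) {
        int current = a[i];
        int j = i - 1;
        while (j >= 0) {
            comparisons++;             /* compare current with a[j] */
            if (a[j] <= current)
                break;                 /* found the insertion point */
            a[j + 1] = a[j];           /* shift larger element right */
            j--;
        }
        a[j + 1] = current;
    }
    return comparisons;
}
```

On a sorted array of 5 elements this costs 4 comparisons (one per pass), while on a reverse-sorted array it costs 1 + 2 + 3 + 4 = 10.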

Advantages of using insertion sort:

Ø Takes into account elements of the list that are already sorted.
Ø And hence, works quite well for a partially sorted list.

Disadvantages of using insertion sort:

Ø Still a relatively slow algorithm.
Ø This is especially so if the starting list is in reverse order (the worst-case scenario).

M8.2.4: Quick Sort:
The quick sort is an in-place, divide-and-conquer algorithm. Quicksort sorts by
employing a divide and conquer strategy to divide a list into two sub-lists.

The steps are:


Ø Pick an element, called a pivot, from the list.
Ø Reorder the list so that all elements which are less than the pivot come before
the pivot and so that all elements greater than the pivot come after it (equal
values can go either way). After this partitioning, the pivot is in its final position.
This is called the partition operation.
Ø Recursively sort the sub-list of lesser elements and the sub-list of greater
elements.

The base cases of the recursion are lists of size zero or one, which are always sorted.

For example, consider an array with 9 elements: 6, 2, 1, 3, 4, 5, 8, 7, 0.

(a) Step 1:
low = first = 0
high = last = 8
pivot = list [(low+ high) /2] = list[(0+8)/2] = list[4] =5

Step 2: low <= high, i.e. 0 <= 8


Step 3: list[low] < pivot is false because list[0] > pivot, i.e. 6 > 5
Step 4: low = 0 [remains previous value ]
Step 5: list[high] > pivot is false because list[8] < pivot , i.e., 0 < 5
Step 6: high = 8 [ remains previous value ]
Step 7: low <= high, i.e., 0 <= 8
temp = list [ low ] , i.e. temp = 6
list[ 0 ] = list [ high ] , i.e. list [ 0 ] = 0
list [ 8 ] = temp i.e. list [ 8 ] = 6
low = low + 1 i.e. low = 0+1
high = high – 1 i.e. high = 8 – 1
Now the list becomes:

(b) Step 2: low <= high i.e. 1 < 7


Step 3: list[low] < pivot i.e. list[1] < pivot, i.e. 2 < 5
Step 4: low = low + 1 i.e. low = 2
Step 3: list [low] < pivot i.e. list[2] < pivot, i.e. 1 < 5
Step 4: low = low + 1 i.e. low = 3
Step 3 : list[low] < pivot i.e. list[3] < pivot, i.e. 3 < 5
Step 4: low = low + 1 i.e. low = 4
Step 5 : list[high] > pivot, i.e. list[7] > pivot, i.e. 7 > 5
Step 6 : high = high – 1 i.e. high = 6
Step 5 : list[high] > pivot i.e. list[6] > pivot, i.e. 8 > 5
Step 6 : high = high – 1 i.e. high = 5
Step 5 : list[high] > pivot i.e. list[5] > pivot, i.e. 4 > 5 (false)
Step 6: high = high i.e. high = 5 [remains previous value]
Step 7: low <= high, i.e., 4 < 5
temp = list [low] , i.e. temp = 5
list[low] = list [ high ] , i.e. list [4] = 4
list[high] = temp i.e. list [5] = 5
low = low + 1 i.e. low = 4+1=5
high= high – 1 i.e. high = 5–1 = 4

Now, the list is broken into two sub-lists.

If we continue the above process, we finally get a sorted list.


Function in ‘C’ to implement quick sort:
void quick_sort ( int list[ ] , int first, int last )
{
int temp, low, high, pivot ;
low = first ;
high = last ;
pivot = list[ (first+last) / 2 ] ;
do
{
/* find member above ... */
while ( list [ low ] < pivot )
low++ ;

/* find element below ... */


while ( list [ high ] > pivot )
high-- ;

if ( low <= high )


{
/* swap two elements */
temp = list [ low ] ;
list [ low++ ] = list [ high ] ;
list [ high-- ] = temp ;
}
} while ( low <= high ) ;

/* recurse */
if ( first < high )
quick_sort ( list, first, high ) ;

if ( low < last )


quick_sort ( list, low, last ) ;
}

Algorithm for quick sort:


QuickSort ( list , first, last )
Here ‘list’ is a linear array in memory. ‘first’ represents the position of the first element in the list, while ‘last’
represents the position of the last element in the list. This algorithm sorts the elements in ascending order.

Step1: Begin
Step2: Set low := first
Step3: Set high := last
Step4: Set pivot := list [ ( low + high ) / 2 ] //middle element
Step5: While ( low <= high ) do
Step5.1: While ( list [low] < pivot ) do
Step5.1.1: Set low = low + 1
Step5.2: End While
Step5.3: While ( list [ high ] > pivot )
Step5.3.1: Set high = high – 1
Step5.4: End While
Step5.5: If ( low <= high )
Step5.5.1: Interchange value of list [ low ] with list [ high ]
Step5.5.2: Set low = low + 1
Step5.5.3: Set high = high – 1
Step5.6: End if
Step6: End While
Step7: If ( first < high )
Step7.1: Call QuickSort ( list, first, high )
Step8: End if
Step9: If ( low < last )
Step10: Call QuickSort ( list, low, last )
Step11: End if
Step12: END
Analysis of Quick sort:
Quicksort's running time depends on the result of the partitioning routine: whether
it is balanced or unbalanced. This is determined by the pivot element used for
partitioning. If the result of the partition is unbalanced, quicksort can run as slowly as
insertion sort; if it is balanced, the algorithm runs asymptotically as fast as merge
sort. That is why picking the "best" pivot is a crucial design decision.
The Wrong Way: a popular way of choosing the pivot is to use the first element;
this is acceptable only if the input is random. If the input is presorted, or in
reverse order, then the first element provides a bad, unbalanced partition. If the
input is presorted and the first element is chosen consistently throughout the
recursive calls, quicksort takes quadratic time to do nothing at all.
The Safe Way: the safe way to choose a pivot is simply to pick one randomly; it is
unlikely that a random pivot would consistently provide a bad partition
throughout the course of the sort.
The Median-of-Three Way: the best-case partitioning would occur if the partition step
produced two subproblems of almost equal size, one of size [n/2] and the other of size [n/2]-1.
To achieve this partition, the pivot would have to be the median of the entire
input; unfortunately, this is hard to calculate and would consume much of the time,
slowing down the algorithm considerably.
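In practice the median-of-three idea is approximated by taking the median of the first, middle and last elements. The helper below is an illustrative sketch, not part of the quick_sort function given earlier (which simply uses the middle element):

```c
/* Return the median of list[first], list[middle] and list[last];
   a cheap, commonly used approximation of the true median pivot. */
int MedianOfThreePivot(const int list[], int first, int last)
{
    int mid = (first + last) / 2;
    int a = list[first], b = list[mid], c = list[last];
    if ((a <= b && b <= c) || (c <= b && b <= a))
        return b;                      /* middle element is the median */
    if ((b <= a && a <= c) || (c <= a && a <= b))
        return a;                      /* first element is the median */
    return c;                          /* last element is the median */
}
```

Even on a presorted input such as 0, 1, 2, …, 8, this rule picks the true middle value as pivot, avoiding the quadratic behaviour of the first-element rule.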

The best-case behavior of the quicksort algorithm occurs when each recursion step
partitions the array into two parts of equal length. To sort ‘n’ elements, the running
time in this case is Θ(n log2(n)), because the recursion depth is log2(n) and on each
level there are ‘n’ elements to be treated.

The worst case occurs when each recursion step produces an unbalanced partitioning,
namely one part consisting of only one element and the other part consisting of the
rest of the elements. Then the recursion depth is n-1 and quicksort runs in time Θ(n2).

Proposition: The time complexity of quicksort is

Θ(n log2(n)) in the average case and
Θ(n2) in the worst case

Advantages of using quick sort:

Ø One of the fastest algorithms on average.
Ø Does not need additional memory (the sorting takes place in the array; this is
called in-place processing). Compare with mergesort: mergesort needs
additional memory for merging.
Disadvantages of using quick sort:
Ø The worst-case complexity is O(n2).

M8.2.5: Heap Sort:

Heap sort forces a certain property onto an array which turns it into what is known
as a heap. The elements of the array can be thought of as lying in a tree structure:

The “children” of ‘a[i]’ are ‘a[2*i]’ and ‘a[2*i+1]’. The tree structure is purely notional;
there are no pointers. Note that the array indices run through the "nodes" in
breadth-first order, i.e. parent, children, grand-children, and so on.
An array a[i..j] is called a heap if the value of each element is greater than or equal
to the values of its children, if any. Clearly, if a[1..n] is a heap, then a[1] is the largest
element of the array.
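This heap property can be written down directly as a check. The function below is an illustrative sketch using the 1-based indexing convention described here (element a[0] is ignored), not part of the heapsort program that follows:

```c
/* Check whether a[1..n] satisfies the max-heap property:
   each node is >= its children a[2*i] and a[2*i+1].
   Uses 1-based indexing, so a[0] is ignored. */
int IsMaxHeap(const int a[], int n)
{
    for (int i = 1; 2 * i <= n; i++) {          /* only non-leaf nodes */
        if (a[i] < a[2 * i])
            return 0;                           /* left child is larger */
        if (2 * i + 1 <= n && a[i] < a[2 * i + 1])
            return 0;                           /* right child is larger */
    }
    return 1;
}
```

For instance, 70, 15, 50, 10, 12, 35, 5 (in positions 1..7) is a max heap, while the raw example array 10, 5, 70, 15, 12, 35, 50 is not, which is why the initial ‘heapify’ step below is needed.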
We can take advantage of this situation by using a heap to help us sort the given
array ‘a’ with ‘n’ elements.
The general approach of heap sort is as follows:
1. From the given array, build the initial max heap.
2. Interchange the root (maximum) element with the last element of the heap.
3. Use the “downheap” operation from the root node to rebuild the heap, whose
size is now one less than before.
4. Repeat steps 2 and 3 until only one element remains in the heap.

Let us consider a working example to illustrate the working of the heapsort algorithm.

For this purpose, consider the following array:
10, 5, 70, 15, 12, 35, 50

(Illustration of the effect of heapsort on array ‘a’)

In the discussion of the heapsort algorithm, we assume that the array is indexed
from 1 to ‘n’, not from 0 to n-1.
Functions in ‘C’ to implement heap sort:
void Heapify ( int a[ ], int n ) ;
void Downheap ( int heap[ ], int start, int finish ) ;

void HeapSortMethod ( int a[ ], int n )
{
    int i, temp ;
    Heapify ( a, n ) ;              /* build the initial max heap */
    for ( i = n ; i > 1 ; i-- )
    {
        temp = a[ 1 ] ;             /* move current maximum to its final place */
        a[ 1 ] = a[ i ] ;
        a[ i ] = temp ;
        Downheap ( a, 1, i - 1 ) ;  /* rebuild the heap of size i-1 */
    }
}
void Heapify ( int a[ ], int n )
{
    int i, index ;
    index = n / 2 ;                 /* index of the last non-leaf node */
    for ( i = index ; i >= 1 ; i-- )
        Downheap ( a, i, n ) ;
}
void Downheap ( int heap[ ] , int start, int finish )
{
    int index, lchild, rchild, maximum, temp ;
    lchild = 2 * start ;            /* index of left child */
    rchild = lchild + 1 ;           /* index of right child */
    if ( lchild <= finish )
    {
        maximum = heap[ lchild ] ;
        index = lchild ;
        if ( rchild <= finish )
        {
            if ( heap[ rchild ] > maximum )
            {
                maximum = heap[ rchild ] ;
                index = rchild ;
            }
        }
        if ( heap[ start ] < heap[ index ] )
        {
            temp = heap[ start ] ;
            heap[ start ] = heap[ index ] ;
            heap[ index ] = temp ;
            Downheap ( heap, index, finish ) ;
        }
    }
}
Algorithm for heap sort:
Algorithm for heap sort involves three subalgorithms in the process of sorting an array:

Subalgo1:
HeapSort ( a, n )
Here ‘a’ is an array of size ‘n’ in memory. This algorithm sorts the array in ascending order using the
heap sort technique.
Step1: Begin
Step2: Call Heapify( a, n )
Step3: for i = n to 2 by -1 do
Step3.1: Interchange elements a[1] and a[i]
Step3.2: Call Downheap ( a, 1, i-1 )
Step3.3: End for
Step4: End.
- - - - - - - - - -- - -- - - -- - -- - - -- - - - -- - - - - - -- - - -- - -- - - -- - -- - -- - - -- - -- - - -- - - - - -- - -- - - -- - -

Subalgo2:
Heapify ( a, n )
Here ‘a’ is an array of size ‘n’ in memory. This algorithm builds the max heap using the procedure
described above.
Step1: Begin
Step2: Set index := parent of node with index ‘n’
Step3: for i = index to 1 by -1 do
Step3.1: Call Downheap ( a, i, n )
Step3.2: End for
Step4: End
- - - - - - - - - -- - -- - - -- - -- - - -- - - - -- - - - - - -- - - -- - -- - - -- - -- - -- - - -- - -- - - -- - - - - -- - -- - - -- - -

Subalgo3:
Downheap ( heap, start, finish )
Here ‘heap’ is a linear array, ‘start’ is the index of the element from where downheap operation is to start,
and finish is the index of the last (bottom) element of the heap. The variable ‘index’ is used to keep track of
the index of the largest child.
Step1: Begin
Step2: if heap[start] is not a leaf node then
Step2.1: Set index:= index of the child with largest value
Step2.2: if heap[start] < heap[index] then
Step2.2.1: Swap heap[start] and heap[index]
Step2.2.2: Call downheap(heap, index, finish )
Step2.3: End if
Step2.4: End if
Step3: End.
Analysis of Heap sort:
In the heapsort algorithm, we call the function ‘heapify( )’ to build the initial heap, which
is an O(n) operation, and inside the loop we call the utility function ‘downheap( )’. The
loop is executed (n-1) times, and in each iteration, the element in the root node of
the heap is swapped with the last element of the reduced-size heap and the heap is
rebuilt. Swapping two elements takes constant time. A complete binary tree with ‘n’
nodes has O(log2(n+1)) levels. In the worst case, if the root element is bumped down
to a leaf position, the ‘downheap( )’ operation performs O(log2n) swaps. So the
swap plus ‘downheap( )’ is O(log2n). Multiplying this by (n-1) iterations shows
that the sorting loop is O(n log2n).

Combining the heap build, which is O(n), and the sorting loop, we can see that the
complexity of ‘heapsort( )’ is O(n log2n).

Advantages of using heap sort:

• The primary advantage of heap sort is its efficiency: its execution time is
O(n log2n) even in the worst case.
• It can be implemented without recursion.
• Heapsort has a better worst-case performance than quicksort.

Disadvantages of using heap sort:

• In practice, it is usually slower than quick sort and merge sort.

M8.3: Sorting Classification:


Sorting is classified into two categories depending on the environment:

1. Internal Sorting:
This type of sorting is used when the list does not contain a large number of
elements. When sorting is done by this method, all the records to be sorted
are present in the main memory.

2. External Sorting:
In this type of sorting, it is not possible to store all the records in the main
memory of the computer. They are stored on devices such as tapes and
disks, brought into the main memory part by part, sorted, and written back,
so that the final list is in sorted order. There are separate algorithms
designed for this purpose.

M8.4: Efficiency Factors:
Efficiency depends on the following factors of the algorithm:
1) Use of storage space
2) Use of computer time
3) Programming effort
4) Statistical deviation

1) Use of storage space:


The program itself requires a very small amount of space in memory; in
fact, it is negligible compared to the list to be sorted. The given list of
elements is stored in an array. One or more auxiliary lists may be used
during sorting; the number depends on the algorithm. Some algorithms,
like merge sort, require a temporary list to hold the sorted elements, while
others work on the original list only. The temporary list requires memory
equal to that of the original list; once the sorting is completed, the
elements are transferred back to the original list. Another factor that can
consume a lot of space is a recursive implementation of the sorting
algorithm: recursive programs require more space than iterative ones, the
amount depending on the number of recursive calls made. In summary,
memory is required to store the program, to execute it, and to store the
list of elements.

2) Use of Computer time:


Most of the time is consumed by operations such as swapping,
comparison and assignment. The time also depends on the number of
elements present in the list. Time is a very critical factor in sorting.

3) Programming effort:
There are various methods used for sorting. If space and time are not
restricted, or the improvement they would bring is not significant, it is
advisable to use a simple sorting method rather than spend effort devising
a new algorithm. For any sorting method, correctness of the algorithm
matters a lot.

4) Statistical deviation:
Usually the performance of a sorting method is affected by the initial order
of the elements in the list. If it is not, its performance is the same for any
input order. If it is, the best case, worst case and average case are
calculated separately for the algorithm.

M8.5: Comparison of Sorting Methods:
Sorting plays a vital role in the field of computer science. If a list or array is
in order, then the search operation is fast, and any required information can be
obtained in very little time. Every technique has its own advantages and
disadvantages in comparison with the others. The following criteria are used
to compare sorting techniques:

Complexity:
All sorting techniques work by comparing the data in the array or list.
Sorting techniques are normally compared by the complexity of their
algorithms, expressed in asymptotic notation. Big-oh notation is the
asymptotic notation most commonly used to judge the efficiency of a
sorting technique.

Speed:
Sorting techniques that have the same complexity may not have the same
speed on the same input. Most generally, the techniques are compared by
their best-case, worst-case and average-case efficiency.

Space:
Some sorting techniques require no extra space, while others do; e.g.
merge sort requires a temporary array, whereas heap sort sorts in place.

Using the above specified criteria we can prepare a table of comparison of sorting
techniques:

Sorting technique   Best case      Worst case     Average case   Space
Selection Sort      O(n2)          O(n2)          O(n2)          Constant
Insertion Sort      O(n)           O(n2)          O(n2)          Constant
Bubble Sort         O(n2)          O(n2)          O(n2)          Constant
Quick Sort          O(n log2n)     O(n2)          O(n log2n)     Constant
Heap Sort           O(n log2n)     O(n log2n)     O(n log2n)     Constant

(Comparison chart for different sorting algorithms)

