DS Unit-2 SearchSort

The syllabus covers fundamental concepts in data structures and algorithms, including data organization, algorithm analysis, sorting, searching, and hashing techniques. It also delves into stacks, queues, linked lists, trees, and graphs, detailing their operations and complexities. The course aims to equip students with the ability to implement and analyze various data structures and algorithms to solve practical problems.

20-08-2024

SYLLABUS
UNIT 1: Data Structures and Algorithms Basics (8 Hours)
Introduction: Basic terminologies, elementary data organizations, data structure operations;
abstract data types (ADT) and their characteristics.
Algorithms: Definition, characteristics, analysis of an algorithm, asymptotic notations, time
and space trade-offs.
Array ADT: Definition, operations and representations – row-major and column-major.

UNIT 2: Sorting, Searching and Hashing (10 Hours)


Sorting: Different approaches to sorting, properties of different sorting algorithms (insertion,
Shell, quick, merge, heap, counting), performance analysis and comparison.
Searching: Necessity of a robust search mechanism, searching linear lists (linear search,
binary search) and complexity analysis of search methods.
Hashing: Hash functions and hash tables, closed and open hashing, randomization methods
(division method, mid-square method, folding), collision resolution techniques.

SYLLABUS

UNIT 3: Stacks and Queues (8 Hours)
Stack ADT: Allowable operations, algorithms and their complexity analysis, applications of stacks – expression conversion and evaluation (algorithmic analysis), multiple stacks.
Queue ADT: Allowable operations, algorithms and their complexity analysis for simple queue and circular queue, introduction to double-ended queues and priority queues.

UNIT 4: Linked Lists (10 Hours)
Singly Linked Lists: Representation in memory, algorithms of several operations: traversing, searching, insertion, deletion, reversal, ordering, etc.
Doubly and Circular Linked Lists: Operations and algorithmic analysis.
Linked representation of stacks and queues.

UNIT 5: Trees and Graphs (8 Hours)
Trees: Basic tree terminologies, binary tree and operations, binary search tree (BST) and operations with time analysis of algorithms, threaded binary trees.
Self-balancing Search Trees: Tree rotations, AVL tree and operations.
Graphs: Basic terminologies, representation of graphs, traversals (DFS, BFS) with complexity analysis, path finding (Dijkstra's SSSP, Floyd's APSP), and spanning tree (Prim's and Kruskal's algorithms).

Text/Reference Books
1. G.A.V. Pai, Data Structures and Algorithms: Concepts, Techniques and Applications, First Edition, McGraw Hill, 2017.
2. Ellis Horowitz, Sartaj Sahni and Susan Anderson-Freed, Fundamentals of Data Structures in C, Second Edition, Universities Press, 2008.
3. Mark Allen Weiss, Data Structures and Algorithm Analysis in C, Third Edition, Pearson Education, 2007.
4. Thomas H. Cormen, Algorithms Unlocked, MIT Press, 2013.
5. Reema Thareja, Data Structures Using C, Third Edition, Oxford University Press, 2023.
6. Narasimha Karumanchi, Data Structures and Algorithms Made Easy: Data Structures and Algorithmic Puzzles, Fifth Edition, CareerMonk Publications, 2016.
7. Aditya Bhargava, Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People, First Edition, Manning Publications, 2016.
8. K. R. Venugopal and Sudeep R. Prasad, Mastering C, Second Edition, McGraw Hill, 2015.
9. A. K. Sharma, Data Structures Using C, Second Edition, Pearson Education, 2013.

Course Outcomes

On completion of the course the student will be able to:
1. Identify different ADTs, their operations and specify their complexities.
2. Apply linear data structures to address practical challenges and analyze their complexity.
3. Implement different sorting, searching, and hashing methods and analyze their time and space requirements.
4. Analyse non-linear data structures to develop solutions for real-world applications.

Searching:
● Necessity of a robust search mechanism
● Searching linear lists (linear search, binary search)
● Complexity analysis of search methods

Sorting:
● Different approaches to sorting
● Properties of different sorting algorithms (insertion, Shell, quick, merge, heap, counting)
● Performance analysis and comparison

Application of Searching

Searching in an array or other data structure has two possible outcomes:
• Successful search (value found)
• Unsuccessful search (value not found)

LINEAR & BINARY SEARCH

We will examine two searching algorithms:
• Linear search
• Binary search

LINEAR OR SEQUENTIAL SEARCHING

• In linear search, each element of an array is read one by one sequentially and compared with the desired element.
• The search begins with the first index position of the array and proceeds linearly to the last.
• A search is unsuccessful if all the elements are read and the desired element is not found.

Example: each record in a list has an associated key; here, the keys are ID numbers.

[0] 701466868   [1] 281942902   [2] 233667136   [3] 580625685   [4] 506643548   ...   [700] 155778322

Given a particular key (e.g., 580625685), how can we efficiently retrieve the record from the list?

Worked examples (figures): scanning the list element by element, one example finds the key after 11 comparisons, another after 5 comparisons.

ALGORITHM FOR LINEAR SEARCH

• Let A be an array of n elements, A[1], A[2], A[3], ..., A[n].
• "data" is the element to be searched.
• The algorithm finds the location "loc" of data in A; loc = –1 if the search is unsuccessful.
1. Input an array A of n elements and the "data" to be searched, and initialize loc = –1.
2. Initialize i = 0 and repeat step 3 while (i < n), incrementing i by one.
3. If (data == A[i])
   (a) loc = i
   (b) GOTO step 4
4. If (loc >= 0)
   (a) Display "Search is successful and data is found at position loc+1"
5. Else
   (a) Display "Data is not found and searching is unsuccessful"
6. Exit

PROGRAM FOR LINEAR SEARCH

#include <stdio.h>

int main() {
    int n, i, search, found = 0;

    // Input the number of elements in the array
    printf("Enter the number of elements in the array: ");
    scanf("%d", &n);

    // Declare the array
    int arr[n];

    // Input the elements of the array
    printf("Enter %d elements:\n", n);
    for (i = 0; i < n; i++) {
        printf("Element %d: ", i + 1);
        scanf("%d", &arr[i]);
    }

    // Input the element to search for
    printf("Enter the element to search for: ");
    scanf("%d", &search);

    // Search for the element
    for (i = 0; i < n; i++) {
        if (arr[i] == search) {
            found = 1;
            printf("Element %d found at index %d and position is %d.\n", search, i, i + 1);
            break;
        }
    }

    if (!found) {
        printf("Element %d not found in the array.\n", search);
    }
    return 0;
}

TIME COMPLEXITY

The time complexity of linear search is given by the number of comparisons made while searching for a data item.
• In the best case, the desired element is in the first position of the array, i.e., only one comparison is made:
  f(n) = O(1).
• In the worst case, the desired element is in the nth (last) position of the array, so n comparisons are made:
  f(n) = O(n).
• In the average case (assuming all keys are equally likely and we always search for a key that is in the array), the desired element is found about halfway through the array:
  f(n) = (1 + 2 + ... + n)/n = n(n + 1)/2n = (n + 1)/2 = O((n + 1)/2) ≈ O(n).

Example:
• We have an array of 10 records.
• Searching for the first record requires 1 array access; for the second, 2 array accesses; and so on.
• The average over all these searches is (1+2+3+4+5+6+7+8+9+10)/10 = 5.5.


BINARY SEARCH

Binary search is an extremely efficient algorithm when compared to linear search.
The binary search technique finds "data" in the minimum possible number of comparisons.
The given array must be a sorted one; otherwise, we first have to sort the array elements.

Then apply the following steps to search for "data":
1. Find the middle element of the array (i.e., element n/2 if the array or sub-array contains n elements).
2. Compare the middle element with the data to be searched; there are three cases:
   (a) If it is the desired element, the search is successful.
   (b) If the data to be searched is less than the middle element, search only the first half of the array, i.e., the elements to the left of the middle element.
   (c) If the data to be searched is greater than the middle element, search only the second half of the array, i.e., the elements to the right of the middle element.
3. Repeat the same step until the element is found or the search area is exhausted.

Binary Search – Example

Sorted array of integer keys; target = 7.

[0] [1] [2] [3] [4] [5] [6]
 3   6   7  11  32  33  53

1. Find the approximate midpoint: index 3, key 11.
   Is 7 = midpoint key? NO. Is 7 < midpoint key? YES.
2. Search for the target in the area before the midpoint: indices [0..2].
   Find the approximate midpoint: index 1, key 6.
   Target = key of midpoint? NO. Target < key of midpoint? NO. Target > key of midpoint? YES.
3. Search for the target in the area after the midpoint: index [2].
   Find the approximate midpoint: index 2, key 7.
   Is target = midpoint key? YES. The search is successful at index 2.

ALGORITHM FOR BINARY SEARCH

Let A be an array of n elements A[1], A[2], A[3], ..., A[n].
"data" is the element to be searched.
"mid" denotes the middle location of a segment (array or sub-array) of A.
LB and UB are the lower and upper bounds of the array under consideration.
1. Input an array A of n elements and the "data" to be searched.
2. LB = 0, UB = n – 1; mid = int((LB + UB)/2)
3. Repeat steps 4 to 6 while (LB <= UB) and (A[mid] != data)
4. If (data < A[mid])
       UB = mid – 1
5. Else
       LB = mid + 1
6. mid = int((LB + UB)/2)
7. If (A[mid] == data)
       Display "The data is found at position mid."
8. Else
       Display "The data is not found."
9. Exit


TIME COMPLEXITY

How is an unsuccessful search terminated in the binary search routine? The search area keeps halving until it is empty (LB > UB).

• Worst case: a list of 11 items took 4 tries.
• How about the worst case for a list with 32 items?
  • 1st try – list has 16 items
  • 2nd try – list has 8 items
  • 3rd try – list has 4 items
  • 4th try – list has 2 items
  • 5th try – list has 1 item
• How about the worst case for a list with 32,000 items?

TIME COMPLEXITY

List of 250 items: 1st try – 125 items, 2nd try – 63, 3rd try – 32, 4th try – 16, 5th try – 8, 6th try – 4, 7th try – 2, 8th try – 1 item.
List of 512 items: 1st try – 256 items, 2nd try – 128, 3rd try – 64, 4th try – 32, 5th try – 16, 6th try – 8, 7th try – 4, 8th try – 2, 9th try – 1 item.

• List of 11 took 4 tries
• List of 32 took 5 tries
• List of 250 took 8 tries
• List of 512 took 9 tries
• Note that 32 = 2^5 and 512 = 2^9.

How long (worst case) will it take to find an item in a list 32,000 items long?
2^10 = 1024, 2^11 = 2048, 2^12 = 4096, 2^13 = 8192, 2^14 = 16384, 2^15 = 32768
So it will take only 15 tries!

In general, algorithms that divide the data in half on each call are very efficient because they can find the desired element in a logarithmic number of steps.
Each step reduces the problem size by a constant factor (half): after one step n/2 elements remain, after two steps n/4, after three steps n/8, and so on.
This means that these algorithms can handle very large inputs very quickly.
We say that the binary search algorithm runs in log2 n time, where log2 n means the logarithm of n to the base 2: 8 = 2^3, so log2 8 = 3; 16 = 2^4, so log2 16 = 4.


TIME COMPLEXITY

• The time complexity is measured by the number f(n) of comparisons needed to locate "data" in A, which contains n elements.
• In each comparison, the size of the search area is reduced by half.
• The best-case time complexity is O(1).
• In the worst case, at most log2 n comparisons are required, so f(n) = O([log2 n] + 1) ≈ O(log n).
• The average-case time complexity is approximately equal to the worst-case running time.

Advantages of Binary Search Algorithm
• It eliminates half of the remaining array elements at each step and is therefore more efficient than a linear search for large amounts of data.
• It has lower time complexity, which means it takes less time to run.
• It is a better option than linear search because it splits the array in half rather than sequentially traversing each element.

Limitations of Binary Search Algorithm
• The binary search algorithm can only be applied to a sorted array.
• For small, unsorted arrays, it would be time-consuming to sort and then search for the desired element; binary search is not recommended in these cases.
• It has poorer locality of reference than a linear search algorithm for in-memory searches over short intervals.


Difference between linear and binary search

Linear Search:
1. Data can be in any order.
2. Multidimensional arrays can also be used.
3. Time complexity: O(n).
4. Not an efficient method when the list is large.

Binary Search:
1. Data must be in sorted order.
2. Only a single-dimensional array is used.
3. Time complexity: O(log2 n).
4. Efficient for large inputs as well.

Advanced Search Algorithms
• Interpolation search – more like what people really do
• Indexed searching
• Binary search trees
• Hash table searching
• Grover's algorithm
• Best-first search
• A*

SORTING

1. Insertion Sort
2. Shell Sort
3. Quick Sort
4. Merge Sort
5. Heap Sort
6. Counting Sort


Sorting Classification

In-memory sorting:
• Comparison sorting
  – O(n^2): Bubble Sort, Selection Sort, Insertion Sort, Shell Sort
  – O(n log n): Merge Sort, Quick Sort, Heap Sort
• Specialized sorting
  – O(n): Bucket Sort, Radix Sort

External sorting (measured in number of tape accesses):
• External Merge Sort

Insertion Sort

• The insertion sort algorithm sorts a set of values by inserting each value into an already sorted portion of the list.
• Compare the second element with the first: if the first element is greater than the second, place the second before the first; otherwise leave it just after the first one.
• Then compare the third value with the second: if the third value is greater than the second, it stays after the second.
• Otherwise, shift the second value to the third position, then compare the third value with the first: if it is greater than the first, insert it between the first and second; otherwise shift the first value to the second position and place the third value first.
• Continue in the same way for the remaining elements: each new element is inserted into its correct position among the elements already sorted.


Insertion Sort – Example (sorted | unsorted)

Original List: 23 | 78 45 8 32 56
After pass 1:  23 78 | 45 8 32 56
After pass 2:  23 45 78 | 8 32 56
After pass 3:  8 23 45 78 | 32 56
After pass 4:  8 23 32 45 78 | 56
After pass 5:  8 23 32 45 56 78

Insertion Sort: ALGORITHM

Let A be a linear array of n numbers A[0], A[1], A[2], ..., A[n–1].
key is a temporary variable holding the element being inserted, and Pos is the control variable holding the position reached in each pass.
1. Input an array A of n numbers.
2. Initialize i = 1 and repeat through step 4, incrementing i by one:
   (a) If (i <= n – 1)
   (b) key = A[i]
   (c) Pos = i – 1
3. Repeat step 3 while ((Pos >= 0) and A[Pos] > key)  /* Move the elements greater than key one position ahead of their current position */
   (a) A[Pos + 1] = A[Pos]
   (b) Pos = Pos – 1
4. A[Pos + 1] = key
5. Exit


Insertion Sort: Program

#include <stdio.h>
#define MAX_SIZE 100

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int pos = i - 1;
        // Move elements of arr[0..i-1] that are greater than key
        // to one position ahead of their current position
        while (pos >= 0 && arr[pos] > key) {
            arr[pos + 1] = arr[pos];
            pos = pos - 1;
        }
        arr[pos + 1] = key;
    }
}

int main() {
    int arr[MAX_SIZE];
    int n;
    // Input the number of elements
    printf("Enter the number of elements (max %d): ", MAX_SIZE);
    scanf("%d", &n);
    // Input the elements
    printf("Enter %d numbers:\n", n);
    for (int i = 0; i < n; i++) {
        scanf("%d", &arr[i]);
    }
    // Sort the array using insertion sort
    insertionSort(arr, n);
    // Display the sorted array
    printf("Sorted array:\n");
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
    return 0;
}

Insertion Sort: TIME COMPLEXITY

WORST CASE
The worst case occurs when the array A is in reverse order and the inner while loop must use the maximum number (n – 1) of comparisons. Hence
f(n) = (n – 1) + ... + 2 + 1 = n(n – 1)/2 = O(n^2).

AVERAGE CASE
On average there will be approximately (n – 1)/2 comparisons in the inner while loop. Hence the average case
f(n) = (n – 1)/2 + ... + 2/2 + 1/2 = n(n – 1)/4 = O(n^2).

BEST CASE
The best case occurs when the array A is already in sorted order: the outer for loop iterates (n – 1) times and the inner while loop never executes, i.e.,
f(n) = O(n).

Shell Sort

• Shell sort is a sorting algorithm that is an extended version of insertion sort.
• It improves on the average time complexity of insertion sort.
• Like insertion sort, it is a comparison-based and in-place sorting algorithm.
• Shell sort is efficient for medium-sized data sets.
• In insertion sort, elements can only be moved ahead by one position at a time. Moving an element to a far-away position requires many movements, which increases the algorithm's execution time.
• Shell sort overcomes this drawback of insertion sort by allowing the movement and swapping of far-away elements as well.
• The algorithm first sorts the elements that are far away from each other, then subsequently reduces the gap between them.
• This gap is called the interval (gap size); the intervals are n/2, n/4, ..., 1.

Worked example (figures): for an array of n = 8 elements, the passes use intervals n/2 = 4, then n/4 = 2, then n/8 = 1 (the final pass is plain insertion sort).

Shell Sort: ALGORITHM

function Shell_Sort(a, n)
1. step = round(n/2)
2. Repeat steps 3 to 10 while step > 0
3.   Repeat steps 4 to 9 for i = step to n – 1
4.     temp = a[i]
5.     j = i
6.     Repeat steps 7 and 8 while j >= step and a[j – step] > temp
7.       a[j] = a[j – step]
8.       j = j – step        // end of step 6
9.     a[j] = temp           // end of step 3
10.  step = step/2           // end of step 2

The procedure for the shell sort algorithm is as follows:
Step 1) Initialize the interval value, step = n/2 (n is the size of the array).
Step 2) Put all the elements within a distance of the interval step in a sub-list.
Step 3) Sort those sub-lists using insertion sort.
Step 4) Set the new interval, step = step/2.
Step 5) If step > 0, go to step 2; else go to step 6.
Step 6) The resulting output is the sorted array.


Shell Sort: TIME COMPLEXITY

WORST CASE
Depends on the step (gap) sequence; the best known worst case is O(n^2) and occurs when the array is sorted in reverse order.

AVERAGE CASE
Also depends on the step sequence; the best known average case is O(n log n).

BEST CASE
The best case occurs when the array A is already in sorted order:
f(n) = O(n log n).

• Shell sort uses O(1) extra space and is not stable.
• It is a complex algorithm and is not nearly as efficient as the merge, heap, and quick sorts.

Merge Sort

Merge sort is a sorting algorithm based on the divide-and-conquer paradigm.
Divide: divide the input data S into two disjoint subsets S1 and S2.
Recur: solve the sub-problems associated with S1 and S2.
Conquer: combine the solutions for S1 and S2 into a solution for S.
• It is a recursive algorithm:
  – Divide the list into halves,
  – Sort each half separately (recursively), and
  – Then merge the sorted halves into one sorted array.

Merge Sort – Example

            6 3 9 1 5 4 7 2
divide:   6 3 9 1    |    5 4 7 2
divide:   6 3 | 9 1  |  5 4 | 7 2
divide:   6 | 3 | 9 | 1 | 5 | 4 | 7 | 2
merge:    3 6 | 1 9  |  4 5 | 2 7
merge:    1 3 6 9    |    2 4 5 7
merge:      1 2 3 4 5 6 7 9

Another example:
            40 20 10 80 60 50 7 30
divide:   40 20 | 10 80 | 60 50 | 7 30
merge:    20 40 | 10 80 | 50 60 | 7 30
merge:    10 20 40 80   |   7 30 50 60
merge:      7 10 20 30 40 50 60 80


Merge Algorithm (Two Sequences)

Merging two sequences:
1. Access the first item from both sequences.
2. While neither sequence is finished:
   1. Compare the current items of both.
   2. Copy the smaller current item to the output.
   3. Access the next item from that input sequence.
3. Copy any remaining items from the first sequence to the output.
4. Copy any remaining items from the second sequence to the output.

MERGE (ARR, BEG, MID, END)
Step 1: [INITIALIZE] SET I = BEG, J = MID + 1, INDEX = 0
Step 2: Repeat while (I <= MID) AND (J <= END)
            IF ARR[I] < ARR[J]
                SET TEMP[INDEX] = ARR[I]
                SET I = I + 1
            ELSE
                SET TEMP[INDEX] = ARR[J]
                SET J = J + 1
            [END OF IF]
            SET INDEX = INDEX + 1
        [END OF LOOP]
Step 3: [Copy the remaining elements of the right sub-array, if any]
        IF I > MID
            Repeat while J <= END
                SET TEMP[INDEX] = ARR[J]
                SET INDEX = INDEX + 1, SET J = J + 1
            [END OF LOOP]
        [Copy the remaining elements of the left sub-array, if any]
        ELSE
            Repeat while I <= MID
                SET TEMP[INDEX] = ARR[I]
                SET INDEX = INDEX + 1, SET I = I + 1
            [END OF LOOP]
        [END OF IF]
Step 4: [Copy the contents of TEMP back to ARR] SET K = 0
Step 5: Repeat while K < INDEX
            SET ARR[BEG + K] = TEMP[K]
            SET K = K + 1
        [END OF LOOP]
Step 6: END

Merge Sort Algorithm

MERGE_SORT(ARR, BEG, END)
Step 1: IF BEG < END
            SET MID = (BEG + END)/2
            CALL MERGE_SORT (ARR, BEG, MID)
            CALL MERGE_SORT (ARR, MID + 1, END)
            MERGE (ARR, BEG, MID, END)
        [END OF IF]
Step 2: END

Quick Sort

Quicksort is a sorting algorithm that follows the divide-and-conquer approach.
On average, it is faster than insertion sort and selection sort, and it has performance similar to merge sort.
The idea of quicksort is to select an element as a pivot and partition the given array around the pivot.
British computer scientist Tony Hoare developed this sorting algorithm in 1959.
There are several strategies for picking the pivot in partitioning:
1. Always pick the first element
2. Always pick the last element
3. Pick a random element



Quick Sort Algorithm

1. Select a key/pivot.
2. Find the first element larger than the pivot from the LHS.
3. Find the first element smaller than the pivot from the RHS.
4. If a larger element is found from the LHS and a smaller element from the RHS without a crossover, swap the larger element with the smaller one.
5. On crossover, swap the pivot element with the smaller one and split the array.
6. If only a smaller element from the RHS is found, swap the pivot element with the smaller one and split the array.
7. If only a larger element from the LHS is found, the pivot is already in its right position; split the array.

void quicksort(int arr[], int left, int right)
{
    int i = left, j = right;
    int tmp;
    int pivot = arr[(left + right) / 2];

    /* partition */
    while (i <= j) {
        while (arr[i] < pivot)
            i++;
        while (arr[j] > pivot)
            j--;
        if (i <= j) {
            tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
            i++;
            j--;
        }
    }

    /* recursion */
    if (left < j)
        quicksort(arr, left, j);
    if (i < right)
        quicksort(arr, i, right);
}


Counting Sort

• The counting sort technique does not perform sorting by comparing elements.
• It is an algorithm used to sort the elements of an array by counting and storing the frequency of each distinct element in an auxiliary array.
• Sorting is done by mapping the value of each element as an index of the auxiliary array.
• After that, it performs some arithmetic operations to calculate each element's index position in the output sequence.
• It is a sorting technique based on keys within a specific range.
• It is effective when the range is not much greater than the number of objects to be sorted.
• It is not used as a general-purpose sorting algorithm.
• It performs sorting by counting objects with distinct key values, similar to hashing.
• With an offset applied to the keys, it can also be used to sort negative input values.

Counting Sort – Working

Step I: Find the maximum element (let it be max) in the given array.

Step II: Initialize an array of length max + 1 with all elements 0. This array is used for storing the counts of the elements in the array.

Step III: Store the count of each element at its respective index in the count array.
For example, if the count of element 3 is 2, then 2 is stored at position 3 of the count array. If element 5 is not present in the array, 0 is stored at position 5.

Step IV: Store the cumulative sum of the elements of the count array. This helps in placing the elements into the correct index of the sorted array.

Step V: Find the index of each element of the original array in the count array; this gives the cumulative count. Place each element at the index so calculated.

Step VI: After placing each element at its correct position, decrease its count by 1.



Counting Sort Algorithm

Consider an array Arr[] of size N that we want to sort:

• Step 1: Declare an auxiliary array Aux[] of size max(Arr[]) + 1 and initialize it with 0.

• Step 2: Traverse array Arr[] and map each element of Arr[] as an index of the Aux[] array, i.e., execute Aux[Arr[i]]++ for 0 <= i < N.

• Step 3: Calculate the prefix sum at every index of array Aux[].

• Step 4: Create an array sortedArr[] of size N.

• Step 5: Traverse array Arr[] from right to left and update sortedArr[] as sortedArr[Aux[Arr[i]] - 1] = Arr[i]. Also update Aux[] as Aux[Arr[i]]--.

As we are using the index of an array to map the elements of Arr[], counting sort is not feasible if the difference between the maximum and minimum elements is huge.


Counting Sort: Pseudocode

countingSort(array, size)
    max <- find largest element in array
    initialize count array with all zeros
    for j <- 0 to size
        find the total count of each unique element and
        store the count at the jth index in the count array
    for i <- 1 to max
        find the cumulative sum and store it in the count array itself
    for j <- size down to 1
        restore the elements to the array
        decrease the count of each restored element by 1

Applications of the Counting Sort Algorithm

1. If the range of the input data is not much bigger than the number of objects to be sorted, counting sort is efficient. Consider the following scenario: the data is 10, 5, 10K, 5K, and the input sequence is 1 to 10K.
2. It is not a comparison-based sorting algorithm.
3. It has an O(n) running time complexity, with space proportional to the range of the data.
4. It is frequently used as a subroutine in other sorting algorithms, such as radix sort.
5. Counting sort counts the occurrences of each data object in O(1) using partial hashing.
6. Counting sort can also be used with negative inputs (by offsetting the keys).

Advantages of Counting Sort
1. Counting sort generally performs faster than all comparison-based sorting algorithms, such as merge sort and quicksort, if the range of the input is of the order of the number of inputs.
2. Counting sort is easy to code.
3. Counting sort is a stable algorithm.

Disadvantages of Counting Sort
1. Counting sort doesn't work on decimal values.
2. Counting sort is inefficient if the range of values to be sorted is very large.
3. Counting sort is not an in-place sorting algorithm; it uses extra space for sorting the array elements.

Complexity of the Counting Sort Algorithm
Finding M (the maximum element of array Arr[]) and mapping the elements of Arr[] takes O(N) time.
Initializing the Aux[] array takes O(M) time.
Building sortedArr[] takes O(N + M) time.
Therefore, the overall time complexity is O(N + M).
Worst-case time complexity: O(N + M).
Average-case time complexity: O(N + M).
Best-case time complexity: O(N + M).


Selection Sort (Add-on)

• The selection sort algorithm finds the smallest element of the array and interchanges it with the element in the first position of the array.
• Then it finds the second smallest element from the remaining elements in the array and places it in the second position of the array, and so on.
• Two loops are used in the selection sort algorithm.
• The outer loop moves from the first element in the array to the next-to-last element; the inner loop moves from the element after the outer loop's current position to the last element, looking for values smaller than the element currently pointed at by the outer loop.
• After each iteration of the inner loop, the minimum value of the remaining unsorted portion is placed in its proper position in the array.

Selection Sort – Example (sorted | unsorted)

Original List: 23 78 45 8 32 56
After pass 1:  8 | 78 45 23 32 56
After pass 2:  8 23 | 45 78 32 56
After pass 3:  8 23 32 | 78 45 56
After pass 4:  8 23 32 45 | 78 56
After pass 5:  8 23 32 45 56 78

Selection Sort: ALGORITHM

Let A be a linear array of n numbers A[0], A[1], A[2], ..., A[n–1].
Swap is a temporary variable for swapping (interchanging) the positions of the numbers.
Min is the variable that stores the smallest number, and Loc is the location of the smallest element.

1. Input n numbers of an array A.
2. Initialize i = 0 and repeat through step 5 while (i < n – 1):
   (a) min = A[i]
   (b) loc = i
3. Initialize j = i + 1 and repeat step 4 while (j <= n – 1):
4. If (A[j] < min)
   (a) min = A[j]
   (b) loc = j
5. If (loc != i)
   (a) swap = A[i]
   (b) A[i] = A[loc]
   (c) A[loc] = swap
6. Display "The sorted numbers of array A"
7. Exit


Selection Sort: TIME COMPLEXITY

Selection sort performs the same number of comparisons regardless of the initial order of the array: the inner loop always makes (n – 1) + (n – 2) + ... + 2 + 1 = n(n – 1)/2 comparisons.

WORST CASE:   f(n) = n(n – 1)/2 = O(n^2).
AVERAGE CASE: f(n) = n(n – 1)/2 = O(n^2).
BEST CASE:    even for an already sorted array all comparisons are still made (only the swaps are skipped), so f(n) = O(n^2).

Comparison of Sorting Algorithms

SORT       BEST         AVERAGE        WORST        MEMORY SPACE     STABILITY    METHOD
BUBBLE     O(n)         O(n^2)         O(n^2)       O(1) constant    Stable       Exchange
SELECTION  O(n^2)       O(n^2)         O(n^2)       O(1) constant    Not stable   Selection
INSERTION  O(n)         O(n^2)         O(n^2)       O(1) constant    Stable       Insertion
SHELL      O(n log n)   gap-dependent  O(n^2)       O(1) constant    Not stable   Insertion
QUICK      O(n log n)   O(n log n)     O(n^2)       O(log n) stack   Not stable   Partitioning
RADIX      O(nk)        O(nk)          O(nk)        Depends          Stable       Distribution
MERGE      O(n log n)   O(n log n)     O(n log n)   O(n)             Stable       Partitioning + Merging
HEAP       O(n log n)   O(n log n)     O(n log n)   O(1) constant    Not stable   Selection
COUNTING   O(N+M)       O(N+M)         O(N+M)       O(N+M)           Stable       Counting

Sorting Practice Problems

• Consider the following array values. Solve by all methods.
1) 77, 11, 33, 2, 6, 1, 99
2) 22, 11, 34, -5, 3, 40, 9, 16, 6
3) 23, 17, 5, 90, 12, 44, 38, 84, 77
4) 98, 81, 71, 34, 50, 54, 31, 71, 88, 90
5) 99, 72, 46, 79, 81, 34, -14, 13, -27, 47, -33, 35
6) 46, -2, 83, 41, 102, -5, 17, 31, 64, 49, -18, 37, 83
7) 39, -23, 17, 90, 33, 72, 46, 79, -11, 52, 64, -5, 71
8) 24, 37, 46, -11, 85, 47, 33, 66, 22, 84, 95, 55, -14, -9, 76, 35
9) 44, 68, 191, 119, -37, 83, 82, 196, 45, 158, 130, 76, 153, 39, 25

Questions?
• A good question deserves a good grade...
