Lecture 2 DAA
The sorting problem involves arranging a sequence of ( n ) distinct numbers in ascending or descending order. One theoretical approach is Exhaustive Search: generate all possible permutations of the given numbers and check each one until the sorted order is found. Since ( n ) distinct numbers have ( n! ) permutations, this approach may have to examine up to ( n! ) candidates. Therefore, exhaustive search is extremely inefficient and is not used in practice for solving the sorting problem. More efficient algorithms, like Selection Sort, are used instead.
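To get a feel for how quickly ( n! ) outgrows any feasible budget, a short illustrative sketch (not part of the lecture):

```python
import math

# Worst-case number of permutations an exhaustive-search sort must examine.
for n in [5, 10, 15, 20]:
    print(n, math.factorial(n))
# Even at n = 20 there are over 2.4 * 10^18 permutations to check.
```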
---
Selection sort is a comparison-based sorting algorithm that works by repeatedly selecting the smallest (or largest) element from the unsorted part of the array and swapping it with the first element of the unsorted part. It is an in-place algorithm, meaning it requires only a constant amount of additional memory.
Example:
Consider the array `[64, 25, 12, 22, 11]`.
- Step 1: Find the smallest element in the entire array (`11`). Swap it with the first element.
- Array becomes: `[11, 25, 12, 22, 64]`.
- Step 2: Find the smallest element from the remaining unsorted part (`12`). Swap it with the second
element.
- Array becomes: `[11, 12, 25, 22, 64]`.
- Step 3: Repeat the process until the entire array is sorted.
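The steps above can be sketched in Python (a minimal illustration; the function name `selection_sort` is my own, not from the lecture):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum of the unsorted suffix."""
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it into position i, extending the sorted prefix by one element.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # → [11, 12, 22, 25, 64]
```

After the swap in iteration `i`, the prefix `arr[:i+1]` is sorted and contains the `i+1` smallest elements, which is the loop invariant that makes the algorithm correct.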
---
The time complexity of selection sort can be analyzed by counting the number of comparisons and
swaps.
Number of Comparisons:
- In the first iteration, we compare the first element with the other ( n-1 ) elements.
- In the second iteration, we compare the second element with the remaining ( n-2 ) elements, and so on.
- The total number of comparisons is:
[ T(n) = (n-1) + (n-2) + (n-3) + ... + 1 ]
This is an arithmetic series and sums up to:
[ T(n) = n*(n-1)/2 ]
Number of Swaps:
- In each iteration, one swap is made (except when the element is already in its correct position).
- This gives a total of ( n-1 ) swaps in the worst case.
Time Complexity:
- Since the number of comparisons ( n(n-1)/2 ) is dominated by the ( n^2/2 ) term, we drop the
lower-order terms and constant factors.
- The overall time complexity is:
[
O(n^2)
]
Selection sort runs in quadratic time, which makes it inefficient for large arrays.
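The comparison and swap counts above can be verified empirically by instrumenting the loops (a sketch; the counter names are my own):

```python
def selection_sort_counts(arr):
    """Sort arr in place and return (comparisons, swaps) performed."""
    n = len(arr)
    comparisons = swaps = 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1          # one comparison per inner-loop step
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:              # swap only when the minimum moved
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return comparisons, swaps

comps, swaps = selection_sort_counts([64, 25, 12, 22, 11])
print(comps, swaps)  # comparisons = 5*4/2 = 10; at most n-1 = 4 swaps
```

For ( n = 5 ) the comparison count is exactly ( 5 \cdot 4 / 2 = 10 ), matching ( T(n) = n(n-1)/2 ), regardless of the input order.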
---
Selection sort is a simple and easy-to-implement sorting algorithm with a time complexity of ( O(n^2) ),
making it inefficient for large datasets. However, for small arrays or educational purposes, it can be
useful due to its simplicity. In practical scenarios, algorithms like Merge Sort, Quick Sort, and Heap Sort
are preferred because they have better time complexities.
---
GATE Questions
Answer: c) ( O(n^2) )
Answer: a) ( n(n-1)/2 )
Answer: c) ( O(n^2) )
Pseudocode for Exhaustive Search to sort an array by generating all possible permutations:

FUNCTION isSorted(arr, n):
    FOR i ← 1 TO n-1:
        IF arr[i] < arr[i-1]:
            RETURN False
    RETURN True
END FUNCTION

FUNCTION exhaustiveSearchSort(arr, n):
    // arr starts from its smallest lexicographical permutation
    DO:
        IF isSorted(arr, n):
            PRINT "Sorted array found: ", arr
            RETURN
    WHILE nextPermutation(arr, n)    // advances arr; returns False after the last permutation
END FUNCTION
Explanation of Pseudocode:
1. `isSorted(arr, n)`:
- This function checks if the array is sorted in ascending order by comparing each element with the
previous one.
- If any element is smaller than the previous one, the array is not sorted, and the function returns
`False`.
2. `nextPermutation(arr, n)`:
- This function generates the next lexicographical permutation of the array using a standard algorithm
(similar to C++'s `next_permutation` function).
- If no next permutation exists (i.e., the array is in its largest permutation), it returns `False`.
- Otherwise, it modifies the array to the next permutation and returns `True`.
3. `exhaustiveSearchSort(arr, n)`:
- The array is first sorted to start with the smallest lexicographical permutation.
- A loop generates all possible permutations of the array.
- For each permutation, `isSorted()` checks if the array is sorted.
- If a sorted permutation is found, it prints the array and terminates.
- The loop continues until all permutations have been checked.
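The same idea can be written concretely in Python using the standard library's `itertools.permutations` in place of a hand-written `nextPermutation` (a sketch under that substitution; the function names are my own):

```python
from itertools import permutations

def is_sorted(seq):
    """Check that each element is >= its predecessor (ascending order)."""
    return all(seq[i - 1] <= seq[i] for i in range(1, len(seq)))

def exhaustive_search_sort(arr):
    """Try permutations of arr until a sorted one is found -- O(n! * n) time."""
    for perm in permutations(arr):
        if is_sorted(perm):
            return list(perm)

print(exhaustive_search_sort([3, 1, 2]))  # → [1, 2, 3]
```

`itertools.permutations` plays the role of the `nextPermutation` helper: it yields every arrangement of the input exactly once, so the sorted arrangement is guaranteed to appear eventually.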
---
This pseudocode demonstrates how to approach sorting via exhaustive search in a straightforward manner by checking all permutations, though it is not efficient for large datasets.