Lecture 2 DAA

Lecture 2 Notes:

Sorting Problem Through Exhaustive Search

The sorting problem involves arranging a sequence of n distinct numbers in ascending or descending
order. One theoretical approach is Exhaustive Search: generate all possible permutations of the given
numbers, then check each permutation until the sorted order is found.

Exhaustive Search Explanation:


- If you have n distinct numbers, the total number of different ways to arrange them is n! (the
factorial of n).
- To sort these numbers using exhaustive search:
1. Generate all n! permutations.
2. For each permutation, check whether it is sorted.
3. The first sorted permutation found is the answer.

Time Complexity of Exhaustive Search:


- Number of Permutations: n!
- For large values of n, this approach is impractical because n! grows faster than any exponential
function of n.
- Example: For n = 5, there are 5! = 120 permutations.
- For n = 10, there are 10! = 3,628,800 permutations.

Therefore, exhaustive search is extremely inefficient and is not used in practice for solving the sorting
problem. More efficient algorithms, like Selection Sort, are used instead.

---

Selection Sort Algorithm

Selection sort is a comparison-based sorting algorithm that works by repeatedly selecting the smallest
(or largest) element from the unsorted part of the array and swapping it with the first element of the
unsorted part. It is an in-place algorithm, requiring only a constant amount of extra memory.

How Selection Sort Works:


1. Start with the first element in the array.
2. Search the entire unsorted part of the array to find the smallest element.
3. Swap the smallest element with the first element of the unsorted part.
4. Repeat the process for the remaining unsorted portion of the array.

Algorithm (Iterative Approach):


```c
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        // Find the index of the smallest element in arr[i..n-1]
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        // Swap the found minimum element with the first unsorted element
        int temp = arr[minIndex];
        arr[minIndex] = arr[i];
        arr[i] = temp;
    }
}
```

Example:
Consider the array `[64, 25, 12, 22, 11]`.

- Step 1: Find the smallest element in the entire array (`11`). Swap it with the first element.
- Array becomes: `[11, 25, 12, 22, 64]`.
- Step 2: Find the smallest element from the remaining unsorted part (`12`). Swap it with the second
element.
- Array becomes: `[11, 12, 25, 22, 64]`.
- Step 3: Repeat the process until the entire array is sorted.

---

Time Complexity Analysis of Selection Sort

The time complexity of selection sort can be analyzed by counting the number of comparisons and
swaps.

Number of Comparisons:
- In the first iteration, finding the minimum requires comparing against the other n-1 elements.
- In the second iteration, it requires n-2 comparisons, and so on.
- The total number of comparisons is:

T(n) = (n-1) + (n-2) + (n-3) + ... + 1

This is an arithmetic series and sums up to:

T(n) = n(n-1)/2

Number of Swaps:
- In each iteration, one swap is made (except when the element is already in its correct position).
- This gives a total of ( n-1 ) swaps in the worst case.

Time Complexity:
- The dominant term in the comparison count n(n-1)/2 = n^2/2 - n/2 is the quadratic term, so we drop
the lower-order terms and constant factors.
- The overall time complexity is:

O(n^2)

Selection sort runs in quadratic time, which makes it inefficient for large arrays.

Best, Worst, and Average Case Complexity:


- Best Case: O(n^2) (even if the array is already sorted, selection sort still performs all
comparisons.)
- Worst Case: O(n^2)
- Average Case: O(n^2)
---

Characteristics of Selection Sort


- In-Place Sorting: Selection sort does not require extra space beyond a few temporary variables for
swapping. Hence, it is an in-place sorting algorithm.
- Stable or Unstable?: Selection sort is unstable because a swap can move an element past other
elements of equal value, changing their relative order in the array.
- Comparison-based: Selection sort is a comparison-based sorting algorithm and does not use any
advanced data structures.

---

Note on Selection Sort

Selection sort is a simple and easy-to-implement sorting algorithm with a time complexity of O(n^2),
making it inefficient for large datasets. However, for small arrays or educational purposes, it can be
useful due to its simplicity. In practical scenarios, algorithms like Merge Sort, Quick Sort, and Heap Sort
are preferred because they have better time complexities.

---

GATE Questions

1. Selection Sort Time Complexity


What is the time complexity of selection sort in the worst case?
- a) O(n)
- b) O(log n)
- c) O(n^2)
- d) O(n^3)

Answer: c) O(n^2)

2. Number of Comparisons in Selection Sort


How many comparisons does selection sort make in the best case when sorting an array of size n?
- a) n(n-1)/2
- b) n^2
- c) n log n
- d) n

Answer: a) n(n-1)/2

3. Selection Sort Best Case


The best-case time complexity of selection sort is:
- a) O(n)
- b) O(n log n)
- c) O(n^2)
- d) O(1)

Answer: c) O(n^2)
---

Pseudocode for Exhaustive Search Sort

FUNCTION isSorted(arr, n):
    FOR i FROM 1 TO n-1:
        IF arr[i] < arr[i-1]:
            RETURN False
    RETURN True

FUNCTION nextPermutation(arr, n):
    i ← n - 2
    WHILE i ≥ 0 AND arr[i] ≥ arr[i+1]:
        i ← i - 1
    IF i < 0:
        REVERSE arr
        RETURN False

    j ← n - 1
    WHILE arr[j] ≤ arr[i]:
        j ← j - 1

    SWAP arr[i] AND arr[j]
    REVERSE arr[i+1 .. n-1]
    RETURN True

FUNCTION exhaustiveSearchSort(arr, n):
    SORT arr to get the smallest permutation

    DO:
        IF isSorted(arr, n):
            PRINT "Sorted array found: ", arr
            RETURN
    WHILE nextPermutation(arr, n) == True

    PRINT "No sorted permutation found"
END FUNCTION

Explanation of Pseudocode:

1. `isSorted(arr, n)`:
- This function checks if the array is sorted in ascending order by comparing each element with the
previous one.
- If any element is smaller than the previous one, the array is not sorted, and the function returns
`False`.
2. `nextPermutation(arr, n)`:
- This function generates the next lexicographical permutation of the array using a standard algorithm
(similar to C++'s `next_permutation` function).
- If no next permutation exists (i.e., the array is in its largest permutation), it returns `False`.
- Otherwise, it modifies the array to the next permutation and returns `True`.

3. `exhaustiveSearchSort(arr, n)`:
- The array is first sorted to start with the smallest lexicographical permutation.
- A loop generates all possible permutations of the array.
- For each permutation, `isSorted()` checks if the array is sorted.
- If a sorted permutation is found, it prints the array and terminates.
- The loop continues until all permutations have been checked.

---

Example:

For an array `arr = [3, 1, 4, 2]`:


- The array is first sorted to `[1, 2, 3, 4]`.
- The algorithm generates and checks all permutations until it finds a sorted one, which in this case is the
first permutation.

This pseudocode demonstrates how to approach sorting via exhaustive search in a straightforward
manner by checking all permutations, though it is not efficient for large datasets.
