
Asymptotic notations are mathematical notations used to analyze the efficiency or

performance of algorithms and data structures. They help in characterizing how the time
or space complexity of an algorithm or data structure grows with the input size. In the
context of data structures, asymptotic notations are often used to describe the worst-
case, average-case, or best-case behavior of operations such as searching, inserting, or
deleting elements.

The three most commonly used asymptotic notations in data structures are:

1. Big O Notation (O):

- Big O notation provides an upper bound on the growth rate of the algorithm or data
structure's complexity.

- It is most often used to describe the worst-case scenario, i.e., the upper limit on how the resource (time or space) consumption can grow.

- For example, O(n) indicates that the complexity grows linearly with the input size,
O(log n) suggests logarithmic growth, and O(1) represents constant time complexity.

2. Omega Notation (Ω):

- Omega notation provides a lower bound on the growth rate of the complexity.

- It is most often used to describe the best-case scenario, i.e., the lower limit on how the resource consumption will grow.

- For example, Ω(n) implies that the complexity will grow at least linearly with the input
size.

3. Theta Notation (Θ):

- Theta notation provides both upper and lower bounds on the growth rate of the
complexity.

- It describes a tight bound: the complexity grows at the same rate as the bounding function, up to constant factors.

- For example, Θ(n) indicates that the complexity grows linearly with the input size. Theta is the most precise notation and applies when you can establish matching upper and lower bounds.

When analyzing the performance of data structures, you can use these notations to
express how the time or space complexity of operations behaves as the size of the data
structure (e.g., the number of elements in an array or the number of nodes in a tree)
increases. This helps in making informed decisions when choosing the appropriate data
structure for a particular problem or optimizing algorithms to achieve better
performance.

Here are a few examples related to data structures:

- When analyzing the time complexity of searching for an element in an unsorted array, you might express it as O(n), since you may need to check all n elements in the worst case (see the sketch after this list).

- For a binary search tree (BST), the average-case time complexity for searching is often
expressed as O(log n), where n is the number of nodes in the tree.

- The space complexity of a hash table is O(n): it requires space proportional to the number of elements stored.
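
To make these bounds concrete, here is a minimal Python sketch (the function names `linear_search` and `binary_search` are illustrative, not from any particular library) contrasting an O(n) scan of an unsorted list with an O(log n) binary search of a sorted list:

```python
def linear_search(items, target):
    # O(n): in the worst case every element is examined
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # O(log n): each comparison halves the remaining search range
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(linear_search([5, 2, 9, 3, 6], 9))   # 2
print(binary_search([2, 3, 5, 6, 9], 9))   # 4
```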

These notations are invaluable in comparing and selecting data structures and
algorithms to meet specific performance requirements in various applications.

Let's go over three widely used sorting algorithms, with an algorithm description, a numerical example, and a Python program for each of Quick Sort, Radix Sort, and Merge Sort.

### Quick Sort:

**Algorithm:**

Quick Sort is a divide-and-conquer sorting algorithm. It works by selecting a 'pivot' element from the array and
partitioning the other elements into two sub-arrays, according to whether they are less than or greater than
the pivot. The sub-arrays are then recursively sorted.

**Example:**

Suppose we have an unsorted array: `[5, 2, 9, 3, 6]`

1. Choose a pivot element (e.g., the last element, `6`).

2. Partition the remaining elements into those less than or equal to the pivot and those greater than the pivot:

   - Lesser: `[5, 2, 3]`
   - Pivot: `6`
   - Greater: `[9]`

3. Recursively apply Quick Sort to the lesser and greater sub-arrays.

4. Combine the sorted sub-arrays and the pivot to get the final sorted array: `[2, 3, 5, 6, 9]`

**Python Program:**

```python

def quick_sort(arr):
    # Base case: arrays of length 0 or 1 are already sorted
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]    # choose the pivot (here, the last element)
    rest = arr[:-1]    # partition the remaining elements without mutating the input
    lesser = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    # Recursively sort the partitions and combine them around the pivot
    return quick_sort(lesser) + [pivot] + quick_sort(greater)

unsorted_array = [5, 2, 9, 3, 6]
sorted_array = quick_sort(unsorted_array)
print(sorted_array)  # [2, 3, 5, 6, 9]

```

### Radix Sort:

**Algorithm:**

Radix Sort is a non-comparative integer sorting algorithm that sorts elements by processing individual digits. It
starts with the least significant digit and moves towards the most significant digit.

**Example:**

Suppose we have an unsorted array: `[170, 45, 75, 90, 802, 24, 2, 66]`

1. Sort by the least significant digit (rightmost). Distribute the numbers into buckets 0-9 according to that digit:

   - 0: `[170, 90]`
   - 1: `[]`
   - 2: `[802, 2]`
   - 3: `[]`
   - 4: `[24]`
   - 5: `[45, 75]`
   - 6: `[66]`
   - 7: `[]`
   - 8: `[]`
   - 9: `[]`

2. Combine the buckets in order: `[170, 90, 802, 2, 24, 45, 75, 66]`

3. Repeat the distribute-and-combine pass for the tens digit and then the hundreds digit. After the final pass the array is fully sorted: `[2, 24, 45, 66, 75, 90, 170, 802]`

**Python Program:**

```python

def radix_sort(arr):
    # Sort by each digit position, least significant first
    max_num = max(arr)
    exp = 1
    while max_num // exp > 0:
        counting_sort(arr, exp)
        exp *= 10

def counting_sort(arr, exp):
    # Stable counting sort of arr by the digit at place value exp
    n = len(arr)
    output = [0] * n
    count = [0] * 10

    # Count occurrences of each digit
    for i in range(n):
        index = (arr[i] // exp) % 10
        count[index] += 1

    # Turn the counts into cumulative end positions
    for i in range(1, 10):
        count[i] += count[i - 1]

    # Build the output array, walking backwards to keep the sort stable
    i = n - 1
    while i >= 0:
        index = (arr[i] // exp) % 10
        output[count[index] - 1] = arr[i]
        count[index] -= 1
        i -= 1

    # Copy this pass's result back into the input array
    for i in range(n):
        arr[i] = output[i]

unsorted_array = [170, 45, 75, 90, 802, 24, 2, 66]
radix_sort(unsorted_array)
print(unsorted_array)  # [2, 24, 45, 66, 75, 90, 170, 802]

```

### Merge Sort:

**Algorithm:**

Merge Sort is a divide-and-conquer sorting algorithm that divides the array into two halves, sorts them, and
then merges the sorted halves back together.

**Example:**

Suppose we have an unsorted array: `[38, 27, 43, 3, 9, 82, 10]`


1. Divide the array into two halves:

- Left: `[38, 27, 43]`

- Right: `[3, 9, 82, 10]`

2. Recursively sort both halves:

- Left: `[27, 38, 43]`

- Right: `[3, 9, 10, 82]`

3. Merge the sorted halves: `[3, 9, 10, 27, 38, 43, 82]`

**Python Program:**

```python

def merge_sort(arr):
    # Base case: arrays of length 0 or 1 are already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])    # recursively sort the left half
    right_half = merge_sort(arr[mid:])   # recursively sort the right half
    return merge(left_half, right_half)

def merge(left, right):
    # Merge two sorted lists into a single sorted list
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains in either half
    result.extend(left[i:])
    result.extend(right[j:])
    return result

unsorted_array = [38, 27, 43, 3, 9, 82, 10]
sorted_array = merge_sort(unsorted_array)
print(sorted_array)  # [3, 9, 10, 27, 38, 43, 82]

```

These programs demonstrate the implementations of Quick Sort, Radix Sort, and Merge Sort in Python. You
can use these programs as a reference to understand how these algorithms work and apply them to your own
datasets.

Next, here are numerical examples of Counting Sort, Insertion Sort, Selection Sort, and Bubble Sort, each followed by a short Python sketch. Let's go through them one by one:

### Counting Sort:

Counting Sort is an integer sorting algorithm that works by counting the number of occurrences of each
element and using that information to place elements in the correct sorted position.

**Example:**

Suppose we have an unsorted array: `[4, 2, 2, 8, 3, 3, 1]`.

1. Count the occurrences of each element:

   - 1 occurs 1 time
   - 2 occurs 2 times
   - 3 occurs 2 times
   - 4 occurs 1 time
   - 8 occurs 1 time

2. Calculate the cumulative counts to find each value's starting index in the sorted output:

   - 1 starts at index 0
   - 2 starts at index 1
   - 3 starts at index 3
   - 4 starts at index 5
   - 8 starts at index 6

3. Create a new sorted array by placing elements in their correct positions based on cumulative counts:

- Sorted array: `[1, 2, 2, 3, 3, 4, 8]`
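
A compact Python sketch of this idea (the function name `counting_sort_simple` is illustrative; this version emits values directly from the counts rather than using the cumulative positions, which the stable variant inside the Radix Sort program above relies on):

```python
def counting_sort_simple(arr):
    # Count occurrences of each value from 0 up to the maximum
    max_value = max(arr)
    count = [0] * (max_value + 1)
    for value in arr:
        count[value] += 1
    # Emit each value as many times as it occurred
    result = []
    for value, occurrences in enumerate(count):
        result.extend([value] * occurrences)
    return result

print(counting_sort_simple([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```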

### Insertion Sort:

Insertion Sort is a simple sorting algorithm that builds the final sorted array one item at a time.

**Example:**

Suppose we have an unsorted array: `[5, 2, 9, 3, 6]`.

1. Start with the first element and consider it as a sorted list: `[5]`.

2. Take the next element (`2`) and insert it into the correct position within the sorted list: `[2, 5]`.

3. Continue with the next element (`9`) and insert it into the correct position: `[2, 5, 9]`.

4. Continue with the next element (`3`) and insert it into the correct position: `[2, 3, 5, 9]`.

5. Finally, insert the last element (`6`) into its correct position: `[2, 3, 5, 6, 9]`.
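
A minimal in-place Python sketch of this procedure (illustrative, matching the trace above):

```python
def insertion_sort(arr):
    # Grow a sorted prefix one element at a time
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements right to open a slot for key
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([5, 2, 9, 3, 6]))  # [2, 3, 5, 6, 9]
```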

### Selection Sort:

Selection Sort is a simple sorting algorithm that repeatedly selects the minimum element from the unsorted
portion of the array and moves it to the beginning of the sorted portion.

**Example:**

Suppose we have an unsorted array: `[64, 25, 12, 22, 11]`.

1. Find the minimum element in the unsorted portion and swap it with the first element: `[11, 25, 12, 22, 64]`.

2. Move the boundary between the sorted and unsorted portions to the right by one element.

3. Find the minimum element in the new unsorted portion and swap it with the second element: `[11, 12, 25,
22, 64]`.

4. Move the boundary again.

5. Continue this process until the array is sorted.
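
A minimal Python sketch of selection sort (illustrative, following the steps above):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the minimum element in the unsorted portion
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it to the boundary of the sorted portion
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```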

### Bubble Sort:

Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements,
and swaps them if they are in the wrong order.

**Example:**

Suppose we have an unsorted array: `[64, 34, 25, 12, 22, 11, 90]`.

1. Start with the first pair of elements (`64` and `34`). Since `64` is greater, swap them: `[34, 64, 25, 12, 22, 11,
90]`.

2. Continue comparing and swapping adjacent elements until the largest element (`90`) bubbles to the end:
`[34, 25, 12, 22, 11, 64, 90]`.

3. Repeat the process for the remaining unsorted portion of the array until the entire array is sorted.
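
A minimal Python sketch of bubble sort (illustrative; the early-exit flag is a common optimization, not part of the trace above):

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps means the array is already sorted
            break
    return arr

print(bubble_sort([64, 34, 25, 12, 22, 11, 90]))  # [11, 12, 22, 25, 34, 64, 90]
```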

These examples illustrate how counting sort, insertion sort, selection sort, and bubble sort work on different
input arrays to produce sorted outputs.
