Unit - 4 DS

Linear Search and Binary Search Techniques

### **Linear Search**

Linear search is a simple searching algorithm used to find an element in a list or array. It works by
sequentially checking each element until the desired element is found or the end of the list is reached.

#### **How It Works**


1. Start from the first element.
2. Compare the target value with the current element.
3. If the current element matches the target, return its index or position.
4. If not, move to the next element and repeat the process.
5. If the target is not found by the end of the list, return an indication that the element is not present
(e.g., `-1`).

#### **Algorithm**
1. **Input**: Array `arr[]` of size `n` and target `x`.
2. **Output**: Index of `x` if found, else `-1`.
3. **Steps**:
```
for i = 0 to n-1 do
    if arr[i] == x then
        return i
return -1
```
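
For reference, here is a minimal runnable Python sketch of this pseudocode (the name `linear_search` is illustrative, not from the text):

```
def linear_search(arr, x):
    # Scan positions left to right; return the first index whose value equals x.
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1  # x is not present

print(linear_search([10, 20, 30, 40, 50], 30))  # prints 2
```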

#### **Advantages**
- Simple and easy to implement.
- Does not require the list to be sorted.

#### **Disadvantages**
- Inefficient for large datasets.
- Time complexity: \( O(n) \) (worst and average cases).

---

### **Binary Search**

Binary search is a more efficient searching algorithm that works on sorted arrays. It repeatedly divides
the search range in half, eliminating half of the elements in each step.

#### **How It Works**


1. Start with the entire array as the search range.
2. Calculate the middle index: \( \text{mid} = \lfloor (\text{low} + \text{high}) / 2 \rfloor \).
3. Compare the middle element with the target:
- If it matches the target, return the index.
- If the target is smaller than the middle element, search the left half.
- If the target is larger than the middle element, search the right half.
4. Repeat the process until the target is found or the range is empty.

#### **Algorithm**
1. **Input**: Sorted array `arr[]`, size `n`, and target `x`.
2. **Output**: Index of `x` if found, else `-1`.
3. **Steps**:
```
low = 0
high = n - 1
while low ≤ high do
    mid = (low + high) // 2
    if arr[mid] == x then
        return mid
    else if arr[mid] > x then
        high = mid - 1
    else
        low = mid + 1
return -1
```
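
A corresponding Python sketch, assuming the input list is already sorted (the name `binary_search` is illustrative):

```
def binary_search(arr, x):
    # arr must be sorted in ascending order.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] > x:
            high = mid - 1   # search the left half
        else:
            low = mid + 1    # search the right half
    return -1

print(binary_search([10, 20, 30, 40, 50], 30))  # prints 2
```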

#### **Advantages**
- Highly efficient for large datasets.
- Time complexity: \( O(\log n) \).

#### **Disadvantages**
- Requires the array to be sorted beforehand.
- Slightly more complex to implement.

---

### **Comparison**

| **Aspect**          | **Linear Search**               | **Binary Search**                |
|---------------------|---------------------------------|----------------------------------|
| **Precondition**    | Works on unsorted arrays.       | Requires sorted arrays.          |
| **Time Complexity** | \( O(n) \)                      | \( O(\log n) \)                  |
| **Efficiency**      | Inefficient for large data.     | Highly efficient for large data. |
| **Use Case**        | Small datasets or unsorted data | Large datasets with sorted data  |

---

### **Example**

#### Linear Search:


- Array: `[10, 20, 30, 40, 50]`
- Target: `30`
- Steps:
- Compare `10` → No
- Compare `20` → No
- Compare `30` → Yes → Return index `2`.

#### Binary Search:


- Sorted Array: `[10, 20, 30, 40, 50]`
- Target: `30`
- Steps:
- `low = 0, high = 4`, mid = `2`, compare `30` → Match → Return index `2`.

Sorting

Selection sort

### **Definition of Sorting**

Sorting is the process of arranging elements of a list or array in a specific order. The order can be
**ascending** (smallest to largest) or **descending** (largest to smallest). Sorting is a fundamental
operation in computer science, often used to organize data for easier searching, comparison, and
analysis.

### **Importance of Sorting**


- Simplifies search operations (e.g., binary search requires sorted data).
- Enhances data organization for applications like databases and file systems.
- Serves as a prerequisite for many algorithms in computer science.

---

### **Types of Sorting Algorithms**

Sorting algorithms can be broadly classified into:


1. **Comparison-Based Sorting**:
- Algorithms like Bubble Sort, Selection Sort, Merge Sort, etc.
- Elements are compared to one another to determine their order.

2. **Non-Comparison-Based Sorting**:
- Algorithms like Counting Sort, Radix Sort, and Bucket Sort.
- These rely on mathematical properties rather than direct comparisons.

---

### **Selection Sort**

**Selection Sort** is a simple comparison-based sorting algorithm. It works by repeatedly finding the
smallest (or largest) element from the unsorted part of the list and moving it to the sorted part.

#### **How It Works**


1. Divide the array into two parts: **sorted** and **unsorted**.
2. Find the smallest element in the unsorted part.
3. Swap the smallest element with the first element of the unsorted part.
4. Repeat this process for all elements until the entire array is sorted.

---

### **Algorithm**

1. **Input**: Array `arr[]` of size `n`.


2. **Output**: Sorted array in ascending order.
3. **Steps**:
```
for i = 0 to n-2 do
    min_index = i
    for j = i+1 to n-1 do
        if arr[j] < arr[min_index] then
            min_index = j
    Swap arr[i] and arr[min_index]
```
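
A runnable Python sketch of the same procedure (the name `selection_sort` is illustrative):

```
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the smallest element in the unsorted part arr[i..n-1].
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Move it to the front of the unsorted part.
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```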

---

### **Example**

#### Input Array:


`[64, 25, 12, 22, 11]`

#### Sorting Steps:


1. **Initial Array**: `[64, 25, 12, 22, 11]`
- Smallest element: `11` → Swap with `64`.
- Result: `[11, 25, 12, 22, 64]`

2. **Second Pass**:
- Smallest element in `[25, 12, 22, 64]`: `12` → Swap with `25`.
- Result: `[11, 12, 25, 22, 64]`

3. **Third Pass**:
- Smallest element in `[25, 22, 64]`: `22` → Swap with `25`.
- Result: `[11, 12, 22, 25, 64]`

4. **Fourth Pass**:
- Smallest element in `[25, 64]`: `25` → No swap needed.
- Result: `[11, 12, 22, 25, 64]`

#### Final Sorted Array:


`[11, 12, 22, 25, 64]`
---

### **Time Complexity**

| Case             | Time Complexity | Explanation                                  |
|------------------|-----------------|----------------------------------------------|
| **Best Case**    | \( O(n^2) \)    | Every element is compared in each pass.      |
| **Worst Case**   | \( O(n^2) \)    | Same number of comparisons as the best case. |
| **Average Case** | \( O(n^2) \)    | Comparisons remain the same.                 |

#### **Space Complexity**:


- \( O(1) \): Only a constant amount of extra space is used.

---

### **Advantages**
- Simple and easy to understand.
- Requires minimal additional memory (in-place sorting).

### **Disadvantages**
- Inefficient for large datasets due to \( O(n^2) \) complexity.
- Comparatively slower than other algorithms like Quick Sort or Merge Sort.

Bubble Sort

Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares
adjacent elements, and swaps them if they are in the wrong order. The process continues until the
list is sorted.

The algorithm gets its name because the smaller elements "bubble" to the top (beginning of the
array) and the larger elements "sink" to the bottom (end of the array).

How It Works

1. Start from the first element of the array.


2. Compare each pair of adjacent elements.
o If the current element is larger than the next element, swap them.
o Otherwise, move to the next pair.
3. After each pass, the largest element in the unsorted part of the array is moved to its
correct position.
4. Repeat the process for the remaining unsorted part until no swaps are needed.
Algorithm

1. Input: Array arr[] of size n.


2. Output: Sorted array in ascending order.
3. Steps:

for i = 0 to n-1 do
    swapped = false
    for j = 0 to n-i-2 do
        if arr[j] > arr[j+1] then
            Swap arr[j] and arr[j+1]
            swapped = true
    if swapped == false then
        break
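
A runnable Python version of the pseudocode above (a sketch; the name bubble_sort is illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After this pass, the largest unsorted element reaches index n-i-1.
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:      # no swaps means the array is already sorted
            break
    return arr

print(bubble_sort([64, 34, 25, 12, 22, 11, 90]))  # [11, 12, 22, 25, 34, 64, 90]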

Example

Input Array:

[64, 34, 25, 12, 22, 11, 90]

Sorting Steps:

1. Pass 1:
o Compare 64 and 34 → Swap → [34, 64, 25, 12, 22, 11, 90]
o Compare 64 and 25 → Swap → [34, 25, 64, 12, 22, 11, 90]
o Compare 64 and 12 → Swap → [34, 25, 12, 64, 22, 11, 90]
o Compare 64 and 22 → Swap → [34, 25, 12, 22, 64, 11, 90]
o Compare 64 and 11 → Swap → [34, 25, 12, 22, 11, 64, 90]
o Compare 64 and 90 → No Swap → [34, 25, 12, 22, 11, 64, 90]

Largest element (90) is placed correctly.

2. Pass 2:
o Compare 34 and 25 → Swap → [25, 34, 12, 22, 11, 64, 90]
o Compare 34 and 12 → Swap → [25, 12, 34, 22, 11, 64, 90]
o Compare 34 and 22 → Swap → [25, 12, 22, 34, 11, 64, 90]
o Compare 34 and 11 → Swap → [25, 12, 22, 11, 34, 64, 90]
o Compare 34 and 64 → No Swap → [25, 12, 22, 11, 34, 64, 90]

Second largest element (64) is placed correctly.

3. Pass 3:
o Compare 25 and 12 → Swap → [12, 25, 22, 11, 34, 64, 90]
o Compare 25 and 22 → Swap → [12, 22, 25, 11, 34, 64, 90]
o Compare 25 and 11 → Swap → [12, 22, 11, 25, 34, 64, 90]
o Compare 25 and 34 → No Swap → [12, 22, 11, 25, 34, 64, 90]

Third largest element (34) is placed correctly.

4. Pass 4:
o Compare 12 and 22 → No Swap → [12, 22, 11, 25, 34, 64, 90]
o Compare 22 and 11 → Swap → [12, 11, 22, 25, 34, 64, 90]
o Compare 22 and 25 → No Swap → [12, 11, 22, 25, 34, 64, 90]

Fourth largest element (25) is placed correctly.

5. Pass 5:
o Compare 12 and 11 → Swap → [11, 12, 22, 25, 34, 64, 90]
o Compare 12 and 22 → No Swap → [11, 12, 22, 25, 34, 64, 90]

Fifth largest element (22) is placed correctly.

6. Pass 6:
o Compare 11 and 12 → No Swap → [11, 12, 22, 25, 34, 64, 90]

Array is now sorted.

Final Sorted Array:

[11, 12, 22, 25, 34, 64, 90]

Insertion Sort

Insertion Sort is a simple and intuitive sorting algorithm that builds the final sorted array one
element at a time. It works by repeatedly picking an element from the unsorted part and placing
it in its correct position in the sorted part of the array.

How It Works

1. Divide the array into two parts: sorted and unsorted.


2. Start with the first element as a single-element sorted part.
3. Pick the next element from the unsorted part.
4. Insert the picked element into its correct position in the sorted part by shifting larger
elements to the right.
5. Repeat the process until all elements are sorted.
Algorithm

1. Input: Array arr[] of size n.


2. Output: Sorted array in ascending order.
3. Steps:

for i = 1 to n-1 do
    key = arr[i]
    j = i - 1
    while j >= 0 and arr[j] > key do
        arr[j+1] = arr[j]
        j = j - 1
    arr[j+1] = key
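
A runnable Python sketch of the same procedure (the name insertion_sort is illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements of the sorted part one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key   # insert key into its correct position
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]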

Example

Input Array:

[12, 11, 13, 5, 6]

Sorting Steps:

1. Initial Array: [12, 11, 13, 5, 6]


2. Step 1 (i = 1, key = 11):
o Compare 11 with 12 → Shift 12 to the right.
o Insert 11 at position 0.
o Result: [11, 12, 13, 5, 6]
3. Step 2 (i = 2, key = 13):
o Compare 13 with 12 → No shift needed.
o Insert 13 at position 2.
o Result: [11, 12, 13, 5, 6]
4. Step 3 (i = 3, key = 5):
o Compare 5 with 13 → Shift 13 to the right.
o Compare 5 with 12 → Shift 12 to the right.
o Compare 5 with 11 → Shift 11 to the right.
o Insert 5 at position 0.
o Result: [5, 11, 12, 13, 6]
5. Step 4 (i = 4, key = 6):
o Compare 6 with 13 → Shift 13 to the right.
o Compare 6 with 12 → Shift 12 to the right.
o Compare 6 with 11 → Shift 11 to the right.
o Insert 6 at position 1.
o Result: [5, 6, 11, 12, 13]

Final Sorted Array:

[5, 6, 11, 12, 13]

Time Complexity

Case         | Time Complexity | Explanation
-------------|-----------------|-----------------------------------------------------------------------------------------
Best Case    | O(n)            | When the array is already sorted, only one comparison per element is made.
Worst Case   | O(n²)           | When the array is sorted in reverse order, every element must be compared with all others.
Average Case | O(n²)           | Comparisons and shifts dominate.

Space Complexity

- O(1): In-place sorting (no extra space required).

Advantages

- Simple to implement.
- Efficient for small datasets or partially sorted data.
- Stable sort (preserves the relative order of equal elements).

Disadvantages

- Inefficient for large datasets due to O(n²) complexity.
- Performance degrades for reverse-ordered arrays.

Quick Sort

Quick Sort is a highly efficient and widely used sorting algorithm. It follows the divide-and-
conquer approach to sort elements by partitioning the array into smaller subarrays and
recursively sorting them.
How It Works

1. Choose a Pivot: Select an element as the pivot. This element will help partition the array.
2. Partition: Rearrange the elements such that:
o Elements smaller than the pivot are placed to the left.
o Elements larger than the pivot are placed to the right.
3. Recursive Sort:
o Recursively apply the above steps to the subarrays (left and right of the pivot).
o Continue until the subarrays are of size 0 or 1 (already sorted).

Algorithm

1. Input: Array arr[] of size n.


2. Output: Sorted array in ascending order.
3. Steps:

function quickSort(arr, low, high):
    if low < high then
        pivotIndex = partition(arr, low, high)
        quickSort(arr, low, pivotIndex - 1)
        quickSort(arr, pivotIndex + 1, high)

function partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j = low to high - 1 do
        if arr[j] < pivot then
            i = i + 1
            Swap arr[i] and arr[j]
    Swap arr[i + 1] and arr[high]
    return i + 1
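
A runnable Python sketch using the same last-element (Lomuto) partition scheme (function names are illustrative):

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

def partition(arr, low, high):
    pivot = arr[high]          # last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

data = [10, 80, 30, 90, 40, 50, 70]
quick_sort(data, 0, len(data) - 1)
print(data)  # [10, 30, 40, 50, 70, 80, 90]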

Example

Input Array:

[10, 80, 30, 90, 40, 50, 70]

Sorting Steps:

1. Initial Array:
[10, 80, 30, 90, 40, 50, 70]
2. First Partition (Pivot = 70):
o Rearrange so elements smaller than 70 are on the left, and larger ones are on the right.
o Result: [10, 30, 40, 50, 70, 90, 80]
o Pivot position = 4.
3. Left Subarray: [10, 30, 40, 50]
o Pivot = 50.
o Result: [10, 30, 40, 50] (already sorted).
4. Right Subarray: [90, 80]
o Pivot = 80.
o Result: [80, 90] (sorted).
5. Final Sorted Array:
[10, 30, 40, 50, 70, 80, 90]

Time Complexity

Case         | Time Complexity | Explanation
-------------|-----------------|-----------------------------------------------------------------------------------------
Best Case    | O(n log n)      | The array is split into two equal halves at each step.
Average Case | O(n log n)      | The pivot divides the array into reasonably balanced subarrays.
Worst Case   | O(n²)           | Occurs when the pivot is the smallest or largest element repeatedly (e.g., already sorted data).

Space Complexity

- In-Place Implementation: O(log n) due to the recursive stack for partitioning.
- Non-In-Place Implementation: Additional memory may be required for subarrays.

Advantages

- Fast and efficient for large datasets.
- In-place sorting (requires minimal extra memory).
- Average-case performance is excellent.

Disadvantages

- Worst-case performance is poor (O(n²)).
- Unstable sorting algorithm (relative order of equal elements may not be preserved).
- Requires careful selection of the pivot to avoid unbalanced partitions.
Pivot Selection Strategies

1. First Element: Simple but can lead to poor performance if the array is already sorted.
2. Last Element: Same issue as the first element.
3. Random Pivot: Reduces the likelihood of worst-case scenarios.
4. Median of Three: Selects the median of the first, middle, and last elements to improve
partitioning balance.

Merge Sort

Merge Sort is a highly efficient, stable sorting algorithm based on the divide-and-
conquer paradigm. It divides the array into smaller subarrays, sorts them, and then merges them
to produce the final sorted array.

How It Works

1. Divide: Split the array into two halves until each subarray contains a single element (or
no elements).
2. Conquer: Recursively sort the two halves.
3. Merge: Combine the two sorted halves into a single sorted array.

Algorithm

1. Input: Array arr[] of size n.


2. Output: Sorted array in ascending order.
3. Steps:

function mergeSort(arr, left, right):
    if left < right then
        mid = (left + right) / 2
        mergeSort(arr, left, mid)
        mergeSort(arr, mid + 1, right)
        merge(arr, left, mid, right)

function merge(arr, left, mid, right):
    n1 = mid - left + 1
    n2 = right - mid
    Create two temporary arrays L[0..n1-1] and R[0..n2-1]

    # Copy data to the temporary arrays
    for i = 0 to n1-1:
        L[i] = arr[left + i]
    for j = 0 to n2-1:
        R[j] = arr[mid + 1 + j]

    # Merge the temporary arrays back into arr
    i = 0, j = 0, k = left
    while i < n1 and j < n2:
        if L[i] <= R[j] then
            arr[k] = L[i]
            i = i + 1
        else:
            arr[k] = R[j]
            j = j + 1
        k = k + 1

    # Copy remaining elements of L[] and R[] (if any)
    while i < n1:
        arr[k] = L[i]
        i = i + 1
        k = k + 1
    while j < n2:
        arr[k] = R[j]
        j = j + 1
        k = k + 1
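
A compact runnable Python sketch; note that, unlike the in-place pseudocode above, this version returns a new sorted list (the name merge_sort is illustrative):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]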

Example

Input Array:

[38, 27, 43, 3, 9, 82, 10]

Sorting Steps:

1. Initial Array:
[38, 27, 43, 3, 9, 82, 10]
2. Divide Step:
o Split into [38, 27, 43, 3] and [9, 82, 10].
3. Recursive Divide:
o [38, 27, 43, 3] → [38, 27] and [43, 3]
o [9, 82, 10] → [9, 82] and [10]
4. Sort Subarrays:
o [38, 27] → [27, 38]
o [43, 3] → [3, 43]
o [9, 82] → [9, 82]
5. Merge Step:
o Merge [27, 38] and [3, 43] → [3, 27, 38, 43]
o Merge [9, 82] and [10] → [9, 10, 82]
6. Final Merge:
o Merge [3, 27, 38, 43] and [9, 10, 82] → [3, 9, 10, 27, 38, 43, 82]

Final Sorted Array:


[3, 9, 10, 27, 38, 43, 82]

Time Complexity

Case         | Time Complexity | Explanation
-------------|-----------------|------------------------------------------------------------------
Best Case    | O(n log n)      | The array is always divided into halves, and merging takes O(n).
Average Case | O(n log n)      | Similar process in all cases.
Worst Case   | O(n log n)      | Same behavior regardless of the input order.

Space Complexity

- Auxiliary Space: O(n), as temporary arrays are used for merging.
- In-Place Sorting: No, since additional space is required for merging.

Advantages

- Guaranteed O(n log n) time complexity.
- Stable sorting algorithm (preserves relative order of equal elements).
- Suitable for sorting linked lists.

Disadvantages

- Requires extra space for temporary arrays.
- Slower for small datasets compared to simpler algorithms like Insertion Sort.

Heap and Heap Sort

What is a Heap?

A Heap is a specialized tree-based data structure that satisfies the Heap Property:

1. Max-Heap Property: For every node, the value of the parent node is greater than or equal to the values of its children.
   Example: Parent ≥ Child1, Child2
2. Min-Heap Property: For every node, the value of the parent node is less than or equal to the values of its children.
   Example: Parent ≤ Child1, Child2

Types of Heaps

1. Max-Heap: Used for priority queues where the largest element is prioritized.
2. Min-Heap: Used for priority queues where the smallest element is prioritized.

Characteristics of a Heap

- A heap is always a complete binary tree, meaning all levels are completely filled except possibly the last level, which is filled from left to right.
- It is commonly represented using arrays for efficient access.

Array Representation of a Heap

In an array representation of a heap:

- For an element at index i (0-based):
  o Parent: arr[(i − 1) / 2] (integer division)
  o Left Child: arr[2i + 1]
  o Right Child: arr[2i + 2]
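
A small Python check of these index formulas (0-based indexing; the helper names are illustrative):

heap = [10, 5, 3, 4, 1]   # a max-heap stored as an array

def parent(i): return (i - 1) // 2
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2

# Node at index 1 (value 5): its parent is 10, its children are 4 and 1.
print(heap[parent(1)], heap[left(1)], heap[right(1)])  # 10 4 1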

What is Heap Sort?

Heap Sort is a sorting algorithm that uses a binary heap to sort an array. It works by first
building a max-heap (or min-heap) and then repeatedly extracting the largest (or smallest)
element from the heap.

Algorithm

1. Build a Max-Heap: Rearrange the array into a max-heap structure.


2. Extract Elements:
o Swap the root (largest element) with the last element.
o Reduce the size of the heap and restore the heap property.
o Repeat until the heap size is 1.

Steps of Heap Sort

1. Input: Array arr[] of size n.


2. Output: Sorted array in ascending order.
3. Steps:

function heapSort(arr):
    n = len(arr)

    # Step 1: Build Max-Heap
    for i = n//2 - 1 down to 0:
        heapify(arr, n, i)

    # Step 2: Extract elements from the heap
    for i = n-1 down to 1:
        Swap arr[0] and arr[i]    # Move current root to the end
        heapify(arr, i, 0)        # Restore the heap property on the reduced heap

function heapify(arr, n, i):
    largest = i
    left = 2*i + 1
    right = 2*i + 2

    # Check if the left child exists and is larger than the root
    if left < n and arr[left] > arr[largest]:
        largest = left

    # Check if the right child exists and is larger than the root
    if right < n and arr[right] > arr[largest]:
        largest = right

    # If the root is not the largest, swap and continue heapifying
    if largest != i:
        Swap arr[i] and arr[largest]
        heapify(arr, n, largest)
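
For comparison, a sketch using Python's standard heapq module (a binary min-heap): the array is heapified and the smallest element is popped repeatedly. Unlike the max-heap procedure above, this version is not in-place and uses O(n) extra space.

import heapq

def heap_sort_with_heapq(arr):
    heap = list(arr)          # copy so the input is not modified
    heapq.heapify(heap)       # O(n) min-heap construction
    return [heapq.heappop(heap) for _ in range(len(arr))]

print(heap_sort_with_heapq([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]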

Example

Input Array:

[4, 10, 3, 5, 1]

Steps:

1. Build Max-Heap:
o Rearrange into a max-heap: [10, 5, 3, 4, 1].
2. Extract Elements:
o Swap 10 with 1: [1, 5, 3, 4, 10].
o Restore heap property for remaining elements: [5, 4, 3, 1, 10].
3. Repeat until sorted:
o [4, 1, 3, 5, 10]
o [3, 1, 4, 5, 10]
o [1, 3, 4, 5, 10]

Final Sorted Array:

[1, 3, 4, 5, 10]

Time Complexity

Case         | Time Complexity | Explanation
-------------|-----------------|--------------------------------------------
Best Case    | O(n log n)      | Heap construction and heapify dominate.
Average Case | O(n log n)      | Consistent performance for all inputs.
Worst Case   | O(n log n)      | Similar process for all scenarios.

Space Complexity

- Auxiliary Space: O(1), as it is an in-place sorting algorithm.

Advantages

1. Efficient for large datasets.


2. In-place sorting (no extra memory required).
3. Consistent O(n log n) time complexity.

Disadvantages

1. Not stable (does not preserve the order of equal elements).


2. More complex to implement than simpler algorithms like Bubble Sort or Insertion Sort.

Algorithm Design Techniques:

Greedy Algorithm in Algorithm Design


A Greedy Algorithm is a problem-solving approach that makes a sequence of choices, each of
which is locally optimal, with the hope that these local solutions lead to a globally optimal
solution. It is widely used in algorithm design for optimization problems.

How Greedy Algorithms Work

1. Make a Greedy Choice: At each step, select the option that looks the best at the
moment.
2. Feasibility: Ensure the choice is valid for the problem constraints.
3. Optimal Substructure: Solve subproblems using the same greedy strategy.

Characteristics of Greedy Algorithms

1. Greedy Choice Property: A globally optimal solution can be arrived at by choosing a


local optimum.
2. Optimal Substructure: The problem can be broken down into smaller subproblems, and
solving them optimally leads to the solution of the entire problem.

Steps in Designing a Greedy Algorithm

1. Define the Problem: Understand the problem constraints and the optimization goal.
2. Identify the Greedy Choice: Determine how to make a choice at each step that seems
optimal.
3. Prove Correctness: Show that the greedy choice leads to an overall optimal solution
(using proof or mathematical induction).
4. Implement the Algorithm: Translate the greedy strategy into code.

Applications of Greedy Algorithms

Greedy algorithms are used in problems involving:

1. Optimization: Finding the maximum or minimum value.


2. Graph Problems: Minimum spanning tree, shortest paths.
3. Resource Allocation: Scheduling, job sequencing.

Examples of Greedy Algorithms


1. Activity Selection Problem

Problem: Given n activities with start and end times, select the maximum number of activities that don't overlap.
Greedy Strategy: Always choose the activity with the earliest finish time.

2. Huffman Encoding

Problem: Generate an optimal prefix code for a set of characters with given frequencies.
Greedy Strategy: Combine the two least frequent nodes iteratively.

3. Kruskal's Algorithm

Problem: Find the minimum spanning tree of a graph.


Greedy Strategy: Always pick the smallest edge that doesn't form a cycle.

4. Dijkstra's Algorithm

Problem: Find the shortest path from a source vertex to all other vertices in a graph.
Greedy Strategy: Pick the vertex with the smallest tentative distance.

Advantages

1. Simplicity: Easy to understand and implement.


2. Efficiency: Typically faster than other methods like dynamic programming.
3. Applicability: Works well for many real-world problems.

Disadvantages

1. Non-Optimality: Does not guarantee an optimal solution for all problems.


2. Local Decisions: Focuses on immediate gain, which might not always lead to the best
overall solution.

Comparison with Other Techniques


Feature    | Greedy Algorithm          | Dynamic Programming
-----------|---------------------------|--------------------------------------------------
Approach   | Locally optimal choice    | Solves overlapping subproblems.
Optimality | May or may not be optimal | Guarantees optimality.
Efficiency | Generally faster          | May be slower due to recursion or table filling.
Examples   | Kruskal, Huffman Coding   | Knapsack, Longest Common Subsequence.

Example: Coin Change Problem

Problem: Given denominations of coins and a total amount, find the minimum number of coins
needed.

Greedy Algorithm:

1. Sort the coins in descending order.


2. Pick the largest denomination less than or equal to the amount.
3. Repeat until the total amount is zero.

Example:

- Coins: {1, 2, 5, 10}
- Total Amount: 27

Steps:

- Pick 10, total becomes 17.
- Pick 10, total becomes 7.
- Pick 5, total becomes 2.
- Pick 2, total becomes 0.

Result: {10, 10, 5, 2} (4 coins).
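
A minimal Python sketch of this greedy procedure (the name greedy_coin_change is illustrative; remember that for arbitrary denominations the greedy choice is not always optimal):

def greedy_coin_change(coins, amount):
    coins = sorted(coins, reverse=True)   # largest denomination first
    picked = []
    for c in coins:
        while amount >= c:                # take this coin as long as it fits
            picked.append(c)
            amount -= c
    return picked

print(greedy_coin_change([1, 2, 5, 10], 27))  # [10, 10, 5, 2]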

Conclusion

The greedy algorithm is a simple yet powerful approach for solving optimization problems.
However, it is essential to analyze whether the problem satisfies the Greedy Choice
Property and Optimal Substructure to ensure that a greedy solution is correct.

Divide-and-Conquer Algorithm

Divide-and-Conquer is a fundamental algorithm design paradigm that breaks a problem into


smaller subproblems, solves them recursively, and then combines their solutions to solve the
original problem. It is widely used for problems where a recursive structure can simplify the
solution process.
Steps in Divide-and-Conquer

1. Divide: Break the problem into smaller subproblems, ideally of equal size.
2. Conquer: Solve each subproblem recursively. If a subproblem is small enough, solve it
directly.
3. Combine: Merge the solutions of the subproblems to form the solution of the original
problem.

Characteristics

1. Recursive Approach: Uses recursion to break down and solve problems.


2. Optimal Substructure: The problem can be broken down into independent subproblems
whose solutions can be combined.
3. Overlapping Subproblems: Some problems solved by Divide-and-Conquer may also
exhibit this property, but unlike Dynamic Programming, it does not store solutions.

General Form of a Divide-and-Conquer Algorithm

For a problem of size n:

- Divide: Partition the problem into k subproblems of size n/k.
- Conquer: Recursively solve the subproblems.
- Combine: Merge the k solutions.

The time complexity is expressed using a recurrence relation:

T(n) = k·T(n/k) + f(n)

Where:

- k·T(n/k): Time for the k subproblems.
- f(n): Time to divide and combine.
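
As a toy illustration of the pattern (not one of the classic examples listed below), a divide-and-conquer maximum with recurrence T(n) = 2T(n/2) + O(1):

def max_dc(arr, lo, hi):
    # Divide: split [lo, hi] in half; Conquer: solve each half; Combine: take the larger result.
    if lo == hi:
        return arr[lo]
    mid = (lo + hi) // 2
    return max(max_dc(arr, lo, mid), max_dc(arr, mid + 1, hi))

data = [3, 9, 1, 7, 4]
print(max_dc(data, 0, len(data) - 1))  # 9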

Examples of Divide-and-Conquer Algorithms

1. Merge Sort

- Divide: Split the array into two halves.
- Conquer: Sort each half recursively.
- Combine: Merge the sorted halves into a single sorted array.
- Time Complexity: O(n log n).

2. Quick Sort

- Divide: Choose a pivot and partition the array into two subarrays.
- Conquer: Sort the subarrays recursively.
- Combine: Combine the sorted subarrays.
- Time Complexity: O(n log n) (average case).

3. Binary Search

- Divide: Divide the search range into two halves.
- Conquer: Recursively search in the relevant half.
- Combine: No explicit combining; the result propagates upward.
- Time Complexity: O(log n).

4. Matrix Multiplication (Strassen’s Algorithm)

- Divide: Split the matrices into submatrices.
- Conquer: Multiply submatrices recursively.
- Combine: Combine the results to get the final matrix product.
- Time Complexity: O(n^2.81) (better than the naive O(n³)).

5. Closest Pair of Points

- Divide: Split the points into two halves.
- Conquer: Recursively find the closest pair in each half.
- Combine: Check the boundary region for closer pairs.
- Time Complexity: O(n log n).

Advantages of Divide-and-Conquer

1. Efficiency: Reduces problem size at each step, leading to faster algorithms.


2. Parallelism: Subproblems can often be solved in parallel.
3. Structured Approach: Provides a clear way to break down complex problems.

Disadvantages of Divide-and-Conquer

1. Recursive Overhead: Recursion can lead to high memory usage.


2. Merge Cost: Combining results may involve additional computational overhead.
3. Not Always Optimal: For problems without independent subproblems, it may not be
suitable.
Comparison with Other Techniques

Feature                 | Divide-and-Conquer                    | Dynamic Programming
------------------------|---------------------------------------|----------------------------------------------------
Approach                | Divides into independent subproblems  | Solves overlapping subproblems and stores results
Optimal Substructure    | Required                              | Required
Overlapping Subproblems | May or may not exist                  | Always exists
Examples                | Merge Sort, Quick Sort                | Fibonacci, Knapsack

Applications

1. Sorting: Merge Sort, Quick Sort.


2. Searching: Binary Search.
3. Numerical Algorithms: Strassen’s Matrix Multiplication, Fast Fourier Transform.
4. Computational Geometry: Closest Pair of Points, Convex Hull.

Conclusion

The Divide-and-Conquer technique is a powerful tool for designing algorithms that break
problems into smaller, more manageable pieces. When combined with an efficient merging
strategy, it can significantly improve performance for a wide range of computational tasks.

Dynamic Programming (DP)

Dynamic Programming (DP) is an algorithmic technique used to solve optimization problems


by breaking them down into smaller, overlapping subproblems. Unlike Divide-and-Conquer,
which solves subproblems independently, DP stores the results of already-solved subproblems to
avoid redundant computations.

Key Concepts of Dynamic Programming

1. Optimal Substructure:
   A problem exhibits optimal substructure if its solution can be constructed efficiently from the solutions of its subproblems.
   Example: In the Fibonacci sequence, F(n) = F(n−1) + F(n−2).
2. Overlapping Subproblems:
   A problem has overlapping subproblems if the same subproblem is solved multiple times during computation.
   Example: In the Fibonacci sequence, F(3) is computed when finding both F(4) and F(5).
3. Memoization vs. Tabulation:
o Memoization (Top-Down Approach): Recursively solve the problem and store
intermediate results in a table (cache).
o Tabulation (Bottom-Up Approach): Solve the problem iteratively by filling up a
table starting from the smallest subproblems.
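
As a small illustration, here is a memoized (top-down) Fibonacci sketch in Python; the tabulated (bottom-up) version appears in the examples below:

def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:                 # base cases F(0) = 0, F(1) = 1
        return n
    if n not in memo:          # compute each subproblem only once
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(10))  # 55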

Steps to Solve a Problem Using DP

1. Define the Subproblem: Break the problem into smaller subproblems that solve part of
the larger problem.
2. Recursive Relation: Derive a formula that relates the solution of a subproblem to larger
subproblems.
3. Base Cases: Define the simplest cases that can be solved directly.
4. Memoization or Tabulation: Implement the solution using a top-down or bottom-up
approach.

Dynamic Programming Example Problems

1. Fibonacci Sequence

- Problem: Compute the n-th Fibonacci number.
- Recursive Relation: F(n) = F(n−1) + F(n−2).
- Base Cases: F(0) = 0, F(1) = 1.
- Tabulation Code:

def fibonacci(n):
    # Bottom-up (tabulation): fill dp[0..n] from the base cases upward.
    if n == 0:
        return 0
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

2. Longest Common Subsequence (LCS)

- Problem: Find the length of the longest subsequence common to two strings.
- Tabulation Code:

def lcs(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

3. 0/1 Knapsack Problem

- Problem: Given weights and values of n items, find the maximum value that can be obtained by selecting items without exceeding the weight limit W.
- Recursive Relation (where Wn and Vn are the weight and value of the n-th item):
  K(n, W) = 0                                          if n = 0 or W = 0
  K(n, W) = K(n−1, W)                                  if Wn > W
  K(n, W) = max( K(n−1, W), Vn + K(n−1, W − Wn) )      otherwise
- Base Case: K(0, W) = 0.
- Tabulation Code:

def knapsack(weights, values, W):
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
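
For example (an illustrative instance, not from the text), knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7) returns 9, obtained by taking the items of weight 3 and 4.

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9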

Advantages of Dynamic Programming

1. Avoids Redundant Work: By storing results of subproblems, it avoids repeated


computations.
2. Improves Time Complexity: Converts exponential-time problems into polynomial-time
problems.
3. Guarantees Optimal Solutions: Always produces the best result for optimization
problems.
Disadvantages

1. Space Complexity: Requires additional memory for storing subproblem results.


2. Not Always Applicable: Only works for problems with overlapping subproblems and
optimal substructure.

Applications

1. Optimization Problems: 0/1 Knapsack, Minimum Cost Path.


2. String Problems: Longest Common Subsequence, Longest Palindromic Substring.
3. Graph Algorithms: Shortest Paths (Bellman-Ford, Floyd-Warshall).
4. Game Theory: Optimal game strategies.

Comparison with Divide-and-Conquer

Feature                 | Dynamic Programming             | Divide-and-Conquer
------------------------|---------------------------------|----------------------------------------
Approach                | Solves overlapping subproblems. | Solves independent subproblems.
Optimal Substructure    | Required                        | Required
Overlapping Subproblems | Yes                             | No
Examples                | Knapsack, LCS, Fibonacci        | Merge Sort, Quick Sort, Binary Search

Conclusion

Dynamic Programming is a powerful algorithmic tool for solving problems with overlapping
subproblems and optimal substructure. By avoiding redundant calculations and using a
systematic approach, it ensures efficient solutions to a wide range of computational problems.
