Algorithm Materials

The document discusses various algorithms including linear search, binary search, heap sort, and Rabin-Karp string searching. Linear search scans an unsorted list sequentially to find a target value. Binary search divides a sorted list in half at each step to more quickly locate a target value. Heap sort uses a binary heap data structure to sort an array in O(n log n) time. Rabin-Karp string searching uses hashing to efficiently find patterns within text.

Algorithm – CS3401

Unit – 1
1.Asymptotic Notations and their Properties:

- Definition: Asymptotic notations, like Big O, Theta, and Omega, describe the growth rate of an algorithm's time or space complexity.

- Example: For an algorithm with time complexity O(n^2), the running time grows quadratically with the size of the input.
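
For reference, a brief sketch of the standard formal definitions of the three notations:

```
f(n) = O(g(n))      iff  there exist c, n0 > 0 such that 0 <= f(n) <= c*g(n) for all n >= n0
f(n) = Omega(g(n))  iff  there exist c, n0 > 0 such that 0 <= c*g(n) <= f(n) for all n >= n0
f(n) = Theta(g(n))  iff  f(n) = O(g(n)) and f(n) = Omega(g(n))
```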

2.Recurrence Relation - Substitution Method:

- Definition: The substitution method solves recurrence relations by making an educated guess of the form of the solution and then proving its correctness using mathematical induction.

- Example: For T(n) = 2T(n/2) + n, guess T(n) = O(n log n), and prove it using induction.
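
A sketch of the inductive step for this example: assume T(m) <= c*m*log2(m) for all m < n and substitute into the recurrence.

```
T(n) <= 2 * c*(n/2)*log2(n/2) + n
      = c*n*log2(n) - c*n + n
     <= c*n*log2(n)            for any c >= 1,
```

which completes the induction, so T(n) = O(n log n).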

3.Lower Bound (Solving Recurrence Equation):

- Definition: Lower bounds represent the best possible performance of an algorithm and are often derived by solving recurrence equations.

- Example: The lower bound for comparison-based sorting algorithms is Ω(n log n), proven through techniques like decision trees.
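
A sketch of the decision-tree argument behind this bound: a comparison sort on n elements must distinguish all n! input orderings, and a binary decision tree of height h has at most 2^h leaves, so

```
2^h >= n!   =>   h >= log2(n!) >= (n/2) * log2(n/2) = Ω(n log n).
```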

4.Linear Search:

- Definition: Linear search scans through each element in a list until the
target element is found.

The linear search algorithm, also known as sequential search, is a straightforward method for finding a target element within a list. It works by iterating through each element in the list and comparing it with the target value until a match is found or the end of the list is reached.

Here's a simple implementation of the linear search algorithm in Python:

```python
def linear_search(arr, target):
    """
    Perform linear search to find the target in the given list.

    Parameters:
    - arr: The list to search.
    - target: The value to search for.

    Returns:
    - If the target is found, return the index of the target.
    - If the target is not found, return -1.
    """
    for i in range(len(arr)):
        if arr[i] == target:
            return i  # Return the index if the target is found
    return -1  # Return -1 if the target is not in the list

# Example usage
my_list = [4, 2, 7, 1, 9, 3, 6]
target_value = 7
result = linear_search(my_list, target_value)

if result != -1:
    print(f"Target {target_value} found at index {result}")
else:
    print(f"Target {target_value} not found in the list")
```

In this example, the target value is 7, and it is found at index 2 in the list
`my_list`. The output will be:

```

Target 7 found at index 2

```

Keep in mind that the linear search algorithm has a time complexity of O(n),
where n is the length of the list. It is suitable for small lists or unsorted data.
For larger datasets or sorted data, more efficient algorithms like binary search
may be preferred.

- Example: Given an array [5, 2, 8, 1, 7] and the target 8, linear search would find the element in the third position.

5.Binary Search:

- Definition: Binary search divides a sorted array in half repeatedly, reducing the search space by half with each step.

Binary search is an efficient algorithm for finding a target element within a sorted list. The key idea is to repeatedly divide the search interval in half. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.

Here's a simple implementation of the binary search algorithm in Python:

```python
def binary_search(arr, target):
    """
    Perform binary search to find the target in the given sorted list.

    Parameters:
    - arr: The sorted list to search.
    - target: The value to search for.

    Returns:
    - If the target is found, return the index of the target.
    - If the target is not found, return -1.
    """
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # Calculate the middle index
        if arr[mid] == target:
            return mid  # Target found at index mid
        elif arr[mid] < target:
            low = mid + 1  # Discard the left half
        else:
            high = mid - 1  # Discard the right half
    return -1  # Target not found in the list

# Example usage
sorted_list = [1, 2, 3, 4, 6, 7, 8, 9, 10]
target_value = 6
result = binary_search(sorted_list, target_value)

if result != -1:
    print(f"Target {target_value} found at index {result}")
else:
    print(f"Target {target_value} not found in the list")
```

In this example, the target value is 6, and it is found at index 4 in the sorted
list `sorted_list`. The output will be:

```

Target 6 found at index 4

```

Binary search has a time complexity of O(log n), making it more efficient
than linear search for large datasets, especially when the data is sorted.
- Example: Searching for 7 in the sorted array [1, 2, 5, 7, 8] would find
the element in the fourth position using binary search.

6.Heap Sort:

- Definition: Heap sort involves building a max-heap and repeatedly extracting the maximum element to sort an array.

Heap Sort is a comparison-based sorting algorithm that uses a binary heap data
structure to build a max-heap (or min-heap), and then sorts the heap. In a max-
heap, the value of each node is greater than or equal to the values of its
children. Conversely, in a min-heap, the value of each node is less than or equal
to the values of its children.

The basic idea of the heap sort algorithm involves two main steps:

1. **Heapify the Array:** Convert the input array into a binary heap. This is
done by repeatedly applying heapify operations to build a max-heap.

2. **Extract Elements from the Heap:** Repeatedly remove the maximum element (for a max-heap) from the heap and place it at the end of the array. After each removal, heapify the remaining elements.

Here's a Python implementation of the Heap Sort algorithm:

```python
def heapify(arr, n, i):
    largest = i
    left_child = 2 * i + 1
    right_child = 2 * i + 2

    # Check if left child exists and is greater than the root
    if left_child < n and arr[left_child] > arr[largest]:
        largest = left_child

    # Check if right child exists and is greater than the largest so far
    if right_child < n and arr[right_child] > arr[largest]:
        largest = right_child

    # Swap the root if needed
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)

    # Build a max heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

    # Extract elements from the heap
    for i in range(n - 1, 0, -1):
        arr[i], arr[0] = arr[0], arr[i]  # Swap the root with the last element
        heapify(arr, i, 0)  # Heapify the reduced heap

# Example usage
unsorted_array = [12, 11, 13, 5, 6, 7]
heap_sort(unsorted_array)
print("Sorted array:", unsorted_array)
```

The `heapify` function is used to maintain the heap property, and `heap_sort`
orchestrates the overall sorting process. In the example, the output will be:

```

Sorted array: [5, 6, 7, 11, 12, 13]

```

Heap Sort has a time complexity of O(n log n) and is an in-place sorting
algorithm.

- Example: Given an array [4, 10, 3, 5, 1], heap sort would arrange it in
ascending order: [1, 3, 4, 5, 10].

7.Rabin-Karp Algorithm:

- Definition: Rabin-Karp is a string-searching algorithm that uses hashing to find patterns in texts.

The Rabin-Karp algorithm uses hashing to find patterns within a text efficiently. It's particularly useful for multiple pattern searches in a single text. The algorithm has an average-case time complexity of O(n + m), where n is the length of the text, and m is the length of the pattern.

Here's a basic explanation of the Rabin-Karp algorithm:


1. **Hashing:** Calculate the hash value of the pattern and the hash values of
overlapping substrings of the text using a rolling hash function.

2. **Comparison:** Compare the hash values. If the hash values match, perform
a character-by-character comparison to ensure it's not a spurious hash collision.

3. **Rolling Hash:** As you slide the window of the pattern over the text,
update the hash value using a rolling hash function. This allows you to efficiently
compute the hash value of the new substring based on the previous substring's
hash value.

Here's a Python implementation of the Rabin-Karp algorithm:

```python
def rabin_karp_search(text, pattern):
    prime = 101  # A prime number used in hashing
    text_len = len(text)
    pattern_len = len(pattern)

    # Initial hash of the pattern
    pattern_hash = sum(ord(pattern[i]) * (prime ** i) for i in range(pattern_len))
    # Initial hash of the first substring of the text
    text_hash = sum(ord(text[i]) * (prime ** i) for i in range(pattern_len))

    for i in range(text_len - pattern_len + 1):
        if pattern_hash == text_hash:
            if text[i:i + pattern_len] == pattern:
                return i  # Pattern found at index i
        if i < text_len - pattern_len:
            # Update the rolling hash for the next substring.
            # Integer division is exact here: after removing text[i],
            # every remaining term is divisible by prime.
            text_hash = ((text_hash - ord(text[i])) // prime
                         + ord(text[i + pattern_len]) * (prime ** (pattern_len - 1)))
    return -1  # Pattern not found in the text

# Example usage
text = "ABABCABABCDABCABC"
pattern = "ABC"
result = rabin_karp_search(text, pattern)

if result != -1:
    print(f"Pattern '{pattern}' found at index {result}")
else:
    print(f"Pattern '{pattern}' not found in the text")
```

In this example, the pattern "ABC" occurs in the text at indices 2, 7, 11, and 14; the function returns the first match. The output will be:

```

Pattern 'ABC' found at index 2

```
The Rabin-Karp algorithm is useful when you need to search for multiple
patterns in the same text efficiently. It's important to choose a good rolling
hash function and a prime number to avoid hash collisions.

- Example: Searching for the pattern "ABC" in the text "AABC" using
Rabin-Karp would efficiently identify the match.

8.Knuth-Morris-Pratt:

- Definition: The Knuth-Morris-Pratt algorithm efficiently searches for occurrences of a pattern in a text using a prefix function.

The Knuth-Morris-Pratt (KMP) algorithm is a string-searching algorithm that efficiently finds occurrences of a pattern within a text. It is known for its linear time complexity, making it very efficient for large texts. The key idea behind KMP is to avoid unnecessary character comparisons by utilizing information about the pattern itself.

Here's a basic explanation of the Knuth-Morris-Pratt algorithm:

1. **Preprocessing (Building the Longest Prefix Suffix Array):** Before searching, the KMP algorithm preprocesses the pattern to create an auxiliary array known as the Longest Prefix Suffix (LPS) array. The LPS array at index `i` represents the length of the longest proper prefix which is also a suffix for the substring pattern[0...i].

2. **Searching:** While searching for the pattern in the text, the KMP
algorithm uses the LPS array to determine the maximum number of characters
to skip when a mismatch occurs.

Here's a Python implementation of the Knuth-Morris-Pratt algorithm:


```python
def compute_lps_array(pattern):
    m = len(pattern)
    lps = [0] * m
    j = 0  # Length of the previous longest prefix suffix
    for i in range(1, m):
        while j > 0 and pattern[i] != pattern[j]:
            j = lps[j - 1]
        if pattern[i] == pattern[j]:
            j += 1
        lps[i] = j
    return lps

def kmp_search(text, pattern):
    n = len(text)
    m = len(pattern)
    lps = compute_lps_array(pattern)
    i = 0  # Index for text[]
    j = 0  # Index for pattern[]
    while i < n:
        if pattern[j] == text[i]:
            i += 1
            j += 1
            if j == m:
                return i - j  # Pattern found at index i - j
        else:
            if j != 0:
                j = lps[j - 1]  # Skip comparisons using the LPS array
            else:
                i += 1
    return -1  # Pattern not found in the text

# Example usage
text = "ABABDABACDABABCABAB"
pattern = "ABABCABAB"
result = kmp_search(text, pattern)

if result != -1:
    print(f"Pattern '{pattern}' found at index {result}")
else:
    print(f"Pattern '{pattern}' not found in the text")
```

In this example, the pattern "ABABCABAB" is found in the text at index 10.
The output will be:
```

Pattern 'ABABCABAB' found at index 10

```

The Knuth-Morris-Pratt algorithm is particularly useful when searching for patterns in large texts, as its time complexity is O(n + m), where n is the length of the text and m is the length of the pattern.

- Example: Searching for the pattern "ababc" in the text "ababcababc" using KMP would find two occurrences without unnecessary comparisons.

9.Matching Algorithm:

- Definition: Matching algorithms determine the similarity or equivalence between patterns or sequences.

"Matching algorithm" can refer to several different families of algorithms depending on the context. An overview of the main types:

1. **String Matching Algorithms:**

- **Brute Force:** The simplest method, where you check each position in the text for a match.

- **Knuth-Morris-Pratt (KMP):** Efficient algorithm for finding occurrences of a pattern in a text.

- **Rabin-Karp:** Uses hashing to find patterns within a text efficiently.

2. **Pattern Matching in Data Structures:**

- **Trie Matching:** Used in string matching when you have a collection of patterns.

- **Aho-Corasick Algorithm:** Efficient algorithm for multiple string matching.

3. **Graph Matching Algorithms:**

- **Graph Isomorphism:** Determines if two graphs are isomorphic.

- **Maximum Common Subgraph (MCS):** Finds the largest common subgraph between two graphs.

- **Graph Edit Distance (GED):** Measures the dissimilarity between two graphs.

4. **Image Matching Algorithms:**

- **Feature Matching:** Compares features extracted from images.

- **Template Matching:** Matches a template image with a larger image.

- **Scale-Invariant Feature Transform (SIFT):** Robust method for matching features between images.

5. **Document Matching Algorithms:**

- **Cosine Similarity:** Measures the cosine of the angle between two vectors of document term frequencies.

- **Jaccard Similarity:** Measures the similarity between two sets.

- **TF-IDF Matching:** Considers term frequency and inverse document frequency.

6. **Sequence Matching Algorithms:**

- **Longest Common Subsequence (LCS):** Finds the longest subsequence common to two sequences (a sketch follows below).

- **Smith-Waterman Algorithm:** Local sequence alignment for biological sequences.


- Example: DNA sequence matching algorithms are used to find genetic similarities between different organisms.
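
As a concrete instance of sequence matching, here is a minimal dynamic-programming sketch of the LCS length computation (the function name and test strings are illustrative):

```python
def lcs_length(a, b):
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # Extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")
```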

10.Native String:

- Definition: Native string operations refer to the built-in functions or methods provided by programming languages for manipulating strings.

"Native string" is not a standard term in computer science; in most programming languages it simply refers to the built-in string data type provided by the language. Here are a few examples:

1. **C/C++:**

- In C, strings are represented as arrays of characters (`char[]`).

- In C++, the `std::string` class provides a higher-level abstraction for strings.

2. **Java:**

- The `String` class in Java is used to represent strings.

- Java has a set of methods and features specifically designed for string manipulation.

3. **Python:**

- Strings in Python are represented using the `str` type.

- Python provides a rich set of string manipulation functions and methods.

4. **JavaScript:**

- In JavaScript, strings are represented with the `String` type.

- JavaScript has a variety of built-in string methods for string manipulation.

5. **C# (C Sharp):**

- The `string` type in C# represents strings.

- C# provides a wide range of string manipulation features.


- Example: In Python, the native string method `str.upper()` converts a string to uppercase.
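
A few of Python's built-in string methods in action:

```python
s = "hello, world"
print(s.upper())                     # HELLO, WORLD
print(s.split(", "))                 # ['hello', 'world']
print(s.replace("world", "there"))   # hello, there
print(len(s))                        # 12
```
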
Unit – 2


1. Graph Traversal (BFS & DFS)

Applications:

1. BFS Applications:

- Social Network Connectivity: BFS can be used to find the shortest path between two individuals in a social network.

- Web Crawling: BFS is employed by search engines to crawl web pages efficiently and in a systematic order.

- Puzzle Solving: BFS can be applied to solve puzzles like the Rubik's Cube by exploring possible states.

2. DFS Applications:

- Topological Sorting: DFS can be used to find a topological ordering of vertices, which is crucial in scheduling tasks with dependencies.

- Maze Generation and Solving: DFS is employed to generate mazes and find paths through them.

- Connected Components: DFS helps find connected components in an undirected graph.

Differences between BFS and DFS:

1. Order of Visiting:

- BFS: Visits nodes level by level, exploring all neighbors of a node before moving to the next level.
- DFS: Goes as deep as possible along each branch before
backtracking.

2. Data Structures Used:

- BFS: Uses a queue data structure for implementation.

- DFS: Uses a stack (or recursion) for implementation.

3. Shortest Path Guarantee:

- BFS: Guarantees the shortest path in an unweighted graph.

- DFS: Does not guarantee the shortest path.

4. Memory Usage:

- BFS: Uses more memory as it needs to store all the nodes at the
current level.

- DFS: Generally uses less memory compared to BFS.

5. Applications Example:

- BFS Example: Shortest path finding in a maze.

- DFS Example: Topological sorting in a directed acyclic graph.
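
A minimal sketch of both traversals on a small adjacency-list graph (the dictionary representation and function names are illustrative):

```python
from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])              # BFS uses a queue
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)                  # DFS uses recursion (an implicit stack)
    order = [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            order.extend(dfs(graph, neighbor, visited))
    return order

# Example usage
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C']
```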

2. Minimum Spanning Tree - Prim's Algorithm / Kruskal's Algorithm

Prim's Algorithm:

1. Application:

- Network Design: Prim's algorithm is used to find the minimum cost network connecting all nodes.
2. Steps:

- Start with an arbitrary node.

- Add the minimum cost edge connecting the current tree to an unvisited node at each step.

- Repeat until all nodes are included.

3. Time Complexity:

- O(V^2) with adjacency matrix, O((V + E) log V) with adjacency list (using a binary heap).

4. Example:

- Consider a network of cities with distances between them. Prim's algorithm can find the minimum cost network to connect all cities.
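
A compact sketch of Prim's algorithm with a priority queue, assuming the graph is given as a dict of neighbor-to-weight maps (names are illustrative):

```python
import heapq

def prim_mst(graph, start):
    # graph: {node: {neighbor: weight, ...}, ...}
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)  # Cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(edges, (w2, v, nxt))
    return mst, total

# Example usage
g = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
print(prim_mst(g, 'A'))  # ([('A', 'B', 1), ('B', 'C', 2)], 3)
```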

Kruskal's Algorithm:

1. Application:

- Circuit Design: Kruskal's algorithm is used in designing printed circuit boards.

2. Steps:

- Sort all the edges in non-decreasing order of their weight.

- Select the smallest edge, and add it to the spanning tree if it doesn't form a cycle.

- Repeat until the spanning tree has V-1 edges.


3. Time Complexity:

- O(E log E) with quicksort on the edges.

4. Example:

- Consider a set of roads with associated costs. Kruskal's algorithm can find the minimum cost set of roads connecting all locations.
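
A corresponding sketch of Kruskal's algorithm with a small union-find (disjoint-set) structure for cycle checking (the edge-list format is illustrative):

```python
def kruskal_mst(n, edges):
    # edges: list of (weight, u, v); nodes are 0..n-1
    parent = list(range(n))

    def find(x):  # Union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):  # Non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:               # Adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Example usage
edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2)]
print(kruskal_mst(3, edges))  # ([(0, 1, 1), (1, 2, 2)], 3)
```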

Differences between Prim's and Kruskal's:

1. Greedy Approach:

- Prim's: Operates in a greedy manner by always choosing the smallest edge connected to the current tree.

- Kruskal's: Also follows a greedy approach by choosing the smallest available edge in each step.

2. Data Structures:

- Prim's: Often implemented using a priority queue.

- Kruskal's: Implemented using a disjoint-set data structure.

3. Complexity:

- Prim's: Can be more efficient on dense graphs.

- Kruskal's: Typically more efficient on sparse graphs.

4. Connection Strategy:
- Prim's: Grows the tree from a starting vertex.

- Kruskal's: Builds the tree by considering all edges.

5. Cycle Checking:

- Prim's: Doesn't explicitly check for cycles as it always connects to the nearest unvisited node.

- Kruskal's: Checks for cycles explicitly before adding an edge.

3. Maximum Bipartite Matching

1. Application:

- Job Assignment: Bipartite matching can be used to efficiently assign jobs to workers based on their skills and preferences.

2. Definition:

- A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that every edge connects a vertex in one set to a vertex in the other set.

3. Algorithm:

- The Hopcroft-Karp algorithm is commonly used for finding maximum bipartite matching.

4. Complexity:

- O(E * sqrt(V)) using the Hopcroft-Karp algorithm.


5. Example:

- Consider a set of jobs and workers with certain skills. The goal is
to maximize the number of jobs assigned to workers based on
compatibility.
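
Hopcroft-Karp is the efficient choice in practice; the simpler augmenting-path approach below (Kuhn's algorithm, O(V·E)) illustrates the core idea of re-matching along alternating paths (the adjacency format is illustrative):

```python
def max_bipartite_matching(graph, n_left, n_right):
    # graph[u] = list of right-side vertices adjacent to left vertex u
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in graph[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching

# Example: 3 workers, 3 jobs; jobs[u] lists the jobs worker u can do
jobs = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(jobs, 3, 3))  # 3
```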

4. Shortest Path Algorithms - Dijkstra's Algorithm / Floyd-Warshall Algorithm / Bellman-Ford Algorithm

Dijkstra's Algorithm:

1. Application:

- Routing and Networking: Dijkstra's algorithm is used to find the shortest path in computer networks.

2. Steps:

- Start with the initial node and set tentative distances to all
other nodes to infinity.

- Update distances based on the sum of the current distance and the edge weight.

- Repeat until the destination is reached.

3. Time Complexity:

- O((V + E) log V) with a priority queue.

4. Example:

- In a map with cities and roads, Dijkstra's algorithm can find the
shortest path between two cities.
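
A minimal sketch of Dijkstra's algorithm with a binary-heap priority queue (the graph format is illustrative):

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: weight, ...}, ...}
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    pq = [(0, source)]  # Priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # Stale entry
        for v, w in graph[u].items():
            if d + w < dist[v]:  # Relax the edge
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Example usage
g = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2}, 'C': {}}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3}
```
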
Floyd-Warshall Algorithm:

1. Application:

- All Pairs Shortest Path: Floyd-Warshall is used to find the shortest paths between all pairs of vertices.

2. Steps:

- Initialize a matrix with the direct distances between vertices.

- Update the matrix by considering all vertices as intermediate points.

- Repeat until all vertices are considered.

3. Time Complexity:

- O(V^3).

4. Example:

- In a transportation network with various routes, Floyd-Warshall can find the shortest paths between all locations.
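
A sketch of the triple loop at the heart of Floyd-Warshall (the matrix format is illustrative; `INF` marks a missing edge):

```python
def floyd_warshall(dist):
    # dist: n x n matrix of direct distances (float('inf') if no edge)
    n = len(dist)
    for k in range(n):            # Allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example usage
INF = float('inf')
d = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(d))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```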

Bellman-Ford Algorithm:

1. Application:

- Distributed Systems: Bellman-Ford is used in distance vector routing algorithms in computer networks.

2. Steps:

- Initialize distances to all vertices as infinity.


- Relax edges repeatedly, updating the distance estimates.

- Detect negative cycles.

3. Time Complexity:

- O(V * E) in the worst case.

4. Example:

- In a network with different routes and varying costs, Bellman-Ford can find the shortest path between two locations.
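
A minimal sketch of Bellman-Ford over an edge list, including the extra pass that detects negative cycles (names are illustrative):

```python
def bellman_ford(n, edges, source):
    # edges: list of (u, v, weight); vertices are 0..n-1
    dist = [float('inf')] * n
    dist[source] = 0
    for _ in range(n - 1):            # Relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:             # One more pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return dist

# Example usage
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -2)]
print(bellman_ford(3, edges, 0))  # [0, 4, 2]
```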

5. Ford-Fulkerson Method and Flow Network

1. Application:

- Network Flow: Ford-Fulkerson is used to find the maximum flow in a network.

2. Algorithm:

- Start with an initial feasible flow.

- Augment the flow by finding an augmenting path repeatedly until no more augmenting paths exist.

3. Termination:

- The method terminates when no augmenting path remains; with integer capacities this happens after finitely many augmentations, and by the max-flow min-cut theorem the resulting flow is maximum.
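
A minimal sketch of the method using BFS to find augmenting paths (the Edmonds-Karp variant; the capacity-matrix format is illustrative, and the matrix is mutated to hold residual capacities):

```python
from collections import deque

def max_flow(capacity, s, t):
    # capacity: n x n matrix; capacity[u][v] = capacity of edge u->v
    n = len(capacity)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow  # No augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path
        bottleneck, v = float('inf'), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v])
            v = u
        # Update residual capacities along the path
        v = t
        while v != s:
            u = parent[v]
            capacity[u][v] -= bottleneck
            capacity[v][u] += bottleneck
            v = u
        flow += bottleneck

# Example usage
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 4
```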

Unit – 3
1. Greedy Algorithms:
Optimal Merge Pattern:

- Definition: The optimal merge pattern is a problem of merging multiple sorted sequences into a single sequence with the minimum number of comparisons.

- Greedy Choice: Always merge the two smallest sequences.

- Example Algorithm:

```python
def optimal_merge_pattern(sizes):
    total_cost = 0
    while len(sizes) > 1:
        sizes.sort()
        merged_size = sizes[0] + sizes[1]  # Merge the two smallest sequences
        total_cost += merged_size
        sizes = [merged_size] + sizes[2:]
    return total_cost
```

- Example: Given sizes `[10, 5, 20, 10]`, the optimal merge pattern would be to merge 5 and 10 (cost 15), then merge 15 and 10 (cost 25), and finally merge 25 and 20 (cost 45), for a total cost of 85.

Huffman Trees and Algorithm:

- Definition: Huffman coding is a method of encoding characters based on their frequency, using variable-length codes.

- Greedy Choice: Build a tree by always merging the two lowest frequency nodes.
- Example Algorithm:

```python
from heapq import heappush, heappop, heapify

def build_huffman_tree(freq):
    heap = [[weight, [symbol, ""]] for symbol, weight in freq.items()]
    heapify(heap)
    while len(heap) > 1:
        lo = heappop(heap)   # Lowest-frequency node
        hi = heappop(heap)   # Second-lowest-frequency node
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]
        heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return heap[0]
```

- Example: Given frequencies `{'a': 5, 'b': 9, 'c': 12, 'd': 13, 'e': 16, 'f': 45}`, the Huffman tree would be constructed, and the codes for each symbol would be generated.

Activity Selection Problem:


- Definition: Given a set of activities with start and finish
times, the activity selection problem is to select the maximum
number of non-overlapping activities.

- Greedy Choice: Always choose the next activity that finishes first.

- Example Algorithm:

```python
def activity_selection(start, finish):
    n = len(finish)
    activities = list(range(n))
    activities.sort(key=lambda x: finish[x])  # Sort by finish time
    selected_activities = [activities[0]]
    for i in range(1, n):
        if start[activities[i]] >= finish[selected_activities[-1]]:
            selected_activities.append(activities[i])
    return selected_activities
```

- Example: Given start times `[1, 3, 0, 5, 8, 5]` and finish times `[2, 4, 6, 7, 9, 9]`, the optimal set of non-overlapping activities would be selected.

Elements of Greedy Strategy:

- Greedy-Choice Property: A global optimum can be arrived at by selecting a local optimum.

- Optimal Substructure: An optimal solution to the problem contains optimal solutions to its subproblems.

- Choice of Subproblem: A greedy algorithm makes the choice that looks best at the moment.

- Example: In the fractional Knapsack problem, a greedy strategy of selecting items by maximum value-to-weight ratio demonstrates these elements.

2. Dynamic Programming:

Optimal Binary Search Trees:

- Definition: Optimal Binary Search Trees are binary search trees that minimize the weighted sum of search costs.

- Optimal Substructure: An optimal BST can be formed from optimal subtrees.

- Example Algorithm:

```python
def optimal_bst(keys, freq):
    n = len(keys)
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = freq[i]
    for cl in range(2, n + 1):          # cl = chain length
        for i in range(n - cl + 1):
            j = i + cl - 1
            dp[i][j] = float('inf')
            for r in range(i, j + 1):   # Try each key as the root
                c = dp[i][r - 1] if r > i else 0
                c += dp[r + 1][j] if r < j else 0
                c += sum(freq[i:j + 1])
                if c < dp[i][j]:
                    dp[i][j] = c
    return dp[0][n - 1]
```

- Example: Given keys `[10, 12, 20]` and frequencies `[34, 8, 50]`, the optimal binary search tree would be constructed.

Matrix-Chain Multiplication:

- Definition: Given a sequence of matrices, matrix-chain multiplication is the problem of finding the most efficient way to multiply these matrices.

- Optimal Substructure: An optimal parenthesization of the product corresponds to optimal parenthesizations of its subproblems.

- Example Algorithm:

```python
def matrix_chain_order(p):
    n = len(p) - 1  # Number of matrices; matrix i has dimensions p[i] x p[i+1]
    m = [[0] * n for _ in range(n)]
    for l in range(2, n + 1):           # l = chain length
        for i in range(n - l + 1):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):       # Try each split point
                q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if q < m[i][j]:
                    m[i][j] = q
    return m[0][n - 1]
```

- Example: Given dimensions of matrices `[10, 20, 30]`, the optimal way to parenthesize the product would be determined.

Multistage Graph:

- Definition: A multistage graph is a graph that is partitioned into several stages, and each stage contains a set of nodes.

- Optimal Substructure: Optimal solutions to subproblems contribute to the optimal solution of the whole problem.

- Example Algorithm:

```python
def min_cost_multistage_graph(cost, stages):
    # cost[j][k] = edge cost from node j to node k
    # stages: list of lists of node indices; stages[-1] holds the sink
    n = len(cost)
    dist = [float('inf')] * n
    dist[n - 1] = 0  # Cost from the sink to itself
    # Work backwards from the second-to-last stage; each node's best cost
    # goes through some node in the next stage
    for i in range(len(stages) - 2, -1, -1):
        for j in stages[i]:
            dist[j] = min(cost[j][k] + dist[k] for k in stages[i + 1])
    return dist[0]
```

- Example: Given a multistage graph represented by a cost matrix and stages, the minimum cost from the start to the end would be calculated.

Elements of Dynamic Programming:

- Optimal Substructure: An optimal solution contains optimal solutions to its subproblems.

- Overlapping Subproblems: The same subproblems recur many times, so their solutions are computed once and reused.

Unit – 4
1. Backtracking Problems:

a. N-Queen Problem:

1. Problem Statement:

- Place N queens on an N×N chessboard in such a way that no two queens attack each other.

2. Algorithm Steps:
- Start in the leftmost column.

- If all queens are placed, return true.

- Try all rows in the current column.

- For each row, check if the queen can be placed without conflicting with existing queens.

- If a safe spot is found, place the queen and recursively try to place the rest of the queens.

3. Code Example (Python):

```python
def is_safe(board, row, col, n):
    # Check if there is a queen in the same row (to the left)
    for j in range(col):
        if board[row][j] == 1:
            return False
    # Check upper diagonal on left side
    for i, j in zip(range(row - 1, -1, -1), range(col - 1, -1, -1)):
        if board[i][j] == 1:
            return False
    # Check lower diagonal on left side
    for i, j in zip(range(row + 1, n), range(col - 1, -1, -1)):
        if board[i][j] == 1:
            return False
    # If no conflicts, it's safe to place the queen
    return True

def solve_n_queens_util(board, col, n):
    # Base case: all queens are placed
    if col >= n:
        return True
    # Check all rows in the current column
    for row in range(n):
        if is_safe(board, row, col, n):
            board[row][col] = 1  # Place queen if it's safe
            # Recur to place the rest of the queens
            if solve_n_queens_util(board, col + 1, n):
                return True
            board[row][col] = 0  # Backtrack
    return False

def solve_n_queens(n):
    # Initialize the board and start from the leftmost column
    board = [[0] * n for _ in range(n)]
    if solve_n_queens_util(board, 0, n):
        return board
    return None
```

b. Hamiltonian Cycle Problem:

1. Problem Statement:

- Find a Hamiltonian cycle in a given graph, if it exists.

2. Algorithm Steps:

- Start at any vertex.

- Try to extend the path to other vertices, ensuring that each vertex is visited exactly once.

- Backtrack if a dead end is reached and undo the last move.

- Repeat until a Hamiltonian cycle is found or all possibilities are exhausted.

3. Code Example (Python):

```python
def is_valid_move(v, pos, path, graph):
    # There must be an edge between the last vertex in the path and v
    if graph[path[pos - 1]][v] == 0:
        return False
    # v must not already be in the path
    if v in path:
        return False
    return True

def hamiltonian_cycle_util(graph, path, pos):
    n = len(graph)
    # Base case: all vertices are included in the path
    if pos == n:
        # There must be an edge between the last vertex and the starting vertex
        return graph[path[pos - 1]][path[0]] == 1
    # Try different vertices as the next candidate in the Hamiltonian cycle
    for v in range(1, n):
        if is_valid_move(v, pos, path, graph):
            path[pos] = v  # If the vertex is a valid move, add it to the path
            # Recur to check if the path can be extended
            if hamiltonian_cycle_util(graph, path, pos + 1):
                return True
            path[pos] = -1  # Backtrack
    return False

def hamiltonian_cycle(graph):
    # Initialize the path and start from the first vertex
    n = len(graph)
    path = [-1] * n
    path[0] = 0
    if hamiltonian_cycle_util(graph, path, 1):
        return path
    return None
```

c. Graph Coloring Problem:

1. Problem Statement:

- Color the vertices of a graph in such a way that no two adjacent vertices have the same color.

2. Algorithm Steps:

- Start with the first vertex and assign a color.


- Move to the next vertex and assign the lowest numbered
color that has not been used by its neighbors.

- Backtrack if it's not possible to assign a color, and try a different color.

- Repeat until all vertices are colored.

3. Code Example (Python):

```python
def is_safe(v, c, graph, color):
    # It's safe to color vertex v with color c only if
    # no adjacent vertex already has the same color
    for u in range(len(graph)):
        if graph[v][u] == 1 and color[u] == c:
            return False
    return True

def graph_coloring_util(graph, m, color, v):
    # Base case: all vertices are colored
    if v == len(graph):
        return True
    # Try different colors for the current vertex
    for c in range(1, m + 1):
        if is_safe(v, c, graph, color):
            color[v] = c  # If it's safe to color, mark it with the color
            # Recur to color the rest of the vertices
            if graph_coloring_util(graph, m, color, v + 1):
                return True
            color[v] = 0  # Backtrack
    return False

def graph_coloring(graph, m):
    # Initialize an array to store colors; start from the first vertex
    color = [0] * len(graph)
    if graph_coloring_util(graph, m, color, 0):
        return color
    return None
```
2. Optimization Problems:

a. Travelling Salesman Problem:

1. Problem Statement:

- Find the shortest possible route that visits each city exactly once and returns to the original city.

2. Algorithm Steps:

- Start from any city as the initial city.

- Choose the nearest unvisited city as the next city to visit.

- Update the current city to the chosen city and mark it as visited.

- Repeat until all cities are visited.

- Return to the initial city to complete the cycle.

3. Code Example (Python):

```python
def tsp(graph, start):
    # Nearest-neighbour heuristic following the steps above;
    # it approximates rather than solves TSP exactly
    n = len(graph)
    visited = [False] * n   # Track visited cities
    visited[start] = True   # Mark the starting city as visited
    current = start
    total_distance = 0
    # Repeat until all cities are visited
    for _ in range(n - 1):
        # Find the nearest unvisited city
        nearest = min((c for c in range(n) if not visited[c]),
                      key=lambda c: graph[current][c])
        total_distance += graph[current][nearest]
        visited[nearest] = True  # Update the current city and mark it visited
        current = nearest
    # Return to the starting city to complete the cycle
    total_distance += graph[current][start]
    return total_distance
```

b. Knapsack Problem:

1. Problem Statement:

- Given a set of items, each with a weight and a value, determine the maximum value that can be obtained by selecting a subset of items with a total weight not exceeding a given capacity.

2. Algorithm Steps:

- Create a table to store solutions for subproblems.

- Iterate through each item and calculate the maximum value that can be obtained considering the current item.

- Fill the table with optimal values for each subproblem.

- Trace back the selected items to obtain the final subset.

3. Code Example (Python):

```python
def knapsack(values, weights, capacity):
    n = len(values)
    # Table of solutions for subproblems:
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    # Iterate through each item and calculate the maximum value
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # Option 1: skip item i
            if weights[i - 1] <= w:  # Option 2: take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Trace back the selected items to obtain the final subset
    selected, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            selected.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], selected
```

3. Assignment Problem:

1. Problem Statement:

- Assign a set of tasks to a set of agents in such a way that the total cost is minimized.

2. Algorithm Steps:

- Create a cost matrix representing the cost of assigning each task to each agent.

- Use the Hungarian algorithm to find the optimal assignment.

- Reduce the matrix and find the minimum number of lines to cover all zeros.

- Subtract the smallest uncovered value from all uncovered values and add it to the intersection of covered rows and columns.

- Repeat until an optimal assignment is found.

3. Code Example (Python):

```python
# Outline of the Hungarian algorithm:
# Step 1: Subtract the smallest element in each row from all elements in that row
# Step 2: Subtract the smallest element in each column from all elements in that column
# Step 3: Cover all zeros with the minimum number of lines
# Step 4: Test for optimality and update the matrix if needed
# Step 5: Repeat until an optimal assignment is found
#
# A full from-scratch implementation is lengthy; in practice, SciPy's
# linear_sum_assignment solves the same optimal assignment problem directly:
from scipy.optimize import linear_sum_assignment

def hungarian_algorithm(cost_matrix):
    row_ind, col_ind = linear_sum_assignment(cost_matrix)
    total_cost = sum(cost_matrix[r][c] for r, c in zip(row_ind, col_ind))
    return list(zip(row_ind, col_ind)), total_cost
```

Unit – 5
1. Approximation Algorithm for TSP:

a. Overview of Approximation Algorithms:

- Approximation algorithms provide near-optimal solutions for NP-hard problems in a reasonable amount of time.

- They trade optimality for efficiency, making them suitable for large-scale problems like the Traveling Salesman Problem (TSP).

b. Christofides Algorithm:
- Christofides algorithm guarantees a solution within 3/2 times the
optimal for metric TSP instances.

- Steps involve finding a minimum spanning tree, matching odd-degree vertices, and solving the resulting Eulerian circuit.

c. Performance Characteristics:

- Christofides achieves a performance ratio of 3/2, making it a 1.5-approximation algorithm.

- The algorithm's time complexity is polynomial, providing efficiency for large instances.

d. Example:

- Consider a TSP instance with distances between cities: A-B (2), A-C (3), B-C (4). The optimal tour A-B-C-A has total distance 9.

- Christofides guarantees a tour of length at most 1.5 × 9 = 13.5; on an instance this small it would in fact return an optimal tour.

e. Limitations:

- While efficient, Christofides doesn't always guarantee the best possible approximation for all instances.

2. Randomized Algorithms:

i. Finding kth Smallest Number:

- Randomized QuickSelect is a common algorithm for this.


- Randomly choose a pivot, partition elements, and recursively
select a subarray to find the kth smallest element.
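
A minimal sketch of randomized QuickSelect (k is 1-indexed; names are illustrative):

```python
import random

def quickselect(arr, k):
    # Returns the kth smallest element (k is 1-indexed)
    pivot = random.choice(arr)  # Randomly chosen pivot
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    if k <= len(smaller):
        return quickselect(smaller, k)
    if k <= len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

print(quickselect([7, 2, 9, 4, 1], 3))  # 4
```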

ii. Randomized QuickSort:

- Randomized algorithm for sorting by partitioning the array using a randomly chosen pivot.

- Provides average-case time complexity of O(n log n).
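
A minimal (non-in-place) sketch of randomized quicksort:

```python
import random

def randomized_quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # Random pivot avoids adversarial worst cases
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)

print(randomized_quicksort([3, 6, 1, 8, 2]))  # [1, 2, 3, 6, 8]
```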

iii. Primality Testing:

- Solovay-Strassen and Miller-Rabin are randomized algorithms for primality testing.

- They use random numbers to efficiently determine if a number is likely prime.
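
A compact sketch of the Miller-Rabin test (the round count is a tunable assumption; more rounds lower the error probability):

```python
import random

def is_probably_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # Probably prime

print(is_probably_prime(97))   # True
print(is_probably_prime(100))  # False
```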

iv. Concept and Application:

- Randomized algorithms use randomness to achieve different outcomes for the same input, providing a probabilistic guarantee.

- Applications include quick selection, sorting, and primality testing, where randomness improves average-case performance.

v. Example:

- In randomized quicksort, if the pivot selection is truly random, the algorithm is less susceptible to worst-case scenarios.

3. Bin Packing Method / Next-Fit Decreasing Height Algorithm:


a. Bin Packing Overview:

- Bin packing is an optimization problem where items of different sizes must be packed into a finite number of bins.

b. Next-Fit Decreasing Height (NFD) Algorithm:

- Variant of the Next-Fit algorithm where items are sorted in decreasing order before packing.

- Each item is placed in the current bin if it fits; otherwise a new bin is opened.
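
A minimal sketch of Next-Fit with an optional decreasing presort (capacity and items are illustrative):

```python
def next_fit(items, capacity, decreasing=False):
    if decreasing:
        items = sorted(items, reverse=True)  # Next-Fit Decreasing variant
    bins = []          # Each bin is a list of item sizes
    remaining = 0      # Space left in the current (last) bin
    for item in items:
        if item <= remaining:
            bins[-1].append(item)   # Fits in the current bin
            remaining -= item
        else:
            bins.append([item])     # Open a new bin
            remaining = capacity - item
    return bins

# Example usage (capacity 10)
print(next_fit([4, 9, 5, 1], 10))                   # [[4], [9], [5, 1]]
print(next_fit([4, 9, 5, 1], 10, decreasing=True))  # [[9], [5, 4, 1]]
```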

c. Characteristics:

- NFD is a simple heuristic algorithm, easy to implement with a greedy strategy.

- It does not guarantee an optimal solution but is efficient for certain instances.

d. Example:

- Consider bins with a capacity of 10 and items 4, 9, 5, 1. Plain Next-Fit uses 3 bins: {4}, {9}, {5, 1}. Sorting in decreasing order first (9, 5, 4, 1) yields 2 bins: {9}, {5, 4, 1}.

e. Limitations:

- Greedy algorithms like NFD may not always produce the optimal solution, especially for dynamic or changing input scenarios.
