Design and Analysis of Algorithms

The document discusses various concepts in algorithm analysis, including definitions, time complexity, and comparisons of different sorting algorithms. It explains the importance of analyzing algorithms for efficiency and scalability, and provides examples of recurrence relations, growth rates, and specific algorithms like Binary Search, Quick Sort, and Heap Sort. Additionally, it covers data structures such as Red-Black Trees and Disjoint Sets, along with their properties and applications.

1. Define and explain the concept of algorithm analysis. Why is analyzing an algorithm's efficiency important?
Algorithm analysis is the process of evaluating the efficiency of an algorithm in terms of time and space complexity. The goal is to estimate how the runtime and resource usage of an algorithm grow with input size.
Efficiency analysis helps in selecting the best algorithm among alternatives, especially for large datasets. It ensures better performance, resource utilization, and scalability. There are three cases commonly analyzed:
Best Case (optimistic scenario)
Worst Case (pessimistic scenario)
Average Case (expected behavior over all inputs)
Analyzing algorithms is important because it guides developers and computer scientists in designing efficient software solutions and helps in understanding the computational limitations of a given problem.

2. Differentiate Big-O, Big-Ω, and Big-Θ Notations.
Big-O Notation (O)
- Represents the upper bound of an algorithm's running time.
- Usually used to describe the worst-case complexity.
- Example: O(n^2) for Bubble Sort.
Big-Ω Notation (Ω)
- Represents the lower bound of an algorithm's running time.
- Usually used to describe the best-case complexity.
- Example: Ω(1) for Linear Search (the item may be found at the first position).
Big-Θ Notation (Θ)
- Represents the tight bound of an algorithm's running time.
- Captures both upper and lower bounds.
- Example: Θ(n log n) for Merge Sort in all cases.

3. Compare the growth rates of the functions log n, n, n log n, n^2, 2^n, and n!.
Growth Rates of Functions
The growth rate determines how the execution time or memory requirement increases with input size. From lowest to highest growth:
1. log n: Very slow growth; ideal.
2. n: Linear growth.
3. n log n: Slightly faster than linear; Merge Sort.
4. n^2: Quadratic; Bubble Sort.
5. 2^n: Exponential; brute-force subset problems.
6. n!: Factorial; Travelling Salesman brute force.
Key Takeaways
- Algorithms with lower growth rates are preferred for large data sets due to their scalability.
- Understanding growth rates helps in selecting the most efficient algorithm for a problem.
- Exponential and factorial growth rates are typically impractical for large inputs.

4. Explain and solve a recurrence relation using the Master Theorem.
The Master Theorem is a powerful tool used to determine the time complexity of divide-and-conquer algorithms. It solves recurrence relations of the form T(n) = aT(n/b) + f(n).
Given recurrence relation: T(n) = 2T(n/2) + n
Applying the Master Theorem:
- a = 2 (number of subproblems)
- b = 2 (size of each subproblem)
- f(n) = n (work done outside recursion)
Comparison:
f(n) = n
n^(log_b a) = n^(log_2 2) = n
Since f(n) = Θ(n^(log_b a)), this falls under Case 2 of the Master Theorem.
Solution: T(n) = Θ(n log n)
Therefore, the time complexity is Θ(n log n).

5. State and prove the recurrence T(n) = T(n-1) + n using the substitution method.
Given recurrence relation: T(n) = T(n-1) + n
Guess and Proof
We guess that T(n) = O(n^2).
Base Case
T(1) = 1 (assumed), and 1 <= c * 1^2 for any c >= 1.
Inductive Hypothesis
Assume T(k) <= c * k^2 for all k < n.
Inductive Step
T(n) = T(n-1) + n
     <= c(n-1)^2 + n
     = c(n^2 - 2n + 1) + n
     = cn^2 - 2cn + c + n
     = cn^2 - (2c - 1)n + c
Choosing c
For c >= 1 and n >= 1 we have (2c - 1)n >= c, so cn^2 - (2c - 1)n + c <= cn^2.
Conclusion: T(n) = O(n^2)

6. Use the iteration method to solve T(n) = 3T(n/2) + n.
Using iteration (repeated substitution):
T(n) = 3T(n/2) + n
     = 3[3T(n/4) + n/2] + n = 3^2 T(n/4) + (3/2)n + n
     = 3^3 T(n/8) + (3/2)^2 n + (3/2)n + n
Generalizing:
T(n) = 3^k T(n/2^k) + n * sum_{i=0}^{k-1} (3/2)^i
Stop when n/2^k = 1, i.e., k = log_2 n:
T(n) = 3^(log_2 n) T(1) + n * sum_{i=0}^{log_2 n - 1} (3/2)^i
Since 3^(log_2 n) = n^(log_2 3) and the geometric series sums to Θ((3/2)^(log_2 n)) = Θ(n^(log_2 3 - 1)), both terms are of order n^(log_2 3):
T(n) = Θ(n^(log_2 3)) ≈ Θ(n^1.585)
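As a quick numerical check of this result, the sketch below (ours, not part of the original derivation; it assumes n is a power of two and T(1) = 1) evaluates the recurrence directly and compares it against n^(log_2 3):

import math

def T(n):
    # Direct evaluation of the recurrence T(n) = 3*T(n/2) + n with T(1) = 1.
    if n <= 1:
        return 1
    return 3 * T(n // 2) + n

for n in [2 ** k for k in range(1, 11)]:
    # The ratio should settle toward a constant, confirming T(n) = Theta(n^(log2 3)).
    print(n, T(n), round(T(n) / n ** math.log2(3), 3))

The ratio approaches 3, consistent with the exact closed form T(n) = 3n^(log_2 3) - 2n.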
7. Write the time complexity analysis of Binary Search.
Binary Search works by dividing the sorted array in half repeatedly.
Input size = n
At each step: reduce the size to n/2
Recurrence: T(n) = T(n/2) + 1
Solving: T(n) = O(log n)
Best Case: Element found in the middle: Ω(1)
Worst Case: A logarithmic number of steps is needed: O(log n)
Thus, time complexity: Θ(log n) in the worst case.

8. Analyze and find the time complexity of Merge Sort using its recurrence relation.
Merge Sort splits the array into two halves and merges them.
The recurrence relation for Merge Sort is:
T(n) = 2T(n/2) + n
Applying the Master Theorem:
- a = 2 (number of sub-problems)
- b = 2 (size of sub-problems)
- f(n) = n (work done outside recursive calls)
Comparison with Master Theorem Cases:
f(n) = Θ(n^(log_2 2)) = Θ(n)
This corresponds to Case 2 of the Master Theorem.
Solution: T(n) = Θ(n log n)
The time complexity of Merge Sort is Θ(n log n).

9. Classify the following algorithms into efficiency classes.
Algorithms can be classified into efficiency classes based on their time complexity, which describes how the running time grows relative to the input size n. The most common efficiency classes use asymptotic notations like Big-O notation to represent upper bounds on the growth of running time. Some typical efficiency classes are:
Constant Time O(1): The running time does not change with input size. Example: accessing an array element by index.
Logarithmic Time O(log n): Time grows proportionally to the logarithm of the input size. Example: Binary Search.
Linear Time O(n): Time grows proportionally to the input size. Example: Linear Search.
Linearithmic Time O(n log n): Time grows proportionally to n times the logarithm of n. Example: efficient sorting algorithms like Merge Sort and Quick Sort (average case).
Quadratic Time O(n^2): Time grows proportionally to the square of the input size. Example: simple sorting algorithms like Bubble Sort, Selection Sort, and Insertion Sort.
Cubic Time O(n^3): Time grows proportionally to the cube of the input size. Example: some matrix multiplication algorithms.
Exponential Time O(2^n): Time doubles with every additional input element. Common in brute-force solutions for NP-hard problems like the Travelling Salesman Problem (naive approach).
Factorial Time O(n!): Time grows factorially; extremely inefficient, typical of exhaustive permutation algorithms.

10. Mathematically analyze a non-recursive algorithm: Max Element in Array
Answer:
Algorithm: Loop through the array and track the maximum.

int max = A[0];                 // assumes A has n >= 1 elements
for (int i = 1; i < n; i++) {
    if (A[i] > max)             // basic operation: one comparison per iteration
        max = A[i];             // assignment happens only on a new maximum
}

Basic operations: comparison and assignment.
Total iterations: n - 1
Each iteration: 1 comparison, possibly 1 assignment.
Time complexity: Θ(n)

1. Explain and compare Selection Sort, Bubble Sort, and Insertion Sort with time complexity analysis.
Selection Sort
- Algorithm: Repeatedly selects the smallest (or largest) element from the unsorted portion of the array and swaps it with the first unsorted element.
- Time Complexity:
  - Best Case: O(n^2)
  - Average Case: O(n^2)
  - Worst Case: O(n^2)
- Advantages: Simple to implement; minimizes the number of swaps.
Bubble Sort
- Algorithm: Repeatedly iterates through the array, comparing adjacent elements and swapping them if they are in the wrong order.
- Time Complexity:
  - Best Case: O(n) (with an early exit when a pass makes no swaps)
  - Average Case: O(n^2)
  - Worst Case: O(n^2)
- Advantages: Simple to implement; stable sorting algorithm.
Insertion Sort
- Algorithm: Builds the sorted array one element at a time by inserting each element into its proper position.
- Time Complexity:
  - Best Case: O(n)
  - Average Case: O(n^2)
  - Worst Case: O(n^2)
- Advantages: Simple to implement; stable; efficient for small datasets.
Conclusion
- Selection Sort: Suitable for small datasets; minimizes swaps.
- Bubble Sort: Simple to implement and stable, but inefficient for large datasets.
- Insertion Sort: Efficient for small datasets, stable, and simple to implement.
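To make the comparison concrete, here is a minimal Insertion Sort in Python (a sketch of ours; the function name and test array are illustrative):

def insertion_sort(arr):
    # Build the sorted prefix one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements right to open a slot for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]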
2. Write and explain the algorithm of Binary Search. What is its time complexity?
Binary Search works on a sorted array. It repeatedly divides the search interval in half.
Algorithm:
Start with low = 0, high = n - 1
While low <= high:
    mid = (low + high) / 2
    If A[mid] == key, return mid
    If key < A[mid], set high = mid - 1
    Else, set low = mid + 1
Return -1 if not found
Time Complexity: O(log n)
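A runnable Python version of this pseudocode (our sketch; integer division // gives the floor of the midpoint):

def binary_search(A, key):
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2    # midpoint of the current interval
        if A[mid] == key:
            return mid
        elif key < A[mid]:
            high = mid - 1         # discard the right half
        else:
            low = mid + 1          # discard the left half
    return -1                      # key not found

print(binary_search([2, 5, 8, 12, 16, 23], 16))   # 4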
3. Differentiate between Depth First Search (DFS) and Breadth First Search (BFS) with examples.
DFS: Explores as far as possible along a branch before backtracking. Uses a stack (recursion or an explicit stack).
BFS: Explores neighbors level by level. Uses a queue.
Example Graph: A-B-C-D
DFS(A): A -> B -> C -> D
BFS(A): A -> B -> C -> D
(On this path graph the two orders coincide; they differ on branching graphs.)
Applications: DFS for pathfinding, BFS for shortest paths in an unweighted graph.
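A small Python sketch of both traversals on the path graph above (ours; the adjacency-list representation is an assumption):

from collections import deque

graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}   # path graph A-B-C-D

def dfs(start):
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()                       # LIFO: go deep first
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))  # reversed so neighbors pop in listed order
    return order

def bfs(start):
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()                   # FIFO: level by level
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

print(dfs('A'), bfs('A'))   # both give ['A', 'B', 'C', 'D'] on this path graph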
4. Explain AVL Trees. How are rotations used to maintain balance in AVL Trees?
An AVL Tree is a self-balancing binary search tree that ensures efficient insertion, deletion, and search operations by maintaining a balanced structure: the balance factor (height(left) - height(right)) of every node is -1, 0, or +1.
If a balance factor becomes < -1 or > 1 after an insertion or deletion, rotations are performed:
LL Rotation, RR Rotation, LR Rotation, RL Rotation
This ensures O(log n) time for search, insert, and delete.
5. What are Red-Black Trees? Mention the properties that make them balanced.
A Red-Black Tree is a balanced binary search tree with each node colored red or black.
Properties:
Every node is either red or black
The root is black
A red node cannot have red children
Every path from a node to a null descendant has the same number of black nodes
These properties ensure O(log n) operations.

6. Explain Heap and Heap Sort with algorithm and time complexity.
Heap: A complete binary tree where each node is greater than (Max-Heap) or smaller than (Min-Heap) its children.
The heap is one maximally efficient implementation of an abstract data type called a priority queue; in fact, priority queues are often referred to as "heaps", regardless of how they may be implemented. In a heap, the highest (or lowest) priority element is always stored at the root.
Heap Sort:
Build a Max-Heap from the input
Swap the root with the last node
Reduce the heap size and heapify the root
Repeat
Time Complexity: O(n log n)
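A compact Python rendering of these steps (a sketch, not the only formulation; heapify sifts a node down to restore the max-heap property):

def heapify(A, n, i):
    # Sift A[i] down so the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and A[left] > A[largest]:
        largest = left
    if right < n and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        heapify(A, n, largest)

def heap_sort(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # build a max-heap bottom-up: O(n)
        heapify(A, n, i)
    for end in range(n - 1, 0, -1):       # repeatedly move the max to the end
        A[0], A[end] = A[end], A[0]
        heapify(A, end, 0)
    return A

print(heap_sort([4, 10, 3, 5, 1]))   # [1, 3, 4, 5, 10]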
7. What is a Disjoint Set? Explain the Union-Find algorithm and its applications.
A Disjoint Set is a data structure for tracking a set of elements partitioned into non-overlapping sets.
Operations:
MakeSet(x): Creates a set with one element
Find(x): Finds the representative of the set containing x
Union(x, y): Merges two sets
Optimizations: Path compression and union by rank
Applications: Kruskal's MST, Connected Components
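A minimal Union-Find sketch in Python with both optimizations (the class and method names are ours):

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # MakeSet for elements 0..n-1
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x directly at its root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                       # already in the same set
        if self.rank[rx] < self.rank[ry]:      # union by rank: attach the shorter tree
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSet(5)
ds.union(0, 1); ds.union(3, 4)
print(ds.find(1) == ds.find(0), ds.find(2) == ds.find(0))   # True False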
8. Describe the Divide and Conquer paradigm with an example.
Divide and Conquer breaks a problem into sub-problems, solves them independently, and combines the results.
Example: Merge Sort
Divide: Split the array into halves
Conquer: Recursively sort each half
Combine: Merge the two sorted halves
Benefits: Simplifies problems and reduces time complexity

9. Explain Merge Sort with algorithm and time complexity.
Merge Sort is a Divide and Conquer algorithm.
Algorithm:
If array size > 1:
    Divide the array into two halves
    Recursively apply merge sort to each half
    Merge the sorted halves
Time Complexity: O(n log n) for all cases
Space Complexity: O(n)

10. Write and explain the Quick Sort algorithm. Analyze its time complexity.
Quick Sort is a divide-and-conquer algorithm that sorts an array by selecting a 'pivot' element and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot.
Steps
1. Choose Pivot: Select a pivot element from the array.
2. Partition: Partition the array around the pivot element.
3. Recursion: Recursively apply the above steps to the sub-arrays.
Example Code (Python)

def quick_sort(arr):
    # Arrays of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]       # elements smaller than the pivot
    middle = [x for x in arr if x == pivot]    # elements equal to the pivot
    right = [x for x in arr if x > pivot]      # elements larger than the pivot
    return quick_sort(left) + middle + quick_sort(right)

Time Complexity Analysis
- Best Case: O(n log n), when the pivot divides the array into two halves of equal size.
- Average Case: O(n log n), when the pivot divides the array into two halves of roughly equal size.
- Worst Case: O(n^2), when the pivot is always the smallest or largest element, leading to highly unbalanced partitions.
Optimizations
- Randomized Pivot: Select a random pivot element to avoid worst-case scenarios.
- Median-of-Three: Select the median of three elements as the pivot to reduce the likelihood of worst-case scenarios.
Advantages
- Fast: Quick Sort is generally fast and efficient.
- In-Place: Can be implemented as an in-place sorting algorithm (the list-comprehension version above is not in-place).
Limitations
- Worst-Case Performance: Can degrade to O(n^2) in the worst case.
- Not Stable: Quick Sort is not a stable sorting algorithm.

1. Explain Prim's Algorithm for Minimum Spanning Tree with an example and time complexity.
Prim's algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) of a connected, undirected, weighted graph.
Key Features
1. Minimum Spanning Tree: Finds the subset of edges that connects all vertices with minimum total edge weight.
2. Greedy Approach: Selects the minimum-weight edge that connects a vertex in the MST to a vertex not yet in the MST.
3. Priority Queue: Often used to efficiently select the minimum-weight edge.
Steps
1. Start with an arbitrary vertex.
2. Select the minimum-weight edge that connects a vertex in the MST to a vertex not yet in the MST.
3. Repeat step 2 until all vertices are included in the MST.
Time Complexity
- With Priority Queue: O((V + E) log V), where V is the number of vertices and E is the number of edges.
- Without Priority Queue: O(V^2).
Applications
1. Network Design: Used in designing network topology to minimize cost.
2. Cluster Analysis: Applied in clustering data points.
3. Image Segmentation: Used in computer vision for segmenting images.
Advantages
1. Efficient: Suitable for dense graphs.
2. Simple: Easy to implement.
Comparison with Kruskal's Algorithm
- Prim's Algorithm: More efficient for dense graphs.
- Kruskal's Algorithm: More efficient for sparse graphs.

2. Explain Kruskal's Algorithm with the Union-Find structure. How does it build a Minimum Spanning Tree?
Kruskal's algorithm is a popular algorithm in graph theory used to find the Minimum Spanning Tree (MST) of a connected, undirected, weighted graph.
Key Features
1. Minimum Spanning Tree: Finds the subset of edges that connects all vertices with minimum total edge weight.
2. Greedy Approach: Selects edges in increasing order of weight, ensuring no cycles are formed.
3. Disjoint Set Data Structure: Used to efficiently check for cycles.
Steps
1. Sort all edges in non-decreasing order of weight.
2. Select the smallest edge that does not form a cycle: for each edge (u, v), if Find(u) != Find(v), add the edge and call Union(u, v).
3. Repeat step 2 until V-1 edges are selected, where V is the number of vertices.
Time Complexity
- With Disjoint Set: O(E log E) or O(E log V), where E is the number of edges and V is the number of vertices. Uses the Disjoint Set with path compression and union by rank.
Applications
1. Network Design: Used in designing network topology to minimize cost.
2. Cluster Analysis: Applied in clustering data points.
3. Image Segmentation: Used in computer vision for segmenting images.
Advantages
1. Efficient: Suitable for sparse graphs.
2. Simple: Easy to implement.
Limitations
1. Not Suitable for Dense Graphs: Other algorithms like Prim's might be more efficient.

3. Describe Dijkstra's Algorithm for Single Source Shortest Path. Include a working example.
Dijkstra's algorithm is a well-known algorithm in graph theory used for finding the shortest path between nodes in a weighted graph. It works by iteratively exploring the nearest unvisited node and updating the shortest distances.
Key Features
1. Shortest Path: Finds the minimum-distance path from a source node to all other nodes.
2. Weighted Graphs: Works with graphs having non-negative edge weights.
3. Greedy Approach: Selects the node with the smallest distance that hasn't been processed yet.
Time Complexity
- With Priority Queue: O((V + E) log V), where V is the number of vertices and E is the number of edges.
- Without Priority Queue: O(V^2).
Applications
1. Network Routing: Used in routing protocols to determine the shortest path.
2. GPS Navigation: Helps in finding the shortest route between locations.
3. Traffic Optimization: Used to minimize travel time in traffic networks.
Limitations
1. Non-Negative Weights: Does not work with graphs having negative edge weights.
2. Static Graphs: Does not handle dynamic changes in graph structure efficiently.
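As a working example, a minimal priority-queue implementation in Python (our sketch; the adjacency-list graph of (neighbor, weight) pairs is an illustrative assumption):

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    pq = [(0, source)]                 # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                   # stale entry, already improved
        for v, w in graph[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 6)], 'C': [('D', 3)], 'D': []}
print(dijkstra(g, 'A'))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}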
4. Compare Dijkstra's and Bellman-Ford Algorithms. When is Bellman-Ford preferred?
Dijkstra:
- For non-negative edge weights
- Faster (O(V log V + E) with an efficient priority queue)
Bellman-Ford:
- Handles negative edge weights
- Slower (O(VE))
- Can detect negative weight cycles
Bellman-Ford is preferred when negative edge weights exist.

5. Explain the Huffman Encoding Algorithm. Construct a Huffman Tree for a given frequency table.
Huffman Encoding compresses data by assigning variable-length codes based on frequency.
Build a priority queue of characters ordered by frequency.
Repeatedly merge the two lowest-frequency nodes.
Construct the tree and derive prefix-free codes.
Time Complexity: O(n log n)

6. Solve the 0/1 Knapsack Problem using both Greedy and Dynamic Programming approaches.
Greedy: Based on the value/weight ratio (not optimal for the 0/1 case).
DP Approach:
Use a 2D array dp[n+1][W+1]
Recurrence: dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w-wt[i-1]])
Time Complexity: O(nW)
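A short Python sketch of the DP table described above (ours; val, wt, and the test data are illustrative):

def knapsack(val, wt, W):
    n = len(val)
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]    # skip item i
            if wt[i - 1] <= w:         # or take it, if it fits
                dp[i][w] = max(dp[i][w], val[i - 1] + dp[i - 1][w - wt[i - 1]])
    return dp[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # 220 (items 2 and 3)

On this data the greedy ratio rule takes the first two items for a value of 160 and misses the optimum of 220, which is why DP is needed in the 0/1 case.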
7. Describe the Floyd-Warshall Algorithm for All-Pairs Shortest Paths. Include the algorithm and complexity.
Floyd-Warshall finds the shortest paths between all pairs of vertices.
Use an adjacency matrix.
A triple nested loop updates distance[i][j] = min(distance[i][j], distance[i][k] + distance[k][j]).
Time Complexity: O(V³)
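The triple loop translates directly to Python (a sketch; INF marks missing edges and the sample matrix is ours):

INF = float('inf')

def floyd_warshall(dist):
    # dist: V x V adjacency matrix; dist[i][j] = INF if no edge, 0 on the diagonal
    n = len(dist)
    for k in range(n):                 # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

matrix = [[0, 3, INF, 7],
          [8, 0, 2, INF],
          [5, INF, 0, 1],
          [2, INF, INF, 0]]
print(floyd_warshall(matrix))   # e.g. matrix[1][3] becomes 3 via the path 1 -> 2 -> 3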
8. Explain the Optimal Binary Search Tree (OBST) problem using Dynamic Programming.
OBST minimizes the expected search cost given key probabilities.
Use DP to calculate the minimum cost of subtrees, trying each key r in [i..j] as the root:
dp[i][j] = min over r of (dp[i][r-1] + dp[r+1][j] + sum(prob[i..j]))
Time Complexity: O(n³)

9. What is the Matrix Chain Multiplication problem? Solve it using Dynamic Programming.
Matrix Chain Multiplication Problem
Given a sequence of matrices A1, A2, ..., An, the problem is to find the most efficient way to multiply these matrices together, minimizing the total number of scalar multiplications.
Dynamic Programming Solution
Let:
- m[i, j] be the minimum number of scalar multiplications required to multiply matrices Ai to Aj.
- p[i-1] be the number of rows in matrix Ai.
- p[i] be the number of columns in matrix Ai.
Recurrence Relation
m[i, j] = min(m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j]) for i <= k < j
Steps
1. Initialize m[i, i] = 0 for all i.
2. Compute m[i, j] for all i and j using the recurrence relation.
Example
Suppose we have four matrices A1 (30x35), A2 (35x15), A3 (15x5), and A4 (5x10).
Solution
Using dynamic programming, we compute the minimum number of scalar multiplications:
m[1, 4] = min(m[1, 1] + m[2, 4] + 30*35*10, m[1, 2] + m[3, 4] + 30*15*10, m[1, 3] + m[4, 4] + 30*5*10)
After computing the smaller subproblems, this yields the minimum number of scalar multiplications.
Time Complexity
The time complexity of the dynamic programming solution is O(n^3), where n is the number of matrices.

10. Explain the Longest Common Subsequence (LCS) problem with example and algorithm.
The LCS is the longest sequence common to two sequences in the same order.
Use a 2D DP table dp[m+1][n+1]:
If str1[i] == str2[j], dp[i][j] = dp[i-1][j-1] + 1
Else, dp[i][j] = max(dp[i-1][j], dp[i][j-1])
Time Complexity: O(mn)
Example: LCS of "ABCBDAB" and "BDCAB" is "BCAB"
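A Python sketch of this DP with a traceback to recover the subsequence itself (the function name is ours):

def lcs(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back through the table to recover one LCS string.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs("ABCBDAB", "BDCAB"))   # "BCAB", matching the example above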
Bonus: Explain the Maximum Network Flow problem using the Ford-Fulkerson method.
Answer:
The problem is to find the maximum flow from source to sink in a flow network.
Ford-Fulkerson uses DFS to find augmenting paths and update flows.
Time Complexity: O(E * max flow), if capacities are integers.
Enhancements: Use Edmonds-Karp (BFS) for O(VE²).

Module IV: String Matching Algorithms and Backtracking Techniques (8 Hours)

1. Naive String Matching Algorithm
The Naive String Matching Algorithm is the simplest method for finding occurrences of a pattern string (P) in a text string (T).
Algorithm Steps:
Let the text be T of length n, and the pattern be P of length m.
Slide the pattern over the text one character at a time.
For each position i from 0 to n - m:
    Compare P[0..m-1] with T[i..i+m-1]
    If a match is found, report the index.
Time Complexity:
Worst Case: O((n-m+1) * m)
Best Case: O(n)
Example:
T = "ABABDABACDABABCABAB"
P = "ABABCABAB"
The algorithm compares the pattern at each shift, character by character.
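A direct Python rendering of these steps (our sketch; the slice comparison stands in for the inner character-by-character loop):

def naive_match(T, P):
    n, m = len(T), len(P)
    matches = []
    for i in range(n - m + 1):       # try every shift of P over T
        if T[i:i + m] == P:          # compare the window against the pattern
            matches.append(i)
    return matches

print(naive_match("ABABDABACDABABCABAB", "ABABCABAB"))   # [10]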
2. Rabin-Karp Algorithm
This algorithm uses hashing to find any one of a set of pattern strings in a text.
Algorithm:
Compute the hash of pattern P.
Compute the hash of each substring of T of length m.
If the hash values match, compare the actual substring with P.
Time Complexity:
Average Case: O(n + m)
Worst Case: O(nm) (due to hash collisions)
Advantages:
Efficient for multiple pattern matching.
Uses a rolling hash for quick computation.
Example:
Pattern: "test", Text: "this is a test text"
Hash("test") = H
Compute rolling hashes of each 4-character window in the text and compare.

3. Finite Automata for String Matching
String matching using finite automata constructs a deterministic finite automaton (DFA) that processes the input string and recognizes when a pattern occurs.
Steps:
Build the transition function for the pattern.
Scan the text using the DFA.
Time Complexity:
Preprocessing: O(m * |Σ|) (Σ is the alphabet)
Matching: O(n)
Benefits:
Once the DFA is built, matching is very fast.
Useful for high-performance applications.

4. Knuth-Morris-Pratt (KMP) Algorithm
The KMP algorithm avoids unnecessary comparisons by preprocessing the pattern.
Steps:
Compute the longest proper prefix which is also a suffix (LPS array).
Use the LPS array to skip characters.
Time Complexity:
Preprocessing: O(m)
Searching: O(n)
Example:
Pattern: "ABABCABAB"
LPS = [0, 0, 1, 2, 0, 1, 2, 3, 4]
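The LPS preprocessing is the heart of KMP; here is a Python sketch (ours) that reproduces the array above:

def compute_lps(P):
    # lps[i] = length of the longest proper prefix of P[:i+1] that is also its suffix
    m = len(P)
    lps = [0] * m
    length = 0                 # length of the current matched prefix (border)
    i = 1
    while i < m:
        if P[i] == P[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length > 0:
            length = lps[length - 1]   # fall back to the next shorter border
        else:
            lps[i] = 0
            i += 1
    return lps

print(compute_lps("ABABCABAB"))   # [0, 0, 1, 2, 0, 1, 2, 3, 4]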
5. Backtracking – n-Queens Problem
The goal is to place N queens on an N×N chessboard such that no two queens threaten each other.
Algorithm:
Place queens row-wise.
For each row, try placing a queen in every column.
Check if it is safe (no other queen in the same column or diagonal).
If safe, move to the next row; else backtrack.
Time Complexity: O(N!)
Example:
For N = 4, one solution:
.Q..
...Q
Q...
..Q.
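A compact backtracking sketch in Python (ours; solutions are returned as column indices per row, so [1, 3, 0, 2] corresponds to the board shown above):

def solve_n_queens(n):
    cols = [-1] * n          # cols[r] = column of the queen in row r
    solutions = []

    def safe(r, c):
        for pr in range(r):
            pc = cols[pr]
            if pc == c or abs(pc - c) == abs(pr - r):   # same column or diagonal
                return False
        return True

    def place(r):
        if r == n:
            solutions.append(cols[:])
            return
        for c in range(n):
            if safe(r, c):
                cols[r] = c
                place(r + 1)     # move to the next row
                cols[r] = -1     # backtrack

    place(0)
    return solutions

print(solve_n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]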
T = "ABABDABACDABABCABAB" Base case: If sum = 0, return true. exceeds the current best solution cost.
P = "ABABCABAB" Time Complexity:O(2^n) Thus, it avoids exploring all permutations (which
Compares the pattern at each shift, character by Example:Set = {3, 34, 4, 12, 5, 2}, Sum = 9 are factorial in number), reducing computational
character. Subset {4, 5} gives sum 9. effort.
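The include/exclude recursion in Python (a sketch; the target < 0 guard is ours, for early pruning):

def subset_sum(nums, target, i=0):
    if target == 0:
        return True                  # base case: remaining sum reached
    if i == len(nums) or target < 0:
        return False
    # Either include nums[i] or exclude it.
    return subset_sum(nums, target - nums[i], i + 1) or subset_sum(nums, target, i + 1)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True ({4, 5})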
8. State Space Tree for Backtracking Problems
A State Space Tree is a tree representation of all the states (choices made at each step) explored during backtracking.
Characteristics:
Each node represents a partial solution.
A leaf node represents a complete solution or a dead end.
It helps visualize the backtracking process.
Example:
n-Queens Problem:
Root: Empty board
Children: Queen placed at row 0 in various columns
Continue branching and pruning invalid placements.

1. Explain the Branch and Bound method. How is it used to solve the Travelling Salesman Problem (TSP)? Illustrate with an example and state space tree.
Branch and Bound (B&B) Method:
It is a systematic method for solving optimization problems, especially combinatorial ones. The main idea is to divide the problem into subproblems (branching), compute bounds for the minimum or maximum cost in subproblems, and discard subproblems that cannot produce better solutions than the current best (bounding). This pruning reduces the search space and thus speeds up the solution.
Application to TSP:
The Travelling Salesman Problem (TSP) asks for the shortest possible route visiting each city exactly once and returning to the starting point. B&B solves TSP by building a state space tree where:
Each node represents a partial solution (a path visiting a subset of cities).
Branching adds a new city to the partial path.
Bounding computes a lower bound on the cost to complete the tour from the current partial path.
Subtrees with bounds higher than the best solution so far are pruned.
Example:
Consider 4 cities with the following distance matrix:

      C1   C2   C3   C4
C1     0   10   15   20
C2    10    0   35   25
C3    15   35    0   30
C4    20   25   30    0

Start at city C1.
Branch to C2, C3, C4.
Calculate lower bounds for each branch using techniques such as row and column reduction.
Prune branches where the bound exceeds the current best path cost.
Continue until the complete path with minimum cost is found.
State Space Tree:
Root: Start at C1.
Level 1: Choose the next city (C2, C3, or C4).
Level 2: Choose the next city not visited yet.
Continue until all cities are visited.
Leaves represent complete tours.

2. What is the State Space Tree in Branch and Bound? How does it help in reducing the search space for TSP?
State Space Tree:
It is a tree representation of all possible solutions (partial and complete) of a problem.
Each node represents a state (a partial solution).
Branches represent choices/extensions from one state to another.
In Branch and Bound for TSP:
The root is the starting city.
Each level corresponds to the choice of the next city in the tour.
Leaf nodes represent complete tours.
It helps reduce the search space by:
Calculating lower bounds on the cost of tours rooted at each node.
Pruning branches (subtrees) whose bound exceeds the current best solution cost.
Thus, it avoids exploring all permutations (which are factorial in number), reducing computational effort.
3. Define and explain Polynomial-Time Verification with a suitable example. Why is it significant in the context of NP problems?
Polynomial-Time Verification:
Given a candidate solution (certificate) for a problem, verification means checking whether the candidate is indeed a valid solution. A problem is verifiable in polynomial time if there exists an algorithm that can verify a solution in time polynomial in the size of the input.
Example:
Consider the Hamiltonian Cycle problem: given a graph, does there exist a cycle visiting each vertex exactly once?
If a candidate cycle (a sequence of vertices) is given, verifying it involves:
Checking that it contains all vertices exactly once.
Checking that edges exist between consecutive vertices in the cycle.
This verification can be done in polynomial time.
Significance for NP problems:
NP is the class of decision problems where a "yes" answer can be verified in polynomial time.
Polynomial-time verification separates NP from P (problems solvable in polynomial time).
This concept is crucial for understanding NP-completeness and the P vs NP question.

4. Differentiate between P, NP, NP-Complete, and NP-Hard problems with examples. Why is the concept of NP-Completeness important in computability?
1. Class P (Polynomial Time): Class P consists of all decision problems (yes/no questions) that can be solved by a deterministic Turing machine in polynomial time.
Intuition: These problems are considered efficiently solvable or "easy" problems.
Examples: Sorting an array (e.g., Merge Sort, Quick Sort) runs in O(n log n) time; finding the shortest path in a graph using Dijkstra's algorithm takes polynomial time.
2. Class NP (Nondeterministic Polynomial Time): Class NP consists of decision problems for which a given solution can be verified in polynomial time by a deterministic Turing machine.
Note: It is not required that the problem can be solved in polynomial time, only verified.
Examples: Subset Sum Problem: given a set of numbers, is there a subset whose sum equals a target? Verifying a candidate subset's sum takes polynomial time. Hamiltonian Cycle: given a cycle, checking that it visits all vertices exactly once takes polynomial time.
3. NP-Complete Problems: Problems that are in NP and are at least as hard as every other problem in NP. Formally, a problem L is NP-Complete if:
- L ∈ NP, and
- every other problem in NP can be reduced to L in polynomial time (known as polynomial-time reducibility).
Significance: NP-Complete problems are the "hardest" problems in NP. If any NP-Complete problem is solved in polynomial time, all problems in NP can be solved in polynomial time, proving P = NP.
Examples: Boolean Satisfiability Problem (SAT), Travelling Salesman Problem (decision version), Vertex Cover Problem.
4. NP-Hard Problems: Problems that are at least as hard as NP-Complete problems, but that do not have to be in NP (i.e., they may not be decision problems or may not have polynomial-time verifiers). NP-Hard problems might be optimization problems or problems without yes/no answers.
Examples: Halting Problem (undecidable and NP-Hard), Travelling Salesman Problem (optimization version).
Importance of NP-Completeness:
- Identifies problems for which no polynomial-time algorithm is known.
- If any NP-Complete problem has a polynomial-time solution, all NP problems do.
5. What is the concept of Reducibility? Explain with an example how it is used to prove a problem is NP-Complete.
Reducibility: Transforming one problem A into another problem B in polynomial time such that a solution to B gives a solution to A. Denoted A ≤p B (A is polynomial-time reducible to B).
Use in NP-Completeness proofs:
To prove a problem C is NP-Complete:
Show C is in NP (verification in polynomial time).
Choose a known NP-Complete problem P.
Reduce P to C in polynomial time.
This shows C is at least as hard as P, and thus NP-Complete.
Example:
Prove 3-SAT is NP-Complete:
3-SAT is in NP.
Reduce SAT (known NP-Complete) to 3-SAT by converting clauses into 3-literal clauses in polynomial time.
This reduction proves 3-SAT is NP-Complete.

6. Define NP-Complete problems. Describe the steps to prove a problem is NP-Complete. Give an example.
NP-Complete Problems:
A problem is NP-Complete if:
It is in NP.
Every problem in NP can be polynomial-time reduced to it.
Steps to prove NP-Completeness:
Show the problem is in NP.
Select a known NP-Complete problem.
Provide a polynomial-time reduction from the known NP-Complete problem to the given problem.
Example:
Subset Sum Problem: Given a set of integers, is there a subset summing to a target value?
Step 1: Verification is polynomial (check the sum of a candidate subset).
Steps 2 and 3: Reduce a known NP-Complete problem (like 3-SAT or Partition) to Subset Sum.
Hence, Subset Sum is NP-Complete.
problems in NP can be solved in polynomial as solving any NP problem. 2. Simple: Greedy algorithms are simple to
time, proving P=NPP = NPP=NP.Examples:-- Decision Version of TSP:--Given a set of cities, implement.
Boolean Satisfiability Problem (SAT).--Travelling distances, and a cost kk, is there a tour with Limitations
Salesman Problem (decision version).--Vertex total cost ≤ kk? 1. Not always optimal: Greedy algorithms do not
Cover Problem.4. NP-Hard Problems: This decision problem is NP-Complete. always produce the optimal solution.
Problems that are at least as hard as NP- 8. Describe the 2-Approximation algorithm 2. Problem-specific: Greedy algorithms are often
Complete problems, but they do not have to be for the Vertex Cover Problem. Provide an problem-specific and may not work for all
in NP (i.e., they may not be decision problems example and analyze its complexity. problems.
or may not have polynomial-time verifiers).NP- Vertex Cover Problem: Applications
Find a minimum set of vertices such that every 1. Data compression: Huffman coding is used in
Hard problems might be optimization problems
edge in the graph has at least one endpoint in data compression.
or problems without yes/no this set.
answers.Example:Halting Problem (undecidable 2. Resource allocation: Greedy algorithms are
2-Approximation Algorithm: used in resource allocation problems.
and NP-Hard).--Travelling Salesman Problem Initialize cover C=∅C = \emptyset. 3. Scheduling: Greedy algorithms are used in
(optimization version).--Importance of NP- While there are edges in the graph: scheduling problems.
Completeness:-- Identifies problems for which Pick any edge (u,v)(u,v).
no polynomial-time algorithm is known.If any Add both uu and vv to CC.
NP-Complete problem has a polynomial-time Remove all edges incident on uu or vv.
solution, all NP problems do. Return CC.
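A Python sketch of this algorithm on the example above (ours; edges are given as a list of pairs):

def vertex_cover_2approx(edges):
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]            # pick any remaining edge
        cover.update((u, v))           # add both endpoints to the cover
        # Discard every edge incident on u or v.
        remaining = [(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)]
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))   # {1, 2, 3, 4}; optimal is {2, 3}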
9. Discuss the limitations of exact algorithms for NP-Hard problems and the role of approximation algorithms in solving such problems.
Limitations of Exact Algorithms:
Exact algorithms (like brute force) have exponential time complexity.
They are impractical for large input sizes.
NP-Hard problems are computationally infeasible to solve optimally in polynomial time unless P = NP.
Role of Approximation Algorithms:
Provide near-optimal solutions in polynomial time.
Guarantee a bounded performance ratio (approximation ratio).
Useful when exact solutions are computationally expensive or impossible.
Enable practical solutions for real-world NP-Hard problems.

10. What are approximation algorithms? Explain the general characteristics and performance guarantees with the example of the Vertex Cover problem.
Approximation Algorithms:
Algorithms that find solutions close to optimal within a guaranteed bound.
Used for optimization problems where exact solutions are hard to find.
General Characteristics:
Polynomial-time complexity.
Provide an approximation ratio (performance guarantee).
Typically use heuristics or greedy strategies.
Trade off optimality for efficiency.
Performance Guarantee:
For minimization problems, an approximation ratio ρ ≥ 1 means the algorithm's solution cost ≤ ρ × OPT.
For maximization problems, the solution cost ≥ OPT / ρ.
Example: Vertex Cover
The 2-Approximation algorithm guarantees a cover of size ≤ 2 × OPT.
It runs efficiently and is simple to implement.

11. Greedy Techniques
A greedy algorithm is a problem-solving strategy that makes the locally optimal choice at each step, hoping to find a global optimum solution.
Characteristics
1. Optimal Substructure: The problem can be broken down into smaller sub-problems.
2. Greedy Choice: The locally optimal choice leads to a global optimum solution.
Examples
1. Huffman Coding: A greedy algorithm for constructing optimal prefix codes.
2. Activity Selection Problem: A greedy algorithm for selecting the maximum number of activities.
3. Fractional Knapsack Problem: A greedy algorithm for maximizing the value of items in a knapsack.
Advantages
1. Efficient: Greedy algorithms are often fast and efficient.
2. Simple: Greedy algorithms are simple to implement.
Limitations
1. Not always optimal: Greedy algorithms do not always produce the optimal solution.
2. Problem-specific: Greedy algorithms are often problem-specific and may not work for all problems.
Applications
1. Data compression: Huffman coding is used in data compression.
2. Resource allocation: Greedy algorithms are used in resource allocation problems.
3. Scheduling: Greedy algorithms are used in scheduling problems.
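To make the greedy idea concrete, here is the Fractional Knapsack example from the list above as a Python sketch (ours; items are (value, weight) pairs):

def fractional_knapsack(items, capacity):
    # Greedy rule: take items in decreasing order of value/weight ratio,
    # splitting the last item if it does not fit entirely.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0

Because fractions are allowed, this greedy choice is optimal here, unlike in the 0/1 knapsack case discussed earlier.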
