DAA Notes 2
1. Optimal Substructure:
This property means that the optimal solution to an overall problem can be
constructed from the optimal solutions of its subproblems.
It allows us to solve a complex problem by breaking it down into smaller, more
manageable subproblems.
2. Overlapping Subproblems:
This property states that the problem can be broken down into subproblems, and the
solutions to some subproblems are reused multiple times.
By storing the solutions to subproblems, we avoid redundant computations and
improve the efficiency of the algorithm.
Example (Matrix Chain Multiplication): Consider matrices A(10x30), B(30x5), and C(5x60). The
product ABC can be parenthesized as either (AB)C or A(BC). Here (AB)C costs
10·30·5 + 10·5·60 = 4,500 scalar multiplications, while A(BC) costs
30·5·60 + 10·30·60 = 27,000, so choosing the right parenthesization matters greatly.
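The minimum over all parenthesizations can be computed with the standard chain DP; a short
Python sketch (the function name is my own; dims[i] and dims[i+1] give the dimensions of
matrix i):

```python
def matrix_chain_cost(dims):
    # dims has n+1 entries for n matrices; matrix i is dims[i] x dims[i+1]
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):               # solve chains of increasing length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)             # try every split point
            )
    return cost[0][n - 1]
```

For dims [10, 30, 5, 60] this returns 4500, matching the (AB)C parenthesization above.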
Example (Longest Common Subsequence): For sequences "ABCD" and "ACDF", the LCS is "ACD".
Dynamic programming finds the LCS efficiently because the subproblems (the LCS of each pair of
prefixes) overlap heavily.
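The overlapping subproblems can be seen in a small Python sketch (function name mine) that
fills the prefix table and backtracks to recover one LCS:

```python
def lcs(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # backtrack through the table to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

lcs("ABCD", "ACDF") returns "ACD", as in the example above.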
Example (Optimal Binary Search Tree): Given the probability of searching for each key, construct
a binary search tree that minimizes the expected search cost. This generally places frequently
searched keys closer to the root for quicker access.
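A possible DP sketch for the minimum expected cost, working with raw frequencies rather than
normalized probabilities (function name mine):

```python
def optimal_bst_cost(freq):
    # freq[i] = search frequency of key i, with keys given in sorted order
    n = len(freq)
    cost = [[0.0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                     # a single key costs its own frequency
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])           # every key in the range gains one level
            cost[i][j] = total + min(
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)         # try each key as the root
            )
    return cost[0][n - 1]
```

For example, frequencies [34, 8, 50] give a minimum cost of 142, with the heaviest key at the
root's right-leaning position.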
Example (0/1 Knapsack): Given items with weights and values, determine the most valuable
combination of items that fits into a knapsack of limited capacity. The dynamic programming
approach solves this efficiently by computing the optimal selection of items for each possible
weight limit.
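A minimal table-filling sketch, one pass of weight limits per item (function name mine; the
reverse inner loop ensures each item is used at most once):

```python
def knapsack_01(weights, values, capacity):
    # dp[w] = best value achievable with weight limit w using items seen so far
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        for w in range(capacity, weights[i] - 1, -1):  # reverse: 0/1, not unbounded
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]
```

For weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7, this returns 9 (take the items
with weights 3 and 4).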
Example (Traveling Salesperson Problem): Given a set of cities with distances between them, find
the shortest tour that visits each city exactly once and returns to the starting city. Dynamic
programming finds the optimal tour by considering subsets of visited cities rather than all n!
orderings, reducing the work to O(n²·2ⁿ).
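A bitmask DP sketch of the subset idea, often called the Held-Karp algorithm (function name
mine; still exponential, but far better than enumerating every ordering):

```python
def tsp_held_karp(dist):
    # dp[mask][j] = cheapest path that starts at city 0, visits exactly the
    # cities whose bits are set in mask, and currently ends at city j
    n = len(dist)
    INF = float("inf")
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                                  # start: only city 0 visited
    for mask in range(1 << n):
        for j in range(n):
            if dp[mask][j] == INF:
                continue
            for k in range(n):
                if mask & (1 << k):
                    continue                      # city k already visited
                nxt = mask | (1 << k)
                cand = dp[mask][j] + dist[j][k]
                if cand < dp[nxt][k]:
                    dp[nxt][k] = cand
    full = (1 << n) - 1
    # close the tour by returning to city 0
    return min(dp[full][j] + dist[j][0] for j in range(1, n))
```

On the classic 4-city matrix [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]] this
returns the optimal tour length 80.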
Example (Floyd-Warshall, All-Pairs Shortest Paths): Given a graph with weighted edges, the
algorithm computes the shortest paths between all pairs of vertices. It uses a dynamic
programming approach that iteratively allows each vertex in turn as an intermediate point,
ultimately providing the shortest distances between all pairs of vertices.
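The iterative update can be sketched as follows (function name mine; the adjacency matrix uses
float('inf') for missing edges):

```python
def floyd_warshall(graph):
    # graph: n x n adjacency matrix, float('inf') where there is no edge
    n = len(graph)
    dist = [row[:] for row in graph]              # copy so the input is untouched
    for k in range(n):                            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

For a small cycle 0→1 (5), 1→2 (2), 2→0 (1), the result correctly reports, e.g., a shortest
0→2 distance of 7 via vertex 1.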
0/1 Knapsack: Create branches at each decision point where an item can be included or
excluded; each branch represents the decision to include or exclude that particular item.
Traveling Salesperson: Create branches at each decision point where the salesperson must
choose the next city to visit; each branch represents a decision about the order in which
cities are visited.
Example: In the Traveling Salesperson Problem, a branch might represent choosing between
visiting City A or City B next. The bounding function adds the distance traveled so far to an
optimistic estimate of the remaining tour, and prunes branches whose lower bound already
exceeds the current best solution.
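The include/exclude branching with a bounding function can be sketched for the 0/1 knapsack.
This best-first version uses a greedy fractional relaxation as the optimistic bound (one common
choice; function names are my own):

```python
import heapq

def knapsack_branch_and_bound(weights, values, capacity):
    # Sort items by value density so the fractional bound is easy to compute.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]
    n = len(w)

    def bound(level, cur_w, cur_v):
        # Optimistic estimate: fill the remaining room fractionally.
        b, room = cur_v, capacity - cur_w
        for i in range(level, n):
            if w[i] <= room:
                room -= w[i]; b += v[i]
            else:
                b += v[i] * room / w[i]
                break
        return b

    best = 0
    # Max-heap keyed on the bound: explore the most promising node first.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]           # (-bound, level, weight, value)
    while heap:
        neg_b, level, cur_w, cur_v = heapq.heappop(heap)
        if -neg_b <= best or level == n:          # prune: cannot beat the best found
            continue
        # Branch 1: include item `level` if it fits.
        if cur_w + w[level] <= capacity:
            nv = cur_v + v[level]
            best = max(best, nv)
            heapq.heappush(heap, (-bound(level + 1, cur_w + w[level], nv),
                                  level + 1, cur_w + w[level], nv))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, cur_w, cur_v),
                              level + 1, cur_w, cur_v))
    return best
```

It finds the same optimum as the DP (9 for weights [1, 3, 4, 5], values [1, 4, 5, 7],
capacity 7), but the bound lets it skip most of the 2ⁿ include/exclude combinations.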
Summary:
Branch and Bound: A systematic algorithmic paradigm for solving optimization problems
by exploring the solution space efficiently.
0/1 Knapsack using Branch and Bound: The method involves creating branches for item
inclusion/exclusion decisions and bounding based on estimated potential solutions.
Traveling Salesperson using Branch and Bound: Branches are created for city visitation
decisions, and bounding helps eliminate suboptimal solutions.
Unit: 4
Naïve String Matching Algorithm
Explanation: The naïve algorithm checks every possible alignment of the pattern against the
text by direct character comparison.
Algorithm Steps:
1. Slide the pattern over the text one character at a time.
2. Compare the pattern with the substring of the text starting at each position.
3. If a match is found, record the position of the match.
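The steps above amount to a few lines of Python (function name mine):

```python
def naive_search(text, pattern):
    n, m = len(text), len(pattern)
    matches = []
    for s in range(n - m + 1):            # slide the pattern one position at a time
        if text[s:s + m] == pattern:      # compare against the substring at shift s
            matches.append(s)
    return matches
```

naive_search("AABAACAADAABAABA", "AABA") returns [0, 9, 12]. The worst case is O(n·m)
comparisons, which motivates the faster algorithms below.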
Rabin-Karp Algorithm
Explanation: The Rabin-Karp Algorithm is a more efficient string matching algorithm that uses
hashing. It employs a rolling hash function to quickly compare the pattern with substrings of the
text.
Algorithm Steps:
1. Compute the hash values of the pattern and the initial substring of the text.
2. Slide the pattern over the text, updating the hash value at each step using the rolling
hash function.
3. If the hash values match, perform a character-by-character comparison to confirm the
match.
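A sketch of the rolling-hash idea (the base, modulus, and function name are my own choices):

```python
def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, mod)          # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                    # hashes of the pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # confirm character by character only when the hashes agree
        if p_hash == t_hash and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:                     # roll the hash to the next window
            t_hash = ((t_hash - ord(text[s]) * high) * base
                      + ord(text[s + m])) % mod
    return hits
```

rabin_karp("ABABDABACDABABCABAB", "ABAB") returns [0, 10, 15]; each window update is O(1),
so the expected running time is O(n + m).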
String Matching with Finite Automata
Explanation: This approach preprocesses the pattern into a deterministic finite automaton,
which then scans the text in a single pass.
Algorithm Steps:
1. Construct a finite automaton that recognizes the pattern.
2. Process the text using the automaton to detect occurrences of the pattern.
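One way to sketch the construction and the scan in Python (names are mine; this version
restarts after each full match, so overlapping occurrences are not reported):

```python
def build_dfa(pattern, alphabet):
    m = len(pattern)
    # dfa[state][ch] -> next state; reaching state m means the pattern was found
    dfa = [dict.fromkeys(alphabet, 0) for _ in range(m)]
    dfa[0][pattern[0]] = 1
    x = 0  # "restart" state: where the automaton would be on the longest proper prefix
    for j in range(1, m):
        for ch in alphabet:
            dfa[j][ch] = dfa[x][ch]       # mismatch transitions copy the restart state
        dfa[j][pattern[j]] = j + 1        # match transition advances
        x = dfa[x][pattern[j]]            # update the restart state
    return dfa

def automaton_search(text, pattern, alphabet):
    m = len(pattern)
    dfa = build_dfa(pattern, alphabet)
    state, hits = 0, []
    for i, ch in enumerate(text):
        state = dfa[state].get(ch, 0)     # characters outside the alphabet reset to 0
        if state == m:
            hits.append(i - m + 1)
            state = 0                     # restart after a full match
    return hits
```

After the O(m·|alphabet|) preprocessing, the scan itself is a single O(n) pass with one table
lookup per text character.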
Knuth-Morris-Pratt (KMP) Algorithm
Explanation: KMP preprocesses the pattern so that, on a mismatch, characters that already
matched are never re-examined.
Algorithm Steps:
1. Preprocess the pattern to compute the longest proper prefix that is also a suffix for
each position.
2. Use the computed information to skip unnecessary character comparisons during the
string matching process.
Example: For the pattern "ABABCABAB" and text "ABABDABACDABABCABAB," the KMP
algorithm efficiently identifies occurrences of the pattern without unnecessary character
comparisons.
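A Python sketch of the prefix-function preprocessing and the search (names mine); for the
example above it reports the single occurrence at index 10:

```python
def prefix_function(pattern):
    # pi[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]                 # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

def kmp_search(text, pattern):
    pi = prefix_function(pattern)
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = pi[k - 1]                 # reuse information from previous matches
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):             # full match ending at position i
            matches.append(i - k + 1)
            k = pi[k - 1]                 # continue searching for further matches
    return matches
```

kmp_search("ABABDABACDABABCABAB", "ABABCABAB") returns [10]; the whole search runs in
O(n + m) time.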
Summary:
Naïve String Matching Algorithm: Simple but less efficient method involving a systematic
comparison of the pattern with all substrings of the text.
Rabin-Karp Algorithm: Uses hashing and a rolling hash function for efficient pattern
matching.
String Matching with Finite Automata: Involves constructing a finite automaton for pattern
recognition.
Knuth-Morris-Pratt (KMP) Algorithm: Efficient linear time algorithm that avoids
unnecessary comparisons using information from previous matches.
Time Complexity: The amount of time an algorithm takes as a function of the input size.
Space Complexity: The amount of memory an algorithm uses as a function of the input
size.
Big-O Notation: Describes the upper bound of the growth rate of an algorithm's resource
usage.
NP-hard: Problems at least as hard as every problem in NP; a polynomial-time algorithm for
any one of them would imply polynomial-time algorithms for all problems in NP.
NP-complete: A problem is NP-complete if it is both in NP and is as hard as the hardest
problems in NP.
Approximation Algorithms
Explanation: Approximation Algorithms provide near-optimal solutions for NP-hard problems
when finding an exact solution is impractical.
Greedy Approximation: Make locally optimal choices at each step, hoping they lead to a
globally optimal solution.
Randomized Approximation: Use randomness to improve the probability of finding a good
solution.
Deterministic Polynomial-Time Approximation: Runs in deterministic polynomial time and
guarantees a solution within a proven factor (the approximation ratio) of the optimal.
Example: The Traveling Salesperson Problem is NP-hard. An approximation algorithm might not
find the optimal route, but for the metric case a minimum-spanning-tree based algorithm
guarantees a tour at most twice the length of the shortest possible.
Summary:
Computational Complexity: Studies time and space resources required by algorithms.
Polynomial vs Non-Polynomial Complexity: Classifies algorithms based on their growth
rates.
NP-hard & NP-complete Classes: Categorize problems based on their complexity and
relationship to NP.
Approximation Algorithms: Provide near-optimal solutions for NP-hard problems.
Ford-Fulkerson Method
Explanation: The Ford-Fulkerson method is an algorithm for finding the maximum flow in a flow
network. It uses augmenting paths to incrementally improve the flow until an optimal solution is
reached.
Algorithm Steps:
1. Start with zero flow, which is always feasible.
2. Find an augmenting path in the residual graph.
3. Update the flow along the augmenting path.
4. Repeat steps 2 and 3 until no more augmenting paths can be found.
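The steps above can be sketched using breadth-first search to find augmenting paths (the
Edmonds-Karp variant of Ford-Fulkerson; function name mine):

```python
from collections import deque

def max_flow(capacity, s, t):
    # capacity: n x n matrix; `res` holds the residual graph
    n = len(capacity)
    res = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:               # no augmenting path: the flow is maximum
            return flow
        # find the bottleneck capacity along the path
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, res[u][v])
            v = u
        # update residual capacities (forward edges down, reverse edges up)
        v = t
        while v != s:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
```

On the small network with capacities 0→1:3, 0→2:2, 1→2:1, 1→3:3, 2→3:2, the maximum flow
from vertex 0 to vertex 3 is 5 (saturating the cut around the source).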
Maximum Bipartite Matching
Algorithm Steps:
1. Construct a flow network from the bipartite graph: add a source connected to every
vertex on the left and a sink connected from every vertex on the right, with all edge
capacities set to 1.
2. Use an algorithm such as the Ford-Fulkerson method to find the maximum flow, which
equals the size of the maximum bipartite matching.
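The matching can also be sketched directly with augmenting paths, without building the full
flow network (an augmenting-path DFS; names mine):

```python
def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] = list of right-side vertices that left vertex u can be matched to
    match_right = [-1] * n_right          # match_right[v] = left partner of v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or v's current partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    # one augmenting attempt per left vertex; count the successes
    return sum(try_augment(u, [False] * n_right) for u in range(n_left))
```

For adj = [[0, 1], [0], [1]] (three left vertices, two right), the maximum matching has size
2: no assignment can match all three left vertices to only two right vertices.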
Sorting Networks
Explanation: Sorting Networks are networks of comparators used for sorting a sequence of
elements. These networks guarantee correct sorting regardless of the input sequence.
Comparison Network: A sorting network where elements are compared and swapped
based on predefined rules until the entire sequence is sorted.
Zero-One Principle: If a comparison network correctly sorts every input sequence consisting
only of 0s and 1s, then it correctly sorts any input sequence of arbitrary values.
Bitonic Sorting Network
Algorithm Steps:
1. Recursively sort the first half of the input in ascending order and the second half in
descending order; together they form a bitonic sequence.
2. Apply a bitonic merge: compare-and-swap elements a fixed distance apart, then
recursively merge each half.
3. The result is the fully sorted sequence.
Merging Network
Explanation: A Merging Network is a type of sorting network used for merging two sorted
sequences into a single sorted sequence.
Algorithm Steps:
1. Compare elements from the two input sequences and merge them in sorted order.
2. Continue this process until all elements are merged.
Example: In a Maximum Bipartite Matching problem, given two sets of elements with possible
connections between them, the goal is to find the maximum number of connections such that
no two connections share a common element.
Summary:
Ford-Fulkerson Method: Finds the maximum flow in a flow network.
Maximum Bipartite Matching: Finds the maximum number of non-overlapping
connections in a bipartite graph.
Sorting Networks: Networks of comparators for sorting sequences.
Bitonic Sorting Network: Specifically designed for sorting bitonic sequences.
Merging Network: Used for merging two sorted sequences.