DAA Notes 2

Unit: 3

Dynamic Programming: Ingredients of Dynamic Programming


Explanation: Dynamic Programming is a powerful technique for solving optimization problems
by breaking them down into simpler subproblems. To effectively use dynamic programming, two
key properties are necessary:

1. Optimal Substructure:

This property means that the optimal solution to an overall problem can be
constructed from the optimal solutions of its subproblems.
It allows us to solve a complex problem by breaking it down into smaller, more
manageable subproblems.

2. Overlapping Subproblems:

This property states that the problem can be broken down into subproblems, and the
solutions to some subproblems are reused multiple times.
By storing the solutions to subproblems, we avoid redundant computations and
improve the efficiency of the algorithm.
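
As a minimal illustration (Python; not one of the syllabus problems, just the smallest example that exhibits both properties), a memoized Fibonacci computation shows optimal substructure and overlapping subproblems together:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Optimal substructure: fib(n) is built from the optimal answers
    # to the subproblems fib(n - 1) and fib(n - 2).
    # Overlapping subproblems: without the cache, the same fib(k) would
    # be recomputed exponentially many times; with it, each value once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```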

Matrix Chain Multiplication


Explanation: Matrix Chain Multiplication is a classic dynamic programming problem where the
goal is to parenthesize a sequence of matrices to minimize the number of scalar multiplications.

Example: Consider matrices A(10x30), B(30x5), and C(5x60). The product ABC can be
parenthesized as (AB)C or A(BC). (AB)C costs (10 × 30 × 5) + (10 × 5 × 60) = 4,500 scalar
multiplications, whereas A(BC) costs (30 × 5 × 60) + (10 × 30 × 60) = 27,000, so (AB)C is
the optimal parenthesization.
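
A minimal sketch of the standard O(n³) interval DP (Python; dims[i-1] x dims[i] are the dimensions of the i-th matrix):

```python
def matrix_chain_order(dims):
    """Minimum scalar multiplications needed to compute A1...An,
    where matrix Ai has dimensions dims[i-1] x dims[i]."""
    n = len(dims) - 1                        # number of matrices
    m = [[0] * n for _ in range(n)]          # m[i][j]: best cost for Ai..Aj (0-indexed)
    for length in range(2, n + 1):           # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)         # k: split point (Ai..Ak)(Ak+1..Aj)
            )
    return m[0][n - 1]

# The example above: matrix_chain_order([10, 30, 5, 60]) -> 4500, i.e. (AB)C
```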

Longest Common Subsequence


Explanation: The Longest Common Subsequence (LCS) problem involves finding the longest
subsequence that two sequences have in common. A subsequence preserves the relative order
of the original elements but need not be contiguous.

Example: For sequences "ABCD" and "ACDF," the LCS is "ACD." Dynamic programming is often
used to find the LCS efficiently by considering overlapping subproblems.
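
A minimal sketch of the standard DP table, with a backtrack that reconstructs one LCS:

```python
def lcs(x, y):
    """Length of the LCS of x and y, plus one such subsequence."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # dp[i][j]: LCS length of x[:i], y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # backtrack through the table to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], "".join(reversed(out))

# The example above: lcs("ABCD", "ACDF") -> (3, "ACD")
```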

Optimal Binary Search Trees


Explanation: Optimal Binary Search Trees involve constructing a binary search tree with
minimum expected search cost for a sequence of keys. The goal is to reduce the average search
time for keys.

Example: Given probabilities of searching for keys, construct a binary search tree that minimizes
the expected search cost. This involves placing frequently searched keys closer to the root for
quicker access.
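
A minimal sketch of the classic O(n³) interval DP, simplified to successful searches only (no dummy-key probabilities) and counting the root at depth 1:

```python
from functools import lru_cache

def optimal_bst_cost(p):
    """Minimum expected search cost of a BST over keys 0..n-1,
    where p[i] is the probability of searching for key i."""
    pre = [0.0]                              # prefix sums: pre[j] = p[0] + ... + p[j-1]
    for x in p:
        pre.append(pre[-1] + x)

    @lru_cache(maxsize=None)
    def cost(i, j):                          # keys i..j inclusive
        if i > j:
            return 0.0
        w = pre[j + 1] - pre[i]              # every key in i..j sits one level deeper
        return w + min(cost(i, r - 1) + cost(r + 1, j) for r in range(i, j + 1))

    return cost(0, len(p) - 1)

# e.g. optimal_bst_cost([0.1, 0.4, 0.2, 0.3]) puts the high-probability keys near the root
```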

0-1 Knapsack Problem


Explanation: The 0-1 Knapsack Problem is a combinatorial optimization problem where the goal
is to maximize the total value of items selected, subject to a constraint on the total weight.

Example: Given items with weights and values, determine the most valuable combination of
items that can fit into a knapsack of limited capacity. The dynamic programming approach
efficiently solves this problem by considering the optimal selection of items for each possible
weight limit.
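
A minimal sketch of that table-based approach, using a one-dimensional array indexed by weight limit:

```python
def knapsack_01(weights, values, capacity):
    """Maximum total value of items fitting in `capacity`, each item used at most once."""
    dp = [0] * (capacity + 1)                # dp[w]: best value achievable with limit w
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):   # iterate downward so each item is used once
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```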

Traveling Salesperson Problem


Explanation: The Traveling Salesperson Problem involves finding the shortest possible route that
visits a set of cities and returns to the original city.

Example: Given a set of cities with pairwise distances, find the shortest path that visits
each city exactly once and returns to the starting city. The Held-Karp dynamic programming
algorithm finds the optimal tour in O(n²·2ⁿ) time by memoizing the best path for each
(subset of cities, ending city) pair, rather than enumerating all n! orderings; see the
sketch below.
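
A minimal sketch of the Held-Karp bitmask DP (Python; assumes a complete distance matrix and n ≥ 2):

```python
def tsp_held_karp(dist):
    """Shortest tour that starts at city 0, visits every city once, and returns to 0.
    O(n^2 * 2^n) time, O(n * 2^n) space."""
    n = len(dist)
    INF = float("inf")
    # dp[mask][i]: cheapest path that starts at 0, visits exactly the
    # cities in bitmask `mask`, and currently ends at city i
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                             # only city 0 visited, standing at city 0
    for mask in range(1 << n):
        if not mask & 1:                     # every path must include city 0
            continue
        for i in range(n):
            if dp[mask][i] == INF or not mask >> i & 1:
                continue
            for j in range(n):               # extend the path to an unvisited city j
                if not mask >> j & 1:
                    cand = dp[mask][i] + dist[i][j]
                    if cand < dp[mask | (1 << j)][j]:
                        dp[mask | (1 << j)][j] = cand
    full = (1 << n) - 1
    return min(dp[full][i] + dist[i][0] for i in range(1, n))
```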

Floyd Warshall Algorithm


Explanation: The Floyd-Warshall algorithm is used to find the shortest paths between all pairs of
vertices in a weighted graph. It works for both directed and undirected graphs with positive or
negative edge weights, provided the graph contains no negative cycles.

Example: Given a graph with weighted edges, the algorithm computes the shortest paths
between all pairs of vertices. It uses a dynamic programming approach to iteratively update the
shortest path information, ultimately providing the shortest distances between all pairs of
vertices.
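
A minimal sketch of the triple loop (dist is an adjacency matrix with float('inf') for absent edges; updated in place):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths in O(n^3); assumes no negative cycles."""
    n = len(dist)
    for k in range(n):                       # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```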

Branch and Bound Method


Explanation: Branch and Bound is an algorithmic paradigm for solving optimization problems,
typically combinatorial optimization problems. It systematically searches the solution space to
find the optimal solution by dividing it into smaller subproblems, bounding the solutions, and
efficiently eliminating subproblems that cannot lead to an optimal solution.

Steps in Branch and Bound:


1. Initialization: Begin with an initial feasible solution or an empty solution space.
2. Branching: Divide the problem into subproblems.
3. Bounding: Estimate the potential of each subproblem to improve upon the current
best solution.
4. Pruning: Eliminate subproblems that cannot lead to an optimal solution.
5. Updating: Update the current best solution if a better solution is found.
6. Termination: Stop when the solution space has been completely explored.

0/1 Knapsack Problem using Branch and Bound


Explanation: The 0/1 Knapsack Problem involves selecting a subset of items with maximum total
value, subject to a constraint on the total weight. Branch and Bound speeds up the search by
pruning subtrees whose bound proves they cannot beat the best solution found so far.

Branching in 0/1 Knapsack:

Create branches at each decision point where an item can be included or excluded.
Each branch represents a decision to include or exclude a particular item.

Bounding in 0/1 Knapsack:

Use an optimistic bound on each partial solution; the classic choice is the value of
the fractional (greedy) relaxation of the remaining items.
Prune branches whose bound cannot beat the best complete solution found so far, as in
the sketch below.
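
A minimal best-first sketch using the fractional relaxation as the bound (Python; assumes positive weights, and the function names are illustrative):

```python
import heapq

def knapsack_branch_and_bound(weights, values, capacity):
    """Best-first branch and bound for 0/1 knapsack."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    n = len(items)

    def bound(i, weight, value):
        # Optimistic bound: fill the remaining capacity greedily by value/weight
        # ratio, allowing one fractional item (the LP relaxation).
        for w, v in items[i:]:
            if weight + w <= capacity:
                weight += w
                value += v
            else:
                return value + (capacity - weight) * v / w
        return value

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]      # max-heap on bound: (-bound, i, weight, value)
    while heap:
        neg_b, i, weight, value = heapq.heappop(heap)
        if -neg_b <= best or i == n:         # prune: bound cannot beat the incumbent
            continue
        w, v = items[i]
        if weight + w <= capacity:           # branch 1: include item i
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(i + 1, weight + w, value + v),
                                  i + 1, weight + w, value + v))
        # branch 2: exclude item i
        heapq.heappush(heap, (-bound(i + 1, weight, value), i + 1, weight, value))
    return best
```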

Traveling Salesperson Problem using Branch and Bound


Explanation: The Traveling Salesperson Problem involves finding the shortest possible route that
visits a set of cities and returns to the original city. The Branch and Bound method can be
employed to efficiently explore the solution space.

Branching in Traveling Salesperson Problem:

Create branches at each decision point where the salesperson must choose the next
city to visit.
Each branch represents a decision about the order in which cities are visited.

Bounding in Traveling Salesperson Problem:


Use heuristics or lower bounds to estimate the potential of a partial solution.
Prune branches that cannot lead to an optimal solution based on these estimates.

Example: In the Traveling Salesperson Problem, a branch might represent choosing between
visiting City A or City B next. The bounding function computes an optimistic lower bound on
the total tour length (the distance traveled so far plus an estimate of the cheapest possible
completion) and prunes branches whose bound already meets or exceeds the current best tour.
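
A minimal depth-first sketch using one simple admissible bound (cost so far, plus the cheapest outgoing edge of the current city and of each unvisited city); assumes n ≥ 2 and a complete distance matrix:

```python
def tsp_branch_and_bound(dist):
    """Depth-first branch and bound for TSP starting and ending at city 0."""
    n = len(dist)
    min_out = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
    best = float("inf")

    def dfs(city, visited, cost):
        nonlocal best
        if len(visited) == n:                # all cities placed: close the tour
            best = min(best, cost + dist[city][0])
            return
        # Lower bound: every remaining edge of the tour (one leaving the current
        # city, one leaving each unvisited city) costs at least min_out of its source.
        lb = cost + min_out[city] + sum(min_out[j] for j in range(n) if j not in visited)
        if lb >= best:
            return                           # prune this branch
        for j in range(n):
            if j not in visited:
                dfs(j, visited | {j}, cost + dist[city][j])

    dfs(0, {0}, 0)
    return best
```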

Summary:
Branch and Bound: A systematic algorithmic paradigm for solving optimization problems
by exploring the solution space efficiently.
0/1 Knapsack using Branch and Bound: The method involves creating branches for item
inclusion/exclusion decisions and bounding based on estimated potential solutions.
Traveling Salesperson using Branch and Bound: Branches are created for city visitation
decisions, and bounding helps eliminate suboptimal solutions.

Unit: 4

Naïve String Matching Algorithm


Explanation: The Naïve String Matching algorithm is a simple method for finding occurrences of
a pattern (substring) within a text (string). It involves systematically comparing the pattern with
all possible substrings of the text.

Algorithm Steps:
1. Slide the pattern over the text one character at a time.
2. Compare the pattern with the substring of the text starting at each position.
3. If a match is found, record the position of the match.
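
A minimal sketch of these steps, with worst-case O((n − m + 1)·m) comparisons:

```python
def naive_search(text, pattern):
    """Return all starting indices where pattern occurs in text."""
    n, m = len(text), len(pattern)
    matches = []
    for s in range(n - m + 1):               # slide the pattern one position at a time
        if text[s:s + m] == pattern:         # character-by-character comparison
            matches.append(s)
    return matches
```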

Rabin-Karp Algorithm
Explanation: The Rabin-Karp Algorithm is a more efficient string matching algorithm that uses
hashing. It employs a rolling hash function to quickly compare the pattern with substrings of the
text.

Algorithm Steps:
1. Compute the hash values of the pattern and the initial substring of the text.
2. Slide the pattern over the text, updating the hash value at each step using the rolling
hash function.
3. If the hash values match, perform a character-by-character comparison to confirm the
match.
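
A minimal sketch using a polynomial rolling hash (the base and modulus here are arbitrary illustrative choices):

```python
def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    """Return all starting indices of pattern in text using a rolling hash."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, mod)                # weight of the character leaving the window
    p_hash = t_hash = 0
    for i in range(m):                       # hashes of the pattern and the first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for s in range(n - m + 1):
        if p_hash == t_hash and text[s:s + m] == pattern:   # verify on a hash hit
            matches.append(s)
        if s < n - m:                        # roll the window right by one character
            t_hash = ((t_hash - ord(text[s]) * h) * base + ord(text[s + m])) % mod
    return matches
```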

String Matching with Finite Automata


Explanation: String Matching with Finite Automata involves constructing a finite automaton that
recognizes the pattern. The automaton is then used to efficiently scan the text for occurrences of
the pattern.

Algorithm Steps:
1. Construct a finite automaton that recognizes the pattern.
2. Process the text using the automaton to detect occurrences of the pattern.
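
A minimal sketch that builds the transition table by brute force (the simple O(m³·|Σ|) construction; faster constructions exist):

```python
def build_automaton(pattern, alphabet):
    """delta[q][c]: next state after reading c in state q
    (state q = number of pattern characters matched so far)."""
    m = len(pattern)
    delta = [{} for _ in range(m + 1)]
    for q in range(m + 1):
        for c in alphabet:
            k = min(m, q + 1)
            # longest prefix of pattern that is a suffix of pattern[:q] + c
            while k and not (pattern[:q] + c).endswith(pattern[:k]):
                k -= 1
            delta[q][c] = k
    return delta

def automaton_search(text, pattern):
    delta = build_automaton(pattern, set(text) | set(pattern))
    q, matches = 0, []
    for i, c in enumerate(text):             # one transition per text character
        q = delta[q].get(c, 0)
        if q == len(pattern):                # accepting state: full match ends here
            matches.append(i - len(pattern) + 1)
    return matches
```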

Knuth-Morris-Pratt (KMP) Algorithm


Explanation: The Knuth-Morris-Pratt (KMP) Algorithm is a linear time, efficient string matching
algorithm that avoids unnecessary character comparisons by using information from previous
comparisons.

Algorithm Steps:
1. Preprocess the pattern to compute the longest proper prefix that is also a suffix for
each position.
2. Use the computed information to skip unnecessary character comparisons during the
string matching process.

Example: For the pattern "ABABCABAB" and text "ABABDABACDABABCABAB," the KMP
algorithm efficiently identifies occurrences of the pattern without unnecessary character
comparisons.
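
A minimal sketch of the failure-function preprocessing and the linear scan; on the example above it reports a single match, at index 10:

```python
def kmp_search(text, pattern):
    """Return all starting indices of pattern in text in O(n + m) time."""
    m = len(pattern)
    # pi[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it (the failure function)
    pi = [0] * m
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = pi[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    matches, q = [], 0
    for i, c in enumerate(text):
        while q and c != pattern[q]:
            q = pi[q - 1]                    # fall back instead of re-comparing
        if c == pattern[q]:
            q += 1
        if q == m:
            matches.append(i - m + 1)
            q = pi[q - 1]                    # keep scanning for overlapping matches
    return matches

# The example above: kmp_search("ABABDABACDABABCABAB", "ABABCABAB") -> [10]
```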

Summary:
Naïve String Matching Algorithm: Simple but less efficient method involving a systematic
comparison of the pattern with all substrings of the text.
Rabin-Karp Algorithm: Uses hashing and a rolling hash function for efficient pattern
matching.
String Matching with Finite Automata: Involves constructing a finite automaton for pattern
recognition.
Knuth-Morris-Pratt (KMP) Algorithm: Efficient linear time algorithm that avoids
unnecessary comparisons using information from previous matches.

Computational Complexity: Basic Concepts


Explanation: Computational Complexity is a field that studies the resources (time and space)
required by algorithms to solve computational problems. Basic concepts include:

Time Complexity: The amount of time an algorithm takes as a function of the input size.
Space Complexity: The amount of memory an algorithm uses as a function of the input
size.
Big-O Notation: Describes the upper bound of the growth rate of an algorithm's resource
usage.

Polynomial vs Non-Polynomial Complexity


Explanation: Polynomial complexity refers to algorithms whose running time is bounded by a
polynomial in the input size; such algorithms are conventionally considered efficient.
Super-polynomial (e.g., exponential) running times grow faster than any polynomial and are
considered intractable for large inputs.

Polynomial Time (P): The class of decision problems solvable in polynomial time.

Nondeterministic Polynomial Time (NP): The class of decision problems whose solutions can be
verified in polynomial time, even if no polynomial-time algorithm is known for finding them.
Note that NP stands for "nondeterministic polynomial", not "non-polynomial".

NP-hard & NP-complete Classes


Explanation: NP-hard (Nondeterministic Polynomial-time hard) problems are at least as hard as
the hardest problems in NP but may not be in NP themselves. NP-complete (Nondeterministic
Polynomial-time complete) problems are the hardest problems in NP.

NP-hard: Problems to which every problem in NP reduces in polynomial time; a
polynomial-time algorithm for any NP-hard problem would yield one for all problems in NP.
NP-complete: A problem is NP-complete if it is both in NP and NP-hard, making it one of
the hardest problems in NP.

Approximation Algorithms
Explanation: Approximation Algorithms provide near-optimal solutions for NP-hard problems
when finding an exact solution is impractical.

Greedy Approximation: Make locally optimal choices at each step, yielding a solution that
is provably close to optimal for many problems.
Randomized Approximation: Use randomness to improve the expected quality of the solution
or the probability of finding a good one.
Deterministic Polynomial-Time Approximation: Guarantees, in polynomial time, a solution
within a provable factor (the approximation ratio) of the optimal.
Example: The Traveling Salesperson Problem is NP-hard. When distances satisfy the triangle
inequality, a minimum-spanning-tree-based approximation algorithm (sketched below) returns
a tour at most twice the optimal length.
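
A minimal sketch of that MST-based 2-approximation for metric TSP (Python; builds the MST with Prim's algorithm, then visits cities in MST preorder; requires the triangle inequality):

```python
def tsp_mst_approximation(dist):
    """2-approximation for metric TSP: MST + preorder walk."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    cost_to_tree = [float("inf")] * n        # Prim's: cheapest edge into the tree
    cost_to_tree[0] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: cost_to_tree[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < cost_to_tree[v]:
                cost_to_tree[v], parent[v] = dist[u][v], u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    tour, stack = [], [0]                    # preorder walk of the MST gives the tour
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, length
```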

Summary:
Computational Complexity: Studies time and space resources required by algorithms.
Polynomial vs Non-Polynomial Complexity: Classifies algorithms based on their growth
rates.
NP-hard & NP-complete Classes: Categorize problems based on their complexity and
relationship to NP.
Approximation Algorithms: Provide near-optimal solutions for NP-hard problems.

Ford-Fulkerson Method
Explanation: The Ford-Fulkerson method is an algorithm for finding the maximum flow in a flow
network. It uses augmenting paths to incrementally improve the flow until an optimal solution is
reached.

Algorithm Steps:
1. Start with zero flow (which is always feasible).
2. Find an augmenting path in the residual graph.
3. Update the flow along the augmenting path.
4. Repeat steps 2 and 3 until no more augmenting paths can be found.
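
A minimal sketch using BFS to find augmenting paths (the Edmonds-Karp refinement of Ford-Fulkerson; capacity is an n x n matrix, and the function name is illustrative):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Maximum s-t flow via BFS augmenting paths (Edmonds-Karp)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS in the residual graph for an augmenting path from s to t
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                  # no augmenting path: flow is maximum
            return total
        # bottleneck: minimum residual capacity along the path
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment the flow along the path (reverse entries act as residuals)
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```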

Maximum Bipartite Matching


Explanation: Maximum Bipartite Matching is a graph theory problem where the goal is to find
a maximum set of edges in a bipartite graph such that no two chosen edges share a vertex.

Algorithm Steps:
1. Construct a bipartite graph.
2. Use algorithms like the Ford-Fulkerson method to find the maximum flow, which
corresponds to the maximum bipartite matching.
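
Equivalently, augmenting paths can be found directly on the bipartite graph without building the flow network; a minimal sketch of that approach (Kuhn's algorithm):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum matching size; adj[u] lists right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right             # match_right[v]: left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))
```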

Sorting Networks
Explanation: Sorting Networks are networks of comparators used for sorting a sequence of
elements. These networks guarantee correct sorting regardless of the input sequence.

Comparison Network: A network of comparators that compare and (if out of order) swap
pairs of elements in a fixed, data-independent pattern; a sorting network is a comparison
network that sorts every possible input.
Zero-One Principle: If a comparison network correctly sorts all 2ⁿ input sequences of 0s
and 1s, it correctly sorts all input sequences; this makes correctness checkable by brute
force, as in the sketch below.
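
A minimal sketch of such a zero-one check, representing a network as a list of comparator index pairs:

```python
from itertools import product

def sorts_all_zero_one_inputs(network, n):
    """Check a comparator network on every 0/1 input of length n.
    By the zero-one principle, passing all 2^n such inputs proves the
    network sorts arbitrary inputs."""
    for bits in product([0, 1], repeat=n):
        a = list(bits)
        for i, j in network:                 # apply each comparator in order
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]
        if a != sorted(a):
            return False
    return True

# a valid 3-input sorting network:
# sorts_all_zero_one_inputs([(0, 1), (1, 2), (0, 1)], 3) -> True
```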

Bitonic Sorting Network


Explanation: A Bitonic Sorting Network is a type of sorting network designed specifically for
sorting sequences that are bitonic (first increasing, then decreasing).

Algorithm Steps:
1. Recursively sort the first half in ascending order and the second half in descending
order, producing a bitonic sequence.
2. Apply a bitonic merge (a half-cleaner followed by recursive merges of the two halves)
to sort the entire sequence.
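
A minimal sketch of those two steps (Python; assumes the length n is a power of two, sorting in place):

```python
def bitonic_sort(a, lo=0, n=None, ascending=True):
    """Bitonic sorter for a[lo:lo+n], n a power of two."""
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        bitonic_sort(a, lo, m, True)         # first half ascending
        bitonic_sort(a, lo + m, m, False)    # second half descending -> bitonic
        bitonic_merge(a, lo, n, ascending)

def bitonic_merge(a, lo, n, ascending):
    """Sort a bitonic sequence a[lo:lo+n] into the given direction."""
    if n > 1:
        m = n // 2
        for i in range(lo, lo + m):          # half-cleaner: fixed comparator pattern
            if (a[i] > a[i + m]) == ascending:
                a[i], a[i + m] = a[i + m], a[i]
        bitonic_merge(a, lo, m, ascending)
        bitonic_merge(a, lo + m, m, ascending)
```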

Merging Network
Explanation: A Merging Network is a comparison network that combines two sorted sequences
into a single sorted sequence using a fixed, data-independent pattern of comparators;
Batcher's odd-even merge is the classic construction.

Algorithm Steps:
1. Recursively merge the even-indexed and the odd-indexed subsequences of the two inputs.
2. Apply a final column of comparators to adjacent pairs to fix the remaining inversions.
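
A minimal sketch of Batcher's odd-even merge (Python; assumes the two sorted halves occupy a[lo:lo+n] with n a power of two; r is the comparator stride):

```python
def compare_swap(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_merge(a, lo, n, r=1):
    """Merge the two sorted halves of a[lo:lo+n] into one sorted run."""
    step = r * 2
    if step < n:
        odd_even_merge(a, lo, n, step)       # merge the even-indexed subsequence
        odd_even_merge(a, lo + r, n, step)   # merge the odd-indexed subsequence
        for i in range(lo + r, lo + n - r, step):
            compare_swap(a, i, i + r)        # final column on adjacent pairs
    else:
        compare_swap(a, lo, lo + r)
```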

Example: In a Maximum Bipartite Matching problem, given two sets of elements with possible
connections between them, the goal is to find the maximum number of connections such that
no two connections share a common element.

Summary:
Ford-Fulkerson Method: Finds the maximum flow in a flow network.
Maximum Bipartite Matching: Finds the maximum number of non-overlapping
connections in a bipartite graph.
Sorting Networks: Networks of comparators for sorting sequences.
Bitonic Sorting Network: Specifically designed for sorting bitonic sequences.
Merging Network: Used for merging two sorted sequences.
