
ADSA
Q Justify how the KMP algorithm is more efficient than the naïve string-matching algorithm.
The Knuth-Morris-Pratt (KMP) algorithm is more efficient than the naïve
string-matching algorithm because it significantly reduces unnecessary
comparisons when searching for a pattern in a text. Here's a detailed
comparison between the two:

Naïve String-Matching Algorithm vs. KMP Algorithm:

Time Complexity:
Naïve: O(m*n), where m is the length of the pattern and n is the length of the text.
KMP: O(m + n) (linear time).

Handling Mismatches:
Naïve: starts over from the next character in the text after a mismatch.
KMP: uses the LPS (Longest Prefix Suffix) array to skip redundant comparisons.

Efficiency with Repeated Patterns:
Naïve: inefficient; performs many redundant comparisons.
KMP: efficient; avoids redundant comparisons by skipping over previously matched portions.

Pattern Shifting After Mismatch:
Naïve: always shifts by 1 position in the text.
KMP: shifts intelligently based on the LPS array.

Preprocessing Step:
Naïve: no preprocessing of the pattern.
KMP: preprocessing of the pattern using the LPS array in O(m) time.

Redundant Comparisons:
Naïve: yes, rechecks characters multiple times, especially with repetitive patterns.
KMP: no redundant comparisons, due to the use of the LPS array.

Ideal Use Case:
Naïve: small text and pattern sizes, or when performance is not critical.
KMP: large text, repetitive patterns, or performance-sensitive applications.

Practical Performance:
Naïve: can perform poorly in worst cases (e.g., patterns with repeated substrings).
KMP: performs consistently well, even with repetitive patterns.
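The LPS preprocessing and LPS-based shifting described above can be sketched in Python as follows (function names are illustrative):

```python
def compute_lps(pattern):
    """Build the LPS array: lps[i] = length of the longest proper prefix
    of pattern[:i+1] that is also a suffix of it."""
    lps = [0] * len(pattern)
    length = 0  # length of the previous longest prefix-suffix
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length > 0:
            length = lps[length - 1]  # fall back; do not advance i
        else:
            lps[i] = 0
            i += 1
    return lps

def kmp_search(text, pattern):
    """Return 0-based start indices of all occurrences of pattern in text."""
    lps = compute_lps(pattern)
    matches, j = [], 0  # j = number of pattern characters matched so far
    for i, ch in enumerate(text):
        while j > 0 and ch != pattern[j]:
            j = lps[j - 1]  # reuse the previously matched prefix
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):
            matches.append(i - j + 1)
            j = lps[j - 1]  # continue searching for further matches
    return matches
```

Note that the text index i never moves backwards, which is exactly why the total work is O(m + n).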

Q Applications of Floyd-Warshall

Routing Algorithms: The Floyd-Warshall algorithm is widely used in routing problems, for example in Internet routing scenarios of the kind handled by protocols such as OSPF (Open Shortest Path First). It can help determine the shortest route between two nodes in a network and is useful for finding the least congested path.

Airline Networks: The algorithm can be used in airline networks to find the shortest path between two cities with the lowest cost, helping airlines plan their routes and minimize fuel costs.

Traffic Networks: The algorithm is used to find the shortest path between points in a traffic network. It can help reduce congestion and improve the flow of traffic in urban areas.

Computer Networks: Floyd-Warshall is also used in computer networks to determine the shortest route between hosts in a network, helping minimize network latency and improve overall network performance.

Game Development: The algorithm can be used in game development to find the shortest path between two objects in a game world. It is useful in games where the player needs to navigate a complex environment, such as a maze or a city.
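All of these applications rest on the same all-pairs shortest-path computation. A minimal sketch of the algorithm itself (the classic triple loop over intermediate vertices):

```python
INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths. dist is an n x n distance matrix where
    dist[i][j] is the edge weight (INF if no edge, 0 on the diagonal).
    Mutates and returns the matrix of shortest-path distances."""
    n = len(dist)
    for k in range(n):          # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The running time is O(n³), which is why Floyd-Warshall suits dense networks of moderate size (routing tables, city graphs) rather than very large sparse graphs.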

Q Naïve String Matching

Naïve String Matching Algorithm
The naïve string matching algorithm is a simple and straightforward approach
for finding occurrences of a pattern within a text. It works by checking every
possible position in the text to see if the pattern matches the substring starting
at that position.

Algorithm Steps:
1. Start at the first position of the text.

2. Compare the pattern with the substring of the text at the current position.

3. If all characters match, record the position as a match.

4. Move the pattern by one position to the right in the text and repeat the
comparison until all positions have been checked.

Time Complexity:
Worst-case: O(m⋅n), where m is the length of the pattern and n is the
length of the text.

Example:
Given a text T = "ABAAABCD" and a pattern P = "ABC" , the algorithm compares P
with every substring of length 3 in T and reports a match at index 4 (0-based
index).
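The steps above can be sketched as a short Python function (illustrative):

```python
def naive_search(text, pattern):
    """Return 0-based start indices where pattern occurs in text."""
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):        # try every alignment of the pattern
        if text[i:i + m] == pattern:  # character-by-character comparison
            matches.append(i)
    return matches
```

For the example above, naive_search("ABAAABCD", "ABC") yields a single match at index 4.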

Advantages:
1. Simplicity: The algorithm is easy to understand and implement.

2. No preprocessing: Unlike more complex algorithms (e.g., KMP), the naïve algorithm does not require any preprocessing of the pattern or text, making it quick to start.

Disadvantages:
1. Inefficiency: In the worst case, the algorithm performs O(m⋅n) comparisons. This happens when the text and pattern share many matching initial characters but yield no complete match (e.g., searching for "AAAAB" in "AAAAAAAA").

2. Redundant Comparisons: The algorithm may perform many redundant comparisons, especially in cases where partial matches occur frequently (e.g., when the pattern contains repeating characters).

3. Not Optimal for Large Texts: For large texts or patterns, the algorithm
becomes inefficient compared to more advanced algorithms like KMP or
Boyer-Moore, which reduce unnecessary comparisons.
Q Rabin-Karp String Matching Algorithm
The Rabin-Karp algorithm is a string searching algorithm that uses hashing to
find any one of a set of pattern strings in a text. It is particularly efficient for
searching multiple patterns simultaneously. The main idea is to compute a hash
value for the pattern and compare it to the hash values of substrings in the
text.

Algorithm Steps:
1. Hash Calculation: Calculate the hash value of the pattern and the hash
value of the first substring of the text that has the same length as the
pattern.

2. Sliding Window: Move through the text, updating the hash value for the
next substring by removing the leading character and adding the trailing
character. This is done in constant time using a rolling hash technique.

3. Comparison: If the hash values match, perform a direct comparison of the characters to confirm the match (to handle hash collisions).

4. Repeat: Continue this process for each position in the text.

Time Complexity:
Average Case: O(n+m), where n is the length of the text and m is the length of the pattern.

Worst Case: O(n⋅m) due to hash collisions leading to character comparisons.

Example:
Given a text T = "ABCDABCE" and a pattern P = "ABC" , the Rabin-Karp algorithm
computes the hash of P and compares it with the hash of substrings of T . If
the hash matches, it checks the characters to confirm the match.
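The sliding-window hash update described in the steps above can be sketched as follows (the base and modulus here are arbitrary illustrative choices, not prescribed by the algorithm):

```python
def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    """Rabin-Karp search returning 0-based match indices.
    Uses a polynomial rolling hash: h(s) = sum(s[i] * base^(m-1-i)) mod mod."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, mod)  # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):            # initial hashes of pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        # verify characters only on a hash hit (guards against collisions)
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:  # slide window: drop text[i], append text[i + m]
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return matches
```

Each window's hash is derived from the previous one in O(1), which is what gives the O(n+m) average case.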

Advantages:
1. Multiple Pattern Search: The algorithm is efficient for searching multiple patterns at once, as it allows for simultaneous hash calculations for different patterns.

2. Average Case Efficiency: In practice, it can be very efficient, particularly for long texts and short patterns, due to the average-case linear time complexity.

3. Simple Implementation: The concept of using hashes is relatively straightforward, making the algorithm easier to implement than some advanced alternatives.

Disadvantages:
1. Hash Collisions: The worst-case performance can degrade to O(n⋅m) due to hash collisions, which necessitate direct comparisons.

2. Hash Function Complexity: Choosing an efficient hash function is crucial; a poorly designed hash function can lead to many collisions, reducing performance.

3. Preprocessing Overhead: While the algorithm is generally efficient, the overhead of computing hashes can be significant if the hash function is complex or if there are many patterns.
Q P and NP Classes

P Class

Definition: The class P (Polynomial time) consists of decision problems
(problems with a yes/no answer) that can be solved by a deterministic
Turing machine in polynomial time. This means there exists an algorithm
that can solve the problem in a time complexity of O(n^k) for some
constant k, where n is the size of the input.

Characteristics:

Problems in P can be solved efficiently (in polynomial time).

Examples of problems in P include:

Sorting (e.g., Merge Sort, Quick Sort)

Finding the shortest path in a graph (e.g., Dijkstra's algorithm)

Matrix multiplication

Basic arithmetic operations

NP Class
Definition: The class NP (Nondeterministic Polynomial time) consists of
decision problems for which a solution can be verified by a deterministic
Turing machine in polynomial time. Alternatively, a problem is in NP if a
hypothetical "nondeterministic" Turing machine can solve it in polynomial
time.

Characteristics:

Problems in NP may not be solvable in polynomial time, but if a solution (or a "certificate") is provided, it can be checked in polynomial time.

Examples of problems in NP include:

The Boolean satisfiability problem (SAT): Given a Boolean expression, determine if there is an assignment of true/false values to variables that makes the expression true.

The Traveling Salesman Problem (decision version): Given a list of cities and distances between them, is there a route that visits each city exactly once and returns to the origin city with a total distance less than a given value?

Hamiltonian path problem: Does there exist a path in a graph that
visits each vertex exactly once?
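The defining feature of NP, polynomial-time verification of a certificate, can be illustrated with the Hamiltonian path problem: checking a claimed path is easy even though finding one is believed to be hard. A minimal sketch (the graph representation here is an assumption for illustration):

```python
def verify_hamiltonian_path(graph, path):
    """Polynomial-time verifier for a Hamiltonian path certificate.
    graph: dict mapping each vertex to the set of its neighbours.
    path: the claimed ordering of vertices."""
    if sorted(path) != sorted(graph):           # every vertex, exactly once
        return False
    return all(path[i + 1] in graph[path[i]]    # consecutive vertices adjacent
               for i in range(len(path) - 1))
```

The verifier runs in time linear in the size of the graph, yet no polynomial-time algorithm is known for deciding whether such a path exists.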
Q Optimization Problem
An optimization problem is a mathematical problem that seeks to find the best
solution from a set of feasible solutions, according to a specified criterion or
objective function. In essence, the goal is to maximize or minimize a particular
quantity while satisfying a set of constraints.

Components of an Optimization Problem:

1. Objective Function: A function that needs to be optimized (maximized or minimized). It defines the goal of the optimization.

2. Decision Variables: Variables that affect the value of the objective function and can be adjusted to find the optimal solution.

3. Constraints: Restrictions or limitations on the decision variables, often represented as equations or inequalities.

Design Techniques to Solve Optimization Problems:

Several design techniques and algorithms can be used to solve optimization problems, including:

1. Linear Programming (LP):

Solves problems with a linear objective function and linear constraints.

Example: Maximizing profit given constraints on resources in a manufacturing process.

2. Integer Programming (IP):

A special case of linear programming where some or all of the decision variables are required to be integers.

Example: Scheduling problems where tasks must be assigned in whole units (e.g., assigning jobs to machines).

3. Dynamic Programming (DP):

Breaks down problems into simpler subproblems and solves each one only once, storing the solutions for future reference.

Example: The Knapsack problem, where items must be selected to maximize value without exceeding a weight limit.

4. Greedy Algorithms:

Make a series of choices that look best at the moment, aiming for a local optimum.

Example: Prim's algorithm for finding the minimum spanning tree of a graph.

5. Branch and Bound:

A systematic method for solving optimization problems by dividing them into smaller subproblems (branching) and evaluating their bounds to eliminate suboptimal solutions.

Example: Solving combinatorial optimization problems like the Traveling Salesman Problem.

6. Genetic Algorithms (GA):

Inspired by the process of natural selection, these algorithms use techniques such as selection, crossover, and mutation to evolve solutions over generations.

Example: Optimization problems in scheduling, routing, or design.

7. Simulated Annealing:

A probabilistic technique that searches for a good approximation to the global optimum of a function by exploring the solution space and allowing occasional increases in cost to escape local optima.

Example: The Traveling Salesman Problem and various scheduling problems.

8. Constraint Programming:

A technique used to solve combinatorial problems by specifying constraints and finding solutions that satisfy them.

Example: Scheduling and resource allocation problems.
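As one concrete illustration of the techniques above, a minimal simulated-annealing sketch for minimizing a one-dimensional function (the temperature schedule, neighbour step, and parameter values are arbitrary choices for illustration, not a tuned implementation):

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95, steps=500, seed=0):
    """Minimize f starting from x0. Worse moves are accepted with
    probability exp(-delta/temp), letting the search escape local optima;
    temp decays geometrically so late iterations become greedy."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-1, 1)      # random neighbour of current point
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand                        # accept (always, if it improves)
        if f(x) < f(best):
            best = x                        # track the best point seen so far
        temp *= cooling                     # cool the system
    return best
```

The same accept-worse-moves idea transfers directly to combinatorial problems such as TSP, where the "neighbour" is a small modification of the current tour.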

Examples of Optimization Problems:

1. Resource Allocation:

Problem: Allocate resources (e.g., budget, workforce) to maximize profit or minimize costs.

2. Traveling Salesman Problem (TSP):

Problem: Find the shortest possible route that visits a set of cities and returns to the origin city.

3. Knapsack Problem:

Problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight does not exceed a specified limit and the total value is maximized.

4. Network Flow Problem:

Problem: Optimize the flow of goods through a network to minimize costs or maximize throughput.

5. Portfolio Optimization:

Problem: Choose a mix of investment assets to maximize returns while minimizing risk.

6. Job Scheduling:

Problem: Schedule jobs on machines to minimize the total completion time or maximize machine utilization.
Q 0/1 Knapsack Problem Short Note

The 0/1 Knapsack Problem is a classic optimization problem where the objective is to select items to maximize total value without exceeding a given weight capacity. Given a set of n items, each with a weight wᵢ and a value vᵢ, and a knapsack with a maximum weight W, the goal is to maximize the sum of values of selected items while ensuring their total weight does not exceed W.
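The standard dynamic-programming solution can be sketched with a one-dimensional table (a compact, illustrative form of the usual n x W table):

```python
def knapsack_01(weights, values, W):
    """0/1 knapsack via DP: dp[w] = best value achievable with capacity w.
    Runs in O(n * W) time and O(W) space."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # iterate capacities in reverse so each item is used at most once
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```

For example, with weights [1, 3, 4, 5], values [1, 4, 5, 7], and W = 7, the optimum is 9 (take the items of weight 3 and 4).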
