ADSA IA2 Solution
DEPT :- IT
Class:- T.E Sem V
Teacher:- Prof. Jaymala Chavan
Q1.A) What is the longest common subsequence problem? Find the LCS for the following strings:
String X: ABCDGH
String Y: AEDFHR (OR)
ANS:-
The Longest Common Subsequence (LCS) problem is a classic dynamic programming
problem used in string matching, DNA sequence analysis, and file comparison. It involves
finding the longest subsequence common to two strings, where a subsequence is a sequence
derived from another by deleting some or no elements without changing the order of the
remaining elements.
Problem Statement
Input: Two strings X and Y of lengths m and n.
Output: The length of their longest common subsequence, and optionally the
subsequence itself.
Example Strings:
X="ABCDGH"
Y="AEDFHR"
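Applying the standard LCS dynamic-programming recurrence to these strings gives the longest common subsequence "ADH" (match A, then D, then H, in that order), so the LCS length is 3. A minimal Python sketch of the computation (the function name lcs and variable names are only illustrative) is:

def lcs(X, Y):
    # dp[i][j] = length of the LCS of X[:i] and Y[:j]
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1                 # characters match: extend the LCS
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])      # skip one character of X or Y
    # trace back through the table to recover one LCS string
    i, j, out = m, n, []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], "".join(reversed(out))

print(lcs("ABCDGH", "AEDFHR"))   # prints (3, 'ADH')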
Q) Write an approximation algorithm to find a vertex cover in G.
ANS:-
APPROX-VERTEX-COVER(G)
1: C ← Ø
2: E′ ← E
3: while E′ ≠ Ø do
4:    let (u, v) be an arbitrary edge of E′
5:    C ← C ∪ {u, v}
6:    remove from E′ every edge incident on either u or v
7: end while
8: return C
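As a quick illustration, a possible Python version of this procedure (the function name approx_vertex_cover and the edge-list representation of G are assumptions made for the sketch) is:

def approx_vertex_cover(edges):
    # edges: iterable of (u, v) pairs describing an undirected graph G = (V, E)
    cover = set()              # C <- empty set
    remaining = list(edges)    # E' <- E
    while remaining:
        u, v = remaining[0]    # pick an arbitrary edge of E'
        cover.update((u, v))   # C <- C U {u, v}
        # remove from E' every edge incident on either u or v
        remaining = [(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)]
    return cover

# Example: a path 1-2-3-4 plus the edge 2-4
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4), (2, 4)]))   # prints {1, 2, 3, 4}

Because both endpoints of each chosen edge are added to C, the cover returned is at most twice the size of an optimal vertex cover.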
A Genetic Algorithm (GA) is a search and optimization technique inspired by the process of
natural selection in biological evolution. It is part of a broader category of algorithms called
evolutionary algorithms, used to find approximate solutions to optimization and search
problems.
Key Concepts
1. Population:
o A set of potential solutions to the problem, represented as individuals
(chromosomes).
o Each individual encodes a candidate solution.
2. Fitness Function:
o Measures the quality of an individual’s solution.
o Higher fitness indicates a better solution.
3. Genetic Operators:
o Selection: Chooses individuals from the current population based on their fitness
to produce offspring.
Examples: Roulette-wheel selection, tournament selection.
o Crossover: Combines two parent solutions to produce new offspring.
Examples: Single-point crossover, multi-point crossover.
o Mutation: Randomly alters parts of an individual to maintain diversity.
Example: Flipping bits in binary encoding.
4. Evolution Process:
o A new generation is created by applying selection, crossover, and mutation on the
population.
o This process mimics natural evolution to improve the population over time.
Working of a Genetic Algorithm
1. Initialization:
o Generate an initial population randomly or based on heuristics.
2. Evaluation:
o Calculate the fitness of each individual in the population.
3. Selection:
o Select individuals with higher fitness as parents for the next generation.
4. Crossover:
o Combine parents to produce new offspring, incorporating genetic material from
both parents.
5. Mutation:
o Introduce small random changes to offspring to explore new solutions.
6. Replacement:
o Replace the old population with the new generation of offspring.
7. Termination:
o Stop the algorithm when a stopping condition is met, such as reaching a
maximum number of generations or finding a satisfactory solution.
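The working steps above can be illustrated with a minimal Python sketch that evolves fixed-length bit strings to maximise the number of 1-bits (the fitness function, population size, and rates here are illustrative assumptions, not part of the question):

import random

def fitness(ind):                      # assumed fitness: count of 1-bits
    return sum(ind)

def tournament(pop, k=3):              # selection: best of k random individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):                 # single-point crossover
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(ind, rate=0.05):            # bit-flip mutation
    return [1 - g if random.random() < rate else g for g in ind]

def genetic_algorithm(n_bits=20, pop_size=30, generations=50):
    # 1. Initialization: random population of bit strings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):       # 7. Termination: fixed number of generations
        # 2-6. Evaluation, selection, crossover, mutation, replacement
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = genetic_algorithm()
print(fitness(best), best)             # typically close to the all-ones string

Each pass of the loop performs evaluation, tournament selection, single-point crossover, bit-flip mutation, and replacement of the whole population, mirroring the steps listed above.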
Advantages
o Can search very large, complex, or poorly understood solution spaces.
o Does not require gradient or other derivative information about the objective.
o Works with a population of candidate solutions, which reduces the chance of getting trapped in a local optimum.
Applications
1. Optimization Problems:
o Traveling Salesman Problem (TSP)
o Knapsack Problem
2. Machine Learning:
o Hyperparameter tuning
o Feature selection
3. Engineering Design:
o Circuit design
o Structural optimization
4. Bioinformatics:
o DNA sequence alignment
5. Game Development:
o Evolving game-playing strategies and non-player character (NPC) behaviour.
ANS:-
The Travelling Salesman Problem (also known as the Travelling Salesperson Problem or TSP) is
an NP-hard graph computational problem where the salesman must visit all cities (denoted
using vertices in a graph) given in a set just once. The distances (denoted using edges in the
graph) between all these cities are known. We are requested to find the shortest possible route
in which the salesman visits all the cities and returns to the origin city.
Let us consider a graph G = (V,E), where V is a set of cities and E is a set of weighted edges. An
edge e(u, v) represents that vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.
Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this
is a partial tour. We certainly need to know j, since this will determine which cities are most
convenient to visit next. We also need to know all the cities visited so far, so that we don't
repeat any of them. Hence, this is an appropriate sub-problem.
For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of the
shortest path visiting each node in S exactly once, starting at 1 and ending at j.
When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j.
We should select the next-to-last city i so that the overall length is minimised, which gives the recurrence
C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S, i ≠ j }.
1. C({1}, 1) = 0
2. for s = 2 to n do
3.    for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1
4.       C(S, 1) = ∞
5.       for all j ∈ S and j ≠ 1
6.          C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }
7. return min over j of C({1, 2, 3, ..., n}, j) + d(j, 1)
In the dynamic programming algorithm for TSP, the number of possible subsets is at most 2^n, and for each subset we compute costs for up to n ending cities, each in linear time. Hence the total running time is O(2^n · n^2), which is exponential but far better than the O(n!) brute-force search over all tours.
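A compact Python sketch of this recurrence (the function name held_karp is illustrative; cities are numbered 0 to n−1 with city 0 as the start city, and dist is any n x n distance matrix) might look like this:

from itertools import combinations

def held_karp(dist):
    # dist[i][j] = distance from city i to city j; city 0 is the start/end city
    n = len(dist)
    # C[(S, j)] = length of the shortest path that starts at 0, visits every
    # city in frozenset S exactly once, and ends at j (city 0 is not stored in S)
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for s in range(2, n):
        for subset in combinations(range(1, n), s):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, i)] + dist[i][j]
                                for i in S if i != j)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

The worked example below evaluates the same recurrence by hand for a 5-city instance.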
Example:-
Consider 5 cities whose pairwise distances d[i, j] are given by the following matrix (rows and columns are the cities 1 to 5):

         1    2    3    4    5
   1     –   24   11   10    9
   2     8    –    2    5   11
   3    26   12    –    8    7
   4    11   23   24    –    6
   5     5    4    8   11    –
Solution:
Let us start our tour from city 1.
Step 1: Initially, we will find the distance between city 1 and each of the cities {2, 3, 4, 5} without visiting any
intermediate city.
Cost(x, S, z) represents the minimum distance from city x to city z, visiting every city in the set S on the way.
Cost(2, Φ, 1) = d[2, 1] = 24
Cost(3, Φ, 1) = d[3, 1] = 11
Cost(4, Φ , 1) = d[4, 1] = 10
Cost(5, Φ , 1) = d[5, 1] = 9
Step 2: In this step, we will find the minimum distance by visiting 1 city as an intermediate city.
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 2 + 11 = 13
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 5 + 10 = 15
Cost(2, {5}, 1) = d[2, 5] + Cost(5, Φ, 1) = 11 + 9 = 20
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 12 + 24 = 36
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 8 + 10 = 18
Cost(3, {5}, 1) = d[3, 5] + Cost(5, Φ, 1) = 7 + 9 = 16
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 23 + 24 = 47
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 24 + 11 = 35
Cost(4, {5}, 1) = d[4, 5] + Cost(5, Φ, 1) = 6 + 9 = 15
Cost(5, {2}, 1) = d[5, 2] + Cost(2, Φ, 1) = 4 + 24 = 28
Cost(5, {3}, 1) = d[5, 3] + Cost(3, Φ, 1) = 8 + 11 = 19
Cost(5, {4}, 1) = d[5, 4] + Cost(4, Φ, 1) = 11 + 10 = 21
Step 3: In this step, we will find the minimum distance by visiting 2 cities as intermediate cities.
Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1), d[2, 4] + Cost(4, {3}, 1) }
= min {2 + 18, 5 + 35} = min {20, 40} = 20
Cost(2, {4, 5}, 1) = min { d[2, 4] + Cost(4, {5}, 1), d[2, 5] + Cost(5, {4}, 1) }
= min {5 + 15, 11 + 21} = min {20, 32} = 20
Cost(2, {3, 5}, 1) = min { d[2, 3] + Cost(3, {5}, 1), d[2, 5] + Cost(5, {3}, 1) }
= min {2 + 16, 11 + 19} = min {18, 30} = 18
Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1), d[3, 4] + Cost(4, {2}, 1) }
= min {12 + 15, 8 + 47} = min {27, 55} = 27
Cost(3, {4, 5}, 1) = min { d[3, 4] + Cost(4, {5}, 1), d[3, 5] + Cost(5, {4}, 1) }
= min {8 + 15, 7 + 21} = min {23, 28} = 23
Cost(3, {2, 5}, 1) = min { d[3, 2] + Cost(2, {5}, 1), d[3, 5] + Cost(5, {2}, 1) }
= min {12 + 20, 7 + 28} = min {32, 35} = 32
Cost(4, {2, 3}, 1) = min { d[4, 2] + Cost(2, {3}, 1), d[4, 3] + Cost(3, {2}, 1) }
= min {23 + 13, 24 + 36} = min {36, 60} = 36
Cost(4, {3, 5}, 1) = min { d[4, 3] + Cost(3, {5}, 1), d[4, 5] + Cost(5, {3}, 1) }
= min {24 + 16, 6 + 19} = min {40, 25} = 25
Cost(4, {2, 5}, 1) = min { d[4, 2] + Cost(2, {5}, 1), d[4, 5] + Cost(5, {2}, 1) }
= min {23 + 20, 6 + 28} = min {43, 34} = 34
Cost(5, {2, 3}, 1) = min { d[5, 2] + Cost(2, {3}, 1), d[5, 3] + Cost(3, {2}, 1) }
= min {4 + 13, 8 + 36} = min {17, 44} = 17
Cost(5, {3, 4}, 1) = min { d[5, 3] + Cost(3, {4}, 1), d[5, 4] + Cost(4, {3}, 1) }
= min {8 + 18, 11 + 35} = min {26, 46} = 26
Cost(5, {2, 4}, 1) = min { d[5, 2] + Cost(2, {4}, 1), d[5, 4] + Cost(4, {2}, 1) }
= min {4 + 15, 11 + 47} = min {19, 58} = 19
Step 4: In this step, we will find the minimum distance by visiting 3 cities as intermediate cities.
Cost(2, {3, 4, 5}, 1) = min { d[2, 3] + Cost(3, {4, 5}, 1), d[2, 4] + Cost(4, {3, 5}, 1), d[2, 5] + Cost(5, {3, 4}, 1) }
= min {2 + 23, 5 + 25, 11 + 26} = min {25, 30, 37} = 25
Cost(3, {2, 4, 5}, 1) = min { d[3, 2] + Cost(2, {4, 5}, 1), d[3, 4] + Cost(4, {2, 5}, 1), d[3, 5] + Cost(5, {2, 4}, 1) }
= min {12 + 20, 8 + 34, 7 + 19} = min {32, 42, 26} = 26
Cost(4, {2, 3, 5}, 1) = min { d[4, 2] + Cost(2, {3, 5}, 1), d[4, 3] + Cost(3, {2, 5}, 1), d[4, 5] + Cost(5, {2, 3}, 1) }
= min {23 + 18, 24 + 32, 6 + 17} = min {41, 56, 23} = 23
Cost(5, {2, 3, 4}, 1) = min { d[5, 2] + Cost(2, {3, 4}, 1), d[5, 3] + Cost(3, {2, 4}, 1), d[5, 4] + Cost(4, {2, 3}, 1) }
= min {4 + 20, 8 + 27, 11 + 36} = min {24, 35, 47} = 24
Step 5: In this step, we will find the minimum distance by visiting 4 cities as intermediate cities.
Cost(1, {2, 3, 4, 5}, 1) = min { d[1, 2] + Cost(2, {3, 4, 5}, 1), d[1, 3] + Cost(3, {2, 4, 5}, 1), d[1, 4] + Cost(4, {2, 3, 5}, 1), d[1, 5] + Cost(5, {2, 3, 4}, 1) }
= min {24 + 25, 11 + 26, 10 + 23, 9 + 24} = min {49, 37, 33, 33} = 33
Cost(1, {2, 3, 4, 5}, 1) is minimum due to d[1, 4], so move from 1 to 4. Path = {1, 4}.
Cost(4, {2, 3, 5}, 1) is minimum due to d[4, 5], so move from 4 to 5. Path = {1, 4, 5}.
Cost(5, {2, 3}, 1) is minimum due to d[5, 2], so move from 5 to 2. Path = {1, 4, 5, 2}.
Cost(2, {3}, 1) uses d[2, 3], so move from 2 to 3. Path = {1, 4, 5, 2, 3}.
All cities are visited, so we come back to 1. Hence the optimum tour is 1 – 4 – 5 – 2 – 3 – 1, with total cost Cost(1, {2, 3, 4, 5}, 1) = 33.
ANS:-
The knapsack problem is a problem in combinatorial optimization: Given a set of items, each
with a weight and a value, determine the number of each item to include in a collection so that
the total weight is less than or equal to a given limit and the total value is as large as possible.
The 0/1 Knapsack problem can be solved using Dynamic Programming (DP), which involves
solving overlapping sub-problems and building the solution incrementally.
Construct a table with the items (together with their respective weights and profits) as rows and the knapsack capacities 0, 1, ..., W as columns.
Each cell c[i, w] stores the maximum cumulative profit obtainable from the first i items without exceeding the capacity w of that column.
We fill the 0th row and the 0th column with zeroes, because with no items there is nothing to pack, and with a knapsack capacity of 0 no item can be added to the knapsack.
The remaining cells are filled with the maximum profit achievable with respect to the items and the capacity corresponding to each column.
The formula to compute the profit values is:
c[i, w] = c[i − 1, w]                                           if w[i] > w
c[i, w] = max { c[i − 1, w], c[i − 1, w − w[i]] + P[i] }        if w[i] ≤ w
Inputs
Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the
following table.

Item      A   B   C   D
Profit    2   4   7  10
Weight    1   3   5   7

By computing all the values using the formula, the table obtained would be:

 i \ w    0   1   2   3   4   5   6   7   8
 0        0   0   0   0   0   0   0   0   0
 A        0   2   2   2   2   2   2   2   2
 B        0   2   2   4   6   6   6   6   6
 C        0   2   2   4   6   7   9   9  11
 D        0   2   2   4   6   7   9  10  12

The optimal solution is to pick items A and D (weights {1, 7}), giving the maximum profit c[4, 8] = 12.
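A minimal Python sketch of this table-filling procedure (the function name knapsack_01 and variable names are illustrative) that reproduces the result for the instance above:

def knapsack_01(weights, profits, W):
    n = len(weights)
    # c[i][w] = best profit using the first i items with capacity w
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                c[i][w] = c[i - 1][w]                 # item i does not fit
            else:
                c[i][w] = max(c[i - 1][w],            # skip item i
                              c[i - 1][w - weights[i - 1]] + profits[i - 1])  # take item i
    return c[n][W]

# Items A, B, C, D with the profits and weights from the table above, W = 8
print(knapsack_01([1, 3, 5, 7], [2, 4, 7, 10], 8))   # prints 12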