Daa Mid2
The control abstraction for the Greedy method can be described as follows:
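A minimal Python sketch of that control abstraction is given below. It is only a sketch: Select, Feasible and Union are problem-specific helpers that the caller has to supply (the names follow the usual textbook presentation and are placeholders here).

def greedy(candidates, select, feasible, union):
    """Generic greedy control abstraction.
    candidates: the candidate set to draw from
    select:     picks (and removes) the most promising candidate
    feasible:   checks whether the candidate can extend the partial solution
    union:      adds the candidate to the partial solution
    """
    solution = []                      # partial solution, initially empty
    while candidates:
        x = select(candidates)         # greedily choose the next candidate
        if feasible(solution, x):      # keep it only if the result stays feasible
            solution = union(solution, x)
    return solution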
2. Explain in detail about Greedy method and Write the applications of Greedy method.
The greedy method is one of the strategies, like divide and conquer, used to solve
problems. This method is used for solving optimization problems: problems that demand
either a maximum or a minimum result. Let us understand it through some terms.
o Candidate set: The set of elements from which a solution is created.
o Selection function: This function is used to choose the candidate or subset
that can be added to the solution.
o Feasibility function: A function that is used to determine whether the
candidate or subset can contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or
the partial solution.
o Solution function: This function is used to indicate whether a complete
solution has been reached or not.
The fractional knapsack problem is one of the techniques used to solve the
knapsack problem. In the fractional knapsack, items may be broken (taken in fractions) in
order to maximize the profit; a knapsack problem in which we may break items is known as
a fractional knapsack problem.
o The first approach is to select the item based on the maximum profit.
o The second approach is to select the item based on the minimum weight.
o The third approach is to calculate the ratio of profit/weight.
Examples
Solution
Step 1
Given, n = 5
Wi = {3, 3, 2, 5, 1}
Pi = {10, 15, 10, 20, 8}
Items     1     2     3     4     5
Profits   10    15    10    20    8
Pi/Wi     3.3   5     5     4     8
Step 2
Items     5     2     3     4     1
Profits   8     15    10    20    10
Pi/Wi     8     5     5     4     3.3
Step 3
Knapsack = {5, 2, 3}
The items picked so far weigh 1 + 3 + 2 = 6 kg, so the knapsack can still hold 4 kg,
but the next item has a weight of 5 kg and would exceed the capacity. Therefore, only
4 kg of the 5 kg item (a 4/5 fraction) is added to the knapsack.
Items      5     2     3     4      1
Profits    8     15    10    20     10
Knapsack   1     1     1     4/5    0
Maximum profit = 8 + 15 + 10 + (4/5) x 20 = 49
4. Find an optimal solution to the knapsack instance n =7, m = 15, (p1, p2, …p7) = (10, 5, 15, 7,
6, 18, 3) and (w1, w2, … w7) = (2,3,5,7,1,4,1)
To solve this problem, we use the greedy strategy: at each step we determine the fraction
of an item's weight to include so as to maximize the profit while filling the knapsack.
Sorting the items by decreasing Pi/Wi ratio gives the order 5, 1, 6, 3, 7, 2, 4. Taking
x5 = x1 = x6 = x3 = x7 = 1 uses 1 + 2 + 4 + 5 + 1 = 13 of the 15 units of capacity, so
∑ WiXi < m; the remaining 2 units are filled with x2 = 2/3 of item 2. The maximum profit is
therefore 6 + 10 + 18 + 15 + 3 + (2/3) x 5 = 55.33.
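A short Python sketch of this greedy strategy (the third approach: sort by profit/weight ratio), run on the instance above; it is an illustration, not a prescribed implementation.

def fractional_knapsack(profits, weights, capacity):
    # Consider items in decreasing order of profit/weight ratio.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)            # fraction of each item taken
    total, room = 0.0, capacity
    for i in order:
        if weights[i] <= room:          # the whole item fits
            x[i], room, total = 1.0, room - weights[i], total + profits[i]
        else:                           # take only the fitting fraction and stop
            x[i] = room / weights[i]
            total += profits[i] * x[i]
            break
    return x, total

p = (10, 5, 15, 7, 6, 18, 3)
w = (2, 3, 5, 7, 1, 4, 1)
print(fractional_knapsack(p, w, 15))
# x = (1, 2/3, 1, 0, 1, 1, 1), maximum profit = 55.33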
The sequencing of jobs on a single processor with deadline constraints is called Job
Sequencing with Deadlines.
Here-
Step-01:
Sort all the given jobs in decreasing order of their profit.
Step-02:
Check the value of maximum deadline.
Draw a Gantt chart where maximum time on Gantt chart is the value of maximum
deadline.
Step-03:
Pick up the jobs one by one.
Place each job on the Gantt chart as far away from 0 (i.e., as late) as possible while
ensuring that the job gets completed before its deadline.
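The three steps can be sketched in Python as below. The profit values for the example that follows are not legible in these notes, so the profits used here are purely illustrative assumptions, chosen only so that sorting by profit reproduces the order J4, J1, J3, J2, J5, J6 shown in Step-01.

def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit); returns (Gantt slots, total profit)."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # Step-01: sort by profit
    max_deadline = max(d for _, d, _ in jobs)               # Step-02
    slots = [None] * max_deadline                           # Gantt chart, one unit per slot
    total = 0
    for name, deadline, profit in jobs:                     # Step-03
        for t in range(deadline - 1, -1, -1):               # as late as possible, before deadline
            if slots[t] is None:
                slots[t], total = name, total + profit
                break
    return slots, total

# Deadlines from the problem below; the profits are illustrative assumptions.
jobs = [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
        ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)]
print(job_sequencing(jobs))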
Problem-
Jobs J1 J2 J3 J4 J5 J6
Deadlines 5 3 3 2 4 2
Solution-
Step-01:
Jobs J4 J1 J3 J2 J5 J6
Deadlines 2 5 3 3 4 2
Step-02:
7. Explain Prim’s Algorithm for finding minimal spanning tree with an example.
Solution
Step 1
Create a visited array to store all the visited vertices into it.
V={}
S→B=8
V = {S, B}
Step 2
Since B is the last visited, check for the least cost edge that is
connected to the vertex B.
B→A=9
B → C = 16
B → E = 14
V = {S, B, A}
Step 3
Since A is the last visited, check for the least cost edge that is
connected to the vertex A.
A → C = 22
A→B=9
A → E = 11
But A → B is already in the spanning tree, check for the next least
cost edge. Hence, A → E is added to the spanning tree.
V = {S, B, A, E}
Step 4
Since E is the last visited, check for the least cost edge that is
connected to the vertex E.
E → C = 18
E→D=3
V = {S, B, A, E, D}
Step 5
Since D is the last visited, check for the least cost edge that is
connected to the vertex D.
D → C = 15
E→D=3
V = {S, B, A, E, D, C}
Minimum cost of the spanning tree = 8 + 9 + 11 + 3 + 15 = 46
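A compact Python sketch of Prim's algorithm on the graph implied by the trace above. Only the edges actually mentioned in the steps are included, which is an assumption about the full figure.

import heapq

# Undirected edge costs taken from the steps above.
graph = {
    "S": {"B": 8},
    "B": {"S": 8, "A": 9, "C": 16, "E": 14},
    "A": {"B": 9, "C": 22, "E": 11},
    "E": {"A": 11, "B": 14, "C": 18, "D": 3},
    "D": {"E": 3, "C": 15},
    "C": {"B": 16, "A": 22, "E": 18, "D": 15},
}

def prim(graph, start):
    visited, total, tree = {start}, 0, []
    # heap of (cost, u, v) edges crossing the cut between visited and unvisited vertices
    heap = [(c, start, v) for v, c in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        cost, u, v = heapq.heappop(heap)
        if v in visited:
            continue                     # this edge would form a cycle
        visited.add(v)
        tree.append((u, v, cost))
        total += cost
        for w, c in graph[v].items():
            if w not in visited:
                heapq.heappush(heap, (c, v, w))
    return tree, total

print(prim(graph, "S"))   # total cost 8 + 9 + 11 + 3 + 15 = 46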
9. Explain Kruskal’s Algorithm for finding minimal spanning tree with an example.
Algorithm
1. Construct the forest of the graph on a plane with all the vertices in it.
2. Sort all the edges of the graph in ascending order of cost and store them in an
edge[] array.
3. Select the least cost edge from the edge[] array and add it into the forest of the
graph, provided it does not form a cycle. Mark the vertices visited by adding them
into the visited[] array.
4. Repeat step 3 until all the vertices are visited without any cycles forming in the
graph.
5. When all the vertices are visited, the minimum spanning tree is formed.
6. Calculate the minimum cost of the output spanning tree formed.
Examples
As the first step, sort all the edges in the given graph in an
ascending order and store the values in an array.
Edge   B→D   A→B   C→F   F→E   B→C   G→F   A→G   C→D   D→E   C→G
Cost    5     6     9    10    11    12    15    17    22    25
B→D=5
Minimum cost = 5
Visited array, v = {B, D}
Minimum cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}
The next edge from the least cost array is F → E = 10, followed by B → C = 11; both are
added to the output graph.
Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}
The last edge from the least cost array to be added in the output
graph is F → G = 12.
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}
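A Python sketch of the same procedure, using union-find for cycle detection and the sorted edge list from the example above.

# Sorted edge list from the example: (cost, u, v).
edges = [(5, "B", "D"), (6, "A", "B"), (9, "C", "F"), (10, "F", "E"),
         (11, "B", "C"), (12, "G", "F"), (15, "A", "G"), (17, "C", "D"),
         (22, "D", "E"), (25, "C", "G")]

parent = {}
def find(v):                        # union-find with path compression
    parent.setdefault(v, v)
    if parent[v] != v:
        parent[v] = find(parent[v])
    return parent[v]

total, tree = 0, []
for cost, u, v in edges:            # edges are already in ascending order of cost
    ru, rv = find(u), find(v)
    if ru != rv:                    # adding this edge does not form a cycle
        parent[ru] = rv
        tree.append((u, v, cost))
        total += cost

print(tree, total)                  # minimum cost 5 + 6 + 9 + 10 + 11 + 12 = 53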
10.
Assignment 4
For example, suppose there are 3 programs of lengths 2, 5 and 4 respectively. There are
a total of 3! = 6 possible orders of storage.
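A tiny Python sketch that enumerates all 6 orders and their total retrieval times; it illustrates why the greedy rule (store programs in non-decreasing order of length) is optimal for this instance.

from itertools import permutations

lengths = [2, 5, 4]                 # the three program lengths from the example

def total_retrieval_time(order):
    # program i's retrieval time is the sum of the lengths stored up to and including it
    return sum(sum(order[:i + 1]) for i in range(len(order)))

for order in permutations(lengths):              # all 3! = 6 storage orders
    print(order, total_retrieval_time(order))
# the order (2, 4, 5) is best: 2 + 6 + 11 = 19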
12.
Suppose we have 10^5 characters in a data file. Normal storage: 8 bits per character
(ASCII) - 8 x 10^5 bits in a file. But we want to compress the file and save it compactly.
Suppose only six characters appear in the file:
(i) Fixed length Code: Each letter represented by an equal number of bits. With a
fixed length code, at least 3 bits per character:
For example:
a 000
b 001
c 010
d 011
e 100
f 101
(ii) Variable length code: More frequent characters are given shorter codewords and less
frequent characters longer ones. For example:
a 0
b 101
c 100
d 111
e 1101
f 1100
Number of bits = (45 x 1 + 13 x 3 + 12 x 3 + 16 x 3 + 9 x 4 + 5 x 4) x 1000
= 2.24 x 10^5 bits
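A minimal Huffman-coding sketch in Python using a heap; the character frequencies (in thousands) are the ones used in the bit count above. The codewords it produces may differ from the table, but the code lengths and the total of about 2.24 x 10^5 bits are the same.

import heapq
from itertools import count

freq = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}   # in thousands

# Build the Huffman tree: repeatedly merge the two least frequent subtrees.
tie = count()                       # tie-breaker so heap tuples always compare
heap = [(f, next(tie), ch) for ch, f in freq.items()]
heapq.heapify(heap)
while len(heap) > 1:
    f1, _, left = heapq.heappop(heap)
    f2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))

codes = {}
def assign(node, prefix=""):        # walk the tree to read off the codewords
    if isinstance(node, tuple):
        assign(node[0], prefix + "0")
        assign(node[1], prefix + "1")
    else:
        codes[node] = prefix

assign(heap[0][2])
total_bits = sum(freq[ch] * len(code) for ch, code in codes.items()) * 1000
print(codes, total_bits)            # total_bits = 224000, i.e. 2.24 x 10^5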
14.
Assignment 4
Dijkstra's Algorithm is a Graph algorithm that finds the shortest path from a source
vertex to all other vertices in the Graph (single source shortest path). It is a type of Greedy
Algorithm that only works on Weighted Graphs having positive weights. The time complexity
of Dijkstra's Algorithm is O(V^2) with the help of the adjacency matrix representation of the
graph. This time complexity can be reduced to O((V + E) log V) with the help of an
adjacency list representation of the graph, where V is the number of vertices and E is the
number of edges in the graph.
The following is the step that we will follow to implement Dijkstra's Algorithm:
Step 1: First, we will mark the source node with a current distance of 0 and set the rest
of the nodes to INFINITY.
Step 2: We will then set the unvisited node with the smallest current distance as the
current node, suppose X.
Step 3: For each neighbor N of the current node X: We will then add the current
distance of X with the weight of the edge joining X-N. If it is smaller than the current
distance of N, set it as the new current distance of N.
Step 4: We will then mark the current node X as visited.
Step 5: We will repeat the process from 'Step 2' if there is any unvisited node left in
the graph.
Let us now understand the implementation of the algorithm with the help of an
example:
Figure 6: The Given Graph
1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because
the path to node A is 0 and the path from node A to B is 4, and
the minimum((0 + 4), INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because
the path to node A is 0 and the path from node A to C is 5, and
the minimum((0 + 5), INFINITY) is 5. Both the neighbors of node A are now
relaxed; therefore, we can move ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence,
we will visit node B and perform relaxation on its unvisited neighbors. After
performing relaxation, the path to node C will remain 5, whereas the path to
node E will become 11, and the path to node D will become 13.
8. We will now visit node C (the unvisited node with the least path, 5) and relax its
unvisited neighbor E; the path to node E improves from 11 to 8 because (5 + 3) is
smaller than 11.
9. We will now visit node E and perform relaxation on its neighboring nodes B, D,
and F. Since only node F is unvisited, it will be relaxed. Thus, the path to
node B will remain as it is, i.e., 4, the path to node D will also remain 13, and
the path to node F will become 14 (8 + 6).
10. Now we will visit node D, and only node F will be relaxed. However, the path to
node F will remain unchanged, i.e., 14.
11. Since only node F is remaining, we will visit it but not perform any relaxation as
all its neighboring nodes are already visited.
12. Once all the nodes of the graph are visited, the program will end.
The final shortest distances from source A are:
1. A = 0
2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)
6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)
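A Python sketch of Dijkstra's algorithm on the graph reconstructed from the walkthrough (edge weights A-B = 4, A-C = 5, B-D = 9, B-E = 7, C-E = 3 and E-F = 6 are taken from the relaxations above; the edges E-D and D-F are mentioned but their weights are not stated, so they are omitted here, which does not change any final distance).

import heapq

graph = {                           # undirected graph reconstructed from the example
    "A": {"B": 4, "C": 5},
    "B": {"A": 4, "D": 9, "E": 7},
    "C": {"A": 5, "E": 3},
    "D": {"B": 9},
    "E": {"B": 7, "C": 3, "F": 6},
    "F": {"E": 6},
}

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}     # Step 1: INFINITY everywhere ...
    dist[source] = 0                            # ... except the source
    heap, visited = [(0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)              # Step 2: smallest current distance
        if u in visited:
            continue
        visited.add(u)                          # Step 4: mark as visited
        for v, w in graph[u].items():           # Step 3: relax the neighbours
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(graph, "A"))
# {'A': 0, 'B': 4, 'C': 5, 'D': 13, 'E': 8, 'F': 14}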
16.
Dijkstra's algorithm Solution for given problem
Unit 4
Control abstraction:
Dynamic programming is an optimization technique that solves a problem by combining the
solutions of its subproblems. Overlapping subproblems and optimal substructure
are the two properties a problem must have for dynamic programming to apply.
This optimization typically reduces the time complexity from exponential to
polynomial. Matrix chain multiplication, the travelling salesman problem and the
longest common subsequence are three typical applications of
dynamic programming.
Dynamic programming is a very powerful technique. It is a mathematical
optimization method as well as a computer programming method.
Dynamic programming (DP) splits the large problem at every
possible point. When the problem becomes sufficiently small, DP
solves it directly.
Dynamic programming is a bottom-up approach: it finds the solutions of
the smallest subproblems and constructs the solution of the larger
problem from the already solved smaller ones.
To avoid recomputation of the same subproblem, DP saves the result of each
subproblem in a table. When the same subproblem is
encountered again, the answer is retrieved from the table by a lookup.
Examples
The following computer problems can be solved using the dynamic programming approach:
o Matrix chain multiplication
o Longest common subsequence
o Travelling salesman problem
o 0/1 knapsack problem
o Single-source shortest paths (Bellman-Ford)
o Reliability design
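As a small illustration of the save-in-a-table-and-look-up idea described above (a generic example, not one of the syllabus problems): computing Fibonacci numbers with memoization.

table = {0: 0, 1: 1}           # table of already solved subproblems

def fib(n):
    if n not in table:         # solve each subproblem only once ...
        table[n] = fib(n - 1) + fib(n - 2)
    return table[n]            # ... afterwards the answer is a table lookup

print(fib(40))                 # 102334155, in O(n) time instead of O(2^n)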
3. Explain how Matrix chain Multiplication problem can be solved using Dynamic
Programming with suitable example.
4. "Find the minimum no of operations required for the following Chain Matrix
Multiplication using dynamic programming.
A = 50 X 10, B = 10 X 40, C = 40 X 30 and D = 30 X 5
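No worked solution is given in the notes for this instance, so here is a bottom-up DP sketch; m[i][j] holds the minimum number of scalar multiplications needed to compute the chain Ai..Aj, and for question 4 it works out to 10500, obtained with the parenthesization A(B(CD)).

# Dimensions for A (50x10), B (10x40), C (40x30), D (30x5):
# matrix i has dimensions p[i-1] x p[i].
p = [50, 10, 40, 30, 5]
n = len(p) - 1
m = [[0] * (n + 1) for _ in range(n + 1)]

for length in range(2, n + 1):                  # chain length
    for i in range(1, n - length + 2):
        j = i + length - 1
        m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                      for k in range(i, j))     # best place to split the chain

print(m[1][n])                                  # 10500 scalar multiplications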
Assignment 5
5.
Assignment 5
7.
8.
Assignment 5
9. Explain Bellman-Ford single shortest path problem with example.
Since the graph has six vertices, there will be five iterations.
First iteration
Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the
relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 6
Since (0 + 6) is less than ∞, so update
d(v) = 0 + 6 = 6
Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the
relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 4
d(v) = 0 + 4 = 4
Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the
relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 5
d(v) = 0 + 5 = 5
Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the
relaxing formula:
d(u) = 6
d(v) = ∞
c(u , v) = -1
d(v) = 6 - 1= 5
Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the
relaxing formula:
d(u) = 4
d(v) = 5
c(u , v) = 3
Since (4 + 3) is greater than 5, there is no update to d(v).
Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the
relaxing formula:
d(u) = 5
d(v) = 4
c(u , v) = -2
d(v) = 5 - 2 = 3
Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the
relaxing formula:
d(u) = 5
d(v) = ∞
c(u , v) = -1
d(v) = 5 - 1 = 4
Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the
relaxing formula:
d(u) = 5
d(v) = 4 (updated by the previous edge (D, F))
c(u , v) = 3
Since (5 + 3) is greater than 4, there is no update to d(v).
Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the
relaxing formula:
d(u) = 3
d(v) = 6
c(u , v) = -2
d(v) = 3 - 2 = 1
In the second iteration, we again check all the edges. The first edge is (A, B). Since (0
+ 6) is greater than 1 so there would be no updation in the vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation
in the vertex C.
The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the
vertex D.
The next edge is (B, E). Since (1 - 1) equals 0, which is less than 5, update:
d(E) = 1 - 1 = 0
The next edge is (C, E). Since (3 + 3) equals 6, which is greater than the current
d(E) = 0, there would be no updation in the vertex E.
The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the
vertex C.
The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the
vertex F.
The next edge is (E, F). Since (0 + 3) equals 3, which is less than 4, update:
d(F) = 0 + 3 = 3
The next edge is (C, B). Since (3 - 2) equals 1, there would be no updation in the
vertex B.
Third iteration
We will perform the same steps as we did in the previous iterations. We will observe
that there will be no updation in the distance of vertices.
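A Python sketch of Bellman-Ford on the edge list used in the iterations above: it simply relaxes every edge V - 1 = 5 times. Its final distances agree with the trace (including the update of vertex F noted in the second iteration).

# Directed edges (u, v, cost) in the order they are considered above.
edges = [("A", "B", 6), ("A", "C", 4), ("A", "D", 5), ("B", "E", -1),
         ("C", "E", 3), ("D", "C", -2), ("D", "F", -1), ("E", "F", 3),
         ("C", "B", -2)]
vertices = {"A", "B", "C", "D", "E", "F"}

dist = {v: float("inf") for v in vertices}
dist["A"] = 0                                   # source vertex
for _ in range(len(vertices) - 1):              # five iterations
    for u, v, c in edges:                       # relax every edge
        if dist[u] + c < dist[v]:
            dist[v] = dist[u] + c

print(dict(sorted(dist.items())))
# {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}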
10.
Assignment - 5
Now x3 = 1, profit = 15
So total profit = 34 + 15 = 49
And weight so far = 7 + 5 = 12
Now x7 = 1, profit = 3
So total profit = 49 + 3 = 52
And weight so far = 12 + 1 = 13
13. Describe travelling sales person problem and discuss how it solves using Dynamic
Programming.
The TSP describes a scenario where a salesman is required to travel between n cities. He
wishes to travel to all locations exactly once and he must finish at his starting point. The
order in which the cities are visited is not important but he wishes to minimize the distance
traveled.
1 2 3
1 0 10 15
2 5 0 9
3 6 13 0
4 8 8 9
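A dynamic-programming (Held-Karp) sketch for a 4-city instance. The fourth column of the cost matrix above is not legible in these notes, so the values 20, 10, 12, 0 used below are illustrative assumptions; with them the sketch reports a minimum tour cost of 35 (tour 1 -> 2 -> 4 -> 3 -> 1).

from itertools import combinations

# Cost matrix for cities 1..4; the last column is an assumed completion of the
# partially legible matrix above.
cost = [[0, 10, 15, 20],
        [5,  0,  9, 10],
        [6, 13,  0, 12],
        [8,  8,  9,  0]]
n = len(cost)

# dp[(S, j)] = cheapest way to start at city 0, visit every city in the set S
# exactly once (S always contains j), and end at city j.
dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
for size in range(2, n):
    for subset in combinations(range(1, n), size):
        S = frozenset(subset)
        for j in S:
            dp[(S, j)] = min(dp[(S - {j}, k)] + cost[k][j] for k in S if k != j)

best = min(dp[(frozenset(range(1, n)), j)] + cost[j][0] for j in range(1, n))
print(best)        # 35 with the assumed matrix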
14.
Assignment 5
15. Describe reliability design problem with an example.
16.
Let's say we have to set up a system consisting of devices D1, D2, D3, ..., Dn, and
each device has some cost C1, C2, C3, ..., Cn. If each device has a
reliability of 0.9 and there are 4 devices, then the entire system has a reliability equal to
the product of the reliabilities of all devices, i.e., Πri = (0.9)^4 ≈ 0.65.
It means that the system has roughly a 35% chance of failing due to the failure of any one
device. The problem is that we want to construct a system whose reliability is
maximum.
Device   Cost   Reliability
P1       30     0.9
P2       15     0.8
P3       20     0.5
Explanation:
Given that we have total cost C = 105,
sum of all Ci = 30 + 15 + 20 = 65, the remaining amount we can use to buy a copy of
each device in such a way that the reliability of the system, may increase.
Remaining amount = C – sum of Ci = 105 – 65 = 40
Now, let us calculate how many copies of each device we can buy with $40. If we
spend all $40 on device 1, we can buy floor(40/30) = 1 extra copy, and we already have
one copy, so overall 2 copies of device 1. In general, the upper bound on the number of
copies of device i is:
ui = floor( (C - ∑Cj) / Ci ) + 1   (1 is added because we already have one copy of each
device)
C1=30, C2=15, C3=20, C=105
r1=0.9, r2=0.8, r3=0.5
u1 = floor( (105 - (30 + 15 + 20)) / 30 ) + 1 = floor(40/30) + 1 = 2
u2 = floor(40/15) + 1 = 3
u3 = floor(40/20) + 1 = 3
A tuple is just an ordered pair containing the reliability and the total cost of a particular
choice of the mi's made so far. We can form such (reliability, cost) pairs for each stage,
writing jSi for the set obtained when j copies of device i are used.
S0 = {(1,0)}
Device 1:
Each Si is obtained from Si-1 by trying out all possible numbers of copies of device i and
combining the resulting tuples together.
let us consider P 1 :
1S1 = {(0.9, 30)} where 0.9 is the reliability of stage1 with a
copy of one device and 30 is the cost of P 1.
Now, two copies of device1 so, we can take one more copy as:
2S1 = { (0.99, 60) } where 0.99 is the reliability of stage one with
two copies of the device; we can see that it will come as: 1 – ( 1 –
r1 )^2 = 1 – (1 – 0.9)^2 = 1 – 0.01 = 0.99.
After combining both conditions of Stage1 i.e., with copy one and copy of
2 devices respectively.
S1 = { ( 0.9, 30 ), ( 0.99, 60 ) }
Device 2:
S2 will contain all reliability and cost pairs that we will get by taking all possible
values for the stage2 in conjunction with possibilities calculated in S 1.
First of all we will check the reliability at stage2 when we have 1, 2, and 3 as a copy
of device. let us assume that Ei is the reliability of the particular stage with n number
of devices, so for S2 we first calculate:
E2 (with copy 1) = 1 – ( 1 – r2 )^1 = 1 – ( 1 – 0.8 ) = 0.8
E2 (with copy 2) = 1 – ( 1 – r2 )^2 = 1 – ( 1 – 0.8 )^2 = 0.96
E2 (with copy 3) = 1 – ( 1 – r2 )^3 = 1 – ( 1 – 0.8 )^3 = 0.992
If we use 1 copy of P 1 and 1 copy of P2 reliability will be 0.9*0.8 and the cost will be
30+15
One Copy of Device two , 1S2 = { (0.8, 15) } Conjunction with S1 (0.9, 30) = {
(0.72,45) }
Similarly, we can calculate other pairs as S2 = { ( 0.72, 45 ), ( 0.792, 75 ), ( 0.864, 60 ),
( 0.98, 90 ) }
We get ordered pair (0.98,90) in S2 when we take 2 copies of Device1and 2 copies of
Device2 However, with the remaining cost of 15 (105 – 90), we cannot use device
Device3 (we need a minimum of 1 copy of every device in any stage), therefore (
0.792, 75) should be discarded and other ordered pairs like it. We get S 2 = { ( 0.72,
45 ), ( 0.864, 60 ), ( 0.98,90 ) }. There are other possible ordered pairs too, but all of
them exceed cost limitations.
Up to this point we got ordered pairs:
S1 = { ( 0.9, 30), ( 0.99, 60 ) }
S2 = { ( 0.72, 45 ), ( 0.864, 60 ), ( 0.98,90 )}
Device 3:
First of all we will check the reliability at stage3 when we have 1, 2, and 3 as a copy
of device. Ei is the reliability of the particular stage with n number of devices, so for
S3 we first calculate:
E3 (with copy 1) = 1 – ( 1 – r3 )^1 = 1 – ( 1 – 0.5 ) = 0.5
E3 (with copy 2) = 1 – ( 1 – r3 )^2 = 1 – ( 1 – 0.5 )^2 = 0.75
E3 (with copy 3) = 1 – ( 1 – r3 )^3 = 1 – ( 1 – 0.5 )^3 = 0.875
Now, possible ordered pairs of device three are as- S3 = { ( 0.36, 65), ( 0.396, 95), (
0.432, 80), ( 0.54, 85), ( 0.648, 100 ), ( 0.63, 105 ) }
(0.648,100) is the solution pair, 0.648 is the maximum reliability we can get under
the cost constraint of 105.
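A brute-force Python sketch of the same search: it tries every combination of copies within the upper bounds u1 = 2, u2 = 3, u3 = 3, keeps only the combinations that fit the budget, and picks the most reliable one. This enumerates directly what the staged sets S1, S2, S3 compute incrementally.

from itertools import product

C = 105                          # total budget
cost = [30, 15, 20]              # C1, C2, C3
r    = [0.9, 0.8, 0.5]           # r1, r2, r3
u    = [2, 3, 3]                 # upper bound on the copies of each device

best = (0.0, None)
for copies in product(*(range(1, ui + 1) for ui in u)):   # at least 1 copy of each
    total_cost = sum(m * c for m, c in zip(copies, cost))
    if total_cost > C:
        continue                                           # over the budget
    reliability = 1.0
    for m, ri in zip(copies, r):
        reliability *= 1 - (1 - ri) ** m   # stage with m parallel copies: 1 - (1 - r)^m
    best = max(best, (reliability, copies))

print(best)    # copies (1, 2, 2): reliability ≈ 0.648 at cost 100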
Unit 5
1.Write the control flow of a Backtracking Algorithm and explain in detail the problems that can
be solved using this approach.
Backtracking is one of the techniques that can be used to solve a problem. We can
write an algorithm using this strategy. It uses a brute-force search: for the given problem,
we try to generate all the possible solutions and pick out the best (or all desired) solutions.
This idea is also followed in dynamic programming, but dynamic programming is used
for solving optimization problems. In contrast, backtracking is generally not used for
optimization problems; it is used when we have multiple solutions and we
require all those solutions.
The name backtracking itself suggests going forward and coming back: if the current
choice satisfies the condition we continue and eventually return success, else we go back
and try another choice. It is used to solve a problem in which a sequence of objects is
chosen from a specified set so that the sequence satisfies some criteria.
When we have multiple choices, then we make the decisions from the available
choices. In the following cases, we need to use the backtracking algorithm:
o A piece of sufficient information is not available to make the best choice, so we
use the backtracking strategy to try out all the possible solutions.
o Each decision leads to a new set of choices. Then again, we backtrack to make
new decisions. In this case, we need to use the backtracking strategy.
We start with a start node. First, we move to node A. Since it is not a feasible solution,
we move to the next node, i.e., B. B is also not a feasible solution, and it is a dead
end, so we backtrack from node B to node A.
Suppose another path exists from node A to node C. So, we move from node A to
node C. It is also a dead end, so we again backtrack from node C to node A, and then
from node A back to the starting node.
Now we check whether any other path exists from the starting node. So, we move from
the start node to node D. Since it is not a feasible solution, we move from node D
to node E. Node E is also not a feasible solution. It is a dead end, so we backtrack
from node E to node D.
Suppose another path exists from node D to node F. So, we move from node D to
node F. Since it is not a feasible solution and it's a dead end, we check for another
path from node F.
Suppose there is another path from node F to node G, so we move from node F
to node G. Node G is a success node.
The terms related to the backtracking are:
o Live node: The nodes that can be further generated are known as live nodes.
o E node: The node whose children are currently being generated (expanded) is known as
the E-node.
o Success node: The node is said to be a success node if it provides a feasible
solution.
o Dead node: The node which cannot be further generated and also does not
provide a feasible solution is known as a dead node.
2.Solve the 8-Queen problem with the help of Backtracking. Show the complete set of possible
cases in this approach.
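No worked solution for this question appears in these notes, so the following is an illustrative backtracking sketch for N-queens (here N = 8). It places one queen per row and backtracks whenever a column or diagonal clashes; for N = 8 it finds all 92 solutions.

def solve_queens(n):
    solutions, cols, diag1, diag2 = [], set(), set(), set()
    board = []                                   # board[r] = column of the queen in row r

    def place(row):
        if row == n:                             # all queens placed: record a solution
            solutions.append(board[:])
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                         # attacked square: try the next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            board.append(col)
            place(row + 1)                       # go one row deeper
            board.pop()                          # backtrack: undo the placement
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)

    place(0)
    return solutions

solutions = solve_queens(8)
print(len(solutions))       # 92 solutions for the 8-queens problem
print(solutions[0])         # one of them, as a column position per row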
3.Solve the following Sum of Subsets problem. Let w = {5,7,10,12,15,18,20} and m = 35. Find all
possible subsets of w that sum to m.
Let us run the algorithm on first instance w = {5, 7, 10, 12, 15, 18,
20}.
Items in subset         Condition   Comment
{}                      0           Initial condition
{ 5 }                   5 < 35      Select 5 and add the next element
{ 5, 7 }                12 < 35     Select 7 and add the next element
{ 5, 7, 10 }            22 < 35     Select 10 and add the next element
{ 5, 7, 10, 12 }        34 < 35     Select 12 and add the next element
{ 5, 7, 10, 12, 15 }    49 > 35     Sum exceeds m, so backtrack and remove 15 and 12
{ 5, 7, 10, 15 }        37 > 35     Sum exceeds m, so backtrack and remove 15 and 10
{ 5, 7, 12 }            24 < 35     Add next element
{ 5, 7, 12, 15 }        39 > 35     Subset sum exceeds m, so backtrack
{ 5, 10 }               15 < 35     Add next element
{ 5, 10, 12 }           27 < 35     Add next element
{ 5, 10, 12, 15 }       42 > 35     Subset sum exceeds m, so backtrack
{ 5, 10, 15 }           30 < 35     Add next element
{ 5, 10, 15, 18 }       48 > 35     Subset sum exceeds m, so backtrack
{ 5, 10, 18 }           33 < 35     Add next element
{ 5, 10, 18, 20 }       53 > 35     Subset sum exceeds m, so backtrack
{ 5, 10, 20 }           35          Solution found
Solution:
Items in subset     Condition   Comment
{}                  0           Initial condition
{ 11 }              11 < 31     Add next element
{ 11, 13 }          24 < 31     Add next element
{ 11, 13, 24 }      48 > 31     Subset sum exceeds m, so backtrack
{ 11, 13, 7 }       31          Solution found
In the above graph, the black circle shows the correct result. The
gray node shows where the algorithm backtracks. Numbers in the
leftmost column indicate elements under consideration at that level.
The left and right branches represent the inclusion and exclusion of
that element, respectively.
We get two solutions:
{11, 13, 7}
{24, 7}
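A backtracking sketch for the first instance (w = {5, 7, 10, 12, 15, 18, 20}, m = 35). Unlike the trace above, which stops at the first solution, this version reports every subset that sums to m.

def sum_of_subsets(w, m):
    solutions = []

    def explore(i, chosen, total):
        if total == m:                     # condition met: record this subset
            solutions.append(list(chosen))
            return
        if i == len(w) or total > m:       # dead end: ran out of items or exceeded m
            return                         # (backtrack)
        chosen.append(w[i])                # left branch: include w[i]
        explore(i + 1, chosen, total + w[i])
        chosen.pop()                       # backtrack
        explore(i + 1, chosen, total)      # right branch: exclude w[i]

    explore(0, [], 0)
    return solutions

print(sum_of_subsets([5, 7, 10, 12, 15, 18, 20], 35))
# [[5, 10, 20], [5, 12, 18], [7, 10, 18], [15, 20]]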
5.Briefly explain the function that is used to find the next color in graph coloring problem.
Step 1 − Arrange the vertices of the graph in some order.
Step 2 − Choose the first vertex and color it with the first color.
Step 3 − Choose the next vertex and color it with the lowest numbered color
that has not been colored on any vertices adjacent to it. If all the adjacent
vertices are colored with this color, assign a new color to it. Repeat this step
until all the vertices are colored.
Example
In the above figure, at first vertex a is colored red. As the adjacent vertices of
vertex a (b and d) are also adjacent to each other, vertex b and vertex d are colored with
different colors, green and blue respectively. Then vertex c is colored red, as no
adjacent vertex of c is colored red. Hence, we could color the graph with 3 colors.
Hence, the chromatic number of the graph is 3.
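A sketch of the 'lowest numbered color' rule from Step 3. The adjacency list below is an assumed reconstruction of the 4-vertex figure described above (edges a-b, a-d, b-c, b-d, c-d); colors 0, 1, 2 stand for red, green, blue.

graph = {"a": ["b", "d"], "b": ["a", "c", "d"],
         "c": ["b", "d"], "d": ["a", "b", "c"]}   # assumed from the figure

def greedy_coloring(graph, order):
    color = {}
    for v in order:                                # Steps 2 and 3
        used = {color[u] for u in graph[v] if u in color}
        c = 0
        while c in used:                           # lowest numbered color not used
            c += 1                                 # by any colored neighbour
        color[v] = c
    return color

print(greedy_coloring(graph, ["a", "b", "d", "c"]))
# {'a': 0, 'b': 1, 'd': 2, 'c': 0}  -> three colors, as in the example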
7.
In an undirected graph, a Hamiltonian path is a path that visits each vertex
exactly once, and a Hamiltonian cycle (or circuit) is a Hamiltonian path such that
there is an edge from the last vertex back to the first vertex.
1. Initialization: Start with an empty path that contains only the starting vertex.
2. Build the Path: Extend the path by adding one vertex at a time, ensuring that
the vertices are visited only once. At each step, choose a neighboring vertex
that hasn't been visited yet. If no unvisited neighbors are available for the
current vertex, backtrack to the previous vertex and explore other options.
3. Termination Condition: Continue extending the path until all vertices are
included, and the path returns to the starting vertex. If the path includes all
vertices, you have found a Hamiltonian Cycle. If it doesn't, backtrack to the
previous vertex and explore other possibilities.
4. Backtracking: During the backtracking phase, when there are no valid choices
for the next vertex, remove the last added vertex from the path and mark it as
unvisited. Then, try other unvisited neighbors of the previous vertex.
5. Repeat: Keep repeating steps 2-4 until all possibilities are explored.
6. Output: If a Hamiltonian Cycle is found, it will be the output of the algorithm.
Otherwise, the algorithm will indicate that there is no Hamiltonian Cycle in the
graph.
8. Write an algorithm to determine the Hamiltonian Cycle in a given graph using backtracking
The Hamiltonian cycle is a cycle in the graph which visits all the
vertices of the graph exactly once and terminates at the starting node. It
may not include all the edges.
The Hamiltonian cycle problem is the problem of finding a
Hamiltonian cycle in a graph if any such cycle exists.
The input to the problem is an undirected, connected
graph. For the graph shown in Figure (a), the path A – B – E –
D – C – A forms a Hamiltonian cycle. It visits all the vertices
exactly once, but does not use the edge <B, D>.
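A backtracking sketch following the steps listed above, run on a graph reconstructed from the description of Figure (a): the cycle A - B - E - D - C - A plus the unused edge <B, D>. The adjacency list is therefore partly an assumption about the figure.

graph = {"A": ["B", "C"], "B": ["A", "E", "D"], "C": ["D", "A"],
         "D": ["E", "C", "B"], "E": ["B", "D"]}

def hamiltonian_cycle(graph, start):
    n = len(graph)
    path = [start]                                # step 1: path containing only the start

    def extend():
        if len(path) == n:                        # step 3: all vertices included?
            return start in graph[path[-1]]       # the cycle must close back to the start
        for v in graph[path[-1]]:                 # step 2: try unvisited neighbours
            if v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()                        # step 4: backtrack
        return False

    return path + [start] if extend() else None

print(hamiltonian_cycle(graph, "A"))   # ['A', 'B', 'E', 'D', 'C', 'A']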
FIFO branch and bound always use the oldest node in the queue to
extend the branch. This leads to a breadth-first search, where all
nodes at depth d are visited first, before any nodes at depth d+1 are
visited.
In LIFO (last in, first out), the branch is always extended from the
youngest node in the queue. This results in a depth-first search, in
which the branch is extended via every first child encountered at a
certain depth until a leaf node is found.
In LC (lowest cost) method, according to a cost function, the branch
is extended by the node with the lowest extra costs. As a result, the
cost function determines how to traverse the search space.
The Branch and Bound method employs either BFS or DFS search.
During BFS, expanded nodes are kept in a queue, whereas in DFS,
nodes are kept on the stack. BFS approach is the first in, first out
(FIFO) method, while DFS is the last in, first out (LIFO). In the FIFO
branch and bound, expanded nodes are inserted into the queue, and
the first generated node becomes the next live node.
The Branch and Bound method can be classified into three types based on the order
in which the state space tree is searched.
1. FIFO Branch and Bound
2. LIFO Branch and Bound
3. Least Cost-Branch and Bound
We will now discuss each of these methods in more detail. To denote the solutions
in these methods, we will use a solution vector.
First-In-First-Out is an approach to the branch and bound problem that uses the
queue approach to create a state-space tree. In this case, the breadth-first search is
performed, that is, the elements at a certain level are all searched, and then the
elements at the next level are searched, starting with the first child of the first node
at the previous level.
For a given set {A, B, C, D}, the state space tree will be constructed as follows :
The above diagram shows that we first consider element A, then element B, then
element C and finally we’ll consider the last element which is D. We are performing
BFS while exploring the nodes.
So, once the first level is completed, we consider the choices at the next level. If we take
the first element A, then we can next consider B, C, or D. If we follow the route in which
only elements A and D are selected, it means that we are selecting elements A and D and
we are not considering elements B and C.
Selecting element A
Now, we will expand node 3, as we have considered element B and not considered
element A, so, we have two options to explore that is elements C and D. Let’s create
nodes 9 and 10 for elements C and D respectively.
Considered element B and not considered element A
Now, we will expand node 4 as we have only considered elements C and not
considered elements A and B, so, we have only one option to explore which is
element D. Let’s create node 11 for D.
Considered elements C and not considered elements A and B
Till node 5, we have only considered elements D, and not selected elements A, B,
and C. So, We have no more elements to explore, Therefore on node 5, there won’t
be any expansion.
Now, we will expand node 6 as we have considered elements A and B, so, we have
only two option to explore that is element C and D. Let’s create node 12 and 13 for
C and D respectively.
Expand node 6
Now, we will expand node 7 as we have considered elements A and C and not
consider element B, so, we have only one option to explore which is element D.
Let’s create node 14 for D.
Expand node 7
Till node 8, we have considered elements A and D, and not selected elements B and
C, So, We have no more elements to explore, Therefore on node 8, there won’t be
any expansion.
Now, we will expand node 9 as we have considered elements B and C and not
considered element A, so, we have only one option to explore which is element D.
Let’s create node 15 for D.
Expand node 9
The Last-In-First-Out approach for this problem uses stack in creating the state
space tree. When nodes are added to a state space tree, they are added to a stack.
After all nodes of a level have been added, we pop the topmost element from the
stack and explore it.
For a given set {A, B, C, D}, the state space tree will be constructed as follows :
State space tree for element {A, B, C, D}
Now the expansion would be based on the node that appears on the top of the stack.
Since node 5 appears on the top of the stack, so we will expand node 5. We will pop
out node 5 from the stack. Since node 5 is in the last element, i.e., D so there is no
further scope for expansion.
The next node that appears on the top of the stack is node 4. Pop-out node 4 and
expand. On expansion, element D will be considered and node 6 will be added to the
stack shown below:
Expand node 4
The next node is 6 which is to be expanded. Pop-out node 6 and expand. Since node
6 is in the last element, i.e., D so there is no further scope for expansion.
The next node to be expanded is node 3. Since node 3 works on element B so node 3
will be expanded to two nodes, i.e., 7 and 8 working on elements C and D
respectively. Nodes 7 and 8 will be pushed into the stack.
The next node that appears on the top of the stack is node 8. Pop-out node 8 and
expand. Since node 8 works on element D so there is no further scope for the
expansion.
Expand node 3
The next node that appears on the top of the stack is node 7. Pop-out node 7 and
expand. Since node 7 works on element C so node 7 will be further expanded to
node 9 which works on element D and node 9 will be pushed into the stack.
Expand node 7
The next node that appears on the top of the stack is node 9. Since node 9 works on
element D, there is no further scope for expansion.
The next node that appears on the top of the stack is node 2. Since node 2 works on
element A, it means that node 2 can be further expanded. It can be expanded
into three nodes named 10, 11, 12 working on elements B, C, and D respectively.
These new nodes will be pushed into the stack as shown below:
Expand node 2
In the above method, we explored all the nodes using the stack that follows the LIFO
principle.
To explore the state space tree, this method uses the cost function. The previous two
methods also calculate the cost function at each node, but the cost is not used
for further exploration.
In this technique, nodes are explored based on their costs, the cost of the node can be
defined using the problem and with the help of the given problem, we can define the
cost function. Once the cost function is defined, we can define the cost of the node.
Now, Consider a node whose cost has been determined. If this value is greater than
U0, this node or its children will not be able to give a solution. As a result, we can
kill this node and not explore its further branches. As a result, this method prevents
us from exploring cases that are not worth it, which makes it more efficient for us.
Let’s first consider node 1 having cost infinity shown below:
In the following diagram, node 1 is expanded into four nodes named 2, 3, 4, and 5.
Assume that cost of the nodes 2, 3, 4, and 5 are 12, 16, 10, and 315 respectively.
In this method, we will explore the node which is having the least cost. In the above
figure, we can observe that the node with a minimum cost is node 4. So, we will
explore node 4 having a cost of 10.
During exploring node 4 which is element C, we can notice that there is only one
possible element that remains unexplored which is D (i.e, we already decided not to
select elements A, and B). So, it will get expanded to one single element D, let’s say
this node number is 6.
Exploring node 4 which is element C
Now, Node 6 has no element left to explore. So, there is no further scope for
expansion. Hence the element {C, D} is the optimal way to choose for the least cost.
11. Draw the portion of the state space tree generated by LC Branch and Bound for the knapsack
instance: n=5, (p1, p2, ..., p5) = (10, 15, 6, 8, 4), (w1, w2, ..., w5) = (4, 6, 3, 4, 2), and m = 12.
12. Generate LC branch and bound solution for the given knapsack problem, m = 15,
n = 4, (P1, P2, P3,P4) = (10, 10,12,18) and (w1, w2, w3,w4) = (2,4,6,9).
Let us use the fixed tuple formulation. The search begins with the root node as the E-node.
cp - current profit, cw - current weight, k - number of decisions made so far, m - capacity of the knapsack.

Algorithm Ubound(cp, cw, k, m)
{
    b := cp; c := cw;
    for i := k + 1 to n do
    {
        if (c + w[i] <= m) then
        {
            c := c + w[i];
            b := b - p[i];
        }
    }
    return b;
}

Function U(.) for the knapsack problem (profits are negated, so maximizing profit becomes minimizing cost).

Node 1: Ubound(0, 0, 0, 15): i=1: c=2, b=-10; i=2: c=6, b=-20; i=3: c=12, b=-32; i=4: item 4 does not fit.
U(1) = -32, c^(1) = -32 + (3/9) x 18 = -38 (to calculate the lower bound we allow fractions).
Node 2: Ubound(-10, 2, 1, 15), x1 = 1: i=2: c=6, b=-20; i=3: c=12, b=-32; i=4: does not fit.
U(2) = -32, c^(2) = -32 + (3/9) x 18 = -38.
Node 3: Ubound(0, 0, 1, 15): i=2: c=4, b=-10; i=3: c=10, b=-22; i=4: does not fit.
U(3) = -22, c^(3) = -22 + (5/9) x 18 = -32.
Node 4: Ubound(-20, 6, 2, 15): i=3: c=12, b=-32; i=4: does not fit.
U(4) = -32, c^(4) = -32 + (3/9) x 18 = -38.
Node 5: Ubound(-10, 2, 2, 15): i=3: c=8, b=-22; i=4: does not fit.
U(5) = -22, c^(5) = -22 + (7/9) x 18 = -36.
Node 6: Ubound(-32, 12, 3, 15): i=4: does not fit. U(6) = -32, c^(6) = -32 + (3/9) x 18 = -38.
Node 7: Ubound(-20, 6, 3, 15): i=4: c=15, b=-38. U(7) = -38, c^(7) = -38.
Node 8: Ubound(-38, 15, 4, 15): U(8) = -38, c^(8) = -38.
Node 9: Ubound(-20, 6, 4, 15): U(9) = -20, c^(9) = -20.
Node 8 is a solution node. Solution vector: x1 = 1, x2 = 1, x3 = 0, x4 = 1.
p1x1 + p2x2 + p3x3 + p4x4 = 10 x 1 + 10 x 1 + 12 x 0 + 18 x 1 = 38.
We need to consider all n items; the tuple size is n.
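The same search can be sketched in Python. The version below keeps profits positive and uses the fractional-fill bound c^ directly (so it maximizes rather than minimizing negated profits, but the nodes it explores correspond to the trace above); it is an illustrative sketch, not the textbook pseudocode.

import heapq

# Instance of question 12; items are already in non-increasing profit/weight order,
# which the fractional bound below relies on.
p = [10, 10, 12, 18]
w = [2, 4, 6, 9]
m = 15
n = len(p)

def cbar(cp, cw, k):
    """Optimistic bound for a node where items 0..k-1 are fixed (profit cp, weight cw):
    fill the rest greedily, allowing a fraction of the first item that does not fit."""
    bound, cap = cp, cw
    for i in range(k, n):
        if cap + w[i] <= m:
            cap, bound = cap + w[i], bound + p[i]
        else:
            bound += p[i] * (m - cap) / w[i]
            break
    return bound

best, best_x = 0, []
heap = [(-cbar(0, 0, 0), 0, 0, 0, [])]   # (-bound, level, profit, weight, decisions)
while heap:
    neg_b, k, cp, cw, x = heapq.heappop(heap)
    if k == n:                           # answer node: a complete assignment
        if cp > best:
            best, best_x = cp, x
        continue
    if -neg_b <= best:                   # bound cannot beat the best answer found
        continue
    if cw + w[k] <= m:                   # left child: include item k
        heapq.heappush(heap, (-cbar(cp + p[k], cw + w[k], k + 1),
                              k + 1, cp + p[k], cw + w[k], x + [1]))
    heapq.heappush(heap, (-cbar(cp, cw, k + 1), k + 1, cp, cw, x + [0]))  # exclude item k

print(best, best_x)                      # 38 [1, 1, 0, 1]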
13.
Now, we will reduce the matrix. We will subtract the minimum value with all the
elements of a row. First, we evaluate the first row. Let's assume two variables, i.e., i and
j, where 'i' represents the rows, and 'j' represents the columns.
When i = 0, j =0
M[0][0] = ∞-10= ∞
When i = 0, j = 1
M[0][1] = 20 - 10 = 10
When i = 0, j = 2
M[0][2] = 30 - 10 = 20
When i = 0, j = 3
M[0][3] = 10 - 10 = 0
When i = 0, j = 4
M[0][4] = 11 - 10 = 1
The matrix is shown below after the evaluation of the first row:
When i = 1, j =0
M[1][0] = 15-2= 13
When i = 1, j = 1
M[1][1] = ∞ - 2= ∞
When i = 1, j = 2
M[1][2] = 16 - 2 = 14
When i = 1, j = 3
M[1][3] = 4 - 2 = 2
When i = 1, j = 4
M[1][4] = 2 - 2 = 0
The matrix is shown below after the evaluation of the second row:
When i = 2, j =0
M[2][0] = 3-2= 1
When i = 2, j = 1
M[2][1] = 5 - 2= 3
When i = 2, j = 2
M[2][2] = ∞ - 2 = ∞
When i = 2, j = 3
M[2][3] = 2 - 2 = 0
When i = 2, j = 4
M[2][4] = 4 - 2 = 2
The matrix is shown below after the evaluation of the third row:
When i = 3, j =0
M[3][0] = 19-3= 16
When i = 3, j = 1
M[3][1] = 6 - 3= 3
When i = 3, j = 2
M[3][2] = 18 - 3 = 15
When i = 3, j = 3
M[3][3] = ∞ - 3 = ∞
When i = 3, j = 4
M[3][4] = 3 - 3 = 0
The matrix is shown below after the evaluation of the fourth row:
When i = 4, j =0
M[4][0] = 16-4= 12
When i = 4, j = 1
M[4][1] = 4 - 4= 0
When i = 4, j = 2
M[4][2] = 7 - 4 = 3
When i = 4, j = 3
M[4][3] = 16 - 4 = 12
When i = 4, j = 4
M[4][4] = ∞ - 4 = ∞
The matrix is shown below after the evaluation of the fifth row:
The above matrix is the reduced matrix with respect to the rows.
Now we reduce the matrix with respect to the columns. Before reducing the matrix, we
first find the minimum value of all the columns. The minimum value of first column is
1, the minimum value of the second column is 0, the minimum value of the third
column is 3, the minimum value of the fourth column is 0, and the minimum value of
the fifth column is 0, as shown in the below matrix:
When i = 0, j =0
M[0][0] = ∞-1= ∞
When i = 1, j = 0
M[1][0] = 13 - 1= 12
When i = 2, j = 0
M[2][0] = 1 - 1 = 0
When i = 3, j = 0
M[3][0] = 16 - 1 = 15
When i = 4, j = 0
M[4][0] = 12 - 1 = 11
The matrix is shown below after the evaluation of the first column:
Since the minimum value of the first and the third columns is non-zero, we will evaluate
only first and third columns. We have evaluated the first column. Now we will evaluate
the third column.
When i = 0, j =2
M[0][2] = 20-3= 17
When i = 1, j = 2
M[1][2] = 14 - 3 = 11
When i = 2, j = 2
M[2][2] = ∞ - 3 = ∞
When i = 3, j = 2
M[3][2] = 15 - 3 = 12
When i = 4, j = 2
M[4][2] = 3 - 3 = 0
The matrix is shown below after the evaluation of the third column:
The above is the reduced matrix. The sum of the row minimums is 21 and the sum of the
column minimums is 4. Therefore, the total reduction cost (the lower bound) is 21 + 4 = 25.
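A short Python check of this reduction. The cost matrix below is the one the subtractions above start from; the function subtracts each row minimum and then each column minimum and returns the total reduction, which is the lower bound for the root node.

INF = float("inf")
# Original cost matrix recovered from the subtractions above.
M = [[INF, 20, 30, 10, 11],
     [15, INF, 16,  4,  2],
     [ 3,  5, INF,  2,  4],
     [19,  6, 18, INF,  3],
     [16,  4,  7, 16, INF]]

def reduce_matrix(M):
    total = 0
    for row in M:                                  # subtract each row minimum
        m = min(row)
        total += m
        for j in range(len(row)):
            row[j] -= m                            # (INF - m stays INF)
    for j in range(len(M)):                        # subtract each column minimum
        m = min(row[j] for row in M)
        total += m
        for row in M:
            row[j] -= m
    return total

print(reduce_matrix(M))       # 21 + 4 = 25, the lower bound for the root node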
14. By taking your own 5x5 cost matrix explain the working of Travelling sales person problem
using Branch and Bound.
Above answer
NP-Hard: To solve this problem, it does not have to be in NP.
NP-Complete: To solve this problem, it must be both in NP and NP-hard.
A language B is NP-complete if it satisfies two conditions:
o B is in NP
o Every A in NP is polynomial time reducible to B.
NP-Complete Problems
Following are some NP-Complete problems, for which no
polynomial time algorithm is known.
NP-Hard Problems
The following problems are NP-Hard