
UNIT-3(PART-2)

1. Give the control abstraction for Greedy method.

The control abstraction for the Greedy method can be described as follows:

1. Initialization: Initialize the solution to an empty or default state. This can involve setting up any necessary data structures and initializing variables.
2. Selection of Next Element: Choose the next element or component to add
to the solution. In the context of a Greedy algorithm, this is typically the
element that appears to be the best choice at the current step. The choice is
based on a heuristic or a specific criterion.
3. Feasibility Check: Check whether adding the selected element to the current
solution is feasible. This step ensures that the chosen element does not violate
any constraints or requirements.
4. Update Solution: If the selected element is deemed feasible, add it to the
current solution or update the solution accordingly.
5. Termination Condition: Check if the termination condition is met. This
condition can vary depending on the specific problem being solved. It could
involve reaching a specific solution quality, covering all elements, or other
criteria.
6. Repeat: If the termination condition is not met, return to step 2 to select and
add the next element. This process iterates until the termination condition is
satisfied.
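As a rough illustration, this control abstraction can be written as the following Python sketch; select, feasible, and is_complete are assumed problem-specific hooks, not fixed names from any library:

def greedy(candidates, select, feasible, is_complete):
    solution = []                         # 1. initialization
    while candidates and not is_complete(solution):
        x = select(candidates)            # 2. pick the locally best candidate
        candidates.remove(x)
        if feasible(solution, x):         # 3. feasibility check
            solution.append(x)            # 4. update the solution
    return solution                       # 5./6. loop ends when the termination condition is met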

2. Explain in detail about Greedy method and Write the applications of Greedy method.

The greedy method is one of the strategies, like divide and conquer, used to solve problems. This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand it through some terms.

The greedy method is the simplest and most straightforward approach. It is not an algorithm, but a technique. The main feature of this approach is that each decision is taken on the basis of the currently available information; whatever information is currently present, the decision is made without worrying about the effect of that decision in the future.

The components that can be used in the greedy algorithm are:

o Candidate set: The set of elements from which a solution is created.
o Selection function: This function is used to choose the candidate or subset that can be added to the solution.
o Feasibility function: A function that is used to determine whether the candidate or subset can be used to contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or the partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached or not.

Applications of Greedy Algorithm

o It is used in finding the shortest path.


o It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o This algorithm is also used to solve the fractional knapsack problem.

3. Explain Fractional Knapsack Problem with an example.

The fractional knapsack problem is one of the variants of the knapsack problem. In the fractional knapsack, items may be broken (taken in fractions) in order to maximize the profit. The problem in which we are allowed to break an item is known as the fractional knapsack problem.

There are basically three approaches to solve the problem:

o The first approach is to select the item based on the maximum profit.
o The second approach is to select the item based on the minimum weight.
o The third approach is to calculate the ratio of profit/weight.

Examples

 For the given set of items and a knapsack capacity of 10 kg, find the subset of the items to be added to the knapsack such that the profit is maximum.

Items 1 2 3 4 5

Weights (in kg) 3 3 2 5 1

Profits 10 15 10 20 8

Solution
Step 1

Given, n = 5

Wi = {3, 3, 2, 5, 1}
Pi = {10, 15, 10, 12, 8}

Calculate Pi/Wi for all the items

Items 1 2 3 4 5

Weights (in kg) 3 3 2 5 1

Profits 10 15 10 20 8

Pi/Wi 3.3 5 5 4 8

Step 2

Arrange all the items in descending order based on Pi/Wi

Items 5 2 3 4 1

Weights (in kg) 1 3 2 5 3

Profits 8 15 10 20 10

Pi/Wi 8 5 5 4 3.3

Step 3

Without exceeding the knapsack capacity, insert the items into the knapsack in this order, taking the highest-ratio items first.

Knapsack = {5, 2, 3}

However, the knapsack can still hold 4 kg, but the next item, item 4, weighs 5 kg and would exceed the capacity. Therefore, only 4 kg of its 5 kg (a 4/5 fraction) is added to the knapsack.
Items 5 2 3 4 1

Weights (in kg) 1 3 2 5 3

Profits 8 15 10 20 10

Knapsack 1 1 1 4/5 0

Hence, the knapsack holds the weights = [(1 × 1) + (1 × 3) + (1 × 2) + (4/5 × 5)] = 10 kg, with a maximum profit of [(1 × 8) + (1 × 15) + (1 × 10) + (4/5 × 20)] = 49.
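As a minimal sketch, the ratio-based greedy (the third approach) can be coded as follows; run on the example above, it prints the same profit of 49.0:

def fractional_knapsack(weights, profits, capacity):
    # consider items in decreasing order of profit/weight ratio
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total = 0.0
    for i in order:
        if capacity == 0:
            break
        take = min(weights[i], capacity)          # whole item or a fraction
        total += profits[i] * (take / weights[i])
        capacity -= take
    return total

print(fractional_knapsack([3, 3, 2, 5, 1], [10, 15, 10, 20, 8], 10))  # 49.0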

4. Find an optimal solution to the knapsack instance n =7, m = 15, (p1, p2, …p7) = (10, 5, 15, 7,
6, 18, 3) and (w1, w2, … w7) = (2,3,5,7,1,4,1)

To solve this problem, we use the greedy strategy to determine the fractions of the weights to include so as to maximize the profit while filling the knapsack.

(x1, x2, x3, x4, x5, x6, x7) ∑wixi ∑pixi

(1) (1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8) 5.51 15.76

Now taking the maximum profit 18 (with weight 4) as x6 = 1, with ∑wixi < m:

(2) (1/2, 1/3, 1/4, 1/5, 1/6, 1, 1/8) 8.51 31.19

(3) (1/2, 1/3, 1, 1/5, 1/6, 1, 1/8) 12.69 42.44

(4) (1, 1/3, 1, 1/5, 1/6, 1, 1/8) 13.69 47.44

(5) (1, 1/3, 1, 1/5, 1, 1, 1/8) 14.52 52.44

(6) (1, 1/3, 1, 1/5, 1, 1, 1) 15 54.67

(7) (1, 2/3, 1, 0, 1, 1, 1) 15 55.33

At each step, we try to get the maximum profit. The maximum profit is obtained by step (7), taking

x1 = 1, x2 = 2/3, x3 = 1, x4 = 0, x5 = 1, x6 = 1, and x7 = 1.

These fractions of the weights provide the maximum profit, ∑pixi = 55.33, with the knapsack filled exactly to m = 15.

5. Explain Job Sequencing with deadlines with an example.

The sequencing of jobs on a single processor with deadline constraints is called Job Sequencing with Deadlines.

Here-

 You are given a set of jobs.


 Each job has a defined deadline and some profit associated with it.
 The profit of a job is given only when that job is completed within its deadline.
 Only one processor is available for processing all the jobs.
 The processor takes one unit of time to complete a job.
Greedy Algorithm is adopted to determine how the next job is selected for an optimal
solution.
The greedy algorithm described below always gives an optimal solution to the job
sequencing problem-

Step-01:
 Sort all the given jobs in decreasing order of their profit.
Step-02:
 Check the value of maximum deadline.
 Draw a Gantt chart where maximum time on Gantt chart is the value of maximum
deadline.
Step-03:
 Pick up the jobs one by one.
 Put the job on Gantt chart as far as possible from 0 ensuring that the job gets
completed before its deadline.
Problem-

Given the jobs, their deadlines and associated profits as shown-

Jobs J1 J2 J3 J4 J5 J6

Deadlines 5 3 3 2 4 2

Profits 200 180 190 300 120 100


Answer the following questions-
1. Write the optimal schedule that gives maximum profit.
2. Are all the jobs completed in the optimal schedule?
3. What is the maximum earned profit?

Solution-

Step-01:

Sort all the given jobs in decreasing order of their profit-

Jobs J4 J1 J3 J2 J5 J6

Deadlines 2 5 3 3 4 2

Profits 300 200 190 180 120 100

Step-02:

Value of maximum deadline = 5, so the Gantt chart runs from 0 to 5 time units.

Step-03:

Pick the jobs in the sorted order and place each one as late as possible before its deadline:
 J4 (deadline 2) is placed in slot [1, 2].
 J1 (deadline 5) is placed in slot [4, 5].
 J3 (deadline 3) is placed in slot [2, 3].
 J2 (deadline 3): slots [2, 3] and [1, 2] are occupied, so it goes in slot [0, 1].
 J5 (deadline 4) is placed in slot [3, 4].
 J6 (deadline 2): both slots before its deadline are occupied, so it is rejected.

Answers:
1. The optimal schedule is J2, J4, J3, J5, J1.
2. No, all the jobs are not completed; job J6 could not be scheduled within its deadline.
3. Maximum earned profit = 180 + 300 + 190 + 120 + 200 = 990 units.
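A short sketch of this greedy procedure in Python, applied to the jobs above; each job is placed in the latest free slot before its deadline:

def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit)
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # Step-01
    max_deadline = max(j[1] for j in jobs)                  # Step-02
    slots = [None] * (max_deadline + 1)                     # slots[1..max_deadline]
    profit = 0
    for name, deadline, p in jobs:                          # Step-03
        for t in range(deadline, 0, -1):                    # latest free slot
            if slots[t] is None:
                slots[t] = name
                profit += p
                break
    return slots[1:], profit

schedule, profit = job_sequencing(
    [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
     ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)])
print(schedule, profit)   # ['J2', 'J4', 'J3', 'J5', 'J1'] 990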


6.

notes

7. Explain Prim’s Algorithm for finding minimal spanning tree with an example.

Prim’s minimal spanning tree algorithm is one of the efficient


methods to find the minimum spanning tree of a graph. A
minimum spanning tree is a subgraph that connects all the
vertices present in the main graph with the least possible edges
and minimum cost (sum of the weights assigned to each edge).
The algorithm, similar to any shortest-path algorithm, begins from a vertex that is set as the root and walks through all the vertices in the graph by repeatedly choosing the least-cost adjacent edge.

Solution
Step 1

Create a visited array to store all the visited vertices into it.

V={}

The arbitrary root is mentioned to be S, so among all the edges


that are connected to S we need to find the least cost edge.

S→B=8
V = {S, B}
Step 2

Since B is the last visited, check for the least cost edge that is
connected to the vertex B.

B→A=9
B → C = 16
B → E = 14

Hence, B → A is the edge added to the spanning tree.

V = {S, B, A}

Step 3

Since A is the last visited, check for the least cost edge that is
connected to the vertex A.

A → C = 22
A→B=9
A → E = 11

But A → B is already in the spanning tree, check for the next least
cost edge. Hence, A → E is added to the spanning tree.

V = {S, B, A, E}
Step 4

Since E is the last visited, check for the least cost edge that is
connected to the vertex E.

E → C = 18
E→D=3

Therefore, E → D is added to the spanning tree.

V = {S, B, A, E, D}

Step 5

Since D is the last visited, check for the least cost edge that is
connected to the vertex D.
D → C = 15
E→D=3

Therefore, D → C is added to the spanning tree.

V = {S, B, A, E, D, C}

The minimum spanning tree is obtained with the minimum cost =


46
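Since the original figure is not reproduced here, the following Python sketch assumes the adjacency implied by the edge costs quoted in the steps above; it reports the same total cost of 46:

import heapq

graph = {
    'S': {'B': 8},
    'B': {'S': 8, 'A': 9, 'C': 16, 'E': 14},
    'A': {'B': 9, 'C': 22, 'E': 11},
    'E': {'B': 14, 'A': 11, 'C': 18, 'D': 3},
    'D': {'E': 3, 'C': 15},
    'C': {'B': 16, 'A': 22, 'E': 18, 'D': 15},
}

def prim(graph, root):
    visited, cost = {root}, 0
    heap = [(w, v) for v, w in graph[root].items()]
    heapq.heapify(heap)
    while heap:
        w, v = heapq.heappop(heap)          # least-cost frontier edge
        if v in visited:
            continue
        visited.add(v)
        cost += w
        for x, wx in graph[v].items():
            if x not in visited:
                heapq.heappush(heap, (wx, x))
    return cost

print(prim(graph, 'S'))    # 46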
8.

Minimum cost = 15
9. Explain Kruskal’s Algorithm for finding minimal spanning tree with an example.

Kruskal’s minimal spanning tree algorithm is one of the efficient


methods to find the minimum spanning tree of a graph. A
minimum spanning tree is a subgraph that connects all the
vertices present in the main graph with the least possible edges
and minimum cost (sum of the weights assigned to each edge).

The algorithm starts from a forest – defined as a subgraph containing only the vertices of the main graph – and then adds the least-cost edges one by one until the minimum spanning tree is created, without forming cycles in the graph.

Algorithm

 Sort all the edges in the graph in ascending order of cost and store them in an array edge[].
 Construct the forest of the graph on a plane with all the vertices in it.
 Select the least cost edge from the edge[] array and add it into the forest of the graph. Mark the vertices visited by adding them into the visited[] array.
 Repeat the previous step until all the vertices are visited, without any cycles forming in the graph.
 When all the vertices are visited, the minimum spanning tree is formed.
 Calculate the minimum cost of the output spanning tree formed.

Examples

Construct a minimum spanning tree using Kruskal's algorithm for the graph given below −
Solution

As the first step, sort all the edges in the given graph in an
ascending order and store the values in an array.

Edge B→D A→B C→F F→E B→C G→F A→G C→D D→E C→G

Cost 5 6 9 10 11 12 15 17 22 25

Then, construct a forest of the given graph on a single plane.


From the list of sorted edge costs, select the least cost edge and
add it onto the forest in output graph.

B→D=5
Minimum cost = 5
Visited array, v = {B, D}

Similarly, the next least cost edge is B → A = 6; so we add it


onto the output graph.
Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}

The next least cost edge is C → F = 9; add it onto the output


graph.

Minimum Cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}

The next edge to be added onto the output graph is F → E = 10.


Minimum Cost = 5 + 6 + 9 + 10 = 30
Visited array, v = {B, D, A, C, F, E}

The next edge from the least cost array is B → C = 11, hence we
add it in the output graph.

Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}

The last edge from the least cost array to be added in the output
graph is F → G = 12.
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the given


graph with cost = 53.
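A compact Python sketch of Kruskal's method with a union-find structure, run on the edge list above; it prints the same minimum cost of 53:

edges = [('B', 'D', 5), ('A', 'B', 6), ('C', 'F', 9), ('F', 'E', 10),
         ('B', 'C', 11), ('G', 'F', 12), ('A', 'G', 15), ('C', 'D', 17),
         ('D', 'E', 22), ('C', 'G', 25)]

parent = {v: v for v in 'ABCDEFG'}

def find(v):                        # representative of v's component
    while parent[v] != v:
        parent[v] = parent[parent[v]]    # path halving
        v = parent[v]
    return v

cost = 0
for u, v, w in sorted(edges, key=lambda e: e[2]):
    ru, rv = find(u), find(v)
    if ru != rv:                    # the edge joins two components: no cycle
        parent[ru] = rv
        cost += w
print(cost)                         # 53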

10.

Assignment 4

11. Explain about Optimal Storage on Tapes with a suitable example.


Optimal Storage on Tapes is one of the applications of the Greedy method. The objective is to find the order of storage that minimizes the retrieval time for accessing programs stored on a tape.

For example, suppose there are 3 programs of lengths 2, 5 and 4 respectively. There are in total 3! = 6 possible orders of storage.

Order Storage sequence Total Retrieval Time Mean Retrieval Time

1 1, 2, 3 2 + (2 + 5) + (2 + 5 + 4) = 20 20/3
2 1, 3, 2 2 + (2 + 4) + (2 + 4 + 5) = 19 19/3
3 2, 1, 3 5 + (5 + 2) + (5 + 2 + 4) = 23 23/3
4 2, 3, 1 5 + (5 + 4) + (5 + 4 + 2) = 25 25/3
5 3, 1, 2 4 + (4 + 2) + (4 + 2 + 5) = 21 21/3
6 3, 2, 1 4 + (4 + 5) + (4 + 5 + 2) = 24 24/3
It is clear that by following the second order (1, 3, 2), i.e., storing the programs in non-decreasing order of their lengths, the mean retrieval time is the least.
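A tiny Python sketch that checks all 3! orders for the programs above; it confirms that the non-decreasing-length order (2, 4, 5), i.e. order 1, 3, 2, gives the least total (and hence mean) retrieval time:

from itertools import permutations

lengths = [2, 5, 4]

def total_retrieval_time(order):
    # retrieval time of the i-th program = sum of the lengths stored
    # up to and including it; total it over all programs
    return sum(sum(order[:i + 1]) for i in range(len(order)))

best = min(permutations(lengths), key=total_retrieval_time)
print(best, total_retrieval_time(best))   # (2, 4, 5) 19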

12.

notes

13. Explain Huffman Codes with a suitable example

o (i) Data can be encoded efficiently using Huffman codes.
o (ii) It is a widely used and beneficial technique for compressing data.
o (iii) Huffman's greedy algorithm uses a table of the frequencies of occurrence of each character to build up an optimal way of representing each character as a binary string.

Suppose we have 10^5 characters in a data file. Normal storage: 8 bits per character (ASCII) gives 8 × 10^5 bits in the file. But we want to compress the file and save it compactly. Suppose only six characters a, b, c, d, e, f appear in the file, with frequencies (in thousands) of 45, 13, 12, 16, 9 and 5 respectively.

(i) Fixed-length code: Each letter is represented by an equal number of bits. With a fixed-length code, at least 3 bits per character are needed.
For example:


a 000

b 001

c 010

d 011

e 100

f 101

For a file with 10^5 characters, we need 3 × 10^5 bits.

(ii) A variable-length code: It can do considerably better than a fixed-length code, by giving frequent characters short codewords and infrequent characters long codewords.

For example:

a 0

b 101

c 100

d 111

e 1101

f 1100
Number of bits = (45 × 1 + 13 × 3 + 12 × 3 + 16 × 3 + 9 × 4 + 5 × 4) × 1000
= 2.24 × 10^5 bits

Thus, 224,000 bits represent the file, a saving of approximately 25%. This is an optimal character code for this file.
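A brief Python sketch of Huffman's greedy algorithm on the six-character example (frequencies in thousands); it rebuilds an optimal prefix code and the 224,000-bit total:

import heapq

freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}

heap = [(f, i, {ch: ''}) for i, (ch, f) in enumerate(freq.items())]
heapq.heapify(heap)
counter = len(heap)                        # tie-breaker for equal weights
while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)        # two least-frequent subtrees
    f2, _, c2 = heapq.heappop(heap)
    merged = {ch: '0' + code for ch, code in c1.items()}
    merged.update({ch: '1' + code for ch, code in c2.items()})
    heapq.heappush(heap, (f1 + f2, counter, merged))
    counter += 1

codes = heap[0][2]
bits = sum(freq[ch] * len(code) for ch, code in codes.items())
print(codes)                   # one optimal code (codeword lengths 1, 3, 3, 3, 4, 4)
print(bits, 'thousand bits')   # 224 thousand bits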

14.

Assignment 4

15. Explain Single Source Shortest Path Algorithm with an example.

Dijkstra's Algorithm is a Graph algorithm that finds the shortest path from a source
vertex to all other vertices in the Graph (single source shortest path). It is a type of Greedy
Algorithm that only works on Weighted Graphs having positive weights. The time complexity
of Dijkstra's Algorithm is O(V^2) with the help of the adjacency matrix representation of the
graph. This time complexity can be reduced to O((V + E) log V) with the help of an
adjacency list representation of the graph, where V is the number of vertices and E is the
number of edges in the graph.

The following is the step that we will follow to implement Dijkstra's Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and set the rest
of the nodes to INFINITY.

Step 2: We will then set the unvisited node with the smallest current distance as the
current node, suppose X.

Step 3: For each neighbor N of the current node X: We will then add the current
distance of X with the weight of the edge joining X-N. If it is smaller than the current
distance of N, set it as the new current distance of N.

Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in
the graph.

Let us now understand the implementation of the algorithm with the help of an
example:
Figure 6: The Given Graph

1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because
the path to node A is 0 and the path from node A to B is 4, and
the minimum((0 + 4), INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because
the path to node A is 0 and the path from node A to C is 5, and
the minimum((0 + 5), INFINITY) is 5. Both the neighbors of node A are now
relaxed; therefore, we can move ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence, we will visit node B and perform relaxation on its unvisited neighbors. After performing relaxation, the path to node C will remain 5, whereas the path to node E will become 11, and the path to node D will become 13.
8. Next, we will visit node C (path 5, now the smallest among the unvisited nodes) and relax its neighbor E: since 5 + 3 = 8 is smaller than 11, the path to node E becomes 8.
9. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F. Since only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as it is, i.e., 4, the path to node D will also remain 13, and the path to node F will become 14 (8 + 6).
10. Now we will visit node D, and only node F will be relaxed. However, the path to node F will remain unchanged, i.e., 14.
11. Since only node F is remaining, we will visit it but not perform any relaxation, as all its neighboring nodes are already visited.
12. Once all the nodes of the graph are visited, the program will end.

Hence, the final paths we concluded are:

1. A = 0
2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)
6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)
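The figure itself is not shown, so the following Python sketch assumes edge weights consistent with the walkthrough above; the weights of B-C and D-F are not stated in the text, and the values 1 and 5 below are placeholders chosen so that the narrative still holds:

import heapq

graph = {
    'A': {'B': 4, 'C': 5},
    'B': {'A': 4, 'C': 1, 'D': 9, 'E': 7},   # B-C weight assumed
    'C': {'A': 5, 'B': 1, 'E': 3},
    'D': {'B': 9, 'E': 6, 'F': 5},           # D-F weight assumed
    'E': {'B': 7, 'C': 3, 'D': 6, 'F': 6},
    'F': {'D': 5, 'E': 6},
}

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap, visited = [(0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)      # unvisited node with least path
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():   # relax each neighbor
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(graph, 'A'))
# {'A': 0, 'B': 4, 'C': 5, 'D': 13, 'E': 8, 'F': 14}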

16.
Dijkstra's algorithm Solution for given problem
Unit 4

1. Give the control abstraction of Dynamic Programming strategy.

Dynamic programming approach is similar to divide and conquer


in breaking down the problem into smaller and yet smaller
possible sub-problems. But unlike divide and conquer, these sub-
problems are not solved independently. Rather, results of these
smaller sub-problems are remembered and used for similar or
overlapping sub-problems.

Mostly, dynamic programming algorithms are used for solving


optimization problems. Before solving the in-hand sub-problem,
dynamic algorithm will try to examine the results of the
previously solved sub-problems. The solutions of sub-problems
are combined in order to achieve the best optimal final solution.
This paradigm is thus said to be using Bottom-up approach.

Control abstraction:
 It is an optimization technique that solves a problem by combining the solutions of its sub-problems. Overlapping sub-problems and optimal substructure are the properties required for dynamic programming.
 The optimization reduces the time complexity from exponential to polynomial. Matrix chain multiplication, the travelling salesman problem and the longest common subsequence are three typical applications of dynamic programming.
 Dynamic programming is a very powerful technique. It is mathematical
optimization as well as computer programming method.
 Dynamic programming (DP) splits the large problem at every
possible point. When the problem becomes sufficiently small, DP
solves it.
 Dynamic programming is bottom up approach, it finds the solution of
the smallest problem and constructs the solution of the larger
problem from already solved smaller problems.
 To avoid recomputation of the same sub-problem, DP saves the result of each sub-problem into a table. When the same sub-problem is encountered again, the answer is retrieved from the table by a lookup procedure.

Examples
The following computer problems can be solved using dynamic
programming approach −

 Fibonacci number series


 Knapsack problem
 Tower of Hanoi
 All pair shortest path by Floyd-Warshall and Bellman Ford
 Shortest path by Dijkstra
 Project scheduling
 Matrix Chain Multiplication
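As a small illustration of the memoization idea on the first example in this list, the following Python sketch computes Fibonacci numbers, solving each sub-problem once and reusing the stored result on later lookups:

from functools import lru_cache

@lru_cache(maxsize=None)             # the table that remembers sub-problem results
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # overlapping sub-problems, solved once

print(fib(40))   # 102334155, in O(n) calls instead of O(2^n)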
2. Explain in brief the control abstraction for Dynamic Programming with relevant
examples.
Same as the 1st answer.

3. Explain how Matrix chain Multiplication problem can be solved using Dynamic
Programming with suitable example.

Dynamic programming is a method for solving optimization problems. It is an algorithmic technique for solving a complex problem with overlapping sub-problems: compute the solution to each sub-problem once and store it in a table, so that it can be reused (repeatedly) later. Dynamic programming is often more efficient than other methods such as the greedy method, divide and conquer, or plain recursion.

Matrix chain multiplication is an algorithm that is applied to determine the lowest-cost way of multiplying a chain of matrices. The actual multiplication is done using the standard way of multiplying matrices, i.e., it follows the basic rule that the number of columns in one matrix must be equal to the number of rows in the next. Hence, multiple scalar multiplications must be done to achieve the product.

A sequence of matrices A, B, C, D with dimensions 5 × 10, 10 × 15, 15 × 20, 20 × 25 is to be multiplied. Find the lowest-cost parenthesization to multiply the given matrices using matrix chain multiplication. Among the 5 possible parenthesizations of four matrices, the matrix chain multiplication algorithm must find the one with the lowest cost.
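As a rough sketch, the standard O(n^3) DP for matrix chain multiplication can be run on the chain above; for dimensions p = [5, 10, 15, 20, 25] it reports 4750 scalar multiplications, achieved by the parenthesization (((AB)C)D):

def matrix_chain(p):
    n = len(p) - 1                      # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):      # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between i and j
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([5, 10, 15, 20, 25]))   # 4750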

4. "Find the minimum no of operations required for the following Chain Matrix
Multiplication using dynamic programming.
A = 50 X 10, B = 10 X 40, C = 40 X 30 and D = 30 X 5

Assignment 5
5.

The All Pairs Shortest Path algorithm is also known as the Floyd-Warshall algorithm. It is an optimization problem that can be solved using dynamic programming.

Notes
6.

Assignment 5

7.
8.

Assignment 5
9. Explain Bellman-Ford single shortest path problem with example.

The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the shortest distance from a single vertex to all the other vertices of a weighted graph. There are various other algorithms used to find the shortest path, like Dijkstra's algorithm. If the weighted graph contains negative weight values, Dijkstra's algorithm is not guaranteed to produce the correct answer. In contrast to Dijkstra's algorithm, the Bellman-Ford algorithm guarantees the correct answer even if the weighted graph contains negative weight values.

Rule of this algorithm

We will go on relaxing all the edges (n − 1) times, where n = number of vertices.

Since the graph has six vertices, it will have five iterations.

First iteration

Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the
relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 6
Since (0 + 6) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 6 = 6

Therefore, the distance of vertex B is 6.

Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the
relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 4

Since (0 + 4) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 4 = 4

Therefore, the distance of vertex C is 4.

Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the
relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 5

Since (0 + 5) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 5 = 5

Therefore, the distance of vertex D is 5.

Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the
relaxing formula:
d(u) = 6

d(v) = ∞

c(u , v) = -1

Since (6 - 1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 6 - 1= 5

Therefore, the distance of vertex E is 5.

Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the
relaxing formula:

d(u) = 4

d(v) = 5

c(u , v) = 3

Since (4 + 3) is greater than 5, so there will be no updation. The value at vertex E is 5.

Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the
relaxing formula:

d(u) = 5

d(v) = 4

c(u , v) = -2

Since (5 -2) is less than 4, so update

1. d(v) = d(u) + c(u , v)

d(v) = 5 - 2 = 3

Therefore, the distance of vertex C is 3.

Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the
relaxing formula:
d(u) = 5

d(v) = ∞

c(u , v) = -1

Since (5 -1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 5 - 1 = 4

Therefore, the distance of vertex F is 4.

Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the
relaxing formula:

d(u) = 5

d(v) = ∞

c(u , v) = 3

Since (5 + 3) is greater than 4, so there would be no updation on the distance value of


vertex F.

Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the
relaxing formula:

d(u) = 3

d(v) = 6

c(u , v) = -2

Since (3 - 2) is less than 6, so update

1. d(v) = d(u) + c(u , v)

d(v) = 3 - 2 = 1

Therefore, the distance of vertex B is 1.

Now the first iteration is completed. We move to the second iteration.


Second iteration:

In the second iteration, we again check all the edges. The first edge is (A, B). Since (0
+ 6) is greater than 1 so there would be no updation in the vertex B.

The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation
in the vertex C.

The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the
vertex D.

The next edge is (B, E). Since (1 - 1) equals to 0 which is less than 5 so update:

d(v) = d(u) + c(u, v)

d(E) = d(B) +c(B , E)

=1-1=0

The next edge is (C, E). Since (3 + 3) equals 6, which is greater than the current distance of E (0), there would be no updation in the vertex E.

The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the
vertex C.

The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the
vertex F.

The next edge is (E, F). Since (0 + 3) equals 3, which is less than 4, so update

d(F) = d(E) + c(E, F) = 0 + 3 = 3

The next edge is (C, B). Since (3 - 2) equals 1, there would be no updation in the vertex B.
Third iteration

We will perform the same steps as we did in the previous iterations. We will observe
that there will be no updation in the distance of vertices.

The following are the distances of the vertices:

A: 0, B: 1, C: 3, D: 5, E: 0, F: 3
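A compact Python sketch of the algorithm on the edge list used in the iterations above; it reproduces the final distances:

edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
         ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1), ('E', 'F', 3),
         ('C', 'B', -2)]
vertices = 'ABCDEF'

dist = {v: float('inf') for v in vertices}
dist['A'] = 0
for _ in range(len(vertices) - 1):      # relax all edges (n - 1) times
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
print(dist)   # {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}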

10.

Same as above example


11.Solve the 0/1 Knapsack problem using Dynamic Programming for number of
objects n=4, m=30, Weights (w1, w2, w3, w4) = (10,15,6,9) and Profits (p1, p2, p3,
p4)=(2,5,8,1).

Assignment - 5

12."Obtain the optimal solution to knapsack problem by Dynamic Programming


method for n=4,

(p1, p2,p3,p4)=(1,2,5,6),(w1,w2,w3,w4)=(2,3,4,5) and m=8."

Step 1: Build the table V[i][j], where V[i][j] is the maximum profit achievable using the first i items with capacity j, using the recurrence
V[i][j] = V[i − 1][j] if wi > j, otherwise V[i][j] = max(V[i − 1][j], V[i − 1][j − wi] + pi), with V[0][j] = 0.

                  j=0  1  2  3  4  5  6  7  8
i = 0:              0  0  0  0  0  0  0  0  0
i = 1 (w=2, p=1):   0  0  1  1  1  1  1  1  1
i = 2 (w=3, p=2):   0  0  1  2  2  3  3  3  3
i = 3 (w=4, p=5):   0  0  1  2  5  5  6  7  7
i = 4 (w=5, p=6):   0  0  1  2  5  6  6  7  8

Step 2: The maximum profit is V[4][8] = 8.

Step 3 (traceback): Since V[4][8] = 8 ≠ V[3][8] = 7, item 4 is included (x4 = 1) and the remaining capacity is 8 − 5 = 3. Since V[3][3] = V[2][3] = 2, item 3 is excluded (x3 = 0). Since V[2][3] = 2 ≠ V[1][3] = 1, item 2 is included (x2 = 1) and the remaining capacity is 3 − 3 = 0. Since V[1][0] = V[0][0] = 0, item 1 is excluded (x1 = 0).

Thus, the optimal solution is (x1, x2, x3, x4) = (0, 1, 0, 1) with total weight 3 + 5 = 8 ≤ m and maximum profit 2 + 6 = 8.
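A minimal Python sketch of this tabular method for the given instance; it prints the same maximum profit of 8:

def knapsack01(weights, profits, m):
    n = len(weights)
    V = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m + 1):
            V[i][j] = V[i - 1][j]                    # exclude item i
            if weights[i - 1] <= j:                  # or include item i
                V[i][j] = max(V[i][j],
                              V[i - 1][j - weights[i - 1]] + profits[i - 1])
    return V[n][m]

print(knapsack01([2, 3, 4, 5], [1, 2, 5, 6], 8))   # 8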

13. Describe travelling sales person problem and discuss how it solves using Dynamic
Programming.

The TSP describes a scenario where a salesman is required to travel between n cities. He
wishes to travel to all locations exactly once and he must finish at his starting point. The
order in which the cities are visited is not important but he wishes to minimize the distance
traveled.

Let us consider a graph G = (V,E), where V is a set of cities and E is


a set of weighted edges. An edge e(u, v) represents that
vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.

Suppose we have started at city 1 and after visiting some cities


now we are in city j. Hence, this is a partial tour. We certainly
need to know j, since this will determine which cities are most
convenient to visit next. We also need to know all the cities
visited so far, so that we don't repeat any of them. Hence, this is
an appropriate sub-problem.

For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.

When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.

In the following example, we will illustrate the steps to solve the


travelling salesman problem.
From the above graph, the following table of costs d(i, j) is prepared:

     1   2   3
1    0  10  15
2    5   0   9
3    6  13   0
4    8   8   9
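A small Held-Karp DP sketch for this example; since the table above is incomplete, the 4 × 4 cost matrix below assumes the missing fourth column to be (20, 10, 12, 0) for illustration. Under that assumption, the minimum tour cost is 35:

from functools import lru_cache

d = [[0, 10, 15, 20],      # last column assumed, as noted above
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
n = len(d)

@lru_cache(maxsize=None)
def C(S, j):                    # S: bitmask of visited cities (0-indexed)
    if S == (1 << j) | 1:       # only city 0 and city j visited
        return d[0][j]
    return min(C(S & ~(1 << j), k) + d[k][j]
               for k in range(1, n) if k != j and S & (1 << k))

full = (1 << n) - 1
print(min(C(full, j) + d[j][0] for j in range(1, n)))   # 35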
14.

Assignment 5
15. Describe reliability design problem with an example.

16.

The reliability design problem is the design of a system composed of several devices connected in series or parallel. Reliability means the probability of success of the device.

Let's say we have to set up a system consisting of devices D1, D2, D3, ..., Dn, where each device has some cost C1, C2, C3, ..., Cn. If each device has a reliability of 0.9, then the entire system (say, of 4 devices in series) has a reliability equal to the product of the reliabilities of all devices, i.e., ∏ri = (0.9)^4 ≈ 0.6561.

It means the system has about a 35% chance of failing due to the failure of any one device. The problem is that we want to construct a system whose reliability is maximum.

When 3 devices of the same type are connected in parallel in stage 1, each having reliability 0.9, then:

Reliability of device 1, r1 = 0.9
The probability that the device does not work well = 1 − r1 = 1 − 0.9 = 0.1
The probability that all three copies fail = (1 − r1)^3 = (0.1)^3 = 0.001
The probability that at least one of the three copies works = 1 − (1 − r1)^3 = 1 − 0.001 = 0.999

We can see that a system with multiple copies of the same device in parallel may increase the reliability of the system.
Given a cost C and we have to set up a system by buying the devices and we need to
find number of copies of each device under the cost such that reliability of a system
is maximized.
We have to design a three-stage system with device types D1, D2, and D3. The
costs are $30, $15, and $20 respectively. The cost of the system is to be no more
than $105. The reliability of each device type is 0.9, 0.8, and 0.5 respectively.
Pi Ci ri

P1 30 0.9

P2 15 0.8

P3 20 0.5

Explanation:
Given that we have total cost C = 105,
sum of all Ci = 30 + 15 + 20 = 65, the remaining amount we can use to buy a copy of
each device in such a way that the reliability of the system, may increase.
Remaining amount = C – sum of Ci = 105 – 65 = 40
Now, let us calculate how many copies of each device we can buy with $40. If we consume all $40 on device 1, we can buy 40/30 = 1 extra copy, and we already have 1 copy, so overall 2 copies of device 1. In general, the upper bound on the number of copies of device i is:

ui = floor((C − ∑Cj) / Ci) + 1   (1 is added because we already have one copy of each device)

C1 = 30, C2 = 15, C3 = 20, C = 105
r1 = 0.9, r2 = 0.8, r3 = 0.5
u1 = floor((105 − 65) / 30) + 1 = 2
u2 = floor(40 / 15) + 1 = 3
u3 = floor(40 / 20) + 1 = 3
A tuple is an ordered pair (reliability, total cost) for a choice of the numbers of copies mi made so far. The notation jSi denotes the tuples of stage i when j copies of device i are used.

S0 = {(1, 0)}
Device 1:
Each Si is obtained from Si−1 by trying out all possible numbers of copies for device i and combining the resulting tuples together.
 Let us consider P1:
 1S1 = {(0.9, 30)}, where 0.9 is the reliability of stage 1 with one copy of the device and 30 is the cost of P1.
 Up to two copies of device 1 are allowed, so we can take one more copy:
 2S1 = {(0.99, 60)}, where 0.99 is the reliability of stage 1 with two copies of the device: 1 − (1 − r1)^2 = 1 − (1 − 0.9)^2 = 1 − 0.01 = 0.99.
 Combining both conditions of stage 1 (one copy and two copies respectively):
S1 = {(0.9, 30), (0.99, 60)}
Device 2:
S2 will contain all (reliability, cost) pairs that we get by taking all possible numbers of copies for stage 2 in conjunction with the possibilities calculated in S1.
First of all, we check the reliability at stage 2 when we have 1, 2, or 3 copies of the device. Let Ei be the reliability of stage i with the given number of copies; for S2 we first calculate:
 E2 (with 1 copy) = 1 − (1 − r2)^1 = 1 − (1 − 0.8) = 0.8
 E2 (with 2 copies) = 1 − (1 − r2)^2 = 1 − (1 − 0.8)^2 = 0.96
 E2 (with 3 copies) = 1 − (1 − r2)^3 = 1 − (1 − 0.8)^3 = 0.992
If we use 1 copy of P1 and 1 copy of P2, the reliability will be 0.9 × 0.8 and the cost will be 30 + 15:
with one copy of device 2, 1S2 = {(0.8, 15)}; in conjunction with S1's (0.9, 30) this gives (0.72, 45).
Similarly, we can calculate the other pairs: (0.792, 75) = 0.99 × 0.8 at cost 60 + 15, (0.864, 60) = 0.9 × 0.96 at cost 30 + 30, and (0.9504, 90) = 0.99 × 0.96 at cost 60 + 30.
We get the ordered pair (0.9504, 90) in S2 when we take 2 copies of device 1 and 2 copies of device 2. However, with the remaining cost of 15 (105 − 90), we cannot buy even one copy of device 3 (we need a minimum of 1 copy of every device in every stage), therefore (0.9504, 90) is discarded, as is every other pair that exceeds the cost limitation.
Up to this point we got the ordered pairs:
S1 = {(0.9, 30), (0.99, 60)}
S2 = {(0.72, 45), (0.792, 75), (0.864, 60)}

Device 3:
First of all, we check the reliability at stage 3 when we have 1, 2, or 3 copies of the device. Ei is the reliability of the particular stage with the given number of copies, so for S3 we first calculate:
 E3 (with 1 copy) = 1 − (1 − r3)^1 = 1 − (1 − 0.5) = 0.5
 E3 (with 2 copies) = 1 − (1 − r3)^2 = 1 − (1 − 0.5)^2 = 0.75
 E3 (with 3 copies) = 1 − (1 − r3)^3 = 1 − (1 − 0.5)^3 = 0.875
Now, the possible ordered pairs for stage three are S3 = {(0.36, 65), (0.396, 95), (0.432, 80), (0.54, 85), (0.648, 100), (0.63, 105)}.

(0.648, 100) is the solution pair: 0.648 is the maximum reliability we can get under the cost constraint of 105, using 1 copy of D1, 2 copies of D2 and 2 copies of D3.
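A brute-force Python sketch that checks this result by enumerating the number of copies of each device within its upper bound and keeping the most reliable system within the cost limit of 105:

from itertools import product

cost = [30, 15, 20]
r = [0.9, 0.8, 0.5]
u = [2, 3, 3]                           # upper bounds u1, u2, u3 derived above

best = (0.0, 0)
for copies in product(*(range(1, ui + 1) for ui in u)):
    total = sum(m * c for m, c in zip(copies, cost))
    if total <= 105:
        rel = 1.0
        for m, ri in zip(copies, r):
            rel *= 1 - (1 - ri) ** m    # stage reliability with m copies
        best = max(best, (rel, -total))
print(round(best[0], 3), -best[1])      # 0.648 100 (1, 2 and 2 copies)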

Unit 5

1.Write the control flow of a Backtracking Algorithm and explain in detail the problems that can
be solved using this approach.

Backtracking is one of the techniques that can be used to solve the problem. We can
write the algorithm using this strategy. It uses the Brute force search to solve the
problem, and the brute force search says that for the given problem, we try to make
all the possible solutions and pick out the best solution from all the desired solutions.
This rule is also followed in dynamic programming, but dynamic programming is used
for solving optimization problems. In contrast, backtracking is not used in solving
optimization problems. Backtracking is used when we have multiple solutions, and we
require all those solutions.

The name backtracking itself suggests that we are going back and coming forward; if the current choice satisfies the condition, we continue, else we go back and try again. It is used to solve a problem in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criteria.

When we have multiple choices, then we make the decisions from the available
choices. In the following cases, we need to use the backtracking algorithm:
o A piece of sufficient information is not available to make the best choice, so we
use the backtracking strategy to try out all the possible solutions.
o Each decision leads to a new set of choices. Then again, we backtrack to make
new decisions. In this case, we need to use the backtracking strategy.

Backtracking is a systematic method of trying out various sequences of decisions until you find one that works. Let's understand it through an example.

We start with a start node. First, we move to node A. Since it is not a feasible solution
so we move to the next node, i.e., B. B is also not a feasible solution, and it is a dead-
end so we backtrack from node B to node A.

Suppose another path exists from node A to node C. So, we move from node A to
node C. It is also a dead-end, so again backtrack from node C to node A. We move
from node A to the starting node.

Now we will check whether any other path exists from the starting node. So, we move from the start node to node D. Since it is not a feasible solution, we move from node D
to node E. The node E is also not a feasible solution. It is a dead end so we backtrack
from node E to node D.
Suppose another path exists from node D to node F. So, we move from node D to
node F. Since it is not a feasible solution and it's a dead-end, we check for another
path from node F.

Suppose another path exists from node F to node G, so we move from node F to node G. Node G is a success node.
The terms related to the backtracking are:

o Live node: A node that can be further generated is known as a live node.
o E-node: The node whose children are currently being generated, i.e., the node being expanded.
o Success node: A node is said to be a success node if it provides a feasible solution.
o Dead node: A node which cannot be generated further and also does not provide a feasible solution is known as a dead node.

2.Solve the 8-Queen problem with the help of Backtracking. Show the complete set of possible
cases in this approach.
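As a hedged sketch (one of many ways to code it), the following Python backtracking routine places one queen per row, backtracking on column or diagonal conflicts; for n = 8 it counts the well-known 92 solutions:

def solve_queens(n):
    solutions, cols = [], []            # cols[r] = column of the queen in row r
    def safe(col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))
    def place():
        if len(cols) == n:              # all rows filled: a solution
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(col):
                cols.append(col)        # place a queen and go deeper
                place()
                cols.pop()              # backtrack
    place()
    return solutions

print(len(solve_queens(8)))   # 92 solutions for the 8-queens problem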
3.Solve the following Sum of Subsets problem. Let w = {5,7,10,12,15,18,20} and m = 35. Find all
possible subsets of w that sum to m.

Let us run the algorithm on first instance w = {5, 7, 10, 12, 15, 18,
20}.
Items in subset Condition Comment
{} 0 Initial condition
{5} 5 < 35 Select 5 and add the next element
{5, 7} 12 < 35 Select 7 and add the next element
{5, 7, 10} 22 < 35 Select 10 and add the next element
{5, 7, 10, 12} 34 < 35 Select 12 and add the next element
{5, 7, 10, 12, 15} 49 > 35 Sum exceeds m, so backtrack and remove 12
{5, 7, 10, 15} 37 > 35 Sum exceeds m, so backtrack and remove 15
{5, 7, 12} 24 < 35 Add the next element
{5, 7, 12, 15} 39 > 35 Subset sum exceeds, so backtrack
{5, 10} 15 < 35 Add the next element
{5, 10, 12} 27 < 35 Add the next element
{5, 10, 12, 15} 42 > 35 Subset sum exceeds, so backtrack
{5, 10, 15} 30 < 35 Add the next element
{5, 10, 15, 18} 48 > 35 Subset sum exceeds, so backtrack
{5, 10, 18} 33 < 35 Add the next element
{5, 10, 18, 20} 53 > 35 Subset sum exceeds, so backtrack
{5, 10, 20} 35 Solution found

There may be multiple solutions. A state-space tree for the above


sequence is shown here: The number in the leftmost column shows
the element under consideration. The left and right branches in the
tree indicate inclusion and exclusion of the corresponding element
at that level, respectively.
Numbers in the leftmost column indicate the elements under consideration at that level. A gray circle indicates a node that cannot accommodate any of the next values, so we prune it from further expansion. White leaves do not lead to a solution. An intermediate white node may or may not lead to a solution. The black circle is the solution state.
We get the four solutions:
 {5, 10, 20}
 {5, 12, 18}
 {7, 10, 18}
 {15, 20}
For efficient execution of the subset sum problems, input elements
should be sorted in non-decreasing order. If elements are not in
non-decreasing order, the algorithm does more backtracking. So
second and third sequences would take more time for execution and
may not find as many solutions as we get in the first sequence.
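A short backtracking sketch for sum of subsets in Python, run on the first instance (with w sorted in non-decreasing order); it prints all four solutions listed above:

def sum_of_subsets(w, m):
    w = sorted(w)                         # non-decreasing order
    result = []
    def back(i, chosen, total):
        if total == m:
            result.append(list(chosen))   # solution found
            return
        if i == len(w) or total + w[i] > m:
            return                        # bound: cannot reach m, prune
        chosen.append(w[i])               # include w[i]
        back(i + 1, chosen, total + w[i])
        chosen.pop()                      # exclude w[i] (backtrack)
        back(i + 1, chosen, total)
    back(0, [], 0)
    return result

print(sum_of_subsets([5, 7, 10, 12, 15, 18, 20], 35))
# [[5, 10, 20], [5, 12, 18], [7, 10, 18], [15, 20]]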
4.Solve sum of subsets problem using back tracking for n = 4 (w1,w2,w3,w4) = (11,13,24,7) & m =
31. Find all possible subsets of w that sum to m using the backtracking algorithm for sum of
subsets problem.

Solution:
Items in sub set Condition Comment
{} 0 Initial condition
{ 11 } 11 < 31 Add next element
{ 11, 13 } 24 < 31 Add next element
{ 11, 13, 24 } 48 > 31 Subset sum exceeds, so backtrack
{ 11, 13, 7 } 31 Solution Found

State-space tree for a given problem is shown here:

In the above graph, the black circle shows the correct result. The
gray node shows where the algorithm backtracks. Numbers in the
leftmost column indicate elements under consideration at that level.
The left and right branches represent the inclusion and exclusion of
that element, respectively.
We get two solutions:
 {11, 13, 7}
 {24, 7}
5.Briefly explain the function that is used to find the next color in graph coloring problem.

6.Describe graph coloring problem. Give an example.

Graph coloring is the procedure of assigning colors to each vertex of a graph G such that no two adjacent vertices get the same color. The objective is to minimize the number of colors used while coloring the graph. The smallest number of colors required to color a graph G is called the chromatic number of that graph. The graph coloring problem is NP-complete.

Method to Color a Graph


The steps required to color a graph G with n number of vertices are as follows

Step 1 − Arrange the vertices of the graph in some order.

Step 2 − Choose the first vertex and color it with the first color.

Step 3 − Choose the next vertex and color it with the lowest numbered color that has not been used on any vertex adjacent to it. If all the colors used so far appear on its adjacent vertices, assign a new color to it. Repeat this step until all the vertices are colored.

Example

In the above figure, vertex a is first colored red. As the adjacent vertices of vertex a (b and d) are themselves adjacent, vertex b and vertex d are colored with different colors, green and blue respectively. Then vertex c is colored red, as no adjacent vertex of c is colored red. Hence, we could color the graph with 3 colors, and the chromatic number of the graph is 3.
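A small Python sketch of this sequential coloring method; the 4-vertex graph below is assumed from the description (a adjacent to b and d, b and d adjacent to each other, c adjacent to b and d):

graph = {'a': ['b', 'd'], 'b': ['a', 'c', 'd'],
         'c': ['b', 'd'], 'd': ['a', 'b', 'c']}

color = {}
for v in graph:                          # step 1: fix some vertex order
    used = {color[u] for u in graph[v] if u in color}
    c = 0
    while c in used:                     # lowest color not used by any
        c += 1                           # already-colored neighbor
    color[v] = c
print(color, '->', max(color.values()) + 1, 'colors')
# {'a': 0, 'b': 1, 'c': 0, 'd': 2} -> 3 colors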

7.

In an undirected graph, a Hamiltonian path is a path that visits each vertex exactly once, and a Hamiltonian cycle (or circuit) is a Hamiltonian path such that there is an edge from the last vertex to the first vertex.

In this problem, we will try to determine whether a graph contains a


Hamiltonian cycle or not. And when a Hamiltonian cycle is present, also print
the cycle.

Here's an outline of the Backtracking Algorithm for finding a Hamiltonian Cycle in a


graph:

1. Initialization: Start with an empty path that contains only the starting vertex.
2. Build the Path: Extend the path by adding one vertex at a time, ensuring that
the vertices are visited only once. At each step, choose a neighboring vertex
that hasn't been visited yet. If no unvisited neighbors are available for the
current vertex, backtrack to the previous vertex and explore other options.
3. Termination Condition: Continue extending the path until all vertices are
included, and the path returns to the starting vertex. If the path includes all
vertices, you have found a Hamiltonian Cycle. If it doesn't, backtrack to the
previous vertex and explore other possibilities.
4. Backtracking: During the backtracking phase, when there are no valid choices
for the next vertex, remove the last added vertex from the path and mark it as
unvisited. Then, try other unvisited neighbors of the previous vertex.
5. Repeat: Keep repeating steps 2-4 until all possibilities are explored.
6. Output: If a Hamiltonian Cycle is found, it will be the output of the algorithm.
Otherwise, the algorithm will indicate that there is no Hamiltonian Cycle in the
graph.

8. Write an algorithm to determine the Hamiltonian Cycle in a given graph using backtracking

The Hamiltonian cycle is a cycle in the graph which visits all the vertices exactly once and terminates at the starting node. It need not include all the edges.
 The Hamiltonian cycle problem is the problem of finding a
Hamiltonian cycle in a graph if there exists any such cycle.
 The input to the problem is an undirected, connected graph. For the graph shown in Figure (a), the path A – B – E – D – C – A forms a Hamiltonian cycle. It visits all the vertices exactly once, but does not include the edge <B, D>.

 The Hamiltonian cycle problem is also both, decision


problem and an optimization problem. A decision problem
is stated as, “Given a path, is it a Hamiltonian cycle of the
graph?”.
 The optimization problem is stated as, “Given graph G, find
the Hamiltonian cycle for the graph.”
 We can define the constraint for the Hamiltonian cycle
problem as follows:
 In any path, vertex i and (i + 1) must be adjacent.
 1st and (n – 1)th vertex must be adjacent (nth of
cycle is the initial vertex itself).
 Vertex i must not appear in the first (i – 1) vertices
of any path.
 With the adjacency matrix representation of the graph, the
adjacency of two vertices can be verified in constant time.
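A brief Python backtracking sketch for the Hamiltonian-cycle problem on the 5-vertex example above (edges A-B, A-C, B-D, B-E, C-D, D-E); it finds the cycle A – B – E – D – C – A:

graph = {'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'E'], 'E': ['B', 'D']}

def hamiltonian_cycle(graph, start):
    path = [start]
    def back():
        if len(path) == len(graph):             # all vertices placed:
            return start in graph[path[-1]]     # is there an edge back?
        for v in graph[path[-1]]:
            if v not in path:                   # constraint: visit once
                path.append(v)
                if back():
                    return True
                path.pop()                      # dead end: backtrack
        return False
    return path + [start] if back() else None

print(hamiltonian_cycle(graph, 'A'))   # ['A', 'B', 'E', 'D', 'C', 'A']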
9. Explain FIFO Branch and Bound solution.

FIFO branch and bound always uses the oldest node in the queue to extend the branch. This leads to a breadth-first search, where all nodes at depth d are visited first, before any nodes at depth d+1 are visited.
In LIFO (last in, first out), the branch is always extended from the
youngest node in the queue. This results in a depth-first search, in
which the branch is extended via every first child encountered at a
certain depth until a leaf node is found.
In LC (lowest cost) method, according to a cost function, the branch
is extended by the node with the lowest extra costs. As a result, the
cost function determines how to traverse the search space.
The Branch and Bound method employs either BFS or DFS search.
During BFS, expanded nodes are kept in a queue, whereas in DFS,
nodes are kept on the stack. BFS approach is the first in, first out
(FIFO) method, while DFS is the last in, first out (LIFO). In the FIFO
branch and bound, expanded nodes are inserted into the queue, and
the first generated node becomes the next live node.

Assume that Fig. (a) is the collection of nodes expanded in a state-


space tree during some problem-solving stage. Each cell in the
queue represents the cost of the expanded node. Ni represents the
order of node generation. FIFO will select N1 as the next E-node,
because that node was inserted first in the queue. Similarly, LIFO
approach will select node N5 as the next E-node, as it was inserted last in the queue. The LC approach does not depend on the order of insertion in the queue. Rather, it selects the node based on the cost.
The node with minimum cost will become the next E-Node in LC
method, i.e. node N4 will be expanded next.
10. Discuss various methodologies of Branch and Bound technique.

The Branch and Bound method can be classified into three types based on the order
in which the state space tree is searched.
1. FIFO Branch and Bound
2. LIFO Branch and Bound
3. Least Cost-Branch and Bound
We will now discuss each of these methods in more detail.

1. FIFO Branch and Bound

First-In-First-Out is an approach to the branch and bound problem that uses the
queue approach to create a state-space tree. In this case, the breadth-first search is
performed, that is, the elements at a certain level are all searched, and then the
elements at the next level are searched, starting with the first child of the first node
at the previous level.
For a given set {A, B, C, D}, the state space tree will be constructed as follows :

State Space tree for set {A, B, C, D}

The above diagram shows that we first consider element A, then element B, then element C, and finally the last element, D. We are performing BFS while exploring the nodes. Once the first level is completed, we consider the first element; then we can consider either B, C, or D. If we follow the route of selecting elements A and D only, it means we are selecting A and D and not considering elements B and C.

Selecting element A

Now, we will expand node 3, as we have considered element B and not considered
element A, so, we have two options to explore that is elements C and D. Let’s create
nodes 9 and 10 for elements C and D respectively.
Considered element B and not considered element A

Now, we will expand node 4 as we have only considered elements C and not
considered elements A and B, so, we have only one option to explore which is
element D. Let’s create node 11 for D.
Considered elements C and not considered elements A and B

Till node 5, we have only considered elements D, and not selected elements A, B,
and C. So, We have no more elements to explore, Therefore on node 5, there won’t
be any expansion.
Now, we will expand node 6 as we have considered elements A and B, so, we have
only two option to explore that is element C and D. Let’s create node 12 and 13 for
C and D respectively.
Expand node 6

Now, we will expand node 7 as we have considered elements A and C and not
consider element B, so, we have only one option to explore which is element D.
Let’s create node 14 for D.
Expand node 7

Till node 8, we have considered elements A and D, and not selected elements B and
C, So, We have no more elements to explore, Therefore on node 8, there won’t be
any expansion.
Now, we will expand node 9 as we have considered elements B and C and not
considered element A, so, we have only one option to explore which is element D.
Let’s create node 15 for D.
Expand node 9

2. LIFO Branch and Bound

The Last-In-First-Out approach for this problem uses stack in creating the state
space tree. When nodes are added to a state space tree, they are added to a stack.
After all nodes of a level have been added, we pop the topmost element from the
stack and explore it.
For a given set {A, B, C, D}, the state space tree will be constructed as follows :
State space tree for element {A, B, C, D}

Now the expansion would be based on the node that appears on the top of the stack.
Since node 5 appears on the top of the stack, so we will expand node 5. We will pop
out node 5 from the stack. Since node 5 is in the last element, i.e., D so there is no
further scope for expansion.
The next node that appears on the top of the stack is node 4. Pop-out node 4 and
expand. On expansion, element D will be considered and node 6 will be added to the
stack shown below:
Expand node 4

The next node is 6 which is to be expanded. Pop-out node 6 and expand. Since node
6 is in the last element, i.e., D so there is no further scope for expansion.
The next node to be expanded is node 3. Since node 3 works on element B so node 3
will be expanded to two nodes, i.e., 7 and 8 working on elements C and D
respectively. Nodes 7 and 8 will be pushed into the stack.
The next node that appears on the top of the stack is node 8. Pop-out node 8 and
expand. Since node 8 works on element D so there is no further scope for the
expansion.
Expand node 3

The next node that appears on the top of the stack is node 7. Pop-out node 7 and
expand. Since node 7 works on element C so node 7 will be further expanded to
node 9 which works on element D and node 9 will be pushed into the stack.
Expand node 7

The next node that appears on the top of the stack is node 9. Since node 9 works on
element D, there is no further scope for expansion.
The next node that appears on the top of the stack is node 2. Since node 2 works on
the element A so it means that node 2 can be further expanded. It can be expanded
up to three nodes named 10, 11, 12 working on elements B, C, and D respectively.
There new nodes will be pushed into the stack shown as below:
Expand node 2

In the above method, we explored all the nodes using the stack that follows the LIFO
principle.

3. Least Cost-Branch and Bound

To explore the state space tree, this method uses the cost function. The previous two
methods also calculate the cost function at each node but the cost is not been used
for further exploration.
In this technique, nodes are explored based on their costs, the cost of the node can be
defined using the problem and with the help of the given problem, we can define the
cost function. Once the cost function is defined, we can define the cost of the node.
Now, consider a node whose cost has been determined. If this value is greater than U0 (the cost of the best solution found so far, an upper bound), this node and its children cannot give a better solution. As a result, we can kill this node and not explore its further branches. This method prevents us from exploring cases that are not worth it, which makes it more efficient.
Let’s first consider node 1 having cost infinity shown below:
In the following diagram, node 1 is expanded into four nodes named 2, 3, 4, and 5.

Node 1 is expanded into four nodes named 2, 3, 4, and 5

Assume that cost of the nodes 2, 3, 4, and 5 are 12, 16, 10, and 315 respectively.
In this method, we will explore the node which is having the least cost. In the above
figure, we can observe that the node with a minimum cost is node 4. So, we will
explore node 4 having a cost of 10.
During exploring node 4 which is element C, we can notice that there is only one
possible element that remains unexplored which is D (i.e, we already decided not to
select elements A, and B). So, it will get expanded to one single element D, let’s say
this node number is 6.
Exploring node 4 which is element C

Now, Node 6 has no element left to explore. So, there is no further scope for
expansion. Hence the element {C, D} is the optimal way to choose for the least cost.

11. Draw the portion of the state space tree generated by LC Branch and Bound for the knapsack
instance: n=5, (p1, p2, ..., p5) = (10, 15, 6, 8, 4), (w1, w2, ..., w5) = (4, 6, 3, 4, 2), and m = 12.

12. Generate LC branch and bound solution for the given knapsack problem, m = 15,

n = 4, (P1, P2, P3,P4) = (10, 10,12,18) and (w1, w2, w3,w4) = (2,4,6,9).

Let us use the fixed-tuple formulation; the search begins with the root node as the E-node. Here cp = current profit, cw = current weight, k = number of decisions made so far, m = capacity of the knapsack. Profits are negated so that the bound is a minimization, and to calculate the lower bound ĉ we allow fractions.

Algorithm Ubound(cp, cw, k, m)
{
    b := cp; c := cw;
    for i := k + 1 to n do
    {
        if (c + w[i] <= m) then
        {
            c := c + w[i]; b := b - p[i];
        }
    }
    return b;
}

Function U(.) for the knapsack problem. The node computations are:

Node 1: Ubound(0, 0, 0, 15): i = 1: c = 2, b = −10; i = 2: c = 6, b = −20; i = 3: c = 12, b = −32; i = 4: 12 + 9 > 15, skip. U(1) = −32, ĉ(1) = −32 − (3/9) × 18 = −38.
Node 2 (x1 = 1): Ubound(−10, 2, 1, 15): i = 2: c = 6, b = −20; i = 3: c = 12, b = −32; i = 4: skip. U(2) = −32, ĉ(2) = −38.
Node 3 (x1 = 0): Ubound(0, 0, 1, 15): i = 2: c = 4, b = −10; i = 3: c = 10, b = −22; i = 4: skip. U(3) = −22, ĉ(3) = −22 − (5/9) × 18 = −32.
Node 4 (x2 = 1): Ubound(−20, 6, 2, 15): i = 3: c = 12, b = −32; i = 4: skip. U(4) = −32, ĉ(4) = −38.
Node 5 (x2 = 0): Ubound(−10, 2, 2, 15): i = 3: c = 8, b = −22; i = 4: skip. U(5) = −22, ĉ(5) = −22 − (7/9) × 18 = −36.
Node 6 (x3 = 1): Ubound(−32, 12, 3, 15): i = 4: skip. U(6) = −32, ĉ(6) = −38.
Node 7 (x3 = 0): Ubound(−20, 6, 3, 15): i = 4: c = 15, b = −38. U(7) = −38, ĉ(7) = −38.
Node 8 (x4 = 1): Ubound(−38, 15, 4, 15): U(8) = −38, ĉ(8) = −38.
Node 9 (x4 = 0): Ubound(−20, 6, 4, 15): U(9) = −20, ĉ(9) = −20.

Node 8 is a solution node with solution vector x1 = 1, x2 = 1, x3 = 0, x4 = 1:
p1x1 + p2x2 + p3x3 + p4x4 = 10 × 1 + 10 × 1 + 12 × 0 + 18 × 1 = 38.
We need to consider all n items; the tuple size is n.

13.

Let's consider the above problem. The cost matrix used in the computations below is:

      1    2    3    4    5
1     ∞   20   30   10   11
2    15    ∞   16    4    2
3     3    5    ∞    2    4
4    19    6   18    ∞    3
5    16    4    7   16    ∞

As we can observe in the above adjacency matrix, 10 is the minimum value in the first row, 2 is the minimum value in the second row, 2 is the minimum value in the third row, 3 is the minimum value in the fourth row, and 4 is the minimum value in the fifth row.

Now, we will reduce the matrix. We will subtract the minimum value with all the
elements of a row. First, we evaluate the first row. Let's assume two variables, i.e., i and
j, where 'i' represents the rows, and 'j' represents the columns.

When i = 0, j =0

M[0][0] = ∞-10= ∞
When i = 0, j = 1

M[0][1] = 20 - 10 = 10

When i = 0, j = 2

M[0][2] = 30 - 10 = 20

When i = 0, j = 3

M[0][3] = 10 - 10 = 0

When i = 0, j = 4

M[0][4] = 11 - 10 = 1

The matrix is shown below after the evaluation of the first row:

Consider the second row.

When i = 1, j =0

M[1][0] = 15-2= 13

When i = 1, j = 1

M[1][1] = ∞ - 2= ∞

When i = 1, j = 2

M[1][2] = 16 - 2 = 14
When i = 1, j = 3

M[1][3] = 4 - 2 = 2

When i = 1, j = 4

M[1][4] = 2 - 2 = 0

The matrix is shown below after the evaluation of the second row:

Consider the third row:

When i = 2, j =0

M[2][0] = 3-2= 1

When i = 2, j = 1

M[2][1] = 5 - 2= 3

When i = 2, j = 2

M[2][2] = ∞ - 2 = ∞

When i = 2, j = 3

M[2][3] = 2 - 2 = 0

When i = 2, j = 4

M[2][4] = 4 - 2 = 2
The matrix is shown below after the evaluation of the third row:

Consider the fourth row:

When i = 3, j =0

M[3][0] = 19-3= 16

When i = 3, j = 1

M[3][1] = 6 - 3= 3

When i = 3, j = 2

M[3][2] = 18 - 3 = 15

When i = 3, j = 3

M[3][3] = ∞ - 3 = ∞

When i = 3, j = 4

M[3][4] = 3 - 3 = 0

The matrix is shown below after the evaluation of the fourth row:

Consider the fifth row:

When i = 4, j =0

M[4][0] = 16-4= 12
When i = 4, j = 1

M[4][1] = 4 - 4= 0

When i = 4, j = 2

M[4][2] = 7 - 4 = 3

When i = 4, j = 3

M[4][3] = 16 - 4 = 12

When i = 4, j = 4

M[4][4] = ∞ - 4 = ∞

The matrix is shown below after the evaluation of the fifth row:

The above matrix is the reduced matrix with respect to the rows.

Now we reduce the matrix with respect to the columns. Before reducing the matrix, we
first find the minimum value of all the columns. The minimum value of first column is
1, the minimum value of the second column is 0, the minimum value of the third
column is 3, the minimum value of the fourth column is 0, and the minimum value of
the fifth column is 0, as shown in the below matrix:

Now we reduce the matrix.

Consider the first column.

When i = 0, j =0
M[0][0] = ∞-1= ∞

When i = 1, j = 0

M[1][0] = 13 - 1= 12

When i = 2, j = 0

M[2][0] = 1 - 1 = 0

When i = 3, j = 0

M[3][0] = 16 - 1 = 15

When i = 4, j = 0

M[4][0] = 12 - 1 = 11

The matrix is shown below after the evaluation of the first column:

Since the minimum value of the first and the third columns is non-zero, we will evaluate
only first and third columns. We have evaluated the first column. Now we will evaluate
the third column.

Consider the third column.

When i = 0, j =2

M[0][2] = 20-3= 17

When i = 1, j = 2

M[1][2] = 14 - 3 = 11

When i = 2, j = 2

M[2][2] = ∞ - 3 = ∞

When i = 3, j = 2

M[3][2] = 15 - 3 = 12

When i = 4, j = 2

M[4][2] = 3 - 3 = 0

The matrix is shown below after the evaluation of the third column:

The above is the fully reduced matrix. The sum of the row minima is 21 and the sum of the column minima is 4. Therefore, the total reduction, which is the lower bound on the cost of any tour (the cost of the root node), is 21 + 4 = 25.
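A small Python sketch of this row/column reduction, using the cost matrix implied by the computations above; it returns the same lower bound of 25:

INF = float('inf')

def reduce_matrix(M):
    M = [row[:] for row in M]
    bound = 0
    for row in M:                                  # row reduction
        least = min(row)
        if 0 < least < INF:
            bound += least
            for j in range(len(row)):
                if row[j] < INF:
                    row[j] -= least
    for j in range(len(M)):                        # column reduction
        least = min(row[j] for row in M)
        if 0 < least < INF:
            bound += least
            for row in M:
                if row[j] < INF:
                    row[j] -= least
    return M, bound

M = [[INF, 20, 30, 10, 11], [15, INF, 16, 4, 2], [3, 5, INF, 2, 4],
     [19, 6, 18, INF, 3], [16, 4, 7, 16, INF]]
print(reduce_matrix(M)[1])    # 25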

14. By taking your own 5x5 cost matrix explain the working of Travelling sales person problem
using Branch and Bound.

Above answer

15. Explain the differences between NP complete and NP-hard problems.


NP-Hard | NP-Complete

1. An NP-Hard problem X can be solved if and only if there is an NP-Complete problem Y that can be reduced to X in polynomial time. | An NP-Complete problem can be solved by a non-deterministic algorithm/Turing machine in polynomial time.

2. To be NP-Hard, a problem does not have to be in NP. | To be NP-Complete, a problem must be both in NP and NP-Hard.

3. No polynomial time bound is known for NP-Hard problems. | A non-deterministic polynomial time bound is known for NP-Complete problems.

4. An NP-Hard problem need not be a decision problem; it is often an optimization problem. | An NP-Complete problem is exclusively a decision problem.

5. Not all NP-Hard problems are NP-Complete. | All NP-Complete problems are NP-Hard.

Examples of NP-Hard problems: the halting problem, the (optimization) vertex cover problem, etc.
Examples of NP-Complete problems: determining whether a graph has a Hamiltonian cycle, determining whether a Boolean formula is satisfiable, the circuit-satisfiability problem, etc.

16. How to determine whether a problem is NP-Hard or P? Illustrate with an example.

A language B is NP-complete if it satisfies two conditions

 B is in NP
 Every A in NP is polynomial time reducible to B.

If a language satisfies the second property, but not necessarily the first one, the language B is known as NP-Hard. Informally, a search problem B is NP-Hard if there exists some NP-Complete problem A that Turing-reduces to B.

A problem in NP-Hard cannot be solved in polynomial time unless P = NP. If a problem is proved to be NPC, there is no need to waste time trying to find an efficient exact algorithm for it. Instead, we can focus on designing approximation algorithms.

NP-Complete Problems
Following are some NP-Complete problems, for which no
polynomial time algorithm is known.

 Determining whether a graph has a Hamiltonian cycle


 Determining whether a Boolean formula is satisfiable, etc.

NP-Hard Problems
The following problems are NP-Hard

 The circuit-satisfiability problem


 Set Cover
 Vertex Cover
 Travelling Salesman Problem
