
Daa Mod-5

The document discusses backtracking algorithms for solving the knapsack problem and the N-Queens problem. It describes how to apply backtracking to the knapsack problem by structuring the state space as a binary tree. Each node represents a partial solution and is assigned an upper bound on its value. Pruning occurs by skipping nodes whose upper bound is less than the best known solution. It also explains how to solve the N-Queens problem using backtracking. The algorithm tries placing queens one by one in different columns of their respective rows. If a placement leads to a conflict, it backtracks and tries alternative placements, building a state space tree. Pruning occurs when all placements for a row lead to conflicts.

Uploaded by

Disha Gudigar

MODULE – 5: BACKTRACKING

Knapsack Algorithm Using Backtracking Principle:

Let us now discuss how we can apply the branch-and-bound technique to solve the
knapsack problem: given n items of known weights wi and values vi, i = 1, 2,
. . . , n, and a knapsack of capacity W, find the most valuable subset of the items
that fits in the knapsack.

It is convenient to order the items of a given instance in descending order by their
value-to-weight ratios. Then the first item gives the best payoff per weight unit and
the last one gives the worst payoff per weight unit, with ties resolved arbitrarily:

v1/w1 >= v2/w2 >= . . . >= vn/wn

It is natural to structure the state-space tree for this problem as a binary tree
constructed as follows (see Figure 12.8 for an example). Each node on the ith
level of this tree, 0 ≤ i ≤ n, represents all the subsets of n items that include a
particular selection made from the first i ordered items. This particular selection is
uniquely determined by the path from the root to the node: a branch going to the
left indicates the inclusion of the next item, and a branch going to the right
indicates its exclusion. We record the total weight w and the total value v of this
selection in the node, along with some upper bound ub on the value of any subset
that can be obtained by adding zero or more items to this selection.

A simple way to compute the upper bound ub is to add to v, the total value of the
items already selected, the product of the remaining capacity of the knapsack
W - w and the best per-unit payoff among the remaining items, which is
vi+1/wi+1:

ub = v + (W - w) * (vi+1 / wi+1)
As a specific example, let us apply the branch-and-bound algorithm to the
same instance of the knapsack problem we solved in Section 3.4 by exhaustive
search. (We reorder the items in descending order of their value-to-weight
ratios, though.)

Knapsack Problem Using Backtracking:


EXAMPLE 1: Let us consider the instance given by the following data:

Item    Weight    Value
 1        2        12
 2        1        10
 3        3        18
 4        2        14

Capacity (M) = 5.    Formula: Ub = v + (M - w) * (vi+1 / wi+1)
Solution:
Solution:

Item    Weight (w)    Value (v)    vi / wi
 1          2            12           6
 2          1            10          10
 3          3            18           6
 4          2            14           7

M = 5
When i = 0:
v = 0 ; w = 0
(M-w) = 5 - 0 = 5
(v0+1 / w0+1) = (v1 / w1) = 6 // (vi+1 / wi+1) => we always consider the next item's ratio
Ub = 0 + (5 * 6) = 30
When i = 1:
With w1 and v1
w = 0 + 2 = 2 // feasible, since w < M
v = 0 + 12 = 12
(M-w) = 5 - 2 = 3
(v1+1 / w1+1) = (v2 / w2) = 10 // we always consider the next item's ratio
Ub = 12 + (3 * 10) = 42
Without w1 and v1
w = w0 = 0
v = v0 = 0
(M-w) = 5 - 0 = 5
(v1+1 / w1+1) = 10 // (vi+1 / wi+1) is the same whether item 1 is included or not
Ub = 0 + (5 * 10) = 50
When i = 2:
With w2 and v2
w = 2 + 1 = 3 // feasible (w < M); we extend the previous feasible node
v = 12 + 10 = 22
(M-w) = 5 - 3 = 2
(v2+1 / w2+1) = (v3 / w3) = 6 // we always consider the next item's ratio
Ub = 22 + (2 * 6) = 34
Without w2 and v2
w = 0 + 2 = 2 // feasible (w < M)
v = 0 + 12 = 12
(M-w) = 5 - 2 = 3
(v2+1 / w2+1) = (v3 / w3) = 6 // the next item's ratio is the same in either branch
Ub = 12 + (3 * 6) = 30
When i = 3:
With w3 and v3
w = 3 + 3 = 6 // not feasible (w > M), so this node is terminated
Without w3 and v3
w = 3 // unchanged; feasible (w < M)
v = 22
(M-w) = 5 - 3 = 2
(v3+1 / w3+1) = (v4 / w4) = 7
Ub = 22 + (2 * 7) = 36
When i = 4:
With w4 and v4
w = 3 + 2 = 5 // feasible (w <= M)
v = 22 + 14 = 36
(v4+1 / w4+1) = 0 // no items remain
(M-w) = 5 - 5 = 0
Ub = 36 + (0 * 0) = 36
Without w4 and v4
w = 3 // feasible
v = 22
(v4+1 / w4+1) = 0 // no items remain
(M-w) = 5 - 3 = 2
Ub = 22 + (2 * 0) = 22
KNAPSACK OPTIMAL SOLUTION = {1, 2, 4}
THE TOTAL VALUE = 12 + 10 + 14 = 36

DIAGRAMMATIC REPRESENTATION
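The hand computation above can be sketched as a small branch-and-bound routine (a sketch; the name knapsack_bb and its structure are illustrative, not from the text). Unlike the hand trace, the code first sorts items by value-to-weight ratio as the textbook recommends, so intermediate upper bounds differ slightly, but the optimal subset and value agree.

```python
# Branch-and-bound for the 0/1 knapsack problem (sketch).
def knapsack_bb(weights, values, capacity):
    n = len(weights)
    # Order items by value-to-weight ratio, descending, so that the
    # "next item's ratio" really is the best payoff among remaining items.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]

    best_value = 0
    best_set = []

    def ub(i, cur_w, cur_v):
        # Upper bound: current value plus remaining capacity times the
        # best per-unit payoff among the remaining items.
        if i >= n:
            return cur_v
        return cur_v + (capacity - cur_w) * (v[i] / w[i])

    def explore(i, cur_w, cur_v, chosen):
        nonlocal best_value, best_set
        if cur_v > best_value:
            best_value, best_set = cur_v, chosen[:]
        if i == n or ub(i, cur_w, cur_v) <= best_value:
            return  # prune: bound is no better than the best solution so far
        if cur_w + w[i] <= capacity:          # left branch: include item i
            chosen.append(order[i])
            explore(i + 1, cur_w + w[i], cur_v + v[i], chosen)
            chosen.pop()
        explore(i + 1, cur_w, cur_v, chosen)  # right branch: exclude item i

    explore(0, 0, 0, [])
    return best_value, sorted(best_set)

# Instance from Example 1 (0-based indices; {0, 1, 3} is {1, 2, 4} above):
print(knapsack_bb([2, 1, 3, 2], [12, 10, 18, 14], 5))  # -> (36, [0, 1, 3])
```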

P, NP, and NP-Complete Problems:


DEFINITION 1: We say that an algorithm solves a problem in polynomial time if its worst-case time
efficiency belongs to O(p(n)) where p(n) is a polynomial of the problem’s input size n. (Note that
since we are using big-oh notation here, problems solvable in, say, logarithmic time are solvable in
polynomial time as well.) Problems that can be solved in polynomial time are called tractable, and
problems that cannot be solved in polynomial time are called intractable.
P and NP Problems:
DEFINITION 2: Class P is a class of decision problems that can be solved in polynomial time by
(deterministic) algorithms. This class of problems is called polynomial. The restriction of P to
decision problems can be justified by the following reasons. First, it is sensible to exclude problems
not solvable in polynomial time because of their exponentially large output. Such problems do arise
naturally— e.g., generating subsets of a given set or all the permutations of n distinct items—but it is
apparent from the outset that they cannot be solved in polynomial time.
Second, many important problems that are not decision problems in their most natural formulation can
be reduced to a series of decision problems that are easier to study. For example, instead of asking
about the minimum number of colors needed to color the vertices of a graph so that no two adjacent
vertices are colored the same color, we can ask whether there exists such a coloring of the graph’s
vertices with no more than m colors for m = 1, 2, . . . . (The latter is called the m-coloring problem.)
DEFINITION 3: A nondeterministic algorithm is a two-stage procedure that takes as its input an
instance I of a decision problem and does the following.
- Nondeterministic ("guessing") stage: An arbitrary string S is generated that can be thought
of as a candidate solution to the given instance I (but may be complete gibberish as well).
- Deterministic ("verification") stage: A deterministic algorithm takes both I and S as its
input and outputs yes if S represents a solution to instance I. (If S is not a solution to instance
I, the algorithm either returns no or is allowed not to halt at all.)
Now we can define the class of NP problems.
DEFINITION 4: Class NP is the class of decision problems that can be solved by nondeterministic
polynomial algorithms. This class of problems is called nondeterministic polynomial. Most decision
problems are in NP. First of all, this class includes all the problems in P:
P⊆NP
NP-Complete Problems
Informally, an NP-complete problem is a problem in NP that is as difficult as any other problem in
this class because, by definition, any other problem in NP can be reduced to it in polynomial time
(shown symbolically in Figure 11.6). Here are more formal definitions of these concepts.
DEFINITION 5: A decision problem D1 is said to be polynomial reducible to a decision problem D2,
if there exists a function t that transforms instances of D1 to instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no
instances of D2
2. t is computable by a polynomial time algorithm
This definition immediately implies that if a problem D1 is polynomial reducible to some
problem D2 that can be solved in polynomial time, then problem D1 can also be solved in
polynomial time (why?).
DEFINITION 6: A decision problem D is said to be NP-complete if:
1. it belongs to class NP
2. every problem in NP is polynomial reducible to D
The fact that closely related decision problems are polynomial reducible to each other is not very
surprising.
For example, let us prove that the Hamiltonian circuit problem is polynomial reducible to the decision
version of the traveling salesman problem.
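The reduction can be sketched as follows (a sketch; the function name and the weight scheme of 1 for edges and 2 for non-edges are the standard construction, stated here as an assumption since the text does not spell out the details): give every edge of the graph G weight 1 and every non-edge weight 2; then G has a Hamiltonian circuit if and only if the resulting complete weighted graph has a tour of total length at most n.

```python
# Polynomial reduction: Hamiltonian circuit instance -> TSP decision instance.
def hamiltonian_to_tsp(n, edges):
    edge_set = {frozenset(e) for e in edges}
    # Complete cost matrix: weight 1 for original edges, 2 for non-edges.
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                cost[i][j] = 1 if frozenset((i, j)) in edge_set else 2
    bound = n  # decision question: is there a tour of length <= n?
    return cost, bound

# A 4-cycle 0-1-2-3-0: its Hamiltonian circuit is a tour of length 4.
cost, bound = hamiltonian_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The transformation clearly runs in polynomial (quadratic) time, and a yes answer to the TSP question forces the tour to use only weight-1 edges, i.e., a Hamiltonian circuit of G.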
N-Queens Problem:
The problem is to place n queens on an n × n chessboard so that no two queens attack each other by
being in the same row or in the same column or on the same diagonal. For n = 1, the problem has a
trivial solution, and it is easy to see that there is no solution for n = 2 and n = 3.
So let us consider the four-queens problem and solve it by the backtracking technique. Since each of
the four queens has to be placed in its own row, all we need to do is to assign a column for each queen
on the board presented in the fig below:

We start with the empty board and then place queen 1 in the first possible position of its row, which is
in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1 and 2, in the first
acceptable position for it, which is square (2, 3), the square in row 2 and column 3. This proves to be a
dead end because there is no acceptable position for queen 3. So, the algorithm backtracks and puts
queen 2 in the next possible position at (2, 4). Then queen 3 is placed at (3, 2), which proves to be
another dead end. The algorithm then backtracks all the way to queen 1 and moves it to (1, 2). Queen
2 then goes to (2, 4), queen 3 to (3, 1), and queen 4 to (4, 3), which is a solution to the problem. The
state-space tree of this search is shown in Fig below:

NOTE: If other solutions need to be found, the algorithm can simply resume its operations at the leaf
at which it stopped.
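The row-by-row search just traced can be sketched as a short backtracking routine (a sketch; the name solve_n_queens and the 0-based column encoding are illustrative):

```python
# Backtracking n-queens: cols[r] is the column of the queen in row r.
def solve_n_queens(n):
    solutions = []
    cols = []

    def safe(col):
        row = len(cols)
        # A new queen conflicts if it shares a column or a diagonal
        # (|column difference| == |row difference|) with a placed queen.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(cols[:])   # all n queens placed: record solution
            return
        for col in range(n):            # try each column of this row in turn
            if safe(col):
                cols.append(col)
                place(row + 1)          # advance to the next row
                cols.pop()              # undo and try the next column
        # falling out of the loop with no safe column is the backtrack step

    place(0)
    return solutions

# The four-queens problem has exactly two solutions; the first one found by
# this left-to-right search puts the queens at (1,2), (2,4), (3,1), (4,3),
# matching the trace above.
print(solve_n_queens(4))  # -> [[1, 3, 0, 2], [2, 0, 3, 1]]
```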
Hamiltonian Circuit Problem:
Without loss of generality, we can assume that if a Hamiltonian circuit exists, it starts at
vertex a. Accordingly, we make vertex ‘a’ the root of the state-space tree (Figure 12.3b). The
first component of our future solution, if it exists, is a first intermediate vertex of a
Hamiltonian circuit to be constructed.
Using the alphabet order to break the three-way tie among the vertices adjacent to ‘a’, we
select vertex b. From b, the algorithm proceeds to c, then to ‘d’, then to ‘e’, and finally to ‘f’,
which proves to be a dead end.
So, the algorithm backtracks from ‘f’ to ‘e’, then to ‘d’, and then to c, which provides the first
alternative for the algorithm to pursue.
Going from ‘c’ to ‘e’ eventually proves useless, and the algorithm has to backtrack from e to
c and then to b. From there, it goes to the vertices f , e, c, and d, from which it can
legitimately return to a, yielding the Hamiltonian circuit a, b, f , e, c, d, a. If we wanted to
find another Hamiltonian circuit, we could continue this process by backtracking from the
leaf of the solution found.
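This search can be sketched on an adjacency-list graph (a sketch; the adjacency lists below are reconstructed from the trace above, since Figure 12.3 itself is not reproduced, and should be treated as an assumption):

```python
# Backtracking search for a Hamiltonian circuit starting at a given vertex.
def hamiltonian_circuit(graph, start='a'):
    path = [start]

    def extend():
        if len(path) == len(graph):
            # all vertices used: a circuit exists iff we can return to start
            return start in graph[path[-1]]
        for v in sorted(graph[path[-1]]):   # alphabetical tie-breaking
            if v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()                  # dead end: backtrack
        return False

    return path + [start] if extend() else None

# Adjacency lists reconstructed from the trace (an assumption):
g = {'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'f'], 'c': ['a', 'b', 'd', 'e'],
     'd': ['a', 'c', 'e'], 'e': ['c', 'd', 'f'], 'f': ['b', 'e']}
print(hamiltonian_circuit(g))  # -> ['a', 'b', 'f', 'e', 'c', 'd', 'a']
```

On this graph the code reproduces the trace exactly: it first reaches the dead end a, b, c, d, e, f, backtracks to c, fails through e, backtracks to b, and finally returns the circuit a, b, f, e, c, d, a.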
Branch-and-Bound
Compared to backtracking, branch-and-bound requires two additional items:
- a way to provide, for every node of a state-space tree, a bound on the best
value of the objective function on any solution that can be obtained by
adding further components to the partially constructed solution represented
by the node
- the value of the best solution seen so far.
If this information is available, we can compare a node's bound value with
the value of the best solution seen so far. If the bound value is not better
than the value of the best solution seen so far (i.e., not smaller for a
minimization problem and not larger for a maximization problem), the node is
nonpromising and can be terminated (some people say the branch is "pruned").
Indeed, no solution obtained from it can yield a better solution than the
one already available. This is the principal idea of the branch-and-bound
technique.

In general, we terminate a search path at the current node in a state-space tree of a
branch-and-bound algorithm for any one of the following three reasons:
- The value of the node's bound is not better than the value of the best
solution seen so far.
- The node represents no feasible solutions because the constraints of the
problem are already violated.
- The subset of feasible solutions represented by the node consists of a
single point (and hence no further choices can be made); in this case, we
compare the value of the objective function for this feasible solution with
that of the best solution seen so far and update the latter with the former
if the new solution is better.

Assignment Problem
Let us illustrate the branch-and-bound approach by applying it to the problem of assigning n
people to n jobs so that the total cost of the assignment is as small as possible. We introduced
this problem in Section 3.4, where we solved it by exhaustive search. Recall that an instance
of the assignment problem is specified by an n × n cost matrix C so that we can state the
problem as follows: select one element in each row of the matrix so that no two selected
elements are in the same column and their sum is the smallest possible.
How can we find a lower bound on the cost of an optimal selection without actually
solving the problem? For example, it is clear that the cost of any solution, including
an optimal one, cannot be smaller than the sum of the smallest elements in each of the
matrix's rows. For the instance here, this sum is 2 + 3 + 1 + 4 = 10.
It is important to stress that this is not the cost of any legitimate selection (3 and 1 came from
the same column of the matrix); it is just a lower bound on the cost of any legitimate
selection. We can and will apply the same thinking to partially constructed solutions.
For example, for any legitimate selection that selects 9 from the first row, the lower bound
will be 9 + 3 + 1 + 4 = 17.
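This bounding rule can be sketched as a best-first branch-and-bound search (a sketch; the name assign_bb is illustrative, and the cost matrix below is an assumption reconstructed to match the sums quoted in the text: its row minima are 2, 3, 1, 4 and its first-row entry 9 gives the bound 17):

```python
import heapq

# Best-first branch-and-bound for the assignment problem (sketch).
def assign_bb(C):
    n = len(C)

    def lb(chosen):
        # Lower bound: cost of the partial assignment plus, for each
        # remaining row, the smallest entry in a column not yet taken.
        used = set(chosen)
        total = sum(C[r][c] for r, c in enumerate(chosen))
        for r in range(len(chosen), n):
            total += min(C[r][c] for c in range(n) if c not in used)
        return total

    # Always expand the live node with the smallest lower bound.
    heap = [(lb(()), ())]
    while heap:
        bound, chosen = heapq.heappop(heap)
        if len(chosen) == n:
            # The first complete node popped is optimal: its bound equals its
            # cost, and every live node's bound is at least as large.
            return bound, list(chosen)
        for c in range(n):
            if c not in chosen:
                node = chosen + (c,)
                heapq.heappush(heap, (lb(node), node))

# Assumed cost matrix consistent with the sums 2+3+1+4 = 10 and 9+3+1+4 = 17:
C = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]
print(assign_bb(C))  # -> (13, [1, 0, 2, 3])
```

The root's bound is 10; expanding nodes in best-first order leads to the optimal assignment of cost 13 (person 1 to job 2, person 2 to job 1, person 3 to job 3, person 4 to job 4).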
