Unit V Backtracking and Branch & Bound

BACKTRACKING

• Problems such as the Hamiltonian circuit and knapsack problems require
finding an element with a special property in a domain that grows
exponentially fast (or faster) with the size of the problem’s input.
• Such problems may not be solvable in polynomial time.
• Traditionally, such problems were solved by exhaustive search. The
exhaustive-search technique suggests generating all candidate
solutions and then identifying the one (or the ones) with the
desired property.
• Backtracking is a more intelligent variation of this approach.
BACKTRACKING
The principal idea is to construct solutions one component at a time and
evaluate such partially constructed candidates as follows:
If a partially constructed solution can be developed further without
violating the problem’s constraints, it is done by taking the first remaining
legitimate option for the next component.
If there is no legitimate option for the next component, no alternatives for
any remaining component need to be considered.
In this case, the algorithm backtracks to replace the last component of the
partially constructed solution with its next option.
It is convenient to implement this kind of processing by constructing a tree
of choices being made, called the state-space tree.
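The recursive structure described above can be summarized in a short sketch. The following Python skeleton is only an illustration of the idea (the helper names options_for and is_complete are placeholders, not part of these notes): it extends the partial solution with each legitimate option in turn and undoes the last choice when no option leads further.

```python
# A minimal backtracking skeleton (an illustrative sketch; the helper names
# are placeholders, not taken from these notes).

def backtrack(partial, options_for, is_complete, solutions):
    if is_complete(partial):                   # a complete solution was reached
        solutions.append(tuple(partial))
        return
    for option in options_for(partial):        # legitimate options for the next component
        partial.append(option)                 # extend the partial solution
        backtrack(partial, options_for, is_complete, solutions)
        partial.pop()                          # backtrack: replace it with the next option

# Tiny demonstration: enumerate all permutations of {1, 2, 3} component by component.
out = []
backtrack([], lambda p: [x for x in (1, 2, 3) if x not in p],
          lambda p: len(p) == 3, out)
print(out)   # [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
```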
state-space tree
• Its root represents an initial state before the search for a solution begins.
• The nodes of the first level in the tree represent the choices made for the first
component of a solution,
• The nodes of the second level represent the choices for the second
component, and so on.
• A node in a state-space tree is said to be promising if it corresponds to a
partially constructed solution that may still lead to a complete solution;
otherwise, it is called nonpromising.
• Leaves represent either nonpromising dead ends or complete solutions
found by the algorithm.
• In the majority of cases, a state-space tree for a backtracking algorithm is
constructed in the manner of depth-first search.
• If the current node turns out to be nonpromising, the algorithm backtracks to
the node’s parent to consider the next possible option for its last component.
State Space Tree For Solving the 4-Queens Problem
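As a concrete instance of this tree, a possible backtracking routine for the n-queens problem (a sketch, not taken from the slides) places one queen per row and prunes every nonpromising column choice:

```python
# Backtracking for the n-queens problem: place one queen per row so that
# no two queens share a column or a diagonal.

def solve_n_queens(n=4):
    solutions = []

    def promising(cols, col):
        row = len(cols)                      # row of the queen being placed
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:                   # complete solution reached
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if promising(cols, col):         # promising node: go deeper
                place(cols + [col])          # nonpromising columns are pruned

    place([])
    return solutions

print(solve_n_queens(4))   # [(1, 3, 0, 2), (2, 0, 3, 1)]
```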
Subset-Sum Problem
The subset-sum problem is to find a subset of a given set A = {a1, ..., an} of n
positive integers whose sum is equal to a given positive integer d.
For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions:
{1, 2, 6} and {1, 8}. Some instances of this problem may have no solutions.
It is convenient to sort the set’s elements in increasing order, so we will
assume that a1 < a2 < ... < an.
The state-space tree can be constructed as a binary tree, for example for the
instance A = {3, 5, 6, 7} and d = 15.
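A possible backtracking sketch for this instance (assuming, as above, that the elements are sorted in increasing order) prunes a node as soon as its sum exceeds d or the remaining elements cannot bring the sum up to d:

```python
# Backtracking for the subset-sum problem on A = {3, 5, 6, 7}, d = 15.
# A node is nonpromising if its sum already exceeds d or if even adding
# all remaining elements cannot reach d.

def subset_sum(a, d):
    a = sorted(a)
    solutions = []

    def extend(i, chosen, s, remaining):
        if s == d:
            solutions.append(list(chosen))
            return
        if i == len(a) or s + a[i] > d or s + remaining < d:
            return                            # nonpromising: backtrack
        extend(i + 1, chosen + [a[i]], s + a[i], remaining - a[i])   # include a[i]
        extend(i + 1, chosen, s, remaining - a[i])                   # exclude a[i]

    extend(0, [], 0, sum(a))
    return solutions

print(subset_sum([3, 5, 6, 7], 15))   # [[3, 5, 7]]
```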
State Space Tree for subset-sum problem
A = {3, 5, 6, 7} and d = 15.
Construct a State Space Tree for
subset-sum problem
A={1, 4, 6, 9, 11} and d=15
State Space Tree for Hamiltonian Circuit Problem
Branch and Bound
• Branch and Bound is similar to Backtracking but is applied to optimization
problems, strengthening the search with a bounding function.
• An optimization problem seeks to minimize or maximize some
objective function, usually subject to some constraints.
• In the standard terminology of optimization problems, a feasible
solution is a point in the problem’s search space that satisfies all
the problem’s constraints.
• An optimal solution is a feasible solution with the best value of
the objective function.
Branch and Bound
Branch-and-bound requires two additional items:
• a way to provide, for every node of a state-space tree, a bound on
the best value of the objective function on any solution that can be
obtained by adding further components to the partially constructed
solution represented by the node.
• the value of the best solution seen so far.
Note: This bound should be a lower bound for a minimization
problem and an upper bound for a maximization problem.
Branch and Bound
Terminate a search path at the current node in a state-space tree of a branch-
and-bound algorithm for any one of the following three reasons:
• The value of the node’s bound is not better than the value of the best
solution seen so far.
• The node represents no feasible solutions because the constraints of the
problem are already violated.
• The subset of feasible solutions represented by the node consists of a single
point (and hence no further choices can be made)—in this case, we compare
the value of the objective function for this feasible solution with that of the
best solution seen so far and update the latter with the former if the new
solution is better
Knapsack problem
Given n items of known weights wi and values vi, i=1,2,...,n,and a knapsack of capacity W,
find the most valuable subset of the items that fit in the knapsack.

The state-space tree for this problem is a binary tree constructed as follows:
• Each node on the ith level of this tree, 0≤i≤n represents all the subsets of n items that
include a particular selection made from the first i ordered items.
• A branch going to the left indicates the inclusion of the next item, and a branch going to
the right indicates its exclusion.
• Record the total weight w and the total value v of this selection in the node, along with
some upper bound ub on the value of any subset that can be obtained by adding zero or
more items to this selection.
• The items are assumed to be ordered in nonincreasing order of their
value-to-weight ratios vi/wi.
• The upper bound is calculated using the formula (a small sketch of this
computation follows below):
ub = v + (W − w)(vi+1/wi+1)
where: v = total value of the items included so far,
W − w = total capacity minus the total weight of the items already placed,
vi+1/wi+1 = value-to-weight ratio of the next item to be considered.
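A minimal sketch of this bound computation, assuming 0-indexed lists and items already sorted by nonincreasing value-to-weight ratio (the function name is mine, not from the notes):

```python
# Upper bound for a level-i node of the knapsack state-space tree:
# items 1..i have been decided, giving total weight w and total value v;
# the next item has index i in the 0-indexed lists.

def upper_bound(i, w, v, weights, values, W):
    if i >= len(weights):                # no items left to fill the residual capacity
        return v
    return v + (W - w) * values[i] / weights[i]

# Root node of the worked example below: level 0, nothing chosen yet.
print(upper_bound(0, 0, 0, [4, 7, 5, 3], [40, 42, 25, 12], 10))   # 100.0
```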
Worked example: W = 10, Weight = [4, 7, 5, 3], Value = [40, 42, 25, 12]
(already ordered by value-to-weight ratio).
Root: ub = 0 + (10 − 0)(40/4) = 100
Node 1 (with item 1): w = 4, v = 40, ub = 40 + (10 − 4)(42/7) = 40 + 6·6 = 76
Node 2 (without item 1): ub = 0 + (10 − 0)(42/7) = 60; 60 < 65, inferior to the optimal solution
Node 3 (with item 2): w = 4 + 7 = 11 > 10, not feasible
Node 4 (without item 2): ub = 40 + (10 − 4)(25/5) = 70
Node 5 (with item 3): w = 9, v = 65, ub = 65 + (10 − 9)(12/3) = 69
Node 6 (without item 3): ub = 40 + (10 − 4)(12/3) = 64; 64 < 65, inferior to the optimal solution
Node 7 (with item 4): w = 9 + 3 = 12 > 10, not feasible
Node 8 (without item 4): w = 9, v = 65, ub = 65 → optimal solution (items 1 and 3, value 65)
• Solve the following instance of the knapsack problem by the
branch-and-bound algorithm:
Capacity W = 16
item    1    2    3    4
Weight  10   7    8    4
Value   100  63   56   12

ub for the root = 0 + (16 − 0)(100/10) = 160
Node 1 (with item 1): w = 10, v = 100, ub = 100 + (16 − 10)(63/7) = 154
Node 2 (with item 2): w = 10 + 7 = 17 > 16, not feasible
Node 3 (without item 2): w = 10, v = 100, ub = 100 + (16 − 10)(56/8) = 142
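Putting the bound together with a best-first search of the state-space tree gives a small branch-and-bound sketch (an illustration only; it assumes the items are supplied in nonincreasing value-to-weight order, as in both examples above):

```python
import heapq

def knapsack_bb(weights, values, W):
    """Best-first branch-and-bound for the 0/1 knapsack problem.
    Items are assumed sorted in nonincreasing value/weight order."""
    n = len(weights)

    def ub(i, w, v):
        # ub = v + (W - w)(v_{i+1}/w_{i+1}); just v if no item remains
        return v if i >= n else v + (W - w) * values[i] / weights[i]

    best_v, best_set = 0, []
    heap = [(-ub(0, 0, 0), 0, 0, 0, [])]          # (-bound, level, w, v, chosen)
    while heap:
        neg_bound, i, w, v, chosen = heapq.heappop(heap)
        if -neg_bound <= best_v:                  # bound no better than best so far
            continue
        if i == n:
            continue
        # left child: include item i (only if it still fits)
        if w + weights[i] <= W:
            w1, v1 = w + weights[i], v + values[i]
            if v1 > best_v:
                best_v, best_set = v1, chosen + [i]
            heapq.heappush(heap, (-ub(i + 1, w1, v1), i + 1, w1, v1, chosen + [i]))
        # right child: exclude item i
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v, chosen))
    return best_v, best_set

# First instance above: W = 10, items already sorted by value/weight ratio.
print(knapsack_bb([4, 7, 5, 3], [40, 42, 25, 12], 10))   # (65, [0, 2])
```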
Travelling Salesman Problem
• The branch-and-bound technique can be applied to instances of
the traveling salesman problem by finding a reasonable lower
bound on tour lengths.
• One very simple lower bound can be obtained by finding the
smallest element in the intercity distance matrix D and
multiplying it by the number of cities n.
• A less obvious but more informative lower bound for instances with a
symmetric matrix D can be obtained by computing a lower bound on the
length l of any tour as
lb = ⌈s/2⌉
Travelling Salesman Problem
• For each city i, 1 ≤ i ≤ n, find the sum si of the distances from city i to
the two nearest cities; compute the sum s of these n numbers,
divide the result by 2, and, if all the distances are integers, round up
the result to the nearest integer.

• For any subset of tours that must include particular edges of a
given graph, we can modify the lower bound accordingly. For
example, for all the Hamiltonian circuits of the graph that must
include edge (a, d), we get the lower bound by summing up the
lengths of the two shortest edges incident with each of the vertices,
with the required inclusion of edges (a, d) and (d, a).
lb = [(1 + 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2 = 28/2 = 14 → initial lower bound
lb = [(1 + 5) + (3 + 6) + (1 + 2) + (3 + 5) + (2 + 3)]/2 = 31/2 = 15.5, rounded up to 16 → when the edges (a, d) and (d, a) must be included
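A rough sketch of this bounding rule is given below. The distance matrix D is a reconstruction consistent with the sums used above (the original figure is not reproduced here), so treat the concrete numbers as an assumption; the function itself just applies the ⌈s/2⌉ rule with optional required edges.

```python
from math import ceil

def tsp_lower_bound(dist, required=()):
    """lb = ceil(s/2): s adds up, for every city, its two smallest incident
    distances; edges in `required` (pairs of city indices) must be among
    the two distances counted for their endpoints."""
    n = len(dist)
    req = [set(e) for e in required]
    s = 0
    for i in range(n):
        forced = [dist[i][j] for j in range(n) if {i, j} in req]
        free = sorted(dist[i][j] for j in range(n)
                      if j != i and {i, j} not in req)
        s += sum(forced) + sum(free[: 2 - len(forced)])
    return ceil(s / 2)

# Distances reconstructed to match the sums above (a, b, c, d, e -> 0..4);
# an assumption for illustration, not copied from the original figure.
D = [[0, 3, 1, 5, 8],
     [3, 0, 6, 7, 9],
     [1, 6, 0, 4, 2],
     [5, 7, 4, 0, 3],
     [8, 9, 2, 3, 0]]
print(tsp_lower_bound(D))                      # 14
print(tsp_lower_bound(D, required=[(0, 3)]))   # 16, with edge (a, d) forced
```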
Travelling Salesman Problem
(Figure: the weighted graph on cities a, b, c, d, e used in the lower-bound computation above, lb = 28/2 = 14)
Limitations of Algorithm Power
• Algorithms are very powerful instruments of problem-solving
especially when they are executed by modern computers.
• The power of algorithms is not unlimited, and its limits are the
subject of this chapter.
• Some problems cannot be solved by any algorithm.
• Other problems can be solved algorithmically but not in
polynomial time.
• Even when a problem can be solved in polynomial time by some
algorithms, there are usually lower bounds on their time efficiency.
• Lower bounds are estimates of the minimum amount of work needed
to solve a problem.
Decision Trees
• Performance (time efficiency) of some algorithms can be studied with a
device called a decision tree.
E.g., algorithms that work by comparing items of their input, such as sorting
and searching algorithms.
Efficiency of an algorithm using Decision Tree
• The algorithm’s work on a particular input of size n can be traced by a path from
the root to a leaf in its decision tree, and the number of comparisons made by the
algorithm on such a run is equal to the length of this path.
• Hence, the number of comparisons in the worst case is equal to the height of the
algorithm’s decision tree.
• A tree with a given number of leaves, which is dictated by the number of possible
outcomes, has to be tall enough to have that many leaves. Specifically, it is not
difficult to prove that for any binary tree with l leaves and height h, h ≥ ⌈log2 l⌉.
• A binary tree of height h with the largest number of leaves has all its leaves on the
last level. Hence, the largest number of leaves in such a tree is 2^h; in other words,
2^h ≥ l, which implies h ≥ ⌈log2 l⌉.
• This inequality puts a lower bound on the heights of binary decision trees and hence on
the worst-case number of comparisons made by any comparison-based algorithm for the
problem in question. Such a bound is called the information-theoretic lower bound.
Decision Trees for Sorting

The number of possible outcomes for sorting an arbitrary n-element list is
equal to n!.
The height of a binary decision tree for any comparison-based sorting
algorithm, and hence the worst-case number of comparisons made by such
an algorithm, cannot be less than ⌈log2 n!⌉.
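To get a feel for the size of this bound, a two-line computation of ⌈log2 n!⌉ for a few values of n (a quick illustration, not part of the original notes):

```python
from math import ceil, log2, factorial

# Information-theoretic lower bound for comparison-based sorting:
# at least ceil(log2(n!)) comparisons are needed in the worst case.
for n in (4, 10, 100):
    print(n, ceil(log2(factorial(n))))   # 4 -> 5, 10 -> 22, 100 -> 525
```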
Decision Trees for Searching a Sorted Array
The average number of comparisons made by binary search turns out to
be about log2 n − 1 and log2(n + 1) for successful and unsuccessful searches,
respectively.
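For reference, a standard three-way binary search with a comparison counter (a sketch; in the decision-tree model each probed element counts as one comparison) shows where these logarithmic figures come from:

```python
def binary_search(a, key):
    """Classic binary search on a sorted list; returns (index or -1, comparisons)."""
    lo, hi, comps = 0, len(a) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comps += 1                      # one three-way comparison of key vs a[mid]
        if key == a[mid]:
            return mid, comps
        if key < a[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1, comps

a = list(range(1, 16))                  # n = 15: a full decision tree of height 4
print(binary_search(a, 8))              # (7, 1): the root comparison finds it
print(binary_search(a, 0))              # (-1, 4): unsuccessful, log2(15 + 1) = 4
```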
P, NP, and NP-Complete Problems
• An algorithm solves a problem in polynomial time, if its worst-case time
efficiency belongs to O(p(n)) where p(n) is a polynomial of the problem’s
input size n.
• If problems are solvable in logarithmic time then they are solvable in
polynomial time as well.
• Problems that can be solved in polynomial time are called tractable, and
problems that cannot be solved in polynomial time are called intractable.
• Class P is a class of decision problems that can be solved in polynomial time
by (deterministic) algorithms. This class of problems is called polynomial.
E.g.: computing the product and the greatest common divisor of two integers, sorting a
list, searching for a key in a list or for a pattern in a text string, checking connectivity
and acyclicity of a graph, and finding a minimum spanning tree and shortest paths in a
weighted graph.
Class P Problem
• Class P is a class of decision problems that can be solved in polynomial
time by (deterministic) algorithms. This class of problems is called
polynomial.
• The restriction of P to decision problems can be justified as follows: it is
sensible to exclude problems that are not solvable in polynomial time simply
because of their exponentially large output. Such problems do arise naturally—
e.g., generating all subsets of a given set or all the permutations of n
distinct items—but it is apparent from the outset that they cannot be
solved in polynomial time.
• There are many important problems, however, for which no
polynomial-time algorithm has been found, nor has the impossibility of
such an algorithm been proved.
Problems where no polynomial-time algorithm has been found
Hamiltonian circuit problem: Determine whether a given graph has
a Hamiltonian circuit—a path that starts and ends at the same vertex
and passes through all the other vertices exactly once.
Traveling salesman problem: Find the shortest tour through n cities
with known positive integer distances between them.
Knapsack problem: Find the most valuable subset of n items of
given positive integer weights and values that fit into a knapsack of a
given positive integer capacity.
Partition problem: Given n positive integers, determine whether it
is possible to partition them into two disjoint subsets with the same
sum.
Bin-packing problem: Given n items whose sizes are positive
rational numbers not larger than 1, put them into the smallest
number of bins of size 1.
Graph-coloring problem: For a given graph, find its chromatic
number, which is the smallest number of colors that need to be
assigned to the graph’s vertices so that no two adjacent vertices are
assigned the same color.
nondeterministic algorithm

A nondeterministic algorithm is a two-stage procedure that takes as its
input an instance I of a decision problem and does the following:
Nondeterministic (“guessing”) stage: an arbitrary string S is generated
that can be thought of as a candidate solution to the given instance I.
Deterministic (“verification”) stage: a deterministic algorithm takes
both I and S as its input and outputs yes if S represents a solution to
instance I.
A nondeterministic algorithm solves a decision problem if and only if for
every yes instance of the problem it returns yes on some execution. It
is said to be nondeterministic polynomial if the time efficiency of its
verification stage is polynomial.
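For example, a deterministic verification stage for the Hamiltonian circuit problem can be sketched as follows (the graph representation and function name are assumptions for illustration); it clearly runs in time polynomial in the sizes of I and S:

```python
def verify_hamiltonian_circuit(n, edges, candidate):
    """Verification stage: return True iff `candidate` (a sequence of
    vertices) is a Hamiltonian circuit of the graph on vertices 0..n-1."""
    adj = {frozenset(e) for e in edges}
    if sorted(candidate) != list(range(n)):          # must visit every vertex exactly once
        return False
    cycle = list(candidate) + [candidate[0]]         # return to the start
    return all(frozenset((u, v)) in adj for u, v in zip(cycle, cycle[1:]))

# A 4-cycle 0-1-2-3-0: the guess (0, 1, 2, 3) verifies; (0, 2, 1, 3) does not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_hamiltonian_circuit(4, edges, (0, 1, 2, 3)))   # True
print(verify_hamiltonian_circuit(4, edges, (0, 2, 1, 3)))   # False
```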
Class NP Problem
Class NP is the class of decision problems that can be solved by
nondeterministic polynomial algorithms. This class of problems is called
nondeterministic polynomial.
• Most decision problems are in NP. This class includes all the problems in P:
P⊆NP.
This is true because, if a problem is in P, we can use the deterministic
polynomial time algorithm that solves it in the verification-stage of a
nondeterministic algorithm that simply ignores string S generated in its
nondeterministic (“guessing”) stage.

E.g., the Hamiltonian circuit problem, the partition problem, and decision versions of
the traveling salesman, knapsack, graph-coloring, and many hundreds of other
difficult combinatorial optimization problems.
NP-complete Problem
NP-complete problem is a problem in NP that is as difficult as any other problem in this class.
A decision problem D1 is said to be polynomially reducible to a decision problem D2 if there
exists a function t that transforms instances of D1 to instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no
instances of D2;
2. t is computable by a polynomial-time algorithm.
This definition immediately implies that if a problem D1 is polynomially reducible to some
problem D2 that can be solved in polynomial time, then problem D1 can also be solved in
polynomial time.
A decision problem D is said to be NP-complete if:
1. it belongs to class NP;
2. every problem in NP is polynomially reducible to D.
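A classic example of such a transformation t is the reduction of the Hamiltonian circuit problem to the decision version of the traveling salesman problem ("is there a tour of length at most n?"): every edge of the graph gets distance 1 and every missing edge gets distance 2. The sketch below (function name mine) builds the transformed instance in polynomial time:

```python
def hc_to_tsp(n, edges):
    """Polynomial-time transformation t: a graph on n vertices is mapped to
    a TSP instance (distance matrix, bound). The graph has a Hamiltonian
    circuit iff the TSP instance has a tour of length <= n."""
    adj = {frozenset(e) for e in edges}
    dist = [[0 if i == j else (1 if frozenset((i, j)) in adj else 2)
             for j in range(n)] for i in range(n)]
    return dist, n          # yes instance of TSP-decision iff the graph has an HC

dist, bound = hc_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(bound, dist)   # 4 [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
```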
Approximation Algorithms for NP-Hard Problems
The decision versions of difficult combinatorial optimization problems fall in the
class of NP-complete problems, and their optimization versions fall in the class of
NP-hard problems.
NP-hard problems are problems that are at least as hard as NP-complete problems.
Hence, there are no known polynomial-time algorithms for these problems, and there
are serious theoretical reasons to believe that such algorithms do not exist.
E.g., the traveling salesman problem and the knapsack problem.
If an instance of the problem in question is very small, we might be able to solve it by
an exhaustive-search algorithm, and some instances can be handled with the dynamic
programming technique.
The branch-and-bound technique makes it possible to solve many large instances of
difficult optimization problems in an acceptable amount of time.
Greedy Algorithms for the TSP
The simplest approximation algorithms for the traveling salesman problem
are based on the greedy technique; a well-known example is the
nearest-neighbor algorithm, based on the heuristic of always going next to
the nearest unvisited city.
Step 1 Choose an arbitrary city as the start.
Step 2 Repeat the following operation until all the cities have been
visited:
go to the unvisited city nearest the one visited last (ties can be broken
arbitrarily).
Step 3 Return to the starting city.
For the instance represented by the graph, with a as the starting vertex, the
nearest-neighbor algorithm yields the tour (Hamiltonian circuit)
sa: a−b−c−d−a of length 10.
The optimal solution, as can be easily checked by exhaustive search, is the tour
s∗: a−b−d−c−a of length 8. Thus, the accuracy ratio of this approximation is
f(sa)/f(s∗) = 10/8 = 1.25 (i.e., tour sa is 25% longer than the optimal tour s∗).
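A direct translation of the three steps above into Python might look as follows (a sketch only; the four-city distance matrix is a reconstruction chosen to reproduce the tour lengths quoted above, since the original figure is not included):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic: repeatedly go to the closest unvisited
    city, then return to the start."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])       # ties broken arbitrarily
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)                               # Step 3: return to the starting city
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

# Distances reconstructed to match the example (a, b, c, d -> 0, 1, 2, 3).
D = [[0, 1, 3, 6],
     [1, 0, 2, 3],
     [3, 2, 0, 1],
     [6, 3, 1, 0]]
print(nearest_neighbor_tour(D))    # ([0, 1, 2, 3, 0], 10), i.e. the tour a-b-c-d-a
```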
