Branch and Bound
all possible branches of the current node are derived before any other node can become the E-node; in other words, the exploration of a new node cannot begin until the current node is completely explored. Branch and bound is an algorithmic technique for finding optimal solutions to optimization problems; it is mainly used for combinatorial and discrete global optimization. In a nutshell, we opt for this technique when the domain of possible candidates is far too large for the other techniques to handle, and it works by eliminating candidates en masse.

You should already be familiar with the tree structure of these algorithms. Of the techniques we have studied, both backtracking and divide and conquer traverse the tree depth-first, though they take opposite routes. The greedy strategy picks a single route and ignores the rest. Dynamic programming explores the tree in a variation of breadth-first search (BFS). Now, if the decision tree of the problem we plan to solve has practically unlimited depth, then backtracking and divide and conquer are, by definition, ruled out. We should not rely on greedy, because it is problem-dependent and never promises a global optimum unless that is proven mathematically. As a last resort we might consider dynamic programming. The truth is that the problem may indeed be solvable with dynamic programming, but the implementation would be inefficient and very hard to write: when a problem is complex enough that many parameters are needed to describe the solutions of its subproblems, DP becomes inefficient.

Branch and bound is a counterpart of the backtracking search algorithm. In the absence of a cost criterion, it traverses a spanning tree of the solution space breadth-first: a queue is used, and the nodes are processed in first-in-first-out order.
If a cost criterion is available, the node expanded next (the branch) is the one with the best cost in the queue. In that case, the cost function may also be used to discard (the bound) nodes in the queue that can be determined to be too expensive. A priority queue is needed here.

Cost-Based Tree Traversal

A function can be considered a tree generator if, given a node X and an index i, it produces the ith child of that node. The following function produces a complete binary tree of 11 nodes.
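Such a generator can be sketched in Python, assuming heap-style numbering of the nodes (the name `child` and the numbering scheme are illustrative choices, not from the original):

```python
def child(x, i):
    """Tree generator: return the i-th child (i = 1 or 2) of node x,
    or None if that child does not exist.

    Nodes are numbered 1..11 heap-style, so node x has children
    2x and 2x + 1; this yields a complete binary tree of 11 nodes."""
    c = 2 * x + (i - 1)
    return c if c <= 11 else None
```

For example, `child(1, 1)` is node 2, `child(1, 2)` is node 3, and `child(6, 1)` is `None`, since node 12 would exceed the 11-node tree.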
The recursive function given earlier for deriving permutations is another example of a function that may be used to generate trees. Besides a tree-generator function, we also need a cost function to decide in what order to traverse the nodes when searching for a solution. The algorithm proceeds in the following manner.

1. Initialization: The root of the tree is declared to be live.
2. Visit: The cost criterion decides which of the live nodes to process next.
3. Replacement: The chosen node is removed from the set of live nodes, and its children, determined by the tree-generator function, are inserted into the set.
4. Iteration: The visit and replacement steps are repeated until no live nodes are left.

In the case of backtracking, the cost criterion is last-in-first-out (LIFO), which can be realized with a stack. A first-in-first-out cost criterion yields the FIFO branch-and-bound algorithm and can be realized with a queue. The generalization to arbitrary cost criteria is the basis of the priority branch-and-bound algorithm, and a priority queue can be employed to realize it.

Backtracking

The backtracking method is based on the systematic examination of the possible solutions; during the procedure, whole sets of possible solutions are rejected before they are even examined, so their number becomes much smaller. An important requirement is that the solutions be generated in a proper hierarchy, so that sets of solutions failing a given requirement are rejected before those solutions are ever produced. For this reason, the generation and examination of solutions follows an acyclic graph, which in this case we will consider to be a tree. The root of the tree represents the set of all solutions. Nodes at lower levels represent ever smaller sets of solutions, grouped by their properties. The leaves, naturally, are individual solutions.
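The four-step cost-based traversal described above can be sketched generically; here a priority queue realizes an arbitrary cost criterion (a minimal sketch: `children` and `cost` are caller-supplied functions, and the names are my own):

```python
import heapq

def cost_based_traversal(root, children, cost):
    """Generic cost-based tree traversal.

    `children(x)` is the tree-generator function returning the children
    of node x; `cost(x)` is the cost criterion.  A priority queue holds
    the live nodes, so the cheapest live node is always expanded next."""
    live = [(cost(root), root)]          # 1. initialization: root is live
    visited = []
    while live:                          # 4. iteration: until no live nodes
        _, node = heapq.heappop(live)    # 2. visit: cheapest live node
        visited.append(node)
        for c in children(node):         # 3. replacement: insert children
            heapq.heappush(live, (cost(c), c))
    return visited
```

Replacing the priority queue with a stack gives backtracking (LIFO), and replacing it with a plain queue gives the FIFO branch-and-bound variant.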
It is easy to see that the tree (or any other graph) is produced during the examination of the solutions, so that no rejected solutions are ever generated. When a node is rejected, its whole subtree is rejected with it, and we backtrack to the node's ancestor so that more children can be produced and examined. Because the method tends to produce subsets of solutions that are difficult to process, it is not, by itself, very popular.
Backtracking is a refinement of the brute-force approach, which systematically searches for a solution to a problem among all available options. It does so by assuming that the solutions are represented by vectors (v1, ..., vm) of values and by traversing, in a depth-first manner, the domains of the vectors until a solution is found. When invoked, the algorithm starts with an empty vector. At each stage it extends the partial vector with a new value. Upon reaching a partial vector (v1, ..., vi) which cannot represent a partial solution, the algorithm backtracks by removing the trailing value from the vector, and then proceeds by trying to extend the vector with alternative values.

ALGORITHM try(v1, ..., vi)
  IF (v1, ..., vi) is a solution THEN RETURN (v1, ..., vi)
  FOR each v DO
    IF (v1, ..., vi, v) is an acceptable vector THEN
      sol = try(v1, ..., vi, v)
      IF sol != () THEN RETURN sol
    END
  END
  RETURN ()

If Si is the domain of vi, then S1 × ... × Sm is the solution space of the problem. The validity criterion used in checking for acceptable vectors determines what portion of that space needs to be searched, and so it also determines the resources required by the algorithm. The traversal of the solution space can be represented by a depth-first traversal of a tree. The tree itself is rarely stored in full by the algorithm; instead, just a path to the root is stored, to enable the backtracking.
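The `try` pseudocode above translates directly into Python, assuming the domains, the solution test, and the acceptability test are supplied by the caller (all names here are illustrative):

```python
def try_extend(partial, domains, is_solution, acceptable):
    """Depth-first backtracking search, following the `try` pseudocode.

    `partial` is the vector (v1, ..., vi) built so far, as a tuple;
    `domains[i]` is the domain of the value at position i + 1;
    `is_solution` and `acceptable` are predicates on partial vectors."""
    if is_solution(partial):
        return partial
    if len(partial) == len(domains):
        return ()                      # vector is full but not a solution
    for v in domains[len(partial)]:
        if acceptable(partial + (v,)):
            sol = try_extend(partial + (v,), domains, is_solution, acceptable)
            if sol != ():
                return sol
    return ()                          # dead end: backtrack
```

For example, with three identical domains {1, 2, 3}, a solution test requiring a full vector of distinct values, and an acceptability test rejecting repeats, the search returns the first permutation found, (1, 2, 3).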
TRAVELING SALESPERSON

The problem assumes a set of n cities and a salesperson who needs to visit each city exactly once and return to the base city at the end. The solution should provide a route of minimal length. In the accompanying example, the route (a, b, d, c) is the shortest one, and its length is 51.
The traveling salesperson problem is NP-hard, so no polynomial-time algorithm is known for it. Given an instance G = (V, E), the backtracking algorithm may search for a vector of cities (v1, ..., v|V|) which represents the best route. The validity criterion may just check the number of cities in the routes, pruning out routes longer than |V|. In that case, the algorithm needs to investigate |V|^|V| vectors from the solution space.
On the other hand, the validity criterion may also check for repetition of cities, in which case the number of vectors reduces to |V|!.
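A sketch of this backtracking search with the repetition check, using a hypothetical 4-city distance matrix (the matrix and all names are illustrative, not from the original example):

```python
import math

def tsp_backtrack(dist):
    """Backtracking search for the shortest tour.

    The validity check rejects repeated cities, so only |V|! of the
    |V|^|V| vectors are explored.  `dist[i][j]` is the weight of the
    edge between cities i and j; city 0 is fixed as the base city."""
    n = len(dist)
    best = [math.inf, None]            # [best length, best route]

    def extend(route, length):
        if len(route) == n:            # complete route: close the tour
            total = length + dist[route[-1]][route[0]]
            if total < best[0]:
                best[0], best[1] = total, tuple(route)
            return
        for city in range(n):
            if city not in route:      # validity: no repeated cities
                extend(route + [city], length + dist[route[-1]][city])

    extend([0], 0)
    return best[1], best[0]
```

On the sample matrix below, the search finds the tour (0, 1, 3, 2) of length 80 after pruning every vector containing a repeated city.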
THE QUEENS PROBLEM

We consider a grid of squares of dimension n×n, analogous to a chessboard, containing n^2 places. A queen placed on any of the n^2 squares controls all the squares on its row, its column, and its 45° diagonals. The problem asks how to place n queens on the chessboard so that no queen's square is controlled by any other queen. Obviously for n = 2 there is no solution, while for n = 4 a valid solution is given by the drawing below.
A possible position on the grid is given by a pair of indices (i, j), where 1 ≤ i, j ≤ n, i is the column number, and j is the row number. For any given i there are n valid values of j. In a candidate solution, however, only one queen can be in each column, that is, only one value j = V(i). The solutions are therefore represented by the n values of the vector V = [V(1), ..., V(n)]. All vectors with V(i) = V(j) for i ≠ j are rejected, because two queens cannot be in the same row. The solutions are then the permutations of the n indices, of which there are n!, still a forbiddingly large number. Among these, the correct solutions are those satisfying the last requirement, that no two queens lie on the same diagonal:

    V(j) - V(i) ≠ ±(j - i)  for i ≠ j.        (5.8-1)

A backtracking algorithm for this problem constructs the permutations [V(1), ..., V(n)] of the indices {1, ..., n} and examines them against condition (5.8-1). For example, there are (n-2)! permutations of the form [3, 4, ...]. None of them will be produced or examined if the systematic construction has already placed them in a subtree with root [3, 4], which is rejected by condition (5.8-1); rejecting that root rejects all (n-2)! permutations at once. By contrast, the same produce-and-examine process continues further into the permutations of the form p = [1, 4, 2, ...], since so far the condition is satisfied. The next value to be inserted, j = V(4), must also satisfy j - 1 ≠ 3, j - 4 ≠ ±2, and j - 2 ≠ ±1. All values j satisfying these requirements produce permutations [1, 4, 2, j, ...], which attach to the tree as children of p. Meanwhile, large sets of permutations such as [1, 4, 2, 6, ...] have already been rejected.

To put it simply: consider an n-by-n chessboard and the problem of placing n queens on the board without the queens threatening one another. The solution space is {1, 2, ..., n}^n.
The backtracking algorithm may record the columns where the different queens are positioned. Trying all vectors (p1, ..., pn) implies n^n cases. Noticing that all the queens must reside in different columns reduces the number of cases to n!. In the latter case, the root of the traversal tree has degree n, its children degree n - 1, the grandchildren degree n - 2, and so forth.
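This search, with both the distinct-row check and the diagonal check of condition (5.8-1), can be sketched as follows (0-indexed; the names are illustrative):

```python
def queens(n):
    """Backtracking search for the n-queens problem.

    V[i] is the row of the queen in column i.  Requiring distinct rows
    cuts the n**n vectors down to n!; the diagonal test
    abs(V[k] - j) != i - k prunes the tree further."""
    def extend(V):
        i = len(V)
        if i == n:
            return V                   # all n queens placed
        for j in range(n):
            if all(V[k] != j and abs(V[k] - j) != i - k for k in range(i)):
                sol = extend(V + [j])
                if sol:
                    return sol
        return None                    # dead end: backtrack
    return extend([])
```

For n = 4 the first solution found is [1, 3, 0, 2]: the queens occupy rows 2, 4, 1, 3 of columns 1 through 4, matching the 4-queens configuration discussed above.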
Design and Analysis of Algorithms
Checking for threatening positions along the way may further reduce the number of visited configurations. A typical description of this algorithm: the root of all solutions has as children the n nodes [1], ..., [n], where [j] represents all the permutations starting with j (of which there are (n-1)! for each j). Inductively, if a node comprises the k values {j1, ..., jk}, we attempt to extend it with another value, to {j1, ..., jk, jk+1}, so that condition (5.8-1) is fulfilled. For n = 4 this method produces the tree of the following picture, and does not produce the 4! = 24 leaves of all the candidate solutions.
A Hamiltonian cycle is a tour that contains every node precisely once. The classic travelling salesman problem is therefore to find the Hamiltonian cycle of minimum weight, i.e. the shortest route passing through each node once. As the number of nodes increases, the number of possible Hamiltonian cycles increases very rapidly: for n nodes the number of Hamiltonian cycles is (n - 1)!/2.
The method of evaluating all Hamiltonian cycles therefore has exponential order: a 600 MHz processor evaluating the cycles for 20 nodes takes about 3 years. Testing each of these cycles to find the optimal solution is generally too time-consuming to be feasible for most problems, so finding a reasonably good solution with the Nearest Neighbour algorithm suffices.
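The 3-year figure can be checked, assuming (generously) that the processor evaluates one full cycle per clock tick:

```python
from math import factorial

# Number of distinct Hamiltonian cycles on n nodes: (n - 1)! / 2
cycles = factorial(20 - 1) // 2        # roughly 6.1 * 10**16 for n = 20

seconds = cycles / 600e6               # one cycle per tick at 600 MHz
years = seconds / (365 * 24 * 3600)    # comes out at about 3.2 years
```

Even under this optimistic assumption the exhaustive method needs over 3 years, which is why heuristics such as Nearest Neighbour are used instead.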
The Nearest Neighbour algorithm

1. Choose any starting node.
2. Consider the arcs which join the previously chosen node to nodes not yet chosen. Choose the arc with minimum weight and add this node to the tour.
3. Repeat step 2 until all nodes have been added.
4. Add the arc which joins the last node to the first.

Disadvantages
- The algorithm does not necessarily give the best solution: it chooses the best route at each node without considering future problems.
- It may lead to an incomplete tour.
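Despite these drawbacks, the four steps are easy to implement; a minimal sketch, with a hypothetical distance matrix as input (the matrix and names are illustrative):

```python
def nearest_neighbour(dist, start=0):
    """Nearest Neighbour heuristic for a tour.

    `dist[i][j]` is the weight of the arc between nodes i and j.
    Returns the tour as a list of nodes, plus its total length."""
    n = len(dist)
    tour = [start]                                 # 1. choose a starting node
    while len(tour) < n:                           # 3. repeat until all added
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in tour),
                  key=lambda c: dist[last][c])     # 2. minimum-weight arc
        tour.append(nxt)
    # 4. add the arc joining the last node back to the first
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, length
```

On small symmetric instances the heuristic can happen to find the optimal tour, but in general it only approximates it, for the reasons listed above.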
The Lower Bound algorithm

This finds a lower limit on the weight of a Hamiltonian cycle. If the Nearest Neighbour algorithm gives an answer significantly higher than the lower bound, this suggests there is a better solution. If the Nearest Neighbour answer equals the lower bound, the tour is optimal.

1. Choose any node, X. Find the total of the 2 smallest-weight arcs joined to X.
2. Consider the network obtained by ignoring X and any arcs that join to X. Find the total weight of the minimum connector for this network.
3. The sum of the two totals is the lower bound.
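The three steps can be sketched as follows, using Prim's algorithm for the minimum connector (the choice of Prim's is mine; any minimum-spanning-tree method works, and the matrix below is hypothetical):

```python
def lower_bound(dist, x=0):
    """Lower bound for a Hamiltonian cycle on the network `dist`.

    Step 1: total of the two smallest-weight arcs joined to node x.
    Step 2: weight of a minimum spanning tree of the network without x,
            found here with a simple Prim's algorithm.
    Step 3: the sum of the two totals."""
    n = len(dist)
    two_cheapest = sum(sorted(dist[x][j] for j in range(n) if j != x)[:2])
    rest = [v for v in range(n) if v != x]
    in_tree, mst = {rest[0]}, 0        # grow the MST from one node
    while len(in_tree) < len(rest):
        w, v = min((dist[u][v], v) for u in in_tree
                   for v in rest if v not in in_tree)
        mst += w
        in_tree.add(v)
    return two_cheapest + mst
```

On the 4-node matrix used earlier, the two cheapest arcs at node 0 total 25 and the minimum connector of the remaining nodes weighs 55, giving a lower bound of 80, which equals the Nearest Neighbour tour length, so that tour is optimal.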
Tour Improvement

This algorithm attempts to improve the best solution found so far.
Let 1, 2, 3, ..., n be the successive nodes of a Hamiltonian tour (NB the indices wrap around, so n+1 = 1, n+2 = 2, n+3 = 3), and let d(V, W) be the weight of the arc between V and W.

1. Let i = 1.
2. If d(i, i+2) + d(i+1, i+3) < d(i, i+1) + d(i+2, i+3), then swap nodes i+1 and i+2. (I.e. for i = 1: if the weight between node 1 and node 3 plus the weight between node 2 and node 4 is less than the weight between node 1 and node 2 plus the weight between node 3 and node 4, swap the second and third nodes.)
3. Replace i by i + 1.
4. If i ≤ n, go to step 2. (I.e. repeat until i is greater than the number of nodes in the tour.)
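One pass of these four steps can be sketched as follows (0-indexed, with the same hypothetical distance matrix; the names are illustrative):

```python
def improve(tour, dist):
    """One pass of the tour-improvement algorithm.

    Nodes t[i+1] and t[i+2] are swapped whenever
    d(i, i+2) + d(i+1, i+3) < d(i, i+1) + d(i+2, i+3),
    with all positions taken modulo n so the tour wraps around."""
    t, n = list(tour), len(tour)
    for i in range(n):                            # steps 1, 3, 4
        a, b = t[i % n], t[(i + 1) % n]
        c, d = t[(i + 2) % n], t[(i + 3) % n]
        if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
            t[(i + 1) % n], t[(i + 2) % n] = c, b  # step 2: swap
    return t
```

Starting from the tour [0, 2, 1, 3] of length 95 on the earlier matrix, one pass swaps nodes 1 and 3 and produces [0, 2, 3, 1] of length 80, the optimal tour.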