Unit 14 Coping With The Limitations of Algorithm Power
14.1 Introduction
In the earlier unit, you learnt about the different limitations of algorithm
power. In this unit we will study how to cope with some of these
limitations.
Combinatorial problems involve counting structures of a specific kind or
size, identifying the largest, smallest or optimal objects, and constructing
and analyzing combinatorial structures. The Backtracking and Branch and Bound
algorithm design techniques help in solving some large instances of
combinatorial problems. They build candidate solutions component by
component and evaluate these partial solutions. If they determine that a
partial solution cannot lead to a complete solution, they do not generate
any of its remaining components.
Both Backtracking and Branch and Bound construct state-space trees. The
nodes of these trees represent the choices available for a component of the
solution. When no solution can be obtained from the choices corresponding
to a node's descendants, both techniques stop exploring that node.
14.2 Backtracking
Problems that require finding an element in a domain that grows exponentially
with the size of the input, like the Hamiltonian circuit and the Knapsack
problems, are not known to be solvable in polynomial time. Such problems can
be solved by the exhaustive search technique, which requires identifying the
correct solution from among many candidate solutions. The Backtracking
technique is a refinement of this approach. Backtracking is a surprisingly
simple approach and can be used even for solving the hardest Sudoku puzzle.
We can implement Backtracking by constructing the state-space tree, which
is a tree of choices. The root of the state-space tree represents the initial
state, before the search for the solution begins. The nodes at each level of
this tree represent the candidate choices for the corresponding component. A
node of this tree is considered promising if it represents a partially
constructed solution that can still lead to a complete solution; otherwise it
is considered non-promising. The leaves of the tree represent either
non-promising dead ends or complete solutions.
We usually use the depth-first search method to construct these state-
space trees. If a node is promising, then a child node is generated by
adding the first legitimate choice for the next component, and processing
continues for that child. But if a node is non-promising, then the
algorithm backtracks to the parent node and considers the next possible
choice for that component. If there are no more choices left for that
component, the algorithm backtracks one more level up. The algorithm
stops when it finds a complete solution.
This technique is illustrated in Figure 14.1. Here the algorithm goes from
the start node to node 1 and then to node 2. When no solution is found, it
backtracks to node 1 and goes to the next possible choice, node 3. But
node 3 is also a dead end. Hence the algorithm backtracks once again to
node 1 and then to the start node. From here it goes to node 4 and repeats
the procedure till node 6 is identified as the solution.
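To make the scheme concrete, the following is a minimal Python sketch of backtracking applied to the n-Queens problem (the problem illustrated in Figure 14.2). The function names and structure are our own; it is only an outline of the technique described above, not the unit's formal pseudocode.

```python
def solve_n_queens(n):
    """Backtracking over a state-space tree: level i places the queen of row i."""
    solution = []          # solution[i] = column of the queen placed in row i

    def promising(col):
        # A choice is promising if it attacks no previously placed queen.
        return all(col != c and abs(col - c) != len(solution) - r
                   for r, c in enumerate(solution))

    def backtrack():
        if len(solution) == n:          # complete solution reached
            return True
        for col in range(n):            # try each choice for the next component
            if promising(col):
                solution.append(col)    # extend the partial solution
                if backtrack():
                    return True
                solution.pop()          # dead end: backtrack and try the next choice
        return False                    # no choice works: backtrack one level up

    return solution if backtrack() else None

print(solve_n_queens(4))   # e.g. [1, 3, 0, 2]
```

Calling solve_n_queens(4) explores a state-space tree of the same kind as the one shown in Figure 14.2: each level places the queen of the next row, non-promising placements are abandoned, and the search backtracks until a complete placement is found.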
Figure 14.2: State Space Tree for Solving the 4-Queens Problem
Let us next use the Backtracking technique to solve the Hamiltonian circuit
problem.
Figure 14.4 shows the state-space tree for the above graph.
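Since the example graph and its state-space tree appear only in the figures, here is a small Python sketch of how backtracking can search for a Hamiltonian circuit. The graph representation (a dictionary of neighbour sets) and the example graph are our own assumptions for illustration, not the instance used in the figure.

```python
def hamiltonian_circuit(graph, start):
    """Backtracking search for a Hamiltonian circuit in an undirected graph.

    graph is assumed to map each vertex to a set of its neighbours.
    Returns a list of vertices forming the circuit, or None if none exists.
    """
    path = [start]

    def backtrack():
        if len(path) == len(graph):
            # All vertices visited: promising only if we can return to the start.
            return start in graph[path[-1]]
        for v in graph[path[-1]]:
            if v not in path:           # extend with an unvisited neighbour
                path.append(v)
                if backtrack():
                    return True
                path.pop()              # dead end: backtrack
        return False

    return path + [start] if backtrack() else None

# Hypothetical example graph (not the graph referred to above):
g = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'a'}}
print(hamiltonian_circuit(g, 'a'))      # e.g. ['a', 'b', 'c', 'd', 'a']
```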
Subset-Sum problem
In the Subset-Sum problem, we have to find a subset of a given set
S = {s1, s2, ..., sn} of n positive integers whose sum is equal to a positive
integer t. Let us assume that the set S is arranged in ascending order. For
example, if S = {2, 3, 5, 8} and if t = 10, then the possible solutions are
{2, 3, 5} and {2, 8}.
Figure 14.5 shows the state-space tree for the above set. The root of the
tree is the starting point, and its left and right children represent the
inclusion and exclusion of 2, respectively. Similarly, the left and right
children of a first-level node represent the inclusion and exclusion of 3.
Thus, the path from the root to a node at the ith level shows which of the
first i numbers have been included in the subsets that the node represents.
Each node also records the sum, Ssum, of the numbers included along the path
up to that particular node. If Ssum equals t, then that node is a solution. If
more solutions have to be found, we can backtrack to that node's parent and
repeat the process. The search along a branch is terminated at any
non-promising node that meets either of the following two conditions:
Ssum + si+1 > t (the sum is too large)
Ssum + (si+1 + si+2 + ... + sn) < t (the sum is too small)
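The two conditions above translate directly into pruning tests in code. The following Python sketch (with our own function and variable names) applies backtracking to the Subset-Sum problem and reproduces the example S = {2, 3, 5, 8}, t = 10.

```python
def subset_sum(s, t):
    """Backtracking for the Subset-Sum problem.

    s is assumed to be sorted in ascending order; returns every subset
    of s whose elements sum to t.
    """
    solutions, chosen = [], []

    def backtrack(i, s_sum):
        if s_sum == t:                      # promising node that is a solution
            solutions.append(list(chosen))
            return
        if i == len(s):
            return
        if s_sum + s[i] > t:                # the sum is too large: prune
            return
        if s_sum + sum(s[i:]) < t:          # the sum is too small: prune
            return
        chosen.append(s[i])                 # left branch: include s[i]
        backtrack(i + 1, s_sum + s[i])
        chosen.pop()                        # right branch: exclude s[i]
        backtrack(i + 1, s_sum)

    backtrack(0, 0)
    return solutions

print(subset_sum([2, 3, 5, 8], 10))         # [[2, 3, 5], [2, 8]]
```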
Activity 1
Construct a state-space tree for finding the subset of the instance
S = {2, 3, 5, 7, 9} that gives the sum t = 12.
Matrix C (the cost matrix for the assignment problem instance)
Figure 14.6: Levels 0 and 1 of the State Space Tree for the Example Assignment Problem
Figure 14.6 shows Levels 0 and 1 of the state space tree for the instance of
the assignment problem being solved with the best-first branch and bound
algorithm. The number above a node shows the order in which the node
was created. A node’s fields indicate the job number assigned to person ‘a’
and the lower bound value, lb, for this node.
We can find a lower bound on the cost of an optimal selection without
solving the problem. We know that the cost of any solution, including an
optimal one, cannot be smaller than the sum of the smallest elements in each
of the matrix's rows. For this instance, the sum is 5 + 2 + 1 + 3 = 11. This
is not the cost of any valid selection; it is just a lower bound on the cost
of any valid selection. We can apply the same idea to partially constructed
solutions. For example, for any valid selection that selects the element of
cost 8 from the first row, the lower bound will be 8 + 4 + 1 + 3 = 16.
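As an illustration of how such lower bounds can be computed for partial solutions, here is a Python sketch. The cost matrix used is a hypothetical one chosen for illustration, not the unit's Matrix C, and the function name is our own.

```python
def lower_bound(cost, partial):
    """Lower bound for the assignment problem.

    cost is a square cost matrix (cost[person][job]); partial is a list of
    jobs already fixed for persons 0..len(partial)-1.  The bound adds the
    fixed costs to, for every remaining person, the smallest cost among the
    still-unassigned jobs.
    """
    fixed = sum(cost[p][j] for p, j in enumerate(partial))
    used = set(partial)
    free_rows = range(len(partial), len(cost))
    return fixed + sum(min(cost[p][j] for j in range(len(cost)) if j not in used)
                       for p in free_rows)

# Hypothetical 4x4 cost matrix (rows: persons a..d, columns: jobs 1..4).
C = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]

print(lower_bound(C, []))       # bound with nothing assigned: 2 + 3 + 1 + 4 = 10
print(lower_bound(C, [1]))      # person a fixed to job 2: 2 + 3 + 1 + 4 = 10
```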
The rest of the algorithm is concerned with the order in which the tree's
nodes are generated. Here we generate all the children of the most promising
node among the non-terminated leaves in the current tree. We can identify the
most promising node by comparing the lower bounds of the live nodes. It is
sensible to consider the node with the best bound as the most promising one,
although this does not preclude the possibility that an optimal solution will
ultimately belong to a different branch of the state-space tree. Continuing
along the most promising branch, we assign person c to job 2. We then have to
select the element from the fourth column of d's row (assigning person d to
job 4). This produces leaf 8
(figure 14.8), which corresponds to the acceptable solution - {a->3, b->1,
c->2, d->4} with the total cost of 11. Its sibling, node 9, corresponds to the
acceptable solution {a->2, b->1, c->4, d->3} with total cost of 23. As the cost
of node 9 is larger than the cost of the solution represented by leaf 8, node 9
is terminated.
When we examine all the live leaves of the last state-space tree (nodes 1, 2,
4, 6, and 7) of figure 14.8, we discover that their lower bound values are not
smaller than 11, the value of the best selection seen so far (leaf 8). Hence,
we end the process and identify the solution indicated by leaf 8 as the
optimal solution to the problem.
Figure 14.8: Complete Space Tree for the Instance of the Assignment Problem
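The following Python sketch outlines the best-first Branch and Bound strategy described above, using the same kind of row-minimum lower bound and a hypothetical cost matrix (again, not the unit's Matrix C); it is a simplified outline under our own assumptions rather than the unit's formal algorithm.

```python
import heapq

def lb(cost, partial):
    # Fixed costs plus, for every unassigned person, the cheapest still-free job.
    used = set(partial)
    return (sum(cost[p][j] for p, j in enumerate(partial)) +
            sum(min(c for j, c in enumerate(row) if j not in used)
                for row in cost[len(partial):]))

def assign_best_first(cost):
    """Best-first branch and bound for the assignment problem."""
    n = len(cost)
    best_value, best_assignment = float('inf'), None
    heap = [(lb(cost, []), [])]                  # live nodes: (lower bound, partial assignment)
    while heap:
        bound, partial = heapq.heappop(heap)
        if bound >= best_value:                  # no live node can improve on the best solution
            break
        if len(partial) == n:                    # complete assignment: bound equals its cost
            best_value, best_assignment = bound, partial
            continue
        for job in range(n):                     # branch on the next person
            if job not in partial:
                child = partial + [job]
                child_lb = lb(cost, child)
                if child_lb < best_value:
                    heapq.heappush(heap, (child_lb, child))
    return best_assignment, best_value

# Hypothetical cost matrix (not the unit's Matrix C); rows are persons a..d.
C = [[9, 2, 7, 8], [6, 4, 3, 7], [5, 8, 1, 8], [7, 6, 9, 4]]
print(assign_best_first(C))                      # ([1, 0, 2, 3], 13): a->2, b->1, c->3, d->4
```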
The Nearest Neighbor algorithm is simple, but it does not always give us an
accurate solution. Let us next analyze the Multifragment-Heuristic algorithm
to get a solution for the Traveling Salesman problem.
Multifragment-Heuristic algorithm
This algorithm focuses on the edges of a complete weighted graph rather than
on its vertices.
Step 1: We sort the edges in increasing order of their weights and start with
an empty set of tour edges.
Step 2: We repeat this step till the set of tour edges contains n edges,
where n is the number of cities. We add the next edge from the sorted list to
the set of tour edges, provided this does not create a vertex of degree 3 or
a cycle of length less than n; otherwise we skip the edge.
Step 3: Finally, we return the set of tour edges.
When we apply the Multifragment-Heuristic algorithm to the graph in
Figure 14.9, we get the solution {(a, b), (c, d), (b, c), (a, d)}, which is
very similar to the tour produced by the Nearest Neighbor algorithm.
In general, the Multifragment-Heuristic algorithm provides significantly
better tours than the Nearest Neighbor algorithm, but the performance ratio
of the Multifragment-Heuristic algorithm is still unbounded.
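A Python sketch of the Multifragment-Heuristic steps is given below. The edge-list representation, the union-find cycle test and the example instance are our own assumptions for illustration; the example is not the graph of Figure 14.9.

```python
def multifragment_tour(n, edges):
    """Multifragment heuristic for the Traveling Salesman problem.

    n is the number of cities (0..n-1); edges is a list of (weight, u, v)
    tuples of a complete weighted graph.  Returns the set of tour edges.
    """
    parent = list(range(n))                  # union-find to detect premature cycles

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree = [0] * n
    tour = []
    for w, u, v in sorted(edges):            # Step 1: edges in increasing order of weight
        if len(tour) == n:                   # Step 2 stop condition: n tour edges collected
            break
        if degree[u] == 2 or degree[v] == 2: # would create a vertex of degree 3: skip
            continue
        ru, rv = find(u), find(v)
        if ru == rv and len(tour) != n - 1:  # would close a cycle of length < n: skip
            continue
        tour.append((u, v))
        degree[u] += 1
        degree[v] += 1
        parent[ru] = rv
    return tour                              # Step 3: return the set of tour edges

# Hypothetical 4-city instance (weights are assumed, not those of Figure 14.9):
edges = [(1, 0, 1), (2, 2, 3), (3, 1, 2), (6, 0, 3), (7, 0, 2), (9, 1, 3)]
print(multifragment_tour(4, edges))          # [(0, 1), (2, 3), (1, 2), (0, 3)]
```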
We will next discuss the Minimum-Spanning Tree based algorithm.
Minimum-Spanning-Tree-based algorithm
There are some approximation algorithms that make use of the connection
between Hamiltonian circuits and spanning trees of the same graph. Removing
an edge from a Hamiltonian circuit yields a spanning tree. Thus the minimum
spanning tree provides a good basis for constructing a short approximate
tour.
Twice Around the Tree algorithm
Step 1: We should build a Minimum Spanning Tree of the graph according
to the given instance of the Traveling Salesman problem.
Step 2: We should start with an arbitrary vertex, walk around the Minimum
Spanning Tree and record all the vertices that we pass.
Step 3: We should scan the vertex list obtained in step 2 and eliminate all
the repeated occurrences of the same vertex except the starting one at the
end of the list. The vertices remaining on the list form a Hamiltonian
circuit, which is the output of the algorithm.
Let us analyze the above algorithm with a graph as shown in Figure 14.10.
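Since the worked example appears only in Figure 14.10, the following Python sketch outlines the Twice Around the Tree idea under our own assumptions: the minimum spanning tree is built with Prim's algorithm and the walk is a preorder traversal that records each vertex when it is first reached; the distance matrix is hypothetical.

```python
def twice_around_the_tree(dist, start=0):
    """Approximate TSP tour: build an MST, walk around it, and skip repeated vertices.

    dist is assumed to be a symmetric matrix of pairwise distances.
    """
    n = len(dist)
    # Step 1: minimum spanning tree by Prim's algorithm.
    in_tree, tree = {start}, {v: [] for v in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        tree[u].append(v)
        tree[v].append(u)
        in_tree.add(v)
    # Step 2: walk around the tree (preorder DFS records each vertex once).
    walk, seen = [], set()
    def dfs(u):
        walk.append(u)
        seen.add(u)
        for w in tree[u]:
            if w not in seen:
                dfs(w)
    dfs(start)
    # Step 3: the recorded vertices, with the start repeated at the end, form the tour.
    return walk + [start]

# Hypothetical 4-city distance matrix (not the graph of Figure 14.10):
d = [[0, 1, 2, 5],
     [1, 0, 3, 4],
     [2, 3, 0, 6],
     [5, 4, 6, 0]]
print(twice_around_the_tree(d))    # [0, 1, 3, 2, 0]
```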
Greedy algorithm for the Discrete Knapsack problem
Step 1: We compute the value to weight ratios ri = vi/wi, i = 1, ..., n for
the items that are given to us.
Step 2: We sort the items in non-increasing order of the ratios computed in
step 1.
Step 3: We repeat the following operation till no item is left in the sorted
list: if the current item fits into the remaining knapsack capacity, we place
it in the knapsack; otherwise, we skip it and consider the next item.
Let us assume the instance of the Knapsack problem with its capacity equal
to 10 and the item information as given in Table 14.1.
Table 14.1: Item Information for the Knapsack problem
We then compute value to weight ratios and sort the items in decreasing
order. The item information after sorting is given in Table 14.2.
Table 14.2: Sorted Item Information for the Knapsack problem
Using the Greedy algorithm, we select the first item weighing 4, skip the
next item of weight 7, select the next item of weight 5, and skip the last
item of weight 3. The solution we have found is optimal for the above
example. But Greedy algorithms do not always yield an optimal solution, and
there is no finite upper bound on the accuracy of these approximate
solutions.
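The greedy procedure just described can be sketched in Python as follows. The weights 7, 3, 4 and 5 follow the example above, but the item values are assumed for illustration, and the function name is our own.

```python
def greedy_knapsack(capacity, items):
    """Greedy heuristic for the 0/1 (discrete) knapsack problem.

    items is a list of (weight, value) pairs.  Items are considered in
    non-increasing order of value-to-weight ratio and taken if they still fit.
    """
    chosen, total_weight, total_value = [], 0, 0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if total_weight + weight <= capacity:        # place the item if it fits
            chosen.append((weight, value))
            total_weight += weight
            total_value += value
    return chosen, total_value

# Weights follow the text's example; the values are assumed for illustration.
items = [(7, 42), (3, 12), (4, 40), (5, 25)]
print(greedy_knapsack(10, items))    # ([(4, 40), (5, 25)], 65): takes weights 4 and 5
```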
Greedy algorithm for Continuous Knapsack problem
Step 1 – We compute the value to weight ratios ri = vi/wi, i = 1, ..., n for
the items that are given to us.
Step 2 – We sort the items in non-increasing order of the ratios computed in
step 1.
Step 3 – We repeat the following procedure until we fill the knapsack to its
capacity or until no items remain in the sorted list. If the entire current item
can fit into the knapsack, place it in the knapsack and then consider the next
item, else place the largest fraction of the current item that can fit in the
knapsack and stop.
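A minimal Python sketch of this procedure, under the same assumed item data as the previous sketch, is given below.

```python
def greedy_continuous_knapsack(capacity, items):
    """Greedy algorithm for the continuous (fractional) knapsack problem.

    items is a list of (weight, value) pairs; fractions of an item may be taken.
    Returns the total value packed.
    """
    total_value, remaining = 0.0, capacity
    # Steps 1 and 2: sort by value-to-weight ratio in non-increasing order.
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if remaining <= 0:
            break
        take = min(weight, remaining)        # Step 3: whole item if it fits, else a fraction
        total_value += value * take / weight
        remaining -= take
    return total_value

items = [(7, 42), (3, 12), (4, 40), (5, 25)]  # same assumed instance as above
print(greedy_continuous_knapsack(10, items))  # 40 + (6/7) * 42 = 76.0
```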
Self Assessment Questions
7. ________________ algorithms can be used to solve NP-Hard
problems that have small instances.
8. Minimum Spanning tree provides us a good basis for constructing a
_________ approximation tour.
9. We select the items in __________ order of their weights in order to
use the knapsack capacity efficiently.
Activity 2
Given the following information, solve the Knapsack problem using the
Greedy algorithm. The knapsack has a maximum capacity of 15.
Item: 1 2 3 4
Weight: 6 4 2 5
Value: 22 25 15 12
14.5 Summary
In this unit, we analyzed some techniques for coping with the limitations of
algorithm power. Backtracking and Branch and Bound algorithm design
techniques help in solving some of the large instances of combinatorial
problems.
The Backtracking algorithm constructs a solution one component at a time. If
a partially constructed solution can be developed further without violating
the problem constraints, the algorithm takes the first legitimate option for
the next component. If there is no legitimate option for the next component,
the algorithm backtracks to replace the last component of the partially
constructed solution with its next option. We also discussed how to solve the
n-Queens problem, the Hamiltonian circuit problem and the Subset-Sum problem
using the backtracking approach.
Branch and Bound (BB) is a generic algorithm for finding optimal solutions
of various optimization problems, specifically in discrete and combinatorial
optimization. We discussed how to solve an instance of the Assignment
problem using the Branch and Bound approach.
We can use approximation algorithms to find near-optimal solutions to
NP-Hard problems. We discussed some algorithms to solve
the Traveling Salesman problem and the Knapsack problem.
14.6 Glossary
Terms and descriptions:
Polynomial-time: The execution time of a computation m(n) is said to be in
polynomial time when it is at most a polynomial function of the problem
size n.
Exhaustive search algorithm: This algorithm produces the complete solution
space for the problem.
14.8 Answers
Self Assessment Questions
1. State-space tree
2. Worst
3. Hamiltonian
4. Branch and Bound
5. Decreases
6. Optimal
7. Exhaustive search
8. Shortest
9. Decreasing
Terminal Questions
1. Refer section 14.2.1 – Outline of the algorithm.
2. Refer section 14.3.1 – Outline of the algorithm.
3. Refer section 14.4.2 – Approximation algorithms
4. Refer section 14.4.2 – Approximation algorithms