Unit 14 Coping with the Limitations of Algorithm Power
Structure:
14.1 Introduction
Objectives
14.2 Backtracking
Outline of the algorithm
14.3 Branch and Bound
Outline of the algorithm
Effectiveness of the algorithm
14.4 Approximation Algorithms for NP-Hard Problems
Underlying principles
Approximation algorithms
14.5 Summary
14.6 Glossary
14.7 Terminal Questions
14.8 Answers

14.1 Introduction
In the earlier unit, you learnt about the different limitations of algorithm power. In this unit, we will study how to cope with some of these limitations.
Combinatorial problems include counting structures of a specific kind or size, identifying the largest, smallest or otherwise optimal objects, and constructing and analyzing combinatorial structures. The Backtracking and Branch and Bound algorithm design techniques help in solving some of the large instances of combinatorial problems. They build potential solutions component by component and evaluate these partial solutions. If they determine that a partial solution cannot lead to a complete solution, they do not generate its remaining components.
Both Backtracking and Branch and Bound construct state-space trees. The nodes of these trees represent the choices available for a component of the solution. When no solution can be obtained from the choices corresponding to a node's descendants, both techniques stop exploring that branch.

Branch and Bound is generally used for optimization problems, as it computes a bound on the possible values of the problem's objective function. It usually uses the Best-first rule to construct its state-space trees. Backtracking is mostly used for non-optimization problems. This technique constructs state-space trees using the Depth-First approach.
This unit also discusses approximation algorithms for NP-Hard problems like
the Traveling Salesman and the Knapsack problems. Greedy algorithms are
good approximation algorithms for the Knapsack problem.
Objectives:
After studying this unit you should be able to:
• analyze the algorithm technique of Backtracking
• explain the solution strategy of Branch-and-Bound
• discuss the approximation approach to cope with limitations of NP-Hard
problems

14.2 Backtracking
Problems that require finding an element in a domain that grows exponentially with the size of the input, like the Hamiltonian circuit and the Knapsack problems, have no known polynomial-time algorithms. Such problems can be solved by the exhaustive search technique, which requires identifying the correct solution from many candidate solutions. The Backtracking technique is a refinement of this approach. Backtracking is a surprisingly simple approach and can be used even for solving the hardest Sudoku puzzle.
We can implement Backtracking by constructing the state-space tree, which is a tree of choices. The root of the state-space tree represents the initial state, before the search for the solution begins. The nodes at each level of this tree signify the candidate solutions for the corresponding component. A node of this tree is considered promising if it represents a partially constructed solution that may still lead to a complete solution; otherwise it is considered non-promising. The leaves of the tree signify either non-promising dead-ends or complete solutions.
We usually use the Depth-First search method for constructing these state-space trees. If a node is promising, then a child node is generated by adding the first legitimate choice of the next component, and processing continues for the child node. But if a node is non-promising, then the
algorithm backtracks to the parent node and considers the next legitimate choice for that component. If there are no more choices for that component, the algorithm backtracks one more level higher. The algorithm stops when it finds a complete solution.
This technique is illustrated in figure 14.1. Here the algorithm goes from the start node to node 1 and then to node 2. When no solution is found, it backtracks to node 1 and goes to the next possible solution, node 3. But node 3 is also a dead-end. Hence the algorithm backtracks once again to node 1 and then to the start node. From here it goes to node 4 and repeats the procedure till node 6 is identified as the solution.

Figure 14.1: Backtracking Technique

14.2.1 Outline of the algorithm


The Backtracking algorithm constructs solutions for each component
sequentially and evaluates these partially constructed solutions. If the
algorithm can develop a partially constructed solution without violating the
problem constraints, it considers the first legitimate solution for the next
component. But if there is no legitimate solution for the next component or
for the remaining components, then the algorithm backtracks to replace the
last partially constructed solution with the next option.
The output of the Backtracking algorithm is an n-tuple (b1, b2, …, bn) where each coordinate bi is an element of a finite linearly ordered set Si. The tuple may also have to satisfy some additional constraints depending on the problem. The tuple can be of either fixed length or variable length depending
on the problem. The algorithm generates a state-space tree whose nodes represent the partially constructed tuples. If a tuple (b1, b2, …, bi) is not a solution, the algorithm finds the next element in Si+1 that is consistent with the values of (b1, b2, …, bi) and the constraints of the problem. If it does not find such an element, the algorithm backtracks to consider the next value of bi.
The following pseudo code explains the Backtracking algorithm.
Pseudocode of Backtracking Algorithm
Backtrack (B[1..i])
// Input: B[1..i] indicates the first i promising components of a solution
// Output: All the tuples that are solutions of the problem
If B[1..i] is a solution write B[1..i]
else
    for each element e ∈ Si+1 consistent with B[1..i] and the constraints do
        B[i+1] ← e
        Backtrack (B[1..i+1])
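The following Python sketch mirrors the pseudocode above. The hooks candidates (which returns the elements of Si+1 consistent with the partial solution) and is_solution are problem-specific, and their names are only illustrative; the small demonstration at the end generates the permutations of {1, 2, 3}.

# A minimal Python sketch of the Backtrack routine above; 'candidates' and
# 'is_solution' are problem-specific hooks with illustrative names.
def backtrack(partial, candidates, is_solution, report):
    if is_solution(partial):
        report(partial)                      # write B[1..i]
    else:
        # every element of S(i+1) consistent with the partial solution
        for e in candidates(partial):
            backtrack(partial + [e], candidates, is_solution, report)

# Toy demonstration: generate all permutations of {1, 2, 3}.
S = [1, 2, 3]
backtrack([],
          candidates=lambda p: [e for e in S if e not in p],
          is_solution=lambda p: len(p) == len(S),
          report=print)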

In the worst case, Backtracking may have to generate all possible candidates. We can reduce the size of the state-space tree by the following methods:
• Exploiting the symmetry often present in combinatorial problems
• Pre-assigning values to one or more components of a solution
• Pre-sorting the data
Let us now consider some examples of problems that can be solved by the
Backtracking technique.
Examples
The n-Queens problem, the Hamiltonian circuit and the subset-sum problem
are some examples of problems that can be solved by Backtracking. We will
first consider the n-Queens problem.
n-Queens problem
In the n-Queens problem, we place n queens on an n-by-n chessboard such
that no two queens can be in the same row or same column or in the same
diagonal. When n = 1, the problem has a trivial solution. The problem has
no solution when n = 2 or n = 3. Since each queen must be placed in its own row, the Backtracking technique only needs to assign a column to each queen.
Let us use the Backtracking technique to solve the 4-Queens problem. We will denote the positions of the queens as (row, column). Initially, the first
queen is placed in (1, 4). The second queen cannot be placed in either the
first row or the fourth column. So, the second queen is placed in its first
acceptable position (2, 1). The third queen is then placed in (3, 3). But we
then see that the fourth queen will not have any acceptable position. So the
algorithm backtracks and places the second queen in its next acceptable
position (2, 2). But now we cannot find any acceptable position for the third
and fourth queen. So, we find that we have reached a dead-end. Once
again the algorithm backtracks and places the first queen in (1, 3). The
second queen moves to (2, 1), the third queen to (3, 4), and the fourth
queen to (4, 2). This is the solution to the problem. Figure 14.2 depicts the
state space tree for solving the 4-Queens problem.

Figure 14.2: State Space Tree for Solving the 4-Queens Problem
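The 4-Queens walkthrough can be reproduced with a short Python sketch of this Backtracking scheme. Columns are tried in the order 4, 3, 2, 1 so that the search mirrors the placements described above; with the usual ascending order the same two solutions are found, just in the opposite order.

# Backtracking for the n-Queens problem: queen i goes into row i, so a
# solution is simply a list of column numbers, one per row.
def n_queens(n):
    solutions = []

    def safe(cols, col):
        row = len(cols)                       # row of the queen being placed
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False                  # same column or same diagonal
        return True

    def place(cols):
        if len(cols) == n:
            solutions.append(cols)            # complete, conflict-free placement
            return
        for col in range(n, 0, -1):           # try columns n, n-1, ..., 1
            if safe(cols, col):
                place(cols + [col])
            # otherwise the node is non-promising and the branch is pruned

    place([])
    return solutions

print(n_queens(4))    # [[3, 1, 4, 2], [2, 4, 1, 3]]; the first matches figure 14.2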

Let us next use the Backtracking technique to solve the Hamiltonian circuit
problem.

Sikkim Manipal University B1480 Page No. 300


Analysis and Design of Algorithms Unit 14

Hamiltonian circuit problem


The graph of the Hamiltonian circuit is shown in Figure 14.3. We assume
that the Hamiltonian circuit starts at vertex i, which is the root of the state-
space tree. From i, we can move to any of its adjoining vertices which are j,
k, and l. We first select j, and then move to k, then l, then to m and thereon
to n. But this proves to be a dead-end. So, we backtrack from n to m, then to l, and then to k, and try the next alternative from k. But moving from k to m also leads to a dead-end. So, we backtrack from m to k and then to j. From there we move to the vertices n, m, k, l and correctly return to i. Thus the circuit traversed is i -> j -> n -> m -> k -> l -> i.

Figure 14.3: Graph of Hamiltonian Circuit

Figure 14.4 shows the state-space tree for the above graph.

Figure 14.4: State-Space Tree for Finding a Hamiltonian Circuit
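A Backtracking sketch for this search is given below. The adjacency list is inferred from figure 14.3 and the walkthrough above (it is not stated explicitly in the text), and the search retraces the same exploration: the dead-end path i-j-k-l-m-n, the dead-ends through k-m, and finally the circuit i-j-n-m-k-l-i.

# Backtracking search for a Hamiltonian circuit starting and ending at 'i'.
# The adjacency list is inferred from figure 14.3 and the walkthrough above.
GRAPH = {
    'i': ['j', 'k', 'l'],
    'j': ['i', 'k', 'n'],
    'k': ['i', 'j', 'l', 'm'],
    'l': ['i', 'k', 'm'],
    'm': ['k', 'l', 'n'],
    'n': ['j', 'm'],
}

def hamiltonian_circuit(graph, start):
    def extend(path):
        current = path[-1]
        if len(path) == len(graph):
            # all vertices visited: promising only if the circuit can be closed
            return path + [start] if start in graph[current] else None
        for v in graph[current]:
            if v not in path:                     # keep the path simple
                circuit = extend(path + [v])
                if circuit:
                    return circuit                # first circuit found is returned
        return None                               # dead-end: backtrack
    return extend([start])

print(hamiltonian_circuit(GRAPH, 'i'))   # ['i', 'j', 'n', 'm', 'k', 'l', 'i']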

Let us next consider the Subset-Sum problem.

Sikkim Manipal University B1480 Page No. 301


Analysis and Design of Algorithms Unit 14

Subset-Sum problem
In the Subset-Sum problem, we have to find a subset of a given set
S = {s1, s2, …, sn} of n positive integers whose sum is equal to a positive
integer t. Let us assume that the set S is arranged in ascending order. For
example, if S = {2, 3, 5, 8} and if t = 10, then the possible solutions are
{2, 3, 5} and {2, 8}.
Figure 14.5 shows the state-space tree for the above set. The root of the
tree is the starting point and its left and right children represent the inclusion
and exclusion of 2. Similarly, the left node of the first level represents the
inclusion of 3 and the right node the exclusion of 3. Thus the path from the root to a node at the ith level shows which of the first i numbers have been included in the subset that the node represents. Each node also records Ssum, the sum of the numbers included along the path up to that node. If Ssum equals t, then that node is a solution. If more solutions have to be found, we can backtrack to that node's parent and repeat the process. The search is terminated at any non-promising node that meets either of the following two conditions:
Ssum + si+1 > t (the sum is too large)
Ssum + (si+1 + si+2 + … + sn) < t (the sum is too small)

Figure 14.5: State-Space Tree of the Subset-Sum Problem
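A Python sketch of this search, using the two termination conditions above on the instance S = {2, 3, 5, 8} and t = 10, is shown below.

# Backtracking for the Subset-Sum problem with the two pruning conditions above.
def subset_sum(s, t):
    s = sorted(s)                          # the set is assumed to be in ascending order
    solutions = []

    def explore(i, chosen, s_sum, rest):
        # rest = s[i] + s[i+1] + ... + s[n-1], the total still available
        if s_sum == t:
            solutions.append(chosen)       # this node is a solution
            return
        if i == len(s):
            return                         # no elements left to decide on
        if s_sum + s[i] > t or s_sum + rest < t:
            return                         # non-promising node: prune the branch
        explore(i + 1, chosen + [s[i]], s_sum + s[i], rest - s[i])   # include s[i]
        explore(i + 1, chosen, s_sum, rest - s[i])                   # exclude s[i]

    explore(0, [], 0, sum(s))
    return solutions

print(subset_sum({2, 3, 5, 8}, 10))    # [[2, 3, 5], [2, 8]]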

Self Assessment Questions


1. We can implement Backtracking by constructing the _______________.
2. Backtracking, in the _______ case, may have to generate all possible candidates in a problem whose state space grows exponentially.
3. The n-Queens problem, the _____________ circuit and the Subset-
Sum problem are some examples of problems that can be solved by
Backtracking.

Activity 1
Construct a state-space tree for finding the subset of the instance
S = {2, 3, 5, 7, 9} that gives the sum t = 12.

14.3 Branch and Bound


Branch and Bound (BB) is a generic algorithm for finding optimal solutions
of various optimization problems, specifically in discrete and combinatorial
optimization. Let us now analyze this algorithm.
14.3.1 Outline of the algorithm
Backtracking cuts off a branch of the problem’s state-space tree as soon as
the algorithm deduces that it cannot lead to a solution. Branch and Bound
organizes details of all candidate solutions, and discards large subsets of
fruitless candidates by using upper and lower estimated bounds of the
quantity being optimized.
A feasible solution is a solution that satisfies all the constraints of a problem, and the feasible solution with the best value of the objective function is considered an optimal solution. Branch and Bound requires the following two additional
items when compared to Backtracking:
• A method to provide, for every node of the state-space tree, a bound on the best value of the objective function over any solution that can be obtained by adding further components to the partially constructed solution represented by that node
• The value of the best solution identified so far
A Branch and Bound procedure requires two tools. The first tool is a procedure that splits a given set S of candidates into two or more smaller sets S1, S2, … whose union covers S. Note that the minimum of f(x) over S is min{v1, v2, …}, where each vi is the minimum of f(x) within Si. This step is
called branching, as its recursive application defines a tree structure (the search tree) whose nodes are the subsets of S. The second tool is a
procedure called bounding that computes the upper and lower bounds for
the minimum value of f(x) within a given subset S.
When the Branch and Bound algorithm identifies that the lower bound for some tree node (set of candidates) A is greater than the upper bound for some other node B, it discards A from the search. This step is called pruning. It is usually implemented by maintaining a global variable m (shared among all nodes of the tree) that records the minimum upper bound found among all subregions examined so far; any node whose lower bound is greater than m is discarded.
Example: Assignment problem
The Branch and Bound approach is illustrated by applying it to the problem
of assigning 'n' people to ‘n’ jobs so that the total cost of the assignment is
as small as possible. An instance of the assignment problem is specified by an n-by-n cost matrix C, so the problem can be stated as follows: select one element in each row of the matrix so that no two selected elements are in the same column and their sum is the smallest possible.

Matrix C

This problem can be solved using the Branch and Bound technique by considering a small instance:

Figure 14.6: Level 0 and 1 of the State Space Tree for the Example
Assignment Problem

Figure 14.6 shows Levels 0 and 1 of the state space tree for the instance of
the assignment problem being solved with the best-first branch and bound
algorithm. The number above a node shows the order in which the node
was created. A node’s fields indicate the job number assigned to person ‘a’
and the lower bound value, lb, for this node.
We can find a lower bound on the cost of an optimal selection without solving the problem. We know that the cost of any solution, including an optimal one, cannot be smaller than the sum of the smallest elements in each of the matrix's rows. Here, that sum is 5 + 2 + 1 + 3 = 11. This is not the cost of any valid selection; it is just a lower bound on the cost of any valid selection. We apply the same idea to partially constructed solutions. For example, for any valid selection that selects a particular element from the first row, the lower bound will be 8 + 4 + 1 + 3 = 16.
An important issue in constructing the problem's state-space tree is the order in which its nodes are generated. Here we generate all the children of the most promising node among the non-terminated leaves of the current tree. We can identify the most promising node by comparing the lower bounds of the live nodes. It is sensible to consider the node with the best bound as the most promising, although this does not rule out the possibility that an optimal solution will ultimately belong
to a different branch of the state-space tree. This variation of the strategy is called Best-First Branch-and-Bound and is shown in figure 14.7.

Figure 14.7: State Space Tree for Best-First-Branch-and-Bound

In the instance of the assignment problem given earlier in figure 14.6, we start with the root, which corresponds to no elements selected from the cost matrix. As we already discussed, the lower bound value for the root, denoted lb, is 11. The nodes on the first level of the tree correspond to selections of an element in the first row of the matrix, that is, a job for person 'a' in Matrix C.
Of the four live leaves (nodes 1 through 4) that can contain an optimal solution, node 3 is the most promising because it has the smallest lower bound value. Following the Best-First search strategy, we branch out from that node first by considering the three different ways of selecting an element from the second row that are not in the third column, that is, the three different jobs that can be assigned to person b in Matrix C.
Among the six live leaves (nodes 1, 2, 4, 5, 6, and 7) that may contain an
optimal solution, we again choose the one with the least lower bound, node
5. First, we consider selecting the second column's element from c's row
(assigning person c to job 2). We then have to select the element from the fourth column of d's row (assigning person d to job 4). This produces leaf 8 (figure 14.8), which corresponds to the acceptable solution {a->3, b->1, c->2, d->4} with a total cost of 11. Its sibling, node 9, corresponds to the acceptable solution {a->3, b->1, c->4, d->2} with a total cost of 23. As the cost of node 9 is larger than the cost of the solution represented by leaf 8, node 9 is terminated.
When we examine all the live leaves of the last state-space tree (nodes 1, 2,
4, 6, and 7) of figure 14.8, we discover that their lower bound values are not
smaller than 11, the value of the best selection seen so far (leaf 8). Hence,
we end the process and identify the solution indicated by leaf 8 as the
optimal solution to the problem.

Figure 14.8: Complete Space Tree for the Instance of the Assignment Problem
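The complete Best-First Branch-and-Bound procedure can be sketched in Python as follows. The lower bound is the one described above: the cost of the assignments fixed so far plus the smallest element of every row that is still unassigned. The 4-by-4 cost matrix used here is hypothetical, because the unit's Matrix C appears only in the figure and is not reproduced in the text.

# Best-first Branch and Bound for the assignment problem (a sketch).
# lb = cost of assignments fixed so far + smallest element of each remaining row.
import heapq

C = [[9, 2, 7, 8],      # hypothetical cost matrix: rows = persons a..d,
     [6, 4, 3, 7],      # columns = jobs 1..4 (not the matrix of the figure)
     [5, 8, 1, 8],
     [7, 6, 9, 4]]

def lower_bound(C, partial):
    # partial[r] is the column already chosen for row r
    fixed = sum(C[r][c] for r, c in enumerate(partial))
    free = sum(min(C[r]) for r in range(len(partial), len(C)))
    return fixed + free

def assign(C):
    n = len(C)
    best_cost, best_solution = float('inf'), None
    live = [(lower_bound(C, []), [])]                # (lb, partial solution)
    while live:
        lb, partial = heapq.heappop(live)            # most promising live node
        if lb >= best_cost:
            continue                                 # prune: cannot beat the best so far
        if len(partial) == n:
            best_cost, best_solution = lb, partial   # at a leaf, lb is the true cost
            continue
        for col in range(n):                         # branch on the next unassigned person
            if col not in partial:
                child = partial + [col]
                child_lb = lower_bound(C, child)
                if child_lb < best_cost:
                    heapq.heappush(live, (child_lb, child))
    return best_cost, best_solution

print(assign(C))    # (13, [1, 0, 2, 3]): a->2, b->1, c->3, d->4 for this matrix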

14.3.2 Effectiveness of the algorithm


In the Branch and Bound algorithm, the ratio of the number of solutions verified to the total number of candidates decreases sharply as the size of the problem increases. However, the algorithm has one important limitation. Because a large number of partial solutions must be kept in storage as the algorithm proceeds, the method is applicable to problems of reasonable size that are not likely to blow up into a very large number of combinatorial possibilities. For problems exceeding the available storage, the Backtracking algorithm is more suitable.
Self Assessment Questions
4. ___________________________ organizes details of all candidate
solutions, and discards large subsets of fruitless candidate solutions.
5. A _____________________ is a solution that satisfies all the
constraints of a problem.
6. In Branch and Bound algorithm, the ratio of the number of solutions
verified largely _______________ as the size of the problem increases.

14.4 Approximation Algorithms for NP – Hard Problems


Combinatorial optimization problems have a finite but typically huge feasible region. In this section, we focus on approximation algorithms for optimization problems that are NP-Hard.
14.4.1 Underlying principles
An NP-Hard problem is one to which every problem in NP (non-deterministic polynomial time) can be reduced, so an algorithm for it could be translated into one that solves any NP problem. Many optimization problems do not have an algorithm that can find an optimal solution for all instances; often, when trying to find an optimal solution, we realize that the problem is NP-Hard. Such problems have no known polynomial-time algorithms. Exhaustive search algorithms can be used to solve NP-Hard problems that have small instances. The dynamic programming technique can also be used, but only if the instance parameters are small. Hence we can use approximation algorithms to find a near-optimal solution to these problems. Moreover, many real-life applications lack accurate data to operate with; in such situations, we can only use approximation algorithms.
Most of these approximation algorithms are based on some heuristic which
is problem specific.

We also would like to determine how accurate the outputs of these approximation algorithms are. The accuracy ratio of the approximation algorithms is given in equation 14.1.
algorithms is given in equation 14.1.
r(sa) = f(sa) / f(s*)    Eq: 14.1
Here sa is an approximate solution to the problem, s* is an exact (optimal) solution, and r(sa) is the accuracy ratio. The closer r(sa) is to 1, the more accurate the approximate solution is. But in most cases we do not know f(s*), the optimal value of the objective function. Hence, we should try to obtain a good upper bound on the values of r(sa). We can then define approximation algorithms in the following manner.
Definition: A polynomial approximation algorithm is said to be a
c-approximation algorithm, where c is greater than or equal to 1, if the
accuracy ratio of the approximation does not exceed c for any instance of
the problem.
This definition is reflected in Equation 14.2.
r(sa) ≤ c    Eq: 14.2
Finding approximate solutions with a reasonable level of accuracy is easier for some problems than for others. Some problems with real-life applications can be solved using approximation algorithms; the Traveling Salesman problem is an example of this.
Combinatorial problems like the Traveling Salesman problem and the Minimum Spanning Tree problem have at least part of their input given as integers. Algorithms for these problems involve mathematical operations like addition and comparison. There are no explicit bounds on these integers, so they can be very large, and the time required for computations involving them grows with their size (logarithmically with their magnitude). We therefore bound these operations by assuming an upper limit on the integers. We can also solve the Knapsack problem by using an approximation algorithm.
14.4.2 Approximation algorithms
We shall now analyze approximation algorithms for the Traveling Salesman and the Knapsack problems, as no polynomial-time algorithms are known for finding their optimal solutions.

Approximation algorithms for the Traveling Salesman problem


There are several approximation algorithms for the Traveling Salesman
problem. Let us discuss a few of these approximation algorithms.
Nearest – Neighbor algorithm
Let us analyze the simple greedy algorithm that is based on the nearest-neighbor heuristic.
Step 1 – Let us choose an arbitrary city as the start.
Step 2 – We then go to the unvisited city that is nearest to the current city. We repeat this operation till we visit all the cities.
Step 3 – Then we return to the starting city.
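A Python sketch of these three steps is shown below. The distance table is hypothetical: the edge weights of figure 14.9 are not given in the text, so the weights here were simply chosen so that the heuristic reproduces the tour lengths quoted in the example (12 for the greedy tour against 10 for the optimal one).

# Nearest-Neighbor heuristic for the Traveling Salesman problem (a sketch).
def nearest_neighbor_tour(dist, start):
    unvisited = set(dist) - {start}
    tour, current = [start], start
    while unvisited:
        # Step 2: go to the nearest city that has not been visited yet
        nearest = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nearest)
        unvisited.remove(nearest)
        current = nearest
    tour.append(start)                        # Step 3: return to the starting city
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

# Hypothetical symmetric distances for the four cities of figure 14.9.
dist = {
    'a': {'b': 1, 'c': 4, 'd': 7},
    'b': {'a': 1, 'c': 2, 'd': 3},
    'c': {'a': 4, 'b': 2, 'd': 2},
    'd': {'a': 7, 'b': 3, 'c': 2},
}
print(nearest_neighbor_tour(dist, 'a'))   # (['a', 'b', 'c', 'd', 'a'], 12); the optimal tour a-b-d-c-a has length 10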
Let us now consider an instance of the Traveling Salesman problem.
Consider the graph depicted for the Traveling Salesman problem in figure
14.9 which has ‘a’ as the starting vertex.

Figure 14.9: Graph of the Instance of the Traveling Salesman Problem


Using the Nearest-Neighbor algorithm described above yields the tour la: a-b-c-d-a of length 12. The optimal tour, which can be found by exhaustive search, is lb: a-b-d-c-a of length 10. The accuracy ratio of this approximation is given in equation 14.3.
r(la) = f(la) / f(lb) = 12/10 = 1.2    Eq: 14.3
In equation 14.3, r(la) is the accuracy ratio, f(la) is the length of the tour produced by the Nearest-Neighbor algorithm, and f(lb) is the length of the optimal tour found by exhaustive search. We conclude that although the above algorithm is very
simple it does not give us an accurate solution. Let us next analyze the
Multifragment-Heuristic algorithm to get a solution for the Traveling
Salesman problem.
Multifragment – Heuristic algorithm
This algorithm places the emphasis on the edges of a complete weighted graph.
Step 1: We sort the edges in increasing order of their weights.
Step 2: We repeat this step till we get a set of n tour edges, where n is the number of cities: we add the next edge from the sorted list to the set of tour edges, provided this does not create a vertex of degree 3 or a cycle of length less than n; otherwise we skip the edge.
Step 3: Finally, we return the set of tour edges.
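A sketch of the Multifragment-Heuristic procedure is given below; it tracks vertex degrees and connected fragments so that an edge is skipped whenever it would create a vertex of degree 3 or close a cycle prematurely. The demonstration reuses the hypothetical distances from the Nearest-Neighbor sketch above.

# Multifragment-Heuristic sketch: take the cheapest remaining edge unless it
# would give a vertex degree 3 or close a cycle of fewer than n edges.
def multifragment_tour_edges(dist):
    cities = list(dist)
    n = len(cities)
    edges = sorted((dist[u][v], u, v)                  # Step 1: sort edges by weight
                   for i, u in enumerate(cities)
                   for v in cities[i + 1:])
    degree = {c: 0 for c in cities}
    fragment = {c: c for c in cities}                  # union-find style labels

    def find(c):
        while fragment[c] != c:
            c = fragment[c]
        return c

    tour = []
    for w, u, v in edges:                              # Step 2: scan edges in order
        closes_early = find(u) == find(v) and len(tour) < n - 1
        if degree[u] < 2 and degree[v] < 2 and not closes_early:
            tour.append((u, v))                        # accept the edge
            degree[u] += 1
            degree[v] += 1
            fragment[find(u)] = find(v)
            if len(tour) == n:
                break
        # otherwise the edge is skipped
    return tour                                        # Step 3: the set of tour edges

dist = {'a': {'b': 1, 'c': 4, 'd': 7}, 'b': {'a': 1, 'c': 2, 'd': 3},
        'c': {'a': 4, 'b': 2, 'd': 2}, 'd': {'a': 7, 'b': 3, 'c': 2}}
print(multifragment_tour_edges(dist))   # [('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'd')]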
When we apply the Multifragment-Heuristic algorithm to the graph in Figure 14.9, we get the solution {(a, b), (c, d), (b, c), (a, d)}, which is very similar to the tour produced by the Nearest-Neighbor algorithm.
In general, the Multifragment-Heuristic algorithm provides significantly better tours than the Nearest-Neighbor algorithm, but the performance ratio of the Multifragment-Heuristic algorithm is unbounded.
We will next discuss the Minimum-Spanning Tree based algorithm.
Minimum-Spanning-Tree-based algorithm
There are some approximation algorithms that make use of the connection between Hamiltonian circuits and spanning trees of the same graph. Removing an edge from a Hamiltonian circuit yields a spanning tree. Thus the Minimum Spanning Tree provides a good basis for constructing an approximately shortest tour.
Twice Around the Tree algorithm
Step 1: We should build a Minimum Spanning Tree of the graph according
to the given instance of the Traveling Salesman problem.
Step 2: We should start with an arbitrary vertex, walk around the Minimum
Spanning Tree and record all the vertices that we pass.
Step 3: We should scan the vertex list obtained in step 2 and eliminate all
the repeated occurrences of the same vertex except the starting one. We
form a Hamiltonian circuit of the vertices remaining on the list; this circuit is the output of the algorithm.
Let us analyze the above algorithm with a graph as shown in Figure 14.10.

Figure 14.10: Graph Illustrating the Twice-Around the Tree Algorithm


We know that the Minimum Spanning Tree is made up of the edges (a, b), (b, c), (b, d) and (d, e). The twice-around-the-tree walk that starts and ends at a is: a, b, c, b, d, e, d, b, a.
If we eliminate the second b (to get a shortcut from c to d), and the second d and the third b (to get a shortcut from e to a), we get the Hamiltonian circuit a, b, c, d, e, a, which is of length 21.
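The walk and the shortcutting step can be sketched as follows. The minimum spanning tree is taken as given (its edges are the ones listed above); computing it from the graph, for example with Prim's algorithm, is omitted.

# Twice-Around-the-Tree sketch: walk around the given MST, then shortcut.
def twice_around_the_tree(mst_edges, start):
    adjacent = {}
    for u, v in mst_edges:                       # adjacency list of the MST
        adjacent.setdefault(u, []).append(v)
        adjacent.setdefault(v, []).append(u)

    walk = []
    def dfs(u, parent):
        walk.append(u)                           # Step 2: record every vertex we pass
        for v in adjacent[u]:
            if v != parent:
                dfs(v, u)
                walk.append(u)                   # we pass u again on the way back
    dfs(start, None)

    circuit, seen = [], set()                    # Step 3: drop repeated occurrences
    for v in walk:
        if v not in seen:
            circuit.append(v)
            seen.add(v)
    circuit.append(start)                        # close the Hamiltonian circuit
    return walk, circuit

mst = [('a', 'b'), ('b', 'c'), ('b', 'd'), ('d', 'e')]     # MST of figure 14.10
walk, circuit = twice_around_the_tree(mst, 'a')
print(walk)       # ['a', 'b', 'c', 'b', 'd', 'e', 'd', 'b', 'a']
print(circuit)    # ['a', 'b', 'c', 'd', 'e', 'a'] - length 21 with the figure's weights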
Approximation algorithms for the Knapsack problem
Another well-known NP-Hard problem is the Knapsack problem. In this problem, we are given n items of known weights w1, w2, …, wn and values v1, v2, …, vn, and a knapsack of weight capacity W. We then have to find the most valuable subset of the items that fits into the knapsack. Many approximation algorithms have been considered for this problem as well. The Greedy algorithm for the Knapsack problem selects the items in decreasing order of their value-to-weight ratios in order to use the knapsack capacity efficiently.
Now let us see an algorithm based on this Greedy heuristic.
Greedy algorithm for the Discrete Knapsack problem
Step 1: We compute the value-to-weight ratios ri = vi/wi, i = 1, …, n, for the items that are given to us.

Step 2: We should sort the items in non-increasing order of the ratios that
we already computed in step 1.
Step 3: We repeat the following operation till no item is left in the sorted list: if the current item fits into the knapsack, we place it in the knapsack; otherwise, we skip it and consider the next item.
Let us assume the instance of the Knapsack problem with its capacity equal
to 10 and the item information as given in Table 14.1.
Table 14.1: Item Information for the Knapsack problem

Item   Weight   Value
1      4        $40
2      5        $30
3      6        $18

We then compute value to weight ratios and sort the items in decreasing
order. The item information after sorting is given in Table 14.2.
Table 14.2: Sorted Item Information for the Knapsack problem

Item   Weight   Value   Value/weight
1      4        $40     10
2      5        $30     6
3      6        $18     3

Using the Greedy algorithm, we select the first item, of weight 4, then the second item, of weight 5 (for a total weight of 9), and skip the last item, of weight 6, since it no longer fits. The solution we have found is optimal for this example. But Greedy algorithms do not always yield an optimal solution, and there is no finite upper bound on the accuracy ratio of the solutions they produce.
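A sketch of this greedy procedure, run on the instance of Table 14.1 (knapsack capacity 10), is shown below; it selects the items of weights 4 and 5 for a total value of $70, as in the discussion above.

# Greedy sketch for the discrete Knapsack problem, using the Table 14.1 instance.
def greedy_knapsack(items, capacity):
    # Steps 1 and 2: compute value-to-weight ratios and sort in non-increasing order.
    order = sorted(items, key=lambda item: item[1] / item[0], reverse=True)
    chosen, total_weight, total_value = [], 0, 0
    for weight, value in order:                  # Step 3: take each item if it still fits
        if total_weight + weight <= capacity:
            chosen.append((weight, value))
            total_weight += weight
            total_value += value
    return chosen, total_value

items = [(4, 40), (5, 30), (6, 18)]              # (weight, value) pairs from Table 14.1
print(greedy_knapsack(items, 10))                # ([(4, 40), (5, 30)], 70)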
Greedy algorithm for Continuous Knapsack problem
Step 1 – We compute the value-to-weight ratios ri = vi/wi, i = 1, …, n, for the items that are given to us.
Step 2 – We sort the items in non-increasing order of the ratios that we computed in step 1.

Step 3 – We repeat the following procedure until we fill the knapsack to its
capacity or until no items remain in the sorted list. If the entire current item
can fit into the knapsack, place it in the knapsack and then consider the next
item, else place the largest fraction of the current item that can fit in the
knapsack and stop.
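The continuous version differs only in the final step, where a fraction of an item is allowed. A sketch, again using the Table 14.1 data for illustration, is shown below.

# Greedy sketch for the continuous Knapsack problem: whole items are taken in
# non-increasing ratio order and the first item that does not fit is split.
def fractional_knapsack(items, capacity):
    order = sorted(items, key=lambda item: item[1] / item[0], reverse=True)   # Steps 1-2
    taken, total_value, remaining = [], 0.0, capacity
    for weight, value in order:                     # Step 3
        if remaining == 0:
            break
        fraction = min(1.0, remaining / weight)     # largest fraction that still fits
        taken.append((weight, value, fraction))
        total_value += value * fraction
        remaining -= weight * fraction
    return taken, total_value

print(fractional_knapsack([(4, 40), (5, 30), (6, 18)], 10))
# whole items of weight 4 and 5 plus 1/6 of the weight-6 item, total value 73.0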
Self Assessment Questions
7. ________________ algorithms can be used to solve NP-Hard
problems that have small instances.
8. Minimum Spanning tree provides us a good basis for constructing a
_________ approximation tour.
9. We select the items in __________ order of their value-to-weight ratios in order to use the knapsack capacity efficiently.

Activity 2
Given the following information, solve the Knapsack problem using the
Greedy algorithm. The knapsack has a maximum capacity of 15.
Item: 1 2 3 4
Weight: 6 4 2 5
Value: 22 25 15 12

14.5 Summary
In this unit, we analyzed some solutions to cope with the limitations of some
algorithms. Backtracking and Branch and Bound algorithm design
techniques help in solving some of the large instances of combinatorial
problems.
The Backtracking algorithm constructs solutions for each component
sequentially and if it finds that it can develop a partially constructed solution
without violating the problem constraints, it considers the first legitimate
solution for the next component. But if there is no legitimate solution for the
next component or for the remaining components, then the algorithm
backtracks to replace the last partially constructed solution with the next
option. We also discussed how to solve the n-Queen problem, the
Hamiltonian circuit problem and the subset sum problem using the
backtracking approach.

Branch and Bound (BB) is a generic algorithm for finding optimal solutions
of various optimization problems, specifically in discrete and combinatorial
optimization. We discussed how to solve an instance of the Assignment
problem using the Branch and Bound approach.
We can use approximation algorithms to find a solution which is near
optimal to solve NP-Hard problems. We discussed some algorithms to solve
the Traveling Salesman problem and the Knapsack problem.

14.6 Glossary
Terms – Description
Polynomial-time – The execution time of a computation m(n) is said to be in polynomial time when it is at most a polynomial function of the problem size n.
Exhaustive search algorithm – An algorithm that produces the complete solution space for the problem.

14.7 Terminal Questions


1. How will you solve the 4-Queens problem using the Backtracking
technique?
2. What is the basic principle of the Branch and Bound technique?
3. How can you solve the Traveling Salesman problem using the Nearest-
Neighbor algorithm?
4. Discuss the Greedy algorithm for the Discrete Knapsack problem.

14.8 Answers
Self Assessment Questions
1. State-space tree
2. Worst
3. Hamiltonian
4. Branch and Bound
5. Feasible solution
6. Decreases
7. Exhaustive search
8. Shortest
9. Decreasing

Terminal Questions
1. Refer section 14.2.1 – Outline of the algorithm.
2. Refer section 14.3.1 – Outline of the algorithm.
3. Refer section 14.4.2 – Approximation algorithms
4. Refer section 14.4.2 – Approximation algorithms


__________________
