
BCS401 Module- 5

Module 5
LIMITATIONS OF ALGORITHMIC POWER
COPING WITH LIMITATIONS OF ALGORITHMIC POWER

Limitations of Algorithm Power


5.1 Decision Trees
Many important algorithms, especially those for sorting and searching, work by comparing
items of their inputs. We can study the performance of such algorithms with a device called a
decision tree.

Example: decision tree of an algorithm for finding a minimum of three numbers.

Each internal node of a binary decision tree represents a key comparison indicated in the node, e.g., k < k'. The node's left subtree contains the information about subsequent comparisons made if k < k', and its right subtree does the same for the case of k > k'. For the sake of simplicity, we assume throughout this section that all input items are distinct. Each leaf represents a possible outcome of the algorithm's run on some input of size n.

An important point is that the number of leaves must be at least as large as the number of
possible outcomes. The algorithm’s work on a particular input of size n can be traced by a path
from the root to a leaf in its decision tree, and the number of comparisons made by the algorithm
on such a run is equal to the length of this path. Hence, the number of comparisons in the worst
case is equal to the height of the algorithm’s decision tree.

For any binary tree with l leaves and height h,

h ≥ ⌈log2 l⌉. ------ eq (1)

A binary tree of height h with the largest number of leaves has all its leaves on the last level. Hence, the largest number of leaves in such a tree is 2^h. Therefore, 2^h ≥ l, which is equivalent to eq (1).

Eq (1) puts a lower bound on the heights of binary decision trees and hence the worst-case
number of comparisons made by any comparison-based algorithm for the problem in question.
Such a bound is called the information theoretic lower bound. We illustrate this technique
below on two important problems: sorting and searching in a sorted array.
Decision Trees for Sorting

CSE@HKBKCE 1 2023-24
Most sorting algorithms are comparison based, i.e., they work by comparing elements in a list
to be sorted. By studying properties of decision trees for such algorithms, we can derive
important lower bounds on their time efficiencies.

We can interpret an outcome of a sorting algorithm as finding a permutation of the element


indices of an input list that puts the list’s elements in ascending order.

Consider, as an example, a three-element list a, b, c of orderable items such as real numbers or


strings. For the outcome a < c < b obtained by sorting this list, shown in the figure below, the permutation in question is 1, 3, 2. In general, the number of possible outcomes for sorting an arbitrary n-element list is equal to n!.

Decision tree for the three-element selection sort.

Decision tree for the three-element insertion sort.

The height of a binary decision tree for any comparison-based sorting algorithm, and hence the worst-case number of comparisons made by such an algorithm, cannot be less than ⌈log2 n!⌉:
Cworst(n) ≥ ⌈log2 n!⌉.
Using Stirling's formula for n!, we get log2 n! ≈ n log2 n, and hence
Cworst(n) ≥ n log2 n (approximately).
About n log2 n comparisons are necessary in the worst case to sort an arbitrary n-element list by any comparison-based sorting algorithm. Merge sort makes about this many comparisons in its worst case and hence is asymptotically optimal.
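The claim above can be checked numerically: a short sketch comparing the information-theoretic lower bound ⌈log2 n!⌉ with merge sort's worst-case comparison count, which satisfies the recurrence C(n) = C(⌈n/2⌉) + C(⌊n/2⌋) + (n − 1). (The function names here are illustrative, not from any particular library.)

```python
import math

def sort_lower_bound(n):
    """Information-theoretic lower bound: ceil(log2(n!)) comparisons."""
    return math.ceil(math.log2(math.factorial(n)))

def mergesort_worst(n):
    """Worst-case comparisons of merge sort: C(n) = C(n//2) + C(n - n//2) + n - 1."""
    if n <= 1:
        return 0
    return mergesort_worst(n // 2) + mergesort_worst(n - n // 2) + n - 1

for n in (4, 8, 16):
    print(n, sort_lower_bound(n), mergesort_worst(n))
# n=4: bound 5, merge sort 5; n=8: bound 16, merge sort 17 — close to optimal
```

For n = 4 the bound is met exactly; for larger n merge sort stays within a small additive gap, illustrating its asymptotic optimality.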

Decision Trees for Searching a Sorted Array


In this section, we shall see how decision trees can be used for establishing lower bounds on
the number of key comparisons in searching a sorted array of n keys:
A[0]<A[1]< . . .<A[n − 1].
The principal algorithm for this problem is binary search. The number of comparisons made by binary search in the worst case is
Cworst(n) = ⌊log2 n⌋ + 1 = ⌈log2(n + 1)⌉.

Ternary decision tree for binary search in a four-element array

We are considering three-way comparisons in which the search key K is compared with some element A[i] to see whether K < A[i], K = A[i], or K > A[i]; therefore a ternary decision tree is used. The internal nodes of that tree indicate the array's elements being compared with the search key. The leaves indicate either a matching element in the case of a successful search or a found interval that the search key belongs to in the case of an unsuccessful search.
For an array of n elements, all such decision trees will have 2n + 1 leaves (n for successful searches and n + 1 for unsuccessful ones). Since the minimum height h of a ternary tree with l leaves is ⌈log3 l⌉, we get the following lower bound on the number of worst-case comparisons:

Cworst(n) ≥ ⌈log3(2n + 1)⌉


Binary decision tree for binary search in a four-element array.

The above binary decision tree is simply the ternary decision tree with all the middle subtrees eliminated. Applying eq (1) to such binary decision trees yields
Cworst(n) ≥ ⌈log2(n + 1)⌉.
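The bound above can be verified by counting the three-way comparisons binary search actually makes (a minimal sketch; the function name is illustrative):

```python
import math

def binary_search_comparisons(arr, key):
    """Count three-way comparisons made by binary search on a sorted array."""
    lo, hi, count = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1                     # one three-way comparison with A[mid]
        if key == arr[mid]:
            return count
        elif key < arr[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return count                       # unsuccessful search

n = 15
arr = list(range(n))
# try every successful key and a key in every "gap" for unsuccessful searches
worst = max(binary_search_comparisons(arr, k) for k in range(-1, n + 1))
print(worst, math.floor(math.log2(n)) + 1, math.ceil(math.log2(n + 1)))  # 4 4 4
```

For n = 15 the worst case is exactly ⌊log2 15⌋ + 1 = ⌈log2 16⌉ = 4 comparisons, matching the lower bound.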

P, NP, and NP-Complete Problems


An algorithm solves a problem in polynomial time if its worst-case time efficiency belongs to
O(p(n)) where p(n) is a polynomial of the problem’s input size n.

Problems that can be solved in polynomial time are called tractable.


Problems that cannot be solved in polynomial time are called intractable.

Class P is a class of decision problems that can be solved in polynomial time by (deterministic)
algorithms. This class of problems is called polynomial.

The restriction of P to decision problems can be justified by the following reasons.

• Problems not solvable in polynomial time because of their exponentially large output can be excluded; such problems do arise naturally, e.g., generating all subsets of a given set or all the permutations of n distinct items.
• Many important problems that are not decision problems in their most natural formulation can be reduced to a series of decision problems that are easier to study. For example, instead of asking about the minimum number of colors needed to color the vertices of a graph so that no two adjacent vertices are colored the same color, we can ask whether there exists such a coloring of the graph's vertices with no more than m colors for m = 1, 2, . . . . (The latter is called the m-coloring problem.)

Decision problems that cannot be solved at all by any algorithm are called undecidable. A famous example of an undecidable problem was given by Alan Turing in 1936. The problem in question is called the halting problem: given a computer program and an input to it, determine whether the program will halt on that input or continue working indefinitely on it.

Problems that can be solved by an algorithm are called decidable.

There are many important problems, however, for which no polynomial-time algorithm has
been found, nor has the impossibility of such an algorithm been proved. Examples are listed
below.

• Hamiltonian circuit problem Determine whether a given graph has a Hamiltonian


circuit—a path that starts and ends at the same vertex and passes through all the other
vertices exactly once.
• Traveling salesman problem Find the shortest tour through n cities with known
positive integer distances between them (find the shortest Hamiltonian circuit in a
complete graph with positive integer weights).
• Knapsack problem Find the most valuable subset of n items of given positive integer
weights and values that fit into a knapsack of a given positive integer capacity.
• Graph-coloring problem For a given graph, find its chromatic number, which is the
smallest number of colors that need to be assigned to the graph’s vertices so that no two
adjacent vertices are assigned the same color.

A nondeterministic algorithm is a two-stage procedure that takes as its input an instance I of a


decision problem and does the following.
• Nondeterministic (“guessing”) stage: an arbitrary string S is generated that can be thought of as a candidate solution to the given instance I.
• Deterministic (“verification”) stage: a deterministic algorithm takes both I and S as its input and outputs yes if S represents a solution to instance I. (If S is not a solution to instance I, the algorithm either returns no or is allowed not to halt at all.)

We say that a nondeterministic algorithm solves a decision problem if and only if for every yes
instance of the problem it returns yes on some execution. (In other words, we require a
nondeterministic algorithm to be capable of “guessing” a solution at least once and to be able
to verify its validity. And, of course, we do not want it to ever output a yes answer on an instance
for which the answer should be no.)

A nondeterministic algorithm is said to be nondeterministic polynomial if the time efficiency


of its verification stage is polynomial.
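The verification stage can be illustrated for the Hamiltonian circuit problem: given a graph and a guessed string S (here, an ordering of the vertices), checking whether S is a Hamiltonian circuit takes only linear time. This is a sketch under assumed names, not a standard library routine:

```python
def verify_hamiltonian_circuit(adj, candidate):
    """Deterministic verification stage: check in O(n) time whether
    `candidate` (a sequence of vertices) is a Hamiltonian circuit of
    the graph given by the adjacency sets `adj`."""
    n = len(adj)
    # the candidate must visit every vertex exactly once
    if len(candidate) != n or set(candidate) != set(adj):
        return False
    # consecutive vertices (and last back to first) must be adjacent
    return all(candidate[(i + 1) % n] in adj[candidate[i]] for i in range(n))

# a 4-cycle: 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(verify_hamiltonian_circuit(adj, [0, 1, 2, 3]))   # a correct guess: True
print(verify_hamiltonian_circuit(adj, [0, 2, 1, 3]))   # a wrong guess: False
```

Since verification is polynomial, the Hamiltonian circuit problem is in NP even though no polynomial-time algorithm is known for finding such a circuit.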

Class NP
Class NP is the class of decision problems that can be solved by nondeterministic polynomial
algorithms. This class of problems is called nondeterministic polynomial. Most decision
problems are in NP.

This class includes all the problems in P:


P ⊆ NP.

If a problem is in P, we can use the deterministic polynomial-time algorithm that solves it in the verification stage of a nondeterministic algorithm that simply ignores the string S generated in its nondeterministic (“guessing”) stage.

But NP also contains the Hamiltonian circuit problem, the partition problem, decision versions of the traveling salesman, knapsack, and graph-coloring problems, and many hundreds of other difficult combinatorial optimization problems. The halting problem, on the other hand, is among the rare examples of decision problems that are known not to be in NP.

NP-Complete Problems
A decision problem D1 is said to be polynomially reducible to a decision problem D2, if there
exists a function t that transforms instances of D1 to instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no
instances of D2
2. t is computable by a polynomial time algorithm

This definition immediately implies that if a problem D1 is polynomially reducible to some


problem D2 that can be solved in polynomial time, then problem D1 can also be solved in
polynomial time.

A decision problem D is said to be NP-complete if:


1. It belongs to class NP
2. Every problem in NP is polynomially reducible to D

Notion of an NP-complete problem. Polynomial-time reductions of NP


problems to an NP-complete problem are shown by arrows.

Example:

Let us prove that the Hamiltonian circuit problem is polynomially reducible to the decision version of the traveling salesman problem. The latter can be stated as the existence problem of a Hamiltonian circuit not longer than a given positive integer m in a given complete graph with positive integer weights.

We can map a graph G of a given instance of the Hamiltonian circuit problem to a complete weighted graph G' representing an instance of the traveling salesman problem by assigning 1 as the weight to each edge in G and adding an edge of weight 2 between any pair of nonadjacent vertices in G. As the upper bound m on the Hamiltonian circuit length, we take m = n, where n is the number of vertices in G (and G'). Obviously, this transformation can be done in polynomial time. Let G be a yes instance of the Hamiltonian circuit problem. Then G has a Hamiltonian circuit, and its image in G' will have length n, making the image a yes instance of the decision traveling salesman problem.

Conversely, if we have a Hamiltonian circuit of length not larger than n in G', then its length must be exactly n, and hence the circuit must be made up of edges present in G, making the inverse image of the yes instance of the decision traveling salesman problem a yes instance of the Hamiltonian circuit problem. This completes the proof.
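The transformation in this proof can be sketched directly: weight 1 on edges of G, weight 2 on the edges added to make the graph complete, with bound m = n. The function name is illustrative:

```python
from itertools import combinations

def hc_to_tsp(adj):
    """Polynomial-time reduction: map a graph G (adjacency sets) to a
    complete weighted graph G' with weight 1 on edges of G and weight 2
    between nonadjacent pairs; the decision-TSP bound is m = n."""
    vertices = sorted(adj)
    n = len(vertices)
    weight = {}
    for u, v in combinations(vertices, 2):
        w = 1 if v in adj[u] else 2        # edge of G? weight 1, else 2
        weight[(u, v)] = weight[(v, u)] = w
    return weight, n                       # ask: is there a tour of length <= n?

# a 4-cycle: 0-1-2-3-0 (0 and 2 are nonadjacent, as are 1 and 3)
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
weight, m = hc_to_tsp(adj)
print(weight[(0, 1)], weight[(0, 2)], m)   # 1 2 4
```

A tour of length exactly n = 4 in G' must use only weight-1 edges, i.e., edges of G, so it corresponds to a Hamiltonian circuit of G.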

An example of an NP-complete problem is the CNF-satisfiability problem, which deals with boolean expressions. The CNF-satisfiability problem asks whether or not one can assign the values true and false to the variables of a given boolean expression in conjunctive normal form (CNF) so as to make the entire expression true.

NP-Hard Problems - These problems need not have any bound on their running time. If any NP-complete problem is polynomial-time reducible to a problem X, then X belongs to the class NP-hard. Hence, all NP-complete problems are also NP-hard. In other words, if an NP-hard problem is solvable in nondeterministic polynomial time (that is, if it is also in NP), it is NP-complete.

Backtracking
Some problems can be solved by exhaustive search. The exhaustive-search technique suggests generating all candidate solutions and then identifying the one (or the ones) with a desired property.

Backtracking is a more intelligent variation of this approach. The principal idea is to


construct solutions one component at a time and evaluate such partially constructed
candidates as follows. If a partially constructed solution can be developed further without
violating the problem’s constraints, it is done by taking the first remaining legitimate option
for the next component. If there is no legitimate option for the next component, no
alternatives for any remaining component need to be considered. In this case, the algorithm
backtracks to replace the last component of the partially constructed solution with its next
option.

It is convenient to implement this kind of processing by constructing a tree of choices being


made, called the state-space tree. Its root represents an initial state before the search for a
solution begins. The nodes of the first level in the tree represent the choices made for the
first component of a solution; the nodes of the second level represent the choices for the
second component, and so on. A node in a state-space tree is said to be promising if it
corresponds to a partially constructed solution that may still lead to a complete solution;
otherwise, it is called non-promising. Leaves represent either non-promising dead ends or
complete solutions found by the algorithm.

In the majority of cases, a state space tree for a backtracking algorithm is constructed in the
manner of depth-first search. If the current node is promising, its child is generated by adding
the first remaining legitimate option for the next component of a solution, and the processing
moves to this child. If the current node turns out to be non-promising, the algorithm
backtracks to the node’s parent to consider the next possible option for its last component;
if there is no such option, it backtracks one more level up the tree, and so on. Finally, if the
algorithm reaches a complete solution to the problem, it either stops (if just one solution is
required) or continues searching for other possible solutions.

n-Queens problem
The problem is to place n queens on an n × n chessboard so that no two queens attack each
other by being in the same row or in the same column or on the same diagonal.

Let us consider the four-queens problem and solve it by the backtracking technique. Since
each of the four queens must be placed in its own row, all we need to do is to assign a
column for each queen on the board presented in figure.

We start with the empty board and then place queen 1 in the first possible position of its row,
which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1
and 2, in the first acceptable position for it, which is square (2, 3), the square in row 2 and
column 3. This proves to be a dead end because there is no acceptable position for queen 3. So,
the algorithm backtracks and puts queen 2 in the next possible position at (2, 4). Then queen 3
is placed at (3, 2), which proves to be another dead end. The algorithm then backtracks all the
way to queen 1 and moves it to (1, 2). Queen 2 then goes to (2, 4), queen 3 to(3, 1), and queen
4 to (4, 3), which is a solution to the problem. The state-space tree of this search is shown in
figure.

Figure: State-space tree of solving the four-queens problem by backtracking.


x denotes an unsuccessful attempt to place a queen in the indicated column.
The numbers above the nodes indicate the order in which the nodes are
generated.


If other solutions need to be found, the algorithm can simply resume its operations at the leaf at which it stopped. It has been proved that a single solution to the n-queens problem for any n ≥ 4 can be found in linear time.
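The walkthrough above can be sketched as a backtracking routine that places queens row by row, undoing the last placement whenever no column is safe (a minimal sketch; names are illustrative):

```python
def solve_n_queens(n):
    """Backtracking search: queens[i] is the column of the queen in row i.
    Returns the first solution found, or None if there is none."""
    queens = []

    def safe(col):
        row = len(queens)
        # no shared column and no shared diagonal with any placed queen
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(queens))

    def backtrack():
        if len(queens) == n:
            return True
        for col in range(n):            # try columns left to right
            if safe(col):
                queens.append(col)
                if backtrack():
                    return True
                queens.pop()            # dead end: undo and try next option
        return False                    # no legitimate option: backtrack further

    return queens if backtrack() else None

print(solve_n_queens(4))  # → [1, 3, 0, 2], i.e. queens at (1,2), (2,4), (3,1), (4,3)
```

For n = 4 this reproduces the solution reached in the walkthrough: queen 1 in column 2, queen 2 in column 4, queen 3 in column 1, queen 4 in column 3 (columns 0-indexed in the output).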
Sum of subsets problem
Problem definition: Find a subset of a given set A = {a1, . . . , an} of n positive integers
whose sum is equal to a given positive integer d.

For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and {1,
8}. Of course, some instances of this problem may have no solutions.

It is convenient to sort the set’s elements in increasing order. So, we will assume that

a1< a2< . . . < an.

The state-space tree can be constructed as a binary tree like that in Figure shown below for
the instance A = {3, 5, 6, 7} and d =15.

The root of the tree represents the starting point, with no decisions about the given elements
made yet. Its left and right children represent, respectively, inclusion and exclusion of a1 in
a set being sought. Similarly, going to the left from a node of the first level corresponds to
inclusion of a2 while going to the right corresponds to its exclusion, and so on. Thus, a path
from the root to a node on the ith level of the tree indicates which of the first i numbers have been included in the subsets represented by that node.

We record the value of s, the sum of these numbers, in the node. If s is equal to d, we have a solution to the problem. We can either report this result and stop or, if all the solutions need to be found, continue by backtracking to the node's parent. If s is not equal to d, we can terminate the node as nonpromising if either of the following two inequalities holds:

s + ai+1 > d (the sum s is too large)
s + (ai+1 + . . . + an) < d (the sum s is too small).
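This pruning can be sketched as a backtracking routine over a sorted list (a minimal sketch; the include-branch is explored first, matching the left-child convention of the state-space tree):

```python
def subset_sums(a, d):
    """Backtracking for the sum-of-subsets problem; `a` must be sorted
    in increasing order. A node is pruned when s + a[i] would overshoot d
    or when s plus all remaining elements cannot reach d."""
    solutions = []

    def backtrack(i, s, chosen):
        if s == d:                          # promising node turned solution
            solutions.append(list(chosen))
            return
        if i == len(a):
            return
        if s + sum(a[i:]) < d:              # too small even with everything left
            return
        if s + a[i] <= d:                   # include a[i] (left branch)
            chosen.append(a[i])
            backtrack(i + 1, s + a[i], chosen)
            chosen.pop()
        backtrack(i + 1, s, chosen)         # exclude a[i] (right branch)

    backtrack(0, 0, [])
    return solutions

print(subset_sums([1, 2, 5, 6, 8], 9))  # → [[1, 2, 6], [1, 8]]
```

On the earlier example A = {1, 2, 5, 6, 8} with d = 9, the routine finds exactly the two solutions {1, 2, 6} and {1, 8}.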


General Remarks

A backtracking algorithm generates, explicitly or implicitly, a state-space tree; its nodes represent
partially constructed tuples with the first i coordinates defined by the earlier actions of the
algorithm. If such a tuple (x1, x2, . . . , xi) is not a solution, the algorithm finds the next element
in Si+1 that is consistent with the values of (x1, x2, . . . , xi) and the problem’s constraints, and
adds it to the tuple as its (i + 1)st coordinate. If such an element does not exist, the algorithm
backtracks to consider the next value of xi, and so on.

To start a backtracking algorithm, the following pseudocode can be called for i = 0; X[1..0] represents the empty tuple.
ALGORITHM Backtrack(X[1..i])
//Gives a template of a generic backtracking algorithm
//Input: X[1..i] specifies the first i promising components of a solution
//Output: All the tuples representing the problem’s solutions
if X[1..i] is a solution write X[1..i]
else
    for each element x ∈ Si+1 consistent with X[1..i] and the constraints do
        X[i + 1] ← x
        Backtrack(X[1..i + 1])

Branch and Bound


Recall that the central idea of backtracking, discussed in the previous section, is to cut off a branch of the problem's state-space tree as soon as we can deduce that it cannot lead to a solution. This idea can be strengthened further if we deal with an optimization problem.

An optimization problem seeks to minimize or maximize some objective function (a tour


length, the value of items selected, the cost of an assignment, and the like), usually subject to
some constraints. An optimal solution is a feasible solution with the best value of the
objective function (e.g., the shortest Hamiltonian circuit or the most valuable subset of items
that fit the knapsack).

Compared to backtracking, branch-and-bound requires two additional items:


1. a way to provide, for every node of a state-space tree, a bound on the best value of
the objective function on any solution that can be obtained by adding further
components to the partially constructed solution represented by the node
2. the value of the best solution seen so far

In general, we terminate a search path at the current node in a state-space tree of a branch-
and-bound algorithm for any one of the following three reasons:

1. The value of the node’s bound is not better than the value of the best solution seen
so far.
2. The node represents no feasible solutions because the constraints of the problem are
already violated.
3. The subset of feasible solutions represented by the node consists of a single point (and hence no further choices can be made)—in this case, we compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.

0/1 Knapsack Problem


Given n items of known weights wi and values vi, i = 1, 2, . . . , n, and a knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack. It is natural to structure the state-space tree by ordering the items of a given instance in descending order of their value-to-weight ratios.

The upper bound ub can be computed as the sum of v, the total value of the items already selected, and the product of the remaining capacity of the knapsack, W − w, and the best per-unit payoff among the remaining items, which is vi+1/wi+1:

ub = v + (W − w)(vi+1/wi+1).

Example: Consider the following problem. The items are already ordered in descending order
of their value-to-weight ratios.

At the root of the state-space tree, no items have been selected as yet. Hence, both the total
weight of the items already selected w and their total value v are equal to 0. The value of the
upper bound is 100.

Node 1, the left child of the root, represents the subsets that include item 1. The total weight
and value of the items already included are 4 and 40, respectively; the value of the upper
bound is 40 + (10 − 4) * 6 = 76.

Node 2 represents the subsets that do not include item 1. Accordingly, w = 0, v = 0, and ub = 0 + (10 − 0) * 6 = 60. Since node 1 has a larger upper bound than node 2, it is more promising for this maximization problem, and we branch from node 1 first. Its children—nodes 3 and 4—represent subsets with item 1 and with and without item 2, respectively. Since the total weight w of every subset represented by node 3 exceeds the knapsack's capacity, node 3 can be terminated immediately.

Node 4 has the same values of w and v as its parent; the upper bound ub is equal to 40 + (10 − 4) * 5 = 70. Selecting node 4 over node 2 for the next branching (due to its better ub), we get nodes 5 and 6 by respectively including and excluding item 3. The total weights and values as well as the upper bounds for these nodes are computed in the same way as for the preceding nodes.

Branching from node 5 yields node 7, which represents no feasible solutions, and node 8,
which represents just a single subset {1, 3} of value 65. The remaining live nodes 2 and 6
have smaller upper-bound values than the value of the solution represented by node 8. Hence,
both can be terminated making the subset {1, 3} of node 8 the optimal solution to the
problem.
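The best-first search above can be sketched with a priority queue keyed on the upper bound. The first three items (weights 4, 7, 5 with values 40, 42, 25) follow from the computations in the example; the fourth item's weight 3 and value 12 are an assumption, since the instance table did not survive extraction:

```python
import heapq

def knapsack_bb(items, W):
    """Best-first branch-and-bound for the 0/1 knapsack problem.
    `items` is a list of (weight, value) pairs already sorted in
    nonincreasing order of value-to-weight ratio."""
    n = len(items)

    def ub(i, w, v):
        # v + remaining capacity * best per-unit payoff among remaining items
        if i < n:
            wi, vi = items[i]
            return v + (W - w) * vi / wi
        return v

    best_v, best_set = 0, []
    # max-heap simulated by negating the upper bound
    heap = [(-ub(0, 0, 0), 0, 0, 0, [])]
    while heap:
        neg, i, w, v, chosen = heapq.heappop(heap)
        if -neg <= best_v or i == n:        # bound no better than best so far
            continue
        wi, vi = items[i]
        if w + wi <= W:                     # child including item i
            if v + vi > best_v:
                best_v, best_set = v + vi, chosen + [i]
            heapq.heappush(heap, (-ub(i + 1, w + wi, v + vi),
                                  i + 1, w + wi, v + vi, chosen + [i]))
        # child excluding item i
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v, chosen))
    return best_v, best_set

items = [(4, 40), (7, 42), (5, 25), (3, 12)]   # (weight, value), sorted by ratio
print(knapsack_bb(items, 10))                   # → (65, [0, 2])
```

The search reproduces the example's result: the subset of items 1 and 3 (indices 0 and 2 here) with total value 65, after pruning the live nodes whose upper bounds cannot beat 65.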

Figure: State-space tree of the best-first branch-and-bound algorithm for the instance of the
knapsack problem.


Comparison of Backtracking and Branch and Bound

Similarity
Both techniques are based on the construction of a state-space tree whose nodes reflect
specific choices made for a solution’s components.
Both techniques terminate a node as soon as it can be guaranteed that no solution to the
problem can be obtained by considering choices that correspond to the node’s descendants.

Differences
• Backtracking applies also to non-optimization problems, whereas branch and bound is applicable only to optimization problems.
• In backtracking, the state-space tree is usually developed depth-first, whereas branch and bound can generate nodes according to several rules, the most natural of which is the so-called best-first rule.

Approximation Algorithms for NP-Hard Problems


NP-hard problems are problems that are at least as hard as NP-complete problems. Hence, there are no known polynomial-time algorithms for these problems, and there are serious theoretical reasons to believe that such algorithms do not exist.

If an instance of the problem in question is very small, we might be able to solve it by an exhaustive-search algorithm. Some such problems can be solved by the dynamic programming technique. But even when this approach works in principle, its practicality is limited by its dependence on the instance parameters being relatively small. The discovery of the branch-and-bound technique proved to be an important breakthrough, because this technique makes it possible to solve many large instances of difficult optimization problems in an acceptable amount of time. However, such good performance cannot usually be guaranteed.

There is a radically different way of dealing with difficult optimization problems: solve them
approximately by a fast algorithm. This approach is particularly appealing for applications
where a good but not necessarily optimal solution will be sufficient. Besides, in real-life
applications, we often have to operate with inaccurate data to begin with. Under such
circumstances, going for an approximate solution can be a particularly sensible choice.

Most of the approximation algorithms are based on some problem-specific heuristic.

A heuristic is a common-sense rule drawn from experience rather than from a mathematically proved assertion. For example, going to the nearest unvisited city in the traveling salesman problem is a good illustration of this notion.

DEFINITION A polynomial-time approximation algorithm is said to be a c-approximation algorithm, where c ≥ 1, if the accuracy ratio of the approximation it produces does not exceed c for any instance of the problem in question:
r(sa) ≤ c.
The best (i.e., the smallest) value of c for which inequality holds for all instances of the problem
is called the performance ratio of the algorithm and denoted RA.

The performance ratio serves as the principal metric indicating the quality of the approximation
algorithm. We would like to have approximation algorithms with RA as close to 1 as possible.

Approximation Algorithms for the Knapsack Problem

Greedy Algorithms for the Knapsack Problem

The greedy strategy takes into account both the weights and the values by computing the value-to-weight ratios vi/wi, i = 1, 2, . . . , n, and selecting the items in nonincreasing order of these ratios. Here is the algorithm based on this greedy heuristic.
Greedy algorithm for the discrete knapsack problem
Step 1 Compute the value-to-weight ratios ri = vi/wi, i = 1, . . . , n, for the items given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until no item is left in the sorted list: if the current item
on the list fits into the knapsack, place it in the knapsack and proceed to the next item;
otherwise, just proceed to the next item.
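The three steps can be sketched in Python. The item data below is a hypothetical instance chosen for illustration (weights 4, 7, 5, 3 as in the example that follows; the values are assumptions consistent with its ratio ordering):

```python
def greedy_knapsack(items, W):
    """Greedy heuristic for the discrete (0/1) knapsack problem:
    sort by value-to-weight ratio, then take each item that still fits."""
    # Step 1 + Step 2: sort (weight, value) pairs in nonincreasing ratio order
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0
    taken = []
    # Step 3: scan the sorted list, taking every item that fits
    for w, v in order:
        if total_w + w <= W:
            taken.append((w, v))
            total_w += w
            total_v += v
    return total_v, taken

items = [(7, 42), (3, 12), (4, 40), (5, 25)]   # hypothetical (weight, value) pairs
print(greedy_knapsack(items, 10))               # → (65, [(4, 40), (5, 25)])
```

With capacity 10 this takes the weight-4 item, skips the weight-7 item, takes the weight-5 item, and skips the weight-3 item, mirroring the walkthrough below.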

Example:
Let us consider the instance of the knapsack problem with the knapsack capacity 10 and the
item information as follows:


Computing the value-to-weight ratios and sorting the items in nonincreasing order
of these efficiency ratios yields

The greedy algorithm will select the first item of weight 4, skip the next item of weight 7, select the next item of weight 5, and skip the last item of weight 3. The total value of the selected items is 65.

Does this greedy algorithm always yield an optimal solution? The answer, of course, is no: if it did, we would have a polynomial-time algorithm for this NP-hard problem.

Greedy algorithm for the continuous knapsack problem


Step 1 Compute the value-to-weight ratios vi/wi, i = 1, . . . , n, for the items given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until the knapsack is filled to its full capacity or no item
is left in the sorted list: if the current item on the list fits into the knapsack in its entirety, take
it and proceed to the next item; otherwise, take its largest fraction to fill the knapsack to its full
capacity and stop.
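The continuous version differs only in the final step: the first item that does not fit is split. A minimal sketch, reusing the same hypothetical instance as before (weights from the example, values assumed):

```python
def greedy_continuous_knapsack(items, W):
    """Greedy algorithm for the continuous knapsack problem: take items in
    nonincreasing ratio order, splitting the first item that overflows.
    This version is always optimal for the continuous problem."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    capacity, total_v = W, 0.0
    for w, v in order:
        if w <= capacity:                  # item fits in its entirety
            capacity -= w
            total_v += v
        else:                              # take the largest fitting fraction
            total_v += v * capacity / w
            break                          # knapsack is now full
    return total_v

items = [(7, 42), (3, 12), (4, 40), (5, 25)]   # hypothetical (weight, value) pairs
print(greedy_continuous_knapsack(items, 10))    # → 76.0, i.e. 40 + (6/7)*42
```

With capacity 10 it takes the weight-4 item whole and 6/7 of the weight-7 item, matching the solution below.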

Example:
Let us consider the instance of the knapsack problem with the knapsack capacity 10 and the
item information as follows:

Computing the value-to-weight ratios and sorting the items in nonincreasing order of these
efficiency ratios yields

Solution: the algorithm will take the first item of weight 4 and then 6/7 of the next item on the sorted list to fill the knapsack to its full capacity.

Total value = 40 + (6/7) * 42 = 76.


This algorithm always yields an optimal solution to the continuous knapsack problem.

Approximation algorithm for Knapsack


The first approximation scheme was suggested by S. Sahni in 1975 [Sah75]. This algorithm
generates all subsets of k items or less, and for each one that fits into the knapsack it adds the
remaining items as the greedy algorithm would do (i.e., in nonincreasing order of their value-
to-weight ratios). The subset of the highest value obtained in this fashion is returned as the
algorithm’s output.
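Sahni's scheme can be sketched as follows: enumerate every subset of at most k items, greedily complete each feasible one in nonincreasing ratio order, and keep the best. The instance below is hypothetical (not the one in the figure), and the function name is illustrative:

```python
from itertools import combinations

def sahni_knapsack(items, W, k):
    """Sketch of Sahni's approximation scheme: try every subset of at most
    k items, complete each feasible one greedily by value-to-weight ratio,
    and return the best total value found (with the chosen indices)."""
    idx = sorted(range(len(items)), key=lambda i: items[i][1] / items[i][0],
                 reverse=True)
    best_v, best_set = 0, []
    for r in range(k + 1):
        for subset in combinations(range(len(items)), r):
            w = sum(items[i][0] for i in subset)
            v = sum(items[i][1] for i in subset)
            if w > W:
                continue                    # infeasible starting subset
            chosen = list(subset)
            for i in idx:                   # greedy completion
                if i not in chosen and w + items[i][0] <= W:
                    chosen.append(i)
                    w += items[i][0]
                    v += items[i][1]
            if v > best_v:
                best_v, best_set = v, sorted(chosen)
    return best_v, best_set

items = [(4, 40), (7, 42), (5, 25), (3, 12)]   # hypothetical (weight, value) pairs
print(sahni_knapsack(items, 10, 2))             # → (65, [0, 2])
```

Larger k improves the accuracy guarantee at the cost of examining more starting subsets (O(n^k) of them), which is why the scheme remains polynomial only for fixed k.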

Example: a small example of the approximation scheme with k = 2 is provided in the figure below.

The algorithm yields {1, 3, 4}, which is the optimal solution for this instance.


