Exponentially Small Bounds on the Expected Optimum of the Partition and Subset Sum Problems

George S. Lueker

Department of Information and Computer Science, University of California, Irvine, Irvine, CA 92697-3425; e-mail: [email protected]

Received November 14, 1995; accepted January 30, 1997

ABSTRACT: In the partition problem we seek to partition a list of numbers into two sublists to minimize the difference between the sums of the two sublists. For this and the related subset sum problem, under suitable assumptions on the probability distributions of the input, it is known that the median of the optimum difference is exponentially small. In this paper we show, again under suitable assumptions on the distribution, that the expectation of the difference is also exponentially small. (c) 1998 John Wiley & Sons, Inc. Random Struct. Alg., 12, 51-62, 1998

Key Words: partition problem; subset sum problem; combinatorial optimization; probabilistic
analysis

1. INTRODUCTION
Define the partition problem as follows. Given $n$ numbers $x_1, x_2, \ldots, x_n$, find values for $\gamma_i \in \{-1, 1\}$ so as to minimize
\[
\Bigl| \sum_{i=1}^{n} \gamma_i x_i \Bigr| . \tag{1.1}
\]
Also define a related problem called the subset sum problem; here we are given a target value $t$ and asked to choose $\delta_i \in \{0, 1\}$ to minimize
\[
\Bigl| t - \sum_{i=1}^{n} \delta_i x_i \Bigr| . \tag{1.2}
\]
Determining whether the minimum achievable in (1.1), or in (1.2), is 0 is NP-complete, and thus either minimization problem is NP-hard (see [GJ79, Karp72]).
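To make the two objectives concrete, here is a small brute-force sketch (our illustration, not part of the paper); it enumerates all $2^n$ sign patterns and all $2^n$ subsets, which is only feasible for small $n$, in keeping with the NP-hardness just noted.

import random
from itertools import product

def partition_optimum(xs):
    # Minimize |sum_i gamma_i * x_i| over gamma_i in {-1, +1}  -- objective (1.1).
    return min(abs(sum(g * x for g, x in zip(gs, xs)))
               for gs in product((-1, 1), repeat=len(xs)))

def subset_sum_optimum(xs, t):
    # Minimize |t - sum_i delta_i * x_i| over delta_i in {0, 1}  -- objective (1.2).
    return min(abs(t - sum(d * x for d, x in zip(ds, xs)))
               for ds in product((0, 1), repeat=len(xs)))

print(partition_optimum([31, 17, 56, 4]))        # -> 4
print(subset_sum_optimum([31, 17, 56, 4], 50))   # -> 2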
In this paper we are interested in the behavior of these problems when the $x_i$ are i.i.d. random variables. Under fairly general conditions, the median of the solution for the subset sum problem has been shown to be exponentially small when $t$ is near $E[\sum_{i=1}^{n} x_i]$ [Luek82]; this result has found application in the probabilistic analysis of approximation algorithms for the 0-1 Knapsack problem [Luek82, GMS84]. The median solution to the partition problem is known to be exponentially small [KKLO86] under fairly general conditions; that paper commented that "a significant question which our results leave open is the expected value of the difference for the best partition" [KKLO86, p. 643].
Under fairly general conditions on the distribution of the $X_i$, we show that the expected value of the solution to these problems is also exponentially small, i.e., of the form $O(e^{-cn})$, though we make no claim that we have the best value for the constant $c$. The proof method is in some ways similar to the argument in [PIA78]: we model the problem by a sequence of random variables and then apply a nonlinear transformation to make the sequence amenable to analysis by martingale theory.

We note that while the bounds developed in [KKLO86, Luek82] on the median are much more precise than those we show here on the expectation, the bounds in [KKLO86, Luek82] are not strong enough to show that the expectation is exponentially small; see the first two paragraphs in [KKLO86, Section 4]. Moreover, the results of the present paper show that it is likely that for every value $z$ in some interval, some partition difference or subset sum comes close to $z$. (See Corollary 2.5 and the Corollaries of Section 3 for more precise statements.)
The result in this paper is simply a statement of the behavior of the optimum; we do not know whether it can be achieved by a polynomial-time algorithm. We note that algorithms for the partition problem have received considerable attention; see [CL91] for details. In [KK82] the notion of differencing two variables is used. Differencing $x$ and $y$ means replacing them by their difference $|x - y|$; this simply corresponds to placing $x$ and $y$ on opposite sides of the partition. [KK82] showed that a fairly complicated algorithm based on this idea tended to achieve a difference of only $n^{-\Omega(\log n)}$. In [Yaki96] this was proven for a much simpler and more natural implementation of the differencing method, for uniformly or exponentially distributed $X_i$.
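As an illustration of the differencing operation just described (our sketch of the simple largest-first differencing heuristic; this is neither the more elaborate algorithm of [KK82] nor an algorithm from this paper):

import heapq, random

def differencing_heuristic(xs):
    # Repeatedly difference the two largest remaining numbers x >= y >= 0, i.e. commit
    # them to opposite sides of the partition and replace them by x - y; the last
    # surviving number is the achieved value of |sum_i gamma_i x_i|.
    heap = [-x for x in xs]              # max-heap via negation; assumes xs >= 0
    heapq.heapify(heap)
    while len(heap) > 1:
        x = -heapq.heappop(heap)
        y = -heapq.heappop(heap)
        heapq.heappush(heap, -(x - y))
    return -heap[0] if heap else 0.0

print(differencing_heuristic([random.uniform(0, 1) for _ in range(1000)]))

On i.i.d. inputs this heuristic typically produces very small differences, though still far larger than the exponentially small optimum studied in this paper.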
2. THE EXPECTED SUBSET-SUM SOLUTION
Assume that the $X_i$ are uniformly distributed over $[-1, 1]$. Also assume that some $\epsilon \in (0, \tfrac12)$ is specified. If $A$ is some event, the indicator for $A$, written $1_A$, is the random variable which is 1 if $A$ holds and 0 otherwise. Let $f_{k,\epsilon}(z)$, or more briefly $f_k(z)$, be the indicator for the event
\[
\exists\, \delta_i \in \{0, 1\} \text{ such that } \Bigl| \sum_{i=1}^{k} \delta_i X_i - z \Bigr| \le \epsilon . \tag{2.1}
\]
Informally, $f_k(z)$ tells us whether $z$ can be approximated to within $\epsilon$ by summing some subset of the first $k$ variables. Note that $f_0(z) = 1_{|z| \le \epsilon}$, i.e., $f_0(z)$ is simply 1 if $|z| \le \epsilon$ and 0 otherwise. Also note that, letting $\vee$ denote the operator "or" as usually defined for 0-1 variables, we have for $0 \le k < n$
\[
f_{k+1}(z) = f_k(z) \vee f_k(z - X_{k+1}) = f_k(z) + \bigl(1 - f_k(z)\bigr)\, f_k(z - X_{k+1}) . \tag{2.2}
\]
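Recurrence (2.2) is easy to experiment with numerically. The following sketch (our illustration, not part of the paper's argument) evaluates the indicator on a discretized grid of $z$ values in $[-\tfrac12, \tfrac12]$ and reports the fraction of the grid covered after each variable; since the grid only covers $[-\tfrac12, \tfrac12]$, lookups of $z - X_{k+1}$ outside that interval are simply skipped, which anticipates the admissibility restriction introduced next. The covered fraction foreshadows the quantity $p_k$ defined in (2.5) below.

import random

def covered_fraction(n=60, eps=0.01, grid=20001, seed=1):
    # f[j] approximates f_k(z_j) for grid points z_j in [-1/2, 1/2], via recurrence (2.2),
    # except that lookups of z - x falling outside [-1/2, 1/2] are skipped.
    random.seed(seed)
    zs = [-0.5 + j / (grid - 1) for j in range(grid)]
    f = [1 if abs(z) <= eps else 0 for z in zs]      # f_0(z) = 1_{|z| <= eps}
    fractions = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        g = list(f)                                   # f_{k+1}(z) = f_k(z) OR f_k(z - x)
        for j, z in enumerate(zs):
            if not g[j]:
                w = z - x
                if -0.5 <= w <= 0.5:
                    g[j] = f[round((w + 0.5) * (grid - 1))]
        f = g
        fractions.append(sum(f) / grid)
    return fractions

print(covered_fraction()[-5:])   # the covered fraction typically climbs toward 1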

For our analysis it will be useful to restrict the choices for the $\delta_i$ in (2.1). Say that a choice of values for $\delta_1, \delta_2, \ldots, \delta_k$ is admissible for a given $z \in [-\tfrac12, \tfrac12]$ if
\[
\forall\, k' \in \{1, \ldots, k\}, \qquad z - \sum_{i=k'}^{k} \delta_i X_i \in \bigl[-\tfrac12, \tfrac12\bigr] .
\]
Say that $z$ has an admissible $\epsilon$-approximation if (2.1) holds even when we are only allowed to consider admissible choices for the $\delta_i$. Define $\bar f_k(z)$ to be the indicator for the event that $z$ has an admissible $\epsilon$-approximation. Then, as before, we have
\[
\bar f_0(z) = 1_{|z| \le \epsilon} , \tag{2.3}
\]
and the recurrence (2.2) must be modified to
\[
\bar f_{k+1}(z) = \bar f_k(z) + \bigl(1 - \bar f_k(z)\bigr)\, 1_{z - X_{k+1} \in [-1/2,\, 1/2]}\, \bar f_k(z - X_{k+1}) . \tag{2.4}
\]
Next define $p_k$ to be the random variable (depending on $X_1, \ldots, X_k$)
\[
p_k = \int_{-1/2}^{1/2} \bar f_k(z)\, dz . \tag{2.5}
\]
Informally, this tells us the fraction of the interval $[-\tfrac12, \tfrac12]$ which has an admissible $\epsilon$-approximation; the essence of the proof is to study how this fraction grows as $k$ increases. From (2.3) we have $p_0 = 2\epsilon$. Note also that we must have
\[
p_{k+1} \le 2 p_k , \tag{2.6}
\]
since the fraction of the interval which is covered can at most double. Also, if $p_k < 1$ and we fix the values of $X_1, \ldots, X_k$, then from (2.4), (2.5), and the fact that the density of $X_{k+1}$ is $\tfrac12$ over $[-1, 1]$, we can compute the following recurrence for the expected value of $p_{k+1}$:
\[
\begin{aligned}
E[\, p_{k+1}\,] &= E\Bigl[ \int_{-1/2}^{1/2} \bar f_{k+1}(z)\, dz \Bigr] \\
&= \int_{-1/2}^{1/2} \bar f_k(z)\, dz
   + \int_{-1/2}^{1/2} \bigl(1 - \bar f_k(z)\bigr) \int_{-1}^{1} \tfrac12\, 1_{z - x \in [-1/2,\, 1/2]}\, \bar f_k(z - x)\, dx\, dz \\
&= p_k + \tfrac12 \int_{-1/2}^{1/2} \bigl(1 - \bar f_k(z)\bigr) \int_{-1/2}^{1/2} \bar f_k(u)\, du\, dz \\
&= p_k + \tfrac12 \int_{-1/2}^{1/2} \bigl(1 - \bar f_k(z)\bigr)\, dz \int_{-1/2}^{1/2} \bar f_k(u)\, du \\
&= p_k + \tfrac12 (1 - p_k)\, p_k . \tag{2.7}
\end{aligned}
\]

(Here $X_1, X_2, \ldots, X_k$ are considered fixed, and the expectation is taken with respect to $X_{k+1}$.) Now for $k+1 \in \{1, \ldots, n\}$ let $Z_{k+1}$ be the random variable defined by
\[
Z_{k+1} =
\begin{cases}
\dfrac{p_{k+1} - p_k}{p_k (1 - p_k)} & \text{if } p_k < 1, \text{ and} \\[1ex]
\tfrac12 & \text{if } p_k = 1 .
\end{cases} \tag{2.8}
\]
From (2.7) we conclude that, regardless of the values of $X_1, \ldots, X_k$, we have
\[
E[\, Z_{k+1}\,] = \tfrac12 . \tag{2.9}
\]
Moreover, since using (2.6) we have $p_k \le p_{k+1} \le \min(2 p_k, 1)$, one easily computes that
\[
0 \le Z_{k+1} \le 2 . \tag{2.10}
\]
Thus the sequence $-k/2 + \sum_{i=1}^{k} Z_i$, for $k = 0, 1, \ldots, n$, is a martingale, so a standard application of a Hoeffding bound [Hoef63] yields

Lemma 2.1. For $a \le n/2$,
\[
\Pr\Bigl[ \sum_{i=1}^{n} Z_i \le a \Bigr] \le \exp\Bigl( -\frac{(n/2 - a)^2}{2n} \Bigr) .
\]
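For the reader's convenience, here is the standard step behind Lemma 2.1 (our reconstruction; it is not spelled out in the paper). By (2.9) and (2.10) the martingale differences $Z_i - \tfrac12$ have conditional mean 0 and lie in an interval of length 2 (namely $[-\tfrac12, \tfrac32]$), so the Hoeffding-Azuma inequality gives, for $t \ge 0$,
\[
\Pr\Bigl[ \sum_{i=1}^{n} \bigl( Z_i - \tfrac12 \bigr) \le -t \Bigr]
\ \le\ \exp\Bigl( -\frac{2 t^2}{\sum_{i=1}^{n} 2^2} \Bigr)
\ =\ \exp\Bigl( -\frac{t^2}{2n} \Bigr) .
\]
Taking $t = n/2 - a$ (nonnegative since $a \le n/2$) and noting that $\sum_i Z_i \le a$ is the same event as $\sum_i (Z_i - \tfrac12) \le -(n/2 - a)$ yields the bound of Lemma 2.1.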

In order to monitor the evolution of the sequence $p_k$, it is useful to consider the function
\[
\psi(p) = \lg p - \ln(1 - p) + p/2 , \tag{2.11}
\]
so
\[
\psi'(p) = \frac{\lg e}{p} + \frac{1}{1 - p} + \frac12 . \tag{2.12}
\]
[To avoid having to deal with special cases when the argument of $\psi$ is 1, in the following we assume the following conventions: $\psi(1) = \infty$, $\infty \ge r$ and $\infty + r = \infty$ for all real $r$, and $\infty \ge \infty$. Also, we assume that division has precedence lower than multiplication, so that we can write, for example, $e^{n/2C}$ instead of the more cumbersome $e^{n/(2C)}$.]
Lemma 2.2. For $p_k \in (0, 1]$, we have
\[
\psi(p_{k+1}) \ge \psi(p_k) + Z_{k+1} . \tag{2.13}
\]
Proof. If $p_{k+1} = 1$, then (2.13) holds since the left side is $\infty$. Also, if $Z_{k+1} = 0$, then $p_k = p_{k+1}$ and (2.13) holds trivially. Otherwise we need to show that
\[
1 \le \frac{\psi(p_{k+1}) - \psi(p_k)}{Z_{k+1}}
  = \frac{\psi\bigl(p_k + Z_{k+1}\, p_k (1 - p_k)\bigr) - \psi(p_k)}{Z_{k+1}} . \tag{2.14}
\]

Consider several cases.

Case 1. $p_k \in (0, 1/4)$. Then since $p_k + Z_{k+1}\, p_k (1 - p_k) = p_{k+1} \le 2 p_k$ we have $Z_{k+1} \le 1/(1 - p_k)$. Since $\psi'$ is decreasing over $(0, \tfrac12)$, the right-hand side of (2.14) is bounded below (see Appendix) by
\[
\frac{\psi(2 p_k) - \psi(p_k)}{1/(1 - p_k)}
 = (1 - p_k) \Bigl( 1 + \ln \frac{1 - p_k}{1 - 2 p_k} + \frac{p_k}{2} \Bigr)
 \ \ge\ (1 - p_k) \Bigl( 1 + \frac{p_k}{1 - p_k} \Bigr)
 = 1 .
\]

Case 2. $p_k \in [1/4, 1/2]$. Straightforward computation shows that $\psi'$ has a minimum, over $(0, 1)$, of
\[
\bigl( 1 + (\lg e)^{1/2} \bigr)^2 + \tfrac12 \ \ge\ \tfrac{16}{3} .
\]
Hence the right-hand side of (2.14) is at least $\tfrac{16}{3}\, p_k (1 - p_k)$, which is at least 1 for any $p_k \in [1/4, 1/2]$.

Case 3. $p_k \in (1/2, 1)$. Then the right-hand side of (2.14) is at least 1 since one easily sees that $\psi'$ is bounded below by $1/\bigl(p_k (1 - p_k)\bigr)$ over the interval $(p_k, 1)$. $\blacksquare$
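As a quick numerical sanity check of Lemma 2.2 (our addition, not part of the paper), one can verify inequality (2.13) on a grid of pairs $(p_k, Z_{k+1})$ consistent with (2.6) and (2.10):

import math

def psi(p):
    # psi(p) = lg p - ln(1 - p) + p/2, with psi(1) = infinity (convention of the paper).
    if p >= 1.0:
        return math.inf
    return math.log2(p) - math.log(1.0 - p) + p / 2.0

# Check (2.13): psi(p + Z*p*(1-p)) >= psi(p) + Z whenever 0 <= Z <= 2 and
# p + Z*p*(1-p) <= min(2p, 1).
bad = 0
steps = 400
for i in range(1, steps):
    p = i / steps
    zmax = min(2.0, (min(2 * p, 1.0) - p) / (p * (1 - p)))
    for j in range(201):
        Z = zmax * j / 200
        if psi(p + Z * p * (1 - p)) < psi(p) + Z - 1e-9:
            bad += 1
print("violations:", bad)   # expect 0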
Lemma 2.3. If
\[
\sum_{i=1}^{n} Z_i \ge (1 + \lg e) \ln \epsilon^{-1} - \tfrac12 , \tag{2.15}
\]
then every number $z \in [-\tfrac12, \tfrac12]$ has an admissible $2\epsilon$-approximation.

Proof. First note that
\[
\psi(p_0) = \psi(2\epsilon) = \lg(2\epsilon) - \ln(1 - 2\epsilon) + 2\epsilon/2 \ge 1 + \lg \epsilon , \tag{2.16}
\]
and
\[
\psi(p_n) = \lg(p_n) - \ln(1 - p_n) + p_n/2 \le -\ln(1 - p_n) + \tfrac12 ,
\]
i.e.,
\[
-\ln(1 - p_n) + \tfrac12 \ge \psi(p_n) . \tag{2.17}
\]
Using Lemma 2.2 and the assumption of this lemma we have
\[
\psi(p_n) \ge \psi(p_0) + \sum_{i=1}^{n} Z_i \ge \psi(p_0) + (1 + \lg e) \ln \epsilon^{-1} - \tfrac12 . \tag{2.18}
\]
Adding the left and right sides of (2.16), (2.17), and (2.18) gives
\[
\psi(p_0) - \ln(1 - p_n) + \tfrac12 + \psi(p_n)
\ \ge\ 1 + \lg \epsilon + \psi(p_n) + \psi(p_0) + (1 + \lg e) \ln \epsilon^{-1} - \tfrac12 ,
\]
which simplifies to
\[
-\ln(1 - p_n) \ \ge\ \lg \epsilon + (1 + \lg e) \ln \epsilon^{-1} = -\ln \epsilon ,
\]
implying $1 - p_n \le \epsilon$. Thus the measure of the portion of $[-\tfrac12, \tfrac12]$ over which $\bar f_n$ is 0 is at most $\epsilon$. Hence each point $z$ of the interval $[-\tfrac12, \tfrac12]$ either has $\bar f_n(z) = 1$ or is within $\epsilon$ of a point $z'$ for which $\bar f_n(z') = 1$. From the definition of $\bar f_n$, this implies that each point in $[-\tfrac12, \tfrac12]$ has an admissible $2\epsilon$-approximation. $\blacksquare$
Since we will frequently use the constant $1 + \lg e$, we will henceforth let $C$ denote this constant. The numerical value of $C$ is approximately 2.442695.

Theorem 2.4. Let $X_1, X_2, \ldots, X_n$ be i.i.d. uniform over $[-1, 1]$, and let $0 < \epsilon < \tfrac12$. Suppose that $n/2 \ge C \ln \epsilon^{-1}$. Then, except with probability bounded by
\[
\exp\Bigl( -\frac{(n/2 - C \ln \epsilon^{-1})^2}{2n} \Bigr) ,
\]
all values in $[-\tfrac12, \tfrac12]$ have admissible $2\epsilon$-approximations.

Proof. This follows immediately from Lemmas 2.1 and 2.3. $\blacksquare$

By omitting the condition about admissibility, and noting that the theorem is trivial for $\epsilon > \tfrac12$, we have

Corollary 2.5. Let $X_1, X_2, \ldots, X_n$ be i.i.d. uniform over $[-1, 1]$, and let $\epsilon \ge e^{-n/2C}$ be given. Then, except with probability bounded by
\[
\exp\Bigl( -\frac{(n/2 - C \ln \epsilon^{-1})^2}{2n} \Bigr) ,
\]
we have
\[
\forall\, z \in \bigl[-\tfrac12, \tfrac12\bigr], \ \exists\, S \subseteq \{1, 2, \ldots, n\} \text{ such that }
\Bigl| z - \sum_{i \in S} X_i \Bigr| \le 2\epsilon .
\]

Now, define the $[a, b]$-subset-sum gap of $X_1, X_2, \ldots, X_n$ to be the smallest value of $2\epsilon$ such that each $z \in [a, b]$ can be approximated to within $2\epsilon$ by summing some sublist of the $X_i$.

Theorem 2.6. The expected value of the $[-\tfrac12, \tfrac12]$-subset-sum gap for $n$ variables $X_1, X_2, \ldots, X_n$ distributed uniformly over $[-1, 1]$ is at most
\[
2 e^{-n/2C} \Bigl( 1 + (2\pi n)^{1/2}\, C^{-1} e^{n/2C^2} \Bigr)
= \exp\Bigl( -\frac{C - 1}{2 C^2}\, n + o(n) \Bigr) .
\]

Proof. Let $2\epsilon$ be the random variable (depending on $X_1, X_2, \ldots, X_n$) giving the value of the $[-\tfrac12, \tfrac12]$-subset-sum gap, and define $\epsilon_0 = e^{-n/2C}$, i.e.,
\[
\frac{n}{2} = C \ln \epsilon_0^{-1} . \tag{2.19}
\]
Now using Corollary 2.5 we can write
\[
\begin{aligned}
E[\epsilon] &= \int_0^{\infty} \Pr\{ \epsilon \ge z \}\, dz \\
&\le \epsilon_0 + \int_{\epsilon_0}^{\infty} \Pr\{ \epsilon \ge z \}\, dz \\
&\le \epsilon_0 + \int_{\epsilon_0}^{\infty} e^{-(n/2 - C \ln z^{-1})^2 / 2n}\, dz . \tag{2.20}
\end{aligned}
\]

To evaluate the integral on the right side we make the substitution $z = \epsilon_0 u$ to obtain
\[
\begin{aligned}
\int_{\epsilon_0}^{\infty} e^{-(n/2 - C \ln z^{-1})^2 / 2n}\, dz
&= \int_{1}^{\infty} e^{-(n/2 - C \ln (\epsilon_0 u)^{-1})^2 / 2n}\, \epsilon_0\, du \\
&= \epsilon_0 \int_{1}^{\infty} e^{-(n/2 - C \ln \epsilon_0^{-1} - C \ln u^{-1})^2 / 2n}\, du \\
&= \epsilon_0 \int_{1}^{\infty} e^{-(C \ln u)^2 / 2n}\, du \qquad \text{by (2.19)} \\
&\le \epsilon_0 (2\pi n)^{1/2}\, C^{-1} e^{n/2C^2} . \tag{2.21}
\end{aligned}
\]
(See Appendix.) Substituting (2.21) into (2.20) results in the bound on $E[2\epsilon]$ appearing in the Theorem. $\blacksquare$
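As a sanity check on Theorem 2.6 (our experiment, not part of the paper), the $[-\tfrac12, \tfrac12]$-subset-sum gap can be computed exactly for small $n$ by enumerating all $2^n$ subset sums; averaging over random trials gives a rough estimate of its expectation, which should shrink rapidly as $n$ grows.

import random
from bisect import bisect_left

def subset_sum_gap(xs, lo=-0.5, hi=0.5):
    # Exact [lo, hi]-subset-sum gap: the largest distance from a point of [lo, hi]
    # to its nearest subset sum (enumerates all 2^n subset sums).
    sums = [0.0]
    for x in xs:
        sums += [s + x for s in sums]
    sums.sort()
    def dist(z):
        i = bisect_left(sums, z)
        cands = []
        if i < len(sums):
            cands.append(abs(sums[i] - z))
        if i > 0:
            cands.append(abs(sums[i - 1] - z))
        return min(cands)
    # Distance to the nearest subset sum is piecewise linear in z, so its maximum over
    # [lo, hi] is attained at lo, at hi, or at a midpoint of two consecutive sums.
    gap = max(dist(lo), dist(hi))
    for a, b in zip(sums, sums[1:]):
        if lo < (a + b) / 2 < hi:
            gap = max(gap, (b - a) / 2)
    return gap

def estimate_expected_gap(n, trials=50, seed=0):
    rng = random.Random(seed)
    return sum(subset_sum_gap([rng.uniform(-1, 1) for _ in range(n)])
               for _ in range(trials)) / trials

for n in (8, 12, 16):
    print(n, estimate_expected_gap(n))   # estimates shrink rapidly as n grows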

3. GENERALIZATIONS
Note that the results of the previous section say not only that a particular $z \in [-\tfrac12, \tfrac12]$ is likely to be near some subset sum of $X_1, X_2, \ldots, X_n$, but in fact that it is likely that for all $z \in [-\tfrac12, \tfrac12]$ some subset sum of $X_1, X_2, \ldots, X_n$ is near $z$. This makes it easy to prove a variety of corollaries showing that related quantities have exponentially small expectation.

First we note that we can easily expand the range of values having good approximations to an interval much larger than $[-\tfrac12, \tfrac12]$.

Corollary 3.1. Given any $\xi > 0$, there exists a $c > 0$ such that the expected value of the $[-(1 - \xi) n/4,\ (1 - \xi) n/4]$-subset-sum gap for $n$ variables $X_1, X_2, \ldots, X_n$ distributed uniformly over $[-1, 1]$ is $O(e^{-cn})$.
Proof. Let $\xi' = \xi/2$ and consider two subsets of the random variables:
\[
A = \{ X_1, X_2, \ldots, X_{\lceil \xi' n \rceil} \}
\qquad \text{and} \qquad
B = \{ X_{\lceil \xi' n \rceil + 1}, X_{\lceil \xi' n \rceil + 2}, \ldots, X_n \} .
\]
Let $\epsilon$ be the $[-\tfrac12, \tfrac12]$-subset-sum gap of $A$; by Theorem 2.6 we know that $E[\epsilon]$ is exponentially small. By a straightforward application of a Hoeffding bound, we can establish that, except with exponentially small probability, the lowest subset sum achievable from $B$ is less than $-(1 - \xi) n/4$ and the highest subset sum achievable from $B$ is at least $(1 - \xi) n/4$. But since the range of the $X_i$ is $[-1, 1]$, if we look at all subset sums achievable from $B$ in sorted order, they cannot be more than a distance of 1 apart. Thus, except with exponentially small probability, we can approximate any $z \in [-(1 - \xi) n/4,\ (1 - \xi) n/4]$ to within $\tfrac12$ from $B$, and then to within $\epsilon$ by fine-tuning the approximation using elements of $A$. $\blacksquare$

Note that the constant $c$ may become quite small as $\xi$ approaches 0. Also note that one could not hope to improve the range of approximable numbers substantially, since the expected sum of all of the positive (resp., negative) $X_i$ is $n/4$ (resp., $-n/4$).
Now define the $[a, b]$-partition gap of $X_1, X_2, \ldots, X_n$ to be the smallest value of $2\epsilon$ such that each $z \in [a, b]$ can be approximated to within $2\epsilon$ by a sum of the form
\[
\sum_{i=1}^{n} \gamma_i X_i \qquad \text{for } \gamma_i \in \{-1, 1\} . \tag{3.1}
\]

Corollary 3.2. Given any $\xi > 0$, there exists a $c > 0$ such that the expected value of the $[-(1 - \xi) n/2,\ (1 - \xi) n/2]$-partition gap for $n$ variables $X_1, X_2, \ldots, X_n$ distributed uniformly over $[-1, 1]$ is $O(e^{-cn})$.
Proof. Let $\xi' = \xi/3$ and consider two subsets of the random variables:
\[
A = \{ X_1, X_2, \ldots, X_{\lceil \xi' n \rceil} \}
\qquad \text{and} \qquad
B = \{ X_{\lceil \xi' n \rceil + 1}, X_{\lceil \xi' n \rceil + 2}, \ldots, X_n \} .
\]
Let $\epsilon$ be the $[-\tfrac12, \tfrac12]$-subset-sum gap of $A$; by Theorem 2.6 we know that $E[\epsilon]$ is exponentially small. By setting
\[
\gamma_i =
\begin{cases}
1 & \text{for } X_i \ge 0 \\
-1 & \text{for } X_i < 0
\end{cases}
\]
and using a Hoeffding bound, we can establish that, except with exponentially small probability, the highest partition difference achievable from $B$ is at least $(1 - 2\xi') n/2$; similarly, except with exponentially small probability, the lowest (signed) partition difference achievable from $B$ is less than $-(1 - 2\xi') n/2$. But since the range of the $X_i$ is $[-1, 1]$, if we look at all partition differences achievable from $B$ in sorted order, they cannot be more than a distance of 2 apart. Thus, except with exponentially small probability, we can approximate any $z \in [-(1 - 2\xi') n/2,\ (1 - 2\xi') n/2]$ to within 1 from $B$. Except with exponentially small probability we also have
\[
\Bigl| \sum_{i \in A} X_i \Bigr| \le \xi' n / 2 ,
\]
in which case we can also approximate any $z \in [-(1 - 3\xi') n/2,\ (1 - 3\xi') n/2] = [-(1 - \xi) n/2,\ (1 - \xi) n/2]$ to within 1 (by selecting values for $\gamma_i$ for $i \in B$) by a sum of the form
\[
\sum_{i \in B} \gamma_i X_i - \sum_{i \in A} X_i .
\]
Assume that we now fix $z$ and the corresponding values of $\gamma_i$ for $i \in B$, and let
\[
z' = z - \sum_{i \in B} \gamma_i X_i + \sum_{i \in A} X_i \ \in [-1, 1] . \tag{3.2}
\]
Since $A$ has a $[-\tfrac12, \tfrac12]$-subset-sum gap of $\epsilon$, and $|z'| \le 1$, we can choose values for $\delta_i \in \{0, 1\}$ (for $i \in A$) so that
\[
\Bigl| z' - 2 \sum_{i \in A} \delta_i X_i \Bigr| \le 2\epsilon .
\]
Letting $\gamma_i = 2 \delta_i - 1$ (for $i \in A$), this means there are $\gamma_i \in \{-1, 1\}$ (for $i \in A$) such that
\[
\Bigl| z' - \sum_{i \in A} (\gamma_i + 1) X_i \Bigr| \le 2\epsilon . \tag{3.3}
\]
Substituting (3.2) into (3.3) gives
\[
\begin{aligned}
2\epsilon &\ge \Bigl| z - \sum_{i \in B} \gamma_i X_i + \sum_{i \in A} X_i - \sum_{i \in A} (\gamma_i + 1) X_i \Bigr| \\
&= \Bigl| z - \sum_{i \in B} \gamma_i X_i - \sum_{i \in A} \gamma_i X_i \Bigr| \\
&= \Bigl| z - \sum_{i=1}^{n} \gamma_i X_i \Bigr| ,
\end{aligned}
\]
giving us the desired approximation for $z$. $\blacksquare$
These results can easily be generalized to a much larger class of distributions. Let $U(a, b)$ denote the uniform distribution over $[a, b]$. Say that a distribution $G$ contains some uniform distribution if there exists a distribution $G_1$ and constants $\alpha \in (0, 1]$, $c$, and $h > 0$ such that
\[
G = (1 - \alpha)\, G_1 + \alpha\, U(c - h,\ c + h) .
\]
If in particular $c = 0$, say the distribution contains some uniform distribution centered at 0.
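As a concrete illustration (our example, not taken from the paper), any distribution whose density is bounded below by some $m > 0$ on an interval $[c - h,\ c + h]$ contains a uniform distribution in this sense, with $\alpha = 2hm$. For instance, the triangular density $g(x) = 1 - |x|$ on $[-1, 1]$ satisfies $g \ge \tfrac12$ on $[-\tfrac12, \tfrac12]$, so
\[
g(x) = \tfrac12 \cdot 1_{[-1/2,\, 1/2]}(x) + \tfrac12\, g_1(x) ,
\qquad
g_1(x) = 2 \bigl( g(x) - \tfrac12\, 1_{[-1/2,\, 1/2]}(x) \bigr) \ge 0 ,
\]
i.e., $G = \tfrac12\, U(-\tfrac12, \tfrac12) + \tfrac12\, G_1$ with $c = 0$, $h = \tfrac12$, and $\alpha = \tfrac12$; this distribution therefore contains a uniform distribution centered at 0.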


Corollary 3.3. Let $X_1, X_2, \ldots, X_n$ be i.i.d. bounded random variables. Suppose that the distribution of $X_1$ contains some uniform distribution. Let
\[
\mu_- = E[ 1_{X \le 0}\, X ] , \qquad
\mu_+ = E[ 1_{X > 0}\, X ] , \qquad \text{and} \qquad
\mu_{\mathrm{abs}} = E[\, |X|\, ] = \mu_+ - \mu_- .
\]
(Note that $\mu_- \le 0$.) Finally, choose any $\xi > 0$. Then both the expected value of the $[(\mu_- + \xi) n,\ (\mu_+ - \xi) n]$-subset-sum gap and the expected value of the $[(-\mu_{\mathrm{abs}} + \xi) n,\ (\mu_{\mathrm{abs}} - \xi) n]$-partition gap for $X_1, X_2, \ldots, X_n$ are exponentially small.
Proof. First consider the partition gap. Let the support of $X_1$ be contained in $[-d, d]$, and let $\xi' = \xi/(2d)$. Partition the variables into two sets
\[
A = \{ X_1, X_2, \ldots, X_{2 \lceil \xi' n / 2 \rceil} \}
\qquad \text{and} \qquad
B = \{ X_{2 \lceil \xi' n / 2 \rceil + 1}, X_{2 \lceil \xi' n / 2 \rceil + 2}, \ldots, X_n \} .
\]
First consider the variables in $A$. Recalling that the distribution of these variables contains some uniform distribution, by definition we can find constants $\alpha > 0$, $c$, and $h > 0$, and a distribution $G_1$, such that the variables in $A$ can be considered to have been generated as follows: flip a biased coin which comes up heads with probability $\alpha$. If it comes up heads, return a uniform draw from $[c - h,\ c + h]$; if it comes up tails, return a value chosen according to the distribution $G_1$. Partition $A$ as $A_u \cup A_G$, where the variables in $A_u$ correspond to heads and those in $A_G$ correspond to tails. Then by a Hoeffding bound, except with exponentially small probability, we have
\[
| A_u | \ge \alpha \xi' n / 2 + 1 . \tag{3.4}
\]
If $|A_u|$ is odd, move the last variable from $A_u$ to $A_G$, so that $|A_u|$ becomes even.
Finally consider the variables in $A_u$, which by (3.4) we may index as $X_1, X_2, \ldots, X_{2k}$ with $2k \ge \alpha \xi' n / 2$. As in [Tsai92], we first perform a preprocessing step in which we difference these in pairs to obtain
\[
X_1 - X_2 ,\ X_3 - X_4 ,\ X_5 - X_6 ,\ \ldots ,\ X_{2k-1} - X_{2k} . \tag{3.5}
\]
This corresponds to deciding that the differenced variables in each pair will appear on opposite sides of the partition. Note that each of these differences has a triangular distribution centered at 0. By a resampling argument like that in [KK82], we can partition these differences into two sets $D_u$ and $D_o$, such that the variables in $D_u$ have a uniform distribution and, except with exponentially small probability (by a Hoeffding bound), $|D_u| \ge k/3 = \Theta(n)$. By Corollary 3.2, the $[-2d, 2d]$-partition gap of the values in $D_u$, say $\epsilon$, has an exponentially small expectation.
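One way to see that such a resampling step is possible (a sketch of our own filling in this step, with illustrative constants; [KK82] should be consulted for the argument actually used there): the difference of two independent $U(c - h,\ c + h)$ draws has the triangular density
\[
f_{\triangle}(x) = \frac{2h - |x|}{4 h^2} \quad (|x| \le 2h),
\qquad
f_{\triangle}(x) \ \ge\ \frac{h}{4 h^2} \ =\ \frac12 \cdot \frac{1}{2h}
\quad \text{for } |x| \le h ,
\]
so the difference distribution can be written as the mixture $\tfrac12\, U(-h, h) + \tfrac12\, G'$ for some distribution $G'$. Each difference can therefore independently be placed in $D_u$ (as a genuinely uniform draw) with probability $\tfrac12$, and a Hoeffding bound gives $|D_u| \ge k/3$ except with exponentially small probability.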
By another application of the Hoeffding bound, we can conclude that, except with exponentially small probability, the sum of the absolute values of the variables in set $B$ is at least $(\mu_{\mathrm{abs}} - \xi) n$. If so, then since all values in $B \cup A_G \cup D_o$ lie in $[-2d, 2d]$, the $[-(\mu_{\mathrm{abs}} - \xi) n,\ (\mu_{\mathrm{abs}} - \xi) n]$-partition gap of $B \cup A_G \cup D_o$ must be at most $2d$.

Thus, much as before, we can approximate any desired value in $[-(\mu_{\mathrm{abs}} - \xi) n,\ (\mu_{\mathrm{abs}} - \xi) n]$ to within $2d$ using variables in $B \cup A_G \cup D_o$, and then to within $\epsilon$ using the variables in $D_u$.

A fairly similar proof holds for the case of the subset-sum gap. This time, for the variables in $A_u$, for each $i$ we include $X_{2i-1}$ in the sum and then decide whether or not to include $X_{2i} - X_{2i-1}$; this is equivalent to deciding whether to include $X_{2i-1}$ or $X_{2i}$ in the sum. We omit the details. $\blacksquare$
APPENDIX

This appendix gives the details of a few omitted computations, in the hope that this may save the interested reader time.

For verifying Case 1 of Lemma 2.2, we use the following simple observation, letting $f = \psi$, $x = p_k$, $u = Z_{k+1} (1 - p_k) p_k$, and $u_0 = p_k$.

Observation A.1. Suppose that $f''$ exists and is negative over $[x,\ x + u_0]$. Then for any $u$ with $0 < u \le u_0$, we have
\[
\frac{f(x + u) - f(x)}{u} \ \ge\ \frac{f(x + u_0) - f(x)}{u_0} .
\]
Then we use the fact that for $0 \le x < 1/2$, we have
\[
\ln \frac{1 - x}{1 - 2x}
= \int_{1 - 2x}^{1 - x} \frac{dz}{z}
\ \ge\ \int_{1 - 2x}^{1 - x} \frac{dz}{1 - x}
= \frac{x}{1 - x} ,
\]
letting $x = p_k$.
For verifying (2.21), note that if we let $x = e^u$, so $dx = e^u\, du$, then
\[
\int_1^{\infty} e^{-a \ln^2 x}\, dx
= \int_0^{\infty} e^{-a u^2} e^u\, du
\ \le\ \int_{-\infty}^{\infty} e^{-a u^2 + u}\, du
= (\pi / a)^{1/2}\, e^{1/4a} .
\]
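For completeness, the last equality is the standard Gaussian integral after completing the square (our added step):
\[
-a u^2 + u = -a \Bigl( u - \frac{1}{2a} \Bigr)^2 + \frac{1}{4a} ,
\qquad
\int_{-\infty}^{\infty} e^{-a ( u - \frac{1}{2a} )^2}\, du = (\pi / a)^{1/2} .
\]
Applying this with $a = C^2 / 2n$ gives exactly the factor $(2\pi n)^{1/2}\, C^{-1} e^{n/2C^2}$ used in (2.21).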

ACKNOWLEDGMENTS

It is a pleasure to thank Ed Coffman, Brad Hutchings, David Johnson, Andy Odlyzko, and the referees for comments on an earlier manuscript and/or comments on the related literature.

REFERENCES

[CL91]    E. G. Coffman, Jr. and George S. Lueker, Probabilistic Analysis of Packing and Partitioning Algorithms, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, Inc., New York, NY, 1991.

[GJ79]    Michael R. Garey and David S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, New York, 1979.

[GMS84]   A. V. Goldberg and A. Marchetti-Spaccamela, On finding the exact solution of a zero-one knapsack problem, Proceedings of the 16th Annual ACM Symposium on Theory of Computing, Washington, D.C., May 1984, pp. 359-368.

[Hoef63]  W. Hoeffding, Probability inequalities for sums of bounded random variables, J. Amer. Statist. Assoc. 58, 13-30 (1963).

[Karp72]  Richard M. Karp, Reducibility among combinatorial problems, in Complexity of Computer Computations, Raymond E. Miller and James W. Thatcher, Eds., Plenum Press, New York, 1972, pp. 85-103.

[KK82]    Narendra Karmarkar and Richard M. Karp, The differencing method of set partitioning, Technical Report UCB/CSD 82/113, Computer Science Division (EECS), University of California, Berkeley, December 1982.

[KKLO86]  Narendra Karmarkar, Richard M. Karp, George S. Lueker, and Andrew M. Odlyzko, Probabilistic analysis of optimum partitioning, J. Appl. Probab., 23(3), 626-645 (1986).

[Luek82]  G. S. Lueker, On the average difference between the solutions to linear and integer knapsack problems, in Applied Probability-Computer Science: The Interface, Ralph L. Disney and Teunis J. Ott, Eds., Vol. I, Birkhauser, Boston, 1982, pp. 489-504.

[PIA78]   Yehoshua Perl, Alon Itai, and Haim Avni, Interpolation search - a log log n search, Commun. ACM, 21(7), 550-553 (1978).

[Tsai92]  Li-Hui Tsai, Asymptotic analysis of an algorithm for balanced parallel processor scheduling, SIAM J. Comput., 21(1), 59-64 (1992).

[Yaki96]  Benjamin Yakir, The differencing algorithm LDM for partitioning: A proof of Karp's conjecture, Math. Oper. Res. 21(1), 85-99 (1996).
