Integer Programming
Chris Johnson
1 Background
1.1 Integer Programming
Given an objective function of n variables x1, . . . , xn of the form

f(x1, . . . , xn) = Σ_{i=1}^{n} a_i x_i + c

and a set of constraints that can be written in the form Ax ≤ b (where x is the column
vector of the variables x1, . . . , xn), the problem of searching for a minimum (or maximum)
of f (x1 , . . . , xn ) is known as a linear program, and can be solved efficiently using algorithms
such as Dantzig’s simplex. However, if we were to also impose that each xi must be an integer
value, we stumble into the world of more difficult problems known as integer programs [4].
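As a concrete illustration (my own, not from the original text), the gap between a linear program and its integer-restricted counterpart can be seen with SciPy's solvers, assuming SciPy 1.9 or later is available. The instance here (maximize x1 + x2 subject to 2x1 + 2x2 ≤ 5) is a made-up toy example:

```python
# Sketch: solve the same problem as a linear program and as an
# integer program.  SciPy minimizes, so we negate the objective.
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds

c = np.array([-1.0, -1.0])      # minimize -(x1 + x2), i.e. maximize x1 + x2
A = [[2.0, 2.0]]
b = [5.0]

# Linear program: non-integer solutions are allowed.
lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
print(-lp.fun)                  # 2.5: the relaxed optimum is fractional

# Integer program: same constraints, but x1 and x2 must be integers.
ip = milp(c, constraints=LinearConstraint(A, ub=b),
          integrality=np.ones(2), bounds=Bounds(0, np.inf))
print(-ip.fun)                  # 2.0: the best achievable integer value
```

Note that the integer optimum is not simply the relaxed optimum rounded; in general the two problems must be attacked with different machinery.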
A decision problem belongs to the class P if it can be solved in polynomial time on a
deterministic machine, and to the class N P if a proposed solution can be verified in
polynomial time on a deterministic machine (equivalently, if the problem can be solved in
polynomial time on a non-deterministic machine). Clearly P ⊆ N P, though whether P ⊂ N P
or P = N P is still an open question.
Given that it is not yet clear if P = N P or not, complexity theorists have formulated
another class of the “hardest” problems in N P known as N P-complete. These are problems
in N P that every other problem in N P (and thus every problem in P) can be transformed
into in (deterministic) polynomial time [2]. Classic examples of such problems include
the satisfiability problem and (the decision-problem version of) the travelling salesman
problem, while more modern examples include the decision-problem versions of games such
as Minesweeper and Tetris.
Of course, there are problems that are even harder than the problems in N P. Games
such as chess and Go, for instance, are known to reside in EXPTIME, meaning they require
exponential time on a deterministic machine. There is, however, a special class of problems
that are at least as hard as any problem in N P.
Suppose we have a problem Π that may or may not be in N P. If every problem in N P
can be transformed into Π in polynomial time, then we say that Π is N P-hard. It is in this
sense we mean that Π is at least as hard as any problem in N P [2].
(x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x̄2) ∧ (x2 ∨ x̄3) ∧ (x3 ∨ x̄1) ∧ (x̄1 ∨ x̄2 ∨ x̄3)
Using the convention described above, this Boolean formula becomes the following 0-1
program.
maximize y
x1 + x2 + x3 ≥ y
x1 + 1 − x2 ≥ 1
x2 + 1 − x3 ≥ 1
x3 + 1 − x1 ≥ 1
1 − x1 + 1 − x2 + 1 − x3 ≥ 1
x1 , x2 , x3 ≤ 1
x1 , x2 , x3 ≥ 0
x1 , x2 , x3 ∈ Z
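The 0-1 program above is small enough to check by hand, or with a brute-force sketch like the following (my own illustration, plain Python): enumerate all eight 0-1 assignments, keep those satisfying the four hard clause constraints, and report the largest attainable y.

```python
# Brute-force the 0-1 program above: try every assignment of
# x1, x2, x3 in {0, 1}, keep the feasible ones, and maximize y.
from itertools import product

best_y, best_x = -1, None
for x1, x2, x3 in product((0, 1), repeat=3):
    feasible = (x1 + 1 - x2 >= 1 and
                x2 + 1 - x3 >= 1 and
                x3 + 1 - x1 >= 1 and
                (1 - x1) + (1 - x2) + (1 - x3) >= 1)
    if not feasible:
        continue
    y = min(1, x1 + x2 + x3)   # largest y in {0, 1} with x1 + x2 + x3 >= y
    if y > best_y:
        best_y, best_x = y, (x1, x2, x3)

print(best_y, best_x)          # 0 (0, 0, 0)
```

The optimum turns out to be y = 0 rather than 1, i.e. no 0-1 assignment satisfies all five clauses at once: this particular formula is unsatisfiable, and the 0-1 program detects that.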
Thus every integer program can be written as a 0-1 program and vice versa. Since
the satisfiability problem can be written as a 0-1 program, 0-1 programming is N P-hard,
but since 0-1 programming and integer programming are polynomially equivalent, integer
programming is also N P-hard.
3 Algorithms for Integer Programming
Given that integer programming is N P-hard, how can we efficiently tackle integer
programming problems? There are two approaches we will now consider, both of which attack
the problem in the same basic way: solve the relaxed version of the problem (that is, the
linear program with the same constraints, but allowing non-integer solutions), then continue
to add constraints until we arrive at an integer solution.
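As a rough sketch of the first of these ideas, branch and bound, here is a minimal implementation (my own illustration, not from the original text) built on SciPy's LP solver: solve the relaxation, and if some variable takes a fractional value v, branch into two subproblems, one with that variable bounded above by ⌊v⌋ and one bounded below by ⌈v⌉.

```python
# A minimal branch-and-bound sketch for maximizing c @ x subject to
# A_ub @ x <= b_ub with x integer, built on SciPy's linprog.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Maximize c @ x over the integer points of {x : A_ub @ x <= b_ub}."""
    best = {"obj": -math.inf, "x": None}

    def solve(bnds):
        # Solve the LP relaxation (linprog minimizes, so negate c).
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub,
                      bounds=bnds, method="highs")
        if not res.success:
            return                      # infeasible subproblem: prune
        obj = -res.fun
        if obj <= best["obj"] + 1e-9:
            return                      # bound: cannot beat the incumbent
        # Find a variable with a fractional value, if any.
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > 1e-6), None)
        if frac is None:
            best["obj"], best["x"] = obj, [round(v) for v in res.x]
            return                      # integer solution: new incumbent
        v, (lo, hi) = res.x[frac], bnds[frac]
        # Branch: x_frac <= floor(v) in one child, >= ceil(v) in the other.
        left = list(bnds); left[frac] = (lo, math.floor(v))
        right = list(bnds); right[frac] = (math.ceil(v), hi)
        solve(left)
        solve(right)

    solve(list(bounds))
    return best["obj"], best["x"]

# Made-up example: maximize 5*x1 + 4*x2
# subject to 6*x1 + 4*x2 <= 24 and x1 + 2*x2 <= 6, with x1, x2 >= 0.
obj, x = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6],
                          [(0, None), (0, None)])
print(obj, x)   # optimal value 20 at x = [4, 0]
```

Here the relaxation's optimum is the fractional point (3, 1.5), so the sketch branches on x2 and eventually proves that (4, 0) is the best integer point. A cutting-plane method would instead add a new inequality at each step; it is not sketched here.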
4 Conclusion
Given that linear programming can be solved relatively efficiently in many cases, it comes
as a bit of a shock that the problem becomes considerably more difficult once we impose the
constraint that we consider only integer solutions. Nevertheless, there is no avoiding the
fact that there are applications where non-integer solutions are undesirable, and so we must
consider how best to tackle the problem of integer programming.
In this paper we have explained why integer programming is N P-hard, and thus likely
to remain intractable for the foreseeable future. We have also briefly discussed two
algorithms for solving integer programming problems, both of which work by repeatedly
solving linear programs, relaxed versions of the original problem, modifying the problem
until we arrive at an integer solution.
Either algorithm will eventually find the solution to the integer program, but there is no
guarantee on the amount of time it will take to do so. With branch and bound, if we do not
“luck out” by finding integer solutions within the first few iterations, which would let us
prune parts of the tree, we could end up building essentially the entire enumeration tree.
Similarly, depending on the “shape” of our feasibility region, the cutting plane algorithm
may require numerous iterations until we have imposed enough constraints (cut off enough of
the plane) to arrive at an integer solution.
References
[1] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the
Third Annual ACM Symposium on Theory of Computing, pages 151–158, 1971.
[2] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-Completeness, chapter 2. W. H. Freeman and Company, 1979.
[3] George L. Nemhauser and Laurence A. Wolsey. Integer and Combinatorial Optimization,
chapter 14. John Wiley and Sons, 1988.