
Linear programming
Article by:
Johnson, Ellis L. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia.
Cottle, Richard W. Department of Management Science and Engineering, Huang Engineering Center, Stanford University, Stanford, California.
Last reviewed: 2014
DOI: https://doi.org/10.1036/1097-8542.384200

Contents
General problem
Transportation problem
Dual problem
Simplex method
Interior method
Integer programming
Computation
Applications
Related Primary Literature
Additional Reading

An area of mathematics concerned with the minimization (or maximization) of a linear function of several variables
subject to linear equations and inequalities. Linear programming developed from three main areas: transportation
problems, game theory, and input-output models. Work on all these areas took place before and during World War II, with
independent major contributions by L. V. Kantorovich, J. von Neumann, W. Leontief, and T. C. Koopmans. The subject in its
present form was created in 1947, when G. B. Dantzig defined the general model and proposed the first, and still the most
widely used, algorithm for its solution: the simplex method.

Although the linearity assumptions restrict the class of optimization problems that can be directly modeled in this way, many
algorithms for extensions of linear programming, such as problems with nonlinear or integer restrictions, involve successively
solving linear programming problems. In practice, the simplex method generally finds optimal solutions efficiently. In 1972, however, a paper by V. Klee and G. J. Minty exhibited a class of linear programming problems on which the standard simplex algorithm requires an exponential number of iterations to find the optimal solution. L. G. Khachiyan's 1979 ellipsoid method, the first algorithm for linear programming with a polynomial bound on its running time, provided an alternative to the simplex method and drew the attention of computer scientists to the problem; nonlinear methods were subsequently refocused on solving the linear programming problem. Work by N. K. Karmarkar
announced in 1984 attracted much attention because of claims of the superior performance of a new interior method. Initially,
the relative merits of Karmarkar's method and the simplex method were hotly debated. Karmarkar's work stimulated
considerable activity in linear programming methodology. Other interior algorithms for linear programming were proposed.
Today it is agreed that there is a place for both types of methods. See also: Nonlinear programming (/content/nonlinear-
programming/455700); Optimization (/content/optimization/474000)

General problem
The linear programming problem is to minimize linear objective function (1) subject to restrictions (2) and (3).

$$\text{minimize}\quad z = \sum_{j=1}^{n} c_j x_j \tag{1}$$

$$\text{subject to}\quad \sum_{j=1}^{n} a_{ij} x_j = b_i, \qquad i = 1, \ldots, m \tag{2}$$

$$x_j \geq 0, \qquad j = 1, \ldots, n \tag{3}$$
The variables x1, …, xn are required to take on real values, and the coefficients aij, cj, and bi are real constants. The objective
could be to maximize rather than minimize, and among balance constraints (2) the equations could be replaced by
inequalities of the form less-than-or-equal-to or greater-than-or-equal-to. The restrictions (3) are an example of what are
called bound constraints. The n-tuples of xj's satisfying constraints (2) and (3) form a convex polyhedron, the elements of
which are called feasible solutions. The optimum value of the objective function will always be assumed at a vertex of this
polyhedron unless the objective function is unbounded in the direction of optimization. The simplex method works by moving
from vertex to vertex until one yielding the optimum value of the objective function is reached, whereas feasible interior
methods approach an optimal vertex from within the polyhedron. See also: Linear systems of equations (/content/linear-
systems-of-equations/384400)
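
The general problem (1)–(3) maps directly onto modern solvers. The following is a minimal sketch, not part of the original article, using SciPy's linprog routine on invented data (SciPy 1.6 or later, with the HiGHS solvers, is assumed):

```python
# Minimal sketch (illustrative data): solve (1)-(3), i.e. minimize c'x
# subject to Ax = b and x >= 0, with SciPy's linprog (assumes SciPy >= 1.6).
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0, 4.0])          # objective coefficients c_j in (1)
A_eq = np.array([[1.0, 1.0, 1.0],      # balance-constraint coefficients a_ij in (2)
                 [2.0, 1.0, 0.0]])
b_eq = np.array([10.0, 8.0])           # right-hand sides b_i

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x, res.fun)                  # optimal vertex (4, 0, 6) and value 36.0
```

Consistent with the vertex property described above, the optimum here is attained at the vertex x = (4, 0, 6) of the feasible polyhedron.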

Transportation problem
The transportation problem is an example of a linear programming model. In this model, there are supply amounts S1, …, SK
of a given product at supply points 1, …, K and demand amounts D1, …, DL at demand points 1, …, L. The (doubly
subscripted) variables xij are associated with a supply point i and demand point j and represent the amount to be shipped from
i to j. The constraints are xij ≥ 0 and inequalities (4),

$$\sum_{j=1}^{L} x_{ij} \leq S_i \quad (i = 1, \ldots, K), \qquad \sum_{i=1}^{K} x_{ij} \geq D_j \quad (j = 1, \ldots, L) \tag{4}$$

and the objective is to minimize Σcijxij, the total cost of shipping, where cij is the per unit cost of shipping from supply point i to
demand point j. Variants include bounds and costs on storage at the supply points and penalties for shortages at the demand
points.
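
As a concrete illustration (not from the article), a transportation model with K = 2 supply points and L = 3 demand points can be written out and solved as an ordinary linear program; the costs and amounts below are invented for the example:

```python
# Minimal sketch: the transportation problem (4) as a linear program,
# solved with SciPy's linprog (assumes SciPy >= 1.6). Data are illustrative.
import numpy as np
from scipy.optimize import linprog

supply = np.array([20.0, 30.0])          # S_i at K = 2 supply points
demand = np.array([10.0, 25.0, 15.0])    # D_j at L = 3 demand points
cost = np.array([[8.0, 6.0, 10.0],       # c_ij per-unit shipping costs
                 [9.0, 12.0, 13.0]])

K, L = cost.shape
# Row sums of x (flattened row-major) must not exceed each supply S_i.
A_sup = np.kron(np.eye(K), np.ones(L))
# Column sums must cover each demand D_j; written as -sum_i x_ij <= -D_j.
A_dem = -np.kron(np.ones(K), np.eye(L))

A_ub = np.vstack([A_sup, A_dem])
b_ub = np.concatenate([supply, -demand])

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub,
              bounds=(0, None), method="highs")
print(res.x.reshape(K, L), res.fun)      # optimal shipment plan and total cost
```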

Dual problem
For every linear programming problem with objective function (1) and constraints (2) and (3), there is a dual linear program in
variables y1, …, ym, one variable for each equation in (2). The objective is to maximize function (5), subject to linear
inequalities (6).

$$\text{maximize}\quad w = \sum_{i=1}^{m} b_i y_i \tag{5}$$

$$\sum_{i=1}^{m} a_{ij} y_i \leq c_j, \qquad j = 1, \ldots, n \tag{6}$$

In this form, the yi's are not constrained to be nonnegative, although the problem can be stated so that the dual is more
symmetric by changing the equations in (2) to greater-than-or-equal-to inequalities. The duality theorem states that the
maximum value of function (5) is equal to the minimum value of function (1), provided both programs have feasible solutions.
The dual variables are related to Lagrange multipliers, and can be interpreted as prices on resources constrained by
restrictions (2). The duality theorem is mathematically equivalent to the result that any point not in the convex polyhedron
defined by constraints (2) can be separated from it by a hyperplane. The original problem with constraints (2) and (3) is
referred to as the primal problem, and the xj's are called primal variables.
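
The duality theorem is easy to check numerically. This sketch (not from the article, with illustrative data) solves a primal problem of form (1)–(3) and its dual (5)–(6) and compares optimal values; SciPy's linprog is assumed:

```python
# Minimal sketch: verify that the dual optimum (5) equals the primal
# optimum (1) on a small instance (assumes SciPy >= 1.6 for "highs").
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0, 4.0])
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([10.0, 8.0])

# Primal: min c'x  s.t.  Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# Dual: max b'y  s.t.  A'y <= c, y free; linprog minimizes, so negate b.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")

print(primal.fun, -dual.fun)   # both 36.0: the duality theorem in action
print(dual.x)                  # prices y = (4.0, -0.5) on the two resources
```

The dual solution returned here plays exactly the role of the resource prices described above.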

Simplex method
The simplex method in its usual form is a primal feasible algorithm; that is, it maintains constraints (2) and (3) while working
toward optimality. The method must find (or be given) a vertex solution of the constraints, and one way to do this is to initially
solve the problem of finding a solution to these constraints while ignoring the original objective function. Such a problem is
itself a different, but related, linear programming problem, called the phase-one problem. Then, the minimization of the original
objective function becomes phase two. The algorithm in phase two maintains primal feasibility, that is, constraints (2) and (3),
while working toward satisfying inequalities (6) in such a way that once these inequalities are satisfied the solutions x and y to
the primal and dual problems will be optimal with regard to these respective problems. In theory, this procedure requires the
adoption of measures to prevent the infinite repetition of a cycle of (nonoptimal) vertex solutions.
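
The two-phase scheme can be illustrated at the modeling level. The sketch below (not the article's own code; illustrative data) poses each phase as a separate LP handed to SciPy, whereas a production simplex code would warm-start phase two from the basis found in phase one:

```python
# Minimal sketch of the two-phase idea: phase one minimizes the sum of
# artificial variables s to find a feasible point of Ax = b, x >= 0
# (assumes b >= 0; flip the signs of rows with negative b_i otherwise).
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0, 4.0])
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([10.0, 8.0])
m, n = A.shape

# Phase one: min 1's  s.t.  Ax + s = b, x >= 0, s >= 0.
# Optimal value 0 means the original constraints are feasible.
c1 = np.concatenate([np.zeros(n), np.ones(m)])
A1 = np.hstack([A, np.eye(m)])
phase1 = linprog(c1, A_eq=A1, b_eq=b, bounds=(0, None), method="highs")
assert phase1.fun < 1e-9, "original constraints are infeasible"

# Phase two: minimize the original objective over the original constraints.
phase2 = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(phase2.x, phase2.fun)
```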

Interior method
Interior methods were proposed at the same time as the simplex method, but were not pursued because they did not prove to
be faster than the simplex method. The first promise of success for nonlinear methods was the proof of a polynomial upper
bound for the ellipsoid method of Khachiyan in 1979; that is, this method was guaranteed to find a solution in a time bounded
by a polynomial function of the data. Khachiyan's breakthrough was of great theoretical interest, but the method was found to
be much slower, on the average, than the simplex method, at least for problems of the size encountered in practice.
Karmarkar's method in 1984 proved to be competitive with the simplex method and perhaps better on large problems and on
certain classes of problems. Karmarkar also gave a smaller polynomial bound. The interior methods stimulated renewed
research. In particular, bounds have been improved, variants developed, prior work (in particular, by Russian mathematicians)
rediscovered, and computational experimentation carried out. The basic interior methods for linear programming are affine
scaling, barrier methods (of which Karmarkar's method is a type), path-following, predictor-corrector, and primal-dual
methods, the latter being, perhaps, the most popular sort.
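
Of these, affine scaling is the simplest to state. Below is a bare-bones sketch (not from the article; no safeguards, dense linear algebra, illustrative data) of the primal affine-scaling iteration for minimizing c'x subject to Ax = b, x > 0:

```python
# Minimal sketch of primal affine scaling: rescale by X = diag(x), take a
# projected steepest-descent step, and stop a fixed fraction short of the
# boundary x > 0 so every iterate stays in the interior of the polyhedron.
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.9, tol=1e-8, max_iter=500):
    """x must be strictly feasible: Ax = b and x > 0."""
    for _ in range(max_iter):
        X2 = np.diag(x * x)                            # scaling matrix X^2
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)  # dual estimate
        r = c - A.T @ y                                # reduced costs
        if (r >= -tol).all() and x @ r < tol:          # dual feasible, gap ~ 0
            break
        d = -X2 @ r                                    # descent step with Ad = 0
        neg = d < 0
        if not neg.any():
            raise ValueError("problem appears unbounded")
        step = alpha * np.min(-x[neg] / d[neg])        # fraction of way to boundary
        x = x + step * d
    return x, y

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([6.0])
c = np.array([1.0, 2.0, 3.0])
x0 = np.array([2.0, 2.0, 2.0])           # strictly interior starting point
x, y = affine_scaling(A, b, c, x0)
print(x, c @ x)                          # tends to the vertex (6, 0, 0), value 6
```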

Integer programming
An important extension in practice is to require some of the xj's to take on integral values. The most common case in practice
is where the integer xj must be 0 or 1, representing decision choices such as whether to switch from production of one product
to another or whether to expand a warehouse to allow for larger throughput. Whereas linear programming solution times tend
to be less than an hour, adding the constraint that some or all of the xj's must be integral may cause the running time to be
very long. Work in the 1950s established the usefulness of linear programming in solving the so-called traveling salesman
problem. Around the same time, cutting-plane methods were shown to be a convergent process for solving general integer
programs, in which all of the variables xj are required to take on integer values. Cutting-plane methods attempt to adjoin new
inequality constraints to constraints (2) so that all integral solutions remain feasible and the optimum solution is integral. Some
linear programs have the property that the optimum vertex is already an integral solution, but in practice the main class of
such problems is the transportation problem mentioned above. See also: Decision theory (/content/decision-
theory/182500)
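
For instance (a sketch, not from the article; illustrative data, assumes SciPy 1.9 or later), a small 0–1 decision problem is stated by adding an integrality marker to an otherwise linear model:

```python
# Minimal sketch: a 0-1 knapsack-style decision problem via scipy.optimize.milp,
# which wraps the HiGHS branch-and-bound solver (assumes SciPy >= 1.9).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

values = np.array([6.0, 5.0, 4.0])   # payoff of each yes/no decision
costs = np.array([[5.0, 4.0, 3.0]])  # resource use; total budget of 10

res = milp(
    c=-values,                             # milp minimizes, so negate
    constraints=LinearConstraint(costs, ub=10.0),
    integrality=np.ones(3),                # each x_j integral ...
    bounds=Bounds(0, 1),                   # ... and in [0, 1], hence 0 or 1
)
print(res.x, -res.fun)                     # picks (1, 1, 0) for value 11.0
```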

Although cutting-plane methods offered the hope of converting integer programs to larger linear programs by adjoining more
constraints, the simpler branch-and-bound method became more common and was incorporated into commercial codes,
perhaps largely due to the fact that the important mixed 0–1 case, where some of the variables must be 0 or 1, is not yet
adequately treated by cutting-plane methods. The branch-and-bound method is simple in its original form. The linear
programming relaxation (in which the constraints are relaxed by not requiring the xj's to be integers) is solved, and a
branching procedure is carried out with regard to some 0–1 variable that is at a fractional value. Two problems are created: In
one the variable is forced to be 0 and in the other it is forced to be 1. The difficulty with the method is its explosive growth in
the number of problems to be solved. The “bound” part of branch-and-bound comes from using the objective value of the
linear programming relaxation to terminate further exploration of a problem when its objective value is worse than the
objective value of some integer solution previously found or given. In its more refined variations, branch-and-bound may
employ many heuristics to guide the search and elaborate improvements to the linear programming relaxation in order to
improve the bound given by the value of the linear programming objective function.
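
The following sketch (not from the article) implements this original form of branch-and-bound for a small 0–1 maximization problem, using SciPy's linprog for each relaxation; real codes add the heuristics and strengthened relaxations mentioned above:

```python
# Minimal sketch of branch-and-bound for max values'x s.t. costs@x <= budget,
# x_j in {0, 1}: solve the LP relaxation, prune by bound, branch on a
# fractional variable (assumes SciPy >= 1.6 for the "highs" solvers).
import numpy as np
from scipy.optimize import linprog

values = np.array([6.0, 5.0, 4.0])
costs = np.array([[5.0, 4.0, 3.0]])
budget = np.array([10.0])
n = len(values)

best_val, best_x = -np.inf, None

def branch(fixed):
    """fixed maps a variable index to 0 or 1; free variables relax to [0, 1]."""
    global best_val, best_x
    bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(n)]
    res = linprog(-values, A_ub=costs, b_ub=budget,
                  bounds=bounds, method="highs")
    if not res.success or -res.fun <= best_val:
        return                            # infeasible, or bound cannot beat incumbent
    frac = [j for j in range(n) if abs(res.x[j] - round(res.x[j])) > 1e-6]
    if not frac:                          # integral solution: new incumbent
        best_val, best_x = -res.fun, res.x.round()
        return
    j = frac[0]                           # branch: force x_j to 0, then to 1
    branch({**fixed, j: 0})
    branch({**fixed, j: 1})

branch({})
print(best_x, best_val)                   # (1, 1, 0) with value 11.0
```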

Computation
Early work on computer programs was done in the 1950s. Commercial computer codes implementing the simplex method
have been used in industry since the mid-1960s. Efficient methods for handling the structures encountered have been
developed. In particular, the matrices tend to be very sparse, that is, most (usually over 99%) of the aij's are zeroes. Efficient,
reliable methods for basis inversion and update have been the main numerical focus of code developers. Up until the early
1980s, most problems solved by commercial codes were in the range of several hundred to several thousand equations and
variables. A typical time on a mainframe computer might have been several seconds to an hour. In the 1980s, intense
development in software was begun because of changed hardware and new algorithmic developments. The hardware
changes included both larger supercomputers and more powerful workstations. Today's laptop computers can rapidly solve
problems that would have taken many hours on huge mainframe computers of the past. The algorithmic advances were
mainly the interior methods as well as improvements in the simplex method. Numerical linear algebra played an important role
in these developments. In integer programming, the main algorithmic focus has been on improving the linear programming
relaxation to give better bounds in branch-and-bound. See also: Matrix theory (/content/matrix-theory/410500);
Supercomputer (/content/supercomputer/668550)
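
Sparsity is straightforward to exploit with current tools. In this sketch (not from the article; random illustrative data), the constraint matrix is held in compressed sparse form and passed directly to the solver:

```python
# Minimal sketch: a random LP whose constraint matrix has ~1% nonzeros,
# stored sparse throughout ("highs" accepts SciPy sparse matrices).
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 200, 500
A = sp.random(m, n, density=0.01, format="csr", random_state=rng)
b = np.ones(m)
c = rng.uniform(-1.0, 1.0, n)      # mixed signs so the optimum is nontrivial

# Upper bounds keep the problem bounded even where A has empty columns.
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1), method="highs")
print(res.status, res.fun)
```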

Applications
Following the early work on codes, the petroleum industry quickly became the major user of linear programming, and still is an
important user, especially for blending models in petroleum refining. Commercial codes are used in industry and government
for a variety of applications involving planning, scheduling, distribution, manufacturing, agriculture, and so forth. In
universities, linear programming is taught in most business schools, industrial and other engineering departments, and
operations research departments, as well as some mathematics departments. The model is general enough to be useful in
the physical and social sciences. The improved computational efficiency achieved in the 1980s has gone hand-in-hand with
expanded applications, particularly in manufacturing, transportation, and finance. See also: Operations research
(/content/operations-research/470410)

Ellis L. Johnson
Richard W. Cottle

Related Primary Literature


P. Gill et al., On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method, Math. Prog., 36:183–209, 1986 DOI: https://doi.org/10.1007/BF02592025
R. Kannan and H. Narayanan, Random walks on polytopes and an affine interior point method for linear programming, Math. Oper. Res., 37(1):1–20, 2012 DOI: https://doi.org/10.1287/moor.1110.0519

D. A. Spielman and S.-H. Teng, Smoothed analysis: Why the simplex algorithm usually takes polynomial time, J. ACM, 51(3):385–463, 2004 DOI: https://doi.org/10.1145/990308.990310

M. H. Wright, The interior-point revolution in optimization: History, recent developments, and lasting consequences, Bull. Amer. Math. Soc., 42:39–56, 2005 DOI: https://doi.org/10.1090/S0273-0979-04-01040-7

Additional Reading
D. Bertsimas and J. N. Tsitsiklis, Introduction to Linear Optimization, Athena Scientific, Belmont, MA, 1997

V. Chvatal, Linear Programming, W. H. Freeman, New York, 1983

G. B. Dantzig and M. N. Thapa, Linear Programming, 2 vols., Springer, New York, 1997, 2003

M. C. Ferris, O. L. Mangasarian, and S. J. Wright, Linear Programming with MATLAB, MPS-SIAM, Philadelphia, 2007

J. P. Ignizio and T. M. Cavalier, Introduction to Linear Programming, Prentice Hall, Englewood Cliffs, NJ, 1994

G. L. Nemhauser and L. Wolsey, Integer and Combinatorial Programming, Wiley, New York, 1988

A. Schrijver, Theory of Linear and Integer Programming, Wiley, Chichester, U.K., 1986

R. J. Vanderbei, Linear Programming: Foundations and Extensions, 3d ed., Springer, New York, 2008

S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997

Y. Ye, Interior Point Algorithms: Theory and Analysis, Wiley, New York, 1997

E. V. Denardo, Linear Programming and Generalizations: A Problem-Based Introduction with Spreadsheets, Springer
Science+Business Media, New York, 2011

P. Kall and J. Mayer, Stochastic Linear Programming: Models, Theory, and Computation, 2d ed., Springer Science+Business
Media, New York, 2011

P. R. Thie and G. E. Keough, An Introduction to Linear Programming and Game Theory, 3d ed., John Wiley & Sons,
Hoboken, NJ, 2008
