Linear Programming Applications
Linear programming (LP, also called linear optimization) is a method to achieve the best
outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are
represented by linear relationships. Linear programming is a special case of mathematical
programming (also known as mathematical optimization).
More formally, linear programming is a technique for the optimization of a linear objective function,
subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope,
which is a set defined as the intersection of finitely many half spaces, each of which is defined by a
linear inequality. Its objective function is a real-valued affine (linear) function defined on this
polyhedron. A linear programming algorithm finds a point in the polytope where this function has the
smallest (or largest) value if such a point exists.
Linear programs are problems that can be expressed in canonical form as

  maximize    c^T x
  subject to  Ax ≤ b
  and         x ≥ 0,

where x is the vector of variables to be determined, c and b are vectors of known coefficients, and A is a known matrix of coefficients. The expression c^T x is the objective function, and the inequalities Ax ≤ b and x ≥ 0 are the constraints over which the objective is to be optimized.
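As a minimal illustration (not part of the original article), a problem in this canonical form can be handed to an off-the-shelf solver. The sketch below uses SciPy's linprog with arbitrary stand-in coefficients; linprog minimizes by convention, so the objective is negated to maximize.

# A minimal sketch: solving max c^T x s.t. Ax <= b, x >= 0 with SciPy.
# The numbers below are arbitrary illustrative values, not from the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])          # objective coefficients (to maximize)
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])        # constraint matrix
b = np.array([4.0, 5.0])          # constraint right-hand sides

# linprog minimizes, so negate c to turn maximization into minimization.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimizer and maximum objective value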
History
The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in
1827 published a method for solving them,[1] and after whom the method of Fourier–Motzkin
elimination is named.
In 1939 a linear programming formulation of a problem that is equivalent to the general linear
programming problem was given by the Soviet mathematician and economist Leonid Kantorovich,
who also proposed a method for solving it.[2] He developed it during World War II as a way to plan expenditures and returns so as to reduce costs for the army and to increase losses imposed on the enemy.[citation needed] Kantorovich's work was initially neglected in the USSR.[3] About the same time as
Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic
problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel prize in
economics.[1] In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution very similar to the later simplex method.[2] Hitchcock died in 1957, and the Nobel Prize is not awarded posthumously.
During 1946–1947, George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force.[4] In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases.[4] When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann immediately conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent.[4] Dantzig provided a formal proof in an unpublished report, "A Theorem on Linear Inequalities", on January 5, 1948.[3] Dantzig's work was made available to the public in 1951. In the post-war years, many industries applied it in their daily planning.
Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing
power required to test all the permutations to select the best assignment is vast; the number of
possible configurations exceeds the number of particles in the observable universe. However, it
takes only a moment to find the optimum solution by posing the problem as a linear program and
applying the simplex algorithm. The theory behind linear programming drastically reduces the
number of possible solutions that must be checked.
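To make the scale concrete: 70! permutations is roughly 10^100. Yet the assignment problem is a linear program with integral optimal vertices, so it can be solved directly. The sketch below is a small stand-in with a random 70×70 cost matrix (not Dantzig's data), using SciPy's linear_sum_assignment, a specialized assignment solver rather than the simplex method itself.

# A stand-in for Dantzig's 70-people/70-jobs example, with random costs
# rather than his original data. Enumerating all 70! assignments is
# infeasible, but the problem solves in milliseconds as an assignment LP.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.random((70, 70))               # hypothetical cost of person i doing job j

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
print("total cost:", cost[rows, cols].sum())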
The linear programming problem was first shown to be solvable in polynomial time by Leonid
Khachiyan in 1979,[5] but a larger theoretical and practical breakthrough in the field came in 1984
when Narendra Karmarkar introduced a new interior-point method for solving linear-programming
problems.[6]
Uses
Linear programming is a widely used field of optimization for several reasons. Many practical
problems in operations research can be expressed as linear programming problems.[3] Certain
special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by
solving LP problems as sub-problems. Historically, ideas from linear programming have inspired
many of the central concepts of optimization theory, such as duality, decomposition, and the
importance of convexity and its generalizations. Likewise, linear programming was heavily used in the early formation of microeconomics, and it is currently utilized in company management for problems in planning, production, transportation, and technology. Although modern management issues are ever-changing, most companies would like to maximize profits and minimize costs with limited resources. Therefore, many such issues can be characterized as linear programming problems.
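As one illustration of the special cases mentioned above, a maximum-flow problem can be written directly as a linear program, although dedicated network algorithms are usually faster in practice. The sketch below uses an invented four-node network and SciPy's general-purpose linprog; the node names and capacities are not from the article.

# A hedged sketch: a tiny maximum-flow problem posed directly as an LP.
# Edges: s->a, s->b, a->b, a->t, b->t with invented capacities 3, 2, 1, 2, 3.
import numpy as np
from scipy.optimize import linprog

caps = [3, 2, 1, 2, 3]
c = np.array([0, 0, 0, -1, -1])        # maximize flow into t (linprog minimizes)

# Flow conservation at the internal nodes a and b (inflow = outflow).
A_eq = np.array([[1, 0, -1, -1,  0],   # node a
                 [0, 1,  1,  0, -1]])  # node b
b_eq = np.zeros(2)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, cap) for cap in caps])
print("max flow:", -res.fun)           # 5.0 for these capacities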
Standard form
Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following three parts:

A linear function to be maximized
e.g. f(x1, x2) = c1x1 + c2x2

Problem constraints of the following form
e.g.
  a11x1 + a12x2 ≤ b1
  a21x1 + a22x2 ≤ b2
  a31x1 + a32x2 ≤ b3

Non-negative variables
e.g. x1 ≥ 0, x2 ≥ 0

The problem is usually expressed in matrix form, and then becomes:

  max { c^T x : Ax ≤ b, x ≥ 0 }

Other forms, such as minimization problems, problems with constraints on alternative forms, and problems involving negative variables, can always be rewritten into an equivalent problem in standard form.
Example
Suppose that a farmer has a piece of farm land, say L km2, to be planted with either
wheat or barley or some combination of the two. The farmer has a limited amount of
fertilizer, F kilograms, and pesticide, P kilograms. Every square kilometer of wheat
requires F1 kilograms of fertilizer and P1 kilograms of pesticide, while every square
kilometer of barley requires F2 kilograms of fertilizer and P2 kilograms of pesticide.
Let S1 be the selling price of wheat per square kilometer, and S2 be the selling price
of barley. If we denote the area of land planted with wheat and barley
by x1 and x2 respectively, then profit can be maximized by choosing optimal values
for x1 and x2. This problem can be expressed with the following linear programming
problem in the standard form:
Maximize: S1x1 + S2x2 (maximize the revenue, the "objective function")
Subject to:
  x1 + x2 ≤ L (limit on total area)
  F1x1 + F2x2 ≤ F (limit on fertilizer)
  P1x1 + P2x2 ≤ P (limit on pesticide)
  x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)
Which in matrix form becomes:

  maximize    [S1 S2] [x1]
                      [x2]
  subject to  [1  1 ] [x1]   [L]
              [F1 F2] [x2] ≤ [F],  [x1] ≥ [0]
              [P1 P2]        [P]   [x2]   [0]

Augmented form (slack form)

Linear programming problems can be converted into an augmented form in order to apply the common form of the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problem can then be written in the following block matrix form:

Maximize z:

  [1  -c^T  0] [z]   [0]
  [0   A    I] [x] = [b],  x ≥ 0, s ≥ 0
               [s]

where s are the newly introduced slack variables, x are the decision variables, and z is the variable to be maximized.
Example
The example above is converted into the following augmented
form:
Maximize: S1x1 + S2x2 (objective function)
Subject to:
  x1 + x2 + x3 = L (augmented constraint)
  F1x1 + F2x2 + x4 = F (augmented constraint)
  P1x1 + P2x2 + x5 = P (augmented constraint)
  x1, x2, x3, x4, x5 ≥ 0

where x3, x4, x5 are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused pesticide.

In matrix form this becomes:

Maximize z:

  [1 -S1 -S2 0 0 0] [z ]   [0]
  [0  1   1  1 0 0] [x1] = [L],  x1, x2, x3, x4, x5 ≥ 0
  [0  F1  F2 0 1 0] [x2]   [F]
  [0  P1  P2 0 0 1] [x3]   [P]
                    [x4]
                    [x5]
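For concreteness, here is a hedged numerical sketch of the farmer's problem. The article leaves L, F, P, S1, S2, F1, F2, P1, P2 symbolic; the values below are invented for illustration. The sketch solves the standard form with SciPy and then reads the slack variables x3, x4, x5 off the inequality constraints.

# A sketch of the farmer's problem with invented parameter values.
import numpy as np
from scipy.optimize import linprog

L, F, P = 10.0, 300.0, 90.0   # land (km^2), fertilizer (kg), pesticide (kg)
S1, S2 = 500.0, 400.0         # selling price per km^2 of wheat, barley
F1, F2 = 40.0, 25.0           # fertilizer use per km^2 of wheat, barley
P1, P2 = 10.0, 5.0            # pesticide use per km^2 of wheat, barley

A = np.array([[1.0, 1.0], [F1, F2], [P1, P2]])
b = np.array([L, F, P])
c = np.array([S1, S2])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x1, x2 = res.x
print("wheat, barley areas:", x1, x2)
print("revenue:", -res.fun)
print("slacks x3, x4, x5:", b - A @ res.x)  # unused land, fertilizer, pesticide

With these numbers the optimum plants both crops (land and fertilizer constraints bind, so their slacks are zero, while some pesticide is left over); changing the prices or resource limits shifts which constraints are active.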