Linear Programming Applications

Linear programming is a technique for optimizing a linear objective function over variables that satisfy linear constraints. It involves finding the maximum or minimum value of a linear function subject to constraints defining a convex polytope. The simplex method is commonly used to solve linear programming problems by moving along the edges of the polytope until an optimal solution is found. Linear programming has wide applications in fields like business, economics, and engineering.


Linear programming (LP, also called linear optimization) is a method to achieve the best

outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are
represented by linear relationships. Linear programming is a special case of mathematical
programming (also known as mathematical optimization).
More formally, linear programming is a technique for the optimization of a linear objective function,
subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope,
which is a set defined as the intersection of finitely many half spaces, each of which is defined by a
linear inequality. Its objective function is a real-valued affine (linear) function defined on this
polyhedron. A linear programming algorithm finds a point in the polytope where this function has the
smallest (or largest) value if such a point exists.
Linear programs are problems that can be expressed in canonical form as

maximize $c^{\mathsf{T}} x$
subject to $Ax \le b$
and $x \ge 0$,

where x represents the vector of variables (to be determined), c and b are vectors of (known)
coefficients, A is a (known) matrix of coefficients, and $(\cdot)^{\mathsf{T}}$ denotes the matrix transpose. The
expression to be maximized or minimized is called the objective function ($c^{\mathsf{T}} x$ in this case). The
inequalities $Ax \le b$ and $x \ge 0$ are the constraints which specify a convex polytope over which the
objective function is to be optimized. In this context, two vectors are comparable when they have
the same dimensions. If every entry in the first is less than or equal to the corresponding entry in
the second, then it can be said that the first vector is less than or equal to the second vector.
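
As an illustration (not part of the original text), a problem in this canonical form can be handed directly to an off-the-shelf solver. The sketch below uses SciPy's linprog with made-up coefficients; since linprog minimizes, the objective vector is negated to perform a maximization.

```python
# A minimal sketch of solving an LP in the canonical form above with SciPy;
# the coefficients are illustrative, not taken from the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])            # objective coefficients, maximize c^T x
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])          # constraint matrix A
b = np.array([4.0, 5.0])            # right-hand side, Ax <= b

# linprog minimizes, so negate c to maximize; variables are >= 0 by default.
res = linprog(c=-c, A_ub=A, b_ub=b)
print("optimal x:", res.x, "maximum value:", -res.fun)
```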
Linear programming can be applied to various fields of study. It is widely used in mathematics,
and to a lesser extent in business, economics, and for some engineering problems. Industries
that use linear programming models include transportation, energy, telecommunications, and
manufacturing. It has proven useful in modeling diverse types of problems
in planning, routing, scheduling, assignment, and design.

History

(The original article shows photographs of Leonid Kantorovich and John von Neumann.)

The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in
1827 published a method for solving them,[1] and after whom the method of Fourier–Motzkin
elimination is named.
In 1939 a linear programming formulation of a problem that is equivalent to the general linear
programming problem was given by the Soviet mathematician and economist Leonid Kantorovich,
who also proposed a method for solving it.[2] He developed it during World War II as a way to plan
expenditures and returns so as to reduce costs to the army and increase losses imposed on the
enemy.[citation needed] Kantorovich's work was initially neglected in the USSR.[3] About the same time as
Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic
problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel prize in
economics.[1] In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear
programs and gave a solution very similar to the later simplex method.[2] Hitchcock had died in 1957
and the Nobel prize is not awarded posthumously.
During 1946–1947, George B. Dantzig independently developed a general linear programming
formulation to use for planning problems in the US Air Force.[4] In 1947, Dantzig also invented
the simplex method, which for the first time efficiently tackled the linear programming problem in most
cases.[4] When Dantzig arranged a meeting with John von Neumann to discuss his simplex method,
von Neumann immediately conjectured the theory of duality by realizing that the problem he had been
working on in game theory was equivalent.[4] Dantzig provided a formal proof in an unpublished report "A
Theorem on Linear Inequalities" on January 5, 1948.[3] Dantzig's work was made available to the public
in 1951. In the post-war years, many industries applied it in their daily planning.
Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing
power required to test all the permutations to select the best assignment is vast; the number of
possible configurations exceeds the number of particles in the observable universe. However, it
takes only a moment to find the optimum solution by posing the problem as a linear program and
applying the simplex algorithm. The theory behind linear programming drastically reduces the
number of possible solutions that must be checked.
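
A quick back-of-the-envelope check of that claim (not part of the original text), using only Python's standard library:

```python
# The number of ways to assign 70 people to 70 jobs is 70!, which dwarfs the
# roughly 10^80 particles commonly estimated for the observable universe.
import math

assignments = math.factorial(70)
print(f"70! is about {assignments:.3e}")   # roughly 1.2e+100
print(assignments > 10**80)                # True
```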
The linear programming problem was first shown to be solvable in polynomial time by Leonid
Khachiyan in 1979,[5] but a larger theoretical and practical breakthrough in the field came in 1984
when Narendra Karmarkar introduced a new interior-point method for solving linear-programming
problems.[6]

Uses
Linear programming is a widely used field of optimization for several reasons. Many practical
problems in operations research can be expressed as linear programming problems.[3] Certain
special cases of linear programming, such as network flow problems and multicommodity
flow problems, are considered important enough to have generated much research on specialized
algorithms for their solution. A number of algorithms for other types of optimization problems work by
solving LP problems as sub-problems. Historically, ideas from linear programming have inspired
many of the central concepts of optimization theory, such as duality, decomposition, and the
importance of convexity and its generalizations. Likewise, linear programming was heavily used in
the early formation of microeconomics, and it is currently used in company management for problems
such as planning, production, transportation, and technology. Although modern
management issues are ever-changing, most companies would like to maximize profits and
minimize costs with limited resources. Therefore, many issues can be characterized as linear
programming problems.

Standard form
Standard form is the usual and most intuitive form of describing a linear programming problem. It
consists of the following three parts:

A linear function to be maximized, e.g. $f(x_1, x_2) = c_1 x_1 + c_2 x_2$

Problem constraints of the following form, e.g.
$a_{11} x_1 + a_{12} x_2 \le b_1$
$a_{21} x_1 + a_{22} x_2 \le b_2$
$a_{31} x_1 + a_{32} x_2 \le b_3$

Non-negative variables, e.g. $x_1 \ge 0$, $x_2 \ge 0$

The problem is usually expressed in matrix form, and then becomes:

maximize $\{\, c^{\mathsf{T}} x \mid Ax \le b \ \text{and}\ x \ge 0 \,\}$

Other forms, such as minimization problems, problems with constraints on
alternative forms, as well as problems involving negative variables, can always be
rewritten into an equivalent problem in standard form.
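
As a brief illustration of such rewritings (using generic symbols that do not appear in the original text), a minimization with greater-than-or-equal constraints becomes a standard-form maximization by negating the objective and the constraints:

$\min\ c^{\mathsf{T}} x \ \text{subject to}\ Ax \ge b \quad\Longleftrightarrow\quad \max\ (-c)^{\mathsf{T}} x \ \text{subject to}\ (-A)x \le -b,$

and a variable $x_j$ that is unrestricted in sign can be replaced by $x_j = x_j^{+} - x_j^{-}$ with $x_j^{+}, x_j^{-} \ge 0$.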

Example
Suppose that a farmer has a piece of farm land, say L km2, to be planted with either
wheat or barley or some combination of the two. The farmer has a limited amount of
fertilizer, F kilograms, and pesticide, P kilograms. Every square kilometer of wheat
requires F1 kilograms of fertilizer and P1 kilograms of pesticide, while every square
kilometer of barley requires F2 kilograms of fertilizer and P2 kilograms of pesticide.
Let S1 be the selling price of wheat per square kilometer, and S2 be the selling price
of barley. If we denote the area of land planted with wheat and barley
by x1 and x2 respectively, then profit can be maximized by choosing optimal values
for x1 and x2. This problem can be expressed with the following linear programming
problem in the standard form:
Maximize: $S_1 \cdot x_1 + S_2 \cdot x_2$
(maximize the revenue; revenue is the "objective function")

Subject to:
$x_1 + x_2 \le L$ (limit on total area)
$F_1 \cdot x_1 + F_2 \cdot x_2 \le F$ (limit on fertilizer)
$P_1 \cdot x_1 + P_2 \cdot x_2 \le P$ (limit on pesticide)
$x_1 \ge 0,\ x_2 \ge 0$ (cannot plant a negative area).

In matrix form this becomes:

maximize $\begin{bmatrix} S_1 & S_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$

subject to $\begin{bmatrix} 1 & 1 \\ F_1 & F_2 \\ P_1 & P_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \begin{bmatrix} L \\ F \\ P \end{bmatrix}, \quad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ge \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
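
As a sketch of how this example might be solved numerically (the text leaves L, F, P, F1, F2, P1, P2, S1 and S2 symbolic, so the values below are assumptions), SciPy's linprog can be used:

```python
# Farmer example with made-up numbers for the symbolic constants in the text.
import numpy as np
from scipy.optimize import linprog

L, F, P = 10.0, 300.0, 80.0    # available land (km^2), fertilizer (kg), pesticide (kg)
F1, F2 = 40.0, 25.0            # fertilizer needed per km^2 of wheat / barley
P1, P2 = 10.0, 5.0             # pesticide needed per km^2 of wheat / barley
S1, S2 = 7.0, 5.0              # selling price per km^2 of wheat / barley

c = np.array([S1, S2])                 # revenue to maximize
A = np.array([[1.0, 1.0],              # x1 + x2       <= L
              [F1,  F2],               # F1*x1 + F2*x2 <= F
              [P1,  P2]])              # P1*x1 + P2*x2 <= P
b = np.array([L, F, P])

res = linprog(c=-c, A_ub=A, b_ub=b)    # linprog minimizes; x >= 0 by default
print("areas (wheat, barley):", res.x, "revenue:", -res.fun)
```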

Augmented form (slack form)


Linear programming problems can be converted into an augmented form in
order to apply the common form of the simplex algorithm. This form
introduces non-negative slack variables to replace inequalities with
equalities in the constraints. The problems can then be written in the
following block matrix form:

Maximize $z$ in:

$\begin{bmatrix} 1 & -c^{\mathsf{T}} & 0 \\ 0 & A & I \end{bmatrix} \begin{bmatrix} z \\ x \\ s \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}, \quad x \ge 0,\ s \ge 0,$

where $s$ are the newly introduced slack variables, $x$ are the decision variables, and $z$ is the
variable to be maximized.
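
A minimal sketch of this conversion, assuming SciPy's linprog and made-up data: each inequality of Ax ≤ b receives a slack variable, producing the equality system [A I][x; s] = b with every variable non-negative.

```python
# Augmented (slack) form: inequalities Ax <= b become equalities [A  I][x; s] = b.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

m, n = A.shape
A_eq = np.hstack([A, np.eye(m)])            # block matrix [A  I]
c_aug = np.concatenate([-c, np.zeros(m)])   # slacks do not appear in the objective

res = linprog(c=c_aug, A_eq=A_eq, b_eq=b)   # all variables >= 0 by default
x, s = res.x[:n], res.x[n:]
print("x =", x, "slacks =", s, "objective =", -res.fun)
```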

Example
The example above is converted into the following augmented
form:

Maximize: $S_1 \cdot x_1 + S_2 \cdot x_2$ (objective function)

subject to:
$x_1 + x_2 + x_3 = L$ (augmented constraint)
$F_1 \cdot x_1 + F_2 \cdot x_2 + x_4 = F$ (augmented constraint)
$P_1 \cdot x_1 + P_2 \cdot x_2 + x_5 = P$ (augmented constraint)
$x_1, x_2, x_3, x_4, x_5 \ge 0$

where $x_3, x_4, x_5$ are (non-negative) slack variables,
representing in this example the unused area, the amount
of unused fertilizer, and the amount of unused pesticide.

In matrix form this becomes:

Maximize $z$ in:

$\begin{bmatrix} 1 & -S_1 & -S_2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & F_1 & F_2 & 0 & 1 & 0 \\ 0 & P_1 & P_2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ L \\ F \\ P \end{bmatrix}, \quad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} \ge 0$