
Linear programming

In mathematics, linear programming (LP) problems involve the optimization of a linear objective function, subject to linear equality and inequality constraints.

Put very informally, LP is about trying to get the best outcome (e.g. maximum profit, least effort, etc.) given some list of constraints (e.g. only working 30 hours a week, not doing anything illegal, etc.), using a linear mathematical model.

More formally, given a polytope (for example, a polygon or a polyhedron), and a real-valued affine function f(x1, ..., xn) = a1x1 + ... + anxn + b defined on this polytope, the goal is to find a point in the polytope where this function has the smallest (or largest) value. Such points may not exist, but if they do, searching through the polytope vertices is guaranteed to find at least one of them.

Linear programs are problems that can be expressed in canonical form:

maximize c^T x
subject to Ax <= b, x >= 0

where x represents the vector of variables, while c and b are vectors of coefficients and A is a matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities Ax <= b are the constraints which specify a convex polyhedron over which the objective function is to be optimized.

Linear programming can be applied to various fields of study. Most extensively it is used in business and economic situations, but it can also be utilized for some engineering problems. Some industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
History of linear programming

The problem of solving a system of linear inequalities dates back at least as far as Fourier, after whom the method of Fourier-Motzkin elimination is named. Linear programming arose as a mathematical model developed during the Second World War to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. Postwar, many industries found its use in their daily planning.

The founders of the subject are George B. Dantzig, who published the simplex method in 1947; John von Neumann, who developed the theory of duality in the same year; and Leonid Kantorovich, a Russian mathematician who used similar techniques in economics before Dantzig and won the Nobel Prize in Economics in 1975. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior point method for solving linear programming problems.

Dantzig's original example of finding the best assignment of 70 people to 70 jobs exemplifies the usefulness of linear programming. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible optimal solutions that must be checked.

Uses

Linear programming is an important field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming is heavily used in microeconomics and business management, either to maximize the income or minimize the costs of a production scheme. Some examples are food blending, inventory management, portfolio and finance management, resource allocation for human and machine resources, planning advertisement campaigns, etc.
Standard form

Standard form is the usual and most intuitive form of describing a linear
programming problem. It consists of the following three parts:

 A linear function to be maximized

e.g. maximize c1x1 + c2x2

 Problem constraints of the following form

e.g.
a11x1 + a12x2 <= b1
a21x1 + a22x2 <= b2
a31x1 + a32x2 <= b3

 Non-negative variables

e.g. x1 >= 0, x2 >= 0

The problem is usually expressed in matrix form, and then becomes:

maximize c^T x
subject to Ax <= b, x >= 0
Other forms, such as minimization problems, problems with constraints on alternative forms, as well as problems involving negative variables, can always be rewritten into an equivalent problem in standard form.

Example

Suppose that a farmer has a piece of farm land, say A square kilometres
large, to be planted with either wheat or barley or some combination of the
two. The farmer has a limited permissible amount F of fertilizer and P of
insecticide which can be used, each of which is required in different amounts
per unit area for wheat (F1, P1) and barley (F2, P2). Let S1 be the selling price
of wheat, and S2 the price of barley. If we denote the area planted with
wheat and barley by x1 and x2 respectively, then the optimal number of square kilometres
to plant with wheat vs barley can be expressed as a linear programming problem:

maximize S1x1 + S2x2 (maximize the revenue; revenue is the "objective function")

subject to

x1 + x2 <= A (limit on total area)
F1x1 + F2x2 <= F (limit on fertilizer)
P1x1 + P2x2 <= P (limit on insecticide)
x1 >= 0, x2 >= 0 (cannot plant a negative area)

Which in matrix form becomes (";" separates matrix rows):

maximize [S1 S2][x1; x2]

subject to [1 1; F1 F2; P1 P2][x1; x2] <= [A; F; P], [x1; x2] >= [0; 0]
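As a sketch of how this formulation might be solved numerically, the following uses SciPy's linprog with hypothetical values for A, F, P and the price and usage coefficients (the problem statement leaves them symbolic):

# Sketch: the farmer's LP with hypothetical numbers, since the text
# leaves the data symbolic: A = 10, F = 16, P = 24, prices S1 = 4,
# S2 = 3, per-area usage F1 = 2, F2 = 1, P1 = 1, P2 = 3.
import numpy as np
from scipy.optimize import linprog

S = np.array([4.0, 3.0])
A_ub = np.array([[1.0, 1.0],    # total area
                 [2.0, 1.0],    # fertilizer usage
                 [1.0, 3.0]])   # insecticide usage
b_ub = np.array([10.0, 16.0, 24.0])

res = linprog(-S, A_ub=A_ub, b_ub=b_ub, method="highs")  # x >= 0 by default
print("area planted (wheat, barley):", res.x)   # -> [6. 4.]
print("maximum revenue:", -res.fun)             # -> 36.0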

Augmented form (slack form)

Linear programming problems must be converted into augmented form before being solved by the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problem can then be written in the following form:

Maximize Z in:

Z - c^T x = 0
Ax + x_s = b
x >= 0, x_s >= 0

where x_s are the newly introduced slack variables, and Z is the variable to be maximized.

Example

The example above becomes as follows when converted into augmented form:

maximize S1x1 + S2x2 (objective function)

subject to

x1 + x2 + x3 = A (augmented constraint)
F1x1 + F2x2 + x4 = F (augmented constraint)
P1x1 + P2x2 + x5 = P (augmented constraint)
x1, x2, x3, x4, x5 >= 0

where x3, x4, x5 are (non-negative) slack variables.

Which in matrix form becomes:

Maximize Z in:

[1 -S1 -S2 0 0 0; 0 1 1 1 0 0; 0 F1 F2 0 1 0; 0 P1 P2 0 0 1][Z; x1; x2; x3; x4; x5] = [0; A; F; P], x1, ..., x5 >= 0
Duality

Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:

maximize c^T x
subject to Ax <= b, x >= 0

The corresponding dual problem is:

minimize b^T y
subject to A^T y >= c, y >= 0

where y is used instead of x as the variable vector.

There are two ideas fundamental to duality theory. One is the fact that the
dual of a dual linear program is the original primal linear program.
Additionally, every feasible solution for a linear program gives a bound on
the optimal value of the objective function of its dual. The weak duality
theorem states that the objective function value of the dual at any feasible
solution is always greater than or equal to the objective function value of the
primal at any feasible solution. The strong duality theorem states that if the
primal has an optimal solution, x*, then the dual also has an optimal solution,
y*, such that c^T x* = b^T y*.

A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible.
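A sketch of this primal-dual relationship, using the same illustrative data as the canonical-form sketch earlier; the two optimal values coincide, as the strong duality theorem predicts.

# Sketch: numerically checking strong duality for a small LP.
# Primal:  max c^T x  s.t.  Ax <= b, x >= 0
# Dual:    min b^T y  s.t.  A^T y >= c, y >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")
# Dual in linprog form: A^T y >= c  <=>  -A^T y <= -c
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

print("primal optimum c^T x*:", -primal.fun)   # -> 36.0
print("dual optimum   b^T y*:", dual.fun)      # -> 36.0 (strong duality)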

Example

Following the above example of the farmer with some A land, F fertilizer and P insecticide, the farmer can tell others that he has no way to earn more than a specific amount of profit with the following scheme: to claim that with his available method of earning, each square kilometre of land can give him no more than yA, each amount of fertilizer can earn him no more than yF, and each amount of insecticide can earn him no more than yP. Then he can tell others that the most he can earn is AyA + FyF + PyP. In order to find the best (lowest) claim he can make, he can set yA, yF and yP by using the following linear programming problem:

minimize AyA + FyF + PyP (minimize the revenue bound; the revenue bound is the "objective function")

subject to

yA + F1yF + P1yP >= S1 (he can earn no more by growing wheat)
yA + F2yF + P2yP >= S2 (he can earn no more by growing barley)
yA, yF, yP >= 0 (cannot claim negative revenue on a resource)

Which in matrix form becomes:

minimize [A F P][yA; yF; yP]

subject to [1 F1 P1; 1 F2 P2][yA; yF; yP] >= [S1; S2], [yA; yF; yP] >= [0; 0; 0]

Note that each variable in the primal problem (amount of wheat/barley to grow) corresponds to an inequality in the dual problem (revenue obtained from wheat/barley), and each variable in the dual problem (revenue bound provided by each resource) corresponds to an inequality in the primal problem (limit on each resource). Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation gives rise to complementary slackness.

Complementary slackness

It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known, using the complementary slackness theorem. The theorem states:

Suppose that x = (x1, x2, ..., xn) is primal feasible and that y = (y1, y2, ..., ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ..., zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if xjzj = 0 for j = 1, 2, ..., n, and wiyi = 0 for i = 1, 2, ..., m.

So if the ith slack variable of the primal is not zero, then the ith variable of the dual is equal to zero. Likewise, if the jth slack variable of the dual is not zero, then the jth variable of the primal is equal to zero.
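A sketch verifying these conditions numerically for the primal-dual pair from the previous duality sketch:

# Sketch: verifying complementary slackness, x_j z_j = 0 and w_i y_i = 0,
# at the optimal pair of the small LP used in the duality sketch above.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

x = linprog(-c, A_ub=A, b_ub=b, method="highs").x       # primal optimum
y = linprog(b, A_ub=-A.T, b_ub=-c, method="highs").x    # dual optimum

w = b - A @ x        # primal slacks
z = A.T @ y - c      # dual slacks

print("x_j * z_j:", x * z)   # -> all (numerically) zero
print("w_i * y_i:", w * y)   # -> all (numerically) zero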

Theory

Geometrically, the linear constraints define a convex polyhedron, which is called the feasible region. Since the objective function is also linear, and hence a convex function, all local optima are automatically global optima (by the KKT theorem). The linearity of the objective function also implies that the set of optimal solutions is the convex hull of a finite set of points, usually a single point.

There are two situations in which no optimal solution can be found. First, if the constraints contradict each other (for instance, x ≥ 2 and x ≤ 1) then the feasible region is empty and there can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be infeasible.

Alternatively, the polyhedron can be unbounded in the direction of the objective function (for example: maximize x1 + 3x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case there is no optimal solution, since solutions with arbitrarily high values of the objective function can be constructed.

Barring these two pathological conditions (which are often ruled out by resource constraints integral to the problem being represented, as above), the optimum is always attained at a vertex of the polyhedron. However, the optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if the objective function were constant).

Algorithms

A series of linear constraints on two variables produces a region of possible values for those variables. Solvable problems will have a feasible region in the shape of a simple polygon.

The simplex algorithm, developed by George Dantzig, solves LP problems by constructing an admissible solution at a vertex of the polyhedron and then walking along edges of the polyhedron to vertices with successively higher values of the objective function until the optimum is reached. Although this algorithm is quite efficient in practice and can be guaranteed to find the global optimum if certain precautions against cycling are taken, it has poor worst-case behavior: it is possible to construct a linear programming problem for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time (complexity class P).

This long-standing issue was resolved by Leonid Khachiyan in 1979 with the introduction of the ellipsoid method, the first worst-case polynomial-time algorithm for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm uses O(n^4 L) arithmetic operations on numbers with O(L) digits. It consists of a specialization of the nonlinear optimization technique developed by Naum Shor, generalizing the ellipsoid method for convex optimization proposed by Arkadi Nemirovski, a 2003 John von Neumann Theory Prize winner, and D. Yudin.

Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm had little practical impact, as the simplex method is more efficient for all but specially constructed families of linear programs. However, it inspired new lines of research in linear programming with the development of interior point methods, which can be implemented as a practical tool. In contrast to the simplex algorithm, which finds the optimal solution by progressing along points on the boundary of a polyhedral set, interior point methods move through the interior of the feasible region.

In 1984, N. Karmarkar proposed a new interior point projective method for linear programming. Karmarkar's algorithm not only improved on Khachiyan's theoretical worst-case polynomial bound (giving O(n^3.5 L)), but also promised dramatic practical performance improvements over the simplex method. Since then, many interior point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. In terms of both theoretical and practical properties, barrier function or path-following methods have been the most common recently.

The current opinion is that the efficiency of good implementations of simplex-based methods and interior point methods is similar for routine applications of linear programming.

LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks, many of which can be transformed into linear programming problems only with some difficulty.

Open problems and recent work

There are several open problems in the theory of linear programming, the
solution of which would represent fundamental breakthroughs in
mathematics and potentially major advances in our ability to solve large-
scale linear programs.

 Does LP admit a strongly polynomial-time algorithm?
 Does LP admit a strongly polynomial algorithm to find a strictly complementary solution?
 Does LP admit a polynomial algorithm in the real number (unit cost) model of computation?
This closely related set of problems has been cited by Stephen Smale as
among the 18 greatest unsolved problems of the 21st century. In Smale's
words, the third version of the problem "is the main unsolved problem of
linear programming theory." While algorithms exist to solve linear
programming in weakly polynomial time, such as the ellipsoid methods and
interior-point techniques, no algorithms have yet been found that allow
strongly polynomial-time performance in the number of constraints and the
number of variables. The development of such algorithms would be of great
theoretical interest, and perhaps allow practical gains in solving large LPs as
well.

 Are there pivot rules which lead to polynomial-time Simplex variants?
 Do all polyhedral graphs have polynomially-bounded diameter?
 Is the Hirsch conjecture true for polyhedral graphs?

These questions relate to the performance analysis and development of Simplex-like methods. The immense efficiency of the Simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of Simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time.

The Simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polyhedron. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polyhedron. As a result, we are interested in knowing the maximum graph-theoretical diameter of polyhedral graphs. It has been proved that all polyhedra have subexponential diameter, and all experimentally observed polyhedra have linear diameter, but it is presently unknown whether any polyhedron has superpolynomial or even superlinear diameter. If any such polyhedra exist, then no edge-following variant can run in polynomial or linear time, respectively. Questions about polyhedron diameter are of independent mathematical interest.

Recent developments in linear programming include work by Vladlen Koltun to show that linear programming is equivalent to solving problems on arrangement polytopes, which have small diameter, allowing the possibility of strongly polynomial-time algorithms without resolving questions about the diameter of general polyhedra. Jonathan Kelner and Dan Spielman have also proposed a randomized (weakly) polynomial-time Simplex algorithm.

Integer unknowns
If the unknown variables are all required to be integers, then the
problem is called an integer programming (IP) or integer linear programming
(ILP) problem. In contrast to linear programming, which can be solved
efficiently in the worst case, integer programming problems are in many
practical situations (those with bounded variables) NP-hard. 0-1 integer
programming or binary integer programming (BIP) is the special case of
integer programming where variables are required to be 0 or 1 (rather than
arbitrary integers). This problem is also classified as NP-hard, and in fact the
decision version was one of Karp's 21 NP-complete problems.

If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard.

There are, however, some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers.

Advanced algorithms for solving integer linear programs include:

 cutting-plane method
 branch and bound
 branch and cut
 branch and price
 if the problem has some extra structure, it may be possible to apply
delayed column generation.
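As a small illustration (a sketch, not one of the specialized methods listed above), SciPy 1.9 and later expose the HiGHS branch-and-bound solver through linprog's integrality argument; the knapsack data below is made up:

# Sketch: a tiny binary integer program (BIP), solved by branch and bound
# via the HiGHS backend. Requires SciPy >= 1.9 (integrality argument).
# A 0-1 knapsack: pick items maximizing value with total weight <= 10.
import numpy as np
from scipy.optimize import linprog

values = np.array([8.0, 11.0, 6.0, 4.0])
weights = np.array([[5.0, 7.0, 4.0, 3.0]])

res = linprog(-values,                      # maximize total value
              A_ub=weights, b_ub=[10.0],
              bounds=[(0, 1)] * 4,          # each variable in [0, 1]...
              integrality=[1] * 4,          # ...and required to be integer
              method="highs")
print("chosen items:", res.x)               # -> [0. 1. 0. 1.]
print("total value:", -res.fun)             # -> 15.0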

Linear Programming

Linear programming, sometimes known as linear optimization, is the problem of maximizing or minimizing a linear function over a convex polyhedron specified by linear and non-negativity constraints. Simplistically, linear programming is the optimization of an outcome based on some set of constraints using a linear mathematical model.

Linear programming is implemented in Mathematica as LinearProgramming[c, m, b], which finds a vector x which minimizes the quantity c.x subject to the constraints m.x >= b and x_i >= 0 for all i.
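For readers without Mathematica, the same convention can be reproduced with SciPy's linprog by flipping the inequality (m.x >= b is equivalent to -m.x <= -b); a sketch with arbitrary illustrative data:

# Sketch: the LinearProgramming[c, m, b] convention (minimize c.x subject
# to m.x >= b and x >= 0) expressed with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                  # arbitrary illustrative data
m = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 0.0])

res = linprog(c, A_ub=-m, b_ub=-b, method="highs")  # x >= 0 by default
print(res.x)       # -> [3. 0.]
print(res.fun)     # -> 3.0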

Linear programming theory falls within convex optimization theory and is also considered to be an important part of operations research. Linear programming is extensively used in business and economics, but may also be used to solve certain engineering problems. Examples from economics include Leontief's input-output model and the determination of shadow prices; an example of a business application would be maximizing profit in a factory that manufactures a number of different products from the same raw material using the same resources; and example engineering applications include Chebyshev approximation and the design of structures (e.g., limit analysis of a planar truss).

Linear programming can be solved using the simplex method (Wood and Dantzig 1949, Dantzig 1949), which runs along polytope edges of the visualization solid to find the best answer. Khachiyan (1979) found a polynomial time algorithm. A much more efficient polynomial time algorithm was found by Karmarkar (1984). This method goes through the middle of the solid (making it a so-called interior point method) and then transforms and warps. Arguably, interior point methods were known as early as the 1960s in the form of the barrier function methods, but the media hype accompanying Karmarkar's announcement led to these methods receiving a great deal of attention.

Linear programming in which variables may take on integer values only is known as integer programming.

Definitions

Objective Function
The linear function (an equation) representing cost, profit, or some other quantity to be maximized or minimized subject to the constraints.
Constraints
A system of linear inequalities.
Problem Constraints
The linear inequalities that are derived from the application. For
example, there may be only 40 hours a machine can be used in a
week, so the total time it is used would have to be <= 40. The problem
constraints are usually stated in the story problem.
Non-Negativity Constraints
The linear inequalities x >= 0 and y >= 0. These are included because x and y are usually the number of items produced, and you cannot produce a negative number of items; the smallest number of items you could produce is zero. These are not (usually) stated; they are implied.
Feasible Region
The solution to the system of linear inequalities. That is, the set of all
points that satisfy all the constraints. Only points in the feasible region
can be used.
Corner Point
A vertex of the feasible region. Not every intersection of lines is a
corner point. The corner points only occur at a vertex of the feasible
region. If there is going to be an optimal solution to a linear
programming problem, it will occur at one or more corner points, or on
a line segment between two corner points.
Bounded Region
A feasible region that can be enclosed in a circle. A bounded region will have both a maximum and a minimum value.
Unbounded Region
A feasible region that cannot be enclosed in a circle.

Fundamental Theorem of Linear Programming

Recall that almost every area of mathematics has its fundamental theorem.

Here are some of the fundamental theorems or principles that occur in your
text.

Fundamental Theorem of Arithmetic (pg 8)
Every integer greater than one is either prime or can be expressed as a unique product of prime numbers.
Fundamental Theorem of Algebra (pg 264)
Every polynomial in one variable of degree n > 0 has at least one real
or complex zero.
Fundamental Counting Principle (pg 543)
If there are m ways to do one thing, and n ways to do another, then
there are m*n ways of doing both.

Fundamental Theorem of Linear Programming

If there is a solution to a linear programming problem, then it will occur at a corner point, or on a line segment between two corner points.

The Fundamental Theorem of Linear Programming is a great help. Instead of testing all of the infinite number of points in the feasible region, you only have to test the corner points. Whichever corner point yields the largest value for the objective function is the maximum, and whichever corner point yields the smallest value for the objective function is the minimum.

Solving a Linear Programming Problem

If the problem is not a story problem, skip to step 3.

1. Define the variables. Usually, a good choice for the definition is the
quantity they asked you to find in the problem.
2. Write the problem by defining the objective function and the system of
linear inequalities. Don't forget about the non-negativity constraints
where necessary.
3. Sketch the system of linear inequalities to obtain the feasible region.
4. Identify each corner point of the feasible region. You can find the
corner points by forming a 2x2 system of linear equations from the two
lines that intersect at that point and solving that system.
5. Evaluate the objective function at each corner point.
6. Choose the point yielding the largest value or smallest value
depending on whether the problem is a maximization or minimization
problem.

Be careful how you give the answer. The answer should give not only the
maximum or minimum value (the value of the objective function), but it
should also give the location where that extremum occurs. Example: The
maximum value is 9 when x=2 and y=3. If it is a story problem, then give
the answer in terms of the original definitions of x and y.
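This procedure can be automated for two-variable problems: solve the 2x2 system for each pair of boundary lines (step 4), keep only the feasible intersections, and evaluate the objective at each (steps 5 and 6). The sketch below uses constraints reconstructed from the corner points of the worked example that follows; the reconstruction is an assumption, since the text gives only the corner table.

# Sketch: automating steps 3-6 for a two-variable LP. Maximize z = 4x + 3y
# subject to constraints reconstructed (an assumption) from the corner
# points of the worked example below:
#   -x + 4y <= 16,  x + y <= 9,  3x - y <= 15,  x >= 0,  y >= 0
import itertools
import numpy as np

# Each boundary line as a row [a, b, c] meaning a*x + b*y = c.
lines = np.array([[-1.0,  4.0, 16.0],
                  [ 1.0,  1.0,  9.0],
                  [ 3.0, -1.0, 15.0],
                  [ 1.0,  0.0,  0.0],    # x = 0
                  [ 0.0,  1.0,  0.0]])   # y = 0
# The same constraints in "A_ub p <= b_ub" form, for feasibility checks.
A_ub = np.array([[-1.0, 4.0], [1.0, 1.0], [3.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
b_ub = np.array([16.0, 9.0, 15.0, 0.0, 0.0])

def objective(p):
    return 4 * p[0] + 3 * p[1]

corners = []
for i, j in itertools.combinations(range(len(lines)), 2):
    M = lines[[i, j], :2]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                         # parallel lines: no intersection
    p = np.linalg.solve(M, lines[[i, j], 2])
    if np.all(A_ub @ p <= b_ub + 1e-9):  # keep only feasible intersections
        corners.append(p)

best = max(corners, key=objective)
print("corner points:", [tuple(np.round(p, 2)) for p in corners])
print("maximum:", objective(best), "at", best)   # -> 33.0 at [6. 3.]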

Geometric Approach

If the slope of the objective function is negative and you take a line with that slope passing through the origin and move it to the right through the feasible region, the last corner point hit by that moving line will give the maximum value.

In the example shown, the last line with slope m=-4/3 that touches the
feasible region touches at the corner point (6,3).

Since z = 4(6) + 3(3) = 24 + 9 = 33, the maximum value is 33 when x=6 and y=3.
Algebraic Approach

Now, to verify the solution non-geometrically: since we know the optimal solution has to occur at one or more corner points, we make a table listing all the corner points and evaluate the objective function at those points.

Corner Point    x    y    z = 4x + 3y
A               0    0    0
B               0    4    12
C               4    5    31
D               6    3    33
E               5    0    20

As you can see, the corner point with the maximum value is at (6,3).

We can also determine the minimum value from that table. A suitable
answer, assuming the problem had asked for both the maximum and
minimum is ...

The minimum value is 0 when x=0 and y=0.


The maximum value is 33 when x=6 and y=3.

Linear programming solution examples

Linear programming example 1997 UG exam

A company makes two products (X and Y) using two machines (A and B).
Each unit of X that is produced requires 50 minutes processing time on
machine A and 30 minutes processing time on machine B. Each unit of Y that
is produced requires 24 minutes processing time on machine A and 33
minutes processing time on machine B.

At the start of the current week there are 30 units of X and 90 units of Y in
stock. Available processing time on machine A is forecast to be 40 hours and
on machine B is forecast to be 35 hours.

The demand for X in the current week is forecast to be 75 units and for Y is
forecast to be 95 units. Company policy is to maximise the combined sum of
the units of X and the units of Y in stock at the end of the week.
 Formulate the problem of deciding how much of each product to make
in the current week as a linear program.
 Solve this linear program graphically.

Solution

Let

 x be the number of units of X produced in the current week
 y be the number of units of Y produced in the current week

then the constraints are:

 50x + 24y <= 40(60) machine A time
 30x + 33y <= 35(60) machine B time
 x >= 75 - 30, i.e. x >= 45, so production of X >= demand (75) - initial stock (30), which ensures we meet demand
 y >= 95 - 90, i.e. y >= 5, so production of Y >= demand (95) - initial stock (90), which ensures we meet demand

The objective is: maximise (x+30-75) + (y+90-95) = (x+y-50), i.e. to maximise the number of units left in stock at the end of the week.

It is plain from the diagram below that the maximum occurs at the intersection of x = 45 and 50x + 24y = 2400. Solving simultaneously, rather than by reading values off the graph, we have that x = 45 and y = 6.25, with the value of the objective function being 1.25.
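As a check (a sketch, not part of the original solution), the same LP can be handed to SciPy's linprog, which confirms the values obtained by solving simultaneously:

# Sketch: checking the 1997 exam solution with SciPy's linprog.
# maximize x + y - 50  s.t.  50x + 24y <= 2400, 30x + 33y <= 2100,
#                            x >= 45, y >= 5
from scipy.optimize import linprog

res = linprog([-1.0, -1.0],                     # maximize x + y
              A_ub=[[50.0, 24.0], [30.0, 33.0]],
              b_ub=[2400.0, 2100.0],
              bounds=[(45, None), (5, None)],   # demand-driven lower bounds
              method="highs")
x, y = res.x
print(x, y)                # -> 45.0 6.25
print(x + y - 50)          # -> 1.25 (units in stock at the end of the week)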

Linear programming example 1992 UG exam

A company manufactures two products (A and B) and the profit per unit sold
is £3 and £5 respectively. Each product has to be assembled on a particular
machine, each unit of product A taking 12 minutes of assembly time and
each unit of product B 25 minutes of assembly time. The company estimates
that the machine used for assembly has an effective working week of only 30
hours (due to maintenance/breakdown).

Technological constraints mean that for every five units of product A produced at least two units of product B must be produced.

 Formulate the problem of how much of each product to produce as a linear program.
 Solve this linear program graphically.
 The company has been offered the chance to hire an extra machine, thereby doubling the effective assembly time available. What is the maximum amount you would be prepared to pay (per week) for the hire of this machine and why?

Solution

Let

xA = number of units of A produced

xB = number of units of B produced

then the constraints are:

12xA + 25xB <= 30(60) (assembly time)

xB >= 2(xA/5)

i.e. xB - 0.4xA >= 0

i.e. 5xB >= 2xA (technological)

where xA, xB >= 0

and the objective is

maximise 3xA + 5xB

It is plain from the diagram below that the maximum occurs at the intersection of 12xA + 25xB = 1800 and xB - 0.4xA = 0. Solving simultaneously, rather than by reading values off the graph, we have that:

xA = (1800/22) = 81.8

xB = 0.4xA = 32.7

with the value of the objective function being £408.9.

Doubling the assembly time available means that the assembly time constraint (currently 12xA + 25xB <= 1800) becomes 12xA + 25xB <= 2(1800). This new constraint will be parallel to the existing assembly time constraint, so the new optimal solution will lie at the intersection of 12xA + 25xB = 3600 and xB - 0.4xA = 0,

i.e. at xA = (3600/22) = 163.6 and xB = 0.4xA = 65.4, with the value of the objective function being £817.8.

Hence we have made an additional profit of £(817.8 - 408.9) = £408.9, and this is the maximum amount we would be prepared to pay for the hire of the machine for doubling the assembly time. This is because if we pay more than this amount then we will reduce our maximum profit below the £408.9 we would have made without the new machine.
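A sketch checking both scenarios with SciPy's linprog; note that the solver reports the exact vertex value 4500/11 ≈ 409.09 (and 818.18 after doubling), while the text's £408.9 and £817.8 come from rounding xA and xB to one decimal place first. The difference between the two optima is the hire value derived above.

# Sketch: checking the 1992 exam solution, before and after doubling the
# assembly time, with SciPy's linprog.
from scipy.optimize import linprog

def solve(assembly_minutes):
    # maximize 3xA + 5xB  s.t.  12xA + 25xB <= assembly_minutes,
    #                           xB >= 0.4xA  (i.e. 0.4xA - xB <= 0)
    res = linprog([-3.0, -5.0],
                  A_ub=[[12.0, 25.0], [0.4, -1.0]],
                  b_ub=[assembly_minutes, 0.0],
                  method="highs")           # xA, xB >= 0 by default
    return res.x, -res.fun

(xA, xB), profit = solve(1800.0)
(xA2, xB2), profit2 = solve(3600.0)
print(xA, xB)                 # -> ~81.82 ~32.73
print(profit, profit2)        # -> ~409.09 and ~818.18
print(profit2 - profit)       # -> ~409.09, the most worth paying for hire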

Linear programming example 1995 UG exam

The demand for two products in each of the last four weeks is shown below.

Week                  1    2    3    4
Demand - product 1    23   27   34   40
Demand - product 2    11   13   15   14

Apply exponential smoothing with a smoothing constant of 0.7 to generate a forecast for the demand for these products in week 5.

These products are produced using two machines, X and Y. Each unit of
product 1 that is produced requires 15 minutes processing on machine X and
25 minutes processing on machine Y. Each unit of product 2 that is produced
requires 7 minutes processing on machine X and 45 minutes processing on
machine Y. The available time on machine X in week 5 is forecast to be 20
hours and on machine Y in week 5 is forecast to be 15 hours. Each unit of
product 1 sold in week 5 gives a contribution to profit of £10 and each unit of
product 2 sold in week 5 gives a contribution to profit of £4.

It may not be possible to produce enough to meet your forecast demand for
these products in week 5 and each unit of unsatisfied demand for product 1
costs £3, each unit of unsatisfied demand for product 2 costs £1.

 Formulate the problem of deciding how much of each product to make in week 5 as a linear program.
 Solve this linear program graphically.

Solution

Note that the first part of the question is a forecasting question so it is solved
below.
For product 1, applying exponential smoothing with a smoothing constant of 0.7, we get:

M1 = Y1 = 23
M2 = 0.7Y2 + 0.3M1 = 0.7(27) + 0.3(23) = 25.80
M3 = 0.7Y3 + 0.3M2 = 0.7(34) + 0.3(25.80) = 31.54
M4 = 0.7Y4 + 0.3M3 = 0.7(40) + 0.3(31.54) = 37.46

The forecast for week 5 is just the smoothed average for week 4, M4 = 37.46, which we round to 37 (as we cannot have fractional demand).

For product 2, applying exponential smoothing with a smoothing constant of 0.7, we get:

M1 = Y1 = 11
M2 = 0.7Y2 + 0.3M1 = 0.7(13) + 0.3(11) = 12.40
M3 = 0.7Y3 + 0.3M2 = 0.7(15) + 0.3(12.40) = 14.22
M4 = 0.7Y4 + 0.3M3 = 0.7(14) + 0.3(14.22) = 14.07

The forecast for week 5 is just the smoothed average for week 4, M4 = 14.07, which we round to 14 (as we cannot have fractional demand).
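A sketch of the smoothing recursion Mt = 0.7Yt + 0.3Mt-1 used above, reproducing both forecasts:

# Sketch: the exponential smoothing recursion used above,
# M1 = Y1 and Mt = 0.7*Yt + 0.3*Mt-1, applied to both demand series.
def smooth(demands, alpha=0.7):
    m = demands[0]                       # M1 = Y1
    for y in demands[1:]:
        m = alpha * y + (1 - alpha) * m
    return m

print(round(smooth([23, 27, 34, 40]), 2))   # -> 37.46, i.e. forecast 37
print(round(smooth([11, 13, 15, 14]), 2))   # -> 14.07, i.e. forecast 14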

We can now formulate the LP for week 5 using the two demand figures (37
for product 1 and 14 for product 2) derived above.

Let

x1 be the number of units of product 1 produced

x2 be the number of units of product 2 produced

where x1, x2>=0

The constraints are:

15x1 + 7x2 <= 20(60) machine X

25x1 + 45x2 <= 15(60) machine Y

x1 <= 37 demand for product 1

x2 <= 14 demand for product 2

The objective is to maximise profit, i.e.

maximise 10x1 + 4x2 - 3(37 - x1) - 1(14 - x2)

i.e. maximise 13x1 + 5x2 - 125

The graph is shown below; from the graph we have that the solution occurs on the horizontal axis (x2 = 0) at x1 = 36, at which point the maximum profit is 13(36) + 5(0) - 125 = £343.
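A sketch checking this solution with SciPy's linprog (the demand limits enter as variable bounds):

# Sketch: checking the week-5 production LP with SciPy's linprog.
# maximize 13x1 + 5x2 - 125  s.t.  15x1 + 7x2 <= 1200, 25x1 + 45x2 <= 900,
#                                  0 <= x1 <= 37, 0 <= x2 <= 14
from scipy.optimize import linprog

res = linprog([-13.0, -5.0],
              A_ub=[[15.0, 7.0], [25.0, 45.0]],
              b_ub=[1200.0, 900.0],
              bounds=[(0, 37), (0, 14)],
              method="highs")
x1, x2 = res.x
print(x1, x2)                      # -> 36.0 0.0
print(13 * x1 + 5 * x2 - 125)      # -> 343.0 (maximum profit in £)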

REFERENCES

 http://en.wikipedia.org/wiki/Linear_programming

 http://mathworld.wolfram.com/LinearProgramming.html

 http://www.richland.edu/james/lecture/m116/systems/linear.html

 http://people.brunel.ac.uk/~mastjjb/jeb/or/morelp.html
