
MTH:2205 LINEAR PROGRAMMING

O. Kurama, PhD

January, 2025

Course outline

1. The general linear programming problem.
2. Geometrical solutions to linear programs.
3. The simplex method.
4. Duality of linear programming problems.
5. Sensitivity analysis / post-optimality analysis.

References

1. L. Brickman, Mathematical Introduction to Linear Programming and Game Theory.
2. G. B. Dantzig, Linear Programming and Extensions.

1 The general linear programming problem

Linear programming (LP) deals with the problem of optimizing (maximizing or minimizing) a linear function subject to linear constraints. A wide variety of practical problems, from nutrition, transportation, production planning, finance, and many other areas, can be modeled as linear programs. Linear programming is the branch of Operations Research that deals with linear optimization, alongside other branches such as networks, nonlinear programming, dynamic programming, and integer programming. Operations Research methods assist in decision making and in finding the best, or optimal, use of scarce resources: mathematical models are set up and an attempt is made to provide a quantitative basis for a decision. A linear programming model has three components: the decision variables, the objective function, and the restrictions or constraints. The idea is to solve the model with the aim of finding the best alternative(s) among the possible ones.
A Mathematical optimization problem is one in which some function is either maximized
or minimized with respect to a given set of restrictions. The function to be either min-
imized or maximized is called the objective function and the set of restrictions is called
the constraint set. Linear programs can be studied both algebraically and geometrically.
The two approaches are equivalent, but one or the other may be more convenient for
answering a particular question about a linear program.
The algebraic point of view is based on writing the linear program in a particular way,
called standard form. Then the coefficient matrix of the constraints of the linear program
can be analyzed using the tools of linear algebra. For example, we might ask about the
rank of the matrix, or for a representation of its null space. The geometric point of
view is based on the geometry of the feasible region and uses ideas such as convexity
to analyze the linear program. It is less dependent on the particular way in which the
constraints are written. Using geometry (particularly in two-dimensional problems where
the feasible region can be graphed) makes many of the concepts in linear programming
easy to understand, because they can be described in terms of intuitive notions such as
moving along an edge of the feasible region. The feasible region is the set of solutions to a
finite number of linear inequality or equality constraints. In this course unit the feasible region will always be taken as a subset of Rn, while the objective function is a real-valued function from Rn to R. A linear programming problem is an optimization problem over Rn in which the objective function and constraints are linear.

Definition 1.1
A function f is said to be linear if it takes the form f(x) = cx = c1x1 + c2x2 + . . . + cnxn, where ci ∈ R for i = 1, 2, . . . , n and x = (x1, x2, . . . , xn).

1.1 Applications of Linear programming

Linear programming is an extremely powerful tool for addressing a wide range of applied optimization problems, including:

1. Resource allocation.
2. Management and decision making.
3. Transportation, Assignment, and Production scheduling problems.
4. And many more.

1.2 The general form of the LP problem

The general LP problem takes the following form,

min/max z = c1x1 + c2x2 + . . . + cnxn
s.t. the constraints (restrictions)
ai1x1 + ai2x2 + . . . + ainxn {≤, =, ≥} bi, i = 1, 2, . . . , m,
and x1, x2, . . . , xn ≥ 0.

The general linear programming problem can also be rewritten in matrix form as
min/max z = cx
s.t. Ax {≤, =, ≥} b
and x ≥ 0
where A = (aij) is an m × n matrix, x = (x1, x2, . . . , xn)^t, c = (c1, c2, . . . , cn), and b = (b1, b2, . . . , bm)^t.

An example of an LP problem

min w = x1 − x2 + 7x3
s.t. −2x1 + 3x2 − x3 ≥ 7
x2 + x3 ≥ 1
x1 + 3x3 ≥ 3
x1, x2, x3 ≥ 0

or in matrix form:

min w = (1, −1, 7)(x1, x2, x3)^t subject to

[ −2  3 −1 ] [ x1 ]   [ 7 ]
[  0  1  1 ] [ x2 ] ≥ [ 1 ],   x1, x2, x3 ≥ 0,
[  1  0  3 ] [ x3 ]   [ 3 ]

where

    [ −2  3 −1 ]        [ 7 ]
A = [  0  1  1 ],   b = [ 1 ],   c = (1, −1, 7).
    [  1  0  3 ]        [ 3 ]
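The data of this example map directly onto a nested-list representation; a minimal sketch (the candidate point used below is an arbitrary illustration, not part of the notes):

```python
# Data of the example LP: min w = cx subject to Ax >= b, x >= 0.
A = [[-2, 3, -1],
     [ 0, 1,  1],
     [ 1, 0,  3]]
b = [7, 1, 3]
c = [1, -1, 7]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def is_feasible(x, A, b):
    """Check Ax >= b and x >= 0 componentwise."""
    return all(xi >= 0 for xi in x) and \
           all(dot(row, x) >= bi for row, bi in zip(A, b))

x = [0, 3, 1]                       # an illustrative candidate point
print(is_feasible(x, A, b))         # True: all three constraints hold
print(dot(c, x))                    # its objective value, 0 - 3 + 7 = 4
```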

1.3 Changing the sense of optimization

A maximization problem with objective function z = c1x1 + c2x2 + . . . + cnxn is equivalent to a minimization problem with the same constraints but with objective function −z = −c1x1 − c2x2 − . . . − cnxn. The two problems have the same optimal solution(s), and their optimal values are equal in magnitude but opposite in sign.
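The equivalence can be illustrated on a toy problem; a minimal sketch in which the candidate points stand in for feasible extreme points (they are illustrative, not from the notes):

```python
# Maximizing z = cx and minimizing -z = -cx over the same set pick the same
# point; the optimal values are equal in magnitude and opposite in sign.
c = [3, 2]

def z(x):
    return c[0]*x[0] + c[1]*x[1]

candidates = [(0, 0), (4, 0), (0, 5), (2, 2)]   # stand-ins for extreme points

x_max = max(candidates, key=z)                  # maximize z
x_min = min(candidates, key=lambda x: -z(x))    # minimize -z
print(x_max, x_min)         # same optimizer
print(z(x_max), -z(x_min))  # 12 and -12
```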

1.4 Changing the sense of an inequality

The inequalities

∑_{j=1}^{n} aij xj ≤ bi,   i = 1, 2, . . . , m,

can be rewritten in the form

∑_{j=1}^{n} (−aij) xj ≥ −bi.

That is, the sense of an inequality is changed by multiplying through by −1.

1.5 Changing an inequality into equality

An inequality of the form

∑_{j=1}^{n} aij xj ≤ bi

can be written as an equality

∑_{j=1}^{n} aij xj + si = bi

where si ≥ 0, while

∑_{j=1}^{n} aij xj ≥ bi

will be written as

∑_{j=1}^{n} aij xj − ti = bi

where ti ≥ 0. Here si and ti are referred to as slack and surplus variables respectively.
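The conversion can be sketched as a small routine (the function name and the list-based representation are illustrative assumptions, not part of the notes):

```python
# Convert {<=, >=} inequality rows to equalities by appending one extra
# column per row: +1 for a slack s_i on a '<=' row, -1 for a surplus t_i
# on a '>=' row.
def to_equalities(A, senses):
    """Return the augmented coefficient matrix with slack/surplus columns."""
    m = len(A)
    out = []
    for i, (row, sense) in enumerate(zip(A, senses)):
        extra = [0] * m
        extra[i] = 1 if sense == "<=" else -1
        out.append(list(row) + extra)
    return out

A = [[2, 3], [1, 4]]
senses = ["<=", ">="]
print(to_equalities(A, senses))
# Rows now read 2x1 + 3x2 + s1 = b1 and x1 + 4x2 - t2 = b2.
```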

1.6 Standard and Canonical forms of Linear Programming problems

A linear program is a problem of either maximization or minimization of a linear function subject to a finite number of linear constraints. Every linear programming problem can be presented in one of two canonical forms, namely:

(i) Maximization canonical form, that is of the form

Maximize z = cx
s.t Ax ≤ b
and x ≥ 0

(ii) Minimization canonical form, that is of the form

Minimize w = cx
s.t Ax ≥ b
and x ≥ 0

Example 1.1
min w = 2x1 − 11x2 − 20x3
s.t. −2x1 + 3x2 − x3 ≥ 7
2x1 + x2 − 20x3 ≥ 6
x1 + 2x2 + 3x3 ≥ 3
x1, x2, x3 ≥ 0

Example 1.2
max w = 4x1 − 2x2 − 5x3
s.t 2x1 + x2 − x3 ≤ 7
2x1 + x2 − 2x3 ≤ 13
x1 + 3x3 ≤ 3
x1 , x2 , x3 ≥ 0

Exercise 1.1 Write the following minimization linear programming problems in canonical minimization form and hence convert them to canonical maximization linear programming problems;

(i)
min w = x1 − 12x2 − 2x3
s.t − 2x1 + 3x2 − x3 ≤ 7
2x1 + x2 − 20x3 ≥ −30
x1 + 3x3 ≤ 3
x1 , x2 , x3 ≥ 0

(ii)
min w = x1 + 17x2 + x3
s.t − x1 + 2x2 − 6x3 ≤ −10
− x2 − 2x3 ≥ −6
2x1 + 11x3 ≤ 19
x1 , x2 , x3 ≥ 0

Definition 1.2
A linear programming problem is said to be in standard form if all its restrictions (con-
straint set) are equalities.

The linear program in matrix form below is in standard form.


min z = cx
s.t Ax = b
and x ≥ 0

with b ≥ 0. Here x and c are vectors of length n, b is a vector of length m, and A is an
m × n matrix called the constraint matrix. Important things to notice are:

1. it is a minimization problem,
2. all the variables are constrained to be nonnegative,
3. all the other constraints are represented as equations,
4. the components of the right-hand side vector b are all nonnegative.

The standard form will be the form of a linear program to be used within the simplex
method. Notice that the minimization problem can always be changed into a maximiza-
tion problem without loss of generality.

1.7 Formulating Linear Programming Problems

To formulate a mathematical model for an LP problem, we first define the decision variables, e.g., x1, x2, ..., xn. Next we identify the data of the problem, e.g., capacities, costs, etc. Lastly we set up the objective function and the restrictions (constraints) in terms of the decision variables.
Example
A pipe manufacturing company produces two types of pipes, Type I and Type II. The storage space, raw material requirements and production rates are given below:

Resources          Type I           Type II          Company availability
Storage space      5 m²/pipe        3 m²/pipe        750 m²
Raw materials      6 kg/pipe        4 kg/pipe        800 kg/day
Production rate    30 pipes/hour    20 pipes/hour    8 hours/day

The profit for selling one Type I pipe is 10 dollars, and that for Type II is 8 dollars. The
pipes produced each day are taken out by trucks to sales outlets in the early morning of
the next day before a new day’s manufacturing work starts. Our objective is to formulate
for the company a linear programming model which can determine how many pipes of
each type should be manufactured each day so that the total profit can be maximized.
Solution
Let Z = total profit
x1 = number of Type I pipes produced each day
x2 = number of Type II pipes produced each day

Since our objective is to maximize profit, we write an objective function,


max Z = 10x1 + 8x2

Constraints are:
Storage space: 5x1 + 3x2 ≤ 750
Raw materials: 6x1 + 4x2 ≤ 800
Working hours: x1/30 + x2/20 ≤ 8
x1 ≥ 0
x2 ≥ 0, since the decision variables cannot be negative.
If the time is converted from hours to minutes, we can remove the fraction in constraints
to have, 2x1 + 3x2 ≤ 480. Hence the linear programming problem can be stated as:
max Z = 10x1 + 8x2
Subject to:
5x1 + 3x2 ≤ 750
6x1 + 4x2 ≤ 800
2x1 + 3x2 ≤ 480
x1 , x2 ≥ 0
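The extreme-point idea behind the graphical method can be brute-forced for this model; a hedged sketch (the helper names are illustrative, and pairwise intersection of constraint boundaries, including the axes, stands in for reading corners off a graph):

```python
from itertools import combinations

# Pipe problem: max 10*x1 + 8*x2 subject to the three constraints above.
# The last two rows encode x1 >= 0 and x2 >= 0 as -x1 <= 0, -x2 <= 0.
A = [[5, 3], [6, 4], [2, 3], [-1, 0], [0, -1]]
b = [750, 800, 480, 0, 0]

def solve2(r1, c1, r2, c2):
    """Intersect the lines r1·x = c1 and r2·x = c2 (Cramer's rule)."""
    det = r1[0]*r2[1] - r1[1]*r2[0]
    if det == 0:
        return None                      # parallel boundaries
    return ((c1*r2[1] - r1[1]*c2) / det, (r1[0]*c2 - c1*r2[0]) / det)

def feasible(p):
    return all(row[0]*p[0] + row[1]*p[1] <= bi + 1e-9 for row, bi in zip(A, b))

vertices = []
for i, j in combinations(range(len(A)), 2):
    p = solve2(A[i], b[i], A[j], b[j])
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: 10*p[0] + 8*p[1])
print(best, 10*best[0] + 8*best[1])   # (48, 128) with daily profit 1504
```

The computed optimum, 48 Type I and 128 Type II pipes per day, lies at the intersection of the raw-material and working-hours constraints.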

Assignment 1.

1. A farmer would like to utilize his piece of land of 500 square meters to grow maize
(M ), beans (B) and ground nuts (G). He has decided that wherever he is to plant
beans or ground nuts he will also plant some maize. However he will not mix the
beans and the groundnut. Also he may have maize planted separate from the beans
and the ground nuts. The time needed to look after each crop until after harvest
and the expected profit in dollars are given in the table below

         hours per m²    profit per m² from
                         maize   beans   g.nuts
M        0.4             80      0       0
B ∩ M    0.7             5       100     0
G ∩ M    0.9             10      0       120

According to the demands for these crops at least 50 square meters of land should
be planted by each of the crops in the table. The ratio of the area used for G. nuts
to that used for beans should not be less than 2 : 3. If the total time to be spent
on the crops during any season should not exceed 300 hours, formulate the problem
of how the farmer should partition his land for the crops such that his total profit
is maximized. If all the land has to be utilized for these crops reduce the number of
variables in the problem to two, so that the problem can be solved graphically (Do
not solve it).
2. From two mines 1 and 2, Iron ore is to be transported to three different steel plants.
The cost of transporting one ton of Iron ore to a plant is as given in the table below.
Formulate a linear model to minimize the cost of transportation.

                                  Steel plants           Available ore at
                                  1      2      3        the mine (tons)
mine 1                            9      16     28       103
mine 2                            14     29     19       197
Tons of ore required at plant     71     133    96

3. A nutritionist is planning a meal consisting of two foods A and B. Each gram of A contains 5 units of fats, 2 units of carbohydrates and 4 units of proteins; each gram of B contains 2 units of fats, 3 units of carbohydrates and 4 units of proteins. He wants the meal to provide at least 18 units of fats, 12 units of carbohydrates and 24 units of proteins. If each gram of A costs 200/= and that of B costs 250/=, formulate a linear programming problem to be used to find how many grams of each type of food should be used so as to minimize the cost of the meal yet satisfy the requirements of the nutritionist.
4. A manufacturer has three machines A, B and C with which he produces two different products x1 and x2. The machining times required per product are as follows:
Product    Machine times (hrs)
           A    B    C
x1         3    2    0
x2         2    5    5

The estimated profits per product are 7,500 UGX and 13,500 UGX for x1 and x2 respectively. The amounts of time each machine is available for making the products during any given week are restricted as follows:
Machine A is available for at most 21 hours,
Machine B is available for at most 40 hours,
Machine C is available for at most 30 hours.
The manufacturer's objective is to make as much profit as he can. Formulate the problem as a linear programming problem that can be used to find this profit.
5. A pharmaceutical firm produces two products H and M . Each unit of product H
requires 3 hours of operation I and 4 hours of operation II, while each unit of product
M requires 4 hours of operation I and 5 hours of operation II. Time available for
operation I and II are 20 hours and 26 hours respectively. Product H sells at a
profit of 1, 000 U GX per unit, while M sells at a profit of 2, 000 U GX. Determine
the quantities of H and M to be produced, so that the profit earned is maximum.
6. An advertising company is planning to advertise using three different media: television, radio and magazines. It is expected that certain numbers of women will be reached on average using each medium, and it is also known approximately what proportion of the general population will be reached by each unit of advertising.

The cost per unit of advertising in each of the media as well as the expected number
of women reached for each unit of each type of advertising are given in the following
table:

             Advertising cost/shs    Potential no. of customers    Women reached
Television   50,000                  900,000                       400,000
Radio        20,000                  500,000                       200,000
Magazine     10,000                  200,000                       150,000

The company only has shs 800,000 available in its advertising budget and wants to make sure that:
(a) at least 3 million people are reached by the advertisement,
(b) at least 1,800,000 women in particular are reached by the advertisement,
(c) television advertising is limited to shs 400,000,
(d) at least three units of advertising are bought on television, and at least two units and no more than ten on radio.
Set up a linear programming model to achieve these goals with minimal cost.

2 Geometrical solutions to linear programs

In this chapter we examine optimal solutions using the graphical approach. The graphical method can only solve linear programming models with two decision variables; for LP problems with more decision variables, we find solutions using the simplex method. Given any two-dimensional linear programming problem, we can find the optimal solution by the following procedure.

1. Plot the constraint set/ equalities in the LP problem.


2. Shade out the unwanted region to identify the feasible region.
3. Plot the objective function on the graph starting with a value of z(w) = 0 and shift
the objective function in order to identify a point that either maximizes or minimizes
the LP objective.

Example 2.1 Use the graphical method to solve the linear program.

maximize z = x1 + x2
s.t 2x1 + 3x2 ≤ 6
x1 + 4x2 ≤ 4
x1 , x2 ≥ 0

Solution

The feasible region is graphed in Figure 1. The figure also includes lines corresponding
to various values of the objective function. For example, the line z = 1 = x1 + x2 passes
through the points (1, 0) and (0, 1), and the parallel line z = 0 passes through the origin.
The goal of the linear program is to maximize the value of z. As Figure 1 illustrates, z
increases as these lines move upward and to the right. The objective z cannot be increased
indefinitely. Eventually the z line ceases to intersect the feasible region, indicating that
there are no longer any feasible points corresponding to that particular value of z. The
maximum occurs when z = 3 at the point (3, 0), that is, at the last point where an
objective line intersects the feasible region. This is a corner (extreme point) of the feasible
region.
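The corner-point reading of Figure 1 can be checked numerically; a small sketch evaluating z at each extreme point (the interior corner is the computed intersection of the two constraint lines, 2x1 + 3x2 = 6 and x1 + 4x2 = 4):

```python
# Extreme points of {2x1 + 3x2 <= 6, x1 + 4x2 <= 4, x >= 0}: the origin,
# the two axis intercepts, and the intersection of the constraint lines.
corners = [(0, 0), (3, 0), (0, 1), (2.4, 0.4)]

z = lambda p: p[0] + p[1]
best = max(corners, key=z)
print(best, z(best))   # the maximum z = 3 occurs at (3, 0), as in the text
```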

Figure 1: Solution to Example 2.1

Example 2.2 Graphically solve the LP below,

max Z = x1 + x2
s.t − 2x1 + x2 ≤ 2
x1 − 2x2 ≤ 4
x1 + x2 ≤ 5
x1 , x2 ≥ 0

Exercise 2.1 Using the graphical method, find the optimal solution to the following linear programming problems.

(i)
max Z = 2x1 + 3x2
s.t 3x1 + x2 ≤ 2
4x1 + 2x2 ≤ 44
4x1 − x2 ≤ 14
x1 , x2 ≥ 0

(ii)
minimize w = x1 + x2
s.t − x1 + x2 ≤ 3
2x1 + x2 ≤ 18
x2 ≥ 6
x1 , x2 ≥ 0

(iii) On a piece of graph paper, clearly show the region of the following constraint set.
x1 + x2 ≥ 1
x1 − x2 ≥ −1
3x1 + 2x2 ≤ 6
x1 − 2x2 ≤ 1
x1 , x2 ≥ 0

Over the same region above, geometrically solve:


(i) min Z1 = 2x1 − x2
(ii) max Z2 = 6x1 + 4x2 .

2.1 Geometry of linear programs

In this section, we explore the geometry of linear programs and gain geometric insight
into optimal solutions. The corresponding algebraic representations of the geometry are
examined. The feasible set of a linear program can be bounded, unbounded, or empty (infeasible). Here, we explore additional geometric properties of the feasible sets of general linear programs that are consistent, i.e., whose feasible set is non-empty. It is fundamental to note that if a linear program has a finite optimal solution, then one occurs at an extreme point (corner point).

Definition 2.1
The set H = {x ∈ Rn : cx = k} where c is a nonzero vector in Rn and k ∈ R is a constant
is called a hyperplane in Rn .

Example 2.3
In R2 a hyperplane represents a line in two dimensions; in R3 a hyperplane represents a plane in three dimensions.

Definition 2.2
A set of the form H1 = {x ∈ Rn : cx ≤ k} or H1 = {x ∈ Rn : cx ≥ k} is called a closed halfspace of the hyperplane.
A set of the form H2 = {x ∈ Rn : cx < k} or H2 = {x ∈ Rn : cx > k} is called an open halfspace of the hyperplane.

Definition 2.3
A straight line with both ends at finite distances is called a line segment. The line segment from a to b is the set of points x = λa + (1 − λ)b, 0 ≤ λ ≤ 1, where a and b are points in R2 (or Rn in general).

Definition 2.4 A subset S of Rn is called a convex set if for any two distinct points x1, x2 ∈ S, the line segment joining x1 and x2 lies in S; that is, whenever x1, x2 ∈ S, then x = λx1 + (1 − λ)x2 ∈ S for all 0 ≤ λ ≤ 1.
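The membership condition in this definition is easy to check numerically for a simple set; a sketch for the unit disc (the sample points and λ grid are illustrative choices):

```python
# For the convex set S = {(x, y) : x^2 + y^2 <= 1}, every convex combination
# of two members stays in S; checked here on a grid of lambda values.
def in_S(p):
    return p[0]**2 + p[1]**2 <= 1 + 1e-12

p1, p2 = (1.0, 0.0), (0.0, -1.0)       # two points of S
for k in range(11):
    lam = k / 10
    x = (lam*p1[0] + (1 - lam)*p2[0],
         lam*p1[1] + (1 - lam)*p2[1])  # lam*p1 + (1 - lam)*p2
    assert in_S(x)
print("all sampled convex combinations stayed in S")
```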

Definition 2.5

Let x1, x2, . . . , xr be points in Rn and λ1, λ2, . . . , λr a set of scalars. Then

x = ∑_{j=1}^{r} λj xj

is called a convex linear combination (or a convex combination) of x1, x2, . . . , xr when λj ≥ 0 for all j and ∑_{j=1}^{r} λj = 1.
Generally, a set S is convex if and only if, given points x1, x2, . . . , xn ∈ S, every point x = λ1x1 + λ2x2 + . . . + λnxn with 0 ≤ λi ≤ 1 and ∑_{i=1}^{n} λi = 1 is in S.

Definition 2.6
A polyhedron is algebraically defined as the set of solutions to the constraint set of a
linear programming problem or simply for a system of linear inequalities.

Definition 2.7
A bounded convex polyhedron is called a convex polytope.

Definition 2.8
The convex hull of a set P, denoted by P′, is the intersection of all convex sets Ci which contain P; that is, P′ = ∩i Ci where each Ci is convex and P ⊂ Ci. The convex hull of a finite set of points is called the convex polytope spanned by these points.
Note that a convex polytope is spanned by its extreme points, i.e., any point in the polytope can be written as a convex combination of the extreme points.

Definition 2.9
A point x in a convex set S is called an extreme point of S if there are no distinct points x1, x2 ∈ S such that x = λx1 + (1 − λ)x2 for some 0 < λ < 1; i.e., a point x in a convex set S is an extreme point of S if it is not an interior point of any line segment in S.
Note that extreme points of a convex set S are always boundary points, but not vice versa.

Definition 2.10
A point x is a boundary point of a set S of vectors if for every number ε > 0 (however small), at least one point within distance ε of x is in S, and at least one point within distance ε of x is outside S.

Definition 2.11
A rectangle in Rn is a set R = {x ∈ Rn such that ai ≤ xi ≤ bi } where ai , bi are real
numbers and ai < bi .

Definition 2.12
A bounded convex set is one which can be enclosed in a rectangle in Rn

Theorems about convex sets

Theorem 2.1 Every hyperplane H is a convex set.

Proof:
Let x1, x2 ∈ H. It is required to show that x = λx1 + (1 − λ)x2 ∈ H for 0 ≤ λ ≤ 1.
By definition,
x1 ∈ H ⇒ cx1 = k
x2 ∈ H ⇒ cx2 = k
Thus,
cx = c(λx1 + (1 − λ)x2) = λcx1 + (1 − λ)cx2
= λk + (1 − λ)k = k.
Hence x ∈ H.

Theorem 2.2
Let A be an m × n matrix and b a vector in Rm. Then the set of solutions to the linear system of equations Ax = b is a convex set.
Proof:
Denote the set of all solutions to the linear system as S. We show that if x1, x2 ∈ S, then x = λx1 + (1 − λ)x2 ∈ S for 0 ≤ λ ≤ 1.
By definition,
x1 ∈ S ⇒ Ax1 = b
x2 ∈ S ⇒ Ax2 = b
Thus,
Ax = A(λx1 + (1 − λ)x2) = λAx1 + (1 − λ)Ax2
= λb + (1 − λ)b = b.
Hence x ∈ S, and therefore the set of solutions to the linear system Ax = b is a convex set.

Theorem 2.3 The intersection of a finite collection of convex sets is convex.


Proof:
Let C1, . . . , Cn ⊆ Rn be a finite collection of convex sets, and let C = ∩_{i=1}^{n} Ci be the set formed from their intersection. Choose x1, x2 ∈ C and 0 ≤ λ ≤ 1, and consider x = λx1 + (1 − λ)x2. By definition of C we have x1, x2 ∈ Ci for each i, and by convexity of each Ci, x ∈ Ci for each i. Therefore x ∈ C, and C is a convex set.

Theorem 2.4

1. A convex polytope is bounded.

2. A closed bounded convex set in Rn is the convex hull of its extreme points.
3. A closed and bounded convex set is a convex polytope if and only if it has finitely
many extreme points.

Theorem 2.5

1. A convex polyhedron is the intersection of finitely many closed halfspaces.


2. In a convex polytope, every point is a convex combination of extreme points.

Theorem 2.6 Suppose that the set of all feasible solutions to a linear programming problem is bounded. Then one of the extreme points is an optimal solution.
Proof: The proof is given for a maximization linear programming problem. Let x1, x2, x3, . . . , xr be the extreme points of S. Let zm = cxm be the maximum of zi = cxi for i = 1, 2, . . . , r. Let x be any other feasible solution. Since the set of all feasible solutions is a bounded polyhedron (a polytope), x = ∑_{i=1}^{r} λi xi where λi ≥ 0 and ∑_{i=1}^{r} λi = 1, and

z = c(∑_{i=1}^{r} λi xi) = ∑_{i=1}^{r} λi (cxi) = ∑_{i=1}^{r} λi zi ≤ ∑_{i=1}^{r} λi zm = zm

since zi ≤ zm and λi ≥ 0. So z ≤ zm, and the maximum value is attained at the extreme point xm.

Note 2.1

(i) When solving linear programming problems, the search for an optimal solution is restricted to extreme points.
(ii) An unbounded feasible set may give an unbounded objective (i.e., no optimal value exists), or a finite optimal value may exist, depending on the objective function of the LP problem.
(iii) Some LPs may have more than one optimal solution attaining the same optimal value.
(iv) At optimality, the feasible set is completely contained within one of the closed halfspaces determined by the contour of the objective function.

Special cases in solving linear programs

(1) Multiple solutions: A linear program can have multiple optimal solutions, depending on the nature of the constraint set in relation to the objective function. If a binding constraint is parallel to (a scalar multiple of) the objective function, then there are typically infinitely many optimal solutions.
(2) Unboundedness: If the feasible region is open ended in the direction of optimization, the problem has no finite optimal value; an unbounded feasible region may still admit a finite optimum when the objective improves only in a bounded direction.
(3) Infeasibility: If the constraints admit no common point, then there is no feasible region; such problems are infeasible.
(4) Redundancy: If a certain constraint does not affect the feasible region, then it is redundant. Such constraints can be removed from the problem to reduce the cost of solving the linear program.

The following examples illustrate some of the special cases mentioned above.

Example 2.4
maximize z = 120x1 + 100x2
s.t 2x1 + 2x2 ≤ 8
5x1 + 3x2 ≤ 15
x1 , x2 ≥ 0

[has a solution at the extreme point (3/2, 5/2)]

Example 2.5
maximize z = 2x1 + 5x2
s.t 2x1 + 3x2 ≥ 12
3x1 + 4x2 ≤ 12
x1 , x2 ≥ 0

[the two constraints admit no common point with x1, x2 ≥ 0, so the problem is infeasible]

Example 2.6
maximize z = 2x1 + 3x2
s.t x1 + 3x2 ≤ 9
2x1 + 3x2 ≤ 12
x1 , x2 ≥ 0

Solution Both (6, 0) and (3, 2) are optimal solutions, so every point on the line segment

(x1, x2) = λ(6, 0) + (1 − λ)(3, 2) = (3 + 3λ, 2 − 2λ),   0 ≤ λ ≤ 1,

is optimal: on this line segment z = 2x1 + 3x2 = 12, so any point on the segment attains the optimal value.
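The multiple-optima claim for Example 2.6 can be checked numerically; a small sketch (the sampled λ values are arbitrary):

```python
# Every point on the segment between (6, 0) and (3, 2) is feasible for
# Example 2.6 and attains z = 12.
def point(lam):
    return (3 + 3*lam, 2 - 2*lam)          # lam*(6,0) + (1-lam)*(3,2)

def feasible(x1, x2):
    return (x1 + 3*x2 <= 9 + 1e-9 and
            2*x1 + 3*x2 <= 12 + 1e-9 and
            x1 >= 0 and x2 >= 0)

for lam in [0, 0.25, 0.5, 0.75, 1]:
    x1, x2 = point(lam)
    print(lam, feasible(x1, x2), 2*x1 + 3*x2)   # feasible, z stays at 12
```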
Consider the LP

max z = cx
s.t Ax = b
and x ≥ 0
where A = (aij ) is an m×n matrix, x = (x1 , x2 , . . . , xn )t ∈ Rn , c = (c1 , c2 , . . . , cn ) ∈ Rn ,
b = (b1 , b2 , . . . , bm )t ∈ Rm .
Assume that m ≤ n and that some m columns of A are linearly independent; then a set of feasible solutions can be determined. The following theorem gives a hint on the nature of the solution to this LP problem.

Theorem 2.7

1. Let S be the set of all feasible solutions to the standard LP above. Then the point X = (x1, x2, . . . , xm, 0, 0, . . . , 0), where xj > 0 for j = 1, 2, . . . , m and the corresponding columns of A are linearly independent, is an extreme point of S.
2. If X = (x1, x2, . . . , xn) is an extreme point of S, then the columns of A which correspond to positive xj form a linearly independent set of vectors in Rm.
3. At most m components of an extreme point of S can be positive; the rest must be zero.

Note 2.2
A main insight from the geometry of the two-dimensional examples is that if an LP has a finite optimal solution, then an optimal solution is attained at an extreme point. This observation holds in higher dimensions too, and is called the Fundamental Theorem of Linear Programming. It suggests that one can plot the feasible set, find all the extreme points, and then evaluate them to find the optimal solution if it exists. Unfortunately, this strategy is effective only for small problems, e.g., LPs with two variables; graphing feasible sets in higher dimensions is not a practical endeavor. However, the insights from the geometry of LPs in low dimensions hold for higher-dimensional problems. Extreme points can be given an algebraic representation corresponding to the geometric notion, and this algebraic representation allows higher-dimensional LP problems to be considered without relying on the geometry of the underlying problem.

Revision assignment 2.
(To be given later)

3 The Simplex Method

3.1 Basic introduction

When there are three or more decision variables in a linear programming model, the graphical method is no longer suitable for solving the model; the simplex method is used instead. The simplex method is an iterative method for solving a linear programming problem written in standard form. It was conceived by Dantzig (1948), still remains a powerful method, and is often the main strategy for solving linear programs in commercial software. From the Fundamental Theorem of Linear Programming, if an LP has a finite optimal solution, then it can be attained at an extreme point and therefore at some basic feasible solution.
The basic strategy of the simplex method is to explore the extreme points of the feasible
region of a linear program to find the optimal extreme point. However, in practice, the
simplex method will in most cases not need to explore all possible extreme points before
finding an optimal one. The strategy of the simplex method is as follows: given an
initial basic feasible solution, the simplex method determines whether the basic feasible
solution is optimal. If it is optimal, then the method terminates, else, another basic
feasible solution is generated whose objective function value is better or no worse than
the previous one, optimality is checked and so on, until an optimal basic feasible solution
is obtained.

Definition 3.1 Every non-singular m × m submatrix B of A is called a basis for the standard LP problem.

Note 3.1

(i) Every m × m submatrix B of A with |B| ≠ 0 is a basis for the standard LP problem.
(ii) Any m linearly independent columns of A form a basis for the standard LP problem.
(iii) The assumption that A has rank m implies that there is at least one basis for the standard LP problem.

Definition 3.2 B is a feasible basis for the standard LP problem if B is a basis and B^{-1}b ≥ 0.

Example 3.1
maximize z = 120x1 + 100x2
s.t 2x1 + 2x2 ≤ 8
5x1 + 3x2 ≤ 15
x1 , x2 ≥ 0

In standard form:
maximize z = 120x1 + 100x2
s.t. 2x1 + 2x2 + x3 = 8
5x1 + 3x2 + x4 = 15
x1, x2, x3, x4 ≥ 0

The interest here is to identify and create basic variables from which the optimal value can be obtained. For example, making x1 and x2 basic variables gives

x1 − (3/4)x3 + (1/2)x4 = 3/2   and   x2 + (5/4)x3 − (1/2)x4 = 5/2.
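The basic-variable computation in Example 3.1 can be checked numerically; a minimal sketch using Cramer's rule for the 2 × 2 basis (pure Python, no linear-algebra library assumed):

```python
# Example 3.1 with x1, x2 basic: xB = B^{-1} b, where B holds the x1 and x2
# columns of the standard-form constraints and b the right-hand sides.
B = [[2, 2],
     [5, 3]]
b = [8, 15]

det = B[0][0]*B[1][1] - B[0][1]*B[1][0]          # det = -4, so B is a basis
B_inv = [[ B[1][1]/det, -B[0][1]/det],
         [-B[1][0]/det,  B[0][0]/det]]

xB = [B_inv[0][0]*b[0] + B_inv[0][1]*b[1],
      B_inv[1][0]*b[0] + B_inv[1][1]*b[1]]
print(xB)   # [1.5, 2.5], i.e. x1 = 3/2, x2 = 5/2 as in the text
```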

3.2 Basis presentation of a linear program

Here we split the set of variables into basic and non-basic parts to obtain

z = cx = (cB cR)(xB, xR)^t   and   Ax = b ⇒ (B : R)(xB, xR)^t = b,

and therefore

(I : B^{-1}R)(xB, xR)^t = B^{-1}b,   or   xB = B^{-1}b − B^{-1}R xR    (1)

is the basis presentation of the problem. An assumption here is that the first m columns of A make up the basis.
Let b̄i, i = 1, 2, . . . , m, be the components of B^{-1}b, and let āj, j = m + 1, m + 2, . . . , n, be the columns of B^{-1}R. Then (1) is equivalent to

x1 + ā1,m+1 xm+1 + . . . + ā1,n xn = b̄1
x2 + ā2,m+1 xm+1 + . . . + ā2,n xn = b̄2
. . .
xm + ām,m+1 xm+1 + . . . + ām,n xn = b̄m

Definition 3.3 The vector (xB , 0) ∈ Rn is a basic solution of the standard LP problem
relative to the basis B.

Definition 3.4 The m components of xB are called basic variables (or variables in the
solution), the n − m components of xR are called non basic variables (or variables not in
the solution).

Definition 3.5 (xB , 0) is called a basic feasible solution of the standard LP problem if
(xB , 0) ≥ 0.

Note 3.2 A feasible basis leads to a basic feasible solution since xB = B^{-1}b ≥ 0.

Theorem 3.1 For the standard LP problem, every basic feasible solution is an extreme
point and conversely every extreme point is a basic feasible solution.

Note 3.3 In the process of finding the solution, we:

1. Get a basis B that yields a basic feasible solution. If it is not possible to find such a B, then the problem has no solution (the solution space is empty).
2. Given a basis B, find another basis B1 such that cB xB < cB1 xB1; if this is not possible, then B gives the required solution for a maximization problem.

Consider the standard LP problem with A an m × n matrix, that is, m equalities in n variables. Split the matrix A into a square matrix B and another matrix R that does not necessarily need to be square, A = [B : R], and correspondingly decompose x into xB and xR, where xB = (x1, x2, . . . , xm) and xR = (xm+1, . . . , xn). Hence,

Ax = (B : R)(xB, xR)^t = BxB + RxR = b.

Here we have a choice of assigning values to n − m of the variables and then solving for the remaining ones; for example, setting xR = 0 gives BxB = b ⇒ xB = B^{-1}b.

Definition 3.6 From Ax = BxB + RxR = b, if we let xR = 0 then xB = B −1 b is called


the basic solution associated with B to the linear programming problem.
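Numerically, a basic solution is just the solution of the square system BxB = b. Below is a minimal sketch using exact rational arithmetic (Gauss–Jordan elimination); the 2 × 2 basis B and right-hand side b are illustrative values, not taken from the text.

```python
from fractions import Fraction as F

def solve(B, b):
    """Gauss-Jordan solve of B x = b for a small square system, exactly."""
    n = len(B)
    # augmented matrix [B | b] with Fraction entries
    M = [[F(B[i][j]) for j in range(n)] + [F(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]          # bring a nonzero pivot up
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [M[r][j] - f * M[col][j] for j in range(n + 1)]
    return [M[i][-1] for i in range(n)]

# hypothetical 2x2 basis and right-hand side
B = [[1, 2], [3, -1]]
b = [5, 1]
xB = solve(B, b)   # xB == [1, 2]
```

For this illustrative B and b the routine returns [1, 2], so the associated basic solution is (1, 2, 0, . . . , 0) once the non-basic zeros are appended.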

3.3 Vector to enter the basis

Apply the basis presentation decompositions to the LP problem. The objective function
will be z = cx = cB xB + cR xR . When we set xR = 0 and therefore have xB = B −1 b, then
the objective function becomes z = cB xB = cB B −1 b.

Now suppose xR ≠ 0. Then,
max z = cB xB + cR xR
subject to,
BxB + RxR = b        (2)
Multiplying equation (2) through by B⁻¹ we get
xB + B⁻¹RxR = B⁻¹b        (3)
which implies that xB = B⁻¹b − B⁻¹RxR .
Using this result in the objective function we get,
z = cB [B⁻¹b − B⁻¹RxR ] + cR xR
  = cB B⁻¹b − cB B⁻¹RxR + cR xR
  = cB B⁻¹b + [cR − cB B⁻¹R]xR
  = z̄ + [cR − cB B⁻¹R]xR

where z̄ = cB B⁻¹b, and xB = x̄ − B⁻¹RxR where x̄ = B⁻¹b (the basic solution).


From xB = B⁻¹b − B⁻¹RxR , let Y = B⁻¹R, which is an m × (n − m) matrix:

      | y11   . . .  y1,n−m |
      |  .    . . .    .    |
Y =   | yi1   . . .  yi,n−m |
      |  .    . . .    .    |
      | ym1   . . .  ym,n−m |

The ith constraint is then given by,

xi + Σ_{j=1}^{n−m} yij xj = x̄i .

and z can also be re-written as,

z = z̄ + [cR − cB B⁻¹R]xR
z = z̄ + Σ_{j=1}^{n−m} (cj − cB yj ) xj

where yj denotes the jth column of Y.

Let I be the index set of the basic variables and J be the index set of the non-basic
(secondary) variables. The objective function and the set of constraints can therefore be
written respectively as,

z = z̄ − Σ_{j∈J} (zj − cj ) xj        (4)

xi = x̄i − Σ_{j∈J} yij xj ,   i ∈ I        (5)

where zj = cB yj .

Suppose xk , k ∈ J, is a non-basic variable, and let it take on a positive value θ > 0 while
the rest of the non-basic variables remain at zero. Then equation (4) becomes,
z = z̄ − (zk − ck )θ        (6)
The value of θ makes z smaller or larger depending on the sign and magnitude of (zk − ck )θ.
From equation (6),
z = z̄ − (zk − ck )θ ≥ z̄  if (zk − ck ) < 0        (7)
z = z̄ − (zk − ck )θ ≤ z̄  if (zk − ck ) ≥ 0        (8)
The above condition on how z can be made larger is referred to as the criterion for
improvement of the solution: it identifies a variable that gives a better solution. In order
to improve the value of the objective function, we choose a non-basic variable (say xk )
for which zk − ck < 0, and give it a positive value θ. Therefore min{zj − cj : zj − cj < 0}
is the entry criterion in the simplex tableaux.

3.4 Sample of a simplex tableau

             c1       c2      . . .   cn
 cB    xB    x1       x2      . . .   xn        x̄
 cB1   xB1                                      x̄B1
 cB2   xB2                                      x̄B2
  .     .          B⁻¹R         I                .
 cBm   xBm                                      x̄Bm
      zj − cj  z1 − c1  z2 − c2  . . .  zn − cn   z̄

From xB + B⁻¹RxR = B⁻¹b we can see that initially B = I, the m × m identity matrix.
Also, the last row of the tableau corresponds to z = z̄ − Σ_{j∈J} (zj − cj ) xj .

Definition 3.7 A variable is said to be basic if it appears in only one of the constraint
equalities and its coefficient is 1.

Definition 3.8 A solution to the linear system of equations is said to be feasible if all
of its components are non-negative.
Example
Translate the LP problem below into a tableau.
max z = 2x1 + x2
s.t x1 + 2x2 ≤ 5
3x1 − x2 ≤ 1
x1 , x2 ≥ 0

Solution

(i) Transform the LP into standard form by introducing slack variables x3 and x4 into
the constraint set to get,

max z = 2x1 + x2 + 0.x3 + 0.x4


s.t x1 + 2x2 + x3 = 5
3x1 − x2 + x4 = 1
x1 , x2 , x3 , x4 ≥ 0

 
(ii) Write the problem in matrix form, that is,

max z = (2, 1, 0, 0)(x1 , x2 , x3 , x4 )ᵀ

subject to

| 1  2  1  0 | (x1 , x2 , x3 , x4 )ᵀ = | 5 | ,   x1 , x2 , x3 , x4 ≥ 0
| 3 −1  0  1 |                         | 1 |

where A = | 1  2  1  0 | ,  b = | 5 | ,  c = (2, 1, 0, 0).
          | 3 −1  0  1 |        | 1 |

From here we can decompose A and x into B, R, xB and xR , with

R = | 1  2 |  (the columns of x1 , x2 )   and   B = | 1  0 |  (the columns of x3 , x4 ).
    | 3 −1 |                                        | 0  1 |

            2    1    0    0
 cB   xB    x1   x2   x3   x4    x̄
 0    x3    1    2    1    0     5
 0    x4    3   −1    0    1     1
    zj − cj −2   −1    0    0     0

Here, zj − cj = cB yj − cj , where yj is the column of the tableau under xj ; for example,
z1 − c1 = cB y1 − c1 = (0 × 1 + 0 × 3) − 2 = −2.
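The bottom-row computation can be checked mechanically. The short sketch below recomputes zj − cj for the tableau above and applies the entry criterion:

```python
c  = [2, 1, 0, 0]          # objective coefficients, slacks included
cB = [0, 0]                # costs of the basic variables x3, x4
Y  = [[1, 2, 1, 0],        # tableau body, one row per basic variable
      [3, -1, 0, 1]]
# reduced costs: z_j - c_j = cB . y_j - c_j
zc = [sum(cB[i] * Y[i][j] for i in range(2)) - c[j] for j in range(4)]
# zc == [-2, -1, 0, 0]; the entry criterion picks the most negative entry
k = min(range(4), key=lambda j: zc[j])   # k == 0, i.e. x1 enters the basis
```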

3.5 Vector to leave the basis

From equation (5),

xi = x̄i − Σ_{j∈J} yij xj        (9)

Suppose we have chosen xk (k ∈ J) to become a basic variable. Isolate the xk term in
equation (9):

xi = x̄i − Σ_{j∈J−{k}} yij xj − yik xk        (10)

Let xl be the basic variable to be exchanged with xk . Consider the lth constraint,

xl = x̄l − Σ_{j∈J−{k}} ylj xj − ylk xk        (11)

Dividing equation (11) through by ylk we get,

xl /ylk = x̄l /ylk − Σ_{j∈J−{k}} (ylj /ylk ) xj − xk .

Note 3.4 Setting xl and the remaining non-basic variables to zero gives xk its new value
x̄l /ylk ; non-negativity requires x̄l /ylk ≥ 0, which holds iff ylk > 0 (since x̄l ≥ 0).

Eliminate xk from the two constraint equations (10) and (11) by multiplying equation
(10) by ylk and equation (11) by yik and subtracting:

ylk xi = ylk x̄i − ylk Σ_{j∈J−{k}} yij xj − ylk yik xk

yik xl = yik x̄l − yik Σ_{j∈J−{k}} ylj xj − yik ylk xk

After subtraction and dividing through by ylk we get,

xi − (yik /ylk ) xl + Σ_{j∈J−{k}} ((yij ylk − ylj yik )/ylk ) xj = (x̄i ylk − x̄l yik )/ylk

For feasibility, the new solution x̄i′ = (x̄i ylk − x̄l yik )/ylk should be non-negative, that is,

(x̄i ylk − x̄l yik )/ylk ≥ 0

which implies that, for yik > 0,

x̄i /yik ≥ x̄l /ylk

and therefore the criterion for the vector to leave the basis in the tableau is given by,

x̄l /ylk = min_{i∈I} { x̄i /yik : yik > 0 }
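The exit rule just derived amounts to a minimum-ratio scan down the pivot column. A minimal sketch with illustrative numbers (not taken from the text):

```python
from fractions import Fraction as F

yk   = [F(2), F(1), F(-3)]    # pivot column y_k (illustrative values)
xbar = [F(11), F(4), F(1)]    # current basic values x-bar
# only rows with y_ik > 0 are eligible to leave the basis
ratios = [(xbar[i] / yk[i], i) for i in range(len(yk)) if yk[i] > 0]
theta, l = min(ratios)        # smallest ratio wins: theta == 4, row l == 1
```

Row 2 is skipped entirely because its pivot-column entry is negative, exactly as the yik > 0 condition above requires.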

Theorem 3.2 For a maximization linear programming problem, a necessary and suffi-
cient condition for a solution to be optimal is that zj − cj ≥ 0 for all j = 1, . . . , n.

We note that zj − cj = 0 for all the basis vectors (xB ).

Example 3.2 For the following LP problem,


max z = −7x1 + 9x2 + 3x3
s.t 5x1 − 4x2 − x3 ≤ 10
x1 − x2 ≤ 4
−3x1 + 4x2 + x3 ≤ 1
x1 , x2 , x3 ≥ 0
Find an optimal solution to the LP using the primal simplex method (the simplex method).

Solution

1. Write the LP problem in standard form as,
max z = −7x1 + 9x2 + 3x3
s.t 5x1 − 4x2 − x3 + x4 = 10
x1 − x2 + x5 = 4
−3x1 + 4x2 + x3 + x6 = 1
x1 , x2 , . . . , x6 ≥ 0

2. Write the LP problem in matrix form, identify B and R, and finally put the
information in the tableaux as follows;

(i) Initial and subsequent tableaux.

            −7    9     3    0    0    0
 cB   xB    x1    x2    x3   x4   x5   x6     x̄
 0    x4     5   −4    −1    1    0    0     10
 0    x5     1   −1     0    0    1    0      4
 0    x6    −3    4     1    0    0    1      1
    zj − cj  7   −9    −3    0    0    0      0

 0    x4     2    0     0    1    0    1     11
 0    x5    1/4   0    1/4   0    1   1/4   17/4
 9    x2   −3/4   1    1/4   0    0   1/4    1/4
    zj − cj 1/4   0   −3/4   0    0   9/4    9/4

 0    x4     2    0     0    1    0    1     11
 0    x5     1   −1     0    0    1    0      4
 3    x3    −3    4     1    0    0    1      1
    zj − cj −2    3     0    0    0    3      3

 0    x4     0    2     0    1   −2    1      3
−7    x1     1   −1     0    0    1    0      4
 3    x3     0    1     1    0    3    1     13
    zj − cj  0    1     0    0    2    3     11

Thus the optimal solution is x1* = 4, x2* = 0, x3* = 13, x4* = 3, x5* = 0, x6* = 0, and z = 11.
Summary of the simplex procedure
Summary of the simplex procedure

1. Check for optimality, i.e. check whether zj − cj ≥ 0 ∀ j; if all are non-negative, the
current solution is optimal. If zj − cj < 0 for some j, then we are not yet optimal.

2. To identify the vector to enter the basis we use the entry criterion, i.e. min(zj − cj )
over all j with zj − cj < 0.

3. For the vector to leave the basis we use min_{i∈I} { x̄i /yik : yik > 0 }.

4. Update the tableau (using the pivoting strategy) and go back to 1.
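The four steps above can be collected into a small tableau implementation. The sketch below is one possible arrangement, not the only one: it uses exact rational arithmetic, the most-negative zj − cj entry rule, and the minimum-ratio exit rule, with no anti-cycling safeguard. Running it on Example 3.2 reproduces the optimal value z = 11.

```python
from fractions import Fraction as F

def simplex(c, A, b):
    """Tableau simplex for: max c.x  s.t.  A x <= b, x >= 0 (assumes b >= 0)."""
    m, n = len(A), len(c)
    # tableau rows [A | I | b]: slack variables supply the initial basis
    T = [[F(A[i][j]) for j in range(n)]
         + [F(int(i == r)) for r in range(m)] + [F(b[i])] for i in range(m)]
    cost = [F(cj) for cj in c] + [F(0)] * m
    basis = list(range(n, n + m))                   # slacks start in the basis
    while True:
        cB = [cost[i] for i in basis]
        # bottom row: z_j - c_j = cB . y_j - c_j
        zc = [sum(cB[i] * T[i][j] for i in range(m)) - cost[j]
              for j in range(n + m)]
        k = min(range(n + m), key=lambda j: zc[j])  # entry criterion
        if zc[k] >= 0:                              # optimality test
            break
        ratios = [(T[i][-1] / T[i][k], i) for i in range(m) if T[i][k] > 0]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, l = min(ratios)                          # exit (minimum-ratio) test
        piv = T[l][k]
        T[l] = [t / piv for t in T[l]]
        for i in range(m):
            if i != l and T[i][k] != 0:
                f = T[i][k]
                T[i] = [T[i][j] - f * T[l][j] for j in range(n + m + 1)]
        basis[l] = k                                # pivot: x_k enters row l
    x = [F(0)] * (n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i][-1]
    return sum(cost[j] * x[j] for j in range(n + m)), x[:n]

z, x = simplex([-7, 9, 3], [[5, -4, -1], [1, -1, 0], [-3, 4, 1]], [10, 4, 1])
# Example 3.2: z == 11 and x == [4, 0, 13]
```

The same routine handles the earlier tableau example (max z = 2x1 + x2) and the problems of Exercise 3.1, since they are all of ≤ form with non-negative right-hand sides.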

Exercise 3.1 For the following LP problems, find their corresponding optimal solutions
using the primal simplex method (the simplex method):

(i)
max z = 2x1 − 3x2 + x3
s.t x1 − 2x2 + 4x3 ≤ 5
2x1 + 2x2 + 4x3 ≤ 5
3x1 + x2 − x3 ≤ 7
x1 , x2 , x3 ≥ 0

(ii)
max z = 3x1 + x2 + x3
s.t x1 − x2 + 2x3 ≤ 3
2x1 + 3x2 − x3 ≤ 2
−3x1 + x2 − 2x3 ≤ 1
x1 , x2 , x3 ≥ 0

(iii)
max z = x1 − 2x2 + x3
s.t x1 + 2x2 + x3 ≤ 5
2x1 − x2 + x3 ≤ 3
x1 + x2 − 2x3 ≤ 2
x1 , x2 , x3 ≥ 0

(iv)
max z = 3x1 + 4x2 + x3 + 5x4
s.t x1 + 4x2 + 5x3 + 2x4 ≤ 8
2x1 + 6x2 + x3 + x4 ≤ 3
8x1 + 3x2 + 4x3 + x4 ≤ 7
x1 , x2 , x3 , x4 ≥ 0

3.6 Degeneracy and Cycling

Definition 3.9 For a linear programming problem with m constraints (excluding the
non-negativity restrictions), we say that a basic feasible solution is degenerate if fewer
than m of the basic variables are positive; equivalently, a solution to a linear programming
problem is degenerate if at least one of the basic variables takes the value zero.

The likely occurrence of a degenerate solution is due to the occurrence of a tie in calcu-
lating

θ = x̄l /ylk = min_{i∈I} { x̄i /yik : yik > 0 } = x̄l′ /yl′k .

Then only one of xl and xl′ can leave the basis, leaving the other variable in the basis with
value zero. If xl is the one left in the basis with value x̄l = 0 (when xl′ leaves the basis),
then at the next simplex iteration,

x̄l /ylk = 0 = min { x̄i /yik }  provided ylk > 0.

If this is the case, then the new value of z after xl has left the basis is

z′ = z̄ + (ck − zk )(x̄l /ylk ) = z̄

Thus when there is a degenerate solution there may be no change in the value of z.
If degeneracy persists for several successive iterations, there is a possibility of cycling;
that is, a set of variables may be introduced in such a sequence that the original solution
re-occurs in a later iteration without a change in z.
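The tie described above can be demonstrated numerically: when two rows attain the same minimum ratio, the pivot drives the basic variable of the other tied row to zero. The column and right-hand-side values below are illustrative, not from the text.

```python
from fractions import Fraction as F

yk   = [F(1), F(2), F(1)]       # pivot column y_k (illustrative)
xbar = [F(4), F(8), F(6)]       # current basic values
ratios = [(xbar[i] / yk[i], i) for i in range(3) if yk[i] > 0]
theta = min(ratios)[0]                        # theta == 4
tied = [i for r, i in ratios if r == theta]   # rows 0 and 1 tie
# pivot on row 0: row 1's basic variable then takes the value
new_val = xbar[1] - yk[1] * theta             # == 0 -> degenerate solution
```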

Definition 3.10 Cycling occurs when variables previously dropped by the simplex
algorithm are picked up again, so that a basis repeats without any improvement in z.

Exercise 3.2
Do you think degeneracy leads to cycling? Defend your answer.

3.7 The Big-M Method

So far, in finding an optimal solution to a maximization problem using the primal simplex
method, the assumption has been that there exist m basis vectors. Suppose we have fewer
than m vectors in the basis; then we introduce additional variables to make them m.
Method

Consider a standard linear programming problem in which the number of available basis
vectors is less than m:

Max z = cx        (12)

s.t Ax = b
and x ≥ 0

Introduce extra variables si in the constraints that do not have basic variables, so that
the LP problem is transformed to,

Max z = cx − M Σ_{i=1}^{k} si        (13)

s.t Ax + s = b
and x, s ≥ 0

where s = (s1 , . . . , sk ) collects the introduced variables.
Solve equation (13); if at the end,

(a) all si = 0 in the solution, or they are not in the basis of the final solution, then an
optimal solution to problem (12) has been found;
(b) at least one sj > 0 in the final solution, then no solution exists to the original problem
(12), i.e. the solution space is empty.

Definition 3.11 The variable introduced in the LP problem (13) to make the number
of basic variables m is called an artificial variable.

Note 3.5 M is a relatively large positive number, chosen so that it is larger than any
number it is compared with during the simplex computations.
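To see how M enters the bottom row, the initial zj − cj row can be computed with a concrete large M. The tableau data below correspond to the worked example that follows; M = 10**6 is an arbitrary "large enough" stand-in for the symbolic M.

```python
# initial z_j - c_j row of a Big-M tableau, with a concrete large M
M = 10**6
c  = [2, 5, 0, 0, -M]      # costs: x1, x2, slack x3, surplus x4, artificial x5
cB = [0, -M]               # initial basis: x3 and the artificial x5
Y  = [[1, 2, 1, 0, 0],     # constraint rows of the initial tableau
      [6, 4, 0, -1, 1]]
zc = [sum(cB[i] * Y[i][j] for i in range(2)) - c[j] for j in range(5)]
# zc == [-6M - 2, -4M - 5, 0, M, 0]: x1 enters first because of the -6M term
```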

Example 3.3 Solve the following linear programming problem using the Big-M method
max z = 2x1 + 5x2
s.t x1 + 2x2 ≤ 8
6x1 + 4x2 ≥ 24
x1 , x2 ≥ 0

Solution

(a) Convert the LP problem into standard form by supplying slack and surplus variables.
max z = 2x1 + 5x2 + 0x3 + 0x4
s.t x1 + 2x2 + x3 = 8
6x1 + 4x2 − x4 = 24
x1 , x2 , x3 , x4 ≥ 0

(b) The second equation has no basic variable (the surplus variable x4 enters with
coefficient −1), thus we introduce an artificial variable x5 .
max z = 2x1 + 5x2 + 0x3 + 0x4 − M x5
s.t x1 + 2x2 + x3 = 8
6x1 + 4x2 − x4 + x5 = 24
x1 , x2 , x3 , x4 , x5 ≥ 0

We put the information in the tableaux as follows,

            2        5        0      0     −M
 cB   xB    x1       x2       x3     x4     x5       x̄
 0    x3    1        2        1      0      0        8
−M    x5    6        4        0     −1      1       24
    zj − cj −6M−2   −4M−5     0      M      0      −24M

 0    x3    0       4/3       1     1/6   −1/6       4
 2    x1    1       2/3       0    −1/6    1/6       4
    zj − cj  0     −11/3      0    −1/3   M+1/3      8

 5    x2    0        1       3/4    1/8   −1/8       3
 2    x1    1        0      −1/2   −1/4    1/4       2
    zj − cj  0        0      11/4   1/8   M−1/8     19

Thus the optimal solution is x1* = 2, x2* = 3, x3* = 0, x4* = 0, x5* = 0, and z = 19.
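A quick sanity check of the reported optimum: substitute it into the constraints as encoded in the initial tableau rows, and into the objective.

```python
x1, x2 = 2, 3                    # reported optimal point
assert x1 + 2 * x2 <= 8          # first constraint, binding at 8
assert 6 * x1 + 4 * x2 >= 24     # second constraint, binding at 24
z = 2 * x1 + 5 * x2              # objective value; equals 19
```

Both constraints are binding, which is consistent with the optimum lying at a vertex of the feasible region.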

Exercise 3.3 Solve the following LP problems using the Big-M method (by the use of
artificial variables).

1.
max z = 3x1 − x2
s.t x1 + x2 ≥ 1

x1 + 2x2 ≤ 8
x1 ≤ 2
x1 , x2 ≥ 0

2.
max z = x1 + 2x2 + 3x3
s.t 2x1 + x2 + 4x3 ≤ 15
3x1 − 4x2 + 5x3 ≤ −12
4x1 + 5x2 − 6x3 ≤ 9
x1 , x2 , x3 ≥ 0

Revision assignment 3.
(To be given later)
