
LIBRARY OF MATHEMATICS

EDITED BY WALTER LEDERMANN


The aim of this series is to provide short introductory text-books for
the topics which are normally covered in the first two years of
mathematics courses at Universities, Polytechnics, Colleges of Education
and Colleges of Technology. Each volume is made as nearly self-contained
as possible, with exercises and answers, and contains an amount of
material that can be covered in about twenty lectures. Thus each student
will be able to build up a collection of text-books which is adapted to
the syllabus he has to follow.
The exposition is kept at an elementary level with due regard to modern
standards of rigour. When it is not feasible to give a complete treatment,
because this would go beyond the scope of the book, the assumptions
are fully explained and the reader is referred to appropriate works in
the literature.
'The authors obviously understand the difficulties of undergraduates.
Their treatment is more rigorous than what students will have been
used to at school, and yet it is remarkably clear.
All the books contain worked examples in the text and exercises at the
ends of the chapters. They will be invaluable to undergraduates. Pupils
in their last year at school, too, will find them useful and stimulating.
They will learn the university approach to work they have already done,
and will gain a foretaste of what awaits them in the future.'
- The Times Educational Supplement
'It will prove a valuable corpus. A great improvement on many works
published in the past with a similar objective.'
- The Times Literary Supplement
'These are all useful little books, and topics suitable for similar treatment
are doubtless under consideration by the editor of the series.'
- T. A. A. Broadbent, Nature

A complete list of books in the series appears on the inside back cover.

£1·75 net
LINEAR PROGRAMMING
LIBRARY OF MATHEMATICS
edited by
WALTER LEDERMANN
D.Sc., Ph.D., F.R.S.Ed., Professor of
Mathematics, University of Sussex

Linear Equations P. M. Cohn


Sequences and Series J. A. Green
Differential Calculus P. J. Hilton
Elementary Differential
Equations and Operators G. E. H. Reuter
Partial Derivatives P. J. Hilton
Complex Numbers W. Ledermann
Principles of Dynamics M. B. Glauert
Electrical and Mechanical
Oscillations D. S. Jones
Vibrating Systems R. F. Chisnell
Vibrating Strings D. R. Bland
Fourier Series I. N. Sneddon
Solutions of Laplace's Equation D. R. Bland
Solid Geometry P. M. Cohn
Numerical Approximation B. R. Morton
Integral Calculus W. Ledermann
Sets and Groups J. A. Green
Differential Geometry K. L. Wardle
Probability Theory A. M. Arthurs
Multiple Integrals W. Ledermann
Fourier and Laplace Transforms P. D. Robinson
Introduction to Abstract
Algebra C. R. J. Clapham
Functions of a Complex
Variable, 2 vols D. O. Tall
LINEAR
PROGRAMMING
BY

KATHLEEN TRUSTRUM

ROUTLEDGE & KEGAN PAUL


LONDON, HENLEY AND BOSTON
First published 1971
in Great Britain by
Routledge & Kegan Paul Ltd
39 Store Street
London WC1E 7DD,
Broadway House, Newtown Road
Henley-on-Thames
Oxon RG9 1EN and
9 Park Street
Boston, Mass. 02108, USA
Whitstable Litho Ltd, Whitstable, Kent
© Kathleen Trustrum 1971
No part of this book may be reproduced
in any form without permission from
the publisher, except for the quotation
of brief passages in criticism
ISBN-13: 978-0-7100-6779-1 e-ISBN-13: 978-94-010-9462-7
DOI: 10.1007/978-94-010-9462-7
Contents

page
Preface vii

Chapter One: Convex Sets


1. Convex hulls, polytopes and vertices 1
2. Basic solutions of equations 4
3. Theorem of the separating hyperplane 8
4. Alternative solutions of linear inequalities 10
Exercises 12

Chapter Two: The Theory of Linear Programming


1. Examples and classes of linear programmes 14
2. Fundamental duality theorem 18
3. Equilibrium theorems 22
4. Basic optimal vectors 25
5. Graphical method of solution 27
Exercises 28

Chapter Three: The Transportation Problem


1. Formulation of problem and dual 31
2. Theorems concerning optimal solutions 35
3. Method of solution with modifications for degeneracy 36
4. Other problems of transportation type 41
Exercises 44

Chapter Four: The Simplex Method


1. Preliminary discussion and rules 46
2. Theory of the simplex method 53
v
3. Further techniques and extensions 58
Exercises 66

Chapter Five: Game Theory


1. Two-person zero-sum games 68
2. Solution of games: saddle points 70
3. Solution of games: mixed strategies 72
4. Dominated and essential strategies 74
5. Minimax theorem 77
6. Solution of matrix games by simplex method 79
Exercises 81
Suggestions for Further Reading 83
Solutions to Exercises 84
Index 87

vi
Preface

Linear programming is a relatively modern branch of Mathe-


matics, which is a result of the more scientific approach to
management and planning of the post-war era. The purpose
of this book is to present a mathematical theory of the subject,
whilst emphasising the applications and the techniques of
solution. An introduction to the theory of games is given in
chapter five and the relationship between matrix games and
linear programmes is established.
The book assumes that the reader is familiar with matrix
algebra and the background knowledge required is covered
in the book, Linear Equations by P.M. Cohn, of this series.
In fact the notation used in this text conforms with that intro-
duced by Cohn.
The book is based on a course of about 18 lectures given to
Mathematics and Physics undergraduates. Several examples
are worked out in the text and each chapter is followed by a
set of examples.
I am grateful to my husband for many valuable suggestions
and advice, and also to Professor W. Ledermann, for encourag-
ing me to write this book.

University of Sussex Kathleen Trustrum

vii
CHAPTER ONE

Convex Sets

1. Convex Hulls, Polytopes and Vertices

The theorems proved in this chapter provide the mathematical


background for the theory of linear programming. Convex
sets are introduced, not only to prove some of the theorems,
but also to give the reader a geometrical picture of the algebraic
processes involved in solving a linear programme. In the follow-
ing definitions, Rⁿ denotes the vector space of all real n-vectors.
DEFINITION. A subset¹ C ⊂ Rⁿ is convex if, for all² u1, u2 ∈ C,

λu1 + (1 − λ)u2 ∈ C for 0 ≤ λ ≤ 1.

In other words, a set is convex if the straight line segment


joining any two points of the set, also belongs to the set. For
example, a sphere and a tetrahedron are convex sets, whereas
a torus is not.
DEFINITIONS. A vector x is a convex combination of u1, u2,
..., uk ∈ Rⁿ if

x = Σ_{i=1}^{k} λi ui, where λi ≥ 0 and Σ_{i=1}^{k} λi = 1.

The convex hull of a set X ⊂ Rⁿ, written (X), is the set of all


convex combinations of points of X. If X is a finite set, then
the convex hull is called a convex polytope.
¹ ⊂ is contained in.
² ∈ belongs to.
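As a modern aside, the definition of a convex combination is easy to try numerically. The short Python sketch below is illustrative only (it is not part of the original text); the points and weights are invented for the example.

```python
# Illustration: forming a convex combination and checking the two
# conditions on the weights (non-negative, summing to one).
def convex_combination(points, weights):
    """Return sum_i weights[i] * points[i] for points in R^n."""
    if any(w < 0 for w in weights):
        raise ValueError("weights must be non-negative")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to one")
    n = len(points[0])
    return tuple(sum(w * p[j] for w, p in zip(weights, points)) for j in range(n))

# The midpoint of u1 = (0, 0) and u2 = (2, 2) is the convex combination
# with weights (1/2, 1/2).
mid = convex_combination([(0.0, 0.0), (2.0, 2.0)], [0.5, 0.5])
```

Taking all such combinations of a finite point set generates exactly the convex polytope defined in the text.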
CONVEX SETS

It is easily shown that the convex hull of a set is a convex


set. In two dimensions a convex polytope is simply a convex
polygon and figure 1 shows the convex hull (X) of the finite
set of points X = {u1, u2, ..., u6} ⊂ R², which is an example
of a convex polytope.

FIGURE 1

A vertex or extreme point of a convex set C is a point belonging to C, which is not an interior point of any straight line
segment of C. The vertices of the convex set shown in figure 1
are u1, u3, u4, u5 and u6. A convex set may have no vertices, e.g.
the interior of a circle. A more precise definition of a vertex
is now given.
DEFINITION. A vertex x of a convex set C belongs to C and is
such¹ that, for any y, z belonging to C and 0 < λ < 1,

x = λy + (1 − λ)z ⟹ x = y = z.
The first theorem proves that a convex polytope is the convex
hull of its vertices (see figure 1).
THEOREM 1. If C is a convex polytope and V is its set of
vertices, then
C = (V).
Proof. Since C is a convex polytope, C = (u1, u2, ..., uk).
A minimal subset {v1, v2, ..., vr} is selected from amongst
the ui's so that C = (v1, v2, ..., vr), where r ≤ k. This is

¹ ⟹ implies.
2
CONVEX HULLS, POLYTOPES AND VERTICES

achieved by eliminating any uj which is a convex combination
of the other ui's. To show that v1 is a vertex, we suppose that
v1 = λx + (1 − λ)y, where x, y ∈ C and 0 < λ < 1.
Expressing x and y as convex combinations of the vi's, we
have that

v1 = λ Σ_{i=1}^{r} λi vi + (1 − λ) Σ_{i=1}^{r} μi vi, where Σ_{i=1}^{r} λi = Σ_{i=1}^{r} μi = 1,
λi ≥ 0, μi ≥ 0.

Hence

(1 − α1)v1 = Σ_{i=2}^{r} αi vi, where αi = λλi + (1 − λ)μi ≥ 0 and Σ_{i=1}^{r} αi = 1. (1)

If α1 < 1, (1) implies that v1 is a convex combination of v2,
..., vr, which is false, so α1 = 1 and since 0 < λ < 1, λ1 = μ1 =
1. It now follows that v1 = x = y and v1 is a vertex. Similarly v2, ..., vr are vertices and it is clear that C has no other
vertices, hence V = {v1, v2, ..., vr} and C = (V).
We now use Theorem 1 to prove that a linear function defined
on a convex polytope C attains its maximum (minimum) in
C at a vertex of C. The general linear programming problem
(see page 17) is to maximise or minimise a linear function sub-
ject to linear constraints. Such constraints define a convex
set and in the cases where the convex set is also a polytope, the
maximum and minimum can be found by evaluating the linear
function at the vertices.
THEOREM 2. If C ⊂ Rⁿ is a convex polytope, then the linear
function¹ c'x takes its maximum (minimum) in C at a vertex
of C, where constant c and variable x belong to Rⁿ.

¹ x = (x1, x2, ..., xn)' is a column vector, c' = (c1, c2, ..., cn) is a row vector,
and c'x = c1x1 + c2x2 + ... + cnxn.
3
CONVEX SETS

Proof. Since C is a convex polytope, it follows from Theorem
1 that C = (v1, v2, ..., vr), where the vi are the vertices of C.
Let M = max_i c'vi, then for any x ∈ C

x = Σ_{i=1}^{r} λi vi, where λi ≥ 0 and Σ_{i=1}^{r} λi = 1,

and c'x = Σ_{i=1}^{r} λi c'vi ≤ M Σ_{i=1}^{r} λi = M.

Hence the maximum in C is attained at a vertex of C.
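Theorem 2 can be demonstrated computationally: to maximise a linear function over a polytope it suffices to evaluate it at the vertices. The Python sketch below is illustrative only; the polytope (the unit square) and the vector c are my own toy data, not from the text.

```python
# Theorem 2 in use: evaluate c'x at each vertex and take the largest.
def maximise_over_polytope(c, vertices):
    """Return (max value, maximising vertex) of c'x over the convex
    polytope spanned by the given vertices."""
    best = max(vertices, key=lambda v: sum(ci * vi for ci, vi in zip(c, v)))
    value = sum(ci * vi for ci, vi in zip(c, best))
    return value, best

# Unit square in R^2 with c = (1, 2): the maximum 3 is attained at (1, 1).
value, vertex = maximise_over_polytope((1, 2), [(0, 0), (1, 0), (0, 1), (1, 1)])
```

This brute-force evaluation is practical only for small vertex sets; the simplex method of chapter four visits the vertices far more selectively.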

2. Basic Solutions of Equations

Often the convex set C, on which a linear function is to be
maximised or minimised, consists of the non-negative solutions
of a system of linear equations, that is

C = {x | Ax = b, x ≥ 0},¹

where A is an m × n matrix, x is an n-vector and b is an m-vector.
To prove that C is convex, let u1, u2 ∈ C, then

A(λu1 + (1 − λ)u2) = λb + (1 − λ)b = b

and λu1 + (1 − λ)u2 ≥ 0 for 0 ≤ λ ≤ 1,
so λu1 + (1 − λ)u2 ∈ C for 0 ≤ λ ≤ 1.

An alternative representation of the system of equations
Ax = b is Σ_{j=1}^{n} xj aj = b, where the aj are the column vectors of A.

¹ {x | p} is the set of all x with the property p.

4
BASIC SOLUTIONS OF EQUATIONS

DEFINITION. A solution of the system of equations

Σ_{j=1}^{n} xj aj = b (2)

is called basic, if the set of vectors {aj | xj ≠ 0} is linearly
independent.
We now identify the basic non-negative solutions of (2)
with the vertices of C = {x | Ax = b, x ≥ 0}. Let x ≥ 0 be a
basic solution of (2), then x ∈ C. Suppose that

x = λy + (1 − λ)z, where 0 < λ < 1 and y, z ∈ C. (3)

Since x ≥ 0, y ≥ 0, z ≥ 0 and 0 < λ < 1,

xi = 0 ⟹ yi = zi = 0. (4)

Also y and z satisfy (2), therefore

Σ_{i∈S} (yi − zi)ai = 0, where S = {i | xi > 0}.

Since x is basic, the set of vectors {ai | i ∈ S} is linearly independent,
so yi = zi for i ∈ S. (5)
It now follows from (3), (4) and (5) that x = y = z, so x is a
vertex of C. We leave the reader to prove that a vertex of C is
a basic non-negative solution of (2) (see exercise 4 on page 12).
The following theorem is important and proves that if a
system of equations has a non-negative solution, then it has a
basic non-negative solution. This is equivalent to saying that
if C is non-empty, then C has at least one vertex.
THEOREM 3. If the system of equations Σ_{i=1}^{n} xi ai = b has a
non-negative solution, then it has a basic non-negative solution.
Proof. The theorem is proved by induction on n. For n = 1
the result is immediate and the inductive assumption is that
5
CONVEX SETS

the result holds for the system of equations

Σ_{i=1}^{k} xi ai = b (6)

for k < n. Let x = (x1, ..., xn)' ≥ 0 be a solution of (6) for
k = n. If x is basic, there is nothing to prove and if any xi = 0,
we are reduced to the case k < n. We are now left with the
case in which x is non-basic and xi > 0 for all i. Since x is
non-basic, a1, a2, ..., an are linearly dependent and there
exist λi, not all zero, such that

Σ_{i=1}^{n} λi ai = 0. (7)

Without loss of generality some λi > 0, otherwise (7) can be
multiplied by −1. Choose

θ = min {xi/λi | λi > 0},

where, without loss of generality, the minimum is attained for
i = n, so that θ = xn/λn.
It now follows from (6) and (7) that

Σ_{i=1}^{n} (xi − θλi)ai = b, where xi − θλi ≥ 0 for all i.

Since xn − θλn = 0, (x1 − θλ1, x2 − θλ2, ..., x_{n−1} − θλ_{n−1})' is
a non-negative solution of (6) with k = n − 1, and so by the
inductive assumption, the equations have a basic non-negative
solution.
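Basic solutions lend themselves to direct enumeration. The Python sketch below is my own illustration, not part of the text: it tries every set of m columns of A, solves for the corresponding components, and keeps the non-negative results, which by the identification above are the vertices of C. It assumes A has full row rank; degenerate systems would need the smaller column subsets as well.

```python
# Enumerating the basic non-negative solutions of Ax = b.
import itertools
import numpy as np

def basic_nonneg_solutions(A, b, tol=1e-9):
    """For each non-singular m x m column submatrix B of A, solve
    B xB = b, pad with zeros, and keep the non-negative solutions."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    found = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue  # columns linearly dependent: not a basis
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        if (x >= -tol).all() and not any(np.allclose(x, y) for y in found):
            found.append(x)
    return found

# For A = [[1, 1, 0], [0, 1, 1]] and b = (2, 2)' the vertices of
# {x | Ax = b, x >= 0} are (0, 2, 0)' and (2, 0, 2)'.
verts = basic_nonneg_solutions([[1, 1, 0], [0, 1, 1]], [2, 2])
```

The matrix and right-hand side here are invented for the demonstration; the example in the text that follows plays the same game by hand.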
The following example illustrates some of the preceding
results and shows that C = {x I Ax = b,x ~ O} is not necess-
arily a convex polytope.
6
BASIC SOLUTIONS OF EQUATIONS

Example. Determine the nature of the convex set C of the non-negative
solutions of the equations,

αx1 + x2 = 3, x2 − x3 = α, (8)

for all values of α.


Solution. The basic solutions of (8) are found by putting x1, x2 and
x3 equal to zero in turn, which gives

u1 = (0, 3, 3 − α)' ≥ 0 for α ≤ 3,

u2 = (3α⁻¹, 0, −α)' ≱ 0,

u3 = (α⁻¹(3 − α), α, 0)' ≥ 0 for 0 < α ≤ 3.

Since the basic non-negative solutions of (8) are the vertices
of C, C has no vertices for α > 3 and so by Theorem 3, C is
empty. For α = 3, C has one vertex u1 (= u3); for 0 < α < 3, C
has two vertices u1 and u3; and for α ≤ 0, C has one vertex u1.
The general solution of (8) is

x = u1 + t(1, −α, −α)',

which represents a line through the point u1. The condition
x ≥ 0 implies that C = u1 for α = 3, C is the convex polytope
(u1, u3) for 0 < α < 3 and C is the half-line with vertex u1
for α ≤ 0, which is not a convex polytope. Geometrically C
is the intersection of a line with the positive octant, which may
be empty, a single point, a finite line segment or a half-line.
These possibilities are illustrated in figure 2.

FIGURE 2
7
CONVEX SETS

3. Theorem of the Separating Hyperplane

We now leave linear equations temporarily and prove a more


general result about convex sets, which is known as the theorem
of the separating hyperplane. This theorem asserts that if a
point does not belong to a closed convex set in Rⁿ, then a
hyperplane (an (n − 1)-dimensional plane) can be drawn so that
the point and convex set lie on opposite sides of the hyperplane
(see figure 3). The theorem does not hold if the set is not closed
for then the point can be chosen as one of the limit points

FIGURE 3

not belonging to the set, in which case it is impossible to draw


a separating hyperplane. For example, take the convex set
consisting of the interior of a circle in R2 and take any point
on the circumference of the circle. Nor does the theorem hold
if the set is not convex, for then there is a line segment joining
two points of the set, which is not wholly contained in the set.
Choose a point on this line segment which does not belong to
the set, then it is impossible to construct a separating hyper-
plane.
THEOREM 4 (theorem of the separating hyperplane). If
C ⊂ Rⁿ is a closed convex set and the n-vector b ∉ C, then there
exists a y ∈ Rⁿ such that
y'b > y'z for all z ∈ C.
8
THEOREM OF THE SEPARATING HYPERPLANE

Proof. The proof is analytical. Choose a closed sphere S
with centre b such that C ∩ S is non-empty, then the set C ∩ S
is compact¹. Since the function f(z) = (b − z)'(b − z) is continuous, it is bounded and attains its bounds in C ∩ S, that is
there exists a point z0 ∈ C ∩ S such that²

(b − z0)'(b − z0) = g.l.b._{z ∈ C ∩ S} (b − z)'(b − z).

The above result clearly holds for all z ∈ C and since b and z0
are distinct points,

0 < (b − z0)'(b − z0) ≤ (b − z)'(b − z) for all z ∈ C. (9)

It follows from the convexity of C that for all z ∈ C and 0 ≤ λ ≤ 1

(b − z0)'(b − z0) ≤ (b − λz − (1 − λ)z0)'(b − λz − (1 − λ)z0),

which can be written as

0 ≤ λ²(z0 − z)'(z0 − z) + 2λ(b − z0)'(z0 − z).

For the above inequality to hold for all λ satisfying 0 ≤ λ ≤ 1,

(b − z0)'(z0 − z) ≥ 0 for all z ∈ C.

Combining the left-hand side of (9) with the last inequality
gives

(b − z0)'b > (b − z0)'z0 ≥ (b − z0)'z for all z ∈ C,

and on putting y = (b − z0), the required result,

y'b > y'z for all z ∈ C,

follows. (A separating hyperplane is H = {x | y'x =
½ y'(z0 + b)}. See figure 3.)
¹ ∩ intersection.
² g.l.b. greatest lower bound or inf.
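The construction in the proof, project b onto C and separate with y = b − z0, is concrete enough to compute. The Python sketch below is my own numerical illustration (the closed unit disc in R² and the point b = (3, 4) are invented for the example, not taken from the text).

```python
# Separating a point from the closed unit disc in R^2, following the
# proof of Theorem 4: z0 is the nearest point of C to b, y = b - z0.
import math

b = (3.0, 4.0)
norm = math.hypot(*b)                 # |b| = 5, so b lies outside the disc
z0 = (b[0] / norm, b[1] / norm)       # nearest point of the disc to b
y = (b[0] - z0[0], b[1] - z0[1])      # separating direction y = b - z0

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# y'b must strictly exceed y'z for every z in the disc; since y points
# along z0, the supremum of y'z over the disc is attained at z0 itself.
lhs = dot(y, b)        # y'b
sup = dot(y, z0)       # sup of y'z over the disc
```

Here lhs = 20 and sup = 4, so the hyperplane {x | y'x = ½ y'(z0 + b)} does separate b from the disc, as the theorem promises.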
9
CONVEX SETS

4. Alternative Solutions of Linear Inequalities

We now use the theorem of the separating hyperplane in the


proof of the next theorem.
THEOREM 5. Either
the equations Ax = b have a solution x ≥ 0, (10)
or
the inequalities y'A ≤ 0, y'b > 0 have a solution, (11)
but not both.
Proof. If both (10) and (11) hold, then
0 ≥ y'Ax = y'b > 0,
which is impossible. Suppose (10) is false, then
b ∉ C = {z = Ax | x ≥ 0}.
It is easy to show that C is convex and it can be proved that
C is closed, so by Theorem 4, there exists a y satisfying
y'b > y'z = y'Ax for all x ≥ 0. (12)
In particular (12) is satisfied for x = 0, so y'b > 0.
To prove that y'A ≤ 0, suppose for contradiction that
(y'A)i = λ > 0,
then for x = λ⁻¹(y'b)ei ≥ 0, where ei is the ith unit vector,¹
y'Ax = λ⁻¹(y'b)(y'A)i = y'b,
which contradicts (12). Hence the inequalities in (11) have a
solution, so (10) and (11) cannot be simultaneously false,
which establishes the theorem.

¹ ei is the vector whose ith component is one and whose other components
are zero.
10
LINEAR INEQUALITIES

The following theorem is a corollary to Theorem 5 and will
be used later in the proof of the fundamental duality theorem
of Linear Programming.
THEOREM 6. Either

the inequalities Ax ≥ b have a solution x ≥ 0, (13)

or
the inequalities y'A ≤ 0, y'b > 0 have a solution y ≥ 0, (14)

but not both.

Proof. If both (13) and (14) hold, then

0 ≥ y'Ax ≥ y'b > 0,

which is impossible. If (13) is false, then

Ax − z = b has no solution for which x ≥ 0 and z ≥ 0,

that is (A, −I)(x', z')' = b has no solution (x', z')' ≥ 0.

So by Theorem 5, the inequalities y'(A, −I) ≤ 0, y'b > 0 have a solution,
which is a solution of the inequalities in (14).
The following example uses Theorem 5 and also gives a
geometrical interpretation of the theorem.
Example. Show that the equations
2x1 + 3x2 − 5x3 = −4,
x1 + 2x2 − 6x3 = −3,
have no non-negative solution.
Solution. If we can exhibit a solution of the inequalities
2y1 + y2 ≤ 0, 3y1 + 2y2 ≤ 0, −5y1 − 6y2 ≤ 0, −4y1 − 3y2 > 0,
then by Theorem 5, the equations have no non-negative solution. The
above inequalities are equivalent to

y1 ≤ −(1/2)y2, y1 ≤ −(2/3)y2, y1 ≥ −(6/5)y2, y1 < −(3/4)y2,

which are satisfied by y1 = −1, y2 = 1.


11
CONVEX SETS

One can see geometrically that the equations have no non-negative
solution by writing them in the form

x1 a1 + x2 a2 + x3 a3 = b, where a1 = (2, 1)', a2 = (3, 2)',
a3 = (−5, −6)' and b = (−4, −3)'.

The set of vectors {Σ_{i=1}^{3} xi ai | x ≥ 0} is represented by the shaded region
C in figure 4 and it is seen that b does not belong to C.

FIGURE 4
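The verification in this example reduces to two matrix products, which can be checked mechanically. The following Python sketch (an illustration added here, using the data from the example) confirms that y = (−1, 1)' satisfies the alternative system of Theorem 5.

```python
# Checking the example numerically: y'A <= 0 and y'b > 0, so by
# Theorem 5 the equations Ax = b have no non-negative solution.
import numpy as np

A = np.array([[2.0, 3.0, -5.0],
              [1.0, 2.0, -6.0]])
b = np.array([-4.0, -3.0])
y = np.array([-1.0, 1.0])

yA = y @ A   # (-1, -1, -1): every component non-positive
yb = y @ b   # 4 - 3 = 1 > 0
```

A single such y is a compact certificate of infeasibility: no amount of searching over x ≥ 0 is needed.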

EXERCISES ON CHAPTER ONE

1. Establish whether the following sets are convex or not:

{x | Ax ≥ b, x ≥ 0}, {(x, y) | xy ≥ 1}, {x | x'x < 1}.
2. Prove that (X) is convex and show that (X) is the intersection of
all the convex sets which contain X.
3. Determine the nature of the convex set of the non-negative solutions
of the equations, x1 + αx3 = 1, x1 − x2 = α − 1, for all values of α.
4. If x ≥ 0 is a non-basic solution of the equations Ax = b, prove
that x is not a vertex of the convex set, C = {x | Ax = b, x ≥ 0}.
5. If X ⊂ Rⁿ and x ∈ (X), show that x can be expressed as a convex
combination of not more than n + 1 points of X. (Use Theorem 3.)
6. If the equations Ax = 0 have no non-trivial, non-negative solution,
show that any non-negative solution of the equations Ax = b can be
expressed as a convex combination of the basic non-negative solutions
of the equations. (Use an inductive proof similar to that of Theorem 3.)
12
EXERCISES ON CHAPTER ONE

7. Use the theorem of the separating hyperplane to prove that either
the equations Ax = b have a solution, or the equations y'A = 0,
y'b = 1 have a solution, but not both.
8. Show that the inequalities
−2x1 + x2 − 4x3 ≤ 3, 3x1 − 2x2 + ½x3 ≤ −5,

have no non-negative solution.

13
CHAPTER TWO

The Theory of Linear Programming

1. Examples and Classes of Linear Programmes

The diet problem was one of the problems which led to the
development of the theory of Linear Programming. It was
first described by Stigler in a paper published in 1945 on
'The Cost of Subsistence'. In this chapter and in chapter four,
we shall use the problem to motivate or interpret some of the
definitions, theorems and techniques.

-The Diet Problem

A hospital dietician has to plan a week's diet for a patient,
which must include at least a prescribed amount of the nutrients, N1, N2, ..., Nm. He has n different foods, F1, F2, ..., Fn,
at his disposal, of which he knows the nutritional value and
cost. Financial restrictions require him to minimise the cost
of the food used in the diet.
If
aij = the number of units of Ni in one unit of Fj,
bi = the minimum number of units of Ni to be included
in the diet,
cj = the cost of one unit of Fj,

then the dietician's problem is to choose xj units of Fj, so
that the cost of the diet,

c1x1 + c2x2 + ... + cnxn, is a minimum,


14
TYPES OF LINEAR PROGRAMMES

subject to the diet containing at least bi units of Ni, that is
the xj must satisfy

ai1x1 + ai2x2 + ... + ainxn ≥ bi for i = 1, 2, ..., m.

The remaining constraint,

x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0,

says that the amounts of food in the diet must be non-negative.

The reader should note that the above formulation of the
diet problem is a mathematical model and is not altogether
realistic, for instance, the cost of food is not necessarily a
linear function of the amount required. However, as in other
branches of Applied Mathematics, we seek a close approxi-
mation to reality, which is tractable.
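A tiny instance makes the formulation concrete. The Python sketch below is an illustration added here, with invented numbers (two foods, two nutrients); it solves the diet problem by the vertex principle of chapter one, evaluating the cost at each vertex of the feasible region rather than by any method described in this book.

```python
# A toy diet problem: minimise 3*x1 + 2*x2 subject to
#   2*x1 + x2 >= 8   (nutrient N1 requirement)
#   x1 + 3*x2 >= 9   (nutrient N2 requirement)
#   x1, x2 >= 0.
# The candidate optima are the vertices of the feasible region.
candidates = [
    (9.0, 0.0),   # x2 = 0, N2 constraint binding
    (0.0, 8.0),   # x1 = 0, N1 constraint binding
    (3.0, 2.0),   # intersection of 2*x1 + x2 = 8 and x1 + 3*x2 = 9
]

def cost(x):
    return 3 * x[0] + 2 * x[1]

def feasible(x):
    return 2 * x[0] + x[1] >= 8 and x[0] + 3 * x[1] >= 9 and min(x) >= 0

best = min((x for x in candidates if feasible(x)), key=cost)
best_cost = cost(best)   # cost 13 at (3, 2)
```

The minimum cost diet here buys some of each food; the corner at which it occurs is where both nutrient requirements are met exactly.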
The diet problem provides an example of the following class
of linear programming problems.
Definition of a standard minimum problem (s.m.p.)
Minimise c'x,
subject to Ax ≥ b and x ≥ 0,
where x = (xj) is a variable n-vector, c = (cj) is a given n-vector,
b = (bi) is a given m-vector and A = (aij) is a given m × n
matrix.
The above notation will be assumed in the subsequent theory,
unless the terms are defined alternatively.
Corresponding to any linear programme, there exists a
related linear programme, called the dual programme, which
is now defined for the s.m.p.
Definition of the dual of the s.m.p. (the standard maximum
problem)
Maximise y'b,
subject to y'A ≤ c' and y ≥ 0,
where y = (yi) is a variable m-vector.
15
THE THEORY OF LINEAR PROGRAMMING

It is easily shown that the dual of the dual is the original


s.m.p., by writing the dual in the form
minimise −b'y,
subject to −A'y ≥ −c and y ≥ 0.
The dual programme can often be given a useful economic
interpretation, as we now illustrate.

The dual of the diet problem

On hearing of the dietician's problem, an enterprising
chemist sees his opportunity of making a fortune by manufacturing nutrient pills. He considers that the dietician will be
prepared to substitute pills for food in his diet, if the pills are
no more expensive than the food. The chemist's problem is
to choose the price yi of one unit of the Ni pill, so that his
receipts, based on the minimal nutritional requirements of the
diet, are maximised. This is equivalent to

maximise y1b1 + y2b2 + ... + ymbm.

The constraints on the yi are

y1a1j + y2a2j + ... + ymamj ≤ cj for j = 1, 2, ..., n,

which say that the equivalent pill form of each food is not
more expensive than the food itself, and
y1 ≥ 0, y2 ≥ 0, ..., ym ≥ 0,
which is the condition that the prices be non-negative.
The maximising value of yi can be used as the imputed cost
of one unit of the ith nutrient, which is the maximum price
that the dietician would be prepared to pay for one unit of the
ith nutrient.
16
TYPES OF LINEAR PROGRAMMES

Another useful class of linear programming problems are


the canonical problems, as all problems must be put into
canonical form to apply the simplex method of solution.
Definition of a canonical minimum problem (c.m.p.)
Minimise c'x,
subject to Ax = b and x ≥ 0.
To determine the dual problem, which is consistent with the
definition of the dual of the s.m.p., we convert the c.m.p.
into the following s.m.p.,

minimise c'x,

subject to (  A ) x ≥ (  b ) and x ≥ 0.
           ( −A )     ( −b )

The corresponding dual problem is to
maximise z'b − w'b,
subject to z'A − w'A ≤ c' and z ≥ 0, w ≥ 0,
where z and w are m-vectors. Since any vector can be expressed
as the difference between two non-negative vectors, we write
y = z − w to obtain the dual of the c.m.p.
Dual of the canonical minimum problem
Maximise y'b,
subject to y'A ≤ c'.
The reader should note that the vector y in the dual problem
is unrestricted in sign. We now show that any linear programm-
ing problem can be expressed as a c.m.p. Consider the general
linear programming problem:
maximise c1x1 + c2x2 + ... + cnxn,

subject to Σ_{j=1}^{n} aij xj ≤ bi for i = 1, ..., r, (1)

Σ_{j=1}^{n} aij xj ≥ bi for i = r + 1, ..., s, (2)

Σ_{j=1}^{n} aij xj = bi for i = s + 1, ..., m
17
THE THEORY OF LINEAR PROGRAMMING

and xi ≥ 0 for i ∈ S, where S is a subset of the integers {1, 2,
..., n}. The inequalities (1) and (2) are equivalent to the equations

Σ_{j=1}^{n} aij xj + wi = bi for i = 1, ..., r,

Σ_{j=1}^{n} aij xj − wi = bi for i = r + 1, ..., s,

where wi ≥ 0 for i = 1, 2, ..., s. In order that all the variables
are non-negative, we put xi = ui − vi for i ∉ S, where ui ≥ 0
and vi ≥ 0. Combining the above steps gives the following
c.m.p.:

minimise − Σ_{i∈S} ci xi − Σ_{i∉S} ci(ui − vi),

subject to Σ_{j∈S} aij xj + Σ_{j∉S} aij(uj − vj) + wi = bi for i = 1, ..., r,

Σ_{j∈S} aij xj + Σ_{j∉S} aij(uj − vj) − wi = bi for i = r + 1, ..., s,

Σ_{j∈S} aij xj + Σ_{j∉S} aij(uj − vj) = bi for i = s + 1, ..., m,

and xi ≥ 0 for i ∈ S, ui ≥ 0 for i ∉ S, vi ≥ 0 for i ∉ S and
wi ≥ 0 for i = 1, 2, ..., s. The derivation of the dual problem
is left as an exercise for the reader.
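The first of these conversion steps, turning inequalities into equations with slack variables, is mechanical enough to code. The Python helper below is my own sketch (the constraint data are invented), covering only the Ax ≤ b case; surplus variables for the ≥ rows would subtract an identity block instead.

```python
# Converting Ax <= b into equation form: Ax + w = b with w >= 0,
# i.e. the augmented system (A, I)(x, w)' = b.
import numpy as np

def add_slacks(A, b):
    """Return (A_eq, b_eq) with A_eq = (A, I) so A_eq @ (x, w) = b."""
    A = np.asarray(A, float)
    m = A.shape[0]
    return np.hstack([A, np.eye(m)]), np.asarray(b, float)

# One constraint x1 + 2*x2 <= 3: at x = (1, 1) the slack is w = 0.
A_eq, b_eq = add_slacks([[1.0, 2.0]], [3.0])
residual = A_eq @ np.array([1.0, 1.0, 0.0]) - b_eq
```

Each slack variable wi records how far the ith inequality is from being tight; the equilibrium theorems of this chapter are statements about when those slacks must vanish.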

2. Fundamental Duality Theorem

DEFINITIONS. A vector satisfying the constraints of a linear


programme is called a feasible vector. A feasible vector which
maximises (minimises) the given linear function, known as the
objective function, is called an optimal vector and the value
of the maximum (minimum) is called the value of the prog-
ramme.
18
FUNDAMENTAL DUALITY THEOREM

LEMMA 1. If x and y are feasible vectors for the s.m.p. and
its dual, respectively, then

y'b ≤ y'Ax ≤ c'x.

Proof. Since x and y are feasible, y ≥ 0 and Ax ≥ b, which
together imply y'Ax ≥ y'b.
Also x ≥ 0 and y'A ≤ c' ⟹ y'Ax ≤ c'x.
In terms of the diet problem, the above result says that the
cost of the diet using pills does not exceed the cost of the diet
using food.
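The chain of inequalities in Lemma 1 can be watched directly on a small instance. The Python sketch below is an added illustration with invented data: one feasible x for an s.m.p. and one feasible y for its dual, neither of them optimal, and the three quantities fall in the stated order.

```python
# Weak duality on a small s.m.p.: minimise c'x s.t. Ax >= b, x >= 0,
# with dual maximise y'b s.t. y'A <= c', y >= 0.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])

x = np.array([4.0, 3.0])   # feasible: Ax = (11, 13) >= b
y = np.array([1.0, 0.3])   # feasible: y'A = (2.3, 1.9) <= c'

lower = y @ b              # y'b
middle = y @ (A @ x)       # y'Ax
upper = c @ x              # c'x
```

Any dual-feasible y thus gives a lower bound on every primal cost, which is what makes the duality theorem that follows plausible: the two bounds can be pushed together.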
THEOREM 2. If x* and y* are feasible vectors for the s.m.p.
and its dual, respectively, which also satisfy

c'x* = y*'b,

then x* and y* are optimal vectors.

Proof. Let x be any other feasible vector for the s.m.p., then
by Lemma 1, c'x ≥ y*'b = c'x*. Hence x* minimises c'x and
is an optimal vector. The proof for y* follows similarly.
The above theorem gives a sufficient condition for optimality
and holds for any linear programme and its dual. The following
theorem, which is the basic theorem of linear programming,
proves that a necessary and sufficient condition for the exis-
tence of optimal vectors is that a linear programme and its dual
should be feasible. It also proves that the condition in Theorem
2 is necessary for optimality.
THEOREM 3 (fundamental duality theorem). If both a linear
programme and its dual are feasible, then both have optimal
vectors and the values of the two programmes are equal.
If either programme has no feasible vector, then neither has
an optimal vector.
Proof. Since all linear programmes can be expressed as a
s.m.p., there is no loss of generality in proving the theorem for
the s.m.p. The first part of the theorem will be established if we
19
THE THEORY OF LINEAR PROGRAMMING

can show that there exist vectors x* ≥ 0 and y* ≥ 0 satisfying:

Ax ≥ b (3)
y'A ≤ c' (4)
c'x ≤ y'b; (5)

for then by Lemma 1, c'x* = y*'b and the values of the two
programmes are equal. The optimality of x* and y* immediately
follows from Theorem 2.
The system of inequalities (3), (4) and (5) can be written as

(  A    0  ) (x)   (  b )
(  0   −A' ) (y) ≥ ( −c ) . (6)
( −c'   b' )       (  0 )

Suppose (6) has no solution for which x ≥ 0 and y ≥ 0, then
by Theorem 6 on page 11 there exist vectors z ≥ 0, w ≥ 0 and
a scalar λ ≥ 0 satisfying the inequalities

            (  A    0  )                (  b )
(z', w', λ) (  0   −A' ) ≤ 0, (z', w', λ) ( −c ) > 0,
            ( −c'   b' )                (  0 )

which can be written as

z'A ≤ λc' (7)
Aw ≥ λb (8)
z'b > c'w. (9)

If λ = 0, then since the s.m.p. and its dual are feasible, there
exist vectors x ≥ 0 and y ≥ 0 satisfying (3) and (4) respectively.
It now follows from (3) and (7) that

0 ≥ z'Ax ≥ z'b

and from (4) and (8) that

0 ≤ y'Aw ≤ c'w,

which together imply that z'b ≤ c'w, contradicting (9). Hence
λ > 0 and λ⁻¹w and λ⁻¹z are feasible vectors for the s.m.p.
and its dual, respectively, so by Lemma 1, λ⁻¹z'b ≤ λ⁻¹c'w,
which again contradicts (9). Therefore our original assumption
is false and the inequalities (6) have a non-negative solution,
which provides the required optimal vectors.
To prove the second part of the theorem, we suppose that
the s.m.p. has no feasible vector, that is Ax ≥ b has no solution
for which x ≥ 0, and so by Theorem 6 on page 11, the inequalities y'A ≤ 0, y'b > 0 have a solution y1 ≥ 0. If the dual has
no feasible vector, there is nothing to prove, so suppose y0 is a
feasible vector for the dual, then it satisfies

y'A ≤ c', y ≥ 0. (10)

The vector y0 + λy1 also satisfies (10) for all λ ≥ 0 and so is a
feasible vector. Since y1'b > 0, (y0 + λy1)'b can be made
arbitrarily large by choosing λ sufficiently large, so y'b is
unbounded above on the set of feasible vectors and the dual
has no optimal vector. It follows from the dual relationship
that if the dual has no feasible vector, then the s.m.p. has no
optimal vector and the proof is completed.
Since the optimal vectors x* and y* of the programme and
its dual, respectively, satisfy c'x* = y*'b, the value of yi* can
be used to determine how critical the value of the programme
is to small changes in bi. For example, if y1* is large compared
with the remaining yi*'s, then a small change in b1 could make
a large difference to the minimum value of c'x.
We now deduce two theorems, which can be used to test
given feasible vectors for optimality.
21
THE THEORY OF LINEAR PROGRAMMING

3. Equilibrium Theorems

THEOREM 4 (standard equilibrium theorem). The feasible vectors x and y of the s.m.p. and its dual, respectively, are
optimal if and only if

yi > 0 ⟹ (Ax)i = bi (11)

and xj > 0 ⟹ (y'A)j = cj. (12)

The conditions (11) and (12) are respectively equivalent to

(Ax)i > bi ⟹ yi = 0 (13)

and (y'A)j < cj ⟹ xj = 0. (14)

Proof. Suppose x and y are feasible vectors which satisfy (11)
and (12), then

y'Ax = Σ_{i=1}^{m} yi(Ax)i = Σ_{i=1}^{m} yi bi = y'b

and y'Ax = Σ_{j=1}^{n} (y'A)j xj = Σ_{j=1}^{n} cj xj = c'x.

Hence by Theorem 2, x and y are optimal.

Conversely suppose x and y are optimal, then by Theorem 3
and Lemma 1
y'b = y'Ax = c'x. (15)

On writing the left-hand side of (15) in the form

Σ_{i=1}^{m} yi((Ax)i − bi) = 0

22
EQUILIBRIUM THEOREMS

and using the inequalities y ≥ 0 and Ax ≥ b, we deduce that

yi((Ax)i − bi) = 0 for all i,

which gives the conditions (11) and (13). The conditions (12)
and (14) follow on writing the right-hand side of (15) as

Σ_{j=1}^{n} (cj − (y'A)j)xj = 0.

In terms of the diet problem, (13) says that if the nutrient Nᵢ is oversupplied in the optimal diet, then the price of one unit of the Nᵢ pill is zero, and (14) says that the amount of the food Fᵢ in the optimal diet is zero if it is more expensive than its pill equivalent. This suggests that the food Fᵢ is overpriced.
THEOREM 5 (canonical equilibrium theorem). The feasible vectors x and y of the c.m.p. and its dual, respectively, are optimal if and only if

    xᵢ > 0 ⇒ (y'A)ᵢ = cᵢ,   (16)

or equivalently

    (y'A)ᵢ < cᵢ ⇒ xᵢ = 0.   (17)

Proof. Suppose x and y are feasible vectors which satisfy (16); then

    y'Ax = Σᵢ₌₁ⁿ (y'A)ᵢxᵢ = Σᵢ₌₁ⁿ cᵢxᵢ = c'x.

Since Ax = b, y'b = c'x and the optimality of x and y follows from Theorem 2.

Conversely suppose x and y are optimal; then by Theorem 3

    y'b = y'Ax = c'x.


On writing the right-hand side of the above equation as

    Σᵢ₌₁ⁿ (cᵢ − (y'A)ᵢ)xᵢ = 0

and using the inequalities x ≥ 0 and y'A ≤ c', we deduce that

    ((y'A)ᵢ − cᵢ)xᵢ = 0 for all i,

which implies (16).


The following example illustrates how the equilibrium
theorems can be used to test a feasible vector for optimality.

Example. Verify that y = (2, 2, 4, 0)' is an optimal vector for the following linear programme.

Maximise 2y₁ + 4y₂ + y₃ + y₄,

subject to

    y₁ + 3y₂ + y₄ ≤ 8,
    2y₁ + y₂ ≤ 6,
    y₂ + y₃ + y₄ ≤ 6,
    y₁ + y₂ + y₃ ≤ 9,

and y ≥ 0.

Solution. It is easy to verify that y = (2, 2, 4, 0)' is feasible for the given standard maximum problem. The dual problem is the following s.m.p.

Minimise 8x₁ + 6x₂ + 6x₃ + 9x₄,

subject to

    x₁ + 2x₂ + x₄ ≥ 2,
    3x₁ + x₂ + x₃ + x₄ ≥ 4,
    x₃ + x₄ ≥ 1,
    x₁ + x₃ ≥ 1,

and x ≥ 0.

Assume that y is optimal; then by Theorem 4, condition (11),

    y₁ > 0 ⇒ x₁ + 2x₂ + x₄ = 2,
    y₂ > 0 ⇒ 3x₁ + x₂ + x₃ + x₄ = 4,
    y₃ > 0 ⇒ x₃ + x₄ = 1,

and by condition (14),

    y₁ + y₂ + y₃ < 9 ⇒ x₄ = 0.

The above equations have the solution x = (4/5, 3/5, 1, 0)' ≥ 0 and since x₁ + x₃ = 9/5 ≥ 1, x is feasible for the dual. Optimality follows since x and y are feasible and satisfy the conditions (11) and (14) of Theorem 4. As a check, we note that

    c'x = 8 × 4/5 + 6 × 3/5 + 6 × 1 + 9 × 0 = 16
        = 2 × 2 + 4 × 2 + 1 × 4 + 1 × 0 = y'b.
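The equilibrium test applied in this example is entirely mechanical, so it is easy to automate. The following Python sketch is not part of the original text: the function name and tolerance are our own, and the matrices are the example's data as reconstructed from the check c'x = y'b = 16, so treat them as illustrative.

```python
import numpy as np

def check_optimal_smp(A, b, c, x, y, tol=1e-9):
    """Theorem 4 test: x feasible for the s.m.p. (min c'x, Ax >= b, x >= 0),
    y feasible for its dual (max y'b, y'A <= c', y >= 0); both are optimal
    iff the complementary slackness conditions (11) and (12) hold."""
    A, b, c, x, y = (np.asarray(v, float) for v in (A, b, c, x, y))
    feasible = ((A @ x >= b - tol).all() and (x >= -tol).all()
                and (y @ A <= c + tol).all() and (y >= -tol).all())
    cond11 = bool(np.all((y <= tol) | (np.abs(A @ x - b) <= tol)))
    cond12 = bool(np.all((x <= tol) | (np.abs(y @ A - c) <= tol)))
    return bool(feasible) and cond11 and cond12

# Data of the example above (rows of A are the dual s.m.p. constraints).
A = [[1, 2, 0, 1], [3, 1, 1, 1], [0, 0, 1, 1], [1, 0, 1, 0]]
b, c = [2, 4, 1, 1], [8, 6, 6, 9]
ok = check_optimal_smp(A, b, c, x=[0.8, 0.6, 1, 0], y=[2, 2, 4, 0])
print(ok)   # True
```

The same routine reports False if either vector is infeasible or if a positive variable faces a slack constraint.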

4. Basic Optimal Vectors

In Chapter 1 we proved that a linear function on a convex polytope attains its maximum (minimum) at a vertex of the polytope (see Theorem 2 on page 3). We also showed on page 5 that the vertices of the convex set C = {x | Ax = b, x ≥ 0} are the basic non-negative solutions of the equations Ax = b. If C is not a convex polytope, then the linear function might not be bounded above (below) on C, in which case the maximum (minimum) will not exist. However the next theorem proves that if a linear function attains its maximum (minimum) in C, then it attains it at a vertex of C.

THEOREM 6. If a canonical minimum problem has an optimal vector, then it has a basic optimal vector. (A basic optimal vector is an optimal vector which is a basic solution of the constraint equation.)

Proof. We recall that the constraint equation, Ax = b, can be expressed in the form

    Σᵢ₌₁ⁿ xᵢaᵢ = b,   (18)

where a₁, a₂, ..., aₙ are the column vectors of A. Let x* be an optimal vector for the c.m.p.; then without loss of generality we can assume that

    xᵢ* > 0 for i = 1, 2, ..., r and xᵢ* = 0 for i = r+1, ..., n,

by reordering the xᵢ's if necessary. If a₁, a₂, ..., aᵣ are linearly independent, x* is basic and there is nothing to prove. If not, then it follows from Theorem 3 on page 5 that since Σᵢ₌₁ʳ xᵢaᵢ = b has a non-negative solution (x₁*, x₂*, ..., xᵣ*), it has a basic non-negative solution (x̄₁, x̄₂, ..., x̄ᵣ). Hence there exists a vector x̄ = (x̄₁, ..., x̄ᵣ, 0, ..., 0)' ≥ 0 satisfying (18) and such that the set of vectors {aᵢ | x̄ᵢ > 0} is linearly independent, so x̄ is a basic non-negative solution of (18). Let y be an optimal vector for the dual; then by Theorem 5, since x* is optimal,

    (y'A)ᵢ < cᵢ ⇒ xᵢ* = 0,

but xᵢ* = 0 ⇒ x̄ᵢ = 0, hence x̄ satisfies the condition (17) for optimality and is therefore a basic optimal vector.
The above theorem gives us a method of finding an optimal vector, which is to solve the system of equations Σⱼ∈S xⱼaⱼ = b for all sets S such that the set of vectors {aⱼ | j ∈ S} is linearly independent. The non-negative solution which minimises c'x is a basic optimal vector, provided an optimal vector exists. However for an m × n matrix A, the number of trials required could reach the binomial coefficient (n choose m), which is prohibitive for large n and m. The simplex method, described in Chapter 4, is much more efficient.
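For small problems the brute-force search just described can be sketched in a few lines of Python. This is our own illustration, not the book's: it enumerates only sets S of exactly m independent columns (so it assumes rank A = m), and the tolerances are arbitrary.

```python
import itertools
import numpy as np

def best_basic_vector(A, b, c):
    """Solve a c.m.p. (min c'x, Ax = b, x >= 0) by trying every set S of
    m linearly independent columns, solving for the basic solution and
    keeping the non-negative one of least cost."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    best_z, best_x = None, None
    for S in itertools.combinations(range(n), m):
        B = A[:, S]
        if abs(np.linalg.det(B)) < 1e-12:   # columns dependent: skip
            continue
        xS = np.linalg.solve(B, b)
        if (xS < -1e-12).any():             # basic but not non-negative
            continue
        x = np.zeros(n)
        x[list(S)] = xS
        z = float(c @ x)
        if best_z is None or z < best_z:
            best_z, best_x = z, x
    return best_z, best_x

z, x = best_basic_vector([[1, 1, 1]], [4], [2, 1, 3])
print(z, x)   # 4.0 [0. 4. 0.]
```

The number of iterations is exactly (n choose m), which is the count of trials mentioned above.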

5. Graphical Method of Solution

We conclude this chapter by illustrating how standard linear programming problems in two variables can be solved graphically.
Example.

    [FIGURE 5: the feasible region (shaded) with the line z₀ = x₁ + x₂ moved parallel to itself towards the origin until it meets the region only at the vertex P.]

Minimise x₁ + x₂,

subject to

    2x₁ + x₂ ≥ 2,
    x₁ + 2x₂ ≥ 2,
    6x₁ + x₂ ≥ 3,

and x₁ ≥ 0, x₂ ≥ 0.

Solution. The values of x₁ and x₂ satisfying one of the above constraints lie in the half-plane bounded by the straight line given by satisfying the inequality as an equation. In figure 5, the shaded region represents the intersection of all the half-planes and is the region of feasible values of x₁ and x₂. The equation z₀ = x₁ + x₂ is a straight line with perpendicular distance z₀/√2 from the origin, so to minimise z₀ the line is moved parallel to itself towards the origin until it only meets the shaded region at the vertex P. The values of x₁ and x₂ at P provide the optimal solution, which is x₁ = x₂ = 2/3 and z₀ = 4/3.

EXERCISES ON CHAPTER TWO


1. Show that the following linear programme is feasible, but has no optimal solution.

Maximise 2y₁ + 3y₂,

subject to

    −5y₁ + 4y₂ ≤ −1,
    y₁ − y₂ ≤ 2,

and y ≥ 0.

2. Verify that x = (0, 2/3, 0, 0, 5/3)' is an optimal solution for the following linear programme.

Minimise x₁ + 7x₂ − 5x₃ + 10x₄ − 4x₅,

subject to

    x₁ + 4x₂ − 3x₃ + 4x₄ − x₅ = 1,
    3x₁ + x₂ + 2x₃ − 2x₄ + 2x₅ = 4,

and x ≥ 0.

3. The following numbers of waitresses are required each day at a transport cafe during the given periods.

    3.01–7.00  7.01–11.00  11.01–3.00  3.01–7.00  7.01–11.00  11.01–3.00
        2          10          14          8          10          3

The waitresses report for duty at 3 a.m., 7 a.m., etc., and their tours of duty last for eight hours. The problem is to determine how the given numbers can be supplied for each period, whilst involving the smallest number of waitresses each day. If x₁, x₂, ..., x₆ are the numbers starting at 3 a.m., 7 a.m., ..., 11 p.m., respectively, verify that an optimal solution is x = (0, 14, 0, 8, 2, 2)'.
4. Show that the following linear programme has an optimal solution, and find it by computing the basic feasible solutions.

Minimise 2x₁ + x₂ − 4x₃,

subject to

    x₁ + 3x₂ − x₃ = 1,
    3x₁ + 2x₂ + x₃ = 7,

and x ≥ 0.
5. Show that the dual of the general linear programme, defined on page 18, is to

minimise Σᵢ₌₁ᵐ yᵢbᵢ

subject to

    Σᵢ₌₁ᵐ yᵢaᵢⱼ ≥ cⱼ for j ∈ S,
    Σᵢ₌₁ᵐ yᵢaᵢⱼ = cⱼ for j ∉ S,

and yᵢ ≥ 0 for i = 1, ..., r; yᵢ ≤ 0 for i = r+1, ..., s.
6. Formulate the following problem from Approximation Theory as a linear programme. For given functions φ₁(x), φ₂(x), ..., φₙ(x), find a₁, a₂, ..., aₙ such that Σᵢ₌₁ⁿ aᵢφᵢ(x) is the best approximation to a function f(x) on the set {x₁, x₂, ..., xₘ}, in the sense that the aᵢ minimise the maximum over j of |f(xⱼ) − Σᵢ₌₁ⁿ aᵢφᵢ(xⱼ)|. Show that the dual is a canonical problem. (Hint: let aₙ₊₁ = maxⱼ |f(xⱼ) − Σᵢ₌₁ⁿ aᵢφᵢ(xⱼ)| and minimise aₙ₊₁.)

7. Give the dual of the following linear programme:

    minimise λ,
    subject to Ax + λb ≥ b and x ≥ 0, λ ≥ 0.

Show that the problem and its dual are feasible, and hence obtain a proof of Theorem 6 on page 11.
8. Wallpaper is supplied in rolls of 33 feet length, and 19 strips of 7 feet length and 8 strips of 3 feet length are required to paper a room. Verify that to minimise the number of rolls used, 4 rolls must be cut in the pattern 4 × 7' + 1 × 3' with 2' wasted and 1 roll in the pattern 3 × 7' + 4 × 3'.
9. If x = (x₁, x₂, ..., xₘ, 0, ..., 0)' is a basic feasible vector for the canonical minimum problem, where xᵢ > 0 for i = 1, 2, ..., m, and if the vector y, which satisfies the equations y'aⱼ = cⱼ for j = 1, 2, ..., m, is not feasible for the dual problem, prove that x is not optimal.
10. A firm can manufacture n different products in a given period of time. Constraints upon production are imposed by a fixed capacity in each of the m production departments and by minimum and maximum production levels obtained from contracts and sales forecasts, respectively. Let cⱼ be the profit on one unit of the jth product, bᵢ be the capacity of the ith department, aᵢⱼ be the number of units of capacity of the ith department required to produce one unit of the jth product, and let uⱼ and lⱼ be the upper and lower limits, respectively, upon the production of the jth product. If xⱼ units of the jth product are manufactured, formulate the problem of maximising the profits as a linear programme. By considering the dual programme, show that the feasible vector x = (xⱼ) is optimal if there exists a vector y = (yᵢ) satisfying (y'A)ⱼ = cⱼ for lⱼ < xⱼ < uⱼ, (y'A)ⱼ ≤ cⱼ for xⱼ = lⱼ and (y'A)ⱼ ≥ cⱼ for xⱼ = uⱼ, where A = (aᵢⱼ).

CHAPTER THREE

The Transportation Problem

1. Formulation of Problem and Dual

The transportation problem was first described by Hitchcock in a paper on 'The Distribution of a Product from Several Sources to Numerous Localities', which was published in 1941. Although the problem can be solved by the simplex method, the special form of its constraint matrix leads to alternative techniques of solution, one of which is described in this chapter¹.

The Transportation Problem

A commodity is manufactured at m factories, F₁, F₂, ..., Fₘ, and is sold at n markets, M₁, M₂, ..., Mₙ. The annual output at Fᵢ is sᵢ units and the annual demand at Mⱼ is dⱼ units. If the cost of transporting one unit of the commodity from Fᵢ to Mⱼ is cᵢⱼ, then the problem is to determine which factories should supply which markets in order to minimise the transportation costs. For a realistic problem, we can assume cᵢⱼ ≥ 0, sᵢ > 0 and dⱼ > 0. If xᵢⱼ is the number of units transported per year from Fᵢ to Mⱼ, then the transportation costs,

    Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ cᵢⱼxᵢⱼ,

are to be minimised,

1 Another technique is based on network theory, see Chapter 5 of Gale.



subject to the amount taken from Fᵢ not exceeding the supply there, that is

    Σⱼ₌₁ⁿ xᵢⱼ ≤ sᵢ for i = 1, 2, ..., m,   (1)

and the amount taken to Mⱼ satisfying the demand at Mⱼ, that is

    Σᵢ₌₁ᵐ xᵢⱼ ≥ dⱼ for j = 1, 2, ..., n.   (2)

The amounts carried from Fᵢ to Mⱼ must be non-negative, so

    xᵢⱼ ≥ 0 for i = 1, 2, ..., m and j = 1, 2, ..., n.

It follows from (1) and (2) that for the problem to be feasible,

    Σᵢ₌₁ᵐ sᵢ ≥ Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ xᵢⱼ ≥ Σⱼ₌₁ⁿ dⱼ,

so the total demand must not exceed the total supply. If the total supply and demand are equal, then (1) and (2) must be satisfied as equations. In any case, the problem can always be formulated so that the total supply and demand are equal. This is achieved by introducing an extra market Mₙ₊₁ (a dump) with demand dₙ₊₁ = Σᵢ₌₁ᵐ sᵢ − Σⱼ₌₁ⁿ dⱼ, the excess supply, and by choosing cᵢ,ₙ₊₁ = 0 for i = 1, ..., m. It can be shown that an optimal solution for the modified problem is an optimal solution for the original problem.
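The dump construction is a one-line piece of bookkeeping; a sketch in Python (the function name is ours):

```python
def balance(costs, supplies, demands):
    """Append the dump market M_{n+1}: zero transport cost from every
    factory and a demand equal to the excess supply, so that total
    supply equals total demand."""
    excess = sum(supplies) - sum(demands)
    if excess < 0:
        raise ValueError("infeasible: total demand exceeds total supply")
    return [row + [0] for row in costs], supplies, demands + [excess]

costs, s, d = balance([[3, 1], [2, 4]], [5, 7], [6, 4])
print(costs, d)   # [[3, 1, 0], [2, 4, 0]] [6, 4, 2]
```

Any units sent to the dump column in an optimal plan are simply goods left at the factories.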
We therefore define the transportation problem as the
following c.m.p.

Minimise Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ cᵢⱼxᵢⱼ,

subject to

    Σⱼ₌₁ⁿ xᵢⱼ = sᵢ for i = 1, 2, ..., m,   (3)

    Σᵢ₌₁ᵐ xᵢⱼ = dⱼ for j = 1, 2, ..., n,   (4)

and

    xᵢⱼ ≥ 0 for i = 1, 2, ..., m and j = 1, 2, ..., n,   (5)

where sᵢ > 0, dⱼ > 0 and cᵢⱼ ≥ 0 are given and Σᵢ₌₁ᵐ sᵢ = Σⱼ₌₁ⁿ dⱼ.

The dual of the transportation problem

Let b = (s₁, ..., sₘ, d₁, ..., dₙ)', x = (x₁₁, ..., x₁ₙ, x₂₁, ..., xₘₙ)' and the (m+n) × mn matrix

    A = (a₁₁, ..., a₁ₙ, a₂₁, ..., aₘₙ),   (6)

where the column aᵢⱼ = (eᵢ', ēⱼ')', eᵢ is the ith unit m-vector and ēⱼ is the jth unit n-vector. In the case m = 2 and n = 3,

    A = ( 1 1 1 0 0 0
          0 0 0 1 1 1
          1 0 0 1 0 0
          0 1 0 0 1 0
          0 0 1 0 0 1 ).

With the above notation, (3), (4) and (5) reduce to the c.m.p.

    minimise c'x,
    subject to Ax = b and x ≥ 0.

The corresponding dual problem is to

    maximise y'b,
    subject to y'A ≤ c',

where y is an (m+n)-vector. If we put

    y = (−u₁, ..., −uₘ, v₁, ..., vₙ)',

then y'aᵢⱼ = −uᵢ + vⱼ and y'b = −Σᵢ₌₁ᵐ uᵢsᵢ + Σⱼ₌₁ⁿ vⱼdⱼ.

Hence the dual problem is to find uᵢ and vⱼ which

    maximise Σⱼ₌₁ⁿ vⱼdⱼ − Σᵢ₌₁ᵐ uᵢsᵢ,   (7)

subject to

    vⱼ − uᵢ ≤ cᵢⱼ for i = 1, 2, ..., m and j = 1, 2, ..., n.   (8)

The dual problem can be interpreted in the following way. A road haulage operator puts in a tender for the transportation of the commodity from the factories to the markets. He offers to pay uᵢ per unit commodity at the factory Fᵢ and to sell the commodity for vⱼ per unit at the market Mⱼ, so that his charge for carrying unit commodity from Fᵢ to Mⱼ is vⱼ − uᵢ. With this interpretation (7) represents the amount he will receive for the job, which he will try to maximise, and (8) is the condition that his charges are competitive.
2. Theorems Concerning Optimal Solutions

We now show that the transportation problem has an optimal solution, by applying the fundamental duality theorem.

THEOREM 1. The transportation problem has an optimal solution.

Proof. By inspection xᵢⱼ = sᵢdⱼ / Σₖ₌₁ᵐ sₖ satisfies (3), (4) and (5), and uᵢ = vⱼ = 0 satisfies (8), as cᵢⱼ ≥ 0. Hence the problem and its dual are feasible, so both have an optimal solution by Theorem 3 on page 19.

If the commodity is cars, then the optimal solution will only be meaningful if it specifies an integral number of cars to be transported from Fᵢ to Mⱼ. The next two theorems prove that if the supplies and demands are integral, then an optimal solution in integers exists. We first recall the definition of a minor of order r, which is the determinant formed from a matrix by omitting all but r rows and r columns of the matrix.¹
THEOREM 2. Any minor of the matrix A, defined by (6), takes one of the values −1, 0, 1.

Proof. The theorem is proved by induction on the order r of the minor and is trivially true for all minors of order 1. Assume that the theorem holds for all minors of order r = N − 1 and consider minors of order N. Each column of A contains zeros apart from two unit elements, one of which occurs in the first m rows and the other in the last n rows of A. If the minor of order N contains two unit elements in each column, then the sum of its rows taken from the first m rows of A equals the sum of its rows taken from the last n rows of A, so the rows are linearly dependent and the value of the minor is zero. If not, then the minor contains one column with at most one non-zero unit element. Expanding the minor by this column and using the inductive hypothesis, it follows that the minor takes one of the values −1, 0, 1.
1 See Cohn page 69.

THEOREM 3. If the supplies sᵢ and the demands dⱼ are integral, then the transportation problem has an optimal solution in which the xᵢⱼ's are integers.

Proof. It follows from Theorem 1 and Theorem 6 on page 25 that the transportation problem has a basic optimal solution, so we only need to prove that the basic solutions of Ax = b are integral. If x is a basic solution of Ax = b, then the non-zero elements of x are the solution of a regular system of equations Mz = b₀, where M is the matrix corresponding to a non-vanishing minor of A and b₀ contains those elements of b corresponding to the rows of A included in M. Since the elements of b₀ are integral and the determinant of M equals ±1, it follows from Cramer's Rule¹ that the elements of z, and hence of x, are integral.

3. Method of Solution with Modifications for Degeneracy

The following property of the matrix A is important, as it underlies the method of solution of the transportation problem given in this chapter.

THEOREM 4. Rank A = m + n − 1, where A is defined by (6).

Proof. Let the rows of A be the mn-vectors r₁, ..., rₘ, rₘ₊₁, ..., rₘ₊ₙ. By inspection Σᵢ₌₁ᵐ rᵢ = Σᵢ₌ₘ₊₁ᵐ⁺ⁿ rᵢ, so the rows of A are linearly dependent. Suppose that

    Σᵢ₌₁ᵐ⁺ⁿ⁻¹ λᵢrᵢ = 0.   (9)

Since the matrix formed by deleting the bottom row of A has only one non-zero element in the n-th, 2n-th, ..., mn-th columns, it follows, by taking the n-th, 2n-th, ..., mn-th components of (9), that

    λ₁ = λ₂ = ... = λₘ = 0.

The 1st, 2nd, ..., (n−1)-th components of (9) now give

    λₘ₊₁ = λₘ₊₂ = ... = λₘ₊ₙ₋₁ = 0.

Hence r₁, r₂, ..., rₘ₊ₙ₋₁ are linearly independent and rank A = m + n − 1.
Although rank A = m + n − 1, the system of equations Ax = b ((3) and (4)) is consistent, as the total supply equals the total demand. A basic solution of Ax = b will have at most m + n − 1 non-zero values of xᵢⱼ and if the problem is degenerate, which occurs when a partial sum of the sᵢ's equals a partial sum of the dⱼ's, then a basic solution may have fewer than m + n − 1 non-zero xᵢⱼ's. The following method of solution assumes that the problem is non-degenerate.

Method of solution of the transportation problem

The method is based on the equilibrium theorem, Theorem 5 on page 23, which says that the feasible solutions xᵢⱼ, uᵢ and vⱼ are optimal if and only if

    xᵢⱼ > 0 ⇒ vⱼ − uᵢ = cᵢⱼ.

The procedure is to find a basic feasible solution of (3) and (4), which contains m + n − 1 positive xᵢⱼ for a non-degenerate problem, and then to solve the m + n − 1 equations

    vⱼ − uᵢ = cᵢⱼ for xᵢⱼ > 0,

which determine the uᵢ and vⱼ uniquely, provided a value is assigned to one of them, say u₁ = 0. If the uᵢ and vⱼ so determined also satisfy

    vⱼ − uᵢ ≤ cᵢⱼ for xᵢⱼ = 0,

then they are a feasible solution for the dual problem and the optimality of the xᵢⱼ follows from the equilibrium theorem. If

    vₗ − uₖ > cₖₗ for some xₖₗ = 0,

then the basic feasible solution can be improved by making xₖₗ > 0 (see exercise 9 on page 30). It is worth noting that there is no loss of generality in putting u₁ = 0, as an arbitrary constant can be added to all the uᵢ and vⱼ without altering their feasibility, (8), or the value of the function to be maximised, (7).
We now give two methods for finding a basic feasible solution (b.f.s.). The north-west corner method puts x₁₁ = min(s₁, d₁) and then proceeds along successive rows from left to right, with each allocation either using up the supplies at Fᵢ or satisfying the demand at Mⱼ. The matrix minimum method finds the minimum element in the cost matrix (cᵢⱼ) and allocates as much as possible along that route. It then repeats the process using successive minima until all the supplies are exhausted and the demands are satisfied. It can be shown that both methods give a b.f.s., though the matrix minimum method usually gives a solution which is closer to the optimal solution, so it is the more efficient method.
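The north-west corner method translates directly into code. A sketch (ours, not the book's), assuming a balanced problem with total supply equal to total demand:

```python
def north_west_corner(supplies, demands):
    """Initial b.f.s. for a balanced transportation problem: allocate as
    much as possible at the current north-west cell, then move right
    when the demand is met or down when the supply is exhausted."""
    s, d = list(supplies), list(demands)
    x = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        t = min(s[i], d[j])
        x[i][j] = t
        s[i] -= t
        d[j] -= t
        if s[i] == 0:
            i += 1      # supply at F_i exhausted: next row
        else:
            j += 1      # demand at M_j satisfied: next column
    return x

print(north_west_corner([3, 5], [4, 2, 2]))   # [[3, 0, 0], [1, 2, 2]]
```

For a non-degenerate problem the result has exactly m + n − 1 positive entries; in the degenerate case some of those entries collapse to zero, as discussed below.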
Example. Find an optimal solution for the transportation problem for which the cost matrix

    (cᵢⱼ) = ( 5 5 7
              1 2 1 ),

the supplies are s₁ = 3, s₂ = 5, and the demands are d₁ = 4, d₂ = 2, d₃ = 2.

Solution. We construct a tableau with the costs cᵢⱼ in the left-hand side of each column and the initial b.f.s. in the right-hand side. The b.f.s. has been obtained by the north-west corner method and the θ's should be neglected for the moment. Putting u₁ = 0, the uᵢ and vⱼ are calculated from the equations vⱼ − uᵢ = cᵢⱼ for xᵢⱼ > 0; for example, v₁ − 0 = c₁₁ = 5 as x₁₁ > 0.

    dⱼ            4          2          2
    sᵢ  uᵢ\vⱼ     5          6          5
    3     0     5 | 3−θ    5 | (6) θ    7 | (5)
    5     4     1 | 1+θ    2 | 2−θ      1 | 2

The circled entries in the right-hand side of the columns are the values of vⱼ − uᵢ for xᵢⱼ = 0, which are to be compared with the values of cᵢⱼ in the left-hand side. Since v₂ − u₁ = 6 > 5 = c₁₂, the vⱼ and uᵢ are not feasible and, for a non-degenerate problem, it can be shown that the xᵢⱼ are not optimal (see exercise 9 on page 30). Let x₁₂ = θ; then to satisfy the constraints, x₁₁ = 3 − θ, x₂₁ = 1 + θ, and x₂₂ = 2 − θ. This leads to a change in cost

    θ(c₁₂ − c₁₁ + c₂₁ − c₂₂) = θ[c₁₂ − (v₁ − u₁) + (v₁ − u₂) − (v₂ − u₂)] = θ(c₁₂ + u₁ − v₂) < 0 for θ > 0.

A new b.f.s. is obtained by putting θ = 2, which is the maximum value of θ consistent with xᵢⱼ ≥ 0. It should be noted that θ is first put into a position for which vⱼ − uᵢ > cᵢⱼ and is then added to or subtracted from positive values of xᵢⱼ. This is necessary to ensure that the new feasible solution is basic and that the transportation costs are reduced. On putting θ = 2 and calculating the uᵢ and vⱼ as before, the following tableau is obtained.

    dⱼ            4          2          2
    sᵢ  uᵢ\vⱼ     5          5          5
    3     0     5 | 1      5 | 2        7 | (5)
    5     4     1 | 3      2 | (1)      1 | 2

This time the uᵢ and vⱼ satisfy vⱼ − uᵢ ≤ cᵢⱼ for xᵢⱼ = 0, so an optimal solution is

    (xᵢⱼ) = ( 1 2 0
              3 0 2 )

and the minimum cost is 20.
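The result can be checked against a general LP solver by writing the c.m.p. form (3)–(5) out explicitly. A Python sketch using scipy; the cost matrix is the one reconstructed from the tableau above, so treat the data as illustrative.

```python
import numpy as np
from scipy.optimize import linprog

costs = np.array([[5, 5, 7],      # c_ij as in the example above
                  [1, 2, 1]])
s, d = [3, 5], [4, 2, 2]
m, n = costs.shape
A_eq = np.zeros((m + n, m * n))   # x is the row-major vector (x11,...,x23)
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1    # row sums = supplies, equations (3)
for j in range(n):
    A_eq[m + j, j::n] = 1             # column sums = demands, equations (4)
res = linprog(costs.ravel(), A_eq=A_eq, b_eq=s + d,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)   # an optimal plan, minimum cost 20
```

One of the m + n equality rows is redundant (rank A = m + n − 1), but the system is consistent and the solver handles the redundancy.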

It is straightforward to show that if an optimal solution satisfies vⱼ − uᵢ < cᵢⱼ for all xᵢⱼ = 0, then it is unique (see exercise 3 on page 66). So in the above example, the optimal solution is unique. However, if vₗ − uₖ = cₖₗ for some xₖₗ = 0, then another optimal solution can be found, if the problem is non-degenerate.

Degeneracy

Degeneracy will be formally defined on page 56 and, as mentioned on page 37, occurs for the transportation problem when a partial sum of the sᵢ's equals a partial sum of the dⱼ's. Under such circumstances a b.f.s. may be obtained in which fewer than m + n − 1 of the xᵢⱼ's are positive, with the result that there are insufficient equations to determine the uᵢ and vⱼ. This difficulty is overcome by making the problem non-degenerate as follows. Let

    s̄ᵢ = sᵢ + ε for i = 1, 2, ..., m,
    d̄ⱼ = dⱼ for j = 1, 2, ..., n−1,
    d̄ₙ = dₙ + mε,

where ε is a small quantity chosen so that a partial sum of the s̄ᵢ's is not equal to a partial sum of the d̄ⱼ's, though the total supply still equals the total demand. The problem is now solved with supplies s̄ᵢ and demands d̄ⱼ, and the optimal solution for the original problem is found by putting ε = 0 in the optimal solution of the perturbed problem.
Example. Solve the transportation problem with cost matrix

    (cᵢⱼ) = ( · 2 ·
              · · · ),

the supplies s₁ = 3, s₂ = 5, and the demands d₁ = 3, d₂ = 3, d₃ = 2.

Solution. The problem is degenerate as s₂ = d₁ + d₃ and the initial b.f.s.

    ( 3 0 0
      0 3 2 )

contains only three non-zero xᵢⱼ. On making the problem non-degenerate according to the above scheme and constructing an initial b.f.s. by the matrix minimum method, the following tableau is obtained.

3 3 2+2e
S, II, Vj o 4

3+e
5+e

Since v₁ − u₂ > c₂₁, we put x₂₁ = θ and find that the maximum value of θ = 2 + ε, which gives the next tableau.

o 2+2e
1- .
o 5 I @

The uᵢ and vⱼ now satisfy vⱼ − uᵢ ≤ cᵢⱼ for xᵢⱼ = 0, so putting ε = 0, we obtain the optimal solution

    (xᵢⱼ) = ( 0 3 0
              3 0 2 )

with cost = 14.

4. Other Problems of Transportation Type

We conclude this chapter by describing two problems which


can be transformed into transportation problems and so can
be solved by the method described in this chapter.

The Caterer's Problem

A caterer has to provide napkins for dinners on m successive days and nᵢ napkins are required for the dinner on the ith day. The napkins can either be bought at b pence each, laundered by the following day at c pence each, or laundered in three days at d pence each. If b > c > d, the caterer has to determine how many napkins should be bought and how many should be sent to each of the laundry services so as to minimise the cost of providing napkins. To formulate the problem as a transportation problem, let the factories F₁, F₂, ..., Fₘ be the baskets of dirty napkins with supplies n₁, n₂, ..., nₘ, respectively, and Fₘ₊₁ be the shop with a supply of Σᵢ₌₁ᵐ nᵢ napkins. For the markets M₁, M₂, ..., Mₘ we take the dinners with demands n₁, n₂, ..., nₘ, respectively, and, so that the total supply and demand are equal, we introduce a dump Mₘ₊₁ with a demand of Σᵢ₌₁ᵐ nᵢ. The cost matrix for the problem is shown below.

    sᵢ \ dⱼ    n₁   n₂   n₃   n₄   n₅   ...  nₘ   Σnᵢ

    n₁         ∞    c    c    d    d    ...  d     0
    n₂         ∞    ∞    c    c    d    ...  d     0
    n₃         ∞    ∞    ∞    c    c    ...  d     0
    ...
    nₘ         ∞    ∞    ∞    ∞    ∞    ...  ∞     0
    Σnᵢ        b    b    b    b    b    ...  b     0

The cost matrix is constructed by assuming that on the fourth day (say) napkins used on the first day have been laundered by the three-day service and napkins used on the second and third days have been laundered by the one-day service. The infinite costs represent impossible supplies of napkins.
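The rule for filling in the caterer's cost matrix can be sketched in Python (the function name, argument order and 0-based day numbering are ours):

```python
import math

def caterer_costs(m, buy, fast, slow):
    """(m+1) x (m+1) cost matrix: rows are the m baskets of dirty napkins
    plus the shop, columns the m dinners plus the dump.  With 0-based
    days, the one-day service serves dinners 1 or 2 days later and the
    three-day service dinners 3 or more days later."""
    C = [[math.inf] * m + [0] for _ in range(m)]   # dump column costs 0
    for i in range(m):
        for j in range(i + 1, m):
            C[i][j] = slow if j - i >= 3 else fast
    C.append([buy] * m + [0])   # the shop can supply any dinner
    return C

C = caterer_costs(4, 15, 6, 3)
print(C[0])   # [inf, 6, 6, 3, 0]
```

With the prices of exercise 3 below (15d, 6d, and a cheap slow service), the first basket's row reproduces the pattern ∞ c c d ... 0 of the table above.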

The Optimal Assignment Problem

There are m candidates C₁, C₂, ..., Cₘ for n jobs J₁, J₂, ..., Jₙ and an efficiency expert assesses the rating of Cᵢ for Jⱼ as aᵢⱼ ≥ 0. The problem is to allocate the candidates to the jobs so as to maximise the sum of the ratings. Hence numbers xᵢⱼ, equal to 0 or 1, are required which

    maximise Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ aᵢⱼxᵢⱼ,

subject to

    Σⱼ₌₁ⁿ xᵢⱼ ≤ 1 for i = 1, 2, ..., m,   (10)

and

    Σᵢ₌₁ᵐ xᵢⱼ ≤ 1 for j = 1, 2, ..., n.   (11)

The inequality (10) says that a candidate is assigned to at most one job and (11) is the condition that at most one candidate is allocated to each job. If m < n, then n − m fictitious candidates can be introduced having zero rating for all the jobs, and if n < m, then m − n fictitious jobs can be included for which all the candidates have zero rating. Hence the problem can be modified so that m = n and every candidate is assigned to a job, as the inequality aᵢⱼ ≥ 0 implies that there is no advantage in leaving a job unfilled. With these modifications, the problem is to

    maximise Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼxᵢⱼ,

subject to

    Σⱼ₌₁ⁿ xᵢⱼ = 1 for i = 1, 2, ..., n,

    Σᵢ₌₁ⁿ xᵢⱼ = 1 for j = 1, 2, ..., n,

and xᵢⱼ ≥ 0 for i = 1, 2, ..., n and j = 1, 2, ..., n.

It follows from Theorems 1, 2 and 3, suitably modified, that the above problem has an optimal solution in which the xᵢⱼ equal 0 or 1,¹ so there is no loss in demanding that the xᵢⱼ be non-negative.

The problem can now be solved as a degenerate transportation problem, except that the dual variables uᵢ and vⱼ must satisfy

    vⱼ − uᵢ ≥ aᵢⱼ for xᵢⱼ = 0

in order that the xᵢⱼ be optimal, as the objective function is to be maximised. In this problem, the matrix maximum method will, in general, provide the most efficient initial b.f.s.
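Modern libraries solve the assignment problem directly. A sketch using scipy's linear_sum_assignment (the ratings below are made up for illustration; they are not the data of any exercise):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

ratings = np.array([[4, 1, 3],    # a_ij: rows = candidates, columns = jobs
                    [2, 0, 5],
                    [3, 2, 2]])
rows, cols = linear_sum_assignment(ratings, maximize=True)
total = int(ratings[rows, cols].sum())
print(list(zip(rows.tolist(), cols.tolist())), total)   # best total rating 11
```

Internally such routines exploit the same 0–1 integrality of basic solutions that Theorems 2 and 3 establish for the transportation matrix.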

EXERCISES ON CHAPTER THREE

1. Solve the transportation problem with cost matrix

    ( 2 · ·
      0 · · ),

where the supplies are s₁ = 6, s₂ = 4, and the demands are d₁ = 5, d₂ = 2, d₃ = 3.

1 This result could be used to justify a monogamous society.



2. A transportation problem has cost matrix

    ( 4 6 · ·
      5 7 · ·
      6 5 · · ),

supplies s₁ = 3, s₂ = 5, s₃ = 7 and demands d₁ = 2, d₂ = 5, d₃ = 4, d₄ = 4. Find the minimum cost as a function of λ for λ ≥ 0.
3. A caterer requires 30 napkins on Monday, 40 on Tuesday, 60 on
Wednesday and 40 on Thursday. Napkins cost 15d, can be laundered
by the following day for 6d and in two days for 3d. Determine the most
economical plan for the caterer.
4. An efficiency expert has rated four candidates Cᵢ for four jobs Jⱼ according to the table below.

          J₁   J₂   J₃   J₄
    C₁     8    9    5    6
    C₂     7    1    2    4
    C₃     6    8    2    5
    C₄     1    3    9    3

Find all the optimal assignments of candidates to jobs.


5. Show that the transportation problem with cost matrix (cᵢⱼ) has the same optimal solutions as the transportation problem with cost matrix (c̄ᵢⱼ) = (cᵢⱼ + aᵢ + bⱼ), where a₁, ..., aₘ, b₁, ..., bₙ are any numbers. Give an interpretation of this result.
6. A jam manufacturer has to supply d₁, d₂, d₃ and d₄ tons of jam, respectively, during the next four months, and his plant can produce up to s tons per month. The cost of producing one ton of jam will be c₁, c₂, c₃ and c₄ during these months and there is a storage cost of k per ton per month, if the jam is not delivered in the same month as it is produced. Show that the manufacturer's problem of minimising his costs is equivalent to a transportation problem and solve the problem when d₁ = 4, d₂ = 7, d₃ = 5, d₄ = 7, s = 6, c₁ = 3, c₂ = 5, c₃ = 4, c₄ = 6 and k = 1. (Let xᵢⱼ be the number of tons produced in the ith month for delivery in the jth month and let xᵢ₅ be the unused capacity in the ith month.)
7. Show that in any optimal assignment at least one candidate is
assigned to the job for which he is best qualified.

CHAPTER FOUR

The Simplex Method

1. Preliminary Discussion and Rules


The simplex method was invented by Dantzig and was first published in 1951. It can be used to solve any linear programming problem, once it has been put into canonical form. Its name derives from the geometrical 'simplex', as one of the first problems to be solved by the method contained the constraint

    Σᵢ₌₁ⁿ⁺¹ xᵢ = 1.

Theorem 6 on page 25 says that if a linear programme has an optimal solution, then the optimum is attained at a vertex of the convex set of feasible solutions. Suppose the linear function to be minimised is c'x; then z₀ = c'x is the equation of a hyperplane, orthogonal to c, whose perpendicular distance from the origin is proportional to z₀. Hence the optimal solution can be found by drawing hyperplanes, orthogonal to c, through all the vertices and by selecting the vertex whose hyperplane is the least distance from the origin (see figure 5 on page 27). The simplex method chooses a sequence of vertices, such that the value of z₀ is reduced at each choice. After a finite number of choices, the minimum, if it exists, is attained.

We begin by recalling the technique of pivotal condensation, which is used at each step of the simplex procedure.

Pivotal Condensation

Suppose b₁, b₂, ..., bₘ is a basis for the space Rᵐ; then the vectors a₁, a₂, ..., aₙ ∈ Rᵐ can be expressed uniquely in terms of the basis by

    aⱼ = Σᵢ₌₁ᵐ tᵢⱼbᵢ for j = 1, 2, ..., n.   (1)

An alternative way of expressing (1), which will be used later, is

    A = BT,   (2)

where A = (a₁, a₂, ..., aₙ), B = (b₁, b₂, ..., bₘ) and T = (tᵢⱼ). We now represent (1) by the following tableau T.

           a₁    ...   aₛ    ...   aₙ

    b₁     t₁₁   ...   t₁ₛ   ...   t₁ₙ     R₁
    ...
    bᵣ     tᵣ₁   ...   tᵣₛ   ...   tᵣₙ     Rᵣ        T
    ...
    bₘ     tₘ₁   ...   tₘₛ   ...   tₘₙ     Rₘ

If tᵣₛ ≠ 0, bᵣ can be replaced in the basis by aₛ as follows. The relation aₛ = Σᵢ₌₁ᵐ tᵢₛbᵢ implies

    bᵣ = (1/tᵣₛ)aₛ − Σᵢ≠ᵣ (tᵢₛ/tᵣₛ)bᵢ,

so in terms of the new basis b₁, ..., bᵣ₋₁, aₛ, bᵣ₊₁, ..., bₘ,

    aⱼ = Σᵢ≠ᵣ (tᵢⱼ − (tᵢₛ/tᵣₛ)tᵣⱼ)bᵢ + (tᵣⱼ/tᵣₛ)aₛ for j = 1, 2, ..., n,

and the new tableau T* is given below.

           a₁                   ...   aₛ   ...   aₙ

    b₁     t₁₁ − t₁ₛtᵣ₁/tᵣₛ     ...   0    ...   t₁ₙ − t₁ₛtᵣₙ/tᵣₛ     R₁* = R₁ − (t₁ₛ/tᵣₛ)Rᵣ
    ...
    aₛ     tᵣ₁/tᵣₛ              ...   1    ...   tᵣₙ/tᵣₛ              Rᵣ* = (1/tᵣₛ)Rᵣ        T*
    ...
    bₘ     tₘ₁ − tₘₛtᵣ₁/tᵣₛ     ...   0    ...   tₘₙ − tₘₛtᵣₙ/tᵣₛ     Rₘ* = Rₘ − (tₘₛ/tᵣₛ)Rᵣ

The element tᵣₛ is called the pivot and the relationship between the rows Rᵢ of T and the rows Rᵢ* of T* is shown in T*. The mechanics of the replacement are summarised by the following rule.

Replacement Rule

Using the pivot tᵣₛ ≠ 0, the tableau T* is obtained from T by dividing the rth row of T by tᵣₛ and by subtracting from each other row of T the multiple of the rth row which gives a zero in the sth column, that is

    tᵣⱼ* = tᵣⱼ/tᵣₛ and tᵢⱼ* = tᵢⱼ − (tᵢₛ/tᵣₛ)tᵣⱼ for i ≠ r.
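The replacement rule is exactly one step of Gaussian elimination on the tableau. A sketch in numpy (the function name is ours):

```python
import numpy as np

def replace(T, r, s):
    """Replacement rule with pivot t_rs != 0: divide row r by t_rs, then
    subtract from each other row the multiple of row r that leaves a
    zero in column s."""
    T = np.array(T, float)
    assert T[r, s] != 0, "pivot must be non-zero"
    T[r] = T[r] / T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i] = T[i] - T[i, s] * T[r]
    return T

print(replace([[2, 4], [1, 3]], 0, 0))
```

For the tableau [[2, 4], [1, 3]] with pivot t₁₁ = 2, row 1 is divided by 2 and row 2 is cleared in column 1, giving [[1, 2], [0, 1]]; column s always becomes a unit column, reflecting that aₛ has joined the basis.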

As remarked earlier, the simplex method can only be applied to linear programmes in canonical form, so we shall consider the c.m.p.

    minimise c'x,
    subject to Ax = b and x ≥ 0,

and following Gale¹, we shall interpret the method in terms of the diet problem.

An interpretation of the simplex method

We recall that xᵢ is the number of units of the food Fᵢ in the diet, cᵢ is the cost of one unit of Fᵢ, bⱼ is the number of units of the nutrient Nⱼ to be included in the diet and aⱼᵢ is the number of units of Nⱼ in one unit of Fᵢ. Suppose x = (x₁, ..., xₘ, 0, ..., 0)', where xᵢ > 0 for i = 1, 2, ..., m, is a b.f.s. for the c.m.p., which represents a diet using only the foods F₁, F₂, ..., Fₘ; then Σᵢ₌₁ᵐ xᵢaᵢ = b. The remaining columns of A can be expressed uniquely in terms of the basis vectors a₁, a₂, ..., aₘ, to give the tableau below. We now investigate whether the cost of the diet can be reduced by including the food Fₛ (s > m) in the diet. From the tableau

    aₛ = Σᵢ₌₁ᵐ tᵢₛaᵢ,   (3)

and the right-hand side of (3) can be interpreted as the combination of F₁, F₂, ..., Fₘ having the same nutritional content as Fₛ, or in other words substitute Fₛ.

             a₁   ...   aₘ     aₛ     ...   aₙ       b

    a₁       1    ...   0      t₁ₛ    ...   t₁ₙ      x₁
    ...
    aᵣ       0    ...   0      tᵣₛ    ...   tᵣₙ      xᵣ
    ...
    aₘ       0    ...   1      tₘₛ    ...   tₘₙ      xₘ
    z − c    0    ...   0      zₛ−cₛ  ...   zₙ−cₙ    z₀

The unit cost of substitute Fₛ is Σᵢ₌₁ᵐ tᵢₛcᵢ, so if Fₛ is cheaper than its substitute, that is if

    cₛ < Σᵢ₌₁ᵐ tᵢₛcᵢ ≡ zₛ (definition of zₛ),   (4)

Fₛ should be included in the diet. Suppose there are xₛ* units of Fₛ in the new diet; then the amount of Fᵢ is reduced to

    xᵢ* = xᵢ − tᵢₛxₛ* for i = 1, 2, ..., m,



since it is no longer required to make up substitute F_s. For a feasible solution x_i* ≥ 0, and the amount of F_s is chosen by

    x_s* = min_{t_is > 0} (x_i/t_is) = x_r/t_rs (say),   (5)

so that the new solution is also basic. This means that F_s replaces F_r in the diet.
The simplex method consists of using the criteria (4) and (5) to choose a pivot t_rs for the replacement and then applying the replacement rule to obtain a new tableau. The procedure is repeated until no food is cheaper than its substitute, that is until

    z_j ≤ c_j   for j = 1, 2, ..., n.   (6)

It will be proved on page 53 that when (6) is satisfied, an optimal solution has been obtained. The element z_0 in the bottom row is defined to be the current value of c'x, that is

    z_0 = Σ_{i=1}^{m} c_i x_i.

The following lemma proves that the elements in the bottom


row, Z - c, transform according to the replacement rule
under a change of basis.
LEMMA 1. If t_rs is the pivot for the replacement, then in the new tableau

    z_j* − c_j = (z_j − c_j) − (t_rj/t_rs)(z_s − c_s)   and   z_0* = z_0 − (x_r/t_rs)(z_s − c_s).

Proof. By definition

    z_j* = Σ_{i=1, i≠r}^{m} t_ij* c_i + t_rj* c_s,

and it follows from the replacement rule that

    z_j* = Σ_{i=1, i≠r}^{m} (t_ij − (t_is/t_rs) t_rj) c_i + (t_rj/t_rs) c_s.

Hence

    z_j* − c_j = (z_j − c_j) − (t_rj/t_rs)(z_s − c_s),

on using the definitions of z_j and z_s. The proof for z_0* follows similarly.
We now formulate the criteria (4) and (5) as two rules for choosing the pivot; the first rule selects the column index and the second, the row index.

Rule I (for a c.m.p.)¹

Calculate the scalar product of the jth column of the tableau with the cost vector appropriate to the basis, that is evaluate z_j = Σ_i t_ij c_i. If z_j > c_j for some j = s, include a_s in the basis.

Rule II

Given that a_s is to be included in the basis, calculate the ratios x_i/t_is for t_is > 0. If the minimum of these ratios is achieved at i = r, replace the vector a_r of the current basis by a_s, using the pivot t_rs.

¹ If c'x is to be maximised, then z_s < c_s is the criterion.

The following simple example, for which there is a b.f.s. by inspection, will help to illustrate the simplex method. In general there is no immediate b.f.s., and a technique for obtaining an initial b.f.s. will be given on page 58.

Example
Minimise 2x_1 + 6x_2 − 7x_3 + 2x_4 + 4x_5,
subject to 4x_1 − 3x_2 + 8x_3 − x_4 = 12,
          −x_2 + 12x_3 − 3x_4 + 4x_5 = 20
and x_i ≥ 0.

Solution. By inspection x = (3, 0, 0, 0, 5)' is a b.f.s. and the corresponding basis is a_1 = (4, 0)' and a_5 = (0, 4)'. The initial tableau expressing a_2, a_3, a_4 and b as a linear combination of a_1 and a_5 is shown below.

  c         2      6     −7      2      4
           a_1    a_2    a_3    a_4    a_5      b
  a_1       1    −3/4     2    −1/4     0       3
  a_5       0    −1/4     3    −3/4     1       5
  z − c     0   −17/2    23   −11/2     0      26

The bottom row is calculated from the definitions of z_j − c_j and z_0, so that, for example,

    z_2 − c_2 = t_12 c_1 + t_22 c_5 − c_2 = (−3/4 × 2) + (−1/4 × 4) − 6 = −17/2,

and

    z_0 = x_1 c_1 + x_5 c_5 = (3 × 2) + (5 × 4) = 26.

As z_3 − c_3 = 23 > 0, a_3 is a candidate for the basis by Rule I, and by Rule II, t_13 is the pivot for the replacement, as x_1/t_13 = 3/2 < x_5/t_23 = 5/3.

The following tableau is obtained by applying the replacement rule.

  c         2      6     −7      2      4
           a_1    a_2    a_3    a_4    a_5      b
  a_3      1/2   −3/8     1    −1/8     0      3/2
  a_5     −3/2    7/8     0    −3/8     1      1/2
  z − c  −23/2    1/8     0   −21/8     0    −17/2

This time the pivot is t_22, as z_2 − c_2 > 0 and t_22 is the only positive t_i2. On re-applying the replacement rule, we obtain the optimal solution, as z_j ≤ c_j for all j.

  c         2      6     −7      2      4
           a_1    a_2    a_3    a_4    a_5      b
  a_3     −1/7     0      1    −2/7    3/7    12/7
  a_2    −12/7     1      0    −3/7    8/7     4/7
  z − c  −79/7     0      0   −18/7   −1/7   −60/7

The optimal vector is x = (0, 4/7, 12/7, 0, 0)', which corresponds to the optimal basis a_2 and a_3, and the value of the programme is z_0 = −60/7.
It is a wise rule to calculate the elements of the bottom row first and then to test for optimality, for in this way unnecessary computation can be avoided.
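The whole computation above can be checked mechanically. The following sketch (function and variable names are ours) implements Rule I, Rule II and the replacement rule with exact rational arithmetic, starting from a tableau whose columns are already expressed in terms of the given basis:

```python
from fractions import Fraction as F

def simplex(T, b, c, basis):
    """Minimise c'x subject to the tableau rows T (columns already expressed
    in terms of `basis`) with right-hand sides b; returns (x, z_0)."""
    m, n = len(T), len(c)
    T = [row[:] + [bi] for row, bi in zip(T, b)]          # append the b column
    z = [sum(c[basis[i]] * T[i][j] for i in range(m)) - c[j] for j in range(n)]
    z.append(sum(c[basis[i]] * T[i][n] for i in range(m)))  # z_0
    while True:
        s = next((j for j in range(n) if z[j] > 0), None)  # Rule I
        if s is None:                                      # criterion (6): optimal
            x = [F(0)] * n
            for i, k in enumerate(basis):
                x[k] = T[i][n]
            return x, z[n]
        rows = [i for i in range(m) if T[i][s] > 0]        # Rule II
        if not rows:
            raise ValueError("c'x is unbounded below (Theorem 3)")
        r = min(rows, key=lambda i: T[i][n] / T[i][s])
        piv = T[r][s]                                      # replacement rule
        T[r] = [t / piv for t in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][s]
                T[i] = [t - f * u for t, u in zip(T[i], T[r])]
        zs = z[s]
        z = [zj - zs * u for zj, u in zip(z, T[r])]        # Lemma 1
        basis[r] = s

# the example above: initial tableau in terms of the basis a_1, a_5
T = [[F(1), F(-3, 4), F(2), F(-1, 4), F(0)],
     [F(0), F(-1, 4), F(3), F(-3, 4), F(1)]]
x, z0 = simplex(T, [F(3), F(5)], [F(2), F(6), F(-7), F(2), F(4)], [0, 4])
```

Running it on the initial tableau of the example reproduces x = (0, 4/7, 12/7, 0, 0)' and z_0 = −60/7.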

2. Theory of the Simplex Method

We now prove that (6) is the criterion for optimality.


THEOREM 2 (optimality criterion)¹. The b.f.s. x = (x_1, ..., x_m, 0, ..., 0)', corresponding to the basis a_1, a_2, ..., a_m, is optimal for the c.m.p. if

    z_j = Σ_{i=1}^{m} t_ij c_i ≤ c_j   for j = 1, 2, ..., n,   (7)

where

    a_j = Σ_{i=1}^{m} t_ij a_i   for j = 1, 2, ..., n.   (8)

¹ If c'x is to be maximised, then z_j ≥ c_j is the criterion for optimality.

Proof. Since the determinant of (a_1, a_2, ..., a_m) is non-zero, there exists a unique vector y satisfying the equations

    y'a_i = c_i   for i = 1, 2, ..., m.

If (7) is satisfied, then

    y'a_j = Σ_{i=1}^{m} t_ij y'a_i = Σ_{i=1}^{m} t_ij c_i ≤ c_j   for j = m + 1, ..., n,

and y satisfies the dual constraints, y'A ≤ c'. Since

    y'b = Σ_{i=1}^{m} x_i y'a_i = Σ_{i=1}^{m} c_i x_i = c'x,

the optimality of x and y follows from Theorem 2 on page 19.


The following alternative proof of Theorem 2 is given as it is independent of the dual problem and also provides further insight into the significance of the z_j.

Alternative Proof
Let x* = (x_1*, x_2*, ..., x_n*)' be any feasible vector for the c.m.p., then it follows from (8) that

    b = Σ_{j=1}^{n} x_j* a_j = Σ_{j=1}^{m} x_j* a_j + Σ_{j=m+1}^{n} x_j* Σ_{i=1}^{m} t_ij a_i.

But

    b = Σ_{i=1}^{m} x_i a_i,

so equating the coefficients of the linearly independent a_1, a_2, ..., a_m gives

    x_i = x_i* + Σ_{j=m+1}^{n} t_ij x_j*,

and

    c'x = Σ_{i=1}^{m} c_i x_i = Σ_{i=1}^{m} c_i (x_i* + Σ_{j=m+1}^{n} t_ij x_j*).

Hence it follows from (7) and the non-negativity of x*, that

    c'x = c'x* + Σ_{j=m+1}^{n} (z_j − c_j) x_j* ≤ c'x*,

which proves that x minimises c'x.


Some readers will no doubt be asking what happens when Rule II cannot be applied, which is the case when z_s > c_s but none of the t_is are positive. The next theorem proves that the problem has no optimal solution if this occurs.
THEOREM 3. If x = (x_1, ..., x_m, 0, ..., 0)' is a b.f.s. for the c.m.p. corresponding to the basis a_1, a_2, ..., a_m, and if for some s

    z_s = Σ_{i=1}^{m} t_is c_i > c_s   and   t_is ≤ 0   for i = 1, 2, ..., m,

where a_s = Σ_{i=1}^{m} t_is a_i, then the c.m.p. has no optimal solution.

Proof. Since Σ_{i=1}^{m} x_i a_i = b and Σ_{i=1}^{m} t_is a_i = a_s, it follows that

    Σ_{i=1}^{m} (x_i − λt_is) a_i + λa_s = b,

and x* = (x_1 − λt_1s, ..., x_m − λt_ms, 0, ..., λ, ..., 0)' is a feasible vector for all λ ≥ 0, as t_is ≤ 0 for i = 1, 2, ..., m. On substituting for x* we obtain

    c'x* = Σ_{i=1}^{m} c_i (x_i − λt_is) + c_s λ = c'x − λ(z_s − c_s),

but z_s > c_s and λ can be made arbitrarily large, so c'x is unbounded below on the set of feasible vectors. Hence the c.m.p. has no optimal solution.
For the simplex method to be a practical technique for solving linear programmes, it must terminate in a finite number of steps. We now prove that this happens provided the problem is non-degenerate; a technique for overcoming the difficulty that can arise in degenerate problems is described on page 63.

Definition of non-degeneracy

A canonical linear programme with constraint equation


Ax = b, is non-degenerate if b cannot be expressed as a linear
combination of fewer than m columns of A, where rank A = m.
We can always take the rank of A to equal the number of
equations, m, for if rank A = p < m, then either the system
of equations is inconsistent or the number of equations can be
reduced to p. For example, the last equation of the transporta-
tion problem is redundant.
THEOREM 4. If a replacement is made according to Rules I, II and the replacement rule during the solution of a non-degenerate c.m.p., then a new b.f.s. is obtained and the new value of c'x is strictly less than the previous value, that is z_0* < z_0.
Proof. Let t_rs be the pivot for the replacement and x = (x_1, ..., x_m, 0, ..., 0)' be the current b.f.s., where x_i > 0 for i = 1, 2, ..., m, as the problem is non-degenerate. It follows from the replacement rule that the new basic solution of Ax = b is

    x_i* = x_i − (t_is/t_rs) x_r   for i = 1, 2, ..., m,   x_s* = x_r/t_rs   and   x_j* = 0 otherwise,

and feasibility follows from Rule II as

    x_r/t_rs ≤ x_i/t_is for t_is > 0   ⟹   x_i* ≥ 0 for all i.

The new value of the programme is given by

    z_0* = z_0 − (x_r/t_rs)(z_s − c_s) < z_0,

as z_s > c_s by Rule I, t_rs > 0 by Rule II and x_r > 0 from the non-degeneracy of the problem.
COROLLARY. For a non-degenerate c.m.p. only a finite number of replacements are required either to obtain an optimal solution or to prove its non-existence.
Proof. Since the number of b.f.s.'s is finite and z_0* < z_0 ensures that none of them are repeated, either the criterion of Theorem 2 or the criterion of Theorem 3 must be reached after a finite number of steps.
In a degenerate problem it is possible for x_r to be zero and then for z_0* and z_0 to be equal, so that in theory one can cycle through a set of distinct bases without reducing z_0. However it is usually found in practice that if one replacement fails to reduce z_0, then a subsequent one will do so. Geometrically, degeneracy occurs when the k-dimensional convex set of feasible vectors has more than k bounding hyperplanes passing through a vertex.
Although the value of a linear programme is unique, the optimal vector is not necessarily unique, and it is left as an exercise to the reader to show that if z_j < c_j for all a_j not in the basis, then the optimal vector is unique (see exercise 3). If z_s = c_s for some a_s not in the basis and min_{t_is > 0} (x_i/t_is) > 0, then a replacement can be made which leaves z_0 unaltered but gives another optimal vector.

3. Further Techniques and Extensions

We now describe a technique for obtaining a b.f.s. and the corresponding simplex tableau when none is available by inspection.

Method of finding a b.f.s.

A basic non-negative solution of the equations

    Ax = b   (9)

is found by first arranging the equations so that b ≥ 0 and then by solving the c.m.p.

    minimise u'w,

    subject to Ax + w = b and x ≥ 0, w ≥ 0,

where the m-vector u = (1, 1, ..., 1)', so that u'w = Σ_{i=1}^{m} w_i. An immediate b.f.s. is x = 0, w = b ≥ 0 and the corresponding basis vectors are the unit vectors e_1, e_2, ..., e_m, as w = Σ_{i=1}^{m} w_i e_i. The problem has an optimal solution as u'w ≥ 0 for all feasible w, and it is easy to see that (9) has a non-negative solution if and only if the minimum value of u'w is zero.
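For the example that follows, the auxiliary programme and the bottom row of its first tableau can be set up directly (a sketch; since every basic cost is 1, each z_j is simply the jth column sum):

```python
# constraint data Ax = b (b >= 0) of the c.m.p. (11) below
A = [[2, 3, 0, 6, -1],
     [-3, -1, -2, 7, 0]]
b = [8, 4]
m, n = len(A), len(A[0])

# auxiliary c.m.p.: minimise u'w subject to Ax + w = b, x >= 0, w >= 0
aug = [row + [1 if k == i else 0 for k in range(m)] for i, row in enumerate(A)]
cost = [0] * n + [1] * m                 # the x's cost nothing, the w's cost 1

# immediate b.f.s.: x = 0, w = b, with basis e_1, ..., e_m, so
# z_j - c_j is the jth column sum of the augmented tableau minus c_j
bottom = [sum(aug[i][j] for i in range(m)) - cost[j] for j in range(n + m)]
z0 = sum(b)                              # current value of u'w
```

This reproduces the bottom row of the first phase-one tableau below.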

Example
Minimise x_1 + x_2 + 3x_3 + x_4,

subject to 2x_1 + 3x_2 + 6x_4 ≥ 8,
          −3x_1 − x_2 − 2x_3 + 7x_4 = 4,   (10)

and x_i ≥ 0.

Solution. The problem is equivalent to the following c.m.p., where b ≥ 0.

Minimise x_1 + x_2 + 3x_3 + x_4,

subject to 2x_1 + 3x_2 + 6x_4 − x_5 = 8,
          −3x_1 − x_2 − 2x_3 + 7x_4 = 4,   (11)

and x_i ≥ 0.

To find a b.f.s., we solve the c.m.p.

minimise w_1 + w_2,

subject to 2x_1 + 3x_2 + 6x_4 − x_5 + w_1 = 8,
          −3x_1 − x_2 − 2x_3 + 7x_4 + w_2 = 4,

and x_i ≥ 0, w_i ≥ 0,

for which a b.f.s. is x = 0 and w = (8, 4)', and the basis vectors are e_1 and e_2. We now solve the problem by the simplex method.

  c         0      0      0     0     0      1      1
           a_1    a_2    a_3   a_4   a_5    e_1    e_2       b
  e_1       2      3      0     6    −1      1      0        8
  e_2      −3     −1     −2     7     0      0      1        4
  z − c    −1      2     −2    13    −1      0      0       12

  e_1     32/7   27/7   12/7    0    −1      1    −6/7     32/7
  a_4     −3/7   −1/7   −2/7    1     0      0     1/7      4/7
  z − c   32/7   27/7   12/7    0    −1      0   −13/7     32/7

  a_1       1   27/32    3/8    0  −7/32   7/32  −3/16       1
  a_4       0    7/32   −1/8    1  −3/32   3/32   1/16       1
  z − c     0      0      0     0     0     −1     −1        0

From the above tableau, a b.f.s. for (11) is x = (1, 0, 0, 1, 0)', so we now calculate the z − c row appropriate to problem (11) and proceed with the simplex method. The vectors e_1 and e_2 are retained, as we shall show how to obtain an optimal solution to the dual problem from the optimal simplex tableau.

  c         1      1      3     1     0
           a_1    a_2    a_3   a_4   a_5    e_1    e_2       b
  a_1       1   27/32    3/8    0  −7/32   7/32  −3/16       1
  a_4       0    7/32   −1/8    1  −3/32   3/32   1/16       1
  z − c     0    1/16  −11/4    0  −5/16   5/16   −1/8       2

  a_2     32/27    1     4/9    0  −7/27   7/27   −2/9     32/27
  a_4     −7/27    0    −2/9    1  −1/27   1/27    1/9     20/27
  z − c   −2/27    0   −25/9    0  −8/27   8/27   −1/9     52/27

Since z_j − c_j ≤ 0 for j = 1, 2, ..., 5, the optimal solution is x = (0, 32/27, 0, 20/27, 0)' and the value of the programme is 52/27.

The dual of the problem (11) is to maximise 8y_1 + 4y_2, subject to y'A ≤ c'. It is easy to verify that the numbers in the bottom row of the tableau under e_1 and e_2, y = (8/27, −1/9)', are feasible for the dual problem, and the optimality of y follows as 8y_1 + 4y_2 = 52/27, the value of the programme. We now show that the simultaneous solution of the dual problem was no accident.
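This dual solution can be checked directly (a sketch with exact arithmetic; the variable names are ours):

```python
from fractions import Fraction as F

A = [[2, 3, 0, 6, -1],        # columns a_1, ..., a_5 of problem (11)
     [-3, -1, -2, 7, 0]]
b = [8, 4]
c = [1, 1, 3, 1, 0]
y = [F(8, 27), F(-1, 9)]      # read off under e_1 and e_2 in the final tableau

# dual feasibility: y'a_j <= c_j for every column
slack = [c[j] - (y[0] * A[0][j] + y[1] * A[1][j]) for j in range(5)]
value = y[0] * b[0] + y[1] * b[1]     # y'b, the dual objective
```

The slacks are all non-negative, they vanish on the basic columns a_2 and a_4, and y'b equals 52/27, the value of the programme.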

Solution of the dual problem

Suppose that we have an optimal solution for the c.m.p. and that the corresponding simplex tableau is as shown below, where

    e_j = Σ_{i=1}^{m} l_ij a_i   and   y_j = Σ_{i=1}^{m} l_ij c_i.

           a_1 ... a_m    a_{m+1} ... a_n          e_1 ... e_m       b
  a_1       1  ...  0     t_{1,m+1} ... t_1n      l_11 ... l_1m     x_1
   ⋮                                                                 ⋮
  a_m       0  ...  1     t_{m,m+1} ... t_mn      l_m1 ... l_mm     x_m
  z − c     0  ...  0     z_{m+1}−c_{m+1} ... z_n−c_n   y_1 ... y_m   z_0

Let A_b = (a_1, a_2, ..., a_m) be the matrix of the basis vectors and L = (l_ij), then it follows from (2) that the unit matrix I satisfies

    I = (e_1, e_2, ..., e_m) = A_b L,

and since A_b is regular,

    L A_b = I,   (12)

so that L is the inverse basis matrix. On writing

    y' = (y_1, y_2, ..., y_m) = c_b' L,   where c_b = (c_1, c_2, ..., c_m)',

it follows from (12) that

    y'A_b = c_b',   or   y'a_i = c_i   for i = 1, 2, ..., m.

For j = m + 1, ..., n,

    y'a_j = Σ_{i=1}^{m} t_ij y'a_i = Σ_{i=1}^{m} t_ij c_i = z_j ≤ c_j,   (13)

since the optimality criterion is satisfied. Hence y satisfies the dual constraints, y'A ≤ c', and since x_i > 0 ⟹ y'a_i = c_i, y is optimal by Theorem 5 on page 23. It can be shown that the optimal vector for the dual problem is unique if the primal (original) problem is non-degenerate (see exercise 4 on page 66).

We have now shown how to solve any linear programming problem. The procedure is first to transform the problem into a c.m.p., if necessary, then to use the technique described on page 58 to find a b.f.s. and the corresponding simplex tableau, if one is not available by inspection, and finally to solve the problem. Degeneracy is the only outstanding problem and we now show how to overcome this difficulty, though it is usually worth trying another replacement to see if the value of z_0 can be reduced.

Degeneracy

The simplex tableau on page 62, extended by the unit vectors e_1, e_2, ..., e_m, is in a suitable form for dealing with the degeneracy problem. The degenerate constraint equation, Ax = b, is made non-degenerate by adding the m-vector

    (ε, ε², ..., ε^m)' = Σ_{i=1}^{m} ε^i e_i

to b, where ε is a suitably small number. Suppose that during the course of solving the unperturbed problem, the min_{t_is > 0} (x_i/t_is) is not unique and that t_rs and t_qs (say) both qualify for the pivot. If t_rs is chosen, then the subsequent replacement will result in a zero value for x_q* and the solution will be degenerate. However in the perturbed problem, the pivot is uniquely determined by the

    min_{t_is > 0} (1/t_is) (x_i + Σ_{j=1}^{m} ε^j l_ij),   (14)

since it follows from the tableau that the perturbed value of x_i is x_i + Σ_{j=1}^{m} ε^j l_ij. Now ε is arbitrarily small, so the minimum will occur at either i = r or i = q, and to determine which will give the pivot, we compare the coefficients of the powers of ε in (14) for i = r and i = q. This leads to a comparison of the vectors

    (1/t_rs)(x_r, l_r1, ..., l_rm)   and   (1/t_qs)(x_q, l_q1, ..., l_qm).

If the above vectors first differ in the kth element, and the kth element is the smaller for i = r, then t_rs is the pivot for the replacement, as the higher powers of ε can be neglected. The method can easily be extended to cover the case in which more than two t_is qualify for the pivot.
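The comparison of coefficients of powers of ε is simply a lexicographic comparison of the vectors above, which Python performs directly on lists (a sketch; the function name is ours):

```python
from fractions import Fraction as F

def perturbed_pivot_row(x, L, t_s):
    """Among the rows with t_is > 0, choose the one minimising
    (x_i, l_i1, ..., l_im) / t_is in the lexicographic order -- the limit
    of the perturbed ratio test (14) as epsilon tends to zero."""
    candidates = [i for i in range(len(x)) if t_s[i] > 0]
    return min(candidates, key=lambda i: [v / t_s[i] for v in [x[i]] + L[i]])

# a degenerate tie: both candidate rows give the ratio x_i / t_is = 0
x = [F(0), F(0)]
L = [[F(1), F(0)],            # rows of the inverse basis matrix
     [F(1), F(0)]]
t_s = [F(1), F(2)]            # the pivotal column
r = perturbed_pivot_row(x, L, t_s)
```

Here the tie is broken in favour of the second row, whose perturbed ratio is the smaller.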
We conclude this chapter by describing briefly two variations
of the simplex method. For a more detailed account of the
methods, the reader is referred to the books by Krekó and Beale,
which are listed under the 'Suggestions for Further Reading' on
page 83.

Inverse Matrix Method (Revised Simplex Method)

This method consists of up-dating only the inverse basis matrix L and the values of x, y and z_0 (see the tableau on page 62). The values of z_j − c_j are calculated from the matrix A using the relation (13), namely

    z_j − c_j = y'a_j − c_j,

and the pivotal column is chosen, as in the simplex method, by the criterion z_s − c_s > 0. Then the elements of the pivotal column are calculated from the equation

    t_is = (LA)_is,

which follows from (2), and the pivot is given by the min_{t_is > 0} (x_i/t_is).

The inverse matrix method is used in most computer programmes, as it involves less computation when the number of variables is large compared with the number of constraints and when the matrix A contains several zeros. It is also a better method for handling degeneracy and for doing sensitivity analysis, which gives the criteria under which a_1, a_2, ..., a_m remain an optimal basis for changes in the values of b, c and A (see exercises 8 and 9 on page 67).
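One iteration can be sketched as follows (our names; only L, x and the basis are carried between iterations, and L is updated by the same replacement rule applied to its rows):

```python
from fractions import Fraction as F

def revised_step(L, x, basis, A, c):
    """One iteration of the inverse matrix method; returns False when the
    optimality criterion z_j <= c_j already holds."""
    m, n = len(L), len(c)
    cb = [c[k] for k in basis]
    y = [sum(cb[i] * L[i][j] for i in range(m)) for j in range(m)]   # y' = cb'L
    # price out the columns of A: z_j - c_j = y'a_j - c_j
    s = next((j for j in range(n)
              if sum(y[i] * A[i][j] for i in range(m)) - c[j] > 0), None)
    if s is None:
        return False
    t = [sum(L[i][k] * A[k][s] for k in range(m)) for i in range(m)] # t_s = L a_s
    # ratio test (an empty candidate set here would mean Theorem 3 applies)
    r = min((i for i in range(m) if t[i] > 0), key=lambda i: x[i] / t[i])
    piv = t[r]
    L[r] = [v / piv for v in L[r]]
    x[r] = x[r] / piv
    for i in range(m):
        if i != r:
            L[i] = [v - t[i] * u for v, u in zip(L[i], L[r])]
            x[i] = x[i] - t[i] * x[r]
    basis[r] = s
    return True

# the worked example above: initial basis a_1, a_5 with A_b = diag(4, 4)
A = [[F(4), F(-3), F(8), F(-1), F(0)],
     [F(0), F(-1), F(12), F(-3), F(4)]]
c = [F(2), F(6), F(-7), F(2), F(4)]
L = [[F(1, 4), F(0)], [F(0), F(1, 4)]]
x = [F(3), F(5)]
basis = [0, 4]
revised_step(L, x, basis, A, c)
```

The step brings a_3 into the basis and reproduces the second tableau of the example, with L the inverse of the new basis matrix (a_3, a_5).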

Dual Simplex Method

The dual simplex method solves the c.m.p. by using a tableau for which the optimality criterion, z_j ≤ c_j, is satisfied for all replacements, but the corresponding x_i's are not necessarily non-negative. If all the x_i's are non-negative, then the solution is optimal by Theorem 2. If not, the pivot for the next replacement is obtained by choosing some x_r < 0 and calculating the

    min_{t_rj < 0} (z_j − c_j)/t_rj = (z_s − c_s)/t_rs

to give the pivot t_rs. In this way x_r* = x_r/t_rs > 0 and z_j* − c_j ≤ 0. The value of z_0 in the dual simplex method is the current value of the dual problem, which is to maximise y'b subject to y'A ≤ c', and the replacement increases the value of z_0 by

    −(x_r/t_rs)(z_s − c_s) ≥ 0.

The dual simplex method is applicable to the following type of c.m.p.:

    minimise c'x,
    subject to Ax + w = b and x ≥ 0, w ≥ 0,

where c ≥ 0 (and b is not necessarily ≥ 0). For this problem w = b is a basic non-feasible solution of Ax + w = b, and the corresponding z_j − c_j = −c_j ≤ 0 for j = 1, 2, ..., n. The method is also useful when an additional constraint is to be added to the simplex tableau.
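A sketch of the method on a small problem of exactly this type (minimise 2x_1 + 3x_2 subject to x_1 + x_2 ≥ 4 and x_1 ≥ 1, rewritten as Ax + w = b with b < 0; the names are ours):

```python
from fractions import Fraction as F

def dual_simplex(T, z):
    """Rows of T end in the b column; the bottom row z ends in z_0.
    Requires z_j - c_j <= 0 throughout; returns the final tableau."""
    n = len(z) - 1
    while True:
        r = next((i for i, row in enumerate(T) if row[n] < 0), None)
        if r is None:
            return T, z                              # feasible, hence optimal
        cols = [j for j in range(n) if T[r][j] < 0]
        if not cols:
            raise ValueError('the c.m.p. has no feasible solution')
        s = min(cols, key=lambda j: z[j] / T[r][j])
        piv = T[r][s]                                # replacement rule
        T[r] = [t / piv for t in T[r]]
        for i in range(len(T)):
            if i != r:
                f = T[i][s]
                T[i] = [t - f * u for t, u in zip(T[i], T[r])]
        zs = z[s]
        z = [zj - zs * u for zj, u in zip(z, T[r])]

# columns x1, x2, w1, w2:  -x1 - x2 + w1 = -4,  -x1 + w2 = -1
T = [[F(-1), F(-1), F(1), F(0), F(-4)],
     [F(-1), F(0), F(0), F(1), F(-1)]]
z = [F(-2), F(-3), F(0), F(0), F(0)]
T, z = dual_simplex(T, z)
```

One replacement brings x_1 into the basis and the final tableau gives x = (4, 0)' with z_0 = 8, the optimal cost.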

EXERCISES ON CHAPTER FOUR


1. Solve by the simplex method:
minimise x_1 + 6x_2 + 2x_3 − x_4 + x_5 − 3x_6,
subject to x_1 + 2x_2 + x_3 + 5x_6 = 3,
          −3x_2 + 2x_3 + x_4 + x_6 = 1,
           5x_2 + 3x_3 + x_5 − 2x_6 = 2
and x_i ≥ 0.

2. Solve by the simplex method:
minimise 4x_1 + x_2 − x_3 + 2x_4,
subject to 3x_1 − 3x_2 + x_3 = 3,
           6x_2 − 2x_3 + x_4 = 2
and x_i ≥ 0.

3. If a_1, a_2, ..., a_m is an optimal basis for a c.m.p. and if

    z_j < c_j   for j = m + 1, ..., n,

show that the c.m.p. has a unique optimal vector.
4. Show that for any non-degenerate c.m.p., the optimal vector for the dual problem is unique, if it exists, and that the optimality criterion on page 53 is satisfied for all optimal bases.
5. Solve by the simplex method for all values of λ:
minimise λx_1 + 2x_2 − 5x_3 + x_4,
subject to 2x_1 + x_2 − 3x_3 − x_4 = 3,
           3x_1 − 4x_2 + x_3 + 6x_4 = 1,
          −x_1 − x_2 + 2x_3 + 2x_4 = −1
and x_i ≥ 0.
(This is an example of parametric linear programming.)

6. Solve by the simplex method:
maximise y_1 + 4y_2 + 3y_3,
subject to 3y_1 + 2y_2 + y_3 ≤ 4,
           y_1 + 5y_2 + 4y_3 ≤ 14,
and y_i ≥ 0.
Obtain the solution to the dual problem from the simplex tableau.
7. Solve by the simplex method:
maximise x_1 + 6x_2 + 4x_3,
subject to −x_1 + 2x_2 + 2x_3 ≤ 13,
            4x_1 − 4x_2 + x_3 ≤ 20,
            x_1 + 2x_2 + x_3 ≤ 17,
            x_1 ≥ 7, x_2 ≥ 2, x_3 ≥ 3.
8. A baby cereal is to be marketed which contains 25% protein and not more than 70% carbohydrate. The manufacturer has four cereals at his disposal, whose analysis and cost are given in the table below.

  Cereal                         C_1   C_2   C_3   C_4
  % carbohydrate                  80    80    70    60
  % protein                       10    15    20    30
  Cost per pound in shillings      1     2     3     4

What blend of the four cereals should the manufacturer use to minimise the cost of the baby cereal? By how much must the cost of C_2 be reduced so that C_2 is as cheap to use as C_1 in the manufacture of the baby cereal, and give the blend in this case?
9. Show that an optimal basis for the c.m.p. remains optimal under the perturbation b → b + δb, provided that L(b + δb) ≥ 0, where L is the inverse basis matrix.

CHAPTER FIVE

Game Theory

1. Two-person Zero-sum Games

Game theory models competitive situations and has applications in management science, military tactics and economic theory. In this chapter the discussion will be confined to two-person zero-sum games. These are games between two players which result in one player paying the other a predetermined amount, called the payoff. The underlying assumption of the theory is that both players try to win, and this rules out any cooperation between the players. We use the following simple card game to illustrate the concepts of the theory.
Two players, P_1 and P_2, are each dealt a card from a pack of three cards. After looking at his card, P_1 guesses which card remains in the pack. P_2, having heard P_1's guess and having inspected his own card, also guesses the identity of the third card. If either player guesses correctly, he wins 1 from his opponent.
The first step in analysing the game is to determine the possible strategies for P_1 and P_2. A strategy is a set of instructions determining the player's choice at every move in the game and in all conceivable situations. Even though a player may not have thought out a course of action, he will behave as though he had a strategy. P_1 has two strategies, which are s_1: guess his own card, and s_2: guess one of the other two cards with equal probability. P_2's strategies depend upon P_1's move and are given in the table below, where those strategies which involve P_2 choosing his own card have been eliminated, as P_2 can clearly do better by a choice of one of the other two cards.

  P_2's strategies   P_1 guesses P_2's card          P_1 doesn't guess P_2's card

  t_1                Guess one of the other two      Guess the same as P_1.
                     cards with equal probability.

  t_2                ditto                           Guess the remaining card.

Let φ(s_i, t_j) be the payoff to P_1 as a result of the game in which P_1 and P_2 use s_i and t_j respectively, then −φ(s_i, t_j) is the payoff to P_2. If P_1 plays s_1 and P_2 plays t_1, then both guess P_1's card and win nothing, so φ(s_1, t_1) = 0. If P_1 plays s_2 and P_2 plays t_2, then either P_1 guesses P_2's card and P_2 guesses the card in the pack with probability 1/2, or P_1 guesses the card in the pack and P_2 guesses P_1's card, so the expected (average) payoff to P_1 is φ(s_2, t_2) = 1/2 × 1/2 × (−1) + 1/2 × 1 = 1/4. The remaining payoffs, φ(s_1, t_2) and φ(s_2, t_1), are calculated similarly and the payoff matrix, with (i, j)th element φ(s_i, t_j), is given by (1).

          t_1    t_2
  s_1  (   0     −1  )
  s_2  ( −1/4    1/4 )     (1)
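These expected payoffs can be checked by enumerating the six possible deals and the players' chance moves (a sketch; the cards are numbered 0, 1, 2, and p, q, r denote P_1's card, P_2's card and the pack card):

```python
from fractions import Fraction as F
from itertools import permutations

def expected_payoff(s, t):
    """phi(s_s, t_t): expected amount P2 pays P1, averaged over the deals
    and over each player's chance moves."""
    total = F(0)
    for p, q, r in permutations(range(3)):       # P1's card, P2's card, pack
        # P1's guess g1 and its probability
        if s == 1:
            g1s = [(p, F(1))]                    # s1: guess his own card
        else:
            g1s = [(g, F(1, 2)) for g in range(3) if g != p]
        for g1, pr1 in g1s:
            # P2's guess g2
            if g1 == q:                          # P1 guessed P2's card
                g2s = [(g, F(1, 2)) for g in range(3) if g != q]
            elif t == 1:
                g2s = [(g1, F(1))]               # t1: guess the same as P1
            else:
                g2s = [(3 - q - g1, F(1))]       # t2: guess the remaining card
            for g2, pr2 in g2s:
                pay = (g1 == r) - (g2 == r)      # +1 if P1 right, -1 if P2 right
                total += F(1, 6) * pr1 * pr2 * pay
    return total
```

The four expected payoffs come out as 0, −1, −1/4 and 1/4, the entries of (1).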
In this game there is no single best strategy for either P_1 or P_2, for if P_1 always uses s_2, P_2 will use t_1, in which case it would be better for P_1 to use s_1, etc. It will be shown later that a best policy for both players is to vary their choice of strategy in a sequence of games, that is, it pays P_1 to bluff occasionally.
DEFINITION. A two-person zero-sum game consists of two strategy sets S and T, where s ∈ S and t ∈ T are the strategies for the players P_1 and P_2 respectively. The payoff, φ(s, t), represents the amount paid by P_2 to P_1 as a result of the game in which P_1 plays s and P_2 plays t, and is a real valued function defined for all s ∈ S and t ∈ T.

The process of determining S, T and φ(s, t) for a game is called normalisation. If s or t involve chance moves, then φ(s, t) is the expected payoff. In the cases when S and T are finite, the payoff can be represented as a matrix, as in (1).

2. Solution of Games: Saddle Points

We now consider the game with payoff matrix (2).

          t_1   t_2   t_3
  s_1  (   3    −1     2  )
  s_2  (  −1     1     3  )     (2)
  s_3  (   2     1     2  )

Since P_2 will try to minimise P_1's gains, a sensible procedure for selecting P_1's best strategy is to determine the minimum payoff for each of his strategies, that is −1 for s_1, −1 for s_2 and 1 for s_3, and then to select the strategy corresponding to the maximum of these payoffs (maximin φ), which is s_3. The similar procedure for P_2 is to find the most he can lose by using each of his strategies, that is 3 for t_1, 1 for t_2 and 3 for t_3, and then to select the strategy corresponding to the minimum of these payoffs (minimax φ), which is t_2. In this game P_1 and P_2 will be satisfied with their maximin and minimax strategies, s_3 and t_2 respectively, for

    φ(s, t_2) ≤ φ(s_3, t_2) ≤ φ(s_3, t)   for all s ∈ S and t ∈ T,   (3)

which means that neither player could have improved their choice of strategy had they known their opponent's choice. This reasoning motivates the following definition.
DEFINITION. The game G = (S, T; φ) has the solution (s̄, t̄; ω) if

    φ(s, t̄) ≤ ω ≤ φ(s̄, t)   for all s ∈ S and t ∈ T,

where ω = φ(s̄, t̄) is called the value of the game and s̄ ∈ S, t̄ ∈ T are called the optimal strategies.
A function φ(s, t), defined for s ∈ S and t ∈ T, is said to have a saddle point at (s̄, t̄) if

    φ(s, t̄) ≤ φ(s̄, t̄) ≤ φ(s̄, t)   for all s ∈ S and t ∈ T,

so if the payoff has a saddle point at (s̄, t̄), then (s̄, t̄; φ(s̄, t̄)) is a solution to the game. If the payoff is representable by a matrix, then a matrix element is a saddle point if it is the minimum in its row and the maximum in its column.
DEFINITION. For the game G = (S, T; φ),

    maximin φ = max_{s∈S} min_{t∈T} φ(s, t)   and   minimax φ = min_{t∈T} max_{s∈S} φ(s, t),

provided they exist.
By using his maximin strategy P_1 is assured of winning at least maximin φ, and by using his minimax strategy P_2 will not lose more than minimax φ. For the payoff matrix (2) the maximin and minimax are equal, and it follows from (3) that (s_3, t_2; 1) is a solution to the game. However for the game with payoff matrix (1), maximin φ = −1/4 < minimax φ = 0, and there was no single best strategy for either player. The following theorem gives a criterion for the existence of a saddle point.
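For matrix games these procedures are easy to mechanise (a sketch; the function names are ours, and the matrix used below has the same row minima and column maxima as (2)):

```python
def maximin(phi):
    """(value, row index) of the maximin strategy."""
    return max((min(row), i) for i, row in enumerate(phi))

def minimax(phi):
    """(value, column index) of the minimax strategy."""
    cols = list(zip(*phi))
    return min((max(col), j) for j, col in enumerate(cols))

def saddle_points(phi):
    """All (i, j) with phi[i][j] minimal in its row and maximal in its column."""
    cols = list(zip(*phi))
    return [(i, j) for i, row in enumerate(phi) for j, v in enumerate(row)
            if v == min(row) and v == max(cols[j])]

phi = [[3, -1, 2],
       [-1, 1, 3],
       [2, 1, 2]]
```

Here maximin and minimax both equal 1, and the only saddle point is the (s_3, t_2) entry.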
THEOREM 1. If maximin φ and minimax φ exist, then

    maximin φ ≤ minimax φ,

and φ has a saddle point if and only if

    maximin φ = minimax φ.

Proof. Since

    min_{t∈T} φ(s, t) ≤ φ(s, t) ≤ max_{s∈S} φ(s, t)   for all s ∈ S and t ∈ T,   (4)

maximin φ ≤ minimax φ.

Suppose maximin φ = minimax φ, then there exist s̄ ∈ S and t̄ ∈ T such that

    min_{t∈T} φ(s̄, t) = maximin φ = minimax φ = max_{s∈S} φ(s, t̄).

It now follows from (4) that

    φ(s, t̄) ≤ φ(s̄, t̄) ≤ φ(s̄, t)   for all s ∈ S and t ∈ T,

so φ has a saddle point at (s̄, t̄). Conversely, suppose that φ has a saddle point at (s̄, t̄), then

    φ(s, t̄) ≤ φ(s̄, t̄) ≤ φ(s̄, t)   for all s ∈ S and t ∈ T,

so

    min_{t∈T} φ(s̄, t) ≥ max_{s∈S} φ(s, t̄)

and maximin φ ≥ minimax φ. Hence by the first part of the theorem, equality must hold.

3. Solution of Games: Mixed Strategies

If the payoff has no saddle point, then the game has no solution in pure strategies, that is, in strategies belonging to the strategy sets S and T. So to make progress we must introduce the mixed strategy, which is represented by a probability density function defined on a strategy set.
DEFINITION. The mixed strategies x(s) and y(t) for P_1 and P_2 respectively, are real valued functions satisfying

    Σ_{s∈S} x(s) = 1 and x(s) ≥ 0 for all s ∈ S,   (5)

    Σ_{t∈T} y(t) = 1 and y(t) ≥ 0 for all t ∈ T.

The corresponding value of the payoff is given by the expected payoff

    φ(x, y) = Σ_{s∈S} Σ_{t∈T} x(s) y(t) φ(s, t).

If P_1 plays a mixed strategy, then he must use a probabilistic device, such as a dice or pack of cards, to determine which pure strategy to use in each play of the game. A pure strategy, s_i say, can be thought of as the mixed strategy satisfying x(s_i) = 1 and x(s) = 0 for s ≠ s_i. We now extend the definition of the solution to a game to include mixed strategies.
DEFINITION. The game G = (S, T; φ) has the solution (x̄(s), ȳ(t); ω) if

    φ(x̄, y) ≥ ω ≥ φ(x, ȳ)   for all mixed strategies x(s) and y(t),   (6)

or equivalently

    φ(x̄, t) ≥ ω ≥ φ(s, ȳ)   for all s ∈ S and t ∈ T,   (7)

where ω = φ(x̄, ȳ) is the value of the game and x̄(s) and ȳ(t) are called optimal strategies.
To verify the equivalence of (6) and (7), we assume that (7) holds, and then use (5) to show that

    φ(x̄, y) = Σ_{t∈T} y(t) φ(x̄, t) ≥ Σ_{t∈T} y(t) ω = ω   for all mixed y(t).

Similarly φ(x, ȳ) ≤ ω for all mixed x(s). Condition (7) follows from (6), as pure strategies are special cases of mixed strategies.
The condition (7) is easier to apply than (6), and as an example we show that (x̄, ȳ; −1/6) is a solution to the game with payoff matrix (1), where x̄(s_1) = 1/3, x̄(s_2) = 2/3, ȳ(t_1) = 5/6 and

ȳ(t_2) = 1/6. On applying (7) we obtain

    φ(x̄, t_1) = −1/6,   φ(x̄, t_2) = −1/6,
    φ(s_1, ȳ) = −1/6,   φ(s_2, ȳ) = −1/6,

so φ(x̄, t) ≥ −1/6 ≥ φ(s, ȳ) for all s ∈ S and t ∈ T. An interpretation of (7) is that an optimal strategy for P_1 achieves at least the value of the game against all the pure strategies of P_2.
In the next theorem we prove that the value of a game is unique, though the reader should note that the optimal strategies are not necessarily unique.
THEOREM 2. The value of a game, if it exists, is unique.
Proof. Suppose (x̄, ȳ; ω) and (x*, y*; ω*) are solutions of the game G = (S, T; φ), then it follows from (6) that

    ω = φ(x̄, ȳ) ≥ φ(x*, ȳ) ≥ φ(x*, y*) = ω*

and

    ω = φ(x̄, ȳ) ≤ φ(x̄, y*) ≤ φ(x*, y*) = ω*,

which proves that ω = ω*.
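Condition (7) for the card game can be checked numerically (a sketch with exact arithmetic; the variable names are ours):

```python
from fractions import Fraction as F

phi = [[F(0), F(-1)],          # payoff matrix (1): rows s1, s2, columns t1, t2
       [F(-1, 4), F(1, 4)]]
x = [F(1, 3), F(2, 3)]         # P1's optimal mixed strategy
y = [F(5, 6), F(1, 6)]         # P2's optimal mixed strategy
w = F(-1, 6)                   # claimed value of the game

phi_xt = [x[0] * phi[0][j] + x[1] * phi[1][j] for j in range(2)]  # phi(x, t_j)
phi_sy = [y[0] * phi[i][0] + y[1] * phi[i][1] for i in range(2)]  # phi(s_i, y)
```

All four expected payoffs equal −1/6, so both inequalities of (7) hold with equality.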

4. Dominated and Essential Strategies

A strategy s_k for P_1 is said to dominate his strategy s_l if, for all t ∈ T, φ(s_k, t) ≥ φ(s_l, t), which says that P_1 will not be worse off by using s_k in preference to s_l, whatever strategy P_2 employs. Similarly the strategy t_k for P_2 dominates his strategy t_l if φ(s, t_k) ≤ φ(s, t_l) for all s ∈ S. The following theorem proves that if a dominated strategy is eliminated from a game, then any solution to the reduced game is a solution to the original game.
THEOREM 3. If φ(s_k, t) ≥ φ(s_l, t) for all t ∈ T and if (x*, ȳ; ω) is a solution to the game G* = (S − s_l, T; φ), then (x̄, ȳ; ω) is a solution to the game G = (S, T; φ), where x̄(s) = x*(s) for s ∈ S − s_l and x̄(s_l) = 0.
Proof. Since (x*, ȳ; ω) is a solution to G* = (S − s_l, T; φ),

    φ(x*, t) ≥ ω ≥ φ(s, ȳ)   for all s ∈ S − s_l and t ∈ T.

From the definition of x̄(s),

    φ(x̄, t) = Σ_{s∈S} x̄(s) φ(s, t) = Σ_{s∈S−s_l} x*(s) φ(s, t) = φ(x*, t) ≥ ω,

and

    φ(s_l, ȳ) = Σ_{t∈T} ȳ(t) φ(s_l, t) ≤ Σ_{t∈T} ȳ(t) φ(s_k, t) = φ(s_k, ȳ) ≤ ω.

Hence φ(x̄, t) ≥ ω ≥ φ(s, ȳ) for all s ∈ S and t ∈ T, and (x̄, ȳ; ω) is a solution to G = (S, T; φ).
A pure strategy is called essential if it is used in some optimal strategy, and in the next theorem we prove that an essential pure strategy of one player achieves the value of the game against all the optimal strategies of his opponent.
DEFINITION. The pure strategy s_k for P_1 is called essential if there exists an optimal strategy x̄(s) for P_1 with x̄(s_k) > 0.
THEOREM 4. If the game G = (S, T; φ) has a solution and if s_k is an essential pure strategy for P_1, then

    φ(s_k, ȳ) = ω   for all optimal strategies ȳ(t) of P_2,

where ω is the value of the game and the strategy sets, S and T, are enumerable.
Proof. As s_k is essential, there exists an optimal strategy x̄(s) for P_1 with x̄(s_k) > 0. For all optimal strategies ȳ(t) of P_2, φ(x̄, ȳ) = ω, which can also be written as

    Σ_{s∈S} x̄(s)(φ(s, ȳ) − ω) = 0.   (8)

Since ȳ is optimal, φ(s, ȳ) ≤ ω for all s ∈ S, and by the definition of a mixed strategy, x̄(s) ≥ 0 for all s ∈ S; hence it follows from (8), as S is enumerable, that

    x̄(s)(φ(s, ȳ) − ω) = 0   for all s ∈ S,

which implies that φ(s_k, ȳ) = ω, as x̄(s_k) > 0.


The following example illustrates how Theorems 3 and 4 can be used to solve a game.
Example. Find a solution to the game with payoff matrix

          t_1   t_2   t_3   t_4
  s_1  (   6     4     6     6  )
  s_2  (   6     5     4     4  )
  s_3  (   6     5     6     0  )

Solution. Since maximin φ = 4 and minimax φ = 5, the payoff has no saddle point by Theorem 1. The strategies t_1 and t_3 are dominated by the strategies t_2 and t_4 respectively, and in the reduced game the strategy s_3 is dominated by s_2. We now use Theorem 3 and solve the reduced game

          t_2   t_4
  s_1  (   4     6  )
  s_2  (   5     4  )

Since the 2 × 2 matrix has no saddle point, the pure strategies for both players are essential (this result only holds for 2 × 2 matrices). Let x̄ and ȳ be optimal strategies for P_1 and P_2 respectively, where x̄(s_1) = p, x̄(s_2) = 1 − p, ȳ(t_2) = q, ȳ(t_4) = 1 − q and 0 < p, q < 1. On applying Theorem 4 we obtain

    φ(x̄, t_2) = 4p + 5(1 − p) = ω,
    φ(x̄, t_4) = 6p + 4(1 − p) = ω,
    φ(s_1, ȳ) = 4q + 6(1 − q) = ω,

which have the solution p = 1/3, ω = 14/3 and q = 2/3. Hence by Theorem 3, a solution to the game is for P_1 to use his strategies s_1 and s_2 with probabilities 1/3 and 2/3 respectively, for P_2 to use his strategies t_2 and t_4 with probabilities 2/3 and 1/3 respectively, and the value of the game is 14/3. The reader should check that the solution satisfies the condition (7).
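The pair of equalising equations for a 2 × 2 game without a saddle point can be solved once and for all (a sketch; the function name is ours):

```python
from fractions import Fraction as F

def solve_2x2(a):
    """Equalise the expected payoffs (Theorem 4) for a 2 x 2 game with no
    saddle point: P1 mixes the rows with (p, 1 - p), P2 mixes the columns
    with (q, 1 - q); returns (p, q, value)."""
    (a11, a12), (a21, a22) = a
    d = a11 - a12 - a21 + a22            # non-zero when there is no saddle point
    p = F(a22 - a21, d)
    q = F(a22 - a12, d)
    w = F(a11 * a22 - a12 * a21, d)
    return p, q, w

p, q, w = solve_2x2([[4, 6], [5, 4]])    # the reduced game above
```

For the reduced game it returns p = 1/3, q = 2/3 and ω = 14/3, as found by hand.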

5. Minimax Theorem

The existence of a solution to a game, G = (S, T; φ), can be proved under certain conditions, and the major result of this chapter will be to prove that G has a solution when S and T are finite. Such a game is called a matrix game, as the payoff can be represented by a matrix. Existence can be proved under more general conditions.

Matrix Games

If S = (s1, s2, ..., sm) and T = (t1, t2, ..., tn), then the mixed
strategies x(s) and y(t) for P1 and P2 respectively, can be repre-
sented by the vectors x = (x1, x2, ..., xm)′ and y = (y1, y2,
..., yn)′, where xi = x(si) and yj = y(tj). The conditions (5)
for a mixed strategy are equivalent to

    x′u = 1, x ≥ 0, where u = (1, 1, ..., 1)′m,   (9)

and v′y = 1, y ≥ 0, where v = (1, 1, ..., 1)′n.   (10)

On writing φ(si, tj) = aij, the expected payoff

    φ(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} xi yj aij = x′Ay, where A = (aij),

and the condition (7) for (x̄, ȳ; ω) to be a solution to the game
77

G = (S, T; φ) becomes

    x̄′A ≥ ωv′,  Aȳ ≤ ωu,   (11)

as the pure strategies correspond to the unit vectors ei.
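In this vector notation, checking a proposed solution amounts to two matrix products. A minimal Python sketch, using the 2 × 2 game of the previous section's worked example (rows s1, s2; columns t2, t4) together with its strategies and value:

```python
from fractions import Fraction as F

# Candidate solution of the earlier 2 x 2 example, to be checked
# against conditions (9), (10) and (11).
A = [[F(4), F(6)], [F(5), F(4)]]
x = [F(1, 3), F(2, 3)]          # x'u = 1, x >= 0: condition (9)
y = [F(2, 3), F(1, 3)]          # v'y = 1, y >= 0: condition (10)
w = F(14, 3)

xA = [sum(x[i] * A[i][j] for i in range(2)) for j in range(2)]   # x'A
Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]   # Ay

assert sum(x) == 1 and sum(y) == 1
assert all(v >= w for v in xA)  # x'A >= w v'
assert all(v <= w for v in Ay)  # Ay <= w u
print(xA, Ay)                   # every component equals 14/3 here
```

Since both players' supports are full, every component of x′A and Aȳ is exactly ω here; in general only the inequalities of (11) need hold.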
Before proving the existence theorem, we require the result
that the sets of optimal strategies are unaltered by adding an
arbitrary constant to the payoff, that is, if (x̄, ȳ; ω) is a solution
to the game G = (S, T; φ), then (x̄, ȳ; ω + k) is a solution to
the game (S, T; φ + k) (proof left as an exercise for the
reader).
THEOREM 5 (minimax theorem). A matrix game has a solution.
Proof. We assume that the maximin of the payoff matrix
A is positive, as this ensures that the value of the game is
positive. If necessary, this can be achieved by adding a suitable
constant to the elements of A. We now consider the s.m.p.,

    minimise x′u
    subject to x′A ≥ v′ and x ≥ 0,

where u = (1, 1, ..., 1)′m and v = (1, 1, ..., 1)′n. A feasible
vector is x = akl⁻¹ek, where akl is the maximin. The dual problem
is to

    maximise v′y
    subject to Ay ≤ u and y ≥ 0,

for which y = 0 is a feasible vector. Hence by the fundamental
duality theorem on page 19, there exist optimal vectors x0 and
y0 satisfying

    x0′u = v′y0 = z0,

where z0 > 0, as x0 must have at least one positive element to
satisfy the constraints. Let

    x̄ = z0⁻¹x0,  ȳ = z0⁻¹y0,  ω = z0⁻¹;

then (x̄, ȳ; ω) is a solution to the game, since x̄, ȳ and ω satisfy
the conditions (9), (10) and (11).
78

The minimax theorem was first proved by von Neumann
in 1928 using a fixed point theorem, whereas the above proof
was given by Dantzig in 1951. The name of the theorem derives
from the condition

    max_x min_y φ(x, y) = min_y max_x φ(x, y),

which is necessary and sufficient for the existence of a solution
to a game. This result is the generalisation of Theorem 1 to
include mixed strategies.

6. Solution of Matrix Games by Simplex Method

The above proof of the minimax theorem is constructive, as
it gives a technique for finding a solution to the game with
payoff matrix A, which is to solve the related linear programme,

    maximise v′y
    subject to Ay ≤ u and y ≥ 0.   (12)
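To see the programme (12) at work on a small case, the feasible region can even be searched exhaustively: the maximum of v′y is attained at a vertex, where (for n = 2) two of the constraints hold with equality. A brute-force Python sketch (not the simplex method), applied to the reduced 2 × 2 game of the earlier example:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints of (12) for the reduced 2 x 2 game (rows s1, s2; columns
# t2, t4): Ay <= u together with y >= 0, written as four inequalities.
A = [[F(4), F(6)], [F(5), F(4)]]
rows = A + [[F(-1), F(0)], [F(0), F(-1)]]
rhs = [F(1), F(1), F(0), F(0)]

def corner(M, b):
    """Solve the 2 x 2 system My = b by Cramer's rule, if non-singular."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if det == 0:
        return None
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

best, ybest = F(0), [F(0), F(0)]
for i, j in combinations(range(4), 2):      # every pair of tight constraints
    y = corner([rows[i], rows[j]], [rhs[i], rhs[j]])
    if y and all(r[0] * y[0] + r[1] * y[1] <= c for r, c in zip(rows, rhs)):
        if y[0] + y[1] > best:
            best, ybest = y[0] + y[1], y

print(best)                          # 3/14, the value of the programme
print([yi / best for yi in ybest])   # the optimal strategy, (2/3, 1/3)
```

The value of the game is then z0⁻¹ = 14/3, in agreement with the earlier example; for anything larger than a toy matrix this enumeration is replaced by the simplex method, as in the example below.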

Example. Find a solution to the game with payoff matrix

        ( 1/3   0    1 )
    A = (  1    0    0 )
        ( 1/3   1   −1 )

Solution. Since maximin A = 0, we add 1 to the elements of A and
then convert (12) into the following canonical maximum problem,

    maximise y1 + y2 + y3
    subject to (4/3)y1 +  y2 + 2y3 + w1 = 1
                  2y1  +  y2 +  y3 + w2 = 1
               (4/3)y1 + 2y2        + w3 = 1
    and y ≥ 0, w ≥ 0.
79
GAME THEORY

A b.f.s. is y = 0, w = (1, 1, 1)′ and the corresponding simplex tableau
is shown below. Since v′y is to be maximised, zj < cj is the criterion for
aj to be included in the basis and zj ≥ cj for all j is the criterion for
optimality (see footnotes on pages 51 and 53).

    c              1      1      1      0      0      0
    basis    b     a1     a2     a3     e1     e2     e3

    e1       1    4/3     1      2      1      0      0
    e2       1     2      1      1      0      1      0
    e3       1    4/3     2      0      0      0      1
    z − c    0    −1     −1     −1      0      0      0

    e1      1/2   2/3     0      2      1      0    −1/2
    e2      1/2   4/3     0      1      0      1    −1/2
    a2      1/2   2/3     1      0      0      0     1/2
    z − c   1/2  −1/3     0     −1      0      0     1/2

    a3      1/4   1/3     0      1     1/2     0    −1/4
    e2      1/4    1      0      0    −1/2     1    −1/4
    a2      1/2   2/3     1      0      0      0     1/2
    z − c   3/4    0      0      0     1/2     0     1/4
From the simplex tableau, y0 = (0, 1/2, 1/4)′ is an optimal vector, z0 = 3/4
is the value, and x0 = (1/2, 0, 1/4)′ is the optimal vector for the dual problem,
80

which is found in the bottom row under the ei's. The corresponding opti-
mal strategies for P1 and P2 are x̄ = z0⁻¹x0 = (2/3, 0, 1/3)′, ȳ = z0⁻¹y0 =
(0, 2/3, 1/3)′, and the value of the game is ω = z0⁻¹ − 1 = 1/3.
Since z1 − c1 = 0, another optimal vector can be obtained by replac-
ing e2 in the basis by a1. This gives another optimal strategy, y* =
(1/3, 4/9, 2/9)′, for P2. It can be proved that the set of optimal stra-
tegies for a player is a convex polytope (see exercise 9 on page 82),
and in this example (ȳ, y*) is the set of optimal strategies for P2. The
optimal strategy for P1 is unique as the optimal vector for the linear
programme is non-degenerate (see exercise 4 on page 66).
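The optimality claims can be verified directly by weak duality. A Python check with exact arithmetic, assuming the constraint matrix B = A + 1 of the canonical problem of this example; y0, z0 and the dual vector x0 are as read from the tableau, and ys denotes z0·y*:

```python
from fractions import Fraction as F

# Constraint matrix of the canonical problem (assumed to be A + 1 with
# the coefficients of this example), the two optimal vectors, and the
# dual vector found under the e_i's.
B  = [[F(4, 3), F(1), F(2)],
      [F(2),    F(1), F(1)],
      [F(4, 3), F(2), F(0)]]
y0 = [F(0), F(1, 2), F(1, 4)]
ys = [F(1, 4), F(1, 3), F(1, 6)]   # z0 * y(star), the alternative optimum
x0 = [F(1, 2), F(0), F(1, 4)]
z0 = F(3, 4)

for y in (y0, ys):                 # both feasible, with the same value z0
    assert sum(y) == z0
    assert all(sum(B[i][j] * y[j] for j in range(3)) <= 1 for i in range(3))

# x0 is dual feasible with the same objective value, so both y's are optimal
assert all(sum(x0[i] * B[i][j] for i in range(3)) >= 1 for j in range(3))
assert sum(x0) == z0

print([v / z0 for v in y0], 1 / z0 - 1)   # strategy (0, 2/3, 1/3), value 1/3
```

Equal primal and dual objective values certify optimality without re-running the simplex method, which is exactly the content of the fundamental duality theorem used in the proof of Theorem 5.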
We conclude this chapter by summarising a procedure for obtaining
a solution to a matrix game.
(i) Test for a saddle point; if one exists then a solution is immediate,
if not proceed to (ii).
(ii) Eliminate dominated strategies.
(iii) If the reduced matrix is 2 × 2, solve by the method given on page 76.
If not, add a suitable constant k to the elements of A if the maxi-
min is not positive, and then solve the related linear programme,
    maximise v′y
    subject to Ay + w = u, y ≥ 0, w ≥ 0,
by the simplex method.
(iv) A solution to the game is given by x̄ = z0⁻¹x0, ȳ = z0⁻¹y0, ω
= z0⁻¹ − k, where y0 and x0 are optimal vectors for the linear
programme and its dual, respectively, and z0 is the value of the programme.
To determine the sets of optimal strategies for each player, the unreduced
payoff matrix must be considered, as some optimal strategies may be
removed by eliminating the dominated strategies. For further theorems
on the structure of matrix games, the reader is referred to chapter 7 of
the book by Gale.
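The procedure, apart from the dominance elimination of step (ii), can be sketched as a single routine. A hedged Python implementation (exact arithmetic; Bland's rule is used to avoid cycling, a detail the summary does not prescribe):

```python
from fractions import Fraction as F

def solve_matrix_game(A):
    """Steps (i), (iii) and (iv) of the summary: saddle-point test, then the
    related linear programme solved by the simplex method, then renormalise."""
    m, n = len(A), len(A[0])
    A = [[F(a) for a in row] for row in A]
    maximin = max(min(row) for row in A)
    minimax = min(max(row[j] for row in A) for j in range(n))
    if maximin == minimax:                       # (i) saddle point: pure solution
        return None, None, maximin
    k = F(0) if maximin > 0 else 1 - maximin     # (iii) make the maximin positive
    G = [[a + k for a in row] for row in A]
    # tableau for: maximise v'y subject to Gy + w = u, y >= 0, w >= 0
    T = [G[i] + [F(i == j) for j in range(m)] + [F(1)] for i in range(m)]
    c = [F(-1)] * n + [F(0)] * (m + 1)           # the z - c row
    basis = list(range(n, n + m))                # the slack vectors start in the basis
    while any(cj < 0 for cj in c[:-1]):
        col = next(j for j, cj in enumerate(c[:-1]) if cj < 0)   # Bland's rule
        row = min((i for i in range(m) if T[i][col] > 0),
                  key=lambda i: (T[i][-1] / T[i][col], basis[i]))
        T[row] = [x / T[row][col] for x in T[row]]
        for r in range(m):
            if r != row and T[r][col]:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[row])]
        f = c[col]
        c = [a - f * b for a, b in zip(c, T[row])]
        basis[row] = col
    z0 = c[-1]                                   # value of the programme
    y0 = [F(0)] * n
    for i, b in enumerate(basis):
        if b < n:
            y0[b] = T[i][-1]
    x0 = c[n:n + m]                              # dual vector, under the slacks
    return [x / z0 for x in x0], [y / z0 for y in y0], 1 / z0 - k  # (iv)

print(solve_matrix_game([[4, 6], [5, 4]]))  # x = (1/3, 2/3), y = (2/3, 1/3), 14/3
```

For the reduced game of the worked example this reproduces the solution found there, and the shift in step (iii) lets the same routine handle matrices whose maximin is not positive, for instance [[0, −1], [−1, 0]], whose value is −1/2.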

EXERCISES ON CHAPTER FIVE


1. Determine the payoff matrix for the following game. P2, the bowler,
can bowl a fast ball and a slow ball. P1, the batter, can either hit or miss.
If P1 misses a fast ball or hits a slow ball, he is out and scores no runs.
If P1 hits a fast ball, he scores four runs and if he misses a slow ball,
the game is played again and if he again misses a slow ball, he gets one
run. (P1 cannot recognise P2's ball in advance.)
81

2. Evaluate the maximin and minimax for the game G = (S, T; φ),
where φ(s, t) = 2st − s² + t² + 2s and S = T is the field of real num-
bers, and hence solve the game.
3. Solve the game with the n × n payoff matrix, A = (aij), where
aij = 0 for i ≠ j and aii > 0 for i = 1, 2, ..., n. Use your result to solve
the game of exercise 1.
4. A game, G = (S, T; φ), is called symmetric if S = T and φ(s, t) =
−φ(t, s) for all s ∈ S and t ∈ T. Show that x(s) is an optimal strategy for P1
if and only if φ(x, s) ≥ 0 for all s ∈ S.
Two aircraft each carry one rocket and the probability of the pilot of
either aircraft hitting the other aircraft by firing his rocket from a distance
d, is e⁻ᵈ. What is the best distance for the pilots to fire from, given that
each knows when the other has fired?
5. Solve the following game. P1 is dealt one card and P2 is dealt two
cards from a pack of four aces. After looking at his card, P1 guesses
which ace remains in the pack and then P2, having heard P1's guess and
having inspected his own cards, also guesses which ace remains in the
pack. If either player guesses correctly, he wins one from his opponent.
6. Use the simplex method to solve the game with payoff matrix

        ( −3    3    2 )
    A = (  5   −1    0 )
        (  0   −2    3 )
7. Commander A has three aircraft with which to attack a target
and his opponent B has four anti-aircraft missiles to defend it. To destroy
the target, it is sufficient for one of A's aircraft to penetrate B's defences.
There are three corridors through which A's aircraft can approach the
target; B can defend each corridor by placing any number of missiles
in it and each missile is guaranteed to destroy one and only one aircraft.
Find optimal strategies for A and B.
8. Show that the linear programme,
    maximise ω
    subject to x′A ≥ ωv′, x′u = 1 and x ≥ 0,
and its dual are feasible. Hence prove the minimax theorem for matrix
games. (u = (1, 1, ..., 1)′m, v = (1, 1, ..., 1)′n)
9. Use the result of exercise 6 on page 12 to show that any non-
negative solution, y ≥ 0, w ≥ 0, of the equations
    Ay + w = ωu, v′y = 1,
can be expressed as a convex combination of the basic non-negative
solutions of the equations. Hence prove that the set of optimal strategies
for P2 is a convex polytope. (u = (1, 1, ..., 1)′m, v = (1, 1, ..., 1)′n)
82
SUGGESTIONS FOR FURTHER READING

BEALE, E. M. L. Mathematical Programming in Practice,


Pitman, London, 1968.
GALE, D. The Theory of Linear Economic Models, McGraw
Hill, New York, 1960.
KREKÓ, B. Linear Programming, Pitman, London, 1968.
McKINSEY, J. C. C. Introduction to the Theory of Games,
McGraw Hill, New York, 1952.
NEUMANN, J. von and MORGENSTERN, O. Theory of Games and
Economic Behaviour, Princeton, 1944.
SIMMONARD, M. Linear Programming, Prentice Hall, 1966.

83
SOLUTIONS TO EXERCISES

Chapter I
1. Convex, not convex, convex.
3. C is empty (α > 2), C is convex polytope (0 < α ≤ 2), C is half-
line (α ≤ 0).

Chapter II

4. x = (0, 8/5, 19/5)′, value = −68/5.
6. Minimise αn+1,
subject to Σ_{i=1}^{n} αi φi(xj) + αn+1 ≥ f(xj)
and Σ_{i=1}^{n} αi φi(xj) − αn+1 ≤ f(xj), for j = 1, 2, ..., m.
7. Dual: maximise y′b,
subject to y′A ≤ 0, y′b ≤ 1 and y ≥ 0.
10. Maximise c′x,
subject to Ax = b and l ≤ x ≤ u.
Dual: minimise y′b + w′u − z′l,
subject to y′A + w′ − z′ ≥ c′ and w ≥ 0, z ≥ 0.

Chapter III

1. XII = ( 4 ~ ~ ), cost = 34.


2. Minimum cost = 65 (λ > 6), = 59 + λ (4 < λ ≤ 6), =
47 + 4λ (0 ≤ λ ≤ 4).
3. Optimal policy for caterer is on Monday to buy 30 from shop,
on Tuesday to buy 30 from shop and to use 10 from 1-day laundry, on
Wednesday to use 20 from 2-day laundry and 40 from 1-day laundry,
and on Thursday to use 40 from 1-day laundry.
4. C1 does J4, C2 does J1, C3 does J2 and C4 does J3.
C1 does J2, C2 does J1, C3 does J4 and C4 does J3.
6. Equivalent transportation problem has supplies si = s for i = 1,
..., 4, demands dj for j = 1, ..., 4 and d0 = 4s − d1 − d2 −
d3 − d4, and cost matrix (cij), where ci0 = 0, cij = ∞ for i > j, and cij =
cj + k(j − i) otherwise. For feasibility s ≥ d1, 2s ≥ d1 + d2, 3s ≥ d1 +

84

d2 + d3 and d0 ≥ 0.

Optimal x"~ (~ ~ ~ 00" 106

7. Use equilibrium theorem and assume that the result does not hold.

Chapter IV

1. x = (0, 16/29, 0, 66/29, 0, 11/29)′, value = −3/29.
2. No optimal solution.
5. x = (1, 2, 0, 1)′ is optimal for λ ≥ 3 with value = λ + 5; no optimal
solution for λ < 3.
6. y = (0, ~, ~)′, value = 32/3; optimal solution to dual is x = (1/3, 2/3)′.
7. x = (11/2, 9/4, 7)′, value = 47.
8. 25% C1, 75% C4, cost = 3s 1d. Reduce by 3d: 33⅓% C1, 66⅔% C4.

Chapter V

1. ( 4    0  )
   ( 0   4/5 )
2. Maximin = minimax = 1/2; solution is (s = 1/2, t = −1/2; w = 1/2).

3. Optimal strategies x̄ = ȳ = ω(a11⁻¹, a22⁻¹, ..., ann⁻¹)′ and ω =
(Σ_{i=1}^{n} aii⁻¹)⁻¹. Solution to game of exercise 1 is x̄ = ȳ = (1/6, 5/6)′,
ω = 2/3.
4. Best distance is given by e⁻ᵈ = 1/2.
5. (  0   −1/3 )
   ( −1     0  ), solution is x̄ = (3/4, 1/4)′, ȳ = (1/4, 3/4)′, w = −1/4.
85
SOLUTIONS TO EXERCISES

6. x̄ = (1/2, 1/2, 0)′, ȳ = (1/3, 2/3, 0)′ or (1/4, 1/4, 1/2)′, ω = 1.

7. B's strategies of deploying missiles

                     (4,0,0)  (3,1,0)  (2,2,0)  (2,1,1)
A's        (3,0,0)  (  2/3      2/3      1        1    )
strategies (2,1,0)  (   1       5/6     2/3      2/3   )
for        (1,1,1)  (   1        1       1        0    )
aircraft

Solution x̄ = (1/3, 2/3, 0)′, ȳ = (0, 2/3, 0, 1/3)′ (not unique), ω = 7/9.

86
Index

Basic solutions of equations, 4-7, 26
Basic optimal vector, 25
Canonical minimum problem, 17
  dual, 17
Caterer's problem, 42
Convex
  definition, 1
  combination, 1
  hull, 1
  polytope, 1
Degeneracy, 56
  in transportation problem, 37, 40
  in simplex method, 63
Diet problem, 14
  dual, 16
  interpretation of simplex method, 48
Dominated strategy, 74
Dual simplex method, 65
Equilibrium theorem
  canonical, 23
  standard, 22
Essential strategy, 75
Extreme point, 2
Feasible vector, 18
Fundamental duality theorem, 19
Games
  definition of solution, 70, 73
  matrix, 77
  solution by simplex method, 79
  two-person zero-sum, 69
Graphical method of solution of linear programmes, 27
Inverse matrix method, 64
Linear inequalities, 10
Linear programme
  canonical problem, 17
  general problem, 17
  standard problem, 15
Maximin, 71
Minimax, 71
  theorem, 78
Mixed strategy, 72
Normalisation of game, 70
Objective function, 18
Optimal assignment problem, 43
Optimal strategy, 71, 73
Optimal vector, 18
  basic, 25
Optimality criterion, 53
Payoff, 69, 73
Pivotal condensation, 46
Replacement rule, 48
Revised simplex method, 64
Saddle point, 71
Separating hyperplane theorem, 8
Simplex method, 46-66
  degeneracy, 63
  dual, 65
  initial basic feasible solution, 58
  inverse matrix, 64
  replacement rule, 48
  revised, 64
  rules I and II, 51
  solution of dual, 62
Standard minimum problem, 15
  dual, 15
Strategy, 68
  dominated, 74
  essential, 75
  mixed, 72
  optimal, 71, 73
Transportation problem, 33
  degenerate, 37, 40
  dual, 34
  method of solution, 37
Value
  of a game, 71, 73
  of a linear programme, 18
Vertex, 2, 5, 46
88
LIBRARY OF MATHEMATICS
CALCULUS OF VARIATIONS A. M. Arthurs
COMPLEX NUMBERS W. Ledermann
DIFFERENTIAL CALCULUS P. J. Hilton
ELECTRICAL AND MECHANICAL OSCILLATIONS D. S. Jones
ELEMENTARY DIFFERENTIAL EQUATIONS AND OPERATORS
G. E. H. Reuter
FOURIER AND LAPLACE TRANSFORMS P. D. Robinson
FOURIER SERIES I. N. Sneddon
FUNCTIONS OF A COMPLEX VARIABLE D. O. Tall
INTEGRAL CALCULUS W. Ledermann
INTRODUCTION TO ABSTRACT ALGEBRA C. R. J. Clapham
INTRODUCTION TO MATHEMATICAL ANALYSIS C. R. J. Clapham
LINEAR EQUATIONS P. M. Cohn
LINEAR PROGRAMMING K. Trustrum
MULTIPLE INTEGRALS W. Ledermann
NUMBER THEORY T. H. Jackson
NUMERICAL APPROXIMATION B. R. Morton
PARTIAL DERIVATIVES P. J. Hilton
PRINCIPLES OF DYNAMICS M. B. Glauert
PROBABILITY THEORY A. M. Arthurs
SEQUENCES AND SERIES J. A. Green
SETS AND GROUPS J. A. Green
SETS AND NUMBERS S. Swierczkowski
SOLID GEOMETRY P. M. Cohn
SOLUTIONS OF LAPLACE'S EQUATION D. R. Bland
VIBRATING STRINGS D. R. Bland
VIBRATING SYSTEMS R. F. Chisnell

ROUTLEDGE & KEGAN PAUL
