Algebra Note
Chapter 1. Matrix Algebra
1.1 Introduction
1.1.1 Matrix Operations
1.1.2 Special Matrices
1.1.3 Determinants
1.2 Matrix Inversion
1.3 Partitioned Matrices
1.4 Rank of a Matrix and Linear Independence
1.5 Vectors and Vector Spaces
1.6 Powers and Trace of a Square Matrix
Chapter 2. Systems of Linear Equations
Chapter 3. Special Determinants and Matrices in Economics
3.1 Introduction
3.2 The Jacobian Determinant
3.3 The Hessian Determinant
3.4 Eigenvectors and Eigenvalues
3.5 Quadratic Forms
Chapter 4. INPUT-OUTPUT ANALYSIS AND LINEAR PROGRAMMING
4.1 Input-Output Model (Leontief Model)
4.1.1 Introduction
4.1.2 Assumptions of Input-Output Models
4.1.3 The Open Model
4.1.4 The Closed Model
4.2 Linear Programming
4.2.1 Introduction
4.2.2 Formulating Linear Programming Problems
4.2.3 Solving Linear Programming Problems
4.2.3.1 The Graphic Method
4.2.3.2 The Simplex Method
4.3 The Duality Theorem
Course content (new curriculum)
Chapter 1: Linear Differential and Difference Equations
1.1. Definitions and Concepts
1.2. First-Order Linear Differential Equations
1.3. First-Order Linear Difference Equations
1.4. Economic Applications of Linear Differential and Difference Equations
Chapter 2: Matrices
2.1. Definition
2.2. Types of Matrices
2.3. Matrix Algebra
2.3.1. Addition of Matrices
2.3.2. Multiplication of Matrices
2.3.3. Powers of a Square Matrix
2.3.4. Trace of a Square Matrix
2.4. Determinant of a Matrix
2.5. Matrix Inverse
2.6. Partitioned Matrices
2.7. Rank of a Matrix
2.8. Vectors
2.9. Vector Spaces
2.10. Linear Dependence and Independence of Vectors
Chapter 3: Systems of Linear Equations
3.1. Definition
3.2. Matrix Representation of Linear Equations
3.3. Solutions of Systems of Linear Equations
3.3.1. Solving Systems of Linear Equations by
3.3.1.1. Gaussian Elimination
3.3.1.2. Gauss-Jordan Elimination
3.3.1.3. The Inverse Method
3.3.1.4. Cramer's Rule
3.4. Homogeneous Systems of Linear Equations
3.4.1. Solution of Homogeneous Systems of Linear Equations
3.5. Economic Applications
Chapter 4: Special Determinants & Matrices in Economics
4.1. The Jacobian Matrix and Jacobian Determinant
4.2. The Hessian Matrix and Hessian Determinant
4.3. Quadratic Forms
4.4. Eigenvalues and Eigenvectors
Chapter 5: INPUT-OUTPUT ANALYSIS AND LINEAR PROGRAMMING
Introduction
Economic theories help us understand how the economy operates and what kinds of economic policies (measures) are appropriate when the economy is ailing. Economic models, by simplifying real-world phenomena, are the underlying frameworks of economic theories. These models are used to analyze the basic relationships among economic variables and to make predictions about the effects of changes in these variables.
As you saw in Calculus for Economists, mathematics plays a significant role by providing generalized techniques of analysis. The search for solutions to economic problems requires being well equipped with the basic mathematical tools, in addition to understanding the basic economic theories.
Linear Algebra for Economists is the continuation of Calculus for Economists. You have already discussed the cause-and-effect relationships between different variables through logical, graphical and mathematical tools, so you are well aware of different techniques of analysis.
Thus, this course is designed so that it first deals with matrices and matrix algebra. Under this topic, matrix operations, basic properties of determinants, special determinants, some ideas about vectors, and the inverse of a matrix will be discussed.
Chapter Two
A matrix is a rectangular array of numbers, parameters or variables, each of which has a carefully ordered place within the matrix. A matrix is not a mere aggregate of numbers, parameters or variables; rather, in a given matrix, each element has its own assigned position in a particular row and column.
The numbers, parameters or variables are called elements or members of the matrix. The elements in a horizontal line form a row, while the elements in a vertical line form a column.
Equality and transpose of matrices
Let A = [a b; c d] and B = [e f; g h].
A = B if and only if a = e, b = f, c = g and d = h; that is, two matrices are equal if and only if they have the same dimension and all their corresponding elements are equal.
The transpose of a matrix, written A^T, is obtained by interchanging its rows and columns.
Example: A = [1 2 3; 8 9 4; 7 6 5], A^T = [1 8 7; 2 9 6; 3 4 5]
In linear algebra, a real number such as 6, −8 or 0.5 is called a scalar.
If all the elements of a matrix are zero, the matrix is said to be a zero matrix or null matrix.
Example: A = [0 0 0; 0 0 0; 0 0 0], B = [0 0; 0 0]
A square matrix (one with M = N) in which all non-diagonal elements are zero (a_ij = 0 for i ≠ j) is called a diagonal matrix.
Example: A = [1 0; 0 2], B = [3 0 0; 0 3 0; 0 0 3], C = [1 0 0; 0 1 0; 0 0 1]
A diagonal matrix in which all diagonal elements are one (1) is called identity matrix.
Matrix Operations:
Addition and subtraction of two matrices A and B, A + B and A − B, require that the two matrices be of equal dimension (order). When this condition is met, the matrices are said to be conformable for addition or subtraction. Each element of one matrix is then added to or subtracted from the corresponding element of the other matrix. In such a manner, the addition or subtraction is carried out element by element:
If A = [a_ij] and B = [b_ij], then
A + B = [a_ij + b_ij]
A − B = [a_ij − b_ij]
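The element-by-element rule can be sketched in plain Python; the matrices below are illustrative values of my own choosing, since the module's printed example did not survive:

```python
# Element-wise addition and subtraction of two equal-dimension matrices,
# represented as lists of rows.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
print(mat_sub(A, B))  # [[-4, -4], [-4, -4]]
```

Both functions assume the dimensions already match, mirroring the conformability condition stated above.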
Determinant
A determinant is a scalar (a number) associated with a square matrix. The determinant of a matrix is denoted by vertical bars: if A is a matrix, its determinant is written |A|.
Example: A = [2 1 3; 4 5 6; 7 8 9]. Find |A|.
|A| = 2[(5×9) − (8×6)] − 1[(4×9) − (7×6)] + 3[(4×8) − (7×5)]
= 2(45 − 48) − (36 − 42) + 3(32 − 35)
= 2(−3) − (−6) + 3(−3)
= −6 + 6 − 9 = −9
Example: B = [−7 0 3; 9 1 4; 0 6 5]. Find |B| = 295 (check it).
Sarrus' diagram: copy the first two columns to the right of the matrix and take the products along the diagonals:
−7 0 3 | −7 0
 9 1 4 |  9 1
 0 6 5 |  0 6
|B| = (−7)(1)(5) + (0)(4)(0) + (3)(9)(6) − (3)(1)(0) − (−7)(4)(6) − (0)(9)(5) = −35 + 0 + 162 − 0 + 168 − 0 = 295
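The Sarrus rule for the 3×3 case can be checked with a small Python sketch (the matrix is the module's own example B):

```python
# Sarrus' rule: the three "down"-diagonal products minus
# the three "up"-diagonal products of a 3x3 matrix.
def det3_sarrus(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

B = [[-7, 0, 3], [9, 1, 4], [0, 6, 5]]
print(det3_sarrus(B))  # 295
```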
If the determinant of a matrix is equal to zero, the determinant is said to vanish and the matrix is termed a singular matrix. A singular matrix is one in which there exists linear dependence between at least two rows or columns. If |A| ≠ 0, matrix A is nonsingular and all its rows and columns are linearly independent.
If linear dependence exists in a system of equations, the system as a whole will have an infinite number of possible solutions, making a unique solution impossible.
Thus,
If |A| = 0, the matrix is singular and there is linear dependence among the equations. No unique solution is possible.
If |A| ≠ 0, the matrix is nonsingular and there is no linear dependence among the equations. A unique solution is possible.
The rank of a matrix (r (A)) is defined as the maximum number of linearly independent rows or
columns in the matrix. The rank of a matrix also allows for a simple test of linear dependency,
which follows immediately. Assume a square matrix of order n,
If r (A) = n, A is nonsingular and there is no linear dependence.
If r (A) < n, A is singular and there is linear dependence.
The determinant of a 3×3 matrix
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]
is called a third-order determinant and is a summation of three products. To derive the three products:
1. Take the first element of the first row, a11, and mentally delete the row and the column in which it appears. Then multiply a11 by the determinant of the remaining elements.
2. Take the second element of the first row, a12, and mentally delete the row and the column in which it appears. Then multiply a12 by −1 times the determinant of the remaining elements.
3. Take the third element of the first row, a13, and mentally delete the row and the column in which it appears. Then multiply a13 by the determinant of the remaining elements.
Thus |A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31).
To find a determinant in Excel
First, the given matrix must be a square matrix. If so:
A) Select a cell where the determinant (the answer) is to be written.
B) Type the = (equals) sign and type MDETERM.
C) Open a bracket (.
D) Specify the cell range of a square matrix, e.g. A1:C3, and close the bracket.
E) i.e. =MDETERM(A1:C3)
F) Press Enter.
To find inverse
a. Select new cells of the same dimension as the given matrix (the answer area).
b. Type the = sign and type MINVERSE, then specify the cell range of a square matrix and close the bracket, i.e. =MINVERSE(A1:C3), then
c. Press Ctrl+Shift+Enter.
To find transpose
a. Select new cells with the rows and columns of the given matrix interchanged (the answer area).
b. Type the = sign and type TRANSPOSE, then specify the cell range and close the bracket, i.e. =TRANSPOSE(Q1:Q3), then
c. Press Ctrl+Shift+Enter.
To find the product of two matrices
a. Check whether the given matrices are conformable for matrix multiplication. To be conformable, the number of columns of the first matrix must equal the number of rows of the second matrix.
b. Decide the dimension (size) of the result matrix: its number of rows comes from the first matrix and its number of columns from the second matrix.
c. Select new cells with the dimension decided in (b) (the answer area).
d. Type the = sign and type MMULT, open a bracket, specify the cell range of the first matrix, type a comma, specify the cell range of the second matrix, and close the bracket: =MMULT(C17:E19,H11:H13)
e. Press Ctrl+Shift+Enter.
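For readers without Excel at hand, the same four operations can be sketched in plain Python for small matrices; the 2×2 matrix here is illustrative, and a closed-form 2×2 inverse is used rather than a general algorithm:

```python
# Plain-Python counterparts of MDETERM, TRANSPOSE, MINVERSE and MMULT
# for small matrices stored as lists of rows.
def transpose(A):
    return [list(r) for r in zip(*A)]

def mmult(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def inverse2(A):  # adjugate over determinant, 2x2 only
    d = det2(A)
    return [[A[1][1]/d, -A[0][1]/d], [-A[1][0]/d, A[0][0]/d]]

A = [[2, 1], [4, 3]]
print(det2(A))                 # 2
print(transpose(A))            # [[2, 4], [1, 3]]
print(mmult(A, inverse2(A)))   # [[1.0, 0.0], [0.0, 1.0]]
```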
9
The elements remaining after the deletion process form a subdeterminant of the matrix called a minor. Thus, a minor |M_ij| is the determinant of the submatrix formed by deleting the ith row and jth column of the matrix.
Using the matrix given above, |M11| is the minor of a11, |M12| is the minor of a12, and |M13| is the minor of a13.
Thus, the determinant of A can be written as
|A| = a11|M11| + a12(−1)|M12| + a13|M13|
A cofactor |C_ij| is a minor with a prescribed sign. The rule for the sign of a cofactor is
|C_ij| = (−1)^(i+j) |M_ij|
Thus, if the sum of the subscripts (i + j) is an even number, |C_ij| = |M_ij|, since −1 raised to an even power is positive. If i + j is an odd number, |C_ij| = −|M_ij|, since −1 raised to an odd power is negative.
The cofactors |C11|, |C12| and |C13| for the matrix given above are found as follows:
|C11| = (−1)^(1+1)|M11| = |M11|, since (−1)^(1+1) = (−1)² = 1.
For a 2×2 matrix A = [a11 a12; a21 a22]:
a) To find the minor of a11, mentally delete the row and column in which it appears; the remaining element is the minor. Thus |M11| = a22. Similarly, |M12| = a21.
b) From the rule of cofactors, |C11| = (−1)^(1+1)|M11| = +1(a22) = a22
|C12| = (−1)^(1+2)|M12| = −1(a21) = −a21
Example: Given A = [13 17; 19 15],
find (a) the minors and (b) the cofactors for the elements of the second row.
a) |M21| = 17, |M22| = 13
b) |C21| = (−1)^(2+1)|M21| = −1(17) = −17
|C22| = (−1)^(2+2)|M22| = 1(13) = 13
Example 3: Given A = [2 −4 5; −3 6 8; 7 1 9],
find (a) the minors and (b) the cofactors for the elements of the first row.
|C11| = (−1)^(1+1)|M11| = |6 8; 1 9| = (6×9) − (1×8) = 54 − 8 = 46
|C12| = (−1)^(1+2)|M12| = −1|−3 8; 7 9| = −1[(−3×9) − (7×8)] = −1(−27 − 56) = 83
|C13| = (−1)^(1+3)|M13| = |−3 6; 7 1| = (−3×1) − (7×6) = −3 − 42 = −45
Example 4: Given A = [11 9 4; 3 2 7; 10 10 6],
find (a) the minors and (b) the cofactors for the elements of the third row.
|C31| = (−1)^(3+1)|M31| = |9 4; 2 7| = (9×7) − (2×4) = 63 − 8 = 55
Example 5: Given A = [13 6 11; 7 10 2; 12 9 4],
find (a) the minors and (b) the cofactors for the elements in the second column.
a) Deleting row 1 and column 2: |M12| = |7 2; 12 4| = (7×4) − (12×2) = 28 − 24 = 4
Deleting row 2 and column 2: |M22| = |13 11; 12 4| = (13×4) − (12×11) = 52 − 132 = −80
LAPLACE EXPANSION AND HIGHER-ORDER DETERMINANTS
Laplace expansion is a method for evaluating determinants in terms of cofactors. It simplifies matters by permitting higher-order determinants to be expressed in terms of lower-order determinants. For example, the Laplace expansion of a third-order determinant can be expressed as
|A| = a11|C11| + a12|C12| + a13|C13|
where |C_ij| is a cofactor based on a second-order determinant. Here a12 is not explicitly multiplied by −1 since, by the rule of cofactors, |C12| is automatically multiplied by −1.
Laplace expansion permits evaluation of a determinant along any row or column. Selecting the row or column with the most zeros simplifies evaluation of the determinant by eliminating terms. Laplace expansion also serves as the basis for evaluating determinants of orders higher than three.
Example
For a fourth-order determinant, the cofactors are third-order subdeterminants, which in turn can be reduced to second-order subdeterminants as above; fifth-order determinants and higher are treated in similar fashion.
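The expansion can be written as a short recursive routine; this is a sketch of the idea rather than code from the module, and it checks the two 3×3 examples computed earlier:

```python
# Determinant by Laplace expansion along the first row (works for any order).
def det(A):
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j, a in enumerate(A[0]):
        # delete row 0 and column j to form the minor
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * a * det(minor)
    return total

print(det([[2, 1, 3], [4, 5, 6], [7, 8, 9]]))    # -9
print(det([[-7, 0, 3], [9, 1, 4], [0, 6, 5]]))   # 295
```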
PROPERTIES OF DETERMINANTS
1. The determinant of a matrix equals the determinant of its transpose.
2. Interchanging any two rows or columns will affect the sign of the determinant, but not the
absolute value of the determinant.
3. Multiplying a single row or column of a matrix by a scalar will cause the value of the
determinant to be multiplied by the scalar.
4. Addition or subtraction of a non-zero multiple of any row or column to or from another row
or column does not change the value of the determinant.
a) Find |A|; (b) subtract 5 times column 2 from column 1 to form a new matrix B, and find |B|.
5. The determinant of a triangular matrix (one with zero elements everywhere above or below the principal diagonal, whether upper or lower triangular) is the product of the elements along the principal diagonal.
6. If all the elements of a row or column equal zero, the determinant will equal zero.
7. The determinant of the product of two matrices is the product of their determinants.
Example: A = [2 1; 4 3], B = [1 2; 5 3], AB = [7 7; 19 17]
|A| = 2, |B| = −7, |AB| = (7×17) − (7×19) = −14 = |A| × |B|
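Properties 1 and 7 can be checked numerically with the matrices of the example above:

```python
def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def mmult(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [4, 3]]
B = [[1, 2], [5, 3]]
AB = mmult(A, B)
print(AB)                 # [[7, 7], [19, 17]]
print(det2(A), det2(B))   # 2 -7
print(det2(AB))           # -14 = 2 * (-7): property 7
At = [list(r) for r in zip(*A)]
print(det2(At))           # 2: property 1, |A| = |A^T|
```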
INVERSE OF A MATRIX
Properties of Inverses
The following properties of inverse matrices are of interest.
a) Not every square matrix has an inverse; a square matrix has no inverse if it is singular.
b) If A is n×n, then A⁻¹ must also be n×n; otherwise it cannot be conformable for both pre- and post-multiplication.
c) If an inverse exists, then it is unique. A matrix cannot have more than one inverse.
d) The inverse of an inverse is the original matrix, (A-1)-1 = A.
e) The inverse of a product is the product of the inverses in reverse order: (AB)⁻¹ = B⁻¹A⁻¹
f) The inverse of the transpose is the transpose of the inverse, (AT)-1 = (A-1)T
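Property (e) can be verified exactly for a pair of 2×2 matrices (values chosen only for illustration), using rational arithmetic to avoid rounding:

```python
from fractions import Fraction as F

def inv2(M):  # 2x2 inverse: adjugate divided by the determinant
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[F(M[1][1], d), F(-M[0][1], d)], [F(-M[1][0], d), F(M[0][0], d)]]

def mmult(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [4, 3]]
B = [[1, 2], [5, 3]]
print(inv2(mmult(A, B)) == mmult(inv2(B), inv2(A)))  # True: (AB)^-1 = B^-1 A^-1
```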
Example 1: Determine the rank of the following matrices.
a) A = [−2 6 1; 2 3 4; 1 5 0]   b) B = [2 −4 1; 6 1 3; 8 2 4]
Solution a) |A| = −2[(3×0) − (4×5)] − 6[(2×0) − (4×1)] + 1[(2×5) − (3×1)]
= −2(−20) − 6(−4) + 1(7) = 40 + 24 + 7 = 71
Thus, with |A| ≠ 0, A is non-singular and the three rows (or columns) are linearly independent; hence r(A) = 3.
b) |B| = 2[(1×4) − (3×2)] + 4[(6×4) − (3×8)] + 1[(6×2) − (1×8)] = 2(−2) + 4(0) + 1(4) = 0
With |B| = 0, B is singular and the three rows/columns are not linearly independent; hence r(B) < 3. Now test whether any two rows/columns are independent. Starting with the submatrix in the upper left corner, take the 2×2 determinant:
|2 −4; 6 1| = 2 + 24 = 26 ≠ 0
Thus r(B) = 2: there are only two linearly independent rows and columns in B.
For a square matrix, the rank can be used to determine whether the matrix is singular or non-singular: an n×n matrix A is non-singular if and only if r(A) = n.
Example 2: The rank of the matrix [5 2; …] is 2, because its second-order determinant (the determinant of the full matrix) is non-zero; thus the matrix contains a non-singular submatrix of order 2.
Example 3: The rank of the matrix [8 6; …] is 1: its highest-order (second-order) determinant is zero, but there is a non-zero first-order determinant (a non-zero element).
Example 4: The rank of the null matrix [0 0; 0 0] is 0.
From the above examples, we can draw the following generalizations.
1. The rank of a matrix is not related in any way to the number of zero elements in it.
2. The rank of a matrix cannot exceed the smaller of its number of rows and its number of columns (since a determinant has an equal number of rows and columns).
3. The rank of a column matrix with any number of rows (e.g. a 3×1 matrix) is at most 1, while the rank of a 3×50 matrix is at most 3.
4. The rank of a matrix is at least one unless the matrix is a null matrix.
Note the following points:
- The rank of the transpose of a matrix is the same as the rank of the original matrix.
- The row and column ranks of a matrix are equal.
- If A is an n×n matrix, then r(A) = n if and only if A is non-singular, i.e. |A| ≠ 0.
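The rank can also be computed by row reduction, a direct sketch of the "maximum number of linearly independent rows" definition; the two matrices are those of the rank example above:

```python
from fractions import Fraction as F

# Rank = number of pivots found during Gauss-Jordan row reduction.
def rank(M):
    A = [[F(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue            # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                fac = A[i][c]
                A[i] = [x - fac * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

print(rank([[-2, 6, 1], [2, 3, 4], [1, 5, 0]]))  # 3: non-singular
print(rank([[2, -4, 1], [6, 1, 3], [8, 2, 4]]))  # 2: singular
```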
Vectors
A matrix with a single column, such as b = [b1; b2; … ; bn], is a column vector, and a matrix with a single row, such as a = (a1, a2, …, an), is a row vector.
Let v1 = (a1, a2) and v2 = (b1, b2).
1. Equality: v1 = v2 ⟺ (a1, a2) = (b1, b2) ⟺ a1 = b1 and a2 = b2.
2. Addition: v1 + v2 = (a1, a2) + (b1, b2) = (a1 + b1, a2 + b2).
3. Scalar multiplication: let K be a scalar, i.e. a real number; then Kv1 = (Ka1, Ka2).
4. The inner product of two vectors is not a vector but a scalar (a real number).
Example: v1 = (3, 4, 6) and v2 = (10, 9, 12). Find 3v1 + v2.
3v1 = (3×3, 4×3, 6×3) = (9, 12, 18)
3v1 + v2 = (9 + 10, 12 + 9, 18 + 12) = (19, 21, 30)
Properties of the algebraic operations on vectors
1. Vector addition is commutative: α + β = β + α.
2. Vector addition is associative: (α + β) + γ = α + (β + γ).
3. There exists a zero (null) vector θ such that α + θ = θ + α = α, where θ = (0, 0, …, 0).
4. Each vector α has an additive inverse −α such that α + (−α) = θ.
5. For all real a and b and any vector x, (a + b)x = ax + bx.
6. For any real K, K(α + β) = Kα + Kβ.
7. K1(K2 x) = (K1K2)x, where K1 and K2 are scalars and x is a vector.
Example 1: Given v1 = (4, 1), v2 = (3, 2) and v3 = (5, 4),
find the linear combination of these vectors with the three scalars K1 = 1, K2 = 2 and K3 = 3.
Solution: Let α be the linear combination of the above vectors. Then
α = K1v1 + K2v2 + K3v3
= v1 + 2v2 + 3v3
= (4, 1) + 2(3, 2) + 3(5, 4)
= (4, 1) + (6, 4) + (15, 12)
= (4 + 6 + 15, 1 + 4 + 12) = (25, 17)
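The computation can be replayed in Python with the same vectors and scalars:

```python
# Component-wise vector addition and scalar multiplication.
def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):
    return tuple(k * a for a in v)

v1, v2, v3 = (4, 1), (3, 2), (5, 4)
combo = vadd(vadd(scale(1, v1), scale(2, v2)), scale(3, v3))
print(combo)  # (25, 17)
```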
Example 2: Let A = [2 1 0; 6 2 4; 4 2 0].
Test the linear independence of the rows of the above matrix.
Solution: Let v1 = (2, 1, 0), v2 = (6, 2, 4) and v3 = (4, 2, 0).
There is linear dependence among the three vectors if their linear combination vanishes for scalars that are not all equal to zero, that is, if
K1v1 + K2v2 + K3v3 = 0
⇒ K1(2, 1, 0) + K2(6, 2, 4) + K3(4, 2, 0) = 0
⇒ (2K1 + 6K2 + 4K3, K1 + 2K2 + 2K3, 4K2) = (0, 0, 0)
⇒ 2K1 + 6K2 + 4K3 = 0, K1 + 2K2 + 2K3 = 0, 4K2 = 0
⇒ K2 = 0
⇒ 2K1 + 4K3 = 0 and K1 + 2K3 = 0
Solving these two equations (the first is twice the second):
2K1 + 4K3 = 0
2K1 + 4K3 = 0
⇒ 0 = 0
Thus, there are solutions with K1 ≠ 0 and K3 ≠ 0 while K2 = 0 (for instance K1 = −2, K3 = 1). Since v3 = 2v1, the vectors v1 and v3 are linearly dependent, and so the rows of A are linearly dependent.
Note: In a given matrix, if there is linear dependence among the column/row vectors of that matrix, then the determinant of the matrix is zero and its inverse is undefined.
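The note above gives a quick numerical test: compute the determinant and check whether it vanishes. Here it is applied to the matrix of Example 2:

```python
# A zero determinant signals linear dependence among the rows/columns.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[2, 1, 0], [6, 2, 4], [4, 2, 0]]
print(det3(A))  # 0: the rows are dependent (row 3 = 2 * row 1)
```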
Chapter 2
Systems of Linear Equations
1. Objectives: After completing this unit you will be able to:
- Express simultaneous linear systems of equations in matrix form
- Test the non-singularity of the coefficient matrix of linear equations and calculate its inverse
- Obtain the solutions of linear systems of equations by the inverse, Cramer's rule and Gauss-Jordan elimination methods
- Apply these methods to economic problems
Linear systems of equations are simultaneous linear equations involving two or more equations and variables.
Examples:
aX + bY = M
cX + dY = N        ……… (1)
while
a11x1 + a12x2 = c1
a21x1 + a22x2 = c2        ……… (2)
a31x1 + a32x2 = c3
is a system of three equations in the two variables x1 and x2. In general,
a11x1 + a12x2 + a13x3 + ⋯ + a1nxn = b1
a21x1 + a22x2 + a23x3 + ⋯ + a2nxn = b2        ……… (3)
⋮
am1x1 + am2x2 + am3x3 + ⋯ + amnxn = bm
In equation (3), the variable xj appears only in the jth column; for instance, x3 appears only in the 3rd column. The double-subscripted aij represents the coefficient appearing in the ith equation (row) and attached to the jth variable (column). For example, a32 represents the coefficient in the 3rd equation attached to the 2nd variable. The parameter bi represents the constant term in the ith equation. In short:
⇒ aij — the coefficients
⇒ xj — the variables
⇒ bi — the constant terms
Equation (1) can be compacted as
[a b; c d][X; Y] = [M; N]
Similarly, equation (3) can be compacted as
A = [a11 a12 … a1n; a21 a22 … a2n; … ; am1 am2 … amn], x = [x1; x2; … ; xn], β = [b1; b2; … ; bm]
Thus Ax = β, which is the matrix representation of linear systems of equations.
Example1. Given the following systems of equations
x + y =14
2x +5y = 6, we can write it as
A = [1 1; 2 5], x = [x; y], β = [14; 6]
Ax = β ⇒ [1 1; 2 5][x; y] = [14; 6]
Example 2:
At the "Arada" market center there are three electronics merchants (Aster, Nesru and Mekashaw), who sell the following:
1. Aster sells a stock of 150 refrigerators, 100 TV sets and 120 taps.
2. Nesru sells a stock of 200 refrigerators, 50 TV sets and 80 taps.
3. Mekashaw sells a stock of 140 refrigerators, 150 TV sets and 100 taps.
If the unit prices are Birr 3,500 per refrigerator, Birr 2,000 per TV set and Birr 1,500 per tap, form a system of linear equations that helps to calculate the total revenue of each merchant using matrices.
          (Refrigerators) (TV sets) (Taps)
Aster          150           100      120
Nesru          200            50       80
Mekashaw       140           150      100
Price vector: [3500; 2000; 1500] — price per refrigerator, per TV set and per tap, respectively.
The total revenue can be obtained by multiplying the number of commodities sold, Refrigerator
R, TV sets (T) and Taps (Ta) by their respective selling prices and then adding up the results of
these multiplications.
Aster will obtain
[150 100 120][3500; 2000; 1500],
which can be written as [150×3500 + 100×2000 + 120×1500] = [905,000].
Nesru will obtain, at the same prices,
[200 50 80][3500; 2000; 1500],
which can be rewritten as [200×3500 + 50×2000 + 80×1500] = [920,000].
Mekashaw will obtain
[140 150 100][3500; 2000; 1500],
which can be rewritten in a similar fashion as [140×3500 + 150×2000 + 100×1500] = [940,000].
The above analysis can be rewritten compactly as
[150 100 120; 200 50 80; 140 150 100][3500; 2000; 1500] = [(150×3500) + (100×2000) + (120×1500); (200×3500) + (50×2000) + (80×1500); (140×3500) + (150×2000) + (100×1500)] = [905,000; 920,000; 940,000] = Revenue
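The revenue calculation is a single matrix product, which can be replayed in Python:

```python
def mmult(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

Q = [[150, 100, 120],   # Aster: refrigerators, TV sets, taps
     [200,  50,  80],   # Nesru
     [140, 150, 100]]   # Mekashaw
P = [[3500], [2000], [1500]]  # Birr per refrigerator, TV set, tap
print(mmult(Q, P))  # [[905000], [920000], [940000]]
```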
2. Assume Dashen Brewery wants to advertise three of its main products, Dashen Beer, Royal Beer and Royal Draft, at costs of Birr 150, 200 and 120 per advertisement, respectively. If the firm advertises Dashen Beer 4 days per week, Royal Beer 6 days per week and Royal Draft every day of the week, what will be the firm's total advertising cost in a month?
In transforming linear systems of equations into matrix form, we can follow these steps.
1. First check how many variables and how many equations are involved and whether they are written in proper order; if not, rewrite them into proper order.
2. Write out the coefficient matrix and its dimension underneath.
3. Write the variable matrix (a column vector whose dimension is determined by the number of variables involved).
4. Write the constant column vector, whose dimension will equal the number of equations involved.
5. Check that the three matrices conform to the rule of matrix multiplication.
6. Finally, notice that the variable column vector x is pre-multiplied by the coefficient matrix A, and the product equals the constant column vector β, i.e. Ax = β.
Methods of solving simultaneous systems of equations
In this chapter, we will see three methods of solving simultaneous linear systems of equations.
If the coefficient matrix is non-singular (|A| ≠ 0), then there will be a unique solution for the system of equations and the system is said to be consistent. If the matrix is singular but the equations are not contradictory to each other, then the system will still be consistent, and there will be an infinite number of solutions.
Example i) Given
x + y = 5
2x + 2y = 8
A = [1 1; 2 2]
|A| = 2 − 2 = 0
Dividing the second equation by 2 gives the pair
x + y = 5
x + y = 4
Here, note that the determinant of the coefficient matrix is zero and the two equations are contradictory to each other, so there is no solution satisfying them simultaneously.
ii) Let
2x + y = 4
x + y = 3
A = [2 1; 1 1]
|A| = 2 − 1 = 1
Since |A| ≠ 0, A is non-singular, and there is a unique solution for the system: x = 1, y = 2.
Returning to example (i): subtracting the first equation of the reduced pair from the second gives
0 + 0 = −1 ⇒ 0 = −1
It is not true that 0 and −1 are equal. Thus the two equations are inconsistent, and their coefficient matrix is singular; such systems of equations have no solution at all.
A system of equations that has no solutions is said to be inconsistent.
Note: Every system of linear equations has no solutions, or has exactly one solution, or has
infinitely many solutions.
2.3.1. Inverse Method
After having an idea of how to identify equations as consistent or inconsistent, we can proceed to look for solutions for those systems of equations that are consistent. Before doing so, it is useful to classify linear systems into homogeneous and non-homogeneous systems.
A homogeneous system is one in which the constant vector is the n×1 null vector, i.e. β = 0. Thus, the system is of the form
Ax = 0
Homogeneous systems of equations with a singular coefficient matrix will have an infinite number of solutions.
Example:
4x + 10y = 0
2x + 5y = 0
Here x and y will have an infinite number of solutions because the determinant of the coefficient matrix is zero. Thus, we cannot find unique solutions for x and y as long as the coefficient matrix is singular.
Step 1. Write the system in the form Ax = β:
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33], x = [x1; x2; x3], β = [b1; b2; b3]
⇒ A(3×3) x(3×1) = β(3×1)
Step 2. Evaluate |A|.
a. When |A| ≠ 0:
Step 3. If matrix A is non-singular (|A| ≠ 0), then A⁻¹ exists; find A⁻¹.
Step 4. Pre-multiply both sides of the equation Ax = β by A⁻¹:
⇒ A⁻¹(3×3) A(3×3) x(3×1) = A⁻¹(3×3) β(3×1)
⇒ I(3×3) x(3×1) = A⁻¹(3×3) β(3×1)
⇒ x(3×1) = A⁻¹β
The right-hand side of this equation is a column vector of known numbers, while the left-hand side is a column vector of variables.
Step 5. By the definition of matrix equality, equate the left-hand side and the right-hand side to read off x1, x2 and x3.
Example
4x + y = 12
3x + 5y = 4
Solution
The equations can be rewritten in compact form as
[4 1; 3 5][x; y] = [12; 4]
|A| = (4×5) − (3×1) = 17, adj A = [5 −1; −3 4]
Thus A⁻¹ = adj A / |A| = (1/17)[5 −1; −3 4] = [5/17 −1/17; −3/17 4/17]
But X = A⁻¹B:
[x; y] = (1/17)[5 −1; −3 4][12; 4] = (1/17)[(5×12) − (1×4); (−3×12) + (4×4)] = [56/17; −20/17]
Hence x = 56/17 and y = −20/17.
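The adjugate-based solution x = A⁻¹b can be sketched with exact rational arithmetic; the coefficient matrix [4 1; 3 5] with |A| = 17 and constants [12; 4] match the numbers appearing in the worked example:

```python
from fractions import Fraction as F

def inv2(M):  # 2x2 inverse via the adjugate
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[F(M[1][1], d), F(-M[0][1], d)], [F(-M[1][0], d), F(M[0][0], d)]]

A = [[4, 1], [3, 5]]
b = [12, 4]
Ainv = inv2(A)
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
print(x)  # [Fraction(56, 17), Fraction(-20, 17)]
```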
b. When |A| = 0:
In this case the matrix A is singular, and therefore A⁻¹ does not exist. You then proceed as follows.
Find the product (adj A)(3×3) β(3×1).
There are two possibilities for the above product.
i) If the product (adj A)β ≠ 0, then the given system of equations is inconsistent and has no solution.
Example
Given x + y = 4
3x + 3y = 15, find the values of x and y using matrix inversion.
Solution
The AX = B form of the equation is
A = [1 1; 3 3], X = [x; y], B = [4; 15]
|A| = (1×3) − (3×1) = 3 − 3 = 0
Thus matrix A is singular, and A⁻¹ does not exist.
Next, we check whether the given system is consistent or not:
(adj A)B = [3 −1; −3 1][4; 15] = [(3×4) − (1×15); (−3×4) + (1×15)] = [−3; 3] ≠ 0
Since (adj A)B ≠ 0, the given system of equations is inconsistent and has no solution.
ii) If (adj A)β = 0, then the given system of equations is consistent and has infinitely many solutions.
Examples
1. Solve the following system of equations using the matrix inversion method.
4x + 8y = 6
2x + 4y = 3
Solution
In AX = B form, A = [4 8; 2 4], B = [6; 3]
|A| = (4×4) − (2×8) = 16 − 16 = 0
(adj A)B = [4 −8; −2 4][6; 3] = [(4×6) − (8×3); (−2×6) + (4×3)] = [0; 0]
Thus (adj A)B = 0, and the system is consistent and will have an infinite number of solutions.
A homogeneous system of linear equations with more unknowns than equations has infinitely many solutions.
A non-homogeneous system with more unknowns than equations need not be consistent; however, if the system is consistent, it will have infinitely many solutions.
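The (adj A)·B consistency test used in the two examples above can be replayed in Python:

```python
def adj2(M):  # adjugate of a 2x2 matrix
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

def apply2(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

# Both coefficient matrices are singular; (adj A)b decides consistency.
print(apply2(adj2([[1, 1], [3, 3]]), [4, 15]))  # [-3, 3]: inconsistent, no solution
print(apply2(adj2([[4, 8], [2, 4]]), [6, 3]))   # [0, 0]: infinitely many solutions
```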
2.3.2. Cramer's Rule
Therefore, X1 = |A1|/|A|, where |A1| is the determinant of the new matrix formed by replacing the column of coefficients of X1 with the constant column vector. Similarly,
X2 = |A2|/|A|, X3 = |A3|/|A|, X4 = |A4|/|A|, …, Xn = |An|/|A|
Consider the system of linear equations in three variables
a11X1+a12X2+a13X3 = b1
a21X1+a22X2+a23X3 = b2
a31X1+a32X2+a33X3 = b3
The determinant of the coefficient matrix is given by
|A| = |a11 a12 a13; a21 a22 a23; a31 a32 a33|, and let |A| ≠ 0.
According to Cramer's rule, the solutions for the variables are given as
X1 = |A1|/|A| = |b1 a12 a13; b2 a22 a23; b3 a32 a33| / |A|   (the first column of A is replaced by b1, b2, b3)
X2 = |A2|/|A| = |a11 b1 a13; a21 b2 a23; a31 b3 a33| / |A|   (the second column of A is replaced by b1, b2, b3)
X3 = |A3|/|A| = |a11 a12 b1; a21 a22 b2; a31 a32 b3| / |A|   (the third column of A is replaced by b1, b2, b3)
Note: Cramer's rule is applicable only to non-singular coefficient matrices; i.e., non-singularity is a necessary condition for this method, as it was for the existence of A⁻¹.
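Cramer's rule for the 3×3 case can be sketched as follows; the test system is the one solved later by Gauss-Jordan elimination (x − 2y + 3z = 1, 3x − y + 4z = 3, 2x + y − 2z = −1), so the answer can be cross-checked:

```python
from fractions import Fraction as F

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(A, b):
    d = det3(A)  # must be non-zero for the rule to apply
    xs = []
    for j in range(3):
        # replace column j with the constant vector b
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        xs.append(F(det3(Aj), d))
    return xs

A = [[1, -2, 3], [3, -1, 4], [2, 1, -2]]
b = [1, 3, -1]
print(cramer3(A, b))  # [Fraction(0, 1), Fraction(1, 1), Fraction(1, 1)]
```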
The Gauss-Jordan Elimination Method (GJEM)
This method mainly depends on the following three operations applied to the rows (columns) of a matrix.
a. Interchange of any two rows (columns). If the ith row (column) is interchanged with the jth row (column), we denote it as Ri ↔ Rj (Ci ↔ Cj).
b. Multiplying the elements of a row (column) by a non-zero scalar. If the elements of a row (column) are multiplied by a non-zero scalar K, we write Ri → KRi (Ci → KCi).
c. Adding to the elements of a row (column) a constant times the corresponding elements of another row (column). If K times the elements of the jth row (column) are added to the corresponding elements of the ith row (column), we write Ri → Ri + KRj (Ci → Ci + KCj).
The GJEM is a general method, applicable when the given system of equations is
i. homogeneous or non-homogeneous,
ii. when the number of equations is not equal to the number of variables, and
iii. when the coefficient matrix is singular.
The rule of thumb
The technique of solving systems of equations using the Gauss-Jordan elimination method involves first obtaining a 1 in the a11 position and then using this element to obtain 0s in the rest of that column. Then a 1 is obtained in the a22 position and used to change the other elements of its column into 0. The process continues by changing a33 into 1 and the remaining elements of its column into 0. Thus, the rule is to change each main diagonal element aii into 1 and every other element in its column into 0. It is permissible to interchange rows first so that the element in the a11 position is non-zero.
Basic steps
a. Write the system in the form AX = B.
b. Augment the coefficient matrix with the constant column vector, i.e. form A/B.
c. Then apply elementary row/column operations until the coefficient matrix is transformed into an identity matrix and the constant column vector is transformed into the solution vector, i.e. A⁻¹[A/B] → [I | A⁻¹B] = [I | X].
Example
Solve the following system of equations using the Gauss-Jordan elimination method.
2x + 12y = 40
8x + 4y = 28
Solution: the AX = B form is
[2 12; 8 4][x; y] = [40; 28]
A/B = [2 12 | 40; 8 4 | 28]
R1 → (1/2)R1: [1 6 | 20; 8 4 | 28]
R2 → R2 − 8R1: [1 6 | 20; 0 −44 | −132]
R2 → (−1/44)R2: [1 6 | 20; 0 1 | 3]
R1 → R1 − 6R2: [1 0 | 2; 0 1 | 3]
Thus x = 2 and y = 3.
Example 2: Solve the following system of equations using the Gauss-Jordan elimination method.
x − 2y + 3z = 1
3x − y + 4z = 3
2x + y − 2z = −1
The AX = B form of the equation is
[1 −2 3; 3 −1 4; 2 1 −2][x; y; z] = [1; 3; −1], so A/B = [1 −2 3 | 1; 3 −1 4 | 3; 2 1 −2 | −1]
Since a11 is already 1, we proceed as follows:
R2 → R2 − 3R1, R3 → R3 − 2R1: [1 −2 3 | 1; 0 5 −5 | 0; 0 5 −8 | −3]
R2 → (1/5)R2: [1 −2 3 | 1; 0 1 −1 | 0; 0 5 −8 | −3]
R1 → R1 + 2R2, R3 → R3 − 5R2: [1 0 1 | 1; 0 1 −1 | 0; 0 0 −3 | −3]
R3 → (−1/3)R3: [1 0 1 | 1; 0 1 −1 | 0; 0 0 1 | 1]
R1 → R1 − R3, R2 → R2 + R3: [1 0 0 | 0; 0 1 0 | 1; 0 0 1 | 1]
Thus x = 0, y = 1 and z = 1.
Example 3
3x1 + 12x2 = 102
4x1 + 5x2 = 48
Solve this system using the Gauss-Jordan elimination method. First express the equations in an augmented matrix.
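Since the worked solution fell on pages that did not survive, here is a sketch of the elimination carried out in code; it confirms x1 = 2, x2 = 8:

```python
from fractions import Fraction as F

def gauss_jordan(A, b):
    n = len(A)
    M = [[F(x) for x in row] + [F(b[i])] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]   # scale pivot row to get a leading 1
        for r in range(n):
            if r != c and M[r][c] != 0:
                fac = M[r][c]
                M[r] = [x - fac * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

print(gauss_jordan([[3, 12], [4, 5]], [102, 48]))  # [Fraction(2, 1), Fraction(8, 1)]
```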
Exercises
Determinant
In general terms, if there are n producing sectors, the coefficient matrix of the model is of order n×n.
where xj represents the total output of the jth industry
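Since the numerical pages of this chapter did not survive, here is a sketch of the open model's solution x = (I − A)⁻¹d for two sectors; the technology matrix and final-demand figures below are invented for illustration and are not taken from the module:

```python
from fractions import Fraction as F

# Hypothetical two-sector open Leontief model.
A = [[F(2, 10), F(3, 10)],   # a_ij: input from sector i per unit of sector j's output
     [F(4, 10), F(1, 10)]]
d = [F(100), F(50)]          # final demand for each sector's output

IA = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]   # I - A
det = IA[0][0]*IA[1][1] - IA[0][1]*IA[1][0]
inv = [[IA[1][1]/det, -IA[0][1]/det], [-IA[1][0]/det, IA[0][0]/det]]
x = [inv[0][0]*d[0] + inv[0][1]*d[1], inv[1][0]*d[0] + inv[1][1]*d[1]]
print(x)  # [Fraction(175, 1), Fraction(400, 3)]: gross outputs needed to meet demand
```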
2. Linear Programming: Using Graphs
Linear Programming (LP) is a mathematical procedure for determining optimal allocation
of scarce resources.
It deals with a class of programming problems in which the objective function to be
optimized is linear and all relations among the variables corresponding to resources are
linear.
Any LP problem consists of an objective function and a set of constraints.
The optimal solution (denoted xi*) gives the values of the decision variables that
optimize the objective function, that is, the best way to achieve the desired objective
while satisfying all the restrictions (constraints).
Any combination of values for the activities that satisfies all the constraints (including non-negativity) constitutes a feasible solution.
An iso-contribution line shows all activity solutions that yield the same value of the
objective function.
The optimal solution is found by moving the iso-contribution line outward (or inward) as far as
possible, until only one point on the line touches the feasible region.
FORMULATING LINEAR PROGRAMMING PROBLEMS
(Reading without understanding is barking; practice makes perfect.)
When we formulate an LP problem, we generally follow these steps:
a. Identify the objective function for the problem (it may be to maximize profits, to
minimize costs, or some other goal).
b. Identify the activities (decision variables or, simply, variables) for the problem.
c. Identify the objective function coefficients for each activity.
d. Set up the appropriate structural constraints in the constraint set.
Example
Solution
We first sketch the line 2x + y = 4. When x = 0 we get y = 4; when y = 0 we get 2x = 4, and so x = 4/2 = 2.
The line passes through (0, 4) and (2, 0).
For a test point let us take (3, 2), which lies above the line. Substituting x = 3 and y = 2 into the
expression 2x + y gives 2(3) + 2 = 8.
This is not less than 4, so the test point does not satisfy the inequality. It follows that the region
of interest lies below the line. In this example the symbol < is used rather than ≤. Hence the
points on the line itself are not included in the region of interest. We have chosen to indicate this
by using a broken line for the boundary.
In this problem the easiest inequalities to handle are the last two. These merely indicate that x
and y are non-negative, and so we need only consider points in the top right-hand quadrant of the
plane, as shown in the figure.
The line passes through (0, 3) and (−3, 0). Unfortunately, the second point does not lie on the
diagram as we have drawn it. At this stage we can either redraw the x axis to include −3 or we
can try finding another point on the line which does fit on the graph. For example, putting x = 5
gives −5 + y = 3 so y = 3 + 5 = 8. Hence the line passes through (5, 8), which can now be plotted
along with (0, 3) to sketch the line. At the test point (0, 0) the inequality reads −0 + 0 ≤ 3 which
is obviously true. We are therefore interested in the region below the line, since this contains the
origin.
As usual we indicate this by shading the region on the other side. Points (x, y) which satisfy all
four inequalities must lie in the unshaded 'hole' in the middle. Incidentally, this explains why
we did not adopt the convention of shading the region of interest: had we done so, our task
would have been to identify the most heavily shaded part of the diagram, which is not so easy.
Exercise
Sketch the feasible region
x + 2y ≤ 10
−3x + y ≤ 10
x≥0
y≥0
This method may be summarized:
Step 1
Sketch the feasible region.
Step 2
Identify the corners of the feasible region and find their coordinates.
Step 3
Evaluate the objective function at the corners and choose the one which has the maximum or
minimum value.
Corner      Objective function (−2x + y)
(0, 0)      −2(0) + 0 = 0
(0, 3)      −2(0) + 3 = 3
(2, 5)      −2(2) + 5 = 1
(12, 0)     −2(12) + 0 = −24
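Step 3 amounts to evaluating the objective at each corner and picking the best, which is easy to script. The corner coordinates and the objective −2x + y are taken from the example above:

```python
corners = [(0, 0), (0, 3), (2, 5), (12, 0)]   # corner points of the feasible region

def objective(x, y):
    return -2 * x + y

values = {p: objective(*p) for p in corners}
best_max = max(values, key=values.get)   # corner maximizing the objective
best_min = min(values, key=values.get)   # corner minimizing the objective
```

For a maximization the answer is (0, 3) with value 3; for a minimization, (12, 0) with value −24.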
Thus
From A: y2 = 5 − (1/3)y1
From B: y2 = 8 − (2/3)y1
From C: y2 = 12 − 2y1
The graph of the original "greater than or equal to" inequality will include all the points on the
line and to the right of it. The shaded area is the feasible region, containing all the points that
satisfy all three constraints plus the non-negativity constraints.
3. To find the optimal solution within the feasible region, graph the objective function as a series
of (dashed) iso-cost lines: y2 = −(3/5)y1 + c/50.
Alternatively, take all the corner ordered pairs, evaluate each at the objective function, and take
the one giving the lowest value as the solution.
Note:
An optimal solution to a LP problem will always occur at an extreme point of the feasible region.
If the slope of the iso-contribution line is not the same as the slope of any of the constraints,
then the optimal solution will be unique.
Special Cases
(1) Unbounded solution: An unbounded solution occurs whenever the feasible region is not
constrained from above, which results in an infinite feasible region. For example, the following
maximization problem is “unbounded from above,”
Max: Z = 1x + 9y
s.t.: x, y ≥ 0
It should be clear that no finite optimal solution exists for this problem. The objective function
value will consistently become larger with increases in x and/or y. Since both x and y are not
bounded from above, Z will approach infinity as x and/or y approaches infinity.
Graphically, an unbounded solution for a maximization problem can be detected whenever the
feasible region extends without limits upwards from the x and/or y axes assuming that the
objective function coefficients are positive.
An unbounded solution is usually the result of leaving out one or more constraints. In any case,
the problem needs to be reformulated by adding constraints that bound the feasible region in
order to get a finite solution.
(2) No feasible solution: The case of no feasible solution occurs whenever the feasible region is
an empty set (a set containing no points), which is due to conflicting constraint specification.
Obviously if the feasible region is empty, a feasible solution cannot be obtained. This case
usually arises due to an error in specification. It may also arise when the decision maker is
attempting to satisfy inconsistent constraints.
E.g. Max: Z = 1.5x + 10y
s.t.: 1x + 2y ≥ 100
1x + 2y ≤ 50
x, y ≥ 0
(3) Multiple optimal solutions: Multiple optimal solutions or “alternative optimal solutions” as
they are sometimes called, mean that there is more than one solution that is optimal, that is, “the
optimal solution is not unique.” This occurs whenever the slope of the iso-contribution line is the
same as the slope of one of the line segments connecting two extreme points in the feasible
regions. In this case, there will be an infinite number of optimal solutions; each point along this
line segment is an optimal solution.
As an example, consider the following maximization problem:
Max: Z = x + y
s.t.: x + y ≤ 100
x, y ≥ 0
THE BASIS THEOREM
Given a system of n consistent equations and υ variables, where υ > n, there will be an infinite
number of solutions. Fortunately, however, the number of extreme points is finite. The basis
theorem tells us that for a system of n equations and υ variables, where υ > n, a solution in which
at least υ − n variables equal zero is an extreme point. Thus by setting υ – n variables equal to
zero and solving the n equations for the remaining n variables, an extreme point, or basic
solution, can be found. The number N of basic solutions is given by the formula
N = υ!/[n!(υ − n)!], where υ! reads "υ factorial".
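The count N = υ!/[n!(υ − n)!] is a binomial coefficient, so it is quick to compute. The dimensions below (n = 3 equations in υ = 5 variables, as when three slack variables are added to a two-variable problem) are chosen only for illustration:

```python
from math import factorial

def num_basic_solutions(n_equations, n_variables):
    """N = v!/(n!(v - n)!): the number of ways to set v - n variables to zero."""
    v, n = n_variables, n_equations
    return factorial(v) // (factorial(n) * factorial(v - n))

N = num_basic_solutions(3, 5)   # 5!/(3! 2!) = 10 basic solutions
```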
Problems involving more than two variables are beyond the scope of the two-dimensional
graphic approach. Because equations are needed, the system of linear inequalities must be
converted to a system of linear equations. This is done by incorporating a separate slack or
surplus variable into each inequality in the system. A “less than or equal to” inequality such as
9x1 + 2x2 ≤ 86 can be converted to an equation by adding a slack variable s ≥ 0, such that 9x1 +
2x2 + s = 86. If 9x1 + 2x2 = 86, the slack variable s = 0.
If 9x1 + 2x2 < 86, s is a positive value equal to the difference between 86 and 9x1 + 2x2.
A “greater than or equal to” inequality such as 3y1 + 8y2 ≥ 55 can be converted to an equation by
subtracting a surplus variable s ≥ 0, such that 3y1 + 8y2 − s = 55. If 3y1 + 8y2 = 55, the surplus
variable s = 0.
If 3y1 + 8y2 > 55, s is a positive value equal to the difference between 3y1 + 8y2 and 55.
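Both conversions can be checked numerically. The constraint coefficients are those used in the text; the sample points are chosen here only for illustration:

```python
# Slack: 9x1 + 2x2 <= 86 becomes 9x1 + 2x2 + s = 86, with s >= 0
x1, x2 = 4, 10                      # sample point: 9*4 + 2*10 = 56
s_slack = 86 - (9 * x1 + 2 * x2)    # s = 30, the unused amount

# Surplus: 3y1 + 8y2 >= 55 becomes 3y1 + 8y2 - s = 55, with s >= 0
y1, y2 = 5, 6                       # sample point: 3*5 + 8*6 = 63
s_surplus = (3 * y1 + 8 * y2) - 55  # s = 8, the amount above the requirement
```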
Example
2x1 + x2 ≤ 14
5x1 + 5x2 ≤ 40
x1 + 3x2 ≤ 18
become, with slack variables added,
2x1 + x2 + s1 = 14
5x1 + 5x2 + s2 = 40
x1 + 3x2 + s3 = 18
Since slack variables do not contribute to the objective function value, they are included as
activities in the objective function with zero objective function coefficients. The model with
slack variables is written as the following:
Max: Z = 40x + 45y +0s1 +0s2 +0s3-----------(0)
s.t.:
x + y + 1s1 = 600 ----------------- (1)
x +1.5y + 1s2 = 750---------------- (2)
x + 1s3 = 400 -------------------------(3)
x, y, s1, s2, s3 ≥ 0---------------------(4)
This is called the standard form of the LP model. The difference between the general and
standard forms of an LP model is that the standard form includes slack (and/or surplus)
variables and structural equality constraints, while the general form uses weak inequality
constraints and does not include slack (or surplus) variables. At the optimal solution, the values
of the slack variables are found by solving equations (1) through (3), given the values for x* and
y*: s1* = 0, s2* = 0, and s3* = 100.
The slack variables can now be used to distinguish whether a constraint is binding or not. A
binding constraint is one where its slack variable is zero. A nonbinding constraint is one where
its slack variable is positive.
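Since s1* = s2* = 0, constraints (1) and (2) are binding; solving x + y = 600 and x + 1.5y = 750 gives x* = y* = 300 (values inferred here from the binding constraints, as the original worked solution is not reproduced in this section). The slacks then follow directly from equations (1) through (3):

```python
# x* and y* inferred from the binding constraints x + y = 600 and x + 1.5y = 750
x, y = 300.0, 300.0
s1 = 600 - (x + y)        # slack of constraint (1): 0  -> binding
s2 = 750 - (x + 1.5 * y)  # slack of constraint (2): 0  -> binding
s3 = 400 - x              # slack of constraint (3): 100 -> nonbinding
```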
Another way to write the standard form of the model is using tableau form as shown below.
Equation x y S1 S2 S3 b
0 40 45 0 0 0 -
1 1 1 1 0 0 600
2 1 1.5 0 1 0 750
3 1 0 0 0 1 400
The activities are arranged as columns, and the last column is the resource endowment (b). The
first row contains the objective function coefficients. The rest of the rows correspond to the
constraints of the LP problem. Finally, the non-negativity constraint is not included in the
tableau, but is assumed. Alternatively, all zero coefficients could be left as blanks.
Simplex method
The simplex method is an algebraic method, which systematically finds an optimal solution to
the LP problem using iterative procedures. It is an iterative procedure because the simplex
method uses basic steps that are repeated over and over again until an optimal solution is found
by certain criteria.
A basic feasible solution (BFS) satisfies all constraints, including non-negativity.
A basic infeasible solution violates at least one constraint.
Fact: A BFS always occurs at an extreme point of the feasible region.
A BFS and an extreme point are one and the same. Since we know that an extreme point will be
optimal (if any optimal solution exists), it seems reasonable to focus our attention on extreme
points. The simplex method is based on this observation. It examines a sequence of BFSs, based
on an iterative algorithm, until the optimal BFS is found.
The Simplex Tableau
The simplex method starts out by setting all “productive” activities to zero (i.e., the solution is
the origin), and a simplex tableau is formed to do the first iteration. All non slack and non
surplus activities (i.e., the xi’s) will be referred to as “productive” activities in the discussion that
follows. The first tableau is:
X1 X2 S1 S2 S3
Basis CB 35 50 0 0 0 b bi/aij
S1 0 1 1 1 0 0 1,000
S2 0 2.5 0.75 0 1 0 1,500
S3 0 0 1.5 0 0 1 800
Zj
Net eval (Cj- Zj)
Comments on Columns
1. The basis column includes all the basic variables. In the first iteration, the non basic variables
are the productive activities (x1 = x2 = 0), and the basis therefore consists of the three slack
variables s1, s2, and s3.
2. The CB column contains the objective function coefficients for the basic variables. CB stands
for the contribution of the current basis. Since the basic variables in the first iteration are all slack
variables, s1, s2, and s3 = 0.
3. Columns x1, x2, s1, s2, and s3 are the activities and slack variables to the problem. They include
the basic and non basic variables.
4. The b column contains the right-hand-side (RHS) values (resource endowments) of the
problem.
5. The bi/aij column will be used to determine the pivot row, as will be explained later.
6. Note that the columns associated with the basic variables (s1, s2, and s3 in this case) look like
an identity matrix (1’s on the diagonal and 0’s in the off diagonal).
Each of these columns is known as a unit column or unit vector. It is desirable to always have
all basic variables forming unit vectors for the following reason:
When all basic variables are unit vectors, the solution for each basic variable is given by the
value under the resource endowment b column associated with row i in the simplex tableau.
Comments on Rows
1 The first row under the activities row contains the objective function coefficients for the
basic and non basic variables.
2. The next three rows correspond to the constraints of the problem. It is identical to the LP
problem above, only expressed in tableau form.
3. The last two rows are called the zj and cj − zj rows.
The zj and cj − zj Rows
The zj and cj − zj rows provide a criterion for selecting which non basic variable, if any,
should enter the next solution in order to increase the value of the objective function.
There are two contrasting effects that bringing a non basic variable into the new basis
will have on the value of the objective function.
1. Direct Rate of Increase.
The objective function will increase at a rate of ci per unit of xi forced into the basis,
where xi is a non basic variable and ci is its objective function coefficient.
2. Indirect Rate of Decrease.
The objective function will decrease owing to a downward adjustment in the current basic
variables due to bringing a non basic variable into the solution. The zj row measures this
indirect rate of decrease for each non basic variable. The net effect of the direct rate of
increase and the indirect rate of decrease in the objective function for each non basic
variable is measured by the cj − zj row.
MAXIMIZATION
The simplex algorithm method for maximization is explained below in four easy steps, using the
following concrete example:
Maximize π = 8x1 + 6x2
Subject to 2x1 + 5x2 ≤ 40
3x1 + 3x2 ≤ 30
8x1 + 4x2 ≤ 64
x1, x2 ≥ 0
2x1 + 5x2+s1 = 40
3x1 + 3x2 +s2 = 30
8x1 + 4x2 +s3 = 64
3. Set up an initial simplex tableau which will be the framework for the algorithm. The initial
tableau represents the first basic feasible solution when x1 and x2 equal zero. It is composed of
the coefficient matrix of the constraint equations and the column vector of constants set above
a row of indicators which are the negatives of the coefficients of the decision variables in the
objective function and a zero coefficient for each slack variable. The constant column entry in
the last row is also zero, corresponding to the value of the objective function when x1 and x2
equal zero. The initial simplex tableau is as follows:
4. By setting x1 = x2 = 0, the first basic feasible solution can be read directly from the initial
tableau: s1 = 40, s2 = 30, and s3 = 64. Since x1 and x2 are initially set equal to zero, the objective
function has a value of zero.
II. The Pivot Element and a Change of Basis
To increase the value of the objective function, a new basic solution is examined. To move to
a new basic feasible solution, a new variable must be introduced into the basis and one of the
variables formerly in the basis must be excluded. The process of selecting the variable to be
included and the variable to be excluded is called change of basis.
1. The negative indicator with the largest absolute value determines the variable to enter the
basis. Since −8 in the first (or x1) column is the negative indicator with the largest absolute value,
x1 is brought into the basis. The x1 column becomes the pivot column and is denoted by an arrow.
2. The variable to be eliminated is determined by the smallest displacement ratio. Displacement
ratios are found by dividing the elements of the constant column by the elements of the pivot
column. The row with the smallest displacement ratio, ignoring ratios less than or equal to zero,
becomes the pivot row and determines the variable to leave the basis. Since 64/8 provides the
smallest ratio (64/8 < 30/3 < 40/2), row 3 is the pivot row. Since the unit column vector with 1 in the
third row appears under the s3 column, s3 leaves the basis. The pivot element is 8, the element at
the intersection of the column of the variable entering the basis and the row associated with the
variable leaving the basis (i.e., the element at the intersection of the pivot row and the pivot
column).
III. Pivoting
Pivoting is the process of solving the n equations for the n variables presently in the basis. Since
only one new variable enters the basis at each step of the process and the previous step always
involves an identity matrix (although the columns are often out of normal order), pivoting simply
involves converting the pivot element to 1 and all the other elements in the pivot column to zero,
as in the Gaussian elimination method of finding an inverse matrix, as follows:
1. Multiply the pivot row by the reciprocal of the pivot element. In this case, multiply row 3 of the
initial tableau by 1/8.
2. Having reduced the pivot element to 1, clear the pivot column. Here subtract 2 times row 3 from
row 1, subtract 3 times row 3 from row 2, and add 8 times row 3 to row 4. This gives the second tableau:
The second basic feasible solution can be read directly from the second tableau. Setting equal to
zero all the variables heading columns which are not composed of unit vectors (in this case x2 and
s3), and mentally rearranging the unit column vectors to form an identity matrix, we see that s1 =
24, s2 = 6, and x1 = 8. With x1 = 8, π = 64, as is indicated by the last element of the last row.
IV. Optimization
The objective function is maximized when there are no negative indicators in the last row.
Changing the basis and pivoting continue according to the rules above until this is achieved.
Since −2 in the second column is the only negative indicator, x2 is brought into the basis and
column 2 becomes the pivot column. Dividing the constant column by the pivot column shows
that the smallest ratio is in the second row. Thus 3/2 becomes the new pivot element. Since the
unit column vector with 1 in the second row is under s2, s2 will leave the basis. To pivot,
perform the following steps:
1. Multiply row 2 by 2/3.
2. Then subtract 4 times row 2 from row 1, subtract 1/2 times row 2 from row 3, and add 2 times row 2 to
row 4, deriving the third tableau:
Setting all the variables heading non-unit-vector columns equal to zero (i.e., s2 = s3 = 0), and
mentally rearranging the unit column vectors to form an identity matrix, we see that s1 = 8, x2 =
4, and x1 = 6. Since there are no negative indicators left in the last row, this is the optimal
solution. The last element in the last row indicates that at x1 = 6 and x2 = 4, with s1 = 8, s2 = 0, and s3 =
0, the objective function reaches a maximum of π = 72.
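The whole procedure condenses into a short routine. The sketch below is an illustrative implementation written for this note (not the text's own code), applied to the problem max π = 8x1 + 6x2 subject to 2x1 + 5x2 ≤ 40, 3x1 + 3x2 ≤ 30, 8x1 + 4x2 ≤ 64; it assumes a bounded problem with b ≥ 0:

```python
def simplex_max(c, A, b):
    """Tableau simplex for: max c.x  s.t.  A x <= b, x >= 0 (all b >= 0)."""
    m, n = len(A), len(c)
    # Constraint rows [A | I | b]; indicator row [-c | 0 | 0]
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * m + [0.0])
    basis = [n + i for i in range(m)]          # slack variables start in the basis
    while True:
        # Entering variable: most negative indicator
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break                              # no negative indicators: optimal
        # Leaving variable: smallest positive displacement ratio
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Reduce the pivot element to 1, then clear the pivot column
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and T[i][piv_col] != 0.0:
                f = T[i][piv_col]
                T[i] = [a - f * r for a, r in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]                        # solution and objective value

x, z = simplex_max([8, 6], [[2, 5], [3, 3], [8, 4]], [40, 30, 64])
# x = [6.0, 4.0], z = 72.0, matching the worked example
```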
Example 2
Maximize π = 50x1 + 30x2
Subject to 2x1 + x2 ≤ 14
5x1 + 5x2 ≤ 40
x1 + 3x2 ≤ 18
x1, x2 ≥ 0
1. Construct the initial simplex tableau.
(a) Add slack variables to the inequalities to make them equations.
2x1 + x2 + s1 = 14
5x1 + 5x2 + s2 = 40
x1 + 3x2 + s3 = 18
(b) Express the constraint equations in matrix form.
(c) Form the initial simplex tableau composed of the coefficient matrix of the constraint
equations and the column vector of constants set above a row of indicators which are the
negatives of the coefficients of the objective function and zero coefficients for the slack
variables. The initial tableau is
Setting x1 = x2 = 0, the first basic feasible solution is s1 = 14, s2 = 40, and s3 = 18. At the first
basic feasible solution, π = 0.
2. Change the basis. The negative indicator with the largest absolute value (arrow) determines
the pivot column. The smallest displacement ratio arising from the division of the elements of the
constant column by the elements of the pivot column decides the pivot row. Thus 2 becomes the
pivot element, the element at the intersection of the pivot row and the pivot column.
3. Pivot.
(a) Convert the pivot element to 1 by multiplying row 1 by 1/2.
(b) Clear the pivot column by subtracting 5 times row 1 from row 2, row 1 from row 3, and
adding 50 times row 1 to row 4.
The second tableau is
4. Change the basis and pivot again. Column 2 is the pivot column, row 2 is the pivot row, and
5/2 is the pivot element.
(a) Multiply row 2 by 2/5.
(b) Clear the pivot column by subtracting 1/2 times row 2 from row 1, subtracting 5/2 times row 2 from row 3,
and adding 5 times row 2 to row 4. The third tableau is
With no negative indicators left, this is the final tableau. Setting the variables (s1 and s2) above
non-unit column vectors equal to zero, and rearranging mentally to form the identity matrix, we
see that x1 = 6, x2 = 2, s1 = 0, s2 = 0, s3 = 6, and π = 360. The shadow prices of the inputs are 20,
2, and 0, respectively.
Example 3.
Maximize
π = 56x1 + 24x2 + 18x3
Subject to 4x1 + 2x2 + 3x3 ≤ 240
8x1 + 2x2 + x3 ≤ 120
x1, x2, x3 ≥ 0
Add slack variables and express the constraint equations in matrix form.
4x1 + 2x2 + 3x3 + s1 = 240
8x1 + 2x2 + x3 + s2 = 120
Then change the basis and pivot, as follows. (1) Multiply row 2 by 1/8.
(2) Clear the pivot column by subtracting 4 times row 2 from row 1 and adding 56 times row 2 to
row 3. Set up the second tableau:
Change the basis and pivot again. (1) Multiply row 1 by 2/5.
(2) Clear the pivot column by subtracting 1/8 times row 1 from row 2 and adding 11 times row 1
to row 3. Set up the third tableau:
Since there is still a negative indicator, pivot again. Multiply row 2 by 5.
Subtract 2/5 times row 2 from row 1 and add 28/5 times row 2 to row 3. Set up the final tableau:
THE DUAL
Every minimization problem in linear programming has a corresponding maximization problem,
and every maximization problem has a corresponding minimization problem. The original
problem is called the primal; the corresponding problem is called the dual. The relationship
between the two can most easily be seen in terms of the parameters they share in common. Given
an original primal problem.
Minimize
c = g1y1 + g2y2 + g3y3
Subject to
a11y1 + a12y2 + a13y3 ≥ h1
a21y1 + a22y2 + a23y3 ≥ h2
a31y1 + a32y2 + a33y3 ≥ h3
y1, y2, y3 ≥ 0
The related dual problem is
Maximize
π = h1x1 + h2x2 + h3x3
Subject to
a11x1 + a21x2 + a31x3 ≤ g1
a12x1 + a22x2 + a32x3 ≤ g2
a13x1 + a23x2 + a33x3 ≤ g3
x1, x2, x3 ≥ 0
RULES OF TRANSFORMATION TO OBTAIN THE DUAL
In the formulation of a dual from a primal problem,
1. The direction of optimization is reversed. Minimization becomes maximization in the dual and
vice versa.
2. The inequality signs of the technical constraints are reversed, but the non-negativity
constraints on the decision variables always remain in effect.
3. The rows of the coefficient matrix of the constraints in the primal are transposed to columns
for the coefficient matrix of constraints in the dual.
4. The row vector of coefficients in the objective function in the primal is transposed to a column
vector of constants for the dual constraints.
5. The column vector of constants from the primal constraints is transposed to a row vector of
coefficients for the objective function in the dual.
6. Primal decision variables xi or yi are replaced by the corresponding dual decision variables yi
or xi.
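Rules 1 through 5 can be captured in a few lines. The sketch below assumes the primal is a minimization in the form shown above; the coefficient values in the usage example are hypothetical, chosen only to illustrate the transposition:

```python
def dual_of_min_primal(g, A, h):
    """Primal: min g.y  s.t.  A y >= h, y >= 0.
    Returns the data of the dual: max h.x  s.t.  A^T x <= g, x >= 0."""
    A_T = [list(col) for col in zip(*A)]  # rule 3: rows of A become columns
    return h, A_T, g                      # rules 4 and 5: objective and RHS swap roles

# Hypothetical primal: min 5y1 + 6y2  s.t.  y1 + 2y2 >= 7,  3y1 + 4y2 >= 8
obj, A_t, rhs = dual_of_min_primal([5, 6], [[1, 2], [3, 4]], [7, 8])
# Dual: max 7x1 + 8x2  s.t.  x1 + 3x2 <= 5,  2x1 + 4x2 <= 6
```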