Linear Programming: All Theorems
Show that every basic feasible solution of the following LPP corresponds to a unique extreme point of its set of feasible solutions:
Minimize z = cx
subject to
Ax = b
x ≥ 0,
where A is a matrix of order m × n with ρ(A) = m.
Proof. Let (x_B, 0)^t be a basic feasible solution of the given LPP with basis matrix
B = (b_1, b_2, …, b_m).
Then Bx_B = b, i.e.
b_1 x_B1 + b_2 x_B2 + ⋯ + b_m x_Bm = b.
Suppose, if possible, that (x_B, 0)^t is not an extreme point of the set of feasible solutions. Then it can be written as a convex combination of two distinct feasible solutions x^1 and x^2:
(x_B, 0)^t = λ x^1 + (1 − λ) x^2 , 0 < λ < 1.
Partition x^1 = (u_1, v_1)^t and x^2 = (u_2, v_2)^t in conformity with (x_B, 0)^t. Comparing the last n − m components,
0 = λ v_1 + (1 − λ) v_2 (1)
where u_1, u_2 ≥ 0 ; v_1, v_2 ≥ 0.
Also λ > 0 , 1 − λ > 0.
By (1),
λ v_1 = 0 , (1 − λ) v_2 = 0
⟹ v_1 = 0 , v_2 = 0.
So, x^1 = (u_1, 0)^t , x^2 = (u_2, 0)^t.
Further A x^1 = b , A x^2 = b.
Let (B : R) be the partition of the matrix A corresponding to the partition of the basic feasible solution (x_B, 0)^t.
Then (B : R)(u_1, 0)^t = b and (B : R)(u_2, 0)^t = b,
i.e. B u_1 = b and B u_2 = b
⟹ B u_1 − B u_2 = 0
⟹ B(u_1 − u_2) = 0
⟹ u_1 = u_2 [as B is non-singular]
Thus x^1 = x^2, which is a contradiction.
So, our assumption is wrong.
Hence, the basic feasible solution (x_B, 0)^t corresponds to an extreme point of the set of feasible solutions of the LPP.
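The correspondence between basic feasible solutions and extreme points can be seen numerically on a small system. The sketch below uses a toy instance assumed for illustration (it is not from the notes): it enumerates all basic solutions of a 2 × 4 standard-form system, keeps the feasible ones, and finds that their (x1, x2)-projections are exactly the vertices of the region {x1 ≤ 4, x2 ≤ 6, x ≥ 0}.

```python
from itertools import combinations

# Assumed standard-form system (columns in order x1, x2, s1, s2):
#   x1        + s1      = 4
#        2*x2      + s2 = 12,   all variables >= 0
A = [[1, 0, 1, 0],
     [0, 2, 0, 1]]
b = [4, 12]

def solve2(B, rhs):
    """Solve a 2x2 system B u = rhs by Cramer's rule; None if B is singular."""
    det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
    if det == 0:
        return None
    u1 = (rhs[0]*B[1][1] - B[0][1]*rhs[1]) / det
    u2 = (B[0][0]*rhs[1] - rhs[0]*B[1][0]) / det
    return [u1, u2]

def basic_feasible_solutions(A, b):
    """Pick m columns as a basis, solve B x_B = b, keep solutions with x_B >= 0."""
    m, n = len(A), len(A[0])
    sols = set()
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        xB = solve2(B, b)
        if xB is not None and all(v >= 0 for v in xB):
            x = [0.0]*n
            for j, v in zip(cols, xB):
                x[j] = v
            sols.add(tuple(x))
    return sols

bfs = basic_feasible_solutions(A, b)
# Projections onto (x1, x2): the vertices of {x1 <= 4, x2 <= 6, x >= 0}
vertices = {(x[0], x[1]) for x in bfs}
```

Each basic feasible solution lands on a distinct corner of the feasible polygon, matching the theorem.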
Theorem (Optimality Criterion)
Let x_B = B⁻¹b = (x_B1, x_B2, …, x_Bm)^t be a basic feasible solution to the linear programming problem
Minimize z = cx (1)
subject to
Ax = b
x ≥ 0 , A = (A_1, A_2, …, A_n)_{m×n} , ρ(A) = m
with B = (b_1, b_2, …, b_m) as the basis matrix.
Suppose that z_j − c_j ≤ 0 ∀ A_j ∈ A , A_j ∉ B. Then x_B is an optimal basic feasible solution.
OR
A basic feasible solution with z_j − c_j ≤ 0 ∀ A_j ∈ A , A_j ∉ B is an optimal basic feasible solution.
Proof. Given x_B = B⁻¹b is a basic feasible solution
⟹ B x_B = b
⟹ b_1 x_B1 + b_2 x_B2 + ⋯ + b_m x_Bm = b (2)
Let X = (x_1, x_2, …, x_n)^t be any feasible solution to (1) with z = cX as the value of the objective function. Then AX = b, i.e.
A_1 x_1 + A_2 x_2 + ⋯ + A_n x_n = b (3)
Since b_1, b_2, …, b_m form a basis, each column A_j can be expressed as
A_j = y_1j b_1 + y_2j b_2 + ⋯ + y_mj b_m , j = 1, 2, …, n
i.e. A_j = Σ_{i=1}^m y_ij b_i , j = 1, 2, …, n.
Putting these values in (3), we get
(Σ_{i=1}^m y_i1 b_i) x_1 + (Σ_{i=1}^m y_i2 b_i) x_2 + ⋯ + (Σ_{i=1}^m y_in b_i) x_n = b
i.e. Σ_{i=1}^m (Σ_{j=1}^n y_ij x_j) b_i = b.
Comparing with (2), by the uniqueness of representation in terms of a basis,
x_Bi = Σ_{j=1}^n y_ij x_j , i = 1, 2, …, m. (4)
Given z_j − c_j ≤ 0 ∀ j : A_j ∈ A , A_j ∉ B.
For basic columns A_j = b_i,
z_j − c_j = c_B Y_j − c_j
= c_B (B⁻¹A_j) − c_j
= c_B (B⁻¹b_i) − c_Bi ( as A_j = b_i and c_j = c_Bi )
= c_B e_i − c_Bi
= c_Bi − c_Bi
= 0.
Thus z_j − c_j = 0 for basic columns.
Hence z_j − c_j ≤ 0 ∀ A_j ∈ A,
i.e. z_j − c_j ≤ 0 , j = 1, 2, …, n
⟹ z_j ≤ c_j , j = 1, 2, …, n
⟹ z_j x_j ≤ c_j x_j , j = 1, 2, …, n [ as x_j ≥ 0 ]
Adding over j = 1, 2, …, n, we get
z_1 x_1 + z_2 x_2 + ⋯ + z_n x_n ≤ c_1 x_1 + c_2 x_2 + ⋯ + c_n x_n
⟹ (Σ_{i=1}^m c_Bi y_i1) x_1 + (Σ_{i=1}^m c_Bi y_i2) x_2 + ⋯ + (Σ_{i=1}^m c_Bi y_in) x_n ≤ z [ as z_j = c_B Y_j = Σ_{i=1}^m c_Bi y_ij ]
⟹ Σ_{i=1}^m c_Bi (Σ_{j=1}^n y_ij x_j) ≤ z
⟹ Σ_{i=1}^m c_Bi x_Bi ≤ z [ by (4) ]
i.e. z_B ≤ z.
Since X was an arbitrary feasible solution to the given problem (1),
z_B ≤ z ∀ feasible solutions to LPP (1).
Hence, z_B is the optimal (minimum) value of the objective function and so x_B is an optimal basic feasible solution to (1).
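The criterion is easy to verify mechanically: compute z_j − c_j = c_B (B⁻¹A_j) − c_j for every column and check the sign. A minimal sketch on an assumed toy problem (not from the notes):

```python
# Assumed toy minimisation problem:
#   minimise z = -3*x1 - 5*x2   subject to   x1 + s1 = 4,  2*x2 + s2 = 12
c = [-3, -5, 0, 0]
A = [[1, 0, 1, 0],
     [0, 2, 0, 1]]
b = [4, 12]

basis = [0, 1]                     # columns of the claimed optimal basis
Binv = [[1.0, 0.0], [0.0, 0.5]]    # inverse of B = [[1,0],[0,2]]
cB = [c[j] for j in basis]

xB = [sum(Binv[i][k]*b[k] for k in range(2)) for i in range(2)]   # x_B = B^{-1} b
zB = sum(cB[i]*xB[i] for i in range(2))                           # z_B = c_B x_B

def z_minus_c(j):
    """z_j - c_j = c_B (B^{-1} A_j) - c_j for column j."""
    Yj = [sum(Binv[i][k]*A[k][j] for k in range(2)) for i in range(2)]
    return sum(cB[i]*Yj[i] for i in range(2)) - c[j]

reduced = [z_minus_c(j) for j in range(4)]
# Basic columns give 0, non-basic columns give <= 0, so x_B is optimal
```

As the proof predicts, the basic columns contribute z_j − c_j = 0 and the non-basic columns are non-positive, certifying optimality of x_B.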
Theorem (Improvement Theorem)
Let x_B = B⁻¹b = (x_B1, x_B2, …, x_Bm)^t be a basic feasible solution to the linear programming problem
Minimize z = cx (1)
subject to
Ax = b
x ≥ 0 , A = (A_1, A_2, …, A_n)_{m×n} , ρ(A) = m
with B = (b_1, b_2, …, b_m) as the basis matrix and z_B = c_B x_B as the value of the objective function.
Suppose that there is a non-basic column A_k ∈ A , A_k ∉ B for which z_k − c_k > 0 and at least one y_ik > 0.
Then a new basic feasible solution x̂_B can be obtained with improved value of the objective function ẑ_B (ẑ_B ≤ z_B), and if x_B is nondegenerate, then ẑ_B < z_B.
Proof. Since x_B is a basic feasible solution with basis matrix B,
b_1 x_B1 + b_2 x_B2 + ⋯ + b_m x_Bm = b (2)
Since b_1, b_2, …, b_m form a basis, the non-basic column A_k can be expressed as
A_k = y_1k b_1 + y_2k b_2 + ⋯ + y_mk b_m (3)
Let y_rk > 0 (r to be chosen below). Solving (3) for b_r,
b_r = (1/y_rk) A_k − Σ_{i=1, i≠r}^m (y_ik/y_rk) b_i (4)
Using (4) in (2), we have
Σ_{i≠r} b_i x_Bi + [ (1/y_rk) A_k − Σ_{i≠r} (y_ik/y_rk) b_i ] x_Br = b
⟹ Σ_{i≠r} b_i ( x_Bi − (y_ik/y_rk) x_Br ) + A_k ( x_Br / y_rk ) = b
i.e. b_1 x̂_B1 + b_2 x̂_B2 + ⋯ + b_{r−1} x̂_B,r−1 + A_k x̂_Br + b_{r+1} x̂_B,r+1 + ⋯ + b_m x̂_Bm = b
where x̂_Bi = x_Bi − (y_ik/y_rk) x_Br , i = 1, 2, …, m ; i ≠ r
and x̂_Br = x_Br / y_rk.
Thus, x̂_B = (x̂_B1, …, x̂_B,r−1, x̂_Br, x̂_B,r+1, …, x̂_Bm)^t is a solution to the given LPP (1).
Further, since y_rk ≠ 0, the columns b_1, b_2, …, b_{r−1}, A_k, b_{r+1}, …, b_m form a basis and hence x̂_B is a basic solution.
Now the basic solution x̂_B will be feasible if x̂_Bi ≥ 0 ∀ i,
i.e. if x_Bi − (y_ik/y_rk) x_Br ≥ 0 , i = 1, 2, …, m ; i ≠ r
and x_Br / y_rk ≥ 0,
i.e. if x_Br / y_rk ≤ x_Bi / y_ik whenever y_ik > 0.
Choose r so that
x_Br / y_rk = min_i { x_Bi / y_ik : y_ik > 0 } ≥ 0 (A)
(For i with y_ik ≤ 0, x̂_Bi = x_Bi − (y_ik/y_rk) x_Br ≥ x_Bi ≥ 0 automatically.)
Choosing r as in (A), we get x̂_Bi ≥ 0 ∀ i, so x̂_B is a basic feasible solution.
The new value of the objective function is
ẑ_B = ĉ_B1 x̂_B1 + ⋯ + ĉ_Br x̂_Br + ⋯ + ĉ_Bm x̂_Bm
= Σ_{i=1, i≠r}^m c_Bi ( x_Bi − (y_ik/y_rk) x_Br ) + c_k ( x_Br / y_rk ) [ as ĉ_Bi = c_Bi for i ≠ r and ĉ_Br = c_k ]
= Σ_{i=1, i≠r}^m c_Bi x_Bi − (x_Br/y_rk) Σ_{i=1, i≠r}^m c_Bi y_ik + c_k (x_Br/y_rk)
= ( z_B − c_Br x_Br ) − (x_Br/y_rk) Σ_{i=1, i≠r}^m c_Bi y_ik + c_k (x_Br/y_rk)
= z_B − c_Br (x_Br/y_rk) y_rk − (x_Br/y_rk) ( Σ_{i=1, i≠r}^m c_Bi y_ik − c_k )
= z_B − (x_Br/y_rk) ( Σ_{i=1}^m c_Bi y_ik − c_k )
= z_B − (x_Br/y_rk) ( z_k − c_k )
where z_k = Σ_{i=1}^m c_Bi y_ik.
Now z_k − c_k > 0 and, by (A), x_Br / y_rk ≥ 0, so
ẑ_B ≤ z_B.
Further, if x_B is nondegenerate, then x_Br > 0, so x_Br / y_rk > 0, and thus
ẑ_B < z_B.
Thus, a new basic feasible solution x̂_B has been obtained with improved value of the objective function ẑ_B (ẑ_B ≤ z_B), and if x_B is nondegenerate, then ẑ_B < z_B.
Hence proved.
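The improvement formula ẑ_B = z_B − (x_Br/y_rk)(z_k − c_k) can be checked by carrying out one pivot by hand. The sketch below (toy data assumed for illustration, not from the notes) starts from the all-slack basis, applies the minimum-ratio rule (A), and confirms that the formula agrees with evaluating cx at the new basic solution.

```python
# One improving pivot for the assumed toy problem:
#   minimise z = -3*x1 - 5*x2   s.t.   x1 + s1 = 4,  2*x2 + s2 = 12
c = [-3, -5, 0, 0]
A = [[1, 0, 1, 0],
     [0, 2, 0, 1]]
b = [4, 12]

xB = b[:]                                      # starting basis {s1, s2}: B = I
cB = [0, 0]
zB = sum(cB[i]*xB[i] for i in range(2))        # z_B = 0

k = 1                                          # entering column A_k (variable x2)
Yk = [A[i][k] for i in range(2)]               # Y_k = B^{-1} A_k = A_k since B = I
zk_ck = sum(cB[i]*Yk[i] for i in range(2)) - c[k]   # z_k - c_k = 5 > 0

# Minimum-ratio rule (A): leaving row r minimises x_Bi / y_ik over y_ik > 0
ratios = [(xB[i]/Yk[i], i) for i in range(2) if Yk[i] > 0]
theta, r = min(ratios)

# Improvement formula from the proof
z_hat = zB - theta * zk_ck

# Cross-check by evaluating c x at the new basic solution directly
x_new = [0.0]*4
x_new[k] = theta                               # entering variable x2 = theta
x_new[2] = xB[0] - theta*Yk[0]                 # s1 stays basic, updated value
z_direct = sum(c[j]*x_new[j] for j in range(4))
```

Both routes give the same improved value, illustrating that the pivot strictly decreases z when z_k − c_k > 0 and the solution is nondegenerate.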
Theorem (Unbounded Solution)
Let x_B = B⁻¹b = (x_B1, x_B2, …, x_Bm)^t be a basic feasible solution to the linear programming problem
Minimize z = cx (1)
subject to
Ax = b
x ≥ 0 , A = (A_1, A_2, …, A_n)_{m×n} , ρ(A) = m
with B = (b_1, b_2, …, b_m) as the basis matrix and z_B = c_B x_B as the value of the objective function.
Suppose that there is a non-basic column A_k ∈ A , A_k ∉ B (or a non-basic variable x_k) for which z_k − c_k > 0 and all y_ik ≤ 0.
Then, the given problem has an unbounded solution.
Proof. Since x_B is a basic feasible solution with basis matrix B,
b_1 x_B1 + b_2 x_B2 + ⋯ + b_m x_Bm = b (2)
and, expressing A_k in terms of the basis,
A_k = y_1k b_1 + y_2k b_2 + ⋯ + y_mk b_m (3)
Given that y_ik ≤ 0 ∀ i = 1, 2, …, m.
Let θ > 0 be any positive real number.
Multiplying (3) by θ and subtracting it from (2), we get
b_1 (x_B1 − θy_1k) + b_2 (x_B2 − θy_2k) + ⋯ + b_m (x_Bm − θy_mk) + θA_k = b
Thus, x̂ = (x_B1 − θy_1k , x_B2 − θy_2k , … , x_Bm − θy_mk , θ , 0 , … , 0)^t is a solution to LPP (1), the component θ appearing in the position of x_k. Since y_ik ≤ 0 and θ > 0, every component of x̂ is non-negative, so x̂ is a feasible solution for every θ > 0.
The corresponding value of the objective function is
ẑ = Σ_{i=1}^m c_Bi (x_Bi − θy_ik) + θc_k
= Σ_{i=1}^m c_Bi x_Bi − θ Σ_{i=1}^m c_Bi y_ik + θc_k
= Σ_{i=1}^m c_Bi x_Bi − θ ( Σ_{i=1}^m c_Bi y_ik − c_k )
= z_B − θ(z_k − c_k)
where z_k = Σ_{i=1}^m c_Bi y_ik.
Since z_k − c_k > 0, ẑ = z_B − θ(z_k − c_k) → −∞ as θ → ∞. Hence the value of the objective function can be decreased without bound, and the given problem has an unbounded solution.
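The ray x̂(θ) constructed in the proof can be traced numerically. The sketch below uses a one-constraint instance assumed for illustration (not from the notes): minimise z = −x1 subject to x1 − x2 = 1, x ≥ 0. With basis {x1} we get y_12 = −1 ≤ 0 and z_2 − c_2 = 1 > 0, so the theorem applies.

```python
# Assumed toy problem: minimise z = -x1  s.t.  x1 - x2 = 1,  x1, x2 >= 0.
# Basis {x1}: B = [1], x_B = 1, c_B = (-1); non-basic column A_2 = (-1) gives
#   y_12 = B^{-1} A_2 = -1 <= 0   and   z_2 - c_2 = c_B*y_12 - c_2 = 1 > 0.
def x_of_theta(theta):
    """The ray from the proof: x_B - theta*y_12 in position 1, theta in position 2."""
    return (1 + theta, theta)

def z_of_theta(theta):
    """Objective value z_B - theta*(z_k - c_k) = -1 - theta."""
    x1, x2 = x_of_theta(theta)
    return -x1

thetas = [0, 1, 10, 1000]
# Feasibility holds for every theta >= 0 ...
checks = []
for theta in thetas:
    x1, x2 = x_of_theta(theta)
    checks.append(x1 - x2 == 1 and x1 >= 0 and x2 >= 0)
# ... while the objective decreases without bound
values = [z_of_theta(t) for t in thetas]
```

Every point on the ray stays feasible while z falls without limit, which is exactly the unboundedness asserted by the theorem.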
Theorem (Dual of the Dual). The dual of the dual of a linear programming problem is the problem itself.
Proof. Let the given problem (Primal) be
Minimize z = cx (P)
subject to
Ax ≥ b
x ≥ 0.
Its Dual is
Maximize w = b^t y (D)
subject to
A^t y ≤ c^t
y ≥ 0
where y = (y_1, y_2, …, y_m)^t.
Converting the above problem into minimization and multiplying the constraints by −1, the Dual (D) becomes
Minimize w* = −b^t y
subject to
−A^t y ≥ −c^t
y ≥ 0
where Min w* = −Max w.
The above problem can be rewritten as
Minimize w* = (−b^t) y
subject to
(−A^t) y ≥ (−c^t)
y ≥ 0.
The above problem is a MIN problem, so its Dual is
Maximize t = (−c^t)^t γ
subject to
(−A^t)^t γ ≤ (−b^t)^t
γ ≥ 0
where γ = (γ_1, γ_2, …, γ_n)^t.
This problem can be rewritten in the form
Maximize t = −(c^t)^t γ
subject to
−(A^t)^t γ ≤ −(b^t)^t
γ ≥ 0
or Maximize t = −cγ
subject to
−Aγ ≤ −b
γ ≥ 0.
Converting the problem into minimization and multiplying the constraints by −1, we get
Minimize t* = cγ
subject to
Aγ ≥ b
γ ≥ 0
where Min t* = −Max t.
This problem is equivalent to the given LPP (P), with γ in place of x.
Hence the Dual of the Dual of a problem is the problem itself.
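Mechanically, the dualization rule above maps the data (c, A, b) of a MIN problem to the data (b, A^t, c) of a MAX problem, and vice versa; applying the map twice transposes the matrix twice and swaps the cost and right-hand-side vectors back. A small sketch with assumed toy data:

```python
def dual_of_min(c, A, b):
    """Dual of: minimise c x  s.t.  A x >= b, x >= 0
       is:      maximise b^t y  s.t.  A^t y <= c^t, y >= 0."""
    At = [list(row) for row in zip(*A)]
    return b, At, c          # data (objective, matrix, rhs) of a MAX problem

def dual_of_max(c, A, b):
    """Dual of: maximise c x  s.t.  A x <= b, x >= 0
       is:      minimise b^t y  s.t.  A^t y >= c^t, y >= 0."""
    At = [list(row) for row in zip(*A)]
    return b, At, c          # data of a MIN problem

# Assumed toy data for the primal MIN problem
c = [2, 3]
A = [[1, 2],
     [3, 4]]
b = [4, 6]

cd, Ad, bd = dual_of_min(c, A, b)        # the dual (a MAX problem)
cdd, Add, bdd = dual_of_max(cd, Ad, bd)  # the dual of the dual

roundtrip = (cdd == c and Add == A and bdd == b)
```

The round trip recovers (c, A, b) exactly, mirroring the symbolic argument in the proof.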
Theorem (Weak Duality Theorem)
The value of the objective function corresponding to any feasible solution of a Max-Problem (Primal) is always less than or equal to the value of the objective function corresponding to any feasible solution of the Dual problem.
That is, let x^0 be a feasible solution to the Primal problem
Maximize z = cx (P)
subject to
Ax ≤ b
x ≥ 0
and y^0 be a feasible solution to its dual problem
Minimize w = b^t y (D)
subject to
A^t y ≥ c^t
y ≥ 0.
Then c x^0 ≤ b^t y^0.
Proof. Since x^0 is a feasible solution of the Max problem (P),
A x^0 ≤ b (1)
x^0 ≥ 0 (2)
and since y^0 is a feasible solution of the Dual (D),
A^t y^0 ≥ c^t (3)
y^0 ≥ 0 (4)
By (1) and (4), we have
(y^0)^t A x^0 ≤ (y^0)^t b (5)
From (2) and (3), we have
(x^0)^t A^t y^0 ≥ (x^0)^t c^t (6)
Now (x^0)^t A^t y^0 is a 1 × 1 matrix, i.e. a real number, so
(x^0)^t A^t y^0 = ( (x^0)^t A^t y^0 )^t = (y^0)^t A x^0 (7)
From (6), (7) and (5), we have
(x^0)^t c^t ≤ (x^0)^t A^t y^0 = (y^0)^t A x^0 ≤ (y^0)^t b
i.e. (x^0)^t c^t ≤ (y^0)^t b
⟹ ( (x^0)^t c^t )^t ≤ ( (y^0)^t b )^t
⟹ c x^0 ≤ b^t y^0.
Hence proved.
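Weak duality is easy to observe numerically: every primal objective value sits below every dual objective value. The sketch below samples a few feasible points of an assumed primal-dual pair (a classic two-variable example, not taken from the notes) and checks the inequality for all pairs.

```python
# Assumed primal-dual pair:
#   (P) maximise 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
#   (D) minimise 4*y1 + 12*y2 + 18*y3  s.t.  y1 + 3*y3 >= 3, 2*y2 + 2*y3 >= 5, y >= 0
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
c = [3, 5]

def primal_feasible(x):
    return all(v >= 0 for v in x) and all(
        sum(A[i][j]*x[j] for j in range(2)) <= b[i] for i in range(3))

def dual_feasible(y):
    return all(v >= 0 for v in y) and all(
        sum(A[i][j]*y[i] for i in range(3)) >= c[j] for j in range(2))

# A few feasible points of each problem
xs = [(0, 0), (1, 1), (4, 3), (2, 6)]
ys = [(3, 2.5, 0), (0, 2.5, 1), (1, 3, 2)]

# c x <= b^t y must hold for every feasible pair
gaps_ok = all(
    sum(c[j]*x[j] for j in range(2)) <= sum(b[i]*y[i] for i in range(3))
    for x in xs for y in ys
    if primal_feasible(x) and dual_feasible(y))
```

Every sampled primal value is bounded above by every sampled dual value, as the theorem requires.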
Result: If x^0 is a feasible solution to (P) and y^0 is a feasible solution of the Dual (D) with c x^0 = b^t y^0, then x^0 is an optimal solution to (P) and y^0 is an optimal solution to (D).
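This Result gives a cheap optimality certificate: exhibit a feasible pair with equal objective values. Continuing the assumed toy pair used above (not from the notes):

```python
# Assumed pair:
#   (P) maximise 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
#   (D) minimise 4*y1 + 12*y2 + 18*y3  s.t.  y1 + 3*y3 >= 3, 2*y2 + 2*y3 >= 5, y >= 0
x0 = (2, 6)            # claimed optimal primal solution
y0 = (0, 1.5, 1)       # claimed optimal dual solution

# Feasibility of both points
primal_ok = (x0[0] <= 4 and 2*x0[1] <= 12 and 3*x0[0] + 2*x0[1] <= 18
             and min(x0) >= 0)
dual_ok = (y0[0] + 3*y0[2] >= 3 and 2*y0[1] + 2*y0[2] >= 5 and min(y0) >= 0)

z0 = 3*x0[0] + 5*x0[1]                # primal objective value
w0 = 4*y0[0] + 12*y0[1] + 18*y0[2]    # dual objective value
# Equal objective values plus feasibility certify optimality of both points
certified = primal_ok and dual_ok and z0 == w0
```

Since both points are feasible and the objective values coincide, the Result certifies that x0 and y0 are optimal, with no further search needed.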
Theorem. If the Primal problem (P) has an unbounded solution, then its Dual problem (D) has no feasible solution.
Proof. Let the Primal problem
Maximize z = cx (P)
subject to
Ax ≤ b
x ≥ 0
have an unbounded solution.
Its Dual is
Minimize w = b^t y (D)
subject to
A^t y ≥ c^t
y ≥ 0.
To prove: the Dual problem (D) has no feasible solution.
Let, if possible, the Dual problem (D) have a feasible solution, say y^0, with b^t y^0 as the value of the objective function.
First argument: Since the primal problem (P) has an unbounded solution, there are infinitely many feasible solutions of (P) corresponding to which the value of the objective function can be made arbitrarily large. Consequently, we can find a feasible solution x* of (P) satisfying
c x* > b^t y^0.
This contradicts the Weak Duality Theorem.
Alternative argument: By the Weak Duality Theorem, the value of the objective function corresponding to any feasible solution of a Max-Problem (Primal) is always less than or equal to the value of the objective function corresponding to any feasible solution of the Dual problem, i.e.
c x^0 ≤ b^t y^0 ∀ feasible solutions x^0 of the Primal problem (P).
⟹ The value of the objective function of the Primal problem (P) is bounded above by b^t y^0.
This contradicts the fact that the Primal problem (P) has an unbounded solution.
Thus our assumption is wrong.
Hence the Dual problem (D) has no feasible solution.
Theorem (Basic (Strong) Duality Theorem)
If a Max-Problem (Primal problem) (P) has an optimal solution, then its Dual problem (D) also has an optimal solution, and the two problems have the same optimal value of the objective function.
Proof. Let the Primal problem
Maximize z = cx (P)
subject to
Ax ≤ b
x ≥ 0
have an optimal solution, say x^0, with z^0 = c x^0 as the optimal value of the objective function.
Its Dual is
Minimize w = b^t y (D)
subject to
A^t y ≥ c^t
y ≥ 0.
Adding slack variables to the constraints of (P), we get
Maximize z = cx (P1)
subject to
Ax + I_m x_s = b
x, x_s ≥ 0
where x_s is the vector of slack variables.
Since (P) has an optimal solution, (P1) also has an optimal solution, and one of the basic feasible solutions of (P1) is an optimal solution of (P1).
Let (x_B, 0)^t = (B⁻¹b, 0)^t be an optimal basic feasible solution of (P1) with z_B = c_B x_B = z^0 as the optimal value of the LPP (P1). Since (P1) is a maximization problem, optimality gives
z_j − c_j ≥ 0 ∀ A_j ∈ (A : I_m).
Now, z_j − c_j ≥ 0 ∀ A_j ∈ (A : I_m)
⟹ z_j − c_j ≥ 0 ∀ A_j ∈ A = (A_1, A_2, …, A_n)
and z_j − c_j ≥ 0 ∀ A_j ∈ I_m = (e_1, e_2, …, e_m),
i.e. c_B B⁻¹ A_j − c_j ≥ 0 , j = 1, 2, …, n
and c_B B⁻¹ e_i − 0 ≥ 0 , i = 1, 2, …, m [ as the cost of a slack variable is 0 ].
Define y^0 = (c_B B⁻¹)^t. The first set of inequalities gives A^t y^0 ≥ c^t and the second gives y^0 ≥ 0, so y^0 is a feasible solution of the Dual (D).
The corresponding value of the dual objective function is
b^t y^0 = (c_B B⁻¹) b
= c_B (B⁻¹ b)
= c_B x_B [ as x_B = B⁻¹b ]
= z_B.
We know that "If x^0 is a feasible solution to (P) and y^0 is a feasible solution of the Dual (D) with c x^0 = b^t y^0, then x^0 is an optimal solution to (P) and y^0 is an optimal solution to (D)".
Since c x^0 = z^0 = z_B = b^t y^0, y^0 is an optimal solution to (D).
Hence, the Dual problem (D) also possesses an optimal solution, with the same optimal value z^0 of the objective function.
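The construction y^0 = (c_B B⁻¹)^t can be reproduced numerically: solving B^t y = c_B^t recovers the optimal dual solution directly from the optimal primal basis. The sketch below uses the assumed toy maximization from earlier (not from the notes), whose optimal basis consists of x1, x2 and the first slack.

```python
def solve(M, rhs):
    """Solve M u = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]     # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f*b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Assumed problem: maximise 3*x1 + 5*x2
#   s.t.  x1 + s1 = 4,  2*x2 + s2 = 12,  3*x1 + 2*x2 + s3 = 18
# Optimal basis columns (x1, x2, s1):
B = [[1, 0, 1],
     [0, 2, 0],
     [3, 2, 0]]
b = [4, 12, 18]
cB = [3, 5, 0]

xB = solve(B, b)                                   # x_B = B^{-1} b
zB = sum(ci*xi for ci, xi in zip(cB, xB))          # primal optimal value

# y^0 = (c_B B^{-1})^t, obtained by solving B^t y = c_B^t
Bt = [list(row) for row in zip(*B)]
y0 = solve(Bt, cB)

w0 = sum(bi*yi for bi, yi in zip(b, y0))           # dual objective b^t y^0
```

As the proof guarantees, b^t y^0 equals c_B x_B, so the recovered y^0 is an optimal dual solution.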
Theorem (Complementary Slackness Property)
Let x^0 = (x_1^0, x_2^0, …, x_n^0)^t be an optimal solution to the Primal problem (P)
Maximize z = c_1 x_1 + c_2 x_2 + ⋯ + c_n x_n
subject to
a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n ≤ b_2
… … … … …
a_m1 x_1 + a_m2 x_2 + ⋯ + a_mn x_n ≤ b_m
x_1, x_2, …, x_n ≥ 0
and y^0 = (y_1^0, y_2^0, …, y_m^0)^t be an optimal solution to its Dual problem (D)
Minimize w = b_1 y_1 + b_2 y_2 + ⋯ + b_m y_m
subject to
a_11 y_1 + a_21 y_2 + ⋯ + a_m1 y_m ≥ c_1
a_12 y_1 + a_22 y_2 + ⋯ + a_m2 y_m ≥ c_2
… … … … …
a_1n y_1 + a_2n y_2 + ⋯ + a_mn y_m ≥ c_n
y_1, y_2, …, y_m ≥ 0.
Then, the following hold:
(i) If a constraint in either problem is satisfied as a strict inequality at the optimum, then the corresponding variable of the other problem vanishes.
(ii) If a variable in either optimal solution is positive, then the corresponding constraint of the other problem is satisfied as an equality.
Note: The converse of the above theorem is NOT true.
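Both complementary slackness conditions can be checked directly on an optimal pair. The sketch below reuses the assumed toy primal-dual pair from the earlier examples (not from the notes), where x^0 = (2, 6) and y^0 = (0, 1.5, 1) are optimal.

```python
# Assumed pair:
#   (P) maximise 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
#   (D) minimise 4*y1 + 12*y2 + 18*y3  s.t.  y1 + 3*y3 >= 3, 2*y2 + 2*y3 >= 5, y >= 0
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
c = [3, 5]
x0 = (2, 6)          # optimal primal solution (assumed known)
y0 = (0, 1.5, 1)     # optimal dual solution (assumed known)

# Slack in each primal constraint and in each dual constraint
primal_slacks = [b[i] - sum(A[i][j]*x0[j] for j in range(2)) for i in range(3)]
dual_slacks = [sum(A[i][j]*y0[i] for i in range(3)) - c[j] for j in range(2)]

# (i) a strict inequality (positive slack) forces the paired variable to vanish
cs_i = (all(y0[i] == 0 for i in range(3) if primal_slacks[i] > 0) and
        all(x0[j] == 0 for j in range(2) if dual_slacks[j] > 0))
# (ii) a positive variable forces the paired constraint to be tight
cs_ii = (all(primal_slacks[i] == 0 for i in range(3) if y0[i] > 0) and
         all(dual_slacks[j] == 0 for j in range(2) if x0[j] > 0))
```

Here the first primal constraint is slack (x1 = 2 < 4), and indeed y1 = 0; conversely y2, y3 > 0 and the corresponding primal constraints are tight, illustrating (i) and (ii).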