
Advanced Operations Research


Prof. G. Srinivasan
Dept of Management Studies
Indian Institute of Technology, Madras

Lecture- 8
Primal Dual Algorithm
We continue the discussion on the primal dual algorithm.
(Refer Slide Time: 00:21)

In the last lecture we were solving this problem: minimize 3X1 plus 4X2, subject to 2X1 plus 3X2 greater than or equal to 8 and 5X1 plus 2X2 greater than or equal to 12. The standard problem is a minimization problem with all greater than or equal to constraints. We first converted the inequalities to equations by adding the surplus variables, or negative slack variables, X3 and X4, to get 2X1 plus 3X2 minus X3 equal to 8 and 5X1 plus 2X2 minus X4 equal to 12; these variables contribute 0 to the objective function. We have already seen that for a minimization problem with all greater than or equal to constraints and strictly positive right hand sides, 0 0 is not basic feasible to the primal, so solving it directly involves artificial variables or the two phase method. We are now going to write the dual of this problem instead.

(Refer Slide Time: 01:29)

When we write the dual of this problem including the surplus variables, we get a dual like this: maximize 8y1 plus 12y2, with two dual variables, subject to 2y1 plus 5y2 less than or equal to 3; 3y1 plus 2y2 less than or equal to 4; minus y1 less than or equal to 0; minus y2 less than or equal to 0; and y1 and y2 unrestricted. When we look at a dual like this we quickly realize that 0 0, that is, y1 equal to 0 and y2 equal to 0, is feasible to the dual. The way the dual is written, if the given problem is a minimization problem with all greater than or equal to constraints and strictly positive cost coefficients, then we will have a dual where 0 0 is feasible. We have identified a feasible solution to the dual. The next thing we do is to apply complementary slackness based on this feasible solution to the dual. At 0 0, the first constraint is satisfied as an inequality, the second is satisfied as an inequality, and the last two are satisfied as equations. With these two satisfied as equations, we write the corresponding primal after applying complementary slackness conditions. The constraint minus y1 less than or equal to 0 corresponds to variable X3 and minus y2 less than or equal to 0 corresponds to variable X4, so in the primal, after we apply complementary slackness, only X3 and X4 can be basic, and it is equivalent to solving minus X3 equal to 8 and minus X4 equal to 12. So, for minus X3 equal to 8 and minus X4 equal to 12, we need to find the solution.
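As a small illustration (the list-based representation below is my own, not from the lecture), we can check which dual constraints are tight at y equal to 0, 0; only the corresponding variables are allowed into the restricted primal.

```python
# Columns a_j and costs c_j of the primal variables X1, X2, X3, X4.
names = ["X1", "X2", "X3", "X4"]
columns = [(2, 5), (3, 2), (-1, 0), (0, -1)]
costs = [3, 4, 0, 0]
y = (0, 0)

# A dual constraint y.a_j <= c_j is tight when y.a_j equals c_j.
tight = [n for n, a, c in zip(names, columns, costs)
         if sum(yi * ai for yi, ai in zip(y, a)) == c]
print(tight)   # ['X3', 'X4'] -> only X3 and X4 enter the restricted primal
```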

(Refer Slide Time: 03:29)

Right now, the solution for minus X3 equal to 8 and minus X4 equal to 12 is easy to see, but we also know that solving equations can be done through linear programming. What we do is rewrite this as follows:
(Refer Slide Time: 03:45)

Minus X3 plus a1 equal to 8, minus X4 plus a2 equal to 12, with a1, a2 greater than or equal to 0, X3, X4 greater than or equal to 0, and we minimize a1 plus a2. If this system has a solution that also satisfies X3, X4 greater than or equal to 0, then minimizing will automatically force a1 and a2 to 0 and give us a solution with Z equal to 0. If the system with X3, X4 greater than or equal to 0 does not have a solution, then at least one of the artificial variables a1, a2 will remain in the basis, giving a nonzero Z value that depends on the values of the a's left in the solution. We solve this problem using the simplex algorithm here.
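If one prefers to check this numerically rather than by hand, the restricted primal can be fed to an LP solver. The sketch below assumes SciPy is available and orders the variables as [X3, X4, a1, a2] (my choice, not the lecturer's); it should report a1 equal to 8, a2 equal to 12 with Z equal to 20, matching the tableau result that follows.

```python
# Phase-1 form of the first restricted primal, solved with SciPy's linprog.
from scipy.optimize import linprog

c = [0, 0, 1, 1]                      # minimize a1 + a2
A_eq = [[-1, 0, 1, 0],                # -x3 + a1 = 8
        [0, -1, 0, 1]]                # -x4 + a2 = 12
b_eq = [8, 12]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)                 # expected: a1 = 8, a2 = 12, Z = 20
```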
(Refer Slide Time: 04:39)

We found out that the solution a1 equal to 8, a2 equal to 12 with Z equal to 20 is optimal.

(Refer Slide Time: 04:54)

This problem is called the restricted primal; restricted because it is restricted by the feasible solution to the dual. We apply complementary slackness and then we get the restricted primal. The basic idea is this: if I have a feasible solution to the dual, I apply complementary slackness, and if the restricted primal has a solution with Z equal to 0, then that solution is feasible to the original primal. Therefore, it will be optimal to the primal and dual based on duality relationships, because if there is a feasible solution to the dual and a feasible solution to the primal and they satisfy complementary slackness, then they are optimal to the primal and dual respectively. So this method starts with a feasible solution to the dual, applies complementary slackness, creates a restricted primal, and if the restricted primal solution is feasible to the original primal, then it is optimal. Right now the solution to the restricted primal, a1 equal to 8, a2 equal to 12, is not feasible to the original primal.

We need to get one more dual feasible solution. In order to get one more dual feasible solution, we go back and find the dual of the restricted primal and see what happens. The dual of the restricted primal will have two variables, in this case v1 and v2, and its objective will be to maximize 8v1 plus 12v2.

(Refer Slide Time: 06:43)

The restricted primal has two constraints, so its dual will have two variables, v1 and v2, and the objective is to maximize 8v1 plus 12v2. Variable X3 appears only in the first constraint, with coefficient minus 1 and an objective coefficient of 0, so it gives the dual constraint minus v1 less than or equal to 0; similarly X4 gives minus v2 less than or equal to 0. The artificial variable a1 appears only in the first constraint, with coefficient plus 1 and objective coefficient 1, so it gives v1 less than or equal to 1, and a2 gives v2 less than or equal to 1. v1 and v2 are unrestricted in sign because the two primal constraints are equations.

A solution to this is given by 1, 1: v1 equal to 1 and v2 equal to 1, with objective function value equal to 20. Please note that this objective function value of 20 is the same as the objective function value of 20 we obtained for the restricted primal. We call this solution v, the solution to the dual of the restricted primal, and we already have the solution y, which was the starting dual solution.

(Refer Slide Time: 08:54)

What we will do now is create one more dual feasible solution and then apply complementary slackness again to get a new restricted primal. Let us define the new dual feasible solution y-dash as y plus theta times v, where y is the existing dual solution 0 0 and v is the solution of the dual of the restricted primal, which is 1 1; we need to find out theta. If we find theta, we get another feasible solution to the dual, built from the original y as well as v. What do we want this y-dash to achieve? When we started with y equal to 0 0, wrote its restricted primal and solved it, the result was not feasible to the original primal. Now we want to define a y-dash, feasible to the dual, whose restricted primal we will solve next, and we want that restricted primal to be different from the current one, because the current one is not giving us the solution.

When will the restricted primal be different? It will be different when the new y-dash satisfies at least one more constraint as an equation, so that the corresponding variable can enter the restricted primal. Theta should be chosen such that at least one of the constraints which is satisfied as an inequality by y is satisfied as an equation by y-dash; a new constraint satisfied as an equation means a new variable becomes available. It may replace an existing one, but the nice thing is that there will be one new candidate which can come in as a basic variable. This means theta should be computed from all those constraints which are currently satisfied as inequalities, so that one of them becomes an equation. The second consideration comes from general principles of linear programming: the current restricted primal solution is infeasible to the original primal, so it is a non-optimal solution.
(Refer Slide Time: 11:53)

A non-optimal solution, in principle, should have a new variable entering, and we relate the entering variable in the primal to the feasibility of the dual: a dual that is infeasible corresponds to a primal that is non-optimal. Right now, for the v that we have, the columns already admissible in the restricted primal satisfy v aj less than or equal to 0. So we need to bring into the restricted primal a variable Xj for which v aj is greater than zero; we will look at those variables which have v aj greater than 0.

(Refer Slide Time: 12:56)

We would like to look at the constraints which are right now satisfied as inequalities and then try to get the best value of theta. The best value of theta is given by theta equal to the minimum of (cj minus y aj) divided by (v-star aj), taken over those j for which v-star aj is greater than 0. There is a little more theory involved in deriving this, but without getting into too much of it, the expression comes out of two things. One, we consider only those j's whose constraints are satisfied as inequalities; those satisfied as equations are left out because we want one of the inequalities to eventually become an equation. Two, we want v-star aj greater than 0, because the columns already in the restricted primal satisfy v-star aj less than or equal to 0; a new variable that enters will have v-star aj greater than 0, which corresponds to a dual that is momentarily infeasible and hence a primal that is non-optimal, and such a variable will enter.

To see where the expression comes from, look at the dual: a dual constraint is typically of the form y aj less than or equal to cj, where the aj are the corresponding columns, the cj are the objective function coefficients and y is the dual vector. We have written y-dash equal to y plus theta v. If y-dash is to be dual feasible, then y-dash aj should be less than or equal to cj. Substituting, (y plus theta v) aj is less than or equal to cj, that is, y aj plus theta v aj is less than or equal to cj. We finally want one of these inequalities to become an equation, so let us change the inequality to an equation and solve for theta: theta equal to (cj minus y aj) divided by (v-star aj). Taking the minimum of these ratios forces one of the constraints to become an equation while the others remain feasible. Keeping that in mind, we write this expression for theta.
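The rule for theta can be written down directly. The helper below is a small sketch of exactly this expression, using exact fractions; the function name and the list-based inputs are my own, not the lecturer's.

```python
# theta = min over j of (c_j - y.a_j) / (v.a_j), taken over slack constraints
# with v.a_j > 0, computed with exact rational arithmetic.
from fractions import Fraction

def compute_theta(y, v, columns, costs):
    """y, v: dual vectors; columns: list of columns a_j; costs: list of c_j."""
    ratios = []
    for a_j, c_j in zip(columns, costs):
        y_a = sum(Fraction(yi) * aij for yi, aij in zip(y, a_j))
        v_a = sum(Fraction(vi) * aij for vi, aij in zip(v, a_j))
        slack = Fraction(c_j) - y_a          # > 0 means satisfied as inequality
        if slack > 0 and v_a > 0:            # only slack constraints with v.a_j > 0
            ratios.append(slack / v_a)
    return min(ratios)
```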
With the present solution, let us try and find out what we do. We first have to find out v-star aj for these two.
(Refer Slide Time: 17:05)

v-star aj is computed from the columns of the constraints; v-star is 1, 1, so for the first column the value is 2 plus 5 equal to 7, and for the second it is 3 plus 2 equal to 5. For both of them v-star aj is positive.

(Refer Slide Time: 17:25)

So theta will be equal to the minimum of (cj minus y aj) by (v-star aj). Here y is 0 0, so all the y aj terms are 0 and the numerator is simply cj. For the first column, c1 is 3, v-star is 1, 1, aj is 2, 5, so v-star aj is 7, giving the ratio 3 by 7. For the second column, cj is 4, v-star is 1, 1, aj is 3, 2, so v-star aj is 5, giving 4 by 5. So theta is the minimum over 3 by 7 and 4 by 5, taken over columns with v-star aj greater than 0, which gives theta equal to 3 by 7. When theta is equal to 3 by 7, y-dash equal to y plus theta v will be 0 0 plus 3 by 7 into 1, 1, which is 3 by 7, 3 by 7.
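Using the compute_theta helper sketched earlier with this lecture's data (the columns and costs lists are restated here for convenience), we should recover theta equal to 3 by 7 and the new dual solution 3 by 7, 3 by 7.

```python
# Hypothetical usage of compute_theta with the columns of X1..X4.
columns = [(2, 5), (3, 2), (-1, 0), (0, -1)]   # a_j for X1, X2, X3, X4
costs = [3, 4, 0, 0]                           # c_j
y, v = (0, 0), (1, 1)

theta = compute_theta(y, v, columns, costs)    # Fraction(3, 7)
y1 = tuple(Fraction(yi) + theta * vi for yi, vi in zip(y, v))
print(theta, y1)                               # 3/7, (3/7, 3/7)
```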
Now we have a new solution, which we will call y1; y equal to 0 0 is what we started with.

(Refer Slide Time: 19:13)

y1 is y plus theta v, which is 3 by 7, 3 by 7. If we take 3 by 7, 3 by 7 and substitute into the dual constraints, the first gives 2 into 3 by 7 plus 5 into 3 by 7, which is 7 into 3 by 7, which is 3.
(Refer Slide Time: 19:30)

So the first constraint is satisfied as an equation. The second one: 3 into 3 by 7 is 9 by 7, plus 2 into 3 by 7 is 6 by 7, giving 15 by 7, so it is satisfied as an inequality. y1 greater than or equal to 0 is satisfied as an inequality, and y2 greater than or equal to 0 is satisfied as an inequality. The first constraint is the only one satisfied as an equation, so this will force variable X1 into the restricted primal. The admissible set is different, so we have to write the restricted primal again; now only variable X1 is in it, so X1 alone can be in the basis.
(Refer Slide Time: 20:32)

So we are now trying to solve 2X1 equal to 8 and 5X1 equal to 12. Once again this is made into a linear programming problem with X1 greater than or equal to 0: 2X1 plus a1 equal to 8; 5X1 plus a2 equal to 12; a1, a2 greater than or equal to 0; minimize a1 plus a2. Now we actually solve this linear programming problem; it is like the two phase method. We are essentially solving only the set of equations 2X1 equal to 8, 5X1 equal to 12, but we solve it as an LP by adding two variables, which are like artificial variables, with plus 1 coefficients, as in the first phase of the two phase method. We go back and solve this with variables X1, a1, a2.

(Refer Slide Time: 21:47)

a1 and a2 will be the basic variables. The first row is 2X1 plus a1 equal to 8 and the second row is 5X1 plus a2 equal to 12. The artificial variables have objective coefficients of 1 and it is a minimization problem, so in the cj minus zj row: for X1, zj is 1 into 2 plus 1 into 5, which is 7, so cj minus zj is 0 minus 7, which is minus 7; for a1 and a2 you get 0; and the objective value is 20. It is a minimization problem, so the variable with negative cj minus zj enters the basis: variable X1 enters. Theta is computed as 8 by 2, which is 4, and 12 by 5; 12 by 5 is smaller than 4, so variable a2 leaves the basis and 5 is the pivot element.

We do one more iteration with a1 and X1 as the basic variables. Dividing the pivot row by the pivot element gives 1, 0, 1 by 5, 12 by 5. The a1 row minus 2 times this gives 0, 1, minus 2 by 5, and 8 minus 24 by 5 is 16 by 5, so the objective function value is 16 by 5. The cj minus zj values are 0, 0 and, for a2, 1 minus of minus 2 by 5, which is 7 by 5. There is no entering variable, so the algorithm terminates with the optimum solution X1 equal to 12 by 5.
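This second restricted primal can again be cross-checked with an LP solver; a sketch with SciPy (variable order [X1, a1, a2] assumed) is below, and it should report X1 equal to 12 by 5 with objective 16 by 5, confirming that an artificial variable remains positive.

```python
# Phase-1 form of the second restricted primal.
from scipy.optimize import linprog

c = [0, 1, 1]                          # minimize a1 + a2
A_eq = [[2, 1, 0],                     # 2*x1 + a1 = 8
        [5, 0, 1]]                     # 5*x1 + a2 = 12
res = linprog(c, A_eq=A_eq, b_eq=[8, 12], bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                  # expected: x1 = 12/5, a1 = 16/5, Z = 16/5
```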

(Refer Slide Time: 24:28)

The solution here is X1 equal to 12 by 5, a1 equal to 16 by 5 and Z equal to 16 by 5. The restricted primal has an optimum solution with an artificial variable still in the basis; therefore the solution to the restricted primal is not feasible to the original primal, and we need to do one more iteration. We go back and find the dual of the restricted primal as we did before. This restricted primal will have two dual variables v1 and v2, and the dual objective will again be to maximize 8v1 plus 12v2.

(Refer Slide Time: 25:33)

The X1 column gives the constraint 2v1 plus 5v2 less than or equal to 0, the a1 column gives v1 less than or equal to 1, and the a2 column gives v2 less than or equal to 1, with v1 and v2 unrestricted in sign. So we have to solve maximize 8v1 plus 12v2 subject to 2v1 plus 5v2 less than or equal to 0, v1 less than or equal to 1 and v2 less than or equal to 1. We can either solve this by the simplex method or by the graphical method; since there are only two variables, we will quickly solve it using the graphical method.
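For readers who prefer a numerical cross-check of the graphical solution derived next, the same small dual can be handed to SciPy (a sketch; note the free, unrestricted bounds on v1 and v2).

```python
# maximize 8*v1 + 12*v2  s.t.  2*v1 + 5*v2 <= 0, v1 <= 1, v2 <= 1, v free.
from scipy.optimize import linprog

c = [-8, -12]                          # linprog minimizes, so negate
A_ub = [[2, 5], [1, 0], [0, 1]]
b_ub = [0, 1, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2, method="highs")
print(res.x, -res.fun)                 # expected: v1 = 1, v2 = -2/5, value 16/5
```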

(Refer Slide Time: 26:56)

Note that v1 and v2 are unrestricted in sign. We draw the v1 axis and the v2 axis. The line v1 equal to 1 bounds the region v1 less than or equal to 1, and the line v2 equal to 1 bounds the region v2 less than or equal to 1. For the line 2v1 plus 5v2 equal to 0, the point 0, 0 lies on it; another point is obtained by taking v2 equal to 1, which gives v1 equal to minus 5 by 2, that is, minus 2 and a half. Joining these two points gives the third line, 2v1 plus 5v2 equal to 0, and the region 2v1 plus 5v2 less than or equal to 0 lies below it. The intersection of the three regions is the feasible region.

(Refer Slide Time: 28:49)

For the objective function, maximize 8v1 plus 12v2, suppose we take the line 8v1 plus 12v2 equal to 24; it passes through the points 3, 0 and 0, 2.
(Refer Slide Time: 29:01)

Marking the points 3, 0 and 0, 2 gives the objective function line. As we move this line in the direction of increasing objective value, the last point of the feasible region it touches is the corner where v1 equal to 1 meets 2v1 plus 5v2 equal to 0, which is v1 equal to 1 and v2 equal to minus 2 by 5.
(Refer Slide Time: 29:41)

The optimum solution is v1 equal to 1, v2 equal to minus 2 by 5. Note that the objective function value is 8 minus 24 by 5, which is 16 by 5, exactly what we got for the restricted primal. We now have a new v, given by 1 and minus 2 by 5. We again need to find theta and then the new y-dash equal to y plus theta v.

(Refer Slide Time: 30:23)

From the current y1 we know that the first constraint is the one satisfied as an equation, so the other three constraints become potential candidates for evaluating theta. We also need to find out for which of them v-star aj is greater than 0. Let us first take the second constraint, whose column is 3, 2.
(Refer Slide Time: 30:56)

For this column, v-star aj is 3 into 1 plus 2 into minus 2 by 5, which is 3 minus 4 by 5, equal to 11 by 5; so v-star aj is greater than 0.

(Refer Slide Time: 31:09)

So for the variable X2 we compute cj minus y aj. y is 3 by 7, 3 by 7, so y aj is 9 by 7 plus 6 by 7, which is 15 by 7; cj is 4, so cj minus y aj is 4 minus 15 by 7, which is 13 by 7, and this will be divided by v-star aj.
(Refer Slide Time: 32:01)

Again, with y1 equal to 3 by 7, 3 by 7: 3 into 3 by 7 is 9 by 7, plus 2 into 3 by 7 is 6 by 7, giving 15 by 7, so 4 minus 15 by 7 is 13 by 7.

(Refer Slide Time: 32:19)

This 13 by 7 is divided by v-star aj, which we just calculated as 3 minus 4 by 5, that is 11 by 5; so the ratio for X2 is 13 by 7 divided by 11 by 5. For the third candidate, variable X3, the column is minus 1, 0, so v-star aj is minus 1 into 1 plus 0 into minus 2 by 5, which is negative, so X3 is not considered. For the fourth one, variable X4, the column is 0, minus 1, so v-star aj is 0 into 1 plus minus 1 into minus 2 by 5, which is plus 2 by 5, which is positive. y aj for this column is minus 3 by 7, so cj minus y aj is 0 minus of minus 3 by 7, which is plus 3 by 7, and the ratio is 3 by 7 divided by 2 by 5. So theta is the minimum of 13 by 7 into 5 by 11, which is 65 by 77, and 3 by 7 into 5 by 2, which is 15 by 14; the minimum is 65 by 77. Now y-dash equal to y plus theta v is 3 by 7, 3 by 7 plus 65 by 77 into 1, minus 2 by 5. The first component is 3 by 7 plus 65 by 77, which is 33 by 77 plus 65 by 77, equal to 98 by 77, which is 14 by 11. The second component is 3 by 7 plus 65 by 77 into minus 2 by 5, which is 33 by 77 minus 26 by 77, equal to 7 by 77, which is 1 by 11. So the value is 14 by 11 and 1 by 11; that is your y-dash.
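Continuing the same hypothetical sketch, the compute_theta helper (with the columns and costs lists defined earlier) should reproduce this second update exactly.

```python
# Second update: start from y1 = (3/7, 3/7) and v = (1, -2/5).
y1 = (Fraction(3, 7), Fraction(3, 7))
v = (Fraction(1), Fraction(-2, 5))

theta = compute_theta(y1, v, columns, costs)   # Fraction(65, 77)
y2 = tuple(a + theta * b for a, b in zip(y1, v))
print(theta, y2)                               # 65/77, (14/11, 1/11)
```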

(Refer Slide Time: 36:30)

So we call that y2, equal to 14 by 11, 1 by 11. Now we go back and check the dual constraints.
(Refer Slide Time: 36:50)

Substituting 14 by 11 and 1 by 11: the first constraint gives 28 by 11 plus 5 by 11, which is 33 by 11, equal to 3, so it is satisfied as an equation. The second gives 42 by 11 plus 2 by 11, which is 44 by 11, equal to 4, so it is also satisfied as an equation. The other two are satisfied as inequalities. Now we go back and write the restricted primal; corresponding to these two constraints, X1 and X2 will be the basic variables.

(Refer Slide Time: 37:26)

We should now solve 2X1 plus 3X2 equal to 8 and 5X1 plus 2X2 equal to 12. In the same way, we add two artificial variables to get 2X1 plus 3X2 plus a1 equal to 8 and 5X1 plus 2X2 plus a2 equal to 12, minimize a1 plus a2, with a1, a2, X1, X2 greater than or equal to 0. One way is to solve the equations directly; the other is to set it up as a linear program. Since we have consistently been using the linear programming approach, we will do the same here.
(Refer Slide Time: 38:09)


We start with a1 and a2 as the basic variables: 2X1 plus 3X2 plus a1 equal to 8, and 5X1 plus 2X2 plus a2 equal to 12, with objective coefficients of 1 for a1 and a2. The cj minus zj row is minus 7 for X1 (since 2 plus 5 is 7), minus 5 for X2, 0 for a1 and a2, and the objective value is 20. It is a minimization problem, so the variable with the most negative cj minus zj enters: X1 enters. The ratios are 8 divided by 2, which is 4, and 12 by 5; 12 by 5 is smaller, so a2 leaves the basis and 5 is the pivot. Now a1 and X1 are the basic variables. The X1 row becomes 1, 2 by 5, 0, 1 by 5, 12 by 5; the a1 row minus 2 times this becomes 0, 3 minus 4 by 5 which is 11 by 5, 1, minus 2 by 5, and 8 minus 24 by 5 which is 16 by 5. The cj minus zj value for X2 is minus 11 by 5, for a2 it is 1 plus 2 by 5 which is 7 by 5, and the objective value is 16 by 5.

Once again it is a minimization problem, so the variable with negative cj minus zj enters: X2 enters. The ratios are 16 by 5 divided by 11 by 5, which is 16 by 11, and 12 by 5 divided by 2 by 5, which is 6; 16 by 11 is smaller, so a1 leaves the basis and 11 by 5 is the new pivot. Now X2 and X1 are the basic variables. Dividing the pivot row by the pivot element gives 0, 1, 5 by 11, minus 2 by 11, 16 by 11. The X1 row minus 2 by 5 times this gives 1, 0, minus 2 by 11, 1 by 5 plus 4 by 55 which is 15 by 55 or 3 by 11, and 12 by 5 minus 32 by 55, that is 132 minus 32 by 55, which is 100 by 55 or 20 by 11. In the cj minus zj row, X1 and X2 get 0, a1 and a2 get 1, and the objective value is 0. So the restricted primal now has a solution with X1 equal to 20 by 11.
(Refer Slide Time: 41:57)


X2 is equal to 16 by 11, with Z equal to 0. We can quickly check: 40 plus 48 is 88, and 88 by 11 is 8; 100 plus 32 is 132, and 132 by 11 is 12. Now the restricted primal has an optimal solution with Z equal to 0. The moment it has an optimal solution with Z equal to 0, the artificial variables are not in the solution, which means we have a solution based only on X1 and X2. Therefore, this solution is feasible to the original primal: 2 into 20 by 11 plus 3 into 16 by 11 is equal to 8, and so on. So we now have a feasible solution to the dual, a feasible solution to the primal, and complementary slackness is satisfied.
The corresponding restricted primal gives a solution that is feasible to the primal. Therefore, these solutions are optimal to the primal and dual respectively. The solution is 20 by 11, 16 by 11, with Z equal to 3X1 plus 4X2, which is 3 into 20 by 11 plus 4 into 16 by 11; that is 60 plus 64, equal to 124 by 11. From the dual side, 8 into 14 by 11 is 112 by 11, plus 12 into 1 by 11 gives the same 124 by 11. So the solution 20 by 11, 16 by 11 with Z equal to 124 by 11 from the primal matches 8 into 14 by 11 plus 12 into 1 by 11 from the dual. This is how the primal dual algorithm works in giving us the optimal solution. The underlying principle is: for a certain type of problem, a minimization like this one, write the dual, work with a feasible solution to the dual, and try to get a feasible solution to the primal; if it is feasible, it is optimal. Otherwise, get a new feasible solution to the dual such that there is at least one new candidate in the restricted primal. That is the fundamental principle: there is always at least one new candidate in the restricted primal.
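A final cross-check of these numbers with exact fractions (a small sketch, not part of the lecture):

```python
# Verify feasibility of both solutions and the equal objective values.
from fractions import Fraction as F

x1, x2 = F(20, 11), F(16, 11)
y1, y2 = F(14, 11), F(1, 11)

assert 2 * x1 + 3 * x2 == 8 and 5 * x1 + 2 * x2 == 12       # primal feasibility
assert 2 * y1 + 5 * y2 <= 3 and 3 * y1 + 2 * y2 <= 4        # dual feasibility
print(3 * x1 + 4 * x2, 8 * y1 + 12 * y2)                    # both 124/11
```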
Once we start getting a new candidate in the restricted primal, the solution to the restricted primal becomes better. The choice of theta also ensures that dual feasibility is maintained; at the same time, a constraint which is right now satisfied as an inequality will become satisfied as an equation. So in this process we alternately update the dual variables, then write the restricted primal and solve it, till the restricted primal has Z equal to 0, which means it has no artificial variables; that is the essence of the primal dual algorithm. This is how the primal dual algorithm works, and many a time we will be tempted to say that this algorithm is somewhat similar to the two phase method.
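To tie the steps together, here is a compact, from-scratch sketch of the loop just described, written with NumPy and SciPy; the function name, tolerance and structure are my own, and both the restricted primal and its dual are solved with linprog rather than by hand.

```python
# Primal-dual method for: minimize c.x subject to A x = b, x >= 0, with c >= 0
# so that y = 0 is dual feasible (the equality form used in the lecture).
import numpy as np
from scipy.optimize import linprog

def primal_dual(A, b, c, tol=1e-9, max_iter=20):
    m, n = A.shape
    y = np.zeros(m)                                    # starting dual solution
    for _ in range(max_iter):
        slack = c - y @ A                              # c_j - y.a_j for every column
        J = np.where(slack <= tol)[0]                  # tight (admissible) columns
        # Restricted primal: minimize the sum of artificials over admissible columns.
        A_rp = np.hstack([A[:, J], np.eye(m)])
        c_rp = np.concatenate([np.zeros(len(J)), np.ones(m)])
        rp = linprog(c_rp, A_eq=A_rp, b_eq=b,
                     bounds=[(0, None)] * (len(J) + m), method="highs")
        if rp.fun <= tol:                              # no artificials left: optimal
            x = np.zeros(n)
            x[J] = rp.x[:len(J)]
            return x, y
        # Dual of the restricted primal: max b.v s.t. v.a_j <= 0 (j in J), v_i <= 1.
        A_ub = np.vstack([A[:, J].T, np.eye(m)])
        b_ub = np.concatenate([np.zeros(len(J)), np.ones(m)])
        dual = linprog(-b, A_ub=A_ub, b_ub=b_ub,
                       bounds=[(None, None)] * m, method="highs")
        v = dual.x
        va = v @ A
        # Step length theta over the slack columns with v.a_j > 0.
        theta = min(slack[j] / va[j] for j in range(n)
                    if slack[j] > tol and va[j] > tol)
        y = y + theta * v
    raise RuntimeError("primal-dual loop did not converge")

# The lecture's example, with the surplus columns for X3 and X4 included.
A = np.array([[2.0, 3.0, -1.0, 0.0],
              [5.0, 2.0, 0.0, -1.0]])
b = np.array([8.0, 12.0])
c = np.array([3.0, 4.0, 0.0, 0.0])
x, y = primal_dual(A, b, c)
print(x, y)    # expected: x ~ (20/11, 16/11, 0, 0), y ~ (14/11, 1/11)
```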

Now instead of doing all this, we could have straight away started with the original problem and applied the two phase simplex algorithm, simply adding two artificial variables a1 and a2, giving them objective function coefficients of 1, eliminating them in the first phase and then proceeding. Computationally, it is much the same as the two phase method, but it brings in a very important idea: by modifying the dual and bringing a new candidate into the restricted primal, we can get to the optimum. In fact, the primal dual algorithm is used extensively. The Hungarian method that we saw for the assignment problem is a direct application of the primal dual algorithm.
Now let us take a very small example of an assignment problem and show how the Hungarian algorithm is actually a primal dual algorithm.
(Refer Slide Time: 46:43)

Let us look at a small 4 by 4 assignment problem. We do the row subtractions first: u1 equal to 2 gives the row 0 2 4 6; u2 equal to 1 gives 0 6 4 4; u3 equal to 2 gives 0 1 2 3; and u4 equal to 6 gives 0 2 1 2. Then we do the column subtractions to get the columns 0 0 0 0, 1 5 0 1, 3 3 1 0 and 4 2 1 0. So we have u1 equal to 2, u2 equal to 1, u3 equal to 2, u4 equal to 6 and v1 equal to 0, v2 equal to 1, v3 equal to 1, v4 equal to 2. Let us quickly make the assignments on the zeros of this matrix. We can make three assignments, but we have not been able to make four, so we carry out the line drawing procedure: tick an unassigned row; if a ticked row has a 0, tick that column; if a ticked column has an assignment, tick that row; draw lines through unticked rows and ticked columns. Then we find theta, the minimum of the elements not covered by any line, which is 1; to each element covered by two lines we add theta, elements covered by one line are retained, and from uncovered elements we subtract theta. So we get the matrix with rows 0 0 2 3, 0 4 2 1, 1 0 1 1, 1 1 0 0. Whether we use this matrix or the earlier one, the optimal assignment is the same; the reason comes from the following.
When we created this matrix, we effectively wrote c-dash ij equal to c ij minus (u i plus v j). By doing the row and column subtractions we have done exactly this, and we have ensured that c-dash ij is greater than or equal to 0. Therefore the pair (u, v) is dual feasible to the assignment problem, because the dual constraint of the assignment problem is of the form u i plus v j less than or equal to c ij; ensuring that c-dash ij equal to c ij minus (u i plus v j) is greater than or equal to 0 means the dual constraints are satisfied. Now, we satisfy complementary slackness by assigning only in the 0 positions; we do not make an assignment in any other position.
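As a small sketch of this connection, the row and column reduction can be done in a few lines of NumPy. The cost matrix below is reconstructed from the u and v values quoted above, so it is an inference rather than a verbatim copy of the slide.

```python
# Row/column reduction of the Hungarian method; (u, v) is a dual feasible pair.
import numpy as np

C = np.array([[2, 4, 6, 8],
              [1, 7, 5, 5],
              [2, 3, 4, 5],
              [6, 8, 7, 8]])

u = C.min(axis=1)                    # row subtractions: [2, 1, 2, 6]
reduced = C - u[:, None]
v = reduced.min(axis=0)              # column subtractions: [0, 1, 1, 2]
reduced = reduced - v

print(u, v)
print(reduced)                       # zeros mark where u_i + v_j = c_ij
assert np.all(u[:, None] + v <= C)   # dual feasibility: u_i + v_j <= c_ij
```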
We satisfy complementary slackness, and if we get a primal feasible solution, then it is optimal. Right now we do not have a primal feasible solution because we do not have four allocations, so what do we do next? By doing the adjustment above, we actually readjust the ui's and the vj's. The column through which a line passes has its v reduced by theta, so v becomes minus 1, 1, 1 and 2. The rows through which no line passes have their u increased by theta, so u1 becomes 3 and u2 becomes 2, while the rows with lines keep their values, so u3 remains 2 and u4 remains 6. For example, the entry that was 6 minus 6 plus 0, equal to 0, is now 6 minus 5, equal to 1; so we still have c-dash ij equal to c ij minus (u i plus v j). What we have done is obtain a new set of dual variables, and in the process we have included at least one new assignable 0. If you look carefully, there is one new assignable 0, which means one more variable can come into the primal basis. That is exactly what we did in the LP example: we started with the dual solution and after one iteration we computed theta.
(Refer Slide Time: 52:55)

Then we made sure that in moving from y to y1, at least one constraint which was originally satisfied as an inequality is now satisfied as an equation. Here, in the assignment problem, a constraint satisfied as an equation is reflected by a 0 in the reduced matrix.
(Refer Slide Time: 53:10)


At least one new constraint is satisfied as an equation, and the restricted primal now works with one more additional basic variable. In that process, we get the optimum. In a very similar manner, in the Hungarian method we keep making these ticks and subtractions until we finally get a feasible solution to the primal, which gives us the optimum. Here also, the moment we get a feasible solution to the primal, we get the optimum. So the Hungarian algorithm is a direct application of the primal dual algorithm.

Once again, the primal dual algorithm starts with a minimization problem with greater than or equal to constraints, writes the dual so that it is easy to find a feasible solution to the dual, then computes the restricted primal. If the restricted primal solution is feasible to the primal, then it is optimal. Otherwise y becomes y1, carefully ensuring that there is at least one new dual constraint which is now satisfied as an equation.
(Refer Slide Time: 54:20)

That is done by using the expression for theta, which ensures that dual feasibility is maintained and one new basic variable can enter. With that, the restricted primal is computed once again, and the process is repeated back and forth till the restricted primal is feasible to the primal. The moment it is, the solution is optimal, and we have also seen that the Hungarian algorithm is a direct application of the primal dual method.

(Refer Slide Time: 54:46)

So far in the course, we have seen several aspects of linear programming. We started with the revised simplex algorithm, which essentially tries to quicken the simplex by reducing the time taken per iteration. Then we saw how the simplex is modified to handle bounded variables, by treating the bounds separately and not as explicit constraints. Then we looked at the idea of column generation through the cutting-stock problem, where it is not necessary to store all the columns explicitly; columns can be generated by solving subproblems, and we saw an application to the one-dimensional cutting-stock problem.

Then we moved to the Dantzig-Wolfe decomposition algorithm, where, if the problem has a certain structure, by removing certain constraints we can decompose it into smaller sized LPs. There we exploited the fact that it is easier to solve a number of smaller LPs than one large LP, because the computational effort grows roughly cubically with the number of constraints. So the decomposition algorithm taught us a way to split a bigger problem into smaller problems and, by solving a series of smaller problems and by using the column generation idea, to generate entering columns into the basis. Then we have seen the primal dual algorithm, where, by intelligently working with the dual, modifying the dual solution at each stage, and solving a restricted primal which is a much smaller problem, one can get the optimum solution to the linear program.

So far, we have seen several aspects of the simplex algorithm and algorithms related to and associated with the simplex. We have largely looked at them from the point of view of making the simplex faster, by storing less and by generating entering columns as and when they are needed. That much we have seen about how to make the simplex better and handle more variety. The next thing we have to look at is whether linear programming or OR problems have only one objective, or whether there are situations where these problems can have more than one objective or goal. From this, we move to a topic called goal programming, whose basics we will see in the next lecture.
