Module - 3 Lecture Notes - 5 Revised Simplex Method, Duality and Sensitivity Analysis
Introduction

In the previous class, the simplex method was discussed, in which the entire simplex tableau needs to be computed at each iteration. The revised simplex method is an improvement over the simplex method: it is computationally more efficient and accurate. Duality is a useful property of LP problems that makes a problem easier to solve in some cases and leads to the dual simplex method. It is also helpful in sensitivity or post-optimality analysis of decision variables. In this lecture, the revised simplex method, duality of LP problems, the dual simplex method, and sensitivity or post-optimality analysis will be discussed.

Revised Simplex Method

The benefit of the revised simplex method is most clearly seen for large LP problems. In the simplex method the entire simplex tableau is updated, while only a small part of it is actually used. The revised simplex method uses exactly the same steps as the simplex method; the only difference lies in the details of computing the entering and departing variables, as explained below. Let us consider the following LP problem, in general notation, after transforming it to its standard form and incorporating all required slack, surplus and artificial variables.
Maximize (or Minimize)  Z = c1 x1 + c2 x2 + ... + cn xn

subject to

c11 x1 + c12 x2 + ... + c1n xn = b1
c21 x1 + c22 x2 + ... + c2n xn = b2
   ...
cm1 x1 + cm2 x2 + ... + cmn xn = bm

x1, x2, ..., xn ≥ 0
As the revised simplex method is mostly beneficial for large LP problems, it will be discussed in the context of matrix notation. The matrix notation of the above LP problem can be expressed as follows:
Maximize (or Minimize)  Z = CX

subject to  AX = B,  X ≥ 0

where C = [c1  c2  ...  cn] is the row vector of cost coefficients, X = [x1  x2  ...  xn]^T is the column vector of decision variables, B = [b1  b2  ...  bm]^T is the column vector of right-hand sides, and A is the m x n matrix of constraint coefficients,

        | c11  c12  ...  c1n |
  A  =  | c21  c22  ...  c2n |
        |  :    :         :  |
        | cm1  cm2  ...  cmn |
It can be noted for subsequent discussion that the column vector corresponding to a decision variable xk is [c1k  c2k  ...  cmk]^T.
Let X_S be the column vector of basic variables, C_S the row vector of cost coefficients corresponding to X_S, and S the basis matrix corresponding to X_S.
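As a small, concrete illustration of this notation, the basic quantities can be extracted with NumPy as in the minimal sketch below; the function name basis_data and the argument names are ours, not from the notes, and A, B, C are assumed to be NumPy arrays.

```python
import numpy as np

def basis_data(A, B, C, basis):
    """Return the basis matrix S, the basic cost row C_S and the basic
    solution X_S = S^(-1) B for the given list of basic-variable indices."""
    S = A[:, basis]               # columns of A for the basic variables
    C_S = C[basis]                # cost coefficients of the basic variables
    X_S = np.linalg.solve(S, B)   # X_S = S^(-1) B
    return S, C_S, X_S
```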
1. Selection of entering variable
For each of the nonbasic variables, calculate the coefficient (WP - c), where P is the column vector associated with the nonbasic variable at hand, c is the cost coefficient associated with that nonbasic variable, and W = C_S S^(-1). For a maximization (minimization) problem, the nonbasic variable having the lowest negative (highest positive) coefficient, as calculated above, is the entering variable.
2. Selection of departing variable
a. A new column vector U is calculated as U = S^(-1)B.
b. Corresponding to the entering variable, another vector V is calculated as V = S^(-1)P, where P is the column vector corresponding to the entering variable.
c. It may be noted that the lengths of U and V are the same (= m). For i = 1, ..., m, the ratios U(i)/V(i) are calculated, provided V(i) > 0. The index i = r for which the ratio is least is noted. The r-th basic variable of the current basis is the departing variable. If V(i) ≤ 0 for all i, then further calculation is stopped, concluding that a bounded solution does not exist for the LP problem at hand.
3. Update of the basis

The r-th basic variable is replaced by the entering variable. The inverse of the new basis matrix can be obtained as S_new^(-1) = E S^(-1), where E is an m x m identity matrix whose r-th column is replaced by the vector with elements -V(i)/V(r) for i ≠ r and 1/V(r) for i = r, i.e.,

        | 1   0   ...   -V(1)/V(r)   ...   0 |
        | 0   1   ...   -V(2)/V(r)   ...   0 |
  E  =  | :   :             :              : |
        | 0   0   ...     1/V(r)     ...   0 |     (r-th row)
        | :   :             :              : |
        | 0   0   ...   -V(m)/V(r)   ...   1 |
                        (r-th column)
S is replaced by S_new and steps 1 through 3 are repeated. If all the coefficients calculated in step 1, i.e., (WP - c), are nonnegative (nonpositive) in the case of a maximization (minimization) problem, then the optimum solution has been reached, and the optimal solution is

X_S = S^(-1)B  and  Z = C_S X_S
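The three steps above can be combined into a short sketch. The following Python/NumPy code is a minimal illustration for a maximization problem in standard form; for simplicity it recomputes S^(-1) with np.linalg.inv instead of using the product-form update of step 3, and all function and variable names are ours rather than from the notes.

```python
import numpy as np

def revised_simplex_step(A, B, C, basis):
    """One iteration of the revised simplex method for
    Maximize C x  subject to  A x = B, x >= 0.
    Returns (basis, done): done is True once every coefficient (W P - c) is nonnegative."""
    m, n = A.shape
    S_inv = np.linalg.inv(A[:, basis])            # S^(-1), inverse of the basis matrix
    W = C[basis] @ S_inv                          # W = C_S S^(-1)

    # Step 1: entering variable -- most negative coefficient (W P - c)
    nonbasic = [j for j in range(n) if j not in basis]
    coeffs = np.array([W @ A[:, j] - C[j] for j in nonbasic])
    if coeffs.min() >= 0:
        return basis, True                        # optimum reached: X_S = S^(-1) B
    k = nonbasic[int(coeffs.argmin())]

    # Step 2: departing variable -- least ratio U(i)/V(i) over V(i) > 0
    U = S_inv @ B
    V = S_inv @ A[:, k]
    if np.all(V <= 0):
        raise ValueError("no bounded solution exists")
    ratios = np.full(m, np.inf)
    ratios[V > 0] = U[V > 0] / V[V > 0]
    r = int(ratios.argmin())

    # Step 3: the r-th basic variable departs and x_k enters the basis
    basis = list(basis)
    basis[r] = k
    return basis, False
```

Starting from the basis formed by the slack and artificial variables, the step is repeated until done becomes True, after which X_S = S^(-1)B and Z = C_S X_S.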
Duality of LP problems
Each LP problem (called the Primal in this context) is associated with a counterpart known as the Dual LP problem. Instead of the primal, solving the dual LP problem is sometimes easier when a) the dual has fewer constraints than the primal (the time required for solving LP problems is directly affected by the number of constraints, i.e., the number of iterations necessary to converge to an optimum solution, which in the simplex method usually ranges from 1.5 to 3 times the number of structural constraints in the problem), and b) the dual involves maximization of an objective function (it may be possible to avoid the artificial variables that would otherwise be used in a primal minimization problem).

The dual LP problem can be constructed by defining a new decision variable for each constraint in the primal problem and a new constraint for each variable in the primal. The coefficient of the i-th variable in the dual's objective function is the i-th component of the primal's requirements vector (the right-hand-side values of the constraints in the primal). The dual's requirements vector consists of the coefficients of the decision variables in the primal objective function. The coefficients of each constraint in the dual (i.e., its row vectors) are the column vectors associated with the respective decision variables in the coefficient matrix of the primal problem. In other words, the coefficient matrix of the dual is the transpose of the primal's coefficient matrix. Finally, maximizing the primal problem is equivalent to minimizing the dual, and their respective optimum values are exactly equal. When a primal constraint is a less-than-or-equal-to inequality, the corresponding variable in the dual is non-negative. An equality constraint in the primal problem means that the corresponding dual variable is unrestricted in sign. Obviously, the dual of the dual is the primal. In summary, the following relationships exist between the primal and the dual.
Primal                                              Dual
--------------------------------------------------  --------------------------------------------------
Maximization                                        Minimization
Minimization                                        Maximization
i-th variable                                       i-th constraint
j-th constraint                                     j-th variable
xi ≥ 0                                              Inequality sign of i-th constraint
i-th variable unrestricted                          i-th constraint with = sign
j-th constraint with = sign                         j-th variable unrestricted
RHS of j-th constraint                              Cost coefficient associated with j-th variable
Cost coefficient associated with i-th variable      RHS of i-th constraint
See the pictorial representation below for better understanding and quick reference:
Primal Problem

Maximize  Z = c1 x1 + c2 x2 + ... + cn xn
subject to
c11 x1 + c12 x2 + ... + c1n xn  =  b1      (corresponding dual variable: y1)
c21 x1 + c22 x2 + ... + c2n xn  ≤  b2      (corresponding dual variable: y2)
   ...
cm1 x1 + cm2 x2 + ... + cmn xn  ≤  bm      (corresponding dual variable: ym)
x1 ≥ 0, x2 unrestricted, x3 ≥ 0, ..., xn ≥ 0

Against each primal constraint, mark the corresponding decision variable of the dual (y1, ..., ym). The right-hand sides b1, ..., bm become the cost coefficients of the dual objective function, while the primal cost coefficients c1, ..., cn become the right-hand sides of the dual constraints; the sign restriction on each yi is determined by the type of the corresponding primal constraint. Thus, the 1st constraint of the dual is c11 y1 + c21 y2 + ... + cm1 ym ≥ c1, and so on.

Dual Problem

Minimize  Z' = b1 y1 + b2 y2 + ... + bm ym
subject to
c11 y1 + c21 y2 + ... + cm1 ym ≥ c1
c12 y1 + c22 y2 + ... + cm2 ym = c2
   ...
c1n y1 + c2n y2 + ... + cmn ym ≥ cn
y1 unrestricted, y2 ≥ 0, ..., ym ≥ 0
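For the simplest case of a maximization primal in which every constraint is of the ≤ type and every variable is nonnegative, the construction described above amounts to transposing the coefficient matrix and interchanging the requirements vector with the cost vector. A minimal sketch follows; the function name build_dual is ours, not from the notes.

```python
import numpy as np

def build_dual(A, b, c):
    """Dual of:  Maximize c x  subject to  A x <= b, x >= 0.
    Returns (A_d, b_d, c_d) describing:  Minimize c_d y  subject to  A_d y >= b_d, y >= 0."""
    A_d = np.asarray(A, dtype=float).T    # dual coefficient matrix = transpose of the primal's
    b_d = np.asarray(c, dtype=float)      # dual requirements vector = primal cost coefficients
    c_d = np.asarray(b, dtype=float)      # dual cost coefficients  = primal requirements vector
    return A_d, b_d, c_d
```

Equality constraints and unrestricted variables would additionally require the sign rules summarized in the table above.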
It may be noted that, before finding its dual, all the constraints should be transformed to less-than-or-equal-to or equal-to type for a maximization problem, and to greater-than-or-equal-to or equal-to type for a minimization problem. This can be done by multiplying both sides of a constraint by -1, so that the inequality sign gets reversed. Finding the dual is illustrated with the following example.

Primal

Maximize  Z = 4x1 + 3x2
subject to
x1 + (2/3)x2 ≤ 6000
x1 - x2 ≥ -2000
x1 ≤ 4000
x1 unrestricted, x2 ≥ 0

Dual

Minimize  Z' = 6000y1 + 2000y2 + 4000y3
subject to
y1 - y2 + y3 = 4
(2/3)y1 + y2 ≥ 3
y1 ≥ 0, y2 ≥ 0, y3 ≥ 0

It may be noted that the second constraint of the primal is transformed to -x1 + x2 ≤ 2000 before constructing the dual.
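The example can be cross-checked numerically: solving both problems with scipy.optimize.linprog (which minimizes, so the primal objective is negated) should report identical optimal values. The snippet below is only an illustration with an off-the-shelf solver, using the primal and dual as written above; it is not part of the original notes.

```python
from scipy.optimize import linprog

# Primal: Maximize 4x1 + 3x2  s.t.  x1 + (2/3)x2 <= 6000, -x1 + x2 <= 2000,
#         x1 <= 4000, x1 unrestricted, x2 >= 0
primal = linprog(c=[-4, -3],                        # negated: linprog minimizes
                 A_ub=[[1, 2/3], [-1, 1], [1, 0]],
                 b_ub=[6000, 2000, 4000],
                 bounds=[(None, None), (0, None)],  # x1 free, x2 >= 0
                 method="highs")

# Dual: Minimize 6000y1 + 2000y2 + 4000y3  s.t.  y1 - y2 + y3 = 4,
#       (2/3)y1 + y2 >= 3, y1, y2, y3 >= 0
dual = linprog(c=[6000, 2000, 4000],
               A_ub=[[-2/3, -1, 0]], b_ub=[-3],     # ">=" rewritten as "<=" by negation
               A_eq=[[1, -1, 1]], b_eq=[4],
               bounds=[(0, None)] * 3,
               method="highs")

print(-primal.fun, dual.fun)   # both should print the same optimum (25600 for this data)
```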
Primal-Dual relationships
The following points are important to note regarding the primal-dual relationship:
1. If one problem (either the primal or the dual) has an optimal feasible solution, the other problem also has an optimal feasible solution. The optimal objective function value is the same for both the primal and the dual.
2. If one problem has no solution (is infeasible), the other problem is either infeasible or unbounded.
3. If one problem is unbounded, the other problem is infeasible.
Dual Simplex Method

The dual simplex method is computationally similar to the simplex method, but it starts with a basic solution that is optimal in terms of the Z-row coefficients while being infeasible (some basic variables are negative), and successive iterations move towards feasibility while retaining optimality. The main steps are as follows:
1. All constraints (except any equality constraints) are converted to less-than-or-equal-to type by multiplying by -1 where necessary, and slack variables are added; as a result, some of the right-hand-side values br may be negative.
2. The problem is expressed in tableau form, as in the simplex method.
3. Selection of the departing variable: the basic variable having the most negative value (br) identifies the pivotal row, and that variable is the departing variable.
4. Selection of the entering variable: for the negative coefficients of the pivotal row, the ratios of the corresponding Z-row coefficients to the pivotal-row coefficients are calculated, ignoring their signs. The column corresponding to the minimum ratio is identified as the pivotal column, and the associated decision variable is the entering variable.
5. Pivotal operation: the pivotal operation is exactly the same as in the case of the simplex method, considering the pivotal element as the element at the intersection of the pivotal row and pivotal column.
6. Check for optimality: if all the basic variables have nonnegative values, then the optimum solution has been reached. Otherwise, steps 3 to 5 are repeated until the optimum is reached.
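A minimal sketch of the pivot selection just described is given below, assuming the tableau is stored as a NumPy array whose first row is the Z row and whose last column holds the right-hand-side values br; the layout and the function name are ours, not from the notes.

```python
import numpy as np

def dual_simplex_pivot(T):
    """Select (pivotal row, pivotal column) for one dual simplex iteration.
    Row 0 of T is the Z row; the last column of T holds the br values.
    Returns None when all br are nonnegative (optimum reached)."""
    b = T[1:, -1]
    if np.all(b >= 0):
        return None                              # check for optimality: optimum reached
    r = int(b.argmin()) + 1                      # most negative br -> departing row

    z_row, pivot_row = T[0, :-1], T[r, :-1]
    negative = pivot_row < 0                     # only negative pivotal-row coefficients qualify
    if not negative.any():
        raise ValueError("the problem has no feasible solution")
    ratios = np.full(pivot_row.shape, np.inf)
    ratios[negative] = np.abs(z_row[negative] / pivot_row[negative])
    s = int(ratios.argmin())                     # minimum ratio -> pivotal column
    return r, s
```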
Consider the following problem:

Minimize  Z = 2x1 + x2
subject to
x1 ≥ 2
3x1 + 4x2 ≤ 24
4x1 + 3x2 ≥ 12
-x1 + 2x2 ≥ 1
x1, x2 ≥ 0

By converting the greater-than-or-equal-to constraints to less-than-or-equal-to type (multiplying them by -1) and introducing the slack variables x3, x4, x5 and x6, the problem is reformulated with equality constraints as follows:

Minimize  Z = 2x1 + x2
subject to
 -x1        + x3                   = -2
 3x1 + 4x2       + x4              = 24
-4x1 - 3x2             + x5        = -12
  x1 - 2x2                   + x6  = -1

Expressing the problem in the tableau form:

Iteration 1

Basis    Z    x1    x2    x3   x4   x5   x6     br
Z        1    -2    -1     0    0    0    0      0
x3       0    -1     0     1    0    0    0     -2
x4       0     3     4     0    1    0    0     24
x5       0    -4    -3     0    0    1    0    -12    <- pivotal row
x6       0     1    -2     0    0    0    1     -1
Ratios        0.5   1/3    --   --   --   --

The x5 row has the most negative br, so x5 is the departing variable. The minimum ratio (1/3) identifies x2 as the entering variable, and the pivotal element is -3.
Tableaus for successive iterations are shown below. The pivotal row, pivotal column and pivotal element for each tableau are marked as before.

Iteration 2

Basis    Z    x1     x2    x3   x4    x5    x6     br
Z        1   -2/3     0     0    0   -1/3    0      4
x3       0    -1      0     1    0    0      0     -2    <- pivotal row
x4       0   -7/3     0     0    1    4/3    0      8
x2       0    4/3     1     0    0   -1/3    0      4
x6       0   11/3     0     0    0   -2/3    1      7
Ratios        2/3     --    --   --    --    --

Iteration 3

Basis    Z    x1    x2     x3    x4    x5    x6      br
Z        1     0     0    -2/3    0   -1/3    0     16/3
x1       0     1     0    -1      0    0      0      2
x4       0     0     0    -7/3    1    4/3    0     38/3
x2       0     0     1     4/3    0   -1/3    0      4/3
x6       0     0     0    11/3    0   -2/3    1     -1/3   <- pivotal row
Ratios                     --     --   1/2    --

Iteration 4

Basis    Z    x1    x2     x3     x4    x5     x6      br
Z        1     0     0    -5/2     0     0    -1/2     5.5
x1       0     1     0    -1       0     0     0       2
x4       0     0     0     5       1     0     2      12
x2       0     0     1    -1/2     0     0    -1/2     1.5
x5       0     0     0   -11/2     0     1    -3/2     0.5

As all the br values are now nonnegative, the optimum solution is reached. Thus, the optimal solution is x1 = 2, x2 = 1.5 with Z = 5.5.
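As a numerical cross-check of this example (using the constraint directions as reconstructed above), an off-the-shelf solver gives the same optimum; the snippet is an illustration, not part of the original notes.

```python
from scipy.optimize import linprog

# Minimize 2x1 + x2  s.t.  x1 >= 2, 3x1 + 4x2 <= 24,
#                          4x1 + 3x2 >= 12, -x1 + 2x2 >= 1, x1, x2 >= 0
res = linprog(c=[2, 1],
              A_ub=[[-1, 0],     # x1 >= 2          ->  -x1         <= -2
                    [3, 4],      # 3x1 + 4x2 <= 24
                    [-4, -3],    # 4x1 + 3x2 >= 12  ->  -4x1 - 3x2  <= -12
                    [1, -2]],    # -x1 + 2x2 >= 1   ->    x1 - 2x2  <= -1
              b_ub=[-2, 24, -12, -1],
              method="highs")
print(res.x, res.fun)            # expected: x1 = 2, x2 = 1.5, Z = 5.5
```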
Dual
Minimize  Z' = 6y1 + 0y2 + 4y3
subject to
2y1 + y2 + 5y3 ≥ 4
y1 + 4y2 + 2y3 ≥ 1
2y1 + 2y2 + 2y3 ≥ 2
y1, y2, y3 ≥ 0
As illustrated above, the solution of the dual can be obtained from the coefficients of the slack variables of the respective constraints of the primal in the Z row, as y1 = 1, y2 = 1/3 and y3 = 1/3, with Z = Z' = 22/3.
Sensitivity or post-optimality analysis

The change in the optimal value of the objective function due to a small change in the RHS of a constraint can be estimated as

ΔZ = yj Δbi

where yj is the dual variable associated with the i-th constraint, Δbi is the small change in the RHS of the i-th constraint, and ΔZ is the change in the objective function owing to Δbi. Let, for an LP problem, the i-th constraint be 2x1 + x2 ≤ 50 and the optimum value of the objective function be 250. What if the RHS of the i-th constraint changes to 55, i.e., the i-th constraint becomes 2x1 + x2 ≤ 55? To answer this question, let the dual variable associated with the i-th constraint be yj, whose optimum value is 2.5 (say). Thus, Δbi = 55 - 50 = 5 and yj = 2.5. So, ΔZ = yj Δbi = 2.5 x 5 = 12.5, and the revised optimum value of the objective function is 250 + 12.5 = 262.5.
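The same estimate can be written as a small helper (the name revised_objective is ours); it is a first-order estimate, valid only for changes small enough that the optimal basis does not change.

```python
def revised_objective(z_opt, dual_values, rhs_changes):
    """Estimate the new optimal objective: Z plus the sum of y_i * delta_b_i over the constraints."""
    return z_opt + sum(y * db for y, db in zip(dual_values, rhs_changes))

# Example from the text: Z = 250, dual value 2.5, RHS increased by 5
print(revised_objective(250.0, [2.5], [5.0]))    # 262.5
```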