Chapter 5. Linear and Nonlinear Programming
Why Talk About Linear Programming?
Simplex Method
Basic Steps of Simplex
Compare constraint conversion with goal conversions using deviation variables
Different "components" of a LP model
Movement to Adjacent Extreme Point
Entering and Departing Vector (Variable) Rules
General rules:
• The non-basic variable chosen to enter the basis is the one that provides the greatest reduction in the objective function.
• The basic variable chosen to leave is the one that is expected to become infeasible first.
NOTE: THESE ARE HEURISTICS!!
Variations on these rules exist, but they are rare.
Simplex Variations
Computational Considerations
Limitations of Simplex
Example Problem
Maximize Z = 5x1 + 2x2 + x3
subject to
x1 + 3x2 - x3 ≤ 6,
x2 + x3 ≤ 4,
3x1 + x2 ≤ 7,
x1, x2, x3 ≥ 0.
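The hand iterations on the next slides can be cross-checked numerically. Below is a minimal sketch (added to these notes, not part of the original slides) that solves the example with scipy.optimize.linprog; since linprog minimizes, the maximization objective is negated.

# Cross-check of the example LP with SciPy (sketch).
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 2.0, 1.0])        # objective coefficients (to maximize)
A_ub = np.array([[1.0, 3.0, -1.0],   # x1 + 3x2 - x3 <= 6
                 [0.0, 1.0,  1.0],   # x2 + x3       <= 4
                 [3.0, 1.0,  0.0]])  # 3x1 + x2      <= 7
b_ub = np.array([6.0, 4.0, 7.0])

# linprog minimizes, so minimize -c'x to maximize c'x; x >= 0 via the bounds.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x)      # expected: [7/3, 0, 4]
print(-res.fun)   # expected: 47/3 ≈ 15.6667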
Simplex and Example Problem
a11 x1 + a12 x2 + ••• + a1n xn ≤ b1   →   a11 x1 + a12 x2 + ••• + a1n xn + xn+1 = b1
a21 x1 + a22 x2 + ••• + a2n xn ≥ b2   →   a21 x1 + a22 x2 + ••• + a2n xn - xn+2 = b2
am1 x1 + am2 x2 + ••• + amn xn ≤ bm   →   am1 x1 + am2 x2 + ••• + amn xn + xn+k = bm

In our example problem:

x1 + 3x2 - x3 ≤ 6   →   x1 + 3x2 - x3 + x4 = 6
x2 + x3 ≤ 4         →   x2 + x3 + x5 = 4
3x1 + x2 ≤ 7        →   3x1 + x2 + x6 = 7
x1, x2, x3, x4, x5, x6 ≥ 0.

cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   1     3    -1     1     0     0  |     6
 0   x5   |   0     1     1     0     1     0  |     4
 0   x6   |   3     1     0     0     0     1  |     7
    c̄ row |   5     2     1     0     0     0  |   Z = 0
Step 2: Explanation
Use the inner product rule to find the relative profit coefficients
cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   1     3    -1     1     0     0  |     6
 0   x5   |   0     1     1     0     1     0  |     4
 0   x6   |   3     1     0     0     0     1  |     7
    c̄ row |   5     2     1     0     0     0  |   Z = 0

c̄j = cj - cB Pj:
c̄1 = 5 - 0(1) - 0(0) - 0(3) = 5   → largest positive
c̄2 = 2 - 0(3) - 0(1) - 0(1) = 2
c̄3 = 1 - 0(-1) - 0(1) - 0(0) = 1
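As a quick numerical check (an illustration added here, not from the slides), the inner product rule can be applied to every column of the tableau at once:

import numpy as np

c = np.array([5.0, 2.0, 1.0, 0.0, 0.0, 0.0])       # cj row
cB = np.array([0.0, 0.0, 0.0])                     # costs of the basic variables x4, x5, x6
A = np.array([[1.0, 3.0, -1.0, 1.0, 0.0, 0.0],     # tableau body (columns Pj)
              [0.0, 1.0,  1.0, 0.0, 1.0, 0.0],
              [3.0, 1.0,  0.0, 0.0, 0.0, 1.0]])

c_bar = c - cB @ A            # relative profit coefficients c̄j = cj - cB·Pj
print(c_bar)                  # [5. 2. 1. 0. 0. 0.]  -> c̄1 = 5 is the largest positive
print(int(np.argmax(c_bar)))  # column index 0, i.e. x1 enters the basis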
Step 4: Is this an optimal basic feasible solution?
Simplex: Step 5
The entering variable xs can be increased only until the first basic variable is driven to zero; the departing row is found from the minimum-ratio rule:

max xs = min over { i : ais > 0 } of ( bi / ais )

In our example (entering column x1):

cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   1     3    -1     1     0     0  |     6
 0   x5   |   0     1     1     0     1     0  |     4
 0   x6   |   3     1     0     0     0     1  |     7
    c̄ row |   5     2     1     0     0     0  |   Z = 0

Row   Basic variable   Ratio bi / ai1
 1         x4           6/1 = 6
 2         x5           -   (a21 = 0)
 3         x6           7/3   ← minimum

The minimum ratio is 7/3, so x6 leaves the basis and x1 enters.
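A small sketch (added for illustration) of the minimum-ratio test applied to the entering column x1 of the tableau above:

import numpy as np

b = np.array([6.0, 4.0, 7.0])      # current right-hand sides (Constants column)
col = np.array([1.0, 0.0, 3.0])    # entering column (x1) in the current tableau

# Ratios b_i / a_is are formed only for rows with a_is > 0; other rows cannot block x1.
ratios = np.where(col > 0, b / np.maximum(col, 1e-12), np.inf)
leaving_row = int(np.argmin(ratios))
print(ratios)        # [6.  inf  2.333...]  -> the 6, -, 7/3 ratio column above
print(leaving_row)   # 2 (the third row), so x6 leaves the basis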
Simplex: Step 6
Perform the pivot operation to get the new tableau and the b.f.s.
Current tableau (pivot on the x1 column and the x6 row):

cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   1     3    -1     1     0     0  |     6
 0   x5   |   0     1     1     0     1     0  |     4
 0   x6   |   3     1     0     0     0     1  |     7
    c̄ row |   5     2     1     0     0     0  |   Z = 0

New iteration: find the entering variable.

cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   0    8/3   -1     1     0   -1/3 |   11/3
 0   x5   |   0     1     1     0     1     0  |     4
 5   x1   |   1    1/3    0     0     0    1/3 |    7/3
    c̄ row |   0    1/3    1     0     0   -5/3 | Z = 35/3

cB = (0, 0, 5), and c̄j = cj - cB Pj:
c̄2 = 2 - (0)(8/3) - (0)(1) - (5)(1/3) = 1/3
c̄3 = 1 - (0)(-1) - (0)(1) - (5)(0) = 1
c̄6 = 0 - (0)(-1/3) - (0)(0) - (5)(1/3) = -5/3
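The pivot itself is a Gauss-Jordan elimination on the pivot element a31 = 3. A sketch (added for illustration), with the constants stored in the last column of each row:

import numpy as np

# Tableau rows: [x1, x2, x3, x4, x5, x6 | b]
T = np.array([[1.0, 3.0, -1.0, 1.0, 0.0, 0.0, 6.0],
              [0.0, 1.0,  1.0, 0.0, 1.0, 0.0, 4.0],
              [3.0, 1.0,  0.0, 0.0, 0.0, 1.0, 7.0]])

pr, pc = 2, 0                    # pivot row (x6) and pivot column (x1)
T[pr] /= T[pr, pc]               # scale the pivot row so the pivot element becomes 1
for i in range(T.shape[0]):      # eliminate the pivot column from the other rows
    if i != pr:
        T[i] -= T[i, pc] * T[pr]

print(T)
# Row x4: [0, 8/3, -1, 1, 0, -1/3 | 11/3]
# Row x5: [0, 1, 1, 0, 1, 0 | 4]
# Row x1: [1, 1/3, 0, 0, 0, 1/3 | 7/3]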
Final Tableau
cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   0    8/3   -1     1     0   -1/3 |   11/3
 0   x5   |   0     1     1     0     1     0  |     4
 5   x1   |   1    1/3    0     0     0    1/3 |    7/3
    c̄ row |   0    1/3    1     0     0   -5/3 | Z = 35/3

x3 enters the basis, x5 leaves the basis.

cB  Basis |  x1    x2    x3    x4    x5    x6  | Constants
    cj    |   5     2     1     0     0     0  |
 0   x4   |   0   11/3    0     1     1   -1/3 |   23/3
 1   x3   |   0     1     1     0     1     0  |     4
 5   x1   |   1    1/3    0     0     0    1/3 |    7/3
    c̄ row |   0   -2/3    0     0    -1   -5/3 | Z = 47/3

All relative profit coefficients c̄j are now nonpositive, so this basic feasible solution is optimal: x1 = 7/3, x2 = 0, x3 = 4, with Z = 47/3.
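The whole sequence of tableaus above can be reproduced with a compact tableau-simplex routine. The sketch below is an illustration added to these notes (it assumes the standard form used in the example: maximize c'x subject to Ax ≤ b, x ≥ 0, with b ≥ 0), not the exact code behind the slides.

import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for: maximize c'x subject to A x <= b, x >= 0 (b >= 0)."""
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)]).astype(float)   # [A | I | b]
    cost = np.concatenate([c, np.zeros(m)])        # objective row, slacks cost 0
    basis = list(range(n, n + m))                  # slack variables start in the basis
    while True:
        c_bar = cost - cost[basis] @ T[:, :-1]     # relative profit coefficients c̄j
        s = int(np.argmax(c_bar))                  # entering column: largest positive c̄j
        if c_bar[s] <= 1e-10:                      # no positive c̄j left: optimal
            break
        col, rhs = T[:, s], T[:, -1]
        ratios = np.where(col > 1e-10, rhs / np.where(col > 1e-10, col, 1.0), np.inf)
        r = int(np.argmin(ratios))                 # departing row: minimum ratio bi / ais
        if not np.isfinite(ratios[r]):
            raise ValueError("problem is unbounded")
        T[r] /= T[r, s]                            # pivot (Gauss-Jordan elimination)
        for i in range(m):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], cost[basis] @ T[:, -1]

# Example problem from the slides: max 5x1 + 2x2 + x3 subject to the three constraints.
c = np.array([5.0, 2.0, 1.0])
A = np.array([[1.0, 3.0, -1.0],
              [0.0, 1.0,  1.0],
              [3.0, 1.0,  0.0]])
b = np.array([6.0, 4.0, 7.0])
x, z = simplex_max(c, A, b)
print(x, z)    # [2.333..., 0, 4]  15.666...  ->  x1 = 7/3, x2 = 0, x3 = 4, Z = 47/3

The entering and departing choices follow the rules stated earlier: the largest positive c̄j enters, and the row with the minimum ratio bi / ais leaves.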
5.2. Nonlinear Programming
Introduction
• This part of the course deals with techniques that are applicable to the solution of the constrained optimization problem:

  Find X that minimizes f(X)
  subject to gj(X) ≤ 0, j = 1, 2, ..., m, and hk(X) = 0, k = 1, 2, ..., p.
The random search methods described for unconstrained minimization can be used with minor modifications to solve a constrained optimization problem. The basic procedure can be described by the following steps (a code sketch follows the list):
1. Generate a trial design vector using one random number for each design variable.
2. Verify whether the constraints are satisfied at the trial design vector. Usually, the equality constraints are considered satisfied whenever their magnitudes lie within a specified tolerance. If any constraint is violated, continue generating new trial vectors until a trial vector that satisfies all the constraints is found.
3. If all the constraints are satisfied, retain the current trial vector as the best design if it gives
a reduced objective function value compared to the previous best available design.
Otherwise, discard the current feasible trial vector and proceed to step 1 to generate a new
trial design vector.
4. The best design available at the end of generating a specified maximum number of trial
design vectors is taken as the solution of the constrained optimization problem.
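A minimal sketch of the four steps above, assuming box bounds on the design variables, inequality constraints gj(X) ≤ 0, an equality-constraint tolerance, and a small hypothetical test problem (the function names and the problem itself are placeholders, not from the slides):

import numpy as np

rng = np.random.default_rng(0)

def random_search_constrained(f, gs, hs, lower, upper, n_trials=20000, tol=1e-3):
    """Steps 1-4: generate random trial vectors, keep the best feasible one."""
    best_x, best_f = None, np.inf
    for _ in range(n_trials):
        x = rng.uniform(lower, upper)                       # step 1: trial design vector
        feasible = all(g(x) <= 0 for g in gs) and \
                   all(abs(h(x)) <= tol for h in hs)        # step 2: constraint check
        if feasible and f(x) < best_f:                      # step 3: retain if improved
            best_x, best_f = x, f(x)
    return best_x, best_f                                   # step 4: best design found

# Hypothetical example: minimize (x1-1)^2 + (x2-2)^2 with g1 = x1 + x2 - 2 <= 0.
x, fx = random_search_constrained(
    f=lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
    gs=[lambda x: x[0] + x[1] - 2],
    hs=[],
    lower=np.array([-3.0, -3.0]),
    upper=np.array([3.0, 3.0]))
print(x, fx)   # best feasible sample; the true constrained optimum is (0.5, 1.5), f = 0.5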
Direct Methods
RANDOM SEARCH METHODS
• It can be seen that several modifications can be made to the basic
procedure indicated above. For example, after finding a feasible trial
design vector, a feasible direction can be generated (using random
numbers) and a one-dimensional search can be conducted along the
feasible direction to find an improved feasible design vector.
• Another procedure involves constructing an unconstrained function, F(X), by adding a penalty for violating any constraint, for example:

  F(X) = f(X) + a Σj [max(0, gj(X))]² + b Σk [hk(X)]²

This indicates that while minimizing the objective function f(X), a positive penalty is added whenever a constraint is violated, the penalty being proportional to the square of the amount of violation. The values of the constants a and b can be adjusted to change the contributions of the penalty terms relative to the magnitude of the objective function.
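A sketch of such a penalized function, using the squared-violation form described above (the specific constants and the test problem are assumptions for illustration):

import numpy as np

def make_penalized(f, gs, hs, a=1.0e3, b=1.0e3):
    """F(X) = f(X) + a*sum(max(0, gj)^2) + b*sum(hk^2): the penalty grows with
    the square of the amount of constraint violation."""
    def F(x):
        viol_g = sum(max(0.0, g(x)) ** 2 for g in gs)   # inequality violations gj(X) > 0
        viol_h = sum(h(x) ** 2 for h in hs)             # equality violations hk(X) != 0
        return f(x) + a * viol_g + b * viol_h
    return F

# F can now be minimized with any unconstrained method (random search, BFGS, ...).
F = make_penalized(f=lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                   gs=[lambda x: x[0] + x[1] - 2], hs=[])
print(F(np.array([0.5, 1.5])))   # feasible (boundary) point: zero penalty, F = f = 0.5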
• Note that the random search methods are not efficient compared to the other methods described in this chapter. However, they are very simple to program and are usually reliable in finding a nearly optimal solution with a sufficiently large number of trial vectors. Also, these methods can find a near-global optimum even when the feasible region is nonconvex.
Direct Methods
SEQUENTIAL LINEAR PROGRAMMING
• In the sequential linear programming (SLP) method, the solution of the original
nonlinear programming problem is found by solving a series of linear programming
problems.
• The resulting LP problem is solved using the simplex method to find the new
design vector Xi+1.
• If Xi+1 does not satisfy the stated convergence criteria, the problem is relinearized
about the point Xi+1 and the procedure is continued until the optimum solution X* is
found.
Direct Methods
SEQUENTIAL LINEAR PROGRAMMING
• However, by relinearizing the problem about the new point and repeating
the process, we can achieve convergence to the solution of the original
problem in a few iterations.
Notice that the LP problem in the above equation may sometimes have an
unbounded solution. This can be avoided by formulating the first
approximating LP problem by considering only the following constraints:
li ≤ xi ≤ ui,   i = 1, 2, ..., n

where li and ui represent the lower and upper bounds on xi, respectively.
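A sketch of the SLP loop described above, added for illustration: the objective is taken as linear, the constraints are linearized about the current point with finite-difference gradients, each new linearization is added to the LP (as described on the next slides), and the LP is solved with scipy.optimize.linprog within the bounds li ≤ xi ≤ ui. The test problem at the end is hypothetical.

import numpy as np
from scipy.optimize import linprog, approx_fprime

def slp_cutting_planes(c, gs, x0, lower, upper, max_iter=50, tol=1e-5):
    """SLP sketch for a linear objective c'x and nonlinear constraints gj(x) <= 0.
    Each iteration adds the linearizations gj(xi) + grad_gj'(x - xi) <= 0 to the
    LP and re-solves it, relinearizing about the new LP optimum."""
    x = np.asarray(x0, dtype=float)
    A_rows, b_rows = [], []
    for _ in range(max_iter):
        for g in gs:                               # add new cuts about the current point
            grad = approx_fprime(x, g, 1e-7)
            A_rows.append(grad)
            b_rows.append(grad @ x - g(x))
        res = linprog(c, A_ub=np.array(A_rows), b_ub=np.array(b_rows),
                      bounds=list(zip(lower, upper)), method="highs")
        if not res.success:
            break
        if np.linalg.norm(res.x - x) < tol:        # step is small: converged
            return res.x
        x = res.x
    return x

# Hypothetical convex test problem: minimize x1 - x2 subject to the elliptical
# constraint 3x1^2 - 2x1x2 + x2^2 - 1 <= 0, with bounds -2 <= xi <= 2.
x_star = slp_cutting_planes(
    c=np.array([1.0, -1.0]),
    gs=[lambda x: 3 * x[0] ** 2 - 2 * x[0] * x[1] + x[1] ** 2 - 1],
    x0=np.array([1.0, 1.0]),
    lower=[-2.0, -2.0], upper=[2.0, 2.0])
print(x_star)   # converges toward the constrained optimum (0, 1) for this problem

As in the figures described below, the intermediate LP optima lie outside the feasible region and approach the true optimum as more linearized constraints accumulate.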
Direct methods
SEQUENTIAL LINEAR PROGRAMMING
4. Solve the approximating LP problem to obtain the
solution vector Xi+1
• As can be seen from the figures, the optima of the approximating LP problems (e.g., points c, e, f, ...) lie outside the feasible region and converge toward the optimum point, x = a.
Note: This example was originally given by Kelly. Since the constraint boundary represents an ellipse, the problem is a convex programming problem. From the graphical representation, the optimum solution of the problem can be identified as x1* = 0, x2* = 1, and fmin = -1.
Sequential Linear Programming
Steps 1, 2, 3: Although we can start the solution from any initial point X1, to avoid a possibly unbounded solution we first take the bounds on x1 and x2 as

-2 ≤ x1 ≤ 2
-2 ≤ x2 ≤ 2
The equation
becomes
Sequential Linear Programming
• By adding this constraint to the previous LP problem, the new LP problem
becomes:
where [A] is an n × p matrix whose kth column denotes the gradient of the function hk. The above equations represent a set of n + p nonlinear equations in n + p unknowns (xi, i = 1, ..., n, and λk, k = 1, ..., p). These nonlinear equations can be solved using Newton's method. For convenience, we rewrite the above equations as:
Sequential Quadratic Programming
• According to Newton's method, the solution of the above equation can be found as:

where Yj is the solution at the start of the jth iteration, ∆Yj is the change in Yj necessary to generate the improved solution Yj+1, and [∇F]j = [∇F(Yj)] is the (n + p) × (n + p) Jacobian matrix of the nonlinear equations, whose ith column denotes the gradient of the function Fi(Y) with respect to the vector Y.
Sequential Quadratic Programming
• By substituting
into
we obtain:
Sequential Quadratic Programming Method
where
denotes the Hessian matrix of the Lagrange function. The first set of
equations in
The above equation can be solved to find the change in the design vector ∆Xj and the new values of the Lagrange multipliers, λj+1. The iterative process indicated by the above equation can be continued until convergence is achieved.
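A sketch of this Newton iteration on the Lagrange necessary conditions for an equality-constrained problem (added for illustration; the function names and the test problem are assumptions):

import numpy as np

def newton_kkt(grad_f, hess_L, h, jac_h, x0, lam0, max_iter=20, tol=1e-8):
    """Newton iteration on the Lagrange conditions for min f(X) s.t. h_k(X) = 0:
    at each step solve an (n+p) x (n+p) linear system for the step dX and the
    new multipliers lambda_{j+1}."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(max_iter):
        A = jac_h(x)                             # n x p matrix of constraint gradients
        KKT = np.block([[hess_L(x, lam), A],
                        [A.T, np.zeros((A.shape[1], A.shape[1]))]])
        rhs = -np.concatenate([grad_f(x), h(x)])
        sol = np.linalg.solve(KKT, rhs)
        dx, lam = sol[:x.size], sol[x.size:]     # dX and the new multipliers lambda_{j+1}
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x, lam

# Hypothetical example: minimize x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
x, lam = newton_kkt(
    grad_f=lambda x: 2 * x,
    hess_L=lambda x, lam: 2 * np.eye(2),         # Hessian of the Lagrange function
    h=lambda x: np.array([x[0] + x[1] - 1]),
    jac_h=lambda x: np.array([[1.0], [1.0]]),    # columns are the gradients of h_k
    x0=np.array([2.0, 0.0]), lam0=np.array([0.0]))
print(x, lam)   # expected: x = [0.5, 0.5], lambda = -1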
Sequential Quadratic Programming Method
• Now consider the following quadratic programming problem:
• Find ∆X that minimizes the quadratic objective function
in matrix form.
Sequential Quadratic Programming Method
• This shows that the original problem of the equation
• In fact, when inequality constraints are added to the original problem, the
quadratic programming problem of the above equation becomes:
Sequential Quadratic Programming Method
and
are constants used to ensure that the linearized constraints do not cut off the
feasible space completely. Typical values of these constants are given by:
Sequential Quadratic Programming Method
• The subproblem of the equation
where is given by
and the constants 0.2 and 0.8 in the above equation can be changed, based
on numerical experience.
Sequential Quadratic Programming Method
Example 1: Find the solution of the problem
becomes
Sequential Quadratic Programming Method
- Example 1
Solution:
Equation

gives β1 = β3 = 0 since g1 = g3 = 0, and β2 = 1.0 since g2 < 0, and hence the constraints of the equation
Sequential Quadratic Programming Method
- Example 1
Solution:
can be expressed as:
We solve this quadratic programming problem directly with the use of the
Kuhn-Tucker conditions which are given by:
Sequential Quadratic Programming Method
- Example 1
• The equations
with
Sequential Quadratic Programming Method
- Example 1
Sequential Quadratic Programming Method
- Example 1
• We can now start another iteration by defining a new quadratic
programming problem using
and continue the procedure until the optimum solution is found. Note that the objective function was reduced from 1.5917 to 1.38874 in one iteration, when X changed from X1 to X2.
Indirect methods
TRANSFORMATION TECHNIQUES
• If the constraints gj (X) are explicit functions of the variables xi and have
certain simple forms, it may be possible to make a transformation of the
independent variables such that the constraints are satisfied
automatically.
• Thus the optimal solution is given by x1* = 20 in., x2* = 20 in., x3* = 20 in., and the maximum volume = 8000 in³.
• Since the above equation does not allow any constraint to be violated, it
requires a feasible starting point for the search toward the optimum point.
However, in many engineering problems, it may not be very difficult to
find a point satisfying all the constraints gj(X) ≤ 0, at the expense of large
values of the objective function, f (X).
Interior Penalty Function Method
• Since the initial point as well as each of the subsequent points generated
in this method lies inside the acceptable region of the design space, the
method is classified as an interior penalty function formulation.
• Since the constraint boundaries act as barriers, the method is also known
as a barrier method.
The iteration procedure of this method can be summarized as follows (a code sketch is given after the steps):
1. Start with an initial feasible point X1 satisfying all the constraints with
strict inequality sign, that is, gj (X1) < 0 for j=1,2,...,m, and an initial
value of r1. Set k =1.
2. Minimize φ(X, rk) by using any of the unconstrained minimization methods and obtain the solution Xk*.
Interior Penalty Function Method
3. Test whether Xk* is the optimum solution of the original problem. If Xk* is found
to be optimum, terminate the process. Otherwise, go to the next step.
4. Find the value of the next penalty parameter rk+1 as rk+1 = c·rk, where c < 1.
5. Set the new value of k = k + 1, take the new starting point as X1 = Xk*, and go to step 2.
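A sketch of steps 1-5, added for illustration. It assumes the standard inverse-barrier form φ(X, rk) = f(X) - rk Σj 1/gj(X) (the definition of φ is not reproduced in these notes) and uses scipy.optimize.minimize for the unconstrained minimizations; the test problem is hypothetical.

import numpy as np
from scipy.optimize import minimize

def interior_penalty(f, gs, x1, r1=1.0, c=0.1, max_outer=15, tol=1e-6):
    """Interior penalty (barrier) method: minimize phi(X, r_k) for a decreasing
    sequence r_1 > r_2 > ... with r_{k+1} = c * r_k, c < 1."""
    def phi(x, r):
        g_vals = np.array([g(x) for g in gs])
        if np.any(g_vals >= 0):                  # stay strictly inside gj(X) < 0
            return np.inf
        return f(x) - r * np.sum(1.0 / g_vals)   # inverse barrier term (assumed form)

    x, r, f_prev = np.asarray(x1, float), r1, np.inf
    for _ in range(max_outer):                                  # steps 2-5
        x = minimize(lambda z: phi(z, r), x, method="Nelder-Mead").x
        if abs(f(x) - f_prev) < tol:                            # step 3: convergence test
            break
        f_prev, r = f(x), c * r                                 # step 4: r_{k+1} = c * r_k
    return x

# Hypothetical example: minimize (x1+1)^3/3 + x2 subject to g1 = 1 - x1 <= 0 and
# g2 = -x2 <= 0, started from a strictly feasible point.
x_star = interior_penalty(f=lambda x: (x[0] + 1) ** 3 / 3 + x[1],
                          gs=[lambda x: 1 - x[0], lambda x: -x[1]],
                          x1=np.array([2.0, 1.0]))
print(x_star)   # approaches x1 = 1, x2 = 0 as r decreases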
Although the algorithm is straightforward, there are a number of points to be considered in implementing the method. These are:
1. The starting feasible point X1 may not be readily available in some cases.
2. A suitable value of the initial penalty parameter (r1) has to be found.
3. A proper value has to be selected for the multiplication factor c.
4. Suitable convergence criteria have to be chosen to identify the optimum point.
5. The constraints have to be normalized so that each of them varies between -1 and 0 only.
Starting Feasible Point X1
• In most engineering problems, it will not be very difficult to find an initial
point X1 satisfying all the constraints gj (X1) < 0. As an example, consider
the problem of minimum weight design of a beam whose deflection under
a given loading condition has to remain less than or equal to a specified
value. In this case, one can always choose the cross-section of the beam to
be very large initially so that the constraint remains satisfied. The only
problem is that the weight of the beam (objective) corresponding to this
initial design will be very large. Thus, in most of the practical problems,
we will be able to find a feasible starting point at the expense of a large
value of the objective function.
• However, there may be some situations where a feasible design point cannot be found so easily. In such cases, the required feasible starting point can be found by using the interior penalty function method itself, as follows:
Starting Feasible Point X1
1. Choose an arbitrary point X1 and evaluate the constraints gj (X) at the
point X1. Since the point X1 is arbitrary, it may not satisfy all the
constraints with strict inequality sign. If r out of the total of m
constraints are violated, renumber the constraints such that the last r
constraints will become the violated ones, that is,
2. Identify the constraint that is violated most at the point X1, that is, find
the integer k such that
Starting Feasible Point X1
3. Now formulate a new optimization problem as:
subject to
This procedure is repeated until all the constraints are satisfied and a point X1 = XM is obtained for which gj(X) < 0, j = 1, 2, ..., m.
where the maximum allowable values are given by δmax = 0.5 in. and σmax = 20,000 psi. If a design vector X1 gives the values of g1 and g2 as -0.2 and -10,000, the contribution of g1 will be much larger than that of g2 (by an order of 10⁴) in the formulation of the φ function given by the equation
Normalization of Constraints
• This will badly affect the convergence rate during the minimization of the φ function. Thus it is advisable to normalize the constraints so that they vary between -1 and 0 as far as possible. For the constraints shown in the equations below,

where R1, R2, ..., Rm are selected such that the contributions of the different gj(X) to the φ function will be approximately the same at the initial point X1.
Normalization of Constraints
• When the unconstrained minimization of φ(X, rk) is carried out for a decreasing sequence of values of rk, the values of R1, R2, ..., Rm will not be altered; however, they are expected to be effective in reducing the disparities between the contributions of the various constraints to the φ function.
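A small sketch of one way to pick the scale factors, added for illustration: taking Rj = |gj(X1)| makes every normalized constraint equal to -1 at the initial point, so their contributions to φ start out comparable (the choice of Rj and the placeholder constraints are assumptions consistent with the description above).

import numpy as np

def normalize_constraints(gs, x1):
    """Return the scaled constraints gj(X)/Rj with Rj = |gj(X1)| (assumes gj(X1) != 0),
    so that every constraint equals -1 at the feasible initial point X1."""
    scaled = []
    for g in gs:
        Rj = abs(g(x1))                          # scale factor fixed at the initial point
        scaled.append(lambda x, g=g, Rj=Rj: g(x) / Rj)
    return scaled

# Deflection/stress-style placeholders giving g1(X1) = -0.2 and g2(X1) = -10,000,
# the values used in the text above.
gs = [lambda x: x[0] - 0.5, lambda x: x[1] - 20000.0]
x1 = np.array([0.3, 10000.0])
g1n, g2n = normalize_constraints(gs, x1)
print(g1n(x1), g2n(x1))   # both -1.0: comparable contributions to the φ function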
Example:
Subject to
g2(x1, x2) = -x2 ≤ 0
Solution: To illustrate the interior penalty function method, we use the
calculus method for solving the unconstrained minimization problem in
this case. Hence there is no need to have an initial feasible point X1.
Example 1
• The function is:
xi ≥ 0,   i = 1, 2, 3
Solution: The interior penalty function method coupled with the Davidon-Fletcher-
Powell method of unconstrained minimization and cubic interpolation method of
one-dimensional search is used to solve this problem. The necessary data is
assumed as follows:
Starting feasible point:
Example 2
• The optimum solution of this problem is known to be:
• Convergence Proof: The following theorem proves the convergence of
the interior penalty function method.