Balaji Opt Lecture8 Act
Optimization
(Nonlinear Programming & Time Series Simulation)
CVEN 5393
• Nonlinear Programming
• R-resources / demonstration
IV. Nonlinear Programming
A nonlinear programming example
x1 + x2 + x3 ≤ Q
x1, x2, x3 ≥ 0
In one general form, the nonlinear programming problem is to find
x = (x1, x2, …, xn) so as to maximize f(x), subject to gi(x) ≤ bi for i = 1, 2, …, m, and x ≥ 0,
where f(x) and the gi(x) are given functions, at least one of which is nonlinear.
A local maximum need not be a global maximum.
[Figure: a curve with several stationary points, labeled A through E, in which some local maxima are not the global maximum.]
The geometric interpretation indicates that f(x) is convex if it
"bends upward".
To be more precise, if f(x) possesses a second derivative
everywhere, then f(x) is convex if and only if
d²f(x)/dx² ≥ 0
for all possible values of x.
In terms of the second derivative of the function, the convexity
test is summarized below:
f(x) is convex if d²f/dx² ≥ 0 for all x, and concave if d²f/dx² ≤ 0 for all x;
it is strictly convex if d²f/dx² > 0 for all x, and strictly concave if d²f/dx² < 0 for all x.
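The second-derivative test can be checked numerically. A minimal sketch in Python (the functions and the interval are illustrative, not from the lecture):

```python
import numpy as np

# Central-difference approximation of the second derivative.
def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

f = lambda x: x**4          # convex everywhere: f''(x) = 12x^2 >= 0
g = lambda x: -np.log(x)    # convex on x > 0:   g''(x) = 1/x^2 >= 0

# Sample the test condition f''(x) >= 0 over an interval
xs = np.linspace(0.1, 3.0, 50)
print(all(second_derivative(f, x) >= 0 for x in xs))   # True
print(all(second_derivative(g, x) >= 0 for x in xs))   # True
```

By the sum property above, f(x) + g(x) would also pass the same test.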
(2) The sum of convex functions is a convex function, and the sum of concave
functions is a concave function.
Convex Sets
Condition for a local optimal solution to be a global optimal solution
(1) Nonlinear programming problems without constraints
If a nonlinear programming problem has no constraints, the objective
function being concave guarantees that a local maximum is a global
maximum. (Similarly, the objective function being convex ensures that a
local minimum is a global minimum.)
For any linear programming problem, its linear objective function is both
convex and concave and its feasible region is a convex set, so its optimal
solution is certainly a global optimal solution.
3. Classical Optimization Methods
Unconstrained optimization of a function of a single variable
If f(x) is a differentiable function of a single variable to be optimized with no constraints, a necessary condition for a particular solution x = x* to be optimal is that df/dx = 0 at x = x*.
Solutions: f = 2, f = −2
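The stationary-point calculation can be reproduced in a short script. The function below is hypothetical (the lecture's own example is not shown in these notes), chosen so that the stationary values come out to +2 and −2:

```python
# Hypothetical example: f(x) = x^3 - 3x, so f'(x) = 3x^2 - 3 = 0 at x = +/-1.
def f(x):
    return x**3 - 3 * x

def df(x):
    """First derivative f'(x) = 3x^2 - 3."""
    return 3 * x**2 - 3

stationary = [-1.0, 1.0]          # roots of f'(x) = 0
for x in stationary:
    assert abs(df(x)) < 1e-12     # confirms df/dx = 0 at each point
    print(f"x = {x:+.0f}, f(x) = {f(x):+.0f}")
```

The second derivative (6x) then distinguishes the local maximum at x = −1 from the local minimum at x = +1.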
4. Types of Nonlinear Programming Problems
Nonlinear programming problems come in many different shapes and forms.
No single algorithm can solve all these different types of problems. Algorithms
have been developed for various special types of nonlinear programming problems.
(1) Unconstrained optimization
(2) Convex programming
The assumptions in convex programming are that the objective function f(x) is concave and each constraint function gi(x) is convex.
This form is particularly convenient because, except for the complementarity constraint,
these conditions are linear programming constraints.
For any quadratic programming problem, its KKT conditions can be reduced to this same
convenient form containing just linear programming constraints plus one complementarity
constraint. In matrix notation, this general form is
where the elements of the column vector u are Lagrange multipliers and the elements of
the column vectors y and v are slack variables.
The original problem is reduced to the equivalent problem of finding a feasible solution to
these constraints.
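As a concrete check, these KKT conditions can be verified numerically on a small instance. The quadratic program below is hypothetical (not the lecture's example), with the solution worked out by hand:

```python
import numpy as np

# Hypothetical quadratic program:
#   maximize f(x) = c^T x - (1/2) x^T Q x   s.t.  A x <= b, x >= 0
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite -> f concave
c = np.array([8.0, 4.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

# Candidate solution found by hand: x = (2, 0) with multiplier u = 4.
x = np.array([2.0, 0.0])
u = np.array([4.0])

# Recover the slack vectors of the KKT system
#   Qx + A^T u - y = c^T,   Ax + v = b
y = Q @ x + A.T @ u - c    # dual slacks
v = b - A @ x              # primal slacks

# All four vectors are nonnegative and x^T y + u^T v = 0, so (x, u, y, v)
# is feasible for the KKT system and x is therefore optimal.
print(y, v, x @ y + u @ v)
```

Because the problem is a concave QP, satisfying these conditions is sufficient as well as necessary for optimality.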
Modified simplex method
The KKT conditions for quadratic programming are nothing more than linear programming
constraints, except for the complementarity constraint.
The complementarity constraint simply implies that it is not permissible for both
complementary variables of any pair to be basic variables when BF (basic feasible) solutions are considered.
As a result, the problem reduces to finding an initial BF solution to any linear programming
problem that has the constraints shown below (obtained from the KKT conditions for
quadratic programming ).
In the simple case where cᵀ ≤ 0 and b ≥ 0, a feasible solution is immediately available
(x = 0, u = 0, y = −cᵀ, v = b); it satisfies the above constraints and thus is the optimal
solution for the quadratic programming problem.
In the case where some element of cᵀ is positive or some element of b is negative, we have to
introduce artificial variables, as we do for the Big M method or the two-phase method.
So we use phase 1 of the two-phase method to find a BF solution satisfying the constraints
obtained from the KKT conditions .
Specifically, we apply the simplex method with one modification to the following linear
programming problem: minimize Z = Σj zj (the sum of the artificial variables),
subject to the linear programming constraints obtained from the KKT conditions, but with
the artificial variables zj included.
The one modification in the simplex method is a restricted-entry rule for
selecting the entering basic variable: a nonbasic variable is eligible to enter the basis
only if its complementary variable is not currently a basic variable, so every BF solution
encountered satisfies the complementarity constraint.
Example to illustrate the modified simplex method
As can be verified from the convexity test, f(x1, x2) is strictly concave, so the modified
simplex method can be applied.
After the artificial variables are introduced, the linear programming problem to be
addressed by the modified simplex method is
For each of the three pairs of complementary variables, (x1, y1), (x2, y2), and (u1, v1),
whenever one of the two variables already is a basic variable, the other variable is excluded
as a candidate for the entering basic variable.
The initial set of basic variables gives an initial BF solution that satisfies the
complementarity constraint.
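The exclusion rule is simple to state in code. A sketch with illustrative variable names (z1 is an artificial variable, whose complement is never consulted):

```python
# Restricted-entry rule for the modified simplex method: a nonbasic
# variable may enter the basis only if its complementary partner is not
# currently basic, which preserves x^T y + u^T v = 0 at every BF solution.
def eligible_entering(nonbasic, basic, complement):
    """complement maps each variable name to its complementary partner."""
    return [j for j in nonbasic if complement[j] not in basic]

complement = {"x1": "y1", "y1": "x1", "x2": "y2", "y2": "x2",
              "u1": "v1", "v1": "u1"}
basic = {"y1", "v1", "z1"}                # current basis (illustrative)
print(eligible_entering(["x1", "x2", "u1"], basic, complement))
# x1 is excluded (y1 is basic) and u1 is excluded (v1 is basic),
# so only x2 remains eligible: ['x2']
```

A full implementation would apply this filter before the usual most-negative-reduced-cost choice.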
Separable programming assumes that the objective function is a sum f(x) = Σj fj(xj) of
concave functions of the individual variables, so that each fj(xj) has a shape such as
either case shown in the figure below over the feasible range of values of xj.
In case 1, the slope decreases only at certain breakpoints, so that fj(xj) is a piecewise
linear function (a sequence of connected line segments).
In case 2, the slope may decrease continuously as xj increases, so that fj(xj) is a general
concave function. Any such function can be approximated as closely as desired by a
piecewise linear function.
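The approximation idea for case 2 can be sketched as follows, using f(x) = √x on [0, 4] with arbitrarily chosen breakpoints:

```python
import math

# Approximate the concave function sqrt(x) by a piecewise linear function
# that interpolates it between hypothetical breakpoints.
breaks = [0.0, 0.25, 1.0, 2.25, 4.0]

def pw_sqrt(x):
    """Linear interpolation of sqrt between successive breakpoints."""
    for lo, hi in zip(breaks, breaks[1:]):
        if x <= hi:
            t = (x - lo) / (hi - lo)
            return (1 - t) * math.sqrt(lo) + t * math.sqrt(hi)
    return math.sqrt(x)

# Uniform error over a grid; it shrinks as breakpoints are added.
max_err = max(abs(pw_sqrt(k / 100) - math.sqrt(k / 100)) for k in range(401))
print(max_err)
```

Because √x is concave, each chord lies below the curve, so the approximation is itself concave, which is what the reformulation below relies on.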
Reformulation as a Linear Programming Problem
The key to rewriting a piecewise linear function as a linear function is to use a separate
variable for each line segment.
To illustrate, consider the piecewise linear function in case 1 or the approximating
piecewise linear function in case 2
Introduce three new variables xj1, xj2, and xj3 and set
xj = xj1 + xj2 + xj3,
where
0 ≤ xj1 ≤ uj1, 0 ≤ xj2 ≤ uj2, 0 ≤ xj3 ≤ uj3
(the ujk are the lengths of the three segments), so that
fj(xj) = sj1 xj1 + sj2 xj2 + sj3 xj3,
where sjk is the slope of segment k.
Special restriction: xj2 = 0 whenever xj1 < uj1, and xj3 = 0 whenever xj2 < uj2.
This restriction permits the segment variables to fill only from left to right, so the sum above reproduces fj(xj).
To write down the complete model in the preceding notation, let nj be the number of line
segments in fj(xj), so that fj(xj) = Σk sjk xjk and xj = Σk xjk, with 0 ≤ xjk ≤ ujk for
k = 1, 2, …, nj.
Unfortunately, the special restriction does not fit into the required format for linear
programming constraints.
However, our fj (xj) are assumed to be concave so that an algorithm for maximizing f (x)
automatically gives the highest priority to using xj1 when increasing xj from zero, the next
highest priority to using xj2, and so on, without even including the special restriction
explicitly in the model. This observation leads to the following key property.
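This key property (segment variables fill from left to right because the slopes decrease) can be sketched as a greedy evaluation; the slopes and segment lengths are illustrative:

```python
# One concave piecewise linear f_j with decreasing slopes per segment.
slopes = [4.0, 2.0, 1.0]        # s_j1 > s_j2 > s_j3 (concavity)
widths = [1.0, 2.0, 3.0]        # segment lengths u_j1, u_j2, u_j3

def f_segments(xj):
    """Fill x_j1, x_j2, x_j3 from the left, then sum slope * length."""
    remaining, total = xj, 0.0
    for s, u in zip(slopes, widths):
        seg = min(remaining, u)   # special restriction: fill in order
        total += s * seg
        remaining -= seg
    return total

print(f_segments(2.5))   # 4*1 + 2*1.5 = 7.0
```

A maximizing algorithm makes the same left-to-right choice on its own, since the steepest segment always offers the largest gain per unit of xj.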
After obtaining an optimal solution for the model, you then can calculate each xj = Σk xjk.
HUGHES-MCMAKEE-NOTES\CHAPTER-11.PDF
HUGHES-MCMAKEE-NOTES\CHAPTER-08.PDF