NLPQLY: An Easy-To-Use Fortran Implementation of A Sequential Quadratic Programming Algorithm - User's Guide
E-mail:
Web: http://www.klaus-schittkowski.de
Date: August, 2010
Abstract
The Fortran subroutine NLPQLY simplifies the numerical solution of nonlinear programming problems by calling the standard SQP code NLPQLP, where the calling sequence is simplified as much as possible. A user has to provide objective and constraint function values in the same code which calls NLPQLY. Derivatives are internally approximated by forward differences. The usage of the code is documented and illustrated by an example.
Introduction
We consider the general optimization problem to minimize an objective function f under nonlinear equality and inequality constraints and bounds on the variables,

\[
\begin{array}{cl}
\min & f(x) \\
x \in \mathbb{R}^n : & g_j(x) = 0 , \quad j = 1, \ldots, m_e , \\
 & g_j(x) \ge 0 , \quad j = m_e + 1, \ldots, m , \\
 & x_l \le x \le x_u .
\end{array}
\tag{1}
\]
Sequential quadratic programming or SQP methods belong to the most powerful nonlinear programming algorithms we know today for solving differentiable nonlinear programming problems of the form (1). The theoretical background is described, e.g., in Stoer [30] in form of a review, or in Spellucci [29] in form of an extensive text book. From the more practical point of view, SQP methods are also introduced in the books of Papalambros and Wilde [5] and Edgar and Himmelblau [1]. Their excellent numerical performance was tested and compared with other methods in Schittkowski [9], and for many years they have been among the most frequently used algorithms for solving practical optimization problems.
To facilitate the notation of this section, we assume that upper and lower bounds x_u and x_l are not handled separately, i.e., we consider the somewhat simpler formulation

\[
\begin{array}{cl}
\min & f(x) \\
x \in \mathbb{R}^n : & g_j(x) = 0 , \quad j = 1, \ldots, m_e , \\
 & g_j(x) \ge 0 , \quad j = m_e + 1, \ldots, m .
\end{array}
\tag{2}
\]

It is assumed that all problem functions f(x) and g_j(x), j = 1, \ldots, m, are continuously differentiable on \mathbb{R}^n.
The basic idea is to formulate and solve a quadratic programming subproblem in each iteration which is obtained by linearizing the constraints and approximating the Lagrangian function

\[
L(x, u) := f(x) - \sum_{j=1}^{m} u_j g_j(x)
\tag{3}
\]

quadratically, where x \in \mathbb{R}^n is the primal variable and u = (u_1, \ldots, u_m)^T \in \mathbb{R}^m the multiplier vector.
To formulate the quadratic programming subproblem, we proceed from given iterates x_k \in \mathbb{R}^n, an approximation of the solution, v_k \in \mathbb{R}^m, an approximation of the multipliers, and B_k \in \mathbb{R}^{n \times n}, an approximation of the Hessian of the Lagrangian function. Then one has to solve the quadratic programming problem

\[
\begin{array}{cl}
\min & \tfrac{1}{2}\, d^T B_k d + \nabla f(x_k)^T d \\
d \in \mathbb{R}^n : & \nabla g_j(x_k)^T d + g_j(x_k) = 0 , \quad j = 1, \ldots, m_e , \\
 & \nabla g_j(x_k)^T d + g_j(x_k) \ge 0 , \quad j = m_e + 1, \ldots, m .
\end{array}
\tag{4}
\]
Let d_k be the optimal solution and u_k the corresponding multiplier of this subproblem. A new iterate is obtained by

\[
\begin{pmatrix} x_{k+1} \\ v_{k+1} \end{pmatrix}
:=
\begin{pmatrix} x_k \\ v_k \end{pmatrix}
+ \alpha_k
\begin{pmatrix} d_k \\ u_k - v_k \end{pmatrix} .
\tag{5}
\]
Although we are able to guarantee that the matrix B_k is positive definite, it is possible that (4) is not solvable due to inconsistent constraints. One possible remedy is to introduce an additional variable \delta \in \mathbb{R}, leading to a modified quadratic programming problem, see Schittkowski [16] for details.
The steplength parameter \alpha_k is required in (5) to enforce global convergence of the SQP method, i.e., the approximation of a point satisfying the necessary Karush-Kuhn-Tucker optimality conditions when starting from arbitrary initial values, typically a user-provided x_0 \in \mathbb{R}^n and v_0 = 0, B_0 = I. The steplength \alpha_k should satisfy at least a sufficient decrease condition of a merit function \phi_r(\alpha) given by

\[
\phi_r(\alpha) := \psi_r\!\left( \begin{pmatrix} x \\ v \end{pmatrix} + \alpha \begin{pmatrix} d \\ u - v \end{pmatrix} \right)
\tag{6}
\]
with a suitable penalty function \psi_r(x, v). Implemented is the augmented Lagrangian function

\[
\psi_r(x, v) := f(x) \;-\; \sum_{j \in J} \left( v_j g_j(x) - \tfrac{1}{2}\, r_j\, g_j(x)^2 \right) \;-\; \tfrac{1}{2} \sum_{j \in K} v_j^2 / r_j ,
\tag{7}
\]

with J := \{1, \ldots, m_e\} \cup \{ j : m_e < j \le m,\; g_j(x) \le v_j / r_j \} and K := \{1, \ldots, m\} \setminus J, cf. Schittkowski [14]. The objective function is penalized as soon as an iterate leaves the feasible domain. The corresponding penalty parameters r_j, j = 1, \ldots, m, which control the degree of constraint violation, must be chosen carefully to guarantee a descent direction of the merit function, see Schittkowski [14],
\[
\phi'_{r_k}(0) \;=\; \nabla \psi_{r_k}(x_k, v_k)^T \begin{pmatrix} d_k \\ u_k - v_k \end{pmatrix} \;<\; 0 .
\tag{8}
\]
Finally, one has to approximate the Hessian matrix of the Lagrangian function in a suitable way. To avoid the calculation of second derivatives and to obtain a final superlinear convergence rate, the standard approach is to update B_k by the BFGS quasi-Newton formula, cf. Powell [7] or Stoer [30].
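For illustration only, the standard (unmodified) BFGS formula reads, with s_k := x_{k+1} - x_k and q_k := \nabla_x L(x_{k+1}, u_k) - \nabla_x L(x_k, u_k),

\[
B_{k+1} := B_k + \frac{q_k q_k^T}{q_k^T s_k} - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} .
\]

In practice, a safeguarded modification of q_k is commonly applied to keep B_{k+1} positive definite when q_k^T s_k is not sufficiently positive; see Powell [7] for details of such a modification.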
Program Documentation
The following steps are required to solve a nonlinear program with NLPQLY:

1. Choose starting values for the variables to be optimized and store them in an array X.

2. Compute objective and all constraint function values, and store them in a scalar variable F and an array G, respectively.

3. Set IFAIL=0 and execute NLPQLY.

4. If NLPQLY returns with IFAIL<0, compute objective and constraint function values at the variable values found in X, store them in F and G, and call NLPQLY again.

5. If NLPQLY terminates with IFAIL=0, the internal optimality criteria are satisfied. In case of IFAIL>0, an error occurred.
Usage:

      CALL NLPQLY (  M,      ME,     N,      X,      F,
     /               G,      XL,     XU,     ACC,    MAXIT,
     /               IPRINT, IOUT,   IFAIL,  WA,     LWA,
     /               KWA,    LKWA,   ACT,    LACT )
Definition of the parameters:

M :           Total number of constraints.
ME :          Number of equality constraints.
N :           Number of optimization variables.
X(N) :        On input, initial guess for the optimal solution; on return, the last computed iterate, i.e., the approximate solution on successful termination.
F :           Objective function value at X, to be provided by the calling program whenever NLPQLY requests new function values.
G(M) :        Constraint function values at X, to be provided by the calling program together with F.
XL(N), XU(N) : Lower and upper bounds of the variables.
ACC :         Desired final accuracy, i.e., the stopping tolerance of the algorithm.
MAXIT :       Maximum number of iterations.
IPRINT :      Controls the desired output level of NLPQLY.
IOUT :        Fortran unit number for the output of NLPQLY, e.g., 6.
IFAIL :       Reverse communication and error flag. Set IFAIL=0 before the first call. A negative value on return indicates that new function values are to be computed at the current X; IFAIL=0 indicates successful termination, IFAIL>0 that an error occurred.
WA(LWA) :     Double precision working array of length LWA.
LWA :         Length of WA; the example below uses LWA = 3*N*N + M*N + 45*N + 12*M + 200.
KWA(LKWA) :   Integer working array of length LKWA.
LKWA :        Length of KWA; the example below uses LKWA = N + 25.
ACT(LACT) :   Logical working array of length LACT.
LACT :        Length of ACT; the example below uses LACT = 2*M + 10.
The following situations could cause an error message:

1. The termination parameter ACC is too small, so that the numerical algorithm plays around with round-off errors without being able to improve the solution. In particular, the Hessian approximation of the Lagrangian function becomes unstable in this case. A straightforward remedy is to restart the optimization cycle with a larger stopping tolerance.

2. The constraints are contradictory, i.e., the set of feasible solutions is empty. There is no way to find out whether nonlinear and non-convex constraints are infeasible or not. Thus, the nonlinear programming algorithm will proceed until running into one of the error situations mentioned. In this case, the correctness of the model must be checked very carefully.

3. Constraints are feasible, but active constraints are degenerate, e.g., redundant. One should know that SQP algorithms assume the satisfaction of the so-called linear independence constraint qualification, i.e., that the gradients of the active constraints are linearly independent at each iterate and in a neighborhood of an optimal solution. In this situation, it is recommended to check the formulation of the model constraints.
However, some of the error situations may also occur if, because of inaccurate gradients, the quadratic programming subproblem does not yield a descent direction for the underlying merit function. In this case, one should try to improve the accuracy of the function evaluations, scale the model functions in a proper way, or start the algorithm from other initial values.
Important Note: The tolerance for approximating derivatives by a forward difference formula is set to the square root of the machine precision. This might be too small in case of inaccurate function values.
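As an illustration (the exact step size scaling used inside NLPQLY may differ), a forward difference approximation of this kind computes, with \eta := \sqrt{\varepsilon_m} and \varepsilon_m the machine precision,

\[
\frac{\partial f}{\partial x_i}(x) \;\approx\; \frac{f(x + \eta\, e_i) - f(x)}{\eta} , \qquad i = 1, \ldots, n ,
\]

where e_i denotes the i-th unit vector. If the function values themselves are only accurate to a level well above \varepsilon_m, the numerator is dominated by noise and a larger step size would be more appropriate.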
Example
To give an example how to organize the code, we consider Rosenbrock's post office problem, i.e., test problem TP37 of Hock and Schittkowski [3],
\[
\begin{array}{cl}
\min & -x_1 x_2 x_3 \\
x \in \mathbb{R}^3 : & x_1 + 2 x_2 + 2 x_3 \ge 0 , \\
 & 72 - x_1 - 2 x_2 - 2 x_3 \ge 0 , \\
 & 0 \le x_1 \le 42 , \quad 0 \le x_2 \le 42 , \quad 0 \le x_3 \le 42 .
\end{array}
\tag{9}
\]
The Fortran source code for executing NLPQLY is listed below. Gradients are approximated by forward differences. The function block inserted in the main program can be replaced by a subroutine call.
      IMPLICIT NONE
      INTEGER           N, M, LWA, LKWA, LACTIV
      PARAMETER        (N      = 3,
     /                  M      = 2,
     /                  LWA    = 3*N*N + M*N + 45*N + 12*M + 200,
     /                  LKWA   = N + 25,
     /                  LACTIV = 2*M + 10)
      INTEGER           KWA(LKWA), ME, MAXIT, IPRINT, IOUT, IFAIL, I,
     /                  NFUNC
      DOUBLE PRECISION  X(N), F, G(M), XL(N), XU(N), WA(LWA), ACC
      LOGICAL           ACTIVE(LACTIV)
C
C   Solver parameters, bounds, and starting point
C
      IOUT   = 6
      ACC    = 1.0D-8
      MAXIT  = 100
      IPRINT = 2
      ME     = 0
      IFAIL  = 0
      NFUNC  = 0
      DO I=1,N
         X(I)  = 10.0D0
         XL(I) = 0.0D0
         XU(I) = 42.0D0
      ENDDO
    1 CONTINUE
C============================================================
C
C   Objective and constraint function values of TP37;
C   this block can be replaced by a subroutine call.
C
C============================================================
      F    = -X(1)*X(2)*X(3)
      G(1) = X(1) + 2.0D0*X(2) + 2.0D0*X(3)
      G(2) = 72.0D0 - X(1) - 2.0D0*X(2) - 2.0D0*X(3)
      NFUNC = NFUNC + 1
C
      CALL NLPQLY (  M,      ME,     N,      X,      F,
     /               G,      XL,     XU,     ACC,    MAXIT,
     /               IPRINT, IOUT,   IFAIL,  WA,     LWA,
     /               KWA,    LKWA,   ACTIVE, LACTIV )
      IF (IFAIL.LT.0) GOTO 1
C
      WRITE(IOUT,1000) NFUNC
 1000 FORMAT(' *** Number of function calls: ',I3)
C
      STOP
      END
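For reference, the optimal solution of TP37 can be verified by hand: at the solution the second constraint of (9) is active, and maximizing x_1 x_2 x_3 subject to x_1 + 2 x_2 + 2 x_3 = 72 yields, by the inequality of arithmetic and geometric means,

\[
x_1 x_2 x_3 = \tfrac{1}{4}\, x_1 (2 x_2) (2 x_3) \;\le\; \tfrac{1}{4} \left( \frac{x_1 + 2 x_2 + 2 x_3}{3} \right)^3 = \tfrac{1}{4} \cdot 24^3 = 3456 ,
\]

with equality for x^* = (24, 12, 12)^T, i.e., f(x^*) = -3456. The iterates produced by NLPQLY should approach this point.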
Conclusions
We present an easy-to-use version of the SQP code NLPQLP, see Schittkowski [27], where only a limited set of parameters is passed and where only objective and constraint function values are to be provided by the user. Derivatives are evaluated internally by forward differences.
References
[1] Edgar T.F., Himmelblau D.M. (1988): Optimization of Chemical Processes, McGraw Hill
[2] Goldfarb D., Idnani A. (1983): A numerically stable method for solving strictly
convex quadratic programs, Mathematical Programming, Vol. 27, 1-33
[3] Hock W., Schittkowski K. (1981): Test Examples for Nonlinear Programming
Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 187, Springer
[4] Hock W., Schittkowski K. (1983): A comparative performance evaluation of 27
nonlinear programming codes, Computing, Vol. 30, 335-358
[5] Papalambros P.Y., Wilde D.J. (1988): Principles of Optimal Design, Cambridge
University Press
[6] Powell M.J.D. (1978): A fast algorithm for nonlinearly constrained optimization calculations, in: Numerical Analysis, G.A. Watson ed., Lecture Notes in Mathematics, Vol. 630, Springer
[7] Powell M.J.D. (1978): The convergence of variable metric methods for nonlinearly
constrained optimization calculations, in: Nonlinear Programming 3, O.L. Mangasarian, R.R. Meyer, S.M. Robinson eds., Academic Press
[8] Powell M.J.D. (1983): On the quadratic programming algorithm of Goldfarb and
Idnani. Report DAMTP 1983/Na 19, University of Cambridge, Cambridge
[9] Schittkowski K. (1980): Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 183, Springer
[10] Schittkowski K. (1981): The nonlinear programming method of Wilson, Han and
Powell. Part 1: Convergence analysis, Numerische Mathematik, Vol. 38, 83-114
[11] Schittkowski K. (1981): The nonlinear programming method of Wilson, Han and Powell. Part 2: An efficient implementation with linear least squares subproblems, Numerische Mathematik, Vol. 38, 115-127
[12] Schittkowski K. (1982): Nonlinear programming methods with linear least squares
subproblems, in: Evaluating Mathematical Programming Techniques, J.M. Mulvey
ed., Lecture Notes in Economics and Mathematical Systems, Vol. 199, Springer
[13] Schittkowski K. (1983): Theory, implementation and test of a nonlinear programming algorithm, in: Optimization Methods in Structural Design, H. Eschenauer, N. Olhoff eds., Wissenschaftsverlag
[14] Schittkowski K. (1983): On the convergence of a sequential quadratic programming
method with an augmented Lagrangian search direction, Mathematische Operationsforschung und Statistik, Series Optimization, Vol. 14, 197-216
[15] Schittkowski K. (1985): On the global convergence of nonlinear programming algorithms, ASME Journal of Mechanics, Transmissions, and Automation in Design,
Vol. 107, 454-458
[16] Schittkowski K. (1985/86): NLPQL: A Fortran subroutine solving constrained nonlinear programming problems, Annals of Operations Research, Vol. 5, 485-500
[17] Schittkowski K. (1987): More Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 282, Springer
[18] Schittkowski K. (1987): New routines in MATH/LIBRARY for nonlinear programming problems, IMSL Directions, Vol. 4, No. 3
[19] Schittkowski K. (1988): Solving nonlinear least squares problems by a general purpose SQP-method, in: Trends in Mathematical Optimization, K.-H. Hoffmann, J.-B. Hiriart-Urruty, C. Lemarechal, J. Zowe eds., International Series of Numerical Mathematics, Vol. 84, Birkhauser, 295-309
[20] Schittkowski K. (1992): Solving nonlinear programming problems with very many
constraints, Optimization, Vol. 25, 179-196
[21] Schittkowski K. (1994): Parameter estimation in systems of nonlinear equations,
Numerische Mathematik, Vol. 68, 129-142
[22] Schittkowski K. (2002): Test problems for nonlinear programming - user's guide, Report, Department of Mathematics, University of Bayreuth
[23] Schittkowski K. (2002): Numerical Data Fitting in Dynamical Systems, Kluwer
Academic Publishers, Dordrecht
[24] Schittkowski K. (2002): EASY-FIT: A software system for data fitting in dynamic systems, Structural and Multidisciplinary Optimization, Vol. 23, No. 2, 153-169
[25] Schittkowski K. (2003): QL: A Fortran code for convex quadratic programming - user's guide, Report, Department of Mathematics, University of Bayreuth
[26] Schittkowski K. (2003): DFNLP: A Fortran implementation of an SQP-Gauss-Newton algorithm - user's guide, Report, Department of Mathematics, University of Bayreuth
[27] Schittkowski K. (2009): NLPQLP: A Fortran implementation of a sequential quadratic programming algorithm with distributed and non-monotone line search - user's guide, Report, Department of Computer Science, University of Bayreuth
[28] Schittkowski K., Zillober C., Zotemantel R. (1994): Numerical comparison of nonlinear programming algorithms for structural optimization, Structural Optimization,
Vol. 7, No. 1, 1-28
[29] Spellucci P. (1993): Numerische Verfahren der nichtlinearen Optimierung, Birkhauser
[30] Stoer J. (1985): Foundations of recursive quadratic programming methods for solving nonlinear programs, in: Computational Mathematical Programming, K. Schittkowski, ed., NATO ASI Series, Series F: Computer and Systems Sciences, Vol. 15,
Springer