C6.2/B2.
Continuous Optimization
Resources
References
[1] A. R. Conn, N. I. M. Gould and Ph. L. Toint, Trust-Region Methods, SIAM, 2000.
[2] J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, (republished by) SIAM, 1996.
[3] R. Fletcher, Practical Methods of Optimization, 2nd edition, Wiley, 1987 (republished in paperback in 2000).
[4] P. Gill, W. Murray and M. H. Wright, Practical Optimization, Academic Press, 1981.
[5] N. I. M. Gould, An Introduction to Algorithms for Continuous Optimization, 2006. Available for download at http://www.numerical.rl.ac.uk/nimg/course/lectures/paper/paper.pdf.
[6] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 1999 (1st edition) or 2006 (2nd edition). All citations in the lecture notes apply to either edition, unless otherwise stated.
Comments on the bibliography
For a comprehensive yet highly accessible introduction to numerical methods for continuous (unconstrained and constrained) optimization problems, see [6], the most recommended (but not required) text for this course. Reference [5] is also a very good, though more succinct, introduction to the topic, with particular emphasis on nonconvex problems and a well-structured bibliography of fundamental optimization articles. The monograph [1] is the most comprehensive reference on trust-region methods to date. The remaining books in the bibliography are classics of the nonlinear (constrained and unconstrained) optimization literature.
Online and software resources
For an index of, and a guide to, existing public and commercial software for solving (constrained and unconstrained) optimization problems, see
http://neos-guide.org/Optimization-Guide
and follow the links to the Optimization Tree, for example. Other useful links related to optimization may be found on the same webpage (links to test problems, to the NEOS Server, which solves user-submitted optimization problems over the internet, to online repositories of optimization articles, etc.).
For general nonconvex, smooth, constrained and unconstrained problems, the following software packages are of high quality and reliable: KNITRO, IPOPT, GALAHAD, etc. MATLAB's Optimization Toolbox (available on departmental computers) contains built-in solvers for various problem classes; be careful which subroutine you choose! COIN-OR is a public software repository that you may find useful in the future.
An important aspect of optimization software is the interface that allows the user to input the problem to the solver; interfaces, and hence acceptable input formats, vary between solvers. At present, besides file input in the language the solver is written in, much software also accepts MATLAB input files and/or AMPL files (AMPL is a modelling language specifically designed for expressing optimization problems; see www.ampl.com), etc.
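To make the notion of a solver interface concrete, here is a minimal sketch in Python using SciPy's general-purpose routine scipy.optimize.minimize. SciPy is not one of the packages listed above; it is used here purely as an easily reproducible illustration of the kind of information (objective, derivatives, bounds, constraints) that a solver typically expects. The small quadratic problem below is invented for illustration only.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative problem (made up for this sketch):
    #   minimize (x1 - 1)^2 + (x2 - 2.5)^2
    #   subject to x1 + x2 <= 3 and x1, x2 >= 0.
    def f(x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    def grad_f(x):
        # Gradient supplied explicitly; many solvers can also approximate it.
        return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.5)])

    # This interface expects inequality constraints in the form g(x) >= 0.
    constraints = ({"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]},)
    bounds = [(0.0, None), (0.0, None)]

    result = minimize(f, x0=np.array([2.0, 0.0]), jac=grad_f,
                      method="SLSQP", bounds=bounds, constraints=constraints)
    print(result.x, result.fun)

Whatever the package (KNITRO, IPOPT, GALAHAD, a MATLAB toolbox routine, or an AMPL model passed to a solver), the same ingredients have to be communicated in one form or another.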
Optimization beyond this course
We briefly discuss generic features of the optimization models that we address in the course by placing them in the broader context of the field of optimization (with literature pointers for some of the material that is beyond the scope of this course). The underlined classes of problems are the ones we address in the course.
A. Smooth versus nonsmooth optimization One of the main assumptions in our course will be that the objective and the constraints are sufficiently smooth functions. When differentiability requirements are absent (but the functions remain continuous), analytic tools different from the ones presented in this course have to be used to identify a solution of the problem (subgradients, etc.). The difficulties spring from the unpredictability of the behaviour of, say, the objective function near one of its points of nonsmoothness. See J. B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer, 1991, and F. H. Clarke, Optimization and Nonsmooth Analysis, SIAM, 1990, for fundamental investigations of the theoretical issues connected to this class of problems. When even continuity of the objective and/or constraints is lacking, it is essential to know something about the structure of these functions or the nature of the discontinuities in order to have any hope of solving the problem.
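As a one-line illustration of the subgradient notion mentioned above (standard convex-analysis material, included here only for orientation): for a convex function f, the subdifferential at x collects the slopes of all affine minorants of f that touch it at x,
\[
\partial f(x) = \{\, g \in \mathbb{R}^n : f(y) \ge f(x) + g^T (y - x) \ \text{ for all } y \in \mathbb{R}^n \,\};
\]
for example, f(x) = |x| on the real line has \partial f(0) = [-1, 1], so the single gradient of the smooth case is replaced by a whole set at the point of nonsmoothness.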
B. Continuous versus discrete optimization In some optimization problems, the variables make sense only if they are integers (for example, the company in Example 2 may be a car manufacturer). Discrete optimization addresses these problems. Very often, a discrete optimization algorithm solves a sequence of continuous problems (see branch-and-bound methods for integer linear programming, which use linear programming relaxations of the integer program, or approximation algorithms using semidefinite programming relaxations for the max-cut combinatorial problem (Goemans & Williamson, 1995)). An active research area at present is mixed-integer nonlinear programming, where the objective and/or constraints are nonlinear functions and where there are also integrality requirements on at least some components of x. The literature on discrete (linear) optimization algorithms is vast (see, for example, A. Schrijver, Theory of Linear and Integer Programming, Wiley, 1986, and L. A. Wolsey, Integer Programming, Wiley, 1998).
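To recall what "relaxation" means in the branch-and-bound setting described above (standard material, not tied to any particular reference): an integer linear program and its linear programming relaxation differ only in the integrality requirement,
\[
\min_{x \in \mathbb{Z}^n} c^T x \ \ \text{subject to } Ax \le b
\qquad \text{relaxes to} \qquad
\min_{x \in \mathbb{R}^n} c^T x \ \ \text{subject to } Ax \le b .
\]
Branch-and-bound repeatedly solves such continuous relaxations, branches on variables that come out fractional, and uses the relaxations' optimal values as bounds to prune the search.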
C. Deterministic versus stochastic optimization It may happen that the model data for an optimization problem are not fully known at the time the model is set up and solved. For example, many financial planning and economic models share this feature, as they depend on future demand, prices and interest rates. Based on statistical estimates of the unknown parameters of the model, different scenarios are constructed and endowed with a certain probability. Stochastic optimization addresses the solution of these scenarios, which are deterministic problems and may be solved by methods that we study in this course, in order to optimize the expected performance of the model. See J. R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer, 1997.
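Schematically (a standard scenario-based formulation, stated here only to fix ideas): if the uncertain data take the values \xi_1, \dots, \xi_S with probabilities p_1, \dots, p_S, one minimizes the expected objective
\[
\min_{x} \ \mathbb{E}_{\xi}\big[f(x,\xi)\big] \;=\; \min_{x} \ \sum_{s=1}^{S} p_s\, f(x,\xi_s),
\]
which is a deterministic (typically large) problem of the kind treated in this course.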
D. Other classes of optimization problems Some more general optimization problems may include multiple objectives (multiobjective optimization) or infinitely many constraints (PDE-constrained optimization, etc.), and they are beyond the scope of this course. As an example of a multiobjective optimization problem, consider car design, where an engineer may wish to maximize crash resistance for safety purposes and minimize weight to reduce fuel consumption. New optimality concepts need to be employed for this class of problems (Pareto optimality, etc.), as R^n is only a partially ordered set (not every two points can be compared). See, for example, K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer, 1999. For PDE-constrained problems, the techniques in this course are very relevant, as each problem is discretized into (many) finitely-constrained problems to be solved; the latter pose a challenge as their dimensions are (very) large. For more on the very active research area of PDE-constrained optimization, see the work and websites of Omar Ghattas and Ekkehard W. Sachs, and the references therein. See also L. Biegler, O. Ghattas, M. Heinkenschloss and B. van Bloemen Waanders, editors, PDE-Constrained Optimization, Springer, 2003, for some of the latest developments.
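For orientation only, the Pareto notion mentioned above can be stated as follows (a standard definition, beyond the scope of this course): a feasible point x^* is Pareto optimal for objectives f_1, \dots, f_m if no feasible point is at least as good in every objective and strictly better in one, i.e.
\[
\nexists\ x \ \text{feasible with } f_i(x) \le f_i(x^*) \ \text{for all } i
\ \text{ and } \ f_j(x) < f_j(x^*) \ \text{for some } j.
\]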