pyOpt: A Python-Based Object-Oriented Framework for Nonlinear Constrained Optimization
DOI 10.1007/s00158-011-0666-3
RESEARCH PAPER
Received: 7 October 2010 / Revised: 8 April 2011 / Accepted: 18 April 2011 / Published online: 31 May 2011
© Springer-Verlag 2011
and rely instead on a specific solver interface that cannot generally be used in other contexts.

Another approach uses a standard programming language that enables the use of different optimizers. The framework accomplishes this by executing the optimizers via a system call interface that uses a set of standardized files for optimization problem definition input and solution output. Since a standard programming language is used in lieu of an algebraic modeling language, this type of framework adds the flexibility of handling existing application models programmed in the same language.

Some of the frameworks have taken advantage of object-oriented programming, which increases code reusability and enables the use of programming constructs similar to the mathematical constructs used by AMLs. While existing object-oriented optimization frameworks enable the solution of large and complex problems with existing application models, they have mainly been programmed in low-level languages such as C/C++. Examples of such frameworks include the Common Optimization Library INterface (COLIN) (Hart 2003), the OPT++ class library (Meza 1994; Meza et al. 2007), the DAKOTA toolkit (Eldred et al. 2007), and the Toolkit for Advanced Optimization (TAO) (Benson et al. 2007).

High-level programming languages, such as MATLAB and Python, not only provide a flexible environment to model optimization problems, but also enable easier interfacing of application models and optimizers written in different low-level languages.

The advantages of using a high-level language are evident in a number of recent projects. For example, pyIPOPT (Xu 2009), SciPy.optimize (Jones et al. 2001), and TANGO (Tan 2007) provide direct Python interfaces to compiled-language optimizers, while CVXOPT (Dahl and Vandenberghe 2008) and NLPy (Friedlander and Orban 2008) use interfaces to compiled-language linear algebra and matrix routines (ATLAS or BLAS/LAPACK) to solve convex optimization problems within Python. The goal of NLPy is somewhat different from the other projects: it provides the building blocks for researchers that want to develop their own optimizers. NLPy also provides an interface to the AMPL algebraic modeling language. Similarly, algebraic modeling capabilities are provided by other packages, such as Coopr.Pyomo (Hart 2009) in Python, and CVX (Grant and Boyd 2010) in MATLAB. Other projects such as YALMIP (Lofberg 2004) and TOMLAB (Holmström et al. 2010) in MATLAB, or puLP (Mitchell 2009) and OpenOpt (Kroshko 2010) in Python, also offer system call interfacing frameworks for different optimizers. In some projects, some of the optimization algorithms are themselves implemented in Python as opposed to a compiled language (Jacobs et al. 2004; Friedlander and Orban 2008; Kroshko 2010; Woodruff et al. 2011).

Multidisciplinary design optimization (MDO) applications in engineering have motivated yet another type of framework that, in addition to interfacing with optimizers, also provides a way to connect the various application solvers and formulate an MDO problem. OpenMDAO (Gray et al. 2010a, b), for example, is currently being developed at NASA for this purpose. Another example is pyMDO (Martins et al. 2009), which can automatically implement a number of different MDO architectures (Tedford and Martins 2010). Both of these frameworks are developed in Python and both can use the pyOpt interface to access the various optimizers. On the commercial side, a number of other MDO frameworks have also been developed (Hong et al. 2004).

The goal of the effort described herein is to develop an object-oriented framework programmed in Python that facilitates the formulation of optimization problems, the integration of application models developed in different programming languages, and the solution and benchmarking of multiple optimizers. This framework facilitates these tasks for both practitioners and developers alike. pyOpt focuses on the solution of constrained nonlinear optimization problems, i.e., problems of the following form:

    min   f(x)
     x
    subject to  g_j(x) = 0,            j = 1, ..., m_e,
                g_j(x) ≤ 0,            j = m_e + 1, ..., m,       (1)
                x_i^L ≤ x_i ≤ x_i^U,   i = 1, ..., n,

where x ∈ ℝ^n, f : ℝ^n → ℝ, and g : ℝ^n → ℝ^m. It is assumed that the objective function f(x) is a nonlinear function, and that the equality and inequality constraints can be either linear or nonlinear functions of the design variables x.

The main features of the pyOpt framework are described below.

Problem-optimizer independence Object-oriented constructs allow for true separation between the optimization problem formulation and its solution by different optimizers. This separation is a feature unique to pyOpt and enables a large degree of flexibility for problem formulation and solution, allowing the easy and efficient use of advanced optimization features, such as nested optimization and automatic solution refinement.

Flexible optimizer integration pyOpt already provides an interface to a number of optimizers and enables the integration of additional optimizers, both open-source and commercial. Furthermore, the interface allows for easy integration of gradient-based, gradient-free, and population-based optimization algorithms that solve the general constrained nonlinear optimization problem (1).
Multi-platform compatibility The base classes and all implemented optimizers can be used in different operating system environments, including Windows, Linux, and OS X, and within different computing architectures, including parallel systems.

Parallelization capability Using the message passing interface (MPI) standard, the framework can solve optimization problems where the function evaluations from the model applications run in parallel environments. For gradient-based optimizers, it can automatically parallelize the gradient evaluation, and for gradient-free optimizers it can parallelize the function evaluations.

History and warm-restart capability The user has the option to store the solver evaluation history during the optimization process. A partial history can also be used to "warm-restart" the optimization.

Of these features, the automatic solution refinement, the parallel capability, and the warm-restart are currently unique to pyOpt. These features are demonstrated in Section 4.

This article is organized as follows. The next section outlines the software implementation philosophy and the programming language selection. Section 3 describes the class design in pyOpt and lists the optimization algorithms integrated into the framework. Section 4 demonstrates the solution of four optimization problems using pyOpt with multiple optimization algorithms. In the last section we present our conclusions.

2 Software design

The design of pyOpt is driven by the need to provide an easy-to-use optimization framework not only for practitioners, but also for developers. Different programming languages were considered for the development of pyOpt, but ultimately Python (Beazley 2006) was selected. Python is a free, high-level programming language that supports object-oriented programming and has a large following in the scientific computing community (Oliphant 2007; Langtangen 2008). Python fulfils all of our code design and development requirements according to the principles described below.

2.1 Clarity and usability

For optimization practitioners, the framework should be usable by someone who has only basic knowledge of optimization. An intuitive interface should therefore be provided in which the optimization problem formulation resembles its mathematical formulation. For developers, the framework should provide intuitive object-oriented class structures where new algorithms can be integrated and tested by a wide range of developers with diverse programming backgrounds. Python provides a clear and readable syntax with intuitive object orientation and a large number of data types and structures. It is highly stable and can be run in interactive mode, making it easy to learn and debug. The language supports user-defined raising and catching of exceptions, resulting in cleaner error handling. Moreover, automatic garbage collection is performed, which frees the programmer from the burden of memory management.

2.2 Extensibility

The framework and its programming language should provide a solid foundation for additional extensions or modifications to the framework architecture, to its classes and modeling routines, to the optimization algorithm interfaces, and to the user application models. Python provides a simple model for loading Python code developed by users. Additionally, it includes support for shared libraries and dynamic loading, so new capabilities can be dynamically integrated into Python applications.

2.3 Portability

An important requirement for the framework is that it must work on several computer architectures. Not only should it be easily ported across computer platforms, but it should also allow easy integration of user models and optimizers across computer platforms. Python is available on a large number of computer architectures and operating systems, so portability is typically not a limitation for Python-based applications.

2.4 Integration flexibility

The framework should also provide the flexibility to support both tight coupling integration (where a model or optimizer is directly linked into the framework) and loose coupling integration (where a model or optimizer is externally executed through system calls). Furthermore, application models and solvers developed in heterogeneous programming languages should be easily integrated into the framework. Python excels at interfacing with both high- and low-level languages. It was designed to interface directly with C, making the integration of C codes straightforward. Integration of Fortran and C++ codes can be done using freely available tools, such as F2PY (Peterson 2009) and SWIG (Blezek 1998), respectively, which automate the integration process
while enabling access to the original code functionality from Python.

2.5 Standard libraries

The framework should have access to a large set of libraries and tools to support additional capabilities, such as specialized numerical libraries, data integration tools, and plotting routines. Python includes a large set of standard libraries, facilitating the programming of a wide array of tasks. Furthermore, a large number of open-source libraries can be added, such as the scientific computing library SciPy, the numerical computing library NumPy, and the plotting library matplotlib. These libraries further extend the capabilities available to both optimization practitioners and developers.

2.6 Documentation

The programming language used for the framework development should be well documented, and should also provide tools to properly document the code and generate API documentation. An extensive set of articles, books, online documentation, newsgroups, and special interest groups is available for Python and its extended set of libraries. Furthermore, a large number of tools, such as pydoc, are available to automatically generate API documentation, and make it available in a variety of formats, including web pages.

2.7 Flexible licensing

To facilitate the use and distribution of pyOpt, both the programming language and the framework should have open-source licenses. Python is freely available and its open-source license enables the modification and distribution of Python-based applications with almost no restrictions.

3 Implementation

In the implementation of pyOpt, abstract classes have been used throughout to facilitate reuse and extensibility. Figure 1 illustrates the relationship between the main classes in the form of a unified modeling language (UML) class diagram (Arlow and Neustadt 2002). The class structure in pyOpt was developed based on the premise that the definition of an optimization problem should be independent of the optimizer. An optimization problem is defined by the Optimization abstract class, which contains class instances representing the design variables, constraints, and the objective function. Similarly, optimizers are defined by the Optimizer abstract class, which provides the methods necessary to interact with and solve an optimization problem. Each solution, as provided by a given optimizer instance, is stored as a Solution class instance. The details for each class are discussed below.

3.1 Optimization problem class

Any nonlinear optimization problem (1) can be represented by the Optimization class. The attributes of this class are the name of the optimization problem (name), the pointer to the objective function (objfun), and the dictionaries that contain the instances of each optimization variable (variables), constraint (constraints), and objective (objectives). Each design variable, constraint, and objective is defined with its own instance to provide greater flexibility for problem reformulation. The class provides methods that set, delete, and list one or more variable, constraint, and objective instances. For example, addCon instantiates a constraint and appends it to the optimization problem constraints set. The class also provides methods to add, delete, or list any optimization problem solution stored in the dictionary of solution instances (solutions).

The design variables are represented by the Variable class. Attributes of the class include a variable name (name), a variable type identifier (type), its current value (value), as well as its upper and lower bounds (upper and lower). Three different variable types can be used: continuous, integer, and discrete. If a variable type is continuous or integer, the user-specified upper and lower bounds are used directly. If a variable is defined as discrete, the user provides a set of choices (choices) and the lower and upper bounds are automatically set to represent the choice indices.

Similarly, the Constraint class allows the definition of equality and inequality constraints. The class attributes include the constraint name (name), a type identifier (type), and its value (value).

Finally, the Objective class is used to encapsulate the objective function value.

3.2 Optimization solution class

For a given Optimization instance, the Solution class stores information related to the optimum found by a given optimizer. The class inherits from the Optimization class, and hence it shares all attributes and methods of Optimization. This feature allows the user to perform automatic refinement of a solution with ease, where the solution of one optimizer is used directly as the initial point of another optimizer. Additional attributes of the
[Fig. 1: UML class diagram of the main pyOpt classes (Optimization, Variable, Objective, Constraint, Optimizer, Solution, Gradient, and History), showing their attributes and methods together with the class associations, dependencies, inheritance, and composition relationships]
class include details from the solver and its solution, such as the optimizer settings used, the computational time, and the number of evaluations required to solve the problem.

3.3 Optimization solver class

All optimization problem solvers inherit from the Optimizer abstract class. The Optimizer class attributes include the solver name (name), an optimizer type identifier (category), and dictionaries that contain the solver setup parameters (options) and message output settings (informs). The class provides methods to check and change default solver parameters (getOption, setOption), as well as a method that runs the solver for a given optimization problem (solve). As long as an optimization package is wrapped with Python, this class provides a common interface to interact with and solve an optimization problem as defined by the Optimization class. When the solver is instantiated, it inherits the Optimizer attributes and methods and is initialized with solver-specific options and messages. By making use of object-oriented polymorphism, the class performs all the solver-specific tasks required to obtain a solution. For example, each solver requires different array workspaces to be defined. Depending on the solver that is used, sensitivities can be calculated using the Gradient class. Once a solution has been obtained, it can be stored as a solution instance that is contained in the Optimization class, maintaining the separation between the problem being solved and the optimizer used to solve it. The history of the solver optimization can also be stored in the History class. A partially stored history can also be used to enable a "warm-restart" of the optimization.

3.4 Optimization solvers

A number of constrained optimization solvers are currently integrated into the framework. All these optimizers are designed to solve the general nonlinear optimization problem (1). They include traditional gradient-based optimizers, as well as gradient-free optimizers. A brief description of each optimizer currently implemented is presented below.
106 R.E. Perez et al.
3.4.1 SNOPT

This is a sparse nonlinear optimizer written in Fortran that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints (Gill et al. 2002). The algorithm consists of a sequential quadratic programming (SQP) algorithm that uses a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using a limited-memory quasi-Newton method.

3.4.2 NLPQL

This is another SQP method written in Fortran that solves problems with a smooth, continuously differentiable objective function and constraints (Schittkowski 1986). The algorithm uses a quadratic approximation of the Lagrangian function and a linearization of the constraints. To generate a search direction, a quadratic subproblem is formulated and solved. The line search can be performed with respect to two alternative merit functions, and the Hessian approximation is updated by a modified BFGS formula.

3.4.3 SLSQP

This optimizer is a sequential least squares programming algorithm (Kraft 1988). It is written in Fortran and uses the Han–Powell quasi-Newton method with a BFGS update of the B-matrix and an L1-test function in the step-length algorithm. The optimizer uses a slightly modified version of Lawson and Hanson's NNLS nonlinear least-squares solver (Lawson and Hanson 1974).

3.4.4 FSQP

This code, which is available in either C or Fortran, implements an SQP approach that is modified to generate feasible iterates (Lawrence and Tits 1996). In addition to handling general single-objective constrained nonlinear optimization problems, the code is also capable of handling multiple competing linear and nonlinear objective functions (minimax), linear and nonlinear inequality constraints, and linear and nonlinear equality constraints (Zhou and Tits 1996).

3.4.5 CONMIN

This is a Fortran implementation of the method of feasible directions. It solves the constrained problem by selecting a feasible search direction at each iteration and choosing a step size that improves the objective function.

3.4.6 MMA/GCMMA

This is a Fortran implementation of the method of moving asymptotes (MMA). MMA solves a sequence of subproblems that are convex approximations of the original one (Svanberg 1987). The generation of these subproblems is controlled by the so-called moving asymptotes, which both stabilize and speed up the convergence of the general process. A variant of the original algorithm (GCMMA) has also been integrated into the framework. The variant extends the original MMA functionality and guarantees convergence to some local minimum from any feasible starting point (Svanberg 1995).

3.4.7 KSOPT

This Fortran code reformulates the constrained problem into an unconstrained one using a composite Kreisselmeier–Steinhauser objective function (Kreisselmeier and Steinhauser 1979) to create an envelope of the objective function and set of constraints (Wrenn 1989). The envelope function is then optimized using a sequential unconstrained minimization technique (SUMT) (Fiacco and McCormick 1968). At each iteration, the unconstrained optimization problem is solved using the Davidon–Fletcher–Powell (DFP) algorithm.

3.4.8 COBYLA

This optimizer is an implementation of Powell's nonlinear derivative-free constrained optimization that uses a linear approximation approach (Powell 1994). The algorithm is written in Fortran and is a sequential trust-region algorithm that uses linear approximations of the objective and constraint functions.

3.4.9 SOLVOPT

This optimizer, which is available in either C or Fortran, uses a modified version of Shor's r-algorithm (Shor 1985) with space dilation to find a local minimum of nonlinear and non-smooth problems (Kuntsevich and Kappel 1997). The algorithm handles constraints using an exact penalization method (Kiwiel 1985).
3.4.10 ALPSO

This is a parallel augmented Lagrange multiplier particle swarm optimizer (Jansen and Perez 2011). It solves nonlinear non-smooth constrained problems using an augmented Lagrange multiplier approach to handle constraints. This algorithm has been used in challenging constrained optimization applications with multiple local optima, such as the aerostructural optimization of aircraft with non-planar lifting surfaces (Jansen et al. 2010). Other versions of particle swarm algorithms have also been used to optimize aircraft structures (Venter and Sobieszczanski-Sobieski 2004).

3.4.11 NSGA2

This optimizer is a non-dominating sorting genetic algorithm developed in C++ that solves non-convex and non-smooth single and multiobjective optimization problems (Deb et al. 2002). The algorithm attempts to perform global optimization, while enforcing constraints using a tournament selection-based strategy.

3.4.12 ALHSO

This Python code is an extension of a harmony search optimizer (Geem et al. 2001; Lee and Geem 2005) that handles constraints. It follows an approach similar to the augmented Lagrange multiplier approach used in ALPSO to handle constraints.

3.4.13 MIDACO

This optimizer implements an extended ant colony optimization to solve non-convex nonlinear programming problems (Schlüter et al. 2009). The algorithm is written in Fortran and handles constraints using an oracle penalty method (Schlüter and Gerdts 2009).

3.5 Optimization gradient class

Some of the solvers described above use gradient information. The Gradient class provides a unified interface for the gradient calculation. pyOpt provides an implementation of the finite-difference method (default setting) and the complex-step method (Martins et al. 2003). The complex-step method is implemented automatically by pyOpt for any code in Python; for other programming languages, the user must implement the method. pyOpt also allows users to define their own sensitivity calculation, such as a semi-analytic adjoint method or automatically differentiated code. A parallel gradient calculation option can be used for optimization problems with large numbers of design variables. The calculation of the gradients for the various design variables is automatically distributed over different processors using the message passing interface (MPI), and the Jacobian is then assembled and sent to the optimizer.

3.6 Optimization history class

When any of the Optimizer instances are called to solve an optimization problem, an option to store the solution history can be used. When this option is in effect, an instance of the History class is initialized. This class stores all the data associated with each call to the objective function. This data consists of the values for the design variables, the objective, the constraints, and, if applicable, the gradients. The History class opens two files when initialized: a binary file with the actual data and an ASCII file that stores the cues to that data. The data is flushed immediately to the files at each write call.

The History class allows the warm restart of a previously interrupted optimization, even when the actual optimization package does not support warm restarts. This feature works for any deterministic optimization algorithm and relies on the fact that deterministic algorithms follow exactly the same path when starting from the same point. If a history file exists for a previous optimization that finished prematurely for some reason (due to a time limit, or a convergence tolerance that was set too high), pyOpt can restart the optimization using that history file to provide the optimizer with the objective and constraint values for all the points in the path that was previously followed. Instead of recomputing the function values at these points, pyOpt provides the previously computed values until the end of the history. After the end of the history has been reached, the optimization continues with the new part of the path.

The cue file is read in at the initialization of the class. The position and number-of-values cues are then used to read in the required values, and only those values, from the binary file. The optimizer can be called with options for both storing a history and reading in a previous history. In this case, two instances of the history class are initialized: one in write mode and one in read mode. If the same name is used for both history files, the history instance in read mode is only maintained until all its history data has been read and used by the optimizer.

4 Examples

We illustrate the capabilities of pyOpt by solving four different optimization problems. The first three problems involve explicit analytic formulations, while the last problem is an engineering design example involving more complex numerical simulation.
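Several of the examples below request complex-step sensitivities (Section 3.5). The underlying approximation can be sketched in a few lines of plain Python; this is an illustrative stand-alone version, not pyOpt's Gradient class:

```python
def cs_derivative(f, x, h=1.0e-20):
    """Complex-step approximation df/dx = Im[f(x + ih)]/h (Martins et al. 2003).

    There is no subtractive cancellation, so the step can be made tiny and
    the result is accurate to near machine precision."""
    return f(complex(x, h)).imag / h

def fd_derivative(f, x, h=1.0e-7):
    """Forward finite difference, the default sensitivity mode in pyOpt."""
    return (f(x + h) - f(x)) / h

f = lambda x: x**3 - 2.0*x   # analytic derivative 3x^2 - 2, equal to 10 at x = 2
d_cs = cs_derivative(f, 2.0)
d_fd = fd_derivative(f, 2.0)
```

The finite-difference result carries a truncation error proportional to the step size, while the complex-step result does not, which is why the examples can safely use a step as small as 10^-20.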
This first example demonstrates the flexibility enabled in pyOpt by maintaining independence between the optimization problem and the optimization solver. The problem is

    min   −x1 x2 x3
    subject to  x1 + 2x2 + 2x3 − 72 ≤ 0
                −x1 − 2x2 − 2x3 ≤ 0
                0 ≤ x1 ≤ 42
                0 ≤ x2 ≤ 42
                0 ≤ x3 ≤ 42

    from pyOpt import Optimization
    from pyOpt import SNOPT
    from pyOpt import NLPQL

    def objfunc(x):
        f = -x[0]*x[1]*x[2]
        g = [0.0]*2
        g[0] = x[0] + 2.*x[1] + 2.*x[2] - 72.0
        g[1] = -x[0] - 2.*x[1] - 2.*x[2]
        fail = 0
        return f, g, fail

    # Instantiate the optimization problem
    opt_prob = Optimization('TP37 Constrained Problem', objfunc)
    opt_prob.addVarGroup('x', 3, 'c', lower=0.0, upper=42.0, value=10.0)
    opt_prob.addObj('f')
    opt_prob.addConGroup('g', 2, 'i')
    print opt_prob

    # Instantiate the optimizers
    snopt = SNOPT()
    nlpql = NLPQL()

    # Solve with SNOPT (default finite-difference sensitivities)
    snopt.setOption('Major feasibility tolerance', 1.0e-6)
    snopt(opt_prob)
    print opt_prob.solution(0)

    # Solve with NLPQL (complex-step sensitivities)
    nlpql.setOption('ACC', 1.0e-6)
    nlpql(opt_prob, sens_type='CS')
    print opt_prob.solution(1)

Fig. 2 Python source code that implements Problem 1

[Fig. 3: standardized solution output from the two sample solvers for Problem 1, listing for each solution the total time, total function evaluations, Lagrange multipliers, sensitivity type, and the objective, variable, and constraint values and bounds]

3. Instantiation of the optimization problem object and definition of the problem design variables, objective function, and constraints
4. Instantiation of the optimization solver objects, setting of the solver options, and solution of the problem by the solvers

Each optimizer is instantiated with a set of default options that work in most cases. This way, the likelihood of success is maximized for less experienced optimization practitioners. Furthermore, users that have experience with a given optimizer can easily modify the default set of options. In Fig. 2, for example, the first optimizer makes use of its default derivative estimation (finite differences), while the second optimizer makes use of the complex-step method (set by the "CS" input flag). The outputs of the solution by the two sample solvers are shown in Fig. 3. Since all solvers share the same Optimizer abstract class, the output format of the results is standardized to facilitate interpretation and comparison.

Since complete independence between the optimization problem and the solvers is maintained, it is easy to solve the same optimization problem instance with different optimizers. For example, Fig. 4 and Table 1 show a comparison
Table 1 Comparison of solutions for Problem 1 (solver, number of
function evaluations, and the error measures $\varepsilon_f$,
$\bar{\varepsilon}_f$, $\bar{\varepsilon}_x$, $\bar{\varepsilon}_g$)

$$
\begin{aligned}
\min_{x_i}\quad & \sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(1-x_i\right)^2\right]\\
\text{subject to}\quad & \sum_{i=1}^{n-1}\left[0.1-\left(x_i-1\right)^3-\left(x_{i+1}-1\right)\right]\le 0\\
& -5.12\le x_i\le 5.12,\quad i=1,\ldots,n
\end{aligned}
\qquad(5)
$$

where the constraint is active at the optimum.

Figure 5 lists the source for solving two versions of this
problem: one with 2 variables, and another one with 50. An
optimization problem is instantiated for each dimensionality
and solved using a SNOPT optimizer instance. When called,
the optimizer instance solves the optimization problem, as
defined by the provided problem instance, and returns its
solution to that instance, which stores it. This true separation
between the problem definition and the optimizer allows
for a clean and effortless solution of multiple problems by
calling instanced optimizers.

Fig. 5 Python source code that implements Problem 2

As in Problem 1, all optimizers start from the same initial
point, $(x_1, \ldots, x_n) = (4, \ldots, 4)$, and use their default
options. Sensitivities are computed using the complex-step
derivative approximation with a step size of $10^{-20}$.
Figures 6 and 7 show a comparison of the relative error
in the optimum objective function (4) and in the optimum
design variable values for the different optimizers versus
the number of design variables in the problem. Comparisons
of all gradient-based and gradient-free optimizers are
made with respect to the solution obtained by a reference
optimizer (SNOPT).

Similarly, Fig. 8 shows a comparison of the total number
of objective function evaluations requested by each
optimizer to solve the variable-dimensionality problem
described by (5) for an increasing number of design variables.

Fig. 6 Objective function accuracy versus dimensionality of Problem 2
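As a minimal sketch (not the actual listing of Fig. 5, whose variable registration via `addVar`/`addObj` is omitted here), the objective and constraint of (5) can be written as a pyOpt-style callback that returns the objective value, the list of constraint values, and a failure flag:

```python
def objfunc(x):
    """Generalized Rosenbrock objective and aggregated cubic
    constraint of (5), in the pyOpt callback convention:
    returns (f, g, fail), where g <= 0 means feasible."""
    n = len(x)
    # Objective: sum of 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2
    f = sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
            for i in range(n - 1))
    # Single inequality constraint summed over i = 1, ..., n-1
    g = [sum(0.1 - (x[i] - 1.0) ** 3 - (x[i + 1] - 1.0)
             for i in range(n - 1))]
    fail = 0
    return f, g, fail
```

Note that at the unconstrained Rosenbrock minimum $(1, \ldots, 1)$ the objective vanishes but the constraint evaluates to $0.1(n-1) > 0$, i.e. infeasible, which is consistent with the constraint being active at the constrained optimum.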
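The complex-step derivative approximation used for the sensitivities above admits a compact sketch. The helper names below (`rosen`, `complex_step_grad`) are illustrative, `h = 1e-20` matches the step size quoted in the text, and the function must be implemented so that it accepts complex arguments:

```python
def rosen(x):
    # Generalized Rosenbrock objective of (5); works with complex x
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def complex_step_grad(func, x, h=1e-20):
    """Gradient via the complex-step approximation:
    df/dx_j ~= Im(f(x + i*h*e_j)) / h.
    There is no subtractive cancellation, so h can be taken
    extremely small and the result is accurate to machine precision."""
    grad = []
    for j in range(len(x)):
        xp = [complex(v) for v in x]
        xp[j] = xp[j] + 1j * h  # perturb one variable along the imaginary axis
        grad.append(func(xp).imag / h)
    return grad
```

For example, at $(1, 1)$ the gradient of the two-variable Rosenbrock function is exactly zero, which the approximation recovers to machine precision.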
$$
\min_{x_1,x_2}\; -\sum_{k=1}^{5} c_k \exp\left(-\frac{1}{\pi}\sum_{i=1}^{2}\left(x_i-a_{ik}\right)^2\right)\cos\left(\pi\sum_{i=1}^{2}\left(x_i-a_{ik}\right)^2\right)
\qquad(6)
$$

subject to

$$
20.04895-\left(x_1+2\right)^2-\left(x_2+1\right)^2\le 0
$$

where,

$$
a=\begin{bmatrix}3 & 5 & 2 & 1 & 7\\ 5 & 2 & 1 & 4 & 9\end{bmatrix},\qquad
c=\begin{bmatrix}1 & 2 & 5 & 2 & 3\end{bmatrix}.
\qquad(7)
$$

The corresponding Python source code solves the problem with the
global optimizer ALPSO and then refines the result with SNOPT:

    from math import exp, cos, pi
    from pyOpt import Optimization, ALPSO, SNOPT

    # Rows of the matrix a in (7), and the weights c
    a = [3.0, 5.0, 2.0, 1.0, 7.0]
    b = [5.0, 2.0, 1.0, 4.0, 9.0]
    c = [1.0, 2.0, 5.0, 2.0, 3.0]

    def objfunc(x):
        f = 0.0
        for i in xrange(5):
            f += -(c[i]*exp(-(1/pi)*((x[0]-a[i])**2 + (x[1]-b[i])**2)) * \
                   cos(pi*((x[0]-a[i])**2 + (x[1]-b[i])**2)))
        g = [0.0]*1
        g[0] = 20.04895 - (x[0]+2.0)**2 - (x[1]+1.0)**2
        fail = 0
        return f, g, fail

    opt_prob = Optimization('...', objfunc)
    ...
    opt_prob.addCon('g', 'i')
    print opt_prob

    alpso = ALPSO()
    snopt = SNOPT()
    alpso(opt_prob)
    print opt_prob.solution(0)
    snopt(opt_prob.solution(0))
    print opt_prob.solution(0).solution(0)

Figure 10 shows the unconstrained design space for this
problem. A large number of local optima exist and the
global optimum is at $(x_1^*, x_2^*) = (2.003, 1.006)$, where the
constraint is active.
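As a standalone check (a re-implementation for illustration, not the paper's listing), the objective and constraint of (6)–(7) can be evaluated at the reported optimum; the two rows of the matrix $a$ in (7) are stored as `a` and `b` below, following the structure of the code:

```python
from math import exp, cos, pi

# Data from (7): rows of the matrix a, and the weights c
a = [3.0, 5.0, 2.0, 1.0, 7.0]
b = [5.0, 2.0, 1.0, 4.0, 9.0]
c = [1.0, 2.0, 5.0, 2.0, 3.0]

def objective(x1, x2):
    # f from (6): negated weighted sum of exponentially damped cosine bumps
    f = 0.0
    for i in range(5):
        d = (x1 - a[i]) ** 2 + (x2 - b[i]) ** 2
        f += -c[i] * exp(-d / pi) * cos(pi * d)
    return f

def constraint(x1, x2):
    # Inequality constraint of (6); g <= 0 is feasible
    return 20.04895 - (x1 + 2.0) ** 2 - (x2 + 1.0) ** 2

f_opt = objective(2.003, 1.006)   # global optimum reported in the text
g_opt = constraint(2.003, 1.006)
```

At $(2.003, 1.006)$ the constraint value is within about $10^{-3}$ of zero, i.e. the reported global optimum sits essentially on the constraint boundary, while the objective lies in the deepest basin of the design space.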
$$
A\,\Gamma - b = 0,
\qquad(8)
$$

$$
K\,u - f = 0,
\qquad(9)
$$
$$
\begin{aligned}
\min_{\alpha,\,\alpha_{mvr},\,\gamma,\,t_j}\quad & W_{TO}\\
\text{s.t.}\quad & L = W_i\\
& \frac{C_{L_{mvr}}}{C_{L_{cruise}}} = 2.5\\
& 1.5\,\sigma_j\big|_{C_{L_{mvr}}} \le \sigma_{yield}\\
& -15 \le \alpha \le 15\\
& -15 \le \gamma \le 15\\
& 0.06 \le t_j \le R_j,
\end{aligned}
\qquad(10)
$$
Fig. 13 Aerostructural design optimization problem: the optimizer
passes the design variables $\alpha$, $\alpha_{mvr}$, $\gamma$, and $t_j$
to the multidisciplinary analysis, which couples the aerodynamic and
structural solvers and returns $L$, $C_{L_{mvr}}$, $C_{L_{cruise}}$,
$W_{TO}$, $W_i$, and $\sigma_j$

The stress in each element at the maneuver condition is constrained
to be equal to or lower than the yield stress of the material with a
1.5 safety factor.

The maximum number of processors that can be used
in the parallel gradient computation depends on the number
of design variables in the optimization problem. This is
due to the processor management approach, which statically
allocates the gradient computation of each individual design
variable to the available processors. The aerostructural
optimization problem is solved using SNOPT, with gradients
computed in parallel on a SiCortex SC072-PDS computer.
The wing is discretized using 29 elements, resulting in 32
design variables, and hence a maximum of 32 processors
are used. The parallel gradient implementation speed-up and
efficiency are shown in Fig. 14a and b, respectively.
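The multidisciplinary analysis in Fig. 13 resolves the coupling between the aerodynamic and structural solvers. A toy fixed-point (block Gauss–Seidel) sketch of such an analysis, with scalar stand-ins for the matrices of (8)–(9) and made-up coupling constants (none of these values come from the paper), is:

```python
# Toy coupled system in the spirit of (8)-(9):
#   aerodynamics: a * gamma = b0 + k_as * u      (loads depend on deflection)
#   structures:   k * u     = f0 + k_sa * gamma  (deflection depends on loads)
a, k = 2.0, 4.0          # scalar stand-ins for "A" and "K" (illustrative)
b0, f0 = 1.0, 0.5        # baseline right-hand sides (illustrative)
k_as, k_sa = 0.3, 0.2    # interdisciplinary coupling strengths (illustrative)

def mda(tol=1e-12, max_iter=100):
    """Block Gauss-Seidel iteration: solve each discipline in turn,
    passing the latest coupling variables, until both states stop
    changing to within tol."""
    gamma, u = 0.0, 0.0
    for _ in range(max_iter):
        gamma_new = (b0 + k_as * u) / a       # aerodynamic solve
        u_new = (f0 + k_sa * gamma_new) / k   # structural solve
        if abs(gamma_new - gamma) < tol and abs(u_new - u) < tol:
            return gamma_new, u_new
        gamma, u = gamma_new, u_new
    return gamma, u

gamma, u = mda()
```

The iteration converges because the product of the coupling strengths is small relative to the diagonal terms; on convergence both discipline residuals vanish simultaneously, which is exactly the state the optimizer's objective and constraint evaluations rely on.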
116 R.E. Perez et al.
The ideal speed-up and efficiency of a parallel computation
are given by Amdahl's law. The portion of the algorithm
that can be parallelized, which corresponds to the gradient
computation, is estimated to be 89% of the total execution
time. Due to the static processor allocation, the efficiency
suffers when the number of design variables is not a multiple
of the number of processors, which results in the steps shown
in Fig. 14. For 8, 12, 16, and 32 processors the speed-up and
efficiency are close to the theoretical maximum, while between
those numbers the addition of processors provides little
improvement. This trend becomes even more pronounced as the
number of processors increases. Beyond 16 processors, the
performance deteriorates with an increasing number of processors
up to 32, where performance closely matches the theoretical
maximum again.

The optimal lift and stress distributions obtained for the
aerostructural optimization problem are shown in Fig. 15a
and b, respectively. The optimal lift distribution shows the
shift of the lift towards the root, which alleviates the bending
moment in the wing and enables a reduction in structural
weight. This shift in loading is slightly more pronounced in
the structurally critical maneuver condition. At the cruise
condition, a more elliptical distribution is favored in order
to improve the aerodynamic performance. The elements of
the beam are fully stressed at the maneuver condition to
minimize the structural weight. Figure 16 shows the optimized
wing in its undeformed state, as well as its shape in
the cruise condition and in the maneuver condition.

Fig. 16 Optimized wing geometry for Problem 4, showing undeformed
and deformed shapes for both the cruise and maneuver conditions,
with dimensions in feet
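The stepped speed-up behaviour described above can be reproduced with a simple model (an illustrative reconstruction, not the paper's measured timing data): Amdahl's law with an 89% parallel fraction, where the static allocation means the slowest processor evaluates $\lceil D/N \rceil$ of the $D$ gradient components:

```python
from math import ceil

P = 0.89   # parallelizable fraction (the gradient computation)
D = 32     # number of design variables, i.e. gradient evaluations

def speedup(n_proc):
    """Amdahl-type speed-up under static allocation: the parallel
    part shrinks by ceil(D/n_proc)/D, the load of the busiest
    processor, rather than by the ideal 1/n_proc."""
    parallel_time = P * ceil(D / n_proc) / D
    return 1.0 / ((1.0 - P) + parallel_time)

def efficiency(n_proc):
    # Fraction of the ideal linear speed-up actually achieved
    return speedup(n_proc) / n_proc
```

The model plateaus between divisors of $D$: for any processor count from 17 to 31 the busiest processor still evaluates two gradients, so the speed-up is flat until all 32 variables can be handled in a single pass at $N = 32$, matching the steps and the recovery at 32 processors seen in Fig. 14.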
5 Summary
Oliphant TE (2007) Python for scientific computing. Comput Sci Eng 9(3):10–20

Perez RE, Martins JRRA (2010) pyOpt. http://pyopt.org/

Peterson P (2009) F2PY: a tool for connecting Fortran and Python programs. Int J Comput Sci Eng 4(4):296–305

Poon NMK, Martins JRRA (2007) An adaptive approach to constraint aggregation using adjoint sensitivity analysis. Struct Multidisc Optim 30(1):61–73

Powell MJD (1994) A direct search optimization method that models the objective and constraint functions by linear interpolation. In: Advances in optimization and numerical analysis. Kluwer Academic, Dordrecht, pp 51–67

Rosenbrock HH (1960) An automatic method for finding the greatest or least value of a function. Comput J 3:175–184

Rosenthal RE (2008) GAMS—a user's guide. Tech Rep, GAMS Development Corporation, Washington

Schittkowski K (1986) NLPQL: a Fortran subroutine for solving constrained nonlinear programming problems. Ann Oper Res 5(2):485–500

Schittkowski K (1987) More test problems for nonlinear programming codes. Lecture notes in economics and mathematical systems, vol 282

Schlüter M, Gerdts M (2009) The oracle penalty method. J Glob Optim 47(2):293–325. doi:10.1007/s10898-009-9477-0

Schlüter M, Egea J, Banga J (2009) Extended ant colony optimization for non-convex mixed integer nonlinear programming. Comput Oper Res 36(7):2217–2229

Shor N (1985) Minimization methods for non-differentiable functions. Springer series in computational mathematics, vol 3. Springer-Verlag, Berlin

Svanberg K (1987) The method of moving asymptotes—a new method for structural optimization. Int J Numer Methods Eng 24(2):359–373. doi:10.1002/nme.1620240207

Svanberg K (1995) A globally convergent version of MMA without linesearch. In: First World congress of structural and multidisciplinary optimization, Goslar, Germany

TANGO project (2007) Trustable algorithms for nonlinear general optimization. Tech Rep, Applied Mathematics Department at IMECC-UNICAMP and Computer Science Department at IME-USP. http://www.ime.usp.br/~egbirgin/tango/

Tedford NP, Martins JRRA (2010) Benchmarking multidisciplinary design optimization algorithms. Optim Eng 11(1):159–183

Vanderplaats GN (1973) CONMIN—a Fortran program for constrained function minimization. Technical Memorandum TM X-62282, NASA Ames Research Center, Moffett Field, California

Venter G, Sobieszczanski-Sobieski J (2004) Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Struct Multidisc Optim 26:121–131. doi:10.1007/s00158-003-0318-3

Woodruff D, Watson JP, Hart W (2011) PySP: modeling and solving stochastic programs in Python. In: 12th INFORMS computing society conference, Monterey, CA

Wrenn G (1989) An indirect method for numerical optimization using the Kreisselmeier–Steinhauser function. Contractor report NASA CR-4220, NASA Langley Research Center, Hampton

Xu E (2009) pyIPOpt: an IPOPT connector to Python. User's manual. http://code.google.com/p/pyipopt/

Zhou JL, Tits AL (1996) An SQP algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions. SIAM J Optim 6(2):461–487