MATLAB Optimization Toolbox
Presentation Outline
Introduction
Function Optimization
Optimization Toolbox
Routines / Algorithms available
Optimization Problems
Unconstrained
Constrained
Example
Algorithm Descriptions
Function Optimization
Optimization concerns the minimization
or maximization of functions
Standard Optimization Problem:

min_x f(x)

Subject to:
h_i(x) = 0             (equality constraints)
g_j(x) <= 0            (inequality constraints)
x_k^L <= x_k <= x_k^U  (side constraints)
Function Optimization
f(x) is the objective function, which measures
and evaluates the performance of a system.
In a standard problem, we are minimizing
the function.
Maximization is equivalent to minimizing
the negative of the objective function.
x is a column vector of design variables, which
affect the performance of the system.
Function Optimization
Constraints – limitations on the design space.
Can be linear or nonlinear, explicit or implicit functions:
h_i(x) = 0             (equality constraints)
g_j(x) <= 0            (inequality constraints; most algorithms require the less-than form!)
x_k^L <= x_k <= x_k^U  (side constraints)
Optimization Toolbox
A collection of functions that extends the capabilities of
MATLAB. The toolbox includes routines for:
Unconstrained optimization
Constrained nonlinear optimization, including goal
attainment problems, minimax problems, and semi-
infinite minimization problems
Quadratic and linear programming
Nonlinear least squares and curve fitting
Solving nonlinear systems of equations
Constrained linear least squares
Specialized algorithms for large-scale problems
Minimization Algorithms
Minimization Algorithms (Cont.)
Equation Solving Algorithms
Least-Squares Algorithms
Implementing Opt. Toolbox
Most of these optimization routines require
the definition of an M-file containing the
function, f, to be minimized.
Maximization is achieved by supplying the
routines with –f.
Optimization options passed to the routines
change optimization parameters.
Default optimization parameters can be
changed through an options structure.
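For example, a minimal sketch of maximizing by negation (the file name maxobj.m and the quadratic f(x) = -(x1-1)^2 - x2^2 are our own illustration):

% maxobj.m -- returns -f(x); minimizing it maximizes f(x) = -(x1-1)^2 - x2^2
function g = maxobj(x)
g = (x(1)-1)^2 + x(2)^2;

% At the command line, the maximizer of f is then:
%   xmax = fminunc('maxobj',[0 0]);   % expected: xmax = [1 0]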
Unconstrained Minimization
Consider the problem of finding a set
of values [x1 x2]T that solves
min_x f(x) = e^(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)

where x = [x1 x2]^T.
Steps
Create an M-file that returns the
function value (Objective Function)
Call it objfun.m
Then, invoke the unconstrained
minimization routine
Use fminunc
Step 1 – Obj. Function
With x = [x1 x2]^T, the objective function M-file is:

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
Step 2 – Invoke Routine
Starting with a guess and the optimization parameter settings:

x0 = [-1,1];                             % initial guess
options = optimset('LargeScale','off');  % optimization parameter settings
[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);

The arguments on the right are the inputs; the variables on the left are the outputs.
Results
xmin =
    0.5000   -1.0000

Minimum point of the design variables.

feval =
  1.3028e-010

Objective function value at the minimum.

exitflag =
     1

exitflag indicates whether the algorithm converged; if exitflag > 0, a local minimum was found.

output =
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'

output gives some other information about the run.
More on fminunc – Input
[xmin,feval,exitflag,output,grad,hessian]=
fminunc(fun,x0,options,P1,P2,…)
fun: The objective function to be minimized.
x0: The initial guess. The guess must be a vector whose size is the
number of design variables.
options: Sets some of the optimization parameters. (More in a few
slides.)
P1,P2,…: Additional parameters passed through to the objective function.
More on fminunc – Output
[xmin,feval,exitflag,output,grad,hessian]=
fminunc(fun,x0,options,P1,P2,…)
xmin: Vector of the minimum point (optimal point). The size is the
number of design variables.
feval: The objective function value at the optimal point.
exitflag: A value showing whether the optimization routine
terminated successfully. (Converged if > 0.)
output: This structure gives more details about the optimization run.
grad: The gradient at the optimal point.
hessian: The Hessian at the optimal point.
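To see how these outputs look in practice, here is a sketch that requests all of them for the earlier objfun example (the numeric values will depend on the run):

options = optimset('LargeScale','off');
[xmin,feval,exitflag,output,grad,hessian] = ...
    fminunc('objfun',[-1,1],options);
grad      % gradient at xmin -- should be close to zero
hessian   % Hessian approximation at xmin -- should be positive definite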
Options Setting – optimset
Options =
optimset('param1',value1,'param2',value2,…)
The routines in the Optimization Toolbox have a set of default optimization
parameters.
However, the toolbox allows you to alter some of those parameters,
for example: the tolerances, the step size, user-supplied gradient or
Hessian values, the maximum number of iterations, etc.
There is also a list of features available, for example: displaying the
values at each iteration, checking user-supplied gradients or
Hessians, etc.
You can also choose the algorithm you wish to use.
Options Setting (Cont.)
Options =
optimset('param1',value1,'param2',value2,…)
Type help optimset in the command window and a list of the available
option settings will be displayed.
How to read it? For example:
LargeScale - Use large-scale algorithm if
possible [ {on} | off ]
Here LargeScale is the parameter (param1), on and off are its possible
values (value1), and the default is the value shown in braces { }.
Options Setting (Cont.)
Options =
optimset('param1',value1,'param2',value2,…)
LargeScale - Use large-scale algorithm if
possible [ {on} | off ]
Since the default is on, to turn it off we just type:
Options = optimset('LargeScale','off')
and pass Options to the input of fminunc.
Useful Option Settings
Highly recommended to use!!
Display - Level of display [ off | iter |
notify | final ]
MaxIter - Maximum number of iterations
allowed [ positive integer ]
TolCon - Termination tolerance on the
constraint violation [ positive scalar ]
TolFun - Termination tolerance on the
function value [ positive scalar ]
TolX - Termination tolerance on X [ positive
scalar ]
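As a sketch, combining these recommended settings for the earlier unconstrained example (the tolerance values are our own choices):

options = optimset('Display','iter', ...   % show progress every iteration
                   'MaxIter',200, ...      % cap the number of iterations
                   'TolFun',1e-8, ...      % function-value tolerance
                   'TolX',1e-8);           % step tolerance on x
[xmin,feval] = fminunc('objfun',[-1,1],options);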
fminunc and fminsearch
fminunc uses algorithms that exploit gradient
and Hessian information.
Two modes:
Large-Scale: interior-reflective Newton
Medium-Scale: quasi-Newton (BFGS)
Not preferred for solving highly
discontinuous functions.
This function may only give local solutions.
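One way to make the large-scale mode applicable is to supply the gradient yourself. A sketch for the earlier example (the file name objgrad.m is our own; the derivatives follow from differentiating f):

% objgrad.m -- objective that also returns its analytic gradient
function [f,g] = objgrad(x)
e = exp(x(1));
f = e*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
g = [f + e*(8*x(1) + 4*x(2));    % df/dx1
     e*(4*x(1) + 4*x(2) + 2)];   % df/dx2

% At the command line, GradObj tells fminunc a gradient is supplied:
%   options = optimset('GradObj','on');
%   [xmin,feval] = fminunc('objgrad',[-1,1],options);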
fminunc and fminsearch
fminsearch is generally less efficient than
fminunc for problems of order greater than
two. However, when the problem is highly
discontinuous, fminsearch may be more
robust.
This is a direct search method that does not
use numerical or analytic gradients as in
fminunc.
This function may only give local solutions.
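Its calling syntax mirrors fminunc; a sketch using the earlier objfun:

x0 = [-1,1];
[xmin,feval,exitflag] = fminsearch('objfun',x0);
% fminsearch uses the Nelder-Mead simplex direct search,
% so no gradient information is ever computed.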
Constrained Minimization
[xmin,feval,exitflag,output,lambda,grad,hessian] =
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)

lambda is the vector of Lagrange multipliers at the optimal point.
Example
min_x f(x) = -x1*x2*x3

Subject to:
-x1 - 2*x2 - 2*x3 <= 0
x1 + 2*x2 + 2*x3 <= 72
2*x1^2 + x2 <= 0
0 <= x1, x2, x3 <= 30

In matrix form, the linear constraints and bounds are:
A = [-1 -2 -2; 1 2 2],  B = [0; 72],  LB = [0; 0; 0],  UB = [30; 30; 30]

Objective function M-file:
function f = myfun(x)
f = -x(1)*x(2)*x(3);
Example (Cont.)
For the nonlinear constraint
2*x1^2 + x2 <= 0
create a function called nonlcon which returns the two constraint vectors [C,Ceq]:

function [C,Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality C(x) <= 0
Ceq = [];              % remember to return an empty matrix
                       % if the constraint type does not apply
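If the problem also had a nonlinear equality constraint, say x3^2 - x1 = 0 (our own illustration, not part of this example), both vectors would be filled in:

function [C,Ceq] = nonlcon2(x)
C   = 2*x(1)^2 + x(2);   % nonlinear inequalities C(x) <= 0
Ceq = x(3)^2 - x(1);     % nonlinear equalities Ceq(x) = 0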
Example (Cont.)
Initial guess (3 design variables):
x0 = [10;10;10];

Linear inequality constraints A*x <= B and bounds LB <= x <= UB:
A = [-1 -2 -2; 1 2 2];
B = [0 72]';
LB = [0 0 0]';
UB = [30 30 30]';

[x,feval] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)

CAREFUL!!! Keep the argument order:
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)
Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem,
switching to medium-scale (line search).
> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213
  In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully:
 Magnitude of directional derivative in search direction less than 2*options.TolFun
 and maximum constraint violation is less than options.TolCon
Active Constraints:
     2
     9
x =
   0.00050378663220
   0.00000000000000
  30.00000000000000
feval =
 -4.657237250542452e-035

The constraints are numbered in the sequence A, B, Aeq, Beq, LB, UB, C, Ceq:
Const. 1: -x1 - 2*x2 - 2*x3 <= 0
Const. 2: x1 + 2*x2 + 2*x3 <= 72
Const. 3-8: the side constraints 0 <= x1, x2, x3 <= 30
Const. 9: 2*x1^2 + x2 <= 0
Solving Linear Programs:
MATLAB uses the following format for
linear programs:

min_x f'*x
subject to: A*x <= b, Aeq*x = beq, l <= x <= u

A linear program in the above format is
solved using the command:
x = linprog(f,A,b,Aeq,beq,l,u)
Simple LP Example:
Suppose we want to solve the following linear program using MATLAB:

max 4*x1 + 2*x2 + x3
subject to: 2*x1 + x2 <= 1
x1 + 2*x3 <= 2
x1 + x2 + x3 = 1
0 <= x1 <= 1, 0 <= x2 <= 1, 0 <= x3 <= 2

Convert the LP into MATLAB format: maximizing 4*x1 + 2*x2 + x3 is the
same as minimizing f'*x with f = -[4;2;1].
Simple LP Example – cont …
Input the variables into Matlab:
>> f = -[4;2;1];
>> A = [2 1 0;1 0 2];
>> b = [1;2];
>> Aeq = [1 1 1];
>> beq = [1];
>> l = [0;0;0];
>> u = [1;1;2];
Simple LP Example – cont…
Solve the linear program using MATLAB:
>> x = linprog(f,A,b,Aeq,beq,l,u)
And you should get:
x =
0.5000
0.0000
0.5000
Simple LP Example – cont…
What to do when some of the variables are
missing?
For example, suppose that there are no lower bounds on the variables. In this
case, define l to be an empty matrix using the MATLAB command:
>> l = [ ];
Doing this and re-solving the LP gives:
x =
0.6667
-0.3333
0.6667
Simple LP Example – cont…
Define other matrices to be empty matrices if they
do not appear in the problem formulation. For
example, if there are no equality constraints, define
Aeq and beq as empty matrices, i.e.
>> Aeq = [ ];
>> beq = [ ];
The solution becomes:
x =
0.0000
1.0000
1.0000
Sensitivity Analysis:
How sensitive are optimal solutions to
errors in data, model parameters, and
changes to inputs in the linear
programming problem?
Sensitivity analysis is important
because it shows how robust the optimal
solution is and enables adaptation to
changing conditions.
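In the toolbox, one handle on sensitivity is the vector of Lagrange multipliers (shadow prices) that linprog can return as a fifth output; a sketch reusing the LP above:

[x,fval,exitflag,output,lambda] = linprog(f,A,b,Aeq,beq,l,u);
lambda.ineqlin   % multipliers for the inequalities A*x <= b
lambda.eqlin     % multipliers for the equalities Aeq*x = beq
lambda.lower     % multipliers for the lower bounds l
lambda.upper     % multipliers for the upper bounds u
% A nonzero multiplier flags an active constraint: perturbing its
% right-hand side changes the optimal value at that rate.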
Conclusion
Easy to use! But we do not know what is happening behind
the routines. Therefore, it is still important to understand the
limitations of each routine.
Basic steps:
Recognize the class of optimization problem
Define the design variables
Create the objective function
Recognize the constraints
Choose an initial guess
Invoke a suitable routine
Analyze the results (they might not make sense)