02 Tutorial
Presentation Outline
Introduction
Function Optimization
Optimization Toolbox
Routines / Algorithms available
Optimization Problems
Unconstrained
Constrained
Example
The Algorithm Description
Function Optimization
Optimization concerns the minimization
or maximization of functions
Standard Optimization Problem

min_x f(x)

Subject to:
 h_i(x) = 0    (Equality Constraints)
 g_j(x) ≤ 0   (Inequality Constraints)

f(x) is the objective function, which measures and evaluates the performance of a system.
Note: most algorithms require the inequality constraints in the "less than or equal to" form!
Example:

min_x f(x) = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)

where x = [x1, x2]^T.
Steps
Create an M-file that returns the
function value (Objective Function)
Call it objfun.m
Then, invoke the unconstrained
minimization routine
Use fminunc
Step 1 – Obj. Function
x = [x1, x2]^T
function f = objfun(x)
f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
Objective function
Step 2 – Invoke Routine
Starting with a guess:
x0 = [-1, 1];   % initial guess
options = optimset('LargeScale','off');
[xmin,feval,exitflag,output]=
fminunc('objfun',x0,options);
On the left-hand side are the output arguments; on the right-hand side are the input arguments.
Results
xmin =
0.5000 -1.0000 Minimum point of design variables
feval =
1.3028e-010
Objective function value
exitflag =
1
exitflag tells whether the algorithm has converged.
output = If exitflag > 0, then a local minimum was found
iterations: 7
funcCount: 40
stepsize: 1
Some other information
firstorderopt: 8.1998e-004
algorithm: 'medium-scale: Quasi-Newton line search'
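As a sanity check (not on the original slide), substitute the reported minimizer back into the objective. For this function, f(0.5, -1) = exp(0.5)*(1 + 2 - 2 - 2 + 1) = 0 exactly, so feval ≈ 1.3e-10 is consistent with convergence:

```matlab
% Evaluate objfun at the reported minimum point.
% f(0.5,-1) = exp(0.5)*(4*0.25 + 2*1 + 4*0.5*(-1) + 2*(-1) + 1)
%           = exp(0.5)*0 = 0, so the result should be near zero.
xmin = [0.5, -1.0];
fcheck = objfun(xmin)
```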
More on fminunc – Output
[xmin,feval,exitflag,output,grad,hessian]=
fminunc(fun,x0,options,P1,P2,…)
xmin: Vector of the minimum point (optimal point). Its size is the
number of design variables.
feval: The objective function value at the optimal point.
exitflag: A value showing whether the optimization routine
terminated successfully (converged if > 0).
output: This structure gives more details about the optimization.
grad: The gradient value at the optimal point.
hessian: The Hessian value at the optimal point.
Options Setting – optimset
options =
optimset('param1',value1,'param2',value2,…)
The routines in the Optimization Toolbox have a set of default optimization
parameters.
However, the toolbox allows you to alter some of those parameters,
for example: the tolerances, the step size, the gradient or Hessian
values, the max. number of iterations, etc.
There is also a list of features available, for example: displaying the
values at each iteration, checking the user-supplied gradient or
Hessian, etc.
You can also choose the algorithm you wish to use.
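A sketch of a typical optimset call (the parameter names are as documented in the toolbox; the starting guess [-1,1] is an assumption carried over from the earlier example):

```matlab
% Show progress at each iteration, cap iterations, and tighten tolerances.
options = optimset('Display','iter', ...  % print values at each iteration
                   'MaxIter',200, ...     % max. number of iterations
                   'TolFun',1e-8, ...     % termination tolerance on f
                   'LargeScale','off');   % select the medium-scale algorithm
[xmin,feval] = fminunc('objfun',[-1,1],options);
```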
Options Setting (Cont.)
options =
optimset('param1',value1,'param2',value2,…)
Type help optimset in the command window and a list of the available
options settings will be displayed.
How to read it? For example: 'param1' is the parameter name and
value1 is the value assigned to that parameter.
Example

Consider the constrained problem:

min_x f(x) = -x1*x2*x3

subject to 0 ≤ x1 + 2*x2 + 2*x3 ≤ 72 and 0 ≤ x1, x2, x3 ≤ 30.

Objective function (M-file):
f=-x(1)*x(2)*x(3);

The linear constraints in matrix form:

A = [-1 -2 -2; 1 2 2],  B = [0; 72],  LB = [0; 0; 0],  UB = [30; 30; 30]
Example (Cont.)
For the nonlinear inequality constraint
2*x1^2 - x2 ≤ 0
create a function called nonlcon which returns the 2 constraint vectors [C,Ceq]:
function [C,Ceq]=nonlcon(x)
C = 2*x(1)^2 - x(2);   % nonlinear inequalities, C <= 0
Ceq = [];              % no nonlinear equality constraints
Then invoke the constrained minimization routine:
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)
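Putting the pieces together, a sketch of the full call for this example (the objective M-file name objfun3 and the starting guess x0 are assumptions, not given on the slides):

```matlab
% Linear constraints: 0 <= x1 + 2*x2 + 2*x3 <= 72, bounds 0 <= xi <= 30.
A  = [-1 -2 -2; 1 2 2];  B  = [0; 72];
LB = [0; 0; 0];          UB = [30; 30; 30];
x0 = [10; 10; 10];                      % assumed starting guess
options = optimset('LargeScale','off');
% 'objfun3' (assumed name) contains: f = -x(1)*x(2)*x(3);
[x,feval,exitflag] = fmincon('objfun3',x0,A,B,[],[],LB,UB,'nonlcon',options);
```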
Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem,
switching to medium-scale (line search).
> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213
In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully:
Magnitude of directional derivative in search direction less than 2*options.TolFun and
maximum constraint violation is less than options.TolCon
Active Constraints:
9
x =
   0.00050378663220
   0.00000000000000
  30.00000000000000
feval =
  -4.657237250542452e-035

The nine constraints:
Const. 1: -x1 - 2*x2 - 2*x3 ≤ 0
Const. 2:  x1 + 2*x2 + 2*x3 ≤ 72
Const. 3–8: the bounds 0 ≤ x1 ≤ 30, 0 ≤ x2 ≤ 30, 0 ≤ x3 ≤ 30
Const. 9: 2*x1^2 - x2 ≤ 0 (nonlinear)
Sequence: A,B,Aeq,Beq,LB,UB,C,Ceq
Solving Linear Programs:
MATLAB uses the following format for
linear programs:

min_x f'*x  subject to  A*x ≤ b,  Aeq*x = beq,  l ≤ x ≤ u

Input the variables into MATLAB:
>> f = -[4;2;1];
>> A = [2 1 0;1 0 2];
>> b = [1;2];
>> Aeq = [1 1 1];
>> beq = [1];
>> l = [0;0;0];
>> u = [1;1;2];
Simple LP Example – cont…
Solve the linear program using MATLAB:
>> x = linprog(f,A,b,Aeq,beq,l,u)
And you should get:
x=
0.5000
0.0000
0.5000
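As a quick check (not on the slides), substitute the solution back into the problem data defined above:

```matlab
x = [0.5; 0; 0.5];
fval = f'*x   % -(4*0.5 + 2*0 + 1*0.5) = -2.5
A*x           % [1; 1.5], within b = [1; 2]
Aeq*x         % 1, equals beq
```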
Simple LP Example cont …
What to do when some of the variables are
missing?
For example, suppose that there are no lower bounds on the variables. In this
case, define l to be an empty matrix using the MATLAB command:
>> l = [ ];
Doing this and resolving the LP gives:
x=
0.6667
-0.3333
0.6667
Simple LP Example cont …
Define other matrices to be empty matrices if they
do not appear in the problem formulation. For
example, if there are no equality constraints, define
Aeq and beq as empty matrices, i.e.
>> Aeq = [ ];
>> beq = [ ];
Solution becomes:
x=
0.0000
1.0000
1.0000
Sensitivity Analysis:
How sensitive are optimal solutions to
errors in data, model parameters, and
changes to inputs in the linear
programming problem?
Sensitivity analysis is important
because it enables flexibility and
adaptation to changing conditions
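In MATLAB, linprog can return the Lagrange multipliers (dual values) at the solution, which quantify this sensitivity: each multiplier estimates how much the optimal objective changes per unit change in the corresponding right-hand side. A sketch, reusing the LP data from the earlier example:

```matlab
% The fifth output 'lambda' holds the Lagrange multipliers at the solution.
[x,fval,exitflag,output,lambda] = linprog(f,A,b,Aeq,beq,l,u);
lambda.ineqlin   % multipliers for A*x <= b
lambda.eqlin     % multipliers for Aeq*x = beq
lambda.lower     % multipliers for the lower bounds l
lambda.upper     % multipliers for the upper bounds u
```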
Conclusion
Easy to use! But we do not know what is happening behind
the routines. Therefore, it is still important to understand the
limitations of each routine.
Basic steps:
Recognize the class of optimization problem
Define the design variables
Create objective function
Recognize the constraints
Start with an initial guess
Invoke suitable routine
Analyze the results (they might not make sense)