Optimization

Optimization is an iterative process that finds the maximum or minimum of a function while satisfying constraints. The document discusses unconstrained and constrained, linear and non-linear optimization problems, and describes optimization algorithms such as gradient descent, conjugate gradient, and penalty function methods that are used to find the optimal solution.


What is Optimization?

Optimization is an iterative process by which a desired solution (maximum or minimum) of a problem can be found while satisfying all of its constraints or bound conditions.

Figure 2: The optimum solution is found while satisfying the constraints (the derivative must be zero at the optimum).

An optimization problem can be linear or non-linear. Non-linear optimization is accomplished by numerical search methods.

Search methods are applied iteratively until a solution is achieved. The search procedure is termed an algorithm.

What is Optimization? (Cont.)
A linear problem is solved by the Simplex or graphical method. The solution of a linear problem lies on the boundary of the feasible region.

Figure 3: Solution of linear problem

Figure 4: Three dimensional solution of non-linear problem

The solution of a non-linear problem lies within or on the boundary of the feasible region.

Fundamentals of Non-Linear Optimization


- Single objective function f(x): maximization or minimization
- Design variables xi, i = 0, 1, 2, 3, ...
- Constraints: inequality and equality

Maximize X1 + 1.5 X2
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≤ 50
X2 ≤ 25
X1 ≥ 0, X2 ≥ 0

Figure 5: Example of design variables and constraints used in non-linear optimization.
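As a quick check of the linear example above, the sketch below solves it with the toolbox routine linprog; this snippet is not part of the original slides, and the variable names are only illustrative.

f  = -[1; 1.5];                % linprog minimizes, so negate to maximize X1 + 1.5*X2
A  = [1 1; 0.25 0.5];          % coefficient rows of the two general inequality constraints
b  = [150; 50];                % right-hand sides: X1 + X2 <= 150, 0.25*X1 + 0.5*X2 <= 50
lb = [0; 0];                   % X1 >= 0, X2 >= 0
ub = [50; 25];                 % X1 <= 50, X2 <= 25
x  = linprog(f, A, b, [], [], lb, ub)   % returns x = [50; 25]

As noted above, the optimum of a linear problem lies on the boundary of the feasible region; here both upper bounds are active (X1 = 50, X2 = 25).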

Optimal points:
- Local minimum/maximum: a point (solution) x* is a local optimum if no other x in its neighborhood has a smaller (for minimization) or larger (for maximization) function value than x*.
- Global minimum/maximum: a point (solution) x** is a global optimum if no other x in the entire search space has a smaller (or larger) function value than x**.

Fundamentals of Non-Linear Optimization (Cont.)

Figure 6: Global versus local optimization.

Figure 7: The local point is equal to the global point if the function is convex.

Fundamentals of Non-Linear Optimization (Cont.)


A function f is convex if, for any point Xa between X1 and X2, f(Xa) lies below the chord joining f(X1) and f(X2). Convexity condition: the Hessian (2nd-order derivative) matrix of f must be positive semi-definite (eigenvalues positive or zero).
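Written out, the convexity condition above is the standard definition (added here for reference):

\[
f\big(\lambda X_1 + (1-\lambda) X_2\big) \le \lambda f(X_1) + (1-\lambda) f(X_2), \qquad 0 \le \lambda \le 1,
\]

which for a twice-differentiable f is equivalent to the Hessian being positive semi-definite (all eigenvalues non-negative).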

Figure 8: Convex and nonconvex set

Figure 9: Convex function

Mathematical Background
The slope or gradient of the objective function f represents the direction in which the function will increase most rapidly; its negative gives the direction of most rapid decrease.
\[
\frac{df}{dx} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x}
\]
Taylor series expansion:
\[
f(x_p + \Delta x) = f(x_p) + \left.\frac{df}{dx}\right|_{x_p} \Delta x + \frac{1}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_p} (\Delta x)^2 + \cdots
\]

The Jacobian matrix collects the gradients of several functions (here f and g) with respect to several variables:
\[
J = \begin{bmatrix}
\dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z} \\[4pt]
\dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} & \dfrac{\partial g}{\partial z}
\end{bmatrix}
\]

Mathematical Background (Cont.)


First-order condition (FOC):
\[
\nabla f(X^*) = 0
\]
Hessian: the matrix of second derivatives of f with respect to several variables:
\[
H = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} \\[4pt]
\dfrac{\partial^2 f}{\partial y\,\partial x} & \dfrac{\partial^2 f}{\partial y^2}
\end{bmatrix}
\]

Second-order condition (SOC): the eigenvalues of H(X*) are all positive, or equivalently the determinants of all leading principal minors of H(X*) are positive.
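A minimal sketch of how the two conditions can be checked; the quadratic test function and variable names below are illustrative assumptions, not from the slides:

f     = @(x) (x(1) - 1)^2 + 2*(x(2) + 0.5)^2;     % simple convex test function
grad  = @(x) [2*(x(1) - 1); 4*(x(2) + 0.5)];      % analytic gradient of f
H     = [2 0; 0 4];                               % constant Hessian of this quadratic
xstar = [1; -0.5];                                % candidate optimum
disp(norm(grad(xstar)))                           % FOC: gradient norm is zero at x*
disp(eig(H))                                      % SOC: eigenvalues (2 and 4) are positive, so x* is a minimum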

Optimization Algorithm

Deterministic - specific rules, such as the gradient and Hessian, are used to move from one iteration to the next.

Stochastic - probabilistic rules are used for the subsequent iterations.
Optimal design - engineering design based on an optimization algorithm.
Lagrangian method - the sum of the objective function and a linear combination of the constraints (written out below).
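For a problem with equality constraints h_i and inequality constraints g_j, the Lagrangian referred to above takes the standard form (added here for reference):

\[
L(x, \lambda, \mu) = f(x) + \sum_i \lambda_i \, h_i(x) + \sum_j \mu_j \, g_j(x)
\]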

Optimization Methods
Deterministic:
- Direct search - uses objective function values only to locate the minimum.
- Gradient based - uses first- or second-order derivatives of the objective function. Minimization of the objective function f(x) is assumed; for a maximization problem, -f(x) is minimized.
Single variable:
- Newton-Raphson - a gradient-based technique (FOC).
- Golden section search - a step-size-reducing iterative method.
Multivariable techniques (make use of the single-variable techniques, especially golden section search) for unconstrained optimization:
a.) Powell's method - fits a quadratic (degree 2) polynomial to the objective function; non-gradient based.
b.) Gradient based - steepest descent (FOC) or least mean squares (LMS).
c.) Hessian based - conjugate gradient (FOC) and BFGS (SOC).

Optimization Methods (Constrained)


Constrained optimization:
a.) Indirect approach - transform the constrained problem into an unconstrained one.
b.) Exterior penalty function (EPF) and augmented Lagrange multiplier methods (a common EPF form is sketched below).
c.) Direct methods - sequential linear programming (SLP), sequential quadratic programming (SQP), and the generalized reduced gradient method (GRG).
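One common exterior penalty formulation (a standard textbook form, added here for reference; the slides do not give the exact expression) replaces the constrained problem by the unconstrained minimization of

\[
\phi(x, r) = f(x) + r \left[ \sum_j \max\big(0,\, g_j(x)\big)^2 + \sum_i h_i(x)^2 \right],
\]

where the penalty parameter r is increased from one unconstrained solve to the next.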

Figure 10: Gradient descent or LMS

Optimization Methods (Cont.)

Global optimization - stochastic techniques:
- Simulated annealing (SA) - based on the minimum-energy principle of a cooling metal's crystalline structure.
- Genetic algorithm (GA) - based on the survival-of-the-fittest principle of evolutionary theory.

Optimization Methods (Example)


Multivariable gradient-based optimization: J is the cost function to be minimized in two dimensions.

The contours of the J paraboloid shrink as J decreases.

function retval = Example6_1(x)
% example 6.1
retval = 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;
>> SteepestDescent('Example6_1', [0.5 0.5], 20, 0.0001, 0, 1, 20)

where:
[0.5 0.5] - initial guess
20        - number of iterations
0.0001    - golden search tolerance
0         - initial step size
1         - step interval
20        - number of scanning steps

>> ans
    2.7585    1.8960
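The SteepestDescent routine used above is not listed on the slides; the fixed-step sketch below is an assumption (a hand-picked step size instead of the golden section line search) that illustrates the same idea on Example 6.1:

f    = @(x) 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;   % Example 6.1 cost function J
grad = @(x) [ 2*(x(1) - 1.5*x(2));                    % analytic gradient of J
             -3*(x(1) - 1.5*x(2)) + 2*(x(2) - 2)];
x     = [0.5; 0.5];                                   % initial guess from the slides
alpha = 0.1;                                          % fixed step size (assumed)
for k = 1:200
    x = x - alpha*grad(x);                            % step along the negative gradient
end
disp(x')                                              % approaches the true minimum [3 2]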

Figure 11: Multivariable Gradient based optimization

Figure 12: Steepest Descent

PART II

MATLAB Optimization Toolbox

Presentation Outline

- Introduction
  - Function optimization
  - Optimization Toolbox: routines / algorithms available
- Minimization problems
  - Unconstrained
  - Constrained
  - Example and the algorithm description
- Multiobjective optimization
  - Optimal PID control example

Function Optimization
Optimization concerns the minimization or maximization of functions
Standard Optimization Problem:

\[
\min_{x} \; f(x)
\]

Subject to:
\[
h_i(x) = 0 \qquad \text{Equality constraints}
\]
\[
g_j(x) \le 0 \qquad \text{Inequality constraints}
\]
\[
x_k^{L} \le x_k \le x_k^{U} \qquad \text{Side constraints}
\]

Where:
f(x) is the objective function, which measures and evaluates the performance of a system. In a standard problem, we minimize the function; maximization is equivalent to minimizing the negative of the objective function.
x is a column vector of design variables, which can affect the performance of the system.

Function Optimization (Cont.)


Constraints - limitations on the design space. They can be linear or nonlinear, explicit or implicit functions.

\[
h_i(x) = 0 \qquad \text{Equality constraints}
\]
\[
g_j(x) \le 0 \qquad \text{Inequality constraints (most algorithms require the "less than or equal to" form)}
\]
\[
x_k^{L} \le x_k \le x_k^{U} \qquad \text{Side constraints}
\]

Optimization Toolbox
The Optimization Toolbox is a collection of functions that extend the capability of MATLAB.
The toolbox includes routines for:
- Unconstrained optimization
- Constrained nonlinear optimization, including goal attainment problems, minimax problems, and semi-infinite minimization problems
- Quadratic and linear programming
- Nonlinear least squares and curve fitting
- Solving nonlinear systems of equations
- Constrained linear least squares
- Specialized algorithms for large-scale problems

Minimization Algorithm

Minimization Algorithm (Cont.)

Equation Solving Algorithms

Least-Squares Algorithms

Implementing Opt. Toolbox


Most of these optimization routines require the definition of an M-file containing the function, f, to be minimized.

Maximization is achieved by supplying the routines with -f.


Optimization options passed to the routines change optimization parameters.

Default optimization parameters can be changed through an options structure.
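As a quick illustration of the -f maximization trick mentioned above (this snippet is an assumed example, not from the slides):

g    = @(x) -(x - 2)^2 + 5;        % function to be maximized
negg = @(x) -g(x);                 % the routine minimizes, so pass -g
xmax = fminunc(negg, 0, optimset('LargeScale','off','Display','off'))
% returns xmax close to 2, the maximizer of g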

Unconstrained Minimization

Consider the problem of finding a set of values [x1 x2]T that solves

\[
\min_{x} \; f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right), \qquad x = [\,x_1 \;\; x_2\,]^T
\]

Steps:
1. Create an M-file that returns the function value (the objective function). Call it objfun.m.
2. Invoke the unconstrained minimization routine fminunc.

Step 1 Obj. Function

function f = objfun(x)
% objective function of the design variables x = [x1 x2]
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

Step 2 Invoke Routine

Starting with a guess:

x0 = [-1,1];

Optimization parameter settings:

options = optimset('LargeScale','off');

Invoke the routine (output arguments on the left, input arguments on the right):

[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);

Results

xmin =
    0.5000   -1.0000

feval =
  1.3028e-010

exitflag =
     1

output =
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'

xmin is the minimum point of the design variables and feval is the objective function value there. exitflag tells whether the algorithm converged; if exitflag > 0, a local minimum was found. The output structure gives some other information about the run.

More on fminunc Input

[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)

fun       : the objective function to be minimized.
x0        : the initial guess; a vector whose size is the number of design variables.
options   : sets some of the optimization parameters (more a few slides later).
P1,P2,... : additional parameters passed to the objective function.

More on fminunc Output

[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)

xmin     : vector of the minimum (optimal) point. Its size is the number of design variables.
feval    : the objective function value at the optimal point.
exitflag : a value that shows whether the optimization routine terminated successfully (converged if > 0).
output   : a structure that gives more details about the optimization.
grad     : the gradient value at the optimal point.
hessian  : the Hessian value at the optimal point.

Options Setting optimset

options = optimset('param1',value1,'param2',value2,...)

The routines in the Optimization Toolbox have a set of default optimization parameters. However, the toolbox allows you to alter some of those parameters, for example: the tolerances, the step size, the gradient or Hessian values, the maximum number of iterations, etc.
There is also a list of features available, for example: displaying the values at each iteration, comparing a user-supplied gradient or Hessian, etc. You can also choose the algorithm you wish to use.

Options Setting (Cont.)

options = optimset('param1',value1,'param2',value2,...)

Type help optimset in the command window and a list of the available option settings will be displayed. How to read it? For example:

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

Here LargeScale is the parameter (param1), on and off are its possible values (value1), and the default is the one shown in { }.

Options Setting (Cont.)

options = optimset('param1',value1,'param2',value2,...)

LargeScale - Use large-scale algorithm if possible [ {on} | off ]

Since the default is on, if we would like to turn it off we just type:

options = optimset('LargeScale','off')

and pass options to the input of fminunc.

Useful Option Settings


Highly recommended to use!!!

Display - Level of display [ off | iter | notify | final ]
MaxIter - Maximum number of iterations allowed [ positive integer ]
TolCon  - Termination tolerance on the constraint violation [ positive scalar ]
TolFun  - Termination tolerance on the function value [ positive scalar ]
TolX    - Termination tolerance on X [ positive scalar ]
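A minimal usage sketch combining several of these settings (the particular values are assumptions, not from the slides):

options = optimset('Display','iter', ...    % show the values at each iteration
                   'MaxIter',200, ...       % allow at most 200 iterations
                   'TolFun',1e-8, ...       % termination tolerance on the function value
                   'TolX',1e-8);            % termination tolerance on X
[xmin,feval] = fminunc('objfun',[-1,1],options);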

fminunc and fminsearch


fminunc uses algorithms with gradient and Hessian information. It has two modes:
- Large-scale: interior-reflective Newton
- Medium-scale: quasi-Newton (BFGS)
It is not preferred for solving highly discontinuous functions, and it may only give local solutions.

fminsearch is generally less efficient than fminunc for problems of order greater than two. However, when the problem is highly discontinuous, fminsearch may be more robust. It is a direct search method that does not use numerical or analytic gradients as fminunc does. This function, too, may only give local solutions.
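A short sketch comparing the two routines on the earlier objfun example (an assumed illustration, not from the slides):

x0 = [-1, 1];
xg = fminunc('objfun', x0, optimset('LargeScale','off'));   % gradient-based quasi-Newton
xd = fminsearch('objfun', x0);                               % derivative-free Nelder-Mead simplex
% both should return approximately [0.5 -1.0]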

Constrained Minimization
[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)

lambda is the vector of Lagrange multipliers at the optimal point.

Example
function f = myfun(x)
f=-x(1)*x(2)*x(3);

\[
\min_{x} \; f(x) = -x_1 x_2 x_3
\]

Subject to:
\[
2x_1^2 + x_2 \le 0
\]
\[
0 \le x_1 + 2x_2 + 2x_3 \le 72
\]
\[
0 \le x_1, x_2, x_3 \le 30
\]

Writing the linear inequality constraints as A x ≤ B:
\[
A = \begin{bmatrix} -1 & -2 & -2 \\ 1 & 2 & 2 \end{bmatrix},
\qquad
B = \begin{bmatrix} 0 \\ 72 \end{bmatrix},
\qquad
LB = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\qquad
UB = \begin{bmatrix} 30 \\ 30 \\ 30 \end{bmatrix}
\]

Example (Cont.)

For the nonlinear constraint 2x1^2 + x2 ≤ 0, create a function called nonlcon which returns the two constraint vectors [C,Ceq]:

function [C,Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality constraint, C(x) <= 0
Ceq = [];              % no nonlinear equality constraints

Remember to return a null matrix [] if a constraint does not apply.

Example (Cont.)

Initial guess (3 design variables):

x0 = [10;10;10];

Linear inequality constraints A x ≤ B and the bounds:

A = [-1 -2 -2; 1 2 2];
B = [0 72]';
LB = [0 0 0]';
UB = [30 30 30]';

[x,feval] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)

CAREFUL!!! The arguments must follow the sequence
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)
so empty matrices [] are passed for the unused Aeq and Beq.

Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to medium-scale (line search). > In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213

In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully: Magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint violation is less than options.TolCon Const. 1 Active Constraints:

x1 2 x2

2 x3

2
9 x= 0.00050378663220 0.00000000000000

Const. 3 Const. 4

x1 2 x2 2 x3 72 Const. 2 Const. 5 0 x1 30
0 x2 30
Const. 6 Const. 8

30.00000000000000
feval = -4.657237250542452e-035

Const. 7

0 x3 30 2 x12 x2 0

Const. 9

Sequence: A,B,Aeq,Beq,LB,UB,C,Ceq
