
MATLAB Optimization Toolbox
(optimtool)

Dr. Rajesh Kumar
PhD, PDF (NUS, Singapore)
SMIEEE (USA), FIET (UK), FIETE, FIE (I), LMCSI, LMISTE
Professor, Department of Electrical Engineering
Malaviya National Institute of Technology, Jaipur, India
Mobile: (91) 9549654481
Email: [email protected], [email protected]
Web: https://drrajeshkumar.wordpress.com/
Contents

• Minimization algorithms
  – fgoalattain
  – fmincon
  – fminimax
  – fminunc
• Equation solving
  – fsolve
  – fseminf
• Linear programming
  – linprog
  – intlinprog
• Least-squares problems
  – lsqlin
  – lsqnonlin
  – lsqcurvefit (curve fitting)
• Quadratic programming
  – quadprog
• Global Optimization Toolbox
  – ga (genetic algorithm)
  – particleswarm (particle swarm optimization)
  – simulannealbnd (simulated annealing algorithm)
  – gamultiobj (multi-objective GA)
  – patternsearch
fgoalattain

• Solve multiobjective goal attainment problems

  minimize γ over x, γ
  subject to
    F(x) − weight·γ ≤ goal
    c(x) ≤ 0          Nonlinear inequality constraints
    ceq(x) = 0        Nonlinear equality constraints
    A·x ≤ b           Linear inequality constraints
    Aeq·x = beq       Linear equality constraints
    lb ≤ x ≤ ub       Range of x

• Where
  – weight, goal, b, and beq are vectors
  – A and Aeq are matrices
  – c(x), ceq(x), and F(x) are functions that return vectors
  – F(x), c(x), and ceq(x) can be nonlinear functions

fgoalattain

• Example An output feedback controller K is designed, producing the closed-loop system

  ẋ = (A + B·K·C)·x + B·u
  y = C·x

  with design considerations: closed-loop poles [−5, −3, −1] and gains −4 < K < 4.

  A = [-0.5 0 0; 0 -2 10; 0 1 -2];  B = [1 0; -2 2; 0 1];  C = [1 0 0; 0 0 1]
  goal = [-5 -3 -1];  weight = abs(goal)
  K0 = [-1 -1; -1 -1]
  lb = [-4 -4; -4 -4];  ub = [4 4; 4 4]
fgoalattain

• Create the function file eigfun.m:

function F = eigfun(K,A,B,C)
F = sort(eig(A+B*K*C)); % Evaluate objectives

• Next enter the system matrices and invoke an optimization routine:

A = [-0.5 0 0; 0 -2 10; 0 1 -2];
B = [1 0; -2 2; 0 1];
C = [1 0 0; 0 0 1];
K0 = [-1 -1; -1 -1];        % Initialize controller matrix
goal = [-5 -3 -1];          % Set goal values for the eigenvalues
weight = abs(goal);         % Set weight for same percentage
lb = -4*ones(size(K0));     % Set lower bounds
ub = 4*ones(size(K0));      % Set upper bounds
options = optimoptions('fgoalattain','Display','iter');
[K,fval,attainfactor] = fgoalattain(@(K)eigfun(K,A,B,C),...
    K0,goal,weight,[],[],[],[],lb,ub,[],options)
fgoalattain

• Result

K =
   -4.0000   -0.2564
   -4.0000   -4.0000

fval =
   -6.9313
   -4.1588
   -1.4099

attainfactor =
   -0.3863
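• To verify the design (a check not on the original slides), recompute the closed-loop poles with the returned K; they equal the fval vector reported by fgoalattain:

% Closed-loop eigenvalues achieved by the optimized controller K
poles = sort(eig(A + B*K*C))   % same values as fval above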
fmincon

• Find the minimum of a constrained nonlinear multivariable function

  minimize f(x) over x
  subject to
    c(x) ≤ 0
    ceq(x) = 0
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

• Syntax
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
fmincon

• Example Find the minimum value of Rosenbrock's function

  100·(x2 − x1²)² + (1 − x1)²

  subject to the constraint
  x1 + 2·x2 ≤ 1
  at starting point (−1, 2).

• Matlab Code
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
x0 = [-1,2];
A = [1,2];
b = 1;
x = fmincon(fun,x0,A,b)

• Solution
x =
    0.5022    0.2489
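• fmincon also accepts nonlinear constraints c(x) ≤ 0 and ceq(x) = 0 through a ninth input, nonlcon. A minimal sketch, assuming an extra unit-disk constraint x1² + x2² ≤ 1 that is not part of the original example (the function name unitdisk is hypothetical):

function [c,ceq] = unitdisk(x)
% Hypothetical nonlinear constraint: keep the solution inside the unit disk
c = x(1)^2 + x(2)^2 - 1;   % c(x) <= 0
ceq = [];                  % no nonlinear equality constraints

% In the main script:
x = fmincon(fun,x0,A,b,[],[],[],[],@unitdisk)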
fminimax

• Finds the minimum of a problem specified by

  minimize over x the maximum over i of fi(x)
  subject to
    c(x) ≤ 0
    ceq(x) = 0
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

• Syntax
[x,fval] = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
fminimax

• Example Find values of x that minimize the maximum value of
  [f1(x), f2(x), f3(x), f4(x), f5(x)]
  where
  f1(x) = 2·x1² + x2² − 48·x1 − 40·x2 + 304
  f2(x) = −x1² − 3·x2²
  f3(x) = x1 + 3·x2 − 18
  f4(x) = −x1 − x2
  f5(x) = x1 + x2 − 8
  at starting point (0.1, 0.1).
fminimax

• First, write a file that computes the five functions at x:

function f = myfun(x)
f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
f(2) = -x(1)^2 - 3*x(2)^2;
f(3) = x(1) + 3*x(2) - 18;
f(4) = -x(1) - x(2);
f(5) = x(1) + x(2) - 8;

• Next, invoke an optimization routine:

x0 = [0.1; 0.1];   % Make a starting guess at the solution
[x,fval] = fminimax(@myfun,x0);

• After seven iterations, the solution is

x =
    4.0000
    4.0000

fval =
    0.0000  -64.0000   -2.0000   -8.0000   -0.0000
fminunc

• Finds the minimum of a problem specified by

  minimize f(x) over x

  where f(x) is a function that returns a scalar and x is a vector or a matrix.

• Syntax
[x,fval] = fminunc(fun,x0)
fminunc

• Example Minimize the function

  f(x) = 3·x1² + 2·x1·x2 + x2² − 4·x1 + 5·x2

  at starting point (1, 1).

• Matlab Code
fun = @(x)3*x(1)^2 + 2*x(1)*x(2) + x(2)^2 - 4*x(1) + 5*x(2);
x0 = [1,1];
[x,fval] = fminunc(fun,x0);

• After a few iterations, fminunc returns the solution
x =
    2.2500   -4.7500
fval =
  -16.3750
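• Supplying the analytic gradient can speed up fminunc. A minimal sketch, not on the original slides; the option names assume a recent Optimization Toolbox release and the function name myobj is hypothetical:

function [f,g] = myobj(x)
% Objective value and gradient of f(x) = 3*x1^2 + 2*x1*x2 + x2^2 - 4*x1 + 5*x2
f = 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2 - 4*x(1) + 5*x(2);
g = [6*x(1) + 2*x(2) - 4;    % df/dx1
     2*x(1) + 2*x(2) + 5];   % df/dx2

% In the main script:
options = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true);
[x,fval] = fminunc(@myobj,[1,1],options);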
fseminf

• Finds the minimum of a problem specified by

  minimize f(x) over x
  subject to
    Ki(x, wi) ≤ 0,  1 ≤ i ≤ n
    c(x) ≤ 0
    ceq(x) = 0
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

  Ki(x, wi) ≤ 0 are continuous functions of both x and an additional set of variables w1, w2, ..., wn.

• Syntax
[x,fval] = fseminf(fun,x0,ntheta,seminfcon,A,b,Aeq,beq,lb,ub)
  – ntheta – number of semi-infinite constraints
  – seminfcon – semi-infinite constraint function
fseminf

• Example Minimize the function

  (x − 1)²

  subject to the constraints

  0 ≤ x ≤ 2
  g(x, t) = (x − 1/2) − (t − 1/2)² ≤ 0  for all 0 ≤ t ≤ 1

• The unconstrained objective function is minimized at x = 1. However, the constraint
  g(x, t) ≤ 0 for all 0 ≤ t ≤ 1
  implies x ≤ 1/2.
fseminf

• Write the objective function as an anonymous function:
objfun = @(x)(x-1)^2;

• Write the semi-infinite constraint function, which
  – includes the nonlinear constraints ([ ] in this case)
  – sets the initial sampling interval for t (0 to 1 in steps of 0.01 in this case)
  – evaluates the semi-infinite constraint g(x, t):

function [c, ceq, K1, s] = seminfcon(x,s)
% No finite nonlinear inequality and equality constraints
c = []; ceq = [];
% Sample set
if isnan(s)        % Initial sampling interval
    s = [0.01 0];
end
t = 0:s(1):1;
% Evaluate the semi-infinite constraint
K1 = (x - 0.5) - (t - 0.5).^2;
fseminf

• Call fseminf with initial point 0.2, and view the result:
x = fseminf(objfun,0.2,1,@seminfcon)
x =
    0.5000
fsolve

• Solves a nonlinear problem specified by

  F(x) = 0

  where F(x) is a function that returns a vector value.

• Syntax
[x,fval] = fsolve(fun,x0)
fsolve

• Example Solve two nonlinear equations in two variables. The equations are:

  exp(−exp(−(x1 + x2))) = x2·(1 + x1²)
  x1·cos(x2) + x2·sin(x1) = 1/2

• Converting to the form F(x) = 0:

  exp(−exp(−(x1 + x2))) − x2·(1 + x1²) = 0
  x1·cos(x2) + x2·sin(x1) − 1/2 = 0
fsolve

• Matlab code for the function file:

function F = root2d(x)
F(1) = exp(-exp(-(x(1)+x(2)))) - x(2)*(1+x(1)^2);
F(2) = x(1)*cos(x(2)) + x(2)*sin(x(1)) - 0.5;

• Solve the system of equations starting at the point [0,0]:

fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)

• Results
x =
    0.3532    0.6061
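• As a quick check (not on the original slides), the residuals at the returned point should be close to zero; fsolve can also return them directly:

% Second output is the vector of residuals F(x) at the solution
[x,fval] = fsolve(@root2d,[0,0]);
disp(fval)          % both components should be near zero
disp(root2d(x))     % equivalent manual check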
linprog

• Solve linear programming problems

  minimize fᵀ·x over x
  subject to
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

• Syntax
x = linprog(f,A,b,Aeq,beq,lb,ub)
linprog

• Example Linear programming

  minimize (−x1 − x2/3) over x
  subject to
    x1 + x2 ≤ 2
    x1 + x2/4 ≤ 1
    x1 − x2 ≤ 2
    −x1/4 − x2 ≤ 1
    −x1 − x2 ≤ −1
    −x1 + x2 ≤ 2

• Matlab Code
% Objective function
f = [-1 -1/3];
A = [1 1; 1 1/4; 1 -1; -1/4 -1; -1 -1; -1 1];
b = [2; 1; 2; 1; -1; 2];
% calling linprog
x = linprog(f,A,b,[],[],[],[])
linprog

• Results
Optimal solution found.
x =
    0.6667
    1.3333
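• A sketch (not on the original slides) of also requesting the objective value and exit flag from linprog:

% fval is f*x at the optimum; exitflag = 1 indicates convergence
[x,fval,exitflag] = linprog(f,A,b);
% With x = [0.6667; 1.3333], fval is approximately -1.1111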
intlinprog

• Mixed-integer linear programming (MILP)

  minimize fᵀ·x over x
  subject to
    x(intcon) are integers
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

• Syntax
x = intlinprog(f,intcon,A,b,Aeq,beq,lb,ub)
  – intcon – indices of the integer-valued variables
intlinprog

• Example Mixed-integer linear programming (MILP)

  minimize (8·x1 + x2) over x
  subject to
    x2 is an integer
    x1 + 2·x2 ≥ −14
    −4·x1 − x2 ≤ −33
    2·x1 + x2 ≤ 20

• Matlab Code
% Objective function and integer variable
f = [8;1];
intcon = 2;
A = [-1,-2; -4,-1; 2,1];
b = [14; -33; 20];
% calling intlinprog
x = intlinprog(f,intcon,A,b)
intlinprog

• Results
LP: Optimal objective value is 59.000000.
x =
    6.5000
    7.0000
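• If both variables had to be integers (a variation not on the original slides), only intcon changes:

% Declare both x1 and x2 as integer variables
intcon = [1 2];
x = intlinprog(f,intcon,A,b)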
lsqcurvefit

• Solve nonlinear curve-fitting (data-fitting) problems in the least-squares sense

  minimize over x  ‖F(x, xdata) − ydata‖₂²  =  Σᵢ (F(x, xdataᵢ) − ydataᵢ)²

  The nonlinear least-squares solver finds coefficients x that solve the problem.
  The user-defined model is evaluated in vector form:

  F(x, xdata) = [F(x, xdata(1)); F(x, xdata(2)); ... ; F(x, xdata(k))]

• Syntax
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
lsqcurvefit

• Example Simple exponential fit

  ydata = x1·exp(x2·xdata)

• Matlab code
xdata = [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 ...
    74.6 81.3];
ydata = [455.2 428.6 124.1 67.3 43.2 28.1 13.1 ...
    -0.4 -1.3 -1.5];
% Create a simple exponential decay model
fun = @(x,xdata)x(1)*exp(x(2)*xdata);
% Fit the model using the starting point
x0 = [100,-1];
x = lsqcurvefit(fun,x0,xdata,ydata,[],[])
lsqcurvefit

• Results
Local minimum possible.
x =
  498.8309   -0.1013
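• A sketch (not on the original slides) of plotting the data against the fitted curve, reusing fun, x, xdata, and ydata from above:

times = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ko',times,fun(x,times),'b-')
legend('Data','Fitted exponential')
xlabel('xdata'); ylabel('ydata')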
lsqlin

• Solve constrained linear least-squares problems

  minimize over x  (1/2)·‖C·x − d‖₂²
  subject to
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

  i.e., solve the linear system C·x = d in the least-squares sense.

• Syntax
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
lsqlin

• Example Find the x that minimizes the norm of C·x − d for an overdetermined problem with linear equality and inequality constraints and bounds.

• Matlab code
C = [0.9501 0.7620 0.6153 0.4057;
     0.2311 0.4564 0.7919 0.9354;
     0.6068 0.0185 0.9218 0.9169;
     0.4859 0.8214 0.7382 0.4102;
     0.8912 0.4447 0.1762 0.8936];
d = [0.0578; 0.3528; 0.8131; 0.0098; 0.1388];
A = [0.2027 0.2721 0.7467 0.4659;
     0.1987 0.1988 0.4450 0.4186;
     0.6037 0.0152 0.9318 0.8462];
lsqlin

• Matlab code contd.
b = [0.5251; 0.2026; 0.6721];
Aeq = [3 5 7 9];
beq = 4;
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
% call lsqlin to solve the problem
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)

• Results
x = -0.1000   -0.1000    0.1599    0.4090
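• A sketch (not on the original slides) of inspecting the achieved residual, which lsqlin can also return directly:

% resnorm is the squared 2-norm of the residual C*x - d
[x,resnorm] = lsqlin(C,d,A,b,Aeq,beq,lb,ub);
norm(C*x - d)^2     % equivalent manual check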
lsqnonlin

• To solve nonlinear least-squares curve-fitting problems

  minimize over x  ‖f(x)‖₂²  =  Σᵢ₌₁ⁿ fᵢ(x)²

• Uses the trust-region-reflective algorithm or Levenberg-Marquardt (set in options)
• lsqnonlin stops when the final change in the sum of squares relative to its initial value is less than the default value of the function tolerance.

• Syntax
x = lsqnonlin(fun,x0,lb,ub)
lsqnonlin

• Example Fit a simple exponential decay curve to data. The model considered is:

  y = exp(−1.3·t) + ε

  where 0 ≤ t ≤ 3, and ε is normally distributed noise with mean 0 and standard deviation 0.05.

• Matlab code
rng default               % for reproducibility
d = linspace(0,3);
y = exp(-1.3*d) + 0.05*randn(size(d));
fun = @(r)exp(-d*r)-y;
x0 = 4;
x = lsqnonlin(fun,x0,[],[])
lsqnonlin

• Results
plot(d,y,'ko',d,exp(-x*d),'b-')
legend('Data','Best fit')
xlabel('t')
ylabel('exp(-tx)')
quadprog

• Finds a minimum for a quadratic programming problem specified by:

  minimize over x  (1/2)·xᵀ·H·x + fᵀ·x
  subject to
    A·x ≤ b
    Aeq·x = beq
    lb ≤ x ≤ ub

• Syntax
x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
  – options – optimoptions settings for this solver
quadprog

• Example Solve the quadratic programming problem:

  f(x) = (1/2)·x1² + x2² − x1·x2 − 2·x1 − 6·x2
  subject to
    x1 + x2 ≤ 2
    −x1 + 2·x2 ≤ 2
    2·x1 + x2 ≤ 3
    0 ≤ x1, 0 ≤ x2

• Matlab Code
H = [1 -1; -1 2]; f = [-2; -6];
A = [1 1; -1 2; 2 1]; b = [2; 2; 3];
lb = zeros(2,1);
options = optimoptions('quadprog', ...
    'Algorithm','interior-point-convex','Display','off');
[x,fval] = quadprog(H,f,A,b,[],[],lb,[],[],options);
quadprog

• Result
x =
    0.6667
    1.3333
fval =
   -8.2222
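• For comparison (not on the original slides), the unconstrained minimizer solves H·x = −f; here it lies outside the feasible set, which is why the inequality constraints are active:

% Unconstrained stationary point of (1/2)x'Hx + f'x
x_unc = -H\f    % returns [10; 8], which violates x1 + x2 <= 2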
Genetic Algorithm (GA)

• Genetic algorithm solver for mixed-integer or continuous-variable optimization, constrained or unconstrained
• Syntax
x = ga(fun,nvars,A,b,Aeq,beq,lb,ub)
  – nvars is the dimension (number of design variables) of fun
Genetic Algorithm (GA)

• Example Use the genetic algorithm to minimize the ps_example function (built into Matlab), with the following constraints:

  x1 + x2 ≥ 1
  x2 = x1 + 5
  1 ≤ x1 ≤ 6
  −3 ≤ x2 ≤ 8
Genetic Algorithm (GA)

• Rearranging the constraints into solver form:

  −x1 − x2 ≤ −1
  −x1 + x2 = 5
  1 ≤ x1 ≤ 6
  −3 ≤ x2 ≤ 8

• Matlab Code
A = [-1 -1]; b = -1;
Aeq = [-1 1]; beq = 5;
lb = [1 -3]; ub = [6 8];   % Set bounds lb and ub
fun = @ps_example;
x = ga(fun,2,A,b,Aeq,beq)
Genetic Algorithm (GA)

• Results
x =
   -2.0000    2.9990
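• The bounds defined above are not passed in the call, so they are not enforced (the returned x1 lies below 1). A sketch (not on the original slides) of also passing lb and ub, following the syntax shown earlier:

% Enforce 1 <= x1 <= 6 and -3 <= x2 <= 8 as well
x = ga(fun,2,A,b,Aeq,beq,lb,ub)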
particleswarm

• Bound-constrained optimization using Particle Swarm Optimization (PSO)
• Minimizes an objective function subject to bound constraints
• MATLAB built-in function particleswarm
  – May be run as an unconstrained problem or restricted to a range
  – Allows various options to be specified using optimoptions
• Syntax
x = particleswarm(fun,nvars,lb,ub,options)
particleswarm

• Example The objective function is

  fun = @(x) x(1)·exp(−‖x‖²)

  Set bounds on the variables:
  lb = [−10, −15] and ub = [15, 20]

• Matlab Code
% Define the objective function.
fun = @(x)x(1)*exp(-norm(x)^2);
% Call particleswarm to minimize the function.
rng default     % For reproducibility
nvars = 2;
x = particleswarm(fun,nvars)
particleswarm

• Result
x =
  629.4474  311.4814
% This solution is far from the true minimum, as
% you see in a function plot.
fsurf(@(x,y)x.*exp(-(x.^2+y.^2)))
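• A sketch (not on the original slides) of repeating the call with the bounds defined above; this confines the search to a region containing the true minimizer, which lies near x = [−0.707, 0]:

% Bound-constrained call
lb = [-10,-15];
ub = [15,20];
x = particleswarm(fun,nvars,lb,ub)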
simulannealbnd

• Finds the minimum of a given function using the Simulated Annealing algorithm
• Upper and lower bounds may be defined
• Useful in minimizing functions that may have many local minima
• Syntax
[x,fval] = simulannealbnd(fun,x0,lb,ub,options)
simulannealbnd

• Example Minimize De Jong's fifth function, a 2-D function with many local minima
• Starting point: [0, 0]
• UB is 64 and LB is −64

[Figure: De Jong's fifth function plot – many local minima]
simulannealbnd (example)

Plot De Jong's function and assign fun = @dejong5fcn. In the main program, call simulannealbnd with starting point [0,0]; then set LB and UB and call simulannealbnd again with the bounds.

• No lower and upper bounds
fun = @dejong5fcn;
x0 = [0 0];
x = simulannealbnd(fun,x0)
Output
x =
  -31.9785  -31.9797

• With LB and UB
fun = @dejong5fcn;
x0 = [0 0];
lb = [-64 -64];
ub = [64 64];
x = simulannealbnd(fun,x0,lb,ub)
Output
x =
    0.0009  -31.9644

Note: The simulannealbnd algorithm uses the MATLAB® random number stream, so different results may be obtained after each run.
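• A sketch (not on the original slides) of passing solver options, as allowed by the syntax above; the option names assume a recent Global Optimization Toolbox release:

% Show iteration output and allow more iterations
options = optimoptions('simulannealbnd','Display','iter','MaxIterations',2000);
[x,fval] = simulannealbnd(fun,x0,lb,ub,options)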
Multi-objective GA

• Find the Pareto front of multiple fitness functions using the genetic algorithm
• Syntax
x = gamultiobj(fitnessfcn,nvars,A,b,Aeq,beq,lb,ub)
Multi-objective GA

• Example Compute the Pareto front for a simple multi-objective problem. There are two objectives and two decision variables x:

  f1(x) = ‖x‖²
  f2(x) = 0.5·‖x − [2; −1]‖² + 2

• Matlab code
fitnessfcn = @(x)[norm(x)^2, 0.5*norm(x(:)-[2;-1])^2+2];
rng default % For reproducibility
x = gamultiobj(fitnessfcn,2,[],[],[],[],[],[])
Multi-objective GA

• Results
Optimization terminated: average change in the spread of Pareto solutions less than options.FunctionTolerance.
x =
   -0.0072    0.0003
    0.0947   -0.0811
    1.0217   -0.6946
    1.1254   -0.0857
   -0.0072    0.0003
    0.4489   -0.2101
    1.8039   -0.8394
    0.5115   -0.6314
    1.5164   -0.7277
    1.7082   -0.7006
    1.8330   -0.9337
    0.7657   -0.6695
    0.7671   -0.4882
    1.2080   -0.5407
    1.0075   -0.5348
    0.6281   -0.1454
    2.0040   -1.0064
    1.5314   -0.9184
Multi-objective GA

• Plot the solution points
plot(x(:,1),x(:,2),'ko')
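• The Pareto front itself lives in objective space. A sketch (not on the original slides) of also requesting the objective values and plotting them:

% Second output fval holds [f1, f2] for each point on the front
[x,fval] = gamultiobj(fitnessfcn,2,[],[],[],[],[],[]);
plot(fval(:,1),fval(:,2),'ko')
xlabel('f_1(x)'); ylabel('f_2(x)')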
patternsearch

• Find a local minimum of a given function using pattern search, subject to
  – linear equalities/inequalities,
  – lower/upper bounds,
  – nonlinear inequalities/equalities
• Specific options may be given using optimoptions
• Syntax
x = patternsearch(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
patternsearch

• Example 1 Minimize an unconstrained problem using a user function and the patternsearch solver. Define a user function (obj1) that describes the objective

  y = exp(−x1² − x2²)·(1 + 5·x1 + 6·x2 + 12·x1·cos(x2))

Function Code
function [y] = obj1(x)
y = exp(-x(1)^2-x(2)^2)*(1+5*x(1) + 6*x(2) + 12*x(1)*cos(x(2)));

Main Code (call obj1, define the starting point, and use patternsearch to find the minimum)
fun = @obj1;
x0 = [0,0];
x = patternsearch(fun,x0)

Output
x =
   -0.7037   -0.1860
patternsearch

• Example 2 Solve the previous problem with lower bound (LB) and upper bound (UB) specified, starting at [1, −5]:

  0 ≤ x1 ≤ ∞
  −∞ ≤ x2 ≤ −3

Main Code
fun = @obj1;
lb = [0,-Inf];
ub = [Inf,-3];
A = [];
b = [];
Aeq = [];
beq = [];
x0 = [1,-5];
x = patternsearch(fun,x0,A,b,Aeq,beq,lb,ub)

Output
x =
    0.1880   -3.0000
