Lab 4 Guide Parameter Estimation

The document discusses parameter estimation for a discrete-time system. It describes generating simulated data for a continuous system by discretizing it and adding white noise. The goal is to estimate the parameters that minimize the error between the simulated output and a linear model's output. This is formulated as a numerical optimization problem to minimize the cost function, which is the sum of squared errors between the actual and estimated outputs over all data points.


Master's degree in Automatic Control and Robotics

Optimization in Control and Robotics: Lab 4

Activity 2: Parameter Estimation



Discretization of continuous systems


If we have a continuous system: $\dot{x} = A\,x + B\,u$

Using zero-order hold on the inputs and a sample time of $T$ seconds, the associated discrete system is:

$$x(k+1) = a\,x(k) + b\,u(k) \qquad \text{or equivalently} \qquad \frac{X(z)}{U(z)} = \frac{b}{z-a}$$

where $a = e^{AT}$ and $b = \left(\int_0^T e^{A\tau}\,d\tau\right) B$.

Euler approximation: $\dot{x}(kT) \approx \dfrac{x(k+1)-x(k)}{T}$, which gives $x(k+1) = (1 + A\,T)\,x(k) + T\,B\,u(k)$.

For the tank model $\dot{V} = -k_v\,V + k_b\,u$ (so $A = -k_v$, $B = k_b$), the discrete system is $V(k+1) = a\,V(k) + b\,u(k)$ with

$$a = e^{-k_v T} \qquad b = \left(\int_0^T e^{-k_v \tau}\,d\tau\right) k_b = \frac{1 - e^{-k_v T}}{k_v}\,k_b$$

If we had used the Euler approximation: $a = 1 - k_v\,T$, $b = k_b\,T$.
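As a quick numerical check, the sketch below compares the exact ZOH discretization with the Euler approximation, assuming the tank parameters used later in this lab (kv = 0.1, kb = 2, T = 0.2) and the Control System Toolbox for c2d:

kv=0.1; kb=2; T=0.2;          % illustrative values (same as the data generation)
a_exact=exp(-kv*T)            % a = e^(-kv*T)
b_exact=kb/kv*(1-exp(-kv*T))  % b = (1-e^(-kv*T))/kv * kb
a_euler=1-kv*T                % Euler approximation of a
b_euler=kb*T                  % Euler approximation of b
% Cross-check against c2d (Control System Toolbox):
dsys=c2d(ss(-kv,kb,1,0),T);   % exact ZOH discretization
[nd,dd]=tfdata(dsys,'v');     % nd(2) should match b_exact, -dd(2) should match a_exact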

Problem description

The same input $u(k)$ is applied to the real system, which produces the measured output $y(k)$, and to the model, which produces the estimated output $\hat{y}(k) = f(u(k), \theta_i)$.

[Figure: generated data ("practice 4 DATA"); volume vs. time, k = 0 to 500]

Formulation of the problem

The same input $u(k)$ is applied to the real system, with output $y(k)$, and to the model, with output $\hat{y}(k) = f(u(k), \theta_i)$, $i = 1, \dots, n$ ($n$ parameters). If we have $N$ observations $u(1), u(2), \dots, u(N)$ and $y(1), y(2), \dots, y(N)$, the

cost function is:

$$J = \sum_{k=1}^{N} e(k)^2 = \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2 = \sum_{k=1}^{N} \left(y(k) - f(u(k), \theta_i)\right)^2$$

If $f(u(k), \theta_i) = c(k)^T \theta$ ($f$ is linear with respect to the parameters):

$$J = \sum_{k=1}^{N} \left(y(k) - c(k)^T \theta\right)^2 = \left(Y(N) - C(N)\,\theta(N)\right)^T \left(Y(N) - C(N)\,\theta(N)\right)$$

where:

$$Y(N) = \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix} \qquad C(N) = \begin{bmatrix} c^T(1) \\ c^T(2) \\ \vdots \\ c^T(N) \end{bmatrix} = \begin{bmatrix} c_1(1) & c_2(1) & \cdots & c_n(1) \\ c_1(2) & c_2(2) & \cdots & c_n(2) \\ \vdots & & & \vdots \\ c_1(N) & c_2(N) & \cdots & c_n(N) \end{bmatrix} \qquad \theta(N) = \theta = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{bmatrix}$$
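As a small numerical check of the equivalence between the sum-of-squares and matrix forms of $J$ (a minimal sketch with arbitrary test data; all names are illustrative):

Nt=10; nt=2;                      % arbitrary sizes for the test
Cm=randn(Nt,nt); Ym=randn(Nt,1);  % arbitrary regressors and outputs
th=randn(nt,1);                   % arbitrary parameter vector
J_sum=0;
for k=1:Nt
    J_sum=J_sum+(Ym(k)-Cm(k,:)*th)^2;  % sum-of-squares form
end
J_mat=(Ym-Cm*th)'*(Ym-Cm*th);     % matrix form
% J_sum and J_mat agree up to rounding error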

Data generation
% Practice 4. Parameter estimation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Data generation
rng(5)                % Initialize random seed (for repeatability)

T=0.2;                % Sample time
kv=0.1; kb=2;         % Parameters for data generation: dV/dt = -kv*V + kb*u
A=-kv; B=kb;
sys=ss(A,B,1,0);      % Continuous system sys: dx = A*x + B*u, y = 1*x + 0*u
                      % y = x = V = volume (output = state variable)
                      % u = v_b (input variable)
dsys=c2d(sys,T);      % Convert to the discrete system (zero-order hold)
[nd,dd]=tfdata(dsys,'v'); % Numerator and denominator as vectors:
                          % X(z)/U(z) = b/(z-a)
b=nd(2); a=-dd(2);    % x(k) = a*x(k-1) + b*u(k-1)

N=500;                % Number of input/output patterns

y=zeros(N,1);         % Initialize the output vector y
u=ones(N,1);          % Initialize the input vector u (unit step)

sigma=0.2;            % Standard deviation of the white noise
for k=2:N             % Generate the data, adding white noise
    y(k)=a*y(k-1)+b*u(k-1)+sigma*randn(1);
end

data=[u y];
save data data        % Save data for future calculations

Data generation

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
plot(y) % Plot the data generated
title('practice 4 DATA')
xlabel('time')
ylabel('volume')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

>> data

data =

    1.0000         0
    1.0000    0.3274
    1.0000    1.0502
    1.0000    1.2721
    1.0000    2.1701
    1.0000    2.5198
    ...

[Figure: plot of the generated data ("practice 4 DATA"); volume vs. time, k = 0 to 500]

Numerical optimization
$$J = \sum_{k=2}^{N} \left(y(k) - c(k)^T \theta\right)^2 = \left(Y(N) - C(N)\,\theta(N)\right)^T \left(Y(N) - C(N)\,\theta(N)\right) \qquad \min_{\theta} J$$

For the model $y(k+1) = a\,y(k) + b\,u(k)$:

$$Y(N) = \begin{bmatrix} y(2) \\ y(3) \\ \vdots \\ y(N) \end{bmatrix} \qquad C(N) = \begin{bmatrix} c^T(2) \\ c^T(3) \\ \vdots \\ c^T(N) \end{bmatrix} = \begin{bmatrix} y(1) & u(1) \\ y(2) & u(2) \\ \vdots & \vdots \\ y(N-1) & u(N-1) \end{bmatrix} \qquad \theta(N) = \begin{bmatrix} a \\ b \end{bmatrix}$$

The first estimated output is $\hat{y}(2)$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Parameter estimation
load data data;        % Read the data
u=data(:,1);           % Extract the inputs
y=data(:,2);           % Extract the outputs
N=size(data,1);        % Number of input/output patterns

Y=y(2:N);              % Column vector Y(N)
C=[y(1:N-1) u(1:N-1)]; % Regressor matrix C(N)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Numerical Optimization
J = @(x)(Y-C*x)'*(Y-C*x);      % Performance index
theta_est=fminsearch(J,[0;0]); % Parameter estimates from numerical
                               % optimization
%...

Numerical optimization

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Numerical optimization (YALMIP)
theta = sdpvar(2,1);

objective=(Y-C*theta)'*(Y-C*theta); % Performance index

constraints=[];

options = sdpsettings('solver','quadprog');
optimize(constraints,objective,options); % Numerical optimization

theta_est = value(theta);

kv_est = (1-theta_est(1))/T % Estimate of kv (Euler approximation)

kb_est = theta_est(2)/T     % Estimate of kb (Euler approximation)

Non recursive method

$$J = \sum_{k=2}^{N} \left(y(k) - c(k)^T \theta\right)^2 = \left(Y(N) - C(N)\,\theta(N)\right)^T \left(Y(N) - C(N)\,\theta(N)\right)$$

$$\frac{dJ}{d\theta} = -2\,C(N)^T\,Y(N) + 2\,C(N)^T C(N)\,\theta(N)$$

$$\frac{dJ}{d\theta} = 0 \;\Rightarrow\; \theta(N) = \left(C(N)^T C(N)\right)^{-1} C(N)^T\,Y(N)$$

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Non recursive method
theta_est_4=(C'*C)\C'*Y; % another way to obtain theta_est
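In MATLAB the same least-squares solution can also be obtained directly with the backslash operator, which solves the overdetermined system Y ≈ C*theta via QR factorization and avoids the conditioning problems of forming C'*C explicitly (the name theta_est_5 is illustrative):

theta_est_5=C\Y; % least-squares solution via QR factorization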

Confidence intervals

Model: $y(k+1) = a\,y(k) + b\,u(k) + \varepsilon$, with $\varepsilon \sim N(0, \sigma)$

$$\hat{\theta}(N) = \left(C(N)^T C(N)\right)^{-1} C(N)^T\,Y(N) = \begin{bmatrix} \hat{a} \\ \hat{b} \end{bmatrix} \quad \text{estimators of the parameters } a \text{ and } b$$

$$s^2 = \frac{J_{\hat{\theta}}}{N-2} \quad \text{estimator of the parameter } \sigma^2$$

Covariance matrix of the estimators: $\mathrm{cov} = s^2 \left(C(N)^T C(N)\right)^{-1}$

Confidence interval for $a$: $\left[\hat{a} - z_{\alpha/2}\sqrt{\mathrm{cov}_{11}},\; \hat{a} + z_{\alpha/2}\sqrt{\mathrm{cov}_{11}}\right]$

Confidence interval for $b$: $\left[\hat{b} - z_{\alpha/2}\sqrt{\mathrm{cov}_{22}},\; \hat{b} + z_{\alpha/2}\sqrt{\mathrm{cov}_{22}}\right]$

Exact discretization: $a = e^{AT}$, $b = \left(\int_0^T e^{A\tau}\,d\tau\right) B = \frac{B}{A}\left(e^{AT} - 1\right)$. Euler approximation: $a = 1 + AT$, $b = BT$.

Confidence intervals

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Confidence intervals
sigma2=J(theta_est)/(N-2);     % Estimate of the noise variance
Cov_mat=sigma2*inv(C'*C);      % Covariance matrix of the parameter estimators
std_error=sqrt(diag(Cov_mat)); % Standard errors

%kv_est=-log(theta_est(1))/T   % Point estimate of kv using exact discretization

%lbkv=-log(theta_est(1)+std_error(1)*3)/T; % Lower 3-sigma bound for kv (exact discret.)
%ubkv=-log(theta_est(1)-std_error(1)*3)/T; % Upper 3-sigma bound for kv (exact discret.)

kv_est=(1-theta_est(1))/T      % Point estimate of kv using Euler approximation

lbkv=(1-theta_est(1)-std_error(1)*3)/T % Lower bound of the 3-sigma (~99.7%) interval for kv
ubkv=(1-theta_est(1)+std_error(1)*3)/T % Upper bound of the 3-sigma (~99.7%) interval for kv

% An exact confidence interval for kb under the exact discretization is too
% difficult to obtain analytically

kb_est=theta_est(2)/T          % Point estimate of kb using Euler approximation

lbkb=(theta_est(2)-std_error(2)*3)/T;  % Lower bound of the 3-sigma (~99.7%) interval for kb
ubkb=(theta_est(2)+std_error(2)*3)/T;  % Upper bound of the 3-sigma (~99.7%) interval for kb
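Note that ±3 standard errors corresponds to a ≈99.7% confidence interval; for a 95% interval the multiplier is ≈1.96. A minimal sketch (assumes the Statistics and Machine Learning Toolbox for norminv; the *95 names are illustrative):

alpha=0.05;                % significance level for a 95% interval
z=norminv(1-alpha/2);      % ~1.96 (compare with the factor 3 used above)
lbkv95=(1-theta_est(1)-std_error(1)*z)/T; % 95% lower bound for kv
ubkv95=(1-theta_est(1)+std_error(1)*z)/T; % 95% upper bound for kv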

Recursive method

Initialization with the first two samples:

$$Y(3) = \begin{bmatrix} y(2) \\ y(3) \end{bmatrix} \qquad C(3) = \begin{bmatrix} y(1) & u(1) \\ y(2) & u(2) \end{bmatrix} \qquad \hat{\theta} = \begin{bmatrix} \hat{a} \\ \hat{b} \end{bmatrix}$$

$$P(3) = \left(C(3)^T C(3)\right)^{-1} \qquad Y(3) = C(3)\,\hat{\theta}(3) \;\Rightarrow\; \hat{\theta}(3) = P(3)\,C^T(3)\,Y(3)$$

Recursion, for $k = 3, \dots, N-1$:

$$c(k+1) = \begin{bmatrix} y(k) & u(k) \end{bmatrix}$$

$$P(k+1) = P(k) - \frac{P(k)\,c^T(k+1)\,c(k+1)\,P(k)}{1 + c(k+1)\,P(k)\,c^T(k+1)}$$

$$K(k+1) = \frac{P(k)\,c^T(k+1)}{1 + c(k+1)\,P(k)\,c^T(k+1)}$$

$$\hat{\theta}(k+1) = \hat{\theta}(k) + K(k+1)\left(y(k+1) - c(k+1)\,\hat{\theta}(k)\right)$$

Recursive method

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Recursive method
Y=[y(2);y(3)];        %Y(3)
C=[y(1:2) u(1:2)];    %C(3)
P=inv(C'*C);          %P(3)
theta_est_rec=P*C'*Y  %theta(3)
for i=3:N-1
    c=[y(i) u(i)];    % c(i+1)
    D=1+c*P*c';       % Denominator
    K=P*c'/D;         % Gain K(i+1)
    theta_est_rec=theta_est_rec+K*(y(i+1)-c*theta_est_rec);
    P=P-P*c'*c*P/D;   % Covariance update P(i+1)
end

kv_est_rec=-log(theta_est_rec(1))/T; % Point estimate of kv (exact discretization)
kb_est_rec=kv_est_rec*theta_est_rec(2)/(1-exp(-kv_est_rec*T)); % Point estimate of kb
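Since the recursion is initialized with the exact least-squares solution for the first two samples, it reproduces the batch (non-recursive) solution up to rounding error. A quick consistency check, assuming theta_est_4 from the non-recursive method is still in the workspace:

norm(theta_est_rec-theta_est_4) % should be close to zero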


Confidence intervals: Recursive method


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Recursive confidence intervals
Y=[y(2);y(3)];        %Y(3)
C=[y(1:2) u(1:2)];    %C(3)
P=inv(C'*C);          %P(3)
theta_est_rec=P*C'*Y; %theta(3)

lbkb=zeros(N,1);
ubkb=zeros(N,1);
lbkv=zeros(N,1);
ubkv=zeros(N,1);
kb_est=zeros(N,1);
kv_est=zeros(N,1);
sse=0;                % Accumulated squared prediction errors

for i=3:N-1
    c=[y(i) u(i)];
    D=1+c*P*c';                   % Denominator
    K=P*c'/D;
    e=y(i+1)-c*theta_est_rec;     % Prediction error before the update
    theta_est_rec=theta_est_rec+K*e;
    P=P-P*c'*c*P/D;
    sse=sse+e^2;
    sigma2=sse/(i-2);             % Estimate of the noise variance
    Cov_mat=sigma2*P;             % Covariance matrix of the estimators
    std_error=sqrt(diag(Cov_mat)); % Standard errors

    kv_est(i)=(1-theta_est_rec(1))/T; % Point estimate of kv (Euler approx.)
    lbkv(i)=(1-theta_est_rec(1)-std_error(1)*3)/T; % Lower 3-sigma bound for kv
    ubkv(i)=(1-theta_est_rec(1)+std_error(1)*3)/T; % Upper 3-sigma bound for kv

    kb_est(i)=theta_est_rec(2)/T;     % Point estimate of kb (Euler approx.)
    lbkb(i)=(theta_est_rec(2)-std_error(2)*3)/T;   % Lower 3-sigma bound for kb
    ubkb(i)=(theta_est_rec(2)+std_error(2)*3)/T;   % Upper 3-sigma bound for kb
end

Confidence intervals: Recursive method


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Confidence intervals. Recursive method
%Plots
figure(1)
plot(kv_est,'r');
hold on;
plot(lbkv);
plot(ubkv);
axis([0 N -0.5 0.5]);
title('3-sigma (~99.7%) confidence interval for kv');
xlabel('time');
ylabel('kv estimation');

figure(2)
plot(kb_est,'r');
hold on;
plot(lbkb);
plot(ubkb);
axis([0 N -1 5]);
title('3-sigma (~99.7%) confidence interval for kb');
xlabel('time');
ylabel('kb estimation');

Confidence intervals: Recursive method

[Figures: 3-sigma confidence intervals for kv (left, kv estimation vs. time) and kb (right, kb estimation vs. time), k = 0 to 500; estimate in red with lower and upper bounds]

Parameter estimation using Kalman filter

The regressor model $y(k) = c^T(k)\,\theta$ in state-space form, with the parameters considered as the states, $x(k) = \theta(k)$:

$$\theta(k+1) = \theta(k)$$
$$y(k) = c^T(k)\,\theta(k) + v(k)$$

where $v(k)$ is measurement noise with variance $R$.

Considering this state-space form ($A = I$, $B = 0$, $C = c^T(k)$) with no process disturbance ($Q = 0$), the Kalman filter formulas for parameter estimation are:

$$L(k) = P(k)\,c(k)\left(R + c^T(k)\,P(k)\,c(k)\right)^{-1}$$
$$\hat{\theta}(k+1) = \hat{\theta}(k) + L(k)\left(y(k) - c^T(k)\,\hat{\theta}(k)\right)$$
$$P(k+1) = \left(I - L(k)\,c^T(k)\right)P(k)$$
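A minimal MATLAB sketch of this Kalman-filter estimator, assuming the data (y, u, N) and the noise level sigma from the data-generation script are in the workspace; the initial estimate and the initial covariance P = 1000*eye(2) are illustrative assumptions:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Parameter estimation with the Kalman filter (sketch)
R=sigma^2;          % Measurement noise variance (known here: 0.2^2)
theta_kf=[0;0];     % Initial parameter estimate (assumed)
P=1000*eye(2);      % Large initial covariance = weak prior (assumed)
for k=2:N
    c=[y(k-1); u(k-1)];                     % Regressor c(k) as a column vector
    L=P*c/(R+c'*P*c);                       % Gain L(k)
    theta_kf=theta_kf+L*(y(k)-c'*theta_kf); % Measurement update of theta
    P=(eye(2)-L*c')*P;                      % Covariance update (A=I, B=0, Q=0)
end
theta_kf            % Should be close to [a; b] estimated before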
