Combined State and Least Squares Parameter Estimation Algorithms For Dynamic Systems


Applied Mathematical Modelling 38 (2014) 403–412

Contents lists available at SciVerse ScienceDirect

Applied Mathematical Modelling


journal homepage: www.elsevier.com/locate/apm

Short communication

Combined state and least squares parameter estimation
algorithms for dynamic systems ☆

Feng Ding *

Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
Control Science and Engineering Research Center, Jiangnan University, Wuxi 214122, PR China

Article history:
Received 15 January 2013
Accepted 1 June 2013
Available online 4 July 2013

Keywords:
Dynamic system
Numerical algorithm
Least squares
Parameter estimation
Recursive identification
State space model

Abstract

Control theory and automation technology are hallmarks of our era: highly integrated computer chips and automation products are changing our lives, and mathematical models and parameter estimation are basic to automatic control. This paper discusses parameter estimation algorithms for establishing mathematical models of dynamic systems and presents an estimated states based recursive least squares algorithm, in which the states of the system are computed through the Kalman filter using the estimated parameters. A numerical example is provided to confirm the effectiveness of the proposed algorithm.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

Numerical methods have wide applications in solving matrix equations and computing the model parameters of dynamic
systems [1–3]. Typical numerical identification methods include the gradient search, the least squares and the Newton
methods [4–6]. Parameter estimation is basic for controller design [7–9], filtering and state estimation [10,11] and system
identification [12–14]. Recently, a gradient based iterative method and a least squares based iterative method were
presented for identifying multiple-input multiple-output systems [15] and for identifying Wiener nonlinear systems [16];
Newton recursive and Newton iterative algorithms were developed for identifying Hammerstein nonlinear systems [17];
a least squares based recursive estimation algorithm and a least squares based iterative algorithm were proposed for
output error moving average systems using data filtering [18]; and several maximum likelihood based recursive least
squares algorithms were discussed for systems with colored noises [19–21].
In the area of parameter estimation [22–25], Zhang et al. proposed a bias compensation based recursive least squares
method for stochastic systems with colored noises [26] and for a class of multiple-input single-output systems [27]; Liu
et al. discussed a multi-innovation stochastic gradient approach for multiple-input single-output systems using the
multi-innovation identification theory and the auxiliary model identification idea [28] and analyzed the convergence of
the stochastic gradient algorithm for multivariable ARX-like systems [29]. Ding et al. presented an auxiliary model based
multi-innovation stochastic gradient algorithm for systems with scarce measurements [30] and an auxiliary model based
recursive least squares algorithm for missing-data systems [31]. Xiao et al. presented a residual based interactive least
squares algorithm for controlled autoregressive moving average systems [32]; Ding and Duan proposed two-stage
parameter estimation algorithms for Box–Jenkins systems [33].

☆ This work was supported by the National Natural Science Foundation of China (No. 61273194), the Natural Science Foundation of Jiangsu Province
(China, BK2012549), the 111 Project (B12018) and the PAPD of Jiangsu Higher Education Institutions.
* Address: Control Science and Engineering Research Center, Jiangnan University, Wuxi 214122, PR China.
E-mail address: [email protected]

0307-904X/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.apm.2013.06.007
In the field of state space system identification, Ding et al. presented a hierarchical identification method for the lifted
state space model of general dual-rate systems [34] and for non-uniformly sampled-data systems [35]; Gu et al. discussed
a least squares numerical parameter estimation algorithm for a state space model with multi-state delays, assuming the
states of the system are available [36], and studied parameter and state estimation for a state space model with a one-unit
state delay [37] and for a multivariable state space system with d-step state-delay [38]. This paper studies the identification
of canonical state space systems, assuming that the states of the system are unavailable.
This paper is organized as follows. Section 2 derives the identification model for state space systems. Section 3 gives the
parameter and state estimation algorithm. Section 4 provides an example to verify the effectiveness of the proposed
algorithm. Finally, concluding remarks are given in Section 5.

2. The identification model for the state space systems

Let us define some notation: "$A =: X$" or "$X := A$" stands for "$A$ is defined as $X$"; $z$ denotes the unit forward shift operator
with $zx(t) = x(t+1)$ and $z^{-1}x(t) = x(t-1)$.
Consider the following observer canonical state space system,
$$x(t+1) = Ax(t) + bu(t), \tag{1}$$
$$y(t) = cx(t) + v(t), \tag{2}$$
where $x(t) := [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}$ is the system input, $y(t) \in \mathbb{R}$ is the system output,
$v(t) \in \mathbb{R}$ is random noise with zero mean, and $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$ and $c \in \mathbb{R}^{1 \times n}$ are the system parameter matrix and vectors:
$$A := \begin{bmatrix}
-a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & & \ddots & 0 \\
-a_{n-1} & 0 & \cdots & 0 & 1 \\
-a_n & 0 & \cdots & \cdots & 0
\end{bmatrix} \in \mathbb{R}^{n \times n}, \quad
b := \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix} \in \mathbb{R}^n, \quad
c := [1, 0, 0, \ldots, 0] \in \mathbb{R}^{1 \times n}.$$
The parameters $a_i \in \mathbb{R}$ and $b_i \in \mathbb{R}$ are to be identified from the observation data $\{u(t), y(t): t = 1, 2, 3, \ldots\}$.
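To make the canonical structure concrete, the following MATLAB helper sketches how the triple (A, b, c) above can be assembled from the coefficient vectors; the function name obsv_canonical and its interface are introduced here for illustration only and are not part of the paper.

function [A, B, C] = obsv_canonical(a, b)
% Build the observer canonical form (A, b, c) of (1)-(2) from the
% coefficient vectors a = [a_1, ..., a_n] and b = [b_1, ..., b_n].
n = numel(a);
A = [-a(:), [eye(n-1); zeros(1, n-1)]];   % first column -a_i, shifted identity to the right
B = b(:);                                 % input vector
C = [1, zeros(1, n-1)];                   % output row vector picks out x_1(t)
end

For the second-order example of Section 4, obsv_canonical([0.8, 0.4], [1.68, 2.32]) returns A = [-0.8, 1; -0.4, 0], b = [1.68; 2.32] and c = [1, 0].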
From (1) and (2), we have
$$\begin{bmatrix} x_1(t+1) \\ x_2(t+1) \\ \vdots \\ x_{n-1}(t+1) \\ x_n(t+1) \end{bmatrix} =
\begin{bmatrix}
-a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & & \ddots & 0 \\
-a_{n-1} & 0 & \cdots & 0 & 1 \\
-a_n & 0 & \cdots & \cdots & 0
\end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_{n-1}(t) \\ x_n(t) \end{bmatrix} +
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix} u(t), \tag{3}$$
$$y(t) = [1, 0, 0, \ldots, 0]x(t) + v(t), \tag{4}$$
which can be written as
$$x_i(t+1) = -a_i x_1(t) + x_{i+1}(t) + b_i u(t), \quad i = 1, 2, \ldots, n-1, \tag{5}$$
$$x_n(t+1) = -a_n x_1(t) + b_n u(t), \tag{6}$$
$$y(t) = x_1(t) + v(t). \tag{7}$$
Multiplying (5) by $z^{-i}$ gives
$$x_i(t-i+1) = -a_i x_1(t-i) + x_{i+1}(t-i) + b_i u(t-i), \quad i = 1, 2, \ldots, n-1.$$
Summing over $i$ from $i = 1$ to $i = n-1$ gives
$$\sum_{i=1}^{n-1} x_i(t-i+1) = -\sum_{i=1}^{n-1} a_i x_1(t-i) + \sum_{i=1}^{n-1} x_{i+1}(t-i) + \sum_{i=1}^{n-1} b_i u(t-i),$$
or, since the terms $x_2(t-1), x_3(t-2), \ldots, x_{n-1}(t-n+2)$ appear on both sides and cancel,
$$x_1(t) = -\sum_{i=1}^{n-1} a_i x_1(t-i) + x_n(t-n+1) + \sum_{i=1}^{n-1} b_i u(t-i). \tag{8}$$

Multiplying (6) by $z^{-n}$ gives
$$x_n(t-n+1) = -a_n x_1(t-n) + b_n u(t-n). \tag{9}$$
Substituting (9) into (8) gives
$$x_1(t) = -\sum_{i=1}^{n-1} a_i x_1(t-i) - a_n x_1(t-n) + \sum_{i=1}^{n-1} b_i u(t-i) + b_n u(t-n)
= -\sum_{i=1}^{n} a_i x_1(t-i) + \sum_{i=1}^{n} b_i u(t-i). \tag{10}$$

Define the parameter vector $\theta$ and the information vector $\varphi(t)$ as
$$\theta := \begin{bmatrix} a \\ b \end{bmatrix} \in \mathbb{R}^{2n}, \quad
\varphi(t) := \begin{bmatrix} \phi(t) \\ \psi(t) \end{bmatrix} \in \mathbb{R}^{2n},$$
$$a := \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \in \mathbb{R}^n, \quad
b := \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \in \mathbb{R}^n, \quad
\phi(t) := \begin{bmatrix} -x_1(t-1) \\ -x_1(t-2) \\ \vdots \\ -x_1(t-n) \end{bmatrix} \in \mathbb{R}^n, \quad
\psi(t) := \begin{bmatrix} u(t-1) \\ u(t-2) \\ \vdots \\ u(t-n) \end{bmatrix} \in \mathbb{R}^n.$$
Using (10) and (7), we obtain the identification model of the state space system in (1) and (2):
$$y(t) = x_1(t) + v(t) = \phi^T(t)a + \psi^T(t)b + v(t)
= [\phi^T(t), \psi^T(t)]\begin{bmatrix} a \\ b \end{bmatrix} + v(t)
= \varphi^T(t)\theta + v(t). \tag{11}$$
The information vector $\varphi(t)$ consists of the states $x_1(t-i)$ and the inputs $u(t-i)$, and the parameter vector $\theta$ consists of all the
parameters $a_i$ and $b_i$ of the state space system in (1) and (2).
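As a quick check of (10) and (11), for the second-order case $n = 2$ (the case used in the example of Section 4) the identification model reads
$$y(t) = -a_1 x_1(t-1) - a_2 x_1(t-2) + b_1 u(t-1) + b_2 u(t-2) + v(t)
= [-x_1(t-1), -x_1(t-2), u(t-1), u(t-2)]\,\theta + v(t), \quad \theta = [a_1, a_2, b_1, b_2]^T.$$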

3. The parameter and state estimation algorithm

3.1. The state estimation algorithm

If the parameter matrix $A$ and the parameter vector $b$ are known, then we can apply the following Kalman filter to generate the
estimate $\hat{x}(t)$ of the state vector $x(t)$:
$$\hat{x}(t+1) = A\hat{x}(t) + bu(t) + L_1(t)[y(t) - c\hat{x}(t)], \quad \hat{x}(1) = \mathbf{1}_n/p_0, \tag{12}$$
$$L_1(t) = AP_1(t)c^T[1 + cP_1(t)c^T]^{-1}, \tag{13}$$
$$P_1(t+1) = AP_1(t)A^T - L_1(t)cP_1(t)A^T, \quad P_1(1) = I_n. \tag{14}$$
When the parameter matrix $A$ and the vector $b$ are unknown, we use the estimated parameter vector
$$\hat{\theta}(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_n(t), \hat{b}_1(t), \hat{b}_2(t), \ldots, \hat{b}_n(t)]^T$$
to construct the estimates $\hat{A}(t)$ and $\hat{b}(t)$ of $A$ and $b$, and use the estimated parameter matrix $\hat{A}(t)$ and parameter vector $\hat{b}(t)$
to compute the estimate $\hat{x}(t)$ of the state vector $x(t)$ [37,38]:
$$\hat{x}(t+1) = \hat{A}(t)\hat{x}(t) + \hat{b}(t)u(t) + L_2(t)[y(t) - c\hat{x}(t)], \quad \hat{x}(1) = \mathbf{1}_n/p_0, \tag{15}$$
$$L_2(t) = \hat{A}(t)P_2(t)c^T[1 + cP_2(t)c^T]^{-1}, \tag{16}$$
$$P_2(t+1) = \hat{A}(t)P_2(t)\hat{A}^T(t) - L_2(t)cP_2(t)\hat{A}^T(t), \quad P_2(1) = I_n, \tag{17}$$
$$\hat{A}(t) = \begin{bmatrix}
-\hat{a}_1(t) & 1 & 0 & \cdots & 0 \\
-\hat{a}_2(t) & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & & \ddots & 0 \\
-\hat{a}_{n-1}(t) & 0 & \cdots & 0 & 1 \\
-\hat{a}_n(t) & 0 & \cdots & \cdots & 0
\end{bmatrix}, \quad
\hat{b}(t) = \begin{bmatrix} \hat{b}_1(t) \\ \hat{b}_2(t) \\ \vdots \\ \hat{b}_{n-1}(t) \\ \hat{b}_n(t) \end{bmatrix}. \tag{18}$$
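A minimal MATLAB sketch of one recursion of the state estimator (15)–(17) is given below; the function name kf_state_step and its calling convention are chosen here for illustration (they are not from the paper), and Ahat, bhat stand for the current estimates $\hat{A}(t)$ and $\hat{b}(t)$ constructed from $\hat{\theta}(t)$ as in (18).

function [xhat_next, P2_next] = kf_state_step(Ahat, bhat, c, xhat, P2, u_t, y_t)
% One recursion of the Kalman-filter-like state estimator (15)-(17),
% driven by the currently estimated parameter matrix/vector Ahat, bhat.
L2        = Ahat*P2*c' / (1 + c*P2*c');                 % state gain L_2(t), Eq. (16)
xhat_next = Ahat*xhat + bhat*u_t + L2*(y_t - c*xhat);   % state update, Eq. (15)
P2_next   = Ahat*P2*Ahat' - L2*c*P2*Ahat';              % covariance update, Eq. (17)
end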

3.2. The parameter estimation algorithm

Let $\hat{\theta}(t)$ represent the estimate of $\theta$ at time $t$. According to the least squares principle, defining and minimizing the
quadratic criterion function
$$J(\theta) := \sum_{j=1}^{t} [y(j) - \varphi^T(j)\theta]^2,$$
we can obtain the following recursive algorithm:
$$\hat{\theta}(t) = \hat{\theta}(t-1) + P(t)\varphi(t)[y(t) - \varphi^T(t)\hat{\theta}(t-1)], \quad \hat{\theta}(0) = \mathbf{1}_{2n}/p_0, \tag{19}$$
$$P^{-1}(t) = P^{-1}(t-1) + \varphi(t)\varphi^T(t), \quad P(0) = p_0 I_{2n}, \tag{20}$$
where $\mathbf{1}_{2n}$ denotes a $2n$-dimensional column vector whose elements are all unity, and $p_0$ is generally taken to be a large
positive number, e.g., $p_0 = 10^6$.
Because the information vector $\varphi(t)$ contains the unmeasurable state variables $x_1(t-i)$ in $\phi(t)$, the algorithm in (19) and
(20) is impossible to implement directly. The scheme here is to replace $x_1(t-i)$ in $\varphi(t)$ with the estimated state $\hat{x}_1(t-i)$ and to define
$$\hat{\varphi}(t) := \begin{bmatrix} \hat{\phi}(t) \\ \psi(t) \end{bmatrix} \in \mathbb{R}^{2n}, \quad
\hat{\phi}(t) := \begin{bmatrix} -\hat{x}_1(t-1) \\ -\hat{x}_1(t-2) \\ \vdots \\ -\hat{x}_1(t-n) \end{bmatrix} \in \mathbb{R}^n. \tag{21}$$
Replacing $\varphi(t)$ in (19) and (20) with its estimate $\hat{\varphi}(t)$ yields
$$\hat{\theta}(t) = \hat{\theta}(t-1) + P(t)\hat{\varphi}(t)[y(t) - \hat{\varphi}^T(t)\hat{\theta}(t-1)], \quad \hat{\theta}(0) = \mathbf{1}_{2n}/p_0, \tag{22}$$
$$P^{-1}(t) = P^{-1}(t-1) + \hat{\varphi}(t)\hat{\varphi}^T(t), \quad P(0) = p_0 I_{2n}. \tag{23}$$
Applying the matrix inversion lemma [1,36]
$$(A + BC)^{-1} = A^{-1} - A^{-1}B(I + CA^{-1}B)^{-1}CA^{-1}$$
to (23) gives
$$P(t) = P(t-1) - P(t-1)\hat{\varphi}(t)[1 + \hat{\varphi}^T(t)P(t-1)\hat{\varphi}(t)]^{-1}\hat{\varphi}^T(t)P(t-1). \tag{24}$$
Define the gain vector $L(t) := P(t)\hat{\varphi}(t) \in \mathbb{R}^{2n}$. Post-multiplying (24) by $\hat{\varphi}(t)$, we have
$$L(t) = P(t-1)\hat{\varphi}(t)[1 + \hat{\varphi}^T(t)P(t-1)\hat{\varphi}(t)]^{-1}. \tag{25}$$
Thus, we have
$$P(t) = [I_{2n} - L(t)\hat{\varphi}^T(t)]P(t-1). \tag{26}$$
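As a sanity check (not part of the paper), the equivalence of (23) and (24) implied by the matrix inversion lemma can be verified numerically; the variables below are placeholders for $P(t-1)$ and $\hat{\varphi}(t)$.

p0  = 1e6;                          % same magnitude as used for P(0) in (20)
P   = p0*eye(4);                    % stands for P(t-1), here with 2n = 4
phi = randn(4, 1);                  % stands for the information vector phi_hat(t)
lhs = inv(inv(P) + phi*phi');       % direct inversion of (23)
rhs = P - (P*phi)*(phi'*P)/(1 + phi'*P*phi);   % right-hand side of (24)
disp(norm(lhs - rhs)/norm(lhs))     % relative difference, numerically negligible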
Combining (22), (25), (26) and (21), we can summarize the estimated states based recursive least squares (ES-RLS) algorithm
as [1]
$$\hat{\theta}(t) = \hat{\theta}(t-1) + L(t)[y(t) - \hat{\varphi}^T(t)\hat{\theta}(t-1)], \quad \hat{\theta}(0) = \mathbf{1}_{2n}/p_0, \tag{27}$$
$$L(t) = P(t-1)\hat{\varphi}(t)[1 + \hat{\varphi}^T(t)P(t-1)\hat{\varphi}(t)]^{-1}, \tag{28}$$
$$P(t) = [I_{2n} - L(t)\hat{\varphi}^T(t)]P(t-1), \quad P(0) = p_0 I_{2n}, \tag{29}$$
$$\hat{\varphi}(t) = \begin{bmatrix} \hat{\phi}(t) \\ \psi(t) \end{bmatrix}, \tag{30}$$
$$\hat{\phi}(t) = \begin{bmatrix} -\hat{x}_1(t-1) \\ -\hat{x}_1(t-2) \\ \vdots \\ -\hat{x}_1(t-n) \end{bmatrix}, \quad
\psi(t) = \begin{bmatrix} u(t-1) \\ u(t-2) \\ \vdots \\ u(t-n) \end{bmatrix}, \tag{31}$$
$$\hat{\theta}(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_n(t), \hat{b}_1(t), \hat{b}_2(t), \ldots, \hat{b}_n(t)]^T. \tag{32}$$
The algorithm in (27)–(32) recursively computes the parameter estimation vector $\hat{\theta}(t)$ using the estimated states $\hat{x}_1(t-i)$ in
the information vector $\hat{\varphi}(t)$.
Eqs. (27)–(32) form the estimated states based recursive least squares parameter identification algorithm for state space
systems.
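A minimal MATLAB sketch of one ES-RLS recursion (27)–(29) follows; the function name es_rls_step is introduced here for illustration only, and varphi_hat stands for the vector $\hat{\varphi}(t)$ formed from (30) and (31).

function [theta, P] = es_rls_step(theta, P, varphi_hat, y_t)
% One recursion of the estimated states based recursive least squares
% (ES-RLS) algorithm (27)-(29).
L     = P*varphi_hat / (1 + varphi_hat'*P*varphi_hat);   % gain vector L(t), Eq. (28)
theta = theta + L*(y_t - varphi_hat'*theta);             % parameter update, Eq. (27)
P     = (eye(length(theta)) - L*varphi_hat')*P;          % covariance update, Eq. (29)
end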
The following lists the steps of computing the parameter and state estimates with the algorithm in (27)–(32) and (15)–(18)
as the data length t increases (a minimal code sketch of this loop is given below).

1. To initialize, let $t = 1$, $\hat{\theta}(0) = \mathbf{1}_{2n}/p_0$, $P(0) = p_0 I_{2n}$, $\hat{x}(t-i) = \mathbf{1}_n/p_0$ for $i = 1, 2, \ldots, n$, $P_2(1) = I_n$, $p_0 = 10^6$.
2. Collect the input–output data $u(t)$ and $y(t)$.
3. Form $\hat{\phi}(t)$ and $\psi(t)$ using (31) and $\hat{\varphi}(t)$ using (30).
4. Compute the gain vector $L(t)$ and the covariance matrix $P(t)$ using (28) and (29), and update the parameter estimate $\hat{\theta}(t)$ using (27).
5. Read $\hat{a}_i(t)$ and $\hat{b}_i(t)$ from $\hat{\theta}(t)$ according to (32), and construct $\hat{A}(t)$ and $\hat{b}(t)$ using (18).
6. Compute the state gain vector $L_2(t)$ and the covariance matrix $P_2(t+1)$ using (16) and (17), and update the state estimate $\hat{x}(t+1)$ using (15).
7. Increase $t$ by 1 and go to step 2.

The flowchart of computing the parameter estimate $\hat{\theta}(t)$ and the state estimate $\hat{x}(t)$ is shown in Fig. 1.

Fig. 1. The flowchart of computing the parameter estimate $\hat{\theta}(t)$ and the state estimate $\hat{x}(t)$.
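For completeness, the steps above can be tied together as in the following self-contained sketch of the combined loop for the second-order case; the helpers es_rls_step and kf_state_step refer to the illustrative sketches given earlier (they are not the author's code), and the author's complete MATLAB program is listed in Section 4.

% Minimal sketch of the combined ES-RLS / state estimation loop (n = 2).
n = 2; p0 = 1e6; N = 3000;
A = [-0.8, 1; -0.4, 0]; b = [1.68; 2.32]; c = [1, 0];   % true system of Section 4
u = randn(N,1); v = randn(N,1);                 % excitation input and noise
x = zeros(n,1); y = zeros(N,1);
for t = 1:N                                     % simulate the true system (1)-(2)
    y(t) = c*x + v(t);
    x    = A*x + b*u(t);
end
theta = ones(2*n,1)/p0;  P  = p0*eye(2*n);      % step 1: initialize theta(0), P(0)
xhat  = ones(n,1)/p0;    P2 = eye(n);           %          and the state estimator
x1hat = zeros(N,1);                             % estimates of x_1 used in the regressor
for t = n+1:N                                   % steps 2-7
    varphi_hat = [-x1hat(t-1:-1:t-n); u(t-1:-1:t-n)];              % step 3, cf. (30)-(31)
    [theta, P] = es_rls_step(theta, P, varphi_hat, y(t));          % step 4, cf. (27)-(29)
    Ahat = [-theta(1:n), [eye(n-1); zeros(1,n-1)]];                % step 5, cf. (18)
    bhat = theta(n+1:2*n);
    [xhat, P2] = kf_state_step(Ahat, bhat, c, xhat, P2, u(t), y(t));   % step 6, cf. (15)-(17)
    if t < N, x1hat(t+1) = xhat(1); end         % store x_1 estimate for later regressors
end
theta'   % should approach [0.80, 0.40, 1.68, 2.32] (cf. Tables 1 and 2)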

4. Example

Consider the following state space system:
$$x(t+1) = \begin{bmatrix} -0.80 & 1 \\ -0.40 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1.68 \\ 2.32 \end{bmatrix} u(t),$$
$$y(t) = [1, 0]x(t) + v(t).$$
The parameter vector to be estimated is
$$\theta = [a_1, a_2, b_1, b_2]^T = [0.80, 0.40, 1.68, 2.32]^T.$$
In the simulation, the input $\{u(t)\}$ is taken as an independent persistent excitation signal sequence with zero mean and unit
variance, and $\{v(t)\}$ is taken as a white noise sequence with zero mean and variance $\sigma^2 = 1.00^2$ and $\sigma^2 = 2.00^2$, respectively;
the corresponding noise-to-signal ratios are $\delta_{ns} = 21.14\%$ and $\delta_{ns} = 42.28\%$. The combined parameter and state estimation
algorithm in (27)–(32) and (15)–(18) is applied to identify the parameters of this system. The parameter estimates and their
estimation errors $\delta := \|\hat{\theta}(t) - \theta\|/\|\theta\|$ are shown in Tables 1 and 2, the parameter estimates $\hat{a}_i(t)$ and $\hat{b}_i(t)$ versus $t$ are
shown in Fig. 2, and the parameter estimation errors $\delta$ versus $t$ are shown in Fig. 3.
From Tables 1 and 2 and Figs. 2 and 3, we can see that the parameter estimation errors become smaller as the data length $t$
increases. This shows that the proposed algorithm works well.
The following is the MATLAB program of this example.

Table 1
The parameter estimates and errors ($\sigma^2 = 1.00^2$, $\delta_{ns} = 21.14\%$).

t            a1        a2        b1        b2        δ (%)
100          0.28240   0.11921   1.68131   2.90353   31.22563
200          0.35731   0.04641   1.61350   2.80427   26.53828
500          0.53120   0.11304   1.80451   2.65430   17.69214
1000         0.63648   0.21173   1.71096   2.46077    9.59839
2000         0.74319   0.32148   1.68539   2.33859    3.29339
3000         0.77411   0.35447   1.68038   2.31114    1.77022
4000         0.78798   0.37181   1.68284   2.31393    1.04523
5000         0.79382   0.38083   1.68370   2.30665    0.81465
True values  0.80000   0.40000   1.68000   2.32000

Table 2
The parameter estimates and errors ($\sigma^2 = 2.00^2$, $\delta_{ns} = 42.28\%$).

t            a1        a2        b1        b2        δ (%)
100          0.27617   0.10903   1.73217   3.08394   35.26457
200          0.33908   0.03543   1.66731   2.96745   30.20244
500          0.50656   0.10603   1.86060   2.76488   21.15664
1000         0.59426   0.18176   1.69254   2.52856   12.18110
2000         0.71571   0.30026   1.66639   2.38384    4.86514
3000         0.75423   0.33469   1.65991   2.34964    2.91328
4000         0.77274   0.35482   1.66832   2.35992    2.23903
5000         0.78088   0.36591   1.67271   2.34540    1.57235
True values  0.80000   0.40000   1.68000   2.32000

Fig. 2. The parameter estimates $\hat{a}_i(t)$ and $\hat{b}_i(t)$ versus $t$ ($\sigma^2 = 1.00^2$).

Fig. 3. The parameter estimation errors $\delta$ versus $t$.



%------------------------------------------------------------------------
% Filename: SS_ParamState_Ao_RLS_ex1.m
%   For observer canonical state space systems
%       x(t+1) = A x(t) + b u(t)
%       y(t)   = c x(t) + v(t)
%   Parameter and state estimation algorithm
%   u(t): the model input: an uncorrelated stochastic signal sequence
%         with zero mean and unit variance
%   v(t): the disturbance: an uncorrelated white noise sequence
%         with zero mean and variance sigma^2
%   y(t): the model output
%
%   The noise variance sigma^2 = 1.00^2 and 2.00^2
%   The forgetting factor lambda = FF = 1
%   Date: 2012/11/18 Sunday 23:30
%------------------------------------------------------------------------
% Copyright 2008-
% Feng Ding (Ding Feng, F. Ding, Ding F.)
% School of Internet of Things Engineering
% Jiangnan University, Wuxi, PR China, 214122
% Email: [email protected]
% www.fding.org   www.fding.org/df2012
%
% Revision Date: xxx/xx/xx hh:mm:ss  By whom
%------------------------------------------------------------------------
clear; format short g; clf
fprintf('\n The parameter and state estimation algorithm \n')
FF = 1;       % The forgetting factor FF = lambda = 1
sigma = 1;    % Noise standard deviation: sigma^2 = 1.00^2 or 2.00^2
              % (run once with sigma = 1 and once with sigma = 2; the second
              %  run overlays both error curves in figure 3)

PlotLength = 5000; length1 = PlotLength + 100;
n = 2;        % The order
A = [-0.8, 1; -0.4, 0]; b = [1.68, 2.32]'; c = [1, 0]; d = 0;
ss1 = ss(A, b, c, d);           % requires the Control System Toolbox (not used below)
par0 = [-A(:,1); b]; n1 = length(par0);   % true parameter vector theta
p0 = 1e6; P = eye(n1)*p0; r = 1;
par1 = ones(n1,1)/p0;           % initial parameter estimate theta(0)

P2 = eye(n)*1;                  % The covariance matrix of the state estimator
%--Compute the noise-to-signal ratio
a = [1, -A(:,1)'];
sy = f_integral(a, b); sv = 1;  % f_integral: the author's routine for the output power (not listed here)
delta_ns = sqrt(sv/sy)*100*sigma;
[sy, sv, delta_ns]
%--Generate the input-output data
rand('state', 2); randn('state', 2);
u = (rand(length1,1) - 0.5)*sqrt(12);   % zero mean, unit variance
v = randn(length1,1)*sigma;

x1 = ones(n1,1)/p0; x2 = x1; y = x1;
for t = n:length1               % simulate the true system (1)-(2)
    x = [x1(t), x2(t)]';
    x1(t+1) = A(1,:)*x + b(1)*u(t);
    x2(t+1) = A(2,:)*x + b(2)*u(t);
    y(t) = c*x + v(t);
end
%--Compute the parameter estimates
hx1 = zeros(n1,1); hx2 = hx1;   % estimated states
jj = 0; j1 = 0;
for t = n1:length1
    jj = jj + 1; varphi = [-hx1(t-1:-1:t-n); u(t-1:-1:t-n)];   % information vector (30)-(31)
    L = P*varphi/(FF + varphi'*P*varphi);                      % ES-RLS gain (28)
    P = (P - L*varphi'*P)/FF;                                  % covariance update (29)
    par1 = par1 + L*(y(t) - varphi'*par1);                     % parameter update (27)

    A1 = [-par1(1:n), [1; 0]]; b1 = par1(n+1:n1);              % construct A(t), b(t) by (18)
    L2 = A1*P2*c'/(1 + c*P2*c');                               % state gain (16)
    P2 = A1*P2*A1' - L2*c*P2*A1';                              % covariance update (17)
    hx = [hx1(t); hx2(t)];
    hx1(t+1) = A1(1,:)*hx + b1(1)*u(t) + L2(1)*(y(t) - c*hx);  % state update (15)
    hx2(t+1) = A1(2,:)*hx + b1(2)*u(t) + L2(2)*(y(t) - c*hx);

    delta = norm(par1 - par0)/norm(par0);
    ls(jj,:) = [jj, par1', delta];
    if (jj==100)|(jj==200)|(jj==500)|mod(jj,1000)==0
        j1 = j1 + 1;
        ls100(j1,:) = [jj, par1', delta*100];
    end
    if jj == PlotLength
        break
    end
end
ls100(j1+1,:) = [0, par0', 0];
fprintf('\n ($\\sigma^2=%5.2f^2$, $\\delta_{ns}=%6.2f\\%%$)\n', sigma, delta_ns)
fprintf('\n %s \n', '$t$ & $a_1$ & $a_2$ & $b_1$ & $b_2$ & $\delta$ (\%) \\ \hline');
fprintf('%5d & %10.5f & %10.5f & %10.5f & %10.5f & %10.5f \\\\\n', ls100');

figure(1); k = (17:PlotLength-1)';
plot(ls(k,1), ls(k,n1+2));
axis([0, PlotLength, 0, 0.51]);
xlabel('t'); ylabel('\delta');

figure(2); k = (20:PlotLength-1)';
plot(k, ls(k,2), 'k', k, ls(k,3), 'b', k, ls(k,4), 'k', k, ls(k,5), 'b');
xlabel('t'); ylabel('Parameter estimates');
axis([0, PlotLength, -1.1, 3.6]);
k = 2500;
text(k, ls(k,2)+0.25, 'a_1'); text(k, ls(k,3)+0.25, 'a_2')
text(k, ls(k,4)+0.25, 'b_1'); text(k, ls(k,5)+0.25, 'b_2')

if sigma == 1.0
    data1 = [ls(:,1), ls(:,n1+2)];
    save data1 data1
else   % sigma == 2.0
    load data1
    z0 = [data1, ls(:,n1+2)];
    figure(3); k = (17:2:PlotLength-1)';
    jk = z0(k,1);
    plot(jk, z0(k,2), 'k', jk, z0(k,3), 'b')
    axis([0, PlotLength, 0, 0.72]);
    xlabel('t'); ylabel('\delta');
    line([800,1400], [z0(800,2), 0.3])
    text(1400, 0.3+0.03, '\it\sigma^2 = 1.00^2')

    line([2000,2800], [z0(2000,3), 0.3])
    text(2800, 0.3+0.03, '\sigma^2 = 2.00^2')
end

5. Conclusions

This paper proposes a combined parameter and state estimation algorithm for estimating the parameters and states of an
observer canonical state space system. The simulation results indicate that the proposed algorithms are effective.
The proposed method can be combined with other methods, e.g., the multi-innovation identification methods [39–42], the
hierarchical identification methods [43–45], the iterative identification methods [46], the two-stage identification algorithms
and so on, to study the identification problems of the controller canonical form, the controllability canonical form and the
observability canonical form of scalar or multivariable systems [47–49].

References

[1] F. Ding, System Identification – New Theory and Methods, Science Press, Beijing, 2013.
[2] M. Dehghan, M. Hajarian, Two algorithms for finding the Hermitian reflexive and skew–Hermitian solutions of Sylvester matrix equations, Appl. Math.
Lett. 24 (4) (2011) 444–449.
[3] M. Dehghan, M. Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations, Appl. Math. Model. 35 (7)
(2011) 3285–3300.
[4] J.H. Li, R.F. Ding, Y. Yang, Iterative parameter identification methods for nonlinear functions, Appl. Math. Model. 36 (6) (2012) 2739–2750.
[5] J.H. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Appl. Math. Lett. 26 (1) (2013) 91–96.
[6] M. Dehghan, M. Hajarian, Fourth-order variants of Newton’s method without second derivatives for solving non-linear equations, Eng. Comput. 29 (4)
(2012) 356–365.
[7] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Syst. Control Lett. 58 (1) (2009) 69–
75.
[8] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Trans. Autom. Control
54 (7) (2009) 1668–1674.
[9] Y. Shi, B. Yu, Robust mixed H2/Hinfinity control of networked control systems with random time delays in both forward and backward communication
links, Automatica 47 (4) (2011) 754–760.
[10] Y. Shi, T. Chen, Optimal design of multi-channel transmultiplexers with stopband energy and passband magnitude constraints, IEEE Trans. Circuits
Syst. II: Analog Digit. Sig. Process. 50 (9) (2003) 659–662.
[11] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, Int. J. Control 83 (3)
(2010) 538–551.
[12] F. Ding, Y. Gu, Performance analysis of the auxiliary model based least squares identification algorithm for one-step state delay systems, Int. J. Comput.
Math. 89 (15) (2012) 2019–2028.
[13] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Appl. Math. Model. 37 (4) (2013)
1694–1704.
[14] J. Ding, L.L. Han, X.M. Chen, Time series AR modeling with missing observations based on the polynomial transformation, Math. Comput. Model. 51 (5–
6) (2010) 527–536.
[15] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proc. Inst. Mech.
Eng., Part I: J. Syst. Control Eng. 226 (1) (2012) 43–55.
[16] D.Q. Wang, F. Ding, Least squares based and gradient based iterative identification for Wiener nonlinear systems, Signal Process. 91 (5) (2011) 1182–
1189.
[17] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digit. Sig. Process. 21 (2) (2011) 215–238.
[18] D.Q. Wang, Least squares-based recursive and iterative estimation for output error moving average systems using data filtering, IET Control Theory
Appl. 5 (14) (2011) 1648–1657.
[19] W. Wang, F. Ding, J.Y. Dai, Maximum likelihood least squares identification for systems with autoregressive moving average noise, Appl. Math. Model.
36 (5) (2012) 1842–1853.
[20] J.H. Li, F. Ding, Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation
technique, Comput. Math. Appl. 62 (11) (2011) 4170–4177.
[21] J.H. Li, F. Ding, G.W. Yang, Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average
systems, Math. Comput. Model. 55 (3–4) (2012) 442–450.
[22] F. Ding, Coupled-least-squares identification for multivariable systems, IET Control Theory Appl. 7 (1) (2013) 68–79.
[23] F. Ding, X.G. Liu, J. Chu, Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification
principle, IET Control Theory Appl. 7 (2) (2013) 176–184.
[24] F. Ding, Decomposition based fast least squares algorithm for output error systems, Signal Process. 93 (5) (2013) 1235–1242.
[25] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Appl. Math. Model. 37 (7) (2013) 4798–4808.
[26] Y. Zhang, G.M. Cui, Bias compensation methods for stochastic systems with colored noise, Appl. Math. Model. 35 (4) (2011) 1709–1716.
[27] Y. Zhang, Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods, Math.
Comput. Model. 53 (9–10) (2011) 1810–1819.
[28] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Appl.
Math. Comput. 215 (4) (2009) 1477–1483.
[29] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient algorithm for multivariable ARX-like systems, Comput. Math. Appl. 59 (8) (2010) 2615–
2627.
[30] F. Ding, G. Liu, X.P. Liu, Parameter estimation with scarce measurements, Automatica 47 (8) (2011) 1646–1655.
[31] F. Ding, J. Ding, Least squares parameter estimation with irregularly missing data, Int. J. Adapt. Control Signal Process. 24 (7) (2010) 540–553.
[32] Y.S. Xiao, Y. Zhang, J. Ding, J.Y. Dai, The residual based interactive least squares algorithms and simulation studies, Comput. Math. Appl. 58 (6) (2009)
1190–1197.
[33] F. Ding, H.H. Duan, Two-stage parameter estimation algorithms for Box–Jenkins systems, IET Signal Process. (2013), http://dx.doi.org/10.1049/iet-spr.2012.0183.
[34] F. Ding, T. Chen, Hierarchical identification of lifted state-space models for general dual-rate systems, IEEE Trans. Circuits Syst.-I: Reg. Pap. 52 (6)
(2005) 1179–1187.
[35] F. Ding, L. Qiu, T. Chen, Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems, Automatica 45 (2) (2009)
324–332.
[36] Y. Gu, R. Ding, A least squares numerical algorithm for a state space model with multi-state delays, Appl. Math. Lett. 26 (7) (2013) 748–753.
[37] Y. Gu, X.L. Lu, R.F. Ding, Parameter and state estimation algorithm for a state space model with a one-unit state delay, Circuits Syst. Sig. Process. 32 (x) (2013), http://dx.doi.org/10.1007/s00034-013-9569-4.
[38] Y. Gu, F. Ding, Parameter estimation for a multivariable state space system with d-step state-delay, J. Franklin Inst. – Eng. Appl. Math. 350 (4) (2013)
724–736.

[39] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[40] F. Ding, X.P. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,
Sig. Process. 89 (10) (2009) 1883–1890.
[41] F. Ding, Several multi-innovation identification methods, Digit. Sig. Process. 20 (4) (2010) 1027–1039.
[42] F. Ding, X.P. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Trans. Syst. Man Cybern. Part
B: Cybern. 40 (3) (2010) 767–778.
[43] J. Ding, F. Ding, et al, Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data, IEEE Trans. Autom. Control 56 (11)
(2011) 2677–2683.
[44] H.Q. Han, L. Xie, et al, Hierarchical least squares based iterative identification for multivariable systems with moving average noises, Math. Comput.
Model. 51 (9–10) (2010) 1213–1220.
[45] Z.N. Zhang, F. Ding, X.G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average
systems, Comput. Math. Appl. 61 (3) (2011) 672–682.
[46] D.Q. Wang, G.W. Yang, R.F. Ding, Gradient-based iterative parameter estimation for Box–Jenkins systems, Comput. Math. Appl. 60 (5) (2010) 1200–
1208.
[47] F. Ding, G. Liu, X.P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE Trans. Autom. Control 55
(8) (2010) 1976–1981.
[48] J. Ding, F. Ding, Bias compensation based parameter estimation for output error moving average systems, Int. J. Adapt. Control Signal Process. 25 (12)
(2011) 1100–1111.
[49] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Trans. Signal Process. 54 (3) (2006)
1041–1053.
