Mathematical Problems in Engineering - 2014 - Wang - Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models
Research Article
Filtering Based Recursive Least Squares Algorithm for
Multi-Input Multioutput Hammerstein Models
Received 28 June 2014; Revised 7 September 2014; Accepted 25 September 2014; Published 16 October 2014
Copyright © 2014 Ziyun Wang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response moving average (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm estimates the parameters more accurately and with a higher computational efficiency than the recursive least squares algorithm.
The nonlinear block in the Hammerstein model is a linear combination of the known basis f := (f_1, f_2, ..., f_r):

ū(t) = f(u(t)) := [f_1(u_1(t)), f_2(u_2(t)), ..., f_r(u_r(t))]^T ∈ R^r,   (3)

where the superscript T denotes the matrix transpose. The function f_i(u_i(t)) in (3) is a nonlinear function of a known basis (γ_1, γ_2, ..., γ_{n_c}):

f_i(u_i(t)) := ∑_{j=1}^{n_c} c_j γ_j(u_i(t)),   (4)

where the coefficients c := (c_1, c_2, ..., c_{n_c}) are unknown. Substituting (4) into (3) yields

ū(t) = ∑_{i=1}^{n_c} c_i γ_i(u(t)),
γ_i(u(t)) := [γ_i(u_1(t)), γ_i(u_2(t)), ..., γ_i(u_r(t))]^T ∈ R^r.   (5)

2. The MRLS Algorithm

For comparison, the MRLS algorithm is listed in this section. Here we introduce some notation. t represents the current time in this paper, and "A =: X" or "X := A" means that "A is defined as X". The symbol I_{m×r} represents an identity matrix of size r followed by a null matrix in the last m − r rows when m ⩾ r, and vice versa. The norm of a matrix (or a column vector) X is defined by ‖X‖^2 := tr[XX^T]. ⊗ denotes the Kronecker product or direct product: if A = [a_ij] ∈ R^{m×n} and B = [b_ij] ∈ R^{p×q}, then A ⊗ B := [a_ij B] ∈ R^{mp×nq}. col[X] is the vector formed by stacking the columns of the matrix X: if X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, then col[X] := [x_1^T, x_2^T, ..., x_n^T]^T ∈ R^{mn}.

From (1)–(5), the intermediate variables x(t) and w(t) and the output of the system y(t) can be expressed as

x(t) = [I_{m×r} + ∑_{i=1}^{n_b} B_i z^{-i}] ū(t),
w(t) = [1 + ∑_{i=1}^{n_d} d_i z^{-i}] v(t).   (6)
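The notation above (the trace-based norm, the column-stacking operator col[·], the Kronecker product, and I_{m×r}) can be checked numerically. A minimal NumPy sketch; the matrices X, A, B and the helper name eye_mr are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical small matrix to illustrate the notation conventions.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # X in R^{3x2}

# Norm convention: ||X||^2 := tr[X X^T] (the squared Frobenius norm).
norm_sq = np.trace(X @ X.T)
assert np.isclose(norm_sq, np.linalg.norm(X, "fro") ** 2)

# col[X]: stack the columns of X into one long vector
# (column-major, i.e. Fortran-order flattening).
col_X = X.flatten(order="F")
assert np.array_equal(col_X, np.concatenate([X[:, 0], X[:, 1]]))

# Kronecker product: A ⊗ B = [a_ij B] in R^{mp x nq}.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(2)
K = np.kron(A, B)
assert K.shape == (A.shape[0] * B.shape[0], A.shape[1] * B.shape[1])

# I_{m x r}: identity of size r on top of an (m - r) x r zero block (m >= r).
def eye_mr(m: int, r: int) -> np.ndarray:
    E = np.zeros((m, r))
    k = min(m, r)
    E[:k, :k] = np.eye(k)
    return E

assert np.array_equal(eye_mr(3, 2), np.array([[1, 0], [0, 1], [0, 0]]))
```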
Distinguished from the hierarchical identification methods, we reparameterize the model in (7) by using the Kronecker product to get a parameter matrix ϑ and by gathering the input information vectors φ_1(t) and φ_2(t) and the output information matrix φ_n(t) into one information matrix Ψ(t) as follows:

ϑ := [θ_s; θ_n] ∈ R^{n_0},   n_0 := mrn_b + n_c + n_d,
θ_s := [col[θ_1^T]; θ_2] ∈ R^{mrn_b + n_c},   (10)
Ψ(t) := [φ_s(t), φ_n(t)] ∈ R^{m×n_0},
φ_s(t) := [φ_1^T(t) ⊗ I_m, φ_2(t)] ∈ R^{m×(mrn_b + n_c)}.

Define the parameter estimation matrices

ϑ̂(t) := [θ̂_s(t); θ̂_n(t)] ∈ R^{n_0},
θ̂_s(t) := [col[θ̂_1^T(t)]; θ̂_2(t)] ∈ R^{mrn_b + n_c},   (17)
θ̂_1(t) := [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)]^T ∈ R^{(rn_b)×m},
θ̂_2(t) := [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T ∈ R^{n_c},
θ̂_n(t) := [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)]^T ∈ R^{n_d}.

Accordingly, the estimate ϑ̂(t) is composed as

ϑ̂(t) = [col[θ̂_1^T(t)]; θ̂_2(t); θ̂_n(t)],
θ̂_1^T(t) = [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)],
θ̂_2^T(t) = [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)],
θ̂_n^T(t) = [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)].   (19)

Thus, (21) can be rewritten as

y_f(t) = [I_{m×r} + ∑_{i=1}^{n_b} B_i z^{-i}] ū_f(t) + v(t)
       = [ū_f(t); 0] + B_1 ū_f(t−1) + B_2 ū_f(t−2) + ··· + B_{n_b} ū_f(t−n_b) + v(t)
       = c_1 ζ_1(t) + c_2 ζ_2(t) + ··· + c_{n_c} ζ_{n_c}(t) + B_1 ū_f(t−1) + B_2 ū_f(t−2) + ··· + B_{n_b} ū_f(t−n_b) + v(t).   (23)
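The role of the Kronecker product in this reparameterization is the standard vec identity M φ = (φ^T ⊗ I_m) col[M], which turns the block matrices B_1, ..., B_{n_b} into one long parameter vector. A small NumPy check; the sizes m, r, n_b and the random matrices standing in for B_i and the lagged filtered inputs are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: m outputs, r inputs, n_b input lags.
m, r, nb = 2, 2, 3

# Random stand-ins for the block matrices B_i and the lagged inputs ubar(t-i).
Bs = [rng.standard_normal((m, r)) for _ in range(nb)]
ubars = [rng.standard_normal(r) for _ in range(nb)]

# Model side: sum_i B_i @ ubar(t-i).
lhs = sum(B @ ub for B, ub in zip(Bs, ubars))

# Reparameterized side: with M := [B_1, ..., B_nb] and the stacked regressor
# phi_1 := [ubar(t-1); ...; ubar(t-nb)], the vec identity gives
#   M @ phi_1 = (phi_1^T ⊗ I_m) @ col[M].
M = np.hstack(Bs)                       # m x (r*nb)
phi1 = np.concatenate(ubars)            # (r*nb,)
col_M = M.flatten(order="F")            # column-stacking = col[.]
rhs = np.kron(phi1, np.eye(m)) @ col_M  # (phi_1^T ⊗ I_m) acting on col[M]

assert np.allclose(lhs, rhs)
```

This is why the information matrix carries the block φ_1^T(t) ⊗ I_m: multiplying it by the stacked parameter vector reproduces the sum of the B_i terms.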
3. The F-MRLS Algorithm

The convergence rate of the MRLS algorithm in Section 2 is slow because the noise information intermediate variables w(t) contain the unmeasurable time-delay noise v(t−i). The solution here is to present a filtering based recursive least squares algorithm (the F-MRLS algorithm) for the multivariable Hammerstein models by filtering with the rational function N(z) and transferring the FIR-MA model in (1) into a controlled autoregressive (CAR) model. Multiplying both sides of (1) by N^{-1}(z) yields

N^{-1}(z) y(t) = B(z) N^{-1}(z) ū(t) + v(t),   (20)

or

y_f(t) = B(z) ū_f(t) + v(t),   (21)

where

ū_f(t) := (1/N(z)) ū(t) = (1/N(z)) ∑_{i=1}^{n_c} c_i γ_i(u(t)) = c_1 U_1(t) + c_2 U_2(t) + ··· + c_{n_c} U_{n_c}(t),
y_f(t) := (1/N(z)) y(t) = [1 − N(z)] y_f(t) + y(t) = y(t) − ∑_{i=1}^{n_d} d_i y_f(t−i),
U_i(t) := (1/N(z)) γ_i(u(t)) ∈ R^r,   i = 1, 2, ..., n_c,
ζ_j(t) := [U_j(t); 0] ∈ R^m,   j = 1, 2, ..., n_c.   (22)

Define the filtered information matrices:

Ψ_f(t) := [φ_{f1}^T(t) ⊗ I_m, φ_{f2}(t)] ∈ R^{m×(mrn_b + n_c)},   (24)
φ_{f1}(t) := [ū_f^T(t−1), ū_f^T(t−2), ..., ū_f^T(t−n_b)]^T ∈ R^{rn_b},   (25)
φ_{f2}(t) := [ζ_1(t), ζ_2(t), ..., ζ_{n_c}(t)] ∈ R^{m×n_c}.   (26)

Since the polynomial N(z) is unknown and to be estimated, it is impossible to use ū_f(t) to construct φ_{f1}(t) in (25). Here, we adopt the principle of the MRLS algorithm in Section 2 and replace the unmeasurable variables and vectors with their estimates to derive the following algorithm.

By using the parameter estimates θ̂_1(t) and θ̂_n(t), the estimates of the polynomials B(z) and N(z) at time t can be constructed as

B̂(t, z) := I_{m×r} + ∑_{i=1}^{n_b} B̂_i(t) z^{-i},
N̂(t, z) := 1 + d̂_1(t) z^{-1} + d̂_2(t) z^{-2} + ··· + d̂_{n_d}(t) z^{-n_d}.   (27)

The noise term w(t) in model (1) can be rewritten as

w(t) = y(t) − x(t) = y(t) − [I_{m×r} + ∑_{i=1}^{n_b} B_i z^{-i}] ū(t) = y(t) − φ_s(t) θ_s.   (28)

Let ŵ(t) be the estimate of w(t). Replacing w(t), y(t), φ_s(t), φ_n(t), θ_n, and θ_s with their estimates ŵ(t), ŷ(t), φ̂_s(t), φ̂_n(t), θ̂_n(t), and θ̂_s(t−1) leads to

ŵ(t) = y(t) − φ̂_s(t) θ̂_s(t−1).   (29)

Defining and minimizing the cost function

J(θ_n) := ∑_{j=1}^{t} ‖w(j) − φ_n(j) θ_n‖^2   (30)
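The filtering step y_f(t) = y(t) − ∑_i d_i y_f(t−i) in (22) is a simple IIR recursion that can be coded directly. A sketch with zero initial conditions; the function names filter_inverse_N and apply_N are ours, and d_1 = 0.68 is borrowed from Example 1:

```python
import numpy as np

def filter_inverse_N(y, d):
    """Apply 1/N(z), with N(z) = 1 + d_1 z^-1 + ... + d_nd z^-nd, via the
    recursion y_f(t) = y(t) - sum_i d_i * y_f(t-i); zero initial conditions."""
    y = np.asarray(y, dtype=float)
    yf = np.zeros_like(y)
    nd = len(d)
    for t in range(len(y)):
        past = sum(d[i - 1] * yf[t - i] for i in range(1, nd + 1) if t - i >= 0)
        yf[t] = y[t] - past
    return yf

def apply_N(yf, d):
    """Apply N(z): y(t) = y_f(t) + sum_i d_i * y_f(t-i)."""
    yf = np.asarray(yf, dtype=float)
    y = np.zeros_like(yf)
    nd = len(d)
    for t in range(len(yf)):
        past = sum(d[i - 1] * yf[t - i] for i in range(1, nd + 1) if t - i >= 0)
        y[t] = yf[t] + past
    return y

rng = np.random.default_rng(2)
d = [0.68]                          # N(z) = 1 + 0.68 z^-1, as in Example 1
y = rng.standard_normal((50, 2))    # a two-output signal
yf = filter_inverse_N(y, d)
assert np.allclose(apply_N(yf, d), y)   # N(z) undoes 1/N(z)
```

In the algorithm itself N(z) is unknown, so the same recursion is run with the current estimates d̂_i(t) in place of d_i.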
2629, 2014, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2014/232848 by Libya Hinari NPL, Wiley Online Library on [22/09/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
and using the least squares search principle, we list the recursive least squares algorithm to compute θ̂_n(t):

P_n(t) = [I_{n_d} − L_n(t) φ̂_n(t)] P_n(t−1),   P_n(0) = p_0 I.   (31)

Let ĉ(t) := [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T ∈ R^{n_c} be the estimate of c at time t; filtering y(t) with 1/N̂(t, z) gives the estimate ŷ_f(t):

ŷ_f(t) = (1/N̂(t, z)) y(t) = [1 − N̂(t, z)] ŷ_f(t) + y(t) = − ∑_{i=1}^{n_d} d̂_i(t) ŷ_f(t−i) + y(t).   (32)

The filtered model in (21) can be rewritten in a matrix form:

y_f(t) = Ψ_f(t) θ_s + v(t),   (36)

or

v(t) = y_f(t) − Ψ_f(t) θ_s.   (37)

Based on the MRLS search principle, we can obtain the estimate of θ_s by the following algorithm:

P_f(t) = [I_{rn_b + n_c} − L_f(t) Ψ̂_f(t)] P_f(t−1),   P_f(0) = p_0 I.   (38)

The estimate ū̂(t) can be computed by

ū̂(t) = ĉ_1(t) γ_1(u(t)) + ĉ_2(t) γ_2(u(t)) + ··· + ĉ_{n_c}(t) γ_{n_c}(u(t)).   (39)

Filter ū̂(t) by 1/N̂(t, z) to obtain the estimate ū̂_f(t):

ū̂_f(t) = (1/N̂(t, z)) ū̂(t),

P_f(t) = [I_{rn_b + n_c} − L_f(t) Ψ̂_f(t)] P_f(t−1),   (44)
Ψ̂_f(t) = [φ̂_{f1}^T(t) ⊗ I_m, φ̂_{f2}(t)],   (45)
φ̂_{f1}(t) = [ū̂_f^T(t−1), ū̂_f^T(t−2), ..., ū̂_f^T(t−n_b)]^T,   (46)
φ̂_{f2}(t) = [ζ̂_1(t), ζ̂_2(t), ..., ζ̂_{n_c}(t)],   (47)
ζ̂_j(t) = [Û_j(t); 0],   (48)
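The gain and covariance recursions of the form used in (31) and (38) are the standard matrix-output RLS update L(t) = P(t−1)Φ^T[I_m + Φ P(t−1) Φ^T]^{-1}, P(t) = [I − L(t)Φ]P(t−1). A self-contained toy sketch; the linear model, sizes, seed, and noise level are assumptions for illustration, not the paper's Hammerstein system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear model y(t) = Phi(t) @ theta + noise, with a matrix regressor
# Phi(t) in R^{m x n}, exercising the recursions
#   L(t) = P(t-1) Phi^T [I_m + Phi P(t-1) Phi^T]^{-1},
#   P(t) = [I_n - L(t) Phi] P(t-1),
#   theta(t) = theta(t-1) + L(t) [y(t) - Phi theta(t-1)].
m, n = 2, 4
theta_true = rng.standard_normal(n)

p0 = 1e6
P = p0 * np.eye(n)          # large initial covariance, as in the paper's setup
theta = np.ones(n) / p0     # small nonzero initial estimate

for t in range(200):
    Phi = rng.standard_normal((m, n))               # persistent excitation
    y = Phi @ theta_true + 0.01 * rng.standard_normal(m)
    L = P @ Phi.T @ np.linalg.inv(np.eye(m) + Phi @ P @ Phi.T)
    P = (np.eye(n) - L @ Phi) @ P
    theta = theta + L @ (y - Phi @ theta)

# With persistent excitation the estimate converges near theta_true.
assert np.linalg.norm(theta - theta_true) / np.linalg.norm(theta_true) < 0.05
```

The F-MRLS algorithm runs two such recursions in parallel: one (L_n, P_n) for the noise parameters θ_n and one (L_f, P_f) for the filtered system parameters θ_s.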
Û_j(t) = − ∑_{i=1}^{n_d} d̂_i(t) Û_j(t−i) + γ_j(u(t)),   j = 1, 2, ..., n_c,   (49)
ū̂_f(t) = ĉ_1(t) Û_1(t) + ĉ_2(t) Û_2(t) + ··· + ĉ_{n_c}(t) Û_{n_c}(t),   (50)
ŷ_f(t) = − ∑_{i=1}^{n_d} d̂_i(t) ŷ_f(t−i) + y(t),   (51)
θ̂_n(t) = θ̂_n(t−1) + L_n(t)[ŵ(t) − φ̂_n(t) θ̂_n(t−1)],   (52)
L_n(t) = P_n(t−1) φ̂_n^T(t) [I_m + φ̂_n(t) P_n(t−1) φ̂_n^T(t)]^{-1},   (53)
P_n(t) = [I_{n_d} − L_n(t) φ̂_n(t)] P_n(t−1),   (54)
ŵ(t) = y(t) − φ̂_s(t) θ̂_s(t−1),   (55)
Ψ̂(t) = [φ̂_s(t), φ̂_n(t)],   (56)
φ̂_s(t) = [φ̂_1^T(t) ⊗ I_m, φ_2(t)],   (57)
φ̂_1(t) = [ū̂^T(t−1), ū̂^T(t−2), ..., ū̂^T(t−n_b)]^T,   (58)
φ̂_n(t) = [v̂(t−1), v̂(t−2), ..., v̂(t−n_d)],   (61)
v̂(t) = ŷ_f(t) − Ψ̂_f(t) θ̂_s(t),   (62)
ū̂(t) = ∑_{i=1}^{n_c} ĉ_i(t) γ_i(u(t)),   (63)
ϑ̂(t) = [θ̂_s(t); θ̂_n(t)],   (64)
θ̂_s(t) = [col[θ̂_1^T(t)]; θ̂_2(t)],   (65)
θ̂_1(t) = [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)]^T,   (66)
θ̂_2(t) = [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T,   (67)
θ̂_n(t) = [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)]^T.   (68)

The steps involved in the F-MRLS algorithm for multivariable Hammerstein systems are listed in the following.

(1) To initialize, let t = 1, and set the initial values of the parameter estimation variables and covariance matrices as follows: θ̂_s(i) = 1_{mrn_b + n_c}/p_0, θ̂_n(i) = 1_{n_d}/p_0, ŷ_f(i) = 1_m/p_0, ū̂(i) = 1_r/p_0, ū̂_f(i) = 1_r/p_0, v̂(i) = 1_m/p_0 for i ⩽ 0, Û_j(i) = 1_r/p_0 for i ⩽ 0 and j = 1, 2, ..., n_c, P_f(0) = p_0 I_{rn_b + n_c}, P_n(0) = p_0 I_{n_d}, p_0 = 10^6, and give the basis functions γ_j(·).

(2) Collect the input-output data u(t) and y(t); construct η_i(t) by (60) and φ̂_1(t), φ_2(t) by (58), (59). Form the information vectors φ̂_s(t) by (57), φ̂_n(t) by (61), and Ψ̂(t) by (56), respectively.

(3) Compute ŵ(t) by (55), the gain vector L_n(t) by (53), and the covariance matrix P_n(t) by (54), respectively. Update the parameter estimate θ̂_n(t) by (52).

(4) Compute ŷ_f(t) by (51) and Û_j(t) by (49). Construct ζ̂_j(t), φ̂_{f2}(t), and φ̂_{f1}(t) by (48), (47), and (46), respectively. Compute Ψ̂_f(t) by (45).

(5) Compute the gain vector L_f(t) by (43) and the covariance matrix P_f(t) by (44). Update the parameter estimate θ̂_s(t) by (42).

(6) Construct the parameter vectors θ̂_1(t), θ̂_2(t), and θ̂_n(t) by (66), (67), and (68). Form θ̂_s(t) and ϑ̂(t) by (65) and (64). Compute ū̂_f(t) by (50), v̂(t) by (62), and ū̂(t) by (63).

(7) Increase t by 1 and go to step (2).

[ y_1(t) ]   [ ū_1(t) ]     [ ū_1(t−1) ]   [ v_1(t) ]       [ v_1(t−1) ]
[ y_2(t) ] = [ ū_2(t) ] + B [ ū_2(t−1) ] + [ v_2(t) ] + d_1 [ v_2(t−1) ],   (69)

where

B(z) = [  0.13   0.25
         −1.21   0.13 ] z^{-1},

D(z) = 1 + d_1 z^{-1} = 1 + 0.68 z^{-1},

ū(t) = c_1 u(t) + c_2 u^2(t),   (70)

c = [c_1, c_2]^T = [1.00, −0.14]^T,

ϑ = [0.13, −1.21, 0.25, 0.13, −0.14, 0.68]^T.

In simulation, the inputs {u_1(t)} and {u_2(t)} are taken as two uncorrelated persistent excitation signal sequences with zero mean and unit variances, and {v_1(t)} and {v_2(t)} as two white noise sequences with zero mean and variances σ_1^2 = σ_2^2 = σ^2 (σ^2 = 0.50^2, σ^2 = 1.00^2). Applying the MRLS algorithm in (19) and the F-MRLS algorithm in (42)–(68) to estimate the parameters of this multivariable Hammerstein system, the F-MRLS parameter estimates and their estimation errors are shown in Table 1, the comparison between the F-MRLS algorithm and the MRLS algorithm in the estimation error δ := ‖ϑ̂(t) − ϑ‖/‖ϑ‖ versus t is shown in Table 2, and the
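The simulation setup of Example 1 can be sketched as follows. The sample length, the seed, and the zero initial estimate used to illustrate the error measure δ are our assumptions; the parameter values are those of (69)-(70):

```python
import numpy as np

rng = np.random.default_rng(4)

# Example 1 system:
#   y(t) = ubar(t) + B1 ubar(t-1) + v(t) + d1 v(t-1),
#   ubar(t) = c1 u(t) + c2 u(t)^2   (elementwise square).
B1 = np.array([[ 0.13, 0.25],
               [-1.21, 0.13]])
c1, c2 = 1.00, -0.14
d1 = 0.68
sigma = 0.50

T = 1000
u = rng.standard_normal((T, 2))            # unit-variance excitation inputs
v = sigma * rng.standard_normal((T, 2))    # white measurement noise

ubar = c1 * u + c2 * u**2                  # Hammerstein nonlinear block
y = ubar + v
y[1:] += ubar[:-1] @ B1.T + d1 * v[:-1]    # lag-1 terms, zero initial state

# Parameter vector and estimation error delta := ||theta_hat - theta|| / ||theta||.
theta = np.array([0.13, -1.21, 0.25, 0.13, -0.14, 0.68])

def delta(theta_hat):
    return np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta)

# A zero initial estimate gives delta = 1 (100% error) before any updates;
# the tables report how quickly the recursions drive delta down from there.
assert np.isclose(delta(np.zeros(6)), 1.0)
```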
Table 1: The F-MRLS estimates and errors in Example 1 (𝜎2 = 0.502 and 𝜎2 = 1.002 ).
Table 2: The MRLS and F-MRLS estimates and errors in Example 1 (𝜎2 = 0.502 ).
B(z) = [  0.14   0.25
         −0.15   0.125 ] z^{-1},

D(z) = 1 + d_1 z^{-1} = 1 + 0.50 z^{-1},

ū(t) = c_1 u(t) + c_2 u^2(t) + c_3 u^3(t),   (72)

c = [c_1, c_2, c_3]^T = [1.00, −0.19, 1.19]^T,

ϑ = [0.14, −0.15, 0.25, 0.125, −0.19, 1.19, 0.50]^T.
Table 3: The F-RLS estimates and errors in Example 2 (𝜎2 = 0.502 and 𝜎2 = 1.002 ).
Table 4: The comparison of parameter estimates and errors in Example 2 (𝜎2 = 0.502 ).