
Hindawi Publishing Corporation

Mathematical Problems in Engineering


Volume 2014, Article ID 232848, 10 pages
http://dx.doi.org/10.1155/2014/232848

Research Article
Filtering Based Recursive Least Squares Algorithm for
Multi-Input Multioutput Hammerstein Models

Ziyun Wang, Yan Wang, and Zhicheng Ji


Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China

Correspondence should be addressed to Yan Wang; [email protected]

Received 28 June 2014; Revised 7 September 2014; Accepted 25 September 2014; Published 16 October 2014

Academic Editor: Haranath Kar

Copyright © 2014 Ziyun Wang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response moving average (FIR-MA) systems. By filtering with the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. Numerical examples confirm that the proposed algorithm estimates the parameters more accurately and has a higher computational efficiency than the recursive least squares algorithm.

1. Introduction

Parameter estimation is an important approach to model dynamical systems and has been widely used in estimating the parameters of nonlinear systems [1–3], deriving system identification methods [4–7], identifying state-space models [8, 9], and developing solutions for matrix equations [10–13]. For example, Dehghan and Hajarian discussed several solution methods for different matrix equations [14–16]. In the area of system control and modeling, Shi and Fang developed a Kalman filter based identification method for systems with randomly missing measurements [17], gave an output feedback stabilization result [18], and presented a robust mixed H_2/H_∞ control of networked control systems [19].

The least squares algorithm is a fundamental method [20–22], and many methods such as the iterative algorithm [23, 24] and the gradient algorithm [25] are widely used in parameter estimation. In the field of Hammerstein system identification, several methods have been developed [26]. For example, a least squares based iterative algorithm and an auxiliary model based recursive least squares algorithm have been presented for Hammerstein nonlinear ARMAX systems and Hammerstein output error systems, respectively [27, 28]; a Newton recursive algorithm and a Newton iterative algorithm for Hammerstein controlled autoregressive systems are presented in [29].

Consider a multi-input multioutput (MIMO) Hammerstein finite impulse response (FIR) system depicted by

    y(t) = B(z)ū(t) + N(z)v(t),                                               (1)

where u(t) := [u_1(t), u_2(t), ..., u_r(t)]^T ∈ R^r is the nonlinear system input vector with zero mean and unit variances, ū(t) ∈ R^r is the output of the nonlinear block defined in (3) below, y(t) ∈ R^m is the measurement of x(t) := B(z)ū(t) corrupted by w(t) := N(z)v(t), v(t) ∈ R^m is a white noise vector with zero mean, and B(z) and N(z) are polynomials in the unit backward shift operator z^{-1} [z^{-1}y(t) = y(t − 1)]:

    B(z) := I_{m×r} + Σ_{i=1}^{n_b} B_i z^{-i},   B_i ∈ R^{m×r},
                                                                              (2)
    N(z) := 1 + Σ_{j=1}^{n_d} d_j z^{-j},   d_j ∈ R^1.

It is obvious that the relation between the sizes m and r influences the identification of this multi-input multioutput Hammerstein model: the dimension of the output vector is not less than that of the input vector if m ⩾ r; otherwise, when m < r, the output size is smaller than that of the input vector. In this paper, we discuss the identification problem with m ⩾ r.
2629, 2014, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2014/232848 by Libya Hinari NPL, Wiley Online Library on [22/09/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License

The nonlinear block in the Hammerstein model is a linear combination of the known basis f := (f_1, f_2, ..., f_r):

    ū(t) = f(u(t)) := [f_1(u_1(t)), f_2(u_2(t)), ..., f_r(u_r(t))]^T ∈ R^r,   (3)

where the superscript T denotes the matrix transpose. The function f_i(u_i(t)) in (3) is a nonlinear function of a known basis (γ_1, γ_2, ..., γ_{n_c}):

    f_i(u_i(t)) := Σ_{j=1}^{n_c} c_j γ_j(u_i(t)),                             (4)

where the coefficients c := (c_1, c_2, ..., c_{n_c}) are unknown. Substituting (4) into (3) yields

    ū(t) = Σ_{i=1}^{n_c} c_i γ_i(u(t)),
    γ_i(u(t)) := [γ_i(u_1(t)), γ_i(u_2(t)), ..., γ_i(u_r(t))]^T ∈ R^r.        (5)

Assume that u(t) = 0, y(t) = 0, and v(t) = 0 for t ⩽ 0, and that the orders n_b, n_c, and n_d are known or can be obtained by trial and error. In general, the orders of the Hammerstein model should be large when the nonlinear system is used for prediction; conversely, the orders should be small if the system is applied for control.

The objective of this paper is to estimate the unknown parameters B_i, c_i, and d_i from the available input-output data {u(t), y(t)} of the multivariable Hammerstein finite impulse response moving average (FIR-MA) model [30].

Recently, the filtering idea has received much attention [31–33]. Xiao and Yue studied input nonlinear dynamical adjustment models and presented a recursive generalized least squares algorithm and a filtering based least squares algorithm by replacing the unknown terms in the information vectors with their estimates [34]. The overparameterization method in [34] leads to a redundant estimated product for nonlinear systems and requires extra computation. Differing from the work in [30, 34, 35], this paper discusses the estimation problem of MIMO Hammerstein systems using the data filtering idea and transforms the FIR-MA system into a controlled autoregressive model by means of the key-term variable separation principle in [36–38]. The proposed algorithm can be extended to parameter estimation problems of dual-rate/multirate sampled systems [39–42] and other linear or nonlinear systems [43–46].

Briefly, the rest of this paper is organized as follows. Section 2 discusses a recursive least squares algorithm for the Hammerstein systems. Section 3 presents a filtering based recursive least squares algorithm by transferring an FIR-MA model to a controlled autoregressive model. Section 4 provides two illustrative examples. Finally, some concluding remarks are offered in Section 5.

2. The MRLS Algorithm

For comparison, the MRLS algorithm is listed in this section. Here we introduce some notation. t represents the current time, and "A =: X" or "X := A" means that "A is defined as X". The symbol I_{m×r} represents an identity matrix of size r followed by a null matrix for the last m − r rows when m ⩾ r, and vice versa. The norm of a matrix (or a column vector) X is defined by ‖X‖² := tr[XX^T]; ⊗ denotes the Kronecker product (direct product): if A = [a_{ij}] ∈ R^{m×n} and B = [b_{ij}] ∈ R^{p×q}, then A ⊗ B := [a_{ij}B] ∈ R^{(mp)×(nq)}; col[X] is the vector formed by stacking the columns of the matrix X: if X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, then col[X] := [x_1^T, x_2^T, ..., x_n^T]^T ∈ R^{mn}.

From (1)–(5), the intermediate variables x(t) and w(t) and the output y(t) of the system can be expressed as

    x(t) = [I_{m×r} + Σ_{i=1}^{n_b} B_i z^{-i}] ū(t),
                                                                              (6)
    w(t) = [1 + Σ_{i=1}^{n_d} d_i z^{-i}] v(t),

    y(t) = Σ_{i=1}^{n_c} c_i [γ_i(u(t)); 0] + Σ_{i=1}^{n_b} B_i ū(t − i) + Σ_{i=1}^{n_d} d_i v(t − i) + v(t),   (7)

where [γ_i(u(t)); 0] ∈ R^m denotes the vector γ_i(u(t)) padded with m − r zeros. Note that the subscripts s and n denote the first letters of "system" and "noise", distinguishing the types of the unknown parameter vectors or matrices. Define the parameter matrix θ_1, the parameter vectors θ_2 and θ_n, and the information vectors/matrices φ_1(t), φ_2(t), and φ_n(t) as

    θ_1^T := [B_1, B_2, ..., B_{n_b}] ∈ R^{m×(r n_b)},
    θ_2 := [c_1, c_2, ..., c_{n_c}]^T ∈ R^{n_c},
    θ_n := [d_1, d_2, ..., d_{n_d}]^T ∈ R^{n_d},
    φ_1(t) := [ū^T(t − 1), ū^T(t − 2), ..., ū^T(t − n_b)]^T ∈ R^{r n_b},      (8)
    φ_2(t) := [η_1(t), η_2(t), ..., η_{n_c}(t)] ∈ R^{m×n_c},
    η_i(t) := [γ_i(u(t)); 0] ∈ R^m,
    φ_n(t) := [v(t − 1), v(t − 2), ..., v(t − n_d)] ∈ R^{m×n_d}.

Then, we have

    x(t) = θ_1^T φ_1(t) + φ_2(t) θ_2,
    w(t) = φ_n(t) θ_n + v(t),
                                                                              (9)
    y(t) = x(t) + w(t) = θ_1^T φ_1(t) + φ_2(t) θ_2 + φ_n(t) θ_n + v(t).

Distinguished from the hierarchical identification methods, we reparameterize the model in (7) by using the Kronecker product: we collect the parameters into one vector ϑ and gather the input information vectors φ_1(t) and φ_2(t) and the output information matrix φ_n(t) into one information matrix Ψ(t) as follows:

    ϑ := [θ_s; θ_n] ∈ R^{n_0},   n_0 := m r n_b + n_c + n_d,
    θ_s := [col[θ_1^T]; θ_2] ∈ R^{m r n_b + n_c},
                                                                              (10)
    Ψ(t) := [φ_s(t), φ_n(t)] ∈ R^{m×n_0},
    φ_s(t) := [φ_1^T(t) ⊗ I_m, φ_2(t)] ∈ R^{m×(m r n_b + n_c)}.

Thus, we obtain

    y(t) = Ψ(t) ϑ + v(t).                                                     (11)

Equation (11) is the identification model of the multivariable Hammerstein FIR-MA system. Defining and minimizing the cost function

    J(ϑ) := ‖y(t) − Ψ(t) ϑ‖²                                                  (12)

and using the least squares search principle, we obtain the following recursive least squares algorithm [35] for the parameter estimate ϑ̂(t):

    ϑ̂(t) = ϑ̂(t − 1) + L(t)[y(t) − Ψ(t) ϑ̂(t − 1)],                          (13)
    L(t) = P(t − 1) Ψ^T(t) [I_m + Ψ(t) P(t − 1) Ψ^T(t)]^{-1},                 (14)
    P(t) = [I_{n_0} − L(t) Ψ(t)] P(t − 1),   P(0) = p_0 I_{n_0}.              (15)

Since the information matrix Ψ(t) in (13) contains the unknown intermediate variables ū(t − i) and the unmeasurable terms v(t − i), the recursive algorithm in (13)–(15) cannot compute the parameter estimate ϑ̂(t) directly. The solution here is to replace the unknown intermediate variables ū(t − i) and the unmeasurable terms v(t − i) in Ψ(t) with the variable estimates (the outputs of the auxiliary model) û(t − i) and the residuals v̂(t − i), based on the auxiliary model identification idea. The replaced information matrices are defined as

    Ψ̂(t) := [φ̂_s(t), φ̂_n(t)] ∈ R^{m×n_0},
    φ̂_s(t) := [φ̂_1^T(t) ⊗ I_m, φ_2(t)] ∈ R^{m×(m r n_b + n_c)},
                                                                              (16)
    φ̂_1(t) := [û^T(t − 1), û^T(t − 2), ..., û^T(t − n_b)]^T ∈ R^{r n_b},
    φ̂_n(t) := [v̂(t − 1), v̂(t − 2), ..., v̂(t − n_d)] ∈ R^{m×n_d}.

Define the parameter estimation matrices

    ϑ̂(t) := [θ̂_s(t); θ̂_n(t)] ∈ R^{n_0},
    θ̂_s(t) := [col[θ̂_1^T(t)]; θ̂_2(t)] ∈ R^{m r n_b + n_c},
    θ̂_1^T(t) := [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)] ∈ R^{m×(r n_b)},         (17)
    θ̂_2(t) := [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T ∈ R^{n_c},
    θ̂_n(t) := [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)]^T ∈ R^{n_d}.

By replacing the parameters c_i (i = 1, 2, ..., n_c) in (4) with their estimates ĉ_i(t), the output û(t) of the proposed auxiliary model is given by

    û(t) = Σ_{i=1}^{n_c} ĉ_i(t) γ_i(u(t)) = ĉ_1(t) γ_1(u(t)) + ĉ_2(t) γ_2(u(t)) + ... + ĉ_{n_c}(t) γ_{n_c}(u(t)).   (18)

From (11), we obtain v(t) = y(t) − Ψ(t)ϑ. Replacing Ψ(t) and ϑ with Ψ̂(t) and ϑ̂(t − 1), the residual v̂(t) can be written as v̂(t) = y(t) − Ψ̂(t) ϑ̂(t − 1).

To summarize, we obtain the following recursive least squares algorithm for multivariable Hammerstein FIR-MA models (the MRLS algorithm for short):

    ϑ̂(t) = ϑ̂(t − 1) + L(t)[y(t) − Ψ̂(t) ϑ̂(t − 1)],
    L(t) = P(t − 1) Ψ̂^T(t) [I_m + Ψ̂(t) P(t − 1) Ψ̂^T(t)]^{-1},
    P(t) = [I_{n_0} − L(t) Ψ̂(t)] P(t − 1),
    û(t) = Σ_{i=1}^{n_c} ĉ_i(t) γ_i(u(t)),
    v̂(t) = y(t) − Ψ̂(t) ϑ̂(t − 1),
    Ψ̂(t) = [φ̂_1^T(t) ⊗ I_m, φ_2(t), φ̂_n(t)],
    φ̂_1(t) = [û^T(t − 1), û^T(t − 2), ..., û^T(t − n_b)]^T,
    φ_2(t) = [η_1(t), η_2(t), ..., η_{n_c}(t)],
    η_i(t) = [γ_i(u(t)); 0],
    φ̂_n(t) = [v̂(t − 1), v̂(t − 2), ..., v̂(t − n_d)],

    ϑ̂(t) = [col[θ̂_1^T(t)]; θ̂_2(t); θ̂_n(t)],
    θ̂_1^T(t) = [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)],
    θ̂_2(t) = [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T,
    θ̂_n(t) = [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)]^T.                          (19)

3. The F-MRLS Algorithm

The convergence rate of the MRLS algorithm in Section 2 is slow because the noise terms w(t) contain the unmeasurable time-delayed noise v(t − i). The solution here is a filtering based recursive least squares algorithm (the F-MRLS algorithm) for the multivariable Hammerstein models, obtained by filtering the data with the rational function 1/N(z) and thereby transferring the FIR-MA model in (1) into a controlled autoregressive (CAR) model. Multiplying both sides of (1) by N^{-1}(z) yields

    N^{-1}(z) y(t) = B(z) N^{-1}(z) ū(t) + v(t),                              (20)

or

    y_f(t) = B(z) u_f(t) + v(t),                                              (21)

where

    u_f(t) := (1/N(z)) ū(t) = (1/N(z)) Σ_{i=1}^{n_c} c_i γ_i(u(t)) = c_1 U_1(t) + c_2 U_2(t) + ... + c_{n_c} U_{n_c}(t),
    y_f(t) := (1/N(z)) y(t) = [1 − N(z)] y_f(t) + y(t) = y(t) − Σ_{i=1}^{n_d} d_i y_f(t − i),
                                                                              (22)
    U_i(t) := (1/N(z)) γ_i(u(t)) ∈ R^r,   i = 1, 2, ..., n_c,
    ζ_j(t) := [U_j(t); 0] ∈ R^m,   j = 1, 2, ..., n_c.

Thus, (21) can be rewritten as

    y_f(t) = [I_{m×r} + Σ_{i=1}^{n_b} B_i z^{-i}] u_f(t) + v(t)
           = [u_f(t); 0] + B_1 u_f(t − 1) + B_2 u_f(t − 2) + ... + B_{n_b} u_f(t − n_b) + v(t)
           = c_1 ζ_1(t) + c_2 ζ_2(t) + ... + c_{n_c} ζ_{n_c}(t) + B_1 u_f(t − 1) + B_2 u_f(t − 2) + ... + B_{n_b} u_f(t − n_b) + v(t).   (23)

Define the filtered information matrices:

    Ψ_f(t) := [φ_{f1}^T(t) ⊗ I_m, φ_{f2}(t)] ∈ R^{m×(m r n_b + n_c)},         (24)
    φ_{f1}(t) := [u_f^T(t − 1), u_f^T(t − 2), ..., u_f^T(t − n_b)]^T ∈ R^{r n_b},   (25)
    φ_{f2}(t) := [ζ_1(t), ζ_2(t), ..., ζ_{n_c}(t)] ∈ R^{m×n_c}.               (26)

Since the polynomial N(z) is unknown and to be estimated, it is impossible to use u_f(t) to construct φ_{f1}(t) in (25). Here, we adopt the principle of the MRLS algorithm in Section 2 and replace the unmeasurable variables and vectors with their estimates to derive the following algorithm.

By using the parameter estimates θ̂_1(t) and θ̂_n(t), the estimates of the polynomials B(z) and N(z) at time t can be constructed as

    B̂(t, z) := I_{m×r} + Σ_{i=1}^{n_b} B̂_i(t) z^{-i},
                                                                              (27)
    N̂(t, z) := 1 + d̂_1(t) z^{-1} + d̂_2(t) z^{-2} + ... + d̂_{n_d}(t) z^{-n_d}.

The term w(t) in model (1) can be rewritten as

    w(t) = y(t) − x(t) = y(t) − [I_{m×r} + Σ_{i=1}^{n_b} B_i z^{-i}] ū(t) = y(t) − φ_s(t) θ_s.   (28)

Let ŵ(t) be the estimate of w(t). Replacing φ_s(t) and θ_s in (28) with their estimates φ̂_s(t) and θ̂_s(t − 1) leads to

    ŵ(t) = y(t) − φ̂_s(t) θ̂_s(t − 1).                                        (29)

Defining and minimizing the cost function

    J(θ_n) := Σ_{j=1}^{t} ‖w(j) − φ_n(j) θ_n‖²                                (30)

and using the least squares search principle, we list the recursive least squares algorithm to compute θ̂_n(t):

    θ̂_n(t) = θ̂_n(t − 1) + L_n(t)[ŵ(t) − φ̂_n(t) θ̂_n(t − 1)],
    L_n(t) = P_n(t − 1) φ̂_n^T(t) [I_m + φ̂_n(t) P_n(t − 1) φ̂_n^T(t)]^{-1},   (31)
    P_n(t) = [I_{n_d} − L_n(t) φ̂_n(t)] P_n(t − 1),   P_n(0) = p_0 I.

Let ĉ(t) := [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T ∈ R^{n_c} be the estimate of c at time t. Filtering y(t) with 1/N̂(t, z) gives the estimate ŷ_f(t):

    ŷ_f(t) = (1/N̂(t, z)) y(t) = [1 − N̂(t, z)] ŷ_f(t) + y(t) = −Σ_{i=1}^{n_d} d̂_i(t) ŷ_f(t − i) + y(t).   (32)

The estimate of U_j(t) can be computed by

    Û_j(t) = (1/N̂(t, z)) γ_j(u(t)) = [1 − N̂(t, z)] Û_j(t) + γ_j(u(t)) = −Σ_{i=1}^{n_d} d̂_i(t) Û_j(t − i) + γ_j(u(t)).   (33)

Define the estimate of ζ_j(t) by

    ζ̂_j(t) := [Û_j(t); 0] ∈ R^m,   j = 1, 2, ..., n_c,                        (34)

and construct the estimate of Ψ_f(t) with φ̂_{f1}(t) and φ̂_{f2}(t) as follows:

    Ψ̂_f(t) = [φ̂_{f1}^T(t) ⊗ I_m, φ̂_{f2}(t)],
    φ̂_{f1}(t) = [û_f^T(t − 1), û_f^T(t − 2), ..., û_f^T(t − n_b)]^T,          (35)
    φ̂_{f2}(t) = [ζ̂_1(t), ζ̂_2(t), ..., ζ̂_{n_c}(t)].

The filtered model in (21) can be rewritten in a matrix form:

    y_f(t) = Ψ_f(t) θ_s + v(t),                                               (36)

or

    v(t) = y_f(t) − Ψ_f(t) θ_s.                                               (37)

Based on the MRLS search principle, we can obtain the estimate of θ_s by the following algorithm:

    θ̂_s(t) = θ̂_s(t − 1) + L_f(t)[ŷ_f(t) − Ψ̂_f(t) θ̂_s(t − 1)],
    L_f(t) = P_f(t − 1) Ψ̂_f^T(t) [I_m + Ψ̂_f(t) P_f(t − 1) Ψ̂_f^T(t)]^{-1},   (38)
    P_f(t) = [I_{m r n_b + n_c} − L_f(t) Ψ̂_f(t)] P_f(t − 1),   P_f(0) = p_0 I.

The estimate û(t) can be computed by

    û(t) = ĉ_1(t) γ_1(u(t)) + ĉ_2(t) γ_2(u(t)) + ... + ĉ_{n_c}(t) γ_{n_c}(u(t)).   (39)

Filtering û(t) with 1/N̂(t, z) gives the estimate û_f(t):

    û_f(t) = (1/N̂(t, z)) û(t) = (1/N̂(t, z)) Σ_{i=1}^{n_c} ĉ_i(t) γ_i(u(t)) = ĉ_1(t) Û_1(t) + ĉ_2(t) Û_2(t) + ... + ĉ_{n_c}(t) Û_{n_c}(t).   (40)

Replacing v(t), y_f(t), Ψ_f(t), and θ_s in (37) with their estimates v̂(t), ŷ_f(t), Ψ̂_f(t), and θ̂_s(t) at time t, the noise vector can be computed by

    v̂(t) = ŷ_f(t) − Ψ̂_f(t) θ̂_s(t).                                          (41)

In conclusion, we can summarize the following filtering based recursive least squares algorithm for multivariable Hammerstein models (the F-MRLS algorithm for short):

    θ̂_s(t) = θ̂_s(t − 1) + L_f(t)[ŷ_f(t) − Ψ̂_f(t) θ̂_s(t − 1)],              (42)
    L_f(t) = P_f(t − 1) Ψ̂_f^T(t) [I_m + Ψ̂_f(t) P_f(t − 1) Ψ̂_f^T(t)]^{-1},   (43)
    P_f(t) = [I_{m r n_b + n_c} − L_f(t) Ψ̂_f(t)] P_f(t − 1),                  (44)
    Ψ̂_f(t) = [φ̂_{f1}^T(t) ⊗ I_m, φ̂_{f2}(t)],                                (45)
    φ̂_{f1}(t) = [û_f^T(t − 1), û_f^T(t − 2), ..., û_f^T(t − n_b)]^T,          (46)
    φ̂_{f2}(t) = [ζ̂_1(t), ζ̂_2(t), ..., ζ̂_{n_c}(t)],                          (47)
    ζ̂_j(t) = [Û_j(t); 0],                                                     (48)

    Û_j(t) = −Σ_{i=1}^{n_d} d̂_i(t) Û_j(t − i) + γ_j(u(t)),   j = 1, 2, ..., n_c,   (49)
    û_f(t) = ĉ_1(t) Û_1(t) + ĉ_2(t) Û_2(t) + ... + ĉ_{n_c}(t) Û_{n_c}(t),     (50)
    ŷ_f(t) = −Σ_{i=1}^{n_d} d̂_i(t) ŷ_f(t − i) + y(t),                        (51)
    θ̂_n(t) = θ̂_n(t − 1) + L_n(t)[ŵ(t) − φ̂_n(t) θ̂_n(t − 1)],               (52)
    L_n(t) = P_n(t − 1) φ̂_n^T(t) [I_m + φ̂_n(t) P_n(t − 1) φ̂_n^T(t)]^{-1},   (53)
    P_n(t) = [I_{n_d} − L_n(t) φ̂_n(t)] P_n(t − 1),                            (54)
    ŵ(t) = y(t) − φ̂_s(t) θ̂_s(t − 1),                                        (55)
    Ψ̂(t) = [φ̂_s(t), φ̂_n(t)],                                                (56)
    φ̂_s(t) = [φ̂_1^T(t) ⊗ I_m, φ_2(t)],                                       (57)
    φ̂_1(t) = [û^T(t − 1), û^T(t − 2), ..., û^T(t − n_b)]^T,                  (58)
    φ_2(t) = [η_1(t), η_2(t), ..., η_{n_c}(t)],                               (59)
    η_i(t) = [γ_i(u(t)); 0],                                                  (60)
    φ̂_n(t) = [v̂(t − 1), v̂(t − 2), ..., v̂(t − n_d)],                         (61)
    v̂(t) = ŷ_f(t) − Ψ̂_f(t) θ̂_s(t),                                         (62)
    û(t) = Σ_{i=1}^{n_c} ĉ_i(t) γ_i(u(t)),                                    (63)
    ϑ̂(t) = [θ̂_s(t); θ̂_n(t)],                                                (64)
    θ̂_s(t) = [col[θ̂_1^T(t)]; θ̂_2(t)],                                       (65)
    θ̂_1^T(t) = [B̂_1(t), B̂_2(t), ..., B̂_{n_b}(t)],                           (66)
    θ̂_2(t) = [ĉ_1(t), ĉ_2(t), ..., ĉ_{n_c}(t)]^T,                            (67)
    θ̂_n(t) = [d̂_1(t), d̂_2(t), ..., d̂_{n_d}(t)]^T.                           (68)

The steps involved in the F-MRLS algorithm for multivariable Hammerstein systems are listed in the following.

(1) To initialize, let t = 1 and set the initial values of the parameter estimation variables and covariance matrices as follows: θ̂_s(i) = 1_{m r n_b + n_c}/p_0, θ̂_n(i) = 1_{n_d}/p_0, ŷ_f(i) = 1_m/p_0, û_f(i) = 1_r/p_0, û(i) = 1_r/p_0, v̂(i) = 1_m/p_0 for i ⩽ 0, Û_j(i) = 1_r/p_0 for i ⩽ 0 and j = 1, 2, ..., n_c, P_f(0) = p_0 I_{m r n_b + n_c}, P_n(0) = p_0 I_{n_d}, p_0 = 10^6, and give the basis functions γ_j(⋅).

(2) Collect the input-output data u(t) and y(t), construct η_i(t) by (60), and construct φ̂_1(t) and φ_2(t) by (58) and (59). Form the information vectors φ̂_s(t) by (57), φ̂_n(t) by (61), and Ψ̂(t) by (56), respectively.

(3) Compute ŵ(t) by (55), the gain vector L_n(t) by (53), and the covariance matrix P_n(t) by (54), respectively. Update the parameter estimate θ̂_n(t) by (52).

(4) Compute ŷ_f(t) by (51) and Û_j(t) by (49). Construct ζ̂_j(t), φ̂_{f2}(t), and φ̂_{f1}(t) by (48), (47), and (46), respectively. Compute Ψ̂_f(t) by (45).

(5) Compute the gain vector L_f(t) by (43) and the covariance matrix P_f(t) by (44). Update the parameter estimate θ̂_s(t) by (42).

(6) Construct the parameter vectors θ̂_1(t), θ̂_2(t), and θ̂_n(t) by (66), (67), and (68). Form θ̂_s(t) and ϑ̂(t) by (65) and (64). Compute û_f(t) by (50), v̂(t) by (62), and û(t) by (63).

(7) Increase t by 1 and go to step (2).

4. Examples

Example 1. Consider the following 2-input 2-output Hammerstein FIR-MA system:

    [y_1(t); y_2(t)] = [ū_1(t); ū_2(t)] + B [ū_1(t − 1); ū_2(t − 1)] + [v_1(t); v_2(t)] + d_1 [v_1(t − 1); v_2(t − 1)],   (69)

where

    B = [0.13, 0.25; −1.21, 0.13],
    N(z) = 1 + d_1 z^{-1} = 1 + 0.68 z^{-1},
    ū(t) = c_1 u(t) + c_2 u²(t),                                              (70)
    c = [c_1, c_2]^T = [1.00, −0.14]^T,
    ϑ = [0.13, −1.21, 0.25, 0.13, −0.14, 0.68]^T.

In simulation, the inputs {u_1(t)} and {u_2(t)} are taken as two uncorrelated persistent excitation signal sequences with zero mean and unit variances, and {v_1(t)} and {v_2(t)} as two white noise sequences with zero mean and variances σ_1² = σ_2² = σ² (σ² = 0.50² and σ² = 1.00²). Applying the MRLS algorithm in (19) and the F-MRLS algorithm in (42)–(68) to estimate the parameters of this multivariable Hammerstein system, the F-MRLS parameter estimates and their estimation errors are shown in Table 1, the comparison between the F-MRLS algorithm and the MRLS algorithm in the estimation error δ := ‖ϑ̂(t) − ϑ‖/‖ϑ‖ versus t is shown in Table 2, and the

Table 1: The F-MRLS estimates and errors in Example 1 (σ² = 0.50² and σ² = 1.00²).

σ²       t     B(1,1)    B(2,1)     B(1,2)   B(2,2)   c_2        d_1      δ (%)
         100   0.10919   −1.60279   0.39771  0.12515  −0.16795   0.46319  33.14292
         200   0.08155   −1.44298   0.31145  0.09590  −0.13875   0.51078  21.01533
0.50²    500   0.10144   −1.33934   0.25923  0.08705  −0.15442   0.60335  11.18663
         1000  0.12452   −1.30385   0.26703  0.11599  −0.14796   0.65282  7.04142
         2000  0.13930   −1.27241   0.26711  0.12703  −0.14550   0.69136  4.66378
         3000  0.14008   −1.25781   0.26567  0.11928  −0.14361   0.70371  4.03373
         100   0.09530   −1.72159   0.52726  0.13387  −0.15869   0.52551  42.21841
         200   0.03631   −1.52223   0.36553  0.07113  −0.11792   0.58022  25.56872
1.00²    500   0.07400   −1.40400   0.25492  0.04715  −0.16037   0.64763  15.50868
         1000  0.12015   −1.36311   0.27803  0.10606  −0.15335   0.68001  11.08080
         2000  0.14849   −1.31457   0.28056  0.12704  −0.15048   0.70624  7.98334
         3000  0.15005   −1.29138   0.27909  0.10998  −0.14680   0.71335  6.79480
True values    0.13000   −1.21000   0.25000  0.13000  −0.14000   0.68000

Table 2: The MRLS and F-MRLS estimates and errors in Example 1 (σ² = 0.50²).

Algorithms  t     B(1,1)    B(2,1)     B(1,2)   B(2,2)   c_2        d_1      δ (%)
            100   −0.01946  −1.84785   0.44644  0.04712  −0.16528   0.39802  52.11950
            200   −0.03340  −1.60645   0.31299  0.05091  −0.14456   0.46017  34.45327
MRLS        500   0.03788   −1.39775   0.28600  0.03749  −0.14940   0.56178  18.20046
            1000  0.08074   −1.34458   0.28178  0.07949  −0.14078   0.61722  11.71737
            2000  0.11167   −1.28751   0.27285  0.10106  −0.13785   0.66411  6.24292
            3000  0.12600   −1.27455   0.26866  0.10656  −0.13729   0.68367  4.99772
            100   0.10919   −1.60279   0.39771  0.12515  −0.16795   0.46319  33.14292
            200   0.08155   −1.44298   0.31145  0.09590  −0.13875   0.51078  21.01533
F-MRLS      500   0.10144   −1.33934   0.25923  0.08705  −0.15442   0.60335  11.18663
            1000  0.12452   −1.30385   0.26703  0.11599  −0.14796   0.65282  7.04142
            2000  0.13930   −1.27241   0.26711  0.12703  −0.14550   0.69136  4.66378
            3000  0.14008   −1.25781   0.26567  0.11928  −0.14361   0.70371  4.03373
True values       0.13000   −1.21000   0.25000  0.13000  −0.14000   0.68000

estimation errors δ := ‖ϑ̂(t) − ϑ‖/‖ϑ‖ versus t are shown in Figures 1 and 2.

Figure 1: The F-MRLS estimation errors δ versus t in Example 1 (σ² = 0.50² and σ² = 1.00²).

Example 2. Consider the following 2-input 2-output Hammerstein FIR-MA system:

    [y_1(t); y_2(t)] = [ū_1(t); ū_2(t)] + B [ū_1(t − 1); ū_2(t − 1)] + [v_1(t); v_2(t)] + d_1 [v_1(t − 1); v_2(t − 1)],   (71)

where

    B = [0.14, 0.25; −1.05, 0.125],
    N(z) = 1 + d_1 z^{-1} = 1 + 0.50 z^{-1},
    ū(t) = c_1 u(t) + c_2 u²(t) + c_3 u³(t),                                  (72)
    c = [c_1, c_2, c_3]^T = [1.00, −0.19, 1.19]^T,
    ϑ = [0.14, −1.05, 0.25, 0.125, −0.19, 1.19, 0.50]^T.

Table 3: The F-MRLS estimates and errors in Example 2 (σ² = 0.50² and σ² = 1.00²).

σ²       t     B(1,1)   B(2,1)     B(1,2)   B(2,2)   c_2        c_3      d_1      δ (%)
         100   0.12373  −1.02307   0.27283  0.10358  −0.39300   1.25093  0.32512  16.33820
         200   0.12306  −1.03981   0.24977  0.11748  −0.29306   1.19925  0.35407  10.57344
0.50²    500   0.13109  −1.05853   0.24655  0.11563  −0.25013   1.19098  0.40948  6.44687
         1000  0.13728  −1.06400   0.25207  0.12027  −0.22484   1.21159  0.45612  3.63495
         2000  0.14372  −1.06203   0.25315  0.12346  −0.20822   1.20110  0.49970  1.46876
         3000  0.14389  −1.05999   0.25321  0.12156  −0.20281   1.19897  0.51458  1.43078
         100   0.10227  −0.88895   0.29620  0.02428  −0.45977   1.39972  0.43173  23.55652
         200   0.10022  −0.94211   0.26126  0.06755  −0.30135   1.31216  0.41341  13.31668
1.00²    500   0.11893  −1.01753   0.24656  0.08815  −0.27093   1.24343  0.43981  7.40054
         1000  0.13195  −1.05047   0.25483  0.10514  −0.24217   1.26386  0.47397  5.67199
         2000  0.14551  −1.05983   0.25634  0.11672  −0.21751   1.22939  0.51038  3.02299
         3000  0.14632  −1.06041   0.25650  0.11459  −0.20918   1.22061  0.52227  2.68962
True values    0.14000  −1.05000   0.25000  0.12500  −0.19000   1.19000  0.50000

Table 4: The comparison of parameter estimates and errors in Example 2 (σ² = 0.50²).

Algorithms  t     B(1,1)   B(2,1)     B(1,2)   B(2,2)   c_2        c_3      d_1      δ (%)
            100   0.10379  −1.49022   0.30877  0.11454  −0.37596   0.96745  0.25108  34.46417
            200   0.08551  −1.31358   0.29070  0.10159  −0.30829   1.05130  0.24601  24.37082
MRLS        500   0.10950  −1.17837   0.27162  0.09651  −0.24919   1.13023  0.30548  14.80320
            1000  0.12218  −1.13050   0.26705  0.10410  −0.21956   1.17878  0.36159  9.76634
            2000  0.13294  −1.09517   0.26234  0.11047  −0.20567   1.18593  0.42481  5.36992
            3000  0.13715  −1.08439   0.25990  0.11297  −0.20141   1.18517  0.45571  3.49648
            100   0.12373  −1.02307   0.27283  0.10358  −0.39300   1.25093  0.32512  16.33820
            200   0.12306  −1.03981   0.24977  0.11748  −0.29306   1.19925  0.35407  10.57344
F-MRLS      500   0.13109  −1.05853   0.24655  0.11563  −0.25013   1.19098  0.40948  6.44687
            1000  0.13728  −1.06400   0.25207  0.12027  −0.22484   1.21159  0.45612  3.63495
            2000  0.14372  −1.06203   0.25315  0.12346  −0.20822   1.20110  0.49970  1.46876
            3000  0.14389  −1.05999   0.25321  0.12156  −0.20281   1.19897  0.51458  1.43078
True values       0.14000  −1.05000   0.25000  0.12500  −0.19000   1.19000  0.50000
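The error measure δ := ‖ϑ̂(t) − ϑ‖/‖ϑ‖ reported in Tables 1–4 can be recomputed directly from the tabulated values. The short numpy sketch below (not from the paper) reproduces the final F-MRLS entry of Table 4 up to rounding:

```python
import numpy as np

# delta = ||theta_hat - theta|| / ||theta||, using the Example 2 true values
# and the F-MRLS estimate at t = 3000 from Table 4.
theta_true = np.array([0.14, -1.05, 0.25, 0.125, -0.19, 1.19, 0.50])
theta_hat = np.array([0.14389, -1.05999, 0.25321, 0.12156,
                      -0.20281, 1.19897, 0.51458])

delta = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
print(f"{100 * delta:.3f}%")  # ~1.431%, matching the 1.43078% table entry up to rounding
```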

Table 5: Comparison of the computational efficiency of the F-MRLS and MRLS algorithms.

Algorithms  Number of multiplications        Number of additions
MRLS        2n² + 5n                         2n² + 2n
F-MRLS      2n_b² + 2n_c² + 2n_d² + 5n       2n_b² + 2n_c² + 2n_d² + 2n + 1

In simulation, the inputs {u_1(t)} and {u_2(t)} and the noise data {v_1(t)} and {v_2(t)} are set in the same way as in Example 1, with variances σ_1² = σ_2² = σ² (σ² = 0.50²). Applying the MRLS algorithm and the F-MRLS algorithm in (42)–(68), the F-MRLS parameter estimates and their estimation errors are shown in Table 3, the comparison between the F-MRLS algorithm and the MRLS algorithm in the estimation error δ := ‖ϑ̂(t) − ϑ‖/‖ϑ‖ versus t is shown in Table 4, and the estimation errors δ versus t are shown in Figures 3 and 4.

To illustrate the advantages of the proposed algorithm, the numbers of multiplications and additions for each step of the F-MRLS algorithm and the MRLS algorithm are listed in Table 5.

Figure 2: The parameter estimation errors δ versus t in Example 1 (σ² = 0.50²).

From Tables 1–5 and Figures 1–4, we can draw the following conclusions.

(i) The parameter estimation errors become smaller as t increases, which shows that the proposed algorithms are effective.

Figure 3: The F-RLS estimation errors 𝛿 versus 𝑡 in Example 2 (𝜎² = 0.50² and 𝜎² = 1.00²).

Figure 4: The parameter estimation errors 𝛿 versus 𝑡 in Example 2 (𝜎² = 0.50²).

(ii) The F-MRLS algorithm produces more accurate parameter estimates than the MRLS algorithm; that is, the proposed F-MRLS algorithm outperforms the MRLS algorithm.

(iii) The parameter estimates given by the F-MRLS algorithm converge faster than those given by the MRLS algorithm.

5. Conclusions

This paper presents a data filtering based recursive least squares algorithm for MIMO nonlinear FIR-MA systems. The simulation results show that the proposed data filtering based recursive least squares algorithm is more accurate and carries a lower computational burden than the recursive least squares algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (nos. 61174032 and 61202473), the Doctoral Foundation of Higher Education Priority Fields of Scientific Research (no. 20110093130001), and the Scientific Research Foundation of Jiangnan University (no. 1252050205135110) and by the PAPD of Jiangsu Higher Education Institutions.

References

[1] W.-H. Ho, J.-H. Chou, and C.-Y. Guo, “Parameter identification of chaotic systems using improved differential evolution algorithm,” Nonlinear Dynamics, vol. 61, no. 1-2, pp. 29–41, 2010.
[2] M. D. Narayanan, S. Narayanan, and C. Padmanabhan, “Parametric identification of nonlinear systems using multiple trials,” Nonlinear Dynamics, vol. 48, no. 4, pp. 341–360, 2007.
[3] W. Silva, “Identification of nonlinear aeroelastic systems based on the Volterra theory: progress and opportunities,” Nonlinear Dynamics, vol. 39, no. 1-2, pp. 25–62, 2005.
[4] Y. Zhang and G. Cui, “Bias compensation methods for stochastic systems with colored noise,” Applied Mathematical Modelling, vol. 35, no. 4, pp. 1709–1716, 2011.
[5] Y. Zhang, “Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods,” Mathematical and Computer Modelling, vol. 53, no. 9-10, pp. 1810–1819, 2011.
[6] Y. S. Xiao, Y. Zhang, J. Ding, and J. Y. Dai, “The residual based interactive least squares algorithms and simulation studies,” Computers & Mathematics with Applications, vol. 58, no. 6, pp. 1190–1197, 2009.
[7] Q.-G. Wang, X. Guo, and Y. Zhang, “Direct identification of continuous time delay systems from step responses,” Journal of Process Control, vol. 11, no. 5, pp. 531–542, 2001.
[8] F. Ding, “State filtering and parameter estimation for state space systems with scarce measurements,” Signal Processing, vol. 104, pp. 369–380, 2014.
[9] Y. Gu, F. Ding, and J. H. Li, “State filtering and parameter estimation for linear systems with d-step state-delay,” IET Signal Processing, vol. 8, no. 6, pp. 639–646, 2014.
[10] M. Dehghan and M. Hajarian, “Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations,” Applied Mathematics Letters, vol. 24, no. 4, pp. 444–449, 2011.
[11] M. Dehghan and M. Hajarian, “A lower bound for the product of eigenvalues of solutions to matrix equations,” Applied Mathematics Letters, vol. 22, no. 12, pp. 1786–1788, 2009.
[12] M. Dehghan and M. Hajarian, “SSHI methods for solving general linear matrix equations,” Engineering Computations, vol. 28, no. 8, pp. 1028–1043, 2011.
[13] M. Dehghan and M. Hajarian, “An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation,” Applied Mathematics and Computation, vol. 202, no. 2, pp. 571–588, 2008.
[14] M. Dehghan and M. Hajarian, “Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation 𝐴1𝑋1𝐵1 + 𝐴2𝑋2𝐵2 = 𝐶,” Mathematical and Computer Modelling, vol. 49, no. 9-10, pp. 1937–1959, 2009.
[15] M. Dehghan and M. Hajarian, “Fourth-order variants of Newton’s method without second derivatives for solving non-linear equations,” Engineering Computations, vol. 29, no. 4, pp. 356–365, 2012.
[16] M. Dehghan and M. Hajarian, “Iterative algorithms for the generalized centro-symmetric and central anti-symmetric solutions of general coupled matrix equations,” Engineering Computations, vol. 29, no. 5, pp. 528–560, 2012.
[17] Y. Shi and H. Fang, “Kalman filter-based identification for systems with randomly missing measurements in a network environment,” International Journal of Control, vol. 83, no. 3, pp. 538–551, 2010.
[18] Y. Shi and B. Yu, “Output feedback stabilization of networked control systems with random delays modeled by Markov chains,” IEEE Transactions on Automatic Control, vol. 54, no. 7, pp. 1668–1674, 2009.
[19] Y. Shi and B. Yu, “Robust mixed 𝐻2/𝐻∞ control of networked control systems with random time delays in both forward and backward communication links,” Automatica, vol. 47, no. 4, pp. 754–760, 2011.
[20] Y. Hu, “Iterative and recursive least squares estimation algorithms for moving average systems,” Simulation Modelling Practice and Theory, vol. 34, pp. 12–19, 2013.
[21] Y. B. Hu, B. L. Liu, Q. Zhou, and C. Yang, “Recursive extended least squares parameter estimation for Wiener nonlinear systems with moving average noises,” Circuits, Systems, and Signal Processing, vol. 33, no. 2, pp. 655–664, 2014.
[22] C. Wang and T. Tang, “Recursive least squares estimation algorithm applied to a class of linear-in-parameters output error moving average systems,” Applied Mathematics Letters, vol. 29, pp. 36–41, 2014.
[23] F. Ding, X. Liu, and J. Chu, “Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle,” IET Control Theory & Applications, vol. 7, no. 2, pp. 176–184, 2013.
[24] C. Wang and T. Tang, “Several gradient-based iterative estimation algorithms for a class of nonlinear systems using the filtering technique,” Nonlinear Dynamics, vol. 77, no. 3, pp. 769–780, 2014.
[25] Y. B. Hu, B. L. Liu, and Q. Zhou, “A multi-innovation generalized extended stochastic gradient algorithm for output nonlinear autoregressive moving average systems,” Applied Mathematics and Computation, vol. 247, pp. 218–224, 2014.
[26] J. H. Li, “Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration,” Applied Mathematics Letters, vol. 26, no. 1, pp. 91–96, 2013.
[27] F. Ding and T. Chen, “Identification of Hammerstein nonlinear ARMAX systems,” Automatica, vol. 41, no. 9, pp. 1479–1489, 2005.
[28] F. Ding, Y. Shi, and T. Chen, “Auxiliary model-based least-squares identification methods for Hammerstein output-error systems,” Systems and Control Letters, vol. 56, no. 5, pp. 373–380, 2007.
[29] F. Ding, X. P. Liu, and G. Liu, “Identification methods for Hammerstein nonlinear systems,” Digital Signal Processing, vol. 21, no. 2, pp. 215–238, 2011.
[30] Z. Y. Wang, Y. X. Shen, Z. Ji, and R. Ding, “Filtering based recursive least squares algorithm for Hammerstein FIR-MA systems,” Nonlinear Dynamics, vol. 73, no. 1-2, pp. 1045–1054, 2013.
[31] M. S. Ahmad, O. Kukrer, and A. Hocanin, “Recursive inverse adaptive filtering algorithm,” Digital Signal Processing, vol. 21, no. 4, pp. 491–496, 2011.
[32] D. Q. Wang, “Least squares-based recursive and iterative estimation for output error moving average systems using data filtering,” IET Control Theory & Applications, vol. 5, no. 14, pp. 1648–1657, 2011.
[33] B. Yu, Y. Shi, and H. Huang, “𝑙2-𝑙∞ filtering for multirate systems based on lifted models,” Circuits, Systems, and Signal Processing, vol. 27, no. 5, pp. 699–711, 2008.
[34] Y. Xiao and N. Yue, “Parameter estimation for nonlinear dynamical adjustment models,” Mathematical and Computer Modelling, vol. 54, no. 5-6, pp. 1561–1568, 2011.
[35] F. Ding and H. Duan, “Two-stage parameter estimation algorithms for Box-Jenkins systems,” IET Signal Processing, vol. 7, no. 8, pp. 646–654, 2013.
[36] J. Vörös, “Iterative algorithm for parameter identification of Hammerstein systems with two-segment nonlinearities,” IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 2145–2149, 1999.
[37] J. Vörös, “Identification of Hammerstein systems with time-varying piecewise-linear characteristics,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 52, no. 12, pp. 865–869, 2005.
[38] J. Vörös, “Recursive identification of Hammerstein systems with discontinuous nonlinearities containing dead-zones,” IEEE Transactions on Automatic Control, vol. 48, no. 12, pp. 2203–2206, 2003.
[39] Y. Liu, F. Ding, and Y. Shi, “An efficient hierarchical identification method for general dual-rate sampled-data systems,” Automatica, vol. 50, no. 3, pp. 962–970, 2014.
[40] J. Ding, C. X. Fan, and J. X. Lin, “Auxiliary model based parameter estimation for dual-rate output error systems with colored noise,” Applied Mathematical Modelling, vol. 37, no. 6, pp. 4051–4058, 2013.
[41] L. Xie, H. Yang, and B. Huang, “FIR model identification of multirate processes with random delays using EM algorithm,” AIChE Journal, vol. 59, no. 11, pp. 4124–4132, 2013.
[42] J. Ding and J. Lin, “Modified subspace identification for periodically non-uniformly sampled systems by using the lifting technique,” Circuits, Systems, and Signal Processing, vol. 33, no. 5, pp. 1439–1449, 2014.
[43] F. Ding, “Hierarchical estimation algorithms for multivariable systems using measurement information,” Information Sciences, vol. 277, pp. 396–405, 2014.
[44] F. Ding, “Combined state and least squares parameter estimation algorithms for dynamic systems,” Applied Mathematical Modelling, vol. 38, no. 1, pp. 403–412, 2014.
[45] X. Luan, P. Shi, and F. Liu, “Stabilization of networked control systems with random delays,” IEEE Transactions on Industrial Electronics, vol. 58, no. 9, pp. 4323–4330, 2011.
[46] X. Luan, S. Zhao, and F. Liu, “𝐻∞ control for discrete-time Markov jump systems with uncertain transition probabilities,” IEEE Transactions on Automatic Control, vol. 58, no. 6, pp. 1566–1572, 2013.
