
Int. J. Appl. Math. Comput. Sci., 2004, Vol. 14, No. 2, 201–208

A NUMERICAL PROCEDURE FOR FILTERING AND EFFICIENT
HIGH-ORDER SIGNAL DIFFERENTIATION

SALIM IBRIR∗, SETTE DIOP∗∗

∗ Department of Automated Production, École de Technologie Supérieure
1100, rue Notre Dame Ouest, Montreal, Québec, H3C 1K3 Canada
e-mail: s−[email protected]

∗∗ Laboratoire des Signaux et Systèmes, CNRS, Supélec, 3 rue Joliot-Curie
91190 Gif-sur-Yvette, France
e-mail: [email protected]

In this paper, we propose a numerical algorithm for filtering and robust signal differentiation. The numerical procedure
is based on the solution of a simplified linear optimization problem. A compromise between smoothing and fidelity with
respect to the measurable data is achieved by the computation of an optimal regularization parameter that minimizes the
Generalized Cross Validation criterion (GCV). Simulation results are given to highlight the effectiveness of the proposed
procedure.

Keywords: generalized cross validation, smoothing, differentiation, spline functions, optimization

1. Introduction

In many estimation and observation problems, estimating the unmeasured system dynamics turns on estimating the derivatives of the measured system outputs from discrete samples of measurements (Diop et al., 1993; Gauthier et al., 1992; Ibrir, 1999). A model of the signal dynamics may be of crucial help to achieve the desired objective. This has been magnificently demonstrated in the pioneering works of R.E. Kalman (1960) and D.G. Luenberger (1971) for signals generated by known linear dynamical systems. Roughly speaking, if a signal model is known, then the resulting smooth signal can be differentiated with respect to time in order to obtain estimates of higher derivatives of the system output. For example, consider the problem of estimating the ν − 1 first derivatives y^(i), i = 0, 1, . . . , ν − 1, of the output of a dynamic system, say, y^(ν) = f(y, ẏ, ÿ, . . . , y^(ν−1)), where y may be a vector and f may contain input derivatives. But we choose not to go into technical details. If the nonlinear function f is known accurately enough, then asymptotic nonlinear observers can be designed using the results of (Ciccarella et al., 1993; Gauthier et al., 1992; Misawa and Hedrick, 1989; Rajamani, 1998; Tornambè, 1992; Xia and Gao, 1989). The proof of the asymptotic convergence of those observers requires various restrictive assumptions on the nonlinear function f. If f is not known accurately enough, then estimators for the derivatives of y may still be obtained via the theory of stabilization of uncertain systems, see, e.g., (Barmish and Leitmann, 1982; Chen, 1990; Chen and Leitmann, 1987; Dawson et al., 1992; Leitmann, 1981). The practical convergence that is reached by the latter approach needs some matching conditions. We shall also mention the approach via sliding modes, as in (Slotine et al., 1987).

However, there are at least two practical situations where the available model is not of great help. First, the system model may be too poorly known. Second, it may be too complex for an extension of linear observer design theory. In those situations, and as long as practical (in lieu of asymptotic) convergence is enough for the specific application at hand, we may consider using differentiation estimators which merely ignore the nonlinear function f in their design. Differentiation estimators may be realized in either continuous or discrete time, as suggested in (Ibrir, 2001; 2003). This is motivation enough for observer design theorists to study more sophisticated numerical differentiation techniques for use in more involved control design problems. The main contributions in this area are to be found in the numerical analysis literature, see (Anderson and Bloomfield, 1974; Craven and Wahba, 1979; De Boor, 1978; Eubank, 1988; Gasser et al., 1985; Georgiev, 1984; Härdle, 1984; 1985; Ibrir, 1999; 2000; 2003; Müller, 1984; Reinsch, 1967; 1971) for more motivations and basic references. But these results have to be adapted to observer design problems since they were often envisioned for off-line use.

The main difficulty that we face while designing differentiation observers, without any a priori knowledge of the system dynamics, is noise filtering. For this reason, robust signal differentiation can be classified as an ill-posed problem due to the conflicting goals that we aim to realize. Generally, noise filtering, precision, and the peaking phenomenon are three contradictory performances that characterize the robustness of any differentiation system.

The field of ill-posed problems has certainly been one of the fastest growing areas in signal processing and applied mathematics. This growth has largely been driven by the needs of applications both in other sciences and in industry. A problem is mathematically ill-posed if its solution does not exist, is not unique, or does not depend continuously on the data. A typical example is the combined interpolation and differentiation problem of noisy data. A problem therein is that there are infinitely many ways to determine the interpolated function values if only the constraint from the data is used. Additional constraints are needed to guarantee the uniqueness of the solution and to make the problem well posed. An important constraint in this context is smoothness. By imposing a smoothness constraint, the analytic regularization method converts an ill-posed problem into a well-posed one. This has been used in solving numerous practical problems such as estimating higher derivatives of a signal through potentially noisy data.

As will be shown, inverse problems typically lead to mathematical models that are not well posed in Hadamard's sense, i.e., to ill-posed problems. Specifically, this means that their solutions are unstable under data perturbations. Numerical methods that can cope with this problem are the so-called regularization methods. These methods have been quite successfully used in the numerical analysis literature in approaches to the ill-posed problem of smoothing a signal from its discrete, potentially uncertain, samples (Anderson and Bloomfield, 1974; Craven and Wahba, 1979; Eubank, 1988; De Boor, 1978). One of these approaches proposed an algorithm for the computation of an optimal spline whose first derivatives are estimates of the first derivatives of the signal. These algorithms suffer from the large amount of computation they require. One of the famous regularization criteria which have been extensively considered in numerical analysis and statistics (De Boor, 1978) is

    J = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² + λ ∫_0^t [ŷ^(m)(s)]² ds,    (1)

which embodies a compromise between closeness to the measured data and smoothness of the estimate. The balance between the two distances is mastered by a particular choice of the parameter λ. It was shown that the minimum of the performance index (1) is a spline function of order 2m, see (De Boor, 1978). Recall that spline functions are smooth piecewise functions. Since their introduction, splines have proved to be very popular in interpolation, smoothing and approximation, and in computational mathematics in general.

In this paper we present the steps of a new discrete-time algorithm which smooths signals from their uncertain discrete samples. The proposed algorithm does not require any knowledge of the statistics of the measurement uncertainties and is based on the minimization of a criterion equivalent to (1). The new discrete-time smoothing criterion is inspired by finite-difference schemes. In this algorithm the regularization parameter is obtained from the optimality condition of the Generalized Cross-Validation criterion as earlier introduced in (Craven and Wahba, 1979). We show that the smooth solution can be given as discrete samples or as a continuous-time spline function defined over the observation interval. Consequently, the regularized solution can be differentiated as many times as needed to estimate smooth higher derivatives of the measured signal.

2. Problem Statement and Solution of the Optimization Problem

Here, we consider the problem of smoothing noisy data with possibly estimating the higher derivatives ŷ^(µ)(t_i), µ = 0, 1, . . . , ν − 1, from discrete, potentially uncertain, samples y_ℓ = ȳ(t_ℓ) + ε(t_ℓ), ℓ = i − n + 1, . . . , i, measured with an error ε(t_ℓ) at n distinct instants, by minimizing the cost function

    J := (1/n) Σ_{ℓ=i−n+1}^{i} (ŷ(t_ℓ) − y(t_ℓ))² + λ Σ_{ℓ=i−n+m}^{i−1} [ŷ_ℓ^(m) (∆t)^m]²,    i ∈ Z≥n,

where Z≥n is the set of positive integers greater than or equal to n. For each moving window [t_{i−n+1}, . . . , t_i] of length n, we minimize (2) with respect to ŷ. The first term in the criterion is the well-known least-squares criterion, and the second term is a functional equivalent to the continuous integral

    ∫_{t_{i−n+1}}^{t_i} [ŷ^(m)(t)]² dt,

such that ŷ^(m)(t) is the continuous m-th derivative of the function ŷ(t). Here ŷ_i^(m) denotes the finite-difference scheme of the m-th derivative of the continuous function ŷ(t) at time t = t_i. In order to compute the m-th derivative of ŷ(t) at time t = t_i we will only use the samples

ŷ_{i−m}, ŷ_{i−m−1}, . . . , ŷ_i. Then the last cost function is written in the matrix form as

    J := (1/n) ‖Y − Ŷ‖² + λ ‖H Ŷ‖²,    (2)

where

    Y = [y_{i−n+1}, y_{i−n+2}, . . . , y_i]′,    Ŷ = [ŷ_{i−n+1}, ŷ_{i−n+2}, . . . , ŷ_i]′,

and H is an (n − m) × n matrix whose general rows consist of the entries

    (−1)^{m+j−1} C_m^{j−1},    j = 1, . . . , m + 1,    (3)

where C_n^k is the standard binomial coefficient. For m = 2, 3, 4, and 5, the smoothness conditions are

    Σ_{ℓ=2}^{n−1} [ŷ_{ℓ−1} − 2ŷ_ℓ + ŷ_{ℓ+1}]²,

    Σ_{ℓ=3}^{n−1} [−ŷ_{ℓ−2} + 3ŷ_{ℓ−1} − 3ŷ_ℓ + ŷ_{ℓ+1}]²,

    Σ_{ℓ=4}^{n−1} [ŷ_{ℓ−3} − 4ŷ_{ℓ−2} + 6ŷ_{ℓ−1} − 4ŷ_ℓ + ŷ_{ℓ+1}]²,

    Σ_{ℓ=5}^{n−1} [−ŷ_{ℓ−4} + 5ŷ_{ℓ−3} − 10ŷ_{ℓ−2} + 10ŷ_{ℓ−1} − 5ŷ_ℓ + ŷ_{ℓ+1}]²,

respectively. Consequently, the corresponding matrices are

    H_{(n−2)×n} =
        [ 1  −2   1   0   · · ·   0 ]
        [ 0   1  −2   1   · · ·   0 ]
        [            · · ·          ]
        [ 0   0   · · ·   1  −2   1 ],

    H_{(n−3)×n} =
        [ −1   3  −3   1   0   · · ·   0 ]
        [  0  −1   3  −3   1   · · ·   0 ]
        [              · · ·             ]
        [  0   0   0  −1   3  −3   1 ],

    H_{(n−4)×n} =
        [ 1  −4   6  −4   1   0   · · ·   0 ]
        [ 0   1  −4   6  −4   1   · · ·   0 ]
        [               · · ·               ]
        [ 0   0   0   1  −4   6  −4   1 ].
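As a concrete illustration of this construction, the following Python sketch (ours, not from the paper) assembles H from the general rows (3) for a given order m and window length n, and evaluates the matrix-form criterion (2); the helper names are hypothetical.

```python
import numpy as np
from math import comb

def difference_matrix(n, m):
    # Rows carry the m-th order difference coefficients of (3):
    # (-1)^(m+j-1) * C(m, j-1), j = 1, ..., m+1.
    # m = 2 gives [1, -2, 1]; m = 3 gives [-1, 3, -3, 1], as above.
    row = [(-1) ** (m + j - 1) * comb(m, j - 1) for j in range(1, m + 2)]
    H = np.zeros((n - m, n))
    for i in range(n - m):
        H[i, i:i + m + 1] = row
    return H

def smoothing_cost(y_hat, y, lam, H):
    # Discrete criterion (2): J = (1/n)||Y - Yhat||^2 + lam * ||H Yhat||^2.
    return np.mean((y - y_hat) ** 2) + lam * np.sum((H @ y_hat) ** 2)

H = difference_matrix(10, 2)   # shape (8, 10); each row is [1, -2, 1]
R = H.T @ H                    # the penalty matrix R = H'H used later
```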

The derivative formulae (3) come from the approximation of the m-th derivative of ŷ by the following finite-difference scheme:

    ŷ_i^(m) = (1/(∆t)^m) Σ_{j=0}^{m+1} (−1)^{m+j} C_m^j ŷ_{i−m+j+1}.    (4)

This differentiation scheme is obtained by solving the set of the following Taylor expansions with respect to the derivatives ŷ_i^(1), ŷ_i^(2), . . . , ŷ_i^(m):

    ŷ_{i−1} = ŷ_i − (δ/1!) ŷ_i^(1) + (δ²/2!) ŷ_i^(2) + · · · + (δ^m/m!) ŷ_i^(m),

    ŷ_{i−2} = ŷ_i − (2δ/1!) ŷ_i^(1) + ((2δ)²/2!) ŷ_i^(2) + · · · + ((2δ)^m/m!) ŷ_i^(m),

        ⋮

    ŷ_{i−m} = ŷ_i − (mδ/1!) ŷ_i^(1) + ((mδ)²/2!) ŷ_i^(2) + · · · + ((mδ)^m/m!) ŷ_i^(m),

where δ = t_i − t_{i−1} is the sampling period. We have selected this finite-difference scheme in order to force the matrix H′H to be positive definite. The symbol ‖·‖ denotes the Euclidean norm, and λ is a smoothing parameter chosen in the interval [0, ∞[. We look for a solution of the last functional in the space of B-spline functions of order k = 2m. An interpretation of minimizing such a functional concerns the trade-off between smoothing and closeness to the data. If λ is set to zero, the minimization of (2) leads to a classical problem of least-squares approximation by a B-spline function of degree 2m − 1.

We shall use splines because they often exhibit some optimal properties in interpolation and smoothing; in other words, they can often be characterized as solutions to variational problems. Roughly speaking, splines minimize some sort of "energy" functional. This variational characterization leads to a generalized notion of splines, namely, variational splines.

For each fixed measurement window, we seek the solution of (2) as

    ŷ(t) := Σ_{j=i−n+1}^{i} α_j b_{j,2m}(t),    t_{i−n+1} ≤ t ≤ t_i,    i ∈ Z≥n,    (5)

where α ∈ R^n, and b_{j,2m}(t) is the j-th B-spline basis function of order 2m. For notational simplicity, ŷ(t) and α are not indexed with respect to the moving window. We assume that the conditions of the optimization problem are the same for each moving window. Thus, the cost function (2) becomes

    J = (1/n) (Y − Bα)′(Y − Bα) + λ α′B′RBα    (6)

such that

    R := H′H,
    B_{i,j} := b_{j,2m}(t_ℓ),    ℓ = i − n + 1, . . . , i,    i ∈ Z≥n.

The optimum value of the control vector α is obtained via the optimality condition dJ/dα = 0. Then we get

    −(2/n) B′(Y − Bα) + 2λ B′RBα = 0,    (7)

or

    α = (nλB′RB + B′B)^{−1} B′Y = (nλRB + B)^{−1} Y.    (8)

Consequently,

    Y − Bα = nλRB (nλB′RB + B′B)^{−1} B′Y.    (9)

From (8), the continuous spline is fully determined. Hence the discrete samples of the regularized solution are computed from

    Ŷ = Y − nλRB (nλB′RB + B′B)^{−1} B′Y = (I − nλR (I + nλR)^{−1}) Y.    (10)

As for the last equation, note that the discrete regularized samples are given as the output of an FIR filter whose coefficients are functions of the regularization parameter λ. The sensitivity of the solution to this parameter is quite important, so the next section is devoted to the optimal calculation of the regularization parameter through the cross-validation criterion.
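Assuming the basis matrix B and the penalty R = H′H have already been formed, a minimal Python sketch of (8) and (10) reads as follows (the function name is ours):

```python
import numpy as np

def smooth_window(Y, B, R, lam):
    n = len(Y)
    # Spline coefficients, Eq. (8): alpha = (n*lam*B'RB + B'B)^(-1) B'Y.
    alpha = np.linalg.solve(n * lam * B.T @ R @ B + B.T @ B, B.T @ Y)
    # Discrete regularized samples, Eq. (10):
    # Yhat = Y - n*lam*R (I + n*lam*R)^(-1) Y.
    Y_hat = Y - n * lam * R @ np.linalg.solve(np.eye(n) + n * lam * R, Y)
    return alpha, Y_hat
```

Solving the two linear systems directly avoids forming the explicit inverses appearing in (8) and (10).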
3. Computing the Regularization Parameter

In this section we shall present details of a computational method for estimating the optimal regularization parameter in terms of the criterion matrices. We have seen that the spline vector α depends upon the smoothing parameter λ. In (Craven and Wahba, 1979), two ways of estimating the smoothing parameter λ were given. The first method is called ordinary cross-validation (OCV), which consists in finding the value of λ that minimizes the OCV criterion

    R(λ) := Σ_{ℓ=i−n+1}^{i} (ŷ(t_ℓ) − y(t_ℓ))²,    i = n, n + 1, . . . ,    (11)

where ŷ(t) is a smooth polynomial of degree 2m − 1. Reinsch (1967) suggests, roughly speaking, that if the variance of the noise σ² is known, then λ should be chosen so that

    Σ_{ℓ=i−n+1}^{i} (ŷ(t_ℓ) − y(t_ℓ))² = n σ².    (12)

Let A(λ) be the n × n matrix depending on t_{i−n+1}, t_{i−n+2}, . . . , t_i and λ such that

    [ŷ(t_{i−n+1}), . . . , ŷ(t_i)]′ = A(λ) [y(t_{i−n+1}), . . . , y(t_i)]′.    (13)

The main result of (Craven and Wahba, 1979) shows that a good estimate of the smoothing parameter λ (also called the generalized cross-validation parameter) is the minimizer of the GCV criterion

    V(λ) = [(1/n) ‖(I − A(λ)) Y‖²] / [(1/n) trace(I − A(λ))]².    (14)

This estimate has the advantage of being free from the knowledge of the statistical properties of the noise. Further, if the minimizer of V(λ) is obtained, then the estimates of higher derivatives of the function y(t) can be obtained by differentiating the smooth function ŷ(t).

Now, we outline a computational method to determine the smoothing parameter which minimizes the cross-validation criterion V(λ), where the polynomial smoothing spline ŷ(t) is supposed to be a B-spline of degree 2m − 1. Using the definition of A(λ), we write

    Y − Ŷ = Y − A(λ)Y = (I − A(λ)) Y.    (15)

From (7), we obtain

    Y − Ŷ = nλ R B α.    (16)

Substituting (8) in (16), we get

    Y − Ŷ = nλ R B (nλ B′R B + B′B)^{−1} B′Y = nλR (I + nλR)^{−1} Y.    (17)

By comparison with (15), we deduce that

    I − A(λ) = nλR (I + nλR)^{−1}.    (18)

The GCV criterion becomes

    V(λ) = [(1/n) ‖nλR (I + nλR)^{−1} Y‖²] / [(1/n) trace(nλR (I + nλR)^{−1})]².    (19)

We propose the classical Newton method to compute the minimum of V(λ). This yields the following iterations:

    λ_{k+1} = λ_k − V̇(λ_k)/V̈(λ_k),    (20)

where V̇ and V̈ are the first and second derivatives of V with respect to λ, respectively.

Let

    p = nλ,
    v = (pR + I)^{−1} Y,
    W = (pR + I)^{−1}.

Then the criterion V becomes

    V(p) = [(1/n) ‖pRv‖²] / [(1/n) trace(pRW)]².    (21)

Let

    N = (1/n) ‖pRv‖² = (p²/n) v′R′Rv,    (22)

    D = [(1/n) trace(pRW)]².    (23)

Differentiating the last two equations with respect to λ, we obtain

    dN/dλ = 2p v′R′R (I + p²RWR − pR) v,    (24)

and

    dD/dλ = (2/n) trace(pRW) [trace(RW) + trace(pR²W(pRW − I))].    (25)

Finally, the second derivatives of N and D are, respectively,

    d²N/dλ² = 2n v′R′R (I + S) v + 2pn [2 v′R′R (I + S) (dv/dp) + v′R′R (dS/dp) v],    (26)

    d²D/dλ² = 2 [trace(RW + pR²W(pRW − I))]²
              + 2 trace(pRW) [trace(R (dW/dp)) + trace(R²W(pRW − I))
              + trace(pR²(pRW − I)(dW/dp)) + trace(pR²W(RW + pR (dW/dp)))],    (27)

such that

    S = p²RWR − pR,    (28)

    dS/dp = 2pRWR + p²R (dW/dp) R − R,    (29)

    dW/dp = p(RW)² − RW,    (30)

    dv/dp = pRWRv − Rv.    (31)

Finally, the derivatives

    V̇ = (d/dλ)(N/D),    V̈ = (d²/dλ²)(N/D)

can be easily computed in terms of the first and second derivatives of N and D.

Remark 1. It is possible to use the last algorithm recursively if we take the values of the obtained spline as noisy data for another iteration. In this case the amount of noise in the data is reduced at each step by choosing a new smoothing parameter. The user could fix a priori a limited number of iterations according to the specified application and the time allowed to run the algorithm.

4. Connection with Adaptive Filtering

From (10), we have

    Ŷ = A(λ) Y,    (32)

where A(λ) = I − nλR (I + nλR)^{−1}. If we write A(λ) = (a_{i,j}(λ))_{1≤i,j≤n}, then

    ŷ_i = a_{n,1}(λ) y_{i−n+1} + a_{n,2}(λ) y_{i−n+2} + · · · + a_{n,n}(λ) y_i.    (33)

Let ŷ(z) and y(z) be the z-transforms of the discrete signals ŷ_i and y_i, respectively. Then by taking the z-transform of (33), we obtain

    ŷ(z)/y(z) = a_{n,1}(λ) z^{−n+1} + a_{n,2}(λ) z^{−n+2} + · · · + a_{n,n}(λ).    (34)

The resulting system (34) takes the form of an adaptive FIR filter whose coefficients (a_{n,i}(λ))_{1≤i≤n} are updated by computing a new λ at each iteration i ∈ Z≥n. The updating law in our case is based on the minimization of the generalized cross-validation criterion V(λ).

If we look attentively at the formulation of the generalized cross-validation criterion given by (19), we realize that this criterion is simply a weighted least-squares (LS) performance index. The LS part is given by the numerator term ‖nλR(I + nλR)^{−1} Y‖², which is exactly the error between the smoothed discrete samples and the noisy discrete data. The weighting parameter is given by the term (1/n)/[(1/n) trace(I − A(λ))]². Consequently, the filter (34) can be seen as a weighted least-squares (WLS) adaptive FIR filter.

The smoothing strategy given in this paper is related to the classical LMS (Least Mean Squares) adaptive filtering discussed in the signal processing literature. Although our method of updating the filter coefficients is not quite identical to the principle of LMS adaptive filtering, the philosophy of smoothing remains the same. To highlight this fact, let us recall the principle of LMS adaptive filtering. In such a filtering strategy, the time invariance of the filter coefficients is removed. This is done by allowing the filter to change its coefficients according to some prescribed optimization criterion. At each instant, the desired discrete samples ŷ_i are compared with the instantaneous filter output ỹ_i. On the basis of this measure, the adaptive filter changes its coefficients in an attempt to reduce the error. The coefficient update relation is a function of the error signal.

Fig. 1. Scheme of the LMS adaptive filter.

By comparison with the algorithm presented in this paper, the imposed signal ŷ_i is not known a priori, but its formulation in terms of the noisy samples and the smoothing parameter λ is known. The main advantage of the GCV-based filter is that the minimum of the GCV performance index is computed independently of the knowledge of the statistical properties of the noise. In addition, the information on the smoothing degree m is incorporated in the quadratic performance index (2), which makes the algorithm not only capable of filtering the discrete samples of the noisy signal but also capable of reliably reproducing the continuous higher derivatives of the signal considered.
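To make the connection concrete, the last row of A(λ) supplies the FIR taps of (33)–(34); the sketch below (names and numeric values ours) extracts them and applies them to one window of data.

```python
import numpy as np

def gcv_fir_taps(R, lam):
    # A(lam) = I - n*lam*R (I + n*lam*R)^(-1), Eq. (32); its last row holds
    # the coefficients a_{n,1}, ..., a_{n,n} of the FIR relation (33).
    n = R.shape[0]
    A = np.eye(n) - n * lam * R @ np.linalg.inv(np.eye(n) + n * lam * R)
    return A[-1, :]

n, m = 10, 2
H = np.diff(np.eye(n), m, axis=0)   # m-th difference matrix, rows [1, -2, 1]
R = H.T @ H
y_win = np.random.default_rng(0).standard_normal(n)  # stand-in window data
taps = gcv_fir_taps(R, lam=0.01)    # lam would come from the GCV update
y_hat_i = taps @ y_win              # newest smoothed sample, Eq. (33)
```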
5. Numerical Algorithm

Here, we summarize the regularization procedure in the following steps (a code sketch of these steps follows the list):

Step 1. Specify the desired spline of order k = 2m and construct the optimal knot sequence which corresponds to the breakpoints t_{i−n+1}, t_{i−n+2}, . . . , t_i. See (De Boor, 1978) for more details on optimal knot computing.

Step 2. Construct the B-spline basis functions that correspond to the optimal knots calculated in Step 1.

Step 3. Construct the matrices H, B, R, T, and Q.

Step 4. Compute the optimal value of the smoothing parameter λ using (23)–(27).

Step 5. Compute the spline vector α.

Step 6. Compute the derivatives of the spline using the formulae

    D^ℓ ( Σ_i α_i b_{i,k}(t) ) = Σ_i α_i^{ℓ+1} b_{i,k−ℓ}(t),

where D^ℓ is the ℓ-th derivative with respect to time, and

    α_r^{ℓ+1} := { α_r                                                 for ℓ = 0,
                 { (1/(k − ℓ)) (α_r^ℓ − α_{r−1}^ℓ)/(t_{r+k−ℓ} − t_r)    for ℓ > 0.    (35)

Step 7. In order to gradually reduce the amount of noise in the obtained smooth spline, the user has to repeat all the steps from the beginning by taking the values of the spline at (t_{i−n+2}, . . . , t_{i+1}) as noisy data for the next iteration.
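A compact end-to-end sketch of Steps 1–6 using SciPy's B-spline machinery might look as follows. Two simplifications are ours: the knots are clamped and uniform rather than De Boor's optimal knots, and λ is chosen by a grid scan of the GCV score instead of the Newton step; all function names are hypothetical.

```python
import numpy as np
from scipy.interpolate import BSpline

def gcv_score(lam, R, y):
    # Eq. (19), as in the Newton sketch of Section 3.
    n = len(y)
    M = n * lam * R @ np.linalg.inv(np.eye(n) + n * lam * R)
    return (np.sum((M @ y) ** 2) / n) / (np.trace(M) / n) ** 2

def smooth_and_differentiate(t, y, m=2):
    n, deg = len(y), 2 * m - 1                       # Step 1: order k = 2m
    knots = np.concatenate([np.full(deg + 1, t[0]),  # clamped uniform knots
                            np.linspace(t[0], t[-1], n - deg + 1)[1:-1],
                            np.full(deg + 1, t[-1])])
    B = np.zeros((n, n))                             # Step 2: basis matrix B
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        B[:, j] = BSpline(knots, c, deg)(t)
    H = np.diff(np.eye(n), m, axis=0)                # Step 3: H and R = H'H
    R = H.T @ H
    lams = np.logspace(-8, 0, 60)                    # Step 4: GCV grid scan
    lam = min(lams, key=lambda l: gcv_score(l, R, y))
    alpha = np.linalg.solve(n * lam * B.T @ R @ B + B.T @ B, B.T @ y)  # Step 5
    s = BSpline(knots, alpha, deg)                   # Step 6: differentiate
    return s, s.derivative(1), s.derivative(2)
```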
6. Simulations

In the following simulations we suppose that we measure the noisy signal

    y(t) = cos(30t) sin(t) + ε(t)    (36)

with the sampling period δ = 0.01 s. We assume that ε(t) is a norm-bounded noise of unknown variance. The exact first and second derivatives of the signal y are

    ẏ(t) = −30 sin(30t) sin(t) + cos(30t) cos(t),    (37)

    ÿ(t) = −901 cos(30t) sin(t) − 60 sin(30t) cos(t).    (38)
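This test setup is straightforward to reproduce; in the sketch below, the noise amplitude is an arbitrary choice of ours, since the paper only assumes a norm-bounded noise of unknown variance.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.01                                   # sampling period, 0.01 s
t = np.arange(0.0, 3.0, delta)                 # horizon shown in the figures
eps = 0.05 * (2.0 * rng.random(t.size) - 1.0)  # bounded noise, level assumed
y_noisy = np.cos(30 * t) * np.sin(t) + eps     # measured signal, Eq. (36)

# Exact derivatives for comparison, Eqs. (37) and (38):
dy  = -30 * np.sin(30 * t) * np.sin(t) + np.cos(30 * t) * np.cos(t)
d2y = -901 * np.cos(30 * t) * np.sin(t) - 60 * np.sin(30 * t) * np.cos(t)
```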
In Fig. 2 we show the noisy signal (36). In Fig. 3 we plot the exact signal (the signal without noise) with the

continuous-time spline function, the solution to the minimization problem (2). In the whole simulation the moving window of observation is supposed to be of constant length 10. In Figs. 4 and 5 we depict the exact derivatives of the original signal together with their estimates, given by differentiating the optimal continuous spline with respect to time. In Fig. 6, we compare the output of an LMS adaptive FIR filter of order 7 with the exact samples of the signal y(t). We see clearly the superiority of the GCV-based filter in the first instants of the filtering process in comparison with the transient behaviour of the adaptive FIR filter shown in Fig. 6.

Fig. 2. Noisy signal.

Fig. 3. Optimal spline vs. the exact signal.

Fig. 4. First derivative of the optimal spline vs. the exact one.

Fig. 5. Second derivative of the optimal spline and the exact second derivative of the signal.

Fig. 6. Output of the adaptive FIR filter vs. the exact signal.

7. Conclusion

In this paper we have presented a new numerical procedure for reliable filtering and high-order signal differentiation.

The design strategy consists in determining the continuous spline signal which minimizes a discrete functional formed as the sum of a least-squares criterion and a discrete smoothing term inspired by finite-difference schemes. The control of smoothing and the fidelity to the measurable data is ensured by the computation of one optimal regularization parameter that minimizes the generalized cross-validation criterion. The developed algorithm is able to estimate higher derivatives of a smooth signal simply by differentiating its basis functions with respect to time. Satisfactory simulation results were obtained which prove the efficiency of the developed algorithm.

References

Anderson R.S. and Bloomfield P. (1974): A time series approach to numerical differentiation. — Technom., Vol. 16, No. 1, pp. 69–75.

Barmish B.R. and Leitmann G. (1982): On ultimate boundedness control of uncertain systems in the absence of matching assumptions. — IEEE Trans. Automat. Contr., Vol. AC-27, No. 1, pp. 153–158.

Chen Y.H. (1990): State estimation for non-linear uncertain systems: A design based on properties related to the uncertainty bound. — Int. J. Contr., Vol. 52, No. 5, pp. 1131–1146.

Chen Y.H. and Leitmann G. (1987): Robustness of uncertain systems in the absence of matching assumptions. — Int. J. Contr., Vol. 45, No. 5, pp. 1527–1542.

Ciccarella G., Mora M.D. and Germani A. (1993): A Luenberger-like observer for nonlinear systems. — Int. J. Contr., Vol. 57, No. 3, pp. 537–556.

Craven P. and Wahba G. (1979): Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation. — Numer. Math., Vol. 31, No. 4, pp. 377–403.

Dawson D.M., Qu Z. and Caroll J.C. (1992): On the state observation and output feedback problems for nonlinear uncertain dynamic systems. — Syst. Contr. Lett., Vol. 18, No. 3, pp. 217–222.

De Boor C. (1978): A Practical Guide to Splines. — New York: Springer.

Diop S., Grizzle J.W., Morral P.E. and Stefanoupoulou A.G. (1993): Interpolation and numerical differentiation for observer design. — Proc. Amer. Contr. Conf., Evanston, IL, pp. 1329–1333.

Eubank R.L. (1988): Spline Smoothing and Nonparametric Regression. — New York: Marcel Dekker.

Gasser T., Müller H.G. and Mammitzsch V. (1985): Kernels for nonparametric curve estimation. — J. Roy. Statist. Soc., Vol. B47, pp. 238–252.

Gauthier J.P., Hammouri H. and Othman S. (1992): A simple observer for nonlinear systems: Application to bioreactors. — IEEE Trans. Automat. Contr., Vol. 37, No. 6, pp. 875–880.

Georgiev A.A. (1984): Kernel estimates of functions and their derivatives with applications. — Statist. Prob. Lett., Vol. 2, pp. 45–50.

Härdle W. (1984): Robust regression function estimation. — J. Multivar. Anal., Vol. 14, pp. 169–180.

Härdle W. (1985): On robust kernel estimation of derivatives of regression functions. — Scand. J. Statist., Vol. 12, pp. 233–240.

Ibrir S. (1999): Numerical algorithm for filtering and state observation. — Int. J. Appl. Math. Comp. Sci., Vol. 9, No. 4, pp. 855–869.

Ibrir S. (2000): Méthodes numériques pour la commande et l'observation des systèmes non linéaires. — Ph.D. thesis, Laboratoire des Signaux et Systèmes, Univ. Paris-Sud.

Ibrir S. (2001): New differentiators for control and observation applications. — Proc. Amer. Contr. Conf., Arlington, pp. 2522–2527.

Ibrir S. (2003): Algebraic Riccati equation based differentiation trackers. — AIAA J. Guid. Contr. Dynam., Vol. 26, No. 3, pp. 502–505.

Kalman R.E. (1960): A new approach to linear filtering and prediction problems. — Trans. ASME J. Basic Eng., Vol. 82, No. D, pp. 35–45.

Leitmann G. (1981): On the efficacy of nonlinear control in uncertain linear systems. — J. Dynam. Syst. Meas. Contr., Vol. 102, No. 2, pp. 95–102.

Luenberger D.G. (1971): An introduction to observers. — IEEE Trans. Automat. Contr., Vol. 16, No. 6, pp. 596–602.

Misawa E.A. and Hedrick J.K. (1989): Nonlinear observers: A state of the art survey. — J. Dyn. Syst. Meas. Contr., Vol. 111, No. 3, pp. 344–351.

Müller H.G. (1984): Smooth optimum kernel estimators of densities, regression curves and modes. — Ann. Statist., Vol. 12, pp. 766–774.

Rajamani R. (1998): Observers for Lipschitz nonlinear systems. — IEEE Trans. Automat. Contr., Vol. 43, No. 3, pp. 397–400.

Reinsch C.H. (1967): Smoothing by spline functions. — Numer. Math., Vol. 10, pp. 177–183.

Reinsch C.H. (1971): Smoothing by spline functions II. — Numer. Math., Vol. 16, No. 5, pp. 451–454.

Slotine J.J.E., Hedrick J.K. and Misawa E.A. (1987): On sliding observers for nonlinear systems. — J. Dynam. Syst. Meas. Contr., Vol. 109, No. 3, pp. 245–252.

Tornambè A. (1992): High-gain observers for nonlinear systems. — Int. J. Syst. Sci., Vol. 23, No. 9, pp. 1475–1489.

Xia X.-H. and Gao W.-B. (1989): Nonlinear observer design by observer error linearization. — SIAM J. Contr. Optim., Vol. 27, No. 1, pp. 199–216.

Received: 26 January 2004
Revised: 28 May 2004
