A Numerical Procedure for Filtering and Efficient High-Order Signal Differentiation

2, 201–208
In this paper, we propose a numerical algorithm for filtering and robust signal differentiation. The numerical procedure is based on the solution of a simplified linear optimization problem. A compromise between smoothing and fidelity with respect to the measurable data is achieved by computing an optimal regularization parameter that minimizes the Generalized Cross-Validation (GCV) criterion. Simulation results are given to highlight the effectiveness of the proposed procedure.
1. Introduction

The main difficulty that we face while designing differentiation observers, without any a priori knowledge of system dynamics, is noise filtering. For this reason, robust signal differentiation can be classified as an ill-posed problem due to the conflicting goals that we aim to realize. Generally, noise filtering, precision, and the peaking phenomenon are three contradictory performance requirements that characterize the robustness of any differentiation system.

The field of ill-posed problems has certainly been one of the fastest growing areas in signal processing and applied mathematics. This growth has largely been driven by the needs of applications both in other sciences and in industry. A problem is mathematically ill-posed if its solution does not exist, is not unique, or does not depend continuously on the data. A typical example is the combined interpolation and differentiation problem of noisy data. A problem therein is that there are infinitely many ways to determine the interpolated function values if only the constraint from the data is used. Additional constraints are needed to guarantee the uniqueness of the solution and make the problem well posed. An important constraint in this context is smoothness. By imposing a smoothness constraint, the analytic regularization method converts an ill-posed problem into a well-posed one. This has been used in solving numerous practical problems such as estimating higher derivatives of a signal through potentially noisy data.

As will be shown, inverse problems typically lead to mathematical models that are not well posed in Hadamard's sense, i.e., to ill-posed problems. Specifically, this means that their solutions are unstable under data perturbations. Numerical methods that can cope with this problem are the so-called regularization methods. These methods have been quite successfully used in the numerical analysis literature in approaches to ill-posed problems.

Spline functions are smooth piecewise functions. Since their introduction, splines have proved to be very popular in interpolation, smoothing and approximation, and in computational mathematics in general.

In this paper we present the steps of a new discrete-time algorithm which smooths signals from their uncertain discrete samples. The proposed algorithm does not require any knowledge of the statistics of the measurement uncertainties and is based on the minimization of a criterion equivalent to (1). The new discrete-time smoothing criterion is inspired by finite-difference schemes. In this algorithm the regularization parameter is obtained from the optimality condition of the Generalized Cross-Validation criterion as introduced earlier in (Craven and Wahba, 1979). We show that the smooth solution can be given as discrete samples or as a continuous-time spline function defined over the observation interval. Consequently, the regularized solution can be differentiated as many times as necessary to estimate smooth higher derivatives of the measured signal.

2. Problem Statement and Solution of the Optimization Problem

Here, we consider the problem of smoothing noisy data with possibly estimating the higher derivatives $\hat y^{(\mu)}(t_i)$, $\mu = 0, 1, \ldots, \nu - 1$, from discrete, potentially uncertain, samples $y_\ell = \bar y(t_\ell) + \epsilon(t_\ell)$, $\ell = i-n+1, \ldots, i$, measured with an error $\epsilon(t_\ell)$ at $n$ distinct instants, by minimizing the cost function

$$ J := \frac{1}{n} \sum_{\ell=i-n+1}^{i} \big(\hat y(t_\ell) - y(t_\ell)\big)^2 $$
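The penalized least-squares criterion above admits a closed-form minimizer once the smoothing term is written with the finite-difference matrix $H$ that appears in (2). A minimal numerical sketch, assuming $R = H'H$ and using our own function names (they are not from the paper):

```python
import numpy as np

def difference_matrix(n, m):
    """m-th order finite-difference operator H, of shape (n - m, n)."""
    H = np.eye(n)
    for _ in range(m):
        H = np.diff(H, axis=0)  # row-wise first difference
    return H

def smooth(y, lam, m=2):
    """Closed-form minimizer of (1/n)||y_hat - y||^2 + lam * ||H y_hat||^2.

    The optimality condition gives (I + n*lam*H'H) y_hat = y.
    """
    n = len(y)
    H = difference_matrix(n, m)
    return np.linalg.solve(np.eye(n) + n * lam * H.T @ H, y)
```

Because $H$ annihilates polynomials of degree less than $m$, a straight line passes through the filter unchanged when $m = 2$, whatever the value of the regularization parameter.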
$\hat y_{i-m}, \hat y_{i-m-1}, \ldots, \hat y_i$. Then the last cost function is written in the matrix form as

$$ J := \frac{1}{n}\|Y - \hat Y\|^2 + \lambda \|H \hat Y\|^2, \qquad (2) $$

where

The derivative formulae (3) come from the approximation of the $m$-th derivative of $\hat y$ by the following finite-difference scheme:

$$ \hat y_i^{(m)} = \frac{1}{(\Delta t)^m} \sum_{j=0}^{m+1} (-1)^{m+j} C_m^j\, \hat y_{i-m+j+1}. \qquad (4) $$

The optimum value of the control vector $\alpha$ is obtained via the optimality condition $dJ/d\alpha = 0$. Then we get

$$ -\frac{2}{n} B'(Y - B\alpha) + 2\lambda B' R\, B\alpha = 0, \qquad (7) $$

or

$$ \alpha = (n\lambda B' R B + B'B)^{-1} B' Y = (n\lambda R B + B)^{-1} Y. \qquad (8) $$

Consequently,

$$ Y - B\alpha = n\lambda R\, B (n\lambda B' R B + B' B)^{-1} B' Y. \qquad (9) $$

From (8), the continuous spline is fully determined. Hence the discrete samples of the regularized solution are computed from

$$ \hat Y = Y - n\lambda R\, B (n\lambda B' R B + B' B)^{-1} B' Y = \big(I - n\lambda R (I + n\lambda R)^{-1}\big) Y. \qquad (10) $$

As for the last equation, note that the discrete regularized samples are given as the output of an FIR filter whose coefficients are functions of the regularization parameter $\lambda$. The sensitivity of the solution to this parameter is quite important, so the next section is devoted to the optimal calculation of the regularization parameter through the cross-validation criterion.

3. Computing the Regularization Parameter

In this section we present details of a computational method for estimating the optimal regularization parameter in terms of the criterion matrices. We have seen that the spline vector $\alpha$ depends upon the smoothing parameter $\lambda$. In (Craven and Wahba, 1979), two ways of estimating the smoothing parameter $\lambda$ were given. The first method, called ordinary cross-validation (OCV), consists in finding the value of $\lambda$ that minimizes the OCV criterion

$$ R(\lambda) := \sum_{\ell=i-n+1}^{i} \big(\hat y(t_\ell) - y(t_\ell)\big)^2, \quad i = n, n+1, \ldots, \qquad (11) $$

where $\hat y(t)$ is a smooth polynomial of degree $2m - 1$. Reinsch (1967) suggests, roughly speaking, that if the

Let $A(\lambda)$ be the $n \times n$ matrix depending on $t_{i-n+1}, t_{i-n+2}, \ldots, t_i$ and $\lambda$ such that

$$ \begin{bmatrix} \hat y(t_{i-n+1}) \\ \vdots \\ \hat y(t_i) \end{bmatrix} = A(\lambda) \begin{bmatrix} y(t_{i-n+1}) \\ \vdots \\ y(t_i) \end{bmatrix}. \qquad (13) $$

The main result of (Craven and Wahba, 1979) shows that a good estimate of the smoothing parameter $\lambda$ (also called the generalized cross-validation parameter) is the minimizer of the GCV criterion

$$ V(\lambda) = \frac{\frac{1}{n}\big\|\big(I - A(\lambda)\big) Y\big\|^2}{\Big(\frac{1}{n}\,\mathrm{trace}\big(I - A(\lambda)\big)\Big)^2}. \qquad (14) $$

This estimate has the advantage of being free from any knowledge of the statistical properties of the noise. Further, once the minimizer of $V(\lambda)$ is obtained, estimates of higher derivatives of the function $y(t)$ can be obtained by differentiating the smooth function $\hat y(t)$.

Now we outline a computational method to determine the smoothing parameter which minimizes the cross-validation criterion $V(\lambda)$, where the polynomial smoothing spline $\hat y(t)$ is supposed to be a B-spline of degree $2m - 1$. Using the definition of $A(\lambda)$, we write

$$ Y - \hat Y = Y - A(\lambda) Y = \big(I - A(\lambda)\big) Y. \qquad (15) $$

From (7), we obtain

$$ Y - \hat Y = n\lambda R\, B \alpha. \qquad (16) $$

Substituting (8) in (16), we get

$$ Y - \hat Y = n\lambda R\, B (n\lambda B' R B + B' B)^{-1} B' Y = n\lambda R (I + n\lambda R)^{-1} Y. \qquad (17) $$

By comparison with (15), we deduce that

$$ I - A(\lambda) = n\lambda R (I + n\lambda R)^{-1}. \qquad (18) $$

The GCV criterion becomes

$$ V(\lambda) = \frac{\frac{1}{n}\big\| n\lambda R (I + n\lambda R)^{-1} Y \big\|^2}{\Big[\frac{1}{n}\,\mathrm{trace}\big(n\lambda R (I + n\lambda R)^{-1}\big)\Big]^2}. \qquad (19) $$

We propose the classical Newton method to compute the minimum of $V(\lambda)$. This yields the following iterations:

$$ \lambda_{k+1} = \lambda_k - \frac{\dot V(\lambda_k)}{\ddot V(\lambda_k)}, \qquad (20) $$
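The GCV criterion (14) can be prototyped directly from the identity (18). The sketch below uses a coarse grid scan in place of the paper's Newton refinement; the function names are ours, and $R$ is assumed to be the positive semidefinite matrix of the smoothing criterion:

```python
import numpy as np

def gcv_score(lam, y, R):
    """V(lam) from (14): (||(I - A)y||^2 / n) / (trace(I - A) / n)^2,
    using I - A(lam) = n*lam*R @ inv(I + n*lam*R) from (18)."""
    n = len(y)
    M = n * lam * R @ np.linalg.inv(np.eye(n) + n * lam * R)  # I - A(lam)
    return (np.sum((M @ y) ** 2) / n) / (np.trace(M) / n) ** 2

def best_lambda(y, R, grid):
    """Coarse substitute for the Newton iteration (20): scan candidate lambdas."""
    return min(grid, key=lambda lam: gcv_score(lam, y, R))
```

In practice a logarithmic grid gives a good starting point, which a Newton iteration on $V$ can then polish.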
$$ N = \frac{1}{n}\|p R v\|^2 = \frac{p^2}{n}\, v' R' R\, v, \qquad (22) $$

$$ D = \Big(\frac{1}{n}\,\mathrm{trace}(p R W)\Big)^2. \qquad (23) $$

Differentiating the last two equations with respect to $\lambda$, we obtain

$$ \frac{dN}{d\lambda} = 2 p\, v' R' R \big(I + p^2 R W R - p R\big) v, \qquad (24) $$

and

$$ \frac{dD}{d\lambda} = \frac{2}{n}\,\mathrm{trace}(p R W) \Big[\mathrm{trace}(R W) + \mathrm{trace}\big(p R^2 W (p R W - I)\big)\Big]. \qquad (25) $$

Finally, the second derivatives of $N$ and $D$ are respectively

$$ \frac{d^2 N}{d\lambda^2} = 2 n\, v' R' R (I + S) v + 2 p n \Big[ 2\, v' R' R (I + S) \frac{dv}{dp} + v' R' R\, \frac{dS}{dp}\, v \Big], \qquad (26) $$

$$ \frac{d^2 D}{d\lambda^2} = 2 \Big[\mathrm{trace}\big(R W + p R^2 W (p R W - I)\big)\Big]^2 + 2\,\mathrm{trace}(p R W) \Big[ \mathrm{trace}\Big(R \frac{dW}{dp}\Big) + \mathrm{trace}\big(R^2 W (p R W - I)\big) \cdots $$

The quantity $\ddot V = \dfrac{d^2}{d\lambda^2}\,\dfrac{N}{D}$ can then be easily computed in terms of the first and second derivatives of $N$ and $D$.

Remark 1. It is possible to use the last algorithm recursively if we take the values of the obtained spline as noisy data for another iteration. In this case the amount of noise in the data is reduced in each step by choosing a new smoothing parameter. The user could fix a priori a limited number of iterations according to the specified application and the time allowed to run the algorithm.

4. Connection with Adaptive Filtering

From (10), we have

$$ \hat Y = A(\lambda)\, Y, \qquad (32) $$

where $A(\lambda) = I - n\lambda R (I + n\lambda R)^{-1}$. If we write $A(\lambda) = (a_{i,j}(\lambda))_{1 \le i,j \le n}$, then

$$ \hat y_i = a_{n,1}(\lambda)\, y_{i-n+1} + a_{n,2}(\lambda)\, y_{i-n+2} + \cdots + a_{n,n}(\lambda)\, y_i. \qquad (33) $$

Let $\hat y(z)$ and $y(z)$ be the $z$-transforms of the discrete signals $\hat y_i$ and $y_i$, respectively. Then, taking the $z$-transform of (33), we obtain

$$ \frac{\hat y(z)}{y(z)} = a_{n,1}(\lambda)\, z^{-n+1} + a_{n,2}(\lambda)\, z^{-n+2} + \cdots + a_{n,n}(\lambda). \qquad (34) $$
Fig. 1. Scheme of the LMS adaptive filter.

By comparison with the algorithm presented in this paper, the imposed signal $\hat y_i$ is not known a priori, but its formulation in terms of the noisy samples and the smoothing parameter $\lambda$ is known. The main advantage of the GCV-based filter is that the minimum of the GCV performance index is computed independently of any knowledge of the statistical properties of the noise. In addition, the information on the smoothing degree $m$ is incorporated in the quadratic performance index (2), which makes the algorithm capable not only of filtering the discrete samples of the noisy signal but also of reliably reproducing the continuous higher derivatives of the signal considered.

6. Simulations

In the following simulations we suppose that we measure the noisy signal

$$ y(t) = \cos(30 t) \sin(t) + \epsilon(t) \qquad (36) $$

for each $\delta = 0.01$ s. We assume that $\epsilon(t)$ is a norm-bounded noise of unknown variance. The exact first two derivatives of the signal $y$ are

$$ \dot y(t) = -30 \sin(30 t) \sin(t) + \cos(30 t) \cos(t), \qquad (37) $$

$$ \ddot y(t) = -901 \cos(30 t) \sin(t) - 60 \sin(30 t) \cos(t). \qquad (38) $$

In Fig. 2 we show the noisy signal (36). In Fig. 3 we plot the exact signal (signal without noise) with the
continuous-time spline function, the solution to the minimization problem (2). In the whole simulation the moving window of observation is supposed to be constant, of length 10. In Figs. 4 and 5 we depict the exact derivatives of the original signal with their estimated values given by the differentiation of the optimal continuous spline with respect to time.

Fig. 2. Noisy signal.

Fig. 3. Optimal spline vs. the exact signal.

Fig. 4. First derivative of the optimal spline vs. the exact one.

Fig. 5. Second derivative of the optimal spline and the exact second derivative of the signal.

Fig. 6. Output of the adaptive FIR filter vs. the exact signal.

7. Conclusion

In this paper we have presented a new numerical procedure for reliable filtering and high-order signal differentiation. The design strategy consists in determining the continuous spline signal which minimizes the discrete functional given by the sum of a least-squares criterion and a discrete smoothing term inspired by finite-difference schemes. The control of smoothing and the fidelity to the measurable data is ensured by the computation of one optimal regularization parameter that minimizes the generalized cross-validation criterion. The developed algorithm is able to estimate higher derivatives of a smooth signal only by differentiating its basis functions with respect to time. Satisfactory simulation results were obtained which prove the efficiency of the developed algorithm.

S. Ibrir and S. Diop

References

Anderson R.S. and Bloomfield P. (1974): A time series approach to numerical differentiation. — Technom., Vol. 16, No. 1, pp. 69–75.

Barmish B.R. and Leitmann G. (1982): On ultimate boundedness control of uncertain systems in the absence of matching assumptions. — IEEE Trans. Automat. Contr., Vol. AC-27, No. 1, pp. 153–158.

Chen Y.H. (1990): State estimation for non-linear uncertain systems: A design based on properties related to the uncertainty bound. — Int. J. Contr., Vol. 52, No. 5, pp. 1131–1146.

Chen Y.H. and Leitmann G. (1987): Robustness of uncertain systems in the absence of matching assumptions. — Int. J. Contr., Vol. 45, No. 5, pp. 1527–1542.

Ciccarella G., Mora M.D. and Germani A. (1993): A Luenberger-like observer for nonlinear systems. — Int. J. Contr., Vol. 57, No. 3, pp. 537–556.

Craven P. and Wahba G. (1979): Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation. — Numer. Math., Vol. 31, No. 4, pp. 377–403.

Dawson D.M., Qu Z. and Caroll J.C. (1992): On the state observation and output feedback problems for nonlinear uncertain dynamic systems. — Syst. Contr. Lett., Vol. 18, No. 3, pp. 217–222.

De Boor C. (1978): A Practical Guide to Splines. — New York: Springer.

Diop S., Grizzle J.W., Morral P.E. and Stefanopoulou A.G. (1993): Interpolation and numerical differentiation for observer design. — Proc. Amer. Contr. Conf., Evanston, IL, pp. 1329–1333.

Eubank R.L. (1988): Spline Smoothing and Nonparametric Regression. — New York: Marcel Dekker.

Gasser T., Müller H.G. and Mammitzsch V. (1985): Kernels for nonparametric curve estimation. — J. Roy. Statist. Soc., Vol. B47, pp. 238–252.

Gauthier J.P., Hammouri H. and Othman S. (1992): A simple observer for nonlinear systems: Application to bioreactors. — IEEE Trans. Automat. Contr., Vol. 37, No. 6, pp. 875–880.

Georgiev A.A. (1984): Kernel estimates of functions and their derivatives with applications. — Statist. Prob. Lett., Vol. 2, pp. 45–50.

Härdle W. (1984): Robust regression function estimation. — J. Multivar. Anal., Vol. 14, pp. 169–180.

Härdle W. (1985): On robust kernel estimation of derivatives of regression functions. — Scand. J. Statist., Vol. 12, pp. 233–240.

Ibrir S. (1999): Numerical algorithm for filtering and state observation. — Int. J. Appl. Math. Comp. Sci., Vol. 9, No. 4, pp. 855–869.

Ibrir S. (2000): Méthodes numériques pour la commande et l'observation des systèmes non linéaires. — Ph.D. thesis, Laboratoire des Signaux et Systèmes, Univ. Paris-Sud.

Ibrir S. (2001): New differentiators for control and observation applications. — Proc. Amer. Contr. Conf., Arlington, pp. 2522–2527.

Ibrir S. (2003): Algebraic Riccati equation based differentiation trackers. — AIAA J. Guid. Contr. Dynam., Vol. 26, No. 3, pp. 502–505.

Kalman R.E. (1960): A new approach to linear filtering and prediction problems. — Trans. ASME J. Basic Eng., Vol. 82, No. D, pp. 35–45.

Leitmann G. (1981): On the efficacy of nonlinear control in uncertain linear systems. — J. Dynam. Syst. Meas. Contr., Vol. 102, No. 2, pp. 95–102.

Luenberger D.G. (1971): An introduction to observers. — IEEE Trans. Automat. Contr., Vol. 16, No. 6, pp. 596–602.

Misawa E.A. and Hedrick J.K. (1989): Nonlinear observers. A state of the art survey. — J. Dyn. Syst. Meas. Contr., Vol. 111, No. 3, pp. 344–351.

Müller H.G. (1984): Smooth optimum kernel estimators of densities, regression curves and modes. — Ann. Statist., Vol. 12, pp. 766–774.

Rajamani R. (1998): Observers for Lipschitz nonlinear systems. — IEEE Trans. Automat. Contr., Vol. 43, No. 3, pp. 397–400.

Reinsch C.H. (1967): Smoothing by spline functions. — Numer. Math., Vol. 10, pp. 177–183.

Reinsch C.H. (1971): Smoothing by spline functions II. — Numer. Math., Vol. 16, No. 5, pp. 451–454.

Slotine J.J.E., Hedrick J.K. and Misawa E.A. (1987): On sliding observers for nonlinear systems. — J. Dynam. Syst. Meas. Contr., Vol. 109, No. 3, pp. 245–252.

Tornambè A. (1992): High-gain observers for nonlinear systems. — Int. J. Syst. Sci., Vol. 23, No. 9, pp. 1475–1489.

Xia X.-H. and Gao W.-B. (1989): Nonlinear observer design by observer error linearization. — SIAM J. Contr. Optim., Vol. 27, No. 1, pp. 199–216.

Received: 26 January 2004
Revised: 28 May 2004