A Steady-State Kalman Predictor-Based Filtering Strategy for Non-Overlapping Sub-Band Spectral Estimation

Sensors 2015, 15; doi:10.3390/s150100110; ISSN 1424-8220; www.mdpi.com/journal/sensors

Article
Abstract: This paper focuses on suppressing spectral overlap for sub-band spectral
estimation, with which we can greatly decrease the computational complexity of existing
spectral estimation algorithms, such as nonlinear least squares spectral analysis and
non-quadratic regularized sparse representation. Firstly, our study shows that the nominal
ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when
filtering a finite-length sequence, because many meaningless zeros are used as samples
in convolution operations. Next, an extrapolation-based filtering strategy is proposed
to produce a series of estimates as the substitutions of the zeros and to recover the
suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a
linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are
applied to demonstrate the effectiveness of the proposed strategy.
Keywords: AR model; equiripple FIR filter; linear prediction; spectral estimation; spectral
overlap; sub-band decomposition
1. Introduction
As one of the most important tools, spectral estimation [1] has been extensively applied in radar, sonar
and control systems, in the economics, meteorology and astronomy fields, speech, audio, seismic and
biomedical signal processing, and so on. In particular, sparse representation [2–4] opens an exciting new
vision for spectral analysis. However, such methods are usually accompanied by high computational
complexity, which makes their availability somewhat limited.
Sub-band decomposition-based spectral estimation (SDSE) [5] is an important research direction
in spectral estimation, because it has several advantageous features, e.g., computational complexity
decrease, model order reduction, spectral density whiteness, reduction of linear prediction error for
autoregressive (AR) estimation and the increment of both frequency spacing and local signal-to-noise
ratio (SNR) [6]. These features have been theoretically demonstrated under the hypothesis of the
ideal infinitely-sharp bandpass filter bank [7]. Subsequent studies [8–10] indicate that these benefits
aid complex frequency estimation in sub-bands, thereby enabling better estimation performance than
that achieved in full-band. In addition, the computational complexity of most algorithms for spectral
analysis has a superlinear relationship with the data size, and sub-band decomposition can considerably
speed up these algorithms. Independently handling each sub-band enables parallel processing, which
can further improve the computational efficiency. Both advantages are crucial for reducing the
computational burden, especially when analyzing multi-dimensional big data, such as polarimetric
and/or interferometric synthetic aperture radar images of large scenes.
Unfortunately, the ideal infinitely-sharp bandpass filter cannot be physically realized, and non-ideal
(realizable) filters introduce energy leakage and/or frequency aliasing phenomena [11]. Due to
these non-ideal frequency characteristics of analysis filters, spectral overlap between any two
contiguous sub-bands occurs during the sub-band decomposition. Then, the performance of SDSE
severely degrades.
In the relevant literature, several methods have been proposed to mitigate spectral overlap. We
classify these methods into three categories. The first category is defined as ideal frequency domain
filtering with a strict box-like spectrum, such as ideal Hilbert transform-based half-band filters [9]
and harmonic wavelet transform-based filters [12,13]. Theoretically, sub-band decomposition with
these filters is immune to spectral overlap. However, discrete Fourier transform will inevitably induce
spectral energy leakage, which can likewise distort sub-band decomposition. The second category is
known as convolution filtering with wavelet packet filters [8], Kaiser window-based prototype cosine
modulated filters, discrete cosine transform (DCT) IV filters [10] and Comb filters [6,14]. It seems that
increasing the filter order can improve the filtering performance and also the spectral overlap suppression
capability. However, in the context of involving a finite-length sequence and performing convolution
filtering, the nominal improvement of performance will lead to spectral energy leakage and inferior
filtering accuracy [10]. Considering the compromise between suppressing spectral overlap and reducing
spectral energy leakage, we have to restrict the filter order. The third category is frequency-selective
filtering, and a representative method is SELF-SVD (singular value decomposition-based method in a
selected frequency band) [15]. Essentially, SELF-SVD attempts to attenuate the interferences of the
out-of-band components by the post-multiplication with an orthogonal projection matrix. Unfortunately,
the attenuation is often insufficient when the out-of-band components are much stronger than the in-band
components or the SNR is relatively low. In this case, the estimation of the in-band frequencies is
seriously affected.
In this paper, a new filtering strategy is proposed to suppress spectral overlap for sub-band spectral
estimation. First, we discuss the formation mechanism of spectral overlap. Nominally, a high-order finite
impulse response (FIR) filter usually has a powerful ability in spectral overlap suppression. However,
once we apply such a filter to a finite-length sequence with the convolution operation, the non-given
samples at the sampling instants before and after the sequence are assumed to be zeros. A certain
filtering error therefore occurs and disrupts the decomposed sub-bands. As a result, sub-band
spectral analysis severely suffers from the mutual overlap of adjacent sub-band spectra. Second, we
propose a filtering strategy to eliminate the filtering error and recover the suppression ability. This
strategy intuitively takes the place of the artificial zeros with some extrapolated samples. Toward the
problem of data extrapolation, many algorithms have been proposed based on various theories, such
as linear prediction [16], Gerchberg–Papoulis [17], Slepian series [18], linear canonical transform [19]
and sparse representation [20]. To establish an efficient method for the extrapolation in context and
to evaluate the effectiveness of the proposed strategy, we preliminarily develop a linearly-optimal
extrapolation based on the classical AR model identification and the Kalman prediction [21–23]. Third,
we derive the formulas to estimate the residual filtering error and adapt two common information criteria
with adaptive penalty terms for AR order determination. Moreover, equiripple FIR filters are applied as
analysis filters in coordination with the proposed filtering, because of their advantageous features [11].
Finally, the entire algorithm and the computational complexity are summarized. Some details, such as
the sub-band spectrum mosaicking procedure and parameter selection, are discussed in practice.
The remainder of the paper is organized as follows. In Section 2, the formation mechanism of spectral
overlap is discussed. Based on this, a steady-state Kalman predictor-based filtering strategy is developed
to suppress the overlapped spectra. In Section 3, the proposed filtering strategy is discussed for SDSE.
In Section 4, experimental results with several typical algorithms for spectral analysis demonstrate the
effectiveness of the proposed strategy. Finally, Section 5 concludes this paper.
2. Signal Filtering Based on AR Model Identification and Kalman Prediction
This section focuses on signal filtering. To reduce the filtering error induced by convolution filtering,
we propose an extrapolation-based filtering strategy and apply a steady-state Kalman predictor for
extrapolation. Two criteria with adaptive penalty terms for order determination are developed based
on the estimation of the residual filtering error.
2.1. Problem Statement of Signal Filtering
FIR filters are typical linear time-invariant (LTI) systems. According to the linear system theory,
the filter can be mathematically expressed as the convolution of its impulse response with the input.
Suppose that {x_n} is an input sequence and {h_n} is the impulse response of a causal FIR filter; the
filtered sequence {y_n} can be derived as [11]:

y_n = h_n * x_n = \sum_{k=0}^{N_f - 1} h_k x_{n-k}   (1)
where * denotes the convolution operator and N_f is the filter length (i.e., the length of the impulse
response; the relationship between N_f and the filter order N_o can be written as N_f = N_o + 1).
Alternatively, taking the discrete-time Fourier transform (DTFT), we can represent Equation (1) in the
frequency domain as:

Y(e^{j\omega}) = H(e^{j\omega}) X(e^{j\omega})   (2)

In addition, the filtered sequence length L, the input sequence length N and the filter length
N_f satisfy:

L = N + N_f - 1 = N + N_o   (3)
Theoretically, given a large enough stop-band attenuation, spectral overlap can be thoroughly
suppressed. Moreover, the spectral estimation error in sub-bands can be neglected, as long as the width
of the transition band and the ripple of the passband are sufficiently small. Nonetheless, the pursuit of
excellent filtering performance substantially increases both the filter order and the length of the filtered
sequence (refer to Equation (3)). Such a high order makes it more likely that part, or even all, of
the filtered samples are erroneous. This result runs contrary to our original objective, and the resultant filtering quality
is undesirable.
From the perspective of a discrete-time system, the output sequence of the convolution operation is
equivalent to the zero-state response of the filter system, because the initial state of every delay cell is
zero prior to the excitation of the input sequence. We take the example of the direct-type FIR system [24].
The value of the output sample at any time depends on all or part of the input samples and the system
state at that time. The first N_f - 1 output samples suffer from biases, because a part of the delay cells
have not yet entered input-driven states; analogously, the last N_f - 1 output samples are invalid, because
a part of the delay cells have already returned to their initial zero-states. Thus, the length of the valid
part of the output sequence, defined as L_v, satisfies:

L_v = L - 2(N_f - 1) = N - N_f + 1 = N - N_o   (4)
Actually, if we rewrite Equation (1) in the following matrix form:
\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{L-1} \end{bmatrix}
=
\begin{bmatrix}
x_0     & 0       & \cdots & 0       \\
x_1     & x_0     & \ddots & \vdots  \\
\vdots  & x_1     & \ddots & 0       \\
x_{N-1} & \vdots  & \ddots & x_0     \\
0       & x_{N-1} & \ddots & x_1     \\
\vdots  & \ddots  & \ddots & \vdots  \\
0       & \cdots  & 0      & x_{N-1}
\end{bmatrix}
\begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_{N_o} \end{bmatrix},
\quad \text{i.e.,}\quad \mathbf{y} = \mathbf{X}\mathbf{h}   (5)
then we can find that the matrix X possesses many zero elements, which makes the outputs
y_0, y_1, ..., y_{N_o-1}; y_{L-N_o}, y_{L-N_o+1}, ..., y_{L-1} potentially invalid. For example, y_0 = x_0 h_0, while the ideal
output should be y_0 = x_0 h_0 + x_{-1} h_1 + x_{-2} h_2 + ... + x_{-N_o} h_{N_o}. This means that the unknown
samples x_{-N_o}, x_{-N_o+1}, ..., x_{-1} are assumed to be zeros. The filtering error of y_0 is therefore
x_{-1} h_1 + x_{-2} h_2 + ... + x_{-N_o} h_{N_o}. Likewise, the outputs y_1, y_2, ..., y_{N_o-1}; y_{L-N_o}, y_{L-N_o+1}, ..., y_{L-1}
all suffer from errors under the zero assumption. Thus, we can conclude that the meaningless zeros are
the error sources of the filtering.
Referring to Equation (4), we note that, if the filter order is not less than the length of the input
sequence, all of the output samples are invalid. Thus, improving the filtering performance by
increasing the filter order without limit is meaningless.
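To make the zero-padding effect concrete, the following NumPy sketch (an illustration written for this discussion, not code from the original paper; the signal, filter and lengths are arbitrary assumptions) convolves a short complex exponential with a linear-phase FIR filter and counts how many output samples differ from the ideal output obtained when the true neighboring samples are available.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not taken from the paper).
N = 128                      # input length
No = 40                      # filter order, so Nf = No + 1 taps
n = np.arange(-No, N + No)   # extended time axis with true samples on both sides

# A complex exponential defined on the extended axis, so the "true" neighbors exist.
x_ext = np.exp(1j * 0.3 * np.pi * n)
x = x_ext[No:No + N]         # the finite-length sequence actually observed

h = np.ones(No + 1) / (No + 1)   # simple moving-average FIR filter (length Nf)

# Convolution of the finite sequence: the non-given samples are implicitly zeros.
y_zero_padded = np.convolve(x, h)              # length L = N + No

# Ideal output: convolve the extended sequence and keep the same L output samples.
y_ideal = np.convolve(x_ext, h)[No:No + N + No]

err = np.abs(y_zero_padded - y_ideal)
print("corrupted samples:", np.sum(err > 1e-12))   # expected: 2 * No edge samples
print("valid samples    :", np.sum(err <= 1e-12))  # expected: N - No, cf. Equation (4)
```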
In the next subsection, we will identify an efficient way to resolve this problem.
2.2. Filtering Procedure Based on Signal Extrapolation
The desired output of the filtering process should have two characteristics:

- the original and filtered sequences should be of equal length;
- during the filtering process, the states of the delay cells in the filter system should always remain input-driven, i.e., there should be no artificial zeros, but only authentic samples in X.
As shown in Equation (5), the convolution filtering assumes the unknown samples
x_{-N_o}, x_{-N_o+1}, ..., x_{-1}; x_N, x_{N+1}, ..., x_{N+N_o-1} to be zeros, which leads to the filtering error. Thus,
an intuitive idea is to extrapolate the sequence {x_n}_{n=0}^{N-1} along both sides to provide a series of
estimates for the unknown samples. Replacing the zeros in the matrix X with these estimates
can mitigate the filtering error. The input sequence is extrapolated along both sides, yielding two
extrapolated sequences, called Part A and Part B (see Figure 1). Suppose that L_A and L_B are the lengths
of Part A and Part B, respectively; then, those L_A + L_B extrapolated samples are used to replace the zeros
in X. According to Equation (3), the length of the associated output sequence is L_A + L_B + N + N_o.
From Equation (4), the effective length of the output is L_A + L_B + N - N_o. To satisfy the
requirement that the original and filtered sequences are of equal length, the extrapolated length can be
derived as:

L_A + L_B + N - N_o = N \;\Rightarrow\; L_A + L_B = N_o   (6)
Figure 1. The extrapolated sequence: the original sequence with the extrapolated segments Part A and Part B appended along its two sides (amplitude x(n) versus index n).
Because the linear-phase FIR filter has a group delay of L_G samples, the filtered sample aligned with the
input x_n appears L_G sampling periods later; the extrapolated samples before time 0, whose number is
N_o - L_G, merely serve as a training sequence for the system state. Thus, we can obtain the relationships:

L_A = N_o - L_G, \qquad L_B = L_G   (7)
Let \tilde{x}_n and \tilde{y}_n denote the extrapolated sequence and the associated filtered result, respectively. Then,
they satisfy:

\tilde{x}_n : L_G - N_o \le n \le N + L_G - 1, \qquad h_n : 0 \le n \le N_o, \qquad \tilde{y}_n : L_G \le n \le L_G + N - 1   (8)

\tilde{x}_n = x_n \quad (0 \le n \le N - 1)   (9)

\tilde{\mathbf{y}} = \tilde{\mathbf{X}} \mathbf{h}   (10)

\tilde{\mathbf{y}} = \left[ \tilde{y}_{L_G}, \tilde{y}_{L_G+1}, \ldots, \tilde{y}_{L_G+N-1} \right]^T \in \mathbb{C}^{N \times 1}   (11)
where:

\tilde{\mathbf{X}} =
\begin{bmatrix}
\tilde{x}_{L_G}     & \tilde{x}_{L_G-1}   & \cdots & \tilde{x}_{L_G-N_o}   \\
\tilde{x}_{L_G+1}   & \tilde{x}_{L_G}     & \cdots & \tilde{x}_{L_G-N_o+1} \\
\vdots              & \vdots              & \ddots & \vdots                \\
\tilde{x}_{L_G+N-1} & \tilde{x}_{L_G+N-2} & \cdots & \tilde{x}_{L_G+N-1-N_o}
\end{bmatrix} \in \mathbb{C}^{N \times N_f}   (12)

and \mathbf{h} = [h_0, h_1, \ldots, h_{N_o}]^T.

To produce the extrapolated samples, the sequence \{x_n\} is modeled as a stationary AR(p) process driven by white noise:

\varphi(q^{-1})\, x_n = \varepsilon_n, \quad n = 0, 1, \ldots, N - 1, \qquad \varphi(q^{-1}) = \sum_{l=0}^{p} \varphi_l q^{-l}   (13)
where q^{-1} denotes the unit delay, p is the model order, \varphi_0, \varphi_1, \ldots, \varphi_p denote the coefficients of the model
and \varphi_0 = 1. The sequence \{\varepsilon_n\}_{n=-\infty}^{\infty} is a white noise process, which satisfies:

E(\varepsilon_n) = 0 \;\; \forall n, \qquad E\left(|\varepsilon_n|^2\right) = \sigma_{\varepsilon}^2 \;\; \forall n, \qquad E\left(\varepsilon_n \varepsilon_{n'}^{*}\right) = 0 \;\; (n \neq n')   (14)
Classical information criteria for determining the order p sometimes suffer from overfitting. An alternative
method of order determination will be discussed in Section 2.4.
A linearly-optimal prediction for AR sequences is derived in [21–23] under the minimum mean square
error (MMSE) criterion. However, the prediction formula involves a polynomial long division and a
coefficient polynomial recursion [23], making the calculation of the prediction somewhat inconvenient.
Alternatively, the following steady-state Kalman predictor [27] provides a prediction equivalent to the
MMSE predictor, while offering a simpler formula that facilitates the computation.
The AR model is regarded as a dynamic system. A specific state-space representation for a univariate
AR(p) process can be written as [25]:
\xi_{n+1} = \mathbf{F} \xi_n + \Gamma \varepsilon_n, \qquad x_n = \mathbf{H} \xi_n + \varepsilon_n   (15)

where:

\Gamma = \left[ -\varphi_1, -\varphi_2, \ldots, -\varphi_{p-1}, -\varphi_p \right]^T \in \mathbb{C}^{p \times 1}   (16)

\mathbf{F} =
\begin{bmatrix}
-\varphi_1     & 1      & 0      & \cdots & 0 \\
-\varphi_2     & 0      & 1      & \cdots & 0 \\
\vdots         & \vdots & \vdots & \ddots & \vdots \\
-\varphi_{p-1} & 0      & 0      & \cdots & 1 \\
-\varphi_p     & 0      & 0      & \cdots & 0
\end{bmatrix} \in \mathbb{C}^{p \times p}   (17)

and:

\mathbf{H} = [1, 0, \ldots, 0] \in \mathbb{R}^{1 \times p}   (18)
The coefficient polynomials of x_n and \varepsilon_n are \varphi(q^{-1}) and one, respectively. Since they are relatively
prime (coprime) polynomials, i.e., the transfer function is irreducible, the system of the AR model is a
jointly controllable and observable discrete linear stochastic system [28]. Thus, there exists a steady-state
Kalman predictor:

\hat{\xi}_{n+1|n} = \mathbf{F} \hat{\xi}_{n|n-1} + \mathbf{K} e_n, \qquad x_n = \mathbf{H} \hat{\xi}_{n|n-1} + e_n   (19)
Since both \varepsilon_n and e_n are the innovation processes of x_n, they are equal [27]:

e_n = \varepsilon_n   (20)
By comparing Equation (15) with Equation (19), we have:

\hat{\xi}_{n|n-1} = \xi_n, \qquad \mathbf{K} = \Gamma   (21)

and the one-step prediction follows as:

\hat{x}_{n+1|n} = \left[ 1 - \varphi(q^{-1}) \right] x_{n+1} = -\sum_{l=1}^{p} \varphi_l x_{n+1-l}   (22)
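As a quick sanity check of the state-space form in Equations (15)–(18), the following NumPy sketch (an illustration, not code from the paper; the AR coefficients and sequence length are arbitrary assumptions) builds F, Γ and H from a set of AR coefficients and verifies that simulating the state-space system reproduces the AR recursion x_n = −Σ_{l=1}^{p} φ_l x_{n−l} + ε_n.

```python
import numpy as np

def ar_state_space(phi):
    """Build F, Gamma, H of Equations (15)-(18) from AR coefficients phi_1..phi_p."""
    p = len(phi)
    F = np.zeros((p, p), dtype=complex)
    F[:, 0] = -np.asarray(phi)          # first column: -phi_1, ..., -phi_p
    F[:-1, 1:] = np.eye(p - 1)          # shifted identity above the last row
    Gamma = -np.asarray(phi, dtype=complex).reshape(p, 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0
    return F, Gamma, H

rng = np.random.default_rng(0)
phi = [-1.2, 0.5]                        # assumed AR(2) coefficients (stable example)
F, Gamma, H = ar_state_space(phi)

eps = rng.standard_normal(500)           # white driving noise
# Simulate the state-space system (15).
xi = np.zeros((2, 1), dtype=complex)
x_ss = np.zeros(500, dtype=complex)
for n in range(500):
    x_ss[n] = (H @ xi).item() + eps[n]
    xi = F @ xi + Gamma * eps[n]

# Simulate the AR recursion directly: x_n = -phi_1 x_{n-1} - phi_2 x_{n-2} + eps_n.
x_ar = np.zeros(500, dtype=complex)
for n in range(500):
    x_ar[n] = -sum(phi[l] * x_ar[n - 1 - l] for l in range(2) if n - 1 - l >= 0) + eps[n]

print(np.allclose(x_ss, x_ar))           # True: both formulations coincide
```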
The k-step prediction coefficients can be obtained directly from \mathbf{F} and \Gamma:

\mathbf{F}^{k-1} \Gamma = \left[ g_{k0}, g_{k1}, \ldots, g_{k,p-1} \right]^T   (23)

G_k(q^{-1}) = g_{k0} + g_{k1} q^{-1} + \cdots + g_{k,p-1} q^{-(p-1)}   (24)

\varphi(q^{-1})\, \hat{x}_{n+k|n} = G_k(q^{-1})\, \varepsilon_n   (25)

\hat{x}_{n+k|n} = G_k(q^{-1})\, x_n = \sum_{l=0}^{p-1} g_{kl} x_{n-l}   (26)

\hat{x}_{n-k|n} = \sum_{l=0}^{p-1} g_{kl}^{*} x_{n+l}   (27)

where the superscript * denotes the complex conjugate operator. Equation (26) gives the forward
extrapolation and Equation (27) the backward extrapolation. To guarantee reasonable and effective
extrapolations, the step-size k should satisfy:

\begin{cases}
1 \le k \le N_o - L_G, & \text{backward extrapolation (Part A), so that } L_G - N_o \le -k \le -1 \\
1 \le k \le L_G, & \text{forward extrapolation (Part B), so that } N \le N - 1 + k \le N + L_G - 1
\end{cases}   (28)
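The following NumPy sketch (an illustration under assumed parameter values, not code from the original paper) implements the two-sided extrapolation of Equations (23)–(27) for a given set of AR coefficients and then performs the extrapolation-based filtering of Equation (10): the extended sequence is convolved with the FIR filter and only the N outputs indexed L_G, ..., L_G + N − 1 are retained.

```python
import numpy as np

def prediction_coeffs(phi, k):
    """g_{k,0..p-1} of Equations (23)-(24): F^{k-1} Gamma for the AR model phi."""
    p = len(phi)
    F = np.zeros((p, p), dtype=complex)
    F[:, 0] = -np.asarray(phi)
    F[:-1, 1:] = np.eye(p - 1)
    Gamma = -np.asarray(phi, dtype=complex)
    return np.linalg.matrix_power(F, k - 1) @ Gamma        # shape (p,)

def extrapolation_filter(x, h, phi, L_G):
    """Extrapolation-based filtering (Equations (8)-(10), (26)-(27)).

    x   : observed sequence of length N
    h   : FIR impulse response of length Nf = No + 1
    phi : AR coefficients phi_1..phi_p identified from x
    L_G : group delay of the filter (No // 2 for a linear-phase FIR filter)
    """
    N, No, p = len(x), len(h) - 1, len(phi)
    # Part B: forward extrapolation x_hat(N-1+k | N-1), k = 1..L_G  (Equation (26)).
    part_b = [prediction_coeffs(phi, k) @ x[N - 1:N - 1 - p:-1] for k in range(1, L_G + 1)]
    # Part A: backward extrapolation x_hat(-k | 0), k = 1..No-L_G  (Equation (27)).
    part_a = [np.conj(prediction_coeffs(phi, k)) @ x[0:p] for k in range(1, No - L_G + 1)]
    x_ext = np.concatenate([part_a[::-1], x, part_b])       # indices L_G-No .. N+L_G-1
    y_ext = np.convolve(x_ext, h)                            # zero-state convolution
    # Keep the N outputs aligned with x_0..x_{N-1}; the extrapolated samples drive the
    # delay cells, so these outputs are (approximately) free of the edge error.
    start = (No - L_G) + L_G                                 # offset of output index L_G
    return y_ext[start:start + N]

# Minimal usage example with assumed values.
rng = np.random.default_rng(1)
phi = [-1.5, 0.7]                                            # assumed AR(2) coefficients
eps = rng.standard_normal(128) + 1j * rng.standard_normal(128)
x = np.zeros(128, dtype=complex)
for n in range(128):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + eps[n] if n >= 2 else eps[n]
h = np.ones(41) / 41                                         # toy FIR filter, No = 40
y = extrapolation_filter(x, h, phi, L_G=20)
print(y.shape)                                               # (128,)
```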
In order to evaluate the residual filtering error of the proposed filtering strategy, we derive the mean
square error (MSE) in Appendix A1.
2.4. Adaptive Information Criteria for AR Order Determination
Given the impulse response of an analysis filter and the AR coefficients, we can directly calculate the MSE
by Equations (A2) and (A10). The precision of the AR coefficient estimation depends on the AR order.
Consequently, the filtering error at different AR orders can be evaluated with the preceding formulas;
conversely, the calculated MSE can be used for order determination.
AIC and BIC are two common information criteria, whose purpose is to find a model with sufficient
goodness of fit and a minimum number of free parameters. In terms of the maximum likelihood estimate
\hat{\sigma}_p^2 of the driving-noise variance at order p, the two criteria can be written as:
\mathrm{AIC}(p) = \log \hat{\sigma}_p^2 + \frac{2(p+1)}{N}   (29)

\mathrm{BIC}(p) = \log \hat{\sigma}_p^2 + \frac{(p+1)\log N}{N}   (30)
As explained in [29], due to the lack of samples, both criteria run the risk of overfitting, where
the selected order is larger than the true order. In particular, AIC has a nonzero overfitting
probability even as the sample number tends to infinity. Theoretically, both criteria consist of two terms: the
first term involves the MSE, and it decreases as the order p increases; the other term is a penalty that
is an increasing function of p. The preferred model order is the one with the lowest AIC or BIC value.
As shown in Figure 2a, the objective function curve S₁P₁E₁ reaches its minimum value at the point
P₁, which gives the correct order p̄. However, both criteria may sometimes fail to determine suitable
orders, and those failures are often related to inadequate penalties. Figure 2b illustrates a representative
case. Since the decrease of the objective function slows down abruptly as soon as the order exceeds p̄, the point
P₂ is the preferred point for order determination. However, the penalty strength is insufficient, so the
objective function is still falling after P₂. To handle this situation, we propose a mechanism to
adaptively adjust the penalty strength. A geometric interpretation is depicted in Figure 2b. We assume
that the order interval used for the computation contains the correct order. Then, the ray S₂E₂ forms the X₂ axis,
while the ray O₂Y₂, perpendicular to S₂E₂ and passing through the intersection O₂ of the ray S₂E₂ with
the objective-function axis, forms the Y₂ axis. Under the new coordinate system X₂O₂Y₂, the minimum
point P₂ of the curve S₂P₂E₂ helps to determine the correct order. Meanwhile, this modification has
no impact on the case in which the criterion already works well (see Figure 2a).
Figure 2. Geometric interpretation for the adaptive Akaike information criterion (AAIC) and the adaptive Bayesian information criterion (ABIC): (a) the penalty is adequate and the minimum P₁ of the curve S₁P₁E₁ gives the correct order; (b) the penalty is insufficient, and the minimum of the curve S₂P₂E₂ is sought in the rotated coordinate system X₂O₂Y₂ (horizontal axes: order; vertical axes: objective function).
The AAIC is defined by attaching an adaptive weight \lambda to the penalty term:

\mathrm{AAIC}(p) = \log \hat{\sigma}_p^2 + \lambda \frac{2(p+1)}{N}, \qquad \text{s.t.} \;\; \mathrm{AAIC}(p_s) = \mathrm{AAIC}(p_e)   (31)

where p_s and p_e denote the start point and the end point of the computing order interval, respectively. If
p_s = 1, the adaptive parameter can be given by:

\lambda = \frac{N \left( \log \hat{\sigma}_1^2 - \log \hat{\sigma}_{p_e}^2 \right)}{2 (p_e - 1)}   (32)
Analogously, the ABIC attaches the adaptive weight to the BIC penalty, and for p_s = 1 the parameter becomes:

\lambda = \frac{N \left( \log \hat{\sigma}_1^2 - \log \hat{\sigma}_{p_e}^2 \right)}{(p_e - 1) \log N}   (33)
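The following sketch illustrates the adaptive-penalty idea as reconstructed above (the least-squares covariance-method AR estimator and all parameter values are assumptions made for this sketch, not prescriptions from the paper): it fits AR models over an order interval, forms the AAIC curve with the adaptive weight λ of Equation (32), and returns the minimizing order.

```python
import numpy as np

def ar_fit_ls(x, p):
    """Least-squares (covariance-method) AR fit: phi_1..phi_p and residual variance."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    A = np.column_stack([x[p - l - 1:N - l - 1] for l in range(p)])  # x[n-1],...,x[n-p]
    yv = x[p:N]
    phi = -np.linalg.lstsq(A, yv, rcond=None)[0]      # x[n] = -sum phi_l x[n-l] + e[n]
    resid = yv + A @ phi
    return phi, float(np.mean(np.abs(resid) ** 2))

def select_order_aaic(x, p_e):
    """Adaptive AIC order selection over p = 1..p_e (cf. Equations (31)-(32))."""
    N = len(x)
    log_var = np.array([np.log(ar_fit_ls(x, p)[1]) for p in range(1, p_e + 1)])
    lam = N * (log_var[0] - log_var[-1]) / (2.0 * (p_e - 1))   # Equation (32), p_s = 1
    aaic = log_var + lam * 2.0 * (np.arange(1, p_e + 1) + 1) / N
    return int(np.argmin(aaic)) + 1

# Usage: order selection for a synthetic AR(4) process (assumed test case).
rng = np.random.default_rng(2)
true_phi = np.array([0.8618, 0.5700, 0.3574, 0.3136])   # stable AR(4) coefficients
x = np.zeros(512)
eps = rng.standard_normal(512)
for n in range(512):
    x[n] = eps[n] - sum(c * x[n - 1 - i] for i, c in enumerate(true_phi) if n - 1 - i >= 0)
print(select_order_aaic(x, p_e=20))   # typically selects an order close to the true value 4
```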
3. Implementation of SDSE
In this section, we discuss the implementation details of SDSE based on the proposed filtering
strategy. In particular, equiripple FIR filters are used as the analysis filters for their advantageous
features. To suppress spectral overlap and improve spectral precision in practice, we introduce a
mosaicking operation for sub-band spectra and discuss the compensation of the residual error of the
composite spectrum. After that, we summarize the entire algorithm and analyze the computational
complexity.
3.1. Properties and Design of Equiripple FIR Filters
Besides the general advantages of FIR filters, i.e., an exact linear phase response and inherent stability,
equiripple FIR filters have an explicitly specified transition width and passband/stop-band ripples (see
Figure 3). As analysis filters, equiripple FIR filters bring some important benefits, such as a guaranteed
minimum stop-band attenuation, the explicitly specified width of the invalid part of the sub-band
spectrum (which corresponds to the transition-band spectrum) and a limited maximum deviation of the
valid part of the sub-band spectrum (which corresponds to the passband spectrum). As shown in Figure 3,
the specifications of a typical equiripple FIR filter consist of the passband edge \omega_p, the stop-band edge \omega_s and
the maximum errors \delta_p and \delta_s in the passband and the stop-band, respectively. The approximate relationship between
the optimal filter length and the other parameters, developed by Kaiser [11], is:
N_f \approx \frac{-20 \log_{10}\!\left( \sqrt{\delta_p \delta_s} \right) - 13}{14.6\, \Delta f} + 1   (34)

where \Delta f denotes the width of the transition band,

\Delta f = \frac{\omega_s - \omega_p}{2\pi}   (35)
The maximum passband variation and the minimum stop-band attenuation in decibels are defined as:

A_p = 20 \log_{10}\!\left( \frac{1 + \delta_p}{1 - \delta_p} \right) \ \mathrm{dB}   (36)

and:

A_s = -20 \log_{10}(\delta_s) \ \mathrm{dB}   (37)

respectively.
Figure 3. Magnitude response and design parameters of an equiripple low-pass FIR filter.
When the specifications of a filter are explicitly given, we can complete the design with the
Parks–McClellan (PM) algorithm [30], since it is optimal with respect to the Chebyshev norm and yields
about 5 dB more attenuation than the windowed design algorithm [11].
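As an illustration of such a design flow (the concrete band edges and ripples below are assumptions chosen for the sketch, not values from the paper), the following Python snippet estimates the filter length with Kaiser's formula in Equation (34) and then runs the Parks–McClellan design via scipy.signal.remez:

```python
import numpy as np
from scipy.signal import remez, freqz

# Assumed low-pass specifications (normalized to a sampling rate of 1).
f_pass, f_stop = 0.23, 0.27          # band edges in cycles/sample
delta_p, delta_s = 0.01, 1e-4        # passband / stop-band ripples

# Kaiser's length estimate, Equation (34), with transition width df = f_stop - f_pass.
df = f_stop - f_pass
Nf = int(np.ceil((-20.0 * np.log10(np.sqrt(delta_p * delta_s)) - 13.0) / (14.6 * df) + 1))
Nf += 1 - Nf % 2                     # force an odd length (type-I linear-phase filter)

# Parks-McClellan (equiripple) design; the weights trade passband vs. stop-band ripple.
h = remez(Nf, bands=[0.0, f_pass, f_stop, 0.5], desired=[1.0, 0.0],
          weight=[1.0 / delta_p, 1.0 / delta_s], fs=1.0)

# Check the achieved ripples against A_p and A_s of Equations (36)-(37).
w, H = freqz(h, worN=8192, fs=1.0)
Hm = np.abs(H)
dp = np.max(np.abs(Hm[w <= f_pass] - 1.0))
ds = np.max(Hm[w >= f_stop])
print("length:", Nf)
print("A_p  =", 20 * np.log10((1 + dp) / (1 - dp)), "dB")
print("A_s  =", -20 * np.log10(ds), "dB")
```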
3.2. Practical Consideration of Equiripple FIR Filters
Firstly, the equiripple low-pass FIR filter is combined with a preprocessing step, complex frequency
modulation, to form a bandpass filter for sub-band decomposition (see Figure 4).
(38)
As long as A_s is large enough and the downsample rate M meets the conditions of Equations (39) and (40),
which constrain M in terms of the sub-band edges \omega_H and \omega_L and the stop-band edge \omega_s, the decimation
does not fold noticeable out-of-band energy into the sub-band of interest. The maximum stop-band
attenuation should exceed the dynamic range of the signal to be analyzed. Once the aforementioned
conditions are satisfied, the shortest transition width can be chosen by Equation (34). Moreover, the
specific application requirement will help to set the maximum passband variation.
Algorithm 1. SDSE based on the proposed filtering strategy.

AR Identification and Order Selection:
for each candidate order p_i in the computing interval [p_s, p_e] do
    Identify the AR coefficients at order p_i;
    Estimate the MSE \mu_{p_i} by (A10) and (A11);
end for
Select an order p by Equation (31) or Equation (33), with O(N) flops.

Sequence Extrapolation:
Set the step-size k by Equation (28);
Calculate \{g_{kl}\}_{l=0}^{p-1} by Equations (16), (17) and (24);
Implement the forward and backward extrapolations by Equations (26) and (27), and obtain \{\tilde{x}_n\}_{n=L_G-N_o}^{L_G+N-1}, with O(N_o p) flops.

Sub-Band Spectral Estimation:
Set a rational factor M_0 = [[\pi/\omega_s]], where [[\cdot]] denotes a rational approximation;
for i = 1 to M do
    Compute \omega_H and \omega_L by Equation (38) and \omega_H + \omega_L = (2i - 1)\pi/M;
    Perform the pre-modulation and filtering for \{\tilde{x}_n\}_{n=L_G-N_o}^{L_G+N-1} by Figure 4 and Equation (10); the computational complexity is O(2(N + N_o)\log(N + N_o)) flops;
    Decimate the sequence \{\tilde{x}_n\}_{n=L_G-N_o}^{L_G+N-1} by a factor of M_0, and obtain the sub-band sequence \{x_n^{(i)}\}_{n=0}^{\lceil N/M_0 \rceil - 1}, where \lceil \cdot \rceil denotes the ceiling function;
    Perform spectral analysis for the sub-band sequence \{x_n^{(i)}\}_{n=0}^{\lceil N/M_0 \rceil - 1}, and denote the length of the sub-band spectrum as L_s;
    Compute the length of the overlapped spectrum at each side from M_0, M and L_s, and omit the overlapped parts at both the left and the right side of the sub-band spectrum.
end for
Mosaic the remaining sub-band spectra into an entire spectrum.
Output: The entire spectrum.
As shown in Algorithm 1, we summarize SDSE with the proposed filtering strategy and give the
computational complexity of the major steps. First, the proposed strategy can greatly reduce the
computational burden. We take the commonly-used amplitude and phase estimation (APES) [32]
2
algorithm as an example. The
full-band APES needs O pN
log Nq flops [33], while the computation
Q U2
Q U
requirement is decreased to O M MN0
flops by SDSE with the proposed strategy.
log MN0
Second, apart from the sub-band spectral estimation itself, the main computational requirement is induced by the
AR identification and the order selection. The computational complexity of this step is generally much
lower than that of the sub-band spectral estimation. In particular, if a proper order or a small enough
order interval is preselected before the AR identification, the computation of this step can be negligible.
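To make the overall flow of Algorithm 1 concrete, the following sketch (a simplified illustration under assumed parameters; the extrapolation step is omitted here and a plain FFT periodogram stands in for the per-sub-band spectral estimator) decomposes a sequence into M sub-bands by complex demodulation, low-pass filtering and decimation, analyzes each sub-band independently, and mosaics the sub-band spectra back into a full-band spectrum.

```python
import numpy as np
from scipy.signal import remez, lfilter

def sdse_periodogram(x, M=4, taps=101):
    """Sub-band decomposition (demodulate -> low-pass -> decimate) followed by
    per-sub-band FFT analysis and mosaicking of the sub-band spectra."""
    N = len(x)
    # Low-pass prototype covering one sub-band of width 1/M (edges are assumptions).
    h = remez(taps, [0.0, 0.4 / M, 0.5 / M, 0.5], [1.0, 0.0], fs=1.0)
    n = np.arange(N)
    Ls = N // M                                   # spectrum samples per sub-band
    mosaic = np.zeros(M * Ls)
    for i in range(M):
        fc = (i + 0.5) / M - 0.5                  # sub-band center in cycles/sample
        base = x * np.exp(-2j * np.pi * fc * n)   # shift the sub-band to baseband
        sub = lfilter(h, 1.0, base)[::M]          # filter and decimate by M
        # Per-sub-band spectral estimate (here: a simple periodogram).
        mosaic[i * Ls:(i + 1) * Ls] = np.abs(np.fft.fftshift(np.fft.fft(sub, Ls))) ** 2
    freqs = np.arange(M * Ls) / (M * Ls) - 0.5    # cycles/sample, from -0.5 to 0.5
    return freqs, mosaic

# Usage: two tones recovered from the mosaicked spectrum (assumed test signal;
# the tone frequencies are chosen to lie on the analysis grid).
rng = np.random.default_rng(3)
n = np.arange(1024)
x = np.exp(2j * np.pi * 0.109375 * n) + 0.8 * np.exp(2j * np.pi * -0.3203125 * n)
x = x + 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
f, P = sdse_periodogram(x)
print([round(v, 3) for v in f[np.argsort(P)[-2:]]])   # approximately [-0.32, 0.109]
```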
4. Simulations and Analysis
In this section, both the feasibility and the effectiveness of the proposed strategy are evaluated by
typical numerical simulations, including FIR filtering and line spectral analysis of 1D or 2D sequences.
4.1. Filtering Analysis Using the Proposed Strategy
Suppose that the input sequence \{x_n\} is a mixed complex exponential sequence consisting of a weak
component s_n^{(1)}, a strong multi-tone component s_n^{(2)} and measurement noise:

x_n = s_n^{(1)} + s_n^{(2)} + \varepsilon_n, \qquad s_n^{(2)} = 100 \sum_{l} \exp\left\{ j \left( 0.55\pi + 0.035\pi\, l \right) n \right\}, \qquad n = 0, 1, \ldots, 127   (41)

where the measurement noise \{\varepsilon_n\} is a complex Gaussian process. All real parts and imaginary parts of
\{\varepsilon_n\} are independent and identically distributed (i.i.d.) zero-mean Gaussian variables with variance
\sigma^2, i.e., \mathrm{Re}(\varepsilon_n), \mathrm{Im}(\varepsilon_n) \sim N(0, \sigma^2). Our purpose is to extract the weak component
s_n^{(1)} from x_n without distortion or to completely eliminate the strong component s_n^{(2)}.
The equiripple half-band low-pass FIR filter is chosen for the extraction. The specifications of the
filter are:

A_p = 1.4295 \times 10^{-3}\ \mathrm{dB}, \quad A_s = 81.6852\ \mathrm{dB}, \quad \Delta f = 0.08   (42)

The length of the filter designed from the given specifications is 119.
As shown in Figure 7a, the decreasing trend of the residual error estimated by the proposed strategy
is consistent with the real error. When the order exceeds 57, the decrease of the estimated filtering
error abruptly slows down. Hence, the preferred order is 57. By comparison, owing to the deficiency of
the penalty strength, neither AIC nor BIC provides the right order, whereas, based on the adaptive
penalty terms, both AAIC and ABIC yield the right order 57 (see Figure 7b).
As shown in Figure 8b, the weak component s_n^{(1)} is completely buried under the sidelobes of the
out-of-band strong component s_n^{(2)}; thus, recognizing the existence of the weak component in the
mixed spectrum is impossible. Judging from the magnitude response (see Figure 8a), the
filter has the nominal ability to eliminate the interference of the out-of-band strong components for
the in-band weak component. Due to the existence of the convolution filtering error, we still cannot find
out the weak component from the convolution spectrum, as shown in Figure 8b. By contrast, once the
samples contaminated by the filtering error are omitted by Equation (4) from the filtered sequence, the
weak component reappears in the spectrum of the remaining samples (refer to the truncated spectrum in
Figure 8b). However, the truncated spectrum has a much wider main lobe than the original spectrum,
which means the spectral resolution suffers from a severe decrease. In order to simultaneously maintain
the resolution and filter out the interference, we apply the proposed filtering strategy to handle the case.
As shown in Figure 8c, based on the proposed strategy, the restored spectrum of the noiseless sequence
closely coincides with the true weak spectrum in shape and, in particular, retains the spectral resolution. In
addition, even when the signal-to-noise ratio (SNR) of s_n^{(1)} is as low as 3 dB (when \sigma^2 = 1), the recovery is
still effective (see the magnified details of Figure 8c).
Figure 7. Mean square error of filtering and order selection: (a) quantitative comparison of the filtering error of convolution filtering and the proposed filtering (MSE in dB versus AR order); (b) comparison of the information criteria, including AIC, BIC, AAIC (\lambda = 1.57) and ABIC (\lambda = 1.36).
Figure 8. Filtering results for the mixed sequence (magnitude in dB versus normalized frequency, \times\pi rad/sample): (a) magnitude response of the designed half-band filter; (b) the mixed spectrum, the weak spectrum, the convolution spectrum and the truncated spectrum; (c) the restored spectrum based on the proposed strategy, with magnified details around 0.45\pi.
Next, we consider frequency estimation for a mixed complex exponential sequence:

x_n = s_n^{(1)} + s_n^{(2)} + \varepsilon_n

s_n^{(1)} = \sum_{k=1}^{5} \alpha_k \exp\left[ j \left( \omega_k n + \theta_k \right) \right]

s_n^{(2)} = 100 \sum_{i=0}^{16} \left\{ \exp\left[ j \left( \omega_i^{(-)} n + \theta_i^{(-)} \right) \right] + \exp\left[ j \left( \omega_i^{(+)} n + \theta_i^{(+)} \right) \right] \right\}   (43)

where:

\alpha_1 = \alpha_3 = \alpha_5 = \sqrt{5}, \quad \alpha_2 = \alpha_4 = 1

\omega_1 = 0.075\pi, \quad \omega_2 = 0.03125\pi, \quad \omega_3 = 0.0125\pi, \quad \omega_4 = 0.05625\pi, \quad \omega_5 = 0.1\pi

\omega_i^{(+)} = (0.15 + 0.05 i)\pi, \quad \omega_i^{(-)} = -(0.15 + 0.05 i)\pi

and:

n = 0, 1, \ldots, N - 1; \quad N = 128

\{\varepsilon_n\} is a real-valued sequence of i.i.d. zero-mean Gaussian random variables with variance \sigma^2 = 1.5811,
i.e., \varepsilon_n \sim N(0, \sigma^2). The phases \theta_k, \theta_i^{(+)} and \theta_i^{(-)} are i.i.d. uniform random variables on the interval
from zero to 2\pi, i.e., \theta_k, \theta_i^{(+)}, \theta_i^{(-)} \sim U[0, 2\pi).
In this case, the SNR of each component of s_n^{(1)} is:

\mathrm{SNR}_1 = \mathrm{SNR}_3 = \mathrm{SNR}_5 = 5\ \mathrm{dB}, \qquad \mathrm{SNR}_2 = \mathrm{SNR}_4 = -2\ \mathrm{dB}
We decompose the mixed signal x_n into four sub-bands using the proposed method with the filter
parameters set as:

A_p = 0.01\ \mathrm{dB}, \quad A_s = 60\ \mathrm{dB}, \quad \Delta f = 0.05   (44)

The sub-band whose radian frequency lies within [-0.125\pi, +0.125\pi) is used for frequency
estimation. Furthermore, we estimate the frequencies of the complex sinusoids of s_n^{(1)}, which are contained
in both the mixed signal x_n and the decomposed sub-band signal, via the MUSIC, ESPRIT [34,35] and
SELF-SVD [15] algorithms (see Table 1). As shown in Table 1, we analyze the performance based
on the Monte Carlo method. Compared with ESPRIT, SELF-SVD in full-band spectral estimation
suffers from obvious performance degradations or even failures. Although SELF-SVD can theoretically
attenuate the out-of-band components for the in-band frequency estimations, the ability of attenuation is
not always sufficient, especially when the power of the out-of-band components is much stronger than
that of the in-band components or the SNR is relatively low. Instead of performing the SVD method
in the entire frequency domain as ESPRIT, SELF-SVD just performs it in the frequency interval of
interest. Obviously, the remaining out-of-band interferences will be treated as in-band components,
so that the frequency estimation with SELF-SVD sometimes fails. In the experiment, the power ratio
of the out-of-band components to the in-band components at \omega_2 and \omega_4 is up to 10,000 times. As a
result, the corresponding frequency estimation with SELF-SVD fails to work. When we eliminate the
out-of-band interferences with our method, the estimation of SELF-SVD on the residual signal exhibits
performance similar to that of ESPRIT. In addition, the MSEs of MUSIC and ESPRIT indicate that the frequency
estimation in the sub-band is much more accurate than that in the full-band.
Table 1. Frequency estimates (in \pi rad/sample) and mean square errors of MUSIC, ESPRIT and SELF-SVD in the full-band and in the sub-band ("-" denotes a failed estimation).

                                          Full-band                      Sub-band
                                   MUSIC   ESPRIT  SELF-SVD      MUSIC   ESPRIT  SELF-SVD
\hat{\omega}_1 (\omega_1 = 0.07500\pi)    0.0750   0.0750   0.0790       0.0749   0.0749   0.0750
\hat{\omega}_2 (\omega_2 = 0.03125\pi)    0.0309   0.0309   -            0.0308   0.0313   0.0308
\hat{\omega}_3 (\omega_3 = 0.01250\pi)    0.0125   0.0125   0.0175       0.0123   0.0123   0.0126
\hat{\omega}_4 (\omega_4 = 0.05625\pi)    0.0572   0.0572   -            0.0555   0.0557   0.0555
\hat{\omega}_5 (\omega_5 = 0.10000\pi)    0.1000   0.1000   0.1027       0.1000   0.1000   0.1000
\hat{\sigma}_1^2 (\times 10^{-5})         0.0072   0.0072   1.8126       0.0057   0.0058   0.0044
\hat{\sigma}_2^2 (\times 10^{-5})         1.6462   1.6462   -            0.1662   0.1259   0.1816
\hat{\sigma}_3^2 (\times 10^{-5})         0.0125   0.0125   2.8847       0.0081   0.0074   0.0060
\hat{\sigma}_4^2 (\times 10^{-5})         3.0443   3.0443   -            0.1729   0.1199   0.1775
\hat{\sigma}_5^2 (\times 10^{-5})         0.0360   0.0360   0.7379       0.0049   0.0041   0.0046
Next, we consider a 2D mixed sequence:

x_{n_1,n_2} = s_{n_1,n_2}^{(1)} + s_{n_1,n_2}^{(2)} + \varepsilon_{n_1,n_2}   (45)

s_{n_1,n_2}^{(1)} = \sum_{k=1}^{K} \exp\left[ j 2\pi \left( \frac{p_k n_1}{4 N_1} + \frac{q_k n_2}{4 N_2} \right) + j\theta_k \right]

s_{n_1,n_2}^{(2)} = 1000 \sum_{l_1=0}^{16} \left\{ \exp\left[ j 2\pi \left( \frac{118.5\, n_1}{N_1} + \frac{(95.5 + 4 l_1)\, n_2}{N_2} \right) + j\theta_{l_1}^{(1)} \right] + \exp\left[ j 2\pi \left( \frac{54.5\, n_1}{N_1} + \frac{(95.5 + 4 l_1)\, n_2}{N_2} \right) + j\theta_{l_1}^{(2)} \right] \right\}
 + 1000 \sum_{l_2=0}^{15} \left\{ \exp\left[ j 2\pi \left( \frac{(54.5 + 4 l_2)\, n_1}{N_1} + \frac{95.5\, n_2}{N_2} \right) + j\theta_{l_2}^{(1)} \right] + \exp\left[ j 2\pi \left( \frac{(54.5 + 4 l_2)\, n_1}{N_1} + \frac{159.5\, n_2}{N_2} \right) + j\theta_{l_2}^{(2)} \right] \right\}   (46)

where:

n_1 = 0, 1, \ldots, N_1 - 1; \quad n_2 = 0, 1, \ldots, N_2 - 1

and:

N_1 = N_2 = 256, \quad K = 8{,}192

\{\varepsilon_{n_1,n_2}\} is a real-valued sequence following \varepsilon_{n_1,n_2} \sim N(0, \sigma^2) with \sigma^2 = 0.005. The phases
\theta_k, \theta_{l_1}^{(1)}, \theta_{l_1}^{(2)}, \theta_{l_2}^{(1)}, \theta_{l_2}^{(2)} are uniform random variables on the interval from zero to 2\pi,
i.e., \theta_k, \theta_{l_1}^{(1)}, \theta_{l_1}^{(2)}, \theta_{l_2}^{(1)}, \theta_{l_2}^{(2)} \sim U[0, 2\pi).
The spectrum of this 2D sequence is shown in Figure 9. Since the magnitude of s_{n_1,n_2}^{(2)} is 60 dB greater
than that of s_{n_1,n_2}^{(1)}, the sidelobes of the former significantly affect the spectral estimation of the latter.
This effect is especially severe for the components around s_{n_1,n_2}^{(1)}. The region inside the red pane is
used to verify the performance of the proposed method.

Figure 9. Actual magnitude spectrum of the 2D signal (the black dot corresponds to s_{n_1,n_2}^{(1)}: 0 dB; the rounded blue spot corresponds to s_{n_1,n_2}^{(2)}: 60 dB; the red pane covers the region to be analyzed).
The parameters of the analysis filter are selected as:

A_p = 0.2\ \mathrm{dB}, \quad A_s = 80\ \mathrm{dB}, \quad \Delta f = 0.05   (47)
The comparison of Figure 10a and Figure 10c indicates that the Fourier spectrum of s_{n_1,n_2}^{(1)} is severely affected
by s_{n_1,n_2}^{(2)}. By contrast, the result shown in Figure 10b appears almost exactly the same as the desired
result shown in Figure 10c. This decomposition result verifies the effectiveness of the proposed method.
To further test the performance of our method, we select the APES [32] and the iterative adaptive
approach (IAA) [36,37] for spectral estimation. Since the ideal frequency domain filters suffer from
energy leakage and/or frequency aliasing problems, the APES result shown in Figure 10d is somewhat
blurred. By contrast, the APES result of the sub-band decomposed with the proposed strategy (see
Figure 10e) is quite similar to the actual spectrum (see Figure 10f). Theoretically, the IAA is superior
to the APES. However, as shown in Figure 10g, it is even more likely than the APES to suffer from
out-of-band interferences. In the sub-band IAA spectrum (see Figure 10h), most of
the interferences are eliminated, while the remaining filtering error still affects the spectrum.
Thus, the spectral estimation experiment reveals that the sub-band decomposition based on the proposed
method provides relatively ideal performance, whereas the method developed for extrapolation is
imperfect, so it can affect the performance of the IAA algorithm.
In addition, a simulated single-polarized SAR image of an airplane based on the physical and optical
model is processed via the APES. The computation time of full-band APES (refer to Figure 11a) is
26.85 h, while the time of sub-band APES (refer to Figure 11b) is just 0.84 h. Obviously, the two
imaging results have only tiny differences, which are hardly recognizable.
Figure 10. Sub-band decomposition and spectral estimation within the analyzed region: (a) the Fourier spectrum of x_{n_1,n_2}; (b) the Fourier spectrum of the sub-band signal decomposed by our method; (c) the Fourier spectrum of s_{n_1,n_2}^{(1)}; (d) the amplitude and phase estimation (APES) result of (a), corresponding to the ideal frequency-domain-filter-based sub-band decomposition; (e) the APES result of (b); (f) the actual spectrum; (g) the iterative adaptive approach (IAA) result of (a); (h) the IAA result of (b).
Figure 11. Comparison between full-band (a) and sub-band (b) APES images: the imaging
for (a) costs 26.85 h, while for (b), it is 0.84 h.
5. Conclusion
This paper has investigated the problem of suppressing spectral overlap in sub-band spectral
estimation. The spectral overlap phenomenon originates from the non-ideal behavior of the analysis
filtering, i.e., the filtering error. The error formation in convolution filtering was therefore discussed,
based on which an extrapolation-based filtering strategy was proposed to greatly suppress spectral
overlap. Several classical tools, including AR identification, Kalman prediction and the equiripple
FIR filtering technique, were integrated into the strategy for linearly-optimal extrapolation. To resolve the
overfitting in order determination with AIC and BIC, we modified the penalty terms of both criteria.
The improved criteria adaptively adjust the penalty strength and avoid overfitting to some extent. Both
1D and 2D complex exponential signals were utilized to validate the performance of the proposed method.
Moreover, we employed SAR image formation for single-polarized SAR data, simulated based on
electromagnetic theory, to test the efficiency of our method. Future research will focus on developing
more sophisticated methods for the extrapolation problem, with which we can avoid model order
determination and further improve the extrapolation precision.
Acknowledgments
The work was in part supported by the NSFC (No. 41171317), in part supported by the key project of
the NSFC (No. 61132008), in part supported by the major research plan of the NSFC (No. 61490693),
in part supported by the Aeronautical Science Foundation of China (No. 20132058003) and in part
supported by the Research Foundation of Tsinghua University. The authors would also like to thank the
reviewers for their helpful comments.
Author Contributions
Zenghui Li contributed to the original idea, algorithm design and paper writing. Bin Xu contributed
in part to data processing. Jian Yang and Jianshe Song supervised the work and helped with editing
the paper.
Appendix
A1. Residual Filtering Error Analysis
A stationary AR(p) process can be represented as:

x_n = \sum_{l=0}^{\infty} \psi_l \varepsilon_{n-l}   (A1)

where the coefficient series \{\psi_l\} converges in mean square and can be calculated recursively [28]:

\psi_l = -\sum_{i=1}^{\min(p,\,l)} \varphi_i \psi_{l-i} = -\varphi_1 \psi_{l-1} - \cdots - \varphi_p \psi_{l-p}, \qquad \psi_0 = 1, \quad \psi_l = 0 \;\; (l < 0)   (A2)
Then the k-step prediction formula in an alternative form is:

\hat{x}_{n|n-k} = \sum_{l=k}^{\infty} \psi_l \varepsilon_{n-l}   (A3)

and the prediction error satisfies:

E\left\{ x_n - \hat{x}_{n|n-k} \right\} = 0   (A4)

E\left\{ \left| x_n - \hat{x}_{n|n-k} \right|^2 \right\} = \sigma_{\varepsilon}^2 \sum_{l=0}^{k-1} |\psi_l|^2   (A5)

E\left\{ \left( x_{n_1} - \hat{x}_{n_1|n_1-k} \right) \left( x_{n_2} - \hat{x}_{n_2|n_2-k} \right)^{*} \right\} =
\begin{cases}
\sigma_{\varepsilon}^2 \sum_{(l_1, l_2) \in \Omega} \psi_{l_1} \psi_{l_2}^{*}, & |n_1 - n_2| \le k - 1 \\
0, & \text{otherwise}
\end{cases}   (A6)

\Omega = \left\{ (l_1, l_2) : l_1 - l_2 = n_1 - n_2, \;\; 0 \le l_1, l_2 \le k - 1 \right\}   (A7)
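As a numerical cross-check of Equations (A2) and (A5) (an illustration with arbitrarily chosen AR coefficients, not part of the original derivation), the sketch below computes the coefficients ψ_l recursively and compares the closed-form k-step prediction error variance with the sample variance of the error of the optimal k-step AR forecast, computed here by iterating the one-step recursion.

```python
import numpy as np

def psi_coeffs(phi, L):
    """psi_0..psi_{L-1} via the recursion of Equation (A2)."""
    p = len(phi)
    psi = np.zeros(L)
    psi[0] = 1.0
    for l in range(1, L):
        psi[l] = -sum(phi[i - 1] * psi[l - i] for i in range(1, min(p, l) + 1))
    return psi

def forecast_k(history, phi, k):
    """Iterated AR forecast: the optimal k-step predictor for an AR process."""
    buf = list(history)
    for _ in range(k):
        buf.append(-sum(c * buf[-1 - i] for i, c in enumerate(phi)))
    return buf[-1]

rng = np.random.default_rng(4)
phi = [-1.5, 0.7]                 # assumed AR(2) coefficients (phi_1, phi_2)
sigma2 = 1.0
k = 3

# Simulate an AR(2) realization.
Nsim = 60_000
x = np.zeros(Nsim)
eps = rng.normal(scale=np.sqrt(sigma2), size=Nsim)
for n in range(Nsim):
    x[n] = eps[n] - sum(c * x[n - 1 - i] for i, c in enumerate(phi) if n - 1 - i >= 0)

# Empirical k-step prediction error variance vs. the closed form of Equation (A5).
errs = [x[n + k] - forecast_k(x[n - 10:n + 1], phi, k) for n in range(100, 50_000)]
psi = psi_coeffs(phi, k)
print(np.var(errs), sigma2 * np.sum(psi ** 2))   # the two values should be close
```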
Now, consider the ideal filtered output that would be obtained if the true samples were available:

\bar{\mathbf{y}} = \bar{\mathbf{X}} \mathbf{h}   (A8)

where:

\bar{\mathbf{X}} =
\begin{bmatrix}
x_{L_G}     & x_{L_G-1}   & \cdots & x_{L_G-N_o}   \\
x_{L_G+1}   & x_{L_G}     & \cdots & x_{L_G-N_o+1} \\
\vdots      & \vdots      & \ddots & \vdots        \\
x_{L_G+N-1} & x_{L_G+N-2} & \cdots & x_{L_G+N-1-N_o}
\end{bmatrix} \in \mathbb{C}^{N \times N_f}   (A9)
All elements of \bar{\mathbf{X}} are true samples, which are assumed to be known for the derivation. Then, according
to (A6), we can derive the MSE as:

\mathrm{MSE} = \frac{1}{N} \sum_{n=L_G}^{L_G+N-1} E\left\{ \left| \tilde{y}_n - \bar{y}_n \right|^2 \right\}
= \frac{1}{N} \sum_{n=L_G}^{N_o-1} \sum_{m_1=n+1}^{N_o} \sum_{m_2=n+1}^{N_o} h_{m_1} h_{m_2}^{*}\, E\left\{ \left( \tilde{x}_{n-m_1} - x_{n-m_1} \right) \left( \tilde{x}_{n-m_2} - x_{n-m_2} \right)^{*} \right\}
\;+\; \frac{1}{N} \sum_{n=N}^{N+L_G-1} \sum_{m_1=0}^{n-N} \sum_{m_2=0}^{n-N} h_{m_1} h_{m_2}^{*}\, E\left\{ \left( \tilde{x}_{n-m_1} - x_{n-m_1} \right) \left( \tilde{x}_{n-m_2} - x_{n-m_2} \right)^{*} \right\}
= \frac{\sigma_{\varepsilon}^2}{N} \sum_{n=L_G}^{N_o-1} \sum_{(m_1,m_2) \in \Omega_1} h_{m_1} h_{m_2}^{*} \sum_{(l_1,l_2) \in \Omega^{(1)}} \psi_{l_1} \psi_{l_2}^{*}
\;+\; \frac{\sigma_{\varepsilon}^2}{N} \sum_{n=N}^{N+L_G-1} \sum_{(m_1,m_2) \in \Omega_2} h_{m_1} h_{m_2}^{*} \sum_{(l_1,l_2) \in \Omega^{(2)}} \psi_{l_1} \psi_{l_2}^{*}   (A10)

where:

\Omega_1 = \left\{ (m_1, m_2) : |m_1 - m_2| \le k - 1, \;\; n + 1 \le m_1, m_2 \le N_o \right\}, \quad L_G \le n \le N_o - 1
\Omega^{(1)} = \left\{ (l_1, l_2) : l_1 - l_2 = m_2 - m_1, \;\; 0 \le l_1, l_2 \le k - 1, \;\; (m_1, m_2) \in \Omega_1 \right\}
\Omega_2 = \left\{ (m_1, m_2) : |m_1 - m_2| \le k - 1, \;\; 0 \le m_1, m_2 \le n - N \right\}, \quad N \le n \le N + L_G - 1
\Omega^{(2)} = \left\{ (l_1, l_2) : l_1 - l_2 = m_2 - m_1, \;\; 0 \le l_1, l_2 \le k - 1, \;\; (m_1, m_2) \in \Omega_2 \right\}   (A11)

Conflicts of Interest
5. Rao, S.; Pearlman, W.A. Spectral estimation from sub-bands. In Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Victoria, BC, Canada, 4–6 October 1992; pp. 69–72.
6. Bonacci, D.; Mailhes, C.; Djuric, P.M. Improving frequency resolution for correlation-based spectral estimation methods using sub-band decomposition. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, 6–10 April 2003; pp. 329–332.
7. Rao, S.; Pearlman, W.A. Analysis of linear prediction, coding, and spectral estimation from sub-bands. IEEE Trans. Inf. Theory 1996, 42, 1160–1178.
8. Lambrecht, C.B.; Karrakchou, M. Wavelet packets-based high-resolution spectral estimation. Signal Process. 1995, 47, 135–144.
9. Rouquette, S.; Berthoumieu, Y.; Najim, M. An efficient sub-band decomposition based on the Hilbert transform for high-resolution spectral estimation. In Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Paris, France, 18–21 June 1996; pp. 409–412.
10. Tkacenko, A.; Vaidyanathan, P.P. The role of filter banks in sinusoidal frequency estimation. J. Franklin Inst. 2001, 338, 517–547.
11. Madisetti, V.K. The Digital Signal Processing Handbook, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2010.
12. Narasimhan, S.V.; Harish, M. Spectral estimation based on sub-band decomposition by harmonic wavelet transform and modified group delay. In Proceedings of the 2004 International Conference on Signal Processing and Communications, Bangalore, India, 11–14 December 2004; pp. 349–353.
13. Narasimhan, S.V.; Harish, M.; Haripriya, A.R.; Basumallick, N. Discrete cosine harmonic wavelet transform and its application to signal compression and sub-band spectral estimation using modified group delay. Signal Image Video Process. 2009, 3, 85–99.
14. Bonacci, D.; Michel, P.; Mailhes, C. Sub-band decomposition and frequency warping for spectral estimation. In Proceedings of the European Signal and Image Processing Conference (EUSIPCO), Toulouse, France, 3–6 September 2002; pp. 147–150.
15. Stoica, P.; Sandgren, N.; Selén, Y.; Vanhamme, L.; Van Huffel, S. Frequency-domain method based on the singular value decomposition for frequency-selective NMR spectroscopy. J. Magn. Reson. 2003, 165, 80–88.
16. Brito, A.E.; Chan, S.H.; Cabrera, S.D. SAR image superresolution via 2-D adaptive extrapolation. Multidimens. Syst. Signal Process. 2003, 14, 83–104.
17. Papoulis, A. A new algorithm in spectral analysis and band-limited extrapolation. IEEE Trans. Circuits Syst. 1975, CAS-22, 735–742.
18. Devasia, A.; Cada, M. Extrapolation of bandlimited signals using Slepian functions. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 23–25 October 2013; pp. 1–6.
19. Shi, J.; Sha, X.; Zhang, Q.; Zhang, N. Extrapolation of bandlimited signals in linear canonical transform domain. IEEE Trans. Signal Process. 2012, 60, 1502–1508.