Adapt Range Track Radar
Naum Chernoguz
Israel Aerospace Industries and ELFI Tech♦
Israel
Abstract—Range tracking, a traditional part of the radar/sonar range gating technique, requires a fully adaptive, however simple, target predictor. Under the constant velocity (CV) assumption, one can apply an adaptive-gain form of the α–β filter based on the recursive prediction error method. The adaptive α–β terms tend to the Kalman gain and remain nearly optimal, with a minor degradation caused by the parameter misadjustment noise. An extended variant of the filter allows, in addition to the CV form, both polynomial and stochastic signals. The adaptive tracker improves the sensor bandwidth and allows more accurate range gating.

I. INTRODUCTION

In many tracking radar and sonar applications the target is continuously tracked in range. A similar problem of object tracking/separation has attracted the interest of researchers in biophysical, medical and other fields♦. The method of automatic tracking relies on the split range gate principle [1]. Range gating (schematically illustrated in Fig. 1) is a regular part of the track-while-scan (TWS) circuitry based on the target predictor. In practice, for computational reasons, various simplified forms of the Kalman filter (KF) are the candidates for such a predictor. Under the constant velocity (CV) assumption, the common choice is the two-state fixed-gain α–β filter [2]. The difficulty here is in getting information about the signal-to-noise ratio (SNR), which is critical for the filter performance.

When the SNR is an uncertain or time-varying attribute, a usual recipe is adaptive Kalman filtering (AKF) [3, 4]. However, AKF exploits rather sophisticated tools based mostly upon decision-directed or noise identification methods. Actually, the range tracking technique needs a simpler device in order to meet the TWS requirements. Regarding the above-mentioned α–β tracker, in spite of its simplicity the adaptation of this device is not a trivial matter. Known adaptive variants of the α–β filter, e.g. [5-7], either rely on the universal methods of AKF or use various forms of user experience such as expert rules, fuzzy reasoning, etc. The remaining niche can be filled with a simple and equally fully automatic adaptive-gain tracker [8, 9]. The attractiveness of this filter is that the α–β gain is adjusted in a manner similar to the usual recursive estimation of the autoregressive moving-average (ARMA) model, based upon the classical parameter identification technique [10].

Specifically, the position-velocity model underlying the two-state α–β filter is associated with the moving average, MA(2), a case of ARMA. An adaptive form of the α–β filter comes, after some manipulations, in analogy to the MA estimator. In view of gating, this adaptive filter (AF) should be given a certain modification. In addition, we briefly discuss some extended variants of the α–β filter enabling more flexibility inside the same position-velocity framework.

A major objective of this work is to study the properties of the adaptive α–β filter and its applicability to automatic tracking. As will be shown, the devised AF closely follows the true SNR with just minor deterioration induced by the gain misadjustment noise. As a part of the split-gate circuitry, it improves the tracker bandwidth, minimizes the output noise and produces a smaller gate than does the fixed-gain tracker. As is clear from Fig. 1, it enables better isolation of one target, excluding targets at other ranges.

The rest of the paper is organized as follows. In Section 2, the adaptive α–β filter is given with some modification obliged by the range gating procedure. Section 3 analyzes the root mean square (RMS) of the output error influenced by the gain misadjustment noise. Extended variants of the adaptive α–β filter are briefly discussed in Section 4. Section 5 depicts the results of simulation, and Section 6 concludes the study.

Figure 1. Target tracking with range gating.

♦ Sponsor of the presentation.

II. ADAPTIVE GAIN TRACKING FILTER

The steady two-state α–β filter relies on the target model

x_{n+1} = F x_n + \Gamma w_n    (1)

where x_n = [r_n v_n]^T is the kinematic range-velocity state vector, the subscript n denotes the time instant, w_n is the process noise of variance Q = σ_w^2, and the superscript T marks matrix or vector transposition. F and Γ denote, respectively, the transition matrix (TM) and the input matrix related to the discrete time interval T:
F = \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} 0.5T^2 \\ T \end{pmatrix}    (2)

Given only position data, the observation equation reads

y_n = h x_n + v_n    (3)

where h = [1 0] and v_n is the measurement noise of variance R = σ_v^2 (w_n and v_n are assumed mutually uncorrelated and independent). Given the gain k, the state estimator follows as

\hat{x}_n = F \hat{x}_{n-1} + k e_n    (4)

The gradient of the prediction error e_n with respect to the α–β terms forms the vector

H = (\partial e_n / \partial\alpha, \; \partial e_n / \partial\beta) = \left(-q^{-1}\Delta\xi_n, \; -q^{-1}\xi_n\right)    (14)

Replacing the regression vector Ψ by H, one obtains a recursive estimator for the α–β terms. Coupling the latter estimator, i.e. the gain adaptor, with the state update equations (4)-(5) provides an adaptive form of the KF [8].

Next, this filter is fused with range gating. When more than one return falls inside the gate, possibly from different targets, it is common to use the return closest to the prediction (the nearest-neighbor KF [2]). If no return is found inside the gate, the update is omitted. To form the gate one needs the innovation variance, S. The effect of gating can be accounted for by using the robust variant of the RLS [10, pp. 498-499], viewing the problem as estimation of k by minimizing the weighted sum of squared innovations [10].
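For illustration, the following Python sketch (a minimal example, not the authors' implementation) combines the pieces above: a two-state α–β predictor, a nearest-neighbor gate built from the sampled innovation variance, and a recursive prediction-error (RLS-type) adaptation of the gain. Because the closed-form regression vector of Eq. (14) relies on quantities introduced in intermediate equations not reproduced here, the gradient ∂e_n/∂(α, β) is obtained from a generic sensitivity recursion instead; the function name, gate handling, clipping limits and initial values are all assumptions of this example.

```python
import numpy as np

def adaptive_alpha_beta_track(scans, T=1.0, lam=0.98, g=3.0, alpha0=0.5, beta0=0.2):
    """Gated alpha-beta range tracker with an RLS-type adaptive gain (sketch)."""
    F = np.array([[1.0, T], [0.0, 1.0]])
    h = np.array([1.0, 0.0])
    k = np.array([alpha0, beta0 / T])     # gain vector, k = [alpha, beta/T]
    x = np.zeros(2)                       # state estimate [range, velocity]
    W = np.zeros((2, 2))                  # sensitivity d(x_hat)/dk
    Pk = 10.0 * np.eye(2)                 # RLS "covariance" of the gain estimate
    S = 1.0                               # sampled innovation variance (for the gate)
    out = []

    for returns in scans:                 # one array of detected ranges per scan
        x_pred, W_pred = F @ x, F @ W
        gate = g * np.sqrt(S)
        e = None
        r = np.asarray(returns, dtype=float)
        if r.size:                        # nearest neighbour to the prediction
            j = int(np.argmin(np.abs(r - h @ x_pred)))
            if abs(r[j] - h @ x_pred) <= gate:
                e = r[j] - h @ x_pred     # accepted innovation
        if e is None:                     # no return inside the gate: coast
            x, W = x_pred, W_pred
        else:
            psi = h @ W_pred              # = -d e_n / d k
            den = lam + psi @ Pk @ psi
            k = k + (Pk @ psi) * (e / den)                   # RLS-type gain update
            k[0] = np.clip(k[0], 1e-3, 0.999)                # keep alpha inside (0, 1)
            k[1] = np.clip(k[1], 1e-4, (4 - 2 * k[0]) / T)   # stability triangle
            Pk = (Pk - np.outer(Pk @ psi, psi @ Pk) / den) / lam
            x = x_pred + k * e                               # state update, cf. Eq. (4)
            W = W_pred + e * np.eye(2) - np.outer(k, psi)    # sensitivity update
            S = lam * S + (1.0 - lam) * e * e                # sampled innovation variance
        out.append((x.copy(), k[0], k[1] * T, gate))         # alpha, beta, gate per scan
    return out
```

Feeding one array of detected ranges per scan returns the per-scan state estimate, the adapted α and β and the gate width; with clean CV data the adapted gains settle near the steady-state values discussed next.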
In the steady state, the α–β gains are governed by the tracking index Λ [11]; in particular,

\beta = 0.25\left[\Lambda^2 + 4\Lambda - \Lambda\sqrt{\Lambda^2 + 8\Lambda}\right]    (20)
The curves for the optimal α and β are compared in Fig. 2 [8] against the adaptive α and β obtained in simulation. The adaptive terms do not deviate much from the optimal ones and, as noted in [8], come nearer with growing data length, N. On the basis of practical evidence, the bias in the adaptive α and β is assumed negligible.

Figure 2. Optimal and adaptive gains.

However, though unbiased, the randomly fluctuating gain may affect S. In the optimal KF, S_0 follows as

S_0 = h P h^T + R = \alpha S_0 + R \;\Rightarrow\; S_0 = R(1-\alpha)^{-1}    (21)

With Λ→0, α→0 and S_0→R. In the other limit, Λ→∞, α→1 and (21) is singular. Note, however, that 1−α tends to 0 along with decreasing R and, as follows from the familiar Kalman covariance equation [12], S_0 is bounded from above by ΓQΓ^T. To find S_0 at the point Λ=∞, Eq. (21) needs a slight modification. Using the link [11]

\Lambda^2 = \beta^2 (1-\alpha)^{-1}    (22)

Eq. (21) can be rearranged as

S_0 = R\,\Lambda^2 \beta^{-2}    (23)
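As a small numeric illustration (not part of the paper's derivation), the steady-state gain pair can be computed from the tracking index through (20) and the link (22), and cross-checked against the constrained relation (46) of Section IV and against (21)-(23); the value Λ = 1 below matches the setting of Experiment 1.

```python
import math

def kalata_gains(L):
    """Steady-state alpha-beta gains for a given tracking index L, via Eqs. (20), (22)."""
    beta = 0.25 * (L**2 + 4*L - L * math.sqrt(L**2 + 8*L))   # Eq. (20)
    alpha = 1.0 - beta**2 / L**2                             # from Eq. (22)
    return alpha, beta

alpha, beta = kalata_gains(1.0)
print(alpha, beta)                                           # -> 0.75 0.5
assert abs(beta - (2*(2 - alpha) - 4*math.sqrt(1 - alpha))) < 1e-12   # Eq. (46)
R = 1.0                                                      # sigma_v = 1 m
S0 = R / (1.0 - alpha)                                       # Eq. (21)
assert abs(S0 - R * 1.0**2 / beta**2) < 1e-9                 # Eq. (23)
```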
For a large data length, N, the covariance of the vector b = (b_1, ..., b_k) of AR coefficients is approximated as [13, (7.3.5)]

V(b) = N^{-1}\left(1 - r^T b\right) R^{-1}    (24)

where r = (r_1, ..., r_k)^T, r_m = E[y_n y_{n-m}]/E[y_n^2], and R = \{r_{ij}\}, r_{ij} = r_{|i-j|}, i, j = 1, 2, ..., k, is the data covariance matrix. One can readily connect λ and the equivalent window length N. Note that with large N, Eq. (24) applies to AR as well as to MA coefficients. In the case of AR(2), the correlations are linked to the coefficients by the Yule-Walker relations

r_1 = b_1 (1 - b_2)^{-1}, \qquad r_2 = b_1^2 (1 - b_2)^{-1} + b_2    (26)

Substituting these terms into the covariance (24) results in

V_{b_1,b_2}(b_1, b_2) = N^{-1}\begin{pmatrix} 1-b_2^2 & -b_1(1+b_2) \\ -b_1(1+b_2) & 1-b_2^2 \end{pmatrix}    (27)

where V_c(s) means the covariance of the vector c in terms of the vector s.
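The large-sample covariance (27) lends itself to a quick Monte-Carlo check. The sketch below (an illustration with arbitrarily chosen coefficients b_1 = 0.5, b_2 = 0.3; all names are hypothetical) fits AR(2) coefficients by least squares over many independent runs and compares the scaled sample covariance of the estimates with the matrix of Eq. (27).

```python
import numpy as np

def ar2_coeff_cov_check(b1=0.5, b2=0.3, N=2000, trials=400, seed=0):
    """Monte-Carlo check of the large-sample covariance (27) of AR(2) estimates."""
    rng = np.random.default_rng(seed)
    est = np.empty((trials, 2))
    for t in range(trials):
        y = np.zeros(N + 100)
        e = rng.standard_normal(N + 100)
        for n in range(2, N + 100):
            y[n] = b1 * y[n - 1] + b2 * y[n - 2] + e[n]
        y = y[100:]                                    # drop the start-up transient
        Y = np.column_stack((y[1:-1], y[:-2]))         # regressors y_{n-1}, y_{n-2}
        est[t] = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
    mc_cov = N * np.cov(est.T)                         # scaled sample covariance
    theory = np.array([[1 - b2**2, -b1 * (1 + b2)],
                       [-b1 * (1 + b2), 1 - b2**2]])   # Eq. (27) times N
    return mc_cov, theory

mc, th = ar2_coeff_cov_check()
print(np.round(mc, 3))    # close to the theoretical matrix
print(th)                 # [[0.91, -0.65], [-0.65, 0.91]]
```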
After re-deriving the known Eq. (27) [13] we proceed, by the chain rule, to the covariance of α–β:

V_{\alpha,\beta}(\alpha, \beta) = L^{-1}\, V_{b_1,b_2}\!\left[b_1(\alpha,\beta),\, b_2(\alpha,\beta)\right] \left(L^{-1}\right)^T    (28)

With (11)-(12) in mind, Eq. (28) reduces to the covariance of the gain fluctuation, Cov[δk], used below.

When the gain is perturbed by δk, the estimation error updates as

\tilde{x}_n = \left[I - (k+\delta k)h\right]\tilde{x}_{n|n-1} + (k+\delta k)e_n = (I-kh)\tilde{x}_{n|n-1} - \delta k\, h\tilde{x}_{n|n-1} + k e_n + \delta k\, e_n    (32)

Squaring both sides and taking the expectation, we arrive at

P(+) = (I-kh)\,E\!\left[\tilde{x}_{n|n-1}\tilde{x}_{n|n-1}^T\right](I-kh)^T - (I-kh)\,E\!\left[\tilde{x}_{n|n-1}\tilde{x}_{n|n-1}^T h^T \delta k^T\right] - E\!\left[\delta k\, h\tilde{x}_{n|n-1}\tilde{x}_{n|n-1}^T\right](I-kh)^T + \ldots

The term quadratic in δk and the prediction error is approximated as

E\!\left[\delta k\, h\tilde{x}_{n|n-1}\tilde{x}_{n|n-1}^T h^T \delta k^T\right] \approx E\!\left[\delta k\, hPh^T \delta k^T\right] = E\!\left[\delta k\, p_{11}\, \delta k^T\right] = p_{11}\,\mathrm{Cov}[\delta k]    (34)

where p_{11} is the left-upper entry of the predictor covariance P. So,

P(+) \approx (I-kh)P(I-kh)^T + kRk^T + \mathrm{Cov}[\delta k]\,(p_{11} + R) = P(+) + \delta P(+)    (35)

where the increment in the state estimator covariance P(+) reads

\delta P(+) = \mathrm{Cov}(\delta k)\cdot S    (36)

With the aid of (35) and (36), P propagates [12] as

P = F\left[P(+) + \delta P(+)\right]F^T + \Gamma Q \Gamma^T = P + \delta P    (37)

where δP is the increment in the state predictor covariance:

\delta P = F\,\mathrm{Cov}[\delta k]\,S\,F^T    (38)
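The increment (35)-(36) can be verified numerically. The sketch below is only an illustration: it assumes, as in the derivation above, a zero-mean gain fluctuation δk that is independent of the prediction error and of the measurement noise, and it uses arbitrary values of P, R, k and Cov[δk].

```python
import numpy as np

def misadjustment_check(trials=200_000, seed=1):
    """Monte-Carlo check of the extra term Cov[dk]*(p11 + R) in Eq. (35)."""
    rng = np.random.default_rng(seed)
    h = np.array([1.0, 0.0])
    k = np.array([0.75, 0.5])                      # nominal alpha-beta gain (T = 1)
    P = np.array([[3.0, 1.0], [1.0, 2.0]])         # prediction-error covariance
    R = 1.0                                        # measurement-noise variance
    Ck = np.array([[4e-3, 1e-3], [1e-3, 2e-3]])    # Cov[dk], the gain misadjustment

    xt = rng.standard_normal((trials, 2)) @ np.linalg.cholesky(P).T   # ~ N(0, P)
    v = np.sqrt(R) * rng.standard_normal(trials)                      # ~ N(0, R)
    dk = rng.standard_normal((trials, 2)) @ np.linalg.cholesky(Ck).T  # ~ N(0, Ck)

    e = xt @ h + v                                 # innovation seen by the update
    err = xt - (k + dk) * e[:, None]               # updated estimation error, cf. (32)
    empirical = err.T @ err / trials

    I, S = np.eye(2), h @ P @ h + R
    joseph = (I - np.outer(k, h)) @ P @ (I - np.outer(k, h)).T + R * np.outer(k, k)
    theory = joseph + Ck * S                       # Eq. (35): nominal P(+) plus Cov[dk]*S
    return empirical, theory

emp, th = misadjustment_check()
print(np.round(emp, 3))   # agrees with the theoretical value to Monte-Carlo accuracy
print(np.round(th, 3))
```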
Accordingly, the output variance follows as

S = h(P + \delta P)h^T + R = S_0 + \delta S    (39)

with

\delta S = h(\delta P)h^T = S_0 \cdot h F\,\mathrm{Cov}[\delta k]\,F^T h^T    (40)

By straightforward computations,

h F\,\mathrm{Cov}[\delta k]\,F^T h^T = \delta\alpha^2 + \delta\beta^2    (41)

and, finally,

S = S_0\left(1 + \delta\alpha^2 + \delta\beta^2\right)    (42)

where δα^2 + δβ^2 can be thought of as the matrix trace for (29), a norm of the vector δg: ||δg|| = (δα)^2 + (δβ)^2. Thus, gain adaptation amplifies S_0 according to (42) or, equivalently,

S = S_0\left[1 + N^{-1} f_s\right]    (43)

where

f_s = (2-\beta)(4 - 2\alpha + \beta)    (44)

The latter factor (depicted in Fig. 3) is attenuated in (43) by the window length N = (1-λ)^{-1}, clearly in a tradeoff with the tracking ability.

Figure 3. The factor f_s.
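As a small worked example (the numbers are illustrative only), the factor (44) and the inflation (43) can be evaluated for the steady gains α = 0.75, β = 0.5 that correspond to Λ = 1, with two forgetting factors:

```python
def misadjustment_inflation(alpha, beta, lam):
    """f_s of Eq. (44) and the ratio S/S0 of Eq. (43) with N = (1 - lam)^-1."""
    f_s = (2.0 - beta) * (4.0 - 2.0 * alpha + beta)   # Eq. (44)
    N = 1.0 / (1.0 - lam)                             # equivalent window length
    return f_s, 1.0 + f_s / N                         # Eq. (43)

for lam in (0.8, 0.98):
    f_s, ratio = misadjustment_inflation(0.75, 0.5, lam)
    print(lam, f_s, round(ratio, 3))   # f_s = 4.5; S/S0 = 1.9 and 1.09
```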
As seen from Fig. 10, the resulting RMS (dash curves) differs very little from the square root of S_0. That is, the AF is close to its optimal counterpart, a point to be checked later in simulations. The above advice for the KF, to keep σ_v such that Λ is not too large (below Λ ≈ 10), remains valid for the AF. The S required for gating can be found using the sampled variance; in the AF it is close to S_0.

To prevent divergence, AKF usually needs monitoring. In the α–β filter, the stability triangle implies the limits 0 < α < 2 and 0 < β < 2 [14]. However, the upper bound for α is too high. Inspired by the Kalman gain curve (see Fig. 2), α may be restricted from above by the stronger condition α < 1. Another issue is the tracking ability of the AF. As illustrated in the simulations, the inspected AF automatically changes its gain when the input jumps. The transient duration depends on λ.
IV. EXTENDED KINEMATIC MODELS

The two-state CV model may be restrictive when more complex forms underlie the target motion. In this view, several extended variants of the α–β filter are of interest.

First, to generalize the CV model within the position-velocity framework, we can view the process model as a cascade of two damped integrators with the decay factors μ_1 and μ_2. Let x_t be, e.g., a first-order Markov process decaying with the factor μ_1 and driven by the velocity v_t obeying μ_2. Introducing the notation ρ_i = exp(-μ_i T), i = 1, 2, the corresponding TM, instead of (2), follows as

F(\mu_1, \mu_2) = \begin{pmatrix} \rho_1 & (\rho_1 - \rho_2)/(\mu_2 - \mu_1) \\ 0 & \rho_2 \end{pmatrix}    (45)

TM (45) covers different two-state kinematic forms ranging from the deterministic polynomial to the stochastic model ARMA(2, 2). In particular, it encompasses the well-known exponentially-correlated velocity (ECV) case, a particular model with decaying velocity. Including the corresponding decay factor in the vector of estimated parameters implies an extended, gain-and-tau form of the adaptive filter [15].
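A short numeric check (illustrative only; the helper name and the handling of the μ_1 = μ_2 limit are this example's assumptions) confirms that the damped-integrator TM (45) collapses to the CV matrix (2) as μ_1, μ_2 → 0 and gives the familiar ECV form when only the velocity decays:

```python
import math

def transition_matrix(mu1, mu2, T=1.0):
    """Transition matrix (45) of two cascaded damped integrators."""
    r1, r2 = math.exp(-mu1 * T), math.exp(-mu2 * T)
    if abs(mu2 - mu1) > 1e-12:
        f12 = (r1 - r2) / (mu2 - mu1)
    else:
        f12 = T * r1                   # limit of (rho1 - rho2)/(mu2 - mu1) as mu2 -> mu1
    return [[r1, f12], [0.0, r2]]

print(transition_matrix(1e-9, 2e-9))   # ~[[1, T], [0, 1]], the CV matrix of Eq. (2)
print(transition_matrix(0.0, 0.5))     # ECV: [[1, (1 - exp(-0.5T))/0.5], [0, exp(-0.5T)]]
```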
Another helpful idea is to constrain the adaptive gains α and β. One method is to use the optimal link [12]

\beta = 2(2-\alpha) - 4\sqrt{1-\alpha}    (46)

In this case, the filter reduces to a single-parameter adaptive form [7] in which α is adjusted with a properly modified constrained regression vector while β is produced as a function of α. This variant of the AF provides lower adjustment noise and favors the steady mode.
In the case of a maneuvering target, the CV model is usually extended by an acceleration input, a way leading to a third-order kinematic model with an extra acceleration state. The above analysis of the output RMS can be applied here in a like manner. Leaving this model outside the present study, in the following simulations we present an example in which the two-state adaptive α–β tracker copes rather successfully with an abrupt input jump. Apparently, under more severe conditions the two-state AF can fail, but its capability for tracking may always be improved by increasing the sampling rate.

V. SIMULATION

Experiment 1 (Figs. 4-6) presents a typical run of the AF under σ_w = 1 m/s, σ_v = 1 m and T = 1 s, in comparison to the KF produced with Λ = 1. The adaptive gain tends to the optimal one, and the difference in range (and velocity) is small.

In Experiment 2 (Figs. 7-9), we consider a scenario in which the target motion is perturbed by an acceleration step. Here the AF properly changes its gain at the step onset, while the transient duration agrees with that prescribed by λ. The two-state AF interprets the acceleration as velocity noise and properly modifies its gain. After the step offset, the AF returns to the 'optimal' KF-like mode. The AF gives nearly unbiased range and velocity, though at the cost of higher noise, while the fixed-gain KF provides strongly biased states.

In Experiment 3 (Fig. 10), we check whether the sampled output variance S complies with theory. For this purpose, the AF runs with different Λ, σ_w and λ. Each scenario is repeated a few times and the sampled S is averaged. An initial segment corrupted by the AF transient is excluded from consideration. The results of these trials are depicted in Fig. 10 in comparison to Eq. (23) (bold curves) and Eq. (43) (dash), respectively. As seen, for λ closer to 1 (e.g., λ = 0.995 with the equivalent N = 50) the AF nearly equals the KF. For λ = 0.8 (N = 5), the S provided by the AF is slightly higher, in agreement with Eq. (43). Anyhow, the sampled variance S given by the AF is very close to the minimal S_0 prescribed by the KF.

Experiment 4 (Fig. 11) illustrates the AF capability to work with a small gate factor k (e.g., k = 2). In spite of a rather high probability of rejected measurements, the AF holds stability. The gate factor may be properly reduced in order to enable target isolation and resistance to clutter.
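For orientation (an aside, not a result of the paper), the probability that a valid return with a Gaussian error falls outside a ±k√S gate follows directly from the normal distribution; for the gate factor k = 2 of Experiment 4 it is roughly 4.6% per scan:

```python
import math

def miss_probability(gate_factor):
    """P(|error| > gate_factor * sqrt(S)) for a zero-mean Gaussian error of variance S."""
    return math.erfc(gate_factor / math.sqrt(2.0))

for g in (1, 2, 3):
    print(g, round(miss_probability(g), 4))   # -> 0.3173, 0.0455, 0.0027
```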
Figures 4-9. Range, velocity and gain histories: AF versus KF (Experiments 1 and 2).
Figure 10. RMS of the prediction error.

Figure 11. Prediction error and gate (k = 2).
An interested reader can find in [8, 9] more experiments demonstrating the properties of the AF. One experiment shows that the adaptive gain accurately follows an SNR that varies in abrupt steps. Another presents the single-parameter AF constrained by Eq. (46).

VI. CONCLUSIONS

A major concern of this paper is the adaptive α–β tracker oriented for practical use in the range gating technique. As shown, this filter is nearly optimal: the prediction error produced by the AF stays very close to the minimal one, yielding a lower output variance and, respectively, a smaller gate. The suggested AF is very simple compared to the different known forms of AKF and can be easily implemented. Extended variants of this AF, based upon the same principle, facilitate target tracking in more complex scenarios such as, e.g., tracking of a highly maneuvering target with abruptly switching acceleration or jerk.

REFERENCES
[1] M. Skolnik, Introduction to Radar Systems, 2nd ed., McGraw-Hill, 1980.
[2] G. W. Pulford, "Relative performance of single-scan algorithms for passive sonar tracking," Ann. Conf. Australian Acoustical Society, Adelaide, Australia, pp. 212-221, 2002.
[3] X. R. Li and V. P. Jilkov, "A survey of maneuvering target tracking. Part IV: Decision-based methods," in Proc. 2002 SPIE Conf. Signal and Data Processing of Small Targets, vol. 4728, pp. 511-534, 2002.
[4] P. S. Maybeck, Stochastic Models, Estimation and Control, vol. 2, Academic Press, New York, 1994.
[5] B.-D. Kim and J.-S. Lee, "Adaptive α–β tracker for TWS radar system," ICCAS 2005, KINTEX, Gyeonggi-Do, June 2-5, 2005.
[6] A. H. Mohamed and K. P. Schwarz, "Adaptive Kalman filtering for INS/GPS," Journal of Geodesy, vol. 73, pp. 193-203, 1999.
[7] Y. H. Lho and J. H. Painter, "A fuzzy-tuned adaptive Kalman filter," Third Int. Conf. on Industrial Fuzzy Control and Intelligent Systems, pp. 144-148, 1993.
[8] N. Chernoguz, "Adaptive-gain tracking filters based on minimization of the innovation variance," in Proc. 2006 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Toulouse, vol. 3, pp. 41-44, 2006.
[9] N. Chernoguz, "Adaptive-gain kinematic filters of order 2-4," Journal of Computers, Academy Publisher, vol. 2, no. 6, pp. 17-25, 2007.
[10] T. Söderström and P. Stoica, System Identification, Prentice Hall, 1989.
[11] P. Kalata, "The tracking index: A generalized parameter for α–β and α–β–γ target trackers," IEEE Trans. Aerospace and Electronic Systems, vol. 20, no. 2, pp. 174-182, 1984.
[12] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association, Academic Press, 1988.
[13] G. Box and G. Jenkins, Time Series Analysis: Forecasting and Control, 1970.
[14] D. Tenne and T. Singh, "Characterizing performance of α–β–γ filters," IEEE Trans. Aerospace and Electronic Systems, vol. 38, no. 3, pp. 1072-1087, July 2002.
[15] N. Chernoguz, "Adaptive-gain-and-tau tracking filters for correlated target maneuvers," in Proc. 2008 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Las Vegas, pp. 3553-3556, 2008.