
Adaptive Range Tracking for Radar Technique

Conference paper in: IEEE National Radar Conference – Proceedings, May 2011
DOI: 10.1109/RADAR.2011.5960526
Author: Naum Chernoguz
Available at: https://www.researchgate.net/publication/252020029


Adaptive Range Tracking for Radar Technique

Naum Chernoguz
Israel Aerospace Industries and ELFI Tech♦
Israel

Abstract—Range tracking, a traditional part of the radar/sonar range gating technique, requires a target predictor that is fully adaptive and yet simple. Under the constant velocity (CV) assumption, one can apply an adaptive-gain form of the α–β filter based on the recursive prediction error method. The adaptive α–β terms tend to the Kalman gain and remain nearly optimal, with a minor degradation caused by the parameter misadjustment noise. An extended variant of the filter admits, in addition to the CV form, both polynomial and stochastic signals. The adaptive tracker improves the sensor bandwidth and allows more accurate range gating.

I. INTRODUCTION

In many tracking radar and sonar applications the target is continuously tracked in range. A similar problem of object tracking/separation has attracted the interest of researchers in biophysical, medical and other fields. The method of automatic tracking relies on the split range gate principle [1]. Range gating (schematically illustrated in Fig. 1) is a regular part of the track-while-scan (TWS) circuitry based on the target predictor. In practice, for computational reasons, various simplified forms of the Kalman filter (KF) are the candidates for such a predictor. Under the constant velocity (CV) assumption, the common choice is the two-state fixed-gain α–β filter [2]. The difficulty here is in getting information about the signal-to-noise ratio (SNR), which is critical for the filter performance.

When the SNR is uncertain or time-varying, the usual recipe is adaptive Kalman filtering (AKF) [3, 4]. However, AKF exploits rather sophisticated tools based mostly upon decision-directed or noise identification methods, whereas the range tracking technique needs a simpler device in order to meet the TWS requirements. As for the above-mentioned α–β tracker, in spite of its simplicity the adaptation of this device is not a trivial matter. Known adaptive variants of the α–β filter, e.g. [5-7], either rely on the universal methods of AKF or use various forms of user experience such as expert rules, fuzzy reasoning, etc. The remaining niche can be filled with a simple and equally fully automatic adaptive-gain tracker [8, 9]. The attractiveness of this filter is that the α–β gain is adjusted in a manner similar to the usual recursive estimation of an autoregressive moving-average (ARMA) model, based upon the classical parameter identification technique [10].

Specifically, the position-velocity model underlying the two-state α–β filter is associated with the moving average MA(2), a special case of ARMA. An adaptive form of the α–β filter follows, after some manipulations, in analogy to the MA estimator. In view of gating, this adaptive filter (AF) should be given a certain modification. In addition, we briefly discuss some extended variants of the α–β filter enabling more flexibility within the same position-velocity framework.

A major objective of this work is to study the properties of the adaptive α–β filter and its applicability to automatic tracking. As will be shown, the devised AF closely follows the true SNR with only minor deterioration induced by the gain misadjustment noise. As a part of the split gate circuitry, it improves the tracker bandwidth, minimizes the output noise and produces a smaller gate than does the fixed-gain tracker. As is clear from Fig. 1, it thus enables better isolation of one target by excluding targets at other ranges.

The rest of the paper is organized as follows. In Section 2, the adaptive α–β filter is given with a modification required by the range gating procedure. Section 3 analyzes the root mean square (RMS) of the output error influenced by the gain misadjustment noise. Extended variants of the adaptive α–β filter are briefly discussed in Section 4. Section 5 depicts the results of simulation, and Section 6 concludes the study.

Figure 1. Target tracking with range gating. [Plot omitted: range versus sample for three targets, with the gate isolating one of them.]

♦ Sponsor of the presentation.

II. ADAPTIVE GAIN TRACKING FILTER

The steady two-state α–β filter relies on the target model

    x_{n+1} = F x_n + Γ w_n    (1)

where x_n = [r_n v_n]ᵀ is the kinematic range-velocity state vector, the subscript n denotes the time instant, w_n is the process noise of variance Q = σ_w², and the superscript T marks matrix or vector transposition.
F and Γ denote, respectively, the transition matrix (TM) and the input matrix associated with the sampling interval T:

    F = [1 T; 0 1],   Γ = [0.5T²; T]    (2)

Given only position data, the observation equation reads

    y_n = h x_n + v_n    (3)

where h = [1 0] and v_n is the measurement noise of variance R = σ_v² (w_n and v_n are assumed mutually independent). Given the gain k, the state estimator follows as

    x̂_n = F x̂_{n−1} + k e_n    (4)

where the innovation (prediction error) e_n is defined as

    e_n = y_n − h F x̂_{n−1}    (5)

In the α–β arrangement the fixed gain k is parameterized by the normalized terms α and β as k = [α  β/T]ᵀ. In the adaptive filter, the steady (or slowly varying) gain k is adjusted according to the prediction error e_n. In the direct parameterization form of the steady KF [10], all unknowns are cast in k. For the parameter estimation we use the input-output equation [8, 9]

    y_n = W_{e→y}(q) e_n    (6)

The e_n→y_n transfer function in (6) is specified as [8, 9]

    W_{e→y} = D₂/Δ²    (7)

where Δᵐ, m = 1, 2, …, denotes the m-order difference operator and D_m is an m-order polynomial in q⁻¹ (as usual, q⁻¹ stands for the backward time shift). For the α–β filter m = 2 and

    D₂ = 1 + (α + β − 2)q⁻¹ + (1 − α)q⁻²    (8)

Viewing y_n as a time series, the bracketed expressions in (8) give the MA coefficients b₁, b₂ in a one-to-one link with the α–β terms. The gain adaptation is thus viewed as parameter identification and can be implemented with the recursive prediction error method or, in particular, the recursive least-squares (RLS) method [10]. For this sake, the innovation e_n given by the KF replaces the prediction error.

In the MA(2) case, the regression vector (or sensitivity function) required by RLS is the row vector

    Ψ = −(ξ_{n−1}, ξ_{n−2})    (9)

where ξ_n stands for the so-called prefiltered prediction error,

    ξ_n = e_n/D₂    (10)

As seen from (8), b = (b₁, b₂)ᵀ is related to g = (α, β)ᵀ by

    b = L g + L₀    (11)

with L₀ = [−2, 1]ᵀ and

    L = [1 1; −1 0]    (12)

So the regression vector for g (denoted by H) comes as

    H = Ψ L = (−q⁻¹Δξ_n, −q⁻¹ξ_n)    (13)

Note that the same result can be obtained directly from Eq. (6) by differentiating e_n with respect to α and β:

    H = (∂e_n/∂α, ∂e_n/∂β) = (−q⁻¹Δξ_n, −q⁻¹ξ_n)    (14)

Replacing the regression vector Ψ by H, one obtains a recursive estimator of the α–β terms. Coupling this estimator, i.e. the gain adaptor, with the state update equations (4)-(5) provides an adaptive form of the KF [8].
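The affine link (11)-(12) between the MA(2) coefficients and the α–β terms is easy to check numerically; the following plain-Python sketch (the gain values are illustrative) maps g = (α, β) to b = (b₁, b₂) and inverts the map:

```python
# Map alpha-beta gains to MA(2) coefficients via b = L*g + L0 (Eqs. 11-12)
# and invert the link; the numeric values are illustrative.

def g_to_b(alpha, beta):
    # L = [1 1; -1 0], L0 = [-2, 1]
    b1 = alpha + beta - 2.0   # coefficient of q^-1 in D2, Eq. (8)
    b2 = 1.0 - alpha          # coefficient of q^-2 in D2
    return b1, b2

def b_to_g(b1, b2):
    # Inverse affine link: g = L^-1 (b - L0), with L^-1 = [0 -1; 1 1]
    alpha = 1.0 - b2
    beta = b1 + b2 + 1.0
    return alpha, beta

alpha, beta = 0.75, 0.5        # illustrative gains
b1, b2 = g_to_b(alpha, beta)   # -> (-0.75, 0.25)
assert b_to_g(b1, b2) == (alpha, beta)
```

The one-to-one property of the link is what lets the RLS adaptor estimate b while the filter operates on (α, β).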
Next, this filter is fused with range gating. When more than one return falls inside the gate, possibly from different targets, it is common to use the one closest to the prediction (the nearest-neighbor KF [2]). If no return is found inside, the update is omitted. To form the gate, one needs the innovation variance S. The effect of gating can be accounted for using the robust variant of RLS [10, pp. 498-499]. View the problem as estimation of k by minimizing the weighted sum [10]

    Z_n = Σ_{i=1..n} ω_i λⁿ⁻ⁱ e_i²    (15)

where λ is the forgetting factor (FF), coupled with the weight

    ω_i = 1,            if |e_i| ≤ k√S_i
    ω_i = k²S_i/e_i²,   if |e_i| > k√S_i    (16)

The gate factor k is set by the tradeoff between robustness and accuracy and usually varies from 2 to 4. In the resulting AF (Table 1) the sampled variance is estimated with a sliding window as

    S_n = λS_{n−1} + (1 − λ)e_n²ω_n    (17)

Table 1. Adaptive-Gain Tracking Filter with Range Gating.

Design parameters: TM F [due to Eq. (2)], gate factor k, P₀, normalized gain g₀, observation weight ω₀ = 1, initial FF λ₀, final FF λ∞.
Initialization: x₀ = x(0), k₀ = [1, 1/T]·g₀, S₀, Gate = k·√S₀, ξ₀ = ξ₋₁ = 0.
Compute for time instant n = 1, 2, …:
  State estimator:
    e_n = y_n − h F x_{n−1}                      (innovation)
    if |e_n| > Gate: ω_n = Gate²/e_n², else ω_n = 1
    x_n = F x_{n−1} + k_{n−1} e_n                (state update)
  RLS:
    H = (−ξ_{n−1} + ξ_{n−2}, −ξ_{n−1})           (regressor, Eq. (13))
    d_n = H P Hᵀ + λ_{n−1}ω_n                    (denominator)
    G = P Hᵀ/d_n                                 (RLS gain)
    P = (P − G H P)/λ_{n−1}                      (covariance update)
    g_n = g_{n−1} + G e_n√ω_n                    (parameter update)
    b = L g_n + L₀                               (Eq. (11))
    k_n = [1, 1/T]·g_n                           (tracker gain)
    ξ_n = e_n√ω_n − b₁ξ_{n−1} − … − b_Nξ_{n−N}   (error prefilter, Eq. (10); here N = 2)
    S_n = λ_{n−1}S_{n−1} + (1 − λ_{n−1})e_n²ω_n  (output variance)
    Gate = k·√S_n
    λ_n = μλ_{n−1} + (1 − μ)λ∞                   (FF update)
III. ANALYSIS

A parameter essential for gating is S. In theory, S depends on the gain, which in turn depends solely on the tracking index, the integrated parameter suggested in [11]:

    Λ = σ_w T²/σ_v    (18)

The optimal α and β then follow as functions of Λ [12]:

    α = −0.125[Λ² + 8Λ − (Λ + 4)√(Λ² + 8Λ)]    (19)

    β = 0.25[Λ² + 4Λ − Λ√(Λ² + 8Λ)]    (20)
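A quick numeric check of (19)-(20) in plain Python (the tracking index value is illustrative). At Λ = 1 the radicand is 9, so the gains come out exactly as α = 0.75, β = 0.5, and the pair satisfies the known identity Λ² = β²/(1 − α) [11]:

```python
import math

# Optimal alpha-beta gains as functions of the tracking index, Eqs. (19)-(20).
def kalata_gains(L):
    s = math.sqrt(L * L + 8.0 * L)
    alpha = -0.125 * (L * L + 8.0 * L - (L + 4.0) * s)
    beta = 0.25 * (L * L + 4.0 * L - L * s)
    return alpha, beta

a, b = kalata_gains(1.0)     # tracking index Lambda = 1
print(a, b)                  # -> 0.75 0.5 (exact, since sqrt(9) = 3)
# Known identity relating the optimal gains to the index [11]:
assert abs(b * b / (1.0 - a) - 1.0) < 1e-12
```

As Λ grows, α approaches 1 and β approaches 2, in line with the limits discussed below.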
The curves of the optimal α and β are compared in Fig. 2 [8] with the values of the adaptive α and β obtained in simulation. The adaptive terms do not deviate much from the optimal ones and, as noted in [8], come closer with growing data length N. On the basis of practical evidence, the bias in the adaptive α and β is assumed negligible. However, though unbiased, the randomly fluctuating gain may affect S. In the optimal KF, S₀ follows as

    S₀ = h P hᵀ + R = αS₀ + R  ⇒  S₀ = R/(1 − α)    (21)

With Λ→0, α→0 and S₀→R. In the other limit, Λ→∞, α→1 and (21) is singular. Note, however, that 1 − α tends to 0 along with decreasing R and, as follows from the familiar Kalman covariance equation [12], S₀ is bounded from above by ΓQΓᵀ. To find S₀ at the point Λ = ∞, Eq. (21) needs a slight modification. Using the link [11]

    Λ² = β²/(1 − α)    (22)

Eq. (21) can be rearranged as

    S₀ = σ_w²T⁴/β²    (23)

Now it is obvious that when Λ→∞ and β→2, S₀ is bounded by σ_w²T⁴/4. The minimal output RMS (square root of S₀) as a function of Λ and σ_w is depicted (bold curves) in Fig. 10 in Section 5, where it is compared to the simulated S. As Λ grows, the output RMS saturates: it cannot fall below σ_wT²/2, an amount usually interpreted as the signal strength. Further efforts to decrease σ_v would then be futile; the accuracy should be in proportion to the expected σ_wT²/2, not better. In Fig. 2 this bound (near Λ ≈ 10) takes place when α ≈ 1. (This may explain the ad hoc rule, popular in practice, of assuming Λ = 10.) At this point, using the link (7) between the KF and ARMA models, we shall evaluate the effect of the gain misadjustment noise on S.

For large data length N, the covariance of the vector b = (b₁, …, b_k) of AR coefficients is approximated as [13, Eq. (7.3.5)]

    V(b) = N⁻¹(1 − rᵀb)R̄⁻¹    (24)

where r = (r₁, …, r_k)ᵀ, r_m = E[y_n y_{n−m}]/E[y_n²], and R̄ = {r_{ij}}, r_{ij} = r_{|i−j|}, i, j = 1, 2, …, k, is the data correlation matrix (not to be confused with the measurement noise variance R). One can readily connect λ and the equivalent window length N. Note that with large N, Eq. (24) applies to AR as well as MA coefficients. In the AR(2) case the Yule-Walker equations [13] read

    r₁ = b₁ + b₂r₁,   r₂ = b₁r₁ + b₂    (25)

The solution for r₁, r₂ follows as

    r₁ = b₁/(1 − b₂),   r₂ = b₁²/(1 − b₂) + b₂    (26)

Substituting these terms into the covariance (24) results in

    V_{b₁,b₂}(b₁, b₂) = N⁻¹ [1 − b₂², −b₁(1 + b₂); −b₁(1 + b₂), 1 − b₂²]    (27)

where V_c(s) means the covariance of the vector c in terms of the vector s. After re-deriving the known Eq. (27) [13] we proceed, by the chain rule, to the covariance of the α–β terms:

    V_{α,β}(α, β) = L⁻¹ V_{b₁,b₂}[b₁(α, β), b₂(α, β)] (L⁻¹)ᵀ    (28)

With (11)-(12) in mind, Eq. (28) reduces to

    V_{α,β}(α, β) = N⁻¹(2 − β) [2(2 − α), 2 − α − β; 2 − α − β, β]    (29)

Now, given the covariance of g, one can determine the effect of the gain fluctuation on the state estimator covariance, denoted P(+). As the adaptive gain deviates from the optimal k by a small term δk, the state estimator update equation reads

    x̂_n = F x̂_{n−1} + (k + δk)(y_n − h F x̂_{n−1})    (30)

Using the notations

    x̃_{n|n−1} = x̂_{n|n−1} − x_n,   x̃_n = x̂_n − x_n,    (31)

one obtains

    x̃_n = [I − (k + δk)h] x̃_{n|n−1} + (k + δk)v_n
        = (I − kh)x̃_{n|n−1} − δk·h x̃_{n|n−1} + k v_n + δk·v_n    (32)

Squaring both sides and taking expectations, we arrive at

    P(+) = (I − kh) E[x̃_{n|n−1}x̃ᵀ_{n|n−1}] (I − kh)ᵀ
         − (I − kh) E[x̃_{n|n−1}x̃ᵀ_{n|n−1}hᵀδkᵀ] − E[δk·h x̃_{n|n−1}x̃ᵀ_{n|n−1}] (I − kh)ᵀ
         + E[δk·h x̃_{n|n−1}x̃ᵀ_{n|n−1}hᵀδkᵀ]
         + kRkᵀ + kR E(δkᵀ) + E(δk)Rkᵀ + E[δk·R·δkᵀ]    (33)

Assuming that δk is a zero-mean random variable independent of the state errors,

    E[δk·h x̃_{n|n−1}x̃ᵀ_{n|n−1}hᵀδkᵀ] ≈ E[δk·hPhᵀ·δkᵀ] = E[δk·p₁₁·δkᵀ] = p₁₁ Cov[δk]    (34)

where p₁₁ is the upper-left entry of the predictor covariance P. So,

    P(+) ≈ (I − kh)P(I − kh)ᵀ + kRkᵀ + Cov[δk](p₁₁ + R) = P(+) + δP(+),    (35)

where the increment in the state estimator covariance P(+) reads

    δP(+) = Cov(δk)·S    (36)

With the aid of (35) and (36), P propagates [12] as

    P = F[P(+) + δP(+)]Fᵀ + ΓQΓᵀ = P + δP    (37)

where δP is the increment in the state predictor covariance:

    δP = F Cov[δk] S Fᵀ    (38)

Figure 2. Optimal and adaptive gains. [Plot omitted: theoretical and adaptive α, β versus the tracking index power (log₁₀Λ from −3 to 3).]
Accordingly, the minimal output variance follows as

    S̄₀ = h(P + δP)hᵀ + R = S₀ + δS    (39)

with

    δS = h(δP)hᵀ = S₀·h F Cov[δk] Fᵀ hᵀ    (40)

By straightforward computation,

    h F Cov[δk] Fᵀ hᵀ = δα² + δβ²    (41)

and, finally,

    S = S₀(1 + δα² + δβ²)    (42)

where δα² + δβ² can be thought of as the matrix trace of (29), a norm of the vector g: ||δg|| = (δα)² + (δβ)². Thus, gain adaptation amplifies S₀ according to (42) or, equivalently,

    S = S₀[1 + N⁻¹ f_S]    (43)

where

    f_S = (2 − β)(4 − 2α + β)    (44)

The latter factor (depicted in Fig. 3) is attenuated in (43) by the window length N = (1 − λ)⁻¹, clearly in a tradeoff with the tracking ability. As seen from Fig. 10, the resulting RMS (dashed curves) differs very little from the square root of S₀. That is, the AF is close to its optimal counterpart, a point to be checked later in simulations. The above advice for the KF, to keep σ_v such that Λ is not too large (below Λ ≈ 10), remains valid for the AF. The S required for gating can be found from the sampled variance; in the AF it is close to S₀.

To prevent divergence, AKF usually needs monitoring. In the α–β filter, the stability triangle implies the limits 0 < α < 2 and 0 < β < 2 [14]. However, the upper bound for α is too high. Inspired by the Kalman gain curve (see Fig. 2), α may be restricted from above by the stronger condition α < 1. Another issue is the tracking ability of the AF. As illustrated in the simulations, the inspected AF automatically changes its gain when the input jumps. The transient duration depends on λ.

Figure 3. Factor f_S. [Plot omitted: f_S versus the tracking index on log-log axes.]
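For a feel of the magnitudes, the following sketch evaluates the inflation (43)-(44) at the illustrative values Λ = 1 and λ = 0.98 (N = 50):

```python
import math

# Variance inflation of the adaptive filter, Eqs. (42)-(44): S = S0*(1 + fS/N).
def inflation(L, lam):
    s = math.sqrt(L * L + 8.0 * L)
    alpha = -0.125 * (L * L + 8.0 * L - (L + 4.0) * s)   # Eq. (19)
    beta = 0.25 * (L * L + 4.0 * L - L * s)              # Eq. (20)
    fS = (2.0 - beta) * (4.0 - 2.0 * alpha + beta)       # Eq. (44)
    N = 1.0 / (1.0 - lam)                                # equivalent window
    return 1.0 + fS / N                                  # S / S0, Eq. (43)

# At Lambda = 1 (alpha = 0.75, beta = 0.5): fS = 1.5 * 3.0 = 4.5, so with
# lambda = 0.98 (N = 50) the variance grows by about 9 percent, i.e. the
# output RMS by about 4.4 percent.
print(inflation(1.0, 0.98))   # -> about 1.09
```

This is the sense in which the adaptation cost stays minor for λ close to 1.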
IV. EXTENDED KINEMATIC MODELS

The two-state CV model may be restrictive when more complex forms underlie the target motion. In this view, several extended variants of the α–β filter are of interest.

First, to generalize the CV model within the position-velocity framework, we can view the process model as a cascade of two damped integrators with decay factors μ₁ and μ₂. Let x_t be, e.g., a first-order Markov process decaying with the factor μ₁ and driven by the velocity v_t obeying μ₂. Introducing the notation ρ_i = exp(−μ_iT), i = 1, 2, the corresponding TM, instead of (2), follows as

    F(μ₁, μ₂) = [ρ₁, (ρ₁ − ρ₂)/(μ₂ − μ₁); 0, ρ₂]    (45)

The TM (45) covers different two-state kinematic forms ranging from the deterministic polynomial to the stochastic model ARMA(2, 2). In particular, it encompasses the well-known exponentially-correlated velocity (ECV) case, a particular model with decaying velocity. Including the corresponding decay factor in the vector of estimated parameters yields an extended, gain-and-tau form of the adaptive filter [15].

Another helpful idea is to constrain the adaptive gains α and β. One method is to use the optimal link [12]

    β = 2(2 − α) − 4√(1 − α)    (46)

In this case, the filter reduces to a single-parameter adaptive form [7] where α is adjusted with a properly modified, constrained regression vector while β is produced as a function of α. This variant of the AF provides lower adjustment noise and favors the steady mode.

In the case of a maneuvering target, the CV model is usually extended by an acceleration input, which leads to the third-order kinematic model with an extra acceleration state. The above analysis of the output RMS can be applied there in a like manner. Leaving this model outside the present study, in the following simulations we present an example in which the two-state adaptive α–β tracker copes rather successfully with an abrupt input jump. Under more severe conditions the two-state AF can fail, but its tracking capability may always be improved by increasing the sampling rate.
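A sketch of the extended TM (45) in plain Python (the decay values are illustrative). Note that (45) is singular at μ₁ = μ₂, while for vanishing decay it reduces to the CV matrix of Eq. (2):

```python
import math

# Extended two-state transition matrix of Eq. (45) for decay factors mu1, mu2.
# Note the formula is singular at mu1 == mu2.
def extended_tm(mu1, mu2, T=1.0):
    r1, r2 = math.exp(-mu1 * T), math.exp(-mu2 * T)
    return [[r1, (r1 - r2) / (mu2 - mu1)], [0.0, r2]]

# As mu1, mu2 -> 0 the matrix tends to the CV form [1 T; 0 1] of Eq. (2).
F = extended_tm(1e-9, 2e-9, T=1.0)
assert abs(F[0][0] - 1.0) < 1e-6 and abs(F[0][1] - 1.0) < 1e-6

# mu1 = 0 gives the exponentially-correlated velocity (ECV) case:
F_ecv = extended_tm(0.0, 0.5, T=1.0)   # [[1, (1 - e^-0.5)/0.5], [0, e^-0.5]]
```

Replacing F of Eq. (2) with F(μ₁, μ₂) leaves the rest of the adaptive machinery unchanged.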
V. SIMULATION

Experiment 1 (Figs. 4-6) presents a typical run of the AF under σ_w = 1 m/s, σ_v = 1 m and T = 1 s, in comparison to the KF produced with Λ = 1. The adaptive gain tends to the optimal one, and the difference in range (and velocity) is small.

In Experiment 2 (Figs. 7-9), we consider a scenario in which the target motion is perturbed by an acceleration step. Here the AF properly changes its gain at the step onset, while the transient duration agrees with that prescribed by λ. The two-state AF interprets the acceleration as velocity noise and modifies its gain accordingly. After the step offset, the AF returns to the 'optimal' KF-like mode. The AF gives nearly unbiased range and velocity, though at the cost of higher noise, while the fixed-gain KF provides strongly biased states.

In Experiment 3 (Fig. 10), we check whether the sampled output variance S complies with the theory. For this purpose, the AF runs with different Λ, σ_w and λ. Each scenario is repeated a few times and the sampled S is averaged. An initial segment corrupted by the AF transient is excluded from consideration. The results of these trials are depicted in Fig. 10 in comparison to Eq. (23) (bold curves) and Eq. (43) (dashed), respectively. As seen, for λ close to 1 (e.g., λ = 0.995) the AF nearly equals the KF. For λ = 0.8 (N = 5), the S provided by the AF is slightly higher, in agreement with Eq. (43). In any case, the sampled variance S given by the AF is very close to the minimal S₀ prescribed by the KF.

Experiment 4 (Fig. 11) illustrates the AF's capability to work with a small gate factor k (e.g., k = 2). In spite of a rather high probability of rejected measurements, the AF holds stability. The gate factor may be properly reduced in order to enable target isolation and resistance to clutter.
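An Experiment 1/3-style check can be sketched in plain Python: a fixed-gain α–β filter runs on a CV trajectory with σ_w = σ_v = T = 1 (hence Λ = 1 and optimal gains α = 0.75, β = 0.5), and the sampled innovation variance is compared with S₀ = R/(1 − α) = σ_w²T⁴/β² = 4 from Eqs. (21) and (23). The seed and run length are illustrative:

```python
import random

# Fixed-gain alpha-beta run at Lambda = 1: the sampled innovation variance
# should approach S0 = R/(1 - alpha) = sigma_w^2 T^4 / beta^2 = 4.
random.seed(7)
T, alpha, beta = 1.0, 0.75, 0.5    # optimal gains at Lambda = 1, Eqs. (19)-(20)
r = v = 0.0                        # true range and velocity
re = ve = 0.0                      # filter state estimate
total, n_steps = 0.0, 20000
for n in range(n_steps):
    w = random.gauss(0.0, 1.0)     # process noise, Eq. (1)
    r, v = r + T * v + 0.5 * T * T * w, v + T * w
    y = r + random.gauss(0.0, 1.0) # range observation, Eq. (3)
    rp, vp = re + T * ve, ve       # prediction
    e = y - rp                     # innovation, Eq. (5)
    re, ve = rp + alpha * e, vp + (beta / T) * e   # fixed-gain update, Eq. (4)
    total += e * e
S_hat = total / n_steps
print(S_hat)                       # should come out close to 4.0
```

The adaptive filter, per Eq. (43), should produce a sampled variance only slightly above this minimum.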
Figure 4. Experiment 1: Range. [Plots omitted: actual/AF/KF range and the AF-KF range difference versus sample.]

Figure 5. Experiment 1: Velocity. [Plots omitted: AF/KF velocity and the velocity difference versus sample.]

Figure 6. Experiment 1: Gain. [Plots omitted: adaptive α and β versus sample against the KF values.]

Figure 7. Experiment 2: Range. [Plots omitted: actual/AF/KF range and range errors versus sample.]

Figure 8. Experiment 2: Velocity. [Plots omitted: actual/AF/KF velocity and velocity errors versus sample.]

Figure 9. Experiment 2: Gain. [Plots omitted: adaptive α and β versus sample against the KF values.]


Figure 10. RMS of the prediction error. [Plot omitted: prediction error RMS versus tracking index for the KF theory, AF theory, and AF simulations with FF = 0.995 and FF = 0.8, at several values of the process noise RMS.]

Figure 11. Prediction error and gate (k = 2). [Plot omitted: prediction error magnitude and the running gate versus sample.]

An interested reader can find in [8, 9] more experiments demonstrating the properties of the AF. One experiment shows that the adaptive gain accurately follows an SNR varying in abrupt steps. Another presents the single-parameter AF constrained by Eq. (46).

VI. CONCLUSIONS

A major concern of this paper is the adaptive α–β tracker oriented toward practical use in the range gating technique. As shown, this filter is nearly optimal, and the prediction error produced by the AF is very close to that provided by the optimal KF. When the parameters of the optimal KF are unknown, the AF can readily be applied instead. This filter is practical under severe conditions with time-varying process and observation noise. The adaptive-gain α–β filter does not need any heuristic logic beyond a few common RLS parameters. In contrast to the widely used fixed gate, the AF increases the reliability of target isolation thanks to the minimized prediction error variance and the correspondingly smaller gate. The suggested AF is very simple compared to the various known forms of AKF and can be easily implemented. Extended variants of this AF, based upon the same principle, facilitate target tracking in more complex scenarios, e.g., tracking of a highly maneuvering target with abruptly switching acceleration or jerk.

REFERENCES

[1] M. Skolnik, Introduction to Radar Systems, 2nd ed., McGraw-Hill, 1980.
[2] G. W. Pulford, "Relative performance of single-scan algorithms for passive sonar tracking," Annual Conf. Australian Acoustical Society, Adelaide, Australia, pp. 212-221, 2002.
[3] X. R. Li and V. P. Jilkov, "A survey of maneuvering target tracking. Part IV: Decision-based methods," Proc. SPIE Conf. on Signal and Data Processing of Small Targets, vol. 4728, pp. 511-534, 2002.
[4] P. S. Maybeck, Stochastic Models, Estimation and Control, vol. 2, Academic Press, New York, 1994.
[5] B.-D. Kim and J.-S. Lee, "Adaptive α–β tracker for TWS radar system," ICCAS 2005, KINTEX, Gyeonggi-Do, June 2-5, 2005.
[6] A. H. Mohamed and K. P. Schwarz, "Adaptive Kalman filtering for INS/GPS," Journal of Geodesy, vol. 73, pp. 193-203, 1999.
[7] Y. H. Lho and J. H. Painter, "A fuzzy-tuned adaptive Kalman filter," Third Int. Conf. on Industrial Fuzzy Control and Intelligent Systems, pp. 144-148, 1993.
[8] N. Chernoguz, "Adaptive-gain tracking filters based on minimization of the innovation variance," Proc. 2006 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Toulouse, vol. 3, pp. 41-44, 2006.
[9] N. Chernoguz, "Adaptive-gain kinematic filters of order 2-4," Journal of Computers, Academy Publisher, vol. 2, no. 6, pp. 17-25, 2007.
[10] T. Söderström and P. Stoica, System Identification, Prentice Hall, 1989.
[11] P. Kalata, "The tracking index: A generalized parameter for α-β and α-β-γ target trackers," IEEE Trans. Aerospace and Electronic Systems, vol. 20, no. 2, pp. 174-182, 1984.
[12] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association, Academic Press, 1988.
[13] G. Box and G. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, 1970.
[14] D. Tenne and T. Singh, "Characterizing performance of α-β-γ filters," IEEE Trans. Aerospace and Electronic Systems, vol. 38, no. 3, pp. 1072-1087, July 2002.
[15] N. Chernoguz, "Adaptive-gain-and-tau tracking filters for correlated target maneuvers," Proc. 2008 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Las Vegas, pp. 3553-3556, 2008.
