
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-24, NO. 6, DECEMBER 1979

An Algorithm for Tracking Multiple Targets

DONALD B. REID

Abstract-An algorithm for tracking multiple targets in a cluttered environment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions.

Manuscript received April 25, 1978; revised June 21, 1979. Paper recommended by J. L. Speyer, Chairman of the Stochastic Control Committee. This work was supported by the Lockheed "Automatic Multisensor/Multisource Data Correlation" Independent Development Program.
The author is with the Lockheed Palo Alto Research Laboratory, Palo Alto, CA 94304.

I. INTRODUCTION

THE SUBJECT of multitarget tracking has application in both military and civilian areas. For instance, application areas include ballistic missile defense (reentry vehicles), air defense (enemy aircraft), air traffic control (civil air traffic), ocean surveillance (surface ships and submarines), and battlefield surveillance (ground vehicles and military units). The foremost difficulty in the application of multiple-target tracking involves the problem of associating measurements with the appropriate tracks, especially when there are missing reports (probability of detection less than unity), unknown targets (requiring track initiation), and false reports (from clutter). The key development of this paper is a method for calculating the probabilities of various data-association hypotheses. With this development, the synthesis of a number of other features becomes possible.

In addition to the above data-association capabilities, the algorithm developed in this paper contains the desirable features of multiple-scan correlation, clustering, and recursiveness. Multiple-scan correlation is the capability to use later measurements to aid in prior correlations (associations) of measurements with targets. This feature is usually found in batch-processing or track-splitting algorithms. Clustering is the process of dividing the entire set of targets and measurements into independent groups (or clusters). Instead of solving one large problem, a number of smaller problems are solved in parallel. Finally, it is desirable for an algorithm to be recursive so that all the previous data do not have to be reprocessed whenever a new data set is received.

The algorithm can use measurements from two different generic types of sensors. The first type is capable of sending information which can be used to infer the number of targets within the area of coverage of the sensor. Radar is an example of this type of sensor. This type of sensor generates a data set consisting of one or more reports, and no target can originate more than one report per data set. (The terms "data set" and "scan" are used interchangeably in this paper to mean a set of measurements at the same time. It is not necessary that they come from a sensor that scans.) The second type of sensor does not contain this "number-of-targets" type of information. A radar detector, for example, would not detect a target unless the target's radar were on. In this case, very little can be implied about a target's existence or nonexistence by the fact that the target is not detected. Also, for the second type of sensor, individual reports are transmitted and processed one at a time, instead of in a data set. The multiple-target tracking algorithm developed here accounts for these factors by using the detection and false-alarm statistics of the sensors, the expected density of unknown targets, and the accuracy of the target estimates.

A number of complicating factors not considered in this paper include nonlinear measurements, nonlinear dynamics, maneuvering targets (abrupt and unknown change in target dynamics), requirement for an adaptive algorithm (to account for unknown statistics), some aspects of multiple sensors (problems of sensor configurations, registration, and different types of information), and time-delayed or out-of-sequence measurements. The first four factors have already been investigated extensively in the single-target case, and do not aid in illuminating the multiple-target problem. The inclusion of the last two factors would greatly increase the scope of this paper. In addition, the real-world constraints involved in implementing this algorithm are not explicitly considered.

References [1]-[8] are the basic reference papers that illustrate previously known techniques for solving the multiple-target tracking problem. The scope of each of these papers is summarized in Table I.



TABLE I
SCOPE OF CURRENT PAPERS IN MULTIPLE-TARGET TRACKING

Algorithm Characteristics                                  Ref. 4   Ref. 8
Multiple Targets                                           Yes      Yes
Missing Measurements                                       No       Yes
False Alarms (e.g., Clutter)                               No       Yes
Track Initiation                                           No       Yes
Sensor Data Sets (e.g., Number-of-Targets Information)     No       No
Multiple-Scan Correlation                                  Yes      Yes
Clustering                                                 No       No
Recursive (i.e., Filter)                                   Yes      No
In addition, there are a number of good papers incorporated into and referenced by these eight references which are not repeated here. A more comprehensive set of papers is included in the recent survey paper by Bar-Shalom [9]. The algorithm developed in this paper includes all the characteristics shown in Table I.

Reference [1], by Singer et al., is the culmination of several previous papers by the authors. In this reference, they develop an "N-scan filter" for one target. Whenever a set of measurements is received, a set of hypotheses is formed as to the measurement that was originated by the target. This branching process is repeated whenever a new set of measurements is received so that an ever-expanding tree is formed. To keep the number of branches to a reasonable number, all branches which have the last N measurements in common are combined. A simulation of their filter was included in the paper. The significant finding of their simulation was that, for N = 1, the N-scan filter gave near-optimal performance. This is significant in that the concept of track-splitting has been immediately discounted by others as being too expensive.

In [2], Bar-Shalom and Tse also treat a single target with clutter measurements. They develop the "probabilistic data association" filter that updates one target with every measurement, in proportion to the likelihood that the target originated the measurement. The filter is suboptimal in that track-splitting is not allowed (i.e., it is an N = 0 scan filter). In [3], Bar-Shalom extends this filter to the multiple-target case. He separates all the targets and measurements into "clusters" which can then be processed independently. He then suggests a rather complicated technique to calculate the probability of each measurement originating from each target. Compared to his technique, the derivation in this paper reduces to a relatively simple expression, as discussed in Section IV.

In [4], Alspach applies his nonlinear "Gaussian sum" filter [10] to the multitarget problem. His concepts are similar to those above; however, it should be noted that his filter is not optimal in that the density function he is trying to estimate "does not contain all the information available in the measurements since at each stage the state of the target giving rise to the nth measurement is conditioned on only the nth measurement ..." and not all measurements up to this stage. This paper is unique in estimating the type of target (a discrete state) as well as the target's continuous-valued states.

Reference [5], by Sittler, was published ahead of its time and is included here even though it is ten years older than any of the other basic references. By using very simple processes, Sittler illustrated most of the major concepts in multitarget tracking. In addition to track initiation, false alarms, and missing measurements, he included the possibility of a target ceasing to exist (track termination), a factor not covered in this paper. This possibility results in several other concepts, such as track status. If data are being received that eliminate the possibility of the track being dropped, then the track status is defined to be good.

In [6], Stein and Blackman implement and modernize most of the concepts suggested in [5]. As in [5], they retain the concept of track dropping, as well as track initiation, and derive two gates around each target. In their implementation, they choose a suboptimal sequential method of processing the data. As each set of data is received, only the most likely assignment of targets and measurements is selected.

In [7], Smith and Buechler very briefly present a branching algorithm for multiple-target tracking. By calculating the relative likelihood of each branch, they are able to eliminate unlikely branches. In calculating the likelihoods, they assume that each target is present (P_D = 1) and do not account for false-alarm statistics. More seriously, however, they apparently allow a target to be associated with every measurement within its gate. If measurements are within several gates, this leads to sets of data-association hypotheses that are not mutually exclusive. On the other hand, the ad hoc procedure of eliminating branches whose estimates are less than a specified distance away partially remedies this problem.

In [8], Morefield solves for the most likely data-association hypothesis (as opposed to calculating the probabilities of all the data-association hypotheses). He does so by formulating the problem in the framework of integer linear programming; as such, it is an interesting approach. His algorithm is basically a batch-processing technique. Even though a recursive version is included, it does not guarantee optimality over several time intervals as the batch-processing version does.

For the remainder of this paper, it is assumed that each target is represented by a vector x of n state variables which evolves with time according to known laws of the form

x(k+1) = Φ x(k) + Γ w(k)                                        (1)

where

Φ = the state transition matrix
Γ = the disturbance matrix
w = a white noise sequence of normal random variables with zero mean and covariance Q.

These state variables are related to measurements according to

z(k) = H x(k) + v(k)                                            (2)

where

H = a measurement matrix
v = a white noise sequence of normal random variables with zero mean and covariance R.

If the measurements could be uniquely associated with each target, then the conditional probability distribution of the state variables of each target is a multivariate normal distribution given by the Kalman filter [11]. The mean x̄ and covariance P̄ of this distribution evolve with time between measurements according to the following "time update" equations (with x̂, P̂ denoting the estimate after the most recent measurement update):

x̄(k+1) = Φ x̂(k)
P̄(k+1) = Φ P̂(k) Φ^T + Γ Q Γ^T                                  (3)

When a measurement is received, the conditional mean and covariance are given by the following "measurement update" equations:

x̂(k) = x̄(k) + K [z(k) - H x̄(k)]
P̂ = P̄ - P̄ H^T (H P̄ H^T + R)^{-1} H P̄
K = P̄ H^T (H P̄ H^T + R)^{-1}                                   (4)
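The per-hypothesis filtering machinery is exactly the Kalman recursion above. A minimal sketch of one time update and one measurement update follows, assuming numpy; the function and variable names are illustrative and are not taken from the paper's Fortran implementation.

```python
import numpy as np

def time_update(x_hat, P_hat, Phi, Gamma, Q):
    # Propagate one target estimate between data sets, per (1) and (3):
    # x(k+1) = Phi x(k) + Gamma w(k), w ~ N(0, Q).
    x_bar = Phi @ x_hat
    P_bar = Phi @ P_hat @ Phi.T + Gamma @ Q @ Gamma.T
    return x_bar, P_bar

def measurement_update(x_bar, P_bar, z, H, R):
    # Incorporate one measurement z = H x + v, v ~ N(0, R), per (2) and (4).
    B = H @ P_bar @ H.T + R              # residual covariance, reused for gating in (5)-(6)
    K = P_bar @ H.T @ np.linalg.inv(B)   # Kalman gain
    x_hat = x_bar + K @ (z - H @ x_bar)
    P_hat = P_bar - K @ H @ P_bar
    return x_hat, P_hat, B
```

Within the tracking algorithm this update is run once per target per data-association hypothesis, not simply once per target.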
[Fig. 1. Flow diagram of the multiple-target tracking algorithm: receive new data set → CLUSTR (identify which targets, including a priori targets, and measurements are associated with each cluster) → HYPGEN (form new hypotheses, calculate their probabilities, and perform a target measurement update for each hypothesis of each cluster) → REDUCE (reduce the number of hypotheses by elimination and combination) → MASH (simplify the hypothesis matrix of each cluster; transfer tentative targets with unity probability to the confirmed-target category; create new clusters for confirmed targets no longer in the hypothesis matrix) → perform target time update and wait for the next data set.]

A flow diagram of the tracking algorithm is shown in Fig. 1. Most of the processing is done within the four subroutines shown in the figure. The CLUSTR subroutine associates measurements with previous clusters. If two or more previously independent clusters are associated because of a measurement, then the two clusters are combined into a "supercluster." A new cluster is formed for any measurement not associated with a prior cluster. As part of the initialization program, previously known targets form their own individual clusters.

The HYPGEN subroutine forms new data-association hypotheses for the set of measurements associated with each cluster. The probability of each such hypothesis is calculated and target estimates are updated for each hypothesis of each cluster.

Both the CLUSTR and HYPGEN subroutines use the REDUCE subroutine for eliminating unlikely hypotheses or combining hypotheses with similar target estimates. Once the set of hypotheses is simplified by this procedure, uniquely associated measurements are eliminated from the hypothesis matrix by the MASH subroutine. Tentative targets become confirmed targets if they were the unique origin of the eliminated measurement.

III. HYPOTHESIS GENERATION TECHNIQUE

The basic approach used in this paper is to generate a set of data-association hypotheses to account for all possible origins of every measurement. The filter in this paper generates measurement-oriented hypotheses, in contrast to the target-oriented hypotheses developed in [2] and [3]. In the target-oriented approach, every possible measurement is listed for each target, and vice versa for the measurement-oriented approach. Although both approaches are equivalent, a simpler representation is possible with the target-oriented approach if there is no requirement for track initiation. However, with a track initiation requirement, the measurement-oriented approach appears simpler.

Let Z(k) ≡ {z_m(k), m = 1, 2, ..., M_k} denote the set of measurements in data set k; Z^k ≡ {Z(1), Z(2), ..., Z(k)} denote the cumulative set of measurements up through data set k; Ω^k ≡ {Ω_i^k, i = 1, 2, ..., I_k} denote the set of all hypotheses at the time of data set k which associate the cumulative set of measurements Z^k with targets or clutter; and Ω̄^m denote the set of hypotheses after the mth measurement of a data set has been processed. As a new set of measurements Z(k+1) is received, a new set of hypotheses Ω^{k+1} is formed as follows. Ω̄^0 is initialized by setting Ω̄^0 = Ω^k. A new set of hypotheses Ω̄^m is repetitively formed for each prior hypothesis Ω̄_i^{m-1} and each measurement z_m(k+1). Each hypothesis in this new set is the joint hypothesis that Ω̄_i^{m-1} is true and that measurement z_m(k+1) came from target j. The values which j may assume are 0 (to denote that the measurement is a false alarm), the value of any prior target, or a value one greater than the current number of tentative targets (to denote that the measurement originated from a new target). This technique is repeated for every measurement in the new data set until the set of hypotheses Ω̄^{M_{k+1}} = Ω^{k+1} is formed.
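A compact sketch of the branching rule just described follows. The helper names and the simple numbering of the new tentative target are illustrative assumptions, and gating is deferred to the conditions given in the next paragraphs.

```python
def candidate_origins(origins_so_far, prior_targets, gate_ok):
    """Feasible origins j for the next measurement under one hypothesis.

    origins_so_far : origins already assigned to earlier measurements of
                     this data set under the hypothesis being extended
    prior_targets  : indices of targets whose existence the hypothesis implies
    gate_ok(j)     : True if the measurement lies in target j's validation region
    """
    origins = [0]                                      # j = 0: false alarm
    for j in prior_targets:                            # any compatible prior target
        if j not in origins_so_far and gate_ok(j):     # one measurement per target per scan
            origins.append(j)
    origins.append(max(prior_targets, default=0) + 1)  # a brand-new tentative target
    return origins
```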
Before a new hypothesis is created, the candidate target must satisfy a set of conditions. First, if the target is a tentative target, its existence must be implied by the prior hypothesis from which it is branching. Second, a check is made to ensure that each target is not associated with more than one measurement in the current data set. Finally, a target is only associated with a measurement if the measurement lies within the gate or validation region of the target. If x̄ and P̄ are the mean and covariance of the target estimate for the prior hypothesis, then the covariance of the residual ν = z_m - H x̄ is given by

B = H P̄ H^T + R                                                (5)

and the measurement z_m lies within an "η-sigma" validation region if

(z_m - H x̄)^T B^{-1} (z_m - H x̄) ≤ η².                         (6)

Note that the validation region also depends on the measurements since R is included in (5); however, for simplicity, it is assumed that all observations in the same data set have the same covariance R.
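The gate test of (5) and (6) is a Mahalanobis-distance check on the residual; a minimal sketch (numpy assumed, names illustrative):

```python
import numpy as np

def in_validation_region(z_m, x_bar, P_bar, H, R, eta=3.0):
    # Accept the measurement-target pairing only if the residual lies
    # inside the "eta-sigma" ellipsoid defined by (5) and (6).
    nu = z_m - H @ x_bar                      # residual
    B = H @ P_bar @ H.T + R                   # residual covariance, eq. (5)
    d2 = float(nu.T @ np.linalg.solve(B, nu))
    return d2 <= eta ** 2                     # eq. (6)
```

A gate size of η = 3 is the value used later in the simulations of Section VIII.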
The representation of the hypotheses as a tree and as stored in the computer is shown in Fig. 2 for a representative cluster of two targets and three measurements. For the example, the prior targets are numbered 1 and 2, and the new tentative targets are numbered 3, 4, and 5. The three measurements in the data set are numbered 11, 12, and 13. Notice, for example, that if target 2 is already assigned to either measurement 11 or 12, a branch assigning it to measurement 13 will not be formed since it is assumed that one target cannot generate more than one measurement in a data set. The set of hypotheses is represented in a computer by a two-dimensional array, the "hypothesis matrix," which has a row for each hypothesis and a column for each measurement. The entry in the array is the hypothesized origin of the measurement for that particular hypothesis. In programming the automatic hypothesis generation routine, a simplification of the hypothesis matrix occurs if the "prior hypothesis loop" is placed inside the "measurement loop." In this case, the hypothesis matrix at one stage is just a subset of the hypothesis matrix for the next stage as shown in the figure. This follows the numbering scheme for hypotheses used in [1].

[Fig. 2. Representation of the hypothesis matrix for an example cluster of two prior targets (1, 2), three new tentative targets (3, 4, 5), and three measurements (11, 12, 13): the hypotheses are shown both as branches of a tree and as rows of the hypothesis matrix stored in the computer, where each entry is the hypothesized origin of a measurement (0 denoting a false alarm).]
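A sketch of the bookkeeping just described: the hypothesis matrix is grown one measurement (one column) at a time, with the prior-hypothesis loop inside the measurement loop so that each stage's matrix is an extension of the previous one. The feasible-origin sets below are supplied externally, and the small driver is a toy rather than a reproduction of the Fig. 2 cluster.

```python
def extend_hypothesis_matrix(hyp_matrix, feasible_origins):
    """Add one column (one measurement) to the hypothesis matrix.

    hyp_matrix       : list of rows; each row gives the hypothesized origin of
                       every measurement processed so far (0 = false alarm)
    feasible_origins : function(row) -> origins allowed for the new measurement
                       given that row (gating, one report per target, etc.)
    """
    new_matrix = []
    for row in hyp_matrix:                 # "prior hypothesis loop" ...
        for j in feasible_origins(row):    # ... placed inside the "measurement loop"
            new_matrix.append(row + [j])
    return new_matrix

# Toy driver: three measurements whose candidate origins are given directly.
hyps = [[]]                                # start from the single empty hypothesis
for candidates in ([0, 1, 3], [0, 2, 4], [0, 2, 5]):
    hyps = extend_hypothesis_matrix(
        hyps,
        lambda row, c=candidates: [j for j in c if j == 0 or j not in row])
print(len(hyps), "hypotheses")             # each row is one data-association hypothesis
```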
Although there may be many hypotheses in a cluster, as far as each target in the cluster is concerned there are relatively few hypotheses. As an example, the cluster shown in Fig. 2 has 28 hypotheses; however, as far as target 1 is concerned, it only has two hypotheses: either it is associated with measurement 11 or it is not associated with any measurement. Similarly, targets 2, 3, 4, and 5 have 4, 2, 2, and 2 target hypotheses, respectively. A "hypothesis relationship matrix" is created for each target, listing those cluster hypotheses which correspond to each target hypothesis. Alternative target states are then estimated for each target hypothesis and not each cluster hypothesis. The target estimates for each hypothesis are calculated by using a Kalman filter. The conditional probability distribution for the target states is then the sum of the individual estimates for each hypothesis, weighted by the probability of each hypothesis.

IV. PROBABILITY OF EACH HYPOTHESIS

The derivation for determining the probability of each hypothesis depends on whether the measurements are from a type 1 sensor or a type 2 sensor. A type 1 sensor is one that includes number-of-targets type information as well as information on the location of each target. All the measurements in such a data set are considered together. In addition, an estimate of the new target density must be maintained to process measurements from this type of sensor. A type 2 sensor sends only positive reports. One measurement at a time is processed for this type of sensor and the new target density is not changed after each report.

A. Type 1 Sensor

Let P_i^k denote the probability of hypothesis Ω_i^k, given measurements up through time k, i.e.,

P_i^k ≡ P(Ω_i^k | Z^k).                                         (7)

We may view Ω_i^k as the joint hypothesis formed from the prior hypothesis Ω_g^{k-1} and the association hypothesis for the current data set, ψ_h. The hypothesis ψ_h involves the hypothetical assignment of every measurement in the data set Z(k) with a target. We may write a recursive relationship for P_i^k by use of Bayes' equation,

P_i^k = (1/c) P(Z(k) | Ω_g^{k-1}, ψ_h) P(ψ_h | Ω_g^{k-1}) P_g^{k-1}        (8)

where for brevity we have dropped the conditioning on past data through data set k - 1. The factor c is a normalizing factor found by summing the numerator over the values of g and h. The first two terms on the right-hand side (RHS) of the above equation will now be evaluated.

The first term is the likelihood of the measurements Z(k), given the association hypothesis, and is given by

P(Z(k) | Ω_g^{k-1}, ψ_h) = ∏_{m=1}^{M_k} f(m)                   (9)

where

f(m) = 1/V if the mth measurement is from clutter or a new target
     = N(z_m - H x̄, B) if the measurement is from a confirmed target or a tentative target whose existence is implied by the prior hypothesis Ω_g^{k-1}.

V is the volume (or area) of the region covered by the sensor, and N(x, P) denotes the normal distribution exp(-½ x^T P^{-1} x) / [(2π)^{n/2} |P|^{1/2}]. The values of x̄ and B [through (5)] are those appropriate for the prior hypothesis Ω_g^{k-1}.

The second term on the RHS of (8) is the probability of a current data-association hypothesis given the prior hypothesis Ω_g^{k-1}. Each current data-association hypothesis ψ_h associates each measurement in the data set with a specific source; as such, it includes the following information.

Number: The number of measurements associated with the prior targets, N_DT(h), the number of measurements associated with false targets, N_FT(h), and the number of measurements associated with new targets, N_NT(h).

Configuration: Those measurements which are from previously known targets, those measurements which are from false targets, and those measurements which are from new targets.

Assignment: The specific source of each measurement which has been assigned to be from some previously known target.

Also, it is worth noting that the prior hypothesis Ω_g^{k-1} includes information as to the number of previously known targets N_TGT(g) within the area of coverage of the sensor. This number includes any tentative targets whose existence is implied by that prior hypothesis, as well as the confirmed targets for that cluster. However, according to the current data-association hypothesis, only N_DT of these targets are detected by the sensor.

It is assumed that the number of previously known targets that are detected is given by a binomial distribution, the number of false targets follows a Poisson distribution, and the number of new targets also follows a Poisson distribution. With these assumptions, the probability of the numbers N_DT, N_FT, and N_NT given Ω_g^{k-1} is

P(N_DT, N_FT, N_NT | Ω_g^{k-1}) = C(N_TGT, N_DT) P_D^{N_DT} (1 - P_D)^{N_TGT - N_DT} F_{N_FT}(β_FT V) F_{N_NT}(β_NT V)        (10)

where

P_D = probability of detection
β_FT = density of false targets
β_NT = density of previously unknown targets that have been detected (i.e., the P_D term has already been included in it)
F_n(λ) = the Poisson probability distribution for n events when the average rate of events is λ
C(a, b) = the binomial coefficient "a choose b."

The total number of measurements is given by

M_k = N_DT + N_FT + N_NT.                                       (11)

Of the M_k measurements, there are many configurations or ways in which we may assign N_DT of them to prior targets, N_FT of them to false targets, and N_NT of them to new targets. The number of configurations is given by

C(M_k, N_DT) C(M_k - N_DT, N_FT) C(M_k - N_DT - N_FT, N_NT).

The probability of a specific configuration, given N_DT, N_FT, and N_NT, is then

P(Configuration | N_DT, N_FT, N_NT) = 1 / [C(M_k, N_DT) C(M_k - N_DT, N_FT) C(M_k - N_DT - N_FT, N_NT)].        (12)
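As a quick numeric check of the counting argument above (an illustration only, not code from the paper), the three binomial factors multiply out to the multinomial coefficient M_k!/(N_DT! N_FT! N_NT!), so (12) is simply the reciprocal of that count:

```python
from math import comb, factorial

M_k, N_DT, N_FT, N_NT = 5, 2, 2, 1   # any split satisfying (11): M_k = N_DT + N_FT + N_NT
n_configurations = (comb(M_k, N_DT)
                    * comb(M_k - N_DT, N_FT)
                    * comb(M_k - N_DT - N_FT, N_NT))
assert n_configurations == factorial(M_k) // (
    factorial(N_DT) * factorial(N_FT) * factorial(N_NT))
print(1.0 / n_configurations)        # probability of any one configuration, eq. (12)
```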

For a given configuration, there are many ways to assign the N_DT designated measurements to the N_TGT targets. The number of possible assignments is given by

N_TGT! / (N_TGT - N_DT)!.                                       (13)

The probability of an assignment for a given configuration is therefore

P(Assignment | Configuration) = (N_TGT - N_DT)! / N_TGT!.       (14)

Combining these last three equations and simplifying, we find that the probability of the current data-association hypothesis given the prior hypothesis is

P(ψ_h | Ω_g^{k-1}) = (N_FT! N_NT! / M_k!) P_D^{N_DT} (1 - P_D)^{N_TGT - N_DT} F_{N_FT}(β_FT V) F_{N_NT}(β_NT V).        (15)

Substituting this and (9) into (8), substituting for the Poisson processes (which eliminates the dependence on V), and simplifying and combining constants into c, we finally have

P_i^k = (1/c) P_D^{N_DT} (1 - P_D)^{N_TGT - N_DT} β_FT^{N_FT} β_NT^{N_NT} [∏_{m=1}^{N_DT} N(z_m - H x̄_m, B_m)] P_g^{k-1}        (16)

where for ease of notation the measurements have been reordered so that the first N_DT measurements correspond to measurements from prior targets.

This equation is the key development presented in this paper. It is similar to (12) in the paper by Singer, Sea, and Housewright [1], except it has been extended to the multiple-target case. They have a slightly different approach in that they are only concerned with sensor returns within a target validation region. If this approach is extended to the multiple-target case (as suggested by Bar-Shalom in [3]), considerable difficulty ensues in the derivation. Also, by considering the area outside validation regions, we now have a track initiation capability.

This equation is used iteratively within the hypothesis generation routine to calculate the probability of each data-association hypothesis. Although it appears complicated, it is relatively easy to implement. If all the prior hypotheses are first multiplied by (1 - P_D)^{N_TGT}, then as a branch is created for each measurement and its hypothesized origin, the likelihood of the branch is found by multiplying the prior probability by either β_FT, β_NT, or P_D N(z_m - H x̄, B)/(1 - P_D), as appropriate. After all such branches are generated, the likelihoods are then normalized.
Concurrently with the above calculations, a calculation of β_NT, the density of new (i.e., unknown) targets, is performed whenever a data set from a type 1 sensor is received. The density of new targets β_NT depends upon the number of times the area has been observed by a type 1 sensor and the possible flux of undetected targets into and out of the area.

The development of this paper has implicitly assumed that the probability distribution of the target state would be given by, or approximated by, a normal distribution after one measurement. If the measurement vector contains all the target state variables, then the initial state and covariance of a target are given by x = z_m and P = R. However, in general this will not be true, but the normal distribution assumption might nevertheless be made. For example, if the target state is position and velocity and only position is given in the measurement, then the velocity might be assumed to be normally distributed with zero mean and a standard deviation equal to one-third the maximum velocity of the target.

If the measurements or other factors are such that the assumption of a normal distribution after one measurement is not a good assumption, then appropriate modifications would have to be made to the gate criterion, the hypothesis probability calculations, and the Kalman filter equations. As an example, consider the case where N targets on a plane surface generate two sets of line-of-bearing (LOB) reports, each containing exactly N LOB's (i.e., P_D = 1, β_FT = 0). The LOB's intersect in N² points, corresponding to the N real targets and N² - N "ghosts." Since all the statistical degrees of freedom in the measurements are necessary just to determine location, there are no additional degrees of freedom remaining for correlating one LOB with another. Therefore, in this case, there is no gate criterion for the second data set and each of the N² pairs is equally likely.

B. Type 2 Sensor

To calculate the probability that a single measurement from a type 2 sensor is from a false target, a previously known target, or a new target, let us assume that it is selected at random from a set of N_DT + N_FT + N_NT possible measurements, where the probability of N_DT, N_FT, and N_NT is given by the RHS of (10). For a given N_DT, N_FT, and N_NT, the probability of the measurement being from clutter, a previous target, or a new target is given by the ratio of N_FT, N_DT, and N_NT to their sum. Given that a measurement is from some previous target, the probability it is from a particular target is 1/N_TGT. Finally, the likelihood of the measurement, given the target which originated it, is 1/V if it is from a false or new target and N(z_m - H x̄_j, B_j) if from a previous target. Combining these effects, the likelihood of the measurement, conditioned on the numbers N_DT, N_FT, and N_NT, is

ℒ(M_1 = j) = N_FT / V,                             j = 0
           = (N_DT / N_TGT) N(z_m - H x̄_j, B_j),   1 ≤ j ≤ N_TGT
           = N_NT / V,                             j = N_TGT + 1.        (17)

The unconditional likelihood of the measurement is found by taking the expected value of (17), namely,

ℒ(M_1 = j) = β_FT,                                 j = 0
           = P_D N(z_m - H x̄_j, B_j),              1 ≤ j ≤ N_TGT
           = β_NT,                                 j = N_TGT + 1.        (18)

If these likelihoods are normalized, one obtains the probability for each possible origin of the measurement. The implementation is the same as for (16) except that only one measurement at a time is processed for a type 2 sensor and there are no (1 - P_D) terms.
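For a type 2 sensor the computation collapses to one report at a time. A sketch of the normalized origin probabilities built from (18) follows; the names are illustrative, and the Gaussian density is evaluated exactly as in the gating step.

```python
import numpy as np

def type2_origin_probabilities(z, targets, H, R, P_D, beta_FT, beta_NT):
    """Posterior origin probabilities for one positive report.

    targets : list of (x_bar, P_bar) prior estimates of the N_TGT known
              (confirmed or tentative) targets under one hypothesis.
    Returns [p(false), p(target 1), ..., p(target N_TGT), p(new target)].
    """
    likelihoods = [beta_FT]                              # j = 0: false target
    for x_bar, P_bar in targets:                         # 1 <= j <= N_TGT
        nu = z - H @ x_bar
        B = H @ P_bar @ H.T + R
        n = len(nu)
        dens = float(np.exp(-0.5 * nu.T @ np.linalg.solve(B, nu))) \
               / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(B))
        likelihoods.append(P_D * dens)
    likelihoods.append(beta_NT)                          # j = N_TGT + 1: new target
    total = sum(likelihoods)
    return [l / total for l in likelihoods]
```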
V. HYPOTHESIS REDUCTION TECHNIQUES

The optimal filter developed in the previous section requires an ever-expanding memory as more data are processed. For this reason, techniques are needed to limit the number of hypotheses so that a practical version can be implemented. The goal is an algorithm which requires a minimum amount of computer memory and execution time while retaining nearly all the accuracy of the optimal filter. All the hypotheses may be considered as branches of a tree; the hypothesis reduction techniques may be viewed as methods of either pruning these branches or binding branches together.

A. Zero-Scan Algorithms

A zero-scan filter allows only one hypothesis to remain after processing each data set. The simplest method (and that probably most representative of current practice) is to choose the most likely data-association hypothesis and use a standard Kalman filter to estimate target states. This is strictly a pruning operation. An improved variation of this is to still choose the maximum likelihood hypothesis but to increase the covariance in the Kalman filter to account for the possibility of miscorrelations. Another approach, developed in [2] and [3] and denoted the probabilistic data association (PDA) filter, is equivalent to combining all the hypotheses by making the target estimates depend on all the measurements.

B. Multiple-Scan Algorithms

In multiple-scan algorithms, several hypotheses remain after processing a data set. The advantage of this is that subsequent measurements are used to aid in the correlation of prior measurements. Hypotheses whose probabilities are increased correspond to the case in which subsequent measurements increase the likelihood of that data association. The simplest technique is again to prune all the unlikely hypotheses but keep all the hypotheses with a probability above a specified threshold. In [1], an N-scan filter for the single-target case was developed in which hypotheses that have the last N data scans in common were combined. A remarkable conclusion of their simulation was that with N only equal to one, near-optimal performance was achieved.

An alternative criterion for binding branches together (i.e., combining hypotheses) is to combine those hypotheses which have similar effects. Generally, this criterion would correspond to the N-scan criterion, but not always. If hypotheses with the last N data scans in common are combined, then hypotheses that differentiate between measurements in earlier scans are eliminated. Examples can be conceived [12] in which it is more important to preserve earlier rather than later hypotheses. For this reason, this paper uses the criterion of combining those hypotheses with similar effects concurrently with the criterion to eliminate hypotheses with a probability less than a specified amount α. For two hypotheses to be similar, they must have the same number of tentative targets and the estimates for all targets in each hypothesis must be similar, i.e., both the means and the variances of each estimate must be sufficiently close. The mean and covariance of the resulting estimate are a combination of the individual estimates.

C. Simplifying the Hypothesis Matrix and Initiating Confirmed Targets

By eliminating hypotheses, as in the previous section, the number of rows in the hypothesis matrix is reduced. This reduction may also allow us to reduce the number of columns in the hypothesis matrix. If all the entries in a column of the hypothesis matrix are the same, then that measurement has a unique origin and that column may be eliminated. This simple procedure is the only technique used to simplify the hypothesis matrix of each cluster. If the unique origin of the measurement is a tentative target, then that target is transferred to the confirmed target file. In other words, the criterion for initiating a new confirmed target is that a tentative target has a probability of existing equal to one (after negligible hypotheses have been dropped). Once the hypothesis matrix has been simplified as much as possible, many of the confirmed targets for that cluster may no longer be in the hypothesis matrix. These targets may then be removed from that cluster to form new clusters of their own. In this way, clusters are decomposed and prevented from becoming ever larger and larger through collisions. The features in this paragraph have been incorporated into the MASH subroutine.

VI. CLUSTER FORMATION

If the entire set of targets and measurements can be divided into sets of independent clusters [3], then a great deal of simplification may result. Instead of one large tracking problem, a number of smaller tracking problems can be solved independently. Since the amount of computer storage and computation time grows exponentially with the number of targets, this can have an important effect in reducing computer requirements. If every target could be separated into its own individual cluster, these requirements would only grow linearly with the number of targets.

A cluster is completely defined by specifying the set of targets and measurements in the cluster, and the alternative data-association hypotheses (in the form of the hypothesis matrix) which relate the targets and measurements. Included in this description is the probability of each hypothesis and a target file for each hypothesis.

As part of the program initialization, one cluster is created for each confirmed target whose existence is known a priori. Each measurement of the data set is associated with a cluster if it falls within the validation region [(6)] of any target of that cluster for any prior data-association hypothesis of that cluster. A new cluster is formed for each measurement which cannot be associated with any prior cluster. If any measurement is associated with two or more clusters, then those clusters are combined into a "supercluster." The set of targets and measurements of the supercluster is the sum of those in the associated prior clusters. The number of data-association hypotheses of the supercluster is the product of the number of hypotheses for the associated prior clusters. The hypothesis matrix, probabilities of hypotheses, and target files must be created from those of their constituent prior clusters.

It can be verified that the probabilities of a set of joint hypotheses formed by combining two or more clusters are the same whether calculated by (16) for the combined clusters, or by multiplying the probabilities calculated by this equation for each separate cluster. This property, in fact, was one of the factors for choosing the Poisson and binomial distributions for describing the number of targets in (10).
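A sketch of the cluster association step just described (the CLUSTR subroutine of Fig. 1): each measurement is attached to every cluster it gates into, clusters sharing a measurement are merged into a supercluster, and a lone measurement starts a cluster of its own. The gating callback and the data layout are illustrative assumptions.

```python
def form_clusters(prior_clusters, measurements, gates_into):
    """One data set of cluster association.

    prior_clusters : list of sets of target indices, one set per cluster
    measurements   : iterable of measurement indices
    gates_into(m, targets) : True if measurement m falls in the validation
        region of any target of the cluster under any prior hypothesis
    Returns a list of [targets, measurements] pairs.
    """
    clusters = [[set(t), []] for t in prior_clusters]
    for m in measurements:
        hits = [i for i, (t, _) in enumerate(clusters) if gates_into(m, t)]
        if not hits:
            clusters.append([set(), [m]])            # new cluster for this measurement
            continue
        keep = hits[0]
        for i in reversed(hits[1:]):                 # merge the rest into a supercluster;
            clusters[keep][0] |= clusters[i][0]      # hypothesis matrices and probabilities
            clusters[keep][1] += clusters[i][1]      # would be combined as described above
            del clusters[i]
        clusters[keep][1].append(m)
    return clusters
```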
VII. EXAMPLE TO ILLUSTRATE FILTER CHARACTERISTICS

A simple aircraft tracking problem from [1] was chosen for illustrating and evaluating the filter derived in the previous section. The state of the aircraft is its position and velocity in the X and Y directions. Measurements of position only are taken. Each dimension is assumed independent, with identical equations of motion, measurement errors, and process noise.
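The explicit per-dimension model matrices used in this example did not survive in this copy. The sketch below is the standard discrete position-velocity model consistent with the description (position-only measurements, scan interval T); it is an assumption supplied for orientation, not a transcription of the paper's equation.

```python
import numpy as np

T, q, r = 1.0, 0.04, 0.04           # scan interval and nominal noise variances (Section VIII)

Phi   = np.array([[1.0, T],         # state transition for one dimension: position, velocity
                  [0.0, 1.0]])
Gamma = np.array([[T],              # process disturbance drives the velocity
                  [1.0]])
Q     = np.array([[q]])             # process noise variance
H     = np.array([[1.0, 0.0]])      # only position is measured
R     = np.array([[r]])             # measurement noise variance
```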

A. Track Initiation

In the first example, a set of five measurements at five different times is used to illustrate track initiation. For this example, there are no initially known targets, the initial density of unknown targets is 0.5, the density of false reports is 0.1, the probability of detection is 0.9, and both the process and measurement noise have variances of 0.04. The five measurements are shown as triangles in Fig. 3. The most likely hypothesis after processing each measurement is that there is one target. The estimated position, velocity, and 1σ error circle of the target for that hypothesis are also shown in the figure. As expected, the estimated position at each time lies between the previously projected position and the measured position. After the first measurement is processed, there is a 5/6 probability the measurement came from a target and a 1/6 probability it came from a false report, since the relative densities are initially 5:1. The probability that there is at least one target increases with every measurement to 99+ percent after five measurements, at which point a confirmed target is created. There is an interesting effect after four measurements are processed. The most likely hypothesis is that all four measurements came from the same target (p = 88 percent); the second most likely hypothesis is that the first measurement came from a false report and the remaining three measurements came from the same target (p = 4 percent). Both of these hypotheses declare that there is one target, and since the estimated state of the target in both cases is nearly equal, the two hypotheses are automatically combined.

[Fig. 3. Example of track initiation. For each scan the plot shows the measurement, projected location, estimated location, and error ellipse; the annotated probabilities are (a) 83 percent probability of one target, (b) 89 percent probability of one or two targets, (c) 94 percent probability of one or more targets, (d) 99 percent probability of one or more targets (88 percent that all four measurements are from the same target, 4 percent that the first measurement is false and the last three are from the same target), and (e) 100 percent probability that the target exists (98 percent that the last measurement came from the target, 1 percent from a new target, 1 percent from a false target).]

B. Crossing Tracks

In the next example, we examine the capability of the filter to process measurements from two crossing targets. One target starts in the upper left corner and moves downward to the lower right, while the other target starts in the lower left corner and moves upward to the upper right corner. The existence of just one of the targets is known a priori. The set of measurements and the target estimates corresponding to the most likely hypothesis are shown in Fig. 4. The first two measurements at the top of the figure are processed as in the track initiation example, and the first two measurements at the bottom are processed as a track maintenance problem. At k = 3, however, the two clusters "collide" and a supercluster made of both clusters and both measurements is formed. This collision is due to the fact that one of the hypotheses in the top cluster is that the top measurement at k = 1 was from a target, but the next measurement was from clutter. Since we assumed an initial variance in velocity of 1.0, the above target could have originated the measurement at (3.2, 2.9). After the measurements are processed, however, this possibility is so remote that it is eliminated. After eliminating all the other negligible hypotheses at k = 3, the supercluster is separated into two clusters, corresponding to the two targets. To process the two measurements at k = 4, the supercluster has to be formed again. At this time, the tentative target in the top of the figure becomes a confirmed target. Two hypotheses remain after processing the measurements at k = 4: that the lower target originated the lower measurement and the higher target originated the higher measurement (p1 = 60 percent), or vice versa (p2 = 40 percent). The measurements at k = 5 are such that they reduce the difference in probabilities of these two hypotheses (to p1 = 54 percent, p2 = 46 percent). This is one case in which later data did not help resolve a prior data-association hypothesis; in fact, the ambiguity was increased. At k = 5, the data-association hypotheses at k = 4 are the most significant and are preserved. (If the N = 1 scan filter criterion were used, the hypotheses at k = 4 would have been eliminated.) By the time measurements at k = 6 are processed, the difference in the hypotheses at k = 4 is no longer important since the target estimates are now so similar. From then on, we have two separate track maintenance problems.

[Fig. 4. Example of crossing tracks, showing the scan numbers, the initially known target, the projected and estimated locations, the initially separate clusters, the supercluster formed when the tracks cross, and the probability that each measurement came from each prior target.]

C. High Target Density

The last example illustrates the difficulty of associating measurements into tracks for a more complicated arrangement of measurements. This example is a single run from the Monte Carlo program described in the next section. In this example there are five real targets; the existence of four of them is initially known by the filter. The a priori location and velocity estimates of these four targets, as well as measurements from the first six scans, are shown in Fig. 5. Both the measurement noise and the process noise are relatively large (q = r = 0.40). The data points are shown grouped according to the maximum likelihood data-association hypotheses (except as noted below). In addition, there are approximately 15 other feasible groupings of targets that are also possible arrangements. As measurements are processed, the probabilities of these different groupings change. For example, at scan 4 the most likely hypothesis is that measurement 19 is associated with target 1 and measurement 18 is associated with target 2; however, on scan 5 and subsequent scans, another hypothesis becomes the most likely and reverses this assignment. The one target unknown by the filter is being formed by measurements 2, 8, and 14. Even at scan 4, when there is only one measurement for either target 3 or the new target, the most likely hypothesis is that measurements 2, 8, and 14 are a new target and measurements 5, 10, 16, and 20 are from target 3. At scan 5, however, the most likely hypothesis is that measurements 2, 10, and 14 (as well as 23) are false, and that measurements 5, 8, 16, and 20 are associated with target 3. At scan 6 the likelihood that measurements 2, 8, 14, 23, and 30 form a new target is increased, and by scan 7 it is part of the most likely hypothesis. This target does not become a confirmed target until scan 9. In each case, the general grouping of measurements corresponds to one of the actual targets.

VIII. MONTE CARLO SIMULATION

The independent factors affecting filter performance include both filter characteristics (e.g., the filter criteria for eliminating or combining hypotheses) and environmental variables, such as target density β_TT, measurement accuracy R, target perturbances Q, false report densities β_FT, and data rate T.

[Fig. 5. Tracking in a dense target environment: a priori estimates of the four initially known targets and the measurements from the first six scans, grouped according to the maximum likelihood data-association hypotheses.]

The probability or ease of making correct associations between targets and measurements appears to be a key factor in determining filter performance. As such, the primary effect of many of the environmental variables mentioned above is in affecting this factor. The more dense the targets and measurements, or the larger the uncertainty in their locations, the more difficult it is to make the correct associations. The variables β_TT, β_FT, and P_D determine the density of measurements according to

β_M = β_FT + P_D β_TT                                           (20)

and the variance between measurements and estimated target locations (just before an association is to be made) is given by

σ² = P̄_11 + r.                                                  (21)
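Read with the nominal simulation values quoted in the next section (β_TT = 0.05, β_FT = 0.01, P_D = 0.9), equation (20) gives the expected measurement density directly; a one-line check:

```python
beta_TT, beta_FT, P_D = 0.05, 0.01, 0.9   # nominal Monte Carlo values from Section VIII
beta_M = beta_FT + P_D * beta_TT          # eq. (20)
print(beta_M)                             # 0.055 expected measurements per unit area per scan
```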
measurement, the relative density of true targets to false
If we make the approximation that p,,reaches its steady- measurements was 5 / 1, and the target gate size was 9 = 3.
state value as givenbysolving the Kalman filter equa- Since only 90 percent of targets generated measurements,
tions, then pl, can be related to q, r , and T. Po also the probability of correctly associating a target with a
affects PI, by affecting the average time between measure- measurement is no greater than 90 percent. As the den-
ments. In addition, the actual value of Fll wouldbe sity-variance product increases, this probability decreases.
affected by combining hypotheses so that the relationship Anevenworse condition is the related increase in the
between Fll and q, r , and T mustbe considered an probability of incorrectly associating a target and a
approximation. measurement. Incorrect association can have a cascading
An indication of the increasing difficulty of association effect on tracking algorithms which do not account for it.
withincreasing target and measurement densities and The probability a target is not associated with any
uncertainties isgivenby the association probabilities measurement is initially 10 percent for this set of condi-
shown in Fig. 6. The figure shows the probability of not tions and also decreases with increasing density-variance.

[Fig. 6. The probability of correct and incorrect data association versus measurement density x variance.]

TABLE II

Case                                            Number of    Percent Targets   Percent False   Normalized
                                                Hypotheses   Tracked           Targets         Error
q = r = 0.04, α = 0.01 (ρ = 0.01)               10.52        97.3               2.9            0.920
q = r = 0.04, α = 0.1  (ρ = 0.01)                7.67        97.1               3.8            0.922
q = r = 0.04, α = 0.5  (ρ = 0.01)                4.40        81.0               0.6            0.870
q = r = 0.12, α = 0.01 (ρ = 0.03)               13.65        93.6               6.7            1.053
q = r = 0.12, α = 0.1  (ρ = 0.03)                8.04        92.6               6.0            1.052
q = r = 0.12, α = 0.5  (ρ = 0.03)                4.40        75.4               7.0            -
q = r = 0.4,  α = 0.01 (ρ = 0.1)                16.41        89.1               7.7            1.055
q = r = 0.4,  α = 0.1  (ρ = 0.1)                 7.90        85.2              11.8            1.071
q = r = 0.4,  α = 0.5  (ρ = 0.1)                 4.40        75.0               6.5            1.043
q = r = 0.04, α = 0.1  (ρ = 0.01, high β_NT)     7.76        99.3               5.8            0.537
q = r = 0.12, α = 0.1  (ρ = 0.03, high β_NT)     8.05        94.3               6.2            1.052
q = r = 0.4,  α = 0.1  (ρ = 0.1, high β_NT)      7.81        90.8              12.1            -
A number of Monte Carlo runs were made to validate the filter and examine its performance under various conditions. The area under surveillance was 10 units by 10 units. Unless otherwise noted, values of β_TT = 0.05, β_FT = 0.01, P_D = 0.9, q = 0.04, r = 0.04, and T = 1.0 were used in generating the number of targets and the measurements. The fraction of true targets initially known by the filter was f_NT = 0.80. The initial value of the density of unknown targets in the filter was given by (22), with a lower limit of β_FT/4.

In the first set of simulations, three versions of the filter, corresponding to values of the hypothesis elimination criterion α = 0.01, α = 0.1, and α = 0.5, were run with the density-variance product ranging from ρ = 0.01 to ρ = 0.1. The filter is mechanized so that at least one data-association hypothesis is retained for each cluster, so that α = 0.5 corresponds to the maximum likelihood zero-scan filter. The density-variance product was varied by increasing both q and r by the same amount. The results of this simulation are summarized in Table II. Each data point summarizes results over 10 Monte Carlo runs at 10 time intervals, or a total of 100 comparisons between the true target locations and the estimated target locations. There did not appear to be any error trends with time for these runs for the above values of P_D, f_NT, etc., except for an increase in false targets with time.

As the density-variance product increases, the difficulty of making data-to-target associations increases. This difficulty causes an increase in the number of hypotheses for the α = 0.01 filter. The effect of more drastic pruning (α = 0.1) reduces the number of hypotheses. Increases in the density-variance product do not appear to increase the number of hypotheses for the heavily pruned cases (α = 0.1 and α = 0.5). The percent of targets tracked (ratio of correct tracks to true number of targets) appeared quite good for the first case; however, as the density-variance product increased or as more hypotheses were pruned, this percentage dropped. The percent of false targets (ratio of false tracks to total number of tracks) remained moderate for all conditions. Under all conditions, the normalized error (half the average squared position error inversely weighted by the filter's estimated variance in position) was approximately equal to one, indicating that the actual accuracy of the filter agreed with the accuracy predicted by the filter. The filter was neither overconfident nor underconfident.

All of the filter parameters are well-defined and measurable quantities except the new target density β_NT, which is scenario-dependent. A value of β_NT should be chosen based upon the expected range of target densities and the relative importance the user assigns to missing real tracks versus creating false tracks. The higher β_NT, the less likely the algorithm will miss a real target but the more likely it will create a false track. The last three entries in Table II are for the case where the value of β_NT used by the filter is twice what it should have been according to (22).

To present results of other factors which affect performance (e.g., P_D, f_NT, and β_FT/β_TT) would unduly increase the size of this paper; this will not be done except to say that decreases in P_D and f_NT and increases in β_FT/β_TT reduce performance. Also, the percentage of targets tracked and the number of false tracks are functions of the scan number for values of P_D, f_NT, and β_FT/β_TT not used in this simulation.

The algorithm was coded in 1500 lines of Fortran and executed on a UNIVAC 1100 computer. Each of the subroutines shown in Fig. 1 took approximately 300 lines of Fortran, with the main program and other subroutines taking another 300 lines. The core memory requirements for an algorithm capable of handling 10 clusters and 30 targets (and including the Monte Carlo program for generating measurements and evaluating the algorithm) were approximately 64K words.

Ten Monte Carlo runs of 10 scans each were executed in 25-45 s, depending upon the particular case. To handle more clusters and targets, or to reduce memory requirements, the cluster and target information could be put on a disk file. However, disk access time would then cause a large increase in the overall execution time of the program.

IX. CONCLUSIONS

This paper has developed a multiple-target tracking filter incorporating a wide range of capabilities not previously synthesized, including track initiation, multiple-scan correlation, and the ability to process data sets with false or missing measurements. The primary contribution is a Bayesian formulation for determining the probabilities of alternative data-to-target association hypotheses, which permitted this synthesis. In simulations of a simple aircraft tracking problem, the filter demonstrated its capabilities over a wide range of target densities and measurement uncertainties. The filter proved to be robust to errors in the given filter parameters (e.g., unknown target density).

ACKNOWLEDGMENT

The author would like to thank Dr. H. R. Rauch, Mr. R. G. Bryson, and the reviewers for their helpful suggestions.

REFERENCES

[1] R. A. Singer, R. G. Sea, and K. B. Housewright, "Derivation and evaluation of improved tracking filters for use in dense multi-target environments," IEEE Trans. Inform. Theory, vol. IT-20, pp. 423-432, July 1974.
[2] Y. Bar-Shalom and E. Tse, "Tracking in a cluttered environment with probabilistic data association," Automatica, vol. 11, pp. 451-460, 1975.
[3] Y. Bar-Shalom, "Extension of the probabilistic data association filter in multi-target tracking," in Proc. 5th Symp. on Nonlinear Estimation, Sept. 1974, pp. 16-21.
[4] D. L. Alspach, "A Gaussian sum approach to the multi-target identification-tracking problem," Automatica, vol. 11, pp. 285-296, 1975.
[5] R. W. Sittler, "An optimal data association problem in surveillance theory," IEEE Trans. Mil. Electron., vol. MIL-8, pp. 125-139, Apr. 1964.
[6] J. J. Stein and S. S. Blackman, "Generalized correlation of multi-target track data," IEEE Trans. Aerosp. Electron. Syst., vol. AES-11, pp. 1207-1217, Nov. 1975.
[7] P. Smith and G. Buechler, "A branching algorithm for discriminating and tracking multiple objects," IEEE Trans. Automat. Contr., vol. AC-20, pp. 101-104, Feb. 1975.
[8] C. L. Morefield, "Application of 0-1 integer programming to multitarget tracking problems," IEEE Trans. Automat. Contr., vol. AC-22, pp. 302-311, June 1977.
[9] Y. Bar-Shalom, "Tracking methods in a multitarget environment," IEEE Trans. Automat. Contr., vol. AC-23, pp. 618-626, Aug. 1978.
[10] D. L. Alspach and H. W. Sorenson, "Recursive Bayesian estimation using Gaussian sums," Automatica, vol. 7, 1971.
[11] R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng., vol. 82-D, pp. 35-45, 1960.
[12] D. B. Reid, "A multiple hypothesis filter for tracking multiple targets in a cluttered environment," LMSC Rep. D-560254, Sept. 1977.

Donald B. Reid (S'69-M'72) was born on March 29, 1941, in Washington, DC. He received the B.S. degree from the U.S. Military Academy, West Point, NY, in 1963 and the M.S. and Ph.D. degrees in aero & astronautical engineering from Stanford University, Palo Alto, CA, in 1965 and 1972, respectively.
From 1965 to 1969 he served in the U.S. Air Force as an Astronautical Engineer in the 6595th Aerospace Test Wing at Vandenberg AFB and participated in the testing and launch operations of military space programs. He was a member of the technical staff at the Institute for Defense Analyses in Arlington, VA, from 1972 to 1976. Since July 1976 he has been a scientist with the Palo Alto Research Laboratory of Lockheed Missiles & Space Company. His current interests include multiple target tracking, orbital rendezvous and station keeping, system identification, and military command and control systems.
