National Defence / Défense nationale
Defence Research Establishment Ottawa
Ottawa, Canada
July 1990

Data Association Algorithms for Multiple Target Tracking
and
Sang Seok Lim
University of Ottawa
RÉSUMÉ
EXECUTIVE SUMMARY
TABLE OF CONTENTS
1.0 INTRODUCTION TO MULTIPLE TARGET TRACKING
The first approach, nearest-neighbour data association, is described in more detail in chapter 2 below.
The second approach to be described is the Multiple Hypothesis approach. As the name implies, this approach attempts to solve the problem (of not being able to decide correctly at each epoch which measurement corresponds to which target) by keeping all reasonably likely "complete assignments" (as defined above) as working hypotheses (with which to predict the next epoch's target positions). The global distance measure (of proximity of measurements to predicted targets) can then be used to assign a "probability" to each hypothesis at each epoch. These probabilities can then be combined over several epochs, with the expectation that incorrect hypotheses will lead to highly unlikely cumulative probabilities (i.e. dead ends). In this way those hypotheses can be dropped, and the correct hypothesis should eventually stand out alone. The problem with this method is that, unless the number of hypotheses carried forward from one epoch to the next is kept very low, the "hypothesis tree" will grow extremely quickly and a computationally unmanageable situation will arise. This method is described in more detail in chapter 3 below.
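A minimal sketch of this pruning idea, assuming each hypothesis is carried as a pair of cumulative log-probability and assignment history; the beam width and relative-probability threshold are illustrative choices rather than values from the report:

```python
import math

def prune_hypotheses(hypotheses, max_kept=100, min_rel_prob=1e-4):
    """hypotheses: list of (cumulative_log_prob, assignment_history) pairs."""
    best = max(lp for lp, _ in hypotheses)
    # Drop hypotheses whose cumulative probability is negligible relative
    # to the best one (the "dead ends" mentioned above) ...
    survivors = [(lp, h) for lp, h in hypotheses
                 if math.exp(lp - best) >= min_rel_prob]
    # ... and cap the number carried to the next epoch to limit tree growth.
    survivors.sort(key=lambda pair: pair[0], reverse=True)
    return survivors[:max_kept]
```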
The measurement model for the objects being tracked has the form

$$y_k = H_k x_k + v_k \qquad (2)$$

$$y_k = H_k \hat{x}_{k|k-1} + w_k \qquad (3)$$

$$K_k = P_{k|k-1} H_k^T \left[ H_k P_{k|k-1} H_k^T + R_k \right]^{-1} \qquad (5)$$
This requires knowledge of the measurement noise covariance $R_k$ and the measurement matrix $H_k$ relating the measurements to the state vector.
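As a minimal numerical sketch of how the measurement matrix, noise covariance and Kalman gain of equations (2) and (5) fit together (the model dimensions and values below are illustrative assumptions, not taken from the report):

```python
import numpy as np

# Assumed 2-D position/velocity state with position-only measurements.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])            # measurement matrix H_k
R = np.diag([25.0, 25.0])                       # measurement noise covariance R_k
P_pred = np.diag([100.0, 100.0, 10.0, 10.0])    # predicted covariance P_{k|k-1}
x_pred = np.array([0.0, 0.0, 1.0, 1.0])         # predicted state estimate

y_pred = H @ x_pred                             # predicted measurement
S = H @ P_pred @ H.T + R                        # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain, cf. eq. (5)

y_meas = np.array([3.0, -2.0])                  # an incoming measurement y_k
x_upd = x_pred + K @ (y_meas - y_pred)          # updated state estimate
print(x_upd)
```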
$P_D$ - probability of detection.
$B_{kt}$ - density of previously known targets that have been detected.
$P^a_{k|k-1}$ and $\hat{x}^a_{k|k-1}$ are the $P^{a_1}$ and $\hat{x}^{a_1}$ analogues corresponding to the hypothesis "a".
$S_k^{-1}$ = inverse of $S_k$, which is the covariance of the innovation $y_k - \hat{y}_{k|k-1}$ given hypothesis "$a_1$".
$N_c(t)$ - number of observations associated with the target t
2.1 Introduction
$$\hat{y}_{k|k-1} = H_k\,\hat{x}_{k|k-1}$$
Step 7: Solve the assignment matrix: minimize the normalized distance function using the modified Munkres optimal assignment algorithm of Section 2.3 (a code sketch of this assignment step follows the steps below).
Step 4: For each row in the matrix, subtract the smallest element
of the row from each element in the row.
star the zero (i.e. Z*). Repeat for all zeros of the
matrix. Go to Step 8.
Step 11: a) Find the smallest uncovered element in the matrix and
call it "m". "m" will be positive.
b) Add "m" to each covered row.
c) Subtract "m" from each uncovered column.
d) Go to Step 9 without altering stars, primes, or
uncovered lines.
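As a minimal sketch of the assignment step referred to in Step 7, the same minimization can be performed with SciPy's standard rectangular assignment solver (not the report's modified Munkres routine; the cost values below are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are tracks, columns are measurements; entries are normalized
# statistical distances (illustrative values only).
cost = np.array([[1.2, 7.5, 9.9],
                 [6.8, 0.9, 4.1]])

track_idx, meas_idx = linear_sum_assignment(cost)   # minimizes the total cost
for t, j in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {j} (distance {cost[t, j]:.1f})")
```

SciPy's routine accepts rectangular cost matrices directly, which is the case treated by the extension of the Munkres algorithm in reference [13].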
3.0 THE MULTIPLE HYPOTHESIS APPROACH
3.1 Introduction
$$\hat{x}^{a}_{k|k-1} = \hat{x}^{a_1}_{k|k-1} + P^{a_1}_{k|k-1} H_k^T \left[S^{a_1}_{k|k-1}\right]^{-1}\left[y_k - H_k \hat{x}^{a_1}_{k|k-1}\right] \qquad (12)$$

where

$$S^{a_1}_{k|k-1} = H_k\,P^{a_1}_{k|k-1}\,H_k^T + R_k$$

and

$$P^{a}_{k|k-1} = P^{a_1}_{k|k-1} - P^{a_1}_{k|k-1} H_k^T \left[S^{a_1}_{k|k-1}\right]^{-1} H_k\,P^{a_1}_{k|k-1} \qquad (13)$$
3.3 MHT Algorithm
STEP 1: Set scan counter $k = 1$ and all the control parameters.
STEP 5: For each history (or hypothesis) "$a_1$", $1 \le a_1 \le L_{k-1}$,
receive the unconditional estimation parameters (from STEP 11):
STEP 8: For each remaining history (or hypothesis) "a", and each
track within "a":
Compute $\hat{x}^a_{k|k-1}$ using equation (12).
Compute $P^a_{k|k-1}$ using equation (13).
STEP 9: For each history (or hypothesis) "a",
Calculate $A_k$ using equation (16).
Calculate $\hat{x}^a_{k|k}$ using equation (17).
Calculate $P^a_{k|k}$ using equation (18).
STEP 12: GO TO STEP 5 for next scan data and/or output the results.
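A minimal sketch of the hypothesis-conditioned update invoked in STEP 8, following the reconstructed equations (12) and (13) above (array shapes and the surrounding branch bookkeeping are assumptions for illustration):

```python
import numpy as np

def branch_update(x_parent, P_parent, y, H, R):
    """Update a parent hypothesis's track estimate with the measurement
    assigned to it under a child hypothesis (cf. equations (12)-(13))."""
    S = H @ P_parent @ H.T + R                    # innovation covariance S
    W = P_parent @ H.T @ np.linalg.inv(S)         # weighting P H^T S^-1
    x_child = x_parent + W @ (y - H @ x_parent)   # state update, eq. (12)
    P_child = P_parent - W @ S @ W.T              # covariance update, equivalent to eq. (13)
    return x_child, P_child
```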
14
START
a) Hypothesis Generation,
b) Gating,
c) Hypothesis Matrix
d) Hypothesis Probabilities
Clustering
•,yes
SFor Simulation:
Print & Plot Results
35
4.0 JOINT PROBABILISTIC DATA ASSOCIATION
4.1 Introduction
4.2 A JPDA Algorithm
Step 1: Set k=1 (time index) and all the control parameters.
The validation matrix is

$$\Omega = \left[\omega(\theta_{jt})\right], \qquad j = 1,2,\ldots,N_m;\ t = 1,2,\ldots,N_t \qquad (25)$$

where

$$\omega(\theta_{jt}) = \begin{cases} 1 & \text{if measurement } j \text{ is within the gate of target } t \text{ (event } \theta_{jt} \text{ occurs)} \\ 0 & \text{otherwise} \end{cases} \qquad (26)$$
$$\tilde{y}(j,t) = y_j - \hat{y}^{\,t}_{k|k-1} \qquad (27)$$
$$S(j,t) = H_k\,P^{t}_{k|k-1}\,H_k^T + R_k \qquad (28)$$
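A minimal sketch of how equations (25)-(28) translate into a gating test that fills the validation matrix (the chi-square gate threshold and dimensions are illustrative assumptions):

```python
import numpy as np

def validation_matrix(meas, y_pred, S, gate=9.21):
    """meas: (Nm, m) measurements; y_pred: (Nt, m) predicted measurements;
    S: (Nt, m, m) innovation covariances; gate: chi-square threshold
    (9.21 is roughly the 99% point for m = 2 degrees of freedom)."""
    Nm, Nt = meas.shape[0], y_pred.shape[0]
    omega = np.zeros((Nm, Nt), dtype=int)
    for t in range(Nt):
        S_inv = np.linalg.inv(S[t])
        for j in range(Nm):
            innov = meas[j] - y_pred[t]              # ytilde(j, t), eq. (27)
            d2 = innov @ S_inv @ innov               # normalized distance
            omega[j, t] = 1 if d2 <= gate else 0     # eq. (26)
    return omega
```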
The joint event probabilities are

$$P_{\text{hyp}}(\theta) = \Pr\{\theta \mid Z^k\}
= \frac{\lambda^{\phi}}{c}\,\prod_{j=1}^{N_m}\left[\frac{\exp\left(-\tfrac{1}{2}\,\tilde{y}(j,t)^T\,S(j,t)^{-1}\,\tilde{y}(j,t)\right)}{\sqrt{(2\pi)^{m}\,\lvert S(j,t)\rvert}}\right]^{\tau_j}\,\prod_{t=1}^{N_t}\left(P_D^t\right)^{\delta_t}\left(1-P_D^t\right)^{1-\delta_t} \qquad (29)$$

where $\tau_j = 1$ if measurement $j$ is associated with a target under $\theta$ (and 0 otherwise), $\delta_t = 1$ if target $t$ is detected under $\theta$, $\phi$ is the number of measurements declared false under $\theta$, $\lambda$ is the spatial density of false measurements, and $c$ is a normalizing constant.
Step 13: If desired, calculate and output the estimated current target position, velocity, etc. (at time k) from the current state estimate $\hat{x}_{k|k}$, or the predicted target position, etc. (at time k+1) using the predicted state estimate $\hat{x}_{k+1|k}$. The associated covariance may also be desired.
Remark:
In the above algorithm, the joint event probabilities are computed using equation (29), which is derived under the assumption that the probability mass function (PMF) of the number of false measurements is given by the Poisson PMF (see equation (9-42) of [2]):

$$\mu_F(\phi) = e^{-\lambda V}\,\frac{(\lambda V)^{\phi}}{\phi!}$$

where $\lambda$ is the spatial density of false measurements and $V$ is the volume of the validation region.
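A brute-force sketch of how the joint events permitted by the validation matrix can be enumerated and scored along the lines of equation (29) (exhaustive enumeration is only feasible for small numbers of targets and measurements; the function and its arguments are assumptions for illustration, not the report's code):

```python
import itertools
import numpy as np

def joint_event_probabilities(omega, gauss, PD, clutter_density):
    """Enumerate the feasible joint association events allowed by the
    validation matrix and score them as in equation (29).
    omega: (Nm, Nt) validation matrix of 0/1 entries;
    gauss[j, t]: Gaussian likelihood of measurement j for target t;
    PD: length-Nt array of detection probabilities;
    clutter_density: spatial density of false measurements (lambda)."""
    Nm, Nt = omega.shape
    events, scores = [], []
    # Each measurement is assigned to one gated target, or declared false (None).
    choices = [[None] + [t for t in range(Nt) if omega[j, t]] for j in range(Nm)]
    for assign in itertools.product(*choices):
        used = [t for t in assign if t is not None]
        if len(used) != len(set(used)):   # each target takes at most one measurement
            continue
        score = 1.0
        for j, t in enumerate(assign):
            score *= clutter_density if t is None else gauss[j, t]
        for t in range(Nt):
            score *= PD[t] if t in used else (1.0 - PD[t])
        events.append(assign)
        scores.append(score)
    scores = np.array(scores)
    return events, scores / scores.sum()  # division by the sum plays the role of 1/c
```

The marginal association probabilities that the JPDA filter actually uses are then obtained by summing the normalized event probabilities over all events in which measurement j is assigned to target t.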
5.0 CONCLUSIONS
Comments:
clutter environment such as sonar tracking.
Suggestions:
REFERENCES
[9] Singer, R.A., Sea, R.G. and Housewright, K., "Derivation and
evaluation of improved tracking filters for use in dense
multitarget environment", IEEE Trans. Inform. Theory, Vol. IT-20,
pp.423-432, July 1974.
[11] Reid, D.B., "An algorithm for tracking multiple targets", IEEE
Trans. Autom. Contr., Vol. AC-24, pp.843-854, Dec.1979.
[12] Athans, M., Whiting, R.H. and Gruber, M., "A suboptimal
estimation algorithm with probabilistic editing for false
measurements with application to target tracking with wake
phenomena", IEEE Trans. Autom. Contr., Vol. AC-22, pp.372-384, June
1977.
[13] Bourgeois, F. and Lassalle, J.C., "An extension of the Munkres
algorithm for the assignment problem to rectangular matrices",
Communications of the ACM, Vol. 14, pp. 802-806, Dec. 1971.
APPENDIX A: HYPOTHESIS TREE AND HYPOTHESIS MATRIX
Example:
For the configuration of targets and measurements shown in
Fig. 1, the hypothesis tree is formed using the rule mentioned
above. In the hypothesis tree shown in Fig. 2, each node represents
one of the track qualities:
"0" is the false target or false alarm;
"1, 2, ..." are the confirmed or new targets.
[Fig. 1: configuration of targets (1, 2) and measurements. Fig. 2: the resulting hypothesis tree; each branch assigns a measurement to 0 (false target / false alarm), to an existing target (1 or 2), or to a new target (3, 4 or 5), and the leaves are the hypotheses H1-H28.]
The hypothesis matrix corresponding to the tree can be
formed as follows.
meas. 1   meas. 2   meas. 3   hypothesis
   0         2         0         H5
   1         2         0         H6
   3         2         0         H7
   0         4         0         H8
   1         4         0         H9
   2         4         0         H10
   3         4         0         H11
   0         0         2         H12
   1         0         2         H13
   3         0         2         H14
   0         4         2         H15
   1         4         2         H16
   3         4         2         H17
   0         0         5         H18
   1         0         5         H19
   2         0         5         H20
   3         0         5         H21
   0         2         5         H22
   1         2         5         H23
   3         2         5         H24
   0         4         5         H25
   1         4         5         H26
   2         4         5         H27
   3         4         5         H28

(Each row gives the target to which measurements 1, 2 and 3 are assigned under that hypothesis; 0 denotes a false alarm. The last column gives the hypothesis number.)
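The enumeration that produces such a matrix can be sketched as follows; this is a small illustration rather than the report's code, with a gating pattern chosen so that the 28 feasible hypotheses of the example are reproduced (the numbering order may differ from the tree's):

```python
import itertools

def enumerate_hypotheses(gates, existing_targets, next_new_target):
    """Enumerate the single-scan association hypotheses that form the rows of
    the hypothesis matrix.  gates[j] lists the existing targets whose gates
    contain measurement j; each measurement may instead be a false alarm (0)
    or start a new target.  Inputs and labels are illustrative."""
    Nm = len(gates)
    new_labels = [next_new_target + j for j in range(Nm)]
    options = [[0] + list(gates[j]) + [new_labels[j]] for j in range(Nm)]
    rows = []
    for assign in itertools.product(*options):
        used = [t for t in assign if t in existing_targets]
        if len(used) == len(set(used)):   # an existing target takes at most one measurement
            rows.append(assign)
    return rows

# Roughly the Appendix A example: three measurements, existing targets 1 and 2,
# new targets labelled 3, 4, 5; measurement 1 gates on targets 1 and 2,
# measurements 2 and 3 gate on target 2 only.
rows = enumerate_hypotheses(gates=[(1, 2), (2,), (2,)],
                            existing_targets={1, 2}, next_new_target=3)
print(len(rows))          # 28 hypotheses, as in the matrix above
```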
APPENDIX B: HYPOTHESIS PROBABILITIES
$$P_D \, N\!\left[\,y_k - H_k\hat{x}_{k|k-1},\,S\,\right] \big/ (1 - P_D)$$
Example
[Figure: example hypothesis tree in which branch events T1-T7 generate hypotheses H11-H17 from the prior hypothesis H1.]
Pr(H11) = Pr(H1)*Pr(T1)
Pr(H12) = Pr(H11)*Pr(T2)
Pr(H13) = Pr(H12)*Pr(T3)
Pr(H14) = Pr(H12)*Pr(T4)
Pr(H15) = Pr(H11)*Pr(T5)
Pr(H16) = Pr(H15)*Pr(T6)
Pr(H17) = Pr(H15)*Pr(T7)
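A minimal sketch of this recursion (the branch probabilities Pr(Ti) are placeholder values; in the report they come from expressions such as the one at the head of this appendix):

```python
# Branch structure of the example tree: child hypothesis -> (parent, branch event).
tree = {"H11": ("H1", "T1"), "H12": ("H11", "T2"), "H13": ("H12", "T3"),
        "H14": ("H12", "T4"), "H15": ("H11", "T5"), "H16": ("H15", "T6"),
        "H17": ("H15", "T7")}
p_T = {f"T{i}": 0.5 for i in range(1, 8)}    # placeholder branch probabilities Pr(Ti)
prob = {"H1": 1.0}                           # probability of the prior hypothesis H1

def hyp_prob(name):
    if name not in prob:
        parent, event = tree[name]
        prob[name] = hyp_prob(parent) * p_T[event]   # Pr(child) = Pr(parent) * Pr(Ti)
    return prob[name]

for h in tree:
    print(h, hyp_prob(h))
```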
$$B_{kt}\,(1 - P_D)\big/ c.$$
Remark:
The exponential factor $e^{-\lambda V}$ should be properly introduced in the
probability calculations (as per reference [4], pp. 255-260).
APPENDIX C: HYPOTHESIS REDUCTION
APPENDIX D: CLUSTERING
DOCUMENT CONTROL DATA (UNCLASSIFIED)

Originator: Defence Research Establishment Ottawa, Department of National Defence, Ottawa, Ontario K1A 0Z4
Security classification: UNCLASSIFIED
Title: Data Association Algorithms for Multiple Target Tracking (U)
Date of publication: August 1990
No. of pages: 36          No. of refs: 14
Descriptive notes: DREO Report
Sponsoring activity: Defence Research Establishment Ottawa, Department of National Defence, Ottawa, Ontario K1A 0Z4
Project or grant no.: 041LJ
Document availability: Unlimited distribution
Document announcement: Unlimited
Keywords: target tracking; data association; tracking filter; multiple hypothesis; joint probabilistic; nearest neighbour