
sensors

Article
Adaptive Data Fusion Method of Multisensors Based on
LSTM-GWFA Hybrid Model for Tracking Dynamic Targets
Hao Yin 1 , Dongguang Li 1,2 , Yue Wang 1, * and Xiaotong Hong 1

1 School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China;


[email protected] (H.Y.); [email protected] (D.L.); [email protected] (X.H.)
2 School of Mechatronical Engineering, North University of China, Taiyuan 038507, China
* Correspondence: [email protected]

Abstract: In preparation for the battlefields of the future, using unmanned aerial vehicles (UAVs) loaded with multisensors to track dynamic targets has become a research focus in recent years. Based on air combat tracking scenarios and traditional multisensor weighted fusion algorithms, this paper designs a new data fusion method using a global Kalman filter and LSTM-predicted measurement variance, which uses an adaptive truncation mechanism to determine the optimal weights. The method considers the temporal correlation of the measured data and introduces a detection mechanism for target maneuvering. Numerical simulation results show that the accuracy of the algorithm can be improved by about 66% by training on 871 flight data records. Based on a mature refitted civil wing UAV platform, field experiments verified that the data fusion method for tracking dynamic targets is effective, stable, and has generalization ability.

Keywords: multisensor; data fusion; LSTM; dynamic target tracking

Citation: Yin, H.; Li, D.; Wang, Y.; Hong, X. Adaptive Data Fusion Method of Multisensors Based on LSTM-GWFA Hybrid Model for Tracking Dynamic Targets. Sensors 2022, 22, 5800. https://doi.org/10.3390/s22155800

Academic Editor: Sameh Nassar
Received: 30 June 2022; Accepted: 31 July 2022; Published: 3 August 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In the field of future air combat, high-precision tracking and monitoring of enemy aircraft by UAVs will be an indispensable link in the OODA combat ring, which provides important support for combat decision-making [1]. It requires the use of computer technology to analyze and synthesize the observed data of sensors to obtain a more accurate trajectory of targets. With the rapid development of modern aerospace technology, the speed, angle, acceleration, and other parameters of maneuvering targets in space are changing constantly, which gives the sequential positions of dynamic targets strong correlation. The research of tracking dynamic targets with maneuvering characteristics has become a hot spot of electronic warfare, and there is an urgent need to study superior tracking filtering methods. Although the increased hardware performance of various sensors has slightly improved the accuracy of systems, radar and photoelectric sensors have still been used to track enemy aircraft in recent decades. Moreover, the significant impact of measurement system errors on the distance and angle readings of sensors should not be underestimated. In order to achieve sufficient accuracy when tracking enemy aircraft in motion, an adaptive online multisensor data fusion method is needed to improve the traditional Kalman filter (KF) and other algorithms. Multisensor data fusion (MSDF) makes full use of multisource data and combines their redundancy or complementarity according to rules based on the hardware of multisensor systems, so as to obtain a more essential and objective understanding of the same thing. The tracking trajectory of dynamic targets is based on updating the system states in each time step by fusion of multisensor results, which can be more robust than any single data source. A novel MSDF architecture can give the system a more accurate estimation of the error in each KF-based model to obtain better fusion results. In this paper, a new data fusion method is proposed which integrates a Kalman filter, least squares (LS), a neural network (NN), and a maneuver detection mechanism to improve the performance of filtering. In contrast to traditional data fusion,


which only considers a single filtering process, this method focuses on the architecture of global fusion, which makes full use of the relationship between the combined and the individual multisensors. It combines the impact of historical data on the current data, so as to realize online and real-time adaptive, fast, and accurate data fusion.
In the task of tracking enemy aircraft for air combat, this research used a data fusion scene composed of UAVs with sensors and enemy aircraft, as illustrated in Figure 1. There was a complete data link between UAVs which could communicate with each other at any time. Each UAV carried only one type of detection sensor (photoelectric or radar sensor). Enemy aircraft attempted to escape tracking through various maneuvers, causing errors in the continuous detection of their position by UAVs. When the sensor detection of more than two UAVs obtained the position of the enemy aircraft, data fusion could be carried out and the final fusion results would be shared to realize the cooperative tracking of multiple UAVs. Then the UAVs with a lost tracking field of view could be redirected to the correct tracking path. The research content of this paper is the process of autonomous data fusion after the UAVs obtained the multisource state of enemy aircraft using multisensors.

Figure 1. The scene of multisensor data fusion for tracking a maneuvering target in air combat.

To solve the problem of multisensor data fusion for tracking enemy aircraft, this paper puts forward several assumptions.
A. There may be a weak communication delay between UAVs. The acquired position data error of enemy aircraft caused by the communication delay of UAVs can be corrected by time and space alignment. (This paper focused on the data fusion process rather than on how to reduce the communication delay, so the pre-processing of data is not something the fusion algorithm itself needed to consider).
B. Each UAV detects the target independently, so the position data errors of enemy aircraft obtained by each sensor are irrelevant. (In the practical scenario considered in this paper, the sensors carried by UAVs did not provide cooperation and support for other sensor detection. Therefore, the interaction of data errors was ignored).
C. The relative motion of UAVs and enemy aircraft can be regarded as in the same plane during the continuous tracking process in discrete time. (For any maneuver form of the escaping aircraft, the UAV adopts the bank-to-turn (BTT) form and keeps itself in the same relative plane, rather than in the horizontal plane at the same height).
D. Once the enemy aircraft takes maneuvering flight, its main states of speed and direction will change. (Dynamic changes of enemy aircraft are inevitable. Speed and direction directly lead to position changes, which is very important for occupancy guidance or defensive games in air combat. Sometimes acceleration may be considered).
The purpose of existing data fusion is to eliminate the uncertainty in measurement and obtain more accurate and reliable measurement results than the arithmetic mean of limited measurement information [2]. There are many methods of data fusion, such as weighted average, classical inference, Bayesian estimation, and so on.
This paper used global filtering and a long short-term memory (LSTM) network to improve the measurement error prediction of a least squares Kalman filter as the basic fusion method. The statistical characteristics of the measurement model are used to determine the optimal estimation of the fusion data recursively. A general data fusion method for tracking dynamic targets was constructed to realize real-time high-precision trajectory acquisition, verified by numerical simulation and field tests.
The contributions of this paper include: (1) Based on the traditional multisensor
weighted fusion algorithm, a new data fusion method using a global Kalman filter and the
predicted measurement variance of LSTM was designed. An adaptive truncation mecha-
nism was used to determine the optimal weights by considering the influence of multisensor
data fusion on a single datum. (2) A maneuvering detection mechanism was added through
a variable dimension Kalman filter, whose results were used as data fusion parameters.
The effectiveness and stability of the fusion method for dynamic tracking were verified by numerical simulation and field experiments, and the method shows a certain generalization ability.
The rest of the paper is organized as follows. Section 2 presents a complete review of
prior works in the literature relevant to the research. Section 3 describes the preparatory
work of data fusion, and proposes a new architecture of weighted data fusion based on
a Kalman filter, which makes the fusion result error smaller and closer to the real value.
Section 4 analyzes and discusses the results of numerical simulation and field tests. Finally,
Section 5 draws conclusions and outlines future research directions.

2. Related Works
As an important data processing method, data fusion technology has a wide range of
applications, such as target detection [3,4], battlefield evaluation [5], medical diagnosis [6],
remote sensing mapping [7–9], fault diagnosis [10–12], intelligent manufacturing [13–15],
etc. Traditional data fusion methods can be divided into groups based on probability,
Dempster Shafer theory, knowledge, etc. [16].
The data fusion methods based on probability include Bayesian inference [17], the
Kalman filter model [18,19], and the Markov model [20]. They represent the dependence
between random variables by introducing probability distribution and probability density
function. Sun et al. [21] proposed a weighted fusion robust incremental Kalman filtering
algorithm and introduced the incremental formula to eliminate the unknown measurement
error of the system. Xue et al. [22] proposed an adaptive multi-model extended Kalman filter
(AME-EKF) estimation method based on speed and flux which can adjust the transition
probability and system noise matrix continuously and adaptively to reduce the speed
estimation error by using residual sequence. Huang et al. [23] proposed a weighted fusion
method of real-time separation and correction of outliers which solved the problem of
the decline of ballistic data processing accuracy caused by the pollution of measurement
data by introducing the robust estimation theory. Aiming at the problem that the classical
interactive multi-model (IMM) cannot provide the a priori information of the target motion
model, Xue et al. [24] proposed a multisensor hierarchical weighted fusion algorithm for
maneuvering target tracking which obtained its model through hierarchical weighting, and
estimated the state using an extended Kalman filter (EKF).
The fusion method based on Dempster Shafer (D-S) theory expresses the uncertainty
of data by introducing confidence and rationality in dynamic situations. In order to
avoid counter-intuitive results when combining highly conflicting pieces of evidence,
Xiao et al. [25] proposed a weighted combination method for conflicting pieces of evidence
in multisensor data fusion by considering both the interplay between the pieces of evidence
and the impacts of the pieces of evidence themselves. Zhao et al. [26] proposed a
multiplace parallel fusion method based on D-S evidence theory for sensor network target
recognition information fusion in which each sensor only needs to exchange information
with neighbor sensors. Aiming at the limitations of the simple and incomplete information sources of single-sensor fault diagnosis, Zheng et al. [27] proposed a multisensor information fusion fault diagnosis method based on empirical mode decomposition sample entropy and improved D-S evidence theory, which realizes the effective fusion of multiple sensor information and forms information complementarity between sensors to achieve optimal decision-making.
The fusion method based on knowledge mainly uses support vector machine and
clustering methods to find the knowledge contained in the data and measure the correlation
and similarity between knowledge. Guan [28] proposed a data fusion method of high-
resolution and multispectral remote sensing satellite images based on multisource sensor
ground monitoring data by introducing the theory and method of support vector machine
(SVM) and applied it to water quality monitoring and evaluation. Zhu et al. [29] proposed a
hybrid multisensor data fusion algorithm based on fuzzy clustering by designing a strategy
to determine the initial clustering center, which can deal with the limitations of traditional
methods in data with noise and different density.
Since the proposal of deep learning, its application in data fusion has attracted the
attention of a large number of researchers. In order to improve the accuracy of wireless
sensor network data fusion, Cao et al. [30] proposed a wireless sensor network data fusion
algorithm (IGWOBPDA) based on an improved gray wolf algorithm and an optimized BP
neural network by improving control parameters and updating the position of dynamic
weights. To improve the performance of data fusion in wireless sensor networks, Pan [31]
proposed a data fusion algorithm that combines a stacked autoencoder (SAE) and a clustering protocol based on a suboptimal-network-powered deep learning model. Aiming at the problem that the soft sensing modeling methods of most complex industrial processes cannot mine the process data, resulting in low prediction accuracy and generalization performance, Wu [32] proposed a soft sensing method based on an extended convolutional neural network (DCNN) combined with data fusion and correlation analysis.
fusion research is the application of traditional filtering architecture to seek fewer errors
and accurate target state estimation in the field of tracking. Yu et al. [33] reviewed the LSTM
cell and its variants to explore the learning capacity of the LSTM cell and proposed future
research directions for LSTM networks. Sherstinsky [34] explained the essential RNN and
LSTM fundamentals in a single document, which formally derived the canonical RNN
formulation from differential equations by drawing from concepts in signal processing.
Ye et al. [35] proposed an attention-augmentation bidirectional multiresidual recurrent
neural network (ABMRNN), which integrates both past and future information at every
time step with an omniscient attention model. Duan et al. [36] proposed a tracking method
for unmarked AGVs by using the frame difference method and particle filter tracking, then
using a support vector machine (SVM) model to predict the area and correct the prediction
results of the LSTM network.

3. Methodology
3.1. Least Square Fitting Estimation
When n sensors track a dynamic target at the same time, the weighted data fusion algorithm uses the weighted least squares criterion: the estimate is obtained by minimizing the weighted sum of squared errors [37]. In the process of multisensor cooperative detection, the observation of a certain state parameter of the detected target by the n sensors is as shown in Equation (1).

y = Hx + e (1)

where x is a one-dimensional measured vector, which represents the true value of a certain parameter of the detected target; y is an n-dimensional measurement vector composed of the observations of that parameter by the n sensors, y = [y_1, y_2, ..., y_n]^T; H is a known n-dimensional constant vector under ordinary conditions; e is an n-dimensional vector composed of the measurement noise of each sensor, including the internal noise of the sensor system and environmental interference noise, e = [e_1, e_2, ..., e_n]^T.
The criterion of least squares estimation is to minimize the sum of squares of weighted errors T(x̂):

T(x̂) = (y − Hx̂)^T W (y − Hx̂)  (2)

where W is a positive definite diagonal weighting matrix, which represents the fusion weight of each sensor, W = diag[w_1, w_2, ..., w_n], and x̂ is the estimate of the true value x. To ensure the minimum of Equation (2), calculate the partial derivative to obtain Equation (3):

∂T(x̂)/∂x̂ = −H^T (W + W^T)(y − Hx̂)  (3)

Setting Equation (3) to zero yields the estimated value x_fusion. The least squares estimate of the method is as shown in Equation (4):

x_fusion = (H^T W H)^(−1) H^T W y = (∑_{i=1}^n w_i y_i) / (∑_{i=1}^n w_i)  (4)

3.2. Fusion Weights and Measurement Variance


The noise of each sensor changes randomly and is uncorrelated over time. Therefore, the following assumptions can be made for the measurement noise of the sensors: (1) the measurement noise of each sensor is an ergodic signal; (2) the measurement noise outputs of the sensors are Gaussian white noise, which obey a normal distribution and are independent of each other.

E(e_i) = 0  (i = 1, 2, 3, ..., n)  (5)

E(e_i²) = E[(z_i − x)²] = σ_i²  (i = 1, 2, 3, ..., n)  (6)

E[(x − z_i)(x − z_j)] = 0  (i ≠ j)  (7)

where z_i and z_j are the observed values of sensors i and j. According to the measurement noise characteristics of each sensor, the mean error x̃ of the unbiased estimator x̂ of a target state parameter x is calculated as shown in Equation (8):

x̃ = E[(x − x̂)²] = E[((∑_{i=1}^n w_i x − ∑_{i=1}^n w_i z_i) / ∑_{i=1}^n w_i)²]  (8)

Equation (8) is expanded into the two parts shown in Equations (9) and (10):

x̃_1 = E{∑_{i=1}^n [(w_i / ∑_{j=1}^n w_j)² (x − z_i)²]}  (9)

x̃_2 = E{∑_{i=1}^n ∑_{j=1, j≠i}^n [w_i w_j / (∑_{k=1}^n w_k)²] (x − z_i)(x − z_j)}  (10)

Equation (7) indicates that x̃_2 in Equation (10) is zero, so the mean error of the unbiased estimator x̂ of the target state parameter x is as shown in Equation (11):

x̃ = E[(x − x̂)²] = E[((∑_{i=1}^n w_i x − ∑_{i=1}^n w_i z_i) / ∑_{i=1}^n w_i)²] = E{∑_{i=1}^n [(w_i / ∑_{j=1}^n w_j)² (x − z_i)²]}  (11)
To find the minimum value of Equation (11), take the partial derivative with respect to w_i and set it equal to zero. Then the relationship between the fusion weight w_i and the measurement variance σ_i² is obtained as shown in Equation (12):

w_i = 1 / σ_i²  (12)

where w_i is the weighted fusion coefficient of each sensor; σ_i² is the measurement variance of each sensor; e_i is the measurement noise of each sensor; z_i is the observed value of each sensor; x is the real value of a state parameter of the target; x̂ is the unbiased estimator of the state parameter x; x̃ is the mean error between the estimated value x̂ and the real value x; and x̃_1 and x̃_2 are the two parts of the mean error x̃.
The derivation shows that the fusion weight of each sensor is determined only by its measurement variance. The accuracy of the sensor measurement variance calculation directly affects the accuracy of the data fusion results, so accurate weights are the core of the multisensor data fusion problem. The innovative data fusion algorithm proposed in this paper is based on weighted data fusion, and the accuracy of the measurement variance estimation ultimately affects the trajectory tracking performance.
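As a brief numerical illustration of Equation (12) (the variances below are assumed, not measured values from the paper), inverse-variance weighting yields a fused estimate whose variance is no larger than that of the best single sensor:

```python
import numpy as np

# Numerical illustration of Equation (12) with assumed variances: inverse-variance
# weights give a fused estimate whose variance never exceeds the best sensor's.
sigma2 = np.array([4.0, 1.0, 2.0])   # assumed measurement variances of 3 sensors
z = np.array([10.6, 9.9, 10.3])      # one set of observations of the same state

w = 1.0 / sigma2                     # Equation (12): w_i = 1 / sigma_i^2
x_hat = np.sum(w * z) / np.sum(w)    # weighted fusion, Equation (4)
var_fused = 1.0 / np.sum(w)          # variance of the fused estimate

# var_fused = 1/1.75, about 0.571, below 1.0, the best single-sensor variance.
```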

3.3. Multisensor Fusion Architecture


In traditional data fusion, the original data are averaged directly as the input of a Kalman filter or extended Kalman filter to estimate the state of the target, which ignores the independence of each sensor and does not weaken the error in essence. In the least squares weighted data fusion algorithm, the final state estimate depends on the weight allocation according to the real-time measurement variance after the Kalman filter. There are more advanced methods of real-time measurement variance estimation, such as the forgetting factor, the ordered weighted averaging (OWA) operator, and so on. However, they consider neither the correlation of the sensor measurement variance in continuous time nor improvements to the fusion architecture.
Based on the least square weighted Kalman filtering method, this paper proposes an
adaptive multisensor weighted fusion algorithm with global Kalman filter and long short-
term memory (LSTM-GWFA) network for tracking. The maneuver detection mechanism is
introduced as the correction of the filtering results and LSTM is applied to the prediction of
sensor measurement variance to improve the accuracy of weights allocation. The method
is divided into four stages as shown in Figure 2: independent Kalman filter, global Kalman
filter, measurement variance estimation, and data weighted fusion.

Figure 2. The operation process architecture of LSTM-GWFA (stages: independent Kalman filter, global Kalman filter, measurement variance estimation, and data weighted fusion).



The stage of independent Kalman filtering filters the state data obtained by each sensor independently, which modifies the prediction model and produces each sensor's own estimate.
The stage of global Kalman filtering obtains an initial fusion result from the independently filtered data of each sensor through a unified filter. The measurement variance of each sensor in the global filtering is calculated as the input of the next stage.
The stage of measurement variance estimation uses the historical measurement variance of each sensor as the input of an LSTM network to obtain an estimated measurement variance, which is then combined with the real-time measurement variance to obtain the final measurement variance of each sensor.
The stage of weighted fusion allocates the weights according to the final measurement variance of each sensor and fuses the filtered data; the result is then sent back to the Kalman filter of each sensor to modify the prediction model.

3.3.1. Independent Kalman Filter


Generally, there are two methods to obtain the state of an aircraft under the research background of this paper. One is to measure it directly and obtain the observed value. The other is to predict it according to a certain rule based on the state at the previous moment. Due to observation noise, the inaccuracy of the prediction model, and the estimation error of the position at the previous time, neither method alone can obtain a sufficiently accurate state estimate. The traditional weighted fusion method based on a Kalman filter combines the two results by weighting according to the noise covariance, state estimation covariance, and state transition covariance to obtain a relatively accurate state. Building on this, this paper uses a variable dimension Kalman filter to switch between constant velocity (CV) and constant acceleration (CA) models for data processing according to the motion characteristics of the aircraft. In this stage, the maneuver is regarded as an internal change of the dynamic characteristics of the target rather than being modeled as noise. A sliding window continuously counts the significant characteristics of the change of velocity and acceleration over a period of time, and the corresponding high-dimensional and low-dimensional models are switched by determining whether the aircraft has maneuvering displacement according to the set threshold and confidence interval referred to in Appendix A.
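The sliding-window maneuver check can be sketched as follows. This is a hypothetical illustration: the window size, threshold, and vote count are placeholders, and the paper's actual threshold and confidence interval are defined in its Appendix A.

```python
from collections import deque

# Hypothetical sketch of the sliding-window maneuver check; window size,
# threshold, and vote count are placeholder values, not the paper's settings.
class ManeuverDetector:
    def __init__(self, window=5, threshold=1.0, min_votes=3):
        self.dv = deque(maxlen=window)   # recent velocity increments
        self.threshold = threshold       # significance threshold for one increment
        self.min_votes = min_votes       # votes needed to switch to the CA model

    def update(self, v_prev, v_now):
        """Return which model ('CV' or 'CA') the filter should use this step."""
        self.dv.append(abs(v_now - v_prev))
        votes = sum(d > self.threshold for d in self.dv)
        return "CA" if votes >= self.min_votes else "CV"

det = ManeuverDetector()
model = "CV"
for v_prev, v_now in [(10.0, 10.1), (10.1, 12.5), (12.5, 15.0), (15.0, 18.0)]:
    model = det.update(v_prev, v_now)
# Increments 0.1, 2.4, 2.5, 3.0 -> three exceed the threshold, so the high-
# dimensional CA model is selected.
```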
The traditional Kalman filter uses the results of its own filtering as the input of the next prediction to correct its accuracy iteratively. In multisensor data fusion, however, this self-correction depends only on the measurement and filtering results of the current sensor, and the filtering results of each sensor contribute nothing to the prediction models of the others. Therefore, this paper uses the final fusion results as the inputs of the next independent Kalman filter prediction model, which can accurately modify the Kalman filter prediction model of each sensor. Equations (13)–(18) show the implementation process of the independent Kalman filter.

z_ik = H x_ik + v_ik  (13)

x̂0_ik = A x_fusion^(k−1) + B u_i(k−1) + o_i(k−1)  (14)

P0_ik = A P_i(k−1) A^T + Q  (15)

K_ik = P0_ik H^T [H P0_ik H^T + R]^(−1)  (16)

x̂_ik = x̂0_ik + K_ik (z_ik − H x̂0_ik)  (17)

P_ik = (I − K_ik H) P0_ik  (18)

where z_ik is the value observed by sensor i at time k; x̂0_ik is the a priori state estimate; x̂_ik is the a posteriori state estimate; A is the state transition matrix and B is the control matrix; H is the observation matrix; u_i(k−1) is the control variable; v_ik is the observation noise; o_i(k−1) is the noise of the prediction process; P_ik and P_i(k−1) are the a posteriori estimated covariances; P0_ik is the a priori estimated covariance; Q is the process noise covariance and R is the measurement noise covariance; and K_ik is the Kalman gain.
Combined with the fusion result x_fusion^(k−1) of the previous moment and the state transition matrix A, this paper used the estimation characteristics of the Kalman filter to filter the observed data z_ik of each sensor.
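A minimal numerical sketch of Equations (13)–(18) follows. All matrix values are assumptions for illustration; the paper's filter additionally switches CV/CA models, and its prediction is seeded by the previous fusion result rather than the sensor's own estimate, which the sketch mimics with the `x_fused_prev` argument.

```python
import numpy as np

# Minimal sketch of one independent Kalman filter step, Equations (13)-(18),
# for a 1D constant-velocity state [position, velocity]. Matrix values assumed.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix A
H = np.array([[1.0, 0.0]])              # observation matrix H: only position observed
Q = 1e-3 * np.eye(2)                    # process noise covariance Q
R = np.array([[0.5]])                   # measurement noise covariance R

def kalman_step(x_fused_prev, P, z):
    # Prediction seeded by the PREVIOUS FUSION RESULT x_fusion^(k-1),
    # not this sensor's own last estimate (Equation (14), with u = 0, o = 0).
    x_prior = A @ x_fused_prev
    P_prior = A @ P @ A.T + Q                                   # Equation (15)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)    # Equation (16)
    x_post = x_prior + K @ (z - H @ x_prior)                    # Equation (17)
    P_post = (np.eye(2) - K @ H) @ P_prior                      # Equation (18)
    return x_post, P_post

x, P = np.array([0.0, 1.0]), np.eye(2)        # previous fusion result, covariance
x, P = kalman_step(x, P, np.array([0.12]))    # observation z_ik, Equation (13)
# The posterior position lies between the prediction (0.1) and the measurement (0.12).
```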

3.3.2. The Global Kalman Filter


Through in-depth study of the traditional weighted data fusion algorithm, this paper finds two defects that lead to the low accuracy of the fusion results and a lack of stability and anti-interference ability. When the traditional algorithm is used to fuse the data of two sensors, the final fusion weight coefficient of each sensor can be proved mathematically to always be 0.5. This greatly reduces the performance of the algorithm; moreover, the algorithm is only suitable for situations in which the measurement accuracies of the sensors are similar. If the system contains many sensors with poor measurement accuracy, the fusion results will be affected greatly. In general, once the number of sensors with larger errors exceeds the number of sensors with smaller errors, or individual sensors are seriously interfered with, the accuracy of the fusion results will decline and may even be lower than that of a single sensor measurement.
Therefore, a global Kalman filter was constructed, as reported in this paper, considering
the contribution of each independent Kalman filter. It takes the initial observation value of
each sensor and the arithmetic average of the independent filtering results as the input, so
the measurement variance of each sensor can be obtained, which paves the way for further
calculation of the final fusion weights.
The estimated results x̂_ik of each sensor are averaged arithmetically. Then the Kalman filter is applied to the arithmetic mean value x̂_k to obtain the state value x̂new_ik of the second correction.

x̂_k = (1/n) ∑_{i=1}^n x̂_ik  (19)

x̂new_ik = x̂_k + Knew_ik (z_ik − H x̂_k)  (20)
The result of the global Kalman filter can be used as a standard value to calculate the
measurement variance of each sensor, which provides a basis for further weights allocation.
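Numerically, Equations (19) and (20) can be sketched per time step as follows. The values and the scalar gain `K_new` are assumptions for illustration (with H = 1); the paper computes the global Kalman gain from the filter covariances.

```python
import numpy as np

# Sketch of the global filtering stage, Equations (19) and (20). Values and the
# scalar gain K_new are assumed (H = 1 here); not the paper's gain computation.
x_hat = np.array([10.3, 9.9, 10.1])   # independent filter estimates, one per sensor
z = np.array([10.6, 9.8, 10.2])       # raw observations, one per sensor
K_new = 0.5                           # assumed global Kalman gain

x_bar = x_hat.mean()                  # Equation (19): arithmetic mean, here 10.1
x_new = x_bar + K_new * (z - x_bar)   # Equation (20), applied per sensor
# Deviations of x_new from x_bar feed the per-sensor measurement variances
# used in the next stage (cf. Equation (21)).
sigma2_inst = (x_new - x_bar) ** 2
```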

3.3.3. Measurement Variance Estimation


The purpose of this stage was to obtain the accurate measurement variance of each
sensor as a basis for calculating the final fusion weights. Considering the internal noise
of the sensors and environmental interference plus other factors, this paper used the
combination of instant variance and previous variance to calculate the final measurement
variance. The use of instant variance enhances the sensitivity to environmental interference
and previous variance emphasizes the influence of sensor factors on the measured value.
This paper introduces the forgetting factor α to enhance the amount of information pro-
vided by new data and consider the influence of old data to calculate the final measurement
variance \sigma^2_{i_k} as shown in Equation (22).

\tilde{\sigma}^2_{i_k} = \left[\hat{x}^{new}_{i_k} - H\hat{x}_k\right]^2 \quad (21)

\sigma^2_{i_k} = \alpha\tilde{\sigma}^2_{i_k} + (1-\alpha)\bar{\sigma}^2_{i_k} \quad (22)

where \tilde{\sigma}^2_{i_k} is the instant variance of the measurement of sensor i at time k, and the
historical measurement variance \bar{\sigma}^2_{i_k} is the mean of the measurement variance in the past
period. However, this kind of artificial weight calculation method will inevitably lead to
the loss of error characteristics of all sensors, which will greatly reduce the contribution of
high-precision data of the sensors to the final fusion results. Considering the continuity of
Sensors 2022, 22, 5800 9 of 24

the measurement variance of the sensors, this paper proposes to use a NN with memory
to estimate the future measurement variance by considering the previous m times. Then,
it fuses the predicted measurement variances with the real-time measurement variance
through the forgetting factor α.
\alpha = \frac{m}{m+1} \quad (23)

where m is the historical step size of the prediction input.
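A minimal sketch of the variance update in Equations (21)–(23); the numeric values are illustrative, and the historical variance argument stands in for the LSTM prediction introduced below.

```python
import numpy as np

def update_variance(x_new, x_bar, sigma2_hist, m=5, H=1.0):
    """Fuse instant and historical measurement variance (Eqs. (21)-(23)).

    x_new       : second-corrected state of one sensor
    x_bar       : global-filter mean state
    sigma2_hist : historical (LSTM-predicted) variance -- illustrative here
    m           : historical step size, giving alpha = m / (m + 1)
    """
    alpha = m / (m + 1)                        # Eq. (23): forgetting factor
    sigma2_inst = (x_new - H * x_bar) ** 2     # Eq. (21): instant variance
    return alpha * sigma2_inst + (1 - alpha) * sigma2_hist   # Eq. (22)

sigma2 = update_variance(10.3, 10.0, sigma2_hist=0.2, m=5)
```

With m = 5 the forgetting factor is 5/6, so new information dominates while the historical variance still contributes.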
The basic NN can only deal with a single input; the former input and the latter input
are not related at all. The one-way, non-feedback connection mode of the network means
it can only process the information contained in the current input data, while data outside
the current time period contribute nothing. However, common time-series data are closely
related, especially in the process of tracking: because of the performance characteristics of
the aircraft, the position states over continuous flight time are exactly such time-series data.
Because basic recurrent neural networks (RNN) can only handle short-term dependence
and cannot deal with long-term dependence, this research selected the LSTM model, which
is more suitable for this problem and hardly encounters gradient vanishing. The data at
time m + 1 can be predicted by inputting the historical data of the previous m times, which
is a typical LSTM univariate prediction.
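The univariate windowing just described can be sketched as follows; `sliding_windows` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def sliding_windows(series, m):
    """Build (input, target) pairs: the previous m values predict time m + 1."""
    X = np.array([series[t:t + m] for t in range(len(series) - m)])
    y = np.array(series[m:])
    return X, y

X, y = sliding_windows([1, 2, 3, 4, 5], m=3)
# X -> [[1, 2, 3], [2, 3, 4]], y -> [4, 5]
```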
The key to LSTM in solving time-series problems is that the state of the hidden layer
saves the previous input information. Based on sequential neuron connection, it also adds
a neuron feedback connection: the network not only considers the input at the current
time but also keeps a memory of previous information. During forward propagation, the
input layer and the information from the hidden layer of the previous time act on the
current hidden layer together. In contrast to the RNN, the LSTM introduces three 'gate'
structures and long- and short-term memory into each neuron, as shown in Equations
(24)–(28). Through the control of the three gates, historical information can be recorded,
giving better performance on longer sequences (see Appendix B).
The ‘forget’ gate determines how much of the unit state C_{t-1} at the previous time
is retained at the current time when information enters the LSTM network. It decides
whether the unit state of the previous time can be retained at the current time by
substituting h_{t-1} and x_t into Equation (24).

f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) \quad (24)

The ‘input’ gate determines how much of the network input is saved to the memory
unit state at the current time through a sigmoid layer. A tanh layer then generates a
candidate vector to update the state of the memory unit. The results of the forget gate and
the input gate are substituted into Equations (25) and (26), so that the current unit state and
the long-term unit state can be combined to form a new unit state.

i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) \quad (25)

\tilde{C}_t = \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) \quad (26)

Equation (27) is the cell state (long-term memory) at time t.

C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t \quad (27)

The ‘output’ gate uses the current unit state C_t to control the output h_t of the LSTM.
It obtains an initial output through the sigmoid layer as shown in Equation (28). Then, it
pushes the cell state value between −1 and 1 through a tanh layer. Finally, the outputs of
the sigmoid layer and the tanh layer are multiplied to filter the unit state. The model output
is obtained as shown in Equation (29).

o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) \quad (28)

h_t = o_t \cdot \tanh(C_t) \quad (29)

where f_t is the result parameter of the forget gate, \sigma(\cdot) indicates the sigmoid activation
function, \tanh(\cdot) indicates the tanh activation function, W_f, W_i, W_C, W_o and b_f, b_i, b_C, b_o
are parameters which need to be trained, \tilde{C}_t is the output of the input gate, o_t is the process
parameter of the output gate, and x_t is the input data at time t. In the LSTM-GWFA method
of this paper, the measurement variance of m continuous times is used as the input of the
LSTM network to predict the value at time m + 1.

\bar{\sigma}^2_{i_k} = \mathrm{LSTM}\left(\left[\sigma^2_{i_{t-m}}, \cdots, \sigma^2_{i_{t-1}}, \sigma^2_{i_t}\right]\right) \quad (30)
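For reference, the gate equations (24)–(29) can be checked with a toy numpy implementation of one LSTM cell step; the dimensions and random weights are illustrative, not the trained network of this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM cell step following Eqs. (24)-(29)."""
    v = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ v + b["f"])      # Eq. (24): forget gate
    i_t = sigmoid(W["i"] @ v + b["i"])      # Eq. (25): input gate
    C_cand = np.tanh(W["C"] @ v + b["C"])   # Eq. (26): candidate state
    C_t = f_t * C_prev + i_t * C_cand       # Eq. (27): new cell state
    o_t = sigmoid(W["o"] @ v + b["o"])      # Eq. (28): output gate
    h_t = o_t * np.tanh(C_t)                # Eq. (29): hidden output
    return h_t, C_t

# Toy sizes: hidden = 2, input = 1; random illustrative parameters.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((2, 3)) for k in "fiCo"}
b = {k: np.zeros(2) for k in "fiCo"}
h_t, C_t = lstm_step(np.array([0.5]), np.zeros(2), np.zeros(2), W, b)
```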

3.3.4. Data Weighted Fusion


Through the accurate measurement variance obtained in the previous stage, the fusion
weight is calculated using Equation (12). In the process of final data fusion, this paper
used the results of independent sensor filtering as the input, but did not use the original
sampling data directly so that the result of data fusion is more reliable.
A weight elimination mechanism was proposed. When the value of an assigned
weight is less than a certain threshold, the weight is set to zero and its value is allocated
to the other weights in proportion. The mechanism also limits how many weights may be
truncated, so that it can never eliminate every sensor.

\sigma^2_{avg} = \frac{1}{k}\sum_{i=1}^{n}\sigma_i^2, \quad \sigma_i^2 < \sigma^2_{base} \quad (31)

\sigma^2_{i\_new} = \begin{cases}\sigma_i^2 + \sigma^2_{avg}, & \sigma_i^2 \ge \sigma^2_{base}\\ 0, & \sigma_i^2 < \sigma^2_{base}\end{cases} \quad (32)

w_{i\_new} = \frac{1}{\sigma^2_{i\_new}} \quad (33)

x^k_{fusion} = \frac{\sum_{i=1}^{n} w_{i\_new}\,\hat{x}_{i_k}}{\sum_{i=1}^{n} w_{i\_new}} \quad (34)

where \sigma^2_{avg} is the truncated mean measurement variance over the k sensors below the
threshold, \sigma^2_{base} is the threshold of measurement variance truncation, \sigma^2_{i\_new} is the updated
measurement variance, w_{i\_new} is the updated fusion weight, and x^k_{fusion} is the final
fusion result.
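A sketch of the weight-elimination mechanism, working in normalized weight space as the prose above describes; the threshold w_min and all numbers are illustrative, and this is one interpretation of Equations (31)–(34).

```python
import numpy as np

def truncated_fusion(x_hat, sigma2, w_min=0.05):
    """Zero out weights below w_min and redistribute their mass in proportion,
    then form the weighted fusion result (cf. Eqs. (31)-(34))."""
    w = 1.0 / np.asarray(sigma2)     # inverse-variance weights
    w = w / w.sum()                  # normalize to sum to one
    w[w < w_min] = 0.0               # adaptive truncation
    w = w / w.sum()                  # proportional redistribution
    return float(np.sum(w * x_hat))

x_hat = np.array([10.0, 10.2, 14.0])   # filtered sensor estimates
sigma2 = np.array([1.0, 1.0, 50.0])    # the third sensor is far less accurate
fused = truncated_fusion(x_hat, sigma2)
```

Here the third sensor's normalized weight falls below the threshold, so it is eliminated and the result depends only on the two accurate sensors.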

4. Simulation Results and Analysis


In this section, the accuracy of the proposed algorithm is verified and evaluated
by analyzing the fusion results of sensors on the trajectory of UAVs. In the numerical
simulation, the proposed method is compared with other data fusion methods to assess its
performance, and the effectiveness of the algorithm is verified in the equivalent physical
simulation. The evaluation of the algorithm was mainly carried out with these parameters:
root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage
error (MAPE), and index of agreement (IA). These four measurement indexes and their
formulas are shown in (35)–(38), where n represents the number
of samples, yi represents the observed value, and f i represents the predicted value [38].
The root mean square error (RMSE) is based on the mean square error, and it can be
used to measure the deviation between the predicted value and the observed value.
\delta_{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(f_i - y_i)^2} \quad (35)
Mean absolute error (MAE) refers to the average of the absolute value of the error
between the predicted value and the observed value. The average absolute error can avoid
the cancellation of the positive and negative errors, so it can better reflect the actual error size.

\delta_{MAE} = \frac{1}{n}\sum_{i=1}^{n}|f_i - y_i| \quad (36)
Mean absolute percentage error (MAPE) is usually a statistical indicator that measures
the accuracy of forecasts, such as time-series forecasts. The smaller the value of MAPE, the
higher the accuracy of the model.

\delta_{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{f_i - y_i}{y_i}\right| \quad (37)

Index of agreement (IA) describes the difference between the real value and the
predicted value. With the error-based definition used here, the smaller the value, the more
similar the fusion value is to the real value.

I_{IA} = \frac{\sum_{i=1}^{n}|f_i - y_i|^2}{\sum_{i=1}^{n}\left(|f_i - \bar{y}| + |\bar{y} - y_i|\right)^2} \quad (38)

where \bar{y} is the mean of the observed values.
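These four indexes can be computed directly from their definitions; a numpy sketch following Equations (35)–(38), using the paper's error-based IA, for which smaller values indicate better agreement:

```python
import numpy as np

def evaluate(f, y):
    """RMSE, MAE, MAPE and IA as in Eqs. (35)-(38)."""
    f, y = np.asarray(f, float), np.asarray(y, float)
    n = len(y)
    rmse = np.sqrt(np.mean((f - y) ** 2))                       # Eq. (35)
    mae = np.mean(np.abs(f - y))                                # Eq. (36)
    mape = 100.0 / n * np.sum(np.abs((f - y) / y))              # Eq. (37)
    ia = np.sum(np.abs(f - y) ** 2) / np.sum(
        (np.abs(f - y.mean()) + np.abs(y.mean() - y)) ** 2)     # Eq. (38)
    return rmse, mae, mape, ia

rmse, mae, mape, ia = evaluate([2.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```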

4.1. Algorithm Numerical Simulation


The simulation program was written in the MATLAB simulation environment, running
on an Intel(R) Core(TM) i7-9700K with 32 GB RAM under 64-bit Windows 10. The LSTM
model of this algorithm has two hidden layers, and the number of nodes in the hidden layer
is 10. The characteristic dimension of a single input to the neural network is 3, since the
prediction of measurement variance is a variable prediction problem, and the output
dimension is 3. The batch size is 100. The same dataset was used for 1000 iterations of
cyclic training. The learning rate of the neural network training is 0.001. The tanh function
is selected as the activation function of the hidden layer, and the output layer activation
function is Softmax. The training step length of the LSTM is 10. The mean square error is
used as the loss function. Then, back propagation and a gradient descent algorithm with
adaptive learning rate are used to optimize the parameters iteratively until the loss function
converges, completing the model training. The simulation parameters are shown in Table 1.

Table 1. Important hyperparameter settings.

Parameter Name   Parameter Description                                    Value
batch_size       Batch size                                               100
learning_rate    Initial learning rate                                    0.001
epoch            Number of iterations                                     1000
s1_num           Number of neurons in characteristic fusion layer         16
lstm_num         Number of hidden units in LSTM layer                     128
s2_num           Number of hidden units in output transformation layer    1
m                Time series step size                                    5

All the operation data of the tracking aircraft from the four sensors were collected by
the mean sampling method and the sampling period of each associated feature was unified
as 0.5 s. If there were missing data in the sampling period of a sensor, the data of the
previous time were used to fill in the missing value. Due to the large difference in the value
range of each measurement point, the min-max normalization method was used to normalize
the features of each dimension to eliminate the impact of the difference in value range on
the accuracy of the model. We obtained 871 sampled tracks, and the amount of data in each
track differed. The data in the dataset were three-dimensional features; 671 tracks were
used as a training set, 100 as a verification set, and 100 as a test set. We
considered the comparison with various existing advanced algorithms and compared the
final 100 data fusion results with CNN-LSTM [39], KF-GBDT-PSO [40], FCM-MSFA [41],
and MHT-ENKF [42]. CNN-LSTM was used to predict the trajectory of sick-gait children in
the original reference; this paper modified the training content of CNN-LSTM to speed and
acceleration, with a sliding window from 5 to 10, which matches the task of this study.
KF-GBDT-PSO was used for multisource sensor data fusion to improve the accuracy of
vehicle navigation in the original reference; this paper took the KF-GBDT-PSO output of
the prediction model directly and fused it with real-time observation as the final result.
FCM-MSFA is a weighted integral data fusion algorithm for multiple radars proposed to
reduce the error of the observation trajectory and improve the accuracy of the system in
the original reference; this paper ignored the correction action of FCM-MSFA on the radar
and retained the interference of acceleration and speed, regarded as an interference input
with random characteristics. MHT-ENKF is a modified ensemble Kalman filter (EnKF)
within multiple hypothesis tracking (MHT) to achieve more accurate positioning of the
target under high nonlinearity in the original reference; this paper used the single-target
trajectory data process of MHT-ENKF and ignored the trajectory correlation part, treating
the result as the prediction of one sensor. Then, this paper adjusted the data acquisition
period of all algorithms to 0.5 s.
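Returning to the data preparation described above (previous-time fill for missing samples, then min-max normalization per feature), a minimal sketch; the toy array is illustrative:

```python
import numpy as np

def preprocess(track):
    """Fill missing samples with the previous time step, then min-max
    normalize each feature dimension to [0, 1]."""
    track = track.astype(float)                # work on a copy
    for t in range(1, len(track)):
        miss = np.isnan(track[t])
        track[t, miss] = track[t - 1, miss]    # previous-time fill
    lo, hi = track.min(axis=0), track.max(axis=0)
    return (track - lo) / (hi - lo)            # min-max normalization

track = np.array([[0.0, 10.0], [np.nan, 20.0], [4.0, 30.0]])
norm = preprocess(track)
```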
In order to compare the advantages of the components of LSTM-GWFA, six comparison
algorithms were added: N-N, N-GWFA, LSTM-N, N-AWFA, 5S-NN-GWFA, and
10S-NN-GWFA. The N-N algorithm removes the LSTM and GWFA stages from LSTM-GWFA
and preserves the rest. The N-GWFA algorithm removes only the LSTM stage, and the
LSTM-N algorithm removes only the GWFA stage. These three algorithms were used for
ablation analysis to demonstrate the advantages of each component. The N-AWFA
algorithm replaces the global Kalman filter in N-GWFA with a mean value. The 5S-NN-GWFA
and 10S-NN-GWFA algorithms replace the LSTM stage with plain neural networks (NN)
using training prediction steps of 5 and 10, respectively. These three algorithms were used
for optimization and comparison of the LSTM-GWFA components. Then, we made a simple
analysis of the added algorithms by RMSE and IA and used Equations (35)–(38) to compare
the statistical results as shown in Figures 3–6.

Figure 3. Comparison of RMSE with 100 tracks in different algorithms.

Figure 4. Comparison of MAE with 100 tracks in different algorithms.



Figure 5. Comparison of MAPE with 100 tracks in different algorithms.

Figure 6. Comparison of IA with 100 tracks in different algorithms.

Figure 3 shows that the RMSE of LSTM-GWFA of this paper was the smallest. In particular,
other algorithms made the results closer to high-precision radar results and the LSTM-GWFA
algorithm improved the overall accuracy, which produced errors smaller than the errors of
high-precision radar. CNN-LSTM is also a fusion algorithm based on LSTM and has high
accuracy improvement so that low-precision radar data would not have a great impact on
tracking, but the fusion result of CNN-LSTM is not as good as that of high-precision radar and
its error fluctuates greatly. The error stability of the algorithm in this paper is high, which is
due to the stage of weight fusion using the filtering results but not the original observation
data by adaptive weights referred to in Equation (33). Figures 4 and 5 show the error stability
of each algorithm in each track. When the detection error of each sensor fluctuates greatly,
the LSTM-GWFA algorithm in this paper can still maintain good error stability, in contrast to
other algorithms that will be affected. Figure 6 shows the similarity between each fusion track
and the real track. CNN-LSTM, KF-GBDT-PSO, and LSTM-GWFA algorithms all showed a
high degree of approximation (IA is less than 0.1), but the LSTM-GWFA algorithms had good
stability and IA was close to zero, almost matching the real track.
Figures 3–6 also show the performance of each component of the LSTM-GWFA. The
N-N algorithm without LSTM and GWFA did not achieve any improvement, and its
stability and accuracy of fusion were the worst. Even the fusion results exceeded the sensor
observation, with the lowest accuracy affected by the iterative cumulative error. On the
basis of the N-N algorithm, the performance of the N-GWFA algorithm obtained by adding
GWFA was significantly improved, but the performance decreased after replacing GWFA
with AWFA, which shows that GWFA is more adaptive to filtering than the given mean value. When the
stage of GWFA was removed based on the LSTM-GWFA algorithm, the performance of
LSTM-GWFA was only slightly weakened, which was equivalent to or even better than
that of CNN-LSTM. On the other hand, the base line had great performance improvement
after adding the stage of LSTM, which was enough to show that LSTM is the core part of
the architecture. Therefore, we further replaced the LSTM network with the traditional NN
with different training steps for testing. The performance of 5S-NN-GWFA and 10S-NN-
GWFA was weaker than that of LSTM, but it could be seen that the alternative
network with short-step training was significantly better than that with long-step training in
the fusion architecture. From the overall algorithm comparison, FCM-MSFA and MHT-
ENKF as traditional filtering algorithms had almost no difference in fusion performance.
CNN-LSTM and KF-GBDT-PSO algorithms were improved by adding the NN algorithm,
but their curve trend of RMSE was similar, indicating that the network structure had not
changed, which affected the fusion performance. The LSTM-GWFA in this paper also used
NNs combined with the measurement variance of real-time data, but not all of them. The
improvement of the efficiency and stability of the algorithm also benefited from the motion
maneuver detection filter and adaptive truncation weight, which not only greatly reduced
the error of the fusion results but also made the accuracy of all individual radars high.
Table 2 shows the statistical results of the fusion data of 100 test tracks, in which the
LSTM-GWFA algorithm shows high performance in mean square deviation. Compared
with the advanced CNN-LSTM algorithm, the accuracy of the LSTM-GWFA algorithm
was improved by 66% and the matching degree with the real tracks was improved by
nearly 10 times. Although the N-N, N-GWFA, and N-AWFA algorithms show good
fusion stability, there is still a large error above 25 relative to the real results. The LSTM-N
algorithm proves that the LSTM network is an indispensable part of the algorithm structure
of this paper, improving the overall performance by nearly six times. Moreover, the
performance of algorithms of 5S-NN-GWFA and 10S-NN-GWFA may have been slightly
poor and affected by the lack of time correlation, but it still had a certain improvement
compared with the traditional filtering algorithm. Computer simulation results show that
the improved adaptive weight fusion algorithm not only inherits the advantages of all
traditional method algorithms but also has stronger noise suppression ability, higher data
smoothness, and higher fusion precision than the advanced NN algorithm.

Table 2. Statistical results with 100 tracks in different algorithms.

Algorithm Name   RMSE      MAE       MAPE        IA
CNN-LSTM         9.3293    2.7235    7.7508%     0.0295
KF-GBDT-PSO      14.1762   9.6664    23.7586%    0.0565
FCM-MSFA         24.9771   12.8531   36.0786%    0.1315
MHT-ENKF         24.0713   12.2834   33.4910%    0.1238
LSTM-GWFA        3.1672    0.2215    0.7437%     0.0037
N-N              40.9646   2.9509    9.45811%    0.1777
N-GWFA           27.2367   2.9698    7.54718%    0.1311
LSTM-N           7.61598   1.5690    4.30846%    0.0195
N-AWFA           32.7377   9.0784    23.6505%    0.1562
5S-NN-GWFA       10.1877   9.2575    15.7572%    0.0388
10S-NN-GWFA      23.1720   6.2006    12.4955%    0.1047

4.2. Equivalent Physical Simulation


In order to further verify the application effect of the LSTM-GWFA algorithm in air
tracking, this paper used a mature refitted civil fixed-wing UAV platform for equivalent
flight tests, adding detection sensors and task processors. The algorithm verification was
then carried out by transplanting the adaptive data fusion program, which was completed
in a fast, safe, low-cost, multi-sortie manner. The flight test system consisted of refitted civil
UAVs and a set of ground station equipment, as shown in Figure 7.

[Figure 7 diagram: the UAV platform carries an onboard computer (JETSON TX2), a radar
sensor, an airspeed meter, and a photoelectric sensor; the ground station vehicle runs the
online flight path monitoring, online aircraft flight simulation, and monitoring systems;
the UAVs track an equivalent enemy aircraft target.]

Figure 7. Composition of flight test system.

In terms of information exchange, the air-ground link between the ground station and
UAVs could monitor the status of all UAVs and send simulation tasks. UAVs broadcast
their position, speed, attitude, and other navigation information to the link, which gave
the ground stations complete global information and the same level of autonomy and
decision-making ability. The performance of UAVs and ground station equipment are
shown in Table 3. The performance of the detection sensors carried by the UAVs is shown in
Table 4.

Table 3. The performance of UAV and ground station equipment.

UAV Performance Parameter    Value
Endurance Time               ≥2 h
Flight Altitude              300~3500 m
Cruise Speed                 28~36 m/s
Maximum Flight Speed         44 m/s
Stall Speed                  19.4 m/s
Turning Radius               ≤400 m
Maximum Climbing Speed       ≤2.5 m/s
Maximum Descent Speed        3.5 m/s
Engine                       3W-342i (24 KW)
Airborne Power               400 W (single battery power supply 48 V, battery capacity 20,000 mAh)

Function of Ground Station:
1. UAV flight area control (range: 5 km × 5 km)
2. Equivalent enemy aircraft speed control (UAV speed 30 m/s)
3. UAV position information receiving
4. UAV speed information receiving
5. UAV flight altitude control and monitoring of UAV
6. UAV initialization information input, etc.

Table 4. The performance of sensors carried by UAVs.

Performance Name             Radar Sensor   Photoelectric Sensor
Weight                       ≤30 kg         ≤17 kg
Power                        ≤200 W         ≤200 W
Operating Frequency          16 GHz         16 GHz
Tracking Distance            ≥5 km          ≥5 km
Search Field                 ±30°           /
Ranging Accuracy             ≤20 m          ≤25 m
Angle Measurement Accuracy   0.5°           0.2°
Search Angular Velocity      /              ≥60°/s
Minimum Target Recognition   RCS: 5 m²      Light: 3 m × 3 m

Functions: detect, identify, locate, and track air targets (UAVs); align time and space to
obtain the position of the aircraft.

Considering that there may be many initial encounter scenarios in air combat, this paper
designed five single UAV flights with the same sorties of sensors and the data fusion trajectory
tracking test with different initial numbers of 2, 3, 4, 5 sensors, shown in Figures 8–10. The
accuracy of each sensor was different, but within the performance range described in
Table 4. The initialization parameters and statistical results of the tests are shown in Table 5.
In this paper, the data fusion results were counted within the effective sensor detection
time range to study the effectiveness and stability of the algorithm in practical applications.
In the off-site flight test, we could not obtain the real position information of the
aircraft in the ideal environment directly, but we used the high-precision data of GPS as
the standard value of the real position, and the detection data using longitude, latitude,
and altitude as the comparison parameters of the data fusion components refer to practical
application in air.

Figure 8. Longitude error of multisensor data fusion target trajectory.



Figure 9. Latitude error of multisensor data fusion target trajectory.

Figure 10. Height error of multisensor data fusion target trajectory.


Table 5. The error data statistics of absolute distance.

Test      Number of Sensors   Ranging Accuracy (m)                                                  RMSE     MAE      MAPE     ERR
#1 Test   NR = 1, NP = 1      AccR_1 = ±5; AccP_1 = ±15                                             3.7815   3.0366   50.61%   72.70%
#2 Test   NR = 2, NP = 1      AccR_1 = ±5; AccR_2 = ±15; AccP_1 = ±10                               2.8785   2.2760   48.65%   80.22%
#3 Test   NR = 2, NP = 2      AccR_1 = ±5; AccR_2 = ±15; AccP_1 = ±10; AccP_2 = ±15                 2.5607   2.0050   49.98%   85.89%
#4 Test   NR = 3, NP = 2      AccR_1 = ±5; AccR_2 = ±10; AccR_3 = ±15; AccP_1 = ±15; AccP_2 = ±20   2.2195   1.7387   50.93%   90.93%

(NR: number of radar sensors; NP: number of photoelectric sensors; RMSE, MAE, MAPE:
data fusion errors of a single trajectory; ERR: error reduction rate.)

Figures 8 and 9 show that data fusion of different accuracy and number of sensors can
reduce the error of tracking, which was significantly better than the detection and tracking
of a single sensor under the LSTM-GWFA algorithm. Especially when the noise error of
individual sensors was very large, from 887 s to 894 s, high-precision fusion could also be
achieved to suppress the interference of large error measurement data. Figure 10 shows
that the tracking accuracy was within 10 m and the stable tracking accuracy within 1 m
was achieved from 897.8 s under the five sensors in the height data fusion. This indicates
that the maneuvering detection converges while the flight altitude was almost unchanged,
so the fusion effect may be further improved with the increase of the number of sensors.
Further, we still used the error data statistics of absolute distance as the standard for the
actual application performance of the algorithm, as shown in Table 5.
Since the GPS data will also have errors, we removed the IA data statistics and added
the error reduction rate (ERR) compared with low precision sensors. Table 5 shows that the
data fusion effect of five sensors was the best; the fusion effect of two sensors was almost
the same except in MAPE. This phenomenon occurs mainly because the high-precision
filtered data are directly used as the final fusion result in the two-sensor fusion process
while the impact of the low-precision sensor is ignored, which stabilizes the overall effect.
However, the deviation degree of the error was still the largest in data fusion of two sensors
from RMSE and MAE. Table 5 also shows that the fusion effect also improved with the
increase of the number of sensors and the increase of the number of low-precision sensors
had no impact on the fusion results. This case illustrates the LSTM-GWFA algorithm only
suppressed the low-precision measured values, but not a certain sensor, which gives the
method generalization ability.
Figure 11 shows the fusion trajectory results of five sensors and that the deviation
of the global Kalman filter results is very large when data fusion of tracking is started,
especially in maneuvering. However, with the correction of the fusion results, the deviation
can be quickly corrected. In the process, the result of global filtering also reverses the
fusion accuracy, making the prediction results of LSTM more accurate. After a period of
time, the multisensor fusion results show stable tracking of the trajectory, which proves the
effectiveness of the method and has certain universality for the conventional motion model.

Figure 11. Three-dimensional schematic diagram of tracking results by five sensors (the initial
tracking and stable tracking phases are annotated).

5. Conclusions
With the rapid development of the information society, intelligence has entered every
corner of our lives. Intelligent control needs to process large amounts of sensor data, and
because a single sensor may fail unpredictably, multisensor data fusion technology came
into being. Aiming at the problem of accurate trajectory tracking of dynamic targets in air
combat, this paper proposes an adaptive data fusion architecture, developed by analyzing
the operational application scenarios and summarizing the existing data fusion trajectory
tracking methods. Based on the traditional multisensor weighted fusion algorithm, and
considering the weak correlation of a single Kalman filter, a data fusion method combining
a global Kalman filter with LSTM-predicted measurement variance was designed, and an
adaptive truncation mechanism was used to determine the optimal weights. At the same
time, to track a maneuvering target accurately, a maneuvering detection mechanism whose
results are used as data fusion parameters was added to the filter. The effectiveness and
stability of the data fusion tracking method were verified by numerical simulation and
field experiments, and it shows a certain generalization ability.
From the simulation verification, the LSTM-GWFA method has certain advantages
compared with existing, more advanced methods. It can not only reduce the fusion error but
also correct the comprehensive error in most cases; the fusion error is smaller than the
measurement error of the highest-precision sensor. From the off-site flight tests, this
paper found that the LSTM-GWFA method cannot achieve accurate filtering during initial
tracking, but it stabilizes within a short time, and its performance improves as the number
of sensors increases. Notably, when the number of sensors was changed from 2 to 5, the
error improvement rate reached 90.93%. Even so, there are still many problems worthy of
further exploration in the research process.
Due to the imperfection, redundancy, and correlation of multisensor fusion data, the
fusion process needs to describe the actual situation of the measured object more accurately
and comprehensively, so as to support more correct judgments and decisions. In recent
years, using artificial neural networks (NN) to handle multisensor data fusion has gradually
become a research direction for scholars in various countries. However, applying NN in
data fusion so that the fusion algorithm gains strong robustness and generalization is still a
difficult problem. This is the follow-up research direction of this paper: the NN will be
improved to adapt to the data fusion trajectory tracking of various moving targets.
Again, tracking a dynamic target in air combat is an important link in the process of
attacking the target. Theoretical research should provide support for practical applications
in dealing with a variety of emergencies, such as sensor data loss, transmission signal
interruption, sensor time and space alignment, heterogeneous sensor data unification, and
so on. Building on this paper, there are many problems in the field of multisensor data
fusion that need further study, which is bound to become the trend of future exploration.
Author Contributions: H.Y. wrote the program and designed the LSTM-GWFA model. Y.W. designed
this study and collected the dataset. H.Y. drafted the manuscript. X.H. revised the manuscript. D.L.
provided research advice and manuscript revising suggestions. All authors have read and agreed to
the published version of the manuscript.
Funding: National Provincial and Ministerial Scientific Research Projects (Grant No. 201920241193).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: We would like to thank Zhengjie Wang (Beijing Institute of Technology, Beijing)
for her technical suggestions and Xinyue Zhang (Beijing Aerocim Technology Co., Ltd.) for providing
test equipment.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or
in the decision to publish the results.

Appendix A. Variable Dimension Kalman Filter


The variable dimension filtering algorithm was proposed by Bar-Shalom and Birmiwal
in 1982. This method does not rely on an a priori hypothesis of target maneuvering; it
regards a maneuver as an internal change of the target's dynamic characteristics rather than
modeling it as noise. Detection uses an averaged innovation statistic, and the model
adjustment uses on-off switching. In the case of no maneuver, the tracking filter uses the
original model. Once a maneuver is detected, the filter switches to a higher-dimensional
state model to which new state variables are added. A further detector then senses the end
of the maneuver and converts the filter back to the original model.
This paper used two models, which are the constant velocity (CV) model of non-
maneuvering and the approximate constant acceleration (CA) model of maneuvering. In
the CV model, the state vector of the target motion is given by Equation (A1).

X_CV = [x ẋ y ẏ z ż]′ (A1)

In the CA model, the state vector is given by Equation (A2).

X_CA = [x ẋ y ẏ z ż ẍ ÿ z̈]′ (A2)
where x, y, z are the positions in the three directions, ẋ, ẏ, ż are the velocities, and ẍ, ÿ, z̈
are the accelerations. Under the CV model, maneuver detection is carried out as follows.
A discounted average ρ(k) of the filter innovation statistic ε_v(k) is maintained during
constant-velocity filtering.

ρ(k) = γρ(k − 1) + ε v (k ), γ = 1 − 1/window (A3)

ε_v(k) = v′(k) S⁻¹(k) v(k) (A4)
where γ is the discount factor and window is the sliding-window length over which the
presence of a maneuver is detected. ε_v(k) is the normalized innovation squared, v(k) is the
innovation vector, and S(k) is its estimated covariance. Based on the target model of the
non-maneuvering situation, the threshold is set as shown in Equation (A5).

Pr {ε v (k) ≤ ε max } = 1 − β (A5)

where ε_max is the detection threshold and β is the significance level. When the threshold
is exceeded, the target is considered to be maneuvering and the process noise covariance
Q(k − 1) needs to be increased. The increased Q(k − 1) is then used until ε_v(k) falls below
the threshold ε_max, at which point the target maneuver is considered to be over and the
original filtering model is restored.
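The detection logic of Equations (A3)–(A5) can be sketched as follows. This is a minimal illustration; the variable names and the demo threshold are ours, not the paper's:

```python
import numpy as np

def normalized_innovation_sq(v, S):
    # eps_v(k) = v'(k) S^-1(k) v(k), Eq. (A4)
    return float(v.T @ np.linalg.solve(S, v))

def update_detector(rho_prev, eps_v, window=10):
    # rho(k) = gamma * rho(k-1) + eps_v(k), gamma = 1 - 1/window, Eq. (A3)
    return (1.0 - 1.0 / window) * rho_prev + eps_v

# Demo: quiet innovations, then a jump that pushes rho over a threshold
# eps_max (the threshold value here is arbitrary, for illustration only).
rho, eps_max = 0.0, 20.0
S = np.eye(3)
for v in [np.zeros(3)] * 5 + [np.full(3, 3.0)] * 5:
    rho = update_detector(rho, normalized_innovation_sq(v, S))
print(rho > eps_max)  # prints True: a maneuver is declared
```

In practice ε_max would be chosen from Equation (A5), i.e., from the (1 − β) quantile of the innovation statistic under the non-maneuvering model.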
Once ρ(k) exceeds the threshold set by Equation (A5), the maneuver hypothesis is
accepted and the estimator is converted from the non-maneuvering model to the maneuvering
model at the threshold point. Conversely, the estimated accelerations are compared with
their standard deviations; if they are not statistically significant, the maneuvering hypothesis
is rejected and the model is changed back from the maneuvering model to the
non-maneuvering model. For the acceleration estimate, the explicit significance test statistic
is given in Equation (A6).

δ_a(k) = â′(k|k) [P_a^m(k|k)]⁻¹ â(k|k) (A6)

where â is the estimated value of the acceleration component and P_a^m is the corresponding
block of the covariance matrix of the maneuver model.

ρ_a(k) = ∑_{j=k−s+1}^{k} δ_a(j) (A7)

where s is the length of the sliding window. When ρ_a(k) falls below the threshold, the
acceleration is considered insignificant. If the acceleration suddenly drops to zero, this may
produce large innovations in the maneuver model; once the innovation of the maneuver
model exceeds its 95% confidence interval, the filter can be converted to the lower-order
model. When a maneuver is detected at time k, the filter assumes that the target began a
constant acceleration at time k − s + 1, where s is the effective sliding-window length. The
state estimate at time k − s is then modified appropriately; the acceleration at time k − s is
estimated by Equation (A8).

X̂^m_{6+i}(k − s|k − s) = (2/T²)[z_i(k − s) − ẑ_i(k − s|k − s − 1)], i = 1, 2, 3 (A8)
where T is the sampling interval and the estimated position components are taken as the
corresponding measurement values:

X̂^m_{2i−1}(k − s|k − s) = z_i(k − s), i = 1, 2, 3 (A9)

The estimated velocity components are corrected by the acceleration estimate, as in
Equation (A10):

X̂^m_{2i}(k − s|k − s) = X̂_{2i}(k − s|k − s − 1) + T X̂^m_{6+i}(k − s|k − s), i = 1, 2, 3 (A10)

The covariance matrix associated with the modified state estimate is P^m(k − s|k − s).
When a maneuver is detected, the state model of the target is changed by introducing
additional state components for the target acceleration. The maneuver is modeled by the
acceleration, and the other states related to position and velocity are estimated recursively.
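The re-initialization of Equations (A8)–(A10) at the estimated maneuver onset k − s can be sketched as follows; a minimal illustration, assuming T is the sampling interval and with argument names of our choosing:

```python
import numpy as np

def reinitialize_at_onset(z_ks, z_pred, v_pred, T):
    """Re-initialize position, velocity, and acceleration at time k - s
    when a maneuver is detected, per Eqs. (A8)-(A10).

    z_ks   : measurement at k - s
    z_pred : one-step position prediction at k - s (from k - s - 1)
    v_pred : one-step velocity prediction at k - s
    T      : sampling interval
    """
    a_hat = (2.0 / T**2) * (z_ks - z_pred)  # Eq. (A8): from position innovation
    p_hat = z_ks                            # Eq. (A9): position <- measurement
    v_hat = v_pred + T * a_hat              # Eq. (A10): velocity correction
    return p_hat, v_hat, a_hat
```

The modified estimates then seed the higher-dimensional CA filter together with the covariance P^m(k − s|k − s).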

Appendix B. Long Short-Term Memory


Long short-term memory (LSTM) is an improvement on the traditional recurrent neural
network (RNN), proposed by Hochreiter and Schmidhuber in 1997. Compared with an
ordinary RNN, an LSTM adds a memory cell that judges whether information is useful,
which solves the problems of vanishing and exploding gradients when training on long
sequences. This improvement enables it to perform better on longer sequences.

Figure A1. The repeating module in an LSTM contains four interacting layers.

Cell state is the key of LSTM. In order to protect and control the state of the memory
unit, three control gates are placed in a memory unit, called the input gate, forget gate, and
output gate, respectively. Each control gate consists of a NN layer containing a sigmoid
function and a pointwise multiplication operation. The structure of the LSTM memory unit
is shown in Figure A1. In the figure, a horizontal line running through the top of the
schematic diagram, from the input on the left to the output on the right, is the state of the
memory unit. The NN layers of the forget gate, input gate, and output gate are represented
by layers whose output results are denoted x1, x2, and x3, respectively. The two remaining
layers in the figure correspond to the input and output of the memory unit, respectively;
the first of them generates a vector used to update the cell state.
LSTM also relies on back propagation to update parameters. The key to iteratively
updating the required parameters through the gradient descent method is to calculate the
partial derivatives of all parameters based on the loss function. There are two hidden states
in LSTM, which are h_t and C_t.

δ_{h_t} = ∂L/∂h_t (A11)

δ_{C_t} = ∂L/∂C_t (A12)

where δ_{h_t} and δ_{C_t} are the loss gradients used for back propagation. δ_{h_t} is only an
intermediate quantity in the process and does not actually participate in the back propagation.
The specific derivation process is not described in detail here and can be found in the
literature [43].
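A single LSTM step, with the three gates acting on the cell state as in Figure A1, can be sketched as follows. This uses one common parameter layout (a single weight matrix mapping [h_prev; x_t] to the four stacked gate pre-activations) and is not necessarily the paper's exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM step: forget, input, and output gates act on the cell state.

    W has shape (4n, n + m) for hidden size n and input size m; b has
    shape (4n,). Gate ordering is a convention of this sketch.
    """
    z = W @ np.concatenate([h_prev, x_t]) + b
    n = h_prev.size
    f = sigmoid(z[0:n])          # forget gate: what to keep of c_prev
    i = sigmoid(z[n:2 * n])      # input gate: how much candidate to add
    o = sigmoid(z[2 * n:3 * n])  # output gate: what to expose as h_t
    g = np.tanh(z[3 * n:4 * n])  # candidate cell update
    c_t = f * c_prev + i * g     # new cell state (the horizontal "top line")
    h_t = o * np.tanh(c_t)       # new hidden state
    return h_t, c_t
```

Unrolling this cell over a measurement sequence and training it with back propagation through time yields the variance predictions used by the fusion method.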

References
1. Zhang, Y.; Wang, Y.Z.; Si, G.Y. Analysis and Modeling of OODA Circle of Electronic Warfare Group UAV. Fire Control Command
Control 2018, 43, 31–36.
2. Chen, X.; Wang, W.Y.; Chen, F.Y. Simulation Study on Tactical Attack Area of Air-to-Air Missile Based on Target Maneuver
Prediction. Electron. Opt. Control 2021, 28, 6.
3. Wei, Y.; Cheng, Z.; Zhu, B. Infrared and radar fusion detection method based on heterogeneous data preprocessing. Opt. Quantum
Electron. 2019, 51, 1–15. [CrossRef]
4. Zhang, P.; Liu, W.; Lei, Y. Hyperfusion-net:hyper-densely reflective feature fusion for salient object detection. Pattern Recognit.
2019, 93, 521–533. [CrossRef]
5. Li, C.; Li, J.X. The Effectiveness Evaluation Method of Systematic Combat Based on Operational Data. Aero Weapon. 2022,
29, 67–73.
6. Zheng, H.; Cai, A.; Zhou, Q. Optimal preprocessing of serum and urine metabolomic data fusion for staging prostate cancer
through design of experiment. Anal. Chim. Acta 2017, 991, 68–75. [CrossRef] [PubMed]
7. Dolly, D.R.J.; Peter, J.D.; Bala, G.J. Image fusion for stabilized medical video sequence using multimodal parametric registration.
Pattern Recognit. Lett. 2020, 135, 390–401. [CrossRef]
8. Lin, K.; Li, Y.; Sun, J. Multi-sensor fusion for body sensor network in medical human-robot interaction scenario. Inf. Fusion 2020,
57, 15–26. [CrossRef]
9. Maimaitijiang, M.; Sagan, V.; Sidike, P. Soybean yield prediction from UAV using multimodal data fusion and deep learning.
Remote Sens. Environ. 2020, 237, 111599. [CrossRef]
10. Liu, X.; Liu, Q.; Wang, Y. Remote sensing image fusion based on two-stream fusion network. Inf. Fusion 2020, 55, 1–15. [CrossRef]
11. Rajah, P.; Odindi, J.; Mutanga, O. Feature level image fusion of optical imagery and Synthetic Aperture Radar (SAR) for invasive
alien plant species detection and mapping. Remote Sens. Appl. Soc. Environ. 2018, 10, 198–208. [CrossRef]
12. Huang, M.; Liu, Z.; Tao, Y. Mechanical fault diagnosis and prediction in IoT based on multi-source sensing data fusion. Simul.
Model. Pract. Theory 2020, 102, 101981. [CrossRef]
13. Yan, J.; Hu, Y.; Guo, C. Rotor unbalance fault diagnosis using DBN based on multi-source heterogeneous information fusion.
Procedia Manuf. 2019, 35, 1184–1189. [CrossRef]
14. Vita, F.D.; Bruneo, D.; Das, S.K. On the use of a full stack hardware/software infrastructure for sensor data fusion and fault
prediction in industry 4.0. Pattern Recognit. Lett. 2020, 138, 30–37. [CrossRef]
15. Rato, T.J.; Reis, M.S. Optimal fusion of industrial data streams with different granularities. Comput. Chem. Eng. 2019, 130, 106564.
[CrossRef]
16. Federico, C. A review of data fusion techniques. Sci. World J. 2013, 2013, 704504.
17. Jaramillo, V.H.; Ottewill, J.R.; Dudek, R. Condition monitoring of distributed systems using two-stage Bayesian inference data
fusion. Mech. Syst. Signal Process. 2017, 87, 91–110. [CrossRef]
18. Bader, K.; Lussier, B.; Schon, W. A fault tolerant architecture for data fusion: A real application of Kalman filters for mobile robot
localization. Robot. Auton. Syst. 2017, 88, 11–23. [CrossRef]
19. Zheng, Z.; Qiu, H.; Wang, Z. Data fusion based multirate Kalman filtering with unknown input for on-line estimation of dynamic
displacements. Measurement 2019, 131, 211–218. [CrossRef]
20. Wang, D.; Dey, N.; Sherratt, R.S. Plantar pressure image fusion for comfort fusion in diabetes mellitus using an improved fuzzy
hidden Markov model. Biocybern. Biomed. Eng. 2019, 39, 742–752.
21. Sun, X.J.; Zhou, H.; Shen, H.B.; Yan, G. Weighted Fusion Robust Incremental Kalman Filter. J. Electron. Inf. Technol. 2021, 43, 7.
22. Xue, H.F.; Li, G.Y.; Yang, J.; Ma, J.Y.; Bi, J.B. A Speed Estimation Method Based on Adaptive Multi-model Extended Kalman Filter
for Induction Motors. Micromotors 2020, 53, 7.
23. Huang, J.R.; Li, L.Z.; Gao, S.; Qian, F.C.; Wang, M. A UKF Trajectory Tracking Algorithm Based on Multi-Sensor Robust Fusion.
J. Air Force Eng. Univ. (Nat. Sci. Ed.) 2021, 22, 6.
24. Xue, Y.; Feng, X.A. Multi-sensor Hierarchical Weighting Fusion Algorithm for Maneuvering Target Tracking. J. Detect. Control
2020, 42, 5.
25. Xiao, F.Y.; Qin, B.W. A Weighted Combination Method for Conflicting Evidence in Multi-Sensor Data Fusion. Sensors 2018, 18, 1487.
[CrossRef]
26. Zhao, Z.; Wu, X.F. Multi-points Parallel Information Fusion for Target Recognition of Multi-sensors. Command Control Simul. 2020,
42, 23–27.
27. Zheng, R.H.; Cen, J.; Chen, Z.H.; Xiong, J.B. Fault Diagnosis Method Based on EMD Sample Entropy and Improved DS Evidence
Theory. Autom. Inf. Eng. 2020, 41, 8.
28. Guan, J. Research on the Application of Support Vector Machine in Water Quality Monitoring Information Fusion and Assessment; Hohai
University: Nanjing, China, 2006.
29. Zhu, M.R.; Sheng, Z.H. Data Fusion Algorithm of Hybrid Multi-sensor Based on Fuzzy Clustering. Shipboard Electron.
Countermeas. 2019, 42, 5.
30. Cao, K.; Tan, C.; Liu, H.; Zheng, M. Data fusion algorithm of wireless sensor network based on BP neural network optimized by
improved grey wolf optimizer. J. Univ. Chin. Acad. Sci. 2022, 39, 232–239.
31. Pan, N. A sensor data fusion algorithm based on suboptimal network powered deep learning. Alex. Eng. J. 2022, 61, 7129–7139.
[CrossRef]
32. Wu, H.; Han, Y.; Jin, J.; Geng, Z. Novel Deep Learning Based on Data Fusion Integrating Correlation Analysis for Soft Sensor
Modeling. Ind. Eng. Chem. Res. 2021, 60, 10001–10010. [CrossRef]
33. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019,
31, 1235–1270. [CrossRef] [PubMed]
34. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D
Nonlinear Phenom. 2020, 404, 132306. [CrossRef]
35. Ye, W.; Xz, B.; Mi, L.A.; Han, W.C.; Yc, C. Attention augmentation with multi-residual in bidirectional lstm. Neurocomputing 2020,
385, 340–347.
36. Duan, X.L.; Liu, X.; Chen, Q.; Chen, J. Unmark AGV tracking method based on particle filtering and LSTM network. Transducer
Microsyst. Technol. 2020, 39, 4.
37. Jiang, F.; Zhang, Z.K. Underwater TDOA/FDOA Joint Localization Method Based on Taylor-Weighted Least Squares Algorithm.
J. Signal Process. 2021, 37, 9.
38. Zhang, W.; Liu, Y.; Zhang, S.; Long, T.; Liang, J. Error Fusion of Hybrid Neural Networks for Mechanical Condition Dynamic
Prediction. Sensors 2021, 21, 4043. [CrossRef]
39. Kolaghassi, R.; Al-Hares, M.K.; Marcelli, G.; Sirlantzis, K. Performance of Deep Learning Models in Forecasting Gait Trajectories
of Children with Neurological Disorders. Sensors 2022, 22, 2969. [CrossRef] [PubMed]
40. Zhang, H.; Li, T.; Yin, L. A Novel KGP Algorithm for Improving INS/GPS Integrated Navigation Positioning Accuracy. Sensors
2019, 19, 1623. [CrossRef] [PubMed]
41. Mao, Y.; Yang, Y.; Hu, Y. Research into a Multi-Variate Surveillance Data Fusion Processing Algorithm. Sensors 2019, 19, 4975.
[CrossRef] [PubMed]
42. Zhang, Z.; Fu, K.; Sun, X. Multiple Target Tracking Based on Multiple Hypotheses Tracking and Modified Ensemble Kalman
Filter in Multi-Sensor Fusion. Sensors 2019, 19, 3118. [CrossRef] [PubMed]
43. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
[CrossRef]
