
Hindawi

Mathematical Problems in Engineering


Volume 2020, Article ID 9141735, 16 pages
https://doi.org/10.1155/2020/9141735

Research Article
Closed-Form Distance Estimators under Kalman Filtering
Framework with Application to Object Tracking

Vladimir Shin,1 Georgy Shevlyakov,2 Woohyun Jeong,3 and Yoonsoo Kim3

1 Department of Information and Statistics, Research Institute of Natural Science, Gyeongsang National University, Jinju 52828, Republic of Korea
2 Department of Applied Mathematics, Peter the Great St. Petersburg Polytechnic University, Saint Petersburg 195251, Russia
3 Graduate School of Mechanical and Aerospace Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea

Correspondence should be addressed to Yoonsoo Kim; [email protected]

Received 9 April 2020; Revised 15 June 2020; Accepted 24 June 2020; Published 20 August 2020

Academic Editor: António M. Lopes

Copyright © 2020 Vladimir Shin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, the minimum mean square error (MMSE) estimation problem for calculation of distances between two signals via the Kalman filtering framework is considered. The developed algorithm includes two stages: the Kalman estimate of a state vector computed at the first stage is nonlinearly transformed at the second stage based on a distance function and the MMSE criterion. In general, the most challenging aspect of applying the distance estimator is calculation of a multivariate Gaussian integral. However, this difficulty can be overcome for specific metrics such as the distance between two points on a line, between a point and a line, between a point and a plane, and others. In these cases, the MMSE estimator is defined by an analytical closed-form expression. We derive the exact closed-form bilinear and quadratic MMSE estimators that can be effectively applied for calculation of an inner product, squared norm, and Euclidean distance. A novel low-complexity suboptimal estimator for special composite functions of linear, bilinear, and quadratic forms is proposed; radar range-angle responses are described by such functions. The proposed estimators are validated through a series of experiments using real models and metrics. Experimental results show that the MMSE estimators outperform existing estimators that calculate distance and angle in a nonoptimal manner.

1. Introduction

The problem of measuring the distance between real-valued signals or images arises in most areas of scientific research. In particular, the familiar Euclidean distance plays a prominent role in many important application contexts not only in engineering, economics, statistics, and decision theory, but also in fields such as machine learning, cryptography, image recognition, and others. The statistical methods related to distance estimation can be categorized into image and signal processing areas.

The concept of a distance metric is widely used in image processing and computer vision [1–5] (also see references therein). The distance provides a quantitative measure of the degree of match between two images or objects. These objects might be two profiles of persons, a person and a target profile, the camera of a robot and people, or any two vectors taken across the same features (variables) derived from color, shape, and/or texture information. Image similarity measures play an important role in many image algorithms and applications including retrieval, classification, change detection, quality evaluation, and registration [6–12].

The present paper deals with distance estimation between random signals. In signal processing, a good distance metric helps in improving the performance of classification, clustering, and localization in wireless sensor networks, radar tracking, and other applications [13–20]. The Bayesian classification approach based on the concepts of the Euclidean and Mahalanobis distances is often used in discriminant analysis. A survey of the classification procedures which minimize a distance between raw signals and classes in multifeature space is given in [21, 22]. A distance estimation algorithm based on goodness-of-fit functions, where the best parameters of the fitting functions are calculated given the training data, is considered in [23].

An algorithm for estimation of a walking distance using a wrist-mounted inertial measurement unit device is proposed in [24]. The concept of distance between two samples or between two variables is fundamental in statistics due to the fact that a sum of squares of independent normal random variables has a chi-square distribution. Knowledge of the distribution and the usual approximations provide a confidence interval for distance metrics [25, 26]. Usage of Taylor series expansions for aircraft geometric-height estimation using range and bearing measurements is addressed in [27, 28]. The minimum mean square error (MMSE) estimation of a state vector in the presence of information about the absolute value of a difference between its subvectors is proposed in [29].

In many applications, it is interesting to estimate not only a position or state of an object but also a nonlinear distance function which gives information to effectively control target tracking. However, most authors have not focused on a simultaneous estimation of a state and distance functions in dynamical models such as a Kalman filtering framework.

The problem of estimation of the distance function d_k = d(x_k, y_k) between two vector signals x_k and y_k is considered in the paper, but its difference from the aforementioned references is that both signals x_k and y_k are unknown, and they should be simultaneously estimated with the function d_k using indirect measurements. For example, we observe positions of two points A(x_k) and B(y_k) on a line, and the distance between the points is the absolute difference, i.e., d_k(A, B) = |x_k − y_k|. The positions x_k and y_k and consequently the distance d_k = |x_k − y_k| are unknown, and our problem is to optimally calculate three estimates x̂_k, ŷ_k, and d̂_k. Note that the simple distance estimator d̂_k = |x̂_k − ŷ_k| is not an optimal solution.

The purpose of the paper is to derive an analytical closed-form MMSE estimator for distance functions between random signals such as the absolute value, the Euclidean distance, inner product, bilinear, and quadratic forms. The advantage of the estimator is quick and accurate calculation of distance metrics compared to approximate or iterative estimators. A further study of using the estimators is also done for the object tracking problem, where we obtain important practical results for the distance estimation of signals in linear Gaussian discrete-time systems.

The following list highlights the primary contributions of this paper:

(1) Extension of the MMSE approach to the estimation of nonlinear functions of a state vector within the Kalman filtering framework. The obtained MMSE-optimal solution represents a two-stage estimator.
(2) Derivation of analytical expressions for the different metrics between two points on a line, between a point and a line, and between a point and a plane. We establish that the obtained estimators represent compact closed-form formulas depending on the Kalman filter state estimates and error covariance.
(3) The MMSE estimators for quadratic and bilinear forms of a state vector are investigated and applied, including the estimators for the square of a norm ‖x_k‖_2^2, the square of the Euclidean distance ‖x_k − y_k‖_2^2, and the inner product ⟨x_k, y_k⟩. A novel low-complexity algorithm for suboptimal estimation of a special class of composite functions is proposed. Tracking radar responses such as range, angles, and range rate are described by these functions.
(4) Performance of the proposed MMSE estimators through real examples illustrates their theoretical and practical usefulness.

This paper is organized as follows. Section 2 presents a statement of the MMSE estimation problem for an arbitrary nonlinear function of a state vector within a Kalman filtering framework. In Section 3, the general MMSE estimator is proposed, and the computational complexity of the estimator is discussed. The concept of a closed-form estimator is introduced. In Section 4, the closed-form MMSE estimator for the absolute value of a linear form of a state vector is derived (Theorem 1). In particular cases, the estimator calculates distances between two points on a 1-D line, between a point and a line in a 2-D plane, and between a point and a plane in 3-D space. A comparative analysis of the estimator via several practical examples is presented. In Section 5, the MMSE estimators for quadratic and bilinear forms of a state vector are comprehensively studied (Theorems 2 and 3). Effective matrix formulas for the quadratic and bilinear MMSE estimators are derived and applied to the Euclidean distance, a norm, and the inner product of vector signals. In Section 6, a low-complexity suboptimal estimator for composite nonlinear functions is proposed and recommended for calculation of radar range-angle responses. In Section 7, the efficiency of the suboptimal estimator is demonstrated on a 2-D dynamical model. Finally, we conclude the paper in Section 8. The list of main notations is given in Table 1.

2. Problem Statement

The basic framework for the Kalman filter involves estimation of the state of a discrete-time linear dynamical system with additive Gaussian white noise:

x_{k+1} = F_k x_k + G_k v_k,  k = 0, 1, ...,
y_k = H_k x_k + w_k,   (1)

where x_k ∈ R^n is a state vector, y_k ∈ R^m is a measurement vector, and v_k ∈ R^r and w_k ∈ R^m are zero-mean Gaussian white noises with process (Q_k) and measurement (R_k) noise covariances, respectively, i.e., v_k ∼ N(0, Q_k), w_k ∼ N(0, R_k), and F_k ∈ R^{n×n}, G_k ∈ R^{n×r}, Q_k ∈ R^{r×r}, R_k ∈ R^{m×m}, and H_k ∈ R^{m×n}. The initial state x_0 ∼ N(m_0, C_0), and the process and measurement noises v_k, w_k are mutually uncorrelated.

In parallel with the state-space model (1), consider a nonlinear function of the state vector:

z_k = f(x_k): R^n ⟶ R,   (2)

which in a particular case represents a distance metric in R^n.
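Model (1) is straightforward to simulate, which is convenient for testing the estimators discussed later. The following is a minimal NumPy sketch; the function name and array shapes are illustrative assumptions, not anything prescribed by the paper.

```python
import numpy as np

def simulate_lgss(F, G, H, Q, R, m0, C0, steps, rng=None):
    """Simulate the linear-Gaussian state-space model (1):
    x_{k+1} = F x_k + G v_k,  y_k = H x_k + w_k."""
    rng = np.random.default_rng() if rng is None else rng
    m = H.shape[0]
    x = rng.multivariate_normal(m0, C0)                      # x_0 ~ N(m0, C0)
    xs, ys = [], []
    for _ in range(steps):
        y = H @ x + rng.multivariate_normal(np.zeros(m), R)  # measurement y_k
        xs.append(x)
        ys.append(y)
        v = rng.multivariate_normal(np.zeros(G.shape[1]), Q) # process noise v_k
        x = F @ x + G @ v                                    # state transition
    return np.array(xs), np.array(ys)
```

For the scalar and 2-D random-walk models used later (equations (18) and (22)), F, G, and H reduce to identity-like matrices of size 1 or 2.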

Table 1: List of main notations.

R^n: set of n-dimensional real column vectors
R^{n×m}: set of n × m real matrices
A^T: transpose of matrix A
I_n: identity matrix of size n × n
O_n: null matrix of size n × n
A^{−1}: inverse of n × n matrix A
tr(A): trace of n × n matrix A
N(m, C): normal distribution with mean m and covariance matrix C
E(·): expectation operator
Cov(x_k): covariance (covariance matrix) of random vector x_k
Cov(x_k, y_k): cross covariance of random vectors x_k and y_k
‖x‖_2: Euclidean norm (2-norm) of a vector, ‖x‖_2 = √(x^T x)
⟨x, y⟩: inner product of vectors, ⟨x, y⟩ = Σ_{i=1}^n x_i y_i, x, y ∈ R^n
c^T x: linear form (LF), c^T x = Σ_{i=1}^n c_i x_i, c, x ∈ R^n
x^T A x: quadratic form (QF), x^T A x = Σ_{i,j=1}^n a_{ij} x_i x_j, A = [a_{ij}]
x^T B x̃: bilinear form (BLF), x^T B x̃ = Σ_{i,j=1}^n b_{ij} x_i x̃_j, B = [b_{ij}]

Given the overall noisy measurements y^k = {y_1, y_2, ..., y_k}, k ≥ 1, our goal is to derive optimal estimators x̂_k and ẑ_k for the state vector (1) and the nonlinear function (2), respectively.

There are a multitude of statistics-based methods to estimate the unknown value z_k = f(x_k) from the sensor measurements y^k. We focus on the MMSE approach, which minimizes the mean square error (MSE), min_ẑ E(‖z_k − ẑ_k‖_2^2), a common measure of estimator quality.

The MMSE estimator is the conditional mean (expectation) of the unknown z_k = f(x_k) given the known observed value of the measurements, ẑ_k^opt = E(z_k | y^k) [30, 31]. The most challenging problem in the MMSE approach is how to calculate the conditional mean. In this paper, explicit formulas for distance metrics within the Kalman filtering framework are derived.

3. General Formula for Optimal Two-Stage MMSE Estimator

In this section, the optimal MMSE estimator for a general function f(x_k) of a state vector is proposed. It includes two stages: the optimal Kalman estimate of the state vector x̂_k computed at the first stage is used at the second stage for estimation of f(x_k).

First stage (calculation of Kalman estimate): the mean square estimate x̂_k = E(x_k | y^k) of the state x_k based on the measurements y^k and the error covariance P_k = Cov(e_k), e_k = x_k − x̂_k, are described by the recursive Kalman filter (KF) equations [30, 31]:

x̂⁻_{k+1} = F_k x̂_k,  x̂_0 = m_0,  k = 0, 1, ...,
P⁻_{k+1} = F_k P_k F_k^T + G_k Q_k G_k^T,  P_0 = C_0,
K_{k+1} = P⁻_{k+1} H_{k+1}^T (H_{k+1} P⁻_{k+1} H_{k+1}^T + R_{k+1})^{−1},
x̂_{k+1} = x̂⁻_{k+1} + K_{k+1} (y_{k+1} − H_{k+1} x̂⁻_{k+1}),
P_{k+1} = (I_n − K_{k+1} H_{k+1}) P⁻_{k+1},   (3)

where x̂⁻_{k+1} = E(x_{k+1} | y^k) and P⁻_{k+1} = Cov(e⁻_{k+1}), e⁻_{k+1} = x_{k+1} − x̂⁻_{k+1}, are the time update estimate and error covariance, respectively, and K_k ∈ R^{n×m} is the filter gain matrix.

Second stage (optimal MMSE estimator): next, the optimal MMSE estimate of the nonlinear function z_k = f(x_k) based on the measurements y^k also represents a conditional mean, that is,

ẑ_k^opt = E(z_k | y^k) = ∫_{R^n} f(x) p(x | y^k) dx,   (4)

where p(x | y^k) = N(x̂_k, P_k) is a multivariate conditional Gaussian probability density function,

N(x̂_k, P_k) = (2π)^{−n/2} |P_k|^{−1/2} exp[−(1/2) (x − x̂_k)^T P_k^{−1} (x − x̂_k)].   (5)

Thus, the best estimate in equation (4) represents the optimal MMSE estimator, ẑ_k^opt = F(x̂_k, P_k), which depends on the Kalman estimate x̂_k and error covariance P_k determined by the KF equation (3).

Remark 1 (closed-form MMSE estimator). In the general case, the calculation of the optimal estimate, ẑ_k^opt = E(z_k | y^k), is reduced to calculation of the multivariate Gaussian integral (5). Its drawback is that the integral cannot be calculated in explicit form for an arbitrary nonlinear function f(x). Analytical calculation of the integral (a closed-form MMSE estimator) is possible only in the special cases considered in the paper. The closed-form estimators for distance metrics in terms of x̂_k and P_k are proposed in Sections 4 and 5.

The Euclidean distance between two points, x_1, x_2 ∈ R^n, is defined as

d(x_1, x_2) ≝ ‖x_1 − x_2‖_2 = √(Σ_{i=1}^n (x_{1,i} − x_{2,i})²).   (6)

In the particular case where x_1 and x_2 represent two points located on a 1-D line, the Euclidean distance reduces to the absolute value (see Figure 1), i.e.,

d(x_1, x_2) = |x_1 − x_2|.   (7)

In Section 4, the MMSE estimator for the absolute value is comprehensively studied.

4. Closed-Form MMSE Estimator for Absolute Value

4.1. MMSE Estimator for Absolute Value of Linear Form

Lemma 1 (MMSE estimator for |x|). Let x ∈ R be a normal random variable, and let x̂ and P = E(x − x̂)² be the MMSE estimate and error variance, respectively. Then, the MMSE estimator for the absolute value z = |x| has the following closed-form expression:

ẑ^opt = E(|x| | y^k) = √(2P/π) exp(−x̂²/(2P)) + x̂ [1 − 2Φ(−x̂/√P)],   (8)

where Φ(·) is the cumulative distribution function of the standard normal distribution, N(0, 1).
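Formula (8) is easy to evaluate from the Kalman outputs (x̂, P). Below is a minimal Python sketch together with a quick Monte Carlo check of the formula; the function name is an illustrative assumption, not notation from the paper, and only the standard library is used.

```python
import math
import random

def mmse_abs(xhat, P):
    """Closed-form MMSE estimate of z = |x| for x ~ N(xhat, P), equation (8)."""
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF
    return (math.sqrt(2.0 * P / math.pi) * math.exp(-xhat**2 / (2.0 * P))
            + xhat * (1.0 - 2.0 * Phi(-xhat / math.sqrt(P))))

# Monte Carlo sanity check: the sample mean of |x| should agree with (8)
xhat, P = 0.3, 0.5
samples = [abs(random.gauss(xhat, math.sqrt(P))) for _ in range(200_000)]
print(mmse_abs(xhat, P), sum(samples) / len(samples))
```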

Figure 1: Distance d(x_1, x_2) = |x_1 − x_2| between two moving points on a 1-D line.

The derivation of equation (8) is given in the Appendix.

Let ℓ = c^T x + d be a linear form (LF) of the normal random vector x ∈ R^n, and let x̂ ∈ R^n and P ∈ R^{n×n} be the MMSE estimate and error covariance, respectively. Then, the MMSE estimate of the linear form ℓ and its error variance can be calculated as

ℓ̂ = E(ℓ | y^k) = c^T x̂ + d,
P^(ℓ) = E(ℓ − ℓ̂)² = c^T P c,   (9)

and we have the following theorem.

Theorem 1 (MMSE estimator for absolute value of LF). Let x ∈ R^n be a normal random vector, and let x̂ ∈ R^n and P ∈ R^{n×n} be the MMSE estimate and error covariance, respectively. Then, the closed-form MMSE estimator for the absolute value z = |c^T x + d| is defined by formula (8):

ẑ^opt = √(2P^(ℓ)/π) exp(−ℓ̂²/(2P^(ℓ))) + ℓ̂ [1 − 2Φ(−ℓ̂/√P^(ℓ))],   (10)

where ℓ̂ and P^(ℓ) are determined by equation (9).

The MMSE estimator (10) allows one to calculate distances measured in terms of the absolute value in n-dimensional space.

4.2. Examples of MMSE Estimator for Distance between Points. Let x_k ∈ R^n be a normal state vector, and let x̂_k ∈ R^n and P_k ∈ R^{n×n} be the Kalman estimate and error covariance, respectively, P_k = [P_{ij,k}].

Example 1 (distance on a 1-D line). The MMSE estimator for the distance z_k = |x_k − a_k| between the moving point (x_k) and a given sequence (a_k) on a 1-D line takes the form

ẑ_k^opt = √(2P_k/π) exp[−(x̂_k − a_k)²/(2P_k)] + (x̂_k − a_k) [1 − 2Φ(−(x̂_k − a_k)/√P_k)].   (11)

Example 2 (distance between point and line). The shortest distance d_k(L, M) from the moving point M(x_k) = M(x_{1,k}, x_{2,k}) to the line L: Ax_1 + Bx_2 + C = 0 in the 2-D plane is shown in Figure 2. The distance is given by

d_k(L, M) = |Ax_{1,k} + Bx_{2,k} + C| / √(A² + B²).   (12)

Figure 2: Shortest distance d_k(L, M) from the moving point M(x_k) to the line L: Ax_1 + Bx_2 + C = 0.

Substituting c^T = [A B] and d = C into equations (9) and (10), we get the MMSE estimator for the shortest distance (12):

d̂_k^opt(L, M) = (1/√(A² + B²)) { √(2P_k^(ℓ)/π) exp(−ℓ̂_k²/(2P_k^(ℓ))) + ℓ̂_k [1 − 2Φ(−ℓ̂_k/√P_k^(ℓ))] },   (13)

where ℓ̂_k and P_k^(ℓ) are determined by equation (9):

ℓ̂_k = A x̂_{1,k} + B x̂_{2,k} + C,
P_k^(ℓ) = A² P_{11,k} + B² P_{22,k} + 2AB P_{12,k}.   (14)

The MMSE estimator (12)–(14) can be generalized to 3-D space.

Example 3 (distance between point and plane). Similar to equation (12), the shortest distance between the moving point M(x_k) = M(x_{1,k}, x_{2,k}, x_{3,k}) and the plane P: Ax_1 + Bx_2 + Cx_3 + D = 0 in 3-D space,

d_k(P, M) = |Ax_{1,k} + Bx_{2,k} + Cx_{3,k} + D| / √(A² + B² + C²),   (15)

is shown in Figure 3.

Substituting c^T = [A B C] and d = D into equations (9) and (10), we get

d̂_k^opt(P, M) = (1/√(A² + B² + C²)) { √(2P_k^(ℓ)/π) exp(−ℓ̂_k²/(2P_k^(ℓ))) + ℓ̂_k [1 − 2Φ(−ℓ̂_k/√P_k^(ℓ))] },   (16)

where ℓ̂_k and P_k^(ℓ) are determined by equation (9):

ℓ̂_k = A x̂_{1,k} + B x̂_{2,k} + C x̂_{3,k} + D,
P_k^(ℓ) = A² P_{11,k} + B² P_{22,k} + C² P_{33,k} + 2AB P_{12,k} + 2AC P_{13,k} + 2BC P_{23,k}.   (17)
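Estimators (10) and (13)–(17) are simple functions of the Kalman outputs. A minimal NumPy sketch is given below; the helper names are illustrative assumptions rather than notation from the paper.

```python
import numpy as np
from math import erf, sqrt, exp, pi

def norm_cdf(t):
    """Standard normal CDF Phi(t)."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def mmse_abs_linear(c, d, xhat, P):
    """MMSE estimate of |c^T x + d| from the Kalman outputs (xhat, P), equations (9)-(10)."""
    c = np.asarray(c, dtype=float)
    ell = float(c @ xhat + d)      # ell_hat = c^T xhat + d
    Pl = float(c @ P @ c)          # P^(ell) = c^T P c
    return (sqrt(2.0 * Pl / pi) * exp(-ell**2 / (2.0 * Pl))
            + ell * (1.0 - 2.0 * norm_cdf(-ell / sqrt(Pl))))

def mmse_point_to_line(A, B, C, xhat, P):
    """MMSE estimate of the point-to-line distance, equations (12)-(14)."""
    return mmse_abs_linear([A, B], C, xhat, P) / sqrt(A**2 + B**2)

def mmse_point_to_plane(A, B, C, D, xhat, P):
    """MMSE estimate of the point-to-plane distance, equations (15)-(17)."""
    return mmse_abs_linear([A, B, C], D, xhat, P) / sqrt(A**2 + B**2 + C**2)
```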

Figure 3: Shortest distance from the point M(x_1, x_2, x_3) to the plane P: Ax_1 + Bx_2 + Cx_3 + D = 0 in 3-D space.

The MMSE distance estimators in Theorem 1 and Examples 1–3 are summarized in Table 2.

4.3. Numerical Examples. In this section, numerical examples demonstrate the accuracy of the two closed-form estimators calculated for the absolute value z = |c^T x|. The optimal MMSE estimator ẑ^opt is compared with the simple suboptimal one ẑ^sub = |c^T x̂|.

4.3.1. Estimation of Distance between Random Location and Given Point on a 1-D Line. Let x_k be a scalar random position measured in additive white noise; then, the system model is

x_{k+1} = x_k + v_k,  x_0 = m_0,
y_k = x_k + w_k,  k = 1, 2, ...,   (18)

where m_0 is the known initial condition and v_k ∼ N(0, q) and w_k ∼ N(0, r) are uncorrelated white Gaussian noises.

The KF equation (3) gives the following:

x̂_{k+1} = x̂_k + K_{k+1} (y_{k+1} − x̂_k),
P⁻_{k+1} = P_k + q,  P_0 = σ_0² = 1,  K_{k+1} = P⁻_{k+1} / (r + P⁻_{k+1}),
P_{k+1} = (1 − K_{k+1}) P⁻_{k+1},  k = 0, 1, ....   (19)

Consider the distance between x_k and a known point a, i.e., z_k = |x_k − a|. Then, the optimal MMSE estimate of the distance is defined by (10). Further, we are interested in the special case in which a = 0 and z_k = |x_k|. In this case, formula (10) represents the optimal estimate of the distance between the current position x_k and the origin, i.e.,

ẑ_k^opt = √(2P_k/π) exp(−α_k) + x̂_k [1 − 2Φ(−β_k)],
α_k = x̂_k² / (2P_k),
β_k = x̂_k / √P_k.   (20)

In parallel to the optimal estimate (20), consider the simple suboptimal estimate, ẑ_k^sub = |x̂_k|.

Remark 2. Reviewing formula (20), we find the following. If the values of α_k and β_k are large (α_k, |β_k| ≫ 1), then exp(−α_k) ≈ 0 and Φ(−β_k) ≈ 0 if x̂_k > 0 or Φ(−β_k) ≈ 1 if x̂_k < 0, which implies that both estimates are quite close, i.e., ẑ_k^opt ≈ ẑ_k^sub = |x̂_k|. Assuming the estimate x̂_k is far enough from zero, the largeness of α_k and β_k depends on the error variance P_k. Using (19), the steady-state value of the variance P_∞ satisfies the quadratic equation P_∞² + qP_∞ − rq = 0 with solution P_∞ = (−q + √(q² + 4rq))/2. Since the variance P_∞ = P_∞(q, r) depends on the noise statistics q and r, this fact can be used in practice to compare the proposed estimators. For example, if the estimate x̂_k is far enough from zero and the product rq is small (rq ≪ 1), then P_∞ ≈ 0 and α_k, |β_k| ≫ 1. In this case, both estimators are close, ẑ_k^opt ≈ ẑ_k^sub. Simulation results confirm this conclusion.

Next, we test the efficiency of the proposed estimators. The estimators are compared under different values of the noise variances q and r. The following scenarios were considered:

Case 1: small noises, q = 10^{−4}, r = 10^{−2}
Case 2: medium noises, q = 0.1, r = 0.5
Case 3: large noises, q = 0.5, r = 1

Both estimators were run with the same random noises for further comparison. The Monte Carlo simulation with 1000 runs was applied in calculation of the root mean square error (RMSE), RMSE_k^opt = √(E(z_k − ẑ_k^opt)²) and RMSE_k^sub = √(E(z_k − ẑ_k^sub)²). Define the average RMSE over the time interval k ∈ [k_1, k_2] as

R(ẑ) ≝ (1/(k_2 − k_1 + 1)) Σ_{k=k_1}^{k_2} RMSE_k.   (21)

The simulation results are illustrated in Table 3 and Figures 4–7.

In Case 1, both zero and nonzero initial conditions x_0 = m_0 are of interest.

At m_0 = 0 and q = 10^{−4}, the signal x_k and its estimate x̂_k are close to zero, and P_k ≈ 0.001 at k > 8. In this case, the values of α_k and β_k are not large; therefore, the optimal and suboptimal estimates differ, as shown in Figure 4 and confirmed by the values R(ẑ^opt) = 0.0213 and R(ẑ^sub) = 0.0327 in Table 3.

At m_0 = 1 and q = 10^{−4}, the estimate x̂_k is far enough from zero, and P_k ≈ 0.001. According to Remark 2, the optimal and suboptimal estimates are approximately equal, ẑ_k^opt ≈ ẑ_k^sub, as shown in Figure 5. The equal values R(ẑ^opt) = R(ẑ^sub) = 0.0334 confirm this fact.

In Cases 2 and 3, the variance P_∞ is not small; therefore, the initial condition m_0 does not play a significant role in comparing the estimators. In these cases, the optimal estimator ẑ_k^opt performs better than the simple suboptimal one ẑ_k^sub = |x̂_k|. Typical plots are shown in Figures 6 and 7, and the values of R(ẑ^opt) and R(ẑ^sub) in Table 3 confirm that conclusion.

Thus, the simulation results in Section 4.3.1 show that the optimal estimator is suitable for practical applications.
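The scalar experiment of this subsection is easy to reproduce. The sketch below simulates model (18), runs the scalar KF (19), and compares the optimal estimate (20) with the suboptimal |x̂_k| via the average RMSE (21). It is a minimal reimplementation under the stated noise settings, not the authors' original code.

```python
import numpy as np
from math import erf, sqrt, exp, pi

def phi(t):                       # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def run_case(q, r, m0, steps=20, runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    se_opt = np.zeros(steps)      # accumulated squared errors, optimal estimator
    se_sub = np.zeros(steps)      # accumulated squared errors, suboptimal estimator
    for _ in range(runs):
        x, xhat, P = m0, m0, 1.0                  # x_0 = m_0, P_0 = 1
        for k in range(steps):
            x = x + rng.normal(0.0, sqrt(q))      # random walk (18)
            y = x + rng.normal(0.0, sqrt(r))      # noisy measurement
            Pm = P + q                            # scalar KF update (19)
            K = Pm / (r + Pm)
            xhat = xhat + K * (y - xhat)
            P = (1.0 - K) * Pm
            z = abs(x)                            # true distance to the origin
            z_opt = sqrt(2*P/pi) * exp(-xhat**2 / (2*P)) + xhat * (1 - 2*phi(-xhat/sqrt(P)))  # (20)
            z_sub = abs(xhat)                     # simple suboptimal estimate
            se_opt[k] += (z - z_opt)**2
            se_sub[k] += (z - z_sub)**2
    # average RMSE over the whole interval, cf. (21)
    return np.sqrt(se_opt/runs).mean(), np.sqrt(se_sub/runs).mean()

print(run_case(q=0.1, r=0.5, m0=0.0))   # Case 2 (medium noises), zero initial condition
```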

Table 2: MMSE estimators for distance in the Cartesian coordinates.

(1) Absolute value, z = |x|: ẑ^opt = Γ(x̂, P), where Γ(x̂, P) = √(2P/π) exp(−x̂²/(2P)) + x̂ [1 − 2Φ(−x̂/√P)]; equation (8).
(2) Absolute value of a linear form, z = |c^T x + d|: ẑ^opt = Γ₁(ℓ̂, P^(ℓ)), where Γ₁(ℓ̂, P^(ℓ)) = √(2P^(ℓ)/π) exp(−ℓ̂²/(2P^(ℓ))) + ℓ̂ [1 − 2Φ(−ℓ̂/√P^(ℓ))], ℓ̂ = c^T x̂ + d, P^(ℓ) = c^T P c; equations (9) and (10).
(3) Distance between two points on a 1-D line, z = |x − a|: ẑ^opt = Γ₁(ℓ̂, P), ℓ̂ = x̂ − a; equation (11).
(4) Distance between point M and line L in the 2-D plane, d(L, M) = |Ax_1 + Bx_2 + C|/√(A² + B²): d̂(L, M) = (1/√(A² + B²)) Γ₁(ℓ̂, P^(ℓ)), ℓ̂ = A x̂_1 + B x̂_2 + C, P^(ℓ) = A² P_{11} + B² P_{22} + 2AB P_{12}; equations (12)–(14).
(5) Distance between point M and plane P in 3-D space, d(P, M) = |Ax_1 + Bx_2 + Cx_3 + D|/√(A² + B² + C²): d̂(P, M) = (1/√(A² + B² + C²)) Γ₁(ℓ̂, P^(ℓ)), ℓ̂ = A x̂_1 + B x̂_2 + C x̂_3 + D, P^(ℓ) = A² P_{11} + B² P_{22} + C² P_{33} + 2AB P_{12} + 2AC P_{13} + 2BC P_{23}; equations (15)–(17).

Table 3: Simulation results for Section 4.3.1.

Case 1 (small noises): q = 0.0001, r = 0.01, P_∞ ≈ 0.001.
  m_0 = 0: R(ẑ^opt) = 0.0213, R(ẑ^sub) = 0.0327; Figures 4(a) and 4(b).
  m_0 = 1: R(ẑ^opt) = 0.0334, R(ẑ^sub) = 0.0334; Figures 5(a) and 5(b).
Case 2 (medium noises): q = 0.1, r = 0.5, P_∞ ≈ 0.18.
  m_0 = 0: R(ẑ^opt) = 0.3110, R(ẑ^sub) = 0.3883; Figures 6(a) and 6(b).
  m_0 = 1: R(ẑ^opt) = 0.3047, R(ẑ^sub) = 0.3201.
Case 3 (large noises): q = 0.5, r = 1, P_∞ ≈ 0.5.
  m_0 = 0: R(ẑ^opt) = 0.6296, R(ẑ^sub) = 0.6991; Figures 7(a) and 7(b).
  m_0 = 1: R(ẑ^opt) = 0.5359, R(ẑ^sub) = 0.5798.

Figure 4: Comparison between optimal and suboptimal estimators for small noises (Case 1) with q = 0.0001, r = 0.01, P_∞ ≈ 0.001, and zero initial condition m_0 = 0. (a) True and estimated values for z_k = |x_k|. (b) RMSEs for both estimators.

Figure 5: Comparison between optimal and suboptimal estimators for small noises (Case 1) with q = 0.0001, r = 0.01, P_∞ = 0.001, and nonzero initial condition m_0 = 1. (a) True and very close estimates for z_k = |x_k|. (b) Very close RMSE values for both estimators.

Figure 6: Comparison between optimal and suboptimal estimators for medium noises (Case 2) with q = 0.1, r = 0.5, P_∞ = 0.18, and zero initial condition m_0 = 0. (a) True and estimated values for z_k = |x_k|. (b) RMSEs for both estimators.

Figure 7: Comparison between optimal and suboptimal estimators for large noises (Case 3) with q = 0.5, r = 1.0, P_∞ = 0.5, and zero initial condition m_0 = 0. (a) True and estimated values for z_k = |x_k|. (b) RMSEs for both estimators.

4.3.2. Estimation of Distance between Two Random Points on a 1-D Line. Consider the motion of two random points A_1(x_1) and A_2(x_2) on a 1-D line. Assume that the evolution of the state vector x_k = [x_{1,k} x_{2,k}]^T from time t_k to t_{k+1} is defined by the random walk model:

x_{1,k+1} = x_{1,k} + v_{1,k},  x_{1,0} = m_1,  k = 0, 1, ...,
x_{2,k+1} = x_{2,k} + v_{2,k},  x_{2,0} = m_2,   (22)

where m_1 and m_2 are the known initial conditions and v_{1,k} ∼ N(0, q_1) and v_{2,k} ∼ N(0, q_2) are uncorrelated white Gaussian noises.

Assuming we measure the true positions of the points with correlated measurement white noises w_1 and w_2, respectively, the measurement equation is

y_{1,k} = x_{1,k} + w_{1,k},  w_{1,k} ∼ N(0, r_1),
y_{2,k} = x_{2,k} + w_{2,k},  w_{2,k} ∼ N(0, r_2),   (23)

where E(w_{1,k} w_{2,k}) = r_{12}.

Our goal is to estimate the unknown distance d(A_1, A_2) = |x_{1,k} − x_{2,k}| between the current locations of the points A_1(x_{1,k}) and A_2(x_{2,k}).

According to the proposed two-step estimation procedure, the optimal Kalman estimate x̂_k = [x̂_{1,k} x̂_{2,k}]^T and error covariance P_k = [P_{ij,k}], P_0 = I_2, computed at the first stage are used at the second stage for estimation of the distance z_k = |x_{1,k} − x_{2,k}|. Using formulas (9) and (10) with c^T = [1 −1] and d = 0, we obtain the best MMSE estimate for the distance:

ẑ_k^opt = √(2P_k^(ℓ)/π) exp(−α_k) + ℓ̂_k [1 − 2Φ(−β_k)],
α_k = ℓ̂_k² / (2P_k^(ℓ)),
β_k = ℓ̂_k / √(P_k^(ℓ)),
ℓ̂_k = x̂_{1,k} − x̂_{2,k},
P_k^(ℓ) = P_{11,k} + P_{22,k} − 2P_{12,k}.   (24)

In parallel with the optimal distance estimator (24), we consider the simple suboptimal estimator ẑ_k^sub = |x̂_{1,k} − x̂_{2,k}|.

Remark 3. As we see, the optimal estimate ẑ_k^opt of the distances in (20) and (24) depends on the functions α_k and β_k. The functions in formulas (20) and (24) are calculated at the pairs (x̂_k, P_k) and (ℓ̂_k, P_k^(ℓ)), respectively. The second pair depends on the state estimate x̂_k = [x̂_{1,k} x̂_{2,k}]^T and error covariance P_k = [P_{ij,k}]. Therefore, Remark 2 is also valid for models (22) and (23). For example, if the estimate ℓ̂_k = x̂_{1,k} − x̂_{2,k} is far enough from zero and the variance P_k^(ℓ) = E(ℓ_k − ℓ̂_k)² is small, then ẑ_k^opt ≈ ẑ_k^sub. The simulation results in Figure 8, with P_k^(ℓ) = 0.0015 for k > 8 and very close values of the average RMSEs, R(ẑ^opt) = 0.0189 and R(ẑ^sub) = 0.0189, confirm this fact.

In addition, we are interested in the following new scenarios:

Case 1: both points A_1(x_1) and A_2(x_2) are fixed, and their positions are measured with small noises
Case 2: the first point A_1(x_1) is fixed, but the movement of the second one A_2(x_2) is subject to a small noise
Case 3: the movement of both points is subject to a medium noise

The model parameters and simulation results for these scenarios are given in Table 4. From Table 4, we observe a strong difference between the average RMSEs R(ẑ^opt) and R(ẑ^sub), i.e., R(ẑ^opt) < R(ẑ^sub). It is not a surprise that the optimal estimator (24) is better than the suboptimal one, ẑ_k^sub = |x̂_{1,k} − x̂_{2,k}|.

5. MMSE Estimators for Bilinear and Quadratic Forms

5.1. Optimal Closed-Form MMSE Estimator for Quadratic Form. Consider a quadratic form (QF) of the state vector x_k ∈ R^n:

z_k = x_k^T A_k x_k,  A_k = A_k^T.   (25)

In this case, the optimal MMSE estimator (4) can be explicitly calculated in terms of the Kalman estimate x̂_k and error covariance P_k.

Theorem 2 (MMSE estimator for QF). Let x_k ∈ R^n be a normal random vector, and let x̂_k ∈ R^n and P_k ∈ R^{n×n} be the Kalman estimate and error covariance, respectively. Then, the optimal MMSE estimator for the QF z_k = x_k^T A_k x_k has the following closed-form structure:

ẑ_k^opt = x̂_k^T A_k x̂_k + tr(A_k P_k).   (26)

Proof. Using the formulas x^T A x = tr(A x x^T) and E(x x^T) = Cov(x) + E(x) E(x^T), we obtain

ẑ_k^opt = E(x_k^T A_k x_k | y^k) = E(tr(A_k x_k x_k^T) | y^k)
        = tr(A_k E(x_k x_k^T | y^k))
        = tr(A_k [P_k + E(x_k | y^k) E(x_k^T | y^k)])
        = tr(A_k P_k) + tr(A_k x̂_k x̂_k^T) = tr(A_k P_k) + x̂_k^T A_k x̂_k.   (27)

Figure 8: Comparison between optimal and suboptimal estimators for small noises with q_1 = q_2 = 10^{−4}, r_1 = r_2 = 10^{−2}, r_{12} = 0.005, and initial positions m_1 = 0, m_2 = 1. (a) True and very close estimates for the distance z_k = |x_{1,k} − x_{2,k}|. (b) Very close RMSE values for both estimators.

Table 4: Simulation results for Section 4.3.2.

Case 1: m_1 = 1, m_2 = 1.01; q_1 = 0, q_2 = 0; r_1 = 0.01, r_2 = 0.01, r_{12} = 0.005; P_∞^(ℓ) = 0.0001; R(ẑ^opt) = 0.0013, R(ẑ^sub) = 0.0060.
Case 2: m_1 = 1, m_2 = 1.01; q_1 = 0, q_2 = 0.01; r_1 = 0.01, r_2 = 0.01, r_{12} = 0.005; P_∞^(ℓ) = 0.015; R(ẑ^opt) = 0.0378, R(ẑ^sub) = 0.0450.
Case 3: m_1 = 1, m_2 = 1.01; q_1 = 0.05, q_2 = 0.03; r_1 = 0.05, r_2 = 0.05, r_{12} = 0.001; P_∞^(ℓ) = 0.1366; R(ẑ^opt) = 0.1596, R(ẑ^sub) = 0.1796.

In parallel to the optimal quadratic estimator (26), we consider the simple suboptimal estimator denoted as ẑ_k^sub, which is obtained by direct calculation of the QF at the point x_k = x̂_k, such as

ẑ_k^sub = x̂_k^T A_k x̂_k.   (28)

The simple estimator (28) depends only on the Kalman estimate x̂_k and does not require the KF error covariance P_k, in contrast to the optimal one (26). The following result compares the estimation accuracy of the optimal and suboptimal quadratic estimators. □

Lemma 2 (difference between MSEs for quadratic estimators). The difference between the true MSEs P_{z,k}^opt = E(z_k − ẑ_k^opt)² and P_{z,k}^sub = E(z_k − ẑ_k^sub)² for the optimal and simple suboptimal quadratic estimators is tr²(A_k P_k).

Proof. Using the fact that the MMSE estimator is unbiased, E(ẑ_k^opt − z_k) = 0, and the equality ẑ_k^sub = ẑ_k^opt − tr(A_k P_k), we obtain

P_{z,k}^sub = E(z_k − ẑ_k^sub)²
            = E(z_k − ẑ_k^opt + tr(A_k P_k))²
            = E(z_k − ẑ_k^opt)² + 2 tr(A_k P_k) E(z_k − ẑ_k^opt) + tr²(A_k P_k)
            = P_{z,k}^opt + tr²(A_k P_k).   (29)

Let us illustrate Theorem 2 and Lemma 2 on the example of the squared norm of a random vector, z_k = ‖x_k‖² = x_k^T x_k. Then, A_k = I_n, and the quadratic estimators and the difference between their MSEs take the form

ẑ_k^opt = x̂_k^T x̂_k + tr(P_k),
ẑ_k^sub = x̂_k^T x̂_k,  δ_k = P_{z,k}^sub − P_{z,k}^opt = tr²(P_k).   (30)

We see that the difference δ_k = tr²(P_k) depends on the quality of the KF data processing (3). □

5.2. Optimal Closed-Form MMSE Estimator for Bilinear Form. Let x_k ∈ R^n and x̃_k ∈ R^n be two arbitrary state vectors. Then, a bilinear form (BLF) on the state space can be written as follows:

u_k = x_k^T A_k x̃_k,  A_k = A_k^T.   (31)

Note that a BLF can be written as a QF in the vector X_k ∈ R^{2n}. In this case,

u_k = x_k^T A_k x̃_k = [x_k^T x̃_k^T] B_k [x_k; x̃_k] = X_k^T B_k X_k,
X_k = [x_k; x̃_k],  X_k^T = [x_k^T x̃_k^T],
B_k = [ O_n  (1/2)A_k ; (1/2)A_k^T  O_n ].   (32)

For the QF (32), the optimal bilinear estimator can be explicitly calculated in terms of the Kalman estimate X̂_k ∈ R^{2n} and the block error covariance matrix P_k ∈ R^{2n×2n}:

X̂_k = [x̂_k; x̃̂_k],
P_k = [ P_{xx,k}  P_{xx̃,k} ; P_{xx̃,k}^T  P_{x̃x̃,k} ],   (33)

where P_{xx̃,k} = Cov(e_{x,k}, e_{x̃,k}) is the cross covariance between the estimation errors e_{x,k} = x_k − x̂_k and e_{x̃,k} = x̃_k − x̃̂_k.

Applying Theorem 2 to the QF z_k = X_k^T B_k X_k and taking into consideration the block structure of the matrix B_k, we have the following.

Theorem 3 (MMSE estimator for BLF). Let X_k = [x_k^T x̃_k^T]^T ∈ R^{2n} be a joint normal random vector, and let X̂_k ∈ R^{2n} and P_k ∈ R^{2n×2n} be the Kalman estimate and block error covariance matrix (33). Then, the optimal MMSE estimator for the BLF u_k = x_k^T A_k x̃_k has the following closed-form structure:

û_k^opt = x̂_k^T A_k x̃̂_k + tr(A_k P_{xx̃,k}).   (34)

Example 4 (estimation of inner product and squared Euclidean distance). Using the bilinear estimator (34) with A_k = I_n, the MMSE estimator for the inner product u_k = x_k^T x̃_k takes the form

û_k^opt = x̂_k^T x̃̂_k + tr(P_{xx̃,k}).   (35)

Next, calculate the optimal MMSE estimator for the squared Euclidean distance between two points, z_k = d²(x_k, x̃_k) = ‖x_k − x̃_k‖_2² or z_k = ‖η_k‖_2², where η_k = x_k − x̃_k. The Kalman estimate and error covariance of the difference η_k take the form

η̂_k = x̂_k − x̃̂_k,
P_k^(η) = P_{xx,k} + P_{x̃x̃,k} − P_{xx̃,k} − P_{xx̃,k}^T.   (36)

Applying the quadratic estimator (26) with A_k = I_n, we obtain the MMSE estimator for the squared Euclidean distance:

ẑ_k^opt = η̂_k^T η̂_k + tr(P_k^(η)) = (x̂_k − x̃̂_k)^T (x̂_k − x̃̂_k) + tr(P_k^(η)).   (37)

The MMSE estimators for bilinear and quadratic forms are summarized in Table 5.

5.3. Practical Usefulness of the Squared Euclidean Distance. In many practical problems, for example, finding the shortest distance from a point to a curve, min_{x,x̃∈M} d(x, x̃), or comparing a distance with a threshold value, d(x, x̃) ≷ ε, there is no need to calculate the original Euclidean distance d(x, x̃) = √(Σ_{i=1}^n (x_i − x̃_i)²); we just need to calculate its square due to the equivalence of the problems, min_{x,x̃∈M} d(x, x̃) ⇔ min_{x,x̃∈M} d²(x, x̃) and d(x, x̃) ≷ ε ⟺ d²(x, x̃) ≷ ε². In such situations, the optimal quadratic estimator (37) for the squared Euclidean distance, d²(x, x̃) = Σ_{i=1}^n (x_i − x̃_i)², can be successfully used.

Example 5 (deviation of normal and nominal trajectories). Suppose that the piecewise feedback control law U_k* depends on the difference between a normal (x_k) and a nominal (x_k^n) trajectory. For example, it is given by

U_k* = 1 if d(x_k, x_k^n) < D, and U_k* = −1 otherwise,   (38)

where d(x_k, x_k^n) is the Euclidean distance and D is the distance threshold (see Figure 9).

In view of the above, rewrite the control law in the equivalent form:

U_k* = 1 if d²(x_k, x_k^n) < D², and U_k* = −1 otherwise,   (39)

where d²(x_k, x_k^n) is the square of the Euclidean distance and D² is the new threshold.

Using the quadratic estimator (37) for the squared distance z_k = d²(x_k, x_k^n), we obtain the MMSE estimator, ẑ_k^opt = (x̂_k − x̂_k^n)^T (x̂_k − x̂_k^n) + tr(P_k), which can be used in the control law (39):

U_k* = 1 if ẑ_k^opt < D², and U_k* = −1 otherwise.   (40)

In the next section, we discuss application of the linear, bilinear, and quadratic estimators (Theorems 1–3) for estimation of composite nonlinear functions.
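The bilinear and quadratic estimators above reduce to one-line computations from the Kalman filter blocks in (33); a minimal NumPy sketch follows (illustrative function names, not notation from the paper), and Table 5 then summarizes the corresponding formulas.

```python
import numpy as np

def mmse_bilinear(xhat, xt_hat, B, P_cross):
    """MMSE estimate of x^T B x~, equation (34); P_cross is the error cross covariance."""
    return float(xhat @ B @ xt_hat + np.trace(B @ P_cross))

def mmse_inner_product(xhat, xt_hat, P_cross):
    """MMSE estimate of the inner product <x, x~>, equation (35)."""
    return float(xhat @ xt_hat + np.trace(P_cross))

def mmse_sq_distance(xhat, xt_hat, P_xx, P_tt, P_cross):
    """MMSE estimate of the squared Euclidean distance ||x - x~||^2, equations (36)-(37)."""
    eta = xhat - xt_hat
    P_eta = P_xx + P_tt - P_cross - P_cross.T
    return float(eta @ eta + np.trace(P_eta))
```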

Table 5: MMSE estimators for QF, BLF, inner product, and squared norm.

QF: z = x^T A x, A = A^T; ẑ^opt = x̂^T A x̂ + tr(AP); equation (26).
BLF: u = x^T B x̃, B = B^T; û^opt = x̂^T B x̃̂ + tr(B P_{xx̃}); equation (34).
Inner product: u = ⟨x, x̃⟩ = x^T x̃; û^opt = x̂^T x̃̂ + tr(P_{xx̃}); equation (35).
Euclidean distance (squared): z = ‖x − x̃‖_2²; ẑ^opt = (x̂ − x̃̂)^T (x̂ − x̃̂) + tr(P^(η)), P^(η) = P_{xx} + P_{x̃x̃} − P_{xx̃} − P_{xx̃}^T; equation (37).

Figure 9: Distance deviation between normal and nominal (desired) trajectories.

6. Suboptimal Estimator for Composite Nonlinear Functions

6.1. Definition of Composite Function. Consider a composite function F depending on LF, QF, and BLF, such as

F(x) = F(g_1(x), g_2(x), ..., g_h(x)),   (41)

where the inside functions are defined as

g_i(x) = LF = c^T x,  x, c ∈ R^n;
g_i(x) = QF = x^T A x,  x ∈ R^n, A ∈ R^{n×n};
g_i(x) = BLF = x^T B x̃,  x, x̃ ∈ R^n, B ∈ R^{n×n}.   (42)

Example 6 (composite and inside functions in object tracking). Let x ∈ R^6 be an object state vector consisting of the position (p_x, p_y, p_z) and corresponding velocity (v_x, v_y, v_z) components in the Cartesian coordinates (x, y, z), i.e.,

x = [x_1 x_2 x_3 x_4 x_5 x_6]^T = [p_x p_y p_z v_x v_y v_z]^T.   (43)

In the spherical coordinates, we assume that a Doppler radar is located at the origin of the Cartesian coordinates, and it measures the following quantities obtained via nonlinear composite functions F_i(g_1(x), ..., g_h(x)) of the state components depending on LF, QF, and BLF:

d = √(p_x² + p_y² + p_z²) = F_1(x),  F_1(x) = √(g_1(x)),  g_1 = x_1² + x_2² + x_3²,
θ = tan^{−1}(p_y/p_x) = F_2(x),  F_2(x) = tan^{−1}(g_2(x)/g_3(x)),  g_2 = x_2,  g_3 = x_1,
φ = p_z / √(p_x² + p_y²) = F_3(x),  F_3(x) = g_4(x)/√(g_5(x)),  g_4 = x_3,  g_5 = x_1² + x_2²,
ḋ = (p_x v_x + p_y v_y + p_z v_z) / √(p_x² + p_y² + p_z²) = F_4(x),  F_4(x) = g_6(x)/√(g_1(x)),  g_6 = x_1 x_4 + x_2 x_5 + x_3 x_6,   (44)

where d is the range (distance), θ is the bearing angle, φ is the elevation angle, and ḋ is the range rate.

6.2. Suboptimal Estimator for Composite Functions. Given the Kalman estimate and covariance (x̂, P), we estimate a quantity obtained via the composite function F(x) = F(g_1(x), ..., g_h(x)). The idea of the algorithm is based on the optimal MMSE estimators for LF, QF, and BLF proposed in equations (10), (26), and (34), respectively. We have

For LF g_i(x) = c^T x:  ĝ_i(x̂, P) = c^T x̂;
For QF g_i(x) = x^T A x:  ĝ_i(x̂, P) = x̂^T A x̂ + tr(AP);
For BLF g_i(x) = x^T B x̃:  ĝ_i(x̂, P) = x̂^T B x̃̂ + tr(B P_{xx̃}).   (45)

Replacing the unknown inside functions g_i(x) with the corresponding optimal estimates (45), we obtain the novel suboptimal estimator for the composite function z = F(g_1, ..., g_h), i.e.,

ẑ^comp ≝ F(ĝ_1(x̂, P), ..., ĝ_h(x̂, P)).   (46)

Example 7 (estimation of cosine of angle). Let X = [x^T x̃^T]^T ∈ R^{2n} be a joint normal state vector, and let

X̂ = [x̂; x̃̂],  P = [ P_{xx}  P_{xx̃} ; P_{xx̃}^T  P_{x̃x̃} ]   (47)

be the Kalman estimate and block error covariance.

The cosine of the angle between two vectors x, x̃ ∈ R^n is equal to

cos(θ) = ⟨x, x̃⟩ / (‖x‖ · ‖x̃‖) = ⟨x, x̃⟩ / (√(x^T x) · √(x̃^T x̃)).   (48)

We observe that the ratio (48) represents a composite function z = F(x, x̃) depending on the three inside functions g_1 = x^T x, g_2 = x̃^T x̃, and g_0 = ⟨x, x̃⟩:

z = cos(θ) = g_0 / (√g_1 · √g_2).   (49)

The optimal MMSE estimators for the inside functions g_0, g_1, and g_2 are known. Using equation (45), we have

ĝ_0 = x̂^T x̃̂ + tr(P_{xx̃}),
ĝ_1 = x̂^T x̂ + tr(P_{xx}),
ĝ_2 = x̃̂^T x̃̂ + tr(P_{x̃x̃}).   (50)

Replacing the inside functions g_i with their estimates ĝ_i, we get the suboptimal estimator for the cosine of the angle:

ẑ^com = ĝ_0 / (√ĝ_1 √ĝ_2) = (x̂^T x̃̂ + tr(P_{xx̃})) / (√(x̂^T x̂ + tr(P_{xx})) · √(x̃̂^T x̃̂ + tr(P_{x̃x̃}))).   (51)

A numerical example illustrates the applicability of all the estimators proposed in the paper.

7. Numerical Example: Motion in a Plane

In this section, we estimate the range and the bearing angle in 2-D motion of an object. Because of the difficulty of obtaining analytical closed-form expressions for the optimal estimators of range and bearing, we apply the simple estimator (ẑ^sim) and the estimator based on the composite functions (ẑ^com). In addition, we are interested in the angle between the two state vectors x_{k−1} and x_k at time instants t_{k−1} and t_k, respectively, φ_k = ∠(x_{k−1}, x_k).

7.1. Suboptimal Estimators for Range-Angle Response. The example of Section 4.3.2 is considered again. Consider the 2-D models (22) and (23) describing the motion of the two random points A_1(x_{1,k}) and A_2(x_{2,k}). To calculate the range (d_k), the tangent of the bearing angle (θ_k), and the cosine of the angle (φ_k), we use the following formulas:

f(x_k) ≝ d_k = √(x_{1,k}² + x_{2,k}²),
h(x_k) ≝ tan(θ_k) = x_{2,k} / x_{1,k},
g(x_k) ≝ cos(φ_k) = ⟨x_{k−1}, x_k⟩ / (‖x_{k−1}‖ · ‖x_k‖) = (x_{1,k−1} x_{1,k} + x_{2,k−1} x_{2,k}) / (√(x_{1,k−1}² + x_{2,k−1}²) · √(x_{1,k}² + x_{2,k}²)).   (52)

The following estimators for the range-angle responses (52) are illustrated and compared:

(1) Simple estimator:
(a) f̂_k^sim = √(x̂_{1,k}² + x̂_{2,k}²),
(b) ĥ_k^sim = x̂_{2,k} / x̂_{1,k},
(c) ĝ_k^sim = (x̂_{1,k−1} x̂_{1,k} + x̂_{2,k−1} x̂_{2,k}) / (√(x̂_{1,k−1}² + x̂_{2,k−1}²) · √(x̂_{1,k}² + x̂_{2,k}²)).   (53)

(2) Estimator for composite functions:
(a) f̂_k^com = √((x̂_{1,k}² + P_{11,k}) + (x̂_{2,k}² + P_{22,k})),
(b) ĥ_k^com = ĥ_k^sim,
(c) ĝ_k^com = (x̂_{1,k−1} x̂_{1,k} + x̂_{2,k−1} x̂_{2,k} + P_{k−1,k}^(1) + P_{k−1,k}^(2)) / (√((x̂_{1,k−1}² + P_{11,k−1}) + (x̂_{2,k−1}² + P_{22,k−1})) · √((x̂_{1,k}² + P_{11,k}) + (x̂_{2,k}² + P_{22,k}))).   (54)

Figure 10: Comparison of composite and simple estimators for range. (a) True and estimated ranges. (b) RMSE for both estimators.

Note that the simple and composite estimates ĥ_k^sim and ĥ_k^com of the bearing angle coincide. In equation (54), P_{k−1,k}^(1) = E(e_{1,k−1} e_{1,k}) and P_{k−1,k}^(2) = E(e_{2,k−1} e_{2,k}) are the error cross-covariances satisfying the following recursion:

P_{k−1,k}^(i) = (1 − K_{k−1}^(i))(1 − K_k^(i)) P_{k−2,k−1}^(i),  k ≥ 2,
P_{0,1}^(i) = (1 − K_k^(i)) σ_i²,  σ_i² = Cov(x_{i,0}),  i = 1, 2.   (55)

In equations (53)–(55), the values x̂_{i,k}, K_k^(i), and P_{ii,k} = E(e_{i,k}²) represent the Kalman estimate, the KF gain, and the variance of the error e_{i,k} = x_{i,k} − x̂_{i,k}, respectively.

Figure 11: True tangent of the bearing angle and its simple (composite) estimate.

7.2. Simulation Results. The simple and composite estimators were run with the same random noises for further comparison. The Monte Carlo simulation with 1000 runs was applied in calculation of the RMSEs for the range (d_k), the bearing angle (θ_k), and the angle between state vectors (φ_k). Figures 10–12 show the range and angle estimates for the model parameters in equations (22) and (23), with m_1 = 0.1, m_2 = −0.1, P_0 = I_2, q_1 = 0.2, q_2 = 0.3, r_1 = 0.05, r_2 = 0.1, and r_{12} = 0. The following observations about the relative performance of the above estimators can be made.

(1) Figure 10(a) presents the range estimators (f̂_k^sim, f̂_k^com) as well as the true range d_k. Figure 10(b) shows the comparison of the RMSEs for the range estimators. Comparing R(f̂_k^com) and R(f̂_k^sim) on the interval k ∈ [1; 20], we obtain the values 0.0296 and 0.0541, respectively. From Figures 10(a) and 10(b), the range estimator f̂_k^com has better performance compared to the simple one f̂_k^sim. This is due to the fact that the MMSE estimate ẑ_k^opt = (x̂_{1,k}² + P_{11,k}) + (x̂_{2,k}² + P_{22,k}) of the squared norm z_k = ‖x_k‖² = x_{1,k}² + x_{2,k}² contains the error variances P_{11,k} and P_{22,k} as additional terms. If the variances tend to zero, P_{ii,k} ⟶ 0, then the range estimators converge, i.e., f̂_k^com ≈ f̂_k^sim.

(2) Figure 11 shows the true value of the tangent of the bearing angle, h(x_k) = x_{2,k}/x_{1,k}, and the corresponding simple (or composite) estimate, ĥ_k^sim = ĥ_k^com = x̂_{2,k}/x̂_{1,k}. We observe a negligible difference between the true tangent value h(x_k) and its simple estimate ĥ_k^sim. The average RMSE of the estimate over the time interval [1, 20] is R(ĥ^sim) = 0.0672. This demonstrates reasonable accuracy of the estimator x̂_{2,k}/x̂_{1,k} for the unknown ratio (tangent of the angle) x_{2,k}/x_{1,k}.

(3) Similar simulation procedures, as in (1) and (2), were used to check the performance of the estimators ĝ_k^sim and ĝ_k^com. The true cosine value g_k is shown in Figure 12 for comparison with the estimated values.

For a detailed consideration of the proposed estimators, we divide the whole time interval into two subintervals I_1 = [1; 6] and I_2 = [7; 20], respectively. From Figure 12, we can observe that on the first subinterval, the estimate ĝ_k^com is better than ĝ_k^sim, and on the second one, the difference between them is negligible. This is also confirmed by the values of R(ĝ_k) presented in Table 6.

Figure 12: True cosine of the angle and its simple and composite estimates.

Table 6: Comparisons of average RMSE for the simple and composite estimators.

I_1 = [1; 6]: R(ĝ_k^sim) = 0.6519, R(ĝ_k^com) = 0.4886.
I_2 = [7; 20]: R(ĝ_k^sim) = 0.1244, R(ĝ_k^com) = 0.1226.

Note that both estimators ĝ_k^sim and ĝ_k^com are based on the MMSE estimators for a squared norm and inner product. Therefore, the difference between them becomes small if the KF error variances P_{ii,k} are small (see (c) in equations (53) and (54)). In our case, the steady-state values of the variances are P_{11,k} = 0.0214 and P_{22,k} = 0.0101, k > 8.

8. Conclusion

In this paper, we propose a novel MMSE approach for the estimation of distance metrics under the Kalman filtering framework. The main contributions of the paper are listed in the following.

Firstly, an optimal two-stage MMSE estimator for an arbitrary nonlinear function of a state vector is proposed. The distance metric is an important practical case of such nonlinearities, a detailed study of which is given in the paper. Implementation of the MMSE estimator is reduced to calculation of the multivariate Gaussian integral. To avoid the difficulties associated with its calculation, the concept of a closed-form estimator depending on the Kalman filter statistics is introduced. We establish relations between the Euclidean metrics and the closed-form estimator, which lead to the simple compact formulas for the real-life distances between points presented in Table 2.

Secondly, an important class of bilinear and quadratic estimators is comprehensively studied. These estimators are applied to the square of a norm, the Euclidean distance, and the inner product. Table 5 summarizes the results. Moreover, an effective low-complexity suboptimal estimator for nonlinear composite functions is developed using the MMSE bilinear and quadratic estimators. As shown in Section 6.1, radar tracking range-angle responses are described by the composite functions.

Simulation and experimental results show that the proposed estimators perform significantly better than the existing suboptimal distance or angle estimators, such as the simple estimator defined in the paper. The low-complexity estimator developed in Section 6.1 is quite promising for radar data processing. Also, the numerical results confirm the fact that the more accurate the Kalman estimate of a state vector, the more accurately we can obtain the range and angle estimates.

Appendix

Proof of Lemma 1. The derivation of formula (8): direct calculation of the Gaussian integral gives

E(|x| | y^k) = (1/√(2πP)) ∫_{−∞}^{∞} |x| e^{−(x−x̂)²/(2P)} dx
            = (1/√(2πP)) ∫_0^{∞} x e^{−(x−x̂)²/(2P)} dx  [= I_1]
              − (1/√(2πP)) ∫_{−∞}^0 x e^{−(x−x̂)²/(2P)} dx  [= I_2].   (A.1)

To calculate (A.1), we start with the first integral I_1:

I_1 = (1/√(2πP)) ∫_0^{∞} x e^{−(x−x̂)²/(2P)} dx   (substitution t = (x − x̂)/√P)
    = (1/√(2π)) ∫_{−x̂/√P}^{∞} (t√P + x̂) e^{−t²/2} dt
    = (√P/√(2π)) ∫_{−x̂/√P}^{∞} t e^{−t²/2} dt + (x̂/√(2π)) ∫_{−x̂/√P}^{∞} e^{−t²/2} dt
    = (√P/√(2π)) e^{−x̂²/(2P)} + (x̂/√(2π)) ∫_{−x̂/√P}^{∞} e^{−t²/2} dt
    = (√P/√(2π)) e^{−x̂²/(2P)} + x̂ (1 − (1/√(2π)) ∫_{−∞}^{−x̂/√P} e^{−t²/2} dt)
    = (√P/√(2π)) e^{−x̂²/(2P)} + x̂ [1 − Φ(−x̂/√P)].   (A.2)

A similar technique can be used to find the second integral I_2:

I_2 = (1/√(2πP)) ∫_{−∞}^0 x e^{−(x−x̂)²/(2P)} dx   (substitution t = (x − x̂)/√P)
    = (1/√(2π)) ∫_{−∞}^{−x̂/√P} (t√P + x̂) e^{−t²/2} dt = ...
    = −(√P/√(2π)) e^{−x̂²/(2P)} + x̂ (1/√(2π)) ∫_{−∞}^{−x̂/√P} e^{−t²/2} dt
    = −(√P/√(2π)) e^{−x̂²/(2P)} + x̂ Φ(−x̂/√P),   (A.3)

and finally,

ẑ = E(|x| | y^k) = I_1 − I_2
  = (√P/√(2π)) e^{−x̂²/(2P)} + x̂ [1 − Φ(−x̂/√P)] − [−(√P/√(2π)) e^{−x̂²/(2P)} + x̂ Φ(−x̂/√P)]
  = √(2P/π) e^{−x̂²/(2P)} + x̂ [1 − 2Φ(−x̂/√P)].   (A.4)

This completes the derivation of equation (8).

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the National Research Foundation of Korea Grant funded by the Ministry of Science and ICT (NRF-2017R1A5A1015311) and the Gyeongsang National University Fund for Professors on Sabbatical Leave, 2018-2019.

References

[1] H. B. Mitchell, Image Fusion: Theories, Techniques and Applications, Springer Science & Business Media, Heidelberg, Germany, 2010.
[2] B. Ramu, “A comparison study on methods for measuring distance in images,” International Journal of Research in Computers, vol. 1, no. 2, pp. 34–38, 2012.
[3] L. Wang, Y. Zhang, and J. Feng, “On the Euclidean distance of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1334–1339, 2005.
[4] S. K. Pathi, A. Kiselev, A. Kristoffersson, D. Repsilber, and A. Loutfi, “A novel method for estimating distances from a robot to humans using egocentric RGB camera,” Sensors, vol. 19, no. 14, pp. 3142–3155, 2019.
[5] Y. S. Suh, N. H. Q. Phuong, and H. J. Kang, “Distance estimation using inertial sensor and vision,” International Journal of Control, Automation and Systems, vol. 11, no. 1, pp. 211–215, 2013.
[6] K. Murawski, “Method of measuring the distance to an object based on one shot obtained from a motionless camera with a fixed-focus lens,” ACTA Physica Polonica A, vol. 127, no. 6, pp. 1591–1596, 2015.
[7] F. Moreno Noguer, “3D human pose estimation from a single image via distance matrix regression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1561–1570, Honolulu, HI, USA, July 2017.
[8] L. Wang, Y. Zhang, and J. Feng, “On the Euclidean distance of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1334–1339, 2005.
[9] M. D. Malkauthekar, “Classification of facial images,” in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology, pp. 507–511, Nagercoil, India, March 2011.
[10] U. B. Gohatre and V. Patil, “Estimation of velocity and distance measurement for projectile trajectory prediction of 2D image and 3D graph in real time system,” in Proceedings of the International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 2543–2546, Chennai, India, August 2017.
[11] J. Fabrizio and S. Dubuisson, “Motion estimation using tangent distance,” in Proceedings of the IEEE International Conference on Image Processing, pp. 489–492, San Antonio, TX, USA, September 2007.
[12] M. Rezaei, M. Terauchi, and R. Klette, “Robust vehicle detection and distance estimation under challenging lighting conditions,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2723–2743, 2015.
[13] E. Kim and K. Kim, “Distance estimation with weighted least squares for mobile beacon-based localization in wireless sensor networks,” IEEE Signal Processing Letters, vol. 17, no. 6, pp. 559–562, 2010.
[14] M. Yamada, N. Kikuma, and K. Sakakibara, “Distance estimation between base station and user terminal using multi-carrier signal,” in Proceedings of the International Symposium on Antennas and Propagation, pp. 173-174, Busan, Republic of Korea, August 2018.
[15] P. H. Truong, S.-I. Kim, and G.-M. Jeong, “Real-time estimation of distance traveled by cart using smartphones,” IEEE Sensors Journal, vol. 16, no. 11, pp. 4149-4150, 2016.
[16] Y. Chen, D. Xu, H. Luo, S. Xu, and Y. Chen, “Maximum likelihood distance estimation algorithm for multi-carrier radar system,” The Journal of Engineering, vol. 2019, no. 21, pp. 7432–7435, 2019.
[17] B. Deng, X. Liu, and H. Wang, “Novel way of scalar miss distance measurement,” in Proceedings of the International Conference on Measuring Technology and Mechatronics Automation, pp. 789–791, Changsha City, China, March 2010.
[18] H. Radhika, P. T. V. Bhuvaneswari, and P. Senthil Kumar, “An efficient distance estimation algorithm using Kalman estimator for outdoor wireless sensor network,” in Proceedings of the International Conference on Signal and Image Processing, pp. 506–510, Changsha, China, December 2010.
[19] X. Wang, M. Fu, and H. Zhang, “Target tracking in wireless sensor networks based on the combination of KF and MLE using distance measurements,” IEEE Transactions on Mobile Computing, vol. 11, no. 4, pp. 567–576, 2012.

[20] L. Angrisani, A. Baccigalupi, and R. Schiano Lo Moriello, “Ultrasonic time-of-flight estimation through unscented Kalman filter,” IEEE Transactions on Instrumentation and Measurement, vol. 55, no. 4, pp. 1077–1084, 2006.
[21] S. Theodoridis, Machine Learning: A Bayesian and Optimization Perspective, Academic Press, New York, NY, USA, 2015.
[22] G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, John Wiley & Sons, New York, NY, USA, 2004.
[23] J. Havelock, B. J. Oommen, and O.-C. Granmo, “Novel distance estimation methods using “stochastic learning on the line” strategies,” IEEE Access, vol. 6, pp. 48438–48454, 2018.
[24] H. T. Duong and Y. S. Suh, “Walking distance estimation of a walker user using a wrist-mounted IMU,” in Proceedings of the 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 1061–1064, Kanazawa, Japan, September 2017.
[25] J. Zhang, “On the distribution of a quadratic form in normal variates,” REVSTAT Statistical Journal, vol. 16, no. 3, pp. 315–322, 2018.
[26] K.-H. Yuan and P. M. Bentler, “Two simple approximations to the distributions of quadratic forms,” British Journal of Mathematical and Statistical Psychology, vol. 63, no. 2, pp. 273–291, 2010.
[27] J. H. Clements, “Recursive maximum likelihood estimation of aircraft position using multiple range and bearing measurements,” in Proceedings of Position, Location and Navigation Symposium—PLANS ’96, pp. 199–204, Atlanta, GA, USA, April 1996.
[28] D. E. Manolakis and A. I. Dounis, “Advances in aircraft-height estimation using distance-measuring equipment,” IEE Proceedings—Radar, Sonar and Navigation, vol. 143, no. 1, pp. 47–52, 1996.
[29] D. Zachariah, I. Skog, M. Jansson, and P. Händel, “Bayesian estimation with distance bounds,” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 880–883, 2012.
[30] D. Simon, Optimal State Estimation, John Wiley & Sons, New York, NY, USA, 2006.
[31] Y. Bar-Shalom, X. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, New York, NY, USA, 2001.
