
(Preprint) AAS 23-210

INERTIA-FREE POSE AND ANGULAR VELOCITY ESTIMATION USING MONOCULAR VISION

Siddarth Kaki* and Maruthi R. Akella†

The problem of estimating relative pose and angular velocity for uncooperative space objects has garnered great interest, especially within applications such as on-orbit assembly and satellite servicing. This paper presents an upgraded open-source pose estimation pipeline that consists of convolutional neural networks for keypoint estimation, a nonlinear least-squares solver for static pose estimation, and a multiplicative extended Kalman filter for full pose filtering. In particular, a new module is introduced to perform moment-of-inertia-model-free angular velocity estimation. Simulated time-series results are presented to evaluate the performance of the upgraded pipeline on flight-like hardware in real-time.

INTRODUCTION
This paper describes an upgraded version of the open-source pose filtering software package, Pose Terrier‡, motivated by the constraints of the Seeker program. Seeker is a NASA Johnson Space Center (JSC) led technology demonstration mission that aims to employ a cubesat to autonomously perform routine health and system-state inspections of the International Space Station (ISS).1, 2 JSC has partnered with the authors of this work to develop vision-based navigation software for Seeker. The first iteration of the program, the Seeker-1 cubesat (Figure 1), flew its mission in September 2019,3 using a computer vision and machine learning based relative bearing estimation software package.4 Seeker-1 deployed from and performed its navigation mission about the Cygnus Enhanced cargo resupply ship (Figure 2) in a non-cooperative mode. Post Seeker-1, the authors of this work have continued development of the vision-based navigation system, to expand capability from providing just relative bearing to a target to also providing filtered, full relative pose estimates.5, 6
The full pipeline consisted of three main components: 1) a set of convolutional neural networks (CNNs) to perform object detection and keypoint regression, 2) a nonlinear least-squares (NLS) solver for static pose estimation, and 3) a multiplicative extended Kalman filter (MEKF) for full pose tracking. The CNN-based object detection and keypoint regression problem is addressed in Reference 7. Importantly, while a 3D model of Cygnus was required, Pose Terrier was designed with the assumption that the mass and moment-of-inertia properties of Cygnus (or any other target spacecraft) would not be available. Therefore, the MEKF propagation step is only kinematic, without considering the true dynamics. While we have demonstrated accurate, real-time performance under such restrictive assumptions, there is certainly room for improvement.
* Ph.D. Student, Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, Texas 78712, AAS Student Member, Email: [email protected]
† Professor and Cockrell Family Chair in Engineering #19, Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, Texas 78712, AAS Fellow, Email: [email protected]
‡ https://github.com/siddarthkaki/pose_terrier

Figure 1: Seeker-1 Vehicle with Kenobi Relay (Credit: NASA)

In particular, there exists a class of batch angular velocity estimators, such as the recent Quaternion Regression Algorithm (QuateRA),8 which produce angular velocity estimates based on a series of quaternion-parameterized attitude measurements (which can be supplied by a static pose estimation solver over time). Remarkably, QuateRA-like estimators are able to provide angular velocity estimates, even when the true angular velocity is time-varying, without needing the moment-of-inertia properties of the target. A companion paper to this work at this conference presents new insights and batch covariance bounds for QuateRA-like estimators.9 These new developments are incorporated into the Pose Terrier pipeline, working in conjunction with the MEKF, to both improve the filtered attitude estimates and provide more accurate angular velocity estimates.
Much recent development of monocular camera-based pose estimation systems has been motivated by the SPEED10 and SPEED+11 competitions. However, these works only provide static pose estimates without any filtering, such as those in References 7, 12–14. A few works additionally perform dynamic filtering, but either require exact moment-of-inertia knowledge15, 16 or simply assume constant angular velocity.17 This present work bridges the gap by improving the pose estimation accuracy when the target moment-of-inertia is unknown.

This paper first presents the overall end-to-end pipeline architecture and provides summaries of each of the individual components. Next, the details of the MEKF implementation are summarized. Simulated results are then presented, in which the improvements to the Pose Terrier package specifically are first demonstrated. A full end-to-end pipeline test on flight-like hardware in real-time is also conducted. Finally, concluding remarks are made, followed by funding acknowledgments.

FULL POSE ESTIMATOR ARCHITECTURE

General Software Architecture

The high-level representation of the adopted software architecture is depicted in Figure 3. A stream of images taken by a monocular camera is fed into a series of two CNNs. The first CNN is an object detection network that localizes the target spacecraft within the image frame. The selected region-of-interest (RoI) containing the target spacecraft is cropped and passed on to a keypoint regression network trained to output the relative azimuth (az.) and elevation (el.) angles to a predetermined set of feature points on the target spacecraft body (e.g., Cygnus).

Figure 2: View of Cygnus Enhanced Vehicle from Seeker-1 (Credit: NASA)

These relative bearing angles (az., el.) are then fed to an NLS-based static pose estimation solver. The resulting pose estimates are treated as measurements for the MEKF loop, which filters the relative pose estimates. In addition, the NLS solver supplies quaternion-parameterized attitude measurements to QuateRA, which generates batch angular velocity and associated covariance estimates that are fed into the MEKF as an additional measurement source. It is important to note that the same NLS pose solution at a single time instance is not fed into both the MEKF and QuateRA, in order to avoid correlating the measurements. Instead, Pose Terrier alternates between sending the NLS static pose estimate to the MEKF directly and sending it to QuateRA, at a configurable rate; a minimal sketch of this routing is given below. The filtered MEKF pose estimates are then outputted and, for example, written to disk. The “Neural Networks” and “Pose Terrier” components run as two separate computer processes and communicate via serialized data streams over named pipes. The serialization is accomplished via Protocol Buffers (https://developers.google.com/protocol-buffers/). The software also supports basic memory management by logging filtered pose solutions to disk and periodically clearing program memory.
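The alternating routing described above can be summarized with a short sketch. The routing rate, module interfaces, and function names below are hypothetical and serve only to illustrate the interleaving, not the actual Pose Terrier implementation.

```python
# A minimal sketch (hypothetical names, not the actual Pose Terrier source) of
# alternating NLS static pose solutions between the MEKF and QuateRA, so the
# same solution never feeds both modules and correlates their measurements.
QUATERA_EVERY_N = 3  # assumed configurable routing rate

def route_nls_solutions(nls_solutions, mekf, quatera):
    """nls_solutions: iterable of (time, pose) static pose solutions."""
    for k, (t, pose) in enumerate(nls_solutions):
        if k % QUATERA_EVERY_N == 0:
            quatera.push_attitude(t, pose.quaternion)  # attitude sample for the batch
        else:
            mekf.update_pose(t, pose)                  # direct pose measurement update
```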

Overall Neural Network Architecture

The presented neural network architecture is composed of a two-step process. An object detection network first determines the 2D bounding box of the target spacecraft and then feeds a cropped RoI to a separate keypoint regression network. The keypoint regression network regresses the 2D locations of predetermined 3D surface keypoints on the spacecraft model, from which the relative azimuth and elevation angles per keypoint can be obtained. This overall architecture was chosen because prior works,10, 12, 13, 18 as well as our own in-house experiments, have found that such keypoint-based architectures tend to outperform other techniques, such as direct pose regression or classification, on pose estimation tasks.
Both neural networks are trained entirely on synthetically generated training data.

Figure 3: Software Pipeline Architecture

Training images and the associated labels are generated using Blender.19 The use of synthetic imagery allows the system to be applied to any known spacecraft, provided that a 3D mesh model of the spacecraft is available. In-depth details of the neural network design and implementation, and of the synthetic data generation, can be found in Reference 7.

NLS Solver
The NLS problem is summarized as follows. The origin of the chaser (e.g., Seeker) frame is denoted $O_c$, where the chaser z-axis is along the camera boresight (see Figure ??). The origin of the target (e.g., Cygnus) frame is denoted $O_t$. The true relative translation at any time from the chaser to the target, expressed in the chaser frame, is denoted by the position vector $r_{rel}^c$, and the true relative attitude at any time from the chaser to the target is denoted by the rotation matrix $T_c^t = (T_t^c)^T$. The relative position estimate is denoted $\hat{r}_{rel}^c$, and the relative attitude estimate is denoted $\hat{T}_c^t$. The fixed position of the camera in the chaser frame is denoted by $r_{cam}^c$.

The fixed position of a 3D feature point $i$ in the target body frame is denoted $r_i^t$. Therefore, the true position of a 3D feature point $i$, with respect to the camera, expressed in the chaser frame, is given by:

$$r_i^c = r_{rel}^c - r_{cam}^c + T_t^c r_i^t \tag{1}$$

$$r_i^c = \begin{bmatrix} x_i^c & y_i^c & z_i^c \end{bmatrix}^T \tag{2}$$

The estimated position of the same 3D feature point $i$, with respect to the camera, expressed in the chaser frame, is given by:

$$\hat{r}_i^c = \hat{r}_{rel}^c - r_{cam}^c + \hat{T}_t^c r_i^t \tag{3}$$

$$\hat{r}_i^c = \begin{bmatrix} \hat{x}_i^c & \hat{y}_i^c & \hat{z}_i^c \end{bmatrix}^T \tag{4}$$

Here, $\hat{r}_{rel}^c$ and $\hat{T}_t^c$ are to be found, while $r_{cam}^c$ and $r_i^t$ are fixed and known.

The NLS cost function is defined as:

$$\min_{r_{rel}^c,\, T_c^t} J = \min_{r_{rel}^c,\, T_c^t} \left( \frac{1}{\sigma_{bearing}^2} \sum_i \left[ (\phi_i - \hat{\phi}_i)^2 + (\varepsilon_i - \hat{\varepsilon}_i)^2 \right] \right) \tag{5}$$

where:

$$\phi_i = \arctan\left(\frac{x_i^c}{z_i^c}\right); \quad \hat{\phi}_i = \arctan\left(\frac{\hat{x}_i^c}{\hat{z}_i^c}\right)$$

$$\varepsilon_i = \arctan\left(\frac{y_i^c}{z_i^c}\right); \quad \hat{\varepsilon}_i = \arctan\left(\frac{\hat{y}_i^c}{\hat{z}_i^c}\right)$$

and $\sigma_{bearing}$ represents the standard deviation of the relative bearing angles (az., el.) generated by the CNN. It is important to note that our adopted CNN model does not produce accompanying uncertainty estimates for the keypoint predictions. Instead, $\sigma_{bearing}$ is treated as a tuning parameter that is experimentally selected by computing the average standard deviation of the spread of keypoint estimates in a Monte Carlo setting. The measured relative bearing angles $\phi_i, \varepsilon_i$ are modeled as:

$$z = h(x) + w \tag{6}$$

where the nonlinear measurement model is:

$$h(x) = \begin{bmatrix} \phi_1 & \varepsilon_1 & \dots & \phi_n & \varepsilon_n \end{bmatrix}^T \tag{7}$$

where $n$ is the number of feature points, and the additive noise $w$ is modeled as a white Gaussian sequence with covariance matrix $R_{bearing}$.
Full details of the NLS implementation can be found in Reference 6.
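As an illustration of the static pose step, a minimal sketch using SciPy's Levenberg-Marquardt solver is given below. The rotation-vector pose parameterization, the function names, and the use of arctan2 for the bearing angles are assumptions of this sketch, not details of the actual NLS implementation of Reference 6.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def predicted_bearings(pose, feature_pts_t, r_cam_c):
    """Predict stacked (az., el.) per feature point for pose = [r_rel_c (3), rotation vector (3)]."""
    r_rel_c, rotvec = pose[:3], pose[3:]
    T_t_c = Rotation.from_rotvec(rotvec).as_matrix()      # target-to-chaser rotation
    r_i_c = r_rel_c - r_cam_c + feature_pts_t @ T_t_c.T   # Eq. (3) for all feature points
    az = np.arctan2(r_i_c[:, 0], r_i_c[:, 2])             # phi_i = atan(x/z)
    el = np.arctan2(r_i_c[:, 1], r_i_c[:, 2])             # eps_i = atan(y/z)
    return np.column_stack([az, el]).ravel()              # matches the stacking of h(x) in Eq. (7)

def solve_static_pose(bearings_meas, feature_pts_t, r_cam_c, sigma_bearing, pose0):
    """Minimize the weighted bearing residuals of Eq. (5) over the 6-DoF pose."""
    def residuals(pose):
        return (bearings_meas - predicted_bearings(pose, feature_pts_t, r_cam_c)) / sigma_bearing
    sol = least_squares(residuals, pose0, method="lm")    # Levenberg-Marquardt
    return sol.x                                          # [r_rel_c, rotation vector]
```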

QuateRA
Consider the case where noise-free quaternion measurements $\bar{q}_k$ are corrupted by time-uncorrelated measurement noise (in multiplicative form) at each time $t = t_k$, represented by the noise quaternion $\eta_k$:

$$q_k = \eta_k \otimes \bar{q}_k \tag{8}$$

wherein $\otimes$ denotes the quaternion product. These measurements (Eq. 8) are stacked from time $t_0$ to $t_{n-1}$ into a measurement matrix $Z_n$, where $n$ is the total number of measurements:

$$Z_n = \begin{bmatrix} \eta_0 \otimes \bar{q}_0 & \eta_1 \otimes \bar{q}_1 & \dots & \eta_{n-1} \otimes \bar{q}_{n-1} \end{bmatrix}_{4\times n} = \begin{bmatrix} q_0 & q_1 & \dots & q_{n-1} \end{bmatrix}_{4\times n} \tag{9}$$

Consider the noisy spatiotemporal matrix $Q_n := Z_n Z_n^T$, which is a real, symmetric, positive semi-definite $4 \times 4$ matrix. The eigenvalues and the eigenvectors of $Q_n$ therefore coincide with the squared singular values and the left singular vectors of $Z_n$, respectively. The first two left singular vectors of $Z_n$ (labeled respectively as $u_1$ and $u_2$) span the least-squares-fit plane-of-rotation $P(u_1, u_2)$. Therefore, the best least-squares estimate of the spin-axis direction $\hat{a}$ is given by:8

$$\begin{bmatrix} 0 \\ \hat{a} \end{bmatrix}_{4\times 1} = u_2 \otimes u_1^{-1} \tag{10}$$

It is important to note that $\hat{a}$ is an ambiguous estimate of $a$, up to a sign error. This is not an issue in practice because estimating the angular velocity magnitude $\hat{\Omega}$ will absorb the sign; the total angular velocity vector estimate $\hat{\omega} = \hat{\Omega}\hat{a}$ will be consistent with $\omega = \Omega a$.
It remains to estimate the angular velocity magnitude $\hat{\Omega}$. Each quaternion $q_k$ in the batch $Z_n$ is first projected onto the optimal-fit plane-of-rotation $P(u_1, u_2)$:8

$$\tilde{q}_k = \frac{\alpha_1 u_1 + \alpha_2 u_2}{\sqrt{\alpha_1^2 + \alpha_2^2}} \tag{11}$$

wherein:

$$\alpha_1 = q_k^T u_1, \qquad \alpha_2 = q_k^T u_2$$

Next, we define the in-plane angle spanned by a measurement $q_k$:

$$\Phi_k = 2\,\mathrm{atan2}\!\left( \tilde{q}_k^T v_0^{\perp},\; \tilde{q}_k^T v_0 \right) \tag{12}$$

wherein:

$$v_0 = \tilde{q}_0, \qquad v_0^{\perp} = [\hat{a}\otimes]\, \tilde{q}_0$$

$$[\hat{a}\otimes] = \begin{bmatrix} 0 & -\hat{a}^T \\ \hat{a} & -[\hat{a}\times] \end{bmatrix}_{4\times 4}$$

$$[\hat{a}\times] = \begin{bmatrix} 0 & -\hat{a}(3) & \hat{a}(2) \\ \hat{a}(3) & 0 & -\hat{a}(1) \\ -\hat{a}(2) & \hat{a}(1) & 0 \end{bmatrix}_{3\times 3}$$

These angles are stacked into an unwrapped vector (that is, the angles must be increasing; the angle wrap-around must be accounted for):

$$\Phi_n = \begin{bmatrix} \Phi_0 & \Phi_1 & \dots & \Phi_{n-1} \end{bmatrix}^T \tag{13}$$

We then assume the system model:

$$\bar{\Phi}_k = \bar{\Phi}_0 + \Omega (t_k - t_0) = \begin{bmatrix} 1 & (t_k - t_0) \end{bmatrix} \begin{bmatrix} \bar{\Phi}_0 \\ \Omega \end{bmatrix} \tag{14}$$

and the in-plane angle “measurement” model:

$$\Phi_k = \bar{\Phi}_k + \nu_k \tag{15}$$

wherein $\nu_k \sim \mathcal{N}(0, \sigma_\nu^2)$ and $\bar{\Phi}_k$ represents the noise-free value corresponding to $\Phi_k$. Finally, the angular velocity magnitude $\hat{\Omega}$ can be estimated via least-squares with:

$$\hat{X} = \begin{bmatrix} \hat{\Phi}_0 \\ \hat{\Omega} \end{bmatrix}_{2\times 1} = \left( H^T H \right)^{-1} H^T \Phi_n \tag{16}$$

wherein:

$$H = \begin{bmatrix} 1 & 1 & \dots & 1 \\ t_0 & t_1 & \dots & t_{n-1} \end{bmatrix}^T$$

The covariance estimate of $\hat{X}$, assuming uncorrelated measurement noise between any two measurements ($E[\nu_i \nu_j] = 0,\ i \neq j$), is given by:

$$P_{\hat{X}} = \sigma_\nu^2 \left( H^T H \right)^{-1} \tag{17}$$

The variance estimate $\sigma_{\hat{\Omega}}^2$ of the angular velocity magnitude estimate $\hat{\Omega}$ is then the bottom-right value of the $2 \times 2$ matrix $P_{\hat{X}}$.
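A minimal NumPy sketch of the batch estimate of Eqs. (8)-(16) is given below. The scalar-first, Hamilton-product quaternion convention and the use of np.unwrap as a stand-in for the angle unwrapping are assumptions of this sketch; the sign of the resulting angular velocity is defined only up to the spin-axis sign ambiguity discussed above, and the code is illustrative rather than the QuateRA reference implementation.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p (x) q of scalar-first quaternions (assumed convention)."""
    pw, pv = p[0], np.asarray(p[1:])
    qw, qv = q[0], np.asarray(q[1:])
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def quat_inv(q):
    """Conjugate, which equals the inverse for a unit quaternion."""
    return np.concatenate(([q[0]], -np.asarray(q[1:])))

def quatera(quats, times):
    """quats: (n, 4) scalar-first attitude measurements; times: (n,) time stamps."""
    times = np.asarray(times, dtype=float)
    Z = np.asarray(quats, dtype=float).T               # 4 x n measurement matrix Z_n (Eq. 9)
    U, _, _ = np.linalg.svd(Z)                         # left singular vectors of Z_n
    u1, u2 = U[:, 0], U[:, 1]                          # span the plane-of-rotation
    a_hat = quat_mul(u2, quat_inv(u1))[1:]             # Eq. (10): spin-axis direction
    a_hat = a_hat / np.linalg.norm(a_hat)

    # Project each measurement onto the plane-of-rotation (Eq. 11)
    alpha1, alpha2 = Z.T @ u1, Z.T @ u2
    q_tilde = alpha1[:, None] * u1 + alpha2[:, None] * u2
    q_tilde /= np.linalg.norm(q_tilde, axis=1, keepdims=True)

    # In-plane angles (Eq. 12); the product convention only flips the sign of
    # Omega, which the spin-axis sign ambiguity absorbs
    v0 = q_tilde[0]
    v0_perp = quat_mul(np.concatenate(([0.0], a_hat)), v0)
    Phi = 2.0 * np.arctan2(q_tilde @ v0_perp, q_tilde @ v0)
    Phi = np.unwrap(Phi)                               # stand-in for the unwrapping of Eq. (13)

    # Least-squares fit Phi_k ~ Phi_0 + Omega * t_k (Eqs. 14-16); for Eq. (17),
    # sigma_nu**2 * inv(H.T @ H) would give the covariance of [Phi_0, Omega].
    H = np.column_stack([np.ones_like(times), times])
    x_hat, *_ = np.linalg.lstsq(H, Phi, rcond=None)
    Omega_hat = x_hat[1]
    return Omega_hat * a_hat, a_hat, Omega_hat
```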


It remains to compute the covariance estimate of the spin-axis direction. As derived in the companion paper to this work,9 the mean and standard deviation of the $i$th element of the $j$th left singular vector $u_{j,i}$ of $Z_n$, where $i, j \in \{1, 2, 3, 4\}$, are given by:

$$\langle u_{j,i} \rangle = \left( 1 - \frac{\varepsilon^2}{2} \sum_{m=1;\, m\neq j}^{4} \frac{\bar{\lambda}_m + \bar{\lambda}_j}{(\bar{\lambda}_m - \bar{\lambda}_j)^2} \right) \bar{u}_{j,i} + O\!\left(\varepsilon^4\right) \tag{18}$$

$$\sigma_{j,i} = \varepsilon \sqrt{ \sum_{m=1;\, m\neq j}^{4} \frac{\bar{\lambda}_m + \bar{\lambda}_j}{(\bar{\lambda}_m - \bar{\lambda}_j)^2} \left(\bar{u}_{m,i}\right)^2 } + O\!\left(\varepsilon^2\right) \tag{19}$$

wherein $\bar{\lambda}_j$ is the $j$th eigenvalue of the noise-free spatiotemporal matrix $\bar{Q}_n = \bar{Z}_n \bar{Z}_n^T$, $\bar{u}_{j,i}$ is the $i$th element of the $j$th left singular vector of the noise-free measurement matrix $\bar{Z}_n$, and $\varepsilon$ is a modeled noise term as defined in Reference 9. The analytical expressions for the aforementioned eigenvalues and left singular vectors for the noise-free case in Reference 9 are summarized here:

$$\bar{\lambda}_1, \bar{\lambda}_2 = \frac{1}{2}\left( n \pm \frac{\sin\!\left(n\frac{\Delta\theta}{2}\right)}{\sin\!\left(\frac{\Delta\theta}{2}\right)} \right), \qquad \bar{\lambda}_3 \equiv \bar{\lambda}_4 \equiv 0 \tag{20}$$

$$\bar{u}_1 = \begin{bmatrix} \cos\!\left(\frac{n+1}{2}\frac{\Delta\theta}{2}\right) \\ \sin\!\left(\frac{n+1}{2}\frac{\Delta\theta}{2}\right) a \end{bmatrix}_{4\times 1} \quad \bar{u}_2 = \begin{bmatrix} -\sin\!\left(\frac{n+1}{2}\frac{\Delta\theta}{2}\right) \\ \cos\!\left(\frac{n+1}{2}\frac{\Delta\theta}{2}\right) a \end{bmatrix}_{4\times 1} \quad \bar{u}_3 = \frac{1}{\sqrt{a_1^2 + a_2^2}} \begin{bmatrix} 0 \\ -a_2 \\ a_1 \\ 0 \end{bmatrix}_{4\times 1} \quad \bar{u}_4 = \frac{1}{\sqrt{a_1^2 + a_3^2}} \begin{bmatrix} 0 \\ -a_3 \\ 0 \\ a_1 \end{bmatrix}_{4\times 1} \tag{21}$$

wherein the unit 3-vector $a = [a_1\ a_2\ a_3]^T$ is the true underlying spin-axis direction. Reference 9 then derives the covariance matrix of the spin-axis direction estimate as:

$$P_{\hat{a}} = \mathrm{diag}\!\left( \begin{bmatrix} (\sigma_{1,2})^2 & (\sigma_{1,3})^2 & (\sigma_{1,4})^2 \end{bmatrix}^T \Big/ \left( u_{1,2}^2 + u_{1,3}^2 + u_{1,4}^2 \right) \right) \tag{22}$$

based on $u_1$, or:

$$P_{\hat{a}} = \mathrm{diag}\!\left( \begin{bmatrix} (\sigma_{2,2})^2 & (\sigma_{2,3})^2 & (\sigma_{2,4})^2 \end{bmatrix}^T \Big/ \left( u_{2,2}^2 + u_{2,3}^2 + u_{2,4}^2 \right) \right) \tag{23}$$

based on $u_2$. Note that in either diagonal construction of $P_{\hat{a}}$ above, each element is assumed to be independent.

Finally, Reference 9 provides an estimate of the overall angular velocity vector covariance matrix. In particular, the variance of each element $i \in \{1, 2, 3\}$ of the angular velocity vector is given by:

$$\sigma_{\hat{\omega},i}^2 = \left( \hat{\Omega}^2 + \sigma_{\hat{\Omega}}^2 \right) \left( \hat{a}(i)^2 + P_{\hat{a}}(i,i) \right) - \hat{\Omega}^2 \hat{a}(i)^2 \tag{24}$$

Therefore, the overall covariance matrix for the angular velocity vector can be constructed as:

$$P_{\hat{\omega}} = \mathrm{diag}\!\left( \begin{bmatrix} \sigma_{\hat{\omega},1}^2 & \sigma_{\hat{\omega},2}^2 & \sigma_{\hat{\omega},3}^2 \end{bmatrix}^T \right) \tag{25}$$
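As a simple illustration, the assembly of Eqs. (24)-(25) from the quantities estimated above can be sketched as follows; the function name and the assumption that the spin-axis covariance is available as a 3 x 3 diagonal matrix are for illustration only.

```python
import numpy as np

def angular_velocity_covariance(a_hat, P_a, Omega_hat, sigma_Omega_sq):
    """a_hat: spin-axis estimate (3,); P_a: 3x3 (diagonal) spin-axis covariance;
    sigma_Omega_sq: variance of Omega_hat. Returns the diagonal P_omega of Eq. (25)."""
    var = ((Omega_hat**2 + sigma_Omega_sq) * (a_hat**2 + np.diag(P_a))
           - Omega_hat**2 * a_hat**2)                  # Eq. (24), element-wise for i = 1..3
    return np.diag(var)
```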

FULL POSE FILTERING


A full pose filtering loop is implemented over the static pose estimates generated by the NLS solver and the angular velocity estimates generated by QuateRA. The filtering achieves the following objectives: it provides (1) smoothed estimates at a higher frequency than the frequency at which measurements are made, (2) a kinematic model for the underlying dynamics, and (3) a statistical method to suppress the effect of outliers on the relative pose estimates. The filtering is accomplished by a Multiplicative Extended Kalman Filter (MEKF) for both attitude and position estimation,8, 20–23 which is summarized below.

Filter States
The filter states, which describe the target with respect to the chaser, are the following:

$$x = \begin{bmatrix} \delta g^T & \omega^T & \alpha^T & x_p^T \end{bmatrix}^T \tag{26}$$

wherein:

• Quaternion for the global attitude representation: $q = \begin{bmatrix} q_w & q_x & q_y & q_z \end{bmatrix}^T = \begin{bmatrix} q_w & q_{vec}^T \end{bmatrix}^T = q\!\left(T_c^t\right)$

• Gibbs vector for the error attitude representation: $\delta g = \delta q_{vec} / \delta q_w$

• Angular velocity vector: $\omega = \begin{bmatrix} \omega_x & \omega_y & \omega_z \end{bmatrix}^T$

• Angular acceleration vector: $\alpha = \begin{bmatrix} \alpha_x & \alpha_y & \alpha_z \end{bmatrix}^T$

• Position states: $x_p = \begin{bmatrix} x & \dot{x} & \ddot{x} & y & \dot{y} & \ddot{y} & z & \dot{z} & \ddot{z} \end{bmatrix}^T$

and:

$$r_{rel}^c = \begin{bmatrix} x & y & z \end{bmatrix}^T, \qquad \dot{r}_{rel}^c = \begin{bmatrix} \dot{x} & \dot{y} & \dot{z} \end{bmatrix}^T, \qquad \ddot{r}_{rel}^c = \begin{bmatrix} \ddot{x} & \ddot{y} & \ddot{z} \end{bmatrix}^T$$
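For reference in the following subsections, one hypothetical index layout of the resulting 18-element state (3 + 3 + 3 + 9) is sketched below; the names and ordering are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical index layout for the 18-element state of Eq. (26); the 18 x 18
# covariance and the measurement matrices in the following subsections are
# assumed to follow this ordering.
DG    = slice(0, 3)    # Gibbs-vector attitude error, delta-g
OMEGA = slice(3, 6)    # angular velocity omega
ALPHA = slice(6, 9)    # angular acceleration alpha
XP    = slice(9, 18)   # position states [x, xd, xdd, y, yd, ydd, z, zd, zdd]

P = np.eye(18)         # example covariance consistent with this layout
```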

Filter Dynamics
Because we assume we have no true dynamics models for either the target attitude or translation, we employ kinematics models for both. The kinematics model employed for the attitude states, with time-step $\Delta t$, is given by:

$$q_{k+1} = A_k q_k \tag{27}$$

$$\omega_{k+1} = \omega_k + \alpha_k \Delta t \tag{28}$$

$$\alpha_{k+1} = e^{-\Delta t/\tau} \alpha_k + v_k^\alpha \tag{29}$$

wherein the angular acceleration dynamics (Eq. 29) is modeled as a discrete-time 1st-order Gauss-Markov process with time constant $\tau$, i.i.d. noise $v_k^\alpha \sim \mathcal{N}(0, Q_\alpha)$, and:

$$A_k = \cos\!\left(\tfrac{1}{2}\Omega\,\Delta t\right) I_{4\times 4} + \sin\!\left(\tfrac{1}{2}\Omega\,\Delta t\right) [\vec{\omega}\otimes] \tag{30}$$

$$[\vec{\omega}\otimes] = \begin{bmatrix} 0 & -\vec{\omega}^T \\ \vec{\omega} & -[\vec{\omega}\times] \end{bmatrix}_{4\times 4} \tag{31}$$

$$\Omega = \|\hat{\omega}\|_2 \tag{32}$$

$$\vec{\omega} = \hat{\omega} / \Omega \tag{33}$$

wherein the skew-symmetric cross-product matrix for an arbitrary vector $v = \begin{bmatrix} v_x & v_y & v_z \end{bmatrix}^T$ is defined by:

$$[v\times] := \begin{bmatrix} 0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0 \end{bmatrix} \tag{34}$$

and $\hat{\omega}$ is the current estimate of the angular velocity vector. The 1st-order Gauss-Markov process assumption for the angular acceleration dynamics implies that, in steady-state, $\alpha$ is zero-mean and has bounded covariance. If instead the steady-state bounded covariance assumption for $\alpha$ is unrealistic, then a random walk process could be assumed for the dynamics of $\alpha$.
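A minimal sketch of this attitude-state propagation step (Eqs. 27-34) is given below; it assumes scalar-first quaternions and a caller-supplied Gauss-Markov noise sample, and the function and variable names are illustrative.

```python
import numpy as np

def skew(v):
    """Eq. (34): skew-symmetric cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega_otimes(w_unit):
    """Eq. (31), built from the unit vector of Eq. (33)."""
    M = np.zeros((4, 4))
    M[0, 1:] = -w_unit
    M[1:, 0] = w_unit
    M[1:, 1:] = -skew(w_unit)
    return M

def propagate_attitude(q, w, alpha, dt, tau, v_alpha=np.zeros(3)):
    """One step of Eqs. (27)-(29); q is a scalar-first unit quaternion."""
    Omega = np.linalg.norm(w)                            # Eq. (32)
    w_unit = w / Omega if Omega > 0 else np.zeros(3)     # Eq. (33)
    A = (np.cos(0.5 * Omega * dt) * np.eye(4)
         + np.sin(0.5 * Omega * dt) * omega_otimes(w_unit))  # Eq. (30)
    q_next = A @ q                                       # Eq. (27)
    q_next /= np.linalg.norm(q_next)                     # re-normalize the quaternion
    w_next = w + alpha * dt                              # Eq. (28)
    alpha_next = np.exp(-dt / tau) * alpha + v_alpha     # Eq. (29), Gauss-Markov step
    return q_next, w_next, alpha_next
```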
A discrete-time Wiener-process acceleration (DWPA) kinematics model is assumed for the position states $x_p$ with time-step $\Delta t$, which is given by:

$$x_{p,k+1} = F_p x_{p,k} + v_{p,k} \tag{35}$$

$$F_p = \begin{bmatrix} F' & 0_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & F' & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & F' \end{bmatrix}_{9\times 9}; \qquad F' = \begin{bmatrix} 1 & \Delta t & \frac{1}{2}(\Delta t)^2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix} \tag{36}$$
where the i.i.d. noise $v_{p,k} \sim \mathcal{N}(0, Q_p)$, and the process noise covariance matrix $Q_p$ is given by:

$$Q_p = \begin{bmatrix} Q' & 0_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & Q' & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & Q' \end{bmatrix}_{9\times 9}; \qquad Q' = \sigma_v^2 \begin{bmatrix} \frac{1}{4}(\Delta t)^4 & \frac{1}{2}(\Delta t)^3 & \frac{1}{2}(\Delta t)^2 \\ \frac{1}{2}(\Delta t)^3 & (\Delta t)^2 & \Delta t \\ \frac{1}{2}(\Delta t)^2 & \Delta t & 1 \end{bmatrix} \tag{37}$$

The DWPA model is of third order (as opposed to the white noise acceleration model, which is of second order). That is, the DWPA model implies that the process noise $v_{p,k}$ is the acceleration increment at time-step $k$, and this increment is assumed to be a zero-mean white sequence. It is also possible to model the DWPA such that the third-order derivative (jerk) is a zero-mean white sequence, though the acceleration increment formulation is more convenient.20 We choose the third-order model over the second-order model for added flexibility.
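The per-axis DWPA blocks of Eqs. (36)-(37) can be sketched as follows; the block-diagonal stacking follows the position-state ordering given earlier, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import block_diag

def dwpa_blocks(dt, sigma_v):
    """Return the 9x9 F_p and Q_p of Eqs. (36)-(37), stacked per axis (x, y, z)."""
    Fp_block = np.array([[1.0, dt, 0.5 * dt**2],
                         [0.0, 1.0, dt],
                         [0.0, 0.0, 1.0]])                           # F' in Eq. (36)
    Qp_block = sigma_v**2 * np.array([[dt**4 / 4, dt**3 / 2, dt**2 / 2],
                                      [dt**3 / 2, dt**2,     dt],
                                      [dt**2 / 2, dt,        1.0]])  # Q' in Eq. (37)
    Fp = block_diag(Fp_block, Fp_block, Fp_block)                    # 9 x 9
    Qp = block_diag(Qp_block, Qp_block, Qp_block)                    # 9 x 9
    return Fp, Qp
```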
The overall covariance propagation is given by:

$$P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k \tag{38}$$

$$F_k = \begin{bmatrix} e^{\Delta t \cdot A_{att}} & 0_{9\times 9} \\ 0_{9\times 9} & F_p \end{bmatrix}_{18\times 18} \tag{39}$$

$$A_{att} = \begin{bmatrix} -[\hat{\omega}\times] & I_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & I_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} & -\frac{1}{\tau} I_{3\times 3} \end{bmatrix}_{9\times 9} \tag{40}$$

$$Q_k = \begin{bmatrix} I_{6\times 6} & 0_{6\times 3} & 0_{6\times 9} \\ 0_{3\times 6} & \sigma_v^2 \cdot \frac{\tau \cdot Q_{psd}}{2} \cdot \left( 1 - e^{-2\Delta t/\tau} \right) \cdot I_{3\times 3} & 0_{3\times 9} \\ 0_{9\times 6} & 0_{9\times 3} & Q_p \end{bmatrix}_{18\times 18} = \begin{bmatrix} I_{6\times 6} & 0_{6\times 3} & 0_{6\times 9} \\ 0_{3\times 6} & \sigma_v^2 \cdot \frac{\tau}{2} \cdot Q_\alpha & 0_{3\times 9} \\ 0_{9\times 6} & 0_{9\times 3} & Q_p \end{bmatrix}_{18\times 18} \tag{41}$$

wherein the scalar $Q_{psd}$ is the power spectral density of each element of the underlying continuous-time model of the discrete noise term $v_k^\alpha$. In particular, the covariance of $v_k^\alpha$, given earlier as $Q_\alpha$, is defined as:

$$Q_\alpha = Q_{psd} \cdot \left( 1 - e^{-2\Delta t/\tau} \right) \cdot I_{3\times 3} \tag{42}$$

Because we adopt a multiplicative filter formulation, the attitude is globally represented in quaternion form, but is represented in Gibbs 3-vector form for covariance propagation and for the local measurement update (see the next section). Hence, $A_{att}$ is a $9 \times 9$ matrix.
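A minimal sketch of the covariance propagation of Eqs. (38)-(42) is given below. It takes the 9 x 9 DWPA blocks of Eqs. (36)-(37) as inputs (for example, from the sketch above), and the names and structure are illustrative assumptions rather than the filter's actual implementation.

```python
import numpy as np
from scipy.linalg import expm, block_diag

def propagate_covariance(P, w_hat, Fp, Qp, dt, tau, sigma_v, Q_psd):
    """Eqs. (38)-(42). Fp, Qp are the 9x9 DWPA blocks of Eqs. (36)-(37)."""
    wx, wy, wz = w_hat
    skew_w = np.array([[0.0, -wz, wy],
                       [wz, 0.0, -wx],
                       [-wy, wx, 0.0]])
    A_att = np.zeros((9, 9))                         # Eq. (40)
    A_att[0:3, 0:3] = -skew_w
    A_att[0:3, 3:6] = np.eye(3)
    A_att[3:6, 6:9] = np.eye(3)
    A_att[6:9, 6:9] = -np.eye(3) / tau

    Fk = block_diag(expm(A_att * dt), Fp)            # Eq. (39), 18 x 18
    Q_alpha = Q_psd * (1.0 - np.exp(-2.0 * dt / tau)) * np.eye(3)  # Eq. (42)
    Qk = block_diag(np.eye(6),                       # top-left block as written in Eq. (41)
                    sigma_v**2 * (tau / 2.0) * Q_alpha,
                    Qp)                              # Eq. (41), 18 x 18
    return Fk @ P @ Fk.T + Qk                        # Eq. (38)
```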

Pose Measurement Update

The employed measurement model for a static pose generated by the NLS solver is characterized as follows. The measurement innovation for the attitude measurement $q_{m,k+1}$ is given by:

$$\nu_{a,k+1} = 2\delta g_{k+1} = 2 \frac{\delta q_{vec}}{\delta q_w} \tag{43}$$

where $\delta q = q_{m,k+1} \otimes \hat{q}_{k+1|k}^{-1}$. The overall measurement model is given by:

$$y_{m,k+1} = \begin{bmatrix} q_{true,k+1}^T & r_{true,k+1}^T \end{bmatrix}^T + w_{k+1} \tag{44}$$

$$= \begin{bmatrix} q_{m,k+1}^T & r_{m,k+1}^T \end{bmatrix}^T \tag{45}$$

where the i.i.d. noise $w_{k+1} \sim \mathcal{N}(0, R_{k+1})$.

The overall measurement sensitivity matrix employed for the covariance update is given by:

$$H_{k+1} = \begin{bmatrix} I_{3\times 3} & 0_{3\times 6} & 0_{3\times 3} & 0_{3\times 6} \\ 0_{3\times 3} & 0_{3\times 6} & I_{3\times 3} & 0_{3\times 6} \end{bmatrix}_{6\times 18} \tag{46}$$

which is in terms of the error attitude state. The attitude measurements $q_m = q\!\left(\hat{T}_c^t\right)$ and position measurements $r_m = \hat{r}_{rel}^c$ are taken from the NLS pose solution, and the NLS pose covariance estimate at each time-step is employed as the filter measurement covariance matrix $R_{k+1}$. Importantly, the NLS-sourced covariance matrix accounts for the cross-correlations among both the attitude and position state estimates. The filter propagation, measurement update, and reset steps follow those of the standard MEKF implementation.
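A minimal sketch of this pose measurement update is given below, reusing the scalar-first quaternion helpers from the QuateRA sketch. The state layout, the placement of the position rows (at the x, y, z entries under the per-axis ordering assumed earlier), and the composition order of the quaternion reset are assumptions of this sketch.

```python
import numpy as np

def pose_measurement_update(P, q_hat, r_hat, q_meas, r_meas, R_meas):
    """One NLS-pose update; returns (P_new, q_new, dx). quat_mul and quat_inv
    are the scalar-first helpers defined in the QuateRA sketch above."""
    dq = quat_mul(q_meas, quat_inv(q_hat))             # attitude error quaternion
    nu = np.concatenate([2.0 * dq[1:] / dq[0],         # Eq. (43): Gibbs-vector innovation
                         r_meas - r_hat])              # position innovation

    H = np.zeros((6, 18))                              # role of Eq. (46)
    H[0:3, 0:3] = np.eye(3)                            # attitude-error rows
    H[3, 9] = H[4, 12] = H[5, 15] = 1.0                # position rows: x, y, z under the
                                                       # per-axis ordering assumed earlier

    S = H @ P @ H.T + R_meas                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    dx = K @ nu                                        # error-state correction
    P_new = (np.eye(18) - K @ H) @ P                   # covariance update

    # Reset: fold the attitude correction (2x-Gibbs convention of Eq. 43) back
    # into the global quaternion; composition order depends on the convention.
    a = dx[0:3]
    dq_corr = np.concatenate(([2.0], a)) / np.sqrt(4.0 + a @ a)
    q_new = quat_mul(dq_corr, q_hat)
    return P_new, q_new / np.linalg.norm(q_new), dx
```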

Angular Velocity Measurement Update


The employed measurement model for an angular velocity estimate generated by QuateRA is
characterized as follows. The measurement sensitivity matrix employed is given by:
av
 
Hk+1 = 03×3 I3×3 03×12 3×18 (47)

The overall measurement model is given by:


av av
ym,k+1 = ωtrue,k+1 + wk+1 (48)
= ωm,k+1
av
where i.i.d. noise wp,k+1 av ). The measurement covariance matrix R av (taken as P
∼ N (0, Rk+1 k+1 ω̂
from Eq. 25) is also supplied by QuateRA for each angular velocity estimate ωm,k+1 (taken as the
product Ω̂â from Eqs. 10 and 16) to be used as a filter measurement. The measurement update step
follows that of the standard MEKF implementation.
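Under the same assumed state layout, the angular velocity update can be sketched as follows; R_av is the QuateRA-supplied covariance of Eq. (25), and the names are illustrative.

```python
import numpy as np

def angular_velocity_update(P, w_hat, omega_meas, R_av):
    """QuateRA angular-velocity update; R_av is P_omega from Eq. (25)."""
    H_av = np.zeros((3, 18))
    H_av[:, 3:6] = np.eye(3)                           # Eq. (47): selects the omega states
    nu = omega_meas - w_hat                            # innovation per Eq. (48)
    S = H_av @ P @ H_av.T + R_av
    K = P @ H_av.T @ np.linalg.inv(S)
    dx = K @ nu                                        # correction to the full error state
    P_new = (np.eye(18) - K @ H_av) @ P
    return P_new, dx
```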

RESULTS
A series of results are provided in this section to evaluate the performance of the pipeline. In particular, the benefit of incorporating the QuateRA module is highlighted. The first several experiments stress the Pose Terrier package while simulating keypoint estimates based on a modeled truth propagation. Finally, the entire end-to-end pipeline, including the neural networks, is tested with simulated imagery of the Cygnus vehicle. The end-to-end pipeline test is conducted on the UP Board single-board computer, which has a quad-core Intel Atom x5-z8350 processor. The UP Board is a slightly slower computer than the Intel Joule 570x, which was the flight computer employed on the Seeker-1 mission. Although the presented pipeline provides estimates for both relative attitude and position, the results in the sequel report only the attitude estimates, which are the focus of the improvements introduced in this work.

Test 1

In this first test, the underlying relative angular velocity is set to be constant, while the target's orientation accordingly evolves over time. The simulation is conducted twice, once with the QuateRA module disabled and once with the QuateRA module enabled. Overall time-series statistics are provided in Table 1. In the presented rotation time-series plots in Figure 4, attitude is represented in Euler angle form (roll ϕ, pitch θ, yaw ψ) for intuition. The corresponding uncertainties of the filtered estimates are depicted via 3σ plots in Figure 5. For the 3σ plots, attitude errors are represented in the Gibbs vector form $g = [g_1\ g_2\ g_3]^T$, which is the local error representation adopted by the MEKF. Finally, angular velocity time-series plots are provided in Figure 6. The benefit of including the QuateRA module is immediately obvious: the rotation error and associated uncertainty are approximately halved. Although the angular velocity is technically observable by the filter without the QuateRA module, the inclusion of the QuateRA module significantly increases the accuracy of the angular velocity estimation, which in turn decreases the error in the attitude estimation.

Table 1: Test 1: Rotation Error Statistics (deg)

Statistic NLS Filtered Filtered (with QuateRA)


Mean 8.815 6.064 2.272
Median 8.150 5.451 1.507
Std. Dev. 4.767 3.502 1.837

Figure 4: Test 1: Rotation. (a) Without QuateRA; (b) With QuateRA.

Figure 5: Test 1: Gibbs Vector Error ±3σ. (a) Without QuateRA; (b) With QuateRA.

Figure 6: Test 1: Angular Velocity. (a) Without QuateRA; (b) With QuateRA.

Test 2

The second test stresses the system with a more challenging situation. In particular, the underlying relative angular velocity is now time-varying; it follows the Euler dynamics evolution for a specified moment-of-inertia matrix. In addition, the underlying noise used when simulating keypoints is doubled. Overall time-series statistics are provided in Table 2. The rotation time-series plots are presented in Figure 7, the corresponding uncertainties of the filtered estimates are depicted via 3σ plots in Figure 8, and the angular velocity time-series plots are provided in Figure 9. As indicated by the NLS time-series in Figure 7, this scenario was quite challenging. While the filter sans QuateRA improves upon the static NLS solution over time, the filter with QuateRA again performs noticeably better.

Table 2: Test 2: Rotation Error Statistics (deg)

Statistic NLS Filtered Filtered (with QuateRA)


Mean 20.777 12.401 8.049
Median 16.401 10.114 8.045
Std. Dev. 17.862 7.867 3.057

Figure 7: Test 2: Rotation. (a) Without QuateRA; (b) With QuateRA.

Figure 8: Test 2: Gibbs Vector Error ±3σ. (a) Without QuateRA; (b) With QuateRA.

Figure 9: Test 2: Angular Velocity. (a) Without QuateRA; (b) With QuateRA.

Test 3

The performance of the entire end-to-end pipeline, including the neural networks, is examined in this third test. A 90-second simulated video of Cygnus is generated via Blender and is processed by the pipeline in real-time on the UP Board single-board computer. The underlying relative angular velocity is time-varying and unknown, following the Euler dynamics evolution for some specified moment-of-inertia matrix. In addition, the Cygnus vehicle moves out-of-frame for approximately 10 seconds before returning to the camera field-of-view. Overall time-series statistics are provided in Table 3. The rotation time-series plots are presented in Figure 10, and the corresponding uncertainties of the filtered estimates are depicted via 3σ plots in Figure 11. Angular velocity time-series plots are not provided because the true angular velocity is unknown. As seen in Figure 10a, both the NLS and the filtered solution sans QuateRA perform quite accurately until Cygnus moves out-of-frame around 45 seconds, when both the NLS and filter estimates diverge. Once Cygnus returns to the camera field-of-view around 56 seconds, the estimates converge again. The filter with QuateRA is tuned to weigh the QuateRA-provided measurements more favorably than in the previous tests because of the known out-of-frame period. Therefore, although the filter estimates before 45 seconds are decent in Figure 10b, the NLS solutions are actually more accurate. However, once Cygnus goes out-of-frame around 45 seconds, the filter with QuateRA is able to significantly reduce the large errors. In fact, the statistics in Table 3 demonstrate that, on average over time, the filter with QuateRA improves over both the NLS solution and the filter sans QuateRA.

Additionally, computation time statistics when run on the UP Board single-board computer are
provided in Table 4 for each of the pipeline components, along with the overall pipeline statistics.
The entire pipeline runs at approximately 4.5 Hz on flight-like hardware in real-time.

Table 3: Test 3: Rotation Error Statistics (deg)

Statistic NLS Filtered Filtered (with QuateRA)


Mean 14.189 16.400 7.304
Median 2.442 2.737 4.755
Std. Dev. 35.221 7.867 7.431

Table 4: Mean Computation Time (ms)


Object Detection NN Keypoint Regression NN Pose Terrier (NLS+MEKF+QuateRA) Combined
136 65 19 220

Figure 10: Test 3: Rotation. (a) Without QuateRA; (b) With QuateRA.

Figure 11: Test 3: Gibbs Vector Error ±3σ. (a) Without QuateRA; (b) With QuateRA.

CONCLUSION
This paper has presented an upgraded open-source pose estimation pipeline that consists of convolutional neural networks for keypoint estimation, a nonlinear least-squares solver for static pose estimation, and a multiplicative extended Kalman filter (MEKF) for full pose filtering. In particular, the Quaternion Regression Algorithm (QuateRA) and recent improvements to it have been incorporated as a new module into the overall pipeline. Several simulated results have been presented, demonstrating the benefit of including QuateRA as an additional measurement source for the MEKF to track the relative pose of a non-cooperative target such as the Cygnus resupply vehicle. Importantly, accurate pose and angular velocity estimates are achieved without needing to know the mass and moment-of-inertia properties of the target. The performance of the entire pipeline has been demonstrated on flight-like hardware in real-time.

ACKNOWLEDGMENT
This material is based upon work supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1610403. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The authors also acknowledge research funding from the National Aeronautics and Space Administration under Grant/Contract/Agreement No. 4200732007 issued through the Johnson Space Center (Technical Point of Contact: Dr. Christopher N. D’Souza) that supported this work in part.

REFERENCES
[1] S. Pedrotty, J. Sullivan, E. Gambone, and T. Kirven, “Seeker Free-Flying Inspector GNC System Overview,” American Astronautical Society Annual Guidance and Control Conference (AAS GNC 2019), No. JSC-E-DAA-TN64849, 2019.
[2] S. Pedrotty, J. Sullivan, E. Gambone, and T. Kirven, “Seeker Free-Flying Inspector GNC Flight Performance,” 43rd Annual AAS Guidance, Navigation, and Control Conference, Paper No. AAS 20-158, 2020.
[3] B. Banker and R. Askew, “Seeker 1.0: Prototype Robotic Free Flying Inspector Mission Overview,” Annual Conference on Small Satellites, Vol. SSC19-XI-04, 2019.
[4] N. Dhamani, G. Martin, C. Schubert, P. Singh, N. Hatten, and M. R. Akella, “Applications of Machine Learning and Monocular Vision for Autonomous On-Orbit Proximity Operations,” AIAA Scitech 2020 Forum, 2020, p. 1376, 10.2514/6.2020-1376.
[5] S. Kaki, “On-orbit Pose and Angular Velocity Estimation,” Master's thesis, The University of Texas at Austin, 2020.
[6] S. Kaki and M. R. Akella, “Pose Terrier - Image-based Pose Estimation and Filtering For Spacecraft Applications,” 2021 AAS/AIAA Astrodynamics Specialist Conference, Vol. AAS 21-640, 2021.
[7] K. Black, S. Shankar, D. Fonseka, J. Deutsch, A. Dhir, and M. R. Akella, “Real-Time, Flight-Ready, Non-Cooperative Spacecraft Pose Estimation Using Monocular Imagery,” 31st AAS/AIAA Space Flight Mechanics Meeting, Vol. AAS 21-283, 2021.
[8] M. M. d. Almeida, D. Mortari, R. Zanetti, and M. Akella, “QuateRA: The Quaternion Regression Algorithm,” Journal of Guidance, Control, and Dynamics, Vol. 43, No. 9, 2020, pp. 1600–1616.
[9] S. Kaki and M. R. Akella, “Kinematic Batch Filter Formulation for Angular Velocity Estimation with Covariance Bounds,” 2023 AAS/AIAA Astrodynamics Specialist Conference, Vol. AAS 23-209, 2023.
[10] M. Kisantal, S. Sharma, T. H. Park, D. Izzo, M. Märtens, and S. D’Amico, “Satellite Pose Estimation Challenge: Dataset, Competition Design, and Results,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 56, No. 5, 2020, pp. 4083–4098, 10.1109/TAES.2020.2989063.
[11] T. H. Park, M. Märtens, G. Lecuyer, D. Izzo, and S. D’Amico, “SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across Domain Gap,” 2022 IEEE Aerospace Conference (AERO), 2022, pp. 1–15, 10.1109/AERO53065.2022.9843439.
[12] T. H. Park, S. Sharma, and S. D’Amico, “Towards Robust Learning-Based Pose Estimation of Noncooperative Spacecraft,” 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, Maine, August 11–15, 2019.
[13] B. Chen, J. Cao, A. Parra, and T. Chin, “Satellite Pose Estimation with Deep Landmark Regression and Nonlinear Pose Refinement,” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 2816–2824, 10.1109/ICCVW.2019.00343.
[14] M. Piazza, M. Maestrini, and P. Di Lizia, “Monocular Relative Pose Estimation Pipeline for Uncooperative Resident Space Objects,” Journal of Aerospace Information Systems, 2022, pp. 1–20, 10.2514/1.I011064.
[15] T. H. Park and S. D’Amico, “Adaptive Neural Network-based Unscented Kalman Filter for Spacecraft Pose Tracking at Rendezvous,” arXiv preprint arXiv:2206.03796, 2022.
[16] D. Kaidanovic, M. Piazza, M. Maestrini, P. Di Lizia, et al., “Deep Learning-Based Relative Navigation About Uncooperative Space Objects,” International Astronautical Congress: IAC Proceedings, 2022, pp. 1–11.
[17] L. P. Cassinis, R. Fonod, E. Gill, I. Ahrns, and J. G. Fernandez, “CNN-Based Pose Estimation System for Close-Proximity Operations Around Uncooperative Spacecraft,” AIAA Scitech 2020 Forum, 2020, 10.2514/6.2020-1457.
[18] L. Pasqualetto Cassinis, R. Fonod, and E. Gill, “Review of the Robustness and Applicability of Monocular Pose Estimation Systems for Relative Navigation with an Uncooperative Spacecraft,” Progress in Aerospace Sciences, Vol. 110, 2019, p. 100548, 10.1016/j.paerosci.2019.05.008.
[19] B. O. Community, Blender - a 3D Modelling and Rendering Package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2020.
[20] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. John Wiley & Sons, 2004.
[21] F. L. Markley, “Attitude Error Representations for Kalman Filtering,” Journal of Guidance, Control, and Dynamics, Vol. 26, No. 2, 2003, pp. 311–317.
[22] J. Sola, “Quaternion Kinematics for the Error-State Kalman Filter,” arXiv preprint arXiv:1711.02508, 2017.
[23] E. J. Lefferts, F. L. Markley, and M. D. Shuster, “Kalman Filtering for Spacecraft Attitude Estimation,” Journal of Guidance, Control, and Dynamics, Vol. 5, No. 5, 1982, pp. 417–429.
