
sensors

Article
A New Iterative Algorithm for Magnetic Motion Tracking
Tobias Schmidt 1,2 , Johannes Hoffmann 1 , Moritz Boueke 1 , Robert Bergholz 3 , Ludger Klinkenbusch 2
and Gerhard Schmidt 1, *

1 Digital Signal Processing and System Theory, Institute of Electrical Engineering and Information Technology,
Faculty of Engineering, Kiel University, Kaiserstr. 2, 24143 Kiel, Germany; [email protected] (T.S.);
[email protected] (J.H.); [email protected] (M.B.)
2 Computational Electromagnetics, Institute of Electrical Engineering and Information Technology,
Faculty of Engineering, Kiel University, Kaiserstr. 2, 24143 Kiel, Germany; [email protected]
3 Pediatric Surgery, University Hospital Schleswig-Holstein, Kiel University, Arnold-Heller Str. 3,
24105 Kiel, Germany; [email protected]
* Correspondence: [email protected]

Abstract: Motion analysis is of great interest to a variety of applications, such as virtual and augmented reality and medical diagnostics. Hand movement tracking systems, in particular, are used as a human–machine interface. In most cases, these systems are based on optical or acceleration/angular-speed sensors. These technologies are already well researched and used in commercial systems. In special applications, it can be advantageous to use magnetic sensors to supplement an existing system or even to replace the existing sensors. The core of a motion tracking system is a localization unit. The relatively complex localization algorithms present a problem in magnetic systems, leading to a relatively large computational complexity. In this paper, a new approach for pose estimation of a kinematic chain is presented. The new algorithm is based on spatially rotating magnetic dipole sources. A spatial feature is extracted from the sensor signal: the dipole direction for which the maximum magnitude is detected at the sensor. This is introduced as the "maximum vector". A relationship between this feature, the location vector (pointing from the magnetic source to the sensor position), and the sensor orientation is derived and subsequently exploited. By modelling the hand as a kinematic chain, the posture of the chain can be described in two ways: through the knowledge about the magnetic correlations and through the structure of the kinematic chain. Both are bundled in an iterative algorithm with very low complexity. The algorithm was implemented in a real-time framework and evaluated in a simulation and in first laboratory tests. In tests without movement, it could be shown that there was no significant deviation between the simulated and estimated poses. In tests with periodic movements, an error in the range of 1° was found. Of particular interest here is the required computing power, which was evaluated in terms of the required computing operations and the required computing time. Initial analyses have shown that a computing time of 3 µs per joint is required on a personal computer. Lastly, the first laboratory tests basically prove the functionality of the proposed methodology.

Keywords: magnetic motion tracking; localization; rotating magnetic dipole; iterative algorithms; human–machine interface

Citation: Schmidt, T.; Hoffmann, J.; Boueke, M.; Bergholz, R.; Klinkenbusch, L.; Schmidt, G. A New Iterative Algorithm for Magnetic Motion Tracking. Sensors 2024, 24, 6947. https://doi.org/10.3390/s24216947

Academic Editor: Kimiaki Shirahama
Received: 11 October 2024; Revised: 25 October 2024; Accepted: 28 October 2024; Published: 29 October 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
Human motion tracking is of great interest to many applications such as virtual/augmented reality [1] and medical diagnostics [2]. Among the several variants of motion tracking, this contribution focuses on hand-motion tracking as a human–machine interface for robot-assisted surgery. However, the proposed method can also be used for other applications where the movement can be modelled by kinematic chains.
Camera-based (optical motion capture, OMC) systems, which are considered to be the gold standard in motion tracking methods, allow for accuracy in the millimeter or even submillimeter range [3]. However, OMC systems have the disadvantage that direct lines of sight between the objects (usually reflecting markers) and the relevant cameras are required.
An alternative method is the use of gloves with attached inertial measurement units (IMUs) or flex sensors. Several of the corresponding solutions are shown in [4]. The well-investigated IMUs are used in commercial applications such as XSens' Quantum Metaglove [5]. However, IMU-based systems do not measure the quantities of interest (i.e., positions and angles) directly but instead measure their time derivatives (i.e., acceleration and angular speed). This leads to drift problems.
Approaches based on magnetic sensors are still in the early phases of research
(see [6,7], for examples). Future magnetic methods could be either a stand-alone func-
tional alternative or be used as a supplement to improve the performance of optical or
IMU-based systems.
The present work aims to design an input device for robot-assisted surgery based on magnetic sensor technology. For this purpose, a glove will be equipped with magnetic sensors such that at least one sensor can be assigned to each kinematic element. The heart of the proposed motion tracking system is the localization unit. This unit determines the pose of the object at a constant sample rate, with a typical sampling period of 15 ms, which in turn leads to a localization update rate of about 67 Hz [8,9]. If the sampling period is longer, it might lead to disruptive handling in human–machine interface applications. In our case, a kinematic chain with up to 20 degrees of freedom has to be estimated every 15 ms. The allowed latency is one of the challenges, as it limits the available computing time, and the algorithms must therefore be designed to work efficiently.
Magnetic localization is usually solved with numerical or analytical approaches. Nu-
merical solutions are used in applications with 1D sensors or flexible setups [10–13]. Ana-
lytical methods are used with fixed sensor array configurations, such as 3D sensors [14–16]
or gradient sensors [17–20]. On the one hand, numerical methods are generally compu-
tationally intensive, which can become a problem if many (>20) sensors are involved, as
it may no longer be possible to maintain the latency time. On the other hand, analytical
methods usually use defined sensor setups such as 3D sensors. These may already be too
large for the structures to be observed, which in case of a finger are only a few centimetres
in size. In [6], a magnetic sensor glove with a numerical localization approach is described.
In the approaches mentioned above, poses are determined by the minimization of a cost function while some kinematic constraints are maintained; up to 55 hand reconstructions per second have been achieved thus far. Since at least about 67 hand reconstructions per second are required for surgical interfaces, these algorithms are not yet fully capable of solving the problem. With the progress in computer hardware, these algorithms could become able to satisfy these conditions in the future. However, we are looking for a solution that can be executed on standard personal computers, where further processing beyond motion analysis (e.g., gesture recognition) usually needs to be executed as well.
For the overall system, we use two nested localization methods: The external localization is responsible for the absolute localization of a reference point within a defined measurement volume. In our case, this reference point could be the wrist, for example. A 3D coil is attached to this wrist, which can then be localized using conventional algorithms, as shown in [15,16]. Based on this, there is an internal localization. This estimates the position and orientation of the sensors attached to the fingers with respect to the reference point mentioned above. To this end, the individual fingers are modelled as kinematic chains. This offers an advantage in that the number of degrees of freedom is reduced. In general, a 1D sensor has five degrees of freedom: three for the position and two for the orientation. By attaching the sensor to the kinematic chain, the number of degrees of freedom per sensor is reduced to two, since the movement is limited to the rotation of the joints. This will be utilized in an efficient algorithm, which will be explained in the following. An advantage of the presented algorithm is that it combines localization and mapping to a kinematic chain. In this way, prior knowledge about the kinematic chain is integrated into the localization, thus
narrowing down the solution space and simplifying the calculation. An overview of this
nested localization scheme is shown in Figure 1.

Figure 1. System overview: The illustration includes an external localization (blue) consisting of a
defined setup of (here 8) sensors. The inner localization (red) consists of a 3D coil which is attached
to the wrist as well as magnetic 1D sensors which are attached to each finger element. Following the
localization, gesture recognition or processing of the data for the human–machine interface can be
carried out.

First, we will introduce a magnetic signal feature called the “maximum vector”. Then,
we discuss the relations between this feature, the sensor position, and the sensor orientation.
Eventually, these relations are linked to the known anatomy through an iterative algorithm.
For validation, the algorithm was implemented in a real-time environment and tested with
a simulation and an initial laboratory setup. The paper closes with a discussion about the
results and the restrictions of the algorithm. The central idea and a suitable setup is shown
in Figure 2.
Figure 2. Typical example of use of the presented algorithm: At the origin of the coordinate system a
3D magnetic transmitter is located. A kinematic chain is equipped with 1D magnetic sensors, such as
fluxgate magnetometers or magnetoelectric sensors, on every chain element. The kinematic chains
are connected through joints with ellipsoidal cross-sections, each providing two degrees of freedom.
Any additional information from the kinematic chain about the position is used to increase the speed
of the localization algorithm.

2. Spatial Signal Feature


In this section, a magnetic signal feature will be introduced, which we refer to as the
“maximum vector”, abbreviated as MV in the following. A rotating magnetic dipole at a
defined position and a sensor at a defined position and orientation are assumed. The MV
describes the dipole orientation for which the maximum signal is detected at the sensor.
The spatial relationships between the MV, the sensor location, and the sensor orientation
will be derived in the following.

2.1. Field-Theoretical Basics


The magnetic field of a slowly time-varying current density distribution can be approx-
imated by Biot–Savart’s law. However, even this simplification of Maxwell’s equations can
result in a very time-consuming numerical calculation, requiring a spatial integration over
each voxel of the current–density distribution. For a localized current–density distribution
produced by coils, the magnetic field calculation can be further simplified using a magnetic
dipole model. For a good approximation, it is important that the distance between the coil and the sensor is much larger than the typical dimensions of the coil. For a magnetic dipole m⃗ located at the origin (r⃗ = 0), the magnetic field B⃗dip(r⃗) can be described by [21]:

B⃗dip(r⃗) = (1 / (4πr³)) · (3 r⃗ (m⃗ · r⃗) / r² − m⃗).    (1)
In the present work, we assume an ideal sensor at position ⃗r with known orientation
⃗es . “Ideal” means that the sensor is assumed to be located at a single point and that the
output of the sensor is an undisturbed projection of the dipole field on the main sensor axis.
The output Bsensor (⃗r, ⃗es ) is then found as follows:

Bsensor (⃗r, ⃗es ) = ⃗Bdip (⃗r ) ·⃗es . (2)
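Equations (1) and (2) translate directly into code. The following minimal sketch (our own helper names, using the paper's normalization in which the permeability constant is absorbed) evaluates the dipole field and the ideal 1D-sensor projection:

```python
import math

def b_dip(m, r):
    """Dipole field of Equation (1): (1/(4*pi*r^3)) * (3*r_vec*(m.r)/r^2 - m)."""
    r_abs = math.sqrt(sum(c * c for c in r))
    m_dot_r = sum(mc * rc for mc, rc in zip(m, r))
    return tuple(
        (3.0 * rc * m_dot_r / r_abs**2 - mc) / (4.0 * math.pi * r_abs**3)
        for rc, mc in zip(r, m)
    )

def b_sensor(m, r, e_s):
    """Ideal 1D sensor output of Equation (2): projection of the field on e_s."""
    return sum(bc * ec for bc, ec in zip(b_dip(m, r), e_s))

# Dipole along z, sensor on the x-axis with its sensitive axis along z:
# the field there is -m/(4*pi) in the z-direction, so the output is -1/(4*pi).
print(b_sensor((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

For this configuration, the printed value is −1/(4π) ≈ −0.0796, as expected from the equatorial dipole field.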

For the generation of the magnetic field ⃗Bdip , we apply the 3D coil as represented in
Figure 3. It consists of a superposition of three orthogonal coils. For the current application,
this source can be represented by a superposition of three magnetic dipoles polarized in the
x-, y-, and z-directions, respectively. Clearly, by appropriately weighting the amplitudes
(i.e., currents) of the orthogonal three coils, an arbitrary single dipole can be created.
Eventually, this will be used to form a single dipole that spatially rotates as a function of
time. Figure 2 shows the setup. The source is located at the origin and the sensors are
aligned with the chain elements.


Figure 3. 3D coil: (a) sketches the modelled simulation object. A photograph of the corresponding
realization is shown in (b). Note that both constructions consist of three orthogonal coils.

2.2. Maximum Vector (MV)


We assume a sensor at an arbitrary position ⃗r with an arbitrary orientation ⃗es where
both values are unknown. A rotating dipole source is located at the origin of the global
coordinate system r = 0. We define MV ⃗emax as the orientation of the dipole field ⃗Bdip for
which the sensor signal Bsensor in Equation (2) becomes a maximum.
We first derive a relationship between the direction of the sensor position e⃗r, the sensor orientation e⃗s, and the MV e⃗max. To this end, we first normalize Equation (1) according to

B⃗norm = (4πr³ / m) · B⃗dip(r⃗) = 3 e⃗r (e⃗m · e⃗r) − e⃗m,    (3)

where m⃗ = m e⃗m.
Without loss of generality, we next assume that e⃗r and e⃗s lie in the xy-plane. As sketched in Figure 4, for the Cartesian coordinates of e⃗r and e⃗s we have

e⃗r = (cos(ϕ), sin(ϕ), 0)ᵀ  and  e⃗s = (cos(ϕs), sin(ϕs), 0)ᵀ.    (4)

The magnetic dipole moment m⃗ can point in any direction. In spherical coordinates, the Cartesian components of e⃗m read (see Figure 4)

e⃗m = (cos(ϕm) sin(θm), sin(ϕm) sin(θm), cos(θm))ᵀ.    (5)

Figure 4. Geometry used for the derivation: r⃗ and e⃗s both lie in the xy-plane. ϕm and θm define the orientation of the rotating magnetic dipole m⃗ in spherical coordinates.

Inserting Equations (4) and (5) into Equation (3) yields the three Cartesian field components of the normalized dipole field:

Bnorm,x(ϕ, ϕm, θm) = (3 cos²(ϕ) − 1) cos(ϕm) sin(θm) + 3 sin(ϕm) sin(θm) sin(ϕ) cos(ϕ),    (6)
Bnorm,y(ϕ, ϕm, θm) = (3 sin²(ϕ) − 1) sin(ϕm) sin(θm) + 3 cos(ϕm) sin(θm) cos(ϕ) sin(ϕ),    (7)
Bnorm,z(θm) = −cos(θm).    (8)

The sensor signal is obtained using Equations (2)–(4) according to

Bsensor(ϕ, ϕs, ϕm, θm, r) = (m / (4πr³)) · (cos(ϕs) Bnorm,x(ϕ, ϕm, θm) + sin(ϕs) Bnorm,y(ϕ, ϕm, θm)).    (9)
Obviously, the variation in the distance between the sensor and the rotating dipole affects the measured signal according to 1/r³. Regarding the variation in θm, the measured signal becomes a maximum only if θm = π/2. Moreover, we remark that the component of the rotating dipole which is perpendicular to the plane spanned by e⃗r and e⃗s has no effect on the measured signal. From these observations we deduce that the MV must also lie in that plane. To find the desired relationship between the three unit vectors, we—without limiting the generality—place the location vector r⃗ on the x-axis (see the left side of Figure 5). For the Cartesian components of the corresponding unit vectors we thus have:
     
e⃗r = (1, 0)ᵀ,  e⃗m = (cos(ϕm), sin(ϕm))ᵀ,  e⃗s = (cos(ϕs), sin(ϕs))ᵀ.    (10)

In that case, it holds that

Bnorm,x = 2 cos(ϕm),    (11)
Bnorm,y = −sin(ϕm),    (12)
Bnorm,z = 0,    (13)

and Equation (9) simplifies to

Bsensor(ϕs, ϕm) = (m / (4πr³)) · (2 cos(ϕm) cos(ϕs) − sin(ϕm) sin(ϕs)).    (14)
4πr
To derive the angle ϕm = ϕmax for which the right-hand side of Equation (14) becomes a maximum, we calculate

dBsensor(ϕm)/dϕm = (m / (4πr³)) · (−2 sin(ϕm) cos(ϕs) − sin(ϕs) cos(ϕm)),    (15)

set it to zero,

−2 sin(ϕmax) cos(ϕs) − sin(ϕs) cos(ϕmax) = 0,    (16)

and finally obtain the relation:

−2 tan(ϕmax) = tan(ϕs).    (17)
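Equation (17) can be checked numerically by brute force: for a fixed sensor angle, sweep the dipole angle in Equation (14) and compare the angle of the detected maximum with the analytical prediction (a small verification sketch with our own function names, not part of the paper's tooling):

```python
import math

def sensor_signal(phi_m, phi_s):
    """Normalized sensor signal of Equation (14); constant m/(4*pi*r^3) dropped."""
    return 2.0 * math.cos(phi_m) * math.cos(phi_s) - math.sin(phi_m) * math.sin(phi_s)

def numeric_phi_max(phi_s, steps=20_000):
    """Brute-force search of the dipole angle that maximizes the sensor signal."""
    best_phi, best_val = 0.0, -float("inf")
    for k in range(steps):
        phi_m = -math.pi / 2 + math.pi * k / steps  # one branch of the tangent
        val = sensor_signal(phi_m, phi_s)
        if val > best_val:
            best_phi, best_val = phi_m, val
    return best_phi

phi_s = math.radians(30.0)
analytic = math.atan(-0.5 * math.tan(phi_s))  # from -2*tan(phi_max) = tan(phi_s)
print(math.degrees(numeric_phi_max(phi_s)), math.degrees(analytic))
```

Both values agree to within the grid resolution (about −16.1° for ϕs = 30°).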


Note that this relation is valid for any ϕ (not just for ϕ = 0), as sketched in Figure 5.

Figure 5. The relation in Equation (17) is independent of the angle ϕ. Moreover, the unique relationship between the three unit vectors e⃗s, e⃗max, and e⃗r is clarified.

As graphically demonstrated in Figure 5, there is a unique relation between the three unit vectors e⃗s, e⃗max, and e⃗r. This relation will now be used to uniquely derive the sensor orientation e⃗s from a given e⃗r and a measured MV e⃗max. As an example, for a systematic procedure we start from a given origin (i.e., the location of the "rotating" 3D coil) and a given sensor location r⃗.
1. First, we determine the “rotation axis” ⃗en :

⃗emax ×⃗er
⃗en = . (18)
∥⃗emax ×⃗er ∥

2. In the second step, the angle between the MV and the location unit vector is calculated:

ϕmax = arccos(⃗emax ·⃗er ). (19)

3. Subsequently, the angle between the location unit vector and the sensor orientation is
determined from Equation (17):

ϕs = arctan(−2 tan(ϕmax)).    (20)

4. Finally, for any given ⃗er , the sensor orientation ⃗es is calculated by

⃗es = cos(ϕs )(⃗en ×⃗er ) ×⃗en + sin(ϕs )(⃗en ×⃗er ). (21)
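The four steps above can be sketched directly in code (a minimal pure-Python illustration; the function names are ours, not from the paper's framework):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def sensor_orientation(e_max, e_r):
    """Equations (18)-(21): sensor orientation from the MV and the position direction."""
    e_n = normalize(cross(e_max, e_r))           # (18) rotation axis
    phi_max = math.acos(dot(e_max, e_r))         # (19) angle between MV and e_r
    phi_s = math.atan(-2.0 * math.tan(phi_max))  # (20) relation of Equation (17)
    u = cross(cross(e_n, e_r), e_n)              # in-plane component along e_r
    v = cross(e_n, e_r)
    return tuple(math.cos(phi_s) * uc + math.sin(phi_s) * vc
                 for uc, vc in zip(u, v))        # (21)

# MV at 45 degrees in the xy-plane, sensor located on the x-axis:
e_s = sensor_orientation(normalize((1.0, 1.0, 0.0)), (1.0, 0.0, 0.0))
print(e_s)
```

The returned orientation lies in the plane spanned by e⃗max and e⃗r, consistent with the pattern of Figure 6.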

Figure 6 shows the calculated sensor orientations as blue vectors starting at different
sensor locations ⃗r in the xy-plane. The MV is located at the origin and polarized in the
y-direction. For the 3D case, i.e., if ⃗r is an arbitrary vector, we simply have to rotate the blue
sensor orientations around the MV.

Figure 6. Blue vectors: Calculated sensor orientations ⃗es for different values of the sensor location ⃗r.
The starting point of each blue vector represents the corresponding ⃗r. Yellow vector: The maximum
vector at the origin, always polarized in the y-direction. Note that the lengths of the blue vectors are
not of interest here, as only the directions are relevant.

3. Signal Processing
In this section, it is shown how (temporal) signal processing can be used to extract
the required signal feature. Subsequently, an iterative algorithm for pose estimation is
presented.

3.1. Feature Extraction


The three orthogonal coils are driven with three different current signals. These currents are chosen such that the absolute value of the dipole moment |m⃗(t)| = m₀ is constant for all values of t:

m⃗(t) = m₀ (cos(ωϕ t) sin(ωθ t), sin(ωϕ t) sin(ωθ t), cos(ωθ t))ᵀ.    (22)

In particular, for the angular frequencies it holds that ωθ = Nω ωϕ, where Nω is a positive integer. Thus, the x and y coils are driven by amplitude-modulated signals while the z coil produces a standard harmonic magnetic signal. Consequently, as a function of time, m⃗ moves on the surface of a sphere with radius m₀, as exemplarily shown for Nω = 10 in Figure 7.

Figure 7. Track of the magnetic dipole m⃗(t) with starting point at the origin as a function of time. The tip of m⃗ moves on the surface of a sphere with radius m₀, according to Equation (22) for Nω = ωθ/ωϕ = 10.
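The drive signals of Equation (22) are straightforward to generate. The sketch below samples m⃗(t) and illustrates that its magnitude stays constant while its direction rotates (the parameter values are illustrative, not those of the actual hardware):

```python
import math

def dipole_moment(t, m0=1.0, omega_phi=2.0 * math.pi, n_omega=10):
    """Rotating dipole moment of Equation (22) with omega_theta = N_omega * omega_phi."""
    omega_theta = n_omega * omega_phi
    return (
        m0 * math.cos(omega_phi * t) * math.sin(omega_theta * t),
        m0 * math.sin(omega_phi * t) * math.sin(omega_theta * t),
        m0 * math.cos(omega_theta * t),
    )

# |m(t)| equals m0 for every sample along the trajectory:
samples = [dipole_moment(k / 1000.0) for k in range(1000)]
print(max(abs(math.sqrt(sum(c * c for c in m)) - 1.0) for m in samples))
```

The printed maximum deviation from m₀ = 1 is at machine-precision level, since x² + y² + z² = m₀²(sin² + cos²) = m₀² holds identically.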

While searching for the MV, an obvious method would be to try each direction and detect the maximum field within a given period of time. Instead, we will prove in the following that the directions in which no field is measured are orthogonal to the MV. We call such vectors zero-crossing vectors. By detecting two independent zero-crossing vectors, the MV can then be calculated by simply building their cross product. To derive the relation between ϕzero and ϕmax, we first set Equation (14) (which is valid if e⃗m and e⃗s are lying in the xy-plane) to zero for ϕm = ϕzero:

0 = 2 cos(ϕzero) cos(ϕs) − sin(ϕzero) sin(ϕs),    (23)
2 cot(ϕzero) = tan(ϕs).    (24)

With Equation (17) we have

cot(ϕzero) = tan(−ϕmax),    (25)
cot(ϕzero) = cot(π/2 + ϕmax),    (26)
ϕzero = ϕmax ± π/2.    (27)

Equation (27) proves that the zero-crossing vector and the MV are orthogonal if both
are lying in the xy-plane. Since zero-crossing vectors and MVs are orthogonal, we conclude
that for the general case the zero-crossing vectors can again be obtained by a rotation
around the MV. Figure 8 illustrates an arbitrarily directed MV, the corresponding plane of
zero-crossing vectors, and the direction of the sensor.
(a) Setup with introduced vectors (b) Simulated signal

Figure 8. The left figure exemplarily shows a max vector at the origin (yellow), the position and orientation (orange) of the sensor, and the corresponding plane of zero-crossing vectors (purple). The right side shows the corresponding sensor signal as a function of time. The times at which a zero-crossing occurs are marked with purple dots. The simulation works with a source which is driven with Nω = ωθ/ωϕ = 10. The absolute values/lengths are not relevant; only the relative relationship between the vectors and the zero crossings is of interest.

3.2. Zero-Crossing Polarity

In the previous section, we showed that the MV is orthogonal to all zero-crossing vectors. Using the cross product of two such vectors, a vector can be determined that is perpendicular to both. There are two solutions that satisfy this condition. For example, in the xy-plane the normal vectors are e⃗z and −e⃗z. Therefore, it is necessary to observe the polarity here. Each zero-crossing is assigned a polarity that depends on two parameters: the direction of the zero-crossing, i.e., from positive to negative (z+) or from negative to positive (z−), and the slope of the corresponding z-current Iz. The zero-crossing vectors e⃗zero,k are indexed according to the detection time. Depending on this, the order of the cross product is swapped:

e⃗max = (e⃗zero,k × e⃗zero,k−1) / ∥e⃗zero,k × e⃗zero,k−1∥   for z+ ∧ dIz/dt > 0, or z− ∧ dIz/dt < 0,
e⃗max = (e⃗zero,k−1 × e⃗zero,k) / ∥e⃗zero,k−1 × e⃗zero,k∥   else.    (28)
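The cross-product step of Equation (28) can be sketched as follows (a simplified illustration; the polarity decision is passed in as a boolean flag that, in a real system, would be derived from the zero-crossing direction and the sign of dIz/dt):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def mv_from_zero_crossings(e_zero_prev, e_zero_curr, keep_order):
    """Equation (28): MV as the normalized cross product of two zero-crossing vectors.

    keep_order=True corresponds to the first case (z+ with rising Iz, or z- with
    falling Iz); otherwise the operands are swapped to fix the polarity.
    """
    if keep_order:
        return normalize(cross(e_zero_curr, e_zero_prev))
    return normalize(cross(e_zero_prev, e_zero_curr))

# Two zero-crossing vectors orthogonal to an MV along +y:
e1, e2 = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(mv_from_zero_crossings(e1, e2, keep_order=True))  # → (0.0, 1.0, 0.0)
```

Without the polarity information, the sign of the result would be ambiguous, which is exactly why Equation (28) distinguishes the two cases.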

3.3. Iterative Algorithm


For the proposed algorithm, knowledge of the geometry is required. This includes the relative position of the joint r⃗j and the distance of the sensor to that joint. The orientation of the sensor is aligned with the second bone. The corresponding setup is sketched in Figure 2. On the condition that the orientation of the sensor is correct, the proposed algorithm is convergent and delivers its position through applying the kinematic chain. Vice versa, the orientation determined for this position again matches the assumed orientation. To come to an iterative algorithm, we initially assume a random orientation ê⃗s,i=0, and the kinematic chain model is used to determine a related estimate of the sensor position r̂⃗s,i:

r̂⃗s,i = r⃗j + ljs ê⃗s,i.    (29)


Sensors 2023, 24, 6947 10 of 20

From the procedure described in Equations (18)–(21) (Section 2.2), we next determine
an update of the sensor orientation ⃗ês,i+1 according to

⃗ês,i+1 = ⃗f (⃗r̂s,i , ⃗emax ). (30)

This process is iteratively repeated until convergence is achieved. The number of iterations needed for this goal depends on the ratio between ljs and |r⃗j|. The algorithm is represented as a flow chart in Figure 9. Figure 10 shows an example of the corresponding iterative progress. Based on the orientation of the previous chain element, we calculate the position of the next joint. Thus, the true alignment of the kinematic chain is found after a suitable number of repetitions of this systematic procedure.
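Equations (29) and (30) can be combined into a compact fixed-point iteration. The following self-contained sketch in plain Python illustrates the idea; the joint position, segment length, and stopping threshold are illustrative values, not those of the paper's experiments:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def sensor_orientation(e_max, e_r):
    """Equations (18)-(21): orientation estimate for a given position direction."""
    e_n = normalize(cross(e_max, e_r))
    phi_max = math.acos(dot(e_max, e_r))
    phi_s = math.atan(-2.0 * math.tan(phi_max))
    u = cross(cross(e_n, e_r), e_n)
    v = cross(e_n, e_r)
    return tuple(math.cos(phi_s) * uc + math.sin(phi_s) * vc for uc, vc in zip(u, v))

def estimate_pose(r_j, l_js, e_max, e_s0=(0.0, 1.0, 0.0), eps=1e-9, max_iter=100):
    """Iterate Equations (29) and (30) until the orientation stops changing."""
    e_s = e_s0
    for _ in range(max_iter):
        r_s = tuple(rj + l_js * es for rj, es in zip(r_j, e_s))  # (29) chain model
        e_s_new = sensor_orientation(e_max, normalize(r_s))      # (30) magnetic relation
        if sum((a - b) ** 2 for a, b in zip(e_s, e_s_new)) < eps ** 2:
            return e_s_new
        e_s = e_s_new
    return e_s

# Joint 40 cm from the source, 10 cm joint-to-sensor distance (Q = 0.25):
e_s = estimate_pose((0.4, 0.0, 0.0), 0.1, normalize((1.0, 1.0, 0.0)))
print(e_s)
```

For this Q = 0.25 setup the iteration contracts quickly, in line with Figure 11: each pass updates the position via the chain model and the orientation via the magnetic relation until both are mutually consistent.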

Start → choose a random start orientation → calculate the sensor position (kinematic chain) → calculate the sensor orientation (magnetic relation) → if the change of orientation is bigger than a threshold, repeat; otherwise, continue with the next chain element.

Figure 9. Flow chart of the iterative algorithm: The algorithm starts with a random initial orientation.
Then, the sensor position relative to the source is determined. Afterwards, the corresponding
orientation is calculated. When there is no relevant change between the data obtained with two
subsequent iterations, convergence is reached, and this orientation is the estimated result.

(a) Setup (b) Iteration 1 (c) Iteration 2 (d) Iteration 15

Figure 10. Exemplary iterative process: This figure shows the iterations for a simple setup. The first sub-figure shows the used setup with the coordinate system. The origin of this setup is located at the first kinematic chain element, where the source is also located. The source is attached to the first kinematic element in such a way that the relative position of the source to the kinematic chain is always constant. In the following figures, the coordinate system has been omitted. The blue vectors represent the potential poses for the detected MV. The light green construction shows the ground truth. The red vector is a normalized position vector of the sensor which points to the potential pose in this direction. The sensor is mounted on the second bone. It is represented by a black rectangle with a vector in the sensitive direction. Subfigure (b) starts with a bone orientation in the x-direction. For some iterations, the kinematic chain and the related position vector are shown. After 15 iterations, subfigure (d), the sensor pose matches a potential pose (ground truth) and the algorithm converges.

The convergence speed of the algorithm depends on the ratio Q between the length laj from the actuator point to the joint and the length ljs from this joint to the sensor:

Q = ljs / laj.    (31)

Figure 11 shows the absolute angular error as a function of the number of iterations for each Q. The true sensor orientation is assumed to be in the z-direction. The initial sensor orientation is always assumed to be in the y-direction, i.e., the corresponding angular error starts at 90°.

Figure 11. Angular error in dependence of the iteration: The figure shows the behaviour of the angular error as a function of the iteration number for different length-ratio setups (Q = 1/2.5, 1/3, 1/4, 1/5, 1/8, 1/10). All curves tend closer to zero with each iteration.

3.4. Uniqueness
In this section, we will show that the procedure described above delivers a unique result if Q as defined in Equation (31) is not between 0.5 and 1. For a proof, we refer to Figure 5 and note that the orientation e⃗s has two components, i.e., two scalar degrees of freedom. The input of the algorithm also has two given scalar variables, which leads to a problem with two given variables and two unknowns. To solve this, we first split the two-dimensional solution vector and show that each component can be calculated independently. As previously shown, e⃗max, e⃗r, and e⃗s lie in one plane. From the sensor position, the joint position can be obtained by the vector addition of e⃗s scaled by the known length ljs. This relationship shows that the joint position must also lie in the plane already shown. This allows the detected e⃗max to be used to determine a plane in which the solution vector lies. This reduces the number of unknowns to one, and the first degree of freedom can thus be determined unambiguously. In the previous derivation, the relative relationship between e⃗r, e⃗s, and e⃗max was shown. In order to show that a unique e⃗s can be assigned to each e⃗max, a reference point must be selected. The angle ϕmax,j is defined as the angle between e⃗max and the position vector of the joint r⃗j, and the angle ϕs,j is defined as the angle between e⃗s and r⃗j. These angles are shown in Figure 12.
We set the coordinate system such that e⃗r, e⃗s and e⃗max lie in the xy-plane. Furthermore, without limiting the generality of the solution, we assume that r⃗j lies on the y-axis. The positions of the joint and the sensor can thus be described as follows:

r⃗j = (0, laj)ᵀ,    (32)
r⃗s = (ljs cos(ϕs,j), laj + ljs sin(ϕs,j))ᵀ.    (33)

Now, the angle ϕr between r⃗j and r⃗s can be determined as

ϕr = arctan( ljs cos(ϕs,j) / (laj + ljs sin(ϕs,j)) ).    (34)

These descriptions can now be used to determine ϕmax according to Equation (17):

ϕmax = −arctan( (1/2) · tan(ϕs,j − ϕr) ).    (35)

This angle is now related to the position vector of the sensor. The angle ϕmax,j can be described as follows:

ϕmax,j = ϕmax − ϕr.    (36)

The obtained formulas reveal the analytical relationship ϕmax,j(ϕs,j). For the uniqueness of the solution, the inverse function is required. Since this is not a trivial task, a lookup table has been created with the axes swapped. Figure 13 shows the results obtained for different values of Q.
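The mapping ϕmax,j(ϕs,j) of Equations (34)–(36) is easy to tabulate. The sketch below builds such a lookup table for a given Q, setting laj = 1 so that ljs = Q (a minimal illustration of the construction, with our own function names):

```python
import math

def phi_max_j(phi_s_j, q):
    """Equations (34)-(36) with l_aj = 1 and l_js = Q."""
    # (34); atan2 handles the quadrant of the quotient safely
    phi_r = math.atan2(q * math.cos(phi_s_j), 1.0 + q * math.sin(phi_s_j))
    phi_max = -math.atan(0.5 * math.tan(phi_s_j - phi_r))  # (35)
    return phi_max - phi_r                                 # (36)

# Lookup table over phi_s_j for one length ratio:
q = 0.5
table = [(math.radians(d), phi_max_j(math.radians(d), q)) for d in range(0, 91, 5)]
print(table[0])  # at phi_s_j = 0 the value reduces to atan(Q/2) - atan(Q)
```

Inverting such a table (swapping the axes, as done for Figure 13) only yields a unique ϕs,j where the tabulated relation is monotonic, i.e., outside the range 0.5 < Q < 1.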

Figure 12. Definition of the angles at a joint between two bones.

Obviously, there is a unique solution only for a certain range of Q. For Q = 1, i.e., where the non-uniqueness is most significant, a simulation including a measurement of the convergence speed was carried out; see Figure 14. We observe that the maximum error does not approach zero but oscillates periodically, undamped, around zero.

(a) 0 < Q ≤ 0.5 (b) 0.5 < Q ≤ 1 (c) 1 < Q

Figure 13. The relation between ϕs,j and ϕmax,j is represented for different values of Q. The plots are subdivided for different ranges of Q. In the ranges 0 < Q < 0.5 and Q > 1 there is a clear assignment, i.e., there is a unique bidirectional relation between ϕmax,j and ϕs,j. However, between 0.5 and 1 we observe a non-unique relation.

Figure 14. For Q = 1, the maximum angular error does not approach zero even after several iterations,
i.e., the algorithm is non-convergent.

4. Results
4.1. Simulation Results
For validation, the proposed iterative algorithm was implemented numerically and
the motion of a single joint was observed. To that end, an existing C-based real-time
framework was applied [22]. An overview of the modules used is shown in Figure 15. A
complete description of the implementation can be found in [23].

Figure 15. Simulation overview: the simulation is divided into two sections. The upper (dark) part,
the simulation pipeline, simulates the motion and the resulting field at the sensor (kinematic,
magnetic source, and magnetic sensor simulation). In the lower part (bright), the processing
pipeline (signal generation, zero-crossing determination, MaxVector determination, and pose
estimation), the described algorithm is implemented and the pose is calculated.

As an example, a magnetic source is attached at the beginning of a first bone of 40 cm
length. The sensor is attached to the second bone at a distance of 10 cm from the joint. To
avoid repeating the simulation without movement that has already been discussed in
Section 3.3, a simple movement in the xz-plane was performed: the second bone rotates in
the xz-plane around the y-axis in a range from 90° to −90° with a constant frequency of
1 Hz. As illustrated in Figure 16, the 3D coil (source) is located in the origin at the end of
the first bone, with equivalent magnetic dipoles at frequencies ωϕ = 7 Hz and ωθ = 7000 Hz.

Figure 16. Simulation of a motion: all elements are in the yz-plane. The 3D coil source is located in
the origin. The first bone is aligned with the z-axis and its end represents the position of the joint.
The second bone moves from 90° to −90° with respect to the axis of the first bone.

The results of the simulations are shown in Figure 17a. The ground-truth angle
between the second element and the z-axis is shown for comparison with the estimated
angle. For better visibility of both signals, the estimation is shifted by 10°. The absolute
error of the amplitude, which is plotted in Figure 17b, is below 1° for most time steps.
Figure 17. (a) shows both the estimated angle and the simulated one. The dark red line represents
the simulation (ground truth) while the light red line is the estimation of the described algorithm.
The latter is shifted 10° to enhance the clarity of the visualization. In (b), the difference between the
simulation and the estimation is plotted. We observe an error signal which follows the angle of the
movement.

4.2. Experimental Results

For an initial laboratory test, a prototype consisting of two kinematic elements made
of PVC was fabricated (see Figure 18). The two elements are connected by a screw, allowing
one degree of freedom. A 3D coil is attached at the beginning of the longer element while a
fluxgate magnetometer is attached to the shorter element. Firstly, tests were carried out
with a static setup, i.e., the prototype was held in a fixed position using suitable wedges,
which were attached to the table and to the prototype with double-sided adhesive tape.

Figure 18. Upper figure: the built prototype consists of two PVC elements, connected to each other
with a screw allowing for one degree of freedom. The 3D coil source is located at one end of the
longer element. On the shorter element, a fluxgate magnetometer [24] is mounted. The illustration in
the lower figure shows the assembly for a 30° position.

We have used seven equidistant angles between 0° and 90°. The same software used
for the simulation was applied for data acquisition and signal processing. A series of 10 s
measurements was recorded for each position, and a new angle estimate was calculated
every 20 ms. Hence, after 10 s we obtained 500 angle estimations, represented in Figure 19.

Figure 19. The boxplots show the experimental results of the measured sensor angles for each of the
seven given (ground-truth) joint angles. The box plots show the median, the first quartile, the third
quartile, the minimum, the maximum, and several outliers for each joint angle.

4.3. Computational Cost

Determining the computational effort of an algorithm is not trivial when libraries are
used, as the implementation of their functions is not disclosed. Therefore, each mathematical
operation used was given a score depending on its complexity. The score was determined by
executing the corresponding operation one million times and normalizing the measured time
to that of the ADD operation. This makes it possible to compare the results with other
algorithms. All operations are implemented in C with the "math.h" library.
Table 1 shows the calculation effort for the mathematical operations used. One iteration
of the presented algorithm has an average equivalent of 212 addition operations. All
operations were performed on an AMD Ryzen 5 5600X 6-core processor.

Table 1. Time consumption of different complex operations for one million executions, normalized to
the (basic) addition operation.

Operation                 Time Consumption/µs    Score
ADD/SUB                   940                    1
MULT/DIV                  960                    1.02
SIN/COS/TAN               4970                   5.29
ARCCOS/ARCSIN/ARCTAN      68,000                 72.34
SQUARE ROOT               3250                   3.46
CROSS PRODUCT             3700                   3.94
SCALAR PRODUCT            2200                   2.34
Algorithm 1 Iteration     199,280                212.08

5. Discussion
5.1. Convergence and Uniqueness

The convergence of the algorithm depends on the length ratio Q, i.e., the ratio of the
distance laj between the actuator and the joint to the distance ljs between the joint and the
sensor location. It has been shown that the algorithm converges uniquely unless Q is
between 0.5 and 1. Moreover, the number of iterations required for a given error threshold
increases as Q increases. As illustrated in Figure 20, this behaviour is plausible: for a
fixed length of the first bone, the permissible angular range δ is limited by the length of the
second one.

Figure 20. Possible angle ranges δ for two different lengths of the second bone at a fixed length of the
first bone (Q is higher for the left realization).

Regarding uniqueness, the role of Q may well lead to problems in a subsequent
application: care must be taken when designing and positioning the sensors and sources
to ensure that Q does not fall between 0.5 and 1.

5.2. Results

The algorithm was implemented in an existing real-time framework. A periodic
movement was implemented using a simulation pipeline. The simulations have shown that
for an unmoved kinematic chain, the estimate agrees with the simulated posture.
Subsequently, a periodic movement with a frequency of 1 Hz was performed, which showed
an error of approximately 1 degree. Outside the simulation, a larger deviation is to be
expected. Due to the dipole approximation, small deviations occur in the modelling. In
addition, noise was not included in these first investigations; it would not prevent the
algorithm from working but would degrade the result. Inaccuracies caused by noise may be
mitigated by averaging the input signal. First experimental results indicate that the
algorithm works in practice. It has been observed that the averaged values each have an
offset with respect to the true values. These deviations can be explained by model
properties that have not yet been taken into account. For example, the algorithm assumes
that the sensor is located exactly in the centre of the kinematic element, in which case
the sensor would move on a circular path. However, as the sensor cannot be located in the
centre in reality, it tends to move along an elliptical curve, which leads to an
angle-dependent error. The dipole approach leads to further errors: the closer the sensor
is to the source, the worse the approximation becomes. In [25], it was shown that the
deviation from the dipole approximation at a distance of 20 times the radius of the source
is only 0.0027%. This is the case in our setup. If the algorithm is used in a setup where
the distance cannot be kept large enough, approaches as in [26] can be used.
The variance of the measurement series differs greatly. In particular, it is unclear why
the variance becomes smaller as the sensor angle increases. The signal quality was almost
the same in all measurement series, so at first glance a similar variance could be assumed
for each setup. However, the transfer function from the MV to the sensor projection is
non-linear, which would explain a stronger fluctuation in the 0° range.
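Averaging the input signal, as suggested above, could for instance be a simple moving-average filter applied to the sensor samples before pose estimation. This is an illustrative sketch under our own assumptions (window length, data layout), not the authors' pre-processing:

```c
#include <stddef.h>

#define W 8 /* illustrative window length; a real choice trades noise vs. lag */

/* Ring-buffer moving average over the last W samples. */
typedef struct {
    double buf[W];
    size_t idx;    /* next slot to overwrite */
    size_t count;  /* number of samples seen, saturates at W */
    double sum;    /* running sum of the samples currently in buf */
} MovAvg;

void movavg_init(MovAvg *m)
{
    m->idx = 0;
    m->count = 0;
    m->sum = 0.0;
    for (size_t i = 0; i < W; i++) m->buf[i] = 0.0;
}

/* Push one sample and return the current average. */
double movavg_push(MovAvg *m, double x)
{
    m->sum -= m->buf[m->idx]; /* drop the oldest sample */
    m->buf[m->idx] = x;       /* store the new sample   */
    m->sum += x;
    m->idx = (m->idx + 1) % W;
    if (m->count < W) m->count++;
    return m->sum / (double)m->count;
}
```

Such a filter would smooth the noise-driven fluctuations of the estimate at the cost of additional latency of up to W sample periods.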

5.3. Computational Effort and Timings

For a motion capture system, the sample rate at which positions are captured is of
particular importance: a high sample rate is needed for fast movements. In human–machine
interface applications, a low latency in the range of 15 ms is required; if the latency
increases, operation becomes more difficult for the user. To guarantee such a maximum
latency, the required computing time must be kept as low as possible. For this analysis,
the implemented code was divided into individual weighted operations. A computing time of
3 µs per joint and 60 µs for a kinematic construct consisting of a hand with up to 20
joints was determined. During the investigation, we used 15 iterations, which showed good
results for most setups. In a real application, this number could be reduced even further:
because of the limited moving speed of each kinematic chain element, a well-fitting initial
orientation can be used. Especially for slow motions, this would reduce the number of
iterations needed. Note that the investigation shows only a theoretically possible
calculation time. In a real setup, there will be additional stages in the signal processing
pipeline, primarily pre-processing and feature extraction, which will further increase the
computational effort. Nevertheless, these promising results and the ideas for finding
optimal conditions suggest that the theoretical maximum latency can also be met in a real
application. Moreover, note that so far the implementation has been realized without any
computational parallelization. As an example, the hand consists of several kinematic chains
that can move independently of each other; a multi-threading implementation would thus be a
further step to increase efficiency and decrease latencies.
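The warm-start idea, i.e., reusing the previous frame's pose as the initial orientation, can be illustrated generically. Here solve_joint and toy_step are placeholders of ours rather than the authors' API; the point is only that a closer initial estimate triggers the convergence check sooner.

```c
#include <math.h>

typedef double (*step_fn)(double estimate, double measurement);

static int g_steps; /* counts iterations for the demonstration below */

/* One contraction step toward the measurement; stands in for one
 * iteration of the pose estimation algorithm. */
double toy_step(double est, double meas)
{
    g_steps++;
    return 0.5 * (est + meas);
}

/* Iterate until the update falls below tol or max_iter is reached.
 * 'initial' is the warm start, e.g. the previous frame's estimate. */
double solve_joint(step_fn step, double meas, double initial,
                   int max_iter, double tol)
{
    double est = initial;
    for (int i = 0; i < max_iter; i++) {
        double next = step(est, meas);
        if (fabs(next - est) < tol) return next; /* converged early */
        est = next;
    }
    return est;
}
```

Starting from a cold initial value far from the new measurement takes noticeably more toy iterations than starting from the previous estimate, which mirrors why slow motions would need fewer than the 15 iterations used above.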

6. Conclusion and Outlook


This work has dealt with a new algorithm for motion tracking. To this end, we have
introduced spatially rotating magnetic dipole sources. In this context, the maximum vector
(MV) has been introduced as a new signal feature, and the spatial relationship between
the MV, the sensor position, and its orientation was investigated. The correlations were
linked to the model of a kinematic chain, such that this self-consistency was exploited and
a computationally efficient algorithm was developed. The algorithm was implemented and
validated in a real-time signal processing code. The performance was evaluated in terms of
the accuracy of the results and the required computational effort. It was shown that the
presented algorithm is very efficient in determining the posture of a kinematic chain. The
theoretical functionality of the algorithm has already been demonstrated by simulation and
with the realization of a first demonstrator. These were initially very simple movements
with only one degree of freedom. More complex movements of real people are planned
for a later stage of research. The behaviour of the algorithm in the presence of magnetic
interference or in the event of a sensor failure has not yet been investigated and will be
addressed in the future.
The algorithm currently uses a strong simplification of a kinematic chain. In addition,
the modelling must take into account further parameters such as the thickness, the precise
position of the sensor on the element, and a potential tilting of the sensor. Considering
more details would increase the number of model parameters, and an automatic or
semi-automatic calibration procedure should be developed to keep them manageable. Moreover,
an additional automated distance measure could be beneficial for the stability of the
outcomes. Finally, since a 3D coil generates a signal that provides three independent
pieces of information while the current implementation uses only two phase components,
additional distance information can be obtained from the absolute signal value. This
could be integrated with a Kalman filter approach as in [27] and/or used to correct a
parameter of the model during runtime so that the error can be minimized.

7. Patents
The content of this paper was used for a patent application. It was granted by the
German Patent Office with the patent number DE 10 2023 119 167.

Author Contributions: Conceptualization, T.S., L.K. and G.S.; methodology, T.S.; software, T.S.;
validation, T.S.; formal analysis, T.S.; investigation, T.S.; resources, T.S., J.H. and M.B.; data curation,
T.S.; writing—original draft preparation, T.S.; writing—review and editing, T.S., J.H., M.B., R.B., L.K.
and G.S.; visualization, T.S. and G.S.; supervision, R.B., L.K. and G.S.; project administration, L.K. and
G.S.; funding acquisition, L.K. and G.S. All authors have read and agreed to the published version of
the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Minh, V.T.; Katushin, N.; Pumwa, J. Motion tracking glove for augmented reality and virtual reality. J. Behav. Robot. 2019, 10, 160–166.
2. Mirelman, A.; Bernad-Elazari, H.; Thaler, A.; Giladi-Yacobi, E.; Gurevich, T.; Gana-Weisz, M.; Saunders-Pullman, R.; Raymond,
D.; Doan, N.; Bressman, S.B.; et al. Arm swing as a potential new prodromal marker of Parkinson’s disease. Mov. Disord. 2016, 31,
1527–1534.
3. Qualisys AB. Technical Specifications OMC System. Available online: https://www.qualisys.com/cameras/miqus/#tech-specs (accessed on 9 October 2024).
4. Dipietro, L.; Sabatini, A.; Dario, P. A Survey of Glove-Based Systems and Their Applications. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2008, 38, 461–482.
5. Roetenberg, D.; Luinge, H.; Slycke, P. Xsens MVN: Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors. Xsens Motion Technol. BV Tech. Rep. 2009, 1, 1–7.
6. Santoni, F.; De Angelis, A.; Moschitta, A.; Carbone, P. MagIK: A Hand-Tracking Magnetic Positioning System Based on a
Kinematic Model of the Hand. IEEE Trans. Instrum. Meas. 2021, 70, 9507313.
7. Ma, Y.; Mao, Z.; Jia, W.; Li, C.; Yang, J.; Sun, M. Magnetic Hand Tracking for Human-Computer Interface. IEEE Trans. Magn. 2011,
47, 970–973.
8. Elbamby, M.S.; Perfecto, C.; Bennis, M.; Doppler, K. Toward low-latency and ultrareliable virtual reality. IEEE Netw. 2018, 32,
78–84.
9. Maier, M.; Chowdhury, M.; Rimal, B.P.; Van, D.P. The Tactile Internet: Vision, Recent Progress, and Open Challenges. IEEE
Commun. Mag. 2016, 54, 138–145.
10. Plotkin, A.; Paperno, E. 3-D Magnetic Tracking of a Single Subminiature Coil with a Large 2-D Array of Uniaxial Transmitters. IEEE Trans. Magn. 2003, 39, 3295–3297.
11. Ran, X.; Qiu, W.; Hu, H. Magnetic Dipole Target Localization Using Improved Salp Swarm Algorithm. In Proceedings of the 42nd
Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023; pp. 3372–3377.
12. Zeising, S.; Thalmayer, A.; Fischer, G.; Kirchner, J. Toward Magnetic Localization of Capsule Endoscopes during Daily Life Activities. In Proceedings of the 2021 Kleinheubach Conference, Miltenberg, Germany, 28–30 September 2021; pp. 1–4.
13. Shen, H.-M.; Ge, D.; Lian, C.; Yue, Y. Real-Time Passive Magnetic Localization Based on Restricted Kinematic Property for Tongue-
Computer-Interface. In Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics
Hong Kong, China, 8–12 July 2019.
14. Paperno, E.; Sasada, I.; Leonovich, E. A new method for magnetic position and orientation tracking. IEEE Trans. Magn. 2001, 37,
1938–1940.
15. Raab, F.; Blood, E.; Steiner, T.; Jones, H. Magnetic Position and Orientation Tracking System. IEEE Trans. Aerosp. Electron. Syst.
1979, AES-15, 709–718.
16. Paperno, E.; Keisar, P. Three-Dimensional Magnetic Tracking of Biaxial Sensors. IEEE Trans. Magn. 2004, 40, 1530–1536.
17. Nara, T.; Suzuki, S.; Ando, S. A closed-form formula for magnetic dipole localization by measurement of its magnetic field and
spatial gradients. IEEE Trans. Magn. 2006, 42, 3291–3293.

18. Fan, L.; Kang, X.; Zheng, Q.; Zhang, X.; Liu, X.; Chen, C.; Kang, C. A Fast Linear Algorithm for Magnetic Dipole Localization
Using Total Magnetic Field Gradient. IEEE Sensors J. 2018, 18, 1032–1038.
19. Fischer, C.; Quirin, T.; Chautems, C.; Boehler, Q.; Pascal, J.; Nelson, B.J. Gradiometer-Based Magnetic Localization for Medical Tools. IEEE Trans. Magn. 2023, 59, 2.
20. Sharma, S.; Ding, G.; Aghlmand, F.; Talkhooncheh, A.; Shapiro, M.; Emami, A. Wireless 3D Surgical Navigation and Tracking System with 100 µm Accuracy Using Magnetic-Field Gradient-Based Localization. IEEE Trans. Med. Imaging 2021, 40, 2066–2079.
21. Bao, J.; Hu, C.; Lin, W.; Wang, W. On the magnetic field of a current coil and its localization. In Proceedings of the IEEE
International Conference on Automation and Logistics, Zhengzhou, China, 15–17 August 2012; pp. 573–577.
22. Chair for Digital Signal Processing and System Theory. Real-Time Framework. Available online: https://dss-kiel.de/index.php/
research/realtime-framework (accessed on 9 October 2024).
23. Hoffmann, J.; Bald, C.; Schmidt, T.; Boueke, M.; Engelhardt, E.; Krüger, K.; Elzenheimer, E.; Hansen, C.; Maetzler, W.; Schmidt, G.
Designing and Validating Magnetic Motion Sensing Approaches with a Real-time Simulation Pipeline. Curr. Dir. Biomed. Eng.
2023, 9, 455–458.
24. Stefan Mayer Instruments GmbH Co. KG, Miniatur Fluxgate FLC100. Available online: https://stefan-mayer.com/de/produkte/
magnetometer-und-sensoren/magnetfeldsensor-flc-100.html (accessed on 9 October 2024).
25. Paperno, E.; Plotkin, A. Cylindrical induction coil to accurately imitate the ideal magnetic dipole. Sens. Actuators A Phys. 2004, 112, 248–252.
26. Ren, Y.; Hu, C.; Xiang, S.; Feng, Z. Magnetic Dipole Model in the Near-field. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015.
27. Boueke, M.; Hoffmann, J.; Schmidt, T.; Bald, C.; Bergholz, R.; Schmidt, G. Model-based Tracking of Magnetic Sensor Gloves in Real Time. Curr. Dir. Biomed. Eng. 2023, 9, 85–88.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
