Sensors 2024, 24, 6947
Article
A New Iterative Algorithm for Magnetic Motion Tracking
Tobias Schmidt 1,2 , Johannes Hoffmann 1 , Moritz Boueke 1 , Robert Bergholz 3 , Ludger Klinkenbusch 2
and Gerhard Schmidt 1, *
1 Digital Signal Processing and System Theory, Institute of Electrical Engineering and Information Technology,
Faculty of Engineering, Kiel University, Kaiserstr. 2, 24143 Kiel, Germany; [email protected] (T.S.);
[email protected] (J.H.); [email protected] (M.B.)
2 Computational Electromagnetics, Institute of Electrical Engineering and Information Technology,
Faculty of Engineering, Kiel University, Kaiserstr. 2, 24143 Kiel, Germany; [email protected]
3 Pediatric Surgery, University Hospital Schleswig-Holstein, Kiel University, Arnold-Heller Str. 3,
24105 Kiel, Germany; [email protected]
* Correspondence: [email protected]
Abstract: Motion analysis is of great interest to a variety of applications, such as virtual and augmented reality and medical diagnostics. Hand movement tracking systems, in particular, are used as a human–machine interface. In most cases, these systems are based on optical or acceleration/angular speed sensors. These technologies are already well researched and used in commercial systems. In special applications, it can be advantageous to use magnetic sensors to supplement an existing system or even replace the existing sensors. The core of a motion tracking system is a localization unit. The relatively complex localization algorithms present a problem in magnetic systems, leading to a relatively large computational complexity. In this paper, a new approach for pose estimation of a kinematic chain is presented. The new algorithm is based on spatially rotating magnetic dipole sources. A spatial feature is extracted from the sensor signal: the dipole direction in which the maximum magnitude value is detected at the sensor. This is introduced as the "maximum vector". A relationship between this feature, the location vector (pointing from the magnetic source to the sensor position) and the sensor orientation is derived and subsequently exploited. By modelling the hand as a kinematic chain, the posture of the chain can be described in two ways: the knowledge about the magnetic correlations and the structure of the kinematic chain. Both are bundled in an iterative algorithm with very low complexity. The algorithm was implemented in a real-time framework and evaluated in a simulation and first laboratory tests. In tests without movement, it could be shown that there was no significant deviation between the simulated and estimated poses. In tests with periodic movements, an error in the range of 1° was found. Of particular interest here is the required computing power. This was evaluated in terms of the required computing operations and the required computing time. Initial analyses have shown that a computing time of 3 µs per joint is required on a personal computer. Lastly, the first laboratory tests basically prove the functionality of the proposed methodology.

Keywords: magnetic motion tracking; localization; rotating magnetic dipole; iterative algorithms; human–machine interface

Citation: Schmidt, T.; Hoffmann, J.; Boueke, M.; Bergholz, R.; Klinkenbusch, L.; Schmidt, G. A New Iterative Algorithm for Magnetic Motion Tracking. Sensors 2024, 24, 6947. https://doi.org/10.3390/s24216947

Academic Editor: Kimiaki Shirahama

Received: 11 October 2024; Revised: 25 October 2024; Accepted: 28 October 2024; Published: 29 October 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Human motion tracking is of great interest to many applications such as virtual/augmented reality [1] and medical diagnostics [2]. Among the several variants of motion tracking, this contribution focuses on hand-motion tracking as a human–machine interface for robot-assisted surgery. However, the proposed method can also be used for other applications where the movement can be modelled by kinematic chains.

Camera-based (optical motion capture, OMC) systems, which are considered to be the gold standard in motion tracking methods, allow for accuracy in the millimeter or even
submillimeter range [3]. However, OMC systems have the disadvantage that direct lines of
sight between the objects (usually reflecting markers) and relevant cameras are required.
An alternative method is the use of gloves with attached inertial measurement units (IMUs) or flex sensors. Several of the corresponding solutions are shown in [4]. The well-investigated IMUs are used in commercial applications such as XSens' Quantum Metaglove [5].
However, IMU-based systems do not measure the quantities of interest (i.e., positions and
angles) directly but instead measure their time derivatives (i.e., acceleration and angular
speed). This leads to drift problems.
Approaches based on magnetic sensors are still in the early phases of research
(see [6,7], for examples). Future magnetic methods could be either a stand-alone func-
tional alternative or be used as a supplement to improve the performance of optical or
IMU-based systems.
The present work aims to design an input device for robot-assisted surgery based on
magnetic sensor technology. For this purpose, a glove will be equipped with magnetic
sensors such that each kinematic element can be assigned to at least one sensor. The heart
of the proposed motion tracking system is the localization unit. This unit determines the pose of the object at a constant sample rate, with a typical sampling period of 15 ms, which in turn leads to a localization update rate of about 67 Hz [8,9]. A longer sampling period can lead to disruptive handling in human–machine interface applications. In
our case, a kinematic chain with up to 20 degrees of freedom has to be estimated every
15 ms. The allowed latency is one of the challenges, as it leads to limited computing time
and the algorithms must be designed to work efficiently.
Magnetic localization is usually solved with numerical or analytical approaches. Nu-
merical solutions are used in applications with 1D sensors or flexible setups [10–13]. Ana-
lytical methods are used with fixed sensor array configurations, such as 3D sensors [14–16]
or gradient sensors [17–20]. On the one hand, numerical methods are generally compu-
tationally intensive, which can become a problem if many (>20) sensors are involved, as
it may no longer be possible to maintain the latency time. On the other hand, analytical
methods usually use defined sensor setups such as 3D sensors. These may already be too
large for the structures to be observed, which in case of a finger are only a few centimetres
in size. In [6], a magnetic sensor glove with a numerical localization approach is described.
In the approaches mentioned above, poses are determined by minimizing a cost function while maintaining some kinematic constraints; up to 55 hand reconstructions per second have been achieved thus far. Since at least about 67 hand reconstructions per second are required for surgical interfaces, these algorithms are not yet capable of solving the problem. With further progress in computer hardware, these algorithms could become able to satisfy these conditions in the future. However, we are looking for a solution that can be executed on standard personal computers, where further processing beyond motion analysis (e.g., gesture recognition) usually needs to be executed as well.
For the overall system, we use two nested localization methods: The external local-
ization is responsible for the absolute localization of a reference point within a defined
measurement volume. In our case, this reference point could be the wrist, for example. A
3D coil is attached to this wrist, which can then be localized using conventional algorithms,
as shown in [15,16]. Based on this, there is an internal localization. This estimates the position and orientation of the sensors attached to the fingers with respect to the reference point
mentioned above. To this end, the individual fingers are modelled as kinematic chains.
This offers an advantage in that the number of degrees of freedom is reduced. In general, a
1D sensor has five degrees of freedom: three for the position and two for the orientation. By
attaching the sensor to the kinematic chain, the number of degrees of freedom per sensor
is reduced to two. Here, movement is limited by the rotation of the joints. This will be
utilized in an efficient algorithm, which will be explained in the following. An advantage of
the presented algorithm is that it combines localization and mapping to a kinematic chain.
In this way, prior knowledge about a kinematic chain is integrated into a localization, thus
narrowing down the solution space and simplifying the calculation. An overview of this
nested localization scheme is shown in Figure 1.
Figure 1. System overview: The illustration includes an external localization (blue) consisting of a
defined setup of (here 8) sensors. The inner localization (red) consists of a 3D coil which is attached
to the wrist as well as magnetic 1D sensors which are attached to each finger element. Following the
localization, gesture recognition or processing of the data for the human–machine interface can be
carried out.
First, we will introduce a magnetic signal feature called the “maximum vector”. Then,
we discuss the relations between this feature, the sensor position, and the sensor orientation.
Eventually, these relations are linked to the known anatomy through an iterative algorithm.
For validation, the algorithm was implemented in a real-time environment and tested with
a simulation and an initial laboratory setup. The paper closes with a discussion about the
results and the restrictions of the algorithm. The central idea and a suitable setup is shown
in Figure 2.
(Figure 2: a 3D coil (transmitter) at the origin of the coordinate system; a kinematic chain with joints, connections, and magnetic sensors; the receiver chain from sensor signals via AD converter and pre-processing to feature extraction (max vector), projection estimation, and estimated poses; and the transmitter chain from spatial rotation via signal generation to DA converter and current amplifier.)
Figure 2. Typical example of use of the presented algorithm: At the origin of the coordinate system a
3D magnetic transmitter is located. A kinematic chain is equipped with 1D magnetic sensors, such as
fluxgate magnetometers or magnetoelectric sensors, on every chain element. The kinematic chains
are connected through joints with ellipsoidal cross-sections, each providing two degrees of freedom.
Any additional information from the kinematic chain about the position is used to increase the speed
of the localization algorithm.
Sensors 2023, 24, 6947 4 of 20
For the generation of the magnetic field ⃗Bdip , we apply the 3D coil as represented in
Figure 3. It consists of a superposition of three orthogonal coils. For the current application,
this source can be represented by a superposition of three magnetic dipoles polarized in the
x-, y-, and z-directions, respectively. Clearly, by appropriately weighting the amplitudes (i.e., currents) of the three orthogonal coils, an arbitrary single dipole can be created.
Eventually, this will be used to form a single dipole that spatially rotates as a function of
time. Figure 2 shows the setup. The source is located at the origin and the sensors are
aligned with the chain elements.
(Figure 3: the 3D coil, approximately 3 cm in size; axes in m.)
The magnetic dipole moment m⃗ can point in any direction. In spherical coordinates, the Cartesian components of ⃗em read (see Figure 4)

⃗em = [cos(ϕm) sin(θm), sin(ϕm) sin(θm), cos(θm)]ᵀ. (5)
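Equation (5) also prescribes how to drive the 3D coil: the three Cartesian components of ⃗em are, up to a common factor, the current weights of the x-, y-, and z-coils. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def dipole_direction(phi_m, theta_m):
    """Unit vector e_m of the magnetic dipole moment in Cartesian
    coordinates, built from the spherical angles as in Equation (5)."""
    return np.array([
        np.cos(phi_m) * np.sin(theta_m),
        np.sin(phi_m) * np.sin(theta_m),
        np.cos(theta_m),
    ])

def coil_weights(phi_m, theta_m, m0=1.0):
    """Relative current weights for the x-, y-, and z-coils that
    synthesize a single dipole of magnitude m0 pointing along e_m."""
    return m0 * dipole_direction(phi_m, theta_m)
```

Sweeping ϕm and θm over time then yields the spatially rotating dipole used in the following.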
Figure 4. Geometry used for the derivation: ⃗r and ⃗es both lie in the xy-plane. ϕm and θm define the orientation of the rotating magnetic dipole m⃗ in spherical coordinates.
Inserting Equations (4) and (5) into Equation (3) yields the three Cartesian field components of the normalized dipole field:

Bnorm,x(ϕ, ϕm, θm) = (3 cos²(ϕ) − 1) cos(ϕm) sin(θm) + 3 sin(ϕm) sin(θm) sin(ϕ) cos(ϕ), (6)

Bnorm,y(ϕ, ϕm, θm) = (3 sin²(ϕ) − 1) sin(ϕm) sin(θm) + 3 cos(ϕm) sin(θm) cos(ϕ) sin(ϕ), (7)

Bsensor(ϕ, ϕs, ϕm, θm, r) = m/(4πr³) · (cos(ϕs) Bnorm,x(ϕ, ϕm, θm) + sin(ϕs) Bnorm,y(ϕ, ϕm, θm)). (9)
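Equations (6), (7) and (9) can be checked numerically; the sketch below (our naming) also confirms the two observations that follow, namely the 1/r³ scaling and the maximum magnitude at θm = π/2:

```python
import numpy as np

def b_norm_x(phi, phi_m, theta_m):
    # Equation (6)
    return ((3.0 * np.cos(phi)**2 - 1.0) * np.cos(phi_m) * np.sin(theta_m)
            + 3.0 * np.sin(phi_m) * np.sin(theta_m) * np.sin(phi) * np.cos(phi))

def b_norm_y(phi, phi_m, theta_m):
    # Equation (7)
    return ((3.0 * np.sin(phi)**2 - 1.0) * np.sin(phi_m) * np.sin(theta_m)
            + 3.0 * np.cos(phi_m) * np.sin(theta_m) * np.cos(phi) * np.sin(phi))

def b_sensor(phi, phi_s, phi_m, theta_m, r, m=1.0):
    # Equation (9): projection of the normalized dipole field onto the sensor axis.
    return (m / (4.0 * np.pi * r**3)) * (np.cos(phi_s) * b_norm_x(phi, phi_m, theta_m)
                                         + np.sin(phi_s) * b_norm_y(phi, phi_m, theta_m))
```

Evaluating |b_sensor| over a grid of θm shows the maximum at θm = π/2, and halving r increases the signal by a factor of eight.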
Obviously, the variation in the distance between the sensor and the rotating dipole affects the measured signal according to 1/r³. Regarding the variation in θm, the measured signal becomes a maximum only if θm = π/2. Moreover, we remark that the component
of the rotating dipole which is perpendicular to the plane spanned by ⃗er and ⃗es has no
effect on the measured signal. From these observations we deduce that the MV must also
lie in that plane. To find the desired relationship between the three unit vectors, we—
without limiting the generality—place the location vector ⃗r on the x-axis (see the left side of
Figure 5). For the Cartesian components of the corresponding unit vectors we thus have:
⃗er = [1, 0]ᵀ, ⃗em = [cos(ϕm), sin(ϕm)]ᵀ, ⃗es = [cos(ϕs), sin(ϕs)]ᵀ. (10)
dBsensor(ϕm)/dϕm = m/(4πr³) · (−2 sin(ϕm) cos(ϕs) − sin(ϕs) cos(ϕm)) (15)
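Setting Equation (15) to zero yields tan(ϕmax) = −tan(ϕs)/2, the relation inverted later in Equation (20). A quick numerical check (a sketch; the in-plane signal 2 cos(ϕm) cos(ϕs) − sin(ϕm) sin(ϕs) is the form referred to as Equation (14) in the text, with the constant factor dropped):

```python
import numpy as np

def in_plane_signal(phi_m, phi_s):
    # Sensor signal for er on the x-axis with all vectors in the xy-plane,
    # up to the constant factor m/(4*pi*r**3); cf. Equations (14) and (23).
    return 2.0 * np.cos(phi_m) * np.cos(phi_s) - np.sin(phi_m) * np.sin(phi_s)

phi_s = np.radians(35.0)
grid = np.linspace(-np.pi, np.pi, 1_000_001)
phi_max = grid[np.argmax(in_plane_signal(grid, phi_s))]
# The maximizer satisfies tan(phi_max) = -tan(phi_s)/2, so phi_s can be
# recovered as arctan(-2*tan(phi_max)).
```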
Figure 5. The relation in Equation (17) is independent of the angle ϕ. Moreover, the unique relationship between the three unit vectors ⃗es, ⃗emax, and ⃗er is clarified.
The aim is to determine the sensor orientation ⃗es from a given ⃗er and a measured MV ⃗emax. As an example of a systematic procedure, we start from a given origin (i.e., the location of the "rotating" 3D coil) and a given sensor location ⃗r.
1. First, we determine the "rotation axis" ⃗en:

⃗en = (⃗emax × ⃗er)/∥⃗emax × ⃗er∥. (18)
2. In the second step, the angle between the MV and the location unit vector is calculated.
3. Subsequently, the angle between the location unit vector and the sensor orientation is determined from Equation (17):

ϕs = arctan(−2 tan(ϕmax)). (20)
4. Finally, for any given ⃗er , the sensor orientation ⃗es is calculated by
Figure 6 shows the calculated sensor orientations as blue vectors starting at different
sensor locations ⃗r in the xy-plane. The MV is located at the origin and polarized in the
y-direction. For the 3D case, i.e., if ⃗r is an arbitrary vector, we simply have to rotate the blue
sensor orientations around the MV.
Figure 6. Blue vectors: Calculated sensor orientations ⃗es for different values of the sensor location ⃗r.
The starting point of each blue vector represents the corresponding ⃗r. Yellow vector: The maximum
vector at the origin, always polarized in the y-direction. Note that the lengths of the blue vectors are
not of interest here, as only the directions are relevant.
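The four-step procedure above can be sketched as follows. Equations (19) and (21) are not reproduced in the excerpt, so two assumptions are made explicit in the code: ϕmax is taken as the signed in-plane angle between ⃗emax and ⃗er, and ⃗es is assembled directly in the plane spanned by ⃗er and ⃗emax (equivalent to a rotation about ⃗en):

```python
import numpy as np

def sensor_signal(em, er, es):
    """Normalized dipole field for dipole direction em at location direction
    er, projected onto the sensor axis es: B ∝ 3(em·er)er − em
    (the scalar factor m/(4πr³) is omitted)."""
    B = 3.0 * np.dot(em, er) * er - em
    return np.dot(B, es)

def estimate_orientation(er, emax):
    """Steps 1-4: sensor orientation es from er and the measured MV."""
    u = er / np.linalg.norm(er)
    # In-plane unit vector perpendicular to er (degenerate if emax || er).
    v = emax - np.dot(emax, u) * u
    v /= np.linalg.norm(v)
    # Step 2 (assumed form of Eq. (19)): signed angle between the MV and er.
    phi_max = np.arctan2(np.dot(emax, v), np.dot(emax, u))
    # Step 3, Eq. (20).
    phi_s = np.arctan(-2.0 * np.tan(phi_max))
    # Step 4 (assumed form of Eq. (21)): assemble es in the er/emax plane.
    return np.cos(phi_s) * u + np.sin(phi_s) * v

# Demo: brute-force the MV for a known setup, then reconstruct es.
er = np.array([1.0, 0.0, 0.0])
es_true = np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0)), 0.0])
phis = np.linspace(-np.pi, np.pi, 20_001)
ems = np.stack([np.cos(phis), np.sin(phis), np.zeros_like(phis)], axis=1)
signals = np.array([sensor_signal(em, er, es_true) for em in ems])
emax = ems[np.argmax(signals)]
es_est = estimate_orientation(er, emax)
# es_est agrees with es_true up to sign (a 1D sensor cannot tell ±es apart).
```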
3. Signal Processing
In this section, it is shown how (temporal) signal processing can be used to extract
the required signal feature. Subsequently, an iterative algorithm for pose estimation is
presented.
Figure 7. Track of the magnetic dipole m⃗(t) with starting point at the origin as a function of time. The tip of m⃗ moves on the surface of a sphere with radius m0, according to Equation (22), for Nω = ωθ/ωϕ = 10.
While searching for the MV, an obvious method would be to try each direction to detect the maximum field within a given period of time. Instead, we will prove in the following that the directions where no field is measured are orthogonal to the MV. We call such vectors zero-crossing vectors. By detecting two independent zero-crossing vectors, the MV can then be calculated by simply building their cross product. To show this, we first set Equation (14) (which is valid if ⃗em and ⃗es are lying in the xy-plane) to zero for ϕm = ϕzero:

0 = 2 cos(ϕzero) cos(ϕs) − sin(ϕzero) sin(ϕs), (23)

2 cot(ϕzero) = tan(ϕs). (24)
Equation (27) proves that the zero-crossing vector and the MV are orthogonal if both
are lying in the xy-plane. Since zero-crossing vectors and MVs are orthogonal, we conclude
that for the general case the zero-crossing vectors can again be obtained by a rotation
around the MV. Figure 8 illustrates an arbitrarily directed MV, the corresponding plane of
zero-crossing vectors, and the direction of the sensor.
(Figure 8: left, a 3D view of the maximum vector, the plane of zero-crossing vectors, and the sensor direction; right, the magnetic field signal at the sensor over time with zero markers.)
⃗emax = (⃗ezero,k × ⃗ezero,k−1)/∥⃗ezero,k × ⃗ezero,k−1∥, for (z+ ∧ dIz/dt > 0) or (z− ∧ dIz/dt < 0),
⃗emax = (⃗ezero,k−1 × ⃗ezero,k)/∥⃗ezero,k−1 × ⃗ezero,k∥, else. (28)
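Equation (28) can be sketched directly. The sign-selection condition on dIz/dt is condensed here into a single boolean (an assumption about how the two branches are distinguished in practice):

```python
import numpy as np

def max_vector_from_zero_crossings(e_zero_prev, e_zero_curr, use_first_branch):
    """MV from two independent zero-crossing vectors (Eq. 28). Both are
    orthogonal to the MV, so their cross product is (anti)parallel to it;
    the boolean selects between the two branch orderings."""
    if use_first_branch:
        c = np.cross(e_zero_curr, e_zero_prev)
    else:
        c = np.cross(e_zero_prev, e_zero_curr)
    return c / np.linalg.norm(c)
```

Whichever branch is taken, the result is parallel to the MV; the branch only fixes its sign.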
From the procedure described in Equations (18)–(21) (Section 2.2), we next determine
an update of the sensor orientation ⃗ês,i+1 according to
Figure 9. Flow chart of the iterative algorithm: The algorithm starts with a random initial orientation.
Then, the sensor position relative to the source is determined. Afterwards, the corresponding
orientation is calculated. When there is no relevant change between the data obtained with two
subsequent iterations, convergence is reached, and this orientation is the estimated result.
(Figure 10: block diagram relating the normalized sensor vector, the potential positions, the kinematic chain, the ground truth, and the estimated position.)
The convergence speed of the algorithm depends on the ratio Q of the length laj from the actuator point to the joint and the length ljs from this joint to the sensor:

Q = ljs/laj. (31)
Figure 11 shows the absolute angular error as a function of the number of
of iterations for each Q. The true sensor orientation is assumed to be in the z-direction. The
initial sensor orientation is always assumed to be in the y-direction, i.e., the corresponding
angular error starts at 90◦ .
(Figure 11: angular error over iterations for Q = 1/2.5, 1/3, 1/4, 1/5, 1/8, and 1/10.)
Figure 11. Angular error as a function of the iteration number: The figure shows the behaviour of the angular error over the iterations for different length ratios. The legend shows the corresponding Q for each curve. All curves tend closer to zero with each iteration.
3.4. Uniqueness
In this section, we will show that the procedure described above delivers a unique result if Q, as defined in Equation (31), is not between 0.5 and 1. For a proof, we refer to Figure 5 and note that the orientation ⃗es has two components, i.e., two scalar degrees of freedom. The input of the algorithm also consists of two given scalar variables, which leads to a problem with two given variables and two unknowns. To solve this, we first split the two-dimensional solution vector and show that each component can be calculated independently.
As previously shown, ⃗emax , ⃗er , and ⃗es lie in one plane. From the sensor position, the joint
position can be obtained by the vector addition of ⃗es scaled by the known length ljs . This
relationship shows that the joint position must also lie in the plane already shown. This
allows the detected ⃗emax to be used to determine a plane in which the solution vector lies.
This reduces the number of unknowns to 1 and the first degree of freedom can thus be
determined unambiguously. In the previous derivation, the relative relationship between ⃗er, ⃗es, and ⃗emax was shown. In order to show that a unique ⃗es can be assigned to each ⃗emax, a reference point must be selected. The angle ϕmax,j is defined as the angle between ⃗emax and the position vector of the joint ⃗rj, and the angle ϕs,j is defined as the angle between ⃗es and ⃗rj. These angles are shown in Figure 12.
We set the coordinate system such that ⃗er , ⃗es and ⃗emax lie in the xy-plane. Furthermore,
without limiting the generalization of the solution we assume that ⃗rj lies on the y-axis. The
position of the joint and the sensor can thus be described as follows:

⃗rj = [0, laj]ᵀ, (32)

⃗rs = [ljs cos(ϕs,j), laj + ljs sin(ϕs,j)]ᵀ, (33)

ϕr = arctan(ljs cos(ϕs,j)/(laj + ljs sin(ϕs,j))). (34)
These descriptions can now be used to determine ϕmax according to Equation (17):

ϕmax = −arctan((1/2) tan(ϕs − ϕrs)). (35)

This angle is now related to the position vector of the sensor. The angle ϕmax,j can be described as follows:

ϕmax,j = ϕmax − ϕr. (36)
The obtained formulas reveal the analytical relationship ϕmax,j(ϕs,j). For the uniqueness of the solution, the inverse function is required. Since inverting it analytically is not a trivial task, a lookup table was created with the axes swapped. Figure 13 shows the results obtained for different values of Q.
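Under one concrete reading of Equations (32)–(36) (our angle conventions, stated as assumptions: ⃗rj lies on the y-axis, ϕs,j is the direction of ⃗es measured from the x-axis, laj = 1, ljs = Q, and ϕrs in Equation (35) is taken as ϕr), the forward map and the uniqueness check behind Figure 13 can be sketched as:

```python
import numpy as np

def phi_max_j(phi_sj_deg, Q, l_aj=1.0):
    """Forward map phi_max,j(phi_s,j) built from Eqs. (32)-(36)."""
    a = np.radians(phi_sj_deg)
    l_js = Q * l_aj
    # Eq. (34): angle of the sensor position vector r_s relative to r_j.
    phi_r = np.arctan(l_js * np.cos(a) / (l_aj + l_js * np.sin(a)))
    # Angle between e_s and the direction of r_s.
    phi_s = a + phi_r - np.pi / 2
    # Eq. (35): MV angle relative to r_s.
    phi_max = -np.arctan(0.5 * np.tan(phi_s))
    # Eq. (36): refer the MV angle to r_j.
    return np.degrees(phi_max - phi_r)

def is_unique(Q, n=721):
    """The assignment phi_max,j -> phi_s,j is unique iff the forward map
    is strictly monotone over the full angular range."""
    vals = phi_max_j(np.linspace(0.0, 180.0, n), Q)
    d = np.diff(vals)
    return bool(np.all(d < 0) or np.all(d > 0))
```

With these conventions the map is strictly monotone (hence invertible by a lookup table) for small Q, but loses monotonicity for Q between 0.5 and 1, matching the uniqueness discussion.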
(Figure 12: definition of the angles ϕmax,j, between ⃗emax and ⃗rj, and ϕs,j, between ⃗es and ⃗rj; the 3D coil sits at the origin.)
Obviously, there is a unique solution only for a certain range of Q. For Q = 1, i.e., where the non-uniqueness is most significant, a simulation including a measurement of the convergence speed was carried out; see Figure 14. We observe that the maximum error does not approach zero but oscillates periodically and undamped.
(Figure 13: ϕs,j plotted over ϕmax,j in three panels; panel (c) shows the range 1 < Q with curves for Q = 1.1 to 1.6.)
Figure 13. The relation between ϕs,j and ϕmax,j is represented for different values of Q. The plots are
subdivided for different values of Q. In the ranges 0 < Q < 0.5 and Q > 1 there is a clear assignment,
i.e., there is a unique bidirectional relation between ϕmax,j and ϕs,j . However, between 0.5 and 1 we
observe a non-unique relation.
Figure 14. For Q = 1, the maximum angular error does not approach zero even after several iterations,
i.e., the algorithm is non-convergent.
4. Results
4.1. Simulation Results
For validation, the proposed iterative algorithm was numerically implemented and
the motion of a single joint was observed. To that end, an existing real-time framework
based on C was applied [22]. An overview of the used modules is shown in Figure 15. A
complete description of the implementation can be found in [23].
Figure 15. Simulation overview: The simulation is divided into two sections. The upper (dark) part simulates the motion and the resulting field at the sensor. In the lower part (bright), the described algorithm is implemented and the pose is calculated.
Figure 16. Simulation of a motion: All elements are in the yz-plane. The 3D coil source is located in
the origin. The first bone is aligned with the z-axis and its end represents the position of the joint.
The second bone moves from 90◦ to −90◦ with respect to the axis of the first bone.
The results of the simulations are shown in Figure 17a. The ground-truth angle between the second element and the z-axis is shown for comparison with the estimated angle. For better visibility of both signals, the estimation is shifted by 10°. The absolute error of the amplitude, which is plotted in Figure 17b, is in the range < 1° for most time steps.
(a) Estimated angles vs. simulated angles. (b) Error signal.

Figure 17. Panel (a) shows both the estimated angle and the simulated one. The dark red line represents the simulation (ground truth) while the light red line is the estimation of the described algorithm. The latter is shifted 10° to enhance the clarity of the visualization. In panel (b), the difference between the simulation and the estimation is plotted. We observe an error signal which follows the angle of the movement.

For an initial laboratory test, a prototype consisting of two kinematic elements made of PVC was fabricated (see Figure 18). The two elements are connected by a screw, allowing one degree of freedom. A 3D coil is attached at the beginning of the longer element while a fluxgate magnetometer is attached to the shorter element. Firstly, tests were carried out with a static setup, i.e., the prototype was held in a fixed position using suitable wedges, which were attached to the table and the prototype with double-sided adhesive tape.

Figure 18. Upper figure: The built prototype consists of two PVC elements, connected to each other with a screw allowing for one degree of freedom. The 3D coil source is located at one end of the longer element. On the shorter element, a fluxgate magnetometer [24] is mounted. The illustration in the lower figure shows the assembly for a 30° position.
We have used seven equidistant angles between 0° and 90°. The same software
used for the simulation has been applied for data acquisition and signal processing. A
series of 10 s measurements was recorded for each position, and a new angle estimate was
calculated every 20 ms. Hence, after 10 s we obtained 500 angle estimations represented in
Figure 19.
(Figure 19: the measured angle, in °, for the seven setup angles between 0° and 90°.)
Table 1. Time consumption of different complex operations for one million executions, normalized to
the (basic) addition operation.
5. Discussion
5.1. Convergence and Uniqueness
The convergence of the algorithm depends on the length ratio Q, i.e., the ratio of the
distance between the actuator and the joint laj and the distance between the joint and the
sensor location ljs . It has been shown that the algorithm uniquely converges unless Q is
between 0.5 and 1. Moreover, the number of iterations required for a given error threshold
increases as Q increases. As illustrated in Figure 20, this behaviour is plausible: for a fixed length of the first bone, the permissible angular range δ is limited by the length of the second one.
Figure 20. Possible angle ranges δ for two different lengths of the second bone at a fixed length of the
first bone (Q is higher for the left realization).
Regarding uniqueness, the role of Q may well lead to problems in a possible subsequent application: care must be taken when designing and positioning the sensors and sources to ensure that the possible Q does not fall between 0.5 and 1.
5.2. Results
The algorithm was implemented in an existing real-time framework. A periodic move-
ment was implemented using a simulation pipeline. The simulations have shown that for
an unmoved kinematic chain, the estimate agrees with the simulated posture. Subsequently,
a periodic movement with a frequency of 1 Hz was performed. The movement showed an error of approximately 1 degree. Outside the simulation, a larger deviation is to be expected: due to the dipole approximation, small deviations occur in the modelling. In addition, noise was not included in these first investigations; noise would not prevent the algorithm from working, but it would worsen the result. Inaccuracies caused by noise may be reduced by averaging the input signal. First experimental results indicate that the algorithm works in practice. It has been observed that the averaged values each have an offset relative to the true
values. These deviations can be explained by the fact that there are model properties that
have not yet been taken into account. For example, the algorithm assumes that the sensor
is located exactly in the centre of the kinematic element. In this case, the sensor would
move on a circular path. However, as the sensor cannot be located in the centre in reality,
it tends to move along an elliptical curve. This leads to an error depending on the angle.
The dipole approach leads to further errors. The closer the sensor is to the source, the
worse the approximation becomes. In [25], it was shown that the deviation from the dipole
approximation at a distance of 20 times the radius of the source is only 0.0027%. This is the
case in our setup. If the algorithm is used in a setup where the distance cannot be kept large
enough, approaches as in [26] can be used. The variance of the measurement series differs
Sensors 2023, 24, 6947 18 of 20
greatly. In particular, it is unclear why the variance becomes smaller and smaller as the sensor angle increases. The signal quality was almost the same in all measurement series, so at first glance a similar variance could be expected for each setup. However, the transfer function from the MV to the sensor projection is non-linear, which would explain the stronger fluctuation in the 0° range.
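The size of the dipole-approximation error mentioned above can be illustrated with a simple on-axis comparison between a circular loop and its equivalent point dipole (note: this is not the error metric used in [25], so the numbers differ; it only illustrates how quickly the error decays with distance):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in T*m/A

def loop_field_on_axis(I, R, z):
    """Exact on-axis field of a circular current loop (radius R, current I)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + z**2)**1.5)

def dipole_field_on_axis(I, R, z):
    """On-axis field of the equivalent point dipole, m = I * pi * R^2."""
    m = I * np.pi * R**2
    return MU0 * 2.0 * m / (4.0 * np.pi * z**3)

I, R = 1.0, 0.01        # 1 A, 1 cm loop radius (illustrative values)
z = 20.0 * R            # distance of 20 loop radii
rel_err = abs(dipole_field_on_axis(I, R, z) - loop_field_on_axis(I, R, z)) \
          / loop_field_on_axis(I, R, z)
# rel_err is a fraction of a percent at 20 radii and shrinks roughly
# quadratically, as (3/2)*(R/z)**2, with increasing distance.
```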
could be integrated with a Kalman filter approach as in [27] and/or correct a parameter of
the model during runtime so that the error can be minimized.
7. Patents
The content of this paper was used for a patent application. It was granted by the
German Patent Office with the patent number DE 10 2023 119 167.
Author Contributions: Conceptualization, T.S., L.K. and G.S.; methodology, T.S.; software, T.S.;
validation, T.S.; formal analysis, T.S.; investigation, T.S.; resources, T.S., J.H. and M.B.; data curation,
T.S.; writing—original draft preparation, T.S.; writing—review and editing, T.S., J.H., M.B., R.B., L.K.
and G.S.; visualization, T.S. and G.S.; supervision, R.B., L.K. and G.S.; project administration, L.K. and
G.S.; funding acquisition, L.K. and G.S. All authors have read and agreed to the published version of
the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Minh, V.T.; Katushin, N.; Pumwa, J. Motion tracking glove for augmented reality and virtual reality. J. Behav. Robot. 2019, 10, 160–166.
2. Mirelman, A.; Bernad-Elazari, H.; Thaler, A.; Giladi-Yacobi, E.; Gurevich, T.; Gana-Weisz, M.; Saunders-Pullman, R.; Raymond,
D.; Doan, N.; Bressman, S.B.; et al. Arm swing as a potential new prodromal marker of Parkinson’s disease. Mov. Disord. 2016, 31,
1527–1534.
3. Qualisys, A.B. Technical Specifications OMC System. Available online: https://www.qualisys.com/cameras/miqus/#tech-specs (accessed on 9 October 2024).
4. Dipietro, L.; Sabatini, A.; Dario, P. A Survey of Glove-Based Systems and Their Applications. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2008, 38, 461–482.
5. Roetenberg, D.; Luinge, H.; Slycke, P. Xsens MVN: Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors. Xsens Motion Technol. BV Tech. Rep. 2009, 1, 1–7.
6. Santoni, F.; De Angelis, A.; Moschitta, A.; Carbone, P. MagIK: A Hand-Tracking Magnetic Positioning System Based on a
Kinematic Model of the Hand. IEEE Trans. Instrum. Meas. 2021, 70, 9507313.
7. Ma, Y.; Mao, Z.; Jia, W.; Li, C.; Yang, J.; Sun, M. Magnetic Hand Tracking for Human-Computer Interface. IEEE Trans. Magn. 2011,
47, 970–973.
8. Elbamby, M.S.; Perfecto, C.; Bennis, M.; Doppler, K. Toward low-latency and ultrareliable virtual reality. IEEE Netw. 2018, 32,
78–84.
9. Maier, M.; Chowdhury, M.; Rimal, B.P.; Van, D.P. The Tactile Internet: Vision, Recent Progress, and Open Challenges. IEEE
Commun. Mag. 2016, 54, 138–145.
10. Plotkin, A.; Paperno, E. 3-D Magnetic Tracking of a Single Subminiature Coil with a Large 2-D Array of Uniaxial Transmitters. IEEE Trans. Magn. 2003, 39, 3295–3297.
11. Ran, X.; Qiu, W.; Hu, H. Magnetic Dipole Target Localization Using Improved Salp Swarm Algorithm. In Proceedings of the 42nd
Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023; pp. 3372–3377.
12. Zeising, S.; Thalmayer, A.; Fischer, G.; Kirchner, J. Toward Magnetic Localization of Capsule Endoscopes during Daily Life Activities. In Proceedings of the 2021 Kleinheubach Conference, Miltenberg, Germany, 28–30 September 2021; pp. 1–4.
13. Shen, H.-M.; Ge, D.; Lian, C.; Yue, Y. Real-Time Passive Magnetic Localization Based on Restricted Kinematic Property for Tongue-Computer-Interface. In Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Hong Kong, China, 8–12 July 2019.
14. Paperno, E.; Sasada, I.; Leonovich, E. A new method for magnetic position and orientation tracking. IEEE Trans. Magn. 2001, 37,
1938–1940.
15. Raab, F.; Blood, E.; Steiner, T.; Jones, H. Magnetic Position and Orientation Tracking System. IEEE Trans. Aerosp. Electron. Syst.
1979, AES-15, 709–718.
16. Paperno, E.; Keisar, P. Three-Dimensional Magnetic Tracking of Biaxial Sensors. IEEE Trans. Magn. 2004, 40, 1530–1536.
17. Nara, T.; Suzuki, S.; Ando, S. A closed-form formula for magnetic dipole localization by measurement of its magnetic field and
spatial gradients. IEEE Trans. Magn. 2006, 42, 3291–3293.
18. Fan, L.; Kang, X.; Zheng, Q.; Zhang, X.; Liu, X.; Chen, C.; Kang, C. A Fast Linear Algorithm for Magnetic Dipole Localization
Using Total Magnetic Field Gradient. IEEE Sensors J. 2018, 18, 1032–1038.
19. Fischer, C.; Quirin, T.; Chautems, C.; Boehler, Q.; Pascal, J.; Nelson, B.J. Gradiometer-Based Magnetic Localization for Medical Tools. IEEE Trans. Magn. 2023, 59, 2.
20. Sharma, S.; Ding, G.; Aghlmand, F.; Talkhooncheh, A.; Shapiro, M.; Emami, A. Wireless 3D Surgical Navigation and Tracking System with 100 µm Accuracy Using Magnetic-Field Gradient-Based Localization. IEEE Trans. Med. Imaging 2021, 40, 2066–2079.
21. Bao, J.; Hu, C.; Lin, W.; Wang, W. On the magnetic field of a current coil and its localization. In Proceedings of the IEEE
International Conference on Automation and Logistics, Zhengzhou, China, 15–17 August 2012; pp. 573–577.
22. Chair for Digital Signal Processing and System Theory. Real-Time Framework. Available online: https://dss-kiel.de/index.php/
research/realtime-framework (accessed on 9 October 2024).
23. Hoffmann, J.; Bald, C.; Schmidt, T.; Boueke, M.; Engelhardt, E.; Krüger, K.; Elzenheimer, E.; Hansen, C.; Maetzler, W.; Schmidt, G.
Designing and Validating Magnetic Motion Sensing Approaches with a Real-time Simulation Pipeline. Curr. Dir. Biomed. Eng.
2023, 9, 455–458.
24. Stefan Mayer Instruments GmbH & Co. KG. Miniature Fluxgate FLC100. Available online: https://stefan-mayer.com/de/produkte/magnetometer-und-sensoren/magnetfeldsensor-flc-100.html (accessed on 9 October 2024).
25. Paperno, E.; Plotkin, A. Cylindrical induction coil to accurately imitate the ideal magnetic dipole. Sens. Actuators Phys. 2004, 112,
248–252.
26. Ren, Y.; Hu, C.; Xiang, S.; Feng, Z. Magnetic Dipole Model in the Near-field. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015.
27. Boueke, M.; Hoffmann, J.; Schmidt, T.; Bald, C.; Bergholz, R.; Schmidt, G. Model-based Tracking of Magnetic Sensor Gloves in Real Time. Curr. Dir. Biomed. Eng. 2023, 9, 85–88.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.