Article
Integrating Abnormal Gait Detection with Activities of Daily
Living Monitoring in Ambient Assisted Living: A 3D
Vision Approach
Giovanni Diraco * , Andrea Manni and Alessandro Leone *
National Research Council of Italy, Institute for Microelectronics and Microsystems, SP Lecce-Monteroni km 1.200,
73100 Lecce, Italy; [email protected]
* Correspondence: [email protected] (G.D.); [email protected] (A.L.); Tel.: +39-0832-422514 (A.L.)
Abstract: Gait analysis plays a crucial role in detecting and monitoring various neurological and mus-
culoskeletal disorders early. This paper presents a comprehensive study of the automatic detection of
abnormal gait using 3D vision, with a focus on non-invasive and practical data acquisition methods
suitable for everyday environments. We explore various configurations, including multi-camera
setups placed at different distances and angles, as well as performing daily activities in different
directions. An integral component of our study involves combining gait analysis with the monitoring
of activities of daily living (ADLs), given the paramount relevance of this integration in the context
of Ambient Assisted Living. To achieve this, we investigate cutting-edge Deep Neural Network ap-
proaches, such as the Temporal Convolutional Network, Gated Recurrent Unit, and Long Short-Term
Memory Autoencoder. Additionally, we scrutinize different data representation formats, including
Euclidean-based representations, angular adjacency matrices, and rotation matrices. Our system’s
performance evaluation leverages both publicly available datasets and data we collected ourselves
while accounting for individual variations and environmental factors. The results underscore the
effectiveness of our proposed configurations in accurately classifying abnormal gait, thus shedding
light on the optimal setup for non-invasive and efficient data collection.
Keywords: Abnormal Gait Detection; Ambient Assisted Living; RGB-D Camera; Temporal Convolutional
Network; Gated Recurrent Unit; Long Short-Term Memory Autoencoder
1. Introduction
Gait disorders, which are common among the elderly population, pose a significant threat to their overall well-being. They often lead to balance problems, injuries, disabilities, and loss of independence, ultimately diminishing quality of life [1].
These disorders are closely associated with musculoskeletal and neurological impairments, making them some of the primary underlying causes of an abnormal gait. Remarkably, studies indicate that around 25% of individuals aged 70 to 74 exhibit gait irregularities, a figure that nearly doubles to almost 60% for those in the 80 to 84 age bracket [2]. The identification and characterization of gait anomalies play a pivotal role in diagnosing these musculoskeletal and neurological disorders. Gait analysis has emerged as a valuable method for assessing these conditions; however, it is fraught with challenges. One of the primary challenges lies in the absence of universally accepted standards for evaluating the gait of older individuals [3].
Some researchers focus on fundamental time–distance factors, like walking speed and step length. Others delve into intricate biomechanical analyses by examining various parameters, such as joint rotation and position, to uncover musculoskeletal and neurological irregularities [4]. Regardless of the chosen methodology, the process of manually analyzing gait data involves a labor-intensive phase, wherein relevant gait parameters are derived through visual inspection and annotation. The accuracy of this manual analysis
ous methods in recognizing different gait patterns. The integration of SLAM and pose
estimation improves the accuracy of each subsystem.
One notable common aspect across the reported papers is the use of supervised
machine learning (ML) and deep learning (DL) approaches for extracting gait features
in the context of anomaly detection and classification. Many of these studies involve
laboratory settings with healthy subjects simulating pathological gaits in order to train
supervised models. However, gait abnormalities are subject-specific; thus, models trained
only on simulated abnormalities might not guarantee the same performance achieved in a
laboratory setting when tested in the field with unhealthy subjects [13].
The challenges posed by the significance of subject-specific differences were under-
scored by Guo et al. [13]. They highlighted the necessity for unsupervised learning to rectify
the issue of poor generalization when applied to novel subjects. To tackle these challenges,
the authors introduced an approach called Maximum Cross-Domain Classifier Discrepancy
(MCDCD). This method was designed to alleviate domain discrepancies, thereby enhancing
the detection of gait abnormalities through Unsupervised Domain Adaptation (UDA). The
UDA method leveraged information gleaned from labeled training subjects to optimize
classification performance when applied to a previously unseen test subject.
Chen et al. [14] highlighted the multifaceted advantages associated with the analysis
of gait within the context of activities of daily living (ADLs). This analytical approach
proves instrumental in various aspects, including early disease diagnosis, the identification
of potential health concerns, and the ongoing monitoring of treatment efficacy. Utilizing
Parkinson’s Disease (PD) as an illustrative example, it is noteworthy that the early stages
of PD may manifest with subtle symptoms and inconspicuous locomotor or balance chal-
lenges. As emphasized by the authors, scrutinizing gait performance during diverse ADLs
allows for the efficient extraction and analysis of specific gait parameters, such as turning
steps, turning time, and turning angle, thereby enhancing the diagnostic capabilities for
PD. Beyond the realm of gait parameters, this approach yields invaluable insights into
mobility—an essential consideration for clinical applications. Continuous monitoring of
ADLs provides a dynamic assessment of mobility changes throughout the day and week,
thus offering a nuanced understanding of responses to interventions and the influence
of environmental factors on mobility. Accumulated research underscores that routine
assessment of everyday mobility serves as a pertinent indicator of disease progression and
rehabilitation effectiveness. Furthermore, the sequential analysis of various ADLs over
time emerges as a powerful tool for unearthing potential health issues.
Climent-Perez et al. [15] pointed out the potential benefits of recognizing ADL from
video for active and assisted living. They emphasized the need to tap into the full potential
of this technology, which can aid behavior understanding and lifelogging for caregivers and
end users. Their paper introduced alternative pre-processing techniques, specifically for
the Toyota Smarthomes dataset, to enhance action recognition. These techniques involved
normalizing skeletal pose data and expanding activity crops for RGB data. Their results
showed that the proposed techniques improved recognition accuracy, especially when
combining pre-trained branches and feeding outputs into the separable spatio-temporal
attention network. However, the attention network’s contribution to overall improvement
was marginal.
Beyond the previously mentioned advantages, the integration of gait analysis into
the realm of ADL monitoring unveils novel potential. This integration facilitates the
smooth blending of supervised postural descriptors commonly used for ADL classifica-
tion with unsupervised approaches for anomaly detection. Such a cohesive synthesis
holds the promise of surpassing the limitations inherent in fully supervised methods,
especially when dealing with subject-specific patterns that may prove challenging for
conventional approaches.
To the best of the authors’ knowledge, there are no contributions in the literature
where the issue of abnormal gait detection has been explored in conjunction with ADL
monitoring within Ambient Assisted Living (AAL) applications.
2. Materials and Methods
The aim of this study was monitoring the human gait during the execution of common ADLs. The suggested framework for ADL recognition and gait monitoring is presented in Figure 1. Data were collected using two 3D cameras in a laboratory setting, where volunteers performed various ADLs together with “normal” and “abnormal” gaits, as detailed in Section 2.1. Data captured by 3D cameras were stored in datasets for offline analysis, and, in addition, publicly available datasets were used for comparison purposes (as better specified in Section 2.1).
RGB images and depth frames provided by the 3D cameras were aligned to provide a comprehensive understanding of the scene. This alignment process allowed the integration of color information with spatial data obtained from the depth frames. By accurately aligning RGB images and depth frames, it became possible to obtain a synchronized representation where each pixel in the RGB image corresponded to a point in the 3D space with a known depth value. In this way, a 3D representation was obtained by using the MediaPipe Pose library, as detailed in Section 2.2.
The 3D skeletal descriptor was used for both ADL recognition and abnormal gait detection. The recognition of ADLs was based on supervised DL by comparing Temporal Convolutional Network (TCN) and GRU approaches, described in Sections 2.3 and 2.4. Furthermore, the unsupervised approaches were used for DL gait features that, in turn, were used for detecting abnormal gaits with an autoencoder (AE) based on bidirectional LSTM, as described in Section 2.5.
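As a concrete illustration of this acquisition step, the sketch below shows how depth-to-color alignment and 3D landmark extraction could be wired together with pyrealsense2 and MediaPipe Pose. It is our own minimal, hypothetical example rather than the authors' data collection software; stream resolutions, frame rates, and variable names are assumptions.

```python
import numpy as np
import cv2
import mediapipe as mp
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # assumed resolution/rate
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)            # map depth pixels onto the RGB image
pose = mp.solutions.pose.Pose()

try:
    frames = align.process(pipeline.wait_for_frames())
    color_frame = frames.get_color_frame()
    depth_frame = frames.get_depth_frame()
    image = np.asanyarray(color_frame.get_data())
    result = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    skeleton_3d = []
    if result.pose_landmarks:
        intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()
        h, w = image.shape[:2]
        for lm in result.pose_landmarks.landmark:
            u, v = int(lm.x * w), int(lm.y * h)       # 2D landmark in pixel coordinates
            if 0 <= u < w and 0 <= v < h:
                z = depth_frame.get_distance(u, v)    # metric depth at the landmark pixel
                skeleton_3d.append(rs.rs2_deproject_pixel_to_point(intr, [u, v], z))
finally:
    pipeline.stop()
```

In practice, landmarks falling on occluded or zero-depth pixels would need filtering, but the snippet conveys how each MediaPipe landmark can acquire a metric 3D position from the aligned depth frame.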
Figure 2. Laboratory setup for data collection.
Figure 3. Schematic representation of the camera setup used in the laboratory setting.
More specifically, the abnormal execution was simulated when moving from one position to another or getting out of bed or chairs by adopting the following expedients: (1) unnaturally slowed movements, (2) limped walking, and (3) stopping suddenly and starting again several times. Abnormal gaits were primarily involved in walking and housekeeping activities. In addition, the simulation of abnormal execution also occurred during transitions between positions and activities, such as getting out of bed or chairs.
In order to achieve a realistic simulation of abnormal gait, participants underwent a
guided process based on specific guidelines provided by neurologists with expertise in
neuromotor disorders, specifically targeting Parkinson’s and Huntington’s diseases. These
guidelines were meticulously crafted to replicate the distinctive gait patterns associated
with these neurodegenerative disorders. The guidelines centered on several critical aspects
integral to the manifestation of abnormal gait. Specifically, participants were instructed to
pay attention to and simulate abnormalities related to left–right asymmetry, tremor, rigidity,
and postural instability. Each of these components represents key characteristics observed
in individuals affected by neuromotor disorders.
Left–Right Asymmetry: Participants were directed to simulate variations in stride
length, step time, and overall movement coordination between the left and right sides.
This aimed to capture the asymmetrical gait commonly observed in individuals with
neuromotor disorders.
Tremor: Emphasis was placed on reproducing the rhythmic and involuntary shaking
movements associated with tremors. Participants were guided to incorporate tremor-
like features into their gait patterns to authentically represent this characteristic aspect of
abnormal motor function.
Rigidity: The guidelines addressed the simulation of increased muscle tone and
stiffness characteristic of rigidity in neuromotor disorders. Participants were instructed
to convey a sense of resistance and inflexibility in their movements, thus mirroring the
restricted mobility often seen in affected individuals.
Postural Instability: Participants were guided to mimic challenges in maintaining
balance and stability during walking. This involved incorporating swaying or unsteadiness
into their gait, thus replicating the postural instability commonly observed in individuals
with neuromotor disorders.
By providing detailed guidelines that specifically targeted these nuanced aspects of
abnormal gait, we aimed to ensure a comprehensive and faithful representation of the
distinctive motor characteristics associated with neuromotor disorders. This approach
facilitated the creation of a simulated abnormal gait dataset for effective abnormality
detection analysis.
The previously mentioned parameters (cameras’ heights h, baseline b, orientation
angles α, and subjects’ orientations during ADL execution) are summarized in Table 1.
Table 1. Setup parameters.
Parameter      Value/s
α              45°
b              5 m
h              {1.4, 1.6, 1.8, 2.0} m
Directions     {NE, N, NW}
ADLs           {EAT, DRN, DRS, HSK, SLP, WLK}
Volunteers     3
The process of selecting the six activities was driven by a deliberate consideration
of privacy concerns and ethical implications associated with the monitoring of specific
ADLs. Notably, certain activities, such as toileting and bathing, if monitored using a
camera, could potentially infringe on privacy. To address this concern, our objective was to
strike a balance between achieving a comprehensive representation of daily activities and
upholding ethical considerations. In the context of our study objectives, the inclusion of
housekeeping activities (in particular, the task of sweeping the floor) was purposeful. This
choice was motivated by the inherent physical movement involved in the activity (namely,
walking), which presents a conducive scenario for effective abnormality detection through gait analysis.
Table 2. 3D camera specifications.
Parameter                                        Value/s
Non-ambiguity range                              0.3–3 m
Depth technology                                 Stereoscopic
Minimum depth distance at maximum resolution     ~28 cm
Depth sensor FOV                                 87° × 58°
Depth frame resolution                           Up to 1280 × 720
Depth framerate                                  Up to 90 fps
RGB sensor FOV                                   69° × 42°
RGB sensor resolution                            2 MP
Camera dimensions (Length × Depth × Height)      90 mm × 25 mm × 25 mm
Connector                                        USB-C 3.1 Gen 1
The furniture included in the simulation comprised a bed, a chair, and a table. For
the specific activities of EAT and DRN, participants were seated on a chair in front of a
table. The DRS activity involved sitting on the bed, while the SLP activity was performed
lying on the bed. The housekeeping activity entailed sweeping the floor with a broom.
Regarding the consistency of activities, it is important to note that each activity involved the
continuous repetition of simple movements throughout the 12 min duration (1 min sessions
with 12 repetitions). For example, in the case of housekeeping, the action of moving around
while swinging the arms holding a broom was consistently repeated. This repetitive nature
facilitates activity recognition, so capturing a few frames is sufficient to identify the type
of activity.
To compare the suggested framework with the state of the art, publicly available data
were also considered, i.e., the Pathological Gait Datasets provided by Jun et al. [11]. The
goal of the authors was to classify complicated pathological gait patterns, including both
normal gaits and five pathological gaits: antalgic, stiff-legged, lurching, steppage, and
Trendelenburg gaits. The datasets used in the study were collected using a multiperspective
Kinect system composed of six cameras arranged in two rows along the subject’s direc-
tion of walking. Ten healthy individuals participated in data collection by simulating the
five pathological gaits based on provided guidelines. The Kinect sensors (© Microsoft,
Redmond, WA, USA) captured 3D coordinates of 25 joints, resulting in a total of 10 partici-
pants, 6 gait types per participant, and 20 instances per type, which is equal to 120 instances
for each gait.
Figure 4. Skeleton models, original (a) and reduced (b). Sitting pose represented using the original model (c) and the reduced one (d).
Various skeletal data representation approaches were considered. A skeleton, in this context, is a graph characterized by N vertices and N − 1 edges. The skeletal graph can be depicted using three different methods. The first method involves listing the 3D Euclidean coordinates of its vertices, necessitating the use of 3 × 27 parameters (xn, yn, zn). The second method entails creating an adjacency list of Spherical coordinates for the vertices. Each vertex’s Spherical coordinates are determined concerning a Spherical reference system, with its origin located at the previous vertex in the list. This method also requires 3 × 27 parameters (ρn, φn, ψn). The third method uses an adjacency list of 3D rotations to define each vertex in relation to the preceding adjacent vertex in the list. Note that in the second and third cases, it is possible to assume a fixed distance between adjacent vertices, which results in a reduction in the total number of parameters.
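To illustrate the second representation, the NumPy sketch below (our own, with an illustrative joint ordering and angle convention, not the authors' code) expresses each joint in Spherical coordinates relative to the previous joint of the chain. Under the fixed-bone-length assumption the radial distance can be dropped, leaving two angles per joint, which for a 27-joint skeleton is consistent with the 54-dimensional feature space used later.

```python
import numpy as np

def spherical_adjacency(joints):
    """joints: (N, 3) array of 3D joint positions ordered along the kinematic chain.
    Returns an (N, 3) array of (rho, phi, psi): radial distance, azimuth, and
    inclination of each joint with respect to the previous joint in the list."""
    feats = np.zeros_like(joints, dtype=float)
    for n in range(1, len(joints)):
        dx, dy, dz = joints[n] - joints[n - 1]
        rho = np.sqrt(dx * dx + dy * dy + dz * dz)
        phi = np.arctan2(dy, dx)                          # azimuth in the x-y plane
        psi = np.arccos(dz / rho) if rho > 0 else 0.0     # inclination from the z axis
        feats[n] = (rho, phi, psi)
    return feats

# With bone lengths assumed constant, only the two angles are kept:
# angles = spherical_adjacency(joints)[:, 1:]   # shape (27, 2) -> 54 features per frame
```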
2.3. The Temporal Convolutional Neural Network
TCN networks, as elucidated by Bai et al. [17] in their empirical study, represent convolutional neural networks engineered for the purpose of effectively analyzing time series data. In comparison to LSTM networks, TCN networks exhibit superior performance attributes. A salient characteristic of TCN networks resides in the incorporation of dilated causal convolutions (DCCs), wherein only temporal values preceding the current time step are considered. This strategic temporal convolution enables TCN networks to capture extensive long-term temporal patterns, thereby augmenting the receptive field without necessitating the utilization of pooling layers.
The general TCN architecture, provided in Figure 5, has a modular structure based on K residual blocks, each including two DCCs with equal dilation factor, depth, and size. Such blocks are characterized by residual connections that, as suggested by He et al. [19], improve the performance of deep architectures by adding the block input to its output.
The optimized network parameters, i.e., the numbers of convolutional filters Nk, the filter sizes Sk, and the dropout percentages Dk, are reported in Table 3.
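For illustration, a PyTorch sketch of a single TCN residual block is given below; it is our own simplified rendering (the paper's models were implemented in the MATLAB Deep Learning Toolbox), with placeholder filter counts, showing the left-only padding that makes the dilated convolutions causal and the residual sum described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution padded only on the left, so output at time t sees inputs <= t."""
    def __init__(self, c_in, c_out, kernel_size, dilation):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.left_pad, 0)))

class TCNResidualBlock(nn.Module):
    """Two dilated causal convolutions plus a residual (skip) connection."""
    def __init__(self, c_in, n_filters, kernel_size, dilation, dropout):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(c_in, n_filters, kernel_size, dilation), nn.ReLU(), nn.Dropout(dropout),
            CausalConv1d(n_filters, n_filters, kernel_size, dilation), nn.ReLU(), nn.Dropout(dropout),
        )
        # 1x1 convolution matches channel counts so input and output can be summed
        self.skip = nn.Conv1d(c_in, n_filters, 1) if c_in != n_filters else nn.Identity()

    def forward(self, x):
        return torch.relu(self.net(x) + self.skip(x))

# Example: one block applied to a batch of 30-frame, 54-feature skeleton sequences
block = TCNResidualBlock(c_in=54, n_filters=64, kernel_size=3, dilation=2, dropout=0.1)
out = block(torch.randn(8, 54, 30))             # -> (8, 64, 30)
```

Stacking K such blocks with exponentially growing dilation factors yields the large receptive field mentioned above without pooling.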
zk = σ(W′ ∗ [hk−1, xk] + b′),  (4)
rk = σ(W″ ∗ [hk−1, xk] + b″).  (5)
When an entry in zk is approaching 1, it means that the current state heavily relies on
the candidate state. Conversely, if an entry is approaching 0, then the current state leans
more on the previous state. In simple terms, zk essentially decides the proportion of the
candidate state that should be incorporated into the current state. In Equations (3)–(5),
W, W′, W″ and b, b′, b″ represent the weight and bias vectors, respectively.
The parameters of the GRU architecture were chosen as in [11], i.e., k = 4 blocks and
Nk = 125 hidden neurons.
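For clarity, Equations (4) and (5) and the state update they control can be written out directly; the NumPy sketch below is our own illustration, not the authors' implementation. The candidate-state expression, standing in for the omitted Equation (3), follows the standard GRU formulation of Cho et al. [22], and the blending convention matches the description above, with zk weighting the candidate state.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h_prev, x_k, W, W1, W2, b, b1, b2):
    """One GRU update. W1/b1 play the role of W'/b' (update gate, Equation (4)),
    W2/b2 the role of W''/b'' (reset gate, Equation (5)), and W/b the candidate state."""
    hx = np.concatenate([h_prev, x_k])
    z_k = sigmoid(W1 @ hx + b1)                                    # update gate, Eq. (4)
    r_k = sigmoid(W2 @ hx + b2)                                    # reset gate, Eq. (5)
    h_cand = np.tanh(W @ np.concatenate([r_k * h_prev, x_k]) + b)  # candidate state
    return (1.0 - z_k) * h_prev + z_k * h_cand                     # z_k near 1 -> rely on candidate
```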
The supervised models were pre-trained by using collected time series datasets, whereas ground-truth data were used for performance evaluation. The ground-truth data were gathered through a manual approach. This involved annotating various elements, such as
the performed activities and associated geometric parameters, including person orientation
and camera heights, during the data collection process.
where the individuals being monitored typically do not partake in the training of supervised
models. This approach ensures that our experimentation aligns more faithfully with
practical conditions and enhances the applicability of our findings to real-life situations.
In the testing phase, instead, the joined networks operate in an unsupervised mode
because activations (i.e., learned features) extracted from the pre-trained TCN/GRU are
supplied as input to the LSTMAE, which operates naturally in an unsupervised manner,
and then the reconstruction error (RE) is estimated by comparing learned features and
reconstructed ones using the following equation:
RE(z, ẑ) = (1/2) ∑_{i=1}^{M} ‖zi − ẑi‖²,  (6)
where zi is the time series provided as input to the AE network, E(zi ) is the encoded
representation provided by the encoder network, and ẑi = D ( E(zi )) is the reconstructed
input provided by the decoder. In such a way, the LSTMAE can be trained on the “normal”
activity (i.e., walking, because we are interested in abnormal gait detection) as performed
by the monitored subject, not by other healthy subjects as happens in a supervised scenario.
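A compact sketch of this semi-supervised scheme is given below; it is our own simplified PyTorch illustration (unidirectional and far smaller than the bidirectional LSTMAE of Figure 8 and Tables 4 and 5), showing how the reconstruction error of Equation (6), computed on windows of learned features, can be thresholded on normal-walking data to flag abnormal gait.

```python
import torch
import torch.nn as nn

class SimpleLSTMAE(nn.Module):
    """Sequence autoencoder: encode a window of features, then try to reconstruct it."""
    def __init__(self, n_features, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                    # x: (batch, time, n_features)
        z, _ = self.encoder(x)               # encoded sequence E(z)
        x_hat, _ = self.decoder(z)           # reconstruction D(E(z))
        return x_hat

def reconstruction_error(model, x):
    """Equation (6): half the sum of squared differences per window."""
    with torch.no_grad():
        x_hat = model(x)
    return 0.5 * torch.sum((x - x_hat) ** 2, dim=(1, 2))

# Usage sketch: after training on normal walking only, pick a threshold from the
# distribution of normal reconstruction errors and flag larger errors as abnormal.
# threshold = torch.quantile(reconstruction_error(model, normal_windows), 0.95)
# abnormal = reconstruction_error(model, test_windows) > threshold
```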
In the joint architecture, the parameters of the LSTMAE network are optimized based
on the activations extracted from the TCN/GRU networks and by following the approach
presented in Diraco et al. [20]. The optimized architecture is provided in Figure 8, and opti-
mized network parameters, i.e., number of hidden units Bk , output size Fk , and dropping
out probability Dk , are reported in Tables 4 and 5.
Figure 8. Architecture of the LSTMAE network adopted in conjunction with the two unsupervised models.
Table 4. Optimized parameters of the LSTMAE using activations from TCN.
Network Parameters    Optimized Values
B1, F1, D1            256, 200, 0.0083
B2, F2, D2            128, 100, 0.2875
B3, F3, D3            256, 200, 0.0095
As already mentioned, the LSTMAE was trained using a semi-supervised approach. This was accomplished by observing normal walking patterns and subsequently identifying abnormal walking patterns characterized by significant deviations leading to high reconstruction errors. It is important to note that normal walking patterns could be exhibited by the subjects under observation or replicated by other individuals in a controlled setting, such as a laboratory. In this study, both scenarios were thoroughly explored by training the model LSTMAE with data from various sources.
The data from subjects different from the monitored individual were designated as “inter-class samples”, thus representing simulated normal walking patterns. Conversely, data obtained from the same monitored subject were categorized as “intra-class samples”, signifying instances where their normal walking patterns were directly observed.
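The distinction can be made concrete with a short selection helper; the following Python sketch is purely illustrative (array layout and subject identifiers are our assumptions) and simply picks which normal-walking feature windows feed the LSTMAE training in each mode.

```python
import numpy as np

def select_training_windows(feature_windows, subject_ids, monitored_id, mode="intra"):
    """feature_windows: (num_windows, ...) TCN/GRU activations of normal walking.
    mode="intra": windows from the monitored subject only.
    mode="inter": windows from all other subjects (simulated normal gait)."""
    subject_ids = np.asarray(subject_ids)
    if mode == "intra":
        mask = subject_ids == monitored_id
    else:
        mask = subject_ids != monitored_id
    return feature_windows[mask]
```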
3. Experimental Results
In this section, the results of the laboratory experiments are presented. First of all, the
three skeleton data representations discussed in the previous section were compared in
terms of processing speed.
An evaluation of these three representation approaches was conducted in terms of
processing time. It was found that the second approach was the best performing, as it took
only 17% of processing resources for feature extraction. In contrast, the first approach took
43%, and the third one even took 63%. Thus, the second approach was employed in the
experiments, assuming a constant distance between consecutive joints (i.e., a feature space
of 54 dimensions).
The data collection software was entirely written in Python 2.7 using the wrapper
for Intel RealSense SDK 2.0, pyrealsense2 [23], which provides the C++ (Visual Studio
2017) to Python binding required to access the SDK. Meanwhile, as already mentioned,
the MediaPipe Pose library (version 0.10.10) was used for the body skeleton capture. The
data collection software was executed on an MS-Windows-OS (© Microsoft, Redmond, WA,
USA)-based mini-PC with a Core i5-9600T 2.3 GHz Intel (© Intel Corporation, Santa Clara,
CA, USA) processor and 8 GB of RAM. In contrast, the performances of the TCN, GRU,
and LSTMAE models were evaluated in Matlab R2023b (Natick, MA, USA) using the
Deep Learning Toolbox (version 23.2) executed on an MS-Windows-OS-based workstation
equipped with two NVIDIA (Santa Clara, CA, USA) GPUs, an RTX 3060 (Ampere) and a Titan X (Pascal), both with 12 GB of graphics RAM.
The data supplied to the Deep Neural Network (DNN) models consisted of time
series comprising skeleton pose features, as per the previously established discussions.
Concerning the temporal extent of the time series furnished at each computational iteration,
it was fixed at N = 30 frames, wherein each frame was characterized by a dimensionality of
M = 54. These specifications were chosen to ensure a consistent frame rate of 20 fps during
the data acquisition phase, a minimum requisite to facilitate the near-real-time operation of
ADL monitoring applications [24]. It is important to note that this frame rate value does not
encompass the computational load incurred by DNN processing, but only RGB-D frame grabbing, 3D point cloud computation, skeleton pose capture, and feature extraction.
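The windowing itself is straightforward; the sketch below is our own illustration of how per-frame feature vectors could be grouped into the N = 30, M = 54 sequences fed to the models (the step size, here non-overlapping, is an assumption).

```python
import numpy as np

def make_windows(frame_features, window=30, step=30):
    """frame_features: (T, 54) array of per-frame skeleton features at ~20 fps.
    Returns an array of shape (num_windows, 30, 54) for the TCN/GRU/LSTMAE models."""
    starts = range(0, len(frame_features) - window + 1, step)
    windows = [frame_features[s:s + window] for s in starts]
    return np.stack(windows) if windows else np.empty((0, window, frame_features.shape[1]))
```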
The two supervised approaches, TCN and GRU, were experimented with using the
datasets collected in the laboratory setting presented in Section 2. Thus, by varying the
setup parameters (h, d) in {1.4, 1.6, 1.8, 2.0} × {N, NW, NE}, different accuracies were
obtained, as reported in Figures 9 and 10. As is visible from the two figures, in both cases,
the best performance was achieved for h = 1.6 m and d = ‘N’.
Figure 10. ADL recognition accuracies upon varying the parameters h and d by using the GRU-based supervised approach.
Regarding the direction d = ‘N’, it was split into Frontal (captured from Cam 1) and Lateral (captured from Cam 2), thus estimating separately the corresponding accuracies, as reported in Figure 11. For the sake of completeness and due to space limitations, the confusion matrices are provided only for the best-performing setups in Figure 12.
Figure 11. ADL recognition accuracies of the two supervised approaches under the direction d = ’N’ split into Frontal and Lateral.
Figure 12. Confusion matrices for the best-performing setups.
Figure 13. Abnormal gait detection performance for the two semi-supervised approaches, TCN–LSTMAE and GRU–LSTMAE, using either intra-class or inter-class training data.
Figure 14. Confusion matrices of intra-class TCN–LSTMAE and GRU–LSTMAE for abnormal gait detection.
Figure 15. GRU-based approach by Jun et al. [11] compared with the two semi-supervised approaches using the Pathological Gait Datasets [11].
Figure 16. Confusion matrices of the two best-performing semi-supervised approaches.
4. Discussion
The study presented in this paper focused on indoor monitoring for both ADL recognition and abnormal gait detection. In particular, the aim was to explore various camera setups and DL methodologies in order to find the most suitable setup for pursuing abnormal gait recognition during the monitoring of ADLs.
Because the ADL monitoring was the entry-level functionality that the system should
be able to provide, the two state-of-the-art supervised learning methodologies, TCN and
GRU, and the camera setups were investigated for ADL recognition.
For both of the supervised learning methodologies, the best-performing camera setup
was achieved for height h = 1.6 m and direction d = ‘N’, i.e., when actions were performed
frontally and laterally to the camera. However, the use of two (or more) cameras to capture
the same scene may be cumbersome; thus, it is relevant to consider the employment of only
one camera. For that reason, the best-performing direction d = ’N’ was separated into the
contributions provided by the two cameras, which, due to the geometry of the experimental
setup, viewed the scene frontally and laterally. As reported in Figure 11, the lateral view provided the best accuracy with both supervised approaches. This
aspect is of paramount importance in an indoor environment where no more than one
camera can be deployed in each room. By appropriately positioning the camera, various
ADL executions under different points of view can be captured over medium to long
time intervals, among which the chances to include frontal or lateral views increase with
the time.
The best-performing camera height of h = 1.6 m provided the smallest amount of
noise in skeleton capture and tracking, even if that height may be difficult to maintain in indoor, home-like environments due to the susceptibility to occlusions. On
the other hand, it should also be considered that the monitored subject is rarely left totally
alone without any kind of supervision; thus, the caregiver can periodically check the correct
camera operation. In addition, it is important to note that the aim of this study is not to detect dangerous situations, but to monitor the execution of ADLs over medium-to-long time periods in order to detect abnormalities that may help with the early diagnosis of neurodegenerative movement-correlated diseases and to monitor the evolution of such diseases.
The ADL recognition rates were quite high for both supervised approaches, with
slightly better performance for the GRU-based one (Figure 12). This could be due to the
superior ability of the GRU model to deal with long and articulated sequences, an especially
helpful characteristic when actions are hierarchically structured, such as HSK, which is built
on top of WLK with the addition of upper limb movements and torso rotations. In addition,
from the point of view of abnormal gait detection, the ADL recognition performance was
highly favorable, because actions most related to gait, i.e., WLK and HSK, were among
those with higher recognition rates.
In this study, abnormal gait detection was investigated in close relationship with
ADL recognition. The inclusion of various ADLs in the training dataset, instead of solely
focusing on walking, facilitates the development of a model more reflective of real-life
scenarios. In practical situations, individuals participate in many activities beyond mere
walking. Enriching our training data with various ADLs is a deliberate choice to foster
a model that demonstrates robust generalization to the intricate and diverse movements
encountered in everyday life.
The rationale behind incorporating a range of ADLs is rooted in the recognition that
restricting the training dataset to walking alone might lead to an overly specialized model
that is less adaptable to the broader spectrum of everyday activities. By exposing the model
to diverse ADLs, we seek to enhance its ability to discern abnormal gait patterns across
different activities, thus ensuring its versatility and effectiveness in real-world applications.
The underlying assumption was that gait can be assessed during the execution of
ADLs. For that reason, the state-of-the-art learning models for ADL recognition, TCN and
GRU, were stacked with the LSTMAE for abnormal gait detection based on the reconstruction error
of learned features provided as inputs. However, while LSTMAE did not require labelled
data, as it learned through an unsupervised modality, the features were supervised-learned
by using TCN and GRU, which required labelled samples for training.
When addressing the challenge of abnormal gait detection, or abnormal event detec-
tion in a broader sense, obtaining labeled data for training supervised models proves to
be highly difficult. Questions arise about the feasibility of observing abnormal gait—can
it be realistically simulated in a laboratory setting, or must we await its manifestation in
the monitored subject? Relying solely on observing the occurrence of enough anomalies
to gather the necessary data (i.e., positive samples) for training a supervised model is
evidently impractical.
Therefore, in this study, it was decided to investigate the approach based on semi-
supervised training, i.e., based on the observation of normal gaits, to try to identify the
abnormal gaits as deviations characterized by high reconstruction errors. However, normal
gaits can be performed by the monitored subjects or simulated by other subjects (i.e., in
a laboratory setting). In this study, both cases were investigated by training the semi-
supervised model LSTMAE with inter-class and intra-class samples. Inter-class samples
were TCN/GRU-learned features obtained from subjects different from the monitored ones,
thus representing the simulation of normal gaits. On the other hand, intra-class samples
were TCN/GRU-learned features obtained from the same monitored subject, representing
a case in which normal gaits were observed directly from the monitored subject.
The abnormal gait detector, based on LSTMAE combined with TCN/GRU, provided
better accuracies when trained with intra-class samples (Figure 13). This may be due to the
fact that the normal gait of another subject differs substantially from that of the monitored
subject, and this difference affects the reconstruction error, thus ultimately deteriorating the
detection performance. Furthermore, unlike the case of ADL recognition, the performance
of abnormal gait detection was slightly better when LSTMAE was combined with TCN
rather than with GRU.
At first glance, both TCN and GRU seem equally adept at processing time series
data, with both architectures offering unique advantages. However, a closer analysis
reveals nuances that may explain the performance gap observed. One primary aspect to
consider is the memory utilization and sequence handling capabilities of both architectures.
GRUs, despite their efficiency over traditional RNNs, do not possess extra memory units.
They do operate similarly to LSTMs, and while they are designed to handle long-term
dependencies, their intrinsic architecture might not be as efficient for capturing extensive
historical sequences. On the contrary, TCNs are inherently designed to manage time series
of any length by effectively mapping them to an output of the same length. This advantage
becomes especially crucial when analyzing sequences where gait abnormalities may be
subtly dispersed across a vast timeline.
Furthermore, when delving into the depths of these architectures—quite literally—the
handling of deep structures offers more insights. Deep architectures can often run into the
notorious problems of vanishing or exploding gradients. GRUs, though efficient compared
to traditional RNNs, might still grapple with these challenges when dealing with long-term
dependencies. TCN, with its incorporation of residual connections, provides a tangible
solution to this problem. These connections ensure that even in the face of deep networks,
the performance does not degrade, and gradient problems are kept at bay.
TCN’s flexibility further reinforces its advantage. Its combination of causal convo-
lutions, dilated convolutions, and residual connections offers a malleable structure that
can adapt to varied lengths and complexities of time series data. While GRU’s gating
mechanism does introduce a degree of flexibility, its complexity might sometimes be an
overkill for sequences that require simpler, more direct interpretations.
The performance superiority of the TCN-based abnormal gait detector was also con-
firmed by using the Pathological Gait Dataset [11]. In addition to comparing the approaches
investigated in this study, the performance was also compared with the original GRU-
based approach proposed by Jun et al. [11] on their datasets. Again, the performance of
the combined TCN–LSTMAE approach was found to be superior to the GRU-based one
(Figures 15 and 16). It is also interesting to note that the GRU-based approach proposed by
Jun et al. was trained using inter-class samples, i.e., obtained from observations of different
subjects, but it was also trained using intra-class samples coming only from the same sub-
ject. However, in this case, the intra-class mode performed worse than the inter-class one,
because the GRU-based approach proposed by Jun et al. [11] was a supervised one. The
comparative results thus offer further confirmation of the superiority of semi-supervised
approaches (i.e., TCN/GRU combined with LSTMAE) when trained in intra-class mode.
It is noteworthy that the supervised approach presented by Jun et al. [11] demonstrated
superior performance in inter-class cases, as supervised models are specifically trained on
abnormal samples. However, this approach poses practical challenges in real-life scenarios,
as inducing or waiting for the occurrence of (rare) abnormal events for model training is
impractical. In contrast, our semi-supervised model offers a more practical solution by
being trained on normal events, which are abundantly observed in everyday activities.
Abnormal events can then be identified by detecting deviations from normality.
Moreover, both supervised models, TCN and GRU, exhibited superior performance
compared to state-of-the-art results, as reported by Climent-Perez et al. [15], in ADLs, such
as drinking (87% accuracy in [15]), eating (87% accuracy in [15]), laying down/sleeping
(86% accuracy in [15]), and walking (91% accuracy in [15]). This underscores the efficacy
of our joint ADL–gait approach, thus showcasing improved gait abnormality detection
without compromising the accuracy of ADL classification.
The dataset in our study involves data from three individuals, which prompts a
closer examination of how this impacts the robustness of our model. Despite the limited
number of participants, each individual performed every activity 12 times, resulting in a
substantial dataset of 311,040 frames. This extensive repetition, combined with variations
in three directions and four heights, contributes to a nuanced representation of the activities
under analysis. For activity classification and abnormality detection, we utilized 30-frame
sequences distinguishing between 6 classes and normal/abnormal activities, thus yielding
a total of 10,368 instances. The dataset was meticulously divided into training, validation,
and testing sets, which revealed consistent model performance across these partitions and
indicated resilience to overfitting. While datasets with a larger number of individuals, such
as the Pathological Gait Dataset [11], exist, the strength of our dataset lies in its careful
curation and the richness derived from repeated activities under diverse conditions. The
observed stability in model performance across varied scenarios reinforces its capability to
generalize effectively.
In our pursuit of detecting abnormal gaits, particularly those associated with neu-
rologically induced motor disorders, such as Parkinson’s and Huntington’s diseases, we
aimed for a faithful simulation guided by the expertise of neurologists. The guidelines pro-
vided focused on replicating the nuanced characteristics of abnormal gaits by specifically
addressing left–right asymmetry, tremor, rigidity, and postural instability.
It is crucial to highlight the distinctions of our approach compared to the study
conducted by Jun et al. [11], as we performed a comparative analysis. In our simulation,
participants were guided to emulate the characteristic features of neurologically induced
motor disorders, resulting in gait abnormalities marked by left–right asymmetry, tremor,
rigidity, and postural instability.
In contrast, the Pathological Gait Dataset curated by Jun et al. [11] featured five
predefined pathological gaits—antalgic, stiff-legged, lurching, steppage, and Trendelenburg
gaits. These predefined gaits provided a different perspective by focusing on specific
pathological patterns rather than the broader spectrum of abnormalities encompassed in
our neurologically guided simulation.
this approach. Simulating abnormal gaits, particularly those linked to neurologically in-
duced motor disorders, such as Parkinson’s and Huntington’s diseases, introduces a set of
challenges that warrant careful consideration.
Generalizability Concerns. Simulated abnormal gaits, by nature, may not capture the
full spectrum of variability present in real-world pathological gaits. The controlled nature
of simulations might limit the diversity of abnormalities observed in clinical settings. This
raises concerns about the generalizability of our findings to the broader population affected
by these disorders.
Complexity Replication. Accurately replicating the complexities of pathological gaits
poses challenges. Neurologically induced motor disorders exhibit multifaceted symptoms,
and our attempt to simulate left–right asymmetry, tremor, rigidity, and postural instability
may not fully encapsulate the intricate nuances seen in clinical scenarios.
Variability Among Individuals. Individual variations in the manifestation of abnormal
gaits further complicate the simulation process. While our study involved three individuals,
the diversity in gait abnormalities observed across a larger and more diverse population
may not be fully represented.
Ethical and Privacy Considerations. The simulation of abnormal gaits involves a
balance between authenticity and ethical considerations. Real-world gaits may involve per-
sonal and sensitive aspects of individuals’ lives, and the ethical implications of simulating
these abnormalities need careful attention.
In light of these limitations, it is imperative to interpret our results within the context
of the simulated nature of the abnormal gaits. While our approach provides valuable
insights and controlled conditions for analysis, the challenges outlined underscore the
importance of further research involving clinical datasets and real-world gait abnormalities.
Future studies should aim to validate findings in more diverse populations, considering
the multifaceted nature of neurologically induced motor disorders. The use of simulated
abnormal gaits serves as a pragmatic starting point for exploration, offering controlled
conditions for analysis. However, researchers and practitioners must exercise caution
when extending findings to real-world applications by recognizing the complexities and
limitations inherent in the simulation process.
6. Conclusions
In this study, we have delved into gait analysis and its role in early detection and
monitoring of neurological and musculoskeletal disorders. Our exploration centered on
the automatic detection of abnormal gait utilizing 3D vision, with a specific emphasis on
non-invasive data acquisition methods applicable in everyday environments. Our findings
and the implications they hold are twofold, underlining their significance for both research
and practical applications.
Firstly, we have examined various configurations for gait analysis, encompassing
multi-camera setups deployed at different distances and angles, as well as performing
daily activities in different orientations. Our investigation has further involved state-of-
the-art DNN approaches, including TCN, GRU, and LSTMAE, thus revealing the superior
performance of semi-supervised techniques when trained using the intra-class mode.
We have also investigated diverse data representation formats, such as Euclidean-based
representations, angular adjacency matrices, and rotation matrices.
Secondly, our study underscores the relevance of integrating gait analysis with the
monitoring of ADLs in the context of AAL applications. This holistic approach not only
streamlines the monitoring process but also offers a comprehensive picture of an indi-
vidual’s overall well-being. It presents a transformative shift from manual gait analysis,
which is labor-intensive and expertise-dependent, to an automated, non-invasive, everyday
monitoring system.
Looking ahead, we are actively pursuing ongoing work to initiate clinical trials involv-
ing hundreds of patients affected by movement-correlated neurological diseases, such as
Parkinson’s disease and Huntington’s chorea. These trials will further validate the effec-
tiveness and real-world applicability of our system by assessing its potential to enhance
the lives of individuals suffering from such conditions.
Author Contributions: Conceptualization, G.D., A.M. and A.L.; methodology, G.D.; software, A.M.;
validation, A.L.; formal analysis, G.D., A.M. and A.L.; investigation, G.D., A.M. and A.L.; resources,
A.L.; data curation, G.D., A.M. and A.L.; writing—original draft preparation, G.D.; writing—review
and editing, G.D., A.M. and A.L.; visualization, G.D., A.M. and A.L.; supervision, A.L.; project
administration, A.L.; funding acquisition, A.L. All authors have read and agreed to the published
version of the manuscript.
Funding: This work was carried out within the project PON “4FRAILTY” ARS01_00345 funded by
the MUR-Italian Ministry for University and Research.
Institutional Review Board Statement: This study was conducted in accordance with the Declaration
of Helsinki. Ethical review and approval were waived for this study due to the reason that all involved
subjects were authors of the present study, the data are fully anonymized, and no hazards or other
potential ethical concerns were identified. Additionally, this research adhered to all relevant national
and international regulations governing human research, and informed consent was obtained from
each participating author–subject prior to their inclusion in this study.
Informed Consent Statement: Informed consent was obtained from all subjects involved in
this study.
Data Availability Statement: Data are available upon request due to privacy restrictions.
Conflicts of Interest: The authors declare no conflicts of interest. The funders had no role in the
design of this study; in the collection, analyses, or interpretation of data; in the writing of the
manuscript; or in the decision to publish the results.
References
1. McCrum, C.; van der Hulst, E.G. How Older Adults can Avoid Falls via Proactive and Reactive Gait Adaptability: A Brief
Introduction. 2023. Available online: https://osf.io/preprints/osf/yvjbu (accessed on 20 December 2023).
2. Verghese, J.; LeValley, A.; Hall, C.B.; Katz, M.J.; Ambrose, A.F.; Lipton, R.B. Epidemiology of gait disorders in community-residing
older adults. J. Am. Geriatr. Soc. 2006, 54, 255–261. [CrossRef] [PubMed]
3. Guo, Y.; Yang, J.; Liu, Y.; Chen, X.; Yang, G.Z. Detection and assessment of Parkinson’s disease based on gait analysis: A survey.
Front. Aging Neurosci. 2022, 14, 916971. [CrossRef] [PubMed]
4. Stotz, A.; Hamacher, D.; Zech, A. Relationship between Muscle Strength and Gait Parameters in Healthy Older Women and Men.
Int. J. Environ. Res. Public Health 2023, 20, 5362. [CrossRef] [PubMed]
5. Rueangsirarak, W.; Zhang, J.; Aslam, N.; Ho, E.S.; Shum, H.P. Automatic musculoskeletal and neurological disorder diagnosis
with relative joint displacement from human gait. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2387–2396. [CrossRef] [PubMed]
6. Beyrami, S.M.G.; Ghaderyan, P. A robust, cost-effective and non-invasive computer-aided method for diagnosis three types of
neurodegenerative diseases with gait signal analysis. Measurement 2020, 156, 107579. [CrossRef]
7. Cimolin, V.; Vismara, L.; Ferraris, C.; Amprimo, G.; Pettiti, G.; Lopez, R.; Galli, M.; Cremascoli, R.; Sinagra, S.; Mauro, A.; et al.
Computation of gait parameters in post stroke and parkinson’s disease: A comparative study using RGB-D sensors and
optoelectronic systems. Sensors 2022, 22, 824. [CrossRef] [PubMed]
8. Ferraris, C.; Cimolin, V.; Vismara, L.; Votta, V.; Amprimo, G.; Cremascoli, R.; Galli, M.; Nerino, R.; Mauro, A.; Priano, L.
Monitoring of gait parameters in post-stroke individuals: A feasibility study using RGB-D sensors. Sensors 2021, 21, 5945.
[CrossRef] [PubMed]
9. Kaur, R.; Motl, R.W.; Sowers, R.; Hernandez, M.E. A Vision-Based Framework for Predicting Multiple Sclerosis and Parkinson’s
Disease Gait Dysfunctions—A Deep Learning Approach. IEEE J. Biomed. Health Inform. 2022, 27, 190–201. [CrossRef] [PubMed]
10. Tang, Y.M.; Wang, Y.H.; Feng, X.Y.; Zou, Q.S.; Wang, Q.; Ding, J.; Shi, R.C.J.; Wang, X. Diagnostic value of a vision-based intelligent
gait analyzer in screening for gait abnormalities. Gait Posture 2022, 91, 205–211. [CrossRef] [PubMed]
11. Jun, K.; Lee, Y.; Lee, S.; Lee, D.W.; Kim, M.S. Pathological gait classification using kinect v2 and gated recurrent neural networks.
IEEE Access 2020, 8, 139881–139891. [CrossRef]
12. Guo, Y.; Deligianni, F.; Gu, X.; Yang, G.Z. 3-D canonical pose estimation and abnormal gait recognition with a single RGB-D
camera. IEEE Robot. Autom. Lett. 2019, 4, 3617–3624. [CrossRef]
13. Guo, Y.; Gu, X.; Yang, G.Z. MCDCD: Multi-source unsupervised domain adaptation for abnormal human gait detection. IEEE J.
Biomed. Health Inform. 2021, 25, 4017–4028. [CrossRef] [PubMed]
14. Chen, D.; Cai, Y.; Qian, X.; Ansari, R.; Xu, W.; Chu, K.C.; Huang, M.C. Bring gait lab to everyday life: Gait analysis in terms of
activities of daily living. IEEE Internet Things J. 2019, 7, 1298–1312. [CrossRef]
15. Climent-Perez, P.; Florez-Revuelta, F. Improved action recognition with separable spatio-temporal attention using alternative
skeletal and video pre-processing. Sensors 2021, 21, 1005. [CrossRef] [PubMed]
16. Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.; Lee, J.; et al. MediaPipe:
A Framework for Perceiving and Processing Reality. In Proceedings of the Third Workshop on Computer Vision for AR/VR at
IEEE Computer Vision and Pattern Recognition (CVPR) 2019, Long Beach, CA, USA, 17 June 2019.
17. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling.
arXiv 2018, arXiv:1803.01271.
18. Lara-Benítez, P.; Carranza-García, M.; Luna-Romera, J.M.; Riquelme, J.C. Temporal convolutional networks applied to energy
related time series forecasting. Appl. Sci. 2020, 10, 2322. [CrossRef]
19. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
20. Diraco, G.; Siciliano, P.; Leone, A. Remaining Useful Life Prediction from 3D Scan Data with Genetically Optimized Convolutional
Neural Networks. Sensors 2021, 21, 6772. [CrossRef] [PubMed]
21. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International
Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
22. Cho, K.; Van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations
using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
23. Intel(R) RealSense(TM) Pyrealsense2 2.54.2.5684. Available online: https://pypi.org/project/pyrealsense2/ (accessed on 10
January 2022).
24. Madhuranga, D.; Madushan, R.; Siriwardane, C.; Gunasekera, K. Real-time multimodal ADL recognition using convolution
neural networks. Vis. Comput. 2021, 37, 1263–1276. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.