ARTICLE INFO

Keywords:
Posture recognition
Sitting
Pressure distribution
Neural network
Transfer learning

ABSTRACT

This study aimed to identify correct sitting postures during prolonged periods using a pressure mapping system on the seat along with Convolutional Neural Networks (CNN). The study was conducted in three stages. In the first stage, twenty-two volunteers participated to obtain a dataset of pressure maps of three correct postures (flat, short lordosis, and long lordosis) using two pressure mapping systems and validating with two methods. This involved angle measurement through an Inertial Measurement Unit (IMU) system and image recognition. In the second stage, a CNN model was trained using data from the first stage that represented each posture, and then Transfer Learning was implemented using a different pressure mapping system. In the third stage, the system was used for long-term monitoring based on the model, and feedback on the number of correct postures was provided to two healthy individuals during prolonged sitting periods of at least 2 h. The system provided feedback on posture recognition to the participants. Additionally, the participants evaluated their experience with the system using a Likert-5 questionnaire. The results showed that the system accurately identified the three postures with an accuracy of 0.854, precision of 0.856, and recall of 0.854. The study found that the data collection methodology was non-invasive and unobtrusive. Moreover, the participants found the system's feedback understandable and helpful in improving their posture. The approach presented in this study has the potential to facilitate further research on sitting posture since it can be easily adapted to various pressure mapping systems without altering the methodology.
1. Introduction

The analysis of sitting posture is a widely researched topic due to its common occurrence in daily activities [1–3]. Various methods have been used to detect posture, including video cameras [4], individual pressure sensors, and pressure sensor arrays [5–7].

W. Albarracín et al. conducted a study on sitting posture analysis using the Kinect system. The system's RGB camera and depth sensor allowed for capturing 3D motion from fragmented video frames or images. Their findings showed that the system determined a person's posture angles while seated and evaluated whether they were correct or incorrect. However, direct visual access to the participant was necessary for detection, resulting in high computational costs and a specific environment setting [4].

Recently, pressure sensors have become the preferred method for analyzing sitting posture over computer vision systems [8]. This is because they are less expensive, require low computational costs, and do not need a controlled environment [6]. Many studies have used pressure sensor arrays or individual sensors placed strategically to provide posture information [5]. Researchers have successfully classified sitting postures using pressure sensor data in controlled laboratory conditions [5–7].

Identifying correct postures has also been achieved by using pressure sensors and neural networks. Wang et al. developed a posture recognition system utilizing Spiking Neural Networks (SNN) and a liquid state machine. They placed Force Sensitive Resistors (FSRs) on the seat and backrest of an office chair to provide real-time feedback. Their algorithm imitated a neuron's response, considering immediate and previous impulses. Wang et al. concluded that their system using FSRs provided sufficient feedback to distinguish postures and that SNN could accurately classify postures [6]. The study only took place in a laboratory and lacked user feedback, so actual feedback is required to assess its
* Corresponding author.
E-mail address: [email protected] (L. Garay-Jiménez).
https://doi.org/10.1016/j.bspc.2024.106306
Received 5 December 2023; Received in revised form 24 February 2024; Accepted 8 April 2024
Available online 17 April 2024
1746-8094/© 2024 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
I. Morales-Nolasco et al. Biomedical Signal Processing and Control 95 (2024) 106306
Fig. 2. Setup to obtain angles from video of different postures. On the left, the Flat (F) posture is shown; in the center, the Long Lordosis (LL) posture is shown; on the right, the proposed Short Lordosis (SL) posture is shown. Backbone identification is made with colored points: red T1, green T5, blue T10, pink L3, and purple S2. An example of an angle generated between three points is shown in gold/brown lines.
capture 30 samples per second. The analysis was conducted on a 50-second segment while the participants maintained their posture to eliminate any noise associated with finding the required posture.

The angle datasets obtained from the video and the IMU with the 22 volunteers in Stage 1 were compared to previously reported angles obtained with the posture angles defined by Claus [18].

Analysis of Variance (ANOVA) was conducted to compare the angle datasets; significance was defined at α = 0.05. The analysis involved comparing the range and the mean value of each angle that defines a posture: Thoracic, Thoracic Lumbar, and Lumbar.

2.4.2. Convolutional Neural Network implementation

In the implemented CNN model, the input x was defined by the image's height, width, and depth (m × m × r). Then, k kernels were applied in each convolutional layer with dimensions similar to the input x, (n × n × q) with n < m and r = q; the kernels were the basis set associated with the local connections, and h^k were the k feature maps, sharing weight W^k and bias b^k parameters, of size p = m − n + 1. These maps, also called filters, were convolved with the input as presented in Eq. (2):

h^k = f(W^k * x + b^k)    (2)

In the convolution layer, a dot product was performed between the weights generated by the map and the input. After that, the filter shifts over the input matrix to compute the next dot product. The number of pixel shifts is called the Stride. Finally, the output was obtained by applying a nonlinearity or activation function.

After the convolution and activation operations in the CNN, a pooling function was used to down-sample the feature maps to reduce the number of network parameters. This pooling function was either a maximum or average function applied to each adjacent area with a kernel size (p × p) [16]. If overfitting was observed during training, a dropout layer was recommended. This layer randomly deactivated a percentage of the layer's neurons, accelerating the training process and reducing overfitting. In the final part of the CNN, the FC layers received the features and created a high-level abstraction, similar to a multilayer perceptron network (MLP) [15]. Finally, the classification was generated using an ending layer such as the SoftMax function. Once the training stage was completed, the score represented the probability of a new instance belonging to a specific class [18].

For sitting posture recognition, we proposed a classification model based on a two-dimensional CNN (2D CNN) to train and classify the three postures. The network architecture is shown in Fig. 3, and the parameters corresponding to the 2D CNN model structure are shown in Table 2. The CNN model has two convolutional layers, two max-pooling layers, one dropout layer, and one fully connected layer. The size of the convolution kernel is (3,3), the Stride is two, and the activation function of the convolutional layers was the Rectified Linear Unit (ReLU).

The network input data size is 26 × 26 × 3, and the data had a transformation by rotation in a range of 20 and a width and height shift of 0.2. Before feeding the images into the network, they were preprocessed using VGG16 in Python. The VGG16 preprocessing involved converting the images from RGB to BGR and then zero-centering each color channel with respect to the ImageNet dataset, without scaling. The convolutional layers extract the features, and the classification is made using the dropout and max-pooling layers together with the ReLU dense layer.

The open-source TensorFlow framework and Python algorithms running on an NVIDIA GTX960M were used to train the CNN. The hyperparameters were defined as learning rate η = 0.0001, 200 epochs, verbose = 2, loss = categorical cross-entropy, using an ADAM optimizer,
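As a minimal illustration of Eq. (2) and the pooling step (a sketch under simplifying assumptions, not the authors' implementation), a valid 2D convolution with ReLU followed by 2 × 2 max-pooling can be written in NumPy for a single-channel map; note the feature-map side comes out as m − n + 1, as in the text:

```python
import numpy as np

def conv2d_valid(x, w, b):
    """Valid 2D convolution of an m x m input with an n x n kernel,
    followed by a ReLU nonlinearity -- Eq. (2): h = f(W * x + b)."""
    m, n = x.shape[0], w.shape[0]
    p = m - n + 1                      # side of the output feature map
    h = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            # dot product between the kernel and the local receptive field
            h[i, j] = np.sum(w * x[i:i + n, j:j + n]) + b
    return np.maximum(h, 0.0)          # ReLU activation

def max_pool(h, k=2, stride=2):
    """Max-pooling over adjacent k x k areas with the given stride."""
    p = (h.shape[0] - k) // stride + 1
    out = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            out[i, j] = h[i * stride:i * stride + k,
                          j * stride:j * stride + k].max()
    return out

x = np.arange(36.0).reshape(6, 6)      # toy 6 x 6 single-channel "pressure map"
w = np.ones((3, 3)) / 9.0              # 3 x 3 averaging kernel
h = conv2d_valid(x, w, 0.0)            # feature map of size (6 - 3 + 1) = 4
pooled = max_pool(h)                   # down-sampled to 2 x 2
```

Real implementations vectorize this loop and add the channel dimension; the sketch only fixes the shape arithmetic and the order of operations (convolution, bias, nonlinearity, pooling).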
2.4.3. Transfer learning approach

Transfer learning (TL) is a method that involves leveraging knowledge gained from one task to improve performance on another related task. In transfer learning, a model that has been trained with a large dataset is used as a starting point. This study conducted an experiment to evaluate the benefits of transfer learning when using datasets from different pressure mapping systems, from data gathered from XSENSOR® to BodiTrak® and from BodiTrak® to XSENSOR®, to compare the performance of the proposed classification method. Then, the TL model was compared with the stand-alone CNN model for each dataset of the pressure mapping systems.

The image size was reduced from 32 × 32 pixels to 26 × 26 pixels to ensure uniformity. The dataset also had the same data transformation, by rotation in a range of 20 and width and height shift of 0.2, and VGG16 image preprocessing was used to resize the input. The features

Table 3
Structure of the Transfer Learning method applied in the CNN model.

| Layer         | Kernel | Filters | Stride | Activation function | Phase                |
|---------------|--------|---------|--------|---------------------|----------------------|
| Conv2D        | (3,3)  | 16      | −      | ReLU                | Pre-trained network  |
| Max-pooling2D | (2,2)  | −       | 2      | −                   |                      |
| Dropout       | −      | −       | −      | −                   |                      |
| Conv2D        | (3,3)  | 32      | −      | ReLU                |                      |
| Max-pooling2D | (2,2)  | −       | 2      | −                   |                      |
| Conv2D        | (3,3)  | 16      | −      | ReLU                | Trainable parameters |
| Max-pooling2D | (2,2)  | −       | 2      | −                   |                      |
| Dropout       | −      | −       | −      | −                   |                      |
| Flatten       | −      | −       | −      | −                   |                      |
| Dense         | 3      | −       | −      | Softmax             |                      |
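The freeze-and-fine-tune idea behind TL can be illustrated with a deliberately tiny NumPy toy (all names, shapes, and data here are invented for illustration; the actual study freezes and retrains the CNN layers listed in Table 3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "pre-trained" feature extractor: a fixed random projection
# with ReLU, standing in for the convolutional layers learned on the
# first pressure-mapping dataset. Frozen layers are never updated.
W_frozen = rng.normal(size=(8, 4))

def extract_features(x):
    return np.maximum(0.0, x @ W_frozen)

# "New sensor" data for the target task (toy regression stand-in).
X = rng.normal(size=(100, 8))
y = X @ rng.normal(size=(8,))

# Fine-tuning: only the trainable head is fitted on the new dataset,
# here in closed form by least squares on the frozen features.
F = extract_features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

mse = float(np.mean((F @ head - y) ** 2))
baseline = float(np.mean(y ** 2))   # error of an untrained (zero) head
```

The point of the sketch is only the division of labor: the feature extractor carries knowledge from the source dataset, and only the small task-specific head is refitted on the target dataset.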
Table 4
Data set definitions and volunteer information.

| Sensor    | Participants | Postures | Results |
|-----------|--------------|----------|---------|
| XSENSOR®  | 10 volunteers, 23–27 years old; av. weight: 72.7 kg; av. height: 1.71 m; av. BMI: 24.74 kg/m² | Flat, Long Lordosis, Short Lordosis | 12,000 pressure maps per posture; 10 databases of measurements of angles of the back while sitting; 10 videos while sitting |
| BodiTrak® | 12 volunteers, 27 ± 10 years old; av. weight: 72.3 ± 11.4 kg; av. height: 1.71 m ± 7.7 cm; av. BMI: 24.46 kg/m² | Flat, Long Lordosis, Short Lordosis | 5760 pressure maps per posture; 12 databases of measurements of angles of the back while sitting; 12 videos while sitting |
| BodiTrak® | 2 volunteers | Flat, Long Lordosis, Short Lordosis, and incorrect postures | 2 h of recordings of pressure maps and 3.5 h of recordings of pressure maps |

[18]. Each posture, Flat, Long Lordosis, and Short Lordosis, consists of three spine-analysis angles: thoracic, thoracic lumbar, and lumbar. Fig. 6 is an example of the correspondence in the distribution of the angle comparison summarized in Table 5.

According to the ANOVA results, the Flat position's angles were similar to what Claus et al. reported. However, there was some variation in the thoracic and lumbar angles of the Long Lordosis. Despite this, the collected data was within the range reported in their study. The Short Lordosis posture was found to be the most difficult to reproduce. Overall, the distribution of angle data had features similar to the mean and range reported by Claus et al. [18].

3.3. Classification results

The CNN network trained with the XSENSOR® dataset presents an average accuracy of 84.7 %. The CNN network trained with the BodiTrak® dataset presents an average accuracy of 98.4 %. Transfer Learning using previous CNN layers of BodiTrak® and trained with XSENSOR® shows an average accuracy of 66.7 %. Transfer Learning using previous CNN layers of XSENSOR® and trained with BodiTrak® shows an average accuracy of 85.4 %.

Table 6 summarizes the mean (x̄) and standard deviation (σ) of the metrics showing the performance of the classifiers with the two TL methods. The metrics were obtained by cross-validation on the same dataset and with the same hyperparameters (N = 7 times each).
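The accuracy, precision, recall, and F1 values reported here are the standard macro-averaged multi-class metrics. As a sketch (not the authors' evaluation code, and with made-up counts), they can be computed from a 3 × 3 confusion matrix over the three postures:

```python
# Rows = true class, columns = predicted class, for the three postures
# (Flat, Long Lordosis, Short Lordosis); the counts are invented.
conf = [
    [5, 0, 0],
    [0, 4, 1],
    [0, 1, 4],
]

n_classes = len(conf)
total = sum(sum(row) for row in conf)
correct = sum(conf[i][i] for i in range(n_classes))

accuracy = correct / total

precisions, recalls, f1s = [], [], []
for c in range(n_classes):
    col = sum(conf[r][c] for r in range(n_classes))   # predicted as c
    row = sum(conf[c])                                # truly c
    p = conf[c][c] / col if col else 0.0
    r = conf[c][c] / row if row else 0.0
    precisions.append(p)
    recalls.append(r)
    f1s.append(2 * p * r / (p + r) if p + r else 0.0)

precision = sum(precisions) / n_classes   # macro averages
recall = sum(recalls) / n_classes
f1 = sum(f1s) / n_classes
```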
Fig. 4. Example of pressure images associated with each posture. Images are obtained from the same subject with BodiTrak®.

Fig. 5. Example of a training batch. The first row shows a batch of original XSENSOR® pressure maps. The second row shows a batch of rotated, shifted, and VGG16-preprocessed XSENSOR® pressure maps. The third row shows a batch of original BodiTrak® pressure maps. The fourth row shows a batch of rotated, shifted, and VGG16-preprocessed BodiTrak® pressure maps.
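The VGG16-style preprocessing named in the caption can be sketched in NumPy: channels are swapped from RGB to BGR and the ImageNet channel means are subtracted, with no scaling. The mean values below are the ones commonly used with VGG16; the function is illustrative, not the authors' exact code:

```python
import numpy as np

# BGR channel means of the ImageNet training set, as commonly used
# with VGG16 ("Caffe"-style preprocessing).
IMAGENET_BGR_MEANS = np.array([103.939, 116.779, 123.68])

def vgg16_style_preprocess(img_rgb):
    """img_rgb: H x W x 3 array in RGB order, values in [0, 255]."""
    img_bgr = img_rgb[..., ::-1].astype(np.float64)   # RGB -> BGR
    return img_bgr - IMAGENET_BGR_MEANS               # zero-center, no scaling

img = np.zeros((26, 26, 3))
img[..., 0] = 255.0                                   # pure-red toy image
out = vgg16_style_preprocess(img)
```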
6
I. Morales-Nolasco et al. Biomedical Signal Processing and Control 95 (2024) 106306
Fig. 6. Comparison of the angle distributions. The Angle Measurement System (AMS) and video data distribution are compared to the adjusted distribution of the
results reported by Claus for the Flat thoracic angle.
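The one-way ANOVA behind these angle comparisons (significance at α = 0.05) reduces to the F statistic below. This is a generic plain-Python sketch with toy values, not the study's analysis script or data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares (k - 1 degrees of freedom).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n_total - k degrees of freedom).
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Toy angle samples for two measurement systems (values invented):
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```

A large F (with its p-value below α) indicates that the group means differ, which is how the Video and AMS distributions are judged against the reference angles.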
Additionally, the study results were compared with the results of other classification methods used to detect various sitting postures, as presented in Table 7.

Additionally, after training the CNN model with the data of nine out of ten volunteers and testing with the remaining volunteer's data, the accuracy during training was 86.4 %, while in validation it was 34.9 %. This indicates that although pre-training with data from other subjects can be helpful, personalized fine-tuning of the CNN input is still necessary.

3.4. Feedback results

The scores from the Likert-5 questionnaire were as follows: question 1 (sensor perception) was 5, question 2 (no distraction) was 5, question 3 (posture improvement) was 3.5, question 4 (understandable graphs) was 4.5, question 5 (prevalence graph usefulness) was 4, and question 6 (total time sitting in different postures) was 4.5. These individual question ratings collectively contributed to an overall score of 4.25, reflecting the general system acceptance.

Participants agreed that they did not perceive the pressure mapping system BodiTrak® during extended work hours. They also reported that having a screen with the FSA program for data collection did not distract them from their work. These findings suggest that the data collection method was non-invasive and went unnoticed by the participants. Participants slightly agreed that the posture report helps improve posture, indicating that the report could be improved, and one comment suggested that continuous feedback would be more helpful. The participants perceived the graphs as understandable and useful for posture improvement.

4. Discussion

Sitting with poor posture for extended periods can lead to discomfort and injuries [3]. To reduce such risks, sitting properly or limiting abnormal postures is crucial [11,12]. A smart system that identifies irregular sitting postures and reminds individuals to correct them can help improve their overall health and quality of life [13]. This research introduced a reliable approach to detecting and categorizing good sitting postures, which can reduce pain and discomfort, as indicated by previous studies.

In this study, the pressure mappings were converted to images and fed to the CNN because this technique is a well-established automatic image feature extraction technique; in recent years, using a CNN as a feature extractor has become useful for obtaining Yoga poses from images [19], and even the basic characteristics of sitting postures such as back, forward, left, and right leaning from FSR mapping [14,20].

In evaluating sitting posture, some studies have focused on wheelchair users or long-term drivers. However, to the best of the authors' knowledge, no specific dataset is available related to the long-term sitting of healthy individuals. Therefore, the use of Transfer Learning (TL) has been helpful in demonstrating the strength and reliability of the model used in this study, which was able to effectively utilize two different pressure mapping systems. The XSENSOR® system has less resolution than the BodiTrak® sensor. In the stand-alone versions, the tests present a relationship between sensor resolution and the performance metrics (accuracy 0.847 ± 0.011 vs. 0.984 ± 0.005, respectively).

Since deep learning methods have shown better results with large datasets, augmentation methods are a common practice, and transfer learning is recommended when transferring from larger to smaller datasets. The TL version from the larger to the smaller dataset obtained a performance (0.854 ± 0.007) similar to the stand-alone CNN with the XSENSOR® dataset (0.847 ± 0.011). However, in the reversed TL version, from the smaller to the larger dataset, the performance decreased (0.667 ± 0.005). This was expected because, in the first case, the bigger dataset was used to train the feature extraction and the last part was fine-tuned for classification, whereas in the opposite direction the smaller dataset was used to train the feature extraction and a lower-resolution dataset was used for fine-tuning. The models learned useful features and patterns relevant to similar tasks in all tested cases, even with different sensor characteristics, once fine-tuning or additional training was performed on the new task-specific dataset.

Other studies by J. Wang, Y. Min, and C. Ma examined various
visually distinguishable postures, including sitting while leaning left or right, sitting on the edge of a chair, and crossing the feet. All of the proposed postures seem to be visually identifiable. Their studies used force-sensitive resistors (FSR) inserted in the backrest, as well as the chair's seat pan. Their main goal was to compare different classification methods, such as MLP, CNN, and spiking neural networks, with and without preprocessing of the complex patterns associated with the FSR maps, not to evaluate the prevalence or type of sitting posture. These studies inspire our proposal using Convolutional Neural Networks.

In contrast, our research used more complex postures that might not be visually identified and are reported as beneficial in preventing back pain. Because of this complexity, the double-validation method used in the study to detect a correct posture, by measuring the angles in the back with an IMU system and by image recognition, helped train the CNN model effectively with data representing each posture.

It is worth noting that Wang and Ma achieved 95 % accuracy with the aid of back sensors; since we want to reduce the number of sensors needed, we only used a seat pressure distribution sensor to distinguish between positions.

Our model's metrics might seem lower compared to other models, as shown in Table 7. However, it is essential to consider that the pressure maps generated during complex postures, such as long lordosis, short lordosis, and flat, were more complicated than those generated when a person is leaning in one direction. This indicates that our model can capture the inherent complexity of these postures and is capable of recognizing a higher level of complexity. Moreover, our findings demonstrate the reliability of our model and suggest the potential for further investigation into posture dynamics over extended periods.

The robustness of the posture classification methodology was evaluated using two different pressure mapping systems and preprocessing the batch input with augmentation methods. First, the XSENSOR® system was used, and then it was replaced with the BodiTrak® system to evaluate the posture. Transfer learning tests were implemented to confirm the feasibility and versatility of the algorithms with different pressure distribution systems. This work created the infrastructure for further studies of sitting posture, which can be adapted without changing the methodology. Moreover, based on the participant feedback, the graphs were found to be easy to understand and helpful for improving posture. This enables users to become more aware of their posture and ultimately improve their well-being.

Table 5
Statistics of the angle measurements with the AMS and image systems associated with the three postures: Flat, Long, and Short Lordosis.

| Position       | Angle (°)       | Video | AMS | Claus |
|----------------|-----------------|-------|-----|-------|
| FLAT           | Thoracic        | Mean: 17.1; Range: (7.9, 16.3); ANOVA: F = 13.61, p = 0.0003 | Mean: 17.9; Range: (16.1, 19.7); ANOVA: F = 0.85, p = 0.350 | Mean: 19.0; Range: (15.0, 23.0) |
| FLAT           | Thoracic Lumbar | Mean: 5.1; Range: (4.3, 6.0); ANOVA: F = 2.20, p = 0.140 | Mean: 5.5; Range: (4.2, 6.8); ANOVA: F = 2.57, p = 0.130 | Mean: 4.6; Range: (0.0, 9.2) |
| FLAT           | Lumbar          | Mean: −2.7; Range: (−1.9, 3.4); ANOVA: F = 5.37, p = 0.02 | Mean: −7.2; Range: (−8.6, −5.9); ANOVA: F = 53.61, p = 1×10⁻¹⁰ | Mean: −1.5; Range: (−5.2, 3.7) |
| LONG LORDOSIS  | Thoracic        | Mean: 16.6; Range: (15.7, 17.5); ANOVA: F = 0.77, p = 0.38 | Mean: 18.5; Range: (16.7, 20.2); ANOVA: F = 5.85, p = 0.01 | Mean: 16.2; Range: (12.2, 20.2) |
| LONG LORDOSIS  | Thoracic Lumbar | Mean: −4.1; Range: (−4.8, −3.4); ANOVA: F = 7.62, p = 0.006 | Mean: −6.3; Range: (−7.9, −4.8); ANOVA: F = 16.51, p = 0.0001 | Mean: −2.6; Range: (−7.0, 1.8) |
| LONG LORDOSIS  | Lumbar          | Mean: −10.1; Range: (−11.3, −8.9); ANOVA: F = 2.01, p = 0.15 | Mean: −9.3; Range: (−10.5, −8.1); ANOVA: F = 0.04, p = 0.83 | Mean: −9.2; Range: (−12.4, −6.0) |
| SHORT LORDOSIS | Thoracic        | Mean: 19.4; Range: (18.4, 20.3); ANOVA: F = 0.39, p = 0.531 | Mean: 16.7; Range: (15.5, 18.0); ANOVA: F = 8.61, p = 0.004 | Mean: 18.9; Range: (14.0, 23.0) |
| SHORT LORDOSIS | Thoracic Lumbar | Mean: 5.3; Range: (4.5, 6.1); ANOVA: F = 25.66, p = 1×10⁻⁶ | Mean: 5.2; Range: (4.1, 6.3); ANOVA: F = 13.61, p = 0.0004 | Mean: 3.1; Range: (0.6, 5.6) |
| SHORT LORDOSIS | Lumbar          | Mean: −8.3; Range: (−10.0, −6.6); ANOVA: F = 21.88, p = 9×10⁻⁶ | Mean: −7.6; Range: (−8.7, −6.4); ANOVA: F = 26.73, p = 1×10⁻⁶ | Mean: −4.1; Range: (−7.6, 0.6) |

Table 6
Summary of performance of the different training methods with both datasets (x̄ ± σ, N = 7).

| Method | Accuracy | Precision | Recall | F1-score |
|--------|----------|-----------|--------|----------|
| CNN (XSENSOR dataset) | 0.847 ± 0.011 | 0.849 ± 0.012 | 0.847 ± 0.011 | 0.846 ± 0.012 |
| CNN (BodiTrak dataset) | 0.984 ± 0.005 | 0.984 ± 0.005 | 0.984 ± 0.005 | 0.984 ± 0.005 |
| CNN (XSENSOR dataset) + Transfer Learning (BodiTrak dataset) | 0.854 ± 0.007 | 0.856 ± 0.007 | 0.854 ± 0.008 | 0.854 ± 0.008 |
| CNN (BodiTrak dataset) + Transfer Learning (XSENSOR dataset) | 0.667 ± 0.005 | 0.672 ± 0.004 | 0.669 ± 0.004 | 0.667 ± 0.005 |

5. Conclusions

This study provided the algorithms and methodology to detect correct sitting postures over long-term seating periods. The study demonstrated that transfer learning could be used with limited datasets provided by sensors with different map characteristics. The interface pressure mapping system, along with the CNN-based recognition system, showed great potential for obtaining objective data about sitting posture.

The results of this study demonstrated the robustness of the posture classification methodology and the feasibility and versatility of the algorithms to be used with different pressure distribution systems through the implementation of Transfer Learning (0.854 ± 0.007). However, it is relevant to highlight that the sample size and sensor resolution in pre-training could have an impact on the final performance, as demonstrated in this study. The CNN trained with the BodiTrak® dataset presented an accuracy of 0.984 ± 0.005 because its sensor resolution was higher than that of the XSENSOR® dataset, which reached an accuracy of 0.847 ± 0.011 when they were stand-alone.
Table 7
Comparison of precision, accuracy, and recall of different classification methods to detect different sitting postures.

| Author | Sensor type and sensor position | Participants (#) | Detected postures | Classification method | Evaluation |
|--------|---------------------------------|------------------|-------------------|-----------------------|------------|
| J. Wang et al. 2020 [6] | FSR matrix (9×9) in the seat and FSR matrix (10×9) in the backrest | 19 volunteers | (1) seating Upright (SU); (4) Leaning right (LR), left (LL), forward (LF), and back (LB); (10 combinations): SU & left leg crossed over the right leg (LC); SU & right leg crossed over the left leg (RC); LC & LB; RC & LB; SU & left ankle resting on the leg (LA); SU & right ankle resting on the leg (RA); LA & LB; RA & LB; slouchy back down; sitting on the leading edge | SNN + LR (logistic regression) | Precision: 0.88; Recall: 0.86 |
| Kim, Y. M. et al. 2018 [7] | FSR matrix (8×8) in the seat | 10 volunteers | (1) sitting straight; (2) lean left and right; (1) sitting at the front of the chair; (1) sitting cross-legged | CNN: LeNet-5 | Accuracy: 0.942 |
| Ma et al. 2017 [5] | 7 FSR in the seat and 5 FSR in the backrest | 12 volunteers | User seated correctly on the wheelchair; Lean Left (LL); Lean Right (LR); Lean Forward (LF); Lean Backward (LB) | MLP | Accuracy: 0.955; Precision: 0.926; Recall: 0.926 |
| This study | XSENSOR® (26×26) and BodiTrak® (32×32), both in the seat | 10 volunteers (XSENSOR dataset); 12 volunteers (BodiTrak dataset) | Long lordosis; Short lordosis; Flat | CNN + Transfer Learning | Accuracy: 0.854; Precision: 0.856; Recall: 0.854 |
The approach utilized in this study has the potential to facilitate additional research on sitting posture since it is designed to be versatile and can easily be adapted to various pressure mapping systems without altering the methodology. Moreover, this methodology can serve as a foundation for further exploration into the dynamics of long-term sitting. The insights gained from such research could be invaluable in helping individuals maintain good posture, leading to improved health and overall well-being.

CRediT authorship contribution statement

Isaac Morales-Nolasco: Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Sandra Arias-Guzman: Writing – review & editing, Writing – original draft, Validation, Supervision, Methodology, Formal analysis, Conceptualization. Laura Garay-Jiménez: Writing – review & editing, Visualization, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.
Annex

Annex 1. Screenshot of the feedback dashboard.
[18] A.P. Claus, J.A. Hides, G.L. Moseley, P.W. Hodges, Is 'ideal' sitting posture real?: Measurement of spinal curves in four sitting postures, Manual Therapy 14 (2009) 404–408.
[19] S. Garg, A. Saxena, R. Gupta, Yoga pose classification: a CNN and MediaPipe inspired deep learning approach for real-world application, Journal of Ambient Intelligence and Humanized Computing (2022) 1–12.
[20] Z. Fan, X. Hu, W.-M. Chen, D.-W. Zhang, X. Ma, A deep learning based 2-dimensional hip pressure signals analysis method for sitting posture recognition, Biomedical Signal Processing and Control 73 (2022) 103432.