Search Results (76)

Search Parameters:
Keywords = mediapipe

26 pages, 4018 KiB  
Article
A MediaPipe Holistic Behavior Classification Model as a Potential Model for Predicting Aggressive Behavior in Individuals with Dementia
by Ioannis Galanakis, Rigas Filippos Soldatos, Nikitas Karanikolas, Athanasios Voulodimos, Ioannis Voyiatzis and Maria Samarakou
Appl. Sci. 2024, 14(22), 10266; https://doi.org/10.3390/app142210266 - 7 Nov 2024
Viewed by 553
Abstract
This paper introduces a classification model that detects and classifies argumentative behaviors between two individuals using a machine learning application based on the MediaPipe Holistic model. The approach distinguishes two classes of behavior between two individuals, argumentative and non-argumentative, corresponding to verbal argumentative behavior. Using a dataset of hand-gesture, body-stance, and facial-expression landmarks extracted from video frames, three classification models were trained and evaluated. The results indicate that the Random Forest classifier outperformed the other two, classifying argumentative behaviors with 68.07% accuracy and non-argumentative behaviors with 94.18% accuracy. There is therefore future scope for advancing this classification model into a prediction model, with the aim of predicting aggressive behavior in patients with dementia before its onset.
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
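As a minimal sketch of the preprocessing such a landmark-based classifier needs, the snippet below flattens per-frame landmark coordinates into a single feature row that a classifier such as a Random Forest could consume; the landmark groups and coordinate values are illustrative, not taken from the paper.

```python
def flatten_landmarks(*landmark_groups):
    """Concatenate (x, y, z) coordinates from several landmark groups
    (e.g. pose, left hand, right hand) into one flat feature row."""
    row = []
    for group in landmark_groups:
        for x, y, z in group:
            row.extend((x, y, z))
    return row

# Toy frame: 3 pose landmarks and 2 hand landmarks.
pose = [(0.1, 0.2, 0.0), (0.3, 0.4, 0.0), (0.5, 0.6, 0.1)]
hand = [(0.7, 0.8, 0.0), (0.9, 1.0, 0.1)]
features = flatten_landmarks(pose, hand)
print(len(features))  # 5 landmarks x 3 coordinates = 15
```

One such row per video frame, paired with a class label, is the standard input shape for tree-based classifiers.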

25 pages, 5540 KiB  
Article
IMITASD: Imitation Assessment Model for Children with Autism Based on Human Pose Estimation
by Hany Said, Khaled Mahar, Shaymaa E. Sorour, Ahmed Elsheshai, Ramy Shaaban, Mohamed Hesham, Mustafa Khadr, Youssef A. Mehanna, Ammar Basha and Fahima A. Maghraby
Mathematics 2024, 12(21), 3438; https://doi.org/10.3390/math12213438 - 3 Nov 2024
Viewed by 531
Abstract
Autism is a challenging brain disorder affecting children at global and national scales. Applied behavior analysis is commonly conducted as an efficient medical therapy for children. This paper focuses on one paradigm of applied behavior analysis, imitation, in which children mimic lessons to enhance their social behavior and play skills. It introduces IMITASD, a practical monitoring assessment model designed to evaluate autistic children’s behaviors efficiently. The proposed model provides an efficient solution for clinics and homes equipped with mid-specification computers attached to webcams. IMITASD automates the scoring of videos of autistic children as they imitate a series of lessons. The model integrates two core modules: attention estimation and imitation assessment. The attention module monitors the child’s position by tracking the child’s face and determining the head pose. The imitation module extracts a set of crucial key points from the child’s head and arms and measures their similarity to a reference imitation lesson using dynamic time warping. The model was validated on a refined dataset of 268 videos collected from 11 Egyptian autistic children performing six imitation lessons. The analysis demonstrated that IMITASD scores quickly, taking less than three seconds per video, and robustly, correlating highly (about 0.9) with scores given by medical therapists, highlighting its effectiveness for children’s training applications.
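The imitation module's similarity measure, dynamic time warping, can be sketched in a few lines. The sequences below are toy per-frame joint-angle series, and the implementation is the textbook DTW recurrence, not the paper's exact code.

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences, e.g.
    per-frame joint angles of the child vs. the reference lesson."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))     # 0.0 for identical sequences
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0 for a time-stretched copy
```

The key property for imitation scoring is that a child performing the right motion at a slightly different speed still scores close to zero.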

10 pages, 1248 KiB  
Article
A Non-Contacted Height Measurement Method in Two-Dimensional Space
by Phu Nguyen Trung, Nghien Ba Nguyen, Kien Nguyen Phan, Ha Pham Van, Thao Hoang Van, Thien Nguyen and Amir Gandjbakhche
Sensors 2024, 24(21), 6796; https://doi.org/10.3390/s24216796 - 23 Oct 2024
Viewed by 522
Abstract
Height is an important health parameter employed across domains, including healthcare, aesthetics, and athletics. Numerous non-contact methods for height measurement exist; however, most are limited to assessing height in an upright posture. This study presents a non-contact approach for measuring human height in 2D space across different postures. The proposed method utilizes computer vision techniques, specifically the MediaPipe library and the YOLOv8 model, to analyze images captured with a smartphone camera. The MediaPipe library identifies and marks joint points on the human body, while the YOLOv8 model facilitates the localization of these points. To determine the actual height of an individual, a multivariate linear regression model was trained using the ratios of distances between the identified joint points. Data from 166 subjects across four distinct postures (standing upright, rotated 45 degrees, rotated 90 degrees, and kneeling) were used to train and validate the model. Results indicate that the proposed method yields height measurements with a minimal error margin of approximately 1.2%. Future research will extend this approach to accommodate additional positions, such as lying down, cross-legged, and bent-legged. Furthermore, the method will be improved to account for various distances and angles of capture, thereby enhancing the flexibility and accuracy of height measurement in diverse contexts.
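The distance-ratio features feeding the regression model can be illustrated as follows; the joint pairs and coordinates are hypothetical, chosen only to show why ratios, unlike raw pixel distances, are invariant to camera distance.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D joint points (pixel coordinates)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ratio_features(points, pairs, reference_pair):
    """Ratios of joint-to-joint distances to one reference segment.
    Dividing by the reference length cancels the pixels-per-metre scale,
    so the features do not change when the camera moves closer or farther."""
    ref = dist(points[reference_pair[0]], points[reference_pair[1]])
    return [dist(points[a], points[b]) / ref for a, b in pairs]

# Toy skeleton in pixel coordinates: head, hip, knee, ankle.
pts = {"head": (0, 0), "hip": (0, 80), "knee": (0, 120), "ankle": (0, 160)}
feats = ratio_features(pts, [("head", "hip"), ("hip", "knee")], ("head", "ankle"))
print(feats)  # [0.5, 0.25]
```

Doubling every pixel coordinate (a camera twice as close) leaves `feats` unchanged, which is exactly the property a posture-robust height regressor needs.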

20 pages, 896 KiB  
Article
SWL-LSE: A Dataset of Health-Related Signs in Spanish Sign Language with an ISLR Baseline Method
by Manuel Vázquez-Enríquez, José Luis Alba-Castro, Laura Docío-Fernández and Eduardo Rodríguez-Banga
Technologies 2024, 12(10), 205; https://doi.org/10.3390/technologies12100205 - 18 Oct 2024
Viewed by 971
Abstract
Progress in automatic sign language recognition and translation has been hindered by the scarcity of datasets available for the training of machine learning algorithms, a challenge that is even more acute for languages with smaller signing communities, such as Spanish. In this paper, we introduce a dataset of 300 isolated signs in Spanish Sign Language, collected online via a web application with contributions from 124 participants, resulting in a total of 8000 instances. This dataset, which is openly available, includes keypoints extracted using MediaPipe Holistic. The goal of this paper is to describe the construction and characteristics of the dataset and to provide a baseline classification method using a spatial–temporal graph convolutional network (ST-GCN) model, encouraging the scientific community to improve upon it. The experimental section offers a comparative analysis of the method’s performance on the new dataset, as well as on two other well-known datasets. The dataset, code, and web app used for data collection are freely available, and the web app can also be used to test classifier performance online in real time.
(This article belongs to the Section Information and Communication Technologies)

20 pages, 3585 KiB  
Article
A Study of Exergame System Using Hand Gestures for Wrist Flexibility Improvement for Tenosynovitis Prevention
by Yanqi Xiao, Nobuo Funabiki, Irin Tri Anggraini, Cheng-Liang Shih and Chih-Peng Fan
Information 2024, 15(10), 622; https://doi.org/10.3390/info15100622 - 10 Oct 2024
Viewed by 504
Abstract
As increasing numbers of people spend long hours on their smartphones, smartphone tenosynovitis caused by prolonged finger use has become common. Hand exercise performed while playing video games, known as an exergame, can provide enjoyable daily exercise opportunities for its prevention, particularly for young people. In this paper, we implemented a simple exergame system with a hand gesture recognition program written in Python using the MediaPipe library. We designed three sets of hand gestures to control the key operations of the games, serving as different exercises useful for tenosynovitis prevention. For evaluation, we prepared five video games running in a web browser and asked 10 students from Okayama and Hiroshima Universities, Japan, to play them and answer 10 questions in a questionnaire. Their playing results and System Usability Scale (SUS) scores confirmed the usability of the proposal, although we revised one gesture set to reduce its complexity. Moreover, by measuring the angles of maximum wrist movement, we found that wrist flexibility improved by playing the games, which verifies the effectiveness of the proposal.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
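Measuring a wrist angle from landmark coordinates reduces to the angle between two vectors meeting at the joint; a minimal sketch, with made-up 2D landmark positions standing in for MediaPipe output:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c,
    e.g. a wrist angle from elbow, wrist, and index-finger landmarks."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

print(round(joint_angle((0, 1), (0, 0), (1, 0)), 1))  # 90.0
```

Tracking this value over a play session gives the maximum-wrist-movement angles the study uses as its flexibility measure.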

15 pages, 1892 KiB  
Article
Smart Physiotherapy: Advancing Arm-Based Exercise Classification with PoseNet and Ensemble Models
by Shahzad Hussain, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Muhammad Amjad Raza, Josep Alemany-Iturriaga, Álvaro Velarde-Sotres, Isabel De la Torre Díez and Sandra Dudley
Sensors 2024, 24(19), 6325; https://doi.org/10.3390/s24196325 - 29 Sep 2024
Cited by 1 | Viewed by 1202
Abstract
Telephysiotherapy has emerged as a vital solution for delivering remote healthcare, particularly in response to global challenges such as the COVID-19 pandemic. This study seeks to enhance telephysiotherapy by developing a system capable of accurately classifying physiotherapeutic exercises using PoseNet, a state-of-the-art pose estimation model. A dataset was collected from 49 participants (35 males, 14 females) performing seven distinct exercises, from which twelve anatomical landmarks were extracted using the Google MediaPipe library. Each landmark was represented by four features, which were used for classification. The core challenge addressed in this research is ensuring accurate, real-time exercise classification across diverse body morphologies and exercise types. Several tree-based classifiers, including Random Forest, Extra Tree Classifier, XGBoost, LightGBM, and Hist Gradient Boosting, were employed. Furthermore, two novel ensemble models called RandomLightHist Fusion and StackedXLightRF are proposed to enhance classification accuracy. The RandomLightHist Fusion model achieved a superior accuracy of 99.6%, demonstrating the system’s robustness and effectiveness. This innovation offers a practical solution for providing real-time feedback in telephysiotherapy, with the potential to improve patient outcomes through accurate monitoring and assessment of exercise performance.
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
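Ensemble fusion of several base classifiers can be as simple as averaging their class-probability outputs. The toy soft-voting sketch below is a generic stand-in for the fusion idea, not the paper's actual RandomLightHist or StackedXLightRF architecture.

```python
def soft_vote(prob_rows):
    """Average per-class probabilities across models and return the
    index of the winning class for one sample."""
    n = len(prob_rows)
    avg = [sum(col) / n for col in zip(*prob_rows)]
    return max(range(len(avg)), key=avg.__getitem__)

# Three models' class probabilities for one sample (3 exercise classes).
predicted = soft_vote([[0.6, 0.3, 0.1],
                       [0.5, 0.4, 0.1],
                       [0.4, 0.5, 0.1]])
print(predicted)  # 0
```

Stacking, the other family the paper names, replaces the fixed average with a learned meta-classifier over the base models' outputs.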

16 pages, 4954 KiB  
Article
Real-Time Hand Gesture Monitoring Model Based on MediaPipe’s Registerable System
by Yuting Meng, Haibo Jiang, Nengquan Duan and Haijun Wen
Sensors 2024, 24(19), 6262; https://doi.org/10.3390/s24196262 - 27 Sep 2024
Viewed by 922
Abstract
Hand gesture recognition plays a significant role in human-to-human and human-to-machine interactions. Currently, most hand gesture detection methods rely on recognizing a fixed set of gestures. Given the diversity and variability of hand gestures in daily life, this paper proposes a registerable hand gesture recognition approach based on a triplet loss. By learning the differences between hand gestures, the model can cluster them and identify newly added gestures. The paper constructs a registerable gesture dataset (RGDS) for training registerable hand gesture recognition models. It also proposes a normalization method for transforming hand gesture data and a FingerComb block for combining and extracting hand gesture features to enhance them and accelerate model convergence, and it improves ResNet to introduce FingerNet for registerable single-hand gesture recognition. The proposed model performs well on the RGDS dataset. Because the system is registerable, users can flexibly register their own hand gestures for personalized gesture recognition.
(This article belongs to the Section Sensing and Imaging)
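The register-then-match idea can be sketched as a nearest-centroid lookup in an embedding space; the two-dimensional embeddings below are toys standing in for the output of a triplet-trained network, and the class names are invented for illustration.

```python
import math

class GestureRegistry:
    """Register new gestures by storing one embedding centroid per gesture,
    then classify a query embedding by its nearest centroid."""

    def __init__(self):
        self.centroids = {}

    def register(self, name, embeddings):
        """Average a few example embeddings into the gesture's centroid."""
        dim = len(embeddings[0])
        self.centroids[name] = [
            sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)
        ]

    def classify(self, embedding):
        """Return the registered gesture whose centroid is closest."""
        return min(self.centroids,
                   key=lambda n: math.dist(embedding, self.centroids[n]))

reg = GestureRegistry()
reg.register("fist", [[0.0, 0.1], [0.1, 0.0]])
reg.register("open", [[1.0, 0.9], [0.9, 1.0]])
print(reg.classify([0.08, 0.02]))  # fist
```

A triplet loss trains the embedding so that examples of the same gesture land close together and different gestures land far apart, which is what makes this simple lookup work for gestures never seen during training.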

19 pages, 5886 KiB  
Article
Innovative Chair and System Designs to Enhance Resistance Training Outcomes for the Elderly
by Teng Qi, Miyuki Iwamoto, Dongeun Choi, Siriaraya Panote and Noriaki Kuwahara
Healthcare 2024, 12(19), 1926; https://doi.org/10.3390/healthcare12191926 - 26 Sep 2024
Viewed by 889
Abstract
Introduction: This study aims to provide a safe, effective, and sustainable resistance training environment for the elderly by modifying the chairs and movement systems used during training, particularly under unsupervised conditions. Materials and Methods: The research investigated the effect of modified chair designs on enhancing physical stability during resistance training, involving 19 elderly participants (mean age 72.1 years, SD 4.7). The study measured changes in the body’s acceleration during movements to compare the effectiveness of the modified chairs with those commonly used in chair-based exercise (CBE) training in maintaining physical stability. A system was developed based on experimental video data, which leverages MediaPipe to analyze the videos and compute joint angles, identifying whether the actions are executed correctly. Results and Conclusions: Comparisons revealed that modified chairs offered better stability during sitting (p < 0.001) and stand-up (p < 0.001) resistance training. According to the questionnaire survey results, compared to a regular chair without an armrest, the modified chair provided a greater sense of security and a better user experience for the elderly. Video observations indicated that the correct completion rate for most exercises, except stand-up resistance training, was only 59.75%, highlighting the insufficiency of modified chairs alone in ensuring accurate movement execution. Consequently, the introduction of an automatic system to verify proper exercise performance is essential. The model developed in this study for recognizing the correctness of movements achieved an accuracy rate of 97.68%. This study proposes a new chair design that enhances physical stability during resistance training and opens new avenues for utilizing advanced technology to assist the elderly in their training.
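Once MediaPipe has produced joint angles, checking whether a repetition is executed correctly can be reduced to a range test over the sampled angles; the thresholds and values below are illustrative, not the study's criteria.

```python
def movement_correct(angles, low, high):
    """Flag a repetition as correct when every sampled joint angle
    (degrees) stays inside the prescribed [low, high] range."""
    return all(low <= a <= high for a in angles)

# Knee angles sampled over one repetition, with a 90-120 degree target range.
print(movement_correct([95, 102, 110], 90, 120))  # True
print(movement_correct([95, 135, 110], 90, 120))  # False: over-extension
```

Real systems typically add per-exercise ranges and timing constraints, but the core correctness check has this shape.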

12 pages, 693 KiB  
Article
Signsability: Enhancing Communication through a Sign Language App
by Din Ezra, Shai Mastitz and Irina Rabaev
Software 2024, 3(3), 368-379; https://doi.org/10.3390/software3030019 - 12 Sep 2024
Viewed by 844
Abstract
The integration of sign language recognition systems into digital platforms has the potential to bridge communication gaps between the deaf community and the broader population. This paper introduces an advanced Israeli Sign Language (ISL) recognition system designed to interpret dynamic motion gestures, addressing a critical need for more sophisticated and fluid communication tools. Unlike conventional systems that focus solely on static signs, our approach incorporates both deep learning and computer vision techniques to analyze and translate dynamic gestures captured in real-time video. We provide a comprehensive account of our preprocessing pipeline, detailing every stage from video collection to the extraction of landmarks using MediaPipe, including the mathematical equations used for preprocessing these landmarks and the final recognition process. The dataset used to train our model is unique in its comprehensiveness and is publicly accessible, enhancing the reproducibility and expansion of future research. The deployment of our model on a publicly accessible website allows users to engage with ISL interactively, facilitating both learning and practice. We discuss the development process, the challenges overcome, and the anticipated societal impact of our system in promoting greater inclusivity and understanding.
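A typical landmark-preprocessing step in pipelines like this is normalizing hand landmarks for position and scale before recognition. The sketch below translates landmarks to a wrist origin and divides by a reference segment length; the index choices follow MediaPipe's hand topology (0 = wrist, 9 = middle-finger base) but are assumptions here, not the paper's exact equations.

```python
import math

def normalize_hand(landmarks, wrist_idx=0, ref_idx=9):
    """Translate 2-D landmarks so the wrist is the origin and scale by the
    wrist-to-reference distance, making the features invariant to where
    the hand sits in the frame and how large it appears."""
    wx, wy = landmarks[wrist_idx]
    scale = math.hypot(landmarks[ref_idx][0] - wx, landmarks[ref_idx][1] - wy)
    return [((x - wx) / scale, (y - wy) / scale) for x, y in landmarks]

pts = [(10, 10), (12, 10), (10, 14)]  # toy wrist, thumb, reference points
out = normalize_hand(pts, wrist_idx=0, ref_idx=2)
print(out)  # [(0.0, 0.0), (0.5, 0.0), (0.0, 1.0)]
```

After this step, the same sign performed anywhere in the frame, near or far from the camera, produces the same feature vector.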

17 pages, 4715 KiB  
Article
IoT-MFaceNet: Internet-of-Things-Based Face Recognition Using MobileNetV2 and FaceNet Deep-Learning Implementations on a Raspberry Pi-400
by Ahmad Saeed Mohammad, Thoalfeqar G. Jarullah, Musab T. S. Al-Kaltakchi, Jabir Alshehabi Al-Ani and Somdip Dey
J. Low Power Electron. Appl. 2024, 14(3), 46; https://doi.org/10.3390/jlpea14030046 - 5 Sep 2024
Viewed by 50855
Abstract
IoT applications revolutionize industries by enhancing operations, enabling data-driven decisions, and fostering innovation. This study explores the growing potential of IoT-based facial recognition for mobile devices, a technology rapidly advancing within the interconnected IoT landscape. The investigation proposes a framework called IoT-MFaceNet (Internet-of-Things-based face recognition using MobileNetV2 and FaceNet deep learning) built on pre-existing deep-learning methods, employing the MobileNetV2 and FaceNet algorithms on both the ImageNet and FaceNet databases. Additionally, an in-house database is compiled, capturing data from 50 individuals via a web camera and 10 subjects through a smartphone camera. Pre-processing of the in-house database involves face detection using OpenCV’s Haar Cascade, Dlib’s CNN Face Detector, and MediaPipe’s Face Detection. The resulting system demonstrates high accuracy in real time and operates efficiently on low-powered devices like the Raspberry Pi 400. The evaluation involves the use of multilayer perceptron (MLP) and support vector machine (SVM) classifiers. The system primarily functions as a closed-set identification system within a computer engineering department at the College of Engineering, Mustansiriyah University, Iraq, allowing access exclusively to department staff for the department rapporteur room. The proposed system undergoes successful testing, achieving a maximum accuracy rate of 99.976%.

24 pages, 8029 KiB  
Article
Real-Time Machine Learning for Accurate Mexican Sign Language Identification: A Distal Phalanges Approach
by Gerardo García-Gil, Gabriela del Carmen López-Armas, Juan Jaime Sánchez-Escobar, Bryan Armando Salazar-Torres and Alma Nayeli Rodríguez-Vázquez
Technologies 2024, 12(9), 152; https://doi.org/10.3390/technologies12090152 - 4 Sep 2024
Viewed by 1768
Abstract
Effective communication is crucial in daily life, and for people with hearing disabilities, sign language is no exception, serving as their primary means of interaction. Various technologies, such as cochlear implants and mobile sign language translation applications, have been explored to enhance communication and improve the quality of life of the deaf community. This article presents a new, innovative method that uses real-time machine learning (ML) to accurately identify Mexican sign language (MSL) and is adaptable to any sign language. Our method is based on analyzing six features that represent the angles between the distal phalanges and the palm, thus eliminating the need for complex image processing. Our ML approach achieves accurate sign language identification in real-time, with an accuracy and F1 score of 99%. These results demonstrate that a simple approach can effectively identify sign language. This advance is significant, as it offers an effective and accessible solution to improve communication for people with hearing impairments. Furthermore, the proposed method has the potential to be implemented in mobile applications and other devices to provide practical support to the deaf community.

15 pages, 6877 KiB  
Article
Finger Multi-Joint Trajectory Measurement and Kinematics Analysis Based on Machine Vision
by Shiqing Lu, Chaofu Luo, Hui Jin, Yutao Chen, Yiqing Xie, Peng Yang and Xia Huang
Actuators 2024, 13(9), 332; https://doi.org/10.3390/act13090332 - 2 Sep 2024
Viewed by 561
Abstract
A method for measuring multi-joint finger trajectories using MediaPipe is proposed. A high-speed camera records the finger movements, and the recorded video is input into MediaPipe, which automatically extracts the coordinates of the keypoints of the fingers, yielding the trajectory of the finger movements. To verify the accuracy and effectiveness of this method, we compared it with the DH method and the artificial keypoint alignment method on metrics such as MAPE (mean absolute percentage error), maximum distance error, and the time taken to process 500 images. The results demonstrated that our method can detect multiple finger joints naturally, efficiently, and accurately. We then measured posture for three selected hand movements, determined the position coordinates of the joints, and calculated the angular acceleration of the joint rotations. We observed that the angular acceleration can fluctuate significantly over a very short period (less than 100 ms), in some cases increasing to more than ten times its initial value. This finding underscores the complexity of finger joint movements. This study can provide support and reference for the design of finger rehabilitation robots and dexterous hands.
(This article belongs to the Section Actuators for Robotics)
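The angular acceleration reported here can be estimated from a sampled joint-angle series with a second-order finite difference; the sampling interval and angle values below are made up for illustration.

```python
def angular_acceleration(angles, dt):
    """Second-order central difference of a joint-angle series (deg/s^2),
    sampled at a fixed interval of dt seconds."""
    return [(angles[i + 1] - 2 * angles[i] + angles[i - 1]) / dt ** 2
            for i in range(1, len(angles) - 1)]

# Angles sampled every 10 ms: constant speed at first, then a sudden jerk.
accel = angular_acceleration([0, 1, 2, 5], 0.01)
print([round(a, 3) for a in accel])  # [0.0, 20000.0]
```

The short 10 ms sampling interval is the reason a high-speed camera matters: a jerk lasting under 100 ms spans only a handful of frames, and the squared dt in the denominator makes the estimate very sensitive to it.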

11 pages, 935 KiB  
Article
Next-Gen Dynamic Hand Gesture Recognition: MediaPipe, Inception-v3 and LSTM-Based Enhanced Deep Learning Model
by Yaseen, Oh-Jin Kwon, Jaeho Kim, Sonain Jamil, Jinhee Lee and Faiz Ullah
Electronics 2024, 13(16), 3233; https://doi.org/10.3390/electronics13163233 - 15 Aug 2024
Viewed by 2140
Abstract
Gesture recognition is crucial in computer vision-based applications, such as drone control, gaming, virtual and augmented reality (VR/AR), and security, especially in human–computer interaction (HCI)-based systems. There are two types of gesture recognition systems, i.e., static and dynamic. However, our focus in this paper is on dynamic gesture recognition. In dynamic hand gesture recognition systems, the sequences of frames, i.e., temporal data, pose significant processing challenges and reduce efficiency compared to static gestures. These data become multi-dimensional compared to static images because spatial and temporal data are being processed, which demands complex deep learning (DL) models with increased computational costs. This article presents a novel triple-layer algorithm that efficiently reduces the 3D feature map into 1D row vectors and enhances the overall performance. First, we process the individual images in a given sequence using the MediaPipe framework and extract the regions of interest (ROI). The processed cropped image is then passed to the Inception-v3 for the 2D feature extractor. Finally, a long short-term memory (LSTM) network is used as a temporal feature extractor and classifier. Our proposed method achieves an average accuracy of more than 89.7%. The experimental results also show that the proposed framework outperforms existing state-of-the-art methods.

22 pages, 12633 KiB  
Article
MediaPipe Frame and Convolutional Neural Networks-Based Fingerspelling Detection in Mexican Sign Language
by Tzeico J. Sánchez-Vicinaiz, Enrique Camacho-Pérez, Alejandro A. Castillo-Atoche, Mayra Cruz-Fernandez, José R. García-Martínez and Juvenal Rodríguez-Reséndiz
Technologies 2024, 12(8), 124; https://doi.org/10.3390/technologies12080124 - 1 Aug 2024
Cited by 2 | Viewed by 2101
Abstract
This research proposes a system to recognize the static signs of the Mexican Sign Language (MSL) dactylological alphabet using the MediaPipe framework and Convolutional Neural Network (CNN) models to correctly interpret the letters signed in front of a camera. Studies of this kind bring advances in artificial intelligence and computer vision to the teaching of Mexican Sign Language (MSL). The best CNN model achieved an accuracy of 83.63% on a test set of 336 images. Per-letter evaluation yields an accuracy of 84.57%, a sensitivity of 83.33%, and a specificity of 99.17%. An advantage of this system is that it could be implemented on low-power equipment, carrying out classification in real time and contributing to the accessibility of its use.

14 pages, 3613 KiB  
Article
Biometric Image-Analysis Techniques for Monitoring Chronic Neck Pain
by Wagner de Aguiar, José Celso Freire Junior, Guillaume Thomann and Gilberto Cuarelli
Appl. Sci. 2024, 14(15), 6429; https://doi.org/10.3390/app14156429 - 24 Jul 2024
Viewed by 762
Abstract
The term “mechanical neck pain” is a generic term for neck pain in people with neck injuries, neck dysfunction, or shoulder and neck pain. Several factors must be considered during the physical-therapy evaluation of cervical disorders, including changes in the visual system and in postural and proprioceptive balance. Currently, physiotherapists use the Cervicocephalic Relocation Test (CRT) to detect changes in cervical proprioception. This procedure requires precise equipment, customized installation in a dedicated area, and, above all, a significant amount of time after the session for the clinician to make the diagnosis. An innovative system composed of Google’s MediaPipe library combined with a personal laptop and camera is proposed and evaluated. The system architecture was developed, and a user interface was designed so that the system can be used more easily, quickly, and effectively by the healthcare practitioner. The tool is presented in this paper and tested in a use case, and the results are presented. The final user report, containing a visualization of the CRT results ready for analysis by the physical therapist, can be exported from the developed tool.
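At its core, the CRT measures how far the repositioned head pose lands from the starting pose after the patient turns the head with eyes closed; a minimal sketch with a hypothetical 2D head-pose representation (e.g. yaw and pitch in degrees), not the paper's actual metric.

```python
import math

def relocation_error(start, relocated):
    """Distance between the starting head pose and where the patient
    repositions it, e.g. (yaw, pitch) angles in degrees."""
    return math.dist(start, relocated)

# Patient starts at neutral and relocates 3 deg off in yaw, 4 deg in pitch.
print(relocation_error((0.0, 0.0), (3.0, 4.0)))  # 5.0
```

Averaging this error over repeated trials gives a single proprioception score of the kind the exported report would summarize for the therapist.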
