Search Results (41)

Search Parameters:
Keywords = hand posture recognition

15 pages, 7134 KiB  
Article
Single-Handed Gesture Recognition with RGB Camera for Drone Motion Control
by Guhnoo Yun, Hwykuen Kwak and Dong Hwan Kim
Appl. Sci. 2024, 14(22), 10230; https://doi.org/10.3390/app142210230 - 7 Nov 2024
Viewed by 262
Abstract
Recent progress in hand gesture recognition has introduced several natural and intuitive approaches to drone control. However, effectively maneuvering drones in complex environments remains challenging. Drone movements are governed by four independent factors: roll, yaw, pitch, and throttle. Each factor includes three distinct behaviors—increase, decrease, and neutral—necessitating hand gesture vocabularies capable of expressing at least 81 combinations for comprehensive drone control in diverse scenarios. In this paper, we introduce a new set of hand gestures for precise drone control, leveraging an RGB camera sensor. These gestures are categorized into motion-based and posture-based types for efficient management. Then, we develop a lightweight hand gesture recognition algorithm capable of real-time operation on even edge devices, ensuring accurate and timely recognition. Subsequently, we integrate hand gesture recognition into a drone simulator to execute 81 commands for drone flight. Overall, the proposed hand gestures and recognition system offer natural control for complex drone maneuvers. Full article
(This article belongs to the Section Aerospace Science and Engineering)
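
The 81-command figure follows directly from three behaviors on each of four independent axes (3^4 = 81). A minimal sketch, not taken from the paper, that enumerates the full command set:

```python
# Illustrative only: enumerate all drone commands implied by the abstract's counting
# argument -- four independent factors, each with three behaviors, gives 3 ** 4 = 81.
from itertools import product

FACTORS = ["roll", "yaw", "pitch", "throttle"]
BEHAVIORS = ["decrease", "neutral", "increase"]

commands = list(product(BEHAVIORS, repeat=len(FACTORS)))
assert len(commands) == 3 ** 4 == 81

for cmd in commands[:3]:  # show a few example commands
    print(dict(zip(FACTORS, cmd)))
```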

17 pages, 6630 KiB  
Article
An Actively Vision-Assisted Low-Load Wearable Hand Function Mirror Rehabilitation System
by Zheyu Chen, Huanjun Wang, Yubing Yang, Lichao Chen, Zhilong Yan, Guoli Xiao, Yi Sun, Songsheng Zhu, Bin Liu, Liang Li and Jianqing Li
Actuators 2024, 13(9), 368; https://doi.org/10.3390/act13090368 - 19 Sep 2024
Viewed by 535
Abstract
The restoration of fine motor function in the hand is crucial for stroke survivors with hemiplegia to reintegrate into daily life and presents a significant challenge in post-stroke rehabilitation. Current mirror rehabilitation systems based on wearable devices require medical professionals or caregivers to assist patients in donning sensor gloves on the healthy side, thus hindering autonomous training, increasing labor costs, and imposing psychological burdens on patients. This study developed a low-load wearable hand function mirror rehabilitation robotic system based on visual gesture recognition. The system incorporates an active visual apparatus capable of adjusting its position and viewpoint autonomously, enabling the subtle monitoring of the healthy side’s gesture throughout the rehabilitation process. Consequently, patients only need to wear the device on their impaired hand to complete the mirror training, facilitating independent rehabilitation exercises. An algorithm based on hand key point gesture recognition was developed, which is capable of automatically identifying eight distinct gestures. Additionally, the system supports remote audio–video interaction during training sessions, addressing the lack of professional guidance in independent rehabilitation. A prototype of the system was constructed, a dataset for hand gesture recognition was collected, and the system’s performance as well as functionality were rigorously tested. The results indicate that the gesture recognition accuracy exceeds 90% under ten-fold cross-validation. The system enables operators to independently complete hand rehabilitation training, while the active visual system accommodates a patient’s rehabilitation needs across different postures. This study explores methods for autonomous hand function rehabilitation training, thereby offering valuable insights for future research on hand function recovery. Full article
(This article belongs to the Special Issue Actuators and Robotic Devices for Rehabilitation and Assistance)
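
As an illustration of the keypoint-based gesture classification step, here is a hedged sketch that runs a generic classifier over flattened hand-keypoint vectors under ten-fold cross-validation; the feature layout, classifier choice, and data are placeholders, not the paper's implementation:

```python
# Hedged sketch: classify 8 gestures from 21 hand keypoints (x, y, z -> 63 values)
# with ten-fold cross-validation, as a stand-in for the paper's unspecified classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 63))      # placeholder keypoint feature vectors
y = rng.integers(0, 8, size=800)    # 8 gesture labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```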

17 pages, 5681 KiB  
Article
Visual Perception and Multimodal Control: A Novel Approach to Designing an Intelligent Badminton Serving Device
by Fulai Jiang, Yuxuan Lin, Rui Ming, Chuan Qin, Yangjie Wu, Yuhui Liu and Haibo Luo
Machines 2024, 12(5), 331; https://doi.org/10.3390/machines12050331 - 13 May 2024
Viewed by 1002
Abstract
Addressing the current issue of limited control methods for badminton serving devices, this paper proposes a vision-based multimodal control system and method for badminton serving. The system integrates computer vision recognition technology with traditional control methods for badminton serving devices. By installing vision capture devices on the serving device, the system identifies various human body postures. Based on the content of posture information, corresponding control signals are sent to adjust parameters such as launch angle and speed, enabling multiple modes of serving. Firstly, the hardware design for the badminton serving device is presented, including the design of the actuator module through 3D modeling. Simultaneously, an embedded development board circuit is designed to meet the requirements of multimodal control. Secondly, in the aspect of visual perception for human body recognition, an improved BlazePose candidate region posture recognition algorithm is proposed based on existing posture recognition algorithms. Furthermore, mappings between posture information and hand information are established to facilitate parameter conversion for the serving device under different postures. Finally, extensive experiments validate the feasibility and stability of the developed system and method. Full article
(This article belongs to the Special Issue Advanced Methodology of Intelligent Control and Measurement)
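
The posture-to-parameter conversion described above can be pictured as a lookup from a recognized posture label to launch settings; the following sketch uses invented posture names and parameter values purely for illustration:

```python
# Illustrative mapping from a recognized body posture to serving parameters.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServeCommand:
    launch_angle_deg: float   # elevation of the launch mechanism
    speed_mps: float          # shuttle launch speed

# Hypothetical posture labels and values -- not taken from the paper.
POSTURE_TO_SERVE = {
    "arm_raised":  ServeCommand(launch_angle_deg=45.0, speed_mps=18.0),  # high clear
    "arm_forward": ServeCommand(launch_angle_deg=20.0, speed_mps=25.0),  # flat drive
    "arm_lowered": ServeCommand(launch_angle_deg=10.0, speed_mps=12.0),  # short serve
}

def command_for(posture: str) -> Optional[ServeCommand]:
    """Return the serving parameters for a recognized posture, if any."""
    return POSTURE_TO_SERVE.get(posture)

print(command_for("arm_raised"))
```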

24 pages, 4796 KiB  
Article
sEMG-Based Robust Recognition of Grasping Postures with a Machine Learning Approach for Low-Cost Hand Control
by Marta C. Mora, José V. García-Ortiz and Joaquín Cerdá-Boluda
Sensors 2024, 24(7), 2063; https://doi.org/10.3390/s24072063 - 23 Mar 2024
Cited by 1 | Viewed by 1397
Abstract
The design and control of artificial hands remains a challenge in engineering. Popular prostheses are bio-mechanically simple with restricted manipulation capabilities, as advanced devices are pricy or abandoned due to their difficult communication with the hand. For social robots, the interpretation of human intention is key for their integration in daily life. This can be achieved with machine learning (ML) algorithms, which are barely used for grasping posture recognition. This work proposes an ML approach to recognize nine hand postures, representing 90% of the activities of daily living in real time using an sEMG human–robot interface (HRI). Data from 20 subjects wearing a Myo armband (8 sEMG signals) were gathered from the NinaPro DS5 and from experimental tests with the YCB Object Set, and they were used jointly in the development of a simple multi-layer perceptron in MATLAB, with a global percentage success of 73% using only two features. GPU-based implementations were run to select the best architecture, with generalization capabilities, robustness-versus-electrode shift, low memory expense, and real-time performance. This architecture enables the implementation of grasping posture recognition in low-cost devices, aimed at the development of affordable functional prostheses and HRI for social robots. Full article
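
A hedged sketch of this kind of sEMG pipeline: two common time-domain features per Myo channel feeding a small multi-layer perceptron for nine postures. MAV and waveform length are assumptions here; the abstract does not name the two features it uses.

```python
# Hedged sketch of an 8-channel sEMG posture classifier; data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def semg_features(window: np.ndarray) -> np.ndarray:
    """window: (samples, 8 channels) -> 16 features (MAV + waveform length per channel)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(1)
windows = rng.normal(size=(500, 200, 8))              # placeholder sEMG windows
X = np.array([semg_features(w) for w in windows])
y = rng.integers(0, 9, size=500)                      # 9 grasping postures

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```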

19 pages, 5168 KiB  
Article
Detection of Rehabilitation Training Effect of Upper Limb Movement Disorder Based on MPL-CNN
by Lijuan Shi, Runmin Wang, Jian Zhao, Jing Zhang and Zhejun Kuang
Sensors 2024, 24(4), 1105; https://doi.org/10.3390/s24041105 - 8 Feb 2024
Cited by 1 | Viewed by 1604
Abstract
Stroke represents a medical emergency and can lead to the development of movement disorders such as abnormal muscle tone, limited range of motion, or abnormalities in coordination and balance. In order to help stroke patients recover as soon as possible, rehabilitation training methods employ various movement modes such as ordinary movements and joint reactions to induce active reactions in the limbs and gradually restore normal functions. Rehabilitation effect evaluation can help physicians understand the rehabilitation needs of different patients, determine effective treatment methods and strategies, and improve treatment efficiency. In order to achieve real-time, accurate action detection, this article uses Mediapipe’s action detection algorithm and proposes a model based on MPL-CNN. Mediapipe can be used to identify key point features of the patient’s upper limbs and simultaneously identify key point features of the hand. In order to detect the effect of rehabilitation training for upper limb movement disorders, LSTM and CNN are combined to form a new LSTM-CNN model, which is used to identify the action features of upper limb rehabilitation training extracted by Mediapipe. The MPL-CNN model can effectively identify the accuracy of rehabilitation movements during upper limb rehabilitation training for stroke patients. In order to ensure the scientific validity and unified standards of rehabilitation training movements, this article employs the postures in the Fugl-Meyer Upper Limb Rehabilitation Training Functional Assessment Form (FMA) and establishes an FMA upper limb rehabilitation data set for experimental verification. Experimental results show that in each stage of the Fugl-Meyer upper limb rehabilitation training evaluation effect detection, the MPL-CNN-based method’s recognition accuracy of upper limb rehabilitation training actions reached 95%. At the same time, the average accuracy rate of various upper limb rehabilitation training actions reaches 97.54%. This shows that the model is highly robust across different action categories and proves that the MPL-CNN model is an effective and feasible solution. This method based on MPL-CNN can provide a high-precision detection method for the evaluation of rehabilitation effects of upper limb movement disorders after stroke, helping clinicians evaluate the patient’s rehabilitation progress and adjust the rehabilitation plan based on the evaluation results. This will help improve the personalization and precision of rehabilitation treatment and promote patient recovery. Full article
(This article belongs to the Section Sensing and Imaging)
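
One plausible way to combine a CNN with an LSTM over per-frame keypoint sequences is sketched below in PyTorch; this is an illustrative stand-in, not the paper's MPL-CNN architecture, and the keypoint dimensionality and class count are assumptions.

```python
# Hedged sketch of a CNN + LSTM over sequences of pose/hand keypoints (e.g., Mediapipe output).
import torch
import torch.nn as nn

class KeypointCNNLSTM(nn.Module):
    def __init__(self, n_keypoint_feats=75, n_classes=10):   # assumed sizes
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_keypoint_feats, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                                     # x: (batch, time, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)      # 1-D convolution over time
        _, (hn, _) = self.lstm(h)                             # last hidden state
        return self.fc(hn[-1])

model = KeypointCNNLSTM()
scores = model(torch.randn(4, 30, 75))   # 4 clips, 30 frames, 75 keypoint values per frame
print(scores.shape)                      # torch.Size([4, 10])
```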

13 pages, 867 KiB  
Article
A Four-Stage Mahalanobis-Distance-Based Method for Hand Posture Recognition
by Dawid Warchoł and Tomasz Kapuściński
Appl. Sci. 2023, 13(22), 12347; https://doi.org/10.3390/app132212347 - 15 Nov 2023
Viewed by 892
Abstract
Automatic recognition of hand postures is an important research topic with many applications, e.g., communication support for deaf people. In this paper, we present a novel four-stage, Mahalanobis-distance-based method for hand posture recognition using skeletal data. The proposed method is based on a two-stage classification algorithm with two additional stages related to joint preprocessing (normalization) and a rule-based system, specific to hand shapes that the algorithm is meant to classify. The method achieves superior effectiveness on two benchmark datasets, the first of which was created by us for the purpose of this work, while the second is a well-known and publicly available dataset. The method’s recognition rate measured by leave-one-subject-out cross-validation tests is 94.69% on the first dataset and 97.44% on the second. Experiments, including comparison with other state-of-the-art methods and ablation studies related to classification accuracy and time, confirm the effectiveness of our approach. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
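
The core classification idea can be sketched as nearest-class-mean under the Mahalanobis metric; the normalization stages and rule-based post-processing from the paper are omitted, and the data here are placeholders.

```python
# Hedged sketch: assign a sample to the class with the smallest Mahalanobis distance.
import numpy as np

def fit_class_stats(X, y):
    """Per-class mean and (pseudo-)inverse covariance."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.linalg.pinv(np.cov(Xc, rowvar=False)))
    return stats

def mahalanobis_predict(x, stats):
    def d2(mu, cov_inv):
        diff = x - mu
        return float(diff @ cov_inv @ diff)
    return min(stats, key=lambda c: d2(*stats[c]))

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))        # placeholder skeletal features
y = rng.integers(0, 5, size=300)      # placeholder posture labels
stats = fit_class_stats(X, y)
print(mahalanobis_predict(X[0], stats))
```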

20 pages, 5818 KiB  
Article
Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration
by Pedro Amaral, Filipe Silva and Vítor Santos
Sensors 2023, 23(21), 8989; https://doi.org/10.3390/s23218989 - 5 Nov 2023
Cited by 2 | Viewed by 2173
Abstract
Recent advances in the field of collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial to make predictions about the operator’s intentions. In this context, this paper proposes a novel learning-based framework to enable an assistive robot to recognize the object grasped by the human operator based on the pattern of the hand and finger joints. The framework combines the strengths of the commonly available software MediaPipe in detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on the comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models. Full article
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
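
A hedged sketch of the landmark-extraction stage using the MediaPipe Hands solution API; the image path is hypothetical, and the downstream CNN/transformer classifiers are not shown.

```python
# Hedged sketch: extract 21 hand landmarks with MediaPipe and flatten them into the
# 63-value vector a downstream classifier would consume.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def hand_keypoints(image_bgr):
    """Return a (63,) array of x, y, z landmark coordinates, or None if no hand is found."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

image = cv2.imread("grasp_example.jpg")          # hypothetical example image
if image is not None:
    keypoints = hand_keypoints(image)
    print(None if keypoints is None else keypoints.shape)   # (63,)
```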

13 pages, 4441 KiB  
Article
Study on the Design and Performance of a Glove Based on the FBG Array for Hand Posture Sensing
by Hongcheng Rao, Binbin Luo, Decao Wu, Pan Yi, Fudan Chen, Shenghui Shi, Xue Zou, Yuliang Chen and Mingfu Zhao
Sensors 2023, 23(20), 8495; https://doi.org/10.3390/s23208495 - 16 Oct 2023
Cited by 6 | Viewed by 1545
Abstract
This study introduces a new wearable fiber-optic sensor glove. The glove utilizes a flexible material, polydimethylsiloxane (PDMS), and a silicone tube to encapsulate fiber Bragg gratings (FBGs). It is employed to enable the self-perception of hand posture, gesture recognition, and the prediction of grasping objects. The investigation employs the Support Vector Machine (SVM) approach for predicting grasping objects. The proposed fiber-optic sensor glove can concurrently monitor the motion of 14 hand joints comprising 5 metacarpophalangeal joints (MCP), 5 proximal interphalangeal joints (PIP), and 4 distal interphalangeal joints (DIP). To expand the measurement range of the sensors, a sinusoidal layout incorporates the FBG array into the glove. The experimental results indicate that the wearable sensing glove can track finger flexion within a range of 0° to 100°, with a modest minimum measurement error (Error) of 0.176° and a minimum standard deviation (SD) of 0.685°. Notably, the glove accurately detects hand gestures in real-time and even forecasts grasping actions. The fiber-optic smart glove technology proposed herein holds promising potential for industrial applications, including object grasping, 3D displays via virtual reality, and human–computer interaction. Full article
(This article belongs to the Special Issue Fiber Grating Sensors and Applications)
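
The abstract does not give the wavelength-to-angle model, so the sketch below assumes a simple linear per-joint calibration between Bragg wavelength shift and flexion angle, with invented calibration points.

```python
# Hedged sketch of a per-joint FBG calibration (assumed linear; values are illustrative).
import numpy as np

# Hypothetical calibration pairs: (Bragg wavelength shift in nm, joint angle in degrees)
shift_nm = np.array([0.00, 0.12, 0.25, 0.37, 0.50])
angle_deg = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

slope, intercept = np.polyfit(shift_nm, angle_deg, deg=1)

def joint_angle(measured_shift_nm: float) -> float:
    """Map a measured Bragg wavelength shift to an estimated flexion angle."""
    return slope * measured_shift_nm + intercept

print(f"{joint_angle(0.30):.1f} degrees")   # ~60 degrees with this toy calibration
```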

15 pages, 4788 KiB  
Article
Saliency-Driven Hand Gesture Recognition Incorporating Histogram of Oriented Gradients (HOG) and Deep Learning
by Farzaneh Jafari and Anup Basu
Sensors 2023, 23(18), 7790; https://doi.org/10.3390/s23187790 - 11 Sep 2023
Cited by 2 | Viewed by 1353
Abstract
Hand gesture recognition is a vital means of communication to convey information between humans and machines. We propose a novel model for hand gesture recognition based on computer vision methods and compare results based on images with complex scenes. While extracting skin color information is an efficient method to determine hand regions, complicated image backgrounds adversely affect recognizing the exact area of the hand shape. Some valuable features like saliency maps, histogram of oriented gradients (HOG), Canny edge detection, and skin color help us maximize the accuracy of hand shape recognition. Considering these features, we proposed an efficient hand posture detection model that improves the test accuracy results to over 99% on the NUS Hand Posture Dataset II and more than 97% on the hand gesture dataset with different challenging backgrounds. In addition, we added noise to around 60% of our datasets. Replicating our experiment, we achieved more than 98% and nearly 97% accuracy on NUS and hand gesture datasets, respectively. Experiments illustrate that the saliency method with HOG has stable performance for a wide range of images with complex backgrounds having varied hand colors and sizes. Full article
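
A hedged sketch of the HOG feature stage using scikit-image; the saliency map, skin-color cue, Canny edges, and the deep classifier are omitted, and the input image is a placeholder.

```python
# Hedged sketch: compute a HOG descriptor for a grayscale hand image.
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 128)   # placeholder grayscale hand image
features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(features.shape)   # descriptor that would feed the downstream classifier
```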

25 pages, 5766 KiB  
Article
Application of Foot Hallux Contact Force Signal for Assistive Hand Fine Control
by Jianwei Cui, Bingyan Yan, Han Du, Yucheng Shang and Liyan Tong
Sensors 2023, 23(11), 5277; https://doi.org/10.3390/s23115277 - 2 Jun 2023
Viewed by 1477
Abstract
Accurate recognition of disabled persons’ behavioral intentions is the key to reconstructing hand function. Their intentions can be understood to some extent by electromyography (EMG), electroencephalogram (EEG), and arm movements, but they are not reliable enough to be generally accepted. In this paper, characteristics of foot contact force signals are investigated, and a method of expressing grasping intentions based on hallux (big toe) touch sense is proposed. First, force signals acquisition methods and devices are investigated and designed. By analyzing characteristics of signals in different areas of the foot, the hallux is selected. The peak number and other characteristic parameters are used to characterize signals, which can significantly express grasping intentions. Second, considering complex and fine tasks of the assistive hand, a posture control method is proposed. Based on this, many human-in-the-loop experiments are conducted using human–computer interaction methods. The results showed that people with hand disabilities could accurately express their grasping intentions through their toes, and could accurately grasp objects of different sizes, shapes, and hardness using their feet. The accuracy of the action completion for single-handed and double-handed disabled individuals was 99% and 98%, respectively. This proves that the method of using toe tactile sensation for assisting disabled individuals in hand control can help them complete daily fine motor activities. The method is easily acceptable in terms of reliability, unobtrusiveness, and aesthetics. Full article
(This article belongs to the Section Wearables)
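
A hedged sketch of the peak-count feature: count force peaks in a hallux pressure signal and map the count to a grasp command. The thresholds, sampling rate, and command table are illustrative, not the paper's.

```python
# Hedged sketch: peak counting on a toe contact-force signal as a grasp-intention cue.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                                     # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
force = np.clip(np.sin(2 * np.pi * 1 * t), 0, None) * 10    # toy two-press signal

peaks, _ = find_peaks(force, height=5.0, distance=fs // 4)
n_presses = len(peaks)

COMMANDS = {1: "open hand", 2: "close hand", 3: "switch grasp mode"}   # illustrative
print(n_presses, "->", COMMANDS.get(n_presses, "no command"))
```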

25 pages, 1204 KiB  
Article
Research on Railway Dispatcher Fatigue Detection Method Based on Deep Learning with Multi-Feature Fusion
by Liang Chen and Wei Zheng
Electronics 2023, 12(10), 2303; https://doi.org/10.3390/electronics12102303 - 19 May 2023
Cited by 2 | Viewed by 1622
Abstract
Traffic command and scheduling are the core monitoring aspects of railway transportation. Detecting the fatigued state of dispatchers is, therefore, of great significance to ensure the safety of railway operations. In this paper, we present a multi-feature fatigue detection method based on key points of the human face and body posture. Considering unfavorable factors such as facial occlusion and angle changes that have limited single-feature fatigue state detection methods, we developed our model based on the fusion of body postures and facial features for better accuracy. Using facial key points and eye features, we calculate the percentage of eye closure that accounts for more than 80% of the time duration, as well as blinking and yawning frequency, and we analyze fatigue behaviors, such as yawning, a bowed head (that could indicate sleep state), and lying down on a table, using a behavior recognition algorithm. We fuse five facial features and behavioral postures to comprehensively determine the fatigue state of dispatchers. The results show that on the 300 W dataset, as well as a hand-crafted dataset, the inference time of the improved facial key point detection algorithm based on the RetinaFace model was 100 ms and that the normalized mean error (NME) was 3.58. On our own dataset, the classification accuracy based on a Bi-LSTM-SVM adaptive enhancement algorithm model reached 97%. Video data of volunteers who carried out scheduling operations in the simulation laboratory were used for our experiments, and our multi-feature fusion fatigue detection algorithm showed an accuracy rate of 96.30% and a recall rate of 96.30% in fatigue classification, both of which were higher than those of existing single-feature detection methods. Our multi-feature fatigue detection method offers a potential solution for fatigue level classification in vital areas of the industry, such as in railway transportation. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)
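
A hedged sketch of the eye-closure feature: given per-frame eye aspect ratios derived from facial key points, compute the fraction of closed-eye frames and compare it with the 80% figure cited above. The EAR values and closed-eye threshold are assumptions, not the paper's numbers.

```python
# Hedged sketch of a PERCLOS-style eye-closure fraction over a window of frames.
import numpy as np

ear = np.array([0.08, 0.07, 0.09, 0.06, 0.28, 0.07, 0.08, 0.06, 0.05, 0.07])  # per-frame EAR
CLOSED_EAR = 0.15                        # below this the eye is treated as closed (assumed)

closed_fraction = np.mean(ear < CLOSED_EAR)
print(f"eyes closed in {closed_fraction:.0%} of frames ->",
      "fatigued" if closed_fraction > 0.80 else "alert")
```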

16 pages, 4964 KiB  
Article
Deep Learning for Highly Accurate Hand Recognition Based on Yolov7 Model
by Christine Dewi, Abbott Po Shun Chen and Henoch Juli Christanto
Big Data Cogn. Comput. 2023, 7(1), 53; https://doi.org/10.3390/bdcc7010053 - 22 Mar 2023
Cited by 24 | Viewed by 5941
Abstract
Hand detection is a key step in the pre-processing stage of many computer vision tasks because human hands are involved in the activity. Some examples of such tasks are hand posture estimation, hand gesture recognition, human activity analysis, and other tasks such as these. Human hands have a wide range of motion and change their appearance in many different ways, which makes it hard to identify hands in crowded scenes. In this investigation, we provide a concise analysis of CNN-based object recognition algorithms, more specifically, the Yolov7 and Yolov7x models with 100 and 200 epochs. This study explores a vast array of object detectors, some of which are used to locate hand recognition applications. Further, we train and test our proposed method on the Oxford Hand Dataset with the Yolov7 and Yolov7x models. Important statistics, such as the number of GFLOPs, the mean average precision (mAP), and the detection time, are tracked and monitored via performance metrics. The results of our research indicate that Yolov7x with 200 epochs during the training stage is the most stable approach when compared to other methods. It achieved 84.7% precision, 79.9% recall, and 86.1% mAP when it was being trained. In addition, Yolov7x accomplished the highest possible average mAP score, which was 86.3%, during the testing stage. Full article
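
The mAP figures quoted above rest on box-level IoU matching between detections and ground truth; a minimal sketch of that computation, with illustrative box coordinates:

```python
# Hedged sketch of the IoU computation used when scoring hand detections.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2); returns intersection-over-union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

pred_hand = (48, 60, 112, 130)   # detected hand box (illustrative)
true_hand = (50, 58, 110, 128)   # ground-truth box (illustrative)
print(f"IoU = {iou(pred_hand, true_hand):.2f}")   # counted as a true positive if >= 0.5
```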

22 pages, 3984 KiB  
Article
Research on Recognition and Analysis of Teacher–Student Behavior Based on a Blended Synchronous Classroom
by Taojie Xu, Wei Deng, Si Zhang, Yantao Wei and Qingtang Liu
Appl. Sci. 2023, 13(6), 3432; https://doi.org/10.3390/app13063432 - 8 Mar 2023
Cited by 6 | Viewed by 2378
Abstract
Due to the impact of the COVID-19 pandemic, many students are unable to attend face-to-face courses; therefore, distance education should be promoted to replace face-to-face education. However, because of the imbalance of education in different regions, such as the imbalance of education resources between rural and urban areas, the quality of distance education may not be guaranteed. Therefore, in China and some regions, efforts have been made to carry out blended synchronous classroom trials. In blended synchronous classroom situations, teachers’ workloads have increased, and it is difficult to fully understand students’ learning efficiency and class participation. We use deep learning to identify the behaviors of teachers and students in blended synchronous classroom situations, aiming to automate the analysis of classroom videos, which can help teachers in classroom reflection and summary in a blended synchronous classroom or face-to-face classroom. In the behavior recognition of students and teachers, we combine the head, hand, and body posture information of teachers and students and add the feature pyramid network (FPN) and convolutional block attention module (CBAM) for comparative experiments. Finally, S–T (student–teacher) analysis and engagement analysis were carried out on the identification results. Full article
(This article belongs to the Special Issue Artificial Intelligence in Online Higher Educational Data Mining)
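
A hedged sketch of the S–T analysis step in one common formulation (teacher occupancy rate Rt and behavior conversion rate Ch over an interval-sampled label sequence); the sequence and the exact rate definitions are illustrative, not necessarily the paper's.

```python
# Hedged sketch of S-T analysis on a sequence of per-interval behavior labels.
def st_analysis(sequence):
    n = len(sequence)
    rt = sequence.count("T") / n                                    # teacher occupancy rate
    ch = sum(a != b for a, b in zip(sequence, sequence[1:])) / n    # behavior conversion rate
    return rt, ch

# Illustrative label sequence produced by the behavior-recognition model.
seq = list("TTTSSTTSSSTTSS")
rt, ch = st_analysis(seq)
print(f"Rt = {rt:.2f}, Ch = {ch:.2f}")
```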

23 pages, 20417 KiB  
Article
Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model
by Jennifer Eunice, Andrew J, Yuichi Sei and D. Jude Hemanth
Sensors 2023, 23(5), 2853; https://doi.org/10.3390/s23052853 - 6 Mar 2023
Cited by 10 | Viewed by 3405
Abstract
Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses from sign videos is a persistent challenge. In this paper, we propose a systematic approach for gloss prediction in WSLR using the Sign2Pose gloss prediction transformer model. The primary goal of this work is to enhance WSLR’s gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram difference and Euclidean distance metrics to select and drop redundant frames. To enhance the model’s generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the hand gestures of the signers in the frames. Experiments with the proposed model on the WLASL datasets achieved a top 1% recognition accuracy of 80.9% on WLASL100 and 64.21% on WLASL300. The performance of the proposed model surpasses state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the performance of the proposed gloss prediction model by increasing the model’s precision in locating minor variations in their body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed 17% improved performance on the WLASL100 dataset. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
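
A hedged sketch of the key-frame selection idea: keep a frame only when its histogram difference or Euclidean distance from the last kept frame is large enough. The thresholds and frame data are illustrative, not the paper's values.

```python
# Hedged sketch: drop near-duplicate frames using histogram difference + Euclidean distance.
import numpy as np

def select_key_frames(frames, hist_thr=0.05, dist_thr=8.0):
    kept = [0]
    for i in range(1, len(frames)):
        prev, cur = frames[kept[-1]], frames[i]
        h_prev, _ = np.histogram(prev, bins=32, range=(0, 255), density=True)
        h_cur, _ = np.histogram(cur, bins=32, range=(0, 255), density=True)
        hist_diff = np.abs(h_prev - h_cur).sum()
        eucl = np.linalg.norm(cur.astype(float) - prev.astype(float)) / cur.size ** 0.5
        if hist_diff > hist_thr or eucl > dist_thr:   # enough change -> keep the frame
            kept.append(i)
    return kept

frames = [np.full((64, 64), v, dtype=np.uint8) for v in (10, 12, 80, 82, 160)]
print(select_key_frames(frames))   # near-duplicate frames are dropped, e.g. [0, 2, 4]
```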

19 pages, 9085 KiB  
Article
sEMG-Based Hand Posture Recognition and Visual Feedback Training for the Forearm Amputee
by Jongman Kim, Sumin Yang, Bummo Koo, Seunghee Lee, Sehoon Park, Seunggi Kim, Kang Hee Cho and Youngho Kim
Sensors 2022, 22(20), 7984; https://doi.org/10.3390/s22207984 - 19 Oct 2022
Cited by 5 | Viewed by 2105
Abstract
sEMG-based gesture recognition is useful for human–computer interactions, especially for technology supporting rehabilitation training and the control of electric prostheses. However, high variability in the sEMG signals of untrained users degrades the performance of gesture recognition algorithms. In this study, the hand posture recognition algorithm and radar plot-based visual feedback training were developed using multichannel sEMG sensors. Ten healthy adults and one bilateral forearm amputee participated by repeating twelve hand postures ten times. The visual feedback training was performed for two days and five days in healthy adults and a forearm amputee, respectively. Artificial neural network classifiers were trained with two types of feature vectors: a single feature vector and a combination of feature vectors. The classification accuracy of the forearm amputee increased significantly after three days of hand posture training. These results indicate that the visual feedback training efficiently improved the performance of sEMG-based hand posture recognition by reducing variability in the sEMG signal. Furthermore, a bilateral forearm amputee was able to participate in the rehabilitation training by using a radar plot, and the radar plot-based visual feedback training would help the amputees to control various electric prostheses. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
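
A hedged sketch of the radar-plot feedback: one normalized sEMG feature per channel on a polar axis, so a user can compare the current activation pattern against a target pattern for the trained posture. Channel values here are invented.

```python
# Hedged sketch: radar (polar) plot of per-channel sEMG activation for visual feedback.
import numpy as np
import matplotlib.pyplot as plt

channels = [f"ch{i}" for i in range(1, 9)]                     # 8 sEMG channels
target = np.array([0.8, 0.6, 0.3, 0.2, 0.4, 0.7, 0.9, 0.5])   # target activation (illustrative)
current = np.array([0.6, 0.5, 0.4, 0.3, 0.3, 0.6, 0.7, 0.4])  # user's current attempt

angles = np.linspace(0, 2 * np.pi, len(channels), endpoint=False)
# Close the polygons by repeating the first point.
angles = np.concatenate([angles, angles[:1]])
target = np.concatenate([target, target[:1]])
current = np.concatenate([current, current[:1]])

ax = plt.subplot(projection="polar")
ax.plot(angles, target, label="target posture")
ax.plot(angles, current, label="current attempt")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(channels)
ax.legend()
plt.show()
```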
