sEMG-Based Hand Gestures Classification Using a Semi-Supervised Multi-Layer Neural Networks with Autoencoder
Hussein Naser and Hashim A. Hashim
Carleton University
Abstract

This work presents a semi-supervised multilayer neural network (MLNN) with an Autoencoder to develop a classification model for recognizing hand gestures from electromyographic (EMG) signals. Using a Myo armband equipped with eight non-invasive surface-mounted biosensors, raw surface EMG (sEMG) sensor data were captured corresponding to five hand gestures: Fist, Open hand, Wave in, Wave out, and Double tap. The sensor-collected data underwent preprocessing, feature extraction, label assignment, and dataset organization for classification tasks. The model implementation, validation, and testing demonstrated its efficacy after incorporating synthetic sEMG data generated by an Autoencoder. In comparison to state-of-the-art techniques from the literature, the proposed model exhibited strong performance, achieving accuracies of 99.68%, 100%, and 99.26% during training, validation, and testing, respectively. The proposed MLNN with Autoencoder model also outperformed a K-Nearest Neighbors model established for comparative evaluation.

Keywords: Electromyographic signals; Neural Networks; Classification; Myo armband; Autoencoder
1. Introduction

In the realm of human–machine interaction (HMI), the combination of cutting-edge technology and the innate human ability to communicate through gestures is reshaping the way we interact with machines. Hand gesture-based human–machine interaction is a striking example of such cutting-edge technology. Conventional human–machine interactions have used different techniques, such as joysticks, keyboards, radio transmitters, inertial measurement units (IMU), haptic devices, and speech recognition systems, to control robotic platforms [1–9]. However, in recent years, researchers have utilized non-verbal communication techniques such as hand gesture recognition (HGR) to create a human–machine interface for suitable HMI.

1.1. Motivation

Researchers in [10] developed a vision-based hand gesture recognition system using a transfer learning CNN-based classifier. They used hand shapes as static gestures to be classified by their trained classifier and then converted the recognized gesture into a command to interact with a computer. The prototype was tested on seven different subjects using different backgrounds and light conditions, with an accuracy of 93.09%. In [11], the authors proposed a bare-hand dynamic gesture recognition method using real-time video and a support vector machine classifier. They used the YCbCr color space for skin color determination, Histogram of Oriented Gradient (HOG) descriptors for feature extraction, and a support vector machine (SVM) with a linear kernel for classification. Their results showed that this approach improved accuracy and reduced training and validation time. Neural networks have been efficient in several engineering applications [12–15]. In [13], the authors developed a convolutional neural network (CNN) classification approach for human hand gesture detection and recognition. They utilized region-of-interest segmentation, normalization of the segmented image, and a connected component analysis algorithm to segment the fingertips from hand images. Histogram equalization was used to improve the accuracy of their model. The proposed methodology achieved 96.2% classification accuracy and recognition rate. In [16], a real-time hand gesture recognizer based on a color glove was presented. The system consists of three modules: hand image identification, feature extraction, and classification using Learning Vector Quantization. The recognizer achieved a high recognition rate when tested on a dataset of 907 hand gestures, demonstrating its effectiveness in real-time hand gesture recognition. Although the aforementioned methods for hand gesture classification achieve relatively good accuracies, they face several challenges. For instance, they necessitate the user's presence in front of the camera to capture movements, limiting the user's mobility. Additionally, these methods
∗ Corresponding author. E-mail addresses: [email protected] (H. Naser), [email protected] (H.A. Hashim).
https://doi.org/10.1016/j.sasc.2024.200144
Received 17 July 2024; Received in revised form 17 August 2024; Accepted 26 August 2024; Available online 2 September 2024
2772-9419/© 2024 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
The dataset comprises sEMG signals acquired using the Myo Gestures Control Armband, a wearable biosensor device featuring eight surface-mounted biosensors for capturing electromyographic signals from hand muscles (Fig. 2). To accurately record hand muscle activity, users wore the armband in proximity to the elbow to cover the muscles of interest in their forearms. For each gesture, raw sEMG data were obtained from the user's arm muscles using the eight sensors of the armband, as depicted in Fig. 3(a). The dataset encompasses five distinct classes of hand gestures: Fist, Open hand, Wave in, Wave out, and Double tap. For each gesture, 45 recordings were obtained and saved in a CSV file. Data inspection revealed that every gesture was correlated with only three active channels. Consequently, these three effective channels per gesture were identified, discarding non-effective ones to reduce data size and computational processing time. The three effective channels that represent one of the gestures in the dataset are shown in Fig. 3(b).

3.2. Preprocessing

Preprocessing is a crucial step in refining raw data to prepare it for analysis of muscular electrical activity. sEMG signals, which record the electrical impulses generated by muscle contractions, are often contaminated by various artifacts and noise sources. The preprocessing phase involves a series of techniques designed to address these challenges, such as filtering to eliminate unwanted frequencies, rectification to convert the signal into a positive form, trimming the unwanted portions of each signal to reduce the data size, and smoothing to reduce high-frequency noise. The ultimate goal of sEMG signal preprocessing is to provide a clean and accurate representation of muscle activity, laying the foundation for subsequent analysis, interpretation, and meaningful insights into neuromuscular behavior and related gestures.
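The preprocessing chain described above can be prototyped in a few lines. The following is a minimal Python sketch using NumPy and SciPy; the 20–90 Hz band, the trimming threshold, and the smoothing window size are illustrative assumptions rather than the exact values used in this work (the Myo armband streams sEMG at 200 Hz):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(raw, fs=200):
    """Clean one raw sEMG channel: band-pass filter, rectify, trim, smooth."""
    # 1) Band-pass filter to suppress motion artifacts and high-frequency noise.
    b, a = butter(4, [20, 90], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)

    # 2) Full-wave rectification: convert the signal into a positive form.
    rectified = np.abs(filtered)

    # 3) Trim low-activity head/tail portions to reduce data size
    #    (here: drop samples below 10% of the peak amplitude at both ends).
    active = np.flatnonzero(rectified > 0.1 * rectified.max())
    trimmed = rectified[active[0]:active[-1] + 1]

    # 4) Smooth with a moving average to attenuate residual high-frequency noise.
    window = 25
    return np.convolve(trimmed, np.ones(window) / window, mode="same")
```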
3.3. Augmentation

3.4. Autoencoder

Fig. 4. Autoencoder schematic diagram.
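As a concrete illustration of how an Autoencoder of the kind sketched in Fig. 4 can reconstruct feature vectors and produce synthetic training instances, the following is a minimal Keras sketch; the hidden-layer sizes, noise level, and training settings are illustrative assumptions, not the configuration reported in this work:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 48  # 16 features x 3 effective channels per gesture

# Encoder compresses the feature vector; decoder reconstructs it.
# The hidden sizes (32, 16) are illustrative assumptions.
inputs = keras.Input(shape=(N_FEATURES,))
encoded = layers.Dense(32, activation="relu")(inputs)
latent = layers.Dense(16, activation="relu")(encoded)
decoded = layers.Dense(32, activation="relu")(latent)
outputs = layers.Dense(N_FEATURES, activation="sigmoid")(decoded)  # features scaled to [0, 1]

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# X_train: normalized feature matrix of shape (n_instances, 48).
# autoencoder.fit(X_train, X_train, epochs=100, batch_size=16)

# Reconstructions of noise-perturbed instances can then serve as noisy
# synthetic sEMG feature vectors for retraining the classifier:
# X_synth = autoencoder.predict(X_train + np.random.normal(0, 0.01, X_train.shape))
```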
3.5. Feature extraction

Feature extraction involves the identification and extraction of pertinent characteristics from preprocessed signals. Using MATLAB built-in functions in conjunction with the EMG feature extraction tool detailed in [29,37], 16 features were extracted from each of the three sEMG signals corresponding to gestures within the five specified classes. Consequently, the resulting instance vector comprises 48 features for each gesture, contributing to a dataset sized 900 × 48, serving as the input for the classification model. For more elaboration on the extracted features of the sEMG signals, brief definitions and mathematical equations for each feature are provided as follows:

3.5.1. Average amplitude change (AAC)

AAC is the averaged cumulative length of the sEMG waveform over a segment of time [29,37,38]. It is a commonly used sEMG feature that can be expressed as follows:

$$AAC = \frac{1}{N} \sum_{i=1}^{N-1} |s(i+1) - s(i)| \tag{1}$$

where $N$ is the total number of samples in the sEMG signal while $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.

3.5.2. Mean energy (ME) value

ME of an sEMG signal is the mean power or intensity of the signal over a specified duration. The ME can be defined as follows:

$$ME = \frac{1}{N} \sum_{i=1}^{N} (s(i))^2 \tag{2}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.
Fig. 3. (a) The raw sEMG data obtained from the eight channels of biosensors, (b) The three effective channels for the gesture after reduction.
3.5.3. Mean absolute value (MAV)

MAV is one of the commonly used features in sEMG signal analysis [29,37]. It provides insights into the overall signal strength and can be expressed as follows:

$$MAV = \frac{1}{N} \sum_{i=1}^{N} |s(i)| \tag{3}$$

where $N$ refers to the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at time $i$.

3.5.4. Enhanced mean absolute value (EMAV)

EMAV was proposed in [29] to focus on the most informative window of the sEMG signal. It can be defined as follows:

$$EMAV = \frac{1}{N} \sum_{i=1}^{N} |(s(i))^a|, \qquad a = \begin{cases} 0.75, & \text{if } 0.2N \le i \le 0.8N \\ 0.50, & \text{otherwise} \end{cases} \tag{4}$$

where $N$ describes the total number of samples in the sEMG signal and $s(i)$ refers to the amplitude of the sEMG signal at time $i$.
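To make the preceding definitions concrete, here is a direct NumPy transcription of Eqs. (1)–(4); the function and variable names are our own, and the EMAV implementation assumes the rectified (non-negative) signals produced by the preprocessing stage, for which $|s(i)|^a = |(s(i))^a|$:

```python
import numpy as np

def aac(s):
    """Average amplitude change, Eq. (1)."""
    return np.sum(np.abs(np.diff(s))) / len(s)

def mean_energy(s):
    """Mean energy, Eq. (2)."""
    return np.mean(s ** 2)

def mav(s):
    """Mean absolute value, Eq. (3)."""
    return np.mean(np.abs(s))

def emav(s):
    """Enhanced mean absolute value, Eq. (4): the exponent a weights
    the central 20%-80% window of the signal more heavily."""
    n = len(s)
    i = np.arange(1, n + 1)
    a = np.where((i >= 0.2 * n) & (i <= 0.8 * n), 0.75, 0.50)
    return np.mean(np.power(np.abs(s), a))
```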
3.5.5. Difference absolute mean value (DAMV)

DAMV is another metric that calculates the average of the absolute differences between consecutive sEMG signal values [37]. It can be computed as follows:

$$DAMV = \frac{1}{N-1} \sum_{i=1}^{N-1} |s(i+1) - s(i)| \tag{5}$$

where $N$ is the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at time $i$, and $|s(i+1) - s(i)|$ calculates the absolute difference between two consecutive samples.

3.5.6. Kurtosis (KURT)

KURT is a measure of the sEMG signal that can provide insight into its statistical properties related to peak or tail behavior. Its mathematical formula is given as follows:

$$KURT = \frac{\frac{1}{N} \sum_{i=1}^{N} (s(i) - \bar{s})^4}{\left( \frac{1}{N} \sum_{i=1}^{N} (s(i) - \bar{s})^2 \right)^2} - 3 \tag{6}$$

where $N$ defines the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$, and $\bar{s}$ denotes the mean of the sEMG signal values.

3.5.7. Skewness (SKEW)

SKEW quantifies the lack of symmetry in the sEMG signal's amplitude distribution. Its mathematical formula is given as follows:

$$SKEW = \frac{\frac{1}{N} \sum_{i=1}^{N} (s(i) - \bar{s})^3}{\left( \frac{1}{N} \sum_{i=1}^{N} (s(i) - \bar{s})^2 \right)^{3/2}} \tag{7}$$

where $N$ is the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$, and $\bar{s}$ denotes the mean of the sEMG signal values.

3.5.8. Root mean square (RMS)

RMS is one of the commonly used features in sEMG signal analysis [29,37]. It provides a measure used to assess the magnitude of the signal over time, and can be expressed as follows:

$$RMS = \sqrt{\frac{1}{N} \sum_{i=1}^{N} s(i)^2} \tag{8}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.

3.5.10. Variance of the EMG (VARE)

$$VARE = \frac{1}{N-1} \sum_{i=1}^{N} (s(i))^2 \tag{10}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.

3.5.11. Waveform length (WL)

WL is a popular sEMG feature that is calculated as the cumulative length of the waveform summation [37]. The mathematical expression of the WL is given as follows:

$$WL = \sum_{i=1}^{N-1} |s(i+1) - s(i)| \tag{11}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.

3.5.12. Zero crossing (ZC)

ZC of the sEMG signal refers to the points where the signal changes its polarity from positive to negative or vice versa, crossing the zero-amplitude axis.

$$ZC = \sum_{i=1}^{N-1} \begin{cases} 1 & \text{if } s(i) \cdot s(i+1) < 0 \\ 0 & \text{otherwise} \end{cases} \tag{12}$$

where $N$ is the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$, and $s(i) \cdot s(i+1)$ checks for a change in sign between consecutive samples. If the product is negative, it indicates a sign change or zero crossing.

3.5.13. Standard deviation (SD)

SD measures the extent of variability of the sEMG signal's amplitude values from its mean, i.e., how much the signal's values spread around the mean. It can be expressed as follows:

$$SD = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (s(i) - \bar{s})^2} \tag{13}$$

where $N$ is the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$, and $\bar{s}$ denotes the mean of the sEMG signal values.

3.5.14. Slope sign change (SSC)

SSC measures the number of times the sign of the slope of an sEMG signal changes within a specific window. It is used to assess the rapidity of changes in the sEMG signal [29,37]. It can be defined as follows:

$$SSC = \sum_{i=2}^{N-1} \begin{cases} 1 & \text{if } \operatorname{sgn}(\Delta s(i)) \neq \operatorname{sgn}(\Delta s(i-1)) \text{ and } \operatorname{sgn}(\Delta s(i)) \neq \operatorname{sgn}(\Delta s(i+1)) \\ 0 & \text{otherwise} \end{cases} \tag{14}$$

where $N$ is the total number of samples in the sEMG signal, $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$, $\Delta s(i)$ represents the first derivative of the signal, often computed as $s(i) - s(i-1)$, and $\operatorname{sgn}(\cdot)$ denotes the sign function, returning $-1$ for negative values, $0$ for zero, and $1$ for positive values.
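The case-based features above translate naturally into vectorized NumPy. The following sketch (function names are our own) implements Eqs. (11), (12), and (14):

```python
import numpy as np

def waveform_length(s):
    """Waveform length, Eq. (11): cumulative length of the waveform."""
    return np.sum(np.abs(np.diff(s)))

def zero_crossings(s):
    """Zero crossing count, Eq. (12): a negative product of consecutive
    samples indicates a polarity change across the zero-amplitude axis."""
    return np.sum(s[:-1] * s[1:] < 0)

def slope_sign_changes(s):
    """Slope sign change, Eq. (14): count samples where sgn(Δs(i)) differs
    from both sgn(Δs(i-1)) and sgn(Δs(i+1))."""
    ds = np.sign(np.diff(s))  # sgn(Δs(i)) for consecutive sample pairs
    return np.sum((ds[1:-1] != ds[:-2]) & (ds[1:-1] != ds[2:]))
```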
3.5.15. Temporal moments (TM3, TM4, TM5)

$$TM3 = \left| \frac{1}{N} \sum_{i=1}^{N} (s(i))^3 \right| \tag{15}$$

$$TM4 = \left| \frac{1}{N} \sum_{i=1}^{N} (s(i))^4 \right| \tag{16}$$

$$TM5 = \left| \frac{1}{N} \sum_{i=1}^{N} (s(i))^5 \right| \tag{17}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.
Fig. 5. Scatter plot of the dataset.
3.5.16. Simple square integral (SSI)

SSI is the integration or summation of the squared values of the sEMG signal over a specified time window or across the entire signal duration. It provides information about the magnitude or the total energy of the signal [29,37]. It can be defined as follows:

$$SSI = \sum_{i=1}^{N} (s(i))^2 \tag{18}$$

where $N$ is the total number of samples in the sEMG signal and $s(i)$ represents the amplitude of the sEMG signal at a specific time point $i$.
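Eqs. (15)–(18) likewise reduce to one-liners (names are our own):

```python
import numpy as np

def temporal_moment(s, order):
    """Absolute temporal moment of a given order, Eqs. (15)-(17)
    for order = 3, 4, 5."""
    return np.abs(np.mean(s ** order))

def ssi(s):
    """Simple square integral, Eq. (18): total energy of the signal."""
    return np.sum(s ** 2)
```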
3.5.17. Normalization

After extraction of all the features above, the resulting dataset is normalized. Normalization in machine learning is a process that scales the features of a dataset to a similar range, typically between 0 and 1, as follows:

$$X_{\text{normalized}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}} \tag{19}$$

where $X$ is the original value of a feature, $X_{\text{normalized}}$ is the normalized value of the feature, $X_{\min}$ is the minimum value of the feature in the dataset, and $X_{\max}$ is the maximum value of the feature in the dataset.

Fig. 6. Accuracy and loss vs. epochs plot.
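Putting the feature pipeline together, the sketch below shows how the 900 × 48 feature matrix can be assembled and min-max normalized per Eq. (19). The helper names are our own, and the feature list shown is an illustrative subset of the 16 features defined above (the functions are those sketched earlier):

```python
import numpy as np

# Illustrative subset of the 16 features from Section 3.5.
FEATURES = [aac, mean_energy, mav, emav, waveform_length, zero_crossings]

def instance_vector(channels):
    """Build one instance from the 3 effective channels of a gesture:
    each feature is computed per channel and the results concatenated
    (16 features x 3 channels = 48 values in the full pipeline)."""
    return np.array([f(ch) for ch in channels for f in FEATURES])

def min_max_normalize(X):
    """Column-wise min-max scaling to [0, 1], Eq. (19)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

# X = np.stack([instance_vector(g) for g in gestures])  # -> (900, 48) here
# X = min_max_normalize(X)
```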
3.6. Dataset creation and organization

After preprocessing and feature extraction, the dataset undergoes structuring and organization. This pivotal step involves labeling the resultant dataset based on the corresponding hand gestures or classes. To ensure proper partitioning, the dataset is divided into distinct sets for training, validation, and testing, with proportions of 70%, 15%, and 15%, respectively. This dataset organization streamlines efficient model training and evaluation. To provide visual insight into the dataset, Fig. 5 exhibits a scatter plot showcasing two features, highlighting the necessity of a robust model to handle the classification task.

3.7. Model design

Model design involves crafting the architecture and framework of the classification model. For the classification of the sEMG signal, a Multilayer Neural Network Classifier (MLNNC) was used. It consists of two hidden layers with 30 neurons each and 5 neurons in the output layer. These layers employ a rectified linear unit (ReLU) activation function in the hidden layers and a SoftMax activation function in the output layer. The MLNNC adeptly processes extracted features to make precise predictions about hand gestures within the dataset. Additionally, for comparative analysis, a K-Nearest Neighbors (KNN) algorithm was implemented alongside the MLNNC model.

3.8. Training

Training the model involves providing the prepared dataset to the designed model, enabling it to learn patterns and correlations between the extracted features and the corresponding hand gestures. This iterative process fine-tunes the model's parameters, minimizing errors and enhancing accuracy. The dataset was divided into training, validation, and testing sets, with the training and validation sets comprising 85% of the data and the test set containing 15%. Initially, the model was trained using the original dataset. Subsequently, to enhance its robustness against noisy data, the model was retrained by integrating synthetic data generated from the Autoencoder. To optimize the model, a categorical cross-entropy loss function coupled with the Adam optimizer was employed. The model was implemented in Python using TensorFlow and the Keras library. The training spanned 250 epochs, utilizing EarlyStopping and ModelCheckpoint callbacks to prevent overfitting and preserve the best model parameters. Additionally, for comparative analysis, a KNN model employing the Euclidean distance metric to classify EMG signals was implemented and trained using both datasets used for training the Neural Network model.

3.9. Evaluation

The evaluation step assesses the performance and effectiveness of the trained model. This step involves testing the model on unseen data (the test set) to measure its performance and accuracy metrics. It helps determine the model's ability to generalize to new data and accurately classify hand gestures based on sEMG signals.
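The architecture and training setup described in Sections 3.7–3.8 map directly onto a few lines of Keras. The sketch below mirrors the stated configuration (two 30-neuron ReLU hidden layers, a 5-neuron SoftMax output, categorical cross-entropy, Adam, 250 epochs with EarlyStopping and ModelCheckpoint); the batch size, patience, checkpoint file name, and KNN neighbor count are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.neighbors import KNeighborsClassifier

# MLNNC: 48 input features -> two hidden layers of 30 ReLU neurons
# -> 5 SoftMax outputs (one per gesture class).
model = keras.Sequential([
    keras.Input(shape=(48,)),
    layers.Dense(30, activation="relu"),
    layers.Dense(30, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Stop once validation loss stalls and keep the best weights seen.
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=25,
                                  restore_best_weights=True),
    keras.callbacks.ModelCheckpoint("best_mlnnc.keras", monitor="val_loss",
                                    save_best_only=True),
]

# X_train/X_val: normalized 48-feature instances; y_*: one-hot gesture labels.
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
#                     epochs=250, batch_size=16, callbacks=callbacks)

# Baseline for comparison: KNN with the Euclidean distance metric.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
# knn.fit(X_train, y_train.argmax(axis=1))
```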
Fig. 7. Confusion matrix between the actual targets and predictions before adding the synthetic data; (a) MLNNC, (b) KNN.
Fig. 9. Confusion matrix between the actual targets and predictions after adding the synthetic data; (a) MLNNC, (b) KNN.
and (0.1850, 0.1879, and 0.2920) for losses across training, validation, and test sets. Fig. 8 demonstrates the model's accuracy and loss trends, showcasing a halt in training after 175 epochs due to the EarlyStopping condition. The MLNN model demonstrated robust performance on the noisy synthetic data, surpassing the KNN model's accuracy of 94%.

The confusion matrix in Fig. 9(a) highlighted misclassifications between class 1 (Double tap) and class 2 (Fist), as well as instances between class 4 (Wave out) and class 1 (Double tap) or class 5 (Wave in). Additionally, Fig. 9(b) illustrates the confusion matrix of the KNN model, which further demonstrates misclassifications within the dataset.
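For reference, test-set accuracy and a confusion matrix of this kind can be reproduced with scikit-learn; this is a sketch assuming the model and the X_test/y_test split from the earlier training example:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# y_test: one-hot gesture labels for the held-out 15% split.
y_true = y_test.argmax(axis=1)
y_pred = model.predict(X_test).argmax(axis=1)

print("Test accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))  # rows: actual class, cols: predicted
```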
CRediT authorship contribution statement

Hussein Naser: Writing – original draft, Validation, Methodology, Investigation, Formal analysis, Conceptualization. Hashim A. Hashim: Writing – review & editing, Validation, Supervision, Software, Project administration, Investigation, Funding acquisition.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
5. Conclusion and future work

In this paper, a classification model was established employing Neural Networks and KNN to distinguish hand gestures based on sEMG signals. The findings revealed that the Neural Networks model surpassed the performance of the KNN model, achieving the highest accuracy of 99.26% on unseen data. These results demonstrate comparable or superior performance to previous studies utilizing Neural Networks for EMG signal classification. For future endeavors, the trained model holds promise in controlling quadcopters through hand gestures and movements, paving the way for practical applications in task execution and control.

Data availability

No data was used for the research described in the article.

Acknowledgments

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), under the grant RGPIN-2022-04937. The authors would also like to acknowledge the support by the University of Thi-Qar, Iraq Ministry of Higher Education and Scientific Research.
References

[1] S. Bordoni, G. Tang, Development and assessment of a contactless 3D joystick approach to industrial manipulator gesture control, Int. J. Ind. Ergon. 93 (2023) 103376, http://dx.doi.org/10.1016/j.ergon.2022.103376.
[2] A. Oña, V. Vimos, M. Benalcázar, P.J. Cruz, Adaptive non-linear control for a virtual 3d manipulator, in: 2020 IEEE ANDESCON, IEEE, 2020, pp. 1–6.
[3] H.A. Hashim, K.G. Vamvoudakis, Adaptive neural network stochastic-filter-based controller for attitude tracking with disturbance rejection, IEEE Trans. Neural Netw. Learn. Syst. 35 (1) (2022) 1217–1227.
[4] E. Abdi, D. Kulić, E. Croft, Haptics in teleoperated medical interventions: Force measurement, haptic interfaces and their influence on user's performance, IEEE Trans. Biomed. Eng. 67 (12) (2020) 3438–3451, http://dx.doi.org/10.1109/TBME.2020.2987603.
[5] C. González, J.E. Solanes, A. Muñoz, L. Gracia, V. Girbés-Juan, J. Tornero, Advanced teleoperation and control system for industrial robots based on augmented virtuality and haptic feedback, J. Manuf. Syst. 59 (2021) 283–298.
[6] J. Botero-Valencia, M. Mejía-Herrera, D. Betancur-Vásquez, Development of an inertial measurement unit (IMU) with datalogger and geopositioning for mapping the Earth's magnetic field, HardwareX 16 (2023) e00485, http://dx.doi.org/10.1016/j.ohx.2023.e00485.
[7] Y. Zhang, Y. Chen, H. Yu, X. Yang, W. Lu, Learning effective spatial–temporal features for sEMG armband-based gesture recognition, IEEE Internet Things J. 7 (8) (2020) 6979–6992.
[8] J. Kofman, X. Wu, T.J. Luu, S. Verma, Teleoperation of a robot manipulator using a vision-based human-robot interface, IEEE Trans. Ind. Electron. 52 (5) (2005) 1206–1219.
[9] J.-S. Park, H.-J. Na, Front-end of vehicle-embedded speech recognition for voice-driven multi-UAVs control, Appl. Sci. 10 (19) (2020) 6876.
[10] S. Hussain, R. Saxena, X. Han, J.A. Khan, H. Shin, Hand gesture recognition using deep learning, in: 2017 International SoC Design Conference, ISOCC, 2017, pp. 48–49, http://dx.doi.org/10.1109/ISOCC.2017.8368821.
[11] M.S. Sefat, M. Shahjahan, A hand gesture recognition technique from real time video, in: 2015 International Conference on Electrical Engineering and Information Communication Technology, ICEEICT, 2015, pp. 1–4, http://dx.doi.org/10.1109/ICEEICT.2015.7307386.
[12] H.A. Hashim, S. El-Ferik, F.L. Lewis, Neuro-adaptive cooperative tracking control with prescribed performance of unknown higher-order nonlinear multi-agent systems, Internat. J. Control 92 (2) (2019) 445–460.
[13] P. Neethu, R. Suguna, D. Sathish, An efficient method for human hand gesture detection and recognition using deep learning convolutional neural networks, Soft Comput. 24 (2020) 15239–15248.
[14] X. Chen, Y. Li, R. Hu, X. Zhang, X. Chen, Hand gesture recognition based on surface electromyography using convolutional neural network with transfer learning method, IEEE J. Biomed. Health Inf. 25 (4) (2020) 1292–1304.
[15] W. Batayneh, E. Abdulhay, M. Alothman, Comparing the efficiency of artificial neural networks in sEMG-based simultaneous and continuous estimation of hand kinematics, Digit. Commun. Netw. 8 (2) (2022) 162–173.
[16] L. Lamberti, F. Camastra, Real-time hand gesture recognition using a color glove, in: Image Analysis and Processing – ICIAP 2011: 16th International Conference, Ravenna, Italy, September 14–16, 2011, Proceedings, Part I, Springer, 2011, pp. 365–373.
[17] N. Jarrassé, T. Proietti, V. Crocher, J. Robertson, A. Sahbani, G. Morel, A. Roby-Brami, Robotic exoskeletons: a perspective for the rehabilitation of arm coordination in stroke patients, Front. Human Neurosci. 8 (2014) 947.
[18] H. Naser, H. Hammood, A.Q. Migot, Internet-based smartphone system for after-stroke hand rehabilitation, in: 2023 International Conference on Engineering, Science and Advanced Technology, ICESAT, IEEE, 2023, pp. 69–74.
[19] B.A. De la Cruz-Sánchez, M. Arias-Montiel, E. Lugo-González, EMG-controlled hand exoskeleton for assisted bilateral rehabilitation, Biocybern. Biomed. Eng. 42 (2) (2022) 596–614.
[20] P. Chen, Z. Li, S. Togo, H. Yokoi, Y. Jiang, A layered sEMG–FMG hybrid sensor for hand motion recognition from forearm muscle activities, IEEE Trans. Hum.-Mach. Syst. 1 (2023).
[21] P. Tran, S. Jeong, K.R. Herrin, J.P. Desai, Hand exoskeleton systems, clinical rehabilitation practices, and future prospects, IEEE Trans. Med. Robot. Bionics 3 (3) (2021) 606–622.
[22] H.N. Hasan, A wearable rehabilitation system to assist partially hand paralyzed patients in repetitive exercises, in: Journal of Physics: Conference Series, 1279 (1), IOP Publishing, 2019, 012040.
[23] M. Sathiyanarayanan, T. Mulling, Map navigation using hand gesture recognition: A case study using myo connector on apple maps, Procedia Comput. Sci. 58 (2015) 50–57.
[24] P.J. Cruz, J.P. Vásconez, R. Romero, A. Chico, M.E. Benalcázar, R. Álvarez, L.I. Barona López, Á.L. Valdivieso Caraguay, A Deep Q-Network based hand gesture recognition system for control of robotic platforms, Sci. Rep. 13 (1) (2023) 7956.
[25] N. Abaid, V. Kopman, M. Porfiri, An attraction toward engineering careers: The story of a Brooklyn outreach program for K–12 students, IEEE Robot. Autom. Mag. 20 (2) (2012) 31–39.
[26] A.K. Das, V. Laxmi, S. Kumar, Hand gesture recognition and classification technique in real-time, in: 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking, ViTECoN, 2019, pp. 1–5, http://dx.doi.org/10.1109/ViTECoN.2019.8899619.
[27] K. Bakircioğlu, N. Özkurt, Classification of EMG signals using convolution neural network, Int. J. Appl. Math. Electron. Comput. 8 (4) (2020) 115–119.
[28] L. Tirel, A.M. Ali, H.A. Hashim, Novel hybrid integrated Pix2Pix and WGAN model with Gradient Penalty for binary images denoising, Syst. Soft Comput. (2024) 200122.
[29] J. Too, A.R. Abdullah, N.M. Saad, Classification of hand movements based on discrete wavelet transform and enhanced feature extraction, Int. J. Adv. Comput. Sci. Appl. 10 (6) (2019).
[30] C. Li, D. Belkin, Y. Li, P. Yan, M. Hu, N. Ge, H. Jiang, E. Montgomery, P. Lin, Z. Wang, et al., Efficient and self-adaptive in-situ learning in multilayer memristor neural networks, Nature Commun. 9 (1) (2018) 2385.
[31] F. Duan, L. Dai, W. Chang, Z. Chen, C. Zhu, W. Li, sEMG-based identification of hand motion commands using wavelet neural network combined with discrete wavelet transform, IEEE Trans. Ind. Electron. 63 (3) (2015) 1923–1934.
[32] E.Y. Boateng, J. Otoo, D.A. Abaye, Basic tenets of classification algorithms K-nearest-neighbor, support vector machine, random forest and neural network: a review, J. Data Anal. Inf. Process. 8 (4) (2020) 341–357.
[33] K. Shah, H. Patel, D. Sanghvi, M. Shah, A comparative analysis of logistic regression, random forest and KNN models for the text classification, Augment. Hum. Res. 5 (2020) 1–16.
[34] F. Modaresi, S. Araghinejad, K. Ebrahimi, A comparative assessment of artificial neural network, generalized regression neural network, least-square support vector regression, and K-nearest neighbor regression for monthly streamflow forecasting in linear and nonlinear conditions, Water Res. Manag. 32 (2018) 243–258.
[35] P. Wei, J. Zhang, F. Tian, J. Hong, A comparison of neural networks algorithms for EEG and sEMG features based gait phases recognition, Biomed. Signal Process. Control 68 (2021) 102587.
[36] O. Chamberland, M. Reckzin, H.A. Hashim, An autoencoder with convolutional neural network for surface defect detection on cast components, J. Fail. Anal. Prev. 23 (4) (2023) 1633–1644.
[37] A. Phinyomark, P. Phukpattaranont, C. Limsakul, Feature reduction and selection for EMG signal classification, Expert Syst. Appl. 39 (8) (2012) 7420–7431.
[38] R.J. Doniec, et al., The detection of alcohol intoxication using electrooculography signals from smart glasses and machine learning techniques, Syst. Soft Comput. 6 (2024) 200078.