
Real-Time Conversion for Sign-to-Text and Text-to-Speech Communication using Machine Learning


Rachna Jain
Department of CSE
JSS Academy of Technical Education
NOIDA, INDIA
[email protected]

Shaurya Gupta
Department of CSE
JSS Academy of Technical Education
NOIDA, INDIA
[email protected]

Pratham Dubey
Department of CSE
JSS Academy of Technical Education
NOIDA, INDIA
[email protected]

Harshit Garg
Department of CSE
JSS Academy of Technical Education
NOIDA, INDIA
[email protected]

I. INTRODUCTION

Effective communication is the foundation of human relationships, improving understanding and connection. But challenges remain for those who communicate with sign language, as there are significant communication barriers with those who rely on spoken language. The integration of machine learning into real-time communication offers new solutions to close this gap, enabling interaction between people using sign language and people communicating through speech. This research paper introduces a technology called "Using machine learning to instantly convert signs into text and speech".

The main goal of our project is to create a system that can instantly interpret and convert hand gestures into text, as well as convert text into natural speech, addressing long-standing problems that prevent effective interaction between sign language users and the wider community.

As we delve deeper into computer vision and deep learning, this research explores the development of powerful sign language recognition models that can operate in real time. At the same time, communication can be achieved between people who do not share the same language by producing written text and spoken output using language processing tools.

This article provides a framework for understanding sign language translation by examining the existing literature on sign language recognition, text generation, and text-to-speech synthesis. The following sections describe the system architecture, the sign language recognition and text-to-speech techniques used, implementation concepts, and performance evaluation. Through this research, we want to increase community participation in a joint effort, reducing communication barriers through the power of machine learning.

The importance of this research lies in its ability to empower deaf people and offer them a way to communicate easily in many settings. Gestures are recognized and text is produced using the latest machine learning techniques, including deep learning, computer vision, and language processing.

Additionally, the integration of text-to-speech synthesis ensures that communication is two-way, allowing for a broad and inclusive discussion between individuals using the system.

Minimizing the verbal exchange gap among D&M (deaf and mute) and non-D&M people becomes a necessity to ensure effective conversations among all. Sign language translation is among the fastest growing lines of research, and it enables the most natural manner of communication for those with hearing impairments. A hand gesture recognition system offers an opportunity for deaf people to talk with vocal humans without the need for an interpreter. The system is built for the automated conversion of ASL into textual content and speech.

II. ABSTRACT

This research paper introduces an innovative approach to address communication barriers between individuals using sign language and those reliant on spoken language. Titled "Real-Time Conversion for Sign to Text and Text to Speech Using Machine Learning," the project seeks to harness the capabilities of machine learning to create a system that seamlessly converts sign language gestures into text and, simultaneously, transforms text into natural-sounding speech in real time.

The pervasive lack of communication accessibility for the deaf and hard-of-hearing population is a persistent societal challenge. To mitigate this, the project employs cutting-edge machine learning techniques, merging computer vision and natural language processing. The primary goal is to break down barriers by providing a bidirectional communication channel, interpreting sign language gestures instantaneously and conveying information through both textual representation and synthesized speech.

The research begins with an exploration of the existing literature, examining the evolution of sign language recognition, text generation, and text-to-speech synthesis. This establishes a foundation for the subsequent sections, which detail the system architecture and methodologies employed in sign language recognition and text-to-speech synthesis.
The system's architecture comprises two main modules. The sign recognition module uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to interpret gestures, while the text-to-speech synthesis module uses techniques such as WaveNet and Tacotron to generate natural speech from written text.

The implementation details include software and hardware requirements and the seamless integration of the sign recognition and text-to-speech interfaces. The article ends with an assessment of the system, examining the accuracy of translation and the naturalness of speech. With this research, we want to bring about more integrated, advanced communication through the revolutionary power of machine learning.

This research paper introduces a system that uses the power of machine learning to instantly convert hand gestures into text and text into speech. The project uses computer vision and natural language processing technologies to solve the communication problems of sign language users and people who rely on spoken language.

The proposed method combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to achieve accurate sign language recognition. Speech synthesis systems such as WaveNet and Tacotron are also used for text-to-speech, creating two-way communication. The combination of these models enables instant interpretation of sign language and conversion of text into lifelike speech.

Through an extensive literature review, this article provides a framework for the project by establishing the transition from sign language recognition to text and from text to speech. Detailed information, including software and hardware requirements, demonstrating the effectiveness of the system is presented.

This research aims to contribute to an integrated society by eliminating communication barriers and providing technological solutions that support communication for people using sign language. Evaluation of performance in terms of accuracy of translation and naturalness of synthesized speech demonstrates the social impact of this work.
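To make the intended data flow concrete, the following minimal sketch (ours, not the paper's implementation) chains a hypothetical trained Keras gesture classifier to an off-the-shelf synthesis call, with the gTTS library standing in for WaveNet/Tacotron-style synthesis; the model file name and label set are illustrative assumptions:

    import numpy as np
    import tensorflow as tf
    from gtts import gTTS

    LABELS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # assumed ASL alphabet classes

    # Hypothetical trained gesture classifier saved as "sign_model.h5".
    model = tf.keras.models.load_model("sign_model.h5")

    def sign_to_text(frame: np.ndarray) -> str:
        """Classify one pre-processed hand image of shape (H, W, 1) into a letter."""
        scores = model.predict(frame[np.newaxis, ...], verbose=0)
        return LABELS[int(np.argmax(scores))]

    def text_to_speech(text: str, path: str = "speech.mp3") -> None:
        """Synthesize speech for the recognized text (gTTS as a simple stand-in)."""
        gTTS(text=text, lang="en").save(path)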

TABLE I: Summary of Research Literature

1. Paper: Sign Language Translator for Deaf and Dumb Using Machine Learning
   Authors: B. Suneetha, J. Mrudula, S. Deeraj, Geethanjali
   Advantages: Enables two-way communication between deaf-dumb and ordinary individuals. Uses machine learning models for sign-language-to-speech and speech-to-sign-language conversion.
   Disadvantages: Limited to the signs included in the dataset. Relies on webcam and microphone input, so it may have environmental dependencies. Requires a visual sign word library for accurate speech-to-sign-language conversion.

2. Paper: American Sign Language Recognition and its Conversion from Text to Speech
   Authors: Aditi Bailur, Yesha Limbachia, Moksha Shah, Harshil Shah, Prof. Atul Kachare
   Advantages: Real-time sign-language-to-text translation. Innovative use of CNN with Inception V3 for accurate ASL gesture recognition. Converts ASL words into text and further into audible speech.
   Disadvantages: Relies on a camera as the data source, so it may have environmental dependencies. Specific to American Sign Language; may not generalize well to other sign languages.

3. Paper: Sign Language Detection and Conversion to Text and Speech Conversion
   Authors: Ameer Khan B, Chandru M, Kalaiselvan R
   Advantages: Real-time method for fingerspelling-based ASL. Utilizes neural networks for hand gesture recognition. Achieves a high accuracy of 98.00% for the alphabet.
   Disadvantages: Challenges in achieving high accuracy in noisy or challenging environments. Potential latency in real-time recognition.

4. Paper: A Machine Learning Framework and Method to Translate Speech to Real-Time Sign Language for AR Glasses
   Authors: Rahul Solleti
   Advantages: Addresses communication challenges for individuals with hearing disabilities. Utilizes advanced technologies like AR glasses for real-time sign language translation.
   Disadvantages: Relies on the availability and accuracy of regional sign language datasets. Implementation may require extensive collaboration with the Deaf community for dataset curation.

5. Paper: Sign Language to Speech Conversion
   Authors: Prof. M.T. Dangat, Rudra Chandgude, Pravin Kushwaha, Mohammed Champeli
   Advantages: Addresses communication challenges for deaf individuals. Uses flex sensors on a glove for sign language recognition.
   Disadvantages: Limited to American Sign Language (ASL). Accuracy may vary based on individual gestures.

6. Paper: Sign Language to Text Conversion in Real Time
   Authors: Shubham Thakar, Samveg Shah, Bhavya Shah, Anant V. Nimkar
   Advantages: High accuracy (98.7%) achieved with transfer learning compared to CNN (94%).
   Disadvantages: Assumes a smooth background in images. Future scope includes diversifying the model for different sign languages and improving robustness to diverse image backgrounds.

7. Paper: Sign Language to Text and Speech Conversion Using CNN
   Authors: Shreyas Viswanathan, Saurabh Pandey, Kartik Sharma, Dr. P. Vijayakumar
   Advantages: Affordable and efficient solution using Raspberry Pi. Hand gesture recognition for American Sign Language.
   Disadvantages: Limited to 11 ASL alphabets due to processing power constraints. Challenges in diverse lighting conditions.

8. Paper: Sign Language Fingerspelling Recognition Using Depth Information and Deep Belief Networks
   Authors: Mary Jane C. Samonte, Carl Jose M. Guingab, Ron Andrew Relayo, Mark Joseph C. Sheng, John Ray D. Tamayo
   Advantages: Accurate recognition of ASL fingerspelling.
   Disadvantages: Limited to fingerspelling in ASL.

9. Paper: Sign Language to Text-Speech Translator Using Machine Learning
   Authors: Akshatha Rani K, Dr. N Manjanaik
   Advantages: Bridges the communication gap between deaf-mute individuals and others. Utilizes efficient hand tracking with MediaPipe. Converts recognized signs to speech. Recognizes almost all letters in ASL.
   Disadvantages: Achieves only 74% accuracy.

10. Paper: Sign Language Recognition and Response via Virtual Reality
    Authors: S. Kumara Krishnan, V. Prasanna Venkatesan, V. Suriya Ganesh, D.P. Sai Prassanna, K. Sundara Skandan
    Advantages: Utilizes a virtual reality headset for immersive sign language learning. Employs Leap Motion controller features for real-time gesture recognition.
    Disadvantages: Dependency on hardware. Limited to alphabets. Cost implications with increased sensors.

11. Paper: KoSign Sign Language Translation Project: Introducing The NIASL2021 Dataset
    Authors: Mathew Huerta-Enochian, Du Hui Lee, Hye Jin Myung, Kang Suk Byun, Jun Woo Lee
    Advantages: Quantitative evaluation of the translation methodology, revealing that text-free prompting produced better translations than text-based prompting.
    Disadvantages: Difficulty translating low-context and unclear phrases into KSL. Dates such as the day of the month cannot be signed without also signing the month.

12. Paper: SummarizeAI - Summarization of Podcasts
    Authors: Dhairya Khanna, Rishab Bhushan, Khushboo Goel and Shallu Juneja
    Advantages: SummarizeAI enhances podcast accessibility by providing text and audio summaries through NLP and machine learning, catering to the growing user base and making content consumption more manageable.
    Disadvantages: Faces challenges in handling real-time podcast summarization, especially with varying audio qualities, background noise, and overlapping voices, impacting accuracy.

13. Paper: Development of a Text-to-Speech Synthesis for Yoruba Language Using Deep Learning
    Authors: O.M. Olaniyan and Victor Akinode
    Advantages: The developed Yoruba TTS synthesis system, based on deep learning, enhances accessibility and inclusivity, improving user experiences and facilitating communication in various applications.
    Disadvantages: Alignment quality and MOS scores may need further refinement to ensure optimal performance and overcome potential issues in out-of-domain applications.
III. HAND GESTURE TECHNIQUES

In recent years there has been tremendous research done on hand gesture recognition. With the help of a literature survey, we realized that the basic steps in hand gesture recognition are:

 Data acquisition:
Use of sensory devices: electromechanical devices are used to provide the exact hand configuration and position. Different glove-based approaches can be used to extract information, but this is expensive and not user friendly.

Vision-based approach: In vision-based methods, the computer webcam is the input device for observing the information of the hands and/or fingers. Vision-based methods require only a camera, thus realizing a natural interaction between humans and computers without the use of any extra devices, thereby reducing cost. These systems tend to complement biological vision by describing artificial vision systems that are implemented in software and/or hardware. The main challenge of vision-based hand detection ranges from coping with the large variability of the human hand's appearance due to a huge number of hand movements, to different skin-colour possibilities, as well as to variations in the viewpoint, scale, and speed of the camera capturing the scene.

 Data Pre-Processing and Feature extraction for the vision-based approach:
One approach for hand detection combines threshold-based colour detection with background subtraction. We can use an AdaBoost face detector to differentiate between faces and hands, as both involve similar skin colour.

We can also extract the necessary image to be trained by applying a filter called Gaussian Blur (also known as Gaussian smoothing). The filter can be easily applied using Open Computer Vision (OpenCV); see the short sketch below.

For extracting the necessary image to be trained, we can also use instrumented gloves. This helps reduce computation time for pre-processing and gives us more concise and accurate data compared to applying filters on data received from video extraction.
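As a concrete illustration of the Gaussian smoothing and thresholding described above, here is a minimal OpenCV sketch; the file names and kernel size are our assumptions, not values from the paper:

    import cv2

    # Read a captured hand image (hypothetical file name).
    image = cv2.imread("hand.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Gaussian Blur (Gaussian smoothing) to suppress noise before thresholding.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Binary threshold; Otsu's method picks the threshold value automatically.
    _, thresh = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    cv2.imwrite("hand_threshold.jpg", thresh)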

We tried doing the hand segmentation of an image using colour segmentation techniques, but skin colour and tone are highly dependent on the lighting conditions, due to which the outputs we got for the segmentation were not so great. Moreover, we have a huge number of symbols to be trained for our project, many of which look similar to each other, like the gesture for the symbol 'V' and the digit '2'. Hence we decided that, in order to produce better accuracies for our large number of symbols, rather than segmenting the hand out of a random background, we keep the background of the hand a stable single colour so that we do not need to segment it on the basis of skin colour. This would help us to get better results.

 Gesture Classification:
Hidden Markov Models (HMM) have been used for the classification of gestures. This model deals with the dynamic aspects of gestures. Gestures are extracted from a sequence of video images by tracking the skin-colour blobs corresponding to the hand in a body-face space centred on the face of the user. The goal is to recognize two classes of gestures: deictic and symbolic. The image is filtered using a fast look-up indexing table. After filtering, skin-colour pixels are gathered into blobs. Blobs are statistical objects based on the location (x, y) and the colorimetry (Y, U, V) of the skin-colour pixels, used to determine homogeneous areas.

A Naïve Bayes classifier has also been used, which is an effective and fast method for static hand gesture recognition. It is based on classifying the different gestures according to geometric invariants obtained from image data after segmentation. Thus, unlike many other recognition methods, this method is not dependent on skin colour. The gestures are extracted from each frame of the video, with a static background. The first step is to segment and label the objects of interest and to extract geometric invariants from them. The next step is the classification of gestures using a K-nearest-neighbour algorithm aided with a distance weighting algorithm (KNNDW) to provide suitable data for a locally weighted Naïve Bayes classifier (a simplified sketch is given below).

According to the paper "Human Hand Gesture Recognition Using a Convolution Neural Network" by Hsien-I Lin, Ming-Hsiang Hsu, and Wei-Kai Chen (Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan), a skin model is constructed to extract the hands out of an image, and a binary threshold is then applied to the whole image. After obtaining the threshold image, they calibrate it about the principal axis in order to centre the image about the axis. This image is input to a convolutional neural network model in order to train it and predict the outputs. They trained their model over 7 hand gestures and, using this model, produced an accuracy of around 95% for those 7 gestures.
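As a simplified stand-in for the KNNDW stage above, the following sketch uses scikit-learn's distance-weighted KNN on geometric-invariant feature vectors; the feature and label files are hypothetical placeholders, and the locally weighted Naïve Bayes step is omitted for brevity:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # X: geometric-invariant feature vectors extracted per frame (assumed
    # precomputed as described above); y: the corresponding gesture labels.
    X = np.load("gesture_features.npy")   # hypothetical file
    y = np.load("gesture_labels.npy")     # hypothetical file

    # Distance-weighted KNN approximates the KNNDW stage that feeds the
    # locally weighted Naive Bayes classifier in the cited method.
    knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
    knn.fit(X, y)
    print(knn.predict(X[:1]))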
IV. KEYWORDS AND DEFINITIONS

 Feature Extraction and Representation:
An image is represented as a 3D matrix whose dimensions are the height and width of the image and whose depth is the number of values per pixel (1 in the case of grayscale and 3 in the case of RGB).

These pixel values are then used for extracting useful features using a CNN.
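A small illustration of this representation (the array sizes are arbitrary):

    import numpy as np

    gray = np.zeros((64, 64), dtype=np.uint8)     # grayscale image: depth 1
    rgb = np.zeros((64, 64, 3), dtype=np.uint8)   # RGB image: depth 3
    print(gray.shape, rgb.shape)                  # (64, 64) (64, 64, 3)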

 Artificial Neural Network (ANN):
An Artificial Neural Network is a connection of neurons, replicating the structure of the human brain. Each neuron connection transfers information to another neuron.

Inputs are fed into the first layer of neurons, which processes them and transfers the output to further layers of neurons called hidden layers. After processing through multiple hidden layers, the information is passed to the final output layer.
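A minimal Keras sketch of such a network; the layer sizes, input dimension, and 26-class output are our illustrative assumptions, not the paper's exact configuration:

    import tensorflow as tf

    # One input layer, two hidden layers, and one output layer, mirroring
    # the layer-by-layer flow of information described above.
    ann = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
        tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
        tf.keras.layers.Dense(26, activation="softmax"),  # output layer
    ])
    ann.compile(optimizer="adam", loss="sparse_categorical_crossentropy")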

 Convolutional Neural Network (CNN):
Unlike regular neural networks, in the layers of a CNN the neurons are arranged in 3 dimensions: width, height, and depth. The neurons in a layer are connected only to a small region (the window size) of the layer before it, instead of to all neurons in a fully-connected manner.

Moreover, the final output layer has dimension equal to the number of classes, because by the end of the CNN architecture we reduce the full image into a single vector of class scores.
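A minimal Keras sketch of this arrangement; the input size, filter counts, and 26-class output are illustrative assumptions:

    import tensorflow as tf

    # Neurons are arranged in (height, width, depth) volumes; each Conv2D
    # neuron sees only a small window of the previous layer, and the final
    # Dense layer reduces the image to a vector of class scores.
    num_classes = 26  # assumed: one class per ASL alphabet letter
    cnn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])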

 TensorFlow
TensorFlow is an end-to-end open-source platform
for Machine Learning. It has a comprehensive,
flexible ecosystem of tools, libraries and
community resources that lets researchers push the
state-of-the-art in Machine Learning and developers
easily build and deploy Machine Learning powered
applications.

TensorFlow offers multiple levels of abstraction so


you can choose the right one for your needs. Build
and train models by using the high-level Keras API,
which makes getting started with TensorFlow and
machine learning easy.

If you need more flexibility, eager execution allows


for immediate iteration and intuitive debugging. For
large ML training tasks, use the Distribution Strategy
API for distributed training on different hardware
configurations without changing the model
definition.
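A tiny illustration of eager execution, where operations run immediately and can be inspected (the values are arbitrary):

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(tf.reduce_sum(x))  # tf.Tensor(10.0, shape=(), dtype=float32)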
 Keras:
Keras is a high-level neural networks library written in Python that works as a wrapper for TensorFlow. It is used in cases where we want to quickly build and test a neural network with minimal lines of code.

It contains implementations of commonly used


neural network elements like layers, objective,
activation functions, optimizers, and tools to make
working with images and text data easier.

 OpenCV:
OpenCV (Open Source Computer Vision) is an open-source library of programming functions used for real-time computer vision.

It is mainly used for image processing, video capture, and analysis for features like face and object recognition. It is written in C++, which is its primary interface; however, bindings are available for Python, Java, and MATLAB/Octave.
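A minimal webcam capture loop of the kind used for vision-based input; this is our sketch, not the project's exact code:

    import cv2

    # Capture frames from the default webcam, the input device used in the
    # vision-based approach described earlier, and show them in grayscale.
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow("frame", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()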

V. CONCLUSION

In this report, a functional real-time vision-based American Sign Language recognition system for D&M people has been developed for the ASL alphabet. We achieved a final accuracy of 98.0% on our data set. We improved our prediction after implementing two layers of algorithms, in which we verify and predict symbols that are more similar to each other. This gives us the ability to detect almost all the symbols, provided they are shown properly, there is no noise in the background, and the lighting is adequate.
VI. REFERENCES

[1] Sign Language Translator for Deaf and Dumb Using Machine Learning, ISSN: 0970-2555, Volume 52, Issue 6, June 2023.
[2] American Sign Language Recognition and its Conversion from Text to Speech, Volume 11, Issue IX, Sep 2023.
[3] Sign Language Detection and Conversion to Text and Speech Conversion, Volume 07, Issue 10, October 2023.
[4] A Machine Learning Framework and Method to Translate Speech to Real-Time Sign Language for AR Glasses, Vol. 03, Issue 10, October 2023.
[5] Sign Language to Speech Conversion, Volume 11, Issue X, Oct 2023.
[6] Sign Language to Text Conversion in Real Time using Transfer Learning, December 2022.
[7] Sign Language to Text and Speech Conversion Using CNN, Volume 03, Issue 05, May 2021.
[8] Sign Language Fingerspelling Recognition Using Depth Information and Deep Belief Networks, Proceedings of the International Conference on Industrial Engineering and Operations Management, Istanbul, Turkey, March 7-10, 2022.
[9] Sign Language to Text-Speech Translator Using Machine Learning, Volume 09, No. 7, July 2021.
[10] A. Haldera, A. Tayade. Real-time Vernacular Sign Language Recognition using MediaPipe and Machine Learning, Vol. 2, Issue 5, pp. 9-17, 2021.
[11] A. Muppidi, A. Thodupunoori, Lalitha. Real Time Sign Language Detection for the Deaf and Dumb, Volume 11, pp. 153-157, August 06, 2022.
[12] Ahire, Prashant G., et al. "Two Way Communicator between Deaf and Dumb People and Normal People." Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on. IEEE, 2015, pp. 641-644.
[13] D. Aggarwal (2018). Sentiment Analysis: An Insight into Techniques, Application and Challenges. International Journal of Computer Sciences and Engineering, 6(5), 697-703. DOI: 10.26438/ijcse/v6i5.697703.
[14] D. Aggarwal, V. Bali, A. Agarwal, K. Poswal, M. Gupta, A. Gupta (2021). Sentiment Analysis of Tweets Using Supervised Machine Learning Techniques Based on Term Frequency. Journal of Information Technology Management, 13(1), 119-141.
[15] D. Aggarwal, K. Banerjee, R. Jain, S. Agrawal, S. Mittal and V. Bhatt, "An Insight into Android Applications for Safety of Women: Techniques and Applications," 2022 IEEE Delhi Section Conference (DELCON), 2022, pp. 1-6.
[16] Sign Language Recognition and Response via Virtual Reality, Volume 5, Issue 2, March-April 2023.
[17] Furkan, Ms. N. Sengar. Real-Time Sign Language Recognition System For Deaf And Dumb People, Volume 9, June 2021, pp. 390-394.
[18] KoSign Sign Language Translation Project: Introducing The NIASL2021 Dataset, Language Resources and Evaluation Conference (LREC 2022), Marseille, 20-25 June 2022.
[19] Sign language recognition system for communicating to people with disabilities, Volume 216, 2023.
[20] J. Kaur, C.R. Krishna. An Efficient Indian Sign Language Recognition System using Sift Descriptor, Volume 8, Issue 6, pp. 1456-1461, August 2019.
[21] J. Kim and P. O'Neill-Brown. Improving American Sign Language Recognition with Synthetic Data, Volume 1, pp. 151-161, August 2019.
[22] K.Y. Lum, Y.H. Goh, Y.B. Lee. American Sign Language Recognition Based on MobileNetV2, 2020, Vol. 5, No. 6, pp. 481-488.
[23] Kumari, Sonal, and S.K. Mitra. "Human action recognition using DFT." Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2011 Third National Conference on. IEEE, pp. 239-242, October 15, 2022.
[24] L.K.S. Tolentino and R.O.S. Juan. Static Sign Language Recognition Using Deep Learning, pp. 821-827, December 2019.
[25] Li, Dongxu and Rodriguez, Cristian and Yu, Xin and Li, Hongdong. Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison, 2020, pp. 1459-1469.
[26] Machine translation from text to sign language: a systematic review, 03 July 2021.
[27] M. Ali, A.H., Abbas, H.H., & Shahadi, H.I. (2022). Real-time sign language recognition system. International Journal of Health Sciences, 6(S4), pp. 10384-10407, 27 July 2022.
[28] Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive Summarization as Text Matching. arXiv preprint arXiv:2004.08795 (2020).
[29] N. Inamdar, Z. Inamdar. A Survey Paper on Sign Language Recognition, Vol 1, Issue 4, pp. 1696-1699, April 2022.
[30] R. Nagar, D. Aggarwal, U.R. Saxena and V. Bali, "Early Prediction and Diagnosis for Cancer Based on Clinical and Non-Clinical Parameters: A Review", International Journal of Grid and Distributed Computing, vol. 13, no. 1, (2020), pp. 548-557.
[31] R. Patil, V. Patil and A. Bahuguna. Indian Sign Language Recognition using Convolutional Neural Network, 2021. doi.org/10.1051/itmconf/20214003004, pp. 1-5.
[32] R.A. Kadhim, M. Khamees. A Real-Time American Sign Language Recognition System using Convolutional Neural Network for Real Datasets, Volume 9, Issue 3, pp. 937-943, ISSN 2217-8309, DOI: 10.18421/TEM93-14, August 2020.
[33] R. Nagar, D. Aggarwal, Urvashi Rahul Saxena, V. Bali. (2020). Cancer Prediction Using Machine Learning Techniques Based on Clinical & Non-Clinical Parameters. International Journal of Advanced Science and Technology, 29(04), 8281-8293.
[34] Yang Liu and Mirella Lapata. 2019. Text Summarization with Pre-Trained Encoders. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3721-3731.
[35] Zhu, C., et al. (2021). Recent advances in text-to-speech synthesis: From concatenative to parametric approaches. IEEE Signal Processing Magazine, 38(3), 51-66.
[36] Kim, J., Kong, J. & Son, J., "Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech," in International Conference on Machine Learning. PMLR, 2021.
[37] Hayashi, T., Inaguma, H., Ozaki, H., Yamamoto, R., Takeda, K., & Aizawa, A. (2021). ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit. Proceedings of the 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2021).
[38] S.S. Kumar and A. Asha. A Review on Indian Sign Language Recognition, pp. 3147-3159, IJSRR, 8(2), June 2019.
[39] Donahue, J., et al. End-to-end adversarial text-to-speech. arXiv preprint arXiv:2006.03575, 2020.
[40] Biswas, N., Uddin, K.M., Rikta, S.T., Dey, S.K. A comparative analysis of machine learning classifiers for stroke prediction: A predictive analytics approach. Healthcare Analytics. 2022 Nov 1;2:100116.
[41] Tyagi, S., Bonafonte, A., Lorenzo-Trueba, J. and Latorre, J., 2021. Proteno: Text normalization with limited data for fast deployment in text to speech systems. arXiv preprint arXiv:2104.07777.
[42] Ro, J.H., Stahlberg, F., Wu, K. and Kumar, S., 2022. Transformer-based Models of Text Normalization for Speech Applications. arXiv preprint arXiv:2202.00153.
