
Advancement Of Sign Language Recognition

Through Technology Using Python And OpenCV

Aneesh Pradeep, Mukhammadkhon Asrorov, and Muxlisa Quronboyeva
Software Engineering, New Uzbekistan University, Tashkent, Uzbekistan
[email protected], [email protected], [email protected]
2023 7th International Multi-Topic ICT Conference (IMTIC) | 979-8-3503-3846-1/23/$31.00 ©2023 IEEE | DOI: 10.1109/IMTIC58887.2023.10178445

Abstract—The ability to communicate effectively is critical for every individual in society, but communication can be a challenge for people who are deaf or hard of hearing. Sign language is one of the primary means of communication for this population. However, communicating with people who have not learned sign language can be difficult, leading to misunderstandings and frustration. Sign language recognition technology can help bridge this communication gap. Sign Language Recognition (SLR) handles the recognition of hand gestures and generates text or voice for the corresponding hand gesture. Hand gestures fall into two categories, static and dynamic. Although static hand gesture recognition is simpler than dynamic hand gesture recognition, both are essential for human communities. This paper focuses on smart gloves that can detect sign language and on sign language recognition using Python and OpenCV.

Index Terms—Sign Language Recognition (SLR), static and dynamic, Python and OpenCV, smart gloves.

I. INTRODUCTION

For those who are deaf or hard of hearing, understanding sign language is essential to communication: without relying on spoken language, they can communicate with and comprehend others. The quality of life of those who are deaf or hard of hearing can be improved by using sign language recognition technology in various settings, including schools, hospitals, and the workplace. Sign language recognition can also help reduce language barriers, promote inclusivity, and increase accessibility for everyone, regardless of whether they can hear. Developing and implementing sign language recognition systems is therefore crucial to making the world fairer and more accessible. The success of international initiatives, and the funding they attract, underlines sign language's importance. Today a computer-based solution is essential for the deaf community. Scientists have been working on the problem for some time, and the outcomes are becoming apparent; yet despite the exciting technologies now available for voice recognition, no commercially viable solution for sign recognition is yet on the market. The goals are to make computers understand human language and to design a user-friendly human-computer interface (HCI). This goal can be attained, among other things, by teaching a computer to understand speech, human gestures, and emotional expressions on the face. Gestures are used to convey nonverbal information [1]. Gesture recognition, a feature of human-computer interaction, is popularizing the idea of a human-to-human style of connection and open discourse between the user and the machine.

II. SYSTEM OVERVIEW

The scientific field of gesture analysis can identify hand, arm, head, and even whole-body motions, typically involving a particular posture and movement. A speaker can say more in less time by using hand gestures, and many methods have been developed to apply computer-vision concepts to the real-time processing of gesture outputs. This computer vision study uses the OpenCV framework and the Python programming language, with the primary objective of recognizing movements. Language makes up a significant part of communication, but a person with a speech or hearing disability cannot rely on spoken language; for them, gestures are a vital part of communication [2]. A computer-based technique of this kind makes it easier for ordinary people to understand what a person with a disability is trying to convey [3]. There are further related algorithms and monitoring systems for object recognition; by allowing gesture identification, this work gets beyond the restrictions and limitations of prior systems.

Several effective gesture recognition methods have been created and tested. One related work demonstrates a hand gesture detection system for controlling a robotic arm from video or images. The techniques used there include support vector machines, neural networks, and adaptive boosting, and a convex hull is used to integrate hand motions for better fingertip detection [4]. The accuracy reported in that paper is higher than those of other current systems, and it presents popular, efficient techniques for recording gestures that have become increasingly important in recent years. Using the convex hull method and the YCbCr color space transformation, the study also describes determining skin tone for shape recognition; ten users supplied 330 cases of varied hand movements to obtain precisely formed results.
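To make this pipeline concrete, the following is a minimal Python/OpenCV sketch of the approach described above: threshold skin pixels in the YCbCr color space (YCrCb channel order in OpenCV), then take the convex hull of the largest skin contour as the hand shape. The threshold values are commonly used defaults and are illustrative assumptions, not the values used in [4].

```python
import cv2
import numpy as np

def skin_mask_ycbcr(frame_bgr):
    """Threshold skin-colored pixels in the YCrCb color space."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used skin range (Y ignored, Cr in [133, 173], Cb in [77, 127]);
    # the exact bounds are illustrative assumptions.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before contour extraction.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def hand_convex_hull(mask):
    """Return the convex hull of the largest skin blob (assumed to be the hand)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    return cv2.convexHull(hand)
```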

Fig. 1. How a CNN predicts the detected image [8].

Fig. 2. The sign language alphabet [9].

Deepak K. Ray describes a Python-based hand motion recognition system for Linux [5]. The algorithm used is context-free; it tracks how many activities were completed precisely and how many fingertips were used. According to the National Institute on Deafness and Other Communication Disorders (NIDCD), approximately 1 to 2 out of every 1,000 children in the United States are born with a detectable level of hearing loss in one or both ears [6]. This figure is based on a report published by the Centers for Disease Control and Prevention (CDC) in 2020, which analyzed data from various sources, including the National Health and Nutrition Examination Survey (NHANES) and the Early Hearing Detection and Intervention (EHDI) program [7]. A system is thus required, since deaf and speech-impaired individuals need a suitable channel for communication with everyday people, and not everyone can understand sign language. Therefore, our project aims to convert sign language gestures into easily readable text.

Wearable sensors hold enormous promise for a wide range of fields, including soft robotics, personalized healthcare, environmental monitoring, and human-machine interaction, thanks to their comfort, good compliance, and minimal weight. Typical wearable sensors measure a person's state and sense the environment using capacitive and resistive technology; because they usually require an external power source to produce signal excitation, they are used less commonly. Since their introduction in 2012, triboelectric nanogenerator (TENG)-based wearable sensors, known as power-compatible and self-sustaining alternatives, have been used increasingly for human motion tracking, healthcare monitoring, and human-machine interfaces (HMIs) due to their simplicity of fabrication, the accessibility of a variety of materials, and their quick dynamic response. Wearable HMIs are one such application, and their acceptance as a promising, cutting-edge method of achieving human-machine and even human-human (e.g., signer and non-signer) interaction through efficient human status tracking is rising. Significant research is being done on self-powered triboelectric HMIs using a range of prototypes, including a touchpad, wristband, sock, and glove, in addition to typical HMIs that use resistive and capacitive techniques. The glove has great potential because it can quickly recognize the complex movements of our dexterous hands, which go well beyond simple control.

There have been numerous attempts in recent years to create triboelectric glove HMIs [10], fusing the glove platform's advantages (such as conformability, usability, and affordability) with those of TENG technology (such as the dynamic sensing effect and self-powered capacity). Detailed testing of TENG gloves has, for instance, monitored finger motions using magnitude analysis or pulse counting. However, most of the associated data analytics concentrate on manually extracted features such as amplitude, frequency, and peak number, which limits the systems to a small number of identifiable hand motions and gestures and causes severe feature loss. Discriminating between sophisticated hand gestures is still a challenge. Artificial intelligence (AI) has recently unlocked intelligent data analytics in cross-disciplinary sectors by applying thorough sensory information extraction and autonomous learning. More complex and sophisticated gesture monitoring may be achievable by integrating the TENG glove with AI than with manual feature extraction. A TENG glove with AI capabilities was demonstrated in earlier studies to distinguish 11 complex gestures for virtual reality/augmented reality (VR/AR) commands, which shows the potential of a minimal TENG sensor architecture for accurate and extensive hand gesture recognition using AI.

III. PRACTICALITY OF THE SYSTEM

A. Methods of Hand-Gesture Recognition in Sign Language Recognition

The methodology used in earlier sign language recognition systems was studied in detail. Our review indicates that earlier studies have thoroughly investigated HMM-based techniques, including their modifications. Convolutional Neural Networks (CNNs), one type of deep learning, have gained popularity over the past five years. Systems combining CNN-HMM and deep learning have produced positive outcomes and opened new research directions, although clustering and heavy computational demands continue to obstruct their implementation. Future research should concentrate on creating a more efficient network that embeds the feature learner within the classifier in a multi-layered neural network approach and achieves excellent performance while using minimal CPU resources.
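As an illustration of that direction, the sketch below defines a small network in which the convolutional feature learner and the classifier are embedded in one multi-layered model. It is a minimal sketch using the Keras API, assuming 64x64 grayscale inputs and 26 alphabet classes (cf. Fig. 2); the layer sizes are illustrative assumptions, not an architecture from this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Feature learner (convolutions) and classifier (dense head) in one network.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),  # learned low-level features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # learned higher-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(26, activation="softmax"),   # one output per sign class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```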
patterns and features. For example, neural networks can be
trained to recognize specific hand gestures and movements,
B. Communication between normal and deaf people allowing for more accurate and reliable sign language recog-
nition. These systems can be trained on large datasets of sign
The project’s primary goal is to accurately convert sign language video data, allowing them to learn and adapt to
language to voice or text to improve communication between various sign languages and dialects. In addition to recognizing
people who are non-disabled, deaf or dumb, and regular peo- sign language, these systems can also be used to translate sign
ple. The deaf and the dumb use sign language to communicate, language into spoken or written language. Natural language
which can perplex those unfamiliar. In order to translate processing tools can be used to analyze the recognized sign
motions into speech and text, a device must be created. This language and generate spoken or written language translations.
will be a big step toward making it possible for persons who
• SignAloud: SignAloud is a sign language interpreter sys-
are dumb or deaf to interact with the general populace.
tem that uses machine learning algorithms and computer
vision to recognize and interpret American Sign Lan-
C. Image Processing for Intelligent Sign Language Recogni- guage (ASL) gestures. The system uses a pair of gloves
tion equipped with sensors that capture hand movements and
transmit the data to a computer, which then uses machine
Intelligent sign language recognition systems can be created learning algorithms to translate the signs into spoken
using image processing techniques to accurately translate and words. [12]
understand the hand gestures and motions used in sign lan- • Hand Talk: Hand Talk is a mobile app that uses image
guage. To evaluate video data and extract useful information, processing and machine learning to translate spoken or
these systems often include computer vision algorithms, ma- written Portuguese into Brazilian Sign Language (Libras).
chine learning strategies, and image processing technologies The app uses an animated avatar named Hugo, which
[11]. One of the critical challenges in sign language recog- interprets the text or speech into sign language. The app
nition is detecting and tracking the movements of the hands also includes a feature that allows users to learn sign
and fingers, which can be complex and highly variable. Image language through interactive tutorials [13].
processing techniques can help to address this challenge by • Project Aslan: Project Aslan is a sign language translator
allowing for more accurate and precise detection and tracking system that uses a combination of computer vision,
of hand movements. For example, edge detection algorithms machine learning, and robotics to interpret sign language
can be used to identify the outline of the hand, while feature and produces spoken output. The system consists of a
extraction techniques can be used to identify specific hand robotic hand and a camera, which captures the user’s sign
gestures and movements. For those who are deaf or hard of language gestures and transmits them to a computer for
hearing, image processing techniques provide a potent and interpretation using machine learning algorithms.
adaptable way to create sophisticated sign language recog- • Sign Language Recognition System: This is a system that
nition systems that enhance accessibility and communication. uses image processing and machine learning to recognize
These methods could revolutionize how we communicate and and interpret Indian Sign Language (ISL) gestures [14].
engage with the outside world as they develop and improve. The system uses a webcam to capture the user’s hand
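As a concrete illustration of the two techniques just named, here is a minimal Python/OpenCV sketch: Canny edge detection to recover the hand outline, and Hu moment invariants as simple shape features. The blur kernel and thresholds are illustrative assumptions.

```python
import cv2

def hand_outline(frame_bgr):
    """Recover an edge map outlining the hand."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise first
    return cv2.Canny(blurred, 50, 150)           # illustrative thresholds

def shape_features(binary_mask):
    """Seven Hu moment invariants as a simple hand-shape descriptor."""
    moments = cv2.moments(binary_mask, binaryImage=True)
    return cv2.HuMoments(moments).flatten()
```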
Hidden Markov models (HMMs) are suitable for identifying full ASL signs because signing is inherently time-varying. Because most ASL gestures can be created by combining several of the 36 basic hand shapes, the fundamental hand forms can be extracted and used as the input to the HMM processor after segmenting the continuous signing. The output ASL words can then be determined from the sequence of fundamental hand forms. Using the methods described in this paper, this approach can be developed into a complete sign recognition system.
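The following is a minimal sketch of that HMM idea in Python, assuming the hmmlearn library and hypothetical per-frame hand-shape feature vectors: one Gaussian HMM is trained per word, and recognition picks the word whose model assigns the highest likelihood. The component count and feature representation are assumptions for illustration.

```python
import numpy as np
from hmmlearn import hmm

def train_word_model(sequences):
    """Fit one Gaussian HMM to the training sequences for a single ASL word.

    Each sequence is a (frames, features) array of per-frame hand-shape
    descriptors (a hypothetical representation for illustration).
    """
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def recognize(word_models, sequence):
    """Return the word whose HMM scores the observed sequence highest."""
    return max(word_models, key=lambda w: word_models[w].score(sequence))
```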

D. Sign Language Interpreter using Machine Learning and Image Processing

Machine learning and image processing techniques can be used to develop sign language interpreter systems that accurately recognize and translate sign language into spoken or written language. These systems typically use a combination of computer vision algorithms, machine learning techniques, and natural language processing tools to analyze video data and extract meaningful features. Machine learning techniques can also be used to analyze image data and extract meaningful patterns; for example, neural networks can be trained to recognize specific hand gestures and movements, allowing for more accurate and reliable sign language recognition. These systems can be trained on large datasets of sign language video, allowing them to learn and adapt to various sign languages and dialects. In addition to recognizing sign language, such systems can translate it into spoken or written language: natural language processing tools analyze the recognized signs and generate the corresponding translations.

• SignAloud: SignAloud is a sign language interpreter system that uses machine learning algorithms and computer vision to recognize and interpret American Sign Language (ASL) gestures. The system uses a pair of gloves equipped with sensors that capture hand movements and transmit the data to a computer, which then uses machine learning algorithms to translate the signs into spoken words [12].
• Hand Talk: Hand Talk is a mobile app that uses image processing and machine learning to translate spoken or written Portuguese into Brazilian Sign Language (Libras). The app uses an animated avatar named Hugo, which interprets the text or speech into sign language, and includes a feature that lets users learn sign language through interactive tutorials [13].
• Project Aslan: Project Aslan is a sign language translator system that uses a combination of computer vision, machine learning, and robotics to interpret sign language and produce spoken output. The system consists of a robotic hand and a camera, which captures the user's sign language gestures and transmits them to a computer for interpretation using machine learning algorithms.
• Sign Language Recognition System: This system uses image processing and machine learning to recognize and interpret Indian Sign Language (ISL) gestures [14]. It uses a webcam to capture the user's hand gestures and then applies machine learning algorithms to classify the gestures and convert them into spoken or written text.
• SignSpeak: SignSpeak is a sign language interpreter system that uses a combination of machine learning, computer vision, and natural language processing to interpret American Sign Language (ASL) and translate it into spoken English [?]. The system uses a camera to capture the user's gestures and then applies machine learning algorithms to recognize and interpret the signs. The interpreted text is then converted into spoken English using natural language processing techniques.

MATLAB's digital image processing toolbox provides a set of functions and tools for processing and analyzing digital images [15]. The toolbox includes image enhancement, filtering, segmentation, feature extraction, and pattern recognition functions, and it allows users to perform operations on images such as scaling, rotating, cropping, and flipping. Several image processing algorithms and methods, such as edge detection, thresholding, morphological operations, and Fourier analysis, are also included. With these tools, users can manipulate images and extract valuable information for tasks such as object recognition and image classification. In research, business, and education, the toolbox is frequently used for computer vision, robotics, remote sensing, and medical image analysis [15], offering an extensive and adaptable environment for designing and executing image processing algorithms and applications. A typical recognition pipeline consists of the following steps (a Python equivalent of the last two steps is sketched below):

• Image acquisition: acquiring images of hand gestures using a camera or an existing dataset.
• Pre-processing: eliminating noise and enhancing the image with techniques like smoothing and histogram equalization.
• Feature extraction: extracting features from the hand gestures, such as their size, shape, and texture.
• Classification: using methods like k-nearest neighbors (KNN), support vector machines (SVM), or neural networks to categorize the hand gestures.
• Performance evaluation: analyzing the recognition system's precision and dependability.

These steps can be carried out using the built-in tools and functions of MATLAB, such as the Image Processing Toolbox, Computer Vision System Toolbox, and Neural Network Toolbox.
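In the Python setting of this paper, the classification and performance-evaluation steps might look like the following minimal sketch, assuming scikit-learn and hypothetical feature and label arrays (features.npy, labels.npy) produced by the earlier steps.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical arrays from the acquisition/pre-processing/feature-extraction
# steps: one row of features per gesture image, one integer label each.
X = np.load("features.npy")  # assumed file
y = np.load("labels.npy")    # assumed file

# Hold out a test set so evaluation measures generalization, not memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # classification step
pred = clf.predict(X_test)                     # performance evaluation step
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))
```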
E. Smart Gloves

By offering a more precise and user-friendly way of capturing and interpreting hand movements, smart glove technology has the potential to revolutionize sign language recognition. Unlike conventional video-based systems, which rely on cameras and computer vision algorithms to identify and monitor signs, smart gloves integrate sensors and other hardware directly into the wearable device. This enables more accurate and thorough tracking of hand movements and can give the user haptic feedback. Sign language is a complex and expressive language with a wide range of hand movements and gestures that convey various meanings. Traditional video-based recognition systems can struggle to capture these subtleties, leading to inaccuracies and misunderstandings. By contrast, smart gloves can capture even the most subtle hand movements, providing a more accurate and natural means of communication. The portability and simplicity of use of smart gloves are further benefits: they enable more flexible and practical communication than conventional video-based systems, which require cameras and other hardware to be set up in a specific area. This matters particularly for people who are deaf or hard of hearing and may need to communicate in various contexts and circumstances. Smart glove technology may thus help increase accessibility [16]. By offering a more precise and user-friendly method of sign language detection, smart gloves can help remove communication barriers and promote inclusion in society, with applications in fields such as education and healthcare; for example, they may be used in education to improve communication between teachers and students, or in healthcare to enhance communication between patients and medical staff.

As noted above, typical wearable sensors track a person's state and sense the environment using capacitive and resistive technologies, and their need for an external power source to produce signal excitation limits their widespread adoption. Since their creation in 2012, triboelectric nanogenerator (TENG)-based wearable sensors have been used increasingly for human motion tracking, healthcare monitoring, and human-machine interfaces (HMIs) due to their straightforward construction, wide material selection, and quick dynamic response. Wearable HMIs are among them, and their popularity is growing as a promising and innovative approach to achieving human-machine and even human-human (e.g., signer and non-signer) interaction through effective human status tracking. Significant research is being done on self-powered triboelectric HMIs using a variety of prototypes, including a touchpad, wristband, sock, and glove, in addition to standard HMIs that use resistive and capacitive approaches. The glove is particularly promising because it can easily detect the many degrees of freedom of motion of our dexterous hands, which go beyond simple control.

The creation of triboelectric glove HMIs has been attempted on multiple occasions in recent years, uniting the glove platform's benefits (such as conformability, usability, and affordability) with those of TENG technology (such as the dynamic sensing effect and self-powered capability). For instance, TENG gloves are typically used to demonstrate how pulse counting or magnitude analysis can track finger motions. However, most of the associated data analyses are based on manual extraction of simple features (for example, amplitude, frequency, and peak number), which results in severe feature loss and only a small number of identifiable hand motions and gestures. Complex hand gesture discrimination is still difficult. Artificial intelligence (AI) has lately unlocked intelligent data analytics in cross-disciplinary sectors by applying complete sensory information extraction and autonomous learning. With TENG glove and AI integration, more intricate and sophisticated gesture monitoring may be achieved than is possible with manual feature extraction. Earlier research showed that a TENG glove with AI capabilities could distinguish 11 complicated motions for virtual reality/augmented reality (VR/AR) commands.

It demonstrates the potential of adopting a straightforward TENG sensor architecture for precise and comprehensive hand gesture identification.

IV. GLOVE CONFIGURATION AND SENSOR CHARACTERIZATION

A smart glove is a wearable device that can be used to track and analyze hand movements and gestures. The glove is equipped with sensors that detect and measure hand movements, and with a microcontroller that processes the data and communicates with software running on a computer. Here we provide a general overview of the mechanical and software configuration of a smart glove using Python and OpenCV.

A. Mechanical Configuration

The mechanical configuration of a smart glove usually consists of a fabric glove fitted with sensors and a microcontroller. The main body of the glove is typically made of a stretchable fabric or material that can fit different hand sizes [17]; the material should also be lightweight, breathable, and comfortable. Sensors are a critical component of smart gloves. Different types of sensors can be used depending on the application, but standard choices include flex, accelerometer, and gyroscope sensors. These sensors detect the movements of the fingers and hand and provide data that can be analyzed to understand hand movements and gestures. A microcontroller, a small computer, processes and stores the data from the sensors; it collects sensor data and transmits it to a computer or other device for further processing and analysis. Smart gloves also require power: typically, a small battery powers the sensors and microcontroller, and it should be lightweight while providing enough power to operate the glove for an extended period. Wiring and connectors link the sensors and microcontroller to the power supply; reliable and secure connections are essential to prevent data loss or power failure.

B. Software Configuration

The software configuration of a smart glove typically consists of two main components: data acquisition and processing. For data acquisition, we can use Python to interface with the microcontroller and read the sensor data; for processing, we can use OpenCV to analyze the data and detect hand movements and gestures. A minimal acquisition sketch follows the list below.

• Firmware: Firmware is software embedded in the microcontroller of the smart glove. It is responsible for managing the sensors, collecting data, and transmitting data to a computer or other device.
• Drivers: Drivers are software components that allow the microcontroller to communicate with the computer or other device. They are necessary to establish a connection and transfer data between the smart glove and the computer.
• APIs and SDKs: APIs (Application Programming Interfaces) and SDKs (Software Development Kits) are software components that provide a set of tools and functions for developers to create applications that interact with the smart glove. They enable developers to easily access and manipulate the data collected by the sensors.
• Data processing and analysis software: Data processing and analysis software, such as Python and OpenCV, can be used to analyze the data collected by the sensors. These packages enable developers to apply advanced algorithms to the sensor data to detect hand movements and gestures and extract meaningful information.
• User interface software: User interface software displays the data collected by the sensors and provides a way for the user to interact with the smart glove. This can include graphical user interfaces (GUIs), command-line interfaces, or web-based interfaces.
C. Sensor Characterization

Before using the smart glove, it is essential to characterize the sensors to determine their accuracy and sensitivity. The characterization process involves measuring the output of the sensors in response to known hand movements and gestures; this data can then be used to calibrate the sensors and improve the system's accuracy (a small calibration sketch follows the list below). Sensors in smart gloves are designed to detect and measure the movement and orientation of the hand and fingers. They enable the smart glove to capture data on hand gestures and movements, which can be analyzed to control electronic devices, communicate with computers, or perform other tasks.

• Flex sensors: Flex sensors are flexible resistive sensors whose resistance changes as they are bent. They can be placed on the fingers or the hand to detect finger or hand movement [18].
• Accelerometer sensors: Accelerometers measure the acceleration of the glove in three-dimensional space. They can be used to detect the movement and orientation of the hand and the glove.
• Gyroscope sensors: Gyroscopes measure the rotational rate of the glove in three-dimensional space. They can be used to detect the rotation and orientation of the hand and the glove.
• Pressure sensors: Pressure sensors detect the pressure applied by the fingers or the hand. They can be placed on the fingertips or the palm of the glove to measure the pressure applied during gripping or holding.
• Temperature sensors: Temperature sensors monitor the temperature of the glove and the hand. They can detect changes in temperature that may indicate changes in hand position or movement.
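As referenced above, the sketch below shows one simple way to calibrate a flex sensor from characterization data: fit a linear model mapping raw ADC readings to known bend angles. The readings and angles are hypothetical example values.

```python
import numpy as np

# Hypothetical characterization data: raw ADC readings recorded while the
# finger was held at known bend angles.
angles_deg = np.array([0.0, 30.0, 60.0, 90.0])
adc_raw = np.array([512.0, 448.0, 380.0, 310.0])

# Flex sensors are roughly linear, so fit angle = a * adc + b.
a, b = np.polyfit(adc_raw, angles_deg, deg=1)

def adc_to_angle(adc_value):
    """Convert a raw flex-sensor reading to an estimated bend angle."""
    return a * adc_value + b

print(adc_to_angle(400))  # estimated bend angle for a new reading
```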
D. Code Structure

Building a sign language recognition system with Python and OpenCV involves collecting data, preprocessing the data, training a machine learning model, testing the model, and using the model to recognize sign language in real time.

1) Collect sign language data: Collect a dataset of images or videos of sign language gestures. The dataset should include various hand positions and movements for different signs.
2) Preprocess the data: Preprocessing involves resizing the images, converting them to grayscale, and applying various image filters to enhance the features in the images.
3) Train a machine learning model: Use a machine learning algorithm to train a model on the preprocessed data. This can be done with supervised learning, where the model is trained on labeled data, or unsupervised learning, where the model learns patterns in the data without labels.
4) Test the model: Test the accuracy of the model on a separate test dataset to evaluate its performance.
5) Use the model to recognize sign language: Once trained, the model can recognize sign language gestures in real time. This involves capturing video or images of the hand gestures and applying the model to detect the sign being performed.
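Putting the five steps together, here is a minimal end-to-end sketch in Python/OpenCV. It assumes a hypothetical dataset file (signs_dataset.npz) of flattened 64x64 grayscale images and integer labels produced by steps 1 and 2, and it uses OpenCV's k-nearest-neighbors classifier; any of the classifiers discussed earlier could be substituted.

```python
import cv2
import numpy as np

# Hypothetical dataset from steps 1-2: flattened 64x64 grayscale images
# ("images", shape (N, 4096)) and integer class labels ("labels", shape (N,)).
data = np.load("signs_dataset.npz")  # assumed file
samples = data["images"].astype(np.float32)
labels = data["labels"].astype(np.float32).reshape(-1, 1)

# Step 3: train OpenCV's k-nearest-neighbors classifier.
knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

def preprocess(frame):
    """Step 2 at inference time: grayscale, resize, equalize, flatten."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(cv2.resize(gray, (64, 64)))
    return gray.reshape(1, -1).astype(np.float32)

# Step 5: recognize signs from the webcam in real time.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    _, result, _, _ = knn.findNearest(preprocess(frame), k=5)
    cv2.putText(frame, "sign id: %d" % int(result[0, 0]), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("sign language recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```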
CONCLUSION

As a result of substantial technological breakthroughs in sign language recognition, people who use sign language as their primary form of communication now have greater access to services. Machine learning algorithms, image processing methods, and smart gloves have made it possible to recognize sign language motions more precisely and effectively. Python and OpenCV, which give programmers robust tools for building sign language recognition systems, have led these developments: Python is a versatile programming language widely used in machine learning and computer vision applications, while OpenCV provides a comprehensive library of image-processing functions. Combining the two has enabled developers to create robust sign language recognition systems with high accuracy rates, significantly narrowing the communication gap between those who use sign language and those who do not. It will be fascinating to see how this technology develops over time and how it affects those who use sign language daily. Machine learning algorithms such as convolutional neural networks (CNNs) can be trained to recognize sign language gestures from image and video data, and depth cameras such as the Kinect can capture three-dimensional data of hand gestures for analysis with computer vision techniques. However, these technologies may require specialized hardware and software, which can limit their accessibility and ease of use. In comparison, Python and OpenCV offer a flexible and accessible solution for sign language recognition that can be implemented on a wide range of platforms and devices. Python is a highly adaptable language that enables developers to construct systems that accurately identify hand gestures and movements, and OpenCV provides a robust set of image and video processing tools that can be combined in numerous ways to achieve high accuracy. In addition, Python and OpenCV can analyze video and images in real time, allowing rapid and seamless communication with sign language users. Overall, the use of Python and OpenCV in sign language recognition provides a superior solution compared with other existing technologies, enhancing accessibility and inclusiveness for sign language users.

REFERENCES

[1] V. Evola and J. Skubisz, "Coordinated collaboration and nonverbal social interactions: A formal and functional analysis of gaze, gestures, and other body movements in a contemporary dance improvisation performance," Journal of Nonverbal Behavior, vol. 43, no. 4, pp. 451–479, 2019.
[2] N. Dhingra, E. Valli, and A. Kunz, "Recognition and localisation of pointing gestures using an RGB-D camera," in HCI International 2020 Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part I. Springer, 2020, pp. 205–212.
[3] P. Meulenbroek and L. R. Cherney, "Usability and acceptability of a computer-based social communication intervention for persons with traumatic brain injury: A mixed-methods study," in Seminars in Speech and Language, vol. 43, no. 03. Thieme Medical Publishers, Inc., 2022, pp. 218–232.
[4] A. Anitha, S. Vaid, and C. Dixit, "Implementation of touch-less input recognition using convex hull segmentation and bitwise AND approach," in Artificial Intelligence and Sustainable Computing for Smart City: First International Conference, AIS2C2 2021, Greater Noida, India, March 22–23, 2021, Revised Selected Papers 1. Springer, 2021, pp. 149–161.
[5] R. Sudhakar, V. Gayathri, P. Gomathi, S. Renuka, and N. Hemalatha, "Sign language detection," South Asian Journal of Engineering and Technology, vol. 13, no. 1, pp. 49–56, 2023.
[6] J. E. Stewart and J. E. Bentley, "Hearing loss in pediatrics: What the medical home needs to know," Pediatric Clinics, vol. 66, no. 2, pp. 425–436, 2019.
[7] B. Grey, E. K. Deutchki, E. A. Lund, and K. L. Werfel, "Impact of meeting early hearing detection and intervention benchmarks on spoken language," Journal of Early Intervention, vol. 44, no. 3, pp. 235–251, 2022.
[8] S. Narasimhaswamy, Z. Wei, Y. Wang, J. Zhang, and M. Hoai, "Contextual attention for hand detection in the wild," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9567–9576.
[9] R. G. Rajan and M. J. Leo, "American sign language alphabets recognition using hand crafted and deep learning features," in 2020 International Conference on Inventive Computation Technologies (ICICT). IEEE, 2020, pp. 430–434.
[10] Z. Sun, M. Zhu, and C. Lee, "Progress in the triboelectric human-machine interfaces (HMIs): moving from smart gloves to AI/haptic enabled HMI in the 5G/IoT era," Nanoenergy Advances, vol. 1, no. 1, 2021.
[11] S. Pramada, D. Saylee, N. Pranita, N. Samiksha, and M. Vaidya, "Intelligent sign language recognition using image processing," IOSR Journal of Engineering (IOSRJEN), vol. 3, no. 2, pp. 45–51, 2013.
[12] K. Kirkpatrick, "Technology for the deaf," Communications of the ACM, vol. 61, no. 12, pp. 16–18, 2018.
[13] C. Preetham, G. Ramakrishnan, S. Kumar, A. Tamse, and N. Krishnapura, "Hand Talk: implementation of a gesture recognizing glove," in 2013 Texas Instruments India Educators' Conference. IEEE, 2013, pp. 328–331.
[14] J. Bukhari, M. Rehman, S. I. Malik, A. M. Kamboh, and A. Salman, "American sign language translation through sensory glove; SignSpeak," International Journal of u- and e-Service, Science and Technology, vol. 8, no. 1, pp. 131–142, 2015.

[15] A. C. B. Monteiro, Y. Iano, R. P. França, and R. Arthur, “Development of
a laboratory medical algorithm for simultaneous detection and counting
of erythrocytes and leukocytes in digital images of a blood smear,”
in Deep learning techniques for biomedical and health informatics.
Elsevier, 2020, pp. 165–186.
[16] O. Ozioko, W. Taube, M. Hersh, and R. Dahiya, “Smartfingerbraille: A
tactile sensing and actuation based communication glove for deafblind
people,” in 2017 IEEE 26th International Symposium on Industrial
Electronics (ISIE). IEEE, 2017, pp. 2014–2018.
[17] B. B. Kang, H. Choi, H. Lee, and K.-J. Cho, “Exo-glove poly ii: A
polymer-based soft wearable robot for the hand with a tendon-driven
actuation system,” Soft robotics, vol. 6, no. 2, pp. 214–227, 2019.
[18] A. Sreejan and Y. S. Narayan, “A review on applications of flex
sensors,” International Journal of Emerging Technology and Advanced
Engineering, vol. 7, no. 7, pp. 97–100, 2017.
