Audio To Sign Language Tool
1D Vignan Sai Ram, 2Tirrthangkar Roy, 3M Shashmith, 4Ch Sai Rishi
Abstract: People who are speech- and hearing-impaired struggle to communicate with others, since not everyone is conversant in sign language. The purpose of this work is to develop a system that aids those with speech and hearing impairments by converting a voice into Indian Sign Language (ISL). Because people may find it difficult to learn sign language, this research offers a method based on speech recognition and image processing. The development of sign languages has made communication simpler, especially for the deaf and hard of hearing.

In this work we propose a real-time system that recognizes voice input through PyAudio, NLP, and the Google Speech Recognition API and converts it into text; the text is then rendered as sign language on the screen of the machine, in the form of a series of images or a motion video, with the help of various Python libraries.

Index Terms: Speech recognition, ISL, Image processing.

I. INTRODUCTION

Sign language (SL) is a natural visual-spatial language that combines facial expressions, hand shapes, and arm orientation and movement with movement of the upper body and its parts to produce utterances in three dimensions rather than one. The language arose among people in India who are hard of hearing, deaf, and mute. There are several groups of deaf people around the world, and as a result their languages vary: just as there are several spoken languages, including Urdu, French, and English, hearing-impaired persons use a variety of sign languages and phrases around the world.

In India, 6.3% of the population, or 63 million people, have substantial hearing loss, according to the 2011 census. Among them, between 76 and 89 percent of the Indian Deaf are illiterate in spoken or written language. The low literacy rate may be caused by the following factors:
- Lack of interpreters for sign language
- Absence of an ISL tool
- Lack of ISL research

Communication is challenging for deaf people in settings like banks, trains, and hospitals because of their disabilities. It is necessary to develop a system that translates text into Indian Sign Language and back, to improve their ability to communicate with the outside world. Such systems will raise the community's standard of living. Although sign languages have received less research attention than spoken languages, there is still much to learn about them.

The audio-to-sign-language translator is a web-based application developed for deaf and hard-of-hearing people. It translates English audio into Indian Sign Language: the system takes simple English sentences as input and generates the corresponding ISL.
II. LITERATURE SURVEY

[1] Youhao Yu discussed the steps required in the voice recognition task as well as the primary approaches employed for it. The significance of speech recognition and its uses in other fields are also discussed. This work is helpful for choosing a voice recognition method when, as in our system, the task involves transforming speech input into text output.

[8] Vaishali Kulkarni and Purva C. Badhe proposed a method for translating Indian sign language to English. It makes use of gesture recognition to translate a gesture into the appropriate text. The design of the model, training, data collection, data preparation, and other crucial steps helpful for creating our desired system were provided in this work.

[3] Madhuri Sharma, Ranjna Pal, and Ashok Kumar Sahoo created a system for automated recognition of sign language using KNN classification techniques and neural networks, beneficial for communication between signing and non-signing people. This method teaches the principles of Indian sign language and aids in understanding the inverse of our desired system.

[11] Taner Arsan and Oğuz Ülgen created a Java-based method to translate speech to sign language and vice versa using the Microsoft Kinect Sensor for the Xbox 360. With the aid of the Java conversion application CMU Sphinx, Google Voice Recognition was utilized to recognize the voice and convert it to sign language. The suggested technique helps in understanding sign language and in using Google Speech Recognition, which is what we want.

[7] An Android app was proposed by M Mahesh, Arvind Jayaprakash, and M Geetha to translate sign language into regular English and make it easier for deaf and mute persons to communicate with others. This system provides excellent insight into image processing and input-to-output format conversion. As potential future scope for our intended product, which is now a desktop application, a mobile version of this work can be developed.

III. PROPOSED MODEL

The existing system is less interactive than the proposed model. We investigate the many strategies and ideas employed in the system design. The first job is to utilize voice recognition to identify the audio input coming from the user. Using several Python modules, the detected audio is examined, transformed into a string, and compared against the dataset we created. The final picture or GIF is then shown on the computer screen using Indian Sign Language.

Helping individuals who are battling hearing loss is our aim. The conversion of sign language from gesture input to text or audio has been done in several sign language initiatives; however, tools for translating audio into sign language have only rarely been developed. Such a tool is advantageous to both hearing people and the hearing impaired. In this project, we provide an audio-to-sign-language translator built on Python. It accepts audio as input, transcribes the recording using the Google API, shows the text on the screen, and then generates the sign code for the input using an ISL (Indian Sign Language) generator. Each word in the sentence is then compared against the entries of a dictionary of accompanying images and GIFs.

Although it is commonly recognized that facial expressions communicate a significant portion of sign language, this experiment did not specifically focus on them. This technique may be used in a variety of situations, such as accessing government websites that provide no video clips for the hearing impaired, or filling out online forms when no interpreter is around.

Procedure:
1. Converting audio to text:
- The Python PyAudio library is used to capture audio input from a microphone and convert it to text.
- Dependency parsers are used to determine the relationships between words and analyze sentence syntax.
2. Converting text to sign language using the Google Speech API:
- NLP is used for text preprocessing.
- Dictionary-based machine translation.
- An ISL generator applies ISL grammatical rules to the supplied sentence.
- Sign language is rendered with the help of a signing avatar.

Filler words, including "is," "are," "was," and "were," among others, scarcely contribute context in natural language processing, so the algorithm eliminates them from the speech or phrase before text-to-sign conversion.

Root words - Words may appear as gerunds, adjectives, or plurals. The suggested technique eliminates these word forms and identifies each word's root. The effective translation of spoken language into sign language benefits from these fundamental terms.

Dataset - To map Indian sign language words to text, or to text identified from voice, the system has a sizable dataset of Indian sign language terms. All Indians who are deaf will thus benefit from it, as it helps them understand the majority of speech.
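The preprocessing and lookup steps above can be sketched in Python. The filler-word list, the suffix-stripping rule, and the file names in the media dictionary below are illustrative assumptions, not the project's actual dataset; in the real pipeline NLTK's stemmer would replace the simplified root-word step.

```python
# Sketch of the text-to-sign pipeline: remove filler words, reduce each
# remaining word to a root form, then map it to an image or GIF.
# All file names here are hypothetical placeholders.

FILLER_WORDS = {"is", "are", "was", "were", "a", "an", "the"}

# Hypothetical media dictionary: known words map to GIFs, and any
# unknown word falls back to letter-by-letter images (fingerspelling).
GIF_DICTIONARY = {"train": "train.gif", "hospital": "hospital.gif"}

def root_word(word: str) -> str:
    """Very simplified stand-in for NLTK stemming: strip common suffixes."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def text_to_media(sentence: str) -> list[str]:
    """Return the ordered media files to display for a sentence."""
    media = []
    for token in sentence.lower().split():
        token = token.strip(".,!?")
        if token in FILLER_WORDS:
            continue  # filler words carry little sign-language context
        root = root_word(token)
        if root in GIF_DICTIONARY:
            media.append(GIF_DICTIONARY[root])
        else:
            # Fingerspell unknown words one character image at a time.
            media.extend(f"{ch}.jpg" for ch in root)
    return media

# "is" and "the" are dropped; "where" is fingerspelled; "train" has a GIF.
print(text_to_media("Where is the train"))
```

The fallback to per-character images mirrors how the dataset pairs every English character with a visual while reserving GIFs for frequent words.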
A. Execution steps

The following steps can be followed to use the website and chatbot:
1. Go to the tool website.
2. If you are already a user, log in to the website; otherwise, go to the register page.
3. To talk, simply click the microphone button. The website will convert your voice to text and display it in the chat box.
4. To display the avatar signing in Indian Sign Language, click the play/pause button.
5. The user receives the required reply.

Figure 2: UI of Sign Language Tool

Figure 2 shows the UI of the Sign Language Tool. The front end is developed with the help of HTML, CSS, and JavaScript; the back end is developed using Python Django, with SQLite as the database.
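Steps 1 and 2 above depend on registration and login backed by the SQLite database. The project handles this through Django's ORM and auth system; the snippet below is only a minimal sketch of the underlying logic using Python's built-in sqlite3 module, with table, column, and user names chosen for illustration.

```python
import hashlib
import sqlite3

# Minimal register/login sketch against SQLite. The real back end uses
# Django's ORM and auth framework; all names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, pw_hash TEXT)")

def hash_password(password: str) -> str:
    # Django uses salted, iterated hashing (PBKDF2); bare SHA-256 is
    # shown only to keep the sketch short.
    return hashlib.sha256(password.encode()).hexdigest()

def register(username: str, password: str) -> bool:
    try:
        conn.execute("INSERT INTO users VALUES (?, ?)",
                     (username, hash_password(password)))
        return True
    except sqlite3.IntegrityError:  # username already taken
        return False

def login(username: str, password: str) -> bool:
    row = conn.execute("SELECT pw_hash FROM users WHERE username = ?",
                       (username,)).fetchone()
    return row is not None and row[0] == hash_password(password)

print(register("asha", "secret"))  # True: new user created
print(login("asha", "secret"))     # True: credentials match
print(login("asha", "wrong"))      # False: wrong password
```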
B. Database

The dataset utilized is crucial for the smooth operation of any system involving machine translation and natural language processing. We used an Indian Sign Language dataset for our project, which includes a visual for every English character as well as a GIF collection for some frequently used words and phrases; this dataset of characters and words is what the system is constructed from.

b) NLTK

The Natural Language Toolkit (NLTK) is a toolkit designed for Python-based NLP operations. Numerous text-processing packages and test datasets are included in NLTK, and it supports a variety of activities, including tokenizing and building parse trees. In the proposed system, the following NLTK methods are applied to the input query:

Tokenizing: tokenization converts a sentence into an individual collection of words, following a structured process.

Stemming: finds the root word of a given word.
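The two NLTK operations named above can be exercised directly, assuming the NLTK package is installed. TreebankWordTokenizer and PorterStemmer are used here because neither requires downloading extra corpora (NLTK's default word_tokenize needs the punkt model); the example sentence is an illustration, not from the project's dataset.

```python
from nltk.stem import PorterStemmer
from nltk.tokenize import TreebankWordTokenizer

# Tokenizing: split a sentence into an ordered collection of words.
tokens = TreebankWordTokenizer().tokenize("The train is arriving")
print(tokens)  # ['The', 'train', 'is', 'arriving']

# Stemming: reduce each token to its root word, as the proposed
# system does before looking words up in the ISL dictionary.
stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])
```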
IV. RESULTS AND DISCUSSION

The primary objective of this project was to develop a system capable of aiding individuals with speech and hearing impairments by converting spoken language into Indian Sign Language (ISL). The system proposed in this work utilizes a combination of speech recognition, natural language processing (NLP), and image processing techniques to achieve this goal. The discussion below covers the results obtained from the implementation and the implications of the proposed system.

However, there are certain limitations and challenges associated with the current implementation. The accuracy of speech recognition and NLP algorithms may vary depending on factors such as accent, background noise, and linguistic complexity. Improvements in these areas could enhance the overall performance and usability of the system.

Furthermore, the availability and quality of the ISL dataset play a crucial role in the accuracy of gesture generation. Continuous updates and expansions of the dataset are necessary to encompass a broader range of vocabulary and linguistic nuances.
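Speech-recognition accuracy of the kind discussed above is commonly quantified as word error rate (WER): the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference transcript, divided by the reference length. The sentences below are made-up examples, not measurements from this system.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via word-level Levenshtein distance (dynamic programming)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dist[i][j] = edits needed to turn hyp[:j] into ref[:i]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        dist[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution/match
    return dist[len(ref)][len(hyp)] / len(ref)

# One substituted word ("ticket" -> "tickets") in a 5-word reference: WER 0.2
print(word_error_rate("i want a train ticket", "i want a train tickets"))
```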
Results: