Visvesvaraya Technological University
A Project Synopsis on
BACHELOR OF ENGINEERING
IN
DIVYA A 4GW18CS023
GRACY S 4GW18CS026
MANAVI M U 4GW18CS044
Under the Guidance of
Dr Raviraj P
Professor
Automated Speech To Indian Sign Language Translation System Using Machine Learning Algorithm
INTRODUCTION
Sign language is the mother tongue of deaf people. It combines hand and arm movements, body posture and facial expressions. Communication via gestures is a visual language used by deaf individuals as their first language, and it also serves as a communication bridge between disabled and non-disabled people. Sign language communication is significant for their social, emotional and linguistic development. There are 135 types of sign languages all over the world, among them American Sign Language (ASL), Indian Sign Language (ISL), British Sign Language (BSL), Australian Sign Language (Auslan) and many more.
The aim of this project is to reduce the communication gap between deaf and mute people and the rest of society by building a program that can take in speech and convert it into Indian Sign Language, and also take in sign language and convert it to speech.
The system increases the ease of communication between differently abled people and people who do not understand sign language, can be used as a tool to teach and learn sign language (as there are not many schools that teach this language), and aims to reduce discrimination against this group of the differently abled population.
LITERATURE SURVEY
This section describes the survey conducted to identify the problems faced by hearing- and speech-impaired persons, followed by a study of the various proposed and available solutions.
[1] As per Amit Kumar Shinde in his study of Marathi sign language to text and vice versa (Apr. 2021), sign language recognition is one of the most important research areas, and signing is the most natural and common way of communication for people with hearing problems. A hand gesture recognition system can help deaf persons communicate with hearing people in the absence of an interpreter. The system works both in offline mode and through a web camera.
[2] Neha Poddar, Shrushti Rao, Shruti Sawant, Vrushali Somavanshi and Prof. Sumita Chandak, in their paper (Aug. 2015), discussed how the prevalence of deafness in India is fairly significant, as it is the second most common cause of disability. A portable interpreting device which converts higher-mathematics sign language into corresponding text and voice can be very useful for deaf people and solve many difficulties.
[3] The glove-based deaf-mute communication interpreter introduced by Anbarasi Rajamohan, Hemavathy R. and Dhanalakshmi (Oct. 2019) is notable research. The glove comprises five flex sensors, tactile sensors and an accelerometer. The controller matches each gesture with pre-stored outputs. The evaluation of the interpreter was carried out for ten letters.
Dept. of CSE, GSSSIETW, Mysore Page 1
[4] As per Neha V. Tavari, A. V. Deorankar and Dr. P. N. Chatur (Oct. 2017), their report discusses that many physically impaired people rely on sign language translators to express their thoughts and stay in touch with the rest of the world. The project introduces an image of the hand captured using a web camera. The acquired image is processed and features are extracted; the features are used as input to a classification algorithm for recognition, and the recognized gesture is used to generate speech or text. In this system, the flex sensor gives an unstable analog output, requires many circuits and is thus very expensive.
[5] K. Gunasekaran et al., in the paper "Sign Language to Speech Translation" (Dec. 2016), modeled a system composed of four modules, viz. a sensing unit, a processing unit, a voice storage unit and a wireless communication unit. The system is implemented by integrating flex sensors and an APR9600 voice module with a PIC16F877A microcontroller. Users wear gloves fitted with flex sensors which respond to hand gestures. A suitable circuit response provides the microcontroller with input, and the microcontroller then plays the recorded voice using the APR9600.
[6] S. Rajaganapathy et al., in the paper "Conversation of Sign Language to Speech with Human Gestures" (Feb. 2018), proposed a system to convert sign language to speech through the understanding of human gestures and motion capture. Microsoft Kinect is used for motion capture. A speech-impaired person performs the gestures and the system in turn converts them to speech and plays it out loud, so that a speech-impaired person can communicate with another person, a crowd or a gathering. The system is planned to bring higher efficiency to its users for improved communication. The system was tested on a sample of 100 spelled words and achieved accuracy as high as 90%.
PROBLEM STATEMENT
Work done so far in this field has focused mostly on ASL or BSL; for ISL, very few systems have been developed. The underlying architecture of most systems is based on one of:
Direct Translation: words from the source language are directly transformed into target-language words, so the output may not be the desired one.
Statistical Machine Translation: requires a large parallel corpus, which is not readily available in the case of sign language.
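The direct-translation strategy above can be illustrated with a toy word-for-word mapping. The mini-dictionary and gloss names below are hypothetical; the sketch only shows why the output may not be the desired one, since word order and grammar are ignored entirely.

```python
# Toy illustration of the direct-translation approach criticised above:
# each source word is replaced by a target sign gloss with no grammatical
# reordering. The mini-dictionary is hypothetical, not a real ISL lexicon.
SIGN_DICT = {"how": "HOW", "are": None, "you": "YOU"}  # None: word has no sign

def direct_translate(sentence):
    """Replace each English word with its sign gloss, in source order."""
    glosses = []
    for word in sentence.lower().split():
        gloss = SIGN_DICT.get(word, word.upper())  # unknown word: fingerspell as-is
        if gloss is not None:
            glosses.append(gloss)
    return glosses

print(direct_translate("How are you"))  # ['HOW', 'YOU']
```

Because the mapping is purely word-by-word, any sentence whose sign-language word order differs from English comes out wrong, which is exactly the limitation noted above.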
HARDWARE REQUIREMENTS:
Device : Laptop/Desktop with microphone
RAM : 4 GB
Processor : Intel Core i3
SOFTWARE REQUIREMENTS:
IDE : PyCharm Community
Language : Python
Tool : Blender (animation)
METHODOLOGY
Phase 1
1. Audio input on an electronic device using the Python PyAudio module.
2. Conversion of audio to text using Google Speech API.
3. Dependency parser for analysing grammatical structure of the sentence and establishing
relationship between words.
4. ISL Generator: generation of the ISL form of the input sentence using ISL grammar rules.
5. Generation of Sign language with signing Avatar or with Images.
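As a rough sketch of steps 3 and 4, the ISL Generator can be thought of as dropping function words that ISL does not sign and reordering the remaining words toward subject-object-verb. The rules below are a deliberately minimal illustration under that assumption, not real ISL grammar, which is far richer.

```python
# A highly simplified, rule-based sketch of the ISL Generator step.
# Two common rules are illustrated: dropping function words that ISL
# does not sign, and moving the verb to the end (ISL tends toward
# subject-object-verb order). Real systems would use a dependency
# parser (step 3) rather than this positional heuristic.

STOP_WORDS = {"a", "an", "the", "is", "am", "are", "was", "were", "to", "of"}

def english_to_isl_gloss(sentence):
    """Convert an English sentence to a crude ISL gloss (list of words)."""
    words = [w.lower().strip(".,!?") for w in sentence.split()]
    content = [w for w in words if w not in STOP_WORDS]
    # naive SOV reordering: assume the first content word is the subject,
    # the second is the verb, and move the verb to the end
    if len(content) >= 3:
        subject, verb, rest = content[0], content[1], content[2:]
        content = [subject] + rest + [verb]
    return content

print(english_to_isl_gloss("I am going to the market"))  # ['i', 'market', 'going']
```

Each gloss in the resulting list would then be looked up in the sign database or rendered by the signing avatar in step 5.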
Phase 2
We will compare the architectures of various self-developed and pre-trained deep neural networks and machine learning algorithms, and their corresponding performances, for the task of hand-gesture-to-audio conversion. We will train the model to classify hand gestures, each of which corresponds to a letter of the Indian Sign Language alphabet.
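The Phase 2 classification idea can be sketched as follows, assuming each hand-gesture image has been flattened into a feature vector labelled with an ISL letter. A real system would train a convolutional neural network; here a toy nearest-neighbour classifier over hypothetical two-dimensional vectors merely illustrates the train/predict loop.

```python
# Minimal sketch of gesture-to-letter classification: find the training
# vector closest to the input sample and return its label. The toy
# vectors stand in for flattened, normalised gesture images.
import numpy as np

def predict_letter(train_x, train_y, sample):
    """Return the label of the training vector nearest to `sample`."""
    dists = np.linalg.norm(train_x - sample, axis=1)  # Euclidean distances
    return train_y[int(np.argmin(dists))]

# hypothetical "gesture" feature vectors and their ISL-letter labels
train_x = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
train_y = ["A", "B", "C"]

print(predict_letter(train_x, train_y, np.array([0.9, 1.1])))  # B
```

Swapping this classifier for a trained deep network changes only the `predict_letter` internals; the surrounding pipeline (capture gesture, classify, emit letter, synthesise audio) stays the same.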
Phase 3
Development of a User interface with the following technologies:
● HTML
● CSS
● JavaScript
EXPECTED OUTCOME
Output for a given English text is produced by generating its equivalent sign language depiction. The output of this system will be a clip of ISL words. The predefined database will contain a video for each separate word, and the output video will be a merged video of such word clips.
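The output stage described above can be sketched as a lookup that maps each ISL word to its pre-recorded clip and collects the ordered list of clips to merge. The file names below are hypothetical, and the letter-by-letter fallback for words missing from the database is an assumption (a common fingerspelling strategy), not something the synopsis specifies; actual merging could then be done with a video-processing library.

```python
# Sketch of the clip-selection step: each known word maps to one clip in
# the predefined database; unknown words fall back to one clip per letter
# (fingerspelling — an assumed strategy). All file names are hypothetical.
CLIP_DB = {"hello": "hello.mp4", "friend": "friend.mp4"}

def clips_for_sentence(words):
    """Return the ordered list of clip files to merge for `words`."""
    clips = []
    for w in words:
        if w in CLIP_DB:
            clips.append(CLIP_DB[w])
        else:
            clips.extend(f"{letter}.mp4" for letter in w)
    return clips

print(clips_for_sentence(["hello", "hi"]))  # ['hello.mp4', 'h.mp4', 'i.mp4']
```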
REAL-TIME APPLICATIONS
1. Increasing the ease of communication between differently abled people and people who do
not understand sign language.
2. Can be used as a tool to teach and learn sign language to a normal person as there are not
many schools that teach this language.
3. Aims to reduce discrimination against this group of the differently abled population.
4. The broad application of this can erase communicational barriers.
REFERENCES
• https://www.geeksforgeeks.org/project-idea-audio-sign-language-translator/
• https://core.ac.uk/download/pdf/25725245.pdf
• https://github.com/mjk188/ASL-Translator
• https://github.com/Tachionstrahl/SignLanguageRecognition
• https://www.researchgate.net/publication/282839736_Sign_Language_Converter