Deep Learning For Sign Language Recognition
ISSN No:-2456-2165
Abstract:- A sign language is a way of communicating by using the hands instead of spoken words. Sign language is used by deaf and dumb people to communicate with other individuals. People who are speech-impaired, and also some people who have autism spectrum disorder, face problems while communicating with others as they can converse using only sign language. So, it becomes difficult for other individuals to understand this sign language. Each country usually has its own native sign language. The Indian Sign Language recognition application proposed here aims at solving the communication problem between people. The system will capture the different hand gestures through the camera of a mobile phone. The signs used in sign language are identified by the features extracted from the hand gestures. Then, the image is processed using a convolutional neural network algorithm. After all the processing, we finally get the output as text, which can be easily understood by all people. Our purpose in developing this system is to make communication between deaf and dumb people and other people easy and convenient.

Keywords:- Sign Language Recognition Application, Convolutional Neural Network, Deep Learning.

I. INTRODUCTION

The history of sign language in Western societies dates back to the 17th century as a visual language, although references to forms of communication using sign language date back to 5th-century BC Greece. Sign language is made up of a system of general gestures to represent the letters of the alphabet and emotions. Many sign languages are native languages, distinct in structure from the spoken languages used near them, and are mainly used by deaf people to communicate. Hearing is an important sense among the five human senses, and deafness hinders a person from understanding spoken languages. Unlike spoken languages, where grammar is expressed through features such as syntax and punctuation, sign languages use gestures, body movements, and facial expressions to form grammar. Sign language serves a community that uses it to interact with each other; it conveys meaning through the visual modality.

Due to the lack of ease of sign languages, non-signers speak to or pay less attention to deaf and dumb people. Most people know nothing about sign language except those engaged in special education, and most will not take the initiative to spend time learning it. Indian Sign Language (ISL) is one of the popular sign languages. It uses gestures for communication; gestures are various hand or body movements used in the process of communication.

Sign languages use gestures that rely on visually transmitted patterns. ISL presents various hand movements using both the right and left hands. These ISL gestures can be classified into two categories: static and dynamic. In a static gesture there is no movement of the hand; most alphabet gestures in ISL are static signs. Dynamic gestures involve hand movement while the gesture is performed. Hand gesture recognition includes hand detection and recognition of the hand sign during processing. People find it very difficult to understand sign language, hence it becomes important to design a vision-based sign language translator. Many researchers try to combine recent models, including machine learning models and dataset collection, so that the deaf community can communicate more easily using AI in real-world scenarios. There are two major methods for sign language translation: 1) vision based and 2) glove based, which uses sensors and gloves for implementation. Glove-based hand gesture recognition systems were introduced first but, due to the heavy wearable devices and cable connections, they lack naturalness. This research proposes a vision-based sign language system capable of translating ISL to text. It includes basic stages such as object detection, feature extraction and classification. The training model is built to classify hand gestures using machine learning, and the hand gestures are captured dynamically using a camera.

A Convolutional Neural Network (CNN) is a deep learning algorithm applied to image and video recognition, image analysis, classification, etc. We use a Convolutional Neural Network to increase the accuracy of sign classification. Therefore, this system enables communication between deaf people and others without any requirement for an intermediate translator by converting gestures into text, which achieves our objective and reduces the barrier in communication. The aim of this paper is to recognize signs used in Indian Sign Language. It
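To make the camera-to-text pipeline concrete, the sketch below shows the core CNN classification step in NumPy: a convolution layer extracts features from a grayscale frame, which are then flattened and passed to a dense softmax layer that outputs an ISL letter. The layer sizes, class labels, and weights here are illustrative assumptions, not the paper's actual trained model.

```python
# Minimal sketch of the CNN classification step: conv -> ReLU ->
# flatten -> dense -> softmax. Weights are random stand-ins for a
# trained model; the class list is a hypothetical subset of ISL letters.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (feature extraction) with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical static ISL alphabet gestures to classify.
CLASSES = ["A", "B", "C"]

# Randomly initialised weights stand in for a trained model.
kernel = rng.standard_normal((3, 3))            # one 3x3 conv filter
dense_w = rng.standard_normal((len(CLASSES), 26 * 26))

def classify(frame):
    """28x28 grayscale frame -> (predicted letter, class probabilities)."""
    features = relu(conv2d(frame, kernel))      # conv + activation -> 26x26
    logits = dense_w @ features.ravel()         # flatten + dense layer
    probs = softmax(logits)
    return CLASSES[int(np.argmax(probs))], probs

frame = rng.random((28, 28))                    # stand-in for a camera frame
letter, probs = classify(frame)
print(letter, probs.round(3))
```

In the full system described above, the random frame would be replaced by a preprocessed hand region detected in the mobile-phone camera feed, and the weights would come from training on a labelled gesture dataset.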