
AMITY SCHOOL OF ENGINEERING AND TECHNOLOGY

PROJECT SYNOPSIS

B.TECH INFORMATION TECHNOLOGY

PROJECT TITLE: Sign Language Recognition using Machine Learning


ACADEMIC SESSION: 2020 - 2021
PROJECT GUIDE: Dr. Chetna Choudhary
PROJECT TEAM: Team of 2

PROGRAMME: B.TECH IT
YEAR/SEMESTER: 7 IT 1

01 A2305318009 Paras Sachdeva

02 A2305318031 Yashasvi Kakar

Abstract/Project Summary: Indian Sign Language (ISL) is the language of the hearing-
impaired community of India; around 10 million people use it to communicate among
themselves. The problem becomes pronounced when hearing-impaired people want to
communicate with hearing people. In the real world, this gap is mostly bridged by a human
interpreter, but it is practically impossible to pair a hearing person fluent in sign language with
a hearing-impaired person at all times. Hence a machine interpreter that can convert sign
language video to speech or text, and vice versa, is the need of the hour. Our project aims to
bridge the gap between speech- and hearing-impaired people and the hearing population. The
basic idea of this project is to build a system through which hearing-impaired people can
communicate effectively with everyone else using their everyday gestures. In this project, we
are going to create a desktop application that uses an image-processing pipeline and a CNN
(Convolutional Neural Network) to identify continuous Indian Sign Language and convert it
into voice or text. The signs are drawn from an Indian Sign Language dataset.
The whole system can be divided into five steps:
● Pre-processing
Pre-processing covers operations on images at the lowest level of abstraction. It is useful in a
wide variety of situations because it suppresses information that is not relevant to the specific
analysis task. Here this is achieved through normalization and noise removal.
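A minimal NumPy sketch of these two operations, assuming grayscale camera frames; a real implementation would likely use the OpenCV equivalents (cv2.normalize, cv2.blur) instead:

```python
import numpy as np

def preprocess(frame):
    """Normalize pixel values to [0, 1], then smooth with a 3x3 mean
    filter to suppress sensor noise. This is a dependency-free sketch
    of what cv2.normalize and cv2.blur would do."""
    img = frame.astype(np.float32) / 255.0           # normalization
    padded = np.pad(img, 1, mode="edge")             # replicate borders
    smooth = np.zeros_like(img)
    for dy in (-1, 0, 1):                            # 3x3 mean filter
        for dx in (-1, 0, 1):
            smooth += padded[1 + dy : 1 + dy + img.shape[0],
                             1 + dx : 1 + dx + img.shape[1]]
    return smooth / 9.0

# A synthetic 4x4 grayscale frame stands in for a camera capture.
frame = np.full((4, 4), 128, dtype=np.uint8)
out = preprocess(frame)
print(out.min(), out.max())   # a uniform frame stays uniform after smoothing
```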
● Segmentation
Image segmentation approaches differ considerably, and the visual relevance of segmentation
errors should be measured rather than simply their number. We carry out an evaluation study
to compare several segmentation approaches, tuning and estimating the parameters of each
method used for comparison.
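As a concrete illustration of one common hand-segmentation approach, the sketch below thresholds skin-like colours directly in RGB. The bounds are illustrative assumptions only; a real system would more likely threshold in the YCrCb or HSV space via OpenCV's cv2.cvtColor and cv2.inRange, with bounds tuned on the dataset:

```python
import numpy as np

def segment_hand(rgb):
    """Rough skin segmentation by thresholding in RGB space.
    The numeric bounds are illustrative, not tuned values."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Skin pixels tend to be bright, red-dominant, and have r clearly above g.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return mask

# Two pixels: a skin-like tone and a dark background tone.
img = np.array([[[200, 140, 120], [30, 30, 30]]], dtype=np.uint8)
mask = segment_hand(img)
print(mask)   # [[ True False]]
```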
● Feature Extraction
Although a large number of feature-extraction approaches have been proposed, a robust sign
recognition system requires features well suited to the task in order to achieve higher
accuracy. We therefore experiment with combinations of features.
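To illustrate what combining features can look like, the sketch below concatenates two simple descriptors (an intensity histogram and mean gradient magnitudes) into one vector. These particular descriptors are assumptions chosen for illustration, not the project's final feature set:

```python
import numpy as np

def combined_features(gray):
    """Concatenate two simple descriptors into one feature vector:
    an 8-bin intensity histogram and the mean absolute gradient
    along each axis. Richer descriptors would be chosen experimentally."""
    hist, _ = np.histogram(gray, bins=8, range=(0, 256), density=True)
    gy, gx = np.gradient(gray.astype(float))   # gradients along rows, cols
    grads = np.array([np.abs(gx).mean(), np.abs(gy).mean()])
    return np.concatenate([hist, grads])

gray = np.arange(64, dtype=np.uint8).reshape(8, 8)   # synthetic image patch
vec = combined_features(gray)
print(vec.shape)   # (10,)
```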
● Classification
In real-life classification problems, a higher overall classification accuracy does not
necessarily mean a better model. The true performance of the model is identified for the
proposed work, for example with per-class metrics.
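The point about overall accuracy can be made concrete with a toy example: on an imbalanced dataset, a model that always predicts the majority class scores high overall accuracy while recognizing nothing else, which per-class (balanced) accuracy exposes:

```python
import numpy as np

# Toy predictions for a 2-class imbalanced problem: 9 samples of class 0,
# 1 sample of class 1. The model predicts class 0 everywhere.
y_true = np.array([0] * 9 + [1])
y_pred = np.zeros(10, dtype=int)

overall = (y_true == y_pred).mean()
per_class = [(y_pred[y_true == c] == c).mean() for c in (0, 1)]
balanced = float(np.mean(per_class))

print(overall)    # 0.9 -- looks good
print(balanced)   # 0.5 -- reveals the model never recognizes class 1
```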
● Recognition
The task of sign pattern recognition is to identify different sections of the image based on
certain properties; we automate this process using advanced machine learning algorithms.
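At the core of the CNN used for recognition is the convolution operation. The pure-NumPy sketch below (CNN libraries actually compute cross-correlation, as here) shows how a hand-crafted vertical-edge kernel responds strongly at intensity boundaries such as a hand's outline; in the real network, such kernels are learned from the dataset rather than written by hand:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the basic operation a CNN layer
    applies to detect local patterns in an image."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 image that is dark on the left and bright on the right.
img = np.zeros((5, 5))
img[:, 2:] = 1.0
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # vertical-edge detector
resp = conv2d(img, edge_kernel)
print(resp.max())   # strongest response sits on the dark-to-bright boundary
```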

Technologies to be used:
1) Python
2) TensorFlow
3) Keras
4) OpenCV
5) PyAudio library

Resource requirements (hardware & software): camera, computer, speakers

Justification of the project:


There are millions of hearing-impaired people around the world, and many of them learn sign
language to communicate with others. However, people who are not hearing-impaired rarely
know sign language and are frequently unable to understand what a signer is saying. The
resulting misunderstandings can cause real difficulties for the hearing-impaired person.
Machine learning and AI can help solve this problem: a trained model recognizes the patterns
and shapes made by the fingers and interprets them as the intended words, so that a person
who does not know sign language can easily understand the signs.
It will also save time for those who want to understand the signer, because there will be no
need to retain a sign language expert to translate.
