
Identification of Sign Language Recognition using Machine Learning
Content

• Introduction

• Literature Review

• Objectives

• Methodology

• Results

• Working Model

• Conclusion

• References
Introduction

• Sign language is a manual method of communication used by people who are deaf or mute.
• Hand gestures are one of the main methods used in this language for non-verbal communication.
• In this project, we will be using the ISL (Indian Sign Language) dataset from Kaggle as well as ISLTranslate, a dataset of frames and videos covering sign language, visual language, fingerspelling, and facial expressions in Indian Sign Language.
• The model will use a Deep Learning architecture that is efficient at image recognition: the Convolutional Neural Network (CNN).
• We will train the model on the acquired dataset to recognize hand gestures and hand movements.
• Once the model can successfully classify and recognize images in real time, it will generate English text corresponding to the signs, making communication with deaf and mute people easier.
Motivation

• Sign language is a manual type of communication commonly used by deaf and mute people.
• Our goal is to improve communication between deaf/mute people from different areas and those who cannot understand sign language.
Technical Concepts (Algorithms) Used

• Using a CNN (Convolutional Neural Network) model for the classification of Indian Sign Language involves several technical concepts and algorithms. CNN is a deep learning architecture known for its ability to process grid-like data such as images and videos effectively. When applying a CNN architecture to Indian Sign Language classification, we encountered the following technical concepts and algorithms (a model sketch follows this list):

• CNN Architecture: Convolutional Neural Network (CNN) architecture is designed for processing and analysing computer vision tasks. Its stacked layers make it possible to train very deep neural networks effectively on image and video datasets.

• Convolutional Layers: These layers automatically extract features from hand gesture images. They are crucial for recognizing patterns in images of the hand, helping the model identify sign language gestures and classify them correctly.

• Pooling Layers: These layers downsample the input by reducing its spatial dimensions. This lowers the computational cost and focuses the network on the most salient features while retaining the essential information.

• Fully Connected Layers: These layers connect every neuron in one layer to every neuron in the next, enabling the network to classify and predict based on the learned weights and features.

• Evaluation Metrics: Metrics such as accuracy, precision, recall, F1-score, and AUC-ROC are used to assess the model's performance, ensuring its ability to correctly identify hand gestures.
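
As a rough illustration of how these layers fit together, below is a minimal Keras sketch of such a CNN classifier. The input size (64x64 RGB) and the class count (35) are illustrative assumptions, not values taken from this project.

# Minimal CNN sketch for sign-gesture classification (illustrative only).
# Assumptions: 64x64 RGB inputs and 35 output classes; the project's
# actual input size and number of gesture classes may differ.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 3), num_classes=35):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional layers: extract local hand-shape features.
        layers.Conv2D(32, (3, 3), activation="relu"),
        # Pooling layers: downsample to cut computation, keeping salient features.
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Fully connected layers: combine learned features for classification.
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Accuracy is tracked during training; precision, recall, F1-score and
    # AUC-ROC can be computed on held-out data after training.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model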
Problem Statement

• Sign language is a manual type of communication commonly used by deaf and mute people. It is not a universal language, so deaf/mute people from different regions use different sign languages. This project therefore aims to improve communication between deaf/mute people from different areas and those who cannot understand sign language. We use deep learning methods that can improve the classification accuracy of sign language gestures.
Areas of Application

• Human-Computer Interaction (HCI): By enabling people to interact with computers and other electronic devices using natural hand gestures, hand gesture recognition systems transform conventional input techniques and improve user experiences.

• Assistive Technology: Hand gesture recognition systems can help people with mobility limitations use devices, navigate interfaces, and communicate more successfully.

• Sign Language Recognition: Hand gesture recognition systems are essential for interpreting sign language gestures, enabling smooth communication with deaf and hard-of-hearing people.

• VR and Gaming: Hand gesture recognition technologies allow users to interact with virtual worlds and play games in a natural way, controlling avatars or characters with their hands.

• Home Automation: Hand gesture recognition technologies can be incorporated into smart home systems, giving consumers the convenience and efficiency of operating lights, appliances, and other IoT devices with simple hand gestures.

• Medical Applications: Surgeons and other medical professionals use hand gesture recognition systems in operating rooms and diagnostic settings to improve the efficiency and accuracy of medical procedures.
Dataset and Input Format

➔ Input Format:

Hand Gesture Image Data:
The dataset should consist mainly of high-resolution hand gesture images. These images must cover a variety of alphabets and numbers, each labeled with the corresponding hand gesture, averaging about 1,200 images.

Image Labels:
Accurate alphabet and numeric labels are essential for the supervised learning approach to work. A clear label identifies the precise sign that each hand gesture image depicts. A reliable and consistent labeling procedure will be adopted to enable model training.
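
As a sketch of how such a labeled image set could be fed to the model, the snippet below assumes a hypothetical folder-per-label layout (one directory per alphabet or number); the path, image size, and split are illustrative assumptions, not details from this project.

# Illustrative loading of a folder-per-label gesture dataset.
# Assumed (hypothetical) layout: isl_dataset/A/*.jpg, isl_dataset/1/*.jpg, ...
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "isl_dataset",         # hypothetical dataset root directory
    labels="inferred",     # each image's label comes from its folder name
    label_mode="int",      # integer class indices for a sparse loss
    image_size=(64, 64),   # resize to the assumed model input size
    batch_size=32,
    validation_split=0.2,  # hold out 20% of the images for validation
    subset="training",
    seed=42,
)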
LITERATURE SURVEY - 1

Title: Hand Gesture Recognition Based on Computer Vision: A Review of Techniques
Journal: Journal of Imaging

Methodology:
● The study reviews work that investigates hand gestures as a non-verbal communication method in a variety of domains, including medical applications, robot control, human-computer interaction (HCI), home automation, and communication for the deaf and mute.
● It groups the literature according to several approaches, such as computer vision and instrumented sensor technology.
● Additionally, the article classifies hand gestures according to their posture, dynamic/static nature, or hybrid forms.

Research Gap:
● The study largely ignores real-world healthcare applications in favour of computer applications, sign language, and virtual environment interaction.
● The majority of studies place more emphasis on developing algorithms and improving frameworks than on actual implementation in healthcare practice, which indicates a large research gap in this area.
LITERATURE SURVEY - 2

Title: Hand Gesture Recognition: A Literature Review
Journal: International Journal of Artificial Intelligence & Applications

Methodology:
● The literature discusses techniques including orientation histograms for feature representation, fuzzy c-means clustering, neural networks (NN), and Hidden Markov Models (HMM).
● HMM techniques in particular perform well on dynamic gestures, especially in robot control scenarios.
● Neural networks play a crucial role as classifiers for hand shape recognition. In gesture recognition systems, feature extraction methods, such as algorithms for capturing hand shape, are essential.

Research Gap:
● Although their uses have been well documented in the literature, there are still plenty of unanswered questions regarding the real-world application of these technologies, particularly in healthcare settings.
● Moreover, although the article addresses current recognition methods, a thorough assessment and comparison of these systems in practical healthcare settings is lacking.
LITERATURE SURVEY - 3

Title: An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment
Journal: SpringerLink

Methodology:
● The methodology entails a systematic examination of pertinent literature to pinpoint important developments, strategies, and obstacles in hand gesture detection and human-computer interaction.
● An image dataset is then chosen for analysis, after which image enhancement and segmentation procedures are carried out.
● By isolating the main subject from its backdrop, converting colour spaces, and reducing background noise, these methods improve the quality of the raw images (a segmentation sketch follows this list).
● The next phase applies machine learning methods, namely Convolutional Neural Networks (CNNs), to recognize hand gestures and learn their attributes.
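
To make the enhancement-and-segmentation step concrete, here is a minimal OpenCV sketch of one common approach: isolating the hand with an HSV skin-colour mask plus morphological noise reduction. The threshold values are rough assumptions and are not parameters from the surveyed paper.

# Illustrative hand segmentation via HSV skin-colour masking (one common
# technique; not necessarily the exact method used in the surveyed paper).
import cv2
import numpy as np

def segment_hand(bgr_image):
    # Convert colour space: skin tones cluster more tightly in HSV than in BGR.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assumed skin-tone bounds; rough and lighting-dependent.
    lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Reduce background noise with morphological opening and smoothing.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    # Keep only the masked hand region; background pixels become black.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)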
Research Gap:
● The study emphasises the importance of contrasting analytical and discriminatory biases in order to produce a fair and impartial model.
LITERATURE SURVEY - 4

Title: Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network
Journal: Sensors

Methodology:
● The paper highlights the benefits and drawbacks of different sensors for the development of HGR systems through a methodological comparison.
● Hand regions are identified and resized using image enhancement and segmentation algorithms to match the input sizes of pre-trained Convolutional Neural Networks (CNNs).
● Hand region segmentation is achieved using maximum-area-based filtering algorithms and depth thresholding.
● Additionally, the study uses a score-level fusion strategy that combines the normalised score vectors from two fine-tuned CNNs via a sum-rule-based fusion procedure (sketched after this list).
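
The sum-rule fusion step described above is simple enough to sketch directly: normalise each CNN's class-score vector to a common range, add them, and take the arg-max. This is a generic illustration of the idea, not code from the paper.

# Illustrative sum-rule score-level fusion of two CNN score vectors.
import numpy as np

def fuse_scores(scores_a, scores_b):
    # Min-max normalise each model's scores to [0, 1] so they are comparable.
    def normalise(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    # Sum rule: add the normalised vectors, then pick the highest-scoring class.
    fused = normalise(scores_a) + normalise(scores_b)
    return int(np.argmax(fused)), fused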
Research Gap:
● Important concerns such as user comfort, real-time performance, and adaptation to changing contexts are not fully covered.
● Furthermore, the research leaves gaps in understanding the larger ramifications and possible social repercussions of HGR systems because it focuses exclusively on technical aspects.
Objective

Main Objective
❖ To detect and classify the hand gestures used for sign language with high accuracy and precision.

Sub Objective
❖ To use the trained model to detect and classify gestures in real time (a sketch follows below).
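
As a rough sketch of that real-time sub-objective: the loop below grabs webcam frames with OpenCV, preprocesses them to the model's assumed input size, and overlays the predicted sign. The model file name and label list are hypothetical placeholders, not artifacts from this project.

# Illustrative real-time gesture classification loop.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("isl_cnn.h5")  # hypothetical saved model
# Hypothetical label set: digits 1-9 plus letters A-Z (35 classes).
labels = [str(d) for d in range(1, 10)] + [chr(c) for c in range(65, 91)]

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to match the assumed 64x64 training input.
    img = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    pred = labels[int(np.argmax(probs))]
    # Overlay the predicted sign on the live feed.
    cv2.putText(frame, pred, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("ISL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()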
Methodology

Reference Software Model

Steps: -
