
Capstone Project

FINAL
Review
Sign Language Recognition using Canny Edge Detection and Deep Learning

Submitted by:
Adarsh Singh 19BCE2284
Deepanshu 19BCE0174
Amit Kumar Singh 19BCE2611
Topics
 AIM & Objective
 Motivation
 Analysis and Literature Survey
 Methodology Adopted
 System Architecture
 Demonstration & Results
 Conclusion
 Future Development
AIM & OBJECTIVE

This project explores the potential of applying machine learning and computer
vision techniques to sign language interpretation. We investigate how Canny edge
detection can be used to detect hand and finger movements, in order to create a model
that accurately interprets sign language. With the prevalence of hearing impairment
projected to rise, this type of technology could be extremely beneficial in helping
people with hearing loss communicate more efficiently.
By harnessing the power of machine learning, we hope to develop an accurate
interpreter model that enables deaf individuals to communicate reliably
with the world around them.
MOTIVATION
 The motivation for this project is to develop a sign language recognition system that improves
the lives of deaf and hard-of-hearing people. Such systems have the potential to make
it easier for deaf and hard-of-hearing people to communicate with others in a variety of settings, such as
the workplace, school, and social situations. There are several benefits to using sign language
recognition systems. First, they can help break down the communication barriers between deaf
and hearing people. Second, they can give deaf and hard-of-hearing people greater independence
and self-reliance. Third, they can improve quality of life for deaf and hard-of-hearing people by
making it easier for them to participate in social and educational activities.
 The project team is motivated to develop a sign language recognition system that makes a positive
impact on the lives of deaf and hard-of-hearing people. We believe this technology has the potential
to make a real difference in the lives of many people, and we are committed to working hard to make it a
reality.
Analysis and Literature Survey
 Real Time Gesture Recognition System for Interaction in Dynamic Environment (Siddharth
S. Rautaray, Anupam Agrawal)
Human-computer interaction techniques have become a bottleneck in the efficient use of available
information flow. The evolution of user interfaces has an impact on changes in human-computer
interaction (HCI). Human hand gestures have long been a popular form of nonverbal communication.
The naturalistic and intuitive nature of hand gestures has been a great motivator for HCI researchers to
put their efforts into researching and developing more promising means of interaction between humans
and computers. In this paper, a system for gestural interaction between a user and a computer in a
dynamic environment is designed. The gesture recognition system employs image processing techniques
to detect, segment, track, and recognise hand gestures in order to convert them into meaningful
commands. The interface proposed here can be applied to a variety of applications such as image
browsers, games, and so on.
 Gesture Recognition Using Deep Learning Techniques (Shubham Shukla)
The paper discusses the process of recognising hand gestures using deep convolutional neural networks
(CNNs). A custom five-layer CNN model is created from scratch and trained on over 2,400 images
of hand gestures to identify the 26 letters of the alphabet and the 10 digits from 0 to 9. The paper also
discusses the ability of neural networks to match patterns in audio signals for speech recognition: an
audio file is generated and recognised using the Google speech-recognition API, and the resulting text is
processed using NLP. As a result, a GIF or the hand gestures corresponding to the audio are displayed on
the screen.
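For illustration, a five-layer CNN of the kind described could be sketched in Keras as below. The layer widths, the 64x64 grayscale input, the optimizer, and the exact layer arrangement are assumptions on our part, not details taken from the paper; only the 36-way output (26 letters + 10 digits) comes from the summary above.

# Minimal Keras sketch of a small CNN classifier for hand-gesture images.
# Architecture details are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(36, activation="softmax"),  # 26 letters + 10 digits
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])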
 Gesture Recognition using Recurrent Neural Networks
This paper presents a gesture recognition method for Japanese Sign Language. The authors first
created a neural-network-based posture recognition system that could recognise a finger alphabet of
42 symbols, and then a gesture recognition system in which each gesture represents a word. Because it
must deal with dynamic processes, gesture recognition is more difficult than posture recognition, so a
recurrent neural network is used. The paper describes a method for recognising continuous gestures
and then discusses the findings of the research.
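A minimal sketch of the recurrent idea follows, with an LSTM standing in for the paper's (unspecified) recurrent network. The sequence length, per-frame feature size, and gesture vocabulary below are assumed purely for illustration.

# Sketch: classify a sequence of per-frame posture feature vectors into one
# gesture (word). All dimensions here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

num_words = 10  # size of the gesture vocabulary (assumed)
model = models.Sequential([
    layers.Input(shape=(30, 42)),  # 30 frames of 42-dim posture features (assumed)
    layers.LSTM(64),               # recurrent layer handles the dynamic process
    layers.Dense(num_words, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])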
 Sign Language Recognition System Using TensorFlow Object Detection API
Because not everyone knows and understands sign language, communication between a hearing person and a
deaf or speech-impaired person can be difficult. To overcome this barrier, a machine learning model can be
trained to recognise different sign language gestures and translate them into English, helping many people
communicate and converse with deaf and speech-impaired people. Existing Indian Sign Language
recognition systems use machine learning algorithms to recognise single- and double-handed gestures, but they
are not real-time. This paper proposes a method for creating an Indian Sign Language dataset using a
webcam and then training a TensorFlow model with transfer learning to create a real-time Sign Language
Recognition system.
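The transfer-learning step might look roughly like the Keras sketch below, which freezes an ImageNet-pretrained backbone and trains only a new classification head on webcam sign images. The MobileNetV2 backbone, input size, and class count are our assumptions; the paper itself uses the TensorFlow Object Detection API, for which this is a simplified stand-in.

# Sketch of transfer learning: reuse pretrained features, train a new head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 signs (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])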
 Machine learning methods for sign language recognition: A critical review and analysis
This paper presents a critical review and analysis of the machine learning methods that have been
applied to sign language recognition, comparing the approaches used in prior work and discussing
their strengths and limitations.
 Research on Gesture Recognition Method Based on Computer Vision
Gesture recognition is an important method of interacting between humans and computers. People
are becoming less satisfied with wearable technology-based gesture recognition and are hoping for
a more natural approach. Human emotions and instructions can be easily and effectively
transmitted to computers using computer vision-based gesture recognition, increasing the
effectiveness of human-computer interaction. The key building blocks for computer vision-based
gesture detection are hidden Markov models, dynamic time warping, and neural network algorithms. The
method's steps are as follows: image collection, hand segmentation, and gesture recognition and
classification.
 Hand Gesture Recognition with Skin Detection and Deep Learning Method
Despite years of research, the problem of gesture detection remains challenging. The problem is
exacerbated by the complex background, camera angles, and lighting conditions. As a result, using
RGB video, this study proposes a quick and reliable method for hand motion recognition. First, we
identify the skin by its colour. The contour is extracted after segmenting the hand region. Finally,
the gesture is recognised. The results of the experiment show that the proposed method
recognises gestures more accurately than existing methods.
 A Dynamic Gesture Trajectory Recognition Based on Key Frame Extraction and HMM
This study introduces a real-time dynamic gesture trajectory recognition approach based on key frame
extraction and HMM, aiming to address the high computational cost, poor real-time performance, and
low recognition rate of existing dynamic gesture recognition methods. Rather than keeping track of every
detail of a single dynamic gesture, key frames are chosen based on degree differences between frames.
Before the dynamic gesture Hidden Markov Model is built, the trajectory data stream is aligned using a
time-warping approach. Finally, the best transition probabilities are used to implement dynamic gesture
recognition. According to the findings of this study, the strategy is highly robust and runs in real time; on
average, dynamic gesture recognition rates range from 0.76% to 87.67%, with a time efficiency of 0.46 seconds.
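The key-frame idea can be sketched as simple frame differencing: keep a frame only when it differs enough from the last kept frame. The grayscale mean-difference measure and the threshold below are our assumptions, not the paper's exact "degree difference" criterion.

# Sketch: select key frames from a gesture video by frame differencing.
import cv2
import numpy as np

def extract_key_frames(video_path, threshold=30.0):
    cap = cv2.VideoCapture(video_path)
    key_frames, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep the frame if it differs enough from the last kept frame.
        if last is None or np.mean(cv2.absdiff(gray, last)) > threshold:
            key_frames.append(frame)
            last = gray
    cap.release()
    return key_frames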
 Static Hand Gesture Recognition using Convolutional Neural Network with Data Augmentation
Computers are present in our daily lives and are used in a variety of industries. Traditional input
devices such as the mouse and keyboard are used to facilitate human-computer interaction, but hand
gestures can help humans and computers communicate more effectively. Gesture orientation and shape differ
from person to person, so this problem is nonlinear. In a recent study, Convolutional Neural
Networks (CNNs) were shown to be superior for image representation and classification. This study therefore
developed a static hand gesture detection approach based on a CNN, which can learn complex, non-linear
correlations between images. Rescaling, zooming, shearing, rotation, and width and height shifting were all
used to augment the dataset.
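The augmentations listed above map directly onto Keras' ImageDataGenerator; the specific ranges below are illustrative guesses rather than the study's settings.

# Sketch: the listed augmentations expressed with Keras' ImageDataGenerator.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,       # rescaling
    zoom_range=0.2,          # zooming
    shear_range=0.2,         # shearing
    rotation_range=15,       # rotation (degrees)
    width_shift_range=0.1,   # width shifting
    height_shift_range=0.1,  # height shifting
)
# Example use: stream augmented batches from a directory of gesture images.
# train_iter = augmenter.flow_from_directory("gestures/train", target_size=(64, 64))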
 Bangla Sign Language (BdSL) Alphabets and Numerals Classification Using a Deep Learning Model
A real-time Bangla Sign Language interpreter can help more than 200,000 hearing- and speech-impaired
people in Bangladesh join the workforce. Bangla Sign Language (BdSL) recognition and detection is a
difficult topic in computer vision and deep learning research because the accuracy of sign language
recognition varies depending on skin tone, hand orientation, and background. Using two well-suited and
robust datasets, this study applied deep learning models for accurate and reliable recognition of BdSL
alphabets and numerals. The study's dataset includes the largest image database for BdSL alphabets and
numerals, built to reduce inter-class similarity while covering diverse image data with different
backgrounds and skin tones. To determine the best working model for BdSL alphabet and numeral
interpretation, the paper compared classification with and without background images. The CNN model
trained with images with backgrounds was found to be more effective than those without. To improve
overall sign recognition accuracy, the hand detection portion of the segmentation approach must become
more accurate. ResNet18 outperformed prior work on BdSL alphabet and numeral recognition, with
99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity. The dataset is made freely
available to researchers in order to support and encourage further research on Bangla Sign Language
interpretation so that hearing- and speech-impaired people can benefit from this research.
Methodology Adopted
 Data collection: Collect a dataset of Indian Sign Language gestures, including a variety of hand shapes,
positions, and movements. This dataset should be representative of the regional variations in Indian
Sign Language.
 Skin masking: Preprocess the images using skin masking to extract the hand and fingers in the sign
language gesture. This step helps to reduce the amount of noise in the image and isolate the most
important features of the gesture.
 Edge Detection: Apply Canny Edge Detection to the skin-masked images to extract the edges of the hand
and fingers in the sign language gesture. This step further reduces noise and highlights the most
important features of the gesture (a minimal sketch of both steps follows this list).
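A minimal sketch of these two preprocessing steps using OpenCV, assuming a single BGR frame as input. The YCrCb skin-colour range and the Canny thresholds below are commonly used heuristics, not values taken from this project.

# Sketch: skin masking followed by Canny edge detection on one frame.
import cv2
import numpy as np

def preprocess(frame):
    # Skin masking: threshold in YCrCb space to isolate the hand region.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb,
                       np.array([0, 135, 85], np.uint8),    # heuristic lower bound
                       np.array([255, 180, 135], np.uint8)) # heuristic upper bound
    hand = cv2.bitwise_and(frame, frame, mask=mask)
    # Canny edge detection on the masked hand region.
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
    return edges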
Methodology Adopted
 Feature extraction: Extract features from the preprocessed images using SURF
(Speeded-Up Robust Features) detection, such as the length and curvature of the fingers, the position of the
hand in the frame, and the movement of the hand over time. These features are
used as input to the machine learning model (see the sketch after this list).
 Model training: Train a deep learning model using the preprocessed images and
extracted features. The model should be trained to recognize the different Indian
Sign Language gestures and translate them into the target language.
 Model evaluation: Evaluate the performance of the model using a validation
dataset. This step will help to identify any issues with the model's accuracy or
generalization.
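A sketch of the SURF keypoint extraction step. SURF is patented and ships only in opencv-contrib builds compiled with the nonfree option, so ORB is shown as a free fallback; the input path "gesture.png" is a hypothetical placeholder for a preprocessed image from the earlier step.

# Sketch: extract SURF keypoints/descriptors from a preprocessed image.
import cv2

img = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)  # hypothetical sample path
try:
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
except (AttributeError, cv2.error):
    detector = cv2.ORB_create(nfeatures=500)  # free drop-in alternative
keypoints, descriptors = detector.detectAndCompute(img, None)
# `descriptors` (one row per keypoint) can be pooled or encoded into a
# fixed-length vector and fed to the model trained in the next step.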
Architecture Diagram
DEMONSTRATION & RESULTS
CONCLUSION
 In conclusion, the sign language translator project successfully recognized and
translated sign language gestures into text with high accuracy levels. The project
utilized computer vision techniques and machine learning algorithms to achieve
this. The results and observations of the project provide valuable insights into the
development of real-time sign language translation systems.
Future Development
THANK YOU
