Project - 01

The project titled 'US Sign Language Detector' aims to develop an AI-powered model that translates sign language into text, using an LSTM deep learning network together with MediaPipe for keypoint extraction. It addresses communication barriers between sign language users and non-users by recognizing and translating gestures in real time. The model achieved 100% accuracy on test data and is designed to enhance accessibility for the hearing-impaired community.


DCE DARBHANGA

PROJECT - 01

ABOUT

1. PROJECT TITLE
US SIGN LANGUAGE DETECTOR

2. SUB-TITLE
REAL-TIME SIGN LANGUAGE RECOGNITION WITH MEDIAPIPE AND DEEP LEARNING

3. TAGLINE
"BRIDGING COMMUNICATION GAPS THROUGH AI-POWERED SIGN LANGUAGE RECOGNITION."

TEAM MEMBERS
1. SONU KUMAR JHA - 21105111025
2. VIDYAPATI KUMAR - 21105111035
3. CHHAVINATH KR. CHY. - 21105111023
4. SAURAV KUMAR - 21105111038
INTRODUCTION
This project focuses on developing an AI-powered model that
interprets sign language and translates it into text.
Leveraging Recurrent Neural Networks (RNNs) and Long
Short-Term Memory (LSTM) architectures, the model captures
and processes sequential data from sign language gestures.
By analyzing motion patterns, hand positions, and contextual
cues, it ensures accurate recognition and real-time
translation. The system bridges communication gaps for the
hearing-impaired community, enabling seamless interaction
with non-signing individuals. This innovative approach
combines deep learning and natural language processing,
promising accessibility and inclusivity through cutting-edge
technology.
WHAT IS SIGN LANGUAGE?
• A visual language used by the deaf and hard-of-hearing community.
• Importance of accurate and real-time recognition.

PROBLEM STATEMENT:
• Communication barriers between sign language users and non-users.
• Need for an automated system to translate sign language into actionable outputs.

SOLUTION
• A real-time sign language detector using deep learning (LSTM) and computer vision (MediaPipe).
METHODOLOGY

Pipeline Overview:
• Input: Real-time video feed from a webcam.
• Keypoint Extraction: The MediaPipe Holistic model extracts pose, face, and hand landmarks.
• Data Preprocessing: Keypoints are flattened and normalized for LSTM input.
• Model Training: An LSTM neural network is trained on labeled sign language data.
• Output: The predicted sign language gesture is displayed in real time.

SOFTWARE/TOOLS
• Preprocessing and Feature Extraction: OpenCV, MediaPipe
• Deep Learning Architectures: RNN, LSTM
• Data Handling: TensorFlow, NumPy
• Platform for Execution: Jupyter Notebook
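A minimal sketch of the keypoint-extraction and preprocessing steps, assuming the legacy mediapipe.solutions.holistic Python API. The extract_keypoints helper and the 1662-value frame vector (33 pose landmarks x 4 values; 468 face and 2 x 21 hand landmarks x 3 values each) are illustrative assumptions, not taken from the slides. MediaPipe already reports coordinates normalized to the frame dimensions, so the flattened values fall largely in [0, 1]:

import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten one frame's landmarks into a fixed-length feature vector,
    # zero-filling any landmark set MediaPipe did not detect.
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # shape: (1662,) per frame

cap = cv2.VideoCapture(0)  # real-time webcam feed
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV captures BGR frames; MediaPipe expects RGB
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)
        cv2.imshow('US Sign Language Detector', frame)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()

Zero-filling missing landmark sets (e.g., a hand out of frame) keeps every frame at the same length, so stacked frames form the fixed-shape sequences the LSTM consumes.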
FLOWCHART
(Pipeline flowchart image not preserved in this extraction.)
IMPLEMENTATION AND RESULTS
• Data Collection:
⚬ Collected 30 sequences per gesture (e.g.,
"hello," "thanks," "I love you").
⚬ Each sequence consists of 30 frames of
keypoint data.
• Model Architecture (see the Keras sketch after this list):
⚬ 3 LSTM layers with 64, 128, and 64 units
respectively.
⚬ Dense layers for classification.
⚬ Activation: ReLU for hidden layers,
Softmax for output.
• Training:
⚬ 2000 epochs with Adam optimizer.
⚬ Achieved 100% accuracy on test data.
• Real-Time Testing:
⚬ Successfully recognized gestures in real time with a confidence threshold of 0.5.
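A minimal Keras sketch of the architecture and training setup above, assuming TensorFlow/Keras (the slides list TensorFlow). The stacked LSTM sizes (64/128/64), ReLU and softmax activations, Adam optimizer, and 2000 epochs come from the slides; the dense-head sizes, the categorical cross-entropy loss, the 1662-feature frame vector, and the names actions, X, y, and sequence are illustrative assumptions:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

actions = ['hello', 'thanks', 'i love you']  # the three gestures from the slide
SEQ_LEN, N_FEATURES = 30, 1662  # 30 frames per sequence, 1662 keypoint values per frame

model = Sequential([
    # Three stacked LSTM layers (64, 128, 64 units) with ReLU, as on the slide
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    # Dense classification head; the slide gives no sizes, so 64/32 are assumptions
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),  # one probability per gesture
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# X: (n_sequences, 30, 1662) stacked keypoint sequences; y: one-hot labels
# model.fit(X, y, epochs=2000)

# Real-time use: run the last 30 frames of keypoints through the model and
# accept the top class only above the 0.5 confidence threshold from the slide.
# probs = model.predict(np.expand_dims(sequence, axis=0))[0]
# if probs.max() > 0.5:
#     print(actions[int(probs.argmax())])

With only 30 sequences per gesture, perfect test accuracy is plausible but the test set is tiny, which is why the real-time threshold check still matters in practice.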
RESULT
(Result screenshots not preserved in this extraction.)
THANK YOU!
PROJECT GROUP DETAILS

1. SONU KUMAR JHA (21-CS-02) 21105111025

2. VIDYAPATI KUMAR (21-CS-38) 21105111035

3. CHHAVINATH KUMAR CHY. (21-CS-33) 21105111023

4. SAURAV KUMAR (21-CS-40) 21105111038
