SignLang

The document presents a project aimed at developing a Sign Language Recognition (SLR) system using Python to facilitate communication for individuals with hearing or speech impairments. It outlines the project's objectives, methodology, technical details, and future work, emphasizing the use of machine learning techniques for real-time gesture recognition. The system is designed to bridge the communication gap between deaf and hearing communities by converting hand gestures into text or speech.


EGS PILLAY ENGINEERING COLLEGE, NAGAPATTINAM

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

First Review

PERMISO
A COMPREHENSIVE PLATFORM FOR MANAGING ON-DUTY AND LEAVE REQUESTS

Presented by
• RAJ RATHINAM.S (8208E22CSR082)
• SHATHIS KUMAR.S (8208E22CSR096)
• SYED MOHAMED YOUSUF BADURUDEEN.S (8208E22CSR108)
CONTENT:
1. Introduction
2. Literature Review
3. Methodology
4. Technical Details & Concepts
5. Project Progress
6. Future Work
7. Questions
INTRODUCTION:
• Objective:
The objective of this project is to develop a Sign Language Recognition (SLR) system using Python that can recognize and interpret hand gestures as text or speech. The system aims to bridge the communication gap between individuals with hearing or speech impairments and the general public by leveraging computer vision and machine learning techniques.
• Scope:
- Real-time Gesture Recognition: The system will use a webcam to detect and classify hand gestures dynamically.
- Machine Learning Approach: Implementation of a CNN for image feature extraction and an LSTM for recognizing sequential gesture patterns.
- Support for Multiple Sign Languages: Initially trained for American Sign Language (ASL), with the potential to extend to Indian Sign Language (ISL) and others.
• Problem Statement:
- Deaf and mute individuals face challenges in interacting with non-sign-language users.
- A computer-vision-based system can provide a real-time, cost-effective, and accessible solution.
• Significance & Impact:
- Bridges the communication gap between the deaf and hearing communities, promoting inclusivity.
Literature Review:
• Existing Systems & Their Limitations:
- Image Processing Approaches: Early systems used color segmentation and edge detection to identify gestures, but they lacked scalability.
- Machine Learning Methods: Traditional classifiers such as SVM and KNN were used but required manual feature extraction.
• Challenges Identified:
- Variations in lighting, background, and hand position make robust recognition difficult.
Methodology:
• Overall Approach:
- Collect Data: Gather images of different sign language gestures.
- Preprocess Data: Resize, enhance, and prepare images for training.
- Build Model: Use deep learning (CNN + LSTM) to recognize gestures.
- Train & Test: Train the model with labeled data and test its accuracy.
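The preprocessing step above can be sketched in NumPy as follows (the 64x64 target size and the function names here are illustrative assumptions, not the project's actual code):

```python
import numpy as np

def resize_nearest(img, size):
    """Resize a 2-D grayscale image with nearest-neighbour sampling."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]

def preprocess(img, size=64):
    """Resize to a fixed shape and scale pixel values to [0, 1]."""
    small = resize_nearest(img, size).astype(np.float32)
    return small / 255.0

# A dummy 100x120 "webcam frame" stands in for a real capture.
frame = np.random.randint(0, 256, (100, 120), dtype=np.uint8)
x = preprocess(frame)
print(x.shape, x.min() >= 0.0, x.max() <= 1.0)  # → (64, 64) True True
```

In practice the project's stated stack (OpenCV) would handle the resize and capture; the point here is only the fixed shape and [0, 1] scaling that the model training step expects.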
WORKFLOW DIAGRAM
• Tools & Technologies Used:
- Programming Language: Python
- Libraries & Frameworks: OpenCV, TensorFlow/Keras, MediaPipe, NumPy
- Model Architecture: CNN + LSTM for gesture recognition
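A minimal Keras sketch of the CNN + LSTM architecture named above (the layer sizes, 16-frame clip length, and 10-class output are illustrative assumptions, not the project's actual configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10   # assumed number of gestures
FRAMES = 16        # assumed frames per gesture clip

model = tf.keras.Sequential([
    tf.keras.Input(shape=(FRAMES, 64, 64, 1)),
    # CNN applied to every frame independently to extract spatial features
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM consumes the per-frame feature vectors as a sequence
    layers.LSTM(32),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # → (None, 10)
```

`TimeDistributed` is what lets a 2-D CNN run over each frame of a clip before the LSTM models the temporal pattern across frames.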
Technical Details:
• Core Concepts:
- Python Libraries Used: OpenCV for image processing, TensorFlow/Keras for model training, and MediaPipe for hand tracking.
- Gesture Recognition Model: Uses a CNN to extract features and an LSTM to recognize gesture sequences.
- Real-time Processing: The webcam captures gestures, which are analyzed frame-by-frame for instant recognition.
- Text Conversion: Detected gestures are converted into readable text for communication.
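The frame-by-frame recognition and text-conversion steps above can be sketched without the camera loop. Assuming the model emits one class index per frame (the labels and window size here are hypothetical), a sliding majority vote smooths noisy frames before runs of the same gesture are collapsed into text:

```python
from collections import Counter

# Hypothetical gesture labels; a real system maps model class indices to signs.
LABELS = ["A", "B", "C", "HELLO", "THANKS"]

def smooth_predictions(frame_preds, window=5):
    """Majority-vote over a sliding window so one noisy frame
    does not flip the recognised gesture."""
    out = []
    for i in range(len(frame_preds)):
        chunk = frame_preds[max(0, i - window + 1): i + 1]
        out.append(Counter(chunk).most_common(1)[0][0])
    return out

def to_text(frame_preds):
    """Collapse runs of identical smoothed predictions into output tokens."""
    tokens = []
    for p in smooth_predictions(frame_preds):
        if not tokens or tokens[-1] != LABELS[p]:
            tokens.append(LABELS[p])
    return " ".join(tokens)

# 3 = HELLO for several frames with one noisy frame (1 = B), then 4 = THANKS
print(to_text([3, 3, 3, 1, 3, 3, 4, 4, 4, 4]))  # → HELLO THANKS
```

Without the smoothing step, the single misclassified frame would inject a spurious "B" into the output.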
• Technical Challenges:
- Lighting & Background Issues → Used adaptive thresholding.
- Hand Occlusion → Trained the model to detect hand landmarks.
- Slow Processing → Optimized the model and used the GPU.
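The adaptive-thresholding fix can be illustrated with a small pure-NumPy sketch (in practice OpenCV's `cv2.adaptiveThreshold` does this; the block size and offset below are illustrative). Each pixel is compared to the mean of its local neighbourhood rather than to one global cutoff, which tolerates uneven lighting:

```python
import numpy as np

def adaptive_threshold(img, block=11, c=2):
    """Binarise by comparing each pixel to the mean of its local
    block x block neighbourhood (computed via an integral image)."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    # Integral image: any window sum becomes four table lookups.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    y0 = np.arange(h)[:, None]
    x0 = np.arange(w)[None, :]
    win = (ii[y0 + block, x0 + block] - ii[y0, x0 + block]
           - ii[y0 + block, x0] + ii[y0, x0])
    mean = win / (block * block)
    return (img > mean - c).astype(np.uint8) * 255

# A frame with a left-to-right brightness gradient plus a dark "hand" stripe.
img = np.tile(np.linspace(60, 200, 40), (40, 1))
img[15:25, :] -= 50
out = adaptive_threshold(img)
print(out[20, 20], out[5, 20])  # → 0 255  (stripe stays dark, background lit)
```

A single global threshold would misclassify one end of the gradient; the local mean adapts, so the dark stripe separates cleanly everywhere in the frame.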
• Solutions & Innovations:
- Deep Learning for Better Accuracy.
- Real-time Gesture Recognition.
- Future Integration with Speech Output.
ARCHITECTURE DIAGRAM
• Preliminary Results & Screenshots:
- Login Page
- Student Dashboard
- Staff Dashboard
- Sample Letter Format
Future Work:
• Complete the request approval workflow with real-time notifications.
• Enhance security with better authentication.
• Optimize UI/UX for a better user experience.
• Fully deploy and test on Render.


Any Questions?
THANK YOU !
