
SILENT EXPRESSIONS: TWO-HANDED INDIAN SIGN LANGUAGE RECOGNITION USING MEDIAPIPE AND MACHINE LEARNING
Team Members:
Riya Awalkar (210105231004)
Aditi Sah (210105231034)
Renuka Barahate (210105231020)
Yash Kharche ()
Project Guide:
Ms. Ashwini Magar
Problem Statement

Developing an AI-based system to recognize Indian Sign Language using hand tracking and deep learning, enabling seamless communication for the hearing- and speech-impaired.
Abstract

Indian Sign Language (ISL) is an essential communication medium for individuals with
hearing and speech impairments. This research introduces an efficient ISL recognition
system that integrates deep learning with real-time hand tracking. Utilizing MediaPipe
Hands for landmark detection and a Convolutional Neural Network (CNN) for
classification, the model enhances recognition accuracy by incorporating two-hand
detection. Additionally, pyttsx3 is used for speech synthesis, providing audio output for
detected gestures. The system is designed to function in diverse environments, ensuring
accessibility. Experimental evaluations demonstrate recognition accuracy above 90%, and the framework is
adaptable for future enhancements, such as multi-language recognition and dynamic gesture
interpretation.
Introduction

Communication plays a fundamental role in human interaction, and sign language is a vital
tool for individuals with hearing and speech impairments. Indian Sign Language (ISL) is
widely used across India, yet automated tools for its recognition remain limited.
Advancements in artificial intelligence and deep learning have facilitated real-time sign
language recognition, reducing the communication barrier for the deaf and mute
communities.
Objectives

■ Recognize Indian Sign Language using hand tracking and deep learning.
■ Use MediaPipe Hands for accurate gesture tracking.
■ Train a CNN model on skeleton-based hand images.
■ Implement pyttsx3 for audio output.
Technologies Used

■ Programming Language: Python
■ Libraries & Tools:
o MediaPipe (Hand Tracking; see the sketch after this list)
o OpenCV (Image Processing)
o CNN/LSTM (Deep Learning)
o Pyttsx3 (Text-to-Speech)
o Tkinter
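To make the pipeline concrete, below is a minimal sketch of two-hand landmark tracking with MediaPipe Hands and OpenCV. The webcam index, window title, and confidence thresholds are illustrative assumptions, not values taken from the project.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed: default webcam
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.7,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                # Overlay the 21-point hand skeleton on the live frame.
                mp_draw.draw_landmarks(frame, hand_landmarks,
                                       mp_hands.HAND_CONNECTIONS)
        cv2.imshow("ISL Hand Tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()

Setting max_num_hands=2 is what enables the two-handed recognition emphasized in the abstract; many ISL signs require both hands.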
Data Collection

■ Approach: Capturing images of hand signs for alphabets and digits.
■ Dataset: Created using MediaPipe Hand Tracking (a capture sketch follows this list).
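A minimal sketch of how such skeleton images could be collected: landmarks are drawn onto a blank canvas and saved per label. The directory layout, canvas size, and sample count are assumptions for illustration, not the project's actual settings.

import os
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

def skeleton_image(frame, hands, size=128):
    """Render detected hand landmarks onto a blank canvas (the 'skeleton')."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    canvas = np.full((size, size, 3), 255, dtype=np.uint8)  # white background
    for hand_landmarks in results.multi_hand_landmarks:
        # draw_landmarks scales normalized coordinates to the canvas size.
        mp_draw.draw_landmarks(canvas, hand_landmarks,
                               mp_hands.HAND_CONNECTIONS)
    return canvas

label = "A"    # sign currently being collected (hypothetical label)
target = 200   # samples per sign (assumed)
os.makedirs(f"dataset/{label}", exist_ok=True)
cap = cv2.VideoCapture(0)
count = 0
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.7) as hands:
    while count < target:
        ok, frame = cap.read()
        if not ok:
            break
        img = skeleton_image(frame, hands)
        if img is not None:
            cv2.imwrite(f"dataset/{label}/{count:03d}.png", img)
            count += 1
cap.release()

Training on rendered skeletons rather than raw camera frames makes the classifier largely insensitive to background, lighting, and skin tone, which supports the goal of functioning in diverse environments.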
Model Architecture

■ CNN Model for Image-based Classification (sketched after this list):
■ Input: Skeletonized hand images.
■ Layers: Convolutional layers, Pooling layers, Fully Connected layers.
■ Output: Predicted sign.
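A minimal Keras sketch of the kind of CNN this slide describes. The layer sizes, input resolution, and the 36-class output (26 letters plus 10 digits) are assumptions, not the project's exact architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 36  # assumed: A-Z plus 0-9

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),        # skeletonized hand image, grayscale
    layers.Conv2D(32, 3, activation="relu"),  # convolutional feature extraction
    layers.MaxPooling2D(),                    # spatial downsampling
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # fully connected layer
    layers.Dropout(0.3),                      # regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),  # predicted sign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Training then reduces to a standard call such as model.fit(train_images, train_labels, epochs=10, validation_split=0.2), with the dataset produced by the collection step above.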
Implementation Steps

1. Hand Tracking with MediaPipe.
2. Data Collection & Preprocessing.
3. Model Training using CNN.
4. Testing & Evaluation.
5. Integration with Pyttsx3 for Speech Output (see the sketch below).
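For step 5, a minimal sketch of voicing a predicted label with pyttsx3. The label string and speech rate are illustrative; the classifier is assumed to return a text label such as "A".

import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)   # speaking rate in words per minute

def speak(label: str) -> None:
    """Announce the recognized sign through the system's TTS voice."""
    engine.say(label)
    engine.runAndWait()           # blocks until the utterance completes

speak("A")  # e.g., after the CNN predicts the sign for the letter 'A'

Because runAndWait() blocks, a real-time system would typically debounce predictions (speak only when the label changes or stabilizes for a few frames) so the audio does not stutter.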
Conclusion

This project is a vision-based ISL recognition system leveraging deep learning techniques
for accurate and real-time sign language interpretation. The combination of MediaPipe
Hands for hand tracking and a CNN model for classification ensures efficiency and
robustness. With an accuracy exceeding 90%, the system demonstrates potential for real-
world applications. Future work will focus on enhancing dynamic gesture recognition,
integrating multiple sign languages, and optimizing real-time deployment on mobile and
embedded devices. This work serves as a step toward bridging the communication gap for
the hearing and speech-impaired community using advanced AI-driven solutions.
