Silent Expressions
Abstract
Indian Sign Language (ISL) is an essential communication medium for individuals with
hearing and speech impairments. This research introduces an efficient ISL recognition
system that integrates deep learning with real-time hand tracking. The system uses
MediaPipe Hands for landmark detection and a Convolutional Neural Network (CNN) for
classification, and it improves recognition accuracy by incorporating two-hand
detection. In addition, pyttsx3 provides speech synthesis, producing audio output for
each detected gesture. The system is designed to function in diverse environments,
ensuring accessibility. Experimental evaluations demonstrate high accuracy, and the
framework is adaptable to future enhancements such as multi-language recognition and
dynamic gesture interpretation.
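The pipeline described above can be illustrated with a short sketch. The snippet below
is a minimal illustration under stated assumptions, not the paper's exact implementation:
it assumes a standard webcam, configures MediaPipe Hands for up to two hands, flattens
each hand's 21 three-dimensional landmarks into a fixed-length 126-value feature vector
(zero-padded when only one hand is visible), and speaks the predicted label with pyttsx3.
The names model and LABELS are hypothetical stand-ins for the trained CNN and its class
names, which this section does not specify.

import cv2
import mediapipe as mp
import numpy as np
import pyttsx3

mp_hands = mp.solutions.hands
engine = pyttsx3.init()  # default text-to-speech driver for the platform

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        # Flatten up to two hands' 21 (x, y, z) landmarks into one
        # 126-value vector, zero-padded when only one hand is visible.
        features = np.zeros(2 * 21 * 3, dtype=np.float32)
        if results.multi_hand_landmarks:
            for h, hand in enumerate(results.multi_hand_landmarks[:2]):
                for i, lm in enumerate(hand.landmark):
                    base = h * 63 + i * 3
                    features[base:base + 3] = (lm.x, lm.y, lm.z)
            # `model` and `LABELS` are hypothetical stand-ins for the
            # trained CNN and its class names:
            # label = LABELS[np.argmax(model.predict(features[None, ...]))]
            # engine.say(label)
            # engine.runAndWait()  # blocks until speech finishes; kept simple here

        cv2.imshow("ISL recognition", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()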
Introduction
Communication plays a fundamental role in human interaction, and sign language is a vital
tool for individuals with hearing and speech impairments. Indian Sign Language (ISL) is
widely used across India, yet automated tools for its recognition remain limited.
Advances in artificial intelligence and deep learning have made real-time sign
language recognition practical, reducing communication barriers for the hearing- and
speech-impaired community.
Objectives
This project presents a vision-based ISL recognition system that leverages deep learning
for accurate, real-time sign language interpretation. The combination of MediaPipe
Hands for hand tracking and a CNN for classification ensures both efficiency and
robustness. With an accuracy exceeding 90%, the system demonstrates potential for real-
world applications. Future work will focus on enhancing dynamic gesture recognition,
integrating multiple sign languages, and optimizing real-time deployment on mobile and
embedded devices. This work is a step toward bridging the communication gap for the
hearing and speech-impaired community using advanced AI-driven solutions.
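To make the classifier side concrete, the sketch below defines a small CNN over the
landmark features from the earlier snippet. It is a hypothetical configuration using
Keras, not the paper's reported architecture: it assumes the 126 landmark values are
reshaped to (42, 3) so 1-D convolutions run across the 42 landmarks, and NUM_CLASSES is
a placeholder since the gesture vocabulary is not listed here.

import tensorflow as tf

NUM_CLASSES = 35  # hypothetical: e.g. letters plus digits; the class list is not given

# A small 1-D CNN over 42 landmarks (2 hands x 21 points), each with
# (x, y, z) coordinates, matching the feature layout sketched earlier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(42, 3)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

At inference time, the 126-value landmark vector from the earlier sketch would be
reshaped to (1, 42, 3) before calling model.predict. Classifying landmarks rather than
raw images keeps the input compact and cheap to process, which suits the mobile and
embedded deployment targets mentioned above.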