Sign Language Abstract
MAJOR PROJECT-ABSTRACT
TITLE: ENHANCED COMMUNICATION USING GESTURE RECOGNITION
AND DIRECT TRANSLATION SYSTEM FOR THE DEAF AND MUTE
TEAM MEMBERS: 1. CHIRRA THARUN KUMAR (21671A0417)
2. RAYARAO SRIRAM (21671A0444)
The system employs advanced computer vision techniques, leveraging Convolutional Neural
Networks (CNNs) for image processing and the MediaPipe framework for real-time hand gesture
identification. For speech recognition, we utilize the Vosk toolkit, which transcribes spoken
language into text that is then translated directly into Indian Sign Language (ISL) using
HamNoSys notation, bypassing
the intermediate text translation step. This dual approach ensures comprehensive communication
support, facilitating interaction between deaf and mute individuals and the wider society.
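To illustrate the gesture-recognition path: MediaPipe Hands emits 21 normalized landmarks per detected hand, and a downstream classifier decides what the pose means. The sketch below is a deliberately simple, hypothetical post-processing step; the finger heuristic, function names, and labels are illustrative assumptions, not the project's actual CNN. It shows only the shape of the data MediaPipe hands off.

```python
# Hypothetical post-processing for MediaPipe hand landmarks (a sketch,
# not the project's actual classifier). MediaPipe Hands returns 21
# normalized (x, y) landmarks per hand; the index pairs below follow
# its published landmark numbering. Image y grows downward, so an
# extended finger has its tip above (smaller y than) its PIP joint.

FINGER_TIP_PIP = [(8, 6), (12, 10), (16, 14), (20, 18)]  # index..pinky

def fingers_extended(landmarks):
    """Return one boolean per finger (index..pinky): True if extended."""
    return tuple(landmarks[tip][1] < landmarks[pip][1]
                 for tip, pip in FINGER_TIP_PIP)

def classify(landmarks):
    """Map the finger-extension pattern to a coarse gesture label."""
    pattern = fingers_extended(landmarks)
    if all(pattern):
        return "open_palm"
    if not any(pattern):
        return "fist"
    if pattern == (True, False, False, False):
        return "point"
    return "unknown"

# Synthetic example: 21 points with all fingertips below their PIP
# joints, i.e. a closed fist.
folded = [(0.5, 0.5)] * 21
for tip, pip in FINGER_TIP_PIP:
    folded[tip] = (0.5, 0.8)   # tip lower in the image than the joint
    folded[pip] = (0.5, 0.6)
```

In the full system, the label (or the raw landmark vector) would feed the CNN stage rather than a hand-written rule table; the heuristic above only demonstrates the landmark indexing convention.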
The entire application is implemented in Python and executed on a Raspberry Pi, which is
connected to an external camera for capturing hand gestures. OpenCV libraries are used for video
processing, and optimization techniques are employed to manage CPU, memory, and other
resources efficiently. By integrating these technologies, the system provides real-time, efficient
translation and feedback, enhancing communication and inclusivity in various social settings. This
unified system represents a significant step forward in bridging the communication gap and
promoting the inclusion of deaf and mute individuals in everyday interactions.
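On a Raspberry Pi, one common way to keep CPU and memory use bounded is to run the expensive recognition step on only every n-th captured frame, downscaling before inference. The sketch below is illustrative only (the class name and interval are assumptions, not the project's code); the commented lines show where OpenCV's `VideoCapture` would plug in.

```python
class FrameThrottler:
    """Gate expensive processing to every n-th frame.

    Illustrative sketch of one resource-management technique for a
    Raspberry Pi capture loop; the interval would be tuned empirically
    against the Pi's CPU headroom.
    """

    def __init__(self, interval=3):
        self.interval = interval
        self.count = 0

    def should_process(self):
        """Return True for frames 0, interval, 2*interval, ..."""
        self.count += 1
        return (self.count - 1) % self.interval == 0

# In the real loop this would wrap OpenCV capture, roughly:
#   cap = cv2.VideoCapture(0)              # external USB camera
#   throttle = FrameThrottler(interval=3)
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if ok and throttle.should_process():
#           small = cv2.resize(frame, (320, 240))  # cut inference cost
#           run_gesture_model(small)               # CNN / MediaPipe step
```

Skipping two of every three frames roughly triples the per-frame time budget while keeping gesture feedback responsive enough for conversational use.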