
J.B.

INSTITUTE OF ENGINEERING AND TECHNOLOGY


ELECTRONICS & COMMUNICATION ENGINEERING

MAJOR PROJECT-ABSTRACT
TITLE: ENHANCED COMMUNICATION USING GESTURE RECOGNITION
AND DIRECT TRANSLATION SYSTEM FOR THE DEAF AND MUTE
TEAM MEMBERS: 1. CHIRRA THARUN KUMAR (21671A0417)
2. RAYARAO SRIRAM (21671A0444)

UNDER GUIDANCE OF:


TITLE DESCRIPTION:
Sign language and hand gestures have long been fundamental modes of communication for deaf
and mute individuals, serving as crucial tools for inclusivity and interaction. However,
communication barriers persist, as many people outside these communities cannot understand
or use sign language effectively. To address this issue, we propose a unified system that
combines speech-to-sign-language translation with hand gesture recognition and audio
conversion. The project aims to develop an application that translates spoken language into
Indian Sign Language (ISL) using the HamNoSys notation system, while simultaneously
recognizing hand gestures and converting them into text and audio feedback.
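The two directions described above can be sketched as a pair of mapping functions: one from transcribed speech to ISL sign codes, and one from recognized gesture labels to text for audio playback. The dictionary entries and function names below are purely illustrative placeholders, not the project's actual data or API.

```python
# Sketch of the dual pipeline: speech -> ISL sign codes, gesture -> text.
# The toy dictionaries and names below are illustrative assumptions.

# Hypothetical mapping from English words to ISL sign identifiers.
WORD_TO_SIGN = {
    "hello": "ISL_SIGN_HELLO",
    "thank": "ISL_SIGN_THANK",
}

def speech_to_isl(transcript: str) -> list:
    """Map each transcribed word to an ISL sign code, skipping unknown words."""
    return [WORD_TO_SIGN[w] for w in transcript.lower().split()
            if w in WORD_TO_SIGN]

def gesture_to_text(gesture_label: str) -> str:
    """Convert a recognized gesture label into a phrase for audio feedback."""
    phrases = {"wave": "hello", "thumbs_up": "yes"}
    return phrases.get(gesture_label, "")

print(speech_to_isl("Hello friend"))
print(gesture_to_text("thumbs_up"))
```

In the real system the first mapping would be driven by HamNoSys notation rather than opaque identifiers, and the second by the CNN classifier's output labels.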

The system employs advanced computer vision techniques, leveraging Convolutional Neural
Networks (CNNs) for image processing and the MediaPipe framework for real-time hand gesture
identification. For speech recognition, we utilize the Vosk toolkit, which transcribes spoken
language into text that is then translated directly into ISL using HamNoSys notation, bypassing
the intermediate text translation step. This dual approach ensures comprehensive communication
support, facilitating interaction between deaf and mute individuals and the wider society.
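As one concrete illustration of landmark-based gesture recognition, the heuristic below counts extended fingers, assuming the 21 (x, y) hand landmarks have already been produced by MediaPipe Hands (index 0 is the wrist; fingertips are indices 4, 8, 12, 16, 20). In image coordinates y grows downward, so an extended finger's tip lies above (has a smaller y than) its middle joint. The landmark values in the example are synthetic; the real system would feed such features into its CNN classifier.

```python
# Heuristic finger counting from MediaPipe-style hand landmarks.
# Landmarks are assumed to be 21 (x, y) pairs normalized to [0, 1].

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle (PIP) joints

def count_extended_fingers(landmarks):
    """Count non-thumb fingers whose tip sits above its PIP joint."""
    return sum(1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
               if landmarks[tip][1] < landmarks[pip][1])

# Synthetic landmarks: index finger extended, the rest curled.
pts = [(0.5, 0.9)] * 21
pts[6], pts[8] = (0.5, 0.5), (0.5, 0.3)   # index: tip above its PIP joint
print(count_extended_fingers(pts))
```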

The entire application is implemented in Python and executed on a Raspberry Pi, which is
connected to an external camera for capturing hand gestures. OpenCV libraries are used for video
processing, and optimization techniques are employed to manage CPU, memory, and other
resources efficiently. By integrating these technologies, the system provides real-time, efficient
translation and feedback, enhancing communication and inclusivity in various social settings. This
unified system represents a significant step forward in bridging the communication gap and
promoting the inclusion of deaf and mute individuals in everyday interactions.

SIGNATURE OF THE STUDENTS SIGNATURE OF THE GUIDE
