G MADHEGOWDA INSTITUTE OF TECHNOLOGY

BHARTHINAGARA - 571422
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
8TH SEMESTER, 2024-25

Indonesian Sign Language Translation System

Presented by: Chandra Shekhar S (4MG22CS408)
Under the Guidance of: Mrs. Susheela N, Asst. Professor, Dept. of CSE, GMIT
Indonesian Sign Language Translation System

This presentation introduces an Indonesian Sign Language (BISINDO) translation system that addresses communication challenges for the hearing-impaired. The system uses a ResNet-50-based Convolutional Neural Network to achieve accurate, real-time translation. Its goal is to empower communication, education, and social integration for the deaf community in Indonesia.
Project Objectives

1. Develop a BISINDO translation system: utilize the ResNet-50 CNN architecture for accurate sign recognition.
2. Improve real-time performance: enable seamless communication for hearing-impaired individuals.
3. Create a user-friendly interface: ensure ease of use and accessibility for all users.
4. Enhance communication accessibility: bridge the communication gap by translating sign language into text or speech in real time.
Application of Indonesian Sign Language Translation System

•Communication Aid for the Deaf and Hard of Hearing
 - Real-time translation of BISINDO gestures into spoken or written Indonesian.
 - Helps bridge communication with people who don’t understand sign language.
•Educational Tools
 - Interactive apps and software for teaching BISINDO to students, teachers, or parents.
 - Useful in inclusive schools to support hearing-impaired learners.
•Interpreter Substitution in Public Services
 - Used in hospitals, police stations, banks, or government offices where live interpreters may not be available.
 - Provides instant translation support to improve accessibility.
•Mobile Applications
 - Portable apps allow users to translate BISINDO on the go.
 - Some use the smartphone camera for gesture input and provide speech/text output.
•Media Accessibility
 - Translates BISINDO in video content, making news, tutorials, or entertainment accessible for deaf viewers.
 - Can be integrated into live broadcasts or captioning systems.
Existing Systems for Sign Language Translation

•Google's AI Interpreter: an AI-powered interpreter using computer vision and deep learning for real-time sign language gesture recognition.
•HandTalk App: accessible communication by translating sign language into both speech and text on mobile devices.
•CNN-Based Recognition: convolutional neural networks (CNNs) to identify BISINDO hand gestures effectively.
•Motion Sensor Systems: systems employing motion sensors or wearable gloves to capture hand movements, converted into text or speech.
Proposed System

•High Accuracy: ResNet-50 CNN architecture for precise sign recognition.
•Real-Time Translation: enables instant communication.
•User-Friendly Interface: intuitive design for ease of use.
•Text & Speech Output: converts recognized signs into readable text or synthesized speech.
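The text & speech output stage can be sketched as a mapping from the classifier's predicted class index to a text label, with an optional speech step. The label list below is a hypothetical vocabulary for illustration, and pyttsx3 is only one possible offline text-to-speech library, not one named in the slides.

```python
# Sketch of the text & speech output stage. LABELS is a hypothetical
# vocabulary; a real system would load the full BISINDO label set.
LABELS = ["halo", "terima kasih", "tolong"]

def to_text(class_index):
    """Map the classifier's predicted class index to its BISINDO gloss."""
    return LABELS[class_index]

def speak(text):
    """Speak the translated text aloud, falling back to printing it."""
    try:
        import pyttsx3  # optional dependency for synthesized speech
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except ImportError:
        print(text)  # text-only fallback when no TTS library is available
```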
Techniques and Methods

•Computer Vision (CV)
 - Used to process hand gesture images or video frames.
 - Techniques include hand segmentation, background subtraction, and contour detection.
•Convolutional Neural Networks (CNN)
 - Common for recognizing static hand signs (like alphabet or numbers).
 - Effective in feature extraction from images (e.g., ResNet, MobileNet, Xception).
•Recurrent Neural Networks (RNN) and LSTM
 - Used for dynamic gesture recognition (sequences of hand movements).
 - LSTM helps in capturing time-based patterns in sign gestures.
•YOLO (You Only Look Once)
 - An object detection method used for real-time gesture localization.
 - Helps detect hand position quickly and accurately in live video.
•Transfer Learning
 - Pre-trained models like Xception, MobileNetV2, and EfficientNet are fine-tuned for BISINDO.
 - Reduces training time and improves performance with smaller datasets.
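As a minimal sketch of the transfer-learning technique above, a pre-trained MobileNetV2 backbone can be frozen and topped with a small trainable classification head in Keras. The input size, class count, and head layers here are illustrative assumptions, not values taken from the slides.

```python
# Sketch: transfer learning with a pre-trained MobileNetV2 backbone.
# NUM_CLASSES and the 224x224 input size are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 26  # e.g., one class per BISINDO alphabet sign

def build_model(weights="imagenet"):
    """Frozen pre-trained backbone plus a small trainable classification head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights=weights)
    base.trainable = False  # freeze ImageNet features; train only the head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

After a first training phase with the backbone frozen, unfreezing some top layers of `base` at a low learning rate is the usual fine-tuning step.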
Software Requirements

•Python: programming language for system development.
•TensorFlow: deep learning framework for CNN implementation.
•OpenCV: computer vision library for image processing.
•Development Tools: Jupyter Notebook / Google Colab, Visual Studio Code / PyCharm.
System Architecture

1. Input Video: capture sign gestures via camera.
2. Preprocessing: image enhancement and normalization.
3. ResNet-50 CNN: feature extraction and sign classification.
4. Output Text: translated text displayed to the user.
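The Preprocessing stage can be sketched as a resize-and-normalize step that prepares each captured frame for the CNN. This is a self-contained NumPy sketch using a crude nearest-neighbour resize; in practice OpenCV's `cv2.resize` (and enhancement such as histogram equalization) would be used.

```python
# Minimal sketch of the Preprocessing stage: resize a frame and scale
# pixel values to [0, 1]. Nearest-neighbour sampling keeps the sketch
# dependency-free; a real pipeline would use cv2.resize.
import numpy as np

def preprocess(frame, size=(224, 224)):
    """Resize a captured frame and normalize pixel values for the CNN."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = frame[rows][:, cols]             # nearest-neighbour sampling
    return resized.astype(np.float32) / 255.0  # normalize to [0, 1]
```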
Algorithms

1. Image Recognition & Classification Algorithms
These are primarily used for recognizing static hand gestures from images or frames.
•Convolutional Neural Network (CNN): used to extract features from gesture images and classify them (e.g., ResNet, Xception, MobileNet).
•Transfer Learning Algorithms: use pre-trained models (like InceptionV3, VGG16, EfficientNet) and fine-tune them for BISINDO.

2. Detection Algorithms
Useful for detecting hands or gestures in real-time video.
•YOLO (You Only Look Once): fast object detection, ideal for real-time BISINDO gesture spotting.
•SSD (Single Shot Detector): an alternative to YOLO that balances speed and accuracy.

3. Gesture Comparison & Matching
Used when the system compares gestures over time.
•Dynamic Time Warping (DTW): compares input gestures with stored sequences by aligning them in time.
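The DTW matching step above can be sketched as the classic dynamic-programming alignment. For simplicity this sketch compares one-dimensional sequences (e.g., a hand's x-coordinate per frame); a real gesture matcher would use multi-dimensional keypoint features and a distance function over them.

```python
# Illustrative Dynamic Time Warping: total cost of the best time alignment
# between two gesture trajectories, each a sequence of scalar features.

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance via dynamic programming."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])           # local mismatch cost
            cost[i][j] = d + min(cost[i - 1][j],    # repeat a sample of b
                                 cost[i][j - 1],    # repeat a sample of a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

A time-stretched copy of a gesture aligns with cost 0, while a genuinely different gesture accumulates mismatch cost, which is why DTW suits sequences signed at varying speeds.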
Conclusion

The Indonesian Sign Language (BISINDO) Translation System represents a significant advancement in
enabling communication for the hearing-impaired community. By leveraging the ResNet-50 CNN
architecture, the system achieves high accuracy in recognizing BISINDO signs and provides real-time
translation into both text and speech.
The user-friendly interface and versatile output options make it an accessible tool for a wide range of
users. This technology has the potential to bridge communication gaps, promote inclusivity, and empower
individuals to connect more effectively. Future work will focus on expanding the system's vocabulary,
improving its robustness in diverse environments, and exploring integration with mobile platforms to
enhance accessibility and usability.
Thank You
