
“Sign Language Recognition System using Customized Convolutional Neural Network”

ACKNOWLEDGEMENT

I would like to place on record my deep sense of gratitude to Shri. D K Shivakumar, Chairman, Global Academy of Technology, Bangalore, India, for providing the excellent infrastructure and academic environment at GAT, without which this work would not have been possible.

I am extremely thankful to Dr. H B Balakrishna, Principal, GAT, for providing the academic ambience and everlasting motivation to carry out this work and shape my career.

I express my sincere gratitude to Dr. Madhavi M, HOD, Dept. of Electronics and Communication Engineering, GAT, for her stimulating guidance, continuous encouragement, impressive technical suggestions, and motivation throughout the course of this work.

I also wish to extend my thanks to Prof. Kavya M, Project Guide, Dept. of Electronics and Communication Engineering, GAT, for her critical, insightful comments, guidance, and constructive suggestions to improve the quality of this work.

Finally, I thank all my friends and classmates who always stood by me in difficult situations and helped me with technical aspects. Last but not least, I wish to express my deepest sense of gratitude to my parents, who were a constant source of encouragement and stood by me as a pillar of strength in completing this work and course successfully.

Name: Hitesh K V

USN: 1GA21EC055
ABSTRACT

This paper presents a deep learning-based Sign Language Recognition System using a
customized Convolutional Neural Network (CNN). The system aims to support
communication for people with hearing or vocal disabilities by translating hand gestures
into meaningful text and speech. The proposed method utilizes a dataset containing 2400
images for each of the 44 classes, including alphabets, numerals, and words. A CNN
model with seven convolutional layers was trained using the OpenCV library for image
processing and Python libraries such as Keras and TensorFlow. The model achieved an
accuracy of 99.92%. The system performs real-time classification using webcam input,
and recognized gestures are converted to speech using the pyttsx3 library. The proposed
system emphasizes accuracy, efficiency, and accessibility, especially for children and
individuals with hearing impairments.
TABLE OF CONTENTS
Sl.No Topics Page Number
1 Acknowledgement 1
2 Abstract 2
3 Introduction 4
4 System Architecture 5-6
5 Methodology 7
6 Results and Discussion 8-9
7 Conclusion 10
8 Future Scope 10
9 References 11
INTRODUCTION

Communication plays a critical role in human interaction. For individuals with speech or
hearing impairments, sign language serves as a vital medium of communication. With
technological advancement in computer vision and machine learning, automating the
translation of sign language into text or voice is now feasible. The aim of this system is to
provide a tool for hearing-impaired individuals, especially children, to learn alphabets,
numbers, and basic words using sign language with the aid of deep learning.

SYSTEM ARCHITECTURE
The architecture of the Sign Language Recognition System consists of two major
components:

1. OpenCV-based hand gesture extraction module
2. Customized Convolutional Neural Network (CNN) for gesture classification

The flow diagram consists of webcam input, region of interest extraction, gesture
segmentation, and prediction using CNN. The model then converts the output into text
and optionally into speech using the pyttsx3 library.
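As a rough illustration, the flow above (webcam capture, region-of-interest extraction, CNN prediction, and text/speech output) could be sketched as follows. This is a minimal sketch, not the report's implementation: the class labels, model filename, input size, ROI coordinates, and confidence threshold are all assumptions.

```python
# 26 letters + 10 digits + 8 example words = 44 classes.
# The word list here is hypothetical; the report does not enumerate its classes.
CLASSES = [chr(ord("A") + i) for i in range(26)] + [str(d) for d in range(10)] + \
          ["HELLO", "THANKS", "YES", "NO", "PLEASE", "SORRY", "GOOD", "BYE"]

def decode_prediction(probs, threshold=0.8):
    """Return the class label if the top probability clears the
    confidence threshold, else None (gesture rejected as uncertain)."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CLASSES[best] if probs[best] >= threshold else None

def main():
    # Real-time loop: requires a webcam, a trained model file, and speakers.
    import cv2
    import pyttsx3
    from tensorflow.keras.models import load_model

    model = load_model("sign_cnn.h5")        # hypothetical filename
    engine = pyttsx3.init()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[100:400, 100:400]        # fixed region of interest (assumed)
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (64, 64)) / 255.0   # assumed input size
        probs = model.predict(gray.reshape(1, 64, 64, 1), verbose=0)[0]
        label = decode_prediction(probs)
        if label:                            # speak only confident predictions
            engine.say(label)
            engine.runAndWait()
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    main()
```

Rejecting low-confidence frames keeps the speech output from chattering on transitional hand positions between gestures.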

METHODOLOGY

The methodology includes data collection, image pre-processing, CNN model training,
real-time classification, and prediction display. Key stages are:

• Creating Histogram: used to distinguish hand gestures from the background.
• Dataset Creation: 105,600 images (2,400 per class) across 44 gesture classes, captured using a webcam.
• Image Processing: converting RGB to HSV, thresholding, applying Gaussian blur, and binarization.
• CNN Model Design: a seven-layer convolutional network with ReLU activations and a Softmax output.
• Displaying Predictions: real-time gesture predictions with pyttsx3 speech synthesis.
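The image-processing stage above can be sketched with the blur and binarization steps written in NumPy for illustration; the report itself uses OpenCV equivalents (cv2.cvtColor, cv2.GaussianBlur, cv2.threshold). The kernel size, sigma, and threshold value are assumptions.

```python
import numpy as np

def gaussian_blur(gray, ksize=5, sigma=1.0):
    """Separable Gaussian blur: convolve each row, then each column."""
    ax = np.arange(ksize) - ksize // 2
    kernel = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()                   # normalize so brightness is preserved
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, gray.astype(float))
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def binarize(gray, thresh=127):
    """Threshold a grayscale image into a binary 0/255 mask."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

# In the full pipeline the hand mask would be obtained roughly as:
#   hsv  = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)     # RGB -> HSV
#   mask = back-projection of the stored skin-color histogram onto hsv
#   mask = binarize(gaussian_blur(mask))              # smooth, then binarize
```

Blurring before thresholding suppresses isolated noisy pixels, so the binary hand silhouette fed to the CNN has cleaner edges.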

RESULTS AND DISCUSSION

The CNN model trained on the created dataset achieved a validation accuracy of 99.92%.
Real-time predictions showed high reliability and performance. The confusion matrix
showed minimal misclassifications. Model performance was evaluated using metrics like
accuracy, training/validation loss, and real-time classification consistency.
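The confusion matrix and accuracy mentioned above can be computed from held-out predictions; a minimal sketch follows (the variable names and the small example in the comment are illustrative, not taken from the report).

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i][j] counts samples of true class i predicted as class j,
    so off-diagonal entries are the misclassifications."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def accuracy(cm):
    """Fraction of samples on the diagonal of the confusion matrix."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total if total else 0.0

# Example: 4 validation samples over 3 classes, one misclassified
# cm = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], 3)
# accuracy(cm) -> 0.75
```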

CONCLUSION

This report presents a highly accurate Sign Language Recognition System using a
customized CNN. The system bridges communication gaps between the hearing-impaired
and the rest of society. With a large dataset, pre-processing, and effective model design,
99.92% accuracy was achieved. The system can be extended for more complex sign
gestures and integrated into educational platforms.

FUTURE SCOPE

• Expand to regional sign languages
• Integrate with augmented reality systems
• Improve hardware interface for embedded deployment
• Enable multi-hand gesture support
• Apply to other gesture-based applications like robotics and gaming

REFERENCES
[1] Narayana P et al., Gesture recognition on ISOGD dataset, CVPR, 2018.
[2] Hossen MA et al., Bengali Sign Language Recognition, IEV, 2018.
[3] Dieleman S et al., Sign language CNNs, ECCV, 2014.
[4] Cheng W et al., CNN and RBM-based gesture system, ECCV, 2014.
[5] Rajendran R et al., Deep CNN Sign Language, IJRSM, 2021.
[6] Beena MV et al., ANN on depth maps, MEJSR, 2017.
