A PROJECT ON
“SIGN LANGUAGE TO TEXT”
Submitted in partial fulfillment of the term work of
Final Year, Computer Branch
TEAM MEMBERS
1. MANDHAR PATIL (12)
2. KEDAR PIMPLE (15)
3. ALKESH SONTAKKE (20)
4. ARYAN ALI CHISHTY (42)
GUIDED BY
MR. JAHANGIR ANSARI
SENIOR LECTURER, COMPUTER BRANCH
Academic Year 2023-2024
CERTIFICATE
This is to certify that Mr. ALKESH SONTAKKE, MANDHAR PATIL,
KEDAR PIMPLE, and ARYAN ALI CHISHTY from Anjuman
Polytechnic, Sadar, Nagpur have completed the Project Planning
Report titled “SIGN LANGUAGE TO TEXT” in a group
consisting of 4 candidates under the guidance of the faculty
Mr. JAHANGIR ANSARI.
PRINCIPAL
MR. Anwar Ahsan
ACKNOWLEDGEMENT
Success is the manifestation of perseverance and
motivation. We, the projectees, attribute our success in this
venture to our guide Mr. Jahangir Ansari and Head of
Department Mr. Anwar Ahsan, whose guidance, enthusiasm,
foresight, and innovation contributed to completing this
project. It is a reflection of their thoughts, ideas, concepts,
and, above all, their modest efforts.
We are deeply indebted to our Principal Mr. Anwar
Ahsan Sir, for the facilities provided without which our project
would not have turned into reality.
We are also thankful to all the faculty members of our
department, who have also helped directly or indirectly in our
endeavors.
Our thanks are also to all those who have shown keen
interest in this work and provided much-needed
encouragement.
CONTENTS
Chapter 1: Introduction
Chapter 2: Literature Survey
Chapter 3: Scope of the Project
Chapter 4: Methodology
Chapter 5: Detail of design, working and process
Chapter 6: Result and Application
Chapter 7: Conclusion and future scope
Chapter 8: Reference and Bibliography
CHAPTER 1: INTRODUCTION
Motivation:
1. Objective Definition
Primary Goal: To develop a real-time, vision-based system that
recognizes American Sign Language (ASL) fingerspelling gestures
captured through a webcam and converts them into text.
Hardware
Normal laptop with built-in webcam
Software
Python: The core programming language for development.
OpenCV: For real-time video capture and processing.
TensorFlow/Keras: Deep learning libraries for model training
and recognition.
Numpy: For numerical operations and array handling.
Machine Learning library (e.g., TensorFlow)
Teachable Machine website for simplified model training and
export
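As an illustration of how a Teachable Machine export could be used, the following is a minimal sketch that loads the exported Keras model and classifies a single webcam frame. The file names keras_model.h5 and labels.txt, the 224x224 input size, and the [-1, 1] scaling are assumptions based on Teachable Machine's standard image-model export, not files from this project.

```python
# Minimal sketch: load a Teachable Machine Keras export and classify one frame.
# File names (keras_model.h5, labels.txt) are assumed from a standard export.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    # Each line may look like "0 A"; keep only the class name.
    labels = [line.strip().split(maxsplit=1)[-1] for line in f]

def predict_sign(frame_bgr):
    # Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1].
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224)).astype(np.float32)
    img = img / 127.5 - 1.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return labels[int(np.argmax(probs))], float(np.max(probs))
```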
TensorFlow:
TensorFlow is an end-to-end open-source platform for
Machine Learning. It has a comprehensive, flexible ecosystem
of tools, libraries and community resources that lets
researchers push the state-of-the-art in Machine Learning and
developers easily build and deploy Machine Learning powered
applications.
TensorFlow offers multiple levels of abstraction so you can
choose the right one for your needs. Build and train models by
using the high-level Keras API, which makes getting started
with TensorFlow and machine learning easy.
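As an illustration of the high-level Keras API mentioned above, the following is a minimal sketch of a small CNN that could classify fixed-size gesture images into 26 ASL alphabet classes. The input size, layer sizes, and class count are illustrative assumptions, not the exact architecture used in this project.

```python
# Minimal sketch of a CNN built with the high-level Keras API.
# Input shape (64x64 grayscale) and 26 output classes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(26, activation="softmax"),  # one class per ASL letter
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```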
OpenCV:
OpenCV (Open Source Computer Vision) is an open-source
library of programming functions used for real-time computer
vision.
It is mainly used for image processing, video capture, and
analysis, for features like face and object recognition. It is
written in C++, which is its primary interface; however, bindings
are available for Python, Java, and MATLAB/Octave.
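To illustrate how OpenCV handles real-time capture, here is a minimal sketch that reads webcam frames, crops a region of interest where the hand is shown, and converts it to a blurred grayscale image. The ROI coordinates and the 'q' quit key are illustrative assumptions.

```python
# Minimal sketch: capture webcam frames with OpenCV and preprocess a hand ROI.
# ROI coordinates and the 'q' quit key are illustrative assumptions.
import cv2

cap = cv2.VideoCapture(0)          # default built-in webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:400, 300:600]  # region of interest where the hand is shown
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    cv2.rectangle(frame, (300, 100), (600, 400), (0, 255, 0), 2)
    cv2.imshow("camera", frame)
    cv2.imshow("processed ROI", blur)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```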
CHAPTER 6: EXPERIMENTAL RESULT AND APPLICATION
(Result screenshots: Image 1 to Image 5)
CHAPTER 7: CONCLUSION AND FUTURE SCOPE
Conclusion
In this report, a functional real-time, vision-based sign
language recognition system for deaf and mute (D&M) people has
been developed for the ASL alphabet.
We achieved a final accuracy of 98.0% on our data set. We
improved our prediction after implementing a second layer of
algorithms, in which we verify and predict symbols that are
more similar to each other.
This gives us the ability to detect almost all the symbols,
provided that they are shown properly, there is no noise in the
background, and the lighting is adequate.
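The two-layer prediction mentioned above can be sketched roughly as follows: a primary classifier predicts a letter, and if that letter falls into a group of visually similar letters, a dedicated sub-classifier trained only on that group re-checks the prediction. The group contents and model file names below are hypothetical and only illustrate the idea; they are not the exact models used in this project.

```python
# Rough sketch of two-layer prediction: a primary classifier plus
# specialised sub-classifiers for groups of visually similar letters.
# The groups and model file names below are hypothetical examples.
import numpy as np
import tensorflow as tf

primary = tf.keras.models.load_model("primary_model.h5")  # hypothetical file
similar_groups = {
    frozenset({"M", "N", "S"}): tf.keras.models.load_model("group_mns.h5"),  # hypothetical
    frozenset({"D", "R", "U"}): tf.keras.models.load_model("group_dru.h5"),  # hypothetical
}
alphabet = [chr(ord("A") + i) for i in range(26)]

def predict_letter(image_batch):
    # Layer 1: primary prediction over all 26 letters.
    letter = alphabet[int(np.argmax(primary.predict(image_batch, verbose=0)[0]))]
    # Layer 2: if the letter is easily confused, re-check with the group model.
    for group, sub_model in similar_groups.items():
        if letter in group:
            members = sorted(group)
            letter = members[int(np.argmax(sub_model.predict(image_batch, verbose=0)[0]))]
            break
    return letter
```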
Future Scope:
We are planning to achieve higher accuracy even in the case of
complex backgrounds by trying out various background
subtraction algorithms.
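One readily available option for such experiments is OpenCV's built-in MOG2 background subtractor; the following is a minimal sketch of how it could be applied to webcam frames, not a technique already adopted in the project.

```python
# Minimal sketch: OpenCV MOG2 background subtraction to isolate the hand
# from a complex background before classification.
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground (moving hand) mask
    hand_only = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground", hand_only)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```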
We are also considering improving the pre-processing to predict
gestures in low-light conditions with higher accuracy.
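For the low-light case, one candidate pre-processing step is contrast-limited adaptive histogram equalization (CLAHE), available in OpenCV; again, this is only a sketch of a possible direction, not something already in the project.

```python
# Sketch: CLAHE (adaptive histogram equalization) to boost the contrast
# of a grayscale frame captured in low light.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def enhance_low_light(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return clahe.apply(gray)
```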
This project can be enhanced by building it as a web/mobile
application so that users can access it conveniently.
Also, the existing project only works for ASL; it can be extended
to other native sign languages with a sufficiently large data set
and training. This project implements a fingerspelling
translator; however, sign languages are also used contextually,
where each gesture can represent an object or a verb.
Identifying this kind of contextual signing would require a
higher degree of processing and natural language processing
(NLP).
CHAPTER 8: REFERENCES
[1] T. Yang and Y. Xu, "Hidden Markov Model for Gesture
Recognition", CMU-RI-TR-94-10, Robotics Institute, Carnegie
Mellon Univ., Pittsburgh, PA, May 1994.
[2] Pujan Ziaie, Thomas Müller, Mary Ellen Foster, and Alois
Knoll, "A Naïve Bayes", Munich, Dept. of Informatics VI, Robotics
and Embedded Systems, Boltzmannstr. 3, DE-85748 Garching,
Germany.
[3] https://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html
[4] Mohammed Waleed Kadous, Machine recognition of Auslan
signs using PowerGloves: Towards large-lexicon recognition of
sign language.
[5] https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/
[6] http://www-i6.informatik.rwth-aachen.de/~dreuw/database.php
[7] Pigou L., Dieleman S., Kindermans PJ., Schrauwen B. (2015)
Sign Language Recognition Using Convolutional Neural
Networks. In: Agapito L., Bronstein M., Rother C. (eds)
Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture
Notes in Computer Science, vol 8925. Springer, Cham
[8] Zaki, M.M., Shaheen, S.I.: Sign language recognition using a
combination of new vision-based features. Pattern Recognition
Letters 32(4), 572–577 (2011).