BCA 8th Project Report (Sign Language Translator)
Submitted to
Xavier International College
Department of Computer Application
Bouddha, Kathmandu
Submitted by:
Bipin Parajuli
Reg no: 54602008
Ajit Thapa
Reg no:
Supervisor Recommendation
I hereby recommend that this project report, prepared under my supervision by Bipin Parajuli and Ajit Thapa, entitled “Sign Language Translator”, in partial fulfillment of the requirement for a Bachelor's Degree in Computer Application of Tribhuvan University, be processed for evaluation.
…………………….
Mr. Amit Chaudhary
Project Supervisor
Bouddha, Kathmandu
Tribhuvan University
Faculty of Humanities and Social Science
Xavier International College
LETTER OF APPROVAL
This is to certify that this project prepared by Bipin Parajuli and Ajit Thapa entitled
“Sign Language Translator” in partial fulfillment of the requirements for the degree of
Bachelor in Computer Application has been evaluated. In our opinion, it is satisfactory in
the scope and quality of a project for the required degree.
Acknowledgment
We would like to express our deepest appreciation to all those who provided us with the
possibility to complete this report. A special gratitude is given to our final project
supervisor, Mr. Amit Chaudhary, whose contribution in stimulating suggestions and
encouragement, helped us to contribute to our project, especially in writing this report.
Furthermore, we would also like to acknowledge with much appreciation the crucial role
of the coordinator, who gave the permission to use all required equipment and the
necessary materials to complete our project. Special thanks to our Academic Manager, Mr.
Tyson Lama, who gave us valuable suggestions regarding the project. Last but not least,
many thanks go to our teachers, friends, and guardians who directly or indirectly helped us
in achieving the goal. We are also grateful for all the guidance, comments, and advice that have improved our presentation skills.
Abstract
According to the World Health Organization (WHO), 466 million people across the world
have disabling hearing loss (over 5% of the world's population), of whom 34 million
are children. There are only about 250 certified sign language interpreters in India for a
deaf population of around 7 million. With these significant statistics, the need for a tool that enables a smooth flow of communication between abled people and people with speech or hearing impairments is very high. Our application promises to secure a two-way conversation, as it deploys machine learning and deep learning models to convert sign language to text. The other person can reply with text, which will then be visible to the impaired person. The client can use the tutorials and learn
the basic functioning of the application and ASL. This system eliminates the need for an
interpreter, and the traditional methods of pen and paper can also be discarded. This
application ensures the automation of communication and thereby provides a solution to
the hurdles faced by hearing/speech-impaired people.
Table of Contents
Supervisor Recommendation........................................................................................i
Letter of Approval..............................................................................................ii
Acknowledgement..............................................................................................................iii
Abstract...............................................................................................................................iv
List of Figures....................................................................................................................vii
List of Abbreviations.........................................................................................ix
Chapter 1: Introduction........................................................................................................1
1.3 Objectives…………………………………………………………………….2
3.1 System Analysis…………………………………………………………………5
i. Functional Requirements………………………………………………………...5
i. Technical Feasibility……………………………………………………………7
3.2.1 Design…………………………………………………………..14
3.2.3 Interface Design……………………………...……………………..16
4.1 Implementation……………………………………………………………….18
4.2 Testing…………………………………………………………………………..21
Unit Testing…………………………...………………………………………….21
System Testing………………………...…………………………………………22
5.2 Conclusion…………………………………………………………………..23
Appendices……………………………………………………………………………….24
REFERENCES…………………………………………………………………………...34
List of Figures
Figure 3.1 Use Case Diagram……………………………………………………………..6
Figure 3.2 Gantt Chart……………………………………………………………………..9
Figure 3.3 ER diagram ……………………..………………………………………….11
Figure 3.4 Level 1 DFD…………………………………………………………………..13
Figure 3.5 Level 2 DFD…………………………………………………………………..13
Figure 3.6 Database Schema ……………………..……………………………………15
Figure 3.7 Flowchart ..…………………………………………………………………17
Figure 4.1 Waterfall Model…………………………………………………………….19
Figure 5.1 User Screen.…………………………………………………………………..24
List of Tables
List of Abbreviations
Chapter 1: Introduction
1.1 Introduction
With the increase of innovations and technology, life has become significantly easier for humans. The rapid growth of technology has benefited many people: it has opened opportunities for the poor, supported the sick, and enabled people with disabilities to live more independent lives. People with speech or hearing impairments have always found it difficult to communicate and mingle, but with technology that barrier is being broken down. They can now communicate with far less difficulty and participate confidently in public settings.
With the help of technology and the Internet, we can control and access machines and devices connected to the Internet even over long distances, and information can be sent and received without requiring face-to-face interaction.
1.3 Objectives
The general objectives are listed below:
1.4.1 Scope:
This project has a great scope, and some of the possibilities are listed below:
• Explore educational applications to help individuals learn sign language more effectively.
• Use the project for accessibility applications to assist the deaf and hard-of-hearing community
in various settings, including educational institutions, workplaces, and public spaces.
1.4.2 Limitations:
There are some limitations to this system. They are listed as follows:
● The user has to provide the dataset by making the sign language gestures and then labeling them.
● The system may struggle with recognizing gestures in low-light conditions or when
the user's hands are partially occluded.
● The system may struggle to recognize complex hand gestures or gestures that involve
intricate finger movements, as it relies on predefined landmark points.
1.5 Report Organization
The report consists of five chapters in which all the phases of application design and
development will be covered.
Chapter One: The first chapter introduces the system and the problems, and gives an
overview of the study.
Chapter Two: The second chapter covers the background study and the literature review
of the project.
Chapter Three: The third chapter covers the system analysis and design phase of the
application. It explains the methodology used while developing the system.
Chapter Four: The fourth chapter discusses the implementation and testing phase of the
application development.
Chapter Five: The last chapter i.e. the fifth chapter covers the conclusion,
recommendations, and future works to improve this project.
Chapter 2: Background Study and Literature Review
2.1 Background Study
Sign language is a system of communication using visual gestures and signs. Sign
languages are expressed through manual articulations in combination with non-manual
elements. Sign languages are full-fledged natural languages with their own grammar and
lexicon. Sign languages are not universal and are usually not mutually intelligible with
each other, although there are also similarities among different sign languages.
Using the video input from the webcam, the Sign Language Translator (SLT) translates the
signs into text output. American Sign Language (ASL) is the predominant sign language
of Deaf communities in the United States and most of Anglophone Canada. Besides North
America, dialects of ASL and ASL-based creoles are used in many countries around the
world, including much of West Africa and parts of Southeast Asia.
https://www.theatlantic.com/technology/archive/2017/11/
MotionSavvy is building a tablet that detects when a person is using ASL and converts it to text or voice. The software also has voice recognition through the tablet's mic, which allows a hearing person to respond with voice to the person signing.
https://techcrunch.com/2014/06/06/motionsavvy-is-a-tablet-app-thatunderstands-sign-language/
Chapter 3: System Analysis and Design
3.1. System Analysis
The system analysis of our system is as follows:
i. Functional Requirements
A functional requirement defines a function of a system or its component, where a function is described as a specification of behavior between inputs and outputs.
Figure 3.1 Use Case Diagram
ii. Non-Functional Requirements
Non-functional Requirements are often called “quality attributes” of a system. Other terms
for non-functional requirements are “qualities”, “quality goals”, “quality of service
requirements”, “constraints” and “non-behavioral requirements”. Non-functional
requirements are:
● Availability: The system is available 24 hours a day so that the general public can access and use it.
● Performance: The performance of the application is good and the user interface is user-friendly. Anyone who can understand English and has basic knowledge of web applications can use the system.
● User Satisfaction: The system is designed to satisfy the user's needs and requirements.
● Layout: The system deals with the different layouts of the application, such as the UI (User Interface) and UX (User Experience) design.
i. Technical Feasibility
ii. Operational Feasibility
Operational feasibility refers to whether a system will be used effectively after it has been developed. It is the ability to utilize, support, and perform the necessary tasks of a system or program, and it includes everyone who creates, operates, or uses the system. To be operationally feasible, the system must fulfill a need required by the business. An operational feasibility study analyzes the internal operations of how a proposed process will work and be implemented, and how to deal with resistance to and acceptance of change.
The Gantt chart below shows the different tasks performed and their schedule while creating the project. The total time consumed for the overall project is also shown:
[Figure 3.2 Gantt Chart: a 12-week schedule covering the activities Planning, Analysis, Design, Implementation, Testing, Documentation, Review, and Presentation.]
Figure 3.3 ER Diagram
3.1.4 Process Modeling
Process modeling is the modeling of structured activities or tasks that produce a specific product for particular users or customers. This system involves different processes, such as the data collection process, the training process, and the recognition process. These processes can be visually modeled by using a data flow diagram (DFD).
Figure 3.4 Level 1 DFD
Figure 3.5 Level 2 DFD
3.2 System Design
During the development, we used the following diagrams to understand the requirements
of the system.
3.2.3 Interface Design
The high-level design of a sign language recognition system using Python contains three basic components:
Input Segment: In a separate area of the interface, the user makes the signs for different characters, and these are saved in separate storage that is later used for training.
Video Feed: A real-time video feed from the user's camera is displayed on the main screen. This feed allows users to see themselves signing in sign language.
Sign Language Recognition Output: A dedicated section of the screen displays recognized signs or phrases in real time. This is where users can see the system's interpretation of their sign language.
Process design illustrates how each process in the system communicates with each other
to perform designated tasks. Different figures are used to illustrate different processes of
communication:
a) Flowchart
Flowcharts are graphical representations of workflows of stepwise activities and
actions with support for choice and iteration. It shows the overall flow of control in pictorial form.
Figure 3.7 Flowchart of Sign Language Recognition
Chapter 4: Implementation and Testing
4.1 Implementation
For the development of the project, the waterfall model was found suitable. The waterfall
model is a sequential design process, used in software development processes, in which
progress is seen as flowing steadily downwards through the phases of planning, analysis,
design, implementation, and testing.
The planning phase is used for understanding why a system should be built and
determining how the project team will go about building it. The analysis phase answers
the questions of who will use the system, what the system will do, and where and when it
will be used. During this analysis phase, the project team investigates any current system,
identifies improvement opportunities, and develops a concept for the new system. The
design phase decides how the system will operate in terms of the hardware, software, and
network infrastructure that will be in place. In the implementation phase, a system is
actually created and this phase usually gets the most attention, because for most systems
it is the longest and most expensive single part of the development process. The final
phase is the testing phase, where the system after implementation needs to be tested to
make it error-free.
Figure 4.1 Waterfall Model
4.1.1 Tools Used
Python: Python is a versatile programming language commonly used for machine learning,
computer vision, and general software development. It serves as the primary programming
language for the project.
NumPy: NumPy is a fundamental library for numerical computing in Python. It is used for the
efficient handling of multidimensional arrays, which is essential for processing image data and
preparing it for machine learning.
4.1.2 Implementation Details of Modules
Implementing a Sign Language Recognition project involves several key modules that
work together to capture, process, recognize, and translate sign language gestures. Below,
we provide an overview of the implementation details for each module in the project:
Data Sources: Capture real-time video feed from a camera (e.g., webcam).
Data Acquisition: Continuously capture video frames and convert them to image data for
processing.
Data Preprocessing: Resize, crop, and normalize image data to the required format.
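As an illustration of the preprocessing step, the following sketch crops a frame to a region of interest and scales its pixel values to [0, 1]. The crop coordinates and target size are hypothetical, and the nearest-neighbour "resize" is a stand-in for an image library's resize function:

```python
import numpy as np

def preprocess_frame(frame, box, size=64):
    """Crop a frame to a bounding box, then scale pixel values to [0, 1].

    frame: H x W x 3 uint8 array; box: (x1, y1, x2, y2) crop region.
    The resize below is a naive nearest-neighbour sketch: it picks
    `size` evenly spaced rows and columns from the crop.
    """
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]
    rows = np.linspace(0, crop.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, size).astype(int)
    resized = crop[rows][:, cols]
    return resized.astype(np.float32) / 255.0  # normalize to [0, 1]

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out = preprocess_frame(frame, (100, 50, 300, 250))
print(out.shape)  # (64, 64, 3)
```

A real pipeline would typically use OpenCV's resize instead of the striding shown here; the normalization step, however, is the same idea.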
Feature Extraction: Utilize a hand detection model from MediaPipe to identify and
extract hand landmarks in each frame.
Bounding Box Calculation: Create bounding boxes around detected hands for further
analysis.
Filtering: Apply filters to remove noise and improve hand landmark accuracy.
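The bounding-box and filtering steps above can be sketched in NumPy. Here landmarks are assumed to be normalized (x, y) coordinates such as those produced by a hand detector like MediaPipe; the padding value and the moving-average window length are illustrative assumptions:

```python
import numpy as np

def bounding_box(landmarks, pad=0.05):
    """Axis-aligned box around detected (x, y) hand landmarks, with padding."""
    x_min, y_min = landmarks.min(axis=0) - pad
    x_max, y_max = landmarks.max(axis=0) + pad
    return x_min, y_min, x_max, y_max

def smooth(history, window=5):
    """Moving-average filter over the last `window` landmark frames,
    which damps per-frame jitter in the detected positions."""
    return np.mean(history[-window:], axis=0)

pts = np.array([[0.2, 0.3], [0.6, 0.8], [0.4, 0.5]])
print(bounding_box(pts))        # approx (0.15, 0.25, 0.65, 0.85)
history = np.stack([pts + 0.01 * i for i in range(5)])
print(smooth(history).shape)    # one smoothed frame of shape (3, 2)
```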
Data Labeling: Annotate the detected hand landmarks and gestures. Associate each
frame with the corresponding sign language gesture.
Data Storage: Store annotated data in a structured format for training and evaluation.
Dataset Creation: Prepare the annotated data for machine learning by converting it into a
suitable format (e.g., NumPy arrays).
Data Splitting: Split the dataset into training and testing sets for model evaluation.
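A minimal sketch of the dataset-creation and splitting steps, assuming each sample is a fixed-length sequence of flattened landmark coordinates. The sizes (30 frames, 21 landmarks × 3 coordinates, 5 classes) and the 80/20 split ratio are illustrative, not taken from the project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 100 samples, each a 30-frame sequence of
# 21 landmarks x (x, y, z) = 63 values per frame.
X = rng.random((100, 30, 63)).astype(np.float32)
y = rng.integers(0, 5, size=100)          # 5 gesture classes

# Shuffle the sample indices, then hold out 20% for testing.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_test.shape)  # (80, 30, 63) (20, 30, 63)
```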
Model Architecture: Design and build an LSTM-based neural network for recognizing
sign language gestures.
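In practice the LSTM network would be built with a deep-learning framework such as Keras. Purely to illustrate what one LSTM step computes over a frame of landmark features, here is a single-cell forward pass in NumPy with randomly initialized weights (the layer sizes are assumptions):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous state h."""
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    n = len(h)
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:])                  # candidate cell state
    c_new = f * c + i * g                 # blend old memory with new input
    h_new = o * np.tanh(c_new)            # gated hidden output
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 63, 32                      # 63 landmark features per frame
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for frame in rng.random((30, n_in)):      # a 30-frame gesture sequence
    h, c = lstm_step(frame, h, c, W, U, b)
print(h.shape)                            # (32,)
```

The final hidden state summarizes the whole gesture sequence; a classification layer on top of it would produce the gesture class.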
Model Training: Train the neural network using the training dataset and evaluate its
performance.
Gesture Dictionary: Create a dictionary or database that maps recognized sign language
gestures to their corresponding meanings or text translations.
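The gesture dictionary can be as simple as a Python mapping from predicted class indices to display text; the labels below are placeholders, not the project's actual gesture set:

```python
# Hypothetical mapping from model output class index to display text.
GESTURES = {0: "hello", 1: "thanks", 2: "yes", 3: "no", 4: "I love you"}

def translate(class_index):
    """Look up the text for a recognized gesture, with a safe fallback."""
    return GESTURES.get(class_index, "[unrecognized sign]")

print(translate(1))    # thanks
print(translate(99))   # [unrecognized sign]
```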
Real-time Processing: Continuously process video frames, extract hand landmarks, and
feed them to the trained recognition model.
Translation Output: Update the translation output as new signs are recognized, creating a
seamless conversation.
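Real-time recognition typically keeps a sliding window of the most recent frames and runs the model whenever the window is full. A sketch using Python's `collections.deque`, where the window length and the `predict` stub are assumptions standing in for the trained model:

```python
from collections import deque

WINDOW = 30                        # frames per gesture sequence

def predict(frames):
    """Stand-in for the trained recognition model; returns a class index."""
    return len(frames) % 5

buffer = deque(maxlen=WINDOW)      # oldest frames fall off automatically
outputs = []
for t in range(45):                # simulate 45 incoming video frames
    buffer.append([0.0] * 63)      # one frame of landmark features
    if len(buffer) == WINDOW:      # only predict once a full window exists
        outputs.append(predict(buffer))

print(len(outputs))                # one prediction per frame after warm-up
```

The `maxlen` deque gives the sliding window for free: each new frame pushes out the oldest one, so the model always sees the latest WINDOW frames.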
4.2 Testing
Software Testing is the process of testing the functionality and correctness of software.
Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product in the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors.
Unit Testing
Unit testing refers to the testing of individual units/components of software. The purpose
is to validate that each unit of the software performs as designed. A unit is the smallest
testable part of any software. It usually has one or a few inputs and usually a single
output.
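As an illustration, a unit test for a small helper (here a hypothetical gesture-lookup function, not code from the project) might look like this with Python's built-in `unittest` module:

```python
import unittest

# Hypothetical helper under test: maps class indices to display text.
GESTURES = {0: "hello", 1: "thanks"}

def translate(class_index):
    return GESTURES.get(class_index, "[unrecognized sign]")

class TranslateTest(unittest.TestCase):
    def test_known_gesture(self):
        self.assertEqual(translate(0), "hello")

    def test_unknown_gesture_falls_back(self):
        self.assertEqual(translate(42), "[unrecognized sign]")
```

Running this file with `python -m unittest` would execute both tests and report any failures.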
System Testing
System testing tests a completely integrated system and the outputs generated by it to verify that the system meets its requirements. It is also used to check logic changes made in it with the intention of finding errors. This process helps in validating the system by testing it as a whole, covering each module of the application, including the database.
Chapter 5: Conclusion and Recommendation
5.2 Conclusion
This project was undertaken to solve the underlying issue faced by hearing and
speech-impaired people. They often don’t even stand a chance in the competitive global
arena because of communication hurdles.
This project, however, helps in reducing the social stigma around their participation in many domains and gives them the confidence to stand on their own in any field they choose.
The application provides the necessary platform to communicate with much ease and
gives them the ability to interact without any external help. The need for an interpreter is
eradicated, and the smooth flow of a conversation is well-developed.
Appendices
Figure 5.2 Running Screen
Figure 5.3 Gesture Signs HomeScreen
Figure 5.4 Letter C hand sign Screen
Figure 5.5 Output of sign U
REFERENCES
[1] Elmahgiubi, Mohammed, et al. "Sign language translator and gesture recognition." 2015 Global Summit on Computer & Information Technology (GSCIT). IEEE, 2015.
[2] Li, Kin Fun, et al. "A web-based sign language translator using 3d video processing."
2011 14th International Conference on Network-Based Information Systems. IEEE, 2011.
[3] Yin, Kayo, and Jesse Read. "Better sign language translation with STMC transformer." arXiv preprint arXiv:2004.00588 (2020).
[4] Halawani, Sami M. "Arabic sign language translation system on mobile devices." IJCSNS International Journal of Computer Science and Network Security 8.1 (2008): 251-256.
[5] Abhishek, Kalpattu S., Lee Chun Fai Qubeley, and Derek Ho. "Glove-based hand
gesture recognition sign language translator using capacitive touch sensor." 2016 IEEE
international conference on electron devices and solid-state circuits (EDSSC). IEEE,
2016.