
SIGN LANGUAGES DETECTION SYSTEM


MINOR PROJECT REPORT

Submitted To:

PARUL UNIVERSITY, VADODARA, GUJARAT (INDIA)

Submitted By:
Rahul Raslal Mandal (200305105120)
Rishi Jethva (200305105108)

Under The Guidance of:


Prof. Kapil Dev Raghuwanshi

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

PARUL INSTITUTE OF TECHNOLOGY VADODARA, GUJARAT


SESSION: AY 2022-2023

Parul University
Parul Institute of Technology

(Session: 2022 -2023)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that RAHUL RASLAL MANDAL and RISHI JETHVA,

students of CSE VI Semester of Parul Institute of Technology, Vadodara, have completed
their Minor Project titled “SIGN LANGUAGES DETECTION SYSTEM” as per the
syllabus and have submitted a satisfactory report on this project in partial fulfilment of the
requirements for the award of the degree of Bachelor of Technology in Computer Science
and Engineering under Parul University, Vadodara, Gujarat (India).

Sign: Sign: Sign:


Prof. Kapil Dev Raghuwanshi Prof. Sumitra Menaria Dr. Swapnil Parikh
(Project Guide) Head (CSE) Principal
PIT, Vadodara PIT, Vadodara PIT, Vadodara
(CSE / IT)

DECLARATION

We, the undersigned, solemnly declare that the project report “SIGN LANGUAGES
DETECTION SYSTEM” is based on our own work carried out during the course of our study
under the supervision of Prof. Kapil Dev Raghuwanshi, CSE, PIT, Vadodara.

We assert that the statements made and conclusions drawn are the outcomes of our own work.
We further certify that:

1. The work contained in the report is original and has been done by us under the general
supervision of our supervisor.

2. The work has not been submitted to any other institution for any other degree / diploma
/ certificate in this university or any other university of India or abroad.

3. We have followed the guidelines provided by the university in writing the report.

4. Whenever we have used materials (data, theoretical analysis, and text) from other sources,
we have given due credit to them in the text of the report and provided their details in the
references.

Rahul Raslal Mandal [200305105120] SIGN:

Rishi Jethva [200305105108] SIGN:



ACKNOWLEDGEMENT

In this semester, we completed our project on “SIGN LANGUAGES DETECTION
SYSTEM”. During this time, all the group members worked collaboratively on the project
and learnt how projects are developed to industry standards in IT companies. We also came
to understand the importance of teamwork while creating a project, and we got to learn new
technologies that we will work with in the near future.

We gratefully acknowledge the assistance, cooperation, guidance, and clarification
provided by Prof. Kapil Dev Raghuwanshi during the development of our project. We
would also like to thank our Head of Department, Prof. Sumitra Menaria, and our Principal,
Dr. Swapnil Parikh, for giving us the opportunity to develop this project. Their
continuous motivation and guidance helped us overcome the various obstacles on the way to
completing the project.

We perceive this as an opportunity and a big milestone in our career development. We will
strive to use the skills and knowledge we have gained in the best possible way, and we will
keep working to improve them.
Rahul Raslal Mandal [200305105120] SIGN:

Rishi Jethva [200305105108] SIGN:



LIST OF FIGURES

S. No.   Figure No.   Name of Figure
1        Fig. 1.1     Left hand landmark keypoints
2        Fig. 1.2     MediaPipe hand landmarks
3        Fig. 1.3     American Sign Language
4        Fig. 1.4     Indian Sign Language
5        Fig. 1.5     Project module diagram
6        Fig. 3.1     Use case diagram
7        Fig. 3.2     Import and Install Dependencies code
8        Fig. 3.3     Keypoints using MP Holistic code
9        Fig. 3.4     Extract Keypoint Values code
10       Fig. 3.5     Setup Folders for Collection code
11       Fig. 3.6     Collect Keypoint Values for Training and Testing code
12       Fig. 4       Preprocess Data and Create Labels and Features code
13       Fig. 4.1     Collect Keypoint Values for Training and Testing (data collection)
14       Fig. 4.1     Make Predictions code
15       Fig. 4.2     Evaluation using Confusion Matrix and Accuracy code
16       Fig. 4.3     Test in Real Time code

ABSTRACT
 Deaf and hard-of-hearing persons, as well as others who are unable to communicate
verbally, use sign language to communicate within their communities and with
others. Sign languages are predefined languages that communicate information
through a visual-manual modality. This report discusses the problem of real-time
finger-spelling recognition in sign language. We gathered a dataset for identifying 36
distinct gestures (alphabets and numerals) and a dataset of typical ISL hand gestures,
both created from scratch using webcam images.

 The system accepts a hand gesture as input and displays the identified character on
the monitor screen in real time. This project falls under the category of human-
computer interaction (HCI) and aims to recognise multiple alphabets (a-z), digits (0-9),
and several typical ISL hand gestures. To apply transfer learning to the problem, we
used an LSTM neural network architecture trained on our own dataset; in the vast
majority of situations, the resulting model is robust and classifies sign language
consistently. Many studies have been done in this area in the past employing sensors
(such as glove sensors) and other image processing techniques (such as edge
detection and the Hough Transform), but these technologies are quite costly, and many
people cannot afford them. During the study, various human-computer interaction
approaches for posture recognition were investigated and evaluated.

 The optimal solution was found to be the MediaPipe library, which converts the live
picture into landmarks and keypoints, yielding sign language recognition with an
accuracy of 70-80%. We are therefore creating this software to assist such people,
because it is free and simple to use. Aside from a small group of people, not everyone
is familiar with sign language, and an interpreter may be needed, which can be
cumbersome and costly. This research intends to bridge the communication gap by
building algorithms that can recognise alphanumeric hand gestures in sign language in
real time. The main goal of this research is to create a computer-based intelligent
system that will allow deaf persons to interact effectively with others by using hand
gestures.

CHAPTER I
INTRODUCTION

1.1 OVERVIEW

1.2 PROBLEM STATEMENT

1.3 OBJECTIVE OF PROJECT

1.4 APPLICATIONS OR SCOPE

1.5 ORGANIZATION OF REPORT



1.1 OVERVIEW

 The overview of our project is to build an essential tool to help deaf people
communicate with everyone. Sign language detection is an important tool for deaf
people to communicate and express their views more freely.
 The project is based on the Python programming language, which is highly
interactive and dynamic; with its help we were able to make the project a success.

Fig. 1.1: Left hand landmark keypoints

 We consulted many references to make this project accurate, and to make it more
reliable we also checked the quality of our code.
 This project falls within the HCI (Human-Computer Interaction) sector and seeks to
recognise multiple alphabets (a-z), digits (0-9), and several typical ISL hand gestures
such as "Thank you" and "Hello". Hand-gesture recognition is a difficult
problem, and ISL recognition is particularly difficult owing to the use of both hands.
Many studies have been done in the past employing sensors (such as glove sensors)
and various image processing techniques (such as edge detection and the Hough
Transform), but they are quite costly, and many people cannot afford them.

Fig. 1.2: MediaPipe hand landmarks

 Next, the report will detail the implementation of the system, including the software
and hardware requirements and the user interface. The report will also discuss the
testing process used to evaluate the system's performance, including the accuracy,
sensitivity, and specificity of the system.

 Finally, the report will conclude with a discussion of the potential impact of the
proposed system on sign language detection. This will include a comparison of the
system's performance with current methods, as well as a discussion of the potential
benefits and challenges associated with its deployment in real-world settings. Overall,
this report aims to provide a comprehensive overview of the proposed detection
system for deaf people and its potential to close the communication gap between deaf
people and everyone else.

1.2 PROBLEM STATEMENT

 The most frequent sensory deficiency in people today is hearing loss. According to
WHO estimates, there are roughly 63 million persons in India who suffer from
significant auditory impairment, putting the prevalence at 6.3 percent of the
population. According to the NSSO study, there are 291 people with severe to profound
hearing loss for every 100,000 people (NSSO, 2001). A substantial number of them are
children between the ages of 0 and 14. With such a huge population of hearing-impaired
young Indians, there is a significant loss of physical and economic output. The main
problem is that deaf and hard-of-hearing people find it difficult to interact with hearing
people, since most hearing people never learn to communicate using sign language.

 The solution is to develop a translator that can detect the sign language used by a
disabled person, feed that sign into a machine-learning model built with transfer
learning, let the neural network recognise it, and translate it on the screen so
that a hearing person can understand what the sign is saying. Communication has
become a lot easier thanks to speech-to-text and translators, but what about individuals
who are unable to speak or hear?

 The main goal of this project is to create an application that can assist persons who are
unable to speak or hear. The language barrier is also a very significant issue. People who
are unable to speak use hand signals and gestures, and ordinary people have trouble
comprehending them. As a result, a system that identifies various signs and gestures and
relays the information to ordinary people is required. It connects people who are
physically handicapped with those who are not. Using computer vision and neural
networks, we can recognise the signs and produce the appropriate text output.

 The system should allow the affected person to communicate on his or her own, and it
should be available to people on a budget: it is completely free, and anyone may use it.
Many firms are creating solutions for deaf and hard-of-hearing persons, but not everyone
can afford them, as some are too pricey for ordinary middle-class individuals.

1.3 OBJECTIVES OF PROJECT

 1. Develop a machine learning model that can accurately and efficiently detect
sign and hand gestures with the help of the dataset in the MP_Data folder.
 2. Evaluate the performance of the developed model in terms of accuracy,
sensitivity, specificity, and other relevant metrics, and compare it with existing
models.
 3. Explore different machine learning techniques and algorithms, such as deep
learning with LSTM neural networks, to identify the most effective neural network
model for sign language detection.
 4. Optimize the model to increase the accuracy of hand and sign detection and
improve the reliability and consistency of the results.
 5. Validate the developed model using independent datasets to ensure its
generalizability and applicability in real-time scenarios.
 6. Investigate the feasibility and utility of the developed model, including its
potential impact on deaf people, maintenance and update costs, and resource
utilization.
 7. Address ethical and regulatory issues related to the use of machine learning
models in real-time sign and hand gesture detection, in line with relevant
guidelines and regulations.
 8. Develop a user-friendly interface for the machine learning model that detects
the sign and hand gestures of a deaf person, converts them into English, and shows
the sentence on screen in real time.
 9. Conduct a thorough analysis of the dataset used in the project, including data
cleaning, preprocessing, and augmentation techniques, to ensure the quality and
representativeness of the data.
 10. Investigate the impact of imaging against different backgrounds on the
performance of the machine learning model and identify the most effective
approach for sign and hand gesture detection.
 11. Explore the use of transfer learning techniques to leverage pre-trained models
and reduce the amount of required data and training time.
 12. Investigate the impact of different factors, such as skin colour, hand shape,
and hand speed, on the performance of the machine learning model in identifying
the correct sign and hand gestures.

 13. Conduct a cost-effectiveness analysis of the developed model and compare it
with existing methods to assess its economic feasibility and potential savings.
 14. Investigate the potential of the developed model for predicting sign and
hand gestures against low-light or difficult backgrounds.
 15. Address the limitations and challenges of the developed model, such as
interpretability, generalizability, and ethical implications, and identify potential
areas for future research and development.

The ultimate objective of the sign language detection system project is to develop a
reliable, accurate, and efficient tool for sign language detection and communication
between deaf people and others, which can help a deaf person express their views more
easily. With the help of our tool, anyone can easily communicate with a deaf person.

1.4 Applications or Scope

The scope and potential applications of a sign language detection system built with
machine learning techniques are vast and can have a significant impact on communication
for deaf people. Some potential applications and future scope of such a project are:
 Applications
1. The dataset can easily be extended and customized according to the needs of the
user, and can prove to be an important step towards reducing the communication
gap for deaf and mute people.
2. Using the sign detection model, meetings held at a global level can become easier for
disabled people to follow, and their hard work can be properly valued.
3. The model can be used by any person with basic knowledge of technology and is thus
available to everyone.
4. The model can be introduced at elementary school level so that children get to know
sign language at a very young age.
 Future Scope
1. Implementation of our model for other sign languages such as Indian Sign
Language or American Sign Language.
2. Further training with larger datasets to recognize symbols efficiently.
3. Improving the model's ability to identify expressions.

In summary, the potential applications and scope of a SIGN LANGUAGES DETECTION
SYSTEM project using machine learning techniques are far-reaching, and the development of
an accurate and reliable model can have a significant impact on communication between deaf
people and everyone else.

1.5 Organization of Report

1. Abstract: A brief summary of the project, including the problem statement, objectives,
methods, and results.
2. Introduction: The communication problem faced by deaf people and the proposed
solution. The introduction also includes a clear statement of the problem and the objectives
of the project.
3. Literature Review: A review of relevant literature on sign language detection using
machine learning techniques, including an overview of existing models, datasets, and
performance metrics.
4. Methods: A detailed description of the methods used in the project, including data
collection, pre-processing, feature extraction, model development, evaluation, and
validation.
5. Results: A presentation of the results of the project, including the performance of the
developed model in terms of accuracy, sensitivity, specificity, and other relevant
metrics, together with a discussion of the potential impact of the developed model on
sign language detection.
6. Discussion: A discussion of the strengths and limitations of the developed model,
including its potential applications, challenges, and future research directions.
7. Conclusion: A summary of the main findings of the project, including the
contributions to the field of sign language detection using machine learning techniques.
8. References: A list of all references cited in the report, following a specific citation
format.
9. Appendices: Optional appendices can be included to provide additional details on the
project, such as data pre-processing steps, model architecture, and hyperparameters.

CHAPTER II
LITERATURE SURVEY

The purpose of the literature survey is to give a brief overview and to establish
complete information about the reference papers.

The goal of the literature survey is to completely specify the technical details related to the
main project in a concise and unambiguous manner.

Different approaches have been used by different researchers for the recognition of various
hand gestures, implemented in different fields. These approaches can be divided into three
broad categories:

1) Hand segmentation approaches,
2) Feature extraction approaches, and
3) Gesture recognition approaches.

The methods by which computers and humans communicate have changed in tandem with
the advancement of information technology. A lot of work has been done in this sector to
help deaf and hearing individuals communicate more successfully. Because sign language
consists of a series of gestures and postures, any attempt to recognise it falls within the
category of human-computer interaction.

The detection of sign language falls into two categories.

The first category is the data glove technique, in which the user wears a glove with
electromechanical devices attached to digitize hand and finger movements into processable
data. The downside of this approach is that you must constantly wear extra equipment, and
the results are less precise.

Computer-vision-based techniques, on the other hand, use only a camera and allow for
natural interaction between humans and computers without any extra devices. Alongside
significant advancements in the field of ASL, Indian researchers began to work on ISL. One
approach uses image keypoint detection with SIFT, followed by a comparison of a new
image's keypoints to the keypoints of standard images for each alphabet in a database, to
classify the new image with the label of the closest match. Similarly, various work has gone
into recognising edges efficiently; one idea was to combine colour data with bilateral
filtering on depth images to rectify edges.
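As a concrete illustration of the SIFT-based matching idea described above, the sketch below detects and matches keypoints between a query image and a reference alphabet image. It is purely illustrative rather than the cited authors' code: the file names are hypothetical, and it assumes OpenCV 4.4 or later, where SIFT is included in the main package.

import cv2

# Load a query image and a reference alphabet image (hypothetical file names)
query = cv2.imread('query_sign.png', cv2.IMREAD_GRAYSCALE)
reference = cv2.imread('reference_sign.png', cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors for both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(query, None)
kp2, des2 = sift.detectAndCompute(reference, None)

# Match descriptors and keep only confident matches (Lowe's ratio test);
# the reference image with the most good matches would label the query
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print('{} confident matches'.format(len(good)))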
Tips for communicating with someone who is hard of hearing:

a) Speak clearly, not loudly.
b) Talk at a reasonable speed.
c) Communicate face to face.
d) Create a quiet space.
e) Seek better options.
f) Make it easy to lip-read.
g) Choose a mask that allows for lip-reading.

These solutions are for persons who have a slight hearing impairment;
however, if a person is completely deaf, he or she will be unable to understand anything,
and sign language is then their best and only option. Deaf people rely on sign language as
their primary and only means of communication. Because sign language is a formal
language that uses a system of hand gestures to communicate, it is the sole means of
communication for those who are unable to speak or hear, and physically challenged
persons can convey their thoughts and emotions through it.

In this paper, a unique sign language identification technique for detecting alphabets and
gestures in sign language is proposed. Deaf individuals employ a style of communication
based on visual gestures and signs. The visual-manual modality is used to transmit meaning
in sign languages, which are mostly used by deaf or hard-of-hearing individuals.

Sign language is also used by children who are neither deaf nor hard of hearing: hearing
children who are nonverbal owing to conditions such as Down syndrome, autism,
cerebral palsy, trauma, brain disorders, or speech impairments make up another big group of
sign language users.

The ISL (Indian Sign Language) alphabet is used for fingerspelling. There is a symbol for each
letter of the alphabet. These letter signs can be used to spell out words (most commonly
names and locations) and phrases.

1) American Sign Language (ASL): -

The National Institute on Deafness and Other Communication Disorders (NIDCD) points out
that the 200-year-old American Sign Language is a complete, complex language and the main
language of most deaf people in North America. Much work has gone into building sign
recognition systems for ASL with modern technology. Studies have examined how different
CNN architectures and feature spaces perform, across various input sensors, gesture
segmentation methods, and classification schemes. Surveys of SLR systems analyse and
compare the classifiers used, identifying the most reliable options and directions for future
work, including recently proposed hybrid architectures and deep learning methods. Based on
such reviews, HMM-based approaches have been explored in detail, along with hybrid
CNN-HMM models and fully deep learning approaches.

Fig. 1.3: American Sign Language

2) Indian Sign Language (ISL): -



Indian Sign Language (ISL) is a complete language with its own grammar, syntax, and
vocabulary. It is used by over 5 million deaf people in India. Until recently, there was no
publicly available ISL dataset for testing sign language recognition (SLR) methods. The
INCLUDE dataset addresses this gap: it contains 0.27 million frames across 4,287 videos of
word signs recorded by experienced signers under conditions resembling natural use. A
subset of 50 word signs spanning all word categories, INCLUDE-50, is provided for rapid
testing of SLR methods with hyperparameter tuning. Evaluations of several deep neural
networks combining different feature extraction and encoding techniques report that the
most efficient model achieves 94.5% accuracy on the INCLUDE-50 dataset and 85.6% on
the full INCLUDE dataset; this model uses pre-trained features and trains only the output
layers. Fine-tuning on an American Sign Language video dataset (ASLLVD, with 48
classes) yields 92.1% accuracy, improving on existing results and providing effective
support for multilingual SLR.

Fig. 1.4: Indian Sign Language



Literature survey of sign language detection systems

Here are some key findings from the literature survey on sign language detection:
 Sign language detection systems can be built from a variety of components, such as
OpenCV, MediaPipe landmarks, and LSTM neural networks.
 Machine learning and other advanced analytical methods can be used to identify
sign and hand gestures effectively.

Overall, the literature suggests that a sign language detection system is a valuable tool
for improving communication between deaf and hearing people, by identifying sign and
hand gestures from camera and visual data such as the keypoints we have collected in the
MP_Data folder.

CHAPTER III
METHODOLOGY

3.1 BACKGROUND / OVERVIEW OF METHODOLOGY

3.2 PROJECT PLATFORMS USED IN PROJECT

3.3 PROPOSED METHODOLOGY

3.4 PROJECT MODULES

3.5 DIAGRAMS (ER, USE CASE DFD, ETC.)



3.1 Background / Overview of Methodology

I. Introduction

Many people in India are deaf or hard of hearing and therefore communicate with others using
hand gestures. However, aside from a small group of people, not everyone is familiar with
sign language, and an interpreter may be needed, which can be complex and costly. The goal
of this research is to build software that can recognise ISL alphanumeric hand movements in
real time, bridging the communication gap.

This project aims to achieve the following objectives:
1. Build a platform through which deaf and hearing people can communicate even
if the hearing person does not know sign language.
2. Build a system that converts sign language into English sentences live, with the help
of a camera.
3. Choose an appropriate machine learning model and optimize its performance through
training and validation.
4. Deploy the neural network model that fits the system best.
5. Predict which sign or hand gesture is on camera using the trained model, and record
the accuracy and time taken.
6. Regularly update and maintain the system to ensure its accuracy and reliability.

II. Literature Review

The literature review provides an overview of how to increase model accuracy and make
our system more reliable. It also reviews the various approaches used to develop reliable
sign language detection systems, including statistical models, machine learning algorithms,
and deep learning techniques. The review identifies gaps in knowledge and potential areas
for improvement.
III. Data Collection

In the data collection phase, we collect images of the a-z sign language alphabet through the
camera and store them in the MP_Data folder, which we create ourselves. The sources of the
data will be clearly defined, and the data will be checked for missing values, anomalies, and
outliers. The data will be prepared by cleaning, filtering, and processing it to make it ready
for use in the prediction system.

IV. Model Selection

Choosing the appropriate machine learning model is critical for developing an accurate and
effective sign language detection system. Different machine learning algorithms have
different strengths and weaknesses, and the selection of a model should be based on the
dataset and the problem to be solved. A commonly used model for sign language detection
is a Keras Sequential model built around an LSTM neural network. The model selection
process should involve comparing the performance of different models using appropriate
evaluation metrics.

V. Training and Validation


Once the dataset and model have been selected, the model needs to be trained and validated
using the selected dataset. The dataset should be divided into training and testing sets, with a
large proportion of the data used for training the model and a smaller portion used for testing.
The model should be evaluated using metrics such as sensitivity, specificity, accuracy, and
ROC curve, and the performance of the model should be validated on a separate testing
dataset. Hyperparameter tuning can be used to fine-tune the model's parameters to optimize
its performance. Cross-validation can be used to assess the stability and generalizability of
the model.

VI. Deployment

After the model has been trained, validated, and optimized, it needs to be deployed in a
production environment where it can be used for sign language prediction. This involves
integrating the model with the surrounding software and testing it on real-world data in
real time.

3.2 Project Platforms Used in the Project


The sign language detection system project was a significant undertaking that required the
use of several technologies to ensure the efficient and effective development of the
application. These technologies helped to create a robust system that can recognise sign
and hand gestures with a high degree of accuracy. The following section provides a more
detailed analysis of the technologies used in the project.

TensorFlow: As an open-source library for deep learning and machine learning,
TensorFlow plays a role in text-based applications, image recognition, voice search, and
much more. DeepFace, Facebook's image recognition system, uses TensorFlow for image
recognition, Apple's Siri uses it for voice recognition, and many Google apps make good
use of TensorFlow to improve the user experience.

TensorFlow-GPU: TensorFlow is a free and open-source software library for machine
learning created by Google; its GPU build is most notably known for its GPU-accelerated
computation speed.

OpenCV: a Python library that allows you to perform image processing and computer
vision tasks. It provides a wide range of features, including object detection, face recognition,
and tracking.

MediaPipe: a framework mainly used for rapid prototyping of perception pipelines with AI
models for inference and other reusable components. It also facilitates the deployment of
computer vision applications into demos and applications on different hardware platforms.

Scikit-learn (sklearn): the most useful and robust library for machine learning in Python. It
provides a selection of efficient tools for machine learning and statistical modelling, including
classification, regression, clustering, and dimensionality reduction, via a consistent interface
in Python.

The OS module in Python provides functions for creating and removing a directory (folder),
fetching its contents, changing and identifying the current directory, etc. You first need to
import the os module to interact with the underlying operating system.

Matplotlib is a Python library used to create 2D graphs and plots from Python scripts. It
has a module named pyplot which makes plotting easy by providing features to
control line styles, font properties, axis formatting, and more.

NumPy is a Python library that is widely used for numerical computing. In the project,
NumPy was used to perform mathematical operations and computations on the dataset, such
as statistical analysis and data manipulation. NumPy is an essential library in machine
learning as it provides fast and efficient mathematical computations that can be performed on
arrays, making it easier to work with data. The ability to perform efficient numerical
operations makes NumPy a crucial component in the development of machine learning
models.
Python is a high-level programming language that was used extensively in the project for
developing the algorithms and models used in the sign language detection system. Python is
an excellent choice for machine learning projects as it has a vast range of libraries and
packages, making it easier to build machine learning models. Developing the algorithms
and models in Python allowed us to create a robust system that detects signs with a high
degree of accuracy.

Anaconda is a distribution of the Python and R programming languages that is commonly used
for scientific computing. In this project, Anaconda was used to create a virtual environment,
ensuring that the required dependencies and packages were installed. Anaconda provides an
easy way to manage Python environments, making it easier to work on multiple projects. The
ability to create virtual environments makes Anaconda an excellent tool for developing
machine learning applications that require specific packages and dependencies.

Google Colab is a cloud-based platform that provides a free Jupyter notebook environment,
enabling the development and execution of machine learning models on Google's servers. In
this project, Google Colab was used to run the machine learning models on a virtual machine,
allowing for faster training and testing. Google Colab is a useful platform for machine
learning projects as its cloud-based environment lets users work on their projects from
anywhere.

GitHub is a web-based platform used for version control and collaboration in software
development projects. In this project, GitHub was used to manage and store the project's
source code, allowing for collaboration and version control. GitHub provides a secure and
efficient way to manage source code, making it easier to work with multiple team members.
The ability to collaborate and manage source code efficiently makes GitHub an essential tool
in software development projects.

In conclusion, the sign language detection system project was a significant undertaking that
required the use of several technologies to ensure its success. The technologies used in the
project were carefully chosen to provide the functionality required to create an accurate and
robust system. TensorFlow, TensorFlow-GPU, OpenCV, scikit-learn, Matplotlib, NumPy,
Python, Anaconda, Google Colab, GitHub, and MediaPipe were all crucial in the development
of a reliable sign language detection system. In particular, MediaPipe Holistic's ability to
extract hand, face, and pose landmark keypoints from the camera feed increases the system's
accuracy and reduces the amount of data required, because only the landmark keypoints need
to be stored.

3.3 Proposed Methodology

The methodology for the SIGN LANGUAGES DETECTION SYSTEM project involves
several key steps: data collection, data pre-processing, model development, and model
evaluation.

Data Collection: The first step in the project is the collection of data. The data will be
collected using the following code:

cap = cv2.VideoCapture(0)
# Set mediapipe model
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    # Loop through actions
    for action in actions:
        # Loop through sequences aka videos
        for sequence in range(no_sequences):
            # Loop through video length aka sequence length
            for frame_num in range(sequence_length):

                # Read feed
                ret, frame = cap.read()

                # Make detections
                image, results = mediapipe_detection(frame, holistic)

                # Draw landmarks
                draw_styled_landmarks(image, results)

                # Apply wait logic: pause briefly at the start of each video
                if frame_num == 0:
                    cv2.putText(image, 'STARTING COLLECTION', (120, 200),
                                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 4, cv2.LINE_AA)
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence),
                                (15, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV Feed', image)
                    cv2.waitKey(2000)
                else:
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence),
                                (15, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV Feed', image)

                # Export keypoints for this frame to disk
                keypoints = extract_keypoints(results)
                npy_path = os.path.join(DATA_PATH, action, str(sequence), str(frame_num))
                np.save(npy_path, keypoints)

                # Break gracefully
                if cv2.waitKey(10) & 0xFF == ord('q'):
                    break

cap.release()
cv2.destroyAllWindows()

This code stores the collected keypoint data in the MP_Data folder, which we create
beforehand with the following code:

DATA_PATH = os.path.join('MP_Data')

# Actions that we try to detect
actions = np.array(['hello', 'thanks', 'iloveyou'])

# Thirty videos worth of data
no_sequences = 30

# Each video is 30 frames long (needed as sequence_length by the collection
# loop above; 30 matches the model's input shape of (30, 1662))
sequence_length = 30

for action in actions:
    for sequence in range(no_sequences):
        try:
            os.makedirs(os.path.join(DATA_PATH, action, str(sequence)))
        except:
            pass

Data Pre-processing: After collecting the data, the next step is to pre-process it. The data
pre-processing stage involves several steps such as cleaning, normalization, and feature
extraction. During the cleaning process, the data will be inspected for missing values,
duplicates, and outliers. Missing values and outliers will be replaced or removed as
necessary. Normalization will be done to ensure that the data is on a similar scale. Feature
extraction will be done to select the most important attributes that will be used in the
development of the prediction model. The pre-processing stage is critical to ensure that the
data is accurate, consistent, and suitable for use in the model development process.
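To make this concrete, the sketch below shows how the saved keypoint sequences can be loaded from MP_Data and turned into feature and label arrays. It is a minimal sketch rather than the verbatim project code: it assumes the DATA_PATH, actions, no_sequences, and sequence_length variables defined in the data collection step, and the label_map dictionary is a helper introduced here for illustration.

import os
import numpy as np
from tensorflow.keras.utils import to_categorical

# Map each action name to an integer class label (illustrative helper)
label_map = {label: num for num, label in enumerate(actions)}

sequences, labels = [], []
for action in actions:
    for sequence in range(no_sequences):
        # Load the per-frame keypoint vectors saved for this recording
        window = [np.load(os.path.join(DATA_PATH, action, str(sequence), '{}.npy'.format(frame_num)))
                  for frame_num in range(sequence_length)]
        sequences.append(window)
        labels.append(label_map[action])

X = np.array(sequences)                  # shape: (num_videos, 30, 1662)
y = to_categorical(labels).astype(int)   # one-hot encoded labels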

Exploratory Data Analysis (EDA): After preprocessing the data, the next step is to perform
exploratory data analysis. EDA is an essential step that allows the project team to understand
the data and gain insights into the relationships between the different variables. The EDA
process may involve techniques such as scatter plots, box plots, and correlation matrices. EDA
can help to identify potential relationships between different variables that may be used in the
model development process.
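As a small illustration of such exploratory checks, the sketch below plots the distribution of one keypoint feature per class with Matplotlib. It is purely illustrative: it assumes the X and labels arrays built in the pre-processing sketch above, and feature index 0 is an arbitrary choice.

import matplotlib.pyplot as plt

# Compare the distribution of the first keypoint feature across the classes
for num, action in enumerate(actions):
    values = X[np.array(labels) == num][:, :, 0].flatten()
    plt.hist(values, bins=30, alpha=0.5, label=action)
plt.legend()
plt.xlabel('keypoint value (feature 0)')
plt.ylabel('frequency')
plt.show()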

Model Development: The next step in the project is to separate the training and test data using:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)

This is followed by the development of the prediction model. The model is built as a Keras
Sequential model and trained as an LSTM neural network to predict the sign and hand
gestures. The model will be trained using the pre-processed data and evaluated using metrics
such as accuracy and precision. The evaluation will use a confusion matrix to ensure that the
model is robust and can generalize well to unseen data.

Model Deployment: Once the Sequential LSTM model has been developed and evaluated,
the next step is to train it for 200 epochs or more, which gives at least 70 to 90% accuracy in
the prediction model. Development is done in a Jupyter notebook under Anaconda.

The model code is as follows:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import TensorBoard

log_dir = os.path.join('Logs')
tb_callback = TensorBoard(log_dir=log_dir)

model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)))
model.add(LSTM(128, return_sequences=True, activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(actions.shape[0], activation='softmax'))

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(X_train, y_train, epochs=2000, callbacks=[tb_callback])

Performance Monitoring: After the deployment of the model, the next step is to monitor its
performance. The performance of the model will be monitored using various metrics such as
the number of requests, response time, and error rates. The performance monitoring process
will help to identify potential issues and bottlenecks in the deployment process. The
performance metrics will be regularly reviewed and optimized to ensure that the model is
functioning optimally.
We can inspect the trained model's architecture and parameter counts with the following code:

model.summary()

User Feedback: The final step in the project is to gather user feedback. The user feedback
process will involve soliciting feedback from end-users to identify potential improvements
and areas for optimization. User feedback will be gathered using various methods such as
surveys, feedback forms, and focus groups. The feedback will be analysed and used to
identify areas for improvement and optimization.

In summary, the SIGN LANGUAGES DETECTION SYSTEM project involves the
collection and pre-processing of data, the development of a machine learning model, the
deployment of the model in a Jupyter notebook under Anaconda, performance monitoring,
and user feedback. These steps are critical to ensure that the model is accurate, efficient, and
user-friendly.

3.4 Project Modules

The SIGN LANGUAGES DETECTION SYSTEM project is composed of several modules,
each with specific functionality that contributes to the system's effectiveness. Let's take a
closer look at each module in more detail:

Data Collection Module: The first step in the project is the collection of data. The data
collection module is as follows:

cap = cv2.VideoCapture(0)
# Set mediapipe model
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    # Loop through actions
    for action in actions:
        # Loop through sequences aka videos
        for sequence in range(no_sequences):
            # Loop through video length aka sequence length
            for frame_num in range(sequence_length):

                # Read feed
                ret, frame = cap.read()

                # Make detections
                image, results = mediapipe_detection(frame, holistic)

                # Draw landmarks
                draw_styled_landmarks(image, results)

                # Apply wait logic: pause briefly at the start of each video
                if frame_num == 0:
                    cv2.putText(image, 'STARTING COLLECTION', (120, 200),
                                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 4, cv2.LINE_AA)
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence),
                                (15, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV Feed', image)
                    cv2.waitKey(2000)
                else:
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence),
                                (15, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV Feed', image)

                # Export keypoints for this frame to disk
                keypoints = extract_keypoints(results)
                npy_path = os.path.join(DATA_PATH, action, str(sequence), str(frame_num))
                np.save(npy_path, keypoints)

                # Break gracefully
                if cv2.waitKey(10) & 0xFF == ord('q'):
                    break

cap.release()
cv2.destroyAllWindows()

This code stores the collected keypoint data in the MP_Data folder, which we create
beforehand with the following code:

DATA_PATH = os.path.join('MP_Data')

# Actions that we try to detect
actions = np.array(['hello', 'thanks', 'iloveyou'])

# Thirty videos worth of data
no_sequences = 30

# Each video is 30 frames long (needed as sequence_length by the collection
# loop above; 30 matches the model's input shape of (30, 1662))
sequence_length = 30

for action in actions:
    for sequence in range(no_sequences):
        try:
            os.makedirs(os.path.join(DATA_PATH, action, str(sequence)))
        except:
            pass
Data Pre-processing Module: After the data is collected, it needs to be pre-processed before
being used for prediction. This module involves cleaning the data, handling missing values,
and performing feature scaling, among other tasks. The goal of this module is to ensure that
the data is in a format that can be used by the machine learning model.

Machine Learning Module: The machine learning module builds a Sequential model and
trains an LSTM neural network. The model is trained for 200 epochs or more, which gives
at least 70 to 90% accuracy in the prediction model. It is developed in a Jupyter notebook
under Anaconda.

The model code is as follows:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import TensorBoard

log_dir = os.path.join('Logs')
tb_callback = TensorBoard(log_dir=log_dir)

model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)))
model.add(LSTM(128, return_sequences=True, activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(actions.shape[0], activation='softmax'))

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(X_train, y_train, epochs=2000, callbacks=[tb_callback])

Model Deployment Module: After the model is developed, it needs to be deployed to a


production environment. The module involves integrating the machine learning model with
the user interface and ensuring that the system runs smoothly. This module also includes
testing the model's performance and making any necessary adjustments to ensure that it
performs well. The goal of this module is to ensure that the system is ready to be used by
end-users and that the model is performing as expected.

Security Module: This module ensures the security of the user's data and the system as a
whole. It includes measures such as encryption, secure socket layer (SSL) certificates, and
access control mechanisms to prevent unauthorized access to the system. The goal of this
module is to ensure that the user's data is safe and secure and that the system is protected
from attacks.

Each of these modules plays a critical role in the sign language detection system, and they
need to work together seamlessly to produce accurate and reliable results for the end-user.
The project's success depends on the effectiveness of each module, and constant evaluation
and improvement are necessary to enhance the system's performance. The sign language
detection system project is complex and requires a high level of expertise in machine
learning, software engineering, and security to develop a functional and efficient system.

3.5 Diagrams

 Project Module Diagram (Fig. 1.5)

 Use case Diagram

The Use Case diagram depicts the system's functionality by representing different actors
and their interactions with the system. It provides a clear understanding of the system's
use cases and helps to identify the actors and the corresponding use cases, which are
critical for defining the system's requirements.
A use case diagram is a visual representation of the interactions between actors (users or
systems) and a system or application. It is a type of UML (Unified Modelling Language)
diagram that is commonly used in software engineering to model the functionality of a
system or application.

A use case diagram consists of a set of use cases (represented by ovals) that describe the
actions or functions performed by the system or application. Each use case is associated
with one or more actors (represented by stick figures) that interact with the system or
application to accomplish a specific goal.
The actors in a use case diagram may include users, other systems or applications, or even
external entities such as sensors or devices. The use cases describe the interactions
between the actors and the system or application, and may include preconditions,
postconditions, and exceptions.

Use case diagrams are often used early in the development process to help stakeholders
understand the functional requirements of a system or application, and to identify
potential areas of improvement. They can also be used to validate and refine the system or
application design, and to communicate the design to developers, testers, and other
stakeholders.
Overall, a use case diagram provides a high-level overview of the functionality of a
system or application, and helps stakeholders to visualize the interactions between the
actors and the system or application.

CHAPTER IV
IMPLEMENTATION

4.1 MAIN FUNCTIONS WITH EXPLANATION

4.2 CODING WITH EXPLANATION



4.1 Main Functions with Explanation

The implementation phase of the project is where the detailed design is transformed
into working code. The aim of this phase is to translate the design into the best possible
solution in a suitable programming language. This chapter covers the implementation
aspects of the project, giving details of the programming language and development
environment used. It also gives an overview of the core modules of the project with their
step-by-step flow.
The implementation stage requires the following tasks:

• Careful planning.
• Investigation of the system and constraints.
• Design of methods to achieve the changeover.
• Evaluation of the changeover method.
• Correct decisions regarding selection of the platform
• Appropriate selection of the language for application development

4.2 Code with Explanation

1. Import and Install Dependencies
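The install and import step appeared in the original report as a screenshot (Fig. 3.2). A minimal sketch of what this step typically looks like, based on the libraries described in Section 3.2, is given below; the exact package versions used in the project are not recorded here.

# Install the dependencies once, e.g.:
# pip install tensorflow opencv-python mediapipe scikit-learn matplotlib

import cv2                       # OpenCV: webcam capture and drawing
import numpy as np               # numerical arrays for keypoints
import os                        # filesystem paths for the dataset
import mediapipe as mp           # holistic landmark detection
import matplotlib.pyplot as plt  # plotting and inspection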



2. Keypoints using MP Holistic
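This step also appeared as a screenshot (Fig. 3.3). The data collection code in Section 3.3 relies on two helper functions, mediapipe_detection and draw_styled_landmarks; a sketch of how they are commonly written against the MediaPipe Holistic API is shown below (note that the FACEMESH_TESSELATION connection set is version-dependent; older MediaPipe releases named it FACE_CONNECTIONS).

mp_holistic = mp.solutions.holistic      # holistic landmark model
mp_drawing = mp.solutions.drawing_utils  # drawing utilities

def mediapipe_detection(image, model):
    # OpenCV delivers BGR frames; MediaPipe expects RGB
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image.flags.writeable = False    # lock the frame for a small speed-up
    results = model.process(image)   # run landmark detection
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    return image, results

def draw_styled_landmarks(image, results):
    # Draw face, pose and hand landmark connections onto the frame
    mp_drawing.draw_landmarks(image, results.face_landmarks, mp_holistic.FACEMESH_TESSELATION)
    mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
    mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
    mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)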

3. Extract Keypoint Values
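The extraction code appeared as a screenshot (Fig. 3.4). The model's input shape of (30, 1662) implies that each frame is flattened into a 1662-value vector: 33 pose landmarks x 4 values, 468 face landmarks x 3 values, and 21 landmarks x 3 values per hand (132 + 1404 + 63 + 63 = 1662). A sketch of an extract_keypoints function consistent with that layout:

def extract_keypoints(results):
    # 33 pose landmarks x (x, y, z, visibility) = 132 values
    pose = (np.array([[r.x, r.y, r.z, r.visibility] for r in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    # 468 face landmarks x (x, y, z) = 1404 values
    face = (np.array([[r.x, r.y, r.z] for r in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    # 21 hand landmarks x (x, y, z) = 63 values per hand
    lh = (np.array([[r.x, r.y, r.z] for r in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[r.x, r.y, r.z] for r in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame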



4. Setup Folders for Collection

5. Collect Keypoint Values for Training and Testing (Data Collection)

6. Preprocess Data and Create Labels and Features

7. Separate test data and training data

8. Build and Train the Sequential LSTM Neural Network

9. Make Predictions
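The prediction code appeared as a screenshot (Fig. 4.1). A minimal sketch of this step, assuming the model, X_test, y_test, and actions objects from the earlier sections:

# Predict on the held-out test set and compare one prediction with its label
res = model.predict(X_test)
print(actions[np.argmax(res[0])])     # predicted action for the first test sequence
print(actions[np.argmax(y_test[0])])  # true action for the first test sequence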

10. Save Trained Model
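The save step was also shown as a screenshot. A minimal sketch using the standard Keras save and load calls; the file name 'action.h5' is an assumption, not taken from the report:

model.save('action.h5')          # persist architecture + weights ('action.h5' is hypothetical)
model.load_weights('action.h5')  # later: rebuild the model and reload the weights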



11. Evaluation using Confusion Matrix and Accuracy
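The evaluation code appeared as a screenshot (Fig. 4.2). A sketch of computing a per-class confusion matrix and overall accuracy with scikit-learn, assuming the test split from the earlier sections:

from sklearn.metrics import multilabel_confusion_matrix, accuracy_score

# Convert one-hot predictions and labels back to class indices
yhat = model.predict(X_test)
ytrue = np.argmax(y_test, axis=1).tolist()
yhat = np.argmax(yhat, axis=1).tolist()

print(multilabel_confusion_matrix(ytrue, yhat))  # one 2x2 matrix per class
print(accuracy_score(ytrue, yhat))               # overall accuracy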

12. Test in Real Time
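The real-time test appeared as a screenshot (Fig. 4.3). A condensed sketch of the usual real-time loop, reusing the helpers sketched above; the 0.8 confidence threshold and the 5-word on-screen sentence cap are assumptions, not values taken from the report:

sequence, sentence = [], []
threshold = 0.8  # assumed confidence threshold

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()
        image, results = mediapipe_detection(frame, holistic)
        draw_styled_landmarks(image, results)

        # Keep a rolling window of the most recent 30 frames of keypoints
        sequence.append(extract_keypoints(results))
        sequence = sequence[-30:]

        if len(sequence) == 30:
            res = model.predict(np.expand_dims(sequence, axis=0))[0]
            # Append the predicted word when confident and it differs from the last one
            if res[np.argmax(res)] > threshold:
                word = actions[np.argmax(res)]
                if not sentence or word != sentence[-1]:
                    sentence.append(word)
            sentence = sentence[-5:]  # keep the on-screen sentence short

        cv2.putText(image, ' '.join(sentence), (3, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
        cv2.imshow('OpenCV Feed', image)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()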



CHAPTER V
RESULTS

LSTM Neural Network Results

Model summary and accuracy



The model achieves an accuracy of 78%.

CHAPTER VI
USER MANUAL

6.1 SOFTWARE REQUIREMENTS

6.2 HARDWARE REQUIREMENTS

6.3 STEPS TO RUN THE PROJECT

6.4 APPLICATION / EXE OF PROJECT (IF APPLICABLE)

6.1 SOFTWARE REQUIREMENTS

Designing a SIGN LANGUAGES DETECTION SYSTEM requires an interactive computing
platform for writing code, collecting data, running real-time tests, and so on, such as a
Jupyter notebook under Anaconda.

Data collection and processing: The system must be able to collect and process large volumes
of image data in order to increase the accuracy of the system.

Data storage and retrieval: The system should have a robust database to store all data
securely and efficiently. The system should also be able to retrieve the data quickly and
accurately.

Machine learning models: The system should have machine learning algorithms that can
analyze the data and predict the sign language through the camera. The models can include
supervised or unsupervised learning algorithms, such as decision trees, support vector
machines, and neural networks.

User interface: The system should have a user-friendly interface so that deaf people can use
it more easily.

Continuous improvement: The system should be designed to continuously learn from new
data, update the models, and improve the accuracy of the predictions. This is important to
ensure that the system maintains high accuracy.

In summary, the software requirements for a SIGN LANGUAGES DETECTION SYSTEM


involve data collection and processing, data storage and retrieval, machine learning models,
user interface, security and privacy, and continuous improvement.

6.2 HARDWARE REQUIREMENTS

The hardware requirements for a sign language detection system depend on the size of the
dataset, the complexity of the machine learning models, and the desired speed and accuracy
of the system. Here are some general hardware requirements to consider:

Processor: The system should have a powerful processor to handle the large amounts of data
and complex machine learning algorithms. A multi-core processor with a clock speed of 3
GHz or higher is recommended.

Memory: The system should have enough memory to store the data and models. At least 8
GB of RAM is recommended, and more is better for larger datasets and more complex
models.

Storage: The system should have sufficient storage space to store the data, models, and other
system files. A solid-state drive (SSD) is recommended for faster data access and better
performance.

Graphics processing unit (GPU): A GPU can accelerate the processing of large datasets and
machine learning algorithms. A high-end GPU with at least 8 GB of memory is
recommended.

Network connectivity: The system should have fast and reliable network connectivity to
access and share data with other systems.

Backup and recovery: The system should have a backup and recovery strategy to prevent data
loss in case of hardware failure or system crashes.

It is important to note that the hardware requirements may vary based on the specific
requirements of the sign language detection system. It is recommended to consult with
experts in data science and software development to determine the optimal hardware
requirements for your specific system.

6.3 STEPS TO RUN THE PROJECT

The steps to run a SIGN LANGUAGES DETECTION SYSTEM can vary depending on the
specific system and the software and hardware configurations used. However, here are some
general steps that can be followed:

Collect and process the data: Gather the required data through the camera and process it
into a format that can be used by the SIGN LANGUAGES DETECTION SYSTEM.

Build and train the machine learning models: Build the machine learning models using the
processed data. Train the models using a subset of the data, and evaluate their accuracy.
Select the best model based on its performance metrics, such as sensitivity, specificity, and
accuracy.

Integrate the models into the software: Integrate the selected machine learning models into
the SIGN LANGUAGES DETECTION SYSTEM software. Ensure that the software can
receive input data in the correct format and output the predictions in a meaningful way.

Test the system: Test the SIGN LANGUAGES DETECTION SYSTEM using a new set of
data in real time. Evaluate the performance of the system by comparing its predictions with
the actual outcomes.

Continuously monitor and improve the system: Monitor the performance of the system
continuously and make improvements as necessary. Update the machine learning models
with new data and new research findings to improve the accuracy and reliability of the
system.

CHAPTER VII
CONCLUSION & FUTURE SCOPE

7.1 CONCLUSION

The fundamental goal of a sign language detection system is to provide a practical
mechanism for hearing and deaf individuals to communicate through hand gestures. The
proposed system is used with a webcam or any other built-in camera that detects and
processes signs for recognition.

From the model's findings we may deduce that the proposed system can produce reliable
results under conditions of controlled light and intensity. Furthermore, new gestures can
easily be incorporated, and more images captured from various angles and frames will give
the model greater accuracy. As a result, by expanding the dataset, the model can simply be
scaled up to a large size.

The model has some limitations, such as environmental conditions (low light intensity
and an uncontrolled backdrop) which reduce detection accuracy. Consequently, we will
attempt to fix these problems as well as expand the dataset for more accurate results.

7.2 FUTURE WORK

SIGN LANGUAGES DETECTION SYSTEMs have made significant progress in recent
years, but there is still a lot of potential for further development and improvement. Here are
some potential future directions:

1) Make the project two-way: extend it so that it not only detects sign language but also
converts English sentences into animated sign language, so that a deaf person can also
understand what the other person is trying to say.

2) Implement the model for other sign languages, such as Indian Sign Language or
American Sign Language.

3) Further train the model with larger datasets to recognise symbols efficiently.

4) Improve the model's ability to identify expressions.



CHAPTER VIII
REFERENCES

 NumPy Documentation, https://numpy.org/doc/stable/
 Matplotlib Documentation, https://matplotlib.org/stable/index.html
 MediaPipe Documentation, https://mediapipe.readthedocs.io/
 Jamie Berke and James Lacy (March 01, 2021), "Hearing Loss/Deafness | Sign Language",
https://www.verywellhealth.com/sign-language-nonverbal-users-1046848
 International Journal for Research in Applied Science & Engineering Technology
(IJRASET)
 "National Health Mission - report of deaf people in India", nhm.gov.in, 21-12-2021
 "Computer Vision", https://www.ibm.com/in-en/topics/computer-vision
 Stephanie Thurrott (November 22, 2021), "The Best Ways to Communicate with
Someone Who Doesn't Hear Well",
https://www.forbes.com/sites/bernardmarr/2019/04/08/7-amazing-examples-of-computer-and-machine-vision-in-practice/?sh=60a27b1b1018
 Great Learning Team, "Sign Language Detection using ACTION RECOGNITION
with Python | LSTM Deep Learning Model", https://youtu.be/doDUihpj6ro
 Jeffrey Dean, minute 0:47 / 2:17 from the YouTube clip "TensorFlow: Open source
machine learning", Google, 2015 (archived from the original on November 11, 2021):
"It is machine learning software being used for various kinds of perceptual and
language understanding tasks"
 https://github.com/nicknochnack
 Ramswarup Kulhary, "OpenCV - Python", https://www.geeksforgeeks.org/opencv-overview/
 "About Python", Python Software Foundation, archived from the original on 20 April
2012; Guido van Rossum (20 January 2009), retrieved 24 April 2012
 Jason Brownlee (December 20, 2017; updated September 16, 2019), "A Gentle
Introduction to Transfer Learning for Deep Learning", Deep Learning for Computer
Vision, machinelearningmastery.com
 "Transfer learning", tensorflow.org,
https://www.tensorflow.org/tutorials/images/transfer_learning
