Sign Language Detection System report
Submitted To:
Submitted By:
Rahul Raslal Mandal (200305105120)
RISHI JETHVA (200305105108)
Parul University
Parul Institute of Technology
CERTIFICATE
DECLARATION
We, the undersigned, solemnly declare that the project report "SIGN LANGUAGE DETECTION SYSTEM" is based on our own work carried out during the course of our study under the supervision of Prof. Kapil Dev Raghuwanshi, CSE, PIT, Vadodara.
We assert that the statements made and conclusions drawn are the outcomes of our own work. We further certify that:
1. The work contained in the report is original and has been done by us under the general supervision of our supervisor.
2. The work has not been submitted to any other institution for any other degree / diploma / certificate in this university or any other university in India or abroad.
3. We have followed the guidelines provided by the university in writing the report.
4. Whenever we have used materials (data, theoretical analysis, and text) from other sources, we have given due credit to them in the text of the report and provided their details in the references.
ACKNOWLEDGEMENT
LIST OF FIGURES
ABSTRACT
Deaf and hard-of-hearing persons, as well as others who are unable to communicate verbally, use sign language to communicate within their communities and with others. Sign languages are a set of predefined languages that communicate information using a visual-manual modality. This report addresses the problem of real-time finger-spelling recognition in Sign Language. We gathered a dataset for identifying 36 distinct gestures (alphabets and numerals) and a dataset of typical ISL hand gestures, both created from scratch using webcam images.
The system accepts a hand gesture as input and displays the identified character on the monitor screen in real time. This project falls under the category of human-computer interaction (HCI) and aims to recognise the alphabets (a-z), digits (0-9) and several typical ISL hand gestures. We trained an LSTM Neural Network architecture on our own dataset, applying transfer learning to the problem. In the vast majority of situations, the resulting model classifies sign language robustly and consistently. Many studies have been done in the past in this area employing sensors (such as glove sensors) and other image processing techniques (such as edge detection, the Hough Transform, and so on), but these technologies are quite costly, and many people cannot afford them. During the study, various human-computer interaction approaches for posture recognition were investigated and evaluated.
The optimum solution found was to use the MediaPipe library, which converts live images into landmarks and keypoints, from which sign language signs are recognised with an accuracy of 70-80%. We are therefore creating this software to assist such people, as it is free and simple to use. Aside from a small group of people, not everyone is familiar with sign language, and others may need an interpreter, which can be cumbersome and costly. This research intends to bridge the communication gap by building algorithms that can predict alphanumeric hand gestures in sign language in real time. The main goal of this research is to create a computer-based intelligent system that will allow deaf persons to interact effectively with others by utilising hand gestures.
CHAPTER I
INTRODUCTION
1.1 OVERVIEW
We consulted many references to make this project accurate and reliable, and we also checked the quality of our code and its supporting content.
This project falls within the HCI (Human-Computer Interaction) sector and seeks to recognise the alphabets (a-z), digits (0-9) and several typical ISL hand gestures such as "Thank you" and "Hello". Hand-gesture recognition is a difficult problem, and ISL recognition is particularly difficult owing to the use of both hands. Many studies have been done in the past employing sensors (such as glove sensors) and various image processing techniques (such as edge detection, the Hough Transform, and so on), but they are quite costly, and many people cannot afford them.
Next, the report will detail the implementation of the system, including the software and hardware requirements and the user interface designed for its users. The report will also discuss the testing process used to evaluate the system's performance, including the accuracy, sensitivity, and specificity of the system.
Finally, the report will conclude with a discussion of the potential impact of the proposed sign language detection system. This will include a comparison of the system's performance with current methods, as well as a discussion of the potential benefits and challenges associated with its deployment in everyday settings. Overall, this report aims to provide a comprehensive overview of the proposed computer-aided detection system for deaf people and its potential for bridging the communication gap between deaf people and everyone else.
1.2 PROBLEM STATEMENT
The most frequent sensory deficiency in people today is hearing loss. According to WHO estimates, roughly 63 million persons in India suffer from significant auditory impairment, putting the prevalence at 6.3 percent of the population. According to the NSSO study, there are 291 people with severe to profound hearing loss for every 100,000 people (NSSO, 2001). A substantial number of them are children between the ages of 0 and 14. With such a huge population of hearing-impaired young Indians, there is a significant loss of physical and economic output. The main problem is that people who are deaf or hard of hearing find it difficult to interact with others, since most people who are not impaired never learn to communicate using sign language.
The solution is to develop a translator that can detect the sign language used by a disabled person: the sign is fed into a machine-learning model built with transfer learning, detected by the neural network, and translated on the screen so that a hearing person can understand what the sign is saying. Communication is much easier now thanks to speech-to-text tools and translators, but what about individuals who are unable to speak or hear?
The main goal of this project is to create an application that can assist persons who are unable to speak or hear. The language barrier is also a very significant issue. Hand signals and gestures are used by people who are unable to speak, and ordinary people have trouble comprehending this language. As a result, a system that identifies various signs and gestures and relays the information to ordinary people is required. It connects persons who are physically challenged with others who are not. Using computer vision and neural networks, we can recognise the signs and provide the appropriate text output.
The system lets the affected person communicate on his or her own, and it is accessible to people on a budget: it is completely free, and anyone may use it. Many firms are creating solutions for deaf and hard-of-hearing persons, but not everyone can afford them, as some are too pricey for ordinary middle-class individuals.
1.3 OBJECTIVES OF PROJECT
1. Develop a machine learning model that can accurately and efficiently detect signs and hand gestures with the help of the dataset in the MP_Data folder.
2. Evaluate the performance of the developed model in terms of accuracy, sensitivity, specificity, and other relevant metrics, and compare it with existing models.
3. Explore different machine learning techniques and algorithms, such as deep learning and LSTM Neural Networks, to identify the most effective neural network model for sign language detection.
4. Optimize the model to increase the accuracy of hand and sign detection and improve the reliability and consistency of the results.
5. Validate the developed model using independent datasets to ensure its generalizability and applicability in real-time scenarios.
6. Investigate the feasibility and utility of the developed model and its potential impact on deaf people, maintenance and update costs, and resource utilization.
7. Address ethical and regulatory issues related to the use of machine learning models in a real-time sign and hand gesture detection system, in line with relevant guidelines and regulations.
8. Develop a user-friendly interface for the machine learning model that can easily detect the signs and hand gestures of a deaf person, convert them into English, and show the sentence on screen in real time.
9. Conduct a thorough analysis of the dataset used in the project, including data cleaning, preprocessing, and augmentation techniques, to ensure the quality and representativeness of the data.
10. Investigate the impact of imaging against different backgrounds on the performance of the machine learning model and identify the most effective approach for sign and hand gesture detection.
11. Explore the use of transfer learning techniques to leverage pre-trained models and reduce the amount of required data and training time.
12. Investigate the impact of different factors, such as skin colour, hand shape, and hand speed, on the performance of the machine learning model and its ability to identify the correct signs and hand gestures.
The ultimate objective of the sign language detection system project is to develop a reliable, accurate, and efficient tool for sign language detection and communication between deaf people and others, which can help deaf people express their views more easily.
1.4 Application or Scope
The scope and potential applications of a sign language detection system built using machine learning techniques are vast and can have a significant impact on communication for deaf people. Some potential applications and scope of such a project are:
Applications
1. The dataset can easily be extended and customized according to the needs of the user, and can prove to be an important step towards reducing the communication gap for mute and deaf people.
2. Using the sign detection model, meetings held at a global level can become easier for disabled people to follow, and their hard work can be properly valued.
3. The model can be used by any person with a basic knowledge of technology and is thus available to everyone.
4. This model can be implemented at the elementary school level so that kids can learn about sign language at a very young age.
Future Scope
1. The implementation of our model for other sign languages, such as Indian Sign Language or American Sign Language.
2. Further training with a larger dataset to recognize symbols more efficiently.
3. Improving the model's ability to identify expressions.
1.5 Organization of Report
1. Abstract: A brief summary of the project, including the problem statement, objectives,
methods, and results.
2. Introduction: The communication problem faced by deaf people and the proposed solution. The introduction also includes a clear statement of the problem and the objectives of the project.
3. Literature Review: A review of relevant literature on sign language detection using
machine learning techniques, including an overview of existing models, datasets, and
performance metrics.
4. Methods: A detailed description of the methods used in the project, including data
collection, pre-processing, feature extraction, model development, evaluation, and
validation.
5. Results: A presentation of the results of the project, including the performance of the
developed model in terms of accuracy, sensitivity, specificity, and other relevant
metrics. The results should also include a discussion of the potential impact of the
developed model on sign language detection.
6. Discussion: A discussion of the strengths and limitations of the developed model,
including its potential clinical applications, challenges, and future research directions.
7. Conclusion: A summary of the main findings of the project, including the contributions to the field of sign language detection using machine learning techniques.
8. References: A list of all references cited in the report, following a specific citation
format.
9. Appendices: Optional appendices can be included to provide additional details on the
project, such as data pre-processing steps, model architecture, and hyperparameters.
CHAPTER II
LITERATURE SURVEY
The purpose of the Literature Survey is to give a brief overview of the reference papers and to establish complete information about them. Its goal is to specify the technical details related to the main project in a concise and unambiguous manner.
Different approaches have been used by different researchers for the recognition of various hand gestures, implemented in different fields. These approaches can be divided into a few broad categories.
The methods by which computers and humans communicate have changed in tandem with the advancement of information technology, and a lot of effort has been put into helping deaf and hearing individuals communicate more successfully. Because sign language consists of a series of gestures and postures, any attempt to recognise it falls within the category of human-computer interaction.
The first category is the Data Glove approach, in which the user wears a glove with electromechanical devices attached to digitize hand and finger movements into processable data. The downside of this approach is that the user must constantly wear extra equipment, and the findings are less precise. Computer-vision-based techniques, on the other hand, use only a camera and allow for natural contact between humans and computers without any extra devices. Apart from significant advancements in the field of ASL, Indian researchers have begun to work in the field of ISL. One approach detects image key points using SIFT and then compares a new image's key points to the key points of standard images for each alphabet in a database, labelling the new image with the closest match. Similarly, various work has been put into recognising edges efficiently; one idea was to use a combination of colour data and bilateral filtering of depth images to rectify edges.
Communicate with someone who is hard of hearing:
These solutions work for persons who have a slight hearing impairment; however, a person who is completely deaf will be unable to understand anything, and sign language is then their best and only option. Deaf and mute people rely on sign language as their primary and only means of communication. Because sign language is a formal language that uses a system of hand gestures to communicate, it is the sole means of communication for those who are unable to speak or hear, and physically challenged persons can convey their thoughts and emotions through it.
In this paper, a unique sign language identification technique for detecting alphabets and gestures in sign language is suggested. Deaf individuals employ a style of communication based on visual gestures and signs; the visual-manual modality is used to transmit meaning in sign languages. It is mostly utilised by deaf or hard-of-hearing individuals, but sign language is also used by children who are neither deaf nor hard of hearing: hearing children who are nonverbal owing to conditions such as Down syndrome, autism, cerebral palsy, trauma, brain disorders, or speech impairments make up another big group of sign language users.
The ISL (Indian Sign Language) alphabet is used for fingerspelling. There is a symbol for each letter of the alphabet, and these letter signs may be used to spell out words - most commonly names and locations - and phrases.
Another survey analyses and compares the classification approaches used in SLR systems, identifying the most reliable options and directions for future work. Recent classification approaches proposed in the literature include hybrid architectures and deep learning; based on this review, HMM-based approaches have been explored in detail, along with hybrid CNN-HMM and fully deep learning approaches.
Indian Sign Language (ISL) is a complete language with its own grammar, syntax and vocabulary, used by over 5 million deaf people in India. Until recently there was no publicly available ISL dataset for testing sign language recognition (SLR) methods. To address this, one paper presents the INCLUDE dataset, an ISL dataset of 0.27 million frames across 4,287 videos of 263 word signs from 15 word categories, recorded by experienced signers under conditions resembling natural use. A subset of 50 word signs covering all word categories, INCLUDE-50, is selected for rapid testing of SLR methods with hyperparameter tuning. As a baseline SLR study on ISL, the authors evaluate many deep neural networks combining various techniques for feature extraction, encoding and decoding. The most efficient model achieves 94.5% accuracy on the INCLUDE-50 dataset and 85.6% on the INCLUDE dataset; this model uses pre-trained features and trains only the output layer. The authors also explore transfer learning by fine-tuning on an American Sign Language video database: on ASLLVD with 48 classes, their model achieves 92.1% accuracy, improving on existing results and providing effective support for multilingual SLR.
Here are some key findings from the literature survey relevant to sign language detection:
Sign language detection can be built on a variety of components, such as OpenCV, MediaPipe landmarks, and an LSTM Neural Network. Machine learning and other advanced analytical methods can be used effectively to identify signs and hand gestures.
Overall, the literature suggests that a sign language detection system is a valuable tool for improving communication between deaf and hearing persons, by identifying signs and hand gestures from camera and visual data, which we have collected in the MP_Data folder.
CHAPTER III
METHODOLOGY
I. Introduction
Many people in India are deaf or hard of hearing, and thus communicate with others using hand gestures. However, aside from a small group of people, not everyone is familiar with sign language, and they may need an interpreter, which may be complex and costly. The goal of this research is to build software that can predict ISL alphanumeric hand gestures in real time, bridging the communication gap.
This project aims to achieve the following objectives:
1. Create a new platform on which deaf people and hearing people can communicate even if the hearing person does not know sign language.
2. Build a system that converts sign language into English sentences live, with the help of a camera.
3. Choose an appropriate machine learning model and optimize its performance through training and validation; model selection should involve comparing the performance of different models using appropriate evaluation metrics.
4. Deploy the neural network model that fits the system best.
5. Predict which sign or hand gesture is on camera with the trained model, and note the accuracy and time.
6. Regularly update and maintain the system to ensure its accuracy and reliability.
VI. Deployment
After the model has been trained, validated, and optimized, it needs to be deployed in a production environment where it can be used for sign language prediction. This involves integrating the model with the proper software and testing it on real-world data in real time.
TensorFlow library: Being an open-source library for deep learning and machine learning, TensorFlow plays a role in text-based applications, image recognition, voice search, and much more. DeepFace, Facebook's image recognition system, uses TensorFlow for image recognition, and it is used by Apple's Siri for voice recognition. Every Google app makes good use of TensorFlow to improve your experience.
Tensorflow-gpu library: TensorFlow is a free and open-source software library for machine learning created by Google, and it is most notably known for its GPU-accelerated computation speed.
OpenCV: a Python library that allows you to perform image processing and computer vision tasks. It provides a wide range of features, including object detection, face recognition, and tracking.
MediaPipe: this framework is mainly used for rapid prototyping of perception pipelines with AI models for inferencing and other reusable components. It also facilitates the deployment of computer vision applications into demos and applications on different hardware platforms.
Scikit-learn (Sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling, including classification, regression, clustering and dimensionality reduction, via a consistent interface in Python.
The OS module in Python provides functions for creating and removing a directory (folder), fetching its contents, changing and identifying the current directory, etc. You first need to import the os module to interact with the underlying operating system.
Matplotlib is a Python library used to create 2D graphs and plots from Python scripts. It has a module named pyplot which makes plotting easy by providing features to control line styles, font properties, axis formatting, etc.
NumPy is a Python library that is widely used for numerical computing. In the project,
NumPy was used to perform mathematical operations and computations on the dataset, such
as statistical analysis and data manipulation. NumPy is an essential library in machine
learning as it provides fast and efficient mathematical computations that can be performed on
arrays, making it easier to work with data. The ability to perform efficient numerical
operations makes NumPy a crucial component in the development of machine learning
models.
Python is a high-level programming language that was used extensively in the project for developing the algorithms and models used in the sign language detection system. Python is an excellent choice for machine learning projects as it has a vast range of libraries and packages, making it easier to build machine learning models. The ability to develop algorithms and models using Python allowed the developers to create a robust system that detects signs with a high degree of accuracy.
Google Colab is a cloud-based platform that provides a free Jupyter notebook environment, enabling the development and execution of machine learning models on Google's servers. In the sign language detection system project, Google Colab was used to run the machine learning models on a virtual machine, allowing for faster training and testing of the models.
Google Colab provided a useful platform for machine learning projects as it provided a
cloud-based environment, allowing users to work on their projects from anywhere.
GitHub is a web-based platform used for version control and collaboration in software development projects. In the sign language detection system project, GitHub was used to manage and store the project's source code, allowing for collaboration and version control. GitHub provided a secure and efficient way to manage source code, making it easier to work with multiple team members. The ability to collaborate and manage source code efficiently makes GitHub an essential tool in software development projects.
In conclusion, the sign language detection system project was a significant undertaking that required the use of several technologies to ensure its success. The technologies used in the project were carefully chosen to provide the functionality required to create an accurate and robust system. TensorFlow, tensorflow-gpu, OpenCV, sklearn, matplotlib, NumPy, Python, Anaconda, Google Colab, GitHub, and MediaPipe were all crucial in the development of a reliable sign language detection system. In particular, MediaPipe Holistic provides landmark keypoints for the hands and face seen by the camera; because we only have to store the keypoints of these landmarks, the system gains accuracy and uses less data, which makes it crucial for the project.
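Concretely, storing only the keypoints means flattening every detected landmark into one fixed-length vector per frame. The sketch below assumes MediaPipe Holistic's standard landmark counts (33 pose landmarks with 4 values each, 468 face landmarks and 21 landmarks per hand with 3 values each, 1662 values in total); missing detections are zero-filled so every frame has the same length:

import numpy as np

def extract_keypoints(results):
    # pose: x, y, z and a visibility score per landmark
    pose = np.array([[lm.x, lm.y, lm.z, lm.visibility]
                     for lm in results.pose_landmarks.landmark]).flatten() \
        if results.pose_landmarks else np.zeros(33 * 4)
    face = np.array([[lm.x, lm.y, lm.z]
                     for lm in results.face_landmarks.landmark]).flatten() \
        if results.face_landmarks else np.zeros(468 * 3)
    lh = np.array([[lm.x, lm.y, lm.z]
                   for lm in results.left_hand_landmarks.landmark]).flatten() \
        if results.left_hand_landmarks else np.zeros(21 * 3)
    rh = np.array([[lm.x, lm.y, lm.z]
                   for lm in results.right_hand_landmarks.landmark]).flatten() \
        if results.right_hand_landmarks else np.zeros(21 * 3)
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame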
3 Proposed Methodology
The methodology for the SIGN LANGUAGES DETECTION SYSTEM project involves several key steps: data collection, data pre-processing, model development, and model evaluation.
Data Collection: The first step in the project is the collection of data. The data will be collected with the following code:
cap = cv2.VideoCapture(0)
# Set mediapipe model (assumes cv2, numpy as np and os are imported, and that
# mp_holistic, actions, no_sequences, sequence_length, DATA_PATH and the
# helper functions are defined as shown elsewhere in this chapter)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    # Loop through actions
    for action in actions:
        # Loop through sequences aka videos
        for sequence in range(no_sequences):
            # Loop through video length aka sequence length
            for frame_num in range(sequence_length):
                # Read feed
                ret, frame = cap.read()
                # Make detections
                image, results = mediapipe_detection(frame, holistic)
                # Draw landmarks
                draw_styled_landmarks(image, results)
                # Save the flattened keypoints for this frame into MP_Data
                # (extract_keypoints as sketched in the previous section)
                np.save(os.path.join(DATA_PATH, action, str(sequence), str(frame_num)),
                        extract_keypoints(results))
                # Show the feed and break gracefully on 'q'
                cv2.imshow('Data collection', image)
                if cv2.waitKey(10) & 0xFF == ord('q'):
                    break
    cap.release()
    cv2.destroyAllWindows()
We will use this code to collect data and put it in the MP_Data folder, which we created using the following code:
DATA_PATH = os.path.join('MP_Data')
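The collection loop above also relies on the action list and sequence counts, and on one folder per action and per video inside MP_Data. A minimal setup sketch follows; the specific action names and counts are illustrative assumptions, not the project's exact configuration:

import os
import numpy as np

actions = np.array(['hello', 'thanks', 'iloveyou'])  # example signs to record
no_sequences = 30      # videos recorded per sign
sequence_length = 30   # frames per video

for action in actions:
    for sequence in range(no_sequences):
        # one folder per sign and per video, e.g. MP_Data/hello/0
        os.makedirs(os.path.join(DATA_PATH, action, str(sequence)), exist_ok=True)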
Data Pre-processing: After collecting the data, the next step is to pre-process it. The data
pre-processing stage involves several steps such as cleaning, normalization, and feature
extraction. During the cleaning process, the data will be inspected for missing values,
duplicates, and outliers. Missing values and outliers will be replaced or removed as
necessary. Normalization will be done to ensure that the data is on a similar scale. Feature
extraction will be done to select the most important attributes that will be used in the
development of the prediction model. The pre-processing stage is critical to ensure that the
data is accurate, consistent, and suitable for use in the model development process.
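As a sketch of this stage for the keypoint data (assuming the MP_Data layout and variables above), the saved .npy files can be loaded into one array of sequences while the action labels are one-hot encoded:

import os
import numpy as np
from tensorflow.keras.utils import to_categorical

label_map = {label: num for num, label in enumerate(actions)}

sequences, labels = [], []
for action in actions:
    for sequence in range(no_sequences):
        # re-assemble one video from its per-frame keypoint files
        window = [np.load(os.path.join(DATA_PATH, action, str(sequence), f"{frame_num}.npy"))
                  for frame_num in range(sequence_length)]
        sequences.append(window)
        labels.append(label_map[action])

X = np.array(sequences)                 # shape: (samples, sequence_length, 1662)
y = to_categorical(labels).astype(int)  # one-hot encoded action labels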
Exploratory Data Analysis (EDA): After preprocessing the data, the next step is to perform
exploratory data analysis. EDA is an essential step that allows the project team to understand
the data and gain insights into the relationships between the different variables. The EDA
process may involve techniques such as scatter plots, box plots, and correlation matrices. EDA
can help to identify potential relationships between different variables that may be used in the
model development process.
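A small EDA sketch for this dataset (assuming X, y, and actions from the steps above) is to check the tensor shapes and the class balance of the collected sequences:

import matplotlib.pyplot as plt

print(X.shape)  # e.g. (samples, sequence_length, 1662)
print(y.shape)  # (samples, number_of_actions)

counts = y.sum(axis=0)  # sequences collected per action
plt.bar(actions, counts)
plt.xlabel('Action')
plt.ylabel('Number of sequences')
plt.title('Class balance of the collected dataset')
plt.show()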
Model Development: The next step in the project is to split the data into training and test sets using:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)
This is followed by the development of the prediction model: a Sequential LSTM Neural Network that predicts the sign or hand gesture from the keypoint sequences. The model will be trained using the pre-processed data and evaluated using various metrics such as accuracy and precision. The evaluation process will use a Confusion Matrix to ensure that the model is robust and can generalize well to unseen data.
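A minimal sketch of such a Sequential LSTM network follows; the layer sizes here are illustrative assumptions, not the project's verified configuration:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    # three stacked LSTM layers read the sequence of 1662-value keypoint frames
    LSTM(64, return_sequences=True, activation='relu', input_shape=(sequence_length, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),  # one probability per sign
])

The model is then compiled and trained with the calls shown below.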
Model Deployment: Once the Sequential LSTM Neural Network has been developed and evaluated, the next step is to train it for 200 epochs or more, which gives the prediction model an accuracy of at least 70 to 90%. It was developed in a Jupyter notebook under Anaconda.
model.compile(optimizer='Adam', loss='categorical_crossentropy',
metrics=['categorical_accuracy'])
model.fit(X_train, y_train, epochs=2000, callbacks=[tb_callback])
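The Confusion Matrix evaluation mentioned above can be sketched with scikit-learn on the held-out test split:

import numpy as np
from sklearn.metrics import multilabel_confusion_matrix, accuracy_score

ytrue = np.argmax(y_test, axis=1)                # true class indices
yhat = np.argmax(model.predict(X_test), axis=1)  # predicted class indices

print(multilabel_confusion_matrix(ytrue, yhat))  # one 2x2 matrix per class
print(accuracy_score(ytrue, yhat))               # overall accuracy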
Performance Monitoring: After the deployment of the model, the next step is to monitor its
performance. The performance of the model will be monitored using various metrics such as
the number of requests, response time, and error rates. The performance monitoring process
will help to identify potential issues and bottlenecks in the deployment process. The
performance metrics will be regularly reviewed and optimized to ensure that the model is
functioning optimally.
We can inspect the trained model's architecture and parameter counts with the following code:
model.summary()
User Feedback: The final step in the project is to gather user feedback. The user feedback
process will involve soliciting feedback from end-users to identify potential improvements
and areas for optimization. User feedback will be gathered using various methods such as
surveys, feedback forms, and focus groups. The feedback will be analysed and used to
identify areas for improvement and optimization.
4 Project Modules
Data Collection Module: The first step in the project is the collection of data. This module is implemented by the webcam capture loop listed in the Proposed Methodology section above, which records the keypoint sequence for each action and stores it in the MP_Data folder (DATA_PATH = os.path.join('MP_Data')).
Machine Learning Module: This module is the Sequential LSTM Neural Network described above. It is trained for 200 epochs or more using the model.compile and model.fit calls listed in the Proposed Methodology section, which gives the prediction model an accuracy of at least 70 to 90%. It was developed in a Jupyter notebook under Anaconda.
Security Module: This module ensures the security of the user's data and the system as a
whole. It includes measures such as encryption, secure socket layer (SSL) certificates, and
access control mechanisms to prevent unauthorized access to the system. The goal of this
module is to ensure that the user's data is safe and secure and that the system is protected
from attacks.
Each of these modules plays a critical role in the sign language detection system, and they
need to work together seamlessly to produce accurate and reliable results for the end-user.
The project's success depends on the effectiveness of each module, and constant evaluation
and improvement are necessary to enhance the system's performance. The sign language
detection system project is complex and requires a high level of expertise in machine
learning, software engineering, and security to develop a functional and efficient system.
5 Diagram: Project Module
The Use Case diagram depicts the system's functionality by representing different actors
and their interactions with the system. It provides a clear understanding of the system's
use cases and helps to identify the actors and the corresponding use cases, which are
critical for defining the system's requirements.
A use case diagram is a visual representation of the interactions between actors (users or
systems) and a system or application. It is a type of UML (Unified Modelling Language)
diagram that is commonly used in software engineering to model the functionality of a
system or application.
A use case diagram consists of a set of use cases (represented by ovals) that describe the
actions or functions performed by the system or application. Each use case is associated
with one or more actors (represented by stick figures) that interact with the system or
application to accomplish a specific goal.
The actors in a use case diagram may include users, other systems or applications, or even
external entities such as sensors or devices. The use cases describe the interactions
between the actors and the system or application, and may include preconditions,
postconditions, and exceptions.
Use case diagrams are often used early in the development process to help stakeholders
understand the functional requirements of a system or application, and to identify
potential areas of improvement. They can also be used to validate and refine the system or
application design, and to communicate the design to developers, testers, and other
stakeholders.
Overall, a use case diagram provides a high-level overview of the functionality of a
system or application, and helps stakeholders to visualize the interactions between the
actors and the system or application.
CHAPTER IV
IMPLEMENTATION
The implementation phase of the project is where the detailed design is actually transformed into working code. The aim of this phase is to translate the design into the best possible solution in a suitable programming language. This chapter covers the implementation aspects of the project, giving details of the programming language and development environment used. It also gives an overview of the core modules of the project with their step-by-step flow.
The implementation stage requires the following tasks.
• Careful planning.
• Investigation of the system and constraints.
• Design of methods to achieve the changeover.
• Evaluation of the changeover method.
• Correct decisions regarding selection of the platform.
• Appropriate selection of the language for application development.
8. Make Predictions
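A minimal sketch of this step follows, assuming the trained model and the helpers defined in the methodology chapter: a rolling window of the last sequence_length keypoint frames is classified on every new frame, and confident predictions are appended to a sentence rendered on the video feed (the 0.7 threshold is an illustrative assumption):

import cv2
import numpy as np

sequence, sentence = [], []
threshold = 0.7  # minimum prediction confidence to accept a sign

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()
        image, results = mediapipe_detection(frame, holistic)
        # keep only the most recent sequence_length frames of keypoints
        sequence.append(extract_keypoints(results))
        sequence = sequence[-sequence_length:]
        if len(sequence) == sequence_length:
            res = model.predict(np.expand_dims(sequence, axis=0))[0]
            if res[np.argmax(res)] > threshold:
                predicted = actions[np.argmax(res)]
                # avoid repeating the same sign back to back
                if not sentence or predicted != sentence[-1]:
                    sentence.append(predicted)
            cv2.putText(image, ' '.join(sentence[-5:]), (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        cv2.imshow('Sign language detection', image)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()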
CHAPTER V
RESULTS
The trained model achieved an accuracy of 78%.
CHAPTER VI
USER MANUAL
1. SOFTWARE REQUIREMENTS
Data collection and processing: The system must be able to collect and process large volumes of imaging data in order to increase the accuracy of the system.
Data storage and retrieval: The system should have a robust database to store all data securely and efficiently. The system should also be able to retrieve the data quickly and accurately.
Machine learning models: The system should have machine learning algorithms that can analyze the data and predict the sign language seen through the camera. The models can include supervised or unsupervised learning algorithms, such as decision trees, support vector machines, and neural networks.
User interface: The system should have a user-friendly interface so that deaf people can use it easily.
Continuous improvement: The system should be designed to continuously learn from new data, update the models, and improve the accuracy of the predictions. This is important to ensure that the system gives high accuracy.
2. HARDWARE REQUIREMENTS
The hardware requirements for sign language detection system depend on the size of the
dataset, the complexity of the machine learning models, and the desired speed and accuracy
of the system. Here are some general hardware requirements that you may consider:
Processor: The system should have a powerful processor to handle the large amounts of data
and complex machine learning algorithms. A multi-core processor with a clock speed of 3
GHz or higher is recommended.
Memory: The system should have enough memory to store the data and models. At least 8
GB of RAM is recommended, and more is better for larger datasets and more complex
models.
Storage: The system should have sufficient storage space to store the data, models, and other
system files. A solid-state drive (SSD) is recommended for faster data access and better
performance.
Graphics processing unit (GPU): A GPU can accelerate the processing of large datasets and
machine learning algorithms. A high-end GPU with at least 8 GB of memory is
recommended.
Network connectivity: The system should have fast and reliable network connectivity to access and share data with other organizations and systems.
Backup and recovery: The system should have a backup and recovery strategy to prevent data
loss in case of hardware failure or system crashes.
It is important to note that the hardware requirements may vary based on the specific
requirements of the sign language detection system. It is recommended to consult with
experts in data science and software development to determine the optimal hardware
requirements for your specific system.
The steps to run a SIGN LANGUAGES DETECTION SYSTEM can vary depending on the
specific system and the software and hardware configurations used. However, here are some
general steps that can be followed:
Collect and process the data: Gather the required data through the camera and process it into a format that can be used by the SIGN LANGUAGES DETECTION SYSTEM.
Build and train the machine learning models: Build the machine learning models using the
processed data. Train the models using a subset of the data, and evaluate their accuracy.
Select the best model based on its performance metrics, such as sensitivity, specificity, and
accuracy.
Integrate the models into the software: Integrate the selected machine learning models into
the SIGN LANGUAGES DETECTION SYSTEM software. Ensure that the software can
receive input data in the correct format and output the predictions in a meaningful way.
Test the system: Test the SIGN LANGUAGES DETECTION SYSTEM using a new set of
data in real time. Evaluate the performance of the system by comparing its predictions with
the actual outcomes.
Continuously monitor and improve the system: Monitor the performance of the system
continuously and make improvements as necessary. Update the machine learning models
with new data and new research findings to improve the accuracy and reliability of the
system.
CHAPTER VII
CONCLUSION & FUTURE SCOPE
7.1 CONCLUSION
From the model's findings, we may deduce that the suggested system can produce reliable results under conditions of regulated light and intensity. Furthermore, new gestures may be easily incorporated, and more images captured from various angles and frames will give the model greater accuracy. Thus, by expanding the dataset, the model may easily be scaled up to a vast size.
The model has some limitations: environmental conditions such as low light intensity and an uncontrolled backdrop reduce detection accuracy. Consequently, we will attempt to fix these problems and expand the dataset for more accurate results.
FUTURE WORK
1. Make the project work in both directions: we will extend it so that it can detect sign language and also convert English sentences into animated sign language, so that a deaf person can likewise understand what the other person is trying to say.
2. The implementation of our model for other sign languages, such as Indian Sign Language or American Sign Language.
CHAPTER VIII
REFERENCES
NumPy Documentation (https://numpy.org/doc/stable/)
Matplotlib Documentation (https://matplotlib.org/stable/index.html)
MediaPipe Documentation (https://mediapipe.readthedocs.io/)
Jamie Berke and James Lacy, March 01, 2021, "Hearing Loss/Deafness | Sign Language", https://www.verywellhealth.com/sign-language-nonverbal-users-1046848