ISL Using Machine Learning
ON
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE & ENGINEERING
Submitted by
Certificate
This is to certify that the Project entitled "ISL Using Machine Learning"
We consider it our privilege to express our gratitude and respect to all those who guided, inspired
and helped us in the completion of this Project.
We are thankful to our Chairman Sri MANNEM RAMI REDDY, Director Sri MANNEM
ARAVIND KUMAR REDDY and Principal Dr. K. JAYACHANDRA for permitting us to use the
facilities available in this college to complete the Project.
We are thankful to our guide Mr. T. KATAIAH for his guidance and cooperation throughout the
Project work.
We are also thankful to all the staff members of Computer Science and Engineering for their
cooperation.
Last but not least, we wish to thank all our friends and everyone who helped, directly or indirectly,
in the completion of our Project.
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER-1
INTRODUCTION
1.1 INTRODUCTION
Sign Languages vary throughout the world. There are around 300 different sign languages used
across various parts of the world. This is because sign languages were developed naturally by people
belonging to different ethnic groups. However, India does not have a single standard sign language. Lexical
variations and different dialects of Indian Sign Language are found in different parts of India. But,
recently, efforts have been made to standardize the Indian Sign Language (ISL). The ISL hand
gestures are divided into two broad categories: (i) static gestures, and (ii) dynamic gestures. The
static ISL hand gestures of numbers (0-9), English alphabets (A-Z), and some English words are
shown in Fig. 1. According to the 2011 census, there are around 50 lakh people in India with
speech/hearing impairments, but there are fewer than 300 educated and trained sign language
interpreters in India. As a result, people with speech/hearing impairments tend to become isolated and
lonely, as they face difficulties in communicating with hearing people. This has a tremendous
effect on both their social and working life. Due to the above-mentioned challenges that the specially
challenged people face, an automated real-time system that could translate English words to ISL and
vice versa has been proposed in this paper. This system makes it easy for the specially challenged
people to communicate effectively with the rest of the world. This could enhance their abilities and
make them realize that they can do better in life. The proposed system performs two major tasks: (i)
Gesture to Text conversion and (ii) Speech to Gesture conversion. Gesture to text conversion is done
using neural network classifiers. Speech to gesture conversion is done using Google Speech
Recognition API. This paper focuses on conversion of standard Indian Sign Language gestures to
English, and conversion of spoken English words to Indian Sign Language gestures with the highest
possible accuracy. For this, different neural network classifiers are developed, and their performance
in gesture recognition is tested. The most accurate and efficient classifier is chosen and used to
develop an application that converts ISL gestures to their corresponding English text, and speech to
the corresponding ISL gestures.
CHAPTER-2
SYSTEM ANALYSIS
According to the 2011 census, there are around 50 lakh people in India with speech/hearing
impairments, but there are fewer than 300 educated and trained sign language interpreters in
India. As a result, people with speech/hearing impairments tend to become isolated and lonely, as they
face difficulties in communicating with hearing people. This has a tremendous effect on
both their social and working life. Due to the above-mentioned challenges that the specially
challenged people face, an automated real-time system that could translate English words to ISL
and vice versa has been proposed in this paper. This system makes it easy for the specially
challenged people to communicate effectively with the rest of the world. This could enhance their
abilities and make them realize that they can do better in life.
But, recently, efforts have been made to standardize the Indian Sign Language (ISL). The ISL
hand gestures are divided into two broad categories: (i) static gestures, and (ii) dynamic
gestures. However, the signs are not recognized correctly when the dataset is small.
Algorithm: CNN (Convolutional Neural Network)
As mentioned in the above section, the proposed system for ISL interpretation performs two
major tasks: (i) Gesture to Text conversion and (ii) Speech to Gesture conversion. Gesture to text
conversion involves four major steps: (i) Dataset collection, (ii) Segmentation, (iii) Feature
Extraction and (iv) Classification. The first step in gesture to text
conversion is dataset collection. An image dataset consisting of ISL hand gestures of 9 numbers
(1-9), 26 English alphabets and a few English words is collected. After the dataset is ready, all
the images in the dataset are pre-processed to mask the unwanted areas and to remove noise from
the image. Pre-processing the images prior to feeding them to a classifier improves the
efficiency, accuracy and performance of the system. Hence, this step is very important in the
image classification process. Here, feature extraction is done using the Speeded-Up Robust
Feature (SURF) method. The SURF is used as a feature descriptor or as a feature detector. It is
often used for applications like object detection, image classification etc. It is a fast and robust
algorithm for representing and comparing images. It acts as a blob detector in an image. The
SURF features are calculated by finding the interest points in the image that contain the
meaningful features using the determinants of Hessian matrices. For each interest point found in
the previous process, the scale invariant descriptors are constructed.
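As an illustration, the following minimal sketch (not part of the original project listing) shows how SURF interest points and descriptors can be computed with OpenCV. It assumes an opencv-contrib-python build in which the patented SURF module is enabled, and the input file name gesture.jpg is hypothetical.
import cv2

# Hypothetical input image of a hand gesture, read in grayscale
image = cv2.imread("gesture.jpg", cv2.IMREAD_GRAYSCALE)

# The Hessian threshold controls how many blob-like interest points are kept
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# keypoints are the interest points; descriptors are the scale-invariant vectors
keypoints, descriptors = surf.detectAndCompute(image, None)

# Draw the interest points as circles whose size reflects the detected scale
output = cv2.drawKeypoints(image, keypoints, None, (255, 0, 0), 4)
cv2.imwrite("surf_features.jpg", output)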
Due to the relative lack of pervasive sign language usage within our society, deaf and other
verbally-challenged people tend to face difficulty in communicating on a daily basis. Our study
thus aims to provide research into a sign language translator applied on the smartphone platform,
due to its portability and ease of use. In this paper, a novel framework comprising established
image processing techniques is proposed to recognise images of several sign language gestures.
More specifically, we initially implement Canny edge detection and seeded region growing to
segment the hand gesture from its background. Feature points are then extracted with Speeded
Up Robust Features (SURF) algorithm, whose features are derived through Bag of Features
(BoF). A Support Vector Machine (SVM) is subsequently applied to classify our gesture image
dataset, and the trained model is used to recognize future sign language gesture inputs. The
proposed framework has been successfully implemented on smartphone platforms, and
experimental results show that it is able to recognize and translate 16 different American Sign
Language gestures with an overall accuracy of 97.13%.
Hand gestures are a powerful medium of communication for the hearing- and speech-impaired
community. They are also useful for interaction between people and computers. The potential of such a
system can be seen in public places, where deaf people can use it to exchange messages with ordinary
people. In this article, we have provided a system for continuous gesture recognition in Indian
Sign Language (ISL), in which both hands are used to make every
gesture. Continuous gesture recognition remains a daunting task. We address this problem using
a key frame extraction method. These key frames are useful for splitting continuous sign language
gestures into sequences of signs, as well as for removing uninformative frames. After
splitting, each sign is treated as a single, isolated gesture. The pre-processed
gestures are represented using the orientation histogram (OH), with PCA applied to reduce the
dimensionality of the features obtained from OH. The experiments were performed on our live ISL
dataset, which was created using a standard camera.
This paper presents a system which can recognise hand poses & gestures from the Indian Sign
Language (ISL) in real-time using grid-based features. This system attempts to bridge the
communication gap between the hearing- and speech-impaired and the rest of society. The
existing solutions either provide relatively low accuracy or do not work in real-time. This system
provides good results on both the parameters. It can identify 33 hand poses and some gestures
from the ISL. Sign Language is captured from a smartphone camera and its frames are
transmitted to a remote server for processing. The use of any external hardware (such as gloves
or the Microsoft Kinect sensor) is avoided, making it user-friendly. Techniques such as Face
detection, Object stabilisation and Skin Colour Segmentation are used for hand detection and
tracking. The image is further subjected to a Grid-based Feature Extraction technique which
represents the hand's pose in the form of a Feature Vector. Hand poses are then classified using
the k-Nearest Neighbours algorithm. On the other hand, for gesture classification, the motion and
intermediate hand poses observation sequences are fed to Hidden Markov Model chains
corresponding to the 12 pre-selected gestures defined in ISL. Using this methodology, the system
is able to achieve an accuracy of 99.7% for static hand poses, and an accuracy of 97.23% for
gesture recognition.
Recognition of sign language by a system has become important to bridge the communication
gap between the abled and the Hearing and Speech Impaired people. This paper introduces an
efficient algorithm for translating the input hand gesture in Indian Sign Language (ISL) into
meaningful English text and speech. The system captures hand gestures through Microsoft
Kinect (preferred as the system performance is unaffected by the surrounding light conditions
and object colour). The dataset used consists of depth and RGB images (taken using Kinect Xbox
360) with 140 unique gestures of the ISL taken from 21 subjects, which includes single-handed
signs, double-handed signs and fingerspelling (signs for alphabets and numbers), totaling 4600
images. To recognize the hand posture, the hand region is accurately segmented and hand
features are extracted using Speeded Up Robust Features, Histogram of Oriented Gradients and
Local Binary Patterns. The system ensembles the three feature classifiers trained using Support
Vector Machine to improve the average recognition accuracy up to 71.85%. The system then
translates the sequence of hand gestures recognized into the best approximate meaningful
English sentences. We achieved 100% accuracy for the signs representing 9, A, F, G, H, N and P.
The feasibility of the project is analyzed in this phase and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is to be carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.
Three key considerations involved in the feasibility analysis are,
Economical Feasibility
Technical Feasibility
Social Feasibility
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the available
technical resources, as this would lead to high demands being placed on the client. The developed
system must therefore have modest requirements, so that only minimal or no changes are required for
implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate users about the system and
to make them familiar with it. Their level of confidence must be raised so that they are also able to
make some constructive criticism, which is welcomed, as they are the final users of the system.
CHAPTER-3
SYSTEM REQUIREMENTS
The project involved analyzing the design of a few applications so as to make the application
more user friendly. To do so, it was really important to keep the navigation from one screen
to the other well ordered and at the same time reduce the amount of typing the user needs to
do. In order to make the application more accessible, the browser version had to be chosen so
that it is compatible with most browsers.
Functional Requirements
For developing the application the following are the Software Requirements:
1. Windows 10 64 bit OS
2. Python
3. Django
4. Designing: HTML, CSS, JavaScript
For developing the application the following are the Hardware Requirements:
Processor: Intel i3 or above (Intel i9 recommended)
RAM: 4 GB or above (32 GB recommended)
Space on Hard Disk: minimum 1 TB
CHAPTER-4
SYSTEM DESIGN
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out on this
data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model
the system components. These components are the system process, the data used by the
process, any external entity that interacts with the system and the information flows in the
system.
3. The DFD shows how information moves through the system and how it is modified by a series
of transformations. It is a graphical technique that depicts information flow and the
transformations that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of
abstraction. It may be partitioned into levels that represent increasing information flow
and functional detail.
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling
language in the field of object-oriented software engineering. The standard is managed, and was
created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form UML comprises two major components: a Meta-model
and a notation. In the future, some form of method or process may also be added to, or associated
with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing and documenting the artifacts of a software system, as well as for business modeling and
other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in
the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of software
projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that they can develop
and exchange meaningful models.
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It explains
which class contains information.
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and
timing diagrams.
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
The input design is the link between the information system and the user. It
comprises developing specifications and procedures for data preparation, that is, the steps
necessary to put transaction data into a usable form for processing. This can be achieved by having
the computer read data from a written or printed document, or by having people
key the data directly into the system. The design of input focuses on controlling the amount of
input required, controlling errors, avoiding delay, avoiding extra steps and keeping the
process simple. The input is designed in such a way that it provides security and ease of use
while retaining privacy. Input Design considered the following things:
OBJECTIVES
1. Input Design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and to
show the correct direction to the management for getting correct information from the
computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data
entry screen is designed in such a way that all the data manipulations can be performed. It also
provides record viewing facilities.
3. When the data is entered it will be checked for validity. Data can be entered with the help of
screens. Appropriate messages are provided as and when needed so that the user is never left in a
maze. Thus the objective of input design is to create an input layout that is easy to follow.
A quality output is one which meets the requirements of the end user and presents
the information clearly. In any system, results of processing are communicated to the users and to
other systems through outputs. In output design it is determined how the information is to be
displayed for immediate need and also as hard copy output. It is the most important and direct
source of information to the user. Efficient and intelligent output design improves the system's
relationship with the user and helps in decision-making.
The output form of an information system should accomplish one or more of the
following objectives.
CHAPTER-5
IMPLEMENTATION
5.1 MODULES:
User
Admin
Data Preprocessing
Deep learning and machine learning
MODULES DESCRIPTION:
5.1.1 User:
The user registers first. While registering, a valid email and mobile number are required for
further communications. Once the user registers, the admin can activate the user. Once the admin
has activated the user, the user can login into our system. The user can upload a dataset whose
columns match our dataset. For algorithm execution, the data must be in float format. Here we took a
sign language dataset. The user can also add new data to the existing dataset through our Django
application. The user can click Classification in the web page so that the Accuracy and Loss are
calculated based on the algorithms.
5.1.2 Admin:
Admin can login with his login details. Admin can activate the registered users. Only once he has
activated a user can that user login into our system. Admin can view the overall data in the browser.
Admin can click Results in the web page so that the calculated Accuracy and Loss of the
algorithms are displayed. Once all algorithm executions are complete, the admin can see the overall
accuracy in the web page.
5.1.3 Data Pre-processing:
The images in the dataset were pre-processed to mask unwanted areas in the image and to
remove noise, as mentioned in the previous section. The various image pre-processing steps were
performed on a sample image from the dataset. The SURF feature matrix was calculated for
every image in the dataset after pre-processing. The SURF features were extracted for a sample
image; the blue colored circles of varying sizes mark the SURF feature points. The SURF
features of all the images were extracted, stored in a pickle file and then fed into different
neural network classifiers. The accuracy of each of the classifiers was tested.
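The following minimal sketch illustrates this pre-processing and feature-extraction pipeline under stated assumptions: an opencv-contrib-python build with SURF enabled, a hypothetical dataset/ folder laid out as one sub-folder per gesture class, and a hypothetical output file surf_features.pkl.
import os
import pickle
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
features = {}

for label in os.listdir("dataset"):                 # hypothetical dataset/<class>/<image> layout
    folder = os.path.join("dataset", label)
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        img = cv2.GaussianBlur(img, (5, 5), 0)      # remove noise
        _, mask = cv2.threshold(img, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # mask unwanted areas
        _, descriptors = surf.detectAndCompute(img, mask)
        features.setdefault(label, []).append(descriptors)

with open("surf_features.pkl", "wb") as f:          # stored once, reused by the classifiers
    pickle.dump(features, f)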
5.1.4 Deep learning and machine learning:
Based on the split criterion, the cleansed data is split into 60% training and 40% test. 1) The Support
Vector Machine: The input images were passed to the K-means clustering and Bag of Visual
Words classifiers before passing them to the SVM classifier. As there are 42 classes of images in
the dataset, k=42 for the K-means clustering classifier. The visual words were collected for both
the test and train datasets after applying the k-means clustering algorithm. There are in total 50391
images in the dataset. Among these images, 40320 images were used for training the SVM
model. The remaining 10071 images were used for testing the performance of the classifier
model. A testing accuracy of around 99.5% was achieved. The other performance metrics like
precision score, F1 score, and recall score were also calculated. 2) The Convolutional Neural
Network: A Convolutional Neural Network was modeled and developed using the Keras library
in Python. Around 30,240 images (60% of the images in the dataset) were used to train the
classifier model. The classifier was trained with different numbers of epochs. A maximum
average testing accuracy of around 88.89% was obtained. 3) Recurrent Neural Network: A
Recurrent Neural Network was modeled and developed using the Keras library in Python.
Around 30,240 images were used to train the classifier model. The classifier was trained with
different numbers of epochs. A maximum overall testing accuracy of around 82.3% was obtained.
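For illustration, here is a minimal sketch of the K-means/Bag-of-Visual-Words/SVM stage described above; it is not the project's exact code. scikit-learn's KMeans and SVC stand in for the clustering and SVM classifiers, k=42 matches the 42 gesture classes, and the variables train_descriptors, test_descriptors, train_labels and test_labels (per-image SURF descriptor arrays and their class labels) are assumed to have been prepared beforehand.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histograms(descriptor_list, kmeans, k):
    # Map each image's SURF descriptors to a k-bin visual-word histogram
    hists = np.zeros((len(descriptor_list), k))
    for i, desc in enumerate(descriptor_list):
        words = kmeans.predict(desc)
        for w in words:
            hists[i, w] += 1
    return hists

k = 42  # one cluster per gesture class, as in the text
kmeans = KMeans(n_clusters=k, random_state=0)
kmeans.fit(np.vstack(train_descriptors))   # stack all training SURF descriptors

X_train = bovw_histograms(train_descriptors, kmeans, k)
X_test = bovw_histograms(test_descriptors, kmeans, k)

svm = SVC(kernel="linear")
svm.fit(X_train, train_labels)
print("test accuracy:", svm.score(X_test, test_labels))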
5.2 SOFTWARE ENVIRONMENT
5.2.1 PYTHON
Invoking the interpreter without passing a script file as a parameter brings up the following
prompt −
$ python
>>>
Type the following text at the Python prompt and press Enter −
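print "Hello, Python!"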
If you are running a new version of Python, then you would need to use the print statement with
parentheses, as in print("Hello, Python!"). However, in Python version 2.4.3, this produces the
following result −
Hello, Python!
Invoking the interpreter with a script parameter begins execution of the script and continues until
the script is finished. When the script is finished, the interpreter is no longer active.
Let us write a simple Python program in a script. Python files have extension .py. Type the
following source code in a test.py file –
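print "Hello, Python!"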
We assume that you have Python interpreter set in PATH variable. Now, try to run this program
as follows −
$ python test.py
Hello, Python!
Let us try another way to execute a Python script. Here is the modified test.py file −
#!/usr/bin/python
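print "Hello, Python!"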
We assume that you have Python interpreter available in /usr/bin directory. Now, try to run this
program as follows −
$./test.py
Hello, Python!
Python Identifiers
A Python identifier is a name used to identify a variable, function, class, module or other object.
An identifier starts with a letter A to Z or a to z or an underscore (_) followed by zero or more
letters, underscores and digits (0 to 9).
Python does not allow punctuation characters such as @, $, and % within identifiers. Python is a
case sensitive programming language. Thus, Manpower and manpower are two different
identifiers in Python.
Class names start with an uppercase letter. All other identifiers start with a lowercase letter.
Starting an identifier with a single leading underscore indicates that the identifier is private.
Starting an identifier with two leading underscores indicates a strongly private identifier
If the identifier also ends with two trailing underscores, the identifier is a language-defined
special name.
Reserved Words
The following list shows the Python keywords. These are reserved words and you cannot use
them as constant or variable or any other identifier names. All the Python keywords contain
lowercase letters only.
and        exec       not
assert     finally    or
break      for        pass
class      from       print
continue   global     raise
def        if         return
del        import     try
elif       in         while
else       is         with
except     lambda     yield
Python provides no braces to indicate blocks of code for class and function definitions or flow
control. Blocks of code are denoted by line indentation, which is rigidly enforced.
The number of spaces in the indentation is variable, but all statements within the block must be
indented the same amount. For example
if True:
   print "True"
else:
   print "False"
However, the following block generates an error −
if True:
print "Answer"
print "True"
else:
print "Answer"
print "False"
Thus, in Python all the continuous lines indented with same number of spaces would form a
block. The following example has various statement blocks −
Note − Do not try to understand the logic at this point of time. Just make sure you understood
various blocks even if they are without braces.
#!/usr/bin/python

import sys

file_name = raw_input("Enter filename: ")
file_finish = "end"
file_text = ""
try:
   # open file stream
   file = open(file_name, "w")
except IOError:
   print "There was an error writing to", file_name
   sys.exit()
print "Enter '", file_finish, "' when finished"
while file_text != file_finish:
   file_text = raw_input("Enter text: ")
   if file_text == file_finish:
      # close the file
      file.close()
      break
   file.write(file_text)
   file.write("\n")
file.close()
file_name = raw_input("Enter filename: ")
if len(file_name) == 0:
   print "Next time please enter something"
   sys.exit()
try:
   file = open(file_name, "r")
except IOError:
   print "There was an error reading file"
   sys.exit()
file_text = file.read()
file.close()
print file_text
Multi-Line Statements
Statements in Python typically end with a new line. Python does, however, allow the use of the
line continuation character (\) to denote that the line should continue. For example −
total = item_one + \
item_two + \
item_three
Statements contained within the [], {}, or () brackets do not need to use the line continuation
character. For example −
days = ['Monday', 'Tuesday', 'Wednesday',
        'Thursday', 'Friday']
Quotation in Python
Python accepts single ('), double (") and triple (''' or """) quotes to denote string literals, as long as
the same type of quote starts and ends the string.
The triple quotes are used to span the string across multiple lines. For example, all the following
are legal −
word = 'word'
sentence = "This is a sentence."
paragraph = """This is a paragraph. It is
made up of multiple lines and sentences."""
Comments in Python
A hash sign (#) that is not inside a string literal begins a comment. All characters after the # and
up to the end of the physical line are part of the comment and the Python interpreter ignores
them.
#!/usr/bin/python

# First comment
print "Hello, Python!" # second comment
This produces the following result −
Hello, Python!
You can type a comment on the same line after a statement or expression −
# This is a comment.
The following triple-quoted string is also ignored by the Python interpreter and can be used as a
multiline comment:
'''
This is a multiline
comment.
'''
A line containing only whitespace, possibly with a comment, is known as a blank line and Python
totally ignores it.
In an interactive interpreter session, you must enter an empty physical line to terminate a
multiline statement.
The following line of the program displays the prompt, the statement saying “Press the enter key
to exit”, and waits for the user to take action −
#!/usr/bin/python
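raw_input("\n\nPress the enter key to exit.")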
Here, "\n\n" is used to create two new lines before displaying the actual line. Once the user
presses the key, the program ends. This is a nice trick to keep a console window open until the
user is done with an application.
The semicolon ( ; ) allows multiple statements on a single line, given that no statement
starts a new code block. Here is a sample snip using the semicolon −
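import sys; x = 'foo'; sys.stdout.write(x + '\n')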
A group of individual statements, which make a single code block are called suites in Python.
Compound or complex statements, such as if, while, def, and class require a header line and a
suite.
Header lines begin the statement (with the keyword) and terminate with a colon ( : ) and are
followed by one or more lines which make up the suite. For example −
if expression :
suite
elif expression :
suite
else :
suite
Many programs can be run to provide you with some basic information about how they should be
run. Python enables you to do this with -h −
$ python -h
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
You can also program your script in such a way that it should accept various options. Command
Line Arguments is an advanced topic and should be studied a bit later once you have gone
through rest of the Python concepts.
Python Lists
The list is the most versatile datatype available in Python, which can be written as a list of comma-
separated values (items) between square brackets. Important thing about a list is that items in a
list need not be of the same type.
Creating a list is as simple as putting different comma-separated values between square brackets.
For example −
list1 = ['physics', 'chemistry', 1997, 2000];
list2 = [1, 2, 3, 4, 5 ];
list3 = ["a", "b", "c", "d"]
Similar to string indices, list indices start at 0, and lists can be sliced, concatenated and so on.
A tuple is a sequence of immutable Python objects. Tuples are sequences, just like lists. The
differences between tuples and lists are, the tuples cannot be changed unlike lists and tuples use
parentheses, whereas lists use square brackets.
Creating a tuple is as simple as putting different comma-separated values. Optionally you can put
these comma-separated values between parentheses also. For example −
tup1 = ('physics', 'chemistry', 1997, 2000);
tup2 = (1, 2, 3, 4, 5 );
The empty tuple is written as two parentheses containing nothing −
tup1 = ();
To write a tuple containing a single value you have to include a comma, even though there is
only one value −
tup1 = (50,);
Like string indices, tuple indices start at 0, and they can be sliced, concatenated, and so on.
To access values in tuple, use the square brackets for slicing along with the index or indices to
obtain value available at that index. For example −
#!/usr/bin/python

tup1 = ('physics', 'chemistry', 1997, 2000);
tup2 = (1, 2, 3, 4, 5, 6, 7 );
print "tup1[0]: ", tup1[0];
print "tup2[1:5]: ", tup2[1:5];
When the above code is executed, it produces the following result −
tup1[0]: physics
tup2[1:5]: [2, 3, 4, 5]
Accessing Values in Dictionary
To access dictionary elements, you can use the familiar square brackets along with the key to
obtain its value. Following is a simple example
#!/usr/bin/python

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print "dict['Name']: ", dict['Name']
print "dict['Age']: ", dict['Age']
When the above code is executed, it produces the following result −
dict['Name']: Zara
dict['Age']: 7
If we attempt to access a data item with a key, which is not part of the dictionary, we get an error
as follows −
#!/usr/bin/python

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print "dict['Alice']: ", dict['Alice']
When the above code is executed, it produces the following result −
dict['Alice']:
KeyError: 'Alice'
Updating Dictionary
You can update a dictionary by adding a new entry or a key-value pair, modifying an existing
entry, or deleting an existing entry as shown below in the simple example −
#!/usr/bin/python

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
dict['Age'] = 8             # update existing entry
dict['School'] = "DPS School"  # add new entry
print "dict['Age']: ", dict['Age']
When the above code is executed, it produces the following result −
dict['Age']: 8
You can either remove individual dictionary elements or clear the entire contents of a dictionary.
You can also delete entire dictionary in a single operation.
To explicitly remove an entire dictionary, just use the del statement. Following is a simple
example
#!/usr/bin/python

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
del dict['Name']  # remove entry with key 'Name'
dict.clear()      # remove all entries in dict
del dict          # delete entire dictionary
print "dict['Age']: ", dict['Age']
This produces the following result. Note that an exception is raised because after del dict, the
dictionary does not exist any more −
dict['Age']:
Dictionary values have no restrictions. They can be any arbitrary Python object, either standard
objects or user-defined objects. However, same is not true for the keys.
(a) More than one entry per key is not allowed, which means no duplicate key is allowed. When
duplicate keys are encountered during assignment, the last assignment wins. For example −
#!/usr/bin/python

dict = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
print "dict['Name']: ", dict['Name']
When the above code is executed, it produces the following result −
dict['Name']: Manni
(b) Keys must be immutable. Which means you can use strings, numbers or tuples as dictionary
keys but something like ['key'] is not allowed. Following is a simple example −
#!/usr/bin/python

dict = {['Name']: 'Zara', 'Age': 7}
print "dict['Name']: ", dict['Name']
When the above code is executed, it produces the following result −
TypeError: unhashable type: 'list'
Updating Tuples
Tuples are immutable, which means you cannot update or change the values of tuple elements.
You are able to take portions of existing tuples to create new tuples, as the following example
demonstrates −
#!/usr/bin/python

tup1 = (12, 34.56);
tup2 = ('abc', 'xyz');
# Following action is not valid for tuples
# tup1[0] = 100;
# So let's create a new tuple as follows
tup3 = tup1 + tup2;
print tup3;
When the above code is executed, it produces the following result −
(12, 34.56, 'abc', 'xyz')
Removing individual tuple elements is not possible. There is, of course, nothing wrong with
putting together another tuple with the undesired elements discarded.
To explicitly remove an entire tuple, just use the del statement. For example −
#!/usr/bin/python

tup = ('physics', 'chemistry', 1997, 2000);
print tup;
del tup;
print "After deleting tup : ";
print tup;
This produces the following result. Note an exception is raised; this is because after del tup, the
tuple does not exist any more −
('physics', 'chemistry', 1997, 2000)
After deleting tup :
NameError: name 'tup' is not defined
5.2.2 DJANGO
Django is a high-level Python Web framework that encourages rapid development and
clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of
Web development, so you can focus on writing your app without needing to reinvent the wheel.
It’s free and open source.
Django's primary goal is to ease the creation of complex, database-driven websites. Django
emphasizes reusability and "pluggability" of components, rapid development, and the principle
of don't repeat yourself. Python is used throughout, even for settings files and data models.
Django also provides an optional administrative create, read, update and delete interface that is
generated dynamically through introspection and configured via admin models.
Create a Project
Whether you are on Windows or Linux, just get a terminal or a cmd prompt and navigate to the
place you want your project to be created, then use this code −
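$ django-admin startproject myproject
This will create a "myproject" folder with the following structure −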
myproject/
manage.py
myproject/
__init__.py
settings.py
urls.py
wsgi.py
The “myproject” folder is just your project container, it actually contains two elements −
manage.py − This file is kind of your project local django-admin for interacting with your project
via command line (start the development server, sync db...). To get a full list of command
accessible via manage.py you can use the code −
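$ python manage.py help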
The “myproject” subfolder − This folder is the actual Python package of your project. It contains
four files −
__init__.py − Just for Python; treat this folder as a package.
settings.py − As the name indicates, your project settings.
urls.py − All links of your project and the function to call. A kind of ToC of your project.
wsgi.py − If you need to deploy your project over WSGI.
Your project is set up in the subfolder myproject/settings.py. Following are some important
options you might need to set −
DEBUG = True
This option lets you set if your project is in debug mode or not. Debug mode lets you get more
information about your project's error. Never set it to ‘True’ for a live project. However, this has
to be set to ‘True’ if you want the Django light server to serve static files. Do it only in the
development mode.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'database.sql',
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}
Database is set in the ‘Database’ dictionary. The example above is for SQLite engine. As stated
earlier, Django also supports −
MySQL (django.db.backends.mysql)
PostGreSQL (django.db.backends.postgresql_psycopg2)
MongoDB (django_mongodb_engine)
Before setting any new engine, make sure you have the correct db driver installed.
You can also set other options like: TIME_ZONE, LANGUAGE_CODE, TEMPLATE…
Now that your project is created and configured make sure it's working −
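$ python manage.py runserver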
You will get something like the following on running the above code −
Validating models...
0 errors found
A project is a sum of many applications. Every application has an objective and can be reused
into another project, like the contact form on a website can be an application, and can be reused
for others. See it as a module of your project.
Create an Application
We assume you are in your project folder, in the main "myproject" folder, the same folder as
manage.py −
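$ python manage.py startapp myapp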
You have just created the myapp application, and like the project, Django creates a "myapp" folder
with the application structure −
myapp/
__init__.py
admin.py
models.py
tests.py
views.py
admin.py − This file helps you make the app modifiable in the admin interface.
At this stage we have our "myapp" application, now we need to register it with our Django
project "myproject". To do so, update INSTALLED_APPS tuple in the settings.py file of your
project (add your app name) −
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'myapp',
)
Creating forms in Django, is really similar to creating a model. Here again, we just need to
inherit from Django class and the class attributes will be the form fields. Let's add a forms.py file
in myapp folder to contain our app forms. We will create a login form.
myapp/forms.py
from django import forms

class LoginForm(forms.Form):
   username = forms.CharField(max_length = 100)
   password = forms.CharField(widget = forms.PasswordInput())
As seen above, the field type can take "widget" argument for html rendering; in our case, we
want the password to be hidden, not displayed. Many other widgets are present in Django:
DateInput for dates, CheckboxInput for checkboxes, etc.
There are two kinds of HTTP requests, GET and POST. In Django, the request object passed as
parameter to your view has an attribute called "method" where the type of the request is set, and
all data passed via POST can be accessed via the request.POST dictionary.
def login(request):
   username = "not logged in"
   if request.method == "POST":
      # Get the posted form
      MyLoginForm = LoginForm(request.POST)
      if MyLoginForm.is_valid():
         username = MyLoginForm.cleaned_data['username']
   else:
      MyLoginForm = LoginForm()
   return render(request, 'loggedin.html', {"username": username})
The view will display the result of the login form posted through the loggedin.html. To test it, we
will first need the login form template. Let's call it login.html
<html>
   <body>
      <form name = "form" action = "{% url "login" %}" method = "POST">
         {% csrf_token %}
         <div style = "max-width:470px;">
            <center>
               <input type = "text" placeholder = "username" name = "username" />
            </center>
         </div>
         <br>
         <div style = "max-width:470px;">
            <center>
               <input type = "password" placeholder = "password" name = "password" />
            </center>
         </div>
         <br>
         <div style = "max-width:470px;">
            <center>
               <button type = "submit">
                  <strong>Login</strong>
               </button>
            </center>
         </div>
      </form>
   </body>
</html>
The template will display a login form and post the result to our login view above. You have
probably noticed the {% csrf_token %} tag in the template, which is just to prevent a Cross-site
Request Forgery (CSRF) attack on your site.
Once we have the login template, we need the loggedin.html template that will be rendered after
form treatment.
<html>
   <body>
      You are : <strong>{{username}}</strong>
   </body>
</html>
urlpatterns = patterns('myapp.views',
   url(r'^connection/', TemplateView.as_view(template_name = 'login.html')),
   url(r'^login/', 'login', name = 'login'))
When accessing "/myapp/connection", we will get the following login.html template rendered −
Setting Up Sessions
In Django, enabling session is done in your project settings.py, by adding some lines to the
MIDDLEWARE_CLASSES and the INSTALLED_APPS options. This should be done while
creating the project, but it's always good to know, so MIDDLEWARE_CLASSES should have −
'django.contrib.sessions.middleware.SessionMiddleware'
'django.contrib.sessions'
When session is enabled, every request (first argument of any view in Django) has a session
(dict) attribute.
Let's create a simple sample to see how to create and save sessions. We have built a simple login
system before (see Django form processing chapter and Django Cookies Handling chapter). Let
us save the username in a cookie so, if not signed out, when accessing our login page you won’t
see the login form. Basically, let's make our login system we used in Django Cookies handling
more secure, by saving cookies server side.
For this, first let us change our login view to save our username cookie server side −
def login(request):
   username = 'not logged in'
   if request.method == 'POST':
      MyLoginForm = LoginForm(request.POST)
      if MyLoginForm.is_valid():
         username = MyLoginForm.cleaned_data['username']
         request.session['username'] = username
   else:
      MyLoginForm = LoginForm()
   return render(request, 'loggedin.html', {"username": username})
Then let us create formView view for the login form, where we won’t display the form if cookie
is set
def formView(request):
   if request.session.has_key('username'):
      username = request.session['username']
      return render(request, 'loggedin.html', {"username": username})
   else:
      return render(request, 'login.html', {})
Now let us change the url.py file to change the url so it pairs with our new view −
urlpatterns = patterns('myapp.views',
   url(r'^connection/', 'formView', name = 'loginform'),
   url(r'^login/', 'login', name = 'login'))
When accessing /myapp/connection, you will get to see the following page
CHAPTER-6
TESTING
6.1 Testing
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of test. Each
test type addresses a specific testing requirement.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
6.3.1 Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER-7
SOURCE CODE
import re
import pandas as pd
import csv
from django.shortcuts import render
# UserRegistrationForm is the project's registration form (its definition is not shown in this listing)

def UserRegisterActions(request):
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            print('Data is Valid')
            form.save()
            form = UserRegistrationForm()
        else:
            print("Invalid form")
    else:
        form = UserRegistrationForm()
    # the original return statement is elided; template name assumed
    return render(request, 'UserRegistrations.html', {'form': form})
def UserLoginCheck(request):
    if request.method == "POST":
        loginid = request.POST.get('loginid')
        pswd = request.POST.get('pswd')
        try:
            # the user-model lookup is elided in the original listing; model name assumed
            check = UserRegistrationModel.objects.get(loginid=loginid, password=pswd)
            status = check.status
            if status == "activated":
                request.session['id'] = check.id
                request.session['loggeduser'] = check.name
                request.session['loginid'] = loginid
                request.session['email'] = check.email
                return render(request, 'users/UserHome.html', {})  # template name assumed
            else:
                print('Account not yet activated')  # original handling elided; placeholder
        except Exception as e:
            pass
    return render(request, 'UserLogin.html', {})  # template name assumed
def UserHome(request):
    # body elided in the original listing; template name assumed
    return render(request, 'users/UserHomePage.html', {})
def Training(request):
    import tensorflow as tf
    import os
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
    from keras.preprocessing.image import ImageDataGenerator

    sz = 128
    classifier = Sequential()
    # The convolution layers are elided in the original listing; a standard
    # Conv2D layer before each pooling layer is assumed here.
    classifier.add(Conv2D(32, (3, 3), input_shape=(sz, sz, 1), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))
    classifier.add(Conv2D(32, (3, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))
    classifier.add(Flatten())
    classifier.add(Dense(units=128, activation='relu'))
    classifier.add(Dropout(0.20))
    classifier.add(Dense(units=112, activation='relu'))
    classifier.add(Dropout(0.10))
    classifier.add(Dense(units=96, activation='relu'))
    classifier.add(Dropout(0.10))
    classifier.add(Dense(units=80, activation='relu'))
    classifier.add(Dropout(0.10))
    classifier.add(Dense(units=64, activation='relu'))
    # Output layer assumed: 36 classes (A-Z and 0-9), matching Sign_detection below
    classifier.add(Dense(units=36, activation='softmax'))
    classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
    classifier.summary()

    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=False)
    test_datagen = ImageDataGenerator(rescale=1. / 255)  # elided in the original; assumed
    # dataset paths are elided in the original listing; placeholder paths assumed
    train_dataset = os.path.join(os.getcwd(), 'data', 'train')
    test_dataset = os.path.join(os.getcwd(), 'data', 'test')
    print('datasets loaded')
    training_set = train_datagen.flow_from_directory(train_dataset,
                                                     target_size=(sz, sz),
                                                     batch_size=10,
                                                     color_mode='grayscale',
                                                     class_mode='categorical')
    test_set = test_datagen.flow_from_directory(test_dataset,
                                                target_size=(sz, sz),
                                                batch_size=10,
                                                color_mode='grayscale',
                                                class_mode='categorical')
    history = classifier.fit(
        training_set,
        epochs=1,
        validation_data=test_set)
    # history.saved_model.save("model-all1-alpha.h5")
    print('Model Saved')
    acc = history.history['acc']
    loss = history.history['loss']
    print(acc[-1], loss[-1])
def Sign_detection(request):
    import numpy as np
    import cv2
    import time
    import keras
    import tensorflow as tf
    from string import ascii_uppercase

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        try:
            tf.config.experimental.set_virtual_device_configuration(gpus[0], [
                tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])
        except RuntimeError as e:
            print(e)
    # uncommented here so that model.predict below works
    model = keras.models.load_model("./model-all1-alpha.h5")
    print('loaded model')
    # model = keras.models.load_model("D:\\Downloads\\model-1st-alpha.h5")
    cam = cv2.VideoCapture(0)
    # class index 0-25 -> A-Z, 26-35 -> digits 0-9
    alpha_dict = {}
    j = 0
    for i in ascii_uppercase:
        alpha_dict[j] = i
        j = j + 1
    for d in range(10):
        alpha_dict[26 + d] = str(d)
    # print(alpha_dict)
    capture_duration = 50
    start_time = time.time()
    Result_list = []
    while int(time.time() - start_time) < capture_duration:
        _, frame = cam.read()
        # The frame pre-processing steps are elided in the original listing;
        # they are reconstructed here as an assumption (grayscale, blur,
        # adaptive threshold with C = 2.8, resize to the 128x128 model input).
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 2)
        th = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2.8)
        final_image = cv2.resize(th, (128, 128)).reshape(1, 128, 128, 1) / 255.0
        cv2.imshow("BW", final_image.reshape(128, 128))
        pred = model.predict(final_image)
        # print(pred)
        print(alpha_dict[np.argmax(pred)])
        a = alpha_dict[np.argmax(pred)]
        Result_list.append(a)
        cv2.imshow("Frame", frame)
        k = cv2.waitKey(1)
        if k == 27:  # Esc key stops the capture
            break
    # time.sleep(10)
    cam.release()
    cv2.destroyAllWindows()
def speach_data(request):
    # body elided in the original listing; template name assumed
    return render(request, 'users/speach.html', {})

def speach_to_text(request):
    import speech_recognition as sr
    import pyttsx3
    import os

    r = sr.Recognizer()
    # folder of sign images; the original path is elided, placeholder assumed
    image_names = os.path.join(os.getcwd(), 'media', 'signs')
    dir_list = os.listdir(image_names)
    # list of known sign names, taken from the image file names
    lst = [os.path.splitext(name)[0] for name in dir_list]
    print(lst)

    def SpeakText(command):
        engine = pyttsx3.init()
        engine.say(command)
        engine.runAndWait()

    try:
        # microphone context is elided in the original listing; assumed here
        with sr.Microphone() as source2:
            r.adjust_for_ambient_noise(source2, duration=0.2)
            audio2 = r.listen(source2)
        MyText = r.recognize_google(audio2)
        MyText = MyText.lower()
        SpeakText(MyText)
        if MyText in lst:
            detected_sign = MyText
            # p = os.path.join(image_names, detected_sign + ".jpg")
        else:
            detected_sign = None  # no matching sign image; original handling elided
    except sr.RequestError as e:
        print("Could not request results; {0}".format(e))
    except sr.UnknownValueError:
        print("Unknown value error")
index.html:
{% extends 'base.html' %}
{% load static %}
{% block content %}
</div>
</header>
{% endblock %}
Base.html:
{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Favicon-->
<script src="https://use.fontawesome.com/releases/v5.15.3/js/all.js"
crossorigin="anonymous"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/simple-line-icons/2.5.5/css/simple-line-
icons.min.css"
rel="stylesheet" />
<link href="https://fonts.googleapis.com/css?
family=Source+Sans+Pro:300,400,700,300italic,400italic,700italic"
<!-<link="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css"
integrity="sha384-
Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
crossorigin="anonymous">
-->
</head>
<!-- Navigation-->
<nav id="sidebar-wrapper">
<ul class="sidebar-nav">
</ul>
</nav>
<!-- Header-->
{% block content %}
{% endblock %}
<!-- Map-->
<!-- Footer-->
<!-- class="icon-social-facebook"></i></a>-->
<!-- </li>-->
<!-- class="icon-social-twitter"></i></a>-->
<!-- </li>-->
<!-- </li>-->
<!-- </ul>-->
</div>
</footer>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></
script>
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"
integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/
GpGFF93hXpG5KkN"
crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js"
integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskv
Xusvfa0b4Q"
crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js"
integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5
+76PVCmYl"
crossorigin="anonymous"></script>
</body>
</html>
CHAPTER-8
RESULTS
S.No | Test Case | Expected Result | Result | Remarks (If Fails)
1 | User Register | User registration completes successfully. | Pass | If the user email already exists, it fails.
2 | User Login | If the username and password are correct, the user gets a valid page. | Pass | Unregistered users will not be logged in.
3 | Deep learning and machine learning algorithms | Here we used deep learning and machine learning algorithms. | Pass | Otherwise the request is not accepted and it fails.
4 | Model training | Model under training; results calculated and displayed. | Pass | If results are not true, it fails.
5 | Model training results | Model training results calculated and displayed. | Pass | Data is considered for testing.
6 | Model accuracy and model loss | Model accuracy and loss will be displayed to the user. | Pass | If results are not true, it fails.
7 | Prediction results | Indian sign language detected. | Pass | If results are not true, it fails.
8 | Calculate accuracy, loss | Accuracy and loss calculated. | Pass | If accuracy and loss are not displayed, it fails.
9 | Admin login | Admin can login with his login credentials. On success he gets his home page. | Pass | Invalid login details are not allowed.
10 | Admin can activate the registered users | Admin can activate the registered user id. | Pass | If the user id is not found, login is not possible.
Home page
Fig 8.2.1: Home Page
Register Form
Fig 8.2.2: Registration
Activate User
Model training:
Sign detection:
Sign caption:
Speech to text:
CHAPTER-9
CONCLUSION
From the results obtained, it is inferred that the SVM classifier along with the K-means clustering
and BoV classifiers is best suited for gesture recognition. A user-friendly application that can
interpret Indian Sign Language has been developed using the most efficient SVM classifier (for
gesture to text conversion) and Google Speech Recognition API (for speech to gesture conversion).
Thus, a more reliable sign language interpretation system has been developed.
CHAPTER-10
BIBLIOGRAPHY
REFERENCES
[1] Cheok Ming Jin, Zaid Omar, Mohamed Hisham Jaward, "A Mobile Application of American
Sign Language Translation via Image Processing Algorithms", IEEE Region 10 Symposium,
IEEE Xplore, 2016.
[2] Sanket Kadam, Aakash Ghodke, Prof. Sumitra Sadhukhan, "Hand Gesture
Recognition Software based on ISL", IEEE Xplore, 20 June 2019.
[3] Kartik Shenoy, Tejas Dastane, Varun Rao, Devendra Vyavaharkar, "Real-time Indian Sign
Language (ISL) Recognition", IEEE Xplore, 18 October 2018.
[4] T. Raghuveera, R. Deepthi, R. Mangalashri and R. Akshaya, "A depth-based ISL Recognition
using Microsoft Kinect", ScienceDirect, 2018.
[5] Muthu Mariappan H, Dr. Gomathi V, "Real-time Recognition of ISL", IEEE Xplore, 10
October 2019.
[6] G. Ananth Rao, P.V.V. Kishore, "Selfie video based continuous Indian sign language
recognition system", ScienceDirect, 2018.
[7] G. Ananth Rao, P.V.V. Kishore, "Sign Language Recognition Based on Hand and Body
Skeletal Data", IEEE, 2018.
[8] Suharjito, Ricky Anderson, Fanny Wiryana, Meita Chandra Ariesta, Gede Putra
Kusuma, "Sign Language Recognition Application Systems for Deaf-Mute People: A Review
Based on Input-Process-Output", ScienceDirect, 2017.
CHAPTER-11
FUTURE ENHANCEMENT
Communication is a vital activity of human beings, as only through communication can they express
their feelings, encourage cooperation and social bonds, share their ideas, and work together in
society. People who are not able to hear or speak (hearing-impaired people) use
sign language as a means of communication. Like spoken language, sign language also emerges
and evolves naturally within hearing-impaired communities. It is a visual form of communication,
and it differs in each country/region where the hearing-impaired live.