
International Journal of Research Publication and Reviews, Vol (2), Issue (7), (2021), Page 809-814

Journal homepage: www.ijrpr.com, ISSN 2582-7421

Hand Gesture Recognition System for Deaf and Dumb Using IoT

Apoorva N (a), Gowri U Urs (b), Meghana Rao J (c), Roshini M (d), Prof. Syed Salim (e)

(a, b, c, d) UG Student, Department of Information Science and Engineering, VVIET, Karnataka, India.
(e) Faculty, Department of Information Science and Engineering, VVIET, Karnataka, India.

ABSTRACT

Sign language expresses messages through manual gestures and body language; it encompasses hand shapes, orientations, and movements, as well as body and facial expressions. People communicate with each other primarily through speech, yet birth defects, accidents, and oral disorders have contributed to a marked rise in the number of deaf and dumb people in recent years. Because these people cannot communicate verbally, they must rely on visual communication. Many languages are spoken and understood all over the world, and people who have trouble speaking or hearing are often referred to as "special people." They find it difficult to understand what a hearing person is attempting to say, and hearing people, in turn, often misinterpret their signals, whether conveyed through sign language, lip reading, or lip movement. This project is designed to assist these individuals with special needs in participating equally in society. Sign language exploits unique features of the visual medium through its spatial grammar; in the United States alone there are currently an estimated one to two million signers. The sign language translator we have developed uses a glove fitted with sensors and interprets the words predefined for certain sensor value combinations based on American Sign Language (ASL). The glove uses flex sensors to gather data on each finger's position and the hand's motion in order to differentiate the letters.

Keywords: Flex Sensor, Mobile Application, Cloud, NodeMCU, Arduino Nano.

1. Introduction

It is critical to establish contact and connection with deaf and dumb people in today's world. These individuals communicate through hand signals or gestures: physical actions that a person uses to convey meaningful information. Like all oral languages, sign language has developed spontaneously. A computer can be configured to translate sign language into text, reducing the gap in comprehension between hearing people and the deaf community.
Several methods for recognizing hand gestures have been suggested; they can be divided into two categories: vision-based and non-vision-based. We chose the glove-based (non-vision) approach because it is highly effective for gesture recognition. It uses a structured sensor glove that produces a symbol corresponding to the hand sign. The data generated is very accurate because the smart glove's performance is not affected by light, electric or magnetic fields, or other external influences. Deaf and dumb people have a hard time communicating with hearing people; this enormous challenge leaves them uneasy, and they feel discriminated against in society.

2. Existing and Proposed System

Existing System

Hand gestures can be detected using a web camera, i.e., using image processing. Such a system captures images from the video stream, removes the background using RGB filtering and thresholding, and converts the observed hand movement into text. This approach has drawbacks: image processing can be significantly slow, creating unacceptable latency, and many gesture recognition systems misread motions due to factors such as insufficient background light.
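The RGB filtering and thresholding step of the vision-based approach can be sketched in a few lines. The color bounds and the tiny synthetic frame below are illustrative assumptions, not values from this work:

```python
import numpy as np

def segment_hand(frame, lower, upper):
    """Keep pixels whose RGB values fall inside [lower, upper];
    everything else is treated as background and zeroed out."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    result = np.zeros_like(frame)
    result[mask] = frame[mask]
    return mask, result

# Synthetic 2x2 "frame": two skin-toned pixels, two background pixels.
frame = np.array([[[200, 150, 120], [10, 200, 10]],
                  [[190, 140, 110], [0, 0, 255]]], dtype=np.uint8)
mask, fg = segment_hand(frame, lower=(150, 100, 80), upper=(255, 180, 160))
print(mask)   # True where the pixel survived the RGB threshold
```

Even in this toy form the per-pixel comparison over a full video stream hints at why the latency of the vision-based approach can become unacceptable.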
Proposed System

Our proposed system is designed to reduce the communication barrier between speech-impaired and hearing people, giving a voice to their actions.
Our prototype uses a NodeMCU and an Arduino Nano (as an A/D converter) interfaced with flex sensors that read hand gestures. When a specific sign is made, the flex sensors generate a unique combination of values. These values are sent to the cloud and verified against a database of stored messages; when the sign matches, the corresponding output is produced as text and converted to speech.

Figure 1: Block diagram of hand gesture recognition for deaf and dumb.

Methodology

The purpose of the design phase is to plan a solution to the problem specified by the requirements document. This phase is the first step in moving from the problem domain to the solution domain: starting from what is needed, design takes us toward how to satisfy those needs. The design of a system is perhaps the most critical factor affecting the quality of the software, and it has a major impact on the later phases, particularly testing and maintenance.
The system has two parts: a mobile app and a hand glove. After the glove is powered up, every finger and hand movement is checked against the patterns defined for each word or letter. If the movement data is recognized, the corresponding word or letter is sent to the app for display; if it is not recognized, nothing is shown.

3. System Design

Figure 2: High level design.

The figure represents the use case diagram of the Hand Gesture Recognition System, which consists of three stages:

• Sensing module: The sensing stage consists of flex sensors whose resistance changes depending on the amount of bend. They convert the bend into an electrical resistance that is passed to the next stage; these values are used to identify the gestures made by the user.
• Data transfer module: The converted data values are sent to the cloud through the NodeMCU module. The MQTT protocol is used to send the data from the sensors to the cloud because it is a lightweight, low-latency messaging protocol.
• Processing module: The data received by the cloud is sent to the mobile application, where it is compared with the stored values. If a received value matches a predefined value, the associated message is retrieved and converted into a voice message in the mobile application.
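The three modules can be sketched as pure functions. The gesture table, tolerance, and sensor readings below are illustrative assumptions, not calibration data from the prototype:

```python
# Processing module's stored table: five-finger ADC readings -> message.
STORED_GESTURES = {
    (310, 512, 700, 120, 440): "Hello",
    (800, 790, 810, 805, 795): "Thank you",
}
TOLERANCE = 25  # allowed deviation in ADC counts (assumed)

def sense():
    """Sensing module: stand-in for the five flex-sensor ADC readings."""
    return (305, 520, 695, 118, 450)

def transfer(readings):
    """Data transfer module: package readings as an MQTT-style payload."""
    return ",".join(str(v) for v in readings)

def process(payload):
    """Processing module: compare against stored values within tolerance."""
    values = tuple(int(v) for v in payload.split(","))
    for stored, message in STORED_GESTURES.items():
        if all(abs(a - b) <= TOLERANCE for a, b in zip(values, stored)):
            return message
    return None  # unrecognized gesture: nothing is shown

print(process(transfer(sense())))
```

The tolerance band matters in practice: flex sensors never reproduce exactly the same resistance for the same bend, so an exact-match comparison would reject almost every gesture.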

Block Diagram:

Figure 3: Block diagram of hardware module.

Figure 4: Block diagram of software module

4. System Implementation

Flex Sensors

A flex sensor is essentially a variable resistor whose terminal resistance increases as the sensor is bent. Built as a printed carbon resistive element on a thin flexible substrate, it achieves an excellent form factor. When the substrate is bent, the sensor produces a resistance output correlated to the bend radius: the smaller the radius (i.e., the greater the bend), the higher the resistance. In a circuit, the flex sensor is used as one leg of an analog voltage divider.

Figure 5: (a) Flex Sensor (b) Resistance change with bending

Working
Flex sensors are carbon resistive elements on a thin flexible substrate; when bent, they produce a resistance output relative to the bend. They work on the principle of a voltage divider, and the basic flex sensor circuit is shown in Figure 6.

Figure 6: Basic flex sensor circuit.

The output voltage of the potential divider is:

Vout = Vin × R2 / (R1 + R2)
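A quick numeric check of this divider equation, assuming the flex sensor is R2, a 3.3 V supply, a 10 kΩ fixed resistor, and flat/bent resistances of roughly 25 kΩ and 100 kΩ (all assumed values, not taken from the prototype):

```python
VIN = 3.3    # supply voltage, volts (assumed)
R1 = 10_000  # fixed divider resistor, ohms (assumed)

def vout(r2):
    """Voltage at the divider tap for flex-sensor resistance R2."""
    return VIN * r2 / (R1 + r2)

def adc_count(v, vref=3.3, bits=10):
    """Count a 10-bit ADC (like the Arduino Nano's) would report."""
    return round(v / vref * (2 ** bits - 1))

flat = vout(25_000)   # sensor flat: ~25 kOhm (typical datasheet range)
bent = vout(100_000)  # sensor fully bent: resistance rises toward ~100 kOhm
print(adc_count(flat), adc_count(bent))
```

With the flex sensor on the lower leg, bending raises R2 and therefore raises Vout, so each gesture maps to a distinct band of ADC counts.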

NodeMCU

NodeMCU is an open-source, Lua-based firmware developed for the ESP8266 Wi-Fi chip. To expose the chip's functionality, the NodeMCU firmware ships with a matching ESP8266 development kit, the NodeMCU Development Board.
As shown in Figure 7, the NodeMCU board is built around the ESP8266, a low-cost Wi-Fi chip developed by Espressif Systems with a TCP/IP stack and microcontroller capabilities.
NodeMCU has Arduino-like analog (A0) and digital (D0-D8) pins on its board, and it supports serial communication protocols such as UART, SPI, and I2C.

Figure 7: NodeMCU.

MQTT Protocol (Message Queuing Telemetry Transport):


MQTT is an extremely lightweight publish-subscribe messaging transport protocol that runs over TCP/IP. It is a bi-directional communication protocol that allows messaging from device to cloud and vice versa. The default port number is 1883.

Characteristics of MQTT Protocol:


• It is a machine-to-machine protocol.
• It provides bi-directional communication.
• It does not require the client and the server to establish a connection at the same time.
• It provides fast data transmission.
• It can scale to millions of connected devices.
• It supports secure communication.

Architecture of MQTT Protocol:

Figure 8: Architecture of MQTT Protocol.

Components of MQTT Protocol includes:


• Message.
• Client.
• Broker.
• Topic.

1. Message: The data carried by the protocol across the network for the application.
2. Client: Any device that runs an MQTT library and connects to an MQTT broker over a network.
3. Broker: A server that receives all messages from the clients and routes them to the appropriate destination clients.
4. Topic: The label attached to a message, which the broker checks against the subscriptions it knows about in order to route the message.
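The broker's topic-based routing can be illustrated with a minimal in-memory sketch. A real deployment would use an actual broker (e.g., Mosquitto) and an MQTT client library; the class and topic names here are invented for illustration:

```python
class Broker:
    """Toy stand-in for an MQTT broker: routes messages by topic."""

    def __init__(self):
        self.subscriptions = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every client subscribed to this topic.
        for callback in self.subscriptions.get(topic, []):
            callback(topic, message)

broker = Broker()
received = []

# The mobile application (a client) subscribes to the glove's topic.
broker.subscribe("glove/sensors", lambda t, m: received.append(m))

# The NodeMCU (another client) publishes a sensor reading.
broker.publish("glove/sensors", "305,520,695,118,450")
broker.publish("glove/other", "ignored")  # no subscriber: not delivered

print(received)
```

Note how the publisher and subscriber never talk to each other directly; the broker decouples them, which is why neither side needs to be online at the same moment.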

Algorithm

The gestures made by the user need to be stored and interpreted accurately in order to convert them into speech.
Start
Step 1: The flex sensor generates a different resistance each time it is bent; these ranges are recorded.
Step 2: Establish a connection between the hardware and the network.
Step 3: Read the sensor values from the controller.
Step 4: Upload the values to the cloud.
Step 5: Compare these sensor values to the values stored in the database.
Step 6: Send the sensor values to the application through the MQTT protocol.
Step 7: a) If the data in the database matches the sensor data, the assigned message is retrieved and converted to speech.
b) Else, the data is declined and no message is given.
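Steps 1 and 7 can be sketched together: each gesture is calibrated as a per-finger [min, max] ADC range (Step 1), and a live reading is accepted only when every finger falls inside its recorded range (Step 7a), otherwise it is declined (Step 7b). The gesture names and ranges below are invented for illustration:

```python
# Calibrated ranges recorded in Step 1: gesture -> five (min, max) pairs.
CALIBRATED = {
    "Hello": [(290, 330), (500, 540), (680, 720), (100, 140), (430, 470)],
    "Yes":   [(780, 820), (780, 820), (780, 820), (780, 820), (780, 820)],
}

def match_gesture(reading):
    """Step 7: return the matching message, or None if declined."""
    for name, ranges in CALIBRATED.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(reading, ranges)):
            return name  # Step 7a: assigned message found
    return None          # Step 7b: data declined, no message

print(match_gesture((305, 520, 695, 118, 450)))  # inside "Hello" ranges
print(match_gesture((0, 0, 0, 0, 0)))            # no calibrated match
```

Recording ranges rather than single values absorbs the sensor's run-to-run variation noted in Step 1.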

5 Conclusion and Future Scope

Sign language is a useful tool to ease communication between the mute community and hearing people, yet a communication barrier still exists. In this work, the gestures made by speech-impaired people are captured by flex sensors that produce specific voltages; each reading has a specific meaning in the procured data and is displayed accordingly. This helps bridge the communication gap between hearing and speech-impaired people, since the message is converted to speech. Mounting the flex sensors on a glove makes the device easy to carry and very efficient, which in turn helps users express themselves better and become a closer part of society.
The gloves can be customized to the user's sign language. Translation is currently done into only one language; this can be enhanced to support other regional languages. Only flex sensors are used at present; in the future, other sensors can be added for further utilities.

