


Sign Language Recognition using Sensor Gloves

Conference Paper · December 2002 · DOI: 10.1109/ICONIP.2002.1201884

Yasir Niaz Khan
Syed Atif Mehdi
FAST-National University of Computer and Emerging Sciences, Lahore.
Email: [email protected], [email protected]

Abstract

This paper examines the possibility of recognizing sign language gestures using sensor gloves. Previously, sensor gloves have been used in games or in applications with custom gestures. This paper explores their use in sign language recognition by implementing a project called "Talking Hands" and studying the results. The project uses a sensor glove to capture signs of American Sign Language performed by a user and translates them into sentences of the English language. Artificial neural networks are used to recognize the sensor values coming from the sensor glove. These values are then categorized into 24 letters of the English alphabet and two punctuation symbols introduced by the authors. Thus, mute people can write complete sentences using this application.

Keywords

American Sign Language, neural networks, sensor gloves, language recognition, deaf.

Content Areas

Neural Networks
Human Computer Interaction
Natural language understanding
Machine learning
Artificial Intelligence
1. Sign Languages

Sign language is the language used by deaf and mute people. It is a combination of shapes and movements of different parts of the body, including the face and hands. The area of performance of the movements may range from well above the head down to belt level. Signs are used in a sign language to communicate words and sentences to an audience.

A gesture in a sign language is a particular movement of the hands with a specific shape made out of them; facial expressions also count toward the gesture at the same time. A posture, on the other hand, is a static shape of the hand used to indicate a sign.

A sign language usually provides signs for whole words. It also provides signs for letters, to spell out words that do not have a corresponding sign in that sign language. So, although sentences can be made using the signs for letters, performing with signs for words is faster. The sign language chosen for this project is American Sign Language.

1.1. American Sign Language

It is the most well documented and most widely used sign language in the world. American Sign Language (ASL) is a complex visual-spatial language that is used by the Deaf community in the United States and English-speaking parts of Canada. It is a linguistically complete, natural language. It is the native language of many Deaf men and women, as well as of some hearing children born into Deaf families. ASL shares no grammatical similarities with English and should not be considered in any way a broken, mimed, or gestural form of English.

2. Sensor Gloves

Sensor gloves are normally gloves made out of cloth with sensors fitted on them. Using a data glove is a better idea than using a camera, as the user has the flexibility of moving around freely within a radius limited by the length of the wire connecting the glove to the computer, unlike a camera, where the user has to stay in position in front of it. This limit can be further reduced by using a wireless camera. The performance of the glove is not affected by light, electric or magnetic fields, or any other disturbance.

We have used the 7-sensor glove of the 5DT company. It has 7 sensors on it: 5 sensors, one for each finger and the thumb; one sensor to measure the tilt of the hand; and one sensor for the rotation of the hand. Optic fibers are mounted on the glove to measure the flexure of the fingers and thumb. Each sensor returns an integer value between 0 and 4095. This value indicates how far the sensor is bent: 0 means fully stretched and 4095 means fully bent. So the input is a vector of 7 sensor values, each with 4096 possible levels.

Since a glove can only capture the shape of the hand and not the shape or motion of other parts of the body (e.g. arms, elbows, face), only postures are used in this project. Signs for the letters 'j' and 'z' are ignored, as they involve moving gestures. Two custom signs have been added to the input set: one for the space between words and the other for the full stop. These are not part of the sign language, but have been added to facilitate writing the English equivalent of the sentence being performed.
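To make the glove's output format concrete, the following minimal sketch (in Python; not part of the paper, and independent of the actual 5DT driver API) shows how the 7 raw readings could be scaled to the unit interval before recognition:

    import numpy as np

    SENSOR_COUNT = 7            # 5 finger/thumb flexure sensors + tilt + rotation
    RAW_MIN, RAW_MAX = 0, 4095  # each sensor reports an integer in this range

    def normalize(raw_values):
        """Map raw 12-bit sensor readings (0 = fully stretched,
        4095 = fully bent) onto [0.0, 1.0] for the neural network."""
        raw = np.asarray(raw_values, dtype=float)
        assert raw.shape == (SENSOR_COUNT,)
        return (raw - RAW_MIN) / (RAW_MAX - RAW_MIN)

    # Example frame: a half-bent thumb among mostly stretched fingers,
    # plus tilt and rotation readings.
    sample = normalize([2048, 100, 90, 80, 70, 1024, 3000])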

3. Previous work

Previously, sensor gloves have been used in games for creating virtual 3D environments, where players give input to the game using the gloves. Gloves, along with other sensor devices, have also been used in making games: actions of experts wearing the sensors are captured and translated into the game to give it a realistic look. Sensor gloves have also been used for giving commands to robots: streams of hand shapes are defined and then recognized to control a robotic hand or vehicle.

Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a Polhemus sensor, and a foot pedal), a parallel formant speech synthesizer, and 3 neural networks. One subject was trained to use Glove-TalkII. After 100 hours of practice he was able to speak intelligibly. The subject passed through 8 distinct stages while he learned to speak. His speech is fairly slow (1.5 to 3 times slower than normal speech) and somewhat robotic. Reading novel passages intelligibly usually requires several attempts, especially with polysyllabic words. Intelligible spontaneous speech is possible but difficult.
4. Sign Language Recognition

Our system is aimed at maximum recognition of gestures without any training. This makes the system usable at public places where there is no room for long training sessions. The speed of gesture capturing and recognition can be adjusted in the application to accommodate both slow and fast performers of ASL.

4.1. Neural Network Model

[Figure 1: Model of the neural network used in the project. The input, hidden and output layers contain 7, 54 and 26 neurons (nodes), respectively.]

An artificial neural network with feed-forward and back-propagation algorithms has been used. The feed-forward algorithm is used to calculate the output for a specific input pattern; the back-propagation algorithm is used for learning of the network. Three layers of nodes have been used in the network. The first layer is the input layer, which takes the 7 sensor values from the sensors on the glove, so this layer has 7 nodes. This layer does not do any processing and just passes the values forward.

The next layer is the hidden layer, which takes the values from the input layer and applies weights to them. This layer has 52 nodes. It passes its output to the third layer. The third layer is the output layer, which takes its input from the hidden layer and applies weights to it. There are 26 nodes in this layer; each node denotes one letter of the sign language subset. This layer produces the final output.

A threshold is applied to the final output. Only the values above this threshold are considered. If none of the nodes gives an output above the threshold value, no letter is output. If more than one node gives a value above the threshold, no letter is output either. The activation function used is the sigmoid function. It is applied at both of the processing layers after the weights have been applied, and it is used in both processing and learning of the network.

Sampling is done 4 times a second. The user must hold the sign for 3/4 of a second to get it recognized. This limit can be lowered for faster performers.
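The description above maps directly onto a small feed-forward network. The sketch below mirrors the stated structure under stated assumptions: the 7-52-26 layer sizes, sigmoid activations, single-winner threshold rule, and 4 Hz sampling with a 3/4-second hold come from the paper, while the weights are random placeholders because the trained values are not published:

    import numpy as np

    rng = np.random.default_rng(0)

    # Layer sizes from the paper: 7 sensor inputs, 52 hidden, 26 output nodes.
    W1 = rng.normal(0, 0.1, (52, 7))   # input -> hidden weights (placeholder)
    b1 = np.zeros(52)
    W2 = rng.normal(0, 0.1, (26, 52))  # hidden -> output weights (placeholder)
    b2 = np.zeros(26)

    # 24 letters ('j' and 'z' excluded) plus the two custom signs.
    LETTERS = [c for c in "abcdefghijklmnopqrstuvwxyz" if c not in "jz"] + [" ", "."]

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def classify(x, threshold=0.5):
        """Feed-forward pass plus the paper's decision rule: emit a
        letter only when exactly one output node exceeds the threshold."""
        h = sigmoid(W1 @ x + b1)        # hidden layer activation
        y = sigmoid(W2 @ h + b2)        # output layer activation
        above = np.flatnonzero(y > threshold)
        if len(above) != 1:             # none or several winners -> no output
            return None
        return LETTERS[above[0]]

    def debounce(stream, needed=3):
        """Yield a letter once it wins `needed` consecutive samples
        (3 samples at 4 Hz = the 3/4-second hold time)."""
        last, run = None, 0
        for letter in stream:
            run = run + 1 if letter == last else 1
            last = letter
            if letter is not None and run == needed:
                yield letter

    # Example: classify one normalized frame (the result is meaningless
    # until real trained weights replace the placeholders).
    print(classify(np.full(7, 0.5)))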

4.2. Results

The accuracy rate of the software was found to be 88%. This figure is low partly because training was done on samples from people who did not know sign language and were given a handout to perform the signs by reading from it. So, there was a great deal of variation in the samples; some samples even gave completely wrong sensor readings. Testing was also done on the same kind of people.

4.3. Problems

One problem faced in the project was that some of the letters involve dynamic gestures. These may not be recognized using this glove, so they were left out of the domain of the project. Also, some gestures require the use of both hands, which would require two sensor gloves.

4.4. Proposed solutions

The problem of dynamic gestures can be resolved by using sensors on the arm as well; sensors would be required at the elbow and perhaps the shoulder. Hidden Markov Models can be employed to recognize the sequence of readings given by moving hands.
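As a sketch of this proposal (nothing here is implemented in the paper), each dynamic sign such as 'j' or 'z' could get its own small HMM over quantized sensor frames, and a sequence would be assigned to the sign whose model scores it highest. The forward algorithm below is the standard scaled version; all model parameters are illustrative placeholders:

    import numpy as np

    def forward_log_likelihood(obs, start, trans, emit):
        """Forward algorithm: log P(observation sequence | HMM).
        obs:   sequence of discrete symbols (quantized sensor frames)
        start: (S,) initial state probabilities
        trans: (S, S) transition matrix, trans[i, j] = P(j | i)
        emit:  (S, V) emission probabilities over V symbols"""
        alpha = start * emit[:, obs[0]]
        scale = alpha.sum()
        log_p = np.log(scale)
        alpha /= scale
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            scale = alpha.sum()          # rescale to avoid underflow
            log_p += np.log(scale)
            alpha /= scale
        return log_p

    def recognize(obs, models):
        """Pick the dynamic sign whose HMM assigns the sequence the
        highest likelihood. `models` maps sign -> (start, trans, emit)."""
        return max(models, key=lambda s: forward_log_likelihood(obs, *models[s]))

    # Toy example: two 3-state models over 4 quantized symbols.
    S, V = 3, 4
    rng = np.random.default_rng(1)
    def random_model():
        start = np.full(S, 1.0 / S)
        trans = rng.dirichlet(np.ones(S), size=S)
        emit = rng.dirichlet(np.ones(V), size=S)
        return start, trans, emit
    models = {"j": random_model(), "z": random_model()}
    print(recognize([0, 3, 3, 1], models))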
4.5. Future work

As mentioned above, signs of sign languages are usually performed not only with the hands but also with facial expressions. One big extension to the application would be the use of sensors (or cameras) to capture facial expressions.

Sign languages are also space-dependent. This means that the space (relative to the body) where the gestures are performed also contributes to sentence formation. Sensors would be needed to detect the relative space in which the gestures are performed.

Sign languages, like spoken languages, have certain rules of grammar for forming sentences. These rules must be taken into account when translating a sign language into a spoken language. Rules of the targeted spoken language must also be taken into account. In the end, adding a speech engine to speak the translated text would further enhance ease of use.

[Figure 2: Model of an application that can fully translate a sign language into a spoken language. The pipeline runs: Data Glove -> Neural Networks -> ASL Recognition -> Machine Translation System (using an ASL lexicon and an English lexicon, each with words and rules) -> Text To Speech -> Speech.]

5. Conclusion

This project was meant to be a prototype to check the feasibility of recognizing sign languages using sensor gloves. The completion of this prototype suggests that sensor gloves can be used for partial sign language recognition. More sensors can be employed to recognize the full sign language.

6. Application

The product generated as a result can be used at public places like airports, railway stations, and the counters of banks, hotels, etc., where there is communication between different people. In addition, a mute person can deliver a lecture using it.

Assuming that we are able to convert the whole of American Sign Language into spoken English, we could manufacture a handy and portable hardware device with this translating system built in as a chip. With the help of this hardware device, which would have built-in speakers as well, and a group of body sensors along with a pair of data gloves, a mute person could communicate with any hearing person anywhere. A special dress could also be designed with the required number of sensors at the appropriate places for this purpose. This would go a long way toward bridging the communication gap between the deaf community and the hearing world.
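To make the envisioned device concrete, here is a hedged sketch wiring together the stages of Figure 2. It reuses the normalize and classify sketches above; the translation and speech functions are hypothetical stubs, since only capture and recognition exist in the prototype:

    # Placeholder translation stage: a real system would apply the ASL
    # and English lexicons and grammar rules shown in Figure 2.
    def machine_translate(asl_text):
        return asl_text

    # Placeholder standing in for a real text-to-speech engine.
    def synthesize_speech(english):
        return english.encode("utf-8")

    def translate_and_speak(frames):
        """Figure 2 pipeline: glove frames -> letters -> English -> speech.
        `frames` is an iterable of raw 7-sensor readings."""
        letters = (classify(normalize(f)) for f in frames)
        asl_text = "".join(l for l in letters if l is not None)
        return synthesize_speech(machine_translate(asl_text))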

