Deep Learning
D. Pradeep*
Associate Professor, Department of Computer Science and Engineering
M.Kumarasamy College of Engineering, Thalavapalayam, Karur, Tamil Nadu, India - 639113.
[email protected]

Monisha S
Department of Computer Science and Engineering
M.Kumarasamy College of Engineering, Thalavapalayam, Karur, Tamil Nadu, India - 639113.
[email protected]

Nandhini J
Department of Computer Science and Engineering
M.Kumarasamy College of Engineering, Thalavapalayam, Karur, Tamil Nadu, India - 639113.
[email protected]

Poogesh R
Department of Computer Science and Engineering
M.Kumarasamy College of Engineering, Thalavapalayam, Karur, Tamil Nadu, India - 639113.
[email protected]

Praneeshwar R
Department of Computer Science and Engineering
M.Kumarasamy College of Engineering, Thalavapalayam, Karur, Tamil Nadu, India - 639113.
[email protected]
I. INTRODUCTION
Sign language recognition is the process of translating a user's gestures and signs into text. It aids those who are unable to interact verbally with the general populace. Using image processing techniques and neural networks, each movement is mapped to the appropriate text in the instruction data, converting raw images and videos into legible text; a minimal sketch of this mapping step is given below.
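As a rough illustration of the image-to-text mapping described above, the following sketch assumes a small Keras convolutional network over fixed-size grayscale hand images with 26 alphabet classes; the input shape, layer sizes, and class count are illustrative assumptions, not the exact architecture used in this work.

# Minimal sketch: map a 64x64 grayscale hand-gesture image to one of 26
# alphabet classes (all sizes here are assumptions for illustration only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26          # A-Z static signs (assumption)
IMG_SHAPE = (64, 64, 1)   # grayscale input (assumption)

model = keras.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training on labelled gesture images, a captured frame is classified
# and the predicted class index is rendered as a letter of text.
frame = np.random.rand(1, 64, 64, 1).astype("float32")   # stand-in for a real frame
letter = chr(ord("A") + int(np.argmax(model.predict(frame, verbose=0))))
print(letter)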
It often happens that people who are deaf or mute are unable to communicate normally with other members of society: they cannot communicate verbally, and most hearing people can identify only a small set of signs. Sign language therefore serves as the leading means of communication for the hearing- and speech-impaired. Like other languages, it has its own grammar and vocabulary, but it communicates visually. The issue arises when people who are deaf or mute try to communicate with others using this sign language grammar, because most people are unaware of its rules. As a result, it has been observed that a speech-impaired person's communication is limited to his or her family or to the hearing-impaired community. The importance of sign language is shown by the growing public acceptance and sponsorship of worldwide initiatives. In this age of technology, a computer-based solution is much sought after by the hearing- and speech-impaired population. Teaching a computer to understand human gestures, voice, and facial expressions is one step towards achieving this aim. Gestures are used to convey information nonverbally, and a human being is capable of making an endless number of signs at any one time. Since human motions are perceived visually, computer vision researchers are especially interested in them. The aim of this project is to develop a human-computer interface (HCI) that can recognise human movements. A complex programming process is required to translate these movements into machine code. In this study, we mainly focus on Image Processing and Template Matching for improved output production; a brief sketch of the template-matching step follows. Figure 1 depicts the symbols for the alphabet in sign format.
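The template-matching step could look roughly like the following. This is a minimal sketch assuming OpenCV's matchTemplate with normalised cross-correlation and a hypothetical per-letter template dictionary, not the exact procedure of this work.

# Minimal sketch: slide each stored alphabet template over the preprocessed
# hand image and return the best-matching letter. The template dictionary
# and file names are hypothetical placeholders.
import cv2

def recognise_letter(gray_hand_img, templates):
    """templates: dict mapping a letter to its grayscale template image,
    each template no larger than gray_hand_img."""
    best_letter, best_score = None, -1.0
    for letter, template in templates.items():
        # Normalised cross-correlation; a score near 1.0 indicates a strong match.
        result = cv2.matchTemplate(gray_hand_img, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_letter, best_score = letter, score
    return best_letter, best_score

# Hypothetical usage with images loaded from disk:
# frame = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)
# templates = {"A": cv2.imread("A.png", cv2.IMREAD_GRAYSCALE), ...}
# letter, score = recognise_letter(frame, templates)

In practice the captured frame would first be segmented and resized by the image-processing stage so that the template and input scales agree before matching.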