
Tribhuvan University

Faculty of Humanities & Social Science

Eighth Semester Project Report


On
SIGN LANGUAGE TRANSLATOR

Submitted to
Xavier International College
Department of Computer Application
Bouddha, Kathmandu

In partial fulfillment of the requirements for the Bachelor’s in Computer Application

Submitted by:
Bipin Parajuli
Reg no: 54602008
Ajit Thapa
Reg no:

Under the Supervision of


Mr. Amit Chaudhary
Tribhuvan University
Faculty of Humanities & Social Sciences
Xavier International College

Supervisor Recommendation

I hereby recommend that this project report, prepared under my supervision by Bipin Parajuli and
Ajit Thapa and entitled “Sign Language Translator”, in partial fulfillment of the requirements
for a Bachelor’s Degree in Computer Application of Tribhuvan University, be processed for
evaluation.

…………………….
Mr. Amit Chaudhary

Project Supervisor

Xavier International College

Bouddha, Kathmandu

Tribhuvan University
Faculty of Humanities and Social Science
Xavier International College

LETTER OF APPROVAL

This is to certify that this project prepared by Bipin Parajuli and Ajit Thapa entitled
“Sign Language Translator” in partial fulfillment of the requirements for the degree of
Bachelor in Computer Application has been evaluated. In our opinion, it is satisfactory in
scope and quality as a project for the required degree.

Amit Chaudhary Tika Thapa


Supervisor Coordinator
Xavier International College Xavier International College

Internal Examiner External Examiner


Xavier International College Tribhuvan University

Acknowledgment

We would like to express our deepest appreciation to all those who made it possible for us
to complete this report. Special gratitude goes to our final project supervisor, Mr. Amit
Chaudhary, whose stimulating suggestions and encouragement helped us throughout the project,
especially in writing this report.

Furthermore, we would also like to acknowledge with much appreciation the crucial role
of the coordinator, who gave us permission to use all the required equipment and the
necessary materials to complete our project. Special thanks to our Academic Manager, Mr.
Tyson Lama, who gave us valuable suggestions regarding the project. Last but not least,
many thanks go to our teachers, friends, and guardians who directly or indirectly helped us
achieve this goal. We are also grateful for all the comments and advice that helped improve
our presentation skills.

Abstract

According to the World Health Organization (WHO), 466 million people across the world
have disabling hearing loss (over 5% of the world's population), of whom 34 million
are children. There are only about 250 certified sign language interpreters in India for a
deaf population of around 7 million. Given these statistics, the need for a tool that enables
a smooth flow of communication between able-bodied people and people with speech/hearing
impairments is very high. Our application promises to support a two-way conversation, as it
deploys machine learning and deep learning models to convert sign language into text. The
other party can reply with text, which is then displayed to the hearing/speech-impaired user.
The user can follow the tutorials to learn the basic functioning of the application and ASL.
This system eliminates the need for an interpreter, and the traditional pen-and-paper methods
can also be discarded. The application automates communication and thereby provides a solution
to the hurdles faced by hearing/speech-impaired people.

Keywords: Sign Language Translator, Feature Extraction, KNN Classification, etc.

Table of Contents
Supervisor’s Certificate ........................................................................................................i

Letter of Approval..............................................................................................................ii

Acknowledgement..............................................................................................................iii

Abstract...............................................................................................................................iv

List of Figures....................................................................................................................vii

List of Tables ....................................................................................................................viii

List of Abbreviations...........................................................................................................ix

Chapter 1: Introduction........................................................................................................1

1.1 Introduction ...........................................................................................................1

1.2 Problem Definition.................................................................................................1

1.3 Objectives…………………………………………………………………….2

1.4 Scope and Limitations………………………………………………………..2

1.5 Report Organization………………………………………………………….2

Chapter 2: Background Study and Literature Review…………………………………….4

2.1 Background Study……………………………………………………………….4

2.2 Literature Review………………………………………………………………..4

Chapter 3: System Analysis and Design…………………………………………………..5

3.1 System Analysis…………………………………………………………………5

3.1.1 Requirement Analysis………………………………………………………….5

i. Functional Requirements………………………………………………………...5

ii. Non-Functional Requirements………………….………………………………7

3.1.2 Feasibility Analysis…………………………………………………………….7

i. Technical Feasibility……………………………………………………………7

ii. Operational Feasibility………………………………………………………….8

iii. Economic Feasibility…………………..……………………………………….8

iv. Schedule Feasibility………………..…………………..……………………...8

3.1.3 Data Modeling……..………………………………………………………10

3.1.4 Process Modeling………………………………………………………….12

3.2 System Design…………………………………….……………………………..14

3.2.1 Architectural Design…………………………………………………………..14

3.2.2 Database Schema Design…………….……………………………………..14

3.2.3 Interface Design……………………………………...……………………..16

3.2.4 Process Design/Physical DFD…………………………..………………..16

Chapter 4: Implementation and Testing………………………………………………..18

4.1 Implementation……………………………………………………………….18

4.1.1 Tools Used………………………………………….………………………20


Language Used………………………………………….………………………..20

4.1.2 Implementation Details and Modules…...………………………………….21

4.2 Testing…………………………………………………………………………..21
Unit Testing…………………………...………………………………………….21

System Testing………………………...…………………………………………22

4.3 Test Cases……………………………………………………………………….22

Chapter 5: Conclusion and Recommendation……………………………………………23

5.1 Lesson Learnt/Outcome………………………………………………………...23

5.2 Conclusion…………………………………………………………………..23

5.3 Future Recommendation………………………………………………………..23

Appendices……………………………………………………………………………….24
REFERENCES…………………………………………………………………………...34

List of Figures
Figure 3.1 Use Case Diagram……………………………………………………………..6
Figure 3.2 Gantt Chart……………………………………………………………………..9
Figure 3.3 ER Diagram……………………………………………………………………11
Figure 3.4 Level 0 DFD…………………………………………………………………..13
Figure 3.5 Level 1 DFD…………………………………………………………………..13
Figure 3.6 Database Schema……………………………………………………………15
Figure 3.7 Flowchart……………………………………………………………………17
Figure 4.1 Waterfall Model…………………………………………………………….19
Figure 5.1 User Screen…………………………………………………………………..24
Figure 5.2 Running Screen……………………………………………………………….25
Figure 5.3 Gesture Signs…………………………………………………………………26
Figure 5.4 Letter C Hand Sign Screen……………………………………………………27
Figure 5.5 Output of Sign U……………………………………………………………28
Figure 5.6 Gesturing Sign for E………………………………………………………….29
Figure 5.7 Sign for V…………………………………………………………………….30
Figure 5.8 Screen for Letter I…………………………………………………………...31
Figure 5.9 Code Running……………………………………………………………….32
Figure 5.10 Nearby Signs………………………………………………………………33

List of Tables

Table 4.2 Tools Used for Sign Language Translator……………………………….………20


Table 4.3 Test Case 1……………………………………………………………..……22
Table 4.4 Test Case 2…………………………………………………………………..22

List of Abbreviations

API: Application Program Interface

DFD: Data Flow Diagram

ER: Entity Relationship

IDE: Integrated Development Environment

UI: User Interface

Chapter 1: Introduction
1.1 Introduction
With the increase in innovation and technology, life has become significantly easier for
humans. The sudden surge of growth in technology has left many overjoyed and overwhelmed
because of the good fruits it bears. It has paved the way for the poor to become rich, the
sick to become strong, and the disabled to experience the life of the able-bodied. People
with speech/hearing impairments have always found it difficult to communicate and mingle,
but with technology that barrier is being broken down. They can now communicate without
difficulty and find themselves communicating confidently in public settings.

With the help of technology and the Internet, we can control and access machines and devices
connected to the Internet even over long distances. Information can be sent and received
without direct human-to-human or human-to-computer interaction.

1.2 Problem Definition


The traditional methods of communicating with the deaf and mute are not convenient in many
respects. The alternatives available to break this barrier have definite flaws. An interpreter
is not always available, and this method is not cost-efficient either. The pen-and-paper
method is unprofessional and time-consuming. Texting and messaging help to a certain extent
but still do not tackle the bigger problem at hand. This has created a grave need for a
solution that effectively breaks down the communication barrier.

1.3 Objectives
The general objectives are listed below:

∙ To eliminate the need for an interpreter.


∙ To ease the communication flow for hearing/speech impaired people through our
sign-to-text system.
∙ To create new signs for any text.

1.4 Scope and Limitations


1.4.1 Scope

The Sign Language Translator enables a hearing-impaired user to communicate efficiently in
sign language, and the application translates the signs into text. The user has to train the
model by recording sign language gestures and then labeling them. The user can then use the
saved and recorded gestures while speaking to other people through signs.

This project has broad scope; some possibilities are listed below:
• Explore educational applications to help individuals learn sign language more effectively.
• Use the project for accessibility applications to assist the deaf and hard-of-hearing community
in various settings, including educational institutions, workplaces, and public spaces.

1.4.2 Limitations:

There are some limitations to this system. They are listed as follows:

● The user has to provide the dataset by making the sign language gestures and then
label them.
● The system may struggle with recognizing gestures in low-light conditions or when
the user's hands are partially occluded.
● The system may struggle to recognize complex hand gestures or gestures that involve
intricate finger movements, as it relies on predefined landmark points.

1.5 Report Organization
The report consists of five chapters in which all the phases of application design and
development will be covered.

Chapter One: The first chapter introduces the system and the problems, and gives an
overview of the study.

Chapter Two: The second chapter covers the background study and the literature review
of the project.

Chapter Three: The third chapter covers the system analysis and design phase of the
application. It explains the methodology used while developing the system.

Chapter Four: The fourth chapter discusses the implementation and testing phase of the
application development.

Chapter Five: The last chapter, i.e., the fifth chapter, covers the conclusion,
recommendations, and future work to improve this project.

Chapter 2: Background Study and Literature Review
2.1 Background Study
Sign language is a system of communication using visual gestures and signs. Sign
languages are expressed through manual articulations in combination with non-manual
elements. Sign languages are full-fledged natural languages with their own grammar and
lexicon. Sign languages are not universal and are usually not mutually intelligible with
each other, although there are also similarities among different sign languages.

Using video input from the webcam, the Sign Language Translator (SLT) translates the
signs into text output. American Sign Language (ASL) is the predominant sign language
of Deaf communities in the United States and most of Anglophone Canada. Besides North
America, dialects of ASL and ASL-based creoles are used in many countries around the
world, including much of West Africa and parts of Southeast Asia.

2.2 Literature Review


The first sign-language glove to gain any notoriety came out in 2001. A high-school
student from Colorado, Ryan Patterson, fitted a leather golf glove with 10 sensors that
monitored finger position, then relayed finger spellings to a computer, which rendered
them as text on a screen. In 2002, the public affairs office of the National Institute on
Deafness and Other Communication Disorders effused about Patterson, although the glove
does not translate anything beyond individual letters, certainly not the full range of signs
used in American Sign Language, and works only with the American Manual Alphabet.

https://www.theatlantic.com/technology/archive/2017/11/

MotionSavvy is building a tablet that detects when a person is using ASL and converts it
to text or voice. The software also has voice recognition through the tablet’s mic, which
allows a hearing person to respond by voice to the person signing.

https://techcrunch.com/2014/06/06/motionsavvy-is-a-tablet-app-that-understands-sign-language/

Chapter 3: System Analysis and Design
3.1 System Analysis
The system analysis is as follows:

3.1.1 Requirement Analysis


The requirements include functional and non-functional requirements:

i. Functional Requirements
A functional requirement defines a function of a system or its components, where a function is
described as a specification of behavior between inputs and outputs.

Figure 3.1 Use Case Diagram
ii. Non-Functional Requirements
Non-functional Requirements are often called “quality attributes” of a system. Other terms
for non-functional requirements are “qualities”, “quality goals”, “quality of service
requirements”, “constraints” and “non-behavioral requirements”. Non-functional
requirements are:

∙ Accuracy: The information about the user is significantly accurate.

∙ Availability: The system is available 24 hours a day so that the general public can access
and use it.

∙ Performance: The performance of the application is good, and the user interface is
user-friendly. Anyone who can understand English and has knowledge of web applications can
use the system.

∙ User Satisfaction: The system is designed to satisfy the user’s needs and requirements.

∙ Layout: The system deals with viewing the different layouts of the application, such as the
UI (User Interface) and UX (User Experience) design.

3.1.2 Feasibility Analysis:


Feasibility analysis is used to assess the strengths and weaknesses of a proposed project
and to suggest directions for activities that will improve the project and achieve the desired
results. The nature and components of feasibility studies depend primarily on the areas in
which the analyzed projects are implemented. The feasibility analysis of this project was
carried out as follows:

i. Technical Feasibility

The system is technically feasible to implement. The technology used guarantees reliability,
accuracy, and security. The technical aspects of the system can easily cover cross-platform
targets such as mobile, desktop, and web, and the system is easily scalable. The required
technical manpower is also easily available.

ii. Operational Feasibility

Operational feasibility refers to whether a system will be used effectively after it has been
developed. Operational feasibility is the ability to utilize, support, and perform the
necessary tasks of a system or program. It includes everyone who creates, operates, or
uses the system. To be operationally feasible, the system must fulfill a need required by
the business. An operational feasibility study analyzes how the proposed process will work,
how it will be implemented, and how to deal with resistance to and acceptance of change.

iii. Economic Feasibility

Economic feasibility is used to determine the financial resources of the project. It
measures all costs incurred in the development of the new system. The development of
our system is within budget, and this was achieved because most of the technologies
used are freely available; only the customized products had to be purchased. It focuses
specifically on the financial aspects, and the possible questions raised are:

∙ Is the system cost-effective?
∙ Do the benefits outweigh the costs?
∙ What is the estimated cost?

iv. Schedule Feasibility

The Gantt chart below shows the different tasks performed while creating the project and
their schedule, along with the total time consumed for the overall project:

The project activities were scheduled over a 12-week period and covered planning, analysis,
design, implementation, testing, documentation, review, and presentation.

Figure 3.2 Gantt Chart


3.1.3 Data Modeling
Data modeling is an essential step in designing a Sign Language Recognition system. It
involves defining the structure and organization of data that the system will use, manage,
and process. In this project, data modeling is critical for handling video frames, hand
landmarks, annotations, training data, and sign language dictionaries. Here is a high-level
data modeling overview of the project, shown in the ER diagram below:

ER Diagram

Figure 3.3 ER Diagram of Sign Language Translator
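As a complementary view of the entities captured in the ER diagram, the minimal Python sketch below (class and field names are hypothetical, chosen only for illustration) shows one way the main data objects (gestures, their landmark frames, and dictionary entries) could be represented:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Landmark:
    # One hand landmark: normalized x, y, z coordinates.
    x: float
    y: float
    z: float

@dataclass
class GestureSample:
    # A labeled recording of one sign: a sequence of frames,
    # each frame holding 21 hand landmarks.
    label: str
    frames: List[List[Landmark]] = field(default_factory=list)

@dataclass
class DictionaryEntry:
    # Maps a recognized gesture label to its text translation.
    label: str
    translation: str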

3.1.4 Process modeling
Modeling of structured activities or tasks that produce a specific product for particular
users or customers is process modeling. This system involves different processes, such as
capturing gestures, training the recognition model, and recognizing signs and translating
them into text. These processes can be visually modeled using data flow diagrams.

a) Data Flow Diagram:


A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict a good
amount of the system requirements graphically. It can be manual, automated, or a
combination of both.

Figure 3.4 Level 0 DFD

Figure 3.5 Level 1 DFD
3.2 System Design

During the development, we used the following diagrams to understand the requirements
of the system.

3.2.1 Architectural Design


System design is the process of defining the elements of a system such as the architecture,
modules, and components, the different interfaces of those components, and the data that
goes through the system. System design is meant to satisfy the specific needs and
requirements of a business or organization through the engineering of a coherent and
well-running system.

3.2.2 Database Schema Design


A database schema represents the logical configuration of all or part of a relational
database. It can exist both as a visual representation and as a set of formulas known as
integrity constraints that govern a database. A database schema indicates how the entities
that make up the database relate to one another, including tables, views, stored
procedures, and more. Typically, a database designer creates a database schema to help
programmers whose software will interact with the database.

3.2.3 Interface Design

The high-level design of a Sign Language Recognition system using Python contains three basic
components:

Input Segment: In a separate area of the interface, the user makes the signs for the different
characters; these are saved to separate storage, which is later used for training.

Video Feed: A real-time video feed from the user's camera is displayed on the main screen. This
feed allows users to see themselves signing in sign language.

Sign Language Recognition Output: A dedicated section of the screen displays recognized
signs or phrases in real time. This is where users can see the system's interpretation of their
sign language.

3.2.4 Process Design/Physical DFD

Process design illustrates how the processes in the system communicate with each other
to perform designated tasks. Different figures are used to illustrate the different processes
of communication:

a) Flowchart
Flowcharts are graphical representations of workflows of stepwise activities and
actions, with support for choice and iteration. They show the overall flow of control
in pictorial form.

Figure 3.7 Flowchart of Sign Language Recognition
Chapter 4: Implementation and Testing
4.1 Implementation
For the development of the project, the waterfall model suits perfectly. The waterfall
model is a sequential design process, used in software development processes, in which
progress is seen as flowing steadily downwards through the phases of planning, analysis,
design, implementation, and testing.

The planning phase is used for understanding why a system should be built and
determining how the project team will go about building it. The analysis phase answers
the questions of who will use the system, what the system will do, and where and when it
will be used. During this analysis phase, the project team investigates any current system,
identifies improvement opportunities, and develops a concept for the new system. The
design phase decides how the system will operate in terms of the hardware, software, and
network infrastructure that will be in place. In the implementation phase, a system is
actually created and this phase usually gets the most attention, because for most systems
it is the longest and most expensive single part of the development process. The final
phase is the testing phase, where the system after implementation needs to be tested to
make it error-free.

Figure 4.1 Waterfall Model
4.1.1 Tools Used
Python: Python is a versatile programming language commonly used for machine learning,
computer vision, and general software development. It serves as the primary programming
language for the project.

OpenCV (Open Source Computer Vision Library): OpenCV is a popular open-source


library for computer vision and image processing. It provides tools and functions for
capturing, processing, and analyzing video and image data from cameras.

MediaPipe: MediaPipe is a framework developed by Google that offers a range of pre-trained


machine learning models for various vision and multimedia tasks, including hand gesture
recognition. It simplifies the development of computer vision applications.

NumPy: NumPy is a fundamental library for numerical computing in Python. It is used for the
efficient handling of multidimensional arrays, which is essential for processing image data and
preparing it for machine learning.

TensorFlow and Keras: TensorFlow is an open-source machine learning framework


developed by Google, and Keras is a high-level deep learning API that runs on top of
TensorFlow. These tools are used for building, training, and deploying deep learning models,
including LSTM networks.

Table 4.2 Tools Used for Sign Language Translator

Tool’s Name                  Purpose
Visual Studio Code           IDE for developing software
Python, TensorFlow           Server-side programming
Python                       Client-side programming
Microsoft Word               Documentation

4.1.2 Implementation Details of Modules
Implementing a Sign Language Recognition project involves several key modules that
work together to capture, process, recognize, and translate sign language gestures. Below,
we provide an overview of the implementation details for each module in the project:

1. Data Collection Module:

Data Sources: Capture real-time video feed from a camera (e.g., webcam).

Data Acquisition: Continuously capture video frames and convert them to image data for
processing.

Data Preprocessing: Resize, crop, and normalize image data to the required format.
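To make this concrete, the following minimal sketch (webcam index, clip length, and frame size are illustrative assumptions) captures frames with OpenCV and preprocesses them:

import cv2
import numpy as np

def preprocess_frame(frame, size=(224, 224)):
    # Resize the captured frame and scale pixel values to [0, 1].
    resized = cv2.resize(frame, size)
    return resized.astype(np.float32) / 255.0

cap = cv2.VideoCapture(0)              # open the default webcam
frames = []
while len(frames) < 30:                # collect a short 30-frame clip
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(preprocess_frame(frame))
cap.release()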

2. Hand Detection Module:

Feature Extraction: Utilize a hand detection model from MediaPipe to identify and
extract hand landmarks in each frame.

Bounding Box Calculation: Create bounding boxes around detected hands for further
analysis.

Filtering: Apply filters to remove noise and improve hand landmark accuracy.
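A minimal sketch of this module is shown below; it assumes the MediaPipe Hands solution and derives a simple bounding box from the 21 detected landmarks (the image path and thresholds are illustrative):

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def detect_hand_landmarks(frame_bgr, hands):
    # MediaPipe expects RGB input; return the landmarks of the first
    # detected hand and a bounding box in normalized coordinates.
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None, None
    landmarks = results.multi_hand_landmarks[0].landmark
    xs = [lm.x for lm in landmarks]
    ys = [lm.y for lm in landmarks]
    bbox = (min(xs), min(ys), max(xs), max(ys))
    return landmarks, bbox

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    frame = cv2.imread("sample_frame.jpg")     # hypothetical test image
    landmarks, bbox = detect_hand_landmarks(frame, hands)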

3. Data Annotation Module:

Data Labeling: Annotate the detected hand landmarks and gestures. Associate each
frame with the corresponding sign language gesture.

Data Storage: Store annotated data in a structured format for training and evaluation.
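One simple way to realize this module, sketched below under the assumption that each sample is stored as an array of shape (frames, 21 landmarks, 3 coordinates), is to save the labeled landmark sequences as NumPy files in per-label folders (directory names are illustrative):

import os
import numpy as np

def save_annotated_sample(landmark_sequence, label, out_dir="dataset"):
    # landmark_sequence: array of shape (num_frames, 21, 3) for one gesture.
    label_dir = os.path.join(out_dir, label)
    os.makedirs(label_dir, exist_ok=True)
    sample_id = len(os.listdir(label_dir))
    np.save(os.path.join(label_dir, f"{sample_id}.npy"),
            np.asarray(landmark_sequence, dtype=np.float32))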

4. Machine Learning Module:

Dataset Creation: Prepare the annotated data for machine learning by converting it into a
suitable format (e.g., NumPy arrays).

Data Splitting: Split the dataset into training and testing sets for model evaluation.

Model Architecture: Design and build an LSTM-based neural network for recognizing
sign language gestures.

Model Training: Train the neural network using the training dataset and evaluate its
performance.
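The sketch below illustrates the kind of LSTM-based classifier this module describes; the sequence length, feature dimensionality, number of classes, and dataset file names are assumptions chosen for illustration rather than the project's exact values:

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, FEATURES, NUM_CLASSES = 30, 63, 26    # 30 frames, 21 landmarks x 3 coords, A-Z

X = np.load("X.npy")    # hypothetical prepared dataset: (samples, SEQ_LEN, FEATURES)
y = np.load("y.npy")    # one-hot labels: (samples, NUM_CLASSES)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, FEATURES)),
    LSTM(64),
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))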

5. Sign Language Dictionary Module:

Gesture Dictionary: Create a dictionary or database that maps recognized sign language
gestures to their corresponding meanings or text translations.
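In its simplest form this module can be a plain Python dictionary, as in the hypothetical sketch below, mapping the labels produced by the recognition model to the text shown to the user:

# Hypothetical mapping from model output labels to display text.
GESTURE_DICTIONARY = {
    "A": "A",
    "B": "B",
    "HELLO": "Hello",
    "THANKS": "Thank you",
}

def translate(label):
    # Fall back to the raw label if no translation is defined.
    return GESTURE_DICTIONARY.get(label, label)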

6. Real-time Recognition Module:

Real-time Processing: Continuously process video frames, extract hand landmarks, and
feed them to the trained recognition model.

Recognition Feedback: Display recognized signs and their translations in real-time.

Translation Output: Update the translation output as new signs are recognized, creating a
seamless conversation.
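Putting the modules together, the loop below sketches how real-time recognition could work: capture a frame, extract landmarks, keep a sliding window of recent frames, and feed the window to the trained model. It reuses names from the earlier sketches (detect_hand_landmarks, hands, model, translate, SEQ_LEN, FEATURES) and assumes a CLASS_NAMES list of gesture labels:

import collections
import cv2
import numpy as np

window = collections.deque(maxlen=SEQ_LEN)     # sliding window of landmark frames
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    landmarks, _ = detect_hand_landmarks(frame, hands)
    if landmarks is not None:
        window.append([(lm.x, lm.y, lm.z) for lm in landmarks])
    if len(window) == SEQ_LEN:
        probs = model.predict(np.asarray(window).reshape(1, SEQ_LEN, FEATURES), verbose=0)
        label = CLASS_NAMES[int(np.argmax(probs))]     # CLASS_NAMES: assumed label list
        cv2.putText(frame, translate(label), (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Sign Language Translator", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()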

4.2 Testing
Software Testing is the process of testing the functionality and correctness of software.
Software testing is an empirical technical investigation conducted to provide stakeholders
with information about the quality of the product in the context in which it is intended to
operate. This includes, but is not limited to, the process of executing a program or
application with the intent of finding errors.

Unit Testing
Unit testing refers to the testing of individual units/components of software. The purpose
is to validate that each unit of the software performs as designed. A unit is the smallest
testable part of any software. It usually has one or a few inputs and usually a single
output.
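As an illustration, a unit test for the hypothetical preprocess_frame helper from the data collection sketch could check just its output shape and value range:

import unittest
import numpy as np

class TestPreprocessFrame(unittest.TestCase):
    def test_output_shape_and_range(self):
        frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
        out = preprocess_frame(frame, size=(224, 224))   # helper from the data collection sketch
        self.assertEqual(out.shape, (224, 224, 3))
        self.assertTrue(out.min() >= 0.0 and out.max() <= 1.0)

if __name__ == "__main__":
    unittest.main()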

System Testing
System testing tests a completely integrated system and the outputs generated by it to
verify that the system meets its requirements. It is also used to check logic changes made
in it with the intention of finding errors. This process helps in validating the system by
testing it as a whole, covering each module of the application, the database specifications,
and the underlying configurations. The Sign Language Translator passes system testing and is
ready for real-world implementation.

4.3 Test Cases

Table 4.3 Test Case 1


Use Case ID: 1
Test Case Name: Check valid hand gesture from the database
Test Case Description: The hand gesture should be in a fixed position.
Steps: 1. Open the application from the command prompt. 2. Provide a proper hand gesture.
Expected Results: If it satisfies the validation rule, it gives the result.
Actual Results: As expected.

Chapter 5: Conclusion and Recommendation

5.1 Lesson Learnt / Outcome


The system enables the hearing-impaired user to communicate efficiently in sign language, and
the application translates the signs into text. The user has to train the model by recording
the sign language gestures and then labeling them. The user can then use the saved and
recorded gestures while speaking to other people.

5.2 Conclusion
This project was undertaken to solve the underlying issue faced by hearing and
speech-impaired people. They often don’t even stand a chance in the competitive global
arena because of communication hurdles.
This project, however, helps in eradicating the social stigma around their inability to
participate in many domains and gives them the confidence to stand upright in any field they
choose.
The application provides the necessary platform to communicate with much ease and
gives them the ability to interact without any external help. The need for an interpreter is
eradicated, and the smooth flow of a conversation is well-developed.

5.3 Future Recommendation


The model and text-to-speech can be embedded into a video calling system, thereby allowing
the user to show gestures while the receiver on the call receives the message in the form of
text or speech. When the receiver responds, the message will be relayed to the
hearing/speech-impaired user via text (subtitles).

Appendices

Figure 5.1 User Screen
Figure 5.2 Running Screen

Figure 5.3 Gesture Signs Home Screen

Figure 5.4 Letter C Hand Sign Screen

Figure 5.5 Output of Sign U
REFERENCES

[1] Elmahgiubi, Mohammed, et al. "Sign language translator and gesture recognition."
2015 Global Summit on Computer & Information Technology (GSCIT). IEEE, 2015.
55.

[2] Li, Kin Fun, et al. "A web-based sign language translator using 3d video processing."
2011 14th International Conference on Network-Based Information Systems.

[3] Yin, Kayo, and Jesse Read. "Better sign language translation with STMC transformer."
arXiv preprint arXiv:2004.00588 (2020).

[4] Halawani, Sami M. "Arabic sign language translation system on mobile devices."
IJCSNS International Journal of Computer Science and Network Security 8.1 (2008):
251-256.

[5] Abhishek, Kalpattu S., Lee Chun Fai Qubeley, and Derek Ho. "Glove-based hand
gesture recognition sign language translator using capacitive touch sensor." 2016 IEEE
international conference on electron devices and solid-state circuits (EDSSC). IEEE,
2016.

