Research Article
Received: 1st September 2022; Accepted: 5th May 2023; Published: 5th October 2023
Abstract: This article describes the development of a wearable sensor glove for sign language translation and an
Android-based application that can display words and produce speech of the translated gestures in real-time. The
objective of this project is to enable a conversation between a deaf person and another person who does not know
sign language. The glove is composed of five (5) flexible sensors and an inertial sensor. This article also elaborates on the development of an Android-based application using the MIT App Inventor software that produces words and
speech of the translated gestures in real-time. The sign language gestures were measured by sensors and
transmitted to an Arduino Nano microcontroller to be translated into words. Then, the processed data was
transmitted to the Android application via Bluetooth. The application displayed the words and produced the
sound of the gesture. Furthermore, preliminary experimental results demonstrated that the glove successfully
displayed words and produced the sound of thirteen (13) translated sign languages via the developed application.
In the future, it is hoped that further upgrades can produce a device that assists a deaf person in communicating with normal people without over-reliance on sign language interpreters.
Keywords: Android application; Sign Language Translator; Smart Glove; Speech; Wearable Technology; Words
1. Introduction
Hearing impairment or deafness arises when trauma or an injury occurs to the components of the ear. Generally, in a noisy environment, an individual who is partially or moderately deaf can detect muted sounds but has difficulty hearing them properly. A person with moderate deafness needs a
hearing aid, while a person with extreme deafness requires a cochlear implant. An individual with a
hearing impairment typically utilises various ways to express themselves, such as writing on paper,
speechreading (lip reading) or even using an interpreter. Based on the report from the World Federation
of the Deaf (WFDEAF), approximately 70 million deaf people utilise sign languages around the world1.
Furthermore, more than 200 types of sign languages exist globally. In Malaysia, a deaf person uses the Malaysian Sign Language (MSL). According to the Malaysian Federation of the Deaf, over 30,000 people in Malaysia have hearing problems, and only about 100 certified sign language interpreters are available to provide services to the deaf community2.
1 https://wfdeaf.org/human-rights/crpd/sign-language/
2 https://www.malaysiakini.com/news/376165
Sign language plays a major part in the daily life of deaf people, acting as their primary tool for communicating with others. However, they often employ the services of sign language interpreters to translate sign languages while interacting with normal people. The conversation
between deaf and normal people becomes very difficult without the help of an interpreter. Furthermore, a deaf person also faces difficulties in obtaining a sign language interpreter who can interpret accurately, particularly for content involving complex concepts and human emotions. Besides, those who rely on sign language also face difficulties in communicating efficiently, which hinders their opportunities to demonstrate their actual abilities in situations such as job interviews or discussions in the workplace. Consequently, there is a need to find new ways to facilitate communication for individuals with hearing loss. Apart from sign language itself, other alternatives to help deaf people interact with normal people are hearing aids and sign language translation devices. A hearing aid is a sound-amplifying electronic device, worn inside or behind the ear, suitable for individuals with some residual hearing. Various types of hearing aid devices are available in the market. However, users of hearing aids might experience problems such as discomfort and difficulty adjusting for background noise, which can affect the overall experience with the device.
On the other hand, sign language interpretation systems are emerging technologies that have been
extensively studied by experts aimed at removing communication barriers between hearing and deaf
people. These systems include wearable sensory devices or vision systems that utilize real-time translation
to convert sign language into texts or spoken words. These innovative technologies facilitate seamless
communication, enabling deaf individuals to converse with hearing individuals without the need for an
interpreter. By exploring and acknowledging these advancements, the collaborative effort is directed
toward creating an inclusive environment that fosters effective communication between deaf and hearing
individuals. These developments have the potential to increase accessibility, promote equality of
opportunity and enable deaf individuals to participate fully in various aspects of life.
Basically, these innovative technologies are classified as vision-based and sensor-based systems.
Vision-based systems utilise single or multiple cameras to track hand gestures. In these systems, frame
images from the recorded videos are used as an input for the sign language translation system, which
requires a computer to translate the hand gestures by using image processing algorithms. Vision-based
systems can be implemented easily using a web camera [1-4], multiple cameras [5], and a smartphone
camera [6-7]. Several researchers have utilised active techniques in vision-based systems using off-the-shelf
tools such as Microsoft’s Kinect [8-10], Intel’s RealSense [11], and Leap Motion Controller (LMC) [12-13].
Fundamentally, active techniques utilise the motion of active sources that can be controlled, such as a
laser scanner or a projected light to scan around the exterior of the object. The experimental results
obtained from previous studies indicate that vision-based systems for sign language gestures demonstrate high accuracy rates with promising results. Vision-based systems such as those in [1-13] also allow users to engage more spontaneously and with fewer constraints. However, these approaches have several disadvantages. For example, sign language motions are greatly influenced by the camera's viewpoint: different gestures may appear the same or similar when viewed from a single fixed camera, leading to potential confusion in gesture recognition. Furthermore, these systems may consist of a standard camera, multiple cameras or even an expensive high-tech camera, which must be connected to a high-performance computer for image processing tasks in a laboratory setting. This requirement makes deployment inconvenient and hinders practical use outside the laboratory. Vision-based systems also demand considerable computing resources for real-time image processing and gesture recognition; these computational requirements call for powerful hardware, which makes it difficult for the system to remain mobile and operate in real time. Furthermore, despite advances in image processing, sign language gesture translation can still pose challenges to vision-based systems. The processing time required for proper recognition and translation can cause delays, affecting communication speed and performance. The performance of vision-based methods
is also susceptible to the backgrounds and lighting conditions of the captured images. Differences in
lighting, shadows, or background complexity can affect the accuracy of gesture recognition. This
limitation makes it difficult to use vision-based systems as portable and easy-to-use sign language
translation tools, as they may require a controlled environment with consistent lighting conditions. It is
important to consider these shortcomings when designing and implementing vision-based systems for
sign language translation devices. Addressing these limitations may help improve the reliability,
flexibility, and user-friendliness of such systems and promote effective communication among deaf and
hearing individuals. By understanding the challenges posed by vision-based systems, several researchers
have started exploring alternative approaches to overcome these issues such as implementing sensor-
based methods for sign language recognition.
A sensor-based system is another type of sign language recognition method that utilises arrays of
sensors, developed using micro-electro-mechanical-systems (MEMS) technology to detect fingers and
hand motions. MEMS sensors such as inertial measurement units (IMUs), flex sensors, and force sensors have become inexpensive, low-powered, and compact, which is ideal for wearable devices. In research related to sign language translation, the majority of researchers have utilised a glove attached with a fusion of sensors, namely the combination of flexible sensors and IMUs. This approach is highly preferred because of its cost-effectiveness and portability. For instance, Ahmed et al. studied a novel sign language recognition approach utilising a sensory glove fitted with five (5) flexible sensors and five (5) IMUs, where a humanoid arm was used to mimic the sign language gestures [14]. The work obtained an exceptional accuracy of 93.4% for 75 static gestures. Similarly, Mehra et al. utilised flexible sensors and IMUs for an American Sign Language interpreter device, where the translation output was displayed on a computer monitor [15]. Another prototype of a wearable sign language interpreter was developed by Chong and Kim using only six (6) IMUs mounted on the back of the hand and the fingertips to detect and translate hand motions [16]. A wearable glove attached with flex sensors and a
contact sensor is also a favoured combination adopted by researchers. Rishikanth et al. proposed a gesture
recognition glove that employs flexible sensors and a contact sensor [17]. The flexible sensors were fitted
not only on the fingers but also on the wrist to track wrist motions. Contact sensors were positioned on
the fingertips to increase the number of recognisable gestures. The glove was able to recognise 80% of the
tested gestures (20 of 25 English alphabet gestures). Kannan et al. presented a gesture detection system adopting accelerometer sensors positioned on the fingertips to track hand motions, with the results displayed on a liquid-crystal display (LCD) [18]. They experimented with two (2) to five (5) units of accelerometer sensors to verify the recognition performance, in which five (5) accelerometers achieved 95.3% efficiency compared to 87% efficiency with two (2) accelerometers. However, the work concluded that adopting three (3) accelerometers is the best option considering the cost, training time, and efficiency. Several researchers have also developed wearable gloves attached with more than three (3) types of sensors to
detect and translate hand motions. For example, Lee et al. proposed a smart wearable glove by employing
sensor fusions that consist of flexible sensors, pressure sensors, and an accelerometer for translating the
American Sign Language alphabet [19]. The translated gestures were displayed on an Android-based
application that also produces an audible voice. Besides, surface electromyographic (sEMG) sensors have also been used to detect sign language motions. The sensor measures the electrical voltages produced by muscle excitation and contraction, which can be utilised to differentiate finger and hand motions based on distinct muscle behaviours. Song et al. presented a smart detector prototype based
on sEMG for a sign language recognition system [20]. The prototype consists of a skin or a tissue interface
that provides sEMG signals into the system and a signal amplifying interface that amplifies the obtained
signal. Recently, Yu et al. proposed a wearable sensor glove for Chinese sign language translation using
sEMG and IMU [21]. They tested the sensor fusion with a deep learning method, which resulted in a
95.1% recognition accuracy.
Based on the literature, the sensor-based systems allow for better mobility, versatility, and cost-
effectiveness. Unlike vision-based systems, which rely heavily on the camera's viewpoint, sensor-based systems are less affected by the position or orientation of the sensors. This characteristic allows for more
consistent and accurate recognition of sign language gestures, regardless of orientation or technique of the
user's hands, allowing greater flexibility in capturing and interpreting gestures from different
perspectives. Sensor-based systems enable accurate and detailed capture of hand motion and position. By
utilizing a variety of sensors such as accelerometers, gyroscopes or flex sensors, these systems can detect
subtle changes in sign language gestures and interpret them accurately. The increased sensitivity and
specificity of sensor variety contribute to higher accuracy and improved recognition of complex and
intricate hand movements. Furthermore, sensor-based systems are usually able to provide real-time
feedback and translation of sign language gestures. The rapid response time of sensors allows immediate
recognition and translation, enabling smooth and seamless communication between hearing and deaf
individuals. Real-time performance is essential for sustaining the flow of conversations and enabling more
natural communications. In terms of lighting conditions, unlike vision-based systems that can be
sensitive to variations in lighting conditions, sensor-based systems are generally less affected by ambient
light. The reliance on sensors rather than visual input reduces the impact of lighting changes or shadows,
ensuring consistent and reliable gesture recognition even in challenging lighting environments. However,
the most appealing advantage of sensor-based systems is their ability to be customized and designed to
suit individual user preferences and needs. Through adjustments to sensor thresholds, gesture mappings, or other parameters, the system can be tailored to accommodate different gestures, signing styles, or specific user needs. These customizations enhance the user experience and facilitate more personalized and
accurate translation. In addition, sensor-based systems can be developed into compact, lightweight, and
wearable devices. Combining sensors with wristbands or other wearable devices provides portability and
convenience. Users can take the sensor-based system with them, allowing them to communicate in
different environments such as classrooms, offices, or meetings. This portability offers increased
accessibility and engagement by providing on-the-go sign language translation.
Despite these advantages, sensor-based systems still have certain limitations and disadvantages.
Sensor-based systems may have limitations in detecting and accurately interpreting some complex sign
language gestures. The types of gestures that the system can recognize and correctly translate may be
limited depending on the number of sensors utilized in the design, resulting in potential inaccuracies or
misinterpretations. This limitation may prevent the system from fully picking up expressive sign
language. To achieve high recognition accuracy, the utilization of multiple sensors is often important.
However, it is essential to strike a balance between the number of sensors and the associated development
costs because incorporating numerous sensors increases the complexity and cost of the system, making it
necessary to find a cost-effective approach without compromising accuracy. Furthermore, placing sensors
on wearable devices is crucial for accurate gesture recognition. Ensuring that a sensor is properly placed
on the user’s arm or body can be challenging, as it requires careful placement and alignment. Improper
sensor placement can result in inaccuracy or discomfort, affecting both usability and the user experience.
In many of the previous studies described above, the wearable sensor designs are bulky, heavy, and uncomfortable. This circumstance occurs partly because most research utilised an LCD or a computer monitor to display the translated gestures. However, there has been a lack of research exploring the potential of mobile phone applications that could make the system more efficient while further reducing production costs. Therefore, the focus of this work was prompted by the fact that the majority of people own smartphones with built-in cameras. Advances in smartphone camera technology have greatly improved video capture. This development opens up new possibilities for incorporating sign language translation directly into smartphones, making it a practical solution for real-world situations. Smartphone capabilities can thus be leveraged to give users a flexible and simple sign language translation system that matches their daily communication needs.
This article presents the design and development of a wearable sensor glove that translates sign
languages into words and speech in real-time. A smart glove was developed to detect gestures using
flexible sensors and an accelerometer by measuring fingers and hand motions. The data from sensors
were processed and translated into words and speech, produced on an Android-based smartphone
application. The remainder of the article is organised as follows. First, the overview of the work is presented,
and the hardware and Android-based application designs are described. Next, the experimental results to
demonstrate the functionality and efficiency of the real-time sign language translation system are
presented.
2. System Overview
Figure 1 depicts the overview of the proposed wearable sensor glove for the sign language
translation system. The system consists of three parts: input sensors, data processing, and output. The
input sensors equipped on the smart glove are a GY-61 accelerometer sensor positioned on the back of the
hand and five (5) units of 4.5 inch flexible sensors attached on the back of each finger. All of the sensors
are linked to an Arduino Nano microcontroller for data processing. The Arduino Nano is a small-scaled
and portable board based on the ATmega328 single-chip 8-bit microcontroller, with a 5V operating voltage and a 16 MHz processing clock. It has 8 analogue pins, which are adequate to connect the
input sensors required in this work. The data from input sensors are processed and digitised in the
microcontroller, where the gestures are recognised and translated into words, which are transmitted to an
Android-based smartphone application via an HC-05 Bluetooth module. If the smartphone application
successfully receives the words from Arduino Nano, the words will be displayed on the application, while
the corresponding voices for the words will be produced via the smartphone’s speaker.
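To make this data flow concrete, the following is a minimal, illustrative Arduino sketch of the glove-side pipeline: reading the five flexible sensors and the GY-61 axes, recognising a gesture, and sending the translated word to the SLT application through the HC-05 module. The pin assignments, baud rates, and the recogniseGesture() stub are assumptions made for this sketch, not the authors' actual firmware.

```cpp
// Illustrative glove-side sketch (assumed wiring, not the authors' firmware).
#include <SoftwareSerial.h>

SoftwareSerial bt(2, 3);                        // HC-05 TX -> D2, HC-05 RX -> D3 (assumed pins)
const int flexPins[5] = {A0, A1, A2, A3, A4};   // five flexible sensors
const int accPins[3]  = {A5, A6, A7};           // GY-61 X, Y and Z outputs

// Placeholder for the range checks derived from Table 1 (see the later sketch).
String recogniseGesture(const int flex[5], const int acc[3]) {
  return "";                                    // empty string means no gesture matched
}

void setup() {
  Serial.begin(9600);                           // USB serial for debugging
  bt.begin(9600);                               // HC-05 default baud rate
}

void loop() {
  int flex[5], acc[3];
  for (int i = 0; i < 5; i++) flex[i] = analogRead(flexPins[i]);
  for (int i = 0; i < 3; i++) acc[i] = analogRead(accPins[i]);

  String label = recogniseGesture(flex, acc);
  if (label.length() > 0) {
    bt.println(label);                          // translated word sent to the SLT app
  }
  delay(200);                                   // simple pacing between readings
}
```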
Figure 1. Overview of the proposed wearable sensor glove for sign language translation system
Figure 2. (a) A circuit diagram of the wearable sensor glove, (b) actual circuit attached on the wearable sensor glove
The five (5) flexible sensor analogue input values and the three (3) accelerometer analogue input values were obtained simultaneously. The obtained analogue input values were converted to 10-bit digital data using the Arduino's analogue-to-digital converter (ADC). Based on these digital values, the bending angle of each flexible sensor was derived.
First, the input voltage values were derived from the digital input voltage values (in the range from 0 to 1023) from the Arduino's ADC using the following equation,

$$V_i = \frac{V_d}{2^n} \, V_{cc} \qquad (1)$$
where $V_i$ is the input voltage value, $V_d$ is the digital input value from the Arduino's ADC, $V_{cc}$ is the power supply voltage, and $n$ is the ADC resolution in bits. Then, the resistance value for each flexible sensor was calculated based on the voltage divider equation as follows,

$$R_{flex} = \frac{R_d}{\left(\dfrac{V_{cc}}{V_i} - 1\right)} \qquad (2)$$
where $R_{flex}$ is the flexible sensor's resistance value, $R_d$ is the resistance of the fixed resistor used to create the voltage divider, $V_i$ is the input voltage value obtained from Equation (1), and $V_{cc}$ is the power supply voltage. Subsequently, the bend angle of each flexible sensor was obtained by mapping the flexible sensor's resistance value, $R_{flex}$, to the sensor's bend angle. The mapping was executed using the Arduino IDE's map() function, such as map(Rflex, flat_Res, bend_Res, 0, 90.0), where flat_Res (the flexible sensor's resistance when flat) is mapped to the target angle of 0 degrees, bend_Res (the flexible sensor's resistance when bent) is mapped to the target angle of 90 degrees, and Rflex is the flexible sensor's resistance value (obtained from Equation (2)) to be mapped to the bend angle.
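As a worked illustration of Equations (1) and (2) and the angle mapping, a minimal Arduino sketch for a single flexible sensor is given below. The supply voltage, divider resistor, and the flat/bent calibration resistances are placeholder values for the sketch, not the calibration data used in this work.

```cpp
// Minimal sketch of Equations (1)-(2) and the resistance-to-angle mapping
// for one flexible sensor. Calibration constants below are illustrative.
const int   FLEX_PIN = A0;
const float VCC      = 5.0;        // power supply voltage, Vcc
const float R_DIV    = 47000.0;    // divider resistor R_d in ohms (assumed)
const long  FLAT_RES = 25000;      // resistance when flat -> 0 degrees (assumed)
const long  BEND_RES = 100000;     // resistance when bent -> 90 degrees (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int Vd = analogRead(FLEX_PIN);             // 10-bit ADC value, 0..1023
  float Vi = (Vd / 1024.0) * VCC;            // Equation (1), with n = 10
  if (Vi <= 0.0) return;                     // skip invalid readings
  float Rflex = R_DIV / (VCC / Vi - 1.0);    // Equation (2), voltage divider
  long angle = map((long)Rflex, FLAT_RES, BEND_RES, 0, 90);  // resistance -> bend angle
  Serial.println(angle);
  delay(100);
}
```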
On the other hand, the wearable sensor glove also utilises an accelerometer sensor, whose analogue input values (x, y, and z axes) were converted to digital values in the range of 0 to 1023 using the Arduino's internal ADC. These values were used in this work directly. The fusion of the flexible sensors' bend angle values and the accelerometer values was utilised to develop a sensor-to-gesture mapping table, as depicted in Table 1.
Table 1. The range of sensor values for each gesture

| Gesture | A1 (deg) | A2 (deg) | A3 (deg) | A4 (deg) | A5 (deg) | X (raw) | Y (raw) | Z (raw) |
|---|---|---|---|---|---|---|---|---|
| Congratulation | 45 to 55 | 75 to 85 | 0 to 10 | 60 to 70 | 30 to 40 | 310 to 326 | 280 to 292 | 329 to 333 |
| Thank You | 15 to 22 | -8 to 0 | -57 to -48 | -8 to -17 | 4 to 10 | 320 to 330 | 290 to 310 | 330 to 337 |
| Hello | 25 to 40 | 0 to 4 | -40 to -50 | -21 to -17 | 4 to 8 | 380 to 400 | 290 to 310 | 330 to 345 |
| Please | -26 to -21 | -8 to -4 | -48 to -43 | -21 to -17 | 4 to 12 | 277 to 284 | 331 to 345 | 360 to 370 |
| You're welcome | -12 to -4 | 77 to 85 | 4 to 12 | 63 to 77 | 8 to 12 | 310 to 330 | 280 to 300 | 320 to 330 |
| Blind | 47 to 56 | 73 to 77 | 8 to 12 | 77 to 81 | 60 to 65 | 315 to 335 | 290 to 300 | 330 to 337 |
| Deaf | 40 to 60 | 0 to 12 | 8 to 12 | 73 to 80 | 45 to 56 | 335 to 350 | 300 to 320 | 345 to 350 |
| No | 43 to 56 | 8 to 12 | 4 to 8 | 69 to 73 | 43 to 51 | 325 to 340 | 290 to 307 | 325 to 335 |
| Yes | -17 to -4 | 77 to 85 | 4 to 8 | 64 to 73 | 8 to 12 | 345 to 360 | 370 to 380 | 385 to 395 |
| Not Yet | -8 to -4 | 8 to 12 | -39 to -35 | -4 to 0 | 4 to 8 | 330 to 340 | 404 to 415 | 410 to 420 |
| I | 51 to 60 | 73 to 81 | 0 to 4 | 60 to 69 | 8 to 12 | 321 to 340 | 275 to 290 | 310 to 320 |

(A1 to A5 are the flexible sensor bend angles in degrees; X, Y, and Z are the raw accelerometer readings.)
As shown in the table, for instance, for the translation of the "Congratulation" gesture, the bend angle values for flexible sensor A1 (thumb) are between 45 and 55 degrees, flexible sensor A2 (index finger) between 75 and 85 degrees, flexible sensor A3 (middle finger) between 0 and 10 degrees, flexible sensor A4 (ring finger) between 60 and 70 degrees, and flexible sensor A5 (little finger) between 30 and 40 degrees. Furthermore, the accelerometer produces values between 310 and 326, 280 and 292, and 329 and 333 for the X, Y, and Z axes, respectively. Similarly, for the translation of the "Thank You" gesture, the bend angle values for flexible sensor A1 (thumb) are between 15 and 22 degrees, flexible sensor A2 (index finger) between -8 and 0 degrees, flexible sensor A3 (middle finger) between -57 and -48 degrees, flexible sensor A4 (ring finger) between -8 and -17 degrees, and flexible sensor A5 (little finger) between 4 and 10 degrees. Furthermore, the accelerometer produces values between 320 and 330, 290 and 310, and 330 and 337 for the X, Y, and Z axes, respectively. A similar sensor fusion method was used to define the other nine (9) gestures using different combinations, as shown in the same table.
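To illustrate how such a mapping table can be turned into a recognition routine, the sketch below codes only the "Congratulation" row of Table 1 as a range check; the remaining gestures would follow the same pattern. The helper names and the demo values in setup() are made up for illustration and are not the authors' implementation.

```cpp
// Sketch of gesture classification driven by the Table 1 ranges.
bool inRange(float v, float lo, float hi) {
  return v >= lo && v <= hi;
}

// flex[0..4]: A1..A5 bend angles in degrees; acc[0..2]: raw X, Y, Z readings.
String classifyGesture(const float flex[5], const int acc[3]) {
  if (inRange(flex[0], 45, 55) && inRange(flex[1], 75, 85) &&
      inRange(flex[2], 0, 10)  && inRange(flex[3], 60, 70) &&
      inRange(flex[4], 30, 40) &&
      inRange(acc[0], 310, 326) && inRange(acc[1], 280, 292) &&
      inRange(acc[2], 329, 333)) {
    return "Congratulation";
  }
  // ... range checks for the other gestures in Table 1 go here ...
  return "";                                  // no gesture matched
}

void setup() {
  Serial.begin(9600);
  float flex[5] = {50, 80, 5, 65, 35};        // example bend angles (made up)
  int acc[3] = {318, 286, 331};               // example accelerometer readings (made up)
  Serial.println(classifyGesture(flex, acc)); // prints "Congratulation"
}

void loop() {}
```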
Furthermore, the software supports real-time testing and debugging of the application. Developers can
connect their smartphones to the software and instantly see how the application behaves on the device.
This feature allows for rapid iteration and debugging, ensuring that the application functions as intended.
MIT App Inventor 2 also supports the integration of external services and APIs. Developers can
incorporate features such as GPS location, camera functionality, social media sharing, and database access
into their applications. This enhances the capabilities and interactivity of the created applications.
Figure 5 illustrates the design of the application for sign language translation, called the Sign Language Translator (SLT) application, which was developed using the block-based tools provided by MIT App Inventor 2. The three (3) developed user interfaces are: (a) the main user interface, (b) the
Bluetooth devices connection interface, and (c) the translated gesture interface.
When the user launches the app, the main user interface of the SLT application will appear as shown
in Figure 5 (left image). The main user interface consists of a blue button labelled “Bluetooth” that is used
to establish Bluetooth connection with another Bluetooth device. Next to the button is the status of
Bluetooth connection with another device. Below the Bluetooth button is a text box that displays the text
of translated sign language gestures received from the wearable smart glove.
Initially, the app is not connected to any Bluetooth device; therefore, the initial status shows "No Connected". No Bluetooth connection is established while the app is not running. Therefore, to establish a Bluetooth connection, for example with the proposed wearable sensor glove, the user must click the Bluetooth button, which changes the display to show a list of scanned available Bluetooth devices, as shown in Figure 5 (center image). Here, the user needs to select only one device to be connected to the SLT app. As shown in the list, "00:19:08:35:FA:A0 HC05" is the wearable sensor glove's Bluetooth module. When this entry is clicked, the display returns to the main interface, but with the Bluetooth status changed to "Connected", as shown in Figure 5 (right image). This shows that the
Bluetooth communication between SLT app and the wearable smart glove has been established. At this
point, the user can start using the wearable smart glove to do sign language gestures. The wearable smart
glove translates the sign language gestures into texts. Then, the texts are transmitted to the app via
Bluetooth communication. When the data is received by the SLT app, the texts are displayed on the text
box, as shown in Figure 5 (right image). The user can disconnect the Bluetooth connection with the wearable smart glove by clicking the Bluetooth button again, which changes the Bluetooth status to "Disconnected". Figure 6 shows the simplified steps on how to use the developed Sign Language
Translator application.
Figure 5. The contents of the designed application: main interface (left image), user interface that displays available
Bluetooth devices list (center image), application that displays translated gesture in words (right image)
Figure 6. Steps on how to use the developed Sign Language Translator (SLT) application: the user clicks the app icon to launch the app, clicks the "Bluetooth" button, selects the glove from the list of scanned Bluetooth devices and, once the Bluetooth connection is established, executes gestures while the app displays the translated words.
5.1 Experiment on the Wearable Technology for Real-Time Sign Language Gesture Translation
The developed wearable sensor glove and Android-based smartphone application were tested to
demonstrate the functionality and efficiency of the wearable technology. The developed prototype of the
wearable sensor glove is shown in Figure 2(b), and the Android-based application is depicted in Figure 5.
Experimental steps: Prior to the experiment, the Bluetooth connections between the wearable sensor
glove and application were checked five (5) times to ensure good connectivity. The experiment started by
supplying power to the Arduino Nano (initially programmed with the data processing code) using a 5V
power supply. Then, a subject was asked to click the SLT application icon on the smartphone to launch it. Next, the user was requested to select the Bluetooth device linked to the wearable sensor glove by
pressing the “Bluetooth” button. When the Bluetooth communication was established, the subject was
instructed to make the initial sign language gesture. In this experiment, thirteen (13) sign language gestures were
prepared for the user, namely “Assalamualaikum”, “Waalaikumussalam”, “Congratulation”, “Thank
You”, “Hello”, “Please”, “You’re welcome”, “Blind”, “Deaf”, “No”, “Yes”, “Not Yet” and “I”. The SLT
application showed the translated words of the gesture and produced the corresponding speech. Finally,
these steps were repeated for other sign language gestures, and the success and failure rates were
recorded manually. In an effort to verify the performance of the developed device and application, each
gesture was tested thirty (30) times. Before the experiment commenced, the subject was introduced to the thirteen (13) gestures and permitted to practise several times to become familiar with each gesture.
Experimental results: The experimental results demonstrated the successful translation of the thirteen (13)
sign language gestures in real-time using the developed sensor-based system. The accuracy of the system
varied for different gestures, with some achieving higher accuracy rates than others. Figures 7 to 13
demonstrate that the application had successfully displayed thirteen (13) translated sign language
gestures, namely “Assalamualaikum”, “Waalaikumussalam”, “Congratulation”, “Thank You”, “Hello”,
“Please”, “You’re welcome”, “Blind”, “Deaf”, “No”, “Yes”, “Not Yet”, and “I”.
Figure 7. The SLT app displaying “Assalamualaikum” (left) and “Waalaikumussalam” (right)
Figure 8. The SLT app displaying “Congratulation” (left) and “Thank You” (right)
Figure 9. The SLT app displaying “Hello” (left) and “Please” (right)
Figure 10. The SLT app displaying “You’re Welcome” (left) and “Blind” (right)
Figure 11. The SLT app displaying “Deaf” (left) and “No” (right)
Figure 12. The SLT app displaying “Yes” (left) and “Not Yet” (right)
5.2 Discussion on the Performance of the Wearable Sensor Glove and Android-based Application
Table 2 summarises the sign language translation performance of the developed
wearable sensor glove. The translated gestures were displayed on the developed Android-based
smartphone application in real-time. As explained in the previous subsection, a subject was asked to
execute thirteen (13) sign language gestures, namely “Assalamualaikum”, “Waalaikumussalam”,
“Congratulation”, “Thank You”, “Hello”, “Please”, “You’re welcome”, “Blind”, “Deaf”, “No”, “Yes”,
“Not Yet”, and “I”, where each gesture was executed thirty (30) times.
Based on Table 2, when the "Assalamualaikum" and "Deaf" gestures were each replicated thirty (30) times, both recorded only a single mistake. These two gestures had the fewest errors owing to finger positions and hand orientations that differ clearly from the other eleven (11) gestures. On the other hand, the "You're Welcome" and "I" gestures recorded the highest translation errors, with ten (10) and eleven (11) errors, respectively. Observation during the experiment showed that both gestures involve relatively similar finger and hand movements, which can make the device unable to discriminate between the two. Apart from that, most of the gestures produced fewer than five (5) errors, which is an encouraging result for the prototype. It was also observed that some errors occurred due to poor soldering of the sensors, which caused intermittent data output during the gestures.
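For a single summary figure, aggregating the per-gesture counts in Table 2 gives an overall success rate of

$$\frac{29+28+25+28+28+27+20+24+29+27+26+28+19}{13 \times 30} = \frac{338}{390} \approx 86.7\%,$$

with per-gesture accuracy ranging from 19/30 (63.3%) for "I" to 29/30 (96.7%) for "Assalamualaikum" and "Deaf".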
Overall, the system showed promising performance in accurately recognizing and translating sign
language gestures. However, some challenges were observed, such as occasional misinterpretations or
inaccuracies, particularly in complex or rapid gestures. These limitations provide areas for further
refinement and improvement in future iterations of the system. The accuracy of gesture recognition varied
across the thirteen (13) selected sign language gestures. Certain gestures exhibited higher recognition
rates, indicating the effectiveness of the sensor-based system in capturing and interpreting those specific
hand movements. However, it is important to note that some gestures such as “You’re Welcome” and “I”
may have been more challenging to detect accurately due to the reasons explained above, leading to
occasional misinterpretations or inaccuracies. These variations in accuracy could be attributed to factors
such as the complexity of the gesture, speed of execution, and individual variations in performing the
gestures. Furthermore, the usability and user experience of the developed system are critical aspects to
consider. Participants' feedback regarding the comfort and convenience of wearing the sensor glove, as
well as the user interface of the Android application, can provide valuable insights for system refinement.
It is important to address any discomfort or design limitations that may affect the user's ability to perform
sign language gestures naturally and effortlessly. While the results are promising, there are several
limitations that should be acknowledged. The experiment focused on a specific set of sign language
gestures, and expanding the gesture vocabulary would be essential for real-world applications.
Nevertheless, the successful translation of sign language gestures in real-time opens a range of practical
applications for the sensor-based system. It can facilitate communication between deaf and hearing
individuals in various settings, including educational institutions, workplaces, and public spaces. The
system's ability to provide instantaneous translations enhances the accessibility and inclusivity of
communication for the deaf community.
Table 2. Performance of the Wearable Sensor Glove and Android-based App

| Gesture | Number of tests | Correct | Mistakes |
|---|---|---|---|
| Assalamualaikum | 30 | 29 | 1 |
| Waalaikumussalam | 30 | 28 | 2 |
| Congratulation | 30 | 25 | 5 |
| Thank you | 30 | 28 | 2 |
| Hello | 30 | 28 | 2 |
| Please | 30 | 27 | 3 |
| You're Welcome | 30 | 20 | 10 |
| Blind | 30 | 24 | 6 |
| Deaf | 30 | 29 | 1 |
| No | 30 | 27 | 3 |
| Yes | 30 | 26 | 4 |
| Not Yet | 30 | 28 | 2 |
| I | 30 | 19 | 11 |
6. Conclusion
This paper provides a comprehensive description of the design of a wearable sensor glove
specifically designed for sign language interpretation. Additionally, an Android-based application was
developed, which can display words in real time and generate speech based on translated gestures. The
wearable device itself has five (5) flexible sensors and an inertial sensor. Together, these sensors
accurately measure sign language gestures, which are then fed into an Arduino Nano microcontroller for
translation processing into words, and then the processed data is sent via Bluetooth to a custom Android-
based application called the Sign Language Translator (SLT) application. When the application receives
the translated data, it displays the corresponding texts and executes the speech associated with the
translated gesture. The paper goes into great detail, thoroughly describing each step in the development
of this device. Furthermore, the paper highlights promising results from preliminary experiments
conducted using the wearable smart glove. This study demonstrated the efficiency of the device in terms of text display and speech production for a total of thirteen (13) sign language gestures. In summary, this paper
not only outlines the development process of the wearable sensor glove and Android-based application,
but also demonstrates its successful application in displaying words and generating speech for various
sign languages.
In the future, this project will involve storing its sensor values in a database for easy access and
analysis. In addition, the range of movements that the application can detect is planned to be expanded.
To this end, new initiatives will be taken, including the development of a pair of wearable smart gloves, in
contrast to the current offering, which consists of a single wearable smart glove. A pair of gloves will allow a wider variety of sign language gestures to be recognised. Additionally, the application can be
enhanced with additional features and functionality. For example, users may have the option to save text
and audio, ensuring a personalized experience when using sign language gestures. Also, within the translation application, a complete sign language database can be developed that can become an interactive learning
tool. To enhance accuracy, researchers can employ advanced techniques such as deep learning,
convolutional neural networks (CNNs), or recurrent neural networks (RNNs) to capture intricate patterns
and dependencies in the sensor data. These models can be trained on large datasets of annotated sign
language gestures, allowing them to learn and generalize complex gesture representations effectively. The
app aims to make learning sign language easier and more enjoyable for students attending deaf or sign
language schools. By incorporating a guide that includes visual representations of sign language, along
with corresponding text and signing audio, the learning process becomes engaging and accessible.
Through these future developments, the authors wish to develop a device that can provide a flexible and
user-friendly method that can interpret sign language gestures and convert them into text and audio based on the preferences of deaf users.
Acknowledgement
This research was supported by the Ministry of Higher Education (MOHE) through the Fundamental
Research Grant Scheme for Research Acculturation of Early Career (FRGS-RACER)
(RACER/1/2019/TK04/UTHM//5) and Universiti Tun Hussein Onn Malaysia.
References
[1] Bryan Berrú-Novoa, Ricardo González-Valenzuela and Pedro Shiguihara-Juárez, "Peruvian sign language
recognition using low resolution cameras", in Proceedings of the 2018 IEEE XXV International Conference on
Electronics, Electrical Engineering and Computing (IEEE INTERCON 2018), 8-10 August 2018, Lima, Peru, E-ISBN:
978-1-5386-5491-0, DOI: 10.1109/INTERCON.2018.8526408, pp. 1-4, Published by IEEE, Available:
https://ieeexplore.ieee.org/document/8526408.
[2] Ariya Thongtawee, Onamon Pinsanoh and Yuttana Kitjaidure, "A Novel Feature Extraction for American Sign
Language Recognition Using Webcam", in Proceedings of the 2018 11th Biomedical Engineering International
Conference (IEEE BMEiCON 2018), 21-24 November 2018, Chiang Mai, Thailand, E-ISBN: 978-1-5386-5724-9, DOI:
10.1109/BMEiCON.2018.8609933, pp. 1-5, Available: https://ieeexplore.ieee.org/document/8609933.
[3] Sakshi Sharma and Sukhwinder Singh, “Vision-based hand gesture recognition using deep learning for the
interpretation of sign language”, in Expert Systems with Applications, Online ISSN: 0957-4174, Vol. 182, p. 115657,
15 November 2021, Published by Elsevier, DOI: 10.1016/j.eswa.2021.115657, Available:
https://www.sciencedirect.com/science/article/abs/pii/S0957417421010484.
[4] Neil Buckley, Lewis Sherrett and Emanuele Lindo Secco, “A CNN sign language recognition system with single
& double-handed gestures", in Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications
Conference 2021 (COMPSAC), 12-16 July 2021, Madrid, Spain, Online ISBN: 978-1-6654-2464-6, E-ISBN: 978-1-6654-
2463-9, DOI: 10.1109/COMPSAC51774.2021.00173, pp. 1250-1253, Published by IEEE, Available:
https://ieeexplore.ieee.org/document/9529449.
[5] Polurie Venkata Vijay Kishore, M. V. D. Prasad, Ch. Raghava Prasad and R. Rahul, "4-Camera model for sign
language recognition using elliptical fourier descriptors and ANN", in Proceedings of the 2015 International
Conference on Signal Processing and Communication Engineering Systems (IEEE SPACES 2015), 2-3 January 2015,
Guntur, India, Online ISBN: 978-1-4799-6109-2, DOI: 10.1109/SPACES.2015.7058288, pp. 34-38, Published by IEEE,
Available: https://ieeexplore.ieee.org/document/7058288.
[6] Kshitij Bantupalli and Ying Xie, "American Sign Language Recognition using Deep Learning and Computer
Vision", in Proceedings of the 2018 IEEE International Conference on Big Data (IEEE Big Data 2018), 10-13 December
2018, Seattle, USA, Online ISBN: 978-1-5386-5035-6, DOI: 10.1109/BigData.2018.8622141, pp. 4896-4899, Published
by IEEE, Available: https://ieeexplore.ieee.org/document/8622141.
[7] Ali Imran, Abdul Razzaq, Irfan Ahmad Baig, Aamir Hussain, Sharaiz Shahid et al., “Dataset of Pakistan Sign
Language and Automatic Recognition of Hand Configuration of Urdu Alphabet through Machine Learning”, in
Data in Brief, Online ISSN: 2352-3409, Vol. 36, p. 107021, 2 April 2021, Published by Elsevier, DOI:
10.1016/j.dib.2021.107021, Available: https://www.sciencedirect.com/science/article/pii/S235234092100305X.
[8] Neel Kamal Bhagat, Y. Vishnusai and G. N. Rathna, "Indian Sign Language Gesture Recognition using Image
Processing and Deep Learning", in Proceedings of the 2019 Digital Image Computing: Techniques and Applications
(IEEE DICTA 2019), 2-4 December 2019, Perth, WA, Australia, Online ISBN: 978-1-7281-3857-2. DOI:
10.1109/DICTA47822.2019.8945850, pp. 1-8, Available: https://ieeexplore.ieee.org/document/8945850.
[9] Qinkun Xiao, Minying Qin, Peng Guo and Yidan Zhao, "Multimodal Fusion Based on LSTM and a Couple
Conditional Hidden Markov Model for Chinese Sign Language Recognition", in IEEE Access, Online ISSN: 2169-
3536, Vol. 7, pp. 112258-112268, 28 June 2019, Published by IEEE, DOI: 10.1109/ACCESS.2019.2925654, Available:
https://ieeexplore.ieee.org/document/8750875.
[10] Wang Pan, Xiongquan Zhang and Zhongfu Ye, "Attention-Based Sign Language Recognition Network Utilizing
Keyframe Sampling and Skeletal Features", in IEEE Access, Online ISSN: 2169-3536, Vol. 8, pp. 215592-215602, 27
November 2020, Published by IEEE, DOI: 10.1109/ACCESS.2020.3041115, Available:
https://ieeexplore.ieee.org/document/9272801.
[11] Jie Huang, Wengang Zhou, Houqiang Li and Weiping Li, “Sign language recognition using real-sense", in
Proceedings of the 2015 IEEE China Summit and International Conference on Signal and Information Processing
(ChinaSIP), 12-15 July 2015, Chengdu, China, E-ISBN: 978-1-4799-1948-2, DOI: 10.1109/ChinaSIP.2015.7230384, pp.
166-170, Published by IEEE, Available: https://ieeexplore.ieee.org/document/7230384.
[12] Mohamed Deriche, Salihu O. Aliyu and Mohamed Mohandes, “An Intelligent Arabic Sign Language Recognition
System Using a Pair of LMCs With GMM Based Classification", in IEEE Sensors Journal, Print ISSN: 1530-437X,
Online ISSN: 1558-1748, pp. 8067-8078, Vol. 19, No. 18, 15 September 2019, Published by IEEE, DOI:
10.1109/JSEN.2019.2917525, Available: https://ieeexplore.ieee.org/document/8718638.
[13] Anshul Mittal, Pradeep Kumar, Partha Pratim Roy, Raman Balasubramanian and Bidyut B. Chaudhuri, “A
Modified LSTM Model for Continuous Sign Language Recognition Using Leap Motion”, in IEEE Sensors Journal,
Print ISSN: 1530-437X, Online ISSN: 1558-1748, pp. 7056-7063, Vol. 19, No. 16, 15 August 2019, Published by IEEE,
DOI: 10.1109/JSEN.2019.2909837, Available: https://ieeexplore.ieee.org/document/8684245.
[14] M.A. Ahmed, B.B. Zaidan, A.A. Zaidan, Mahmood M. Salih, Z.T. Al-qaysi et al., “Based on wearable sensory
device in 3D-printed humanoid: A new real-time sign language recognition system”, in Measurement, Online
ISSN: 0263-2241, Vol. 168, p. 108431, 15 January 2021, DOI: 10.1016/j.measurement.2020.108431, Available:
https://www.sciencedirect.com/science/article/abs/pii/S0263224120309659.
[15] Vaibhav Mehra, Aakash Choudhury and Rishu Ranjan Choubey, “Gesture to Speech Conversion using Flex
Sensors, MPU6050 and Python”, in International Journal of Engineering and Advanced Technology (IJEAT), Online
ISSN: 2249-8958, pp. 4686-4690, Vol. 8, No. 6, August 2019, Published by IJEAT, DOI: 10.35940/ijeat.F9167.088619,
Available: https://www.ijeat.org/wp-content/uploads/papers/v8i6/F9167088619.pdf.
[16] Teak-Wei Chong and Beom-Joon Kim, “American Sign Language Recognition System using Wearable Sensors
with Deep Learning Approach”, in The Journal of the Korea Institute of Electronic Communication Sciences, Print
ISSN: 1975-8170, Online ISSN: 2288-2189, pp. 291-298, Vol. 15, No. 2, 30 April 2020, Published by KIECS, DOI:
10.13067/JKIECS.2020.15.2.291, Available: https://www.koreascience.or.kr/article/JAKO202012764216522.page.
[17] Rishikanth Chandrasekaran, Harini Sekar, Gautham Rajagopal, Ramesh Rajesh and Vineeth Vijayaraghavan,
“Low-cost intelligent gesture recognition engine for audio-vocally impaired individuals”, in Proceedings of the
2014 IEEE Global Humanitarian Technology Conference (GHTC), 10-13 October 2014, Atlanta, Georgia, USA, Online
ISBN: 978-1-4799-7193-0, DOI: 10.1109/GHTC.2014.6970349, pp. 628-634, Published by IEEE, Available:
https://ieeexplore.ieee.org/document/6970349.
[18] Ajay Kannan, Ateendra Ramesh, Lakshminarasimhan Srinivasan and Vineeth Vijayaraghavan, "Low-cost static
gesture recognition system using MEMS accelerometers", in Proceedings of the Global Internet of Things Summit
(GIoTS 2017), 6-9 June 2017, Geneva, Switzerland, E-ISBN: 978-1-5090-5873-0, DOI: 10.1109/GIOTS.2017.8016217,
pp. 1-6, Published by IEEE, Available: https://ieeexplore.ieee.org/document/8016217.
[19] Boon Giin Lee and Su Min Lee, "Smart Wearable Hand Device for Sign Language Interpretation System With
Sensors Fusion", in IEEE Sensors Journal, Print ISSN: 1530-437X, Online ISSN: 1558-1748, pp. 1224-1232, Vol. 18,
No. 3, 1 February 2018, Published by IEEE, DOI: 10.1109/JSEN.2017.2779466, Available:
https://ieeexplore.ieee.org/document/8126796.
[20] Wei Song, Qingquan Han, Zhonghang Lin, Nan Yan, Deng Luo et al., “Design of a Flexible Wearable Smart sEMG
Recorder Integrated Gradient Boosting Decision Tree Based Hand Gesture Recognition”, in IEEE Transactions on
Biomedical Circuits and Systems, Print ISSN: 1932-4545, Online ISSN: 1940-9990, pp. 1563-1574, Vol. 13, No. 6,
December 2019, Published by IEEE, DOI: 10.1109/TBCAS.2019.2953998, Available:
https://ieeexplore.ieee.org/document/8903245.
[21] Yi Yu, Xiang Chen, Shuai Cao, Xu Zhang and Xun Chen, “Exploration of Chinese Sign Language Recognition
Using Wearable Sensors Based on Deep Belief Net”, in IEEE Journal of Biomedical and Health Informatics, Print
ISSN: 2168-2194, Online ISSN: 2168-2208, pp. 1310-1320, Vol. 24, No. 5, 5 May 2020, Published by IEEE, DOI:
10.1109/JBHI.2019.2941535, Available: https://ieeexplore.ieee.org/document/8839065.