Pressure Sensor Positioning for Accurate Human Interaction with a Robotic Hand
Abstract— Sensor positioning involves determining the best location for a sensor to be placed or installed so that it can effectively sense or measure the desired physical or environmental parameters. In this paper, we present a novel approach for finding sensor positions on a robotic hand using machine learning techniques. We focus on pressure sensors and their placement in order to enhance the performance and reliability of gesture recognition tasks. Our study analyzes data from 22 sensors placed at different locations on a right-handed robotic hand, simulating 10 distinct hand gestures. We employ various machine learning algorithms and create a correlation matrix to determine the most relevant sensor positions. The results highlight the significance of sensor optimization in improving the overall efficiency and effectiveness of robotic hand systems. After reducing the number of sensors to 12, we could still differentiate the 10 hand gestures with 99% accuracy.

I. INTRODUCTION

Human-machine interaction (HMI) is the communication and collaboration between humans and machines, typically through technology such as touchscreens, voice commands, gesture recognition, and other intuitive interfaces [1]. With continued advances in artificial intelligence and machine learning, humans can interact with sophisticated machines such as robots. HMI enables robots to perform tasks that require human-like dexterity, such as advanced manufacturing, fine surgery, and even uncharted space exploration [2]. A robotic hand, for example, is a mechanical device that mimics the function of a human hand. It typically consists of a series of interconnected joints and actuators that allow it to grasp and manipulate objects in a way that resembles the movement of a human hand. Robotic hands can be used for various purposes beyond prosthetics and industry, extending to social and humanoid robotics applications [3].

Considering the interaction between human and robotic hands, various methods have been suggested, which can generally be divided into two primary categories: vision-based techniques and sensor-based techniques [4]. Vision-based techniques rely on processing visual data from cameras to recognize and interpret hand gestures [5]. The vision system can consist of a single camera, such as a webcam or smartphone camera, or a stereo camera that obtains depth information from two simultaneous images. Some researchers have captured the 3D structure of the hand via light-coding devices such as the Microsoft Kinect. To improve the accuracy of gesture recognition, different body markers have been attached to the user's hand [6]. Although vision-based techniques achieve high recognition performance, they suffer from line-of-sight occlusions and high power consumption, and they are sensitive to lighting conditions, background noise, and other factors that can degrade the quality of the visual data being processed [6]. To address these problems, sensor-based techniques have been proposed, although such sensors may decrease the dexterity of the hands and reduce physical interaction. Accelerometer and gyroscope sensors [7] and IMU (inertial measurement unit) sensors [8] have been used for hand gesture recognition. For complex gesture recognition, data-glove-based techniques have been proposed [9]; a data glove consists of an array of sensors worn on the user's hand [10]. Electromyography (EMG) sensors can detect neuronal electrical potentials to recognize muscle activity [11]. Pressure sensors have been used frequently for muscle activity recognition [12].

HMIs rely heavily on sensors to collect data and provide accurate, up-to-date information to operators, allowing them to monitor and control industrial processes in real time. After the input signals are collected, they are transformed into commands and sent to the machine system for execution. In human-robot interaction, tactile and force sensors make it possible to collect biological signals, including body movements. Resistive, capacitive, and piezoelectric sensors are among the commonly utilized varieties of tactile and force sensors [13]. Each of these sensors has its own merits and drawbacks in terms of sensitivity, accuracy, and response time [14].

Although sensor-based techniques have high sensitivity, the sensors need to be positioned accurately to obtain a precise hand gesture recognition system. This is mainly because the human hand is a complex system and interacting with it is a challenging task: the human hand contains 27 bones, controlled by more than 30 muscles and 20 identified muscular branches [15]. The aim of this paper is to develop a human-machine interaction system to control a robotic hand through muscle signals, using pressure sensors. To build an accurate HMI system, we should find the optimum locations of the sensors on the human hand, since determining the optimal sensor positions can maximize efficiency and reliability [1-3]. In this study, we analyze data from 22 pressure sensors placed on a right-handed robotic hand, simulating 10 different hand gestures. By employing various machine learning algorithms, we create a heatmap/correlation matrix to identify the most relevant sensor
Authorized licensed use limited to: the Leddy Library at the University of Windsor. Downloaded on August 30,2023 at 00:59:14 UTC from IEEE Xplore. Restrictions apply.
to correctly classify the hand gestures using the sensor data. The performance of each algorithm was determined by the overall accuracy, which takes into account both macro- and micro-averaged precision and recall.

IV. EXPERIMENTAL RESULTS

In this section, we first classified the gestures using the data from all 22 sensors; the same experiment was then repeated with only the top 12 sensors, as will be explained.

A. Evaluation Metrics

The performance of robotic hands in recognizing hand gestures can be assessed with various metrics, such as accuracy, precision, and recall. These criteria can be calculated from the false positive (fp), false negative (fn), true negative (tn), and true positive (tp) counts (Table 1).

Table 1. Evaluation equations

Precision_µ = Σ_i tp_i / Σ_i (tp_i + fp_i)
Precision_M = (1/k) Σ_i [ tp_i / (tp_i + fp_i) ]
Recall_µ = Σ_i tp_i / Σ_i (tp_i + fn_i)
Recall_M = (1/k) Σ_i [ tp_i / (tp_i + fn_i) ]
Overall Accuracy = Σ_i tp_i / N

where the sums run over the k = 10 gesture classes and N is the total number of samples.

In terms of precision, the boosted decision tree algorithm stood out as the clear winner. Linear regression, on the other hand, produced relatively weaker results, although it could still prove to be a valuable tool in certain contexts.

Our results demonstrate that the machine learning algorithms we employed were able to classify most hand gestures with a high degree of precision and recall. This can be attributed to the large number of sensors installed on the human hand and to the limited number of distinct actions. In practice, when multiple gestures are performed simultaneously or when environmental factors affect the sensor data, gesture recognition would be more challenging.

C. Correlation Matrix

By analyzing the correlation matrix, we can obtain valuable information about the relationships between sensors. The values in the correlation matrix range from -1 to 1, with -1 representing a full negative correlation, 0 demonstrating no correlation, and 1 showing a full positive correlation. Figure 4 shows the correlation matrix for the thumb-up gesture. We hypothesized that sensors with correlation values near zero, which indicates independence, are more important than highly correlated sensors. Reducing the number of sensors also reduces the feature selection time.
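The correlation-based ranking described in Section IV.C — preferring sensors whose correlations with the other sensors are closest to zero — can be sketched as follows. The function name and the mean-absolute-correlation redundancy score are illustrative assumptions, not the paper's exact procedure; the paper itself retains the top 12 of 22 sensors.

```python
import numpy as np

def select_sensors(readings, keep=12):
    """Rank sensors by how independent they are from the rest.

    readings: array of shape (n_samples, n_sensors), one column per
    pressure sensor. Sensors whose mean absolute correlation with the
    other sensors is lowest (closest to zero) are treated as the most
    informative, per the hypothesis in Section IV.C. The scoring rule
    here is an illustrative assumption, not the paper's exact method.
    """
    corr = np.corrcoef(readings, rowvar=False)  # (n_sensors, n_sensors)
    np.fill_diagonal(corr, 0.0)                 # ignore self-correlation
    redundancy = np.abs(corr).mean(axis=1)      # mean |r| per sensor
    ranked = np.argsort(redundancy)             # most independent first
    return ranked[:keep]
```

On the paper's data, `select_sensors(X, keep=12)` would return the indices of the 12 retained sensors; a highly redundant sensor (e.g., one that duplicates a neighbor's signal) is ranked last and dropped first.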
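The micro- and macro-averaged metrics listed in Table 1 can be computed directly from per-class tp/fp/fn counts. The sketch below (the function name and label encoding are illustrative, not from the paper) shows one way to do this with NumPy:

```python
import numpy as np

def gesture_metrics(y_true, y_pred, n_classes):
    """Micro/macro precision and recall plus overall accuracy,
    following the definitions in Table 1."""
    tp = np.zeros(n_classes)
    fp = np.zeros(n_classes)
    fn = np.zeros(n_classes)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    # Micro-averaging pools the counts over all classes before dividing.
    precision_micro = tp.sum() / (tp.sum() + fp.sum())
    recall_micro = tp.sum() / (tp.sum() + fn.sum())
    # Macro-averaging computes each class's ratio first, then averages.
    with np.errstate(divide="ignore", invalid="ignore"):
        per_class_p = np.where(tp + fp > 0, tp / (tp + fp), 0.0)
        per_class_r = np.where(tp + fn > 0, tp / (tp + fn), 0.0)
    precision_macro = per_class_p.mean()
    recall_macro = per_class_r.mean()
    accuracy = tp.sum() / len(y_true)
    return precision_micro, precision_macro, recall_micro, recall_macro, accuracy
```

Micro-averaging weights every sample equally, so with 10 gesture classes of similar size it tracks the overall accuracy; macro-averaging weights every class equally and is more sensitive to a single poorly recognized gesture.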