
Journal of Physics: Conference Series

PAPER • OPEN ACCESS

Survey of Hand Gesture Recognition Systems

To cite this article: Ahmed Kadem Hamed Al-Saedi and Abbas H Hassin Al-Asadi 2019 J. Phys.: Conf. Ser. 1294 042003

2nd International Science Conference IOP Publishing
IOP Conf. Series: Journal of Physics: Conf. Series 1294 (2019) 042003 doi:10.1088/1742-6596/1294/4/042003

Survey of Hand Gesture Recognition Systems

Ahmed Kadem Hamed Al-Saedi¹ and Abbas H Hassin Al-Asadi²

¹Faculty of Information Technology, University of Babylon, Iraq.
²Faculty of Information Technology, University of Basrah, Iraq.

Email: [email protected]

Abstract. Recognition of human gestures is an important subject in computer science, especially in computer vision and sign language research. It aims at interpreting human gestures through mathematical models. Gestures originate from different parts of the human body, but the most common ones are made with the hand or face. Gesture recognition enables computers to understand and interpret the language of the human body as well as possible, and to build a bridge between humans and machines beyond the traditional user interfaces, from command lines to graphical user interfaces (GUIs), which so far limit common input to the keyboard and mouse. In this paper, we review and analyze several methods of hand gesture recognition, including Artificial Neural Networks (ANN), histogram-based features, a fuzzy clustering algorithm, the Hidden Markov Model (HMM), the Condensation algorithm, and the Finite-State Machine (FSM).

1. Introduction

A gesture is a form of nonverbal or non-vocal communication in which the body's movement conveys particular messages. Gestures originate from different parts of the human body, but the most common ones are made with the hand or face.
The development of computer technology has led to a need for natural communication between humans and machines. Although modern mobile devices use touchscreen technology, it is not yet cheap enough to be widely adopted on desktop systems. The mouse is very useful for controlling a machine, but it may be unsuitable for people with physical disabilities and for people who are not accustomed to interacting with a mouse.
In [1], Krueger introduced the first gesture-based interaction as a new type of Human-Computer Interaction (HCI) in the mid-1970s, and in later years HCI became a significant area of research.
The largest category of gestures is made by hand (one or two), because the hand can form many configurations that can be clearly distinguished. Finally, hand gestures are essential for sign language.
There are four categories of hand gestures: conversational gestures, controlling gestures, manipulative gestures, and communicative gestures [2]. The most important case of communicative gestures is sign language, which is often used as a test bed for vision algorithms because it is highly structured [3]. At the same time, sign language enables disabled people to interact easily with the

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd

computers. Analyzing pointing gestures to identify a virtual target is an example of controlling gestures, which are a focus of research on vision-based interfaces (VBI) [3]. Navigating gestures are another kind of controlling gesture; they use the hand, instead of wands, to navigate virtual environments (VEs) by capturing the hand direction as a 3D directional vector. Manipulative gestures are used to interact with virtual objects in a natural way; applications of this type include virtual assembly and remote operation. Communicative gestures are important in human interaction and are the subject of psychological research, which vision-based motion-capture techniques can support [3].
In general, gestures are divided into two categories: static gestures, which depend on the hand shape, and dynamic gestures, which depend on the hand movements [4]. In [5], Liang introduced the best definition of a hand posture, or static gesture: "Posture is a specific combination of hand position, orientation, and flexion observed at some time instance".
In a static hand gesture, the posture is constant over time, and one or more hand images taken at a certain moment are enough to fully understand the meaning of the gesture. Good, simple examples of this type of gesture are the "OK" and "STOP" signs.
Liang [5] defined the dynamic hand gesture as: "Gesture is a sequence of postures connected by motion over a short time span".
Here the dynamic hand gesture is represented by a sequence of postures rather than a single posture, with the postures determined from separate frames of the video signal. Good examples of dynamic gestures are the "no", "yes", "come here", and "goodbye" gestures, which cannot be identified unless the temporal context information is taken into account.
The remainder of this paper is arranged as follows: Section 2 presents hand gesture analysis approaches. Section 3 covers vision-based hand gesture recognition approaches. Section 4 explains gesture recognition techniques. Finally, Section 5 concludes.

2. Hand gesture analysis approaches


The first stage in any hand gesture recognition system is obtaining the data needed to perform the task. The current techniques are vision-based, data-glove, and colored-marker approaches, as shown in Figure 1.

2.1. Vision-based approaches

In these approaches [6], human motion is captured by one or more cameras [7]. Vision-based devices can handle many properties useful for interpreting gestures, such as color and texture, which sensors cannot capture. Although these approaches are simple for the user, many challenges arise, for example lighting variation, complex backgrounds, and the presence of objects with skin-like color near the hand (clutter); in addition, the system must meet criteria such as recognition time, speed, robustness, and computational efficiency [8][9]. Vision-based techniques may differ in seven ways:
1. The number of cameras.
2. The speed and response time.
3. The structure of the environment (limitations such as movement speed and/or lighting conditions).
4. User requirements (does the user need to wear anything?).
5. The low-level features used (such as histograms, silhouettes, edges, moments, and regions).
6. The type of representation used (two-dimensional or three-dimensional).
7. Whether time is represented [10].

2.2. Sensor or data glove based approaches


Data glove approaches use sensors to capture the position and motion of the hand. These approaches make it easy and accurate to compute the coordinates of the fingers and palm and the hand configuration [9][11][12]. However, the sensors do not connect easily with the computer, because the user must be physically wired to it. These devices are also expensive, and


unsuitable for operation in virtual reality environments [12]. According to Moore's Law, sensors will become smaller and cheaper over time, and we believe they will become prevalent in the future.

2.3. Colored-marker approaches

These approaches use marked gloves, worn on the human hand and colored to help in tracking the hand and locating the fingers and palm. Marker gloves allow the hand shape to be recovered by extracting geometric features [13]. In [14], the authors used a wool glove with three different colors to represent the palm and fingers. This approach is simple and inexpensive compared with sensor or data glove approaches [14], but the interaction between human and computer is still not natural enough [13].



Figure 1. Examples of input techniques for hand gesture recognition systems [11]. a: data-glove
approaches. b: vision-based approaches. c: colored marker approaches

3. Vision-based hand gesture recognition approaches

These techniques rely only on the bare hand and extract the data required for the recognition process [13]. They have useful properties: they are simple [13][15] and very easy to use [6], and they provide a direct connection with the computer [13]. Vision-based hand gesture recognition can be divided into two types, as follows.

3.1. Appearance-based approach

This approach [16] works by extracting features from the input hand image and comparing them with stored image features. Its advantage is that it is simpler and easier than a three-dimensional model, because features are easy to extract from a two-dimensional image [17]. However, the method is affected by changes in lighting and by other objects in the background.
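As a toy illustration of the matching step this approach relies on, the sketch below compares an input feature vector against stored templates by Euclidean distance; the feature values and gesture labels are invented for the example.

```python
import math

def euclidean(a, b):
    # Distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, templates):
    # templates: {gesture_label: stored feature vector}.
    # Returns the label whose stored features are nearest to the input.
    return min(templates, key=lambda label: euclidean(features, templates[label]))

templates = {"open": [0.9, 0.8, 0.1], "fist": [0.2, 0.1, 0.9]}
print(classify([0.85, 0.75, 0.2], templates))  # open
```

In a real appearance-based system, the stored templates would be feature vectors computed from training images rather than hand-written numbers.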

3.2. 3D model-based approach

This approach uses a 3D model description [18] to model and analyze the shape of the hand. Kinematic parameters must be found by projecting the three-dimensional model into a two-dimensional model [16][19] that corresponds to the hand edge images. In the two-dimensional model, many features may be lost. There are different kinds of 3D models, including volumetric and skeletal models. The volumetric model captures the three-dimensional visual appearance of the human hand and is mostly used in real-time applications. A large number of hand parameters must be handled, which is a problem because of the huge dimensionality. The skeletal method can be used to address this problem: it limits the set of parameters that form the hand shape of the three-dimensional structure.
In general, real-time detection performs better in the appearance-based approach than in the 3D-model-based approach, but the latter can represent a larger range of hand gestures [20].
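To make the skeletal idea concrete, the hypothetical sketch below treats one planar finger as a chain of segments parameterized only by joint flexion angles, a far smaller parameter set than a full volumetric description; the segment lengths and angles are invented for illustration.

```python
import math

def finger_points(base, lengths, angles):
    """Planar forward kinematics: joint flexion angles -> joint positions.

    base: (x, y) knuckle position; lengths: segment lengths;
    angles: flexion of each joint relative to the previous segment (radians).
    """
    x, y = base
    heading = 0.0
    points = [(x, y)]
    for length, angle in zip(lengths, angles):
        heading += angle                  # accumulate relative flexion
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# A straight finger (all flexion angles zero) extends along the x-axis.
pts = finger_points((0.0, 0.0), [3.0, 2.0, 1.0], [0.0, 0.0, 0.0])
print(pts[-1])  # (6.0, 0.0)
```

The skeletal model fits such angle parameters to the observed hand instead of fitting every surface point of a volumetric model.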

Figure 2. Different hand modeling methods to represent the hand posture [21]. a: A three-dimensional volumetric model. b: A three-dimensional geometric model. c: A three-dimensional skeleton model. d: A model based on a colored marker. e: A non-geometric shape model (binary silhouette) [22]. f: A two-dimensional deformable template model (contour) [22]. g: A motion-based model.

4. Gesture recognition techniques

Most present hand recognition tools [23] either do not collect all the information or are static: they give a good view but do not produce correct output in all situations, so they work very well on some platforms but not on all of them. The tools available to recognize gestures draw on approaches ranging from statistical and dynamic modeling to image processing, computer vision, pattern recognition, and so on.

4.1. Artificial neural networks (ANN)

These techniques are simulated and inspired by biological neural networks and are used to approximate functions over large numbers of inputs that are often unknown. Just as neurons are the basic units of the brain, nodes are the core of neural networks. The nodes are connected by weighted links, and the weights act as the network's storage.
For example, the back-propagation algorithm uses gradient descent to adjust the parameters of the network so that it better fits a training set of input/output pairs. ANNs are robust to errors in the training data and have been applied successfully to problems in speech recognition, visual interpretation, and robot control strategies. Most artificial neural networks are implemented on sequential machines, but they can run on parallel machines, and there are now devices designed specifically for ANN applications. As mentioned above, ANNs are inspired by biological neural networks, which leads to difficulties they do not solve by design: for example, ANN units produce a single output with a constant value, whereas biological neurons produce a complex time series of spikes.
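The gradient descent update described above can be sketched for a single logistic neuron; this is the same rule that back-propagation applies layer by layer. The learning task (logical OR), learning rate, and epoch count are arbitrary choices for the illustration.

```python
import math

def train_neuron(samples, epochs=2000, lr=0.5):
    # One logistic neuron trained by gradient descent on squared error.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            z = sum(wi * xi for wi, xi in zip(w, inputs)) + b
            out = 1.0 / (1.0 + math.exp(-z))          # sigmoid activation
            delta = (out - target) * out * (1 - out)  # error gradient at the output
            w = [wi - lr * delta * xi for wi, xi in zip(w, inputs)]
            b -= lr * delta
    return w, b

# Learn logical OR from four input/output pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
print(round(predict((0, 0))), round(predict((1, 1))))  # 0 1
```

A gesture classifier replaces the two binary inputs with a feature vector (e.g. an orientation histogram) and the single output with one output node per gesture class.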
Many researchers have examined the use of neural networks to recognize gestures. Most of the research uses ANNs in the gesture classification process, while others use them to segment the hand. In [24] a hand tracking system is introduced, and in [25] a system to recognize the Myanmar Alphabet Language (MAL) with NNs is provided. The latter system finds the edges of the hand image using a filter from the Adobe Photoshop application and constructs the feature vector of the input image by computing the histogram of local direction; this vector is passed as input to a supervised NN.
Stergiopoulou [24] introduced a system that recognizes static hand gestures using the SGONG (Self-Growing and Self-Organized Neural Gas) network. The YCbCr color space is used for hand detection, and a competitive Hebbian learning algorithm is used in the SGONG network for the learning process. The network starts with two neurons and grows continuously until a network of neurons is created that covers the hand object and captures the shape of the hand.
Manar [26] provides a system to identify Arabic Sign Language. The system consists of two recurrent NNs, a partially recurrent Elman network and a fully recurrent NN, used separately. The data are collected with a colored glove, and the HSI color model is used in a segmentation process that divides the image into six color layers, one for the wrist and five for the fingertips. For each image, 30 features are extracted and grouped: 15 elements represent two types of angles, one between the fingertips and another between the wrist and the fingertips [26], and the rest of the features for

representing the distances between the fingertips and between the wrist and the fingertips [26]. This feature vector is passed to the two NNs. There are 1200 color images, 900 for training and 300 for testing the system. In terms of recognition rate, the results were 89.67% for the Elman NN and 95.11% for the fully recurrent NN.
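A hypothetical reconstruction of such an angle-and-distance feature vector is sketched below; the point coordinates and the exact grouping of the features are invented for illustration, not taken from [26].

```python
import math

def angle_between(origin, p):
    # Angle of the vector origin -> p, in degrees.
    return math.degrees(math.atan2(p[1] - origin[1], p[0] - origin[0]))

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def features(wrist, fingertips):
    # Wrist-to-fingertip angle and distance per fingertip,
    # plus distances between adjacent fingertips.
    feats = []
    for tip in fingertips:
        feats.append(angle_between(wrist, tip))
        feats.append(distance(wrist, tip))
    for a, b in zip(fingertips, fingertips[1:]):
        feats.append(distance(a, b))
    return feats

wrist = (0.0, 0.0)
tips = [(0.0, 5.0), (3.0, 4.0)]  # two detected fingertip positions
print(features(wrist, tips))
```

With five fingertips detected from the five glove colors, this style of construction yields the kind of fixed-length vector that is fed to the recurrent networks.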

4.2. Hidden Markov models

Before explaining the concept of Hidden Markov Models (HMM), it is necessary to talk about Markov chains, which can be defined as finite-state automata in which each arc between states carries a value representing a transition probability; the probabilities on the outgoing arcs of any one state sum to one. Markov chains are deterministic in the sense that there is exactly one arc, with a certain value, for the transition from one state to another. This limitation of the Markov chain is handled by the Hidden Markov Model, which may have more than one arc carrying the same value [27]. HMMs are non-deterministic, and they are called hidden because the state sequence cannot be determined just by looking at the output. The Hidden Markov Model is a stochastic approach [28] consisting of a small number of Markov chain states together with a set of random functions. It is very useful in speech recognition, sign language recognition [17], [29], and so on.
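The forward algorithm, which evaluates how likely an observation sequence is under a given HMM by summing over all hidden state paths, can be sketched as follows; the two hand states, posture symbols, and probabilities are toy values invented for the example.

```python
def forward(obs, start, trans, emit):
    """Probability of an observation sequence under an HMM
    (forward algorithm: sum over all hidden state paths)."""
    states = list(start)
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Two hidden hand states, two observable posture symbols (toy numbers).
start = {"move": 0.6, "hold": 0.4}
trans = {"move": {"move": 0.7, "hold": 0.3},
         "hold": {"move": 0.4, "hold": 0.6}}
emit  = {"move": {"open": 0.2, "fist": 0.8},
         "hold": {"open": 0.9, "fist": 0.1}}
print(round(forward(["fist", "open"], start, trans, emit), 4))
```

A recognizer trains one such model per gesture and classifies an observed posture sequence by the model that assigns it the highest probability.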
Keskin [30] introduced a hand gesture recognition system that uses real-time hand tracking and HMMs for recognizing gestures in three dimensions. The system uses two color cameras to form the three-dimensional view. The challenge in this system is clutter, which is handled by using markers.
Schlenzig et al. [10] provide a system to recognize gestures using only one HMM. The HMM states represent the gestures, and the static hand postures are represented by the observation symbols. The number of gestures that can be defined in this system is very limited, because the HMM has only three states and nine observation symbols. The system uses a recursive filter to update the gesture estimates based on the current posture information. HMMs are used in glove-based or vision-based approaches, and it is important to locate the hand posture [31]. As studies have shown, HMM accuracy is very high and many gestures can be defined. Like ANNs, HMMs need to be trained, and to increase performance the exact number of states must be defined for each gesture or posture. When the types and number of gestures or postures are pre-defined, HMMs are the best option as a recognition technique. When gestures and hand postures are determined during system development, the development process may be more time-consuming due to retraining: when one HMM is used for all gestures, as in Starner's work, that single HMM must be retrained, whereas with one HMM per gesture only the new gesture's HMM needs training. The Hidden Markov Model requires a long training time, and its hidden nature makes it difficult to know what is happening inside it, but it is considered among the best approaches because of its high precision, which has reached more than 90%, and because it is well covered in the literature.

4.3. Condensation algorithm

Particle filtering in its basic form is performed by the repeated sampling procedure of the Bayes filter, called sequential importance sampling with resampling (SISR), and it can represent a range of probability distributions [32], allowing real-time estimation of non-linear, non-Gaussian dynamic systems. The Condensation algorithm is designed primarily to track the rapid motion of objects in clutter based on the particle filter principle [32]. Hybrid variants of the Condensation algorithm have been extended to identify a wide range of gestures based on their temporal trajectories.
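One resampling pass of the SISR procedure can be sketched as systematic resampling; the particle states and weights below are invented toy values, and in a real tracker the offset u0 would be drawn at random for every frame.

```python
def systematic_resample(particles, weights, u0):
    """One SISR resampling pass: survivors are drawn in proportion
    to their weights (which need not be normalized). u0 must lie in
    [0, total/n); drawing it at random each frame gives the
    stochastic step of the Condensation tracker."""
    n = len(particles)
    step = sum(weights) / n
    out, cum, i, u = [], weights[0], 0, u0
    for _ in range(n):
        while u > cum:          # advance to the particle owning mass u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

# Four hand-state hypotheses with unnormalized observation scores.
particles = ["slow-left", "still", "fast-right", "slow-right"]
weights = [1, 1, 17, 1]   # "fast-right" explains the current frame best
print(systematic_resample(particles, weights, 2.0))
```

After resampling, each surviving particle is perturbed by the dynamic model and re-weighted against the next frame, which is how the tracker follows rapid motion through clutter.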

4.4. Finite-state machine (FSM)

In [33], the authors introduced a system to identify hand gestures with an FSM. Four machine states are enough to represent the four clear stages of a generic gesture: a static start position (fixed for at least three frames), smooth motion of the hand and fingers until the end of the gesture, a static end position (held for at least three frames), and smooth motion of the hand back to the start position. This technique uses displacement vectors to compare the input gesture vector with the stored gesture vectors.
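A minimal sketch of such a four-stage machine, assuming each frame has already been labeled "static" or "moving" by an earlier processing step, might look like this; the hold threshold and frame labels are invented for the example.

```python
def recognize(frames, hold=3):
    """Tiny FSM over per-frame labels ("static" or "moving"):
    start pose held >= hold frames -> motion -> end pose held
    >= hold frames -> return motion. True if the gesture completes."""
    state, count = "start_pose", 0
    for f in frames:
        if state == "start_pose":
            if f == "static":
                count += 1
            elif count >= hold:
                state, count = "stroke", 0
            else:
                count = 0                  # pose broken too early: restart
        elif state == "stroke":
            if f == "static":
                state, count = "end_pose", 1
        elif state == "end_pose":
            if f == "static":
                count += 1
            elif count >= hold:
                state = "return"
            else:
                state, count = "stroke", 0  # end pose too short
        elif state == "return":
            if f == "static":
                return True                 # back at rest: gesture complete
    return False

seq = ["static"] * 3 + ["moving"] * 2 + ["static"] * 3 + ["moving"] * 2 + ["static"]
print(recognize(seq))  # True
```

The cited system labels frames and matches trajectories with displacement vectors rather than simple string labels, but the state progression is the same.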
4.5. Fuzzy clustering algorithms

The general idea of clustering algorithms is to divide the given sample data into clusters based on some measurements [34]. These algorithms are widely deployed because they can gather complex data sets into regular clusters [34]. Fuzzy clustering algorithms divide the data sample into groups in a fuzzy manner, meaning that one data pattern can belong to more than one cluster; this is the basic difference between them and other clustering algorithms [12].
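The fuzzy membership computation at the heart of fuzzy c-means can be sketched for one sample; the one-dimensional cluster centers and the fuzzifier m below are toy values for illustration.

```python
def memberships(point, centers, m=2.0):
    # Fuzzy c-means membership of one 1-D sample in every cluster:
    # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)). Memberships sum to 1,
    # so a sample can partially belong to several clusters at once.
    dists = [abs(point - c) for c in centers]
    if 0.0 in dists:  # sample sits exactly on a center: crisp membership
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** p for dj in dists) for di in dists]

u = memberships(2.0, [0.0, 10.0])
print([round(x, 3) for x in u])  # [0.941, 0.059]
```

The full algorithm alternates this membership update with recomputing each center as the membership-weighted mean of all samples until the assignments stabilize.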
In [35], Xingyan introduced a system that recognizes hand gestures using a fuzzy c-means (FCM) clustering algorithm, applied to mobile remote control. In this system, the image is obtained by a camera and then converted from RGB to HSV color space. The next step is to extract the hand shape after some pre-processing operations such as noise elimination, removal of unwanted objects, and thresholding. The feature vector consists of thirteen elements: the first parameter is the aspect ratio of the hand's bounding box, and the remaining twelve parameters represent grid cells of the image. Each cell holds the mean gray level of a 3x4 block in the hand image, which in fact represents the brightness of the pixels in that part of the hand image. The gestures are then classified using the FCM algorithm. The system was tested in different environments, including constant lighting conditions and complex backgrounds. Six gestures are performed, and for each gesture there are twenty samples in the vocabulary, which form the training database. The recognition accuracy of the system is 85.83%.
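A hypothetical sketch of this thirteen-element feature vector (bounding-box aspect ratio plus twelve cell means) might look as follows; the image size and the exact grid layout are assumptions made for the illustration.

```python
def grid_features(image):
    """Hypothetical sketch of the 13-element vector described above:
    bounding-box aspect ratio plus the mean gray level of each cell
    of a 4-row x 3-column grid (twelve cells)."""
    h, w = len(image), len(image[0])          # e.g. a 12 x 12 hand crop
    feats = [w / h]                            # 1. aspect ratio
    ch, cw = h // 4, w // 3                    # cell height / width
    for r in range(4):                         # 2-13. mean gray per cell
        for c in range(3):
            cell = [image[y][x]
                    for y in range(r * ch, (r + 1) * ch)
                    for x in range(c * cw, (c + 1) * cw)]
            feats.append(sum(cell) / len(cell))
    return feats

img = [[y for x in range(12)] for y in range(12)]  # gray level = row index
f = grid_features(img)
print(len(f), f[0], f[1])  # 13 1.0 1.0
```

Each such vector is then assigned fuzzy memberships against the six learned gesture clusters, and the gesture with the highest membership wins.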

4.6. Histogram-based features

Orientation histograms have been used in several studies as feature vectors. Freeman and Roth [36] implemented the first orientation histogram application in a real-time gesture recognition system, using the orientation histogram to identify the gestures. The digitized input video must be in black and white, and transformations are performed to calculate the local orientation of every image; the histogram is then represented in polar coordinates after blurring it with a filter. The system has two stages, one for training and one for running. In the training stage, the gestures are stored with their histograms in the training set. In the running stage, the feature vector of the input image is computed and compared with the feature vectors stored during training, using the Euclidean distance metric to find the nearest neighbor. The whole process took 100 ms per frame.
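The local-orientation histogram itself can be sketched from image gradients; the ramp test image and bin count below are invented for the example.

```python
import math

def orientation_histogram(gray, bins=8):
    """Sketch: local gradient orientation at every interior pixel,
    accumulated into a coarse normalized histogram feature vector."""
    hist = [0] * bins
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            dx = gray[y][x + 1] - gray[y][x - 1]   # central differences
            dy = gray[y + 1][x] - gray[y - 1][x]
            if dx == 0 and dy == 0:
                continue                           # flat region: no orientation
            angle = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]               # normalize the histogram

# A left-to-right ramp image: every gradient points along +x (bin 0).
ramp = [[x for x in range(8)] for _ in range(8)]
print(orientation_histogram(ramp))
```

Because only orientations are counted, the feature is largely insensitive to uniform lighting changes, which is what makes it attractive for the nearest-neighbor matching described above.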
In [37], Hanning et al. introduced a recognition system based on the local orientation histogram. The segmentation algorithm uses skin color to extract the hand shape: the color input image (RGB) is converted to the HSI color space to reduce the impact of lighting conditions, and a likelihood-ratio image is computed from the hue channel. The threshold method is used to segment the hand region, and 128 parameters from the local orientation histogram are used. The feature vector is reinforced by adding the coordinates of the image sub-window, and k-means clustering is used to compress the feature vector representation. In the recognition phase, the Euclidean distance is used to calculate the degree of matching between the feature vector of the input image and the stored postures. A Locality-Sensitive Hashing (LSH) technique is used to compute approximate nearest neighbors, minimizing the computational cost of image retrieval.
Wysoski et al. [38] presented a rotation-invariant approach to recognizing static gestures using boundary histograms. A skin-color filter performs the detection, followed by morphological operations such as erosion and dilation as pre-processing, and a clustering process locates groups with the same characteristics in the image. An ordinary contour-tracking algorithm extracts the boundary of each group; the image is then partitioned into grids and the boundary is normalized in size, which makes the system invariant to the distance between the hand and the camera. A homogeneous background is used, and chord-size chains are applied to represent the boundary. The image is partitioned into N regions, and the regions are partitioned in radial form [38]. The next step is to compute the histogram of the boundary chord sizes, so the feature vector contains a series of histogram features. Multi-Layer Perceptrons (MLP) and Dynamic Programming (DP) are used for classification. The system succeeded in distinguishing 26 static postures from American Sign Language; for each posture, 40 images were captured, 20 used for training and 20 for testing. The number
of histograms in the system varied from 8 to 36 in increments of two, with varying histogram resolutions.
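A hypothetical sketch of a boundary chord-size histogram follows; the contour sampling, chord offset, and bin count are invented for illustration and not taken from [38].

```python
import math

def chord_histogram(boundary, offset=4, bins=4, max_len=None):
    """For each boundary point, measure the chord to the point `offset`
    steps ahead along the contour, then histogram the lengths. Chord
    lengths do not change when the contour rotates, which is what
    gives the feature its rotation invariance."""
    n = len(boundary)
    chords = []
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + offset) % n]    # wrap around the closed contour
        chords.append(math.hypot(x1 - x0, y1 - y0))
    top = max_len or max(chords)
    hist = [0] * bins
    for c in chords:
        hist[min(int(c / top * bins), bins - 1)] += 1
    return hist

# A square contour sampled at 8 points (corners and edge midpoints).
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(chord_histogram(square, offset=2))
```

A series of such histograms, computed per radial region as described above, forms the feature vector passed to the MLP or DP classifier.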

5. Conclusion

This paper surveyed the techniques used in gesture recognition systems. The main techniques studied were Artificial Neural Networks (ANN), histogram-based features, a fuzzy clustering algorithm, the Hidden Markov Model (HMM), the Condensation algorithm, and the Finite-State Machine (FSM). Most researchers use color images to achieve better results. The main goal of hand gesture recognition systems is to build effective interaction between human and machine. There are many applications in this field, such as robot control, sign language recognition, and virtual reality.
Two main questions must be answered when using hand postures and gestures. The first is which technology is used to collect raw data from the hand. In general, three types of techniques are available to collect this raw data: the first is a glove input device, which measures a number of joint angles in the hand; the second is computer vision; and colored markers are the third approach. In a vision-based solution, one or more cameras are placed in an environment that records hand motion.
The second question is which recognition technique will work with maximum accuracy and robustness. There are a number of interesting areas for future work in hand posture and gesture recognition. The field is not yet mature, and there is a long way to go before this kind of interaction metaphor is strong enough to appear in commercial and mainstream applications. It is important to find better data collection devices; better angle-bending sensors, tracking systems, and faster processors will benefit the area dramatically.

6. References

[1] Krueger, M. W., Gionfriddo, T., & Hinrichsen, K. (1985, April). VIDEOPLACE—an artificial
reality. In ACM SIGCHI Bulletin (Vol. 16, No. 4, pp. 35-40). ACM.
[2] Wu, Y., & Huang, T. S. (2001). Hand modeling, analysis and recognition. IEEE Signal Processing
Magazine, 18(3), 51-60.
[3] Wu, Y., & Huang, T. S. (1999, March). Vision-based gesture recognition: A review. In
International Gesture Workshop (Vol.1739, pp. 103-115). Springer, Berlin, Heidelberg.
[4] Chang, C. C., Chen, J. J., Tai, W. K., & Han, C. C. (2006). New approach for static gesture
recognition. Journal of information science and engineering, 22(5), 1047-1057.
[5] Lamar, M. V., Bhuiyan, M. S., & Iwata, A. (2000, May). T-CombNET-A Neural Network
Dedicated to Hand Gesture Recognition. In International Workshop on Biologically Motivated
Computer Vision (Vol.1811, pp. 613-622). Springer, Berlin, Heidelberg.
[6] Chen, L., Wang, F., Deng, H., & Ji, K. (2013, December). A survey on hand gesture recognition.
In Computer Sciences and Applications (CSA), 2013 International Conference on (pp. 313-
316). IEEE.
[7] Bilal, S., Akmeliawati, R., El Salami, M. J., & Shafie, A. A. (2011, May). Vision-based hand
posture detection and recognition for Sign Language—A study. In Mechatronics (ICOM), 2011
4th International Conference On (pp. 1-6). IEEE.
[8] Murthy, G. R. S., & Jadon, R. S. (2009). A review of vision based hand gestures recognition.
International Journal of Information Technology and Knowledge Management, 2(2), 405-410.
[9] Garg, P., Aggarwal, N., & Sofat, S. (2009). Vision based hand gesture recognition. World
Academy of Science, Engineering and Technology, 49(1), 972-977.
[10] Schlenzig, J., Hunter, E., & Jain, R. (1995). Recursive spatio-temporal analysis: Understanding
gestures. Technical Report VCL-95-109, Visual Computing Laboratory, University of
California, San Diego.
[11] Dipietro, L., Sabatini, A. M., & Dario, P. (2008). A survey of glove-based systems and their
applications. IEEE Trans. Systems, Man, and Cybernetics, Part C, 38(4), 461-482.
[12] LaViola, J. (1999). A survey of hand posture and gesture recognition techniques and technology,


Master Thesis, Brown University, Providence, RI, 29.


[13] Hasan, M. M., & Mishra, P. K. (2012). Hand gesture modeling and recognition using geometric
features: a review. Canadian Journal on Image Processing and Computer Vision, 3(1), 12-26.
[14] Lamberti, L., & Camastra, F. (2011, September). Real-time hand gesture recognition using a color
glove. In International Conference on Image Analysis and Processing (pp. 365-373). Springer,
Berlin, Heidelberg.
[15] Mitra, S., & Acharya, T. (2007). Gesture recognition: A survey. IEEE Transactions on Systems,
Man, and Cybernetics, Part C (Applications and Reviews), 37(3), 311-324.
[16] Dan, R. B., & Mohod, P. S. (2014). Survey on hand gesture recognition approaches. structure, 15,
17.
[17] Starner, T., & Pentland, A. (1996). Motion-Based Recognition, chapter Real-Time American Sign
Language Recognition from Video Using Hidden Markov Models. Computational Imaging and
Vision Series.
[18] Khan, R. Z., & Ibraheem, N. A. (2012). Survey on gesture recognition for hand image postures.
Computer and information science, 5(3), 110.
[19] Cheng, H., Yang, L., & Liu, Z. (2016). Survey on 3D Hand Gesture Recognition. IEEE Trans.
Circuits Syst. Video Techn., 26(9), 1659-1673.
[20] Tahir, M., Madani, T. M., Ziauddin, S., Awan, M. A., & Rana, W. H. (2014). Wiimote squash:
comparing DTW and WFM techniques for 3D gesture recognition. Int. Arab J. Inf. Technol.,
11(4), 362-369.
[21] Kaâniche, M. B. (2009). Human gesture recognition. PowerPoint slides.
[22] Pavlovic, V. I., Sharma, R., & Huang, T. S. (1997). Visual interpretation of hand gestures for
human-computer interaction: A review. IEEE Transactions on Pattern Analysis & Machine
Intelligence, (7), 677-695.
[23] Suarez, J., & Murphy, R. R. (2012, September). Hand gesture recognition with depth images: A
review. In Ro-man, 2012 IEEE (pp. 411-417). IEEE.
[24] Stergiopoulou, E., & Papamarkos, N. (2009). Hand gesture recognition using a neural network
shape fitting technique. Engineering Applications of Artificial Intelligence, 22(8), 1141-1158.
[25] Maung, T. H. H. (2009). Real-time hand tracking and gesture recognition system using neural
networks. World Academy of Science, Engineering and Technology, 50, 466-470.
[26] Maraqa, M., & Abu-Zaiter, R. (2008, August). Recognition of Arabic Sign Language (ArSL)
using recurrent neural networks. In Applications of Digital Information and Web Technologies,
2008. ICADIWT 2008. First International Conference on the (pp. 478-481). IEEE.
[27] Hasan, M. M., & Misra, P. K. (2011, June). Gesture recognition using modified HSV
segmentation. In Communication Systems and Network Technologies (CSNT), 2011
International Conference on (pp. 328-332). IEEE.
[28] Mitra, S., & Acharya, T. (2007). Gesture recognition: A survey. IEEE Transactions on Systems,
Man, and Cybernetics, Part C (Applications and Reviews), 37(3), 311-324.
[29] Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech
recognition. Proceedings of the IEEE, 77(2), 257-286.
[30] Keskin, C., Erkan, A., & Akarun, L. (2003). Real time hand tracking and 3d gesture recognition
for interactive interfaces using hmm. ICANN/ICONIPP, 2003, 26-29.
[31] Rubine, D. (1991). Specifying gestures by example (Vol. 25, No. 4, pp. 329-337). ACM.
[32] Black, M. J., & Jepson, A. D. (1998, June). A probabilistic framework for matching temporal
trajectories: Condensation-based recognition of gestures and expressions. In European
conference on computer vision (pp. 909-924). Springer, Berlin, Heidelberg.
[33] Davis, J., & Shah, M. (1994). Visual gesture recognition. IEE Proceedings-Vision, Image and
Signal Processing, 141(2), 101-106.
[34] Bezdek, J. C., Ehrlich, R., & Full, W. (1984). FCM: The fuzzy c-means clustering algorithm.
Computers & Geosciences, 10(2-3), 191-203.


[35] Li, X. (2003). Gesture recognition based on fuzzy C-Means clustering algorithm. Department of
Computer Science. The University of Tennessee Knoxville.
[36] Freeman, W. T., & Roth, M. (1995, June). Orientation histograms for hand gesture recognition.
In International workshop on automatic face and gesture recognition (Vol. 12, pp. 296-301).
[37] Zhou, H., Lin, D. J., & Huang, T. S. (2004, June). Static hand gesture recognition based on local
orientation histogram feature distribution model. In Computer Vision and Pattern Recognition
Workshop, 2004. CVPRW'04. Conference on (pp. 161-161). IEEE.
[38] Wysoski, S. G., Lamar, M. V., Kuroyanagi, S., & Iwata, A. (2002, November). A
rotation invariant approach on static-gesture recognition using boundary histograms and neural
networks. In Neural Information Processing, 2002. ICONIP'02. Proceedings of the 9th
International Conference on (Vol. 4, pp. 2137-2141). IEEE.
