Gesture Volume Control Project Synopsis
Abstract
Hand gesture recognition is central to human-computer interaction. In this
work, we present a real-time method for hand gesture recognition. In our
framework, the hand region is extracted from the background using background
subtraction. The palm and fingers are then segmented so that the individual
fingers can be detected and recognized. Finally, a rule-based classifier predicts
the labels of the hand gestures. Experiments on an image data set show that our
method performs well and is highly efficient. Moreover, it outperforms a
state-of-the-art method on a second hand-gesture data set.
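The background-subtraction step described above can be sketched in a few lines. The following is a minimal, illustrative NumPy-only version assuming a static reference frame and a fixed difference threshold (the threshold value of 25 is an assumed tuning parameter, not one specified in this synopsis; OpenCV offers more robust subtractors for real footage):

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Return a binary foreground mask via absolute frame differencing.

    frame, background: 2-D uint8 grayscale arrays of equal shape.
    threshold: pixel-difference cutoff (an assumed tuning value).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # 1 marks foreground (the hand), 0 marks background.
    return (diff > threshold).astype(np.uint8)

# Toy example: a bright "hand" patch against a dark background.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200
mask = subtract_background(frame, background)
```

In practice the mask would then be cleaned with morphological operations before the palm and fingers are segmented from it.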
Introduction
The Volume Control with Hand Detection project was developed in Python with
OpenCV. It builds a volume controller that changes a computer's volume from
hand gestures. We first perform hand tracking and then use the detected hand
landmarks to interpret the hand's gesture and adjust the volume accordingly.
The project is module based: a dedicated hand-tracking module makes detecting
the hand and controlling the system volume straightforward.
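The gesture-to-volume mapping used by such controllers is typically the distance between the thumb tip and index fingertip (landmarks 4 and 8 in MediaPipe's hand model), interpolated onto a volume range. A hedged sketch of that mapping follows; the `d_min`/`d_max` pinch and spread distances are assumed values that would need tuning per camera setup:

```python
import math
import numpy as np

def distance_to_volume(thumb_tip, index_tip, d_min=0.03, d_max=0.25):
    """Map the thumb-index fingertip distance to a 0-100 volume percentage.

    thumb_tip, index_tip: (x, y) coordinates normalised to [0, 1], as a
    hand-tracking module such as MediaPipe reports its landmarks.
    d_min, d_max: assumed pinch/spread distances (illustrative defaults).
    """
    d = math.hypot(index_tip[0] - thumb_tip[0], index_tip[1] - thumb_tip[1])
    # np.interp clips below d_min to 0 and above d_max to 100.
    return float(np.interp(d, [d_min, d_max], [0, 100]))

pinched = distance_to_volume((0.5, 0.5), (0.5, 0.5))   # fingers together
spread = distance_to_volume((0.0, 0.5), (0.5, 0.5))    # fingers far apart
```

The resulting percentage would then be passed to a platform audio API (for example pycaw on Windows) to set the actual system volume.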
One line of prior work extracts the hand region from input images and then
tracks and analyses its motion path to recognize American Sign Language.
Shimada et al. propose a TV control interface based on hand gesture
recognition. Kestin divides the hand into 21 different regions and trains an
SVM classifier to model the joint distribution of these regions for various
hand gestures so as to classify them. Zeng improves medical service through
hand gesture recognition: the HCI recognition system of an intelligent
wheelchair covers five hand gestures and three compound states, and performs
reliably both indoors and outdoors under changing lighting conditions.
Literature Survey
Hand gesture recognition systems have received great attention in recent years
because of their manifold applications and their ability to support efficient
human-computer interaction.
Different technologies have been implemented for hand-based recognition, and a
few have shown good results. The ultimate goal is to control systems such as
smartphones and air conditioners using these techniques. The vision-based
approach uses one or more cameras to capture images of the hands and decode
them into instructions. Complex algorithms then perform feature extraction,
and the extracted features are used to train a classifier with various
machine-learning strategies.
The system is based on three steps. The first identifies the hand region in
the image. The second performs feature extraction: it finds the centroid and
major axis of the magenta (palm) region, then finds the five centroids of the
cyan and yellow regions indicating the fingers. For each of the five regions,
the angle between the palm's major axis and the line connecting the palm
centroid to that finger's centroid is computed. The four angles and five
centroids form the nine-dimensional feature vector. The third step builds a
classifier using learning vector quantization.
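The per-finger angle computation in the second step can be sketched as follows. This is a minimal interpretation of the description above; the function names, the coordinate convention, and the choice of degrees are all assumptions, since the synopsis does not fix an exact API:

```python
import math

def finger_angle(palm_centroid, finger_centroid, major_axis_deg):
    """Angle (degrees) between the palm-to-finger line and the palm's major axis.

    palm_centroid, finger_centroid: (x, y) pixel coordinates.
    major_axis_deg: orientation of the segmented palm region's major axis.
    """
    dx = finger_centroid[0] - palm_centroid[0]
    dy = finger_centroid[1] - palm_centroid[1]
    line_deg = math.degrees(math.atan2(dy, dx))
    # Fold into [0, 180) since a line's orientation is direction-free.
    return abs(line_deg - major_axis_deg) % 180.0

def feature_vector(palm_centroid, finger_centroids, major_axis_deg):
    """Collect the per-finger angles that feed the LVQ classifier."""
    return [finger_angle(palm_centroid, c, major_axis_deg)
            for c in finger_centroids]

# Example: palm at the origin, major axis horizontal (0 degrees),
# three finger centroids at 0, 45 and 90 degrees from the axis.
angles = feature_vector((0, 0), [(1, 0), (1, 1), (0, 1)], 0.0)
```

The finger-region centroids themselves would be appended to these angles to complete the feature vector passed to the learning-vector-quantization classifier.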
Software requirements:
• Jupyter Notebook
• TensorFlow
• OpenCV