Gesture Volume Control Project Synopsis

This document summarizes a student project on gesture control using computer vision. The project aims to develop a system to control volume using hand gestures detected by a camera. The system will detect hand landmarks, calculate the distance between thumb and index finger, and map that distance to volume levels. The project will use MediaPipe, TensorFlow, OpenCV and Python. Future work may improve the robustness of gesture detection and recognition in different environments and expand the number of recognized gestures to control additional applications.


DELHI TECHNICAL CAMPUS 

(Affiliated to Guru Gobind Singh Indraprastha University, New Delhi) 


Greater Noida 
BACHELOR OF TECHNOLOGY CSE 
Synopsis of Project

Project Title: GESTURE CONTROL USING COMPUTER VISION


Project Guide: Mr. Nitesh Bhati
Project Team:
Programme: BTech (CSE)          Year/Semester: 4th Year / 7th Semester
S. No.   Enrolment No.   Name                 Signature
1.       42518002718     Nitish Kumar Jha
2.       40918002718     Md. Shahreyar Arif
3.       40718002718     Vashu Dev Malik

Abstract
Hand gesture recognition is very significant for human-computer interaction. In
this work, we present a real-time method for hand gesture recognition. In our
framework, the hand region is extracted from the background with the background
subtraction method. Then, the palm and fingers are segmented so as to detect and
recognize the fingers. Finally, a rule classifier is applied to predict the labels of
hand gestures. The experiments on the data set of images show that our method
performs well and is highly efficient. Moreover, our method shows better
performance than a state-of-the-art method on another data set of hand gestures.
Introduction
The Volume Control with Hand Detection project was developed using Python and
OpenCV. In this project we build a volume controller that changes the volume
of a computer with hand gestures. We first look into hand tracking and then
use the hand landmarks to find the gesture of the hand and change the volume.
The project is module based, which means we use a hand-tracking module that
makes hand tracking very easy and lets us control the system volume.

Gesture is a symbol of physical behaviour or emotional expression. It includes
body gesture and hand gesture. It falls into two categories: static gesture and
dynamic gesture. For the former, the posture of the body or the gesture of the hand
denotes a sign. For the latter, the movement of the body or the hand conveys some
messages. Gesture can be used as a tool of communication between human and
computer. It differs greatly from traditional hardware-based input methods and
can accomplish human-computer interaction through gesture recognition. Gesture
recognition determines the user intent through the recognition of the gesture or
movement of the body or body parts. In the past decades, many researchers have
strived to improve the hand gesture recognition technology. Hand gesture
recognition has great value in many applications such as sign language recognition,
augmented reality (virtual reality), sign language interpreters for the disabled, and
robot control.

Earlier work extracts the hand region from input images and then tracks and
analyses the moving path to recognize American Sign Language. Shimada et al.
propose a TV control interface using hand gesture recognition. Kestin divides
the hand into 21 different regions and trains an SVM classifier to model the
joint distribution of these regions for various hand gestures so as to
classify them. Zeng improves medical service through hand gesture recognition:
the HCI recognition system of an intelligent wheelchair includes five hand
gestures and three compound states, and performs reliably in both indoor and
outdoor environments and under changing lighting conditions.

Aim and objective


The objective of this project is to develop a gesture volume control system.
Expected achievements in order to fulfil the objectives are:
• To detect the hand segment from the video frame. 
• To detect the finger segment from the video frame.
• To extract the useful features from the hand and finger detected.
• To change the volume of system. 

Literature Survey
Hand gesture recognition systems have received great attention in recent
years because of their manifold applications and the ability to interact with
machines efficiently through human-computer interaction.
Different technologies have been implemented for hand-based recognition
systems, and a few of them have shown good results. The ultimate goal is to
control systems such as smartphones and air conditioners using these
techniques. The vision-based approach involves using cameras which capture
images of the hands and decode them to perform instructions. Either a single
camera or multiple cameras can be used. Several complex algorithms are applied
to perform feature extraction, and the extracted features are used to train
classifiers using several machine learning strategies.
One such system is based on three steps. The first is identifying the hand
region in the image. The second is feature extraction, which comprises finding
the centroid and major axis of the magenta (palm) region, followed by finding
the five centroids of the cyan and yellow (finger) regions. Then, for each of
the five finger regions, the angle between the major axis and the line
connecting the palm centroid to the finger centroid is computed. The four
angles and five centroids form a nine-dimensional feature vector. The third
step involves building a classifier using learning vector quantization.
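The angle feature described above can be sketched as follows; the centroid coordinates and axis angle used here are hypothetical placeholders, not values from the actual system:

```python
import math

# Compute the angle between the line connecting the palm centroid to a
# finger centroid and the major axis of the hand region, as used in the
# nine-dimensional feature vector described above. All coordinates are
# illustrative placeholders.
def finger_angle(palm_centroid, finger_centroid, major_axis_angle_deg):
    """Return the angle (degrees, 0-360) between the palm->finger line
    and the major axis of the hand region."""
    dx = finger_centroid[0] - palm_centroid[0]
    dy = finger_centroid[1] - palm_centroid[1]
    line_angle = math.degrees(math.atan2(dy, dx))
    return (line_angle - major_axis_angle_deg) % 360.0
```

Four such angles, together with the five finger centroids, would populate the feature vector fed to the learning vector quantization classifier.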

The Overview of the Method


The overview of the hand gesture recognition is described in Figure 1. First, the
hand is detected using the background subtraction method and the result of hand
detection is transformed to a binary image. Then, the fingers and palm are
segmented so as to facilitate finger recognition. The fingers are then
detected and recognized. Finally, hand gestures are recognized using a simple
rule classifier.
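The background-subtraction step of this pipeline can be sketched as a simple frame difference; the threshold value is an assumption, and a production system (e.g. one of OpenCV's background subtractors) would be more robust:

```python
import numpy as np

# Background subtraction: pixels that differ from a reference background
# frame by more than a threshold are marked as foreground (the hand),
# producing the binary image used for segmentation.
def subtract_background(frame, background, threshold=30):
    """Return a 0/255 binary mask of pixels that changed vs. the background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```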
Implementation
Step 1. Detect hand landmarks.
Step 2. Calculate the distance between the thumb tip and the index finger tip.
Step 3. Map the distance between the thumb tip and the index finger tip to the
volume range. In our case, the distance between the thumb tip and index finger
tip was within the range of 50–300 pixels, and the volume range was 0.0–100.0.
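Steps 2 and 3 can be sketched as below; the landmark coordinates are placeholders, and in practice they would come from a hand-tracking module such as MediaPipe Hands:

```python
import math
import numpy as np

# Step 2: Euclidean distance between thumb tip and index finger tip.
def fingertip_distance(thumb_tip, index_tip):
    """Distance in pixels between two (x, y) landmark coordinates."""
    return math.hypot(index_tip[0] - thumb_tip[0], index_tip[1] - thumb_tip[1])

# Step 3: linearly map the 50-300 pixel range onto a 0-100 volume level.
# np.interp clamps distances outside the range to the endpoints.
def distance_to_volume(distance, d_min=50, d_max=300):
    """Map a fingertip distance to a volume percentage."""
    return float(np.interp(distance, [d_min, d_max], [0.0, 100.0]))
```

With this mapping, a fully pinched hand (distance at or below 50 pixels) gives volume 0, and a wide-open pinch (300 pixels or more) gives volume 100.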

Software requirements:
• Python
• Jupyter Notebook
• TensorFlow
• OpenCV
• MediaPipe

Data flow Diagram:


Data flow diagrams are commonly used during problem analysis. Data flow
diagrams are quite general and not limited to problem analysis for software
requirement specification. A DFD shows the flow of data through a system. It
views a system as a function that transforms the inputs into desired outputs.
A complex system does not perform this transformation in a single step; data
will typically undergo a series of transformations before it becomes an
output. The DFD aims to capture the transformations that take place within a
system on the input data so that eventually the output data is produced.
The Dataflow Diagram for GESTURE VOLUME CONTROL:

Future Scope of the project


In this modern world, where technologies are at their peak, there are many
facilities available for providing input to applications running on computer
systems. Some inputs can be given using physical touch, and some without
physical touch (such as speech, hand gestures, head gestures, etc.). Using
hand gestures, users can operate applications from a distance without even
touching them. But there are many applications which cannot yet be controlled
using hand gestures as input. This technique can be very helpful for
physically challenged people because they can define gestures according to
their needs.
Although the present system seems user friendly compared with modern device-
or command-based systems, it is less robust in detection and recognition. We
need to improve the system and build a more robust algorithm for both
recognition and detection, even with a cluttered background and normal
lighting conditions. We also need to extend the system to more classes of
gestures, as we have implemented it for only 6 classes. The system can then be
used to control applications such as PowerPoint presentations, games, media
players, Windows Picture Manager, etc.
References:
1. MIT OpenCourseWare.
2. TensorFlow (tensorflow.org).
3. MediaPipe.

Signature(s) of Project Members Name & Signature of Project Guide


Mr. Nitesh Bhati
