
International Journal of Research Publication and Reviews, Vol 3, no 6, pp 1236-1241, June 2022

Journal homepage: www.ijrpr.com    ISSN 2582-7421

GESTURE CONTROL USING PYTHON

Moitreyee Bose, Partha Maji, Pamela Ghosh, Avisekh Bowri, Sugata Dutta, Sudip Mondal, Dr. Saswata Chakraborty, Dr. Kuntal Ghosh
Department of Electronics and Communication Engineering, Asansol Engineering College, Vivekananda Sarani, Kanyapur, Asansol, West Bengal – 713305

ABSTRACT:

Hand gesture recognition is a system that detects hand gestures in real-time video. The gesture can be restricted to a certain area of interest. Designing a hand gesture recognition system involves two main problems: the first is detecting the hand itself, and the second is defining a set of signs that can be performed with one hand at a time. This project focuses on how a system can detect, recognize and interpret hand gestures through computer vision despite challenges such as changes in pose, orientation, location and scale. Different types of gestures, such as numbers and sign-language signs, need to be supported for the system to perform well. Each image taken from the real-time video is analyzed to detect the hand gesture before further image processing is done. In this project, hand detection is performed using a Region of Interest (ROI) approach implemented in Python. The discussion of results focuses on the simulation, since the only difference for a hardware implementation is the source code that reads the real-time input video. Hand gesture recognition using Python and OpenCV can be implemented by applying hand segmentation and hand detection techniques.

Introduction:

Hand gesture recognition is a system that can detect the gesture of a hand in a real-time video. Designing such a system is the central objective of this project, which we built with our own efforts using Python and OpenCV. Recognizing hand gestures is one of the main and important problems in computer vision. With the latest advances in human interaction systems, hand processing tasks such as hand detection and hand gesture recognition have evolved rapidly. Through this project we also came to appreciate the importance of teamwork and of devotion to the work.

Motivation of the Project

Unlike common security measures such as passwords and security cards, which can easily be lost, copied or stolen, biometric features are unique to individuals, and there is little possibility that they can be replaced or altered. Within the biometric sector, hand gesture recognition is gaining more and more attention because of the demand for security in law enforcement agencies as well as in private-sector surveillance systems. Hand gestures are also important for intelligent human-computer interaction: to build fully automated systems that analyze the information contained in images, fast and efficient hand gesture recognition algorithms are required.

Basic description of the Project

We implemented the main software in a Python environment using the OpenCV library together with the MediaPipe, AutoPy and NumPy modules. There is no performance overhead at runtime; as a result, we obtained real-time tracking as fast as we needed, even though it is harder to implement.

First, the user places a hand in front of the camera and runs the application. Through image acquisition, the video stream is read frame by frame from the camera and each frame is analyzed. Next, the image processing and hand detection steps record the gesture and convert the frame to grayscale. The computer then uses the binary image produced by image processing, and finally different events are triggered using a machine learning technique.

1.2 Literature Review

Hand gesture recognition research falls into three categories. The first, "glove-based analysis", attaches mechanical or optical sensors to gloves to transduce finger flexion into electrical signals for hand posture determination, with an additional sensor for the position of the hand. This positioning sensor is usually acoustic or magnetic and is attached to the glove. Some applications provide a look-up-table software toolkit to recognize hand postures.

The second approach is "vision-based analysis", modeled on how human beings get information from their surroundings; it is probably the most difficult approach to employ satisfactorily. Many different implementations have been tested so far. One is to deploy 3-D models of the human hand: several cameras capture the model to determine parameters, matching images of the hand, palm orientation and joint angles in order to perform hand gesture classification.

The third implementation is "analysis of drawing gestures", using a stylus as an input device; these drawing analyses lead to recognition of written text. Mechanical sensing has been used extensively for hand gesture recognition, both for direct and for virtual-environment manipulation, but mechanically sensing hand posture has many problems, such as electromagnetic noise, reliability and accuracy. Visual sensing can make gesture interaction practical, but it is among the most difficult problems for machines. Full American Sign Language recognition systems (words, phrases) incorporate data gloves; one data-glove-based system could recognize 34 of the 46 Japanese gestures (user dependent) using a joint-angle and hand-orientation coding technique.

1.3 Languages Used

Python

Python is an interpreted, high-level, general-purpose programming language, created by Guido van Rossum and first released in 1991. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented, and functional programming.

OpenCV

OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license.

AutoPy

AutoPy is a simple, cross-platform GUI automation library for Python. It includes functions for controlling the keyboard and mouse, finding colors and bitmaps on screen, and displaying alerts.

NumPy

NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object and tools for working with these arrays.

MediaPipe

MediaPipe is a framework for building multimodal (e.g., video, audio, any time-series data), cross-platform (Android, iOS, web, edge devices) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
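MediaPipe Hands reports each hand landmark in normalized [0, 1] coordinates, so a frame-size conversion is needed before the positions can drive the mouse or a distance measurement. A minimal sketch of that conversion (the helper name and the example point are illustrative, not the project's actual code):

```python
def to_pixel_coords(landmarks, frame_width, frame_height):
    """Convert MediaPipe-style normalized (x, y) landmarks to pixel coordinates.

    `landmarks` is assumed to be an iterable of (x, y) pairs in [0, 1],
    as MediaPipe Hands produces; the name is illustrative.
    """
    return [
        (int(x * frame_width), int(y * frame_height))
        for x, y in landmarks
    ]

# Example: a fingertip reported at the center of a 640x480 frame.
points = to_pixel_coords([(0.5, 0.5)], 640, 480)
print(points)  # [(320, 240)]
```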

1.4 Project Work Description


The project's implementation is as follows:

First, the user places a hand in front of the camera and runs the application.

1.4.1 Image Acquisition

Read the video stream frame by frame from the camera, continuously grabbing each frame and analyzing it.
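The acquisition step above can be sketched as a standard OpenCV capture loop. This is a sketch under assumptions, not the project's actual source: `process_frame` is a hypothetical callback, and OpenCV is imported inside the loop function so the timing helper remains usable without it installed.

```python
def frame_period_ms(fps):
    """Milliseconds to wait between frames for a target frame rate."""
    return int(round(1000.0 / fps))

def capture_loop(process_frame, fps=30):
    """Read the webcam stream frame by frame, handing each frame to
    `process_frame` (a hypothetical per-frame analysis callback)."""
    import cv2  # OpenCV: pip install opencv-python
    cap = cv2.VideoCapture(0)       # 0 = default camera
    try:
        while True:
            ok, frame = cap.read()  # grab one frame
            if not ok:
                break
            process_frame(frame)
            # waitKey services the GUI event loop; Esc (key code 27) exits
            if cv2.waitKey(frame_period_ms(fps)) == 27:
                break
    finally:
        cap.release()
```

At 30 fps, `frame_period_ms(30)` gives roughly a 33 ms budget per frame, which matches the webcam frame rate assumed later in the Testing section.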

1.4.2 Image processing and Hand detection

To read the gesture done by the user and convert the image to grayscale and smoothen the photo. Henceforth, threshold will be applied by the
compiler and the hand will be enclosed by contours. Then the ROI (Region of Interest) the image that is captured using a web camera will be
processed in a region called Region of Interest (ROI) where it acts as a region of wanted area while ignoring the outside region, called
background.
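In the project these steps map to OpenCV calls such as `cv2.cvtColor`, `cv2.GaussianBlur` and `cv2.threshold`; the thresholding and ROI ideas themselves can be illustrated with plain NumPy on a tiny synthetic frame (all values here are invented for illustration):

```python
import numpy as np

# A synthetic 6x6 grayscale frame: bright "hand" pixels on a dark background.
frame = np.zeros((6, 6), dtype=np.uint8)
frame[1:4, 1:4] = 200          # the "hand" region

# Thresholding: pixels above the threshold become foreground (255),
# everything else becomes background (0) -- a binary image.
threshold = 127
binary = np.where(frame > threshold, 255, 0).astype(np.uint8)

# Region of Interest: keep only a sub-window; everything outside it
# (the background) is simply ignored by later stages.
roi = binary[0:4, 0:4]

print(int(roi.sum() // 255))   # number of foreground pixels inside the ROI
```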

1.4.3 Hand Gesture Recognition

Now that the image is processed and the hand is detected, it is ready to be recognized. The computer uses the binary image resulting from image processing to recognize the gesture made by the user. Image subtraction is used to compare the gesture made by the user with a set of saved images (the training set).

This approach worked perfectly when the hand was oriented straight up without deviations; for deviated gestures, it failed.

For the program to detect the hand continuously at all times, especially when the hand is static or not moving around, the program needs to learn the hand image. This is where machine learning comes in. In this project, a machine learning technique was used to track the hand gesture before the image processing is done.
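The image-subtraction comparison described above can be sketched as follows: the candidate binary image is subtracted from each saved template, and the template with the smallest total pixel difference wins. The templates, labels and sizes here are invented for illustration:

```python
import numpy as np

def recognize(binary_image, training_set):
    """Return the label of the template closest to `binary_image` by
    summed absolute pixel difference (a simple image-subtraction match)."""
    best_label, best_score = None, None
    for label, template in training_set.items():
        # Image subtraction: sum of absolute per-pixel differences.
        diff = np.abs(binary_image.astype(int) - template.astype(int)).sum()
        if best_score is None or diff < best_score:
            best_label, best_score = label, diff
    return best_label

# Tiny 2x2 "gestures" as binary images (255 = hand pixel).
fist = np.array([[255, 255], [255, 255]], dtype=np.uint8)
point = np.array([[255, 0], [0, 0]], dtype=np.uint8)
training_set = {"fist": fist, "point": point}

print(recognize(np.array([[255, 255], [255, 0]], dtype=np.uint8), training_set))
```

As the section notes, such template matching is sensitive to hand orientation, which is why the project falls back on a learned hand model for deviated poses.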

1.4.4 Event Generation

i) The distance between the index finger and thumb controls the volume: the greater the distance, the higher the volume.

ii) The movement of the mouse is controlled by the index finger, and when the index and middle fingers are joined together, a double-click is performed.
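Both events reduce to a fingertip-distance computation on the detected landmark positions. A minimal sketch, assuming invented pixel ranges (the real pinch limits depend on camera distance and are not given in the paper):

```python
import math

def fingertip_distance(p1, p2):
    """Euclidean distance between two fingertip pixel positions."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def distance_to_volume(dist, d_min=20, d_max=200):
    """Map the thumb-index distance to a 0-100 volume, clamped at the ends.
    d_min/d_max are illustrative pinch limits in pixels."""
    dist = max(d_min, min(d_max, dist))
    return round(100 * (dist - d_min) / (d_max - d_min))

def is_double_click(index_tip, middle_tip, join_threshold=30):
    """Index and middle fingertips closer than the (assumed) threshold
    count as 'joined' and trigger a double-click."""
    return fingertip_distance(index_tip, middle_tip) < join_threshold

print(distance_to_volume(20))    # fingers joined -> minimum volume
print(distance_to_volume(200))   # maximum spread -> maximum volume
print(is_double_click((100, 100), (110, 100)))
```

In the project, the resulting volume and click events would be dispatched to the operating system (e.g., via AutoPy for the mouse); that glue code is omitted here.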

1.5 Testing

Test methodology

We propose a vision-based approach to accomplish the task of hand gesture detection. Hand gesture recognition with any machine learning technique suffers from the variability problem. To reduce the variability in the hand recognition task, we make the following assumptions:
 Single color camera mounted above a neutral-colored desk.
 Users will interact by gesturing in view of the camera.
 Training is a must.
 The hand will not be rotated while the image is being captured.

 Webcam features:
o Resolution: 640*480
o Video frame rate: 30 fps @ 640*480
o Pixel depth: minimum 1.3 megapixels
o Connection port: USB/inbuilt

 Test Result

With the index finger, we can hover the mouse. (Figure: Mouse Hover)

To deselect the mouse, spread the index finger and middle finger. (Figure: Mouse Deselect)

For a double click, join the index finger and middle finger. (Figure: Mouse Select)

The volume is minimum when the index finger and thumb are joined. (Figure: Min. Volume)

The volume is maximum when the distance between the index finger and thumb is maximum. (Figure: Max. Volume)

CONCLUSION:

In this work, we propose a methodology to establish a complete system for detecting, recognizing and interpreting hand gestures through computer vision techniques. The technique is implemented within the framework of Python and the OpenCV tool. Under this framework, the methodology is capable of recognizing dynamic hand movements at up to ~30 frames per second. The training dataset contains several partial images of hands, which makes the methodology more robust at recognizing hand movement from a minimal number of hand features. However, the technique fails to detect hand gestures and finger positions if the background surface has the same intensity as the hand or under low-illumination conditions. Despite this limitation, the computer vision technique successfully interprets sign languages using hands and fingers under dynamic conditions.
