VISVESVARAYA TECHNOLOGICAL UNIVERSITY
Jnana Sangama, Belagavi - 590014
A
CGIP Mini Project Report
On
“ZOOM ANY PICTURE USING HAND GESTURE IN
OPENCV”
Submitted By:
MD ADIL
(USN: 3BK21CS023)
Academic Year-2023-24
BET’s
BASAVAKALYAN ENGINEERING COLLEGE
BASAVAKALYAN-585327
Certificate
This is to certify that the project work entitled "ZOOM ANY PICTURE
USING HAND GESTURE" has been carried out by MD ADIL (USN: 3BK21CS023),
a student of VI semester (CBCS) B.E. (Computer Science & Engineering), in partial
fulfillment of the CGIP Lab with Mini Project (21CSL55) prescribed by
Visvesvaraya Technological University, Belagavi, during the academic year 2023-24.
1)-----------------
2)-----------------
ACKNOWLEDGMENT
At this pleasing moment of having successfully completed our project, we wish to convey our
sincere thanks and gratitude to our esteemed institute, BASAVAKALYAN ENGINEERING
COLLEGE, BASAVAKALYAN.
First and foremost, our sincere thanks to our Principal, Dr. Ashok Kumar Vangeri, for
permitting us to carry out our project and for offering adequate time to complete it.
We are also grateful to the Head of the Department of Computer Science & Engineering,
Prof. Suvarnalata Hiremath, for her constructive suggestions and encouragement during our project.
We wish to place our grateful thanks to our project guide, Prof. Kirti Rani, without whose help
and guidance it would not have been possible to complete this project.
We express our heartfelt thanks to all the staff members of our department who helped us,
directly and indirectly, to complete the project within the scheduled period.
Last but not least, we would like to thank our friends and family members for their constant
support and encouragement throughout.
Project Associates:
MD ADIL (3BK21CS023)
TABLE OF CONTENTS
2 INTRODUCTION TO CGIP
4 INTRODUCTION
6 OBJECTIVES
7 OUTCOMES
8 IMPLEMENTATION CODE
9 SNAPSHOTS / OUTPUT
10 CONCLUSION
REFERENCES
ABSTRACT
In the realm of human-computer interaction, the development of intuitive and
efficient control mechanisms is paramount. This mini-project focuses on
designing a system that zooms any picture using hand gestures with OpenCV,
a widely used open-source computer vision and machine learning software
library. The project aims to leverage hand gesture recognition to manipulate
the zoom level of a displayed image, providing a contactless and user-friendly
interface.
The core of the system involves capturing real-time video input through a
webcam and processing the frames to detect and track hand gestures. Using
OpenCV's image processing techniques, such as background subtraction,
contour detection, and convex hull analysis, the system identifies specific
gestures that correspond to zoom-in, zoom-out, or move commands.
A machine learning model, trained to recognize these gestures, further
enhances the accuracy and robustness of the gesture recognition process. The
detected gestures are then mapped to corresponding zoom control commands,
which are executed to adjust the zoom level of the displayed picture.
INTRODUCTION TO CGIP
CGIP stands for "Computer Graphics and Image Processing." It is a field of
study and research that focuses on the generation, manipulation, and analysis of
visual images and graphical representations using computers. CGIP
encompasses a wide range of techniques and applications.
Software Requirements:
• Operating System
The system can be developed on any major operating system,
including Windows, macOS, or Linux. Ensure that your operating
system is up to date to avoid compatibility issues with libraries
and tools.
• Programming Language
Python is the language used, since the implementation relies on the
OpenCV (cv2) and cvzone libraries, both of which provide Python APIs.
• Development Platform
Any Python-capable editor or IDE (for example, VS Code, PyCharm, or
IDLE) can be used, together with a webcam for live video capture.
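A minimal environment-setup sketch, assuming Python 3 and pip are already installed: the package names below correspond to the libraries imported in the implementation code later in this report (opencv-python provides cv2, cvzone provides HandDetector, and mediapipe is the hand-tracking backend that cvzone relies on).

```shell
# Install the computer-vision stack used by this project.
# opencv-python -> cv2, cvzone -> HandTrackingModule, mediapipe -> tracking backend
pip install opencv-python cvzone mediapipe
```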
Chapter 4
INTRODUCTION
Zooming into a picture using hand gestures combines computer vision and
gesture recognition techniques to enable intuitive interaction with digital
content.
Implementing zooming into a picture using hand gestures not only enhances
user interaction but also showcases the potential of combining advanced
computer vision techniques with everyday human gestures to create compelling
user interfaces. As technology advances, these interactions are becoming more
accessible and integral to various applications across industries.
PROBLEM STATEMENT
Design a system that allows a user to zoom in and out of a picture displayed on
a screen using hand gestures. The system should be able to recognize specific
gestures to control the zoom level of the picture in real-time.
SPECIFICATION
Input: The system should take input from a camera or a sensor capable of
detecting hand gestures.
Gestures: Define specific gestures for zooming in and out, such as spreading
fingers apart for zooming in and pinching fingers together for zooming out.
Zoom Levels: Implement a mechanism to smoothly adjust the zoom level of
the picture based on the detected gestures.
Real-time Feedback: Provide visual feedback to the user to indicate the
current zoom level and the effect of their gestures.
Integration: Ensure compatibility with a display system (e.g., monitor,
projector, or digital screen) to showcase the zoomed picture.
Real-time Interaction: Ensure that the system can perform zooming in real-
time as gestures are recognized.
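The spread-to-zoom-in / pinch-to-zoom-out mapping described in the specification can be sketched as a pure function: the zoom offset grows as the measured hand distance exceeds the distance at which the gesture started. The name zoom_scale and the step parameter are illustrative choices, not part of the project code.

```python
def zoom_scale(start_dist: float, current_dist: float, step: float = 2.0) -> int:
    """Map the change in hand-to-hand distance to a zoom offset in pixels.

    Spreading the hands apart (current_dist > start_dist) yields a positive
    offset (zoom in); pinching them together yields a negative one (zoom out).
    """
    return int((current_dist - start_dist) // step)

# Spreading the hands 100 px further apart zooms in by 50 px:
print(zoom_scale(150.0, 250.0))   # 50
# Pinching them 60 px closer zooms out by 30 px:
print(zoom_scale(150.0, 90.0))    # -30
```

Dividing by a step greater than 1 damps the mapping, which is what makes the zoom feel smooth rather than jumpy as the hands move.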
OUTCOMES
• Success in accurately detecting and classifying zoom-in and zoom-out
gestures. This outcome is crucial for the overall functionality of the
system; high accuracy ensures a seamless user experience.
• Smooth and responsive zooming ensures that users perceive the system as
interactive and reliable.
• The ability to zoom into specific regions of the image effectively without
losing quality. Techniques such as image cropping and resizing should
maintain clarity and detail as the image is zoomed.
• Users find it intuitive to zoom into images using natural hand gestures.
Gesture recognition that aligns with users' expectations and is easy to
perform enhances usability.
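The cropping-and-resizing outcome above can be sketched as a small helper: cropping a centred window and resizing it back to display size is one common way to zoom into a region. The function center_crop and its zoom parameter are hypothetical names for illustration; the resize step would typically use cv2.resize with INTER_LINEAR or INTER_CUBIC interpolation to preserve detail.

```python
import numpy as np

def center_crop(img: np.ndarray, zoom: float) -> np.ndarray:
    """Crop a centred window whose side shrinks as zoom grows.

    zoom = 1.0 returns the full image; zoom = 2.0 keeps the middle quarter.
    The caller would then resize the crop back to display size (e.g. with
    cv2.resize) so the zoomed region fills the screen.
    """
    h, w = img.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]

# A 720x1280 frame zoomed 2x keeps the central 360x640 region:
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(center_crop(frame, 2.0).shape)   # (360, 640, 3)
```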
IMPLEMENTATION CODE
import cv2
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)
cap.set(3, 1280)   # frame width
cap.set(4, 720)    # frame height
detector = HandDetector(detectionCon=0.7)

startDis = None    # hand-to-hand distance when the zoom gesture starts
scale = 0          # zoom offset in pixels
cx, cy = 200, 200  # centre at which the picture is drawn

while True:
    success, img = cap.read()
    if not success:
        break
    hands, img = detector.findHands(img)
    img1 = cv2.imread("one.jpg")

    if len(hands) == 2:
        hand1, hand2 = hands[0], hands[1]
        # Zoom gesture: thumb and index finger raised on both hands
        if detector.fingersUp(hand1) == [1, 1, 0, 0, 0] and \
           detector.fingersUp(hand2) == [1, 1, 0, 0, 0]:
            length, _, _ = detector.findDistance(hand1["center"],
                                                 hand2["center"], img)
            if startDis is None:
                startDis = length          # remember the starting distance
            scale = int((length - startDis) // 2)
            cx, cy = hand1["center"]
    else:
        startDis = None                    # gesture ended; reset

    try:
        h1, w1, _ = img1.shape
        # Keep the new size even so the centred overlay slice is symmetric
        newH, newW = ((h1 + scale) // 2) * 2, ((w1 + scale) // 2) * 2
        img1 = cv2.resize(img1, (newW, newH))
        # Overlay the resized picture centred at (cx, cy)
        img[cy - newH // 2: cy + newH // 2,
            cx - newW // 2: cx + newW // 2] = img1
    except Exception as e:
        print(f"Error: {str(e)}")

    cv2.imshow("Image", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT
CONCLUSION
This mini-project demonstrates that hand gestures captured through an ordinary
webcam can control the zoom level of a picture in real time. Using OpenCV
together with the cvzone hand-tracking module, the system detects two hands,
measures the distance between them, and maps changes in that distance to the
zoom level of the displayed image, providing a contactless and intuitive
user interface.
REFERENCES
• OpenCV — documentation and tutorials
• OpenCV Tutorials
• MediaPipe — GitHub repository