Project Report - AI Virtual Mouse
Prepared By:
Param Panwar
2nd Year CSE
Objective:
Hand gesture recognition plays a key role in human-computer interaction.
Just as biometric authentication has become commonplace on smartphones,
hand gesture recognition offers a modern way for humans and computers to
interact: the system can be controlled simply by showing a hand in front
of the webcam, which makes it usable by all kinds of people. This report
is based on that idea and provides a detailed explanation of the
algorithms and methodology behind the virtual mouse.
Purpose:
As technology advances, more and more interaction becomes virtualized.
Speech recognition, which recognizes and translates spoken language into
text, may replace the keyboard in the future; similarly, eye tracking,
which controls the mouse pointer with eye movement, may replace the
mouse. Gestures can take many forms, such as a hand image, a pixel
image, or any human pose, provided they require little computational
power from the devices that perform the recognition. Companies have
proposed different techniques for gathering the data needed to build
hand gesture recognition models. Some models rely on special devices,
such as data gloves or colored caps, to capture detailed information
about the gesture performed by the user.
Modules Implementation:
1. Check that the required devices (such as the webcam) are working properly.
2. Arrange a collection tape or finger ribbon that fits the fingers.
3. Import the required packages: NumPy, OpenCV, and MediaPipe.
4. Implement the open-gesture operation.
Source Code:
File name – AiVirtualMouseProject.py
import cv2
import numpy as np
import HandTrackingModule as htm
import time
import autopy

# Camera
wCam, hCam = 640, 480
frameR = 100  # Frame Reduction
smoothening = 7

pTime = 0
plocX, plocY = 0, 0
clocX, clocY = 0, 0

cap = cv2.VideoCapture(0)
cap.set(3, wCam)  # CAP_PROP_FRAME_WIDTH
cap.set(4, hCam)  # CAP_PROP_FRAME_HEIGHT
detector = htm.handDetector(maxHands=1)
wScr, hScr = autopy.screen.size()
# print(wScr, hScr)

while True:
    # 1. Find hand Landmarks
    success, img = cap.read()
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img)

    # 2. Get the tip of the index and middle fingers
    if len(lmList) != 0:
        x1, y1 = lmList[8][1:]
        x2, y2 = lmList[12][1:]
        # print(x1, y1, x2, y2)

        # 3. Check which fingers are up
        fingers = detector.fingersUp()

        # 4. Only Index Finger: Moving Mode
        if fingers[1] == 1 and fingers[2] == 0:
            # 5. Convert Coordinates from the camera frame to the screen
            x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))
            y3 = np.interp(y1, (frameR, hCam - frameR), (0, hScr))

            # 6. Smoothen Values so the pointer does not jitter
            clocX = plocX + (x3 - plocX) / smoothening
            clocY = plocY + (y3 - plocY) / smoothening

            # 7. Move Mouse (x is flipped so movement mirrors the hand)
            autopy.mouse.move(wScr - clocX, clocY)
            cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
            plocX, plocY = clocX, clocY

    # Display
    cv2.imshow("Image", img)
    cv2.waitKey(1)
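The `frameR` and `smoothening` parameters declared above drive the core of
the pointer logic: fingertip coordinates from the reduced camera frame are
mapped onto screen coordinates, then damped so the cursor does not jitter.
This can be isolated as a standalone sketch (the screen size values are
examples, not project constants):

```python
import numpy as np

def map_and_smooth(x1, y1, prev, wCam=640, hCam=480,
                   wScr=1920, hScr=1080, frameR=100, smoothening=7):
    """Map fingertip coords from the reduced camera frame to the screen,
    then move only a fraction of the way toward the new target."""
    x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))
    y3 = np.interp(y1, (frameR, hCam - frameR), (0, hScr))
    plocX, plocY = prev
    clocX = plocX + (x3 - plocX) / smoothening
    clocY = plocY + (y3 - plocY) / smoothening
    return clocX, clocY
```

With `smoothening = 7` the cursor covers one seventh of the remaining
distance per frame, trading a little latency for much steadier movement;
`np.interp` also clamps coordinates that fall inside the frame-reduction
border, so the fingertip never has to reach the edge of the camera image.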
File name – HandTrackingModule.py (key excerpts)
import cv2
import mediapipe as mp
import time

class handDetector():
    def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
        self.mpHands = mp.solutions.hands
        self.hands = self.mpHands.Hands(static_image_mode=mode,
                                        max_num_hands=maxHands,
                                        min_detection_confidence=detectionCon,
                                        min_tracking_confidence=trackCon)
        self.mpDraw = mp.solutions.drawing_utils
        self.tipIds = [4, 8, 12, 16, 20]

    def findHands(self, img, draw=True):
        self.results = self.hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        if self.results.multi_hand_landmarks:
            for handLms in self.results.multi_hand_landmarks:
                if draw:
                    self.mpDraw.draw_landmarks(img, handLms,
                                               self.mpHands.HAND_CONNECTIONS)
        return img

    def findPosition(self, img, handNo=0, draw=True):
        # (builds self.lmList and the hand bounding box xmin, ymin, xmax, ymax)
        if draw:
            cv2.rectangle(img, (xmin - 20, ymin - 20),
                          (xmax + 20, ymax + 20), (0, 255, 0), 2)
    def fingersUp(self):
        fingers = []
        # Thumb (compare x of the tip and the joint below it)
        if self.lmList[self.tipIds[0]][1] > self.lmList[self.tipIds[0] - 1][1]:
            fingers.append(1)
        else:
            fingers.append(0)
        # Fingers (compare y of the tip and the PIP joint two points below)
        for id in range(1, 5):
            if self.lmList[self.tipIds[id]][2] < self.lmList[self.tipIds[id] - 2][2]:
                fingers.append(1)
            else:
                fingers.append(0)
        # totalFingers = fingers.count(1)
        return fingers
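The fingersUp() logic can be checked in isolation by re-implementing it as
a plain function over the module's landmark-list format, where each entry
is [id, x, y] for the 21 MediaPipe hand landmarks. The names below are
illustrative, not part of the module:

```python
TIP_IDS = [4, 8, 12, 16, 20]  # MediaPipe fingertip landmark ids

def fingers_up(lm_list):
    """Return a list of five 0/1 flags, one per finger, 1 meaning raised."""
    fingers = []
    # Thumb: tip x beyond the joint below it (a right hand facing the camera).
    fingers.append(1 if lm_list[TIP_IDS[0]][1] > lm_list[TIP_IDS[0] - 1][1] else 0)
    # Other fingers: tip y above (numerically smaller than) the PIP joint y.
    for i in range(1, 5):
        fingers.append(1 if lm_list[TIP_IDS[i]][2] < lm_list[TIP_IDS[i] - 2][2] else 0)
    return fingers
```

Note the thumb is treated differently from the other fingers: it folds
sideways, so the x coordinates are compared, while the other four fingers
fold downward and are tested on y.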
def main():
    pTime = 0
    cap = cv2.VideoCapture(1)  # change to 0 if only one camera is attached
    detector = handDetector()
    while True:
        success, img = cap.read()
        img = detector.findHands(img)
        lmList, bbox = detector.findPosition(img)
        if len(lmList) != 0:
            print(lmList[4])
        cTime = time.time()
        fps = 1 / (cTime - pTime)
        pTime = cTime
        cv2.putText(img, str(int(fps)), (10, 70),
                    cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)
        cv2.imshow("Image", img)
        cv2.waitKey(1)

if __name__ == "__main__":
    main()
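main() measures frames per second with the pTime/cTime pattern: store the
previous timestamp and divide 1 by the elapsed time each frame. The same
idea can be packaged as a reusable helper (the names here are illustrative):

```python
import time

def fps_counter():
    """Return a tick() function that yields the frames-per-second
    between successive calls, mirroring the pTime/cTime pattern."""
    state = {"pTime": time.time()}
    def tick():
        cTime = time.time()
        fps = 1 / (cTime - state["pTime"])
        state["pTime"] = cTime
        return fps
    return tick
```

Calling `tick()` once per loop iteration gives the instantaneous frame
rate, which the project overlays on the preview window as a quick check
that hand tracking is keeping up with the camera.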
Output Screenshot:
CONCLUSION:
To conclude, this model applies computer-vision techniques from OpenCV:
the hand is detected in the webcam feed, masks and color-variation
techniques can be used to isolate it, and the coordinates of the
detected hand are linked to mouse movement through a mouse-control
package. This provides easier use of computer systems and supports many
other applications. In this way, OpenCV helps users with different
accessible forms of interaction models that make everyday life easier.