Final ETI MP
M.S.B.T.E.
Evaluation Sheet for Micro Project
Name and Signature of Faculty: Mrs. V. S. Wangikar
SVERI’s COLLEGE OF ENGINEERING (POLYTECHNIC), PANDHARPUR.
CERTIFICATE
is a bonafide work carried out by the above students under the guidance of Mrs. V. S. Wangikar, and it
is submitted towards the fulfillment of the requirements of MSBTE, Mumbai, for the award of the Diploma in
Computer Engineering at SVERI's COE (Polytechnic), Pandharpur, during the academic year 2024-2025.
(Mrs. Wangikar V. S.)        (Mr. Bhandare P. S.)        (Dr. Misal N. D.)
Guide                        HOD                         Principal
Place: Pandharpur
Date: / /
Acknowledgement
The "Artificial Intelligence Virtual Mouse" has been developed successfully through the contribution
of four students over a period of four months. We would like to express our heartfelt gratitude to
Principal Dr. N. D. Misal, our guide Mrs. Wangikar V. S., and HOD Mr. Bhandare P. S., the supervisor
of our project, for their guidance, encouragement, and willingness to help us in many ways to make
the project a success; without their support, the project would not have been possible. We would
never have been able to finish our work without the great support and enthusiasm of our friends and
the support of our families. We would also like to thank the Department of Information Technology
for giving us permission to initiate this project and to finish it successfully.
1. Rationale of the Project:
The AI Virtual Mouse uses computer vision techniques to track hand movements and translate them into
cursor movements on the screen. The system is designed to be intuitive and user-friendly, allowing users to
interact with their computer without the need for a physical mouse. The virtual mouse is developed using
Python and the OpenCV library, and the project includes the implementation of various image processing
algorithms, such as hand segmentation, feature extraction, and classification. Moreover, it is robust to
varying lighting conditions, backgrounds, and hand sizes. The developed system provides an alternative to
conventional mouse devices, particularly for individuals with disabilities or those who prefer a more natural
way of interacting with their computers. The aim of this project is to bring something new to the world of
technology: letting an individual work without a physical mouse, which saves the user money and time.
Real-time images are continuously captured by the virtual mouse's color recognition program and passed
through a number of filters and conversions. Once this is done, the program uses an image processing
technique to extract the coordinates of the desired colors from the converted frames. It then compares the
current color combinations within the frames against a list of combinations, where each combination
corresponds to a different mouse operation; when the current combination matches, the program executes
the corresponding mouse command, which is converted into a real mouse action on the user's computer.
The virtual mouse system is evaluated on metrics such as accuracy, speed, and robustness, and compared
with existing virtual mouse systems. The trial findings demonstrated a high degree of accuracy (97.37%),
and the system operates well in real scenarios on a single CPU.
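The pipeline above can be sketched in a few lines. This is a minimal illustration, assuming a standard webcam, OpenCV, and the pyautogui library; the HSV range shown is an illustrative value for one marker color, not the project's actual calibration.

import cv2
import pyautogui

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)                    # mirror for natural control
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # convert to HSV colour space
    # keep only pixels in the target colour range (illustrative red range)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)    # largest coloured region
        M = cv2.moments(c)
        if M["m00"] > 0:
            cx = int(M["m10"] / M["m00"])         # centroid of the region
            cy = int(M["m01"] / M["m00"])
            h, w = frame.shape[:2]
            # map frame coordinates to screen coordinates
            pyautogui.moveTo(cx * screen_w // w, cy * screen_h // h)
    cv2.imshow("tracking", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

In practice, one such mask would be computed for each color the program recognizes, and the set of detected colors would be compared against the list of combinations that map to mouse operations.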
2. Aim and Benefits of the Micro Project
Aim of the Micro Project:
The goal is to manage computers and other devices with gestures rather than by pointing and
clicking a mouse or touching a display directly. Backers believe that the approach can make it
easier not only to carry out many existing chores but also to take on trickier tasks, such as
creating 3-D models or browsing medical imagery during surgery without touching anything.
3. Literature Review
An adaptive skin color model and a motion history image-based hand moving direction detection
technique were implemented in a paper published by Dung-Hua Liou and Chen-Chiung Hsieh. The
average accuracy of their system was 94.1%, with processing taking 3.81 milliseconds per frame. The
primary limitation of the paper is that it has trouble recognizing more complex hand gestures
in a working environment.
The history of the virtual mouse dates back to the 1960s when Douglas Engelbart invented the first physical
mouse, revolutionizing human-computer interaction. As graphical user interfaces (GUIs) emerged in the
1980s, the physical mouse became a standard input device. By the 1990s, touchscreen and gesture
recognition technologies started laying the foundation for virtual mouse concepts, enabling users to interact
without physical devices. The 2000s saw the development of AI-based virtual mice using motion sensors and
computer vision to track hand movements and gestures. In the 2010s, virtual mice evolved with
advancements in artificial intelligence, eye-tracking, and voice recognition, allowing for more intuitive,
hands-free control. Today, virtual mice are integrated into VR/AR systems, wearables, and smart devices,
offering more accessible and efficient ways to interact with technology, particularly for users with
disabilities.
Gesture Recognition: AI and ML algorithms classify and recognize hand or body gestures for controlling the
virtual mouse.
Adaptive Learning: AI learns from user interactions to improve accuracy and responsiveness over time.
Smart Gesture Learning: Machine learning maps unique, custom gestures to specific actions for personalized
control.
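A minimal sketch of how recognized gesture labels could be dispatched to mouse operations, assuming the pyautogui library; the gesture names and the idea of a separate classifier producing them are hypothetical placeholders, not this project's actual gesture set.

import pyautogui

# hypothetical gesture labels mapped to mouse actions
GESTURE_ACTIONS = {
    "pinch":      lambda: pyautogui.click(),        # single click
    "two_pinch":  lambda: pyautogui.doubleClick(),  # double click
    "fist":       lambda: pyautogui.mouseDown(),    # start drag
    "open_palm":  lambda: pyautogui.mouseUp(),      # end drag
    "swipe_up":   lambda: pyautogui.scroll(120),    # scroll up
    "swipe_down": lambda: pyautogui.scroll(-120),   # scroll down
}

def dispatch(gesture_label):
    """Execute the mouse command mapped to a recognized gesture."""
    action = GESTURE_ACTIONS.get(gesture_label)
    if action:
        action()

Keeping the mapping in a dictionary makes "smart gesture learning" straightforward: a custom gesture can be mapped to a new action by adding one entry.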
4. Introduction
A virtual mouse uses advanced technologies like gesture recognition, eye-tracking, and voice commands to
control a computer without physical input devices. It enhances accessibility, productivity, and ergonomics
by enabling hands-free interaction and adapting to user preferences.
Gesture Recognition: Uses motion sensors to track hand or body movements for cursor control.
Customization: Adapts to individual user gestures and preferences for personalized control.
Accessibility: Provides an alternative input method for people with physical disabilities.
5. Technologies used in virtual mouse
Gesture Recognition: Uses motion sensors or computer vision to track hand movements for cursor
control.
Eye-Tracking Technology: Employs infrared cameras to detect eye movements and control the
cursor hands-free.
Machine Learning: Enhances gesture and voice recognition accuracy over time by learning user
behaviors.
Accelerometers and Gyroscopes: Detect motion and orientation in devices to control the cursor via
tilts or gestures.
Depth Sensors: Use ToF or LiDAR sensors to capture 3D spatial data for accurate gesture tracking.
Capacitive Touch Sensors: Detect proximity or finger movements to control the cursor touchlessly.
Optical Sensors: Use cameras to track hand or finger gestures for virtual mouse control.
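As one concrete example of the optical (camera-based) tracking listed above, the sketch below uses the MediaPipe Hands library to follow the index fingertip and drive the cursor. Whether this project uses MediaPipe is an assumption, and the confidence value is illustrative.

import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)
    # MediaPipe expects RGB input
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # landmark 8 is the index fingertip; coordinates are normalized 0..1
        tip = result.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()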
6. Working of virtual mouse
1. Gesture Recognition:
Motion Tracking: Sensors like accelerometers, gyroscopes, or cameras capture the movement
of the user's hand or body.
Gesture Processing: The tracked movement is translated into a virtual cursor's motion on the
screen, with predefined gestures mapped to specific actions (click, scroll, drag).
2. Eye-Tracking Technology:
Eye Movement Detection: Infrared cameras detect the position of the user's eyes and monitor
their gaze.
Cursor Control: The system moves the cursor based on where the user is looking, allowing
hands-free control.
3. Machine Learning:
Data Collection: The system collects data on user behavior and gestures over time.
Adaptive Learning: Using machine learning models, the virtual mouse adjusts its recognition
algorithms to improve accuracy and responsiveness based on user feedback.
4. Accelerometers and Gyroscopes:
Motion Detection: These sensors detect changes in the orientation and movement of the device
(e.g., a smartphone or wearable).
Cursor Movement: Movement data is used to translate device tilts or hand gestures into cursor
movement on the screen.
5. Proximity Sensors:
Proximity Detection: These sensors emit infrared light or sound waves and measure their
reflection off nearby objects (such as the user's hand).
3D Tracking: The system uses the reflected signals to track hand positions and gestures in 3D
space, allowing for precise cursor control.
6. Depth Sensors:
Spatial Mapping: Depth sensors like Time-of-Flight (ToF) or LiDAR capture the distance
between the sensor and the user's hand.
Gesture Recognition: The system processes the depth data to understand hand movements in
three-dimensional space, enabling more accurate gesture-based input.
7. Capacitive Touch Sensors:
Proximity Sensing: Capacitive sensors detect changes in the electrical field caused by the
presence of a finger or hand.
Cursor Interaction: The system uses proximity data to control the cursor, triggering actions
based on the user's gestures without direct touch.
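Whatever sensor supplies the raw coordinates in the steps above, they are usually smoothed before the cursor is moved, to reduce jitter from noisy tracking. The following is a minimal sketch assuming the pyautogui library; the smoothing factor is an illustrative value, not one taken from this project.

import pyautogui

ALPHA = 0.25                   # 0 < ALPHA <= 1; smaller = smoother but laggier
sx, sy = pyautogui.position()  # start from the current cursor position

def move_smoothed(raw_x, raw_y):
    """Exponential moving average over raw tracked coordinates."""
    global sx, sy
    sx = ALPHA * raw_x + (1 - ALPHA) * sx
    sy = ALPHA * raw_y + (1 - ALPHA) * sy
    pyautogui.moveTo(int(sx), int(sy))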
7. Challenges in virtual mouse
Sensor Limitations:
Although AVs use multiple sensors (LIDAR, radar, cameras, etc.), these sensors still have
limitations in certain conditions. For example, LIDAR can struggle in heavy rain or fog, while
cameras might not perform well in low light. Combining sensor data to create a reliable
understanding of the environment remains a complex task.
Complex Decision-Making:
Autonomous systems must handle unpredictable and dynamic real-world scenarios, such as sudden
pedestrian movements, construction zones, or complex traffic situations. Programming AVs to make
decisions in these situations—often where human intuition would prevail—remains a challenge.
Edge Cases:
AVs can face situations not covered in training data, such as rare weather events, unusual road
conditions, or unusual driver behavior. These edge cases are difficult to predict and program for,
requiring continuous learning and adaptation from the vehicle.
High-Definition Mapping:
While AVs rely heavily on HD maps to navigate accurately, these maps need constant updates to
reflect road changes, construction zones, and new traffic signals. Map creation and maintenance are
resource-intensive processes.
Ethical and Legal Issues:
Autonomous vehicles raise moral dilemmas (e.g., the "trolley problem") as well as legal
challenges such as liability in accidents, insurance policies, and the current regulations
governing autonomous vehicle testing and deployment.
Cybersecurity Risks:
Autonomous vehicles are highly dependent on software and internet connectivity, making them
vulnerable to hacking and data breaches. Robust cybersecurity measures are needed to protect
AVs from such risks.
8. Limitations of virtual mouse
1. Recognition Accuracy:
Gesture recognition can suffer from misinterpretation due to environmental factors like lighting
or motion noise. Variations in how users perform gestures, such as speed or angle, can also
impact tracking accuracy. Ensuring precise recognition requires advanced algorithms, which can
be computationally intensive and challenging to run in real time; see the sketch after this list
for two simple software guards against misfires.
2. Latency:
A significant challenge is the delay between the user's input and the system's response, which can
disrupt smooth interaction. The complex algorithms needed for gesture and voice recognition often
introduce lag, especially on less powerful devices. Reducing latency is crucial for a seamless
experience, but this demands high processing power and careful optimization.
3. Environmental Factors:
Lighting conditions greatly affect the performance of sensors, as infrared or optical sensors may
struggle in bright or dim environments. Background clutter, such as movement in the room, can
interfere with tracking, reducing accuracy. Maintaining consistent performance across varying
conditions requires robust calibration and adaptive systems, which can be difficult to implement
effectively.
4. Learning Curve:
Switching to a virtual mouse requires users to learn new methods of interaction, which can be
difficult for those unfamiliar with gesture-based controls. Complex gestures or voice commands
may frustrate users, particularly if they are not intuitive. Personalized learning systems can help,
but they require extensive training and time to optimize for individual preferences.
5. Limited Gesture Range:
Virtual mouse systems often recognize only a limited range of gestures, restricting their
functionality. Complex or rapid gestures may not be detected accurately, leading to missed actions
or inputs. Expanding the range of gestures while ensuring precise tracking across diverse
movements is a significant challenge.
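As referenced in item 1, here is a minimal sketch of two software guards against misinterpreted gestures: a confidence threshold that ignores uncertain classifications, and a debounce interval so one physical gesture does not fire repeated actions. The threshold, interval, and gesture labels are illustrative assumptions, not measured settings from this project.

import time

CONF_THRESHOLD = 0.85   # ignore low-confidence classifications
DEBOUNCE_SEC = 0.5      # minimum gap between repeats of the same action
_last_fired = {}

def should_fire(gesture_label, confidence):
    """Return True only for confident, non-repeated gestures."""
    if confidence < CONF_THRESHOLD:
        return False
    now = time.time()
    if now - _last_fired.get(gesture_label, 0.0) < DEBOUNCE_SEC:
        return False            # same gesture fired too recently
    _last_fired[gesture_label] = now
    return True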
Actual Resources Used:
Sr. No.  Resource  Specification                                                Quantity
1        Laptop    RAM: 8 GB, Processor: Intel i5, HDD: 1 TB, Graphics: 2 GB    1
2        Camera    60 fps                                                       1
9. Conclusion:
In conclusion, the virtual mouse represents a significant leap in user interface technology,
enabling hands-free interaction through innovative resources like gesture recognition, voice
commands, and advanced sensors. While the technology offers numerous benefits, such as
increased accessibility and enhanced user experience, it faces challenges in terms of accuracy,
latency, environmental factors, and power consumption. Continued advancements in hardware,
software, and machine learning will be crucial to overcoming these limitations, making virtual
mice more accurate, affordable, and user-friendly. As the technology evolves, it has the
potential to redefine the way we interact with digital devices, offering new opportunities for
accessibility and convenience.