
VIRTUAL WHITE BOARD

A Minor Project Report Submitted To

Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal


Towards Partial Fulfilment for the Award of

Bachelor of Technology

in

Artificial Intelligence & Data Science


Submitted By
Muskan Sengar(0863AD221026)

Jiya Darwai ( 0863AD221019)

Ankita Rajput (0863AD221006)

Under the Supervision of


Prof. Mohit Kadwal

Session: 2024-2025

Department of Artificial Intelligence & Data Science

Prestige Institute of Engineering, Management and Research, Indore (M.P.)


[An Institution Approved By AICTE, New Delhi & Affiliated To RGPV, Bhopal]

PRESTIGE INSTITUTE OF ENGINEERING MANAGEMENT AND RESEARCH

INDORE (M.P.)
DECLARATION

We, Muskan Sengar, Jiya Darwai and Ankita Rajput, hereby declare that the project entitled
“AIR CANVAS”, which is submitted by us in partial fulfilment of the requirements for the
award of Bachelor of Technology in Artificial Intelligence & Data Science to the Prestige Institute
of Engineering, Management and Research, Indore (M.P.), affiliated to Rajiv Gandhi Proudyogiki
Vishwavidyalaya, Bhopal, comprises our own work, and due acknowledgement has been made in the
text to all other material used.

Signature of Students:

Date: 24 December 2024

Place: Indore
PRESTIGE INSTITUTE OF ENGINEERING MANAGEMENT AND RESEARCH

INDORE (M.P.)

DISSERTATION APPROVAL SHEET

This is to certify that the dissertation entitled “AIR CANVAS” submitted by Muskan Sengar
(0863AD221026), Jiya Darwai (0863AD221019) and Ankita Rajput (0863AD221006) to the
Prestige Institute of Engineering, Management and Research, Indore (M.P.) is approved in
fulfilment of the requirements for the award of the degree of “Bachelor of Technology in
Artificial Intelligence & Data Science” by Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.).

Internal Examiner External Examiner

Date: Date:

HOD, AI & DS
Dr. Dipti Chauhan
PIEMR, INDORE
PRESTIGE INSTITUTE OF ENGINEERING MANAGEMENT AND RESEARCH

INDORE (M.P.)

CERTIFICATE

This is to certify that the project entitled “AIR CANVAS” submitted by Muskan Sengar, Jiya
Darwai and Ankita Rajput is a satisfactory account of the bona fide work done under our
supervision and is recommended towards partial fulfilment for the award of the degree of
Bachelor of Technology in Artificial Intelligence & Data Science by Rajiv Gandhi
Proudyogiki Vishwavidyalaya, Bhopal (M.P.).

Date:

Enclosed by:

Prof. Mohit Kadwal Prof. Mohit Kadwal Dr. Dipti Chauhan

Project Guide Project Coordinator Professor & Head, AI & DS

Dr. Manojkumar Deshpande


Sr. Director
PIEMR, Indore
PRESTIGE INSTITUTE OF ENGINEERING MANAGEMENT AND RESEARCH

INDORE (M.P.)

ACKNOWLEDGEMENT

After the completion of this minor project work, words are not enough to express my feelings for
all those who helped me to reach my goal; above all is my indebtedness to the Almighty
for providing me this moment in life.

First and foremost, I take this opportunity to express my deep regards and heartfelt gratitude to my
project guide Prof. Mohit Kadwal and Project Coordinator Prof. Mohit Kadwal, Department of
Artificial Intelligence & Data Science, PIEMR, Indore, for their inspiring guidance and timely
suggestions in carrying out my project successfully. They have also been a constant source of
inspiration for me. Working under their guidance has been an opportunity for me to learn more
and more.

I am extremely thankful to Dr. Dipti Chauhan (HOD, AI & DS) for her co-operation and
motivation during the project. I extend my deepest gratitude to Dr. Manojkumar Deshpande,
Director, PIEMR, Indore, for providing all the necessary facilities and a truly encouraging
environment to bring out the best of my endeavours.

I would like to thank all the teachers of our department for providing invaluable support and
motivation. I remain indebted to all the non-teaching staff of our institute, who have helped me
immensely throughout the project.

I am also grateful to my friends and colleagues for their help and co-operation throughout this
work. Last but not least, I thank my family for their support, patience, blessings and understanding
while completing my project.

Name of Students:

Muskan Sengar (0863AD221026)

Jiya Darwai (0863AD221019)

Ankita Rajput (0863AD221006)
INDEX

Declaration I

Dissertation Approval Sheet II

Certificate III

Acknowledgement IV

Table of Contents V

List of Tables VI

List of Figures VII


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION

1.1 Introduction…………………………………………………………………….………2
1.2 Motivation………………………………………………………..............................2
1.3 Objective………………………………………………………………………………3
1.4 Analysis ………………………………………………………….............................3
1.4.1 Functional Requirements ………………………………………………......4

1.4.2 Non-functional Requirements ………………………………………………5

1.4.3 Use Case Diagram…………………………………………………………..6

CHAPTER 2 BACKGROUND AND RELATED WORK

2.1 Problem Statement ……………………………………………………………………9

2.2 Background and Related Work………………………………………………………..9

2.2.2 Literature survey ……………………………………………………….......12

2.3 Solution Approach (methodology and technology used)………………………………16

CHAPTER 3 DESIGN (UML AND DATA MODELING)

3.1 UML Modeling

3.1.1 Sub Systems………………………………………………………………….20

3.1.2 Modules Specification………………………………………………………..20

3.1.3 Collaboration Diagram ……………………………………………………….21

3.1.4 Class Diagram……………………………………………............................22

3.1.5 Sequence Diagram ………………………………………..........................22


3.1.6 Activity Diagram…………………………………………..........................23

3.2 Data Modeling

3.2.1 Data Flow Diagram………………………………………..........................24

CHAPTER 4 IMPLEMENTATION

4.1 Tools Used…………………………………………………………………………...26

4.2 Technology…………………………………………………………………………...27

4.3 Testing ……………………………………………………………………………...28

4.4 User Manual……………………………………………………………………………34

CHAPTER 5 PROJECT PLAN

5.1 Gantt Chart………………………………………………………...............................39

5.2 Effort Schedule & Cost estimation ……………………………………………………39

5.3 Work Breakdown Structure…………………………………………………………….40

5.4 Deviation from original plan and correction applied………………………………..41

CHAPTER 6 PROJECT SCREENSHOTS

CHAPTER 7 CONCLUSION/ FUTURE SCOPE

7.1 Conclusion………………………………………………………..............................42

7.2 Application Domain & Future Scope…………………………………………………42

References………………………………………………………………………………….43

Appendix A: Snapshots or Screenshots with description………………………………44

Published Research Paper…………………………………………………………………46


LIST OF FIGURES

Figure No. Title

Figure 1 Erase virtually

Figure 2 Class diagram

Figure 3 Sequence diagram

Figure 4 Activity diagram

Figure 5 Data flow diagram

Figure 6 Writing free hand

Figure 7 Finger tip recognition


Poster- Colour Copy
CHAPTER 1
INTRODUCTION
1.1 Introduction

In the digital era, the traditional art of writing is being supplemented by digital art. Digital art refers to
forms of artistic expression and transmission that rely on a digital medium; dependence on modern science
and technology is its distinctive characteristic. Traditional art refers to the art forms created before digital
art. From the recipient's point of view, art can broadly be divided into visual art, audio art, audio-visual art
and audio-visual imaginary art, which include literature, painting, sculpture, architecture, music, dance,
drama and other works of art. Digital art and traditional art are interrelated and interdependent; in the
present circumstances they exist in a symbiotic state, so we need to systematically understand the
relationship between the two forms. Traditional ways of writing include pen and paper, and chalk and
board. The essential aim of this project is to build a hand gesture recognition system for writing digitally.
Digital writing can be done in many ways, such as with a keyboard, a touch-screen surface, a digital pen, a
stylus, or electronic hand gloves. In this system, however, we use hand gesture recognition implemented in
Python, which creates a natural interaction between man and machine. With the advancement of
technology, the need for natural human-computer interaction systems to replace traditional input is
increasing rapidly.

We use the computer vision techniques of OpenCV to build this project. The preferred
language is Python, due to its exhaustive libraries and easy-to-use syntax, but once the basics are
understood it can be implemented in any OpenCV-supported language. This is a live project: the
camera is turned on, the program detects the live motion of the hand, and the corresponding
strokes are drawn on the air canvas. We provide various options, such as drawing in different
colours, plus a clear option:

● Blue

● Green

● Red

● Yellow

● Clear
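To make the colour options concrete, here is a minimal sketch of how a fingertip position could be mapped to one of these on-screen buttons. The button coordinates and toolbar height below are assumptions for illustration, not the project's actual layout.

```python
# Hypothetical toolbar layout: five buttons along the top strip of a
# 640x480 frame. The real project's coordinates may differ.
BUTTONS = [
    ("CLEAR", 40, 140),
    ("BLUE", 160, 255),
    ("GREEN", 275, 370),
    ("RED", 390, 485),
    ("YELLOW", 505, 600),
]
TOOLBAR_HEIGHT = 65  # buttons occupy only the top strip of the frame

def hit_button(x, y):
    """Return the name of the button under fingertip (x, y), or None."""
    if y > TOOLBAR_HEIGHT:
        return None  # fingertip is on the canvas, not the toolbar
    for name, x1, x2 in BUTTONS:
        if x1 <= x <= x2:
            return name
    return None
```

Selecting "CLEAR" would then reset the stored stroke points, while a colour button changes the active drawing colour.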

1.2 MOTIVATION

There are several motivations for creating an "air canvas" using OpenCV and Python:

• Fun and entertainment: An air canvas can be a fun and interactive way for users to create art or
play games using hand gestures or to teach students.

• Accessibility: An air canvas can be used to create an interface that is more accessible to users with
disabilities, who may have difficulty using a traditional input device such as a mouse or keyboard.

• Educational purposes: An air canvas project can be a good opportunity to learn about and
experiment with computer vision algorithms and techniques, such as object detection and image
segmentation.

• Novelty: An air canvas can be a unique and attention-grabbing way to display and interact with
digital content, such as at a trade show or exhibition.

•Improved usability: An air canvas can provide a more natural and intuitive way for users to interact
with digital content, compared to traditional input devices like a mouse or keyboard.

•Virtual reality (VR) and augmented reality (AR) applications: An air canvas can be used to create
interactive VR or AR experiences that allow users to draw or interact with virtual objects using hand
gestures.
• Interactive installations: An air canvas can be used to create interactive installations or exhibits that
allow users to interact with digital content using hand gestures.

•Gesture-based control: An air canvas can be used to create a gesture-based control interface for
devices such as smart home systems or gaming consoles.

•Advertising and marketing: An air canvas can be used as an attention-grabbing tool for advertising
and marketing campaigns, allowing users to interact with digital content in a unique and memorable
way.

• Medical rehabilitation: An air canvas can be used as a therapeutic tool for patients undergoing
medical rehabilitation, helping them to improve their hand-eye coordination and fine motor skills.

• Art therapy: An air canvas can be used as a tool for art therapy, allowing users to express themselves
creatively and emotionally through the medium of hand gestures.

• Gaming: An air canvas can be used to create innovative and immersive gaming experiences that
allow users to interact with the game using hand gestures.

• Virtual meetings and presentations: An air canvas can be used to enhance virtual meetings and
presentations, allowing users to draw or annotate digitally in real-time using hand gestures.

• Music performance: An air canvas can be used as a tool for music performance, allowing users to
create music or control audio effects using hand gestures.

Overall, an air canvas project using OpenCV and Python can be a fun and educational way to explore
the capabilities of computer vision, and can have applications in a variety of contexts.

1.3 OBJECTIVE

The main objective of an "air canvas" system using OpenCV and Python is to allow users to create
drawings or artworks using hand gestures, which are captured by a camera and processed in real-
time using computer vision algorithms. To achieve this objective, the system would need to detect
and track the movement of the user's hand in the video frames captured by the camera, and then
use this information to generate a corresponding drawing on a screen. This could be done by mapping
the movement of the hand to the movement of a cursor or drawing tool on the screen, or by using
the hand movement to draw lines or shapes directly onto the screen.
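The mapping described above, from tracked fingertip positions to drawn lines, can be sketched as follows; each returned segment would then be rendered with a call such as cv2.line. The None marker for a pen lift is an assumption for illustration.

```python
def to_segments(points):
    """Pair consecutive tracked fingertip points into line segments.

    Each segment would be rendered with cv2.line(canvas, p1, p2, colour,
    thickness); a None entry marks a pen lift and breaks the stroke.
    """
    segments = []
    for p1, p2 in zip(points, points[1:]):
        if p1 is None or p2 is None:
            continue  # pen lift: do not connect across the gap
        segments.append((p1, p2))
    return segments
```

Connecting consecutive points rather than plotting isolated pixels is what makes fast hand motion appear as a continuous stroke.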

In addition to the main objective of creating drawings or artworks using hand gestures, an air canvas
system using OpenCV and Python could also have the following secondary objectives:

• To provide a fun and interactive way for users to create art or play games using hand gestures.

• To create an interface that is more accessible to users with disabilities, who may have difficulty
using a traditional input device such as a mouse or keyboard.

• To provide an opportunity for users to learn about and experiment with computer vision algorithms
and techniques, such as object detection and image segmentation.

• To create a unique and attention-grabbing way to display and interact with digital content, such as
at a trade show or exhibition.

• To provide a more natural and intuitive way for users to interact with digital content, compared to
traditional input devices like a mouse or keyboard.

•To create interactive VR or AR experiences that allow users to draw or interact with virtual objects
using hand gestures.

• To create interactive installations or exhibits that allow users to interact with digital content using
hand gestures.

• To create a gesture-based control interface for devices such as smart home systems or gaming
consoles.

• To use as an attention-grabbing tool for advertising and marketing campaigns, allowing users to
interact with digital content in a unique and memorable way.

• To use as a therapeutic tool for patients undergoing medical rehabilitation, helping them to improve
their hand-eye coordination and fine motor skills.

• To use as a tool for art therapy, allowing users to express themselves creatively and emotionally
through the medium of hand gestures.
• To create innovative and immersive gaming experiences that allow users to interact with the game
using hand gestures.

• To enhance virtual meetings and presentations, allowing users to draw or annotate digitally in real-
time using hand gestures.

• To use as a tool for music performance, allowing users to create music or control audio effects
using hand gestures.

1.4 ANALYSIS

The major step in analysis is to verify the feasibility of the proposed system.
“All projects are feasible given unlimited resources and infinite time.” In reality, however, both
resources and time are scarce. Projects should prove to be time-effective and should be optimal in
their consumption of resources; this plays a constant role in the approval of any project. Three key
considerations involved in the feasibility analysis are:

Technical Feasibility

Operational Feasibility

Economic Feasibility

Technical Feasibility

To determine whether the proposed system is technically feasible, we should take into consideration
the technical issues involved behind the system. This technical feasibility study is carried out to check
the technical requirements of the system. Any system developed must not place high demands on the
available technical resources. The developed system has modest requirements, as only minimal
changes are required for implementing it.

Operational Feasibility

To determine the operational feasibility of the system, we should take into consideration the
awareness level of the users. This system is operationally feasible since the users are familiar with
the technologies involved, and hence there is no need to gear up or train personnel to use the system.
The system is also user-friendly and easy to use.

Economic Feasibility

To decide whether a project is economically feasible, we have to consider various factors such as:

Cost-benefit analysis

Long-term returns

Maintenance costs

This study is carried out to check the economic impact that the system will have on the
organization. The funds that an organization can pour into the research and development of a system
are limited, so the expenditure must be justified. The developed system was well within the budget,
which was achieved because most of the technologies used are freely available.

1.4.1 USE CASE DIAGRAM

Use case Diagram consists of use case and actors.

The main purpose is to show the interaction between the use cases and the actor.

It intends to represent the system requirements from user’s perspective.

The use cases are the functions that are to be performed in the module.
Fig. 1 Use Case Diagram for Air Canvas using OpenCV

1.4.2 Functional Requirements


 Use the camera to capture input.
 Detect hand positions and fingertips.
 Choose different shapes, colours, and sizes.
 Depict shapes on the canvas.
 Save the work on the canvas as an image.
 Open a PDF and edit/annotate it.
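The "save the work as an image" requirement can be sketched as follows: the drawing canvas (a NumPy image the same size as the video frame) is overlaid on the frame wherever a stroke was drawn, and the merged result would then be written out with cv2.imwrite. This is a minimal illustration under those assumptions, not the project's exact code.

```python
import numpy as np

def overlay_canvas(frame, canvas):
    """Overlay drawn strokes on the camera frame.

    Wherever the canvas holds a non-black pixel (a stroke), that pixel
    replaces the frame pixel. The merged image could then be saved with
    cv2.imwrite("drawing.png", merged).
    """
    mask = (canvas != 0).any(axis=-1)  # True where something was drawn
    merged = frame.copy()
    merged[mask] = canvas[mask]
    return merged
```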
1.4.3 Non-Functional Requirements
 Reliability requirements
 Scalability requirements
 Maintainability requirements
 Usability requirements
 Availability requirements

1.5 Purpose

The purpose of the Air Canvas application is to create a novel and interactive digital drawing
platform that allows users to paint and draw in the air using hand gestures and a camera-based
system powered by OpenCV (Open Source Computer Vision Library). This project aims to achieve
the following objectives:

• Gesture-based drawing: Enable users to create digital artwork by simply moving their hands or a
pointing device in the air, eliminating the need for physical touch or traditional input devices like a
mouse or stylus.

• Real-time visualisation: Provide real-time visualisation of the drawing process, allowing users to
see their artwork taking shape as they draw in the air. This enhances the user experience and
provides immediate feedback.

• Dynamic brush selection: Implement a system that allows users to switch between different virtual
brushes and colours on the fly, enabling a diverse range of creative expression.

• Intuitive user interface: Design an intuitive, user-friendly interface that is accessible to people of
all ages and backgrounds and is easy to learn and use.

• Gesture recognition: Utilise OpenCV for hand and gesture recognition, enabling the system to
interpret hand movements and convert them into drawing commands.

• Customisation and saving: Allow users to save and export their digital creations, and provide
options for customising brush properties, line thickness, and other artistic elements.

• Educational and entertainment value: Serve as an educational tool to teach computer vision and
image processing concepts, as well as an entertaining and creative outlet for users.

• Cross-platform compatibility: Develop the application to work on multiple platforms, such as
Windows, macOS, and Linux, ensuring accessibility to a wide user base.

• Open-source collaboration: Encourage open-source collaboration and community involvement to
continually enhance the application's features and capabilities.
CHAPTER 2
BACKGROUND AND
RELATED WORK
2.1 Problem statement

The current system works only with the fingers, not with crayons or paints. We concentrate on the
difficult task of identifying and separating a finger from an RGB image without a depth sensor. The
system uses a single RGB camera, so depth cannot be recovered and a pen-up motion cannot be
distinguished from a stroke; because every finger path is drawn, the result can be an abstract,
unintended image. It takes careful coding to move the drawing process from one region of the frame
to another using real-time hand tracking, and to control the application properly the user must also
be familiar with numerous gestures. The survey focuses on finding solutions to some of the most
pressing social problems.

The problems that hearing-impaired people face in daily life are numerous. While hearing is
something that most people take for granted, sign language is not always used when communicating
with someone who is hearing-impaired. Second, paper waste is considerable: wasted paper includes
paper used for writing, drawing, and so on, and paper makes up about 25% of landfill waste and 50%
of commercial waste. These issues can be eased through in-air writing. It can help the
hearing-impaired communicate, since the written text may be spoken back or displayed in augmented
reality, and one can write and work quickly in the air.
Fig 2. Flow diagram of the project

2.2 Literature survey

• Babu, S., Pragathi, B.S., Chinthala, U. and Maheshwaram, 2020, September. “Subject
Tracking with Camera Movement Using Single Board Computer”. In 2020 IEEE-HYDCON
(pp. 1-6). This work [1] presents a system that is able to track subjects and pan accordingly
in real time. The system replicates a camera operator by detecting humans or objects
and panning accordingly to track and record them. This subject-tracking camera system can be
implemented in classroom lectures to help teachers record video without the aid of a
camera operator, or to ease the work of a camera operator.
• Shetty, M., Daniel, C.A., Bhatkar, M.K. and Lopes, O.P., 2020, June. “Virtual
Mouse Using Object Tracking”. In 2020 5th International Conference on Communication and
Electronics Systems (ICCES) (pp. 548-553). In this paper [2] they proposed a computer-vision-
based mouse cursor control system, which uses hand gestures captured from a
webcam through an HSV colour detection technique.
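The HSV colour detection technique mentioned in [2] can be sketched as a simple range threshold in HSV space, mirroring what cv2.inRange computes. The bounds used below are illustrative, not values from the cited paper.

```python
import numpy as np

def hsv_mask(hsv_image, lower, upper):
    """Binary mask of pixels whose H, S, V all fall inside [lower, upper].

    This mirrors cv2.inRange(hsv_image, lower, upper): pixels inside the
    range become 255, all others 0. Bounds here are assumptions.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = (hsv_image >= lower) & (hsv_image <= upper)
    return inside.all(axis=-1).astype(np.uint8) * 255
```

The resulting mask isolates the coloured marker or hand region, whose centroid can then be tracked frame to frame.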

• Chen, M., Al Regib, G. and Juang, B.H., 2016. “Air-writing recognition—Part I: Modeling and
recognition of characters, words, and connecting motions”. IEEE Transactions on Human-
Machine Systems, 46(3), pp.403-413. In this paper [3], air-writing refers to the writing of
linguistic characters or words in free space by hand or finger movements. Recognition
of characters or words is accomplished based on six-degree-of-freedom hand motion data.
Isolated air-writing characters can be recognized similarly to motion gestures, although with
increased sophistication and variability.

• Kaur, H., Reddy, B.G.S., Sai, G.C. and Raj, A.S., 2019. “A Comprehensive Overview of AR/VR by
Writing in Air”. In this paper [4], OpenCV is used to sketch on the camera feed with a
virtual pen, i.e., any marker may be used to draw using the contour detection technique
centred on the mask of the desired coloured reference marker.
The research is about how well people can identify alphabets and numbers written in the
open air. A Leap Motion captures motion trajectory information and plots it as a continuous
stream of points. Lines are combined and major slopes identified from the major points.
Significant slopes are converted into directions by the use of geometry.
• Zhou, L., 2019. “Paper Dreams: an adaptive drawing canvas supported by machine
learning” (Doctoral dissertation, Massachusetts Institute of Technology). In this
paper [5], the system has the potential to empower a large subset of the population, from children to
the elderly, with a new medium to represent and visualize their ideas. Paper Dreams is a web-
based canvas for sketching and storyboarding, with a multimodal user interface integrated with
a variety of machine learning models. By using sketch recognition, style transfer, and natural
language processing, the system can contextualize what the user is drawing; it can then
color the sketch appropriately, suggest related objects for the user to draw, and allow the user
to pull from a database of related images to add onto the canvas.
• Joolee, J.B., Raza, A., Abdullah, M. and Jeon, S., 2020. “Tracking of Flexible Brush Tip on
Real Canvas: Silhouette-Based and Deep Ensemble Network-Based Approaches”. IEEE Access, 8,
pp.115778-115788. In this paper [6] they introduced silhouette-based and deep
ensemble network-based approaches to track the brush tip position for interactive drawing.
The silhouette-based approach captures the silhouette of deforming bristles using
a pair of well-aligned infra-red (IR) cameras, extracts the tip using their proposed tracking
procedure, and then reconstructs the 2D position of the tip. However, this approach still
needs a specially aligned frame and cameras and has shortcomings in usability; the
work considers only a standard-size brush.

2.3 Methodology

Existing approaches need a lot of data to be stored and can make wrong predictions when the
background or the user's skin colour differs. Some of the existing papers followed a process of
image processing using threshold values and a database of different images. To overcome these
limitations and drawbacks, we propose a system using MediaPipe. In the proposed system, hand
tracking is done using MediaPipe, which first detects hand landmarks and then obtains positions
from them. Figure 2 describes the flow of the system, i.e. the process by which the project works:
the complete workflow that executes the system and draws images just by waving hands in an
efficient manner. The methodology implemented in the system is divided into four modules as
follows:
1. Finger Tip Recognition
2. Writing with free hand
3. Shapes Integration
4. Erase Virtually
• FINGER TIP RECOGNITION

This module focuses on accurately detecting and interpreting user hand gestures and tracking
specific points on the hand, such as the fingertips, palm, and joints. It utilizes computer vision
libraries such as OpenCV and MediaPipe for real-time hand tracking and gesture recognition.

Fig 3. finger tip recognition
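As a hedged sketch of how fingertip recognition works with MediaPipe: the library returns 21 hand landmarks with coordinates normalised to [0, 1], and the index fingertip is landmark 8, which must be scaled to pixel coordinates before it can drive the canvas. The MediaPipe import is deferred inside the detection function so the pure conversion helper stands on its own; parameter choices are assumptions.

```python
INDEX_TIP = 8  # MediaPipe hand-landmark id of the index fingertip

def norm_to_pixel(lx, ly, frame_w, frame_h):
    """Convert a normalised MediaPipe landmark (lx, ly in [0, 1]) to pixels."""
    return int(lx * frame_w), int(ly * frame_h)

def detect_index_tip(frame_rgb):
    """Return the index fingertip in pixel coords, or None if no hand found.

    Sketch only: mediapipe is imported lazily so the helper above can be
    used without it installed.
    """
    import mediapipe as mp
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    result = hands.process(frame_rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark[INDEX_TIP]
    h, w = frame_rgb.shape[:2]
    return norm_to_pixel(lm.x, lm.y, w, h)
```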

• WRITING WITH FREE HAND

This module enables users to draw or write freehand in the air in front of the webcam using
natural hand gestures. It employs MediaPipe to solve the issues in hand tracking. When the
user points the index finger at the draw tool, the user can start drawing or writing as
required. The appropriate tool name is displayed on the screen when the user points at
the tool.
Fig 4. Writing free hand

Once the user points at the shape tool with the index finger, the specified tool is selected. The user
then raises the middle finger to draw the shape. Once done drawing, the user closes the middle
finger, and the specific shape is displayed on the screen.
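The "raise the middle finger to draw" rule relies on telling whether a finger is up. With MediaPipe landmarks, where image y grows downward, a finger can be counted as raised when its tip lies above its middle (PIP) joint. This is a simplified sketch; real code may also check the x axis and hand orientation.

```python
# MediaPipe landmark ids: (fingertip, PIP joint) for index and middle fingers
FINGERS = {"index": (8, 6), "middle": (12, 10)}

def finger_up(landmark_y, finger):
    """True if the given finger's tip lies above its PIP joint.

    landmark_y maps landmark id -> normalised y. Image y grows downward,
    so 'up' means a smaller y value.
    """
    tip, pip = FINGERS[finger]
    return landmark_y[tip] < landmark_y[pip]
```

Checking this per frame lets the program distinguish "index only" (select) from "index and middle" (draw the shape).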

• ERASE VIRTUALLY
Fig 5. Erase tool selection

The Air Canvas application enables users to switch to erasing mode by using the selection mode
from the toolbar and clicking on the eraser icon. It draws a transparent line where the finger
moves, thus erasing coloured points. By processing the video frames and selectively removing the
corresponding drawings from the canvas, the application creates the effect of erasing unwanted
strokes.
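The selective removal described above can be modelled as deleting all stored stroke points within a radius of the eraser position; this is one possible sketch (the project may equally paint over the region with the background colour instead).

```python
def erase_near(points, ex, ey, radius):
    """Drop stored stroke points within `radius` px of the eraser at (ex, ey)."""
    r2 = radius * radius
    return [
        (x, y) for (x, y) in points
        if (x - ex) ** 2 + (y - ey) ** 2 > r2  # keep only points outside it
    ]
```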
CHAPTER 3
DESIGN (UML AND DATA
MODELING)
3.1 UML DIAGRAMS

The Unified Modelling Language (UML) is a standard language for specifying, visualizing,
constructing, and documenting the artefacts of software systems, as well as for business modelling
and other non-software systems. The UML represents a collection of best engineering practices that
have proven successful in the modelling of large and complex systems. The UML is a very important
part of developing object-oriented software and the software development process. UML mostly uses
graphical notations to express the design of software projects. Using the UML helps project teams
communicate, explore potential designs and validate the architectural design of the software. The
UML is a standard language for writing software blueprints. The UML is a language for:

Visualizing

Specifying

Constructing

Documenting

the artefacts of a software system.

UML is a language which provides a vocabulary and the rules for combining words in that vocabulary
for the purpose of communication. A modelling language is a language whose vocabulary and rules
focus on the conceptual and physical representation of a system. Modelling yields an understanding
of a system.

3.1.1 USE CASE DIAGRAM

Use case Diagram consists of use case and actors.

The main purpose is to show the interaction between the use cases and the actor.

It intends to represent the system requirements from user’s perspective.

The use cases are the functions that are to be performed in the module.
Fig 3.1.1 use case diagram

3.1.2 CLASS DIAGRAM

It contains the classes involved and shows the connections between the various classes.

A class diagram includes classes, each of which has a class label or name, the attributes of the class,
and the operations or functions performed by the class.

Fig 3.1.2 class diagram


3.1.3 ACTIVITY DIAGRAM

In UML, the activity diagram is used to demonstrate the flow of control within the system rather than
the implementation. It models the concurrent and sequential activities.

It is also termed an object-oriented flowchart. It encompasses activities composed of a set of actions
or operations and is applied to model the behaviour of the system.

Fig 3.1.3 Activity Diagram for Air Canvas using OpenCV

3.1.4 SEQUENCE DIAGRAM

It shows the sequence of the steps that are carried out throughout the process of execution.

It involves lifelines, or the lifetime of a process, showing the duration for which a process is alive
while the steps take place in a sequential manner.
Sequence diagram specifies the order in which the various steps are executed.

Fig 3.1.4 Sequence Diagram for Air Canvas using OpenCV

CHAPTER 4
IMPLEMENTATION
4.1 Technology Used

• MediaPipe is a Google framework that helps solve the problems in hand tracking. MediaPipe
has ready-to-use machine learning solutions that can be used in various machine learning
projects; it also contains other modules for movement recognition, gesture recognition and
more. The user experience can be significantly enhanced across a range of
technological domains and platforms by being able to recognize the shape and motion of
hands. For instance, it can serve as the foundation for hand gesture control and sign
language comprehension, and it can also make it possible for digital information and material
to be superimposed on top of the real world in augmented reality.

• OpenCV is a library used for image recognition; in this project it handles our hand tracking and drawing. It is a library designed for image processing and image recognition, and it includes object-detection and image-processing methods. Real-time computer vision applications can be created by using the OpenCV library with the Python programming language. The library is used for processing images and videos as well as for analytical techniques such as face and object detection.

• NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy was created in 2005 by Travis Oliphant; it is an open-source project and free to use. NumPy stands for Numerical Python. In Python, lists serve the purpose of arrays, but they are slow to process; NumPy aims to provide an array object that is up to 50x faster than traditional Python lists. The array object in NumPy is called ndarray, and it provides many supporting functions that make working with ndarrays easy. Arrays are used very frequently in data science, where speed and resources are very important.
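As a small illustration of how the project uses NumPy, the sketch below creates the same white canvas and morphological kernel that appear in the implementation in Section 4.2; a single vectorised slice assignment clears the whole drawing area below the toolbar in one step:

```python
import numpy as np

# White 471x636 BGR canvas, matching the paintWindow used in Section 4.2
paintWindow = np.zeros((471, 636, 3), dtype=np.uint8) + 255

# 5x5 structuring element for morphological operations
kernel = np.ones((5, 5), np.uint8)

# Vectorised operations act on the whole array at once, e.g. clearing
# everything below the toolbar row (y >= 67) in a single assignment:
paintWindow[67:, :, :] = 255

print(paintWindow.shape)  # (471, 636, 3)
```

This is why the canvas is held as an ndarray rather than nested Python lists: whole-region updates and per-pixel drawing stay fast enough for a real-time loop.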

4.2 Implementation Code (Python)

import cv2
import numpy as np
import mediapipe as mp
from collections import deque

# from main import process_frame  # Ensure main.py is in the same directory.

def generate_frames():
    """Stream processed frames as MJPEG chunks (relies on the module-level cap)."""
    while True:
        success, frame = cap.read()
        if not success:
            break
        processed_frame = process_frame(frame)
        ret, buffer = cv2.imencode('.jpg', processed_frame)
        frame = buffer.tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

# One list of deques per colour; each deque holds the points of one stroke
bpoints = [deque(maxlen=1024)]
gpoints = [deque(maxlen=1024)]
rpoints = [deque(maxlen=1024)]
ypoints = [deque(maxlen=1024)]

# Index of the current stroke for each colour
blue_index = 0
green_index = 0
red_index = 0
yellow_index = 0

kernel = np.ones((5, 5), np.uint8)

colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 255, 255)]  # BGR
colorIndex = 0

# White canvas with the toolbar buttons drawn along the top
paintWindow = np.zeros((471, 636, 3)) + 255
paintWindow = cv2.rectangle(paintWindow, (40, 1), (140, 65), (0, 0, 0), 2)
paintWindow = cv2.rectangle(paintWindow, (160, 1), (255, 65), (255, 0, 0), 2)
paintWindow = cv2.rectangle(paintWindow, (275, 1), (370, 65), (0, 255, 0), 2)
paintWindow = cv2.rectangle(paintWindow, (390, 1), (485, 65), (0, 0, 255), 2)
paintWindow = cv2.rectangle(paintWindow, (505, 1), (600, 65), (0, 255, 255), 2)

cv2.putText(paintWindow, "CLEAR", (49, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "BLUE", (185, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "GREEN", (298, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "RED", (420, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.putText(paintWindow, "YELLOW", (520, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
cv2.namedWindow('Paint', cv2.WINDOW_AUTOSIZE)

# MediaPipe hand-tracking setup (single hand)
mpHands = mp.solutions.hands
hands = mpHands.Hands(max_num_hands=1, min_detection_confidence=0.7)
mpDraw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
ret = True
while ret:
    ret, frame = cap.read()
    if not ret:
        break

    x, y, c = frame.shape  # height, width, channels

    # Mirror the frame and draw the same toolbar on the live view
    frame = cv2.flip(frame, 1)
    framergb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame = cv2.rectangle(frame, (40, 1), (140, 65), (0, 0, 0), 2)
    frame = cv2.rectangle(frame, (160, 1), (255, 65), (255, 0, 0), 2)
    frame = cv2.rectangle(frame, (275, 1), (370, 65), (0, 255, 0), 2)
    frame = cv2.rectangle(frame, (390, 1), (485, 65), (0, 0, 255), 2)
    frame = cv2.rectangle(frame, (505, 1), (600, 65), (0, 255, 255), 2)
    cv2.putText(frame, "CLEAR", (49, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
    cv2.putText(frame, "BLUE", (185, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
    cv2.putText(frame, "GREEN", (298, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
    cv2.putText(frame, "RED", (420, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)
    cv2.putText(frame, "YELLOW", (520, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2, cv2.LINE_AA)

    result = hands.process(framergb)

    if result.multi_hand_landmarks:
        landmarks = []
        for handslms in result.multi_hand_landmarks:
            for lm in handslms.landmark:
                # Convert normalized landmark coordinates to pixel coordinates
                lmx = int(lm.x * 640)
                lmy = int(lm.y * 480)
                landmarks.append([lmx, lmy])

            mpDraw.draw_landmarks(frame, handslms, mpHands.HAND_CONNECTIONS)

        fore_finger = (landmarks[8][0], landmarks[8][1])  # index fingertip
        center = fore_finger
        thumb = (landmarks[4][0], landmarks[4][1])        # thumb tip
        cv2.circle(frame, center, 3, (0, 255, 0), -1)
        # print(center[1] - thumb[1])  # debug: pinch distance

        if (thumb[1] - center[1] < 30):
            # Thumb close to index fingertip ("pinch"): lift the pen and
            # start a new stroke deque for every colour
            bpoints.append(deque(maxlen=512))
            blue_index += 1
            gpoints.append(deque(maxlen=512))
            green_index += 1
            rpoints.append(deque(maxlen=512))
            red_index += 1
            ypoints.append(deque(maxlen=512))
            yellow_index += 1
        elif center[1] <= 65:
            # Fingertip inside the toolbar region
            if 40 <= center[0] <= 140:        # CLEAR button
                bpoints = [deque(maxlen=512)]
                gpoints = [deque(maxlen=512)]
                rpoints = [deque(maxlen=512)]
                ypoints = [deque(maxlen=512)]

                blue_index = 0
                green_index = 0
                red_index = 0
                yellow_index = 0

                paintWindow[67:, :, :] = 255
            elif 160 <= center[0] <= 255:
                colorIndex = 0  # Blue
            elif 275 <= center[0] <= 370:
                colorIndex = 1  # Green
            elif 390 <= center[0] <= 485:
                colorIndex = 2  # Red
            elif 505 <= center[0] <= 600:
                colorIndex = 3  # Yellow
        else:
            # Normal drawing: record the fingertip in the current stroke
            if colorIndex == 0:
                bpoints[blue_index].appendleft(center)
            elif colorIndex == 1:
                gpoints[green_index].appendleft(center)
            elif colorIndex == 2:
                rpoints[red_index].appendleft(center)
            elif colorIndex == 3:
                ypoints[yellow_index].appendleft(center)
    else:
        # No hand detected: break the stroke so separate lines are not joined
        bpoints.append(deque(maxlen=512))
        blue_index += 1
        gpoints.append(deque(maxlen=512))
        green_index += 1
        rpoints.append(deque(maxlen=512))
        red_index += 1
        ypoints.append(deque(maxlen=512))
        yellow_index += 1

    # Draw all recorded strokes on both the live frame and the canvas
    points = [bpoints, gpoints, rpoints, ypoints]
    for i in range(len(points)):
        for j in range(len(points[i])):
            for k in range(1, len(points[i][j])):
                if points[i][j][k - 1] is None or points[i][j][k] is None:
                    continue
                cv2.line(frame, points[i][j][k - 1], points[i][j][k], colors[i], 2)
                cv2.line(paintWindow, points[i][j][k - 1], points[i][j][k], colors[i], 2)

    cv2.imshow("Output", frame)
    cv2.imshow("Paint", paintWindow)

    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

def process_frame(frame):
    # Placeholder: the Air Canvas frame-processing logic goes here.
    # Returning the frame unchanged keeps generate_frames() runnable.
    return frame

4.3 TESTING

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Table 1 describes the various test cases for the system; their outputs can be seen in the output section. The sample test cases we designed for our system are as follows:
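The toolbar hit-testing logic in the implementation lends itself to simple unit tests. The sketch below factors the coordinate checks into a pure function so that each test case can assert on a fingertip coordinate; the helper name `toolbar_action` is ours for illustration, not part of the original code, but the button regions mirror those in Section 4.2:

```python
def toolbar_action(x, y):
    """Map a fingertip pixel coordinate to a toolbar action.

    Mirrors the button regions drawn in the implementation;
    the helper itself is illustrative, not from the original code.
    """
    if y > 65:
        return None          # below the toolbar: drawing area
    if 40 <= x <= 140:
        return "CLEAR"
    if 160 <= x <= 255:
        return "BLUE"
    if 275 <= x <= 370:
        return "GREEN"
    if 390 <= x <= 485:
        return "RED"
    if 505 <= x <= 600:
        return "YELLOW"
    return None

# Sample test cases in the spirit of Table 1
assert toolbar_action(100, 33) == "CLEAR"
assert toolbar_action(300, 33) == "GREEN"
assert toolbar_action(300, 200) is None   # drawing area, no button
```

Keeping such logic in pure functions means the test cases run without a camera or GUI, which makes the system far easier to verify automatically.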
CHAPTER 5
PROJECT PLAN
5.1 CHALLENGES IDENTIFIED

A. Fingertip detection

The existing system works only with bare fingers; there is no support for highlighters, paints, or related tools. Identifying and characterizing an object such as a finger in an RGB image without a depth sensor is a significant challenge.
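MediaPipe sidesteps part of this challenge by returning 21 normalized hand landmarks per detected hand, from which the index fingertip (landmark index 8) can be converted to pixel coordinates, exactly as the implementation in Chapter 4 does. A minimal sketch of that conversion (the frame size of 640x480 matches the constants used in the code):

```python
def fingertip_pixel(landmark_x, landmark_y, frame_w=640, frame_h=480):
    """Convert a normalized MediaPipe landmark (values in [0, 1]) to
    pixel coordinates. Landmark index 8 is the index fingertip."""
    return int(landmark_x * frame_w), int(landmark_y * frame_h)

# A landmark in the middle of the frame maps to the centre pixel
print(fingertip_pixel(0.5, 0.5))  # (320, 240)
```

Because the landmarks are normalized, the same conversion works for any camera resolution, which is what makes the landmark-based approach more robust than raw RGB segmentation of a finger.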

B. Lack of pen-up and pen-down motion

The system uses a single RGB camera to capture writing from above. Since depth sensing is not possible, pen-up and pen-down movements cannot be distinguished. Therefore, the fingertip's entire trajectory is traced, and the resulting image can look absurd and not be recognized by the model. Figure 1 shows the difference between a handwritten and an air-written 'G'.

Figure 1: Actual Character vs Trajectory
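The implementation approximates pen-up with a pinch heuristic: when the thumb tip comes vertically close to the index fingertip, the current stroke is broken by starting a new points deque. A minimal sketch of that decision, with the threshold of 30 pixels taken from the code in Section 4.2:

```python
def pen_is_up(thumb_y, index_y, threshold=30):
    """True when the thumb and index fingertips are vertically close,
    which the system treats as 'pen lifted' and uses to break the stroke."""
    return (thumb_y - index_y) < threshold

# Pinched fingertips (10 px apart) lift the pen; spread fingers keep drawing
assert pen_is_up(thumb_y=240, index_y=230)
assert not pen_is_up(thumb_y=300, index_y=230)
```

This is only a heuristic substitute for true depth sensing: it lets the user deliberately break a stroke, but it cannot recover genuine pen-up segments of natural air-writing, which is why the trajectory problem in Figure 1 remains.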

C. Controlling the real-time system

Using real-time hand gestures to change the system from one state to another requires very careful coding. In addition, the user must know many movements to control the application adequately.
CHAPTER 6
PROJECT SCREENSHOT
CHAPTER 7
CONCLUSION / FUTURE SCOPE
CONCLUSION:

Technology is improving day by day, and with present technology we can draw our imaginations by just waving a finger in the air. We have built an Air Canvas with which we can draw anything on the canvas by simply capturing the motion of a colored marker with a laptop camera. This type of technology will be especially useful for kids learning and remembering what a teacher has taught: by teaching kids on screen just by waving, they will easily remember whatever they saw on the screen.

In conclusion, developing an "Air Canvas" application using OpenCV offers a fascinating and practical way to explore the capabilities of computer vision technology. The application provides users with an innovative means of creating digital art by using their hand gestures to draw on a virtual canvas. Throughout the development process, several key points can be highlighted.

FUTURE SCOPE:

To enhance hand-gesture tracking, we would have to delve deeper into OpenCV. There are many different methods of contour analysis; for this particular algorithm, it may be worthwhile to look at the color histogram used to create the contours in question. Currently, whenever a new green object comes into the frame, the system takes the new object as the pencil; this will be improved so that the object in use remains the pencil even if a new high-intensity object enters the frame. In future, a tools tab will be added containing mathematical shapes such as triangles and rectangles for easy drawing, and additional features will allow lighter shades to be drawn when a particular color is selected. The system can enable effective communication between people and reduce the use of laptops and mobile phones by removing the need for writing devices. The major scope is in the teaching field, whether teaching online or on screen: without a mouse or any markers, drawing can easily be done on the screen. It can also be used for design purposes to create immersive or interactive designs.

1. Gesture & Shapes Vocabulary: Enhance the system by adding more gestures and shapes to increase
its functionality. Recognize a broader range of gestures for increased user convenience.

2. Multimodal Integration: Explore the integration of voice commands to create a multimodal HCI system. This allows users to combine hand gestures with voice instructions for a richer interaction experience.

3. Cross-Platform Compatibility: Extend compatibility to various operating systems and applications, making the system adaptable to different user needs.

4. User Customization: Develop features that enable users to customize gesture definitions, making the
system more personalized and versatile.
REFERENCES

1. Babu, S., Pragathi, B.S., Chinthala, U. and Maheshwaram, S., 2021. Subject Tracking with Camera Movement Using Single Board Computer. In 2020 IEEE-HYDCON (pp. 1-6). IEEE.

2. Shetty, M., Daniel, C.A., Bhatkar, M.K. and Lopes, O.P., 2020. Virtual Mouse Using Object Tracking. In 2020 5th International Conference on Communication and Electronics Systems (ICCES) (pp. 548-553). IEEE.

3. Chen, M., AlRegib, G. and Juang, B.H., 2016. Air-writing recognition—Part I: Modeling and recognition of characters, words, and connecting motions. IEEE Transactions on Human-Machine Systems, 46(3), pp. 403-413.

4. Kaur, H., Reddy, B.G.S., Sai, G.C. and Raj, A.S., 2019. A Comprehensive Overview of AR/VR by Writing in Air.

5. Zhou, L., 2019. Paper Dreams: An Adaptive Drawing Canvas Supported by Machine Learning (Doctoral dissertation, Massachusetts Institute of Technology).

6. Lee, J.B., Raza, A., Abdullah, M. and Jeon, S., 2020. Tracking of Flexible Brush Tip on Real Canvas: Silhouette-Based and Deep Ensemble Network Based Approaches. IEEE Access, 8, pp. 115778-115788.

7. Grossman, T., Balakrishnan, R., Kurtenbach, G., Fitzmaurice, G., Khan, A. and Buxton, B., 2002. Creating Principal 3D Curves with Digital Tape Drawing. In Proc. Conf. Human Factors in Computing Systems (CHI '02), pp. 121-128.

8. Bragatto, T.A.C., Ruas, G.I.S. and Lamar, M.V., 2006. Real-time Video-Based Finger Spelling Recognition System Using Low Computational Complexity Artificial Neural Networks. IEEE ITS, pp. 393-397.

9. Araga, Y., Shirabayashi, M., Kaida, K. and Hikawa, H., 2012. Real Time Gesture Recognition System Using Posture Classifier and Jordan Recurrent Neural Network. IEEE World Congress on Computational Intelligence, Brisbane, Australia.

10. Google AI Blog: On-Device, Real-Time Hand Tracking with MediaPipe.

11. http://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html

12. https://mediapipe.dev/ ; https://ieeexplore.ieee.org/abstract/document/9753939

13. A Practical Introduction to Computer Vision with OpenCV. Wiley.

14. https://opencv.org/

15. https://ieeexplore.ieee.org/abstract/document/9731873
