Immersive Virtual Painting: Pushing Boundaries in Real-Time Computer Vision Using OpenCV With C++
©PTI 2023 41
42 PROCEEDINGS OF THE RICE. HYDERABAD, 2023
OpenCV is a computer vision library that is widely used for various applications. It provides tools and algorithms for tasks such as object detection, face recognition, and image processing. OpenCV is used in combination with deep learning techniques, such as Convolutional Neural Networks (CNN), to achieve accurate and efficient results [10]. CNN, including variations like YOLO, has shown exceptional improvement in object detection, making it a crucial application of image processing [11], [12]. Object detection goes beyond simple classification and helps in localizing specific objects in images or videos. It has applications in various fields, including inventory management in retail and vehicle detection for autonomous vehicles [13], [14]. OpenCV also enables face detection and recognition using techniques like Haar-like features and principal component analysis (PCA) [15], [16]. Overall, OpenCV plays a significant role in computer vision by providing a wide range of tools and algorithms for different tasks [17].

II. LITERATURE REVIEW

A. Evolution of OpenCV and Its Impact on Computer Vision
Computer vision applications in transportation logistics and warehousing have a huge potential for process automation. A structured literature review on research in the field categorizes the literature based on the application and computer vision techniques used. The review also points out directions for future research and provides an overview of existing datasets and industrial solutions [18]. Face recognition is another important application of computer vision, and research in this area has focused on using cascade classifiers and principal component analysis for face detection and recognition [16]. In the construction industry, computer vision-based methods have been applied for safety monitoring, productivity improvement, progress monitoring, infrastructure inspection, and robotic applications. These methods involve various aspects of computer vision such as image processing, object classification, object detection, object tracking, pose estimation, and 3D reconstruction [19]. Machine learning plays a significant role in computer vision and image processing, contributing to domains such as surveillance systems, optical character recognition, robotics, and medical imaging. The review discusses the importance of machine learning, its applications, and open research areas in computer vision [20], [21]. Computer vision has been widely studied and applied across disciplines, with a focus on image recognition and understanding information from photos and videos [22].

B. Color Detection Techniques in Computer Vision
Color detection techniques in computer vision involve various methods and algorithms for identifying and analyzing colors in images. These techniques are used in applications such as computer control systems, gesture-based human-computer interaction, and color measurement in the textile industry. One approach is to determine the number and characteristics of color targets within an image using algorithms that rely on digital indexing code tables and decimal and binary numbers [23]. Another method involves filtering an image to isolate a predefined set of colors and then determining whether a desired color is present within the filtered image [24]. In the context of gesture-based human-computer interaction, real-time tracking of hand and finger motion can be achieved by calculating changes in pixel values of RGB colors from a video, without the need for artificial neural network training [25]. In the textile industry, computer vision techniques are used for color measurement and evaluation. These techniques involve digital image processing, device characterization and calibration, and various methods such as polynomial regression, neuro-fuzzy, and artificial neural network models for measuring and demonstrating the color of textiles [26]. Overall, color detection techniques in computer vision play a crucial role in a wide range of applications, enabling accurate analysis and understanding of color information in images [27].

C. Drawing Algorithms for Real-time Canvas Rendering
Drawing algorithms for real-time canvas rendering are a challenging area of computer graphics. The quality and efficiency of rendering algorithms need to be defined, measured, and compared. Fischer et al. propose the PADrend framework, which supports the systematic development, evaluation, adaptation, and comparison of rendering algorithms [28]. Kim et al. present a real-time panorama algorithm for mobile camera systems, which includes feature point extraction, feature tracking, rotation matrix estimation, and image warping [29]. Fütterling focuses on core algorithms for rendering, particularly ray tracing, to support massively parallel computer systems [30]. Yuan et al. introduce a dynamic measure to capture temporal image distortions in real-time rendering algorithms [31]. Eisemann et al. provide a guide to understanding the limitations, advantages, and suitability of different shadow algorithms for real-time to interactive rendering [32].

D. Integration of OpenCV with C++ for Real-time Applications
OpenCV can be integrated with C++ for real-time applications. Object recognition and detection can be achieved using OpenCV and Python 2.7, improving accuracy and efficiency [33]. Deep learning-based object detection, such as Region-Based Convolutional Neural Network (R-CNN) and You Only Look Once (YOLO), can also be implemented using Python, providing speed and real-time application use [34]. Face detection and recognition can be accomplished using Python and deep learning techniques, making it suitable for real-time applications [35]. Additionally, OpenCV can be used for real-time image processing in traffic flow counting and classification, allowing for smooth monitoring without disturbing traffic [36]. OpenCV and Flask can be utilized to build a cloth try-on system, enabling users to try on upper body clothes in real-time [37].

Despite the wide application of OpenCV in real-time scenarios, it is relatively rare to witness the integration of OpenCV with C++. Most research and practical implementations tend to favor Python due to its ease of use and rapid prototyping capabilities. However, as indicated by the existing literature, the combined power of OpenCV and C++ offers unique advantages. C++ provides high performance, low-level memory control, and the potential for optimized code execution. Despite its potential, there is a scarcity of research focusing on harnessing these advantages in conjunction with OpenCV. The research problem lies in
the underexplored territory of enhancing real-time computer vision applications through the integration of OpenCV with C++. This gap in research hinders the full exploration of the capabilities that arise from this combination, limiting the potential for highly efficient and high-performance real-time applications in various domains. The challenge is to delve into this unexplored realm, investigating the specific benefits and complexities that arise when OpenCV is tightly integrated with C++, thereby addressing the gap in the current body of knowledge.

Our research endeavors to redefine the landscape of virtual painting applications by delving into the unexplored integration of OpenCV with C++. While existing literature predominantly favors Python, our study aims to harness the unique advantages of C++ for real-time artistic interactions. Drawing inspiration from successful implementations like object recognition, deep learning-based techniques such as R-CNN and YOLO, and even face detection using Python, our research seeks to apply similar methodologies within the domain of virtual painting. By integrating OpenCV with C++, authors aim to enhance the accuracy and efficiency of color detection algorithms and real-time canvas rendering techniques. The research problem lies in the scarce exploration of this integration, limiting the development of immersive virtual painting experiences. Our research proposition is to leverage the combined power of OpenCV and C++ to optimize color detection, enabling precise strokes and vibrant hues in real-time virtual painting scenarios, ultimately advancing the field by addressing this research gap.

III. METHODOLOGY

Our research methodology is driven by a multidimensional approach, integrating key insights from the existing data to enhance the realm of virtual painting applications. First and foremost, authors focus on the intricate design of our Color Detection Algorithm, meticulously implemented using OpenCV in C++. Drawing inspiration from successful ventures in object recognition and deep learning-based techniques such as R-CNN and YOLO, authors seek to infuse our color detection mechanism with similar accuracy and efficiency. By leveraging the robust computational capabilities of C++, authors aim to optimize the color detection process, ensuring precise identification of specific hues within a live video feed.

Simultaneously, our research dives into the realm of the Drawing on Canvas Algorithm, building upon the foundations laid by previous studies. Taking cues from face detection techniques and real-time image processing in traffic flow counting, authors implement innovative approaches to translate detected colors into dynamic and vibrant strokes on a digital canvas. This implementation is driven by Python's flexibility and C++'s performance, ensuring seamless integration and high responsiveness.

The heart of our research lies in the seamless Integration of these Algorithms for Real-time Interaction. By carefully harmonizing the Color Detection Algorithm with the Drawing on Canvas Algorithm, authors create a symbiotic relationship, enabling users to engage in virtual painting activities with unparalleled accuracy and aesthetic finesse. Moreover, authors employ Optimization Techniques for Efficient Real-time Processing, inspired by the successful application of these techniques in traffic monitoring systems. Through meticulous analysis and refinement, authors strive to achieve optimal computational speed and accuracy, crucial elements in enhancing the user experience in real-time virtual painting scenarios.

In essence, our methodology is a strategic amalgamation of proven techniques and innovative approaches. By integrating the power of OpenCV with C++, authors aim to elevate virtual painting to new heights, crafting an experience that marries technical brilliance with artistic expression. Through this robust methodology, our research seeks to transform virtual painting into a captivating and immersive reality.

A. Color Detection Algorithm Design using OpenCV in C++
The proposed color detection approach builds upon existing techniques for object recognition like YOLO. Similar to YOLO, the algorithm leverages HSV color space thresholds and contour detection to identify color objects. However, optimizations like contour approximation and filtering are incorporated to improve real-time performance. The algorithm also draws inspiration from face detection techniques which also rely on detecting contours in different color spaces.

ALGORITHM PSEUDOCODE:
1. Convert the input image from BGR to HSV color space.
2. Iterate through the predefined color ranges in 'myColors':
   a. Extract the lower and upper HSV values for the current color range.
   b. Create a binary mask by thresholding the image using the lower and upper HSV values.
   c. Find contours in the binary mask to identify color blobs.
   d. For each contour:
      i. Calculate its area.
      ii. If the area is larger than a threshold (e.g., 1000 pixels):
         A. Approximate the contour to reduce the number of vertices.
         B. Calculate the bounding rectangle for the simplified contour.
         C. Determine the centroid of the bounding rectangle.
         D. Store the centroid coordinates and the index of the detected color range.
3. Return the list of detected points.

The color detection algorithm starts by converting the input image from the BGR color space to the HSV color space. It then iterates through the predefined color ranges (myColors). For each color range, it creates a binary mask by thresholding the image using the lower and upper HSV values of the current color. Contours are extracted from this mask, representing color blobs.
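Step 2.d of the pseudocode above can be illustrated in plain C++ without OpenCV. This is a minimal, dependency-free sketch: the types `Pt` and `DetectedPoint`, the helper `filterContours`, and the shoelace-formula stand-in for OpenCV's `contourArea` are our own illustrative choices, not the paper's actual code.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// A 2D point on the pixel grid.
struct Pt { double x, y; };

// A detected point: centroid coordinates plus the index of the
// color range that produced it (step 2.d.D of the pseudocode).
struct DetectedPoint { double x, y; int colorIndex; };

// Polygon area via the shoelace formula; stands in for OpenCV's
// contourArea() in this dependency-free sketch.
double contourArea(const std::vector<Pt>& c) {
    double a = 0.0;
    for (size_t i = 0; i < c.size(); ++i) {
        const Pt& p = c[i];
        const Pt& q = c[(i + 1) % c.size()];
        a += p.x * q.y - q.x * p.y;
    }
    return std::fabs(a) * 0.5;
}

// Steps 2.d.i-2.d.D: keep contours whose area exceeds the threshold,
// compute the bounding rectangle, and record its centroid together
// with the color index.
std::vector<DetectedPoint> filterContours(
        const std::vector<std::vector<Pt>>& contours,
        int colorIndex, double areaThreshold = 1000.0) {
    std::vector<DetectedPoint> out;
    for (const auto& c : contours) {
        if (c.empty() || contourArea(c) <= areaThreshold) continue;
        double minX = c[0].x, maxX = c[0].x, minY = c[0].y, maxY = c[0].y;
        for (const Pt& p : c) {
            minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
            minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        }
        out.push_back({(minX + maxX) / 2.0, (minY + maxY) / 2.0, colorIndex});
    }
    return out;
}
```

In the real pipeline the contours would come from OpenCV's `findContours` on the binary mask; the filtering and centroid logic is the same.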
Formula authors have used for converting RGB to HSV color space:

H = θ, if B ≤ G --------(1)
H = 360° − θ, if B > G --------(2)

where θ = cos⁻¹ [ 0.5·[(R − G) + (R − B)] / √((R − G)² + (R − B)(G − B)) ]

S = 1 − 3·min(R, G, B) / (R + G + B) --------(3)

V = max(R, G, B) --------(4)

Pseudocode for contour detection and filtering steps:

contours = findContours(mask)
for each contour c in contours:
    if contourArea(c) > threshold:
        contourApprox = approximateContour(c)
        boundingRect = getBoundingRect(contourApprox)

This pseudocode mathematically explains the HSV color conversion and contour processing steps in the color detection algorithm. The algorithm filters contours based on their area, ensuring they exceed a certain threshold to avoid noise. For valid contours, it approximates the shape, calculates the bounding rectangle, and determines the centroid. Detected points, along with their corresponding color indices, are stored in the newPoints vector. This algorithm enables precise identification of specific colors within the image, forming the foundation of the virtual painting application's interactive color detection mechanism.

C++ Code:

Mat colorDetection(Mat inputImage, vector<int> lowerHSV, vector<int> upperHSV) {
    Mat imgHSV;
    cvtColor(inputImage, imgHSV, COLOR_BGR2HSV);
    Mat mask;
    inRange(imgHSV, Scalar(lowerHSV[0], lowerHSV[1], lowerHSV[2]),
            Scalar(upperHSV[0], upperHSV[1], upperHSV[2]), mask);
    return mask;
}

Explanation:

1. Convert to HSV: The input image is first converted from BGR (OpenCV's default color format) to HSV (Hue, Saturation, Value) color space. This is because HSV separates the intensity information (Value) from the color information (Hue and Saturation), making it easier to work with colors.

2. Color Thresholding: For each predefined color range (defined in myColors), a lower and upper HSV value is specified. The inRange function is used to create a binary mask where the white pixels represent the detected color range, and black pixels represent other colors.

3. Contour Detection: The contours (boundaries of white areas) in the binary mask are found using the findContours function. Contours are sets of points that represent the boundaries of objects in an image.

4. Approximation and Filtering: Contours that have an area larger than 1000 pixels are approximated to reduce the number of vertices using the approxPolyDP function. This approximation simplifies the contour shape. The resulting points are then filtered and stored.
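The conversion formulas (1)-(4) can be implemented directly. The following dependency-free sketch assumes R, G, B are normalized to [0, 1] and returns the hue in degrees; note that OpenCV's cvtColor uses its own faster formulation and scales H to [0, 180] for 8-bit images, so this is an illustration of the equations, not of OpenCV's internals.

```cpp
#include <cmath>
#include <algorithm>

// Direct implementation of Eqs. (1)-(4), with R, G, B in [0, 1].
struct Hsv { double h, s, v; };

Hsv rgbToHsv(double r, double g, double b) {
    Hsv out;
    double num = 0.5 * ((r - g) + (r - b));
    double den = std::sqrt((r - g) * (r - g) + (r - b) * (g - b));
    double theta = 0.0;
    if (den > 1e-12)  // gray pixels: hue is undefined, use 0 by convention
        theta = std::acos(num / den) * 180.0 / M_PI;
    out.h = (b <= g) ? theta : 360.0 - theta;  // Eqs. (1)-(2)
    double sum = r + g + b;
    out.s = (sum > 1e-12)
          ? 1.0 - 3.0 * std::min({r, g, b}) / sum  // Eq. (3)
          : 0.0;
    out.v = std::max({r, g, b});                   // Eq. (4)
    return out;
}
```

For example, pure red (1, 0, 0) yields H = 0°, pure green (0, 1, 0) yields H = 120°, and pure blue (0, 0, 1) yields H = 240°, each with S = 1 and V = 1.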
extension and focused application of these methods in the specific domain of virtual painting.

Algorithm Pseudocode:
1. Iterate through the list of detected points and their corresponding colors:
   a. Retrieve the coordinates and color index for the current point.
   b. Using the color index, obtain the corresponding drawing color from 'myColorValues'.
   c. Draw a filled circle on the canvas image at the specified coordinates using the obtained color.
2. Repeat step 1 for all detected points.

The Drawing on Canvas Algorithm operates in a straightforward manner, leveraging the detected points from the Color Detection Algorithm. For each detected point, the algorithm retrieves its coordinates and the corresponding color index. Using this index, the algorithm fetches the appropriate drawing color from the 'myColorValues' vector. Subsequently, the algorithm draws a filled circle on the canvas image at the specified coordinates, employing the obtained color. By repeating this process for all detected points, the algorithm renders dynamic and vibrant strokes on the digital canvas in real-time. This implementation ensures that the virtual painting experience is visually engaging and responsive, capturing the essence of the detected colors and translating them into aesthetically pleasing strokes on the canvas.

Formula for mapping detected colors to RGB values:

displayColor = colorPalette[detectedColorIndex]

where colorPalette is a lookup table mapping indices to RGB color values.

Pseudocode for drawing circles at detected points:

for each point p in detectedPoints:
    x, y = getCoordinates(p)
    color = getColor(p)
    circle(img, (x, y), radius, color)

The function detects specific colors (purple and green) using predefined HSV color ranges. Detected points, representing the centroids of colored objects, are stored in the 'newPoints' vector. The 'drawOnCanvas' function then draws filled circles at these detected points on the 'img' matrix, simulating virtual paint strokes.

Figure 2: Illustration of Real-Time Virtual Paint

Algorithm Pseudocode:
1. Initialize the OpenCV video capture object 'cap' to capture video from the default camera (camera index 0).
2. Create an empty matrix 'img' to store the video frames.
3. Initialize vectors 'myColors' and 'myColorValues' to store the defined color ranges and their corresponding display colors.
4. Create an empty vector 'newPoints' to store the detected points (x-coordinate, y-coordinate, color index).
5. Start an infinite loop to continuously capture video frames and perform real-time interaction:
   a. Read a frame from the video capture object and store it in the 'img' matrix.
   f. Display the updated 'img' matrix with virtual paint strokes in a window titled "Image".
   g. Wait for 1 millisecond to allow for user interaction and continue the loop.

The integration of algorithms involves a continuous loop where frames are captured, colors are detected, and virtual paint strokes are rendered in real-time. This interaction offers users an immersive experience, allowing them to paint virtually by moving colored objects in front of the camera. The seamless integration of color detection and canvas rendering algorithms ensures a responsive and visually engaging virtual painting environment. As can be seen in Figure 2, virtual painting succeeds after integrating all the algorithms.

D. Optimization Techniques for Efficient Real-time Processing
Within the context of our Virtual Painter project, the seamless interaction and responsiveness of the application are paramount. Leveraging a blend of advanced optimization techniques, our real-time processing pipeline has been fine-tuned for optimal performance:

1. Parallel Processing: To handle the computationally intensive tasks of color detection and canvas rendering, authors employed multi-threading. By parallelizing these operations, the system maximizes the utilization of CPU cores, ensuring rapid analysis and rendering of the video feed.

2. Memory Efficiency: Careful management of memory resources is crucial. Through meticulous memory allocation strategies and streamlined data structures, authors minimize memory overhead. This efficient memory usage ensures that the system runs smoothly, even during prolonged usage.

3. Algorithmic Refinement: Continuous refinement of contour detection and approximation algorithms is a cornerstone of our optimization efforts. By enhancing these algorithms, authors reduce unnecessary computations, enabling swift and accurate identification of colors and shapes in real-time.

4. Hardware Acceleration: Harnessing the power of specialized hardware components like GPUs and NPUs significantly accelerates image processing tasks. Utilizing these resources ensures that complex computations are handled swiftly, preserving the real-time nature of the virtual painting experience.

5. Dynamic Feedback Mechanisms: The system incorporates real-time feedback loops, constantly analyzing performance metrics and user interactions. This dynamic adjustment allows the application to adapt, optimizing processing based on user behavior and ensuring an intuitive and responsive interface.

6. Code Profiling and Optimization: Regular code profiling sessions identify performance bottlenecks. By pinpointing specific areas that demand optimization, our development team focuses their efforts effectively, guaranteeing that the application operates at peak efficiency.

Incorporating these optimization techniques, our Virtual Painter project delivers a fluid and immersive virtual painting experience. Users can enjoy vibrant and interactive painting sessions in real-time, thanks to the seamless integration of these strategies, ensuring that artistic expression is unhindered by processing delays.

IV. RESULTS AND DISCUSSION

The results demonstrate the effectiveness of our proposed approach in enabling real-time and immersive virtual painting experiences.

A. Color Detection Accuracy
The color detection algorithm was evaluated on a dataset of 5000 frames containing the target colors purple and green. As shown in Table 1, the algorithm achieved detection rates of 97.4% for purple and 96.1% for green. The high accuracy highlights the precision of the color detection technique in identifying specific hues critical for the virtual painting application. In Table 1, the Color Detection Accuracy is evaluated for purple and green colors across 5000 frames. The high detection rates (97.4% for Purple and 96.1% for Green) demonstrate the system's precision in identifying specific hues in real-time. The small number of missed points indicates the algorithm's effectiveness, ensuring that the majority of color points are accurately recognized, which is crucial for the Virtual Painter application's performance and user experience.

Table 1: Color Detection Accuracy

Color    Total Frames    Detected Points    Missed Points    Detection Rate
Purple   5000            4870               130              97.4%
Green    5000            4805               195              96.1%

Explanation:

Color: Indicates the specific color analyzed, either Purple or Green.

Total Frames: Represents the total number of frames processed during the evaluation period for each color.

Detected Points: Denotes the number of color points correctly identified by the color detection algorithm within the analyzed frames.

Missed Points: Represents the count of color points present in the frames but not detected by the system.

Figure 3: Virtual painting in real-time through webcam
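The Drawing on Canvas steps described above can be sketched in plain C++ by treating the canvas as a flat pixel buffer. `drawFilledCircle` here is a minimal stand-in for OpenCV's `cv::circle` with a filled thickness, and the `Color`, `Point3`, and `Canvas` types are illustrative helpers, not the paper's exact code.

```cpp
#include <vector>
#include <cstdint>

// Packed BGR color, matching OpenCV's default channel order.
struct Color { std::uint8_t b, g, r; };

// A detected point as produced by the color detector:
// centroid (x, y) plus an index into the display palette.
struct Point3 { int x, y, colorIndex; };

// Flat pixel canvas; pixel (x, y) lives at index y * width + x.
struct Canvas {
    int width, height;
    std::vector<Color> pixels;
    Canvas(int w, int h) : width(w), height(h), pixels(w * h, Color{0, 0, 0}) {}
};

// Minimal stand-in for cv::circle with FILLED thickness: paint every
// in-bounds pixel within `radius` of the center.
void drawFilledCircle(Canvas& c, int cx, int cy, int radius, Color col) {
    for (int y = cy - radius; y <= cy + radius; ++y) {
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (x < 0 || y < 0 || x >= c.width || y >= c.height) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                c.pixels[y * c.width + x] = col;
        }
    }
}

// The Drawing on Canvas Algorithm: one filled circle per detected
// point, colored via the lookup displayColor = palette[colorIndex].
void drawOnCanvas(Canvas& c, const std::vector<Point3>& points,
                  const std::vector<Color>& palette, int radius = 10) {
    for (const Point3& p : points)
        drawFilledCircle(c, p.x, p.y, radius, palette[p.colorIndex]);
}
```

In the actual application the canvas is the live `img` matrix, so the strokes are composited over the camera feed each frame.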
Detection Rate: Indicates the accuracy of the color detection process, calculated by dividing the detected points by the total color points in the frames and expressed as a percentage.

Figure 3 above shows the authors' successful implementation of virtual painting through a real-time webcam.

Figure 4: Bar chart showing color-wise detection accuracy

B. Real-time Performance
Performance of the Virtual Painter was greatly improved by the modified real-time interaction techniques. The color identification algorithm identified 4870 out of 5000 color points, a high accuracy rating of 97.4%, for the Purple hue. The system recognized 4805 out of 5000 points for the Green color, giving a detection rate of 96.1%. These findings highlight how accurate the technology is at identifying particular colors. Additionally, speedy processing was made possible by the enhanced algorithms, with the Purple color taking on average 15 milliseconds per frame and the Green color 12 milliseconds, as can be seen in Table 2.

Table 2: Color Detection and Real-time Interaction Performance

Color    Missed Points    Detection Rate    Average Processing Time per Frame (ms)
Purple   130              97.4%             15
Green    195              96.1%             12

Explanation:

Average Processing Time per Frame: Shows the average time taken by the color detection algorithm to process each frame for the specified color.

This quick processing made it possible for strokes to be shown smoothly and in real time, giving the impression of instant painting. The seamless connection was confirmed by user comments, which highlighted the system's responsiveness and capacity to provide an immersive painting environment.

In conclusion, a user-friendly interface, made possible by quick real-time processing and high color detection accuracy, allowed for a seamless and pleasurable virtual painting experience. These results demonstrate how well the enhanced algorithms balance accuracy and speed, which is essential for interactive applications like the Virtual Painter. As can be seen in Figure 4, authors can interact with and use the Virtual Painter just by using a webcam.

C. Comparative Evaluation
In comparison to traditional virtual painting platforms that necessitate manual color selection, our automated color detection approach revolutionizes the painting experience. By seamlessly identifying specific hues in real-time, users are liberated from the constraints of manual selection, leading to a more intuitive, natural, and immersive painting process.

Seamless Interaction:
Unlike platforms relying on manual color selection, our system automatically recognizes colors from the user's environment. This seamless integration empowers users to focus solely on their creative expressions, eliminating interruptions for color adjustments. With colors instantly detected, the painting process becomes uninterrupted, allowing for a continuous flow of creativity.

Dual-Handed Simultaneous Painting:
The efficiency of our color detection algorithms allows users to paint simultaneously with both hands, a feat difficult to achieve with manual color selection methods. This innovative feature transforms the painting experience into a dynamic and expressive activity. Users can effortlessly switch between colors, experimenting with various hues and shades, enhancing the overall creative freedom.

Effortless Tool-Free Painting:
By eliminating the need for manual color selection tools, our system streamlines the painting process. Users are no longer burdened with the task of selecting colors, enabling a more fluid and intuitive interaction with the virtual canvas. This tool-free approach enhances the accessibility of the virtual painting experience, making it user-friendly for individuals of all skill levels.

Enhanced Immersion and Creativity:
The elimination of color selection disruptions creates an environment conducive to immersive creativity. Users can explore their artistic visions without constraints, leading to more authentic and expressive artworks. This enhanced immersion fosters a sense of freedom, encouraging users to experiment with different styles and techniques, resulting in a richer and more diverse array of virtual paintings.

To put it all together, our automated color detection approach not only enhances the efficiency of the painting process but also fundamentally transforms the way users engage with virtual painting platforms. The simultaneous use of both hands, freedom from manual tools, and
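The detection rate reported in Tables 1 and 2 is simply the detected points over the total color points, expressed as a percentage; a small helper makes the metric explicit and reproduces the tables' figures.

```cpp
#include <cmath>

// Detection rate as defined in the text: detected points divided by
// the total color points in the frames, expressed as a percentage.
double detectionRate(int detectedPoints, int totalPoints) {
    return 100.0 * detectedPoints / totalPoints;
}

// Missed points are simply the complement of detected points.
int missedPoints(int totalPoints, int detectedPoints) {
    return totalPoints - detectedPoints;
}
```

With the paper's numbers, detectionRate(4870, 5000) gives 97.4% for Purple and detectionRate(4805, 5000) gives 96.1% for Green, matching Tables 1 and 2.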
uninterrupted creativity contribute to a more immersive and enjoyable painting experience, setting our system apart as a cutting-edge and user-centric virtual painting solution.

Implications of High Accuracy:
The exceptional color detection accuracy, with up to 97.4% precision in identifying specified hues, has significant implications for the user experience. By reliably recognizing colors, the system enables users to paint with realistic and vibrant results that precisely match their creative visions. This level of accuracy is a marked improvement over manual color selection interfaces, which are prone to perceptual errors and disconnects between intended and actual colors. The precision empowers users to paint without disruptive corrections, facilitating uninterrupted creative flow.

Implications of Real-Time Performance:
The optimized algorithms achieve remarkable real-time performance, analyzing frames within 15 ms on average. This ultra-low latency directly enables more immersive painting interactions. The immediacy of the color detection and rendering allows users to paint expressively, switching between brushes and colors without any lag or delays. This real-time experience matches the natural tactility and fluidity of physical painting, bringing virtual art closer to its traditional analog counterpart. The problem statement highlighted the need for tight integration of computer vision and rendering techniques; the system's real-time performance validates the success of the proposed approach in this regard.

Relating the accuracy and real-time results back to the goals of immersive experience and human-computer integration stated in the problem statement reinforces how the results address the research objectives.

Turning to the question of whether C++ or Python is better suited for this task, the insights are as follows. In our comparative evaluation, authors benchmarked the color detection algorithm implemented in both C++ and Python on a dataset of 5000 frames. The results, as depicted in Table 3, revealed a substantial performance advantage in favor of C++. The average processing time for detecting the purple color reduced from 62 ms in Python to 15 ms in C++, while for the green color, it decreased from 58 ms in Python to 12 ms in C++. This 3-4x speedup emphasizes the superior efficiency of C++ in real-time computer vision applications.

Table 3: Comparison of Color Detection Processing Time

Language   Average Processing Time - Purple (ms)   Average Processing Time - Green (ms)
C++        15                                      12
Python     62                                      58

Reasons for Efficiency Gains in C++:

1. Faster Program Execution: C++ programs are compiled, leading to faster execution compared to interpreted languages like Python. The compiled nature of C++ eliminates the need for interpretation during runtime, resulting in significant speed improvements.

2. Lower Function Call Overhead: C++ has lower function call overhead than Python. Function calls in C++ are more direct and have less computational cost, contributing to faster execution.

3. Parallel Processing and Hardware Optimization: C++ allows the utilization of parallel processing and hardware optimizations, leveraging multicore processors efficiently. This parallelism enhances the algorithm's speed, especially in tasks that can be parallelized.

4. Fine Low-Level Control: C++ provides finer control over memory and data structures. Low-level optimizations are possible in C++, allowing developers to fine-tune algorithms for maximum efficiency.

Significance and Implications: Our results align with existing research highlighting the advantages of C++ for latency-sensitive and resource-constrained applications requiring real-time processing. By harnessing the power of C++ and its seamless integration with OpenCV, our system achieves remarkable efficiency gains, enabling a smoother and lag-free virtual painting experience for users. This research underscores the pivotal role of compiled languages like C++ in pushing the boundaries of real-time computer vision for innovative and creative applications.

Overall, the empirical results validate our approach of combining real-time computer vision algorithms to create immersive virtual painting interactions. The high color accuracy and processing speeds demonstrate a leap forward in digitizing the artistic process.

Future Work: Our current research represents a significant step forward in the evolution of virtual painting technologies, but the journey doesn't end here. There are exciting avenues of future work that can elevate this innovation to new heights and provide even more enriching experiences for users.

1. Advanced Algorithmic Refinements: Integrating machine learning methods into our algorithms is one intriguing area for future research. The system can adapt and learn from user interactions by utilizing machine learning models, which will improve the accuracy of color identification and further optimize frame rates. A more natural and tailored virtual painting experience can result from this adaptive learning.

2. Specialized Hardware Integration: There is a lot of promise in investigating the integration of specialized hardware like TPUs (Tensor Processing Units) and GPUs. Processing capability can be greatly increased by these specialized hardware accelerators, enabling real-time
Real-time analysis of high-resolution video feeds then becomes feasible, and improved hardware makes a wider range of sophisticated and detailed virtual artworks possible, giving artists a larger creative space.

3. User Engagement and Accessibility Studies: Beyond technical advances, future work can explore human-computer interaction. In-depth investigation of user engagement, creativity trends, and accessibility may reveal valuable insights. Taking into account how users interact with the system, their creative preferences, and any barriers they encounter would allow tailored improvements that keep the technology inclusive and accessible to a wide range of user demographics.

4. Cross-Disciplinary Collaborations: Collaborations with psychologists, educators, and artists can bring multifaceted perspectives. Artists' creative insights can guide the development of features that appeal to the artistic community; psychologists can contribute to understanding user behavior and preferences, ensuring a user-centered design approach; and educators can advise on the instructional value of the system, adapting virtual painting experiences for educational settings.

5. Exploration of Augmented Reality (AR) and Virtual Reality (VR): An intriguing prospect is incorporating our real-time color identification methods into AR and VR settings. Immersed in augmented or virtual environments, users could interact with artworks in three dimensions, yielding a more immersive and tactile artistic experience.

In essence, cutting-edge hardware, sophisticated algorithms, and a thorough understanding of user needs and preferences will shape the future of virtual painting. By persistently pushing the boundaries of technology and human-computer interaction, we can unleash the full creative potential of virtual painting and usher in a new era of creative expression and innovation.

V. CONCLUSION

In this research, the authors have ushered in a new era of interactive virtual painting by harnessing advanced computer vision techniques. Our system achieves high color detection accuracy, detecting 97.4% and 96.1% of purple and green color points respectively across 5000 test frames. The algorithms are also fast, processing each frame in 15 ms and 12 ms on average for purple and green colors. A comparative analysis reveals substantial performance gains from adopting C++ over Python: by cutting color detection time roughly four- to five-fold, our C++ implementation processes frames in 15 ms and 12 ms on average, against Python's 62 ms and 58 ms. This efficiency ensures a seamless and responsive virtual painting experience and lays the foundation for a new level of digital creativity. User feedback underscores the platform's transformative nature: users praise the system's precision, instantaneous responsiveness, and natural painting interactions free of disruptive color-selection steps. By automating color detection and rendering, we have transformed passive virtual painting into an engaging and immersive activity, fostering new levels of creative expression.

Our integration of real-time computer vision algorithms, drawing techniques, and optimization methods has yielded a capable virtual painting system. This work expands the horizons of interactive digital art platforms and reshapes human-computer creative interaction. Our research combines technical ingenuity with usability principles, pointing toward virtual artistic experiences that transcend physical limitations. It is, however, not the final destination: future enhancements lie in machine learning refinements and specialized hardware integration, promising further improvements in color detection accuracy and frame rates. Extensive user studies evaluating engagement, usability, and accessibility will offer valuable insights and help ensure inclusivity and user satisfaction. Moreover, augmented and virtual reality implementations are poised to deliver even more immersive experiences, blurring the boundary between the virtual and physical worlds.

In conclusion, this research sets new benchmarks in real-time virtual painting interactions. Through the interplay of technical capability and human creativity, our work paves the way for the next generation of immersive digital art platforms.

ACKNOWLEDGMENT

The authors express their sincere gratitude to Dr. Le Anh Ngoc for exceptional guidance and mentorship throughout this research journey. Dr. Le Anh Ngoc's profound expertise in computer vision has been instrumental in shaping the innovative aspects of our virtual painting project, and their insightful feedback and unwavering support have not only enhanced the technical depth of our work but also inspired us to explore new avenues in real-time computer vision applications. The authors are deeply appreciative of these invaluable contributions, which have significantly enriched the quality and scope of our research.

REFERENCES

[1] E. Peruzzo et al., “Interactive Neural Painting,” Computer Vision and Image Understanding, vol. 235, p. 103778, Oct. 2023, doi: 10.1016/j.cviu.2023.103778.
[2] “Interactive painting wall,” Dec. 2020, Accessed: Oct. 04, 2023. [Online]. Available: https://typeset.io/papers/interactive-painting-wall-b8axvzlew8
[3] J. Singh, L. Zheng, C. Smith, and J. Echevarria, “Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing,” arXiv, Aug. 17, 2022. doi: 10.48550/arXiv.2208.08092.
[4] S. A.-K. Hussain, “Intelligent Image Processing System Based on Virtual Painting,” Journal La Multiapp, vol. 3, no. 6, Art. no. 6, 2022, doi: 10.37899/journallamultiapp.v3i6.754.
[5] “Real-time displaying method of detection process of azotometer color determination method,” Dec. 2014, Accessed: Oct. 04,
2023. [Online]. Available: https://typeset.io/papers/real-time-displaying-method-of-detection-process-of-vvmggpa702
[6] A. Albajes-Eizagirre, A. Soria-Frisch, and V. Lazcano, “Real-time color tone detection on video based on the fuzzy integral,” in International Conference on Fuzzy Systems, Jul. 2010, pp. 1–7. doi: 10.1109/FUZZY.2010.5584123.
[7] M. E. Moumene, K. Benkedadra, and F. Z. Berras, “Real Time Skin Color Detection Based on Adaptive HSV Thresholding,” Journal of Mobile Multimedia, pp. 1617–1632, Jul. 2022, doi: 10.13052/jmm1550-4646.1867.
[8] M. S. Prathima, S. P. Milena, and P. Rm, “Imposter detection with canvas and WebGL using Machine learning,” in 2023 2nd International Conference for Innovation in Technology (INOCON), Mar. 2023, pp. 1–6. doi: 10.1109/INOCON57975.2023.10101070.
[9] “Sensors | Free Full-Text | Real-Time Detection and Measurement of Eye Features from Color Images.” Accessed: Oct. 04, 2023. [Online]. Available: https://www.mdpi.com/1424-8220/16/7/1105
[10] V.-D. Ly and H.-S. Vu, “A Flexible Approach for Automatic Door Lock Using Face Recognition,” in Annals of Computer Science and Information Systems, 2022, pp. 157–163. Accessed: Nov. 05, 2023. [Online]. Available: https://annals-csis.org/proceedings/rice2022/drp/18.html
[11] S. Mishra and L. T. Thanh, “SATMeas - Object Detection and Measurement: Canny Edge Detection Algorithm,” in Artificial Intelligence and Mobile Services – AIMS 2022, X. Pan, T. Jin, and L.-J. Zhang, Eds., Lecture Notes in Computer Science. Cham: Springer International Publishing, 2022, pp. 91–101. doi: 10.1007/978-3-031-23504-7_7.
[12] M. Ponika, K. Jahnavi, P. S. V. S. Sridhar, and K. Veena, “Developing a YOLO based Object Detection Application using OpenCV,” in 2023 7th International Conference on Computing Methodologies and Communication (ICCMC), Feb. 2023, pp. 662–668. doi: 10.1109/ICCMC56507.2023.10084075.
[13] S. Mishra, C. S. Minh, H. Thi Chuc, T. V. Long, and T. T. Nguyen, “Automated Robot (Car) using Artificial Intelligence,” in 2021 International Seminar on Machine Learning, Optimization, and Data Science (ISMODE), Jan. 2022, pp. 319–324. doi: 10.1109/ISMODE53584.2022.9743130.
[14] “Computer Vision Application Analysis based on Object Detection,” IJSREM. Accessed: Oct. 04, 2023. [Online]. Available: https://ijsrem.com/download/computer-vision-application-analysis-based-on-object-detection/
[15] S. Mishra, N. T. B. Thuy, and C.-D. Truong, “Integrating State-of-the-Art Face Recognition and Anti-Spoofing Techniques into Enterprise Information Systems,” in Artificial Intelligence and Mobile Services – AIMS 2023, Y. Yang, X. Wang, and L.-J. Zhang, Eds., Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2023, pp. 71–84. doi: 10.1007/978-3-031-45140-9_7.
[16] L. Bai, T. Zhao, and X. Xiu, “Exploration of computer vision and image processing technology based on OpenCV,” in 2022 International Seminar on Computer Science and Engineering Technology (SCSET), Jan. 2022, pp. 145–147. doi: 10.1109/SCSET55041.2022.00042.
[17] K. Patel, A. Patil, A. Shourya, R. K. Malviya, and M. Solanki, “Deep Learning for Computer Vision: A Brief Overview of YOLO,” IJARSCT, pp. 403–408, May 2022, doi: 10.48175/IJARSCT-3943.
[18] A. Naumann, F. Hertlein, L. Dörr, S. Thoma, and K. Furmans, “Literature Review: Computer Vision Applications in Transportation Logistics and Warehousing,” arXiv, Jun. 07, 2023. doi: 10.48550/arXiv.2304.06009.
[19] Z. Jiang and J. I. Messner, “Computer Vision Applications In Construction And Asset Management Phases: A Literature Review,” Journal of Information Technology in Construction (ITcon), vol. 28, no. 9, pp. 176–199, Apr. 2023, doi: 10.36680/j.itcon.2023.009.
[20] A. Khan, A. Laghari, and S. Awan, “Machine Learning in Computer Vision: A Review,” EAI Endorsed Transactions on Scalable Information Systems, vol. 8, no. 32, Apr. 2021, Accessed: Oct. 04, 2023. [Online]. Available: https://eudl.eu/doi/10.4108/eai.21-4-2021.169418
[21] H.-S. Vu and V.-H. Nguyen, “Safety-Assisted Driving Technology Based on Artificial Intelligence and Machine Learning for Moving Vehicles in Vietnam,” in Annals of Computer Science and Information Systems, 2022, pp. 279–284. Accessed: Nov. 05, 2023. [Online]. Available: https://annals-csis.org/proceedings/rice2022/drp/05.html
[22] S. M. Shelke, I. S. Pathak, A. P. Sangai, D. V. Lunge, K. A. Shahale, and H. R. Vyawahare, “A Review Paper on Computer Vision,” IJARSCT, pp. 673–677, Mar. 2023, doi: 10.48175/IJARSCT-8901.
[23] D. A. Taban, A. A. Al-Zuky, A. H. AlSaleh, and H. J. Mohamad, “Different shape and color targets detection using auto indexing images in computer vision system,” IOP Conf. Ser.: Mater. Sci. Eng., vol. 518, no. 5, p. 052001, May 2019, doi: 10.1088/1757-899X/518/5/052001.
[24] “Systems and methods for color recognition in computer vision systems,” Jul. 2014, Accessed: Oct. 04, 2023. [Online]. Available: https://typeset.io/papers/systems-and-methods-for-color-recognition-in-computer-vision-1ev6walrk4
[25] C. Dhule and T. Nagrare, “Computer Vision Based Human-Computer Interaction Using Color Detection Techniques,” in 2014 Fourth International Conference on Communication Systems and Network Technologies, Apr. 2014, pp. 934–938. doi: 10.1109/CSNT.2014.192.
[26] A. Shams-Nateri and E. Hasanlou, “Computer vision techniques for measuring and demonstrating color of textile,” in Applications of Computer Vision in Fashion and Textiles, W. K. Wong, Ed., The Textile Institute Book Series. Woodhead Publishing, 2018, pp. 189–220. doi: 10.1016/B978-0-08-101217-8.00008-7.
[27] “Color in Computer Vision: Fundamentals and Applications,” Aug. 2012, Accessed: Oct. 04, 2023. [Online]. Available: https://typeset.io/papers/color-in-computer-vision-fundamentals-and-applications-2mcj19jtdt
[28] M. Fischer, C. Jähn, F. Meyer auf der Heide, and R. Petring, “Algorithm Engineering Aspects of Real-Time Rendering Algorithms,” in Algorithm Engineering: Selected Results and Surveys, L. Kliemann and P. Sanders, Eds., Lecture Notes in Computer Science. Cham: Springer International Publishing, 2016, pp. 226–244. doi: 10.1007/978-3-319-49487-6_7.
[29] B. S. Kim, S. H. Lee, and N. I. Cho, “Real-time panorama canvas of natural images,” IEEE Transactions on Consumer Electronics, vol. 57, no. 4, pp. 1961–1968, Nov. 2011, doi: 10.1109/TCE.2011.6131177.
[30] “Scalable Algorithms for Realistic Real-time Rendering,” Semantic Scholar. Accessed: Oct. 04, 2023. [Online]. Available: https://www.semanticscholar.org/paper/Scalable-Algorithms-for-Realistic-Real-time-F%C3%BCtterling/6190ec44c6b350be854d644a4c2ed74e90e5eb56
[31] P. Yuan, M. Green, and R. W. H. Lau, “Dynamic image quality measurements of real-time rendering algorithms,” in Proceedings IEEE Virtual Reality (Cat. No. 99CB36316), Mar. 1999, pp. 83–. doi: 10.1109/VR.1999.756935.
[32] E. Eisemann, U. Assarsson, M. Schwarz, and M. Wimmer, “Shadow Algorithms for Real-time Rendering,” 2010, doi: 10.2312/egt.20101068.
[33] V. Rakesh, P. Chilukuri, P. Vaishnavi, P. Sreekaran, P. Sujala, and D. R. Krishna Yadav, “Real Time Object Recognition Using OpenCV and Numpy in Python,” in 2023 International Conference on Innovative Data Communication Technologies and Application (ICIDCA), Mar. 2023, pp. 421–426. doi: 10.1109/ICIDCA56705.2023.10099584.
[34] B. M U, H. Raghuram, and Mohana, “Real Time Object Distance and Dimension Measurement using Deep Learning and OpenCV,” in 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS), Feb. 2023, pp. 929–932. doi: 10.1109/ICAIS56108.2023.10073888.
[35] “Real-time Face Recognition System using Python and OpenCV,” IJSREM. Accessed: Oct. 04, 2023. [Online]. Available: https://ijsrem.com/download/real-time-face-recognition-system-using-python-and-opencv/
[36] P. Bailke and S. Divekar, “REAL-TIME MOVING VEHICLE COUNTER SYSTEM USING OPENCV AND PYTHON,” IJEAST, vol. 6, no. 11, pp. 190–194, Mar. 2022, doi: 10.33564/IJEAST.2022.v06i11.036.
[37] D. Davis, D. Gupta, X. Vazacholil, D. Kayande, and D. Jadhav, “R-CTOS: Real-Time Clothes Try-on System Using OpenCV,” in 2022 2nd Asian Conference on Innovation in Technology (ASIANCON), Aug. 2022, pp. 1–4. doi: 10.1109/ASIANCON55314.2022.9909352.