Virtual Dressing
CHAPTER 1
INTRODUCTION
1.1 VISION
To pioneer a global shift in the fashion industry, bridging the gap between the real and virtual worlds. Our vision
is to provide individuals with an unparalleled virtual dressing room experience that transcends the limitations
of traditional clothing shopping. By seamlessly integrating cutting-edge technologies, we aim to redefine how
users interact with fashion, offering a dynamic, immersive, and convenient virtual wardrobe. Our vision extends
beyond a mere technological solution. We aspire to create a cultural shift where the boundaries between physical
and virtual experiences in the fashion realm are blurred, fostering a sense of creativity, self-expression, and
personalized style for users worldwide.
1.2 MISSION
Our mission is to empower users with an innovative and user-friendly virtual dressing room solution, leveraging
advanced technologies like precise body detection and posture assessment. We strive to enhance the
accessibility and time efficiency of trying on clothing, revolutionizing the online shopping landscape. Through
continuous innovation, we aim to provide a secure platform that ensures a seamless, private, and satisfying
virtual shopping experience. Our mission involves not just technological advancement but also a commitment
to ethical practices and user satisfaction. We are dedicated to creating a platform that not only meets the needs
of the modern consumer but also respects privacy and promotes a positive and enjoyable online shopping
journey.
1.3 OBJECTIVES
1. Develop and refine advanced virtual dressing room software with a focus on precise body detection and posture assessment.
2. Overcome the challenge of accurately aligning virtual clothing with the user's body, ensuring a realistic and satisfying try-on experience.
3. Improve the accessibility of virtual dressing rooms for online shoppers, minimizing time-consuming processes and enhancing convenience.
4. Utilize web cameras and cutting-edge technologies to provide a secure and cost-effective virtual try-on solution.
5. Revolutionize the online shopping experience by eliminating the need for physical trial rooms, ensuring privacy and convenience for users.
Deep learning is a family of machine learning methods based on artificial neural networks with multiple layers. Learning can be supervised, semi-supervised, or unsupervised. Deep learning architectures such as deep neural networks, deep belief networks, and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they have produced results comparable to, and in some cases better than, human experts.
Artificial neural networks were originally inspired by information processing and the distributed communication of nodes in biological nervous systems, but they have properties and functions that differ from those of the biological brain, making the analogy imperfect. In particular, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is plastic and analog.
The most advanced deep learning models are based on artificial neural networks, especially CNNs. In deep learning, each layer learns to transform its input data into a slightly more abstract and composite representation.
The "deep" in deep learning refers to the number of layers through which the data is transformed. More precisely, a deep learning system has substantial depth in terms of the credit assignment path (CAP), the chain of transformations from input to output that describes the relationship between them. For a feedforward neural network, the CAP depth is that of the network itself: the number of hidden layers plus the output layer, so a network with two hidden layers, for example, has a CAP depth of three. For a recurrent neural network, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.
The main objective of the proposed system is to enhance the customer experience in clothing fitting by enabling customers to virtually try clothing on in order to check for size, fit, or style. In this way, customers are able to shop and try on their favorite clothing anywhere and anytime with a smartphone. The main objective of the project is divided into the sub-objectives listed below.
To detect and extract human body skeleton-based joint positions using a smartphone camera.
To calculate body measurements based on the extracted body skeleton joint positions (see the sketch after this list).
To fit virtual garments onto the human body according to the extracted body skeleton joint positions, body measurements, and garment measurements.
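As an illustration of the second sub-objective, the following is a minimal sketch of how a body measurement such as shoulder width could be derived from extracted joint positions; the keypoint coordinates, the pixel-to-centimetre scale, and the helper name are hypothetical and are not taken from the project code.

import numpy as np

def euclidean_distance(p1, p2):
    # Pixel distance between two (x, y) joint positions
    return float(np.linalg.norm(np.array(p1) - np.array(p2)))

# Hypothetical keypoints as a pose estimator might return them, in pixels
keypoints = {"RShoulder": (210, 340), "LShoulder": (430, 338)}

# Hypothetical calibration factor: centimetres per pixel, e.g. estimated
# from a reference object or the user's known height
CM_PER_PIXEL = 0.08

shoulder_px = euclidean_distance(keypoints["RShoulder"], keypoints["LShoulder"])
print(f"Shoulder width: {shoulder_px * CM_PER_PIXEL:.1f} cm")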
Compared to the early days, clothing shopping is becoming easier and more convenient for customers and sellers, especially through online shopping. However, the problem that clothing must be physically tried on the body in order to check for size, fit, or style still remains unsolved. Therefore, one effective solution to this problem is to develop a virtual fitting room.
This project not only enhances the online shopping experience for customers but also holds significant
advantages for clothing store management, ultimately leading to improved sales. The Virtual Dressing Room
(VDR) system addresses the issue of potential customer loss within a store by providing an innovative solution
to visualize and try on clothing items virtually. By eliminating the inconvenience of customers not finding
suitable garments and potentially opting for another store, the VDR ensures a seamless and efficient shopping
experience.
The VDR's potential to reduce the need for large showrooms and additional customer service personnel further
translates to cost savings for clothing stores, making it a financially advantageous solution. The time-saving and
convenience-driven aspects of the VDR system underscore its potential to revolutionize traditional retail
practices and contribute significantly to the overall efficiency and profitability of clothing stores.
CHAPTER 2
LITERATURE SURVEY
[2] "A Virtual Trial Room using Pose Estimation and Homography"
The paper addresses the challenges faced by the retail industry, particularly the issue of long queues for trying
on clothes in shopping malls due to rapid urbanization and increasing population. The authors propose an
Android-based mobile application using OpenCV and TensorFlow Lite, which allows customers to virtually try
on clothes without the need for physical trial rooms. The application uses pose estimation and homography to
map clothes onto the customer's body, providing a quick, easy, and accurate way to try clothes. The results
obtained demonstrate the accuracy of the mapping process and its potential to transform the retail industry. The
authors believe that their application will prove instrumental in shaping the retail industry.
[3]“Image-to-Image attire transfer using Generative Adversarial Networks (GAN) and image processing
methods”
Syed Sanzam, Sourav Govinda Dasf, Sifat-Ul-Alam, Mohammad Imrul Jubair, and Md. Faisal Ahmed propose a system that transfers clothing from one person's image to another while preserving the shape, pose, action, and identity of the user. The approach leverages Liquid Warping GAN for domain transfer and U-Net
with Grab-cut for segmentation. The system aims to enhance the online attire shopping experience and
contribute to the e-commerce sector by providing a virtual trial room for users.
[4] “A real-time virtual dressing room application using OpenCV and Kinect sensor technology”
Nagaraju Bogiri, Srinivasan K., and Vivek S.
The system uses real-time cloth processing and hardware sensors such as motion, light, and camera sensors controlled by GUI software. It also involves skeletal tracking, 3D depth imaging, and dynamic cloth fitting. The proposed approach aims to provide an efficient solution for online shopping and virtual dressing experiences.
This paper made significant strides by leveraging the Microsoft Kinect sensor for real-time body measurements,
simplifying virtual dressing room creation, and enhancing online shopping experiences. However, potential
limitations include variations in body measurement accuracy from the Kinect sensor, a limited database of
clothing options, and technical issues affecting system performance, emphasizing the need for ongoing research
to refine and optimize virtual dressing room technology for improved user satisfaction and usability.
[6] "An augmented reality based Virtual dressing room using Haarcascades Classifier”
The system aims to provide a cost-effective alternative to Kinect sensors for virtual try-on experiences. The
technology involves mapping algorithms to accurately fit dresses on subjects based on body measurements
obtained through Euclidean distance calculations. The study evaluates the proposed scheme on 50 subjects with
10 dresses, demonstrating its effectiveness in dynamically adjusting dress sizes and enhancing the virtual
shopping experience.
[7] “The implementation of a virtual fitting room using image processing”
[9] “ "Towards Photo-Realistic Virtual Try-On by Adaptively Generating Preserving Image Content"
Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, and Ping Luo
The paper "Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content"
explores advancements in virtual try-on technology, focusing on synthesizing realistic images of individuals in
target clothing while preserving crucial details like texture, logos, and posture. The document addresses
challenges in achieving photo-realistic virtual try-on and defines difficulty levels in the try-on task.
[10] “Image Processing Design Flow for Virtual Fitting Room Applications used in Mobile Devices”
Present virtual changing rooms are useful for clothing items for individual body parts such as the head, feet, arms, and face. The current approach requires trying on each and every item of clothing, which most people find difficult or inconvenient. Existing efforts that use images for try-on have issues with the model's stability when processing pictures of individuals taken in various lighting situations, different environmental settings, and unique poses. Many e-commerce companies, including Lenskart, Purple, and others, use AI fitting rooms to produce results relevant to spectacles or makeup.
2.2.1 Disadvantages:
▪ People often have problems choosing colours that look good on them.
▪ Buying products without trials may lead to returns and exchanges.
This project's primary goal is to improve and simplify the online buying experience for users. In order to save time, it seeks to develop an "Augmented Reality" fitting room that gives customers a way to try on different clothing virtually before making a purchase. This decreases the need for manually or physically putting on clothing, which also lowers the chance of contracting COVID-19, and it enables consumers to make wiser decisions. The project's primary goal is to create a genuine connection between the user and virtual clothing, as shown in Fig 3.1.
2.3.1 Advantages:
CHAPTER 3
1) Avatar Creation
- Enable users to create personalized avatars that accurately represent their body shape, size, and proportions.
- Provide options for customizing avatars with different skin tones, hair styles, and facial features.
2) 3D Garment Library
- Build a comprehensive library of 3D garment models with accurate sizing and detailed textures.
- Include a diverse range of clothing styles, brands, and categories to cater to different preferences.
3) Virtual Try-On
- Implement a realistic virtual try-on experience that allows users to see how garments fit and drape on their avatars.
- Enable users to adjust the fit and styling of garments, as well as view them from different angles and lighting conditions.
4) Real-Time Interaction
- Ensure smooth and responsive interaction with the virtual garments, allowing users to move their avatars and see the garments move naturally.
- Provide real-time feedback on garment fit and styling, such as highlighting areas that might be too tight or loose.
3.2 Requirements:
Developer side:
IDE: Anaconda
Programming Languages: Python, HTML, CSS, Bootstrap.
Packages Used: Dlib (19.15.0), OpenCV (3.4.2.17), SciPy (1.0.0), Cascade Trainer GUI (1.8.0), Tkinter Canvas (8.6.8), NumPy (1.18.1), Flask web framework (1.1.1)
Front-end Languages: HTML, CSS & Bootstrap
Data: Source (Internet)
Back-end Language: Python (3.7.4).
User side:
The user must identify the target human body's parts and transfer objects to the appropriate body parts. Detection relies on a moving 24×24 target window over the image, which evaluates Haar-like characteristics (line features, rectangle features) to locate these parts.
Packages used:
OpenCV:
OpenCV-Python is a library of Python bindings designed to solve computer vision problems. OpenCV array
structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries
that use NumPy such as SciPy.
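As a small illustration of this interoperability (not part of the project code, and the file names are placeholders), an image loaded with OpenCV is already a NumPy array and can be passed directly to SciPy routines:

import cv2 as cv
from scipy import ndimage

img = cv.imread("sample.jpg")  # returns a NumPy ndarray of shape (H, W, 3)
print(type(img), img.shape)

# SciPy operates on the same array with no conversion step
blurred = ndimage.gaussian_filter(img, sigma=(3, 3, 0))
cv.imwrite("blurred.jpg", blurred)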
SciPy:
An open source, BSD-licensed library for mathematics, science, and engineering, SciPy is a scientific library
for Python. The NumPy library, which offers simple and quick N-dimensional array manipulation, is a
prerequisite for the SciPy library.
NumPy:
Large, multi-dimensional arrays and matrices are supported by NumPy, a library for the Python programming
language, along with a substantial number of high-level mathematical operations that may be performed on
these arrays.
CHAPTER 4
PROJECT PLANNING
In software engineering, the software development process refers to the systematic approach used to design,
develop, and deploy software applications. It involves a series of well-defined steps and methodologies that
ensure the successful creation of high-quality software.
As the project assigned was an academic project, the main focus was learning and experimentation. The timeline
of the project is as follows:
The team was formed in the month of September, and the guide was finalized. The research for a topic was
initiated. The skills of each member were analyzed to effectively and efficiently make optimum use of each
individual’s expertise. The guide helped us in assessing the feasibility and societal needs of the proposed topics
for the project.
After careful consideration, the project topic was refined to "Virtual Dressing Room." The guide played a crucial
role in guiding us through the analysis of feasibility, societal needs, and the potential impact of the proposed
project. The team collectively decided to explore the integration of virtual technologies into the realm of fashion
and clothing, leading to the selection of the Virtual Dressing Room as our project focus.
In October, the Virtual Dressing Room project embarked on a quest to revolutionize the fashion experience
through cutting-edge virtual technologies. Months of extensive research and meticulous planning laid the
foundation for a system designed to redefine how individuals interact with clothing virtually.
The development phase advanced, creating an immersive platform for users to virtually try on different outfits.
The system aimed at providing a seamless and enjoyable virtual dressing experience, allowing users to visualize
how different clothing items looked on them in a virtual environment. February ushered in rigorous testing to
ensure every aspect of the virtual dressing room functioned flawlessly.
As March unfolded, the project underwent further refinement, focusing on polishing interfaces and eliminating
any imperfections within the virtual dressing room experience. April witnessed integration efforts, forging
connections with external data sources to enhance the range of available clothing items and styles.
May brought the final push in development, culminating in comprehensive testing to prove the system's mettle
and reliability. June marked the grand launch of the Virtual Dressing Room, signaling a new era in the way
people engage with fashion and clothing in the digital realm.
Beyond the launch, the project aimed for continuous improvement, driven by user feedback and a vision of
ever-increasing excellence. This is the saga of the Virtual Dressing Room, a technological journey toward a
future where the fashion experience is dynamic, immersive, and personalized for every user.
CHAPTER 5
SYSTEM ARCHITECTURE
In the outlined process, the user initiates the clothing selection by choosing various options. The subsequent
stages involve advanced computer vision and image processing techniques to seamlessly integrate the selected
clothing onto the user's virtual representation.
The Pose Estimation phase plays a crucial role in understanding the user's body positioning, ensuring that the
virtual outfit aligns accurately with the user's posture and movements. This step is essential for creating a
realistic and visually appealing virtual try-on experience.
The Human Parsing component follows, identifying the specific type and style of clothing chosen by the user.
This step involves recognizing intricate details such as the type of garment, its color, and any additional
accessories, contributing to a comprehensive understanding of the user's fashion preferences.
The Semantic Generation Module comes into play to generate a high-fidelity virtual representation of the
selected clothing. This module utilizes advanced algorithms to create a visually realistic rendition, considering
factors like fabric texture, lighting conditions, and garment fit.
The Content Fusion Module takes the virtual clothing generated and seamlessly integrates it with the user's
image, ensuring a harmonious blend between the chosen attire and the user's unique characteristics. This
integration is pivotal in creating a convincing and personalized virtual try-on experience.
The Try-On Module then displays the synthesized virtual outfit on the user for preview. Leveraging the insights
from Pose Estimation and Content Fusion, this stage provides users with an accurate and dynamic visualization
of how the chosen clothing would appear on them, allowing for a more informed decision-making process.
Finally, the user engages in the assessment phase, viewing and evaluating the virtual outfit. This interactive and
user-centric approach allows individuals to make informed decisions about the chosen attire, considering factors
such as fit, style, and overall satisfaction. Collectively, these stages create a comprehensive and technologically
sophisticated Virtual Dressing Room experience, seamlessly blending user choice, accurate virtual
representation, and interactive evaluation.
5.2 Methodology
Our goal is to present a detailed concept of a real-time system that lets users try on countless items of clothing without leaving the comfort of their home. In addition, people can try on attractive dresses before leaving home for a party or other occasions. People use mirrors on a daily basis to see how they look and to choose clothes to wear for the day before leaving home, and many mirrors are placed in clothing stores to help customers decide on clothes that fit and look good. In this sense, a detailed concept for a real-time dressing system can answer questions about dressing as well as clothing sizing without the need for physical dressing and undressing. The need for a real-time virtual dressing system is obvious. First, customers benefit by saving dressing and undressing time and by easily estimating their body measurements for tailored dresses. Customers normally try on many items and spend a lot of time dressing and undressing to buy a dress; it is very inconvenient to take each attractive dress to the dressing room, put it on, and take it off again. Second, store owners can save costs because they no longer need changing rooms, and the wear and tear on clothes tried on by customers will be reduced.
As shown in Fig 6.2, the system starts with the user, who stands in front of a camera. The camera takes a picture of the
user, and the pose estimation module analyzes the picture to determine the user's body pose. The human parsing
module then analyzes the picture to segment the user's body into different parts, such as the head, torso, arms,
and legs.
The content fusion module takes the output of the pose estimation and human parsing modules and combines it
with a 3D model of the garment that the user wants to try on. The try-on module then positions the 3D model
of the garment on the user's body, taking into account the user's body pose and the way the garment would drape
on a real person.
The final output of the system is a picture of the user with the garment tried on. The user can then see how the
garment looks on them from different angles, and they can also change the color or style of the garment.
The detailed step-by-step breakdown of the above Fig 6.3 is as follows:
1. User takes a picture: The process begins with the user standing in front of a camera and taking a picture. This
picture will be used to create a virtual representation of the user's body.
2. Pose Estimation (estimatePose()): The image is then analyzed by the pose estimation module. This module
uses computer vision techniques to identify the key points of the user's body, such as the joints of the arms,
legs, and head. The pose estimation module essentially determines the user's body posture in the picture.
3. Human Parsing (parseHuman()): Next, the human parsing module comes into play. This module further
analyzes the image to segment the user's body into different parts, such as the head, torso, arms, and legs. This
segmentation helps to create a more accurate and detailed virtual representation of the user's body.
4. Semantic Generation (generateSemantic()): The semantic generation module takes the output of the pose
estimation and human parsing modules and generates a semantic understanding of the user's body. This means
it analyzes the data to understand the shape, size, and other attributes of the user's body parts.
5. Content Fusion (fuseContent()): The content fusion module then takes the semantic understanding of the
user's body and combines it with a 3D model of the garment that the user wants to try on. This 3D model can
be provided by the retailer or designer of the garment.
6. Try-On (tryOnClothing()): Finally, the try-on module takes the fused data from the content fusion module
and positions the 3D model of the garment onto the virtual representation of the user's body. This is done in a
way that takes into account the user's body pose and the way the garment would naturally drape on a real person.
7. Output: The final output of the system is a picture of the user with the garment tried on. The user can then
view this image from different angles and even change the color or style of the garment to see how other options
would look.
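Taken together, these steps form a simple sequential pipeline. The sketch below chains the module functions named above; the stub bodies are placeholders for illustration only and do not reflect the project's actual implementation.

import cv2 as cv
import numpy as np

# Placeholder stubs standing in for the real modules described above
def estimatePose(image): return {"Neck": (0, 0)}            # body keypoints
def parseHuman(image): return np.zeros(image.shape[:2])     # part-label mask
def generateSemantic(pose, parts): return {"pose": pose, "parts": parts}
def fuseContent(semantics, garment): return {**semantics, "garment": garment}
def tryOnClothing(image, fused): return image               # rendered preview

def virtual_try_on(image_path, garment_path):
    image = cv.imread(image_path)                  # 1. user picture
    pose = estimatePose(image)                     # 2. pose estimation
    parts = parseHuman(image)                      # 3. human parsing
    semantics = generateSemantic(pose, parts)      # 4. semantic generation
    fused = fuseContent(semantics, garment_path)   # 5. content fusion
    return tryOnClothing(image, fused)             # 6-7. try-on and output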
CHAPTER 6
IMPLEMENTATION
A technical specification or method is implemented when it is made into a programme, piece of software, or
other type of computer system. The goal of this phase is to implement the system's design as effectively as
possible by translating the concept into code. In the life cycle of a system, implementation is critical. It is a phase
where the design is turned into a functional module.
The crucial and concluding stage of software development is implementation. It refers to the transformation of a new system design into a working system. Implementation results in robust, reusable, and expandable code. The process of guiding a customer from purchase to use of the purchased hardware or software is also known as implementation; this covers user regulations, user training, system integration, customization, scope analysis, and delivery.
6.1 Pseudocode
Pseudocode is a colloquial term for a high-level, informal description of how an algorithm or computer
programme works. Although it follows standard programming language structural conventions, it is written for
human rather than machine reading. It is used to develop a program's rough draught or blueprint. Pseudocode
condenses a program's flow but omits supporting information. To make sure that programmers comprehend the
specifications of a software project and align their code correctly, system designers create pseudocode.
The implemented model focuses on tops, with complete apparel transfer left as potential future work.
For this, we initially needed to implement a segmentation algorithm. Even though open-source state-of-the-art models could have been used, we stuck with robust image processing techniques for segmentation, the idea being to localize the face, infer the model's skin color from the face region, and use that to divide an image into hair, clothes, skin, and background.
Once we have the clothing segment, we can geometrically compare it to the in-shop clothing. Our goal is to learn transforms on the in-shop clothing that make it as geometrically similar as possible to the model's clothing. To describe this visually, one can refer to the image below: the example is a grid of six images, where the top left is the in-shop clothing, the top right is the clothing segment of the model, and the top middle is the transform (bottom left) applied to the in-shop clothing.
The above examples are generated during training, and hence the in-shop clothing and the model clothing are the same; this also allows an easier qualitative assessment. The network architecture used to learn this transform is briefly described below.
We call the learning of this transformation the Geometric Matching Module, as it matches the in-shop clothing to the current clothing, trying to get them to align geometrically. Some of the results after training are shown below.
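A minimal sketch of the warping idea is shown below, using OpenCV's thin-plate-spline transformer as a stand-in for the learned module: the control-point correspondences, which the real Geometric Matching Module predicts with a network, are hard-coded here purely for illustration, and the image file names are placeholders.

import cv2 as cv
import numpy as np

# Hypothetical control points: where cloth landmarks (collar, hem, sleeve ends)
# lie on the flat in-shop image versus on the segmented model clothing
shop_pts  = np.float32([[30, 20], [170, 20], [30, 230], [170, 230]]).reshape(1, -1, 2)
model_pts = np.float32([[45, 35], [150, 30], [55, 215], [160, 220]]).reshape(1, -1, 2)
matches = [cv.DMatch(i, i, 0) for i in range(shop_pts.shape[1])]

tps = cv.createThinPlateSplineShapeTransformer()
# warpImage maps via inverse lookup, so the target shape is passed first
tps.estimateTransformation(model_pts, shop_pts, matches)

shop_cloth = cv.imread("in_shop_cloth.jpg")
warped = tps.warpImage(shop_cloth)   # cloth deformed toward the model's pose
cv.imwrite("warped_cloth.jpg", warped)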
The instinctive approach to imposing the new clothing is to simply paste it over the image, but this causes problems: overlap with hair and hands, and remnants of the previous clothing, make the result look very unrealistic. The solution is the try-on module, in which we implement an encoder-decoder network to smooth out the image.
This gives a smoothed image that looks much more realistic than the result we would obtain by pasting the warped cloth over the model. This report avoids an in-depth description of the training details; for a thorough description of the model and training strategy, refer to the works cited in the literature survey.
import cv2 as cv
import numpy as np
import argparse

# Command-line options for the pose-estimation step
parser = argparse.ArgumentParser()
parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
parser.add_argument('--thr', default=0.2, type=float, help='Threshold value for pose parts heat map')
parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')
args = parser.parse_args()

# COCO-style keypoint indices produced by the OpenPose model
BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

# Grab one frame from the requested source (file, or the default camera if omitted)
cap = cv.VideoCapture(args.input if args.input else 0)
hasFrame, frame = cap.read()
frameWidth = frame.shape[1]
frameHeight = frame.shape[0]
import argparse
import os.path
import numpy as np
import cv2 as cv

def preprocess(image):
    """Create a 4-dimensional blob from the image and its horizontal flip.

    :param image: input image
    """
    image_rev = np.flip(image, axis=1)
    # Mean BGR values are subtracted channel-wise before inference
    blob = cv.dnn.blobFromImages([image, image_rev],
                                 mean=(104.00698793, 116.66876762, 122.67891434))
    return blob
if __name__ == "__main__":
    # findFile resolves model paths via OpenCV's sample utilities
    findFile = cv.samples.findFile

    # Make sure every required model file exists before running the pipeline
    if not os.path.isfile(args.gmm_model):
        raise OSError("GMM model does not exist")
    if not os.path.isfile(args.tom_model):
        raise OSError("TOM model does not exist")
    if not os.path.isfile(args.segmentation_model):
        raise OSError("Segmentation model does not exist")
    if not os.path.isfile(findFile(args.openpose_proto)):
        raise OSError("OpenPose proto does not exist")
    if not os.path.isfile(findFile(args.openpose_model)):
        raise OSError("OpenPose model does not exist")

    person_img = cv.imread(args.input_image)

    # Crop the person image to the 256x192 (4:3) aspect ratio the networks expect
    ratio = 256 / 192
    inp_h, inp_w, _ = person_img.shape
    current_ratio = inp_h / inp_w
    if current_ratio > ratio:
        # Too tall: crop rows around the vertical centre
        center_h = inp_h // 2
        out_h = inp_w * ratio
        start = int(center_h - out_h // 2)
        end = int(center_h + out_h // 2)
        person_img = person_img[start:end, ...]
    else:
        # Too wide: crop columns around the horizontal centre
        center_w = inp_w // 2
        out_w = inp_h / ratio
        start = int(center_w - out_w // 2)
        end = int(center_w + out_w // 2)
        person_img = person_img[:, start:end, :]

    cloth_img = cv.imread(args.input_cloth)

    # Pose estimation and human parsing on the cropped person image
    pose = get_pose_map(person_img, findFile(args.openpose_proto),
                        findFile(args.openpose_model), args.backend, args.target)
    segm_image = parse_human(person_img, args.segmentation_model)
    segm_image = cv.resize(segm_image, (192, 256), cv.INTER_LINEAR)
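A hypothetical invocation of such a script, assuming the argument parser defines the paths checked above (the script and file names below are placeholders):

python virtual_try_on.py --input_image person.jpg --input_cloth cloth.jpg \
    --gmm_model gmm.pth --tom_model tom.pth --segmentation_model parsing.pb \
    --openpose_proto pose.prototxt --openpose_model pose.caffemodel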
CHAPTER 7
SYSTEM TESTING
The main quality assurance technique used in software development is testing. Its fundamental purpose is to
find software bugs. To test various system components, various testing levels are employed, each of which
carries out a separate duty.
Test case 1: the user is standing with objects in the background
Expected outcome: the cloth is imposed perfectly on the user
Result: pass
Step 3: Once the components have been integrated, execute the test cases
Step 5: Repeat the test cycle until the components have been successfully integrated.
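As a sketch of how such checks could be automated, the unit test below exercises the aspect-ratio crop from the implementation chapter; the helper crop_to_ratio is a hypothetical refactoring of that logic, not part of the current code.

import unittest
import numpy as np

def crop_to_ratio(img, ratio=256 / 192):
    # Centre-crop an (H, W, C) image to the given height/width ratio
    h, w, _ = img.shape
    if h / w > ratio:
        out_h = int(w * ratio)
        start = (h - out_h) // 2
        return img[start:start + out_h, ...]
    out_w = int(h / ratio)
    start = (w - out_w) // 2
    return img[:, start:start + out_w, :]

class TestCrop(unittest.TestCase):
    def test_ratio_preserved(self):
        img = np.zeros((720, 1280, 3), dtype=np.uint8)  # wide camera frame
        cropped = crop_to_ratio(img)
        h, w, _ = cropped.shape
        self.assertAlmostEqual(h / w, 256 / 192, places=2)

if __name__ == "__main__":
    unittest.main()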
CHAPTER 8
SNAPSHOTS
Fig 8.1 shows the login page of our Virtual Dressing Room, where the user can log in to their account; the website remembers the user's login and takes them back to where they left off trying on a dress.
Fig 8.2 shows fields for username, email, mobile number, and password, along with "SIGNIN" and "SIGNUP" buttons for user registration in the Virtual Trial Room application.
Fig 8.3 depicts the home page of the application, showing a variety of clothes on display.
Fig 8.4 depicts the recommendation page of the application, where users can select their dress size and it recommends clothes of a suitable fit.
CHAPTER 9
CONCLUSION
The popularity of online shopping, and people's desire to utilize it to the fullest extent possible when buying clothes, justify the necessity of creating an algorithm that digitally dresses them in the chosen clothing.
The requirement to spend hours physically trying on a range of outfits is a regular issue clients run into when shopping for clothing; the time available might not be enough, and the process can be exhausting. The suggested remedy for this issue is a virtual styling room that serves as a trial room using a live video feed.
The human body's nodes and points are plotted using a Kinect sensor, and this information is then utilized to render an image of clothing over the user's body, obviating the need for actual fittings and saving time.
Online buyers would greatly appreciate the ability, thanks to this technology, to see themselves in different outfits with fewer limitations.
We came to the conclusion that this system really saves time and does not demand extra work. Anyone who is not technically savvy can use it; it does not call for a lot of technical expertise and is hence accessible. It is therefore a fitting addition for a clothier. Overall, the suggested virtual dressing room appears to be a solid option for precise and speedy virtual clothing fitting.
FUTURE ENHANCEMENT
The application is currently designed solely to allow users to try on virtual garments. As planned, another application will be built just for the owner or shopkeeper. It will give the owner daily information on how many individuals tried on items, how many bought them, which clothes should be stored longer, and so on. We also intend to integrate a machine learning model into our application, which will give the user recommendations on what apparel to buy based on their previous apparel preferences and other factors.
Following the identification of the target image's body sections, a few operations must be carried out to ensure accuracy. Therefore, in order to arrange clothing on the target image, we need to train three different types of networks.
REFERENCES
[1] Srinivasan K. and Vivek S., "Implementation of Virtual Fitting Room Using Image Processing," Department of Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India, ICCCSP-2017.
[2] Cecilia Garcia, Nicolas Bessou, Anne Chadoeuf and Erdal Oruklu, "Image Processing Design Flow for Virtual Fitting Room Applications Used in Mobile Devices," Department of Electrical and Computer
[3] Kshitij Shah, Mridul Pandey, Sharvesh Patki and Radha Shankarmani, "A Virtual Trial Room Using Pose Estimation and Homography," Department of Information Technology, Sardar Patel Institute of Technology, Mumbai, India.
[4] Vlado Kitanovski and Ebroul Izquierdo, Multimedia and Vision Research Group, "3D Tracking of Facial
[5] A. Hilsmann and P. Eisert, "Realistic cloth augmentation in single view video," in Proc. Vis., Modell.,
[6] Lan Ziquan, "Augmented Reality – Virtual Fitting Room Using Kinect," Department of Computer Science,
[7] Vipin Paul, Sanju Abel J., Sudarshan S. and Praveen M., "Virtual Trial Room," South Asian Journal of
[8] Nikita Deshmukh, Ishani Patil, Sudehi Patwari, Aarati Deshmukh and Pradnya Mehta, "Real Time Virtual Dressing Room," IJCSN International Journal of Computer Science and Network, Volume 5, Issue 2, April 2016.
[9] Muhammed Kotan and Cemil Oz, "Virtual Dressing Room Application with Virtual Human Using Kinect Sensor," Journal of Mechanics Engineering and Automation 5 (2015) 322-326, May 25, 2015.
[10] Shreya Kamani, Neel Vasa and Kriti Srivastava, "Virtual Trial Room Using Augmented Reality," International