
ABSTRACT

This project proposes a Virtual Dressing Room (VDR) that addresses the limitations inherent in traditional online shopping. Built on Deep Learning, specifically Generative Adversarial Networks (GANs), the VDR uses an architecture in which a generator network synthesizes images of users wearing a wide range of outfits, while a discriminator network learns to distinguish genuine from generated images. What sets this VDR apart is its real-time interactivity: users engage with a dynamic virtual environment in which they can select and visualize different clothing items. The crux of the system is the GAN's iterative learning process. User feedback is fed back into training to refine the GAN's outputs, driving continuous improvement in the diversity and realism of the generated outfit simulations. This user-centric design addresses the current challenges of online apparel shopping and promises a seamless, personalized experience that bridges the gap between traditional in-store try-ons and the digital retail landscape.
VIRTUAL DRESSING ROOM

CHAPTER 1

INTRODUCTION

1.1 VISION
To pioneer a global shift in the fashion industry, bridging the gap between the real and virtual worlds. Our vision
is to provide individuals with an unparalleled virtual dressing room experience that transcends the limitations
of traditional clothing shopping. By seamlessly integrating cutting-edge technologies, we aim to redefine how
users interact with fashion, offering a dynamic, immersive, and convenient virtual wardrobe. Our vision extends
beyond a mere technological solution. We aspire to create a cultural shift where the boundaries between physical
and virtual experiences in the fashion realm are blurred, fostering a sense of creativity, self-expression, and
personalized style for users worldwide.

1.2 MISSION
Our mission is to empower users with an innovative and user-friendly virtual dressing room solution, leveraging
advanced technologies like precise body detection and posture assessment. We strive to enhance the
accessibility and time efficiency of trying on clothing, revolutionizing the online shopping landscape. Through
continuous innovation, we aim to provide a secure platform that ensures a seamless, private, and satisfying
virtual shopping experience. Our mission involves not just technological advancement but also a commitment
to ethical practices and user satisfaction. We are dedicated to creating a platform that not only meets the needs
of the modern consumer but also respects privacy and promotes a positive and enjoyable online shopping
journey.

1.3 OBJECTIVES
1.Develop and refine an advanced virtual dressing room software with a focus on precise body detection and
posture assessment.
2.Overcome the challenge of accurately aligning virtual clothing with the user's body, ensuring a realistic and
satisfying try-on experience.
3.Improve the accessibility of virtual dressing rooms for online shoppers, minimizing time-consuming processes
and enhancing convenience.
4.Utilize web cameras and cutting-edge technologies to provide a secure and cost-effective virtual try-on
solution.

Dept. of ISE, BIT. 2023-24 -1-



5.Revolutionize the online shopping experience by eliminating the need for physical trial rooms, ensuring
privacy and convenience for users.

1.4 Deep learning in virtual reality


1.4.1 What is Virtual Reality?
Virtual Reality (VR) is a computer-generated environment with scenes and objects that appear real, making users feel immersed in their surroundings. This environment is perceived through a device known as a Virtual Reality headset or helmet. VR allows us to immerse ourselves in video games as if we were one of the characters, to learn how to perform heart surgery, or to improve the quality of sports training to maximize performance.
Although this may seem extremely futuristic, its origins are not as recent as we might think. In fact, many people consider one of the first Virtual Reality devices to be the Sensorama, a machine with a built-in seat that played 3D films, emitted odors, and generated vibrations to make the experience as vivid as possible. The invention dates back to the mid-1950s. Subsequent technological and software developments over the following decades brought a progressive evolution in both devices and interface design.

1.5 Deep Learning

Deep learning is a family of machine learning methods based on artificial neural networks with multiple layers. Learning can be supervised, semi-supervised, or unsupervised. Deep learning architectures such as deep neural networks, deep belief networks, and convolutional neural networks are applied in areas such as computer vision, speech recognition, natural language processing, social-network filtering, machine translation, biotechnology, drug design, medical image analysis, materials inspection, and board games, where they have produced results comparable to, and in some cases better than, human experts.

Neural networks were originally inspired by information processing and communication in biological nervous systems, but they differ from biological brains in important ways that make direct comparison misleading. In particular, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is plastic and analog.

The most successful modern deep learning models are based on artificial neural networks, particularly CNNs. In deep learning, each layer learns to transform its input data into a progressively more abstract and composite representation.


The "deep" in deep learning refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output, and it describes the relationship between input and output. For feedforward neural networks, the CAP depth is the number of hidden layers plus one, since the output layer is also parameterized. For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.
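To make the notion of depth concrete, the sketch below (a minimal NumPy feedforward network, written for illustration and not drawn from the report's code) shows data passing through successive layer transformations; its CAP depth is three (two hidden layers plus the parameterized output layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A feedforward network with two hidden layers: CAP depth = 3
# (two hidden transformations plus the parameterized output layer).
layer_sizes = [4, 8, 8, 2]  # input -> hidden -> hidden -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Each layer transforms its input into a new representation."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)               # hidden transformations
    return x @ weights[-1] + biases[-1]   # output layer

batch = rng.standard_normal((5, 4))  # 5 samples, 4 features each
out = forward(batch)
print(out.shape)  # (5, 2)
```

Each matrix multiplication plus nonlinearity is one link in the credit assignment path; counting the links from `batch` to `out` gives the network's depth.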

1.6 Project objectives

The main objective of the proposed system is to enhance the customer experience of clothing fitting by enabling customers to virtually try on clothing to check its size, fit, or style. In this way, customers can shop for and try on their favorite clothing anywhere and anytime with a smartphone. The main objective is divided into the following sub-objectives:

 To detect and extract human body skeleton-based joint positions using a smartphone camera.
 To calculate body measurements based on the extracted skeleton joint positions.
 To fit virtual garments onto the human body according to the extracted skeleton joint positions, body
measurements, and garment measurements.
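As an illustration of the second sub-objective, the sketch below estimates body measurements as Euclidean distances between extracted skeleton joints. The joint names, coordinates, and calibration constant are hypothetical, not outputs of the project's detector:

```python
import math

# Hypothetical 2D joint positions (pixels) as a pose estimator might return them.
joints = {
    "left_shoulder":  (220, 180), "right_shoulder": (420, 180),
    "left_hip":       (250, 420), "right_hip":      (390, 420),
}

def dist(a, b):
    """Euclidean distance between two joint positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

shoulder_width = dist(joints["left_shoulder"], joints["right_shoulder"])
torso_length   = dist(joints["left_shoulder"], joints["left_hip"])

# A known reference length converts pixel distances to real-world units.
# Assumed here: the user is 170 cm tall and spans 600 px in the frame.
cm_per_pixel = 170 / 600.0
print(round(shoulder_width * cm_per_pixel, 1))  # 56.7
```

The same pixel-to-centimeter conversion would apply to any other joint pair, which is how garment measurements can be matched against the extracted body measurements.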

1.7 Scope of the project

Compared to the early days, clothing shopping has become easier and more convenient for customers and sellers, especially through online shopping. However, the problem that clothing must be physically tried on in order to check its size, fit, or style remains unsolved. One effective solution to this problem is to develop a virtual fitting room.

1.8 Purpose of the project

This project not only enhances the online shopping experience for customers but also holds significant
advantages for clothing store management, ultimately leading to improved sales. The Virtual Dressing Room
(VDR) system addresses the issue of potential customer loss within a store by providing an innovative solution
to visualize and try on clothing items virtually. By eliminating the inconvenience of customers not finding
suitable garments and potentially opting for another store, the VDR ensures a seamless and efficient shopping
experience.


The VDR's potential to reduce the need for large showrooms and additional customer service personnel further
translates to cost savings for clothing stores, making it a financially advantageous solution. The time-saving and
convenience-driven aspects of the VDR system underscore its potential to revolutionize traditional retail
practices and contribute significantly to the overall efficiency and profitability of clothing stores.


CHAPTER 2

LITERATURE SURVEY

[1] "3D Grid Based Virtual Trial Room"

Debangana Ram, Bholanath Roy, and Vaibhav Soni

The paper discusses image-based virtual trial room technologies that integrate modern in-store clothes into a person's image, and highlights the growing interest in this technology within the multimedia and computer vision communities. This sets the stage for exploring the 3D-grid-based virtual trial room and its potential impact on the online shopping experience.

[2] "A Virtual Trial Room using Pose Estimation and Homography"

Kshitij Shah and Sharvesh Patki

The paper addresses the challenges faced by the retail industry, particularly the issue of long queues for trying
on clothes in shopping malls due to rapid urbanization and increasing population. The authors propose an
Android-based mobile application using OpenCV and TensorFlow lite, which allows customers to virtually try
on clothes without the need for physical trial rooms. The application uses pose estimation and homography to
map clothes onto the customer's body, providing a quick, easy, and accurate way to try clothes. The results
obtained demonstrate the accuracy of the mapping process and its potential to transform the retail industry. The
authors believe that their application will prove instrumental in shaping the retail industry.

[3] "Image-to-Image attire transfer using Generative Adversarial Networks (GAN) and image processing methods"

Syed Sanzam, Sourav Govinda Dasf, Sifat-Ul-Alam, Mohammad Imrul Jubair, and Md. Faisal Ahmed

The authors propose a system that transfers clothing from one person's image to another while preserving the shape, pose, action, and identity of the user. The approach leverages Liquid Warping GAN for domain transfer and U-Net with GrabCut for segmentation. The system aims to enhance the online attire shopping experience and contribute to the e-commerce sector by providing a virtual trial room for users.

[4] “A real-time virtual dressing room application using OpenCV and Kinect sensor technology”
Nagaraju Bogiri, Srinivasan K., and Vivek S.

The system uses real-time cloth processing and hardware sensors like motion, light, and camera sensors


controlled by a GUI software. It also involves the use of skeletal tracking, 3D depth image, and dynamic cloth
fitting. The proposed approach aims to provide an efficient solution for online shopping and virtual dressing
experiences.

[5] "Virtual Dressing Room Application"

Aladdin Salah Masriof and Muhannad Al-Jabi

This paper made significant strides by leveraging the Microsoft Kinect sensor for real-time body measurements,
simplifying virtual dressing room creation, and enhancing online shopping experiences. However, potential
limitations include variations in body measurement accuracy from the Kinect sensor, a limited database of
clothing options, and technical issues affecting system performance, emphasizing the need for ongoing research
to refine and optimize virtual dressing room technology for improved user satisfaction and usability.

[6] "An augmented reality based Virtual dressing room using Haarcascades Classifier”

Nauman Zafar Hashmi, Aun Irtaza, Wakeel Ahmed, and Nudrat Nida

The system aims to provide a cost-effective alternative to Kinect sensors for virtual try-on experiences. The
technology involves mapping algorithms to accurately fit dresses on subjects based on body measurements
obtained through Euclidean distance calculations. The study evaluates the proposed scheme on 50 subjects with
10 dresses, demonstrating its effectiveness in dynamically adjusting dress sizes and enhancing the virtual
shopping experience.
[7] “The implementation of a virtual fitting room using image processing”

Srinivasan K. and Vivek S.


The implementation of a virtual fitting room using image processing technology aims to revolutionize online
shopping experiences by allowing customers to visualize how clothes fit before making a purchase. By utilizing
color clustering and human contour analysis, this technology provides a reliable and efficient way to extract
foreground, extract human silhouettes, and extract feature points for accurate virtual fitting. Online marketers
can leverage virtual fitting rooms to enhance their market presence and offer customers a wider choice of
products, ultimately improving the online shopping experience.

[9] "Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content"

Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, and Ping Luo

The paper "Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content"
explores advancements in virtual try-on technology, focusing on synthesizing realistic images of individuals in
target clothing while preserving crucial details like texture, logos, and posture. The document addresses
challenges in achieving photo-realistic virtual try-on and defines difficulty levels in the try-on task.

[10] “Image Processing Design Flow for Virtual Fitting Room Applications used in Mobile Devices”

Cecilia Garcia, Nicolas Bessou, Anne Chadoeuf, and Erdal Oruklu


The Virtual Fitting Room (VFR) application presented in this paper offers a real-time, human-friendly interface for trying on clothes using webcams or smartphones. A three-stage algorithm detects and sizes the user's body, identifies reference points through face detection and augmented reality markers, and superimposes clothing over the user's image. Implemented as a universal Java applet using OpenCV library functions, the VFR application enhances the online shopping experience by helping users choose the correct type and size of clothing items with ease and accuracy.

2.1 Problem statement


To design a solution that reduces shoppers' time and improves their online shopping by building a virtual styling room driven by a live video feed. The system provides a virtual room in which to try on apparel through e-commerce websites before buying it.

2.2 Existing system

Present virtual changing rooms handle clothing items for individual body parts such as the head, feet, arms, and face. The current approach requires trying on each and every item of clothing, which most people find difficult or inconvenient. Existing image-based efforts have issues with model stability when processing pictures of individuals taken in varied lighting situations, different environments, and unusual poses. Many e-commerce companies, including Lenskart, Purple, and others, use AI fitting rooms to produce results for spectacles or makeup.

2.2.1 Disadvantages:

▪ The pandemic causes second thoughts before trying on an outfit in a store.


▪ On online websites we can only observe how the garments look on others.
▪ The size and colour variance is not always clearly visible in the posted photos.
▪ In 3D models we can see how clothes look on an avatar, not on real people.


▪ People often have trouble choosing colours that look good on them.
▪ Buying products without trials may lead to returns and exchanges.

2.3 Proposed system:

This project's primary goal is to improve and simplify the online buying experience for users. It seeks to develop an "Augmented Reality" fitting room that saves time and gives customers a way to try on different clothing without physically touching it before making a purchase. This decreases the need for manually putting clothing on, which also lowers the chance of contracting COVID, and it enables consumers to make wiser decisions. The project aims to create a genuine connection between the user and virtual clothing, as shown in fig 3.1.

Fig 3.1 Virtual trial room

2.3.1 Advantages:

 Allows people to complete their tasks in a smooth & timely manner


 Gives the shoppers access to various more options to try & check.
 Convenient and faster shopping experience.


 Quicker and easier to try on clothes.

 Reduces exchanges and returns.
 No hidden-camera concerns, particularly for female customers.
 By giving customers a way to see how each item looks on their body, they can make more informed
decisions; fewer of them will want to return their orders.
 Draws customers in.
 Notably greater turnover and a positive brand image.


CHAPTER 3

HARDWARE AND SOFTWARE REQUIREMENTS

3.1 Functional Requirements:

1) Avatar Creation
-Enable users to create personalized avatars that accurately represent their body shape, size, and proportions.
-Provide options for customizing avatars with different skin tones, hair styles, and facial features.

2) 3D Garment Library
-Build a comprehensive library of 3D garment models with accurate sizing and detailed textures.
-Include a diverse range of clothing styles, brands, and categories to cater to different preferences.

3) Virtual Try-On
-Implement a realistic virtual try-on experience that allows users to see how garments fit and drape on their
avatars.
-Enable users to adjust the fit and styling of garments, as well as view them from different angles and lighting
conditions.

4) Real-Time Interaction
-Ensure smooth and responsive interaction with the virtual garments, allowing users to move their avatars and
see the garments move naturally.
-Provide real-time feedback on garment fit and styling, such as highlighting areas that might be too tight or
loose.

5) Personalization and Recommendations


-Allow users to save their favorite garments, create outfits, and receive personalized recommendations based
on their style preferences and past interactions.
-Use AI-driven algorithms to suggest complementary items and complete looks.
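One simple way such recommendations could be realized (a sketch only; the report does not specify the algorithm) is content-based filtering with cosine similarity over garment attribute vectors. The catalog items and attribute dimensions below are invented for illustration:

```python
import numpy as np

# Hypothetical garment attribute vectors, e.g. [casualness, warmth, formality].
catalog = {
    "denim_jacket": np.array([0.9, 0.7, 0.2]),
    "wool_coat":    np.array([0.3, 0.9, 0.8]),
    "graphic_tee":  np.array([1.0, 0.1, 0.1]),
    "dress_shirt":  np.array([0.2, 0.3, 0.9]),
}

def recommend(liked, k=2):
    """Rank the other catalog items by cosine similarity to a liked item."""
    q = catalog[liked]
    scores = {
        name: float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for name, v in catalog.items() if name != liked
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("graphic_tee"))  # ['denim_jacket', 'wool_coat']
```

In a real deployment the attribute vectors would come from learned embeddings of garments and user history rather than hand-assigned scores.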


6) Social Sharing Features


-Enable users to share their virtual outfits with friends and family on social media or through other platforms.
-Encourage social interaction and create a community around virtual fashion experiences.

7) Integration with Online Stores


-Integrate the virtual dressing room with online clothing stores to allow users to seamlessly transition from
trying on garments virtually to purchasing them.
-Provide links to purchase items directly from the virtual dressing room interface.

8) Data Security and Privacy


-Implement robust measures to protect user data, including personal information, body measurements, and
shopping preferences.
-Comply with relevant privacy regulations and ensure transparency in data collection and usage practices.

9) Scalability and Performance


-Design the system to handle a large number of users and garments without compromising performance or
responsiveness.
-Optimize the system for different hardware and software platforms to ensure accessibility across a wide range
of devices.

3.2 Requirements:
Developer side:
 IDE: Anaconda
 Programming Languages: Python, HTML, CSS, Bootstrap
 Packages Used: Dlib (19.15.0), OpenCV (3.4.2.17), SciPy (1.0.0), Cascade Trainer GUI (1.8.0), Tkinter
canvas (8.6.8), NumPy (1.18.1), Flask web framework (1.1.1)
 Front-end Languages: HTML, CSS & Bootstrap
 Data Source: Internet
 Backend Language: Python (3.7.4)

User side:
The user side must identify the target human body parts and map garment objects onto the appropriate parts.

 For body-part detection, a Haar cascade dataset is used.

 A moving 24×24 target window scans the image, computing Haar-like features such as line, edge, and
rectangular features.

 AdaBoost selects the features pertinent to object detection and discards irrelevant ones.

3.3 Programming languages:


Python:
Python is a popular programming language used to create software and websites, automate processes, and analyze data. Python is a general-purpose, high-level, interpreted language whose design philosophy prioritizes code readability through significant indentation.

Packages used:
OpenCV:
OpenCV-Python is a library of Python bindings designed to solve computer vision problems. OpenCV array
structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries
that use NumPy such as SciPy.

SciPy:

SciPy is an open-source, BSD-licensed library for mathematics, science, and engineering in Python. The NumPy library, which offers simple and fast N-dimensional array manipulation, is a prerequisite for SciPy.

NumPy:
NumPy is a library for the Python programming language that supports large, multi-dimensional arrays and matrices, along with a substantial collection of high-level mathematical functions that operate on these arrays.

Cascade trainer gui:


Cascade Trainer GUI is an application used to train, test, and improve cascade classifier models. Its graphical interface makes it simple to configure parameters and run the OpenCV tools for training and testing classifiers.


CHAPTER 4

PROJECT PLANNING

In software engineering, the software development process refers to the systematic approach used to design,
develop, and deploy software applications. It involves a series of well-defined steps and methodologies that
ensure the successful creation of high-quality software.

4.1 Project Timeline

As the project assigned was an academic project, the main focus was learning and experimentation. The timeline
of the project is as follows:

The team was formed in the month of September, and the guide was finalized. The research for a topic was
initiated. The skills of each member were analyzed to effectively and efficiently make optimum use of each
individual’s expertise. The guide helped us in assessing the feasibility and societal needs of the proposed topics
for the project.

After careful consideration, the project topic was refined to "Virtual Dressing Room." The guide played a crucial
role in guiding us through the analysis of feasibility, societal needs, and the potential impact of the proposed
project. The team collectively decided to explore the integration of virtual technologies into the realm of fashion
and clothing, leading to the selection of the Virtual Dressing Room as our project focus.

Fig 4.1 Project Timeline


In October, the Virtual Dressing Room project embarked on a quest to revolutionize the fashion experience
through cutting-edge virtual technologies. Months of extensive research and meticulous planning laid the
foundation for a system designed to redefine how individuals interact with clothing virtually.

The development phase advanced, creating an immersive platform for users to virtually try on different outfits.
The system aimed at providing a seamless and enjoyable virtual dressing experience, allowing users to visualize
how different clothing items looked on them in a virtual environment. February ushered in rigorous testing to
ensure every aspect of the virtual dressing room functioned flawlessly.

As March unfolded, the project underwent further refinement, focusing on polishing interfaces and eliminating
any imperfections within the virtual dressing room experience. April witnessed integration efforts, forging
connections with external data sources to enhance the range of available clothing items and styles.

May brought the final push in development, culminating in comprehensive testing to prove the system's mettle
and reliability. June marked the grand launch of the Virtual Dressing Room, signaling a new era in the way
people engage with fashion and clothing in the digital realm.

Beyond the launch, the project aimed for continuous improvement, driven by user feedback and a vision of
ever-increasing excellence. This is the saga of the Virtual Dressing Room, a technological journey toward a
future where the fashion experience is dynamic, immersive, and personalized for every user.


CHAPTER 5

SYSTEM ARCHITECTURE

5.1 System Architecture:

Fig 5.1 System Architecture

In the outlined process, the user initiates the clothing selection by choosing various options. The subsequent
stages involve advanced computer vision and image processing techniques to seamlessly integrate the selected
clothing onto the user's virtual representation.

The Pose Estimation phase plays a crucial role in understanding the user's body positioning, ensuring that the virtual outfit aligns accurately with the user's posture and movements. This step is essential for creating a realistic and visually appealing virtual try-on experience.

The Human Parsing component follows, identifying the specific type and style of clothing chosen by the user.
This step involves recognizing intricate details such as the type of garment, its color, and any additional
accessories, contributing to a comprehensive understanding of the user's fashion preferences.

The Semantic Generation Module comes into play to generate a high-fidelity virtual representation of the
selected clothing. This module utilizes advanced algorithms to create a visually realistic rendition, considering
factors like fabric texture, lighting conditions, and garment fit.

The Content Fusion Module takes the virtual clothing generated and seamlessly integrates it with the user's
image, ensuring a harmonious blend between the chosen attire and the user's unique characteristics. This
integration is pivotal in creating a convincing and personalized virtual try-on experience.

The Try-On Module then displays the synthesized virtual outfit on the user for preview. Leveraging the insights
from Pose Estimation and Content Fusion, this stage provides users with an accurate and dynamic visualization
of how the chosen clothing would appear on them, allowing for a more informed decision-making process.

Finally, the user engages in the assessment phase, viewing and evaluating the virtual outfit. This interactive and
user-centric approach allows individuals to make informed decisions about the chosen attire, considering factors
such as fit, style, and overall satisfaction. Collectively, these stages create a comprehensive and technologically
sophisticated Virtual Dressing Room experience, seamlessly blending user choice, accurate virtual
representation, and interactive evaluation.

5.2 Methodology

Our goal is to provide a detailed concept of a real-time system that lets users try on countless items of clothing without leaving the comfort of their homes. People can also try on attractive dresses when preparing to go out to a party or elsewhere. People use mirrors daily to see how they look and to choose clothes before leaving home, and many mirrors are placed in clothing stores to help customers decide on clothes that fit and look good. In this sense, a detailed concept for a real-time dressing system can answer questions about styling as well as clothing sizing without the need for physical dressing and undressing. The need for a real-time virtual dressing system is clear. First, customers save dressing and undressing time and can easily estimate their body measurements for tailored clothes. Customers normally try on many items and spend a lot of time dressing and undressing before buying. It is very inconvenient for them to take each attractive dress they find to the changing room, put it on, and take it off again. Second, store owners can save costs because they no longer need changing rooms, and the waste of clothes tried on by customers is reduced.

5.3 Use case diagram:

Fig 5.2 Use case diagram

As shown in Fig 5.2, the system starts with the user, who stands in front of a camera. The camera takes a picture of the user, and the pose estimation module analyzes the picture to determine the user's body pose. The human parsing module then analyzes the picture to segment the user's body into different parts, such as the head, torso, arms, and legs.

The content fusion module takes the output of the pose estimation and human parsing modules and combines it
with a 3D model of the garment that the user wants to try on. The try-on module then positions the 3D model
of the garment on the user's body, taking into account the user's body pose and the way the garment would drape
on a real person.

The final output of the system is a picture of the user with the garment tried on. The user can then see how the
garment looks on them from different angles, and they can also change the color or style of the garment.
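The garment-positioning step can be approximated, in its simplest 2D form, by alpha-blending a garment image onto the user's frame at a pose-derived anchor point. The coordinates and synthetic images below are illustrative; the actual try-on module also models pose and drape:

```python
import numpy as np

def overlay_garment(frame, garment_rgba, x, y):
    """Alpha-blend an RGBA garment image onto the frame at anchor (x, y)."""
    h, w = garment_rgba.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    rgb = garment_rgba[:, :, :3].astype(np.float32)
    alpha = garment_rgba[:, :, 3:4].astype(np.float32) / 255.0
    frame[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    return frame

# Synthetic demo: a blank "user" frame and a solid red "garment" patch.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
garment = np.zeros((200, 150, 4), dtype=np.uint8)
garment[:, :, 2] = 255  # red channel (OpenCV-style BGR ordering)
garment[:, :, 3] = 255  # fully opaque
out = overlay_garment(frame, garment, x=245, y=140)
print(out[200, 300])  # a pixel inside the patch is pure red in BGR
```

In practice `(x, y)` would come from the detected shoulder joints, and the alpha channel would carry the garment's cut-out silhouette rather than a solid rectangle.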


5.4 Work Flow Diagram:

Fig 5.3 Work Flow Diagram

The detailed step-by-step breakdown of the workflow in Fig 5.3 is as follows:
1. User takes a picture: The process begins with the user standing in front of a camera and taking a picture. This
picture will be used to create a virtual representation of the user's body.
2. Pose Estimation (estimatePose()): The image is then analyzed by the pose estimation module. This module
uses computer vision techniques to identify the key points of the user's body, such as the joints of the arms,
legs, and head. The pose estimation module essentially determines the user's body posture in the picture.
3. Human Parsing (parseHuman()): Next, the human parsing module comes into play. This module further
analyzes the image to segment the user's body into different parts, such as the head, torso, arms, and legs. This
segmentation helps to create a more accurate and detailed virtual representation of the user's body.


4. Semantic Generation (generateSemantic()): The semantic generation module takes the output of the pose
estimation and human parsing modules and generates a semantic understanding of the user's body. This means
it analyzes the data to understand the shape, size, and other attributes of the user's body parts.
5. Content Fusion (fuseContent()): The content fusion module then takes the semantic understanding of the
user's body and combines it with a 3D model of the garment that the user wants to try on. This 3D model can
be provided by the retailer or designer of the garment.
6. Try-On (tryOnClothing()): Finally, the try-on module takes the fused data from the content fusion module
and positions the 3D model of the garment onto the virtual representation of the user's body. This is done in a
way that takes into account the user's body pose and the way the garment would naturally drape on a real person.
7. Output: The final output of the system is a picture of the user with the garment tried on. The user can then
view this image from different angles and even change the color or style of the garment to see how other options
would look.
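The steps above can be sketched as a simple pipeline. In the sketch below, each stage function is a hypothetical stub standing in for the real module (the names mirror the estimatePose(), parseHuman(), generateSemantic(), fuseContent(), and tryOnClothing() labels in the diagram); only the data flow between stages is illustrated.

```python
# Illustrative skeleton of the Fig 5.3 workflow. Each stage is a stub
# standing in for the real module; only the data flow is shown.

def estimate_pose(image):
    # Locate key body joints (stub: returns fixed keypoints).
    return {"neck": (96, 40), "r_shoulder": (70, 60)}

def parse_human(image):
    # Segment the body into labeled parts (stub: returns part names).
    return {"head": None, "torso": None, "arms": None, "legs": None}

def generate_semantic(pose, parts):
    # Combine pose and parsing into a semantic body description.
    return {"pose": pose, "parts": sorted(parts)}

def fuse_content(semantic, garment_model):
    # Attach the garment model to the semantic body description.
    return {"body": semantic, "garment": garment_model}

def try_on_clothing(fused):
    # Render the garment onto the virtual body (stub: tags the result).
    return dict(fused, rendered=True)

def virtual_try_on(user_image, garment_model):
    pose = estimate_pose(user_image)
    parts = parse_human(user_image)
    semantic = generate_semantic(pose, parts)
    fused = fuse_content(semantic, garment_model)
    return try_on_clothing(fused)

result = virtual_try_on(user_image="user.jpg", garment_model="tshirt.obj")
print(result["rendered"])   # True
```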


CHAPTER 6

IMPLEMENTATION

A technical specification or method is implemented when it is turned into a program, piece of software, or
other type of computer system. The goal of this phase is to realize the system's design as effectively as
possible by translating the design into code. Implementation is a critical phase in the life cycle of a system:
it is where the design becomes a functional module.
Implementation is the crucial, concluding stage of software development. It refers to the transformation of a
new system design into a working function, and it should result in robust, reusable, and extensible code.
Implementation also describes the process of guiding a customer from purchase to productive use of the
hardware or software that was purchased; this covers user policies, user training, system integration,
customization, scope analysis, and delivery.

6.1 Pseudocode

Pseudocode is a high-level, informal description of how an algorithm or computer program works. Although it
follows the structural conventions of a standard programming language, it is written for human rather than
machine reading. It is used to develop a rough draft or blueprint of a program: pseudocode condenses a
program's flow but omits supporting detail. System designers write pseudocode to make sure that programmers
understand the specifications of a software project and align their code with them.

The implemented model focuses on tops; complete apparel transfer is left as potential future work.

6.1.1 The Segmentation Algorithm

For this, we first needed a segmentation algorithm. Although open-source state-of-the-art models could have
been used, we stuck with robust image-processing techniques for segmentation. The idea is to localize the face,
estimate the model's skin color from the face region, and use this to divide the image into hair, clothes, skin,
and background.
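As an illustration of this idea, the sketch below samples the color statistics inside a given face bounding box and thresholds the rest of the image against them. This is a minimal sketch under assumptions: the face box would come from any off-the-shelf face detector, and the threshold k is an assumed tuning parameter, not a value from our implementation.

```python
import numpy as np

def skin_mask(image, face_box, k=2.5):
    """Build a skin mask by sampling color statistics inside a face box.

    image: HxWx3 float array; face_box: (x, y, w, h) as produced by any
    face detector (a Haar cascade or DNN detector would supply this).
    k is a hypothetical tolerance, in standard deviations.
    """
    x, y, w, h = face_box
    face = image[y:y + h, x:x + w].reshape(-1, 3)
    mean = face.mean(axis=0)
    std = face.std(axis=0) + 1e-6          # avoid division by zero
    # A pixel counts as "skin" if every channel lies within k standard
    # deviations of the face region's color statistics.
    dist = np.abs(image - mean) / std
    return (dist < k).all(axis=2)

# Toy example: a 4x4 image whose top-left 2x2 block is "face-colored".
img = np.zeros((4, 4, 3), dtype=float)
img[:2, :2] = [200.0, 150.0, 130.0]        # skin-like region
mask = skin_mask(img, face_box=(0, 0, 2, 2))
print(bool(mask[0, 0]), bool(mask[3, 3]))  # True False
```

In the real pipeline this mask would be combined with the clothing and hair segments to produce the four-way split described above.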


6.1.2 Geometric Matching Module

Once we have the clothing segment, we can geometrically compare it to the in-shop clothing. Our goal is to
learn a transform of the in-shop clothing that makes it as geometrically similar as possible to the clothing on
the model. Visually, this can be described as a grid of six images: top left is the in-shop clothing, top right
the clothing segment of the model, and top middle the transform (bottom left) applied to the in-shop clothing.

These examples are generated during training, so the in-shop clothing and the model clothing are the same
garment, which also makes qualitative assessment easier. The network architecture that learns this transform is
briefly described below.


We call the learning of this transformation the Geometric Matching Module, as it warps the in-shop clothing to
match the current clothing geometrically. Some results after training are shown below.
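As a simplified stand-in for this learned warp, the sketch below fits a plain least-squares affine transform between corresponding garment landmarks. This is only an illustration of the geometric-matching idea: the actual module regresses a richer (thin-plate-spline-style) warp with a network, and the landmark coordinates here are made-up values.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: Nx2 arrays of corresponding landmarks, e.g. corners
    of the in-shop garment and of the segmented garment on the model.
    """
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])     # rows are [x, y, 1]
    # Solve A @ M ~= dst_pts for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M

def apply_affine(M, pts):
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ M

# Hypothetical example: the garment on the model is shifted and scaled
# relative to the in-shop image.
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [100.0, 150.0]])
dst = src * 0.5 + np.array([20.0, 30.0])
M = fit_affine(src, dst)
print(np.allclose(apply_affine(M, src), dst))    # True
```

The same fitted transform would then be applied to every pixel of the in-shop garment image to warp it onto the model.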

6.1.3 Try-on Module

The instinctive approach to imposing the new clothing is to simply paste it over the image, but this causes
problems: the garment overlaps with the hair and hands, and the previous clothing remains visible, making the
result look very unrealistic. The solution is the try-on module, in which we implement an encoder-decoder
network to smooth out the composited image.
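The final composition step can be illustrated with a simple per-pixel alpha blend: the encoder-decoder predicts a composition mask, and the rendered frame mixes the warped garment and the original person image. In this sketch the mask values are made up for illustration rather than produced by a network.

```python
import numpy as np

def composite(person, warped_cloth, mask):
    """Alpha-composite the warped garment onto the person image.

    mask: HxW array in [0, 1]; in the real system this would be the
    composition mask predicted by the try-on network.
    """
    m = mask[..., None]                     # broadcast over color channels
    return m * warped_cloth + (1.0 - m) * person

# Toy 2x2 example with constant-colored images.
person = np.full((2, 2, 3), 10.0)
cloth = np.full((2, 2, 3), 200.0)
mask = np.array([[1.0, 0.0], [0.5, 1.0]])
out = composite(person, cloth, mask)
print(out[0, 0, 0], out[0, 1, 0], out[1, 0, 0])  # 200.0 10.0 105.0
```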


This gives a smoothed image that looks much more realistic than the result of simply pasting the garment over
the model. An in-depth description of the model and training strategy is beyond the scope of this section; the
references provide a thorough treatment.


6.2 Module 1: Pose Tracking

import cv2 as cv
import numpy as np
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
parser.add_argument('--thr', default=0.2, type=float, help='Threshold value for pose parts heat map')
parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')
args = parser.parse_args()

BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
               ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
               ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
               ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]

inWidth = args.width
inHeight = args.height

net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
cap = cv.VideoCapture(args.input if args.input else 0)

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output is [1, 57, H, W]; only the first 19 channels are keypoint heatmaps
    assert len(BODY_PARTS) == out.shape[1]

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Originally we would find all local maxima; to simplify the sample we
        # find only the global one, so a single pose at a time can be detected.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence is higher than the threshold.
        points.append((int(x), int(y)) if conf > args.thr else None)

    cv.imshow('OpenPose using OpenCV', frame)

6.3 Module 2: Human Parsing

import argparse
import os.path
import numpy as np
import cv2 as cv

backends = (cv.dnn.DNN_BACKEND_DEFAULT, cv.dnn.DNN_BACKEND_INFERENCE_ENGINE,
            cv.dnn.DNN_BACKEND_OPENCV)
targets = (cv.dnn.DNN_TARGET_CPU, cv.dnn.DNN_TARGET_OPENCL, cv.dnn.DNN_TARGET_OPENCL_FP16,
           cv.dnn.DNN_TARGET_MYRIAD, cv.dnn.DNN_TARGET_HDDL)

def preprocess(image):
    """Create a 4-dimensional blob from the image and its horizontal flip.
    :param image: input image
    """
    image_rev = np.flip(image, axis=1)
    input = cv.dnn.blobFromImages([image, image_rev],
                                  mean=(104.00698793, 116.66876762, 122.67891434))
    return input

def parse_human(image, model_path, backend=cv.dnn.DNN_BACKEND_OPENCV,
                target=cv.dnn.DNN_TARGET_CPU):
    """Prepare input for execution, run the net and postprocess the output to parse the human.
    :param image: input image
    :param model_path: path to JPPNet model
    :param backend: name of computation backend
    :param target: name of computation target
    """
    net = cv.dnn.readNet(model_path)
    net.setPreferableBackend(backend)
    net.setPreferableTarget(target)
    net.setInput(preprocess(image))
    out = net.forward()
    # Average the predictions for the original and the mirrored image
    # (flipping the second prediction back), then take the per-pixel
    # argmax over the part classes to obtain segmentation labels.
    original, flipped = out[0], np.flip(out[1], axis=2)
    mean_output = (original + flipped) / 2.0
    segm = mean_output.argmax(axis=0).astype(np.uint8)
    return cv.resize(segm, (image.shape[1], image.shape[0]),
                     interpolation=cv.INTER_NEAREST)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Use this script to run human parsing using JPPNet',
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--input', '-i', required=True, help='Path to input image.')
    parser.add_argument('--model', '-m', default='lip_jppnet_384.pb', help='Path to pb model.')
    parser.add_argument('--backend', choices=backends, default=cv.dnn.DNN_BACKEND_DEFAULT, type=int,
                        help="Choose one of computation backends: "
                             "%d: automatically (by default), "
                             "%d: Intel's Deep Learning Inference Engine (https://software.intel.com/openvino-toolkit), "
                             "%d: OpenCV implementation" % backends)
    parser.add_argument('--target', choices=targets, default=cv.dnn.DNN_TARGET_CPU, type=int,
                        help='Choose one of target computation devices: '
                             '%d: CPU target (by default), '
                             '%d: OpenCL, '
                             '%d: OpenCL fp16 (half-float precision), '
                             '%d: NCS2 VPU, '
                             '%d: HDDL VPU' % targets)
    args, _ = parser.parse_known_args()

    image = cv.imread(args.input)
    output = parse_human(image, args.model, args.backend, args.target)
    winName = 'Human parsing using JPPNet'
    cv.namedWindow(winName, cv.WINDOW_AUTOSIZE)
    cv.imshow(winName, output)
    cv.waitKey()


6.4 Module 3: Semantic Generation

if __name__ == "__main__":
    if not os.path.isfile(args.gmm_model):
        raise OSError("GMM model does not exist")
    if not os.path.isfile(args.tom_model):
        raise OSError("TOM model does not exist")
    if not os.path.isfile(args.segmentation_model):
        raise OSError("Segmentation model does not exist")
    if not os.path.isfile(findFile(args.openpose_proto)):
        raise OSError("OpenPose proto does not exist")
    if not os.path.isfile(findFile(args.openpose_model)):
        raise OSError("OpenPose model does not exist")

    # Center-crop the person image to the 256x192 aspect ratio expected by the networks.
    person_img = cv.imread(args.input_image)
    ratio = 256 / 192
    inp_h, inp_w, _ = person_img.shape
    current_ratio = inp_h / inp_w
    if current_ratio > ratio:
        # Image is too tall: crop rows around the vertical center.
        center_h = inp_h // 2
        out_h = inp_w * ratio
        start = int(center_h - out_h // 2)
        end = int(center_h + out_h // 2)
        person_img = person_img[start:end, ...]
    else:
        # Image is too wide: crop columns around the horizontal center.
        center_w = inp_w // 2
        out_w = inp_h / ratio
        start = int(center_w - out_w // 2)
        end = int(center_w + out_w // 2)
        person_img = person_img[:, start:end, :]

    cloth_img = cv.imread(args.input_cloth)
    pose = get_pose_map(person_img, findFile(args.openpose_proto),
                        findFile(args.openpose_model), args.backend, args.target)
    segm_image = parse_human(person_img, args.segmentation_model)
    segm_image = cv.resize(segm_image, (192, 256), cv.INTER_LINEAR)


CHAPTER 7
SYSTEM TESTING
Testing is the main quality-assurance technique used in software development. Its fundamental purpose is to
find software bugs. Different levels of testing are employed to test the various components of the system, and
each level performs a distinct role.

7.1 Levels of Testing

7.1.1 Unit Testing:


Unit testing is a method by which individual units of source code (sets of one or more program modules
together with associated control data, usage procedures, and operating procedures) are tested to determine
whether they are fit for use. Intuitively, a unit is the smallest testable part of an application. In
object-oriented programming a unit is often an entire interface, such as a class, but it can also be an
individual method. For unit testing we first adopted a code-testing strategy that examines the logic of the
program; syntax errors and similar defects were rooted out during development itself. We then developed test
cases that execute every instruction in a program or module, so that every path through the program was tested,
with test data chosen to exercise every possible branch and loop.
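As an illustration, a unit test for the keypoint-threshold logic in the pose-tracking module might exercise both branches of the helper below. The helper is a hypothetical extraction of that logic for testability, not a function that exists in the codebase as written.

```python
def keep_point(x, y, conf, thr=0.2):
    """Return the integer point if its heatmap confidence clears the
    threshold, else None (mirrors the pose module's filtering step)."""
    return (int(x), int(y)) if conf > thr else None

# Branch 1: confidence above the threshold, so the point is kept.
assert keep_point(12.7, 40.2, conf=0.9) == (12, 40)
# Branch 2: confidence below the threshold, so the point is dropped.
assert keep_point(12.7, 40.2, conf=0.05) is None
print("both branches covered")
```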

7.1.2 Functionality Testing:

Test case 1: the user is standing with objects in the background
Expected outcome: the cloth is imposed correctly on the user
Result: pass

Test case 2: the user is wearing a mask
Expected outcome: the cloth is imposed correctly on the user
Result: fail

Test case 3: the background is dark
Expected outcome: the cloth is imposed correctly on the user
Result: fail


Test case 4: multiple users are standing
Expected outcome: the cloth is imposed correctly on the user
Result: pass

7.1.3 Website UI Testing:

This is user-centric testing of the website. In this phase, items such as the visibility of text on the various
screens of the website, interactive messages, the alignment of data, the look and feel of the website on
different screens, and the size of fields are tested.

7.1.4 Compatibility Testing:

Compatibility plays an important role, as it determines whether a device meets the requirements of our website,
such as screen size, resolution, and network connectivity. This testing was done to check compatibility across
different screen resolutions.

7.1.5 Navigation Testing:

Navigation tests check that controls such as "back" and "go to recent work" function properly, and analyze how
users navigate through the website when given a specific task or goal.

7.1.6 Integration Testing:

Data can be lost across an interface, one module can have an adverse effect on another, and sub-functions, when
combined, may not produce the desired behavior. Integration testing is the systematic testing of the assembled
system to uncover errors at the interfaces. This testing was done with simple data, and the developed system
ran successfully with it. The purpose of integration testing is to assess the overall performance of the system.

Steps to perform integration testing:

Step 1: Create a Test Plan

Step 2: Create Test Cases and Test Data

Step 3: Once the components have been integrated execute the test cases

Step 4: Fix the bugs if any and re test the code

Step 5: Repeat the test cycle until the components have been successfully integrated.


CHAPTER 8

SNAPSHOTS

Fig 8.1 Login page of Virtual Dressing Room

Fig 8.1 shows the login page of our Virtual Dressing Room, where the user can log in to their account; the
website remembers the user's login and returns them to where they left off while trying on a dress.

Fig 8.2 Registration page of Virtual dressing room

Fig 8.2 shows the registration page, with fields for username, email, mobile number, and password, and
"SIGNIN" and "SIGNUP" buttons for user registration in the Virtual Trial Room application.


Fig 8.3 Home page of Virtual Dressing Room

Fig 8.3 depicts the home page of the application, showing a variety of clothes on display.

Fig 8.4 Size Recommendation page of Virtual Dressing Room

Fig 8.4 depicts the size-recommendation page of the application, where users can select their dress size and
the system recommends suitably fitting clothes.


Fig 8.5 Recommendation page of Virtual Dressing Room


Fig 8.5 shows the clothes recommended for that particular size, from which users can select based on their
preference.

Fig 8.6 Trial Output page of Virtual Dressing Room


Fig 8.6 shows the selected cloth tried on the image: the selected cloth is placed on the uploaded photo model
or live test image.


Fig 8.7 Trial Output page of Virtual Dressing Room


Fig 8.7 shows another trial output: the selected cloth is placed on the uploaded photo model or live test
image.


CHAPTER 9

CONCLUSION

The popularity of online shopping, and people's desire to use it to the fullest extent when buying clothes,
justifies the need for an algorithm that digitally dresses them in the chosen clothing.
A regular issue clients run into when shopping for clothing is the need to spend hours physically trying on a
range of outfits. The time available might not be enough, and the process can be exhausting. The suggested
remedy for this issue is a virtual styling room that serves as a trial room using a live video feed. The nodes
and key points of the human body are plotted using a Kinect sensor, and this information is then used to render
an image of clothing over the user's body, removing the need for physical fittings and saving time.
Online buyers would greatly appreciate the ability to see themselves in different outfits with fewer
limitations, thanks to this technology.
We concluded that this system genuinely saves time and demands no extra effort. Anyone who is not technically
savvy can use it, since it requires little technical expertise; it is therefore accessible and a good fit for a
clothier. Overall, the proposed virtual dressing room appears to be a solid option for precise and speedy
virtual clothing fitting.


FUTURE ENHANCEMENT
The application is currently designed solely to allow users to try on virtual garments. As planned, another
application will be built just for the owner or shopkeeper. It will give the owner daily information on how
many individuals tried on items, how many bought them, which clothes should be kept in stock longer, and so on.
We also intend to integrate a machine learning model into the application, which will recommend apparel to the
user based on their previous apparel preferences and other factors.

After the body sections of the target image have been identified, a few operations must be carried out to
ensure accuracy. Therefore, in order to place clothing on the target image, we need to train three different
types of networks:

PAN (pose alignment network)

TRN (texture refinement network)

FTN (fitting network)


REFERENCES
[1] Srinivasan K., Vivek S., "Implementation of Virtual Fitting Room Using Image Processing", Department of
Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India,
ICCCSP-2017.
[2] Cecilia Garcia, Nicolas Bessou, Anne Chadoeuf, Erdal Oruklu, "Image Processing Design Flow for Virtual
Fitting Room Applications Used in Mobile Devices", Department of Electrical and Computer Engineering, Illinois
Institute of Technology, Chicago, Illinois, USA.
[3] Kshitij Shah, Mridul Pandey, Sharvesh Patki, Radha Shankarmani, "A Virtual Trial Room Using Pose Estimation
and Homography", Department of Information Technology, Sardar Patel Institute of Technology, Mumbai, India.
[4] Vlado Kitanovski, Ebroul Izquierdo, "3D Tracking of Facial Features for Augmented Reality Applications",
Multimedia and Vision Research Group, Queen Mary, University of London, UK.
[5] A. Hilsmann, P. Eisert, "Realistic Cloth Augmentation in Single View Video", in Proc. Vision, Modelling,
and Visualization Workshop, 2009.
[6] Lan Ziquan, "Augmented Reality – Virtual Fitting Room Using Kinect", Department of Computer Science, School
of Computing, National University of Singapore, 2011/2012.
[7] Vipin Paul, Sanju Abel J., Sudarshan S., Praveen M., "Virtual Trial Room", South Asian Journal of
Engineering and Technology, Vol. 3, No. 5 (2017), 87-96.
[8] Nikita Deshmukh, Ishani Patil, Sudehi Patwari, Aarati Deshmukh, Pradnya Mehta, "Real Time Virtual Dressing
Room", IJCSN International Journal of Computer Science and Network, Volume 5, Issue 2, April 2016.
[9] Muhammed Kotan, Cemil Oz, "Virtual Dressing Room Application with Virtual Human Using Kinect Sensor",
Journal of Mechanics Engineering and Automation 5 (2015) 322-326, May 25, 2015.
[10] Shreya Kamani, Neel Vasa, Kriti Srivastava, "Virtual Trial Room Using Augmented Reality", International
Journal of Advanced Computer Technology (IJACT), Vol. 3, Number 6.
