Virtual Try On Documentation

The document describes a virtual try-on system for clothing and accessories using Python. It outlines the process, which includes face detection using the Viola-Jones algorithm to extract the user's face from video and superimpose virtual clothing onto the user's body in real time. The system specifications include hardware such as an Intel Core i3 processor and 4GB RAM, and software such as Windows 11, Python 3, and the Visual Studio Code IDE. It then provides a brief history of the Python programming language.

VIRTUAL TRY ON CLOTHING AND ACCESSORIES

USING PYTHON

Project Submitted to Mother Teresa Women’s University, Kodaikanal in partial


fulfillment of requirements for the award of

Degree of Bachelor of Science in Computer Science

Submitted by

V.S. MITHRAM MALA M.MAHA LAKSHMI


Reg.no: 204526ER027 Reg.no: 204526ER025

Under guidance of
Ms. Ajisha Anna Xavier, MCA
Assistant Professor of Computer Science
DEPARTMENT OF COMPUTER SCIENCE

NADAR SARASWATHI COLLEGE OF ARTS AND SCIENCE
VADAPUTHUPATTI, THENI - 625531

April – 2023

CERTIFICATE

This is to certify that this project, entitled "VIRTUAL TRY ON CLOTHING AND
ACCESSORIES USING PYTHON", is the bonafide record of work done by
V.S. MITHRAM MALA (Reg. no: 204526ER027) and M. MAHA LAKSHMI
(Reg. no: 204526ER025) in the academic year 2022-2023 at the Department of
Computer Science, Nadar Saraswathi College of Arts and Science, Theni, in partial
fulfillment of the requirements for the degree of Bachelor of Science in Computer Science,
and is based on the results of studies carried out by them under my guidance. It has not
previously formed the basis of the award of a degree, diploma, fellowship, associateship
or any other similar title.

Dr. K. Sivakami, Ph.D., SET.                    Ms. Ajisha Anna Xavier, MCA


Head & Assistant Professor, Guide & Assistant Professor
Department of Computer Science Department of Computer Science
Nadar Saraswathi College of Arts Nadar Saraswathi College of Arts

& Science, Theni.                               & Science, Theni.

The Project Viva-Voce Examination held on ____________________

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

I cordially express my sincere thanks to God, who gave me the health and knowledge to do
this project. I thank my parents for their constant support, standing by my side throughout this
endeavour; I fall short of words, for behind my every successful accomplishment, they are there.

I wish to express my sincere thanks to Dr. S. CHITRA, Ph.D., SET., D.Litt., Principal,
and Dr. A. KOMATHI, Ph.D., SET., Vice Principal, Nadar Saraswathi College of Arts and
Science, Theni, for giving me the opportunity to do this project.

I express my heartfelt gratitude to our Head of the Department, Dr. K. SIVAKAMI,
Ph.D., SET., Head and Assistant Professor, Department of Computer Science, Nadar Saraswathi
College of Arts and Science, Theni, for her valuable and all kinds of support.

My sincere thanks go to my guide, Ms. AJISHA ANNA XAVIER, MCA., Assistant
Professor, Department of Computer Science, who gave valuable guidance, suggestions
and great support at each and every step of my project.

My special thanks go to all Faculty Members in our Department and Friends for their
continuous encouragement in completing this course of study.

I am also indebted to all my well-wishers who have been with me, helping me to
complete the project successfully.


S.NO    CONTENT

I       INTRODUCTION
        1.1 PROJECT OVERVIEW
        1.2 PROCESS DESCRIPTION

II      SYSTEM SPECIFICATION
        2.1 HARDWARE SPECIFICATION
        2.2 SOFTWARE SPECIFICATION
        2.3 SOFTWARE DESCRIPTION

III     SYSTEM ANALYSIS
        3.1 EXISTING SYSTEM
        3.2 PROPOSED SYSTEM

IV      SYSTEM DESIGN AND DEVELOPMENT
        4.1 DATAFLOW DIAGRAM
        4.2 DATABASE DESIGN

V       SYSTEM TESTING & IMPLEMENTATION
        5.1 TESTING
        5.2 IMPLEMENTATION

VI      SOURCE CODE

VII     SAMPLE SCREEN

VIII    CONCLUSION

IX      BIBLIOGRAPHY
ABSTRACT

The proposed solution is a virtual fitting room for clothes and accessories, which
renders a 3D model of appropriately fitted clothing on the body. It can be used for online
shopping or for intelligent recommendation, narrowing the selection down to a few designs and
sizes virtually. Further, this project explores multiple outputs of 3D clothes transformation by
utilizing various machine-learning approaches, using the Haar classifier algorithm and digital
image processing. This enables users to see themselves wearing virtual clothes without taking
off their actual clothes. The system physically simulates the selected virtual clothes on the
user's body in real time, and the user can see the virtual clothes fitting on their mirror image
from various angles as they move.



1. INTRODUCTION

1.1 PROJECT OVERVIEW

The proposed solution is a virtual fitting room for clothes and accessories, which renders a
3D model of appropriately fitted clothing on the body. It can be used for online shopping or
for intelligent recommendation, narrowing the selection down to a few designs and sizes
virtually. Further, this project explores multiple outputs of 3D clothes transformation by
utilizing various machine-learning approaches, using the Haar classifier algorithm and
digital image processing. This enables users to see themselves wearing virtual clothes
without taking off their actual clothes. The system physically simulates the selected virtual
clothes on the user's body in real time, and the user can see the virtual clothes fitting on
their mirror image from various angles as they move.



1.2 PROCESS DESCRIPTION

DESCRIPTION
 Extraction of the user creates an augmented reality environment by isolating the
user's area from the video stream and superimposing it onto a virtual environment
in the user interface.

FACE DETECTION
 The main function of this step is to determine whether human faces appear in a
given image, and where these faces are located. The Viola-Jones algorithm is used.

VIOLA-JONES ALGORITHM
 The Viola-Jones object detection framework is an object detection framework by
Paul Viola and Michael Jones.
 Although it can be trained to detect a variety of object classes, it was motivated
primarily by the problem of face detection.



2. SYSTEM SPECIFICATION

2.1 HARDWARE SPECIFICATION

PROCESSOR        : INTEL(R) CORE(TM) i3-5005U CPU @ 2.00GHz
HARD DISK        : 1TB
RAM              : 4GB
MOUSE            : TOUCHPAD
KEYBOARD         : 108 KEYS (STANDARD KEYBOARD)

2.2 SOFTWARE SPECIFICATION

OPERATING SYSTEM : WINDOWS 11
FRONT END        : PYTHON 3
IDE              : VISUAL STUDIO CODE 1.75.1



2.3 SOFTWARE DESCRIPTION

PYTHON HISTORY

The programming language Python was conceived in the late 1980s, and its
implementation was started in December 1989 by Guido van Rossum at CWI in the
Netherlands as a successor to ABC capable of exception handling and interfacing with the
Amoeba operating system. Van Rossum is Python's principal author, and his continuing
central role in deciding the direction of Python is reflected in the title given to him by the
Python community, Benevolent Dictator for Life (BDFL). (However, Van Rossum stepped
down as leader on July 12, 2018). Python was named after the BBC TV show Monty
Python's Flying Circus.

Python 2.0 was released on October 16, 2000, with many major new features, including a
cycle-detecting garbage collector (in addition to reference counting) for memory
management and support for Unicode. However, the most important change was to the
development process itself, with a shift to a more transparent and community-backed
process.

Python 3.0, a major, backwards-incompatible release, was released on December 3, 2008,
after a long period of testing. Many of its major features have since been backported to the
backwards-compatible, though now-unsupported, Python 2.6 and 2.7.

The Python Platform

Python is a high-level, general-purpose programming language. Its design philosophy


emphasizes code readability with the use of significant indentation via the off-side rule.

Python is dynamically typed and garbage-collected. It supports multiple programming


paradigms, including structured (particularly procedural), object-oriented and functional
programming. It is often described as a "batteries included" language due to its
comprehensive standard library.

Guido van Rossum began working on Python in the late 1980s as a successor to the ABC
programming language and first released it in 1991 as Python 0.9.0. Python 2.0 was released
in 2000. Python 3.0, released in 2008, was a major revision not completely backward-
compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of
Python 2. Python consistently ranks as one of the most popular programming languages.

Python, when used in machine learning, offers developers of all skill sets exceptional
versatility and power. Developers can use Python to develop a variety of applications
because it integrates well with other software while its simple syntax makes it a good choice
for coding algorithms and collaborating across teams. Python also has a huge number
of libraries and frameworks that are very good for machine learning (such as Scikit-Learn),
which handle basic machine learning algorithms.

Python is the best choice for building machine learning models due to its ease of use,
extensive framework library, flexibility and more.

Python brings an exceptional amount of power and versatility to machine


learning environments. The language’s simple syntax simplifies data validation and
streamlines the scraping, processing, refining, cleaning, arranging and analyzing processes,
thereby making collaboration with other programmers less of an obstacle. Python also offers
a vast ecosystem of libraries that take much of the monotonous routine function writing
tasks out of the equation to free developers up to focus on code and reduces the chances for
error when programming.

Python also offers a great deal of flexibility and can be paired with other programming
languages to complete a machine learning model. Python can also run on any operating
system, from Windows to macOS, Linux, Unix and more. Perhaps most importantly, Python
is easy to read, beloved by a huge community of developers (who also contribute to the
development of new packages that facilitate machine learning) and continues to gain in
popularity. In short, Python’s online community makes it easy to find answers and resources
when building or troubleshooting machine learning models.

Using Python allows beginners to utilize a simplified programming language while learning
the fundamentals of machine learning.

Python is among the simplest programming languages in terms of syntax and ease of
understanding, making it the most common choice for those who have just started learning
about programming or are learning how to apply their Python knowledge to machine
learning.

In order to begin creating machine learning models using Python, it is crucial to understand
the different data types, like integers, strings and floating point numbers, as well
as statistical fundamentals, how to source data and more.

Understanding how to clean and structure your data is also necessary in order to create input
data to be fed into a machine learning model. Users should know how to access different
Python libraries and how to choose the right library to create machine learning models.
Lastly, users must know how to create and utilize algorithms in Python in order to build the
model itself.

Building machine learning models may be difficult in itself but using Python frameworks,
such as Scikit-Learn, simplifies the process by doing much of the heavy lifting and requiring
only that data is provided to function, which allows developers to focus on functionality and
trained accuracy of models.
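As a rough illustration of that point, a complete Scikit-Learn fit-and-predict cycle takes only a few lines once data is provided. The toy dataset and model below are invented for the example, not taken from this project:

```python
# Scikit-Learn does the heavy lifting; the developer mostly supplies data.
# Toy data: single feature, two clearly separable classes (illustrative only).
from sklearn.tree import DecisionTreeClassifier

X = [[0], [1], [2], [10], [11], [12]]   # feature values
y = [0, 0, 0, 1, 1, 1]                  # class labels

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)                          # training is a single call

print(model.predict([[1], [11]]))        # -> [0 1]
```

Swapping in another estimator (nearest neighbours, logistic regression, a random forest) changes one import and one constructor call, which is the "framework handles the basics" property the paragraph describes.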

There are many ways to begin learning Python for machine learning, including hands-on
experiences, courses, Built In tutorials and college education.

In order to begin using Python in a machine learning context, it is first important to


understand the fundamentals of both the programming language and data. Data
types, loops, conditional statements, data manipulation, algorithms, libraries like Pandas,
NumPy, Scikit-Learn and Matplotlib will all come into play when learning to use Python for
machine learning. You’ll need a working knowledge of all of these concepts. Additionally,
having a solid development environment, such as Jupyter Notebook, is crucial to staying
organized when building machine learning models.

Beyond simply learning how to code with Python, there are several options for learning how
to apply your Python knowledge to machine learning. Hands-on experience working with
software such as TensorFlow or other data-focused environments can allow beginners the
opportunity to experiment with their background knowledge and learn proper machine
learning programming processes through trial-and-error. To gain even more practical
knowledge and add efficiency to workflows, enrolling in a professional development course
from Built In can provide developers with a wealth of knowledge that will help them
enhance their machine learning models in specific ways. Finally, the most robust way to
learn Python for machine learning is by earning a bachelor’s degree in computer
science, data science or a related field from an accredited university.

MACHINE LEARNING

As a scientific endeavour, machine learning grew out of the quest for artificial
intelligence. In the early days of AI as an academic discipline, some researchers were
interested in having machines learn from data. They attempted to approach the problem with
various symbolic methods, as well as what was then termed "neural networks"; these were
mostly perceptrons and other models that were later found to be reinventions of the
generalized linear models of statistics. Probabilistic reasoning was also employed, especially
in automated medical diagnosis.

However, an increasing emphasis on the logical, knowledge-based approach caused a rift


between AI and machine learning. Probabilistic systems were plagued by theoretical and
practical problems of data acquisition and representation. By 1980, expert systems had come
to dominate AI, and statistics was out of favor. Work on symbolic/knowledge-based
learning did continue within AI, leading to inductive logic programming, but the more
statistical line of research was now outside the field of AI proper, in pattern recognition and
information retrieval. Neural networks research had been abandoned by AI and computer
science around the same time. This line, too, was continued outside the AI/CS field, as
"connectionism", by researchers from other disciplines including Hopfield, Rumelhart, and
Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.

Machine learning (ML), reorganized as a separate field, started to flourish in the 1990s. The
field changed its goal from achieving artificial intelligence to tackling solvable problems of
a practical nature. It shifted focus away from the symbolic approaches it had inherited from
AI, and toward methods and models borrowed from statistics, fuzzy logic and probability
theory.



THE VIOLA-JONES ALGORITHM
The Viola-Jones object detection framework is a machine learning object detection
framework proposed in 2001 by Paul Viola and Michael Jones. It was motivated primarily
by the problem of face detection, although it can be adapted to the detection of other object
classes.
The algorithm was efficient for its time, able to detect faces in 384 by 288 pixel images at 15
frames per second on a conventional 700 MHz Intel Pentium III. It is also robust, achieving
high precision and recall.
While it has lower accuracy than more modern methods such as convolutional neural
networks, its efficiency and compact size (only around 50k parameters, compared to millions
of parameters for a typical CNN like DeepFace) mean it is still used in cases with limited
computational power. For example, in the original paper, the authors reported that this face
detector could run on the Compaq iPAQ at 2 fps (a device with a low-power StrongARM
processor without floating-point hardware).

The first step is to choose a resolution for the images to be classified; the original paper
recommended a 24 by 24 pixel detection window.
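Much of the algorithm's speed comes from the integral image, which lets the sum of any rectangular region be computed with at most four array lookups. A plain-Python sketch of the idea (an illustration of the technique, not the project's code):

```python
# Integral image: the trick behind Viola-Jones' real-time speed.
def integral_image(img):
    """ii[r][c] = sum of img[0..r][0..c] (inclusive), built in one pass."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] using at most four lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

Every Haar-like feature is a difference of such rectangle sums, which is why thousands of features can be evaluated per detection window in real time.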



3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In the existing system of "Instantaneous media file upload to local server", we are
able to copy videos and images only to our own phone. The phone has limited storage,
and when the number of videos and images grows, the storage eventually gets formatted.
We may recover the formatted videos, but sometimes some videos cannot be retrieved. If
we want to send the videos to another person, it is a long process and it also wastes MB
of data.

Disadvantages:

 Waste of time
 Difficulties in recovery
 Loss of important data
 Storage problems
 Security problems



3.2 PROPOSED SYSTEM

In the proposed system, we can overcome the disadvantages of the existing
system. We can copy the media files directly from the mobile to the server without an
internet connection, through which we can easily transfer the data.

Advantages:

 Safety and security
 Time management
 Being online is not necessary



4. SYSTEM DESIGN AND DEVELOPMENT

DESIGN

The design phase begins with the requirements specification for the software to be
developed. Design is the first step in moving from the problem domain towards the
solution domain. Design is essentially the bridge between the requirement specification
and the final solution for satisfying the requirements. It is the most critical factor
affecting the quality of the software.

1. Input Design
2. Output Design



4.1 INPUT DESIGN

Inputs can be classified according to two characteristics.

Definition:

"When the design input has been reviewed and the design input requirements are
determined to be acceptable, an iterative process of translating those requirements into a
device design begins. The first step is conversion of the requirements into system or high-
level specifications. Thus, these specifications are a design output. Upon verification that
the high-level specifications conform to the design input requirements, they become the
design input for the next step in the design process, and so on."

 How data is initially to be captured, entered and processed.
 The method and technology used to capture and enter data.
 Input design begins long before the data arrives at the input device, be it
a keyboard or a mouse. Source documentation, input screens, methods
and procedures for getting the data into the computer have to be designed
first.

PHYSICAL DESIGN

The physical design takes into consideration the physical data flows, which must
represent any of the following.

Definition:

"In integrated circuit design, physical design is a step in the standard design cycle
which follows after the circuit design. At this step, circuit representations of the
components (devices and interconnects) of the design are converted into geometric
representations of shapes which, when manufactured in the corresponding layers of
materials, will ensure the required functioning of the components. This geometric
representation is called integrated circuit layout. This step is usually split into several
sub-steps, which include both design and verification and validation of the layout."

Modern-day Integrated Circuit (IC) design is split up into front-end design using
HDLs, verification, and back-end design or physical design. The next step after
physical design is the manufacturing process, or fabrication process, that is done in the
wafer fabrication houses. Fab houses fabricate designs onto silicon dies, which are then
packaged into ICs.

Each of the phases mentioned above has design flows associated with it. These
design flows lay down the process and guidelines/framework for that phase. The
physical design flow uses the technology libraries that are provided by the fabrication
houses. These technology files provide information regarding the type of silicon wafer
used, the standard cells used, the layout rules (like DRC in VLSI), etc.

o The planned implementation of an input to or output from a physical process.
o A database command or action such as insert, delete, and update.
o The import of data from, or export of data to, another information system across a
network.
o The flow of data between two modules or subroutines within the same program.


LOGICAL DESIGN

The logical design seeks to trace the flow of data throughout the system. Data flow
diagrams were designed to achieve this end in a graphical format that is easy to understand.
All processes on any of the DFDs must have at least one input and one output data flow.

[Data flow diagram]

Application
  -> Capture Image / Record Video
  -> Image Capturing / Video Recording
  -> Stored in phone memory (internal or external)
  -> Upload to server (configuration checked)
       -> If connected: photo/video uploaded to server successfully
       -> If not connected: connection refused
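The rule that every DFD process must have at least one input and one output data flow can be checked mechanically. A toy sketch, with a hypothetical flow table loosely modeled on the diagram above (node names are illustrative, not from the project):

```python
# Validate a DFD: every process must have at least one input and one output flow.
# Flow table: (source, target) pairs between nodes (hypothetical names).
flows = [
    ("User", "Capture Image"),
    ("Capture Image", "Phone Memory"),
    ("Phone Memory", "Upload to Server"),
    ("Upload to Server", "Server"),
]
processes = ["Capture Image", "Upload to Server"]

def invalid_processes(processes, flows):
    """Return the processes that lack an input flow or an output flow."""
    sources = {s for s, _ in flows}   # nodes that emit at least one flow
    targets = {t for _, t in flows}   # nodes that receive at least one flow
    return [p for p in processes if p not in sources or p not in targets]

print(invalid_processes(processes, flows))  # [] -> every process has both
```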


4.2 OUTPUT DESIGN

Definition:

"This basic technique is used repeatedly throughout the design process. Each design
input is converted into a new design output; each output is verified as conforming to its
input; and it then becomes the design input for another step in the design process. In this
manner, the design input requirements are translated into a device design conforming to
those requirements."

The design of output screens has been kept as simple as possible. The user is
provided with either a tabular representation or a statement giving details of the transaction.


USER INTERFACE DESIGN

The basic steps of user interface design have been followed. They are:

 Managing the user interface dialogue.
 Prototyping the dialogue and user interface.
 Obtaining user feedback.


5. SYSTEM TESTING AND IMPLEMENTATION

5.1 TESTING

SOFTWARE TESTING

Software testing is the process of evaluating a software item to detect differences
between given input and expected output, and also to assess the features of the software
item. Testing assesses the quality of the product. Software testing is a process that should
be done during the development process.

VERIFICATION

Verification is to make sure the product satisfies the conditions imposed at the start of
the development phase. In other words, to make sure the product behaves the way we want
it to.

VALIDATION

Validation is the process to make sure the product satisfies the specified requirements
at the end of the development phase. In other words, to make sure the product is built as per
customer requirements.

BASICS OF SOFTWARE TESTING

There are two basics of software testing: black box testing and white box testing.

BLACK BOX TESTING

Black box testing is a testing technique that ignores the internal mechanism of the
system and focuses on the output generated against any input and execution of the system.
It is also called functional testing.


WHITE BOX TESTING

White box testing is a testing technique that takes into account the internal mechanism
of a system. It is also called structural testing and glass box testing.

Black box testing is often used for validation and white box testing is often used for
verification.

TYPES OF TESTING

There are many types of testing, like:

 Unit Testing
 Integration Testing
 Functional Testing
 System Testing
 Stress Testing
 Performance Testing
 Usability Testing
 Regression Testing

UNIT TESTING
Unit testing is the testing of an individual unit or group of related units. It falls under
the class of white box testing. It is often done by the programmer to test that the unit he/she
has implemented is producing the expected output against a given input.

MANUAL VS AUTOMATED TESTING
Testing can either be done manually or using an automated testing tool:

 Manual - This testing is performed without the help of automated testing tools.
The software tester prepares test cases for different sections and levels of the code,
executes the tests and reports the results to the manager.

Manual testing is time- and resource-consuming. The tester needs to confirm whether or
not the right test cases are used. A major portion of testing involves manual testing.

 Automated - This testing is a testing procedure done with the aid of automated
testing tools. The limitations of manual testing can be overcome using automated
test tools.

For example, a test needs to check whether a webpage can be opened in Internet Explorer.
This can be easily done with manual testing. But to check whether the web server can take
the load of 1 million users, it is quite impossible to test manually.

There are software and hardware tools which help the tester in conducting load testing,
stress testing and regression testing.

INTEGRATION TESTING
Integration testing is testing in which a group of components is combined to produce
output. Also, the integration between software and hardware is tested in integration testing
if the software and hardware components have any relation. It may fall under both white
box and black box testing.

FUNCTIONAL TESTING
Functional testing verifies the specified functionality required in the system requirements.

STRESS TESTING
Stress testing is the testing done to evaluate how the system behaves under unfavorable
conditions. Testing is conducted beyond the limits of the specifications. It falls under the
class of black box testing.

PERFORMANCE TESTING
Performance testing is the testing done to assess the speed and effectiveness of the system
and to make sure it is generating results within a specified time, as in the performance
requirements. It falls under the class of black box testing.

SYSTEM TESTING
The software is compiled as a product and then it is tested as a whole. This can be
accomplished using one or more of the following tests:

 Functionality testing - Tests all functionalities of the software against the requirements.

 Performance testing - This test proves how efficient the software is. It tests the
effectiveness and average time taken by the software to do the desired task.
Performance testing is done by means of load testing and stress testing, where the
software is put under high user and data load under various environment conditions.

USABILITY TESTING
Usability testing is performed from the perspective of the client, to evaluate how user-
friendly the GUI is. How easily can the client learn it? After learning how to use it, how
proficiently can the client perform? How pleasing is its design to use? This falls under the
class of black box testing.

REGRESSION TESTING
Regression testing is the testing done after modification of a system, component, or group
of related modules, to ensure that the modification has not produced unexpected results. It
falls under the class of black box testing.


5.2 IMPLEMENTATION
The implementation is the final and important phase. It involves user training, system
testing and successful running of the developed system. The users test the developed
system, and changes are made according to their needs. The testing phase involves the
testing of the developed system using various kinds of data. An elaborate set of test data
is prepared and the system is tested using that test data.
Implementation is the stage where the theoretical design is turned into a working
system. Implementation is planned carefully for the proposed system to avoid
unanticipated problems; much preparation is involved before and during the
implementation of the proposed system. The system needed to be plugged into the
organization's network so that it could be accessed from anywhere, once a user logs into
the portal. The tasks that had to be done to implement the system were to create the
database in the organization's database domain. Then the administrator was granted his
role so that the system could be accessed.
The next phase in the implementation was to educate users on the system. A
demonstration of all the functions that can be carried out by the system was given to the
examination department person, who will make extensive use of the system.


6. SOURCE CODE

CODE:

The main purpose of code design is to facilitate the identification and retrieval of
information. Code design is the process of realizing the data flow diagram in code. The
code should be easy to debug when an error occurs.

MAIN.PY:

from flask import Flask, render_template, Response, redirect, request
from camera import VideoCamera
import os

app = Flask(__name__)
CART = []

@app.route('/checkOut', methods=['POST', 'GET'])
def checkOut():
    return render_template('checkout.html')

@app.route('/tryon/<file_path>', methods=['POST', 'GET'])
def tryon(file_path):
    # Commas in the URL stand in for path separators.
    file_path = file_path.replace(',', '/')
    os.system('python tryOn.py ' + file_path)
    return redirect('http://127.0.0.1:5000/', code=302)

@app.route('/tryall', methods=['POST', 'GET'])
def tryall():
    if request.method == 'POST':
        cart = request.form['mydata'].replace(',', '/')
        os.system('python test.py ' + cart)
    return render_template('checkout.html', message='')

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    # Stream an MJPEG multipart response, one JPEG frame at a time.
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/cart/<file_path>', methods=['POST', 'GET'])
def cart(file_path):
    global CART
    file_path = file_path.replace(',', '/')
    CART.append(file_path)
    return render_template('checkout.html')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(debug=True)
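One design note on main.py: os.system passes the user-supplied file path through a shell, where special characters could be interpreted as commands. A hedged alternative sketch using subprocess with an argument list, which avoids shell interpretation (tryOn.py is the script assumed by the code above; the demo below substitutes an inline script so it is self-contained):

```python
# Sketch: launch a helper script without invoking a shell.
import subprocess
import sys

def run_helper(script, file_path):
    """Run `python <script> <file_path>`; the path is passed as one argument,
    never interpreted by a shell."""
    result = subprocess.run([sys.executable, script, file_path],
                            capture_output=True, text=True)
    return result.returncode

# Demo with an inline script standing in for tryOn.py:
code = subprocess.run([sys.executable, "-c", "print('ok')"],
                      capture_output=True, text=True)
print(code.stdout.strip())  # ok
```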

CAMERA.PY:

import cv2
import dlib
from imutils import face_utils, rotate_bound
# from tryOn import calculate_inclination


class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the same
        # folder as main.py:
        # self.video = cv2.VideoCapture('video.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capturing raw
        # images, so we must encode each frame as JPEG in order to correctly
        # display the video stream.
        # Optional dlib facial-landmark processing (disabled):
        # detector = dlib.get_frontal_face_detector()
        # model = "data/shape_predictor_68_face_landmarks.dat"
        # predictor = dlib.shape_predictor(model)
        # gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # faces = detector(gray, 0)
        # for face in faces:
        #     (x, y, w, h) = (face.left(), face.top(), face.width(), face.height())
        #     shape = predictor(gray, face)
        #     shape = face_utils.shape_to_np(shape)
        #     incl = calculate_inclination(shape[17], shape[26])  # inclination based on eyebrows
        #     # condition to see if mouth is open
        #     is_mouth_open = (shape[66][1] - shape[62][1]) >= 10  # y coordinates of lip landmarks
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
UPLOADACTIVITY:

from tkinter_scroll import *

from tkinter import *
from PIL import Image
from PIL import ImageTk
import cv2, threading, os, sys, time  # sys is needed for sys.argv below
from threading import Thread
from os import listdir
from os.path import isfile, join

import dlib
from imutils import face_utils, rotate_bound
import math

ACTIVE_IMAGES = [0 for i in range(100)]
def put_sprite(num, k):
    global SPRITES, BTNS
    SPRITES[num] = (1 - SPRITES[num])
    if SPRITES[num]:
        ACTIVE_IMAGES[num] = k
        BTNS[num].config(relief=SUNKEN)  # button appears pressed while its sprite is active
    else:
        BTNS[num].config(relief=RAISED)
def draw_sprite(frame, sprite, x_offset, y_offset):
    (h, w) = (sprite.shape[0], sprite.shape[1])
    (imgH, imgW) = (frame.shape[0], frame.shape[1])
    if y_offset + h >= imgH:
        sprite = sprite[0:imgH - y_offset, :, :]
    if x_offset + w >= imgW:
        sprite = sprite[:, 0:imgW - x_offset, :]
    if x_offset < 0:
        sprite = sprite[:, abs(x_offset)::, :]
        w = sprite.shape[1]
        x_offset = 0
    for c in range(3):
        frame[y_offset:y_offset + h, x_offset:x_offset + w, c] = \
            sprite[:, :, c] * (sprite[:, :, 3] / 255.0) + \
            frame[y_offset:y_offset + h, x_offset:x_offset + w, c] * (1.0 - sprite[:, :, 3] / 255.0)
    return frame
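draw_sprite composites the sprite over the frame channel by channel, weighting each pixel by the sprite's alpha channel. The same blend can be demonstrated standalone with a tiny synthetic patch (the array sizes here are illustrative, not the real webcam frame):

```python
import numpy as np

def alpha_blend(region, sprite):
    # region: HxWx3 background patch; sprite: HxWx4 BGRA overlay.
    # Same formula as draw_sprite: out = sprite * alpha + region * (1 - alpha).
    alpha = sprite[:, :, 3:4].astype(float) / 255.0
    out = sprite[:, :, :3].astype(float) * alpha + region.astype(float) * (1.0 - alpha)
    return out.astype(np.uint8)

region = np.full((2, 2, 3), 100, dtype=np.uint8)   # grey background
sprite = np.zeros((2, 2, 4), dtype=np.uint8)
sprite[:, :, :3] = 200                             # sprite colour
sprite[0, :, 3] = 255                              # top row fully opaque
# bottom row alpha stays 0 (fully transparent)
out = alpha_blend(region, sprite)
# opaque pixels take the sprite colour; transparent ones keep the background
```

This is why the sprites must be loaded with `cv2.imread(path, -1)`: the `-1` flag preserves the PNG alpha channel that the blend depends on.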
def adjust_sprite2head(sprite, head_width, head_ypos, ontop=True):
    (h_sprite, w_sprite) = (sprite.shape[0], sprite.shape[1])
    factor = 1.0 * head_width / w_sprite
    sprite = cv2.resize(sprite, (0, 0), fx=factor, fy=factor)  # scale sprite to the head width

    (h_sprite, w_sprite) = (sprite.shape[0], sprite.shape[1])
    y_orig = head_ypos - h_sprite if ontop else head_ypos
    if y_orig < 0:  # clip the sprite if it extends above the frame
        sprite = sprite[abs(y_orig)::, :, :]
        y_orig = 0
    return (sprite, y_orig)
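The resize factor in adjust_sprite2head simply makes the sprite as wide as the detected head while preserving its aspect ratio, then clips the top if the sprite would leave the frame. The arithmetic alone (a sketch; `fit_sprite` is a hypothetical helper, not part of the project code) is:

```python
def fit_sprite(sprite_w, sprite_h, head_width, head_ypos, ontop=True):
    # Scale so the sprite is exactly head_width wide, preserving aspect ratio,
    # then place it above (ontop=True) or below the reference y position.
    factor = 1.0 * head_width / sprite_w
    new_w, new_h = int(sprite_w * factor), int(sprite_h * factor)
    y_orig = head_ypos - new_h if ontop else head_ypos
    clip = abs(y_orig) if y_orig < 0 else 0  # rows cut off at the top of the frame
    return new_w, new_h, max(y_orig, 0), clip
```

For a 200x100 sprite and a 100-pixel-wide head at y=30, the sprite shrinks to 100x50 and its top 20 rows are clipped off-screen.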
def apply_sprite2feature(image, sprite_path, haar_filter, x_offset, y_offset, y_offset_image,
                         adjust2feature, desired_width, x, y, w, h):
    sprite = cv2.imread(sprite_path, -1)
    (h_sprite, w_sprite) = (sprite.shape[0], sprite.shape[1])

    xpos = x + x_offset
    ypos = y + y_offset
    factor = 1.0 * desired_width / w_sprite

    sub_img = image[y + y_offset_image:y + h, x:x + w, :]
    feature = apply_Haar_filter(sub_img, haar_filter, 1.3, 10, 10)

    if len(feature) != 0:
        xpos, ypos = x, y + feature[0, 1]  # adjust only to the feature in the y axis (eyes)

        if adjust2feature:
            size_mustache = 1.2  # how many times bigger than the mouth
            factor = 1.0 * (feature[0, 2] * size_mustache) / w_sprite
            xpos = x + feature[0, 0] - int(feature[0, 2] * (size_mustache - 1) // 2)  # centered with respect to width
            ypos = y + y_offset_image + feature[0, 1] - int(h_sprite * factor)  # right on top

    sprite = cv2.resize(sprite, (0, 0), fx=factor, fy=factor)
    image = draw_sprite(image, sprite, xpos, ypos)
def apply_sprite(image, path2sprite, w, x, y, angle, ontop=True):
    sprite = cv2.imread(path2sprite, -1)  # -1 keeps the alpha channel
    sprite = rotate_bound(sprite, angle)
    (sprite, y_final) = adjust_sprite2head(sprite, w, y, ontop)
    image = draw_sprite(image, sprite, x, y_final)
def calculate_inclination(point1, point2):
    x1, x2, y1, y2 = point1[0], point2[0], point1[1], point2[1]
    incl = 180 / math.pi * math.atan((float(y2 - y1)) / (x2 - x1))
    return incl
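calculate_inclination converts the slope of the line through two landmarks (here the outer eyebrow points, landmarks 17 and 26) into a head-tilt angle in degrees. For example, two points on a 45° diagonal:

```python
import math

def inclination(point1, point2):
    # Angle in degrees of the line through the two landmark points
    x1, x2, y1, y2 = point1[0], point2[0], point1[1], point2[1]
    return 180 / math.pi * math.atan(float(y2 - y1) / (x2 - x1))

angle = inclination((0, 0), (10, 10))  # 45 degrees
```

Note that, like the project's function, this divides by zero when the two points are vertically aligned (x1 == x2); the eyebrow landmarks are never vertical in practice, so the project code does not guard against it.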
def calculate_boundbox(list_coordinates):
    x = min(list_coordinates[:, 0])
    y = min(list_coordinates[:, 1])
    w = max(list_coordinates[:, 0]) - x
    h = max(list_coordinates[:, 1]) - y
    return (x, y, w, h)
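calculate_boundbox expects an N×2 array of (x, y) landmark coordinates, as produced by `face_utils.shape_to_np`, and returns the tight axis-aligned box around them. A quick check with synthetic points:

```python
import numpy as np

def bound_box(points):
    # Tight axis-aligned bounding box over an Nx2 array of (x, y) points
    x = points[:, 0].min()
    y = points[:, 1].min()
    return (x, y, points[:, 0].max() - x, points[:, 1].max() - y)

pts = np.array([[10, 20], [30, 25], [15, 40]])
box = bound_box(pts)  # x=10, y=20, w=20, h=20
```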
def apply_Haar_filter(img, haar_cascade, scaleFact=1.05, minNeigh=3, minSizeW=30):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    features = haar_cascade.detectMultiScale(
        gray,
        scaleFactor=scaleFact,
        minNeighbors=minNeigh,
        minSize=(minSizeW, minSizeW),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    return features
def get_face_boundbox(points, face_part):
    # each face_part selects a range of the 68 dlib facial landmarks
    if face_part == 1:
        (x, y, w, h) = calculate_boundbox(points[17:22])  # left eyebrow
    elif face_part == 2:
        (x, y, w, h) = calculate_boundbox(points[22:27])  # right eyebrow
    elif face_part == 3:
        (x, y, w, h) = calculate_boundbox(points[36:42])  # left eye
    elif face_part == 4:
        (x, y, w, h) = calculate_boundbox(points[42:48])  # right eye
    elif face_part == 5:
        (x, y, w, h) = calculate_boundbox(points[29:36])  # nose
    elif face_part == 6:
        (x, y, w, h) = calculate_boundbox(points[1:17])  # jawline
    elif face_part == 7:
        (x, y, w, h) = calculate_boundbox(points[0:6])  # left side of the face
    elif face_part == 8:
        (x, y, w, h) = calculate_boundbox(points[11:17])  # right side of the face
    return (x, y, w, h)
def cvloop(run_event):
    global ctr_mid
    global SPRITES
    i = 0
    video_capture = cv2.VideoCapture(0)
    video_capture.set(3, 2048)  # requested frame width
    video_capture.set(4, 2048)  # requested frame height
    (x, y, w, h) = (0, 0, 10, 10)
    detector = dlib.get_frontal_face_detector()
    fullbody = cv2.CascadeClassifier('data/haarcascade_fullbody.xml')
    model = "data/shape_predictor_68_face_landmarks.dat"
    # link to model: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    predictor = dlib.shape_predictor(model)

    while run_event.is_set():
        ret, image = video_capture.read()
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        for face in faces:
            (x, y, w, h) = (face.left(), face.top(), face.width(), face.height())
            shape = predictor(gray, face)
            shape = face_utils.shape_to_np(shape)
            incl = calculate_inclination(shape[17], shape[26])  # inclination based on eyebrows
            is_mouth_open = (shape[66][1] - shape[62][1]) >= 10  # y coordinates of lip landmarks
            if SPRITES[3]:  # Tiara
                apply_sprite(image, IMAGES[3][ACTIVE_IMAGES[3]], w + 45, x - 20, y + 20, incl, ontop=True)

            # Necklaces
            if SPRITES[1]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 6)
                apply_sprite(image, IMAGES[1][ACTIVE_IMAGES[1]], w1, x1, y1 + 150, incl, ontop=False)

            # Goggles
            if SPRITES[6]:
                (x3, y3, _, h3) = get_face_boundbox(shape, 1)
                apply_sprite(image, IMAGES[6][ACTIVE_IMAGES[6]], w, x, y3 - 10, incl, ontop=False)

            # Earrings
            (x0, y0, w0, h0) = get_face_boundbox(shape, 6)  # bounding box of the jawline
            if SPRITES[2]:
                (x3, y3, w3, h3) = get_face_boundbox(shape, 7)  # left side of the face
                apply_sprite(image, IMAGES[2][ACTIVE_IMAGES[2]], w3, x3 - 40, y3 + 30, incl, ontop=False)
                (x3, y3, w3, h3) = get_face_boundbox(shape, 8)  # right side of the face
                apply_sprite(image, IMAGES[2][ACTIVE_IMAGES[2]], w3, x3 + 40, y3 + 75, incl)

            # if SPRITES[5]:
            #     apply_sprite(image, IMAGES[5][ACTIVE_IMAGES[5]], w, x, y, incl, ontop=True)

            # Frocks
            if SPRITES[5]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, IMAGES[5][ACTIVE_IMAGES[5]], w1 + 580, x1 - 350, y1 + 80, incl, ontop=False)

            # Tops
            if SPRITES[4]:
                # (x, y, w, h) = (0, 0, 10, 10)
                # apply_sprite2feature(image, IMAGES[7][ACTIVE_IMAGES[7]], fullbody, w//4, 2*h//3, h//2, True, w//2, x, y, w, h)
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, IMAGES[4][ACTIVE_IMAGES[4]], w1 + 350, x1 - 230, y1 + 100, incl, ontop=False)

        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = Image.fromarray(image)
        image = ImageTk.PhotoImage(image)
        ctr_mid.configure(image=image)
        ctr_mid.image = image

    video_capture.release()
root = Tk()
root.title('Virtual Dressing room')
app = FullScreenApp(root)

top_frame = Frame(root, bg='#077bd4', width=50, height=50, pady=3)
center = Frame(root, bg='white', width=50, height=40, padx=3, pady=3)
btm_frame = Frame(root, bg='#077bd4', width=50, height=50, pady=3)

root.grid_rowconfigure(1, weight=1)
root.grid_columnconfigure(0, weight=1)

top_frame.grid(row=0, sticky="ew")
center.grid(row=1, sticky="nsew")
btm_frame.grid(row=4, sticky="ew")

logo = ImageTk.PhotoImage(Image.open('logo1.png').resize((120, 60)))
model_label = Label(top_frame, image=logo)
model_label.grid(row=0, columnspan=3)

center.grid_rowconfigure(0, weight=1)
center.grid_columnconfigure(1, weight=1)

ctr_left = Label(center, bg='white', width=50, height=190)
ctr_mid = Label(center, bg='white', width=100, height=160, padx=0, pady=0)

ctr_left.grid(row=0, column=0, sticky="ns")
ctr_mid.grid(row=0, column=1, sticky="nsew")

scrollable_body = Scrollable(ctr_left, width=15)
SPRITES = [0 for i in range(10)]  # [0,0,0,0,0,0,0,0,0,0]
BTNS = []
IMAGES = {i: [] for i in range(10)}  # IMAGES = {0: [], 1: [], ..., 9: []}
PHOTOS = {i: [] for i in range(10)}  # PHOTOS = {0: [], 1: [], ..., 9: []}

print("Images= ", IMAGES)
print("PHOTOS= ", PHOTOS)
print("sys.argv[1:] =", sys.argv[1:])

# sys.argv[1:] contains the paths of all images selected in the GUI,
# e.g. ['static/images/Necklace1/1.png', 'static/images/Tops4/12.png']
for img in sys.argv[1:]:
    # append the image path to its category (the digit at the end of the directory name)
    IMAGES[int(img.rsplit('/', 1)[0][-1])].append(img)
    image = ImageTk.PhotoImage(Image.open(img).resize((150, 100)))
    PHOTOS[int(img.rsplit('/', 1)[0][-1])].append(image)

print("Images= ", IMAGES)
print("PHOTOS= ", PHOTOS)
for index in range(9):
    if len(PHOTOS[index]) > 0:
        for k, photo in enumerate(PHOTOS[index]):
            # print("text on btn= ", IMAGES[index][k].rsplit('/', 1))  # ['static/images/Necklace1', '1.png']
            btn = Button(scrollable_body, highlightbackground='white',
                         text=IMAGES[index][k].rsplit('/', 1)[1].replace('.png', '')[:-1],
                         bg='white', image=photo,
                         command=lambda index=index, k=k: put_sprite(index, k),
                         compound=LEFT, width='300', height='200')
            btn.pack(side="top", fill="both", expand="no", padx="5", pady="5")
            BTNS.append(btn)

scrollable_body.update()
run_event = threading.Event()
run_event.set()
action = Thread(target=cvloop, args=(run_event,))
action.setDaemon(True)
action.start()

def terminate():
    global root, run_event, action
    run_event.clear()
    time.sleep(1)
    root.destroy()

root.protocol("WM_DELETE_WINDOW", terminate)
root.mainloop()
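The loop over sys.argv[1:] above buckets each sprite path by the digit at the end of its directory name. Assuming the project's static/images/<Category><digit>/ layout, that lookup can be isolated as (`category_of` is a hypothetical helper for illustration):

```python
def category_of(path):
    # 'static/images/Necklace1/1.png' -> directory 'static/images/Necklace1'
    # -> its last character '1' is the category slot used for IMAGES/PHOTOS
    directory, filename = path.rsplit('/', 1)
    return int(directory[-1]), filename

cat, name = category_of('static/images/Necklace1/1.png')  # cat == 1
```

This convention means the directory names must end in a single digit (0-9); a two-digit category would silently map to the wrong slot.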
TKINTER_SCROLL.PY:

import tkinter as tk
from tkinter import ttk
class Scrollable(ttk.Frame):
    """
    Make a frame scrollable with a scrollbar on the right.
    After adding or removing widgets in the scrollable frame,
    call the update() method to refresh the scrollable area.
    """

    def __init__(self, frame, width=16):
        scrollbar = tk.Scrollbar(frame, width=width)
        scrollbar.pack(side=tk.RIGHT, fill=tk.Y, expand=False)

        self.canvas = tk.Canvas(frame, bg='white', yscrollcommand=scrollbar.set)
        self.canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

        scrollbar.config(command=self.canvas.yview)
        self.canvas.bind('<Configure>', self.__fill_canvas)

        # base class initialization
        tk.Frame.__init__(self, frame)

        # assign this object (the inner frame) to the window item of the canvas
        self.windows_item = self.canvas.create_window(0, 0, window=self, anchor=tk.NW)

    def __fill_canvas(self, event):
        "Enlarge the window item to the canvas width"
        canvas_width = event.width
        self.canvas.itemconfig(self.windows_item, width=canvas_width)

    def update(self):
        "Update the canvas and the scrollregion"
        self.update_idletasks()
        self.canvas.config(scrollregion=self.canvas.bbox(self.windows_item))
class FullScreenApp(object):
    def __init__(self, master, **kwargs):
        self.master = master
        pad = 3
        self._geom = '200x200+0+0'
        master.geometry("{0}x{1}+0+0".format(master.winfo_screenwidth() - pad,
                                             master.winfo_screenheight() - pad))
        master.bind('<Escape>', self.toggle_geom)

    def toggle_geom(self, event):
        geom = self.master.winfo_geometry()
        print(geom, self._geom)
        self.master.geometry(self._geom)
        self._geom = geom
# The following fragment belongs to the PhotoBooth application class; its
# __init__ sets a callback to handle when the window is closed:

        self.root.wm_title("PhotoBooth")
        self.root.wm_protocol("WM_DELETE_WINDOW", self.onClose)

    def videoLoop(self):
        try:
            while not self.stopEvent.is_set():
                self.frame = self.vs.read()
                self.frame = imutils.resize(self.frame, width=300)
                image = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
                image = Image.fromarray(image)
                image = ImageTk.PhotoImage(image)
                if self.panel is None:
                    self.panel = tki.Label(image=image)
                    self.panel.image = image
                    self.panel.pack(side="left", padx=10, pady=10)
                else:
                    self.panel.configure(image=image)
                    self.panel.image = image
        except RuntimeError as e:
            print("[INFO] caught a RuntimeError")

    def takeSnapshot(self):
        # grab the current timestamp and use it to construct the output path
        ts = datetime.datetime.now()
        filename = "{}.jpg".format(ts.strftime("%Y-%m-%d_%H-%M-%S"))
        p = os.path.sep.join((self.outputPath, filename))

        # save the file
        cv2.imwrite(p, self.frame.copy())
        print("[INFO] saved {}".format(filename))

    def onClose(self):
        # set the stop event, clean up the camera, and allow the rest of
        # the quit process to continue
        print("[INFO] closing...")
        self.stopEvent.set()
        self.vs.stop()
        self.root.quit()
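takeSnapshot builds the output filename from the current timestamp. With a fixed datetime and a hypothetical output directory name, the scheme works out to:

```python
import datetime
import os

def snapshot_path(output_dir, ts):
    # Same strftime pattern as takeSnapshot: YYYY-MM-DD_HH-MM-SS.jpg
    filename = "{}.jpg".format(ts.strftime("%Y-%m-%d_%H-%M-%S"))
    return os.path.sep.join((output_dir, filename)), filename

path, filename = snapshot_path("output", datetime.datetime(2023, 4, 1, 12, 30, 5))
```

Second-level resolution means two snapshots taken within the same second would overwrite each other; appending microseconds would avoid that.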
TRYON.PY:
from tkinter import *
from PIL import Image
from PIL import ImageTk
import cv2, threading, os, sys, time  # sys is needed for sys.argv below
from threading import Thread
from os import listdir
from os.path import isfile, join

import dlib
from imutils import face_utils, rotate_bound
import math

def put_sprite(num):
    global SPRITES, BTNS
    SPRITES[num] = (1 - SPRITES[num])
    # if SPRITES[num]:
    #     BTNS[num].config(relief=SUNKEN)
    # else:
    #     BTNS[num].config(relief=RAISED)
def draw_sprite(frame, sprite, x_offset, y_offset):
    print("sprite>>>>>>>", sprite.shape, "type=", type(sprite.shape))
    (h, w) = (sprite.shape[0], sprite.shape[1])
    (imgH, imgW) = (frame.shape[0], frame.shape[1])
    if y_offset + h >= imgH:
        sprite = sprite[0:imgH - y_offset, :, :]
    if x_offset + w >= imgW:
        sprite = sprite[:, 0:imgW - x_offset, :]
    if x_offset < 0:
        sprite = sprite[:, abs(x_offset)::, :]
        w = sprite.shape[1]
        x_offset = 0

    for c in range(3):
        try:
            frame[y_offset:y_offset + h, x_offset:x_offset + w, c] = \
                sprite[:, :, c] * (sprite[:, :, 3] / 255.0) + \
                frame[y_offset:y_offset + h, x_offset:x_offset + w, c] * (1.0 - sprite[:, :, 3] / 255.0)
        except Exception as e:
            print(e)
    return frame
def adjust_sprite2head(sprite, head_width, head_ypos, ontop=True):
    (h_sprite, w_sprite) = (sprite.shape[0], sprite.shape[1])
    factor = 1.0 * head_width / w_sprite
    sprite = cv2.resize(sprite, (0, 0), fx=factor, fy=factor)
    (h_sprite, w_sprite) = (sprite.shape[0], sprite.shape[1])

    y_orig = head_ypos - h_sprite if ontop else head_ypos
    if y_orig < 0:
        sprite = sprite[abs(y_orig)::, :, :]
        y_orig = 0
    return (sprite, y_orig)
def apply_sprite(image, path2sprite, w, x, y, angle, ontop=True):
    sprite = cv2.imread(path2sprite, -1)
    sprite = rotate_bound(sprite, angle)
    (sprite, y_final) = adjust_sprite2head(sprite, w, y, ontop)
    image = draw_sprite(image, sprite, x, y_final)

def calculate_inclination(point1, point2):
    x1, x2, y1, y2 = point1[0], point2[0], point1[1], point2[1]
    incl = 180 / math.pi * math.atan((float(y2 - y1)) / (x2 - x1))
    return incl

def calculate_boundbox(list_coordinates):
    x = min(list_coordinates[:, 0])
    y = min(list_coordinates[:, 1])
    w = max(list_coordinates[:, 0]) - x
    h = max(list_coordinates[:, 1]) - y
    return (x, y, w, h)
def detectUpperBody(image):
    cascadePath = 'data/haarcascade_upperbody.xml'
    result = image.copy()
    imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascadePath)
    Rect = cascade.detectMultiScale(imageGray, scaleFactor=1.1, minNeighbors=1, minSize=(1, 1))
    if len(Rect) <= 0:
        return False
    else:
        return Rect
def get_face_boundbox(points, face_part):
    if face_part == 1:
        (x, y, w, h) = calculate_boundbox(points[17:22])  # left eyebrow
    elif face_part == 2:
        (x, y, w, h) = calculate_boundbox(points[22:27])  # right eyebrow
    elif face_part == 3:
        (x, y, w, h) = calculate_boundbox(points[36:42])  # left eye
    elif face_part == 4:
        (x, y, w, h) = calculate_boundbox(points[42:48])  # right eye
    elif face_part == 5:
        (x, y, w, h) = calculate_boundbox(points[29:36])  # nose
    elif face_part == 6:
        (x, y, w, h) = calculate_boundbox(points[0:17])  # whole jawline
    elif face_part == 7:
        # (x, y, w, h) = calculate_boundbox(points[48:68])  # mouth
        (x, y, w, h) = calculate_boundbox(points[1:5])  # left side of the face
    elif face_part == 8:
        (x, y, w, h) = calculate_boundbox(points[12:16])  # right side of the face
    return (x, y, w, h)
image_path = ''

def add_sprite(img):
    global image_path
    image_path = img
    print("rsplit of imgpath>>>>>>>", img.rsplit('/', 1))
    print(">>>>>>>>>>>", int(img.rsplit('/', 1)[0][-1]))
    # the trailing digit of the directory name (1 to 6) denotes which category
    # the apparel belongs to, i.e. frock, top, etc.
    put_sprite(int(img.rsplit('/', 1)[0][-1]))

# Principal loop where the OpenCV processing occurs
# Face detection starts from here
def cvloop(run_event):
    global panelA
    global SPRITES
    global image_path
    i = 0
    video_capture = cv2.VideoCapture(0)  # read from the webcam
    (x, y, w, h) = (0, 0, 10, 10)  # arbitrary initial values

    # Filters path
    detector = dlib.get_frontal_face_detector()
    model = "data/shape_predictor_68_face_landmarks.dat"
    # link to model: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    predictor = dlib.shape_predictor(model)

    while run_event.is_set():
        ret, image = video_capture.read()
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert the colour image to grayscale
        faces = detector(gray, 0)

        for face in faces:
            # read the face coordinates
            (x, y, w, h) = (face.left(), face.top(), face.width(), face.height())
            shape = predictor(gray, face)
            shape = face_utils.shape_to_np(shape)
            incl = calculate_inclination(shape[17], shape[26])  # inclination based on eyebrows

            # condition to see if the mouth is open
            is_mouth_open = (shape[66][1] - shape[62][1]) >= 10  # y coordinates of lip landmarks

            if SPRITES[0]:
                apply_sprite(image, image_path, w, x, y + 40, incl, ontop=True)
            if SPRITES[3]:  # Tiara
                apply_sprite(image, image_path, w + 45, x - 20, y + 15, incl, ontop=True)

            # Necklaces
            if SPRITES[1]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 6)
                apply_sprite(image, image_path, w1, x1, y1 + 110, incl, ontop=False)

            # Goggles
            if SPRITES[6]:
                (x3, y3, _, h3) = get_face_boundbox(shape, 1)
                apply_sprite(image, image_path, w, x, y3 - 10, incl, ontop=False)

            # Earrings
            (x0, y0, w0, h0) = get_face_boundbox(shape, 6)  # bounding box of the jawline
            if SPRITES[2]:
                (x3, y3, w3, h3) = get_face_boundbox(shape, 7)  # left side of the face
                apply_sprite(image, image_path, w3, x3 - 40, y3 + 30, incl, ontop=False)
                (x3, y3, w3, h3) = get_face_boundbox(shape, 8)  # right side of the face
                apply_sprite(image, image_path, w3, x3 + 30, y3 + 75, incl)

            # if SPRITES[5]:
            #     apply_sprite(image, image_path, w, x, y, incl, ontop=True)

            # Frocks
            if SPRITES[5]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, image_path, w1 + 590, x1 - 300, y1 + 70, incl, ontop=False)

            # Tops
            if SPRITES[4]:
                # (x, y, w, h) = (0, 0, 10, 10)
                # apply_sprite2feature(image, IMAGES[7][ACTIVE_IMAGES[7]], fullbody, w//4, 2*h//3, h//2, True, w//2, x, y, w, h)
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, image_path, w1 + 300, x1 - 200, y1 + 40, incl, ontop=False)

        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = Image.fromarray(image)
        image = ImageTk.PhotoImage(image)
        # image = ImageTk.PhotoImage(image.resize((500, 500)))  # this is for photo mode
        panelA.configure(image=image)
        panelA.image = image

    video_capture.release()
# Initialize the GUI object
root = Tk()
root.title("Virtual trial room")
this_dir = os.path.dirname(os.path.realpath(__file__))
btn1 = None

def try_on(image_path):
    btn1 = Button(root, text="Try it ON", command=lambda: add_sprite(image_path))
    btn1.pack(side="top", fill="both", expand="no", padx="5", pady="5")

panelA = Label(root)
panelA.pack(padx=10, pady=10)

SPRITES = [0, 0, 0, 0, 0, 0, 0]
BTNS = [btn1]

print(">>>>>>>>>>>>>>>>>>>", sys.argv[1])
try_on(sys.argv[1])  # sys.argv[1] holds the path of the selected apparel

run_event = threading.Event()
run_event.set()
action = Thread(target=cvloop, args=(run_event,))
action.setDaemon(True)
action.start()
def terminate():
    global root, run_event, action
    run_event.clear()
    time.sleep(1)
    root.destroy()

root.protocol("WM_DELETE_WINDOW", terminate)
root.mainloop()
7. SAMPLE SCREENS

Sample 1:

Sample 2:

Sample 3:

Sample 4:

Sample 5:

Sample 6:

Sample 7:
8. CONCLUSION

• Travelling to a shop to buy clothes is a tedious task during the COVID-19 pandemic.
• In our work, users can choose their favourite clothes according to their size without going outside.
• The result is a user-friendly web application that lets shoppers try on clothes virtually.
• The web application is easy to navigate and convenient to use.
• Overall, the presented virtual dressing room is a good solution for quick and accurate virtual try-on of clothes.
Future Enhancement:

This project can be enhanced in several aspects for the user's convenience, such as
sending content directly to the intended recipient, and sending video immediately to the
local server. The local server incorporates a system of file sharing in which the creator of
a file or folder is, by default, its owner. The owner has the ability to regulate the public
visibility of the file or folder. Files and folders can also be made "public on the web",
which means that they can be indexed by search engines and thus found and accessed by
anyone. The owner may also set an access level for regulating permissions. The three
access levels offered are "can edit", "can comment" and "can view". Users with editing
access can invite others to edit. All of the third-party apps are free to install; however,
some have fees associated with continued usage or access to additional features. At
present, only about 8 MB of data can be uploaded at a time, but in future larger amounts
of data could be supported.
9. BIBLIOGRAPHY

REFERENCE WEBSITES:

• www.instructables.com
• www.youtube.com
• stackoverflow.com
• developer.android.com
• www.udacity.com
• www.i-programmer.info