Virtual Try On Documentation
USING PYTHON
Submitted by
Under guidance of
Ms. Ajisha Anna Xavier, MCA
Assistant Professor of Computer Science
DEPARTMENT OF COMPUTER SCIENCE
VADAPUTHUPATTI, THENI – 625531
April – 2023
CERTIFICATE
This is to certify that this project, entitled “VIRTUAL TRY ON CLOTHING AND
ACCESSORIES USING PYTHON”, is the bonafide record of work done by
V.S. MITHRAM MALA (Reg. no: 204526ER027) and M. MAHA
LAKSHMI (Reg. no: 204526ER025) in the academic year 2022-2023 at the Department of
Computer Science, Nadar Saraswathi College of Arts and Science, Theni, in partial fulfillment
of the requirements for the degree of Bachelor of Science in Computer Science, and is based on
the results of studies carried out by them under my guidance. It has not previously formed the
basis of the award of a degree, diploma, fellowship, associateship or any other similar title.
I cordially express my sincere thanks to God, who gave me the health and knowledge to do
this project. I thank my parents for their constant support by my side throughout this endeavour.
I thank Nadar Saraswathi College of Arts and Science, Theni for their valuable support of all kinds.
I thank my guide, Assistant Professor, Department of Computer Science, for her valuable
guidance and suggestions. My special thanks go to all faculty members in our department and
to my friends for their support.
I INTRODUCTION 1
II SYSTEM SPECIFICATION 3
III SYSTEM ANALYSIS 18
V SYSTEM TESTING AND IMPLEMENTATION
  5.1 TESTING 28
  5.2 IMPLEMENTATION 32
VI SOURCE CODE 33
VIII CONCLUSION 55
IX BIBLIOGRAPHY 57
ABSTRACT
The proposed solution is a virtual fitting room for clothes and accessories
which renders a 3D model of appropriately fitted clothing on the body. It can be used for online
shopping or intelligent recommendation to narrow down the selections to a few designs and sizes
virtually. Further, this project explores multiple outputs of 3D clothes transformation by utilizing
various machine learning approaches, using the Haar classifier algorithm and digital
image processing. This enables users to see themselves wearing virtual clothes without taking off
their actual clothes. The system physically simulates the selected virtual clothes on the user’s body
in real time, and the user can see the virtual clothes fitting on their mirror image from various
angles as they move.
1.1 PROJECT OVERVIEW
The proposed solution is a virtual fitting room for clothes and accessories which renders a 3D
model of appropriately fitted clothing on the body. It can be used for online shopping or
intelligent recommendation to narrow down the selections to a few designs and sizes virtually.
Further, this project explores multiple outputs of 3D clothes transformation by utilizing
various machine learning approaches, using the Haar classifier algorithm and digital
image processing. This enables users to see themselves wearing virtual clothes without
taking off their actual clothes. The system physically simulates the selected virtual clothes on
the user’s body in real time, and the user can see the virtual clothes fitting on their mirror image
from various angles as they move.
DESCRIPTION
User extraction allows us to create an augmented reality environment by isolating
the user area from the video stream and superimposing it onto a virtual
environment in the user interface.
FACE DETECTION
The main function of this step is to determine whether human faces appear in a
given image, and where these faces are located. The Viola–Jones algorithm is used.
VIOLA-JONES ALGORITHM
The Viola–Jones object detection framework is an object detection framework by
Paul Viola and Michael Jones. Although it can be trained to detect a variety of object
classes, it was motivated primarily by the problem of face detection.
2.1 HARDWARE SPECIFICATION
RAM : 4 GB
MOUSE : TOUCHPAD
KEYBOARD : 108 KEYS (STANDARD KEYBOARD)
2.2 SOFTWARE SPECIFICATION
OPERATING SYSTEM : WINDOWS 11
PYTHON HISTORY
The programming language Python was conceived in the late 1980s, and its
implementation was started in December 1989 by Guido van Rossum at CWI in the
Netherlands as a successor to ABC capable of exception handling and interfacing with the
Amoeba operating system. Van Rossum is Python's principal author, and his continuing
central role in deciding the direction of Python is reflected in the title given to him by the
Python community, Benevolent Dictator for Life (BDFL). (However, Van Rossum stepped
down as leader on July 12, 2018). Python was named after the BBC TV show Monty
Python's Flying Circus.
Python 2.0 was released on October 16, 2000, with many major new features, including a
cycle-detecting garbage collector (in addition to reference counting) for memory
management and support for Unicode. However, the most important change was to the
development process itself, with a shift to a more transparent and community-backed
process.
The Python Platform
Guido van Rossum began working on Python in the late 1980s as a successor to the ABC
programming language and first released it in 1991 as Python 0.9.0. Python 2.0 was released in 2000.
Python, when used in machine learning, offers developers of all skill sets exceptional
versatility and power. Developers can use Python to develop a variety of applications
because it integrates well with other software while its simple syntax makes it a good choice
for coding algorithms and collaborating across teams. Python also has a huge number
of libraries and frameworks that are very good for machine learning (such as Scikit-Learn),
which handle basic machine learning algorithms.
Python is the best choice for building machine learning models due to its ease of use,
extensive framework library, flexibility and more.
Python also offers a great deal of flexibility and can be paired with other programming
languages to complete a machine learning model. Python can also run on any operating
system, from Windows to macOS, Linux, Unix and more. Perhaps most importantly, Python
is easy to read, beloved by a huge community of developers (who also contribute to the
development of new packages that facilitate machine learning) and continues to gain in
popularity. In short, Python’s online community makes it easy to find answers and resources
when building or troubleshooting machine learning models.
Using Python allows beginners to utilize a simplified programming language while learning
the fundamentals of machine learning.
Python is the most simplified programming language in terms of its syntax and ease of
understanding, making it the most common choice for those who have just started learning
machine learning.
In order to begin creating machine learning models using Python, it is crucial to understand
the different data types, like integers, strings and floating point numbers, as well
as statistical fundamentals, how to source data and more.
Understanding how to clean and structure your data is also necessary in order to create input
data to be fed into a machine learning model. Users should know how to access different
Python libraries and how to choose the right library to create machine learning models.
Lastly, users must know how to create and utilize algorithms in Python in order to build the
model itself.
Building machine learning models may be difficult in itself, but using Python frameworks,
such as Scikit-Learn, simplifies the process by doing much of the heavy lifting, requiring
only that data be provided, which allows developers to focus on functionality and the
trained accuracy of models.
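As a brief illustrative sketch of this “heavy lifting” (assuming scikit-learn is installed; the iris dataset and k-nearest-neighbours model are example choices, not part of this project):

```python
# Illustrative sketch: training a small classifier with Scikit-Learn.
# The iris dataset and k-NN model are stand-in choices for demonstration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)            # the framework handles the algorithm internals
accuracy = model.score(X_test, y_test) # fraction of correct test predictions
```

The developer supplies data and picks a model; the framework implements the algorithm itself.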
There are many ways to begin learning Python for machine learning, including hands-on
experiences, courses, Built In tutorials and college education.
Beyond simply learning how to code with Python, there are several options for learning how
to apply your Python knowledge to machine learning. Hands-on experience working with
software such as TensorFlow or other data-focused environments can allow beginners the
opportunity to experiment with their background knowledge and learn proper machine
learning programming processes through trial-and-error. To gain even more practical
knowledge and add efficiency to workflows, enrolling in a professional development course
from Built In can provide developers with a wealth of knowledge that will help them
enhance their machine learning models in specific ways. Finally, the most robust way to
learn Python for machine learning is by earning a bachelor’s degree in computer
science, data science or a related field from an accredited university.
MACHINE LEARNING
As a scientific endeavour, machine learning grew out of the quest for artificial
intelligence. In the early days of AI as an academic discipline, some researchers were
interested in having machines learn from data. They attempted to approach the problem with
various symbolic methods, as well as what was then termed "neural networks"; these were
mostly perceptrons and other models that were later found to be reinventions of the
generalized linear models of statistics. Probabilistic reasoning was also employed, especially
in automated medical diagnosis.
Machine learning (ML), reorganized as a separate field, started to flourish in the 1990s. The
field changed its goal from achieving artificial intelligence to tackling solvable problems of
a practical nature. It shifted focus away from the symbolic approaches it had inherited from
AI, and toward methods and models borrowed from statistics, fuzzy logic and probability
theory.
Choose a resolution for the images to be classified. In the original paper, a base resolution
of 24×24 pixels was recommended.
3.1 EXISTING SYSTEM
In the existing system of “Instantaneous media file upload to local server”, we are
able to copy the videos and images onto our own phone. The phone has only limited
storage. When the number of videos and images increases, it will automatically be formatted.
We may recover the formatted videos, but sometimes some videos cannot be retrieved. If
we want to send the videos to another person, it is a long process and it also wastes MB
space.
Disadvantages:
Waste of time
Difficulties in recovery
Loss of important data
Storage problem
Security problem
Advantages:
DESIGN
The design phase begins with the requirements specification for the software to
be developed. Design is the first step to move from the problem domain towards the
solution domain. Design is essentially the bridge between requirement specification and the
final solution for satisfying the requirements. It is the most critical factor affecting the quality
of the software.
1. Input Design
2. Output Design
Inputs can be classified according to two characteristics.
Definition:
“When the design input has been reviewed and the design input requirements are
determined to be acceptable, an iterative process of translating those requirements into a
device design begins. The first step is conversion of the requirements into system or high-
level specifications. Thus, these specifications are a design output. Upon verification that
the high-level specifications conform to the design input requirements, they become the
design input for the next step in the design process, and so on.”
How data is initially to be captured, entered and processed.
The method and technology used to capture and enter data.
Input begins long before the data arrives at the input device, be it
a keyboard or a mouse. Source documentation, input screens,
methods and procedures for getting the data into the computer have to be
designed first.
The physical design takes into consideration the physical data flows, which must represent
any of the following.
Definition:
“In integrated circuit design, physical design is a step in the standard design
cycle which follows after the circuit design. At this step, circuit representations of the
components (devices and interconnects) of the design are converted into geometric
representations of shapes which, when manufactured in the corresponding layers of materials,
will ensure the required functioning of the components. This geometric representation is called
integrated circuit layout. This step is usually split into several sub-steps, which include both
design and verification and validation of the layout.”
Modern day Integrated Circuit (IC) design is split up into front-end design
using HDLs, verification, and back-end design or physical design. The next step after
physical design is the manufacturing or fabrication process that is done in the
wafer fabrication houses. Fab houses fabricate designs onto silicon dies which are then
packaged into ICs.
Each of the phases mentioned above has design flows associated with it.
These design flows lay down the process and guidelines/framework for that phase.
The physical design flow uses the technology libraries that are provided by the fabrication
houses. These technology files provide information regarding the type of silicon wafer used,
the standard cells used, the layout rules (like DRC in VLSI), etc.
o The planned implementation of an input to or output from a physical process.
o A database command or action such as insert, delete, and update.
o The import of data from or export of data to another information system across a
network.
o The flow of data between two modules or subroutines within the same program.
LOGICAL DESIGN
The logical design seeks to trace the flow of data throughout the system. Data flow
diagrams were designed to achieve this end in a graphical format that is easy to understand.
All processes on any of the DFDs must have at least one input and one output data flow.
[Data flow diagram: the Application offers Capture Image and Record Video. The captured
image or recorded video is stored into phone memory, either internal or external, and then
uploaded to the server according to the configuration. If connected, the photo/video is
uploaded to the server successfully; if not connected, the connection is refused.]
4.2 OUTPUT DESIGN
Definition:
“This basic technique is used repeatedly throughout the design process. Each
design input is converted into a new design output; each output is verified as conforming to its
input; and it then becomes the design input for another step in the design process. In this
manner, the design input requirements are translated into a device design conforming to those
requirements.”
Design of output screens has been kept as simple as possible. The user is
provided with either a tabular representation or a statement giving details of the transaction.
USER INTERFACE DESIGN
The basic steps of user interface design have been followed. They are:
5. SYSTEM TESTING AND IMPLEMENTATION
5.1 TESTING
SOFTWARE TESTING
VERIFICATION
Verification is to make sure the product satisfies the conditions imposed at the start
of the development phase. In other words, to make sure the product behaves the way we want
it to.
VALIDATION
Validation is the process to make sure the product satisfies the specified
requirements at the end of the development phase. In other words, to make sure the product is
built as per customer requirements.
BASICS OF SOFTWARE TESTING
There are two basics of software testing: black box testing and white box testing.
BLACKBOXTESTING
Black box testing is a testing technique that ignores the internal mechanism of
the system and focuses on the output generated against any input and execution of the system.
It is also called functional testing.
WHITE BOX TESTING
White box testing is a testing technique that takes into account the internal mechanism
of a system. It is also called structural testing and glass box testing.
Black box testing is often used for validation and white box testing is often used
for verification.
TYPES OF TESTING
There are many types of testing, such as:
Unit Testing
Integration Testing
Functional Testing
System Testing
Stress Testing
Performance Testing
Usability Testing
Regression Testing
UNIT TESTING
Unit testing is the testing of an individual unit or group of related units. It falls
under the class of white box testing. It is often done by the programmer to test that the unit
he/she has implemented is producing the expected output against a given input.
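As an illustrative sketch in Python’s built-in unittest framework (the `add_to_cart` function below is a hypothetical unit standing in for any function under test, not part of this project’s code):

```python
# Minimal unit-test sketch using Python's built-in unittest module.
# `add_to_cart` is a hypothetical unit under test, used for illustration only.
import unittest

def add_to_cart(cart, item):
    """Return a new cart list with the item appended."""
    return cart + [item]

class TestAddToCart(unittest.TestCase):
    def test_item_is_appended(self):
        # expected output is checked against a given input
        self.assertEqual(add_to_cart([], "necklace"), ["necklace"])

    def test_original_cart_unchanged(self):
        cart = ["tiara"]
        add_to_cart(cart, "goggles")
        self.assertEqual(cart, ["tiara"])
```

Such tests are typically run with `python -m unittest` from the project directory.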
MANUAL VS AUTOMATED TESTING
Testing can either be done manually or using an automated testing tool:
Manual testing is time and resource consuming. The tester needs to confirm whether or not
the right test cases are used. A major portion of testing involves manual testing.
A test needs to check if a webpage can be opened in Internet Explorer. This can be
easily done with manual testing. But to check if the web server can take the load of 1 million
users, it is quite impossible to test manually.
There are software and hardware tools which help testers in conducting load testing,
stress testing and regression testing.
INTEGRATION TESTING
Integration testing is testing in which a group of components are combined to
produce output. Also, the integration between software and hardware is tested in integration
testing if the software and hardware components have any relation. It may fall under both white
box and black box testing.
FUNCTIONAL TESTING
Functional testing is the testing of the specified functionality required in the system requirements.
STRESS TESTING
Stress testing is the testing to evaluate how the system behaves under unfavorable conditions.
Testing is conducted beyond the limits of the specifications. It falls under the
class of black box testing.
PERFORMANCE TESTING
Performance testing is the testing to assess the speed and effectiveness of the
system and to make sure it is generating results within a specified time, as in the performance
requirements. It falls under the class of black box testing.
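As an illustrative sketch of this idea (the workload and the one-second budget below are hypothetical examples, not requirements of this project):

```python
# Hypothetical performance check: verify a task finishes within a time budget.
import time

def task():
    # stand-in workload; a real test would exercise the system under test
    return sum(range(100_000))

start = time.perf_counter()
result = task()
elapsed = time.perf_counter() - start
within_budget = elapsed < 1.0  # illustrative "specified time" requirement
```

A real performance test would repeat the measurement and report averages, since single timings vary between runs.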
SYSTEM TESTING
The software is compiled as a product and then it is tested as a whole. This can be
accomplished using one or more of the following tests:
Functionality testing - Tests all functionalities of the software against the requirement.
Performance testing - This test proves how efficient the software is. It tests the
effectiveness and average time taken by the software to do the desired task.
Performance testing is done by means of load testing and stress testing, where the
software is put under high user and data load under various environment conditions.
USABILITY TESTING
Usability testing is performed from the perspective of the client, to evaluate how user-friendly
the GUI is: how easily can the client learn it? After learning how to use it, how proficiently
can the client perform? How pleasing is its design to use? This falls under the class of
black box testing.
REGRESSION TESTING
Regression testing is the testing done after modification of a system, component, or a group
of related modules, to make sure that the modification has not caused other modules to produce
unexpected results. It falls under the class of black box testing.
5.2 IMPLEMENTATION
The implementation is the final and most important phase. It involves user training,
system testing and the successful running of the developed system. The users test the developed
system, and changes are made according to their needs. The testing phase involves the testing
of the developed system using various kinds of data. An elaborate set of test data is prepared,
and the system is tested using that test data.
Implementation is the stage where the theoretical design is turned into a working system.
Implementation is planned carefully to avoid unanticipated problems in the proposed system.
Much preparation is involved before and during the implementation of the proposed system.
The system needed to be plugged into the organization’s network so that it could be accessed
from anywhere once a user logs in to the portal. The tasks that had to be done to implement
the system were to create the database in the organization’s database domain. Then the
administrator was granted his role so that the system could be accessed.
The next phase in the implementation was to educate the users of the system. A demonstration
of all the functions that can be carried out by the system was given to the examination
department person, who will make extensive use of the system.
6. SOURCE CODE
CODE:
The main purpose of code design is to facilitate the identification and retrieval
of information. Code design is the process of representing the data flow diagram. This should
be easy to debug when an error occurs.
MAIN.PY:
from flask import Flask, render_template, Response, redirect, request
from camera import VideoCamera
import os

app = Flask(__name__)
CART = []

@app.route('/checkOut', methods=['POST', 'GET'])
def checkOut():
    return render_template('checkout.html')

@app.route('/tryon/<file_path>', methods=['POST', 'GET'])
def tryon(file_path):
    file_path = file_path.replace(',', '/')
    os.system('python tryOn.py ' + file_path)
    return redirect('http://127.0.0.1:5000/', code=302, Response=None)
    # return Response(gen(VideoCamera()),
    #                 mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/tryall', methods=['POST', 'GET'])
def tryall():
    if request.method == 'POST':
        cart = request.form['mydata'].replace(',', '/')
        os.system('python test.py ' + cart)
    return render_template('checkout.html', message='')

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    # stream frames as a multipart MJPEG response
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route("/cart/<file_path>", methods=['POST', 'GET'])
def cart(file_path):
    global CART
    file_path = file_path.replace(',', '/')
    CART.append(file_path)
    return render_template("checkout.html")

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(debug=True)
CAMERA.PY:
import cv2
import dlib
from imutils import face_utils, rotate_bound
# from tryOn import calculate_inclination

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the same
        # folder as main.py.
        # self.video = cv2.VideoCapture('video.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capturing raw
        # images, so we must encode each frame into JPEG in order to correctly
        # display the video stream.
        # detector = dlib.get_frontal_face_detector()
        # model = "data/shape_predictor_68_face_landmarks.dat"
        # predictor = dlib.shape_predictor(model)
        # gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # faces = detector(gray, 0)
        # for face in faces:
        #     (x, y, w, h) = (face.left(), face.top(), face.width(), face.height())
        #     shape = predictor(gray, face)
        #     shape = face_utils.shape_to_np(shape)
        #     incl = calculate_inclination(shape[17], shape[26])  # inclination based on eyebrows
        #     # condition to see if mouth is open
        #     is_mouth_open = (shape[66][1] - shape[62][1]) >= 10  # y coordinates of lip landmark points
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
UPLOADACTIVITY:
import dlib
from imutils import face_utils, rotate_bound
import math

    xpos = x + x_offset
    ypos = y + y_offset
    factor = 1.0 * desired_width / w_sprite
    if adjust2feature:
        size_mustache = 1.2  # how many times bigger than mouth
        factor = 1.0 * (feature[0, 2] * size_mustache) / w_sprite
        xpos = x + feature[0, 0] - int(feature[0, 2] * (size_mustache - 1) // 2)  # centered respect to width
        ypos = y + y_offset_image + feature[0, 1] - int(h_sprite * factor)  # right on top
def calculate_boundbox(list_coordinates):
    x = min(list_coordinates[:, 0])
    y = min(list_coordinates[:, 1])
    w = max(list_coordinates[:, 0]) - x
    h = max(list_coordinates[:, 1]) - y
    return (x, y, w, h)

    features = haar_cascade.detectMultiScale(
        gray,
        scaleFactor=scaleFact,
        minNeighbors=minNeigh,
        minSize=(minSizeW, minSizeW),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    return features
    elif face_part == 3:
        (x, y, w, h) = calculate_boundbox(points[36:42])
    elif face_part == 4:
        (x, y, w, h) = calculate_boundbox(points[42:48])
    elif face_part == 5:
        (x, y, w, h) = calculate_boundbox(points[29:36])
    elif face_part == 6:
        (x, y, w, h) = calculate_boundbox(points[1:17])
    elif face_part == 7:
        (x, y, w, h) = calculate_boundbox(points[0:6])
    elif face_part == 8:
        (x, y, w, h) = calculate_boundbox(points[11:17])
    return (x, y, w, h)
def cvloop(run_event):
    global ctr_mid
    global SPRITES
    i = 0
    video_capture = cv2.VideoCapture(0)
    video_capture.set(3, 2048)
    video_capture.set(4, 2048)
    (x, y, w, h) = (0, 0, 10, 10)
    detector = dlib.get_frontal_face_detector()
    fullbody = cv2.CascadeClassifier('data/haarcascade_fullbody.xml')
    model = "data/shape_predictor_68_face_landmarks.dat"
    # link to model: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    predictor = dlib.shape_predictor(model)
    while run_event.is_set():
        ret, image = video_capture.read()
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        for face in faces:
            (x, y, w, h) = (face.left(), face.top(), face.width(), face.height())
            shape = predictor(gray, face)
            shape = face_utils.shape_to_np(shape)
            incl = calculate_inclination(shape[17], shape[26])
            is_mouth_open = (shape[66][1] - shape[62][1]) >= 10
            if SPRITES[3]:  # Tiara
                apply_sprite(image, IMAGES[3][ACTIVE_IMAGES[3]], w + 45, x - 20, y + 20, incl, ontop=True)
            # Necklaces
            if SPRITES[1]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 6)
                apply_sprite(image, IMAGES[1][ACTIVE_IMAGES[1]], w1, x1, y1 + 150, incl, ontop=False)
            # Goggles
            if SPRITES[6]:
                (x3, y3, _, h3) = get_face_boundbox(shape, 1)
                apply_sprite(image, IMAGES[6][ACTIVE_IMAGES[6]], w, x, y3 - 10, incl, ontop=False)
            # Earrings
            (x0, y0, w0, h0) = get_face_boundbox(shape, 6)  # bound box of mouth
            if SPRITES[2]:
                (x3, y3, w3, h3) = get_face_boundbox(shape, 7)  # nose
                apply_sprite(image, IMAGES[2][ACTIVE_IMAGES[2]], w3, x3 - 40, y3 + 30, incl, ontop=False)
                (x3, y3, w3, h3) = get_face_boundbox(shape, 8)  # nose
                apply_sprite(image, IMAGES[2][ACTIVE_IMAGES[2]], w3, x3 + 40, y3 + 75, incl)
            # if SPRITES[5]:
            #     apply_sprite(image, IMAGES[5][ACTIVE_IMAGES[5]], w, x, y, incl, ontop=True)
            # Frocks
            if SPRITES[5]:
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, IMAGES[5][ACTIVE_IMAGES[5]], w1 + 580, x1 - 350, y1 + 80, incl, ontop=False)
            # Tops
            if SPRITES[4]:
                # (x, y, w, h) = (0, 0, 10, 10)
                # apply_sprite2feature(image, IMAGES[7][ACTIVE_IMAGES[7]], fullbody, w//4,
                #                      2*h//3, h//2, True, w//2, x, y, w, h)
                (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
                apply_sprite(image, IMAGES[4][ACTIVE_IMAGES[4]], w1 + 350, x1 - 230, y1 + 100, incl, ontop=False)
    video_capture.release()

root = Tk()
root.title('Virtual Dressing room')
app = FullScreenApp(root)
root.grid_rowconfigure(1, weight=1)
root.grid_columnconfigure(0, weight=1)
top_frame.grid(row=0, sticky="ew")
center.grid(row=1, sticky="nsew")
btm_frame.grid(row=4, sticky="ew")
logo = ImageTk.PhotoImage(Image.open('logo1.png').resize((120, 60)))
model_label = Label(top_frame, image=logo)
model_label.grid(row=0, columnspan=3)
center.grid_rowconfigure(0, weight=1)
center.grid_columnconfigure(1, weight=1)

# sys.argv[1:] contains all selected image paths chosen from the GUI, e.g.
# ['static/images/Necklace1/1.png', 'static/images/Tops4/12.png']
for img in sys.argv[1:]:
    # append the image path to the desired category, say 1, 2, etc.
    IMAGES[int(img.rsplit('/', 1)[0][-1])].append(img)
    image = ImageTk.PhotoImage(Image.open(img).resize((150, 100)))
    PHOTOS[int(img.rsplit('/', 1)[0][-1])].append(image)

print("Images= ", IMAGES)
print("PHOTOS= ", PHOTOS)

for index in range(9):
    if len(PHOTOS[index]) > 0:
        for k, photo in enumerate(PHOTOS[index]):
run_event = threading.Event()
run_event.set()
action = Thread(target=cvloop, args=(run_event,))
action.setDaemon(True)
action.start()

def terminate():
    global root, run_event, action
    run_event.clear()
    time.sleep(1)
    root.destroy()

root.protocol("WM_DELETE_WINDOW", terminate)
root.mainloop()
TKINTER_SCROLL.PY:
import tkinter as tk
from tkinter import ttk

class Scrollable(ttk.Frame):
    """
    Make a frame scrollable with a scrollbar on the right.
    After adding or removing widgets to the scrollable frame,
    call the update() method to refresh the scrollable area.
    """
        scrollbar.config(command=self.canvas.yview)
        self.canvas.bind('<Configure>', self.__fill_canvas)
        # assign this obj (the inner frame) to the windows item of the canvas
        self.windows_item = self.canvas.create_window(
            0, 0, window=self, anchor=tk.NW)

        canvas_width = event.width
        self.canvas.itemconfig(self.windows_item, width=canvas_width)

    def update(self):
        "Update the canvas and the scrollregion"
        self.update_idletasks()
        self.canvas.config(scrollregion=self.canvas.bbox(self.windows_item))

class FullScreenApp(object):
    def __init__(self, master, **kwargs):
        self.master = master
        pad = 3
        self._geom = '200x200+0+0'
        master.geometry("{0}x{1}+0+0".format(
            master.winfo_screenwidth() - pad, master.winfo_screenheight() - pad))
        master.bind('<Escape>', self.toggle_geom)

    def videoLoop(self):
        try:
            while not self.stopEvent.is_set():
                self.frame = self.vs.read()
                self.frame = imutils.resize(self.frame, width=300)
                image = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
                image = Image.fromarray(image)
                image = ImageTk.PhotoImage(image)
                if self.panel is None:
                    self.panel = tki.Label(image=image)
                    self.panel.image = image
                    self.panel.pack(side="left", padx=10, pady=10)
                else:
                    self.panel.configure(image=image)
                    self.panel.image = image
        except RuntimeError as e:
            print("[INFO] caught a RuntimeError")
    def takeSnapshot(self):
        # grab the current timestamp and use it to construct the output path
        ts = datetime.datetime.now()
        filename = "{}.jpg".format(ts.strftime("%Y-%m-%d_%H-%M-%S"))
        p = os.path.sep.join((self.outputPath, filename))

    def onClose(self):
        # set the stop event, clean up the camera, and allow the rest of
        # the quit process to continue
        print("[INFO] closing...")
        self.stopEvent.set()
        self.vs.stop()
        self.root.quit()
TRYON.PY:
import dlib
from imutils import face_utils, rotate_bound
import math

def put_sprite(num):
    global SPRITES, BTNS
    SPRITES[num] = (1 - SPRITES[num])  # toggle the sprite flag on/off
    # if SPRITES[num]:
    #     BTNS[num].config(relief=SUNKEN)
    # else:
    #     BTNS[num].config(relief=RAISED)

def draw_sprite(frame, sprite, x_offset, y_offset):
    (h, w) = (sprite.shape[0], sprite.shape[1])
    (imgH, imgW) = (frame.shape[0], frame.shape[1])
    if y_offset + h >= imgH:  # clip the sprite at the bottom edge
        sprite = sprite[0:imgH - y_offset, :, :]
    if x_offset + w >= imgW:  # clip the sprite at the right edge
        sprite = sprite[:, 0:imgW - x_offset, :]
    if x_offset < 0:  # clip the sprite at the left edge
        sprite = sprite[:, abs(x_offset)::, :]
        w = sprite.shape[1]
        x_offset = 0
    # alpha-blend the sprite onto the frame using its alpha channel
    for c in range(3):
        try:
            frame[y_offset:y_offset + h, x_offset:x_offset + w, c] = \
                sprite[:, :, c] * (sprite[:, :, 3] / 255.0) + \
                frame[y_offset:y_offset + h, x_offset:x_offset + w, c] * (1.0 - sprite[:, :, 3] / 255.0)
        except Exception as e:
            print(e)
    return frame
    if (y_orig < 0):  # clip the sprite at the top edge
        sprite = sprite[abs(y_orig)::, :, :]
        y_orig = 0
    return (sprite, y_orig)

def calculate_boundbox(list_coordinates):
    x = min(list_coordinates[:, 0])
    y = min(list_coordinates[:, 1])
    w = max(list_coordinates[:, 0]) - x
    h = max(list_coordinates[:, 1]) - y
    return (x, y, w, h)
def detectUpperBody(image):
    cascadePath = 'data/haarcascade_upperbody.xml'
    result = image.copy()
    imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascadePath)
    Rect = cascade.detectMultiScale(imageGray, scaleFactor=1.1, minNeighbors=1, minSize=(1, 1))
    if len(Rect) <= 0:
        return False
    else:
        return Rect
image_path = ''

def add_sprite(img):
    global image_path
    image_path = img
    print("rsplit of imgpath>>>>>>>", img.rsplit('/', 1))
    print(">>>>>>>>>>>", int(img.rsplit('/', 1)[0][-1]))
    # the last character of the folder name is an integer from 1 to 6 denoting
    # which category the apparel belongs to (frock, top, etc.)
    put_sprite(int(img.rsplit('/', 1)[0][-1]))
def cvloop(run_event):
    global panelA
    global SPRITES
    global image_path
    i = 0
    video_capture = cv2.VideoCapture(0)   # read from webcam
    (x, y, w, h) = (0, 0, 10, 10)         # initial placeholder values
    # Filters path
    detector = dlib.get_frontal_face_detector()
    model = "data/shape_predictor_68_face_landmarks.dat"
    # model download: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    predictor = dlib.shape_predictor(model)
    while run_event.is_set():
        ret, image = video_capture.read()
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert the colour image to grayscale
        faces = detector(gray, 0)
        if SPRITES[0]:
            pass  # (body lost at a page break in the original listing)
        # (`shape` and `incl` are computed from the facial landmarks in lines
        # missing from the listing)
        if SPRITES[3]:  # Tiara
            apply_sprite(image, image_path, w + 45, x - 20, y + 15, incl, ontop=True)
        # Necklaces
        if SPRITES[1]:
            (x1, y1, w1, h1) = get_face_boundbox(shape, 6)
            apply_sprite(image, image_path, w1, x1, y1 + 110, incl, ontop=False)
        # Goggles
        if SPRITES[6]:
            (x3, y3, _, h3) = get_face_boundbox(shape, 1)
            apply_sprite(image, image_path, w, x, y3 - 10, incl, ontop=False)
        # Earrings
        (x0, y0, w0, h0) = get_face_boundbox(shape, 6)  # bounding box of mouth
        if SPRITES[2]:
            (x3, y3, w3, h3) = get_face_boundbox(shape, 7)  # nose
            apply_sprite(image, image_path, w3, x3 - 40, y3 + 30, incl, ontop=False)
            (x3, y3, w3, h3) = get_face_boundbox(shape, 8)  # nose
            apply_sprite(image, image_path, w3, x3 + 30, y3 + 75, incl)
        # if SPRITES[5]:
        #     apply_sprite(image, image_path, w, x, y, incl, ontop=True)
        # Frocks
        if SPRITES[5]:
            (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
            apply_sprite(image, image_path, w1 + 590, x1 - 300, y1 + 70, incl, ontop=False)
        # Tops
        if SPRITES[4]:
            # (x, y, w, h) = (0, 0, 10, 10)
            # apply_sprite2feature(image, IMAGES[7][ACTIVE_IMAGES[7]], fullbody,
            #                      w//4, 2*h//3, h//2, True, w//2, x, y, w, h)
            (x1, y1, w1, h1) = get_face_boundbox(shape, 8)
            apply_sprite(image, image_path, w1 + 300, x1 - 200, y1 + 40, incl, ontop=False)
    video_capture.release()
def try_on(image_path):
    global panelA, SPRITES, BTNS
    btn1 = Button(root, text="Try it ON", command=lambda: add_sprite(image_path))
    btn1.pack(side="top", fill="both", expand="no", padx="5", pady="5")
    panelA = Label(root)
    panelA.pack(padx=10, pady=10)
    SPRITES = [0, 0, 0, 0, 0, 0, 0]
    BTNS = [btn1]
print(">>>>>>>>>>>>>>>>>>>", sys.argv[1])
try_on(sys.argv[1])  # sys.argv[1] holds the path of the chosen apparel image
run_event = threading.Event()
run_event.set()
def terminate():
    global root, run_event, action
    run_event.clear()   # signal cvloop() to exit its while loop
    time.sleep(1)
    root.destroy()

root.protocol("WM_DELETE_WINDOW", terminate)
root.mainloop()
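terminate() shuts the application down cooperatively: clearing run_event makes the `while run_event.is_set()` condition in cvloop() false, so the video thread finishes its current iteration and exits on its own before the window is destroyed. A minimal sketch of the same event-based shutdown pattern, with the camera and GUI replaced by a stub worker (all names here are illustrative):

```python
import threading
import time

run_event = threading.Event()
run_event.set()

frames = []

def worker():
    # stand-in for cvloop(): keeps "grabbing frames" while the event is set
    while run_event.is_set():
        frames.append(1)
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)        # let the worker run for a few iterations
run_event.clear()       # plays the role of terminate()
t.join()                # the worker exits at its next loop check
```

This avoids killing the thread abruptly, so the worker (like cvloop()) gets the chance to release resources such as the webcam before the process ends.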
7. SAMPLE SCREENS
Sample 1:
Sample 2:
Sample 3:
Sample 4:
Sample 5:
Sample 6:
Sample 7:
8. CONCLUSION
Travelling to a shop to buy clothes is a tedious task during the COVID-19 pandemic.
In our work, the user can choose their favourite clothes in their size without going
outside. The result is an easily navigable, user-friendly web application that lets
the user try clothes on virtually. Overall, the presented virtual dressing room is a
good solution for quick and accurate virtual try-on of clothes.
Future Enhancement:
This project can be enhanced in several aspects for the user's convenience, such as
sending media directly to a chosen recipient, or sending video immediately to the
local server. The local server incorporates a file-sharing system in which the
creator of a file or folder is, by default, its owner. The owner can regulate the
public visibility of the file or folder. Files and folders can also be made "public
on the web", which means they can be indexed by search engines and therefore found
and accessed by anyone. The owner may also set an access level to regulate
permissions; the three access levels offered are "can edit", "can comment" and
"can view". Users with editing access can invite others to edit. All of the
third-party apps are free to install, though some have fees associated with
continued usage or access to additional features. At present only about 8 MB of
data can be uploaded; in future, larger amounts of data could be supported.
9. BIBLIOGRAPHY
REFERENCE WEBSITES:
www.instructables.com
www.youtube.com
stackoverflow.com
developer.android.com
www.udacity.com
www.i-programmer.info