
Proceedings of the Third International Conference on Electronics and Sustainable Communication Systems (ICESC 2022)

IEEE Xplore Part Number: CFP22V66-ART; ISBN: 978-1-6654-7971-4

Cosmetic Suggestion System Using Convolution Neural Network

DOI: 10.1109/ICESC54411.2022.9885369

Dr. S Bhuvana
Associate Professor, CSE
PSG Institute of Technology and Applied Research
Coimbatore, Tamil Nadu
[email protected]

Shubhikshaa S M
Computer Science and Engineering
PSG Institute of Technology and Applied Research
Coimbatore, Tamil Nadu
[email protected]

Brindha G S
Computer Science and Engineering
PSG Institute of Technology and Applied Research
Coimbatore, Tamil Nadu
[email protected]

Swathi J V
Computer Science and Engineering
PSG Institute of Technology and Applied Research
Coimbatore, Tamil Nadu
[email protected]

Abstract—Nowadays, cosmetics play a significant role in personal appearance, and choosing the best skin care product is becoming increasingly complicated. As a result, a predictive approach is developed that gives a clear understanding of which product is best for a certain skin type. An AI algorithm is utilized to solve this problem since it works well with vast amounts of unstructured data and produces promising results. The suggested system uses Convolutional Neural Networks (CNN). The model is trained on a dataset scraped from the internet that consists of four classes of skin types, i.e. normal, dry, oily and combinational. The CNN model is built utilizing Python 3 packages such as NumPy, OpenCV, Matplotlib, TensorFlow, Keras and scikit-learn. The method is created by training and testing the model to establish accuracy. The products suitable for each class of skin type are combined in a file; after detecting the skin type, the suitable products for that skin type are fetched from the file. As a result, the best composition of cosmetic products is suggested for the detected skin type. The goal is to achieve high accuracy in detecting the skin type using the defined training model.

Keywords—CNN Architecture, Model Training and Validation, Cosmetic Product, Model Testing

I. INTRODUCTION

The beauty industry has grown in size over the years, as have the items it offers and the number of people it serves. As a result of this massive proliferation of products and consumers, choosing the right cosmetic product becomes increasingly vital. Because cosmetics play such a crucial role in personal appearance, choosing the right products for one's skin type is essential. Finding the ideal cosmetic for a user's skin type is notoriously difficult, as everyone's skin texture is different.

This challenge is solved using an AI system that works well with unstructured and massive amounts of data and produces promising results. Convolutional Neural Networks (CNN) are used to do this. The skin photos are analyzed to determine the kind of skin (dry, oily, combinational, or normal) and to recommend products that are appropriate for that skin type. Image processing is the process of extracting some piece of information from a digital image after some processing. Here, image recognition takes place first, then the recognized image is given as input to the model for processing (image classification), and at last the output is obtained. Basically, it is analyzing the image and manipulating it. There are three steps to this process:

• First, import all images from devices such as a camera, sensor, scanner, or even photos generated by the system.

• After that, evaluate, manipulate, and improve the image, and add a description of the photographs to the data summary.

• Finally, output the image processing results. The output can be minor alterations to the image, report analysis, or an image prediction.

Convolutional neural networks are neural networks that use the convolution operation instead of general matrix multiplication in at least one of their layers, which makes them highly efficient for image data. They are implemented using deep learning concepts. In every recognition approach, there exists a black box between the input and the output, but the processing inside the black box varies between supervised learning and supervised deep learning approaches. The architecture is created by a stack of different layers that transform the input volume into the output volume through a differentiable function.

The rest of the paper is organized as follows. A comprehensive survey of cosmetic suggestion systems is presented in Section 2. Section 3 describes the overview of the proposed framework. Section 4 provides the experimental results, and the performance evaluation is presented in Section 5. Finally, Section 6 presents the conclusion.

II. RELATED WORKS

A detailed study on cosmetic suggestion systems using existing research was carried out to identify the scope and extent of the work already done on this theme. Research articles by various authors were collected from sources like IEEE publications, Springer publications, Google Research, ResearchGate publications, etc. These articles are presented in this section.

C. Chang, et al. [2] proposed "Robust skin type classification using Convolution Neural Network", which enhances performance while reducing the complexity of skin image classification. It possesses a 92 percent training accuracy rate and an 86 percent testing accuracy rate. It utilizes the LeNet5 architecture, which includes two convolution layers, two max pooling layers, and two fully connected layers.
Sakshi Indolia, et al. [12] explained the goal of "Conceptual Understanding of Convolution Neural Network". The study provides information and understanding of many aspects of CNNs using a deep learning approach. The paper explains what a CNN is and how it works, as well as the three most prevalent architectures and learning algorithms. For individuals who are interested in this field, it serves as a resource and quick reference.

M. S. Fleysher and Veronika M. Troeglazova [13], in "Neural Network for Selecting Cosmetics by Photo", proposed a CNN-based study that works on photos, eye and skin colors, and Python libraries. It determines suitable cosmetics, such as lipstick and foundation. A new online raw data analysis mechanism can boost performance over the general neural network approach.

R. Iwabuchi, et al. [3], in "Proposal of Recommender System Based on User Evaluation and Cosmetic Ingredients", evaluate the user's compatibility with a simple cosmetic product based on the cosmetic ingredients. Using data from the Bihada-Mania website and cosme, they extract the most effective cosmetic compounds for each user attribute and create an ingredient-based recommender system. The TF-IDF method is used to extract the cosmetic components that are effective.

Rubasri S, et al. [7] studied "Cosmetic Product Selection using Machine Learning", which examines a basic cosmetic recommendation system. It identified the most effective cosmetic compounds for each user attribute and created an ingredient-based recommender system. Cosmetic ingredient information and the chemicals used are obtained by web scraping from a Sephora page. The chemicals are subjected to NLP techniques, and a Document Term Matrix (DTM) is created.

Musab Coskun, et al. [17], in "Face Recognition Based on Convolutional Neural Network", propose a modified Convolutional Neural Network (CNN) architecture obtained by adding normalization operations to two of the layers. The batch normalization procedure enabled network acceleration. The CNN architecture was utilised to extract unique face features, and a Softmax classifier in the fully connected layer was used to classify faces. Experiments on the Georgia Tech database demonstrated that the proposed strategy increased face recognition performance with better recognition results.

Christian Rathgeb, et al. [8] proposed "Makeup Presentation Attacks: Review and Detection Performance Benchmark". This work focuses on makeup presentation attacks with the goal of impersonation, using the Makeup Induced Face Spoofing (MIFS) and Disguised Faces in the Wild (DFW) databases, which are both publicly available. It also provides a novel image-pair-based, i.e. differential, attack detection scheme that compares feature representations obtained from possible makeup presentation attacks with the associated target face photos.

Arya Kothari, Dipam Shah, Taksh Soni and Sudhir Dhage [9] studied "Cosmetic Skin Type Classification using CNN with Product Recommendation". This approach divides skin images into several categories. A Convolution Neural Network, a deep learning technique, was used to evaluate and train on the skin pictures. Their CNN classification model reveals an accuracy of around 85%, with a minor bias towards particular pictures.

Andrej Karpathy and Li Fei-Fei [20] proposed a novel study, "Deep Visual-Semantic Alignments for Generating Image Descriptions". The model is composed of a Convolutional Neural Network, bidirectional Recurrent Neural Networks, and a structured objective that aligns the two modalities via multimodal embedding. It generates descriptions of pictures and their regions in natural language.

Jaehun Park [1], in "Framework for Sentiment-Driven Evaluation of Customer Satisfaction with Cosmetics Brands", used sentiment analysis and statistical data analysis to provide a systematic method to assess relative consumer satisfaction with cosmetics companies, along with Term Frequency-Inverse Document Frequency analysis to assess the causes of positive and negative sentiments. The suggested strategy can be used by cosmetics companies to attain or improve consumer satisfaction with the brands they evaluate.

III. PROPOSED SYSTEM

The goal of the proposed system is to suggest the best cosmetic product using a Convolution Neural Network. The suggested technique takes inputs such as skin photos, product ingredients, product brands, pricing, and so on, and processes them to produce the required output.

The main steps in the process of developing this system are:

• Exploring the dataset.

• Building a CNN model.

• Training and validating the model.

• Testing the model.

These steps are further combined and classified into modules, where each module is independent of the others and the modules execute in a sequential manner.

A. Dataset Preparation

Data is crucial for building an efficient deep learning model. Without a clean dataset, building a deep learning model is impossible, even if a high-performance algorithm is used. For this model, skin and cosmetic data gathered from multiple websites, including Kaggle and Nykaa, is used. The images are classified into four classes, namely combinational, dry, normal and oily. Out of 600 images, 144 images belong to the class combinational, 152 to the class dry, 160 to the class normal and 144 to the class oily.
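As a rough illustration of this step, the sketch below loads such a folder-per-class image collection into lists with integer labels. The directory name "dataset", the folder-per-class layout and the function name load_dataset are assumptions for illustration, not details taken from the paper.

```python
import os
import cv2

# Assumed layout: dataset/<class_name>/*.jpg, one folder per skin type.
CLASSES = ["combinational", "dry", "normal", "oily"]

def load_dataset(data_dir="dataset"):
    images, labels = [], []
    for label, class_name in enumerate(CLASSES):
        class_dir = os.path.join(data_dir, class_name)
        for file_name in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, file_name))
            if img is None:        # skip unreadable files
                continue
            images.append(img)     # raw images; preprocessing is applied later
            labels.append(label)   # 0..3, following the class order above
    return images, labels

images, labels = load_dataset()
print(len(images), "images loaded")   # expected: 600 for the described dataset
```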


Figure 1. Overall process of the proposed system

B. Preprocessing

Preprocessing is the process of converting raw data into a clean set of data. In the case of images, the original image given as input is converted by resizing it, removing noise, etc. In this system three preprocessing techniques are performed, which are the most significant and essential steps for this system: gray scaling, histogram equalization and image resizing.

1. Gray Scaling

Gray scaling is the process of converting a colored image into a grayscale image. This is done because the information needed to store a grayscale image is much less than that for a colored image. The scale that ranges from black to white is called the grayscale; it ranges from pitch black to white and contains only luminance information. All shades are obtained by adjusting the luminance between the two extremes: the maximum luminance is white, zero luminance is black, and everything in between is a shade of gray. The values of a grayscale image range from 0 to 255, where 0 represents black and 255 represents white. The range is 0 to 255 because one byte is used to represent each pixel of a grayscale image.

2. Histogram Equalization

Histogram equalization is a technique for improving image contrast in image processing. It accomplishes this by effectively spreading out the most common intensity values, hence extending the intensity range of the image. This method typically improves the global contrast of photos when the useful data is represented by close contrast values, allowing for higher contrast in places with low local contrast. A color histogram of a picture represents the number of pixels in each color component. Histogram equalization cannot be applied directly to the image's Red, Green, and Blue components because it changes the color balance dramatically. However, if the image is first transformed to a different color space, such as HSL/HSV, the technique can be applied to the luminance or value channel without changing the image's hue or saturation.

3. Image Resizing

Images given as input cannot be handled with variable sizes, so it is necessary to resize the images in any image processing system; this makes the later stages of development easier. Basically, resizing means changing the width of the image, the height of the image, or both. Here the resizing is done with width 50 and height 50.
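The three preprocessing steps above can be expressed with OpenCV roughly as follows. This is a minimal sketch assuming an 8-bit BGR input image as read by cv2.imread; the function name preprocess_image is illustrative, not from the paper.

```python
import cv2

IMG_SIZE = 50  # target width and height used by the model

def preprocess_image(bgr_image):
    # 1. Gray scaling: keep only luminance information (values 0-255).
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # 2. Histogram equalization: spread out the most common intensity
    #    values to improve global contrast.
    equalized = cv2.equalizeHist(gray)
    # 3. Resizing: bring every image to a fixed 50x50 shape.
    return cv2.resize(equalized, (IMG_SIZE, IMG_SIZE))
```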
C. Modelling

Modelling is done with the help of Keras, a deep learning API used for defining models. Models are usually created by instantiating layers and establishing connections between pairs of layers. The layers act as the input layer, the output layer and the intermediate layers. The steps involved in the process of modelling are:

• Model building

• Compilation and training of the model

• Testing the model

1. Building the model

Model building is done by way of sequential modeling. The model is built by stacking the layer classes shown in Table 1.

Table 1. Number of layers for each class

S.No.   Class Name   No. of layers
1.      Conv2D       3
2.      MaxPool2D    3
3.      Dropout      1
4.      Flatten      1
5.      Dense        5

These neural network layers are the most important part of the system for attaining the highest rate of accuracy.

a) Conv2D: The numerical values of the three-dimensional image are the input for this class. However, the movement of the filters is limited to a two-dimensional structure. The input matrix and the kernel matrix are required for a 2D convolution. This aids in recognizing the original image's attributes and is also useful for detecting the image's sub-features. The convolution kernel is typically of size 3x3 or 5x5. Multiplication and addition are the two primary operations used in this 2D convolution.

b) MaxPool2D: Pooling is the process of shrinking an image's size and dimensions. This is done by looking at even-sized windows of the image matrix (2x2, 4x4, ...) and taking the greatest value in each window over the whole image. As a result, the processing complexity is reduced.

c) Dropout: Dropout helps prevent a model from overfitting. It works by randomly setting the outgoing edges of hidden units to zero at each update. At each layer at least one neuron is left out, so that each time the network finds a new path for propagation.


d) Flatten: The Flatten layer converts the image's complete feature-map matrix into a single-column matrix with several rows. This flattens a multi-dimensional image matrix into a one-dimensional vector, which makes the subsequent computations much simpler.

e) Dense: A dense layer is one that is fully connected. This layer feeds all of the previous layer's outputs to all of the next layer's neurons, with each neuron providing one output to the next layer.
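A Keras Sequential model with the layer counts of Table 1 (three Conv2D, three MaxPool2D, one Dropout, one Flatten and five Dense layers) could be sketched as below. The filter counts, kernel sizes, dropout rate and hidden-unit sizes are assumptions, since the paper specifies only the layer types and counts and the 50x50 grayscale input.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # combinational, dry, normal, oily

def build_model(input_shape=(50, 50, 1)):
    # Layer counts follow Table 1; the hyperparameters are illustrative.
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPool2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPool2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPool2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    return model
```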

2. Compilation and Training the Model

The deep learning model must be trained for a number of iterations (epochs) before it starts predicting correctly. As the number of epochs increases, the model's performance also increases, but this can lead to overfitting. Hence, a sweet spot between good performance and overfitting is found and used for training. The model is trained on the preprocessed dataset for 5 epochs. This means that the model sees the training data 5 times; for each of those iterations the model's predictions are obtained, the loss function is calculated, and the model's parameters are tweaked using the optimizer so that the model reaches convergence. At the end of each epoch, the validation data is passed to the model to assess its performance.

Colaboratory, abbreviated "Colab", is a Google Research product. Colab is a Python editor similar to Jupyter notebooks that runs on the web through Google's cloud servers. Colab allows writing and executing Python code very easily. It is especially beneficial for education and research in the fields of data science, AI, etc. Colab needs no installation and offers free access to compute resources such as GPUs. Google Colaboratory notebooks with a GPU runtime were used to train the model; each epoch took 1 to 2 seconds on average to execute.

First, the model is initialized and the parameters are set. During training, the preprocessed images are loaded into variables in memory. For the specified number of epochs, the images are passed into the model; this is called the feedforward stage. This stage gives out a prediction which is stored in a variable. Then the cross-entropy loss for the predictions is calculated and stored in another variable. This loss is in charge of altering the model parameters to decrease the loss, which is done by the "Adam" optimizer.
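Compilation and training as described above could look roughly like this in Keras. The Adam optimizer, cross-entropy loss, 5 epochs and end-of-epoch validation mirror the text, while the 80/20 split, batch size and variable names (which reuse the earlier illustrative sketches) are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Assumes images/labels from the dataset sketch and preprocess_image and
# build_model from the earlier sketches; all are illustrative names.
X = np.array([preprocess_image(img) for img in images], dtype="float32") / 255.0
X = X.reshape(-1, 50, 50, 1)          # single grayscale channel
y = np.array(labels)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = build_model()
model.compile(optimizer="adam",                          # "Adam" optimizer
              loss="sparse_categorical_crossentropy",    # cross-entropy loss
              metrics=["accuracy"])

# 5 epochs; the validation data is evaluated at the end of every epoch.
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=5, batch_size=32)
```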
3. Testing the Model

During the testing phase, the dataset is loaded and passed to the model for predictions, which are stored in the variable pred. Pred is compared with the actual labels, and based on this the model's accuracy is calculated and returned.
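A sketch of this testing step, under the same assumptions as the training sketch above:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Predict class probabilities and take the most likely class per image.
pred = np.argmax(model.predict(X_val), axis=1)

# Compare predictions with the actual labels to obtain the accuracy.
print("Test accuracy:", accuracy_score(y_val, pred))
```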
D. Product Suggestion

The prediction output for the skin image is retrieved from the CNN model file. After the image is given as input, the entire process takes place, and finally the suitable products from the cosmetics.csv file are output based on the predicted skin type.
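Product suggestion then reduces to a lookup in the products file keyed by the predicted class. A minimal sketch, assuming cosmetics.csv contains a skin_type column; the column name and function name are assumptions, since the paper does not describe the file's schema.

```python
import pandas as pd

CLASSES = ["combinational", "dry", "normal", "oily"]

def suggest_products(predicted_class, csv_path="cosmetics.csv"):
    # Map the numeric prediction back to a skin-type name and return
    # the rows of the products file that match it.
    skin_type = CLASSES[predicted_class]
    products = pd.read_csv(csv_path)
    return products[products["skin_type"] == skin_type]  # assumed column name

# Example: suggest products for the first validation image.
print(suggest_products(int(pred[0])))
```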


IV. EXPERIMENTAL RESULTS

The proposed cosmetic suggestion system is implemented on the Google Colab platform. The database images are taken from Kaggle websites. Figures 2 and 3 show the database images and a skin image, respectively.

Figure 2. Sample database images

Figure 3. Skin image

The database contains 600 pictures that are grouped into four categories: combinational, dry, normal and oily. The pictures in the database are converted to grayscale images. The grayscale image is shown in Figure 4.

Figure 4. Grayscale image

Histogram equalization is applied to get better contrast in the images. It allows areas of low local contrast to gain higher contrast, which improves the accuracy of the system. Figure 5 shows the histogram equalized image.

Figure 5. Histogram equalized image

The images are then resized to 50x50. After preprocessing, the images are used to train the CNN architecture, which consists of 3 convolution layers and 5 hidden layers. The model is then tested to obtain its accuracy. Results are obtained when the input image's skin type is predicted correctly and a suitable product composition for that skin type is suggested. Figure 6 shows the product suggestion for that skin type.

Figure 6. Suitable product suggestion

V. PERFORMANCE EVALUATION

The model is trained for several epochs to improve the prediction rate, accuracy and efficiency. This is done before prediction so that the model is trained well and the prediction rate rises with each run of the model. While building the model, the loss, accuracy, validation loss (val_loss) and validation accuracy (val_accuracy) are obtained along with their rates.

After running several epochs, the accuracy, validation accuracy, loss and validation loss are obtained. From these values, a graph is plotted between the accuracy and the validation accuracy, as shown in Figure 7, in order to analyse the accuracy rates of the model. The accuracy and validation accuracy are plotted for each epoch, with the epochs on the x-axis and the accuracy rate on the y-axis. This data is obtained from the model building and is tabulated in Table 2.

Table 2. Accuracy and validation accuracy

Number of epochs   Accuracy   Validation Accuracy
1                  0.3241     0.6833
2                  0.7835     0.8539
3                  0.9926     0.9567
4                  0.9833     1.0000
5                  0.9899     1.0000

Figure 7. Graph analysis between epochs and accuracy

Figure 8 plots the number of epochs against the loss rate. This helps to find the loss rate as well as the validation loss rate of the system for each epoch. This graph is plotted from the data shown in Table 3.

Table 3. Loss and validation loss

Number of epochs   Loss     Validation Loss
1                  1.2953   1.0178
2                  0.6021   0.1590
3                  0.0636   0.0033
4                  0.0039   0.0021
5                  0.0011   0.0018

Figure 8. Graph analysis between epochs and loss
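Curves like those in Figures 7 and 8 can be reproduced from the Keras training history returned by model.fit; a sketch using Matplotlib, with the figure styling being an assumption rather than the paper's exact plots:

```python
import matplotlib.pyplot as plt

# history comes from the model.fit call in the training sketch above.
epochs = range(1, len(history.history["accuracy"]) + 1)

plt.figure()
plt.plot(epochs, history.history["accuracy"], label="accuracy")
plt.plot(epochs, history.history["val_accuracy"], label="val_accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()

plt.figure()
plt.plot(epochs, history.history["loss"], label="loss")
plt.plot(epochs, history.history["val_loss"], label="val_loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
```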


Table 4. Classification report

Class          Precision   Recall   F1-score   Support
0              1.00        1.00     1.00       1.00
1              1.00        1.00     1.00       1.00
2              1.00        1.00     1.00       1.00
3              1.00        1.00     1.00       1.00
Accuracy                            1.00       600
Macro avg      1.00        1.00     1.00       600
Weighted avg   1.00        1.00     1.00       600

The classification model's performance is visualised using a confusion matrix. The matrix compares the actual target values with those predicted by the model. The columns represent the true values of the target variable, and the rows represent the predicted values. Figure 9 shows the plot of the confusion matrix.

Figure 9. Confusion matrix
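The classification report in Table 4 and the matrix in Figure 9 correspond to standard scikit-learn utilities; a sketch, reusing the pred and y_val names from the testing sketch above:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision, recall and F1-score (classes 0-3 as in Table 4).
print(classification_report(y_val, pred))

# Matrix entries count how often each true class was predicted as each class.
print(confusion_matrix(y_val, pred))
```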
VI. CONCLUSIONS

Our research emphasizes an approach that uses CNNs to suggest better cosmetic products from skin images. There are a number of existing techniques for cosmetic product suggestion, each with different advantages and limitations. In order to overcome these shortcomings, a Convolution Neural Network based classifier is proposed. The CNN-based classifier compares the input with the training data in order to find the best result. The model obtained an accuracy of 99% on the dataset. Future work includes extending the dataset and expanding the system to make recommendations for both females and males by adding more templates and colors to the makeup synthesis library so that more styles can be recommended.

REFERENCES

[1] J. Park, "Framework for Sentiment-Driven Evaluation of Customer Satisfaction With Cosmetics Brands," IEEE, 2020.

[2] C. Chang et al., "Robust skin type classification using convolutional neural networks," 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2018, pp. 2011-2014, doi: 10.1109/ICIEA.2018.8398040.

[3] R. Iwabuchi et al., "Proposal of Recommender System Based on User Evaluation and Cosmetic Ingredients," IEEE, 2017.

[4] Y. Matsunami et al., "Tag Recommendation Method for a Cosmetics Review Recommender System," iiWAS'17, ACM, 2017.

[5] C. J. Holder et al., "Visual Siamese Clustering for Cosmetic Product Recommendation," ACCV 2018, Springer, pp. 510-522.

[6] J. Jiong, "For Your Skin Beauty: Mapping Cosmetic Items with Bokeh," accessed on 23 June 2019.

[7] R. S, H. S, K. Jayasakthi, S. D. A, K. Latha and N. Gopinath, "Cosmetic Product Selection Using Machine Learning," 2022 International Conference on Communication, Computing and Internet of Things.

[8] G. Guo, L. Wen and S. Yan, "Face Authentication With Makeup Changes," IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 5, pp. 814-825, May 2014, doi: 10.1109/TCSVT.2013.2280076.

[9] A. Kothari, D. Shah, T. Soni and S. Dhage, "Cosmetic Skin Type Classification Using CNN With Product Recommendation," 2021 12th International Conference on Computing Communication and Networking Technologies.

[10] B. Seo, K. Kim, I. Yoo, J. Park and D. Park, "Development of Diagnostic Method and Algorithm for Skin Type Based on Consumer Language and Sentiment," 2018 3rd Technology Innovation Management and Engineering Science International Conference.

[11] K. Yamagishi, S. Yamamoto, T. Kato and S. Morishima, "Cosmetic Features Extraction by a Single Image Makeup Decomposition," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).

[12] https://www.researchgate.net/publication/350924953_Facial_Skin_Type_Classification_Based_on_Microscopic_Images_Using_Convolutional_Neural_Network_CNN

[13] M. S. Fleysher and V. M. Troeglazova, "Neural Network for Selecting Cosmetics by Photo," 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus).

[14] J. Brownlee, "What is Deep Learning?" accessed on 29 November 2019, https://machinelearningmastery.com/what-is-deeplearning/.

[15] https://www.researchgate.net/publication/325657562_Conceptual_Understanding_of_Convolutional_Neural_Network-_A_Deep_Learning_Approach

[16] Y. Li et al., "Mining Fashion Outfit Composition Using an End-to-End Deep Learning Approach on Set Data," IEEE Transactions on Multimedia, 2017.

[17] https://www.researchgate.net/publication/322303457_Face_Recognition_Based_on_Convolutional_Neural_Network

[18] A. Okuda et al., "Finding Similar Users Based on Their Preferences against Cosmetic Item Clusters," iiWAS'17, ACM, 2017.

[19] V. Silaparasetty, "Deep Learning Projects Using TensorFlow 2," Springer Science and Business Media LLC, 2020.

[20] A. Karpathy and L. Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions," arXiv:1412.2306v2 [cs.CV], 14 Apr 2015.
