Fayyaz 2020
Abstract—In this work, we propose a novel method for automated generation of textile design patterns using generative models. We first improve the state-of-the-art classification accuracy on textile design patterns by 2% through data cleaning and pseudo labeling. A new dataset, which improves on the existing one, is also proposed. On this new dataset we compare the performance of image generative models, namely Wasserstein Generative Adversarial Networks with Gradient Penalty (WGANs GP), Deep Convolutional GANs (DCGANs) and Convolutional Variational Autoencoders (CVAEs), for each class separately, and evaluate the models using the Inception score. We further use a style transfer model to combine multiple designs generated by WGANs GP, since it gives the best results among the three approaches, and thus form more complex and appealing textile designs. Moreover, we present results of unsupervised clustering of different patterns in the latent space captured by a CVAE.

Index Terms—generative adversarial networks, textile pattern generation, variational autoencoders, latent code, clustering

I. INTRODUCTION

The textile industry requires creative personnel to produce alluring designs. While being a creative art form, it is also a very business-savvy industry, and an enormous amount of capital is spent on such design work. At the same time, thanks to digitization, a large number of pre-built textile designs from experienced professionals are freely available on the internet. By using machine learning to learn existing patterns and generate new ones, the textile design industry can therefore be leveraged. Our primary motivation for carrying out this research is to add machine intelligence to a creative task such as generating appealing textile designs. To help fellow researchers, we have open sourced the dataset that we created through pseudo labeling [1].
In this paper we introduce a novel method of textile image generation using Wasserstein Generative Adversarial Networks with Gradient Penalty (WGANs GP) [2], substantially extending the previous research done in this field. To the best of our knowledge, no prior machine or deep learning algorithm exists that can generate appealing and lively textile patterns. GANs [3] are unsupervised image generation algorithms, but one still needs a good dataset so that a GAN can learn the underlying data distribution and generate further samples from it. We used the dataset provided by Stearns and co-authors [4] in their research, but then decided to improve it and provide a new, clean and larger dataset for the image generation task so that other researchers can more easily take this line of research forward. We have also managed to improve the test accuracy on the classification of the textile design data proposed in the Stearns study by 2%, using pseudo labeling and by training our model on much larger data, hence proposing a more generalized and efficient model. We have also compared the performance of three image generative algorithms, namely WGANs GP, DCGANs [5] and CVAEs [6]. Moreover, we have shown how to go from generating single-class patterns to composing complex multi-class patterns using neural style transfer [7].
II. RELATED WORK

Stearns and co-authors report a new dataset consisting of 6 textile design classes and also show state-of-the-art classification results on the proposed dataset in their study. The dataset has 2764 images across the 6 classes, i.e. solid, zigzag, striped, floral, checkered and dotted, and after data augmentation using rotation and central cropping they manage to obtain 91.7% accuracy on 77,052 images using Residual Networks [8].

With the advent of GANs, image generation has been revolutionized. GANs can be unstable during training, but many improvements have been made to their architectures, such as Deep Convolutional Generative Adversarial Networks (DCGANs), Conditional GANs (CGANs) [9] and Big GANs [10]. To measure the quality of samples generated by GANs, the Inception score [11] is used.

Facebook's research proposed a revolutionary algorithm extending GANs, named Wasserstein GANs (WGANs) [12]. The benefits of WGANs over traditional GANs, as stated in their study, are that the loss function is meaningful, since it truly reflects the generator's convergence, and that the stability of the optimization process is improved. WGANs use weight clipping to enforce the Lipschitz constraint [13], but the improved variant, WGANs with Gradient Penalty (WGANs GP), shows that weight clipping is a poor way of enforcing the Lipschitz constraint and that a gradient penalty [14] can be used for this purpose instead.
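For illustration, a minimal sketch of this penalty term is given below in PyTorch; it is a generic rendering of the WGAN-GP idea, not the exact training code used in this work. The critic's gradient norm is pushed towards 1 on random interpolations between real and generated batches, and the resulting term is added to the critic loss with a weight lambda_gp.

import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Detach so the penalty only affects the critic, not the generator graph
    real, fake = real.detach(), fake.detach()
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over channels and pixels
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = critic(interpolated)
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0].view(batch_size, -1)
    # Penalize any deviation of the gradient norm from 1
    return ((gradients.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic loss: mean(critic(fake)) - mean(critic(real)) + lambda_gp * gradient_penalty(...)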
Another way of generating appealing and mathematically well-defined patterns is shown by Feijs and co-authors [15] in their study using fractals. Fractals are small patterns or designs that, when printed on cloth, can be regarded as textile designs.

Jun-Yan Zhu and co-authors [16] have shown a new approach to style transfer that produces appealing output. Their approach uses an adversarial loss, which makes the input and target distributions indistinguishable, and the proposed method is unsupervised.
Fig. 1: Some faulty examples present in the Stearns et al. data

Due to the presence of these types of images, we first clean the data and then perform pseudo labeling to enlarge the dataset before using it further in our research.

B. Pseudo Labelling for Improving and Cleaning the Dataset

Pseudo labelling is a semi-supervised technique in which, after learning from limited training data, labels can be assigned to unlabelled data. We scraped mixed textile design images from the internet. The problem with some of these images was that they carried watermarks; we used OCR to detect text in the images and deleted such images on the fly. Another problem was that many images had text and website information in the bottom rows. Since the images were quite large and removing a few rows from the bottom makes no practical difference, we removed the last 20 rows, and this is how we obtained a large dataset.
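The following sketch illustrates this cleaning and pseudo-labeling step. It is only a minimal illustration: the OCR engine (Tesseract via pytesseract), the 224x224 input size, the Keras-style predict interface and the confidence threshold are assumptions made for the example, not details reported here.

import numpy as np
from PIL import Image
import pytesseract  # assumption: Tesseract OCR; the text only states that OCR was used

CROP_ROWS = 20      # bottom rows carrying text / website banners

def clean_image(path):
    """Return a cropped image, or None if OCR detects watermark text."""
    img = Image.open(path).convert("RGB")
    if pytesseract.image_to_string(img).strip():
        return None                               # embedded text -> discard on the fly
    w, h = img.size
    return img.crop((0, 0, w, h - CROP_ROWS))     # drop the last 20 rows

def pseudo_label(classifier, img, classes, threshold=0.9):
    """Assign a label with a classifier trained on the labelled set,
    keeping only confident predictions (threshold is an assumption)."""
    x = np.asarray(img.resize((224, 224)), dtype=np.float32) / 255.0
    probs = classifier.predict(x[np.newaxis, ...])[0]   # Keras-style interface assumed
    idx = int(np.argmax(probs))
    return classes[idx] if probs[idx] >= threshold else None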
C. Textile Pattern Generation Using Image Generative Models

We have used WGANs GP, DCGANs and convolutional VAEs for the generation of textile patterns and trained all of these variants from scratch. The standard architecture of each network was used with no modifications. The results of all three models are presented in Table I.

D. Visualizing Latent Space of VAEs

Convolutional VAEs were used to generate the textile patterns, and the latent space they capture was visualized. The mean and standard deviation of the latent space were recorded for the test set and are shown in Fig. 3. Even though the textile designs are not perfectly separated, we can still see clusters of similar designs forming. The fact that these clusters emerge in an unsupervised manner backs up our claim that the dataset we propose is properly labeled and distinctive.
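A minimal sketch of how such a visualization can be produced is given below. It assumes a convolutional VAE encoder that returns the latent mean and log-variance, and it projects the means to two dimensions with t-SNE for plotting; the projection method and plotting details are assumptions for the example rather than specifics of this work.

import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

@torch.no_grad()
def plot_latent_space(encoder, test_loader, device="cpu"):
    """Encode the test set, collect the latent means and scatter-plot them
    in 2-D, coloured by textile class, to inspect the clustering."""
    means, labels = [], []
    for images, y in test_loader:
        mu, logvar = encoder(images.to(device))   # assumption: encoder returns (mu, logvar)
        means.append(mu.cpu())
        labels.append(y)
    means = torch.cat(means).numpy()
    labels = torch.cat(labels).numpy()

    coords = TSNE(n_components=2).fit_transform(means)   # 2-D projection for display
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab10")
    plt.colorbar(label="textile class")
    plt.title("Latent means of the CVAE on the test set")
    plt.show()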
Fig. 4: Style transferred from checkered to dotted to striped
Fig. 5: Style transferred from zigzag and then to dotted
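The chained transfers shown in Fig. 4 and Fig. 5 combine patterns generated by WGANs GP through neural style transfer. As an illustration of a single transfer step, the sketch below follows the classical optimization-based formulation of Gatys et al. [18] with a pretrained VGG-19 from torchvision; this is an assumption made for the example and is not necessarily the exact style transfer model of [7] used in our pipeline. It expects single images as (1, 3, H, W) tensors normalized with ImageNet statistics.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Layer indices in torchvision's VGG-19 "features", roughly following Gatys et al. [18]
STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 .. conv5_1
CONTENT_LAYER = 21                  # conv4_2

def extract_features(x, cnn):
    feats = {}
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(f):
    # Gram matrix of one feature map (assumes a batch of one image)
    b, c, h, w = f.shape
    f = f.view(b * c, h * w)
    return f @ f.t() / (c * h * w)

def transfer(content_img, style_img, steps=300, style_weight=1e6, content_weight=1.0):
    """Blend a generated 'content' pattern with the texture of a 'style' pattern."""
    content_img, style_img = content_img.detach(), style_img.detach()
    cnn = vgg19(pretrained=True).features.eval()
    for p in cnn.parameters():
        p.requires_grad_(False)

    content_feats = extract_features(content_img, cnn)
    style_grams = {i: gram(f) for i, f in extract_features(style_img, cnn).items()
                   if i in STYLE_LAYERS}

    result = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=0.02)
    for _ in range(steps):
        feats = extract_features(result, cnn)
        c_loss = F.mse_loss(feats[CONTENT_LAYER], content_feats[CONTENT_LAYER])
        s_loss = sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
        loss = content_weight * c_loss + style_weight * s_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return result.detach()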
TABLE I: Results of the various generative algorithms for textile design pattern generation
TABLE II: Inception scores of the WGANs GP and DCGANs models for different textile design classes
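Table II reports the Inception score [11] per class for WGANs GP and DCGANs. For reference, a minimal sketch of how this score is commonly computed is shown below, using torchvision's pretrained Inception v3; this is a generic illustration (batching and image preprocessing omitted), not the exact evaluation code behind the table.

import torch
import torch.nn.functional as F
from torchvision.models import inception_v3

@torch.no_grad()
def inception_score(images, splits=10):
    """images: (N, 3, 299, 299) tensor preprocessed for Inception v3.
    Returns mean and std over splits of exp(E_x[KL(p(y|x) || p(y))])."""
    model = inception_v3(pretrained=True).eval()
    probs = F.softmax(model(images), dim=1)

    scores = []
    for chunk in probs.chunk(splits, dim=0):
        p_y = chunk.mean(dim=0, keepdim=True)                  # marginal class distribution
        kl = (chunk * (chunk.log() - p_y.log())).sum(dim=1)    # KL(p(y|x) || p(y)) per sample
        scores.append(kl.mean().exp())
    scores = torch.stack(scores)
    return scores.mean().item(), scores.std().item()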
REFERENCES
[1] D. Lee, "Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks," in Proc. ICML Workshop on Challenges in Representation Learning (WREPL), Atlanta, Georgia, USA, 2013.
[2] L. Weng, "From GAN to WGAN," Apr. 18, 2019. [Online].
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
[4] A. J. Medeiros, L. Stearns, L. Findlater, C. Chen, and J. E. Froehlich, "Recognizing Clothing Colors and Visual Textures Using a Finger-Mounted Camera: An Initial Investigation." [Online].
[5] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," Jan. 7, 2016. [Online].
[6] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin, "Variational Autoencoder for Deep Learning of Images, Labels and Captions," in Proc. 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016.
[7] Y. Li et al., "Demystifying neural style transfer," arXiv preprint arXiv:1701.01036, 2017.
[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Dec. 10, 2015. [Online].
[9] J. Gauthier, "Conditional generative adversarial nets for convolutional face generation," Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014.
[10] A. Brock, J. Donahue, and K. Simonyan, "Large scale GAN training for high fidelity natural image synthesis," arXiv preprint arXiv:1809.11096, 2018.
[11] Z. Zhou, W. Zhang, and J. Wang, "Inception Score, Label Smoothing, Gradient Vanishing and log(D(x)) Alternative," 2017.
[12] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," Dec. 6, 2017. [Online].
[13] N. D. Yen, "Lipschitz continuity of solutions of variational inequalities with a parametric polyhedral constraint," Mathematics of Operations Research, vol. 20, no. 3, pp. 695-708, 1995.
[14] Y. Tsuruoka, J. Tsujii, and S. Ananiadou, "Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty," in Proc. Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, vol. 1, Association for Computational Linguistics, 2009.
[15] D. L. Turcotte, "Fractals and fragmentation," Journal of Geophysical Research: Solid Earth, vol. 91, no. B2, pp. 1921-1926, 1986.
[16] J.-Y. Zhu et al., "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc. IEEE International Conference on Computer Vision, 2017.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks." [Online].
[18] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[19] F. Perez-Cruz, "Kullback-Leibler Divergence Estimation of Continuous Distributions." [Online].
[20] C. Frogner, C. Zhang, H. Mobahi, M. Araya-Polo, and T. Poggio, "Learning with a Wasserstein Loss," Dec. 30, 2015. [Online].