Fabric Defect Detection using Deep Convolutional Neural Network
Abstract—The enormous growth of the fashion industry has increased the demand for high-quality fabric material. Fabric defect detection plays a crucial role in maintaining fabric quality, as a single defect in the fabric can halve its price. Traditional machine learning approaches generalize poorly and cannot be employed for defect detection of both patterned and non-patterned fabrics. This paper presents a Deep Convolutional Neural Network (DCNN) for fabric defect detection. The proposed method consists of a three-layered DCNN for the representation of normal and defective fabric patches. The performance of the proposed DCNN is evaluated on the standard TILDA database and an in-house database using percentage accuracy. The proposed method gives accuracies of 98.33% and 90.39% for non-patterned and patterned fabric defect detection on the in-house database, and 99.06% accuracy on the non-patterned TILDA database.
Keywords: fabric defect detection, Deep Convolutional Neural Network, patterned fabric, non-patterned fabric
DOI: 10.3103/S1060992X21030024
1. INTRODUCTION
A fabric or textile is a material formed by yarning, weaving, knitting, tatting, felting, crocheting, or braiding natural or synthetic yarns. Natural yarns can be obtained from cotton, silk-worm, wool, flax, jute, rayon, etc., whereas synthetic fibers can be obtained from polyester, acrylic, nylon, lurex, aramid, carbon fiber, etc. [1, 2]. A fabric defect disturbs the homogeneity of the fabric texture and makes it unsuitable for garments or fashion products. Fabric defects can be caused by faults in the machine, production, yarning, knitting, dyeing, stitching, painting, rolling, etc. Some of the common fabric defects are hole, knot, thick bar, thin bar, missing picks, broken end, stained weft yarn, oil spot, double pick, double end, loose weft, etc. [3, 4]. Figure 1 shows some common fabric defects.
Fabric material is classified into non-patterned and patterned fabric. Non-patterned fabric has a high degree of homogeneity in its texture, whereas patterned fabric consists of a repetitive unit over the texture [5]. Various machine learning-based methods have been adopted in the past for non-patterned fabric defect detection, such as the Fourier transform [6], morphological filters [7], the wavelet transform [8, 9], the contourlet transform [10], local homogeneity analysis [11], etc. These methods are sensitive to rotation, scale, noise, illumination changes and contrast. For patterned fabric defect detection, Bollinger Bands [12], the Regular Band [13, 14], the Wavelet Golden Image Subtraction method [15, 16], image decomposition [17], etc. have been successfully presented. These methods are poorly generalized and result in poor performance for defect detection of non-patterned fabric.
In this paper, we present a three-layered Deep Convolutional Neural Network (DCNN) for both patterned and non-patterned fabric defect detection. The proposed DCNN consists of convolution layers, rectified linear units (ReLU), maximum pooling layers, a fully connected layer and a classification layer. For the experimentation, the database is divided into normal and defective fabric patches having a resolution of 200 × 200. The performance of the proposed DCNN is evaluated for the detection of hole, stain, thin bar, thick bar, knot and broken picks defects.
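The exact layer hyper-parameters belong to the implementation details; purely as an illustration, a minimal Keras sketch of a comparable three-layer convolution/ReLU/max-pooling feature extractor for 200 × 200 grayscale patches is given below, where the 3 × 3 kernels, six filters per layer and 2 × 2 pooling windows are assumptions for the example rather than the paper's exact configuration.

# Illustrative sketch only: three convolution/ReLU/max-pooling blocks for
# 200 x 200 grayscale fabric patches. Kernel sizes, filter counts and the
# pooling window are assumed here, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers

feature_extractor = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 200, 1)),
    layers.Conv2D(6, (3, 3), activation="relu"),  # convolution + ReLU, block 1
    layers.MaxPooling2D((2, 2)),                  # non-overlapping max pooling
    layers.Conv2D(6, (3, 3), activation="relu"),  # block 2
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(6, (3, 3), activation="relu"),  # block 3
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                             # feature vector passed on to the classifier
])
feature_extractor.summary()

The flattened output of such a network is what the fully connected layer and the classifier described below operate on.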
This paper is organized as follows: Section 2 gives the details about DCNN implementation. Further
Section 3 provides experimental results and discussion. Finally, Section 4 concludes the paper and pres-
ents the future scope of the work.
Fig. 1. Common fabric defects: horizontal lines, missing picks, oil spot, loose weft.
The convolution kernel is strided by one pixel over the entire image. For the implementation, 6 convolution kernels of size m × n are selected. All filter kernels are initialized randomly. Each value in the resulting feature map is called a neuron. The convolution operation for a gray image im with filter fm is given in Eq. (1):
$$im_{conv}(m, n) = im(m, n) * fm(m, n) = \sum_{i=1}^{R}\sum_{j=1}^{C} im(i, j)\, fm(i - m, j - n). \quad (1)$$
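As a minimal numerical illustration of Eq. (1), the following numpy sketch applies six randomly initialized kernels to a grayscale patch with a stride of one pixel; the 3 × 3 kernel size and the 'valid' output region are assumptions for the example.

import numpy as np

def conv2d(im, fm):
    # Stride-1 2-D convolution of Eq. (1), computed over the 'valid' region only.
    fm = np.flipud(np.fliplr(fm))                    # flip the kernel for convolution
    kR, kC = fm.shape
    H, W = im.shape
    out = np.zeros((H - kR + 1, W - kC + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(im[m:m + kR, n:n + kC] * fm)
    return out

patch = np.random.rand(200, 200)                     # grayscale fabric patch
kernels = [np.random.randn(3, 3) for _ in range(6)]  # 6 randomly initialized kernels (size assumed)
feature_maps = [conv2d(patch, k) for k in kernels]   # one feature map per kernel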
The rectified linear unit increases the non-linear properties of the convolution feature map. The negative values in the convolution layer may degrade the nonlinear properties of the features. In this layer, the dimensions of the feature maps remain the same as those of the convolution layer. Each negative value of the convolution feature map is set to 0, and all non-negative values are kept as they are, as given in Eq. (2) [19]:

$$im_{ReLU}(i, j) = \max\{0,\ im_{conv}(i, j)\}, \quad i = 1:R,\ j = 1:C. \quad (2)$$
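In code, Eq. (2) is a single element-wise operation; a short numpy sketch on a placeholder convolution feature map:

import numpy as np

conv_map = np.random.randn(198, 198)       # placeholder convolution feature map
relu_map = np.maximum(0.0, conv_map)       # Eq. (2): negative responses set to zero, others unchanged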
The pooling layer helps to collect the important features from the ReLU feature maps while neglecting the less important ones. There are two major types of pooling: maximum pooling and average pooling. In maximum pooling, the maximum of the local window is selected as the salient feature, whereas in average pooling the average of the local window is selected. Pooling also helps to reduce the feature dimensions: for a pooling window of size M × M, each feature-map dimension is scaled by a factor of 1/M. Pooling is performed over non-overlapping windows of size M × M [20]. The performance of the proposed system is also evaluated with the average pooling layer. The maximum and average pooling outputs ($im_{MP}$ and $im_{AP}$) for a ReLU layer output of dimension R × C are given in Eqs. (3) and (4):

$$im_{MP}(i, j) = \max\ im_{ReLU}(i:i+M-1,\ j:j+M-1), \quad i = 1:M:R,\ j = 1:M:C, \quad (3)$$

$$im_{AP}(i, j) = \mathrm{mean}\ im_{ReLU}(i:i+M-1,\ j:j+M-1), \quad i = 1:M:R,\ j = 1:M:C. \quad (4)$$
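A minimal numpy sketch of the non-overlapping M × M pooling of Eqs. (3) and (4); the window size M = 2 and the placeholder feature-map size are assumptions for the example.

import numpy as np

def pool2d(fmap, M=2, mode="max"):
    # Non-overlapping M x M pooling; rows/columns that do not fill a window are dropped.
    R, C = fmap.shape
    blocks = fmap[:R - R % M, :C - C % M].reshape(R // M, M, C // M, M)
    if mode == "max":
        return blocks.max(axis=(1, 3))     # Eq. (3): salient response per window
    return blocks.mean(axis=(1, 3))        # Eq. (4): average response per window

relu_map = np.abs(np.random.randn(198, 198))     # placeholder ReLU feature map
pooled_max = pool2d(relu_map, M=2, mode="max")   # 99 x 99, i.e. each dimension scaled by 1/M
pooled_avg = pool2d(relu_map, M=2, mode="avg")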
In the fully connected layer, each neuron of one layer is connected to all neurons of the next layer to give a deeper representation of the fabric image. Rather than using the traditional soft-max classifier of a deep convolutional neural network, which is a simple template-matching-based classifier, we use a multi-class linear support vector machine (SVM) classifier for the classification. The SVM requires minimal testing time because it compares the test data with the support vectors only [21, 22].
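A hedged sketch of the classification stage with scikit-learn's multi-class linear SVM is shown below; the feature matrices, labels and class count used here are placeholders standing in for the flattened fully connected features of the fabric patches and their normal/defect labels.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Placeholder data: in practice X_* hold the flattened DCNN feature vectors of the
# fabric patches and y_* the corresponding normal/defect class labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((120, 31104)), rng.integers(0, 7, 120)
X_test, y_test = rng.random((30, 31104)), rng.integers(0, 7, 30)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
clf.fit(X_train, y_train)                      # one-vs-rest multi-class linear SVM
accuracy = 100.0 * clf.score(X_test, y_test)   # percentage accuracy, as reported in the results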
It is observed that the DCNN with maximum pooling gives better results than the DCNN with average pooling because of its ability to select salient information from the local window.
The performance of the proposed system for different numbers of CNN layers is shown in Fig. 6. It is noticed that increasing the number of convolution layers increases the fabric texture representation capability. Thus, the defect detection accuracy of the three-layered CNN is higher than that of networks with fewer layers. When only one convolution layer is used, the dimension of the feature maps is 15000, whereas with two convolution layers it is 22500. The final implementation with three convolution layers results in a feature map of dimension 31104.
Method  Hole   Knot   Stain  Thin bar  Thick bar  Broken picks  Average
DT      74.00  68.00  69.00  83.00     78.00      71.00         73.83
WT      80.00  82.00  78.00  91.00     86.00      84.00         83.50
LHA     95.67  97.00  93.33  95.00     93.67      91.33         94.33
MLNA    98.33  94.67  97.33  97.33     93.67      98.68         96.67
DCNN    98.67  99.33  99.33  98.00     96.00      98.67         98.33

Fig. 3. Accuracy (%) for different fabric defects for the in-house non-patterned database.
Method  Hole   Knot   Stain  Thin bar  Thick bar  Broken picks  Average
DT      68.33  58.00  62.66  72.00     58.67      61.66         63.55
WT      74.33  72.66  71.33  73.66     64.66      73.66         71.72
LHA     90.33  83.66  84.00  85.33     72.00      84.33         83.28
MLNA    91.66  86.33  92.33  91.66     73.66      92.66         88.05
DCNN    91.66  88.67  93.00  93.33     87.33      88.33         90.39

Fig. 4. Accuracy (%) for different fabric defects for the in-house patterned database.
Method  Hole   Knot   Stain  Thin bar  Thick bar  Broken picks  Average
DT      75.00  68.33  69.66  83.00     78.00      71.00         74.17
WT      82.00  82.64  80.66  91.66     86.67      85.33         84.83
LHA     95.67  98.00  94.33  96.33     94.00      92.33         95.11
MLNA    98.67  95.66  98.33  97.66     94.00      98.66         97.16
DCNN    99.00  99.67  99.33  98.67     98.33      99.33         99.06

Fig. 5. Accuracy (%) for different fabric defects for the TILDA non-patterned database.
Fig. 6. Accuracy (%) of the proposed DCNN with one, two and three convolution layers (CNN 1, CNN 2, CNN 3) for the in-house non-patterned, in-house patterned and TILDA non-patterned databases.

4. CONCLUSIONS

This paper presents a simple and effective fabric defect detection method based on a Deep Convolutional Neural Network. The performance of the proposed algorithm is evaluated on the TILDA non-patterned database and on in-house patterned as well as non-patterned fabric defects. The performance of the proposed method
is compared with various methods, and it is found that the proposed method gives a significant improvement over traditional methods. The proposed DCNN can describe the local texture, connectivity and homogeneity of the local fabric patch. The three-layered DCNN with an SVM classifier gives 98.33 and 90.39% accuracy for non-patterned and patterned fabric defect detection, respectively. The proposed method gives 99.06% accuracy for the TILDA database. The performance of the supervised DCNN depends upon the total number of training samples and the patch size of the fabric. Further, the future scope of the proposed work consists of performance improvement for patterned fabric defect detection using unsupervised deep learning architectures such as generative adversarial networks and variational auto-encoders.
REFERENCES
1. Textile, Merriam-Webster.com Dictionary, Merriam-Webster. https://www.merriam-webster.com/dictionary/textile. Accessed October 19, 2020.
2. Fabric, Merriam-Webster.com Dictionary, Merriam-Webster. https://www.merriam-webster.com/dictionary/fabric. Accessed October 19, 2020.
3. Kumar, Ajay, Computer-vision-based fabric defect detection: A survey, IEEE Trans. Ind. Electron., 2008, vol. 55, no. 1, pp. 348–363.
4. 23 Fabric defects to look out for during fabric inspection. https://www.intouch-quality.com/blog/5-common-fabric-defects-prevent. Accessed October 26, 2020.
5. Ngan, Henry Y.T., Grantham K.H. Pang, and Nelson H.C. Yung, Automated fabric defect detection—A review,
Image Vision Comput., 2011, vol. 29, no. 7, pp. 442–458.
6. Modrângă, C., Brad, R., and Brad, R., Fabric defect detection using Fourier transform and Gabor filters, J. Text. Eng. Fashion Technol., 2017, vol. 3, no. 4, p. 00107.
7. Rebhi, Ali, Abid, S., and Farhat Fnaiech, Fabric defect detection using local homogeneity and morphological image
processing, in 2016 Int. Image Processing, Applications and Systems (IPAS), IEEE, 2016, pp. 1–5.
8. Saleh, Eman Hussein, Mohamed Mohamed Fouad, Sayed, M.S., Wael Badawy, and Fathi E. Abd El-Samie, Fully automated fabric defect detection using additive wavelet transform, Menoufia J. Electron. Eng. Res., 2020, vol. 29, no. 2, pp. 119–125.
9. Karlekar, V.V., Biradar, M.S., and Bhangale, K.B., Fabric defect detection using wavelet filter, in 2015 Int. Conf.
on Computing Communication Control and Automation, IEEE, 2015, pp. 712–715.
10. Yapi, D., Mohand Saïd Allili, and Nadia Baaziz, Automatic fabric defect detection using learning-based local
textural distributions in the contourlet domain, IEEE Trans. Autom. Sci. Eng., 2017, vol. 15, no. 3, pp. 1014–
1026.
11. Kure, Namita, Biradar, M.S., and Bhangale, K.B., Local neighborhood analysis for fabric defect detection, in
2017 Int. Conf. on Information, Communication, Instrumentation and Control (ICICIC), IEEE, 2017, pp. 1–5.
12. Ngan, Henry Y.T. and Grantham K.H. Pang, Novel method for patterned fabric inspection using Bollinger
bands, Opt. Eng., 2006, no. 8, p. 087202.
13. Ngan, Henry Y.T. and Grantham K.H. Pang, Regularity analysis for patterned texture inspection, IEEE Trans.
Autom. Sci. Eng., 2008, vol. 6, no. 1, pp. 131–144.
14. Biradar, M.S., Sheeparmatti, B.G., Patil, P.M., and Sarojini Ganapati Naik, Patterned fabric defect detection
using regular band and distance matching function, in 2017 Int. Conf. on Computing, Communication, Control and
Automation (ICCUBEA), IEEE, 2017, pp. 1–6.
15. Jing, Jun-Feng, Shan Chen, and Peng-Fei Li, Fabric defect detection based on golden image subtraction, Color.
Technol., 2017, vol. 133, no. 1, pp. 26–39.
16. Naik, Sarojini Ganapati, Biradar, M.S., and Bhangale, K.B., Patterned fabric defect detection using wavelet
golden image subtraction method, Int. J. Adv. Res., Ideas Innovations Technol., 2017, vol. 3, no. 3, pp. 767–771.
17. Ng, M.K., Henry Y.T. Ngan, Xiaoming Yuan, and Wenxing Zhang, Patterned fabric inspection and visualiza-
tion by the method of image decomposition, IEEE Trans. Autom. Sci. Eng., 2014, vol. 11, no. 3, pp. 943–947.
18. Jing, Jun-Feng, Hao Ma, and Huan-Huan Zhang, Automatic fabric defect detection using a deep convolutional
neural network, Color. Technol., 2019, vol. 135, no. 3, pp. 213–223.
19. Albawi, Saad, Tareq Abed Mohammed, and Saad Al-Zawi, Understanding of a convolutional neural network,
in 2017 Int. Conf. on Engineering and Technology (ICET), IEEE, 2017, pp. 1–6.
20. Guo, Tianmei, Jiwen Dong, Henjian Li, and Yunxing Gao, Simple convolutional neural network on image clas-
sification, in 2017 IEEE 2nd Int. Conf. on Big Data Analysis (ICBDA), IEEE, 2017, pp. 721–724.
21. Bhangale, K.B. and Shekokar, R.U., Human body detection in static images using hog and piecewise linear
svm, Int. J. Innovative Res. Devel., 2014, vol. 3, no. 6.
22. Bhangale, K.B. and Mohanaprasad, K., Content based image retrieval using collaborative color, texture and shape features, Int. J. Innovative Technol. Exploring Eng. (IJITEE), 2020, vol. 9, no. 3.
23. TILDA: Textile Texture Database, 1996, Dataset, Texture. https://lmb.informatik.uni-freiburg.de/resources/datasets/tilda.en.html.