

ISSN 1060-992X, Optical Memory and Neural Networks, 2021, Vol. 30, No. 3, pp. 250–256. © Allerton Press, Inc., 2021.

Fabric Defect Detection Using Deep Convolutional Neural Network
Maheshwari S. Biradar (a, *), B. G. Shiparamatti (a), and P. M. Patil (b)
(a) Basaveshwar Engineering College, Vidayagiri, Bagalkot, Karnataka, 587102 India
(b) S.N.D. College of Engineering and Research Center, Yeola, Maharashtra, 423401 India
*e-mail: [email protected]
Received February 9, 2021; revised June 8, 2021; accepted June 10, 2021

Abstract—The enormous growth of the fashion industry has increased the demand for high-quality fabric material. Fabric defect detection plays a crucial role in maintaining quality of service, as a single defect in the fabric can halve its price. Traditional machine learning approaches generalize poorly and cannot be employed for defect detection in both patterned and non-patterned fabrics. This paper presents a Deep Convolutional Neural Network (DCNN) for fabric defect detection. The proposed method consists of a three-layered DCNN for the representation of normal and defective fabric patches. The performance of the proposed DCNN is evaluated on the standard TILDA and an in-house database using percentage accuracy. The proposed method gives accuracies of 98.33 and 90.39% for non-patterned and patterned fabric defect detection on the in-house database, and 99.06% accuracy on the non-patterned TILDA database.

Keywords: fabric defect detection, Deep Convolutional Neural Network, patterned fabric, non-patterned fabric
DOI: 10.3103/S1060992X21030024

1. INTRODUCTION
A fabric or textile is a material formed by yarning, weaving, knitting, tatting, felting, crocheting, or braiding natural or synthetic yarns. Natural yarns can be obtained from cotton, silk-worm, wool, flax, jute, rayon, etc., whereas synthetic fibers can be obtained from polyester, acrylic, nylon, lurex, aramid, carbon fiber, etc. [1, 2]. A fabric defect disturbs the homogeneity of the fabric texture and makes it unsuitable for garments or fashion products. Fabric defects can be caused by faults in the machine, production, yarning, knitting, dyeing, stitching, painting, rolling, etc. Some common fabric defects are hole, knot, thick bar, thin bar, missing picks, broken end, stained weft yarn, oil spot, double pick, double end, loose weft, etc. [3, 4]. Figure 1 shows some common fabric defects.
Fabric material is classified into non-patterned and patterned fabric. Non-patterned fabric has a high degree of homogeneity in its texture, whereas patterned fabric consists of repetitive units over the texture [5]. Various machine learning-based methods have been adopted in the past for non-patterned fabric defect detection, such as the Fourier transform [6], morphological filters [7], the wavelet transform [8, 9], the contourlet transform [10], local homogeneity analysis [11], etc. These methods are sensitive to rotation, scale, noise, illumination changes and contrast. For patterned fabric defect detection, Bollinger Bands [12], the Regular Band [13, 14], the Wavelet Golden Image Subtraction method [15, 16], image decomposition [17], etc. have been successfully presented. These methods are less generalized and perform poorly for defect detection in non-patterned fabric.
In this paper, we present a three-layered Deep Convolutional Neural Network (DCNN) for both patterned and non-patterned fabric defect detection. The proposed DCNN consists of a convolution layer, rectified linear unit (ReLU), maximum pooling layer, fully connected layer and classification layer. For the experimentation, the database is divided into normal and defective fabric patches with a resolution of 200 × 200. The performance of the proposed DCNN is evaluated for the detection of hole, stain, thin bar, thick bar, knot and broken pick defects.
This paper is organized as follows: Section 2 gives the details of the DCNN implementation. Section 3 provides experimental results and discussion. Finally, Section 4 concludes the paper and presents the future scope of the work.


[Defect samples: (a) stain, (b) hole, (c) knot, (d) thin bar, (e) horizontal lines, (f) missing picks, (g) oil spot, (h) loose weft.]

Fig. 1. Fabric defect samples.

[Flow diagram: input image (100 × 100) → CNN layer 1 (Conv-1 + ReLU-1: 100 × 100 × 6; Max pooling-1: 50 × 50 × 6) → CNN layer 2 (Conv-2 + ReLU-2: 50 × 50 × 36; Max pooling-2: 25 × 25 × 36) → CNN layer 3 (Conv-3 + ReLU-3: 25 × 25 × 216; Max pooling-3: 12 × 12 × 216) → fully connected layer and classifier → output layer.]

Fig. 2. Flow diagram of proposed system.

2. DEEP CONVOLUTIONAL NEURAL NETWORK (DCNN)


The proposed system consists of a supervised three-layered Deep Convolutional Neural Network (DCNN) comprising convolution layers, ReLU layers, maximum/average pooling layers, a fully connected layer, and a multi-class support vector machine classifier. For the training and testing of the system, defective and non-defective fabric patches of size 200 × 200 are used. The flow diagram of the proposed system is shown in Fig. 2.
In the convolution layer, the original gray-scale fabric image is convolved with N convolution filters. The convolutional layer captures the spatial relationship between local regions of the fabric texture: it encodes the local connectivity between pixels, which helps to learn the discriminative attributes of the texture. In the convolution operation, a filter window of size w × w is multiplied element-wise with a w × w local region of the image, and the window slides over the entire image with a stride of one pixel to capture the local connectivity of the texture. To maintain the original size of the image after convolution and to avoid losing edge information, the gray image is zero-padded with one pixel on all borders. The filter weights are initialized randomly and updated continuously throughout the training process. The convolution operation generates N convolution feature maps; each filter can learn different texture properties of the fabric material, so the convolution layer gives a representation of the internal texture properties of the fabric [18]. For the implementation, 6 convolution kernels of size m × n are selected, all initialized randomly. Each value in a map is called a neuron. The convolution operation for a gray image im using filter fm is given in Eq. (1).
\mathrm{im}_{\mathrm{conv}}(m,n) = \mathrm{im}(m,n) * \mathrm{fm}(m,n) = \sum_{i=1}^{R}\sum_{j=1}^{C} \mathrm{im}(i,j)\,\mathrm{fm}(i-m,\,j-n). \quad (1)
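As a minimal illustrative sketch (not the authors' implementation), the operation of Eq. (1) with one-pixel zero-padding and a one-pixel stride can be written in NumPy as follows; the function name conv2d and the square-kernel assumption are ours, and as in most CNN "convolution" layers the kernel is applied without flipping:

```python
import numpy as np

def conv2d(im, fm):
    """Convolve a gray-scale image with one w x w filter kernel.

    The image is zero-padded on every border so the output keeps the
    original R x C size, and the window slides with a one-pixel stride,
    as described in the text.
    """
    w = fm.shape[0]                               # kernel size, e.g. 3
    padded = np.pad(im, w // 2, mode="constant")  # zero-padding
    R, C = im.shape
    out = np.zeros((R, C))
    for i in range(R):
        for j in range(C):
            # element-wise multiply the local w x w window with the kernel
            out[i, j] = np.sum(padded[i:i + w, j:j + w] * fm)
    return out
```

Applying N such kernels to a 100 × 100 gray image would yield the N feature maps of size 100 × 100 listed in Table 1.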

The rectified linear unit (ReLU) increases the non-linear properties of the convolution feature maps, since negative values in the convolution output may degrade the non-linear properties of the features. In this layer, the dimensions of the feature maps remain the same as in the convolution layer: every negative value of the convolution feature map is set to 0 and all non-negative values are kept as they are, as given in Eq. (2) [19].
\mathrm{im}_{\mathrm{ReLU}}(i,j) = \max\{0,\ \mathrm{im}_{\mathrm{conv}}(i,j)\}, \quad i = 1{:}R,\ j = 1{:}C. \quad (2)
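Eq. (2) is a single element-wise operation; a one-line NumPy sketch:

```python
import numpy as np

def relu(fmap):
    # Eq. (2): negative values become 0, non-negative values pass through;
    # the feature-map dimensions are unchanged.
    return np.maximum(0, fmap)
```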

The pooling layer collects the important features from the ReLU feature maps while neglecting the less important ones. There are two major types of pooling: maximum pooling and average pooling. In maximum pooling, the maximum of the local window is selected as the salient feature, whereas in average pooling the average of the local window is selected. Pooling also helps to reduce the feature dimensions: for a pooling window of size M × M, each spatial dimension of the feature maps is scaled by a factor of 1/M. Pooling is performed over non-overlapping windows of size M × M [20]. The performance of the proposed system is also evaluated for the average pooling layer. The maximum and average pooling outputs (im_MP and im_AP) for a ReLU layer output of dimension R × C are given in Eqs. (3) and (4).
\mathrm{im}_{\mathrm{MP}}(i,j) = \max\ \mathrm{im}_{\mathrm{ReLU}}(i{:}i{+}1,\ j{:}j{+}1), \quad (3)

\mathrm{im}_{\mathrm{AP}}(i,j) = \mathrm{average}\ \mathrm{im}_{\mathrm{ReLU}}(i{:}i{+}1,\ j{:}j{+}1), \quad i = 1{:}R{-}1,\ j = 1{:}C{-}1. \quad (4)
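A compact sketch of non-overlapping M × M pooling (the helper name pool2d is ours; the sides of the feature map are assumed divisible by M, as they are for the sizes in Table 1):

```python
import numpy as np

def pool2d(fmap, M=2, mode="max"):
    """Non-overlapping M x M pooling of an R x C feature map.

    Each spatial dimension is scaled by 1/M; "max" keeps the salient
    value of each window, "average" keeps the window mean.
    """
    R, C = fmap.shape
    # Group the map into (R//M) x (C//M) blocks of M x M each.
    blocks = fmap.reshape(R // M, M, C // M, M)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))
```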

In the fully connected layer, each neuron of one layer is connected to all neurons of the next layer to give a deeper representation of the fabric image. Rather than the traditional soft-max classifier of a deep convolutional neural network, which is a simple template-matching-based classifier, we use a multi-class linear support vector machine (SVM) classifier. The SVM requires minimal testing time because it compares the test data with the support vectors only [21, 22].
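The classification stage can be sketched as flattening the final pooled feature maps into a single vector and applying a linear decision function; the weights W and bias b below are hypothetical stand-ins for parameters that would be learned when training the linear SVM:

```python
import numpy as np

def classify_patch(feature_maps, W, b):
    """Flatten the last pooling output (e.g. 12 x 12 x 216 -> 31104)
    and score it with a linear decision function, one score per class.
    W and b are placeholders for trained SVM parameters."""
    x = feature_maps.reshape(-1)   # final feature vector
    scores = W @ x + b             # linear decision function
    return int(np.argmax(scores))  # e.g. 0 = normal, 1 = defective
```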

3. EXPERIMENTAL RESULTS AND DISCUSSION


The proposed system is implemented on a personal laptop with a Core i5 processor running at 2.64 GHz, 8 GB RAM and a 2 GB graphics processor on the Windows platform. For the experimentation, in-house patterned and non-patterned fabric databases, each consisting of 500 defective and 500 non-defective samples, are used. The in-house non-patterned fabric defect database consists of samples of woven fabric material with a plain texture. The in-house patterned fabric database consists of star-patterned, box-patterned, dot-patterned and flower-patterned fabric samples. We have also considered 500 patches of defective and 500 patches of non-defective non-patterned fabric from the TILDA database [23]. The performance of the proposed system is evaluated using percentage accuracy [24], computed using Eq. (5), where P is the total number of non-defective samples, N is the total number of defective samples, True Positive (TP) is the number of correctly classified non-defective samples and True Negative (TN) is the number of correctly classified defective samples.

\mathrm{Accuracy}\ (\%) = \frac{TP + TN}{P + N} \times 100. \quad (5)
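For example, Eq. (5) for a hypothetical split of the 500 + 500 test patches (the counts below are illustrative, not the paper's results):

```python
def accuracy_pct(tp, tn, p, n):
    # Eq. (5): correctly classified samples as a percentage of all samples.
    return (tp + tn) / (p + n) * 100.0

# Hypothetical counts: 492 of 500 non-defective (TP) and 491 of 500
# defective (TN) patches classified correctly.
print(accuracy_pct(492, 491, 500, 500))   # 98.3
```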
The DCNN feature-map dimensions for the various layers are given in Table 1. The final feature vector of dimension 31104 × 1 is fed to the SVM classifier.
Extensive experiments have been carried out for various window sizes, numbers of filter maps, strides, and window sizes of average/maximum pooling. The optimal performance comparison of the proposed system with previous methods such as direct thresholding (DT), wavelet transform (WT), Local Homogeneity Analysis (LHA) and Modified Local Neighborhood Analysis (MLNA) for the in-house and TILDA databases is shown in Figs. 3–5. The parameters selected for the implementation of the DCNN are three CNN layers, a window size of 3 × 3, 6 filter maps, a one-pixel stride and a maximum pooling layer. It is observed that the DCNN with maximum pooling gives better results than with average pooling because of its ability to select salient information from the local window.

Table 1. DCNN feature maps

DCNN Layer             Sub-Layer           Window Size   Stride   Dimension
Original Image         –                   –             –        100 × 100 × 3
Gray Image             –                   –             –        100 × 100
CNN Layer 1            Convolution Layer   3 × 3 × 6     1        100 × 100 × 6
                       ReLU Layer          –             –        100 × 100 × 6
                       Max Pooling Layer   2 × 2         2        50 × 50 × 6
CNN Layer 2            Convolution Layer   3 × 3 × 36    1        50 × 50 × 36
                       ReLU Layer          –             –        50 × 50 × 36
                       Max Pooling Layer   2 × 2         2        25 × 25 × 36
CNN Layer 3            Convolution Layer   3 × 3 × 216   1        25 × 25 × 216
                       ReLU Layer          –             –        25 × 25 × 216
                       Max Pooling Layer   2 × 2         2        12 × 12 × 216
Fully Connected Layer  –                   –             –        31104 × 1
The performance of the proposed system for different numbers of CNN layers is shown in Fig. 6. It is noticed that increasing the number of convolution layers increases the capability of the network to represent the fabric texture. Thus, the three-layered CNN achieves higher defect detection accuracy than shallower networks. With only one CNN layer, the feature dimension is 15000; with two CNN layers it is 22500; and the final implementation with three convolution layers results in a feature vector of dimension 31104.
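The feature dimensions quoted above follow directly from the architecture of Table 1: convolution preserves the spatial size (zero-padding, one-pixel stride) while 2 × 2 max pooling halves it, and the number of filter maps grows 6 → 36 → 216. A quick sketch of the arithmetic (50 × 50 × 6 = 15000 after one layer, 25 × 25 × 36 = 22500 after two):

```python
# Spatial size and depth after each CNN layer for a 100 x 100 gray input.
size, depth = 100, 1
for filters in (6, 36, 216):
    depth = filters        # convolution sets the number of feature maps
    size = size // 2       # 2 x 2 max pooling halves each dimension
    print(size, depth, size * size * depth)  # 15000, 22500, 31104 features

print(size * size * depth)  # 31104, the fully connected feature length
```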

Method   Hole    Knot    Stain   Thin Bar   Thick Bar   Broken Picks   Average
DT       74.00   68.00   69.00   83.00      78.00       71.00          73.83
WT       80.00   82.00   78.00   91.00      86.00       84.00          83.50
LHA      95.67   97.00   93.33   95.00      93.67       91.33          94.33
MLNA     98.33   94.67   97.33   97.33      93.67       98.68          96.67
DCNN     98.67   99.33   99.33   98.00      96.00       98.67          98.33

Fig. 3. Accuracy (%) for different fabric defects for the in-house non-patterned database.


Method   Hole    Knot    Stain   Thin Bar   Thick Bar   Broken Picks   Average
DT       68.33   58.00   62.66   72.00      58.67       61.66          63.55
WT       74.33   72.66   71.33   73.66      64.66       73.66          71.72
LHA      90.33   83.66   84.00   85.33      72.00       84.33          83.28
MLNA     91.66   86.33   92.33   91.66      73.66       92.66          88.05
DCNN     91.66   88.67   93.00   93.33      87.33       88.33          90.39

Fig. 4. Accuracy (%) for different fabric defects for the in-house patterned database.

Method   Hole    Knot    Stain   Thin Bar   Thick Bar   Broken Picks   Average
DT       75.00   68.33   69.66   83.00      78.00       71.00          74.17
WT       82.00   82.64   80.66   91.66      86.67       85.33          84.83
LHA      95.67   98.00   94.33   96.33      94.00       92.33          95.11
MLNA     98.67   95.66   98.33   97.66      94.00       98.66          97.16
DCNN     99.00   99.67   99.33   98.67      98.33       99.33          99.06

Fig. 5. Accuracy (%) for different fabric defects for the TILDA database.

4. CONCLUSIONS
This paper presents a simple and effective fabric defect detection method based on a Deep Convolutional Neural Network. The performance of the proposed algorithm is evaluated for the TILDA non-patterned


[Bar chart comparing % accuracy of CNN 1, CNN 2 and CNN 3 on the in-house non-patterned, in-house patterned and TILDA non-patterned databases.]

Fig. 6. Accuracy for the different layers of CNN.

and in-house patterned as well as non-patterned fabric defects. The performance of the proposed method is compared with various methods, and it is found that the proposed method gives a significant improvement over traditional methods. The proposed DCNN can describe the local texture, connectivity and homogeneity of a local fabric patch. The three-layered DCNN with an SVM classifier gives 98.33 and 90.39% accuracy for non-patterned and patterned fabric defect detection, respectively, and 99.06% accuracy for the TILDA database. The performance of the supervised DCNN depends upon the total number of training samples and the patch size of the fabric. The future scope of the proposed work includes improving performance for patterned fabric defect detection using unsupervised deep learning architectures such as generative adversarial networks and variational auto-encoders.

REFERENCES
1. Textile, Merriam-Webster.com Dictionary, Merriam-Webster. https://www.merriam-webster.com/dictionary/textile. Accessed October 19, 2020.
2. Fabric, Merriam-Webster.com Dictionary, Merriam-Webster. https://www.merriam-webster.com/dictionary/fabric. Accessed October 19, 2020.
3. Kumar, A., Computer-vision-based fabric defect detection: A survey, IEEE Trans. Ind. Electron., 2008, vol. 55, no. 1, pp. 348–363.
4. 23 fabric defects to look out for during fabric inspection. https://www.intouch-quality.com/blog/5-common-fabric-defects-prevent. Accessed October 26, 2020.
5. Ngan, H.Y.T., Pang, G.K.H., and Yung, N.H.C., Automated fabric defect detection—A review, Image Vision Comput., 2011, vol. 29, no. 7, pp. 442–458.
6. Modrângă, C., Brad, R., and Brad, R., Fabric defect detection using Fourier transform and Gabor filters, J. Text. Eng. Fashion Technol., 2017, vol. 3, no. 4, p. 00107.
7. Rebhi, A., Abid, S., and Fnaiech, F., Fabric defect detection using local homogeneity and morphological image processing, in 2016 Int. Image Processing, Applications and Systems (IPAS), IEEE, 2016, pp. 1–5.
8. Saleh, E.H., Fouad, M.M., Sayed, M.S., Badawy, W., and Abd El-Samie, F.E., Fully automated fabric defect detection using additive wavelet transform, Menoufia J. Electron. Eng. Res., 2020, vol. 29, no. 2, pp. 119–125.
9. Karlekar, V.V., Biradar, M.S., and Bhangale, K.B., Fabric defect detection using wavelet filter, in 2015 Int. Conf. on Computing Communication Control and Automation, IEEE, 2015, pp. 712–715.
10. Yapi, D., Allili, M.S., and Baaziz, N., Automatic fabric defect detection using learning-based local textural distributions in the contourlet domain, IEEE Trans. Autom. Sci. Eng., 2017, vol. 15, no. 3, pp. 1014–1026.
11. Kure, N., Biradar, M.S., and Bhangale, K.B., Local neighborhood analysis for fabric defect detection, in 2017 Int. Conf. on Information, Communication, Instrumentation and Control (ICICIC), IEEE, 2017, pp. 1–5.
12. Ngan, H.Y.T. and Pang, G.K.H., Novel method for patterned fabric inspection using Bollinger bands, Opt. Eng., 2006, no. 8, p. 087202.
13. Ngan, H.Y.T. and Pang, G.K.H., Regularity analysis for patterned texture inspection, IEEE Trans. Autom. Sci. Eng., 2008, vol. 6, no. 1, pp. 131–144.
14. Biradar, M.S., Sheeparmatti, B.G., Patil, P.M., and Naik, S.G., Patterned fabric defect detection using regular band and distance matching function, in 2017 Int. Conf. on Computing, Communication, Control and Automation (ICCUBEA), IEEE, 2017, pp. 1–6.
15. Jing, J.-F., Chen, S., and Li, P.-F., Fabric defect detection based on golden image subtraction, Color. Technol., 2017, vol. 133, no. 1, pp. 26–39.
16. Naik, S.G., Biradar, M.S., and Bhangale, K.B., Patterned fabric defect detection using wavelet golden image subtraction method, Int. J. Adv. Res., Ideas Innovations Technol., 2017, vol. 3, no. 3, pp. 767–771.
17. Ng, M.K., Ngan, H.Y.T., Yuan, X., and Zhang, W., Patterned fabric inspection and visualization by the method of image decomposition, IEEE Trans. Autom. Sci. Eng., 2014, vol. 11, no. 3, pp. 943–947.
18. Jing, J.-F., Ma, H., and Zhang, H.-H., Automatic fabric defect detection using a deep convolutional neural network, Color. Technol., 2019, vol. 135, no. 3, pp. 213–223.
19. Albawi, S., Mohammed, T.A., and Al-Zawi, S., Understanding of a convolutional neural network, in 2017 Int. Conf. on Engineering and Technology (ICET), IEEE, 2017, pp. 1–6.
20. Guo, T., Dong, J., Li, H., and Gao, Y., Simple convolutional neural network on image classification, in 2017 IEEE 2nd Int. Conf. on Big Data Analysis (ICBDA), IEEE, 2017, pp. 721–724.
21. Bhangale, K.B. and Shekokar, R.U., Human body detection in static images using HOG and piecewise linear SVM, Int. J. Innovative Res. Devel., 2014, vol. 3, no. 6.
22. Bhangale, K.B. and Mohanaprasad, K., Content based image retrieval using collaborative color, texture and shape features, Int. J. Innovative Technol. Exploring Eng. (IJITEE), 2020, vol. 9, no. 3.
23. TILDA: Textile Texture Database, 1996. https://lmb.informatik.uni-freiburg.de/resources/datasets/tilda.en.html.
