
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 12, No. 3, September 2023, pp. 1378~1385


ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i3.pp1378-1385

Fabric defect classification using transfer learning and deep learning

Aafaf Beljadid1, Adil Tannouche2, Abdessamad Balouki1


1 Engineering and Applied Physics Team "EAPT", High School of Technology, University of Sultan Moulay Slimane, Beni Mellal, Morocco
2 Laboratory of Engineering and Applied Technology "LITA", High School of Technology, University of Sultan Moulay Slimane, Beni Mellal, Morocco

Article Info

Article history:
Received Dec 14, 2021
Revised May 05, 2022
Accepted May 26, 2022

Keywords:
Deep learning
Defect detection
Fabric quality
GoogLeNet
Transfer learning

ABSTRACT

The inspection of fabrics is one of the most important phases of production for achieving high quality standards in the textile industry. Developing efficient automatic inspection mechanisms has therefore been a major area of research. In this paper, the well-known GoogLeNet architecture was fine-tuned in two configurations for texture defect classification and trained on the textile texture database (TILDA). Both configurations achieved a significant overall accuracy of 97% for motif-based and non-motif-based images and 89% for mixed images. The second model, which updates the last six layers, proved more successful than the first, which updates only the last two layers.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Aafaf Beljadid
Engineering and Applied Physics Team "EAPT", University of Sultan Moulay Slimane
Avenue Mohamed V, BP 591, Beni Mellal, Morocco
Email: [email protected]

1. INTRODUCTION
Fabric defect detection is a key process in quality control which identifies and visualizes the appearance of fabric defects [1]–[3]. One of the most direct uses of artificial intelligence in industry is machine learning for automatic fabric defect detection, the successful deployment of which could lead to improved fabric quality and lower labor costs. In engineering settings, real-time systems are commonly used for both measuring and quality control tasks. In manufacturing engineering, such systems are generally employed for automatic and cycle-based inspection of components and for supervision of the production flow.
Human beings have a complex vision system that enables them to distinguish defects at both large and small scales. The objective of the present study is to provide an automatic detection system that distinguishes between defective and non-defective zones in fabric images. It is implemented through machine vision and is expected to provide better detection of defects with a low error rate. The aim of this work is the automatic detection of defects, not the classification of faults. Machine inspection of fabric is performed by computer processing, and inspection is exclusively image-based: images of the fabric are taken during manufacturing and processed to identify irregularities [4]. Defect detection in fabrics thus presents a significant challenge for industry and researchers: the objective is to identify diverse anomalous structures in complicated contexts.




Multiple approaches for detecting defects in various settings have been proposed [5]–[10]. For instance, Abouelela et al. [8] assumed that fabric texture is composed of a basic structure and considered any area with a modified structure to be a defect. Exploiting the significant differences between the frequency spectra of defective and non-defective textures, Chan et al. [9] used a Fourier transform method to distinguish between these areas.
This spectral method, however, is not appropriate for complicated textures. Deep learning has contributed immensely to the resolution of many computer vision challenges in recent years, and several approaches [11]–[17] have applied deep neural networks to detect fabric defects. Zhao et al. [16] built a convolutional neural network (CNN) based on an integrated visual short- and long-term memory to discriminate between fabric fault images. The authors observed that deep learning methods conceived for various kinds of image classification tasks can be ideally fitted to the fabric defect classification challenge, which also indicates the need for an adequately designed architecture. Li et al. [18] tested a Fisher criterion-based stacked denoising auto-encoder (FCSDA) using equal-size fabric image patches to categorize test patches as defective or non-defective, with the residual between rebuilt images and defective patches giving the location of the defect; it showed good performance on regular and complex woven, knitted, and jacquard patterns. Although such approaches have shown significant success in certain applications, most of them are restricted to simple textures and unable to resolve complicated real-world problems of fabric inspection.
In order to increase the efficiency of real-world fabric defect detection, several issues must be addressed. First, labeling the various faults in real production takes time and labor; due to the wide variety of fabric defects and fabric types, it is hard to collect a meaningful and detailed dataset covering all possible fabric textures. As a result, pre-trained architectures generally do not function properly on fabrics with unseen textures. In addition, as the production process and materials vary for each kind of fabric, there are large variations in the appearance and features of each defect, which also makes the detection of fabric defects challenging.
In this paper, we evaluate the ability of the GoogLeNet CNN to identify fabric faults in the textile texture database (TILDA), covering seven different classes of fabric defects, with the goal of developing a reliable system able to detect defects in real time. The paper is structured as follows: first, a summary of the literature on fabric defect detection approaches is presented, followed by the proposed method for automatic fabric defect detection, comprising the preparation of the dataset and the description of the proposed network model. Finally, findings and discussion are followed by the conclusion.

2. METHOD
We start from the CNN "GoogLeNet" pretrained on the ImageNet database. The values learned from ImageNet are transferred to the network and adjusted to detect the presence of defects in fabric images. Both simple and complex texture fabrics are used to fine-tune the network and achieve a reliable system able to detect defects in real time.

2.1. GoogLeNet architecture


The architecture of GoogLeNet [19], proposed by Szegedy et al. in 2015, differs from other classical CNNs. It comprises 22 layers, in which the number of units per layer is increased by using parallel filters of sizes 1×1, 3×3, and 5×5, known as the inception module [20]. Figure 1 shows the 22 layers of GoogLeNet.
This model is designed to be accurate and computationally cheap, suitable for mobile and embedded systems. To keep the architecture computationally efficient, the dimension-reduced inception module is used instead of the naive version. Rectified linear units (ReLU) are used as activation functions for all convolutional layers of this architecture. Figure 2 shows the inception module.
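To make the parallel-branch idea concrete, the following is a minimal sketch of a dimension-reduced inception block, written in PyTorch purely as an illustration (the paper's own implementation uses Caffe); the class name and channel arguments are hypothetical, with the example sizes taken from the "inception (3a)" stage of the GoogLeNet paper.

```python
import torch
import torch.nn as nn

class InceptionSketch(nn.Module):
    """Illustrative dimension-reduced inception block (cf. Figure 2):
    1x1 convolutions shrink the channel count before the costlier
    3x3 and 5x5 branches; the four branch outputs are concatenated."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        relu = nn.ReLU(inplace=True)
        self.b1 = nn.Sequential(nn.Conv2d(c_in, c1, 1), relu)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), relu,
                                nn.Conv2d(c3r, c3, 3, padding=1), relu)
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), relu,
                                nn.Conv2d(c5r, c5, 5, padding=2), relu)
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1), relu)

    def forward(self, x):
        # Channel-wise concatenation of the four parallel branches.
        return torch.cat([self.b1(x), self.b3(x),
                          self.b5(x), self.bp(x)], dim=1)

# e.g. the "inception (3a)" shapes: 192 -> 64 + 128 + 32 + 32 = 256 channels
block = InceptionSketch(192, 64, 96, 128, 16, 32, 32)
```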
One can choose to fine-tune all the layers of the architecture, or to keep the first layers frozen (to limit over-fitting) and refine only part of the top of the architecture. The rationale is that the early layers of a network contain the most generic features (e.g., edge or color detectors), which are expected to be relevant to a multiplicity of tasks, while the later layers contain the features most specific to the classes in the fabric dataset. The GoogLeNet network was first trained on the ImageNet dataset, which consists of around one million images and one thousand classes, whereas our labeled fabric dataset contains just 24,000 fabric images and two categories. The fabric dataset is therefore not large enough to train GoogLeNet from scratch, so we reuse the values learned by the ImageNet-trained GoogLeNet network, keeping frozen the early pretrained layers that hold the most general-purpose values independent of the data, and fine-tuning the remaining layers. The existing classification layer "loss3/classifier" produces predictions for 1,000 classes; a new binary classification layer is applied instead.
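As a concrete illustration of this head replacement, the minimal sketch below performs the equivalent operation in PyTorch with torchvision's GoogLeNet. This is an assumed translation, not the authors' code: their actual implementation is in Caffe, as described in the following paragraphs.

```python
# Illustrative sketch only: the authors used Caffe; this shows the same
# idea -- reuse ImageNet weights, swap the 1,000-way classifier for a
# binary defective / non-defective head -- via torchvision's GoogLeNet.
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

# "loss3/classifier" equivalent: the final fully-connected layer.
model.fc = nn.Linear(model.fc.in_features, 2)

# GoogLeNet's two auxiliary heads ("loss1/...", "loss2/...") also end in
# 1,000-way layers; mirror the binary replacement there as well.
for aux in (model.aux1, model.aux2):
    if aux is not None:
        aux.fc2 = nn.Linear(aux.fc2.in_features, 2)
```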

Figure 1. The architecture of GoogLeNet [19]

Figure 2. The inception module [19]

We tuned the GoogLeNet architecture to fit our task by changing the last fully-connected layer (designed for 1,000 categories) into a binary fully-connected layer. The initial filter values learned from ImageNet are then updated by back-propagation to more accurately represent the fabric conditions in the dataset. The following basic modifications were made:




a) The names of the three output layers were modified to avoid conflicts when reading the original weights from the pre-trained model. Thus:
"loss1/classifier" became "loss1/classifier_defect";
"loss2/classifier" became "loss2/classifier_defect";
"loss3/classifier" became "loss3/classifier_defect".
b) The output layer count was reduced from 1,000 to two to account for the two categories: defective and non-defective.
c) The base learning rate base_lr was set to 0.01 and the learning rate policy to polynomial.
d) max_iter, the maximal number of iterations, was set to 10,000.
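For illustration, the polynomial policy named in (c) corresponds to Caffe's rule lr = base_lr × (1 − iter/max_iter)^power. The sketch below reproduces settings (c) and (d) in PyTorch under stated assumptions: the power value is not given in the paper (1.0 is assumed here), and `model` continues the earlier GoogLeNet sketch.

```python
# Sketch of solver settings (c) and (d) under the Caffe "poly" rule:
#   lr(iter) = base_lr * (1 - iter / max_iter) ** power
# The power value is an assumption; the paper does not state it.
import torch

base_lr, max_iter, power = 0.01, 10_000, 1.0

optimizer = torch.optim.SGD(model.parameters(), lr=base_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1.0 - it / max_iter) ** power)

# The training loop calls scheduler.step() once per iteration, so the
# rate decays smoothly from 0.01 toward 0 over the 10,000 iterations.
```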
In this study, the Caffe framework [21] is used as the CNN library. A pre-trained version of the GoogLeNet CNN is available for unrestricted use at [22]. GoogLeNet is also available in the DIGITS training system version 5.0 (Nvidia Corporation, Santa Clara, CA) [23].

2.2. Data sets preparation


Experiments were conducted on the popular TILDA database of fabric patterns, created in the context of the workshop "Texture Analysis" of the Deutsche Forschungsgemeinschaft major research project "Automatic Visual Inspection of Technical Objects", in which methods for recognizing and distinguishing textures of different kinds were investigated and evaluated [24]. The database consists of eight representative textile types, seven error classes, and an error-free class. Thus, there are eight classes for each type of textile, organized into four main groups (C1-C4), each composed of two different subgroups, as shown in Figures 3 and 4. Each fabric type therefore has its own directory, split into eight sub-directories containing 50 texture images each: the first subfolder, labeled "e0", holds non-defective images, while the remaining subfolders ("e1" to "e7") hold defective images. Figure 5 shows some common fabric types with and without defects: (a) plain fabric without defects, (b) plain fabric with defects, (c) plain weave fabric without defects, (d) plain weave fabric with defects, (e) twill fabric without defects, and (f) twill fabric with defects.

Figure 3. The TILDA database's four classes {C1, C2, C3, and C4}

Figure 4. Examples of defective fabric images from the TILDA database



Figure 5. Examples of fabric images from the TILDA database: (a) plain fabric without defects, (b) plain fabric with defects, (c) plain weave fabric without defects, (d) plain weave fabric with defects, (e) twill fabric without defects, and (f) twill fabric with defects

Fifty different images for each of the selected classes (768×512 pixels, 8-bit grayscale) were obtained by shifting and rotating the fabric. The whole textile texture database thus comprises 3,200 images with a total size of 1.2 GB. The images were resized from 768×512 to 224×224 and randomly split into 90% for training, 5% for validation, and 5% for testing. The training set was composed of 60% non-defective fabric images (negative class '0') and 40% defective fabric images (positive class '1').
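A minimal sketch of this labeling, resizing, and split, assuming the directory layout described above (one folder per fabric class with subfolders e0 to e7); the function name and the glob pattern are illustrative assumptions, not part of the paper.

```python
# Sketch: map TILDA's folder layout to binary labels, resize to 224x224,
# and split 90/5/5.  Layout assumed: <root>/<class_dir>/e0..e7/<images>.
import random
from pathlib import Path
from PIL import Image

def load_tilda(root, seed=0):
    samples = []
    for path in sorted(Path(root).glob("*/e*/*")):
        label = 0 if path.parent.name == "e0" else 1  # e0 = non-defective
        img = Image.open(path).convert("L").resize((224, 224))
        samples.append((img, label))
    random.Random(seed).shuffle(samples)
    n = len(samples)
    train = samples[:int(0.90 * n)]
    val = samples[int(0.90 * n):int(0.95 * n)]
    test = samples[int(0.95 * n):]
    return train, val, test
```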
The current study used a total of 3,200 images from the TILDA database. We enlarged the dataset by applying data augmentation: flipping the images and rotating them by 90° and 270°. The training images were then further varied by adapting the sharpness, luminosity, and contrast of the images with the IrfanView image editing software [25]. This is a common method for training on small datasets more effectively. As a result, the training data grew from 3,200 to 24,000 patches.
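As a sketch of comparable augmentations in code (the authors worked with IrfanView interactively; Pillow is used here instead, and the enhancement factors are illustrative assumptions, not values from the paper):

```python
# Sketch: the geometric and photometric augmentations described above,
# reproduced with Pillow.  Enhancement factors are assumed values.
from PIL import ImageEnhance, ImageOps

def augment(img):
    return [
        img.rotate(90, expand=True),               # rotation by 90 degrees
        img.rotate(270, expand=True),              # rotation by 270 degrees
        ImageOps.mirror(img),                      # horizontal flip
        ImageEnhance.Sharpness(img).enhance(1.5),  # sharpness
        ImageEnhance.Brightness(img).enhance(1.2), # luminosity
        ImageEnhance.Contrast(img).enhance(1.2),   # contrast
    ]
```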
The TILDA dataset includes four classes of varying textures: C1 and C2 contain non-motif-based fabric images, while C3 and C4 contain motif-based fabric images. Three training sets (S1, S2, and S3) were therefore constructed, with 8,000 images each: S1 for the non-motif classes, S2 for the motif-based classes, and S3 a mix of images from S1 and S2.
Two configurations were designed to refine the pre-trained GoogLeNet architecture; the first updates the parameters of the last two layers, while the second updates the parameters of the last six layers (a sketch follows). Both configurations were trained on the three training sets; as a result, six models were created and tested.
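A minimal sketch of the two configurations, again as an assumed PyTorch translation of the Caffe setup (continuing the GoogLeNet sketch above); "layer" is taken here to mean a module that directly owns parameters, in definition order.

```python
# Sketch: freeze the whole pretrained network, then re-enable gradients
# for the last n layers only (n = 2 for Conf_1, n = 6 for Conf_2).
def unfreeze_last_layers(model, n):
    for p in model.parameters():
        p.requires_grad = False            # freeze everything first
    # Modules that directly own parameters, in definition order.
    leaves = [m for m in model.modules()
              if list(m.parameters(recurse=False))]
    for m in leaves[-n:]:                  # thaw only the tail
        for p in m.parameters(recurse=False):
            p.requires_grad = True

unfreeze_last_layers(model, 2)    # configuration 1
# unfreeze_last_layers(model, 6)  # configuration 2
```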

3. RESULTS AND DISCUSSION


The overall classification accuracies of the two configurations, obtained on the three training sets for different numbers of iterations, are shown in Tables 1 to 3. Accuracy, sensitivity, and specificity are defined by the following formulas, in which true positives are denoted TP, false positives FP, true negatives TN, and false negatives FN:
$$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \tag{1}$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \tag{2}$$

$$\text{Specificity} = \frac{TN}{TN + FP} \tag{3}$$
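These three formulas transcribe directly into code; a trivial sketch from the confusion-matrix counts:

```python
# Direct transcription of (1)-(3) from confusion-matrix counts.
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)
```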

Table 1. Classification accuracy of non-motif based texture images S1 over the two configurations
Iteration 1,000 2,000 3,000 4,000 5,000 6,000 7,000 8,000 9,000 10,000
Conf_1 0.954 0.956 0.94 0.96 0.959 0.957 0.966 0.967 0.965 0.969
Conf_2 0.952 0.946 0.981 0.95 0.968 0.969 0.986 0.975 0.977 0.976




Table 2. Classification accuracy of motif based texture images S2 over the two configurations
Iteration 1,000 2,000 3,000 4,000 5,000 6,000 7,000 8,000 9,000 10,000
Conf_1 0.965 0.980 0.964 0.988 0.986 0.984 0.990 0.991 0.988 0.990
Conf_2 0.963 0.957 0.960 0.964 0.982 0.983 0.992 0.993 0.990 0.992

Table 3. Classification accuracy of mixed texture images S3 over the two configurations
Iteration 1,000 2,000 3,000 4,000 5,000 6,000 7,000 8,000 9,000 10,000
Conf_1 0.868 0.852 0.840 0.860 0.859 0.857 0.866 0.867 0.865 0.887
Conf_2 0.874 0.884 0.864 0.880 0.886 0.884 0.868 0.891 0.890 0.897

We recorded a maximum accuracy of 97% for non-motif-based texture images and 99% for motif-based texture images at 8,000 iterations. However, the accuracy recorded on mixed images was of the order of 86% at 7,000 iterations, which is relatively low. Figure 6 illustrates in detail the accuracy, sensitivity, and specificity obtained as a function of the iteration number.
In general, the six trained models scored well in terms of accuracy, as shown in Tables 1 to 3. It is also noticeable that in most cases, and especially at high numbers of iterations, the second configuration showed better accuracy than the first: 97% for the S1 group, 99% for the S2 group, and 90% for the S3 group. This second configuration updates the parameters of the last six layers of the pre-trained GoogLeNet.

Figure 6. Accuracy of mixed images for the second fine-tuning configuration

Convolutional networks provide robust, error-tolerant, parallel-computing and self-learning abilities for dealing with complex environmental data. Since the beginning of the 21st century, with the fast progress of big data and AI, the use of convolutional networks to detect, segment [6], [26], recognize [17], analyze, and process data has been very successful, especially in applications with a high number of labeled images, for example surface finish [27], [28], industrial images [29], health images [30]–[34], and weed detection [35], [36]. In our previous work [11], three well-known pre-trained CNN models were compared for detecting defects in fabric texture, and all three achieved high accuracies above 96%.


4. CONCLUSION
In this paper, the well-known network "GoogLeNet" was refined to detect the presence of defects in motif and non-motif texture images. The aim of this investigation was the development of an automatic fabric inspection system able to detect fabric defects in both motif-based and non-motif-based fabric images. Experimental results show an accuracy of 97% using a specific classifier for each set and 89% for a common classifier. In upcoming research, we intend to classify the fabric faults into multiple classes, which will help to identify the source of defects and prevent them in the future. The study will also focus on different techniques, such as layer locking, dropout, and Top-N output estimation, and on incorporating perspective information to increase precision, benchmarking the approach against various architectures including the visual geometry group network (VGGNet), residual network (ResNet), and MobileNet.

REFERENCES
[1] H. Y. T. Ngan, G. K. H. Pang, and N. H. C. Yung, “Automated fabric defect detection-a review,” Image and Vision Computing,
vol. 29, no. 7, pp. 442–458, 2011, doi: 10.1016/j.imavis.2011.02.002.
[2] K. Hanbay, M. F. Talu, and Ö. F. Özgüven, “Fabric defect detection systems and methods—a systematic literature review,” Optik,
vol. 127, no. 24, pp. 11960–11973, 2016, doi: 10.1016/j.ijleo.2016.09.110.
[3] A. Goyal, “Automation in fabric inspection,” Automation in Garment Manufacturing, pp. 75–107, 2018,
doi: 10.1016/B978-0-08-101211-6.00004-5.
[4] J. L. Jiang and W. K. Wong, “Fundamentals of common computer vision techniques for textile quality control,” Applications of
Computer Vision in Fashion and Textiles, pp. 3–15, 2018, doi: 10.1016/B978-0-08-101217-8.00001-4.
[5] P. Banerjee, A. K. Bhunia, A. Bhattacharyya, P. P. Roy, and S. Murala, “Local neighborhood intensity pattern–a new texture
feature descriptor for image retrieval,” Expert Systems with Applications, vol. 113, pp. 100–115, 2018,
doi: 10.1016/j.eswa.2018.06.044.
[6] L. Jia, C. Chen, J. Liang, and Z. Hou, “Fabric defect inspection based on lattice segmentation and Gabor filtering,”
Neurocomputing, vol. 238, pp. 84–102, 2017, doi: 10.1016/j.neucom.2017.01.039.
[7] J. Jing, P. Yang, and P. Li, “Defect detection on patterned fabrics using distance matching function and regular band,” Journal of
Engineered Fibers and Fabrics, vol. 10, no. 2, pp. 85–97, 2015, doi: 10.1177/155892501501000210.
[8] A. Abouelela, H. M. Abbas, H. Eldeeb, A. A. Wahdan, and S. M. Nassar, “Automated vision system for localizing structural
defects in textile fabrics,” Pattern Recognition Letters, vol. 26, no. 10, pp. 1435–1443, 2005, doi: 10.1016/j.patrec.2004.11.016.
[9] C. H. Chan, “Fabric defect detection by Fourier analysis,” IEEE Transactions on Industry Applications, vol. 36, no. 5, pp. 1267–1276,
2000, doi: 10.1109/28.871274.
[10] J. F. Jing, S. Chen, and P. F. Li, “Fabric defect detection based on golden image subtraction,” Coloration Technology, vol. 133,
no. 1, pp. 26–39, 2017, doi: 10.1111/cote.12239.
[11] A. Beljadid, A. Tannouche, and A. Balouki, “Application of deep learning for the detection of default in fabric texture,” in 6th
International Conference on Optimization and Applications, ICOA 2020 - Proceedings, 2020, pp. 1–5,
doi: 10.1109/ICOA49421.2020.9094515.
[12] J. Jing, A. Dong, and P. Li, "Yarn-dyed fabric defect classification based on convolutional neural network," Optical Engineering, vol. 56, no. 9, p. 093104, 2017, doi: 10.1117/12.2281978.
[13] J. F. Jing, H. Ma, and H. H. Zhang, “Automatic fabric defect detection using a deep convolutional neural network,” Coloration
Technology, vol. 135, no. 3, pp. 213–223, 2019, doi: 10.1111/cote.12394.
[14] P. Banumathi and G. M. Nasira, “Fabric inspection system using artificial neural networks,” International Journal of Computer
Engineering Science, vol. 2, no. 5, pp. 20–27, 2012.
[15] M. Guan, Z. Zhong, Y. Rui, H. Zheng, and X. Wu, “Defect detection and classification for plain woven fabric based on deep
learning,” in Proceedings - 2019 7th International Conference on Advanced Cloud and Big Data, CBD 2019, 2019, pp. 297–302,
doi: 10.1109/CBD.2019.00060.
[16] Y. Zhao, K. Hao, H. He, X. Tang, and B. Wei, “A visual long-short-term memory based integrated CNN model for fabric defect
image classification,” Neurocomputing, vol. 380, pp. 259–270, 2020, doi: 10.1016/j.neucom.2019.10.067.
[17] Z. Liu, C. Zhang, C. Li, S. Ding, Y. Dong, and Y. Huang, “Fabric defect recognition using optimized neural networks,” Journal
of Engineered Fibers and Fabrics, vol. 14, p. 1558925019897396, 2019, doi: 10.1177/1558925019897396.
[18] Y. Li, W. Zhao, and J. Pan, “Deformable patterned fabric defect detection with fisher criterion-based deep learning,” IEEE
Transactions on Automation Science and Engineering, vol. 14, no. 2, pp. 1256–1264, 2017, doi: 10.1109/TASE.2016.2520955.
[19] C. Szegedy et al., “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), Jun. 2015, pp. 1–9, doi: 10.1109/CVPR.2015.7298594.
[20] M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, pp. 1–10, 2013,
doi: 10.48550/arXiv.1312.4400.
[21] Y. Jia et al., “Caffe: convolutional architecture for fast feature embedding,” in 2014 ACM Conference on Multimedia, 2014,
pp. 675–678, doi: 10.1145/2647868.2654889.
[22] “BVLC/caffe,” GitHub. https://github.com/BVLC/caffe (accessed May 06, 2021).
[23] “NVIDIA DIGITS | NVIDIA Developer,” NVIDIA Corporation, 2019. https://developer.nvidia.com/digits (accessed Sep. 23, 2020).
[24] "TILDA textile texture database," Computer Vision Group, University of Freiburg, 2017. https://lmb.informatik.uni-freiburg.de/resources/datasets/tilda.en.html (accessed May 04, 2021).
[25] I. Skiljan, “IrfanView - Official Homepage - One of the Most Popular Viewers Worldwide,” IrfanView, 2020.
https://www.irfanview.com/ (accessed May 20, 2021).
[26] A. Serdaroglu, A. Ertuzun, and A. Ercil, “Defect detection in textile fabric images using wavelet transforms and independent
component analysis,” Pattern Recognition and Image Analysis, vol. 16, no. 1, pp. 61–64, 2006,
doi: 10.1134/S1054661806010196.
[27] R. Manish, A. Venkatesh, and S. Denis Ashok, "Machine vision based image processing techniques for surface finish and defect inspection in a grinding process," Materials Today: Proceedings, vol. 5, no. 5, pp. 12792–12802, 2018, doi: 10.1016/j.matpr.2018.02.263.
[28] F. Bre, J. M. Gimenez, and V. D. Fachinotti, “Prediction of wind pressure coefficients on building surfaces using artificial neural
networks,” Energy and Buildings, vol. 158, pp. 1429–1441, 2018, doi: 10.1016/j.enbuild.2017.11.045.
[29] T. Czimmermann et al., “Visual-based defect detection and classification approaches for industrial applications—a survey,”
Sensors (Switzerland), vol. 20, no. 5, p. 1459, 2020, doi: 10.3390/s20051459.
[30] R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, “Deep learning for healthcare: review, opportunities and challenges,”
Briefings in Bioinformatics, vol. 19, no. 6, pp. 1236–1246, 2017, doi: 10.1093/bib/bbx044.
[31] E. Rewar, B. P. Singh, M. K. Chhipa, O. P. Sharma, and M. Kumari, “Detection of infected and healthy part of leaf using image
processing techniques,” Journal of Advanced Research in Dynamical and Control Systems, vol. 9, no. 1, pp. 13–19, 2017.
[32] Y. Ariji et al., “Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a
deep learning object detection technique,” Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, vol. 128, no. 4,
pp. 424–430, 2019, doi: 10.1016/j.oooo.2019.05.014.
[33] M. Fukuda et al., “Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third
molar and the mandibular canal on panoramic radiographs,” Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology,
vol. 130, no. 3, pp. 336–343, 2020, doi: 10.1016/j.oooo.2020.04.005.
[34] C. Kuwada et al., “Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the
maxillary incisor region on panoramic radiographs,” Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, vol. 130,
no. 4, pp. 464–469, 2020, doi: 10.1016/j.oooo.2020.04.813.
[35] B. Espejo-Garcia, N. Mylonas, L. Athanasakos, S. Fountas, and I. Vasilakoglou, “Towards weeds identification assistance
through transfer learning,” Computers and Electronics in Agriculture, vol. 171, p. 105306, 2020,
doi: 10.1016/j.compag.2020.105306.
[36] A. Tannouche, A. Gaga, M. Boutalline, and S. Belhouideg, “Weeds detection efficiency through different convolutional neural
networks technology,” International Journal of Electrical and Computer Engineering, vol. 12, no. 1, pp. 1048–1055, 2022,
doi: 10.11591/ijece.v12i1.pp1048-1055.

BIOGRAPHIES OF AUTHORS

Aafaf Beljadid received an engineer's degree in Industrial Engineering from the High School of Textile and Clothing Industries ESITH in 2015. Currently, she is a PhD student at the Faculty of Sciences and Technology, University of Sultan Moulay Slimane, Beni Mellal, Morocco. Her research areas include machine learning, artificial intelligence, and their application in textiles. She can be contacted at email: [email protected].

Adil Tannouche is a professor at the Higher School of Technology, Béni Mellal, Sultan Moulay Slimane University, Morocco. He received his PhD in electronics and embedded systems from Moulay Ismail University, Meknès, Morocco. He is a member of the Laboratory of Engineering and Applied Technologies, and his research areas include machine vision, artificial intelligence, and applications in the field of precision agriculture and agro-industry. He can be contacted at email: [email protected].

Abdessamad Balouki is a professor in the department of mechatronics at the High School of Technology Beni Mellal, University of Sultan Moulay Slimane, Beni Mellal, Morocco. His research areas include information and communication technology, computer networking, mechatronics, electronics and communication engineering, and chemometrics. He can be contacted at email: [email protected].

