A Quality Control Application on a Smart Factory Prototype Using Deep Learning Methods

Ridvan Ozdemir
Industry Based Vocational Training and Research Center
Bilecik Seyh Edebali University
Bilecik, Turkey
ridvan.ozdemir[at]bilecik.edu.tr

Mehmet Koc
Department of Electrical and Electronics Engineering
Bilecik Seyh Edebali University
Bilecik, Turkey
mehmet.koc[at]bilecik.edu.tr

Abstract—The number of smart factories is increasing day after day to reach the vision of Industry 4.0. Computer vision and image processing have important roles in systems whose aim is unmanned production. In industrial automation applications, computer vision is mostly used at the quality control stage. In this stage, there are many applications that use image-processing methods for object detection and classification, but deep learning-based applications are rarely seen. In this work, a visual quality control automation application is proposed by using a camera placed over the assembly line in a smart factory model. The product is detected in an image obtained from the assembly line and then classified as "okay" or "not okay" using deep learning methods. After the deep learning-based quality control, the "okay" products continue their production stages and the "not okay" products are separated from the production line by a PLC, which controls the line. This application shows that deep learning methods will have an important role in automation applications in the transition to Industry 4.0.

Keywords – deep learning, industry 4.0, smart factory, object detection, object recognition.

I. INTRODUCTION

In recent years, the number of deep learning applications has significantly increased in parallel with the developments in computer technology. The capability and accuracy of computer vision applications have made great progress with deep learning methods. Behrendt et al. [1] developed a deep learning-based method for object detection and recognition in real time. Their study, based on the You Only Look Once (YOLO) method, used two different neural networks for object detection and recognition. Gordon et al. [2] developed a method based on Recurrent Neural Networks (RNN) for object recognition. Their study shows that RNN methods can achieve high success rates for object recognition. Image processing, object detection, and recognition tasks are an important issue in the process of transition to unmanned production in intelligent systems developed under the vision of Industry 4.0. There are many industrial automation applications using image processing methods in the literature. For example, Karadöl and Aybek [3] processed images obtained from a cornfield in the MATLAB environment to determine the weeds. The obtained data were transferred from MATLAB to the PLC with an OPC server. Kervancıoğlu et al. [4] performed the automation of a PLC-controlled electro-pneumatic sorting robot for quality control of the produced parts using image-processing methods.

Tekinalp et al. [5] made an application for the separation of olives according to color using MATLAB and PLC-controlled systems in real time with image processing methods. Cuşkun et al. [6] designed a 4-axis multi-functional robot mechanism to find the position and the angles of plates with different features using image processing methods. In addition, computer vision techniques are used in many applications in the agriculture and food industry [7]. In their study, Kazemi and Kharrati [8] were able to classify objects on a moving production line and separate them with a PLC-controlled robot.

Even though deep learning methods are not commonly encountered in industrial automation applications, as the process of adaptation to Industry 4.0 progresses, the use of artificial intelligence, and consequently of deep learning-based applications, will gradually increase. In some studies, deep neural networks such as ResNet-152 have achieved classification accuracy better than human-level recognition accuracy [9-12]. In this study, the aim is to perform visual quality control of the final product, obtained at the end of the assembly line in the smart factory model, with a deep learning method. For this purpose, images of 3 different products produced at the end of the assembly line are taken with a webcam. The image is transferred to MATLAB. First, an object detection algorithm using a machine learning technique (neural network) is applied to the image to find the object; then an object recognition algorithm using deep learning methods is applied to the detected region to classify the final product as "OK" or "NOT OK". The result of the classification is sent to the PLC, which controls the assembly line, via the TCP/IP protocol. The PLC adjusts the position of the switching points on the band according to the data obtained from the PC. It allows the "OK" products to continue on the line, while it separates the products classified as "NOT OK" onto the discharge line. Also, to monitor, control, and save some of the data of all these processes, a human machine interface (HMI) panel program is written in TIA Portal.

The rest of the paper is organized as follows. We define the smart factory model and its components in Section II. The object detection and recognition parts of our model, the dataset, and the experiments and their results are explained in Section III. Finally, the conclusion is given in Section IV.


Fig. 1. Smart factory training prototype.

II. SMART FACTORY TRAINING PROTOTYPE

Festo's "Pick and Place", "Muscle Press" and "Separating" modules are used for the intelligent factory model in this study. Figure 1 shows the smart factory training prototype assembly line, the quality control automation mechanism, and the webcam integrated into the system. On this assembly line, a wall clock is produced with three different body colors: orange, gray, and black.

The smart factory model is based on the CPU 1214C model of the Siemens S7-1200 PLC series, Festo's pneumatic and electropneumatic valves, pistons, vacuum elements, various proximity sensors, and DC motors. The PLC program is implemented in TIA Portal V13 software.

A. Choosing and Programming HMI

Siemens's "KTP700 Basic PN" model is used as the HMI panel. The HMI panel program is implemented in TIA Portal V13 software. Figure 2 shows the interface of this program. With this operator panel, the user can see the position of the product, the quality control result, and the number of "OK" and "NOT OK" products on the line in real time.

Fig. 2. Interface of HMI panel program.

B. PLC, HMI and MATLAB Communication

Profinet, the communication protocol of Siemens, is used for communication between the PLC and the HMI panel. The TCP/IP protocol is used for communication between the computer and the PLC. Suitable IP addresses are assigned so that the three systems can communicate with each other. The "TCON", "TDISCON", "TRCV" and "TSEND" blocks of TIA Portal V13 software provide the communication between the PLC and MATLAB. The "TCON" and "TDISCON" blocks establish and terminate the communication connection between the PC and the PLC, while the "TRCV" and "TSEND" blocks receive and send data over this connection. Figure 3 shows a flowchart of the complete program of the system.

Fig. 3. Flowchart of program.
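The paper does not list the MATLAB side of this link, but a minimal sketch of the PC end, using MATLAB's built-in tcpclient, could look as follows. The IP address, port, and one-byte encoding are illustrative assumptions that would have to match the PLC's TCON/TRCV configuration.

    % Minimal sketch (assumed values): send the classification result to the
    % PLC over TCP/IP. Address and port must match the PLC's TCON block.
    plc = tcpclient('192.168.0.1', 2000);

    % Encode the result as one byte: 1 = "OK", 0 = "NOT OK" (assumed encoding)
    isOk = true;                  % result of the CNN classification
    write(plc, uint8(isOk));      % received by the PLC's TRCV block

    clear plc                     % closes the connection on the PC side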

III. OBJECT DETECTION AND RECOGNITION

One of the most important points in this study is detecting the product in the image taken by the webcam over the assembly line. Another important part is classifying the detected product into one of the classes "OK" or "NOT OK" while the visual quality control automation for the intelligent factory model is performed. Unlike other studies, object detection is carried out with a machine learning algorithm, and recognition is carried out with deep learning methods.

Firstly, a dataset is created for the object detection application. A 189×2 dimensional list is created by storing the position information of the objects in the 189 images of the dataset. The positions of the objects are acquired automatically when generating the list. The artificial neural network is trained with this dataset using the "ACF Object Detector" command in MATLAB. Testing shows that the object is successfully detected in 86 out of the 100 test images. Thus, the first leg of the visual quality control task, object detection in the image, is built and tested.
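As an illustration of this step, a minimal sketch of training and applying such a detector with the Computer Vision Toolbox's trainACFObjectDetector is given below; the file names and box coordinates are placeholders, not the paper's actual data.

    % Minimal sketch (placeholder data): train an ACF object detector from a
    % table of images and [x y width height] bounding boxes.
    imageFilename = {'img001.png'; 'img002.png'};         % 189 images in the paper
    product       = {[120 80 230 230]; [118 82 228 231]}; % one box per image
    trainingData  = table(imageFilename, product);

    detector = trainACFObjectDetector(trainingData);

    % Detect the product in a test frame and crop the strongest detection
    I = imread('frame.png');
    [bboxes, scores] = detect(detector, I);
    [~, best] = max(scores);
    productRegion = imcrop(I, bboxes(best, :));  % passed to the CNN later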
In the continuation of the study, the quality of the product detected in the image from the assembly line is classified as "OK" or "NOT OK". A second dataset is generated for this classification task. It includes 356 product images in 3 different colors, covering both "OK" and "NOT OK" situations. Since the size of our dataset is not big enough to train a deep learning network properly, the transfer learning technique is used to obtain more effective results instead of creating a new Convolutional Neural Network (CNN). AlexNet, developed by Alex Krizhevsky et al. in 2012 [13], is selected as the pre-trained CNN model. AlexNet won the ImageNet competition in 2012, reducing the lowest error rate from 26.2% to 15.4%. It is a very deep neural network that contains 60 million parameters and 650 thousand neurons. It has eight layers; five of them are convolutional layers and the other three are fully-connected layers. The architecture of AlexNet is illustrated in Figure 4.

Fig. 4. The illustration of the architecture of AlexNet [13].

90% of the 356 images in our dataset are used for training, while the remaining 10% are used for testing the new network's recognition accuracy. The parameters used in the training stage are given in Table 1.

TABLE 1: TRAINING PARAMETERS

    Training parameters     Values
    Function                SGDM
    Initial Learning Rate   0.001
    Max Epochs              6
    Mini Batch Size         16
    Shuffle                 Every Epoch
    Validation Frequency    5
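This transfer learning setup can be written as a short MATLAB script. Below is a minimal sketch with the Deep Learning Toolbox using the Table 1 parameters; the dataset folder layout (one subfolder per class) is an assumption.

    % Minimal sketch: fine-tune AlexNet on the two-class quality dataset.
    % Assumes images sorted into OK / NOT_OK subfolders of 'dataset'.
    imds = imageDatastore('dataset', 'IncludeSubfolders', true, ...
                          'LabelSource', 'foldernames');
    [imdsTrain, imdsTest] = splitEachLabel(imds, 0.9, 'randomized');

    net = alexnet;                        % pre-trained AlexNet
    layers = net.Layers;
    layers(23) = fullyConnectedLayer(2);  % replace the 1000-class fc8 layer
    layers(25) = classificationLayer;     % new output layer for OK / NOT OK

    % Resize inputs to AlexNet's 227x227x3 input size
    augTrain = augmentedImageDatastore([227 227], imdsTrain);
    augTest  = augmentedImageDatastore([227 227], imdsTest);

    options = trainingOptions('sgdm', ...          % Table 1 parameters
        'InitialLearnRate',    0.001, ...
        'MaxEpochs',           6, ...
        'MiniBatchSize',       16, ...
        'Shuffle',             'every-epoch', ...
        'ValidationData',      augTest, ...
        'ValidationFrequency', 5);

    trainedNet = trainNetwork(augTrain, layers, options);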
With these training parameters, the object recognition performance reaches 97.22% accuracy. Figure 5 shows the change in the accuracy rate throughout the training. The training accuracy is shown with a blue line and the validation accuracy with black dots. The model starts learning fast in epoch 1 and keeps its performance until the end of epoch 2, but there is a decrease in epoch 3. The accuracy rate starts rising again in the middle of epoch 3, which shows that the model keeps learning. Finally, it reaches the 97.22% recognition rate at the end of epoch 6.

Fig. 5. Change in the accuracy rate throughout the training.

Figure 6 shows the change in the loss rate throughout the training. Similar observations can be made when the loss graphic is analyzed.

Fig. 6. Change in the loss rate throughout the training.

A. Dataset

The training set is obtained by taking images of several products under different lighting conditions and angles. The reason for this image variety is to form a CNN robust enough to classify an object under different environment conditions. The dataset has 356 images in 3 different colors, e.g. red, gray, and black. The distribution of the images in the training dataset is given in Table 2. Images are taken at 3024x3024 pixels in RGB format, then resized to 227x227 pixels to match the CNN's input image dimensions. Some of the images from the dataset can be seen in Figure 7 for the "NOT OK" class and in Figure 8 for the "OK" class. Sample images of "OK" and "NOT OK" products taken from the webcam during the operation of the quality control application are given in Figure 9. Firstly, the product is detected in the image by the ACF object detector algorithm; then the detected area is cropped to create a new image. Finally, this image is classified into one of the classes, "OK" or "NOT OK", by the CNN.

TABLE 2: TRAINING DATASET DISTRIBUTION

            OK    NOT OK
    Red     57        47
    Gray    61        53
    Black   60        78
    TOTAL  178       178

Fig. 7. Sample images of "NOT OK" class from the dataset with different colors.

Fig. 8. Sample images of "OK" class from the dataset with different colors.
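Putting the steps of this section together, one plausible shape of the runtime quality control cycle is sketched below; detector, trainedNet and plc refer to the objects built in the earlier sketches, and the webcam call assumes the MATLAB USB webcam support package.

    % Minimal sketch (assumed names): one quality control cycle.
    cam = webcam;                          % camera over the assembly line
    img = snapshot(cam);

    [bboxes, scores] = detect(detector, img);      % ACF object detection
    if ~isempty(bboxes)
        [~, best] = max(scores);
        crop  = imcrop(img, bboxes(best, :));      % cut out the product
        crop  = imresize(crop, [227 227]);         % match CNN input size
        label = classify(trainedNet, crop);        % "OK" or "NOT OK"
        write(plc, uint8(label == 'OK'));          % route result to the PLC
    end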


IV. CONCLUSION

Deep learning techniques have become more popular in recent years thanks to the advances in the computing power of computers. This study on the smart factory model shows that deep learning techniques can be used in many areas and can provide effective solutions in the field of quality control automation, where machine learning applications are frequently used. A CNN trained with a robust dataset is not affected by varying environment conditions in object detection and recognition applications. Thus, unlike traditional industrial object recognition applications, a CNN can work under variable light conditions. The number of automation applications that use computer vision can be increased, and the transition to Industry 4.0 can be accelerated, with the use of CNNs. Also, production efficiency can be increased by eliminating human errors.

Fig. 9. Sample images of "OK" and "NOT OK" that are taken during the quality control.

In addition, this study shows that by using a larger dataset, visual quality control automation can also be applied to more complex products. Furthermore, the quality control test can be carried out without stopping the product while it is moving on the assembly line, using real-time object detection and recognition. This will speed up the cycle time of the process and increase production efficiency.

ACKNOWLEDGMENT

The authors thank Bilecik Seyh Edebali University, Industry Based Vocational Training and Research Center (EDMEM), for letting them use Festo's "Pick and Place", "Muscle Press" and "Separating" training prototype modules in this study.

REFERENCES

[1] K. Behrendt, L. Novak, R. Botros, "A deep learning approach to traffic lights: Detection, tracking, and classification", In: Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1370–1377, 2017.
[2] D. Gordon, A. Farhadi, D. Fox, "Re3: Real-Time Recurrent Regression Networks for Object Tracking", arXiv preprint arXiv:1705.06368, 2017.
[3] H. Karadöl, A. Aybek, "Mısır Arazisinde Yabancı Otların Belirlenmesine Yönelik Matlab ve PLC Arası OPC Haberleşme Kullanılarak Geliştirilen Bir Kontrol Sistemi" [A Control System Developed by Using OPC Communication Between Matlab and PLC for Determination of Weeds in a Corn Field], Tekirdağ Ziraat Fakültesi Dergisi, 14 (02), pp. 129–137, 2017.
[4] E. Kervancıoğlu, A. Adıyan, L. Çetin, E. Uyar, "Görüntü İşlemeye Dayalı Elektro-Pnömatik Parça Tasnif Robotu" [Electro-Pneumatic Part Sorting Robot Based on Image Processing], V. Ulusal Hidrolik Pnömatik Kongresi, İzmir, pp. 397–404, 23–26 October 2008.
[5] Z. Tekinalp, S. Öztürk, M. Kuncan, "OPC Kullanılarak Gerçek Zamanlı Haberleşen Matlab ve PLC Kontrollü Sistem" [Matlab and PLC Controlled Systems Communicating in Real Time Using OPC], Otomatik Kontrol Ulusal Toplantısı, Malatya, pp. 465–470, 26–28 September 2013.
[6] Y. Cuşkun, F. Duman, H. Basık, F. Gün, K. Kaplan, H. M. Ertunç, "Görüntü İşleme Tabanlı 4 Eksenli Çok Amaçlı Robot Mekanizması" [Image Processing Based Multi-Purpose 4-Axis Robot Mechanism], Elektrik-Elektronik ve Biyomedikal Mühendisliği Konferansı, Bursa, pp. 247–251, 1–3 December 2016.
[7] J. F. S. Gomes, F. R. Leta, "Applications of computer vision techniques in the agriculture and food industry: a review", Eur Food Res Technol, 235, pp. 989–1000, Springer-Verlag Berlin Heidelberg, 2012.
[8] S. Kazemi, H. Kharrati, "Visual Processing and Classification of Items on a Moving Conveyor with Pick and Place Robot using PLC", Intell Ind Syst, 3, pp. 15–21, Springer Science+Business Media Singapore, 2017.
[9] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition", arXiv preprint arXiv:1409.1556, 2014.
[10] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, "Going deeper with convolutions", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015, pp. 1–9.
[11] K. He, X. Zhang, S. Ren, J. Sun, "Deep residual learning for image recognition", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016, pp. 770–778.
[12] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, M. Hasan, B. C. V. Essen, A. A. S. Awwal, V. K. Asari, "A State-of-the-Art Survey on Deep Learning Theory and Architectures", Electronics, 8, 292, doi:10.3390/electronics8030292, 5 March 2019.
[13] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", NIPS, 2012.

