Available online at www.sciencedirect.com

ScienceDirect

Procedia CIRP 93 (2020) 1224–1229
www.elsevier.com/locate/procedia
doi: 10.1016/j.procir.2020.04.158

53rd CIRP Conference on Manufacturing Systems

Image Processing based on Deep Neural Networks for Detecting Quality Problems in Paper Bag Production

Anna Syberfeldt* and Fredrik Vuoluterä

University of Skövde, Skövde, Sweden

* Corresponding author. Tel.: +46(0)500448577. E-mail address: anna.syberfeldt@his.se
Abstract

It is critical for manufacturers to identify quality issues in production and prevent defective products being delivered to customers. We investigate the use of deep neural networks to perform automatic quality inspections based on image processing to eliminate the current manual inspection. A deep neural network was implemented in a real-world industrial case study, and its ability to detect quality problems was evaluated and analyzed. The results show that the network has an accuracy of 94.5%, which is considered good in comparison to the 70–80% accuracy of a trained human inspector.

© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 53rd CIRP Conference on Manufacturing Systems

Keywords: Deep Neural Networks; Image Processing; Quality Inspection; Industrial Vision Systems
1. Introduction

Detection and elimination of poor quality products is of critical importance to virtually all manufacturing companies [1]. They employ specialized personnel to undertake quality inspections in which one or more product characteristics are checked against product specifications. Products that do not meet the specifications are rejected or returned for improvement to avoid releasing poor quality products [2]. These inspections are vital to ensuring high-quality products, but performing them manually comes at a high cost. It has been suggested that automatic camera-based systems, so-called "vision systems," could replace manual quality inspections [3-4]. However, camera-based systems have not been widely adopted in manufacturing, mainly because of their poor performance and low flexibility when the environment changes. For example, a slight change in lighting conditions may cause a camera system to stop working [5]. The hardware needed is also expensive, and so is the cost of installing and maintaining the system [6].

In recent years, new techniques for image analysis based on artificial intelligence (AI) have been suggested. These techniques have been shown to be capable of overcoming the shortcomings of the traditional techniques used in industrial vision systems [7]. AI-based image processing is currently a hot research topic. The most successful implementations use so-called "deep learning", which is a machine learning method based on artificial neural networks [8]. Despite the great potential of deep neural networks, there are still only a few implementations of them in vision systems targeting the manufacturing industry, mainly because the technique and the available software platforms are still relatively new.

In this paper, we describe the implementation and testing of the technique of deep learning in real-world industrial production and evaluate and analyze its ability to detect quality problems. The company involved in the study was Jonsac AB, a Swedish manufacturer of paper bags that are used for packaging a variety of products including dry foods, waste, and animal feed. Their customers demand high-quality bags that will store their products in the best possible conditions.
2. Deep neural networks

A deep neural network is basically an artificial neural network with multiple layers between the input and output layers [9]. It transforms input into an output by mathematical manipulations based on linear or non-linear relationships. Each mathematical manipulation is considered a layer, and the term "deep" refers to the many layers in the network [9]. Currently the most successful deep neural networks for image processing use what is known as a convolutional neural network (CNN) architecture [10]. Interest in CNNs spiked in 2012 after a network called AlexNet solved a given problem with an error rate of 15.3%. In 2015 ResNet achieved an error rate of 3.6%, which surpassed the recognition ability of a human facing the same problem [11].

The name "CNN" comes from the fact that the network employs a mathematical operation called convolution, which is a specialized kind of linear operation. Basically, CNNs are neural networks that use convolution instead of general matrix multiplication in at least one of the layers. Instead of using fully connected layers where one neuron takes 1D data as input as in a traditional neural network, a CNN uses convolutions that take 2D data [10]. This technique preserves the spatial information in an image, which is crucial for accurate predictions. Most CNN architectures consist of convolutions, rectified linear units (ReLUs) and pooling layers for feature detection, followed by fully connected layers and SoftMax for classification. This is the architecture we chose for the project. The concept of a CNN is presented in Figure 1. Further information about the technique can be found in [12].
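As an illustration of the architecture just described (convolutions, ReLUs and pooling for feature detection, followed by fully connected layers and SoftMax for classification), the following minimal Keras sketch builds such a network. It is not the network used in this study; the input size and layer widths are arbitrary assumptions.

# Minimal CNN sketch: convolution + ReLU + pooling for feature detection,
# then fully connected layers + softmax for classification (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 3), num_classes=2):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # e.g. OK / NOK
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()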
3. Related work

There are no previous studies on detecting defects in paper bag production using the same techniques as this study. Accordingly, this section of the paper focuses on the three most relevant studies using different variants of CNNs for detecting defective products.

Zhu et al. [14] present a study on detecting defects in the production of emulsion pumps. They evaluate the possibilities of using CNNs to replace the current manual inspection of the pumps. The main challenge in their implementation was the limited number of images of defective products, making high-quality image samples very important. To increase the quality of the images, pre-treatment using slant correction was used. When validated on images not previously seen, the CNN achieved a 97% accuracy rate with a mean detection time of 0.18 seconds.

Jing et al. [15] present a study from the textile industry focused on finding defects in fabric. Like the study of Zhu et al., the purpose was to replace manual inspections with a CNN. The authors used a dataset of different fabrics with a range of colors and recurring patterns, and the CNN was trained to recognize six categories of common defects. The network utilized an automatic calculation of the patch sizes of the fabric, which improved accuracy and made it possible to identify very small defects. The CNN showed an average accuracy of over 97%.

A study on defect detection in nanofibrous materials presented by Napolentano et al. [16] achieved similar results to Jing et al. Using scanning electron microscope (SEM) images, a regional CNN approach was used to discover defects in the materials. The accuracy achieved was also about 97%. The authors note that the accuracy was higher for coarse-grained defects but lower for fine-grain defects.
4. Training data set

The paper bag product line at Jonsac is subject to frequent changes. The company delivers products to a vast assortment of customers in very different quantities and at different intervals. Each customer has the option of having a custom print, and can also choose among an array of colors for each area of the bag. There are also structural differences in the bags, such as length. As in any market, Jonsac may gain new customers or lose old ones, and established customers may change their preferences. These factors make it unfeasible, if not impossible, to gather data on every single product variant for training the deep neural network. Even if it were possible to collect enough images of all the current product series, as soon as a new product series was introduced the deep neural network would need new data. It is also doubtful that a single network would be able to distinguish between faulty print/color in variant X and the correct print/color in variant Y. Accordingly, this study focused on the geometry of the bags, which is almost identical between variants except for length. Thus, the network ignores faults in print, coloring, and other purely aesthetic defects.

A dataset of 1,729 images was collected for the study, including six different product variants as shown in Table 1. Each variant is clearly distinguishable from the others by some factor, be it color or print. They share the same geometry and "folding lines", and are identical from a structural point of view, except for variants 5 and 6 which are slightly longer than variants 1–4. As previously mentioned, only the bottom of the bag needs to be checked for quality; thus only the bottom part of the bag is used for the training data.

Table 1. Product variants included in the study (variants 1–6).

To properly train the network, all the images in the dataset need to be classified as "OK" (the product has no flaws) or "NOK" (the product is of inferior quality). More precisely, NOK denotes an anomaly, a divergence from the normal geometry. The images were classified as "OK" or "NOK" manually, as each had to be evaluated by a human who could determine the NOK or OK assignment from experience. The images were also annotated to indicate the region of interest and preferable traits. Figure 2 shows an example of an annotation.

Fig. 2. Example of annotation (green box = OK, red box = NOK).
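The exact annotation format used in the study is not specified here; the following Python sketch merely illustrates one possible way of storing and loading the manual OK/NOK labels together with the annotated regions of interest, assuming a simple CSV layout whose file name and columns are hypothetical.

# Hypothetical loader for manually labelled images: each CSV row holds an image
# file name, an OK/NOK label and the annotated region of interest (bounding box).
import csv
from collections import namedtuple

Annotation = namedtuple("Annotation", "filename label xmin ymin xmax ymax")

def load_annotations(csv_path="annotations.csv"):
    annotations = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            annotations.append(Annotation(
                filename=row["filename"],
                label=row["label"],          # "OK" or "NOK"
                xmin=int(row["xmin"]), ymin=int(row["ymin"]),
                xmax=int(row["xmax"]), ymax=int(row["ymax"])))
    return annotations

if __name__ == "__main__":
    for a in load_annotations():
        print(a.filename, a.label)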
From the classification process, it became clear that there were five different categories of anomalies: skewed side (right), skewed side (left), crushed side, tearing, and offset bottom. Examples of products exhibiting these anomalies are shown in Table 2 (anomalies marked with yellow circles).

Table 2. Anomalies present in the training dataset.
Description            Common    Comment
Skewed side (right)    Yes       Represents most of all anomalies.

5. Training and evaluation

This study used a pre-trained network with transfer learning based on the publicly available model "faster_rcnn_inception_v2_coco_2018_01_28". The specification for the model can be found at https://github.com/opencv/open_model_zoo/blob/master/models/public/faster_rcnn_inception_v2_coco/faster_rcnn_inception_v2_coco.md.
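No code for running the re-trained detector is given in the paper; as an illustration only, the sketch below shows how a frozen TensorFlow detection graph of this family can be loaded and run with OpenCV's dnn module. The file names, input size and confidence threshold are assumptions, not values from the study.

# Illustrative inference sketch using OpenCV's dnn module with a frozen
# TensorFlow detection graph (file names below are hypothetical).
import cv2

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "faster_rcnn_inception_v2_coco.pbtxt")

image = cv2.imread("bag_bottom.jpg")
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, size=(600, 600), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()            # output shape: [1, 1, N, 7]

for det in detections[0, 0]:
    confidence = float(det[2])
    if confidence > 0.5:              # illustrative threshold
        class_id = int(det[1])        # e.g. OK or NOK class
        x1, y1, x2, y2 = det[3] * w, det[4] * h, det[5] * w, det[6] * h
        print(class_id, confidence, (x1, y1, x2, y2))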
There are six prediction layers, decreasing in feature-map size: 38², 19², 10², 5², 3², and 1², with the following numbers of default boxes: 4, 6, 6, 6, 4, and 4. Thus there are 8732 detections per class (38²×4 + 19²×6 + 10²×6 + 5²×6 + 3²×4 + 1²×4). However, many of these are unlikely candidates with very low probability, and so they are removed using non-maximum suppression (NMS).
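The number of detections per class can be verified with a few lines of Python, and the non-maximum suppression step can be sketched as a generic NumPy routine (an illustration, not the exact implementation used in the study).

import numpy as np

# Verify the number of detections per class: six feature maps with
# 4, 6, 6, 6, 4 and 4 default boxes per location.
sizes = [38, 19, 10, 5, 3, 1]
boxes_per_loc = [4, 6, 6, 6, 4, 4]
print(sum(s * s * b for s, b in zip(sizes, boxes_per_loc)))  # -> 8732

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes and drop overlapping candidates."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]
    return keep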
… erroneous behaviors, often caused by mistaken manual annotations. Figure 3 shows the procedure used.

Fig. 3. Flowchart showing the process of training and evaluating the network.
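The evaluation procedure shown in Figure 3 is not reproduced here. Purely as an illustration of how disagreements between the network output and the manual annotations could be flagged for re-inspection, predicted and annotated boxes can be compared by label and intersection over union; the routine below is a sketch of that idea, not the authors' actual procedure.

def iou(box_a, box_b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def flag_for_review(predicted, annotated, iou_threshold=0.5):
    """Return True if the prediction disagrees with the manual annotation,
    so that the image (and its annotation) should be inspected again."""
    pred_label, pred_box = predicted
    true_label, true_box = annotated
    return pred_label != true_label or iou(pred_box, true_box) < iou_threshold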
… products being approved. Should a missed product be regarded as OK, the evaluation would have generated an accuracy of 97.40%.

Fig. 4. Duplicate annotations made by the network. The human inspector has classified this bag as OK, but it is a borderline case (a minimal skewness in the bottom right corner).
7. Conclusions and future work

This paper investigated the use of deep neural networks for performing automatic quality inspections based on image processing to eliminate the current manual inspection process. The focus of this study is a real-world industrial case study of paper bag production using a Faster R-CNN to detect defective bags. It is difficult for even a trained human to detect defective bags as the defects may be small and can vary considerably. An evaluation of the deep neural network shows, however, that it has an accuracy of 94.5%. This is at least as good as the accuracy of the operator currently undertaking the quality inspection manually. Follow-ups from the manual inspection that has been undertaken by the company show that the accuracy of a human inspector usually lies between 70–80% after a period of training, which is not surprising given the high rate of production.

The evaluations performed by the network were not only accurate; they were also rapid. This is important given the high rate of production. With 1–2 bags being produced per second, there is at most 0.5 seconds to take a picture, evaluate the bag quality, and notify the production system that a defective product has been detected. Evaluation of the implementation shows that the network can solve the problem in 0.3 seconds, meaning that it can be used in real production. In fact, the performance of the network was considered so promising that the company has decided to introduce it into their production line.
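The cycle-time argument can also be checked empirically: the sketch below times repeated forward passes of a loaded detection network to estimate the per-image evaluation time. The file names are hypothetical, and the measured value naturally depends on the hardware used.

# Estimate per-image inference time to check it fits the ~0.5 s budget
# imposed by the production rate (file names are hypothetical).
import time
import cv2

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "faster_rcnn_inception_v2_coco.pbtxt")
image = cv2.imread("bag_bottom.jpg")
blob = cv2.dnn.blobFromImage(image, size=(600, 600), swapRB=True, crop=False)

timings = []
for _ in range(20):
    net.setInput(blob)
    start = time.perf_counter()
    net.forward()
    timings.append(time.perf_counter() - start)

print("mean inference time: %.3f s" % (sum(timings) / len(timings)))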
Future work could focus on improving the solution so that it can assess not only the geometry of bags but also faults in print, coloring, and other purely aesthetic defects that are important to customers even though such flaws do not affect the function of the bag. The personnel at the company did indicate that issues with print and color are somewhat predictable, often happening when refilling printing materials or switching between product variants. Although it might not be critical to automatically detect aesthetic defects, doing so would reduce the burden on the operators in the line. It is thus worth investigating adding this capability as an extra feature of the network.

Another aspect that could be investigated is ways to make the training process for the network more efficient with respect to setting up the training data. It is time-consuming to check each image individually, classify it as OK or NOK, and annotate it. In this study we manually inspected 1,729 images for the training data set. On average, each image took about one minute to process. This means that we spent about 30 hours configuring the training data set (not including the time it took to take the pictures). Our training set was not particularly large, and considerably more images might be needed for solving other problems. In such cases, the time required to set up the training data becomes a real problem. One option could involve generating training data virtually. For example, CAD models of the products with superimposed defects could be used to generate training data. If at least some virtual training data could be used, the time for setting up the training data set could be significantly reduced. We will therefore start investigating this, as we believe that many applications could benefit from such an approach if it can be proven to work.
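Purely as an illustration of this idea (it was not implemented in the study), a synthetic NOK sample could be produced by compositing a rendered defect patch onto an image of a flawless bag; the file names below are hypothetical.

# Hypothetical sketch: create a synthetic NOK training image by pasting a
# rendered defect patch (e.g. from a CAD model) onto an image of a good bag.
import random
from PIL import Image

bag = Image.open("ok_bag.png").convert("RGBA")
defect = Image.open("rendered_defect.png").convert("RGBA")  # transparent background

x = random.randint(0, bag.width - defect.width)
y = random.randint(0, bag.height - defect.height)
synthetic = bag.copy()
synthetic.paste(defect, (x, y), mask=defect)   # alpha-composite the defect
synthetic.convert("RGB").save("synthetic_nok_bag.png")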
Acknowledgements

The authors would like to thank Jonsac AB for their support in the study and for allowing us to work in their facility. The authors also want to thank Vinnova for financing the VISION project through the strategic innovation program Produktion2030, within which this work has been undertaken.

References

[1] Colledani, M., Tolio, T., Fischer, A., Iung, B., Lanza, G., Schmitt, R. & Váncza, J. Design and management of manufacturing systems for production quality. CIRP Annals 2014;63(2), 773-796.
[2] Rajput, R.K. A Textbook of Manufacturing Technology (Manufacturing Processes). ISBN 978-81-318-0244-1;2018.
[3] Forsyth, D. & Ponce, J. Computer Vision: A Modern Approach (2nd ed.). New Jersey, USA: Pearson;2012.
[4] Soini, A. Machine vision technology take-up in industrial applications. Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis (ISPA 2001);2001.
[5] Semeniuta, O., Dransfeld, S., Martinsen, K. & Falkman, P. Towards increased intelligence and automatic improvement in industrial vision systems. Procedia CIRP 2018;67, 256-261.
[6] Davies, R.E. Machine Vision: Theory, Algorithms, Practicalities. 4th ed. London: Elsevier;2012.
[7] Silva, R.L., Rudek, M., Szejka, A.L., Junior, O.C. Machine Vision Systems for Industrial Quality Control Inspections. In: Chiabert P., Bouras A., Noël F., Ríos J. (eds) Product Lifecycle Management to Support Industry 4.0. PLM 2018. IFIP Advances in Information and Communication Technology. 540. Springer, Cham;2018.
[8] Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y. & Alsaadi, F. A survey of deep neural network architectures and their applications. Neurocomputing 2017;234, 11-26.
[9] Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning. First ed. Cambridge: MIT Press;2016.
[10] Bosse, S., Dominique, M., Müller, K-R., Wiegand, T. & Wojciech, S. Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment. IEEE Transactions on Image Processing. 2018;27. 206-219.
[11] Du, J. Understanding of object detection based on CNN family and YOLO. Journal of Physics: Conference Series. 1004;2018.
[12] Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J. and Chen, T. Recent advances in convolutional neural networks. Pattern Recognition. 2018;77. 354-377.
[13] Géron, A. Hands-on Machine Learning with Scikit-Learn and Tensorflow: Concepts, Tools, and Techniques to Build Intelligent Systems (2nd ed.). Sebastopol, CA: O'Reilly Media. ISBN: 978-1491962299;2019.
[14] Zhu, C., Zhou, W., Yu, H. & Xiao, S. Defect Detection of Emulsion Pump Body Based on Improved Convolutional Neural Network. Proceedings of the 2019 International Conference on Advanced Mechatronic Systems. Kusatsu, Shiga, Japan 26-28 August 2019, 2019;349-352.
[15] Jing, J-F., Ma, H. & Zhang, H-H. Automatic fabric defect detection using a deep convolutional neural network. Coloration Technology. 2019;135(3). 213-223.
[16] Napolentano, P., Piccoli, F. & Schettini, R. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity. Sensors 2018;18(1).