
Int J Adv Manuf Technol (2013) 68:1123–1136
DOI 10.1007/s00170-013-4904-2

ORIGINAL ARTICLE

An industrial vision system for surface quality inspection of transparent parts

S. Satorres Martínez · J. Gómez Ortega · J. Gámez García · A. Sánchez García · E. Estévez Estévez

Received: 31 October 2012 / Accepted: 6 March 2013 / Published online: 6 April 2013
© Springer-Verlag London 2013

Abstract One of the industrial applications of computer vision is the automated detection and characterisation of surface defects. Some types of surface, such as those that are highly reflective or transparent, require custom-made systems for automated inspection. The purpose of this article is to present a machine vision system, with an easily configurable hardware–software structure, for the surface quality inspection of transparent parts. This structure allows different products and part models to be inspected by the same system. Its hardware is composed of image-capturing devices and mechanisms for manipulating and holding the part to be inspected. One such mechanism is the lighting system, which has been specifically developed to allow real-time quality control of this type of surface. As regards software design, a component-based approach has been adopted in order to increase code reuse and decrease the time required for configuring any type of part and adapting it for inspection. To test the efficiency and robustness of the industrial setup, a series of tests using a transparent industrial part, specifically a commercial model of headlamp lens, has been carried out.

Keywords Computer vision · Automated inspection · Software architecture · Surface quality

S. Satorres Martínez () · J. Gómez Ortega · J. Gámez García · A. Sánchez García · E. Estévez Estévez
Universidad de Jaén, 23071 Jaén, Spain
e-mail: [email protected]

1 Introduction

Over the past 10 years, machine vision systems have been increasingly used by industry to implement efficient contactless visual quality controls [1]. However, when the inspection process requires a highly specific solution that rules out commercially available systems, manual inspection remains the most usual option [2]. One example of a process usually performed by skilled workers is the surface quality inspection of reflective or transparent parts or surfaces. There are many reasons why this task proves very complex. Firstly, surface defects, also known as aesthetic defects, are visible only under certain lighting conditions [3]. Secondly, the size of the defect to be detected must be at least 0.5 mm in diameter, since smaller defects are not visible to the human eye [4]. Finally, the inspection task must be carried out within the cycle times established for a specific production process, which tend to be short [5]. These three reasons confirm the industry's need for automated detection systems for aesthetic defects, which must also comply with the following requirements:

– Reliability and robustness: The hardware devices used for image acquisition and the software algorithms used to extract information from the images captured by the vision sensors should guarantee the detection of the tiniest defects.
– Capability for real-time inspection: Since the automated quality control system is but one element of the production chain, the process must be performed in the preestablished amount of time. These cycle time requirements will significantly affect the selection and arrangement of the system's hardware devices and will diminish the possibility of using certain image-processing algorithms.
– Adaptability: Whenever commercial systems for industrial inspection prove inadequate to solve a particular problem, great effort must be geared to developing new devices and tools. For this reason, it is absolutely necessary that the system's components may be included, eliminated or even modified with ease [6]. This flexibility will permit different models of the same part, or different parts, to be inspected by just one system.

With this purpose in mind, we present a new automated inspection system that is versatile—adaptable to different products and part models, open—having an easily configurable hardware–software structure, and easy to maintain, thanks to the modularity and internal independence of its components. In order to comply with the abovementioned features, our proposal follows a component-based development approach [7]. This architecture makes it possible to combine commercially available components (programmable logic controller (PLC) control software, image-capturing applications and computer vision algorithms) with custom-made ones (highly specific image-processing algorithms, lighting device control, data presentation strategies, etc.). The remainder of this article is structured in the following way: Section 2 presents the causes and types of aesthetic defects, including a review of related literature on machine vision systems; Section 3 describes how our machine vision proposal operates and the devices included in it; Section 4 defines which actions have to be taken into account in the machine vision system's adjustment; Section 5 shows the experimental validation of the proposed system, using an assortment of car headlamp lenses; and finally, Section 6 outlines the conclusions of this work.

2 Definition of the problem and related work

In this section, we describe the causes and different types of aesthetic defects that should be detectable automatically. We also list a number of previous proposals for reflective or transparent surface quality inspection.

2.1 Types of aesthetic defects

Among the factors liable to originate aesthetic defects, such as those shown in Fig. 1, the following may be mentioned: inappropriate quality of the raw material, inadequate manipulation of the component and faulty machinery or use of incompatible chemical products [8]. Moreover, in order to increase protection against external aggressions, such as UV radiation or mechanical friction, some transparent parts are varnished, a process that may originate new types of defects [9]. In short, the defects we may find are quite different, both typologically and as regards their cause (see Table 1).

The majority of the abovementioned defects tend to have the same optical properties as the part surface on which they are located. Moreover, their variability (typology, size, etc.) makes it difficult, even for skilled workers, to perform a quality control of these surfaces. One further disadvantage of manual inspection is the high cost involved when the process entails inspecting the entire output of a mass production system [10].

2.2 Related work on surface quality inspection

Even though manual inspection is used in certain surface quality control applications for reflective or transparent parts, it does not allow the whole production to be controlled within the cycle times established for the process. Besides, manual inspection is at odds with the reliability and robustness requirements demanded by the client [11]. This has stimulated a number of proposals for automated inspection systems, including the following: Torres et al. developed a prototype for defect detection in aluminium web production that evaluated the distortion experienced by a laser beam on a flawed surface [12], Peng et al. attempted to detect defects in float glass by using a linear high-resolution camera [4], Der-Baau Perng et al. developed a vision system for detecting defects in the interior or on the surface of a CRT panel [13], Liu et al. presented a glass defect identification system based on multiresolution and information fusion analysis [14], Adamo et al. built a multiple camera system for quality control of satin glass surfaces [15], Zhang et al. designed a system for inspecting polished metals capable of identifying up to seven different types of surface defects [16] and Salis et al. used structured lighting to inspect reflective parts in the automotive industry [17]. This lighting technique has also been used to inspect both large surfaces, such as a vehicle's bodywork [18], and small parts, such as flasks and containers in the cosmetic industry [19].

In order to increase machine vision flexibility, some of these systems are provided with graphical user interfaces that, with the aid of human intervention, can be adapted to control several part models or even other inspection applications [20]. Recent studies have presented frameworks that can be useful in the development and implementation of computer vision-based systems: Afrah et al. addressed two aspects of vision-based system development that are not fully exploited in current frameworks, abstraction over low-level details and high-level module reusability [21], and Wang et al. proposed a real-time distributed architecture that

Fig. 1 Examples of aesthetic defects that can be found in transparent parts. From left to right: blister, bubble and excess of coating

can automatically load processing modules according to the task that the vision system has to accomplish [22].

3 Description of the machine vision system

Thanks to its hardware and software architecture, our machine vision system (MVS) offers flexibility for surface quality inspection. Starting from the performance requirements—in terms of cycle time, description of the part and features that have to be extracted—the MVS selects the elements involved in image acquisition and processing. In this section, the MVS is described in depth. Firstly, what the MVS should do, in other words its functionality, is presented. Then, the hardware and software architectures are detailed. Several diagrams in the Unified Modelling Language (UML), which has been extensively used for system engineering modelling [23], support the MVS description.

3.1 System functionality

Any operation performed by the MVS needs to be defined, and this is presented in the UML use case diagram shown in Fig. 2. Three actors interact with the MVS: the system user—Client, the artificial intelligence algorithms for adjusting the system to inspect a new part—AI, and the control of the mechanisms in charge of manipulating and holding the part—PLC. Through five use cases, the functionality provided by the system is Adjust System, Inspect, Commercial Lighting, Historic Check and Cancel.

Prior to the automated inspection of a specific model of part, several adjustments have to be made in the MVS. The Adjust System use case performs this task following the performance requirements—entered into the MVS by the Client—and with the aid of the AI. The Inspect use case performs the continuous inspection of a predefined part. Depending on the MVS task, additional lighting techniques may be demanded. The Commercial Lighting use case verifies the proper lighting technique for the inspection task. The Historic Check use case reports the results of the inspection, and finally, Cancel stops the inspection process.

3.2 Hardware architecture

The hardware architecture is represented in Fig. 3 using a UML deployment diagram. This diagram shows the MVS hardware, the software that is deployed on that hardware and the networks used to communicate the different nodes with one another.

Basically, the MVS consists of the following hardware elements:

– The binary active lighting system;
– n monochrome Camera i components, and their optical systems, mounted on stands that allow 6 degrees of freedom for automated positioning;
– A Host PC to control the lighting system, synchronise the image capture and execute the computer vision algorithms;
– A frame tool for holding the parts;
– A conveyor belt;
– A manipulator with 2 degrees of freedom with an end effector adapted to the frame tool; and
– A PLC to control the manipulation and movement of elements.

Table 1 Description of aesthetic defects

Type         Name               Description
Punctual     Blisters           Particles caught up in a coating layer
             Drops              Sporadic coating splashing
             Bubbles            Intake of air during the coating process
Lineal       Scratches          Caused by mechanical aggressions
             Threads            Threads adhered onto the surface
Superficial  Burrs              Excess of material
             Sinks              Superficial depression
             Excess of coating  Accumulation of coating
             Orange skin        General superficial undulation

One of the main hardware devices, which has been specially designed for the MVS, is the binary active lighting

Fig. 2 Use case diagram describing the machine vision system

system. This system proves crucial to extract and characterise aesthetic defects. As different lighting techniques are available with this system, it interfaces with a software component, the BinaryActiveLighting, that runs in the Host PC. A further description of this system is given in the next subsections.

As for the vision sensors, GUPPY F-080B cameras have been included, with a resolution of 1,024 × 768 square pixels of size 4.65 µm and an acquisition rate of 30 fps. The information in these frames is transferred to the PC via IEEE 1394. With this bus, the cameras may be enabled or disabled without having to reboot the PC. Moreover, the cameras do not require any additional power supply, which simplifies the system's cabling requirements. Every camera device interfaces with a camera software component, the Camera, that is in charge of acquiring and processing the image. For the image acquisition, the libraries provided by the camera manufacturer are required. For the processing, specific libraries, such as OpenCV, and proprietary computer vision algorithms are needed.

Depending on the size of the part, the size of the defects that have to be detected and whether we are constrained to a limited number of cameras for capturing images of the entire part, it may be necessary for the MVS to include a series of elements that allow manipulating the part. These manipulating elements are the manipulator with 2 degrees of freedom, which interfaces with the Manipulator component, the conveyor belt and the frame tool for holding the parts. All these elements are controlled by a PLC connected via EtherCAT to the host PC using the TwinCAT environment.

In an industrial machine vision system, ambient light has to be avoided. For this reason, the manipulator, lighting systems and cameras are placed in a closed environment. We next present an in-depth description of the lighting system, detailing its hardware elements and its configurable parameters.
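To illustrate how the stated sensor specification bounds each camera's field of view, consider the following back-of-the-envelope sketch. The sensor resolution and minimum defect size come from the text; the three-pixel defect coverage criterion and all variable names are our own assumptions, not taken from the paper:

```python
# Spatial-resolution check for a 1,024 x 768 sensor that must detect
# defects of at least 0.5 mm in diameter.

SENSOR_PX = (1024, 768)   # GUPPY F-080B resolution (from the text)
MIN_DEFECT_MM = 0.5       # smallest defect to detect (from the text)
PX_PER_DEFECT = 3         # assumed: pixels a defect must span to be segmentable

def max_fov_mm(sensor_px, min_defect_mm, px_per_defect):
    """Largest field of view (mm) per axis that still resolves the defect."""
    mm_per_px = min_defect_mm / px_per_defect  # required object-space pixel size
    return tuple(n * mm_per_px for n in sensor_px)

fov_w, fov_h = max_fov_mm(SENSOR_PX, MIN_DEFECT_MM, PX_PER_DEFECT)
print(f"max FOV per camera: {fov_w:.0f} x {fov_h:.0f} mm")
```

Under these assumptions, one camera can cover roughly a 171 × 128 mm patch of the part, which is why a large lens requires several cameras or part repositioning.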

Fig. 3 Deployment diagram describing the hardware architecture



3.2.1 Description of the lighting system

Lighting is a key component of the image-capturing process, and its main function is to make the desired features of an object or its surface visible [24]. Thanks to our novel contribution to lighting systems, it is possible to enhance defects that either reflect or transmit the light received. Besides, the binary active lighting system proves most suitable when several cameras are used, since it is not placed on their optical axis and therefore reduces the spatial limitations when positioning the cameras. The physical principle on which the lighting system rests can be found in [3].

Mainly, this lighting system is composed of a series of hardware and software elements providing it with great luminance and great flexibility in the configuration of structured lighting patterns. Figure 4 shows an image of the system with an open side in order to display all the elements contained within. The function of each of these elements is the following:

– Lighting source. It is composed of a row of fluorescent lamps. This arrangement prevents punctual concentrations of light or unlit areas.
– Thermal insulator and diffuser. In order to homogenise the light coming from the lighting source, it is convenient to place a diffusing element between the source and the LCD mesh. Also, in order to ensure that the LCD mesh does not overheat, a thermal insulator has been placed next to the diffuser. This insulator consists of air-filled double glazing.
– Fans. For refrigeration, a set of fans is used to extract the hot air generated inside the box containing the system's components.
– LCD mesh and associated electronics. This element is basically composed of the liquid crystal panel (LCD) and electronic boards for processing and fixing its control signals.

3.2.2 Configuration of the lighting system

Depending on the configuration of the image projected by the LCD mesh, the system will act as a source of backlighting—white pattern, or structured lighting—stripe pattern. If the latter lighting technique is chosen, the following parameters must be established:

– Width of the white stripe,
– Width of two connected stripes having different luminance and
– Shift between stripes in consecutive images.

The ideal lighting configuration should offer maximum contrast, which is achieved by decreasing the ratio between the white and black stripes. In this way, a white stripe with minimum width would move along the whole width of the two stripes. To ensure defect detection in the whole field of view, it is then necessary to capture a greater number of images and to increase the camera exposure time. When the cycle time constitutes a limitation, a compromise solution is often used, even if it implies a decrease in contrast. Because the stripe pattern is defined by software and is, therefore, easily modifiable, there is no need for a presimulation stage to define a suitable configuration.

3.3 Software architecture

The main factor in building an inspection system that is versatile, open and easy to maintain is the architecture of the software. Our MVS software architecture follows a component-based development approach [25], which is illustrated in the UML component diagram presented in Fig. 5.

The component which provides and requires the greatest number of interfaces is the Camera. This component has four interfaces: CamConf, Trigger, Image and InspectionResult. The first two interfaces are required services and the others are offered services. Through the CamConf, the component receives the vision sensor position, with respect to the part reference system, that is computed by the AI. In
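The three stripe parameters listed above can be sketched as a small pattern generator for the LCD mesh. This is an illustrative NumPy sketch, not the authors' code; the function and parameter names are ours, and the example values (a 6-pixel white stripe in an 18-pixel period, shifted 1 pixel per image) are taken from the adjustment table later in the paper:

```python
import numpy as np

def stripe_pattern(width, height, white_px, period_px, shift_px):
    """Binary stripe pattern for the LCD mesh: a white stripe of
    `white_px` columns repeating every `period_px` columns, shifted
    horizontally by `shift_px` for each consecutive capture."""
    cols = (np.arange(width) - shift_px) % period_px
    row = np.where(cols < white_px, 255, 0).astype(np.uint8)
    return np.tile(row, (height, 1))

# One full scan: shifting the pattern 1 pixel at a time sweeps the white
# stripe across the whole 18-pixel period.
frames = [stripe_pattern(640, 480, white_px=6, period_px=18, shift_px=n)
          for n in range(18)]
```

Narrower white stripes raise contrast but force more captures per scan, which is exactly the cycle-time trade-off described in the text.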

Fig. 4 Binary active lighting system

Fig. 5 UML component diagram describing the software architecture

addition, vision sensor parameters such as exposure time, gain, etc. are provided by the GUI. Every time the lighting system changes its lighting technique or the structured pattern configuration, an image has to be acquired. The Trigger interface receives these changes from the Lighting component. In relation to the offered services, the Image stores the camera acquisitions in an n-dimensional matrix. Finally, the InspectionResult indicates the number, size and location of the flaws detected by this component.

With the aid of the GUI, the MVS user defines the inspection requirements. These requirements are used for the lighting technique selection, through the LightConf interface, and for the AI through AIConf. This last component provides the Camera and Manipulator poses through the ManConf and CamConf interfaces.

For every component implementation, we have made use of the object-oriented paradigm and top–down design [26]. The Manipulator and Lighting components contain the methods, included in one level of abstraction, in charge of managing their respective hardware devices. For the GUI and AI components, the methods provide the required services.

Figure 6 illustrates the characterisation of the Camera component in a UML class diagram. This component is hierarchically structured in three levels. The first level contains properties related to the image acquisition and the vision sensor location. The intermediate level corresponds to the different stages of the vision process, namely, preprocessing, segmentation and classification, and the last level includes the computer vision algorithms of these stages.

Different goals are achieved in each of the computer vision stages. In the preprocessing, the algorithms used to prepare the image for the subsequent stages are found: noise reduction, deletion of unwanted regions, etc. The segmentation stage is used to determine the image regions whose pixels share some type of attribute. In order to extract these regions, segmentation techniques based on global or adaptive thresholds with low computational cost have been included. Finally, the classification stage consists of the algorithms that extract a series of characteristics of the segmented regions and include them within a predetermined class.

Going into details, the computer vision algorithms included in the former stages are the following:

Preprocessing:

– Region of interest (ROI) definition. This algorithm is used to identify regions of interest in which to conduct the image analysis. Based on a free-form shape, the procedure presented in [27] dynamically adapts it to the image that has to be processed.
– Aspect image. Provided the image capture is done by means of structured lighting, the subsequent processing is conducted not on a single capture but on an image, called the aspect image, M(x), which is obtained from a series of different captures by applying the following expression:

    M(x) = (1/N) · Σ_{n=0}^{N−1} P(x − n·Δ),  n ∈ N    (1)

  where P(x) is the function that defines the stripe pattern, Δ is the pattern's shift between two consecutive images and N is the number of acquisitions.

Segmentation:

– Otsu. Global threshold algorithm that produces satisfactory results with images that present bimodal or multimodal histograms. It is a histogram shape-based method that establishes an optimum threshold by minimising the weighted sum of within-class variances of the foreground and background pixels [28].
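A minimal sketch of how an aspect image in the sense of Eq. (1) could be formed from a stack of structured-light captures, and then segmented with Otsu's threshold. This is an illustrative NumPy re-implementation, not the authors' code (Otsu's rule is stated here in its equivalent between-class-variance-maximisation form):

```python
import numpy as np

def aspect_image(captures):
    """Aspect image M(x): pixel-wise average of the N captures taken
    while the stripe pattern is shifted between consecutive images."""
    stack = np.stack(captures).astype(np.float64)
    return stack.mean(axis=0)

def otsu_threshold(img):
    """Otsu's global threshold: maximising the between-class variance is
    equivalent to minimising the weighted within-class variance [28]."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # undefined where a class is empty
    return int(np.argmax(sigma_b))

# Usage on a synthetic bimodal image: dark background (~30) with a
# bright defective region (~200).
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:30, 20:30] = 200
t = otsu_threshold(img)
defect_mask = img > t
```

On a real aspect image the same call would separate defect pixels from the uniform background produced by the averaging of Eq. (1).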

Fig. 6 Camera component in UML class diagram

– Multistate adaptive. A selective procedure which, according to the pixel's grey level, applies a different operation t(x, y), being

    t(x, y) = F(x, y) − Uoptmin      if F(x, y) < T0
    t(x, y) = F(x, y) − Uopt(i, j)   if T0 ≤ F(x, y) ≤ T1    (2)
    t(x, y) = F(x, y) − Uoptmax      if F(x, y) > T1

  where:

  • F(x, y) is the image to be segmented.
  • Uoptmin is the global threshold obtained from

      Uoptmin = (argmax{h(g)} + min_{0≤k<T0} {k | h(k) = 0}) / 2    (3)

    where g represents the grey level of the peak of the histogram.
  • Uopt(i, j) is the optimal threshold obtained from

      Uopt(x, y) = (max F(x′, y′) + min F(x′, y′)) / 2,
      with (x′, y′) ∈ Neighbours(x, y)    (4)

  • Uoptmax is a global threshold obtained according to

      Uoptmax = (argmax{h(g)} + max_{T1<k≤255} {k | h(k) = 0}) / 2    (5)

  • T0 is the near-zero threshold at the lower end of the normal distribution obtained from

      Σ_{g=0}^{T0} h(g)/M = A%    (6)

    where M is the total number of pixels in the image.
  • T1 is the near-maximum grey level threshold (G = 255) at the lower end of the normal distribution calculated from

      Σ_{g=T1}^{G} h(g)/M = A%.    (7)

  A% is established empirically from the analysis of the histogram of different aspect images at a value of 1 %.

– Rosin. A segmentation algorithm adapted to images that present unimodal histograms. It is based on finding a corner in the histogram plot, h(g); for this, a straight line is drawn from the peak to the high end of the histogram. The optimal threshold is selected as the histogram index (g) that maximises the perpendicular distance between this line and the point (g, h(g)) [29].

Classification:

– Parametric classification. It is based on the use of decision functions di(x), i = 1, ..., c. The classifier is said to assign a feature vector x to a class wi if

    di(x) > dj(x) for all j ≠ i.    (8)

  Thus, the classifier is viewed as a network that computes c decision functions and selects the class corresponding to the largest discriminant [30]. The decision functions have been found from patterns that are representative of the classes of interest, using the Euclidean distance as a measure of similarity.

– Nonparametric classification. A classifier based on the nearest neighbour rule. For this classifier, the rule consists in classifying the object—extracted in the previous stage—into the class of its nearest neighbour according to a similarity or distance measure. Besides, it may be extended by using the k nearest neighbours, which normally improves the success rate of the classification. In order to find the nearest neighbour, an approximation and elimination algorithm has been used, which is formulated as follows:

    BEGIN
      dnn := ∞
      DO until P is empty
        pi := argmin_{p∈P} Aprox(x, p)    // approximation
        d := d(x, p)                      // distance to the sample
        IF d < dnn
          nn := p; dnn := d               // updating
          P := P − {q : q ∉ E(x, dnn)}    // removing
        END IF
      END DO
    END

  where E is the representation space, or the representation features, P = {p1, ..., pn} is the set of patterns or objects used during the learning process and dnn is the distance from the unknown object x to its nearest neighbour.

4 The machine vision system adjustment

In order to achieve the automated surface quality inspection of a specific model of part, several adjustments have to be made in our MVS. Firstly, all the elements involved in the image acquisition have to be chosen, configured and positioned. Then, the so-called image processing chain has to be generated for extracting the desired information from the acquired images. All these issues are stated below.

4.1 Scanning the part

A number of inspection tasks require using multiple cameras in order to capture an image of the whole part, with a spatial resolution that allows detecting defects of a certain size. In this case, it has to be decided how many cameras will be used and where they should be positioned. Of course, in industrial plant environments, cycle times constitute an important limitation, so it is always convenient to use as few cameras as possible to capture images with the required spatial resolution. In previous work, the authors of this article developed a sensory planning system that can calculate, by applying a genetic algorithm, the number and positions of the cameras needed to solve a specific vision task [31]. The evolutionary process attempts to find the optimal solution, using as few cameras as possible while ensuring appropriate spatial resolutions. The fitness function, which measures the quality of a solution, is expressed in the following way:

    F = w1 · OBJ1 + w2 · (1 − OBJ2)    (9)

where wi is the component of the relative importance vector, or weighting coefficients, and OBJi is the objective to be satisfied, which are summarised next:

– Objective 1. Minimising the number of viewpoints of the sensor.

      OBJ1 = min( Σ_{i=1}^{n} Ci / Cn )    (10)

  Parameter Ci indicates the number of cameras in the solution being evaluated, while Cn is the maximum number of cameras to be used, calculated on the basis of their spatial resolution and the size of the part.

– Objective 2. Maximising inspection quality.

  The concept of inspection quality is a parameter that indicates how close the camera's spatial resolution comes to that which is required to inspect the part's surface. To evaluate this objective, a binary data matrix is used, which is known as the quality matrix (qm) and is obtained from

      qm(i, j) = 1 if sj fulfils the constraints for vi
                 0 otherwise    (11)

  where sj is a portion of the part's surface and vi is the position of the camera. The constraints, which are evaluated hierarchically, are system construction constraints, visibility, field of view, resolution and viewing angle. The first constraint is related to the system's mechanical configuration. As mentioned earlier, the MVS consists of nine cameras, and should a greater number be required, the cameras will be aligned and the manipulator will move the part in such a way that images of its whole surface may be captured. The remaining constraints indicate how well the image capture of the part can be done from the camera positions evaluated.

By means of this tool, it is possible to know how many Camera components need to be enabled in the MVS and, if required, to what positions the manipulator in charge of moving the part must be displaced.

4.2 The image processing chain

The image processing chain puts together the sequences of image processing operations needed for the inspection of surface quality. The model of this process is depicted in Fig. 7 using a UML activity diagram. Four types of image processing operations are included in this chain: acquisition and preprocessing, segmentation, feature extraction and classification.

Fig. 7 UML activity diagram describing the image processing chain

The purpose of the first processing operation is to acquire are obtained: null and unitary. Null values belong to the i-
and to prepare the image in such a way that computationally mage background and shall not be taken into account in
costly algorithms may be spared. It would be desirable that the following stages of the image processing chain. On the
the image to be processed in the next operation should have contrary, unitary values belong to the object, or possible
a uniform background in which the defects to be detected defect, and need to be described mathematically, that is,
would be visible by contrast with the background. As it is its characteristics—size, position, etc.—must be extracted
shown in Fig. 8, through an UML activity diagram, it is pos- by means of a vector called feature vector. This vector is
sible to obtain such an image utilising the proper lighting obtained in the third processing operation and is used by
technique attending to the optical properties of the defects the last operation, the classification, which decides which
and the part surface. However, with the structured tech- class or which representative group the object evaluated
nique, it is necessary to acquire a sequence of images for belongs to.
further obtaining the aspect image. Before moving onto the The supervised learning classifiers, included in the MVS,
segmentation, it will be necessary to decide which regions need a design stage (Fig. 9). Through a set of defective and
of the image must be processed or, in other words, to define flawless samples, the segmentation algorithm, the feature
the ROI. vector and the classifier type are chosen and subsequently
In the second processing operation, by applying one of used in the image processing chain.
the available segmentation algorithms, two types of values As it is shown in Fig. 9, the segmentation algorithm
selection is based on the misclassification error (ME) [32].
This measure reflects the percentage of background pix-
els wrongly assigned to an object and, conversely, object
pixels wrongly assigned to background. ME can be simply
expressed as

$$\mathrm{ME} = 1 - \frac{|B_o \cap B_T| + |O_o \cap O_T|}{|B_o| + |O_o|} \qquad (12)$$

where $B_o$ and $O_o$ denote the background and object of the original (ground-truth) image, $B_T$ and $O_T$ denote the background and object area pixels in the test image, and $|\cdot|$ is the cardinality of the set. The ME varies from 0 for a perfect segmentation to 1 for a totally wrong segmentation.
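Equation 12 can be evaluated directly on a pair of binary masks (0 for background, 1 for object). The following minimal sketch is illustrative only; the helper name is hypothetical and not part of the authors' system:

```python
def misclassification_error(ground_truth, test):
    """Eq. 12: ME = 1 - (|Bo ∩ BT| + |Oo ∩ OT|) / (|Bo| + |Oo|).

    Both arguments are flat sequences of 0 (background) and 1 (object)
    of equal length. Returns 0.0 for a perfect segmentation and 1.0
    for a totally wrong one.
    """
    # Pixels on which both images agree, counted separately for
    # background and object
    bg_match = sum(1 for g, t in zip(ground_truth, test) if g == 0 and t == 0)
    obj_match = sum(1 for g, t in zip(ground_truth, test) if g == 1 and t == 1)
    # |Bo| + |Oo| is simply the total number of pixels in the ground truth
    return 1.0 - (bg_match + obj_match) / len(ground_truth)

# One wrongly labelled pixel out of eight gives ME = 1 - 7/8 = 0.125
gt  = [0, 0, 0, 1, 1, 0, 0, 0]
seg = [0, 0, 1, 1, 1, 0, 0, 0]
print(misclassification_error(gt, seg))  # 0.125
```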
The selection of the attributes belonging to the feature vector is crucial, both for the design of the classifier and for its success. The number of attributes should be kept to the minimum, and the attribute values of objects belonging to the same class should be similar while, at the same time, differing from those of objects belonging to other classes. The feature vector's attributes are selected by manual means.
Finally, the inclusion of one or the other classifier depends on how the learning samples are distributed in the feature space. The parametric classifier is chosen whenever the problem's classes can be separated by means of linear decision functions. Otherwise, the nonparametric classifier is included in the image processing chain.
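As an illustration of a parametric rule with linear decision boundaries (a nearest-centroid stand-in chosen for brevity, not the authors' actual classifier), using the two attributes employed later in the study case, area and circularity:

```python
# Hypothetical sketch: nearest-centroid classification over (area,
# circularity) feature vectors; class names follow the study case.
def train_centroids(samples):
    """samples: dict mapping class name -> list of (area, circularity)."""
    centroids = {}
    for cls, vecs in samples.items():
        n = len(vecs)
        centroids[cls] = (sum(v[0] for v in vecs) / n,
                          sum(v[1] for v in vecs) / n)
    return centroids

def classify(centroids, feature):
    # Assign the class whose centroid is closest in squared Euclidean
    # distance; the boundary between any two classes is a straight line.
    return min(centroids,
               key=lambda c: (centroids[c][0] - feature[0]) ** 2
                           + (centroids[c][1] - feature[1]) ** 2)

training = {  # made-up feature values, for illustration only
    "OK":       [(10, 0.20), (15, 0.30)],
    "punctual": [(200, 0.90), (190, 0.95)],
    "lineal":   [(180, 0.10), (220, 0.15)],
}
centroids = train_centroids(training)
print(classify(centroids, (195, 0.92)))  # punctual
```

In practice, the two attributes would first be normalised to comparable ranges, since an unscaled area term would otherwise dominate the distance.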

Fig. 8 UML activity diagram describing the acquisition and preprocessing

5 Experimental validation. Study case: car headlamp lens

The experimental validation was conducted with a part manufactured by the automotive industry, namely, a car headlamp lens. The lens is a crucial part for the aesthetics of

a headlamp and, for this reason, it has very strict demands as regards quality.

Fig. 9 UML activity diagram describing the design stage

Table 2 MVS adjustment
Scanning the part
  Components               Configuration
  Binary active lighting   Two lighting techniques. Stripe size (in pixels): TB = 6; T = 18. Movement: 1 pixel
  Camera                   Seven components are enabled. Vision sensors include 35-mm optics
  Manipulator              Three positions

Image processing chain
  Vision stage             Algorithm
  Preprocessing            Aspect image. ROI included in 16 aspect images
  Segmentation             Multistate adaptive method
  Classification           Parametric classification component. Learning: 45 samples. Classes: OK, punctual, lineal

For the chosen commercial lens, both transparent and opaque defects with a minimum diameter of 0.5 mm have to be measured (in pixels), located in the vision sensor reference system and classified according to their geometry as punctual or lineal. It is not necessary to indicate the side of the lens on which the defect has been detected. In order to achieve this objective, the MVS adjustment is described in Table 2.

With this MVS adjustment, the seven cameras are aligned, while the manipulator moves to three predefined positions. The images captured have a 0.1-mm/pixel spatial resolution so that, taking into account the features of the cameras included, any region extracted with an area greater than 20 pixels should be considered as a defect. In order to achieve this inspection result with the MVS presented here, it is fundamental that the lens introduced has absolutely no trace of dust or any other particle susceptible of adhering to its surface during its manipulation.

Since the MVS does not include any cleaning mechanism, the size of the defect has been increased to 0.8 mm in diameter. In this way, any regions extracted having an area

greater than 50 pixels, given the spatial resolution of the cameras, shall be considered as defective. We next present two types of results used to evaluate the performance of the MVS. We first show how it is possible to contrast, extract and classify the defects that are hardest to detect, that is, transparent defects. Then, we present the system's capacity to detect aesthetic defects by analysing a series of lenses of the commercial model. The MVS capability is also determined through statistical methods.

5.1 Detection and characterisation of defects

The extraction of information from the captured images depends on the algorithms selected in the image processing chain. Figure 10 shows why it is necessary to configure the lighting system as a source of structured lighting and, therefore, to obtain the aspect image in order to be able to contrast transparent defects. If the lighting system operates just as a backlighting source, it is not possible to reveal this type of defect, and its extraction and characterisation is, therefore, not viable [3]. However, the aspect image clearly shows the defect contrasted against the background, which facilitates the extraction. As regards the region extraction stage, the algorithm applied must be robust enough so as not to identify as a region the pixels belonging to the background, and vice versa. If the image to be processed is the aspect image, because of the distribution of its grey levels, it is convenient to incorporate a method based on unimodal histograms, such as the Rosin algorithm or the multistate adaptive one. Figure 11 compares the segmentation of the aspect image (presented in Fig. 10) using the Otsu algorithm and using the one included in the image processing chain (see Table 2). With the Otsu algorithm, a large portion of the pixels belonging to the background is extracted as regions, which precludes its use for the aspect image segmentation.

The last stage is the classification of the regions extracted as either defective or defect free. The classification algorithm selected in this case is the parametric one, which employs two attributes to include each of the extracted regions in one of the following classes: OK (C1), punctual (C2) and lineal (C3). The feature vector contains two attributes, area and circularity, and has been trained with 45 controlled patterns of each class.

The results of the classification are shown in Fig. 12. In the results, any completely circular region having an area greater than 50 pixels shall be considered as a defect. In this case, the size of defects classified as punctual ranges approximately between 1.4 and 1.6 mm in diameter for regions of 188 and 220 pixels, respectively. Besides, if at least these 50 pixels are aligned, the region is considered as a lineal defect. It should be pointed out that the rest of the regions evaluated would not have been considered as defective if a manual inspection process had been conducted; they correspond to the definitive finish of the part.

5.2 Results of inspection

To evaluate the MVS capacity for aesthetic defect detection, a number of inspections have been done using lenses that had previously been considered as defective in manual quality controls performed by a skilled worker. The analysis was performed on a total of 86 lenses and produced 1,806 aspect images presenting different types of defects. The results of this analysis are shown in Table 3, where the inspected lenses are classified according to the type of defect that justified their exclusion.

The results of both the manual and automated inspection are shown as percentage values with respect to the total number of lenses analysed, and these results are presented in the columns of Table 3. The first column, “Number of samples”, gives the percentage of lenses rejected during manual inspection. The second column, “ND” (no detections), gives the percentage of lenses rejected during manual inspection but accepted during automated inspection. The third column, “FP” (false positives), gives the percentage of lenses in which the automated process has detected defects not identified as such during manual inspection.
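The pixel-area thresholds used above (20 pixels for the 0.5-mm limit, 50 pixels for the relaxed 0.8-mm limit) follow directly from the 0.1-mm/pixel spatial resolution. A quick check for a circular defect (the helper name is illustrative, not part of the system):

```python
import math

def min_area_px(diameter_mm, resolution_mm_per_px=0.1):
    # A circular defect of the given diameter covers pi * r^2 pixels,
    # with the radius expressed in pixels.
    radius_px = (diameter_mm / resolution_mm_per_px) / 2
    return math.pi * radius_px ** 2

print(round(min_area_px(0.5)))  # 20 (the original 0.5-mm limit)
print(round(min_area_px(0.8)))  # 50 (the relaxed 0.8-mm limit)
```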

Fig. 10 Different lighting techniques. Backlighting (left) and aspect image (right)

Fig. 11 Image segmentation. Otsu (left) and multistate adaptive algorithm (right)
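For reference, the unimodal-histogram idea behind the Rosin algorithm compared in Fig. 11 can be sketched as follows; this is a simplified variant (vertical distance to the peak-to-tail chord), not the authors' multistate adaptive method:

```python
# Simplified sketch of Rosin's unimodal thresholding. A chord is drawn
# from the histogram peak to the last non-empty bin; the threshold is
# the bin lying furthest below that chord. (For a fixed chord, the bin
# maximising the vertical distance also maximises the perpendicular
# distance, so the simpler criterion is used here.)
def rosin_threshold(hist):
    peak = max(range(len(hist)), key=hist.__getitem__)
    end = max(i for i, count in enumerate(hist) if count > 0)
    best_i, best_dist = peak, 0.0
    for i in range(peak + 1, end):
        # chord height at bin i, by linear interpolation
        chord = hist[peak] + (hist[end] - hist[peak]) * (i - peak) / (end - peak)
        if chord - hist[i] > best_dist:
            best_i, best_dist = i, chord - hist[i]
    return best_i

# A unimodal grey-level histogram with a long decaying tail
print(rosin_threshold([0, 10, 100, 50, 25, 12, 6, 3, 2, 1, 1, 0]))  # 5
```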

Nondetections of defects can be attributed mainly to the fact that the punctual defect was smaller than 0.8 mm or that it was very close to, or even on, the edge of the lens. The “ROI definition” preprocessing algorithm is used to ensure that a part's edges are not included in the processing since, in the aspect image, both the edges and the defects have the same level of grey.

As regards the “false positives” shown in Table 3, they can be due to the following reasons:

– The lens coating may have detached itself when the lens is fixed to the frame tool, so that the detached bits have become stuck to its surface.
– There may be flecks of dirt on the surface of the lens, such as specks of dust or fluff.
– The superficial finish of the lens, which is not absolutely smooth, may be of the same size as the defects.

However, the MVS may also happen to find a defect that had not been detected during the manual inspection process; by taking this manual analysis as a reference, the defect may be wrongly included as a false positive although it is a real one.

5.3 The machine vision system capability

The measurement system analysis, which is a collection of statistical methods for the analysis of measurement system capability [33], has been used in order to determine the MVS accuracy and precision. The accuracy, also referred to as bias, is the difference between a measurement and a reference value. This reference value has been selected using a sample, affected by a punctual defect, as a pattern. The same operator placed the lens in the MVS frame tool, and it was inspected 10 times. The average of these measurements is our reference value. The bias analysis is given in Table 4 with a 95 % t-confidence interval. The bias is acceptable if [34]

$$\mathrm{bias} - t_{v,1-\alpha/2} \cdot \frac{\sigma_r}{\sqrt{n}} < 0 < \mathrm{bias} + t_{v,1-\alpha/2} \cdot \frac{\sigma_r}{\sqrt{n}} \qquad (13)$$
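The acceptance test of Eq. 13 amounts to checking whether the t-confidence interval around the bias contains zero. A numeric sketch with illustrative values (not the study's data):

```python
import math

def bias_is_acceptable(bias, sigma_r, n, t_crit):
    # Eq. 13: the bias is acceptable when the interval
    # bias +/- t * sigma_r / sqrt(n) contains zero.
    half_width = t_crit * sigma_r / math.sqrt(n)
    return bias - half_width < 0 < bias + half_width

# A small bias with a comfortably wide interval passes; a bias larger
# than the interval half-width does not.
print(bias_is_acceptable(0.05, 1.0, 16, 2.131))  # True
print(bias_is_acceptable(2.00, 1.0, 16, 2.131))  # False
```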

Fig. 12 Region classification for different aspect images (in the top images, only regions larger than 20 pixels are presented and in the bottom images, regions larger than 40 pixels are presented)
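The Gauge R&R statistics of Table 5 and Eqs. 15 and 16 below can be cross-checked with a short calculation from the reported standard deviations; differences in the last digit against the printed table are due to rounding:

```python
import math

# Standard deviations reported in Table 5
rep, repro, pv = 7.99, 0.00, 107.14

grr = math.sqrt(rep ** 2 + repro ** 2)  # combined gauge variation
total = math.sqrt(grr ** 2 + pv ** 2)   # total variation, approx. 107.44
prr = 100 * grr / total                 # Eq. 15: approx. 7.44 %, under the 10 % limit
ndc = 1.41 * pv / grr                   # Eq. 16: approx. 18.9, i.e. 19 categories

print(total, prr, ndc)
```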

Table 3 Performance of the automated inspection system

Defects       Number of samples (%)   ND (%)   FP (%)
Punctual              58.2             3.49     4.65
Lineal                32.5             2.33     1.16
Superficial            9.3             3.49     1.16

where:

– n is the number of measurements.
– $\sigma_r$ is the standard deviation of the measurements, calculated from

  $$\sigma_r = \frac{\max(x_i) - \min(x_i)}{d_2^*} \qquad (14)$$

  where $x_i$, $i = 1, \ldots, n$, is the measurement value and $d_2^*$ is the value associated with the distribution of the average range. It is obtained from the $d_2$ tables, and in this study $d_2^* = 3.553$ [35].
– $t_{v,1-\alpha/2}$ is obtained from the t-distribution tables [35], where $v$ is the degrees of freedom ($v = n - 1$) and $\alpha$ is the level of significance (in our study, $\alpha = 0.05$). With the former parameters and from the t-distribution tables, $t_{v,1-\alpha/2} = 2.145$.

As shown in Table 4, the MVS presents an acceptable bias, since the value zero is included in the 95 % t-confidence interval.

Table 4 Bias study for the MVS

          n     σr       Bias     95 % t-confidence interval
                                    Low        High
Values    15    1.1321   0.0667    −0.3767    0.5100

For determining the MVS precision, the ANOVA-based “Gauge R&R” (repeatability and reproducibility) study was employed as an analytical tool. Ten parts, three operators and three trials are typical for the ANOVA-based Gauge R&R (GRR) study [36], resulting in 90 measurements. Table 5 presents the repeatability, reproducibility and part standard deviations. The value 5.15 corresponds to the limiting value of the standard deviations between bounds of a 95 % tolerance interval that contains at least 99 % of a normal population [33]. The MVS precision is obtained from the percentage of Gauge R&R of total variations (PRR) as

$$\mathrm{PRR} = 100 \cdot \frac{\mathrm{GRR}}{\mathrm{Total}} \qquad (15)$$

Table 5 Gauge R&R study for the MVS

Source                                        σ        5.15·σ    %5.15·σ
Gauge R&R                                     7.99      41.13      7.43
  Repeatability ($\hat\sigma_{repeatability}$)      7.99      41.13      7.43
  Reproducibility ($\hat\sigma_{reproducibility}$)  0.00       0.00      0.00
Part variation (PV)                         107.14     511.76     99.72
Total                                       107.43     553.29
% Gauge R&R of total variations (PRR)                              7.43
Number of distinct categories (NDC)                               19

where:

– $\mathrm{GRR} = \sqrt{\hat\sigma_{repeatability}^2 + \hat\sigma_{reproducibility}^2}$
– $\mathrm{Total} = \sqrt{\mathrm{GRR}^2 + \mathrm{PV}^2}$

According to the Automotive Industry Action Group (AIAG) [37], if the PRR is under 10 %, the measurement system is acceptable; if it is between 10 and 30 %, the measurement system “may be acceptable based upon the importance of the application”; and if it is over 30 %, the measurement system is considered unacceptable. The MVS precision is considered acceptable, as PRR = 7.43.

Another statistic provided and recommended by AIAG is the “number of distinct categories” (NDC), obtained from

$$\mathrm{NDC} = 1.41 \cdot \frac{\mathrm{PV}}{\mathrm{GRR}} \qquad (16)$$

The measurement system is adequate if NDC is greater than or equal to 5 [37]. Since the NDC is 19, the MVS is adequate.

6 Conclusions

The purpose of this work is to present an automated system for the surface quality inspection of transparent parts. One of the advantages of the system is its flexibility to adapt itself to different inspection tasks or different part models. This is possible thanks to the component-based development approach that has been used to design the system's software architecture. It is mainly this versatility that makes the system different from other automated inspection proposals, which are normally custom-made to solve one specific application.

Since the industrial vision system has to cope with tight cycle times, its hardware devices and software algorithms are specifically designed for real-time inspection. For the selection of a suitable position for the vision sensors, a sensor planning system has been developed on the basis of genetic algorithms. As regards the lighting system, the so-called binary active lighting system and preprocessing algorithms are used to provide an image that is fast to process.

Finally, the machine vision system performance was assessed by inspecting a considerable number of headlamp lenses, demonstrating that the automated inspection of this part, and thus of transparent surfaces, is feasible. The MVS capability has been analysed with statistical methods from the measurement system analysis and, according to the Automotive Industry Action Group requirements, the MVS is acceptable.

Acknowledgments This work has been partially supported by the projects DPI2011-27284, TEP2009-5363 and AGR-6429 and by the project Development of a System for Automated Vehicle Headlamp Inspection based on Computer Vision, supported by VALEO ILUMINACIÓN MARTOS S.A.

References

1. Malamas EN, Petrakis EGM, Zervakis M, Petit L, Legat J-D (2003) A survey on industrial vision systems, applications and tools. Image Vis Comput 21:171–188
2. Caulier Y, Bourennane S (2008) An image content description technique for the inspection of specular objects. EURASIP J Adv Signal Process, art no 195263
3. Satorres Martínez S, Gómez Ortega J, Gámez García J, Sánchez García A (2012) A machine vision system for defect characterization on transparent parts with non-plane surfaces. Mach Vis Appl 23(1):1–13
4. Pen X, Chen Y, Yu W, Zhou Z, Sun G (2008) An online defects inspection method for float glass fabrication based on machine vision. Int J Adv Manuf Technol 39:1180–1189
5. Telljohann A (2008) Introduction to building a machine vision inspection. In: Hornberg A (ed) Handbook of machine vision. Wiley, Weinheim. ISBN: 3-527-40584-7
6. Oreizy P, Gorlick M, Taylor R (1999) An architecture-based approach to self-adaptive software. IEEE Intell Syst 14(3):54–62
7. Gámez García J, Gómez Ortega J, Sánchez García A, Satorres Martínez S (2009) Robotic software architecture for multisensor fusion system. IEEE Trans Ind Electron 56(3):766–777
8. Satorres Martínez S, Gómez Ortega J, Gámez García J, Sánchez García A (2009) An automatic procedure to code the inspection guideline for vehicle headlamp lenses. In: IEEE international conference on mechatronics. ISBN: 978-1-4244-4195
9. Fabbri P, Messori M, Toselli M, Rocha J, Pilati F (2009) Enhancing the scratch resistance of polycarbonate with polyethylene oxide-silica hybrid coatings. Adv Polym Technol 27(2):117–126
10. Boby RA, Sonakar PS, Singaperumal M, Ramamoorthy B (2010) Identification of defects on highly reflective ring components and analysis using machine vision. Int J Adv Manuf Technol 52:217–233
11. Golnabi H, Asadpour A (2007) Design and application of industrial machine vision systems. Robot Comput Integr Manuf 23:630–637
12. Torres F, Jiménez LM, Candelas FA, Azorín JM, Agulló R (2002) Automatic inspection for phase-shift reflection defects in aluminium web production. J Intell Manuf 13:151–156
13. Perng D-B, Chou C-C, Chen W-Y (2005) A novel vision system for CRT panel auto-inspection. J Chin Inst Ind Eng 24(5):341–350
14. Liu H-g, Chen Y-p, Peng X-q, Xie J-m (2011) A classification method of glass defect based on multiresolution and information fusion. Int J Adv Manuf Technol 56:1079–1090
15. Adamo F, Attivissimo F, Dinisio A, Savino M (2009) An online defects inspection system for satin glass based on machine vision. In: Instrumentation and measurement technology conference 2009, I2MTC '09. IEEE, Singapore, 5–7 May 2009, art no 5168461, pp 288–293
16. Zhang X-w, Ding Y-q, Lv Y-y, Shi A-y, Liang R-y (2011) A vision inspection system for the surface defects of strongly reflected metal based on multi-class SVM. Expert Syst Appl 38:5930–5939
17. Salis G, Seulin R, Morel O, Meriaudeau F (2007) Machine vision system for the inspection of reflective parts in the automotive industry. Proc SPIE Int Soc Opt Eng 6503:65030O
18. Leopold J, Günther H, Leopold R (2003) New developments in fast 3D-surface quality control. Measurement 33:179–187
19. Aluze D, Merienne F, Dumont C, Gorria P (2002) Vision system for defect imaging, detection, and characterization on a specular surface of a 3D object. Image Vis Comput 20:569–580
20. Killing J, Surgenor BW, Mechefske CK (2009) A machine vision system for the detection of missing fasteners on steel stampings. Int J Adv Manuf Technol 41:808–819
21. Afrah A, Miller G, Fels S (2009) Vision system development through separation of management and processing. In: 11th IEEE international symposium on multimedia, pp 612–617
22. Wang G, Tao L, Di H, Ye X, Shi Y (2012) A scalable distributed architecture for intelligent vision system. IEEE Trans Ind Inf 8(1):91–99
23. Booch G, Rumbaugh J, Jacobson I (2005) Unified modeling language user guide. Addison-Wesley, Reading
24. Jahr I (2008) Lighting in machine vision. In: Hornberg A (ed) Handbook of machine vision. Wiley, Weinheim. ISBN: 3-527-40584-7
25. Szyperski C, Gruntz D, Murer S (2002) Component software: beyond object-oriented programming. Addison-Wesley, Reading
26. Rajkumar B, Thamarai S, Xingchen C (2009) Object oriented programming with Java: essentials and applications. Tata McGraw Hill Education Private Limited, Dharmatala
27. Satorres Martínez S, Gómez Ortega J, Gámez García J, Sánchez García A (2010) An automatic procedure to identify the areas of interest for the automated inspection of headlamp lenses. In: International conference on emerging technologies and factory automation. ISBN: 978-1-4244-6849-2
28. Otsu N (1979) A threshold selection using an iterative selection method. IEEE Trans Syst Man Cybern 9:62–66
29. Rosin PL (2001) Unimodal thresholding. Pattern Recognit 34(11):2083–2096
30. Duda RO, Hart PE, Stork DG (2001) Pattern classification. Wiley, New York. ISBN: 0-471-05669-3
31. Satorres Martínez S, Gómez Ortega J, Gámez García J, Sánchez García A (2009) A sensor planning system for automated headlamp lens inspection. Expert Syst Appl 36(5):8768–8777
32. Yasnoff WA, Mui JK, Bacus JW (1977) Error measures for scene segmentation. Pattern Recognit 9:217–231
33. Burdick RK, Borror CM (2003) A review of measurement system capability analysis. J Qual Technol 37(4):342–354
34. Juran JM, De Feo JA (2010) Juran's quality handbook. McGraw-Hill, New York. ISBN: 978-0071629737
35. Duncan AJ (1986) Quality control and industrial statistics. Irwin Professional Publishing, New York. ISBN: 978-0256035353
36. Smith RR, McCrary SW, Callahan RN (2007) Gauge repeatability and reproducibility studies and measurement system analysis: a multimethod exploration of the state of practice. J Qual Technol 23(1):1–11
37. Automotive Industry Action Group (AIAG) (2002) Measurement system analysis, 3rd edn. AIAG, Southfield
