Real-Time Tiny Part Defect Detection System in Manufacturing Using Deep Learning
ABSTRACT To meet actual intelligent production requirements, we propose a tiny part defect detection method that yields a stable and accurate real-time detection system and removes the need to set the conveyor speed and industrial camera parameters manually when inspecting factory products. First, we consider the important influences of the properties of tiny parts and the environmental parameters of a defect detection system on its stability. Second, we establish a correlation model between the detection capability coefficient of the part system and the moving speed of the conveyor. Third, we propose a defect detection algorithm for tiny parts based on the single shot detector (SSD) network and deep learning. Finally, we combine an industrial real-time detection platform with a missed detection algorithm for mechanical parts based on intermediate variables to address the problem of missed detections. Using a 0.8 cm darning needle as the experimental object, the system achieved its highest defect detection accuracy when the conveyor belt speed was 7.67 m/min.
INDEX TERMS Defect detection, tiny parts, deep learning, SSD, missing detection rate.
detection algorithm. The stability and defect detection ability of the algorithm are indispensable key factors. For example, Trivedi et al. [21] described the significance of camera imaging for defect detection and proposed a single-beam Fourier transform digital holographic interferometric technology for defect detection.

By combining a real-time defect detection system with the attributes of tiny parts, we propose a real-time tiny part defect detection method based on the abovementioned analysis. The experimental results demonstrate that the proposed method achieves superior performance and strong adaptability. The main contributions of this work are as follows.

• The mainstream defect detection methods mainly focus on studying the defect degree of a detection sample. We fully consider the attributes of tiny parts and the environmental parameters of a defect detection system, including industrial camera parameters, illumination, and conveyor speed, and establish a relationship model between the detection capability coefficient of the tiny part system and the moving speed of parts. Thus, the robustness of the defect detection system is improved.

• With the integration of the improved SSD object detection algorithm and the correlation model between the detection capability coefficient and the moving speed of the parts, we analyze the optimal object recognition method (SSD) and propose a tiny part defect detection algorithm based on the SSD and the speed model. The proposed method has higher accuracy than YOLO V3, Faster-RCNN, and the FPN.

• Missed detections are prone to occur when dynamic defect detection is performed on the conveyor belt. Therefore, combining the fiber sensor, the conveyor speed, the detection algorithm, and the attributes of tiny parts, we propose an algorithm for determining the missed detections of tiny parts based on intermediate variables, thereby increasing the stability and accuracy of the system.

The organization of this paper is as follows. In the first part, we review related research on defect detection technology. In the second part, we describe the relationship model between the detection capability coefficient of a defective part system and the moving speed of parts. In the third part, we propose a tiny part defect recognition algorithm based on the SSD. In the fourth part, we design an industrial real-time object detection platform and propose an algorithm to determine the missed detection of tiny parts based on intermediate variables. In the fifth part, we experimentally analyze the proposed method. Finally, we summarize the research work and discuss future research directions.

II. RELATED WORK
Common defect detection methods include filtering, ultrasonic, and machine vision detection. For ultrasonic detection, Tan et al. [22] used computer simulations to study mobile thermal scanning for detecting defects on the underside of mechanical products and adopted a second-order peak differential method to determine the defect depth, which has good accuracy for surface defect detection. However, the errors increased as the defect depth increased. Devivier et al. [23] proposed and applied a damage index based on the virtual field method to detect the defects of mechanical products. However, this index is sensitive to changes in properties such as stiffness. To apply the current method to the defect detection of mechanical products, several tasks should be done to extend the current research scope to the defect detection of curved surfaces. To address the challenge posed by the complexity of trailing pulses in ultrasonic detection, a self-focusing method for trailing pulses was proposed to improve the accuracy of defect location by theoretically deducing the characteristics of trailing pulses [24].

In filtering detection, Zou et al. [25] proposed a real-time X-ray flaw detection method for mechanical products based on Kalman filtering. Zhang et al. [26] used a zero-angle spatial filter and peak search to obtain the time centers of the corresponding signal sources and proposed a new time-variant spatial filtering rearrangement scheme based on a microphone array. The proposed scheme overcame the Doppler distortion in acoustic bearing signals and improved the signal separation quality in the detection of mixed multi-bearing sources.

In machine vision defect detection, Boaretto and Centeno [27] proposed a double image exposure technique for the automatic detection and classification of mechanical product radiographic images. The discontinuities of ''defects'' and ''no defects'' were taken as the indicators for the experiment, and the obtained classifier reached 88.6% accuracy on the test data with the use of semisupervised learning technology. Hajizadeh et al. [28] used high-frequency cameras to detect unmarked defect candidates and improve the imbalance of nondefective image data. Wakaf and Jalab [29] detected the defects of mechanical products using histogram matching from an image background. Martínez et al. proposed the extraction of features from each region of a fused image and developed a machine vision system that detects defects on machined metal parts. This research considered the illumination method in image processing to increase the defect detection accuracy, but it extracted the defective features from the fused image, thereby increasing the time overhead of defect detection and the difficulty of real-time defect detection [30].

In addition, deep learning [31]–[34] is widely used in product defect detection. Yang et al. [31] proposed three-point circle fitting and a convolutional neural network (CNN) to achieve automatic aperture detection; such automatic defect detection systems save time and labor costs. Song et al. [32] considered the defect detection problem of surface damage, surface dirt, and stripped screws; proposed a screw surface defect detection technology based on the CNN; and proved that deep learning technology was better than the traditional template matching technology. Wei et al. [33] used the CNN to classify the defects of a printed circuit board (PCB) and achieved
better classification results for a data set containing 1818 collected images. Liu et al. [19] and Krummenacher et al. [35] proposed deep learning based defect detection methods and showed that the size of the data set, the initialization of the network model, the training mode, and the network hyperparameters affect the performance of the model. In conclusion, the application of deep learning to defect detection is highly significant.

FIGURE 1. Schematic of the tiny part detection (a. Collection of tiny parts. b. Collection of a single sample).

III. RELATION MODEL OF DETECTION CAPABILITY COEFFICIENT AND PART MOTION VELOCITY
The attributes of tiny parts and the environmental parameters of defect detection systems are the main factors that affect the stability of the system. By analyzing the performance and experimental parameters of each part of a system, we define the index that reflects the defect detection quality of the system for tiny parts as the defect detection capability coefficient of tiny parts in unit time, θ.

We define the length and width of the camera field of vision d as Lview and Wview, respectively. The length and width of a tiny part are L1 and W1, correspondingly. The distance of the tiny parts to be positioned from the bottom edge of the field of vision is d1, and the conveyor belt speed is v. The direction of motion is the direction indicated by the arrow, as illustrated in Figure 1.

Figure 1(b) demonstrates that, in accordance with the size of the small part, the camera field of vision d is larger than the size of the tiny parts, that is, Wview > W1 and Lview > L1. The image collection of a tiny part begins when its upper edge coincides with the edge of the camera field of vision, and the collection ends when the upper edge coincides with the edge of the lens. Considering the image processing time of the computer, T is assumed to be the necessary time for the system to respond. If T ≤ t, then the necessary time for collecting an image is expressed as follows:

t = Wview / v. (1)

If the sample image is not collected after time t, then a missed detection will occur. The capability coefficient of the system in detecting tiny parts in unit time is expressed as follows:

θ = Wview / ((D1 + W1) · t). (2)

Formula (1) is substituted into Formula (2) to establish the relationship model between the detection coefficient and the movement speed of the tiny parts:

θ = v / (D1 + W1). (3)

Formula (3) shows that, if only the size of the tiny parts and the camera field of vision are considered, then the defect detection capability coefficient of the system in unit time is directly proportional to the conveyor belt speed when the size of the tiny parts and the camera field of vision remain unchanged. If T > t, then the processing time of the computer is too long, the processing of the current tiny parts will be incomplete, and the next tiny parts to be processed will enter the camera field of vision. We define the time of the missing tiny parts as follows:

Δt = T − t. (4)

In the initial stage, the distance moved by a tiny part in time Δt can be expressed as follows:

ΔS = Δt · v. (5)
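As a worked illustration of Formulas (1)–(5) with an assumed response time (the belt speed and field of vision are the values used later in the experiments): at v = 7.67 m/min ≈ 127.8 mm/s and Wview = 36.00 mm, Formula (1) gives t = 36.0/127.8 ≈ 0.28 s; if the system response time were T = 0.30 s, Formulas (4) and (5) would give Δt ≈ 0.018 s and ΔS ≈ 2.3 mm.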
When ΔS = Wview, the detection of a tiny part is missed. The system loss rate ε is defined as follows:

ε = 50%. (6)

At this time, the defect detection capability coefficient of the system can be expressed as follows:

θ = (1 − ε) · v / (D1 + W1) = v / (2(D1 + W1)). (7)

When d1 ≤ ΔS ≤ Wview and ΔS = d1, the second sample of tiny parts can be identified, but the third sample image cannot be recognized. This case is called data loss. When ΔS > d1, the distance between the n-th sample image and the standard position is given as follows:

Sdis = n · ΔS. (8)

When Sdis = Wview, the situation can be considered an error cycle:

n = Wview / ΔS. (9)

Formulas (4) and (5) are integrated into Formula (9):

n = Wview / (v · (T − t)). (10)

After considering the number of samples to be tested, we assume that the number of samples in the camera field of vision is expressed as follows:

P = Wview / (D1 + W1). (11)

The loss rate of tiny parts at this time is computed as follows:

ε = P · v · (T − t) / Wview × 100%. (12)

The substitution of Formula (11) into Formula (12) yields

ε = v · (T − t) / (D1 + W1) × 100%. (13)

The detection capability coefficient of the system at this time for tiny parts is expressed as follows:

θ = −v² · (T − t) / (D1 + W1)² + v / (D1 + W1). (14)

When the speed of the conveyor belt v, the camera field of vision Wview, and the size of the tiny parts are determined, the defect detection coefficient of the system's tiny parts is inversely proportional to the detection time T of the program. In Formula (14), v considerably influences the coefficient of the tiny parts in the system. When Wview ≤ ΔS ≤ 2Wview,

ε = 2v · (T − t) / (D1 + W1) × 100%. (15)

By analogy, when 2Wview ≤ ΔS ≤ 3Wview,

ε = 3v · (T − t) / (D1 + W1) × 100%. (16)

When (m − 1)Wview ≤ ΔS ≤ mWview (m > 1 and m is a positive integer),

ε = m · v · (T − t) / (D1 + W1) × 100%. (17)

With the integration of Formula (17) into Formula (7), the system defect detection capability coefficient can be calculated as follows:

θ = −v² · m(T − t) / (D1 + W1)² + v / (D1 + W1). (18)

According to the abovementioned analysis, when the lens field Wview, the system response time t, and the distance D1 between two tiny parts are constant, the defect detection capability coefficient of the system is a quadratic function of the conveyor belt speed. Moreover, the quadratic function θ is continuous and differentiable. Setting dθ/dv = 0 in Formula (18) gives the maximum point of θ at v = (D1 + W1) / (2m(T − t)). This speed is the optimal conveyor belt speed, at which the defect detection ability of the system is optimal.
IV. TINY PART DEFECT RECOGNITION ALGORITHM BASED ON SSD
A. SSD OBJECT DETECTION
An SSD network combines the regression idea of YOLO [18] with the anchor mechanism of Faster-RCNN [17]. The regression is adopted to simplify the computational complexity of the neural network and improve the real-time performance of the algorithm. The anchor mechanism is used to extract features with different aspect ratios. In terms of feature extraction ability, this local feature extraction method is more reasonable and effective for a given location than the global feature extraction of YOLO. In addition, the SSD adopts multiscale [36] object feature extraction to express features at different scales. This design helps improve the robustness of detecting objects of different scales. Given these advantages of SSD networks, this study uses an SSD network to detect defects in tiny parts.
1) FEATURE LAYER MAPPING MODEL
In consideration of the different attributes of tiny parts and the idea that an SSD network adopts a multiscale method to obtain multiple feature maps of different sizes, an m-layer feature map is used for model detection. The formula for calculating the scale of the default box of the k-th feature map is as follows:

Sk = Smin + (Smax − Smin) · (k − 1) / (m − 1), k ∈ {1, 2, . . . , m}, (19)

where Smin = 0.2 and Smax = 0.95, which represent the proportion of the default box of the feature layer in the input image. The SSD uses the anchor mechanism to assign different aspect ratios to the default boxes on the same feature layer to enhance the robustness of the default box to the shape of the object. In this work, the default box aspect ratios are r = {1, 2, 1/2, 3, 1/3}. For the class whose aspect ratio is equal to 1, an additional scale S'k = √(Sk · Sk+1) is added. Then, we obtain the following:

w(n,k) = Sk · √(r_n), h(n,k) = Sk / √(r_n), n ∈ {1, 2, 3, 4, 5}, (20)

w(6,k) = h(6,k) = √(Sk · Sk+1). (21)

We set the center of the default box to ((a + 0.5)/|fk|, (b + 0.5)/|fk|), whereby |fk| is the size of the k-th feature map and a, b ∈ {0, 1, 2, . . . , |fk| − 1}. We normalize the coordinates of the default box so that they lie within [0, 1]. The mapping relationship between the default box coordinates on the feature map and the original image coordinates is as follows:

xmin = ((cx − wb/2)/wfeature) · wimg = ((a + 0.5)/|fk| − wk/2) · wimg, (22)
ymin = ((cy − hb/2)/hfeature) · himg = ((b + 0.5)/|fk| − hk/2) · himg, (23)
xmax = ((cx + wb/2)/wfeature) · wimg = ((a + 0.5)/|fk| + wk/2) · wimg, (24)
ymax = ((cy + hb/2)/hfeature) · himg = ((b + 0.5)/|fk| + hk/2) · himg. (25)

Here, (cx, cy) are the coordinates of the default box center on the feature layer; wb and hb are the width and height of the default box, respectively; wfeature and hfeature are the width and height of the feature layer, respectively; and wimg and himg are the width and height of the original image, respectively. After (xmin, ymin, xmax, ymax) are obtained, the object frame coordinates of the k-th layer are mapped to the original image.

2) LOSS FUNCTION
The training of the SSD simultaneously conducts the regressions of the position and the object type. The object loss function is the weighted sum of the confidence loss and the position loss, and its expression is as follows:

L(z, c, l, g) = (1/n) · (Lconf(z, c) + α · Lloc(z, l, g)), (26)

where n is the number of default boxes matching the ground truth object frame; Lconf(z, c) is the confidence loss; Lloc(z, l, g) is the position loss [37]; z is the matching result of the default boxes and the ground truth object boxes of different categories; c is the confidence of the prediction object frame; l is the position information of the prediction object box; g is the position information of the ground truth object frame; and α is a parameter that weighs the confidence loss against the position loss, whereby we set α to 1. In the training process, the SSD algorithm improves the confidence of the prediction box object by reducing the loss function value and enhancing the position reliability of the prediction box. The object detection performance of the model is continuously improved through several optimizations of the object detection results. Hence, an enhanced prediction model is trained.

B. TINY-PART DEFECT DETECTION ALGORITHM BASED ON SSD AND SPEED MODEL
The original color image is converted into a binary image by preprocessing to improve the accuracy of the defect detection of tiny parts. The areas of tiny parts on the conveyor belt are determined by locating these areas to simplify the calculation, and the detected sample image is cropped to remove useless background noise. A boundary detection algorithm is used to determine the four boundaries of the tiny parts and realize an accurate location. The grid is divided to extract the data information of the tiny parts, and the extracted data are transmitted to the subsequent SSD defect identification program to obtain the class of defects. The specific algorithm is described as follows.

Algorithm 1 Tiny Part Defect Detection Algorithm Based on SSD and Speed Model (TP-SSDM)
Input: Image data Xpic
Output: Image data Xpic; predicted class estimates for defect types ppic; part class Pclass
Step 1: Set the image pixels of the sample tiny parts.
Step 2: When the optical fiber sensor transmits the frequency of the optical signal to the industrial camera, the camera begins the collection of the sample image of the tiny parts.
Step 3: Set the background information of the tiny part sample. The industrial camera is set so that the part area is bright and the background color is dark.
Step 4: Use the subpixel edge extraction function select_obj() in Halcon to extract the contours of the tiny parts.
Step 5: Fit the shapes of circles and lines in the detection sample to obtain the position and size information of the detection object, which is the bounding box of the first feature map.
Step 6: The results obtained in Step 3 are integrated into Formulas (19)–(25), the feature information of the k-th feature map is obtained as the output vector of the first layer of the SSD network, and the output vector is used as the input of the second layer.
Step 7: The VGGNet [38] network is used as the training network of the SSD, and convolution and pooling operations are used to obtain the feature vector V of the samples.
Step 8: Formula (26) is used to reduce the value of the loss function and thus improve the confidence of the type of prediction box while enhancing the position reliability of the prediction box.
Step 9: The eigenvector V is used as the input of the softmax() classification function, and the predicted class estimates of defect type ppic and part class Pclass are obtained.
Step 10: Output the predictive probability estimate ppic and part class Pclass.
V. INDUSTRIAL REAL-TIME TINY PART DETECTION EXPERIMENTAL PLATFORM
Figure 2 illustrates the experimental testing platform that was designed by the research group [39]. It includes a conveyor belt, a data processor, a data acquisition sensor, a light source, and parts for mechanical support. A 32-inch industrial touch screen is used for data input and display. The vision sensor device uses a MindVision high-speed industrial camera with an electronic rolling shutter, which can collect high-speed samples that can be tested in real time. The data processor is a Raspberry Pi B3. To ensure sufficient light in the system box, a strip environmental LED light source with adjustable brightness is installed on the box. A biological LED ring light is used for sample imprinting. The workstation that is used to reduce the computing load of the data processor is configured with an Intel Xeon E5-1620 V3 3.5 GHz CPU, 16 GB of memory, and an Nvidia GeForce GTX1080, runs Ubuntu 16.04, and is mainly used for data analysis. The Raspberry Pi and the workstation are equipped with OpenCV 3.1, TensorFlow 1.4, YOLO V3, and the SSD. The Raspberry Pi B3 has a wireless communication module that can realize end-to-end communication between the experimental test platform and the workstation.

To enable the algorithm to detect the defects of different tiny parts quickly and accurately, we combine it with the real-time detection equipment. Then, we design a real-time defect detection process for tiny parts based on Algorithms 1 and 2 and the speed model. The flowchart is shown in Figure 3. When the optical fiber sensor detects the mechanical parts on the conveyor belt, the control module judges the movement of the mechanical parts into the visual field close to the camera system. The control center sends trigger pulses to the image acquisition part, and the trigger pulses are then sent to the industrial camera and the lighting system according to preset procedures and delays. The industrial camera begins capturing images, and the microscope LED ring light is illuminated by a special ring light source; the lighting opening time matches the exposure time of the industrial camera. After the camera captures the image, the digital image is stored in the memory of the processor. The defect detection algorithm based on the SSD is used to process, analyze, and identify the defects of the tiny parts, and the detection result is obtained. The processing result is sent to the control unit for sorting and selecting the defective parts.

We propose an algorithm based on intermediate variables to determine the missed detections of tiny parts. The specific algorithm is described as follows.

Algorithm 2 Missing Detection Algorithm for Mechanical Parts Based on Intermediate Variables (AMP-IV)
Input: Image data Xpic
Output: Loss rate of tiny parts ε
Step 1: Initialize the intermediate variables µi = 0, and set the attributes of the tiny parts to be detected: the number of the tiny parts and the size of the pixels of the captured image.
Step 2: According to the transmission signal of the optical fiber sensor, the industrial camera captures the sample image Xpic.
Step 3: Use Algorithm 1 for defect detection to obtain the predicted class estimation value ppic of the defect type of the detection samples.
Step 4: if Pclass ≠ 0 or ppic ≠ 0, then
    the AMP-IV algorithm does not detect missing tiny parts; the predicted class estimation value is stored (ppic = µ0), and ε ≠ 0.
else
    the loss rate of tiny parts is ε = 0; the AMP-IV algorithm checks for missing tiny parts and returns to Step 3.
end if
Step 5: if µ1 − µ0 ≠ 0 (that is, no missed detection is observed), then
    continue to Step 3 and output ε = 0.
else
    µ1 − µ0 = 0 indicates that AMP-IV has found missing parts between the two tiny parts; output ε = 1.
end if
FIGURE 2. Industrial real-time part detection experimental platform.
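The following minimal Python sketch gives one possible reading of the AMP-IV bookkeeping: predictions for consecutive parts are held in intermediate variables µ0 and µ1, and their difference is compared to flag a missing part. The detector stub and the example stream are illustrative assumptions, not the authors' implementation.

```python
# One possible reading of AMP-IV (Algorithm 2): compare intermediate
# variables holding consecutive prediction outcomes to flag a missed part.
# The example stream below is fabricated purely for illustration.

def amp_iv(prediction_stream):
    """Yield (frame_index, loss_rate) for a stream of (p_class, p_pic) pairs."""
    mu0 = mu1 = 0.0
    for i, (p_class, p_pic) in enumerate(prediction_stream):
        if p_class != 0 or p_pic != 0:     # Step 4: a part was recognized
            mu0, mu1 = mu1, p_pic          # shift the intermediate variables
            epsilon = 0.0 if mu1 - mu0 != 0 else 1.0  # Step 5 comparison
        else:                               # nothing recognized in this frame
            epsilon = 1.0                   # treat as a missed detection
        yield i, epsilon

# The third frame repeats the previous score (mu1 - mu0 = 0), which this
# reading of AMP-IV flags as a missing part between two tiny parts.
stream = [(1, 0.97), (1, 0.93), (1, 0.93), (0, 0.0)]
for frame, eps in amp_iv(stream):
    print(f"frame {frame}: loss rate = {eps}")
```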
VI. EXPERIMENTAL RESULT AND ANALYSIS
A. DATASET
1) EXPERIMENTAL DATA COLLECTION
The four kinds of defects in 0.8 cm darning needles that frequently occur in actual production are crooked shapes, length size errors, endpoint size errors, and wringing. The 0.8 cm darning needle dataset was collected in motion with the high-speed industrial camera on the self-built industrial real-time object detection platform. The distance between the camera and the belt is 10 cm. We collected a 0.8 cm darning needle dataset of 3000 images with constantly changing locations of the detection objects, including 2140 training and 860 testing images. The training and testing images contain 6306 and 2000 0.8 cm darning needle labels, correspondingly. The collected samples are depicted in Figure 4.
FIGURE 5. Influence of other system parameters on the system defect detection capability coefficient: (a) distance between parts and the system defect detection capability coefficient, (b) width of parts and the system defect detection capability coefficient, (c) system response time and the defect detection capability coefficient, and (d) the camera's field of vision and the system defect detection capability coefficient.
FIGURE 6. Defect detection samples. (The figure compares the defect detection results with those of state-of-the-art methods. The first column shows the input images with the ground truths annotated with red rectangles. The other columns show the detection results (green rectangles) of YOLO V3, the FPN, the Faster-RCNN, and our method. Our method can successfully detect most small-size instances that the state-of-the-art methods miss.)
detection capability coefficient, (c) system response time and defect detection capability coefficient, and (d) the camera's field of vision and system defect detection capability coefficient.

A comparison of the influence of each parameter on the system defect detection capability coefficient in Figure 5 indicates that a small distance between two tiny parts corresponds to a narrow part width. When the system processing time is short, the system defect detection capability coefficient is high; when the camera's field of vision is large and the movement speed of the tiny parts is 7.67 m/min, the system defect detection capability coefficient is low. The experiments show that the defect detection capability coefficient of the tiny parts on the experimental platform is highest when the distance between two parts is 2.00 mm, the width of the tiny parts is 8.00 mm, and the camera's field of vision is 36.00 mm.
C. COMPARISON OF PROPOSED ALGORITHM WITH CLASSICAL ALGORITHMS
We use a 0.8 cm darning needle as the experimental object to verify the validity of the algorithm further and compare the proposed algorithm with YOLO V3 [18], Faster-RCNN [17], and the FPN [41]. The settings of the comparison algorithms are similar to those of the SSD experiment [19]. The four kinds of defect data observed in the test dataset are analyzed from subjective and objective perspectives. The test set has 860 images containing 2000 labels of the 0.8 cm darning needle. The detailed data information is summarized in Table 1. The statistical results of the recognition accuracy and predictive probability estimates of the algorithms in the system are presented in Tables 3–6, and the subjective detection results are illustrated in Figure 6.

1) ANALYSIS OF SUBJECTIVE TEST RESULTS
Figure 6 depicts the experimental results of the subjective defect detections of the algorithm in this work and the compared algorithms. The figure demonstrates that YOLO V3 and Faster-RCNN cannot reliably detect the size error (fat point size) defects. This result occurs mainly because YOLO V3 must segment the sample image into a 7 × 7 grid; the neurons inside each cell that detect the object suffer from information loss, thereby resulting in a model with a strong spatial constraint in the process of feature extraction. If the grid contains small, similar detection tasks, then the system cannot simultaneously detect all the objects. The Faster-RCNN detects an object based on information such as color and image edges, thus denoting its insufficient ability to detect weak objects. The bounding box is remarkably larger in the FPN than in the method proposed in the present work, and the bounding box regression of the FPN does not converge. This work bases its defect detection on the SSD. The SSD network extracts detection frames from multiple feature layers and improves the accuracy of tiny object bounding box regression. The FPN can also obtain the defect type and provide the approximate locations of the defects accurately. However, accurate and appropriate object location is crucial to calculating the grasping position of the manipulator in an actual robotic grabbing task.
2) OBJECTIVE ANALYSIS OF TEST RESULTS
Tables 3–6 correspond to the detection results of the proposed method and the comparison algorithms on the four defect type datasets. Table 3 reflects that, under Defect Type 1, the predictive probability estimates are 3.32%, 5.20%, and −1.60% higher in the proposed method than in YOLO V3, Faster-RCNN, and the FPN, respectively. The accuracy rates are 5.20%, 6.80%, and 3.00% higher than those of the correspondingly compared algorithms. The location times are −0.05, 0.83, and 0.12 less than those of the respectively compared algorithms. The missed detection rate is 2.40% lower than that of the Faster-RCNN.

In Table 4, under Defect Type 2, the predictive probability estimate and accuracy of the proposed method are 95.36% and 99.00%, correspondingly. The FPN has the highest predictive probability estimation and accuracy among the compared algorithms; the estimated probability and accuracy of the proposed method are 2.47% and 1.20% higher than those of the FPN, respectively. In Table 5, in the case of Defect Type 3, the prediction probability estimate and accuracy of the proposed method are 2.37% and 3.40% higher, correspondingly, than those of the FPN, the best comparison algorithm. The location time is 0.10 less than that of YOLO V3, the fastest comparison algorithm, and the missed detection rate is the same as that of YOLO V3, the lowest among the comparison algorithms. In the case of Defect Type 4, the prediction probability estimate and accuracy of the proposed method are 4.74% and 6.80% higher, respectively, than those of the FPN. The location time is 0.10 less than that of YOLO V3, and the missed detection rate is 0.60% less than that of YOLO V3, the lowest comparison algorithm.

Further observation of Figure 7 indicates that the accuracy is higher in the proposed method than in the compared algorithms under all four defect types. The location time of YOLO V3 is the shortest, and the location time is shorter in the proposed method than in the Faster-RCNN and the FPN. In terms of the relevant datasets and experimental parameters, the missed detection rate is lower in the proposed method than in the comparison algorithms. The training time is 32 h shorter in the proposed method than in YOLO V3 and 55 h and 16 h shorter than in the Faster-RCNN and the FPN, respectively.

In summary, the accuracy is higher in the proposed method than in the comparison algorithms because an anchor mechanism is used to extract features with different aspect ratios. The proposed local feature extraction method is more reasonable and effective for a given location than the global feature extraction method of YOLO V3 in terms of feature extraction ability. However, YOLO V3 treats sample detection as a single regression problem that directly uses image pixel optimization to detect the sample bounding box location and conduct classification; hence, the proposed method has a longer location time than YOLO V3. The average missed detection rate of the proposed method is approximately 1.00%. We propose a missed detection algorithm for tiny parts based on intermediate variables, which can effectively identify missed detections in industrial processes. The region proposals extracted by the Faster-RCNN overlap with one another, and the overlapping parts become repeatedly extracted features. The feature extraction process of the Faster-RCNN increases the positioning time of the model, thereby increasing the missed detection rate. The longer model training time of the proposed method compared with that of YOLO V3 is caused by the numerous parameters of the SSD-based method. This problem must be solved in future studies. Defect Type 4 has a large similarity to the standard 0.8 cm darning needle; thus, its accuracy is lower.
3) TINY PART DEFECT DETECTION RESULTS FOR DIFFERENT CAMERA FIELDS OF VISION AND VARIOUS CONVEYOR BELT SPEEDS
In Section III, the relation model of the detection capability coefficient and part motion velocity, we theoretically confirmed that the defect detection capability for small parts is optimal when Wview is 36.00 mm and the conveyor speed v is 7.67 m/min. Our defect detection algorithm also obtained better performance on the test set than the comparison algorithms in the preceding subsection. In this part, we combine the speed model and the TP-SSDM defect detection method to verify the actual application on the defect detection simulation platform. We set the simulation platform conveyor motor speed range to v = [0 m/min, 10 m/min] and consider the performance of the proposed method when the actual field of vision Wview of the industrial camera is 32.00, 36.00, or 40.00 mm. We deploy the algorithm on a workstation with one GTX1080. The workstation connects to the industrial camera on the simulation platform through the USB serial port to test the defect detection accuracy, the missed detection rate, and the average number of detections within 10 s at different conveyor speeds with 1000 defect-type parts. Table 7 displays the average result of 10 test runs.

Table 7 presents that, when the velocity is v = 1 m/min, the recognition accuracy of defect detection is the highest; the accuracy rates for Wview = 32.00, 36.00, and 40.00 mm are 86.87%, 93.26%, and 87.23%, respectively. The average number of defect detections is the lowest; the detection numbers for Wview = 32.00, 36.00, and 40.00 mm are 10, 13, and 15, correspondingly. Furthermore, the missed detection rate is the lowest; the missed detection rates for Wview = 32.00, 36.00, and 40.00 mm are 2.35%, 1.20%, and 2.11%, respectively. This finding is due to the slow movement of the conveyor belt: the camera captures clear sample images, the missed detection rate decreases, the detection number decreases, and the accuracy rate of the algorithm increases.
TABLE 7. Tiny part defect detection results for different camera fields of vision and different speeds of the conveyor belt.
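A minimal Python sketch of this evaluation protocol is shown below; run_trial() is an assumed placeholder for one pass of the deployed TP-SSDM detector on the platform, and the returned metrics are dummies so the sketch stays runnable.

```python
# Sketch of the Table 7 protocol: for each camera field of vision and belt
# speed, run 10 trials of 1000 parts and average the metrics. run_trial()
# is an assumed stand-in for one pass of the deployed TP-SSDM detector.

def run_trial(speed_m_min, w_view_mm, n_parts=1000):
    """Return (accuracy %, missed rate %, detections per 10 s) for one run."""
    # Placeholder: replace with a call into the deployed detector.
    return 90.0, 2.0, int(9 * speed_m_min)

def sweep(speeds, fields_of_view, repeats=10):
    results = {}
    for w in fields_of_view:
        for v in speeds:
            trials = [run_trial(v, w) for _ in range(repeats)]
            results[(w, v)] = tuple(sum(m) / repeats for m in zip(*trials))
    return results

# Example grid matching the text: v in [1, 10] m/min, Wview in {32, 36, 40} mm.
grid = sweep(speeds=range(1, 11), fields_of_view=(32.0, 36.0, 40.0))
print(grid[(36.0, 8)])
```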
When the speed is v = 10 m/min, the defect detection has the lowest recognition accuracy, and the defect detection sample number and the missed detection rate are the highest. This finding occurs because the number of captured samples increases with the speed of the conveyor belt; therefore, the number of detections rises. However, the sharpness of the pictures is degraded, the missed detection rate of the proposed algorithm increases, and the recognition accuracy decreases.

Using the lowest conveyor belt speed obtains the highest defect detection accuracy, but it also affects the production cycle in actual defect detection. Although the fastest conveyor speed increases the number of tiny parts tested, it reduces the defect detection accuracy for tiny parts. Therefore, when the conveyor speed is between v = 5 and 8 m/min, the missed detection rate, accuracy rate, and detection number are within acceptable ranges for the enterprise. This finding further confirms our previous theoretical derivation. When Wview = 36.00 mm and the conveyor speed is v = 7.67 m/min, the system has the optimal defect detection capability for small parts: the missed detection rate is 3.60%, the accuracy rate is 85.50%, and the detection number is 91.

Based on the abovementioned analysis, the proposed method obtains the optimal defect detection system capability coefficient considering factors such as the conveyor speed, the part size attributes, the camera field of vision, the missed detection rate, and the detection number. Given the complexity of the actual working conditions and the jitter, the recognition accuracy of the proposed method on the simulation platform is lower than that on the test dataset on the PC.
VII. CONCLUSION
We proposed a real-time tiny part defect detection system for manufacturing using an end-to-end CNN algorithm. This system is based on the defects of a 0.8 cm darning needle, such as crooked shapes, length and endpoint size errors, and wringing. Subsequently, we studied the correlation model between the part coefficients identified by the defect detection system and the moving speed of the parts. We obtained HD detected samples and a stable tiny part data acquisition system using the optimal speed and an appropriate camera position, respectively. Our study used the advanced object detection algorithm SSD, which can detect the defects of tiny parts accurately and efficiently. The proposed method is applicable to the defect detection of 0.8 cm darning needles and other tiny parts.

1) We proposed a relationship model between the detection capability coefficient of defective part systems and the movement speed of parts and obtained the best transmission belt speed of the defective part detection system. This method improves the system stability for tiny part defect detection.

2) The end-to-end defect detection model was trained based on an SSD algorithm using the defect data of a 0.8 cm darning needle. The accuracy rates of the model for Defect Types 1, 2, 3, and 4 were 98.00%, 99.00%, 97.80%, and 79.40%, respectively. Compared with the compared object detection algorithms, our proposed SSD-based tiny part defect detection algorithm is more suitable for the defect detection of 0.8 cm darning needles.

3) We built a real-time industrial detection platform and proposed an algorithm for mechanical part omission detection based on intermediate variables. The platform can effectively detect missing tiny parts and provides a theoretical reference for the practical application of defect detection technology.

In addition, for occluded objects, adding an attention mechanism will provide more contextual information and offer new ideas for the automated detection of tiny part defects.

REFERENCES
[1] M. Rezaei-Malek, M. Mohammadi, J.-Y. Dantan, A. Siadat, and R. Tavakkoli-Moghaddam, "A review on optimisation of part quality inspection planning in a multi-stage manufacturing system," Int. J. Prod. Res., pp. 1–18, Apr. 2018. doi: 10.1080/00207543.2018.1464231.
[2] Q. Song, W. Ding, H. Peng, J. Gu, and J. Shuai, "Pipe defect detection with remote magnetic inspection and wavelet analysis," Wireless Pers. Commun., vol. 95, no. 3, pp. 2299–2313, Aug. 2017.
[3] S. W. Oh, D.-B. Yoon, G. J. Kim, J.-H. Bae, and H. S. Kim, "Acoustic data condensation to enhance pipeline leak detection," Nucl. Eng. Des., vol. 327, pp. 198–211, Feb. 2018.
[4] C. Wu, S. Yao, and B. Corinne, "Leakage current study and relevant defect localization in integrated circuit failure analysis," Microelectron. Rel., vol. 55, nos. 3–4, pp. 463–469, Feb. 2015.
[5] M. A. J. Bouwens, D. J. Maas, J. C. J. van der Donck, P. F. A. Alkemade, and P. van der Walle, "Enhancing re-detection efficacy of defects on blank wafers using stealth fiducial markers," Microelectron. Eng., vol. 153, pp. 48–54, Mar. 2016.
[6] J.-P. Wang, Y. Wu, and T.-W. Zhao, "Short critical area model and extraction algorithm based on defect characteristics in integrated circuits," Analog Integr. Circuits Signal Process., vol. 91, no. 1, pp. 83–91, Apr. 2017.
[7] T. Li, L. Gao, P. Li, and Q. Pan, "An ensemble fruit fly optimization algorithm for solving range image registration to improve quality inspection of free-form surface parts," Inf. Sci., vols. 367–368, pp. 953–974, Nov. 2016.
[8] J. Wang, Y. Ma, L. Zhang, R. X. Gao, and D. Wu, "Deep learning for smart manufacturing: Methods and applications," J. Manuf. Syst., vol. 48, pp. 144–156, Jul. 2018.
[9] W. Zhou, M. Fei, H. Zhou, and K. Li, "A sparse representation based fast detection method for surface defect detection of bottle caps," Neurocomputing, vol. 123, pp. 406–414, Jan. 2014.
[10] F. R. López-Estrada, D. Theilliol, C. M. Astorga-Zaragoza, J. C. Ponsart, G. Valencia-Palomo, and J. Camas-Anzueto, "Fault diagnosis observer for descriptor Takagi–Sugeno systems," Neurocomputing, vol. 331, pp. 10–17, Feb. 2018.
[11] H. Shao, H. Jiang, H. Zhang, and T. Liang, "Electric locomotive bearing fault diagnosis using a novel convolutional deep belief network," IEEE Trans. Ind. Electron., vol. 65, no. 3, pp. 2727–2736, Mar. 2018.
[12] Y. Li, D. Zhang, and D.-J. Lee, "Automatic fabric defect detection with a wide-and-compact network," Neurocomputing, vol. 329, pp. 329–338, Feb. 2019.
[13] J. Lei, X. Gao, Z. Feng, H. Qiu, and M. Song, "Scale insensitive and focus driven mobile screen defect detection in industry," Neurocomputing, vol. 294, pp. 72–81, Jun. 2018.
[14] W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, and F. E. Alsaadi, "A survey of deep neural network architectures and their applications," Neurocomputing, vol. 234, pp. 11–26, Apr. 2017.
[15] Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, "Deep learning for visual understanding: A review," Neurocomputing, vol. 187, pp. 27–48, Apr. 2016.
[16] J. Yang and G. Yang, "Modified convolutional neural network based on dropout and the stochastic gradient descent optimizer," Algorithms, vol. 11, no. 3, pp. 28–43, Mar. 2018.
[17] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 91–99.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 779–788.
[19] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in Proc. Eur. Conf. Comput. Vis. (ECCV), Amsterdam, The Netherlands, Sep. 2016, pp. 21–37.
[20] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Venice, Italy, Oct. 2017, pp. 2980–2988.
[21] V. Trivedi, M. Joglekar, S. Mahajan, N. Patel, V. Chhaniwal, B. Javidi, and A. Anand, "Digital holographic imaging of refractive index distributions for defect detection," Opt. Laser Technol., vol. 111, pp. 439–446, Apr. 2019.
[22] Y. C. Tan, W. K. Chiu, and N. Rajic, "Quantitative defect detection on the underside of a flat plate using mobile thermal scanning," Procedia Eng., vol. 188, pp. 493–498, May 2017.
[23] C. Devivier, F. Pierron, and M. R. Wisnom, "Impact damage detection in composite plates using deflectometry and the virtual fields method," Composites A, Appl. Sci. Manuf., vol. 48, pp. 201–218, May 2013.
[24] Y. Shao, L. Zeng, J. Lin, W. Wu, and H. Zhang, "Trailing pulses self-focusing for ultrasonic-based damage detection in thick plates," Mech. Syst. Signal Process., vol. 119, pp. 420–431, Mar. 2019.
[25] Y. Zou, D. Du, B. Chang, L. Ji, and J. Pan, "Automatic weld defect detection method based on Kalman filtering for real-time radiographic inspection of spiral pipe," NDT E Int., vol. 72, pp. 1–9, Jun. 2015.
[26] S. Zhang, Q. He, K. Ouyang, and W. Xiong, "Multi-bearing weak defect detection for wayside acoustic diagnosis based on a time-varying spatial filtering rearrangement," Mech. Syst. Signal Process., vol. 100, pp. 224–241, Jul. 2017.
[27] N. Boaretto and T. M. Centeno, "Automated detection of welding defects in pipelines from radiographic images DWDI," NDT E Int., vol. 86, pp. 7–13, Mar. 2017.
[28] S. Hajizadeh, A. Núñez, and D. M. J. Tax, "Semi-supervised rail defect detection from imbalanced image data," IFAC-PapersOnLine, vol. 49, no. 3, pp. 78–83, 2016.
[29] Z. Wakaf and H. A. Jalab, "Defect detection based on extreme edge of defective region histogram," J. King Saud Univ.-Comput. Inf. Sci., vol. 30, no. 1, pp. 33–40, Jan. 2018.
[30] S. S. Martínez, C. O. Vázquez, J. G. García, and J. G. Ortega, "Quality inspection of machined metal parts using an image fusion technique," Measurement, vol. 111, pp. 374–383, Dec. 2017.
[31] Y. Yang, Y. Lou, M. Gao, and G. Ma, "An automatic aperture detection system for LED cup based on machine vision," Multimedia Tools Appl., vol. 77, no. 18, pp. 23227–23244, Sep. 2018.
[32] L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Yang, "Detection of micro-defects on metal screw surfaces based on deep convolutional neural networks," Sensors, vol. 18, no. 11, pp. 3709–3723, Oct. 2018.
[33] P. Wei, C. Liu, M. Liu, Y. Gao, and H. Liu, "CNN-based reference comparison method for classifying bare PCB defects," J. Eng., vol. 2018, no. 16, pp. 1528–1533, Nov. 2018.
[34] J. C. P. Cheng and M. Wang, "Automated detection of sewer pipe defects in closed-circuit television images using deep learning techniques," Automat. Construct., vol. 95, pp. 155–171, Nov. 2018.
[35] G. Krummenacher, C. S. Ong, S. Koller, S. Kobayashi, and J. M. Buhmann, "Wheel defect detection with machine learning," IEEE Trans. Intell. Transp. Syst., vol. 19, no. 4, pp. 1176–1187, Apr. 2018.
[36] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos, "A unified multi-scale deep convolutional neural network for fast object detection," in Proc. Eur. Conf. Comput. Vis. (ECCV), Cham, Switzerland, Sep. 2016, pp. 354–370.
[37] O. Oktay, E. Ferrante, K. Kamnitsas, M. Heinrich, W. Bai, J. Caballero, S. A. Cook, A. de Marvao, T. Dawes, and D. P. O'Regan, "Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation," IEEE Trans. Med. Imag., vol. 37, no. 2, pp. 384–395, Feb. 2018.
[38] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, arXiv:1409.1556. [Online]. Available: https://arxiv.org/abs/1409.1556
[39] J. Yang, S. Li, Z. Gao, Z. Wang, and W. Liu, "Real-time recognition method for 0.8 cm darning needles and KR22 bearings based on convolution neural networks and data increase," Appl. Sci., vol. 8, no. 10, pp. 1857–1875, Oct. 2018.
[40] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Netw., vol. 61, pp. 85–117, Jan. 2015.
[41] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proc. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Nov. 2017, pp. 2117–2125.
JING YANG received the B.Sc. degree from Anyang Normal University, in 2015. He is currently pursuing the Ph.D. degree with Guizhou University, China. He received a scholarship from the China Scholarship Council (CSC), from 2018 to 2019, under the State Scholarship Fund to study at Oklahoma State University as a joint Ph.D. student with the institute for mechatronic engineering, where he joined Prof. G. Fan's group in the U.S. His main research interests include machine vision and deep learning in indoor robots and smart manufacturing applications.
SHAOBO LI was a Professor with the School of Mechanical Engineering, Guizhou University (GZU), China. He has been the Dean of the School of Mechanical Engineering, GZU, since 2015. He was the Vice Director of the Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, GZU, from 2007 to 2015. His research has been supported by the National Science Foundation of China (NSFC) and the National High-Tech R&D Program (863 Program). His main research interests include intelligent manufacturing and big data.

GUANCI YANG received the Ph.D. degree in computer software and theory from the University of Chinese Academy of Sciences, in 2012. He is currently a Professor with the Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, China. His current areas of interest mainly include computational intelligence, human–robot interaction, and intelligent control systems.