
materials

Article
Defect Detection of Composite Material Terahertz Image Based
on Faster Region-Convolutional Neural Networks
Xiuwei Yang 1,2,3 , Pingan Liu 4 , Shujie Wang 1 , Biyuan Wu 5,6 , Kaihua Zhang 7 , Bing Yang 8 and Xiaohu Wu 6, *

1 Institute of Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
2 Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences,
Beijing 100190, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Qingdao Quenda Terahertz Technology Co., Ltd., Qingdao 266100, China
5 Basic Research Center, School of Power and Energy, Northwestern Polytechnical University,
Xi’an 710072, China
6 Shandong Institute of Advanced Technology, Jinan 250100, China
7 Henan Key Laboratory of Infrared Materials & Spectrum Measures and Applications, School of Physics,
Henan Normal University, Xinxiang 453007, China
8 Centre for Advanced Laser Manufacturing (CALM), School of Mechanical Engineering, Shandong University
of Technology, Zibo 255000, China
* Correspondence: [email protected]

Abstract: Terahertz (THz) nondestructive testing (NDT) technology has been increasingly applied to the internal defect detection of composite materials. However, the THz image is affected by background noise and power limitation, leading to poor THz image quality. The recognition rate based on traditional machine vision algorithms is not high. The above methods are usually unable to determine surface defects in a timely and accurate manner. In this paper, we propose a method to detect the internal defects of composite materials by using terahertz images based on a faster region-convolutional neural networks (faster R-CNN) algorithm. Terahertz images showing internal defects in composite materials are first acquired by a terahertz time-domain spectroscopy system. Then the terahertz images are filtered, the blurred images are removed, and the remaining images are enhanced with data and annotated with image defects to create a dataset consistent with the internal defects of the material. On the basis of the above work, an improved faster R-CNN algorithm is proposed in this paper. The network can detect various defects in THz images by changing the backbone network, optimising the training parameters, and improving the prior box algorithm to improve the detection accuracy and efficiency of the network. By taking the commonly used composite sandwich structure as a representative, a sample with typical defects is designed, and the image data are obtained through the test. Comparing the proposed method with other existing network methods, the former proves to have the advantages of a short training time and high detection accuracy. The results show that the mean average precision (mAP) without data enhancement reached 95.50%, and the mAP with data enhancement reached 98.35% and exceeded the error rate of human eye detection (5%). Compared with the original faster R-CNN algorithm of 84.39% and 85.12%, the improvement is 11.11% and 10.23%, respectively, which demonstrates superb feature extraction capability and reduces the occurrence of network errors and omissions.

Keywords: defect detection; composite material; terahertz image; faster R-CNN

Citation: Yang, X.; Liu, P.; Wang, S.; Wu, B.; Zhang, K.; Yang, B.; Wu, X. Defect Detection of Composite Material Terahertz Image Based on Faster Region-Convolutional Neural Networks. Materials 2023, 16, 317. https://doi.org/10.3390/ma16010317

Academic Editor: Polina P. Kuzhir
Received: 4 October 2022; Revised: 2 December 2022; Accepted: 21 December 2022; Published: 29 December 2022

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Composite structures are widely used in aerospace, automotive, shipbuilding, and other fields thanks to their advantages, such as low specific gravity and good fatigue resistance. Therefore, composite materials play a very important role in the development of modern science and technology. However, in the process of composite material preparation, defects such as delamination, debonding, voids, and inclusions will inevitably


occur because of the process, environment, and other factors. It is crucial to detect these
defects to avoid certain hazards. The currently available general inspection means are ultrasonic inspection [1], infrared nondestructive testing [2], X-ray, thermal wave detection [3],
etc. Nevertheless, these detection methods cannot correctly and clearly detect defects in
composite materials.
Terahertz time-domain spectroscopy imaging, as a noncontact, penetrating nondestructive testing (NDT) technique, can characterise nonmetallic materials with multilayer
structures [4–6]. Compared with the ultrasonic testing technology, the terahertz NDT
technology does not need to use a coupling agent in the testing process, which reduces
the pollution on the surface of the tested object. Compared with X-ray, THz wave has
less energy and no ionisation damage to the human body and the detected object, so it
offers more safety. Compared with NIR technology, THz technology has a strong anti-
interference ability, and THz waves can penetrate nonpolar and nonmetallic materials such
as clothing and paper boxes [7]. The THz wave is transient in the time domain, from which the depth and thickness of materials can be extracted, and broadband in the frequency domain, which reflects the spectral differences between substances and can be used for substance identification. Therefore, THz imaging technology has great application potential for the defect detection of substances.
In 2015, Guo et al. [7] used a THz-TDS system and a return oscillator continuous
terahertz wave system to detect delamination, intercalated metal, and thermal damage
defects in glass fibre composites. In 2019, Zhang et al. [8] performed defect detection on
composite materials using terahertz reflectance laminar imaging and compared it with the
B-scan imaging method. Yet the diffraction effect of terahertz waves can lead to the blurring
of image edges, and the interference effect can also lead to the presence of significant bright
and dark streaks in the image. Terahertz images can also be affected by factors such as
the environment, which can lead to suboptimal image quality [9]. These factors introduce strong subjective interference into the manual recognition of defects in terahertz images and can lead to misjudgements.
Defect detection methods based on deep learning are classified into single-stage and
two-stage. The main single-stage networks are SSD [10], YOLO [11–15] series networks, and
so on. The main two-stage networks are R-CNN [16], fast R-CNN [17], faster R-CNN [18],
mask R-CNN [19], and so on. In 2019, Hou et al. [20] proposed an image-detection algorithm
based on an improved faster R-CNN, which obtained a high accuracy and detection rate.
In 2022, Xue et al. proposed the mask-CGANs model and used the RetinaNet detection
network [21] to build a conditional generative adversarial network used for the accurate
segmentation of terahertz images [22].
In this paper, a glass fibre composite sample with three kinds of embedded defects
is designed, and the terahertz image of the sample is obtained by scanning imaging with
the terahertz time-domain spectral system. Then, an improved faster region-convolutional
neural network (faster R-CNN) is proposed to realise the intelligent recognition of THz im-
ages. By comparing the detection results with those of two existing detection networks, the
accuracy is improved by 11.11% and 10.23%, respectively, which proves that the proposed
method can effectively reduce missed detection. THz-NDT technology is developing in a
real-time and intelligent direction, so it is of great significance to explore new intelligent
detection methods to improve detection efficiency.

2. Principles and Methods


2.1. Faster R-CNN
Faster R-CNN is a two-stage network model first proposed by Ren et al. in 2015 [18].
The first stage obtains the candidate frames by feature extraction using the backbone net-
work, and the second stage classifies the contents of the candidate frames. The commonly
used backbone networks are VGG [23], ResNet [24], Xception [25], etc., and the selection
of one has a crucial impact on detection accuracy. The algorithm has the characteristics of
fast operation speed and high detection accuracy, so it is widely used to detect targets in
images and more. In this section, the overall architecture of faster R-CNN is illustrated and introduced using ResNet as the backbone network. As shown in Figure 1, the faster R-CNN network can be divided into four parts:
(1) Feature extraction network. The feature extraction of the input image mainly uses a convolutional neural network to obtain the feature map of the image.
(2) Candidate region generation network RPN (region proposal network). It is used to generate candidate regions where detection targets may exist. A more accurate detected region is obtained by classifying and regressing the predefined anchor frames on the feature map obtained in the previous step. RPN can improve the efficiency of candidate region selection and greatly reduce network time consumption.
(3) ROI (region of interest) pooling. On the one hand, the corresponding feature vectors are extracted for the candidate regions. On the other hand, the feature maps corresponding to the candidate regions are adjusted to a fixed size to facilitate subsequent accurate classification.
(4) Classification and regression. Softmax is used to classify the feature vectors to determine the categories. Then the exact position is selected for the detection box by using bbox_pred.

Figure 1. Faster R-CNN network framework diagram.

The Softmax function is shown below.

y_k = e^{a_k} / (∑_{i=1}^{n} e^{a_i})    (1)

where a represents an array, a_k represents the k'th element in the array a, and y_k represents the Softmax value of the k'th element.
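As a quick illustration, Equation (1) can be written directly in Python (a minimal sketch; the scores below are arbitrary example values, not from the paper):

```python
import numpy as np

def softmax(a):
    """Equation (1): y_k = exp(a_k) / sum_i exp(a_i) for an array a."""
    e = np.exp(a - np.max(a))  # subtracting the max improves numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # arbitrary example class scores
probs = softmax(scores)             # sums to 1; the largest score gets the largest probability
```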
2.2. The Improved Faster R-CNN
The improved faster R-CNN network consists mainly of three parts: an improved backbone network, resetting the anchor frame of the dataset, and the Bayesian optimisation of hyperparameters.
(1) Backbone network improvement
The original faster R-CNN algorithm uses VGG16 in the backbone network feature extraction module to complete the image feature extraction [26]. To extract more comprehensive features of the composite sandwich structure, the Resnet50 network, which can extract image detail information, is selected for the feature extraction of the composite sandwich structure in this paper. In addition, in the detection of defects in composite sandwich structures, there are currently two problems: defects occupy a small area of the whole image, and a small percentage of information is obtained. To avoid the redundancy of useless information, this paper adds another layer of 2 × 2 average pooling (avg-pool) to integrate spatial information before downsampling the convolution kernel of the residual module, which can prevent overfitting in this layer. In addition, to enhance the perceptive field of the network with respect to the input features and to more thoroughly extract the low-level detail information and high-level semantic information of the input image, a layer of modules containing convolution, batch normalisation, and a rectified linear unit (ReLU) activation is built into the residual network. Compared with the faster R-CNN network model, the improved residual structure of the algorithm extracts more information from the input feature layer and makes the network more expressive. The improved residuals module is shown in Figure 2.

Figure 2. Improved residuals module.
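The 2 × 2 average-pooling step described above can be illustrated with a small NumPy sketch (this shows only the pooling operation on a single-channel feature map, assuming even height and width; it is not the authors' network code):

```python
import numpy as np

def avg_pool_2x2(x):
    """2 x 2 average pooling with stride 2 on an (H, W) feature map
    (H and W assumed even), integrating spatial information before
    the downsampling convolution of the residual module."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)   # toy single-channel feature map
pooled = avg_pool_2x2(fmap)            # shape (2, 2); each entry averages a 2 x 2 block
```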
(2) Anchor boxes for resetting datasets
① Anchor box estimation
Anchor box is a classic concept in target detection, providing faster detection and solving multiscale target problems. To avoid the problem of an imbalance between positive and negative samples due to the use of generic anchor frames in the dataset of this paper, this paper proposes an improved Kmeans algorithm for estimating anchor frames and reclustering the dataset to obtain a priori frames suitable for detecting the dataset of this paper.
② Improved Kmeans algorithm
The detection targets in this paper take the form of circles, triangles, and quadrilaterals. Most of the a priori boxes generated by clustering of the dataset are of similar size and similar aspect ratio. To make the improved faster R-CNN algorithm quickly and accurately predict the location of the target, an improved Kmeans process is used for selecting the nine cluster-class centres one by one, as shown in Figure 3.

Figure 3. Flowchart of improved Kmeans clustering algorithm.
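Anchor-frame clustering of this kind is commonly implemented with a 1 − IoU distance between box sizes, as in YOLO-style anchor estimation; the sketch below is a generic version on hypothetical box sizes, not the paper's improved algorithm or dataset:

```python
import numpy as np

def iou_wh(boxes, centres):
    """Pairwise IoU between (w, h) boxes and cluster centres, with all
    boxes assumed aligned at a common corner."""
    inter_w = np.minimum(boxes[:, None, 0], centres[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], centres[None, :, 1])
    inter = inter_w * inter_h
    areas = boxes[:, 0] * boxes[:, 1]
    c_areas = centres[:, 0] * centres[:, 1]
    return inter / (areas[:, None] + c_areas[None, :] - inter)

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster labelled-box (w, h) sizes into k a priori anchor frames
    using the 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    centres = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(boxes, centres).argmax(axis=1)       # nearest centre = max IoU
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centres[j] for j in range(k)])  # keep empty clusters in place
        if np.allclose(new, centres):
            break
        centres = new
    return centres

# hypothetical defect-box sizes in pixels (not the paper's dataset)
boxes = np.array([[10, 10], [12, 11], [30, 28], [32, 30], [55, 60], [58, 57]], dtype=float)
anchors = kmeans_anchors(boxes, k=3)
```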
(3) Bayesian optimisation of network training hyperparameters
This step addresses the inefficiency and poor accuracy of hyperparameter selection for the convolutional neural network in the network training process. This paper uses Bayesian algorithms to optimise the training hyperparameters of the network. Bayesian optimisation is an algorithm that uses Bayes' theorem to search for the optimal value of an objective function, where the probabilistic agent model and the collection function are the core parts of the Bayesian optimisation algorithm. Currently, the commonly used probabilistic agent model is a Gaussian process, and the collection function consists of the posterior probabilities of the objective function. To minimise the total loss r, the Bayesian optimisation algorithm selects the evaluation point x_i by using a collection function. The process can be expressed as follows:

x_{i+1} = argmax_{x ∈ X} λ(x, D_{1:i}), r_i = |y* − y_i|    (2)

where X is the decision space, λ(x, D_{1:i}) is the collection function, and y* is the optimal solution.
The Bayesian optimisation algorithm is implemented by the following steps:
① Determine the maximum number of iterations N.
② Use the collection function to obtain the evaluation point x_i.
③ Evaluate the objective function value y_i by using the evaluation point x_i.
④ Update the probabilistic proxy model after integrating data D_t.
⑤ Return to step ② and continue iterating if the current number of iterations n is less than the maximum number of iterations N; otherwise, output x_i.
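The control flow of steps ①–⑤ can be sketched generically; in the stand-in below, the collection function and probabilistic agent model are replaced by uniform random proposals and a best-so-far record, purely to show the loop structure (a real implementation would fit a Gaussian process to the history and maximise an acquisition function such as expected improvement):

```python
import random

def bayes_opt_sketch(objective, space, n_iter=20, seed=0):
    """Skeleton of the optimisation loop in steps 1-5 (trivial stand-ins,
    not a real Bayesian optimiser)."""
    rng = random.Random(seed)
    history = []                                   # data D_t
    lo, hi = space
    for _ in range(n_iter):                        # step 1: iteration budget N
        x_i = rng.uniform(lo, hi)                  # step 2: propose evaluation point x_i
        y_i = objective(x_i)                       # step 3: objective value y_i
        history.append((x_i, y_i))                 # step 4: update the model with D_t
    return min(history, key=lambda p: p[1])        # best (x, y) found when n reaches N

# toy hyperparameter "loss" with its minimum at x = 0.3 (illustrative only)
best_x, best_y = bayes_opt_sketch(lambda x: (x - 0.3) ** 2, (0.0, 1.0))
```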

2.3. Evaluation Indicators


The overall mean average precision (mAP) is used as the evaluation index for detection accuracy, according to the detection accuracy and speed requirements.
(1) Recall and precision
Recall is the proportion of all positive samples in the test set that are correctly identified
as positive. Precision is the proportion of true positive samples among all samples identified as positive. The expressions for recall and precision are as follows:
Recall = TP / (TP + FN), Precision = TP / (TP + FP)    (3)
where TP represents the number of positive samples correctly identified as positive, FN is
the number of positive samples incorrectly identified as negative, and FP is the number of
negative samples incorrectly identified as positive.
(2) Average precision (AP)
AP is an important indicator to evaluate the accuracy of a target detection model
for a class and can be reflected by the PR curve. The PR curve combines the detector’s
ability to perform correct classification (precision) with the ability to find all relevant objects
(recall). When the area of the PR curve is larger, the accuracy of the model’s localisation
and recognition is higher. AP can be expressed as follows:
AP = ∫_0^1 p(r) dr    (4)

(3) mAP
In the case of composite sandwich structure inspection targets, mAP is the mean value of the average precision (AP) over all target categories, which is expressed as follows:

mAP = (∑_{k=1}^{n} AP_k) / n    (5)

where n is the number of target categories and k is a specific category.
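Equations (3)–(5) translate directly into code; the sketch below uses illustrative counts and PR points, not the paper's experimental results:

```python
def recall_precision(tp, fn, fp):
    """Equation (3): recall = TP / (TP + FN), precision = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

def average_precision(recalls, precisions):
    """Equation (4): AP as the area under the precision-recall (PR) curve,
    approximated by trapezoidal integration over sampled points."""
    area = 0.0
    for i in range(1, len(recalls)):
        area += (recalls[i] - recalls[i - 1]) * (precisions[i] + precisions[i - 1]) / 2
    return area

def mean_average_precision(aps):
    """Equation (5): mAP = mean of the per-category AP values."""
    return sum(aps) / len(aps)

# illustrative counts and PR samples (not the paper's results)
r, p = recall_precision(tp=90, fn=10, fp=5)
ap = average_precision([0.0, 0.5, 1.0], [1.0, 0.9, 0.6])
m = mean_average_precision([0.95, 0.98, 0.99])
```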

3. Experiments and Equipment


3.1. Preparation of Samples
A composite multilayer structure for aircraft radomes is selected for this work. The
structure consists of upper and lower skins, adhesives, and intermediate core materials.
The skin is made of glass fibre material with good wave-transparent performance, the
thickness is about 1.4 mm (7 layers), and the laying angle is (0°/90°/45°/−45°). The
intermediate core material is the most commonly used PMI foam in aerospace structures.
FOAM, an MH-type PMI foam, is selected as the core material. The design diagram is
shown in Figure 4.

Figure 4. Design of the sandwich structure.

3.2. Artificial Defect Preset
The main defects of the composite sandwich structure are delamination, debonding, and cavity, which are distributed at different depths of the sandwich structure. Defects of various shapes and sizes were laid out to verify the detection ability of the proposed improved faster R-CNN network. In addition, to verify the ability of the neural network to detect the defect depth, defects were laid out at different depth locations. The defect types include the following:
(1) Delamination defects. In the process of glass fibre material prefabrication, the delamination defect is represented by adding polytetrafluoroethylene (PTFE) flakes between the middle of the third and fourth prepreg layers. Because the refractive index of PTFE is close to that of air, it can replace the delamination effect with a thickness of 0.2 mm.
(2) Debonding defects. When the glass fibre material is glued to the foam, a PTFE sheet is placed. It can replace the state without gluing, and the thickness of a PTFE sheet is 0.2 mm.
(3) Hollow defects. This involves the setting of cavities of different sizes, shapes, and depths on the surface of the foam.
A large number of sample data are required for network training, so samples of datasets for network training and validation were first prepared. Samples A1 to A8 were used for the training and validation of the improved faster R-CNN network. Three types of defects were randomly preset at different depth positions within these samples. The defect shapes were triangle, circle, heterotropic, and other shapes. Figure 5 shows the design drawing of one of the training set samples.

Figure 5. Top view of the defective sample.

In the sample for testing the network, layered defects used thin circular sheets of 4 mm, 3 mm, and 2 mm diameter PTFE with a thickness of 0.2 mm. The debonding defect is placed in the equilateral triangular sheet of PTFE, which is also a substitute for the nonadhesive state. The PTFE sheets are 4 mm, 3 mm, and 2 mm in side length and 0.2 mm in thickness. For void defects, square holes with the same depth and different side lengths were punched in the foam, with side lengths of 4 mm, 3 mm, and 2 mm and a depth of 2 mm. A sample design diagram with three defect types is shown in Figure 6.

Figure 6. Three types of defect distributions.

The preparation process is to first carry out the lay-up of prepreg to prepare the glass fibre composites. Then the glass fibre is glued to the foam using an adhesive. Compaction is carried out by vacuum evacuation. The preparation is carried out in accordance with Q/MLPS1 "Hot Pressed Can Process for Fiber Reinforced Composites". In order to prevent the collapse of the void defect during the vacuum-pressing process, the vacuum pressure is reduced to half of the normal value. Finally, high-temperature curing and cooling demoulding are performed. The actual processed samples are shown in Figure 7. Figure 7a is the sample of the training set, Figure 8b is a part of the validation set, and Figure 7b is the sample physical diagram of the test network. The image obtained using THz-TDS scanning imaging is shown in Figure 8.

Figure 7. Physical picture of the sample: (a) part of the training set samples and (b) a sample of the test network.

Figure 8. Defect types. (a) Terahertz image of sample A1; (b) terahertz image of sample A2; (c) terahertz image of a test set sample.

3.3. THz-TDS Experimental System
The reflection mode of the terahertz time-domain spectroscopy system selected in this paper is shown in Figure 9. The experimental setup includes an ultrafast femtosecond laser, an optical delay line, a transmitter and a receiver, photoconductive antennas (PCAs), lock-in amplifiers, and a computer for controlling the device and processing the signals. A 2 mm thick silicon wafer in the experimental setup acts as a beam splitter to guide the terahertz beam reflected from the sample to the receiver. The emitted terahertz pulses are collimated and focused by a TPX lens to reach the surface of the sample to be measured. After the photoelectric effect has been generated, the beam is collimated and focused by a symmetrical TPX lens onto the fibre-coupled terahertz receiver. It is then fed to the fibre femtosecond laser via a fibre delay line system. At the detection end, the THz pulse is obtained by electro-optical sampling. The THz signal is then obtained by equivalent sampling, which collects the signal through the acquisition system and processes and displays it through the upper computer system. The system samples time signals with a step size of 0.01 ps, a time-domain scan length of 120 ps, a pulse width of 1 ps, and a modifiable scan speed of up to 60 Hz. To avoid the influence of temperature and humidity on the system performance, the relative humidity of the test environment was maintained at 25%, and the temperature was maintained at 25 °C.
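As a quick sanity check, the spectral quantities implied by the sampling parameters quoted above can be computed directly. This is an illustrative sketch using only the stated figures (0.01 ps step, 120 ps scan length), not code from the paper:

```python
# Derived quantities implied by the THz-TDS sampling parameters above.
dt = 0.01e-12  # sampling step (s)
T = 120e-12    # time-domain scan length (s)

n_samples = round(T / dt)  # points per recorded waveform
f_res = 1.0 / T            # frequency resolution after an FFT (Hz)
f_max = 1.0 / (2.0 * dt)   # Nyquist limit of the sampled signal (Hz)

print(n_samples)           # 12000 samples per trace
print(f_res / 1e9)         # ~8.33 GHz spectral resolution
print(f_max / 1e12)        # 50.0 THz Nyquist frequency
```

The 50 THz Nyquist limit comfortably covers the usable bandwidth of a photoconductive-antenna system, so the 0.01 ps step is not the limiting factor in practice.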

Figure 9. Schematic diagram of the THz-TDS system.

3.4. Data Acquisition and Preprocessing

In this paper, the reflection mode of the terahertz time-domain spectral system (mentioned above) and the two-dimensional scanning platform were used to scan and image the samples. Through experimental comparison, the scanning step of the scanning platform was finally set to 0.5 mm. To increase the number of training datasets and improve the robustness of network training, time-domain maximum imaging and specific-frequency amplitude imaging, whose characteristics differ markedly, were selected to ensure that every defect in the image can be presented. The collected and selected datasets were preprocessed to train the improved faster R-CNN network in this work. The initial dataset of 300 images was first processed one by one using the superresolution reconstruction method to raise the image signal-to-noise ratio. To increase the number of datasets, each image was cropped into four copies. The cropped images were then subjected to random rotation, contrast adjustment, brightness adjustment, and other data enhancement operations. Images with poor results were screened again, and a final dataset of 1300 images was obtained, comprising 1000 training images and 300 test images. Finally, the defects in the training set images were manually annotated using the image labelling function in the MATLAB software. The annotated data were imported into the workspace in ".groundTruth" format and then stored in the folder in ".mat" format.

The following parameters were set for the training process. The initial learning rate was set to 0.01, the BatchSize was set to 4, and the maximum number of epochs (maxEpochs) was set to 1000. The learning rate adjustment strategy decreased the learning rate linearly as the epoch count increased.
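The crop-and-perturb augmentation described in Section 3.4 can be sketched as follows. This is a toy illustration on a list-of-rows grayscale image, not the MATLAB pipeline actually used; the quadrant split and the gain/bias ranges are illustrative assumptions:

```python
# Sketch of the data-enhancement steps: crop into four, then randomly
# rotate and adjust contrast/brightness per patch.
import random

def crop_quadrants(img):
    """Split an image into its four quadrants (top-left first)."""
    h, w = len(img), len(img[0])
    h2, w2 = h // 2, w // 2
    return [
        [row[:w2] for row in img[:h2]],
        [row[w2:] for row in img[:h2]],
        [row[:w2] for row in img[h2:]],
        [row[w2:] for row in img[h2:]],
    ]

def rotate90(img):
    """Rotate 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def adjust(img, gain, bias):
    """Contrast (gain) and brightness (bias) adjustment, clamped to 0..255."""
    return [[min(255, max(0, int(p * gain + bias))) for p in row] for row in img]

def augment(img, rng):
    out = []
    for patch in crop_quadrants(img):
        if rng.random() < 0.5:
            patch = rotate90(patch)
        patch = adjust(patch, gain=rng.uniform(0.8, 1.2), bias=rng.uniform(-20, 20))
        out.append(patch)
    return out

rng = random.Random(0)
image = [[(x + y) % 256 for x in range(8)] for y in range(8)]
patches = augment(image, rng)
print(len(patches))  # 4 augmented patches per input image
```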
4. Results and Discussions

Result of Resetting Dataset Anchor Box

The algorithm in this paper uses the intersection ratio between the cluster-centre prior frame and the areas of the other prior frames as the distance metric; that is, it computes the intersection over union (IoU).
Usually, when the IoU between the predicted frame and the actual annotation frame is greater than 0.5, the target can be considered successfully detected. The IoU is used to characterise whether the anchor frame parameters are optimal. The training
dataset is reclustered to generate nine new sets of anchor frame parameters: (22,26), (3,12),
(22,16), (27,28), (9,16), (25,21), (19,20), (16,15), and (15,27). The average IoU obtained for the
training data is 96.34%.
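The reclustering step can be illustrated with a small k-means sketch. As assumed here, it follows the YOLOv2-style recipe of clustering labelled box sizes (w, h) with 1 − IoU as the distance; the boxes below are synthetic stand-ins, not the paper's annotations:

```python
# K-means anchor clustering with an IoU-based distance (illustrative sketch).
import random

def iou_wh(a, b):
    """IoU of two boxes (w, h) placed at a common centre."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs; each anchor is the mean size of its cluster."""
    rng = random.Random(seed)
    centres = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            # assign to the centre with the highest IoU (lowest 1 - IoU)
            best = max(range(k), key=lambda i: iou_wh(box, centres[i]))
            clusters[best].append(box)
        centres = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centres[i]
            for i, c in enumerate(clusters)
        ]
    return centres

rng = random.Random(42)
boxes = [(rng.uniform(3, 28), rng.uniform(12, 28)) for _ in range(200)]
anchors = kmeans_anchors(boxes, k=9)
avg_iou = sum(max(iou_wh(b, a) for a in anchors) for b in boxes) / len(boxes)
print(len(anchors), round(avg_iou, 2))  # 9 anchors and their average best IoU
```

The average best IoU over the training boxes is exactly the quality measure the 96.34% figure above reports for the real dataset.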
The hyperparameters, such as the learning rate, were optimised using Bayesian optimisation; the maximum number of iterative rounds was set to 100; and the acquisition function was expected improvement. The decision space is shown in Table 1.

Table 1. Hyperparameters and their ranges of the improved ResNet50.

Hyperparameter        Minimum Value    Maximum Value
Initial Learn Rate    1 × 10⁻⁴         1
Momentum              0.8              0.99
L2 Regularisation     1 × 10⁻⁵         1 × 10⁻²

After the Bayesian optimisation of the hyperparameters, the learning rate is found to be 0.001679, the momentum parameter 0.8157, and the L2 regularisation 2.368 × 10⁻⁵. Experimental analysis shows that the SGDM optimisation algorithm outperforms both the RMSProp and Adam optimisers in terms of training speed and prediction accuracy. This is because SGDM introduces a momentum factor into the parameter update while selecting a portion of the sample data for training each time.
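The momentum update that SGDM adds can be written in a few lines. The learning rate and momentum below are the optimised values quoted above, applied here to a toy one-parameter quadratic rather than to the actual network:

```python
# One-parameter SGDM sketch: the velocity term accumulates past gradients,
# which is the momentum factor discussed above.
LR, MOMENTUM = 0.001679, 0.8157

def sgdm_step(w, grad, velocity):
    velocity = MOMENTUM * velocity - LR * grad
    return w + velocity, velocity

# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(3000):
    w, v = sgdm_step(w, 2.0 * (w - 3.0), v)
print(round(w, 3))  # converges towards 3.0
```

The velocity smooths successive minibatch gradients, which is what lets SGDM keep a stable training speed where plain SGD would oscillate.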
Improved faster R-CNN network performance analysis
To test the superiority of the improved faster R-CNN in detecting defects, the results of the fast R-CNN, the faster R-CNN, and the improved faster R-CNN are compared under the same parameter settings. The ResNet50 network is the backbone of all three of these networks.
As shown in Figure 10, the P-R curves of the three networks are plotted. The horizontal
axis is the recall rate, and the vertical axis is the precision. The average precision value
can be calculated from the area enclosed by the curve. It can be seen that the improved
network has a larger area and therefore a higher average precision value. The average
accuracies obtained from the calculations are compared in Table 2. The average accuracy
of the fast R-CNN without data enhancement preprocessing is 76.52%, and the average
accuracy of the original faster R-CNN is 84.39%. The average accuracy of the improved
faster R-CNN in this study is 95.50%, which is 18.98% and 11.11% better than the other two
networks, respectively. After data enhancement preprocessing, the average accuracy of
the fast R-CNN is 79.04%. The average accuracy of the original faster R-CNN is 88.12%.
The improved faster R-CNN in this study has an average accuracy of 98.35%, which is
19.31% and 10.23% better than the other two networks, respectively. One can see that the detection accuracy of the faster R-CNN network improved in this study is higher than that of the other networks, both without and with data augmentation.
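The average precision values compared here are the areas under such P-R curves. A minimal all-point-interpolation sketch, following the standard VOC-style computation rather than code from the paper, is:

```python
# Average precision as the area under the precision envelope of a P-R curve.
def average_precision(recalls, precisions):
    pts = sorted(zip(recalls, precisions))
    r = [0.0] + [rec for rec, _ in pts] + [1.0]
    p = [0.0] + [prec for _, prec in pts] + [0.0]
    # enforce a monotonically non-increasing precision envelope
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

# A perfect detector keeps precision 1.0 at every recall level -> AP = 1.0.
print(average_precision([0.5, 1.0], [1.0, 1.0]))  # 1.0
```

A larger enclosed area, as for the improved network in Figure 10, therefore maps directly to a higher AP.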

Table 2. Performance comparison of different networks.

                                       Average Accuracy Value/%
Models                  Backbone    Without Data Enhancement    With Data Enhancement
Fast R-CNN              ResNet50    76.52                       79.04
Faster R-CNN            ResNet50    84.39                       88.12
Improved Faster R-CNN   ResNet50    95.50                       98.35
Figure 10. P-R curves of the three networks.

To further analyse the detection ability of the improved faster R-CNN network, images from the validation set and from the test set are selected, respectively. The validation set contains a defect triangle with a minimum side length of 2 mm, and the accuracy of the three networks is compared on it. Figure 11a shows the imaging result of a test set image obtained by time-domain maximum imaging; that is, the maximum value of the time-domain signal at each scanning point is selected as the pixel value of the image. Figure 11b–d respectively show the defect detection results of the different methods, with the location and accuracy of the detected defects marked. In terms of the number of detections, it can be seen from Figure 11b,c that the fast R-CNN method misses two defects, while the faster R-CNN misses the smallest defect. The proposed network structure detects all four defects in the image. In terms of detection accuracy, for each defect, the detection accuracy of the proposed network structure is higher than that of the other two networks.
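Time-domain maximum imaging as used for these figures can be sketched as follows. This is a toy illustration with made-up traces, not the system's acquisition code:

```python
# Each pixel is the peak of the (absolute) time-domain signal at that scan point.
def max_image(scan):
    """scan[y][x] is the list of time-domain samples for that pixel."""
    return [[max(abs(s) for s in trace) for trace in row] for row in scan]

scan = [[[0.1, -0.8, 0.3], [0.05, 0.2, -0.1]],
        [[0.0, 0.4, -0.9], [1.2, -0.3, 0.6]]]
print(max_image(scan))  # [[0.8, 0.2], [0.9, 1.2]]
```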
Figure 12a shows the imaging results of the validation set samples obtained by time-domain maximum imaging; that is, the maximum value of the time-domain signal at each scanning point is selected as the pixel value of the image. The defect edges in the image are blurred. Figure 12b–d respectively show the defect recognition results of the three methods on this image. The fast R-CNN easily misses defects with small size and fuzzy edges, and the detection rate of the defects it does find is also low. The faster R-CNN can detect most defects, except for the smallest and most ambiguous ones, whereas the improved faster R-CNN proposed in this paper can detect all defects, and its detection accuracy is the highest.

Figure 11. Comparison of defect detection performance of the validation set: (a) original, (b) fast R-CNN, (c) faster R-CNN, and (d) improved faster R-CNN.
Figure 12. Comparison of test image defect detection performance: (a) original, (b) fast R-CNN, (c) faster R-CNN, and (d) improved faster R-CNN.
One can see that the fast R-CNN misses two defect targets. The faster R-CNN misses the smallest defect target. The improved faster R-CNN not only detects all the defects but also has the highest accuracy for each defect. The fast R-CNN and faster R-CNN methods are prone to missing small defects and defects with blurred edges. The improved faster R-CNN detected all the defects, which fully demonstrates its detection capability for various defects.

5. Conclusions

In summary, we proposed a defect detection method based on the faster R-CNN algorithm for defects in composite sample parts. Defect detection experiments were performed on the samples by building a terahertz time-domain spectroscopy system. The target detection experimental dataset was obtained by preprocessing, such as data screening, data enhancement, and defect labelling. The anchor box was optimised by the K-means clustering algorithm to improve detection accuracy and detection speed by a small margin. The feature fusion of feature maps in the path aggregation network was adjusted to make full use of shallow details. The target feature detection layer was built on three new scales. The improved faster R-CNN algorithm trains samples for 100 epochs. After several trials of training the network, an accuracy of 98% and a recall of 92.02% were obtained on the test set. This resulted in a 10.23% improvement over the original faster R-CNN algorithm and an 8.51% reduction in the missed detection rate. The method proposed in this paper featured a large improvement in detecting small targets and eliminated almost all missed and wrong detections, implying that the model has better robustness. The detection speed was increased while the detection accuracy was guaranteed, and the model size was reduced by light-weighting the network so that it can be used in embedded systems for industrial inspection.
Author Contributions: Conceptualisation, X.Y., S.W. and X.W.; data curation, P.L. and B.W.; formal analysis, X.Y., K.Z. and B.Y.; funding acquisition, X.W.; investigation, P.L. and B.W.; methodology, X.Y., S.W. and K.Z.; project administration, P.L.; resources, K.Z., B.Y. and X.W.; software, S.W.; supervision, B.Y.; writing—original draft, X.Y.; writing—review and editing, B.W. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 52106099), the Shandong Provincial Natural Science Foundation (Grant No. ZR2020LLZ004), and the Key Research and Development Programme of Shandong Province (Grant No. 2021JMRH0108).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
