Advanced Methods in NDE Using Machine Learning
1 Fraunhofer IKTS, Maria-Reiche-Str. 2, 01109 Dresden, Germany
a) Corresponding author: [email protected]
b) [email protected]
Abstract. Machine learning (ML) methods and algorithms have recently been applied with great success in quality control
and predictive maintenance. Their goal, to build new and/or leverage existing algorithms that learn from training data and give
accurate predictions or find patterns, particularly in new and unseen but similar data, fits perfectly with Non-Destructive
Evaluation. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or the automated
processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms
in acoustic signal analysis. The approach has been applied to a variety of tasks in quality assessment. The principal
INTRODUCTION
While the digital transformation is widely discussed in all sectors of the economy under the keywords “Industry
4.0” or “Industrial Internet of Things” (IIoT), this discussion has so far had surprisingly limited impact on academia
and on the service industry using Non-Destructive Evaluation (NDE) methods. As Fraunhofer IKTS focuses on
developments along the value chains of structural and functional ceramic components on the one hand and along value
chains for NDE systems on the other, we feel well positioned to discuss this issue in a broader context.
For the Material Diagnostics branch of Fraunhofer IKTS (until 2014 Fraunhofer IZFP-D) we have defined four
core areas for the digital transformation in the field of NDE (Fig. 1).
FIGURE 1. Key areas for digital transformation in the field of NDE as defined for Fraunhofer IKTS
The four areas in Fig. 1 require and enable various innovation elements:
a) Remote monitoring and evaluation using augmented reality tools requires the development of a new generation of
portable inspection devices with a modern human-machine interface (HMI), IP-enabled and interacting, e.g.,
with data glasses.
b) Automated failure detection by sensor fusion and/or by machine learning offers, from our perspective,
huge potential for different methods and will be discussed in more detail below.
c) As Industry 4.0 is envisioning cyber-physical systems, communicating with each other in process, quality and
logistical aspects, we see that NDE and process (and environmental) monitoring will be applied seamlessly.
Again, machine learning methods will be key to analyze the data and learn from them.
d) If the data gained in various steps along the value chain can be integrated in a “Digital Twin” of a component,
completely new possibilities for quality management over the entire life cycle arise.
In summary, machine learning has a potential impact in at least three of the four areas of the digital transformation defined
here. It should become a key method in NDE.
Deep learning approaches, in contrast, can work on the raw data as they are. Data preparation steps such as segmentation and
feature separation are included in the algorithm. These methods require large training data sets and enormous processing power,
as provided by specialized hardware (e.g., GPU processing). They may then become extremely powerful.
The obvious advantage of machine learning becomes clear if we look at a fingerprint signal of welding current and
acoustic emission data as shown in Fig. 2. Nobody needs to study and understand why the correct welding process
parameters, the emitted noise and the quality of a weld line are related to each other. We would, however, like to decide
from experience, looking at these data, whether a weld line is correct. This is a perfect task for machine learning.
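As a rough illustration of such a decision from data, the following Python sketch trains a classifier on a few scalar features extracted from paired welding-current and acoustic-emission records. The feature choices (RMS, peak amplitude, spectral centroid), the SVM classifier, the sampling rate and the synthetic data are assumptions for illustration only, not the pipeline used at Fraunhofer IKTS.

# Minimal sketch: "good"/"bad" weld classification from fingerprint signals.
# Features, classifier and data are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(current, emission, fs):
    """Reduce one welding-current / acoustic-emission record to a few scalar features."""
    def channel_features(x):
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
        return [np.sqrt(np.mean(x**2)), np.max(np.abs(x)), centroid]
    return np.array(channel_features(current) + channel_features(emission))

# X: feature vectors of recorded weld lines, y: expert labels (1 = good, 0 = bad)
rng = np.random.default_rng(0)
records = [(rng.normal(size=4096), rng.normal(size=4096)) for _ in range(40)]
X = np.array([extract_features(c, e, fs=50_000.0) for c, e in records])
y = rng.integers(0, 2, size=len(X))          # placeholder labels for the sketch

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("predicted class of first weld:", clf.predict(X[:1]))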
The application of machine learning to the field of NDE requires a systematic and careful approach that draws on the
knowledge of experts from the very beginning. A possible systematic process is shown in Fig. 3. Developing business
understanding and data understanding is an iterative process, typically starting again once the first data evaluation results
are available.
The field of machine learning is evolving at high speed, producing impressive results in some areas. This trend
is driven by four elements:
a. broader availability of ready-to-use algorithms, some of them covering end-to-end solutions;
b. availability of large training sets of images and data due to the amount of data stored on the internet or
in private cloud-based infrastructures;
c. transfer of time-consuming computational steps to massively parallel processing units such as graphics chips or
even specialized tensor processing units;
d. strong interest in technologies such as autonomous driving, automated diagnostics in radiology or automated
data processing in online commerce.
Machine learning is here to stay and should be applied to Non-Destructive Testing and Evaluation.
Pattern recognition is based on, e.g., Deep Neural Networks (DNNs), Gaussian Mixture Models (GMMs),
Hidden Markov Models (HMMs), or Support Vector Machines (SVMs) and includes the interpretation of results. The
class models required for this are built through machine learning processes such as deep learning (DNN), the EM
algorithm (GMM, HMM), and convex optimization (SVM). In a training phase, the system is supplied with training
examples, i.e., sensor signals with known meanings (e.g., “good” or “bad”). The models can then be trained further
(adapted) during operation to improve the AI system or to adapt it to changed tasks. In certain DNN
configurations, the pattern recognizer takes over the task of secondary analysis and, in part, the primary analysis as well.
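As a minimal sketch of such class models, the following Python example trains one Gaussian mixture per class with the EM algorithm (as implemented in scikit-learn) and assigns new signals to the class with the highest log-likelihood. The feature dimension, the number of mixture components and the synthetic training data are assumptions for illustration.

# One GMM per class ("good", "bad"), trained with EM; classification by maximum log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
feat_good = rng.normal(loc=0.0, scale=1.0, size=(200, 8))   # training features, class "good"
feat_bad  = rng.normal(loc=1.5, scale=1.2, size=(200, 8))   # training features, class "bad"

models = {
    "good": GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(feat_good),
    "bad":  GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(feat_bad),
}

def classify(x):
    """Assign the class whose mixture model gives the highest log-likelihood."""
    scores = {label: m.score_samples(x[None, :])[0] for label, m in models.items()}
    return max(scores, key=scores.get)

print(classify(rng.normal(size=8)))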
Recently, the IKTS team was able to transfer the software for signal processing and pattern recognition to an
embedded system [14][15]. In this way, mobile, modular, miniaturized hardware is available for executing
applications like those described above.
The algorithms are still trained on an ordinary PC; the recognition algorithms, however, run on hardware
composed of a digital signal processor (DSP) and an FPGA (Field Programmable Gate Array) chip. The FPGA here is the
equivalent of GPU-based data processing in a larger stationary system.
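The split between training on a PC and recognition on resource-constrained hardware typically requires reducing the trained models to fixed-point arithmetic. The following Python sketch shows a generic 8-bit weight quantization; the scaling scheme is an assumption for illustration and not the toolchain used for the IKTS DSP/FPGA hardware.

# Weights trained in floating point on a PC are quantized to int8 before deployment
# to an embedded target. Generic symmetric per-tensor scheme, shown for illustration.
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization to int8 plus the scale needed for dequantization."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(2).normal(size=(64, 16)).astype(np.float32)  # stand-in for trained layer weights
q, scale = quantize_int8(w)
print("max reconstruction error:", np.max(np.abs(q.astype(np.float32) * scale - w)))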
Another application for machine learning is the analysis of transparent and translucent ceramic materials using a
method called Optical Coherence Tomography (OCT). In Fourier-domain OCT, the light coming from a short-coherence
light source is split into a sample arm and a reference arm (Fig. 5). The signal reflected by disturbances or interfaces
in the sample interferes with the reference signal and, after a Fast Fourier Transformation (FFT), results in a depth
reflection signal, also called an A-scan. By moving the optical beam with a mirror system, a sequence of A-scans
results in a cross-sectional view into the material (B-scan). By additionally moving the sample on an X-Y table, a 3D tomogram
can be created.
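A minimal sketch of this processing chain, assuming a 2048-pixel spectrometer and a synthetic interference signal, is given below: background subtraction and windowing of the spectral interferogram followed by an FFT yield the depth reflection profile (A-scan); a B-scan is then simply a stack of such A-scans.

# Fourier-domain OCT: spectral interferogram -> FFT -> depth reflection profile (A-scan).
# Array sizes and the synthetic fringe signal are assumptions for illustration.
import numpy as np

n_pixels = 2048                                  # spectrometer pixels (assumed)
k = np.linspace(0, 1, n_pixels)                  # normalized wavenumber axis
reference = np.ones(n_pixels)                    # reference-arm spectrum (idealized)
fringes = 0.3 * np.cos(2 * np.pi * 180 * k)      # interference from one reflector in the sample
spectrum = reference + fringes

def a_scan(spectrum, reference):
    """Depth reflectivity profile from one spectral interferogram."""
    signal = (spectrum - reference) * np.hanning(len(spectrum))  # remove DC term, taper edges
    return np.abs(np.fft.rfft(signal))                           # magnitude over depth

depth_profile = a_scan(spectrum, reference)
print("reflector found at depth bin:", int(np.argmax(depth_profile)))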
Automated failure detection in an OCT test station that is set up to classify ceramic components by failures
(according to a defined failure catalogue) is a reasonable approach. In practice, a number of open questions arise. First,
a single B-scan results in a huge data set of 42 GB. By cropping the B-scan and applying data compression techniques, the amount
of data can be reduced to 400 MB. A second step of filtering and pre-classification reduces the amount of data to
about 0.4 MB. Handling huge amounts of data is therefore an issue here, as B-scans will be used in machine learning
algorithms that learn from thousands of samples.
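A hedged sketch of such a reduction chain applied to a single B-scan stored as a 2D array might look as follows; the region of interest, the downsampling factor and the threshold are assumptions for illustration, and the real reduction from 42 GB to 0.4 MB involves further, device-specific steps.

# Data reduction for one B-scan: crop a region of interest, downsample,
# pre-classify strong reflections, and store in compressed form.
import numpy as np

b_scan = np.random.default_rng(3).random((4096, 8192)).astype(np.float32)  # stand-in B-scan

roi = b_scan[500:2500, 1000:7000]             # 1) crop to the depth/width range of interest
small = roi[::4, ::4]                         # 2) coarse downsampling
mask = small > 0.99                           # 3) pre-classification: keep only strong reflections
np.savez_compressed("bscan_reduced.npz", data=small, candidates=np.argwhere(mask))

print("raw:", b_scan.nbytes / 1e6, "MB  reduced:", small.nbytes / 1e6, "MB")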
Another open question is whether the chosen path, from raw data to a 2D visualization and the subsequent application
of machine learning tools for pattern recognition in images, is more robust and more quickly implemented than the
more exotic alternative of online learning by pattern recognition in the primary data stream of the FFT-processed raw data. It is also
relevant that, for image processing, large libraries such as TensorFlow are readily available.
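To illustrate the image-based path, a small convolutional network for classifying B-scan images could be set up in TensorFlow/Keras as sketched below. The input size (256 x 256 grayscale) and the binary good/bad output are assumptions; the paper does not prescribe a network architecture.

# Small CNN for pattern recognition in 2D B-scan images (TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 256, 1)),            # one grayscale B-scan image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability of "defect present"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use thousands of labeled B-scan images, e.g.:
# model.fit(x_train, y_train, validation_split=0.2, epochs=20)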
This example illustrates how important it is to combine 1) the domain knowledge of the NDT expert and of specialists
familiar with both the method and the devices with 2) the experience of machine learning specialists. The NDT expert
may gradually evolve into a data scientist who understands the technical and business needs of a test or inspection
case and has a good understanding of data management and machine learning techniques.
Optical coherence tomography offers an interesting path for analyzing 3D-printed structures of polymeric and ceramic
materials in operando during the printing process (Fig. 6).
Because the OCT scan can look to a certain depth (2-5 mm) into the material and see beyond the top surface layer,
it can deliver information about 1) flaws under the surface (e.g., those that might be created during cooling while a
layer three elements above is generated) and 2) the material integrity into the depth of the printed part. By building a
voxel-by-voxel representation of each individual component as it is printed, an exact digital representation of this
component is generated. This is called a Digital Twin (on the component level). The idealized CAD model once used
to generate the .STL or .AMF file can be replaced by the corrected as-built model. With this approach, a quality
management process for components manufactured in lot size 1 can be realized.
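A simplified sketch of such a voxel-by-voxel comparison between the as-built representation and the nominal CAD-derived model is given below; the voxel grid, the stand-in layer data and the deviation measure are assumptions for illustration.

# Voxel-by-voxel digital twin built layer by layer and compared to the nominal model.
import numpy as np

nx, ny, nz = 200, 200, 100                       # voxel grid of the printed part (assumed)
nominal = np.zeros((nx, ny, nz), dtype=bool)     # voxelized CAD model (True = material expected)
nominal[50:150, 50:150, :] = True

as_built = np.zeros_like(nominal)
rng = np.random.default_rng(4)
for z in range(nz):                              # one OCT-derived slice per printed layer
    layer = nominal[:, :, z] & (rng.random((nx, ny)) > 0.001)   # stand-in for segmented OCT data
    as_built[:, :, z] = layer

deviation = np.logical_xor(nominal, as_built)    # voxels missing or unexpectedly filled
print("deviating voxels:", int(deviation.sum()), "of", nominal.size)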
SUMMARY
Two cases of how machine learning is applied today have been discussed. While machine learning in acoustic
emission monitoring has been used within Fraunhofer IKTS for many years, its application in the field of Optical
Coherence Tomography is just starting. In any case, machine learning will be a highly relevant tool set for further
improving NDT methods, combining them, and delivering reliable results in a fast and cost-efficient way.
Even if many decisions in NDT testing will be taken by automated systems in the near future, NDT experts will
still be needed. However, their role is changing. NDT experts will become data scientists, understanding the technical
and business needs of test or inspection cases and having a good understanding of data management and machine learning
techniques.
Although the widespread availability of “Digital Twins” in industry is today still more vision than reality, the unique
role of Non-Destructive Evaluation and Structural Health Monitoring methods for the Digital Twin must be underlined.
These methods offer unique opportunities to extract, understand and create data while processing, testing or using a
component or system. Thus, NDT is no longer a cost-causing end-of-pipe technology but an enabling technology
creating valuable data for the digital transformation in the manufacturing world known as Industry 4.0.
REFERENCES
1. C. Tschöpe, D. Hentschel, M. Wolff, M. Eichner, R. Hoffmann: Classification of Non-Speech Acoustic Signals
using Structure Models, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP
2004), Montreal, 17-21 May 2004, Proc. 5, pp. 653-656.
2. M. Wolff, U. Kordon, H. Hussein, M. Eichner, R. Hoffmann, C. Tschöpe: Auscultatory Blood Pressure
Measurement using HMMs. IEEE International Conference on Acoustics, Speech, and Signal Processing
(ICASSP 2007), Honolulu, Hawaii, 15-20 April 2007, Proc. 1, pp. 405-408.
3. C. Tschöpe, M. Wolff: Automatic Decision Making in SHM using Hidden Markov Models,
18th International Conference on Database and Expert Systems Applications (DEXA 2007), Regensburg,
3-7 Sept. 2007, pp. 307-311.
4. C. Tschöpe, E. Schulze, H. Neunübel, M. Wolff, R. Schubert: Experiments in Acoustic Structural Health
Monitoring of Airplane Parts, IEEE International Conference on Acoustics, Speech, and Signal Processing