Article
A Random Forest Classifier for Anomaly Detection in
Laser-Powder Bed Fusion Using Optical Monitoring
Imran Ali Khan 1, * , Hannes Birkhofer 1 , Dominik Kunz 2 , Drzewietzki Lukas 3 and Vasily Ploshikhin 1
1 Airbus Endowed Chair for Integrative Simulation and Engineering of Materials and Processes (ISEMP),
University of Bremen, Am Fallturm 1, 28359 Bremen, Germany; [email protected] (H.B.);
[email protected] (V.P.)
2 Electro Optical Systems GmbH, Robert-Stirling Ring 1, 82152 Krailling, Germany; [email protected]
3 Liebherr-Aerospace Lindenberg GmbH, Pfänderstraße 50-52, 88161 Lindenberg, Germany;
[email protected]
* Correspondence: [email protected]; Tel.: +49-(0)-421-218-62350
Abstract: Metal additive manufacturing (AM) is a disruptive production technology, widely adopted in innovative industries, that revolutionizes design and manufacturing. The interest in quality control of AM systems has grown substantially over the last decade, driven by AM's appeal for intricate, high-value, low-volume components. Geometry-dependent process conditions in AM yield unique challenges, especially regarding quality assurance. This study contributes to the development of machine learning models to enhance in-process monitoring and control technology, a critical step toward cost reduction in metal AM. Because the part is built layer upon layer, the features of each layer influence the quality of the final part. Layer-wise in-process sensing can be used to retrieve condition-related features and help detect defects caused by improper process conditions. In this work, layer-wise monitoring using optical tomography (OT) imaging was employed as the data source, and a machine-learning (ML) technique was utilized to detect anomalies that can lead to defects. The major defects analyzed in this experiment were gas pores and lack-of-fusion defects. The random forest classifier ML algorithm is employed to segment anomalies from optical images, which are then validated by correlating them with defects from computerized tomography (CT) data. Further, 3D mapping of defects from the CT data onto the OT dataset is carried out using an affine transformation. The developed anomaly detection model's performance is evaluated using several metrics, such as the confusion matrix, dice coefficient, accuracy, precision, recall, and intersection-over-union (IOU). The k-fold cross-validation technique was utilized to ensure the robustness and generalization of the model's performance. The best detection accuracy of the developed anomaly detection model is 99.98%. Around 79.40% of the defects from the CT data correlated with the anomalies detected from the OT data.

Citation: Khan, I.A.; Birkhofer, H.; Kunz, D.; Lukas, D.; Ploshikhin, V. A Random Forest Classifier for Anomaly Detection in Laser-Powder Bed Fusion Using Optical Monitoring. Materials 2023, 16, 6470. https://doi.org/10.3390/ma16196470
1. Introduction

Metal additive manufacturing techniques using laser powder bed fusion (L-PBF)
nowadays provide the highest repeatability and dimensional precision for part production
and have thus been extensively investigated in both industry and academia. To manufacture
a component, L-PBF methods typically employ the following steps: (1) A layer of metal
powder of a specific thickness is placed over the machine’s build plate; (2) a laser beam
selectively melts the required region within the powder layer; (3) the build plate slides
down, and a fresh layer of powder is put onto the build plate. Layer by layer, this procedure
is repeated until the part production is complete. The present approach in AM quality
assurance is to analyze the component after it is created using computed tomography,
which is extremely costly and time-consuming [5]. According to Seifi et al. [6], statistical
qualification of AM components based on destructive materials testing may be unacceptably
expensive and take over a decade to complete, which is unfeasible, given the tiny batch
sizes and time necessary for manufacturing. If defects could be detected in situ, quality
assurance costs in metal AM could be reduced significantly.
Porosity is one of the most important defects to avoid, especially for components that
require high tensile strength and fatigue resistance. Porosity in L-PBF components can
be caused by inadequate melting (i.e., lack of fusion), pre-existing gas holes in metallic
powders from the gas-atomizing manufacturing process, and trapping of gas pores during
AM processing [7]. Lack of fusion defects in the laser powder bed fusion process refers
to irregular and elongated-shaped anomalies that can vary in size from 50 µm to several
millimeters. On the other hand, gas pores in L-PBF are spherical in shape and typically
range in size from 5 µm to 20 µm [8]. Process anomalies within a layer, which might yield
defects such as pores and lack of fusion defects, are closely related to the occurrence of local
temperature changes [9]. Optical monitoring data in the form of intensity recordings can
reveal these process anomalies, which possibly precede defect genesis. Current monitoring systems, however, produce huge amounts of data that are typically processed only after
completion of the printing process.
The introduction of in-situ process monitoring allows for the tracing of defects through-
out the process. Process monitoring may be classified into three categories in principle.
The first is melt pool monitoring, which monitors the melt pool and its surroundings. The
molten pool’s size and temperature characteristics provide information on the process’s
stability and the occurrence of local flaws. The second category examines the entire layer in
order to discover defects in various sections of each layer. After scanning, the temperature
distribution and surface are observed. The geometric development of the build from slice
to slice is considered as the third category [10].
Each of the aforementioned methods generates vast quantities of image data, and the
time needed to analyze such large datasets is substantial. Consequently, conducting in-situ
data analysis for monitoring purposes in additive manufacturing is currently impractical
due to extended processing times. However, a specific branch of artificial intelligence
(AI) called machine learning offers a potential solution by enabling rapid and dependable
analysis of image data [11]. Process monitoring with the application of ML especially
convolutional neural networks (CNN) and random forest classifiers has been utilized
successfully for defect detection during the AM process. Baumgartl et al. [2] used in-situ
layer-wise images captured by a thermographic camera during the L-PBF process to detect
defects using convolutional neural networks. Delamination and uncritical splatters were
detected with an accuracy of 96.08%. Grad CAM heat maps were plotted to identify defects.
Kwon et al. [12] illustrated the use of CNN for laser power prediction utilizing in-situ
layer-wise meltpool images acquired by a high-speed camera during the L-PBF process.
The developed CNN model can predict laser power values, which can be utilized to identify
problematic positions in AM products without requiring destructive inspections.
ML has grown in popularity in recent years because of its exceptional performance in
data tasks such as classification, regression, and clustering [13]. Machine learning is de-
scribed as computer programming that uses sample data and prior knowledge to maximize
a performance criterion [14]. Aside from the traditional application of making predictions
through data fitting, the scientific community is exploring new and innovative approaches
to integrate ML methods into additive manufacturing. Precise identification, analysis, and
prediction of defects hold immense promise in expediting the production of metal AM
structures that are both solidly constructed and devoid of defects [15]. Mohr et al. [1] used
thermography and optical tomography images for in-situ defect detection during the L-PBF
process. A layer-wise OT image is captured using an off-axis CMOS camera, which is
similar to the monitoring system utilized in this paper (Section 2.2). CT scans were used
to assess the outcomes of OT and thermographic imaging. Only significant defects, such as lack-of-fusion void clusters, were detected well when compared with the CT data. For pore detection, however, which concerns one of the major part defects in additive manufacturing, only 0.7% of OT pores overlapped with micro-CT pores, whereas 71.4% of thermography anomalies overlapped with micro-CT pores. For high-quality predictions, ML models require large
training data sets. Due to the high experimental costs, there are restrictions in generating
sufficient OT data. It is therefore desirable to employ a machine learning technique capable of developing an anomaly detection model with a small amount of training data [16]. Consequently, there is a need to improve the resolution of the OT system or to employ new pore detection approaches using ML techniques for better correlation with micro-CT pores, which is one of the main goals of this research.
The main challenges in developing high-quality machine learning algorithms are limited training data, high computational costs, and the lack of generalization to new
materials and geometries. The utilization of L-PBF encompasses a wide range of materials
and intricate geometries. Nevertheless, the development of machine learning algorithms
that can generalize effectively across diverse materials and geometries poses a significant
challenge. This difficulty arises from the distinct behaviors and characteristics exhibited
by each material and geometry, necessitating substantial data and model adaptation. This issue is addressed here with a traditional machine learning method, the random forest classifier, chosen because it does not require the large volume of training data needed by more widely used machine learning techniques such as convolutional neural networks [17].
The significance of advancements in data processing algorithms in the field of AM
monitoring becomes evident when considering their potential broad impact and applicabil-
ity. Integrating these algorithms into various monitoring and control systems can enhance
process repeatability. This integration can also lead to a reduction in post-processing and
non-destructive testing, resulting in cost-effective quality assurance. Conventional qual-
ity control methods in L-PBF often involve time-consuming post-processing inspections.
However, the utilization of machine learning algorithms can automate the defect detec-
tion process by analyzing real-time sensor data and identifying patterns associated with
defects [18]. This enables faster and more efficient defect detection, facilitating prompt
corrective actions and minimizing the need for extensive post-processing inspections. Ul-
timately, machine learning offers the ability to swiftly analyze and process in-situ data
in L-PBF, thereby enabling accelerated defect detection, real-time monitoring, process op-
timization, and adaptive control. These advantages collectively contribute to improved
efficiency, reduced post-processing requirements, and enhanced overall quality in the L-PBF
process [2]. This study aims to contribute to process repeatability and quality assurance
through the development of a machine learning algorithm for the rapid and reliable detection, from monitoring data, of anomalies that lead to defects.
Process invariance and optical noise in the generated OT images make it difficult to identify anomalies. When the amount of data available for image segmentation is low, the random forest technique, a conventional ML approach, can be used. Yaokun Wu and Siddharth
Misra [19] demonstrated that RF models outperform neural network approaches in terms
of noise tolerance. Thayumanavan et al. [20] also used a random forest classifier to segment
brain tumors from MR brain images with an accuracy of 98.37%. In this paper, the focus is
on the application of machine learning using optical monitoring data to identify anomalies
and validate the detected anomalies using defects obtained using the µCT technique.
Table 1. Specifications of the optical monitoring system.

Specifications          Values
Spectral range          887.5 nm–912.5 nm
Camera resolution       2560 × 2160 pixels
Objective lens          8 mm
Frame rate              10 fps
Spatial resolution      125 µm/pixel
Data interface          USB 3.1
Figure 2. EOSTATE Exposure optical tomography images for the 100th layer under normal process
conditions: (a) Integral OT image (b) Maximum OT image.
Figure 3. Normalized OT images for 100th layer under normal process conditions: (a) Normalized
integral OT image (b) Normalized maximum OT image.
Figure 4. Normalized Integral OT image for layer 101 with induced artifacts.
The anomalies after detection have to be investigated for potential defects. After
completion of the L-PBF process, the built cylinders were post-processed and examined
using the micro-computerized tomography technique. The majority of defects are gas pores
and lack of fusion, ranging from 30 to 540 µm in diameter. An algorithm is developed to
correlate anomalies from OT data with defects from CT data to prove the potential of the
optical monitoring system in identifying defects during L-PBF processes.
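The mapping of CT defect data into the OT image frame can be sketched with an affine transformation via scipy.ndimage; the matrix, offset, and synthetic defect region below are illustrative placeholders, not the transform estimated in this work:

```python
import numpy as np
from scipy.ndimage import affine_transform

ct_slice = np.zeros((64, 64))
ct_slice[30:34, 30:34] = 1.0  # a synthetic defect region in CT coordinates

# scipy uses an output-to-input mapping: input = matrix @ output + offset.
matrix = np.array([[0.5, 0.0], [0.0, 0.5]])  # placeholder scale (CT finer than OT)
offset = np.array([2.0, 3.0])                # placeholder translation
mapped = affine_transform(ct_slice, matrix, offset=offset, order=0)
```

In practice, the matrix and offset would be estimated from reference geometry shared by the CT and OT datasets, and the mapped defect mask would then be compared pixel-wise with the detected anomalies.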
g(x, y) = ( f(x, y) − M_f(x, y) ) / σ_f(x, y)    (1)

where f(x, y) is the original image, M_f(x, y) is the estimate of the mean of the original image, and σ_f(x, y) is the estimate of its standard deviation.
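A minimal sketch of this standardization step, assuming NumPy arrays of OT intensity values (the function name is illustrative):

```python
import numpy as np

def normalize_ot_image(f: np.ndarray) -> np.ndarray:
    """Standardize an image to zero mean and unit variance, as in Equation (1),
    using global estimates of the mean and standard deviation."""
    mean = f.mean()
    std = f.std()
    if std == 0:  # guard against a constant image, where Equation (1) is undefined
        return np.zeros_like(f, dtype=float)
    return (f - mean) / std
```

The same formula can also be applied per local window when M_f and σ_f are meant as local, position-dependent estimates.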
Gabor Filter
One of the most well-known feature extraction methods is the Gabor filter. It is
made up of wavelet coefficients for various scales and orientations, which makes these
features resistant to rotation, translation, distortion, and scaling [31]. In this study, 32 Gabor
filters with different orientations and scales were created with a kernel size of 9 × 9. Gabor
is a convolutional filter representing a combination of Gaussian and sinusoidal terms.
The Gaussian component provides the weights and the sine component provides the
directionality. It has excellent localization properties in both the spatial and frequency
domains. In the spatial domain, it is a Gaussian-modulated sinusoid, and in the frequency
domain, it is a shifted Gaussian. It is represented in Equation (2) [31]:
g(x, y; σ, θ, λ, γ, φ) = exp( −(x′² + γ²y′²) / (2σ²) ) · exp( i(2πx′/λ + φ) )    (2)

x′ = x cos θ + y sin θ    (3)

y′ = −x sin θ + y cos θ    (4)
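Under these definitions, a Gabor kernel and a filter bank can be sketched as follows; only the kernel size (9 × 9) and the bank size (32) come from the text, while the specific σ, λ, γ, and φ values and the 8-orientations-by-4-wavelengths split are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, phi=0.0):
    # Real part of Equation (2): a Gaussian envelope times a sinusoidal carrier.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_p = x * np.cos(theta) + y * np.sin(theta)    # Equation (3)
    y_p = -x * np.sin(theta) + y * np.cos(theta)   # Equation (4)
    envelope = np.exp(-(x_p**2 + (gamma * y_p) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_p / lam + phi)
    return envelope * carrier

# A bank of 32 kernels: 8 orientations x 4 wavelengths (illustrative split).
bank = [gabor_kernel(theta=np.pi * t / 8, lam=l)
        for t in range(8) for l in (3.0, 4.0, 6.0, 8.0)]
```

Convolving an OT image with each kernel then yields one feature map per filter.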
Gaussian Blur
The Gaussian blur feature is obtained by blurring an image using a Gaussian ker-
nel and convolving the image. It functions as a non-uniform low-pass filter, preserving
low spatial frequency while reducing image noise and insignificant details. A Gaussian
function [32] is formulated as in Equation (5).
G(x, y) = ( 1 / (2πσ²) ) e^( −(x² + y²) / (2σ²) )    (5)
where x and y are the image coordinates and σ is the standard deviation of the Gaussian
distribution. A Gaussian kernel with a standard deviation of 3 and 7 is used to generate
feature extractors.
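A sketch of these two blur feature maps using scipy.ndimage (the function name and the stacking layout are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_blur_features(img: np.ndarray) -> np.ndarray:
    # Two low-pass feature maps with sigma = 3 and sigma = 7, as quoted in the text.
    return np.stack([gaussian_filter(img.astype(float), sigma=s)
                     for s in (3, 7)], axis=-1)
```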
Figure 6. A few OT images and corresponding ground truth labels used in training.
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (6)
The dice coefficient calculates the overlapping pixels between the predicted segmenta-
tion pixels and the ground truth pixels as follows [40]:
Dice Coeff = (2 × TP) / (2 × TP + FP + FN)    (7)
Precision is defined as the fraction of true-positive pore pixels among all pixels in an OT image that the RF_Segm model classifies as pores, and is expressed as follows [40]:
Precision = TP / (TP + FP)    (8)
Recall, also known as sensitivity, is calculated as the proportion of true-positive pixels classified by the RF_Segm model relative to the pixels labeled as anomalous by manual labeling, and it is expressed as follows [40]:
Recall = TP / (TP + FN)    (9)
Intersection over Union (IoU), also known as the Jaccard Index, is defined as the area
of intersection between the predicted segmentation map A and the ground truth map B,
divided by the area of union between the two maps, and ranges between 0 and 1 [40].
IoU = J(A, B) = |A ∩ B| / |A ∪ B|    (10)
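All five metrics of Equations (6)-(10) can be computed from the four pixel counts of the confusion matrix; for binary masks, |A ∩ B| equals TP and |A ∪ B| equals TP + FP + FN. A sketch (the function name is illustrative):

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Pixel-wise metrics of Equations (6)-(10) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Equation (6)
    dice = 2 * tp / (2 * tp + fp + fn)           # Equation (7)
    precision = tp / (tp + fp)                   # Equation (8)
    recall = tp / (tp + fn)                      # Equation (9)
    iou = tp / (tp + fp + fn)                    # Equation (10), in pixel counts
    return {"accuracy": accuracy, "dice": dice,
            "precision": precision, "recall": recall, "iou": iou}
```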
were achieved. These values indicate a minimal number of false negatives compared to
false positives, ensuring comprehensive anomaly detection.
Table 2. Performance metrics for the RF_Segm model with different numbers of estimators.

Number of Estimators    Dice Coeff    Precision    Recall    Accuracy [%]    IOU Score [%]
10 0.7068 0.6489 0.7760 96.96 54.66
50 0.7952 0.7334 0.8660 97.96 65.87
100 0.8200 0.7604 0.8899 99.67 69.50
1000 0.8309 0.7705 0.9018 99.98 71.08
The dataset was divided into training (90%) and testing (10%) subsets (Section 2.4.4). To ensure the generalization of the model's performance across data splits, one of the developed anomaly detection models, the RF_Segm model with 100 estimators, was considered for the cross-validation experiment. The data was divided into 10 folds (K = 10), retaining the same 90% training and 10% testing split used in generating all the anomaly detection models. This approach guarantees that each data point appears in the
test set exactly once, reducing the influence of the initial split on model evaluation. Figure 8
shows the plot of metric classification accuracy against each fold. This graph offers a visual
depiction of the model’s performance variability across various folds. It illustrates that the
model consistently achieves accuracies within the range of 99.77% to 99.79% across all ten
folds, affirming its suitability for different data splits.
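The cross-validation procedure can be sketched with scikit-learn; the synthetic feature matrix below merely stands in for the 40 per-pixel filter-bank features, so the resulting scores are not those of the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for per-pixel feature vectors (40 features, binary labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)  # K = 10, as in the text
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")  # one score per fold
```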
Figure 8. Cross-validation classification accuracy (red line) across folds for the RF_Segm model with 100 estimators.
The confusion matrix (Section 2.6) was calculated for each fold, offering a compre-
hensive view of the model’s performance in terms of true positives, true negatives, false
positives, and false negatives. Specifically, the confusion matrix was computed for the test
dataset for each fold, encompassing a total of 470,890 pixels. This approach provides a
more accurate evaluation of the model’s performance on the test dataset.
Subsequently, an average confusion matrix was generated by computing the pixel-
wise average (mean) of the individual confusion matrices obtained from all 10 folds of
the cross-validation. This average representation consolidates the results and offers a
comprehensive view of the model’s overall performance.
Figure 9 visually presents the matrix representation of the average confusion matrix.
In this matrix, ’0’ denotes the number of pixels that do not have any anomalies, whereas
’1’ reflects the number of pixels with anomalies. This illustration provides valuable in-
sights into the model’s performance, allowing us to comprehend its consistency in making
accurate and inaccurate predictions, and aids in identifying patterns of errors.
The analysis of the average confusion matrix reveals a predominance of non-anomalous
pixels, with a count of ≈467,043 as true positives, accurately identified by the model. Ad-
ditionally, ≈2829 pixels are correctly recognized as true negatives. On the other hand,
there are ≈363 pixels falsely predicted as anomalies (false positives) and ≈653 pixels that
are actual anomalies but incorrectly predicted as non-anomalies (false negatives). These
numbers highlight the model’s strengths and areas for improvement, providing essential
metrics to evaluate its performance.
Figure 9. Average confusion matrix: Consistency and performance overview of the RF_Segm model
with 100 estimators.
Further analysis of developed models should consider the detection time to strike
a balance between precision and detection speed. The time required for training (90%
dataset), testing (10% dataset), and prediction time for a single image of the random forest
models with different numbers of estimators are tabulated in Table 3. It can be seen that
as the number of estimators gets bigger, so does the time required for training, testing,
and anomaly prediction time. The prediction time for an anomaly detection model is
of significant importance in the L-PBF process. Low prediction time signifies the timely
identification of anomalies, improves process efficiency, minimizes costs, enables real-time
monitoring, optimizes resource allocation, and facilitates scalability. These combined factors
result in improved productivity, decreased defects, and enhanced quality control within
L-PBF manufacturing. A prediction time of 40 ms was achieved for the model with 1000
estimators. This detection time, combined with the best performance metrics among the developed models, favors the RF_Segm model with 1000 estimators. Further, if the
number of estimators is increased, a point of diminishing returns is reached, at which the marginal improvement in performance becomes smaller and does not justify the additional
computational resources and time required for training and predicting anomalies.
Table 3. Time required for training and testing the RF_Segm model.
In all the generated ML models, a total of 40 feature extractors were utilized in construct-
ing the RF_Segm models. The importance of each feature and the selection of optimal features
for model training were deemed crucial. This process, known as Feature Selection in machine
learning, involves the removal of less relevant features, thereby simplifying the model, re-
ducing overfitting, and improving computational efficiency [42]. Feature selection based on
feature importance contributes to enhancing the model’s performance and interpretability.
The feature importance diagram, as depicted in Figure 10, illustrates the relative importance of
each feature for different estimators. This diagram offers valuable insights into the significance
of individual features in the segmentation of anomalies in OT images. Notably, the original
pixel values of OT images, Gaussian filter, Median filter, and Gabor24 feature extractors exhibit
the highest importance values, indicating a strong relationship with the segmentation label.
Overall, the feature importance diagram in the random forest segmentation model provides
valuable insights for feature selection, understanding data relationships, model interpretation,
error detection, and guidance for future data collection endeavors.
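In scikit-learn, these importance values are exposed directly by the fitted forest; the toy data below (one informative feature out of five) is an illustrative assumption, not the 40-feature OT dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))   # five stand-in features
y = (X[:, 0] > 0).astype(int)   # only feature 0 carries the label

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
importances = rf.feature_importances_  # impurity-based importances, summing to 1
```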
Figure 11 shows the anomaly prediction from OT images for different RF_Segm models
developed with different numbers of estimators. In Figure 11, Images A and B are the OT
images with artificially induced anomalies and image C is the OT image under normal
process conditions. It can be seen that the RF_Segm model with 1000 estimators gives better predictions than the models with fewer estimators.
Figure 11. Anomaly prediction in sample optical tomography images (A–C) utilizing RF_Segm
models with diverse estimator counts.
Figure 12. 3-Dimensional rendered surface with overlap of CT defects with detected anomalies.
4. Conclusions
In conclusion, the implemented conventional ML algorithm demonstrates outstanding ability in detecting process anomalies within the specified range of intensity values. The experiment was carried out on an EOS M 290 L-PBF machine with EOS Titanium Ti64
as material. Random forest segmentation models were created for a variety of estimators,
including 10, 50, 100, and 1000. The RF_Segm model with 1000 estimators obtained an
astounding 99.98% accuracy while keeping a fast prediction time of 40 ms. The detected anomalies were analyzed against defects identified by the CT approach to test the algorithm's robustness. 79.4% of the defects identified in the CT data correlated with the
anomalies reported by the optical monitoring system, which is promising. This finding
emphasizes the proposed random forest segmentation model’s potential for quality inspec-
tion during L-PBF procedures, outperforming current CT correlation standards in in-situ
anomaly identification.
Furthermore, the developed model showcases remarkable efficiency in terms of com-
putational costs, which stands as a significant advantage in utilizing the random forest
classifier for anomaly detection model development. Despite the limited training data,
consisting of only 100 OT images with corresponding ground truth labels, a segmentation
model with an accuracy of 99.98% was successfully created. The model’s training process
also offers a notable advantage in terms of time requirements: only approximately 3 h of computational training was necessary to construct the RF_Segm model with 1000 estimators. This aspect enhances its efficiency and ensures optimal utilization of resources. It
is worth highlighting that the model effectively identifies artificially induced defects with
reduced laser power parameters and establishes a correlation with defects detected in the
CT data.
This paper presents the successful detection of anomalies utilizing the RF_Segm model
in the context of in-situ anomaly detection in L-PBF. The anomalies detected in this study
were subsequently evaluated and identified as gas pores and lack of fusion defects using
the CT technique. These two types of defects are known to significantly impact the fatigue
life of printed parts in the L-PBF process. By effectively identifying and characterizing
these critical defects, the RF_Segm model contributes to quality assurance and reliability
improvement in additive manufacturing processes. The findings of this study highlight
the potential of the developed model in enhancing the overall structural integrity and
performance of L-PBF-produced components.
In summary, the developed random forest segmentation model, integrated with the
optical monitoring system, exhibits exceptional accuracy, swift prediction time, and strong
correlation with CT data. Its potential for quality inspection during L-PBF processes
demonstrates its efficacy in detecting anomalies and ensuring manufacturing integrity.
Further research and validation on larger datasets are warranted to fully exploit the model’s
capabilities and advance anomaly detection in L-PBF processes.
5. Concluding Remarks
The research has demonstrated the effectiveness of machine learning algorithms in the
realm of anomaly detection, particularly in the context of EOS Titanium Ti64 produced by
the EOS M 290 L-PBF machine. The developed ML algorithm has showcased remarkable
performance, achieving an accuracy rate of 99.98% in identifying anomalies within specified
intensity ranges. Notably, it outperforms conventional CT standards, underscoring its
potential for enhancing quality assurance processes in the additive manufacturing industry.
Part defects such as gas pores and lack-of-fusion defects were successfully identified through CT data analysis and further correlated with the detected anomalies, yielding a remarkable correlation accuracy of 79.4%. This underscores the promising capability
of optical monitoring systems in enhancing the quality assurance procedures for laser
powder bed fusion processes.
Looking ahead, our focus is on the future prospects of integrating machine learning
with optical monitoring techniques to further enhance quality assurance in L-PBF processes.
We envision utilizing CNN models for faster anomaly detection, harnessing a
comprehensive dataset of over 2000 real-time images. Our ongoing efforts will be directed
toward improving model robustness and enhancing detection accuracy, paving the way for
more reliable and efficient quality control in additive manufacturing.
Author Contributions: Conceptualization, I.A.K., H.B., D.K. and D.L.; methodology, I.A.K.; software,
I.A.K.; validation, I.A.K., H.B., D.K. and D.L.; formal analysis, I.A.K. and V.P.; investigation, I.A.K.;
resources, D.K. and D.L.; data curation, I.A.K. and H.B.; Writing—original draft preparation, I.A.K.;
Writing—review and editing, I.A.K., H.B. and V.P.; visualization, I.A.K.; supervision, H.B. and V.P.;
project administration, D.K. and H.B.; funding acquisition, D.K. and V.P. All authors have read and
agreed to the published version of the manuscript.
Funding: The authors of this academic research paper would like to express their gratitude for
the financial support provided by the German Federal Ministry for Economic Affairs and Energy
through the program “Luftfahrtforschungsprogramm LuFo V-3” (PAULA—Prozesse fur die additive
Fertigung und Luftfahrtanwendungen, funding code 20W1707E).
Data Availability Statement: The data presented in this study are available on request from the
corresponding author. The data are not publicly available due to restrictions from the project partners.
Acknowledgments: The authors would like to extend their appreciation to EOS GmbH, Industrieanlagen-Betriebsgesellschaft (IABG) GmbH, and Liebherr-Aerospace Lindenberg GmbH for conducting the experiments and their valuable assistance throughout the experimental process. Their
contributions have been instrumental in the successful execution of this study.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
AM Additive Manufacturing
OT Optical Tomography
ML Machine Learning
µCT Micro-Computerized Tomography
EOS Electro-Optical Systems
IABG Industrieanlagen-Betriebsgesellschaft
IOU Intersection-Over-Union
L-PBF Laser Powder Bed Fusion
CNN Convolutional Neural Network
CMOS Complementary metal-oxide-semiconductor
DV Digital Values
ROI Regions Of Interest
RF_Segm Random Forest Segmentation
RGB Red Green Blue
3D Three Dimensional
KV Kilo Volts
TP True Positives
TN True Negatives
FP False Positives
FN False Negatives
µm Micrometer
ms Millisecond
References
1. Mohr, G.; Altenburg, S.J.; Ulbricht, A.; Heinrich, P.; Baum, D.; Maierhofer, C.; Hilgenberg, K. In-Situ Defect Detection in Laser
Powder Bed Fusion by Using Thermography and Optical Tomography—Comparison to Computed Tomography. Metals 2020,
10, 103.
2. Baumgartl, H.; Tomas, J.; Buettner, R.; Merkel, M. A deep learning-based model for defect detection in laser-powder bed fusion using in-situ thermographic monitoring. Prog. Addit. Manuf. 2020, 5, 277–285.
3. Chen, Z.; Han, C.; Gao, M.; Kandukuri, S.Y.; Zhou, K. A review on qualification and certification for metal additive manufacturing.
Virtual Phys. Prototyp. 2022, 17, 382–405.
4. Abdulhameed, O.; Al-Ahmari, A.; Ameen, W.; Mian, S.H. Additive manufacturing: Challenges, trends, and applications. Adv.
Mech. Eng. 2019, 11, 1687814018822880.
5. Montazeri, M.; Yavari, R.; Rao, P.; Boulware, P. In-Process Monitoring of Material Cross-Contamination Defects in Laser Powder
Bed Fusion. J. Manuf. Sci. Eng. ASME 2018, 140, 111001.
6. Seifi, M.; Salem, A.; Beuth, J.; Harrysson, O.; Lewandowski, J.J. Overview of Materials Qualification Needs for Metal Additive
Manufacturing. JOM 2016, 68, 747–764.
7. Choo, H.; Sham, K.L.; Bohling, J.; Ngo, A.; Xiao, X.; Ren, Y.; Depond, P.J.; Matthews, M.J.; Garlea, E. Effect of laser power on defect,
texture, and microstructure of a laser powder bed fusion processed 316L stainless steel. Mater. Des. 2019, 164, 1264–1275.
8. Brennan, M.C.; Keist, J.S.; Palmer, T.A. Defects in Metal Additive Manufacturing Processes. J. Mater. Eng. Perform. 2021, 30,
4808–4818.
9. Pham, V.T.; Fang, T.H. Understanding porosity and temperature induced variabilities in interface, mechanical characteristics and
thermal conductivity of borophene membranes. Sci. Rep. 2021, 11, 12123.
10. Yeung, H.; Yang, Z.; Yan, L. A meltpool prediction based scan strategy for powder bed fusion additive manufacturing. Addit.
Manuf. 2020, 35, 2214–8604.
11. Ravindran, S. Five ways deep learning has transformed image analysis. Nature 2022, 609, 864–866.
12. Kwon, O.; Kim, H.G.; Kim, W.; Kim, G.H.; Kim, K. A convolutional neural network for prediction of laser power using melt-pool
images in laser powder bed fusion. IEEE Access 2020, 8, 23255–23263.
13. Wang, C.; Tan, X.P.; Tor, S.B.; Lim, C.S. Machine learning in additive manufacturing: State-of-the-art and perspectives. Addit.
Manuf. 2020, 36, 2214–8604.
14. Meng, L.; Park, H.Y.; Jarosinski, W.; Jung, Y.G. Machine learning in additive manufacturing: A Review. J. Miner. Met. Mater. Soc.
2020, 72, 2363–2377.
15. Gordon, J.V.; Narra, S.P.; Cunningham, R.W.; Liu, H.; Chen, H.; Suter, R.M.; Beuth, J.L.; Rollett, A.D. Defect structure process
maps for laser powder bed fusion additive manufacturing. Addit. Manuf. 2020, 36, 2214–8604.
16. Mahapatra, D. Analyzing Training Information from Random Forests for Improved Image Segmentation. IEEE Trans. Image
Process. 2014, 23, 1504–1512.
17. Aurelia, J.E.; Rustam, Z.; Hartini, S.; Darmawan, N.A. Comparison Between Convolutional Neural Network and Random Forest as
Classifier for Cerebral Infarction. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable
Development, AISC 1417, Marrakech, Morocco, 6–8 July 2020; pp. 930–939.
18. Mahmoud, D.; Magolon, M.; Boer, J.; Elbestawi, M.A.; Mohammadi, M.G. Applications of Machine Learning in Process Monitoring
and Controls of L-PBF Additive Manufacturing: A Review. Appl. Sci. 2021, 11, 11910.
19. Wu, Y.; Misra, S. Intelligent Image Segmentation for Organic-Rich Shales Using Random Forest, Wavelet Transform, and Hessian
Matrix. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1144–1147.
20. Rajendran, P.; Madheswaran, M. Hybrid Medical Image Classification Using Association Rule Mining with Decision Tree
Algorithm. J. Comput. 2010, 2, 2151–9617.
21. ASTM F1472. Available online: https://standards.globalspec.com/std/14343636/ASTM%20F1472 (accessed on 1 November
2020).
22. ASTM International. Available online: https://www.astm.org/standards/f2924 (accessed on 20 October 2021).
23. A Gentle Introduction to k-Fold Cross-Validation. Available online: https://machinelearningmastery.com/k-fold-cross-
validation/ (accessed on 3 August 2020).
24. Wang, P.; Nakano, T.; Bai, J. Additive Manufacturing: Materials, Processing, Characterization and Applications. Crystals 2022,
12, 747.
25. Zenzinger, G.; Bamberg, J.; Ladewig, A.; Hess, T.; Henkel, B.; Satzger, W. Process Monitoring of Additive Manufacturing by Using
Optical Tomography. AIP Conf. Proc. 2015, 1650, 164–170.
26. Liu, H.; Huang, J.; Li, L.; Cai, W. Volumetric imaging of flame refractive index, density, and temperature using background-
oriented Schlieren tomography. Sci. China Technol. Sci. 2021, 64, 98–110.
27. Samajpati, B.J.; Degadwala, S.D. Hybrid Approach for Apple Fruit Diseases Detection and Classification Using Random Forest
Classifier. In Proceedings of the International Conference on Communication and Signal Processing, Melmaruvathur, India, 6–8
April 2016; pp. 1015–1019.
28. Random Forest: A Complete Guide for Machine Learning. Available online: https://builtin.com/data-science/random-forest-
algorithm (accessed on 14 March 2023).
29. Hiew, B.; Teoh, A.B.; Ngo, D.C. Preprocessing of Fingerprint Images Captured with a Digital Camera. In Proceedings of the 2006
9th International Conference on Control, Automation, Robotics and Vision, Singapore, 5–8 December 2006; pp. 1–6.
30. Kumar, K.V.; Jayasankar, T. An identification of crop disease using image segmentation. Int. J. Pharm. Sci. Res. (IJPSR) 2019, 10,
1054–1064.
31. See, Y.C.; Noor, N.M.; Low, J.L.; Liew, E. Investigation of Face Recognition Using Gabor Filter with Random Forest As Learning
Framework. In Proceedings of the TENCON 2017—2017 IEEE Region 10 Conference, Penang, Malaysia, 5–8 November 2017;
pp. 1153–1158.
32. Haddad, R.A.; Akansu, A.N. A Class of Fast Gaussian Binomial Filters for Speech and Image Processing. IEEE Trans. Signal
Process. 1991, 39, 723–727.
33. Image Gradients with OpenCV (Sobel and Scharr). Available online: https://pyimagesearch.com/2021/05/12/image-gradients-
with-opencv-sobel-and-scharr/ (accessed on 12 May 2021).
34. Using Train Test Split in Sklearn: A Complete Tutorial. Available online: https://ioflood.com/blog/train-test-split-sklearn/
(accessed on 5 September 2023).