Aircraft Engine Remaining Useful Life Prediction U
Methodology
In this study, predictive models for aircraft engine RUL involved two problem formulations, each addressing specific operational requirements and decision-making contexts. The first is a binary classification task that predicts whether an engine will fail within the next 30 days, simplifying the prediction into identifying immediate attention needs. The second treats RUL prediction as a regression problem, estimating the remaining operational cycles.
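As a minimal sketch of the first formulation, the snippet below derives the binary target from per-cycle RUL values; the DataFrame layout and the column names (engine_id, cycle, RUL, label) are illustrative assumptions, not the paper's actual schema.

import pandas as pd

# Hypothetical per-cycle records: one row per (engine, cycle) with a known RUL.
df = pd.DataFrame({
    "engine_id": [1, 1, 1, 2, 2],
    "cycle":     [1, 2, 3, 1, 2],
    "RUL":       [120, 31, 29, 45, 15],
})

# Binary target for the classification task:
# 1 if the engine is within 30 units of failure, 0 otherwise.
df["label"] = (df["RUL"] <= 30).astype(int)
print(df)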
Experiment 1: Classification
An LSTM-based binary classifier was trained for this task; a dropout of 0.2 was applied between each of its layers. The classifier achieved a precision of 0.96, a recall of 0.88, and an F1-score of 0.92. Figure 1 shows the learning curves, including the loss and accuracy of the LSTM binary classifier during training.

Figure 1: Training vs Validation Loss and Accuracy for LSTM Classifier
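A minimal Keras sketch of such a classifier follows. The layer widths (64 and 32 units), the two-layer depth, the window length, and the feature count are assumptions for illustration; only the dropout of 0.2 between layers and the binary output come from the description above.

import tensorflow as tf
from tensorflow.keras import layers

WINDOW, N_FEATURES = 30, 24  # assumed sequence length and number of sensor features

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.2),   # dropout of 0.2 between layers, as described
    layers.LSTM(32),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # probability of failure within 30 days
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")],
)
model.summary()
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)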
Experiment 2: Regression
In this experiment, three different types of models were built and tested to predict the actual RUL of the engine, and the results were compared.

Regression Models  In this category, various regression machine learning models, including linear regression, random forest, k-nearest neighbors, and others, were trained to predict the RUL. The models exhibited varying performance, as shown in Table 1. Random Forest had the best overall performance in this category, with an RMSE of 15.6 on the training set and 46.3 on the test set. However, the practicality of the random forest model was limited by the high test-set RMSE.
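The sketch below shows this setup for a few of the listed models, using scikit-learn and random stand-in data; the real features would be the windowed engine sensor readings, and the hyperparameters shown are illustrative defaults, not the paper's.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Stand-in data: replace with the actual engine sensor features and RUL targets.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 24)), rng.uniform(0, 150, size=200)
X_test,  y_test  = rng.normal(size=(50, 24)),  rng.uniform(0, 150, size=50)

models = {
    "Random Forest":     RandomForestRegressor(n_estimators=100, random_state=42),
    "Linear Regression": LinearRegression(),
    "KNN Regression":    KNeighborsRegressor(n_neighbors=5),
}

for name, reg in models.items():
    reg.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, reg.predict(X_test)))
    print(f"{name}: test RMSE = {rmse:.2f}")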
Table 1: Train and Test RMSE for Regression Models

Model                  Train RMSE    Test RMSE
Decision Tree            0.000000    69.070572
Extra Tree               0.000000    46.190967
Random Forest           15.626659    46.369789
XGBoost                 28.174315    48.496991
KNN Regression          40.501531    48.955520
SVM Regression          43.472257    48.873759
Linear Regression       44.660360    48.399484
AdaBoost Regression     47.671437    51.666636

CNN Models  The proposed CNN model, CNN-4, achieved the best results. It consisted of two convolutional layers and two max pooling layers. This model achieved a training RMSE of 11.91 and a test RMSE of 14.02, accompanied by an R-squared value of 0.88.

Additionally, other CNN models were explored, including CNN-2, which comprised one convolutional layer and one max pooling layer. Furthermore, the CNN-1+LSTM model was composed of one convolutional layer followed by two LSTM layers. The corresponding performances of CNN-2 and CNN-1+LSTM are detailed in Table 3.

Table 3: Performance Comparison of Selected CNN Models

Model         Train RMSE    Test RMSE
CNN-4          11.909235    14.023282
CNN-2          11.957053    16.370277
CNN-1+LSTM     12.477310    14.625450

The initial observations indicate that the proposed CNN model outperforms most of the reported values from other research studies, as indicated in Table 4.
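To make the CNN-4 architecture described above concrete, here is a minimal Keras sketch with two convolutional layers and two max pooling layers; the filter counts, kernel sizes, dense head, and input shape are assumptions, since this section does not give the exact hyperparameters.

import tensorflow as tf
from tensorflow.keras import layers

WINDOW, N_FEATURES = 30, 24  # assumed input window and sensor count

# CNN-4 as described: two Conv1D layers, each followed by max pooling.
cnn4 = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # regression output: predicted RUL in cycles
])

cnn4.compile(optimizer="adam",
             loss="mse",
             metrics=[tf.keras.metrics.RootMeanSquaredError(name="rmse")])
cnn4.summary()

Under the same assumptions, the CNN-1+LSTM variant would use a single Conv1D block followed by two LSTM layers in place of the second convolutional block.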