

(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 15, No. 4, 2024

Improving Prediction Accuracy using Random Forest Algorithm

Nesma Elsayed1*, Sherif Abd Elaleem2, Mohamed Marie3
1 Business Information Systems Department, Faculty of Commerce and Business Administration, Helwan University, Cairo, Egypt
2 Business Administration Department, Faculty of Commerce and Business Administration, Helwan University, Cairo, Egypt
3 Information Systems Department, Faculty of Computers and Artificial Intelligence, Helwan University, Cairo, Egypt

Abstract—One of the latest areas of study in bankruptcy prediction is the performance of financial prediction models. Although several models have been developed, they often do not achieve high performance, especially when using an imbalanced data set. This highlights the need for more exact prediction models. This paper examines the application as well as the benefits of machine learning for constructing prediction models in the field of corporate financial performance. There is a lack of scientific research on the effects of using random forest algorithms in both the attribute selection and the prediction process for enhancing financial prediction. This paper tests various feature selection methods along with different prediction models to fill that gap. The study used a quantitative approach to develop and propose a business failure model. The approach involved analyzing and preprocessing a large dataset of bankrupt and non-bankrupt enterprises. The performance of the models was then evaluated using metrics such as accuracy, precision, and recall. Findings from the present study show that random forest is recommended as the best model to predict corporate bankruptcy. Moreover, the findings indicate that the proper use of attribute selection methods helps to enhance the prediction precision of the proposed models. The use of the random forest algorithm in feature selection and prediction can produce more exact and more reliable results in predicting bankruptcy. The study demonstrates the potential of machine learning techniques to enhance financial performance prediction.

Keywords—Corporate bankruptcy; feature selection; financial ratios; prediction models; random forest

I. INTRODUCTION

Predictions in business are essential tools for decision-making and strategic planning. At its core, a prediction is an educated guess about what the future holds based on past trends and current data. When used correctly, predictions can help businesses prepare for various scenarios and make informed decisions.

It is a common fact that there is no certainty in the field of business. Prediction models can provide decision makers with a framework to set more realistic strategies by predicting financial performance. In the case of predicting a business failure, management can act to prevent bankruptcy. Bankruptcy prediction helps to increase the accuracy of the decision-making process for business enterprises, since it has a variety of applications in financial fields [1].

The key idea is that public information of corporations comprises significant data and information that could be used by investors to assess financial status, which may be a major factor in causing bankruptcy [2]. Financial crisis prediction indicators include profitability, solvency, growth ability, cash flow, and capital structure [3]. Enhanced prediction accuracy is bound to increase the earnings of shareholders by improving financial risk management in rising markets [4].

Recent research has employed financial ratios to build exploratory models for business failure. To improve prediction accuracy, it is important to find the factors most influential on financial performance. The discriminatory power obtained by bringing together distinctive groups of financial ratios (FRs) and corporate governance indicators (CGIs) for business failure prediction has been examined [5].

It is worth mentioning that the massive amount of corporate data presents an opportunity to analyze the data deeply and, in turn, gain a great deal of knowledge. Unfortunately, the need for many human resources and a great deal of time limits the benefits of the financial data. Alternatively, improving machine learning techniques can save both time and money. This helps to provide decision makers with significant evidence on which to base strategic plans.

In past works, a variety of prediction models were applied to define the early warning factors of a potential bankruptcy. This paper attempts to examine and compare the significance of using decision tree, k-nearest neighbor, logistic regression, multilayer perceptron, and random forest in predicting corporate failure.

In 2022, a study that used only three financial indicators, the return on assets, the current ratio, and the solvency ratio, reported prediction accuracy rates of more than 80 percent. The study used a data set containing a sample of 3,728 Belgian companies that were declared bankrupt between 2002 and 2012 to anticipate bankruptcy [6].

The main research gap is that "the performance of prediction models attained by combination of various categories of FRs has not been completely investigated. Only some chosen FRs have been utilized in previous researches and the selected attributes may vary from study to study" [7].

*Corresponding Author.
Email ID: Nesma.Ahmed.MSC2022@commerce.helwan.edu.eg


The goal of our study is to use the random forest algorithm for analyzing corporate data, encompassing several aims. Firstly, it aims to evaluate the tendency toward business failure in different companies by developing prediction models that incorporate random forest algorithms in both the attribute selection process and the prediction process. Secondly, the model eases the enhancement of the prediction process by enabling researchers to foresee the influence of fluctuations in ninety-five different financial ratios on corporate financial performance. Additionally, the research contributes by developing a novel model applying the random forest algorithm along with seven categories of financial indicators to anticipate business failures.

The paper is structured as follows: the literature review in Section II provides a clear explanation of previous studies in the bankruptcy prediction field; the methods in Section III present our model, which is based on incorporating the most common algorithms in the financial prediction field; the results in Section IV demonstrate the average performance measures for the prediction models used in our study; and the discussion section clarifies the importance of our model and the significance of our contribution.

II. LITERATURE REVIEW

A. Corporate Bankruptcy Prediction

The substantial rise in the number of papers, especially following the 2008 global financial crisis, has confirmed that corporate bankruptcy is a subject of growing interest, which indicates the importance of this issue for corporations [8]. Regrettably, the COVID-19 pandemic that has spread across the globe since 2020 was one of the major triggers for bankruptcy filings. While data analytics has many applications in the financial field, Bankruptcy Prediction Models (BPM) have witnessed an increase in recognition [9].

Beaver assessed various financial variables to evaluate their ability to classify and predict bankrupt firms, which made him a pioneer in research on enterprise failure prediction [10]. Altman presented business failure models based on discriminant analysis for categorizing economic failure using five financial ratios: working capital/total assets, market value of equity/total debt, earnings before interest and taxes/total assets, retained earnings/total assets, and sales/total assets [11].

Since Altman published one of the most popular models for predicting firm bankruptcy in 1968, a variety of models that predict bankruptcy have been published in the literature [12]. This directs attention not only to the increasing number of studies published, but also to the diversity of enterprise failure prediction models employed for business crisis prediction. Owing to the advances in machine learning methods and computing power in recent years, more diverse analytical tools have been employed to create business failure models with superior precision.

B. Prediction Accuracy Enhancement

There are two steps in assessing financial crisis: the first employs a variety of financial variables, while the second employs diverse classifiers in the construction of the bankruptcy model [13].

First, in the financial variables step, selecting the most informative financial variables can improve prediction performance. According to Chih-Fong Tsai's experimental outcomes, applying attribute selection tools to choose and extract the most valuable, demonstrative, and illustrative variables can reduce the effort and time of training the model, which in turn increases prediction performance [14].

Second, in the model construction step, a variety of methods have been offered, including decision tree, k-nearest neighbor, logistic regression, multilayer perceptron, and random forest. In 1980, Ohlson estimated the probabilities of bankruptcy employing logistic regression [15].

One of the early applications of random forests was reported in 2001, presenting random sampling of trees and the concept of tree correlation. Breiman's findings indicated for the first time that forest algorithms can rival arcing approaches in both classification and regression analysis [16]. Separate research done in 2012 showed that RF is effective for obtaining more accurate results, which assists researchers in estimating feature significance and value [17].

Artificial neural networks (ANNs) are powerful artificial intelligence technologies that are widely used because they are able to combine several nonlinear functions to express non-linear relationships between input data and a class label [18]. A previous study on corporate distress prediction examined the precision of Logit and ANN models to compare statistical and artificial intelligence approaches in modeling financial risk [4].

III. METHODS

A. Dataset

The data set is acquired from the UCI Machine Learning Repository (UCIMLR) [19], which supplies datasets to researchers interested in machine learning. The sample data was originally gathered by the Taiwan Economic Journal. It comprises the financial variables of industrial, electronic, shipping, tourism, and retail companies for the years 1999–2009. The data set includes ninety-five financial indicators and 6,819 rows, of which 6,599 are corporations that did not go bankrupt and 220 are bankrupt corporations. The definition of business failure is based on the rules of the Taiwan Stock Exchange. Our proposed model is shown in Fig. 1.

B. Preprocessing

In the data preprocessing stage, we checked for missing values and duplicates within the dataset, but there were none. We applied data normalization to the dataset. All preprocessing and feature selection steps were conducted using WEKA.

An imbalanced data set is created when the number of observations in one group exceeds the number of observations in the other group. Prediction techniques behave disappointingly on data sets with imbalanced classes because they regularly assume that all classes are represented equally.


As a result, the cases in the minority group are miscategorized as belonging to the majority group [20].
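As an illustration of the preprocessing and class-imbalance checks described above, the following Python sketch loads and normalizes the data and prints the class distribution. It is only a sketch, not the authors' WEKA workflow: the file name taiwan_bankruptcy.csv, the label column Bankrupt?, and the use of min-max scaling are assumptions.

# Sketch only: loading and normalizing the Taiwanese bankruptcy data in Python.
# File name, label column, and scaling method are assumed, not taken from the paper.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("taiwan_bankruptcy.csv")      # hypothetical local copy of the UCI dataset
assert not df.isna().any().any()               # the paper reports no missing values
df = df.drop_duplicates()                      # and no duplicates

y = df["Bankrupt?"]                            # assumed label column: 1 = bankrupt, 0 = healthy
X = df.drop(columns=["Bankrupt?"])             # the ninety-five financial indicators

# Normalize every financial ratio to [0, 1] (min-max scaling assumed)
X = pd.DataFrame(MinMaxScaler().fit_transform(X), columns=X.columns)

print(y.value_counts())                        # expected: 6,599 non-bankrupt vs. 220 bankrupt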

Fig. 1. Our proposed model: the dataset is preprocessed (checking for missing data, removing duplicates, data normalization), features are selected from the full feature set with the wrapper method using DT, K-NN, LR, MLP, and RF as base classifiers, prediction models are built with the same five algorithms, and the models are evaluated using accuracy, precision, and recall.

Each year, the number of business failures is small in comparison with the number of companies that did not go bankrupt. If failed corporations are treated as outliers, this causes a key breach of the fundamental distributional assumptions of logistic regression [21]. Resampling techniques generate new samples of data from the original dataset using a set of statistical methods. They are essential to lower the danger of the study or machine learning algorithm biasing toward the majority class.

We applied an unsupervised resample filter to the data to obtain more reliable results by producing a random subsample of the dataset. It applies oversampling to the minority group and undersampling to the majority group at the same time, while keeping the same number of records as the original dataset. Thus, unsupervised resampling combines the benefits of both over- and under-sampling, which leads to more reliable and realistic results.

C. Feature Selection

Attribute selection is the practice of selecting the important attributes that influence the performance of the model. Attribute selection is treated as a search problem in the wrapper methodology, so different combinations of attributes are made, assessed, and compared with other combinations. The algorithm is trained iteratively using subsets of features.

In the present study, the wrapper method is applied to the resampled data since it interacts with the classifier, models feature dependencies, minimizes computational cost, and provides good classification accuracy.
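The following sketch approximates the resample filter described above: the minority (bankrupt) class is oversampled and the majority class is undersampled so that the classes become roughly balanced while the total number of records stays the same. It is an approximation of the WEKA filter's effect rather than its exact configuration, and it reuses the X, y, and Bankrupt? names assumed in the earlier sketch.

# Approximation of the resample filter: balance the classes while keeping the
# original dataset size (an assumption about the exact WEKA settings).
import pandas as pd
from sklearn.utils import resample

data = pd.concat([X, y], axis=1)
minority = data[data["Bankrupt?"] == 1]
majority = data[data["Bankrupt?"] == 0]

target_size = len(data) // 2                      # roughly half of the 6,819 records per class
minority_up = resample(minority, replace=True, n_samples=target_size, random_state=42)
majority_dn = resample(majority, replace=False, n_samples=len(data) - target_size,
                       random_state=42)

resampled = pd.concat([minority_up, majority_dn]).sample(frac=1, random_state=42)
X_res = resampled.drop(columns=["Bankrupt?"])
y_res = resampled["Bankrupt?"]
print(y_res.value_counts())                       # roughly balanced, 6,819 rows in total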


The features chosen will differ according to the kind of classifier, as different classifiers perform better with different sets of attributes. The five classifiers employed in the feature selection process, and the outcomes of that process, are illustrated in Table I.

TABLE I. FEATURES SELECTED USING THE WRAPPER METHOD

Classifier                     Attributes selected based on the wrapper method
Decision Tree (DT)             X8, X10, X12, X40, X55, X64, X65, X87, X92
K-nearest Neighbor (KNN)       X9, X14, X21, X25, X31, X52, X54, X62, X73, X85, X87, X90, X92
Logistic Regression (LR)       X11, X13, X17, X18, X21, X27, X34, X39, X44, X51, X64
Multilayer Perceptron (MLP)    X2, X3, X16, X32, X39, X43, X49, X50, X52, X59, X61, X68, X73, X76, X84
Random Forest (RF)             X34, X40, X48, X50, X54, X68, X76, X77, X80, X90, X91, X93

The most significant feature categories for effective bankruptcy prediction are solvency and profitability [5]. All classifiers selected attributes from both categories, except the random forest algorithm, which did not select any attributes from the solvency category. The RF algorithm selected more attributes from the growth category than the other classifiers.
attributes from growth category than other classifiers. IV. RESULTS
D. Prediction

To identify the best bankruptcy model, different techniques were applied to the data set and their outcomes were compared with each other. Models are established according to two distinct situations: (1) all attributes are employed, and (2) only the features chosen using the wrapper method are employed.

We trained models using the same five algorithms employed in the wrapper feature selection method, in order to analyze the relation between employing the same algorithm in both the attribute selection stage and the prediction modeling stage.
E. Evaluation

In cross-validation, the data set is randomly divided into k groups. One group is treated as the test set, whereas the remaining groups are treated as training sets. The training sets are used to teach the model, while the test set is used to assess it. The procedure is repeated until every distinct group has been used as the test set.

Cross-validation is preferred in such cases because it offers the model the chance to learn on multiple train-test splits. This gives a good indication of how accurately the model will operate on unseen data.

In this research, the following metrics are employed to estimate model performance: accuracy, precision, and recall. The computation of these metrics is based on the rules presented in Table II. Accuracy, precision, recall, and F-measure are all constructed from the confusion matrix, which classifies actual and predicted values into the following groups: true positive (TP), true negative (TN), false positive (FP), and false negative (FN).

Accuracy demonstrates how often a machine learning model is correct overall. Depending only on the accuracy measure to estimate model performance can be deceiving when using an imbalanced data set, as it assigns equal weight to the classes, which limits the model's capability to predict all classes. Precision shows how frequently a machine learning model is correct when predicting the intended category. Recall expresses whether a machine learning model can recognize all objects of the intended category.

TABLE II. PERFORMANCE METRICS

Evaluation measure    Rule
Accuracy              (TP + TN) / (TP + TN + FP + FN)
Precision             TP / (TP + FP)
Recall                TP / (TP + FN)
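A minimal sketch of the evaluation protocol in Section III-E: each configuration is assessed with k-fold cross-validation and scored with accuracy plus weighted-average precision and recall, mirroring Table II. The choice of ten stratified folds is an assumption (the paper does not state k), and the variable names continue from the earlier sketches.

# k-fold cross-validation of each model/feature-set configuration, scored with
# weighted-average metrics because the bankruptcy classes are imbalanced.
from sklearn.model_selection import StratifiedKFold, cross_validate

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)   # k = 10 assumed
scoring = {"accuracy": "accuracy",
           "precision": "precision_weighted",
           "recall": "recall_weighted"}

for model_name, fs_name in configurations:
    Xc = X_res[feature_sets[fs_name]]
    scores = cross_validate(models[model_name], Xc, y_res, cv=cv, scoring=scoring)
    print(f"{model_name:>3} + {fs_name:<12} "
          f"acc={scores['test_accuracy'].mean():.4f} "
          f"prec={scores['test_precision'].mean():.4f} "
          f"rec={scores['test_recall'].mean():.4f}")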
IV. RESULTS

Since bankruptcy is an imbalanced problem, the weighted average is preferred for measuring the performance of the classification models. Table III in the Appendix summarizes the scores of the evaluation process. Considering these results, we can state that some models, such as KNN, MLP, and RF, work better with the features chosen using the wrapper method with the same algorithm. On the contrary, some models, such as decision tree and logistic regression, provided better results employing all attributes.

Fig. 2. Average accuracy of each model (DT, K-NN, LR, MLP, and RF); random forest is the highest at 98.80% and logistic regression the lowest at 96.73%.


Fig. 3. Average precision of each model (DT, K-NN, LR, MLP, and RF); random forest is the highest at 98.77% and logistic regression the lowest at 95.70%.

Fig. 4. Average recall of each model (DT, K-NN, LR, MLP, and RF); random forest is the highest at 98.78% and logistic regression the lowest at 96.73%.

In analyzing Fig. 2, which illustrates and compares the differences in the models' average accuracy, we see that RF performs better than the other models. As shown in this figure, random forest was the most accurate model with 98.80%, compared with the lowest accuracy of 96.73% for the logistic model.

Fig. 3 displays the average precision of the models employing different feature selection algorithms. Once more, the data suggest that RF, on average, provides better performance than its counterparts. This finding also supports the preceding observation that RF is more efficient. While random forest was the most precise model with 98.77%, the logistic model, with 95.70%, was the lowest in precision.

Fig. 4 also shows that the random forest model performs better than the other models. Again, we notice the same tendency of the RF model exceeding the other models. The random forest model was the model best able to correctly identify most of the positive cases, with 98.78% sensitivity, while the logistic model had only 96.73%, the lowest. We can state that the model using the random forest technique outperformed all other models in all performance metrics in predicting bankruptcy.

V. DISCUSSION

To further confirm our findings, we compared them to other studies using the same sample dataset of Taiwanese enterprises along with various resampling, attribute selection, and prediction techniques.

In 2016, the models published by [5] employed Support Vector Machine (SVM) and generated five different machine learning models. Along with the ninety-five financial ratios we utilized, they also used CGIs. They employed 10-fold cross-validation to generate ten distinct training and test samples. They also tried five alternative attribute selection techniques. The best-performing model in their study achieved 81.5% accuracy, which was exceeded by the weakest model in our research.

In 2022, the research by [22] closely examined the discriminatory competence of an MLP in financial failure prediction. For this purpose, they employed different setups of optimization algorithms, activation functions, numbers of neurons, and numbers of layers. The best-performing model in their study achieved 86.67% accuracy, 95.47% precision, and 85.24% sensitivity, which was outperformed by the worst model in our study.

VI. CONCLUSIONS

This research covers the use of different techniques with the aim of enhancing prediction results. We can state that, firstly, using feature selection can significantly improve the performance of prediction models. Secondly, prediction models constructed using random forest algorithms outperformed other models built with different machine learning techniques in terms of accuracy, precision, and sensitivity. Thirdly, growth ratios are significant attributes in datasets used for financial failure prediction. The results of this study suggest that, in general, random forest algorithms tend to attain more exact results. The impressive performance of the random forest model can be improved further when the wrapper method, with the random forest algorithm as the base classifier, is used to detect the best features for the classifier. Practitioners can benefit from these conclusions to enhance the accuracy of their predictions. For future work, researchers may use different feature selection methods combined with a diversity of resampling approaches to identify what works better.

Funding: None.

Conflicts of Interest: None.
REFERENCES
[1] Y. Zhang et al., "Towards augmented kernel extreme learning models for bankruptcy prediction: algorithmic behavior and comprehensive analysis," Neurocomputing, vol. 430, pp. 185-212, 2021, doi: 10.1016/j.neucom.2020.10.038.
[2] Z. Huang, H. Chen, C.-J. Hsu, W.-H. Chen, and S. Wu, "Credit rating analysis with support vector machines and neural networks: a market comparative study," Decision Support Systems, vol. 37, no. 4, pp. 543-558, 2004, doi: 10.1016/S0167-9236(03)00086-1.


[3] M. Jiang and X. Wang, "Research on intelligent prediction method of financial crisis of listed enterprises based on Random Forest algorithm," Security and Communication Networks, vol. 2021, pp. 1-7, 2021.
[4] L. Muparuri and V. Gumbo, "On logit and artificial neural networks in corporate distress modelling for Zimbabwe listed corporates," Sustainability Analytics and Modeling, vol. 2, p. 100006, 2022, doi: 10.1016/j.samod.2022.100006.
[5] D. Liang, C.-C. Lu, C.-F. Tsai, and G.-A. Shih, "Financial ratios and corporate governance indicators in bankruptcy prediction: A comprehensive study," European Journal of Operational Research, vol. 252, no. 2, pp. 561-572, 2016.
[6] S. Shetty, M. Musa, and X. Brédart, "Bankruptcy Prediction Using Machine Learning Techniques," Journal of Risk and Financial Management, vol. 15, no. 1, p. 35, 2022.
[7] D. Liang, C.-F. Tsai, H.-Y. R. Lu, and L.-S. Chang, "Combining corporate governance indicators with stacking ensembles for financial distress prediction," Journal of Business Research, vol. 120, pp. 137-146, 2020.
[8] Y. Shi and X. Li, "An overview of bankruptcy prediction models for corporate firms: A systematic literature review," Intangible Capital, vol. 15, no. 2, pp. 114-127, 2019, doi: 10.3926/ic.1354.
[9] S. C. Mann and R. Logeswaran, "Data Analytics in Improved Bankruptcy Prediction with Industrial Risk," in 2021 14th International Conference on Developments in eSystems Engineering (DeSE), IEEE, 2021, pp. 23-26, doi: 10.1109/DeSE54285.2021.9719372.
[10] W. H. Beaver, "Financial Ratios As Predictors of Failure," Journal of Accounting Research, vol. 4, pp. 71-111, 1966, doi: 10.2307/2490171.
[11] E. I. Altman, "Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy," The Journal of Finance, vol. 23, no. 4, pp. 589-609, 1968, doi: 10.1111/j.1540-6261.1968.tb00843.x.
[12] J. L. Bellovary, D. E. Giacomino, and M. D. Akers, "A review of bankruptcy prediction studies: 1930 to present," Journal of Financial Education, pp. 1-42, 2007.
[13] F. Lin, D. Liang, and E. Chen, "Financial ratio selection for business crisis prediction," Expert Systems with Applications, vol. 38, no. 12, pp. 15094-15102, 2011, doi: 10.1016/j.eswa.2011.05.035.
[14] C.-F. Tsai, "Feature selection in bankruptcy prediction," Knowledge-Based Systems, vol. 22, no. 2, pp. 120-127, 2009, doi: 10.1016/j.knosys.2008.08.002.
[15] J. A. Ohlson, "Financial ratios and the probabilistic prediction of bankruptcy," Journal of Accounting Research, pp. 109-131, 1980, doi: 10.2307/2490395.
[16] L. Breiman, "Random forests," Machine Learning, vol. 45, pp. 5-32, 2001, doi: 10.1023/A:1010933404324.
[17] D. Sharma, "Improving the art, craft and science of economic credit risk scorecards using random forests: Why credit scorers and economists should use random forests," SSRN working paper, June 9, 2011, doi: 10.2139/ssrn.1861535.
[18] J. Heaton, "Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning: The MIT Press, 2016, 800 pp, ISBN: 0262035618," Genetic Programming and Evolvable Machines, vol. 19, no. 1-2, pp. 305-307, 2018, doi: 10.1007/s10710-017-9314-z.
[19] Taiwanese Bankruptcy Prediction, UCI Machine Learning Repository, doi: 10.24432/C5004D.
[20] Y. F. Roumani, J. K. Nwankpa, and M. Tanniru, "Predicting firm failure in the software industry," Artificial Intelligence Review, vol. 53, pp. 4161-4182, 2020, doi: 10.1007/s10462-019-09789-2.
[21] R. P. Hauser and D. Booth, "Predicting bankruptcy with robust logistic regression," Journal of Data Science, vol. 9, no. 4, pp. 565-584, 2011, doi: 10.6339/JDS.201110_09(4).0006.
[22] R. F. Brenes, A. Johannssen, and N. Chukhrova, "An intelligent bankruptcy prediction model using a multilayer perceptron," Intelligent Systems with Applications, p. 200136, 2022.

APPENDIX

TABLE III. COMPARISON BETWEEN THE OUTCOMES OF USING DIFFERENT ATTRIBUTE SELECTION AND MODELING ALGORITHMS

Prediction model    Feature selection    Accuracy    Precision    Recall
DT                  None                 98.0056%    .979         .980
DT                  DT Wrapper           98.0496%    .979         .980
DT                  KNN Wrapper          97.6536%    .974         .977
DT                  LR Wrapper           97.375%     .971         .974
DT                  MLP Wrapper          97.8003%    .976         .978
DT                  RF Wrapper           97.4776%    .974         .975
KNN                 None                 98.1816%    .981         .982
KNN                 DT Wrapper           98.1522%    .981         .982
KNN                 KNN Wrapper          98.5922%    .985         .986
KNN                 LR Wrapper           97.8003%    .978         .978
KNN                 MLP Wrapper          97.9909%    .980         .980
KNN                 RF Wrapper           98.0496%    .980         .980
LR                  None                 97.1257%    .967         .971
LR                  DT Wrapper           96.5244%    .949         .965
LR                  KNN Wrapper          96.5684%    .953         .966
LR                  LR Wrapper           96.891%     .964         .969
LR                  MLP Wrapper          96.8031%    .960         .968
LR                  RF Wrapper           96.4511%    .949         .965
MLP                 None                 98.0056%    .979         .980
MLP                 DT Wrapper           96.7004%    .959         .967
MLP                 KNN Wrapper          96.7004%    .957         .967
MLP                 LR Wrapper           96.6711%    .956         .967
MLP                 MLP Wrapper          96.979%     .964         .970
MLP                 RF Wrapper           96.7591%    .959         .968
RF                  None                 98.8268%    .988         .988
RF                  DT Wrapper           98.7975%    .987         .988
RF                  KNN Wrapper          98.7975%    .988         .988
RF                  LR Wrapper           98.6362%    .986         .986
RF                  MLP Wrapper          98.8121%    .988         .988
RF                  RF Wrapper           98.9001%    .989         .989

ORCID: Nesma Elsayed: https://orcid.org/my-orcid?orcid=0009-0004-7859-562X
