DM Assignment - Thera Bank
Santhosh Sadasivam
12/11/2019
Description
Thera Bank - Loan Purchase Modeling
This case is about a bank (Thera Bank) which has a growing customer base. Majority of
these customers are liability customers (depositors) with varying size of deposits. The
number of customers who are also borrowers (asset customers) is quite small, and the
bank is interested in expanding this base rapidly to bring in more loan business and in the
process, earn more through the interest on loans. In particular, the management wants to
explore ways of converting its liability customers to personal loan customers (while
retaining them as depositors). A campaign that the bank ran last year for liability
customers showed a healthy conversion rate of over 9%. This has encouraged the
retail marketing department to devise campaigns with better target marketing to increase
the success ratio with a minimal budget. The department wants to build a model that will
help them identify the potential customers who have a higher probability of purchasing the
loan. This will increase the success ratio while at the same time reduce the cost of the
campaign. The dataset has data on 5000 customers. The data include customer
demographic information (age, income, etc.), the customer’s relationship with the bank
(mortgage, securities account, etc.), and the customer response to the last personal loan
campaign (Personal Loan). Among these 5000 customers, only 480 (= 9.6%) accepted the
personal loan that was offered to them in the earlier campaign.
Problem Statement
# Libraries to install
library(readxl)
library(readr)
library(DataExplorer)
library(caTools)
library(rpart)
library(rpart.plot)
library(rattle)
library(data.table)
library(ROCR)
library(ineq)
library(InformationValue)
library(ModelMetrics)
library(reshape)
library(randomForest)
## randomForest 4.6-14
# Load the dataset (the read step is missing in the source; the path below is a placeholder)
data = as.data.frame(read_xlsx("<path to Thera Bank dataset>.xlsx"))
attach(data)
# Summary of the data set for each column: Mean, Median, Min, Max, 1st Qu., 3rd Qu.
summary(data)
str(data)
# All variables are numeric; the object is a mix of data.table and data.frame
dim(data)
## [1] 5000   14
colnames(data)=make.names(colnames(data))
print(colnames(data))
sum(is.na(data))
## [1] 18
There are 18 NAs in the dataset; as observed below, all 18 are in the Family.members column.
# Proportion of responders and non-responders to the personal loan campaign
prop.table(table(data$Personal.Loan))*100
##
## 0 1
## 90.4 9.6
9.6% responded to the personal loan campaign; 90.4% did not respond.
# missing values and plotting them
plot_missing(data)
colSums(is.na(data))
Family.members has 0.36% missing values. Since the percentage is low, we can delete
the affected rows from the dataset.
# Missing Value Treatment
print.data.frame(data[!complete.cases(data),])  # rows where NA is present
data = data[complete.cases(data),]  # drop the 18 rows with missing Family.members
print(colSums(data < 0))  # check whether any column has negative values
Age and Experience are highly correlated; Income and average monthly credit card
spending are moderately correlated. No other significant correlation is observed in the plot.
# Train/Test split (70:30)
seed = 1000
set.seed(seed)
x = sample.split(data$Personal.Loan, SplitRatio = 0.7)
TrainDS = subset(data, x==TRUE)
TestDS = subset(data,x==FALSE)
TrainDS_RF = TrainDS
TestDS_RF = TestDS
Modelling
# CART Modelling
# setting CART Parameters
cartParameters = rpart.control(minsplit = 15, cp =0.009,xval = 10)
cartModel = rpart(formula = TrainDS$Personal.Loan ~ ., data = TrainDS, method = "class", control = cartParameters)
cartModel
## n= 3488
##
## node), split, n, loss, yval, (yprob)
## * denotes terminal node
##
## 1) root 3488 335 0 (0.903956422 0.096043578)
## 2) Income..in.K.month.< 119.5 2874 76 0 (0.973556019 0.026443981)
## 4) CCAvg< 2.95 2633 13 0 (0.995062666 0.004937334) *
## 5) CCAvg>=2.95 241 63 0 (0.738589212 0.261410788)
## 10) CD.Account=0 221 47 0 (0.787330317 0.212669683)
## 20) Education=1 113 9 0 (0.920353982 0.079646018) *
## 21) Education=2,3 108 38 0 (0.648148148 0.351851852)
##       42) Income..in.K.month.< 92.5 67 9 0 (0.865671642 0.134328358) *
##       43) Income..in.K.month.>=92.5 41 12 1 (0.292682927 0.707317073) *
## 11) CD.Account=1 20 4 1 (0.200000000 0.800000000) *
## 3) Income..in.K.month.>=119.5 614 259 0 (0.578175896 0.421824104)
## 6) Education=1 406 51 0 (0.874384236 0.125615764)
## 12) Family.members=1,2 355 0 0 (1.000000000 0.000000000) *
## 13) Family.members=3,4 51 0 1 (0.000000000 1.000000000) *
## 7) Education=2,3 208 0 1 (0.000000000 1.000000000) *
printcp(cartModel)
##
## Classification tree:
## rpart(formula = TrainDS$Personal.Loan ~ ., data = TrainDS, method = "class",
##     control = cartParameters)
##
## Variables actually used in tree construction:
## [1] CCAvg CD.Account Education
## [4] Family.members Income..in.K.month.
##
## Root node error: 335/3488 = 0.096044
##
## n= 3488
##
## CP nsplit rel error xerror xstd
## 1 0.31045 0 1.00000 1.00000 0.051946
## 2 0.15224 2 0.37910 0.38806 0.033395
## 3 0.01791 3 0.22687 0.23582 0.026230
## 4 0.00900 7 0.14030 0.14925 0.020956
plotcp(cartModel)
The built CART tree has scope for pruning, as seen from the above plot, by choosing the
CP value with the lowest cross-validated error.
# Finding the best CP (lowest xerror in the CP table)
bestCP = cartModel$cptable[which.min(cartModel$cptable[, "xerror"]), "CP"]
bestCP
## [1] 0.009
## pruning Tree
pTree = prune(cartModel, cp = bestCP)
pTree
## n= 3488
##
## node), split, n, loss, yval, (yprob)
## * denotes terminal node
##
## 1) root 3488 335 0 (0.903956422 0.096043578)
## 2) Income..in.K.month.< 119.5 2874 76 0 (0.973556019 0.026443981)
## 4) CCAvg< 2.95 2633 13 0 (0.995062666 0.004937334) *
## 5) CCAvg>=2.95 241 63 0 (0.738589212 0.261410788)
## 10) CD.Account=0 221 47 0 (0.787330317 0.212669683)
## 20) Education=1 113 9 0 (0.920353982 0.079646018) *
## 21) Education=2,3 108 38 0 (0.648148148 0.351851852)
##       42) Income..in.K.month.< 92.5 67 9 0 (0.865671642 0.134328358) *
##       43) Income..in.K.month.>=92.5 41 12 1 (0.292682927 0.707317073) *
## 11) CD.Account=1 20 4 1 (0.200000000 0.800000000) *
## 3) Income..in.K.month.>=119.5 614 259 0 (0.578175896 0.421824104)
## 6) Education=1 406 51 0 (0.874384236 0.125615764)
## 12) Family.members=1,2 355 0 0 (1.000000000 0.000000000) *
## 13) Family.members=3,4 51 0 1 (0.000000000 1.000000000) *
## 7) Education=2,3 208 0 1 (0.000000000 1.000000000) *
printcp(pTree)
##
## Classification tree:
## rpart(formula = TrainDS$Personal.Loan ~ ., data = TrainDS, method = "class",
##     control = cartParameters)
##
## Variables actually used in tree construction:
## [1] CCAvg CD.Account Education
## [4] Family.members Income..in.K.month.
##
## Root node error: 335/3488 = 0.096044
##
## n= 3488
##
## CP nsplit rel error xerror xstd
## 1 0.31045 0 1.00000 1.00000 0.051946
## 2 0.15224 2 0.37910 0.38806 0.033395
## 3 0.01791 3 0.22687 0.23582 0.026230
## 4 0.00900 7 0.14030 0.14925 0.020956
# Confusion Matrix on training data
# (prediction step reconstructed: class labels and P(loan = 1) from the pruned tree)
TrainDS$Prediction = predict(pTree, newdata = TrainDS, type = "class")
TrainDS$Probability = predict(pTree, newdata = TrainDS, type = "prob")[, 2]
tb1_TrDS_CART = table(TrainDS$Prediction, TrainDS$Personal.Loan)
tb1_TrDS_CART
##
##        0    1
##   0 3137   31
##   1   16  304
CER_TrDS = (tb1_TrDS_CART[1,2]+tb1_TrDS_CART[2,1])/sum(tb1_TrDS_CART)
CER_TrDS
## [1] 0.01347477
Acc_TrDS = 1 - CER_TrDS
Acc_TrDS
## [1] 0.9865252
# True positive rate (sensitivity) and true negative rate (specificity)
TPR_TrDS = tb1_TrDS_CART[2,2]/(tb1_TrDS_CART[1,2]+tb1_TrDS_CART[2,2])
TPR_TrDS
## [1] 0.9074627
TNR_TrDS = tb1_TrDS_CART[1,1]/(tb1_TrDS_CART[1,1]+tb1_TrDS_CART[2,1])
TNR_TrDS
## [1] 0.9949255
# Decile cut points from the predicted probabilities
qt_TrDS_CART = quantile(TrainDS$Probability, seq(0, 1, length = 11))
TrainDS$deciles = cut(TrainDS$Probability, unique(qt_TrDS_CART), include.lowest = TRUE, right = TRUE)
table(TrainDS$deciles)
##
## [0,0.00494] (0.00494,0.134] (0.134,1]
## 2988 180 320
Three buckets were created from the decile cut points: 0-0.00494 is one bucket,
0.00494-0.134 the second, and 0.134-1 the last.
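Only three buckets survive because `cut` is fed `unique(qt_TrDS_CART)`: most CART leaf probabilities are tied, so many of the ten requested decile boundaries coincide and collapse. A minimal sketch with hypothetical scores (not the assignment's data):

```r
# Hypothetical scores with heavy ties, as CART leaf probabilities tend to have
scores <- c(rep(0.005, 80), rep(0.13, 10), rep(0.9, 10))
qs <- quantile(scores, seq(0, 1, length = 11))  # ten deciles requested
cuts <- unique(qs)                              # tied boundaries collapse
buckets <- cut(scores, cuts, include.lowest = TRUE, right = TRUE)
table(buckets)                                  # only three buckets remain
```

The same collapse explains the three-bucket tables for both the training and test data.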
# Rank ordering table (model performance)
TrainDS = data.table(TrainDS)
rankTbl_TrDS_CART = TrainDS[, list(
cnt = length(Personal.Loan),
cnt_tar1 = sum(Personal.Loan == 1),
cnt_tar0 = sum(Personal.Loan == 0)),
by=deciles][order(-deciles)]
rankTbl_TrDS_CART$resp_rate = round(rankTbl_TrDS_CART$cnt_tar1 /
rankTbl_TrDS_CART$cnt,4)*100;
rankTbl_TrDS_CART$cum_resp = cumsum(rankTbl_TrDS_CART$cnt_tar1)
rankTbl_TrDS_CART$cum_non_resp = cumsum(rankTbl_TrDS_CART$cnt_tar0)
rankTbl_TrDS_CART$cum_rel_resp = round(rankTbl_TrDS_CART$cum_resp /
sum(rankTbl_TrDS_CART$cnt_tar1),4)*100
rankTbl_TrDS_CART$cum_rel_non_resp = round(rankTbl_TrDS_CART$cum_non_resp /
sum(rankTbl_TrDS_CART$cnt_tar0),4)*100
rankTbl_TrDS_CART$ks = abs(rankTbl_TrDS_CART$cum_rel_resp -
rankTbl_TrDS_CART$cum_rel_non_resp) #ks
print(rankTbl_TrDS_CART)
# Concordance on training data
Concordance_TrDS = Concordance(actuals = TrainDS$Personal.Loan, predictedScores = TrainDS$Probability)
Concordance_TrDS
## $Concordance
## [1] 0.9629162
##
## $Discordance
## [1] 0.03708385
##
## $Tied
## [1] -2.775558e-17
##
## $Pairs
## [1] 1056255
Concordance is about 96%, which indicates a very good model.
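`Concordance` counts, over all (responder, non-responder) pairs, how often the responder received the higher predicted score. A brute-force sketch on hypothetical data (not the assignment's):

```r
# Hypothetical actuals and predicted scores
actual <- c(1, 1, 0, 0, 0)
score  <- c(0.9, 0.4, 0.5, 0.2, 0.1)
# All responder / non-responder index pairs
pairs <- expand.grid(r = which(actual == 1), n = which(actual == 0))
concordance <- mean(score[pairs$r] > score[pairs$n])
concordance  # 5 of 6 pairs are ranked correctly
```

A concordance near 1 means the model almost always scores a true responder above a true non-responder, which is exactly what a targeting model needs.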
# Root Mean Square Error (RMSE)
# computed treating the personal loan as a continuous variable or number
RMSE_TrDS = rmse(TrainDS$Personal.Loan,TrainDS$Prediction)
RMSE_TrDS
## [1] 0.1160809
# Mean Absolute Error (MAE)
MAE_TrDS = mae(TrainDS$Personal.Loan, TrainDS$Prediction)
MAE_TrDS
## [1] 0.01347477
# Prediction on test data (reconstructed: class labels and P(loan = 1))
TestDS$Prediction = predict(pTree, newdata = TestDS, type = "class")
TestDS$Probability = predict(pTree, newdata = TestDS, type = "prob")[, 2]
tb1_TeDS = table(TestDS$Prediction, TestDS$Personal.Loan)
print(tb1_TeDS)
##
## 0 1
## 0 1343 14
## 1 8 129
CeR_TeDS = (tb1_TeDS[1,2]+tb1_TeDS[2,1])/sum(tb1_TeDS)
CeR_TeDS
## [1] 0.01472557
Accuracy_TeDS = 1 - CeR_TeDS
Accuracy_TeDS
## [1] 0.9852744
Accuracy of the model on the test data is 98.5%, quite similar to that on the
training data.
# finding True positive rate / Sensitivity
TPR_TeDS=tb1_TeDS[2,2]/(tb1_TeDS[1,2]+tb1_TeDS[2,2])
TPR_TeDS
## [1] 0.9020979
# True negative rate / specificity
TNR_TeDS = tb1_TeDS[1,1]/(tb1_TeDS[1,1]+tb1_TeDS[2,1])
TNR_TeDS
## [1] 0.9940785
As observed, most of the responders fall in the 90%-100% probability bucket; almost
86% of the responses are captured there.
# Decile cut points for the test data
qt_TeDS_CART = quantile(TestDS$Probability, seq(0, 1, length = 11))
TestDS$deciles = cut(TestDS$Probability, unique(qt_TeDS_CART), include.lowest = TRUE, right = TRUE)
table(TestDS$deciles)
##
## [0,0.00494] (0.00494,0.134] (0.134,1]
## 1282 75 137
testDT = data.table(TestDS)
rankTbl_TeDS_CART = testDT[, list(
cnt = length(Personal.Loan),
cnt_tar1 = sum(Personal.Loan == 1),
cnt_tar0 = sum(Personal.Loan == 0)),
by=deciles][order(-deciles)]
rankTbl_TeDS_CART$resp_rate = round(rankTbl_TeDS_CART$cnt_tar1 /
rankTbl_TeDS_CART$cnt,4)*100
rankTbl_TeDS_CART$cum_resp = cumsum(rankTbl_TeDS_CART$cnt_tar1)
rankTbl_TeDS_CART$cum_non_resp = cumsum(rankTbl_TeDS_CART$cnt_tar0)
rankTbl_TeDS_CART$cum_rel_resp = round(rankTbl_TeDS_CART$cum_resp /
sum(rankTbl_TeDS_CART$cnt_tar1),4)*100
rankTbl_TeDS_CART$cum_rel_non_resp = round(rankTbl_TeDS_CART$cum_non_resp /
sum(rankTbl_TeDS_CART$cnt_tar0),4)*100
rankTbl_TeDS_CART$ks = abs(rankTbl_TeDS_CART$cum_rel_resp -
rankTbl_TeDS_CART$cum_rel_non_resp) #ks
rankTbl_TeDS_CART
# Concordance on test data
Concordance_TeDS = Concordance(actuals = TestDS$Personal.Loan, predictedScores = TestDS$Probability)
Concordance_TeDS
## $Concordance
## [1] 0.9419803
##
## $Discordance
## [1] 0.0580197
##
## $Tied
## [1] -4.163336e-17
##
## $Pairs
## [1] 193193
# Root Mean Square Error (RMSE)
# computed treating the personal loan as a continuous variable or number
RMSE_TeDS = rmse(TestDS$Personal.Loan,TestDS$Prediction)
RMSE_TeDS
## [1] 0.121349
# Mean Absolute Error (MAE)
MAE_TeDS = mae(TestDS$Personal.Loan, TestDS$Prediction)
MAE_TeDS
## [1] 0.01472557
# KPI labels for the comparison table (reconstructed to match the metric order below)
Performance_KPI = c("CER", "Accuracy", "TPR", "TNR", "KS", "AUC", "Gini", "Concordance", "RMSE", "MAE")
Training_CART = c(CER_TrDS,
Acc_TrDS,
TPR_TrDS,
TNR_TrDS,
ks_TrDS,
auc_TrDS,
gini_TrDS,
Concordance_TrDS$Concordance,
RMSE_TrDS,
MAE_TrDS)
Test_CART =c(CeR_TeDS,
Accuracy_TeDS,
TPR_TeDS,
TNR_TeDS,
ks_TeDS,
auc_TeDS,
gini_TeDS,
Concordance_TeDS$Concordance,
RMSE_TeDS,
MAE_TeDS)
x=cbind(Performance_KPI,Training_CART,Test_CART)
x=data.table(x)
x$Training_CART=as.numeric(x$Training_CART)
x$Test_CART=as.numeric(x$Test_CART)
print(x)
TrainDS = TrainDS_RF
TestDS = TestDS_RF
# Building the Random Forest (call reconstructed from the echoed output below)
TrainDS$Personal.Loan = as.factor(TrainDS$Personal.Loan)  # classification target
TestDS$Personal.Loan = as.factor(TestDS$Personal.Loan)
set.seed(seed)
rndForest = randomForest(Personal.Loan ~ ., data = TrainDS, ntree = 501, mtry = 5, nodesize = 10, importance = TRUE)
print(rndForest)
##
## Call:
## randomForest(formula = Personal.Loan ~ ., data = TrainDS, ntree = 501,
##     mtry = 5, nodesize = 10, importance = TRUE)
## Type of random forest: classification
## Number of trees: 501
## No. of variables tried at each split: 5
##
## OOB estimate of error rate: 1.32%
## Confusion matrix:
## 0 1 class.error
## 0 3147 6 0.00190295
## 1 40 295 0.11940299
min(rndForest$err.rate)
## [1] 0.001585791
# Plotting error rates for the Random Forest
plot(rndForest)
print(rndForest$importance)
## 0 1 MeanDecreaseAccuracy
## Age..in.years. 3.673304e-03 6.194839e-04 3.378982e-03
## Experience..in.years. 3.189203e-03 2.180676e-03 3.091802e-03
## Income..in.K.month. 1.285083e-01 4.589875e-01 1.600288e-01
## Family.members 5.306916e-02 7.721191e-02 5.536076e-02
## CCAvg 3.162860e-02 7.401166e-02 3.565070e-02
## Education 7.129464e-02 1.317559e-01 7.705097e-02
## Mortgage 1.186234e-03 -2.870626e-03 7.936276e-04
## Securities.Account 4.405486e-05 -5.149235e-05 3.663810e-05
## CD.Account 3.255770e-03 1.031084e-02 3.929585e-03
## Online 5.194141e-05 3.917111e-04 8.482881e-05
## CreditCard 6.970194e-04 5.825216e-04 6.873496e-04
## MeanDecreaseGini
## Age..in.years. 9.7745913
## Experience..in.years. 9.9801487
## Income..in.K.month. 182.4077177
## Family.members 83.4483778
## CCAvg 80.2558097
## Education 155.3222629
## Mortgage 8.3953196
## Securities.Account 0.7925112
## CD.Account 29.3202665
## Online 1.0615870
## CreditCard 2.0055241
set.seed(seed)
tRndForest=tuneRF(x=TrainDS[,-which(colnames(TrainDS)=="Personal.Loan")], y=TrainDS$Personal.Loan,
mtryStart = 9,
ntreeTry = 101,
stepFactor = 1.2,
improve = 0.001,
trace = FALSE,
plot = TRUE,
doBest = TRUE,
nodesize = 10,
importance = TRUE )
## 0.1632653 0.001
## -0.1463415 0.001
## -0.1707317 0.001
# Finding important variables
importance(tRndForest)
## 0 1 MeanDecreaseAccuracy
## Age..in.years. 17.785873 -1.1339472 16.0236905
## Experience..in.years. 13.793447 -0.5375885 13.0188811
## Income..in.K.month. 235.480675 123.4019580 245.4670544
## Family.members 174.449089 68.6975275 178.4550860
## CCAvg 34.016640 52.1285949 41.2768729
## Education 226.777339 96.6939567 238.5661690
## Mortgage 4.066117 0.1846508 4.0537627
## Securities.Account -1.568248 1.5192923 -1.0841882
## CD.Account 13.230849 13.5666479 17.7969469
## Online 0.454509 0.6883306 0.7216765
## CreditCard 2.689135 -0.4054599 2.2414486
## MeanDecreaseGini
## Age..in.years. 7.9980541
## Experience..in.years. 6.7010570
## Income..in.K.month. 190.9673454
## Family.members 97.0932383
## CCAvg 58.7611727
## Education 189.1666498
## Mortgage 2.9831095
## Securities.Account 0.5013371
## CD.Account 18.3850897
## Online 0.8344523
## CreditCard 0.9173974
# Prediction on training data (reconstructed from the tuned forest)
TrainDS$Prediction_RF = predict(tRndForest, newdata = TrainDS, type = "class")
TrainDS$Probability1_RF = predict(tRndForest, newdata = TrainDS, type = "prob")[, 2]
# Confusion Matrix
tbl_TrDS_RF=table(TrainDS$Prediction_RF, TrainDS$Personal.Loan)
tbl_TrDS_RF
##
## 0 1
## 0 3153 17
## 1 0 318
CeR_TrDS_RF=(tbl_TrDS_RF[1,2]+tbl_TrDS_RF[2,1])/sum(tbl_TrDS_RF)
CeR_TrDS_RF
## [1] 0.004873853
# Accuracy:
Accuracy_TrDS_RF=1-(tbl_TrDS_RF[1,2]+tbl_TrDS_RF[2,1])/sum(tbl_TrDS_RF)
Accuracy_TrDS_RF
## [1] 0.9951261
TPR_TrDS_RF=tbl_TrDS_RF[2,2]/(tbl_TrDS_RF[1,2]+tbl_TrDS_RF[2,2])
TPR_TrDS_RF
## [1] 0.9492537
TNR_TrDS_RF=tbl_TrDS_RF[1,1]/(tbl_TrDS_RF[1,1]+tbl_TrDS_RF[2,1])
TNR_TrDS_RF
## [1] 1
probs_TrDS_RF=seq(0,1,length=11)
qs_TrDS_RF=quantile(TrainDS$Probability1_RF, probs_TrDS_RF)
qs_TrDS_RF
## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
## 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.002 0.248 1.000
TrainDS$deciles_RF=cut(TrainDS$Probability1_RF, unique(qs_TrDS_RF),
include.lowest = TRUE, right=TRUE)
table(TrainDS$deciles_RF)
##
## [0,0.002] (0.002,0.248] (0.248,1]
## 2826 313 349
Three buckets were formed: 0-0.002, 0.002-0.248, and 0.248-1. Most observations fall
in the first bucket, while the high-probability responders fall in the last.
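The `ks` column in the rank-ordering tables is the gap between the cumulative % of responders and the cumulative % of non-responders captured up to each bucket; the KS statistic is the largest such gap. A minimal sketch with hypothetical bucket counts:

```r
# Hypothetical responders / non-responders per bucket, highest scores first
cnt_tar1 <- c(30, 10, 5, 5)     # responders captured in each bucket
cnt_tar0 <- c(5, 15, 30, 50)    # non-responders in each bucket
cum_rel_resp     <- cumsum(cnt_tar1) / sum(cnt_tar1) * 100
cum_rel_non_resp <- cumsum(cnt_tar0) / sum(cnt_tar0) * 100
ks <- max(abs(cum_rel_resp - cum_rel_non_resp))
ks  # maximum separation between the two cumulative curves
```

A high KS in the top buckets means the model concentrates responders early, so a campaign can target a small, high-yield slice of customers.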
# Rank ordering table computing
library(data.table)
trainDT = data.table(TrainDS)
rankTbl_TrDS_RF = trainDT[, list(
cnt = length(Personal.Loan),
cnt_tar1= sum(Personal.Loan == 1),
cnt_tar0 = sum(Personal.Loan == 0)),
by=deciles_RF][order(-deciles_RF)]
rankTbl_TrDS_RF$resp_rate = round(rankTbl_TrDS_RF$cnt_tar1 /
rankTbl_TrDS_RF$cnt,4)*100
rankTbl_TrDS_RF$cum_resp = cumsum(rankTbl_TrDS_RF$cnt_tar1)
rankTbl_TrDS_RF$cum_non_resp = cumsum(rankTbl_TrDS_RF$cnt_tar0)
rankTbl_TrDS_RF$cum_rel_resp = round(rankTbl_TrDS_RF$cum_resp /
sum(rankTbl_TrDS_RF$cnt_tar1),4)*100
rankTbl_TrDS_RF$cum_rel_non_resp = round(rankTbl_TrDS_RF$cum_non_resp /
sum(rankTbl_TrDS_RF$cnt_tar0),4)*100
rankTbl_TrDS_RF$ks = abs(rankTbl_TrDS_RF$cum_rel_resp -
rankTbl_TrDS_RF$cum_rel_non_resp) #ks
rankTbl_TrDS_RF
# Concordance on RF training data
Concordance_TrDS_RF=Concordance(actuals=TrainDS$Personal.Loan,
predictedScores=TrainDS$Probability1_RF)
Concordance_TrDS_RF
## $Concordance
## [1] 0.9998731
##
## $Discordance
## [1] 0.0001268633
##
## $Tied
## [1] -2.981556e-17
##
## $Pairs
## [1] 1056255
RMSE_TrDS_RF=rmse(TrainDS$Personal.Loan, TrainDS$Prediction_RF)
RMSE_TrDS_RF
## [1] 0.06981299
# Mean Absolute Error (MAE)
MAE_TrDS_RF=mae(TrainDS$Personal.Loan, TrainDS$Prediction_RF)
MAE_TrDS_RF
## [1] 0.004873853
# Prediction on test data (reconstructed from the tuned forest)
TestDS$Prediction_RF = predict(tRndForest, newdata = TestDS, type = "class")
TestDS$Probability1_RF = predict(tRndForest, newdata = TestDS, type = "prob")[, 2]
# Confusion Matrix
tbl_TeDS_RF=table(TestDS$Prediction_RF, TestDS$Personal.Loan)
tbl_TeDS_RF
##
## 0 1
## 0 1347 14
## 1 4 129
CeR_TeDS_RF=(tbl_TeDS_RF[1,2]+tbl_TeDS_RF[2,1])/sum(tbl_TeDS_RF)
CeR_TeDS_RF
## [1] 0.01204819
# Accuracy:
Accuracy_TeDS_RF=1-CeR_TeDS_RF
Accuracy_TeDS_RF
## [1] 0.9879518
TPR_TeDS_RF=tbl_TeDS_RF[2,2]/(tbl_TeDS_RF[1,2]+tbl_TeDS_RF[2,2])
TPR_TeDS_RF
## [1] 0.9020979
TNR_TeDS_RF=tbl_TeDS_RF[1,1]/(tbl_TeDS_RF[1,1]+tbl_TeDS_RF[2,1])
TNR_TeDS_RF
## [1] 0.9970392
probs_TeDS_RF=seq(0,1,length=11)
qs_TeDS_RF=quantile(TestDS$Probability1_RF, probs_TeDS_RF)
qs_TeDS_RF
## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
## 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.004 0.205 1.000
# Splitting Deciles
TestDS$deciles_RF=cut(TestDS$Probability1_RF, unique(qs_TeDS_RF),
include.lowest = TRUE, right=TRUE)
table(TestDS$deciles_RF)
##
## [0,0.004] (0.004,0.205] (0.205,1]
## 1210 134 150
# Rank ordering table on RF Test Data
testDT = data.table(TestDS)
rankTbl_TeDS_RF = testDT[, list(
cnt = length(Personal.Loan),
cnt_tar1 = sum(Personal.Loan == 1),
cnt_tar0 = sum(Personal.Loan == 0)),
by=deciles_RF][order(-deciles_RF)]
rankTbl_TeDS_RF$resp_rate = round(rankTbl_TeDS_RF$cnt_tar1 /
rankTbl_TeDS_RF$cnt,4)*100
rankTbl_TeDS_RF$cum_resp = cumsum(rankTbl_TeDS_RF$cnt_tar1)
rankTbl_TeDS_RF$cum_non_resp = cumsum(rankTbl_TeDS_RF$cnt_tar0)
rankTbl_TeDS_RF$cum_rel_resp = round(rankTbl_TeDS_RF$cum_resp /
sum(rankTbl_TeDS_RF$cnt_tar1),4)*100
rankTbl_TeDS_RF$cum_rel_non_resp = round(rankTbl_TeDS_RF$cum_non_resp /
sum(rankTbl_TeDS_RF$cnt_tar0),4)*100
rankTbl_TeDS_RF$ks = abs(rankTbl_TeDS_RF$cum_rel_resp -
rankTbl_TeDS_RF$cum_rel_non_resp) #ks
rankTbl_TeDS_RF
Concordance_TeDS_RF=Concordance(actuals=TestDS$Personal.Loan,
predictedScores=TestDS$Probability1_RF)
Concordance_TeDS_RF
## $Concordance
## [1] 0.9968011
##
## $Discordance
## [1] 0.003198874
##
## $Tied
## [1] -1.864828e-17
##
## $Pairs
## [1] 193193
RMSE_TeDS_RF=rmse(TestDS$Personal.Loan, TestDS$Prediction_RF)
RMSE_TeDS_RF
## [1] 0.1097643
MAE_TeDS_RF=mae(TestDS$Personal.Loan, TestDS$Prediction_RF)
MAE_TeDS_RF
## [1] 0.01204819
Training_CART = c(CER_TrDS,
Acc_TrDS,
TPR_TrDS,
TNR_TrDS,
ks_TrDS,
auc_TrDS,
gini_TrDS,
Concordance_TrDS$Concordance,
RMSE_TrDS,
MAE_TrDS)
Test_CART = c(CeR_TeDS,
Accuracy_TeDS,
TPR_TeDS,
TNR_TeDS,
ks_TeDS,
auc_TeDS,
gini_TeDS,
Concordance_TeDS$Concordance,
RMSE_TeDS,
MAE_TeDS)
Training_RF = c(CeR_TrDS_RF,
Accuracy_TrDS_RF,
TPR_TrDS_RF,
TNR_TrDS_RF,
ks_TrDS_RF,
auc_TrDS_RF,
gini_TrDS_RF,
Concordance_TrDS_RF$Concordance,
RMSE_TrDS_RF,
MAE_TrDS_RF)
Test_RF = c(CeR_TeDS_RF,
Accuracy_TeDS_RF,
TPR_TeDS_RF,
TNR_TeDS_RF,
ks_TeDS_RF,
auc_TeDS_RF,
gini_TeDS_RF,
Concordance_TeDS_RF$Concordance,
RMSE_TeDS_RF,
MAE_TeDS_RF)
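Presumably the report ends by binding the four metric vectors into one comparison table, mirroring the earlier `cbind` step for CART. A standalone sketch (dummy metric values are used so the snippet runs on its own; in the report these come from the CART and Random Forest chunks above):

```r
Performance_KPI <- c("CER", "Accuracy", "TPR", "TNR", "KS", "AUC",
                     "Gini", "Concordance", "RMSE", "MAE")
# Dummy metric vectors, hypothetical values only
Training_CART <- round(runif(10), 4); Test_CART <- round(runif(10), 4)
Training_RF   <- round(runif(10), 4); Test_RF   <- round(runif(10), 4)
# One row per KPI, one column per model/dataset combination
x <- data.frame(Performance_KPI, Training_CART, Test_CART, Training_RF, Test_RF)
print(x)
```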