Fraud occurs in many transactions. We can apply machine learning algorithms to analyse past data and predict the probability that a transaction is fraudulent. In our example we will take credit card transactions, analyse the data, create the features and labels, and finally apply one of the ML algorithms to judge whether a transaction is fraudulent or not. Then we will find out the accuracy, precision and F-score of the model we have chosen.
Preparing the Data
In this step we read the source data, study the variables present in it and have a look at some sample data. This will help us in knowing the different columns present in the data set and studying their characteristics. We will use the Pandas library to create the data frame which will be used in the subsequent steps.
Example
import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')
# Dataset: https://www.kaggle.com/mlg-ulb/creditcardfraud

# Print the top 5 records
print(datainput[0:5],"\n")

# Print the complete shape of the dataset
print("Shape of Complete Data Set")
print(datainput.shape,"\n")
Output
Running the above code gives us the following result −
   Time        V1        V2        V3  ...       V27       V28  Amount  Class
0   0.0 -1.359807 -0.072781  2.536347  ...  0.133558 -0.021053  149.62      0
1   0.0  1.191857  0.266151  0.166480  ... -0.008983  0.014724    2.69      0
2   1.0 -1.358354 -1.340163  1.773209  ... -0.055353 -0.059752  378.66      0
3   1.0 -0.966272 -0.185226  1.792993  ...  0.062723  0.061458  123.50      0
4   2.0 -1.158233  0.877737  1.548718  ...  0.219422  0.215153   69.99      0

[5 rows x 31 columns]

Shape of Complete Data Set
(284807, 31)
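As a small supplementary check (not part of the original example), the info() method of the dataframe can also be used to list every column together with its data type and non-null count, which helps in studying the variables before moving on.

import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# List each column with its data type and the number of non-null values
datainput.info()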
Checking the Imbalance in the Data
Now we check how the data is distributed between fraudulent and genuine transactions. This gives us an idea of what percentage of the data is expected to be fraudulent. In ML algorithms this is referred to as data imbalance. If most of the transactions are not fraudulent, then it becomes difficult to judge whether a few transactions are genuine or not. We use the Class column to count the number of fraudulent transactions and then figure out the actual percentage of fraudulent transactions.
Example
import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

false = datainput[datainput['Class'] == 1]
true = datainput[datainput['Class'] == 0]
n = len(false)/float(len(true))
print(n)
print('False Detection Cases: {}'.format(len(datainput[datainput['Class'] == 1])))
print('True Detection Cases: {}'.format(len(datainput[datainput['Class'] == 0])),"\n")
Output
Running the above code gives us the following result −
0.0017304750013189597
False Detection Cases: 492
True Detection Cases: 284315
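The number printed above is the ratio of fraudulent to genuine transactions rather than a percentage of the whole dataset. As a supplementary sketch (an addition to the original example), the share of fraudulent transactions in the complete dataset can be computed as follows:

import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Fraudulent transactions as a percentage of all transactions
fraud_percentage = len(datainput[datainput['Class'] == 1]) / float(len(datainput)) * 100
print("Fraudulent transactions: {:.4f}% of the dataset".format(fraud_percentage))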
Details of Transaction Types
We investigate further into the nature of the transactions for each category, fraudulent and non-fraudulent. We try to statistically estimate various parameters like mean, standard deviation, maximum value, minimum value and different percentiles. This is achieved by using the describe() method.
Example
import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Check for imbalance in data
false = datainput[datainput['Class'] == 1]
true = datainput[datainput['Class'] == 0]

# False Detection Cases
print("False Detection Cases")
print("----------------------")
print(false.Amount.describe(),"\n")

# True Detection Cases
print("True Detection Cases")
print("----------------------")
print(true.Amount.describe(),"\n")
Output
Running the above code gives us the following result −
False Detection Cases
----------------------
count     492.000000
mean      122.211321
std       256.683288
min         0.000000
25%         1.000000
50%         9.250000
75%       105.890000
max      2125.870000
Name: Amount, dtype: float64

True Detection Cases
----------------------
count    284315.000000
mean         88.291022
std         250.105092
min           0.000000
25%           5.650000
50%          22.000000
75%          77.050000
max       25691.160000
Name: Amount, dtype: float64
Separating Features and Label
Before we implement the ML algorithm, we need to decide on the features and labels, which basically means categorizing the dependent variable and the independent ones. In our dataset the Class column is dependent on all the other columns. So we separate the last column (the label) from all the other columns (the features). These will be used to train the model that we are going to create.
Example
import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Separating features (X) and label (Y)
# Select all columns except the last for all rows
X = datainput.iloc[:, :-1].values

# Select the last column of all rows
Y = datainput.iloc[:, -1].values

print(X.shape)
print(Y.shape)
Output
Running the above code gives us the following result −
(284807, 30)
(284807,)
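As an equivalent alternative (a supplementary sketch, not part of the original example), the same separation can be done by column name instead of by position, assuming the label column is named Class:

import pandas as pd

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Features: every column except Class; Label: the Class column
X = datainput.drop('Class', axis=1).values
Y = datainput['Class'].values

print(X.shape)
print(Y.shape)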
Train the Model
Now we split the data set into two parts: one for training and another for testing. The test_size parameter decides what percentage of the data set will be used only for testing. This exercise will help us gain confidence in the model we are creating.
Example
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Separating features (X) and label (Y)
X = datainput.iloc[:, :-1].values

# Select the last column of all rows
Y = datainput.iloc[:, -1].values

# train_test_split method
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
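The original example prints no output here; as a quick sanity check (an addition that assumes the variables created in the example above), the shapes of the four resulting arrays can be printed to confirm the 80/20 split:

# Appended to the end of the example above
# Verify that roughly 80% of the samples go to training and 20% to testing
print("X_train shape:", X_train.shape)
print("X_test shape :", X_test.shape)
print("Y_train shape:", Y_train.shape)
print("Y_test shape :", Y_test.shape)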
Applying Decision Tree Classification
There are many different kinds of algorithms available to be applied to this situation, but we choose the decision tree as our classification algorithm. We set a maximum tree depth of 4 and supply the test sample to predict the values. Finally, we calculate the accuracy of the result on the test set to decide whether to continue further with this algorithm or not.
Example
import pandas as pd
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Separating features (X) and label (Y)
X = datainput.iloc[:, :-1].values
Y = datainput.iloc[:, -1].values

# train_test_split method
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

# DecisionTreeClassifier
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, Y_train)
predicted = classifier.predict(X_test)
print("\npredicted values :\n", predicted)

# Accuracy
DT = metrics.accuracy_score(Y_test, predicted) * 100
print("\nThe accuracy score using the DecisionTreeClassifier : ", DT)
Output
Running the above code gives us the following result −
predicted values :
[0 0 0 ... 0 0 0]

The accuracy score using the DecisionTreeClassifier :  99.9367999719111
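Because fraudulent transactions make up only about 0.17% of the data, a high accuracy score alone can be misleading: a model that labels every transaction as genuine would still score above 99%. As an additional check (a hedged sketch that assumes the Y_test and predicted variables from the example above), a confusion matrix shows how many fraudulent transactions are actually detected:

from sklearn.metrics import confusion_matrix

# Rows are the actual classes (0 = genuine, 1 = fraud),
# columns are the predicted classes
print(confusion_matrix(Y_test, predicted))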
Finding Evaluation Parameters
Once the accuracy level in the above step is acceptable, we do a further evaluation of the model by finding out different parameters. We use precision, recall and F-score as our parameters. Precision is the fraction of relevant instances among the retrieved instances, while recall is the fraction of the total relevant instances that were actually retrieved. The F-score provides a single score that balances the concerns of both precision and recall in one number.
Example
import pandas as pd
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier

# Load the creditcard.csv using pandas
datainput = pd.read_csv('E:\\creditcard.csv')

# Separating features (X) and label (Y)
X = datainput.iloc[:, :-1].values
Y = datainput.iloc[:, -1].values

# train_test_split method
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

# DecisionTreeClassifier
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, Y_train)
predicted = classifier.predict(X_test)
print("\npredicted values :\n", predicted)

# Accuracy
DT = metrics.accuracy_score(Y_test, predicted) * 100
print("\nThe accuracy score using the DecisionTreeClassifier : ", DT)

# Precision
# Precision = TP / (TP + FP), where TP = True Positive, TN = True Negative,
# FP = False Positive, FN = False Negative
print('precision')
precision = precision_score(Y_test, predicted, pos_label=1)
print(precision)

# Recall
# Recall = TP / (TP + FN)
print('recall')
recall = recall_score(Y_test, predicted, pos_label=1)
print(recall)

# F-score
# The F-score accounts for both precision and recall in a single number
print('f-Score')
fscore = f1_score(Y_test, predicted, pos_label=1)
print(fscore)
Output
Running the above code gives us the following result −
The accuracy score using the DecisionTreeClassifier :  99.9403110845827
precision
0.810126582278481
recall
0.7710843373493976
f-Score
0.7901234567901234
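The F-score reported above is the harmonic mean of precision and recall, F1 = 2 * precision * recall / (precision + recall). As a small verification sketch (an addition that assumes the precision and recall variables from the example above), the value can be recomputed manually:

# Recompute the F-score as the harmonic mean of precision and recall
manual_fscore = 2 * precision * recall / (precision + recall)
print("F-score computed from precision and recall:", manual_fscore)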