Analysis of Transport Choice of Employees - A Project On Machine Learning
Table of Contents
Project Objective
This case study is prepared for an organization to study its employees' preferred mode of transport for commuting, and to predict whether an employee will use the car as their mode of transport. We also want to know which variables are significant predictors of this decision. The objective is to build the best model, using machine learning techniques, that can identify the employees who prefer cars.
We will perform the steps below and analyze the data using machine learning modeling techniques to identify such employees:
1. EDA
1.1 Explore the data through univariate and bivariate analysis, with plots and charts that illustrate the relationships between variables.
1.2 Look for outliers and missing values.
1.3 Check the distribution of the target variable in the given dataset and apply treatment using SMOTE accordingly.
1.4 Check for multicollinearity and treat it.
1.5 Summarize the insights from the EDA.
2. Data Preparation
2.1 Prepare the data for analysis
3. Build various predictive models and compare them to find the best one
3.1 Build a Logistic Regression model and interpret it.
3.2 Build a KNN model and interpret it.
3.3 Build a Naive Bayes model and interpret it.
3.4 Compare the models using model performance metrics.
3.5 Apply both bagging and boosting procedures to create 2 further models and compare their accuracy with the best model so far.
4. Actionable Insights
4.1 Interpretation & Recommendations from the best model
The complete case study is performed on the given dataset (Cars.csv) to build a suitable predictive model using machine learning techniques such as Logistic Regression, KNN and Naive Bayes, with Bagging and Boosting applied on top, and finally to measure model performance with various metrics:
- Confusion Matrix (for all models)
- AUC-ROC (for all models)
- Gini Coefficient (only for Logistic Regression)
- Kolmogorov-Smirnov (KS) Chart (only for Logistic Regression)
Defining Business Problem
The objective is to help understand the mode of transport employees prefer for commuting to their office. Several factors predominantly play an important role in this choice, such as:
- Monthly salary
- Expenses
- Work Experience
- Distance
- Position they hold
- Age
In this case study we will build a machine learning model to understand which factors influence an employee's decision to use the car as their preferred means of transport.
Data Dictionary
The dataset contains information on 418 employees: their mode of transport as well as personal and professional details such as age, salary and work experience.
Variables Description
Age Age of the employee
Gender Gender of the employee
Engineer Whether the employee is an engineering graduate or not. 1 means engineer, 0 means not.
MBA Whether the employee has done an MBA or not. 1 means MBA, 0 means not.
Work Exp Total work experience of the employee in years
Salary Monthly salary of the employee
Distance Average distance the employee travels to work
license Whether the employee holds a valid driving license or not. 1 means yes, 0 means no.
Transport Mode of transport the employee currently prefers for commuting.
Target Variable: Transport
Summary of Data
1. Transport
Conclusion:
1. Transport takes three values: 2-Wheeler, Car and Public Transport.
2. Out of 418 employees, 83 travel by 2-wheeler, 35 by car and 300 by public transport.
3. In percentage terms, 19.9% use two-wheelers, 8.4% use cars and 71.8% use public transport.
2. Mode of Transport by Gender
Conclusion:
1. Very few females use cars compared to males. Both males and females mostly use public transport.
2. No significant difference due to gender.
3. Mode of Transport by Engineer
Conclusion:
1. No significant difference due to Engineer/Non-Engineer.
4. Mode of Transport by MBA
Conclusion:
1. No significant difference due to MBA/Non-MBA.
5. Mode of Transport by License
Conclusion:
1. Driving license holders prefer 2-wheelers and cars over public transport.
2. A significant number of people without a driving license use 2-wheelers.
6. Analysis of Work Experience in Years by Transport Mode
Conclusion:
1. The higher the work experience, the greater the usage of cars over 2-wheelers and public transport.
2. Employees with between 15 and 25 years of work experience prefer cars.
7. Analysis of Salary by Transport Mode
Conclusion:
1. The higher the salary, the lower the usage of 2-wheelers and public transport.
8. Analysis of Distance by Transport Mode
Conclusion:
1. Cars are preferred for travelling distances greater than 13 miles.
Our primary interest, as per the problem statement, is to understand the factors influencing car usage. Hence, we create a new column for car usage: it takes the value 0 for Public Transport and 2-Wheeler, and 1 for Car. We then examine the proportion of car users in the transport mode accordingly.
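The derivation of the new target column can be sketched as follows (Python is used here purely for illustration; the project itself works in R, and the values shown are made-up examples, not rows from Cars.csv):

```python
# Sketch: derive a binary target "CarUsage" from the Transport column.
# 1 = travels by Car, 0 = 2-Wheeler or Public Transport.
transport = ["Public Transport", "Car", "2Wheeler", "Public Transport", "Car"]

car_usage = [1 if mode == "Car" else 0 for mode in transport]

print(car_usage)                        # [0, 1, 0, 0, 1]
print(sum(car_usage) / len(car_usage))  # proportion of car users: 0.4
```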
We can clearly see that the target class (car users) accounts for less than 10% of the available dataset, so we will apply SMOTE in further steps. Before that, we convert the Engineer, MBA and license variables into factor variables in R.
Since the records for people travelling by car are in the minority (about 10%), we need an appropriate sampling method. We will use SMOTE to balance the target variable proportion, use the resulting train and test datasets in logistic regression to find the best-fitting model, and later explore a couple of black-box models for prediction.
Applying SMOTE for data balancing
After balancing, we can see that the proportion of the minority class has risen above 10%, and we can use this balanced dataset for validating the models in the steps that follow.
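SMOTE's core idea (as implemented in R packages such as DMwR) is to synthesize new minority-class rows by interpolating between a minority point and a nearby minority neighbor. A minimal illustrative sketch of that idea, in Python with made-up points (not the project's R code):

```python
import random

def smote_sketch(minority, n_synthetic, seed=0):
    """Generate synthetic minority samples by linear interpolation
    between a random minority point and its nearest minority neighbor."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        # nearest neighbor among the other minority points (squared Euclidean)
        neighbor = min((p for p in minority if p is not base),
                       key=lambda p: sum((a - b) ** 2 for a, b in zip(p, base)))
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbor)))
    return synthetic

# Invented minority (car-user) rows with two scaled features
minority = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]
new_points = smote_sketch(minority, n_synthetic=5)
print(len(new_points))  # 5 synthetic car-user rows
```

Each synthetic point lies on the segment between two real minority points, so the new rows stay inside the region the minority class already occupies.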
Checking Correlation
Let's look at the correlations between all the variables and treat highly correlated variables accordingly before building the regression model.
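The pairwise check can be illustrated with a plain Pearson correlation (in R this is typically done with cor() on the numeric columns; the Python sketch below uses invented numbers, not the actual dataset):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative values only (not the real dataset): Age vs Work Exp
age = [22, 25, 28, 33, 40]
work_exp = [1, 3, 5, 10, 18]
print(round(pearson(age, work_exp), 3))  # strongly positive, close to 1
```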
Correlation Interpretation
• Age, Work Exp and Salary are highly correlated.
• Age, Work Exp and Salary are all moderately correlated with Distance and license.
• Transport is marginally correlated with Gender, but not significantly.
Since the correlations alone do not clearly identify the variables from which we can predict the mode of transport, we will perform a logistic regression.
Logistic Regression
We start with logistic regression analysis, as it gives us clear insight into which variables are significant for building the model, allowing us to achieve more precision by eliminating irrelevant variables.
Building Logistic Regression Model based upon all the given variables
Interpretation from the logistic model using all available variables, after checking multicollinearity:
- Multicollinearity has inflated the VIF values of the correlated variables, making the model unreliable.
- The VIF values for Salary and Work Exp are 5.54 and 15.69 respectively, which are clearly inflated.
- Being conservative and not accepting VIF values above 5, we remove Salary and Work Exp (which are highly correlated).
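In R the VIF values typically come from the vif() function in the car package applied to the fitted model. For intuition, VIF is 1/(1 - R²), where R² comes from regressing one predictor on the others; a two-predictor sketch in Python with invented numbers:

```python
def r_squared_simple(y, x):
    """R^2 from regressing y on a single predictor x (with intercept).
    For simple linear regression, R^2 equals the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

def vif(y, x):
    """Variance inflation factor: 1 / (1 - R^2)."""
    return 1.0 / (1.0 - r_squared_simple(y, x))

# Illustrative values only: Work Exp regressed on Salary
work_exp = [1, 3, 5, 10, 18]
salary = [15, 20, 28, 45, 80]
print(round(vif(work_exp, salary), 2))  # far above 5 -> multicollinear
```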
Creating Model 2 - Logistic Regression after removing highly correlated variables
We create the 2nd model after removing the correlated variables Salary and Work Exp.
Engineer, Distance, Gender and MBA turn out to be insignificant, so we remove them as well and create a new model based on the remaining variables.
Creating Model 3 - Logistic Regression after removing all insignificant variables
In this new model all variables are significant, which we can verify by checking multicollinearity as well. The VIF values are now within range, all variables are significant, and the results make sense and are in line with what we observed in the EDA.
Regression Model Performance on the Train and Test Datasets
1. Confusion Matrix:
We start model evaluation on the train and test data to see how accurate the model is in identifying employees who prefer the car as their mode of transport.
Calculating the confusion matrix on the train and test data: we predict a classification of 0 or 1 for each row, then cross-tabulate the actual and predicted values to build the confusion matrix and check the model's accuracy.
From the confusion matrix we can clearly see that the model is 96.75% accurate on the train data, and the test data confirms this with 96.20% accuracy. There is a slight variation, but it is within range, so we can conclude that the model is a good one.
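The cross-tabulation step can be sketched as follows (Python for illustration; the labels are invented, not the model's actual predictions):

```python
def confusion_matrix(actual, predicted):
    """2x2 confusion matrix as (TN, FP, FN, TP) for binary 0/1 labels."""
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    return tn, fp, fn, tp

def accuracy(actual, predicted):
    tn, fp, fn, tp = confusion_matrix(actual, predicted)
    return (tn + tp) / (tn + fp + fn + tp)

# Illustrative labels only
actual    = [0, 0, 1, 1, 0, 1, 0, 1]
predicted = [0, 0, 1, 0, 0, 1, 1, 1]
print(confusion_matrix(actual, predicted))  # (3, 1, 1, 3)
print(accuracy(actual, predicted))          # 0.75
```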
2. ROC
The ROC curve is the plot of sensitivity against (1 - specificity). (1 - specificity) is also known as the false positive rate, and sensitivity as the true positive rate.
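The area under the ROC curve can be computed directly as the probability that a randomly chosen positive case scores higher than a randomly chosen negative one; a sketch with invented scores:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive scores higher than
    a random negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative predicted probabilities of car usage
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(auc(labels, scores))  # 8/9 = 0.888...
```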
Calculating ROC on the Train and Test Data
On the train data the true positive rate is 99.66% and on the test data it is 98.80%, so there is no major variation between the train and test data, which shows that the model is stable.
3. K-S Chart
The K-S statistic measures the degree of separation between car users and non-car users. We run the K-S analysis on both the train and test models.
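The K-S statistic is the maximum gap between the cumulative score distributions of the two classes; a sketch with invented scores:

```python
def ks_statistic(labels, scores):
    """Max separation between the cumulative score distributions of the
    positive (car) and negative (non-car) classes."""
    pos = sorted(s for l, s in zip(labels, scores) if l == 1)
    neg = sorted(s for l, s in zip(labels, scores) if l == 0)
    best = 0.0
    for t in sorted(set(scores)):
        cdf_pos = sum(1 for s in pos if s <= t) / len(pos)
        cdf_neg = sum(1 for s in neg if s <= t) / len(neg)
        best = max(best, abs(cdf_pos - cdf_neg))
    return best

labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(ks_statistic(labels, scores))  # maximum gap between the CDFs (2/3 here)
```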
4. Gini Chart
The Gini coefficient is the ratio of the area between the ROC curve and the diagonal line to the area of the upper triangle above the diagonal.
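Given an AUC value, the Gini coefficient follows directly as 2 * AUC - 1:

```python
def gini_from_auc(auc_value):
    """Gini coefficient: twice the area between the ROC curve and the
    diagonal, i.e. Gini = 2 * AUC - 1."""
    return 2 * auc_value - 1

print(gini_from_auc(0.5))  # 0.0 -> no discrimination
print(gini_from_auc(1.0))  # 1.0 -> perfect separation
```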
k-NN Classification
k-NN is a supervised learning algorithm: it uses labeled input data to learn a function that produces an appropriate output when given new unlabeled data. Let's build our classification model with the following steps:
Creating the k-NN model
We choose 3 neighbors (k = 3).
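The k-NN prediction rule with k = 3 can be sketched from scratch (Python for illustration; the points are invented, with two scaled features standing in for, e.g., Age and Salary):

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Invented 2-feature training set, labels 1 = car user
train_X = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (5.0, 5.0), (5.2, 4.8)]
train_y = [0, 0, 0, 1, 1]
print(knn_predict(train_X, train_y, (1.05, 1.1), k=3))  # 0
print(knn_predict(train_X, train_y, (5.1, 5.1), k=3))   # 1
```

Because k-NN is distance-based, the features are assumed to be on comparable scales; in practice this means scaling them first.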
Evaluating the k-NN Model on the Train and Test Datasets
1. Confusion Matrix:
We start model evaluation on the train and test data to see how accurate the model is in identifying employees who prefer the car as their mode of transport.
Calculating the confusion matrix on the train data: we predict a classification of 0 or 1 for each row, then cross-tabulate the actual and predicted values to build the confusion matrix and check the model's accuracy.
From the confusion matrix we can clearly see that the model is 97.83% accurate on the train data, and the test data confirms this with 94.93% accuracy. There is a slight variation, but it is within range, so we can conclude that the model is a good one.
2. ROC
The ROC curve is the plot of sensitivity against (1 - specificity). (1 - specificity) is also known as the false positive rate, and sensitivity as the true positive rate.
Calculating ROC on the Train and Test Data
On the train data the true positive rate is 97.22% and on the test data it is 92.58%. The variation between the train and test data is noticeable, but still within an acceptable range, so the model remains reasonably stable.
3. K-S Chart
The K-S statistic measures the degree of separation between car users and non-car users. We run the K-S analysis on both the train and test models.
4. Gini Chart
The Gini coefficient is the ratio of the area between the ROC curve and the diagonal line to the area of the upper triangle above the diagonal.
Gini Output Analysis
From the Gini analysis we can see that the model does not cover the maximum possible area separating car users from non-car users, with 15.39% on the train data and 15.98% on the test data. The variation is slight and within range, so we can conclude that the model is acceptable.
1. Confusion Matrix:
We start model evaluation on the train and test data to see how accurate the model is in identifying employees who prefer the car as their mode of transport.
Calculating the confusion matrix on the train data: we predict a classification of 0 or 1 for each row, then cross-tabulate the actual and predicted values to build the confusion matrix and check the model's accuracy.
2. ROC
Calculating ROC on the Train and Test Data
On the train data the true positive rate is 97.02% and on the test data it is 94.04%, so there is no major variation between the train and test data, which shows that the model is stable.
3. K-S Chart
The K-S statistic measures the degree of separation between car users and non-car users. We run the K-S analysis on both the train and test models.
4. Gini Chart
The Gini coefficient is the ratio of the area between the ROC curve and the diagonal line to the area of the upper triangle above the diagonal.
Gini Output Analysis
From the Gini analysis we can see that the model does not cover the maximum possible area separating car users from non-car users, with 15.32% on the train data and 15.57% on the test data. The variation is slight and within range, so we can conclude that the model is acceptable.
Naive Bayes Classification
The Naive Bayes classifier presumes that the presence of a feature in a class is unrelated to the presence of any other feature in the same class. Let's build the model and see how well it performs under this assumption.
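The independence assumption makes the model easy to sketch: per class, store a prior plus a per-feature mean and variance, then score new rows with Gaussian log-likelihoods. An illustrative Python sketch with invented points (the project itself uses an R Naive Bayes implementation):

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Per-class priors plus per-feature mean/variance (the 'naive'
    independence assumption: each feature is modeled separately per class)."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    model = {}
    for label, rows in by_class.items():
        prior = len(rows) / len(X)
        stats = []
        for feature in zip(*rows):
            m = sum(feature) / len(feature)
            v = sum((f - m) ** 2 for f in feature) / len(feature) or 1e-9
            stats.append((m, v))
        model[label] = (prior, stats)
    return model

def predict_nb(model, row):
    def log_like(prior, stats):
        ll = math.log(prior)
        for f, (m, v) in zip(row, stats):
            ll += -0.5 * math.log(2 * math.pi * v) - (f - m) ** 2 / (2 * v)
        return ll
    return max(model, key=lambda c: log_like(*model[c]))

X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
y = [0, 0, 0, 1, 1, 1]
model = fit_gaussian_nb(X, y)
print(predict_nb(model, (1.0, 0.9)))  # 0
print(predict_nb(model, (5.0, 5.1)))  # 1
```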
1. Confusion Matrix:
Calculating the confusion matrix on the train and test data:
From the confusion matrix we can clearly see that the model is 95.94% accurate on the train data and 93.67% accurate on the test data in predicting car usage.
2. ROC
The ROC curve is the plot of sensitivity against (1 - specificity). (1 - specificity) is also known as the false positive rate, and sensitivity as the true positive rate.
Calculating ROC on the Train and Test Data
On the train data the true positive rate is 72.93% and on the test data it is 94.04%, so there is a major variation between the train and test data, which shows that this model is not stable.
3. K-S Chart
The K-S statistic measures the degree of separation between car users and non-car users. We run the K-S analysis on both the train and test models.
Bagging
Bagging (Bootstrap Aggregating) is a way to decrease the variance of predictions by generating additional training data from the original dataset, using sampling with replacement to produce multiple sets of the same size as the original data.
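The resampling-and-voting idea can be sketched from scratch; here each "model" is a 1-nearest-neighbor learner fitted on a bootstrap resample, and their votes are aggregated (Python for illustration with invented data; the project applies bagging in R):

```python
import random
from collections import Counter

def bagging_predict(X, y, query, n_models=25, seed=0):
    """Bagging sketch: fit many 1-nearest-neighbor 'models', each on a
    bootstrap resample (sampling with replacement, same size as the
    original data), then majority-vote their predictions."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        # base learner: 1-nearest neighbor on the resampled data
        nearest = min(sample, key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(X[i], query)))
        votes.append(y[nearest])
    return Counter(votes).most_common(1)[0][0]

X = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
y = [0, 0, 0, 1, 1, 1]
print(bagging_predict(X, y, (1.0, 1.1)))  # 0
print(bagging_predict(X, y, (5.0, 5.0)))  # 1
```

The variance reduction comes from averaging over many resamples; any single bootstrap model may be noisy, but the majority vote is far more stable.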
Applying the Bagging model:
This helps in comparing the predictions with the observed values, thereby estimating the errors.
Interpretation:
Bagging here behaves like the baseline approach, predicting the same class for everything; this extreme fails to represent the minority class, so it is not preferable.
Boosting
We use XGBoost, a specialized implementation of gradient-boosted decision trees designed for performance. XGBoost works with matrices that contain only numeric variables, so we first convert the data to a matrix.
The parameters above are described as follows:
eta = the learning rate at which values are updated; a small value means slow learning
max_depth = how many levels deep to expand the trees. The larger the depth, the more complex the model and the higher the chance of overfitting. There is no standard value for max_depth; larger datasets require deeper trees to learn the rules from the data.
min_child_weight = blocks potential feature interactions to prevent overfitting
nrounds = controls the maximum number of iterations; for classification, it is similar to the number of trees to grow
nfold = number of folds used for cross-validation
verbose = whether or not to print the output
early_stopping_rounds = stop if there is no improvement for 10 consecutive rounds
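The parameter set above can be written out as a plain dictionary for reference (the values other than early_stopping_rounds and verbose are illustrative assumptions, not the project's exact settings; in the project the equivalent arguments are passed to XGBoost in R):

```python
# Sketch of the hyperparameter set described above, as a plain dictionary.
# Specific numeric values are illustrative assumptions.
params = {
    "eta": 0.01,                  # slow learning rate
    "max_depth": 5,               # depth of each tree
    "min_child_weight": 3,        # guards against overfitting
    "nrounds": 50,                # maximum number of boosting iterations
    "nfold": 5,                   # folds for cross-validation
    "verbose": 0,                 # suppress printed output
    "early_stopping_rounds": 10,  # stop after 10 rounds without improvement
}
print(params["early_stopping_rounds"])  # 10
```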
The model shows a prediction accuracy of 100% in identifying the employees who use cars. Unlike bagging, it gives a proper representation of both the majority and the minority class. Using bagging and boosting, we can therefore predict the choice of transport mode with up to 100% accuracy.
In this case, any of the models (Logistic Regression, k-NN, Naive Bayes, or Bagging/Boosting) can be used for high-accuracy prediction. The key step, however, is SMOTE for balancing the minority and majority classes; without it, our models would not be nearly as accurate.
Appendix
R Code