Linear Regression
Linear Regression is one of the most fundamental algorithms in the Machine Learning world. It is the door to the
magical world ahead. But before proceeding with the algorithm, let’s first discuss the lifecycle of any machine
learning model. The diagram below explains the creation of a Machine Learning model from scratch: building the model,
tuning its hyperparameters to increase accuracy, deciding the deployment strategy for that model and, once deployed,
setting up the logging and monitoring frameworks to generate reports and dashboards based on the client's
requirements. A typical lifecycle diagram for a machine learning model looks like:
Regression analysis is an important tool for analysing and modelling data. Here, we fit a curve/line to the data
points in such a manner that the overall distance of the actual data points from the fitted curve/line is
minimized. The topic will be explained in detail in the coming sections.
Let’s suppose we want to make an application which predicts the chances of admission of a student to a foreign
university. In that case, regression analysis is useful because:
It shows the significant relationships between the label (dependent variable) and the features (independent
variables).
It shows the extent of the impact of multiple independent variables on the dependent variable.
It can also measure these effects even if the variables are on different scales.
These properties enable data scientists to find the best set of independent variables for predictions.
Linear Regression
Linear Regression is one of the most fundamental and widely known Machine Learning algorithms which
people start with. The building blocks of a Linear Regression model are:
Y = a + b*X + e
where a is the intercept, b is the slope of the line, and e is the error term. The equation above is used to
predict the value of the target variable based on the given predictor variable(s).
# necessary imports
import pandas as pd
import matplotlib.pyplot as plt
import pickle
%matplotlib inline
In [16]:
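The bodies of these two cells are not visible in this extract; presumably they load the advertising data set and preview it. A minimal sketch, assuming the CSV file is named 'Advertising.csv', would be:

# load the advertising data into a pandas DataFrame and preview the first few rows
data = pd.read_csv('Advertising.csv')   # file name assumed
data.head()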
In [17]:
Out[17]:
TV: Advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
Radio: Advertising dollars spent on Radio
Newspaper: Advertising dollars spent on Newspaper
In [18]:
1 data.shape
Out[18]:
(200, 5)
In [19]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
Unnamed: 0 200 non-null int64
TV 200 non-null float64
radio 200 non-null float64
newspaper 200 non-null float64
sales 200 non-null float64
dtypes: float64(4), int64(1)
memory usage: 7.9 KB
In [20]:
Out[20]:
Unnamed: 0 0
TV 0
radio 0
newspaper 0
sales 0
dtype: int64
Now, let's showcase the relationship between the features and the target column
In [21]:
1 # visualize the relationship between the features and the response using scatterplots
2 fig, axs = plt.subplots(1, 3, sharey=True)
3 data.plot(kind='scatter', x='TV', y='sales', ax=axs[0], figsize=(16, 8))
4 data.plot(kind='scatter', x='radio', y='sales', ax=axs[1])
5 data.plot(kind='scatter', x='newspaper', y='sales', ax=axs[2])
Out[21]:
<matplotlib.axes._subplots.AxesSubplot at 0x269b21705c0>
From the relationship diagrams above, it can be observed that the relationship between the features TV ad,
Radio ad and the sales is almost linear. A linear relationship typically looks like:
𝑦 = 𝛽0 + 𝛽1 𝑥
What do these terms represent? Here, y is the response (sales), 𝛽0 is the intercept, and 𝛽1 is the coefficient (slope) for the feature x.
The coefficients are estimated using the least-squares criterion, i.e., we find the best fit line that
minimizes the sum of squared residuals (or "sum of squared errors").
In the figure, each green line is labelled as having a distance D, and each red point as having a coordinate of (X,
Y). Then we can define our best fit line as the line for which the sum of squared distances
D₁² + D₂² + D₃² + D₄² + ... + Dₙ²
(summed over all N points) is as small as possible.
So how do we find this line? The least-squares line approximating the set of points is obtained by minimizing the
residual, where the residual is the distance between the actual Y and the predicted Y, as shown below:
As the residual is a function of both m and b, we differentiate it partially with respect to m and b.
For the best fit line, the residual should be minimum. The minimum of a function occurs where its derivative is 0.
So, equating the corresponding partial derivatives to 0 gives a pair of equations, which can also be written in
matrix form.
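The derivation figures are not reproduced in this extract. For reference, these are the standard least-squares normal equations for the line y = mx + b (summations run over the n data points), followed by their matrix form:

$$\sum y_i = m\sum x_i + n\,b, \qquad \sum x_i y_i = m\sum x_i^2 + b\sum x_i$$

$$\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}\begin{bmatrix} b \\ m \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix}$$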
Ideally, if we had an equation with one dependent and one independent variable, the minimum would look as
follows:
But as the residual's minimum depends on two variables, m and b, the loss surface becomes a paraboloid, and the
appropriate m and b are calculated using Gradient Descent as shown below:
Now, let’s understand how to check how well the model fits our data.
The new values for 'slope' and 'intercept' are calculated iteratively using the gradient descent update rules:
θ₀ := θ₀ - α · (1/m) · Σ(ŷᵢ - yᵢ)
θ₁ := θ₁ - α · (1/m) · Σ(ŷᵢ - yᵢ) · xᵢ
where θ₀ is the intercept, θ₁ is the slope, α is the learning rate, m is the total number of observations, and the
term after the Σ sign is the prediction error (the loss) for each observation. Google TensorBoard recommends a
learning rate between 0.00001 and 10. Generally, a smaller learning rate is recommended to avoid overshooting
while creating a model.
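As an illustration of these update rules and of the effect of the learning rate, here is a minimal gradient descent sketch (not part of the original notebook; the arrays x and y are assumed to be one-dimensional NumPy arrays holding the feature and the target):

import numpy as np

def gradient_descent(x, y, alpha=0.01, n_iter=1000):
    # start with intercept (theta0) and slope (theta1) at zero
    theta0, theta1 = 0.0, 0.0
    m = len(y)
    for _ in range(n_iter):
        error = (theta0 + theta1 * x) - y               # prediction error for each observation
        theta0 -= alpha * (1 / m) * np.sum(error)       # update rule for the intercept
        theta1 -= alpha * (1 / m) * np.sum(error * x)   # update rule for the slope
    return theta0, theta1

With a suitably small learning rate, theta0 and theta1 converge towards the least-squares intercept and slope.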
𝑅² statistic
The R-squared statistic provides a measure of fit. It takes the form of a proportion (the proportion of variance
explained), and so it always takes a value between 0 and 1. In simple words, it represents how much of our
data is being explained by our model. For example, an R² statistic of 0.75 means that our model explains 75% of the
variation in the data. Similarly, a value of 0 means none of the variation is explained, and a value of 1 represents
100% explanation. Mathematically, the R² statistic is calculated as:
R² = 1 - (RSS / TSS)
Here, RSS is the residual (error) sum of squares we have been talking about so far, RSS = Σ(yᵢ - ŷᵢ)², and TSS is
the Total Sum of Squares, given as TSS = Σ(yᵢ - ȳ)².
TSS is calculated by taking the line passing through the mean value of y as the best fit line. Just like RSS, we
calculate the error term when the best fit line is the line passing through the mean value of y, and that gives us
the value of TSS.
The closer the value of R² is to 1, the better the model fits our data. If R² comes out below 0 (which is a
possibility), it means the model is so bad that it performs even worse than the "average" best fit line, i.e. the
line passing through the mean of y.
Adjusted 𝑅² statistic
As we increase the number of independent variables in our equation, R² increases as well. But that doesn’t
mean that the new independent variables have any correlation with the output variable. In other words, even
when the addition of new features does not make our model any better, the R² value will still increase. To rectify
this problem, we use the Adjusted R² value, which penalises the excessive use of features that do not correlate
with the output data. Let’s understand this with an example:
We can see that R² always increases with an increase in the number of independent variables. Thus, it doesn’t
give a true picture, and so we need the Adjusted R² value to keep this in check. Mathematically, it is calculated as:
Adjusted R² = 1 - [(1 - R²)(n - 1) / (n - p - 1)]
where n is the number of observations and p is the number of independent variables. In the equation above, when
p = 0, the adjusted R² becomes equal to R². Thus, the adjusted R² will always be less than or equal to R², and it
penalises the excess of independent variables which do not affect the dependent variable.
In [22]:
1 # create X and y
2 feature_cols = ['TV']
3 X = data[feature_cols]
4 y = data.sales
5
6 # follow the usual sklearn pattern: import, instantiate, fit
7 from sklearn.linear_model import LinearRegression
8 lm = LinearRegression()
9 lm.fit(X, y)
10
11 # print intercept and coefficients
12 print(lm.intercept_)
13 print(lm.coef_)
7.032593549127693
[0.04753664]
𝑦 = 𝛽0 + 𝛽1 𝑥
𝑦 = 7.032594 + 0.047537 × 50
In [23]:
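The code of this cell is not visible here; a computation consistent with the output below simply plugs TV = 50 into the fitted equation:

# manually predict sales for a market with $50,000 of TV ad spend
7.032594 + 0.047537 * 50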
Out[23]:
9.409444
Out[24]:
TV
0 50
In [25]:
Out[25]:
array([9.40942557])
In [26]:
Out[26]:
TV
0 0.7
1 296.4
In [27]:
Out[27]:
'c' argument looks like a single numeric RGB or RGBA sequence, which should
be avoided as value-mapping will have precedence in case its length matches
with 'x' & 'y'. Please use a 2-D array with a single row if you really want
to specify the same RGB or RGBA value for all points.
Out[28]:
[<matplotlib.lines.Line2D at 0x269b2214b38>]
Model Confidence
Question: Is linear regression a low bias/high variance model or a high bias/low variance model?
Answer: It's a high bias/low variance model. Even after repeated sampling, the best fit line will stay roughly in
the same position (low variance), but the average of the models created after repeated sampling won't do a
great job of capturing the true relationship (high bias). Low variance is helpful when we don't have much
training data!
If the model has calculated a 95% confidence interval for our model coefficients, it can be interpreted as follows: if
the population from which this sample was drawn is sampled 100 times, then approximately 95 (out of 100) of
those confidence intervals will contain the "true" coefficients.
In [29]:
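The cell body is not shown. Since scikit-learn's LinearRegression does not report confidence intervals, the interval below was presumably obtained with statsmodels; a sketch consistent with it (the variable name lm_sm is illustrative) would be:

import statsmodels.formula.api as smf

# fit the simple model with statsmodels and show the 95% confidence intervals for the coefficients
lm_sm = smf.ols(formula='sales ~ TV', data=data).fit()
lm_sm.conf_int()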
Out[29]:
0 1
TV 0.042231 0.052843
Keep in mind that we only have a single sample of data, and not the entire population of data. The "true"
coefficient is either within this interval or it isn't, but there's no way to actually know. We estimate the coefficient
with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is
probably within.
Note that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which
will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like.
("Failing to reject" the null hypothesis does not mean "accepting" the null hypothesis. The alternative hypothesis
might indeed be true, but that we just don't have enough data to prove that.)
Null hypothesis: No relationship exists between TV advertisements and Sales (and hence 𝛽1 equals zero).
Alternative hypothesis: There exists a relationship between TV advertisements and Sales (and hence 𝛽1 is not
equal to zero).
How do we test this? We reject the null hypothesis (and thus believe the alternative hypothesis) if the 95%
confidence interval does not include zero. The p-value represents the probability of observing a relationship at
least this strong if the true coefficient were actually zero.
In [30]:
Out[30]:
Intercept 1.406300e-35
TV 1.467390e-42
dtype: float64
If the 95% confidence interval includes zero, the p-value for that coefficient will be greater than 0.05. If the
95% confidence interval does not include zero, the p-value will be less than 0.05.
Thus, a p-value of less than 0.05 is a way to decide whether there is any relationship between the feature in
consideration and the response or not. Using 0.05 as the cutoff is just a convention.
In this case, the p-value for TV ads is way less than 0.05, and so we believe that there is a relationship
between TV advertisements and Sales.
The value of R-squared lies between 0 and 1. A value closer to 1 is better as it means that more variance is
explained by the model.
In [31]:
Out[31]:
0.611875050850071
Is it a "good" R-squared value? Now, that’s hard to say. In reality, the domain to which the data belongs to plays
a significant role in deciding the threshold for the R-squared value. Therefore, it's a tool for comparing
different models.
𝑦 = 𝛽0 + 𝛽1 𝑥1 +...+𝛽𝑛 𝑥𝑛
Each 𝑥 represents a different feature, and each feature has its own coefficient. In this case:
𝑦 = 𝛽0 + 𝛽1 × 𝑇𝑉 + 𝛽2 × 𝑅𝑎𝑑𝑖𝑜 + 𝛽3 × 𝑁𝑒𝑤𝑠𝑝𝑎𝑝𝑒𝑟
Let's estimate these coefficients for the multiple regression model:
In [32]:
1 # create X and y
2 feature_cols = ['TV', 'radio', 'newspaper']
3 X = data[feature_cols]
4 y = data.sales
5
6 lm = LinearRegression()
7 lm.fit(X, y)
8
9 # print intercept and coefficients
10 print(lm.intercept_)
11 print(lm.coef_)
2.9388893694594085
[ 0.04576465 0.18853002 -0.00103749]
How do we interpret these coefficients? If we look at them, the coefficient for the newspaper spend is negative.
It means that the money spent on newspaper advertisements is not contributing positively to the sales.
A lot of the information we have been reviewing piece-by-piece is available in the model summary output:
In [33]:
Out[33]:
Df Model: 3
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
TV and Radio have significant (very small) p-values, whereas Newspaper does not. Hence, we can reject the
null hypothesis for TV and Radio that there is no relation between those features and Sales, but we fail to
reject the null hypothesis for Newspaper that there is no relationship between newspaper spend and
sales.
The expenses on both TV and Radio ads are positively associated with Sales, whereas the expense on
newspaper ads is slightly negatively associated with Sales.
This model has a higher value of R-squared (0.897) than the previous model, which means that this model
explains more variance and provides a better fit to the data than a model that only includes the TV.
Feature Selection
How do I decide which features have to be included in a linear model? Here's one idea:
Try different models, and only keep predictors in the model if they have small p-values.
Check if the R-squared value goes up when you add new predictors to the model.
What are the drawbacks of this approach?
If the underlying assumptions for creating a linear model (the features being independent) are violated (which
is usually the case), p-values and R-squared values are less reliable.
Using a p-value cutoff of 0.05 means that if we add 100 predictors to a model that are pure noise, about 5 of
them (on average) will still be counted as significant.
R-squared is susceptible to model overfitting, and thus there is no guarantee that a model with a high R-
squared value will generalise. Following is an example:
In [34]:
Out[34]:
0.8971942610828956
In [35]:
1 # add Newspaper to the model (which we believe has no association with Sales)
2 lm = smf.ols(formula='sales ~ TV + radio + newspaper', data=data).fit()
3 lm.rsquared
Out[35]:
0.8972106381789522
Selecting the model with the highest value of R-squared is not a correct approach as the value of R-squared
shall always increase whenever a new feature is taken for consideration even if the feature is unrelated to the
response.
The alternative is to use adjusted R-squared which penalises the model complexity (to control overfitting), but
this again generally under-penalizes complexity (http://scott.fortmann-roe.com/docs/MeasuringError.html).
A better approach to feature selection is cross-validation. It provides a more reliable way to choose which of
the created models will best generalise, as it gives better estimates of out-of-sample error. An advantage is that
the cross-validation method can be applied to any machine learning model, and the scikit-learn package provides
extensive functionality for it.
We’ll create a new feature called Scale, and shall randomly assign observations as small or large:
In [36]:
1 import numpy as np
2
3 # set a seed for reproducibility
4 np.random.seed(12345)
5
6 # create a Series of booleans in which roughly half are True
7 nums = np.random.rand(len(data))
8 mask_large = nums > 0.5
9
10 # initially set Scale to small, then change roughly half to be large
11 data['Scale'] = 'small'
12 data.loc[mask_large, 'Scale'] = 'large'
13 data.head()
Out[36]:
For the scikit-learn library, all data must be represented numerically. If the feature has only two categories, we
can simply create a dummy variable that represents the categories as a binary value:
In [38]:
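The body of this cell is missing from the extract. Given that an IsLarge column is used below, it presumably maps the Scale column to 0/1; a minimal sketch would be:

# create a new column IsLarge: 1 for a large market, 0 for a small one
data['IsLarge'] = data.Scale.map({'small': 0, 'large': 1})
data.head()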
Out[38]:
Let's redo the multiple linear regression problem and include the IsLarge predictor:
In [39]:
1 # create X and y
2 feature_cols = ['TV', 'radio', 'newspaper', 'IsLarge']
3 X = data[feature_cols]
4 y = data.sales
5
6 # instantiate, fit
7 lm = LinearRegression()
8 lm.fit(X, y)
9
10 # print coefficients
11 i=0
12 for col in feature_cols:
13 print('The Coefficient of ',col, ' is: ',lm.coef_[i])
14 i=i+1
How do we interpret the coefficient for IsLarge? Holding the TV/Radio/Newspaper ad expenditure constant, being
a large market is associated with an average increase in Sales of 57.42 widgets.
What if the 0/1 encoding is reversed? The magnitude of the coefficient stays the same, the only difference being
the sign: it will be a negative number instead of a positive one.
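The cell that creates the 'Targeted Geography' column is not visible in this extract. Analogous to the Scale column above, it presumably assigns a random category (rural/suburban/urban) to each row; a sketch under that assumption would be:

# assign roughly one third of the observations to each geography (random assignment, for illustration)
np.random.seed(123456)   # seed chosen arbitrarily for reproducibility
nums = np.random.rand(len(data))
mask_suburban = (nums > 0.33) & (nums < 0.66)
mask_urban = nums > 0.66
data['Targeted Geography'] = 'rural'
data.loc[mask_suburban, 'Targeted Geography'] = 'suburban'
data.loc[mask_urban, 'Targeted Geography'] = 'urban'
data.head()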
Out[40]:
We need to represent the ‘Targeted Geography’ column numerically. But mapping urban=0, suburban=1 and
rural=2 would imply that rural is twice suburban, which is not the case. Hence, we’ll create dummy
variables instead:
In [41]:
# create three dummy variables using get_dummies, then exclude the first dummy column
area_dummies = pd.get_dummies(data['Targeted Geography'], prefix='Targeted Geography').iloc[:, 1:]

# concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns)
data = pd.concat([data, area_dummies], axis=1)
data.head()
Out[41]:
By using only two dummy columns, we can capture the information of all three categories. For example, if
the values of Targeted Geography_suburban and Targeted Geography_urban are both 0, it automatically means that
the observation belongs to the rural geography.
This is called handling the dummy variable trap. If there are N categories, then the same
information can be conveyed by N-1 dummy columns. Let's include the two new dummy variables in the model:
In [42]:
# create X and y
feature_cols = ['TV', 'radio', 'newspaper', 'IsLarge', 'Targeted Geography_suburban', 'Targeted Geography_urban']
X = data[feature_cols]
y = data.sales

# instantiate, fit
lm = LinearRegression()
lm.fit(X, y)

# print coefficients
print(feature_cols, lm.coef_)
Holding all other features constant, being in the suburban geography is associated with an average decrease in
Sales of 106.56 widgets (compared with the baseline, rural).
Being in an urban geography is associated with an average increase in Sales of 268.13 widgets (compared with rural).
A final note about dummy encoding: if we have categories that can be ranked (i.e., worst, bad, good, better,
best), we can potentially represent them numerically as (1, 2, 3, 4, 5) using a single column.
Multi-Collinearity
Origin of the word: the word multi-collinearity consists of two words: Multi, meaning multiple, and Collinear,
meaning being linearly dependent on each other.
Definition: The purpose of executing a Linear Regression is to predict the value of a dependent variable based
on certain independent variables.
So, when we perform a Linear Regression, we want our dataset to have variables which are actually independent,
i.e., we should not be able to define one independent variable with the help of another independent variable,
because then our model would have two variables conveying the same information, which defeats the entire purpose.
Multi-collinearity is the statistical term to represent this type of relation amongst the independent variables:
when the independent variables are not so independent 😊.
We can define multi-collinearity as the situation where the independent variables (or the predictors) have a
strong correlation amongst themselves.
The coefficients in a Linear Regression model represent the extent of change in Y when a certain x
(amongst X1, X2, X3, ...) is changed keeping the others constant. But if x1 and x2 are dependent, then the
assumption that we can change one variable while keeping the others constant is itself wrong, because the
correlated variable will change as well. It means that our model itself becomes a bit flawed.
We have redundancy in our model, as two (or more than two) variables are trying to convey the same
information.
As the extent of the collinearity increases, there is a chance that we might produce an overfitted model. An
overfitted model works well with the training data, but its accuracy fluctuates when exposed to other data sets.
Can result in a Dummy Variable Trap.
Detection
Correlation Matrices and Plots: for correlation between all the X variables.
This plot shows the extent of correlation between the independent variables. Generally, a correlation greater
than 0.9 or less than -0.9 is to be avoided.
Variance Inflation Factor (VIF): VIF = 1 / (1 - R²), where R² is obtained by regressing one independent variable
against all the others.
A VIF greater than 10 indicates extreme correlation between the variables, and then we need to take care of
that correlation.
Do Nothing: If the Correlation is not that extreme, we can ignore it. If the correlated variables are not used
in solving our business question, they can be ignored.
Remove One Variable: Like in dummy variable trap
Combine the correlated variables: Like creating a seniority score based on Age and Years of experience
Principal Component Analysis
Regularization
When we use regression models to train on some data, there is a good chance that the model will overfit the given
training data set. Regularization helps sort out this overfitting problem by restricting the degrees of freedom of a
given equation, i.e., by reducing the effective degree of the polynomial function through shrinking the
corresponding weights.
In a linear equation, we do not want huge weights/coefficients, as a small change in such a weight can make a large
difference to the dependent variable (Y). So, regularization constrains the weights of such features to avoid
overfitting. Multiple linear regression is given as:
𝑦 = 𝛽0 + 𝛽1 𝑥1 + 𝛽2 𝑥2 + 𝛽3 𝑥3+...+𝛽𝑃 𝑥𝑃
Using the OLS method, we try to minimize the cost function, i.e., the residual sum of squares Σ(yᵢ - ŷᵢ)².
To regularize the model, a shrinkage penalty is added to this cost function. Let’s see the different types of
regularization in regression:
LASSO regression penalizes the model based on the sum of the magnitudes of the coefficients. The regularization
term is given by
regularization = λ * Σ|βⱼ|
Where, λ is the shrinkage factor.
Ridge regression penalizes the model based on the sum of squares of the magnitudes of the coefficients. The
regularization term is given by
regularization = λ * Σβⱼ²
Where, λ is the shrinkage factor.
This value of lambda can be anything and should be calculated by cross-validation, depending on what suits the model.
Let’s consider 𝛽1 and 𝛽2 to be the coefficients of a linear regression and λ = 1:
For Lasso, |𝛽1| + |𝛽2| <= s, and for Ridge, 𝛽1² + 𝛽2² <= s,
where s is the maximum value the constraint can take. If we plot both of the above constraints, we get the
following graph:
The red ellipse represents the cost function of the model, whereas the square (left side) represents the Lasso
regression and the circle (right side) represents the Ridge regression.
Ridge regression shrinks the coefficients of those predictors which contribute very little to the model but have
large weights, very close to zero. But it never makes them exactly zero. Thus, the final model will still contain all
of those predictors, though with smaller weights. This doesn’t help in interpreting the model very well. This is where
Lasso regression differs from Ridge regression: in Lasso, the L1 penalty does reduce some coefficients exactly
to zero when we use a sufficiently large tuning parameter λ. So, in addition to regularizing, Lasso also performs
feature selection.
Regularization helps to reduce the variance of the model, without a substantial increase in the bias. If there is
high variance in the model, it means that the model won’t fit well on datasets different from the training data. The
tuning parameter λ controls this bias and variance tradeoff. When the value of λ is increased up to a certain limit, it
reduces the variance without losing any important properties in the data. But after a certain limit, the model starts
losing important properties, which increases the bias in the data. Thus, the selection of a good value
of λ is the key. The value of λ is selected using cross-validation: a set of λ values is chosen, the cross-
validation error is calculated for each of them, and the value of λ with the minimum cross-validation
error is selected.
Elastic Net
According to the Hands-on Machine Learning book, Elastic Net is a middle ground between Ridge Regression
and Lasso Regression. The regularization term is a simple mix of both Ridge's and Lasso's regularization terms,
and you can control the mix ratio α, where α = 0 corresponds to ridge and α = 1 corresponds to lasso.
When should you use plain Linear Regression (i.e., without any regularization), Ridge, Lasso, or Elastic
Net?
According to the Hands-on Machine Learning book, it is almost always preferable to have at least a little bit of
regularization, so generally you should avoid plain Linear Regression. Ridge is a good default, but if you
suspect that only a few features are actually useful, you should prefer Lasso or Elastic Net since they tend to
reduce the useless features’ weights down to zero as we have discussed. In general, Elastic Net is preferred
over Lasso since Lasso may behave erratically when the number of features is greater than the number of
training instances or when several features are strongly correlated.
In [43]:
In [44]:
1 data =pd.read_csv('Admission_Prediction.csv')
2 data.head()
Out[45]:
In [46]:
1 data.describe(include='all')
Out[46]:
(data.describe(include='all') output: summary statistics for Serial No., GRE Score, TOEFL Score, University Rating, SOP, LOR, CGPA, Research and Chance of Admit; the table itself is not reproduced here.)
In [47]:
1 data.describe()
Out[48]:
(data.describe() output: summary statistics for the numeric columns; the table itself is not reproduced here.)
Now the data looks good and there are no missing values. Also, the first column is just serial numbers, so we
don't need that column. Let's drop it from the data to make it cleaner.
In [49]:
Out[49]:
GRE Score TOEFL Score University Rating SOP LOR CGPA Research Chance of Admit
Let's visualize the data and analyze the relationship between independent and dependent variables:
In [50]:
The data distribution looks decent enough and there doesn't seem to be any skewness. Great, let's go ahead!
Let's observe the relationship between the independent variables and the dependent variable.
In [51]:
1 y = data['Chance of Admit']
2 X =data.drop(columns = ['Chance of Admit'])
In [52]:
1 plt.figure(figsize=(20,30), facecolor='white')
2 plotnumber = 1
3
4 for column in X:
5 if plotnumber<=15 :
6 ax = plt.subplot(5,3,plotnumber)
7 plt.scatter(X[column],y)
8 plt.xlabel(column,fontsize=20)
9 plt.ylabel('Chance of Admit',fontsize=20)
10 plotnumber+=1
11 plt.tight_layout()
Great, the relationship between the dependent and independent variables looks fairly linear. Thus, our linearity
assumption is satisfied.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
In [54]:
In [55]:
1 vif
Out[55]:
VIF Features
3 2.776393 SOP
4 2.037449 LOR
5 4.654369 CGPA
6 1.459411 Research
Here, we have the VIF values for all the features. As a rule of thumb, a VIF value greater than 5 indicates very
severe multicollinearity. We don't have any VIF greater than 5, so we are good to go.
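For reference, the cell that builds the VIF table above is not shown in the extract. A sketch consistent with the output, assuming statsmodels' variance_inflation_factor is applied to the scaled feature matrix, would be:

from statsmodels.stats.outliers_influence import variance_inflation_factor

# compute the VIF of each scaled feature and collect the results in a DataFrame
vif = pd.DataFrame()
vif['VIF'] = [variance_inflation_factor(X_scaled, i) for i in range(X_scaled.shape[1])]
vif['Features'] = X.columns
vif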
Great. Let's go ahead and use linear regression and see how well it fits our data. But first, let's split our data
into train and test sets.
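The train/test split cell itself is not visible here. A split consistent with the 375 training observations shown below (75% of the 500 rows) might look like this; the random_state value is illustrative:

from sklearn.model_selection import train_test_split

# hold out 25% of the data for testing
x_train, x_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.25, random_state=355)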
In [56]:
1 y_train
Out[57]:
378 0.56
23 0.95
122 0.57
344 0.47
246 0.72
409 0.61
197 0.73
116 0.56
83 0.92
91 0.38
106 0.87
258 0.77
444 0.92
304 0.62
12 0.78
392 0.84
462 0.62
204 0.69
250 0.74
370 0.72
313 0.67
126 0.85
424 0.91
457 0.37
100 0.71
17 0.65
140 0.84
40 0.46
279 0.67
64 0.52
...
362 0.91
221 0.75
209 0.68
201 0.72
406 0.61
480 0.80
403 0.91
225 0.61
177 0.82
487 0.79
152 0.86
417 0.52
336 0.72
329 0.43
447 0.84
438 0.67
333 0.71
402 0.78
45 0.88
285 0.93
31 0.74
430 0.74
95 0.42
401 0.66
255 0.79
51 0.56
291 0.56
346 0.47
130 0.96
254 0.85
Name: Chance of Admit, Length: 375, dtype: float64
In [58]:
1 regression = LinearRegression()
2
3 regression.fit(x_train,y_train)
Out[58]:
In [59]:
In [60]:
Out[60]:
array([0.92190162])
In [61]:
1 regression.score(x_train,y_train)
Out[61]:
0.8415250484247909
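The helper adj_r2 used in the next cell is defined in a cell that is not visible here. A minimal sketch consistent with the reported values, applying the adjusted R-squared formula to the fitted model's score, would be:

# adjusted R-squared: penalises R-squared for the number of predictors used
def adj_r2(x, y):
    r2 = regression.score(x, y)
    n = x.shape[0]   # number of observations
    p = x.shape[1]   # number of predictors
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)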
In [62]:
1 adj_r2(x_train,y_train)
Out[62]:
0.8385023654247188
Our R² score is 84.15% and the adjusted R² is 83.85% for our training set, so it looks like we are not being
penalized for the use of any feature.
Now let's check if our model is overfitting our data using regularization.
In [63]:
1 regression.score(x_test,y_test)
Out[63]:
0.7534898831471066
In [64]:
1 adj_r2(x_test,y_test)
Out[64]:
0.7387414146174464
In [65]:
1 # Lasso Regularization
2 # LassoCV will return best alpha and coefficients after performing 10 cross validations
3 lasscv = LassoCV(alphas = None,cv =10, max_iter = 100000, normalize = True)
4 lasscv.fit(x_train, y_train)
Out[65]:
In [66]:
Out[66]:
3.0341655445178153e-05
In [67]:
# now that we have the best parameter, let's use Lasso regression and see how well our data fits
alpha = lasscv.alpha_   # best alpha found by LassoCV above
lasso_reg = Lasso(alpha)
lasso_reg.fit(x_train, y_train)
Out[67]:
1 lasso_reg.score(x_test, y_test)
Out[68]:
0.7534654960492284
Our R² score for the test data (75.34%) comes out the same as before using regularization. So, it is fair to say our
OLS model did not overfit the data.
In [69]:
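The Ridge cross-validation cell is not shown. A sketch consistent with the selected alpha reported below, assuming RidgeCV searches over a set of randomly drawn candidate alphas, would be:

from sklearn.linear_model import RidgeCV

# Ridge regularization: search candidate alphas with 10-fold cross-validation
alphas = np.random.uniform(low=0, high=10, size=(50,))
ridgecv = RidgeCV(alphas=alphas, cv=10, normalize=True)
ridgecv.fit(x_train, y_train)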
Out[69]:
In [73]:
1 ridgecv.alpha_
Out[73]:
0.8432446610176114
In [74]:
1 ridge_model = Ridge(alpha=ridgecv.alpha_)
2 ridge_model.fit(x_train, y_train)
Out[74]:
1 ridge_model.score(x_test, y_test)
Out[75]:
0.7538937537809315
We got roughly the same R² score using Ridge regression as well. So, it's safe to say there is no overfitting.
In [76]:
1 # Elastic net
2
3 elasticCV = ElasticNetCV(alphas = None, cv =10)
4
5 elasticCV.fit(x_train, y_train)
Out[76]:
In [77]:
1 elasticCV.alpha_
Out[77]:
0.0011069728449315508
In [78]:
# l1_ratio gives how close the model is to L1 regularization; the value below indicates that we are
# giving equal preference to L1 and L2
elasticCV.l1_ratio
Out[78]:
0.5
In [79]:
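The cell that fits the final Elastic Net model is not visible here. Given the elasticnet_reg object scored in the next cell, a sketch using the cross-validated alpha and the default l1_ratio would be:

from sklearn.linear_model import ElasticNet

# fit an Elastic Net model with the alpha chosen by ElasticNetCV and an equal L1/L2 mix
elasticnet_reg = ElasticNet(alpha=elasticCV.alpha_, l1_ratio=0.5)
elasticnet_reg.fit(x_train, y_train)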
Out[79]:
1 elasticnet_reg.score(x_test, y_test)
Out[80]:
0.7531695370639867
So, we can see that by using different types of regularization, we still get roughly the same R² score. That means our
OLS model has been well trained on the training data and there is no overfitting.
Polynomial Regression
For understanding Polynomial Regression, let's first understand a polynomial. Merriam-Webster defines a
polynomial as: "a mathematical expression of one or more algebraic terms each of which consists of a constant
multiplied by one or more variables raised to a non-negative integral power (such as a + bx + cx^2)". Simply
said, poly means many. So, a polynomial is an aggregation of many monomials (or terms). A simple
polynomial equation can be written as:
𝑦 = 𝑎 + 𝑏𝑥 + 𝑐𝑥² + ... + 𝑛𝑥ⁿ
So, Polynomial Regression can be defined as a mechanism to predict a dependent variable based on the
polynomial relationship with the independent variable.
In the equation,
𝑦 = 𝑎 + 𝑏𝑥 + 𝑐𝑥² + ... + 𝑛𝑥ⁿ
the maximum power of 'x' is called the degree of the polynomial equation. For example, if the degree is 1, the
equation becomes
𝑦 = 𝑎 + 𝑏𝑥
which is a simple linear equation. if the degree is 2, the equation becomes
𝑦 = 𝑎 + 𝑏𝑥 + 𝑐𝑥2
which is a quadratic equation and so on.
We can generalize the matrix obtained above (for Linear Regression) to an equation with n coefficients (in
y = mx + b, m and b are the coefficients) as follows:
Where m is the degree (maximum power of x) of the polynomial and n is the number of observation points. The
above matrix gives the general formula for Polynomial Regression. Earlier, we were able to visualize the
calculation of the minimum because the graph was in three dimensions. But as there are now n coefficients, it's
not possible to create an (n+1)-dimensional graph here.
Now that the maths is clear, let's focus on the Python implementation.
In [4]:
Out[5]:
(dataset preview with columns Position, Level and Salary, e.g. the row 'Manager, 4, 80000')
Here, it can be seen that there are 3 columns in the dataset. The problem statement is to predict the salary
based on the Position and Level of the employee. But we may observe that Position and Level are
related: Level is just another way of conveying the position of the employee in the company. So, essentially,
Position and Level convey the same kind of information. As Level is a numeric column, let's use that in
our Machine Learning model. Hence, Level is our feature (the X variable) and Salary is the label (the Y variable).
In [6]:
x = dataset.iloc[:, 1:2].values
# x = dataset.iloc[:, 1].values
# this is written in this way to make x a matrix, as the machine learning algorithm expects a 2-D array
# if we write 'x = dataset.iloc[:, 1].values', it will return x as a single-dimensional array
x
Out[6]:
array([[ 1],
[ 2],
[ 3],
[ 4],
[ 5],
[ 6],
[ 7],
[ 8],
[ 9],
[10]], dtype=int64)
In [7]:
1 y=dataset.iloc[:,2].values
2 y
Out[7]:
To learn Polynomial Regression, we'll follow a comparative approach. First, we'll create a linear model
using Linear Regression, and then we'll prepare a Polynomial Regression model and see how the two compare
to each other.
In [8]:
Out[8]:
Here, the red dots are the actual data points and, the blue straight line is what our model has created. It is
evident from the diagram above that a Linear model does not fit our dataset well. So, let's try with a Polynomial
Model.
In [10]:
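The code for this cell is not shown; the printed matrix below is the degree-2 polynomial expansion of x, so the cell presumably looked something like this (variable names are illustrative):

from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# expand x into [1, x, x^2] and fit a linear model on the expanded features
poly_reg = PolynomialFeatures(degree=2)
x_poly = poly_reg.fit_transform(x)
print(x_poly)

lin_reg2 = LinearRegression()
lin_reg2.fit(x_poly, y)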
[[ 1. 1. 1.]
[ 1. 2. 4.]
[ 1. 3. 9.]
[ 1. 4. 16.]
[ 1. 5. 25.]
[ 1. 6. 36.]
[ 1. 7. 49.]
[ 1. 8. 64.]
[ 1. 9. 81.]
[ 1. 10. 100.]]
Out[10]:
In [11]:
Out[11]:
It can be noted here that for Polynomial Regression also, we are using the LinearRegression object.
Why is it so?
It is because the 'Linear' in Linear Regression does not refer to the degree of the polynomial in the independent
variable (x). Instead, it refers to the coefficients. Mathematically,
𝑦 = 𝑎 + 𝑏𝑥 + 𝑐𝑥² + ... + 𝑛𝑥ⁿ
is not linear in x, but the coefficients a, b, c, etc. appear only to the first power, hence the name Linear
Regression.
In [12]:
Still, a two-degree equation is not a good fit. Now, we'll try to increase the degree of the equation, i.e., we'll
see whether we get a good fit at a higher degree or not. After some trial and error, we see that the
model gets the best fit with a 4th-degree polynomial equation.
In [13]:
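This cell body is also missing. Based on the surrounding text, it refits the model with a 4th-degree polynomial and plots the fit; a sketch (variable names are illustrative) would be:

# refit with a 4th-degree polynomial and visualise the fit
poly_reg4 = PolynomialFeatures(degree=4)
x_poly4 = poly_reg4.fit_transform(x)
lin_reg4 = LinearRegression()
lin_reg4.fit(x_poly4, y)

plt.scatter(x, y, color='red')
plt.plot(x, lin_reg4.predict(x_poly4), color='blue')
plt.xlabel('Level')
plt.ylabel('Salary')
plt.show()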
Out[13]:
Here, we can see that our model now fits the dataset accurately. This kind of fit might not be the case with
actual business datasets; we are getting a brilliant fit because the number of data points is small.
Once the training is completed, we need to expose the trained model as an API for the user to consume it. For
prediction, the saved model is loaded first and then the predictions are made using it. If the web app works fine,
the same app is deployed to the cloud platform. The application flow for cloud deployment looks like: