Simple Linear Regression
The dataset is downloaded from the course data store (courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv, 72629 bytes, text/csv) and saved locally as ‘FuelConsumption.csv’.
Understanding the Data
0.0.3 FuelConsumption.csv:
We have downloaded FuelConsumption.csv, a fuel-consumption dataset whose columns include ENGINESIZE, CYLINDERS, FUELCONSUMPTION_COMB, and CO2EMISSIONS (g/km), the features used throughout this lab.
(Truncated df.head() output; only fragments of rows 3–4 survive, showing TRANSMISSION AS6, FUELTYPE Z, and paired fuel-consumption values 12.7/9.1 and 12.1/8.7.)
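This rendering omits the notebook's setup cell. A minimal sketch, assuming the standard imports and the CSV saved above, of what the later cells rely on:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.metrics import r2_score

# Load the downloaded dataset into a DataFrame named df, as used below
df = pd.read_csv("FuelConsumption.csv")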
Data Exploration
Let's first do a descriptive exploration of our data.
[6]: # summarize the data
df.describe()
(df.describe() output, truncated in this rendering to the CO2EMISSIONS column:)
CO2EMISSIONS
count 1067.000000
mean 256.228679
std 63.372304
min 108.000000
25% 207.000000
50% 251.000000
75% 294.000000
max 488.000000
[7]: cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
Now, let's plot each of these features against the Emission, to see how linear their relationship is:
[9]: plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS,
color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
0.1 Practice
Plot CYLINDERS vs the Emission, to see how linear their relationship is:
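One possible solution (a sketch; the column names follow the cdf frame defined above):

plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()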
Creating train and test dataset Train/Test Split involves splitting the dataset into training and testing sets, which are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data that was used to train the model. It is more realistic for real-world problems.
Since we know the outcome of each data point in the testing set, it is great to test with! And because this data has not been used to train the model, the model has no knowledge of the outcome of these points. So, in essence, it is truly an out-of-sample test.
Let's split our dataset into train and test sets: 80% of the data for training and 20% for testing. We create a mask to select random rows using the np.random.rand() function:
[14]: msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
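Note that np.random.rand gives only an approximately 80/20 split, and the split changes on every run. A hypothetical alternative, if an exact and reproducible split is preferred, is scikit-learn's train_test_split:

from sklearn.model_selection import train_test_split

# Exact 80/20 split; random_state fixes the shuffle for reproducibility
train, test = train_test_split(cdf, test_size=0.2, random_state=42)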
Simple Regression Model Linear Regression fits a linear model with coefficients θ = (θ1, …, θn) to minimize the 'residual sum of squares' between the actual values y in the dataset and the values predicted by the linear approximation of the independent x. In simple linear regression the fitted line is ŷ = θ0 + θ1·x, where θ0 is the intercept and θ1 the slope.
[15]: plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
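The cell that actually fits the model appears to be missing from this rendering; below is a sketch consistent with the variables used later (train_x, regr) and with the printed coefficients:

regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(train_x, train_y)

# Print the slope and intercept of the fitted line
print('Coefficients:', regr.coef_)
print('Intercept:', regr.intercept_)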
Coefficients: [[38.70722175]]
Intercept: [125.83531187]
As mentioned before, the Coefficient and Intercept of simple linear regression are the parameters of the fit line. Given that it is a simple linear regression with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters. With the values above, for example, a 3.0 L engine is predicted to emit roughly 125.84 + 38.71 × 3.0 ≈ 242 g/km of CO2.
Plot outputs We can plot the fitted line over the data:
[17]: plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
Evaluation We compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.
There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:
- Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest of the metrics to understand, since it is just the average error.
- Mean Squared Error (MSE): the mean of the squared error. It is more popular than MAE because the focus is geared more towards large errors.
- Root Mean Squared Error (RMSE): the square root of the Mean Squared Error.
- R-squared is not an error, but a popular metric for the accuracy of your model. It represents how close the data points are to the fitted regression line. The higher the R-squared, the better the model fits your data; the best possible score is 1.0, and it can be negative.
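For reference, with yi the actual values, ŷi the predictions, and ȳ the mean of the actual values over n test points, these metrics are:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert, \qquad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2,$$
$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \qquad R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}.$$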
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_hat = regr.predict(test_x)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_hat - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_hat))