
ML0101EN-Reg-Simple-Linear-Regression-Co2-py-v1

December 5, 2018

Simple Linear Regression

About this Notebook In this notebook, we learn how to use scikit-learn to implement simple
linear regression. We download a dataset related to fuel consumption and carbon dioxide
emissions of cars. Then we split our data into training and test sets, create a model using the
training set, evaluate the model using the test set, and finally use the model to predict an unknown value.

0.0.1 Importing Needed packages


In [ ]: import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline

0.0.2 Downloading Data


To download the data, we will use !wget to download it from IBM Object Storage.

In [ ]: !wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-course

Did you know? When it comes to Machine Learning, you will likely be working with large
datasets. As a business, where can you host your data? IBM is offering a unique opportunity for
businesses, with 10 TB of IBM Cloud Object Storage: Sign up now for free.

0.1 Understanding the Data


0.1.1 FuelConsumption.csv:
We have downloaded a fuel consumption dataset, FuelConsumption.csv, which contains
model-specific fuel consumption ratings and estimated carbon dioxide emissions for new
light-duty vehicles for retail sale in Canada. Dataset source

• MODELYEAR e.g. 2014
• MAKE e.g. Acura
• MODEL e.g. ILX
• VEHICLE CLASS e.g. SUV
• ENGINE SIZE e.g. 4.7
• CYLINDERS e.g. 6
• TRANSMISSION e.g. A6
• FUEL CONSUMPTION in CITY(L/100 km) e.g. 9.9
• FUEL CONSUMPTION in HWY (L/100 km) e.g. 8.9
• FUEL CONSUMPTION COMB (L/100 km) e.g. 9.2
• CO2 EMISSIONS (g/km) e.g. 182 --> low --> 0

0.2 Reading the data in


In [ ]: df = pd.read_csv("FuelConsumption.csv")

# take a look at the dataset


df.head()

0.2.1 Data Exploration


Let's first do a descriptive exploration of our data.

In [ ]: # summarize the data


df.describe()

Let's select some features to explore further.

In [ ]: cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)

We can plot each of these features:

In [ ]: viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]
viz.hist()
plt.show()

Now, let's plot each of these features against Emission to see how linear their relationship is:

In [ ]: plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')


plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()

In [ ]: plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')


plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()

0.3 Practice
Plot CYLINDERS vs. Emission to see how linear their relationship is:

In [ ]: # write your code here

Double-click here for the solution.
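If you want to compare your answer, here is a minimal sketch of one possible solution, mirroring the scatter plots above (it assumes the cdf DataFrame defined earlier):

In [ ]: plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()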

Creating train and test dataset Train/Test Split involves splitting the dataset into training and
testing sets that are mutually exclusive. You then train with the training set and test with the
testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the
testing set is not part of the data used to train the model, which is more realistic for real-world problems.
Since we know the outcome of each data point in this dataset, it is great to test with. And because
the testing data has not been used to train the model, the model has no knowledge of the outcome
of these data points. So, in essence, it is truly out-of-sample testing.
Let's split our dataset into train and test sets: 80% of the data for training and 20% for testing.
We create a mask to select random rows using the np.random.rand() function:
In [ ]: msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
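As an aside (not part of the original notebook), scikit-learn's train_test_split utility gives an equivalent 80/20 split with a reproducible shuffle; a minimal sketch:

In [ ]: # Alternative split, assuming cdf from above; random_state fixes the shuffle
from sklearn.model_selection import train_test_split
train, test = train_test_split(cdf, test_size=0.2, random_state=42)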

0.3.1 Simple Regression Model


Linear regression fits a linear model with coefficients θ = (θ1, ..., θn) that minimizes the residual
sum of squares between the observed targets y in the dataset and the targets predicted by the
linear approximation of the independent variables x.
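To make the objective concrete, here is an illustrative sketch (using hypothetical toy values, not the dataset above) of the closed-form least-squares estimates for a single feature and the residual sum of squares they minimize:

In [ ]: # Toy example: slope/intercept that minimize the residual sum of squares
x = np.array([1.6, 2.0, 2.4, 3.5, 4.7])        # hypothetical engine sizes
y = np.array([140., 180., 200., 255., 300.])   # hypothetical CO2 emissions
theta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
theta0 = y.mean() - theta1 * x.mean()
rss = np.sum((y - (theta0 + theta1 * x)) ** 2)  # quantity being minimized
print(theta0, theta1, rss)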

Train data distribution


In [ ]: plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()

Modeling Using sklearn package to model data.


In [ ]: from sklearn import linear_model
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (train_x, train_y)
# The coefficients
print ('Coefficients: ', regr.coef_)
print ('Intercept: ',regr.intercept_)
As mentioned before, the coefficient and intercept in simple linear regression are the parameters
of the fitted line. Given that it is a simple linear regression with only two parameters, and knowing
that those parameters are the intercept and slope of the line, sklearn can estimate them directly from
our data. Notice that all of the data must be available to traverse and calculate the parameters.
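As a quick, optional sanity check (assuming the regr model fitted above), predicting emissions for a hypothetical 3.0 L engine by hand from the intercept and slope should match regr.predict:

In [ ]: engine_size = 3.0  # hypothetical value for illustration
manual = regr.intercept_[0] + regr.coef_[0][0] * engine_size
via_model = regr.predict([[engine_size]])[0][0]
print(manual, via_model)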

Plot outputs We can plot the fitted line over the data:
In [ ]: plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")

Evaluation We compare the actual values and predicted values to calculate the accuracy of a
regression model. Evaluation metrics play a key role in the development of a model, as they
provide insight into areas that require improvement.
There are different model evaluation metrics; let's use MSE here to calculate the accuracy of
our model based on the test set:

• Mean Absolute Error: the mean of the absolute value of the errors. This is the easiest of the metrics to understand, since it is just the average error.
• Mean Squared Error (MSE): the mean of the squared errors. It is more popular than Mean Absolute Error because the focus is geared more towards large errors, as squaring penalizes larger errors more heavily than smaller ones.
• Root Mean Squared Error (RMSE): the square root of the Mean Squared Error.
• R-squared is not an error, but a popular metric for the accuracy of your model. It represents how close the data points are to the fitted regression line; the higher the R-squared, the better the model fits your data.

In [ ]: from sklearn.metrics import r2_score

test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_hat = regr.predict(test_x)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_hat - test_y)))


print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_hat - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y_hat , test_y) )

0.4 Want to learn more?


IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning
algorithms. It has been designed to bring predictive intelligence to decisions made by individuals,
by groups, by systems – by your enterprise as a whole. A free trial is available through this course,
available here: SPSS Modeler.
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson
Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter
notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson
Studio enables data scientists to collaborate on their projects without having to install anything.
Join the fast-growing community of Watson Studio users today with a free account at Watson Studio.

0.4.1 Thanks for completing this lesson!


Notebook created by: Saeed Aghabozorgi
Copyright © 2018 Cognitive Class. This notebook and its source code are released under the
terms of the MIT License.
