Making Predictions

This document outlines a structured approach to making predictions using Machine Learning, specifically focusing on predicting house prices in Boston. It covers essential steps such as understanding the problem, hypothesis generation, data exploration, preprocessing, feature engineering, model training, and evaluation. The document emphasizes the importance of following a systematic process to achieve accurate predictions and includes practical coding examples and assignments for hands-on learning.


Getting familiar with the process of making predictions using Machine Learning

Learning Objectives
At the end of this session you will be able to:
● Understand the machine-learning-based prediction process
● Apply correlation techniques for feature selection
● Build a machine learning predictor for Boston housing prices
● Learn how to evaluate the predictor
Introduction
Making predictions using Machine Learning isn't just about grabbing the data and feeding it to
algorithms. The algorithm might spit out some prediction, but that's not what you are aiming for.
The difference between good data science professionals and naive data science aspirants is that
the former follow this process religiously:

1. Understand the problem: Before getting the data, we need to understand the problem we are trying to solve. If you know the domain, think about which factors could play a major role in solving the problem. If you don't know the domain, read about it.
2. Hypothesis Generation: This step is quite important, yet it is often forgotten. In simple words, hypothesis generation means creating a set of features which could influence the target variable at a given confidence level (commonly taken as 95%). We do this before looking at the data to avoid biased thinking. This step often helps in creating new features.
3. Get Data: Now we download the data and look at it. Determine which features are available and which aren't, how many of the features generated during hypothesis generation hit the mark, and which ones could still be created. Answering these questions will set us on the right track.
4. Data Exploration: We can't determine everything by just looking at the data; we need to dig deeper. This step helps us understand the nature of the variables (missing values, zero-variance features) so that they can be treated properly. It involves creating charts and graphs (univariate and bivariate analysis) and cross-tables to understand the behavior of the features.
5. Data Preprocessing: Here, we impute missing values and clean string variables (removing spaces, irregular tabs, date-time formats) and anything else that shouldn't be there. This step usually goes hand in hand with data exploration.
6. Feature Engineering: Now we create and add new features to the data set. Most of the ideas for these features come from the hypothesis generation stage.
7. Model Training: Using a suitable algorithm, we train the model on the given data set.
8. Model Evaluation: Once the model is trained, we evaluate its performance using a suitable error metric. Here we also look at variable importance, i.e., which variables have proved to be significant in determining the target variable, and accordingly shortlist the best variables and train the model again.
9. Model Testing: Finally, we test the model on the unseen (test) data set.

We'll follow this process in the project to arrive at our final predictions. Let's get started.
1. Understand the Problem
This lab aims at predicting (residential) house prices in Boston, USA. The problem statement is quite self-explanatory and doesn't need further explanation, so we move on to the next step.

2. Hypothesis Generation
Well, this is going to be interesting. What factors can you think of right now which could influence
house prices? As you read this, write down your own factors as well; later we can match
them with the data set. Defining a hypothesis has two parts: the Null Hypothesis (Ho) and the Alternate
Hypothesis (Ha). They can be understood as:

Ho - A particular feature has no impact on the dependent variable.
Ha - A particular feature has a direct impact on the dependent variable.
Based on a decision criterion (say, a 5% significance level), in statistical parlance we either 'reject' or 'fail to reject' the
null hypothesis. Practically, while building the model we look at probability (p)
values. If the p-value is less than 0.05, we reject the null hypothesis; if it is 0.05 or greater, we fail to reject the null
hypothesis. Some factors which I can think of that directly influence house prices are the following:
Per capita crime rate by town
Proportion of residential land zoned for lots over 25,000 sq. ft
Proportion of non-retail business acres per town
Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
Nitric oxide concentration (parts per 10 million)
Average number of rooms per dwelling
Proportion of owner-occupied units built prior to 1940
Weighted distances to five Boston employment centers
Index of accessibility to radial highways
Full-value property tax rate per $10,000
Pupil-teacher ratio by town
1000(Bk — 0.63)², where Bk is the proportion of [people of African American descent] by town
LSTAT: Percentage of lower status of the population
Median value of owner-occupied homes in $1000s
…keep thinking. I am sure you can come up with many more apart from these.
3. Get Data
You can download the data from https://www.kaggle.com/altavish/boston-housing-dataset
and load it into your Python environment. Also check the dataset page, where all the details about the
data and the variables are given. The data set consists of 13 explanatory variables. Yes, it's going to
be one heck of a data exploration ride, but we'll learn how to deal with this many variables. The
target variable is MEDV. As you can see, the data set comprises numeric, categorical, and ordinal
variables.
4. Data Exploration
Data Exploration is the key to getting insights from data. Practitioners say a good data exploration
strategy can solve even complicated problems in a few hours. A good data exploration strategy
comprises the following:

1. Univariate Analysis - It is used to visualize one variable in one plot. Examples: histogram,
density plot, etc.
2. Bivariate Analysis - It is used to visualize two variables (x and y axis) in one plot. Examples:
bar chart, line chart, area chart, etc.
3. Multivariate Analysis - As the name suggests, it is used to visualize more than two variables
at once. Examples: stacked bar chart, dodged bar chart, etc.
4. Cross Tables - They are used to compare the behavior of two categorical variables (used in
pivot tables as well).

Let's load the necessary libraries and data and start coding.
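For example, a minimal loading sketch (the file name HousingData.csv is an assumption based on the Kaggle download; adjust the path to wherever you saved the data):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Read the Boston housing data into a pandas DataFrame
# (file name assumed from the Kaggle download; adjust if needed)
df = pd.read_csv('HousingData.csv')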

After we read the data, we can take a first look at it.
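For example, head() shows the first few rows (df is the DataFrame from the loading sketch above):

# Preview the first five rows of the data set
df.head()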

The description of all the features is given below:


CRIM: Per capita crime rate by town
ZN: Proportion of residential land zoned for lots over 25,000 sq. ft
INDUS: Proportion of non-retail business acres per town
CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
NOX: Nitric oxide concentration (parts per 10 million)
RM: Average number of rooms per dwelling
AGE: Proportion of owner-occupied units built prior to 1940
DIS: Weighted distances to five Boston employment centers
RAD: Index of accessibility to radial highways
TAX: Full-value property tax rate per $10,000
PTRATIO: Pupil-teacher ratio by town
B: 1000(Bk - 0.63)², where Bk is the proportion of [people of African American descent] by town
LSTAT: Percentage of lower status of the population
MEDV: Median value of owner-occupied homes in $1000s

The house price, indicated by the variable MEDV, is our target variable; the remaining columns are the feature variables from which we will predict the value of a house.

Alternatively, you can also check the data set information using the info() command.
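For example:

# Column names, non-null counts, and data types in one summary
df.info()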
5. Data Preprocessing
After loading the data, it is good practice to check whether there are any missing values in the data. We count
the number of missing values for each feature using isnull().
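For example (df is the DataFrame loaded earlier):

# Number of missing values in each column
df.isnull().sum()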

Out of 14 features, 6 features have missing values. Let's check the percentage of missing values in
these columns.
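One way to compute the percentages, restricting the output to the columns that have missing values:

# Percentage of missing values per column
missing_pct = df.isnull().sum() / len(df) * 100
print(missing_pct[missing_pct > 0])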

We can infer that each of these variables has about 3.9% missing values. Let's visualize these missing values using a bar plot.
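A sketch of the bar plot, reusing missing_pct from the snippet above:

# Bar plot of the missing-value percentages
missing_pct[missing_pct > 0].plot(kind='bar')
plt.ylabel('% of values missing')
plt.show()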
Let's proceed and check the distribution of the target variable.
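A quick way to plot the distribution (histplot is the current seaborn API; older seaborn versions use distplot):

# Histogram of the target variable MEDV with a density estimate
sns.histplot(df['MEDV'], bins=30, kde=True)
plt.show()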
We see that the values of MEDV are roughly normally distributed, with a few outliers.
Next, we create a correlation matrix that measures the linear relationships between the variables. The correlation matrix can be computed with the corr() method of the pandas DataFrame, and we will use the heatmap function from the seaborn library to plot it.
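A sketch of the heatmap code:

# Correlation matrix of all features, rounded for readability
corr_matrix = df.corr().round(2)

# Annotated heatmap of the correlation matrix
plt.figure(figsize=(12, 9))
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm')
plt.show()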

The correlation coefficient ranges from -1 to 1. If the value is close to 1, it means that there is a strong positive correlation between the two variables. When it is close to -1, the variables have a strong negative correlation.
Observations:
To fit a linear regression model, we select those features which have a high correlation with our target variable MEDV. By looking at the correlation matrix we can see that RM has a strong positive correlation with MEDV (0.7), whereas LSTAT has a strong negative correlation with MEDV (-0.74).
6. Feature Engineering
An important point when selecting features for a linear regression model is to check for multicollinearity. The features RAD and TAX have a correlation of 0.91; such feature pairs are strongly correlated with each other, so we should not select both of them together for training the model. The same goes for the features DIS and AGE, which have a correlation of -0.75.
Based on the above observations we will select RM and LSTAT as our features. Using a scatter plot, let's see how these features vary with MEDV.
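One way to draw these scatter plots:

# Scatter plots of each selected feature against the target MEDV
features = ['LSTAT', 'RM']
plt.figure(figsize=(12, 5))
for i, col in enumerate(features):
    plt.subplot(1, 2, i + 1)
    plt.scatter(df[col], df['MEDV'], alpha=0.5)
    plt.xlabel(col)
    plt.ylabel('MEDV')
plt.show()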

Observations:
The prices increase linearly as the value of RM increases. There are a few outliers, and the data seems to be capped at 50.
The prices tend to decrease with an increase in LSTAT, although the relationship does not look exactly linear.

Feature Normalization
If you look at the feature values, you will note that they are on different scales. When features differ by orders of magnitude, performing feature scaling first can make gradient descent converge much more quickly. To normalize the dataset, the following steps are used:
Subtract the mean value of each feature from the dataset.
After subtracting the mean, additionally scale (divide) the feature values by their respective standard deviations.

The standard deviation is a way of measuring how much variation there is in the range of values of a particular feature (most data points will lie within ±2 standard deviations of the mean); this is an alternative to taking the range of values (max - min). When normalizing the features, the values used for normalization should be kept for later use. After learning the parameters of the model, we often want to predict the prices of houses we have not seen before. Given a new x value (the LSTAT and RM values for a house), we must first normalize x using the mean and standard deviation previously computed from the training set. The normalization can be implemented as shown below.
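A minimal sketch (it assumes the missing values noted earlier have been imputed; here they are simply filled with the column means, which is one possible choice):

# Fill remaining missing values with the column means (a simple assumption)
df = df.fillna(df.mean())

# Select the two chosen features and the target as numpy arrays
X = df[['LSTAT', 'RM']].values
y = df['MEDV'].values

# Normalize: subtract the mean and divide by the standard deviation of each feature
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_norm = (X - mu) / sigma
# Keep mu and sigma: they are needed later to normalize unseen observations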

Model Training using Gradient Descent

We now implement the gradient descent optimization technique according to the formula discussed during the lecture session.
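The exact code from the lecture is not reproduced here; the following is a minimal sketch of batch gradient descent for linear regression on the normalized features (the function names, the learning rate, and the iteration count are illustrative choices):

# Cost function: J(theta) = (1/(2m)) * sum((X_b @ theta - y)^2)
def compute_cost(X_b, y, theta):
    m = len(y)
    errors = X_b @ theta - y
    return (errors @ errors) / (2 * m)

# Batch gradient descent update: theta := theta - (alpha/m) * X_b.T @ (X_b @ theta - y)
def gradient_descent(X_b, y, theta, alpha, num_iters):
    m = len(y)
    cost_history = []
    for _ in range(num_iters):
        gradient = X_b.T @ (X_b @ theta - y) / m
        theta = theta - alpha * gradient
        cost_history.append(compute_cost(X_b, y, theta))
    return theta, cost_history

# Add a column of ones so theta[0] acts as the intercept term
X_b = np.c_[np.ones(X_norm.shape[0]), X_norm]
theta = np.zeros(X_b.shape[1])

# Run gradient descent (alpha and num_iters are example values)
theta, cost_history = gradient_descent(X_b, y, theta, alpha=0.01, num_iters=1000)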
One way to make sure that gradient descent is working properly is to look at the value of the cost function and check that it decreases with each iteration. When gradient descent is implemented correctly, the value of the cost function should decrease and converge to a steady value by the end of the algorithm. The final values of the model parameters are then used to make predictions on new observations. The following plot of the cost against the iteration number demonstrates this behavior.
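A sketch of the convergence plot, using the cost_history returned by the gradient descent sketch above:

# Cost value versus iteration number; the curve should decrease monotonically
plt.plot(cost_history)
plt.xlabel('Number of iterations')
plt.ylabel('Cost J(theta)')
plt.show()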
Testing the model
Using the final values of the model parameters, predict MEDV for a house with LSTAT = 9.14 and RM = 6.42. Remember that these values must first be normalized with the mean and standard deviation computed from the training data.
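A sketch of this prediction, reusing mu, sigma, and theta from the earlier sketches:

# Normalize the new observation with the training mean and standard deviation
x_new = (np.array([9.14, 6.42]) - mu) / sigma   # order: [LSTAT, RM]

# Prepend 1 for the intercept and apply the learned parameters
medv_pred = np.r_[1.0, x_new] @ theta
print('Predicted MEDV:', medv_pred)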
Analyzing the Impact of the Learning Rate
In this part you will apply different learning rates to the dataset and find one that converges quickly. The code below produces a graph of the cost function against the number of iterations for different values of the learning rate.
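A sketch of such a comparison, reusing the gradient_descent function above (the candidate learning rates are example values):

# Compare convergence for several candidate learning rates
for alpha in [0.3, 0.1, 0.03, 0.01, 0.003]:
    theta_init = np.zeros(X_b.shape[1])
    _, history = gradient_descent(X_b, y, theta_init, alpha=alpha, num_iters=100)
    plt.plot(history, label='alpha = {}'.format(alpha))
plt.xlabel('Number of iterations')
plt.ylabel('Cost J(theta)')
plt.legend()
plt.show()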

7. Model Training
We concatenate the LSTAT and RM columns using np.c_ provided
by the numpy library.
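One way to do this (wrapping the result in a DataFrame with named columns is an optional convenience):

# Stack the two feature columns side by side and define the target
X = pd.DataFrame(np.c_[df['LSTAT'], df['RM']], columns=['LSTAT', 'RM'])
y = df['MEDV']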
Splitting the data into training and testing sets
Next, we split the data into training and testing sets. We train the model with 80% of the samples and test with the remaining 20%. We do this to assess the model's performance on unseen data. To split the data we use the train_test_split function provided by the scikit-learn library. Finally, we print the sizes of our training and test sets to verify that the split has occurred properly.
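A sketch of the split (the random_state value is arbitrary; it only fixes the shuffle for reproducibility):

from sklearn.model_selection import train_test_split

# 80% of the samples for training, 20% for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5)

# Verify the sizes of the resulting sets
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)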

Training and testing the model

We use scikit-learn's LinearRegression to train the model on the training set and then generate predictions for both the training and test sets.
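A minimal sketch:

from sklearn.linear_model import LinearRegression

# Fit the linear regression model on the training set
lin_model = LinearRegression()
lin_model.fit(X_train, y_train)

# Predictions for both the training and the test set
y_train_pred = lin_model.predict(X_train)
y_test_pred = lin_model.predict(X_test)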

Model evaluation
We will evaluate our model using RMSE and R2-score.
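A sketch of the evaluation, using the predictions from the previous step:

from sklearn.metrics import mean_squared_error, r2_score

# RMSE and R2 on the training set
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
r2_train = r2_score(y_train, y_train_pred)
print('Train RMSE: {:.3f}, R2: {:.3f}'.format(rmse_train, r2_train))

# RMSE and R2 on the test set
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
r2_test = r2_score(y_test, y_test_pred)
print('Test RMSE: {:.3f}, R2: {:.3f}'.format(rmse_test, r2_test))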
Assignment
This assignment uses the Boston housing dataset once again. This time, create a test set consisting of half of the data, using the rest for training.
1. Build and evaluate the model using one additional feature: the one with the next highest correlation with MEDV after RM and LSTAT.
2. Fit a polynomial regression model to the training data.
3. Predict the labels for the corresponding test data.
4. Evaluate the model and report the model parameters.
5. Of the predictors used in this assignment, which would you choose as the final model for the Boston housing data?
