Lecture 2
Model representation
Andrew Ng
Linear Regression
• Linear regression is the supervised machine learning
model in which the model finds the best-fit straight line
between the independent variable and the dependent
variable.
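As a minimal sketch of what "best fit" means in practice (the code is illustrative, not from the lecture; the sizes and prices come from the training set shown later in this lecture):

import numpy as np

# Training examples: house sizes (feet^2) and prices (in $1000s)
x = np.array([2104.0, 1416.0, 1534.0, 852.0])
y = np.array([460.0, 232.0, 315.0, 178.0])

# Least-squares fit of a degree-1 polynomial: returns (slope, intercept)
slope, intercept = np.polyfit(x, y, deg=1)
print(f"best-fit line: h(x) = {intercept:.2f} + {slope:.4f} * x")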
[Figure: Housing Prices (Portland, OR); scatter plot of Price (in 1000s of dollars) versus Size (feet²), sizes ranging from 500 to 3000]
Supervised learning: the “right answer” is given for each example in the data.
Regression problem: predict a real-valued output.
Training set of housing prices (Portland, OR):

  Size in feet² (x)    Price ($) in 1000's (y)
  2104                 460
  1416                 232
  1534                 315
  852                  178
  …                    …
Notation:
m = Number of training examples
x’s = “input” variable / features
y’s = “output” variable / “target” variable
Training Set → Learning Algorithm → h
Size of house → h → Estimated price

How do we represent h? For linear regression with one variable:
h_θ(x) = θ₀ + θ₁x
Hypothesis
A hypothesis is a function that we believe (or hope) is similar to the true
function, the target function that we want to model. In the context of email spam
classification, it would be the rule we come up with that allows us to separate
spam from non-spam emails.
Cost Function
The cost function, here the sum of squared errors (SSE), measures how far our
hypothesis is from the optimal hypothesis. The closer our hypothesis matches the
training examples, the smaller the value of the cost function. Ideally, we would
like J(θ) = 0.
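A minimal sketch of this cost function in Python (the function name and toy data are my own choices):

import numpy as np

def cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum((h(x_i) - y_i)^2)."""
    m = len(x)                   # number of training examples
    h = theta0 + theta1 * x      # hypothesis evaluated at every x_i
    return np.sum((h - y) ** 2) / (2 * m)

# A hypothesis that passes exactly through the data gives J = 0
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(cost(0.0, 1.0, x, y))  # 0.0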
Linear regression
with one variable
Cost function
Machine Learning
Training Set:

  Size in feet² (x)    Price ($) in 1000's (y)
  2104                 460
  1416                 232
  1534                 315
  852                  178
  …                    …
Hypothesis: h_θ(x) = θ₀ + θ₁x
θ₀, θ₁: parameters
How do we choose θ₀, θ₁?
[Figure: three example lines h_θ(x), plotted over x, y ∈ [0, 3], for three different choices of the parameters θ₀ and θ₁]
Idea: choose θ₀, θ₁ so that h_θ(x) is close to y for our training examples (x, y).

Squared error cost function:
J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²

Goal: minimize J(θ₀, θ₁) over θ₀, θ₁.
Linear regression
with one variable
Cost function
intuition I
Machine Learning
Simplified (set θ₀ = 0):
Hypothesis: h_θ(x) = θ₁x
Parameter: θ₁
Cost function: J(θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²
Goal: minimize J(θ₁)
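To make this concrete, here is a small sweep over θ₁ (the toy dataset is my own choice, matching the 0 to 3 axes of the plots below):

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # toy training set: y = x exactly
y = np.array([1.0, 2.0, 3.0])
m = len(x)

for theta1 in [0.0, 0.5, 1.0, 1.5, 2.0]:
    J = np.sum((theta1 * x - y) ** 2) / (2 * m)
    print(f"theta1 = {theta1:3.1f}  ->  J(theta1) = {J:.3f}")
# J traces a bowl in theta1, with its minimum J = 0 at theta1 = 1.0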
[Figure sequence: for several values of θ₁, the left panel plots h_θ(x) against the data (for fixed θ₁, this is a function of x, over x, y ∈ [0, 3]) and the right panel plots the resulting cost J(θ₁) (a function of the parameter θ₁, over θ₁ ∈ [−0.5, 2.5]); the plotted points trace out a bowl-shaped J(θ₁)]
Linear regression
with one variable
Cost function
intuition II
Machine Learning
Hypothesis: h_θ(x) = θ₀ + θ₁x
Parameters: θ₀, θ₁
Cost function: J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²
Goal: minimize J(θ₀, θ₁)
[Figure: left panel, h_θ(x) for fixed θ₀, θ₁ (a function of x) plotted over the housing data, Price ($) in 1000's versus Size in feet² (x); right panel, J(θ₀, θ₁) (a function of the parameters) as a bowl-shaped 3-D surface]
[Figure: the same J(θ₀, θ₁) drawn as a contour plot; each ellipse is a set of (θ₀, θ₁) values with equal cost]
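A contour figure like this can be produced by evaluating J over a grid of (θ₀, θ₁) pairs; a rough sketch, with grid ranges chosen arbitrarily:

import numpy as np

def cost(theta0, theta1, x, y):
    return np.sum((theta0 + theta1 * x - y) ** 2) / (2 * len(x))

x = np.array([2104.0, 1416.0, 1534.0, 852.0])
y = np.array([460.0, 232.0, 315.0, 178.0])

theta0_vals = np.linspace(-100.0, 100.0, 50)
theta1_vals = np.linspace(-0.5, 0.5, 50)
# J evaluated at every grid point; level sets of this array are the ellipses
J_grid = np.array([[cost(t0, t1, x, y) for t1 in theta1_vals]
                   for t0 in theta0_vals])
print(J_grid.shape)  # (50, 50); e.g. matplotlib's contour() can render it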
[Figure sequence: for several choices of (θ₀, θ₁), the left panel shows the line h_θ(x) over the data (for fixed θ₀, θ₁, a function of x) and the right panel marks the corresponding point on the contour plot of J(θ₀, θ₁)]
Linear regression
with one variable
Gradient descent
Machine Learning
Have some function J(θ₀, θ₁)
Want: the θ₀, θ₁ that minimize J(θ₀, θ₁)

Outline:
• Start with some θ₀, θ₁ (say θ₀ = 0, θ₁ = 0)
• Keep changing θ₀, θ₁ to reduce J(θ₀, θ₁)
  until we hopefully end up at a minimum
[Figure: two surface plots of J(θ₀, θ₁) over the (θ₀, θ₁) plane; starting gradient descent from different initial points can lead to different local minima]
Gradient descent algorithm:

repeat until convergence {
  θⱼ := θⱼ − α (∂/∂θⱼ) J(θ₀, θ₁)    (simultaneously for j = 0 and j = 1)
}

Here α is the learning rate and := denotes assignment. Correct simultaneous update: compute both partial derivatives using the current (old) θ₀, θ₁, then assign both.
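A sketch of this update on a simple one-parameter function (the function J(θ) = (θ − 3)², the step count, and the learning rate are my own choices):

# Gradient descent on J(theta) = (theta - 3)^2, minimized at theta = 3
theta = 0.0      # initial guess
alpha = 0.1      # learning rate
for _ in range(50):
    grad = 2 * (theta - 3)          # dJ/dtheta
    theta = theta - alpha * grad    # theta := theta - alpha * dJ/dtheta
print(theta)  # approximately 3.0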
Linear regression
with one variable
Gradient descent
intuition
Machine Learning
Gradient descent algorithm (single-parameter intuition):
θ₁ := θ₁ − α (d/dθ₁) J(θ₁)
If the derivative is positive, the update decreases θ₁; if it is negative, the update increases θ₁. Either way, θ₁ moves downhill toward a minimum.
If α is too small, gradient descent can be slow.
If α is too large, gradient descent can overshoot the minimum; it may fail to converge, or even diverge.
At a local optimum the derivative is zero, so the update θ₁ := θ₁ − α · 0 leaves the current value of θ₁ unchanged.
Gradient descent can converge to a local minimum even with the learning rate α held fixed: as we approach a local minimum, the derivative term shrinks, so gradient descent automatically takes smaller steps. There is no need to decrease α over time.
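This can be checked numerically; in the sketch below (J(θ) = θ² is my own toy example), the step |α · dJ/dθ| shrinks on its own even though α stays fixed:

theta, alpha = 4.0, 0.3
for i in range(6):
    grad = 2 * theta        # dJ/dtheta for J(theta) = theta^2
    step = alpha * grad     # size of this iteration's move
    theta -= step
    print(f"iter {i}: step = {step:.4f}, theta = {theta:.4f}")
# The steps shrink geometrically because the derivative shrinks near theta = 0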
Linear regression
with one variable
Gradient descent for
linear regression
Machine Learning
Gradient descent algorithm:
repeat until convergence {
  θⱼ := θⱼ − α (∂/∂θⱼ) J(θ₀, θ₁)    (for j = 0 and j = 1)
}

Linear regression model:
h_θ(x) = θ₀ + θ₁x
J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²

Working out the partial derivatives of J:
∂/∂θ₀: (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)
∂/∂θ₁: (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾) · x⁽ⁱ⁾
Gradient descent algorithm (derivatives plugged in):

repeat until convergence {
  θ₀ := θ₀ − α (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)
  θ₁ := θ₁ − α (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾) · x⁽ⁱ⁾
}
update θ₀ and θ₁ simultaneously
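Putting the pieces together, a hedged end-to-end sketch of these updates (the dataset, learning rate, and iteration count are illustrative choices; convergence checks are omitted):

import numpy as np

def gradient_descent(x, y, alpha=0.01, iterations=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        error = theta0 + theta1 * x - y     # h(x_i) - y_i, all m examples
        grad0 = np.sum(error) / m           # dJ/dtheta0
        grad1 = np.sum(error * x) / m       # dJ/dtheta1
        theta0 -= alpha * grad0             # both gradients were computed
        theta1 -= alpha * grad1             # first: simultaneous update
    return theta0, theta1

# Toy data on the line y = 1 + 2x; should recover roughly (1.0, 2.0)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
print(gradient_descent(x, y, alpha=0.05, iterations=5000))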
[Figure: for linear regression the squared-error cost J(θ₀, θ₁) is a convex, bowl-shaped surface, so it has a single global minimum and no other local optima]
[Figure sequence: successive gradient descent steps; each left panel shows the current hypothesis h_θ(x) (for fixed θ₀, θ₁, a function of x) over the housing data, and each right panel shows the corresponding point on the contour plot of J(θ₀, θ₁) moving toward the minimum]
“Batch” Gradient Descent: “batch” means that each step of gradient descent uses all m training examples.
Title: "Exploring Univariate Linear Regression Analysis for Parameter
Optimization in Public Datasets"
Abstract: This research project aims to perform a comprehensive analysis of
univariate linear regression techniques for parameter optimization using publicly
available datasets. Specifically, this study will focus on the application of linear
regression to determine the optimal values of parameters in a univariate setting.
The research will involve the selection and acquisition of relevant public datasets,
and the application of linear regression methods to derive optimal parameter
values. The results of this study will provide valuable insights for researchers in
various fields, particularly in the domain of predictive modeling and statistical
analysis.
Research Objectives:
1) To identify and select a suitable public dataset for a univariate linear regression analysis.
2) To apply univariate linear regression techniques to the selected dataset with the aim of optimizing
model parameters.
3) To evaluate the performance of the linear regression model using appropriate metrics.
4) To draw conclusions and provide insights based on the results, highlighting the practical implications of
parameter optimization in linear regression.