TMI04.2 Linear Regression
Ingmar Schuster
Patrick Jähnichen
using slides by Andrew Ng
● Linear Regression
● Hypothesis formulation, hypothesis space
● Optimizing Cost with Gradient Descent
● Using multiple input features with Linear Regression
● Feature Scaling
● Nonlinear Regression
● Optimizing Cost using derivatives
● Hypothesis parameters
Training data are fed to the Learning Algorithm, which produces a hypothesis (a mapping between input and output): size of flat → estimated price
● linear regression, one input variable (univariate)
How to choose parameters?
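As a sketch of this pipeline, a univariate linear hypothesis mapping flat size to an estimated price could look like the following (the parameter values are illustrative, not learned):

```python
# Univariate linear regression hypothesis: h(x) = theta0 + theta1 * x
# theta0, theta1 are the parameters we still have to choose.
def hypothesis(x, theta0, theta1):
    """Map an input feature x (e.g. flat size in m^2) to a prediction."""
    return theta0 + theta1 * x

# Example: price estimate for a 50 m^2 flat with theta0=20, theta1=2
print(hypothesis(50.0, 20.0, 2.0))  # -> 120.0
```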
● Squaring
– Penalty for positive and negative deviations is the same
– Penalty for large deviations is stronger
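The squared-error cost these two points describe can be sketched as follows (a minimal version, assuming training pairs (x, y) and the univariate hypothesis h(x) = theta0 + theta1·x):

```python
def squared_error_cost(theta0, theta1, xs, ys):
    """J(theta) = 1/(2m) * sum((h(x_i) - y_i)^2).
    Squaring makes positive and negative deviations count equally,
    and penalizes large deviations quadratically."""
    m = len(xs)
    total = sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)

xs, ys = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
print(squared_error_cost(0.0, 1.0, xs, ys))  # perfect fit -> cost 0.0
```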
Linear regression w. gradient descent
Optimizing Cost with Gradient Descent
● Want to minimize the cost J(θ₀, θ₁)
● Start with random parameters θ₀, θ₁
● Keep changing θ₀, θ₁ to reduce J(θ₀, θ₁) until we end up at a minimum
[Figure: stepwise descent towards the minimum; the steps become smaller without changing the learning rate]
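The stepwise descent pictured above amounts to the simultaneous update θⱼ := θⱼ − α · ∂J/∂θⱼ. A minimal univariate sketch (the data, starting point, and learning rate α are illustrative):

```python
def gradient_descent(xs, ys, alpha=0.1, iters=1000):
    """Minimize J(theta0, theta1) by simultaneous gradient updates."""
    theta0, theta1 = 0.0, 0.0  # arbitrary starting point
    m = len(xs)
    for _ in range(iters):
        # prediction errors h(x_i) - y_i under the current parameters
        errs = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        grad0 = sum(errs) / m                             # dJ/dtheta0
        grad1 = sum(e * x for e, x in zip(errs, xs)) / m  # dJ/dtheta1
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Data generated from y = 2x, so the minimum is near theta0=0, theta1=2
t0, t1 = gradient_descent([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Note that both gradients are computed from the same, unchanged parameters before either update is applied, which is what "simultaneous update" means.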
Learning Rate considerations
● Random restart with different parameter(s)
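The effect of the learning rate can be seen on a toy one-parameter cost J(t) = t² (a hypothetical example, not the regression cost itself): a small α shrinks the cost each step, while a too-large α overshoots the minimum and diverges.

```python
def gd_costs(alpha, steps=20):
    """Run a few gradient descent steps on J(t) = t^2, record the cost."""
    t = 1.0
    costs = []
    for _ in range(steps):
        t -= alpha * 2 * t   # gradient of t^2 is 2t
        costs.append(t * t)
    return costs

small = gd_costs(0.1)   # cost shrinks steadily towards 0
large = gd_costs(1.1)   # each step overshoots: the cost grows -> divergence
```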
● Notation: with n input features, h(x) = θ₀ + θ₁x₁ + … + θₙxₙ
● More compact with definition x₀ := 1: h(x) = θᵀx
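With the convention x₀ = 1, the multi-feature hypothesis becomes a single dot product. A sketch (the parameter values and feature meanings are illustrative assumptions):

```python
# h(x) = theta0*x0 + theta1*x1 + ... + thetan*xn = theta^T x, with x0 = 1
def predict(theta, x):
    """theta and x are equal-length lists; x[0] is the constant 1."""
    return sum(t * xi for t, xi in zip(theta, x))

theta = [20.0, 2.0, 0.5]     # illustrative parameters
x = [1.0, 50.0, 3.0]         # x0 = 1, flat size, number of rooms (assumed)
print(predict(theta, x))     # -> 121.5
```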
● Center data on 0
● Scale data so the majority falls into range [-1, 1]
● Standardize: xᵢ := (xᵢ − μ) / σ for all i, where μ is the empirical mean (empirical expected value) and σ is the empirical standard deviation
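The standardization step above can be sketched directly (using the population standard deviation; the sample data are illustrative):

```python
def standardize(xs):
    """Center on the empirical mean mu and scale by the empirical
    standard deviation sigma: x_i := (x_i - mu) / sigma for all i."""
    m = len(xs)
    mu = sum(xs) / m
    sigma = (sum((x - mu) ** 2 for x in xs) / m) ** 0.5
    return [(x - mu) / sigma for x in xs]

scaled = standardize([10.0, 20.0, 30.0])
# mean 20, sigma ~8.165 -> roughly [-1.22, 0.0, 1.22]
```

After this transformation the features are centered on 0 and most values fall into [-1, 1], which keeps gradient descent from zig-zagging along elongated cost contours.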