Lecture 4: Linear Regression
Alankar Alankar (IIT Bombay, India)
January 25, 2024
Simple linear regression

Simple linear regression assumes an approximately linear relationship between a single predictor X and the response Y:

Y ≈ β0 + β1 X

Given coefficient estimates β̂0 and β̂1, the predicted response is

Ŷ = β̂0 + β̂1 X
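For example, if the fitted coefficients were β̂0 = 2 and β̂1 = 0.5 (values chosen purely for illustration), an input of X = 10 would give the prediction Ŷ = 2 + 0.5 × 10 = 7.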
Estimating the coefficients
The i-th residual is defined as

eᵢ = yᵢ − ŷᵢ

where ŷᵢ = β̂0 + β̂1 xᵢ. The residual sum of squares (RSS) is

RSS = e₁² + e₂² + ⋯ + eₙ² = ∑_{i=1}^{n} (yᵢ − β̂0 − β̂1 xᵢ)²

The least squares method chooses β̂0 and β̂1 to minimize the RSS.
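Minimizing the RSS with respect to β̂0 and β̂1 gives the standard closed-form solution

β̂1 = ∑_{i=1}^{n} (xᵢ − x̄)(yᵢ − ȳ) / ∑_{i=1}^{n} (xᵢ − x̄)²,  β̂0 = ȳ − β̂1 x̄

where x̄ and ȳ are the sample means. As a minimal sketch (not from the lecture; the synthetic data, true coefficients, and noise level below are assumptions), these estimates and the RSS can be computed with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from an assumed true model y = 2 + 0.5 x + noise
x = rng.uniform(-2, 2, size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=100)

# Closed-form least-squares estimates for simple linear regression
x_bar, y_bar = x.mean(), y.mean()
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar

# Residuals and residual sum of squares
residuals = y - (beta0_hat + beta1_hat * x)
rss = np.sum(residuals ** 2)

print(f"beta0_hat = {beta0_hat:.3f}, beta1_hat = {beta1_hat:.3f}, RSS = {rss:.3f}")
```

On data simulated from y = 2 + 0.5x + noise, the printed estimates should land close to 2 and 0.5.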
Analysis of the coefficients
Assessing the accuracy of the coefficient estimates
Figure: The red line in the left panel is the true relationship for this example, f(X) = β0 + β1 X; the observed data are generated as Y = f(X) + ϵ. The blue line is the least-squares fit. In the right panel, the several blue lines are least-squares fits for different samples drawn from the same population.
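The variability shown in the right panel can be reproduced with a short simulation. The sketch below is an assumption for illustration (the true coefficients β0 = 2, β1 = 3, the sample size, and the noise level are not from the slides): it repeatedly draws a sample from Y = β0 + β1 X + ϵ and refits the line by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0_true, beta1_true = 2.0, 3.0  # assumed true population coefficients

def fit_once(n=100):
    """Draw one sample from Y = beta0 + beta1*X + eps and return (beta0_hat, beta1_hat)."""
    x = rng.uniform(-2, 2, size=n)
    y = beta0_true + beta1_true * x + rng.normal(scale=2.0, size=n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# Each refit gives a slightly different line, like the blue lines in the right panel
estimates = np.array([fit_once() for _ in range(10)])
print("mean of estimates:", estimates.mean(axis=0))  # close to (2, 3)
print("spread (std dev): ", estimates.std(axis=0))   # sampling variability
```

The spread of the estimates across refits corresponds to the spread of the blue lines in the figure; it shrinks as the sample size n grows.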
True model vs. real data