Lec 9 Linear Correlation and Linear Regression

This lecture discusses linear correlation and linear regression. It provides examples of scatter plots showing relationships between variables and shows how to calculate the correlation coefficient. Regression analysis is used to predict the value of a dependent variable from the value of an independent variable and to explain how changes in the independent variable affect the dependent variable. Simple linear regression uses one independent variable to describe a linear relationship with the dependent variable; the estimated regression equation approximates the population regression line and is used to predict y values.


Linear Correlation and Linear Regression

By
Md. Siddikur Rahman, PhD
Associate Professor
Department of Statistics
Begum Rokeya University, Rangpur.
Scatter Plots and Correlation
— A scatter plot (or scatter diagram) is used to show the relationship between two variables.
— Correlation analysis is used to measure the strength of the association (linear relationship) between two variables.
Scatter Plot Examples
[Figure: scatter plots of y vs. x illustrating linear and curvilinear relationships]
[Figure: scatter plots of y vs. x illustrating strong and weak relationships]
[Figure: scatter plot of y vs. x illustrating no relationship]
Recall: Covariance

$$\operatorname{cov}(x, y) = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$
Interpreting Covariance

cov(X,Y) > 0: X and Y are positively correlated
cov(X,Y) < 0: X and Y are inversely correlated
cov(X,Y) = 0: X and Y are uncorrelated (zero covariance does not, by itself, imply independence)


Correlation Coefficient
— Correlation measures the strength of the linear association between two variables.
— The sample correlation coefficient r is a measure of the strength of the linear relationship between two variables, based on sample observations. It is the standardized covariance (unitless):

$$r = \frac{\operatorname{cov}(x, y)}{\sqrt{\operatorname{var}(x)\operatorname{var}(y)}} = \frac{SP(x, y)}{\sqrt{SS(x)\,SS(y)}}$$
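As a minimal sketch, these definitions translate directly into Python (the helper names are illustrative, not from the lecture):

```python
import math

def sample_cov(x, y):
    """Sample covariance: summed cross-deviations divided by n - 1."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)

def sample_corr(x, y):
    """Sample correlation r: covariance standardized by both variances."""
    return sample_cov(x, y) / math.sqrt(sample_cov(x, x) * sample_cov(y, y))
```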
Features of r
— Measures the relative strength of the linear relationship between two variables
— Unit-less
— Ranges between −1 and 1
— The closer to −1, the stronger the negative linear relationship
— The closer to 1, the stronger the positive linear relationship
— The closer to 0, the weaker any linear relationship
Scatter Plots of Data with Various
Correlation Coefficients

[Figure: six scatter plots of Y vs. X with r = −1, r = −0.6, r = 0, r = +1, r = +0.3, and r = 0]
Calculating the Correlation Coefficient

$$r = \frac{\operatorname{cov}(x, y)}{\sqrt{\operatorname{var}(x)\operatorname{var}(y)}} = \frac{\dfrac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n - 1}}{\sqrt{\dfrac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1} \cdot \dfrac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n - 1}}} = \frac{\sum(x - \bar{x})(y - \bar{y})}{\sqrt{\left[\sum(x - \bar{x})^2\right]\left[\sum(y - \bar{y})^2\right]}} = \frac{SP(x, y)}{\sqrt{SS(x)\,SS(y)}}$$

Here SP(x, y) is the numerator of the covariance, and SS(x), SS(y) are the numerators of the variances.
Calculating the Correlation Coefficient

where:
— r = Sample correlation coefficient
— n = Sample size
— x = Value of the independent variable
— y = Value of the dependent variable
Calculation Example
Tree Height (y)   Trunk Diameter (x)   xy         y²          x²
35                8                    280        1225        64
49                9                    441        2401        81
27                7                    189        729         49
33                6                    198        1089        36
60                13                   780        3600        169
21                7                    147        441         49
45                11                   495        2025        121
51                12                   612        2601        144
Σ = 321           Σ = 73               Σ = 3142   Σ = 14111   Σ = 713
Calculation Example
(continued)
Tree height (y) vs. trunk diameter (x):

$$r = \frac{n\sum xy - \sum x \sum y}{\sqrt{\left[n\sum x^2 - (\sum x)^2\right]\left[n\sum y^2 - (\sum y)^2\right]}} = \frac{8(3142) - (73)(321)}{\sqrt{\left[8(713) - (73)^2\right]\left[8(14111) - (321)^2\right]}} = 0.886$$

r = 0.886 → relatively strong positive linear association between x and y.

[Figure: scatter plot of trunk diameter (x, 0–14) vs. tree height (y, 0–70) showing the positive trend]
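As a check, the same computational formula can be run on the tree data (a sketch, not part of the original lecture):

```python
import math

x = [8, 9, 7, 6, 13, 7, 11, 12]       # trunk diameter
y = [35, 49, 27, 33, 60, 21, 45, 51]  # tree height
n = len(x)

sx, sy = sum(x), sum(y)
sxy = sum(a * b for a, b in zip(x, y))
sx2, sy2 = sum(a * a for a in x), sum(b * b for b in y)

# r = [n*Sum(xy) - Sum(x)Sum(y)] / sqrt([n*Sum(x^2) - Sum(x)^2][n*Sum(y^2) - Sum(y)^2])
r = (n * sxy - sx * sy) / math.sqrt((n * sx2 - sx**2) * (n * sy2 - sy**2))
print(round(r, 3))  # 0.886
```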
Excel Output
Excel correlation output: Tools / Data Analysis / Correlation…, or manually with =CORREL(array1, array2)

                  Tree Height   Trunk Diameter
Tree Height       1
Trunk Diameter    0.886231      1

The value 0.886231 is the correlation between Tree Height and Trunk Diameter.
Introduction to
Regression Analysis
— Regression analysis is used to:
◦ Predict the value of a dependent variable based on the
value of at least one independent variable
◦ Explain the impact of changes in an independent variable
on the dependent variable
Dependent variable: the variable we wish to explain
Independent variable: the variable used to explain
the dependent variable

Simple Linear Regression Model

— Only one independent variable, x
— The relationship between x and y is described by a linear function
— Changes in y are assumed to be caused by changes in x
Types of Regression Models

[Figure: four panels: positive linear relationship, negative linear relationship, relationship NOT linear, no relationship]
Population Linear Regression

The population regression model:

$$y = \beta_0 + \beta_1 x + \varepsilon$$

where y is the dependent variable, x the independent variable, β0 the population y-intercept, β1 the population slope coefficient, and ε the random error term (residual). β0 + β1x is the linear component and ε the random error component.
Linear Regression Assumptions

— Error values (ε) are statistically independent
— Error values are normally distributed for any given value of x
— The distributions of possible ε values have equal variances for all values of x
— The underlying relationship between the x variable and the y variable is linear
Population Linear Regression
(continued)

[Figure: the population regression line y = β0 + β1x + ε, with intercept β0, slope β1, and the random error εi drawn as the vertical distance between the observed value of y for xi and the predicted value of y for xi]
Simple Linear Regression Equation
(Prediction Line)
The simple linear regression equation provides an estimate of the population regression line:

$$\hat{y}_i = b_0 + b_1 x_i$$

where ŷi is the estimated (or predicted) y value for observation i, b0 the estimate of the regression intercept, b1 the estimate of the regression slope, and xi the value of x for observation i. The individual random error terms ei have a mean of zero.
Least Squares Criterion
— b0 and b1 are obtained by finding the values of b0 and b1 that minimize the sum of the squared residuals:

$$SSE = f(b_0, b_1) = \sum e^2 = \sum (y - \hat{y})^2 = \sum \left(y - (b_0 + b_1 x)\right)^2$$
Setting the partial derivatives of SSE to zero:

$$\frac{\partial\,SSE}{\partial b_0} = -2\sum_{i=1}^{n}\left(y_i - b_0 - b_1 x_i\right) = 0$$

$$\frac{\partial\,SSE}{\partial b_1} = -2\sum_{i=1}^{n}\left(y_i - b_0 - b_1 x_i\right)x_i = 0$$

Least-squares normal equations:

$$n b_0 + b_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$$

$$b_0 \sum_{i=1}^{n} x_i + b_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$$
The Least Squares Equation
— The formulas for b0 and b1 are:

$$b_1 = \frac{\sum(x - \bar{x})(y - \bar{y})}{\sum(x - \bar{x})^2}$$

with the algebraic equivalent for b1:

$$b_1 = \frac{\sum xy - \dfrac{\sum x \sum y}{n}}{\sum x^2 - \dfrac{(\sum x)^2}{n}}$$

and

$$b_0 = \bar{y} - b_1 \bar{x}$$
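A minimal sketch of these formulas in code:

```python
def least_squares(x, y):
    """Return (b0, b1) for the simple linear regression of y on x."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    # b1 = sum of cross-deviations over sum of squared x-deviations
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = y_bar - b1 * x_bar  # the fitted line passes through (x_bar, y_bar)
    return b0, b1
```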
Interpretation of the
Slope and the Intercept
— b0 is the estimated average value of y when the value of x is zero
— b1 is the estimated change in the average value of y as a result of a one-unit change in x
Example
The following data were collected in a study of age and fatness in humans.

Age:    23    23    27    27    39    41    45    49    50    53    53    54    56    57    58    58    60    61
% Fat:  9.5   27.9  7.8   17.8  31.4  25.9  27.4  25.2  31.1  34.7  42    29.1  32.5  30.3  33    33.8  41.1  34.5

One of the questions was, “What is the relationship between age and fatness?”
Example
Age (x)   % Fat (y)   x²      xy
23        9.5         529     218.5
23        27.9        529     641.7
27        7.8         729     210.6
27        17.8        729     480.6
39        31.4        1521    1224.6
41        25.9        1681    1061.9
45        27.4        2025    1233
49        25.2        2401    1234.8
50        31.1        2500    1555
53        34.7        2809    1839.1
53        42          2809    2226
54        29.1        2916    1571.4
56        32.5        3136    1820
57        30.3        3249    1727.1
58        33          3364    1914
58        33.8        3364    1960.4
60        41.1        3600    2466
61        34.5        3721    2104.5
Totals:   n = 18, Σx = 834, Σy = 515, Σx² = 41612, Σxy = 25489.2
Example
n = 18, Σx = 834, Σy = 515, Σx² = 41612, Σxy = 25489.2

$$S_{xx} = \sum x^2 - \frac{(\sum x)^2}{n} = 41612 - \frac{834^2}{18} = 2970$$

$$S_{xy} = \sum xy - \frac{(\sum x)(\sum y)}{n} = 25489.2 - \frac{(834)(515)}{18} = 1627.53$$
Example

$$b_1 = \frac{S_{xy}}{S_{xx}} = \frac{1627.53}{2970} = 0.54799$$

$$b_0 = \bar{y} - b_1\bar{x} = \frac{515}{18} - 0.54799 \cdot \frac{834}{18} = 3.2209$$

$$\hat{y} = 3.22 + 0.548x$$

To predict the average % fat for 45-year-old humans:

$$\hat{y} = 3.22 + 0.548(45) = 27.9$$
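For Python 3.10+, the standard library can reproduce this fit (a sketch; statistics.linear_regression returns the slope first):

```python
from statistics import linear_regression

age = [23, 23, 27, 27, 39, 41, 45, 49, 50,
       53, 53, 54, 56, 57, 58, 58, 60, 61]
fat = [9.5, 27.9, 7.8, 17.8, 31.4, 25.9, 27.4, 25.2, 31.1,
       34.7, 42, 29.1, 32.5, 30.3, 33, 33.8, 41.1, 34.5]

slope, intercept = linear_regression(age, fat)
print(round(intercept, 4), round(slope, 5))  # approx. 3.2209 0.54799
print(round(intercept + slope * 45, 1))      # predicted % fat at age 45: approx. 27.9
```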
Simple Linear Regression Example

— A real estate agent wishes to examine the relationship between the selling price of a home and its size (measured in square feet)
— A random sample of 10 houses is selected
◦ Dependent variable (y) = house price in $1000s
◦ Independent variable (x) = square feet
Sample Data for
House Price Model
House Price in $1000s (y)   Square Feet (x)
245                         1400
312                         1600
279                         1700
308                         1875
199                         1100
219                         1550
405                         2350
324                         2450
319                         1425
255                         1700
Regression Using Excel
— Data / Data Analysis / Regression

Excel Output
Regression Statistics
Multiple R          0.76211
R Square            0.58082
Adjusted R Square   0.52842
Standard Error      41.33032
Observations        10

The regression equation is: house price = 98.24833 + 0.10977 (square feet)

ANOVA
            df   SS           MS           F        Significance F
Regression  1    18934.9348   18934.9348   11.0848  0.01039
Residual    8    13665.5652   1708.1957
Total       9    32600.5000

             Coefficients   Standard Error   t Stat    P-value   Lower 95%   Upper 95%
Intercept    98.24833       58.03348         1.69296   0.12892   -35.57720   232.07386
Square Feet  0.10977        0.03297          3.32938   0.01039   0.03374     0.18580
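The coefficients above can be reproduced with a short script (a sketch using numpy's polyfit; statsmodels would also yield the full ANOVA table):

```python
import numpy as np

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700])
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255])

b1, b0 = np.polyfit(sqft, price, deg=1)  # slope first, then intercept
print(round(b0, 5), round(b1, 5))        # approx. 98.24833 0.10977
```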
Graphical Presentation
— House price model: scatter plot and regression line

[Figure: scatter plot of square feet (0–3000) vs. house price ($1000s, 0–450) with fitted line; intercept = 98.248, slope = 0.10977]

house price = 98.24833 + 0.10977 (square feet)
Interpretation of the Intercept, b0

house price = 98.24833 + 0.10977 (square feet)

— b0 is the estimated average value of Y when the value of X is zero (if x = 0 is in the range of observed x values)
◦ Here, no houses had 0 square feet, so b0 = 98.24833 just indicates that, for houses within the range of sizes observed, $98,248.33 is the portion of the house price not explained by square feet
Interpretation of the Slope Coefficient, b1

house price = 98.24833 + 0.10977 (square feet)

— b1 measures the estimated change in the average value of Y as a result of a one-unit change in X
◦ Here, b1 = 0.10977 tells us that the average value of a house increases by 0.10977($1000) = $109.77, on average, for each additional square foot of size
Least Squares Regression Properties

— The sum of the residuals from the least squares regression line is 0: Σ(y − ŷ) = 0
— The sum of the squared residuals, Σ(y − ŷ)², is a minimum
— The simple regression line always passes through the mean of the y variable and the mean of the x variable
— The least squares coefficients are unbiased estimates of β0 and β1
Residual Analysis: check assumptions

$$e_i = y_i - \hat{y}_i$$

— The residual for observation i, ei, is the difference between its observed and predicted value
— Check the assumptions of regression by examining the residuals:
◦ Examine for the linearity assumption
◦ Examine for constant variance for all levels of X (homoscedasticity)
◦ Evaluate the normal distribution assumption
◦ Evaluate the independence assumption
— Graphical analysis of residuals: plot residuals vs. X (see the sketch below)
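A sketch of the residuals-vs-X plot for the house price data (matplotlib assumed available):

```python
import numpy as np
import matplotlib.pyplot as plt

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700])
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255])
b1, b0 = np.polyfit(sqft, price, 1)

residuals = price - (b0 + b1 * sqft)  # e_i = y_i - y_hat_i
plt.scatter(sqft, residuals)
plt.axhline(0, linestyle="--")        # look for random scatter around zero
plt.xlabel("Square feet")
plt.ylabel("Residual")
plt.show()
```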
Residual Analysis for Linearity
[Figure: a curved pattern in the residuals vs. x indicates a non-linear relationship; a random scatter indicates linearity ✓]
Residual Analysis for Homoscedasticity
[Figure: a fanning pattern in the residuals vs. x indicates non-constant variance; an even band indicates constant variance ✓]
Residual Analysis for Independence
[Figure: a systematic pattern in the residuals vs. x indicates the errors are not independent; a random scatter indicates independence ✓]
Explained and Unexplained Variation

— Total variation is made up of two parts:

$$SST = SSE + SSR$$

$$SST = \sum(y - \bar{y})^2 \qquad SSE = \sum(y - \hat{y})^2 \qquad SSR = \sum(\hat{y} - \bar{y})^2$$

where SST is the total sum of squares, SSE the sum of squares error, and SSR the sum of squares regression; ȳ = average value of the dependent variable, y = observed values of the dependent variable, ŷ = estimated value of y for the given x value.
Explained and Unexplained Variation
(continued)
[Figure: at a point (xi, yi), the total deviation yi − ȳ splits into the unexplained part yi − ŷi (contributing to SSE) and the explained part ŷi − ȳ (contributing to SSR)]
Coefficient of Determination, R2
— The coefficient of determination is the portion of the total variation in the dependent variable that is explained by variation in the independent variable
— The coefficient of determination is also called R-squared and is denoted as R²:

$$R^2 = \frac{SSR}{SST}, \qquad 0 \le R^2 \le 1$$
Coefficient of Determination, R2
(continued)
Coefficient of determination:

$$R^2 = \frac{SSR}{SST} = \frac{\text{sum of squares explained by regression}}{\text{total sum of squares}}$$

Note: in the single independent variable case, the coefficient of determination is

$$R^2 = r^2$$

where R² = coefficient of determination and r = simple correlation coefficient.
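A self-contained sketch of the decomposition for the house price data:

```python
import numpy as np

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700])
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255])
b1, b0 = np.polyfit(sqft, price, 1)

y_hat = b0 + b1 * sqft
sst = np.sum((price - price.mean()) ** 2)  # total variation
sse = np.sum((price - y_hat) ** 2)         # unexplained variation
ssr = np.sum((y_hat - price.mean()) ** 2)  # explained variation

print(round(ssr / sst, 5))      # R^2, approx. 0.58082
print(round(1 - sse / sst, 5))  # same value, since SST = SSE + SSR
```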
Examples of Approximate R² Values
[Figure: R² = 1: perfect linear relationship between x and y; 100% of the variation in y is explained by variation in x]
[Figure: 0 < R² < 1: weaker linear relationship between x and y; some but not all of the variation in y is explained by variation in x]
[Figure: R² = 0: no linear relationship between x and y; the value of y does not depend on x (none of the variation in y is explained by variation in x)]
Excel Output
$$R^2 = \frac{SSR}{SST} = \frac{18934.9348}{32600.5000} = 0.58082$$

58.08% of the variation in house prices is explained by variation in square feet.

Regression Statistics
Multiple R          0.76211
R Square            0.58082
Adjusted R Square   0.52842
Standard Error      41.33032
Observations        10

ANOVA
            df   SS           MS           F        Significance F
Regression  1    18934.9348   18934.9348   11.0848  0.01039
Residual    8    13665.5652   1708.1957
Total       9    32600.5000

             Coefficients   Standard Error   t Stat    P-value   Lower 95%   Upper 95%
Intercept    98.24833       58.03348         1.69296   0.12892   -35.57720   232.07386
Square Feet  0.10977        0.03297          3.32938   0.01039   0.03374     0.18580
Outliers in linear regression
— An outlier is an observation that is unusually small or large; it differs from the main trend in the data.
— Outliers may arise as:
◦ Extreme in the Y-direction
◦ Extreme in the X-direction (high leverage point)
◦ Extreme in both the X- and Y-direction
◦ Distant from the rest of the data
— Several possibilities need to be investigated when an outlier is observed:
◦ There was an error in recording the value.
◦ The point does not belong in the sample.
◦ The observation is valid.
Outliers in linear regression
[Figure 1: scatter plot of years of experience (X) vs. income (Y), marking four outlier types: an extreme X value, an extreme Y value, a point extreme in both X and Y, and a distant data point. Identify outliers from the scatter diagram.]
Effect of outliers on least squares
— The slope and intercept of the least squares (LS) line are very sensitive to outliers.

Without outlier: ŷ = 104.78 − 4.10x, R² = 0.94
With outlier: ŷ = 97.51 − 3.32x, R² = 0.55
Effect of outliers on least squares (continued)

Without outlier: ŷ = 92.54 − 2.5x, R² = 0.46
With outlier: ŷ = 87.59 − 1.6x, R² = 0.52
Outlying Observations
— We have to identify such observations and then decide if they need to be eliminated or if their influence needs to be reduced.
— When dealing with more than one variable, simple plots (boxplots, scatterplots, etc.) may not be useful for identifying outliers, and we have to use the residuals or functions of residuals.
Outlier Diagnosis
Key assumptions about e (error):
1. At any particular x value, the distribution of e is a normal distribution.
2. At any particular x value, the standard deviation of e is σ, which is constant over all values of x.
To check these assumptions, one would examine the deviations e1, e2, …, en:

$$y_1 - \hat{y}_1 = y_1 - (a + bx_1)$$
$$y_2 - \hat{y}_2 = y_2 - (a + bx_2)$$
$$\vdots$$
$$y_n - \hat{y}_n = y_n - (a + bx_n)$$
Outlier In Y-direction
Standardized residuals:
Standardized residuals are defined as

$$d_i = \frac{\hat{e}_i}{\hat{\sigma}}, \quad i = 1, 2, \ldots, n$$

where

$$\hat{\sigma}^2 = \frac{1}{n - p}\sum_{i=1}^{n}\hat{e}_i^2$$

If |di| > 3, the corresponding observation is said to be an outlier.
Outlier In Y-direction
Studentized residuals:
Studentized residuals are defined as

$$r_i = \frac{\hat{e}_i}{\hat{\sigma}\sqrt{1 - w_{ii}}}, \quad i = 1, 2, \ldots, n$$

where the leverage of observation i is

$$w_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \quad i = 1, 2, \ldots, n$$

If |ri| > 3, the corresponding observation is said to be an outlier.
Outlier In Y-direction
R-student residuals:
R-student residuals are defined as

$$t_i = \frac{\hat{e}_{(-i)}}{S_{(i)}\sqrt{1 - w_{ii}}}, \quad i = 1, 2, \ldots, n$$

which follows a t-distribution with (n − p − 1) d.f., where

$$S_{(i)}^2 = \frac{1}{n - p - 1}\sum_{j \ne i}\left[y_j - x_j^T\hat{\beta}^{(-i)}\right]^2$$

and ê(−i) is the residual with the i-th case deleted. If |ti| > tα,(n−p−1), the corresponding observation is said to be an outlier.
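A sketch computing all three residual diagnostics for simple linear regression (p = 2 parameters; the deleted variance uses the standard updating identity):

```python
import numpy as np

def residual_diagnostics(x, y, p=2):
    """Standardized (d), studentized (r), and R-student (t) residuals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)
    e = y - (b0 + b1 * x)                                        # raw residuals
    w = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)    # leverages w_ii
    sigma2 = np.sum(e**2) / (n - p)                              # sigma-hat squared
    d = e / np.sqrt(sigma2)                                      # standardized
    r = e / np.sqrt(sigma2 * (1 - w))                            # studentized
    s2_del = ((n - p) * sigma2 - e**2 / (1 - w)) / (n - p - 1)   # S^2_(i), case i deleted
    t = e / np.sqrt(s2_del * (1 - w))                            # R-student
    return d, r, t
```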
Outlier In X-direction
— The hat matrix is H = X(XᵀX)⁻¹Xᵀ
— Using the hat matrix, Ŷ = HY and e = (I − H)Y
— The diagonal elements of the hat matrix, hii = xiᵀ(XᵀX)⁻¹xi, 0 < hii < 1, are called leverages
— Leverage is a standardized measure of the distance of the i-th observation from the center of the X-space
— The average leverage is h̄ = p/n
Outlier In X-direction

Hoaglin and Welsch (1978) suggested flagging an observation as a high leverage point if

$$h_{ii} > 2p/n$$

where p is the number of parameters in the model. Velleman and Welsch (1981) suggested flagging an observation as a high leverage point if

$$h_{ii} > 3p/n$$
Outlier In X-direction
Huber’s suggestions: Huber(1981)
suggested to break this range of hii into
three intervals. Observations having hii ≤
0.2 are said to be safe, 0.2 < hii ≤ 0.5 are
risky and hii > 0.5 should be avoided
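For simple linear regression these leverages and cutoffs can be checked directly (a sketch; the function name is illustrative):

```python
import numpy as np

def high_leverage(x, p=2, multiplier=2):
    """Leverages h_ii for simple regression and a 'multiplier * p / n' flag."""
    x = np.asarray(x, float)
    n = len(x)
    h = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)
    # multiplier=2 gives the Hoaglin-Welsch rule, multiplier=3 the Velleman-Welsch rule
    return h, h > multiplier * p / n
```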
Influential Outlier
Influence point: point A (in the illustration) has a moderately unusual x-coordinate, and its y value is unusual as well. An influence point has a noticeable impact on the model coefficients in that it pulls the regression model in its direction.
Influential Outlier
Cook’s distance:

Ù ( -i ) Ù Ù ( -i ) Ù
2 [b - b ] ( X X )[ b
T T
- b]
CDi = Ù
ps2

If CDi 2 > 1 then the i-th observation is


called influential observation.
Influential Outlier
DFFITS and DFBETAS:

Belsley, Kuh and Welsch (1980) introduced two useful measures of deletion influence.

First: how much the j-th regression coefficient changes when the i-th observation is deleted:

$$DFBETAS_{j,i} = \frac{\hat{\beta}_j - \hat{\beta}_j^{(-i)}}{\sqrt{S_{(i)}^2\,C_{jj}}}$$

where Cjj is the j-th diagonal element of (XᵀX)⁻¹.
Influential Outlier
A large value of DFBETASj,i indicates that observation i has considerable influence on the j-th regression coefficient. If |DFBETASj,i| > 2/√n, the i-th observation is said to be influential.

Second: the deletion influence of the i-th observation on the predicted or fitted value:

$$DFFITS_i = \frac{\hat{y}_i - \hat{y}_i^{(-i)}}{\sqrt{S_{(i)}^2\,h_{ii}}}$$
Influential Outlier
If $|DFFITS_i| > 2\sqrt{p/n}$, the corresponding observation is influential (see the sketch below).

— The least-squares estimator: $\hat{\beta} = (X^TX)^{-1}X^TY$
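These influence measures are exposed by statsmodels' OLS influence diagnostics; a sketch on the house price data (thresholds follow the rules above):

```python
import numpy as np
import statsmodels.api as sm

x = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700])
y = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255])

fit = sm.OLS(y, sm.add_constant(x)).fit()
infl = fit.get_influence()
n, p = len(x), 2

cooks_d = infl.cooks_distance[0]  # influential if CD_i > 1
dffits = infl.dffits[0]           # influential if |DFFITS_i| > 2*sqrt(p/n)
dfbetas = infl.dfbetas            # influential if |DFBETAS_ji| > 2/sqrt(n)

print(np.where(cooks_d > 1)[0])                          # indices flagged by Cook's D
print(np.where(np.abs(dffits) > 2 * np.sqrt(p / n))[0])  # indices flagged by DFFITS
print(np.where(np.abs(dfbetas) > 2 / np.sqrt(n))[0])     # rows flagged by DFBETAS
```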
