The document discusses simple linear regression analysis. It defines the linear regression model and how it is used to estimate the relationship between a dependent variable (y) and an independent variable (x). It explains how the least squares method is used to calculate the slope (b1) and y-intercept (b0) coefficients in order to draw the regression line that best fits the data by minimizing the sum of squared errors. It also discusses important assumptions about the error term and methods for assessing how well the linear model fits the data, including analyzing the sum of squares for errors.

The Bucharest University of Economic Studies
Bucharest Business School
Romanian - French INDE MBA Program

Prof. univ. dr. Constantin MITRUT
STATISTICS 1


LESSON 8

Simple Linear Regression
The Model

The first-order linear model:

y = β0 + β1x + ε

where
• y = dependent variable
• x = independent variable
• β0 = y-intercept
• β1 = slope of the line (rise/run)
• ε = error variable

β0 and β1 are unknown population parameters and are therefore estimated from the data.
Estimating the Coefficients

The estimates are determined by
• drawing a sample from the population of interest,
• calculating sample statistics, and
• producing a straight line that cuts into the data.

Question: what should be considered a good line?
The Least Squares (Regression) Line

A good line is one that minimizes the sum of squared differences between the points and the line.
The Least Squares (Regression) Line

Let us compare two lines through the points (1,2), (2,4), (3,1.5) and (4,3.2). The first is the line y = x; the second is the horizontal line y = 2.5.

Sum of squared differences (first line) = (2 - 1)² + (4 - 2)² + (1.5 - 3)² + (3.2 - 4)² = 7.89
Sum of squared differences (horizontal line) = (2 - 2.5)² + (4 - 2.5)² + (1.5 - 2.5)² + (3.2 - 2.5)² = 3.99

The smaller the sum of squared differences, the better the fit of the line to the data.
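The two comparisons can be checked with a short script. The helper function is illustrative, not from the slides; note that the first sum evaluates to 7.89 from the listed points and predicted values.

```python
# Sketch: compare the two candidate lines on the four sample points
# (1,2), (2,4), (3,1.5), (4,3.2) by their sum of squared differences.
points = [(1, 2), (2, 4), (3, 1.5), (4, 3.2)]

def sum_squared_differences(points, line):
    """Sum of squared vertical distances from the points to the line."""
    return sum((y - line(x)) ** 2 for x, y in points)

ssd_diagonal = sum_squared_differences(points, lambda x: x)      # the line y = x
ssd_horizontal = sum_squared_differences(points, lambda x: 2.5)  # the line y = 2.5

print(round(ssd_diagonal, 2))    # 7.89
print(round(ssd_horizontal, 2))  # 3.99
```

The horizontal line at the mean height fits these four points better than y = x.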
The Estimated Coefficients

To calculate the estimates of the line coefficients that minimize the differences between the data points and the line, use the formulas:

b1 = cov(X,Y) / s²x
b0 = ȳ - b1·x̄

The regression equation that estimates the equation of the first-order linear model is:

ŷ = b0 + b1x
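The two formulas above can be sketched directly. This is a minimal illustration on made-up data, not part of the original example.

```python
# Least-squares estimates from the slide's formulas:
# b1 = cov(X, Y) / s_x^2 and b0 = ybar - b1 * xbar.
def least_squares(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    var_x = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    b1 = cov / var_x
    b0 = ybar - b1 * xbar
    return b0, b1

# On points that lie exactly on y = 3 + 2x the fit recovers the line.
b0, b1 = least_squares([0, 1, 2, 3], [3, 5, 7, 9])
print(b0, b1)  # 3.0 2.0
```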
The Simple Linear Regression Line

Example
A car dealer wants to find the relationship between the odometer reading (independent variable x) and the selling price (dependent variable y) of used cars. A random sample of 100 cars is selected.

Car  Odometer  Price
1    37388     14636
2    44758     14122
3    45833     14016
4    30862     15590
5    31705     15568
6    34010     14718
.    .         .
The Simple Linear Regression Line

Solution
Solving by hand: calculate a number of statistics (n = 100):

x̄ = 36,009.45;  s²x = Σ(xi - x̄)² / (n - 1) = 43,528,690
ȳ = 14,822.823;  cov(X,Y) = Σ(xi - x̄)(yi - ȳ) / (n - 1) = -2,712,511

Then

b1 = cov(X,Y) / s²x = -2,712,511 / 43,528,690 = -.06232
b0 = ȳ - b1·x̄ = 14,822.82 - (-.06232)(36,009.45) = 17,067

ŷ = b0 + b1x = 17,067 - .0623x
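The hand calculation can be verified from the summary statistics alone; this short check uses only the slide's reported values of x̄, ȳ, s²x and cov(X,Y).

```python
# Verify b1 and b0 for the 100-car example from its summary statistics.
xbar, ybar = 36_009.45, 14_822.823
s2x = 43_528_690
cov_xy = -2_712_511  # negative: price falls as the odometer reading rises

b1 = cov_xy / s2x
b0 = ybar - b1 * xbar

print(round(b1, 5))  # -0.06232
print(round(b0))     # 17067
```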
The Simple Linear Regression Line

Solution - continued
Using the computer:
Tools > Data Analysis > Regression > [shade the y range and the x range] > OK
The Simple Linear Regression Line

SUMMARY OUTPUT                              ŷ = 17,067 - .0623x

Regression Statistics
Multiple R          0.8063
R Square            0.6501
Adjusted R Square   0.6466
Standard Error      303.1
Observations        100

ANOVA
            df   SS        MS        F       Significance F
Regression  1    16734111  16734111  182.11  0.0000
Residual    98   9005450   91892
Total       99   25739561

           Coefficients  Standard Error  t Stat   P-value
Intercept  17067         169             100.97   0.0000
Odometer   -0.0623       0.0046          -13.49   0.0000
Interpreting the Linear Regression Equation

[Odometer Line Fit Plot: price (roughly 13,000 to 16,000) plotted against the odometer reading; there are no data near odometer = 0.]

ŷ = 17,067 - .0623x

The intercept is b0 = $17,067. Do not interpret the intercept as the "price of cars that have not been driven": there are no data near odometer = 0.

The slope is b1 = -.0623: for each additional mile on the odometer, the price decreases by an average of $0.0623.
Error Variable: Required Conditions

The error ε is a critical part of the regression model. Four requirements involving the distribution of ε must be satisfied:
• The probability distribution of ε is normal.
• The mean of ε is zero: E(ε) = 0.
• The standard deviation of ε is σε for all values of x.
• The errors associated with different values of x are independent of one another.
The Normality of ε

The standard deviation remains constant, but the mean value changes with x:

E(y|x1) = β0 + β1x1,  E(y|x2) = β0 + β1x2,  E(y|x3) = β0 + β1x3

From the first three conditions we have: y is normally distributed with mean E(y) = β0 + β1x and a constant standard deviation σε.
Assessing the Model

The least squares method produces a regression line whether or not there is a linear relationship between x and y. Consequently, it is important to assess how well the linear model fits the data. Several methods are used to assess the model; all are based on the sum of squares for errors, SSE.
Sum of Squares for Errors

This is the sum of squared differences between the points and the regression line. It can serve as a measure of how well the line fits the data. SSE is defined by

SSE = Σ (yi - ŷi)²   (summing over i = 1, ..., n)

A shortcut formula:

SSE = (n - 1) [ s²Y - (cov(X,Y))² / s²x ]
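The shortcut formula can be checked numerically with the car example's summary statistics. The result agrees with the Excel output's residual sum of squares (9,005,450) up to the rounding already present in the reported inputs.

```python
# Shortcut SSE = (n - 1) * [s_y^2 - cov(X,Y)^2 / s_x^2] with the
# summary statistics of the car example from this chapter.
n = 100
s2y = 259_996
s2x = 43_528_690
cov_xy = -2_712_511

sse = (n - 1) * (s2y - cov_xy ** 2 / s2x)
print(sse)  # close to 9,005,450; the small gap comes from rounded inputs
```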
Standard Error of Estimate

• The mean error is equal to zero.
• If σε is small, the errors tend to be close to zero (close to the mean error), and the model fits the data well.
• Therefore, we can use σε as a measure of the suitability of using a linear model.
• An estimator of σε is the standard error of estimate:

sε = sqrt( SSE / (n - 2) )
Standard Error of Estimate, Example

Example
Calculate the standard error of estimate for Example 18.2, and describe what it tells you about the model fit.

Solution (using quantities calculated before):

s²Y = Σ(yi - ȳ)² / (n - 1) = 259,996

SSE = (n - 1)[ s²Y - (cov(X,Y))² / s²x ] = 99 [ 259,996 - (-2,712,511)² / 43,528,690 ] = 9,005,450

sε = sqrt( SSE / (n - 2) ) = sqrt( 9,005,450 / 98 ) = 303.13

It is hard to assess the model based on sε alone, even when compared with the mean value of y: sε = 303.1, ȳ = 14,823.
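The last step above is a one-line computation:

```python
import math

# Standard error of estimate for the car example: s_eps = sqrt(SSE / (n - 2)).
n = 100
sse = 9_005_450
s_eps = math.sqrt(sse / (n - 2))
print(round(s_eps, 1))  # 303.1
```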
Testing the Slope

When no linear relationship exists between two variables, the regression line should be horizontal.

[Two scatter diagrams: in the first, different inputs (x) yield different outputs (y), a linear relationship, so the slope is not equal to zero; in the second, different inputs (x) yield the same output (y), no linear relationship, so the slope is equal to zero.]
Testing the Slope

We can draw inferences about β1 from b1 by testing

H0: β1 = 0
H1: β1 ≠ 0 (or < 0, or > 0)

The test statistic is

t = (b1 - β1) / s_b1,  where  s_b1 = sε / sqrt( (n - 1) s²x )

s_b1 is the standard error of b1.
Testing the Slope, Example

Example
Test to determine whether there is enough evidence to infer that there is a linear relationship between the car auction price and the odometer reading for all three-year-old Tauruses in Example 18.2. Use α = 5%.
Testing the Slope, Example

Solving by hand
To compute t we need the values of b1 and s_b1:

b1 = -.0623
s_b1 = sε / sqrt( (n - 1) s²x ) = 303.1 / sqrt( (99)(43,528,690) ) = .00462
t = (b1 - β1) / s_b1 = (-.0623 - 0) / .00462 = -13.49

The rejection region is t > t.025 or t < -t.025 with d.f. = n - 2 = 98. Approximately, t.025 = 1.984.
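The hand calculation above can be reproduced directly from the reported statistics:

```python
import math

# t statistic for H0: beta1 = 0 in the car example,
# t = (b1 - 0) / s_b1 with s_b1 = s_eps / sqrt((n - 1) * s_x^2).
n = 100
b1 = -0.0623
s_eps = 303.1
s2x = 43_528_690

s_b1 = s_eps / math.sqrt((n - 1) * s2x)
t = b1 / s_b1
print(round(s_b1, 5))  # 0.00462
print(round(t, 2))     # -13.49
```

Since |t| = 13.49 far exceeds 1.984, H0 is rejected.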
Testing the Slope, Example

Using the computer (data and regression output):

Price  Odometer
14636  37388
14122  44758
14016  45833
15590  30862
15568  31705
14718  34010
14470  45854
15690  19057
15072  40149
14802  40237
15190  32359
14660  43533
15612  32744
15610  34470
14634  37720
14632  41350
15740  24469

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.8063
R Square            0.6501
Adjusted R Square   0.6466
Standard Error      303.1
Observations        100

ANOVA
            df   SS        MS        F       Significance F
Regression  1    16734111  16734111  182.11  0.0000
Residual    98   9005450   91892
Total       99   25739561

           Coefficients  Standard Error  t Stat   P-value
Intercept  17067         169             100.97   0.0000
Odometer   -0.0623       0.0046          -13.49   0.0000

There is overwhelming evidence to infer that the odometer reading affects the auction selling price.
Coefficient of Determination

To measure the strength of the linear relationship we use the coefficient of determination:

R² = (cov(X,Y))² / (s²x s²y),  or equivalently  R² = 1 - SSE / Σ(yi - ȳ)²
Coefficient of Determination

To understand the significance of this coefficient, note that the overall variability in y is explained in part by the regression model, while part of it remains unexplained (the error).
Coefficient of Determination

Two data points (x1,y1) and (x2,y2) of a certain sample are shown.

Total variation in y = variation explained by the regression line + unexplained variation (error):

(y1 - ȳ)² + (y2 - ȳ)² = (ŷ1 - ȳ)² + (ŷ2 - ȳ)² + (y1 - ŷ1)² + (y2 - ŷ2)²

That is, variation in y = SSR + SSE.
Coefficient of Determination

R² measures the proportion of the variation in y that is explained by the variation in x:

R² = 1 - SSE / Σ(yi - ȳ)² = [ Σ(yi - ȳ)² - SSE ] / Σ(yi - ȳ)² = SSR / Σ(yi - ȳ)²

R² takes on any value between zero and one:
• R² = 1: perfect match between the line and the data points.
• R² = 0: there is no linear relationship between x and y.
Coefficient of Determination, Example

Example
Find the coefficient of determination; what does this statistic tell you about the model?

Solution, solving by hand:

R² = (cov(x,y))² / (s²x s²y) = (-2,712,511)² / [ (43,528,690)(259,996) ] = .6501
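R² can be computed both ways shown in this chapter, from the correlation form and from the SSE form, and the two agree:

```python
# R^2 two ways for the car example: [cov(X,Y)]^2 / (s_x^2 * s_y^2)
# and 1 - SSE / ((n - 1) * s_y^2), using the slide's summary statistics.
n = 100
s2x, s2y = 43_528_690, 259_996
cov_xy = -2_712_511
sse = 9_005_450

r2_corr = cov_xy ** 2 / (s2x * s2y)
r2_sse = 1 - sse / ((n - 1) * s2y)
print(round(r2_corr, 4))  # 0.6501
print(round(r2_sse, 4))   # 0.6501
```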
Coefficient of Determination

Using the computer: from the regression output shown earlier, R Square = 0.6501.

65% of the variation in the auction selling price is explained by the variation in the odometer reading. The rest (35%) remains unexplained by this model.
Finance Application: Market Model

One of the most important applications of linear regression is the market model. It is assumed that the rate of return on a stock (R) is linearly related to the rate of return on the overall market (Rm):

R = β0 + β1Rm + ε

where R is the rate of return on a particular stock and Rm is the rate of return on some major stock index. The beta coefficient (β1) measures how sensitive the stock's rate of return is to changes in the level of the overall market.
The Market Model, Example

Example
Estimate the market model for Nortel, a stock traded on the Toronto Stock Exchange (TSE). Data consisted of the monthly percentage return for Nortel and the monthly percentage return for all the stocks (the TSE index) over 60 months.

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.5601
R Square            0.3137
Adjusted R Square   0.3019
Standard Error      0.0631
Observations        60

ANOVA
            df   SS        MS        F      Significance F
Regression  1    0.10563   0.10563   26.51  0.0000
Residual    58   0.231105  0.003985
Total       59   0.336734

           Coefficients  Standard Error  t Stat  P-value
Intercept  0.0128        0.0082          1.56    0.1245
TSE        0.8877        0.1724          5.15    0.0000

The slope is a measure of the stock's market-related risk: in this sample, for each 1% increase in the TSE return, the average increase in Nortel's return is .8877%.

R Square is a measure of the total market-related risk embedded in the Nortel stock: specifically, 31.37% of the variation in Nortel's returns is explained by the variation in the TSE's returns.
Using the Regression Equation

• Before using the regression model, we need to assess how well it fits the data.
• If we are satisfied with how well the model fits the data, we can use it to predict the values of y.
• To make a prediction we use either a point prediction or an interval prediction.
Point Prediction

Example
Predict the selling price of a three-year-old Taurus with 40,000 miles on the odometer:

ŷ = 17,067 - .0623x = 17,067 - .0623(40,000) = 14,575

It is predicted that a 40,000-mile car would sell for $14,575. How close is this prediction to the real price?
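A point prediction is just the fitted line evaluated at the given odometer reading; the function name below is illustrative.

```python
# Point prediction with the fitted line yhat = 17,067 - 0.0623 x.
def predict_price(odometer_miles):
    return 17_067 - 0.0623 * odometer_miles

print(round(predict_price(40_000)))  # 14575
```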
Interval Estimates

Two intervals can be used to discover how closely the predicted value will match the true value of y:
• the prediction interval predicts y for a given value of x,
• the confidence interval estimates the average y for a given x.

The prediction interval:

ŷ ± t(α/2) sε sqrt( 1 + 1/n + (xg - x̄)² / ((n - 1) s²x) )

The confidence interval:

ŷ ± t(α/2) sε sqrt( 1/n + (xg - x̄)² / ((n - 1) s²x) )
Interval Estimates, Example

Example - continued
Provide an interval estimate for the bidding price on a Ford Taurus with 40,000 miles on the odometer. Two types of predictions are required:
• a prediction for a specific car,
• an estimate for the average price per car.
Interval Estimates, Example

Solution
A prediction interval provides the price estimate for a single car, using t.025,98 ≈ 1.984:

[17,067 - .0623(40,000)] ± 1.984(303.1) sqrt( 1 + 1/100 + (40,000 - 36,009)² / ((100 - 1)(43,528,690)) ) = 14,575 ± 605
Interval Estimates, Example

Solution - continued
A confidence interval (95%) provides the estimate of the mean price per car for a Ford Taurus with a 40,000-mile reading on the odometer:

[17,067 - .0623(40,000)] ± 1.984(303.1) sqrt( 1/100 + (40,000 - 36,009)² / ((100 - 1)(43,528,690)) ) = 14,575 ± 70
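Both intervals share the same leverage term and differ only by the extra "1" under the square root; this sketch reproduces the two half-widths from the slides' inputs.

```python
import math

# 95% prediction and confidence intervals at x_g = 40,000,
# using t_{.025,98} ~ 1.984 as on the slides.
n = 100
xbar, s2x = 36_009.45, 43_528_690
s_eps, t_crit = 303.1, 1.984
x_g = 40_000
yhat = 17_067 - 0.0623 * x_g

leverage = 1 / n + (x_g - xbar) ** 2 / ((n - 1) * s2x)
half_pred = t_crit * s_eps * math.sqrt(1 + leverage)  # single car
half_conf = t_crit * s_eps * math.sqrt(leverage)      # mean price per car

print(round(yhat))       # 14575
print(round(half_pred))  # 605
print(round(half_conf))  # 70
```

The prediction interval is much wider because it must cover the variability of a single observation, not just the uncertainty in the mean.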
The Effect of the Given xg on the Length of the Interval

As xg moves away from x̄ the interval becomes longer; the shortest interval is found at xg = x̄. For ŷ = b0 + b1xg,

ŷ ± t(α/2) sε sqrt( 1/n + (xg - x̄)² / ((n - 1) s²x) )

For example, at xg = x̄ ± 1 the term (xg - x̄)² equals 1², and at xg = x̄ ± 2 it equals 2², so the interval widens symmetrically as xg moves away from x̄ in either direction.
Coefficient of Correlation

The coefficient of correlation is used to measure the strength of association between two variables. The coefficient values range between -1 and 1.
• If r = -1 (negative association) or r = +1 (positive association), every point falls on the regression line.
• If r = 0, there is no linear pattern.
Testing the Coefficient of Correlation

To test the coefficient of correlation for a linear relationship between X and Y:
• X and Y must be observational, and
• X and Y must be bivariate normally distributed.
Testing the Coefficient of Correlation

When no linear relationship exists between the two variables, ρ = 0. The hypotheses are:

H0: ρ = 0
H1: ρ ≠ 0

The test statistic is

t = r sqrt( (n - 2) / (1 - r²) )

where r = cov(x,y) / (sx sy) is the sample coefficient of correlation. The statistic is Student t distributed with d.f. = n - 2, provided the variables are bivariate normally distributed.
Testing the Coefficient of Correlation

Example: Foreign Index Funds (Index)
• A certain investor prefers to invest in index mutual funds constructed by buying a wide assortment of stocks.
• The investor decides to avoid investing in a Japanese index fund if it is strongly correlated with an American index fund that he owns.
• From the data shown in Index.xls, should he avoid the investment in the Japanese index fund?
Correlation, Example

Solution
• Problem objective: analyze the relationship between two interval variables.
• The two variables are observational (the return for each fund was not controlled).
• We are interested in whether there is a linear relationship between the two variables; thus, we need to test the coefficient of correlation.
Correlation, Example

Solution - continued
The hypotheses:

H0: ρ = 0
H1: ρ ≠ 0

Solving by hand:
• The rejection region: |t| > t(α/2, n-2) = t.025,59-2 ≈ 2.000.
• The sample coefficient of correlation: cov(x,y) = .001279; sx = .0509; sy = .0512; r = cov(x,y)/(sx sy) = .491.
• The value of the t statistic: t = r sqrt( (n - 2) / (1 - r²) ) = 4.26.

Conclusion: there is sufficient evidence at α = 5% to infer that there is a linear relationship between the two variables.
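The t statistic for the correlation test is a one-line computation from the sample correlation:

```python
import math

# t statistic for testing rho = 0 with the slide's sample correlation,
# t = r * sqrt((n - 2) / (1 - r^2)), with d.f. = n - 2 = 57.
n = 59
r = 0.491
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(round(t, 2))  # 4.26
```

Since 4.26 > 2.000, the null hypothesis of no linear relationship is rejected.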
Correlation, Example

Excel solution (correlation matrix):

                US Index  Japanese Index
US Index        1
Japanese Index  0.4911    1
Spearman Rank Correlation Coefficient

The Spearman rank test is a nonparametric procedure. It is used to test the relationship between two variables when the bivariate distribution is nonnormal. A bivariate nonnormal distribution may occur when
• at least one variable is ordinal, or
• both variables are interval but at least one of them is not normal.
Spearman Rank Correlation Coefficient

The hypotheses are:

H0: ρs = 0
H1: ρs ≠ 0

The test statistic is

rs = cov(a,b) / (sa sb)

where a and b are the ranks of x and y, respectively. For a large sample (n > 30), rs is approximately normally distributed, and

z = rs sqrt(n - 1)
Spearman Rank Correlation Coefficient, Example

Example
A production manager wants to examine the relationship between:
• the aptitude test score given prior to hiring, and
• the performance rating three months after starting work.
A random sample of 20 production workers was selected. The test scores as well as the performance ratings were recorded.
Spearman Rank Correlation Coefficient, Example

Employee  Aptitude test  Performance rating
1         59             3
2         47             2
3         58             4
4         66             3
5         77             2
.         .              .

(Test scores range from 0 to 100; performance ratings range from 1 to 5.)
Spearman Rank Correlation Coefficient, Example

Solution
• The problem objective is to analyze the relationship between two variables. (Note: performance rating is ordinal.)
• The hypotheses are H0: ρs = 0 and H1: ρs ≠ 0.
• The test statistic is rs, and the rejection region is |rs| > rs,critical (taken from the Spearman rank correlation table).
Spearman Rank Correlation Coefficient, Example

Employee  Aptitude test  Rank(a)  Performance rating  Rank(b)
1         59             9        3                   10.5
2         47             3        2                   3.5
3         58             8        4                   17
4         66             14       3                   10.5
5         77             20       2                   3.5
.         .              .        .                   .

Ties are broken by averaging the ranks.

Solving by hand:
• Rank each variable separately.
• Calculate sa = 5.92; sb = 5.50; cov(a,b) = 12.34.
• Thus rs = cov(a,b) / (sa sb) = .379.
• The critical value for α = .05 and n = 20 is .450.
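The ranking-with-averaged-ties step and the rs formula can be sketched as below; the small data sets at the bottom are made up for illustration, not the slide's 20-worker sample.

```python
# Spearman rank correlation: rank each variable (averaging tied ranks),
# then compute cov(a, b) / (s_a * s_b) on the ranks.
def average_ranks(values):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # extend j over the run of tied values
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    a, b = average_ranks(xs), average_ranks(ys)
    n = len(a)
    abar, bbar = sum(a) / n, sum(b) / n
    cov = sum((ai - abar) * (bi - bbar) for ai, bi in zip(a, b)) / (n - 1)
    sa = (sum((ai - abar) ** 2 for ai in a) / (n - 1)) ** 0.5
    sb = (sum((bi - bbar) ** 2 for bi in b) / (n - 1)) ** 0.5
    return cov / (sa * sb)

print(round(spearman([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]), 6))  # 1.0
print(round(spearman([1, 2, 3], [9, 4, 1]), 6))               # -1.0
```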
Spearman Rank Correlation Coefficient, Example

Conclusion: do not reject the null hypothesis. At the 5% significance level there is insufficient evidence to infer that the two variables are related to one another.
Spearman Rank Correlation Coefficient, Example

Excel solution:

Spearman Rank Correlation - Aptitude and Performance
Spearman Rank Correlation  0.3792
z Stat                     1.65
P(Z<=z) one tail           0.0492
z Critical one tail        1.6449
P(Z<=z) two tail           0.0984  (> 0.05)
z Critical two tail        1.96
Regression Diagnostics - I

The three conditions required for the validity of the regression analysis are:
• the error variable is normally distributed,
• the error variance is constant for all values of x, and
• the errors are independent of each other.
How can we diagnose violations of these conditions?
Residual Analysis

Examining the residuals (or the standardized residuals) helps detect violations of the required conditions.

Example - continued: nonnormality
• Use Excel to obtain the standardized residual histogram.
• Examine the histogram and look for a bell-shaped diagram with a mean close to zero.
Residual Analysis

A partial list of standard residuals:

Observation  Predicted Price  Residuals  Standard Residuals
1            14736.91         -100.91    -0.33
2            14277.65         -155.65    -0.52
3            14210.66         -194.66    -0.65
4            15143.59         446.41     1.48
5            15091.05         476.95     1.58

For each residual we calculate the standard deviation as follows:

s_ri = sε sqrt(1 - hi),  where  hi = 1/n + (xi - x̄)² / ((n - 1) s²x)

Standardized residual i = residual i / standard deviation s_ri
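The leverage and standardization formulas above can be checked against the first row of the table, using the car example's summary statistics:

```python
import math

# Standardized residual sketch: h_i = 1/n + (x_i - xbar)^2 / ((n-1) s_x^2),
# s_ri = s_eps * sqrt(1 - h_i), standardized residual = residual / s_ri.
# x_i and the raw residual are taken from observation 1 (odometer 37,388).
n = 100
xbar, s2x = 36_009.45, 43_528_690
s_eps = 303.1

x_i, residual = 37_388, -100.91
h_i = 1 / n + (x_i - xbar) ** 2 / ((n - 1) * s2x)
s_ri = s_eps * math.sqrt(1 - h_i)
print(round(residual / s_ri, 2))  # -0.33
```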
Residual Analysis

[Histogram of the standardized residuals.]

It seems the residuals are normally distributed with mean zero.
Heteroscedasticity

When the requirement of a constant variance is violated, we have a condition of heteroscedasticity. Diagnose heteroscedasticity by plotting the residuals against the predicted values ŷ.

[Residual plot in which the spread of the residuals increases with ŷ.]
Homoscedasticity

When the requirement of a constant variance is not violated, we have a condition of homoscedasticity.

Example 10.2 - continued
[Plot of the residuals (roughly -1000 to 1000) against the predicted price (13,500 to 16,000), showing no change in spread.]
Non-Independence of the Error Variable

• A time series is constituted when data are collected over time.
• Examining the residuals over time, no pattern should be observed if the errors are independent.
• When a pattern is detected, the errors are said to be autocorrelated.
• Autocorrelation can be detected by graphing the residuals against time.
Non-Independence of the Error Variable

Patterns in the appearance of the residuals over time indicate that autocorrelation exists.

[Two residual-versus-time plots: in the first, note the runs of positive residuals replaced by runs of negative residuals; in the second, note the oscillating behavior of the residuals around zero.]
Outliers

An outlier is an observation that is unusually small or large. Several possibilities need to be investigated when an outlier is observed:
• There was an error in recording the value.
• The point does not belong in the sample.
• The observation is valid.
Identify outliers from the scatter diagram. It is customary to suspect that an observation is an outlier if its |standard residual| > 2.
Outliers and Influential Observations

[Two scatter diagrams: in the first, an outlier sits far from the cloud of points; in the second, an influential observation pulls the fitted line toward itself.]

Some outliers may be very influential: the outlier causes a shift in the regression line.
Procedure for Regression Diagnostics

• Develop a model that has a theoretical basis.
• Gather data for the two variables in the model.
• Draw the scatter diagram to determine whether a linear model appears to be appropriate.
• Determine the regression equation.
• Check the required conditions for the errors.
• Check for the existence of outliers and influential observations.
• Assess the model fit.
• If the model fits the data, use the regression equation.
