
Topic 3.

SIMPLE REGRESSION MODEL (I)

• SIMPLE REGRESSION MODEL
• ORDINARY LEAST SQUARES REGRESSION
• DERIVING THE LINEAR REGRESSION MODEL COEFFICIENTS
• EViews Introduction

1
Historical origin of the term "Regression":

The term was introduced by Francis Galton (1886). In a famous paper, Galton found that, although there was a tendency for tall parents to have tall children and for short parents to have short children, the average height of children born of parents of a given height tended to move, or "regress", toward the average height in the population as a whole. In other words, the average height of sons of a group of tall fathers was less than their fathers' height, and the average height of sons of a group of short fathers was greater than their fathers' height.

2
The modern interpretation of REGRESSION:

Regression analysis is concerned with the study of the dependence of one variable, the dependent variable, on one or more other variables, the explanatory variables, with a view to estimating and/or predicting the (population) mean or average value of the former in terms of the known or fixed (in repeated sampling) values of the latter.

3
4

Deposit Rate and Changes in Consumer Prices in Japan (1987:Q1–2004:Q1)
[Figure: time-series plot of the two series]

5
SIMPLE REGRESSION MODEL

Y = β1 + β2X

[Figure: the line Y = β1 + β2X, with intercept β1 marked and values X1, X2, X3, X4 on the horizontal axis]

Suppose that a variable Y is a linear function of another variable X, with unknown parameters β1 and β2 that we wish to estimate. Therefore, as a first approximation, we may write the function as Y = β1 + β2X.

6
SIMPLE REGRESSION MODEL

Y = β1 + β2X

[Figure: points Q1, Q2, Q3, Q4 lying exactly on the line above X1, X2, X3, X4]

If the relationship were an exact one, the observations would lie on a straight line and we would have no trouble obtaining accurate estimates of β1 and β2.

7
SIMPLE REGRESSION MODEL

[Figure: actual observations P1, P2, P3, P4 lying off the line; Q1, Q2, Q3, Q4 mark the corresponding points on the line]

In practice, most economic relationships are not exact, and the actual values of Y are different from those corresponding to the straight line.

8
SIMPLE REGRESSION MODEL

Y = β1 + β2X + u

[Figure: observations P1–P4 diverging from the line Y = β1 + β2X; Q1–Q4 mark the corresponding points on the line]

To allow for such divergences, we will write the model as Y = β1 + β2X + u, where u is a disturbance term. The disturbance term is the deviation of an individual Yi from its expected value. It is an unobservable random variable taking positive or negative values.

9
SIMPLE REGRESSION MODEL

• Why does the disturbance term u exist?

1. Vagueness of theory;
2. Unavailability of data;
3. Core variables versus peripheral variables (principle of parsimony);
4. Intrinsic randomness in human behaviour;
5. Errors of measurement;
6. Model misspecification.

10
SIMPLE REGRESSION MODEL

• Some basic assumptions on u in the SRM

– The expected value of the disturbance term u in the population is 0; that is,
E(u) = 0.
This assumption means that the factors subsumed in ui do not systematically affect the mean value of Y; so to speak, the positive ui values cancel out the negative ui values, so that their average or mean effect on Y is zero.

– Var(u) = σ², one single constant variance.

– Cov(us, ut) = 0 for s ≠ t; that is to say, the disturbances are uncorrelated with one another.

11
The Population Linear Regression Model

Yi = β1 + β2Xi + ui,  i = 1, 2, ..., n

X = independent/exogenous/explanatory variable, or regressor
Y = dependent or endogenous variable
β1 = intercept
β2 = slope
ui = disturbance term

12
SIMPLE REGRESSION MODEL

• The meaning of the term "linear":

1. Linearity in the variables
2. Linearity in the parameters

For regression analysis, it is linearity in the parameters that matters: for example, Y = β1 + β2X² + u is nonlinear in the variable X but linear in the parameters β1 and β2, and can still be estimated by OLS.

13
SIMPLE REGRESSION MODEL

Y = β1 + β2X + u

[Figure: the first observation P1 split into the height of Q1 on the line, β1 + β2X1, and the vertical distance u1 from Q1 up to P1]

Each value of Y thus has a non-random component, β1 + β2X, and a random component, u. The first observation has been decomposed into these two components.

14
SIMPLE REGRESSION MODEL

[Figure: only the observed points P1, P2, P3, P4]

In practice we can see only the P points.

15
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X

[Figure: a line with intercept b1 drawn through the P points]

Obviously, we can use the P points to draw a line that approximates the true relationship Y = β1 + β2X + u. If we write this fitted line as Ŷ = b1 + b2X, then b1 is an estimate of β1 and b2 is an estimate of β2.

16
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X,  where Y is the actual value and Ŷ the fitted value

[Figure: the fitted line through the P points, with R1, R2, R3, R4 marking the fitted values on the line]

The line is called the fitted model or sample regression model, and the values of Y predicted by it are called the fitted values of Y. They are given by the heights of the R points.

17
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X
e = Y − Ŷ  (residual)

[Figure: residuals e1, e2, e3, e4 shown as the vertical distances between the actual values (P points) and the fitted values (R points)]

The discrepancies between the actual and fitted values of Y are known as the residuals, e.

18
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X
Y = β1 + β2X + u

[Figure: both the fitted line (intercept b1) and the true line (intercept β1), together with the observations P1–P4 and fitted values R1–R4]

Note that the values of the residuals are not the same as the values of the disturbance term. The diagram now shows the true unknown relationship as well as the fitted line.

19
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X
Y = β1 + β2X + u

[Figure: the true line with Q1–Q4 on it; the disturbances are the vertical distances from the Q points to the observations P1–P4]

The disturbance term in each observation is responsible for the divergence between the non-random component of the true relationship and the actual observation.

20
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X
Y = β1 + β2X + u

[Figure: the fitted line with R1–R4 on it; the residuals are the vertical distances from the R points to the observations P1–P4]

Meanwhile, the residuals are the discrepancies between the actual and the fitted values.

21
SIMPLE REGRESSION MODEL

Ŷ = b1 + b2X
Y = β1 + β2X

[Figure: the true line and the fitted line drawn together, nearly coinciding]

If the fit is a good one, the residuals and the values of the disturbance term will be similar, but they must be kept apart conceptually.

22
SIMPLE REGRESSION MODEL

Ordinary Least Squares (OLS):

Minimize RSS (the residual sum of squares), where

RSS = Σ ei² = e1² + … + en²   (summing over i = 1, …, n)

We will draw the fitted line so as to minimize the sum of the squares of the residuals, RSS. This is described as the least squares criterion.

23
For the PRM Y = β1 + β2X + u,

RSS = Σ [Yi − (b1 + b2Xi)]²

Minimizing RSS yields the OLS estimators of β1 and β2:

β̂2 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)²

β̂1 = Ȳ − β̂2X̄

We will return to these later.

24
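These two formulas translate directly into code. Below is a minimal sketch (not part of the original slides) that computes the estimates with NumPy; the function name ols_fit is my own, and the data are the three-observation example used later in this sequence.

```python
import numpy as np

def ols_fit(x, y):
    """Return (b1, b2) for the fitted line Y-hat = b1 + b2*X, using
    b2 = sum((Xi - Xbar)(Yi - Ybar)) / sum((Xi - Xbar)^2) and
    b1 = Ybar - b2 * Xbar."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x_bar, y_bar = x.mean(), y.mean()
    b2 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    b1 = y_bar - b2 * x_bar
    return b1, b2

# Three-observation example used below: (1,3), (2,5), (3,6)
b1, b2 = ols_fit([1, 2, 3], [3, 5, 6])
print(b1, b2)  # approximately 1.667 and 1.5
```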
SIMPLE REGRESSION MODEL

Least squares criterion:

Minimize RSS (the residual sum of squares), where

RSS = Σ ei² = e1² + … + en²

Why the squares of the residuals? Why not just minimize the sum of the residuals,

Σ ei = e1 + … + en ?

25
SIMPLE REGRESSION MODEL

[Figure: a horizontal line at height Ȳ drawn through the scatter of points P1–P4]

The answer is that you would get a seemingly perfect fit by drawing a horizontal line through the mean value of Y. The sum of the residuals would be zero.

26
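A quick numerical illustration of this point (my own, not in the slides): for any sample, the residuals about a horizontal line at Ȳ sum to exactly zero, even though the fit itself can be poor.

```python
import numpy as np

y = np.array([3.0, 5.0, 6.0])   # any sample of Y values
residuals = y - y.mean()        # residuals about the horizontal line at Y-bar
print(residuals.sum())          # 0.0 (up to floating-point rounding)
print(np.sum(residuals ** 2))   # yet RSS is far from zero: about 4.667
```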
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: scatter of three observations Y1, Y2, Y3 at X = 1, 2, 3]

This sequence shows how the regression coefficients for a simple regression model are derived, using the least squares criterion (OLS, for ordinary least squares).

27
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the same scatter diagram]

We will start with a numerical example with just three observations: (1,3), (2,5), and (3,6).

28
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: a candidate fitted line with intercept b1 and slope b2; fitted values Ŷ1 = b1 + b2, Ŷ2 = b1 + 2b2, Ŷ3 = b1 + 3b2]

Writing the fitted regression as Ŷ = b1 + b2X, we will determine the values of b1 and b2 that minimize RSS, the sum of the squares of the residuals.

29
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the candidate fitted line with the three residuals marked]

e1 = Y1 − Ŷ1 = 3 − b1 − b2
e2 = Y2 − Ŷ2 = 5 − b1 − 2b2
e3 = Y3 − Ŷ3 = 6 − b1 − 3b2

Given our choice of b1 and b2, the residuals are as shown.

30
SIMPLE REGRESSION ANALYSIS

RSS = e1² + e2² + e3²
    = (3 − b1 − b2)² + (5 − b1 − 2b2)² + (6 − b1 − 3b2)²
    = (9 + b1² + b2² − 6b1 − 6b2 + 2b1b2)
      + (25 + b1² + 4b2² − 10b1 − 20b2 + 4b1b2)
      + (36 + b1² + 9b2² − 12b1 − 36b2 + 6b1b2)
    = 70 + 3b1² + 14b2² − 28b1 − 62b2 + 12b1b2

∂RSS/∂b1 = 0  ⟹  6b1 + 12b2 − 28 = 0
∂RSS/∂b2 = 0  ⟹  12b1 + 28b2 − 62 = 0

For a minimum, the partial derivatives of RSS with respect to b1 and b2 should be zero.

31
SIMPLE REGRESSION ANALYSIS

RSS = (3 − b1 − b2)² + (5 − b1 − 2b2)² + (6 − b1 − 3b2)²
    = 70 + 3b1² + 14b2² − 28b1 − 62b2 + 12b1b2

∂RSS/∂b1 = 0  ⟹  6b1 + 12b2 − 28 = 0
∂RSS/∂b2 = 0  ⟹  12b1 + 28b2 − 62 = 0

⟹  b1 = 1.67, b2 = 1.50

Solving them, we find that RSS is minimized when b1 and b2 are equal to 1.67 and 1.50, respectively.

32
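The same minimization can be reproduced symbolically. The following sketch (mine, using SymPy; not from the slides) expands RSS for the three observations, sets the two partial derivatives to zero, and recovers b1 = 5/3 ≈ 1.67 and b2 = 3/2 = 1.50.

```python
import sympy as sp

b1, b2 = sp.symbols('b1 b2')

# RSS for the observations (1,3), (2,5), (3,6)
rss = (3 - b1 - b2)**2 + (5 - b1 - 2*b2)**2 + (6 - b1 - 3*b2)**2

# First-order conditions: both partial derivatives equal zero
solution = sp.solve([sp.diff(rss, b1), sp.diff(rss, b2)], [b1, b2])

print(sp.expand(rss))  # 3*b1**2 + 12*b1*b2 - 28*b1 + 14*b2**2 - 62*b2 + 70
print(solution)        # {b1: 5/3, b2: 3/2}
```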
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the scatter diagram with the candidate fitted line]

Here is the scatter diagram again.

33
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = 1.67 + 1.50X

[Figure: the fitted line with intercept 1.67 and slope 1.50; fitted values Ŷ1 = 3.17, Ŷ2 = 4.67, Ŷ3 = 6.17]

The fitted line and the fitted values of Y are as shown.

34
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: a general scatter of n observations Y1, …, Yn at X1, …, Xn]

Now we will do the same thing for the general case with n observations.

35
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the fitted line through the n observations, with the first residual e1 marked]

e1 = Y1 − Ŷ1 = Y1 − b1 − b2X1
…
en = Yn − Ŷn = Yn − b1 − b2Xn

The residual for the first observation is defined.

36
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the same diagram, with the last residual en also marked]

Similarly we define the residuals for the remaining observations. That for the last one is marked.

37
DERIVING LINEAR REGRESSION COEFFICIENTS

For the three-observation example:
RSS = (3 − b1 − b2)² + (5 − b1 − 2b2)² + (6 − b1 − 3b2)²
    = 70 + 3b1² + 14b2² − 28b1 − 62b2 + 12b1b2

In the general case:
RSS = e1² + … + en² = (Y1 − b1 − b2X1)² + … + (Yn − b1 − b2Xn)²
    = (Y1² + b1² + b2²X1² − 2b1Y1 − 2b2X1Y1 + 2b1b2X1)
      + …
      + (Yn² + b1² + b2²Xn² − 2b1Yn − 2b2XnYn + 2b1b2Xn)
    = ΣYi² + nb1² + b2²ΣXi² − 2b1ΣYi − 2b2ΣXiYi + 2b1b2ΣXi

Like terms are added together.

38
DERIVING LINEAR REGRESSION COEFFICIENTS

For the example:
RSS = 70 + 3b1² + 14b2² − 28b1 − 62b2 + 12b1b2
∂RSS/∂b1 = 0  ⟹  6b1 + 12b2 − 28 = 0
∂RSS/∂b2 = 0  ⟹  12b1 + 28b2 − 62 = 0    ⟹  b1 = 1.67, b2 = 1.50

In the general case:
RSS = ΣYi² + nb1² + b2²ΣXi² − 2b1ΣYi − 2b2ΣXiYi + 2b1b2ΣXi

∂RSS/∂b1 = 0  ⟹  2nb1 − 2ΣYi + 2b2ΣXi = 0
              ⟹  nb1 = ΣYi − b2ΣXi  ⟹  b1 = Ȳ − b2X̄

∂RSS/∂b2 = 0  ⟹  2b2ΣXi² − 2ΣXiYi + 2b1ΣXi = 0

Setting the first derivative with respect to b1 to zero already gives b1 = Ȳ − b2X̄; next we work on the first derivative with respect to b2.

39
SIMPLE REGRESSION ANALYSIS

∂RSS/∂b2 = 0  ⟹  2b2ΣXi² − 2ΣXiYi + 2b1ΣXi = 0

b2ΣXi² − ΣXiYi + b1ΣXi = 0

Substituting b1 = Ȳ − b2X̄:
b2ΣXi² − ΣXiYi + (Ȳ − b2X̄)ΣXi = 0
b2ΣXi² − ΣXiYi + (Ȳ − b2X̄)nX̄ = 0

since X̄ = (1/n)ΣXi implies ΣXi = nX̄.

The definition of the sample mean has been used.

40
SIMPLE REGRESSION ANALYSIS

b2ΣXi² − ΣXiYi + (Ȳ − b2X̄)nX̄ = 0

b2(ΣXi² − nX̄²) = ΣXiYi − nX̄Ȳ

b2 [(1/n)ΣXi² − X̄²] = (1/n)ΣXiYi − X̄Ȳ

Terms not involving b2 have been transferred to the right side, and the equation has been divided through by n.

41
SIMPLE REGRESSION ANALYSIS

b2 [(1/n)ΣXi² − X̄²] = (1/n)ΣXiYi − X̄Ȳ

b2 Var(X) = Cov(X, Y)

b2 = Cov(X, Y) / Var(X)

Hence we obtain a tidy expression for b2.

42
DERIVING LINEAR REGRESSION COEFFICIENTS

True model: Y = β1 + β2X + u
Fitted line: Ŷ = b1 + b2X

[Figure: the fitted line through the n observations]

b1 = Ȳ − b2X̄
b2 = Cov(X, Y) / Var(X)

We chose the parameters of the fitted line so as to minimize the sum of the squares of the residuals. As a result, we derived the expressions for b1 and b2.

43
SIMPLE REGRESSION ANALYSIS

Alternative expressions for b2:

b2 = Cov(X, Y) / Var(X)

b2 = [(1/n)Σ(Xi − X̄)(Yi − Ȳ)] / [(1/n)Σ(Xi − X̄)²] = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)²

b2 = [(1/n)ΣXiYi − X̄Ȳ] / [(1/n)ΣXi² − X̄²] = (ΣXiYi − nX̄Ȳ) / (ΣXi² − nX̄²)

A further variation is obtained by using the alternative expressions for sample covariance and variance derived in an earlier sequence.

44
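The three expressions are algebraically identical, which is easy to confirm numerically. A small check of my own (not in the slides), using the three-observation example:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 6.0])
n, x_bar, y_bar = len(x), x.mean(), y.mean()

# Cov(X,Y)/Var(X) form
b2_cov = ((x * y).mean() - x_bar * y_bar) / ((x**2).mean() - x_bar**2)
# Deviation-from-mean form
b2_dev = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar)**2)
# Raw-sum form
b2_sum = (np.sum(x * y) - n * x_bar * y_bar) / (np.sum(x**2) - n * x_bar**2)

print(b2_cov, b2_dev, b2_sum)  # all equal 1.5
```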
INTERPRETATION OF A REGRESSION EQUATION

[Figure: scatter diagram of hourly earnings ($, vertical axis) against years of schooling (horizontal axis, 0–20)]

The scatter diagram shows hourly earnings in 1994 plotted against highest grade completed for a sample of 570 respondents from the National Longitudinal Survey of Youth.

45
INTERPRETATION OF A REGRESSION EQUATION

[Figure: the same scatter diagram]

Highest grade completed means just that for elementary and high school. Grades 13, 14, and 15 mean completion of one, two, and three years of college. Grade 16 means completion of four-year college. Higher grades indicate years of postgraduate education.

46
INTERPRETATION OF A REGRESSION EQUATION

Dependent Variable: EARNINGS
Method: Least Squares
Date: 03/09/01   Time: 09:50
Sample: 1 570
Included observations: 570

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           -1.391004     1.820305     -0.764160     0.4451
S            1.073055     0.132450      8.101575     0.0000

R-squared            0.103586    Mean dependent var      13.11782
Adjusted R-squared   0.102007    S.D. dependent var       8.214719
S.E. of regression   7.784471    Akaike info criterion    6.945641
Sum squared resid    34419.66    Schwarz criterion        6.960889
Log likelihood      -1977.508    F-statistic             65.63552
Durbin-Watson stat   1.934783    Prob(F-statistic)        0.000000

This is the output from a regression of earnings on highest grade completed, using EViews. For the time being, we will be concerned only with the estimates of the parameters.

47
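The same regression can be reproduced outside EViews. The sketch below is my own, assuming the NLSY extract is available as a CSV with columns EARNINGS and S; the file name earnings.csv is hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("earnings.csv")     # hypothetical file with EARNINGS and S columns
X = sm.add_constant(df["S"])         # adds the intercept term (C in the EViews output)
model = sm.OLS(df["EARNINGS"], X).fit()
print(model.summary())               # coefficients should match the table above:
                                     # const about -1.391, S about 1.073
```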
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

Here is the scatter diagram again, with the regression line shown.

48
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

What do the coefficients actually mean?

49
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

S is measured in years (strictly speaking, grades completed) and EARNINGS in dollars per hour, so the slope coefficient implies that hourly earnings increase by $1.07 for each extra year of schooling.

50
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with a section around S = 11–12 marked for enlargement]

We will look at a geometrical representation of this interpretation. To do this, we will enlarge the marked section of the scatter diagram.

51
INTERPRETATION OF A REGRESSION EQUATION

[Figure: enlarged section of the regression line between S = 11 and S = 12, showing fitted hourly earnings rising by $1.07 for one extra year, from $10.41 to $11.49]

The regression line indicates that completing 12th grade instead of 11th grade would increase earnings by $1.073, from $10.413 to $11.486, as a general tendency.

52
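These figures come straight from the fitted equation, as this short check (mine, not in the slides) shows, using the unrounded coefficients from the regression output:

```python
b1, b2 = -1.391004, 1.073055     # coefficients from the EViews output

fitted_11 = b1 + b2 * 11         # about 10.413 dollars per hour after 11th grade
fitted_12 = b1 + b2 * 12         # about 11.486 dollars per hour after 12th grade
print(fitted_11, fitted_12)
print(fitted_12 - fitted_11)     # the difference is exactly the slope, 1.073
```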
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

You should ask yourself whether this is a plausible figure. If it is implausible, this could be a sign that your model is misspecified in some way.

53
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

For low levels of education it might be plausible. But for high levels it would seem to be an underestimate.

54
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

What about the constant term? Literally, the constant indicates that an individual with no years of education would have to pay $1.39 per hour to be allowed to work. This does not make any sense at all.

55
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

A safe solution to the problem is to limit the interpretation to the range of the sample data, and to refuse to extrapolate on the ground that we have no evidence outside the data range.

56
INTERPRETATION OF A REGRESSION EQUATION

ÊARNINGS = −1.391 + 1.073 S

[Figure: the scatter diagram with the fitted regression line]

With this explanation, the only function of the constant term is to enable you to draw the regression line at the correct height on the scatter diagram. It has no meaning of its own.

57
INTERPRETATION OF A REGRESSION EQUATION

[Figure: the scatter diagram without the fitted line]

Another solution is to explore the possibility that the true relationship is nonlinear and that we are approximating it with a linear regression.

58
EVIEWS

What is EViews?
• EViews provides sophisticated data analysis, regression, and forecasting tools on Windows-based computers.
• The immediate predecessor of EViews was MicroTSP, which was initially developed by Robert Hall during his graduate studies at the Massachusetts Institute of Technology in the 1960s.
• With EViews you can quickly develop a statistical relation from your data and then use the relation to forecast future values of the data.
• Areas where EViews can be useful include scientific data analysis and evaluation, financial analysis, macroeconomic forecasting, simulation, sales forecasting, and cost analysis.
• EViews takes advantage of the visual features of modern Windows software. Alternatively, you may use EViews' powerful command and batch processing language.

Exercises

1. Do earnings depend on education?
EARNINGS is the hourly earnings of the respondent, in dollars, in 1994. Perform a regression of EARNINGS on S and interpret the regression results.

2. Does weight depend on height?
Perform one regression of Weight85 on Height, and one regression of Weight94 on Height, then interpret the regression results and explain the difference between those two regressions.

61
