TCMG/MEEG 573, SP20, Lecture 7


TCMG/MEEG 573: Forecasting

Cost of Tire Air

It’s against the law for gas stations in Connecticut to charge you for using their air hose to inflate your tires. The law says that air for tires has to be free.

http://www.ct.gov/dcp/cwp/view.asp?a=1629&q=428698
Metropolitan Museum of Art, NY

Admission (Recommended)
– Adults: $20
– Seniors (65 and older): $15
– Students: $10*
– Members: Free
– Children under 12 (accompanied by an adult): Free
Forecasting Models

• Subjective Models
– Delphi Methods
• Causal Models
– Regression Models
• Time Series Models
– Moving Averages
– Exponential Smoothing
Forecasting Methods
• Qualitative: primarily subjective; rely on
judgment and opinion
• Time Series: use historical demand only
– Static
– Adaptive
• Causal: use the relationship between demand and some other factor to develop a forecast
• Simulation
– Imitate real life
– Can combine time series and causal methods
Characteristics of Forecasts

• Forecasts are always wrong. They should include an expected value and a measure of error.
• Long-term forecasts are less accurate
than short-term forecasts (forecast
horizon is important)
• Aggregate forecasts are more accurate
than disaggregate forecasts
Regression
Adapted from: Data Science and Analytics with Python by Jesus Rogel-Salazar, 1st Edition, Chapman and Hall/CRC, published August 16, 2017, ISBN 9781498742092.
Introduction to Regression
• One of the most widely used tools in statistical analysis
– Ease of calculation
– Simplicity of assumptions
• Linear regression: predict a dependent variable based on an independent variable
• Typically, something that is a future event, more valuable, or harder to observe is predicted from something that is easier to observe, more available, or more affordable:
– Height of a child ← height of parents
– Ice cream sales ← weather temperature
Goal of Regression
• To predict the outcome of the variable of interest (response variable, dependent variable) given the values of the others (independent variables, predictors, explanatory variables)
• Correlation coefficient, in [-1, 1]: strength of the association
• The correlation coefficient does not imply a causal relationship (umbrella usage vs. rain)
• Beware of Sir Bedevere’s witch reasoning:
– Confounding variables: as weather temperature rises, ice cream sales rise and murders rise; however, there is no association between ice cream sales and murders
• Sir Francis Galton (first cousin of Darwin; championed fingerprints as evidence):
– Tall parents: child taller than average, but shorter than the parents
– Short parents: child shorter than average, but taller than the parents
– The height of an offspring “regresses” towards a mediocre (average) point
• Regression toward the measured mean is an inescapable fact of life: a single extreme measurement doesn’t mean much, e.g., breaking a record
Assumptions of Linear Regression

• Y = f(X) + e
• Y = b0 + b1X + e

• Linear regression assumes:
1. A linear relationship
2. Multivariate normality
3. No or little multicollinearity
4. No auto-correlation

• Multivariate linear regression:

yi = b0 + b1xi1 + b2xi2 + … + bkxik + ei
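A minimal sketch of fitting such a multivariate model, assuming numpy and scikit-learn are available; the data below are simulated for illustration and are not from the lecture:

```python
# Sketch only: fitting y = b0 + b1*x1 + b2*x2 + e with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))                 # two predictors x1, x2
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)          # estimates of b0 and (b1, b2)
```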
Simple Linear Regression

• Based on fitting a line to data
– Provides a regression coefficient, which is the slope of the line
• Y = aX + b
– Used to predict a dependent variable’s value based on the value of an independent variable.
• Very helpful: in an analysis of height and weight, for a known height, one can predict weight.
• Much more useful than correlation
– Allows prediction of the values of Y, rather than just whether there is a relationship between two variables.

http://www.stat.ncsu.edu/people/reiland/courses/st302/simple_lin_regress_inference.ppt
Introduction

• The motivation for using the technique:
– Forecast the value of a dependent variable (y) from the values of independent variables (x1, x2, …, xk).
– Analyze the specific relationships between the independent variables and the dependent variable.
• In published papers, multivariable models are more powerful than univariable models and take precedence.
The Model
The model has a deterministic and a probabilistic component.

Example: house cost. Building a house costs about $75 per square foot, and similar houses sell for $25,000, so

House cost = 25,000 + 75(Size)

[Figure: house cost vs. house size]
The Model

• The first order linear model:

y = β0 + β1x + ε

– y = dependent variable
– x = independent variable
– β0 = y-intercept
– β1 = slope of the line (rise/run)
– ε = error variable

β0 and β1 are unknown population parameters and are therefore estimated from the data.

[Figure: the line y = β0 + β1x, with intercept β0 and slope β1 = rise/run]
Estimating the Coefficients
• The estimates are determined by
– drawing a sample from the population of interest,
– calculating sample statistics,
– producing a straight line that cuts into the data.

Question: What should be considered a good line?

[Figure: scatter plot of data points with a candidate line]
The Least Squares (Regression) Line

A good line is one that minimizes the sum of squared differences between the points and the line.
The Least Squares (Regression) Line

Let us compare two lines through the points (1,2), (2,4), (3,1.5), and (4,3.2); the second line is the horizontal line y = 2.5.

Sum of squared differences (line 1) = (2 - 1)² + (4 - 2)² + (1.5 - 3)² + (3.2 - 4)² = 7.89
Sum of squared differences (line 2) = (2 - 2.5)² + (4 - 2.5)² + (1.5 - 2.5)² + (3.2 - 2.5)² = 3.99

The smaller the sum of squared differences, the better the fit of the line to the data.

[Figure: the four data points with the two candidate lines]
The Estimated Coefficients

To calculate the estimates of the line coefficients that minimize the differences between the data points and the line, use the formulas:

b1 = cov(X, Y) / s²x
b0 = ȳ - b1x̄

The regression equation that estimates the equation of the first order linear model is:

ŷ = b0 + b1x
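A minimal sketch of these formulas with numpy, reusing the four points from the line-comparison slide above:

```python
# Sketch: least-squares coefficients from b1 = cov(X, Y) / s_x^2 and b0 = ybar - b1*xbar.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 1.5, 3.2])

b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # sample covariance / sample variance
b0 = y.mean() - b1 * x.mean()
print(f"y-hat = {b0:.3f} + {b1:.3f} x")
```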
The Simple Linear Regression Line

• Example
– A car dealer wants to find the relationship between the odometer reading and the selling price of used cars.
– A random sample of 100 cars is selected, and the data recorded.
– Find the regression line.

Car   Odometer (independent variable x)   Price (dependent variable y)
1     37388                               14636
2     44758                               14122
3     45833                               14016
4     30862                               15590
5     31705                               15568
6     34010                               14718
…     …                                   …
The Simple Linear Regression Line

• Solution
– Solving by hand: calculate a number of statistics, where n = 100:

x̄ = 36,009.45;   s²x = Σ(xi - x̄)² / (n - 1) = 43,528,690
ȳ = 14,822.82;   cov(X, Y) = Σ(xi - x̄)(yi - ȳ) / (n - 1) = -2,712,511

b1 = cov(X, Y) / s²x = -2,712,511 / 43,528,690 = -.0623
b0 = ȳ - b1x̄ = 14,822.82 - (-.0623)(36,009.45) = 17,067

ŷ = b0 + b1x = 17,067 - .0623x

The Simple Linear Regression Line

• Solution – continued
– Using the computer

Tools > Data Analysis > Regression > [Shade the y range and the x range] > OK
The Simple Linear Regression Line
SUMMARY OUTPUT

Regression Statistics
Multiple R          0.8063
R Square            0.6501
Adjusted R Square   0.6466
Standard Error      303.1
Observations        100

ŷ = 17,067 - .0623x

ANOVA
            df    SS        MS        F       Significance F
Regression  1     16734111  16734111  182.11  0.0000
Residual    98    9005450   91892
Total       99    25739561

            Coefficients  Standard Error  t Stat   P-value
Intercept   17067         169             100.97   0.0000
Odometer    -0.0623       0.0046          -13.49   0.0000
Interpreting the Linear Regression Equation

[Figure: odometer line fit plot; price ($12,000 to $17,000) vs. odometer reading (15,000 to 55,000 miles), with no data near an odometer reading of 0]

ŷ = 17,067 - .0623x

The intercept is b0 = $17,067. The slope, b1 = -.0623, means that for each additional mile on the odometer, the price decreases by an average of $0.0623.

Do not interpret the intercept as the “price of cars that have not been driven”; the sample contains no cars with odometer readings near zero.
Error Variable: Required Conditions
• The error e is a critical part of the regression
model.
• Four requirements involving the distribution of e
must be satisfied.
– The probability distribution of e is normal.
– The mean of e is zero: E(e) = 0.
– The standard deviation of e is se for all values of x.
– The set of errors associated with different values of y are all
independent.
Assessing the Model

• The least squares method will produce a regression line whether or not there is a linear relationship between x and y.
• Consequently, it is important to assess how well the
linear model fits the data.
• Several methods are used to assess the model. All
are based on the sum of squares for errors, SSE.
Sum of Squares for Errors

– This is the sum of squared differences between the points and the regression line.
– It can serve as a measure of how well the line fits the data. SSE is defined by

SSE = Σ(i=1 to n) (yi - ŷi)²

– A shortcut formula:

SSE = Σyi² - b0Σyi - b1Σxiyi
Standard Error of Estimate

– The mean error is equal to zero.
– If σe is small, the errors tend to be close to zero (close to the mean error), and the model fits the data well.
– Therefore, we can use se as a measure of the suitability of using a linear model.
– An estimator of σe is given by se, the standard error of estimate:

se = √(SSE / (n - 2))
Standard Error of Estimate - Example
• Example:
– Calculate the standard error of estimate for the previous example and describe what it tells you about the model fit.
• Solution

SSE = 9,005,450
se = √(SSE / (n - 2)) = √(9,005,450 / 98) = 303.13

It is hard to assess the model based on se, even when compared with the mean value of y:
se = 303.1,  ȳ = 14,823
Testing the slope
– When no linear relationship exists between two variables, the regression line should be horizontal.

[Figure: two scatter plots]
Linear relationship: different inputs (x) yield different outputs (y); the slope is not equal to zero.
No linear relationship: different inputs (x) yield the same output (y); the slope is equal to zero.
Testing the Slope

• We can draw inferences about β1 from b1 by testing

H0: β1 = 0
H1: β1 ≠ 0 (or < 0, or > 0)

– The test statistic is

t = (b1 - β1) / s_b1,  where  s_b1 = se / √((n - 1)s²x)  is the standard error of b1.

– If the error variable is normally distributed, the statistic has a Student t distribution with d.f. = n - 2.
Testing the Slope - Example
• Example
– Test to determine whether there is enough evidence to infer that there is a linear relationship between the car auction price and the odometer reading for all three-year-old Tauruses in the previous example. Use α = 5%.
Testing the Slope - Example
• Solving by hand
– To compute t we need the values of b1 and s_b1:

b1 = -.0623
s_b1 = se / √((n - 1)s²x) = 303.1 / √((99)(43,528,690)) = .00462

t = (b1 - β1) / s_b1 = (-.0623 - 0) / .00462 = -13.49

– The rejection region is t > t.025 or t < -t.025 with d.f. = n - 2 = 98. Approximately, t.025 = 1.984.
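A sketch of the same test in Python, assuming scipy; the inputs are the summary statistics from the odometer example:

```python
# Sketch: t-test for the slope, using the odometer example's summary statistics.
import numpy as np
from scipy import stats

b1, n = -0.0623, 100
s_e, s2_x = 303.1, 43_528_690

s_b1 = s_e / np.sqrt((n - 1) * s2_x)  # standard error of b1, about .00462
t = (b1 - 0) / s_b1                   # about -13.49
p = 2 * stats.t.sf(abs(t), df=n - 2)  # two-sided p-value, essentially 0
print(t, p)
```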
Testing the Slope - Example

• Using the computer

[Data: the 100 sampled (Price, Odometer) pairs]

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.8063
R Square            0.6501
Adjusted R Square   0.6466
Standard Error      303.1
Observations        100

ANOVA
            df    SS        MS        F       Significance F
Regression  1     16734111  16734111  182.11  0.0000
Residual    98    9005450   91892
Total       99    25739561

            Coefficients  Standard Error  t Stat   P-value
Intercept   17067         169             100.97   0.0000
Odometer    -0.0623       0.0046          -13.49   0.0000

There is overwhelming evidence to infer that the odometer reading affects the auction selling price.
Coefficient of determination
– To measure the strength of the linear relationship we use the coefficient of determination:

R² = [cov(X, Y)]² / (s²x s²y)    or, equivalently,    R² = 1 - SSE / Σ(yi - ȳ)²

Note that the coefficient of determination is r², the square of the correlation coefficient.
Coefficient of determination
• To understand the significance of this coefficient, note that the overall variability in y is explained, in part, by the regression model, and remains, in part, unexplained (the error).
Coefficient of determination

Two data points (x1, y1) and (x2, y2) of a certain sample are shown.

[Figure: the two points, the regression line, and the mean ȳ]

Total variation in y = variation explained by the regression line + unexplained variation (error):

(y1 - ȳ)² + (y2 - ȳ)² = (ŷ1 - ȳ)² + (ŷ2 - ȳ)² + (y1 - ŷ1)² + (y2 - ŷ2)²

Variation in y = SSR + SSE
Coefficient of determination

• R² measures the proportion of the variation in y that is explained by the variation in x:

R² = 1 - SSE / Σ(yi - ȳ)² = (Σ(yi - ȳ)² - SSE) / Σ(yi - ȳ)² = SSR / Σ(yi - ȳ)²

• R² takes on any value between zero and one:
– R² = 1: perfect match between the line and the data points.
– R² = 0: there is no linear relationship between x and y.
Coefficient of Determination, Example

• Example
– Find the coefficient of determination for the used car price-odometer example. What does this statistic tell you about the model?
• Solution
– Solving by hand:

R² = [cov(X, Y)]² / (s²x s²y) = [-2,712,511]² / ((43,528,688)(259,996)) = .6501
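A minimal sketch of R² from fitted values, assuming numpy and using the second form above:

```python
# Sketch: coefficient of determination, R^2 = 1 - SSE / SST.
import numpy as np

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    sse = np.sum((y - y_hat) ** 2)     # unexplained variation
    sst = np.sum((y - y.mean()) ** 2)  # total variation in y
    return 1 - sse / sst
```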
Coefficient of Determination

– Using the computer: from the regression output, R Square = 0.6501.

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.8063
R Square            0.6501
Adjusted R Square   0.6466
Standard Error      303.1
Observations        100

65% of the variation in the auction selling price is explained by the variation in the odometer reading. The rest (35%) remains unexplained by this model.

ANOVA
            df    SS        MS        F       Significance F
Regression  1     16734111  16734111  182.11  0.0000
Residual    98    9005450   91892
Total       99    25739561

            Coefficients  Standard Error  t Stat   P-value
Intercept   17067         169             100.97   0.0000
Odometer    -0.0623       0.0046          -13.49   0.0000
MOVING AVERAGES AND
EXPONENTIAL SMOOTHING

Adapted from: http://faculty.wiu.edu/F-Dehkordi/DS-533/Lectures/Moving-average-methods.ppt
Introduction: MA and ES
• Models applicable to stationary time series data and to series with a seasonal component, a trend component, or both.
• Forecasting methods discussed in this chapter can be classified as:
– Averaging methods.
• Equally weighted observations
– Exponential smoothing methods.
• An unequal set of weights is applied to past data, where the weights decay exponentially from the most recent to the most distant data points.
• All methods in this group require certain parameters to be defined.
• These parameters (with values between 0 and 1) determine the unequal weights applied to past data.
Introduction: Averaging methods

• Averaging methods
– If a time series is generated by a constant process subject to random error, then the mean is a useful statistic and can be used as a forecast for the next period.
– Averaging methods are suitable for stationary time series data, where the series is in equilibrium around a constant value (the underlying mean) with a constant variance over time.
N Period Moving Average
Let:  MAT = the N-period moving average at the end of period T
      AT = the actual observation for period T

Then: MAT = (AT + AT-1 + AT-2 + … + AT-N+1) / N

Characteristics:
– Need N observations to make a forecast
– Very inexpensive and easy to understand
– Gives equal weight to all N observations
– Does not consider observations older than N periods
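A minimal sketch of this forecast, assuming numpy; the numbers reuse the hotel example that follows:

```python
# Sketch: N-period moving average forecast, MA_T = (A_T + ... + A_(T-N+1)) / N.
import numpy as np

def moving_average_forecast(observations, n):
    """Forecast for the next period: mean of the n most recent observations."""
    return float(np.mean(observations[-n:]))

occupancy = [79, 84, 83]                      # hotel example, periods 1-3
print(moving_average_forecast(occupancy, 3))  # forecast for period 4: 82.0
```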
Moving Average Example

Saturday Occupancy at a 100-Room Hotel

Saturday   Period   Occupancy   Three-Period Moving Average   Forecast
Aug. 1     1        79
Aug. 8     2        84
Aug. 15    3        83          82
Aug. 22    4        81          83                            82
Aug. 29    5        98          87                            83
Sept. 5    6        100         93                            87
Sept. 12   7        93                                        93

Accompanied by: Hotel Occupancy.xls
Averaging Methods
• The Mean
– Uses the average of all the historical data as the forecast:

Ft+1 = (1/t) Σ(i=1 to t) yi

– When new data become available, the forecast for time t+2 is the new mean, including the previously observed data plus this new observation:

Ft+2 = (1/(t+1)) Σ(i=1 to t+1) yi

– This method is appropriate when there is no noticeable trend or seasonality.
Averaging Methods
• The moving average for time period t is
the mean of the “k” most recent
observations.
• The constant number k is specified at the
outset.
• The smaller the number k, the more
weight is given to recent periods.
• The greater the number k, the less
weight is given to more recent periods.
Moving Averages
• A large k is desirable when there are
wide, infrequent fluctuations in the
series.
• A small k is most desirable when there
are sudden shifts in the level of the series.
• For quarterly data, a four-quarter moving
average, MA(4), eliminates or averages
out seasonal effects.
Moving Averages
• For monthly data, a 12-month moving average, MA(12), eliminates or averages out seasonal effects.
• Equal weights are assigned to each
observation used in the average.
• Each new data point is included in the
average as it becomes available, and the
oldest data point is discarded.
Moving Averages
• A moving average of order k, MA(k), is the average of the k most recent consecutive observations:

Ft+1 = ŷt+1 = (yt + yt-1 + yt-2 + … + yt-k+1) / k = (1/k) Σ(i=t-k+1 to t) yi

– k is the number of terms in the moving average.
• The moving average model does not handle trend or seasonality very well, although it can do better than the total mean.
Example: Weekly Department Store Sales
The weekly sales figures (in millions of dollars) presented in the following table are used by a major department store to determine the need for temporary sales personnel.

Period (t)   Sales (y)     Period (t)   Sales (y)
1            5.3           14           6.2
2            4.4           15           5.6
3            5.4           16           6.7
4            5.8           17           5.2
5            5.6           18           5.5
6            4.8           19           5.8
7            5.6           20           5.1
8            5.6           21           5.8
9            5.4           22           6.7
10           6.5           23           5.2
11           5.1           24           6.0
12           5.8           25           5.8
13           5.0
Example: Weekly Department Store Sales

[Figure: weekly sales (y) plotted against weeks 0 to 30]
Example: Weekly Department Store Sales

• Use a three-week moving average (k = 3) for the department store sales to forecast for weeks 24 and 26:

ŷ24 = (y23 + y22 + y21) / 3 = (5.2 + 6.7 + 5.8) / 3 = 5.9

• The forecast error is

e24 = y24 - ŷ24 = 6 - 5.9 = .1
Example: Weekly Department Store Sales

• The forecast for week 26 is

ŷ26 = (y25 + y24 + y23) / 3 = (5.8 + 6 + 5.2) / 3 = 5.7
Example: Weekly Department Store Sales
• RMSE = 0.63

Period (t)   Sales (y)   Forecast
1            5.3
2            4.4
3            5.4
4            5.8         5.03
5            5.6         5.20
6            4.8         5.60
7            5.6         5.40
8            5.6         5.33
9            5.4         5.33
10           6.5         5.53
11           5.1         5.83
12           5.8         5.67
13           5.0         5.80
14           6.2         5.30
15           5.6         5.67
16           6.7         5.60
17           5.2         6.17
18           5.5         5.83
19           5.8         5.80
20           5.1         5.50
21           5.8         5.47
22           6.7         5.57
23           5.2         5.87
24           6.0         5.90
25           5.8         5.97
26                       5.67

[Figure: weekly sales with the three-week moving average forecasts]

RMSE: the root-mean-square error, also called the root-mean-square deviation (RMSD).
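A minimal sketch of the RMSE computation, assuming numpy and comparing actuals with forecasts only where a forecast exists; applied to periods 4 through 25 above, it reproduces the reported RMSE of about 0.63:

```python
# Sketch: root-mean-square error of one-step-ahead forecasts.
import numpy as np

def rmse(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))
```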
Introduction: Exponential smoothing

• Exponential smoothing methods
– The simplest exponential smoothing method is the single exponential smoothing (SES) method, where only one parameter needs to be estimated.
– Holt’s method makes use of two different parameters and allows forecasting for series with a trend.
– Holt-Winters’ method involves three smoothing parameters, to smooth the data, the trend, and the seasonal index.
Exponential Smoothing Methods
• This method provides an exponentially
weighted moving average of all previously
observed values.
• Appropriate for data with no predictable
upward or downward trend.
• The aim is to estimate the current level and use it as a forecast of future values.
Simple Exponential Smoothing Method

• Formally, the exponential smoothing equation is

Ft+1 = αyt + (1 - α)Ft

where
– Ft+1 = forecast for the next period
– α = smoothing constant
– yt = observed value of the series in period t
– Ft = old forecast for period t

– The forecast Ft+1 is based on weighting the most recent observation yt with a weight α and weighting the most recent forecast Ft with a weight of 1 - α.
Simple Exponential Smoothing Method

• The exponential smoothing equation rewritten in the following form elucidates the role of the weighting factor α:

Ft+1 = Ft + α(yt - Ft)

• The exponential smoothing forecast is the old forecast plus an adjustment for the error that occurred in the last forecast.
Simple Exponential Smoothing Method
• The value of the smoothing constant α must be between 0 and 1 (α cannot be equal to 0 or 1).
• If stable predictions with smoothed random variation are desired, then a small value of α is desirable.
• If a rapid response to a real change in the pattern of observations is desired, a large value of α is appropriate.
Simple Exponential Smoothing Method

• To estimate α, forecasts are computed for α equal to .1, .2, .3, …, .9, and the sum of squared forecast errors is computed for each.
• The value of α with the smallest RMSE is chosen for use in producing the future forecasts.
Simple Exponential Smoothing Method

• To start the algorithm, we need F1, because

F2 = αy1 + (1 - α)F1

• Since F1 is not known, we can
– set the first estimate equal to the first observation, or
– use the average of the first five or six observations for the initial smoothed value.
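A minimal sketch of SES seeded with the first observation, plus the RMSE-based grid search over α described earlier, assuming numpy:

```python
# Sketch: simple exponential smoothing, F_(t+1) = alpha*y_t + (1 - alpha)*F_t,
# seeded with F_1 = y_1, plus a grid search choosing alpha by RMSE.
import numpy as np

def ses_forecasts(y, alpha):
    f = [y[0]]                                   # F_1 = first observation
    for t in range(len(y) - 1):
        f.append(alpha * y[t] + (1 - alpha) * f[t])
    return np.array(f)                           # f[t-1] is the forecast for period t

def best_alpha(y, grid=np.arange(0.1, 1.0, 0.1)):
    y = np.asarray(y, float)
    def rmse(a):
        f = ses_forecasts(y, a)
        return np.sqrt(np.mean((y[1:] - f[1:]) ** 2))  # skip the seeded first period
    return min(grid, key=rmse)
```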
Example: University of Michigan Index of Consumer Sentiment

• University of Michigan Index of Consumer Sentiment for January 1995 to December 1996.
• We want to forecast the index using the simple exponential smoothing method.

Date     Observed        Date     Observed
Jan-95   97.6            Jan-96   89.3
Feb-95   95.1            Feb-96   88.5
Mar-95   90.3            Mar-96   93.7
Apr-95   92.5            Apr-96   92.7
May-95   89.8            May-96   94.7
Jun-95   92.7            Jun-96   95.3
Jul-95   94.4            Jul-96   94.7
Aug-95   96.2            Aug-96   95.3
Sep-95   88.9            Sep-96   94.7
Oct-95   90.2            Oct-96   96.5
Nov-95   88.2            Nov-96   99.2
Dec-95   91.0            Dec-96   96.9
Example: University of Michigan Index of Consumer Sentiment

• Since no forecast is available for the first period, we will set the first estimate equal to the first observation.
• We try α = 0.3 and α = 0.6.

[Figure: University of Michigan Index of Consumer Sentiment, Sep-94 to Jun-97]
Example: University of Michigan Index of Consumer Sentiment

• Note the first forecast is the first observed value.
• The forecasts for Feb-95 (t = 2) and Mar-95 (t = 3) with α = 0.6 are evaluated as follows:

ŷt+1 = ŷt + α(yt - ŷt)
ŷ2 = ŷ1 + 0.6(y1 - ŷ1) = 97.6 + 0.6(97.6 - 97.6) = 97.6
ŷ3 = ŷ2 + 0.6(y2 - ŷ2) = 97.6 + 0.6(95.1 - 97.6) = 96.1

Date     Consumer Sentiment   Alpha = 0.3   Alpha = 0.6
Jan-95   97.6                 #N/A          #N/A
Feb-95   95.1                 97.60         97.60
Mar-95   90.3                 96.85         96.10
Apr-95   92.5                 94.89         92.62
May-95   89.8                 94.17         92.55
Jun-95   92.7                 92.86         90.90
Jul-95   94.4                 92.81         91.98
Aug-95   96.2                 93.29         93.43
Sep-95   88.9                 94.16         95.09
Oct-95   90.2                 92.58         91.38
Nov-95   88.2                 91.87         90.67
Dec-95   91.0                 90.77         89.19
Jan-96   89.3                 90.84         90.28
Feb-96   88.5                 90.38         89.69
Mar-96   93.7                 89.81         88.98
Apr-96   92.7                 90.98         91.81
May-96   89.4                 91.50         92.34
Jun-96   92.4                 90.87         90.58
Jul-96   94.7                 91.33         91.67
Aug-96   95.3                 92.34         93.49
Sep-96   94.7                 93.23         94.58
Oct-96   96.5                 93.67         94.65
Nov-96   99.2                 94.52         95.76
Dec-96   96.9                 95.92         97.82
Jan-97   97.4                 96.22         97.27
Feb-97   99.7                 96.57         97.35
Mar-97   100.0                97.51         98.76
Apr-97   101.4                98.26         99.50
May-97   103.2                99.20         100.64
Jun-97   104.5                100.40        102.18
Jul-97   107.1                101.63        103.57
Aug-97   104.4                103.27        105.69
Sep-97   106.0                103.61        104.92
Oct-97   105.6                104.33        105.57
Nov-97   107.2                104.71        105.59
Dec-97   102.1                105.46        106.55
Example: University of Michigan Index of Consumer Sentiment

• RMSE = 2.66 for α = 0.6
• RMSE = 2.96 for α = 0.3

[Figure: University of Michigan Index of Consumer Sentiment with the SES forecasts for α = 0.3 and α = 0.6, Jun-94 to Apr-01]
Measures of Forecast Error
• Forecast error: Et = Ft - Dt
• Mean squared error (MSE):
MSEn = (Sum(t=1 to n)[Et²]) / n
• Absolute deviation: At = |Et|
• Mean absolute deviation (MAD):
MADn = (Sum(t=1 to n)[At]) / n
• The standard deviation of the forecast error can be estimated as σ ≈ 1.25 MAD
Measures of Forecast Error
• Mean absolute percentage error (MAPE):
MAPEn = (Sum(t=1 to n)[|Et / Dt| × 100]) / n
• Bias
– Shows whether the forecast consistently under- or overestimates demand; should fluctuate around 0
biasn = Sum(t=1 to n)[Et]
• Tracking signal
– Should be within the range of ±6
– Otherwise, possibly use a new forecasting method
– Tracking signal = accumulated forecast errors / mean absolute deviation:
TSt = biast / MADt
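A minimal sketch collecting these measures, assuming numpy and the slide’s sign convention Et = Ft - Dt:

```python
# Sketch: forecast error measures and the tracking signal (E_t = F_t - D_t).
import numpy as np

def error_measures(forecast, demand):
    F, D = np.asarray(forecast, float), np.asarray(demand, float)
    E = F - D
    mad = np.mean(np.abs(E))
    return {
        "MSE": np.mean(E ** 2),
        "MAD": mad,
        "MAPE": np.mean(np.abs(E / D)) * 100,
        "bias": np.sum(E),
        "TS": np.sum(E) / mad,  # flag the method if |TS| exceeds 6
    }
```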