
Forecasting
MGS 3100 Business Analysis
Chapter 13

Why Forecasting? How Do We Forecast?
• Qualitative approach
  - based on experience, judgment, and knowledge
• Quantitative approach
  - based on historical data and models
  - assumes past patterns will continue into the future

Overview of Forecasting Methods

Forecasting methods fall into two branches:
• Quantitative
  - Causal Model
  - Time Series Model (Stationary; Trend; Trend + Seasonality)
• Qualitative
  - Expert Judgment
  - Delphi Method
  - Grassroots
  - Market Research

Qualitative Forecasting
(based on experience, judgement, and knowledge)
• Grass Roots
• Market Research
• Panel Consensus
• Delphi Method
Quantitative Forecasting
(based on data and models)
• Causal Models: other variables (Price, Population, Advertising, ...) feed a causal model that forecasts Year 2009 Sales
• Time Series Models: past values of the same variable (Sales 2008, Sales 2007, Sales 2006, ...) feed a time series model that forecasts Year 2009 Sales

Evaluation of Forecasting Models
• BIAS - the arithmetic mean of the errors
  BIAS = Σ(Actual - Forecast) / n = Σ Error / n
  - n is the number of forecast errors
  - Excel: =AVERAGE(error range)
• Mean Absolute Deviation - MAD
  MAD = Σ |Actual - Forecast| / n = Σ |Error| / n
  - No direct Excel function to calculate MAD

Evaluation of Forecasting Models (continued)
• Mean Square Error - MSE
  MSE = Σ(Actual - Forecast)² / n = Σ(Error)² / n
  - Excel: =SUMSQ(error range)/COUNT(error range)
• Mean Absolute Percentage Error - MAPE
  MAPE = [Σ(|Actual - Forecast| / Actual) / n] × 100%
• R² - only for curve fitting models such as regression
• In general, the lower the error measure (BIAS, MAD, MSE) or the higher the R², the better the forecasting model

Causal Forecasting
• Find a straight line that fits the data best.
  (Figure: scatter plot of Y versus X with the best-fit line and its intercept marked.)
• Y = Intercept + Slope × X (= a + bX)
• Slope = change in Y / change in X
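
A minimal Python sketch of the four error measures defined in the evaluation slides above. The function names and the actual/forecast numbers are illustrative, not from the slides:

# Forecast-accuracy measures: BIAS, MAD, MSE, MAPE (hypothetical data).

def bias(actual, forecast):
    errors = [a - f for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual   = [120, 90, 100, 75, 110]   # hypothetical demand
forecast = [110, 95, 105, 80, 100]   # hypothetical forecasts

print(bias(actual, forecast))  # 1.0  -> slight under-forecasting on average
print(mad(actual, forecast))   # 7.0
print(mse(actual, forecast))   # 55.0
print(mape(actual, forecast))  # ~6.9 (%)
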
Causal Forecasting Models
• Curve Fitting: Simple Linear Regression
  - One independent variable (X) is used to predict one dependent variable (Y): Y = a + bX
  - Given n observations (Xi, Yi), we can fit a line to the overall pattern of these data points. The Least Squares Method in statistics gives the best a and b in the sense of minimizing Σ(Yi - a - bXi)²:
    b = [ΣXiYi - (ΣXi)(ΣYi)/n] / [ΣXi² - (ΣXi)²/n]
    a = ΣYi/n - b(ΣXi/n)
  - Find the regression line with Excel:
    - Use functions: a = INTERCEPT(Y range, X range), b = SLOPE(Y range, X range)
    - Use Solver
    - Use Excel's Data Analysis Tools | Regression
• Curve Fitting: Multiple Regression
  - Two or more independent variables are used to predict the dependent variable:
    Y = b0 + b1X1 + b2X2 + ... + bpXp
  - Use Excel's Data Analysis Tools | Regression
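
A small Python sketch of the least-squares formulas above; the x and y values are made up for illustration, and in Excel the same numbers would come from SLOPE and INTERCEPT:

# Least-squares slope (b) and intercept (a) from the formulas on this slide.
x = [10, 12, 14, 16, 18]   # independent variable X (hypothetical)
y = [4, 5, 7, 8, 10]       # dependent variable Y (hypothetical)
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

b = (sum_xy - sum_x * sum_y / n) / (sum_x2 - sum_x ** 2 / n)   # slope
a = sum_y / n - b * sum_x / n                                  # intercept

print(a, b)   # -3.7 and 0.75, i.e. Y = -3.7 + 0.75 X for this sample data
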

Regression Example: An Oil Company Expansion

Consider an oil company that is planning to expand its network of modern self-service gasoline stations. The company plans to use traffic flow (measured in the average number of cars per hour) to forecast sales (measured in average dollar sales per hour). The firm has had five stations in operation for more than a year and has used historical data to calculate the following averages.

(Figure: scatter plot of Sales/hour ($) versus Cars/hour for the five stations.)


A linear trend is fit to the data.

(Figure: the same scatter plot with the fitted linear trend line added.)

Using Excel (2003 or older), click on Tools | Data Analysis... In the resulting dialog, choose Regression.

Using Excel (2007), click on the Data tab and then, in the Analysis group, click Data Analysis.

In the Regression dialog, enter the Y-range and X-range. Choose to place the output in a new worksheet called Results. Select Residual Plots and Normal Probability Plots to be created along with the output.
Click OK to produce the regression results. Note that a (Intercept) and b (X Variable 1) are reported as 57.104 and 0.92997, respectively.

• One of the other summary output values that Excel gives is R Square = 69.4%.
• This is a "goodness of fit" measure which represents the R² statistic discussed in introductory statistics classes. It is calculated as Regression SS / Total SS.
• R² ranges in value from 0 to 1 and gives an indication of how much of the total variation in Y from its mean is explained by the trend line (or, more generally, how much of the total variation in the dependent variable is explained by the independent variable(s)).

Now the question: should we build a station at Buffalo Grove, where traffic is 183 cars/hour?

The best guess at the corresponding sales volume is found by placing this X value into the regression equation:

  Ŷ = a + b × X
  Sales/hour = 57.104 + 0.92997 × 183 = $227.29

Another value of interest in the Summary report is the t-statistic for the X variable and its associated values. The t-statistic is 2.61 and the P-value is 0.0798. A P-value less than 0.05 would indicate at least 95% confidence that the slope parameter (b) is statistically significantly different from 0 (zero). A slope of 0 results in a flat trend line and indicates no relationship between Y and X. The 95% confidence interval for b is [-0.205, 2.064]; thus, we cannot exclude the possibility that the true value of b is 0.
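
A one-line check of the Buffalo Grove forecast in Python, plugging the slide's reported coefficients (a = 57.104, b = 0.92997) into the regression equation:

a, b = 57.104, 0.92997            # intercept and slope reported by Excel
traffic = 183                     # cars/hour at the proposed Buffalo Grove site
sales_per_hour = a + b * traffic
print(round(sales_per_hour, 2))   # 227.29 dollars/hour
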
Time Series Model Building
• Historical data collection
• Data plotting (time series plot)
• Forecasting model building
• Evaluation and selection of model
• Forecasting with the final selected model

Components of a Time Series
• Trend: long-term overall up or down movement
• Seasonality: periodic pattern repeating every year
• Cycles: up and down movement repeating over a long time frame
• Random Variations: random movements that follow no pattern

Components of a Time Series

(Figures: sketches of the individual components: trend, cycle, seasonal pattern, random movement, and trend with a seasonal pattern; and of three example series: a stationary time series, a linear trend time series, and a linear trend and seasonality time series.)
Time Series Forecasting Models

Look at the data (time series plot), forecast using one or more techniques, then evaluate the techniques and pick the best one.

Observation from the time series plot: data is reasonably stationary (no trend or seasonality)
• Techniques to try: heuristics / averaging methods: Naive, Moving Averages, Simple Exponential Smoothing
• Ways to evaluate: MAD, MAPE, MSE, BIAS

Observation from the time series plot: data shows a consistent trend
• Techniques to try: regression: Linear; Non-linear regressions (not covered in this course)
• Ways to evaluate: MAD, MAPE, MSE, BIAS, R-Squared

Observation from the time series plot: data shows both a trend and a seasonal pattern
• Techniques to try: classical decomposition: find the Seasonal Index, then use regression analysis to find the trend component
• Ways to evaluate: MAD, MAPE, MSE, BIAS, R-Squared

Trend Model
• Curve fitting method used for time series data (also called a time series regression model)
• Useful when the time series has a clear trend
• Cannot capture seasonal patterns
• Linear Trend Model: Yt = a + bt
  - t is the time index for each period, t = 1, 2, 3, ...
  (Figure: a short series with an upward linear trend line.)
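
A brief sketch of fitting the linear trend model Yt = a + bt; the demand series is hypothetical, and numpy.polyfit performs the same least-squares fit as the regression formulas shown earlier:

import numpy as np

# Hypothetical series with a clear upward trend
y = [2.1, 2.8, 3.2, 4.1, 4.8, 5.5, 6.2, 6.8]
t = np.arange(1, len(y) + 1)      # time index t = 1, 2, 3, ...

b, a = np.polyfit(t, y, 1)        # degree-1 fit returns (slope, intercept)
print(a, b)                       # trend line Y_t = a + b*t

print(a + b * 9)                  # forecast for the next period (t = 9)
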

Stationary Models
• Naïve
  - "I sold 10 units yesterday, so I think I will sell 10 units today."
• n-period moving average
  - "For the past n days, I sold 12 units on average. Therefore, I think I will sell 12 units today."
• Exponential smoothing
  - "I predicted I would sell 10 units at the beginning of yesterday; at the end of yesterday, I found out I actually sold 8 units. So, I will adjust the forecast of 10 (yesterday's forecast) by adding an adjusted error (α × error). This compensates for yesterday's over- (or under-) forecast."

Naïve Model
The simplest time series forecasting model. Idea: "what happened last time (last year, last month, yesterday) will happen again this time."
• Algebraic: Ft = Yt-1
  - Yt-1: actual value in period t-1
  - Ft: forecast for period t
• Spreadsheet: in B3 enter "=A2"; copy down
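
A minimal sketch of the naïve forecast Ft = Yt-1 in Python; the demand numbers are made up, and the shift is the same as entering "=A2" in B3 and copying down:

demand = [120, 90, 100, 75, 110]        # hypothetical actuals Y1..Y5
naive_forecast = [None] + demand[:-1]   # F1 undefined, F_t = Y_{t-1}
print(naive_forecast)                   # [None, 120, 90, 100, 75]
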
Moving Average Model
Simple n-Period Moving Average Forecast:
  Ft = (sum of actual values in previous n periods) / n
     = (Yt-1 + Yt-2 + ... + Yt-n) / n
Issues of the MA Model
• The naïve model is a special case of MA with n = 1
• The idea is to reduce random variation, i.e., smooth the data
• All previous n observations are treated equally (equal weights)
• Suitable for relatively stable time series with no trend or seasonal pattern

MA Example
Month  Demand  n=3    n=5
Jan    120     -      -
Feb    90      -      -
Mar    100     -      -
Apr    75      103.3  -      <- (120+90+100)/3
May    110     88.3   -      <- (90+100+75)/3
June   50      95.0   99.0
July   75      78.3   85.0
Aug    130     78.3   82.0
Sept   110     85.0   88.0
Oct    90      105.0  95.0
Nov    ?       110.0  91.0
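
A small Python sketch of the n-period moving average, reproducing the n = 3 column of the example above (the function name is just illustrative):

def moving_average_forecast(y, n):
    """F_t = average of the previous n actuals; None until n actuals exist."""
    return [None if t < n else sum(y[t - n:t]) / n for t in range(len(y) + 1)]

demand = [120, 90, 100, 75, 110, 50, 75, 130, 110, 90]   # Jan..Oct from the table
fcst3 = moving_average_forecast(demand, 3)
print([round(f, 1) if f is not None else None for f in fcst3])
# [None, None, None, 103.3, 88.3, 95.0, 78.3, 78.3, 85.0, 105.0, 110.0]
# The last value (110.0) is the November forecast; n = 1 gives the naive model.
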

Smoothing Effect of the MA Model
Longer-period moving averages (larger n) react to actual changes more slowly, i.e., a new data point has less impact on the forecast.

Moving Average Model: Weighted n-Period Moving Average
  Ft = w1·Yt-1 + w2·Yt-2 + ... + wn·Yt-n
• Typically the weights are decreasing: w1 > w2 > ... > wn
• The weights sum to 1: Σwi = 1
• Flexible weights reflect the relative importance of each previous observation in forecasting
• Optimal weights can be found via Solver
Weighted MA: An Illustration
Month      Weight  Data
August     17%     130
September  33%     110
October    50%     90
November forecast:
  FNov = (0.50)(90) + (0.33)(110) + (0.17)(130) = 103.4

Simple Exponential Smoothing
A special type of weighted moving average
• Includes all past observations
• Uses a unique set of weights that weight recent observations much more heavily than very old observations: smoothing weight 0 < α < 1, with decreasing weights given to older observations: α, α(1-α), α(1-α)², α(1-α)³, ...
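
The weighted moving average illustration above in a few lines of Python, with the weights and data exactly as in the table:

# Weighted 3-period MA: the most recent observation gets the largest weight.
weights = [0.50, 0.33, 0.17]        # October, September, August
recent  = [90, 110, 130]            # most recent first
f_nov = sum(w * y for w, y in zip(weights, recent))
print(round(f_nov, 1))              # 103.4
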

Simple ES: The Model
  Ft = α·Yt-1 + α(1-α)·Yt-2 + α(1-α)²·Yt-3 + ...
     = α·Yt-1 + (1-α)·[α·Yt-2 + α(1-α)·Yt-3 + ...]
     = α·Yt-1 + (1-α)·Ft-1
     = Ft-1 + α·(Yt-1 - Ft-1)
• α: smoothing constant
• Ft: forecast for period t
• Ft-1: last period's forecast
• Yt-1: last period's actual value

ES Example (assume F1 = Y1)
Period  Month  Demand  Fcst (α=0.1)  Fcst (α=0.5)
1       Jan    37      37            37
2       Feb    40      37.00         37.00   <- 0.1(37) + 0.9(37)
3       Mar    41      37.30         38.50   <- 0.1(40) + 0.9(37.00)
4       Apr    37      37.67         39.75
5       May    45      37.60         38.37
6       Jun    50      38.34         41.68
7       Jul    43      39.51         45.84
8       Aug    47      39.86         44.42
9       Sep    56      40.57         45.71
10      Oct    52      42.11         50.85
11      Nov    55      43.10         51.42
12      Dec    54      44.29         53.21
13      Jan    ?       45.26         53.61
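
A short Python sketch of simple exponential smoothing, Ft = Ft-1 + α(Yt-1 - Ft-1), reproducing the α = 0.1 column of the example (F1 is set to Y1 as on the slide; the function name is illustrative):

def simple_es(y, alpha):
    """Return forecasts F_1..F_{n+1}; F_1 = Y_1 as assumed on the slide."""
    f = [y[0]]                                   # F1 = Y1
    for actual in y:                             # each pass produces F_{t+1}
        f.append(f[-1] + alpha * (actual - f[-1]))
    return f

demand = [37, 40, 41, 37, 45, 50, 43, 47, 56, 52, 55, 54]   # Jan..Dec
print([round(v, 2) for v in simple_es(demand, 0.1)])
# [37, 37.0, 37.3, 37.67, 37.6, 38.34, 39.51, 39.86, 40.57, 42.11, 43.1, 44.29, 45.26]
# The final value (45.26) is the forecast for next January.
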


The Smoothing Effect of α

(Figure: demand over 12 periods showing the actual series together with the α = 0.1 and α = 0.5 forecasts.)

A larger α reacts to actual changes more quickly (responsive), while a smaller α responds more slowly to actual changes.

Properties of Simple ES
• Widely used and successful model
• Requires very little data
• Larger α: more responsive forecast; smaller α: smoother forecast (see also Table 13.2)
• The "best" α in terms of minimizing MSE or MAD can be found by Solver

Time Series Decomposition Model
Basic idea: a time series is composed of several basic components: Trend, Seasonality, Cycle, and Random Error.
The multiplicative decomposition model:
  Yt = Trendt × Cyclet × Seasonalityt ± Errort
• These components contribute to the time series value in a multiplicative way

Time Series Decomposition
The basic model is:
  Y = Trend × Cyclical × Seasonal ± Error
• Since we cannot easily extract or predict cycles, we will assume that the trend component captures cycles during the forecast period
• Since we have to live with error (we cannot predict it), our model is simplified to:
  Y = Trend × Seasonal


I. Estimate Seasonal Index (Simplified Method)
1. Calculate the overall average demand using all data points
2. Divide each demand by the overall average
   - Each resulting number is called a raw index (Raw Index = Demand / Average Demand)
   - The number of raw indices for the same month or quarter is equal to the number of years
3. Calculate the Seasonal Index (SI) by averaging all raw indices for the same month or quarter

Seasonal Index: An Example
Assume we only have two years of data.
1. Calculate the overall average demand using all 24 demands
2. Divide each demand by the overall average to get the raw index (Ratio)
3. The seasonal index is calculated by averaging the raw indices in the same month (here only two per month).
   Example for January: (0.851 + 1.064) / 2 = 0.957

Year  Month      Demand  Average Demand  Ratio  Seasonal Index
1     January    80      94              0.851  0.957
1     February   75      94              0.798  0.851
1     March      80      94              0.851  0.904
1     April      90      94              0.957  1.064
1     May        115     94              1.223  1.309
1     June       110     94              1.170  1.223
1     July       100     94              1.064  1.117
1     August     90      94              0.957  1.064
1     September  85      94              0.904  0.957
1     October    75      94              0.798  0.851
1     November   75      94              0.798  0.851
1     December   80      94              0.851  0.851
2     January    100     94              1.064  0.957
2     February   85      94              0.904  0.851
2     March      90      94              0.957  0.904
2     April      110     94              1.170  1.064
2     May        131     94              1.394  1.309
2     June       120     94              1.277  1.223
2     July       110     94              1.170  1.117
2     August     110     94              1.170  1.064
2     September  95      94              1.011  0.957
2     October    85      94              0.904  0.851
2     November   85      94              0.904  0.851
2     December   80      94              0.851  0.851
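
A Python sketch of the simplified seasonal-index procedure, using the two years of monthly demand from the table above:

# Simplified seasonal index: raw index = demand / overall average,
# seasonal index = average of the raw indices for the same month.

year1 = [80, 75, 80, 90, 115, 110, 100, 90, 85, 75, 75, 80]
year2 = [100, 85, 90, 110, 131, 120, 110, 110, 95, 85, 85, 80]

overall_avg = sum(year1 + year2) / 24            # = 94.0
raw1 = [d / overall_avg for d in year1]
raw2 = [d / overall_avg for d in year2]
seasonal_index = [(r1 + r2) / 2 for r1, r2 in zip(raw1, raw2)]

print(round(overall_avg, 1))                     # 94.0
print(round(seasonal_index[0], 3))               # January: (0.851 + 1.064)/2 = 0.957
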

II. Estimate Trend Component
Step 1: Remove the seasonal effect
• Deseasonalized datat = Yt / SIt
Step 2: Fit a trend line to the deseasonalized data using the least squares method
Step 3: Calculate the trend value for each period t based on the trend model fitted to the deseasonalized data
Note: If the deseasonalized data look stable (no apparent trend), simple exponential smoothing may be used in Steps 2 and 3 to calculate the forecast (rather than the trend) for each period.

III. Forecast
Combine the seasonal and trend components
• Ft = Trend Valuet × Seasonal Indext
• This final step is also called reseasonalizing
• Trend Valuet is the trend estimate for period t
