OM Forecasting
Harambe University
OM is mostly proactive, not reactive: it involves structured planning activities.

Forecasting is the process of predicting a future event. It is the underlying basis of all business decisions:
• Production
• Inventory
• Personnel
• Facilities
• Human resources
• Marketing
• MIS
• Operations
[Figure: time axis Jan–Aug showing actual demand (past sales) and predicted demand for the months ahead]

Elements of a good forecast:
• Timely
• Reliable and accurate
• Written
• Easy to use
HARAMBE UNIVERSITY COLLEGE
Some general characteristics of forecasts
• Forecasts are seldom perfect ("forecasts are always wrong").
• Forecasts are more accurate for groups or families of items: product-family and aggregated forecasts are more accurate than individual product forecasts.
• Forecasts are more accurate for shorter time periods: forecast accuracy decreases as the time horizon increases.
  Ex. I can forecast this year's class average better than next year's class average.
Steps in the Forecasting Process
[Figure: the sequence of forecasting steps, ending with "the forecast"]
Key issues in forecasting
• Trends
• Seasonality
• Cyclical elements
• Autocorrelation
• Random variation

Factors in choosing a forecasting method:
1. Data availability
2. Time horizon for the forecast
3. Required accuracy
4. Required resources
1. Naive approach
2. Moving averages
3. Exponential smoothing
4. Trend projection
5. Linear regression
Methods 1–4 are time-series models; linear regression is an associative model.
Components of demand: trend, cyclical, seasonal, and random.
[Figure: actual demand line over 4 years, showing seasonal peaks, the average demand over the 4 years, and random variation]
Trend Component
• Persistent, overall upward or downward pattern
• Changes due to population, technology, age, culture, etc.
• Typically several years' duration
Random Component
• Erratic, unsystematic, "residual" fluctuations
• Due to random variation or unforeseen events
• Short duration and non-repeating
Forecasting Methods for Time-Series Models

1. Naive Approach
• Assumes demand in the next period is the same as demand in the most recent period.
  e.g., if January sales were 68, then February sales will be 68.
• Sometimes cost-effective and efficient; can be a good starting point.

2. Moving Average

Ft+1 = (At + At-1 + … + At-n+1) / n

In a weighted moving average, the weights must sum to one:

wt + wt-1 + … + wt-n+1 = 1
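A minimal sketch of the two methods above in Python (function names are mine; the sales series is the Jan–Jul data used in the weighted-average example that follows):

```python
# Minimal sketch of the naive and simple moving-average forecasts.
# Function names are illustrative, not from the slides.

def naive_forecast(demand):
    """F(t+1) = A(t): next period's forecast is the most recent actual."""
    return demand[-1]

def moving_average(demand, n):
    """F(t+1) = (A(t) + A(t-1) + ... + A(t-n+1)) / n."""
    return sum(demand[-n:]) / n

sales = [10, 12, 13, 16, 19, 23, 26]        # Jan-Jul
print(naive_forecast(sales))                # 26
print(round(moving_average(sales, 3), 2))   # (19 + 23 + 26)/3 = 22.67
```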
Weighted moving average example (weights 3, 2, 1, with the heaviest weight on the most recent month; weights sum to 6):

Month     Sales  Three-month weighted moving average
January   10
February  12
March     13
April     16     [(3 × 13) + (2 × 12) + (1 × 10)]/6 = 12 1/6
May       19     [(3 × 16) + (2 × 13) + (1 × 12)]/6 = 14 1/3
June      23     [(3 × 19) + (2 × 16) + (1 × 13)]/6 = 17
July      26     [(3 × 23) + (2 × 19) + (1 × 16)]/6 = 20 1/2
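The table's calculations can be reproduced with a short sketch (the function is illustrative, not from the slides):

```python
# Sketch of the 3-month weighted moving average with weights 3, 2, 1,
# dividing by the weight total (6), matching the calculations above.

def weighted_moving_average(history, weights):
    """weights[0] multiplies the most recent period."""
    recent = history[-len(weights):][::-1]     # newest first
    return sum(w * a for w, a in zip(weights, recent)) / sum(weights)

sales = [10, 12, 13, 16, 19, 23, 26]           # Jan-Jul
for month in range(3, len(sales)):
    f = weighted_moving_average(sales[:month], [3, 2, 1])
    print(f"forecast for month {month + 1}: {f:.3f}")
# forecasts: 12.167 (Apr), 14.333 (May), 17.000 (Jun), 20.500 (Jul)
```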
Weighted average vs. simple moving average
[Figure: demand Jan–Aug. For a 6-month SMA, attributing equal weights to all past data, we miss the downward trend.]
3. Exponential Smoothing (ES)

Ft = Ft-1 + α(At-1 − Ft-1)

Forecast today = forecast yesterday + α × (forecast error yesterday)
• Each new forecast is equal to the previous forecast plus a percentage of the previous error.
• Today's forecast depends on yesterday's (time-wise dependence, strong memory), corrected by the forecast error.
• Therefore, we give more weight to the more recent time periods when forecasting.
• α = smoothing constant = the percentage of the forecast error that is applied.
Exponential Smoothing

New forecast = last period's forecast + α(last period's actual demand − last period's forecast)

Ft = Ft-1 + α(At-1 − Ft-1)

where Ft   = new forecast
      Ft-1 = previous forecast
      α    = smoothing (or weighting) constant (0 ≤ α ≤ 1)
Exponential Smoothing (Example)
Example: F(3) = F(2) + α(A(2) − F(2)) = 100 + 0.2 × (80 − 100) = 96.

Exponential smoothing method (sales are actual demand, A in the equation), α = 0.2.
A forecast for period 1 must be available before starting the calculations; if it is not given, set it equal to the sales of period 1.

Month  Sales  Forecast
1      100    100.00
2      80     100.00
3      90     96.00
4      110    94.80
5      100    97.84
6      110    98.27
7      95     100.62
8      115    99.50
9      120    102.60
10     90     106.08
11     105    102.86
12     110    103.29
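The table can be reproduced with a short sketch (the function name is mine; the seed rule, using period 1's sales as the first forecast, is the one stated above):

```python
# Sketch reproducing the alpha = 0.2 table above. The first forecast is
# seeded with period 1's actual sales, as the slide's comment describes.

def exponential_smoothing(actuals, alpha):
    """F(t) = F(t-1) + alpha * (A(t-1) - F(t-1)), with F(1) = A(1)."""
    forecasts = [actuals[0]]
    for i in range(1, len(actuals)):
        prev = forecasts[-1]
        forecasts.append(prev + alpha * (actuals[i - 1] - prev))
    return forecasts

sales = [100, 80, 90, 110, 100, 110, 95, 115, 120, 90, 105, 110]
print([round(f, 2) for f in exponential_smoothing(sales, 0.2)])
# matches the table to within a cent (the slide rounds at each step)
```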
Example of Exponential Smoothing
(The forecast for each period is computed from the previous period's actual demand and forecast.)

Period  Actual  Forecast (α = 0.1)  Error (α = 0.1)  Forecast (α = 0.4)  Error (α = 0.4)
1       42
2       40      42.00               -2.00            42.00               -2.00
3       43      41.80                1.20            41.20                1.80
4       40      41.92               -1.92            41.92               -1.92
5       41      41.73               -0.73            41.15               -0.15
6       39      41.66               -2.66            41.09               -2.09
7       46      41.39                4.61            40.25                5.75
8       44      41.85                2.15            42.55                1.45
9       45      42.07                2.93            43.13                1.87
10      38      42.36               -4.36            43.88               -5.88
11      40      41.92               -1.92            41.53               -1.53
12              41.73                                40.92

Equivalently: Ft = αAt-1 + (1 − α)Ft-1
Picking a Smoothing Constant: Responsiveness vs. Smoothing
• The smoothing constant determines how quickly the forecast adjusts to error.
• The closer α is to zero, the slower the forecast will be to adjust to forecast errors.
• Conversely, the closer α is to 1.00, the greater the responsiveness to the actual observations and the less the smoothing.
• Select a smoothing constant that balances the benefit of responding to real changes, if and when they occur, against the benefit of smoothing out random variation.

Ft = Ft-1 + α(At-1 − Ft-1) = αAt-1 + (1 − α)Ft-1
Why use exponential smoothing?
[Figure: trend patterns such as parabolic, exponential, and growth curves]
[Figure/table: actual demand Feb–May (1353, 1305, 1275, 1210) and a June forecast, comparing α = 0.2, α = 0.8, and trend-adjusted smoothing with α = 0.8, δ = 0.5; columns At, Ft, Tt, FITt]
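The "α = 0.8, δ = 0.5" series in the figure appears to be trend-adjusted exponential smoothing (FIT). The slide does not show the update equations, so the sketch below uses the standard ones, with assumed starting values F1 = 1353 and T1 = 0:

```python
# Hedged sketch of trend-adjusted exponential smoothing (FIT). The
# update equations are the standard ones; the initial forecast and
# trend values are assumptions, not taken from the slide.

def fit_forecast(actuals, alpha, delta, f1, t1):
    """F(t)   = alpha*A(t-1) + (1 - alpha)*(F(t-1) + T(t-1))
       T(t)   = delta*(F(t) - F(t-1)) + (1 - delta)*T(t-1)
       FIT(t) = F(t) + T(t)"""
    F, T = [f1], [t1]
    for i in range(1, len(actuals)):
        f = alpha * actuals[i - 1] + (1 - alpha) * (F[-1] + T[-1])
        t = delta * (f - F[-1]) + (1 - delta) * T[-1]
        F.append(f)
        T.append(t)
    return [f + t for f, t in zip(F, T)]

demand = [1353, 1305, 1275, 1210]   # Feb-May actuals from the figure
fits = fit_forecast(demand, alpha=0.8, delta=0.5, f1=1353, t1=0)
```

The trend term lets the forecast track a steady decline like this one instead of lagging behind it, which is the usual reason to prefer FIT over plain smoothing on trended data.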
Seasonal variations: the multiplicative seasonal model can adjust trend data for seasonal variations in demand (e.g., jet skis, snowmobiles).
[Figure: monthly demand index, roughly 70–110, Jan–Dec]
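A minimal sketch of the multiplicative seasonal model: each month's index is its demand divided by the overall average, and a forecast is seasonalized by multiplying by that index. The demand numbers here are invented for illustration:

```python
# Sketch of the multiplicative seasonal model described above.
# The monthly demand figures are invented for illustration only.

monthly_demand = {"Jan": 80, "Feb": 85, "Mar": 80, "Apr": 110,
                  "May": 115, "Jun": 120, "Jul": 100, "Aug": 110,
                  "Sep": 85, "Oct": 75, "Nov": 85, "Dec": 80}

# Seasonal index = month's demand / average demand across all months.
overall_avg = sum(monthly_demand.values()) / 12
index = {m: d / overall_avg for m, d in monthly_demand.items()}

# Seasonalize an assumed deseasonalized forecast of 1200 units/year.
annual_forecast = 1200
seasonal_forecast = {m: round(annual_forecast / 12 * i, 1)
                     for m, i in index.items()}
print(seasonal_forecast["Jun"])   # 128.0: June peaks above the average
```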
How can we compare across forecasting models?

Mean Absolute Deviation:      MAD = Σ|Yi − Ŷi| / n
Mean Squared Error:           MSE = Σ(Yi − Ŷi)² / n
Root Mean Square Error:       RMSE = √MSE
Mean Absolute Percent Error:  MAPE = (100/n) Σi=1..n |Yi − Ŷi| / Yi

where Y = actual demand, Ŷ = forecast demand, n = number of observations, i = a given time period.
Measuring Accuracy, examples

Period | Actual | Forecast | Error (A − F) | |error| | error² | (|error|/Actual) × 100
[Table: error calculations for eight periods]

MAPE = Σ[(|e|/Actual) × 100]/n = 10.26%/8 = 1.28%
Measuring Accuracy (continued)
• From a computational standpoint, the difference between these measures is that MAD weights all errors evenly, MSE weights errors according to their squared values, and MAPE weights errors according to relative error.
• "Relative error" means relative to the size of the actual value: an error of 10 against an actual of 15 is far more serious than an error of 10 against an actual of 1,000.
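The measures above can be computed with a short sketch; the sample data is taken from periods 2–5 of the earlier α = 0.1 example:

```python
# Sketch computing MAD, MSE, RMSE, and MAPE as defined above, using
# periods 2-5 of the earlier alpha = 0.1 example as sample data.
from math import sqrt

def accuracy(actual, forecast):
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n                # even weights
    mse = sum(e * e for e in errors) / n                 # squared weights
    rmse = sqrt(mse)
    mape = 100 / n * sum(abs(e) / a for e, a in zip(errors, actual))
    return mad, mse, rmse, mape

actual = [40, 43, 40, 41]
forecast = [42, 41.8, 41.92, 41.73]
mad, mse, rmse, mape = accuracy(actual, forecast)
print(round(mad, 4))   # 1.4625
```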
MFE & MAD: An Analogy
[Figure: dartboard analogy, two panels. In one, on average the arrows hit the bullseye even though individual shots scatter widely (so much for averages!). In the other, the forecasts are inaccurate and biased.]