FORECASTING ERRORS

1. INTRODUCTION

• Demand forecasting is one of the most important activities in a supply chain: it provides all of the supply chain planning processes with the market information crucial for efficient supply chain management.

• Its performance is measured by forecasting error, which is defined using the difference between forecast and actual sales. Error measurement statistics play a critical role in tracking forecast accuracy, monitoring for exceptions, and benchmarking the forecasting process.

• Interpreting these statistics can be tricky, particularly when working with low-volume data or when trying to assess accuracy across multiple items.

• Measuring forecast accuracy (or error) is not an easy task, as there is no one-size-fits-all indicator. Only experimentation will show you which Key Performance Indicator (KPI) is best for you. As you will see, each indicator avoids some pitfalls but is prone to others.

• The first distinction we have to make is the difference between the precision of a forecast and its bias:
• Bias represents the historical average error. Basically, are your forecasts on average too high (i.e., you overshot the demand) or too low (i.e., you undershot the demand)? This gives the overall direction of the error.

• Precision measures how much spread you will have between the forecast and the actual value. The precision of a forecast gives an idea of the magnitude of the errors but not their overall direction.

• Error: Let's start by defining the error e(t) as the forecast F(t) minus the demand D(t): e(t) = F(t) - D(t). Note that with this definition, if the forecast overshoots the demand, the error will be positive, and if the forecast undershoots the demand, the error will be negative. Below we discuss some commonly used tools for measuring forecasting errors.

2. MEAN ABSOLUTE DEVIATION (MAD)

MAD (Mean Absolute Deviation) measures the 'size of the error in units'. It is calculated as the average of the unsigned errors. The MAD is a good statistic to use when analyzing the 'error for a single item'. However, if you aggregate MADs over multiple items, you need to be careful about high-volume products dominating the results (more on this later).

Forecast MAD is used in three contexts:

• As a basis for calculating the allowable margin of error for forecasts when checked using forecast alarms 1 and 2.

• To periodically recalculate alpha values when using forecast methods based on adaptive exponential smoothing.

• When calculating the standard deviation of the forecast error for setting the dimensions of safety stock.

METHODS FOR CALCULATION

MAD can be calculated in three ways, as follows:

First: Exponential Smoothing

MAD(i + 1) = α(i) * ABS(D(i) - F(i)) + (1 - α(i)) * MAD(i)

Second: Average Absolute Forecast Errors

MAD(i + 1) = (ABS(D(i) - F(i)) + ... + ABS(D(i - (n - 1)) - F(i - (n - 1)))) / n

Third: Average Absolute Error from Mean Demand

MAD(i + 1) = (ABS(D(i) - A(n)) + ... + ABS(D(i - (n - 1)) - A(n))) / n

Key:

MAD(i) = Mean absolute deviation for period i
α(i) = Smoothing constant for exponential smoothing in period i
ABS( ) = The absolute value of a difference (without minus sign)
D(i) = Base demand during period i
F(i) = Base forecast for period i
A(n) = Average demand over n periods
i = Period number
n = Number of periods included in calculating the mean

Base demand and base forecast represent demand and forecast, respectively, for a period adjusted for seasonal variations and for the effect of a varying number of workdays per period.

Example

The example below illustrates each of the calculation methods. The following data is entered for the product:

                Aug.  Sep.  Oct.  Nov.
Base Demand     120   145   138   129
Base Forecast   136   132   135   133

Applicable MAD for Nov.: 10
α-factor used: 0.3

The following MAD values will be calculated for December using the three methods listed:

By Exponential Smoothing

MAD(Dec.) = 0.3 * ABS(D(Nov.) - F(Nov.)) + (1 - 0.3) * MAD(Nov.)
= 0.3 * ABS(129 - 133) + 0.7 * 10
= 0.3 * 4 + 0.7 * 10 = 1.2 + 7 = 8.2

By Average Absolute Forecast Errors

MAD(Dec.) = (ABS(D(Nov.) - F(Nov.)) + ABS(D(Oct.) - F(Oct.)) + ABS(D(Sep.) - F(Sep.)) + ABS(D(Aug.) - F(Aug.))) / 4
= (4 + 3 + 13 + 16) / 4 = 9

By Average Absolute Error from Mean Demand

Mean demand A(4) = (120 + 145 + 138 + 129) / 4 = 133

MAD(Dec.) = (ABS(D(Nov.) - A(4)) + ABS(D(Oct.) - A(4)) + ABS(D(Sep.) - A(4)) + ABS(D(Aug.) - A(4))) / 4
= (4 + 5 + 12 + 13) / 4 = 8.5
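
A minimal Python sketch of the three methods, reproducing the December values above (variable names are illustrative):

demand = [120, 145, 138, 129]    # Aug..Nov base demand
forecast = [136, 132, 135, 133]  # Aug..Nov base forecast
alpha = 0.3                      # smoothing constant
mad_nov = 10                     # applicable MAD for Nov.
n = 4

# First: exponential smoothing of the latest absolute error
mad_exp = alpha * abs(demand[-1] - forecast[-1]) + (1 - alpha) * mad_nov

# Second: average absolute forecast error over the last n periods
mad_avg = sum(abs(d - f) for d, f in zip(demand, forecast)) / n

# Third: average absolute deviation of demand from mean demand
mean_demand = sum(demand) / n
mad_mean = sum(abs(d - mean_demand) for d in demand) / n

print(round(mad_exp, 2), mad_avg, mad_mean)  # 8.2 9.0 8.5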

Regardless of whether the data values are zero, positive, or negative, the MAD can never be negative. This is because the MAD is calculated by taking the absolute values of the deviations (or differences), whether from the forecast or from the mean, and then averaging these absolute values.

3. MEAN ABSOLUTE PERCENT ERROR (MAPE)

• MAPE measures the 'size of the error in percentage' terms. The Mean Absolute Percentage Error (MAPE) is a statistical measure of how accurate a forecast system is.

• It measures accuracy as a percentage: for each time period, take the absolute difference between actual and forecast, divide by the actual value, and then average these unsigned percentage errors across periods, as in the sketch below.
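
A minimal Python sketch, reusing the base demand and forecast data from the MAD example above (names illustrative):

demand = [120, 145, 138, 129]
forecast = [136, 132, 135, 133]

pct_errors = [abs(d - f) / d for d, f in zip(demand, forecast)]
mape = 100 * sum(pct_errors) / len(pct_errors)
print(round(mape, 2))  # 6.89 -> the error averages about 6.9% of actuals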

• Many organizations focus primarily on MAPE when assessing forecast accuracy. Most people are comfortable thinking in percentage terms, which makes the MAPE easy to interpret. It can also convey information when you don't know the item's demand volume. However, MAPE is scale sensitive and should not be used when working with low-volume data. Notice that because "Actual" is in the denominator of the equation, the MAPE is undefined when actual demand is zero. Furthermore, when the actual value is not zero but quite small, the MAPE will often take on extreme values. This scale sensitivity renders the MAPE close to worthless as an error measure for low-volume data.

• The performance of a forecasting model should be the baseline for determining whether your values are good. It is irresponsible to set arbitrary forecasting performance targets (such as MAPE < 10% is Excellent, MAPE < 20% is Good) without the context of the forecastability of the data.

• Since MAPE is a measure of error, high numbers are bad and low numbers are good. For reporting purposes, some companies translate it into an accuracy number by subtracting the MAPE from 100.

• MAPE = ABS(Actual - Forecast) / Actual, averaged over periods. Since the numerator is always non-negative, a negative MAPE can only come from the denominator, i.e., from negative actual values. If your MAPE is negative, you therefore have larger problems in the data than the MAPE calculation itself.

SUMMARY

Measuring forecast error can be a tricky business. The MAPE and the MAD are the most commonly used error measurement statistics; however, both can be misleading under certain circumstances. The MAPE is scale sensitive, and care needs to be taken when using it with low-volume items. All error measurement statistics can be problematic when aggregated over multiple items, and as a forecaster you need to think your approach through carefully when doing so.

4. OTHER MEASURES

MAPE and MAD are by far the most commonly used error measurement statistics. There is a slew of alternative statistics in the forecasting literature, many of which are variations on the MAPE and the MAD. A few of the more important ones are described below.

5. MAD/MEAN RATIO

The MAD/Mean ratio is an alternative to the MAPE that is better suited to intermittent and low-volume data. As stated previously, percentage errors cannot be calculated when the actual equals zero, and they can take on extreme values when dealing with low-volume data. These issues become magnified when you start to average MAPEs over multiple time series. The MAD/Mean ratio tries to overcome this problem by dividing the MAD by the mean, essentially rescaling the error to make it comparable across time series of varying scales. The statistic is calculated exactly as the name suggests: it is simply the MAD divided by the mean.
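
A minimal sketch, again reusing the illustrative data from the MAD example:

demand = [120, 145, 138, 129]
forecast = [136, 132, 135, 133]

mad = sum(abs(d - f) for d, f in zip(demand, forecast)) / len(demand)
mean = sum(demand) / len(demand)
print(round(mad / mean, 4))  # 0.0677 -> MAD is about 6.8% of mean demand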

6. GEOMETRIC MEAN RELATIVE ABSOLUTE ERROR (GMRAE)

GMRAE (Geometric Mean Relative Absolute Error) is used to measure out-of-sample forecast performance. It is calculated using the relative error between the naïve model (i.e., next period's forecast is this period's actual) and the currently selected model. A GMRAE of 0.54, for example, indicates that the size of the current model's error is only 54% of the size of the error generated using the naïve model on the same data set. Because the GMRAE is based on a relative error, it is less scale sensitive than the MAPE and the MAD.
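
A minimal sketch under the definition above (data illustrative; relative errors start at the second period because the naïve model needs a prior actual, and the geometric mean assumes no error ratio is zero):

import math

actual = [120, 145, 138, 129]
model_forecast = [136, 132, 135, 133]

# Absolute errors of the selected model and of the naive model
model_err = [abs(a - f) for a, f in zip(actual[1:], model_forecast[1:])]
naive_err = [abs(actual[i] - actual[i - 1]) for i in range(1, len(actual))]

ratios = [m / n for m, n in zip(model_err, naive_err)]
gmrae = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(round(gmrae, 2))  # 0.46 -> model errors are ~46% of the naive model's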

SYMMETRIC MEAN ABSOLUTE PERCENTAGE ERROR (SMAPE)

SMAPE (Symmetric Mean Absolute Percentage Error) is a variation on the MAPE that is calculated using the average of the absolute value of the actual and the absolute value of the forecast in the denominator. This statistic is preferred to the MAPE by some, and it was used as an accuracy measure in several forecasting competitions.
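
A minimal sketch of that definition (data illustrative; the averaged denominator keeps the statistic defined when the actual is zero, as long as the forecast is not also zero):

actual = [120, 145, 138, 129]
forecast = [136, 132, 135, 133]

terms = [abs(a - f) / ((abs(a) + abs(f)) / 2) for a, f in zip(actual, forecast)]
smape = 100 * sum(terms) / len(terms)
print(round(smape, 2))  # 6.78 -> slightly below the 6.89 MAPE on the same data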

7. TRACKING SIGNAL (TS)

• Tracking Signal (TS) is used to detect larger deviations (in both the plus and minus directions) of the forecast error, and is calculated by the following formula: TS = Accumulated Forecast Errors / Mean Absolute Deviation.

• A positive tracking signal denotes that demand is higher than the forecast; a negative signal indicates that demand is lower than the forecast. The better tracking signal is the one with smaller cumulative errors. Note that this convention takes the cumulative error as actual minus forecast, the opposite sign of the error e(t) defined in the introduction.

• The Tracking Signal is calculated as the ratio of the cumulative error to the mean absolute deviation. The cumulative error can be positive or negative, so the TS can be positive or negative as well. The purpose of a tracking signal is to detect a systematic change in the demand or a systematic error of the forecast method.

• The tracking signal is a measure used to evaluate whether actual demand reflects the assumptions in the forecast about the level, and perhaps the trend, of the demand profile. In Statistical Process Control, people study when a process is going out of control and needs intervention.

• Similarly, the tracking signal tries to flag a persistent tendency for actuals to run systematically higher or lower than forecast. If the forecast is consistently lower than the actual demand quantity, there is persistent under-forecasting and the Tracking Signal will be positive.

• TS should pass a threshold test to be significant. If the Tracking Signal is greater than 3.75, there is persistent under-forecasting; if it is less than -3.75, there is persistent over-forecasting. In essence, |TS| > 3.75 implies a forecast bias. A sketch follows at the end of this list.

• So what is magical about 3.75? It is an approximation based on the relationship between a normally distributed forecast error and the Mean Absolute Deviation.

• In general, MAD ≈ 0.8 * forecast error (measured by RMSE); for normally distributed errors, MAD = σ * sqrt(2/π) ≈ 0.8σ. At a 99% promised service level you work at roughly a 3-sigma control limit, and 3σ ≈ 3 / 0.8 = 3.75 MAD; hence 3.75 as the threshold for TS.
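
A minimal Python sketch of the tracking signal and its threshold test, using the actual-minus-forecast convention from the bullets above (data illustrative):

actual = [120, 145, 138, 129]
forecast = [136, 132, 135, 133]

errors = [a - f for a, f in zip(actual, forecast)]  # actual minus forecast
cum_error = sum(errors)                             # -4
mad = sum(abs(e) for e in errors) / len(errors)     # 9.0

ts = cum_error / mad
print(round(ts, 2))  # -0.44 -> |TS| well below 3.75, so no persistent bias
if ts > 3.75:
    print("persistent under-forecasting")
elif ts < -3.75:
    print("persistent over-forecasting")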
