AE 413 Lecture 4 - Forecasting Typologies 2021-2022

The document discusses various forecasting techniques including qualitative and quantitative methods. Qualitative methods are subjective and include expert opinions, surveys, and Delphi method. Quantitative methods use past data patterns and include time series models like moving averages, exponential smoothing, and trend estimation. Time series decomposition identifies trends, seasonality, cycles, and random variations in data.


SCHOOL OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF AGRICULTURAL ENGINEERING


AE 413 Lecture 4 – Forecasting Fundamentals

Sectional Course Outline: Forecasting Typologies: time series techniques – simple moving averages, exponential smoothing; regression-based techniques – causal models.

INTRODUCTION

Forecasting is the process of making statements about events whose actual outcomes
(typically) have not yet been observed. Prediction is a similar, but more general term.
Both might refer to formal statistical methods employing time series, cross-sectional or
longitudinal data, or alternatively to the less formal judgmental methods.

Usage can differ between areas of application: for example, in hydrology, the terms
"forecast" and "forecasting" are sometimes reserved for estimates of values at certain
specific future times, while the term "prediction" is used for more general estimates, such
as the number of times floods will occur over a given (long) period.

An important, albeit often ignored, aspect of forecasting is the relationship it holds with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future should look like.

Forecast: A prediction, projection, or estimate of some future activity, event, or occurrence.

Risk and uncertainty are central to forecasting and prediction; it is generally considered
good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the
data must be up to date in order for the forecast to be as accurate as possible.

Forecasting activities can be categorized according to two major factors – how far into
the future we are required to forecast and the methodology by which we estimate the
future outcomes.

In terms of time frame, forecasts are classified as short-range (normally forecasting one or two periods into the future), medium-range (beyond one or two periods and up to about one year), and long-range (generally beyond one to two years). Whereas short- and medium-range forecasts could be seen as tactical, long-range forecasts are more strategic.

TYPES OF FORECASTING METHODS

There are two categories, namely qualitative and quantitative methods. Qualitative
forecasting methods or techniques are subjective in nature, based on the opinion,
intuition, experience and judgment of consumers or experts; they are appropriate when
past data are not available and usually applied to intermediate- or long-range decisions.
Examples of qualitative forecasting methods include informed opinion and judgment, the
Delphi method, market research, and historical life-cycle analogy.

Qualitative forecasting methods

Executive Opinion: An approach in which a group of managers meet and collectively develop a forecast.

Market Survey: An approach that uses interviews and surveys to judge the preferences of customers and to assess demand.

Sales Force Composite: An approach in which each salesperson estimates sales in his or her region.

Delphi Method: An approach in which consensus agreement is reached among a group of experts.

Quantitative forecasting methods

Time-Series Models: Look at past patterns of data and attempt to predict the future based upon the underlying patterns contained within those data.

Associative Models: Often called causal models; assume that the variable being forecasted is related to other variables in the environment, and try to project based upon those associations.

Time Series Methods

A time series is a historical record or list of the behavior of a variable over time.
Therefore, time series methods use historical data as the basis of estimating future
outcomes.

Decomposition of a Time Series: Patterns that may be present in a time series include the
following:

Trend: The smooth underlying component that represents the general long-term movement of the data values; data exhibit a steady growth or decline over time.

Seasonality: Wave-like variations in which data exhibit upward and downward swings over a short to intermediate time frame (most notably within a year).

Cycles: Wave-like patterns, similar to a sine wave, in which data exhibit upward and downward swings over a very long time frame, with a duration longer than one year.

Random variations: Erratic and unpredictable variation in the data over time with no discernible pattern. This is the 'noise' around one or more of the other three patterns, and is caused by outside factors that are generally not predictable.

The time series methods include techniques such as simple moving averages, weighted
moving averages, exponential smoothing, extrapolation, linear prediction and trend
estimation whose characteristics are outlined in the table below:

Time Series Models

Naïve: Uses last period's actual value as a forecast.

Simple Mean (Average): Uses an average of all past data as a forecast.

Simple Moving Average: Uses an average of a specified number of the most recent observations, with each observation receiving the same emphasis (weight).

Weighted Moving Average: Uses an average of a specified number of the most recent observations, with each observation receiving a different emphasis (weight).

Exponential Smoothing: A weighted average procedure with weights declining exponentially as data become older.

Trend Projection: A technique that uses the least squares method to fit a straight line to the data.

Seasonal Indexes: A mechanism for adjusting the forecast to accommodate any seasonal patterns inherent in the data.

To demonstrate the mechanics of some of the above techniques, we will use the
following hypothetical set of data representing sales in millions of shillings of a
product during the past year

Month        Sales (in Millions, Tsh)
January      23
February     25
March        31
April        27
May          35
June         33
July         52
August       45
September    40
October      46
November     41
December     45

(A) Simple Moving Average


We divide time series into segments or windows of length n, such that the first
window consists of observations Y1, Y2, Y3,…,Yn, where Yi represents the ith
observation in the time series. The next window consists of observations Y2, Y3,
Y4,….,Yn+1, and so on. The moving average will be the arithmetic average of the
observations within each window or segment. The arithmetic average of each
window is then considered to be the forecast for the period just beyond that
window. Thus, using Ft to represent a forecast for any period t, we can use the
following formula to calculate the moving average forecasts:

Ft = (Yt−1 + Yt−2 + … + Yt−n) / n

Using, for example, a three-month window:

F4 = (Y1 + Y2 + Y3)/3 = (23 + 25 + 31)/3 = 26.33, and

F5 = (Y2 + Y3 + Y4)/3 = (25 + 31 + 27)/3 = 27.66, etc.

The following table shows the moving average forecasts using the
three - and five- month windows using the same logic:

______________________________________________________

Month Yt (Sales) Ft(three-month) Ft (five-month)


______________________________________________________
January 23 - -
February 25 - -
March 31 - -
April 27 26.33 -
May 35 27.66 -
June 33 31 28.2
July 52 31.66 30.2
August 45 40 35.6
September 40 43.33 38.4
October 46 45.66 41
November 41 43.66 43.2
December 45 42.33 44.8
- - 44 43.4

7
Note: 1. If we draw a graph of Sales and the Three- and Five-Month moving
average forecasts superimposed to see the relative patterns of each we will notice
that the three-month moving average forecasts are much smoother than the actual
sales and even smoother are the five-month moving averages; 2. The longer the
window, the more likely we are to see the underlying long term trend; 3. The
bottom line is that through proper choice of the moving average length, we can
adequately remove random variations from our forecasts while controlling the
degree to which the other components are smoothed.
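As an illustrative sketch (not part of the original lecture; the list and function names are our own), the three- and five-month moving average forecasts above can be reproduced in a few lines of Python:

```python
# Simple moving average: the forecast for a period is the arithmetic
# mean of the previous n observations (the "window").
def moving_average_forecasts(sales, n):
    # Forecasts start at period n+1; the last entry is the forecast
    # for the period just beyond the data (the next January here).
    return [sum(sales[t - n:t]) / n for t in range(n, len(sales) + 1)]

# Monthly sales (millions of Tsh), January through December
sales = [23, 25, 31, 27, 35, 33, 52, 45, 40, 46, 41, 45]

three_month = moving_average_forecasts(sales, 3)  # first entry: (23+25+31)/3
five_month = moving_average_forecasts(sales, 5)   # first entry: (23+25+31+27+35)/5
```

Rounded to two decimals, the first three-month forecast is 26.33 (April) and the last is 44 (next January), matching the table.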

(B) Weighted Moving Average


This is a slight modification of the simple moving average that allows for more responsiveness to current observations. Each observation within a window is weighted by a different amount, with greater weight put on the later observations, so the arithmetic average is biased towards the more recent observations.
If we let w1, w2, …. , wn be the weights applied to the n values within a window,
the moving average formula can be stated as:

Ft = (w1·Yt−1 + w2·Yt−2 + … + wn·Yt−n) / (w1 + w2 + … + wn)

Note that we are now dividing the weighted sum of the data values in a particular
window by the sum of the weights utilized.
For the sales data we have, suppose we want to calculate a three-month weighted
moving average using weights of 4, 3, and 2; our first forecast would be:

F4 = (2(23) + 3(25) + 4(31)) / (2 + 3 + 4) = 27.22

Note that this value (27.22) is much closer to the actual value for period 4 (27)
than was our simple moving average forecast (26.33) developed earlier. The table

below lists all the weighted forecasts calculated using the 4, 3, and 2 weights. By
trying different combinations of weights and comparing the resulting moving
average forecasts to the actual time series values, a forecaster may eventually
arrive at a set of weights that appears to be optimal for the particular time series.

______________________________________________________
Month Yt (Sales) Ft (4-3-2 weighted)
______________________________________________________
January 23 -
February 25 -
March 31 -
April 27 27.22
May 35 27.88
June 33 31.44
July 52 32.33
August 45 41.88
September 40 44.66
October 46 44.33
November 41 43.77
December 45 42.44
- - 43.88
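The weighted moving average can likewise be sketched in Python (illustrative only; names are our own), using the 4-3-2 weights with the largest weight on the most recent month:

```python
# Weighted moving average: each observation in the window gets its own
# weight; the weighted sum is divided by the sum of the weights.
def weighted_ma_forecasts(sales, weights):
    # weights are listed oldest-to-newest within the window
    n, total = len(weights), sum(weights)
    return [
        sum(w * y for w, y in zip(weights, sales[t - n:t])) / total
        for t in range(n, len(sales) + 1)
    ]

sales = [23, 25, 31, 27, 35, 33, 52, 45, 40, 46, 41, 45]
forecasts = weighted_ma_forecasts(sales, [2, 3, 4])  # weight 4 on the newest
```

The first forecast works out to 27.22 for April, matching the hand calculation above.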

(C) Simple Exponential Smoothing


This is another form of weighted forecasting technique. In order to calculate a
forecast for some period t+1, we weight the observation in period t and the
forecast in period t and add these together. If we let α and (1−α) represent the
weights, where α is between 0 and 1, the formula for exponential smoothing
forecasts is:
Ft+1 = αYt + (1 – α)Ft

We begin in period 1 by letting F1 = Y1 and then we apply the formula for each
subsequent period. As an example, suppose that α = 0.2 and (1 – α) = 0.8, for our
prototype example:
For period 1, we have:
F1 = Y1 = 23
For period 2 we get
F2 = 0.2(Y1) + 0.8(F1) = 0.2(23) + 0.8(23) = 23
For period 3 we get
F3 = 0.2(Y2) + 0.8(F2) = 0.2(25) + 0.8(23) = 23.4
For period 4, we get
F4 = 0.2(Y3) + 0.8(F3) = 0.2(31) + 0.8(23.4) = 24.92 and so on.

The table below lists the rest of the exponential smoothing forecasts for the
prototype example using the value of α = 0.2.

______________________________________________________
Month Yt (Sales) Ft (α = 0.2)
______________________________________________________
January 23 -
February 25 23.00
March 31 23.40
April 27 24.92
May 35 25.34
June 33 27.27
July 52 28.42
August 45 33.14
September 40 35.51
October 46 36.41
November 41 38.33
December 45 38.86
- - 40.10

Note: (a) This method is called exponential because, whereas the moving average uses only a window of data to form a forecast, the exponential smoothing model uses all prior data, with older data reflected at an exponentially decreasing rate. (b) It should be obvious that the extremes α = 0 and α = 1 represent total smoothing and no smoothing, respectively. (c) In practice, values of α are derived by experimenting with different values until we find one that is satisfactory.

JUDGING THE ACCURACY OF FORECASTING MODELS

The most common approach to a numerical measure of forecast accuracy is to work with
some function of the forecast errors or residuals. For any period t for which we have both
an actual value and a forecast, the error or residual (et) is the difference between the two.
That is:
et =Yt − Ft

The table below shows the forecast errors in columns 4 and 6 for the three- and five-month simple moving averages of the prototype example. By eyeballing the errors, it is still difficult to say which of the two methods is better than the other. Two measures or functions of the forecast errors that may make this task a little easier are the mean square error (MSE) and the mean absolute deviation (MAD).
______________________________________________________

Month Yt Ft(3-mo) Yt − Ft(3) Ft (5-mo) Yt − Ft(5)


____________________________________________________
Jan 23 - - - -
Feb 25 - - - -
Mar 31 - - - -
Apr 27 26.33 0.67 - -
May 35 27.66 7.34 - -

Jun 33 31 2.00 28.2 4.8
Jul 52 31.66 20.34 30.2 21.8
Aug 45 40 5.00 35.6 9.4
Sept 40 43.33 −3.33 38.4 1.6
Oct 46 45.66 0.34 41 5.0
Nov 41 43.66 −2.66 43.2 −2.2
Dec 45 42.33 2.67 44.8 0.2
- - 44 - 43.4 -
---------------------------------------------------------------------------
Residual: MSE 58.05* 88.44
MAD 4.93* 6.43

Mean Square Error (MSE)


As the name implies, if we look at the average of the squared errors, this should
provide some indications of the relative goodness of two or more methods. We
say relative goodness because the actual value of MSE may not mean much. Is a
MSE of 400 good or bad? We do not know. If the actual values and forecasts are
small numbers, 400 may represent a large average squared error. But if the values
themselves are in hundreds, 400 may represent a very small average squared
deviation. However, we can be sure that an average deviation of 400 for one
method is larger than say, an average deviation of 200 for another method. The
formula for the MSE is:

MSE = Σt (Yt − Ft)² / n, where n is the number of error terms or residuals available.

The MSE of 58.05 for the three-month moving average forecasts and an MSE of
88.44 for the five-month moving average forecasts, indicate that the three-month
window seems to be more appropriate.

Mean Absolute Deviation (MAD)
In MAD instead of squaring each error we take the absolute value of each.
Consequently MAD does not penalize large deviations to the same degree as the
MSE. The formula for the MAD is:

MAD = Σt |Yt − Ft| / n

The MAD values of 4.93 and 6.43 for the three- and five-month moving
averages forecasts again show that the three-month window seems
better.
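Both measures can be checked with a short Python sketch (illustrative; names are our own), pairing the actual sales against the three-month moving average forecasts from the table:

```python
# MSE and MAD over the periods where both an actual value and a
# forecast exist; the error is e_t = Y_t - F_t.
def mse(actuals, forecasts):
    errors = [y - f for y, f in zip(actuals, forecasts)]
    return sum(e ** 2 for e in errors) / len(errors)

def mad(actuals, forecasts):
    errors = [y - f for y, f in zip(actuals, forecasts)]
    return sum(abs(e) for e in errors) / len(errors)

sales = [23, 25, 31, 27, 35, 33, 52, 45, 40, 46, 41, 45]
# Three-month moving average forecasts for April..December (from the table)
three_mo = [26.33, 27.66, 31, 31.66, 40, 43.33, 45.66, 43.66, 42.33]
# sales[3:] are the actuals for the same months (April..December)
```

These reproduce the table's MSE of 58.05 and MAD of 4.93 for the three-month window.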

CAUSAL METHODS/MODELS

These forecasting methods try to identify the underlying factors that might influence the variable being forecast, taking into account past relationships between variables. The method attempts to forecast by relating the behavior of the variable of interest to the behavior of one or more other variables that we believe to have a cause-and-effect relationship with it.

Several informal methods used in causal forecasting do not employ strict algorithms, but
instead use the judgment of the forecaster. Some forecasts take account of past
relationships between variables: if one variable has, for example, been approximately
linearly related to another for a long period of time, it may be appropriate to extrapolate
such a relationship into the future, without necessarily understanding the reasons for the
relationship.

Regression analysis includes a large group of methods for predicting future values of a
variable using information about other variables. These methods include both parametric
(linear or non-linear) and non-parametric techniques. In these regression-based techniques, we will only look at the simple regression, which is essentially a method of fitting a linear equation to a set of data points.
The equation will be of the form:
y = a + bx,
where a is the intercept of the fitted equation and b is the slope. The y is
the dependent variable and x the independent variable as we seek to
forecast y based on x.

The applicable formulas are:

b = (Σxy − n·x̄·ȳ) / (Σx² − n·x̄²)

and

a = ȳ − b·x̄

where n represents the number of pairs of x and y values available in the data set, and x̄ and ȳ represent the means of x and y, respectively.

Note that regression-based methods can be used either as projective or as causal models. Let us explore the method as a causal model by introducing another dimension into our prototype example. Suppose we have information on how much money was spent on advertising in each of the months preceding the sales we already know, as given in the table below.
Month Sales (Mil Tsh) Prior Advert (Mil Tsh )
January 23 10
February 25 10.5
March 31 11
April 27 11
May 35 12
June 33 12
July 52 13
August 45 13
September 40 12
October 46 13
November 41 12
December 45 13

Let us assume that the advertising had influenced the sales and therefore there is a cause-
and-effect relationship. We can then treat sales as y and advertising expenses as x and the
regression model for this relationship will be as follows:

Month x y xy x2 y2
Jan 10 23 230 100 529
Feb 10.5 25 262.5 110.25 625
Mar 11 31 341 121 961
Apr 11 27 297 121 729
May 12 35 420 144 1225
Jun 12 33 396 144 1089
Jul 13 52 676 169 2704
Aug 13 45 585 169 2025
Sept 12 40 480 144 1600
Oct 13 46 598 169 2116
Nov 12 41 492 144 1681
Dec 13 45 585 169 2025

---------------------------------------------------------------------------
Total 142.5 443 5,362.5 1,704.25 17309

And, therefore:

x̄ = 142.5/12 = 11.875

ȳ = 443/12 = 36.917

b = (5362.5 − (12)(11.875)(36.917)) / (1704.25 − (12)(11.875)²) = 8.44

a = 36.917 − (8.44)(11.875) = −63.308

Therefore, the causal model for forecasting sales (yt) based on the amount of advertising expenses in the prior period (xt) is:

yt = −63.308 + 8.44·xt
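The fit can be reproduced in Python (illustrative; names are our own). Note that carrying full precision gives b ≈ 8.446 and a ≈ −63.37; the hand-calculated 8.44 and −63.308 differ slightly because ȳ was rounded to 36.917 mid-calculation.

```python
# Least squares fit of y = a + b*x using the summation formulas:
# b = (Σxy − n·x̄·ȳ) / (Σx² − n·x̄²),  a = ȳ − b·x̄
def fit_line(x, y):
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    b = (sum(xi * yi for xi, yi in zip(x, y)) - n * x_bar * y_bar) / (
        sum(xi ** 2 for xi in x) - n * x_bar ** 2
    )
    a = y_bar - b * x_bar
    return a, b

advert = [10, 10.5, 11, 11, 12, 12, 13, 13, 12, 13, 12, 13]  # prior advertising
sales = [23, 25, 31, 27, 35, 33, 52, 45, 40, 46, 41, 45]     # sales
a, b = fit_line(advert, sales)
```

With the fitted line, a forecast for a month with 13 million Tsh of prior advertising is simply a + b·13.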

Judging how good the forecast is

We can judge how good the forecast is by calculating the MSE and MAD. In addition, in regression we look at the measure known as the coefficient of determination, r²:

r² = [ (nΣxy − ΣxΣy) / √((nΣx² − (Σx)²)(nΣy² − (Σy)²)) ]²

This measure tells us the percentage of variability in the time series that is accounted for by our regression model, with r² = 1.0 representing 100%. For the causal model we have just developed:

r² = [ (12(5362.5) − (142.5)(443)) / √((12(1704.25) − (142.5)²)(12(17309) − (443)²)) ]²

giving the value of r² ≈ 0.9 (please check this out!).
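The r² ≈ 0.9 claim can be checked numerically with a short Python sketch (illustrative; names are our own):

```python
import math

# Coefficient of determination via the summation form of Pearson's r:
# r = (nΣxy − ΣxΣy) / sqrt((nΣx² − (Σx)²)(nΣy² − (Σy)²)), then square it.
def r_squared(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi ** 2 for xi in x)
    syy = sum(yi ** 2 for yi in y)
    r = (n * sxy - sx * sy) / math.sqrt(
        (n * sxx - sx ** 2) * (n * syy - sy ** 2)
    )
    return r ** 2

advert = [10, 10.5, 11, 11, 12, 12, 13, 13, 12, 13, 12, 13]
sales = [23, 25, 31, 27, 35, 33, 52, 45, 40, 46, 41, 45]
r2 = r_squared(advert, sales)  # approximately 0.90
```

The exact value is about 0.901, so roughly 90% of the variability in sales is explained by prior advertising, confirming r² ≈ 0.9.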

HOD/AE413/EngOpnMangt/Jan2022.
