
D.C. Park, M.A. El-Sharkawi, R.J. Marks II, L.E. Atlas and M.J. Damborg, "Electric load forecasting using an artificial neural network," IEEE Transactions on Power Systems, vol. 6, no. 2, pp. 442-449, May 1991.

Electric Load Forecasting Using An Artificial Neural Network

D.C. Park, M.A. El-Sharkawi, R.J. Marks II, L.E. Atlas and M.J. Damborg

Department of Electrical Engineering, FT-10
University of Washington
Seattle, WA 98195
Abstract

This paper presents an artificial neural network (ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24-hour ahead forecasts with a currently used forecasting technique applied to the same data.

Keywords - Load Forecasting, Artificial Neural Network

1 Introduction

Various techniques for power system load forecasting have been proposed in the last few decades. Load forecasting with lead times from a few minutes to several days helps the system operator to efficiently schedule spinning reserve allocation. In addition, load forecasting can provide information which can be used for possible energy interchange with other utilities. In addition to these economic reasons, load forecasting is also useful for system security. If applied to the system security assessment problem, it can provide valuable information to detect many vulnerable situations in advance.

Traditional computationally economic approaches, such as regression and interpolation, may not give sufficiently accurate results. Conversely, complex algorithmic methods with a heavy computational burden can converge slowly and may diverge in certain cases.

A number of algorithms have been suggested for the load forecasting problem. Previous approaches can generally be classified into two categories in accordance with the techniques they employ. One approach treats the load pattern as a time series signal and predicts the future load by using various time series analysis techniques [1-7]. The second approach recognizes that the load pattern is heavily dependent on weather variables, and finds a functional relationship between the weather variables and the system load. The future load is then predicted by inserting the predicted weather information into the predetermined functional relationship [8-11].

General problems with the time series approach include the inaccuracy of prediction and numerical instability. One of the reasons this method often gives inaccurate results is that it does not utilize weather information. There is a strong correlation between the behavior of power consumption and weather variables such as temperature, humidity, wind speed, and cloud cover. This is especially true in residential areas. The time series approach mostly utilizes computationally cumbersome matrix-oriented adaptive algorithms which, in certain cases, may be unstable.

Most regression approaches try to find functional relationships between weather variables and current load demands. The conventional regression approaches use linear or piecewise-linear representations for the forecasting functions. By a linear combination of these representations, the regression approach finds the functional relationships between selected weather variables and load demand. Conventional techniques assume, without justification, a linear relationship. The functional relationship between load and weather variables, however, is not stationary, but depends on spatio-temporal elements. The conventional regression approach does not have the versatility to address this temporal variation; rather, it will produce an averaged result. Therefore, an adaptable technique is needed.

In this paper, we present an algorithm which combines both the time series and regression approaches. Our algorithm utilizes a layered perceptron artificial neural network (ANN). As is the case with the time series approach, the ANN traces previous load patterns and predicts (i.e., extrapolates) a load pattern using recent load data. Our algorithm also uses weather information for modeling. The ANN is able to perform non-linear modeling and adaptation. It does not require the assumption of any functional relationship between load and weather variables in advance. We can adapt the ANN by exposing it to new data. The ANN is also currently being investigated as a tool in other power system problems such as security assessment, harmonic load identification, alarm processing, fault diagnosis, and topological observability [12-18].

90 SM 377-2 PWRS. A paper recommended and approved by the IEEE Power System Engineering Committee of the IEEE Power Engineering Society for presentation at the IEEE/PES 1990 Summer Meeting, Minneapolis, Minnesota, July 15-19, 1990. Manuscript submitted August 31, 1989; made available for printing April 24, 1990.

In the next section, we briefly review various load forecasting algorithms. These include both the time series and regression approaches. The generalized Delta rule used to train the ANN is shown in Section 3. In Section 4, we define the load forecasting problems, show the topologies of the ANN used in our simulations, and analyze the performance in terms of errors (the differences between actual and forecasted loads). A discussion of our results and conclusions are presented in Section 5.

2 Previous Approaches

2.1 Time Series

The idea of the time series approach is based on the understanding that a load pattern is nothing more than a time series signal with known seasonal, weekly, and daily periodicities. These periodicities give a rough prediction of the load at the given season, day of the week, and time of the day. The difference between the prediction and the actual load can be considered as a stochastic process. By the analysis of this random signal, we may get a more accurate prediction. The techniques used for the analysis of this random signal include Kalman filtering [1], the Box-Jenkins method [3,4], the auto-regressive moving average (ARMA) model [2], and the spectral expansion technique [5].

The Kalman filter approach requires estimation of a covariance matrix. The possible high nonstationarity of the load pattern, however, typically may not allow an accurate estimate to be made [6,7].

The Box-Jenkins method requires the autocorrelation function for identifying proper ARMA models. This can be accomplished by using pattern recognition techniques. A major obstacle here is its slow performance [2].

The ARMA model is used to describe the stochastic behavior of the hourly load pattern on a power system. The ARMA model assumes that the load at a given hour can be estimated by a linear combination of the loads of the previous few hours. Generally, the larger the data set, the better the result in terms of accuracy. A longer computational time for the parameter identification, however, is required.

The spectral expansion technique utilizes the Fourier series. Since the load pattern can be approximately considered as a periodic signal, it can be decomposed into a number of sinusoids with different frequencies. Each sinusoid with a specific frequency represents an orthogonal basis [19]. A linear combination of these orthogonal bases with proper coefficients can represent a perfectly periodic load pattern if the orthogonal bases span the whole signal space. However, load patterns are not perfectly periodic. This technique usually employs only a small fraction of the possible orthogonal basis set, and therefore is limited to slowly varying signals. Abrupt changes of weather cause fast variations of the load pattern which result in high frequency components in the frequency domain. Therefore, the spectral expansion technique cannot provide accurate forecasting in the case of fast weather changes unless a sufficiently large number of basis elements is used.

Generally, techniques in the time series approach work well unless there is an abrupt change in the environmental or sociological variables which are believed to affect the load pattern. If there is any change in those variables, the time series technique is no longer useful. On the other hand, these techniques use a large number of complex relationships, require a long computational time [20], and can result in numerical instabilities.

2.2 Regression

The general procedure for the regression approach is: 1) select the proper and/or available weather variables, 2) assume basic functional elements, and 3) find proper coefficients for the linear combination of the assumed basic functional elements.

Since temperature is the most important of all weather variables, it is the one used most commonly in the regression approach (possibly nonlinearly). However, if we use additional variables such as humidity, wind velocity, and cloud cover, better results should be obtained.

Most regression approaches use simple linear or piecewise-linear functions as the basic functional elements [8-11, 21-23]. A widely used functional relationship between load, L, and temperature, T, is

    L = Σ_i a_i T [ u(T - T_i2) - u(T - T_i1) ] + C        (1)

where

    u(T) = 1 if T ≥ 0, and u(T) = 0 otherwise,

and a_i, T_i1, T_i2, and C are constants, with T_i1 > T_i2 for all i.

The variables (L, a_i, T, T_i1, T_i2, and C) are temporally varying. The time dependency, however, is not explicitly noted for reasons of notational compactness.

After the basic functional forms of each subclass of temperature range are decided, the proper coefficients of the functional forms are found in order to make a representative linear combination of the basic functions.

Approaches other than regression have been proposed for finding the functional coefficients (a small illustrative fitting sketch follows this list):

1. Jabbour et al. [11] used a pattern recognition technique to find the nearest neighbor for the best 8 hourly matches for a given weather pattern. The corresponding linear regression coefficients were then used.

2. An application of the Generalized Linear Square Algorithm (GLSA) was proposed by Irisarri et al. [23]. The GLSA, however, is often faced with numerical instabilities when applied to a large data base.

3. Rahman et al. [10] have applied an expert system approach. The expert system takes advantage of the expert knowledge of the operator. It makes many subdivisions of the temperature range and forms different functional relationships according to the hour of interest. It shows fairly accurate forecasting. As pointed out in the discussion of [10] by Tsoi, it is not easy to extract a knowledge base from an expert, and it can be rather difficult for the expert to articulate their experience and knowledge.

4. Lu et al. [24] utilize the modified Gram-Schmidt orthogonalization process (MGSOP) to find an orthogonal basis set which spans the output signal space formed by the load information. The MGSOP requires a predetermined cardinality of the orthogonal basis set and a threshold value of error used in the adaptation procedure. If the cardinality of the basis set is too small or the threshold is not small enough, the accuracy of the approach suffers severely. On the other hand, if the threshold is too small, numerical instability can result. The MGSOP also has an ambiguity problem in the sequence of input vectors. Different expositions of the input vectors result in different sets of orthogonal bases and different forecasting outputs.
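To make the three-step regression procedure concrete, the sketch below fits the coefficients of a piecewise-linear temperature-to-load relationship of the general form of Equation 1 by ordinary least squares. It is illustrative only: the temperature breakpoints T_i1, T_i2 and the synthetic observations are assumptions, not values from the paper, and the works cited above use other fitting schemes (GLSA, MGSOP, etc.).

```python
import numpy as np

def step(x):
    """Unit step u(x): 1 for x >= 0, else 0 (as in Equation 1)."""
    return (x >= 0).astype(float)

def basis(T, bands):
    """Piecewise-linear basis: one column T * [u(T - T_i2) - u(T - T_i1)]
    per temperature band [T_i2, T_i1), plus a constant column for C."""
    cols = [T * (step(T - lo) - step(T - hi)) for (lo, hi) in bands]
    cols.append(np.ones_like(T))            # constant term C
    return np.column_stack(cols)

# Hypothetical temperature bands (degrees F) -- not from the paper.
bands = [(-20.0, 35.0), (35.0, 55.0), (55.0, 75.0), (75.0, 110.0)]

# Synthetic (temperature, load) observations standing in for utility data.
rng = np.random.default_rng(0)
T_obs = rng.uniform(10.0, 95.0, size=200)
L_obs = (900.0 - 6.0 * np.minimum(T_obs, 60.0)          # heating below ~60 F
         + 9.0 * np.maximum(T_obs - 60.0, 0.0)           # cooling above ~60 F
         + rng.normal(0.0, 15.0, size=200))              # noise

# Step 3 of the regression procedure: least-squares fit of the coefficients.
A = basis(T_obs, bands)
coef, *_ = np.linalg.lstsq(A, L_obs, rcond=None)

L_hat = A @ coef
print("fitted coefficients (a_1..a_4, C):", np.round(coef, 2))
print("mean abs. error:", round(np.mean(np.abs(L_hat - L_obs)), 2))
```

Because the fitted coefficients are fixed, a model of this kind captures only an averaged temperature-load relationship; this is the limitation that the adaptable ANN approach described next is meant to address.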
3 A Layered ANN

3.1 Architecture

An ANN can be defined as a highly connected array of elementary processors called neurons. A widely used model called the multi-layered perceptron (MLP) ANN is shown in Figure 1. The MLP type ANN consists of one input layer, one or more hidden layers and one output layer. Each layer employs several neurons and each neuron in a layer is connected to the neurons in the adjacent layer with different weights. Signals flow into the input layer, pass through the hidden layers, and arrive at the output layer. With the exception of the input layer, each neuron receives signals from the neurons of the previous layer, linearly weighted by the interconnection values between neurons. The neuron then produces its output signal by passing the summed signal through a sigmoid function [12-18].

[Figure 1: Structure of a Three-Layered Perceptron Type ANN (INPUT layer at the top, OUTPUT layer at the bottom)]

A total of Q sets of training data are assumed to be available. Inputs of {x_1, x_2, ..., x_Q} are imposed on the top layer. The ANN is trained to respond to the corresponding target vectors, {t_1, t_2, ..., t_Q}, on the bottom layer. The training continues until a certain stop-criterion is satisfied. Typically, training is halted when the average error between the desired and actual outputs of the neural network over the Q training data sets is less than a predetermined threshold. The training time required is dictated by various elements including the complexity of the problem, the number of data, the structure of the network, and the training parameters used.

3.2 ANN Training

In this paper, the generalized Delta rule (GDR) [25,26] is used to train a layered perceptron-type ANN. An output vector is produced by presenting an input pattern to the network. According to the difference between the produced and target outputs, the network's weights {W_ij} are adjusted to reduce the output error. The error at the output layer propagates backward to the hidden layer, until it reaches the input layer. Because of this backward propagation of error, the GDR is also called the error back propagation algorithm.

The output from neuron i, O_i, is connected to the input of neuron j through the interconnection weight W_ij. Unless neuron k is one of the input neurons, the net input to neuron k is

    net_k = Σ_i W_ik O_i        (2)

and the state of neuron k is

    O_k = f(net_k)        (3)

where f(x) = 1/(1 + e^(-x)), and the sum is over all neurons in the adjacent layer. Let the target state of the output neuron be t. Thus, the error at the output neuron can be defined as

    E = (1/2)(t - O_k)^2        (4)

where neuron k is the output neuron.

The gradient descent algorithm adapts the weights according to the gradient error, i.e.,

    ΔW_ij ∝ -∂E/∂W_ij        (5)

Specifically, we define the error signal as

    δ_j = -∂E/∂net_j        (6)

With some manipulation, we can get the following GDR:

    ΔW_ij = η δ_j O_i        (7)

where η is an adaptation gain. δ_j is computed based on whether or not neuron j is in the output layer. If neuron j is one of the output neurons,

    δ_j = (t_j - O_j) O_j (1 - O_j)        (8)

If neuron j is not in the output layer,

    δ_j = O_j (1 - O_j) Σ_k δ_k W_jk        (9)

In order to improve the convergence characteristics, we can introduce a momentum term with momentum gain α into Equation 7:

    ΔW_ij(n+1) = η δ_j O_i + α ΔW_ij(n)        (10)

where n represents the iteration index.

Once the neural network is trained, it produces very fast output for a given input. It only requires a few multiplications, additions, and evaluations of the sigmoid function [14].
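The following is a compact sketch of the generalized Delta rule of Equations 2-10 for a single-hidden-layer network of the kind used in this paper, here applied in batch form over the Q training patterns and without bias terms. The layer sizes, adaptation gain η, momentum gain α, and stopping threshold are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gdr(X, targets, n_hidden=5, eta=0.3, alpha=0.7, max_iter=5000, tol=1e-3):
    """Generalized Delta rule (error back propagation) with momentum.

    X: (Q, n_in) training inputs; targets: (Q, n_out) desired outputs in [0, 1].
    Returns the input->hidden and hidden->output weight matrices."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], targets.shape[1]
    W1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden))    # input -> hidden weights
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, n_out))   # hidden -> output weights
    dW1 = np.zeros_like(W1)
    dW2 = np.zeros_like(W2)

    for _ in range(max_iter):
        # Forward pass: Equations 2-3, layer by layer.
        H = sigmoid(X @ W1)           # hidden states O_j
        Y = sigmoid(H @ W2)           # output states O_k

        err = targets - Y
        if np.mean(err ** 2) < tol:   # stop-criterion on average error (tol is illustrative)
            break

        # Error signals: Equation 8 (output layer) and Equation 9 (hidden layer).
        delta_out = err * Y * (1.0 - Y)
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)

        # Weight updates with momentum: Equations 7 and 10.
        dW2 = eta * H.T @ delta_out + alpha * dW2
        dW1 = eta * X.T @ delta_hid + alpha * dW1
        W2 += dW2
        W1 += dW1
    return W1, W2

def forecast(X, W1, W2):
    """A trained network needs only matrix products and sigmoid evaluations."""
    return sigmoid(sigmoid(X @ W1) @ W2)
```

In practice the inputs and targets would be scaled into the sigmoid's (0, 1) output range before training; the paper does not detail its scaling, so none is shown here.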
4 Test Cases and Results

Hourly temperature and load data for the Seattle/Tacoma area in the interval of Nov. 1, 1988 - Jan. 30, 1989 were collected by the Puget Sound Power and Light Company. We used these data to train the ANN and test its performance. Our focus is on normal weekdays (i.e., no holidays or weekends).

Table 1 shows the five sets used to test the neural network. Each set contains 6 normal days. These test data were not used in the training process of the neural network. This approach of classifier evaluation is known as the jack-knife method.

[Table 1: Test Data Sets]

The ANN was trained to recognize the following cases:

Case 1: Peak load of the day
Case 2: Total load of the day
Case 3: Hourly load

where

    Peak load at day d = max( L(1,d), ..., L(24,d) )        (11)

    Total load at day d = Σ_{h=1}^{24} L(h,d)        (12)

and L(h,d) is the load at hour h on day d.

The neural network structures used in this paper, including the size of the hidden layer, were chosen from among several structures. The chosen structure is the one that gave the best network performance in terms of accuracy. In most cases, we found that adding one or two hidden neurons did not significantly affect the neural network accuracy.

To evaluate the resulting ANN's performance, the following percentage error measure is used throughout this paper:

    error = | actual load - forecasted load | / actual load × 100        (13)

4.1 Case 1

The topology of the ANN for the peak load forecasting is as follows:

Input neurons: T1(k), T2(k), and T3(k)
Hidden neurons: 5 hidden neurons
Output neuron: L(k)

where
k = day of predicted load,
L(k) = peak load at day k,
T1(k) = average temperature at day k,
T2(k) = peak temperature at day k,
T3(k) = lowest temperature at day k.

Table 2 shows the error (%) of each day in the test sets. The average error for all 5 sets is 2.04%.

[Table 2: Error (%) of Peak Load Forecasting]

4.2 Case 2

The topology of the ANN for the total load forecasting is as follows:

Input neurons: T1(k), T2(k), and T3(k)
Hidden neurons: 5 hidden neurons
Output neuron: L(k)

where
k = day of predicted load,
L(k) = total load at day k,
T1(k) = average temperature at day k,
T2(k) = peak temperature at day k,
T3(k) = lowest temperature at day k.

Table 3 shows the error (%) of each day in the test sets. The average error for all 5 sets is 1.68%.

[Table 3: Error (%) of Total Load Forecasting]

4.3 Case 3

The topology of the ANN for the hourly load forecasting with one hour of lead time is as follows:

Input neurons: k, L(k-2), L(k-1), T(k-2), T(k-1), and T̂(k)
Hidden neurons: 10 hidden neurons
Output neuron: L(k)

where
k = hour of predicted load,
L(x) = load at hour x,
T(x) = temperature at hour x,
T̂(x) = predicted temperature for hour x.

In the training stage, T(x) was used instead of T̂(x). The lead times of the predicted temperatures, T̂(x), vary from 16 to 40 hours.

Table 4 shows the error (%) of each day in the test sets. The average error for all 5 sets is found to be 1.40%. Note that each day's result is averaged over a 24 hour period.

[Table 4: Error (%) of Hourly Load Forecasting with One Hour Lead Time. (*: Predicted temperatures, T̂, are not available.)]
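A few lines suffice to express Equations 11-13. The hourly load values below are a hypothetical stand-in for the utility data, which are not reproduced here.

```python
import numpy as np

def peak_load(day_loads):
    """Equation 11: peak load of a day from its 24 hourly loads L(1,d)..L(24,d)."""
    return np.max(day_loads)

def total_load(day_loads):
    """Equation 12: total load of a day."""
    return np.sum(day_loads)

def percent_error(actual, forecast):
    """Equation 13: percentage error measure used throughout the paper."""
    return abs(actual - forecast) / actual * 100.0

# Hypothetical 24 hourly loads (MW) for one day -- not actual PSPL data.
loads = np.array([610, 595, 588, 590, 602, 640, 720, 815, 860, 870, 865, 850,
                  840, 835, 830, 845, 880, 905, 890, 860, 820, 760, 700, 650],
                 dtype=float)

print("peak load:", peak_load(loads))                    # max over the 24 hours
print("total load:", total_load(loads))                  # sum over h = 1..24
print("error (%):", round(percent_error(870.0, 852.6), 2))
```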

In order to find the effect of the lead time on the ANN load forecasting, we used set 2, whose performance in Table 4 was the closest to the average. The lead time was varied from 1 to 24 hours with a 3-hour interval. The topology of the ANN was as follows:

Input neurons: k, L(24,k), T(24,k), L(m,k), T(m,k), and T̂(k)
Hidden neurons: 10 hidden neurons
Output neuron: L(k)

where
k = hour of predicted load,
m = lead time,
L(x,k) = load x hours before hour k,
T(x,k) = temperature x hours before hour k,
T̂(k) = predicted temperature for hour k.

In the training stage, T(x) was used instead of T̂(x). The lead times of the predicted temperatures, T̂(x), vary from 16 to 40 hours.
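The sketch below shows how input vectors of this form could be assembled from hourly load and temperature histories. The array names and the flat hourly index are assumptions made for illustration, not the authors' data layout.

```python
import numpy as np

def lead_time_input(k, m, load, temp, temp_forecast):
    """Input vector for forecasting hour k with lead time m (hours):
    [k, L(24,k), T(24,k), L(m,k), T(m,k), T_hat(k)], where L(x,k) and T(x,k)
    are the load and temperature x hours before hour k."""
    return np.array([
        k % 24,                  # hour of the predicted load
        load[k - 24],            # L(24,k): load 24 hours earlier
        temp[k - 24],            # T(24,k): temperature 24 hours earlier
        load[k - m],             # L(m,k): most recent load available at lead time m
        temp[k - m],             # T(m,k): temperature m hours earlier
        temp_forecast[k],        # T_hat(k): predicted temperature for hour k
    ], dtype=float)

# Hypothetical hourly series indexed by a flat hour counter.
hours = 24 * 10
rng = np.random.default_rng(1)
load = 800 + 80 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 10, hours)
temp = 40 + 10 * np.sin(2 * np.pi * (np.arange(hours) - 6) / 24)
temp_forecast = temp + rng.normal(0, 2, hours)   # imperfect temperature forecast

x = lead_time_input(k=30, m=6, load=load, temp=temp, temp_forecast=temp_forecast)
print(x)
```

Such vectors, paired with the observed L(k), could then be fed to a trainer such as the GDR sketch in Section 3.2.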
Figure 2 shows examples of the hourly actual and forecasted loads with one-hour and 24-hour lead times. Figure 3 shows the average errors (%) of the forecasted loads with different lead times for test set 2.

[Figure 2: Hourly Load Forecasting and Actual Load (in MW) for (a) Jan. 24, 1989 and (b) Jan. 27, 1989 (solid: actual load, dash: 1-hour lead forecast, dot: 24-hour lead forecast)]

From Figure 3, the error gradually increases as the lead time grows. This is true up to 18 hours of lead time. One of the reasons for this error pattern is the periodicity of the temperature and load patterns. Even though they are not quite the same as those of the previous day, the temperature and system load are very similar to those of the previous day.

[Figure 3: Mean (m) and Standard Deviation (σ) of Errors vs. Lead Time (Hour)]

We compare our results with the prediction of Puget Sound Power and Light Co. (PSPL) in Figure 4. Since PSPL forecasts loads with lead times of 16 to 40 hours, there are 3 overlaps (18-, 21-, and 24-hour) with our results. As shown in Figure 4, the average errors for the 18-, 21- and 24-hour lead times are 2.79, 2.65, and 2.06%, respectively. This compares quite favorably with the errors of 2.72, 6.44, and 4.22% (18-, 21-, and 24-hour lead times) obtained by the current load forecasting technique using the same data from PSPL [27]. The current load forecasting method, in addition, uses cloud cover, opaque cover, and relative humidity information.

[Figure 4: Mean and Standard Deviation of Errors: ANN vs. Conventional Technique Used in PSPL, for 18-, 21-, and 24-hour lead times]

5 Conclusions

We have presented an electric load forecasting methodology using an artificial neural network (ANN). This technique was inspired by the work of Lapedes and Farber [28]. The performance of this technique is similar to that of the ANN with locally tuned receptive fields [29]. We find it notable that Moody and Darken's technique is remarkably similar to the estimation of Gaussian mixture models.

The results show that the ANN is suitable for interpolating among the load and temperature pattern data of the training sets to provide the future load pattern. In order to forecast the future load from the trained ANN, we need to use the recent load and temperature data in addition to the predicted future temperature. Compared to the other regression methods, the ANN allows more flexible relationships between temperature and load pattern. A more intensive comparison can be found in [30].

Since the neural network simply interpolates among the training data, it will give high errors on test data that are not close enough to any of the training data. In general, the neural network requires training data well spread in the feature space in order to provide highly accurate results. The training times required in our experiments vary, depending on the cases studied, from 3 to 7 hours of CPU time on a Sun SPARCstation 1. However, a trained ANN requires only 3 to 10 milliseconds for testing.

The neural network typically shows higher error on days when people have specific start-up activities, such as Monday (for example, on day 1 of set 1 in Table 2), or variant activities, such as during the holiday seasons (for example, on days 4 and 5 of set 3 in Table 3). In order to obtain more accurate results, we may need a more sophisticated topology for the neural network which can discriminate start-up days from other days.

We utilize only temperature information among the weather variables since it is the only information available to us. Use of additional weather variables such as cloud coverage and wind speed should yield even better results.

6 Acknowledgments

This work was supported by the Puget Sound Power and Light Co., the National Science Foundation, and the Washington Technology Center at the University of Washington. The authors thank Mr. Milan L. Bruce of the Puget Sound Power and Light Co. for his contribution.

References

[1] J. Toyoda, M. Chen, and Y. Inoue, "An Application of State Estimation to Short-Term Load Forecasting, Part 1: Forecasting Modeling; Part 2: Implementation," IEEE Tr. on Power App. and Sys., vol. PAS-89, pp. 1678-1688, Oct. 1970.
[2] S. Vemuri, W. Huang, and D. Nelson, "On-line Algorithms for Forecasting Hourly Loads of an Electric Utility," IEEE Tr. on Power App. and Sys., vol. PAS-100, pp. 3775-3784, Aug. 1981.
[3] G.E. Box and G.M. Jenkins, Time Series Analysis - Forecasting and Control, Holden-Day, San Francisco, 1976.
[4] S. Vemuri, D. Hill, and R. Balasubramanian, "Load Forecasting Using Stochastic Models," Paper No. TPI-B, Proc. of 8th PICA Conference, Minneapolis, Minn., pp. 31-37, 1973.
[5] W. Christiaanse, "Short-Term Load Forecasting Using General Exponential Smoothing," IEEE Tr. on Power App. and Sys., vol. PAS-90, pp. 900-910, Apr. 1971.
[6] A. Sage and G. Husa, "Algorithms for Sequential Adaptive Estimation of Prior Statistics," Proc. of IEEE Symp. on Adaptive Processes, State College, Pa., Nov. 1969.
[7] R. Mehra, "On the Identification of Variance and Adaptive Kalman Filtering," Proc. of JACC (Boulder, Colo.), pp. 494-505, 1969.
[8] P. Gupta and K. Yamada, "Adaptive Short-Term Forecasting of Hourly Loads Using Weather Information," IEEE Tr. on Power App. and Sys., vol. PAS-91, pp. 2085-2094, 1972.
[9] C. Asbury, "Weather Load Model for Electric Demand Energy Forecasting," IEEE Tr. on Power App. and Sys., vol. PAS-94, no. 4, pp. 1111-1116, 1975.
[10] S. Rahman and R. Bhatnagar, "An Expert System Based Algorithm for Short Term Load Forecast," IEEE Tr. on Power Systems, vol. 3, no. 2, pp. 392-399, May 1988.
[11] K. Jabbour, J. Riveros, D. Landbergen, and W. Meyer, "ALFA: Automated Load Forecasting Assistant," IEEE Tr. on Power Systems, vol. 3, no. 3, pp. 908-914, Aug. 1988.
[12] D. Sobajic and Y. Pao, "Artificial Neural-Net Based Dynamic Security Assessment for Electric Power Systems," IEEE Tr. on Power Systems, vol. 4, no. 1, pp. 220-228, Feb. 1989.
[13] M. Aggoune, M. El-Sharkawi, D. Park, M. Damborg, and R. Marks II, "Preliminary Results on Using Artificial Neural Networks for Security Assessment," Proc. of PICA, pp. 252-258, May 1989.
[14] M. El-Sharkawi, R. Marks II, M. Aggoune, D. Park, M. Damborg, and L. Atlas, "Dynamic Security Assessment of Power Systems Using Back Error Propagation Artificial Neural Networks," Proc. of 2nd Symp. on Expert Systems Applications to Power Systems, pp. 366-370, July 1989.
[15] H. Mori, H. Uematsu, S. Tsuzuki, T. Sakurai, Y. Kojima, and K. Suzuki, "Identification of Harmonic Loads in Power Systems Using an Artificial Neural Network," Proc. of 2nd Symp. on Expert Systems Applications to Power Systems, pp. 371-377, July 1989.
[16] E.H. Chan, "Application of Neural-Network Computing in Intelligent Alarm Processing," Proc. of PICA, pp. 246-251, May 1989.
[17] H. Tanaka, S. Matsuda, H. Ogi, Y. Izui, H. Taoka, and T. Sakaguchi, "Design and Evaluation of Neural Network for Fault Diagnosis," Proc. of 2nd Symp. on Expert Systems Application to Power Systems, pp. 378-384, July 1989.
[18] H. Mori and S. Tsuzuki, "Power System Topological Observability Analysis Using a Neural Network Model," Proc. of 2nd Symp. on Expert Systems Application to Power Systems, pp. 385-391, July 1989.
[19] N. Naylor and G. Sell, Linear Operator Theory, Holt, Rinehart and Winston, New York, 1971.
[20] M. Honig and D. Messerschmitt, Adaptive Filters: Structures, Algorithms, and Applications, Kluwer Academic Publishers, Hingham, Massachusetts, 1984.
[21] J. Davey, J. Saacks, G. Cunningham, and K. Priest, "Practical Application of Weather Sensitive Load Forecasting to System Planning," IEEE Tr. on Power App. and Sys., vol. PAS-91, pp. 971-977, 1972.
[22] R. Thompson, "Weather Sensitive Electric Demand and Energy Analysis on a Large Geographically Diverse Power System - Application to Short Term Hourly Electric Demand Forecasting," IEEE Tr. on Power App. and Sys., vol. PAS-95, no. 1, pp. 385-393, Jan. 1976.
[23] G. Irisarri, S. Widergren, and P. Yehsakul, "On-Line Load Forecasting for Energy Control Center Application," IEEE Tr. on Power App. and Sys., vol. PAS-101, no. 1, pp. 71-78, Jan. 1982.
[24] Q. Lu, W. Grady, M. Crawford, and G. Anderson, "An Adaptive Nonlinear Predictor with Orthogonal Escalator Structure for Short-Term Load Forecasting," IEEE Tr. on Power Systems, vol. 4, no. 1, pp. 158-164, Feb. 1989.
[25] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley Pub. Co. Inc., Reading, MA, 1989.
[26] D. Rumelhart, G. Hinton, and R. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, pp. 318-362, MIT Press, 1986.
[27] S. Mitten-Lewis, Short-Term Weather Load Forecasting Project Final Report, Puget Sound Power and Light Co., Bellevue, Washington, 1989.
[28] A. Lapedes and R. Farber, Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling, Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1987.
[29] J. Moody and C. Darken, "Learning with Localized Receptive Fields," Proc. of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988.
[30] L. Atlas, J. Connor, D. Park, M. El-Sharkawi, R. Marks II, A. Lippman, and Y. Muthusamy, "A Performance Comparison of Trained Multi-Layer Perceptrons and Trained Classification Trees," Proc. of the 1989 IEEE International Conference on Systems, Man, and Cybernetics, pp. 915-920, Nov. 1989.

Dong C. Park received his B.S. degree in Electronic Engineering in 1980 from Sogang University and the M.S. degree in Electrical Engineering in 1982 from the Korea Advanced Institute of Science and Technology, Seoul, Korea. From 1982 through 1985 he was with the Goldstar Central Research Laboratory. Since September 1985, he has been working toward the Ph.D. degree in the Department of Electrical Engineering at the University of Washington. His research interests include artificial neural network applications to nonlinear system modeling, signal processing and optical computing.

M. A. El-Sharkawi (S'76-M'80-SM'83) was born in Cairo, Egypt, in 1948. He received his B.Sc. in Electrical Engineering in 1971 from Cairo High Institute of Technology, Egypt. His M.A.Sc. and Ph.D. in Electrical Engineering were received from the University of British Columbia in 1977 and 1980, respectively. In 1980 he joined the University of Washington as a faculty member, where he is presently an associate professor. He is the Chairman of the IEEE Task Force on "Application of Artificial Neural Networks for Power Systems". His major areas of research include neural network applications to power systems, electric devices, high performance tracking control, and power system dynamics and control. Most of his research in these areas is funded by the US government and by public and private industrial organizations.

Robert J. Marks II received his Ph.D. in 1977 from Texas Tech University in Lubbock. He joined the faculty of the Department of Electrical Engineering at the University of Washington, Seattle, in December of 1977, where he currently holds the title of Professor. Prof. Marks was awarded the Outstanding Branch Councillor award in 1982 by IEEE and, in 1984, was presented with an IEEE Centennial Medal. He is President of the IEEE Council on Neural Networks and former Chair of the IEEE Neural Network Committee. He was also the co-founder and first Chair of the IEEE Circuits & Systems Society Technical Committee on Neural Systems & Applications. He is a Fellow of the Optical Society of America and a Senior Member of IEEE. He has over eighty archival journal and proceedings publications in the areas of signal analysis, detection theory, signal recovery, optical computing and artificial neural processing. Dr. Marks is a co-founder of the Christian Faculty Fellowship at the University of Washington. He is a member of Eta Kappa Nu and Sigma Xi.

Les E. Atlas (Member, IEEE) received his B.S.E.E. degree from the University of Wisconsin and his M.S. and Ph.D. degrees from Stanford University. He joined the University of Washington College of Engineering in 1984 and is currently an Associate Professor of Electrical Engineering. He is currently doing research in speech processing and recognition, neural network classifiers, and biologically-inspired signal processing algorithms and architectures. His research in these areas is funded by the National Science Foundation, the Office of Naval Research, and the Washington Technology Center. Dr. Atlas was a 1985 recipient of a National Science Foundation Presidential Young Investigator Award.

M. J. Damborg received his B.S. degree in Electrical Engineering in 1962 from Iowa State University, and the M.S. and Ph.D. degrees from the University of Michigan in 1963 and 1969, respectively. Since 1969, Dr. Damborg has been at the University of Washington, where he is now Professor of Electrical Engineering. His research interests concern analysis and control of dynamic systems with emphasis on power systems.
Discussion

O. A. Mohammed (Florida International University, Miami, FL): The authors are to be thanked for their excellent work applying this new ANN technique to load forecasting. I would like the authors to clarify or explain the following points:

1. The authors presented a new method for load forecasting which shows promise for providing accurate forecasts. This discussor feels that the ANN method would be adequate for providing the base forecast, which might be combined with an expert system approach to fine tune the load forecast for additional factors.

2. If one experiments with additional factors which may affect the load forecast, such as humidity, load inertia, wind velocity, etc., how much additional training time would be required compared with the data size?

3. The authors presented results for hourly load forecasts for weekdays but not weekends because of the variation in load pattern. Will this be handled by a separate neural network? And if so, how would it be combined with previous day forecasts, for example, to forecast Monday's load?

4. Have the authors experimented with ANN architectures other than the ones explained in the paper? It seems to this discussor that the proposed architectures will not work all the time, or may yield larger errors, because of the continual change in weather and load information. Perhaps a methodology is needed which updates the weights of the ANN based on new short-term weather and load information.

Manuscript received August 13, 1990.

M. A. El-Sharkawi and M. J. Damborg: The authors would like to thank the discusser for his interest and encouraging comments. The research work reported in this paper is preliminary. Several key issues, such as those raised by the discusser, need to be carefully addressed before a viable electric load forecasting system is deployed. The purpose of the paper, however, is to investigate the potential of the neural network (NN) in load forecasting. Future work should certainly address questions related to weather conditions, distinct load profiles, cold snaps, etc.

To respond to the specific issues raised by the reviewer, we would like to offer the following comments:

1. The role of expert systems in an NN environment, and vice versa, is a topic that is being proposed for several applications. In load forecasting applications, as an example, the selection of relevant training sets from the load and weather data base is currently accomplished manually and off-line. Also, the convergence of the NN is currently observed and controlled at only discrete training steps. These functions, for example, may be effectively accomplished by a supervisory layer employing a rule-based system.

2. Other weather variables such as wind speed and humidity may result in more accurate load forecasting. The problem, however, is that the forecasting errors of these variables are usually high, which may lead to biased training or an erroneous network.

3. Except for Tuesday to Thursday, the load profile of each other day of the week is distinct. For example, the profile of Monday morning includes the "pickup loads". Due to these differences in load profiles, we have used one NN for the days with similar load profiles and one NN for each day with a distinct load profile. When we forecasted the electric loads of Saturday, Sunday or Monday, we used weather and load data obtained up to Friday morning (9:00 am) to conform with Puget Power practice.

4. We have tried several architectures for load forecasting. The key issue in selecting a particular NN configuration is to achieve low training error without "memorization". This can be accomplished by first selecting an oversized network and then "pruning" the network to eliminate any memorization problem that might exist, without jeopardizing the training accuracy.

Manuscript received September 23, 1990.
