
Meteorol. Appl. 10, 367–376 (2003) DOI: 10.1017/S1350482703001105

Seasonality in the statistics of surface air temperature and the pricing of weather derivatives

Stephen Jewson (1) & Rodrigo Caballero (2)

(1) Risk Management Solutions, 10 Eastcheap, London, EC3M 1AJ, UK. Email: [email protected]
(2) Department of the Geophysical Sciences, University of Chicago, 5734 S. Ellis Ave., Chicago IL 60637, USA. Email: [email protected]

The pricing of weather derivatives motivates the need to build accurate statistical models of daily
temperature variability. Current published models are shown to be inaccurate for locations that show
strong seasonality in the probability distribution and autocorrelation structure of temperature
anomalies. With respect to the first of these problems, we present a new transform that allows seasonally
varying non-normal temperature anomaly distributions to be cast into normal distributions. With
respect to the second, we present a new parametric time-series model that captures both the seasonality
and the slow decay of the autocorrelation structure of observed temperature anomalies. This model is
valid when the seasonality is slowly varying. We also present a simple non-parametric method that is
accurate in all cases, including extreme non-normality and rapidly varying seasonality. Application of
these new methods in some realistic weather derivative valuation examples shows that they can have a
very large impact on the final price when compared to existing methods.

1. Introduction

Weather derivatives are contracts that allow entities to insure themselves against the financial losses that can occur due to unfavourable weather conditions. For instance, there are many retailing companies which have sales that are adversely affected by either warmer or colder than normal temperatures. These companies could all potentially benefit from hedging their exposure with temperature-based weather derivatives.

The payoff from a weather derivative is determined by the outcome of a weather index such as mean summer or winter temperature. Pricing of weather derivatives is mainly based on estimates of the distribution of possible values of the index. Before any forecasts are available, historical weather data must be obtained to make this estimate. Commonly a simple method is used in which the values of the index for past years are determined. Using 30 years of historical data would give 30 historical index values (for example, 30 mean winter temperatures) and these would be taken as 30 independent samples from the distribution to be estimated. In most cases, a trend would be removed prior to estimating the index distribution, either from the daily weather data or from the 30 annual historical index values. This method for deriving the index distribution is known as (detrended) burn analysis.

The most obvious disadvantage of burn analysis is that it does not sample the possible extreme outcomes of the index very well. This can be overcome by fitting an appropriate distribution to the historical index values, which smooths the distribution and extrapolates it to higher and lower levels of probability. This is known as index modelling.

However, neither of these so-called index-based approaches for determining the distribution of the index achieves the highest possible accuracy, because much of the historical data is discarded in the calculation of the historical index values. For example, when considering a one-week contract, 51/52 of the data is not used by these methods. If some of the data from the other 51 weeks of the year contain statistical information about the behaviour of weather during the one week of the contract, as is likely, then it could be more accurate to use some of these data too.

These considerations lead one naturally to consider the possibility of using statistical modelling of temperatures on a daily basis, and considerable work has gone into trying to build such models. In the most common approach, a deterministic seasonal cycle is removed from the mean and/or standard deviation of temperature, and the residuals (which we will call temperature anomalies) are then modelled using

continuous or discrete stochastic processes. The difficulty lies in finding stochastic processes that accurately capture the observed behaviour of the temperature anomalies. Dischel (1998), Cao & Wei (2000), Torro et al. (2001) and Alaton et al. (2002) have suggested using AR(1) (first-order autoregressive) models or continuous equivalents. Others (Dornier & Querel 2000; Diebold & Campbell 2001) have suggested more general models that all lie within the larger class of autoregressive moving-average (ARMA) models (Box & Jenkins 1970). Caballero et al. (2002) (henceforth CJB) have shown that all these models fail to capture the slow decay of the temperature autocorrelation function, and hence lead to significant underpricing of weather options. CJB and Brody et al. (2002) (henceforth BSZ) have suggested Gaussian discrete and continuous stochastic processes, respectively, that overcome this problem by using models with long memory (i.e. power law decay of the autocorrelation function). CJB model the variability of temperature anomalies using a stationary autoregressive fractionally integrated moving average (ARFIMA) process (Granger & Joyeux 1980), while BSZ use a fractional Ornstein-Uhlenbeck (fOU) process. ARFIMA and fOU models are mutual analogues in the discrete and continuous domains. These models work well for locations where the assumptions of normality and stationarity are accurate. However, as we shall see below, temperature anomalies at many locations show marked departures from normality and stationarity. In these cases, the CJB and BSZ models should not be used for pricing weather derivatives as they are likely to give misleading results.

This paper tackles the question of how to model temperatures at these locations. In Section 2 we describe the data sets to be used. In Section 3 we examine the evidence for non-stationarity in the autocorrelation structure of surface air temperatures. In Section 4 we model it using a new class of parametric statistical models that we have developed specifically for this purpose. In Section 4.2 we also describe a simple non-parametric model that provides an alternative method for cases where parametric models entirely fail. In Section 5 we investigate seasonally varying non-normality of temperature variability and we present a new method that can be used to model such non-normality. In Section 6, we compare the performance of the various approaches in pricing a specific weather derivative. In Section 7 we draw some conclusions.

2. Data

The studies of temperature variability described in this paper are based on two data sets. The first data set originates from the US National Weather Service and consists of daily minimum and maximum temperatures measured between midnight and midnight (local time) for 40 years (1961–2000) at 200 US weather stations (in this study we only present results for eight of them, listed in Table 1).

Table 1. The eight US weather stations used in the modelling described in the text, with the optimum lengths of the four moving averages as selected automatically as part of the fitting procedure for the AROMA model.

Station                  m1   m2   m3   m4
Chicago Midway            1    2    3   17
Miami                     1    2    4   28
Los Angeles               1    2    9   33
Boston                    1    2    5   32
New York Central Park     1    2    4   18
Charleston                1    2    4   22
Detroit                   1    2    3   24
Atlanta                   1    2    7   27

These data are not directly suitable for analysis because of (a) gaps in the data due to failures in measurement equipment, recording methods or data transmission and storage, and (b) jumps in the data due to changes in measurement equipment or station location. To avoid these problems we use data that have been processed by Earth Satellite Corporation to fill in such gaps and remove such jumps (Boissonnade et al. 2002). These processed data were provided to the authors by Risk Management Solutions Ltd. Even after such processing the data are still not representative of the current climate because of trends due to urbanisation and global warming. Removing such trends is extremely difficult, as the trends vary by location, by year and by season. Attempting to model the variations by year and season can only be done rather subjectively, and so we have restricted ourselves to the simple and transparent approach of removing a non-seasonal linear trend, estimated over the entire data period, from the daily mean temperature at each station. The linear trends were fitted by the method of least squares, and removed in such a way that the detrended data values are consistent with the last day of the data (in other words, where there is a warming trend we increase the earlier values up to present-day levels). Most weather contracts depend on daily mid-point temperature, calculated as the mid-point between the daily maximum and minimum, and so all our modelling will focus on these values.
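As an illustration of the detrending step just described, a minimal Python sketch is given below. It is not the authors' code: the helper name and the assumption that `daily_temps` is a NumPy array of daily mid-point temperatures in date order are ours.

```python
import numpy as np

def detrend_to_last_day(daily_temps):
    """Remove a non-seasonal linear trend, fitted by least squares over the
    whole record, re-anchored so that the detrended values are consistent
    with the last day of the data (earlier values are shifted upwards when
    the trend is a warming one)."""
    t = np.arange(len(daily_temps), dtype=float)
    slope, intercept = np.polyfit(t, daily_temps, 1)   # least-squares straight line
    trend = slope * t + intercept
    return daily_temps - trend + trend[-1]             # consistent with the final day
```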
The second data set is the US National Centre for Environmental Prediction (NCEP)'s 40 year reanalysis (Kalnay et al. 1996). This data set is obtained by assimilating observations into a dynamical model and produces a gridded representation of the climate over the whole globe. We only use surface temperature values from these data. The reason we use these data in addition to the station data described above is simply that the reanalysis data are conveniently presented on a spatial grid, which makes it much easier to produce maps of spatial fields.

Both temperature data sets are converted to anomalies by removing deterministic seasonal cycles in the mean
and the standard deviation: this was achieved by regression onto three sines and cosines in each case.
3. Seasonality in the autocorrelation structure of surface air temperatures

In Figure 1 we compare the seasonal autocorrelation functions (ACFs) at two locations, Chicago and Miami. Chicago shows essentially no seasonality, with no statistically significant difference between summer, winter and annual ACFs. The situation is strikingly different in Miami. Persistence of temperature anomalies is clearly much higher in summer than in winter. It is clear from this that a stochastic model (such as those used in CJB or BSZ) that assumes stationarity of the ACF will severely underestimate the memory in summer, and will overestimate it in winter. If such a model is used to derive the distribution of a weather index such as cumulative temperature, it will severely underestimate the standard deviation of the index in summer (see CJB, Equation 11).

Figure 1. Observed ACFs for Chicago (top row) and Miami (bottom row) in winter (defined as December–February, DJF) and summer (June–August, JJA). In each panel, the solid black line is the annual ACF (i.e. the ACF computed using the entire data set) and the dotted line is the ACF for the season specified (computed as the average of the seasonal ACFs in the 40 individual years). The dashed lines show the 95% confidence intervals around the observed estimate, calculated using the method of Moran (1947).

We turn to the NCEP data set to examine how widespread seasonality of the ACF is. We define an index s as

    s = ρ1 + ρ2 + · · · + ρ40,                              (1)

where ρk is the ACF for a particular season at lag k. The higher the value of s, the greater the persistence of the temperature anomalies.

Figure 2 shows maps of s for summer and winter over North America and Europe. In both seasons, persistence is greater over the oceans. In winter, persistence is relatively uniform over North America, but shows a banded pattern over Europe, with high-persistence bands over Scandinavia and the Mediterranean and lower persistence over the southern European mainland. The situation changes markedly in summer. The biggest change is over the oceans, where persistence increases almost everywhere. There are also changes over the continents. In North America, persistence increases over the Gulf states and southwestern USA, but decreases over coastal California. In Europe, the banded pattern described above is still in place but the relative amplitudes change, with persistence decreasing over Scandinavia and central Europe and increasing over the Mediterranean. In summary, we find strong seasonality of the ACF over large (and economically significant) parts of the USA and Europe. This motivates the search for a time series model capable of capturing such seasonality.

Figure 2. Persistence of surface air temperature anomalies as quantified by the index s (defined as an average of the ACF at each point over the first 40 lags; see text) in (a) winter (DJF) and (b) summer (JJA). Contour interval is 0.05. Light shading shows values > 0.1, dark shading values > 0.2. (c) Difference (DJF−JJA). Contour interval is 0.05. Dashed lines show negative values. Shading shows absolute values > 0.05. Data from NCEP Reanalysis, 1950–1999.
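For concreteness, the persistence index of equation (1) can be estimated from a stretch of anomalies along the following lines. This is a sketch with our own helper names; the maps in Figure 2 are built by averaging seasonal ACFs over the individual years, which is omitted here for brevity.

```python
import numpy as np

def sample_acf(anoms, max_lag=40):
    """Sample autocorrelation function at lags 1..max_lag."""
    x = np.asarray(anoms, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def persistence_index(anoms, max_lag=40):
    """s = sum of the ACF over the first 40 lags, as in equation (1)."""
    return sample_acf(anoms, max_lag).sum()
```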
4. Modelling of seasonality in the autocorrelation structure

We now proceed to the main topic of this article, which is the modelling of seasonality in the autocorrelation structure of temperature. A first approach might be to try to extend the ARFIMA model of CJB to include seasonality. This could be attempted by allowing the parameters of the model to vary with time of year, or perhaps by fitting the model to data from only one part of the year. Both such approaches are theoretically possible. However, they are also rather complex. The ARFIMA model represents the slow decay of the ACF using a long memory parameter d. A model that allows the long memory parameter d to vary with the time of year is hard to fit, and fitting d on data for only one season is also difficult.

CJB suggested that the long memory of surface air temperatures may simply result from the aggregation of several processes with different time-scales, such as internal atmospheric variability on short time-scales, land-surface processes on medium time-scales, and the interaction of the atmosphere and ocean on long time-scales. Indeed, a simple statistical model consisting of a sum of 3 AR(1) processes was shown to give results that were indistinguishable from long memory over the length of data and number of lags used. This 3 × AR(1) model is not, however, practical for simulating artificial temperatures because the parameters cannot be easily estimated. It does, nevertheless, motivate the search for other simple models that might perform as well as ARFIMA and yet avoid the difficulties introduced by the long memory parameter d.

It was shown in CJB, and subsequently in more detail in Brix et al. (2002), that the well-known ARMA

processes (Box & Jenkins 1970) do not work well for surface temperature anomalies because they do not capture the observed slow decay of the ACF. For an AR process to capture this shape of ACF out to 40 lags, 40 parameters are needed. Such a model is extremely non-parsimonious, and the parameters cannot be estimated with any accuracy. Indeed, it would be impossible to distinguish most of the parameters from zero. This is unfortunate, because it would be relatively straightforward to generalise the AR process to have seasonally varying parameters, and hence solve the problem of modelling seasonality.

We now present a new statistical model that maintains the simplicity of the AR processes, but is as accurate and parsimonious as the more complex ARFIMA model. Initially, we present a non-seasonal version of the model, but extend it to include seasonality in Section 4.1. The model is patterned after AR(p):

    xn = α1 xn−1 + α2 xn−2 + · · · + αp xn−p + ξn,          (2)

where xn is the value of the process at step n, ξn is a Gaussian white-noise process, and the αi are constants, but rather than using individual temperatures for days n − 1, n − 2, . . . as predictors, we use moving averages of past temperatures:

    xn = α1 ym1 + α2 ym2 + · · · + αr ymr + ξn,             (3)

where

    ym = (xn−1 + xn−2 + · · · + xn−m)/m.                    (4)

Note that all the averages start from day n − 1. This is sketched graphically in Figure 3. Since this model consists of autoregressive terms on moving averages, we give it the name AROMA. An AROMA(m1, m2, . . . , mr) process can be rewritten as an AR(mr) process, but it can accurately capture the observed temperature autocorrelation structure with many fewer than mr independent parameters (see below).

Figure 3. Averaging intervals used in the AROMA model.

How are we to choose the number and length of moving averages to use in the model, and calculate the coefficients αi? As for the number of moving averages, this should be chosen to be as small as possible, so that the parameters can be well estimated. Experiments on temperature anomalies for our 8 stations suggest that four moving averages (i.e. r = 4) are typically enough to capture the observed ACF well out to 40 lags: this is a great improvement on the number of parameters required by the AR model for the same accuracy. Given values for m1, m2, m3 and m4, it is straightforward to calculate the weights α1, . . . , α4 using linear regression. All that remains is to decide which moving averages to use. Experiments were performed on our eight stations in which all combinations of values of m1, m2, m3 and m4 up to 35 were tested. The results were ranked according to the root mean square error between the model ACF and the observed ACF (an alternative method would be to rank the results using the likelihood of each model). Results are shown in Table 1. Interestingly, all locations were modelled optimally using m1 = 1 and m2 = 2. Values of m3 and m4 were, however, different for different stations. This suggests that a simple way to choose the lengths of the moving averages to be used is to fix m1 = 1 and m2 = 2, and optimise the ACF by varying the other two lags. This is a simple optimisation problem and can be solved in a matter of seconds on a personal computer.
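A sketch of this selection procedure is given below. It reuses the `sample_acf` and `simulate_aroma` helpers from the earlier sketches and fits the weights by ordinary least squares; obtaining the model ACF by simulation, as here, is the crudest option (an analytical ACF from the equivalent AR representation would be much faster), so the block should be read as illustrative only.

```python
import numpy as np
from itertools import combinations

def fit_alphas(anoms, lags):
    """Least-squares estimates of the AROMA weights for a given set of lags."""
    anoms = np.asarray(anoms, dtype=float)
    burn = max(lags)
    preds = np.array([[anoms[n - m:n].mean() for m in lags]
                      for n in range(burn, len(anoms))])
    return np.linalg.lstsq(preds, anoms[burn:], rcond=None)[0]

def select_lags(anoms, max_m=35, n_lags=40):
    """Fix m1 = 1, m2 = 2 and search m3 < m4 <= max_m, ranking candidate
    models by the RMSE between modelled and observed ACFs."""
    obs = sample_acf(anoms, n_lags)
    best = None
    for m3, m4 in combinations(range(3, max_m + 1), 2):
        lags = [1, 2, m3, m4]
        alphas = fit_alphas(anoms, lags)
        model_acf = sample_acf(simulate_aroma(alphas, lags, 50000), n_lags)
        rmse = float(np.sqrt(np.mean((model_acf - obs) ** 2)))
        if best is None or rmse < best[0]:
            best = (rmse, lags, alphas)
    return best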
Figure 4 shows the observed and modelled annual ACF for Chicago using the AROMA and ARFIMA models. We see that both the AROMA and the ARFIMA models give a very good fit to the observed ACF. The advantage of the AROMA model is that it can be fitted much more simply (as we have seen above) and (as we shall see below) can be extended very readily to include seasonality. From the standpoint of meteorological interpretation, AROMA is more attractive than the ARFIMA model, appealing as it does to the idea of different timescales in weather variability, corresponding to each of the moving average terms. If temperature today is related to an average of temperature over the last two days, then this is probably just a reflection of the impact of small scale weather systems. If it is related to an average of temperatures over the last 20 days then this may be because of the memory in soil moisture, for instance.

Figure 4. The observed (solid line) and modelled ACFs for Chicago. The modelled ACFs were produced using an ARFIMA model (dotted lines) and the AROMA model (circles).

4.1. Extending AROMA to include seasonality

We now extend the AROMA model to include seasonality. First, we fit the model to annual data, as described above. This fixes the lags m1, m2, m3, m4 that are to be used. We do not use different lags at different times of year. For each day of the year, we then fit a different model with different regression parameters αi. This is done by selecting a window of data either side of that day, and performing the regression using data within that window. The regression parameter for a moving average of length m can only be fitted if the length of the fitting window is significantly larger than m, in order that the window contain a sufficient number of regression pairs for a sensible regression to be performed. In our case, we limit the lengths of the moving averages to 35 days, and set the window length to 91 days (an ad hoc choice designed to give enough data to allow accurate fitting of the model for each day while still allowing for seasonal variability). Thus even for the longest moving average of 35 days we have 66 pairs of values from each year of data with which to calculate the appropriate αi. The data used for adjacent days of the year overlap almost entirely, bar one day at each end of the 91-day window, and so we would expect the regression coefficients to vary only slowly during the year.
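The seasonal fitting step can be sketched as follows: for each day of the year the same least-squares regression is performed, restricted to target days that fall inside a 91-day window centred on that calendar day. The circular day-of-year distance and the helper names are our own choices, not the authors' implementation.

```python
import numpy as np

def fit_saroma(anoms, day_of_year, lags, half_window=45):
    """Fit day-of-year dependent AROMA weights using a 91-day window of data
    centred on each calendar day. Returns an array of shape (366, len(lags))."""
    anoms = np.asarray(anoms, dtype=float)
    doy = np.asarray(day_of_year)
    burn = max(lags)
    targets = np.arange(burn, len(anoms))
    preds = np.array([[anoms[n - m:n].mean() for m in lags] for n in targets])
    t_doy = doy[targets]
    alphas = np.zeros((366, len(lags)))
    for day in range(1, 367):
        dist = np.abs(t_doy - day)
        dist = np.minimum(dist, 366 - dist)            # window wraps around the new year
        sel = dist <= half_window
        alphas[day - 1] = np.linalg.lstsq(preds[sel], anoms[targets[sel]],
                                          rcond=None)[0]
    return alphas
```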
Figure 5 shows observed and modelled ACFs for different seasons for Miami. In each panel of this figure we show the annual ACF for reference, which is the same in all four panels. The solid line in each case shows the observed ACF for that season. Because these seasonal ACFs are calculated using fewer data than for the annual ACF, they show more sampling variability and are hence less smooth. The circles show the seasonal ACFs from the seasonal AROMA (SAROMA) model fitted to the Miami daily temperatures as described above. We see that the model captures the seasonal variation in the ACF well. In particular it shows a slow decay of memory in summer, and more rapid decay in the other seasons.

Figure 5. The four panels show observed and modelled ACFs for Miami for the four seasons. The observed data are the same as in Figure 1. In each panel the dotted line is the annual ACF, which is included for reference. The solid line is the observed ACF for that season, and the circles are the modelled (SAROMA) ACF for that season. Confidence limits are omitted for clarity.

Figure 6 shows the seasonal variation of the four regression parameters in the SAROMA model. We see that they vary slowly with the time of year. The slow decay of the ACF in summer corresponds to a summer peak in the fourth parameter, which applies to a moving average of length 28 days. The SAROMA model could be extended by smoothing these regression parameters in time (either parametrically or non-parametrically) to preserve only the long time-scale variations and remove the short time-scale fluctuations which are presumably only due to sampling error.

Figure 6. Seasonal variation of the four regression parameters for the SAROMA model for Miami. The solid lines show the estimated parameter values, while the dotted lines show the 95% error bounds. We see that each of the parameters shows a strong seasonal cycle, corresponding to the strong seasonal cycle seen in the observed and modelled ACFs.

4.2. A non-parametric method: extended burn analysis

We have discussed seasonality in the ACF, and shown that it is strong in some cases. We have also introduced a simple parametric model that captures this seasonality well. There are, however, potential limits to how well the model can work. In particular, if the timescales of change of the observed ACF are rapid, then one would want to use a short window to fit the model. A short window, however, prevents accurate estimation of the parameters because the regression has insufficient data. The model is thus limited to representing ACFs that change over timescales somewhat longer than the longest lag in the model. Another limitation is that the model uses the same lags at different times of year, while it could be that the timescales of memory in the physical system actually change with time of year. Finally, the model assumes that the (distribution adjusted) temperature anomalies are governed by linear dynamics, which may not be entirely correct. It is possible that one could make a different parametric model that would solve some of these problems in a satisfactory way. However, given the richness and complexity of climate variability, we believe it will always be the case that there will be some location somewhere for which the observed temperature variability will not fit a given parametric form. Because of this, it would seem useful to also explore non-parametric methods that make the fewest possible assumptions about the data, and hence are likely to be generally applicable.

We present here one such simple method that is essentially an extension of the burn analysis described in the Introduction. It relies on the simple idea that there may be some information from outside the contract period which is relevant to the contract period itself. Consider for instance a one-week contract: we may expect data from the weeks preceding and following the contract period to have statistics similar to those of the contract period itself. There is an implicit assumption here that the ACF and distribution of variability vary only fairly slowly (i.e. do not change much from week to week), and that the inaccuracy introduced by the small week-to-week changes in the ACF and distribution can be outweighed by the benefits of having more data to work with. Note, however, that this method can work with ACFs that change more rapidly than the SAROMA model can accommodate.

The method works as follows. We define a period that extends either side of the contract period and captures the data to be used. In the above example, we might employ a window of two weeks on either side of the contract period. This gives us five times as much historical data to work with as using only the contract period (as in standard burn analysis), which would be expected to increase the accuracy of our estimates by more than a factor of 2. We now slide a window of the same length as the contract period along the relevant

data. For each window position, we add back the seasonal cycle in variance and mean appropriate for the contract period, and calculate a historical index value. The end result is many more historical index values than are obtained in the index-based methods. In our example we have a seven-day contract and a 35-day relevant data period which means that the seven-day window can take 29 different positions. Each year of data thus gives us 29 historical index values. This is 29 times as many as if we only used the contract period itself (although these 29 values are not independent). The advantage of sliding the window rather than jumping it (even though the underlying data used are the same) is that (a) it creates a smoother estimate of the final distribution, and (b) it uses more possible combinations of days of daily weather, which can result in more extreme values for the index.
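A sketch of this calculation for the one-week example is given below. The index is taken here to be a simple sum of the daily values, and the re-addition of the contract period's seasonal mean and variance is assumed to have been done already; the helper names are ours.

```python
import numpy as np

def extended_burn_indices(relevant_days_by_year, contract_len=7):
    """Slide a contract-length window through each year's relevant data period
    (e.g. 35 days for a 7-day contract padded by two weeks on either side),
    returning all of the resulting historical index values."""
    indices = []
    for year_values in relevant_days_by_year:               # one array of daily values per year
        n_positions = len(year_values) - contract_len + 1   # 29 positions for a 35-day period
        for start in range(n_positions):
            indices.append(float(np.sum(year_values[start:start + contract_len])))
    return np.array(indices)
```

With a 35-day relevant period this yields 29 overlapping (and therefore not independent) index values per year, as described in the text.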

The only remaining issue is to decide the length of the relevant data period. Too long a period will start including data that are not relevant because the statistics of weather variability have changed. Too short a period will not reap the potential benefits of using more data. The optimum window length is clearly dependent on the station: with Chicago we might be tempted to use data for the whole year, while with Miami that would clearly be wrong since the summer data are markedly different from those of the other seasons. One way to choose the window length would be to analyse seasonal variations in higher moments of temperature variability and seasonal variations in autocorrelations at, or averaged over, certain lags.

5. Seasonal non-normality of surface air temperatures

By construction, our temperature anomalies are stationary in the mean and the variance. It is still possible, however, that they are non-stationary in higher moments of variability. Figure 7 shows the cumulative distribution functions (CDFs) of observed temperature anomalies for the four seasons for Miami. We have plotted these distributions against a fitted normal distribution in the form of a QQ plot. If the observed data are normally distributed, they will lie along the diagonal. If they are skewed they will tend to lie at an angle to, and cross, the diagonal. If there are heavy tails at the warm end of the distribution, the data will lie below the diagonal and if there are heavy tails at the cold end of the distribution, the data will lie above the diagonal. We see that all seasons show deviations from a normal distribution with light warm tails and heavy cold tails. The largest deviations are the light warm tails in summer, showing that extreme warm events are much less likely than would be supposed from a normal distribution.

Figure 7. The four panels show QQ plots for temperature anomalies in Miami for the four seasons. The horizontal axis shows the observed quantiles, while the vertical axis shows the modelled quantiles. We see that in all seasons the cold tail of the distribution is heavy tailed (cold events are more likely than predicted by the normal distribution) while the warm tail of the distribution is light tailed (warm events are less likely than predicted by the normal distribution). The most significant departure from normal is the warm tail in winter.

The levels of non-normality seen in many locations are sufficiently large that making the mathematically convenient assumption of normality will degrade the accuracy of the temperature simulations significantly, and lead to mis-pricing of weather derivatives. The errors will be particularly large for contracts which depend heavily on extreme values of temperature, but will also be important for standard contracts. It is thus sensible to attempt to model this non-normality directly.

There are a number of possible methods that can be used to model non-normality in temperature variability. An initial observation we make is that although temperatures are clearly not normally distributed at many locations, the distribution is, at the same time, still reasonably close to normal. This suggests that if we approach the problem using transformations that convert the data to a normal distribution, then the dynamics will not be affected too strongly, and the same time series models as are used in the normal case may still work.

The first choice is whether to model the non-normality before or after attempting to model the autocorrelation. The first of these methods would involve applying a transformation to the temperature anomalies that renders them more or less normal. Time series modelling methods that assume normality, such as ARFIMA or SAROMA, can then be applied to the transformed anomalies. Simulated values are passed back through the transformation to get back to the original distribution.
A second method would be to apply a time series model to the temperature anomalies directly, and model non-normality in the residuals. We choose to apply the first of these methods because it allows us to re-use algorithms such as the ARFIMA or SAROMA models simply by applying a transformation to the inputs and outputs of those models.

The second choice is whether to use parametric or non-parametric models for the transformation of the anomalies. Box–Cox transformations (Box & Cox 1964) are a commonly used parametric distribution transform, and can be extended to vary seasonally. They are not, however, completely general. Non-parametric methods, on the other hand, can cope with any temperature anomaly distribution. For this reason we choose a non-parametric method, and the method we present is, as far as we are aware, new.

The method works as follows. We derive a separate estimate of the cumulative distribution of the temperature anomalies for each day of the year (this step is discussed in detail below). These cumulative distributions are used to convert the temperature anomaly for each day (over all years) into a probability, and that probability is then converted, using the inverse of the standard normal CDF, into a value sampled from a normal distribution.

It remains to specify how to estimate the distributions used for each day. They could be estimated using only data from that day of the year: however, this would give a poor estimate because using 50 years of data would only give 50 points on the distribution. Instead we assume that the anomaly distribution only varies slowly with time and that we can estimate this distribution more accurately by using additional data from surrounding days. We do this by taking temperature values from a window surrounding the actual day, with a window length of 91 days (arbitrarily chosen to be long enough to give a smooth distribution, but short enough not to smooth out seasonal variations). Thus for each day of the year, we form the estimate of the anomaly distribution for that day using 91 days per year × 50 years = 4550 days of data. This gives a smooth estimate. Since the distributions for adjacent days are based on almost the same data because of substantial overlapping of the windows, the transform only changes gradually with the time of year, which seems realistic.
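A sketch of this transform is given below, assuming arrays of anomalies and matching days of the year. The rank-based empirical CDF and the use of scipy for the inverse normal CDF are our own implementation choices rather than details given in the paper.

```python
import numpy as np
from scipy.stats import norm

def normalise_anomalies(anoms, day_of_year, half_window=45):
    """Map each anomaly to a standard normal value using the empirical CDF
    estimated from a 91-day day-of-year window around its calendar day."""
    anoms = np.asarray(anoms, dtype=float)
    doy = np.asarray(day_of_year)
    out = np.empty_like(anoms)
    for day in np.unique(doy):
        dist = np.abs(doy - day)
        dist = np.minimum(dist, 366 - dist)            # window wraps around the new year
        pool = np.sort(anoms[dist <= half_window])     # ~91 days x all years of data
        here = doy == day
        ranks = np.searchsorted(pool, anoms[here], side="right")
        probs = ranks / (len(pool) + 1.0)              # rank-based empirical CDF
        out[here] = norm.ppf(np.clip(probs, 1e-6, 1.0 - 1e-6))
    return out
```

The inverse transform, applied to simulated Gaussian values, simply reverses the two steps: each simulated value is mapped through the normal CDF and then back onto the corresponding day's empirical quantiles (for example with np.interp).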
The results of the application of this transform to Miami temperatures are shown in Figure 8. We see that most of the non-normality has been removed. The transformed temperatures can now be modelled using a Gaussian process, and simulations of the transformed temperatures can be converted back to the appropriate distribution using the inverse of the distribution transform. One caveat for this method is that, as long as we use an empirical distribution as described above, it would not be possible for final simulated temperature anomalies to exceed historical values. This is unlikely to be a problem in most cases. However, if we have a particular interest in extreme events then it would be advisable to extend the distribution used in the transformation using extreme value theory (Reiss & Thomas 1997).

Figure 8. The four panels show QQ plots for temperature anomalies in Miami for the four seasons, after having been transformed using the non-parametric seasonally varying transform described in the text. We see that most of the non-normality has been removed.

6. Examples

We now present some examples to illustrate the effects of improved modelling of the seasonality in the distribution (Section 5 above) and the ACF (Section 4 above) on the calculation of weather derivative prices. Our examples are all based on Miami, since that location has strong levels of seasonality in both the distribution and the ACF and so is likely to show the greatest benefits of using the more advanced modelling methods.

The type of weather derivative we consider for our examples is the 'unlimited call option'. This requires one party (the seller of the option) to pay another (the buyer) a certain amount of money if the final weather index is above a specified level known as the strike. The amount paid (the payoff) is proportional to the difference between the index and the strike, with a constant of proportionality known as the tick. This effectively insures the buyer against high values of the index. The question is: given the strike and tick, what should the premium (i.e. the price paid by the buyer) be? The simplest answer is that the premium should equal the expected payoff of the contract. Then, in the
long run, neither the buyer nor the seller will make a profit. In practice, the seller may add a risk loading on top of the premium to compensate for the risk taken in underwriting the derivative. For the present discussion we will ignore this issue and focus on the estimation of the expected payoff. Given the payoff structure and the index distribution, we can easily estimate the expected payoff using numerical integration. We will do this for both the ARFIMA and SAROMA models, both with and without the distribution transform (giving four cases in all). We will consider two examples: a winter contract (December to February) and a summer contract (June to August).

The details of these contracts are shown in Table 2. The winter contract is based on heating degree days (HDDs), which are defined as the sum of the excursions below 18 °C (65 °F) during the contract period, while the summer contract is based on cooling degree days (CDDs), which are defined as the sum of the excursions above 18 °C during the contract period (these definitions are standard in the energy industry). HDDs are a measure of how cold the season is, and relate to the use of heating; CDDs are a measure of how warm the season is, and relate to the use of cooling.

Table 2. The contract structures for the examples.

           Summer contract    Winter contract
Strike     1750 CDDs          90 HDDs
Tick       $5000/CDD          $5000/HDD
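As a sketch of the payoff calculation, the summer contract's expected payoff could be estimated from simulated daily temperatures along the following lines. Here we use straightforward Monte Carlo averaging over simulated index values rather than the numerical integration used in the paper; the strike and tick are those of Table 2, and the function names are our own.

```python
import numpy as np

def cdd_index(daily_temps, base=18.0):
    """Cooling degree days: sum of daily excursions above 18 degrees C.
    `daily_temps` has shape (n_simulated_seasons, n_days_in_contract)."""
    return np.sum(np.maximum(daily_temps - base, 0.0), axis=-1)

def expected_call_payoff(index_values, strike, tick):
    """Expected payoff of an unlimited call: mean of tick * max(index - strike, 0)."""
    return float(tick * np.mean(np.maximum(index_values - strike, 0.0)))

# e.g. with `sims` an array of simulated June-August daily temperatures:
# premium = expected_call_payoff(cdd_index(sims), strike=1750.0, tick=5000.0)
```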
Table 3 shows the results for the summer contract. The index means for the different models are virtually identical in all cases. This is because summer in Miami is very warm and the temperature rarely drops below 18 °C. As a result, the mean number of CDDs is fixed by the seasonal cycle (see Eq. 10 in CJB), which is modelled in the same way for all the models presented. The small differences are due to sampling error, caused by the use of a finite number of years of simulation. The differences between the models appear in the index standard deviations. The ARFIMA model gives lower standard deviations than the SAROMA model, for both normal and transformed distributions. This is explained by the higher autocorrelation in summer (see Figure 5), which is captured by SAROMA but not by ARFIMA. Higher autocorrelations directly lead to a higher index standard deviation (see Eq. 11 in CJB). As a result, SAROMA prices are over 25% higher than ARFIMA prices. A seller using ARFIMA pricing would, on average, lose over $20,000 on this contract.

Table 3. Results for the summer contract example. Expected payoff values have been rounded to three significant figures.

Model     Distribution    Index mean (CDDs)    Index std. dev. (CDDs)    Expected payoff ($)
ARFIMA    Normal          1727.1               62.1                      75,900
ARFIMA    Transformed     1727.5               62.5                      76,400
SAROMA    Normal          1727.0               73.5                      96,200
SAROMA    Transformed     1727.4               73.6                      96,900

In the summer example above, it makes little difference whether one uses a normal or transformed distribution in the models: clearly, the distribution is always close to normal in this case. Things are quite different in the winter (Table 4). The mean index is now much higher for the models with the distribution transform than without. This can be explained as follows. Miami in winter is quite warm, and temperatures below 18 °C are uncommon. It is only the cold tail of the temperature distribution that creates HDDs, and hence modelling of this cold tail is crucial. Without transforming the temperature distribution, the cold tail is modelled as being too thin, and fewer HDDs occur in the model than in reality. This causes the normally distributed models to underestimate the mean number of HDDs. The transformed-distribution models are more accurate since they take into account the fatter cold tail in winter (Figure 7). This has a dramatic impact on the expected payoff: prices given by the transformed-distribution models are almost three times higher than those of the normally distributed models. In this case, it hardly matters whether one uses ARFIMA or SAROMA, since most of the price difference is due to the change in the index mean rather than its standard deviation.

Table 4. Results for the winter contract example. Expected payoff values have been rounded to three significant figures.

Model     Distribution    Index mean (HDDs)    Index std. dev. (HDDs)    Expected payoff ($)
ARFIMA    Normal          56.1                 45.1                      15,000
ARFIMA    Transformed     85.2                 61.0                      43,200
SAROMA    Normal          57.1                 45.8                      16,100
SAROMA    Transformed     87.1                 60.7                      44,300

Taken together, these examples emphasise the importance of modelling both the distribution and the ACF of temperature correctly in the pricing of weather options.

7. Summary

The advent of weather derivatives has created significant interest in the understanding and statistical modelling of surface air temperature variability. Weather derivative pricing methods based on modelling of daily temperatures have certain potential advantages over simpler methods. Such modelling is not, however, easy because of the richness and complexity of climate variability, and in particular because of long memory and seasonality. The CJB and BSZ models were the first to capture the observed slow decay of memory of surface air temperature variability. They are, however, limited by assumptions of normality and stationarity and, as we have shown, many locations do not conform to these restrictions.

We have presented a relatively simple framework in which the non-normality and seasonality of temperature variability can be accommodated, as long as it is reasonably slow in varying: changes in probability distribution and ACF from season to season can be captured, but much more rapid changes cannot. The model for seasonal variation in the ACF that we present can also be interpreted more simply than the long memory models. The latter capture the slow decay of the ACF using a slightly mysterious long memory parameter d. Our model, however, presents temperature today as the sum of components of temperature variability on different timescales, some short (presumably due to small scale weather variability), and some longer (presumably due to either atmospheric long waves, or soil or ocean processes). Finally, we present a non-parametric model that can be applied to the pricing of weather derivatives in those cases where the parametric methods presented still do not appear to give a good fit to the observed variability.

The models we have presented should lead to the more accurate pricing of weather derivatives, especially for contracts based on locations that show strong seasonality. Furthermore, since they deal with the difficult problem of modelling non-normal time series with slowly decaying autocorrelations and seasonally varying dynamics in novel ways, they may find applications in other areas of geophysical and environmental modelling.

Acknowledgments

The authors would like to thank Jeremy Penzer, Anders Brix and Christine Ziehmann for useful discussions on the subject of the statistical modelling of daily temperatures. Rodrigo Caballero was supported by Danmarks Grundforskningsfond.

References

Alaton, P., Djehiche, B. & Stillberger, D. (2002) On modelling and pricing weather derivatives. Appl. Math. Fin. 9(1): 1–20.
Boissonnade, A., Heitkemper, L. & Whitehead, D. (2002) Weather data: cleaning and enhancement. Climate Risk and the Weather Market, Risk Books, pp. 73–98.
Box, G. & Cox, D. (1964) An analysis of transformations. J. Roy. Stat. Soc. B 26: 211–243.
Box, G. E. P. & Jenkins, G. M. (1970) Time Series Analysis, Forecasting and Control. Holden-Day.
Brix, A., Jewson, S. & Ziehmann, C. (2002) Weather derivative modelling and valuation: a statistical perspective. Climate Risk and the Weather Market, Risk Books, pp. 127–150.
Brody, D., Syroka, J. & Zervos, M. (2002) Dynamical pricing of weather derivatives. Quant. Finance 2: 189–198.
Caballero, R., Jewson, S. & Brix, A. (2002) Long memory in surface air temperature: detection, modelling and application to weather derivative valuation. Clim. Res. 21: 127–140.
Cao, M. & Wei, J. (2000) Pricing the weather. Risk 13(5): 14–22.
Davis, M. (2001) Pricing weather derivatives by marginal value. Quant. Finance 1: 1–4.
Diebold, F. & Campbell, S. (2001) Weather forecasting for weather derivatives. University of Pennsylvania Institute for Economic Research, Tech. Rep. 8.
Dischel, R. (1998) Black-Scholes won't do. Energy Power and Risk Management Weather Risk Special Report.
Dornier, F. & Querel, M. (2000) Caution to the wind. Energy Power and Risk Management Weather Risk Special Report, pp. 30–32.
Granger, C. W. J. & Joyeux, R. (1980) An introduction to long memory time series models and fractional differencing. J. Time Ser. Anal. 1: 15–29.
Kalnay, E. et al. (1996) The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc. 77: 437–471.
Moran, P. (1947) Some theorems on time series. Biometrika 34: 281–291.
Reiss, R. & Thomas, M. (1997) Statistical Analysis of Extreme Values. Birkhauser.
Torro, H., Meneu, V. & Valor, E. (2001) Single factor stochastic models with seasonality applied to underlying weather derivatives variables. European Financial Management Association, Tech. Rep. 60.

