RECORD, Volume 27, No. 2


Toronto Spring Meeting
June 20–22, 2001

Session 86PD
Stochastic Pricing
Track: Investment/Product Development

Moderator: TIMOTHY E. HILL

Panelists: MICHAEL BEAN
W. STEVEN PRINCE
CHRIS STIEFELING
ERIC VON SHILING

Copyright © 2002, Society of Actuaries

Note: The chart(s) referred to in the text can be found at the end of the manuscript.

Summary: In the U.S., asset/liability models have been used for more than 10 years for stress testing reserves. In Canada, universal life products are subject to a similar process under valuation technique paper #11. Some companies are also using their models for pricing and product development. This session discusses the advantages, methods, and pitfalls of stochastic pricing for life and annuity products.

MR. TIMOTHY E. HILL: Our first speaker is Michael Bean. Michael currently works in variable annuity pricing and valuation at Manulife Financial, where he has been for the past year. Prior to joining Manulife, he was a professor at the University of Michigan, Ann Arbor; the University of Toronto; and the University of Western Ontario. Michael is also a Course 7 instructor for the Society of Actuaries and has recently published the book Probability: The Science of Uncertainty with Applications to Investments, Insurance, and Engineering (Brooks/Cole, www.brookscole.com).

MR. MICHAEL BEAN: My presentation will provide a general overview of stochastic pricing—what it is, when it should be used, etc. More specific aspects of the subject, such as a discussion of particular models, will be addressed by my colleagues on the panel.

When considering stochastic pricing, there are four questions that naturally come to mind:

1. What is stochastic pricing?
2. When should it be used?
3. What practical problems are associated with it?
4. Is it worth the effort?

Hopefully by the end of this session, you’ll be convinced that for many products like
variable annuities, it is indeed worth the effort.

Let’s start off by considering the first question: What is stochastic pricing? There are
probably a couple of definitions that you can give, but this is the one that I came up
with: Stochastic pricing is an approach to pricing that acknowledges upfront that
things are not going to turn out the way we expect them to. Stochastic pricing
techniques enable us to quantify in a formal way the risk that experience will be
unfavorable.

Before we talk in any detail about what stochastic pricing is, I think it’s a good idea
to go back and talk about the traditional approach to insurance pricing. The
traditional approach begins with looking at averages. If you think about mortality
tables, essentially you’re looking at deterministic averages. That applies also if you
look at, say, an assumption about long-term interest rates and need some kind of
average assumption. The second stage is putting in some kind of provision for
adverse deviations (PfADs), and then the third stage is testing your results under
what you think are adverse scenarios.

The theory behind traditional insurance pricing is essentially the law of large numbers from probability. In figure 1, I’ve given the statement of the law of large numbers in theoretical form: If you have n independent losses denoted X1 up to Xn and you consider the random variable Y, which is the average of those losses, then the law of large numbers asserts that if these random variables are independent and identically distributed, the sample mean Y has the same mean as the individual losses, but its variance goes to zero as the number of losses tends to infinity. This is the theoretical justification for using averages in insurance pricing.
Figure 1
The Theory Behind Traditional Insurance

- Underlying risk can be eliminated (in principle) by diversification
- Based on the law of large numbers: Suppose that X1, ..., Xn are independent, identically distributed losses with mean µ and variance σ². Let Y = (X1 + ... + Xn)/n be the average loss per policy. Then E[Y] = µ and Var(Y) = σ²/n → 0 as n → ∞.
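To make the law of large numbers concrete, here is a minimal simulation sketch in Python; the loss distribution and its parameters are illustrative choices of mine, not anything from the session. The sample mean stays near µ while its variance shrinks like σ²/n:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical i.i.d. losses: mean 1,000, standard deviation 250.
mu, sigma = 1_000.0, 250.0

for n in [10, 100, 1_000]:
    # 2,000 trials of the average loss per policy, Y = (X1 + ... + Xn) / n.
    Y = rng.normal(mu, sigma, size=(2_000, n)).mean(axis=1)
    # E[Y] stays near mu, while Var(Y) = sigma^2 / n shrinks toward zero.
    print(f"n={n:>5}: mean(Y)={Y.mean():8.1f}  var(Y)={Y.var():8.1f}  "
          f"sigma^2/n={sigma**2 / n:8.1f}")
```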

There are several cases in which averages are insufficient, with three main ones
being that losses are widely dispersed, risks are dependent in some nontrivial way,
and claims occur with low frequency and high severity. I’m going to give you an
example of each one of them.

The first example is losses widely dispersed. This is a very, very simple example,
but I think it gets the point across in a fairly straightforward way. We have two
losses. In the first case, the individual loss is either $1,100 or $900, with equal
probability for an average loss of $1,000. In the second case, it’s either $1,500 or
$500, again with equal probability for an average loss of $1,000. So in this
example, you can see that the average is the same, but clearly the risk is different.

The second example is that risks are dependent. The one that obviously comes to mind here is investment risk. There are a lot of policy guarantees out there, for instance, interest rate guarantees or equity guarantees, in which the risks are not independent because of systematic risk. While you can eliminate policy-specific risk, you can’t eliminate systematic risk. That’s why the risk in these types of policies can be reduced through diversification but can’t be entirely eliminated.

The third example is low-frequency/high-severity, or catastrophic, risks. I have a very simple example here, too: Consider a $1 million loss and suppose the probability of that loss occurring is .001 percent. Then the average loss is $10. Now even if you had a large number of independent losses of this type, how many policies would you have to write to collect enough premiums just to cover one loss? This example indicates that if you’re pricing just on the basis of averages, you’re going to have some problems.

What are some other difficulties with the traditional approach to pricing? Well, I mentioned that you’re going to be putting in PfADs and testing adverse scenarios. The first thing you have to ask is, "What are appropriate levels of PfADs?" And more importantly, for testing under an adverse scenario, "What is an adverse scenario?" If you think back to a year ago, most people wouldn’t have thought that an adverse scenario for technology stocks would be a 90 percent decline, but that’s exactly what we’ve experienced in a lot of individual cases. So deterministic scenario testing can actually be dangerous. Why? Your choice of scenarios is colored too much by your recent experience. And the point of this session is that stochastic pricing can help to overcome a lot of these problems.

So what is the methodology behind stochastic pricing? Well, the basic idea is that rather than considering averages, you consider the entire distribution. In figure 2, I have labeled the distribution of the aggregate loss as S = X1 + ... + Xn. We’re going to then set our per-policy premium using some percentile of the distribution rather than the average. You can express this using a condition of the form "probability that the aggregate loss is less than the aggregate premium is at least alpha," where alpha is typically something like 95 percent or 80 percent or whatever your criterion is.

Figure 2
Stochastic Pricing: The Basic Idea

- Consider the distribution of the aggregate loss S = X1 + ... + Xn.
- Set the per-policy premium using a condition of the form Pr(aggregate loss < aggregate premium) ≥ α, where α is typically close to one.
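As a sketch of that condition, assuming we can simulate the aggregate loss S directly, the per-policy premium is just a percentile of S spread over the policies. The distribution and its parameters below are placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_policies, n_scenarios, alpha = 1_000, 10_000, 0.95

# Hypothetical aggregate losses: one simulated value of S = X1 + ... + Xn
# per scenario (a skewed, lognormal stand-in).
S = rng.lognormal(mean=13.8, sigma=0.25, size=n_scenarios)

# Choose the aggregate premium so that Pr(S < premium) >= alpha.
aggregate_premium = np.quantile(S, alpha)
print(f"per-policy premium at alpha={alpha}: {aggregate_premium / n_policies:,.2f}")
print(f"per-policy premium at the mean:     {S.mean() / n_policies:,.2f}")
```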

As a simple example, think about variable annuities. Now this isn’t the only way you can price variable annuities, but this is one approach. First, you generate a large number of investment scenarios randomly. That could be 1,000, 5,000, 10,000—we won’t get into exactly what the number is at this moment. Next, for each scenario, you project the cash flows. You calculate the benefits, which could be death benefits, maturity benefits, or other kinds of benefits. You look at the present value of those plus the present value of your premium if your premium is calculated on a margin basis, or percentage-of-account-value basis. Third, you set your premiums using a condition of the form "probability that the present value of cost is less than the present value of the premium is high." For instance, it’s at least alpha, where alpha is 95 percent, 80 percent, or whatever, again depending on what your pricing criterion is.
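Here is a minimal sketch of those three steps. Everything in it is assumed for illustration—the lognormal fund dynamics, the toy return-of-deposit maturity guarantee, the flat discount factor, and the absence of mortality and lapses—so the structure (simulate, project, take a percentile) is the point, not the numbers:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_scen, n_years, disc = 5_000, 10, 0.94  # assumed annual discount factor

# Step 1: generate random investment scenarios (lognormal annual fund returns).
log_returns = rng.normal(0.06, 0.18, size=(n_scen, n_years))
fund = 1_000.0 * np.exp(np.cumsum(log_returns, axis=1))

# Step 2: project cash flows per scenario. Toy guarantee: pay the shortfall
# below the initial deposit at maturity; fee revenue is a margin on fund value.
pv_cost = disc**n_years * np.maximum(1_000.0 - fund[:, -1], 0.0)
pv_fund = (disc ** np.arange(1, n_years + 1) * fund).sum(axis=1)

# Step 3: solve for the margin m with Pr(PV cost < m * PV fund) >= alpha,
# i.e., the alpha-percentile of cost per unit of fee base.
alpha = 0.95
margin = np.quantile(pv_cost / pv_fund, alpha)
print(f"required margin: {margin * 1e4:.1f} bps of account value")
```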

Now some of you might be thinking that this sounds an awful lot like traditional
scenario analysis, and the point that I want to make is that while it’s similar, it
actually is quite different. With traditional scenario analysis, all of the uncertainty is
upfront. You basically say, "I’m going to examine three scenarios, five scenarios, 10
scenarios, 100 scenarios" or whatever, but you pick all the scenarios today, and all
of the uncertainty is determined today. So if you look one time step ahead, you’re
on a particular path, and there is no uncertainty about the future that remains. With
a full stochastic simulation, you’re going to allow uncertainty to come into the
model at each time step. So you might have 10 choices today, and then one time
step from now you have another 10 choices, and one time step from there you have
another 10 choices, and so on. The difference here is that when you start out and
as time progresses, you don’t know which of those paths you’re going to eventually
take.

Chart 1 shows this concept. On the left side, there is an illustration of traditional
scenario analysis. You can see that in this case, there are three paths. We don’t
know which path we’re going to be on at time zero, but at time one we know we’re
going to be on one of those three paths, and once we’re at time one, we know
which one of those paths we’re going to be on.

With a stochastic simulation, which is illustrated on the right-hand side, there are
several branches at each time step, so the uncertainty is not all concentrated at
time zero.

Suppose you’re interested in testing pricing results under various assumptions. Under a traditional scenario analysis, you might look at three scenarios: an average expected scenario of eight percent growth, an optimistic scenario of 10 percent growth, and a pessimistic scenario of six percent growth. Then you have three different paths. On the other hand, if you were doing stochastic analysis, you would allow the growth rate to vary randomly at each time step, and the result would be a large number of independent paths.

Next I’m going to be talking about the effect of compounding on uncertainty, but
before I do that, I’d like you to think about the following investment opportunity.
Let’s suppose that for each time step, you either gain 50 percent or you lose 40 percent, and your returns are independent in each time step. Here’s a concrete illustration: Let’s suppose I have a fair coin in my pocket and I flip it. If it comes up heads, you gain 50 percent; if it comes up tails, you lose 40 percent. Is this a good investment? And why or why not?

FROM THE FLOOR: This is not a good investment because the compounded return
is negative.

MR. BEAN: A lot of people make the mistake of thinking this isn’t bad and that’s
because they’ll look and say "wait a minute, up 50 percent, down 40 percent,
average return five percent, okay" and then they’ll use five percent to project
returns going forward.

Yes, the compound return is negative. You were supposed to say that the
investment is good so that I could tell you why this isn’t the case!

FROM THE FLOOR: It’s a real-world example about uncertainty.

MR. BEAN: We’ve just considered why this isn’t a good investment, but are there circumstances under which it could be? Well, if we had a large number of investments of this type and we could hold each for one period, then on half we would make 50 percent and on the other half we would lose 40 percent. So overall we would make five percent on average. In that circumstance, it would be a good investment. If we must "buy and hold," compounding our returns over time, then it is a bad investment because our principal value is changing; we don’t know what our principal value is going to be looking forward. This example illustrates a couple of things quite clearly. First of all, you can’t just use arithmetic averages blindly, and second, when you have uncertainty in investment returns, the arithmetic average overstates the actual return.
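You can check that claim in a few lines: the arithmetic average of +50% and −40% is +5% per flip, but the geometric (compound) average is sqrt(1.5 × 0.6) − 1 ≈ −5.1% per flip, so a buy-and-hold investor almost surely loses money even though the mean outcome looks attractive:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# 10,000 buy-and-hold paths of 40 coin flips: x1.5 on heads, x0.6 on tails.
factors = rng.choice([1.5, 0.6], size=(10_000, 40))
terminal = factors.prod(axis=1)

print(f"arithmetic mean per flip: {factors.mean() - 1:+.3f}")       # ~ +0.050
print(f"geometric mean per flip:  {(1.5 * 0.6) ** 0.5 - 1:+.3f}")   # ~ -0.051
print(f"median terminal wealth on $1: {np.median(terminal):.4f}")   # far below 1
print(f"mean terminal wealth on $1:   {terminal.mean():.2f}")       # ~ 1.05**40
```

A handful of lucky paths drag the mean up to about 1.05^40, while the typical (median) path compounds at roughly 0.95 per flip and ends up nearly worthless.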

Now, let’s talk about some practical problems with stochastic modeling. Hopefully
I’ve convinced you that this is something that’s important and this is something that
you should be doing if you’re not already doing it. But there are a lot of things that
you have to consider:

• The first is that it’s more complicated and it requires a greater level of
mathematical expertise. Hopefully, the last example was a concrete
illustration of this.
• The second thing is that selecting appropriate values for input parameters
can be quite challenging in some models, such as the regime-switching
model. (If you’re not familiar with that model, it will be discussed a little bit
later in this presentation.) In the regime switching log normal model for
equities, the parameters do not have intuitive interpretations, and that can
be a problem.
• The third item is more of a practical consideration: the results have to be interpreted with care. That’s particularly important when the audiences getting the end results are not familiar with stochastic techniques. Unlike deterministic approaches, the results of stochastic approaches are distributions. You don’t get one number; you get a whole bunch of numbers. For end users such as accountants and auditors, this can be a little disconcerting, and it requires education.
• A fourth consideration is that small changes in the inputs to a stochastic
model can lead to large changes in the output. This can be true, of course,
with a deterministic model, but it’s much more the case with a stochastic
model.
• Fifth, in the hands of a novice, a stochastic model can be a dangerous tool.
If you don’t know what you’re doing, a stochastic model can actually be
worse than a deterministic model that could be a little bit deficient.
• Finally, when you use stochastic modeling, you have to have some kind of simulation process that involves random number generation. But is the simulation process really random? And what’s the quality of your random number generator? If you’re using some kind of software or methodology and you think the results are random when in fact they’re not, then you can draw faulty conclusions.

So, to end my part of the presentation, let me restate the fourth and final question
that I posed at the beginning of my presentation: Is it worth the effort? The answer
I’d like to leave with you is Yes, definitely. It’s worth the effort when you’re dealing
with investment risks or catastrophic risks or other similar risks. On the other hand,
for traditional risks that can be eliminated through diversification, it depends. I’m
sure that you can get valuable information using the stochastic approach, but you
may not get as much of a bang for your buck as you think.

MR. TIMOTHY E. HILL: Thank you, Michael. The next speaker is Steve Prince. Steve works for Dion Durell and Associates. He works with stochastic modeling of seg fund guarantees, policyholder behavior, and reinsurance, and he has recently published an article entitled "The Securitization of Insurance" that can be found in the June issue of The Actuary.

MR. W. STEVEN PRINCE: My topic today is stochastic variables, including equity market models, interest rate models, advantages of various models, and some words on policyholder behavior. The objective today is to provide insights, not necessarily details (which are available in abundance in various textbooks), and also some observations on what I call the disconnect between the models people use and the way they think of markets. People will build the world’s most elaborate models and say the models prove this or that, but then when you ask them what the market does, they’ll say something entirely different.

A few people have asked how anything as complex and sophisticated as the financial markets of the world can be modeled randomly—does this possibly work at all? The answer is yes, because markets are reactions to news. The analysts of the world study things to death, and then something happens tomorrow and they’re surprised, and it’s the surprise they’re reacting to. If Microsoft is forecasting profits of $100 billion this year and it comes in at $105 billion, the world reacts to the five parts, not the 100 parts. News is, by definition, unexpected, and these unexpected events are well modeled by random behavior.

Whether you like the philosophic justification or not, there’s an awful lot of empirical
evidence that these models do a reasonably good job. None of these models is
attempting to predict the market, and we're not economists telling you what the
inflation rate will be next month. We’re simply producing 1,000 scenarios that
reasonably reflect 1,000 things that might happen.

You’ll hear about log normal models. The reason log normal comes up is that we’re
saying that the percentage changes are what the world is most interested in. You’re
interested in percentage change generally because somebody tells you the stock
market has gone up 50 points today or 10 points, but that doesn’t tell you much
about your return unless you know whether the market started at 100 or 1,000. I’ll
explain why that is when I cover the equity model.

The simplest equity model is that the value of your stock at time t is equal to the value of your stock at time t-1, multiplied by 1 + g (Figure 3), where g is some growth rate. That’s mathematically equivalent to saying S(t) is equal to S(t-1) multiplied by e^r for some number r, and you can take the logarithm of both sides and say the logarithm of the current stock value equals the logarithm of the previous stock value plus some factor r. Here r is normally distributed for the period in question.
Figure 3
Simplest Model

S(t) = S(t-1) * (1 + g)
or
S(t) = S(t-1) * exp(r)
or
ln(S(t)) = ln(S(t-1)) + r

where S(t) is the equity index value (e.g., the S&P 500) and r is the normally distributed random return for the period.

So in this simple model, if you were trying to model investment returns, you’d dig up the Standard & Poor’s (S&P) 500 index returns for a time period—that could be a year, 10 years, or 50 months. You calculate the logarithms of the index, get the mean and standard deviation of the log growth from period to period, and then generate 1,000 simulations of those index returns.
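In code, that fitting-and-simulating recipe is short. This sketch assumes you have a series of historical index levels to hand; the history below is made up, not real S&P 500 data:

```python
import numpy as np

def simulate_lognormal(index_levels, n_scenarios, n_periods, seed=0):
    """Fit the simplest model ln S(t) = ln S(t-1) + r and simulate index paths."""
    rng = np.random.default_rng(seed)
    log_returns = np.diff(np.log(index_levels))   # r for each historical period
    mu, sigma = log_returns.mean(), log_returns.std(ddof=1)
    r = rng.normal(mu, sigma, size=(n_scenarios, n_periods))
    return index_levels[-1] * np.exp(np.cumsum(r, axis=1))

# Illustrative history (placeholder numbers, not real index data).
history = np.array([100.0, 103.0, 101.0, 108.0, 112.0, 109.0, 118.0])
paths = simulate_lognormal(history, n_scenarios=1_000, n_periods=12)
print(paths.shape, paths[:, -1].mean())
```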

That’s one simple approach. It assumes the periods are independent, and from that you conclude that markets don’t bounce back from a slump and markets don’t cool off after a gain. I stress this by comparison with the probability textbook’s coin-toss example. If a coin has come up heads five times in a row, what are the odds of another head? The textbooks would say the odds are 50 percent, but that ignores the real question: how many heads in a row do you need before you wonder whether it’s truly an unbiased coin? If this is your model, even if you had four lousy quarters in a row, the next one could be good or it could be bad. You still don’t know what’s going to happen next quarter.

This simplest model does, in fact, fit the data reasonably well. If you don’t have much data to begin with, you can’t build a very elaborate model. If you get behind things like Black-Scholes and other option pricing models, the basic assumptions are independence and a lognormal distribution; the models get a bit complicated, and practitioners adjust the parameters daily to reflect emerging events, but the basic assumptions are the same.

Now we can go to a more refined model. There’s a mean reversion model, and it says that markets go in cycles to some degree. If things are down, the world will tend to move back toward the mean. This model is also called autoregression, because the series regresses on its own past values.

The formula for this is not as complex as it might seem (Figure 4). It says that the return in period t is equal to some basic mean value, plus some other factor based on how far the last period’s return was from the mean, plus some volatility term for the period. ε is a standard normal random number; k is usually in the range of –1 to +1. Your return this period depends on the return last period. If k is greater than zero and r(t-1) was above the mean, then the factor in the middle, k * (r(t-1) - mean), is positive, so r is likely to be above the mean this period too. Likewise, if returns are below the mean, they’re likely to stay there: that same factor, with last period’s return below the mean and k positive, tends to pull this period’s return down, and the larger k is, the longer returns tend to stay down.

Figure 4
Mean Reversion

The formula is:
r(t) = mean + k * (r(t-1) - mean) + volatility * ε

where ε is normal(0,1), k is usually in the range (-1, 1), and mean and volatility are parameters.

So now you have some kind of cycle. If things are down, they’re probably going to be down next time. If things are up, they’re probably going to be up next time. If the return was below the mean last period and k is negative, then that central factor tends to add to the return this period, and you’ll more likely get some kind of bounce-back. Conversely, if the return is above the mean, you’ll probably have some cooling off. Similar formulas are easily developed that look back two, three, four, or five periods. You can fit the parameters using least-squares curve-fitting techniques. Note that the mean and volatility in that formula are not simply the mean and standard deviation of your sample. You have to choose the mean and volatility so that when you apply the k factor, the end result has the desired mean and volatility.
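Here is a sketch of that simulation, including the adjustment just described. For this AR(1) form, the stationary mean is the mean parameter itself, but the stationary standard deviation is volatility / sqrt(1 − k²), so you scale the volatility parameter down to hit a target. The parameter values are illustrative:

```python
import numpy as np

def simulate_ar1(mean, k, target_sd, n_periods, seed=0):
    """Simulate r(t) = mean + k*(r(t-1) - mean) + vol*eps, eps ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    # Stationary sd of this process is vol / sqrt(1 - k^2), so choose vol
    # such that the simulated returns have the desired standard deviation.
    vol = target_sd * np.sqrt(1.0 - k**2)
    r = np.empty(n_periods)
    prev = mean
    for t in range(n_periods):
        prev = mean + k * (prev - mean) + vol * rng.standard_normal()
        r[t] = prev
    return r

returns = simulate_ar1(mean=0.08, k=0.3, target_sd=0.15, n_periods=100_000)
print(returns.mean(), returns.std())   # close to 0.08 and 0.15
```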

Mean reversion also fits the data reasonably well. There is evidence of cycles in financial markets. There’s a lot of debate about whether they’re real cycles or just noise that happens to follow a cyclical pattern at times, and I don’t have the answer to that. In practical terms, if there isn’t really a cycle and you have lots of data, your parameter-fitting algorithms will give you a k factor close to zero, and if it comes out close to zero, you might simply drop the factor.

We have touched on regime-switching models, which are another approach to going beyond the one-period models. There is evidence that the market goes through periods of low volatility and high volatility, and if you simply combine all that and take one mean and one standard deviation, you’re missing something. So the regime-switching models have two or more regimes—typically two. You have a low-volatility regime with a mean return of µ1 and a standard deviation of σ1, a high-volatility regime with a mean return of µ2 and a standard deviation of σ2, and two probabilities, p12 and p21, with which the model switches from one regime to the other. These are not correlation coefficients; they’re probabilities of switching from one regime to the other or vice versa. If you fit that against the data for the last 30 years or so, you find that the world has been in the low-volatility regime about 80 percent of the time. Typically σ1 is less than the volatility of the market as a whole, and σ2 is greater than the volatility of the market as a whole. That’s no surprise, but pretty consistently, the mean return when things are volatile is less than the mean return when things are not volatile.

So when the markets are volatile, they generally aren’t gaining, and if that affects anybody’s investment strategy, it’s worth taking note. There’s nothing in what I just said that says the regimes have to be lognormal, but evidence shows that this is a fairly good choice, hence the name regime-switching log normal. Academic papers that have tried more than two regimes report that the two-regime lognormal works as well as anything else and sometimes better. Another attraction of regime-switching log normal is that it lets you fatten the tail of the distribution without also affecting the volatility near the mean. That’s significant because the world is moving toward stochastic capital models and stochastic reserve models. If you’re only using these models to set your capital levels, then you can just increase the volatility to give yourself a safety margin. But if you’re using the same model to set your reserve and you’ve increased the volatility, now you have also increased your reserve, which was not the objective. There are ways to compensate for that. Basically, when in doubt, pick the model that fits the data and run with it.
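A minimal regime-switching lognormal generator might look like the following. The regime parameters are placeholders, not a calibration, though they are chosen in the spirit of the discussion above: the low-volatility regime earns more and is occupied roughly 80 percent of the time:

```python
import numpy as np

def simulate_rsln(n_periods, seed=0):
    """Two-regime lognormal: monthly log returns with regime-dependent mu, sigma."""
    rng = np.random.default_rng(seed)
    mu    = [0.010, -0.005]   # regime mean log returns (placeholders)
    sigma = [0.035,  0.080]   # regime volatilities (placeholders)
    p_switch = [0.04, 0.20]   # Pr(leave regime 0), Pr(leave regime 1)
    regime, log_returns = 0, np.empty(n_periods)
    for t in range(n_periods):
        log_returns[t] = rng.normal(mu[regime], sigma[regime])
        # These are switching probabilities, not correlations.
        if rng.random() < p_switch[regime]:
            regime = 1 - regime
    return log_returns

r = simulate_rsln(120_000)
print(r.mean(), r.std())
```

With these switching probabilities, the long-run share of time in regime 0 is 0.20 / (0.04 + 0.20) ≈ 83 percent, in line with the "about 80 percent" observation.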

There are other equity models out there. Some of the models have been criticized because the parameters do not change over time. It is possible to make models in which the parameters shift over time, but while such models fit specific data sets better, they don’t seem to fit all data better.

At a session two years ago that was put on by the CIA, we heard several models of
ever-increasing refinement, and several of the proponents said, "You know, if you
pick this particular set of data and study this to death and choose your parameters
just so, that does actually fit this particular model better than the other model," but
they will also generally admit that if you simply go to another period, the model
doesn’t work any better than something else. If you have enough terms and study
the same data long enough, you can get a better fit, but you haven’t really added
any insight.

Models have been criticized for ignoring that things are different now—there’s a new economy and the world has globalized. The reply to this is that the world has been changing all along. We did not wake up one morning and realize we’ve gone global; the world has been becoming more globalized for the last 30 years, and these models fit that data. The world tends to view things through fairly short time horizons, and anyone who tells you things are different is usually surprised the next day.

The models I’ve described for equity returns also work reasonably well for interest rates. Either the simple model or the mean reversion model works reasonably well, but you apply the math to the logarithm of the interest rate rather than the accumulated fund value, and then you get a pattern of future interest rates. These algorithms give you a single interest rate, such as the 90-day T-bill rate, the one-year T-bill rate, or the 20-year bond rate. They do not, on their own, give you a yield curve. You could create a number of independent models to give you the interest rate at different terms, but there’s very little economic justification for that, because there’s generally some relationship between short- and long-term interest rates.

One approach is to have a mean reversion model for the long-term interest rate and
a second mean reversion series for the short-term rate as a percentage of the long-
term rate. There’s significant empirical evidence that this works, and I’ve tested it
myself, although I’ve never seen it in one of the finance textbooks.

Another approach is to generate a series of short-term interest rates and then calculate the implicit long-term rate by multiplying out those future short-term rates. Doing this calculation rests on the assumption that interest rates are arbitrage-free, which is a point of some debate. If you want to make arbitrage-free your assumption, you determine your three-year rate today by running your model for three years with the short-term rate (in Figure 5, the rates are five, six, and seven percent). If the world is truly arbitrage-free, investing for three years must have the same effect as rolling three one-year investments, so multiplying the rates out and taking a cube root, the three-year rate today must be about 6.0 percent.

Figure 5
Yield Curve Example

Your mean reversion algorithm says that the first three one-year rates are:
i1 = 5%, i2 = 6%, i3 = 7%
Then the implied three-year rate at time 1 is:
(1.05 * 1.06 * 1.07)^(1/3) - 1 ≈ 6.0%
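The arithmetic in Figure 5 is just a geometric average of the projected short rates; a few lines confirm the implied three-year rate:

```python
# Implied 3-year rate from projected 1-year rates of 5%, 6%, and 7%.
short_rates = [0.05, 0.06, 0.07]
growth = 1.0
for i in short_rates:
    growth *= 1.0 + i
implied = growth ** (1.0 / len(short_rates)) - 1.0
print(f"{implied:.4%}")   # 5.9966% -- about 6.0%
```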

That only gives you the interest rate itself; it doesn’t give you the returns on your investment portfolio. To get a portfolio return, you have to model the interest rates, and then you have to generate a portfolio of bond holdings. These might be one-year bonds or 20-year bonds or any combination thereof. Then you have to model the shift in bond values as interest rates change. If you’re simply modeling one-year investments, then the one-year interest rate is all you need. If you’re modeling a 20-year investment, you have to make a 20-year investment choice and then, a year from now, work out the value of a bond with 19 years to maturity under whatever interest scenario you’ve got. That gives you a return for the period, and if you’ve got a mix of bonds of different terms, you’ve got a fairly complex modeling exercise. But that’s the only way to have the portfolio return accurately reflect the interest rates and your choice of investment.
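The revaluation step just described—buy a 20-year bond, then a year later reprice it as a 19-year bond at the new rate—can be sketched like this, assuming annual coupons and a flat yield for discounting (both simplifications of mine):

```python
def bond_price(coupon, years_to_maturity, yield_rate, face=100.0):
    """Price of an annual-coupon bond discounted at a single flat yield."""
    pv = sum(coupon * face / (1 + yield_rate) ** t
             for t in range(1, years_to_maturity + 1))
    return pv + face / (1 + yield_rate) ** years_to_maturity

# Buy a 20-year 6% bond at a 6% yield (price ~ par), then revalue a year
# later as a 19-year bond at whatever rate the scenario produced, say 7%.
p0 = bond_price(0.06, 20, 0.06)
p1 = bond_price(0.06, 19, 0.07)
coupon_received = 6.0
print(f"one-year holding return: {(p1 + coupon_received - p0) / p0:+.2%}")
```

In a full model, you would repeat this repricing for every bond in the portfolio along every interest rate scenario, which is exactly the "fairly complex modeling exercise" the speaker mentions.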

Of some note, significant testing has shown that the models I’ve shown you do not work well if you model the returns on your total bond portfolio directly, the way you would model your total stock return. That approach tends to produce periods of implied negative interest rates, and we just haven’t seen enough of those to believe it.

There aren’t many ways to coherently model interest rates and equity returns in the same model, and this is particularly important in the seg fund guarantee, where to some extent you’re hoping that your equity returns are counterbalanced by bond returns going in the other direction. It’s also important if you’re trying to forecast hedge prices. Hedge prices reflect the underlying, or assumed underlying, behavior of the stock. Another important input in the hedge price calculation is what you think interest rates are doing on the day you’re trying to calculate the future price for your hedge. You have to have some kind of correlation between your interest rates and your equity returns. One of my favorites is the Wilkie model, which has relationships among inflation, interest rates, and equity returns. In practical terms, the model does have a reasonably good fit, and Wilkie and his colleagues have produced a few papers showing it works surprisingly well in a number of countries. However, there are 30 parameters per country to fit, and if you’re doing more than one country, that’s 30 times however many countries you’re doing. The parameters are very difficult to fit over time.

The Wilkie model doesn’t model individual funds versus the overall market. It gives
you risk-free interest rates and total equity market returns and a few other things,
but it doesn’t tell you how the XYZ fund is going to do relative to the market as a
whole. Even if you do all that work, it still has some significant shortcomings. The
equity models I gave you a minute ago work equally well if you take the return for
the XYZ fund or the market as a whole or a foreign country or any of those things.

Arbitrage-free models relate to the setting of parameters rather than the mathematical structure of the model itself. Arbitrage-free asserts that there are no opportunities to gain by swapping what are essentially the same risks, because the market will force such prices to equilibrium. That’s the basic assumption: if this instrument is essentially the same as that one, then their returns and prices ought to be the same, or the corresponding mirror image.

It’s been elevated to a religion in some circles, but if you read the finance textbooks, it’s really introduced simply as a way to get to the other side of some equations. You’ve studied one side of the market and you’ve got a formula, but you can’t solve for the price or the volatility because you need something on the other side of the equation. There is some justification that if the market is reasonably efficient, you can look at some other aspect of the market with a similar equation, set the two equal, and solve for alpha or beta or whatever it is. That’s a reasonable assumption in many quarters, but it’s far short of a religious justification for saying it’s necessarily true in all quarters.

Black-Scholes and many option pricing models use arbitrage-free arguments to get prices. There’s evidence that this works fairly well in the parts of the world or the parts of the economy where it’s used, but that does not necessarily make it true everywhere else. Arbitrage-free models are generally used for short-term options, typically less than six months, where the assumed long-term growth rate tends not to matter. Their users are literally worried about minute-to-minute, hour-to-hour, day-to-day volatility in whatever it is they’re pricing, so typically they adjust the parameters daily and do extensive back-testing.
Over a period of many years, though, my understanding is that this approach essentially assumes the markets are giving you a risk-free rate of return, and there are probably finance professors in the room who will disagree with me. Whether you’ve assumed that or not, that seems to be the result. In much actuarial work,
such as 20- or 30-year segregated fund guarantees, the long-term return is far
more important than the day-to-day volatility because you’re guaranteed an
amount based on the amount of money the person gave you, and if you’ve had a
couple of years of good returns, that gives you a very nice cushion to absorb a
couple of days of bad volatility. So, in most actuarial work, it’s far more important
to get the long-term average right than it is to get the day-to-day volatility right.
Whatever your assumption, the stock market does outperform the risk-free rate over the long term, and if it didn’t, you could reasonably wonder why anyone puts any money in the stock market. Whatever your mean returns are, they have to be higher than the risk-free rates.

My final topic is policyholder behavior. There’s very little theoretical basis and even less factual data for modeling policyholder behavior. Two years ago, the accepted wisdom was that policyholders were rational—they had a segregated fund guarantee that said, "I’ll get my money back no matter what this fund does." You would think that when the market went down, people would hang on to their funds and keep their policies when they were most valuable. All the evidence over the last couple of years is completely the reverse. When funds do badly, people think they’re smarter than the market, dump their funds while they still can, and go somewhere else. So just when the guarantees would cost an insurer the most, policyholders tend to leave and let the company off the hook. This is good news for companies, but whether you want to count on that continuing to happen is another question.

In terms of modeling, you build a model with two probabilities. If funds are doing poorly, there’s a probability of x that people will leave; if funds are doing well, there’s a probability of y that they’ll leave. Your model must dynamically assess how the fund is doing—has there been a good return this period, or over the last two periods?—and then pick x or y and randomly lapse the policy. Some of the models out there take single deterministic scenarios and say you’re losing people at a certain aggregate rate. Some of the models I’ve seen actually model a number of people, such as 10,000 individuals, and have those individuals randomly lapse or die, as in the sketch below. That gives you perhaps a better model, but I’m not sure it gives you a different answer.
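Here is a sketch of that two-probability dynamic lapse model applied to individually simulated policyholders. The rates x and y are placeholders; following the evidence described above, the lapse rate after a bad period (x) is set higher than after a good one (y):

```python
import numpy as np

def simulate_lapses(fund_returns, x=0.15, y=0.06, seed=0):
    """Lapse 10,000 individual policyholders along one fund scenario.

    fund_returns: 1-D array of the fund's return in each period.
    Returns the number of policies still in force at each period end.
    """
    rng = np.random.default_rng(seed)
    in_force = np.ones(10_000, dtype=bool)
    counts = []
    for r in fund_returns:
        # Dynamic assumption: lapse rate depends on how the fund just did.
        lapse_rate = x if r < 0.0 else y
        lapses = rng.random(in_force.size) < lapse_rate
        in_force = in_force & ~lapses
        counts.append(int(in_force.sum()))
    return counts

print(simulate_lapses(np.array([0.08, -0.12, -0.05, 0.10])))
```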

There’s also significant evidence that policyholders are not homogeneous. Do people on average do this? Do people on average do that? There tend to be two types of policyholders: the ones who move every time the market does anything, reset on a whim, and want to know 10 times a week how their funds are doing; and the ones who don’t seem to react to anything for any reason at any time. It’s probably a mistake to model this as one group with average behavior. Probably that second group will never cost you a cent no matter what happens, while the first group is very active and could cost you a heck of a lot more. In that case, it’s important to model what the first group is costing you and whether that group is 10 percent of your customers or 90 percent of your customers.

Similar logic can be applied to assess whether or not policyholders move between
funds. If funds are up, people either say, "I’m going to stay with it since it’s doing
well" or "I’m going to cash it now while I’m ahead." There’s not much evidence as
to which behavior is dominant, but it affects your behavior as a company. Of
course, all of this doesn’t tell you much without a fairly sophisticated pricing model
to see what the related benefit costs will be.

To recap, there are models out there from the very simple to the very complex. The empirical data so far says that moderate complexity seems to work as well as anything else; a number of academic papers have deliberately made the models more complicated to see if that helps, and beyond a certain point it doesn’t. How can you model anything as complicated as the world economy? Well, we’re not trying to make economic predictions; we’re simply trying to generate a very large number of scenarios to let us get some sense of what our cost of loss or our risk of loss is.

MR. HILL: Our last speaker will be Eric Von Shiling. Eric has worked for MMC
Enterprise Risk Consulting for about eight years. His primary areas of focus include
guarantees on variable annuities and segregated funds, economic capital
calculations, and enterprise risk management.

MR. ERIC VON SHILING: I am going to discuss stochastic pricing of variable annuity guarantees: guaranteed minimum death benefits (GMDB) and guaranteed minimum income benefits (GMIB).

Stochastic modeling generates a large amount of information that we have to digest and draw conclusions from. It’s a complex exercise. We’ve heard a lot about whether it is worthwhile, and the key thing for me is that it actually tries to assign likelihoods to future events. With that, we can build a picture of what our future outcomes could be, assess the variability, and come up with a price that we feel comfortable taking. In this situation, we’re taking on risk. We need to be willing to take the upsides and the downsides for that price.

Our variable annuity guarantees are highly skewed, so our most probable outcome is little or no cost to us. If you have a very poor scenario in mind, you can run it deterministically and say, "if this happens, this will probably be my cost." But how likely is that scenario? How can we relate it to our other scenarios, such as the most probable one of no cost at all?

With this stochastic pricing framework, we can integrate all of that and draw a picture of our future outcomes, and we can really assess the price that we feel comfortable in taking. Today I’m just going to introduce stochastic modeling with variable annuities. We’re going to look at base benefit costs—basically the cash flow cost of the guarantee component alone—and then we’re going to extend this to incorporate a capital element and the associated cost of capital. Once we have this framework, with cost of capital, future balance sheets, and income statements, we can then apply other pricing objectives that are more common to our normal measurement.

The core of variable annuity GMIB and GMDB benefits is the embedded option: the insurance company is guaranteeing a minimum value of the policyholder’s fund. The embedded option is asymmetrical. As markets go up relative to the minimum value, there’s no payment to be made; as markets go down, the insurance company has to pay the benefits. An additional characteristic of the risk profile is that at the same time our costs are going up because markets are going down, our revenue stream is also going down, because our revenues are generally expressed as basis points on the fund balance. The other thing to note is the nondiversifiable element of these benefits. To a certain extent, you can achieve diversification across various ages, issue times, different maturities, etc., but underlying it all is still a strong systematic element, as was discussed earlier. What you can’t really get rid of is the underlying connection to market movement. So we’re combining an asymmetric cost profile with strong nondiversifiable risk.

Table 1 gives some numbers for the models we’ll implement. This is a sample pricing cell: a single deposit of $1,000 based on the S&P 500 or a diversified U.S. equity fund, with an issue age of 55. I gave it a GMDB that’s an annual ratchet; the minimum value resets annually on the anniversary. The GMIB has a minimum value rolled up at 5 percent per annum, and you’re allowed to annuitize between 65 and 75. Overall, the guarantee terminates at age 85.
Table 1
Example Product

Sample pricing product:
- Single deposit of $1,000
- S&P 500-based fund
- Issue age 55
- GMDB: net deposit with annual reset on anniversary
- GMIB: net deposit rolled up at 5% per annum, annuitization 65-75
- Guarantee terminates at age 85

Assumptions:
- S&P index
- Mortality: Annuity 2000
- Lapse: 10% per annum
- Asset earned rate: 6%
- Hurdle rate: 15%

The important thing here is that we have to create a stochastic model from which
we can do our pricing exercise. As was mentioned earlier, if you don’t know what
you’re doing or if you’re building a crazy model, you’re going to get crazy results at
the end. You can think of it perhaps as if you’re trying to use a mirror to see what
your possible future outcomes could be. If you build yourself a nice straight mirror,
you’re going to see an accurate reflection of yourself in that mirror. If you’re looking
at the back of a spoon, then you’re going to have a very big nose and a small ear.
You might go and get a nose job to shrink your nose, when really it’s the ear that’s
the problem. So I can’t stress enough that this is a critical component, and you
have to be comfortable with your underlying stochastic model before you take the
next steps to do the pricing exercise and to draw conclusions. This way you can be
confident about some of the conclusions that you’re drawing. Obviously not all
models are perfect, and every model contains model error, so you have to
recognize and remember that when you’re drawing a conclusion, but hopefully you
can create a reasonable representation of what the future is going to be.

Let’s build our model. In Table 2, I’ve separated the embedded option from the underlying variable annuity contract. The approach here is to use stochastic modeling where you can benefit from it most; adding all kinds of random elements risks confusing things and makes it difficult to draw conclusions. So at least we’ll begin with as simple a base concept as possible.
Table 2
Stochastic Pricing Model

Price the embedded option separately from the base VA product:
- Model elements that benefit from stochastic simulation
- Fixed total expense charges (investment, M&E, etc.)

Stochastic model of investment returns:
- Desired stochastic process of the underlying risk factor
- Regime-switching lognormal for the S&P 500 index

Cash flow / product model:
- Projects associated cash flows for each stochastic scenario
  - Survivorship
  - Policyholder behavior (dynamic lapses, GMIB election rates, etc.)

Our next step is to choose a stochastic model for the underlying risk factor. In this case, the future investment return is the critical component, and I’m just modeling a U.S. equity fund. I selected a regime-switching log normal model calibrated to the S&P 500 index.

All right, so now we’ve generated 1,000 scenarios, or you could choose a larger number if you like more precision. Next we need a product or cash flow model that will generate the cash flows along each of these scenarios. For the other uncertain elements, like mortality and lapse rates, we still maintain a deterministic approach using our normal actuarial decrement model. The other element you can reflect at this point is policyholder behavior: we may build in dynamic lapse rates that depend on the movement of the market, and GMIB election rates that depend on how in-the-money the guarantee is.

Before I move on, I’m going to get to this issue of realistic versus risk neutral as the basis for your scenarios (Table 3). Which one should we use for this exercise? We’re pricing a financial contract here, so it’s natural to ask whether we should be using a capital markets approach or a realistic framework. The way I see it—and it could just be a simplistic view—when you’re going to be bearing a risk, in this case our variable annuity guarantee against the uncertainties of the future, you need to understand what the future outcomes could be so you can evaluate the uncertainty and how comfortable you are with this risk. You can only do that in a realistic framework.

Table 3
Realistic vs. Risk Neutral Basis

Realistic scenarios (P-measure):
- Un-hedged liability
- Policyholder behavior
- Future balance sheet provisions

Risk neutral (Q-measure):
- Assumes you can build a replicating portfolio
- Useful benchmark
- Useful if you include a hedge / risk management program

The opposite is risk neutral, also known as Q-measure or arbitrage-free. It is often used for pricing options in the capital markets, but underlying it is the assumption that you can build a replicating, arbitrage-free portfolio and that the price of this replicating portfolio must be the same as your option. However, you have to be able to fully hedge the risk and satisfy all the other constraints—continuous rebalancing, no transaction costs, and the rest of the familiar list.

I think, especially for variable annuity guarantees, most companies are bearing most of this risk on a diversified basis. Perhaps you have some hedging in place, but there’s a considerable amount of basis risk. Maybe you can use an index fund to help you hedge, but your fund may or may not track that index. So I think we need to view this from a realistic perspective so that we can really appreciate the risk-return tradeoffs, see the picture, and get comfortable with it.

All right, let’s put out a few numbers. The first thing we’re going to look at is just the base benefit cash flow cost (Table 4). We project the cash flows, discount the benefit costs to get a present value (PV) of benefit costs, and divide by the present value of the fund at each point in time. We end up with an equivalent level spread, or basis-point measure, as the price for that particular scenario. We can see from the summary statistics what we have. The average spread is 14 basis points, with a maximum of 265. The interesting measures are the percentiles of the distribution and the conditional tail expectation (CTE). I think we’ve heard about this a lot these last few days; it’s essentially the average of the tail of your distribution. If your threshold is 80, we’re averaging the worst 20 percent of scenarios. In Chart 2, we graph the equivalent level spread. Beyond the 90th percentile, it really starts to take off, and that’s the tail risk that we’ve been talking about. This is where you may want to say, "I can take the 96th percentile or the 98th percentile or the 99th percentile as my worst result." At which point do you draw the line? It’s a difficult question.

Table 4
Base Benefit Costs

Equivalent level spread: PV benefit costs / PV fund value, for each scenario.
Summary statistics: average, percentiles of the distribution, conditional tail expectation (CTE).
Risk neutral cost: the theoretical price if you could perfectly hedge the risk (no transaction costs, continuous rebalancing, no basis risk, etc.).

Base Benefit Cost (bps)
  Average: 14.3
  Min: 0.8
  Max: 265.0

  Percentile (x):  80: 15   90: 23   95: 54   97.5: 73   99: 141   99.5: 183
  CTE (x):         80: 43   90: 67   95: 101  99: 196

  Risk Neutral: 46
  RN Percentile: 94.3%
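Percentiles and CTEs like those in Table 4 come straight from the simulated spreads. Here is a small sketch of the calculation; the sample data is random placeholder output, not the table’s actual distribution:

```python
import numpy as np

def cte(sample, level):
    """Conditional tail expectation: average of the worst (1 - level) of outcomes."""
    threshold = np.quantile(sample, level)
    return sample[sample >= threshold].mean()   # costs: larger is worse

rng = np.random.default_rng(seed=5)
spreads_bps = rng.lognormal(mean=2.0, sigma=1.0, size=1_000)  # stand-in costs

for x in (80, 90, 95, 99):
    print(f"percentile {x}: {np.quantile(spreads_bps, x / 100):6.1f} bps,  "
          f"CTE({x}): {cte(spreads_bps, x / 100):6.1f} bps")
```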

Now there’s another line here labeled risk neutral. I generated a number of risk-neutral scenarios as well and determined the price on that basis, and it came out at 46 basis points, which is about the 94th percentile. So that’s a theoretical price and quite an interesting benchmark: if I could perfectly hedge this risk—no transaction costs, continuous rebalancing, no basis risk, and shorting of all securities—then this would be the price that I would have to charge.

And what question do I answer with this type of measure? If I charge X basis points of margin, accumulate it with interest, less the costs I incur over the scenario, I can assess a Y percent likelihood of covering my base benefit cost over the lifetime of the product. This is a useful measure; it gives you a sense of the risk you might be taking. However, it doesn’t answer other questions that we’re generally concerned with in normal operations measurement. How much capital is this product going to require? What is the return on capital at this pricing level? How volatile are my earnings, or what kind of earnings patterns could I get at this price?

It’s important for us to extend our pricing structures to give us the information we normally deal with in the course of managing the business, and a key part of that is projecting the future cost of balance sheet provisions. In general, balance sheet provisions take two forms: your liability, or reserve, and the capital. The combination of the two is your total balance sheet requirement. You can do this on an economic or a statutory basis, depending on the rules that are out there. If you’re going to be taking on this risk, there’s a certain amount of capital required to support it, so I think it’s important to pursue what the economic capital would be, and you would need a return on that economic capital. Unfortunately, this builds in a bit of complexity, because basically you have an option here, and the amount of money that you’re going to require will be value based, and I use the term loosely. It’s not necessarily a market value, so to speak, but it could be a more conservative value measure that you might use for your reserve and your total balance sheet requirement, and hence the capital.

The implication of this is that it’s going to be state dependent; it’s going to depend on the relationships within the contract at the point in time that you do the valuation. You know from option theory that you have the strike price and the security price: the more in-the-money the guarantee, the higher the value of the option. So with our embedded option, as we move out along a particular path, the value, and hence the capital requirements we’re going to need, will fluctuate depending on the relationship of market value to guarantee value.

As a first example, I’m just going to use the Canadian GAAP requirements. You can use others—perhaps U.S. GAAP market value accounting—or you could come up with another economic framework you might feel more comfortable using. A few other sessions have gone over these bases, but the key is that it’s sort of an economically based calculation, so I’m just going to do a quick recap. As part of a stochastic valuation, you run, say, 1,000 or 2,000 scenarios, and you calculate for each scenario your present value net cost: the present value of benefits less the present value of risk margins. You set your total balance sheet requirement at a high threshold of the stochastic distribution—in this environment, let’s say CTE (95)—and you figure your liability at a lower threshold, maybe CTE (80) or a little less. The capital represents the difference between the total balance sheet requirement and the liability, and you may include an operating multiple.

So we get back to our problem here, which is projecting balance sheet provisions. As I mentioned before, it’s state dependent and so theoretically requires a stochastic analysis within a stochastic analysis. You can imagine the practical problem this causes. I have my original 1,000 scenarios, and I’d like to project capital along each of them. So I move out one time point and do a valuation, then another time point and another valuation. With 1,000 scenarios, doing the valuations quarterly for 30 years—120 time points—I’m talking about 120,000 valuations to do this correctly.

How can we get around this? We could use a factor-based approach; that’s already the Canadian basis for setting total balance sheet requirements—a factor that is a percentage of your fund value and relates to the different products you have. The problem is that it’s difficult to reflect the changing evolution of your product, its maturity, and all the nuances you may have. We could go further and develop an option-pricing closed form. A few have been put out there with which you can essentially calculate a closed form solution for a CTE (95) based on the state of your option value. Or you could try out some deterministic scenarios and just get a measure of where the value is. Or you could go with the full stochastic-within-stochastic projection. I settled on an approach called the neighboring path approximation, which we have used effectively.

What is the essence of the neighboring path approximation? We use the
information that we already have. We have our initial set of stochastic
projections. Let's say I'm on the first scenario and I've moved out to time
one, and I would like to measure this valuation; I can look for other scenarios
that happen to be in a similar state to where I am. Say my MV/GV ratio has gone
to 1.1; if I scan across all my other scenarios at the same time point, there
could be a number of similar ones. I collect them, pull them all together, and
do a mini valuation based on this subset of scenarios in a similar state. If I
extend my original projection and generate a large number of these, say 10,000,
I might be able to find 300 at a given point in time that are very similar and
use them to estimate the valuation. A smaller number of neighbors does
introduce some standard error into your measurements, and it's a little more
difficult at the edges of your projection, where there aren't many similar
scenarios, but what we've used is actually quite effective. It overcomes the
practical size of the problem; it allows us to project future capital and
reserves.
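
Here is a minimal sketch of a neighboring path valuation under stated
assumptions: the state variable is the MV/GV ratio, the similarity band and
minimum pool size are illustrative parameters, and `pv_from_t` is a
hypothetical array holding the present value, as of each time point, of the
remaining net costs along each outer path.

```python
import numpy as np

def neighboring_path_valuation(mv_gv, pv_from_t, scenario, t,
                               level=0.95, tolerance=0.05, min_paths=50):
    """Mini CTE valuation at (scenario, t) using neighboring outer paths.

    mv_gv:     (n_scenarios, n_times) MV/GV ratio along each outer path.
    pv_from_t: (n_scenarios, n_times) present value, as of time t, of the
               remaining net costs along each outer path.
    """
    target = mv_gv[scenario, t]
    distance = np.abs(mv_gv[:, t] - target)
    neighbors = distance <= tolerance          # paths in a similar state at t
    if neighbors.sum() < min_paths:
        # At the edges of the projection few paths are similar, so fall
        # back to the min_paths nearest states (at some cost in accuracy).
        neighbors = np.argsort(distance)[:min_paths]
    subset = np.sort(pv_from_t[neighbors, t])
    cutoff = int(np.ceil(level * len(subset)))
    return subset[cutoff:].mean()              # CTE(level) over the neighbors
```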

So what can we do now? With future capital and reserves, we have future
financial statements. For a given risk charge, we can project the cash flows
and income statements, giving us our after-tax net income, and we can also
assess a charge on the capital we hold. So now we can generate an embedded
value for each scenario.
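
As an illustration of that last step, here is a minimal sketch of an embedded
value calculation for one scenario. It assumes annual, end-of-period cash
flows, takes the charge on capital as a hurdle-rate discount of distributable
earnings, and treats the 15 percent hurdle as an assumed parameter.

```python
import numpy as np

def embedded_value(after_tax_income, required_capital, hurdle=0.15):
    """Embedded value along one scenario: the present value, at the hurdle
    rate, of distributable earnings, i.e. after-tax income less the increase
    in required capital each period (capital released flows back out)."""
    income = np.asarray(after_tax_income, dtype=float)
    capital = np.asarray(required_capital, dtype=float)
    change_in_capital = np.diff(capital, prepend=0.0)
    distributable = income - change_in_capital
    periods = np.arange(1, len(distributable) + 1)
    return np.sum(distributable / (1.0 + hurdle) ** periods)
```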

When we come to our pricing objective, we now have much more information,
expressed in our more common operating measures. We have information about
income and how much capital we need, and we can measure the return on capital.
Since this is now in the framework in which we normally run our business, we
can set ourselves a pricing objective and try to solve for a spread that
satisfies it. In my example (Table 5), I'm just going to try to get the return
on capital, averaged across all my scenarios, to meet the hurdle rate.

Table 5
Set Pricing Objective

• New possibilities in setting the pricing objective
  – Average ROC = 15%
  – Likelihood/severity of negative scenarios (how bad could it get?)
  – Incorporate earnings volatility

• Example pricing objective
  – Average ROC = 15%

• Solved margin = 58 bps
  – 96th percentile in the base cost distribution

Once we’re doing the embedded value, we actually assign capital that varies
depending on the riskiness of this, and we’ve included a charge for this. First, we
take an average of the return. We use a solved margin of 58 base points and a 96th
percentile in base cost distribution.
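
Solving for the spread is a one-dimensional root search. The sketch below
shows the shape of it; `average_roc` is a hypothetical stand-in for rerunning
the full stochastic projection at a given risk charge, with toy numbers chosen
so the answer lands near the 58 basis points quoted above.

```python
from scipy.optimize import brentq

def average_roc(spread_bps):
    # Stand-in for the full projection: in practice this reruns the
    # stochastic model at the given risk charge and averages the return
    # on capital across scenarios. The linear form is purely illustrative.
    return 0.02 + 0.00225 * spread_bps

hurdle = 0.15
solved = brentq(lambda s: average_roc(s) - hurdle, 0.0, 300.0)
print(f"Solved margin: {solved:.0f} bps")  # about 58 bps with these toy numbers
```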

Chart 3 shows embedded value as a percentage of deposit as a cumulative
distribution. Compared with the base benefit cost distribution, it's fairly
balanced; however, it does have a bit of a longer tail on the far left, and we
see that in this case we're negative about 45 percent of the time and positive
the remainder.

At this stage, we have to take a step back and decide how comfortable we are with
this downside. We may have to go back and add more constraints to the formula or
perhaps introduce a change in the product structure, or possibly even exit the
market.

Charts 4 and 5 show the total balance sheet requirement as a percentage of fund
value at each point in time and the corresponding earnings volatility. In
Chart 4, the blue line is the average at each point in time across all the
scenarios. We see that it starts at around 1.5 percent, trends up to seven or
eight percent, and then declines over time. However, if you look at a single
scenario (we've plucked out number 444), you see that it bounces around quite a
bit, and this is a reflection of the value-based approach we have to setting
capital in this environment. As the scenario bounces around, so does our
capital requirement. This shows that if we aren't hedging the liability, we're
exposing ourselves to volatility. This is just a single policy: we are not
including diversification across the block, and we don't have any potential
smoothing elements. So this is what we would experience if we were to take a
pure value approach to setting our total balance sheet requirement.

Then we can look at our earnings volatility in Chart 5. The blue average line
is consistent and steady; however, along a particular scenario there is a
significant amount of volatility. This again reflects the fact that we have a
single policy and that our reserve element is also value-based; it's not
focused on the emergence of income at all. As a consequence, we see it bouncing
around considerably. The benefit, though, is that we've now taken our pricing
exercise and assigned likelihoods to future events, which allows us to paint
this picture of what our future outcomes could be. We're going to have to bear
this risk unless we do some hedging, and even with hedging we'll still have a
considerable amount of basis risk. So we're setting a price, and we need to be
comfortable with it.

With an embedded option, we can project base benefit costs, but it's important,
I think, to extend the analysis to the other operating measures that we're
normally used to dealing with, so that we can make better decisions about the
risk we're taking on.

MR. HILL: I just have one quick note on what he was talking about, which is the
complexity. If you start doing some type of stochastic capital calculation
within a stochastic model, that gets really difficult. But I think one problem
in the U.S. has been the way we price the guaranteed living and death benefits,
the GMIBs and the GMDBs. The typical pricing is the present value of future
benefits divided by the present value of future account value to get a
basis-point cost; you pick an 80th or 90th percentile, and that's the charge,
and you're ignoring any balance sheet piece. As he was pointing out, it's very
important that you look at the cost of your capital and any reserve that you
have to set up. I think that is often being ignored.
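
For reference, here is a minimal sketch of that typical basis-point-cost
calculation for a single scenario; the flat 6 percent discount rate is purely
an assumption. In practice you would compute this for every scenario and then
pick the 80th or 90th percentile of the resulting costs as the charge.

```python
import numpy as np

def basis_point_cost(benefit_cfs, account_values, discount_rate=0.06):
    """One scenario's guarantee cost as an equivalent level spread:
    PV of guarantee claims over PV of account value, in basis points."""
    benefit_cfs = np.asarray(benefit_cfs, dtype=float)
    account_values = np.asarray(account_values, dtype=float)
    t = np.arange(1, len(benefit_cfs) + 1)
    v = (1.0 + discount_rate) ** -t            # flat discount factors (assumed)
    return 1e4 * (benefit_cfs * v).sum() / (account_values * v).sum()
```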

PANELIST: The point that I was trying to make is that in some circumstances,
arithmetic averages are the correct thing to look at, and in other circumstances,
geometric averages are the correct thing to look at. When you have compounding
and uncertainty, then the geometric average is the thing you want to look at. But I
also did mention that whether or not the investment is a good one really depends
on how you play it. If you buy and hold, then it’s not a good investment. On the
other hand, if you’ve got a large number of investments that are independent, then
your expected return is five percent across them. So it’s not necessarily a black and
white issue. It depends on the circumstances. You need to take care and not just do
the obvious thing, because the obvious thing might get you into trouble.

MS. MARY HARDY: I have some comments, mainly for Steve Prince. I enjoyed your
presentation, and thank you for mentioning the regime-switching model, which is
close to my heart, but I did have a few disagreements when you got on to
discussing arbitrage-free models. For stock return models, any model that has a
positive probability of either a positive or a negative outcome is
arbitrage-free, because you can't make a risk-free profit on it. When you use
the term arbitrage-free model, I think what you really mean is a risk-neutral
model, which is actually something different; what I would call a risk-neutral
model is what you call arbitrage-free. The mathematics works out so that when
you're valuing under the Q-measure, the risk-neutral measure, the drift term of
the P-measure doesn't matter. In other words, you made the point that we're
discounting at the risk-free rate of interest, but that absolutely does not
mean that that's what we're assuming happens in the market, which was the
implication of one of your slides. It falls out of the mathematics, but it is
not an assumption that stocks will not outperform bonds. In fact, the whole
theory says that stocks have to outperform bonds, in the sense that you have to
have extra returns to compensate for the extra volatility.

I also had a point for Michael Bean, and I think it relates to the point that
someone else was making. I am absolutely sure that if I have independent
returns, my bank distributes its returns each year, and the expected return
each year is five percent, then after accumulating over n years the expected
value of my accumulation is 1.05^n. I can personally prove my result, and I can
actually disprove your result.

MR. BEAN: In the example that I gave, the arithmetic expected return per period
is 5 percent, which means that the expected per-period accumulation factor is
1.05. Since returns in distinct periods are assumed to be independent, it
follows that the arithmetic expected value of a $1 investment after n periods
is (1.05)^n, which tends to infinity as n becomes arbitrarily large. On the
basis of this observation, many people erroneously conclude that a buy-and-hold
strategy with this investment is a good one. However, consider the probability
of actually having a gain after n periods, i.e., Pr(P_n > 1), where P_n is the
value of a $1 investment after n periods. For large values of n, the
distribution of P_n is approximately lognormal, and one can show that
Pr(P_n > 1) is approximately zero, which means that there is virtually no
chance of coming out ahead with this investment over the long term. This
demonstrates that the arithmetic expected value is an inappropriate statistic
to use when analyzing an investment in which uncertainty compounds over time.
The appropriate statistic to consider is the geometric expected value, which in
this case is (0.9)^{n/2} after n periods.
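
As a numerical check of this argument, here is a short sketch under the factors
implied by the example: a $1 stake grows to $1.50 or shrinks to $0.60 with
equal probability, giving an arithmetic mean of 1.05 and a geometric mean of
sqrt(0.9) per period. The 1,000-period horizon is an arbitrary choice, and
everything is computed exactly from the binomial distribution to avoid
overflow.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import logsumexp

n = 1000
k = np.arange(n + 1)                              # number of up moves
log_pn = k * np.log(1.5) + (n - k) * np.log(0.6)  # log of P_n given k ups
pmf = binom.pmf(k, n, 0.5)                        # probability of k ups

log_mean = logsumexp(log_pn, b=pmf)               # log E[P_n], computed stably
print(f"log E[P_n]  = {log_mean:.2f} (vs n*log(1.05) = {n * np.log(1.05):.2f})")
print(f"Median P_n  = exp({0.5 * n * np.log(0.9):.1f}), essentially zero")
print(f"Pr(P_n > 1) = {pmf[log_pn > 0].sum():.1e}")  # about 1e-4 at n = 1000
```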

PANELIST: Steve Mitchell made some comments earlier about a professor in the
audience who was going to take issue with this. I don't disagree with Mary. I
would just caution that some other finance professors have different points of
view, and those are what I was disagreeing with. Thank you for making that
clarification, Mary.

MR. RANDY WRIGHT: When you calculate variable annuity GMDB costs, project those
costs over 30 years, and then discount them back to a present value to compute
a basis-point equivalent or something similar, what would be the theoretically
right interest rate to use for that discounting? Would it be the interest rate
you used for setting reserves, or the hurdle rate that your company is going to
hold you to, or the interest rate that you originally priced as your goal, or
perhaps the interest rate that you've assumed as your mean market return?

PANELIST: I guess the answer is none of the above. The point is you have some
cash flows; what are the reserves today against those cash flows? One approach
is to say it's whatever rate I do my reserves on, because that's what I'm
discounting at. The second approach recognizes that somewhere in your modeling
you have an implicit return on whatever it was you were going to be investing
in when you set up those reserves, and if your model can extract that effective
interest rate, I think that is the theoretically more precise discount basis:
if we hold X dollars today and it earns interest in the meantime, what are the
odds that that money will be sufficient to cover the cash-flow claims we've
also modeled? But try to explain that to a regulator or an auditor.

FROM THE FLOOR: My question is for Mr. Bean. Do I understand correctly that if
I play your game assuming the corporate account model, that is, if I earn the
$0.50, I transfer the extra $0.50 to my corporate account, and if I lose the
$0.40, I get the $0.40 back from the corporate account, then on average I'd be
winning? Is that right?

MR. BEAN: Yes.

FROM THE FLOOR: But if you play consecutively and let the results compound,
then you'd lose; the math works that way. However, if you play consecutively
with the corporate account model, where if you win the extra $0.50 you transfer
it to the corporate account and if you lose the $0.40 you transfer it back from
the corporate account, then you'd win.

MR. BEAN: Yes, that’s right because in that circumstance you’d be keeping your
principal value constant.

FROM THE FLOOR: I just wanted to make sure that the corporate accounts model
works.

MR. BEAN: The point is that if your principal value changes over time in an
uncertain way and your gains and losses are compounded, then you might not get
the results that you think. That’s the point that I was making.
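
A quick simulation makes the contrast between the two ways of playing
concrete; the number of plays and paths here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
plays = rng.choice([0.50, -0.40], size=(100_000, 100))  # win/lose per $1 stake

# Constant $1 stake (the corporate account absorbs gains and losses):
# outcomes add, and the expected gain per play is +0.05.
additive = 1.0 + plays.sum(axis=1)

# Reinvested stake: the same per-play outcomes compound multiplicatively.
compounded = (1.0 + plays).prod(axis=1)

print(f"Mean ending wealth, constant stake: {additive.mean():.1f}")       # ~6.0
print(f"Median ending wealth, reinvested:   {np.median(compounded):.4f}") # ~0.005
```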

FROM THE FLOOR: I just had a quick question for Steve. You mentioned in your
presentation, with respect to policyholder behavior, that the little available
data show policyholders tending to lapse policies whose guarantees are actually
in the money with greater frequency, which is the opposite of what the rational
assumption might have been. But clearly the data that are available come from
the first, second, or third years of 10-year maturity guarantees. Would you
think that in the seventh, eighth, and ninth years of the maturity guarantees,
that trend might reverse itself, and people who are in the money might actually
tend to stick around?

MR. PRINCE: Yes, I’d expect that. What you’re saying is people aren’t going to
wait eight or 10 years or something, but they might very well behave differently
when it’s a matter of waiting one or two years.

FROM THE FLOOR: Because as the maturity date gets closer to today, you would
expect that behavior to reverse.

MR. PRINCE: I would expect so, but there simply isn't data one way or the
other yet.

PANELIST: And just to add to that, when we do GMIB modeling, for instance, it's
something similar. We have an increase in lapses if the market's down, but then
we have another parameter reflecting the value of the guarantee, and that value
gets greater as you get closer to the end of the waiting period, which pulls
lapses back down.
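
Purely as an illustration of that kind of structure, and not the panelists'
actual calibration, here is a sketch of a two-factor dynamic lapse formula in
which a down market raises lapses while the approaching end of the waiting
period pulls them back down; every parameter here is an assumption.

```python
def dynamic_lapse(base_lapse, mv_gv, years_to_wait,
                  market_sensitivity=0.5, guarantee_sensitivity=1.0,
                  min_lapse=0.01):
    """Illustrative two-factor dynamic lapse. Lapses rise when the market
    is down (MV/GV below 1), but the value of the guarantee, which grows
    as the end of the waiting period nears, pulls the rate back down."""
    in_the_money = max(0.0, 1.0 - mv_gv)
    market_factor = 1.0 + market_sensitivity * in_the_money
    guarantee_pull = guarantee_sensitivity * in_the_money / (1.0 + years_to_wait)
    return max(min_lapse, base_lapse * market_factor * (1.0 - guarantee_pull))
```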

MR. DAVID SHURIK: This is for Steve. Have you done any testing to see at what
percentile the returns of the last 18 months fall against any of the models
that you mentioned?

MR. PRINCE: Not I personally, but the working group of the Canadian Institute
of Actuaries has gone to significant lengths to address that very question.
When they talk about people adopting other models, they have established what
we call calibration criteria, because a model has to track that question fairly
well. That report is available on the Canadian Institute of Actuaries Web site.

MR. ALLAN BENDER: I have a question for Eric. Everything about the pricing
seems to be dependent upon the model, so one of the questions is, if the market
really moves significantly, what kind of move does it take to get you to reprice? Are
you always locked in or are you continually repricing?

MR. VON SHILING: I guess I wasn't envisioning repricing as you go along; it was
more a matter of setting the price at the outset and taking the chance of being
locked into your pricing structure.

Chart 1

Graphical Illustration: Scenario Analysis vs. Stochastic Simulation
[Side-by-side figure; the graphical content is not recoverable from the text.]

Chart 2

Base Benefit Cost Distribution
[Line chart: cost as an equivalent level spread (bps, 0 to 300) against
percentile (50% to 100%); series shown for the base benefit cost and a
risk-neutral comparison.]

Chart 3

Distribution of Embedded Value
Embedded Value as % of Deposit; Spread = 57.7 bps (Avg EV = 0)
[Cumulative distribution: embedded value as % of deposit (-8% to 6%) against
percentile (0% to 100%).]

Chart 4

Future TBSR as % of FV; Risk Margin = 60 bps
[Line chart: TBSR as % of fund value (0% to 35%) against quarters (0 to 120);
series shown for the average across scenarios and for scenario 444.]

Chart 5

Earnings Volatility
[Line chart: earnings as % of fund value (bps, -600 to 600) against quarters
(0 to 120); series shown for the average across scenarios and for scenario
444.]
