
Non-Life Insurance Mathematics

Christel Geiss
Department of Mathematics and Statistics
University of Jyväskylä

March 1, 2010
Contents

1 Introduction
  1.1 Some facts about probability

2 Claim number process models
  2.1 The homogeneous Poisson process
  2.2 The renewal process
  2.3 The inhomogeneous Poisson process and the mixed Poisson process

3 The total claim amount process S(t)
  3.1 The Cramér-Lundberg model
  3.2 The renewal model
  3.3 Properties of S(t)

4 Premium calculation principles
  4.1 Used principles

5 Claim size distributions
  5.2 Examples
  5.3 The QQ-plot

6 Modern premium calculation principles
  6.1 The exponential principle
  6.2 The quantile principle
  6.3 The Esscher principle

7 The distribution of S(t)
  7.2 Mixture distributions
  7.3 Applications in insurance
  7.4 The Panjer recursion
  7.5 Approximation of F_{S(t)}
  7.6 Monte Carlo approximations of F_{S(t)}

8 Reinsurance treaties

9 Probability of ruin
  9.1 The risk process
  9.2 Bounds for the ruin probability
  9.3 An asymptotics for the ruin probability
1. Introduction

Insurance can be divided into two categories: life insurance and non-life insurance. The former covers, for example, life insurance and pensions, and runs over a long term. The latter covers, for example, fire, water damage, earthquakes, industrial catastrophes, or cars. Non-life insurance contracts in general cover a year or some other fixed time period.
The course material is based on the textbook Non-Life Insurance Mathematics by Thomas Mikosch [2].

The problem to solve


We will consider the following situation.
1. Insurance contracts (or "policies") are sold. This is the income of the insurance company.
2. At times Ti, 0 ≤ T1 ≤ T2 ≤ ..., claims happen. The times Ti are called the claim arrival times.
3. The i-th claim, arriving at time Ti, causes the claim size Xi.
Task: Find a mathematical (probabilistic) model for the Ti's and Xi's to compute or estimate how much an insurance company should demand for its contracts to avoid ruin.

1.1 Some facts about probability


We briefly introduce some definitions and facts from probability theory which we need in this course. For more information see, for example, [4] or [1].
(1) A probability space is a triple (Ω, F, P), where Ω is a non-empty
set, F is a σ-algebra consisting of subsets of Ω and P is a probability
measure on Ω.
(2) A function f : Ω → R is called a random variable if and only if for
all intervals (a, b), −∞ < a < b < ∞ the pre-image
f −1 ((a, b)) = {ω ∈ Ω : a < f (ω) < b} ∈ F.


(3) The random variables f1 , ..., fn are independent if and only if


P(a1 < f1 < b1 , ..., an < fn < bn ) = P(a1 < f1 < b1 ) · · · P(an < fn < bn )
for all −∞ ≤ ai < bi ≤ ∞, i = 1, ..., n.
If the fi ’s have discrete values, i.e. fi : Ω → {x1 , x2 , x3 . . .}, then the
random variables f1 , ..., fn are independent if and only if
P(f1 = k1 , ..., fn = kn ) = P(f1 = k1 ) · · · P(fn = kn )
for all ki ∈ {x1 , x2 , x3 . . .}.
(4) If f1, ..., fn are independent random variables such that fi has the density function hi(x), i.e. P(fi ∈ (a, b)) = ∫_a^b hi(x)dx, then

P((f1, ..., fn) ∈ B) = ∫_{R^n} 1I_B(x1, ..., xn) h1(x1) × · · · × hn(xn) dx1 · · · dxn

for all B ∈ B(R^n). The σ-algebra B(R^n) is the Borel σ-algebra, which is the smallest σ-algebra containing all the open rectangles (a1, b1) × ... × (an, bn). The function 1I_B(x) is the indicator function of the set B, defined as

1I_B(x) = 1 if x ∈ B, and 1I_B(x) = 0 if x ∉ B.

(5) A random variable f : Ω → {0, 1, 2, ...} is Poisson distributed with parameter λ > 0 if and only if

P(f = k) = e^{−λ} (λ^k / k!), k = 0, 1, 2, ...

This is often written as f ∼ Pois(λ).

(6) A random variable g : Ω → [0, ∞) is exponentially distributed with parameter λ > 0 if and only if for all a < b

P(g ∈ (a, b)) = λ ∫_a^b 1I_{[0,∞)}(x) e^{−λx} dx.

The picture below shows the density λ1I_{[0,∞)}(x)e^{−λx} for λ = 3.

[Figure: the density of the Exp(3) distribution, plotted for 0 ≤ x ≤ 4.]
2. Three models for the
claim number process N (t)

Definition 2.0.1 The claim number process N (t) is defined as the


number of claims that occur before time t,
N (t) = #{i ≥ 1 : Ti ≤ t}.

2.1 The homogeneous Poisson process with parameter λ > 0
Definition 2.1.1 (homogeneous Poisson process) A stochastic process
N = (N (t))t∈[0,∞) is a Poisson process if the following conditions are ful-
filled:
(P1) P({ω ∈ Ω : N (0, ω) = 0}) = 1 or N (0) = 0 a.s. (almost surely)
(P2) N has independent increments, i.e. if 0 = t0 < t1 < ... < tn , (n ≥ 1),
then N (tn ) − N (tn−1 ), N (tn−1 ) − N (tn−2 ), ..., N (t1 ) − N (t0 ) are inde-
pendent.
(P3) For any s ≥ 0 and t > 0 the random variable N(t + s) − N(s) is Poisson distributed, i.e.

P(N(t + s) − N(s) = m) = e^{−λt} (λt)^m / m!, m = 0, 1, 2, ...

(P4) The "paths" of N, i.e. t ↦ N(t, ω) for fixed ω, are almost surely right-continuous with left limits. One says N has càdlàg (continue à droite, limite à gauche) paths.
Lemma 2.1.2 Assume W1, W2, ... are independent and exponentially distributed with parameter λ > 0. Then, for x > 0,

P(W1 + · · · + Wn ≤ x) = 1 − e^{−λx} Σ_{k=0}^{n−1} (λx)^k / k!.

This means that the sum of n independent exponential random variables is a Gamma-distributed random variable.


Proof:
The proof is done by iteration.
n = 1:

P(W1 ≤ x) = λ ∫_0^x e^{−λy} dy = 1 − e^{−λx}.

n = 2: By tool (4) we get

P(W1 + W2 ≤ x) = P((W1, W2) ∈ {(x1, x2) : x1 + x2 ≤ x})
= ∫_{{x1+x2≤x}} λ1I_{[0,∞)}(x1)e^{−λx1} λ1I_{[0,∞)}(x2)e^{−λx2} dx1 dx2
= λ² ∫_0^x ( ∫_0^{x−x2} e^{−λx1} dx1 ) e^{−λx2} dx2
= ∫_0^x λe^{−λx2} (1 − e^{−λ(x−x2)}) dx2
= 1 − e^{−λx} − λxe^{−λx} = 1 − e^{−λx}(1 + λx).

Hence the claim is true for n = 2. By iteration,

P(W1 + ... + Wn ≤ x) = 1 − e^{−λx} Σ_{k=0}^{n−1} (λx)^k / k!.
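The closed form of Lemma 2.1.2 is easy to check against simulation. The following sketch is not part of the original notes; the function names and parameter values are our own choices. It compares the formula with a Monte Carlo estimate:

```python
import math
import random

def erlang_cdf(x, n, lam):
    """P(W1 + ... + Wn <= x) for i.i.d. Exp(lam) summands, by Lemma 2.1.2."""
    return 1.0 - math.exp(-lam * x) * sum((lam * x) ** k / math.factorial(k)
                                          for k in range(n))

def empirical_cdf(x, n, lam, trials=100_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(sum(rng.expovariate(lam) for _ in range(n)) <= x
               for _ in range(trials))
    return hits / trials

lam, n, x = 2.0, 3, 1.5
exact = erlang_cdf(x, n, lam)
approx = empirical_cdf(x, n, lam)
print(exact, approx)  # the two values should agree to about two decimals
```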

Definition 2.1.3 Let W1 , W2 , ... be independent and exponentially dis-


tributed with parameter λ > 0. Define

Tn := W1 + ... + Wn

and
N̂ (t, ω) := #{i ≥ 1 : Ti (ω) ≤ t}, t ≥ 0.

Lemma 2.1.4 For each n = 0, 1, 2, ... and for all t > 0 it holds that

P({ω ∈ Ω : N̂(t, ω) = n}) = e^{−λt} (λt)^n / n!,

i.e. N̂(t) is Poisson distributed with parameter λt.

Proof: From the definition of N̂ it can be concluded that

{ω : N̂(t, ω) = n} = {ω : Tn(ω) ≤ t < Tn+1(ω)}
= {ω : Tn(ω) ≤ t} \ {ω : Tn+1(ω) ≤ t}.

Because Tn ≤ Tn+1, the set {Tn+1 ≤ t} ⊆ {Tn ≤ t}. Clearly, if A ⊇ B, then P(A \ B) = P(A) − P(B). This implies, using Lemma 2.1.2,

P(N̂(t) = n) = P(Tn ≤ t) − P(Tn+1 ≤ t)
= 1 − e^{−λt} Σ_{k=0}^{n−1} (λt)^k/k! − 1 + e^{−λt} Σ_{k=0}^{n} (λt)^k/k!
= e^{−λt} (λt)^n / n!.
¤

Theorem 2.1.5 (a) (N̂(t))_{t∈[0,∞)} is a Poisson process with parameter λ > 0.

(b) Any Poisson process N(t) with parameter λ > 0 can be written as

N(t) = #{i ≥ 1 : Ti ≤ t}, t ≥ 0,

where Tn = W1 + ... + Wn, n ≥ 1, and W1, W2, ... are independent and exponentially distributed with parameter λ > 0.

Proof:

(a) We check the properties of Definition 2.1.1. From tool (6) of Section 1.1 we get:

(P1) Since for all x > 0

P(W1 < x) = P(W1 ∈ (−∞, x)) = λ ∫_{−∞}^x 1I_{[0,∞)}(y)e^{−λy} dy = 1 − e^{−λx},

we have P(W1 ≥ x) = e^{−λx} and therefore

P(W1 > 0) = lim_{N→∞} P(W1 ≥ 1/N) = lim_{N→∞} e^{−λ/N} = 1.

This implies that N̂(t, ω) = 0 if and only if t < T1(ω) = W1(ω), but W1(ω) > 0 holds almost surely. Hence N̂(0, ω) = 0 a.s.
(P2) Here, only the special case

P(N̂(s) = l, N̂(t) − N̂(s) = m) = P(N̂(s) = l) P(N̂(t) − N̂(s) = m)

of this property will be shown; the general case can be shown similarly. Let l, m ≥ 0 and 0 < s < t. Then

P(N̂(s) = l, N̂(t) − N̂(s) = m)
= P(N̂(s) = l, N̂(t) = m + l)
= P(Tl ≤ s < Tl+1, Tl+m ≤ t < Tl+m+1).

By defining functions f1, f2, f3 and f4 as

f1 := Tl
f2 := Wl+1
f3 := Wl+2 + ... + Wl+m
f4 := Wl+m+1,

and h1, ..., h4 as the respective densities, it follows that

P(Tl ≤ s < Tl+1, Tl+m ≤ t < Tl+m+1)
= P(f1 ≤ s < f1 + f2, f1 + f2 + f3 ≤ t < f1 + f2 + f3 + f4)
= P(0 ≤ f1 < s, s − f1 < f2 < ∞, 0 ≤ f3 < t − f1 − f2, t − (f1 + f2 + f3) < f4 < ∞)
= ∫_0^s [ ∫_{s−x1}^∞ [ ∫_0^{t−x1−x2} [ ∫_{t−x1−x2−x3}^∞ h4(x4) dx4 ] h3(x3) dx3 ] h2(x2) dx2 ] h1(x1) dx1
=: I1,

where the nested inner integrals are denoted, from the inside out, by I4, I3 and I2. By direct computation, using the density of f4 = Wl+m+1,

I4 = ∫_{t−x1−x2−x3}^∞ λe^{−λx4} 1I_{[0,∞)}(x4) dx4 = e^{−λ(t−x1−x2−x3)} 1I_{[0,t−x1−x2)}(x3).

The density of f3 = Wl+2 + ... + Wl+m is

h3(x3) = λ^{m−1} (x3^{m−2} / (m−2)!) 1I_{[0,∞)}(x3) e^{−λx3}.

Therefore,

I3 = λ^{m−1} ∫_0^{t−x1−x2} (x3^{m−2}/(m−2)!) e^{−λx3} e^{−λ(t−x1−x2−x3)} dx3 · 1I_{[0,t−x1)}(x2)
= 1I_{[0,t−x1)}(x2) e^{−λ(t−x1−x2)} λ^{m−1} (t − x1 − x2)^{m−1} / (m−1)!.

The density of f2 = Wl+1 is

h2(x2) = 1I_{[0,∞)}(x2) λe^{−λx2}.

This implies

I2 = ∫_{s−x1}^∞ 1I_{[0,t−x1)}(x2) e^{−λ(t−x1−x2)} λ^{m−1} ((t−x1−x2)^{m−1}/(m−1)!) λe^{−λx2} dx2 · 1I_{[0,s)}(x1)
= λ^m e^{−λ(t−x1)} ((t − s)^m / m!) 1I_{[0,s)}(x1).

Finally,

I1 = ∫_0^s λ^m e^{−λ(t−x1)} ((t−s)^m/m!) λ^l (x1^{l−1}/(l−1)!) 1I_{[0,∞)}(x1) e^{−λx1} dx1
= λ^m λ^l e^{−λt} ((t−s)^m/m!) (s^l/l!)
= ( (λs)^l e^{−λs} / l! ) ( (λ(t−s))^m e^{−λ(t−s)} / m! )
= P(N̂(s) = l) P(N̂(t − s) = m).

(P3) Follows from Lemma 2.1.4.


(P4) Clear from the construction.
(b) The proof is omitted.

[Figure: a simulated path of a Poisson process with λ = 50 on the interval [0, 1].]

[Figure: a simulated path of a Poisson process with λ = 10 on the interval [0, 1].]
2.2 The renewal process


To model windstorm claims, for example, the Poisson process is not a good choice, because windstorm claims happen rarely, sometimes with years in between.
Definition 2.2.1 (Renewal process) Assume that W1 , W2 , ... are i.i.d.
(=independent and identically distributed) random variables such that
Wi > 0 almost surely (i.e. P(Wi > 0) = 1), i = 1, 2, .... Then
T0 = 0,
Tn = W1 + ... + Wn, n ≥ 1,

is a renewal sequence and

N(t) = #{i ≥ 1 : Ti ≤ t}, t ≥ 0,

is the renewal process.
Theorem 2.2.2 Assume the above setting, i.e. N(t) is a renewal process. If EW1 < ∞, then

lim_{t→∞} EN(t)/t = 1/EW1.  (1)

Remark 2.2.3 If the Wi's are exponentially distributed with parameter λ > 0, Wi ∼ Exp(λ), i = 1, 2, ..., then N(t) is a Poisson process. Consequently,

EN(t) = λt.

Since EWi = 1/λ, it follows that for all t > 0

EN(t)/t = 1/EW1.  (2)

If the Wi ’s are not exponentially distributed, then the equation (2) holds
only in the limit t → ∞.

Theorem 2.2.4 Assume N(t) is a renewal process. If EW1 < ∞, then

lim_{t→∞} N(t)/t = 1/EW1.

Remark 2.2.5 Theorem 2.2.4 is called the Strong Law of Large Numbers (or SLLN) for the renewal process. The SLLN for a sequence of i.i.d. random variables X1, X2, ... with E|X1| < ∞ states that

(X1 + X2 + ... + Xn)/n → EX1 a.s. as n → ∞.

So in the case where N(t) is the Poisson process this implies that

N(n)/n = [ (N(1) − N(0)) + (N(2) − N(1)) + ... + (N(n) − N(n−1)) ] / n
→ EN(1) = λ = 1/EW1 a.s. as n → ∞.
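The SLLN above can be illustrated with non-exponential waiting times. The sketch below is our own illustration: it uses Uniform(0, 2) waiting times, so EW1 = 1 and N(t)/t should approach 1/EW1 = 1.

```python
import random

def renewal_count(t, draw_w):
    """N(t) for a renewal process whose waiting times are produced by draw_w()."""
    n, s = 0, draw_w()
    while s <= t:
        n += 1
        s += draw_w()
    return n

rng = random.Random(2)
t = 50_000.0
# Uniform(0, 2) waiting times: EW1 = 1, hence N(t)/t -> 1/EW1 = 1
ratio = renewal_count(t, lambda: rng.uniform(0.0, 2.0)) / t
print(ratio)
```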

2.3 The inhomogeneous Poisson process and the mixed Poisson process
Definition 2.3.1 Let µ : [0, ∞) → [0, ∞) be a function such that

(a) µ(0) = 0

(b) µ is non-decreasing, i.e. 0 ≤ s ≤ t ⇒ µ(s) ≤ µ(t)

(c) µ is càdlàg.

Then the function µ is called a mean-value function.

[Figures: three examples of mean-value functions: µ(t) = t, a continuous nonlinear µ(t), and a càdlàg µ(t) with jumps.]

Definition 2.3.2 (Inhomogeneous Poisson process) A stochastic process N = (N(t))_{t∈[0,∞)} is an inhomogeneous Poisson process if and only if it has the following properties:

(P1) N (0) = 0 a.s.

(P2) N has independent increments, i.e. if 0 = t0 < t1 < ... < tn , (n ≥ 1),
it holds that N (tn ) − N (tn−1 ), N (tn−1 ) − N (tn−2 ), ..., N (t1 ) − N (t0 ) are
independent.

(Pinh.3) There exists a mean-value function µ such that for 0 ≤ s < t

P(N(t) − N(s) = m) = e^{−(µ(t)−µ(s))} (µ(t) − µ(s))^m / m!, m = 0, 1, 2, ...

(P4) The paths of N are càdlàg a.s.

Theorem 2.3.3 (Time change for the Poisson process) If µ denotes the mean-value function of an inhomogeneous Poisson process N and Ñ is a homogeneous Poisson process with λ = 1, then

(1) (N(t))_{t∈[0,∞)} =d (Ñ(µ(t)))_{t∈[0,∞)};

(2) if µ is continuous, increasing and lim_{t→∞} µ(t) = ∞, then

(N(µ^{−1}(t)))_{t∈[0,∞)} =d (Ñ(t))_{t∈[0,∞)}.

In this notation µ^{−1} is the inverse function of µ, and f =d g means that the two random variables f and g have the same distribution, but not necessarily the same paths.

Definition 2.3.4 (Mixed Poisson process) Let N̂ be a homogeneous


Poisson process and µ be a mean-value function. Let θ : Ω → R be a
random variable such that θ > 0 a.s., and θ is independent of N̂ . Then

N (t) := N̂ (θµ(t)), t ≥ 0

is a mixed Poisson process with mixing variable θ.

Remark 2.3.5 It can be shown that

var(N(t)) = EN(t) (1 + (var(θ)/Eθ) µ(t)).

The property var(N(t)) > EN(t) is called over-dispersion. If N is an inhomogeneous Poisson process, then

var(N(t)) = EN(t).
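Over-dispersion is visible in a small simulation. The sketch below is our own illustration, under the assumption that N̂ is a standard homogeneous Poisson process (λ = 1), with µ(t) = 5 and mixing variable θ ∼ Exp(1), so Eθ = var(θ) = 1: the mean of N(t) is then 5 while the variance is 5(1 + 5) = 30.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method: count uniforms until their running product drops below e^{-lam}."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(3)
mu_t, m = 5.0, 50_000
# mixed Poisson sample: first draw theta, then N = N_hat(theta * mu_t)
samples = [poisson_sample(rng.expovariate(1.0) * mu_t, rng) for _ in range(m)]
mean = sum(samples) / m
var = sum((x - mean) ** 2 for x in samples) / m
print(mean, var)  # mean near 5, variance near 30: clearly over-dispersed
```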


3. The total claim amount
process S(t)

3.1 The Cramér-Lundberg-model


Definition 3.1.1 The Cramér-Lundberg-model considers the following
setting:

1. Claims happen at the claim arrival times 0 < T1 < T2 < ... of a Poisson
process
N (t) = #{i ≥ 1 : Ti ≤ t}, t ≥ 0.

2. At time Ti the claim size Xi happens, and the sequence (Xi)_{i=1}^∞ is i.i.d. with Xi ≥ 0.

3. The processes (Ti)_{i=1}^∞ and (Xi)_{i=1}^∞ are independent.

Remark: Are N and (Xi)_{i=1}^∞ independent?

3.2 The renewal model


Definition 3.2.1 The renewal model (or Sparre-Anderson-model) consid-
ers the following setting:

1. Claims happen at the claim arrival times 0 ≤ T1 ≤ T2 ≤ ... of a


renewal process

N (t) = #{i ≥ 1 : Ti ≤ t}, t ≥ 0.

2. At time Ti the claim size Xi happens, and the sequence (Xi)_{i=1}^∞ is i.i.d. with Xi ≥ 0.

3. The processes (Ti)_{i=1}^∞ and (Xi)_{i=1}^∞ are independent.

3.3 Properties of the total claim amount process S(t)

Definition 3.3.1 The total claim amount process is defined as

S(t) := Σ_{i=1}^{N(t)} Xi, t ≥ 0.

The insurance company needs information about S(t) in order to determine a premium which covers the losses represented by S(t). In general, the distribution of S(t), i.e.

P({ω ∈ Ω : S(t, ω) ≤ x}), x ≥ 0,

can only be approximated by numerical methods or simulations, while ES(t) and var(S(t)) are easy to compute exactly. One can establish principles which use only ES(t) and var(S(t)) to calculate the premium. This will be done in Chapter 4.

Proposition 3.3.2 For the Cramér-Lundberg model,

ES(t) = λtEX1 and var(S(t)) = λtEX1².

Proof:
Since

1 = 1I_Ω(ω) = Σ_{k=0}^∞ 1I_{{N(t)=k}}(ω),

by direct computation,

ES(t) = E Σ_{i=1}^{N(t)} Xi = E ( Σ_{i=1}^{N(t)} Xi Σ_{k=0}^∞ 1I_{{N(t)=k}} )
= Σ_{k=0}^∞ E ( ( Σ_{i=1}^k Xi ) 1I_{{N(t)=k}} )
= Σ_{k=0}^∞ E(X1 + ... + Xk) E1I_{{N(t)=k}}     (by independence; E(X1 + ... + Xk) = kEX1 and E1I_{{N(t)=k}} = P(N(t) = k))
= EX1 Σ_{k=0}^∞ k P(N(t) = k)
= EX1 EN(t) = λtEX1

and
ES(t)² = E ( Σ_{i=1}^{N(t)} Xi )² = E ( Σ_{k=0}^∞ ( Σ_{i=1}^k Xi ) 1I_{{N(t)=k}} )²
= Σ_{k=0}^∞ E ( ( Σ_{i=1}^k Xi )² 1I_{{N(t)=k}} )
= Σ_{k=0}^∞ Σ_{i,j=1}^k E ( Xi Xj 1I_{{N(t)=k}} )
= EX1² Σ_{k=0}^∞ k P(N(t) = k) + (EX1)² Σ_{k=1}^∞ k(k − 1) P(N(t) = k)
= EX1² EN(t) + (EX1)² (EN(t)² − EN(t))
= var(X1) EN(t) + (EX1)² EN(t)²,

and it follows that

var(S(t)) = ES(t)² − (ES(t))²
= ES(t)² − (EX1)² (EN(t))²
= var(X1) EN(t) + (EX1)² var(N(t))
= λt (var(X1) + (EX1)²) = λtEX1²,

since EN(t) = var(N(t)) = λt for the Poisson process. ¤
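Both formulas of Proposition 3.3.2 can be verified numerically. The sketch below is our own illustration: it simulates the Cramér-Lundberg model with λt = 6 and Exp(0.5) claim sizes, so EX1 = 2 and EX1² = 8, and hence ES(t) = 12 and var(S(t)) = 48.

```python
import math
import random

def compound_poisson(lam_t, draw_x, rng):
    """One sample of S(t) = X_1 + ... + X_{N(t)} with N(t) ~ Pois(lam_t)."""
    limit, n, prod = math.exp(-lam_t), 0, rng.random()  # Knuth's Poisson sampler
    while prod > limit:
        n += 1
        prod *= rng.random()
    return sum(draw_x() for _ in range(n))

rng = random.Random(4)
lam, t = 2.0, 3.0                      # EN(t) = lam * t = 6
draw_x = lambda: rng.expovariate(0.5)  # X1 ~ Exp(0.5): EX1 = 2, EX1^2 = 8
m = 40_000
samples = [compound_poisson(lam * t, draw_x, rng) for _ in range(m)]
mean = sum(samples) / m
var = sum((s - mean) ** 2 for s in samples) / m
print(mean, var)  # should be close to ES(t) = 12 and var S(t) = 48
```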

Proposition 3.3.3 Assume the renewal model. Let EW1 = 1/λ and EX1 < ∞. Then

lim_{t→∞} ES(t)/t = λEX1.

If, moreover, var(W1) < ∞ and var(X1) < ∞, then

lim_{t→∞} var(S(t))/t = λ ( var(X1) + var(W1) λ² (EX1)² ).

Remark 3.3.4 The Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT) for (S(t)) in the renewal model can be stated as follows:

(1) SLLN for (S(t)): If EW1 < ∞ and EX1 < ∞, then

lim_{t→∞} S(t)/t = λEX1 a.s.

(2) CLT for (S(t)): If var(W1) < ∞ and var(X1) < ∞, then

sup_{x∈R} | P( (S(t) − ES(t)) / √var(S(t)) ≤ x ) − Φ(x) | → 0 as t → ∞,

where Φ is the distribution function of the standard normal distribution,

Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy.
4. Classical premium calculation principles

The standard problem for an insurance company is to determine the amount of premium such that the losses S(t) are covered. On the other hand, the price should be low enough to be competitive and attract customers.
A first approximation of S(t) is given by ES(t). For the premium income p(t) this implies:

p(t) < ES(t) ⇒ the insurance company loses on average,
p(t) > ES(t) ⇒ the insurance company gains on average.

A reasonable solution would be

p(t) = (1 + ρ)ES(t),

where ρ > 0 is the safety loading. Proposition 3.3.3 tells us that ES(t) ≈ λtEX1 for large t.

4.1 Used principles


(1) The net principle,
pN ET (t) = ES(t)
defines the premium to be a ”fair market premium”. This however,
can be very risky for the company, which one can conclude from the
Central Limit Theorem for S(t).

(2) The expected value principle,

pEV (t) = (1 + ρ)ES(t),

which is motivated by the Strong Law of Large Numbers.

(3) The variance principle,

pV AR (t) = ES(t) + αvar(S(t)), α > 0.


This principle is, in the renewal model, asymptotically the same as pEV(t), since by Proposition 3.3.3 we have that

lim_{t→∞} pEV(t)/pVAR(t)

is a constant. This means that α plays the role of a safety loading ρ.

(4) The standard deviation principle,

pSD(t) = ES(t) + α √var(S(t)), α > 0.
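In the Cramér-Lundberg model all four principles reduce to λt, EX1 and EX1² via Proposition 3.3.2. The sketch below is our own illustration; the values of ρ and α are arbitrary example loadings.

```python
import math

def classical_premiums(lam, t, ex1, ex1_sq, rho=0.1, alpha=0.05):
    """The four classical principles, using ES(t) = lam*t*EX1 and
    var S(t) = lam*t*EX1^2 (Proposition 3.3.2)."""
    es = lam * t * ex1
    vs = lam * t * ex1_sq
    return {
        "net":      es,                          # p_NET(t) = ES(t)
        "expected": (1 + rho) * es,              # p_EV(t)  = (1 + rho) ES(t)
        "variance": es + alpha * vs,             # p_VAR(t) = ES(t) + alpha var S(t)
        "std_dev":  es + alpha * math.sqrt(vs),  # p_SD(t)  = ES(t) + alpha sqrt(var S(t))
    }

prem = classical_premiums(lam=2.0, t=3.0, ex1=2.0, ex1_sq=8.0)
print(prem)
```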
5. Claim size distributions

Which distributions should one choose to model the claim sizes (Xi)? If one analyzes data of claim sizes that have occurred in the past, for example by a histogram or a QQ-plot, it turns out that the distribution is often "heavy-tailed".

Definition 5.1.1 Let F(x) be the distribution function of X1, i.e.

F(x) = P({ω ∈ Ω : X1(ω) ≤ x}).

F is called light-tailed ⇐⇒

lim_{n→∞} sup_{x≥n} (1 − F(x)) / e^{−λx} < ∞ for some λ > 0.

F is called heavy-tailed ⇐⇒

lim_{n→∞} inf_{x≥n} (1 − F(x)) / e^{−λx} > 0 for all λ > 0.

5.2 Examples
(1) The exponential distribution Exp(α) is light-tailed for all α > 0, since the distribution function is F(x) = 1 − e^{−αx}, x > 0, so that

(1 − F(x)) / e^{−λx} = e^{−αx} / e^{−λx} = e^{(λ−α)x},

and by choosing 0 < λ < α,

sup_{x≥n} e^{(λ−α)x} = e^{(λ−α)n} → 0, as n → ∞.

(2) The Pareto distribution is heavy-tailed. The distribution function is

F(x) = 1 − κ^α / (κ + x)^α, x ≥ 0, α > 0, κ > 0,

or

F(x) = 1 − b^a / x^a, x ≥ b > 0, a > 0.
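The defining ratio (1 − F(x))/e^{−λx} can be tabulated directly. The sketch below is our own illustration: it shows the ratio vanishing for the exponential distribution (with λ < α) and exploding for the Pareto distribution.

```python
import math

def exp_tail(x, alpha):            # 1 - F(x) for Exp(alpha)
    return math.exp(-alpha * x)

def pareto_tail(x, alpha, kappa):  # 1 - F(x) = kappa^alpha / (kappa + x)^alpha
    return (kappa / (kappa + x)) ** alpha

lam = 0.5
xs = [10.0, 50.0, 100.0]
exp_ratios = [exp_tail(x, 1.0) / math.exp(-lam * x) for x in xs]
pareto_ratios = [pareto_tail(x, 2.0, 1.0) / math.exp(-lam * x) for x in xs]
print(exp_ratios)     # decreasing to 0: light-tailed
print(pareto_ratios)  # growing without bound: heavy-tailed
```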


5.3 The QQ-plot


A quantile is "the inverse of the distribution function". If the distribution function is not strictly increasing and continuous, we take the "left inverse", which is defined by

F^←(t) := inf{x ∈ R : F(x) ≥ t}, 0 < t < 1,

and the empirical distribution function of the data X1, ..., Xn as

Fn(x) := (1/n) Σ_{i=1}^n 1I_{(−∞,x]}(Xi), x ∈ R.

It can be shown that if X1 ∼ F and (Xi)_{i=1}^∞ is i.i.d., then

Fn^←(t) → F^←(t) as n → ∞,

almost surely for all continuity points t of F^←. Hence, if X1 ∼ F, then the plot of the pairs (Fn^←(t), F^←(t)) should give almost the straight line y = x.
[Figure: a distribution function F(x) and its left inverse F^←(x).]
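The quantities behind a QQ-plot can be computed directly. The sketch below is our own illustration: it evaluates the empirical left inverse of simulated Exp(1) data against the exact left inverse F^←(t) = −log(1 − t); the pairs should lie close to the line y = x.

```python
import math
import random

def empirical_left_inverse(data, t):
    """F_n^{<-}(t): since F_n jumps by 1/n at each order statistic,
    the left inverse at t is the ceil(n*t)-th smallest observation."""
    xs = sorted(data)
    return xs[max(0, math.ceil(len(xs) * t) - 1)]

def exp_left_inverse(t, alpha):
    """F^{<-}(t) for Exp(alpha)."""
    return -math.log(1.0 - t) / alpha

rng = random.Random(5)
data = [rng.expovariate(1.0) for _ in range(100_000)]
pairs = [(empirical_left_inverse(data, t), exp_left_inverse(t, 1.0))
         for t in (0.1, 0.5, 0.9)]
print(pairs)  # each pair should be nearly equal
```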
6. About modern premium
calculation principles

6.1 The exponential principle


The exponential principle is defined as

pexp(t) := (1/δ) log Ee^{δS(t)}

for some δ > 0, where δ is the risk aversion constant. The function pexp(t) is defined via so-called 'utility theory'.

6.2 The quantile principle


Suppose F (x) = P({ω : S(t) ≤ x}), x ∈ R, is the distribution function of
S(t). In Section 5.3 we defined the ’left inverse’ of the distribution function
F by
F ← (y) := inf{x ∈ R : F (x) ≥ y}, 0 < y < 1.
Then the (1 − ε)−quantile principle is defined as
pquant (t) = F ← (1 − ε),
where the expression F ← (1−ε) converges for ε → 0 to the ’probable maximal
loss’. This setting is related to the theory of ’Value at Risk’.

6.3 The Esscher principle


The Esscher principle is defined as

pEss(t) = E( S(t) e^{δS(t)} ) / E( e^{δS(t)} ), δ > 0.

In all of the above principles, the expected value E(g(S(t))) needs to be computed for a certain function g(x) in order to compute p(t). This means it is not enough to know ES(t) and var(S(t)); the distribution of S(t) is needed as well.
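Since only the distribution of S(t) enters, all three principles can be estimated from a simulated sample of S(t). The sketch below is our own illustration: it uses a Gamma-distributed stand-in for S(t) with mean 20, and δ and ε are arbitrary example parameters.

```python
import math
import random

def modern_premiums(samples, delta=0.05, eps=0.05):
    """Monte Carlo versions of the exponential, quantile and Esscher principles:
    expectations over S(t) are replaced by sample averages."""
    m = len(samples)
    mgf = sum(math.exp(delta * s) for s in samples) / m              # E e^{delta S(t)}
    p_exp = math.log(mgf) / delta                                    # exponential principle
    p_quant = sorted(samples)[max(0, math.ceil((1 - eps) * m) - 1)]  # (1-eps)-quantile
    p_ess = sum(s * math.exp(delta * s) for s in samples) / m / mgf  # Esscher principle
    return p_exp, p_quant, p_ess

rng = random.Random(6)
# stand-in total claim amount: sum of 20 Exp(1) claims, so E S(t) = 20
samples = [sum(rng.expovariate(1.0) for _ in range(20)) for _ in range(20_000)]
p_exp, p_quant, p_ess = modern_premiums(samples)
print(p_exp, p_quant, p_ess)  # all three lie above the net premium ES(t) = 20
```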

7. Finding the distribution of
the total claim amount S(t)

Theorem 7.1.1 Let (Ω, F, P) be a probability space.


(a) The distribution of a random variable f : Ω → R can be uniquely
described by its distribution function F : R → [0, 1],
F (x) := P({ω ∈ Ω : f (ω) ≤ x}), x ∈ R.

(b) Especially, it holds for g : R → R such that g^{−1}(B) ∈ B(R) for all B ∈ B(R), that

Eg(f) = ∫_R g(x) dF(x)

(in the sense that, if either side of this expression exists, so does the other, and then they are equal; see [3], pp. 168-169).

(c) The distribution of f can also be determined by its characteristic function (see [4]),

ϕf(u) := Ee^{iuf}, u ∈ R,

or by its moment-generating function

mf(h) := Ee^{hf}, h ∈ (−h0, h0),

provided that Ee^{h0f} < ∞ for some h0 > 0.

Remember: for independent random variables f and g it holds that

ϕ_{f+g}(u) = ϕf(u) ϕg(u).

7.2 Mixture distributions


Definition 7.2.1 (Mixture distributions) Let Fi, i = 1, ..., n, be distribution functions and pi ∈ [0, 1] such that Σ_{i=1}^n pi = 1. Then

G(x) = p1F1(x) + ... + pnFn(x), x ∈ R,

is called the mixture distribution of F1, ..., Fn.


Lemma 7.2.2 Let f1, ..., fn be random variables with distribution functions F1, ..., Fn, respectively. Assume that J : Ω → {1, ..., n} is independent from f1, ..., fn and P(J = i) = pi. Then the random variable

Z = 1I_{{J=1}} f1 + ... + 1I_{{J=n}} fn

has the mixture distribution function G.

Definition 7.2.3 (Compound Poisson random variable) Let Nλ ∼ Pois(λ) and let (Xi)_{i=1}^∞ be i.i.d. random variables, independent from Nλ. Then

Z := Σ_{i=1}^{Nλ} Xi

is called a compound Poisson random variable.

Proposition 7.2.4 The sum of independent compound Poisson random variables is a compound Poisson random variable: Let S1, ..., Sn, given by

Sk = Σ_{j=1}^{Nk} Xj^{(k)}, k = 1, ..., n,

be independent compound Poisson random variables such that

Nk ∼ Pois(λk), λk > 0,
(Xj^{(k)})_{j≥1} i.i.d.,

and Nk is independent from (Xj^{(k)})_{j≥1} for all k = 1, ..., n. Then S := S1 + ... + Sn is a compound Poisson random variable with representation

S =d Σ_{l=1}^{Nλ} Yl, Nλ ∼ Pois(λ), λ = λ1 + ... + λn,

where (Yl)_{l≥1} is an i.i.d. sequence, independent from Nλ, with

Y1 =d Σ_{k=1}^n 1I_{{J=k}} X1^{(k)}, P(J = k) = λk/λ,

and J is independent of (X1^{(k)})_k.

Proof:
From Theorem 7.1.1 we know that it is sufficient to show that S and Σ_{l=1}^{Nλ} Yl have the same characteristic function. We start with the characteristic function of Sk:

ϕ_{Sk}(u) = Ee^{iuSk} = Ee^{iu Σ_{j=1}^{Nk} Xj^{(k)}}
= Σ_{m=0}^∞ E ( e^{iu Σ_{j=1}^m Xj^{(k)}} 1I_{{Nk=m}} )
= Σ_{m=0}^∞ E ( e^{iuX1^{(k)}} × ... × e^{iuXm^{(k)}} 1I_{{Nk=m}} )     (all of these factors are independent)
= Σ_{m=0}^∞ ( Ee^{iuX1^{(k)}} )^m P(Nk = m)
= Σ_{m=0}^∞ ( ϕ_{X1^{(k)}}(u) )^m e^{−λk} λk^m / m!
= e^{−λk (1 − ϕ_{X1^{(k)}}(u))}.

Then

ϕ_S(u) = Ee^{iu(S1+...+Sn)} = Ee^{iuS1} × ... × Ee^{iuSn}
= ϕ_{S1}(u) × ... × ϕ_{Sn}(u)
= e^{−λ1 (1 − ϕ_{X1^{(1)}}(u))} × ... × e^{−λn (1 − ϕ_{X1^{(n)}}(u))}
= exp ( −λ ( 1 − Σ_{k=1}^n (λk/λ) ϕ_{X1^{(k)}}(u) ) ).

Let ξ = Σ_{l=1}^{Nλ} Yl. Then, by the same computation as we have done for ϕ_{Sk}(u),

ϕ_ξ(u) = Ee^{iuξ} = e^{−λ (1 − ϕ_{Y1}(u))}.

Finally,

ϕ_{Y1}(u) = Ee^{iu Σ_{k=1}^n 1I_{{J=k}} X1^{(k)}}
= Σ_{l=1}^n E ( e^{iu Σ_{k=1}^n 1I_{{J=k}} X1^{(k)}} 1I_{{J=l}} )
= Σ_{l=1}^n E ( e^{iuX1^{(l)}} 1I_{{J=l}} )
= Σ_{l=1}^n ϕ_{X1^{(l)}}(u) (λl/λ),

so that ϕ_ξ(u) = ϕ_S(u). ¤

7.3 Applications in insurance


First application
Assume that the claims arrive according to an inhomogeneous Poisson process, i.e.

N(t) − N(s) ∼ Pois(µ(t) − µ(s)).

The total claim amount in year l is

Sl = Σ_{j=N(l−1)+1}^{N(l)} Xj^{(l)}, l = 1, ..., n.

Now it can be seen that

Sl =d Σ_{j=1}^{N(l)−N(l−1)} Xj^{(l)}, l = 1, ..., n,

and Sl is compound Poisson distributed. Proposition 7.2.4 implies that the total claim amount of the first n years is again compound Poisson distributed, where

S(n) := S1 + ... + Sn =d Σ_{i=1}^{Nλ} Yi,
Nλ ∼ Pois(µ(n)),
Yi =d 1I_{{J=1}} X1^{(1)} + ... + 1I_{{J=n}} X1^{(n)},
P(J = i) = (µ(i) − µ(i−1)) / µ(n).

Hence the total claim amount S(n) in the first n years (with possibly different claim size distributions in each year) has a representation as a compound Poisson random variable.

Second application
We can interpret the random variables

Si = Σ_{j=1}^{Ni} Xj^{(i)}, Ni ∼ Pois(λi), i = 1, ..., n,

as the total claim amounts of n independent portfolios for the same fixed period of time. The (Xj^{(i)})_{j≥1} in the i-th portfolio are i.i.d., but the distributions may differ from portfolio to portfolio (one particular type of car insurance, for example). Then

S(n) = S1 + ... + Sn =d Σ_{i=1}^{Nλ} Yi

is again compound Poisson distributed with

Nλ ∼ Pois(λ1 + ... + λn),
Yi =d 1I_{{J=1}} X1^{(1)} + ... + 1I_{{J=n}} X1^{(n)},

and P(J = l) = λl/λ.

7.4 The Panjer recursion: an exact numerical procedure to calculate F_{S(t)}
Let

S = Σ_{i=1}^N Xi,

where N : Ω → {0, 1, ...}, (Xi)_{i≥1} is i.i.d., and N and (Xi) are independent. Then, setting S0 := 0 and Sn := X1 + ... + Xn, n ≥ 1, yields

P(S ≤ x) = Σ_{n=0}^∞ P(S ≤ x, N = n)
= Σ_{n=0}^∞ P(S ≤ x | N = n) P(N = n)
= Σ_{n=0}^∞ P(Sn ≤ x) P(N = n)
= Σ_{n=0}^∞ F_{X1}^{n∗}(x) P(N = n),

where F_{X1}^{n∗}(x) is the n-th convolution of F_{X1}, i.e.

F_{X1}^{2∗}(x) = P(X1 + X2 ≤ x) = E1I_{{X1+X2≤x}}
= ∫_R ∫_R 1I_{{x1+x2≤x}}(x1, x2) dF_{X1}(x1) dF_{X2}(x2)     (X1, X2 independent)
= ∫_R ∫_R 1I_{{x1≤x−x2}}(x1, x2) dF_{X1}(x1) dF_{X2}(x2)
= ∫_R F_{X1}(x − x2) dF_{X2}(x2),

and by recursion, using F_{X1} = F_{X2},

F_{X1}^{(n+1)∗}(x) := ∫_R F_{X1}^{n∗}(x − y) dF_{X1}(y).

But the computation of F_{X1}^{n∗}(x) is numerically difficult. However, there is a recursion formula for P(S ≤ x) that holds under certain conditions:

Theorem 7.4.1 (Panjer recursion scheme) Assume the following conditions:

(C1) Xi : Ω → {0, 1, ...};

(C2) for N it holds that

qn := P(N = n) = (a + b/n) q_{n−1}, n = 1, 2, ...,

for some a, b ∈ R.

Then, for pn := P(S = n), n = 0, 1, 2, ...,

p0 = q0 if P(X1 = 0) = 0, and p0 = E( P(X1 = 0)^N ) otherwise;  (1)

pn = ( 1 / (1 − aP(X1 = 0)) ) Σ_{i=1}^n (a + bi/n) P(X1 = i) p_{n−i}, n ≥ 1.  (2)
Proof:

p0 = P(S = 0) = P(S = 0, N = 0) + P(S = 0, N > 0)
= P(S0 = 0) P(N = 0) + P(S = 0, N > 0)     (P(S0 = 0) = 1)
= q0 + Σ_{k=1}^∞ P(X1 + ... + Xk = 0, N = k)
= q0 + Σ_{k=1}^∞ P(X1 = 0)^k P(N = k)
= Σ_{k=0}^∞ P(X1 = 0)^k qk
= E( P(X1 = 0)^N ).

This implies (1).

For pn, n ≥ 1,

pn = P(S = n) = Σ_{k=1}^∞ P(Sk = n) qk
= Σ_{k=1}^∞ P(Sk = n) (a + b/k) q_{k−1},  (3)

using (C2). Now, because Q = P(· | Sk = n) is a probability measure, the following holds:

Σ_{l=0}^n (a + bl/n) P(X1 = l | Sk = n)     (the conditional probabilities are Q(X1 = l))
= a + (b/n) E_Q X1
= a + (b/(nk)) E_Q (X1 + ... + Xk)
= a + (b/(nk)) E_Q Sk
= a + b/k,  (4)

where the last equality follows from E_Q Sk = n (since Q(Sk = n) = 1), and E_Q X1 = (1/k) E_Q(X1 + ... + Xk) by symmetry. On the other hand, we can express the term a + b/k also by

Σ_{l=0}^n (a + bl/n) P(X1 = l | Sk = n)
= Σ_{l=0}^n (a + bl/n) P(X1 = l, Sk − X1 = n − l) / P(Sk = n)
= Σ_{l=0}^n (a + bl/n) P(X1 = l) P(S_{k−1} = n − l) / P(Sk = n).  (5)

Thanks to (4), we can now replace the term a + b/k in (3) by the RHS of (5), which yields

pn = Σ_{k=1}^∞ Σ_{l=0}^n (a + bl/n) P(X1 = l) P(S_{k−1} = n − l) q_{k−1}
= Σ_{l=0}^n (a + bl/n) P(X1 = l) Σ_{k=1}^∞ P(S_{k−1} = n − l) q_{k−1}     (the inner sum equals P(S = n − l))
= aP(X1 = 0) P(S = n) + Σ_{l=1}^n (a + bl/n) P(X1 = l) P(S = n − l)
= aP(X1 = 0) pn + Σ_{l=1}^n (a + bl/n) P(X1 = l) p_{n−l},

which gives equation (2),

pn = ( 1 / (1 − aP(X1 = 0)) ) Σ_{l=1}^n (a + bl/n) P(X1 = l) p_{n−l}.
¤
Remark 7.4.2

• The Panjer recursion only works for distributions of Xi on {0, 1, 2, ...}, i.e. Σ_{k=0}^∞ P(Xi = k) = 1 (or, by scaling, on a lattice {0, d, 2d, ...} for fixed d > 0).

• Traditionally, the distributions used to model Xi have a density, and then ∫_{{0,1,2,...}} h_{Xi}(x)dx = 0. But on the other hand, claim sizes are expressed in terms of prices, so they take values on a lattice. The density h_{Xi}(x) could be approximated by a distribution on a lattice, but how large would the approximation error then be?

• N can only be Poisson, binomially or negative binomially distributed.
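For Poisson claim numbers, qn = e^{−λ}λ^n/n! satisfies (C2) with a = 0 and b = λ, and the recursion is a few lines of code. The sketch below is our own illustration; the sanity check uses X1 ≡ 1, for which S = N.

```python
import math

def panjer_poisson(lam, f, n_max):
    """Panjer recursion (Theorem 7.4.1) for N ~ Pois(lam), i.e. a = 0, b = lam.
    f[i] = P(X1 = i) on {0, ..., len(f)-1}; returns [P(S=0), ..., P(S=n_max)]."""
    a, b = 0.0, lam
    p = [math.exp(-lam * (1.0 - f[0]))]        # p0 = E(P(X1 = 0)^N)
    for n in range(1, n_max + 1):
        s = sum((a + b * i / n) * f[i] * p[n - i]
                for i in range(1, min(n, len(f) - 1) + 1))
        p.append(s / (1.0 - a * f[0]))
    return p

lam = 2.0
# sanity check: X1 = 1 a.s. gives S = N, so P(S = n) must be the Pois(2) pmf
p = panjer_poisson(lam, [0.0, 1.0], 6)
print(p[3], math.exp(-lam) * lam ** 3 / 6)

# a claim size distribution on {0, 1, 2}; then ES = lam * EX1 = 2 * 1.1 = 2.2
p2 = panjer_poisson(lam, [0.2, 0.5, 0.3], 60)
print(sum(p2), sum(i * q for i, q in enumerate(p2)))
```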

7.5 Approximation of F_{S(t)} using the Central Limit Theorem

Assume that the renewal model is used and that

S(t) = Σ_{i=1}^{N(t)} Xi, t ≥ 0.

In Remark 3.3.4 the Central Limit Theorem was used to state that if var(W1) < ∞ and var(X1) < ∞, then

sup_{x∈R} | P( (S(t) − ES(t)) / √var(S(t)) ≤ x ) − Φ(x) | → 0 as t → ∞.

Now, by setting

x := (y − ES(t)) / √var(S(t)),

for large t the approximation

P(S(t) ≤ y) ≈ Φ( (y − ES(t)) / √var(S(t)) )

can be used.
Warning: This approximation is not good enough to estimate P(S(t) > y) for large y; see [2], Section 3.3.4.

7.6 Monte Carlo approximations of F_{S(t)}

a) The Monte Carlo method:
If the distributions of N(t) and X1 are known, then an i.i.d. sample

N1, ..., Nm (Nk ∼ N(t), k = 1, ..., m)

and i.i.d. samples

X1^{(j)}, ..., X_{Nj}^{(j)}, where Xi^{(j)} ∼ X1, i = 1, ..., Nj, j = 1, ..., m,

can be simulated on a computer and the sums

S1 = Σ_{i=1}^{N1} Xi^{(1)}, ..., Sm = Σ_{i=1}^{Nm} Xi^{(m)}

calculated. Then it follows that Sj ∼ S(t), and the Sj's are independent. By the Strong Law of Large Numbers,

p̂m := (1/m) Σ_{i=1}^m 1I_A(Si) → P(S(t) ∈ A) = p a.s., as m → ∞.

It can be shown that this does not work well for small values of p (see [2], Section 3.3.5 for details).
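A sketch of the plain Monte Carlo method (our own illustration, for the Cramér-Lundberg model with λt = 5 and Exp(1) claims); it also hints at why small p is problematic: the estimate of a far-tail probability rests on very few simulated hits.

```python
import math
import random

def draw_total(rng, lam_t=5.0):
    """One total claim amount S(t): N ~ Pois(lam_t) (Knuth sampler), Exp(1) claims."""
    limit, n, prod = math.exp(-lam_t), 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return sum(rng.expovariate(1.0) for _ in range(n))

def mc_estimate(y, m, rng):
    """Monte Carlo estimate of p = P(S(t) > y) from m independent totals."""
    return sum(draw_total(rng) > y for _ in range(m)) / m

rng = random.Random(7)
p_hat_5 = mc_estimate(5.0, 50_000, rng)    # moderate p: reliable estimate
p_hat_15 = mc_estimate(15.0, 50_000, rng)  # small p: very few hits, noisy estimate
print(p_hat_5, p_hat_15)
```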

b) The bootstrap method
The bootstrap method is a statistical simulation technique that doesn't require the distribution of the Xi's. The term "bootstrap" is a reference to Münchhausen's tale, where the baron escaped from a swamp by pulling himself up by his own bootstraps. Similarly, the bootstrap method only uses the given data.
Assume there is a sample, i.e. for some fixed ω ∈ Ω we have the real numbers

x1 = X1(ω), ..., xn = Xn(ω)

of the random variables X1, ..., Xn, which are supposed to be i.i.d. Then a draw with replacement can be made, as illustrated in the following example:
Assume n = 3 and x1 = 4, x2 = 1, x3 = 10. Drawing with replacement means we choose a sequence of triples, where each triple consists of numbers chosen at random from {1, 4, 10}. For example, we could get:

(x2, x1, x1), (x3, x1, x2), (x3, x2, x2), ...

We denote the k-th triple by X*(k) = (X1*(k), X2*(k), X3*(k)), k ∈ {1, 2, ...}.

Then, for example, the sample mean of the k-th triple

X̄*(k) := (X1*(k) + X2*(k) + X3*(k)) / 3

has values between min{x1, x2, x3} = 1 and max{x1, x2, x3} = 10, but values near (x1 + x2 + x3)/3 = 5 are more likely than the minimum or the maximum, and the SLLN holds:

lim_{N→∞} (1/N) Σ_{i=1}^{N} X̄*(i) = (x1 + x2 + x3)/3  almost surely.

Moreover, it holds in general that

var(X̄*(i)) = var(X1)/n.

Verifying this is left as an exercise.
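The drawing-with-replacement scheme and the behaviour of the bootstrap sample means can be illustrated with a few lines of code (a minimal sketch using only the standard library; names are ours):

```python
import random

def bootstrap_means(data, n_resamples, seed=0):
    """Draw n_resamples bootstrap samples (with replacement) from `data`
    and return the list of their sample means X_bar*(1), ..., X_bar*(n_resamples)."""
    rng = random.Random(seed)
    n = len(data)
    means = []
    for _ in range(n_resamples):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    return means

# The n = 3 example from the text: x1 = 4, x2 = 1, x3 = 10.
means = bootstrap_means([4, 1, 10], n_resamples=10_000)
avg = sum(means) / len(means)   # close to (4 + 1 + 10)/3 = 5, by the SLLN
```
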
In insurance, the sum of the claim sizes X1 + ... + Xn = nX̄n is the target of interest and, with this, the total claim amount

S(t) = Σ_{i=1}^{N(t)} Xi = Σ_{n=0}^{∞} ( Σ_{i=1}^{n} Xi ) 1I_{N(t)=n}.

Here, the bootstrap method is used to calculate confidence bands for (the parameters of) the distributions of the Xi's and N(t).
Warning: The bootstrap method doesn't always work! In general, simulation should only be used if everything else fails. Often better approximation results can be obtained by using the Central Limit Theorem.

So all the methods presented should be used with great care, as each of them has advantages and disadvantages. After all, "nobody is perfect" also applies to approximation methods.
8. Reinsurance treaties

Reinsurance treaties are mutual agreements between different insurance companies to reduce the risk in a particular insurance portfolio. Reinsurance can be considered as insurance for the insurance company.
Reinsurance is used if there is a risk of rare but huge claims. Examples usually involve a catastrophe such as an earthquake, a nuclear power station disaster, an industrial fire, a war, a tanker accident, etc.
According to Wikipedia, the world's largest reinsurance company in 2009 was Munich Re, based in Germany, with gross written premiums worth over $31.4 billion, followed by Swiss Re (Switzerland), General Re (USA) and Hannover Re (Germany).
There are two different types of reinsurance:

A Random walk type reinsurance


1. Proportional reinsurance: The reinsurer pays an agreed proportion p of the claims,

   Rprop(t) = pS(t).

2. Stop-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount K,

   RSL(t) = (S(t) − K)+,

   where x+ = max{x, 0}.

3. Excess-of-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount D for each claim separately,

   RExL(t) = Σ_{i=1}^{N(t)} (Xi − D)+,

   where D is the deductible.
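The three random-walk-type treaties translate directly into code. The claim sizes below are a made-up example; the function names are ours:

```python
def proportional(claims, p):
    """R_prop = p * S, with S the total claim amount."""
    return p * sum(claims)

def stop_loss(claims, K):
    """R_SL = (S - K)+ : the part of the total claim amount exceeding K."""
    return max(sum(claims) - K, 0.0)

def excess_of_loss(claims, D):
    """R_ExL = sum_i (X_i - D)+ : the per-claim excess over the deductible D."""
    return sum(max(x - D, 0.0) for x in claims)

claims = [3.0, 12.0, 7.0]          # hypothetical claim sizes X_1, ..., X_{N(t)}
r1 = proportional(claims, 0.25)    # 0.25 * 22 = 5.5
r2 = stop_loss(claims, 15.0)       # (22 - 15)+ = 7.0
r3 = excess_of_loss(claims, 10.0)  # only the 12.0 claim exceeds D: 2.0
```
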


B Extreme value type reinsurance
Extreme value type reinsurance covers the largest claims in a portfolio.
The ordered claims X1, ..., XN(t) are denoted by

X(1) ≤ ... ≤ X(N(t)).


1. Largest claims reinsurance: The largest claims reinsurance covers the k largest claims arriving within the time frame [0, t],

   RLC(t) = Σ_{i=1}^{k} X(N(t)−i+1).

2. ECOMOR reinsurance:
   (Excédent du coût moyen relatif = "excess of the average cost")
   Define k = ⌊(N(t)+1)/2⌋. Then

   RECOMOR(t) = Σ_{i=1}^{N(t)} (X(N(t)−i+1) − X(N(t)−k+1))+
              = Σ_{i=1}^{k−1} X(N(t)−i+1) − (k − 1) X(N(t)−k+1).
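Both extreme-value-type treaties operate on the ordered claims; a sketch with made-up claim sizes (function names are ours):

```python
def largest_claims(claims, k):
    """R_LC = sum of the k largest claims."""
    return sum(sorted(claims, reverse=True)[:k])

def ecomor(claims):
    """R_ECOMOR with k = floor((N+1)/2): sum of the excesses of the claims
    over the k-th largest claim X_(N-k+1)."""
    n = len(claims)
    k = (n + 1) // 2
    ordered = sorted(claims, reverse=True)   # X_(N), X_(N-1), ..., X_(1)
    threshold = ordered[k - 1]               # X_(N-k+1)
    return sum(max(x - threshold, 0.0) for x in claims)

claims = [3.0, 12.0, 7.0, 1.0, 9.0]
r_lc = largest_claims(claims, 2)   # 12 + 9 = 21.0
r_ec = ecomor(claims)              # k = 3, threshold = 7: (12-7) + (9-7) = 7.0
```
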

Treaties of random walk type can be handled like before. For example, since RSL(t) = (S(t) − K)+,

P(RSL(t) ≤ x) = P(S(t) ≤ K) + P(K < S(t) ≤ x + K),

so if FS(t) is known, so is FRSL(t).


Treaties of extreme value type are dealt with using techniques from extreme value theory.
9. Probability of ruin

9.1 The risk process


If the renewal model is assumed, then the total claim amount process is
S(t) = Σ_{i=1}^{N(t)} Xi, t ≥ 0.

Let p(t) = ct be the premium income function, where c is the premium


rate. The risk process (or surplus process) is then defined by

U (t) := u + p(t) − S(t), t ≥ 0,

where U (t) is the insurer’s capital balance at time t, and u is the initial
capital.

[Figure: a sample path of the risk process U(t), starting from the initial capital U(0) = 4, plotted for t ∈ [0, 12].]

Definition 9.1.1 (Ruin, ruin time, ruin probability)

ruin := {ω ∈ Ω : U (t, ω) < 0 for some t > 0}


= the event that U ever falls below zero.

Ruin time T := inf{t > 0 : U (t) < 0}


= the time when the process falls below zero
for the first time.

The ruin probability is given by

ψ(u) = P(ruin) = P(T < ∞).


Remark 9.1.2

1. T : Ω → R ∪ {∞} is an extended random variable (i.e. T can also


assume ∞).

2. In the literature ψ(u) is often written as

   ψ(u) = P(ruin | U(0) = u)

   to indicate the dependence on the initial capital u.

3. Ruin can only occur at the times t = Tn, n ≥ 1. This implies

   ruin = {ω ∈ Ω : T(ω) < ∞}
        = {ω ∈ Ω : inf_{t>0} U(t, ω) < 0}
        = {ω ∈ Ω : inf_{n≥1} U(Tn(ω), ω) < 0}
        = {ω ∈ Ω : inf_{n≥1} (u + cTn − S(Tn)) < 0},

   where the last equality follows from the fact that U(t) = u + ct − S(t). Since in the renewal model it was assumed that Wi > 0 a.s., it follows that

   N(Tn) = #{i ≥ 1 : Ti ≤ Tn} = n

   and

   S(Tn) = Σ_{i=1}^{N(Tn)} Xi = Σ_{i=1}^{n} Xi,

   where

   Tn = W1 + ... + Wn,

   which imply that

   {ω ∈ Ω : inf_{n≥1} (u + cTn − S(Tn)) < 0} = {ω ∈ Ω : inf_{n≥1} (u + cTn − Σ_{i=1}^{n} Xi) < 0}.

By setting

Zn := Xn − cWn, n ≥ 1,

and

Gn := Z1 + ... + Zn, n ≥ 1, G0 := 0,

it follows that

{ω ∈ Ω : T(ω) < ∞} = {ω ∈ Ω : inf_{n≥1} (−Gn) < −u} = {ω ∈ Ω : sup_{n≥1} Gn > u},

and for the ruin probability the equality

ψ(u) = P(sup_{n≥1} Gn > u)

holds.
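The representation ψ(u) = P(sup_{n≥1} Gn > u) suggests a Monte Carlo estimate: simulate the random walk Gn and check whether it ever exceeds u. The sketch below truncates each path after a fixed number of claims (so it slightly underestimates ψ(u)) and assumes, purely for illustration, exponential claim sizes and inter-arrival times; for those parameters the classical exponential-claims formula gives ψ(5) ≈ 0.36.

```python
import random

def ruin_probability_mc(u, c, lam, claim_mean, n_claims=200,
                        n_paths=5_000, seed=7):
    """Estimate psi(u) = P(sup_n G_n > u), G_n = (X_1 - c W_1) + ... + (X_n - c W_n),
    by simulating the random walk up to n_claims claims per path.
    Ruin after the n_claims-th claim is missed, so this is a slight
    underestimate of the true ruin probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        g = 0.0
        for _ in range(n_claims):
            w = rng.expovariate(lam)               # inter-arrival time W_i ~ Exp(lam)
            x = rng.expovariate(1.0 / claim_mean)  # claim size X_i ~ Exp(1/claim_mean)
            g += x - c * w
            if g > u:                              # sup_n G_n exceeds u: ruin
                ruined += 1
                break
    return ruined / n_paths

# NPC holds: EZ1 = EX1 - c*EW1 = 1 - 1.2 < 0.
psi_hat = ruin_probability_mc(u=5.0, c=1.2, lam=1.0, claim_mean=1.0)
```
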

The objective is to achieve the following properties:

• Avoid a situation where the ruin probability is ψ(u) = 1.

• ψ(u) should be small if the initial capital u is large.

By the Strong Law of Large Numbers (with the assumption that E|Z1| < ∞),

lim_{n→∞} Gn/n = EZ1 almost surely.

If EZ1 > 0, then Gn → ∞ a.s. as n → ∞, because Gn ≈ nEZ1 for large n. This means the ruin probability is ψ(u) = 1 for all u > 0, if EZ1 > 0.

Theorem 9.1.3 If EW1 < ∞, EX1 < ∞ and

EZ1 = EX1 − cEW1 ≥ 0,

then ψ(u) = 1, i.e. for every fixed u > 0 ruin occurs with probability 1.

Proof:
The case EZ1 > 0 is clear from above. The case EZ1 = 0 can be shown by
random walk theory, but the proof is omitted here.

Definition 9.1.4 (Net profit condition) The renewal model satisfies the
net profit condition (NPC) if and only if

EZ1 = EX1 − cEW1 < 0.


The assertion is that, on average, more premium flows into the portfolio of the company than claim payments flow out:

Gn = −p(Tn) + S(Tn) = −c(W1 + ... + Wn) + (X1 + ... + Xn),

which implies

EGn = nEZ1 < 0.

9.2 Bounds for the ruin probability in the small claim size case

In this section it is assumed that the renewal model is used and the net profit condition holds (i.e. EX1 − cEW1 < 0).

Recall from Theorem 7.1.1 that for a random variable f : (Ω, F) → (R, B(R)) the function

mf(h) = E e^{hf}

was called the moment-generating function if it exists at least for h in a small interval (−h0, h0). We will say that the small claim condition holds if and only if there exists h0 > 0 such that

mX1(h) = E e^{hX1} exists for all h ∈ (−h0, h0).

Theorem 9.2.1 (The Lundberg inequality) If there exists r > 0 such that

mZ1(r) = E e^{r(X1 − cW1)} = 1,

then for each u > 0 it holds that

ψ(u) ≤ e^{−ru},

where r is called the Lundberg coefficient.

Remark 9.2.2 It can be shown that if r exists, it is unique. In practice, r is hard to compute from the distributions of X1 and W1. Therefore it is often approximated numerically or by Monte Carlo methods.

The result implies that if the small claim condition holds and the initial capital u is large, there is "in principle" no danger of ruin.
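Since r is typically found numerically, a bisection solver is one option. The sketch below assumes X1 ∼ Exp(γ) and W1 ∼ Exp(λ), for which mZ1(r) = (γ/(γ−r)) · (λ/(λ+cr)) for 0 ≤ r < γ and the closed form r = γ − λ/c is available as a sanity check; the function names are ours.

```python
def lundberg_coefficient(m_z, r_hi, tol=1e-10):
    """Find the Lundberg coefficient r > 0 solving m_Z1(r) = 1 by bisection.
    Uses that m_Z1 is convex with m_Z1(0) = 1 and m_Z1'(0) = EZ1 < 0 (NPC),
    so m_Z1 < 1 on (0, r) and > 1 on (r, r_hi), provided m_Z1(r_hi) > 1."""
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if m_z(mid) < 1.0:
            lo = mid      # still below 1: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: X1 ~ Exp(gamma), W1 ~ Exp(lam), premium rate c; NPC: 1/gamma < c/lam.
gamma, lam, c = 1.0, 1.0, 1.2
m_z = lambda r: (gamma / (gamma - r)) * (lam / (lam + c * r))
r = lundberg_coefficient(m_z, r_hi=0.9 * gamma)  # closed form here: gamma - lam/c
```
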

The following theorem considers the special case of the Cramér-Lundberg model:

Theorem 9.2.3 Assume that the Cramér-Lundberg model is used and the net profit condition (NPC) holds. In addition, let the distribution of X1 have a density, assume mX1(h) exists in a small neighborhood (−h0, h0) of the origin, and assume the Lundberg coefficient r satisfies −h0 < r < h0. Then there exists a constant c > 0 such that

lim_{u→∞} e^{ru} ψ(u) = c.

9.3 An asymptotics for the ruin probability in the large claim size case

Definition 9.3.1 A distribution function F is called subexponential if and only if for i.i.d. (Xi) with Xi ∼ F it holds that

lim_{x→∞} P(X1 + ... + Xn > x) / P(max{X1, ..., Xn} > x) = 1 for every n ≥ 2.

For the Cramér-Lundberg model, by Proposition 3.3.2,

ES(t) = EN(t) EX1 = λt EX1 = (EX1/EW1) t.

Then, assuming the expected value (or variance) principle,

pEV(t) = (1 + ρ) ES(t) = (1 + ρ) (EX1/EW1) t.

Choosing the premium rate c accordingly, i.e.

p(t) = ct,

it follows that

c = (1 + ρ) EX1/EW1,          (1)

which implies

(1 + ρ) EX1 − cEW1 = 0

and further

EX1 − cEW1 = −ρ EX1 < 0 for ρ > 0,

which means that the net profit condition holds. Equation (1) also implies that

ρ = c EW1/EX1 − 1.

Theorem 9.3.2 Assume the Cramér-Lundberg model is used, EX1 < ∞, the net profit condition (NPC) is fulfilled, X1 has a density and the distribution function

FX1,I(y) := (1/EX1) ∫_0^y (1 − FX1(z)) dz, y > 0,

is subexponential. Then

lim_{u→∞} ψ(u) / (1 − FX1,I(u)) = 1/ρ.
Index

bootstrap method, 35
càdlàg, 7
characteristic function, 27
claim, 5
claim arrival times, 5
claim number process, 7
compound Poisson variable, 28
density function, 5
exponentially distributed random variable, 5
extended random variable, 39
heavy tailed distribution, 23
homogeneous Poisson process, 7
i.i.d., 12
independence, 5
inhomogeneous Poisson process, 14
insurance contract, 5
light tailed distribution, 23
Lundberg coefficient, 42
Lundberg inequality, 42
mean-value function, 14
mixed Poisson process, 14
mixture distribution, 27
moment-generating function, 27
Monte Carlo method, 34
N(t), 7
Panjer recursion scheme, 31
Poisson distributed random variable, 5
Poisson process, 7
  homogeneous, 7
  inhomogeneous, 14
  mixed, 14
policy, 5
premium calculation principle, 21
  Esscher principle, 25
  expected value principle, 21
  exponential principle, 25
  net principle, 21
  quantile principle, 25
  standard deviation principle, 22
  variance principle, 21
probability space, 5
QQ-plot, 23
random variable, 5
  exponentially distributed, 5
  Poisson distributed, 5
reinsurance, 37
renewal model, 17
renewal process, 12
renewal sequence, 12
ruin, 39
ruin probability, 39
ruin time, 39
S(t), 17
  properties, 18
Strong Law of Large Numbers, 13
total claim amount process, 17
  properties, 18
Bibliography

[1] C. Geiss and S. Geiss. An introduction to probability theory. University of Jyväskylä, Lecture Notes 60.

[2] T. Mikosch. Non-life insurance mathematics: an introduction with stochastic processes. Springer, 2004.

[3] M. Loève. Probability theory 1. Springer, 1977.

[4] A.N. Shiryaev. Probability. Springer, 1996.

