
Chapter 3

A Collective Risk Model for a Short Period

From a purely mathematical point of view, the model of this chapter differs from what we
considered in Chapter 2 by the fact that now we explore sums of r.v.'s where not only the
separate terms (addends) are random but the number of terms is random as well. In other
words, our object of study is the r.v.

S = SN = ∑_{j=1}^{N} Xj,     (0.1)

where X1, X2, ... and N are all r.v.'s. If N assumes the value zero, we set S = 0.

1 CONDITIONAL EXPECTATION. CONDITIONING


1.1 Definitions
Let X and Y be r.v.’s. Our immediate goal is to define the quantity which we will denote
by E{Y | X = x}.
First assume X and Y to be discrete. Then we can write

P(Y = y | X = x) = P(X = x, Y = y) / P(X = x).     (1.1)

For the usual expectation,

E{Y} = ∑_y y P(Y = y).

For the conditional expectation we write

E{Y | X = x} = ∑_y y P(Y = y | X = x).

EXAMPLE 1 is classical. Let N1 and N2 be independent Poisson r.v.'s with parameters
λ1 and λ2, respectively. Find E{N1 | N1 + N2 = n}.
Thus, N1 plays the role of Y, and N1 + N2 plays the role of X; we also replaced x by n.
It is known that the conditional distribution of N1 given N1 + N2 = n is binomial. More precisely,

P(N1 = k | N1 + N2 = n) = P(N1 = k, N2 = n − k) / P(N1 + N2 = n)
                        = P(N1 = k) P(N2 = n − k) / P(N1 + N2 = n)
                        = (n!/(k!(n − k)!)) p^k (1 − p)^{n−k},


where

p = λ1 / (λ1 + λ2).

Since the mean value of the binomial distribution is np,

E{N1 | N1 + N2 = n} = nλ1 / (λ1 + λ2).     (1.2) 
In the continuous case, we deal with densities, and the analogue of the ratio
P(X = x, Y = y)/P(X = x) is the ratio of densities

f(y | x) = f(x, y) / f(x),

and

E{Y | X = x} = ∫_{−∞}^{∞} y f(y | x) dy.     (1.3)

For E{Y | X = x} so defined, we will also use the notation mY |X (x). The function mY |X (x)
is often called a regression function of Y on X. When it does not cause misunderstanding,
we omit the index Y |X and write just m(x).
So, m(x) is the mean value of Y given X = x. However, since X is a random variable, the
value x it takes on is itself random. To reflect this circumstance, let us replace the
argument x in m(x) by the r.v. X itself, that is, consider the r.v. m(X).
This r.v. has a special notation: E{Y | X}, and is called the conditional expectation of Y
given X.
EXAMPLE 2. Let us return to Example 1 and set N = N1 + N2, λ = λ1 + λ2. By virtue
of (1.2), mN1|N(n) = (λ1/λ) n, and hence

E{N1 | N} = (λ1/λ) N. 

1.2 Properties of conditional expectations


Some questions:

• If X and Y are independent, then E{Y | X} =?

• E{X | X} =?

• E{X 2 | X} =?

• E{E{Y | X}} =?

PROPERTIES
1. The main and extremely important property of conditional expectation is that for any
X and for any Y with a finite E{Y },

E{E{Y | X}} = E{Y }. (1.4)

Thus,

if we “condition Y on X” and then compute the expected value of the conditional
expectation, we come back to the original unconditional expectation E{Y}.

We call (1.4) the formula for total expectation or the law of total expectation.
2. For any number c and r.v.’s X, Y,Y1 , and Y2 ,
E{cY | X} = cE{Y | X} and E{Y1 +Y2 | X} = E{Y1 | X} + E{Y2 | X}. (1.5)

3. If r.v.’s X and Y are independent, then E{Y | X} is not random and equals E{Y }.
4. Consider Y = g(X)Z, where Z is a r.v. and g(x) is a function. Then
E{g(X)Z | X} = g(X)E{Z | X}. (1.6)
In particular,
E{g(X) | X} = g(X), (1.7)
and E{X | X} = X. Intuitively, this is quite understandable: when conditioning on X,
we view X as a constant, and hence g(X) may be brought outside of the conditional
expectation.
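The law of total expectation (1.4) is easy to check numerically. Below is a minimal Monte Carlo sketch (the Poisson sampler and the parameters λ1 = 2, λ2 = 3 are our own illustrative assumptions, chosen to match Examples 1-2): the sample average of the r.v. E{N1 | N} = (λ1/λ)N should coincide with E{N1}.

```python
import math, random

# Monte Carlo sketch (parameters are assumptions): Y = N1, X = N1 + N2.
# Checks E{E{N1 | N}} = E{N1}, using E{N1 | N} = (lam1/lam)*N from Example 2.
random.seed(0)

def poisson(lam):
    # Knuth's multiplication algorithm; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam1, lam2 = 2.0, 3.0
lam = lam1 + lam2
n_sims = 200_000
mean_Y = mean_cond = 0.0
for _ in range(n_sims):
    n1, n2 = poisson(lam1), poisson(lam2)
    mean_Y += n1
    mean_cond += (lam1 / lam) * (n1 + n2)   # a realization of E{N1 | N}
mean_Y /= n_sims
mean_cond /= n_sims
print(mean_Y, mean_cond)   # both close to E{N1} = lam1 = 2
```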

2 THREE BASIC PROPOSITIONS


We are coming back to (0.1).
Assume that the r.v.'s X1, X2, ... and N are mutually independent, that X1, X2, ... are identi-
cally distributed, and that N assumes values 0, 1, 2, ... . Set m = E{Xi}, σ² = Var{Xi}.
Proposition 1 The mean value of S is given by

E{S} = mE{N}. (2.1)

In particular, if N is a Poisson r.v. with parameter λ, then

E{S} = mλ. (2.2)


4 3. A COLLECTIVE RISK MODEL

Proof. By the formula for total expectation (1.4), E{S} = E{E{S | N}}. In the condi-
tional expectation E{S | N}, the value of the r.v. N is given, and we deal with a sum of a
fixed number of addends. Hence, E{S | N} = mN and E{S} = E{mN} = mE{N}. 
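Proposition 1 can be illustrated by simulation. Here is a sketch under assumed parameters (N Poisson with λ = 10, the Xi exponential with mean m = 2, so E{S} = mλ = 20):

```python
import math, random

# Simulation sketch of Proposition 1 (assumed parameters):
# N ~ Poisson(10), X_i exponential with mean 2, so E{S} = 20.
random.seed(1)

def poisson(lam):
    # Knuth's multiplication algorithm
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, m = 10.0, 2.0
n_sims = 100_000
total = 0.0
for _ in range(n_sims):
    n = poisson(lam)
    total += sum(random.expovariate(1.0 / m) for _ in range(n))
est = total / n_sims
print(est)   # close to m * lam = 20
```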
Proposition 2 The variance of S is given by

Var{S} = σ2 E{N} + m2Var{N}. (2.3)

In particular, if N is a Poisson r.v. with parameter λ, then

Var{S} = λE{X 2 }, (2.4)

where X is a r.v. distributed as the Xi ’s.

Proof. Var{S} = E{S²} − (E{S})². By the formula for total expectation, E{S²} =
E{E{S² | N}}. Given N, the sum S has a fixed number of terms, so E{S² | N} =
Var{S | N} + (E{S | N})² = σ²N + (mN)². Hence E{S²} = σ²E{N} + m²E{N²}, and

Var{S} = σ²E{N} + m²(E{N²} − (E{N})²) = σ²E{N} + m²Var{N}.

If N is Poisson, E{N} = Var{N} = λ, and Var{S} = (σ² + m²)λ = λE{X²}. 
Proposition 3 For all z for which the m.g.f.’s below are well defined, the m.g.f. of S is

MS (z) = MN (ln MX (z)), (2.5)

where MN (·) is the m.g.f. of N, and MX (z) is the (common) m.g.f. of the r.v.’s Xi .
In particular, if N is a Poisson r.v. with parameter λ, then

MS (z) = exp{λ(MX (z) − 1)}. (2.6)

Proof. We have

MS(z) = E{e^{zS}} = E{E{e^{zS} | N}}.

In E{e^{zS} | N}, the value of N is given, so the conditional expectation E{e^{zS} | N} is the m.g.f.
of a sum of a fixed number of terms. Hence, by the main property of m.g.f.'s,

E{e^{zS} | N} = (MX(z))^N = e^{(ln MX(z)) N},

and

E{e^{zS}} = E{e^{(ln MX(z)) N}}.     (2.7)

The r.-h.s. of (2.7) is the m.g.f. of N at the point ln MX(z), which implies (2.5).
If N is a Poisson r.v. with parameter λ, then the m.g.f. MN(z) = exp{λ(e^z − 1)}. Replac-
ing z by ln MX(z), we obtain (2.6). 

Note also that in the case where the m.g.f.’s above exist, Propositions 1-2 follow from
Proposition 3 (see Exercise 1) but the direct proofs above are simpler than the derivation
from (2.5).
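Formula (2.6) can also be checked numerically. A sketch under assumed parameters (λ = 3, X Bernoulli with p = 1/2, evaluated at z = 0.2): we compare a Monte Carlo estimate of E{e^{zS}} with exp{λ(MX(z) − 1)}.

```python
import math, random

# Numerical sketch of (2.6) under assumed parameters:
# N ~ Poisson(3), X ~ Bernoulli(1/2), z = 0.2.
random.seed(2)

def poisson(lam):
    # Knuth's multiplication algorithm
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, z = 3.0, 0.2
MX = 0.5 + 0.5 * math.exp(z)          # m.g.f. of Bernoulli(1/2) at z
exact = math.exp(lam * (MX - 1.0))    # formula (2.6)
n_sims = 200_000
est = 0.0
for _ in range(n_sims):
    s = sum(random.random() < 0.5 for _ in range(poisson(lam)))
    est += math.exp(z * s)
est /= n_sims
print(exact, est)   # the two values should nearly coincide
```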

As an illustration of (2.5), let P(N = n) = 1, i.e., N is non-random. Then

MN(z) = E{e^{zN}} = E{e^{zn}} = e^{zn},

and

MS(z) = e^{(ln MX(z)) n} = (MX(z))^n,

the m.g.f. of the sum of a fixed number n of terms, as it should be.



As illustrations of (2.6): if λ = 200 and the Xi's are exponential with MX(z) = 1/(1 − z/2), then

MS(z) = exp{200(1/(1 − z/2) − 1)};

if λ = 200 and the Xi's have the Γ-distribution with ν = 2 and MX(z) = 1/(1 − z/2)^2, then

MS(z) = exp{200(1/(1 − z/2)^2 − 1)};

and if λ = 300 and the Xi's take on the values 1 and 2 with probabilities 0.1 and 0.9, then

MS(z) = exp{300(0.1e^z + 0.9e^{2z} − 1)}.



3 COUNTING OR FREQUENCY DISTRIBUTIONS. POISSON DISTRIBUTION
The distribution of the r.v. N is sometimes called a counting or frequency distribution.
Different types of this distribution are considered in the theory and applications. We restrict
ourselves to the Poisson distribution; in a certain sense, the simplest and most important
distribution.
The Poisson distribution is that of an integer-valued r.v. Z such that

P(Z = k) = e^{−λ} λ^k / k!  for k = 0, 1, ...,     (3.1)

where λ is a positive parameter. As is proved in almost any course in Probability,

E{Z} = λ, Var{Z} = λ. (3.2)

There are at least two explanations why this distribution plays a key role in our model.
First, the Poisson distribution may appear when we view the flow of claims arriving at the
company as a random process in continuous time. We will consider this later.
Another explanation is connected with Poisson’s theorem. Consider a sequence of n
independent trials with the probability of success at each trial equal to p. Let N be the total
number of successes. As we know, N has the binomial distribution; that is,
P(N = k) = (n!/(k!(n − k)!)) p^k (1 − p)^{n−k}.     (3.3)

To state the following theorem rigorously, we assume that the probability p depends on n,
and

p = pn = λ/n + o(1/n),     (3.4)

where λ is a positive number.
We use here the Calculus symbol o(x), which denotes a function converging to zero, as
x → 0, faster than x; that is, o(x)/x → 0.
In other words, the second term o(1/n) in (3.4) is negligible for large n with respect to the
first term λ/n.
Thus, the r.v. N in this framework depends on n, so we write N = Nn .
Theorem 4 (Poisson). For any k,

P(Nn = k) → e^{−λ} λ^k / k!  as n → ∞.     (3.5)
Consider, for example, a portfolio of n policies “functioning” independently, and suppose
that for each policy, the probability of the occurrence (during a fixed period) of a claim
equals the same number p. Then, we may identify policies with independent trials and in
the case of “large” n and “small” p, approximate the distribution of the total number of
claims by the Poisson distribution with the parameter λ = pn.

EXAMPLE 1. Assume n = 30, and, initially, p = 0.5. Then λ = 15. The Excel work-
sheet in Fig. 1a shows the binomial [the r.-h.s. of (3.3)] and Poisson [the r.-h.s. of (3.5)]
probabilities in columns B and C, respectively. The corresponding graphs are in the chart.
The values of the distribution functions are given in columns E and F, and their difference
in column G.
Now, let n still be 30, but let p = 0.1. Then λ = 3. The result given in Fig. 1b shows
that now the distributions are fairly close, the chart looks just perfect, and the maximal
excess of the binomial d.f. over the Poisson d.f. is 0.0107 (at k = 5), which is not bad at all.
It is a bit surprising that such a good approximation can appear for relatively small n.
[Fig. 1a: for each k = 0, ..., 30, columns of binomial (n = 30, p = 0.5) and Poisson (λ = 15)
probabilities, the corresponding d.f.'s, and their difference, together with a chart of the two
distributions. Fig. 1b: the same for n = 30, p = 0.1 (λ = 3).]
FIGURE 1. The accuracy of Poisson approximation.
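The comparison of Fig. 1b is easy to reproduce without Excel. The sketch below computes both d.f.'s for n = 30, p = 0.1 and finds the largest excess of the binomial d.f. over the Poisson d.f., the value 0.0107 at k = 5 cited above.

```python
import math

# Binomial(n = 30, p = 0.1) vs Poisson(lam = 3), as in Fig. 1b.
n, p, lam = 30, 0.1, 3.0

binom = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
poiss = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n + 1)]

# track the largest (signed) excess of the binomial d.f. over the Poisson d.f.
max_diff, argmax = 0.0, 0
Fb = Fp = 0.0
for k in range(n + 1):
    Fb += binom[k]
    Fp += poiss[k]
    d = Fb - Fp
    if d > max_diff:
        max_diff, argmax = d, k
print(argmax, round(max_diff, 4))   # k = 5 and 0.0107
```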

Next, we consider the case of different probabilities of successes, which for the portfolio
example correspond to a non-homogenous group of clients. Let
Ij = 1 with probability pj, and Ij = 0 with probability 1 − pj.

Say, Ij is the indicator of the event that the jth customer will make a claim. Let

Nn = ∑_{j=1}^{n} Ij

(the total number of claims).


The distribution of Nn is sometimes called the Poisson-binomial. We will see that, if the pj's
are small, we can again apply the Poisson approximation. To state it rigorously, assume, as
we did above, that each probability pj depends on n; in symbols, pj = pjn. Let

p̄n = (p1n + ... + pnn)/n,

the average probability.

Theorem 5 (Generalized Poisson). Assume that

max_{j≤n} pjn → 0,     (3.6)

and

p̄n = λ/n + o(1/n)     (3.7)

for some λ > 0. Then (3.5) is true.
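Theorem 5 can be illustrated numerically, since the exact Poisson-binomial distribution is computable by a simple dynamic program. The probabilities pj below are our own illustrative choice (small and unequal, with λ = ∑ pj = 3):

```python
import math

# Exact Poisson-binomial distribution of N_n = I_1 + ... + I_n,
# compared with Poisson(lam), lam = sum of the p_j (assumed values).
p = [0.01 * (1 + j % 5) for j in range(100)]   # small, unequal p_j
lam = sum(p)                                   # equals 3.0

# dp[k] = P(I_1 + ... + I_j = k) after processing j indicators
dp = [1.0]
for pj in p:
    new = [0.0] * (len(dp) + 1)
    for k, q in enumerate(dp):
        new[k] += q * (1 - pj)
        new[k + 1] += q * pj
    dp = new

poiss = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(len(dp))]
max_gap = max(abs(a - b) for a, b in zip(dp, poiss))
print(lam, max_gap)   # the pointwise gap is small
```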

4 THE DISTRIBUTION OF THE AGGREGATE CLAIM


4.1 The case of a homogeneous group
First, we consider a homogeneous group of clients and claims coming from this group.
Namely, we consider a fixed time period and assume that the total claim S during this
period is represented by relation (0.1) where the sizes of claims, Xi ’s, are independent and
identically distributed (i.i.d.) r.v.’s, and the total number of claims, N, is independent of the
X’s. If N = 0, we set S = 0. Unless stated otherwise, we suppose P(X j > 0) = 1 for all j.
Propositions 1 and 2 give a clear way to compute E{S} and Var{S}. Examples are given
in Exercises.

4.1.1 The convolution method


Let gn = P(N = n) and F(x) be the d.f. of X j . Since all X’s are positive with probability
one,
P(S = 0) = P(N = 0) = g0 , (4.1.1)

and the d.f. FS(x) has a “jump” of g0 at x = 0. Furthermore, for x ≥ 0,

FS(x) = P(S ≤ x) = ∑_{n=0}^{∞} P(S ≤ x | N = n) P(N = n) = ∑_{n=0}^{∞} P(Sn ≤ x | N = n) P(N = n),

where, as usual, Sn = X1 + ... + Xn, and S0 = 0. The X's do not depend on N, and they are
mutually independent. Hence for n ≥ 1, we have P(Sn ≤ x | N = n) = P(Sn ≤ x) = F^{∗n}(x),
where the symbol F^{∗n} denotes F ∗ ... ∗ F, the nth convolution of F (see Section ??.??).
Thus,

FS(x) = ∑_{n=0}^{∞} gn F^{∗n}(x).     (4.1.2)

If the density f(x) = F′(x) exists, then the density fSn(x) also exists for all n except n = 0.
Since the derivative (F^{∗0}(x))′ = 0 for x > 0, the d.f. FS(x) is differentiable for all x > 0. We
call the corresponding derivative the density of S; it exists for all x > 0. Eventually,
differentiating (4.1.2), we get that for x > 0,

fS(x) = ∑_{n=1}^{∞} gn f^{∗n}(x).     (4.1.3)

A similar formula holds in the case where the X's are discrete r.v.'s.
EXAMPLE 1. Let all Xi have the Γ-distribution with parameters (a, ν). In particular, for
ν = 1, it is the exponential distribution with parameter a. Denote the corresponding d.f. by
Γ(x; a, ν).
We know that Sn has the d.f. Γ(x; a, nν), and if f(x) is the density of Xi, then the density
of Sn is

f^{∗n}(x) = a^{nν} x^{nν−1} e^{−ax} / Γ(nν).

By (4.1.3), for the density fS(x) and x > 0, we have

fS(x) = ∑_{n=1}^{∞} gn a^{nν} x^{nν−1} e^{−ax} / Γ(nν).     (4.1.4) 

4.1.2 The case where N has a Poisson distribution


The distribution of S in the case where N is Poisson is called compound Poisson.
Consider l independent Poisson r.v.’s, N1 , ..., Nl with respective means λ1 , ..., λl , and set
N = N1 + ... + Nl . Clearly, N is a Poisson r.v. with parameter λ = λ1 + ... + λl .

Proposition 6 Let pi = λi/λ, i = 1, ..., l. (So, p1 + ... + pl = 1.) Then for any n =
1, 2, ..., and any non-negative integers m1, ..., ml such that m1 + ... + ml = n,

P(N1 = m1, ..., Nl = ml | N = n) = (n!/(m1! ··· ml!)) p1^{m1} ··· pl^{ml}.     (4.1.5)

In particular, for any i = 1, ..., l and k = 0, ..., n,

P(Ni = k | N = n) = (n!/(k!(n − k)!)) pi^k (1 − pi)^{n−k}.     (4.1.6)

Comments.

• Let l = 2 and set m1 = k. Then for N = n, we should have m2 = n − k, and

P(N1 = k | N = n) = P(N1 = k, N2 = n − k | N = n) = (n!/(k!(n − k)!)) p1^k p2^{n−k}
                  = (n!/(k!(n − k)!)) p1^k (1 − p1)^{n−k}.     (4.1.7)

Next, consider Ni and Ñ = N − Ni, that is, N = Ni + Ñ. Then from the above, we get (4.1.6).

• Let n = 1. Then from (4.1.6) it follows that

P(Ni = 1 | N = 1) = pi (1 − pi)^0 = pi = λi/λ.

• Let l = 3. Then from (4.1.5), we have

P(N1 = m1, N2 = m2, N3 = m3 | N = n) = (n!/(m1! m2! m3!)) p1^{m1} p2^{m2} p3^{m3}.

In particular, it follows from this proposition that the probability that a given claim is of
type i is pi = λi/λ.

The next fact may be considered a converse to Proposition 6.


Let now N be the random number of some objects, and suppose that N is a Poisson r.v.
with a mean of λ. Each object, independently of the other objects and of the number of the
objects, may belong to one of l types. For each object, the probability of belonging to type
i is pi ; p1 + ... + pl = 1.
For example, each day a company deals with N claims, where the size of each claim equals
either $100 or $150 with respective probabilities p1 and p2.
Coming back to the general wording, denote by N1 , ..., Nl the numbers of objects of types
i = 1, ..., l. Clearly, N1 + ... + Nl = N.
Proposition 7 Let λi = pi λ, i = 1, ..., l. Then the r.v.’s N1 , ..., Nl are independent and
have the Poisson distribution with respective parameters λ1 , ..., λl .

The r.v.'s Ni are sometimes called marked Poisson r.v.'s: they count only “marked” ob-
jects.
Let us come back to arriving claims, denote by N the total number of claims, and by
X j the size of the jth claim. Let N be a Poisson r.v., E{N} = λ. Assume that the X’s are
independent, and each X takes on l values x1 , ..., xl with respective probabilities p1 , ..., pl .
Consider the sum S = X1 + ... + XN , and denote by Ni the number of the r.v.’s X that took
on the value xi , i = 1, ..., l. Then N1 + ... + Nl = N and the total aggregate claim

S = x1 N1 + ... + xl Nl , (4.1.8)

which may essentially simplify calculations; especially if l is not large.


EXAMPLE 1. Let us come back to the arriving claims with a size of $100 or $150.
Assume that the number of claims during a day is a Poisson r.v. N with a mean of 40,
and on the average, 75% of claims equal $100. If we had been solving the problem in a
straightforward fashion, we would have introduced the r.v.’s
Xj = 100 with probability 3/4, and Xj = 150 with probability 1/4,

and would have considered S = X1 + ... + XN , the sum where not only the separate terms
are random, but the number of terms is random also.
As we saw, this is a complex object. However, in the case under consideration, we may
just write
S = $100 · N1 + $150 · N2 ,
where N1 and N2 are the numbers of claims equal to $100 and $150, respectively. By Propo-
sition 7, N1 and N2 are independent Poisson r.v.'s with parameters λ1 = 0.75 · 40 = 30 and
λ2 = 0.25 · 40 = 10, respectively.
Thus, the sum of 40 r.v.'s on the average has been reduced to the sum of only two (!)
r.v.'s. Such a sum is easily tractable. The first two moments may be written immediately:

E{S} = 100E{N1 } + 150E{N2 } = 100λ1 + 150λ2 = 4, 500;


Var{S} = 1002Var{N1 } + 1502Var{N2 } = 1002 λ1 + 1502 λ2 = 525, 000. 
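The reduction in this example is easy to verify numerically: by Proposition 7, S = 100·N1 + 150·N2 with independent Poisson N1 and N2. In the sketch below, the truncation point K is our assumption; the Poisson tails beyond it are negligible.

```python
import math

# Verifying Example 1: S = 100*N1 + 150*N2, N1 ~ Poisson(30),
# N2 ~ Poisson(10), independent. K is an assumed truncation point.
lam1, lam2, K = 30.0, 10.0, 120

def pois_pmf(lam, size):
    # pmf by the recursion p_k = p_{k-1} * lam / k
    pm = [math.exp(-lam)]
    for k in range(1, size):
        pm.append(pm[-1] * lam / k)
    return pm

pm1, pm2 = pois_pmf(lam1, K), pois_pmf(lam2, K)
mean = var = 0.0
for n1, q1 in enumerate(pm1):
    for n2, q2 in enumerate(pm2):
        pr = q1 * q2
        s = 100 * n1 + 150 * n2
        mean += pr * s
        var += pr * s * s
var -= mean ** 2
print(round(mean), round(var))   # 4500 and 525000, as in the text
```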

4.2 The case of several homogeneous groups


4.2.1 The probability of coming from a particular group
Let Ni be the number of claims coming from the ith group, i = 1, ..., l; N = N1 + ... + Nl ,
the total number of claims. We assume Ni ’s to be independent and Poisson with respective
parameters λ1 , ..., λl .
Let λ = λ1 + ... + λl, and pi = λi/λ. In Section 4.1.2, we proved that given N = n, the
joint distribution of N1, ..., Nl is multinomial with parameters p1, ..., pl.
For definiteness, consider the first group. Assume we know that N took on a particular
value n ≠ 0, and N1 took on a value k. Then the probability that a particular claim (from
the n claims that arrived) came from the first group is k/n. This is true whether we choose
a claim at random from the n claims or consider a specific claim, say, the fifth (provided n ≥ 5).
So, formally, if A is the event that a particular claim chosen came from the first group,
then P(A | N1 = k, N = n) = k/n. Then by the formula for total probability, for n ≥ 1,

P(A | N = n) = ∑_{k=0}^{n} P(A | N1 = k, N = n) P(N1 = k | N = n) = (1/n) ∑_{k=0}^{n} k P(N1 = k | N = n).

In Proposition 6 we have shown that the conditional distribution of N1 given N = n is
binomial with parameters (n, p1). The sum above is the mean of this distribution
and, hence, equals np1. Then

P(A | N = n) = (1/n) · np1 = p1.

Without loss of generality, we can also postulate that if there are no claims, then the prob-
ability that a claim comes from the first group is also p1. In other words, let us set by
convention P(A | N = 0) = p1. This is a formal (and insignificant) assumption. Then

P(A) = ∑_{n=0}^{∞} P(A | N = n) P(N = n) = p1 ∑_{n=0}^{∞} P(N = n) = p1.

Certainly, the same holds for all other groups.


Consider also a couple of examples based on the same Proposition 6.
EXAMPLE 1. Let l = 3, λ1 = 2 and λ2 = 3, λ3 = 5. Assume that the total number of
claims, N, took on the value 6. What is the probability that N1 ≤ 2 and N2 ≤ 3? Since
p1 = 0.2, p2 = 0.3, and p3 = 0.5, in accordance with (4.1.5),
P(N1 ≤ 2, N2 ≤ 3 | N = 6) = ∑_{i=0}^{2} ∑_{j=0}^{3} (6!/(i! j! (6 − i − j)!)) (0.2)^i (0.3)^j (0.5)^{6−i−j} = 0.83065,

which may be calculated even by hand.
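Indeed, the double sum is a short computation:

```python
import math

# Direct computation of the probability in Example 1 from (4.1.5).
p1, p2, p3, n = 0.2, 0.3, 0.5, 6
total = 0.0
for i in range(3):          # N1 <= 2
    for j in range(4):      # N2 <= 3
        total += (math.factorial(n)
                  / (math.factorial(i) * math.factorial(j)
                     * math.factorial(n - i - j))
                  * p1**i * p2**j * p3**(n - i - j))
print(round(total, 5))   # 0.83065
```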


EXAMPLE 2. Let l = 3, λ1 = 100, λ2 = 200, λ3 = 500. Assume that the total number of
claims N has assumed a particular value of 900. What is P(N1 ≤ 120 | N = 900)? According
to (4.1.6), the conditional distribution of N1 is binomial with parameters p = p1 = 1/8
and n = 900. Let X be a r.v. with this distribution. Of course, computing P(X ≤ 120)

exactly by hand is cumbersome, but we can apply the normal approximation by using the central
limit theorem. (In the case of the binomial distribution, it is called the de Moivre-Laplace
theorem.) Since E{X} = (1/8) · 900 = 112.5 and Var{X} = (1/8) · (7/8) · 900 ≈ 98.44, we can write

P(X ≤ 120) ≈ Φ((120 − 112.5)/√98.44) ≈ Φ(0.756) ≈ 0.78. 
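The approximation above is reproduced by the following sketch, with Φ expressed through the error function:

```python
import math

# The normal approximation of Example 2, with Phi via math.erf.
def Phi(x):
    # standard normal d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 900, 1.0 / 8.0
mu = n * p                 # 112.5
var = n * p * (1 - p)      # about 98.44
approx = Phi((120 - mu) / math.sqrt(var))
print(round(approx, 2))    # 0.78
```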

4.2.2 A general scheme and reduction to one group


Now, together with the Ni's and N, we adopt the following notation:

Xij, i = 1, ..., l, j = 1, 2, ..., is the size of the jth claim coming from the ith group;
Fi(x), i = 1, ..., l, is the common d.f. of the Xij;
Mi(z), i = 1, ..., l, is the common m.g.f. of the Xij;
Ni is the (random) number of claims coming from the ith group;
N = N1 + ... + Nl;
Si = ∑_{j=1}^{Ni} Xij, i = 1, ..., l, the total of all the claims in the ith group;
S = ∑_{i=1}^{l} Si, the total of all the claims.

It makes sense to emphasize that for each group i, the r.v.’s Xi j are identically distributed.
We assume that all r.v.’s Ni , i = 1, ..., l, and Xi j , i = 1, ..., l, j = 1, 2, ... are mutually inde-
pendent. Then the distribution of S is given by

FS = FS1 ∗ ... ∗ FSl , (4.2.1)

the convolution of the distributions of the aggregate claims for separate groups. If we
manage to find separate FSi , and if l is not large, then operation (4.2.1) may be numerically
tractable.
In the case where all r.v.’s Ni have Poisson distributions, the above scheme may be sim-
plified.
Set again λi = E{Ni }, and λ = λ1 + ... + λl . So, N is Poisson with parameter λ.
Let us consider the portfolio as a whole and denote by Yk the size of the kth claim arriving,
whichever group it comes from.
Let Bik be the event that the kth claim comes from the ith group. We know that for any k,
the probability P(Bik) = pi, where pi = λi/λ (that is, it does not depend on k).
Then the d.f. of Yk does not depend on k and is equal to the function

FY(x) = P(Yk ≤ x) = ∑_{i=1}^{l} P(Yk ≤ x | Bik) P(Bik) = ∑_{i=1}^{l} Fi(x) pi.     (4.2.2)

So, the Y's are i.i.d., and the distribution of the Y's is a mixture of the distributions Fi.
Eventually, we may unify the groups into one homogeneous group, writing

S = ∑_{k=1}^{N} Yk.

EXAMPLE 1. Let l = 3, λ1 = 100, λ2 = 200, λ3 = 500, and let the r.v.'s

X1j = 1 with probability 1/3, and 2 with probability 2/3;
X2j = 1 with probability 1/4, and 2 with probability 3/4;
X3j = 1 with probability 1/6, and 2 with probability 5/6.

Then λ = 800, p1 = λ1/λ = 1/8, p2 = 1/4, and p3 = 5/8. The distribution of each Yk is the mixture
of the distributions above. Therefore, Yk takes on the values 1 and 2, and

P(Yk = 1) = (1/8)·(1/3) + (1/4)·(1/4) + (5/8)·(1/6) = 5/24.

Hence,

Yk = 1 with probability 5/24, and 2 with probability 19/24.

Thus, S = Y1 + ... + YN, where N is a Poisson r.v. with parameter λ = 800. By (2.2) and (2.4),

E{S} = E{Yj} E{N} = (43/24) · 800 = 4300/3 = 1433.3...,

Var{S} = E{Yj²} E{N} = (81/24) · 800 = 2700.

The distribution of S is compound Poisson. Certainly, we cannot write this distribution
in an explicit form, but we can write its m.g.f. By (2.6),

MS(z) = exp{800 (MY(z) − 1)} = exp{800((5/24) e^z + (19/24) e^{2z} − 1)}
      = exp{(500/3) e^z + (1900/3) e^{2z} − 800}.

In the case under consideration, we can proceed further using the construction of Section
4.1.2. In accordance with the results of that section,

S = K1 + 2K2,

where K1 and K2 are independent Poisson r.v.'s with parameters (5/24) · 800 = 500/3 and
(19/24) · 800 = 1900/3, respectively. A program for calculating such a distribution is
straightforward.
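Indeed, the representation S = K1 + 2K2 is straightforward to program. A sketch (the truncation point K below is our assumption; the Poisson tails beyond it are negligible): we recover E{S} = 4300/3 and Var{S} = 2700.

```python
import math

# Distribution of S = K1 + 2*K2, K1 ~ Poisson(500/3), K2 ~ Poisson(1900/3),
# independent, on a truncated grid (K is an assumed truncation point).
lam1, lam2, K = 500.0 / 3.0, 1900.0 / 3.0, 1400

def pois_pmf(lam, size):
    # pmf by the recursion p_k = p_{k-1} * lam / k
    pm = [math.exp(-lam)]
    for k in range(1, size):
        pm.append(pm[-1] * lam / k)
    return pm

pm1, pm2 = pois_pmf(lam1, K), pois_pmf(lam2, K)
dist = [0.0] * (3 * K)
for k1, q1 in enumerate(pm1):
    for k2, q2 in enumerate(pm2):
        dist[k1 + 2 * k2] += q1 * q2

mean = sum(s * pr for s, pr in enumerate(dist))
var = sum(s * s * pr for s, pr in enumerate(dist)) - mean ** 2
print(round(mean, 1), round(var))   # 1433.3 and 2700
```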

EXAMPLE 2. Let l = 2, λ1 = 200, λ2 = 300. Assume the r.v.'s X1j and X2j are exponen-
tially distributed with E{X1j} = 2 and E{X2j} = 3. Then λ = 500, p1 = λ1/λ = 0.4, p2 = 0.6,
and S = Y1 + ... + YN, where N is a Poisson r.v. with parameter λ = 500, and the distribution
of the Y's is the mixture of the exponential distributions above. The density

fY(x) = p1 f1(x) + p2 f2(x),

where f1 and f2 are the densities of the r.v.'s X1j and X2j, respectively. Thus, for x ≥ 0,

fY(x) = 0.4 · (1/2) e^{−x/2} + 0.6 · (1/3) e^{−x/3} = 0.2 (e^{−x/2} + e^{−x/3}).     (4.2.3)

The collection of two groups above can be reduced to a homogeneous portfolio with the
distribution of a particular claim given in (4.2.3). The m.g.f. of distribution (4.2.3) is the
mixture of the m.g.f.'s of the above exponential distributions and equals

M(z) = 0.4/(1 − 2z) + 0.6/(1 − 3z) = (1 − 2.4z)/((1 − 2z)(1 − 3z)).

The m.g.f. of the r.v. S is

MS (z) = exp{500(M(z) − 1)}.

Calculating E{S} and Var{S} is easy and may be done either by using (2.2) and (2.4), or
directly as follows:

E{S} = λ1 E{X1 j } + λ2 E{X2 j } = 200 · 2 + 300 · 3 = 1300,


Var{S} = λ1 E{X12j } + λ2 E{X22j } = 200 · 8 + 300 · 18 = 7000.
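A simulation sketch of this example (replacing the Poisson(500) sampler by its normal approximation is our shortcut, adequate for λ this large): the sample mean and variance should be near 1300 and 7000.

```python
import math, random

# Simulation of Example 2: N ~ Poisson(500) (normal approximation,
# an assumption made for speed), claims from the mixture (4.2.3).
random.seed(3)

def poisson_approx(lam):
    # normal approximation to Poisson, adequate for lam as large as 500
    return max(0, round(random.gauss(lam, math.sqrt(lam))))

def claim():
    # with probability 0.4, exponential with mean 2; else with mean 3
    mean = 2.0 if random.random() < 0.4 else 3.0
    return random.expovariate(1.0 / mean)

n_sims = 10_000
vals = [sum(claim() for _ in range(poisson_approx(500.0)))
        for _ in range(n_sims)]
m = sum(vals) / n_sims
v = sum((x - m) ** 2 for x in vals) / n_sims
print(round(m), round(v))   # near E{S} = 1300 and Var{S} = 7000
```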

EXAMPLE 3. An insurance company pays claims at a Poisson rate of 2,000 per year.
Claims are divided into three categories: “minor”, “major”, and “severe”, with payment
amounts of $1,000, $5,000, and $10,000, respectively. The proportion of “minor” claims is
50%. The total expected claim payment per year is $7,000,000. What is the proportion of
“severe” claims?
Denote by λi , i = 1, 2, 3, the Poisson rates above. Let λ = λ1 + λ2 + λ3 and pi = λi /λ.
The term “proportion” concerns the probabilities pi . Let us choose $1000 as a monetary
unit. Then the total expected payment is λ1 + 5λ2 + 10λ3 = 7000. Dividing it by λ = 2000,
we have p1 + 5p2 + 10p3 = 3.5. Together with p1 = 0.5, and p1 + p2 + p3 = 1, this leads
to p3 = 0.1. 
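The arithmetic of the last step, in units of $1000:

```python
# Example 3: p1 = 0.5, p1 + p2 + p3 = 1, and p1 + 5*p2 + 10*p3 = 3.5.
p1 = 0.5
# substitute p2 = 1 - p1 - p3 into the expected-payment equation:
# p1 + 5*(1 - p1 - p3) + 10*p3 = 3.5  =>  5 - 4*p1 + 5*p3 = 3.5
p3 = (3.5 - 5 + 4 * p1) / 5
print(p3)   # 0.1
```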

5 PREMIUMS AND SOLVENCY. NORMAL APPROXIMATION
5.1 Limit theorem
In this section, we restrict ourselves to a homogeneous portfolio; more precisely, we
assume the random addends Xi in (0.1) are independent and identically distributed (i.i.d.).
Let N = Nλ be a r.v. having the Poisson distribution with parameter λ, and let

S(λ) = X1 + ... + XNλ . (5.1.1)

We enclose λ in parentheses in order to distinguish this notation from Sn = X1 + ... + Xn.


As already mentioned in Section 4.1.2, the distribution of S(λ) is called compound Poisson.
Set m = E{Xi } and σ2 = Var{Xi }. By (2.2) and (2.4),

E{S(λ) } = mλ, Var{S(λ) } = (σ2 + m2 )λ.

Consider the normalized r.v.

S*(λ) = (S(λ) − E{S(λ)}) / √Var{S(λ)} = (S(λ) − mλ) / √((σ² + m²)λ).

As we know, E{S*(λ)} = 0 and Var{S*(λ)} = 1.

Theorem 8 For any x,

P(S*(λ) ≤ x) → Φ(x), as λ → ∞,

where, as usual, Φ(x) is the standard normal d.f.

5.2 Estimation of premiums
Formally, our model does not involve premiums since N counts claims coming from the
portfolio as a whole rather than from clients who pay premiums.
We can, however, talk about an amount of money c = cλ sufficient to cover claims with
a given probability β; i.e., the amount c for which P(S(λ) ≤ c) ≥ β.
We may view c as an aggregate premium and define the loading coefficient θ by the
relation c = (1 + θ)E{S(λ) }.
Since the normal approximation works in our situation, we can apply it to the deter-
mination of the minimal acceptable θ, following the scheme of the previous chapter and
keeping similar formulas; namely,

θ ≈ qβ √Var{S(λ)} / E{S(λ)},     (5.2.1)

where qβ is the β-quantile of the standard normal distribution.

In view of (2.2)-(2.4), in the case when N is Poisson with parameter λ, the last formula
may be rewritten as

θ ≈ qβ √(m2 λ) / (mλ),

where m2 = E{Xj²}, the second moment of the X's.
Thus, in the compound Poisson case,

θ ≈ qβ √m2 / (m√λ) = qβ √(m² + σ²) / (m√λ) = (qβ/√λ) √(1 + k²),     (5.2.2)

where k = σ/m, the coefficient of variation of the r.v.'s X. All three representations in (5.2.2)
may be useful.
