2018 Exam STAM Tables
Exam STAM
The reading material for Exam STAM includes a variety of textbooks.
Each text has a set of probability distributions that are used in its readings.
For those distributions used in more than one text, the choices of
parameterization may not be the same in all of the books. This may be of
educational value while you study, but could add a layer of uncertainty in
the examination. For this latter reason, we have adopted one set of
parameterizations to be used in examinations. This set will be based
on Appendices A & B of Loss Models: From Data to Decisions by
Klugman, Panjer and Willmot. A slightly revised version of these
appendices is included in this note. A copy of this note will also be distributed
to each candidate at the examination.
Each text also has its own system of dedicated notation and
terminology. Sometimes these may conflict. If alternative meanings could
apply in an examination question, the symbols will be defined.
For Exam STAM, in addition to the abridged table from Loss Models,
sets of values from the standard normal and chi-square distributions
will be available for use in examinations. These are also included in this note.
When using the normal distribution, choose the nearest z-value to find the
probability, or, if the probability is given, choose the nearest probability to
find the z-value. No interpolation should be used.
Example: If the given z-value is 0.759 and you need to find Pr(Z < 0.759) from
the normal distribution table, then choose the probability for z-value = 0.76:
Pr(Z < 0.76) = 0.7764.
The density function for the standard normal distribution is
φ(x) = (1/√(2π)) e^(−x²/2).
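Since only the rounding rule (not the full table) is reproduced here, the rule can be checked against a direct computation of Φ. The sketch below uses the identity Φ(z) = [1 + erf(z/√2)]/2 to mimic a two-decimal table lookup; the function names are illustrative:

```python
import math

def std_normal_cdf(z: float) -> float:
    """Standard normal cdf via the error function: Φ(z) = [1 + erf(z/√2)] / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def table_lookup(z: float) -> float:
    """Mimic the exam rule: round z to the nearest two-decimal table entry
    rather than interpolating, then report the probability to four decimals."""
    return round(std_normal_cdf(round(z, 2)), 4)

# The worked example from the text: z = 0.759 rounds to 0.76.
print(table_lookup(0.759))   # 0.7764
```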
Excerpts from the Appendices to Loss Models: From Data to
Decisions, 3rd edition
Appendix A
An Inventory of Continuous
Distributions
A.1 Introduction
The incomplete gamma function is given by

Γ(α; x) = [1/Γ(α)] ∫_0^x t^(α−1) e^(−t) dt,   α > 0, x > 0,

with

Γ(α) = ∫_0^∞ t^(α−1) e^(−t) dt,   α > 0.

Also, define

G(α; x) = ∫_x^∞ t^(α−1) e^(−t) dt,   x > 0.

At times we will need this integral for nonpositive values of α. Integration by parts produces the relationship

G(α; x) = −(x^α e^(−x))/α + (1/α) G(α + 1; x).

This can be repeated until the first argument of G is α + k, a positive number. Then it can be evaluated from

G(α + k; x) = Γ(α + k)[1 − Γ(α + k; x)].
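The recursion lends itself to a short implementation. The sketch below uses a simple power series for the regularized incomplete gamma (adequate for moderate x, not production-grade) and extends G to nonpositive α exactly as described; `inc_gamma` and `G` are illustrative names:

```python
import math

def inc_gamma(a: float, x: float, terms: int = 200) -> float:
    """Regularized incomplete gamma Γ(a; x) in the notation above, a > 0,
    via the power series γ(a, x) = x^a e^(-x) Σ_n x^n / [a(a+1)···(a+n)].
    A simple sketch -- fine for moderate x, not production-grade."""
    total, term = 0.0, 1.0 / a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)
    return total * math.exp(-x + a * math.log(x)) / math.gamma(a)

def G(a: float, x: float) -> float:
    """G(a; x) = ∫_x^∞ t^(a-1) e^(-t) dt, extended to a <= 0 (a not a
    nonpositive integer) by the integration-by-parts recursion above."""
    if a > 0:
        return math.gamma(a) * (1.0 - inc_gamma(a, x))
    return -(x ** a) * math.exp(-x) / a + G(a + 1.0, x) / a
```

For a = 1 this reduces to G(1; x) = e^(−x), a convenient sanity check.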
The incomplete beta function is given by

β(a, b; x) = [Γ(a + b)/(Γ(a)Γ(b))] ∫_0^x t^(a−1) (1 − t)^(b−1) dt,   a > 0, b > 0, 0 < x < 1.
A.2.2.1 Generalized Pareto–α, θ, τ

f(x) = [Γ(α + τ)/(Γ(α)Γ(τ))] θ^α x^(τ−1)/(x + θ)^(α+τ)
F(x) = β(τ, α; u),   u = x/(x + θ)
E[X^k] = θ^k Γ(τ + k)Γ(α − k)/[Γ(α)Γ(τ)],   −τ < k < α
E[X^k] = θ^k τ(τ + 1)···(τ + k − 1)/[(α − 1)···(α − k)],   if k is an integer
E[(X ∧ x)^k] = θ^k Γ(τ + k)Γ(α − k)/[Γ(α)Γ(τ)] β(τ + k, α − k; u) + x^k [1 − F(x)],   k > −τ
mode = θ(τ − 1)/(α + 1),   τ > 1, else 0
A.2.2.2 Burr–α, θ, γ

f(x) = αγ(x/θ)^γ / (x[1 + (x/θ)^γ]^(α+1))
F(x) = 1 − u^α,   u = 1/[1 + (x/θ)^γ]
E[X^k] = θ^k Γ(1 + k/γ)Γ(α − k/γ)/Γ(α),   −γ < k < αγ
VaR_p(X) = θ[(1 − p)^(−1/α) − 1]^(1/γ)
E[(X ∧ x)^k] = θ^k Γ(1 + k/γ)Γ(α − k/γ)/Γ(α) β(1 + k/γ, α − k/γ; 1 − u) + x^k u^α,   k > −γ
mode = θ[(γ − 1)/(αγ + 1)]^(1/γ),   γ > 1, else 0
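With the cdf and VaR formulas side by side, one can verify numerically that VaR_p inverts F. The parameters below are arbitrary illustrative choices:

```python
def burr_cdf(x, alpha, theta, gamma):
    # F(x) = 1 - u^alpha,  u = 1 / (1 + (x/theta)^gamma)
    u = 1.0 / (1.0 + (x / theta) ** gamma)
    return 1.0 - u ** alpha

def burr_var(p, alpha, theta, gamma):
    # VaR_p(X) = theta * [(1 - p)^(-1/alpha) - 1]^(1/gamma)
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0) ** (1.0 / gamma)

# Round trip with illustrative parameters: F(VaR_p) should recover p.
alpha, theta, gamma = 2.0, 1000.0, 1.5
err = max(abs(burr_cdf(burr_var(p, alpha, theta, gamma), alpha, theta, gamma) - p)
          for p in (0.5, 0.9, 0.99))
```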
A.2.2.3 Inverse Burr–τ, θ, γ

f(x) = τγ(x/θ)^(γτ) / (x[1 + (x/θ)^γ]^(τ+1))
F(x) = u^τ,   u = (x/θ)^γ/[1 + (x/θ)^γ]
E[X^k] = θ^k Γ(τ + k/γ)Γ(1 − k/γ)/Γ(τ),   −τγ < k < γ
VaR_p(X) = θ(p^(−1/τ) − 1)^(−1/γ)
E[(X ∧ x)^k] = θ^k Γ(τ + k/γ)Γ(1 − k/γ)/Γ(τ) β(τ + k/γ, 1 − k/γ; u) + x^k [1 − u^τ],   k > −τγ
mode = θ[(τγ − 1)/(γ + 1)]^(1/γ),   τγ > 1, else 0
A.2.3.3 Loglogistic–γ, θ

f(x) = γ(x/θ)^γ / (x[1 + (x/θ)^γ]²)
F(x) = u,   u = (x/θ)^γ/[1 + (x/θ)^γ]
E[X^k] = θ^k Γ(1 + k/γ)Γ(1 − k/γ),   −γ < k < γ
VaR_p(X) = θ(p^(−1) − 1)^(−1/γ)
E[(X ∧ x)^k] = θ^k Γ(1 + k/γ)Γ(1 − k/γ) β(1 + k/γ, 1 − k/γ; u) + x^k (1 − u),   k > −γ
mode = θ[(γ − 1)/(γ + 1)]^(1/γ),   γ > 1, else 0
A.2.3.4 Paralogistic–α, θ
This is a Burr distribution with γ = α.
f(x) = α²(x/θ)^α / (x[1 + (x/θ)^α]^(α+1))
F(x) = 1 − u^α,   u = 1/[1 + (x/θ)^α]
E[X^k] = θ^k Γ(1 + k/α)Γ(α − k/α)/Γ(α),   −α < k < α²
VaR_p(X) = θ[(1 − p)^(−1/α) − 1]^(1/α)
E[(X ∧ x)^k] = θ^k Γ(1 + k/α)Γ(α − k/α)/Γ(α) β(1 + k/α, α − k/α; 1 − u) + x^k u^α,   k > −α
mode = θ[(α − 1)/(α² + 1)]^(1/α),   α > 1, else 0
A.2.3.5 Inverse paralogistic–τ, θ
This is an inverse Burr distribution with γ = τ.

f(x) = τ²(x/θ)^(τ²) / (x[1 + (x/θ)^τ]^(τ+1))
F(x) = u^τ,   u = (x/θ)^τ/[1 + (x/θ)^τ]
E[X^k] = θ^k Γ(τ + k/τ)Γ(1 − k/τ)/Γ(τ),   −τ² < k < τ
VaR_p(X) = θ(p^(−1/τ) − 1)^(−1/τ)
E[(X ∧ x)^k] = θ^k Γ(τ + k/τ)Γ(1 − k/τ)/Γ(τ) β(τ + k/τ, 1 − k/τ; u) + x^k [1 − u^τ],   k > −τ²
mode = θ(τ − 1)^(1/τ),   τ > 1, else 0
A.3.2.1 Gamma–α, θ

f(x) = (x/θ)^α e^(−x/θ) / (x Γ(α))
F(x) = Γ(α; x/θ)
M(t) = (1 − θt)^(−α),   t < 1/θ
E[X^k] = θ^k Γ(α + k)/Γ(α),   k > −α
E[X^k] = θ^k (α + k − 1)···α,   if k is an integer
E[(X ∧ x)^k] = θ^k Γ(α + k)/Γ(α) Γ(α + k; x/θ) + x^k [1 − Γ(α; x/θ)],   k > −α
             = α(α + 1)···(α + k − 1) θ^k Γ(α + k; x/θ) + x^k [1 − Γ(α; x/θ)],   k an integer
mode = θ(α − 1),   α > 1, else 0
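The limited-moment formula can be checked by simulation when α is a positive integer, since Γ(α; x) then has the elementary form 1 − e^(−x) Σ_{j<α} x^j/j!. The sketch below (seeded Monte Carlo, illustrative parameters) does this for E[X ∧ x]:

```python
import math, random

def inc_gamma_int(a: int, x: float) -> float:
    """Regularized incomplete gamma Γ(a; x) for integer a >= 1."""
    return 1.0 - math.exp(-x) * sum(x ** j / math.factorial(j) for j in range(a))

def gamma_lev(alpha: int, theta: float, x: float) -> float:
    """E[X ∧ x] for gamma(alpha, theta), from the k = 1 limited-moment formula."""
    return (alpha * theta * inc_gamma_int(alpha + 1, x / theta)
            + x * (1.0 - inc_gamma_int(alpha, x / theta)))

# Seeded Monte Carlo check with illustrative parameters:
random.seed(12345)
alpha, theta, cap = 2, 1.0, 1.5
mc = sum(min(random.gammavariate(alpha, theta), cap)
         for _ in range(200_000)) / 200_000
exact = gamma_lev(alpha, theta, cap)   # ≈ 1.2190
```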
A.3.2.2 Inverse gamma–α, θ

f(x) = (θ/x)^α e^(−θ/x) / (x Γ(α))
F(x) = 1 − Γ(α; θ/x)
E[X^k] = θ^k Γ(α − k)/Γ(α),   k < α
E[X^k] = θ^k / [(α − 1)···(α − k)],   if k is an integer
E[(X ∧ x)^k] = θ^k Γ(α − k)/Γ(α) [1 − Γ(α − k; θ/x)] + x^k Γ(α; θ/x)
             = θ^k G(α − k; θ/x)/Γ(α) + x^k Γ(α; θ/x),   all k
mode = θ/(α + 1)
A.3.2.3 Weibull–θ, τ
f(x) = τ(x/θ)^τ e^(−(x/θ)^τ) / x
F(x) = 1 − e^(−(x/θ)^τ)
E[X^k] = θ^k Γ(1 + k/τ),   k > −τ
VaR_p(X) = θ[−ln(1 − p)]^(1/τ)
E[(X ∧ x)^k] = θ^k Γ(1 + k/τ) Γ[1 + k/τ; (x/θ)^τ] + x^k e^(−(x/θ)^τ),   k > −τ
mode = θ[(τ − 1)/τ]^(1/τ),   τ > 1, else 0
A.3.2.4 Inverse Weibull–θ, τ

f(x) = τ(θ/x)^τ e^(−(θ/x)^τ) / x
F(x) = e^(−(θ/x)^τ)
E[X^k] = θ^k Γ(1 − k/τ),   k < τ
VaR_p(X) = θ(−ln p)^(−1/τ)
E[(X ∧ x)^k] = θ^k Γ(1 − k/τ){1 − Γ[1 − k/τ; (θ/x)^τ]} + x^k [1 − e^(−(θ/x)^τ)],   all k
             = θ^k G[1 − k/τ; (θ/x)^τ] + x^k [1 − e^(−(θ/x)^τ)]
mode = θ[τ/(τ + 1)]^(1/τ)
A.3.3.1 Exponential–θ

f(x) = e^(−x/θ) / θ
F(x) = 1 − e^(−x/θ)
M(t) = (1 − θt)^(−1),   t < 1/θ
E[X^k] = θ^k Γ(k + 1),   k > −1
E[X^k] = θ^k k!,   if k is an integer
VaR_p(X) = −θ ln(1 − p)
TVaR_p(X) = −θ ln(1 − p) + θ
E[X ∧ x] = θ(1 − e^(−x/θ))
E[(X ∧ x)^k] = θ^k Γ(k + 1) Γ(k + 1; x/θ) + x^k e^(−x/θ),   k > −1
             = θ^k k! Γ(k + 1; x/θ) + x^k e^(−x/θ),   k an integer
mode = 0
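The TVaR formula says the mean excess above any VaR level is the constant θ (the memoryless property). This can be checked by averaging tail quantiles numerically; the parameter and level below are illustrative:

```python
import math

theta, p = 500.0, 0.95
var_p = -theta * math.log(1.0 - p)
tvar_p = var_p + theta    # mean excess above any level is the constant theta

# Check against TVaR_p = (1/(1-p)) * integral of VaR_s over s in (p, 1),
# approximated with a midpoint sum over n equal subintervals:
n = 100_000
avg = sum(-theta * math.log(1.0 - (p + (1.0 - p) * (i + 0.5) / n))
          for i in range(n)) / n
```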
A.3.3.2 Inverse exponential–θ

f(x) = θ e^(−θ/x) / x²
F(x) = e^(−θ/x)
E[X^k] = θ^k Γ(1 − k),   k < 1
VaR_p(X) = θ(−ln p)^(−1)
E[(X ∧ x)^k] = θ^k G(1 − k; θ/x) + x^k (1 − e^(−θ/x)),   all k
mode = θ/2
A.5.1.1 Lognormal–μ, σ

f(x) = exp(−z²/2)/(xσ√(2π)) = φ(z)/(σx),   z = (ln x − μ)/σ
F(x) = Φ(z)
E[X^k] = exp(kμ + k²σ²/2)
E[(X ∧ x)^k] = exp(kμ + k²σ²/2) Φ[(ln x − μ − kσ²)/σ] + x^k [1 − F(x)]
mode = e^(μ − σ²)
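The limited moments use only Φ, so they are easy to compute with the error function. As x → ∞, E[(X ∧ x)^k] → E[X^k], which gives a quick consistency check; the sketch below uses illustrative parameters and names:

```python
import math

def Phi(z: float) -> float:
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_lev(mu: float, sigma: float, x: float, k: int = 1) -> float:
    """E[(X ∧ x)^k] for a lognormal(mu, sigma) amount, from the table formula."""
    Fx = Phi((math.log(x) - mu) / sigma)
    return (math.exp(k * mu + 0.5 * k * k * sigma * sigma)
            * Phi((math.log(x) - mu - k * sigma * sigma) / sigma)
            + x ** k * (1.0 - Fx))

# With illustrative mu = 0, sigma = 1, the limited mean approaches
# E[X] = exp(1/2) as the limit x grows.
full_mean = math.exp(0.5)
```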
A.5.1.2 Inverse Gaussian–μ, θ

f(x) = [θ/(2πx³)]^(1/2) exp[−θz²/(2x)],   z = (x − μ)/μ
F(x) = Φ[z(θ/x)^(1/2)] + exp(2θ/μ) Φ[−y(θ/x)^(1/2)],   y = (x + μ)/μ
M(t) = exp[(θ/μ)(1 − (1 − 2tμ²/θ)^(1/2))],   t < θ/(2μ²)
E[X] = μ,   Var[X] = μ³/θ
E[X ∧ x] = x − μz Φ[z(θ/x)^(1/2)] − μy exp(2θ/μ) Φ[−y(θ/x)^(1/2)]
A.5.1.4 Single-parameter Pareto–α, θ

f(x) = αθ^α / x^(α+1),   x > θ
F(x) = 1 − (θ/x)^α,   x > θ
VaR_p(X) = θ(1 − p)^(−1/α)
TVaR_p(X) = αθ(1 − p)^(−1/α)/(α − 1),   α > 1
E[X^k] = αθ^k/(α − k),   k < α
E[(X ∧ x)^k] = αθ^k/(α − k) − kθ^α/[(α − k)x^(α−k)],   x ≥ θ
mode = θ
Note: Although there appear to be two parameters, only α is a true parameter. The value of θ must be
set in advance.
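Since E[X ∧ x] = ∫_0^x S(t) dt, the k = 1 limited-moment formula can be checked against a numeric integral of the survival function; the parameters below are illustrative:

```python
alpha, theta, x = 3.0, 100.0, 250.0   # illustrative parameters, x >= theta

# k = 1 limited expected value from the table formula:
lev = (alpha * theta / (alpha - 1.0)
       - theta ** alpha / ((alpha - 1.0) * x ** (alpha - 1.0)))

# E[X ∧ x] = ∫_0^x S(t) dt, with S(t) = 1 below theta and (theta/t)^alpha above;
# midpoint-rule approximation:
n = 250_000
h = x / n
num = 0.0
for i in range(n):
    t = h * (i + 0.5)
    num += h * (1.0 if t < theta else (theta / t) ** alpha)
```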
A.6.1.1 Generalized beta–a, b, θ, τ

f(x) = [Γ(a + b)/(Γ(a)Γ(b))] u^a (1 − u)^(b−1) (τ/x),   0 < x < θ,   u = (x/θ)^τ
F(x) = β(a, b; u)
E[X^k] = θ^k Γ(a + b)Γ(a + k/τ)/[Γ(a)Γ(a + b + k/τ)],   k > −aτ
E[(X ∧ x)^k] = θ^k Γ(a + b)Γ(a + k/τ)/[Γ(a)Γ(a + b + k/τ)] β(a + k/τ, b; u) + x^k [1 − β(a, b; u)]
A.6.1.2 beta–a, b, θ
f(x) = [Γ(a + b)/(Γ(a)Γ(b))] u^a (1 − u)^(b−1) (1/x),   0 < x < θ,   u = x/θ
F(x) = β(a, b; u)
E[X^k] = θ^k Γ(a + b)Γ(a + k)/[Γ(a)Γ(a + b + k)],   k > −a
E[X^k] = θ^k a(a + 1)···(a + k − 1)/[(a + b)(a + b + 1)···(a + b + k − 1)],   if k is an integer
E[(X ∧ x)^k] = θ^k a(a + 1)···(a + k − 1)/[(a + b)(a + b + 1)···(a + b + k − 1)] β(a + k, b; u)
               + x^k [1 − β(a, b; u)]
Appendix B
An Inventory of Discrete
Distributions
B.1 Introduction
The 16 models fall into three classes. The divisions are based on the algorithm by which the probabilities are
computed. For some of the more familiar distributions these formulas will look different from the ones you
may have learned, but they produce the same probabilities. After each name, the parameters are given. All
parameters are positive unless otherwise indicated. In all cases, pk is the probability of observing k losses.
For finding moments, the most convenient form is to give the factorial moments. The jth factorial
moment is μ_(j) = E[N(N − 1)···(N − j + 1)]. We have E[N] = μ_(1) and Var(N) = μ_(2) + μ_(1) − μ_(1)².
The estimators presented are not intended to be useful final estimators, but rather to provide
starting values for maximizing the likelihood (or other) function. For determining starting values, the
following quantities are used, where n_k is the observed frequency at k (if, for the last entry, n_k represents
the number of observations at k or more, assume it was at exactly k) and n is the sample size:

μ̂ = (1/n) Σ_{k=1}^∞ k n_k,    σ̂² = (1/n) Σ_{k=1}^∞ k² n_k − μ̂².
When the method of moments is used to determine the starting value, a circumflex (e.g., λ̂) is used. For
any other method, a tilde (e.g., λ̃) is used. When the starting value formulas do not provide admissible
parameter values, a truly crude guess is to set the product of all λ and β parameters equal to the sample
mean and set all other parameters equal to 1. If there are two λ and/or β parameters, an easy choice is to
set each to the square root of the sample mean.
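These quantities are simple to compute from a frequency table. The sketch below uses made-up counts and applies the moment-matching β̃ = σ̂²/μ̂ − 1, r̃ = μ̂²/(σ̂² − μ̂) given later for negative binomial starting values; all numbers are illustrative:

```python
# Hypothetical frequency table: counts[k] = n_k policies with k claims
# (made-up numbers for illustration only).
counts = {0: 861, 1: 121, 2: 13, 3: 3, 4: 1, 5: 1}
n = sum(counts.values())

mu_hat = sum(k * nk for k, nk in counts.items()) / n
sigma2_hat = sum(k * k * nk for k, nk in counts.items()) / n - mu_hat ** 2

# Moment-matching starting values for a negative binomial fit
# (admissible here because sigma2_hat > mu_hat):
beta_start = sigma2_hat / mu_hat - 1.0
r_start = mu_hat ** 2 / (sigma2_hat - mu_hat)
```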
The last item presented is the probability generating function,
P (z) = E[z N ].
B.2.1.1 Poisson–λ

p_0 = e^(−λ),   a = 0,   b = λ
p_k = e^(−λ) λ^k / k!
E[N] = λ,   Var[N] = λ
P(z) = e^(λ(z−1))
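Each member lists its values of a and b because the (a, b, 0) class is defined by the recursion p_k = (a + b/k) p_(k−1). A sketch for the Poisson case, showing the recursion reproduces the pmf directly:

```python
import math

lam = 2.0          # illustrative Poisson mean
a, b = 0.0, lam    # the (a, b) values listed above

# Recursion p_k = (a + b/k) p_{k-1}, started from p0 = e^{-lam}:
p = [math.exp(-lam)]
for k in range(1, 10):
    p.append((a + b / k) * p[-1])

# Direct pmf for comparison:
direct = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(10)]
```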
B.2.1.2 Geometric–β
p_0 = 1/(1 + β),   a = β/(1 + β),   b = 0
p_k = β^k/(1 + β)^(k+1)
E[N] = β,   Var[N] = β(1 + β)
P(z) = [1 − β(z − 1)]^(−1)
B.2.1.3 Binomial–q, m

p_0 = (1 − q)^m,   a = −q/(1 − q),   b = (m + 1)q/(1 − q)
p_k = (m choose k) q^k (1 − q)^(m−k),   k = 0, 1, . . . , m
E[N] = mq,   Var[N] = mq(1 − q)
P(z) = [1 + q(z − 1)]^m
B.2.1.4 Negative binomial–β, r

p_0 = (1 + β)^(−r),   a = β/(1 + β),   b = (r − 1)β/(1 + β)
p_k = r(r + 1)···(r + k − 1) β^k / [k! (1 + β)^(r+k)]
E[N] = rβ,   Var[N] = rβ(1 + β)
P(z) = [1 − β(z − 1)]^(−r)
B.3 The (a, b, 1) class

There are two sub-classes of this class. When discussing their members, we often refer to the “corresponding”
member of the (a, b, 0) class. This refers to the member of that class with the same values for a and b. The
notation p_k will continue to be used for probabilities for the corresponding (a, b, 0) distribution.
B.3.1.1 Zero-truncated Poisson–λ

p_1^T = λ/(e^λ − 1),   a = 0,   b = λ
p_k^T = λ^k / [k! (e^λ − 1)]
E[N] = λ/(1 − e^(−λ)),   Var[N] = λ[1 − (λ + 1)e^(−λ)]/(1 − e^(−λ))²
λ̃ = ln(nμ̂/n_1)
P(z) = (e^(λz) − 1)/(e^λ − 1)
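Zero-truncated probabilities relate to the corresponding (a, b, 0) member by p_k^T = p_k/(1 − p_0). A sketch for the Poisson case with an illustrative λ:

```python
import math

lam = 1.3                      # illustrative parameter
p0 = math.exp(-lam)            # p0 of the corresponding Poisson

def pois_pk(k: int) -> float:
    """pmf of the corresponding (a, b, 0) Poisson member."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zt_pois_pk(k: int) -> float:
    """Zero-truncated Poisson pmf from the table, k >= 1."""
    return lam ** k / (math.factorial(k) * math.expm1(lam))

# Largest discrepancy between the table formula and p_k / (1 - p0):
diff = max(abs(zt_pois_pk(k) - pois_pk(k) / (1.0 - p0)) for k in range(1, 8))
```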
B.3.1.2 Zero-truncated geometric–β

p_1^T = 1/(1 + β),   a = β/(1 + β),   b = 0
p_k^T = β^(k−1)/(1 + β)^k
E[N] = 1 + β,   Var[N] = β(1 + β)
β̂ = μ̂ − 1
P(z) = {[1 − β(z − 1)]^(−1) − (1 + β)^(−1)} / [1 − (1 + β)^(−1)]
B.3.1.3 Logarithmic–β
p_1^T = β/[(1 + β) ln(1 + β)],   a = β/(1 + β),   b = −β/(1 + β)
p_k^T = β^k/[k(1 + β)^k ln(1 + β)]
E[N] = β/ln(1 + β),   Var[N] = β[1 + β − β/ln(1 + β)]/ln(1 + β)
β̃ = nμ̂/n_1 − 1   or   2(μ̂ − 1)/μ̂
P(z) = 1 − ln[1 − β(z − 1)]/ln(1 + β)
B.3.1.4 Zero-truncated negative binomial–β, r

p_1^T = rβ/[(1 + β)^(r+1) − (1 + β)],   a = β/(1 + β),   b = (r − 1)β/(1 + β)
p_k^T = [r(r + 1)···(r + k − 1)/(k![(1 + β)^r − 1])] [β/(1 + β)]^k
E[N] = rβ/[1 − (1 + β)^(−r)]
Var[N] = rβ[(1 + β) − (1 + β + rβ)(1 + β)^(−r)]/[1 − (1 + β)^(−r)]²
β̃ = σ̂²/μ̂ − 1,   r̃ = μ̂²/(σ̂² − μ̂)
P(z) = {[1 − β(z − 1)]^(−r) − (1 + β)^(−r)} / [1 − (1 + β)^(−r)]
This distribution is sometimes called the extended truncated negative binomial distribution because the
parameter r can extend below 0.