
Probability 2 - Notes 6

The Trinomial Distribution


Consider a sequence of n independent trials of an experiment. The binomial distribution arises if each trial can result in 2 outcomes, success or failure, with fixed probability of success p at each trial. If X counts the number of successes, then X ~ Binomial(n, p).
Now suppose that at each trial there are 3 possibilities, say success, failure, or neither of the two, with corresponding probabilities $p$, $\theta$, $1 - p - \theta$, which are the same for all trials. If we write 1 for success, 0 for failure, and $-1$ for neither, then the outcome of n trials can be described as a sequence of n numbers $\omega = (i_1, i_2, \ldots, i_n)$, where each $i_j$ takes values 1, 0, or $-1$. Obviously, $P(i_j = 1) = p$, $P(i_j = 0) = \theta$, $P(i_j = -1) = 1 - p - \theta$.
Definition. Let X be the number of trials where 1 occurs, and Y be the number of trials where 0 occurs. The joint distribution of the pair (X, Y) is called the trinomial distribution.
The following statement provides us with its joint p.m.f.
Theorem. The joint p.m.f. for (X, Y) is given by
$$f_{X,Y}(k, l) = P(X = k, Y = l) = \frac{n!}{k!\,l!\,(n-k-l)!}\, p^k \theta^l (1-p-\theta)^{n-k-l},$$
where $k, l \ge 0$ and $k + l \le n$.
Proof. The sample space consists of all sequences $\omega$ of length n described above. If a specific sequence $\omega$ has k successes (1s) and l failures (0s), then $P(\omega) = p^k \theta^l (1-p-\theta)^{n-k-l}$. There are
$$\binom{n}{k}\binom{n-k}{l} = \frac{n!}{k!\,l!\,(n-k-l)!}$$
different sequences with k successes (1s) and l failures (0s). Hence $P(X = k, Y = l) = \frac{n!}{k!\,l!\,(n-k-l)!}\, p^k \theta^l (1-p-\theta)^{n-k-l}$. $\square$
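As a quick numerical sanity check of the theorem, the short Python sketch below evaluates the p.m.f. and confirms that it sums to 1 over the support $k + l \le n$ (the function name trinomial_pmf and the values n = 6, p = 0.2, $\theta$ = 0.5 are arbitrary illustrations, not part of the statement):

```python
from math import comb

def trinomial_pmf(k, l, n, p, theta):
    """P(X = k, Y = l): uses the factored form C(n, k) * C(n - k, l)
    from the proof, which equals n! / (k! l! (n - k - l)!)."""
    if k < 0 or l < 0 or k + l > n:
        return 0.0
    return comb(n, k) * comb(n - k, l) * p**k * theta**l * (1 - p - theta)**(n - k - l)

# Illustrative parameters (arbitrary choices).
n, p, theta = 6, 0.2, 0.5
total = sum(trinomial_pmf(k, l, n, p, theta)
            for k in range(n + 1) for l in range(n - k + 1))
print(total)  # ~1.0, since the p.m.f. sums to 1 over k + l <= n
```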
The name of the distribution comes from the trinomial expansion
$$(a+b+c)^n = (a+(b+c))^n = \sum_{k=0}^{n} \binom{n}{k} a^k (b+c)^{n-k} = \sum_{k=0}^{n} \sum_{l=0}^{n-k} \binom{n}{k}\binom{n-k}{l}\, a^k b^l c^{n-k-l} = \sum_{k=0}^{n} \sum_{l=0}^{n-k} \frac{n!}{k!\,l!\,(n-k-l)!}\, a^k b^l c^{n-k-l}.$$
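The identity is easy to verify numerically; a minimal Python check with arbitrary illustrative values of a, b, c, n:

```python
from math import comb

# Compare (a + b + c)**n with the double sum from the expansion.
a, b, c, n = 1.5, 2.0, 0.25, 7
lhs = (a + b + c)**n
rhs = sum(comb(n, k) * comb(n - k, l) * a**k * b**l * c**(n - k - l)
          for k in range(n + 1) for l in range(n - k + 1))
print(lhs, rhs)  # both ~10428.43, equal up to floating-point rounding
```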
Properties of the trinomial distribution
1) The marginal distributions of X and Y are just X ~ Binomial(n, p) and Y ~ Binomial(n, $\theta$). This follows from the fact that X is the number of successes in n independent trials with p being the probability of success in each trial. A similar argument works for Y.
Note that therefore $E[X] = np$, $E[Y] = n\theta$ and $E[Y^2] = \mathrm{Var}(Y) + (E[Y])^2 = n\theta(1-\theta) + n^2\theta^2$.
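Both claims can be checked directly from the joint p.m.f.; in the sketch below (same illustrative parameters as before), the marginal of X obtained by summing over l matches the Binomial(n, p) p.m.f., and the moments of Y match the stated formulas:

```python
from math import comb

def trinomial_pmf(k, l, n, p, theta):
    if k < 0 or l < 0 or k + l > n:
        return 0.0
    return comb(n, k) * comb(n - k, l) * p**k * theta**l * (1 - p - theta)**(n - k - l)

n, p, theta = 6, 0.2, 0.5   # arbitrary illustrative parameters

# Marginal of X: sum the joint p.m.f. over l, compare with Binomial(n, p).
for k in range(n + 1):
    marginal = sum(trinomial_pmf(k, l, n, p, theta) for l in range(n - k + 1))
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    assert abs(marginal - binom) < 1e-12

# Moments of Y agree with n*theta and n*theta*(1-theta) + (n*theta)**2.
EY = sum(l * trinomial_pmf(k, l, n, p, theta)
         for k in range(n + 1) for l in range(n - k + 1))
EY2 = sum(l**2 * trinomial_pmf(k, l, n, p, theta)
          for k in range(n + 1) for l in range(n - k + 1))
print(EY, n * theta)                                  # both 3.0 (up to rounding)
print(EY2, n * theta * (1 - theta) + (n * theta)**2)  # both 10.5 (up to rounding)
```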
2) If Y = l, then the conditional distribution of X given Y = l is Binomial$\left(n-l, \frac{p}{1-\theta}\right)$.
Proof.
$$P(X = k \mid Y = l) = \frac{P(X = k, Y = l)}{P(Y = l)} = \frac{\frac{n!}{k!\,l!\,(n-k-l)!}\, p^k \theta^l (1-p-\theta)^{n-k-l}}{\frac{n!}{l!\,(n-l)!}\, \theta^l (1-\theta)^{n-l}} = \binom{n-l}{k} \left(\frac{p}{1-\theta}\right)^{k} \left(1 - \frac{p}{1-\theta}\right)^{n-l-k}$$
for $k = 0, 1, \ldots, n-l$. Hence $(X \mid Y = l) \sim \text{Binomial}\left(n-l, \frac{p}{1-\theta}\right)$. $\square$
This is intuitively obvious. Consider those trials for which failure (or 0) did not occur. There are $n-l$ such trials, for each of which the probability that 1 occurs is actually the conditional probability of 1 given that 0 has not occurred, i.e. $\frac{p}{1-\theta}$. So you have the standard binomial set-up.
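A numerical check of property 2), conditioning on an arbitrary illustrative value l = 2:

```python
from math import comb

def trinomial_pmf(k, l, n, p, theta):
    if k < 0 or l < 0 or k + l > n:
        return 0.0
    return comb(n, k) * comb(n - k, l) * p**k * theta**l * (1 - p - theta)**(n - k - l)

n, p, theta, l = 6, 0.2, 0.5, 2   # arbitrary illustrative parameters; condition on Y = 2

pY = comb(n, l) * theta**l * (1 - theta)**(n - l)  # P(Y = l), since Y ~ Binomial(n, theta)
q = p / (1 - theta)                                # success probability of the conditional law
for k in range(n - l + 1):
    conditional = trinomial_pmf(k, l, n, p, theta) / pY
    binom = comb(n - l, k) * q**k * (1 - q)**(n - l - k)
    assert abs(conditional - binom) < 1e-12
print("X | Y = 2 is Binomial(4, 0.4)")  # here n - l = 4 and p / (1 - theta) = 0.4
```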
3) We shall now use the results on conditional distributions (Notes 5) and the above properties to find Cov(X,Y) and the coefficient of correlation $\rho(X,Y)$.
We proved that $E[XY] = E[Y\,E[X|Y]]$ (see the last page of Notes 5). According to property 2), $E[X \mid Y = l] = (n-l)\frac{p}{1-\theta}$ and thus $E[X \mid Y] = (n-Y)\frac{p}{1-\theta}$. Hence
$$E[XY] = E\left[Y(n-Y)\frac{p}{1-\theta}\right] = \frac{p}{1-\theta}\, E(nY - Y^2) = \frac{p}{1-\theta}\left[n^2\theta - n\theta(1-\theta) - n^2\theta^2\right] = \frac{p}{1-\theta}\left[n(n-1)\theta(1-\theta)\right] = n(n-1)p\theta.$$
Therefore $\mathrm{Cov}(X,Y) = E[XY] - E[X]E[Y] = n(n-1)p\theta - n^2 p\theta = -np\theta$ and hence
$$\rho(X,Y) = \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} = \frac{-np\theta}{\sqrt{n^2\, p(1-p)\,\theta(1-\theta)}} = -\left[\frac{p\theta}{(1-p)(1-\theta)}\right]^{1/2}.$$
Note that if $p + \theta = 1$ then $Y = n - X$ and there is an exact linear relation between Y and X. In this case it is easily seen that $\rho(X,Y) = -1$.
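The closed forms for Cov(X,Y) and $\rho(X,Y)$ can again be verified from the joint p.m.f. (illustrative parameters as before):

```python
from math import comb, sqrt

def trinomial_pmf(k, l, n, p, theta):
    if k < 0 or l < 0 or k + l > n:
        return 0.0
    return comb(n, k) * comb(n - k, l) * p**k * theta**l * (1 - p - theta)**(n - k - l)

n, p, theta = 6, 0.2, 0.5   # arbitrary illustrative parameters
support = [(k, l) for k in range(n + 1) for l in range(n - k + 1)]

# Cov(X,Y) = E[XY] - E[X]E[Y], with E[X] = n*p and E[Y] = n*theta.
EXY = sum(k * l * trinomial_pmf(k, l, n, p, theta) for k, l in support)
cov = EXY - (n * p) * (n * theta)
rho = cov / sqrt(n * p * (1 - p) * n * theta * (1 - theta))

print(cov, -n * p * theta)                              # both -0.6
print(rho, -sqrt(p * theta / ((1 - p) * (1 - theta))))  # both -0.5
```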
Definition of the multinomial distribution
Now suppose that there are k outcomes possible at each of the n independent trials. Denote the outcomes $A_1, A_2, \ldots, A_k$ and the corresponding probabilities $p_1, \ldots, p_k$, where $\sum_{j=1}^{k} p_j = 1$. Let $X_j$ count the number of times $A_j$ occurs. Then
$$P(X_1 = x_1, \ldots, X_{k-1} = x_{k-1}) = \frac{n!}{x_1!\, x_2! \cdots x_{k-1}!\, \left(n - \sum_{j=1}^{k-1} x_j\right)!}\; p_1^{x_1} p_2^{x_2} \cdots p_{k-1}^{x_{k-1}}\, p_k^{\,n - \sum_{j=1}^{k-1} x_j},$$
where $x_1, x_2, \ldots, x_{k-1}$ are non-negative integers with $\sum_{j=1}^{k-1} x_j \le n$.
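A direct implementation of this p.m.f. is straightforward; for convenience the sketch below takes all k counts $x_1, \ldots, x_k$ as input, with $x_k$ playing the role of $n - \sum_{j=1}^{k-1} x_j$ in the formula above (the function name and the numeric values are illustrative only):

```python
from math import factorial, prod

def multinomial_pmf(x, p):
    """P(X_1 = x[0], ..., X_k = x[k-1]) for counts x with n = sum(x)."""
    n = sum(x)
    coef = factorial(n) // prod(factorial(xi) for xi in x)  # n! / (x_1! ... x_k!)
    return coef * prod(pi**xi for pi, xi in zip(p, x))

# Illustrative check with k = 3 (this is exactly the trinomial case):
# n = 6 trials, probabilities (0.2, 0.5, 0.3), counts (1, 2, 3).
print(multinomial_pmf((1, 2, 3), (0.2, 0.5, 0.3)))  # 0.081 = 60 * 0.2 * 0.25 * 0.027
```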