Chapter 4: Multinomial Learning
Assignment 6
Presented to
Meritorious Professor Dr. Aqil Burni
Head of Actuarial Sciences
Institute of Business Management
Multinomial Learning
Consider the generalization of the Bernoulli distribution where, instead of two states, the outcome of a random
event is one of K mutually exclusive and exhaustive states (for example, classes), each of which occurs with
probability p_i, where Σ_{i=1}^{K} p_i = 1. Let x_1, x_2, . . . , x_K be the indicator variables, where x_i is 1 if the
outcome is state i and 0 otherwise.
X = { x^t }_t where x^t ~ p(x)
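As a minimal sketch of this setup (function and variable names are illustrative, not from the source), a K-state outcome can be drawn and encoded as a one-hot indicator vector:

```python
import random

def draw_multinomial(p, rng=random):
    """Draw one K-state outcome; return it as a one-hot indicator vector.

    p: list of K probabilities p_i, assumed to sum to 1.
    Exactly one x_i is 1 (the state that occurred); the rest are 0.
    """
    u = rng.random()          # uniform draw in [0, 1)
    cumulative = 0.0
    K = len(p)
    x = [0] * K
    for i, pi in enumerate(p):
        cumulative += pi
        if u < cumulative:
            x[i] = 1
            return x
    x[K - 1] = 1              # guard against floating-point round-off
    return x

# A sample X = {x^t}: each draw sets exactly one indicator.
sample = [draw_multinomial([0.2, 0.5, 0.3]) for _ in range(5)]
```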
Parametric estimation:
Assume a form for p(x|θ) and estimate θ, its sufficient statistics, using X,
e.g., N(μ, σ²) where θ = {μ, σ²}
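For the Gaussian example, the MLE of θ = {μ, σ²} has a closed form: the sample mean and the sample variance (dividing by N, not N−1). A short sketch, with illustrative names:

```python
def gaussian_mle(X):
    """MLE of theta = {mu, sigma^2} for a N(mu, sigma^2) model, given sample X."""
    N = len(X)
    mu = sum(X) / N                          # MLE of mu: the sample mean
    var = sum((x - mu) ** 2 for x in X) / N  # MLE of sigma^2: divides by N, not N-1
    return mu, var

mu, var = gaussian_mle([2.0, 4.0, 6.0])  # mu = 4.0, var = 8/3
```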
Likelihood of θ given the sample X:
l(θ|X) = p(X|θ) = Π_t p(x^t|θ)
Log likelihood:
L(θ|X) = log l(θ|X) = Σ_t log p(x^t|θ)
Maximum likelihood estimator (MLE):
θ* = argmax_θ L(θ|X)
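When no closed form is at hand, the argmax can be approximated by evaluating L(θ|X) = Σ_t log p(x^t|θ) over a grid of candidate θ values. A sketch for the mean of a unit-variance Gaussian (all names and the grid range are illustrative assumptions):

```python
import math

def log_likelihood(theta, X):
    """L(theta|X) = sum_t log p(x^t|theta) for a N(theta, 1) model."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2 for x in X)

def mle_by_grid_search(X, grid):
    """theta* = argmax_theta L(theta|X), approximated over a finite grid."""
    return max(grid, key=lambda theta: log_likelihood(theta, X))

X = [1.0, 2.0, 3.0]
grid = [i / 100 for i in range(-500, 501)]  # theta in [-5, 5], step 0.01
theta_star = mle_by_grid_search(X, grid)    # lands on the sample mean, 2.0
```

Taking the log turns the product over the sample into a sum, which is both numerically stabler and easier to differentiate; since log is monotone, the argmax is unchanged.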
Bernoulli: two states, failure/success, x ∈ {0, 1}
P(x) = p_o^x (1 − p_o)^(1−x)
L(p_o|X) = log Π_t p_o^{x^t} (1 − p_o)^(1−x^t)
MLE: p̂_o = Σ_t x^t / N
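The Bernoulli MLE above is just the fraction of successes in the sample. A one-line sketch (names are illustrative):

```python
def bernoulli_mle(X):
    """MLE of p_o from a binary sample X: p_hat = sum_t x^t / N."""
    return sum(X) / len(X)

p_hat = bernoulli_mle([1, 0, 1, 1, 0])  # 3 successes in 5 trials -> 0.6
```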
Multinomial: K > 2 states, x_i ∈ {0, 1}
P(x_1, x_2, ..., x_K) = Π_i p_i^{x_i}
L(p_1, p_2, ..., p_K|X) = log Π_t Π_i p_i^{x_i^t}
MLE: p̂_i = Σ_t x_i^t / N
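Analogously, the multinomial MLE of each p_i is the fraction of draws in which state i occurred. A sketch, assuming the sample is a list of one-hot indicator vectors as defined earlier (names are illustrative):

```python
def multinomial_mle(X):
    """MLE p_hat_i = sum_t x_i^t / N, from a sample X of K-dim one-hot vectors."""
    N = len(X)
    K = len(X[0])
    return [sum(x[i] for x in X) / N for i in range(K)]

# Four draws over K = 3 states: state 2 occurred twice, states 1 and 3 once each.
X = [[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]]
p_hat = multinomial_mle(X)  # [0.25, 0.5, 0.25]
```

For K = 2 this reduces to the Bernoulli estimate, and the estimated probabilities always sum to 1 because each x^t contributes exactly one count.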