
Ex 3

This document contains 4 statistical signal processing exercises that are due on December 4th, 2011. The exercises are: 1) Prove a formula for Gaussian Fisher information when y is normally distributed. 2) Derive the maximum likelihood estimator of a multinomial distribution parameter vector using independent realizations and a Lagrange multiplier constraint. 3) Derive the maximum likelihood estimator of the variance of a normal distribution and show its behavior for large and small sample sizes. 4) Derive the maximum likelihood estimator of the parameter of a uniform distribution and show its non-Gaussian behavior even for large sample sizes.

Uploaded by Alex Oringen
Copyright
© Attribution Non-Commercial (BY-NC)

Statistical Signal Processing

Exercise No. 3, due 12/4/2011

1. Gaussian Fisher Information. Let y ∼ N(µ(θ), Σ(θ)). Prove the useful formula:

\[
[I(\theta)]_{i,j} = \left( \frac{\partial \mu(\theta)}{\partial \theta_i} \right)^T \Sigma^{-1}(\theta) \, \frac{\partial \mu(\theta)}{\partial \theta_j} + \frac{1}{2} \operatorname{Tr} \left\{ \Sigma^{-1}(\theta) \, \frac{\partial \Sigma(\theta)}{\partial \theta_i} \, \Sigma^{-1}(\theta) \, \frac{\partial \Sigma(\theta)}{\partial \theta_j} \right\}
\]
You can use the following identities:

\[
\frac{\partial \log |\Sigma(\theta)|}{\partial \theta_k} = \operatorname{Tr} \left\{ \Sigma^{-1}(\theta) \, \frac{\partial \Sigma(\theta)}{\partial \theta_k} \right\}
\]

\[
\frac{\partial \Sigma^{-1}(\theta)}{\partial \theta_k} = -\Sigma^{-1}(\theta) \, \frac{\partial \Sigma(\theta)}{\partial \theta_k} \, \Sigma^{-1}(\theta)
\]

\[
E\left\{ y^T A y \, y^T B y \right\} = \operatorname{Tr}\{AC\} \operatorname{Tr}\{BC\} + 2 \operatorname{Tr}\{ACBC\}
\]

for symmetric A and B and y ∼ N(0, C).
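As a sanity check (not the requested proof), the formula can be evaluated in a simple scalar case: y ∼ N(µ, σ²) with θ = (µ, σ²), where the Fisher information is known to be diag(1/σ², 1/(2σ⁴)). A minimal NumPy sketch, directly implementing the stated formula:

```python
import numpy as np

def gaussian_fisher(mu_grads, Sigma, Sigma_grads):
    """Evaluate [I(theta)]_{i,j} = dmu_i^T Sigma^{-1} dmu_j
    + 0.5 * Tr{Sigma^{-1} dSigma_i Sigma^{-1} dSigma_j}."""
    Sinv = np.linalg.inv(Sigma)
    k = len(mu_grads)
    I = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            I[i, j] = (mu_grads[i] @ Sinv @ mu_grads[j]
                       + 0.5 * np.trace(Sinv @ Sigma_grads[i]
                                        @ Sinv @ Sigma_grads[j]))
    return I

sigma2 = 2.0
# theta = (mu, sigma2): dmu/dmu = 1, dmu/dsigma2 = 0,
#                       dSigma/dmu = 0, dSigma/dsigma2 = 1
mu_grads = [np.array([1.0]), np.array([0.0])]
Sigma = np.array([[sigma2]])
Sigma_grads = [np.array([[0.0]]), np.array([[1.0]])]
I = gaussian_fisher(mu_grads, Sigma, Sigma_grads)
# Expected: I[0,0] = 1/sigma2, I[1,1] = 1/(2*sigma2^2), off-diagonals 0
print(I)
```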


2. Derive the maximum likelihood estimator of the parameter vector θ of a multinomial distribution with p events using n i.i.d. realizations. Note the constraint \(\sum_{i=1}^{p} \theta_i = 1\) and enforce it using a Lagrange multiplier.
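The Lagrangian approach leads to the standard relative-frequency result θ̂_i = n_i / n, where n_i counts occurrences of event i. A quick Monte-Carlo check of that end result (a Python/NumPy sketch in place of MATLAB; the event probabilities below are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.3, 0.2])   # true parameter, p = 3 events
n = 100_000
counts = rng.multinomial(n, theta)  # event counts n_i over n i.i.d. draws
theta_hat = counts / n              # ML estimate: relative frequencies
print(theta_hat)                    # close to theta; sums to 1 exactly
```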
3. Let y_i for i = 1, ..., N be i.i.d. realizations of an N(0, σ²) random variable. Derive the ML estimator of σ² and show numerically (draw the histogram in MATLAB) that σ̂²_ML behaves like a Gaussian random variable for large N. Also show that it does not behave like a Gaussian for small N.
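A possible numerical sketch in Python/NumPy (in place of MATLAB), assuming the derivation yields the standard result σ̂²_ML = (1/N) Σ y_i² for the known zero-mean case; sample skewness is used here as a rough numerical proxy for the histogram comparison (zero for a Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, trials = 1.0, 20_000

def mc_sigma2_hat(N):
    # each row is one experiment of N samples; ML estimate for known
    # zero mean is the average of squares
    y = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
    return (y ** 2).mean(axis=1)

def skewness(x):
    # sample skewness: ~0 for a Gaussian, clearly positive when skewed
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

skew_small = skewness(mc_sigma2_hat(5))     # strongly skewed (chi-square-like)
skew_large = skewness(mc_sigma2_hat(1000))  # near-Gaussian
print(skew_small, skew_large)
# counts, edges = np.histogram(mc_sigma2_hat(1000), bins=50)  # or plt.hist
```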
4. Let y_i for i = 1, ..., N be i.i.d. realizations of a U(0, θ) random variable. Derive the ML estimator of θ and show numerically (draw the histogram in MATLAB) that θ̂_ML does not behave like a Gaussian random variable for large N.
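A possible numerical sketch in Python/NumPy (in place of MATLAB), assuming the derivation yields the standard result θ̂_ML = max_i y_i; since θ̂_ML ≤ θ always, its error is one-sided and stays skewed no matter how large N is:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, trials, N = 2.0, 20_000, 1000
y = rng.uniform(0.0, theta, size=(trials, N))
theta_hat = y.max(axis=1)       # ML estimate per experiment
err = N * (theta - theta_hat)   # scaled error: nonnegative by construction
# skewness stays far from 0 (an exponential-like, one-sided distribution)
skew = np.mean((err - err.mean()) ** 3) / err.std() ** 3
print(theta_hat.mean(), skew)
# counts, edges = np.histogram(theta_hat, bins=50)  # or plt.hist(theta_hat)
```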
