Practice Problem Set 3
1. (a) Consider N data samples x(n), n = 0, 1, . . . , N − 1, with PDF
p(x(n); λ) = λ exp(−λx(n)) for x(n) ≥ 0, and 0 otherwise,
where the x(n) are assumed to be independent. Determine the Maximum Likelihood Estimator (MLE) for λ using the N data samples.
(b) Now consider a Bayesian estimate of λ. Consider the following conditional PDF
p(x(n)|λ) = λ exp(−λx(n)) for x(n) ≥ 0, and 0 otherwise.
The x(n) are independent when conditioned on λ. The parameter λ is itself assumed to be a random
variable with the following prior distribution
p(λ) = a exp(−λa) for λ ≥ 0, and 0 otherwise,
with variance Var(λ) = 1/a².
i. Determine the MAP estimator of λ using N data samples.
ii. Explain the relation between the MAP estimator and the MLE derived in Q1(a), in terms of Var(λ) and the number of data samples N.
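A quick numerical cross-check for parts (a) and (b): the sketch below uses the standard closed forms λ̂_MLE = N/Σx(n) and λ̂_MAP = N/(a + Σx(n)), which follow from maximizing the likelihood and the posterior respectively; the values of λ and a are illustrative, not part of the problem. It shows the two estimators approaching each other as N grows, i.e. as the data dominates the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, a = 2.0, 5.0  # illustrative true rate and prior parameter

for N in (10, 100, 10_000):
    x = rng.exponential(scale=1.0 / lam_true, size=N)  # x(n) ~ lam * exp(-lam * x)
    lam_mle = N / x.sum()        # MLE: maximizes lambda^N * exp(-lambda * sum(x))
    lam_map = N / (a + x.sum())  # MAP: the exponential prior adds a to sum(x)
    print(f"N = {N:6d}   MLE = {lam_mle:.4f}   MAP = {lam_map:.4f}")
```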
2. Two unbiased estimators are proposed whose variances satisfy var(θ̂) < var(θ̃). If both estimators
are Gaussian, prove that
P{|θ̂ − θ| > ε} < P{|θ̃ − θ| > ε}
for any ε > 0. This says that the estimator with the smaller variance is to be preferred, since its PDF is
more concentrated about the true value.
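If helpful, the claim can be sanity-checked numerically: for a Gaussian unbiased estimator with standard deviation σ, P{|θ̂ − θ| > ε} = 2(1 − Φ(ε/σ)), which grows with σ. A minimal sketch with illustrative values:

```python
import math

theta, eps = 1.0, 0.5                    # illustrative true value and threshold
for sigma in (0.3, 0.6):                 # std devs of the two unbiased Gaussian estimators
    # P(|est - theta| > eps) = 2*(1 - Phi(eps/sigma)) = erfc(eps / (sigma*sqrt(2)))
    tail = math.erfc(eps / (sigma * math.sqrt(2)))
    print(f"sigma = {sigma}:  P(|error| > {eps}) = {tail:.4f}")
```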
3. Suppose a coin is tossed N times. The probability of getting head is θ (unknown). The prior for θ
is a Beta distribution given by
P(θ) = (1 / B(α, β)) θ^(α−1) (1 − θ)^(β−1)
where
B(α, β) = ∫₀¹ θ^(α−1) (1 − θ)^(β−1) dθ = Γ(α)Γ(β) / Γ(α + β).
Find the estimator for θ.
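Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior after observing k heads in N tosses is Beta(α + k, β + N − k). The problem does not specify which Bayesian estimator is intended, so the sketch below (with illustrative values for α, β, θ and N) evaluates both the posterior mean (MMSE) and the posterior mode (MAP).

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, N = 0.7, 50          # illustrative true head probability and number of tosses
alpha, beta = 2.0, 2.0           # illustrative prior Beta parameters (both > 1)

k = rng.binomial(N, theta_true)  # number of heads observed in N tosses
# Posterior is Beta(alpha + k, beta + N - k) by conjugacy.
theta_mmse = (alpha + k) / (alpha + beta + N)           # posterior mean
theta_map = (alpha + k - 1) / (alpha + beta + N - 2)    # posterior mode
print(f"heads = {k}, posterior mean = {theta_mmse:.4f}, posterior mode = {theta_map:.4f}")
```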
4. A company produces 100-ohm resistors. Due to manufacturing tolerances, the manufactured resistors
are in error by e, where e ∼ N(0, 0.0011). Mr. A works in the company as a quality assurance
inspector, monitoring the resistance values of the manufactured resistors. He chooses one resistor from
a batch and measures its resistance using an ohmmeter. The ohmmeter is of poor quality, and he
models its measurement error as an N(0, 1) random variable. He takes N independent measurements.
If Mr. A chooses one resistor, how many ohmmeter measurements are necessary to
ensure that an MMSE estimator of the resistance R yields the correct resistance to 0.1 ohms on the
average? How many measurements would he need if he had no prior knowledge about
the manufacturing tolerances?
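One way to read "correct to 0.1 ohms on the average" is that the root Bayesian MSE (the posterior standard deviation for this Gaussian model) should not exceed 0.1 Ω; that interpretation is an assumption, not part of the problem statement. Under it, the sketch below searches for the smallest N with and without the prior, using the variances stated in the problem.

```python
import math

sigma_R2 = 0.0011   # prior variance of the manufacturing error (as stated)
sigma_v2 = 1.0      # ohmmeter noise variance (as stated)
target = 0.1        # required accuracy, read here as posterior std <= 0.1 ohms

# With the Gaussian prior, the posterior variance after N measurements is
#   1 / (N/sigma_v2 + 1/sigma_R2).
N = 0
while 1.0 / (N / sigma_v2 + 1.0 / sigma_R2) > target**2:
    N += 1
print("with prior knowledge:    N =", N)

# Without the prior, the estimator is the sample mean with variance sigma_v2 / N.
N_no_prior = math.ceil(sigma_v2 / target**2)
print("without prior knowledge: N =", N_no_prior)
```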
5. Consider a parameter A which you want to estimate, and suppose you observe two data samples
x[n] = A·2^n + w[n], n = 0, 1. Here the w[n] are Gaussian random variables with w[0] ∼ N(0, 2), w[1] ∼ N(0, 3),
and covariance Cov(w[0], w[1]) = 2. The prior distribution of the parameter is A ∼ N(2, 4).
(a) Write the probability density function of the vector z = [A x[0] x[1]]^T.
(b) Find the Bayesian MMSE estimator of A and its MSE.
(c) Find the linear MMSE estimator of A and compare it with the estimator found in the earlier part.
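Since A and the noise are jointly Gaussian, the MMSE estimator is the conditional mean and coincides with the LMMSE estimator, which is what part (c) probes. Below is a numerical sketch, assuming the model reads x[n] = A·2^n + w[n] and using an arbitrary example observation; it builds the joint second-order description of z and evaluates the conditional mean and variance.

```python
import numpy as np

# Model as read from the problem: x[n] = A * 2**n + w[n], n = 0, 1,
# with A ~ N(2, 4) independent of w, Cov(w) = [[2, 2], [2, 3]].
mu_A, var_A = 2.0, 4.0
h = np.array([1.0, 2.0])                 # h[n] = 2**n
Cw = np.array([[2.0, 2.0], [2.0, 3.0]])

mu_x = h * mu_A                          # E[x] = h * E[A]
Cxx = var_A * np.outer(h, h) + Cw        # Cov(x)
CAx = var_A * h                          # Cov(A, x)

x_obs = np.array([3.0, 5.0])             # arbitrary example observation
A_hat = mu_A + CAx @ np.linalg.solve(Cxx, x_obs - mu_x)  # conditional mean = MMSE
mse = var_A - CAx @ np.linalg.solve(Cxx, CAx)            # conditional variance = Bayesian MSE
print(f"A_hat = {A_hat:.4f}, MSE = {mse:.4f}")
```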
6. We observe data x[n], n = 0, 1, . . . , N − 1, with PDF p(x[n]|µ) = N(µ, σ²). The x[n]'s are
independent when conditioned on µ. The mean µ has the prior PDF µ ∼ N(µ0, σ0²). Find the MAP
estimator of µ and compare it with the MMSE estimator. What happens when σ0² → 0 and when
σ0² → ∞?
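For this conjugate Gaussian model the posterior of µ is Gaussian, so the MAP and MMSE estimators coincide at the posterior mean, a weighted combination of the sample mean and the prior mean. The sketch below (illustrative values) shows the limiting behaviour as σ0² → 0 (the estimate collapses to µ0) and σ0² → ∞ (the estimate approaches the sample mean).

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma2, N = 3.0, 4.0, 25       # illustrative values
mu0 = 0.0                               # illustrative prior mean

x = rng.normal(mu_true, np.sqrt(sigma2), size=N)
xbar = x.mean()

# Posterior of mu is Gaussian, so MAP and MMSE coincide at the posterior mean:
#   mu_hat = w * xbar + (1 - w) * mu0,  with  w = sigma0^2 / (sigma0^2 + sigma^2 / N).
for sigma0_2 in (1e-6, 1.0, 1e6):       # "-> 0", moderate, "-> infinity"
    w = sigma0_2 / (sigma0_2 + sigma2 / N)
    mu_hat = w * xbar + (1 - w) * mu0
    print(f"sigma0^2 = {sigma0_2:g}:  mu_hat = {mu_hat:.4f}  (xbar = {xbar:.4f})")
```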
7. Consider the data points x[n] = A + w[n], n = 0, 1, . . . , N − 1. Here the w[n] ∼ N(0, σ²) are i.i.d.
and independent of A. The prior PDF of A is A ∼ U[−A0, A0]. Find the linear MMSE (LMMSE)
estimator of A.
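The LMMSE estimator uses only the first two moments of A, i.e. E[A] = 0 and Var(A) = A0²/3 for the uniform prior; the shape of the prior does not enter. A small check with illustrative values that the general vector LMMSE formula reduces to a shrunken sample mean:

```python
import numpy as np

A0, sigma2, N = 3.0, 2.0, 8                 # illustrative values
var_A = A0**2 / 3.0                         # variance of U[-A0, A0]; its mean is 0

ones = np.ones(N)
Cxx = var_A * np.outer(ones, ones) + sigma2 * np.eye(N)   # Cov(x), x = A*1 + w
CAx = var_A * ones                                         # Cov(A, x)

rng = np.random.default_rng(3)
A = rng.uniform(-A0, A0)
x = A + rng.normal(0.0, np.sqrt(sigma2), size=N)

A_lmmse_matrix = CAx @ np.linalg.solve(Cxx, x)             # general LMMSE formula
A_lmmse_scalar = var_A / (var_A + sigma2 / N) * x.mean()   # shrunken sample mean
print(A_lmmse_matrix, A_lmmse_scalar)                      # the two agree
```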
8. Consider a random variable Y with mean µY and variance σY². To estimate Y, we observe another
random variable X with mean µX, variance σX², and covariance Cov(X, Y) = C. We form the estimate
Ŷ = aX + b. Find the constants a and b that minimize the mean square error of this estimator.
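The coefficients should come out to the standard linear-MMSE values a = C/σX² and b = µY − aµX. As a sanity check, the sketch below draws correlated samples with assumed (illustrative) moments and compares those closed-form values against an empirical least-squares fit, which minimizes the sample mean square error.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_X, mu_Y = 1.0, -2.0                       # illustrative moments
var_X, var_Y, C = 4.0, 5.0, 3.0              # need C^2 <= var_X * var_Y

cov = np.array([[var_X, C], [C, var_Y]])
XY = rng.multivariate_normal([mu_X, mu_Y], cov, size=200_000)
X, Y = XY[:, 0], XY[:, 1]

a_closed = C / var_X                         # linear-MMSE slope
b_closed = mu_Y - a_closed * mu_X            # linear-MMSE intercept
a_fit, b_fit = np.polyfit(X, Y, 1)           # empirical minimizer of mean square error
print(f"closed form: a = {a_closed:.3f}, b = {b_closed:.3f}")
print(f"fit:         a = {a_fit:.3f}, b = {b_fit:.3f}")
```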
9. We observe data x, with x ∼ U[1, 2], and want to estimate θ based on the observed x. The posterior
PDF of θ is p(θ|x) = x e^(−θx), θ ≥ 0. Find the Bayesian MMSE estimator for θ and the MSE
of this estimator.
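Since p(θ|x) is an exponential density in θ with rate x, the posterior mean and variance are 1/x and 1/x², which is what the MMSE estimator and the Bayesian MSE computation reduce to. The sketch below verifies both numerically with simple Riemann sums; the integration grids are arbitrary choices.

```python
import numpy as np

# Posterior p(theta|x) = x * exp(-theta*x), theta >= 0: exponential with rate x.
# The MMSE estimator is the posterior mean E[theta|x]; check it for a few x.
theta = np.linspace(0.0, 50.0, 500_001)
d = theta[1] - theta[0]
for x in (1.0, 1.5, 2.0):
    post = x * np.exp(-theta * x)
    post_mean = np.sum(theta * post) * d           # numerical E[theta | x], approx 1/x
    print(f"x = {x}:  E[theta|x] approx {post_mean:.4f}")

# Bayesian MSE = E_x[Var(theta|x)] with x ~ U[1, 2]; Var(theta|x) = 1/x^2 here,
# so the MSE is the average of 1/x^2 over [1, 2].
xs = np.linspace(1.0, 2.0, 500_001)
dx = xs[1] - xs[0]
print("MSE approx", np.sum(1.0 / xs**2) * dx)
```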
10. We observe a random variable X and we want to estimate another random variable Y based on
our observation. The joint probability density function of X and Y is given as:
p(x, y) = 2 for (x, y) ∈ R = {(x, y) : 0 < y < x, 0 < x < 1}, and 0 otherwise.