
VE564

Homework 2

Haorui LI (022370910035)

2023.6.23

Problem 1
1.
From the BLUE theory, we get that $\hat{A} = \frac{s^T C^{-1} x}{s^T C^{-1} s}$, and its minimal variance is $\operatorname{var}(\hat{A}) = \frac{1}{s^T C^{-1} s}$.
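
As a quick numerical illustration (not part of the original derivation), the sketch below evaluates the BLUE formula and its variance for an arbitrary $s$, $C$, and $x$; all numbers are made up for demonstration.

```python
import numpy as np

# Hypothetical example values (not from the assignment): A = 2, N = 4
rng = np.random.default_rng(0)
s = np.array([1.0, 0.5, -0.3, 2.0])      # known signal vector
C = np.diag([1.0, 2.0, 0.5, 1.5])        # noise covariance
A_true = 2.0
x = A_true * s + rng.multivariate_normal(np.zeros(4), C)

Cinv = np.linalg.inv(C)
A_hat = (s @ Cinv @ x) / (s @ Cinv @ s)  # BLUE: s^T C^{-1} x / (s^T C^{-1} s)
var_A = 1.0 / (s @ Cinv @ s)             # minimal variance 1 / (s^T C^{-1} s)
print(A_hat, var_A)
```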
2.
If $s$ is an eigenvector of $C$, we know that $C^{-1} s = \frac{1}{\lambda} s$, where $\lambda$ is the eigenvalue corresponding to $s$. Consequently,
$$\hat{A} = \frac{s^T C^{-1} x}{s^T C^{-1} s} = \frac{\frac{1}{\lambda}\, s^T x}{\frac{1}{\lambda}\, s^T s} = \frac{s^T x}{s^T s},$$
and the variance is $\operatorname{var}(\hat{A}) = \frac{1}{s^T C^{-1} s} = \frac{\lambda}{s^T s}$.
3.  
1 0
If C =  , we know that N = 2 and the eigenvector s = [s[0], s[1]]T . And we know the correct estimator
 
0 1
is:
sT C−1 x s[0]x[0] + s[1]x[1]
 = T −1
= (1)
s C s s[0]2 + s[1]2
1 1
var(Â) = = (2)
sT C−1 s s[0]2 + s[1]2
While we use the Ĉ instead of C, we can get that:

sT Ĉ−1 x s[0]x[0] + α1 s[1]x[1]


Â∗ = = (3)
sT Ĉ−1 s s[0]2 + α1 s[1]2

1 1
var(Â∗ ) = = (4)
sT Ĉ−1 s s[0]2 + α1 s[1]2
1
s[0]2 A+s[1]2 A s[0]2 A+ α s[1]2 A
We can see that E(Â) = s[0]2 +s[1]2
= A and E(Â∗ ) = 1
s[0]2 + α s[1]2
= A. So even if we wrongly use the
Ĉ, we still have unbiased estimator. While when α → 0 the variance will decrease, when α → 1 the variance
will go to correct situation, when α → ∞ the variance will increase.
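
A short sketch of the mismatched case, assuming (as the formulas above suggest) that $\hat{C} = \operatorname{diag}(1, \alpha)$; it evaluates $\hat{A}^*$ and the variance expression (4) for a few values of $\alpha$, with made-up data.

```python
import numpy as np

s = np.array([1.0, 2.0])                  # example signal vector (made up)
x = np.array([3.1, 5.8])                  # example data (made up)

def blue_with_cov(Cinv):
    """Estimator and variance produced by the BLUE formula with a given C^{-1}."""
    A_hat = (s @ Cinv @ x) / (s @ Cinv @ s)
    var   = 1.0 / (s @ Cinv @ s)
    return A_hat, var

for alpha in [0.1, 1.0, 10.0]:
    Chat_inv = np.diag([1.0, 1.0 / alpha])  # assumed mismatch: C_hat = diag(1, alpha)
    # expression (4) shrinks as alpha -> 0 and grows as alpha -> infinity
    print(alpha, blue_with_cov(Chat_inv))
```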

Problem 2
1. The PDF is
$$p(x;\theta) = \frac{1}{\left(\frac{2\pi}{\theta}\right)^{N/2}} \exp\left[-\frac{\theta}{2}\sum_{n=0}^{N-1} x[n]^2\right].$$
Taking the derivative of the log-likelihood with respect to $\theta$, we get $\frac{\partial \ln p(x;\theta)}{\partial \theta} = \frac{N}{2\theta} - \frac{1}{2}\sum_{n=0}^{N-1} x[n]^2$. Setting it equal to 0, the MLE is
$$\hat{\theta} = \frac{N}{\sum_{n=0}^{N-1} x[n]^2}.$$
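
A minimal Monte Carlo check of this MLE (my own illustration, with made-up $N$ and $\theta$): generate zero-mean Gaussian samples with precision $\theta$ (variance $1/\theta$) and evaluate $\hat{\theta} = N / \sum x[n]^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.5                          # assumed precision (variance = 1/theta)
N = 10000
x = rng.normal(0.0, np.sqrt(1.0 / theta_true), size=N)

theta_hat = N / np.sum(x**2)              # MLE: theta_hat = N / sum(x[n]^2)
print(theta_hat)                          # close to 2.5 for large N
```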


2. The log-likelihood function is
$$\ln p(x;\alpha) = -\frac{N}{2}\ln 2\pi - \frac{N}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2 = -\frac{N}{2}\ln 2\pi - \frac{N}{2}\ln \sigma^2 - \frac{1}{2}\sum_{n=0}^{N-1}\left(\frac{x[n]}{\sigma} - \frac{A}{\sigma}\right)^2 = -\frac{N}{2}\ln 2\pi - \frac{N}{2}\ln \sigma^2 - \frac{1}{2}\sum_{n=0}^{N-1}\left(\frac{x[n]}{\sigma} \pm \sqrt{\alpha}\right)^2.$$
Taking the derivative with respect to $\alpha$, we get $\frac{\partial \ln p(x;\alpha)}{\partial \alpha} = \mp\frac{1}{2}\alpha^{-1/2}\sum_{n=0}^{N-1}\left(\frac{x[n]}{\sigma} \pm \sqrt{\alpha}\right)$. Setting it equal to 0, the estimator is
$$\hat{\alpha} = \frac{1}{N^2\sigma^2}\left(\sum_{n=0}^{N-1} x[n]\right)^2.$$
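
A quick numerical check (illustration only): the derivation above implies $A/\sigma = \mp\sqrt{\alpha}$, i.e. $\alpha = A^2/\sigma^2$, so the sketch below uses made-up $A$ and $\sigma$ and compares $\hat{\alpha}$ with that value.

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma = 1.5, 0.8                       # made-up values; alpha = A^2 / sigma^2 is implied above
N = 10000
x = rng.normal(A, sigma, size=N)

alpha_hat = (np.sum(x))**2 / (N**2 * sigma**2)   # hat{alpha} = (sum x[n])^2 / (N^2 sigma^2)
print(alpha_hat, (A / sigma)**2)          # estimate vs. implied true value
```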

Problem 3
From the general signal model $s = H\theta$, we know that
$$\hat{s}_{k+1} = \sum_{i=1}^{k+1} \theta_i h_i = \hat{s}_k + \hat{\theta}_{k+1} h_{k+1} = \hat{s}_k + \alpha h'_{k+1}.$$
Consequently, $h'_{k+1} = h_{k+1}$ and $\alpha = h_{k+1}'^{\,T} x \,\big(h_{k+1}'^{\,T} h_{k+1}'\big)^{-1}$. Because $H_k^T h_{k+1} = 0$, we know that
$$\frac{h_{k+1}^T P_k^{\perp} x}{h_{k+1}^T h_{k+1}} = \frac{\alpha\, h_{k+1}^T h_{k+1}}{h_{k+1}^T h_{k+1}} = \alpha.$$
As a result,
$$\hat{\theta}_{k+1} = \begin{bmatrix} \hat{\theta}_k \\ \alpha \end{bmatrix}.$$
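
A small numpy sketch (my own illustration, random made-up data) of the statement above: when the new column $h_{k+1}$ is orthogonal to the columns of $H_k$, the full least-squares solution keeps $\hat{\theta}_k$ unchanged and simply appends $\alpha = h_{k+1}^T x / (h_{k+1}^T h_{k+1})$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 8, 2
Hk = rng.normal(size=(N, k))                   # existing model matrix H_k
h_new = rng.normal(size=N)
h_new -= Hk @ np.linalg.lstsq(Hk, h_new, rcond=None)[0]   # force H_k^T h_{k+1} = 0
x = rng.normal(size=N)

theta_k, *_ = np.linalg.lstsq(Hk, x, rcond=None)                               # LS with k columns
theta_k1, *_ = np.linalg.lstsq(np.column_stack([Hk, h_new]), x, rcond=None)    # LS with k+1 columns
alpha = (h_new @ x) / (h_new @ h_new)

print(theta_k1)                                # first k entries match theta_k, last entry matches alpha
print(theta_k, alpha)
```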

Problem 4
Using the Levenberg-Marquardt algorithm, we obtain the parameters $\theta = [\theta_1\ \ \theta_2]^T = [5.214\ \ {-0.234}]^T$, and the resulting fit is shown below:

Figure 1: Levenberg-Marquardt algorithm fitting
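
The model and data from the assignment are not reproduced here, so the sketch below only illustrates the fitting procedure with SciPy's Levenberg-Marquardt solver on an assumed exponential model $s(t) = \theta_1 e^{\theta_2 t}$; the model form and all numbers are placeholders, not the assignment's.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, theta1, theta2):
    # Placeholder model; the actual model in the assignment may differ.
    return theta1 * np.exp(theta2 * t)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
y = model(t, 5.0, -0.25) + 0.1 * rng.normal(size=t.size)   # synthetic data

# method='lm' selects the Levenberg-Marquardt algorithm (unconstrained problems only)
theta_hat, cov = curve_fit(model, t, y, p0=[1.0, -0.1], method='lm')
print(theta_hat)
```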

Problem 5
The parameter to be estimated is $\theta = [A\ \ B]^T$, and from the prior knowledge we have:

$$\mu_\theta = [A_0\ \ B_0]^T \tag{5}$$

 
$$C_\theta = \begin{bmatrix} \sigma_A^2 & \rho \\ \rho & \sigma_B^2 \end{bmatrix} \tag{6}$$


$$C_w = \sigma^2 I \tag{7}$$

 
$$H = \begin{bmatrix} 1 & -M \\ 1 & -M+1 \\ \vdots & \vdots \\ 1 & M \end{bmatrix} \tag{8}$$

According to the Bayesian general linear model, we know that:

$$E(\theta|x) = \mu_\theta + (C_\theta^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1} (x - H\mu_\theta) \tag{9}$$

$$C_{\theta|x} = (C_\theta^{-1} + H^T C_w^{-1} H)^{-1} \tag{10}$$
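
A numpy sketch (illustration only, with made-up prior and noise values) that builds $H$ for a given $M$ as in (8) and evaluates the posterior mean (9) and covariance (10) directly.

```python
import numpy as np

M = 5
n = np.arange(-M, M + 1)
H = np.column_stack([np.ones_like(n, dtype=float), n.astype(float)])  # (2M+1) x 2 matrix from (8)

# Made-up prior and noise parameters
mu_theta = np.array([1.0, 0.5])           # [A0, B0]
C_theta = np.array([[0.4, 0.0],
                    [0.0, 0.2]])          # prior covariance (rho = 0 here)
sigma2 = 0.3
Cw_inv = np.eye(2 * M + 1) / sigma2

rng = np.random.default_rng(5)
x = H @ np.array([1.2, 0.4]) + rng.normal(0.0, np.sqrt(sigma2), size=2 * M + 1)

C_post = np.linalg.inv(np.linalg.inv(C_theta) + H.T @ Cw_inv @ H)      # (10)
theta_post = mu_theta + C_post @ H.T @ Cw_inv @ (x - H @ mu_theta)     # (9)
print(theta_post, np.diag(C_post))        # posterior mean and the Bmse values
```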
When $\rho = 0$, we get:
$$H^T C_w^{-1} H = \frac{1}{\sigma^2}\begin{bmatrix} 2M+1 & 0 \\ 0 & \sum n^2 \end{bmatrix} \tag{11}$$
 
 −1
1 2M +1  1 +12M +1 0
 σ2 + σ2 0 
 σ2 σ2
(C−1 T
C−1 −1
 
A A
θ +H w H) =


 =


 (12)
2
P
1 n 1P
0 2 + σ2
 0 n2

σB 1
+ 2
σ2 σ
B
 
P
1  x[n] − (2M + 1)A0 
HT C−1
w (x − Hµ θ ) =   (13)
σ2  P P 2 
nx[n] − n B0

As a result, we get the estimators:


$$\hat{A} = A_0 + \frac{\frac{1}{\sigma^2}\left[\sum_{n=-M}^{M} x[n] - (2M+1)A_0\right]}{\frac{1}{\sigma_A^2} + \frac{2M+1}{\sigma^2}} \tag{14}$$

$$\hat{B} = B_0 + \frac{\frac{1}{\sigma^2}\left[\sum_{n=-M}^{M} n x[n] - \sum_{n=-M}^{M} n^2\, B_0\right]}{\frac{1}{\sigma_B^2} + \frac{\sum_{n=-M}^{M} n^2}{\sigma^2}} \tag{15}$$

The corresponding Bayesian MMSEs are:


$$\operatorname{Bmse}(\hat{A}) = \left[(C_\theta^{-1} + H^T C_w^{-1} H)^{-1}\right]_{11} = \left(\frac{1}{\sigma_A^2} + \frac{2M+1}{\sigma^2}\right)^{-1} \tag{16}$$

$$\operatorname{Bmse}(\hat{B}) = \left[(C_\theta^{-1} + H^T C_w^{-1} H)^{-1}\right]_{22} = \left(\frac{1}{\sigma_B^2} + \frac{\sum_{n=-M}^{M} n^2}{\sigma^2}\right)^{-1} \tag{17}$$
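
Continuing the sketch after (10) (same made-up values, $\rho = 0$), the scalar expressions (14) through (17) can be evaluated directly and compared with the matrix form.

```python
import numpy as np

M, sigma2 = 5, 0.3
sigA2, sigB2 = 0.4, 0.2                   # made-up prior variances (rho = 0)
A0, B0 = 1.0, 0.5
n = np.arange(-M, M + 1)

rng = np.random.default_rng(5)
x = (1.2 + 0.4 * n) + rng.normal(0.0, np.sqrt(sigma2), size=n.size)

# Closed-form estimators (14)-(15)
A_hat = A0 + (np.sum(x) - (2 * M + 1) * A0) / sigma2 / (1 / sigA2 + (2 * M + 1) / sigma2)
B_hat = B0 + (np.sum(n * x) - np.sum(n**2) * B0) / sigma2 / (1 / sigB2 + np.sum(n**2) / sigma2)

# Bayesian MSEs (16)-(17)
bmse_A = 1.0 / (1 / sigA2 + (2 * M + 1) / sigma2)
bmse_B = 1.0 / (1 / sigB2 + np.sum(n**2) / sigma2)
print(A_hat, B_hat, bmse_A, bmse_B)
```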


Problem 6
1. As for $\langle x, y\rangle = E(xy)$, it is a valid inner product.
1) For property 1, $E(x^2) = (E(x))^2 + \operatorname{var}(x) \ge 0$, and it equals 0 if and only if both $E(x) = 0$ and $\operatorname{var}(x) = 0$, i.e., $x$ is the constant 0.
2) For property 2, since $xy = yx$, we indeed have $E(xy) = E(yx)$.
3) For property 3, $E((c_1 x_1 + c_2 x_2)y) = E(c_1 x_1 y + c_2 x_2 y) = c_1 E(x_1 y) + c_2 E(x_2 y)$.
2. As for $\operatorname{cov}(x, y)$, it is not a valid inner product, because property 1 fails: $\operatorname{cov}(x, x) = E(x^2) - (E(x))^2 = \operatorname{var}(x)$, which is 0 for any constant $x$, not only for $x = 0$.
