hw2 Sol
(b) In the presence of noise, the polynomial of degree 15 overfits the data points, whereas the second-order polynomial fits the data points with larger error while preserving the underlying trend of the data. Hence, even though a higher-order polynomial may achieve a lower error on the same estimation problem, the solution obtained may not generalise well. In learning theory, regularization is used to trade off goodness of fit against overfitting. The results of the Matlab exercise are shown in Figure 2.
The Matlab code used to generate the plots is posted on Canvas.
The estimated coefficients of the second-order fit are $a_0 = 0.5353$, $a_1 = 0.2032$, $a_2 = 0.3727$.
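For reference, a minimal Matlab sketch of this kind of comparison; the data-generation model, noise level, and sample size below are assumptions for illustration, not the assignment's actual script (which is the one on Canvas):

```matlab
% Sketch: compare a 2nd-order and a 15th-order polynomial fit to noisy data.
% The "true" quadratic trend, noise level, and sample size are assumptions.
x = linspace(0, 2, 21)';                  % sample points on [0, 2]
y = 0.5 + 0.2*x + 0.4*x.^2 ...            % assumed underlying quadratic trend
    + 0.1*randn(size(x));                 % additive Gaussian noise

p2  = polyfit(x, y, 2);                   % second-order fit
p15 = polyfit(x, y, 15);                  % degree-15 fit (overfits the noise;
                                          % polyfit warns it is badly conditioned)
xf = linspace(0, 2, 400)';
plot(x, y, 'o', xf, polyval(p2, xf), '-', xf, polyval(p15, xf), '--');
legend('data', 'degree 2', 'degree 15');
```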
Consider the RHS of the given equality that we need to prove for $\hat{x}_k$, where, with forgetting factor $f$,
\[
\hat{x}_k = \frac{\sum_{i=1}^{k} f^{k-i} c_i^T y_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i},
\qquad
Q_k = \sum_{i=1}^{k} f^{k-i} c_i^T c_i .
\]
Then
\begin{align*}
\hat{x}_{k-1} + Q_k^{-1} c_k^T (y_k - c_k \hat{x}_{k-1})
&= \hat{x}_{k-1}\left(1 - Q_k^{-1} c_k^T c_k\right) + Q_k^{-1} c_k^T y_k \\
&= \hat{x}_{k-1}\left(1 - \frac{c_k^T c_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i}\right) + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} \\
&= \hat{x}_{k-1}\,\frac{\sum_{i=1}^{k} f^{k-i} c_i^T c_i - c_k^T c_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} \\
&= \frac{\sum_{i=1}^{k-1} f^{k-1-i} c_i^T y_i}{\sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i} \cdot \frac{f \sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} \\
&= \frac{\sum_{i=1}^{k-1} f^{k-i} c_i^T y_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} \\
&= \frac{\sum_{i=1}^{k} f^{k-i} c_i^T y_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} \\
&= \hat{x}_k ,
\end{align*}
where the fourth equality uses $\sum_{i=1}^{k} f^{k-i} c_i^T c_i - c_k^T c_k = f \sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i$ together with the definition of $\hat{x}_{k-1}$.
(c) We are given that the gain of the estimator is given as:
\[
g_k = Q_k^{-1} c = \frac{c}{\sum_{i=1}^{k} f^{k-i} c^2} .
\]
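As a sanity check, a minimal Matlab sketch of the scalar case with constant $c$, comparing the recursive update with gain $g_k$ against the batch exponentially weighted estimate; the measurement values, $f$, and $c$ below are made-up:

```matlab
% Sketch: verify xhat(k) = xhat(k-1) + g_k*(y_k - c*xhat(k-1)) matches the
% batch exponentially weighted least-squares estimate. Data are made up.
f = 0.9;  c = 2.0;                       % forgetting factor, constant c
y = [1.9  2.2  2.0  2.1  1.8];           % assumed measurements

xhat = 0;  Q = 0;
for k = 1:numel(y)
    Q    = f*Q + c^2;                    % Q_k = sum_i f^(k-i) c^2
    gk   = c / Q;                        % gain g_k = Q_k^{-1} c
    xhat = xhat + gk*(y(k) - c*xhat);    % recursive update
end

% Batch form: sum_i f^(k-i) c y_i / sum_i f^(k-i) c^2
k = numel(y);  w = f.^(k - (1:k));
xbatch = sum(w .* c .* y) / (c^2 * sum(w));
fprintf('recursive = %.6f, batch = %.6f\n', xhat, xbatch);
```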
From part (i),
\[
u_i = \frac{h_i}{\sum_{j=0}^{n-1} h_j^2}\,\hat{y} .
\]
(a) Let $u_n = \sqrt{r}\,(\hat{y} - y_n)$. Augmenting this with the vector $U$, so that $\tilde{U} = [u_0, u_1, \ldots, u_n]'$, and substituting in the expression in part (i) gives
\[
u_i = \frac{h_i}{\sum_{j=0}^{n-1} h_j^2 + \frac{1}{r}}\,\hat{y} .
\]