
Department of Electrical and Computer Engineering

University of Maryland, College Park


Fall 2015 ENEE 660 Homework 2 Solutions Prof. N. Martins & V. Raju

1. Exercise 1.6 from the lecture notes.


Solution.
(a) Using the definition of first derivatives, we can write:
\begin{align*}
\frac{d}{dt}\bigl[A(t)B(t)\bigr] &= \lim_{h\to 0}\frac{A(t+h)B(t+h) - A(t)B(t)}{h}\\
&= \lim_{h\to 0}\frac{A(t+h)B(t+h) - A(t+h)B(t) + A(t+h)B(t) - A(t)B(t)}{h}\\
&= \lim_{h\to 0}\left[A(t+h)\,\frac{B(t+h)-B(t)}{h} + \frac{A(t+h)-A(t)}{h}\,B(t)\right]\\
&= A(t)\frac{dB(t)}{dt} + \frac{dA(t)}{dt}B(t)
\end{align*}
(b) Substituting $B(t) = A^{-1}(t)$ gives the required result:
\begin{align*}
\frac{d}{dt}\bigl[A(t)A^{-1}(t)\bigr] = 0 &= A(t)\frac{dA^{-1}(t)}{dt} + \frac{dA(t)}{dt}A^{-1}(t)\\
\implies \frac{dA^{-1}(t)}{dt} &= -A^{-1}(t)\,\frac{dA(t)}{dt}\,A^{-1}(t)
\end{align*}
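As a quick numerical check (not part of the original solution), the identity in (b) can be verified in Matlab by comparing a finite-difference approximation of $dA^{-1}(t)/dt$ with the closed-form expression; the matrix $A(t)$ below is an arbitrary invertible example chosen for illustration.

% Numerical check of dA^{-1}/dt = -A^{-1} (dA/dt) A^{-1}
t = 0.7; h = 1e-6;
A  = @(s) [1 + s, s^2; sin(s), 2 + cos(s)];    % arbitrary invertible A(t), for illustration
dA = (A(t + h) - A(t - h)) / (2*h);            % central difference of A
dAinv_numeric = (inv(A(t + h)) - inv(A(t - h))) / (2*h);
dAinv_formula = -inv(A(t)) * dA * inv(A(t));
norm(dAinv_numeric - dAinv_formula)            % should be close to zero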
2. Exercise 2.2 from the lecture notes.
Solution.
(a) We are given data points $\{t_i\}$ that comprise the matrix $T$, and the corresponding outputs $y_i$ are generated using Matlab. To approximate the function $f$ by a polynomial of degree $n$, let $A_n$ denote the matrix whose columns are the basis vectors in the measurement space. The basis functions for a polynomial $p_n(t)$ of degree $n$ with coefficients $\{a_0, a_1, \ldots, a_n\}$ are $\{1, t, t^2, \ldots, t^n\}$. Hence,
 
$$p_n(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n = \begin{bmatrix} 1 & t & t^2 & \cdots & t^n \end{bmatrix}\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}$$
Case 1: Approximation using $p_{15}(t)$. We are given 16 data points $t_i$, $i = 1, 2, \ldots, 16$, and outputs $y_i$. $A_{15}$ is constructed by stacking, row-wise, the basis functions evaluated at each $t_i$:
$$A_{15} = \begin{bmatrix} 1 & t_1 & \cdots & t_1^{15} \\ 1 & t_2 & \cdots & t_2^{15} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & t_{16} & \cdots & t_{16}^{15} \end{bmatrix}$$
so that $Y = A_{15}C$, where $C = [a_0 \; a_1 \; \ldots \; a_{15}]'$ is the column vector of coefficients of the 15th-degree polynomial. Since $A_{15}$ is square ($16 \times 16$) and invertible, $C$ can be obtained as $C = A_{15}^{-1}Y$. In this case, the fitting error is zero.

Figure 1: Approximation in the absence of noise: $p_{15}$ (red), $p_2$ (blue) and $y_i$ (green).

Case 2: Approximation using $p_2(t)$. In this case we have fewer basis functions, namely $1, t, t^2$, so $A_n = A_2$ has dimensions $16 \times 3$. Since there are more data points than basis functions, the problem is an overdetermined least-squares problem. The estimate of the coefficients $C = [a_0 \; a_1 \; a_2]'$ is given by $C = (A_2^T A_2)^{-1} A_2^T Y$. Since the system is overdetermined, the fitting error is in general non-zero.
The results of the Matlab exercise are shown in Figure 1.
Figure 2: Approximation with noise: $p_{15}$ (red), $p_2$ (blue) and $y_i$ (green).

(b) In the presence of noise, the polynomial of degree 15 overfits the data points, whereas the second-order polynomial fits the data with larger error while preserving the underlying trend of the data. Hence, even though a higher-order polynomial may yield a lower fitting error for the same estimation problem, the resulting solution may not generalise well. In learning theory, regularization is used to trade off goodness of fit against overfitting. The results of the Matlab exercise are shown in Figure 2.

The Matlab code used to generate the plots is posted online on Canvas.
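That script is not reproduced here; the following is a minimal sketch of the kind of code that could produce fits like those in Figures 1 and 2, assuming 16 equally spaced sample points on $[0, 2]$ and an illustrative noise level (the sample points and noise used in the posted code may differ).

% Sketch: polynomial approximation of f(t) = 0.5*exp(0.8*t) on [0, 2]
T = linspace(0, 2, 16)';            % 16 sample points (spacing assumed)
Y = 0.5*exp(0.8*T);                 % noiseless outputs (part a)
% Y = Y + 0.1*randn(size(T));       % uncomment to add noise (part b); level is illustrative
A15 = T.^(0:15);                    % 16x16 matrix of basis functions 1, t, ..., t^15 (implicit expansion)
A2  = T.^(0:2);                     % 16x3 matrix of basis functions 1, t, t^2
C15 = A15 \ Y;                      % exact interpolation (zero fitting error without noise)
C2  = (A2'*A2) \ (A2'*Y);           % least-squares estimate, equivalently A2 \ Y
tt  = linspace(0, 2, 200)';
plot(tt, (tt.^(0:15))*C15, 'r', tt, (tt.^(0:2))*C2, 'b', T, Y, 'go');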

(c) The squared error $e^2$ is to be minimized over $[0, 2]$:
$$\min \|f(t) - p_2(t)\|_2^2 = \min \int_0^2 |f(t) - p_2(t)|^2\, dt$$
By the projection theorem, the error must be orthogonal to each basis function:
\begin{align*}
\langle f(t) - p_2(t),\, 1 \rangle &= 0\\
\langle f(t) - p_2(t),\, t \rangle &= 0\\
\langle f(t) - p_2(t),\, t^2 \rangle &= 0
\end{align*}
Using the inner product defined in the problem,
$$\langle y_1(t), y_2(t) \rangle = \int_0^2 y_1(\tau)\, y_2(\tau)\, d\tau$$
the orthogonality conditions can be solved as follows, after substituting $f(t) = 0.5e^{0.8t}$:
\begin{align*}
\int_0^2 \bigl(0.5e^{0.8\tau} - a_0 - a_1\tau - a_2\tau^2\bigr)(1)\, d\tau = 0
&\implies 2a_0 + 2a_1 + \tfrac{8}{3}a_2 = \int_0^2 0.5e^{0.8\tau}\, d\tau = 2.47\\
\langle f(t) - p_2(t),\, t \rangle = 0
&\implies 2a_0 + \tfrac{8}{3}a_1 + 4a_2 = \int_0^2 0.5e^{0.8\tau}\,\tau\, d\tau = 3.10\\
\langle f(t) - p_2(t),\, t^2 \rangle = 0
&\implies \tfrac{8}{3}a_0 + 4a_1 + \tfrac{32}{5}a_2 = \int_0^2 0.5e^{0.8\tau}\,\tau^2\, d\tau = 4.63
\end{align*}

The problem now reduces to solving for $a_0$, $a_1$ and $a_2$, giving:
$$a_0 = 0.5353,\qquad a_1 = 0.2032,\qquad a_2 = 0.3727$$

The answer obtained in this case is not necessarily the same as those obtained in parts (a) and (b), since a different inner product has been used to compute, and hence minimize, the error.
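A minimal sketch of how the normal equations above could be set up and solved numerically in Matlab (the use of integral and the variable names are illustrative, not part of the original solution):

% Continuous-time least squares of f(t) = 0.5*exp(0.8*t) on [0, 2] over the basis {1, t, t^2}
f = @(t) 0.5*exp(0.8*t);
G = [2, 2, 8/3; 2, 8/3, 4; 8/3, 4, 32/5];        % Gram matrix <t^i, t^j>, i, j = 0, 1, 2
b = [integral(@(t) f(t),      0, 2);
     integral(@(t) f(t).*t,   0, 2);
     integral(@(t) f(t).*t.^2, 0, 2)];           % right-hand sides <f, t^j>
a = G \ b                                        % a ~ [0.5353; 0.2032; 0.3727]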

3. Exercise 2.3 from the lecture notes.


Solution.
First, notice that the two equations $y_1 = C_1 x + e_1$ and $y_2 = C_2 x + e_2$ can be written as:
$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} x + \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$$
and that the error to be minimised, $e = e_1^T S_1 e_1 + e_2^T S_2 e_2$, can be written as:
$$e = \begin{bmatrix} e_1^T & e_2^T \end{bmatrix} \begin{bmatrix} S_1 & 0 \\ 0 & S_2 \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$$
Denoting $\tilde{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$, $\tilde{C} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}$ and $\tilde{S} = \mathrm{diag}\{S_1, S_2\}$ the block-diagonal matrix of weights on the errors, direct substitution into the weighted least-squares solution gives the estimate $\hat{x}$:
$$\hat{x} = (\tilde{C}^T \tilde{S} \tilde{C})^{-1} \tilde{C}^T \tilde{S} \tilde{y} = (C_1^T S_1 C_1 + C_2^T S_2 C_2)^{-1}(C_1^T S_1 y_1 + C_2^T S_2 y_2)$$
In terms of $\hat{x}_1 = (C_1^T S_1 C_1)^{-1} C_1^T S_1 y_1$, $\hat{x}_2 = (C_2^T S_2 C_2)^{-1} C_2^T S_2 y_2$, and $Q_i = C_i^T S_i C_i$, $i = 1, 2$, $\hat{x}$ can be written as:
$$\hat{x} = (Q_1 + Q_2)^{-1}(Q_1\hat{x}_1 + Q_2\hat{x}_2)$$
The significance of this result is as follows: given the weighted least-squares estimates of $x$ obtained from the measurements through $C_1$ and $C_2$ separately, the minimum weighted square error estimate of $x$ based on both measurement sets can be obtained directly by combining $\hat{x}_1$ and $\hat{x}_2$, weighted by $Q_1$ and $Q_2$ (which are determined by $S_1$ and $S_2$). The existence and uniqueness of the estimate are guaranteed when the matrices $C_i$ and $S_i$, $i = 1, 2$, have full rank.
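A small numerical illustration of this fusion formula; the matrices $C_i$, $S_i$ and the noise below are arbitrary examples, not taken from the problem statement.

% Combining two weighted least-squares estimates of the same x
x  = [1; -2];                                   % true parameter (example)
C1 = [1 0; 0 1; 1 1];  S1 = eye(3);             % first measurement set and weights
C2 = [2 1; 0 3];       S2 = 2*eye(2);           % second measurement set and weights
y1 = C1*x + 0.05*randn(3,1);
y2 = C2*x + 0.05*randn(2,1);
x1 = (C1'*S1*C1) \ (C1'*S1*y1);                 % estimate from the first set alone
x2 = (C2'*S2*C2) \ (C2'*S2*y2);                 % estimate from the second set alone
Q1 = C1'*S1*C1;  Q2 = C2'*S2*C2;
x_fused = (Q1 + Q2) \ (Q1*x1 + Q2*x2);          % combined estimate
x_joint = ([C1;C2]'*blkdiag(S1,S2)*[C1;C2]) \ ([C1;C2]'*blkdiag(S1,S2)*[y1;y2]);
norm(x_fused - x_joint)                         % should be ~0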
4. Exercise 2.4 from the lecture notes.
Solution.

(a) This problem is similar to Exercise 2.3, with $C = [c_1 \; c_2 \; \ldots \; c_k]'$, $S = \mathrm{diag}\{f^{k-1}, f^{k-2}, \ldots, 1\}$ and $Y = [y_1 \; y_2 \; \ldots \; y_k]'$. This gives the estimate $\hat{x}_k$ as:
$$\hat{x}_k = (C^T S C)^{-1} C^T S Y$$
Substituting for each of the matrices and multiplying them out gives the required result:
\begin{align*}
C^T S C &= \sum_{i=1}^{k} f^{k-i}\, c_i^T c_i\\
C^T S Y &= \sum_{i=1}^{k} f^{k-i}\, c_i^T y_i\\
\implies \hat{x}_k &= \frac{\sum_{i=1}^{k} f^{k-i}\, c_i^T y_i}{\sum_{i=1}^{k} f^{k-i}\, c_i^T c_i}
\end{align*}
(b) First, notice that $Q_k$ can be iteratively solved for and obtained to be:
$$Q_k = \sum_{i=1}^{k} f^{k-i}\, c_i^T c_i$$
Consider the right-hand side of the given equality that we need to prove for $\hat{x}_k$:
\begin{align*}
\hat{x}_{k-1} + Q_k^{-1} c_k^T (y_k - c_k \hat{x}_{k-1})
&= \hat{x}_{k-1}\left(1 - Q_k^{-1} c_k^T c_k\right) + Q_k^{-1} c_k^T y_k\\
&= \hat{x}_{k-1}\,\frac{\sum_{i=1}^{k} f^{k-i} c_i^T c_i - c_k^T c_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i}\\
&= \frac{\sum_{i=1}^{k-1} f^{k-1-i} c_i^T y_i}{\sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i}\cdot\frac{f\sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i} + \frac{c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i}\\
&= \frac{\sum_{i=1}^{k-1} f^{k-i} c_i^T y_i + c_k^T y_k}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i}\\
&= \frac{\sum_{i=1}^{k} f^{k-i} c_i^T y_i}{\sum_{i=1}^{k} f^{k-i} c_i^T c_i}\\
&= \hat{x}_k
\end{align*}
where the third equality uses $\sum_{i=1}^{k} f^{k-i} c_i^T c_i - c_k^T c_k = f\sum_{i=1}^{k-1} f^{k-1-i} c_i^T c_i$ together with the expression for $\hat{x}_{k-1}$ from part (a).
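A minimal sketch of the scalar case, with illustrative values of $f$, $c_i$ and $y_i$, that checks the recursive update against the batch formula of part (a):

% Fading-memory least squares: recursive update vs. batch solution (scalar x)
f = 0.9;  k_max = 50;
c = randn(k_max, 1);                      % measurement coefficients c_i (example values)
y = 2*c + 0.1*randn(k_max, 1);            % noisy measurements of x = 2 (illustrative)
Q = 0;  x_hat = 0;
for k = 1:k_max
    Q     = f*Q + c(k)^2;                 % Q_k = f*Q_{k-1} + c_k^2
    x_hat = x_hat + (c(k)/Q)*(y(k) - c(k)*x_hat);   % recursive update from part (b)
end
w       = f.^(k_max - (1:k_max)');        % weights f^(k-i)
x_batch = sum(w.*c.*y) / sum(w.*c.^2);    % batch formula from part (a)
[x_hat, x_batch]                          % the two estimates agree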
(c) We are given that the gain of the estimator is:
$$g_k = Q_k^{-1} c = \frac{c}{\sum_{i=1}^{k} f^{k-i}\, c^2}$$
Since $0 < f < 1$, the sum in the denominator converges as $k \to \infty$, with $\sum_{i=1}^{k} f^{k-i} \to \frac{1}{1-f}$. Hence, $g_\infty = \frac{1-f}{c}$. As $f$ increases, $g_\infty$ decreases, and as $f$ decreases, $g_\infty$ increases. This is expected, since a smaller fade factor discounts past data more quickly, so the estimator places a larger gain on the most recent measurement, whereas a fade factor close to 1 retains more of the past data and the gain on new measurements is correspondingly smaller.
5. Exercise 3.2 from the lecture notes.
Solution.
The minimum-norm (minimum input energy) solution for $U = [u_0, u_1, \ldots, u_{n-1}]'$ is given by:
$$U = H^T (H H^T)^{-1} \hat{y}$$
where $H = [h_0 \; h_1 \; \ldots \; h_{n-1}]$. Solving for each $u_i$, $i = 0, 1, \ldots, n-1$, gives:
$$u_i = \frac{h_i}{\sum_{j=0}^{n-1} h_j^2}\,\hat{y}$$

(a) Let $u_n = r(\hat{y} - y_n)$. Augmenting the vector $U$ with $u_n$, so that $\tilde{U} = [u_0, u_1, \ldots, u_n]'$, and substituting into the expression from part (i) gives:
$$u_i = \frac{h_i}{\sum_{j=0}^{n-1} h_j^2 + \frac{1}{r}}\,\hat{y}$$
(b) When $r = 0$, the term $\frac{1}{r}$, and hence the denominator in part (a), tends to infinity, so $u_i = 0$ for all $i$. When $r = \infty$, $\frac{1}{r}$ tends to zero and the solution is the same as the one in part (i).
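A minimal sketch verifying that the closed-form expression coincides with the minimum-norm solution computed via pinv; the values of h and $\hat{y}$ below are arbitrary examples.

% Minimum-energy input reaching a target output y_hat through y = H*U
n     = 5;
h     = randn(1, n);                       % row vector H = [h_0 ... h_{n-1}] (example values)
y_hat = 1.7;                               % target output (example)
U_formula = (h' / sum(h.^2)) * y_hat;      % u_i = h_i * y_hat / sum_j h_j^2
U_pinv    = pinv(h) * y_hat;               % minimum-norm solution of h*U = y_hat
[h*U_formula, norm(U_formula - U_pinv)]    % output reaches y_hat; the two solutions coincide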
