FloresAlejandro Exercise1

(1) The Poisson distribution is normalized, as the sum of probabilities Pn across all possible values of n sums to 1. (2) The first moment or expected value μ of the Poisson distribution is equal to λ. (3) The variance σ2 of the Poisson distribution is also equal to λ. Both the mean and variance are therefore equal to the single parameter λ of the Poisson distribution.


031051S Exercise 1

Alejandro Flores ([email protected])


September 8, 2020

1. Consider the discrete Poisson distribution:

\[
P_n = e^{-\lambda}\,\frac{\lambda^n}{n!}, \qquad n = 0, 1, 2, 3, \dots \quad \text{and} \quad \lambda > 0.
\]

(a) Show that this distribution is normalized: $\sum_{n=0}^{\infty} P_n = 1$.

(b) What is the first moment $\mu$?

(c) What is the variance $\sigma^2$?

(a)

\begin{align}
\sum_{n=0}^{\infty} P_n &= \sum_{n=0}^{\infty} e^{-\lambda}\frac{\lambda^n}{n!} \tag{1}\\
&= e^{-\lambda}\sum_{n=0}^{\infty}\frac{\lambda^n}{n!} \tag{2}
\end{align}

The sum in (2) is a well-known series:

\begin{align}
\sum_{n=0}^{\infty}\frac{x^n}{n!} = e^x \tag{3}
\end{align}

Using (3) in (2), we get:

\begin{align}
\sum_{n=0}^{\infty} P_n &= e^{-\lambda}e^{\lambda} \tag{4}\\
&= 1 \tag{5}
\end{align}

(b)

\begin{align}
\mu = \mathrm{E}[n] &= \sum_{n=0}^{\infty} n P_n \tag{6}\\
&= \sum_{n=0}^{\infty} n\, e^{-\lambda}\frac{\lambda^n}{n!} \tag{7}
\end{align}

Considering the convention 0! = 1, it is easy to see that the first term (n = 0) of the sum in (7) is 0, so, without losing any information, we start the sum at n = 1:

\begin{align}
\mu = \mathrm{E}[n] &= \sum_{n=1}^{\infty} n\, e^{-\lambda}\frac{\lambda^n}{n!} \tag{8}\\
&= e^{-\lambda}\sum_{n=1}^{\infty} n\,\frac{\lambda^n}{n(n-1)!} \tag{9}\\
&= e^{-\lambda}\sum_{n=1}^{\infty} \frac{\lambda^{n-1}\lambda}{(n-1)!} \tag{10}\\
&= \lambda e^{-\lambda}\sum_{n=1}^{\infty} \frac{\lambda^{n-1}}{(n-1)!} \tag{11}
\end{align}

We perform a change of index k = n − 1 in (11) and get:

\begin{align}
\mu = \mathrm{E}[n] = \lambda e^{-\lambda}\sum_{k=0}^{\infty} \frac{\lambda^k}{k!} \tag{12}
\end{align}

Using (3) in (12):

\begin{align}
\mu = \mathrm{E}[n] &= \lambda e^{-\lambda} e^{\lambda} \tag{13}\\
&= \lambda \tag{14}
\end{align}

(c) In the following proof we use arguments similar to those for the mean: if the first term of a sum is 0, we start the sum one element later, and we change indices where helpful.

\begin{align}
\sigma^2 = \mathrm{E}\!\left[(n-\mu)^2\right] &= \mathrm{E}[n^2] - \left(\mathrm{E}[n]\right)^2 \tag{15}\\
&= \sum_{n=0}^{\infty} n^2 P_n - \mu^2 \tag{16}\\
&= \sum_{n=1}^{\infty} n^2\, e^{-\lambda}\frac{\lambda^n}{n!} - \lambda^2 \tag{17}\\
&= e^{-\lambda}\sum_{n=1}^{\infty} n\,n\,\frac{\lambda^{n-1}\lambda}{(n-1)!\,n} - \lambda^2 \tag{18}\\
&= e^{-\lambda}\lambda\sum_{n=1}^{\infty} (n-1+1)\,\frac{\lambda^{n-1}}{(n-1)!} - \lambda^2 \tag{19}\\
&= e^{-\lambda}\lambda\sum_{k=0}^{\infty} (k+1)\,\frac{\lambda^k}{k!} - \lambda^2 \tag{20}\\
&= e^{-\lambda}\lambda\left(\sum_{k=0}^{\infty} k\,\frac{\lambda^k}{k!} + \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\right) - \lambda^2 \tag{21}\\
&= e^{-\lambda}\lambda\left(\sum_{k=1}^{\infty} k\,\frac{\lambda^{k-1}\lambda}{(k-1)!\,k} + \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\right) - \lambda^2 \tag{22}\\
&= e^{-\lambda}\lambda\left(\lambda\sum_{l=0}^{\infty} \frac{\lambda^l}{l!} + \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\right) - \lambda^2 \tag{23}
\end{align}

We use the expression in (3) and get:

\begin{align}
\sigma^2 = \mathrm{E}\!\left[(n-\mu)^2\right] &= e^{-\lambda}\lambda\left(\lambda e^{\lambda} + e^{\lambda}\right) - \lambda^2 \tag{24}\\
&= e^{\lambda}e^{-\lambda}\,\lambda(\lambda + 1) - \lambda^2 \tag{25}\\
&= \lambda(\lambda + 1) - \lambda^2 \tag{26}\\
&= \lambda^2 + \lambda - \lambda^2 \tag{27}\\
&= \lambda \tag{28}
\end{align}
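The three results above (normalization, mean λ, variance λ) can be sanity-checked numerically. The snippet below is an illustrative addition, not part of the original solution; the value of λ is arbitrary and the infinite sums are truncated at 100 terms, which is far beyond the point where the factorially decaying terms become negligible.

```python
import math

lam = 2.5  # arbitrary rate parameter for the check

# P_n = e^{-lam} lam^n / n!, truncated at n = 99 (terms decay factorially)
terms = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(100)]

total = sum(terms)                               # normalization: ~1
mean = sum(n * p for n, p in enumerate(terms))   # first moment: ~lam
second = sum(n**2 * p for n, p in enumerate(terms))
var = second - mean**2                           # variance: ~lam

print(total, mean, var)
```

All three printed values agree with the closed-form results (1, λ, λ) to within floating-point truncation error.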

2. Let Xn be an IID sequence of Gaussian random variables with zero mean and variance σ², and let Yn be

\[
Y_n = \frac{X_n + X_{n-1}}{2}
\]

(a) What are the mean and covariance of Yn?

(b) Is Yn a Gaussian random variable?

(a) For the mean, we have:

\begin{align}
m_Y = \mathrm{E}[Y_n] &= \mathrm{E}\!\left[\frac{X_n + X_{n-1}}{2}\right] \tag{29}\\
&= \frac{1}{2}\,\mathrm{E}\left[X_n + X_{n-1}\right] \tag{30}
\end{align}

Given the linearity of the expectation, we have:

\begin{align}
m_Y = \mathrm{E}[Y_n] = \frac{1}{2}\left(\mathrm{E}[X_n] + \mathrm{E}[X_{n-1}]\right) \tag{31}
\end{align}

Since Xn is IID, the mean is time-invariant, and it is also equal to 0 (zero mean):

\begin{align}
m_Y = \mathrm{E}[Y_n] = \frac{1}{2}(0 + 0) = 0 \tag{32}
\end{align}

For the covariance, we have:

\begin{align}
\sigma_Y^2 &= \mathrm{E}\!\left[(Y_n - m_Y)^2\right] \tag{33}\\
&= \mathrm{E}\!\left[(Y_n - 0)^2\right] \tag{34}\\
&= \mathrm{E}\!\left[\left(\frac{X_n + X_{n-1}}{2}\right)^{\!2}\,\right] \tag{35}\\
&= \frac{1}{4}\,\mathrm{E}\!\left[(X_n + X_{n-1})^2\right] \tag{36}\\
&= \frac{1}{4}\,\mathrm{E}\!\left[X_n^2 + 2X_nX_{n-1} + X_{n-1}^2\right] \tag{37}\\
&= \frac{1}{4}\left(\mathrm{E}[X_n^2] + 2\,\mathrm{E}[X_nX_{n-1}] + \mathrm{E}[X_{n-1}^2]\right) \tag{38}
\end{align}

Since Xn is zero-mean, E[Xn²] is the variance of Xn. Since Xn is IID, Xn and Xn−1 are independent, and E[XnXn−1] = E[Xn]E[Xn−1]. The variance is also time-invariant, so we have:

\begin{align}
\sigma_Y^2 &= \frac{1}{4}\left(\sigma^2 + 2\,\mathrm{E}[X_n]\,\mathrm{E}[X_{n-1}] + \sigma^2\right) \tag{40}\\
&= \frac{1}{4}\left(2\sigma^2 + 2(0)(0)\right) \tag{41}\\
&= \frac{\sigma^2}{2} \tag{42}
\end{align}
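The mean and variance just derived can be checked by simulation. This Monte Carlo sketch is an illustrative addition (the value of σ, the seed, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5                       # standard deviation of the IID Gaussians
x = rng.normal(0.0, sigma, size=1_000_000)

# Y[n] = (X[n] + X[n-1]) / 2
y = (x[1:] + x[:-1]) / 2

print(y.mean())                   # close to 0
print(y.var())                    # close to sigma**2 / 2
```

Note that consecutive Yn are correlated (they share one Xn), but the sample mean and sample variance remain consistent estimators, so both estimates converge to 0 and σ²/2 respectively.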

(b) Is Yn a Gaussian random variable?

Solution in figure 1.

Figure 1: Question 2, part (b)

3. Consider the stochastic process defined by

\[
Y[n] = X[n] + \beta X[n-1]
\]

where $\beta \in \mathbb{R}$ and X[n] is a zero-mean WSS process with auto-correlation function $R_x[k] = \sigma^2 \alpha^{|k|}$ for $|\alpha| < 1$.

(a) Compute the PSD $P_y(e^{j\omega})$ of Y[n].

(b) Form the auto-correlation matrix with filter length N = 2 and compute the cross-correlation $R_{xy}[n]$.

Solution in figures 2 and 3.
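The worked solution is in the figures; as an independent cross-check (an illustrative addition, with arbitrary parameter values), the standard filtered-WSS result $P_y(e^{j\omega}) = |1 + \beta e^{-j\omega}|^2 P_x(e^{j\omega})$, with $P_x(e^{j\omega}) = \sigma^2(1-\alpha^2)/(1 - 2\alpha\cos\omega + \alpha^2)$, can be compared against a truncated DTFT of $R_y[k] = (1+\beta^2)R_x[k] + \beta(R_x[k-1] + R_x[k+1])$; the figure's answer may of course be written in a different but equivalent form.

```python
import numpy as np

sigma2, alpha, beta = 1.0, 0.6, 0.8    # arbitrary example parameters
w = 0.7                                 # test frequency (rad/sample)

# Rx[k] = sigma2 * alpha**|k|;  Ry follows from Y[n] = X[n] + beta X[n-1]
Rx = lambda k: sigma2 * alpha ** abs(k)
Ry = lambda k: (1 + beta**2) * Rx(k) + beta * (Rx(k - 1) + Rx(k + 1))

# PSD as a truncated DTFT of Ry[k]; alpha**200 is negligible
Py_sum = sum(Ry(k) * np.exp(-1j * w * k) for k in range(-200, 201)).real

# Closed form: |H(e^{jw})|^2 * Px(e^{jw}) with H(e^{jw}) = 1 + beta e^{-jw}
H2 = abs(1 + beta * np.exp(-1j * w)) ** 2
Px = sigma2 * (1 - alpha**2) / (1 - 2 * alpha * np.cos(w) + alpha**2)
print(Py_sum, H2 * Px)                  # the two values agree
```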


Figure 2: Question 3, part (a)


Figure 3: Question 3, part (b)


4. Find the eigenvalues and eigenvectors of the given matrix

\[
R = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}, \qquad -1 \le \rho \le 1
\]

Solution in figure 4.

Figure 4: Question 4
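Since the worked solution lives in a figure, a quick numerical check of the well-known answer (eigenvalues $1 \pm \rho$, eigenvectors proportional to $[1, \mp 1]^T$, up to sign) may be useful; the value ρ = 0.3 below is an arbitrary choice:

```python
import numpy as np

rho = 0.3
R = np.array([[1.0, rho], [rho, 1.0]])

evals, evecs = np.linalg.eigh(R)  # eigh: R is symmetric; eigenvalues ascending
print(evals)                      # [1 - rho, 1 + rho]
print(evecs)                      # columns ~ [1, -1]/sqrt(2), [1, 1]/sqrt(2), up to sign
```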


5. Find the derivatives of the following functions:

(a) $f(\mathbf{x}) = \|\mathbf{x}\|^2$ with respect to the complex vector $\mathbf{x} \in \mathbb{C}^{N \times 1}$.

(b) $f(\mathbf{x}) = \mathbf{x}^H A \mathbf{x}$ with respect to the complex vector $\mathbf{x} \in \mathbb{C}^{N \times 1}$. Note that $A$ is a constant square matrix.

(c) $f(X) = \operatorname{trace}(X)$ with respect to the real-valued matrix $X \in \mathbb{R}^{N \times N}$.

Solution in figures 5 and 6.


Figure 5: Question 5, parts (a) and (b)


Figure 6: Question 5, part (c)
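The complex gradients depend on the convention chosen; assuming the $\partial/\partial\mathbf{x}^*$ (Wirtinger) convention, the standard results are $\nabla\|\mathbf{x}\|^2 = \mathbf{x}$, $\nabla(\mathbf{x}^H A \mathbf{x}) = A\mathbf{x}$, and $\partial\operatorname{trace}(X)/\partial X = I$. The finite-difference check below is an illustrative addition, not part of the original solution; it takes $A$ Hermitian so that $f$ is real-valued and the check is well defined (part (a) is then the $A = I$ special case).

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 4, 1e-6

# (b) f(x) = x^H A x, A Hermitian.  Wirtinger gradient: d f / d x* = A x.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (A + A.conj().T) / 2                       # make A Hermitian -> f is real
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
f = lambda v: (v.conj() @ A @ v).real

grad = np.zeros(N, dtype=complex)
for k in range(N):
    e = np.zeros(N); e[k] = eps
    d_re = (f(x + e) - f(x)) / eps             # derivative w.r.t. Re(x_k)
    d_im = (f(x + 1j * e) - f(x)) / eps        # derivative w.r.t. Im(x_k)
    grad[k] = (d_re + 1j * d_im) / 2           # d/dx* = (d/dRe + j d/dIm)/2
print(np.allclose(grad, A @ x, atol=1e-4))     # True

# (c) d trace(X)/dX = I: only diagonal perturbations change the trace
X = rng.standard_normal((N, N))
g = np.array([[(np.trace(X + eps * np.outer(np.eye(N)[i], np.eye(N)[j]))
                - np.trace(X)) / eps
               for j in range(N)] for i in range(N)])
print(np.allclose(g, np.eye(N), atol=1e-6))    # True
```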
