Signal Processing
I. INTRODUCTION

Continuing with the idea of the receiver that you implemented in the previous practice, in this one we will work on maximum likelihood estimation (MLE) in the context of a real application dealing with the well-known Global Navigation Satellite Systems (GNSS). Specifically, we will carefully evaluate the MLE of the carrier phase of a GNSS received signal. The term GNSS refers to a set of satellites orbiting the Earth, such as GPS, Galileo, Glonass and BeiDou, that were designed to broadcast signals with global coverage whose processing provides the user's positioning. The procedure is essentially based on calculating the travel time of those signals going from the satellites to the user's receiver, and then estimating the position of the latter by means of triangulation techniques.

II. QUESTION 1: PRELAB

Let us consider only one sample of the random signal r(n) in (3) and assume that the carrier phase is unknown but deterministic: a) Write down the corresponding pdf. b) Compute, step by step, the mean µ(n) = E[x(n)] and determine whether the result depends on n or it is constant.

Answer Question 1: PRELAB

Once we have determined what our signal r(n) is, we describe it as r(n) = A cos(2πf_o n + ϕ) + ω(n), where ϕ is unknown but deterministic. Since we know the model of the pdf, and we know that ω(n) is AWGN with zero mean and variance σ_ω², we calculate the mean µ_x to compute the pdf: E[r(n)] = E[A cos(2πf_o n + ϕ)] + E[ω(n)]. As we said, the noise is zero mean and the rest of the signal is not random, so we can say that E[r(n)] = A cos(2πf_o n + ϕ). Now we know the mean µ_x, and it is important to note that it depends on the sample number n. The pdf is

f(x) = (1 / (2πσ_ω²)^{N/2}) · exp(−(1/(2σ_ω²)) (x − µ_x)^H (x − µ_x)),

so we can substitute µ_x and obtain

f(x) = (1 / (2πσ_ω²)^{N/2}) · exp(−(1/(2σ_ω²)) Σ_{n=0}^{N−1} (x(n) − A cos(2πf_o n + ϕ))²).

III. QUESTION 2: PRELAB

Let us consider N samples of the random signal r(n) in (3) that are stacked into an (N × 1) vector r = [r(0), r(1), ..., r(N − 1)]^T. Assuming, as it was done in the previous question, that the carrier phase is unknown but deterministic:
a) Write down the corresponding pdf.
b) Compute, step by step, the mean µ(n) = E[x(n)] and determine whether the result depends on n or it is constant.

Answer Question 2: PRELAB

a) Now we stack our r(n) measurements in a vector r, each entry with its corresponding sample and noise, so r = [r(0), r(1), ..., r(N − 1)]^T. We remember that E[r(n)] = A cos(2πf_o n + ϕ). The pdf becomes the product, from n = 0 to n = N − 1, of the corresponding single-sample pdfs:

f(x) = Π_{n=0}^{N−1} (1/(σ√(2π))) · exp(−(x(n) − µ(n))² / (2σ²)),

so we determine that

f(x) = Π_{n=0}^{N−1} (1/(σ√(2π))) · exp(−(x(n) − A cos(2πf_o n + ϕ))² / (2σ²)).

b) To calculate the mean of r, we apply E[r] = E[(1/N) Σ_{n=0}^{N−1} r(n)] = (1/N) Σ_{n=0}^{N−1} E[r(n)], and as we said, E[r(n)] = A cos(2πf_o n + ϕ). Finally, E[r] = (1/N) Σ_{n=0}^{N−1} A cos(2πf_o n + ϕ). Again, the mean depends on the sample n.

IV. QUESTION 3

Look closely at the MLE ϕ in (8). Are the operations implemented in this formula familiar to you? (Recall your background on fundamentals of communications.)

Answer Question 3

Referring to Fonaments de Comunicacions, this form is reminiscent of the optimal receiver filter, where we had a signal r(t) = s_i(t) + w(t), and the signal was constructed from orthonormal basis functions multiplied by their corresponding symbol coefficients. The optimal receiver was built as a bank of correlators: the signal r(t) entered, was multiplied by the basis functions Φ_1, Φ_2, ..., Φ_n, and was integrated over a period T. Due to the properties of the basis functions, the integral over one period of Φ_m · Φ_n is 0 if m ≠ n. Considering only the signal and not the noise, at the output of each correlator we had the coefficient of s_i(t) corresponding to its basis function. However, considering the noise, our symbol would be shifted because of it. Therefore, at the output of the correlator bank a comparator is installed, the error ||y − ŝ_i(t)|| between the sent and received symbol is measured, and the most likely one is chosen. That is, if our received symbol falls closer to ŝ_1(t) than to ŝ_2(t), we will choose the first symbol. This is the well-known ML (Maximum Likelihood) estimator, and it is optimal when the symbols are equiprobable. The formula we see is derived from minimizing ||y − ŝ_i(t)||, the error between the noisy signal and the symbol we are comparing with.

V. QUESTION 4

Generate a vector r containing N = 100 samples of the signal r(n) in (3) using A = 1 and no noise. Implement the MLE ϕ_ML in (8) and feed it with the vector of N samples r that you have just created. Run the MLE that you have implemented and check that it estimates exactly any value of ϕ that is present in the input signal.

Answer Question 4

Once we generate an r vector with N = 100 samples and without noise, we observe that the Maximum Likelihood Estimation (MLE) perfectly estimates the phase without any errors. We plot both the original signal and the estimated signal (see Figure 2), and we observe that they are exactly the same. Setting ϕ = 0.5 and creating a variable θ_ML, we execute the script and find that θ_ML = ϕ = 0.5. This result makes sense because we are not adding any randomness to our generated signal, so it is deterministic, and the estimator works perfectly (consider θ_ML = ϕ_ML, using the θ notation as discussed in class).
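The noiseless check of question 4 can also be sketched outside MATLAB. The following illustrative Python/NumPy version (not the report's original script) uses the same values as the appendix code (A = 1, f = 0.1, ϕ = 0.5) and the MLE from (8), ϕ_ML = −atan2(Σ r(n) sin(2πf_o n), Σ r(n) cos(2πf_o n)):

```python
import numpy as np

A, f0, phi = 1.0, 0.1, 0.5
N = 100
n = np.arange(N)
r = A * np.cos(2 * np.pi * f0 * n + phi)          # noiseless samples of r(n)

# MLE of the carrier phase, as in (8) and in the appendix code:
num = np.sum(r * np.sin(2 * np.pi * f0 * n))      # correlation with sin(2*pi*f0*n)
den = np.sum(r * np.cos(2 * np.pi * f0 * n))      # correlation with cos(2*pi*f0*n)
phi_ml = -np.arctan2(num, den)                    # recovers phi up to numerical precision
```

With no noise, phi_ml matches ϕ = 0.5 to machine precision, which is exactly the behaviour reported above.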
VI. QUESTION 5

Take the vector r of N = 100 samples used in question 4 and add noise with power equal to 0.1. Feed this noisy vector r to the MLE implemented in question 4 and confirm that the output of the MLE is now a noisy estimate.

Answer Question 5

Now we add noise with power 0.1 to our generated signal and attempt to estimate the original phase. In Figure 3, the results after executing the script are shown. First, we plot the signal as it is supposed to be, then we plot the noisy generated signal, and finally the estimated signal after estimating the phase ϕ. They may appear similar, but upon analyzing the variables ϕ and ϕ_ML, they are not exactly equal. Since the reader does not have access to the generated variables, we plotted the different signals (original and estimated) and zoomed in (Figure 4). We can observe a slight phase offset, a result of imperfect estimation due to the presence of noise. So we can determine that when noise is present, the ML estimator may not estimate the phase perfectly. In the following exercises, we will try to improve upon that.

Fig. 1. Correlator Bank; Question 3
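A minimal Python/NumPy sketch of question 5 (illustrative, not the report's MATLAB script) shows the same effect; the noise standard deviation 0.1 matches the normrnd(0, 0.1, ...) call used in the appendix code:

```python
import numpy as np

rng = np.random.default_rng(0)                    # fixed seed, for reproducibility
A, f0, phi = 1.0, 0.1, 0.5
N = 100
n = np.arange(N)
r = A * np.cos(2 * np.pi * f0 * n + phi) + rng.normal(0.0, 0.1, N)  # noisy r(n)

phi_ml = -np.arctan2(np.sum(r * np.sin(2 * np.pi * f0 * n)),
                     np.sum(r * np.cos(2 * np.pi * f0 * n)))
# phi_ml is now only close to phi: the estimate inherits the noise.
```

Unlike the noiseless case, phi_ml no longer equals ϕ exactly, only approximately.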
VII. QUESTION 6

In order to assess the goodness of the MLE estimator, generate K = 1000 realizations of the noisy vector r in question 5. That is, generate r_i for i = 1, 2, ..., K, each one with a different noise realization. With these K realizations of r containing N = 100 samples each, find the K outputs of the MLE and use them to compute the bias of the MLE for N = 100. Repeat the process for N = 10 and N = 1000. Once you are done, you should have the bias of the MLE for N = 10, 100, 1000. Plot the bias in the Y-axis as a function of N in the X-axis, and set the limits of the Y-axis to facilitate a proper view. In this plot that you have just obtained, what is the trend that you observe?

Fig. 2. Question 4 (top: r[n] signal with N = 100 samples; bottom: signal estimating phi without noise)

Answer Question 6
As we know, our estimator won't work perfectly, so we will try to analyze its statistical properties to bring it closer to the real value. We generate K = 1000 realizations and compute the bias, given by bias(θ̂) = E[θ̂] − θ, to observe how far the true value of ϕ is from ϕ_ML. The bias for N = 100 is approximately 8 × 10⁻³, but it may vary if we execute the script again. We repeat the process for N = 10 and N = 1000. As shown in Figure 5, the bias is lower when the vector length is larger. The more samples you have of something random, the closer the values get to the mean. Therefore, we are interested in having as many samples as possible.

Fig. 3. Question 5 (top: r[n] signal with N = 100 samples without noise; middle: signal r with noise; bottom: signal estimating phi with noise)
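The bias computation of question 6 can be sketched as follows (illustrative Python/NumPy, not the report's MATLAB; parameters A = 1, f = 0.1, ϕ = 0.5, noise std 0.1 and K = 1000 taken from the appendix code):

```python
import numpy as np

rng = np.random.default_rng(0)
A, f0, phi, sigma, K = 1.0, 0.1, 0.5, 0.1, 1000

def phase_mle(r, n):
    """Carrier-phase MLE from (8): -atan2 of the sin/cos correlations."""
    return -np.arctan2(np.sum(r * np.sin(2 * np.pi * f0 * n)),
                       np.sum(r * np.cos(2 * np.pi * f0 * n)))

bias = {}
for N in (10, 100, 1000):
    n = np.arange(N)
    clean = A * np.cos(2 * np.pi * f0 * n + phi)
    # K noisy realizations, one phase estimate each
    estimates = [phase_mle(clean + rng.normal(0.0, sigma, N), n)
                 for _ in range(K)]
    bias[N] = np.mean(estimates) - phi            # empirical bias for this N
```

For every N the empirical bias is small (it is a Monte Carlo estimate, so the exact value changes with the seed), consistent with the trend described above.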
VIII. QUESTION 7

Answer Question 7

The same thing happens with the variance: it decreases when we have a large number N of samples (see Figure 6). We calculate it with var(θ̂) = E[(θ̂ − E[θ̂])²]. The result is as expected.
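The variance estimate of question 7 follows the same Monte Carlo recipe (illustrative Python/NumPy sketch, parameters as in the appendix code):

```python
import numpy as np

rng = np.random.default_rng(0)
A, f0, phi, sigma, K = 1.0, 0.1, 0.5, 0.1, 1000

def mle_once(N):
    """One noisy realization of r(n) and its phase MLE from (8)."""
    n = np.arange(N)
    r = A * np.cos(2 * np.pi * f0 * n + phi) + rng.normal(0.0, sigma, N)
    return -np.arctan2(np.sum(r * np.sin(2 * np.pi * f0 * n)),
                       np.sum(r * np.cos(2 * np.pi * f0 * n)))

# Empirical variance var(theta_hat) = E[(theta_hat - E[theta_hat])^2]
variance = {N: np.var([mle_once(N) for _ in range(K)])
            for N in (10, 100, 1000)}
# The variance drops roughly a decade per decade of N, matching Fig. 6.
```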
IX. QUESTION 8

Repeat question 7 now by computing the MSE instead of the variance. Use N = 1:10:1000 instead of just the three values N = 10, 100, 1000 used in question 7. Plot the MSE obtained and the CRB both in the same plot so that they are superposed. What do you observe? Do you get the result you expected? Is the ML estimator efficient? Is the ML estimator consistent? Give the conclusions you think are appropriate.

Fig. 4. Question 5.2
Answer Question 8

We define the MSE as MSE = bias(θ̂)² + var(θ̂). Instead of using N = [10, 100, 1000], we start at N = 1 and increment in steps of 10 until we reach N = 1000, and then plot it. We use the given definition to compute the CRB(θ). We plot it on a logarithmic scale (Figure 8) and verify the condition MSE(θ) ≥ CRB(θ). Additionally, we have plotted it on a linear scale (Figure 7) to observe that, although the MSE approaches the bound, it is still above it, indicating that we have not reached the maximum theoretical efficiency. The graph is logical because, while the MSE(θ) is never equal to the CRB(θ) (i.e., it will never be 100% efficient), it asymptotically approaches the CRB(θ) value as the number of samples N increases.
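The MSE-versus-CRB comparison can be sketched as follows (illustrative Python/NumPy; the bound CRB = σ²/N is the definition used in the report's appendix code, with σ = 0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
A, f0, phi, sigma, K = 1.0, 0.1, 0.5, 0.1, 1000

def mle_once(N):
    """One noisy realization of r(n) and its phase MLE from (8)."""
    n = np.arange(N)
    r = A * np.cos(2 * np.pi * f0 * n + phi) + rng.normal(0.0, sigma, N)
    return -np.arctan2(np.sum(r * np.sin(2 * np.pi * f0 * n)),
                       np.sum(r * np.cos(2 * np.pi * f0 * n)))

results = {}
for N in (10, 100, 1000):
    est = np.array([mle_once(N) for _ in range(K)])
    mse = (np.mean(est) - phi) ** 2 + np.var(est)  # MSE = bias^2 + variance
    results[N] = (mse, sigma ** 2 / N)             # (empirical MSE, CRB = sigma^2 / N)
# At every N the empirical MSE lies above this CRB, as observed in Figs. 7-8.
```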
Fig. 5. Question 6 (bias of the theta estimator, ×10⁻³ scale)
Fig. 6. Question 7 (variance of the estimator in logarithmic scale; data tips: N = 10 → 1.996 × 10⁻³, N = 100 → 2.021 × 10⁻⁴, N = 1000 → 1.937 × 10⁻⁵)
Fig. 7. Question 8.1 (comparison between MSE and CRB, linear scale)

Fig. 8. Comparison between MSE and CRB, logarithmic scale

X. MATLAB CODE

Question 5 (plotting excerpt):

subplot(3,1,2);
plot(r_noise);
title('Signal r with noise');
subplot(3,1,3);
plot(estimated_signal, 'Color', 'red');
title('Signal estimating phi with noise');

figure;
plot(r, 'r');
hold on;
plot(estimated_signal, 'b');
xlim([2, 10]);
ylim([-0.9, 0.9]);

Question 6/7:

A = 1;
f = 0.1;
phi = 0.5;
N = [10 100 1000];
K = 1000;
var_theta = zeros(1, length(N));
theta_vect = zeros(1, length(N));
for i = 1:length(N)
    thetas = zeros(1, K);
    for k = 1:K
        n = 0:N(i)-1;
        r = A * cos(2*pi*f*n + phi);
        noise = normrnd(0, 0.1, [1, N(i)]);
        current_realization = r + noise;
        s = sin(2*pi*f*n);
        c = cos(2*pi*f*n);
        thetas(k) = -atan2(sum(current_realization .* s), ...
                           sum(current_realization .* c));
    end
    % Mean and variance over the K realizations
    theta_vect(i) = mean(thetas);
    var_theta(i) = var(thetas);
end
bias_vect = abs(theta_vect - phi);

figure;
semilogy(N, var_theta, 'bo-');
title('Variance of the estimator in logarithmic scale');
ylim([0, 0.01]);

Question 8:

A = 1;
f = 0.1;
phi = 0.5;
N = 1:10:1000;
sigma = 0.1;
K = 1000;
var_theta = zeros(1, length(N));
theta_vect = zeros(1, length(N));
for i = 1:length(N)
    thetas = zeros(1, K);
    for k = 1:K
        n = 0:N(i)-1;
        r = A * cos(2*pi*f*n + phi);
        noise = normrnd(0, sigma, [1, N(i)]);
        current_realization = r + noise;
        s = sin(2*pi*f*n);
        c = cos(2*pi*f*n);
        thetas(k) = -atan2(sum(current_realization .* s), ...
                           sum(current_realization .* c));
    end
    % Mean and variance over the K realizations
    theta_vect(i) = mean(thetas);
    var_theta(i) = var(thetas);
end
% Calculate CRB
CRB = (sigma^2) ./ N;
bias = abs(theta_vect - phi);
mse = bias.^2 + var_theta;

figure;
plot(N, mse, 'b', N, CRB, 'r');
title('Comparation between MSE and CRB')
legend('MSE', 'CRB');
ylim([0, 0.001]);