
National Institute of Technology

Calicut

EC6405E
Statistical Signal Processing

Lab Experiment 1

Author: Aneeta Christopher
Faculty in charge: Dr. Deepthi P. P.
1 Aim
Noise Filtering
a. Generate a random signal x(n) with autocorrelation function
rx(k) = P α^|k|
for a chosen α, where 0 < α < 1.

b. Sketch the power spectral density (PSD) Px(e^{jω}) and determine the bandwidth of the signal.

c. Simulate a noisy signal y(n) with additive uncorrelated noise w(n) and an echo component:
y(n) = x(n) + A x(n − D) + w(n)
d. Consider the noise autocorrelation function rw(k) = σ² δ(k).
i. Choose signal power P = 2, autocorrelation decay rate α = 0.4, attenuation factor A = 0.1, echo delay D = 1, and noise variance σ² = 0.25.
ii. Sketch the PSD of x(n) and y(n).
iii. Design FIR Wiener filter with tapped delay line structure hav-
ing L = 2 taps. Find the filtered output signal x̂(n). Sketch x(n),
x̂(n), and PSD of x̂(n).
iv. Find minimum mean squared error (MMSE) and improvement
in SNR through filtering.
v. Repeat steps (ii) to (iv) with different values of the parameters P,
α, A, and σ². Increase the number of filter taps L and check
the SNR improvement.
vi. Repeat steps (ii) to (iv) for IIR Wiener filter with any one set
of P , α, A, and σ 2 in step (v) and compare the performance with
the corresponding FIR filter.

e. Consider the noise autocorrelation function rw(k) = σ²(δ(k) + 0.5δ(k − 1) + 0.5δ(k + 1)).
Repeat steps (ii) to (iv) with any one set of P, α, A, and σ² and different
values of L. Sketch the PSDs and evaluate the SNR improvement through filtering.

2 Theory
In statistical signal processing, the autocorrelation function rx (k) measures
the similarity between a signal and a time-shifted version of itself. The Power
Spectral Density (PSD) represents the distribution of power across different
frequencies in a signal.
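As a concrete illustration (a minimal sketch with illustrative values, not part of the experiments that follow), a random signal with rx(k) = P α^|k| can be generated as a first-order autoregressive process and its theoretical PSD evaluated directly:

% Minimal sketch: AR(1) process with r_x(k) = P*alpha^|k| and its theoretical PSD
P = 2; alpha = 0.4; N = 1000;
v = sqrt(P*(1 - alpha^2)) * randn(1, N);   % driving white noise with variance P*(1 - alpha^2)
x = filter(1, [1 -alpha], v);              % x(n) = alpha*x(n-1) + v(n)
w = -pi:0.01:pi;
Px = P*(1 - alpha^2) ./ (1 + alpha^2 - 2*alpha*cos(w));   % theoretical PSD of x(n)
figure; plot(w, Px); xlabel('\omega'); ylabel('P_x(e^{j\omega})'); grid on;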

Direct method
We consider the problem of modeling a deterministic signal, x(n), as the
unit sample response of a linear shift-invariant filter, h(n), having a rational
system function of the form given in Eq. (4.3). Thus, v(n) in Fig. 4.1 is taken
to be the unit sample, δ(n). We will assume, without any loss in generality,
that x(n) = 0 for n < 0 and that the filter h(n) is causal. Denoting the
modeling error by e′ (n),

e′ (n) = x(n) − h(n)


the problem is to find the filter coefficients, ap (k) and bq (k), that make
e′ (n) as small as possible. In the least squares or direct method of signal
modeling, the error measure that is to be minimized is the squared error [6]

Ls = Σ_{n=0}^{∞} |e′(n)|²

(Note that since h(n) and x(n) are both assumed to be zero for n < 0,
then e′ (n) = 0 for n < 0 and the summation begins at n = 0).
A necessary condition for the filter coefficients ap (k) and bq (k) to minimize
the squared error is that the partial derivative of Ls with respect to each of
the coefficients vanish, i.e.,

∂Ls/∂ap(k) = 0 for k = 1, . . . , p   and   ∂Ls/∂bq(k) = 0 for k = 0, 1, . . . , q.
Using Parseval’s theorem, the least squares error may be written in the
frequency domain in terms of E ′ (ejω ), the Fourier transform of e′ (n), as
follows:
Ls = (1/2π) ∫_{−π}^{π} |E′(e^{jω})|² dω

This representation allows us to express Ls explicitly in terms of the


model parameters ap (k) and bq (k). Thus, setting the partial derivatives with
respect to ap (k) equal to zero, we have

∂Ls/∂ap(k) = (1/π) ∫_{−π}^{π} Re{ E′(e^{jω}) [∂E′(e^{jω})/∂ap(k)]* } dω = 0,  for k = 1, . . . , p.

Because E′(e^{jω}) depends on the coefficients ap(k) and bq(k) through a ratio of polynomials, these conditions form a set of nonlinear equations in the model parameters. As a result, the least squares (direct) approach is not mathematically tractable in general and is not amenable to real-time signal processing applications.

2.1 Procedure
2.2 Code and Explanation

MATLAB Code and Explanation


Initialization and Signal Generation
% MATLAB Code Section
clc;
clear;
close all;

% Parameters
P = 2; % Signal power
A = 0.4; % Autocorrelation decay rate
N = 10; % Signal length
A_d = 0.1; % Delay coefficient
D = 1; % Filter delay
v = randn(1, N); % White noise signal

% Generate input signal x(n)


x = zeros(1, N);
x(1) = v(1);
for n = 2:N
x(n) = A * x(n-1) + v(n);   % AR(1) signal with decay rate A
end

Noisy Signal Simulation and Autocorrelation


% MATLAB Code Section
% Simulate noisy signal y(n)
w_n = normrnd(0, 0.5, [1, N]);
y = zeros(1, N);
y(1) = x(1) + w_n(1);
for n = 2:N
y(n) = x(n) + A_d * x(n-1) + w_n(n);
end

% Plot the generated signal


figure;
stem(0:N-1, x, "LineWidth", 2);
xlabel("n");
ylabel("x(n)");
title("Generated random signal");
grid on;

% Plot autocorrelation function


k = 0:N-1;
rx = ac(k, P, A);
figure;
stem(k, rx);
xlabel("k");
ylabel("r_x(k)");
title("Autocorrelation function");
grid on;

Power Spectral Density and Autocorrelation Matrix


% MATLAB Code Section
% Plot power spectral density
w = -pi:0.1:pi;
P_x = (1 - A^2) ./ (1 + A^2 - 2*A*cos(w));   % element-wise over the frequency grid
figure;
plot(w, P_x);
xlabel("w");
ylabel("P_x(e^{jw})");
title("Power Spectral Density of the generated signal");
grid on;

% Plot autocorrelation of y(n)


z = 3;
t = 0:z-1;
r_y1 = (1 + A_d^2) * ac(t, P, A);
r_y2 = A_d * (ac(t+1, P, A) + ac(t-1, P, A));
r_y = r_y1 + r_y2;
r_y(1) = r_y(1) + 0.8; % Add noise variance
figure;
plot(t, r_y);
xlabel("n");
ylabel("R_Y(n)");
title("Autocorrelation of the generated signal");
grid on;

% Construct autocorrelation matrix


ry_mat = autocorrelationMatrix(z, P, A, A_d, 0.8);
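The helper functions ac and autocorrelationMatrix used above are not shown in this listing. A minimal sketch of assumed implementations, consistent with rx(k) = P A^|k| and y(n) = x(n) + A_d x(n − 1) + w(n), is given below (to be placed at the end of the script or in separate files):

% Assumed helper functions (not part of the original listing)
function r = ac(k, P, A)
r = P * A.^abs(k);   % r_x(k) = P * A^|k|
end

function R = autocorrelationMatrix(L, P, A, A_d, sigma2)
R = zeros(L);
for i = 1:L
for j = 1:L
k = i - j;
% r_y(k) = (1 + A_d^2) r_x(k) + A_d [r_x(k+1) + r_x(k-1)] + sigma2*delta(k)
R(i, j) = (1 + A_d^2)*ac(k, P, A) + A_d*(ac(k+1, P, A) + ac(k-1, P, A));
if k == 0
R(i, j) = R(i, j) + sigma2;
end
end
end
end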

Solve for Filter Coefficients and Power Spectral Density of y(n)
% MATLAB Code Section
% Cross-correlation vector between the desired signal x(n) and y(n)
% (taken here as r_x, consistent with the colored-noise section below)
r_xy = ac(0:z-1, P, A);
% Solve for filter coefficients
ry_mat_inv = inv(ry_mat);
w_fil = ry_mat_inv * transpose(r_xy);

% Plot power spectral density of y(n)


P_y = 1.01 * P_x + 0.2 * cos(w) .* P_x + 0.25;
figure;
plot(w, P_y);
xlabel("w");
ylabel("P_y(e^jw)");

6
title("Power Spectral Density of the input signal");
grid on;

Estimate Signal and Plot Results


% MATLAB Code Section
% Estimate signal from the filter
g = 2; % Filter taps
w_fil_n = w_fil(1:g);
x_estimate = zeros(1, N);
for o = 1:N
x_estimate(o) = w_fil(1) * y(o);
for k = 2:min(o, g)
x_estimate(o) = x_estimate(o) + w_fil_n(k) * y(o - k + 1);
end
end

% Plot error from the estimated signal


e_estimate = x - x_estimate;
figure;
stem(0:N-1, e_estimate, "LineWidth", 2);
xlabel("n");
ylabel("error_{estimate}(n)");
title("Error from Estimated signal from the filter");
grid on;

Colored noise experimentation


% MATLAB Code Section
clc;
clear;
close all;

% Parameters
P = 2; % Signal power
A = 0.4; % Autocorrelation decay rate
N = 10; % Signal length
A_d = 0.1; % Delay coefficient

D = 1; % Filter delay
v = randn(1, N); % White noise signal

% Additional parameters
w_variance = 0.25; % Variance of the noise signal

% Colored noise autocorrelation function


n0 = 0;
n1 = 1;
n2 = -1;
n = -5:5;
r_w = w_variance * (((n-n0) == 0) + (0.5 * (((n-n1) == 0) + ((n-n2) == 0))));   % r_w(k) = sigma^2[delta(k) + 0.5 delta(k-1) + 0.5 delta(k+1)]

% Colored noise generation


u = sqrt(w_variance) * randn(1, N+1);   % independent white noise driving the colored noise
w_n = zeros(1, N);
w_n(1) = u(1) + 0.5 * u(2);

for ri = 2:N
w_n(ri) = u(ri) + 0.5 * (u(ri - 1) + u(ri + 1));
end

% Desired signal x(n), driven by v(n) so that it stays uncorrelated with w_n

x = zeros(1, N);
x(1) = v(1);

for n = 2:N
x(n) = 0.4 * x(n-1) + v(n);
end

% Input signal to the filter


y = zeros(1, N);
y(1) = x(1) + w_n(1);

for n = 2:N
y(n) = x(n) + A_d * x(n-1) + w_n(n);
end

% Autocorrelation of colored noise

figure;
stem(-5:5, r_w, "LineWidth", 2);
xlabel(’n’);
ylabel(’r_w(n)’);
title(’Autocorrelation function of colored noise, r_w(n)’);
grid on;

% Colored noise signal plot


figure;
stem(0:N-1, w_n);
xlabel(’n’);
ylabel(’w_n(n)’);
title(’Colored noise of length N’);
grid on;

% Autocorrelation function of x(n)


k = 0:N-1;
rx = ac(k, P, A);
figure;
stem(k, rx);
xlabel("k");
ylabel("r_x(k)");
title("Autocorrelation function of x(n)");
grid on;

% Power spectral density


w = -pi:0.1:pi;
P_x = (1 - A^2) ./ (1 + A^2 - 2*A*cos(w));
figure;
plot(w, P_x);
xlabel("w");
ylabel("P_x(e^{jw})");
title("Power Spectral Density of x(n)")
grid on;

% Autocorrelation matrix for filtering


z = 3;
ry_mat = autocorrelationMatrix(z, P, A, A_d, w_variance);

ry_mat_inv = inv(ry_mat);
w_fil = ry_mat_inv * transpose(ac(0:z-1, P, A));

% Power spectral density of y(n)


P_y = (1 + A_d^2) * P_x + (2 * A_d * cos(w)) .* P_x + 0.25;
figure;
plot(w, P_y);
xlabel("w");
ylabel("P_y(e^jw)");
title("Power Spectral Density of y(n)")
grid on;

% Signal estimation from the filter


g = 2; % Filter taps
w_fil_n = w_fil(1:g);
x_estimate = zeros(1, N);

for o = 1:N
x_estimate(o) = w_fil(1) * y(o);
for k = 2:min(o, g)
x_estimate(o) = x_estimate(o) + w_fil_n(k) * y(o - k + 1);
end
end

% Error from the estimated signal


e_estimate = x - x_estimate;

% SNR estimation for the original signal


rx_0 = ac(0, P, A);
power_noise = ((A_d*A_d) * rx_0 + r_w(5) + r_w(6) + r_w(7));
SNR_x = rx_0 / power_noise;
SNR_x_dB = 10 * log10(SNR_x);

% MMSE for the estimated signal


MMSE = rx_0 - (ac(0:2, P, A) * w_fil);   % cross-correlation vector uses the decay rate A, not the echo coefficient

% Reconstructed signal SNR


SNR_rec = rx_0 / MMSE;

SNR_rec_dB = 10 * log10(SNR_rec);

IIR Filter Design


% MATLAB Code Section
clc;
clear all;
close all;

% Variables
P = 2;
Alpha = 0.4;
N = 100;
a = 0.1;
sigma = 0.5;
Fs = 8000;
D = 1;

% Generate the driving noise v(n)
% (note: the all-pole prefilter below colors v slightly; for r_x(k) = P*Alpha^|k|
% exactly, v should instead be white with variance P*(1-Alpha^2))
v = randn(1, N);
v = filter(1, [1, -Alpha^2], v);

% Generate the signal x(n)


x = zeros(1, N);
x(1) = v(1);
for n = 2:N
x(n) = Alpha * x(n-1) + v(n);
end

% Plot the generated signal x(n)


subplot(3,1,1);
stem(1:N, x);
xlabel("n");
ylabel("x(n)");
title("Generated signal x(n)");
grid on;

% Plot the autocorrelation function of x(n)

k = -N/2:N/2;
rx = P * Alpha.^abs(k);
subplot(3,1,2);
stem(k, rx);
xlabel("k");
ylabel("r_x(k)");
title("Autocorrelation function of x(n)");
grid on;

% Plot the power spectral density of x(n)


w = -pi:0.1:pi;
f = (w*Fs)/(2*pi);
P_x = (1-Alpha.^2)./(1+Alpha.^2-(2*Alpha*cos(w)));
subplot(3,1,3);
plot(f,P_x, ’LineWidth’, 2);
xlabel("Frequency (Hz)");
ylabel("P_x(e^{j\omega})");
title("Power Spectral Density of x(n)");
grid on;

% Generate uncorrelated noise and echo signal


w = sigma * randn(1, N+1);
echo = circshift(x, [0, D]);
y = x + a * echo(1:N) + w(1:N);

% Plot the signals


figure;
subplot(2,1,1);
stem(y);
title(’Noisy Signal y(n)’);

% Calculate the power spectral density of y(n)


w = -pi:0.1:pi;
P_y = (1.01 + 0.2 * cos(w)) .* P_x + sigma^2;

% Plot the power spectral density of y(n)


subplot(2,1,2);
f = (w*Fs)/(2*pi);

plot(f, P_y, ’LineWidth’, 2);
xlabel(’Frequency (Hz)’);
ylabel(’P_y(e^{j\omega})’);
title(’Power Spectral Density of y(n)’);
grid on;

% Estimate the signal x_hat(n) using the (noncausal) Wiener smoother
% coefficients computed offline for these parameter values

x_hat = zeros(1, N);

for n = 2:N-1
x_hat(n) = 0.08 * y(n+1) + 0.74 * y(n) + 0.01 * y(n-1);
end

% Plot the original signal and the estimated signal


figure;
subplot(3,1,1);
stem(1:N, x);
xlabel("n");
ylabel("x(n)");
title("Generated signal x(n)");
grid on;

subplot(3,1,2);
stem(1:N, x_hat);
xlabel(’n’);
ylabel(’x_{hat}(n)’);
title(’Estimated Signal x_{hat}(n)’);
grid on;

% Plot the error signal


e_n = x - x_hat;
subplot(3,1,3);
stem(1:N, e_n);
xlabel(’n’);
ylabel(’error_value’);
title(’Error, e(n)’);
grid on;

% Calculate and display Mean Squared Error (MSE)
MSE = 0;
for n = 1:N
MSE = MSE + (x(n) - x_hat(n))^2;
end
MMSE = MSE / N;
fprintf(’Mean Squared Error (MSE): %.4f\n’, MMSE);

% Calculate and display original SNR


rx_0 = 2;
SNR_noise = rx_0 /(((a^2)*rx_0)+sigma^2);
disp("SNR of Original noise")
disp(SNR_noise)
SNR_noise_dB = 10*log10(SNR_noise);
disp("SNR of Original noise in dB : ")
disp(SNR_noise_dB)

% Calculate and display reconstructed SNR


Reconstructed_SNR = rx_0 / MMSE;
disp("Reconstructed_SNR : ")
disp(Reconstructed_SNR)
Re_SNR_dB = 10*log10(Reconstructed_SNR);
disp("Reconstructed SNR_dB : ")
disp(Re_SNR_dB)

% Calculate and display SNR Improvement


SNR_improvement = Re_SNR_dB - SNR_noise_dB ;
disp("SNR Improvement is: ")
disp(SNR_improvement);

2.3 Inference
The effect of varying the echo (delay) coefficient A_d, the signal power P, the correlation decay rate α, and the noise variance σ² has been studied and tabulated for comparison.
From Table 1, we can observe that as the correlation between the signal samples increases, the noise reduction becomes more pronounced.
Sl. No. rx (k) base, α SNR before filtering (dB) SNR after filtering(dB)
1 0.1 8.696 9.5212
2 0.2 8.696 9.544
3 0.3 8.696 9.5788
4 0.4 8.696 9.62
5 0.5 8.696 9.699
6 0.6 8.696 9.8049
7 0.7 8.696 9.969
8 0.8 8.696 10.279
9 0.9 8.696 10.908
10 0.99 8.696 13.58

Table 1: Variation in noise reduction as the autocorrelation base α of the desired signal x(n) increases from 0.1 to 0.99.

Figure 1: Autocorrelation function for the desired signal x(n).

From Table 2 we can deduce that as the echo component becomes more prominent in the filter input, the SNR improvement obtained through filtering increases, although the absolute SNR after filtering falls roughly linearly as the echo term grows.
From Table 3 we can conclude that as the uncorrelated noise in the input signal increases, noise reduction with the MMSE criterion becomes progressively harder.

3 Conclusion
The desired signal could be recovered efficiently when the echo component is comparatively more prominent than the uncorrelated noise

Figure 2: Input signal to the noise filtering containing echo component and
the uncorrelated noise element.

Figure 3: Power spectral density of the input signal x(n).

Figure 4: Power spectral density of the generated signal y(n).

Figure 5: Error e(n) of the estimate x̂(n) from the desired signal x(n).

Figure 6: Estimated signal x̂(n).

Figure 7: Error comparison for the original signal x(n) and the noisy signal y(n).

Figure 8: The input signal y(n) for the IIR filter, its autocorrelation function
rx (n) and its power spectral density Px (n).

Sl. No. echo coefficient, A_d SNR before filtering (dB) SNR after filtering (dB)
1 0.1 8.69 9.628
2 0.2 7.82 9.57
3 0.3 6.67 9.44
4 0.4 5.45 9.26
5 0.5 4.25 9.01
6 0.6 3.14 8.625
7 0.7 2.11 8.187
8 0.8 1.16 7.688
9 0.9 0.2919 7.1587

Table 2: Variation in noise reduction as the echo coefficient A_d increases from 0.1 to 0.9.

element. In the present experiment, the colored-noise case led to somewhat better noise filtering than the white-noise case. IIR Wiener filtering shows better noise reduction than FIR filtering.

Sl. No. noise variance, σ 2 SNR before filtering (dB) SNR after filtering(dB)
1 0.1 12.218 13.2392
2 0.25 8.69 9.62
3 0.3 7.95 8.95
4 0.4 6.78 7.9172
5 0.5 5.8503 7.1512
6 0.6 5.0864 6.5509
7 0.7 4.437 6.06
8 0.8 3.872 5.65
9 0.9 2.921 4.381

Table 3: Variation in noise reduction as the uncorrelated noise variance σ² increases from 0.1 to 0.9.

Sl. No. Filter type SNR before filtering (dB) SNR after filtering(dB)
1 IIR 8.69 9.7969
2 FIR 8.69 9.62

Table 4: Comparison of the noise reduction achieved by the IIR and FIR Wiener filters.

Figure 9: The estimated signal x̂(n) from the IIR filter, its power spectral
density.

National Institute of Technology
Calicut

EC6405E
Statistical Signal Processing

Lab Experiment 2

Author: Aneeta Christopher
Faculty in charge: Dr. Deepthi P. P.
Aim
Predictor
i. Consider an autocorrelation function other than rx (k) = P α|k| . Generate
the random signal x(n) with the chosen autocorrelation function.
ii. Design the Wiener predictor as a tapped delay line filter with order 2
for the signal generated in Part (i). Observe and plot the error in prediction.
Quantify the performance gain if a PCM system is replaced with DPCM
system using this predictor.
iii. Using Levinson-Durbin algorithm, update the order of the filter to 5.
Repeat analysis in Part (ii).
iv. Repeat the experiment in Part (ii) with input to the filter as x(n) +
u(n), with u(n) as a white noise and u(n) as colored. Observe the filter
behavior in the two cases and compare the performance. Also compare the
filter performance in Part (ii) and (iv).

Theory
Pulse Code Modulation
• PCM involves quantizing the amplitude of each sample independently.


• PCM is susceptible to quantization noise, which can be significant, especially at lower bit depths.

Differential Pulse Code Modulation


• DPCM exploits the correlation between consecutive samples by encod-
ing the difference between successive samples.

• If the signal has low-frequency variations or is slowly varying, DPCM can provide good compression and improved SNR compared to PCM (quantified in the sketch after this list).

• DPCM can be more robust against certain types of noise, especially when the noise affects consecutive samples similarly.
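The gain from replacing PCM with DPCM can be quantified through the prediction gain Gp = σx²/σe²; with the same number of quantizer bits, the DPCM SNR exceeds the PCM SNR by approximately 10 log10(Gp) dB. A minimal sketch (assuming a signal x and its prediction error e, as produced by the predictor code later in this report):

% Prediction gain and approximate SNR advantage of DPCM over PCM (same bit depth)
% x and e are assumed to be the signal and prediction error computed elsewhere
Gp = var(x) / var(e);               % prediction gain
SNR_gain_dB = 10 * log10(Gp);       % approximate DPCM SNR advantage in dB
fprintf('Prediction gain: %.2f (%.2f dB)\n', Gp, SNR_gain_dB);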

Let x(n) be the input signal, d(n) be the desired signal, and e(n) be the
prediction error. The Wiener filter coefficients w(i) are adjusted to minimize
the mean-square prediction error E[e2 (n)]. The predicted signal y(n) is given
by the convolution of the input signal with the filter coefficients:
y(n) = Σ_{i=0}^{M} w(i) x(n − i)

The Wiener filter minimizes the mean-square error by adjusting its coefficients based on the autocorrelation function of the input signal Rx(k) and the cross-correlation function between the input and desired signals Rxd(k). The optimal filter coefficients are given by w = Rx^{-1} Rxd, where Rx is the autocorrelation matrix of the input signal and Rxd is the cross-correlation vector between the input and desired signals.
The Wiener filter adapts to the statistical properties of the input signal
and desired output signal, making it effective for predicting signals with
known statistical characteristics.
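As a small worked sketch of these normal equations (using the linear autocorrelation rx(k) = 1 − 0.2|k| chosen in the code later in this experiment), the 2-tap one-step predictor coefficients and minimum error follow directly from w = Rx^{-1} Rxd:

% Minimal sketch: 2-tap Wiener one-step predictor for r_x(k) = 1 - 0.2*|k|
rx = @(k) 1 - 0.2*abs(k);            % chosen autocorrelation function
Rx  = [rx(0) rx(1); rx(1) rx(0)];    % autocorrelation matrix of the input
rxd = [rx(1); rx(2)];                % cross-correlation with the sample to be predicted
w   = Rx \ rxd;                      % optimal predictor coefficients
MMSE = rx(0) - rxd' * w;             % minimum mean-square prediction error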

Levinson - Durbin Algorithm


The Levinson-Durbin recursion is a powerful algorithm used in statistical
signal processing for efficiently designing all-pole Infinite Impulse Response
(IIR) filters with a desired autocorrelation sequence. Its applications range
from signal prediction and noise cancellation to spectral estimation and cod-
ing. This algorithm solves the symmetric Toeplitz system of linear equations:

[ r(1)    r(2)*    ...   r(n)*   ] [ a(2)   ]   [ -r(2)   ]
[ r(2)    r(1)     ...   r(n-1)* ] [ a(3)   ] = [ -r(3)   ]
[  ...     ...     ...    ...    ] [  ...   ]   [  ...    ]
[ r(n)    r(n-1)   ...   r(1)    ] [ a(n+1) ]   [ -r(n+1) ]
where r = [r(1), . . . , r(n + 1)] is the input autocorrelation vector, and
r(i)∗ denotes the complex conjugate of r(i). The input r typically represents
autocorrelation coefficients, with lag 0 as the first element r(1).
The solution to this system of equations provides the coefficients
a(2), a(3), . . . , a(n + 1) of the all-pole IIR filter.
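For reference, MATLAB's built-in levinson function (Signal Processing Toolbox) solves this Toeplitz system directly; a minimal sketch, assuming the toolbox is available and using the autocorrelation lags rx(0), . . . , rx(5) of the chosen linear autocorrelation:

% Minimal sketch (assumes Signal Processing Toolbox): order-5 predictor via Levinson-Durbin
r = 1 - 0.2*abs(0:5);          % autocorrelation lags r_x(0) .. r_x(5)
[a, Ep] = levinson(r, 5);      % a = [1 a(2) ... a(6)], Ep = final prediction error power
w5 = -a(2:end);                % predictor taps: x_hat(n) = sum_k w5(k) * x(n-k)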

Procedure
i. Autocorrelation Function and Signal Generation
1. Choose an autocorrelation function other than rx (k) = P α|k|.

2. Implement the chosen autocorrelation function and generate a random


signal x(n) using MATLAB.

3. Plot the generated signal x(n) for visual analysis.

ii. Wiener Predictor Design and Performance Analysis


1. Design the Wiener predictor as a tapped delay line filter with order 2
for the signal generated in Part (i) using MATLAB.

2. Implement the predictor in MATLAB and observe the prediction error.

3. Quantify the performance gain by comparing a Pulse Code Modulation


(PCM) system with a Differential PCM (DPCM) system using this
predictor.

iii. Levinson-Durbin Algorithm for Order 5


1. Apply the Levinson-Durbin algorithm in MATLAB to update the order
of the filter to 5.

2. Repeat the analysis of the Wiener predictor’s performance with the


updated filter order.

iv. Effect of Input Noise on Wiener Predictor


1. Repeat the experiment in Part (ii) with the input to the filter as x(n)+
u(n), where u(n) is white noise.

2. Repeat the experiment with u(n) as colored noise.

3. Observe and compare the behavior of the filter in both cases using
MATLAB.

4. Compare the performance of the filter in Part (ii) and Part (iv) using
MATLAB.

MATLAB Code and Explanation
Signal Generation and Autocorrelation
1 clc ;
2 clear ;
3 close all ;
4

5 % Generation of random signal


6 N = 100;
7 v = randn (1 , N ) ;
8 x = zeros (1 , N ) ;
9

10 for n =1: N
11 if n ==1
12 x ( n ) = 0.447 * v (1) ;
13 elseif n ==2
14 x ( n ) = 0.447 * ( v (2) + v (1) ) ;
15 elseif n ==3
16 x ( n ) = 0.447 * ( v (3) + v (2) + v (1) ) ;
17 elseif n ==4
18 x ( n ) = 0.447 * ( v (4) + v (3) + v (2) + v (1) ) ;
19 else
20 x ( n ) = 0.447 * [ v ( n ) + v (n -1) + v (n -2) + v (n
-3) + v (n -4) ];
21 end
22 end
23

24 figure ;
25 subplot (2 , 1 , 1) ;
26 plot (1: N , v , ’ LineWidth ’ , 2) ;
27 xlabel ( ’n ’) ;
28 ylabel ( ’v ( n ) ’) ;
29 title ( ’ Random Noise v ( n ) ’) ;
30 grid on ;
31

32 subplot (2 , 1 , 2) ;
33 plot (1: N , x , ’ LineWidth ’ , 2) ;

34 xlabel ( ’n ’) ;
35 ylabel ( ’x ( n ) ’) ;
36 title ( ’ Generated Random Signal x ( n ) ’) ;
37 grid on ;
38

39 % Autocorrelation function
40 k = 0:10;
41 N = 2;
42 r_x = autocorr_func ( k ) ;
43 figure ;
44 plot (k , r_x ) ;
45 grid on ;
Listing 1: Signal Generation

2 tap Wiener filter prediction

2 % Autocorrelation matrix and vector


3 N = 2;
4 Rx_mat = autocorrelationMatrix ( N ) ;
5 rx_vect = autocorrelationvector ( N ) ;
6

7 % Wiener Predictor coefficients


8 s = inv ( Rx_mat ) ;
9 y = transpose ( rx_vect ) ;
10 w = s * y;
11

12

13 % Signal estimation using Wiener Predictor


14 N = 100;
15 x_hat = zeros (1 , N ) ;
16 for i = 1: N
17 if i == 1
18 x_hat ( i ) = x ( i ) ;
19 elseif i == 2
20 x_hat ( i ) = w (1 , 1) * x (i -1) ;
21 else

22 x_hat ( i ) = w (1 , 1) * x (i -1) + w (2 , 1) * x (i
-2) ;
23 end
24 end
25

26 % Plotting
27 figure ;
28 subplot (3 , 1 , 1) ;
29 plot (1: N , x ) ;
30 title ( ’ Generated Signal ’) ;
31

32 subplot (3 , 1 , 2) ;
33 plot (1: N , x_hat ) ;
34 title ( ’ Estimated Signal from Wiener Filter tap 2 ’) ;
35

36 error_signal = x - x_hat ;
37 subplot (3 , 1 , 3) ;
38 plot (1: N , error_signal ) ;
39 title ( ’ Error Signal e ( n ) ’) ;
40

41 function Rx = autocorrelationMatrix ( N )
42 Rx = zeros ( N ) ;
43 for i = 1: N
44 for j = i : N
45 Rx (i , j ) = autocorr_func ( i - j ) ;
46 Rx (j , i ) = Rx (i , j ) ;
47

48 end
49 end
50 end
51

52 function rx = autocorrelationvector ( N )
53 rx = zeros (1 , N ) ;
54 for i = 1: N
55

56 rx ( i ) = autocorr_func ( i ) ;
57 end
58 end

59

60 function r_x = autocorr_func ( k )


61 r_x = 1 - 0.2 * abs ( k ) ;
62 end
Listing 2: 2 tap Wiener filter prediction

PCM and DPCM estimation

1 % PCM quantization error calculation (3 - bit case ): quantize the signal itself
2 Mx = max ( abs ( x ) ,[] ," all ") ;
3 del_pcm = Mx /2.^3;
4 quan_pcm = ( del_pcm .* del_pcm ) /12;
5 SNR_pcm = 1/ quan_pcm ;
6 % DPCM quantization error calculation (3 - bit case ): quantize the prediction error
7 Me = max ( abs ( error_signal ) ,[] ," all ") ;
8 del_dpcm = Me /2.^3;
9 quan_dpcm = ( del_dpcm .* del_dpcm ) /12;
10 SNR_dpcm = 1/ quan_dpcm ;
Listing 3: PCM and DPCM SNR estimation

Levinson - Durbin algorithm

1 % Levinson Durbin to increase order


2 % error power
3 P_2n = r_x (1) - [ w (1 ,1) * r_x (2) ] - [ w (2 ,1) * r_x (3) ];
4 r_3 =[ conj ( r_x (4) ) ] -[ w (1 ,1) * conj ( r_x (3) ) ] -[ w (2 ,1) *
conj ( r_x (2) ) ]
5 T_3 =( - r_3 / P_2n ) ;
6 v = T_3
7 w_21 = w (1 ,1) +( T_3 * w (2 ,1) ) ;
8 z = w (2 ,1) ;
9 c = w (1 ,1) ;
10 v = T_3
11 w_22 = w (2 ,1) +( T_3 * w (1 ,1) ) ;
12 w_23 = T_3 ;

13 P_3 = [1 -(( T_3 ) *( T_3 ) ) ]* P_2n ;
14 % P_3n = r_x (1) - [ w_21 * r_x (2) ] - [ w_22 * r_x (3) ] -[ w_23 *
r_x (4) ];
15 r_4 = r_x (5) - w_21 * r_x (4) - w_22 * r_x (3) - w_23 * r_x (2) ;
16 T_4 =( - r_4 / P_3 ) ;
17 w_31 = w_21 + T_4 * w_23 ;
18 w_32 = w_22 + T_4 * w_22 ;
19 w_33 = w_23 + T_4 * w_21 ;
20 w_34 = T_4 ;
21 P_4 = [1 -(( T_4 ) *( T_4 ) ) ]* P_3 ;
22 r_5 = r_x (6) - w_31 * r_x (5) - w_32 * r_x (4) - w_33 * r_x (3) -
w_34 * r_x (2) ;
23 T_5 =( - r_5 / P_4 ) ;
24 w_41 = w_31 + T_5 * w_34 ;
25 w_42 = w_32 + T_5 * w_33 ;
26 w_43 = w_33 + T_5 * w_32 ;
27 w_44 = w_34 + T_5 * w_31 ;
28 w_45 = T_5 ;
29 P_5 = [1 -(( T_5 ) *( T_5 ) ) ]* P_4 ;
30 u = randn (1 , N ) ; % additive white noise u ( n ) ( the predictor taps remain in w )
31 y (1) = x (1) + u (1) ;
32 for n = 2: N
33 y ( n ) = x ( n ) + u ( n ) ;
34 end
35 figure ;
36 subplot (2 ,1 ,1) ;
37 plot (1: N , u , ' LineWidth ' , 2) ;
38 xlabel ( 'n ') ;
39 ylabel ( 'u ( n ) ') ;
40 title ( ' random noise u ( n ) ') ;
41 grid on ;
42 subplot (2 ,1 ,2) ;
43 plot (1: N , y , ’ LineWidth ’ , 2) ;
44 xlabel ( ’n ’) ;
45 ylabel ( ’y ( n ) ’) ;
46 title ( ’ generated signal y ( n ) ’) ;
47 grid on ;
48 r_w = 0.25; % white noise variance
49 r_y = r_x ; % r_y ( k ) = r_x ( k ) + r_w * delta ( k )
50 r_y (1) = r_y (1) + r_w ;
54 N =10;
55 r_w = 0.25;
56 figure ;
57 plot ( r_y , ’ LineWidth ’ , 2) ;
58 xlabel ( ’n ’) ;
59 ylabel ( ’ r_y ’) ;
60 title ( ’ autocorrelation of generated signal y ( n ) ’) ;
61 grid on ;
Listing 4: Levinson - Durbin recursion

1 r_w = 0.25; % white noise variance
2 r_y = r_x ; % r_y ( k ) = r_x ( k ) + r_w * delta ( k )
3 r_y (1) = r_y (1) + r_w ;
7 N =10;
8

9 figure ;
10 plot ( r_y , ’ LineWidth ’ , 2) ;
11 xlabel ( ’n ’) ;
12 ylabel ( ’ r_y ’) ;
13 title ( ’ autocorrelation of generated signal y ( n ) ’) ;
14 grid on ;
15 % plot of coloured noise
16 n0 = 0;
17 n1 = 1;
18 n2 = -1;
19 n = 0:10;
20 w_variance = 0.25;
21 r_wcolor = w_variance *((( n - n0 ) ==0) +(0.5*((( n - n1 ) ==0)
+(( n - n2 ) ==0) ) ) ) ; % Autocorrelation of coloured

noise
22 % r_w0 = r_w (5) + r_w (6) + r_w (7) ;
23 figure ;
24 subplot (2 ,1 ,1) ;
25 stem (n , r_wcolor ) ;
26 xlabel ( ’n ’) ;
27 ylabel ( ’ r_w ( n ) color ’) ;
28 title ( ’ Autocorrelation function of colored noise ’) ;
29 grid on ;
30 r_ycolor = r_x + r_wcolor ;
31 subplot (2 ,1 ,2) ;
32 stem (n , r_ycolor ) ;
33 xlabel ( ’n ’) ;
34 ylabel ( ’ r_y ( n ) color ’) ;
35 title ( ’ Autocorrelation function of colored noise ’) ;
36 grid on ;
37 N =2;
38 Ry_mat = autocorrelationMatrix_y ( N ) ;
39 rx_vect = autocorrelationvector ( N ) ;
40 % Perform matrix multiplication
41 a = inv ( Ry_mat ) ;
42 b = transpose ( rx_vect ) ;
43 w_y = a * b ;
44 N = 100;
45 y_hat = zeros (1 , N ) ; % Preallocate y_hat
46 for i = 1: N
47 if i == 1
48 y_hat ( i ) = y ( i ) ;
49 elseif i == 2
50 y_hat ( i ) = w_y (1) * y (i -1) ;
51 else
52 y_hat ( i ) = w_y (1) * y (i -1) + w_y (2) * y (i -2) ;
53 end
54 end
55 % Plotting
56 figure ;
57 subplot (3 ,1 ,1) ;
58 plot (1: N , y ) ;

59 title ( ’ Generated Signal ’) ;
60 subplot (3 ,1 ,2) ;
61 plot (1: N , y_hat ) ;
62 title ( ’ y_h ( Estimated Signal ) ’) ;
63 for i = 1:1: N
64 e_y ( i ) = y ( i ) - y_hat ( i ) ;
65

66 end
67 subplot (3 ,1 ,3) ;
68 plot (1: N , e_y ) ;
69 title ( ’ error signal of y ’) ;
70 % PCM quantization error calculation (4 bit case )
71 My = max ( y_hat ,[] ," all ") ;
72 del_2y = My /2.^3;
73 quan_dpcm_y = ( del_2y .* del_2y ) /12;
74 SNR_dpcm_y = 1/ quan_dpcm_y ;
75 % DPCM quantization error calculation (4 bit case )
76 Me_y = max ( e_y ,[] ," all ") ;
77 del_1y = Me_y /2.^3;
78 quan_pcm_y = ( del_1y .* del_1y ) /12;
79 SNR_pcm_y = 1/ quan_pcm_y ;
80 fprintf ( ’ Signal to Noise Ratio for PCM \ n ’) ;
81 disp ( SNR_pcm_y ) ;
82 fprintf ( ’ Signal to Noise Ratio for DPCM \ n ’) ;
83 disp ( SNR_dpcm_y ) ;
84

85 function Ry = autocorrelationMatrix_y ( N )
86 Ry = zeros ( N ) ;
87 for i = 1: N
88 for j = i : N
89 Ry (i , j ) = autocorr_func ( i - j ) ;
90 if i == j % add the noise variance on the diagonal ( white - noise case )
91 Ry (i , j ) = Ry (i , j ) + 0.25;
92 end
93 Ry (j , i ) = Ry (i , j ) ;
94 end
95 end
96 end
Listing 5: White and Color Noise added signal prediction

Inference
The performance of PCM and DPCM quantization at 3-bit precision has been studied using a 2-tap Wiener-filter-based prediction of x(n), where x(n) has the specified (linear) autocorrelation function Rx(k) and white noise is added.

Sl. No. Prediction Quantization method SNR (dB)

1 2 - Tap Wiener Filter PCM 24.6127
2 2 - Tap Wiener Filter DPCM 27.9166
3 5 - Tap Levinson-Durbin (white noise) PCM 27.73
4 5 - Tap Levinson-Durbin (white noise) DPCM 34.49

Table 1: Variation in signal-to-noise ratio for PCM and DPCM quantization using 2-tap Wiener prediction and 5-tap Levinson-Durbin prediction (white-noise case).

From Tables 1 and 2, we can infer that 2-tap Wiener prediction achieves better noise suppression (higher SNR) when the transmitted signal is corrupted by white noise than when it is corrupted by colored noise. As the Levinson-Durbin algorithm is employed to increase the Wiener predictor order, the noise component in the filter output is reduced further, and the SNR increases with the order. The Levinson-Durbin algorithm optimizes the filter coefficients to better exploit the correlation between signal samples.

Sl. No. Prediction Quantization methods SNR (dB)


1 2 - Tap Wiener Filter PCM 15.7627
2 2 - Tap Wiener Filter DPCM 21.8459
3 5 - Tap Levinson Durbin , white PCM 23.2429
4 5 - Levinson Durbin, white DPCM 25.6191

Table 2: Variation in signal-to-noise ratio for 2-tap Wiener prediction of the colored-noise-corrupted signal, and after updating to a 5-tap design with the Levinson-Durbin method.

Figure 1: Autocorrelation function for the desired signal x(n) which is linear
in nature.

Figure 2: Input signal for prediction using 2 tap Wiener filter.

Figure 3: Predicted signal from the 2 tap Wiener filter and the error signal
e(n) with respect to the desired signal x(n).

Figure 4: Estimated signal x_h(n) from the 5-tap Wiener filter obtained with the Levinson-Durbin method, the desired signal x(n), and the error signal e(n).

Figure 5: Autocorrelation of the colored noisy signal.

Figure 6: Colored-noise-corrupted signal x_colored(n) and the error estimated from x(n) using the 2-tap Wiener filter.

Figure 7: Colored-noise-corrupted signal x_colored(n) and the error estimated from x(n) using the Wiener filter upgraded to 5 taps with the Levinson-Durbin method.

Conclusion
The Wiener filter based predictor implementation has been studied and the
advancement in the filter implementation based on Levinson - Durbin algo-
rithm has been carried out and verified. The Wiener predictor aims to reduce
the prediction error.
The Levinson-Durbin algorithm is employed to update the order of the filter to 5. This increases the complexity of the predictor while improving its performance. The impact of noise on prediction and quantization has also been analyzed.

National Institute of Technology
Calicut

EC6405E
Statistical Signal Processing

Lab Experiment 3

Student: Aneeta Christopher
Faculty in charge: Dr. Deepthi P. P.
Aim
Power Spectral Density Estimation
Consider sum of two sinusoids in white noise as given below:
x(k) = V1 sin(w1 k + ϕ1) + V2 sin(w2 k + ϕ2) + n(k)
Here n(k) is white noise with variance σ 2 . ϕ1 and ϕ2 are iid random
variables uniformly distributed over 0 to 2π. Choose w1 , V1 , and V2 properly.
Take 1000 samples of data. Take w1 as 0.1π; vary w2 as 0.15π, 0.2π.
• Sketch periodogram estimate, Bartlett estimate, and Welch estimate
of power spectral density for each value of w2 . For Bartlett and Welch
estimate, consider window width of 100 and allow 50% overlap for
Welch estimate. Observe and plot the difference in resolution and give
your comments.

• Sketch MVM and BT estimate of PSD and compare your results.

• Vary relative strengths of the frequency components (by varying V1


and V2 ) and strength of noise (by varying σ 2 ). Observe and plot the
difference in resolution and give your comments.

Theory
Periodogram
• The periodogram method is the most common nonparametric method
for computing the power spectral density (PSD) of a random process.

• Let xw (n) = w(n)x(n) denote a windowed segment of samples from


a random process x(n) , where the window function w (classically
the rectangular window) contains M nonzero samples. Then the peri-
odogram is defined as the squared-magnitude DTFT of xw divided by
M.
P̂_{x,M}(ω) ≜ (1/M) |DTFT(x_w)|² = (1/M) | Σ_{n=0}^{M−1} x_w(n) e^{−jωn} |²
    ←→ (1/M) Σ_{n=0}^{M−1−|l|} x_w(n) x_w(n + |l|),  l ∈ ℤ.

In the limit as M goes to infinity, the expected value of the periodogram equals the power spectral density of the process x(n). This is expressed by writing
E{ lim_{M→∞} P̂_{x,M}(ω) } = S_x(ω),
where S_x(ω) denotes the power spectral density (PSD) of x.
Furthermore, P̂_{x,M}(ω) = (M · asinc_M²) ∗ Ŝ_{x,M}(ω); that is, the periodogram is equal to the smoothed sample PSD.
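In MATLAB this estimate amounts to a single squared-magnitude FFT of the data record; a minimal sketch (with an illustrative random record x):

% Minimal sketch: raw periodogram of a length-M data record
x = randn(1, 1000);                % example data record
M = length(x);
Px = abs(fft(x, 1024)).^2 / M;     % periodogram estimate over 1024 frequency bins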

Welch’s Method
Welch’s method (also called the periodogram method) for estimating power
spectra is carried out by dividing the time signal into successive blocks, form-
ing the periodogram for each block, and averaging.
Denote the m-th windowed, zero-padded frame from the signal x by
x_m(n) ≜ w(n) x(n + mR),  n = 0, 1, . . . , M − 1,  m = 0, 1, . . . , K − 1,
where R is the window hop size and K denotes the number of available frames. Then the periodogram of the m-th block is given by
P̂_{x_m,M}(ω_k) = (1/M) |FFT_{N,k}(x_m)|² = (1/M) | Σ_{n=0}^{N−1} x_m(n) e^{−j2πnk/N} |²
as before, and the Welch estimate of the power spectral density is given by
Ŝ_x^W(ω_k) ≜ (1/K) Σ_{m=0}^{K−1} P̂_{x_m,M}(ω_k).
In other words, it’s just an average of periodograms across time. When
w(n) is the rectangular window, the periodograms are formed from non-
overlapping successive blocks of data. For other window types, the analysis
frames typically overlap.
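For comparison with the hand-written estimators in the Code section, MATLAB's built-in pwelch (Signal Processing Toolbox) implements exactly this averaging; a minimal sketch, assuming the toolbox is available and using the segment length and 50% overlap specified in the Aim:

% Minimal sketch (assumes Signal Processing Toolbox): Welch estimate with
% 100-sample segments and 50% (50-sample) overlap
x = randn(1, 1000);                             % example data record
[Pxx, wf] = pwelch(x, hamming(100), 50, 1024);  % window, overlap, FFT length
figure; plot(wf, 10*log10(Pxx));
xlabel('\omega (rad/sample)'); ylabel('P_{xx} (dB)'); grid on;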

Bartlett Method
Bartlett’s method consists of the following steps:
1. The original N point data segment is split up into K (non-overlapping)
data segments, each of length M .
2. For each segment, compute the periodogram by computing the dis-
crete Fourier transform (DFT version which does not divide by M ), then
computing the squared magnitude of the result and dividing this by M .
3. Average the result of the periodograms above for the K data segments.

The averaging reduces the variance, compared to the original N point
data segment.

Minimum Variance Method


In the minimum variance (MV) method the power spectrum is estimated by
filtering a random process with a bank of narrowband bandpass filters. The
bandpass filters are designed to be optimum by minimizing the variance of
the output of a narrowband filter that adapts to the spectral content of the
input process at each frequency of interest.
It aims to minimize the variance of the spectral estimate under certain
constraints.
The MVM can be summarized by the following steps:
1. Define a model for the signal’s autocorrelation function.
2. Formulate the estimation problem as a linear prediction problem.
3. Use the Levinson-Durbin recursion or the Yule-Walker equations to
estimate the model parameters.
4. Compute the spectral estimate using the model parameters.
The autocorrelation function of the signal is typically modeled using an
autoregressive (AR) model of order p, given by:
R(m) = Σ_{k=1}^{p} a_k R(m − k) + σ² δ(m)

where R(m) is the autocorrelation function at lag m, ak are the model


coefficients, σ 2 is the variance of the white noise process, and δ(m) is the
Kronecker delta function.
The goal of the MVM is to estimate the model coefficients ak such that the
variance of the spectral estimate is minimized, subject to the constraint that
the estimated autocorrelation function matches the observed autocorrelation
function.
The spectral estimate Ŝ(f) using the MVM is given by:
Ŝ(f) = σ² / | 1 − Σ_{k=1}^{p} a_k e^{−j2πf k} |²
where f is the frequency variable and p is the order of the autoregressive model.

The MVM provides a spectral estimate that minimizes the variance un-
der the given constraints, making it suitable for applications where accurate
estimation of the PSD is essential.
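For reference, the min_variance function in the MATLAB code section essentially follows Hayes' formulation of the MV estimator, which can be stated compactly as

P̂_MV(e^{jω}) = (p + 1) / ( e^H R_x^{−1} e ),   with   e = [1, e^{jω}, . . . , e^{jpω}]^T,

where R_x is the (p + 1) × (p + 1) autocorrelation matrix estimated from the data and p is the filter order.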

Blackman-Tukey Method


The Blackman-Tukey method is a technique for estimating the autocorrela-
tion function of a signal. It involves computing the discrete Fourier transform
(DFT) of the signal, then taking the squared magnitude of the DFT to ob-
tain the power spectrum. The autocorrelation function is then estimated by
taking the inverse Fourier transform of the power spectrum.
The Blackman-Tukey method can be summarized by the following steps:
1. Compute the autocovariance function R̂(m) of the signal, where m
represents the lag.
2. Apply a windowing function w(n) to the autocovariance function to
reduce the spectral leakage.
3. Compute the Fourier transform of the windowed autocovariance func-
tion to obtain the power spectrum.
4. Use the power spectrum to estimate the autocorrelation function.
The autocovariance function R̂(m) is given by:
R̂(m) = (1/N) Σ_{n=0}^{N−1} x(n) · x(n − m)
where x(n) represents the discrete signal and N is the length of the signal.
The windowed autocovariance function is given by:

R̂w (m) = R̂(m) · w(m)


where w(m) is the windowing function.
The power spectrum P (ω) is computed by taking the Fourier transform
of the windowed autocovariance function:

P (ω) = F[R̂w (m)]


Finally, the autocorrelation function r̂(m) is estimated by taking the in-
verse Fourier transform of the power spectrum:

r̂(m) = F −1 [P (ω)]
This provides an estimate of the autocorrelation function of the signal.
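A minimal MATLAB sketch of these steps (assuming the Signal Processing Toolbox for xcorr, a chosen maximum lag M, and an illustrative data record x):

% Minimal sketch: Blackman-Tukey PSD estimate with a Bartlett (triangular) lag window
x  = randn(1, 1000);                 % example data record
M  = 64;                             % maximum lag retained
r  = xcorr(x, M, 'biased');          % autocorrelation estimate for lags -M .. M
rw = r(:) .* bartlett(2*M + 1);      % apply the lag window
Pbt = abs(fft(rw, 1024));            % PSD estimate: transform of the windowed autocorrelation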

Procedure

1 Procedure for Spectral Estimation


1.1 Problem Statement
Consider the sum of two sinusoids in white noise. The noise n(k) has variance
σ 2 , and ϕ1 and ϕ2 are independent and identically distributed (iid) random
variables uniformly distributed over the interval [0, 2π]. The task is to prop-
erly choose parameters w1 , V1 , and V2 , take 1000 samples of data, and vary
w2 as 0.15π and 0.2π.

1.2 Data Generation


• Generate 1000 samples of data based on the given parameters.

• Set w1 = 0.1π.

• Vary w2 as 0.15π and 0.2π.

1.3 Spectral Estimation


• For each value of w2 , compute the periodogram estimate, Bartlett es-
timate, and Welch estimate of the power spectral density.

• For Bartlett and Welch estimates, use a window width of 100 samples.

• For the Welch estimate, allow 50% overlap between consecutive win-
dows.

• Sketch the periodogram estimate, Bartlett estimate, and Welch esti-


mate of the power spectral density for each value of w2 .

• Observe and plot the differences in resolution between the spectral


estimates.

1.4 Comparison of Spectral Estimates


• Sketch the Minimum Variance Method (MVM) and Blackman-Tukey
(BT) estimate of the power spectral density.

• Compare the results obtained from the MVM and BT methods with
the periodogram, Bartlett, and Welch estimates.

• Analyze and discuss the differences and similarities in the spectral es-
timates obtained from different methods.

1.5 Sensitivity Analysis


• Vary the relative strengths of the frequency components V1 and V2 , as
well as the strength of the noise (σ 2 ).

• Observe and plot the differences in resolution of the spectral estimates


for different parameter values.

• Analyze how changes in the relative strengths of the frequency compo-


nents and the noise affect the spectral estimates.

MATLAB Code and Explanation


Signal Generation and Periodogram
1

2 N = 1000; % Number of samples


3 omega1 = 0.1 * pi ;
4 omega2_values = [0.15 * pi , 0.2 * pi ];
5 sigma_n_squared = 0.5;
6 V1 = 1;
7 V2 = 1;
8

9 % Generate white noise


10 n = sqrt ( sigma_n_squared ) * randn (1 , N ) ;
11

12 % Generate sinusoidal signals


13 phi1 = rand * 2 * pi ;
14 phi2 = rand * 2 * pi ;
15 x1 = V1 * sin ( omega1 * (0: N -1) + phi1 ) ;
16 x2_values = zeros ( length ( omega2_values ) , N ) ;
17 for i = 1: length ( omega2_values )

18 x2_values (i , :) = V2 * sin ( omega2_values ( i ) *
(0: N -1) + phi2 ) ;
19 end
20

21 % Generate combined signal


22 x_values = zeros ( length ( omega2_values ) , N ) ;
23 for i = 1: length ( omega2_values )
24 x_values (i , :) = x1 + x2_values (i , :) + n ;
25

26 end
27 power_x_bart = periodogram ( x_values (1 ,:) , 100 , 0) ; % no overlap ( Bartlett ) ; repeat for each row of x_values
28 power_x_welch = periodogram ( x_values (1 ,:) , 100 , 50) ; % 50 - sample overlap ( Welch )
29

30 function pow = periodogram ( signal , m , overlap )
31 % If the ' overlap ' parameter is given , use the Welch method and split
32 % the signal into overlapping windows ; otherwise default to Bartlett ' s
33 % method and use adjacent ( non - overlapping ) windows .
34 if nargin == 3
35     starts = 0 : ( m - overlap ) : ( length ( signal ) - m ) ;
36 else
37     starts = 0 : m : ( length ( signal ) - m ) ;
38 end
39 k = length ( starts ) ;
40 segments = zeros (k , m ) ;
41 for i = 1: k
42     segments (i , :) = signal ( starts ( i ) +1 : starts ( i ) + m ) ;
43 end
44 % FFT of each segment , squared magnitude , divided by m
45 period = abs ( fft ( segments , [] , 2) ) .^2 / m ;
46 % average the periodograms of the k data segments
47 pow = mean ( period , 1) ;
48 end
Listing 1: Signal Generation and Periodogram

Bartlett and Welch estimators

1 N = 1000; % Number of samples


2 omega1 = 0.1 * pi ;
3 omega2_values = [0.15 * pi , 0.2 * pi ];
4 sigma_n_squared = 0.5;
5 V1 = 1;
6 V2 = 1;
7

8 % Generate white noise


9 n = sqrt ( sigma_n_squared ) * randn (1 , N ) ;
10

11 % Generate sinusoidal signals
12 phi1 = rand * 2 * pi ;
13 phi2 = rand * 2 * pi ;
14 x1 = V1 * sin ( omega1 * (0: N -1) + phi1 ) ;
15 x2_values = zeros ( length ( omega2_values ) , N ) ;
16 for i = 1: length ( omega2_values )
17 x2_values (i , :) = V2 * sin ( omega2_values ( i ) *
(0: N -1) + phi2 ) ;
18 end
19

20 % Generate combined signal


21 x_values = zeros ( length ( omega2_values ) , N ) ;
22 for i = 1: length ( omega2_values )
23 x_values (i , :) = x1 + x2_values (i , :) + n ;
24

25 end
26

27 psd = periodogram ( x_values (1 ,:) ) ; % apply to each row of x_values separately
28 psd_bart = bart ( x_values (1 ,:) , 10) ; % 10 sections of length 100
29 psd_welch = welch ( x_values (1 ,:) , 100 , 0.5 , 4) ; % 100 - sample sections , 50% overlap
30 psd_BT = blackman_tukey ( x_values (1 ,:) , 4 , 500) ; % defined in a later listing
31 psd_mvm = min_variance ( x_values (1 ,:) , 100) ; % defined in a later listing
32

33 function Px = periodogram (x , n1 , n2 ) %
34

35 x = x (:) ;
36 if nargin == 1
37 n1 = 1; n2 = length ( x ) ;
38 end
39 Px = abs ( fft ( x ( n1 : n2 ) , 1024) ) .^2/( n2 - n1 +1) ;
40 Px (1) = Px (2) ;
41 end
42

43 function Px = mper (x , win , n1 , n2 )


44 x = x (:) ;
45 if nargin == 2
46 n1 = 1; n2 = length ( x ) ;
47 end

48 N = n2 - n1 + 1;
49 w = ones (N , 1) ; % rectangular window by default ( win == 1)
50 if ( win == 2) , w = hamming ( N ) ;
51 elseif ( win == 3) , w = hanning ( N ) ;
52 elseif ( win == 4) , w = bartlett ( N ) ;
53 elseif ( win == 5) , w = blackman ( N ) ;
54 end
55 xw = x ( n1 : n2 ) .* w / norm ( w ) ;
56 Px = N * periodogram ( xw ) ;
57 end
58

59 function Px = bart (x , nsect )


60 L = floor ( length ( x ) / nsect ) ;
61 Px = 0;
62 n1 = 1;
63 for i =1: nsect
64 Px = Px + periodogram ( x ( n1 : n1 +L -1) ) / nsect ;
65 n1 = n1 + L ;
66 end
67 end
68

69 function Px = welch (x , L , over , win )


70 %
71 if ( over >= 1) || ( over < 0)
72 error ( ’ Overlap is invalid ’) ,
73 end
74 n1 = 1;
75 n0 = (1 - over ) * L ;
76 nsect =1+ floor (( length ( x ) -L ) /( n0 ) ) ;
77 Px =0;
78 for i =1: nsect
79 Px = Px + mper (x , win , n1 , n1 +L -1) / nsect ; n1 = n1 +
n0 ;
80 end
81 end
Listing 2: Bartlett and Welch Estimators

1.5.1 Blackman Tukey Method

1 close all ;
2 clear ;
3 clc ;
4

5 % To generate the discrete time random sinusoids


combination with white
6 % noise
7

8 N = 1000; % Number of samples


9 omega1 = 0.1 * pi ;
10 omega2_values = [0.15 * pi , 0.2 * pi ];
11 sigma_n_squared = 0.5;
12 V1 = 1;
13 V2 = 1;
14

15 % Generate white noise


16 n = sqrt ( sigma_n_squared ) * randn (1 , N ) ;
17

18 % Generate sinusoidal signals


19 phi1 = rand * 2 * pi ;
20 phi2 = rand * 2 * pi ;
21

22 x1 = V1 * sin ( omega1 * (0: N -1) + phi1 ) + V2 * sin (


omega2_values (1) * (0: N -1) + phi2 ) +
sigma_n_squared * randn (1 , N ) ;
23 x2 = V1 * sin ( omega1 * (0: N -1) + phi1 ) + V2 * sin (
omega2_values (2) * (0: N -1) + phi2 ) +
sigma_n_squared * randn (1 , N ) ;
24

25 % Spectrum Estimation
26 Px1 = blackman_tukey ( x1 , 1 , 5) ; % rectangular window , maximum lag M = 5
27 Px2 = blackman_tukey ( x2 , 1 , 5) ;
28 % To plot the spectrum estimate
29 f = 5*((0:1023) /1024) ;
30 figure , plot (f ,10* log10 ( Px1 ) ) ;
31 xlabel ( ’ Frequency , w1 ’)

32 ylabel ( ’ Spectral Estimate P_ { X1 }( dB ) ’) ;
33 title ( ’ BT Spectrum for X1 signal ’)
34 figure , plot (f ,10* log10 ( Px2 ) ) ;
35 xlabel ( ’ Frequency , w2 ’)
36 ylabel ( ’ Spectral Estimate P_ { X2 }( dB ) ’) ;
37 title ( ’ BT Spectrum for X2 signal ’)
38

39 function X = convm (x , p )
40 N = length ( x ) +2* p -2;
41 x = x (:) ;
42 xpad = [ zeros (p -1 , 1) ; x ; zeros (p -1 ,1) ];
43 for i = 1: p
44 X (: , i ) = xpad (p - i +1: N - i +1) ;
45 end
46 end

47 function R = covar (x , p )
48 x = x (:) ;
49 m = length ( x ) ;
50 x = x - ones (m ,1) *( sum ( x ) / m ) ;
51 R = convm (x , p ) ’* convm (x , p ) /( m -1) ;
52 end
53

54 function Px = blackman_tukey (x , window , M , n1 , n2 )


55 % x = discrete time random process
56 % window = 1 = Rectangular
57 % 2 = Hamming
58 % 3 = Hanning
59 % 4 = Bartlett
60 % 5 = Blackman
61 % M = length of window
62 % n1
63 % n2
64

65 x = x (:) ;
66 if nargin == 3
67 n1 = 1;
68 n2 = length ( x ) ;
69 end

70

71 % periodogram smoothing technique


72 F = M +1;
73 R = covar ( x ( n1 : n2 ) , F ) ;
74 r = [ conj ( fliplr ( R (1 ,2: M +1) ) ) , R (1 ,1) , R (1 ,2: M +1) ];
75 M = 2*( M +1) -1;
76 w = ones (M ,1) ; % RECTANGULAR WINDOW FUNCTION
77 if ( window == 2)
78 w = hamming ( M ) ;
79 elseif ( window == 3)
80 w = hanning ( M ) ;
81 elseif ( window ==4)
82 w = bartlett ( M ) ;
83 elseif ( window == 5)
84 w = blackman ( M ) ;
85 end
86 r = r ’.* w ;
87 Px = abs ( fft (r ,1024) ) ;
88 Px (1) = Px (2) ;
89 end
Listing 3: Blackman-Tukey method for spectrum estimation.

Minimum Variance Method

1 close all ;
2 clear ;
3 clc ;
4
5 % to generate a discrete time random process
6

7 N = 1000; % Number of samples


8 omega1 = 0.1 * pi ;
9 omega2_values = [0.15 * pi , 0.2 * pi ];
10 sigma_n_squared = 0.5;
11 V1 = 1;
12 V2 = 1;
13

14 % Generate white noise
15 n = sqrt ( sigma_n_squared ) * randn (1 , N ) ;
16

17 % Generate sinusoidal signals


18 phi1 = rand * 2 * pi ;
19 phi2 = rand * 2 * pi ;
20

21 x1 = V1 * sin ( omega1 * (0: N -1) + phi1 ) + V2 * sin (


omega2_values (1) * (0: N -1) + phi2 ) +
sigma_n_squared * randn (1 , N ) ;
22 x2 = V1 * sin ( omega1 * (0: N -1) + phi1 ) + V2 * sin (
omega2_values (2) * (0: N -1) + phi2 ) +
sigma_n_squared * randn (1 , N ) ;
23

24 % Spectrum Estimation
25 Px1 = min_variance ( x1 ,4) ; % order 4
26 Px2 = min_variance ( x2 , 4) ;
27 % To plot the spectrum estimate
28 figure , plot (2* pi *(0:1023) /1024 , Px1 ) ; % frequency axis 0 to 2 pi
29 xlabel ( ’ Frequency , w1 ’)
30 ylabel ( ’ Spectral Estimate P_ { X1 } ’) ;
31 title ( ’M - V Spectrum for X1 signal ’)
32 figure , plot ( Px2 ) ;
33

34 function Px = min_variance (x , p )
35 x = x (:) ;
36 R = covar (x , p ) ;
37 [v , d ] = eig ( R ) ;
38 U = diag ( inv ( abs ( d ) + eps ) ) ;
39 V = abs ( fft (v ,1024) ) .^2;
40 Px = 10* log10 ( p ) -10* log10 ( V * U ) ;
41 end
Listing 4: Minimum variance method for spectral estimation.

1.6 Inference
Welch’s method is an improvement on the standard periodogram spectrum
estimating method and on Bartlett’s method, in that it reduces noise in the
estimated power spectra in exchange for reducing the frequency resolution.
In this experiment, the frequency resolution for the periodogram method was 651.8986 Hz, while with 100-sample segments and 50% overlap the Welch method gave 571.89 Hz and the Bartlett method remained at 651.8986 Hz.
is a simple averaging method that divides the signal into non-overlapping
segments, applies a window function to each segment, computes the peri-
odogram of each segment, and averages them. Since it doesn’t involve any
additional processing beyond averaging, the frequency resolution is deter-
mined solely by the length of the segments (L) and the sampling frequency
(f s).
The difference in frequency resolution between Bartlett’s method and
Welch’s method depends on the amount of overlap used in the Welch method,
the length of the segments, and the specific characteristics of the signal be-
ing analyzed. Generally, the Welch method tends to provide slightly better
frequency resolution compared to Bartlett’s method due to the overlap of
segments.
In Bartlett’s method, each segment is non-overlapping, meaning that the
entire segment contributes only to its corresponding frequency bins in the
resulting power spectrum. The frequency resolution is directly determined
by the length of the segments.
In the Welch method, segments typically overlap, meaning that each seg-
ment contributes to multiple frequency bins in the resulting power spectrum.
The overlap allows more data to contribute to each frequency bin, resulting
in better frequency resolution compared to Bartlett’s method.
The difference in frequency resolution between the two methods depends
on the overlap factor used in the Welch method. Higher overlap factors
result in more overlap between segments and therefore better frequency reso-
lution, while lower overlap factors result in less overlap and potentially lower
frequency resolution.
The minimum variance method optimizes the window length and overlap
to minimize the variance of the estimate while still ensuring a reasonable
degree of frequency resolution. This method might achieve a better effective
frequency resolution compared to Bartlett’s method by efficiently utilizing

the available data. The BT method combines the advantages of Bartlett’s
method and the periodogram method by using overlapping segments and
applying a window function to each segment. By overlapping segments, the
BT method effectively increases the number of samples contributing to each
frequency bin, which can improve frequency resolution compared to Bartlett’s
method.
In the case of varying the magnitudes V1 and V2 of the sinusoidal components in the input signal, the power spectral density estimate will generally show higher power at the frequency corresponding to the sinusoid with the larger amplitude. However, this can also be affected by the phase difference between the sinusoids under consideration and by the presence of noise.
When the variance of the noise signal increases, it means that the noise
signal has higher amplitude fluctuations. As a result, the noise will contribute
more power to the overall signal. The increased noise power may mask or
obscure weaker signal components.
With higher noise variance, the noise components contribute more to each
segment’s periodogram, potentially increasing the variability or uncertainty
in the estimated PSD. However, if the noise is stationary and uncorrelated
across segments, the averaging process may still reduce the noise contribution
and provide a reliable PSD estimate in Welch and Bartlett method.
Increasing noise variance can result in higher power spectral density values
at frequencies corresponding to the noise components. The periodogram’s
estimate may become less reliable as noise overwhelms the signal, especially
if the noise is non-stationary.

Sl. No. Method Frequency resolution (average)


1 Periodogram 1Hz
2 Bartlett 100Hz
3 Welch 52.63Hz
4 Blackman - Tuckey 1 Hz
5 MVM method 0.00333Hz

Table 1: Variation in frequency resolution across the spectrum estimation methods, based on a sampling frequency of 1 kHz.

Figure 1: The input signal X1 with ω2 = 0.15π.

Figure 2: The input signal X2 with ω2 = 0.2π.

Figure 3: Periodogram for signal X1 with ω2 = 0.15π.

Figure 4: Periodogram for signal X2 with ω2 = 0.2π.

Figure 5: Power spectral density P (X1 ) estimation based on Bartlett method
for signal with ω2 = 0.15π.

Figure 6: Power spectral density P (X2 ) estimation based on Bartlett method


and 50 realizations were generated.

Figure 7: Power spectral density estimation based on the Welch method for the X1 signal with ω2 = 0.15π, where 50 realizations were studied and averaged to obtain Pavg(x(e^{jω})).

Figure 8: Power spectral density estimation based on Welch method for X2


signal with ω2 = 0.2π.

Figure 9: The power spectral density estimation P (X1 ) based on non - para-
metric Minimum Variance Method for signal X1 with ω2 = 0.15π.

Figure 10: The power spectral density estimation P (X2 ) based on non -
parametric Minimum Variance Method for signal X2 with ω2 = 0.2π.

Figure 11: The power spectral density estimate P(X1) based on the non-parametric Blackman-Tukey method for signal X1 with ω2 = 0.15π.

Figure 12: The power spectral density estimate P(X2) based on the non-parametric Blackman-Tukey method for signal X2 with ω2 = 0.2π.

Figure 13: PSD estimation based on the periodogram method for varied V1 and V2 values (sinusoid amplitudes); panels (a)-(f) show combinations of V1, V2 in {5 V, 10 V} for ω2 = 0.15π and ω2 = 0.2π.
Figure 14: PSD estimation based on the Bartlett method for varied V1 and V2 values (sinusoid amplitudes), for ω2 = 0.15π and ω2 = 0.2π.
Figure 15: PSD estimation based on the Welch method for varied V1 and V2 values (sinusoid amplitudes), for ω2 = 0.15π and ω2 = 0.2π.
Figure 16: PSD estimation based on the Blackman-Tukey method for varied V1 and V2 values (sinusoid amplitudes), for ω2 = 0.15π and ω2 = 0.2π.
Figure 17: PSD estimation based on the Minimum Variance Method (MVM) for varied V1 and V2 values (sinusoid amplitudes), for ω2 = 0.15π and ω2 = 0.2π.
Figure 18: PSD estimation based on the periodogram method for varied noise variance σ² (0.25 to 0.9), for ω2 = 0.15π and ω2 = 0.2π.
Figure 19: PSD estimation based on the Bartlett method for varied noise variance σ² (0.25 to 0.9), for ω2 = 0.15π and ω2 = 0.2π.
Figure 20: PSD estimation based on the Welch method for varied noise variance σ² (0.25 to 0.9), for ω2 = 0.15π and ω2 = 0.2π.
Figure 21: PSD estimation based on the Blackman Tukey method for varied noise variance σ². Panels (a)-(f) show σ² ∈ {0.25, 0.5, 0.9} at ω2 = 0.15π and ω2 = 0.2π.
Figure 22: PSD estimation based on the Minimum Variance method (MVM) for varied noise variance σ². Panels (a)-(h) show σ² ∈ {0.1, 0.25, 0.75, 0.9} at ω2 = 0.15π and ω2 = 0.2π.
Conclusion
The methods of Bartlett and Welch are designed to reduce the variance of the periodogram by averaging periodograms and modified periodograms, respectively; Welch's method reduces the variance further than Bartlett's method. In the minimum variance (MV) method, the power spectrum is estimated by filtering the process with a bank of narrowband bandpass filters. The motivation for this approach can be seen by considering, once again, the effect of filtering a WSS random process with a narrowband bandpass filter: the larger the order of the filter, the better it rejects out-of-band power. For a pth-order filter, the MV spectrum estimate requires evaluating the inverse of a (p + 1) × (p + 1) autocorrelation matrix Rx. The filter order is limited to p ≤ N for an N-length data record.
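For reference, a minimal sketch of the MV estimate described above is given below; the function name mv_psd and the inputs x (data record), p (filter order), and Nfft (number of frequency points) are illustrative names and are not part of the lab code.

% Minimal sketch of the minimum variance (MV) spectrum estimate,
% P_MV(e^jw) = (p+1) / (e^H * inv(Rx) * e), assuming a real data record x.
function Px = mv_psd(x, p, Nfft)
    x = x(:);
    N = length(x);
    % Biased autocorrelation estimates for lags 0..p
    r = zeros(p+1, 1);
    for k = 0:p
        r(k+1) = (x(1:N-k)' * x(1+k:N)) / N;
    end
    Rx = toeplitz(r);               % (p+1) x (p+1) autocorrelation matrix
    Rinv = inv(Rx);                 % requires Rx to be well conditioned
    w = 2*pi*(0:Nfft-1)/Nfft;       % frequency grid
    Px = zeros(1, Nfft);
    for m = 1:Nfft
        e = exp(1j*w(m)*(0:p)).';   % complex exponential (steering) vector
        Px(m) = real((p+1) / (e' * Rinv * e));
    end
end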

National Institute of Technology
Calicut

EC6405E
Statistical Signal Processing

Lab Experiment 4

Student: Aneeta Christopher (P230512EC)
Faculty in charge: Dr. Deepthi P. P.
Aim
Recursive Least Squares Estimation
Consider the adaptive equalization of a linear dispersive channel. Channel
input is an equiprobable random binary sequence of zeroes and ones coded
as ±1. Channel additive noise is white Gaussian with variance V . Let the
channel impulse response be given by,
h(n) = (1/2)[1 + cos((2π/W)(n − 2))],  for n = 1, 2, 3
h(n) = 0,  elsewhere

where W controls the amplitude distortion.

Part (i)
Choose V to have SNR as 30 dB. Vary W values as W = 2.9, 3.1, 3.3, and 3.5.
Formulate Recursive Least Squares filtering for channel equalization with an
11-tap FIR filter to remove ISI. Use regularization parameter δ = 0.004 and
forget factor λ = 1. Sketch the convergence of RLS by plotting the variation
of MMSE with iteration number for 50 iterations for each W .

Part (ii)
Vary the SNR to 20 dB and repeat the experiment in Part (i).

Theory
Recursive Least Squares Method
In adaptive filter theory, the Recursive Least Squares (RLS) algorithm is
a powerful method for estimating the parameters of a linear model in a
recursive and computationally efficient manner. Consider a linear model
given by:

y(n) = wᵀ(n)x(n) + e(n)    (1)


where:
• y(n) is the observed response at time n,

• w(n) is the weight vector to be estimated,

• x(n) is the input vector at time n,

• e(n) is the error term assumed to be uncorrelated with x(n).

The goal is to find an estimate ŵ(n) of the true weight vector w(n) by
minimizing the mean squared error (MSE) criterion:

J(w(n)) = E{e²(n)}    (2)


The RLS algorithm updates the weight vector iteratively by recursively
computing the inverse of the correlation matrix of the input vector. The
update equation for RLS is given by:

w(n + 1) = w(n) + K(n)e(n) (3)


K(n + 1) = P(n)x(n)[λ + xᵀ(n)P(n)x(n)]⁻¹    (4)

where:

• K(n) is the gain vector,

• P(n) is the inverse correlation matrix,

• λ is the forgetting factor which controls the memory of past observa-


tions.

The RLS algorithm provides a robust and stable approach for adapt-
ing the filter coefficients in real-time applications. By recursively updating
the inverse correlation matrix, RLS achieves fast convergence and low com-
putational complexity, making it particularly suitable for applications with
changing environments or non-stationary signals.
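As a compact illustration of equations (3) and (4), a single RLS update can be written as in the sketch below; the function and variable names are illustrative, and the sketch assumes a column input vector x_n, a scalar desired response d_n, and the current inverse correlation matrix P.

% Minimal sketch of one RLS update step (illustrative names, not the lab code below).
function [w, P, e] = rls_update(w, P, x_n, d_n, lambda)
    Px = P * x_n;
    k  = Px / (lambda + x_n' * Px);      % gain vector K(n)
    e  = d_n - w' * x_n;                 % a priori error e(n)
    w  = w + k * e;                      % weight update w(n+1) = w(n) + K(n) e(n)
    P  = (P - k * (x_n' * P)) / lambda;  % recursive update of the inverse correlation matrix
end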

Procedure
To solve the problem of adaptive equalization of a linear dispersive chan-
nel using Recursive Least Squares (RLS) filtering, the following steps are
undertaken:

• Step 1: Channel Model

– Define the channel impulse response h(n) using the given expres-
sion:
h(n) = (1/2)[1 + cos((2π/W)(n − 2))],  for n = 1, 2, 3
h(n) = 0,  elsewhere
where W controls the amplitude distortion.

• Step 2: Noise Model

– Assume the channel additive noise to be white Gaussian with vari-


ance V .

• Step 3: Signal Model

– Consider the channel input as an equiprobable random binary


sequence of zeroes and ones coded as ±1.

• Step 4: Filter Model

– Utilize an 11-tap FIR filter for channel equalization.

• Step 5: RLS Algorithm Parameters

– Set the regularization parameter δ = 0.004 and forget factor λ = 1.

• Part (i) - Convergence of RLS

– For Part (i), where SNR is fixed at 30 dB and W varies, the


convergence of RLS is analyzed. This involves formulating the
RLS algorithm for channel equalization and plotting the variation
of Mean Squared Error (MMSE) with the iteration number for 50
iterations for each W value.

• Part (ii)

– In Part (ii), the experiment is repeated with the SNR varied to 20


dB while keeping the procedure similar to Part (i).
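One detail implicit in Parts (i) and (ii) is how the noise variance V follows from the chosen SNR. Assuming unit signal power (the ±1 channel input), a minimal sketch consistent with the code below is:

% Noise variance from the target SNR, assuming unit signal power.
SNR_dB = 30;                    % use 20 for Part (ii)
V = 10^(-SNR_dB / 10);          % V = 0.001 at 30 dB, 0.01 at 20 dB
v = sqrt(V) * randn(1, 1000);   % white Gaussian noise samples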

MATLAB Code and Explanation
clc
close all
clear all

% Define the length of the sequence
sequenceLength = 1000;   % Change this to the desired length

% Generate a random binary sequence coded as +/-1
x = randi([0, 1], 1, sequenceLength);
x(x == 0) = -1;
% Display the generated sequence
disp("Binary Sequence (x):");
disp(x);

% Channel responses for n = 1, 2, 3 and W = [2.9, 3.1, 3.3, 3.5]
W_values = [2.9, 3.1, 3.3, 3.5];
n_values = 1:sequenceLength;   % Extend n_values up to sequenceLength

% Initialize h matrix
h = zeros(length(W_values), sequenceLength);

% Compute h for each W value
for i = 1:length(W_values)
    for j = 1:length(n_values)
        h(i, j) = channel_fn(n_values(j), W_values(i));
    end
end

% Convolve x with h for each W value
conv_results = zeros(length(W_values), sequenceLength);
for i = 1:length(W_values)
    conv_results(i, :) = conv(x, h(i, :), 'same');
end

% Define the SNR in dB
SNR_dB = 30;

% Generate AWGN with the specified SNR (unit signal power assumed)
noise_power = 1 / 10^(SNR_dB / 10);                       % Noise power based on SNR
noise = sqrt(noise_power) * randn(size(conv_results));    % Scale the noise to the desired SNR

% Add noise to the convolution results
x_out_with_noise = conv_results + noise;
disp("x_out_with_noise");
disp(x_out_with_noise);

% Delay x_out_with_noise by skipping the first two samples
x_out_with_noise_delayed = x_out_with_noise(:, 3:end);

% Display the delayed channel output x(n)*h(n) + v(n) with AWGN
disp("Delayed channel output (x conv h + v) with AWGN:");
disp(x_out_with_noise_delayed);

iterations = 100;
% Initialize array to store MMSE values for each W
mmse_values = zeros(iterations, length(W_values));

% Define parameters for RLS filter
delta = 0.004;     % Regularization parameter
lambda = 1.0;      % Forgetting factor (lambda = 1 as specified)
tap_length = 11;   % FIR filter tap length
% Loop over each value of W
w_val_acc = zeros(iterations, tap_length, length(W_values));   % Store weight trajectories for later plots
for w_idx = 1:length(W_values)
    % Initialize RLS filter coefficients and matrices
    P = eye(tap_length) / delta;   % Initial inverse correlation matrix P(0) = (1/delta) I
    w = zeros(tap_length, 1);      % Weight vector initialized to zero

    % RLS algorithm with delays
    for k = 1:iterations
        % Sliding window of the received signal with a delay of 5
        x_k = x_out_with_noise_delayed(w_idx, max(1, k - tap_length + 1 - 5):k - 5)';
        if length(x_k) < tap_length
            x_k = [zeros(tap_length - length(x_k), 1); x_k];   % Zero-padding
        end

        % Desired signal with a delay of 7
        d_k = x_out_with_noise(w_idx, max(1, k - 7));

        % RLS update equations
        P_x_k = P * x_k;
        kal_gain = P_x_k / (lambda + x_k' * P_x_k);
        prior_e = d_k - w' * x_k;
        w = w + kal_gain * prior_e;
        P = (P - kal_gain * x_k' * P) / lambda;

        % Store the weight vector and the squared a priori error (MMSE)
        w_val_acc(k, :, w_idx) = w';
        mmse_values(k, w_idx) = prior_e^2;
    end
end
% Plot MMSE convergence for each W with specified colors
figure;
colors = ['k', 'b', 'r', 'g'];   % One color per W value
for w_idx = 1:length(W_values)
    valid_indices = mmse_values(:, w_idx) > 0;   % Keep strictly positive values for the dB plot
    plot(find(valid_indices), 10*log10(mmse_values(valid_indices, w_idx)), ...
        'LineWidth', 1, 'color', colors(w_idx));
    hold on;
end

title(sprintf('Convergence of RLS Filter (MMSE vs Iteration) for SNR %d dB', SNR_dB));
xlabel('Iteration');
ylabel('Mean Squared Error (MMSE) [dB]');
legend(arrayfun(@(W) sprintf('W = %.1f', W), W_values, 'UniformOutput', false), ...
    'Location', 'best');
grid on;
hold off;

% Plot the weight trajectories for each W value (SNR = 30 dB run)
figure;
for i = 1:length(W_values)
    subplot(length(W_values), 1, i);   % One subplot per W value
    for j = 1:tap_length
        data = squeeze(w_val_acc(:, j, i));
        plot(1:length(data), data, 'DisplayName', ...
            ['Tap ' num2str(j) ', W = ' num2str(W_values(i))]);
        hold on;
    end
    hold off;
    title(['Variation with Iterations for W = ' num2str(W_values(i))]);
    xlabel('Iterations');
    ylabel('Weight value');
end
sgtitle(sprintf('Variation of Weight values with Iterations for Different eigenvalue-spread W values, SNR = %d dB', SNR_dB));

% A second set of weight trajectories (w_val_acc_2) is obtained by re-running
% the RLS loop above with SNR_dB = 20 and storing the weights in w_val_acc_2.
if exist('w_val_acc_2', 'var')
    figure;
    for i = 1:length(W_values)
        subplot(length(W_values), 1, i);
        for j = 1:tap_length
            data = squeeze(w_val_acc_2(:, j, i));
            plot(1:length(data), data, 'DisplayName', ...
                ['Tap ' num2str(j) ', W = ' num2str(W_values(i))]);
            hold on;
        end
        hold off;
        title(['Variation with Iterations for W = ' num2str(W_values(i))]);
        xlabel('Iterations');
        ylabel('Weight value');
    end
    sgtitle('Variation of Weight values with Iterations for Different W values, SNR = 20 dB');
end

% Channel impulse response h(n): nonzero only for n = 1, 2, 3
function result = channel_fn(n, W)
    if n >= 1 && n <= 3
        result = 0.5 * (1 + cos((2*pi/W) * (n - 2)));
    else
        result = 0;   % h(n) = 0 elsewhere
    end
end

Inference
The Recursive Least Squares (RLS) algorithm converges in about 2M iterations, where M is the filter length, so its rate of convergence is typically an order of magnitude faster than that of the LMS algorithm. The convergence rate of RLS is relatively insensitive to variations in the eigenvalue spread, i.e., to W. The steady-state MMSE produced by the RLS algorithm is small, and smaller than that of the LMS algorithm, so the RLS algorithm produces (at least theoretically) zero misadjustment. As the number of iterations n approaches infinity, the mean-square error approaches a final value equal to the variance σ² of the measurement error; in other words, the RLS algorithm in theory produces zero excess mean-square error (equivalently, zero misadjustment). The key findings and conclusions from this experiment are as follows:

• Adaptive Filtering Capability: The convergence of weights reflects


the RLS filter’s ability to adapt to changing channel conditions and
noise levels over time. As the filter receives more iterations of data,
it adjusts its coefficients (weights) to minimize error and improve the
accuracy of signal estimation.

• Dynamic Response to Channel Changes: The weight conver-


gence implies that the RLS filter can dynamically respond to varia-
tions in the channel response (represented by different W values) and
effectively compensate for distortions introduced by the channel.

• Enhanced Signal Recovery: Converging weights indicate an improved


ability to recover the transmitted signal from the noisy received signal.
This enhanced signal recovery capability is crucial for ensuring reli-
able communication, especially in scenarios with challenging channel
conditions or lower SNR levels.

Figure 1: The plot of MMSE versus number of iterations for SNR of 30 dB
for signal x(n).

• Optimization of Filter Parameters: The convergence of weights also suggests that the filter's parameters, such as the regularization parameter (delta), forgetting factor (lambda), and tap length (tap_length), are appropriately configured to facilitate convergence and minimize error.

Conclusion
In this experiment, we simulated a communication system to analyze the performance of a Recursive Least Squares (RLS) filter in mitigating noise and recovering a transmitted binary sequence. RLS converges within about twice the number of filter taps, which is faster than comparable gradient-based algorithms such as LMS. The eigenvalue spread (controlled by W) has little effect on the convergence of the weight vector in the RLS algorithm.

Figure 2: The plot of MMSE versus number of iterations for SNR of 20 dB
for signal x(n).

Figure 3: The variation of weight values with iterations as the eigenvalue spread W is varied, for SNR = 20 dB.

Figure 4: The variation of weight values with iterations as the eigenvalue spread W is varied, for SNR = 30 dB.

National Institute of Technology
Calicut

EC6405E
Statistical Signal Processing

Lab Experiment 5

Student: Aneeta Christopher (P230512EC)
Faculty in charge: Dr. Deepthi P. P.
Aim
Kalman Filter
Design a Kalman filter to track a moving object following the track given by
the function:
x(t) = 0.1(t² − t)
Consider the process equation X̄(n) = A(n − 1)X̄(n − 1) + W (n). Here
the state vector X̄(n) = [x1 (n) x2 (n)]T where x1 (n) is the position along
the X-axis and x2 (n) is the velocity along the X-axis. The state transition
matrix A(n − 1) is given by:
 
A(n − 1) = [1  T; 0  1]

The measurement is the position along the X direction, which is a noisy observation of x1(n), given as:

Ȳ(n) = [1  0] [x1(n); x2(n)] + v(n)

Consider W (n) to have zero mean with variance 0.2, while v(n) has zero
mean and unit variance.
(i) Derive Kalman prediction and filtering (updation) expressions.
(ii) Track the object and sketch the actual position as well as the tracked
position at various instances.
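For reference, the state-space model specified above can be set up as in the sketch below; the sampling interval T = 1 is an assumed value (not fixed by the problem statement), and the main code later in this report uses its own numerical choices.

% Illustrative setup of the state-space matrices for the tracking model above.
T = 1;                    % assumed sampling interval
A = [1 T; 0 1];           % state transition matrix A(n-1)
C = [1 0];                % measurement matrix
Q = 0.2 * eye(2);         % process noise covariance (variance 0.2)
R = 1;                    % measurement noise variance (unit variance)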

Theory
Kalman Filter
The Kalman filter is a widely used algorithm for estimating the state of a
dynamic system from noisy measurements. It is an optimal recursive filter
that uses Bayesian estimation to compute the state of the system at each
time step, based on the previous state and the current measurements.

Kalman Filter Equations
Consider a linear dynamic system described by the following equations:

X̄(n) = A(n − 1)X̄(n − 1) + B(n − 1)U (n − 1) + W (n)


Ȳ (n) = C(n)X̄(n) + V (n)

where:
• X̄(n) is the state vector at time n,

• A(n) is the state transition matrix,

• B(n) is the control-input matrix,

• U (n) is the control vector,

• C(n) is the measurement matrix,

• Ȳ (n) is the measurement vector at time n,

• W (n) is the process noise with covariance matrix Q(n),

• V (n) is the measurement noise with covariance matrix R(n).


The Kalman filter operates in two main steps: prediction and update.

Prediction Step
In the prediction step, the Kalman filter predicts the state of the system at
the next time step based on the previous state and control inputs:

X̂⁻(n) = A(n − 1)X̂(n − 1) + B(n − 1)U(n − 1)

P⁻(n) = A(n − 1)P(n − 1)Aᵀ(n − 1) + Q(n − 1)

where:
• X̂⁻(n) is the predicted state estimate,

• P⁻(n) is the predicted error covariance,

• X̂(n − 1) is the previous state estimate,

• P(n − 1) is the previous error covariance.

Update Step
In the update step, the Kalman filter incorporates the measurement infor-
mation to improve the state estimate:

K(n) = P⁻(n)Cᵀ(n)[C(n)P⁻(n)Cᵀ(n) + R(n)]⁻¹

X̂(n) = X̂⁻(n) + K(n)[Y(n) − C(n)X̂⁻(n)]
P(n) = (I − K(n)C(n))P⁻(n)

where:
• K(n) is the Kalman gain,

• X̂(n) is the updated state estimate,

• P(n) is the updated error covariance,

• Y(n) is the measurement at time n.
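As a compact illustration, one prediction and update cycle implementing the equations above can be sketched as follows; the function name kalman_step is illustrative, and the sketch assumes no control input (B(n) = 0), as in the tracking problem of this experiment.

% Minimal sketch of one Kalman prediction and update cycle (no control input).
% x_est : previous state estimate, P : previous error covariance,
% y     : current measurement, A, C, Q, R : model matrices.
function [x_est, P] = kalman_step(x_est, P, y, A, C, Q, R)
    % Prediction
    x_pred = A * x_est;
    P_pred = A * P * A' + Q;
    % Update
    K = P_pred * C' / (C * P_pred * C' + R);   % Kalman gain
    x_est = x_pred + K * (y - C * x_pred);     % updated state estimate
    P = (eye(size(P, 1)) - K * C) * P_pred;    % updated error covariance
end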

Procedure
1. Define the dynamics of the system: Specify the state transition matrix
A(n), measurement matrix C(n), process noise covariance matrix Q(n),
and measurement noise covariance matrix R(n).

2. Generate the trajectory of the object: Define the function x(t) repre-
senting the object’s position over time.

3. Simulate measurements: Generate noisy measurements of the object’s


position using the function x(t) and add Gaussian noise.

4. Initialize Kalman filter: Set initial state estimate X̂(0) and initial error
covariance matrix P (0).

5. Perform prediction and update steps: Iterate through each time step,
predicting the state and updating based on measurements.

6. Track the object: Plot the actual position and the tracked position at
various instances to visualize the performance of the Kalman filter.

MATLAB Code and Explanation
% Define system dynamics
dt = 0.10;
F = [1, dt; 0, 1];        % State transition matrix A
H = [1, 0];               % Measurement matrix C
Q = [0.2, 0; 0, 0.20];    % Process noise covariance
R = 0.5;                  % Measurement noise variance assumed by the filter

% Generate synthetic data
t = linspace(0, 100, 100);
True_position = 0.1 * (t.^2 - t);
measurements = 0.1 * (t.^2 - t) + 10 * randn(1, 100);   % Add Gaussian measurement noise

% Initialize Kalman filter
x = [0; 0];      % Initial state estimate
P = eye(2);      % Initial state covariance estimate
predictions = zeros(1, length(measurements));

% Kalman filter loop
for i = 1:length(measurements)
    % Predict step
    x = F * x;
    P = F * P * F' + Q;

    % Update step
    y = measurements(i) - H * x;   % Innovation
    S = H * P * H' + R;            % Innovation covariance
    K = P * H' / S;                % Kalman gain
    x = x + K * y;
    P = (eye(2) - K * H) * P;

    predictions(i) = H * x;
end

% Plot results
figure;
plot(t, True_position, 'g', 'LineWidth', 1.5, 'DisplayName', 'True Position');
hold on;
plot(t, predictions, 'r', 'LineWidth', 1.5, 'DisplayName', 'Kalman Filter Prediction');
hold off;
title('1D Object Tracking');
xlabel('Time');
ylabel('Position');
grid on;
legend('Location', 'best');

Figure 1: The tracking (red) and true (blue) value of the position of the object along the X axis.

Inference
The code demonstrates the implementation of a Kalman filter for tracking the position of an object in 1D space. Since the true and predicted values coincide after a few update steps, the filter is able to estimate the system's state accurately despite noise and uncertainty. This highlights the effectiveness of the Kalman filter in state estimation tasks, showing its ability to mitigate the impact of noise and uncertainty and to track and predict the system's behavior accurately.

Conclusion
The Kalman filter, known for its optimal estimation and noise robustness,
was implemented to track the position of an object in 1D space.

• Optimal State Estimation: The Kalman filter uses an optimal es-


timation algorithm that minimizes the mean squared error between
the estimated state and the true state, leading to convergence as more
measurements are incorporated.

• Measurement Incorporation: The Kalman filter continuously in-


corporates new measurements into its state estimation process, adjust-
ing the state estimate based on the latest information. This adaptive
nature helps the filter converge to the true state over time.

• Process and Measurement Models: Accurate modeling of the sys-


tem’s dynamics (process model) and measurement characteristics (mea-
surement model) improves the filter’s ability to converge. When the
models align well with the actual system behavior and measurement
properties, convergence is more likely.

• Kalman Gain Adjustment: The Kalman gain dynamically adjusts


based on the covariance matrices and measurement noise, ensuring that
more weight is given to reliable measurements while reducing the im-
pact of noisy or uncertain measurements. This adaptive gain helps in
convergence.

• Initial Conditions: Proper initialization of the state estimate x̂(0)


and covariance matrix P (0) is crucial for convergence. When the initial
conditions are close to the true state x(0) and the initial covariance
reflects the expected uncertainty, the filter converges faster.

• Noise Characterization: Accurate characterization of process noise


Q and measurement noise covariance matrices R helps the filter effec-
tively handle uncertainties and noise, leading to smoother convergence.

• Observability and Controllability: Systems that are well-observable
and controllable facilitate convergence as the Kalman filter can effec-
tively estimate and control the system’s state based on available mea-
surements and control inputs.
