SSP Lab Exps Merged
National Institute of Technology
Calicut
EC6405E
Statistical Signal Processing
Lab Experiment 1
$$r_x(k) = P\,\alpha^{|k|}$$
d. Consider the noise autocorrelation function as $r_w(k) = \sigma^2\,\delta(k)$.
i. Choose signal power P = 2, autocorrelation decay rate α = 0.4, attenuation factor A = 0.1, delay of echo D = 1, and noise variance σ² = 0.25.
ii. Sketch the PSD of x(n) and y(n).
iii. Design an FIR Wiener filter with a tapped-delay-line structure having L = 2 taps. Find the filtered output signal x̂(n). Sketch x(n), x̂(n), and the PSD of x̂(n).
iv. Find the minimum mean squared error (MMSE) and the improvement in SNR achieved through filtering.
v. Repeat steps (ii) to (iv) with different values of the parameters P, α, A, and σ². Increase the number of filter taps L and check the SNR improvement.
vi. Repeat steps (ii) to (iv) for an IIR Wiener filter with any one set of P, α, A, and σ² from step (v), and compare its performance with that of the corresponding FIR filter.
2 Theory
In statistical signal processing, the autocorrelation function rx (k) measures
the similarity between a signal and a time-shifted version of itself. The Power
Spectral Density (PSD) represents the distribution of power across different
frequencies in a signal.
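To make these definitions concrete, here is a minimal sketch (the parameter values follow the problem statement; the PSD is computed as the DTFT of the autocorrelation sequence, and the lag range is an arbitrary illustrative choice):

% Sketch: autocorrelation r_x(k) = P*alpha^|k| and its PSD
P = 2; alpha = 0.4;                 % signal power and decay rate (from the problem)
k = -20:20;                         % lag range (chosen for illustration)
rx = P * alpha.^abs(k);             % autocorrelation sequence
w = linspace(-pi, pi, 512);         % frequency grid
Px = real(rx * exp(-1j * k' * w));  % DTFT of r_x(k); real by even symmetry
subplot(2,1,1); stem(k, rx); xlabel('k'); ylabel('r_x(k)');
subplot(2,1,2); plot(w, Px); xlabel('\omega'); ylabel('P_x(e^{j\omega})');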
Direct method
We consider the problem of modeling a deterministic signal, x(n), as the
unit sample response of a linear shift-invariant filter, h(n), having a rational
system function of the form given in Eq. (4.3). Thus, v(n) in Fig. 4.1 is taken
to be the unit sample, δ(n). We will assume, without any loss in generality,
that x(n) = 0 for n < 0 and that the filter h(n) is causal. Denoting the
modeling error by $e'(n) = x(n) - h(n)$, the least squares error to be minimized is
$$\mathcal{L}_s = \sum_{n=0}^{\infty} \left| e'(n) \right|^2.$$
(Note that since $h(n)$ and $x(n)$ are both assumed to be zero for $n < 0$, $e'(n) = 0$ for $n < 0$ and the summation begins at $n = 0$.)
A necessary condition for the filter coefficients ap (k) and bq (k) to minimize
the squared error is that the partial derivative of Ls with respect to each of
the coefficients vanish, i.e.,
$$\frac{\partial \mathcal{L}_s}{\partial a_p(k)} = 0 \ \ \text{for } k = 1, \dots, p, \qquad \frac{\partial \mathcal{L}_s}{\partial b_q(k)} = 0 \ \ \text{for } k = 0, 1, \dots, q.$$
Using Parseval's theorem, the least squares error may be written in the frequency domain in terms of $E'(e^{j\omega})$, the Fourier transform of $e'(n)$, as follows:
$$\mathcal{L}_s = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| E'(e^{j\omega}) \right|^2 d\omega$$
so that
$$\frac{\partial \mathcal{L}_s}{\partial a_p(k)} = -2 \int_{-\pi}^{\pi} E'(e^{j\omega})\, E'^{*}(e^{j\omega}) \sin(k\omega)\, d\omega = 0 \quad \text{for } k = 0, 1, \dots, p.$$
Noting that
$$\frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j\omega n}\, d\omega = \delta(n),$$
it can be shown that the resulting equations are nonlinear in the filter coefficients. The least squares approach is therefore not mathematically tractable and is not amenable to real-time signal processing applications.
2.1 Procedure
2.2 Code and Explanation
% Parameters
P = 2;            % Signal power
A = 0.4;          % Autocorrelation decay rate (alpha in the problem statement)
N = 10;           % Signal length
A_d = 0.1;        % Echo attenuation factor
D = 1;            % Echo delay
v = randn(1, N);  % White noise signal
% (computation of the PSD P_x over the frequency grid w is truncated in the original)
plot(w, P_x);
xlabel('w');
ylabel('P_x(e^{jw})');
title('Power Spectral Density of the generated signal');
grid on;
% (truncated in the original: plot titled 'Power Spectral Density of the input signal')
% Parameters
P = 2;            % Signal power
A = 0.4;          % Autocorrelation decay rate
N = 10;           % Signal length
A_d = 0.1;        % Echo attenuation factor
D = 1;            % Echo delay
v = randn(1, N);  % White noise signal
% Additional parameters
w_variance = 0.25;     % Variance of the noise signal
w_n = zeros(1, N);     % Colored noise from a moving average of v(n)
for ri = 2:N-1
    w_n(ri) = v(ri) + 0.5 * (v(ri - 1) + v(ri + 1));
end
x = zeros(1, N);       % AR(1) signal with decay rate A
for n = 2:N
    x(n) = A * x(n-1) + w_n(n);
end
y = zeros(1, N);       % Observed signal: x(n) plus echo and noise
y(1) = x(1) + w_n(1);
for n = 2:N
    y(n) = x(n) + A_d * x(n-D) + w_n(n);
end
% (r_w is defined in the truncated portion above)
figure;
stem(-5:4, r_w, 'LineWidth', 2);
xlabel('n');
ylabel('r_w(n)');
title('Autocorrelation function of colored noise, r_w(n)');
grid on;
% Wiener-Hopf solution: w = R_y^{-1} r_x (ry_mat and ac(...) are defined earlier)
w_fil = ry_mat \ transpose(ac(0:z-1, P, A));
L_taps = length(w_fil);        % number of filter taps
x_estimate = zeros(1, N);
for o = 1:N
    x_estimate(o) = w_fil(1) * y(o);
    for k = 2:min(o, L_taps)   % tapped delay line over available past samples
        x_estimate(o) = x_estimate(o) + w_fil(k) * y(o - k + 1);
    end
end
SNR_rec_dB = 10 * log10(SNR_rec);
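The computation of SNR_rec feeding the line above is truncated in the original; a sketch consistent with it (the names other than SNR_rec are hypothetical, and SNR is taken as signal power over error power):

% Sketch: SNR before and after filtering
SNR_in     = sum(x.^2) / sum((y - x).^2);            % input SNR: echo + noise as error
SNR_in_dB  = 10 * log10(SNR_in);
SNR_rec    = sum(x.^2) / sum((x_estimate - x).^2);   % recovered-signal SNR
SNR_imp_dB = 10 * log10(SNR_rec) - SNR_in_dB;        % SNR improvement through filtering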
% Variables
P = 2;          % Signal power
Alpha = 0.4;    % Autocorrelation decay rate
N = 100;        % Signal length
a = 0.1;        % Echo attenuation factor
sigma = 0.5;    % Noise standard deviation (sigma^2 = 0.25)
Fs = 8000;      % Sampling frequency (Hz)
D = 1;          % Echo delay
k = -N/2:N/2;
rx = P * Alpha.^abs(k);
subplot(3,1,2);
stem(k, rx);
xlabel("k");
ylabel("r_x(k)");
title("Autocorrelation function of x(n)");
grid on;
plot(f, P_y, ’LineWidth’, 2);
xlabel(’Frequency (Hz)’);
ylabel(’P_y(e^{j\omega})’);
title(’Power Spectral Density of y(n)’);
grid on;
x_hat = zeros(1, N);   % Filtered estimate using the precomputed filter coefficients
for n = 2:N-1
    x_hat(n) = 0.08 * y(n+1) + 0.74 * y(n) + 0.01 * y(n-1);
end
subplot(3,1,2);
stem(1:N, x_hat);
xlabel(’n’);
ylabel(’x_{hat}(n)’);
title(’Estimated Signal x_{hat}(n)’);
grid on;
% Calculate and display Mean Squared Error (MSE)
MSE = 0;
for n = 1:N
MSE = MSE + (x(n) - x_hat(n))^2;
end
MMSE = MSE / N;
fprintf('Minimum mean squared error (MMSE): %.4f\n', MMSE);
2.3 Inference
The filter performance was evaluated for varying values of the echo attenuation factor A_d, the signal power P, and the correlation coefficient α, and the results are tabulated for comparison.
From Table 1, we can observe that as the correlation between the signal samples increases, the noise reduction becomes much more prominent.

Sl. No.   Decay rate α   SNR before filtering (dB)   SNR after filtering (dB)
1         0.1            8.696                       9.5212
2         0.2            8.696                       9.544
3         0.3            8.696                       9.5788
4         0.4            8.696                       9.62
5         0.5            8.696                       9.699
6         0.6            8.696                       9.8049
7         0.7            8.696                       9.969
8         0.8            8.696                       10.279
9         0.9            8.696                       10.908
10        0.99           8.696                       13.58

Table 1: Variation in noise reduction as the autocorrelation decay rate α increases from 0.1 to 0.99.

From Table 2 we can deduce that as the echo becomes more prominent in the filter input, the denoising effect increases, but the achieved improvement falls off roughly linearly as the correlated noise component, in the form of the echo, grows.

From the analysis of Table 4 we can conclude that as the share of uncorrelated noise in the input signal rises, noise filtering with the MMSE method becomes considerably more difficult.
3 Conclusion
The desired signal could be recovered with good efficiency when the echo component is comparatively more prominent than the uncorrelated noise element.
Figure 2: Input signal to the noise filtering containing echo component and
the uncorrelated noise element.
Figure 4: Power spectral density of the generated signal y(n)
Figure 7: Error comparison with respect to the original signal x(n) and the noisy signal y(n).
Figure 8: The input signal y(n) for the IIR filter, its autocorrelation function
rx (n) and its power spectral density Px (n).
Sl. No.   Echo attenuation factor A_d   SNR before filtering (dB)   SNR after filtering (dB)
1         0.1                           8.69                        9.628
2         0.2                           7.82                        9.57
3         0.3                           6.67                        9.44
4         0.4                           5.45                        9.26
5         0.5                           4.25                        9.01
6         0.6                           3.14                        8.625
7         0.7                           2.11                        8.187
8         0.8                           1.16                        7.688
9         0.9                           0.2919                      7.1587

Table 2: Variation in noise reduction as the echo attenuation factor A_d increases from 0.1 to 0.9.
The colored-noise representation in the signal led to much better noise filtering than the white noise in the current experimentation. IIR filtering has a much better noise-reduction property compared with FIR filtering.
Sl. No.   Noise variance σ²   SNR before filtering (dB)   SNR after filtering (dB)
1         0.1                 12.218                      13.2392
2         0.25                8.69                        9.62
3         0.3                 7.95                        8.95
4         0.4                 6.78                        7.9172
5         0.5                 5.8503                      7.1512
6         0.6                 5.0864                      6.5509
7         0.7                 4.437                       6.06
8         0.8                 3.872                       5.65
9         0.9                 2.921                       4.381

Table 4: Variation in noise reduction as the noise variance σ² increases from 0.1 to 0.9.
Sl. No.   Filter type   SNR before filtering (dB)   SNR after filtering (dB)
1         IIR           8.69                        9.7969
2         FIR           8.69                        9.62

Table 5: SNR comparison of IIR and FIR Wiener filtering.
Figure 9: The estimated signal x̂(n) from the IIR filter and its power spectral density.
National Institute of Technology
Calicut
EC6405E
Statistical Signal Processing
Lab Experiment 2
Theory
Pulse Code Modulation
• PCM involves quantizing the amplitude of each sample independently.
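As a rough sketch of this idea (assuming a uniform quantizer with b bits over the signal's range; the step size Δ and the Δ²/12 noise-power model match the SNR calculations used later in the code):

% Uniform PCM quantization of a sample sequence (illustrative sketch)
b = 4;                                       % bits per sample (assumed)
x = randn(1, 100);                           % example signal
delta = (max(x) - min(x)) / 2^b;             % quantizer step size
xq = delta * round(x / delta);               % mid-tread uniform quantizer
qnoise_power = delta^2 / 12;                 % quantization-noise power (model)
SNR_dB = 10*log10(var(x) / qnoise_power);    % predicted quantization SNR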
Let x(n) be the input signal, d(n) be the desired signal, and e(n) be the
prediction error. The Wiener filter coefficients w(i) are adjusted to minimize
the mean-square prediction error $E[e^2(n)]$. The predicted signal y(n) is given
by the convolution of the input signal with the filter coefficients:
$$y(n) = \sum_{i=0}^{M} w(i)\, x(n - i)$$
The Wiener filter minimizes the mean-square error by adjusting its coef-
ficients based on the autocorrelation function of the input signal Rx (k) and
the cross-correlation function between the input and desired signals Rxd (k).
The optimal filter coefficients are given by $\mathbf{w} = \mathbf{R}_x^{-1}\,\mathbf{R}_{xd}$, where $\mathbf{R}_x$ is the autocorrelation matrix of the input signal and $\mathbf{R}_{xd}$ is the cross-correlation vector between the input and desired signals.
The Wiener filter adapts to the statistical properties of the input signal
and desired output signal, making it effective for predicting signals with
known statistical characteristics.
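A minimal sketch of this solution for one-step linear prediction (the autocorrelations are estimated from the data here; xcorr is from the Signal Processing Toolbox, and the AR(1) example signal is an assumption for illustration):

% Sketch: 2-tap Wiener one-step predictor from estimated autocorrelations
N = 1000;
x = filter(1, [1 -0.8], randn(1, N));   % example AR(1) process
[r, lags] = xcorr(x, 2, 'biased');      % autocorrelation estimates, lags -2..2
r = r(lags >= 0);                       % keep r(0), r(1), r(2)
Rx  = [r(1) r(2); r(2) r(1)];           % autocorrelation matrix (Toeplitz)
rxd = [r(2); r(3)];                     % cross-correlation with d(n) = x(n)
w = Rx \ rxd;                           % Wiener-Hopf solution w = Rx^{-1} Rxd
x_hat = filter([0; w], 1, x);           % predict x(n) from x(n-1) and x(n-2)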
Procedure
i. Autocorrelation Function and Signal Generation
1. Choose an autocorrelation function other than $r_x(k) = P\alpha^{|k|}$.
3. Observe and compare the behavior of the filter in both cases using
MATLAB.
4. Compare the performance of the filter in Part (ii) and Part (iv) using
MATLAB.
MATLAB Code and Explanation
Signal Generation and Autocorrelation
clc;
clear;
close all;

% (definitions of N and v are truncated in the original)
for n = 1:N
    if n == 1
        x(n) = 0.447 * v(1);
    elseif n == 2
        x(n) = 0.447 * (v(2) + v(1));
    elseif n == 3
        x(n) = 0.447 * (v(3) + v(2) + v(1));
    elseif n == 4
        x(n) = 0.447 * (v(4) + v(3) + v(2) + v(1));
    else
        % 5-term moving average of v(n); 0.447 is approximately 1/sqrt(5)
        x(n) = 0.447 * (v(n) + v(n-1) + v(n-2) + v(n-3) + v(n-4));
    end
end

figure;
subplot(2, 1, 1);
plot(1:N, v, 'LineWidth', 2);
xlabel('n');
ylabel('v(n)');
title('Random Noise v(n)');
grid on;

subplot(2, 1, 2);
plot(1:N, x, 'LineWidth', 2);
xlabel('n');
ylabel('x(n)');
title('Generated Random Signal x(n)');
grid on;

% Autocorrelation function
k = 0:10;
N = 2;
r_x = autocorr_func(k);   % autocorr_func is defined elsewhere in the report
figure;
plot(k, r_x);
grid on;

Listing 1: Signal Generation
% (the beginning of this listing is truncated in the original)
        x_hat(i) = w(1, 1) * x(i-1) + w(2, 1) * x(i-2);
    end
end

% Plotting
figure;
subplot(3, 1, 1);
plot(1:N, x);
title('Generated Signal');

subplot(3, 1, 2);
plot(1:N, x_hat);
title('Estimated Signal from Wiener Filter tap 2');

error_signal = x - x_hat;
subplot(3, 1, 3);
plot(1:N, error_signal);
title('Error Signal e(n)');

function Rx = autocorrelationMatrix(N)
    Rx = zeros(N);
    for i = 1:N
        for j = i:N
            Rx(i, j) = autocorr_func(i - j);
            Rx(j, i) = Rx(i, j);
        end
    end
end

function rx = autocorrelationvector(N)
    rx = zeros(1, N);
    for i = 1:N
        rx(i) = autocorr_func(i);
    end
end
% (the initial orders of the recursion are truncated in the original;
% P_2n denotes the order-2 prediction error power computed above)
P_3 = (1 - T_3^2) * P_2n;
% P_3n = r_x(1) - w_21*r_x(2) - w_22*r_x(3) - w_23*r_x(4);
r_4 = r_x(5) - w_21*r_x(4) - w_22*r_x(3) - w_23*r_x(2);
T_4 = -r_4 / P_3;                 % reflection coefficient, order 4
w_31 = w_21 + T_4 * w_23;
w_32 = w_22 + T_4 * w_22;
w_33 = w_23 + T_4 * w_21;
w_34 = T_4;
P_4 = (1 - T_4^2) * P_3;          % prediction error power update
r_5 = r_x(6) - w_31*r_x(5) - w_32*r_x(4) - w_33*r_x(3) - w_34*r_x(2);
T_5 = -r_5 / P_4;                 % reflection coefficient, order 5
w_41 = w_31 + T_5 * w_34;
w_42 = w_32 + T_5 * w_33;
w_43 = w_33 + T_5 * w_32;
w_44 = w_34 + T_5 * w_31;
w_45 = T_5;
P_5 = (1 - T_5^2) * P_4;
w = randn(1, N);                  % white measurement noise
y(1) = x(1) + w(1);
for n = 2:N
    y(n) = x(n) + w(n);
end
figure;
subplot(2, 1, 1);
plot(1:N, w, 'LineWidth', 2);
xlabel('n');
ylabel('w(n)');
title('random noise w(n)');
grid on;
subplot(2, 1, 2);
plot(1:N, y, 'LineWidth', 2);
xlabel('n');
ylabel('y(n)');
title('generated signal y(n)');
grid on;
r_w = 0.25;                       % white-noise variance
r_y = r_x;                        % r_y(k) = r_x(k) + sigma^2 * delta(k)
r_y(1) = r_x(1) + r_w;            % the noise contributes at lag zero only
N = 10;
figure;
plot(r_y, 'LineWidth', 2);
xlabel('n');
ylabel('r_y');
title('autocorrelation of generated signal y(n)');
grid on;

Listing 4: Levinson-Durbin recursion
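The unrolled updates above repeat the same pattern at every order; a compact general version of the recursion (a sketch, assuming r is the autocorrelation sequence [r_x(0), r_x(1), ..., r_x(p)] as a row vector) could look like:

% Levinson-Durbin recursion: order-p one-step predictor coefficients
% r(1) = r_x(0), r(2) = r_x(1), ... (MATLAB 1-based indexing)
function [w, P] = levinson_durbin(r, p)
    w = [];                                       % predictor coefficients, grown per order
    P = r(1);                                     % prediction error power, order 0
    for j = 1:p
        k = (r(j+1) - sum(w .* r(j:-1:2))) / P;   % reflection coefficient
        w = [w - k * flip(w), k];                 % order-update of the coefficients
        P = P * (1 - k^2);                        % error power update
    end
end

With the experiment's autocorrelation sequence, [w5, P5] = levinson_durbin(r_x, 5) gives the 5-tap predictor directly; MATLAB's built-in levinson (Signal Processing Toolbox) solves the same normal equations, up to sign conventions.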
r_w = 0.25;                  % white-noise variance
r_y = r_x;                   % r_y(k) = r_x(k) + sigma^2 * delta(k)
r_y(1) = r_x(1) + r_w;       % the noise contributes at lag zero only
N = 10;

figure;
plot(r_y, 'LineWidth', 2);
xlabel('n');
ylabel('r_y');
title('autocorrelation of generated signal y(n)');
grid on;
% plot of coloured noise
n0 = 0;
n1 = 1;
n2 = -1;
n = 0:10;
w_variance = 0.25;
% Autocorrelation of coloured noise
r_wcolor = w_variance * (((n - n0) == 0) + 0.5 * (((n - n1) == 0) + ((n - n2) == 0)));
% r_w0 = r_w(5) + r_w(6) + r_w(7);
figure;
subplot(2, 1, 1);
stem(n, r_wcolor);
xlabel('n');
ylabel('r_w(n) color');
title('Autocorrelation function of colored noise');
grid on;
r_ycolor = r_x + r_wcolor;
subplot(2, 1, 2);
stem(n, r_ycolor);
xlabel('n');
ylabel('r_y(n) color');
title('Autocorrelation function of the colored-noise-corrupted signal');
grid on;
N = 2;
Ry_mat = autocorrelationMatrix_y(N);
rx_vect = autocorrelationvector(N);
% Solve the Wiener-Hopf equations: w_y = Ry^{-1} * r_x
w_y = Ry_mat \ transpose(rx_vect);
N = 100;
y_hat = zeros(1, N);   % Preallocate y_hat
for i = 1:N
    if i == 1
        y_hat(i) = y(i);
    elseif i == 2
        y_hat(i) = w_y(1) * y(i-1);
    else
        y_hat(i) = w_y(1) * y(i-1) + w_y(2) * y(i-2);
    end
end
% Plotting
figure;
subplot(3, 1, 1);
plot(1:N, y);
title('Generated Signal');
subplot(3, 1, 2);
plot(1:N, y_hat);
title('y_h (Estimated Signal)');
e_y = zeros(1, N);
for i = 1:N
    e_y(i) = y(i) - y_hat(i);
end
subplot(3, 1, 3);
plot(1:N, e_y);
title('error signal of y');
% PCM quantization error calculation (4 bit case): quantize the signal itself
My = max(y_hat, [], 'all');
del_pcm = My / 2^3;
quan_pcm_y = del_pcm^2 / 12;           % quantization noise power, Delta^2/12
SNR_pcm_y = 1 / quan_pcm_y;            % assumes unit signal power
% DPCM quantization error calculation (4 bit case): quantize the prediction error
Me_y = max(e_y, [], 'all');
del_dpcm = Me_y / 2^3;
quan_dpcm_y = del_dpcm^2 / 12;
SNR_dpcm_y = 1 / quan_dpcm_y;
fprintf('Signal to Noise Ratio for PCM\n');
disp(SNR_pcm_y);
fprintf('Signal to Noise Ratio for DPCM\n');
disp(SNR_dpcm_y);

function Ry = autocorrelationMatrix_y(N)
    Ry = zeros(N);
    for i = 1:N
        for j = i:N
            Ry(i, j) = autocorr_func(i - j) + (i == j) * 0.25;   % add noise variance at lag zero
            Ry(j, i) = Ry(i, j);
        end
    end
end

Listing 5: White and Color Noise added signal prediction
0.1 Inference
The SNR performance of PCM and DPCM quantization at 3-bit precision was studied using 2-tap Wiener-filter-based prediction of x(n), with the specified autocorrelation function R_x(k), which is linear in nature, and with added white noise.
From Table 1, we can infer that 2-tap Wiener-filter prediction provides greater noise filtering, and hence greater signal prominence, when the transmitted signal is corrupted by white noise than when it is corrupted by colored noise. As the Levinson-Durbin algorithm is employed to increase the order of the Wiener prediction filter, the noise component in the filter output is reduced, and the SNR increases with the order. The Levinson-Durbin algorithm optimizes the filter coefficients to better exploit the correlations present in the signal samples.
Figure 1: Autocorrelation function for the desired signal x(n) which is linear
in nature.
Figure 3: Predicted signal from the 2 tap Wiener filter and the error signal
e(n) with respect to the desired signal x(n).
Figure 4: The 5-tap Wiener filter obtained with the Levinson-Durbin method: estimated signal x_h(n), desired signal x(n), and error signal e(n).
Figure 6: Colored-noise-corrupted signal x_colored(n) and the error of the estimate of x(n) using the 2-tap Wiener filter.
Figure 7: Colored-noise-corrupted signal x_colored(n) and the error of the estimate of x(n) using the Wiener filter upgraded to 5 taps with the Levinson-Durbin method.
Conclusion
The Wiener-filter-based predictor implementation has been studied, and its refinement using the Levinson-Durbin algorithm has been carried out and verified. The Wiener predictor aims to minimize the prediction error.
The Levinson-Durbin algorithm is employed to increase the filter order to 5. This increases the complexity of the predictor, potentially improving its performance. Here, the impact of noise on prediction and quantization is analyzed.
National Institute of Technology
Calicut
EC6405E
Statistical Signal Processing
Lab Experiment 3
Theory
Periodogram
• The periodogram method is the most common nonparametric method
for computing the power spectral density (PSD) of a random process.
In the limit as M goes to infinity, the expected value of the periodogram equals the power spectral density of the noise process x(n). This is expressed by writing
$$\lim_{M \to \infty} E\left\{ \hat{P}_{x,M}(\omega) \right\} = S_x(\omega).$$
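In MATLAB terms, a minimal sketch of this estimate (the 1024-point FFT length and the random example record are arbitrary choices):

% Periodogram of an M-point data record
x = randn(1, 256);                 % example data record
M = length(x);
Px = abs(fft(x, 1024)).^2 / M;     % squared DFT magnitude divided by M
w = 2*pi*(0:1023) / 1024;          % frequency grid over [0, 2*pi)
plot(w, 10*log10(Px));
xlabel('\omega'); ylabel('Periodogram (dB)');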
Welch’s Method
Welch's method (sometimes called the averaged periodogram method) for estimating power spectra is carried out by dividing the time signal into successive blocks, forming the periodogram of each block, and averaging.
Denote the m-th windowed, zero-padded frame from the signal x by
$$x_m(n) \triangleq w(n)\,x(n + mR), \qquad n = 0, 1, \dots, M-1, \quad m = 0, 1, \dots, K-1,$$
where R is defined as the window hop size, and let K denote the number of available frames. Then the periodogram of the m-th block is given by
$$P_{x_m,M}(\omega_k) = \frac{1}{M}\left|\mathrm{FFT}_{N,k}(x_m)\right|^2 \triangleq \frac{1}{M}\left|\sum_{n=0}^{N-1} x_m(n)\, e^{-j2\pi nk/N}\right|^2$$
as before, and the Welch estimate of the power spectral density is given
by
$$\hat{S}_x^{W}(\omega_k) \triangleq \frac{1}{K}\sum_{m=0}^{K-1} P_{x_m,M}(\omega_k).$$
In other words, it’s just an average of periodograms across time. When
w(n) is the rectangular window, the periodograms are formed from non-
overlapping successive blocks of data. For other window types, the analysis
frames typically overlap.
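A minimal sketch of this averaging (assuming 50% overlap, a Hamming window written out explicitly, and a 1024-point FFT; window-power normalization is omitted for brevity):

% Welch PSD estimate: average of windowed, overlapped periodograms
x = randn(1, 1000);                             % example data
M = 100;                                        % segment (window) length
R = M/2;                                        % hop size (50% overlap)
win = 0.54 - 0.46*cos(2*pi*(0:M-1)/(M-1));      % Hamming window w(n)
K = floor((length(x) - M)/R) + 1;               % number of frames
S = zeros(1, 1024);
for m = 0:K-1
    xm = win .* x(m*R + (1:M));                 % m-th windowed frame
    S = S + abs(fft(xm, 1024)).^2 / M;          % periodogram of frame m
end
S = S / K;                                      % Welch estimate: averaged periodograms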
Bartlett Method
Bartlett’s method consists of the following steps:
1. The original N-point data record is split up into K (non-overlapping) data segments, each of length M.
2. For each segment, compute the periodogram by computing the discrete Fourier transform (DFT version which does not divide by M), then computing the squared magnitude of the result and dividing this by M.
3. Average the result of the periodograms above for the K data segments.
The averaging reduces the variance of the estimate compared with the periodogram of the original N-point data record.
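A sketch of these three steps (the segment length M and the FFT normalization follow step 2 above; the example data are an assumption):

% Bartlett PSD estimate: average periodograms of non-overlapping segments
x = randn(1, 1000);                 % example N-point record
M = 100;                            % segment length
K = floor(length(x) / M);           % number of non-overlapping segments
S = zeros(1, M);
for k = 0:K-1
    seg = x(k*M + (1:M));           % k-th data segment
    S = S + abs(fft(seg)).^2 / M;   % periodogram: |DFT|^2 / M
end
S = S / K;                          % average over the K segments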
$$\hat{S}(f) = \frac{\sigma^2}{\left| 1 - \sum_{k=1}^{p} a_k\, e^{-j2\pi f k} \right|^2}$$
where f is the frequency variable, and p is the order of the autoregressive
model.
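Evaluating this parametric estimate is straightforward once the AR coefficients a_k and the noise variance σ² are available (a sketch with assumed example values):

% AR(p) spectral estimate S(f) = sigma^2 / |1 - sum a_k e^{-j2*pi*f*k}|^2
a = [0.5, -0.3];            % example AR coefficients a_1..a_p (assumed)
sigma2 = 1;                 % driving-noise variance (assumed)
A = fft([1, -a], 1024);     % denominator polynomial evaluated on the unit circle
S = sigma2 ./ abs(A).^2;    % PSD estimate over 1024 frequency points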
The MVM provides a spectral estimate that minimizes the variance under the given constraints, making it suitable for applications where accurate estimation of the PSD is essential.
$$\hat{r}(m) = \mathcal{F}^{-1}\left[ P(\omega) \right]$$
This provides an estimate of the autocorrelation function of the signal.
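A sketch of this relation, as used in the Blackman-Tukey estimate (the autocorrelation estimate is recovered as the inverse FFT of the periodogram, lag-windowed, and transformed back; the window length p is an arbitrary choice):

% Autocorrelation estimate via the inverse FFT of a PSD estimate
x = randn(1, 256);
Px = abs(fft(x)).^2 / length(x);    % periodogram P(omega)
r = real(ifft(Px));                 % r_hat(m) = F^{-1}[P(omega)] (biased, circular lags)
% Blackman-Tukey: keep |m| <= p lags with a Bartlett lag window, transform back
p = 32;
lw = [1 - (0:p)/p, zeros(1, 256 - 2*p - 1), 1 - (p:-1:1)/p];   % circular lag window
S_bt = real(fft(r .* lw));          % smoothed PSD estimate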
Procedure
• Set ω1 = 0.1π.
• For the Bartlett and Welch estimates, use a window width of 100 samples.
• For the Welch estimate, allow 50% overlap between consecutive windows.
• Compare the results obtained from the MVM and BT methods with
the periodogram, Bartlett, and Welch estimates.
• Analyze and discuss the differences and similarities in the spectral es-
timates obtained from different methods.
    x2_values(i, :) = V2 * sin(omega2_values(i) * (0:N-1) + phi2);
end

% (several lines truncated in the original)
power_x_bart  = periodogram(x_values, 100, 0);    % Bartlett-style estimate (segment length 100)
power_x_welch = periodogram(x_values, 100, 50);   % Welch-style estimate (50% overlap)
        % ... according to Bartlett's method and use adjacent windows
    else
        % slice signal into k data segments of length m
        segments = Utils.slice(signal, m);
        k = (length(signal) / m) - 1;
    end
    % compute the FFT of each segment, then compute the squared magnitude of
    % the result and divide by m
    period = (abs(fft(segments, [], 2)).^2) / m;
% Generate sinusoidal signals
phi1 = rand * 2 * pi;
phi2 = rand * 2 * pi;
x1 = V1 * sin(omega1 * (0:N-1) + phi1);
x2_values = zeros(length(omega2_values), N);
for i = 1:length(omega2_values)
    x2_values(i, :) = V2 * sin(omega2_values(i) * (0:N-1) + phi2);
end

% (several lines truncated in the original)

function Px = periodogram(x, n1, n2)   % note: shadows MATLAB's built-in periodogram
    x = x(:);
    if nargin == 1
        n1 = 1; n2 = length(x);
    end
    Px = abs(fft(x(n1:n2), 1024)).^2 / (n2 - n1 + 1);
    Px(1) = Px(2);   % avoid a misleading DC spike
end
    N = n2 - n1 + 1;
    w = ones(N, 1);                % rectangular window by default
    if win == 2
        w = hamming(N);
    elseif win == 3
        w = hanning(N);
    elseif win == 4
        w = bartlett(N);
    elseif win == 5
        w = blackman(N);
    end
    xw = x(n1:n2) .* w / norm(w);  % windowed, normalized data segment
    Px = N * periodogram(xw);      % modified periodogram
end
1.5.1 Blackman Tukey Method
close all;
clear;
clc;

% Spectrum Estimation
Px1 = blackman_tukey(x1, 1, 5);   % order 4
Px2 = blackman_tukey(x2, 1, 5);
% To plot the spectrum estimate
f = 5 * ((0:1023) / 1024);
figure, plot(f, 10*log10(Px1));
xlabel('Frequency, w1')
ylabel('Spectral Estimate P_{X1} (dB)');
title('BT Spectrum for X1 signal')
figure, plot(f, 10*log10(Px2));
xlabel('Frequency, w2')
ylabel('Spectral Estimate P_{X2} (dB)');
title('BT Spectrum for X2 signal')
function X = convm(x, p)
    N = length(x) + 2*p - 2;
    x = x(:);
    xpad = [zeros(p-1, 1); x; zeros(p-1, 1)];
    for i = 1:p
        X(:, i) = xpad(p-i+1 : N-i+1);
    end
end

function R = covar(x, p)
    x = x(:);
    m = length(x);
    x = x - ones(m, 1) * (sum(x) / m);        % remove the sample mean
    R = convm(x, p)' * convm(x, p) / (m - 1);
end
    x = x(:);
    if nargin == 3
        n1 = 1;
        n2 = length(x);
    end
close all;
clear;
clc;

% to generate a discrete time random process

% Generate white noise
n = sqrt(sigma_n_squared) * randn(1, N);

% Spectrum Estimation
Px1 = min_variance(x1, 4);   % order 4
Px2 = min_variance(x2, 4);
% To plot the spectrum estimate
figure, plot(2*pi*(0:1023)/1024, Px1);   % frequency axis for the 1024-point FFT
xlabel('Frequency, w1')
ylabel('Spectral Estimate P_{X1}');
title('M-V Spectrum for X1 signal')
figure, plot(Px2);

function Px = min_variance(x, p)
    x = x(:);
    R = covar(x, p);
    [v, d] = eig(R);
    U = diag(inv(abs(d) + eps));
    V = abs(fft(v, 1024)).^2;
    Px = 10*log10(p) - 10*log10(V * U);
end

Listing 4: Minimum variance method for spectral estimation.
1.6 Inference
Welch's method is an improvement on the standard periodogram method and on Bartlett's method, in that it reduces the noise in the estimated power spectra in exchange for reduced frequency resolution. The frequency resolution for the periodogram method is 651.8986 Hz. The frequency resolution of the Welch method reduced to 571.89 Hz, while that of the Bartlett method remained 651.8986 Hz, when 50% overlap between segments of length 100 was used. Bartlett's method is a simple averaging method that divides the signal into non-overlapping segments, applies a window function to each segment, computes the periodogram of each segment, and averages them. Since it does not involve any additional processing beyond averaging, the frequency resolution is determined solely by the length of the segments (L) and the sampling frequency (fs).
The difference in frequency resolution between Bartlett's method and Welch's method depends on the amount of overlap used in the Welch method, the length of the segments, and the specific characteristics of the signal being analyzed. Generally, the Welch method tends to provide slightly better frequency resolution compared to Bartlett's method due to the overlap of segments.

In Bartlett's method, each segment is non-overlapping, meaning that the entire segment contributes only to its corresponding frequency bins in the resulting power spectrum. The frequency resolution is directly determined by the length of the segments.

In the Welch method, segments typically overlap, meaning that each segment contributes to multiple frequency bins in the resulting power spectrum. The overlap allows more data to contribute to each frequency bin, resulting in better frequency resolution compared to Bartlett's method.

The difference in frequency resolution between the two methods depends on the overlap factor used in the Welch method. Higher overlap factors result in more overlap between segments and therefore better frequency resolution, while lower overlap factors result in less overlap and potentially lower frequency resolution.
The minimum variance method adapts its narrowband filters to the data, minimizing the output variance under the given constraints while still ensuring a reasonable degree of frequency resolution. This method can achieve a better effective frequency resolution compared to Bartlett's method by efficiently utilizing the available data. The BT method instead windows the estimated autocorrelation sequence before Fourier transforming it; discarding the unreliable large-lag estimates reduces the variance of the estimate and can improve its quality compared with Bartlett's method.
When the amplitudes of the sinusoidal components in the input signal, namely V1 and V2, are varied, the power spectral density estimate will likely show higher power at the frequency corresponding to the sinusoid with the higher amplitude. However, this can also be affected by the phase difference between the sinusoids under consideration and by the presence of noise.
When the variance of the noise signal increases, it means that the noise
signal has higher amplitude fluctuations. As a result, the noise will contribute
more power to the overall signal. The increased noise power may mask or
obscure weaker signal components.
With higher noise variance, the noise components contribute more to each segment's periodogram, potentially increasing the variability or uncertainty in the estimated PSD. However, if the noise is stationary and uncorrelated across segments, the averaging process may still reduce the noise contribution and provide a reliable PSD estimate in the Welch and Bartlett methods.
Increasing noise variance can result in higher power spectral density values
at frequencies corresponding to the noise components. The periodogram’s
estimate may become less reliable as noise overwhelms the signal, especially
if the noise is non-stationary.
Figure 1: The input signal X1 with ω2 = 0.15π.
Figure 3: Periodogram for signal X1 with ω2 = 0.15π.
Figure 5: Power spectral density P (X1 ) estimation based on Bartlett method
for signal with ω2 = 0.15π.
Figure 7: Power spectral density estimation based on the Welch method for the X1 signal with ω2 = 0.15π, where 50 realizations were studied and averaged to P_avg(x(e^{jω})).
Figure 9: The power spectral density estimation P(X1) based on the non-parametric Minimum Variance Method for signal X1 with ω2 = 0.15π.
Figure 10: The power spectral density estimation P(X2) based on the non-parametric Minimum Variance Method for signal X2 with ω2 = 0.2π.
Figure 11: The power spectral density estimation P(X2) based on the non-parametric Blackman Tukey Method for signal X2 with ω2 = 0.15π.
Figure 12: The power spectral density estimation P(X2) based on the non-parametric Blackman Tukey Method for signal X1 with ω2 = 0.2π.
(a) Periodogram, V1 = 10 V, V2 = 5 V, ω2 = 0.15π; (b) Periodogram, V1 = 5 V, V2 = 10 V, ω2 = 0.15π; (c) Periodogram, V1 = 10 V, V2 = 10 V, ω2 = 0.15π.
Figure 13: PSD estimation based on the periodogram method for varied V1 and V2 values, i.e., amplitudes of the sinusoids.
(a) Bartlett, V1 = 10 V, V2 = 5 V, ω2 = 0.15π; (b) Bartlett, V1 = 10 V, V2 = 10 V, ω2 = 0.15π; (c) Bartlett, V1 = 5 V, V2 = 10 V, ω2 = 0.15π; (d) Bartlett, V1 = 10 V, V2 = 5 V, ω2 = 0.2π; (e) Bartlett, V1 = 10 V, V2 = 10 V, ω2 = 0.2π; (f) Bartlett, V1 = 5 V, V2 = 10 V, ω2 = 0.2π.
Figure 14: PSD estimation based on the Bartlett method for varied V1 and V2 values, i.e., amplitudes of the sinusoids.
(a) Welch, V1 = 10 V, V2 = 5 V, ω2 = 0.15π; (b) Welch, V1 = 5 V, V2 = 10 V, ω2 = 0.15π; (c) Welch, V1 = 10 V, V2 = 10 V, ω2 = 0.15π; (d) Welch, V1 = 10 V, V2 = 5 V, ω2 = 0.2π; (e) Welch, V1 = 5 V, V2 = 10 V, ω2 = 0.2π; (f) Welch, V1 = 10 V, V2 = 10 V, ω2 = 0.2π.
Figure 15: PSD estimation based on the Welch method for varied V1 and V2 values, i.e., amplitudes of the sinusoids.
(a) BT method, V1 = 10 V, V2 = 10 V, ω2 = 0.15π; (b) BT method, V1 = 5 V, V2 = 10 V, ω2 = 0.15π; (c) BT method, V1 = 10 V, V2 = 10 V, ω2 = 0.15π; (d) BT method, V1 = 10 V, V2 = 5 V, ω2 = 0.2π; (e) BT method, V1 = 5 V, V2 = 10 V, ω2 = 0.2π; (f) BT method, V1 = 10 V, V2 = 10 V, ω2 = 0.2π.
Figure 16: PSD estimation based on the Blackman Tukey method for varied V1 and V2 values, i.e., amplitudes of the sinusoids.
(a) MVM, V1 = 10 V, V2 = 5 V, ω2 = 0.15π; (b) MVM, V1 = 5 V, V2 = 10 V, ω2 = 0.15π; (c) MVM, V1 = 10 V, V2 = 10 V, ω2 = 0.15π; (d) MVM, V1 = 10 V, V2 = 5 V, ω2 = 0.2π; (e) MVM, V1 = 5 V, V2 = 10 V, ω2 = 0.2π; (f) MVM, V1 = 10 V, V2 = 10 V, ω2 = 0.2π.
Figure 17: PSD estimation based on the Minimum Variance Method (MVM) for varied V1 and V2 values, i.e., amplitudes of the sinusoids.
(a) Periodogram, σ² = 0.25, ω2 = 0.15π; (b) Periodogram, σ² = 0.5, ω2 = 0.15π; (c) Periodogram, σ² = 0.75, ω2 = 0.15π.
Figure 18: PSD estimation based on the periodogram method for varied σ², i.e., noise variance.
(a) Bartlett, σ² = 0.25, ω2 = 0.15π; (b) Bartlett, σ² = 0.5, ω2 = 0.15π; (c) Bartlett, σ² = 0.75, ω2 = 0.15π.
Figure 19: PSD estimation based on the Bartlett method for varied σ², i.e., noise variance.
(a) Welch, σ² = 0.25, ω2 = 0.15π; (b) Welch, σ² = 0.5, ω2 = 0.15π; (c) Welch, σ² = 0.75, ω2 = 0.15π.
Figure 20: PSD estimation based on the Welch method for varied σ², i.e., noise variance.
(a) Blackman Tukey, σ² = 0.5, ω2 = 0.15π; (b) Blackman Tukey, σ² = 0.9, ω2 = 0.15π; (c) Blackman Tukey, σ² = 0.25, ω2 = 0.2π.
Figure 21: PSD estimation based on the Blackman Tukey method for varied σ², i.e., noise variance.
(a) MVM, σ² = 0.1, ω2 = 0.15π; (b) MVM, σ² = 0.25, ω2 = 0.15π; (c) MVM, σ² = 0.75, ω2 = 0.15π.
Figure 22: PSD estimation based on the MVM for varied σ², i.e., noise variance.
Conclusion
The methods of Bartlett and Welch are designed to reduce the variance of the periodogram by averaging periodograms and modified periodograms, respectively. Welch's method reduces the variance further compared with Bartlett's method. In the MV method, the power spectrum is estimated by filtering the process with a bank of narrowband bandpass filters. The motivation for this approach may be seen by looking, once again, at the effect of filtering a WSS random process with a narrowband bandpass filter. The larger the order of the filter, the better the filter will be at rejecting out-of-band power. For a pth-order filter, the MV spectrum estimate requires the evaluation of the inverse of a (p+1) × (p+1) autocorrelation matrix Rx. The filter order is limited to p ≤ N for an N-length data sample.
National Institute of Technology
Calicut
EC6405E
Statistical Signal Processing
Lab Experiment 4
Student: Aneeta Christopher (P230512EC)
Faculty in charge: Dr. Deepthi P. P.
Aim
Recursive Least Squares Estimation
Consider the adaptive equalization of a linear dispersive channel. Channel
input is an equiprobable random binary sequence of zeroes and ones coded
as ±1. Channel additive noise is white Gaussian with variance V . Let the
channel impulse response be given by,
$$h(n) = \begin{cases} \dfrac{1}{2}\left[1 + \cos\!\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3 \\ 0, & \text{elsewhere} \end{cases}$$
Part (i)
Choose V to have SNR as 30 dB. Vary W values as W = 2.9, 3.1, 3.3, and 3.5.
Formulate Recursive Least Squares filtering for channel equalization with an
11-tap FIR filter to remove ISI. Use regularization parameter δ = 0.004 and
forget factor λ = 1. Sketch the convergence of RLS by plotting the variation
of MMSE with iteration number for 50 iterations for each W .
Part (ii)
Vary the SNR to 20 dB and repeat the experiment in Part (i).
Theory
Recursive Least Squares Method
In adaptive filter theory, the Recursive Least Squares (RLS) algorithm is
a powerful method for estimating the parameters of a linear model in a
recursive and computationally efficient manner. Consider a linear model
given by
$$d(n) = \mathbf{w}^{T}(n)\,\mathbf{x}(n) + e(n),$$
where:
• x(n) is the input (regressor) vector at time n,
• d(n) is the desired response,
• w(n) is the weight vector to be estimated, and
• e(n) is the modeling error.
The goal is to find an estimate ŵ(n) of the true weight vector w(n) by minimizing the exponentially weighted least squares criterion
$$J(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\, |e(i)|^2,$$
where λ (0 < λ ≤ 1) is the forgetting factor and e(i) is the estimation error at time i.
The RLS algorithm provides a robust and stable approach for adapt-
ing the filter coefficients in real-time applications. By recursively updating
the inverse correlation matrix, RLS achieves fast convergence and low com-
putational complexity, making it particularly suitable for applications with
changing environments or non-stationary signals.
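For reference, a single RLS iteration can be sketched as follows (u is the tap-input vector, d the desired response, lambda the forgetting factor, and P the inverse correlation matrix initialized to δ⁻¹I; this is the standard exponentially weighted form):

% One RLS iteration (exponentially weighted)
% Inputs: u (Mx1 tap vector), d (scalar), w (Mx1 weights), P (MxM), lambda
k = (P * u) / (lambda + u' * P * u);   % gain vector
e = d - w' * u;                        % a priori estimation error
w = w + k * e;                         % weight update
P = (P - k * (u' * P)) / lambda;       % inverse-correlation-matrix update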
Procedure
To solve the problem of adaptive equalization of a linear dispersive chan-
nel using Recursive Least Squares (RLS) filtering, the following steps are
undertaken:
– Define the channel impulse response h(n) using the given expression:
$$h(n) = \begin{cases} \dfrac{1}{2}\left[1 + \cos\!\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3 \\ 0, & \text{elsewhere} \end{cases}$$
• Part (ii): Vary the SNR to 20 dB and repeat the procedure of Part (i).
MATLAB Code and Explanation
clc
close all
clear

% Initialize h matrix
h = zeros(length(W_values), sequenceLength);

% (several lines truncated in the original)
for i = 1:length(W_values)
    conv_results(i, :) = conv(x, h(i, :), 'same');   % channel output for each W
end

% (several lines truncated in the original)
iterations = 100;
% Initialize arrays to store MMSE values for each W
mmse_values = zeros(iterations, length(W_values));
% Loop over each value of W
for w_idx = 1:length(W_values)
    % Initialize RLS filter coefficients and matrices
    P = eye(tap_length) / delta;   % P(0) = delta^{-1} * I (inverse correlation matrix)
    w = zeros(tap_length, 1);      % Weight vector initialized to zero
    % (the RLS iteration loop is truncated in the original)
% Plot MMSE convergence for each W with specified colors
figure;
colors = ['k', 'b', 'r', 'g'];   % Define colors for each W value
for w_idx = 1:length(W_values)
    valid_indices = mmse_values(:, w_idx) > 0;   % keep strictly positive MMSE values for the log plot
    plot(find(valid_indices), 10*log10(mmse_values(valid_indices, w_idx)), ...
        'LineWidth', 1, 'Color', colors(w_idx));
    hold on;
end

% (axis labels and legend commands truncated in the original)
figure;
% colors = {'r', 'g', 'b', 'c'};
% legend_entries = cell(1, length(W_values));
% Loop through each W value to plot on the same subplot
for i = 1:length(W_values)
    subplot(length(W_values), 1, i);   % Adjust the subplot dimensions as needed
    for j = 1:11
        data = squeeze(w_val_acc(:, j, i));
        % (plotting of each tap-weight trajectory is truncated in the original)
    end
end
% Create a single legend for the entire figure outside the loop
% legend('Location', 'best');
% Adjust the layout
sgtitle('Variation of Weight values with Iterations for Different eigen spread Values SNR = 30 dB');

figure;
for i = 1:length(W_values)
    subplot(length(W_values), 1, i);
    for j = 1:11
        data = squeeze(w_val_acc_2(:, j, i));
        % (plotting of each tap-weight trajectory is truncated in the original)
    end
end
% Create a single legend for the entire figure outside the loop
% legend('Location', 'best');
% Adjust the layout
sgtitle('Variation of Weight values with Iterations for Different W values')
0.1 Inference
The Recursive Least Squares (RLS) algorithm converges in about 2M iterations, where M is the filter length. This means that the rate of convergence of the RLS algorithm is typically an order of magnitude faster than that of the LMS algorithm. The rate of convergence of the RLS algorithm is relatively insensitive to variations in the eigenvalue spread, i.e., W. The steady-state value of the MMSE produced by the RLS algorithm is small, including in comparison with the LMS algorithm. Hence we can conclude that the RLS algorithm produces zero misadjustment (at least theoretically). We can conclude that as the number of iterations n approaches infinity, the mean-square error approaches a final value equal to the variance σ² of the measurement error. In other words, the RLS algorithm, in theory, produces zero excess mean-square error (or, equivalently, zero misadjustment). The key findings and conclusions from this experiment are summarized below.
Figure 1: The plot of MMSE versus number of iterations for SNR of 30 dB
for signal x(n).
Conclusion
In this experiment, we simulated a communication system to analyze the performance of a Recursive Least Squares (RLS) filter in mitigating noise and recovering a transmitted binary sequence. RLS converges within about twice the number of filter taps, faster than comparable algorithms. The eigenvalue spread has little effect on the convergence of the weight vector in the RLS algorithm.
Figure 2: The plot of MMSE versus number of iterations for SNR of 20 dB
for signal x(n).
Figure 3: The variation of weight values with iterations as the eigen spread
W is varied for SNR = 20dB.
Figure 4: The variation of weight values with iterations as the eigen spread
W is varied for SNR = 30dB.
National Institute of Technology
Calicut
EC6405E
Statistical Signal Processing
Lab Experiment 5
Student: Aneeta Christopher (P230512EC)
Faculty in charge: Dr. Deepthi P. P.
Aim
Kalman Filter
Design a Kalman filter to track a moving object following the track given by
the function:
$$x(t) = 0.1(t^2 - t)$$
Consider the process equation X̄(n) = A(n − 1)X̄(n − 1) + W (n). Here
the state vector X̄(n) = [x1 (n) x2 (n)]T where x1 (n) is the position along
the X-axis and x2 (n) is the velocity along the X-axis. The state transition
matrix A(n − 1) is given by:
$$A(n-1) = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$$
Consider W (n) to have zero mean with variance 0.2, while v(n) has zero
mean and unit variance.
(i) Derive Kalman prediction and filtering (updation) expressions.
(ii) Track the object and sketch the actual position as well as the tracked
position at various instances.
Theory
Kalman Filter
The Kalman filter is a widely used algorithm for estimating the state of a
dynamic system from noisy measurements. It is an optimal recursive filter
that uses Bayesian estimation to compute the state of the system at each
time step, based on the previous state and the current measurements.
Kalman Filter Equations
Consider a linear dynamic system described by the following equations:
$$\bar{X}(n) = A(n-1)\,\bar{X}(n-1) + W(n), \qquad y(n) = C(n)\,\bar{X}(n) + v(n)$$
where:
• X̄(n) is the state vector at time n,
• A(n − 1) is the state transition matrix,
• W(n) is the process noise,
• C(n) is the measurement matrix, and
• v(n) is the measurement noise.
Prediction Step
In the prediction step, the Kalman filter predicts the state of the system at the next time step based on the previous state and control inputs:
$$\hat{X}^{-}(n) = A(n-1)\,\hat{X}(n-1), \qquad P^{-}(n) = A(n-1)\,P(n-1)\,A^{T}(n-1) + Q(n)$$
where:
• X̂⁻(n) is the predicted state estimate, and
• P⁻(n) is the predicted error covariance matrix.
Update Step
In the update step, the Kalman filter incorporates the measurement information to improve the state estimate:
$$K(n) = P^{-}(n)\,C^{T}(n)\left[C(n)\,P^{-}(n)\,C^{T}(n) + R(n)\right]^{-1}$$
$$\hat{X}(n) = \hat{X}^{-}(n) + K(n)\left[y(n) - C(n)\,\hat{X}^{-}(n)\right], \qquad P(n) = \left[I - K(n)\,C(n)\right]P^{-}(n)$$
where K(n) is the Kalman gain, R(n) is the measurement noise covariance, and P(n) is the updated error covariance.
Procedure
1. Define the dynamics of the system: Specify the state transition matrix
A(n), measurement matrix C(n), process noise covariance matrix Q(n),
and measurement noise covariance matrix R(n).
2. Generate the trajectory of the object: Define the function x(t) repre-
senting the object’s position over time.
4. Initialize Kalman filter: Set initial state estimate X̂(0) and initial error
covariance matrix P (0).
5. Perform prediction and update steps: Iterate through each time step,
predicting the state and updating based on measurements.
6. Track the object: Plot the actual position and the tracked position at
various instances to visualize the performance of the Kalman filter.
MATLAB Code and Explanation
1 % Define system dynamics
2 dt = 0.10;
3 F = [1 , dt ; 0 , 1];
4 H = [1 , 0];
5 Q = [0.2 , 0; 0 , 0.20];
6 R = 0.5;
7
24 % Update step
25 y = measurements ( i ) - H * x ;
26 S = H * P * H’ + R;
27 K = P * H’ / S;
28 x = x + K * y;
29 P = ( eye (2) - K * H ) * P ;
30
31 predictions ( i ) = H * x ;
32 end
33
34 % Plot results
Figure 1: The tracked (red) and true (green) position of the object along the X axis.
figure;
plot(t, True_position, 'g', 'LineWidth', 1.5, 'DisplayName', 'True Position');
hold on;
plot(t, predictions, 'r', 'LineWidth', 1.5, 'DisplayName', 'Kalman Filter Prediction');
hold off;
title('1D Object Tracking');
xlabel('Time');
ylabel('Position');
grid on;
legend('Location', 'best');
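Since the trajectory generation and prediction step are truncated above, a self-contained sketch of the full tracking loop (assuming the track x(t) = 0.1(t² − t) from the Aim and the matrices defined above) could read:

% Self-contained sketch: Kalman tracking of x(t) = 0.1*(t^2 - t)
dt = 0.10; t = 0:dt:10;
F = [1, dt; 0, 1]; H = [1, 0];
Q = [0.2, 0; 0, 0.2]; R = 0.5;
True_position = 0.1 * (t.^2 - t);
measurements = True_position + sqrt(R) * randn(size(t));   % noisy observations
x = [0; 0]; P = eye(2);                                    % initial state and covariance
predictions = zeros(size(t));
for i = 1:length(t)
    % Prediction step
    x = F * x;
    P = F * P * F' + Q;
    % Update step
    K = P * H' / (H * P * H' + R);
    x = x + K * (measurements(i) - H * x);
    P = (eye(2) - K * H) * P;
    predictions(i) = H * x;
end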
0.1 Inference
The code demonstrates the implementation of a Kalman filter for tracking the position of an object in 1D space. Since the true and predicted values coincide after a few updates, the filter's capability to accurately estimate the system's state despite noise and uncertainties is demonstrated. This highlights the effectiveness of the Kalman filter in state estimation tasks, showcasing its ability to mitigate the impact of noise and uncertainty, leading to accurate tracking and prediction of the system's behavior.
Conclusion
The Kalman filter, known for its optimal estimation and noise robustness,
was implemented to track the position of an object in 1D space.
• Observability and Controllability: Systems that are well observable and controllable facilitate convergence, as the Kalman filter can effectively estimate and control the system's state based on the available measurements and control inputs.