0% found this document useful (0 votes)
19 views11 pages

Block LMS Updated

This experiment investigates the Block Least Mean Square (Block LMS) algorithm for denoising a sinusoidal signal affected by Additive White Gaussian Noise (AWGN). Two configurations with different block lengths and filter orders are analyzed, demonstrating the algorithm's effectiveness in minimizing error signals and reconstructing the original signal. The findings highlight the trade-offs between block size, filter length, and computational complexity, validating the Block LMS algorithm's suitability for real-time noise reduction tasks.

Uploaded by

nikkiant16
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
19 views11 pages

Block LMS Updated

This experiment investigates the Block Least Mean Square (Block LMS) algorithm for denoising a sinusoidal signal affected by Additive White Gaussian Noise (AWGN). Two configurations with different block lengths and filter orders are analyzed, demonstrating the algorithm's effectiveness in minimizing error signals and reconstructing the original signal. The findings highlight the trade-offs between block size, filter length, and computational complexity, validating the Block LMS algorithm's suitability for real-time noise reduction tasks.

Uploaded by

nikkiant16
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 11

Experiment 5

Abstract:
This experiment inves gates the applica on of the Block Least Mean Square (Block LMS) algorithm
for denoising a sinusoidal signal contaminated with Addi ve White Gaussian Noise (AWGN). The
algorithm's performance is evaluated for two configura ons with different block lengths and filter
orders. MATLAB simula ons demonstrate the efficiency of Block LMS in reconstruc ng noisy signals
and minimizing the error signal.

Introduc on:
Noise is a pervasive problem in signal processing, o en degrading the quality of signals and impairing
the extrac on of meaningful informa on. This issue is encountered in various applica ons, such as
audio processing, telecommunica ons, medical imaging, and radar systems. Effec ve noise removal
techniques are therefore crucial to restore signals to their original form. Among the many
approaches available, adap ve filtering is par cularly useful due to its ability to adjust dynamically to
the changing nature of noise and signal characteris cs.

The Least Mean Square (LMS) algorithm is a widely used adap ve filtering method that minimizes
the mean squared error (MSE) between the desired and output signals. However, the conven onal
LMS algorithm processes data sample by sample, which can become computa onally expensive and
me-consuming for large datasets or real- me systems. To address these limita ons, the Block LMS
algorithm is introduced, which processes data in blocks rather than single samples. This approach
reduces computa onal complexity while retaining the adaptability and effec veness of the LMS
algorithm, making it well-suited for real- me applica ons.

In this experiment, the Block LMS algorithm is applied to reconstruct a sinusoidal signal corrupted
with Addi ve White Gaussian Noise (AWGN). The signal x(t)=cos(2π⋅50t), sampled at 250 Hz, serves
as the desired signal, while the noisy signal is used as input to the Block LMS algorithm. The
algorithm processes the signal in blocks of specified lengths and updates the filter coefficients
itera vely to minimize the error.

The performance of the Block LMS algorithm is analyzed for two configura ons:

1. Block length (L) of 40 with filter length (M) of 25.

2. Block length (L) of 75 with filter length (M) of 50.

This study provides insights into the trade-offs between block size, filter length, and computa onal
complexity. Larger blocks and filter lengths are expected to provide more accurate signal
reconstruc on but at the cost of increased computa onal effort. By comparing the results from both
configura ons, this experiment demonstrates the effec veness of the Block LMS algorithm and
highlights its suitability for real-world noise reduc on tasks.
Objec ve:
1. To implement the Block LMS algorithm for denoising sinusoidal signals contaminated with
AWGN.

2. To analyze the algorithm's performance for two configura ons:

o Block length L=40, filter length M=25.

o Block length L=75, filter length M=50.

3. To evaluate the reconstructed signal and error signal for both cases.

4. To highlight the trade-offs between block length, filter order, and computa onal complexity.

Problem Statement:

Methodology:
Signal Genera on:

 A sinusoidal signal D[n]=cos(2π⋅50n/250) is generated and sampled at 250 Hz.

 Noise is added to the signal to produce the noisy signal A1[n].

Block LMS Algorithm:

 The noisy signal is processed in blocks of size L, and the filter coefficients are updated
itera vely to minimize the mean squared error.

 Two configura ons are tested:

o L=40, M=25.

o L=75L, M=50.

 Step size μ=0.01 is used for both cases.

MATLAB Simula on:

 Code is wri en to implement the Block LMS algorithm.

 Subplots are generated to visualize the original signal, noisy signal, es mated signal, and
error signal for both configura ons.
Analysis:

 Results are analyzed by comparing the reconstructed signal and the error signal across the
two configura ons.

Resul ng Plots:
Results:
Configura on 1: L=40, M=25

 The reconstructed signal closely matches the original sinusoidal signal.

 The error signal shows small fluctua ons, indica ng effec ve noise reduc on.

 Computa onal me was moderate due to the smaller block size.

Configura on 2: L=75, M=50

 The reconstructed signal exhibits higher accuracy than the M=25M configura on.

 The error signal is further minimized, demonstra ng be er performance.

 Computa onal me increased due to the larger block size and filter order.

Observa ons:
 Increasing the block size (L) improves noise reduc on but increases computa onal
complexity.
 A higher filter order (M) results in be er reconstruc on accuracy.
 The Block LMS algorithm effec vely minimizes the error signal in both configura ons.
 Configura on 2 (L=75, M=50) achieved superior results but required more computa onal
resources.

Discussion:
The Block LMS algorithm processes the noisy signal in blocks, allowing for efficient computa on and
adaptability. The two configura ons highlight the trade-offs between performance and
computa onal requirements. Configura on 1, with smaller block size and filter order, provided
reasonable noise reduc on with less computa onal effort. Configura on 2, with larger block size and
filter order, achieved be er results at the expense of increased processing me.

Key Takeaways:

 The choice of block size (L) and filter length (M) depends on the applica on's accuracy and
speed requirements.

 The step size (μ) plays a cri cal role in the algorithm's convergence and stability.

 The Block LMS algorithm is well-suited for real- me noise cancella on tasks.

Conclusion:
The Block LMS algorithm is a powerful tool for noise reduc on in signal processing. This experiment
demonstrates its ability to reconstruct sinusoidal signals corrupted with AWGN, with the
reconstructed signal closely matching the original. The trade-offs between block size, filter order, and
computa onal effort were explored, with larger configura ons achieving be er noise reduc on at
the cost of increased complexity. These findings validate the Block LMS algorithm's applicability in
prac cal scenarios requiring efficient and accurate signal denoising.
MATLAB Code:
1. clc;
2. clear;
3. close all;

4. % Continuous-Time Signal
5. t = 0:0.001:0.1; % Time vector for continuous signal
6. x = cos(2*pi*50*t); % Continuous-time signal x(t)
7. figure;
8. plot(t, x, 'LineWidth', 1.5);
9. xlabel('t (sec)');
10. ylabel('x(t)');
11. title('Continuous-Time Signal');
12. grid on;

13. % Discrete-Time Signal (Sampling)


14. fs = 250; % Sampling frequency
15. Ts = 1/fs; % Sampling period
16. n = 0:Ts:39.996; % 10000 samples
17. xn = cos(2*pi*50*n); % Discrete-time signal g(n)
18. figure;
19. stem(n, xn, 'LineWidth', 1.5);
20. xlabel('n (sec)');
21. ylabel('g(n)');
22. title('Discrete-Time Signal');
23. grid on;

24. % block lms ( 25 Filter Coefficients )


25. block_len = 50; % Block length (L)
26. filter_len = 25; % Filter length (M)
27. n_samples = 10000; % Number of samples

28. % Adding noise to the signal


29. n = xn + 0.9 * randn(1, n_samples); % Noisy signal
30. n_z = [n, zeros(1, filter_len)]; % Zero-padded noisy signal
31. weights = zeros(1, filter_len); % Initial weights
32. u = 0.01; % Step size
33. E = []; % Error array
34. estimated = zeros(n_samples, 1); % Estimated signal array

35. for i = 1:n_samples/block_len


36. % Block processing
37. for j = 1:block_len
a. estimated((i-1)*block_len + j) = weights * transpose(n_z((i-
1)*block_len + j : (i-1)*block_len + j + filter_len - 1));
b. E((i-1)*block_len + j) = xn((i-1)*block_len + j) - estimated((i-
1)*block_len + j);
38. end
39. weight_update = 0;
40. for j = 1:block_len
a. weight_update = weight_update + E((i-1)*block_len + j) * n_z((i-
1)*block_len + j : (i-1)*block_len + j + filter_len - 1);
41. end
42. weights = weights + (u/block_len) * weight_update; % Weight update
43. end

44. % Error calculation


45. Err = estimated' - xn;
46. % Plot results for 25 filter coefficients
47. figure;
48. plot(xn);
49. title('Original Signal (25 Filter Coefficients)');
50. ylabel('Amplitude');
51. xlabel('Samples');

52. figure;
53. plot(n);
54. title('Noisy Signal (25 Filter Coefficients)');
55. ylabel('Amplitude');
56. xlabel('Samples');

57. figure;
58. plot(estimated);
59. title('Estimated Signal (25 Filter Coefficients)');
60. ylabel('Amplitude');
61. xlabel('Samples');

62. figure;
63. plot(Err);
64. title('Error Signal (25 Filter Coefficients)');
65. ylabel('Amplitude');
66. xlabel('Samples');

67. % block lms ( 50 Filter Coefficients )


68. block_len = 75; % Block length (L)
69. filter_len = 50; % Filter length (M)
70. n = xn + 0.5 * randn(1, n_samples); % Noisy signal
71. n_z = [n, zeros(1, filter_len)]; % Zero-padded noisy signal
72. weights = zeros(1, filter_len); % Initial weights
73. estimated = zeros(n_samples, 1); % Estimated signal array

74. for i = 1:n_samples/block_len


75. % Block processing
76. for j = 1:block_len
a. estimated((i-1)*block_len + j) = weights * transpose(n_z((i-
1)*block_len + j : (i-1)*block_len + j + filter_len - 1));
b. E((i-1)*block_len + j) = xn((i-1)*block_len + j) - estimated((i-
1)*block_len + j);
77. end
78. weight_update = 0;
79. for j = 1:block_len
80. weight_update = weight_update + E((i-1)*block_len + j) * n_z((i-
1)*block_len + j : (i-1)*block_len + j + filter_len - 1);
81. end
82. weights = weights + (u/block_len) * weight_update; % Weight update
83. end
84. % Error calculation
85. Err = estimated' - xn;
86. % Plot results for 50 filter coefficients
87. figure;
88. plot(xn);
89. title('Original Signal (50 Filter Coefficients)');
90. ylabel('Amplitude');
91. xlabel('Samples');
92. figure;
93. plot(n);
94. title('Noisy Signal (50 Filter Coefficients)');
95. ylabel('Amplitude');
96. xlabel('Samples');
97. figure;
98. plot(estimated);
99. title('Estimated Signal (50 Filter Coefficients)');
100. ylabel('Amplitude');
101. xlabel('Samples');
102. figure;
103. plot(Err);
104. title('Error Signal (50 Filter Coefficients)');
105. ylabel('Amplitude');
106. xlabel('Samples');

You might also like