
Scilab Code for Implementing the LMS Algorithm (Function) for p = 2

The document contains Scilab code to implement the LMS adaptive filtering algorithm. It defines an lms function that takes in training and input signals, an initial weight vector, and step size to return the weight vectors and errors over iterations. It then calls this function with different step sizes, plots the weight vectors and errors over time, and concludes that a larger step size of 0.1 converges faster with lower error compared to smaller step sizes of 0.001 and 0.01.


Scilab code for implementing the LMS algorithm (function) for p = 2:
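For reference, the function below implements the standard LMS recursion for a 2-tap (p = 2) filter: at sample n the tap-input vector is xn = [x(n) ; x(n-1)], the filter output is y(n) = wn'*xn, the error is e(n) = d(n) - y(n), and the weights are updated as w(n+1) = wn + u*e(n)*xn.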

function [wn,en] = lms(s1,s2,u,w0)


d = fscanfMat(s1); // training signal
x = fscanfMat(s2); // input data

N = length(x); // number of samples (= number of iterations)

y = zeros(N,1); // filter output

w1 = w0; // initially, in the absence of past samples, w1 = w0

wn = cat(3,w1);
p = 2; // filter order

x2 = [x(1) ; 0]; // first tap-input vector (no past sample available yet)
y(1) = (x2')*(wn(:,:,1)); // output for the first sample
en(1) = d(1) - y(1); // error for the first sample

w2 = w1 + u*en(1)*x2; // LMS update for the first sample
wn = cat(3,wn,w2);

for i = 2 : N
    x1 = [x(i) ; x(i-1)]; // current tap-input vector
    y(i) = (x1')*(wn(:,:,i)); // output using the current (latest) weights
    en(i) = d(i) - y(i);

    // LMS weight update
    temp_w = wn(:,:,i) + u*en(i)*x1;
    wn = cat(3,wn,temp_w);
end

endfunction;
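As a quick way to exercise the function (not part of the original data files), a minimal sketch that identifies a hypothetical 2-tap system from synthetic signals; the file names under TMPDIR and the weights w_true are assumptions made only for this example:

// sketch only: hypothetical 2-tap system identification with synthetic data
w_true = [0.5 ; -0.3]; // assumed "true" weights to be identified
N = 2000;
x = rand(N,1,'normal'); // white-noise input
d = zeros(N,1);
d(1) = w_true(1)*x(1);
for n = 2:N
    d(n) = w_true(1)*x(n) + w_true(2)*x(n-1); // desired (training) signal
end
fprintfMat(TMPDIR + '/x.txt', x); // write the signals to temporary files
fprintfMat(TMPDIR + '/d.txt', d);
[wn, en] = lms(TMPDIR + '/d.txt', TMPDIR + '/x.txt', 0.01, [0;0]);
disp(wn(:,:,$)); // final weights should be close to w_true

With u = 0.01 the final weight vector should settle near w_true after a few hundred samples, which also gives a simple check that the weight-trajectory plots below behave as expected.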

Code for generating the plots and applying LMS to the actual data:

clear;
clc;
d = 'C:\Users\nikunj patel\Desktop\d.txt'; // path of the training-signal file
x = 'C:\Users\nikunj patel\Desktop\x.txt'; // path of the input-data file

w0 = [0 ; 0];

u1 = 0.001; // defining different step sizes


u2 = 0.01;
u3 = 0.1;
[wn1, en1] = lms(d,x,u1,w0); // calling lms function
[wn2, en2] = lms(d,x,u2,w0);
[wn3, en3] = lms(d,x,u3,w0);

for i = 1:2000 // extract the first 2000 weight iterations for plotting
wn1_0(i) = wn1(1,1,i);
wn1_1(i) = wn1(2,1,i);
wn2_0(i) = wn2(1,1,i);
wn2_1(i) = wn2(2,1,i);
wn3_0(i) = wn3(1,1,i);
wn3_1(i) = wn3(2,1,i);
end
i = 1:2000;
plot(i,wn1_0,'r',i,wn2_0,'g',i,wn3_0,'b'); // plotting wn[0]
title('Wn[0] versus n');
xlabel('n');
ylabel('Wn[0]')
legend('u=0.001','u=0.01','u=0.1');
figure;

plot(i,wn1_1,'r',i,wn2_1,'g',i,wn3_1,'b'); // plotting wn[1]


title('Wn[1] versus n');
xlabel('n');
ylabel('Wn[1]');
legend('u=0.001','u=0.01','u=0.1');

figure;

subplot(3,1,1);

// plotting e[n] for different step sizes

plot(en1);
title('error for step size u=0.001');
xlabel('n');
ylabel('e[n]');

subplot(3,1,2);
plot(en2);
title('error for step size u=0.01');
xlabel('n');
ylabel('e[n]');
subplot(3,1,3);
plot(en3);
title('error for step size u=0.1');
xlabel('n');
ylabel('e[n]');

Plots:

1) Wn[0] versus n

2) Wn[1] versus n

3) error e[n] versus n for step sizes u = 0.001, 0.01, 0.1

Conclusion about step size u:

1) mean square error for step size u = 0.001 is 1.0403011
   mean square error for step size u = 0.01 is 0.0935675
   mean square error for step size u = 0.1 is 0.0113189

2) From the graphs we can also see that step size u = 0.001 is not able to track changes in the
input data, i.e. the error e[n] does not become close to zero after a few iterations, which was
observed for u = 0.01 and u = 0.1. Hence wn takes a very long time to converge to the optimal
filter, which results in a large mean square error for the small step size. In the case of step
size u = 0.1, wn converges very quickly to the optimal filter and the mean square error is also
small. In general, smaller step sizes require more iterations (input samples of training data) to
converge.
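One way to obtain mean square error values like the ones quoted above is to average the squared error sequences returned by lms; this is only a sketch, assuming the variables en1, en2 and en3 from the calls earlier in the script:

// sketch: mean square error for each step size
mse1 = mean(en1.^2); // u = 0.001
mse2 = mean(en2.^2); // u = 0.01
mse3 = mean(en3.^2); // u = 0.1
mprintf("MSE: u=0.001 -> %f, u=0.01 -> %f, u=0.1 -> %f\n", mse1, mse2, mse3);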
