Adaptive Equalization Techniques Using Recursive Least Square (RLS) Algorithm
Ravi Garg
Introduction:
In this project, we extend the method of least squares to obtain a recursive algorithm for adapting a transversal filter. Given the least-squares solution at time instant n-1, we compute the solution at time n recursively using the past solution and the newly arrived data. This algorithm is known as the Recursive Least Squares (RLS) algorithm. We show that the convergence rate of the RLS algorithm is faster than that of the LMS algorithm by comparing the learning curves of the two algorithms for specified channel responses.
Problem Formulation:
Suppose we want to transmit a digital message, which can be a sequence of bits corresponding to voltage levels in a modulation scheme, through a noisy communication channel with impulse response h(n). We simplify the channel model by assuming that the channel noise is AWGN (Additive White Gaussian Noise) and is independent of the transmitted signal. The received signal u(n) at the demodulator is then given by the following equation,

$$u(n) = \sum_{k=0}^{L-1} h(k)\, d(n-k) + v(n)$$
where d(n) is the transmitted digital message, L is the length of the FIR approximation of the channel distortion filter, and v(n) is the additive white Gaussian noise. Our aim is to determine an estimate $\hat{d}(n)$ such that the error between d(n) and $\hat{d}(n)$ is minimized. We use an L-tap transversal filter to determine $\hat{d}(n)$ from u(n). This system is shown in figure 1. The basic structure of the L-tap adaptive filter is shown in figure 2. It involves updating the tap-weight vector based on current and past data. Several algorithms have been proposed in the literature to solve this error-minimization problem. In this project, we use the Recursive Least Squares (RLS) algorithm.
[Figure 1: Block diagram of the system: the transmitted signal d(n) passes through the noisy channel to produce the received signal u(n), which is fed to the adaptive equalizer.]
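As a concrete illustration of the channel model above, the following MATLAB sketch generates a received sequence u(n) from a random message; the impulse response h, the message length N, and the noise variance are placeholder values chosen only for illustration, not quantities taken from this report.

% Minimal sketch of the channel model: u(n) = sum_k h(k) d(n-k) + v(n).
% The impulse response h, message length N, and noise power are
% illustrative placeholders, not the values used in the report.
N = 1000;                             % number of transmitted symbols
d = sign(randn(N, 1));                % random +/-1 message sequence
h = [0.3; 0.9; 0.3];                  % example FIR channel (L = 3 taps)
noiseVar = 0.01;                      % additive white Gaussian noise variance

u = filter(h, 1, d) ...               % channel distortion (FIR convolution)
    + sqrt(noiseVar) * randn(N, 1);   % AWGN, independent of d(n)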
Figure 2: A simple model for the adaptive equalizer.

Solving a least-squares problem with a recursive algorithm requires an initialization step. We then use the information contained in the new data samples to update the old estimates. In this way, the length of the observable data grows with time, so we define a weighted cost function to be minimized,

$$\mathcal{E}(n) = \sum_{i=1}^{n} \beta(n, i)\, |e(i)|^{2}$$
where $\mathcal{E}(n)$ is the cost function, $\beta(n, i)$ is the weighting term, and $e(i)$ is the error at instant i between the desired response d(i) and the output y(i) produced by the transversal filter whose input is the tap-input vector $\mathbf{u}(i)$. The relation between these quantities is given by the following equation,

$$e(i) = d(i) - y(i) = d(i) - \mathbf{w}^{H}(n)\,\mathbf{u}(i)$$
where $\mathbf{u}(i)$ is the tap-input vector at time i and $\mathbf{w}(n)$ is the tap-weight vector at time n. The weighting factor should be less than but close to unity; it corresponds to the memory of the system. Generally, we use an exponential weighting factor given by

$$\beta(n, i) = \lambda^{\,n-i}, \qquad i = 1, 2, \ldots, n.$$
$\lambda = 1$ indicates infinite memory, i.e., the ability of the system to remember all the past estimates. The solution of the least-squares problem at any time instant n is obtained by solving the following normal equation for $\hat{\mathbf{w}}(n)$,

$$\boldsymbol{\Phi}(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n)$$

where

$$\boldsymbol{\Phi}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{u}(i)\,\mathbf{u}^{H}(i) + \delta\,\lambda^{n}\,\mathbf{I}$$

and

$$\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{u}(i)\,d^{*}(i).$$

Here $\delta$ is a regularizing term. To implement the recursion, we use the following relation between $\boldsymbol{\Phi}(n)$ and $\boldsymbol{\Phi}(n-1)$,

$$\boldsymbol{\Phi}(n) = \lambda\,\boldsymbol{\Phi}(n-1) + \mathbf{u}(n)\,\mathbf{u}^{H}(n).$$
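To make the recursion concrete, the short MATLAB sketch below checks numerically that accumulating the correlation matrix by the rank-one update above reproduces the direct exponentially weighted sum; the dimensions, data, and parameter values are arbitrary illustrative choices.

% Numerical check (illustrative values) that
% Phi(n) = lambda*Phi(n-1) + u(n)*u(n)' reproduces the weighted sum
% Phi(n) = sum_i lambda^(n-i) u(i)u(i)' + delta*lambda^n*I.
M = 4; n = 50; lambda = 0.98; delta = 0.005;

U = randn(M, n);                       % columns are tap-input vectors u(i)
Phi = delta * eye(M);                  % Phi(0) = delta*I (regularization)
for i = 1:n
    Phi = lambda * Phi + U(:, i) * U(:, i)';   % rank-one recursive update
end

PhiDirect = delta * lambda^n * eye(M);
for i = 1:n
    PhiDirect = PhiDirect + lambda^(n - i) * U(:, i) * U(:, i)';
end

disp(norm(Phi - PhiDirect, 'fro'))     % should be numerically ~0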
To compute the inverse of $\boldsymbol{\Phi}(n)$ required by the normal equation, we use the matrix inversion lemma, which states that for two positive-definite M-by-M matrices A and B related by

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\,\mathbf{C}^{H},$$

where D is a positive-definite N-by-N matrix and C is an M-by-N matrix, the inverse of A is given by the following equation,

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\,\bigl(\mathbf{D} + \mathbf{C}^{H}\,\mathbf{B}\,\mathbf{C}\bigr)^{-1}\,\mathbf{C}^{H}\,\mathbf{B}.$$
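The identity can be sanity-checked numerically; the MATLAB sketch below uses small random positive-definite matrices purely as an illustration (the sizes are arbitrary).

% Numerical sanity check of the matrix inversion lemma with random
% positive-definite matrices (illustrative sizes only).
M = 5; N = 3;
B = randn(M); B = B * B' + M * eye(M);   % positive-definite M-by-M
D = randn(N); D = D * D' + N * eye(N);   % positive-definite N-by-N
C = randn(M, N);                         % M-by-N

A = inv(B) + C * inv(D) * C';            % A = B^-1 + C D^-1 C^H
Ainv = B - B * C * inv(D + C' * B * C) * C' * B;   % lemma

disp(norm(inv(A) - Ainv, 'fro'))         % should be close to zero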
Using this lemma and carrying out the recursive calculations, we obtain the RLS algorithm, which is summarized as follows. Initialize the algorithm by setting

$$\hat{\mathbf{w}}(0) = \mathbf{0}, \qquad \mathbf{P}(0) = \boldsymbol{\Phi}^{-1}(0) = \delta^{-1}\,\mathbf{I},$$

where $\delta$ is a small positive constant for high SNR and a large positive constant for low SNR. For each time instant n = 1, 2, ..., compute

$$\boldsymbol{\pi}(n) = \mathbf{P}(n-1)\,\mathbf{u}(n),$$
$$\mathbf{k}(n) = \frac{\boldsymbol{\pi}(n)}{\lambda + \mathbf{u}^{H}(n)\,\boldsymbol{\pi}(n)},$$
$$\xi(n) = d(n) - \hat{\mathbf{w}}^{H}(n-1)\,\mathbf{u}(n),$$
$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi^{*}(n),$$
$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1).$$
We note from the above summary of the algorithm that the computation of the gain vector $\mathbf{k}(n)$ proceeds in two stages: first, an intermediate quantity denoted by $\boldsymbol{\pi}(n)$ is computed; then, $\boldsymbol{\pi}(n)$ is used to compute $\mathbf{k}(n)$.
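A minimal MATLAB sketch of these update equations is given below. The function name rls_equalize and its interface are our own choices for illustration (the function would be saved as rls_equalize.m), not part of the original report.

% Minimal sketch of the RLS update equations summarized above.
% Save as rls_equalize.m; the name and interface are illustrative.
function [w, e] = rls_equalize(u, d, M, lambda, delta)
    % u      : received (equalizer input) sequence, column vector
    % d      : desired (training) sequence, column vector
    % M      : number of equalizer taps
    % lambda : exponential weighting (forgetting) factor, close to 1
    % delta  : regularization constant for P(0) = delta^-1 * I
    N = length(u);
    w = zeros(M, 1);                 % tap-weight vector w(0) = 0
    P = eye(M) / delta;              % P(0) = delta^-1 * I
    e = zeros(N, 1);                 % a priori estimation errors

    uvec = zeros(M, 1);              % tap-input vector u(n)
    for n = 1:N
        uvec = [u(n); uvec(1:M-1)];          % shift in the new sample
        piv  = P * uvec;                     % pi(n) = P(n-1) u(n)
        k    = piv / (lambda + uvec' * piv); % gain vector k(n)
        e(n) = d(n) - w' * uvec;             % a priori error xi(n)
        w    = w + k * conj(e(n));           % tap-weight update
        P    = (P - k * (uvec' * P)) / lambda;   % update of P(n)
    end
end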
Experimental Results:
In this section, we present the experimental results obtained by simulating the RLS algorithm in MATLAB. A random signal with amplitude 1 is transmitted through three realizations of the channel. The impulse responses of the three realizations are denoted by h1, h2, and h3, respectively, and their transfer functions are given by,
White noise is added to the output of the channel filter such that the resulting SNR is 20 dB. The value of the regularization constant $\delta$ is taken as 0.005. A 21-tap FIR filter is used for channel equalization, and the initial values of all its taps are set to zero.
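The MATLAB sketch below outlines how such an experiment could be set up. The channel taps hExample, the choice lambda = 1, and the use of the rls_equalize function sketched earlier are illustrative assumptions; the actual transfer functions h1, h2, h3 are those referred to above, not the placeholder used here.

% Illustrative setup of one equalization experiment. The channel taps
% hExample are placeholders (not the report's h1, h2, or h3) and
% rls_equalize is the sketch given earlier, assumed to be on the path.
N      = 500;                          % number of training symbols
M      = 21;                           % 21-tap FIR equalizer
lambda = 1;                            % weighting factor (illustrative choice)
delta  = 0.005;                        % regularization constant
snrdB  = 20;                           % target SNR in dB

d = sign(randn(N, 1));                 % random +/-1 training signal
hExample = [0.25; 1; 0.25];            % placeholder channel impulse response
x = filter(hExample, 1, d);            % channel output before noise
noiseVar = mean(x.^2) / 10^(snrdB/10); % noise power for 20 dB SNR
u = x + sqrt(noiseVar) * randn(N, 1);  % received signal

[w, e] = rls_equalize(u, d, M, lambda, delta);   % run the RLS equalizer
semilogy(e.^2); xlabel('iteration n'); ylabel('squared error');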
Learning Curves:
Learning curves for all three channel responses are shown in figure 3. From these curves, we observe that the number of iterations required for convergence is approximately 30-40. This is a very small number of iterations compared with the LMS algorithm, where the number of iterations required was on the order of thousands. The result can be attributed to the fact that the RLS algorithm computes the least-squares solution at every iteration; hence the convergence is faster. A comparison of the learning curves for the LMS and RLS algorithms is shown in figure 5 for the channel impulse responses h1(n), h2(n) and h3(n), respectively. Another important point to note is that the convergence of the RLS algorithm is essentially independent of the input signal; this is shown in figure 4. In the LMS algorithm, the convergence rate depends on the eigenvalue spread of the input signal, while in the RLS algorithm the number of iterations required is roughly 2M, where M is the number of filter taps.
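Learning curves such as those in figures 3-5 are typically obtained by averaging the squared a priori error over many independent trials. The sketch below illustrates this ensemble averaging, again using the placeholder channel and the rls_equalize function sketched earlier; both are assumptions made only for illustration.

% Illustrative computation of an ensemble-averaged learning curve.
% Uses the placeholder channel and the rls_equalize sketch from above.
numTrials = 200; N = 500; M = 21; lambda = 1; delta = 0.005; snrdB = 20;
hExample  = [0.25; 1; 0.25];           % placeholder channel, not h1/h2/h3
mse = zeros(N, 1);

for t = 1:numTrials
    d = sign(randn(N, 1));             % fresh message each trial
    x = filter(hExample, 1, d);
    noiseVar = mean(x.^2) / 10^(snrdB/10);
    u = x + sqrt(noiseVar) * randn(N, 1);
    [~, e] = rls_equalize(u, d, M, lambda, delta);
    mse = mse + e.^2 / numTrials;      % accumulate the ensemble average
end

semilogy(mse); xlabel('iteration n'); ylabel('mean squared error');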
Effect of lambda:
Figure 4: Comparison of learning curves for the three impulse responses using the RLS algorithm.
Figure 5: Comparison of the RLS algorithm with the LMS algorithm for the three impulse responses.
Figure 6 shows the weight vector of the equalizer filter after convergence is attained. These tap weights are the same weights that were obtained when the LMS algorithm was used for equalization.
Figure 6: Tap-weight vector of the adaptive transversal filter after convergence of the RLS algorithm.
Conclusion:
In this project, we studied the Recursive Least Squares (RLS) algorithm and showed that it converges at a much faster rate than the Least Mean Squares (LMS) algorithm. We also showed that, unlike the LMS algorithm, its convergence is essentially independent of the eigenvalue spread of the input signal. The RLS algorithm generally takes about 2M iterations to converge, where M is the number of filter taps. Finally, the tap vector of the equalized transversal filter was found to be essentially the same as the tap vector obtained with the LMS algorithm. Hence, it can be concluded that the RLS algorithm outperforms the LMS algorithm in almost all of the performance measures considered here.