10 Adaptive Equalization

In Chapter 9, we introduced both optimum and suboptimum receivers that compensate for ISI in the transmission of digital information through band-limited, nonideal channels. The optimum receiver employed maximum-likelihood sequence estimation for detecting the information sequence from the samples of the demodulation filter. The suboptimum receivers employed either a linear equalizer or a decision-feedback equalizer. In the development of these three equalization methods, we implicitly assumed that the channel characteristics, either the impulse response or the frequency response, were known at the receiver. However, in most communication systems that employ equalizers, the channel characteristics are unknown a priori and, in many cases, the channel response is time-variant. In such a case, the equalizers are designed to be adjustable to the channel response and, for time-variant channels, to be adaptive to the time variations in the channel response.

In this chapter, we present algorithms for automatically adjusting the equalizer coefficients to optimize a specified performance index and to adaptively compensate for time variations in the channel characteristics. We also analyze the performance characteristics of the algorithms, including their rate of convergence and their computational complexity.

10.1 ADAPTIVE LINEAR EQUALIZER

In the case of the linear equalizer, recall that we considered two different criteria for determining the values of the equalizer coefficients \{c_k\}. One criterion was based on the minimization of the peak distortion at the output of the equalizer, which is defined by Equation 9.4-22. The other criterion was based on the minimization of the mean square error at the output of the equalizer, which is defined by Equation 9.4-42. Below, we describe two algorithms for performing the optimization automatically and adaptively.
10.1-1 The Zero-Forcing Algorithm

In the peak-distortion criterion, the peak distortion D(c), given by Equation 9.4-22, is minimized by selecting the equalizer coefficients \{c_k\}. In general, there is no simple computational algorithm for performing this optimization, except in the special case where the peak distortion at the input to the equalizer, defined as D_0 in Equation 9.4-23, is less than unity. When D_0 < 1, the distortion D(c) at the output of the equalizer is minimized by forcing the equalizer response q_n = 0 for 1 \le |n| \le K, and q_0 = 1. In this case, there is a simple computational algorithm, called the zero-forcing algorithm, that achieves these conditions.

The zero-forcing solution is achieved by forcing the cross correlation between the error sequence \varepsilon_k = I_k - \hat{I}_k and the desired information sequence \{I_k\} to be zero for shifts in the range 0 \le |n| \le K. The demonstration that this leads to the desired solution is quite simple. We have

E(\varepsilon_k I^*_{k-j}) = E[(I_k - \hat{I}_k) I^*_{k-j}] = E(I_k I^*_{k-j}) - E(\hat{I}_k I^*_{k-j}), \quad j = -K, \ldots, K    (10.1-1)

We assume that the information symbols are uncorrelated, i.e., E(I_k I^*_j) = \delta_{kj}, and that the information sequence \{I_k\} is uncorrelated with the additive noise sequence \{\eta_k\}. For \hat{I}_k, we use the expression given in Equation 9.4-41. Then, after taking the expected values in Equation 10.1-1, we obtain

E(\varepsilon_k I^*_{k-j}) = \delta_{j0} - q_j, \quad j = -K, \ldots, K    (10.1-2)

Therefore, the conditions

E(\varepsilon_k I^*_{k-j}) = 0, \quad j = -K, \ldots, K    (10.1-3)

are fulfilled when q_0 = 1 and q_n = 0 for 1 \le |n| \le K.

When the channel response is unknown, the cross correlations given by Equation 10.1-1 are also unknown. This difficulty can be circumvented by transmitting a known training sequence \{I_k\} to the receiver, which can be used to estimate the cross correlation by substituting time averages for the ensemble averages given in Equation 10.1-1.
After the initial training, which will require the transmission of a training sequence of some predetermined length that equals or exceeds the equalizer length, the equalizer coefficients that satisfy Equation 10.1-3 can be determined. A simple recursive algorithm for adjusting the equalizer coefficients is

c_j^{(k+1)} = c_j^{(k)} + \Delta \varepsilon_k I^*_{k-j}, \quad j = -K, \ldots, -1, 0, 1, \ldots, K    (10.1-4)

where c_j^{(k)} is the value of the jth coefficient at time t = kT, \varepsilon_k = I_k - \hat{I}_k is the error signal at time t = kT, and \Delta is a scale factor that controls the rate of adjustment, as will be explained later in this section. This is the zero-forcing algorithm. The term \varepsilon_k I^*_{k-j} is an estimate of the cross correlation (ensemble average) E(\varepsilon_k I^*_{k-j}). The averaging operation of the cross correlation is accomplished by means of the recursive first-order difference equation algorithm in Equation 10.1-4, which represents a simple discrete-time integrator.

FIGURE 10.1-1 An adaptive zero-forcing equalizer.

Following the training period, after which the equalizer coefficients have converged to their optimum values, the decisions at the output of the detector are generally sufficiently reliable so that they may be used to continue the coefficient adaptation process. This is called a decision-directed mode of adaptation. In such a case, the cross correlations in Equation 10.1-4 involve the error signal \tilde{\varepsilon}_k = \tilde{I}_k - \hat{I}_k and the detected output sequence \tilde{I}_{k-j}, j = -K, \ldots, K. Thus, in the adaptive mode, Equation 10.1-4 becomes

c_j^{(k+1)} = c_j^{(k)} + \Delta \tilde{\varepsilon}_k \tilde{I}^*_{k-j}    (10.1-5)

Figure 10.1-1 illustrates the zero-forcing equalizer in the training mode and the adaptive mode of operation. The characteristics of the zero-forcing algorithm are similar to those of the least-mean-square (LMS) algorithm, which minimizes the MSE and which is described in detail in the following section.
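As an illustration, the training-mode update of Equation 10.1-4 takes only a few lines. The following Python fragment is a minimal simulation sketch; the channel taps, equalizer length, noise level, and step size are illustrative assumptions, not values from the text (the conjugate is omitted because the signals are real):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mild ISI channel: input peak distortion D0 = 0.35 < 1, so zero forcing applies
channel = np.array([1.0, 0.25, 0.1])
K = 5                                        # equalizer spans 2K + 1 = 11 taps
N = 4000
symbols = rng.choice([-1.0, 1.0], size=N)    # known training sequence {I_k}
received = np.convolve(symbols, channel)[:N] + 0.01 * rng.standard_normal(N)

delta = 0.01                                 # scale factor controlling the adjustment rate
c = np.zeros(2 * K + 1)                      # tap c[m] holds coefficient c_j with j = m - K
c[K] = 1.0                                   # center-spike initialization

for k in range(K, N - K):
    v_win = received[k - K : k + K + 1][::-1]   # [v_{k+K}, ..., v_k, ..., v_{k-K}]
    i_win = symbols[k - K : k + K + 1][::-1]    # [I_{k+K}, ..., I_k, ..., I_{k-K}]
    eps = symbols[k] - c @ v_win                # error eps_k = I_k - Ihat_k
    c += delta * eps * i_win                    # zero-forcing update, Equation 10.1-4
```

Because the update correlates the error with the transmitted symbols rather than with the received samples, it drives E(\varepsilon_k I^*_{k-j}) toward zero, which is the condition of Equation 10.1-3.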
10.1-2 The LMS Algorithm

In the minimization of the MSE, treated in Section 9.4-2, we found that the optimum equalizer coefficients are determined from the solution of the set of linear equations, expressed in matrix form as

\Gamma C = \xi    (10.1-6)

where \Gamma is the (2K+1) \times (2K+1) covariance matrix of the signal samples \{v_k\}, C is the column vector of (2K+1) equalizer coefficients, and \xi is a (2K+1)-dimensional column vector of channel filter coefficients. The solution for the optimum equalizer coefficient vector C_{opt} can be determined by inverting the covariance matrix \Gamma, which can be efficiently performed by use of the Levinson-Durbin algorithm (see Levinson (1947) and Durbin (1959)).

Alternatively, an iterative procedure that avoids the direct matrix inversion may be used to compute C_{opt}. Probably the simplest iterative procedure is the method of steepest descent, in which one begins by arbitrarily choosing the vector C, say as C_0. This initial choice of coefficients corresponds to some point on the quadratic MSE surface in the (2K+1)-dimensional space of coefficients. The gradient vector G_0, having the 2K+1 gradient components \frac{1}{2}\,\partial J / \partial c_j, j = -K, \ldots, -1, 0, 1, \ldots, K, is then computed at this point on the MSE surface, and each tap weight is changed in the direction opposite to its corresponding gradient component. The change in the jth tap weight is proportional to the size of the jth gradient component. Thus, succeeding values of the coefficient vector C are obtained according to the relation

C_{k+1} = C_k - \Delta G_k, \quad k = 0, 1, 2, \ldots    (10.1-7)

where the gradient vector G_k is

G_k = \frac{1}{2}\,\frac{dJ}{dC_k} = \Gamma C_k - \xi = -E(\varepsilon_k V^*_k)    (10.1-8)

The vector C_k represents the set of coefficients at the kth iteration, \varepsilon_k = I_k - \hat{I}_k is the error signal at the kth iteration, V_k is the vector of received signal samples that make up the estimate \hat{I}_k, i.e., V_k = [v_{k+K} \cdots v_k \cdots v_{k-K}]^T, and \Delta is a positive number chosen small enough to ensure convergence of the iterative procedure.
If the minimum MSE is reached for some k = k_0, then G_{k_0} = 0, so that no further change occurs in the tap weights. In general, J_{min}(K) cannot be attained for a finite value of k_0 with the steepest-descent method. It can, however, be approached as closely as desired for some finite value of k_0.

The basic difficulty with the method of steepest descent for determining the optimum tap weights is the lack of knowledge of the gradient vector G_k, which depends on both the covariance matrix \Gamma and the vector \xi of cross correlations. In turn, these quantities depend on the coefficients \{f_k\} of the equivalent discrete-time channel model and on the covariance of the information sequence and the additive noise, all of which may be unknown at the receiver in general. To overcome the difficulty, estimates of the gradient vector may be used. That is, the algorithm for adjusting the tap weight coefficients may be expressed in the form

\hat{C}_{k+1} = \hat{C}_k - \Delta \hat{G}_k    (10.1-9)

where \hat{G}_k denotes an estimate of the gradient vector G_k and \hat{C}_k denotes the estimate of the vector of coefficients. From Equation 10.1-8 we note that G_k is the negative of the expected value of \varepsilon_k V^*_k. Consequently, an estimate of G_k is

\hat{G}_k = -\varepsilon_k V^*_k    (10.1-10)

FIGURE 10.1-2 Linear adaptive equalizer based on the MSE criterion.

Since E(\hat{G}_k) = G_k, the estimate \hat{G}_k is an unbiased estimate of the true gradient vector G_k. Incorporation of Equation 10.1-10 into Equation 10.1-9 yields the algorithm

\hat{C}_{k+1} = \hat{C}_k + \Delta \varepsilon_k V^*_k    (10.1-11)

This is the basic LMS algorithm for recursively adjusting the tap weight coefficients of the equalizer, as described by Widrow (1966). It is illustrated in the equalizer shown in Figure 10.1-2.

The basic algorithm given by Equation 10.1-11 and some of its possible variations have been incorporated into many commercial adaptive equalizers that are used in high-speed modems.
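In sketch form, the LMS recursion of Equation 10.1-11 is equally short. The fragment below uses an assumed real-valued channel (so conjugates are omitted) and a known training sequence; the channel taps, noise level, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

channel = np.array([1.0, 0.25, 0.25])        # assumed ISI channel
K = 5
N = 5000
symbols = rng.choice([-1.0, 1.0], size=N)
received = np.convolve(symbols, channel)[:N] + 0.05 * rng.standard_normal(N)

delta = 0.02                                 # step size, well inside the stability bound here
C = np.zeros(2 * K + 1)

for k in range(K, N - K):
    V = received[k - K : k + K + 1][::-1]    # V_k = [v_{k+K}, ..., v_k, ..., v_{k-K}]
    eps = symbols[k] - C @ V                 # error eps_k = I_k - Ihat_k
    C = C + delta * eps * V                  # LMS update, Equation 10.1-11
```

The only difference from the zero-forcing update is that the correction term correlates the error with the received samples V_k rather than with the information symbols.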
Three variations of the basic algorithm are obtained by using only the sign information contained in the error signal \varepsilon_k and/or in the components of V_k. Hence, the three possible variations are

c_j^{(k+1)} = c_j^{(k)} + \Delta\,\mathrm{csgn}(\varepsilon_k)\, v^*_{k-j}, \quad j = -K, \ldots, -1, 0, 1, \ldots, K    (10.1-12)

c_j^{(k+1)} = c_j^{(k)} + \Delta\, \varepsilon_k\, \mathrm{csgn}(v^*_{k-j}), \quad j = -K, \ldots, -1, 0, 1, \ldots, K    (10.1-13)

c_j^{(k+1)} = c_j^{(k)} + \Delta\,\mathrm{csgn}(\varepsilon_k)\,\mathrm{csgn}(v^*_{k-j}), \quad j = -K, \ldots, -1, 0, 1, \ldots, K    (10.1-14)

where csgn(x) is defined as

\mathrm{csgn}(x) =
\begin{cases}
1 + j & [\mathrm{Re}(x) > 0,\ \mathrm{Im}(x) > 0] \\
1 - j & [\mathrm{Re}(x) > 0,\ \mathrm{Im}(x) < 0] \\
-1 + j & [\mathrm{Re}(x) < 0,\ \mathrm{Im}(x) > 0] \\
-1 - j & [\mathrm{Re}(x) < 0,\ \mathrm{Im}(x) < 0]
\end{cases}    (10.1-15)

(Note that in Equation 10.1-15, j = \sqrt{-1}, as distinct from the index j in Equations 10.1-12 to 10.1-14.) Clearly, the algorithm in Equation 10.1-14 is the most easily implemented, but it gives the slowest rate of convergence relative to the others.

Several other variations of the LMS algorithm are obtained by averaging or filtering the gradient vectors over several iterations prior to making adjustments of the equalizer coefficients. For example, the average over N gradient vectors is

\hat{G}_{kN} = -\frac{1}{N}\sum_{n=0}^{N-1} \varepsilon_{kN+n} V^*_{kN+n}    (10.1-16)

and the corresponding recursive equation for updating the equalizer coefficients once every N iterations is

\hat{C}_{(k+1)N} = \hat{C}_{kN} - \Delta \hat{G}_{kN}    (10.1-17)

In effect, the averaging operation performed in Equation 10.1-16 reduces the noise in the estimate of the gradient vector, as shown by Gardner (1984).

An alternative approach is to filter the noisy gradient vectors by a low-pass filter and use the output of the filter as an estimate of the gradient vector. For example, a simple low-pass filter for the noisy gradients yields as an output

\tilde{G}_k = w \tilde{G}_{k-1} + (1 - w)\hat{G}_k, \quad \tilde{G}(0) = \hat{G}(0)    (10.1-18)

where the choice of 0 \le w < 1 determines the bandwidth of the low-pass filter. When w is close to unity, the filter bandwidth is small and the effective averaging is performed over many gradient vectors. On the other hand, when w is small, the low-pass filter has a large bandwidth and, hence, it provides little averaging of the gradient vectors.
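The sign-sign variant of Equation 10.1-14 can be sketched as follows for complex (QPSK) signals; the channel, step size, and sequence lengths are assumptions chosen only to make the fragment self-contained:

```python
import numpy as np

def csgn(x):
    """Complex sign function of Equation 10.1-15: sgn(Re x) + j sgn(Im x)."""
    return np.sign(np.real(x)) + 1j * np.sign(np.imag(x))

rng = np.random.default_rng(2)

channel = np.array([1.0, 0.2 + 0.1j, 0.15 - 0.05j])   # assumed complex ISI channel
K = 5
N = 8000
symbols = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
noise = 0.03 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
received = np.convolve(symbols, channel)[:N] + noise

delta = 0.002                                # sign algorithms typically need a smaller step
C = np.zeros(2 * K + 1, dtype=complex)

for k in range(K, N - K):
    V = received[k - K : k + K + 1][::-1]
    eps = symbols[k] - C @ V
    C = C + delta * csgn(eps) * csgn(np.conj(V))   # sign-sign update, Equation 10.1-14
```

Replacing csgn(eps) by eps, or csgn(np.conj(V)) by np.conj(V), gives the variants of Equations 10.1-13 and 10.1-12, respectively.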
With the filtered gradient vectors given by Equation 10.1-18 in place of \hat{G}_k, we obtain the filtered gradient LMS algorithm given by

\hat{C}_{k+1} = \hat{C}_k - \Delta \tilde{G}_k    (10.1-19)

In the above discussion, it has been assumed that the receiver has knowledge of the transmitted information sequence in forming the error signal between the desired symbol and its estimate. Such knowledge can be made available during a short training period in which a signal with a known information sequence is transmitted to the receiver for initially adjusting the tap weights. The length of this sequence must be at least as large as the length of the equalizer so that the spectrum of the transmitted signal adequately covers the bandwidth of the channel being equalized.

In practice, the training sequence is often selected to be a periodic pseudorandom sequence, such as a maximum-length shift-register sequence whose period N is equal to the length of the equalizer (N = 2K + 1). In this case, the gradient is usually averaged over the length of the sequence as indicated in Equation 10.1-16, and the equalizer is adjusted once a period according to Equation 10.1-17. This approach has been called cyclic equalization and has been treated in the papers by Mueller and Spaulding (1975) and Qureshi (1977, 1985). A practical scheme for continuous adjustment of the tap weights may be either a decision-directed mode of operation, in which decisions on the information symbols are assumed to be correct and used in place of I_k in forming the error signal \varepsilon_k, or one in which a known pseudorandom probe sequence is inserted in the information-bearing signal, either additively or by interleaving in time, and the tap weights adjusted by comparing the received probe symbols with the known transmitted probe symbols. In the decision-directed mode of operation, the error signal becomes \tilde{\varepsilon}_k = \tilde{I}_k - \hat{I}_k, where \tilde{I}_k is the decision of the receiver based on the estimate \hat{I}_k.
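The filtered-gradient recursion of Equations 10.1-18 and 10.1-19 differs from plain LMS only in the two lines that maintain the low-pass filter state. A minimal sketch, with assumed channel and parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

channel = np.array([1.0, 0.25, 0.25])        # assumed ISI channel
K = 5
N = 6000
symbols = rng.choice([-1.0, 1.0], size=N)
received = np.convolve(symbols, channel)[:N] + 0.05 * rng.standard_normal(N)

delta, w = 0.02, 0.9                         # w near 1: narrow filter, heavy gradient averaging
C = np.zeros(2 * K + 1)
G_filt = np.zeros(2 * K + 1)                 # low-pass filtered gradient estimate

for k in range(K, N - K):
    V = received[k - K : k + K + 1][::-1]
    eps = symbols[k] - C @ V
    G_hat = -eps * V                         # noisy gradient estimate, Equation 10.1-10
    G_filt = w * G_filt + (1 - w) * G_hat    # first-order low-pass filter, Equation 10.1-18
    C = C - delta * G_filt                   # filtered-gradient LMS, Equation 10.1-19
```

With w = 0.9, each update effectively averages roughly the last ten noisy gradients, trading a small amount of lag for a quieter gradient estimate.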
As long as the receiver is operating at low error rates, an occasional error will have a negligible effect on the convergence of the algorithm. If the channel response changes, this change is reflected in the coefficients \{f_k\} of the equivalent discrete-time channel model. It is also reflected in the error signal \varepsilon_k, since it depends on \{f_k\}. Hence, the tap weights will be changed according to Equation 10.1-11 to reflect the change in the channel. A similar change in the tap weights occurs if the statistics of the noise or the information sequence change. Thus, the equalizer is adaptive.

10.1-3 Convergence Properties of the LMS Algorithm

The convergence properties of the LMS algorithm given by Equation 10.1-11 are governed by the step-size parameter \Delta. We shall now consider the choice of the parameter \Delta to ensure convergence of the steepest-descent algorithm in Equation 10.1-7, which employs the exact value of the gradient. From Equations 10.1-7 and 10.1-8, we have

C_{k+1} = C_k - \Delta G_k = (I - \Delta \Gamma) C_k + \Delta \xi    (10.1-20)

where I is the identity matrix, \Gamma is the autocorrelation matrix of the received signal, C_k is the (2K+1)-dimensional vector of equalizer tap gains, and \xi is the vector of cross correlations given by Equation 9.4-45. The recursive relation in Equation 10.1-20 can be represented as a closed-loop control system, as shown in Figure 10.1-3. Unfortunately, the set of 2K+1 first-order difference equations in Equation 10.1-20 is coupled through the autocorrelation matrix \Gamma. In order to solve these equations and, thus, establish the convergence properties of the recursive algorithm, it is mathematically convenient to decouple the equations by performing a linear transformation. The appropriate transformation is obtained by noting that the matrix \Gamma is Hermitian and, hence, can be represented as

\Gamma = U \Lambda U^H    (10.1-21)

where U is the normalized modal matrix of \Gamma and \Lambda is a diagonal matrix with diagonal elements equal to the eigenvalues of \Gamma (see Appendix A). When Equation 10.1-21 is substituted into Equation 10.1-20 and if we define the transformed (orthogonalized) vectors C^\circ_k = U^H C_k and \xi^\circ = U^H \xi, we obtain

C^\circ_{k+1} = (I - \Delta \Lambda) C^\circ_k + \Delta \xi^\circ    (10.1-22)

This set of first-order difference equations is now decoupled. Their convergence is determined from the homogeneous equation

C^\circ_{k+1} = (I - \Delta \Lambda) C^\circ_k    (10.1-23)

We see that the recursive relation will converge provided that all the poles lie inside the unit circle, i.e.,

|1 - \Delta \lambda_k| < 1, \quad k = -K, \ldots, -1, 0, 1, \ldots, K    (10.1-24)

where \{\lambda_k\} is the set of 2K+1 (possibly nondistinct) eigenvalues of \Gamma. Since \Gamma is an autocorrelation matrix, it is positive-definite and, hence, \lambda_k > 0 for all k. Consequently, convergence of the recursive relation in Equation 10.1-22 is ensured if \Delta satisfies the inequality

0 < \Delta < \frac{2}{\lambda_{max}}    (10.1-25)

where \lambda_{max} is the largest eigenvalue of \Gamma.

FIGURE 10.1-3 Closed-loop control system representation of the recursive relation in Equation 10.1-20.
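The role of the bound in Equation 10.1-25 is easy to verify numerically: build the autocorrelation matrix \Gamma for an assumed channel, then check the spectral radius of (I - \Delta \Gamma), which governs the recursion of Equation 10.1-20, for a step size inside and outside the bound. The channel taps and noise level below are illustrative assumptions:

```python
import numpy as np

# Assumed equivalent discrete-time channel and noise level (illustrative values)
f = np.array([0.3, 1.0, 0.3])
noise_var = 0.01
K = 5
n_taps = 2 * K + 1

# Autocorrelation sequence of the received samples: sum_k f_k f_{k+m} + noise_var * delta_m
r = np.correlate(f, f, mode="full")          # lags -(L-1) .. (L-1)
acf = np.zeros(n_taps)
acf[: len(f)] = r[len(f) - 1 :]              # keep lags 0 .. L-1; longer lags are zero
acf[0] += noise_var
Gamma = np.array([[acf[abs(i - j)] for j in range(n_taps)] for i in range(n_taps)])

lam = np.linalg.eigvalsh(Gamma)
step_bound = 2.0 / lam.max()                 # convergence condition, Equation 10.1-25

for delta in (0.5 * step_bound, 1.5 * step_bound):
    rho = max(abs(np.linalg.eigvals(np.eye(n_taps) - delta * Gamma)))
    print(f"delta = {delta:.3f}: spectral radius {rho:.3f}",
          "-> converges" if rho < 1 else "-> diverges")
```

For any \Delta inside the bound every factor |1 - \Delta \lambda_k| is below 1 and the mean tap vector converges; beyond the bound, the mode associated with \lambda_{max} diverges.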
10.1-4 Excess MSE Due to Noisy Gradient Estimates

In terms of the transformed (orthogonalized) coefficients, the total MSE can be expressed as

J = J_{min} + \sum_{k=-K}^{K} \lambda_k \left| c^\circ_k - c^\circ_{opt,k} \right|^2    (10.1-28)

where the \{c^\circ_k\} are the set of transformed equalizer coefficients. The excess MSE is the expected value of the second term in Equation 10.1-28, i.e.,

J_\Delta = \sum_{k=-K}^{K} \lambda_k E\left| c^\circ_k - c^\circ_{opt,k} \right|^2    (10.1-29)

It has been shown by Widrow (1970) that the excess MSE is

J_\Delta = \Delta^2 J_{min} \sum_{k=-K}^{K} \frac{\lambda_k^2}{1 - (1 - \Delta \lambda_k)^2}    (10.1-30)

The expression in Equation 10.1-30 can be simplified when \Delta is selected such that \Delta \lambda_k \ll 1 for all k. Then

J_\Delta \approx \frac{1}{2}\Delta J_{min} \sum_{k=-K}^{K} \lambda_k \approx \frac{1}{2}\Delta (2K+1) J_{min} (x_0 + N_0)    (10.1-31)

Note that x_0 + N_0 represents the received signal plus noise power. It is desirable to have J_\Delta < J_{min}. That is, \Delta should be selected such that \frac{1}{2}\Delta (2K+1)(x_0 + N_0) < 1 or, equivalently,

\Delta < \frac{2}{(2K+1)(x_0 + N_0)}    (10.1-32)

For example, if \Delta is selected as

\Delta = \frac{0.2}{(2K+1)(x_0 + N_0)}    (10.1-33)

the degradation in the output SNR of the equalizer due to the excess MSE is less than 1 dB.

The analysis given above on the excess mean square error is based on the assumption that the mean value of the equalizer coefficients has converged to the optimum value C_{opt}. Under this condition, the step size \Delta should satisfy the bound in Equation 10.1-32. On the other hand, we have determined that convergence of the mean coefficient vector requires that \Delta < 2/\lambda_{max}. While a choice of \Delta near the upper bound 2/\lambda_{max} may lead to initial convergence of the deterministic (known) steepest-descent gradient algorithm, such a large value of \Delta will usually result in instability of the LMS stochastic gradient algorithm.

The initial convergence or transient behavior of the LMS algorithm has been investigated by several researchers. Their results clearly indicate that the step size must be reduced in direct proportion to the length of the equalizer, as specified by Equation 10.1-32. Hence, the upper bound given by Equation 10.1-32 is also necessary to ensure the initial convergence of the LMS algorithm.
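The design rule in Equations 10.1-31 to 10.1-33 is a one-line computation. As a sketch, using 11 taps and unit signal-plus-noise power (the same numbers as Example 10.1-1):

```python
num_taps = 11            # 2K + 1
power = 1.0              # x0 + N0, received signal-plus-noise power

def relative_excess_mse(delta):
    """J_Delta / J_min ~ (Delta/2)(2K+1)(x0+N0), Equation 10.1-31."""
    return 0.5 * delta * num_taps * power

bound = 2.0 / (num_taps * power)          # J_Delta < J_min requires Delta below this (10.1-32)
one_db_choice = 0.2 / (num_taps * power)  # keeps the SNR degradation under 1 dB (10.1-33)

print(f"upper bound on step size: {bound:.3f}")              # 0.182
print(f"step size for < 1 dB loss: {one_db_choice:.4f}")     # 0.0182
print(f"relative excess MSE at that step: {relative_excess_mse(one_db_choice):.2f}")  # 0.10
```

At the 1 dB choice the excess MSE is one-tenth of J_min, consistent with a 10 log10(1.1) ~ 0.4 dB SNR degradation.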
The papers by Gitlin and Weinstein (1979) and Ungerboeck (1972) contain analyses of the transient behavior and the convergence properties of the LMS algorithm. The following example serves to reinforce the important points made above regarding the initial convergence of the LMS algorithm.

EXAMPLE 10.1-1. The LMS algorithm was used to adaptively equalize a communication channel for which the autocorrelation matrix \Gamma has an eigenvalue spread of \lambda_{max}/\lambda_{min} = 11. The number of taps selected for the equalizer was 2K + 1 = 11. The input signal plus noise power x_0 + N_0 was normalized to unity. Hence, the upper bound on \Delta given by Equation 10.1-32 is 0.18. Figure 10.1-4 illustrates the initial convergence characteristics of the LMS algorithm for \Delta = 0.045, 0.09, and 0.115, obtained by averaging the (estimated) MSE in 200 simulations. We observe that by selecting \Delta = 0.09 (one-half of the upper bound) we obtain relatively fast initial convergence. If we divide \Delta by a factor of 2 to \Delta = 0.045, the convergence rate is reduced, but the excess mean square error is also reduced, so that the LMS algorithm performs better in steady state (in a time-invariant signal environment). Finally, we note that a choice of \Delta = 0.115, which is still far below the upper bound, causes large undesirable fluctuations in the output MSE of the algorithm.

FIGURE 10.1-4 Initial convergence characteristics of the LMS algorithm with different step sizes. (From Digital Signal Processing, by J. G. Proakis and D. G. Manolakis, 1995, Prentice Hall. Reprinted with permission of the publisher.)

In a digital implementation of the LMS algorithm, the choice of the step-size parameter becomes even more critical. In an attempt to reduce the excess mean square error, it is possible to reduce the step-size parameter to the point where the total mean square error actually increases.
This condition occurs when the estimated gradient components of the vector \varepsilon_k V^*_k, after multiplication by the small step-size parameter \Delta, are smaller than one-half of the least significant bit in the fixed-point representation of the equalizer coefficients. In such a case, adaptation ceases. Consequently, it is important for the step size to be large enough to bring the equalizer coefficients into the vicinity of C_{opt}. If it is desired to decrease the step size significantly, it is necessary to increase the precision in the equalizer coefficients. Typically, 16 bits of precision may be used for the coefficients, with about 10-12 of the most significant bits used for arithmetic operations in the equalization of the data. The remaining least significant bits are required to provide the necessary precision for the adaptation process. Thus, the scaled estimated gradient components \Delta \varepsilon_k V^*_k usually affect only the least significant bits in any one iteration. In effect, the added precision also allows for the noise to be averaged out, since many incremental changes in the least significant bits are required before any change occurs in the upper, more significant bits used in arithmetic operations for equalizing the data. For an analysis of roundoff errors in a digital implementation of the LMS algorithm, the reader is referred to the papers by Gitlin and Weinstein (1979), Gitlin et al. (1982), and Caraiscos and Liu (1984).

As a final point, we should indicate that the LMS algorithm is appropriate for tracking slowly time-variant signal statistics. In such a case, the minimum MSE and the optimum coefficient vector will be time-variant. In other words, J_{min}(n) is a function of time and the (2K+1)-dimensional error surface is moving with the time index n. The LMS algorithm attempts to follow the moving minimum J_{min}(n) in the (2K+1)-dimensional space, but it is always lagging behind due to its use of (estimated) gradient vectors.
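The stalling effect described above can be reproduced with a toy fixed-point model; the bit widths and the size of the scaled gradient component below are illustrative assumptions:

```python
def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    scale = 2 ** frac_bits
    return round(x * scale) / scale

coeff = 0.5
update = 3e-6          # an assumed scaled gradient component Delta * eps_k * v*,
                       # smaller than half an LSB at 15 fractional bits (2**-15 / 2 ~ 1.5e-5)

stalled = quantize(coeff + update, 15) == coeff   # update rounds away: adaptation ceases
adapts = quantize(coeff + update, 23) != coeff    # extra low-order bits preserve the update
print("15 fractional bits: coefficient stalls ->", stalled)   # True
print("23 fractional bits: coefficient adapts ->", adapts)    # True
```

Many such sub-LSB increments must accumulate in the low-order bits before the upper bits used for equalization change, which is exactly the noise-averaging effect of the added precision described above.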
As a consequence, the LMS algorithm incurs another form of error, called the lag error, whose mean square value decreases with an increase in the step size \Delta. The total MSE can now be expressed as

J_{total} = J_{min}(n) + J_\Delta + J_l    (10.1-34)

where J_l denotes the mean square error due to the lag.

In any given nonstationary adaptive equalization problem, if we plot the errors J_\Delta and J_l as a function of \Delta, we expect these errors to behave as illustrated in Figure 10.1-5. We observe that J_\Delta increases with an increase in \Delta, while J_l decreases with an increase in \Delta. The total error will exhibit a minimum, which will determine the optimum choice of the step-size parameter. When the statistical time variations of the signal occur rapidly, the lag error will dominate the performance of the adaptive equalizer. In such a case, J_l \gg J_{min} + J_\Delta, even when the largest possible value of \Delta is used. When this condition occurs, the LMS algorithm is inappropriate for the application and one must rely on the more complex recursive least-squares algorithms described in Section 10.4 to obtain faster convergence.
FIGURE 10.1-5 Excess mean square error J_\Delta and lag error J_l as a function of the step size. (From Digital Signal Processing, by J. G. Proakis and D. G. Manolakis, 1995, Prentice Hall. Reprinted with permission of the publisher.)

10.1-5 Accelerating the Initial Convergence Rate in the LMS Algorithm

As we have observed, the initial convergence rate of the LMS algorithm for any given channel characteristic is controlled by the step-size parameter \Delta. The initial convergence rate is also strongly influenced by the channel spectral characteristics, which are related to the eigenvalues \{\lambda_k\} of the received signal covariance matrix. If the channel amplitude and phase distortions are small, the eigenvalue ratio \lambda_{max}/\lambda_{min} is close to unity and, hence, the equalizer converges to its optimum tap coefficients relatively fast. On the other hand, if the channel exhibits poor spectral characteristics, such as relatively large attenuation in a part of its spectrum, the eigenvalue ratio \lambda_{max}/\lambda_{min} \gg 1 and, hence, the convergence rate of the LMS algorithm will be slow.
A considerable effort has been spent by researchers on methods to accelerate the initial convergence of the LMS algorithm. A simple remedy is to begin with a large step size, say \Delta_0, and reduce the step size as the tap coefficients converge to their optimum values. In other words, we use a sequence of step sizes \Delta_0 > \Delta_1 > \Delta_2 > \cdots > \Delta_N = \Delta, where \Delta is the final step size to be used in steady-state operation of the LMS algorithm.

An alternative method for accelerating initial convergence has been proposed and investigated by Chang (1971) and Qureshi (1977). This method is based on introducing additional parameters into the LMS algorithm by replacing the step size with a weighting matrix W. In such a case, the LMS algorithm is generalized to the form

\hat{C}_{k+1} = \hat{C}_k - W \hat{G}_k = \hat{C}_k + W \varepsilon_k V^*_k    (10.1-35)

where W is the weighting matrix. Ideally, W = \Gamma^{-1}, or if \Gamma is estimated, then W can be set equal to the inverse of the estimate.

When the training sequence for the equalizer is periodic with period N, the covariance matrix \Gamma is Toeplitz and circulant and its inverse is circulant. In this case, the multiplication by the weighting matrix W can be simplified considerably by the implementation of a single finite-duration impulse response (FIR) filter with weights equal to the first row of W, as indicated by Qureshi (1977). That is, the fast update algorithm that is equivalent to multiplying the gradient vector \hat{G}_k by W is simply implemented, as shown in Figure 10.1-6, by inserting the FIR filter with N coefficients
If we neglect the noise, |R,|* corre- sponds to N times the eigenvalues of the circulant covariance matrix of the signal at the input to the equalizer. Then, add NV times the estimate of the noise variance a? to |Rn\? 4. Compute the inverse DFT of the sequence 1/(|Rn|? +N6?),n=0,1,...,N—1. This yields the sequence {w,) of filter coefficients for the filter shown in Figure 10.1-6. 5. The algorithm for adjusting the equalizer tap coefficient now becomes vy-1 in the equalizer PN we cP =O 6 utp FON NA (10.1-36) m= Recived “gna from demodslator FIGURE 10.1-6 Fast start-up technique for an adaptive equalizer. 701
PDF
No ratings yet
Adaptive Filters and Applications: Supervised by Prof. Dr. Ehab A. Hussein
41 pages
Dynamic Programming Training Period For An MSE Adaptive Equalizer
PDF
No ratings yet
Dynamic Programming Training Period For An MSE Adaptive Equalizer
8 pages
Adaptive Equalizer
PDF
No ratings yet
Adaptive Equalizer
34 pages
Unit 4
PDF
No ratings yet
Unit 4
58 pages
Eecs 2017 87
PDF
No ratings yet
Eecs 2017 87
37 pages
Golay Codes
PDF
No ratings yet
Golay Codes
50 pages
Channel Equalization Techniques To Mitig PDF
PDF
No ratings yet
Channel Equalization Techniques To Mitig PDF
5 pages
Introduction To Equalization: Guy Wolf Roy Ron Guy Shwartz
PDF
No ratings yet
Introduction To Equalization: Guy Wolf Roy Ron Guy Shwartz
50 pages
Sample Research Paper 1 PDF
PDF
No ratings yet
Sample Research Paper 1 PDF
11 pages
Adaptive Filter
PDF
100% (2)
Adaptive Filter
35 pages
Project Report
PDF
No ratings yet
Project Report
24 pages
ICE M1121 M1021 Student
PDF
No ratings yet
ICE M1121 M1021 Student
9 pages
Chapter 8
PDF
No ratings yet
Chapter 8
28 pages
Progress Report
PDF
No ratings yet
Progress Report
17 pages
Adaptive Equalization: Oladapo Kayode
PDF
No ratings yet
Adaptive Equalization: Oladapo Kayode
17 pages
738 Submission
PDF
No ratings yet
738 Submission
6 pages
Semi-Blind Fast Equalization of QAM Channels Using Concurrent Gradient-Newton CMA and Soft Decision-Directed Scheme
PDF
No ratings yet
Semi-Blind Fast Equalization of QAM Channels Using Concurrent Gradient-Newton CMA and Soft Decision-Directed Scheme
10 pages
Equalization Ed Us at
PDF
No ratings yet
Equalization Ed Us at
50 pages
WMC Unit II
PDF
No ratings yet
WMC Unit II
9 pages
Open Handed Lab Adaptive Linear Equalizer: Objective
PDF
No ratings yet
Open Handed Lab Adaptive Linear Equalizer: Objective
5 pages
EE4601 Communication Systems: Week 13 Linear Zero Forcing Equalization
PDF
No ratings yet
EE4601 Communication Systems: Week 13 Linear Zero Forcing Equalization
14 pages
Equalization
PDF
No ratings yet
Equalization
5 pages
Performance Analysis
PDF
No ratings yet
Performance Analysis
5 pages
Channel Estimation Techniques - Last
PDF
No ratings yet
Channel Estimation Techniques - Last
26 pages
Equal I Zat I Ont Echni Ques
PDF
No ratings yet
Equal I Zat I Ont Echni Ques
7 pages
Adaptive Equalization
PDF
No ratings yet
Adaptive Equalization
3 pages
1 Review Continued: 1.1 Channels, Noise and Intersymbol Interference
PDF
No ratings yet
1 Review Continued: 1.1 Channels, Noise and Intersymbol Interference
17 pages
Clinical Research Example
PDF
No ratings yet
Clinical Research Example
11 pages
A New Adaptive Algorithm For Joint Blind Equalization and Carrier Recovery
PDF
No ratings yet
A New Adaptive Algorithm For Joint Blind Equalization and Carrier Recovery
5 pages
Blind Channel Equalization: Mohammad Havaei, Sanaz Moshirian, Soheil Ghadami
PDF
No ratings yet
Blind Channel Equalization: Mohammad Havaei, Sanaz Moshirian, Soheil Ghadami
6 pages
L9 8up
PDF
No ratings yet
L9 8up
7 pages