5 Finite Impulse Response Filters
Jesse D. Olson

A finite impulse response (FIR) filter has a unit impulse response with a limited number of terms, as opposed to an infinite impulse response (IIR) filter, which produces an infinite number of output terms when a unit impulse is applied to its input. FIR filters are generally realized nonrecursively, which means that there is no feedback involved in computation of the output data. The output of the filter depends only on the present and past inputs. This quality has several important implications for digital filter design and applications. This chapter discusses several FIR filters typically used for real-time ECG processing, and also gives an overview of some general FIR design techniques.

5.1 CHARACTERISTICS OF FIR FILTERS

5.1.1 Finite impulse response

Finite impulse response implies that the effect of transients or initial conditions on the filter output will eventually die away. Figure 5.1 shows a signal-flow graph (SFG) of a FIR filter realized nonrecursively. The filter is merely a set of "tap weights" of the delay stages. The unit impulse response is equal to the tap weights, so the filter has a difference equation given by Eq. (5.1) and a transfer function given by Eq. (5.2).

    y(nT) = Σ (k = 0 to N) b_k x(nT - kT)    (5.1)

    H(z) = b_0 + b_1 z^-1 + b_2 z^-2 + ... + b_N z^-N    (5.2)

Figure 5.1 The output of a FIR filter of order N is the weighted sum of the values in the storage registers of the delay line.

5.1.2 Linear phase

In many biomedical signal processing applications, it is important to preserve certain characteristics of a signal throughout the filtering operation, such as the height and duration of the QRS pulse. A filter with linear phase has a pure time delay as its phase response, so phase distortion is minimized. A filter has linear phase if its frequency response H(e^jω) can be expressed as

    H(e^jω) = H_1(ω) e^-j(αω + β)    (5.3)

where H_1(ω) is a real and even function, since the phase of H(e^jω) is

    ∠H(e^jω) = -αω - β        for H_1(ω) > 0
    ∠H(e^jω) = -αω - β - π    for H_1(ω) < 0    (5.4)

FIR filters can easily be designed to have a linear phase characteristic. Linear phase can be obtained in four ways, as combinations of even or odd symmetry (defined as follows) with even or odd length.

    h(N - 1 - k) = h(k),   even symmetry
    h(N - 1 - k) = -h(k),  odd symmetry,    for 0 ≤ k ≤ N - 1    (5.5)

5.1.3 Stability

Since a nonrecursive filter does not use feedback, it has no poles except those that are located at z = 0. Thus there is no possibility for a pole to exist outside the unit circle. This means that it is inherently stable. As long as the input to the filter is bounded, the output of the filter will also be bounded. This contributes to ease of design, and makes FIR filters especially useful for adaptive filtering, where filter coefficients change as a function of the input data. Adaptive filters are discussed in Chapter 8.

5.1.4 Desirable finite-length register effects

When data are converted from analog form to digital form, some information is lost due to the finite number of storage bits. Likewise, when coefficient values for a filter are calculated, digital implementation can only approximate the desired values. The limitations introduced by digital storage are termed finite-length register effects. Although we will not treat this subject in detail in this book, finite-length register effects can have significant negative impact on a filter design.
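One of the most visible of these effects is coefficient quantization. The short C program below is a hypothetical illustration added here (the tap weights are arbitrary examples, not values from the text): it rounds a set of floating-point coefficients to 8-bit fixed-point values and prints the error introduced by the finite register length.

    /* Coefficient quantization sketch (hypothetical example, not from the text):
       round tap weights to 8-bit fixed point (7 fraction bits) and show the
       error introduced by the finite register length.                         */

    #include <stdio.h>
    #include <math.h>

    #define NTAPS 5
    #define QBITS 7            /* 8-bit signed word: 1 sign bit + 7 fraction bits */

    int main(void)
    {
        /* arbitrary example tap weights (assumed values, not from the text) */
        double b[NTAPS] = {0.1, 0.2, 0.4, 0.2, 0.1};
        int k;

        for (k = 0; k < NTAPS; k++) {
            int    q  = (int)floor(b[k]*(1 << QBITS) + 0.5);  /* rounded integer code   */
            double bq = (double)q/(1 << QBITS);               /* realizable coefficient */
            printf("b[%d] = %7.4f  stored as %7.4f  error = %8.5f\n",
                   k, b[k], bq, b[k] - bq);
        }
        return 0;
    }

With only 7 fraction bits the stored coefficients can differ from the designed ones by up to half of 1/128 (about 0.004), which slightly perturbs the realized frequency response.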
These finite-length register effects include quantization error, roundoff noise, limit cycles, conditional stability, and coefficient sensitivity. In FIR filters, these effects are much less significant and easier to analyze than in IIR filters, since the errors are not fed back into the filter. See Appendix F for more details about finite-length register effects.

5.1.5 Ease of design

All of the above properties contribute to the ease of designing FIR filters. There are many straightforward techniques for designing FIR filters to meet arbitrary frequency and phase response specifications, such as window design or frequency sampling. Many software packages exist that automate the FIR design process, often computing a filter realization that is in some sense optimal.

5.1.6 Realizations

There are three methods of realizing an FIR filter (Bogner and Constantinides, 1985). The most common method is direct convolution, in which the filter's unit impulse sequence is convolved with the present and past inputs to compute each new output value. FIR filters attenuate the signal very gradually outside the passband (i.e., they have slow rolloff characteristics). Since they have significantly slower rolloff than IIR filters of the same length, the order of an FIR filter may be quite large for applications that require sharp rolloffs. For higher-order filters the direct convolution method becomes computationally inefficient.

For FIR filters of length greater than about 30, the "fast convolution" realization offers a computational savings. This technique takes advantage of the fact that multiplication in the frequency domain, the dual of time-domain convolution, is computationally less intensive. Fast convolution involves taking the FFT of a block of data, multiplying the result by the FFT of the unit impulse sequence, and finally taking the inverse FFT. The process is repeated for subsequent blocks of data. This method is discussed in detail in section 11.3.2.

The third method of realizing FIR filters is an advanced, recursive technique involving a comb filter and a bank of parallel digital resonators (Rabiner and Rader, 1972). This method is advantageous for frequency sampling designs if a large number of the coefficients in the desired frequency response are zero, and can be used for filters with integer-valued coefficients, as discussed in Chapter 7. For the remainder of this chapter, only the direct convolution method will be considered.

5.2 SMOOTHING FILTERS

One of the most common signal processing tasks is smoothing of the data to reduce high-frequency noise. Sources of high-frequency noise include the 60-Hz power line, movement artifacts, and quantization error. One simple method of reducing high-frequency noise is to average several data points together. Such a filter is referred to as a moving average filter.

5.2.1 Hanning filter

One of the simplest smoothing filters is the Hanning moving average filter. Figure 5.2 summarizes the details of this filter. As illustrated by its difference equation, the Hanning filter computes a weighted moving average, since the central data point has twice the weight of the other two:

    y(nT) = (1/4)[x(nT) + 2x(nT - T) + x(nT - 2T)]    (5.6)

As we saw in section 4.5, once we have the difference equation representing the numerical algorithm for implementing a digital filter, we can quickly determine the transfer function that totally characterizes the performance of the filter by using the analogy between discrete-time variables and z-domain variables.
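Before carrying out that analogy for the Hanning filter, it may help to see the direct-convolution computation of Eq. (5.1) in general form. The C function below is a sketch added for illustration (the function and variable names are hypothetical, not code from the text):

    /* Direct-convolution FIR sketch (hypothetical helper, not from the text).
       Computes one output point of Eq. (5.1): y(nT) = sum of b[k]*x(nT - kT).
       x[0] holds x(nT), x[1] holds x(nT - T), and so on.                      */

    double fir_output(const double b[], const double x[], int ntaps)
    {
        double y = 0.0;
        int k;

        for (k = 0; k < ntaps; k++)
            y += b[k]*x[k];        /* weighted sum of present and past inputs */
        return y;
    }

For the Hanning filter the call would use ntaps = 3 with b[] = {0.25, 0.5, 0.25}, which is exactly the weighted average of Eq. (5.6).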
Recognizing that x(nT) and y(nT) are points in the input and output sequences associated with the current sample time, they are analogous to the undelayed z-domain variables, X(z) and Y(z) respectively. Similarly x(nT - T), the input value one sample point in the past, is analogous to the z-domain input variable delayed by one sample point, or X(z)z^-1. We can then write an equation for output Y(z) as a function of input X(z):

    Y(z) = (1/4)[X(z) + 2X(z)z^-1 + X(z)z^-2]    (5.7)

The block diagram of Figure 5.2(a) is drawn using functional blocks to directly implement the terms in this equation. Two delay blocks are required, as designated by the -2 exponent of z. Two multipliers are necessary to multiply by the factors 2 and 1/4, and two summers are needed to combine the terms. The transfer function of this equation is

    H(z) = (1/4)[1 + 2z^-1 + z^-2]    (5.8)

This filter has two zeros, both located at z = -1, and two poles, both located at z = 0 (see section 4.6 to review how to find pole and zero locations). Figure 5.2(b) shows the pole-zero plot. Note that the poles are implicit; they are not drawn since they influence all frequencies in the amplitude response equally.

Figure 5.2 Hanning filter. (a) Signal-flow graph. (b) Pole-zero diagram.

The filter's amplitude and phase responses are found by substituting e^jωT for z in Eq. (5.8):

    H(ωT) = (1/4)[1 + 2e^-jωT + e^-j2ωT]    (5.9)

We could now directly substitute into this function the trigonometric relationship

    e^jωT = cos(ωT) + j sin(ωT)    (5.10)

However, a common trick prior to this substitution that leads to quick simplification of expressions such as this one is to extract a power of e as a multiplier such that the final result has two similar exponential terms with equal exponents of opposite sign:

    H(ωT) = (1/4)[e^-jωT (e^jωT + 2 + e^-jωT)]    (5.11)

Now substituting Eq. (5.10) for the terms in parentheses yields

    H(ωT) = (1/4){e^-jωT [cos(ωT) + j sin(ωT) + 2 + cos(ωT) - j sin(ωT)]}    (5.12)

The sin(ωT) terms cancel, leaving

    H(ωT) = (1/4)[2 + 2cos(ωT)] e^-jωT    (5.13)

This is of the form R e^jθ, where R is real and θ is the phase angle. Thus, the magnitude response of the Hanning filter is |R|, or

    |H(ωT)| = (1/2)|1 + cos(ωT)|    (5.14)

Figure 5.3(a) shows this cosine-wave amplitude response plotted with a linear ordinate scale, while Figure 5.3(b) shows the same response using the more familiar decibel plot, which we will use throughout this book. The relatively slow rolloff of the Hanning filter can be sharpened by passing its output into the input of another identical filter. This process of connecting multiple filters together is called cascading filters. The linear phase response shown in Figure 5.3(c) is equal to angle θ, or

    ∠H(ωT) = -ωT    (5.15)

Implementation of the Hanning filter is accomplished by writing a computer program. Figure 5.4 illustrates a C-language program for an off-line (i.e., not real-time) application where data have previously been sampled by an A/D converter and left in an array. This program directly computes the filter's difference equation [Eq. (5.6)]. Within the for() loop, a value for x(nT) (called xnt in the program) is obtained from the array idb[]. The difference equation is computed to find the output value y(nT) (or ynt in the program). This value is saved into the data array, replacing the value of x(nT). Then the input data variables are shifted through the delay blocks.
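One detail of the shift is worth emphasizing: the two assignments must be done oldest-first, or the previous sample would be overwritten before it is saved. The fragment below repeats the two lines from Figure 5.4 with comments added:

    xm2 = xm1;    /* x(nT - T) becomes x(nT - 2T): move the older sample first */
    xm1 = xnt;    /* x(nT) becomes x(nT - T) for the next iteration            */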
Prior to the next input, the data point that was one point in the past, x(nT - T) (called xm1 in the program), moves two points in the past and becomes x(nT - 2T) (or xm2). The most recent input x(nT) (called xnt) moves one point back in time, replacing x(nT - T) (or xm1). In the next iteration of the for() loop, a new value of x(nT) is retrieved, and the process repeats until all 256 array values are processed. The filtered output waveform is left in array idb[].

Figure 5.3 Hanning filter. (a) Frequency response (linear magnitude axis). (b) Frequency response (dB magnitude axis). (c) Phase response.

    /* Hanning filter
       Difference equation:
       y(nT) = (x(nT) + 2*x(nT - T) + x(nT - 2T))/4
       C-language implementation equation:
       ynt = (xnt + 2*xm1 + xm2)/4;                   */

    main()
    {
        int i, xnt, xm1, xm2, ynt, idb[256];

        xm2 = 0;
        xm1 = 0;
        for(i = 0; i <= 255; i++)
        {
            xnt = idb[i];
            ynt = (xnt + 2*xm1 + xm2)/4;
            idb[i] = ynt;
            xm2 = xm1;
            xm1 = xnt;
        }
    }

Figure 5.4 C-language code to implement the Hanning filter. Data are presampled by an A/D converter and stored in array idb[]. The filtered signal is left in idb[].

The Hanning filter is particularly efficient for use in real-time applications since all of its coefficients are integers, and binary shifts can be used instead of multiplications. Figure 5.5 is a real-time Hanning filter program. In this program, the computation of the output value y(nT) must be accomplished during one sample interval T. That is, every new input data point acquired by the A/D converter must produce an output value before the next A/D input. Otherwise the filter would not keep up with the sampling rate, and it would not be operating in real time.

In this program, sampling from the A/D converter, computation of the results, and sending the filtered data to a D/A converter are all accomplished within a for() loop. The wait() function is designed to wait for an interrupt caused by an A/D clock tick. Once the interrupt occurs, a data point is sampled with the adget() function and set equal to xnt. The Hanning filter's difference equation is then computed using C-language shift operators to do the multiplications efficiently. The expression <<1 is a binary shift to the left by one bit position, corresponding to multiplication by a factor of two, and >>2 is a binary shift right by two bit positions, representing division by four. The computed output value is sent to a D/A converter with function daput(). Then the input data variables are shifted through the delay blocks as in the previous program. For the next input, the data point that was one point in the past, x(nT - T) (called xm1 in the program), moves two points in the past and becomes x(nT - 2T) (or xm2). The most recent input x(nT) (called xnt) moves one point back in time, replacing x(nT - T) (or xm1). Then the for() loop repeats, with the wait() function waiting until the next interrupt signals that a new sampled data point is available to be acquired by adget() as the new value for x(nT).
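One caveat before looking at the listing in Figure 5.5 (an observation added here, not a point raised in the text): when the input samples can be negative, the shift >>2 and the division /4 are not exactly equivalent, because a right shift of a negative value on typical two's-complement machines rounds toward negative infinity while integer division truncates toward zero. The short check below illustrates the difference:

    /* Shift versus divide for negative sums (illustrative check, not from the text) */
    #include <stdio.h>

    int main(void)
    {
        int sum = -5;                            /* a negative intermediate sum */
        printf("%d %d\n", sum >> 2, sum/4);      /* typically prints: -2 -1     */
        return 0;
    }

For the Hanning filter the discrepancy is at most one quantization step.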
    /* Real-time Hanning filter
       Difference equation:
       y(nT) = (x(nT) + 2*x(nT - T) + x(nT - 2T))/4
       C-language implementation equation:
       ynt = (xnt + (xm1 << 1) + xm2) >> 2;           */

    #define AD 31
    #define DA 32

    main()
    {
        int i, xnt, xm1, xm2, ynt;

        xm2 = 0;
        xm1 = 0;
        tmic(2000);    /* Start ADC clock ticking at 2000 microsecond */
                       /* intervals (2 ms period for 500 sps)         */
        for( ; ; )
        {
            wait();    /* Wait for ADC clock to tick */
            xnt = adget(AD);
            ynt = (xnt + (xm1 << 1) + xm2) >> 2;
            daput(ynt, DA);
            xm2 = xm1;
            xm1 = xnt;
        }
    }

Figure 5.5 C-language code to implement the real-time Hanning filter.

5.2.2 Least-squares polynomial smoothing

This family of filters fits a parabola to an odd number (2L + 1) of input data points in a least-squares sense. Figure 5.6(a) shows that the output of the filter is the midpoint of the parabola. Writing the equation for a parabola at each input point, we obtain

    p(nT + kT) = a(nT) + b(nT)k + c(nT)k^2    (5.16)

where k ranges from -L to L. The fit is found by selecting a(nT), b(nT), and c(nT) to minimize the squared error between the parabola and the input data. Setting the partial derivatives of the error with respect to a(nT), b(nT), and c(nT) equal to zero results in a set of simultaneous equations in a(nT), b(nT), c(nT), k, and p(nT - kT). Solving to obtain an expression for a(nT), the value of the parabola at k = 0, yields an expression that is a function of the input values. The coefficients of this expression are the tap weights for the least-squares polynomial filter, as shown in the signal-flow graph of Figure 5.6(b) for a five-point filter. The difference equation for the five-point parabolic filter is

    y(nT) = (1/35)[-3x(nT) + 12x(nT - T) + 17x(nT - 2T) + 12x(nT - 3T) - 3x(nT - 4T)]    (5.17)

Its transfer function is

    H(z) = (1/35)[-3 + 12z^-1 + 17z^-2 + 12z^-3 - 3z^-4]    (5.18)

Figure 5.6 Polynomial smoothing filter with L = 2. (a) Parabolic fitting of groups of 5 sampled data points. (b) Signal-flow graph.

Figure 5.7 shows the tap weights for filters with L equal to 2, 3, 4, and 5, and Figure 5.8 illustrates their responses. The order of the filter can be chosen to meet the desired rolloff.

    L    Tap weights
    2    (1/35)(-3, 12, 17, 12, -3)
    3    (1/21)(-2, 3, 6, 7, 6, 3, -2)
    4    (1/231)(-21, 14, 39, 54, 59, 54, 39, 14, -21)
    5    (1/429)(-36, 9, 44, 69, 84, 89, 84, 69, 44, 9, -36)

Figure 5.7 Tap weights of polynomial smoothing filters (Hamming, 1977).

Figure 5.8 Polynomial smoothing filters. (a) Amplitude responses. (b) Phase responses. Solid line: L = 2. Circles: L = 3. Dashed line: L = 4.

5.3 NOTCH FILTERS

A common biomedical signal processing problem involves the removal of noise of a particular frequency or frequency range (such as 60 Hz) from a signal while passing higher and/or lower frequencies without attenuation. A filter that performs this task is referred to as a notch, bandstop, or band-reject filter. One simple method of completely removing noise of a specific frequency from the signal is to place a zero on the unit circle at the location corresponding to that frequency. For example, if a sampling rate of 180 samples per second is used, a zero at 2π/3 removes 60-Hz line-frequency noise from the signal. The difference equation is

    y(nT) = (1/3)[x(nT) + x(nT - T) + x(nT - 2T)]    (5.19)

The filter has zeros at

    z = -0.5 ± j0.866    (5.20)

and its amplitude and phase responses are given by

    |H(ωT)| = (1/3)|1 + 2cos(ωT)|    (5.21)

    ∠H(ωT) = -ωT    (5.22)
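As with the smoothing filters, Eq. (5.19) maps directly onto a few lines of C. The listing below is a sketch in the style of Figure 5.4, not a program from the text; the array name and length are assumptions, and the data, sampled at 180 sps, are assumed to be waiting in idb[].

    /* 60-Hz notch filter sketch (assumes 180 samples per second):
       y(nT) = (x(nT) + x(nT - T) + x(nT - 2T))/3, Eq. (5.19).      */

    main()
    {
        int i, xnt, xm1, xm2, ynt, idb[256];

        xm2 = 0;
        xm1 = 0;
        for(i = 0; i <= 255; i++)
        {
            xnt = idb[i];
            ynt = (xnt + xm1 + xm2)/3;    /* average of three successive samples */
            idb[i] = ynt;
            xm2 = xm1;                    /* shift the delay registers           */
            xm1 = xnt;
        }
    }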
Figure 5.9 shows the details of the design and performance of this filter. The relatively slow rolloff of this filter causes significant attenuation of frequencies other than 60 Hz as well.

Figure 5.9 The 60-Hz notch filter. (a) Signal-flow graph. (b) Pole-zero plot. (c) Frequency response. (d) Phase response.

5.4 DERIVATIVES

The response of the true derivative function increases linearly with frequency. However, for digital differentiation such a response is not possible, since the frequency response is periodic. The methods discussed in this section offer trade-offs between complexity of calculation, approximation to the true derivative, and elimination of high-frequency noise. Figure 5.10 shows the signal-flow graphs and pole-zero plots for three different differentiation algorithms: two-point difference, three-point central difference, and least-squares polynomial fit.

Figure 5.10 Signal-flow graphs and pole-zero diagrams for derivative algorithms. (a) Two-point difference. (b) Three-point central difference. (c) Five-point least-squares polynomial.

5.4.1 Two-point difference

The two-point difference algorithm, the simplest of these derivative algorithms, places a zero at z = 1 on the unit circle. Its amplitude response, shown in Figure 5.11(a), closely approximates the ideal response, but since it does not go to zero at fs/2, it greatly amplifies high-frequency noise. It is often followed by a low-pass filter. Its difference equation is

    y(nT) = (1/T)[x(nT) - x(nT - T)]    (5.23)

Its transfer function is

    H(z) = (1/T)(1 - z^-1)    (5.24)

5.4.2 Three-point central difference

The three-point central difference algorithm places zeros at z = 1 and z = -1, so the approximation to the derivative is poor above fs/10, as seen in Figure 5.11(a). However, since the response goes to zero at fs/2, the filter has some built-in smoothing. Its difference equation is

    y(nT) = (1/2T)[x(nT) - x(nT - 2T)]    (5.25)

Its transfer function is

    H(z) = (1/2T)(1 - z^-2)    (5.26)

5.4.3 Least-squares polynomial derivative approximation

This filter is similar to the parabolic smoothing filter described earlier, except that the slope of the polynomial, taken at the center of the parabola, is used as the value of the derivative. The coefficients of the transfer functions for filters with L = 2, 3, 4, and 5 are listed in Figure 5.12. Figure 5.10(c) shows the signal-flow graph and pole-zero diagram for this filter with L = 2. Note that the filter has zeros at z = ±1, as did the three-point central difference, with additional zeros at z = -0.25 ± j0.968. The difference equation for the five-point parabolic filter is

    y(nT) = (1/10T)[2x(nT) + x(nT - T) - x(nT - 3T) - 2x(nT - 4T)]    (5.27)

Its transfer function is

    H(z) = (1/10T)[2 + z^-1 - z^-3 - 2z^-4]    (5.28)

As Figure 5.11(a) shows, the response only approximates the true derivative at low frequencies, since the smoothing nature of the parabolic fit attenuates high frequencies significantly.

Figure 5.11 Derivatives. (a) Amplitude response. (b) Phase response. Solid line: Two-point. Circles: Three-point central difference. Dashed line: Least-squares parabolic approximation for L = 2.
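A direct implementation of any of these derivative approximations follows the same pattern as the smoothing programs earlier in the chapter. The sketch below (added here for illustration, not a listing from the text) computes the three-point central difference of Eq. (5.25) on presampled data in idb[]; the 1/T gain factor is omitted, so the output is a derivative scaled by the sample interval.

    /* Three-point central difference sketch (not from the text):
       y(nT) = (x(nT) - x(nT - 2T))/2; the 1/T gain is left out,
       so the result is scaled by the sample interval.            */

    main()
    {
        int i, xnt, xm1, xm2, ynt, idb[256];

        xm2 = 0;
        xm1 = 0;
        for(i = 0; i <= 255; i++)
        {
            xnt = idb[i];
            ynt = (xnt - xm2)/2;    /* difference of samples two intervals apart */
            idb[i] = ynt;
            xm2 = xm1;              /* shift the delay registers                 */
            xm1 = xnt;
        }
    }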
    L    Tap weights
    2    (1/10T)(2, 1, 0, -1, -2)
    3    (1/28T)(3, 2, 1, 0, -1, -2, -3)
    4    (1/60T)(4, 3, 2, 1, 0, -1, -2, -3, -4)
    5    (1/110T)(5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5)

Figure 5.12 Least-squares derivative approximation coefficients for L = 2, 3, 4, and 5.

5.4.4 Second derivative

Figure 5.13 shows a simple filter for approximating the second derivative (Friesen et al., 1990), which has the difference equation

    y(nT) = x(nT) - 2x(nT - 2T) + x(nT - 4T)    (5.29)

This filter was derived by cascading two stages of the three-point central difference derivative of Eq. (5.26) and setting the amplitude multiplier to unity to obtain the transfer function

    H(z) = (1 - z^-2)(1 - z^-2) = 1 - 2z^-2 + z^-4    (5.30)

Figure 5.13 Second derivative. (a) Signal-flow graph. (b) Unit-circle diagram.

5.5 WINDOW DESIGN

A desired frequency response Hd(θ) (a continuous function) has as its inverse discrete-time Fourier transform (IDTFT) the desired unit pulse sequence hd(k) (a discrete function). This sequence will have an infinite number of terms, so it is not physically realizable. The objective of window design is to choose an actual h(k) with a finite number of terms such that the frequency response H(e^jθ) will be in some sense close to Hd(θ). If the objective is to minimize the mean-squared error between the actual frequency response and the desired frequency response, then it can be shown by Parseval's theorem that the error is minimized by directly truncating hd(k). In other words, the pulse response h(k) is chosen to be the first N terms of hd(k). Unfortunately, such a truncation results in large overshoots at sharp transitions in the frequency response, referred to as the Gibbs phenomenon, illustrated in Figure 5.14.

Figure 5.14 The overshoot that occurs at sharp transitions in the desired frequency response due to truncation of the pulse response is referred to as the Gibbs phenomenon.

To understand this effect, consider that direct truncation of hd(k) is a multiplication of the desired unit pulse sequence by a rectangular window wR(k). Since multiplication in the time domain corresponds to convolution in the frequency domain, the frequency response of the resulting h(k) is the convolution of the desired frequency response with the frequency response of the window function.

For example, consider that the frequency response of a rectangular window of infinite length is simply an impulse. The convolution of the desired frequency response with this impulse simply returns the desired response. However, as the width of the window decreases, its frequency response becomes less like an impulse and sidelobes become more evident. The convolution of these sidelobes with the desired frequency response results in the overshoots at the transitions. For a rectangular window, the response function WR(e^jθ) is given by

    WR(e^jθ) = sin(Nθ/2) / sin(θ/2)    (5.31)

where N is the length of the rectangular window. There are two important results of the convolution of the window with the desired frequency response. First, the window function smears the desired response at transitions. This smearing of the transition bands increases the width of the transition from passband to stopband, which has the effect of increasing the width of the main lobe of the filter. Second, windowing causes undesirable ripple, called window leakage or sideband ripple, in the stopbands of the filter. By using window functions other than a rectangular window, stopband ripple can be reduced at the expense of increasing main lobe width.

Many types of windows have been studied in detail, including triangular (Bartlett), Hamming, Hanning, Blackman, Kaiser, Chebyshev, and Gaussian windows. Hanning and Hamming windows are of the form

    wH(k) = wR(k)[α + (1 - α)cos( ... )]    for 0 ≤ ...
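To illustrate the window-design recipe described above, the sketch below builds an N-point low-pass filter by truncating the ideal sinc-shaped pulse response and multiplying it by a Hamming-type window. The window formula used here, α + (1 - α)cos(2πk/N) centered at k = 0 with α = 0.54, along with the filter length and cutoff, are illustrative assumptions; the exact form and indexing used by the text are not shown in this excerpt.

    /* Window-design sketch (illustrative assumptions, not from the text):
       build a length-N low-pass FIR by truncating the ideal pulse response
       and applying a Hamming-type window centered at k = 0.               */

    #include <stdio.h>
    #include <math.h>

    #define PI    3.14159265358979
    #define N     21          /* odd filter length (assumption)        */
    #define FC    0.1         /* cutoff as a fraction of the sampling rate */
    #define ALPHA 0.54        /* 0.54 gives Hamming, 0.5 gives Hanning */

    int main(void)
    {
        double h[N];
        int i;

        for (i = 0; i < N; i++) {
            int    k  = i - (N - 1)/2;                    /* center the index at k = 0 */
            double hd = (k == 0) ? 2.0*FC                 /* ideal low-pass hd(k)      */
                                 : sin(2.0*PI*FC*k)/(PI*k);
            double w  = ALPHA + (1.0 - ALPHA)*cos(2.0*PI*k/N);   /* window value       */
            h[i] = hd*w;                                  /* windowed pulse response   */
            printf("h[%2d] = %9.6f\n", i, h[i]);
        }
        return 0;
    }

Multiplying by the window in the last step trades a wider transition band for much lower stopband ripple, which is the compromise described above.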
Many types of windows have been studied in detail, including triangular (Bartlett), Hamming, Hanning, Blackman, Kaiser, Chebyshev, and Gaussian windows. Hanning and Hamming windows are of the form WR(e!9) = (5.31) wu(k) = wr(k) [e +(l- evcos( for 0
