DIGITAL SIGNAL PROCESSING: A Computer-Based Approach, Second Edition
Sanjit K. Mitra
Department of Electrical and Computer Engineering, University of California, Santa Barbara
McGraw-Hill

About the Author

Sanjit K. Mitra received his M.S. and Ph.D. in electrical engineering from the University of California, Berkeley, and an Honorary Doctorate of Technology from Tampere University of Technology in Finland. After holding the position of assistant professor at Cornell University until 1965 and working at AT&T Bell Laboratories, Holmdel, New Jersey, until 1967, he joined the faculty of the University of California at Davis. Dr. Mitra then transferred to the Santa Barbara campus in 1977, where he served as department chairman from 1979 to 1982 and is now a Professor of Electrical and Computer Engineering. Dr. Mitra has published more than 500 journal and conference papers and 11 books, and holds 5 patents. He served as President of the IEEE Circuits and Systems Society in 1986 and is currently a member of the editorial boards of four journals: Multidimensional Systems and Signal Processing; Signal Processing; Journal of the Franklin Institute; and Automatika. Dr. Mitra has received many distinguished industry and academic awards, including the 1973 F. E. Terman Award, the 1985 AT&T Foundation Award of the American Society of Engineering Education, the 1989 Education Award of the IEEE Circuits and Systems Society, the 1989 Distinguished Senior U.S. Scientist Award from the Alexander von Humboldt Foundation of Germany, the 1996 Technical Achievement Award of the IEEE Signal Processing Society, the 1999 Mac Van Valkenburg Society Award and the CAS Golden Jubilee Medal of the IEEE Circuits and Systems Society, and the IEEE Millennium Medal in 2000. He is an Academician of the Academy of Finland. Dr. Mitra is a Fellow of the IEEE, AAAS, and SPIE and a member of EURASIP and the ASEE.
Preface

The field of digital signal processing (DSP) has seen explosive growth during the past three decades, as phenomenal advances both in research and application have been made. Fueling this growth have been the advances in digital computer technology and software development. Almost every electrical and computer engineering department in this country and abroad now offers one or more courses in digital signal processing, with the first course usually being offered at the senior level. This book is intended for a two-semester course on digital signal processing for seniors or first-year graduate students. It is also written at a level suitable for self-study by the practicing engineer or scientist.

Even though the first edition of this book was published barely two years ago, based on the feedback received from professors who adopted this book for their courses and many readers, it was clear that a new edition was needed to incorporate the suggested changes to the contents. A number of new topics have been included in the second edition. Likewise, a number of topics that are interesting but not practically useful have been removed because of size limitations. It was also felt that more worked-out examples were needed to explain new and difficult concepts. The new topics included in the second edition are: calculation of total solution, zero-input response, zero-state response, and impulse response of finite-dimensional discrete-time systems (Sections 2.6.1–2.6.3), correlation of signals and its applications (Section 2.7), inverse systems (Section 4.9), system identification (Section 4.10), matched filter and its application (Section 4.14), sampling of bandpass signals (Section 5.3), design of highpass,
bandpass, and bandstop analog filters (Section 5.5), effect of sample-and-hold operation (Section 5.11), design of highpass, bandpass, and bandstop IIR digital filters (Section 7.4), design of FIR digital filters with least-mean-square error (Section 7.8), constrained least-square design of FIR digital filters (Section 7.9), perfect reconstruction two-channel FIR filter banks (Section 10.9), cosine-modulated L-channel filter banks (Section 10.11), spectral analysis of random signals (Section 11.4), and sparse antenna array design (Section 11.14). The topics that have been removed from the first edition are as follows: state-space representation of LTI discrete-time systems from Chapter 2, signal flow-graph representation and state-space structures from Chapter 6, impulse invariance method of IIR filter design and FIR filter design based on the frequency-sampling approach from Chapter 7, reduction of product round-off errors from state-space structures from Chapter 9, and voice privacy system from Chapter 11. The fractional sampling rate conversion using the Lagrange interpolation has been moved to Chapter 10. Materials in each chapter are now organized more logically.

A key feature of this book is the extensive use of MATLAB®-based examples that illustrate the program's powerful capability to solve signal processing problems. The book uses a three-stage pedagogical structure designed to take full advantage of MATLAB and to avoid the pitfalls of a "cookbook" approach to problem solving. First, each chapter begins by developing the essential theory and algorithms. Second, the material is illustrated with examples solved by hand calculation. And third, solutions are derived using MATLAB. From the beginning, MATLAB codes are provided with enough details to permit the students to repeat the examples on their computers.
In addition to conventional theoretical problems requiring analytical solutions, each chapter also includes a large number of problems requiring solution via MATLAB. This book requires a minimal knowledge of MATLAB. I believe students learn the intricacies of problem solving with MATLAB faster by using tested, complete programs, and then writing simple programs to solve specific problems that are included at the ends of Chapters 2 to 11. Because computer verification enhances the understanding of the underlying theories, as in the first edition, a large library of worked-out MATLAB programs is included in the second edition. The original MATLAB programs of the first edition have been updated to run on the newer versions of MATLAB and the Signal Processing Toolbox. In addition, new MATLAB programs and code fragments have been added in this edition. The reader can run these programs to verify the results included in the book. Altogether there are 90 MATLAB programs in the text that have been tested under version 5.3 of MATLAB and version 4.2 of the Signal Processing Toolbox. Some of the programs listed in this book are not necessarily the fastest with regard to their execution speeds, nor are they the shortest; they have been written for maximum clarity without detailed explanations.

A second attractive feature of this book is the inclusion of 231 simple but practical examples that expose the reader to real-life signal processing problems, which has been made possible by the use of computers in solving practical design problems. This book also covers many topics of current interest not normally found in an upper-division text. Additional topics are also introduced to the reader through problems at the end of each chapter. Finally, the book concludes with a chapter that focuses on several important, practical applications of digital signal processing.
These applications are easy to follow and do not require knowledge of other advanced-level courses. The prerequisite for this book is a junior-level course in linear continuous-time and discrete-time systems, which is usually required in most universities. A minimal review of linear systems and transforms is provided in the text, and basic materials from linear system theory are included, with important materials summarized in tables. This approach permits the inclusion of more advanced materials without significantly increasing the length of the book.

The book is divided into 11 chapters. Chapter 1 presents an introduction to the field of signal processing and provides an overview of signals and signal processing methods. Chapter 2 discusses the time-domain representations of discrete-time signals and discrete-time systems as sequences of numbers and describes classes of such signals and systems commonly encountered. Several basic discrete-time signals that play important roles in the time-domain characterization of arbitrary discrete-time signals and discrete-time systems are then introduced. Next, a number of basic operations to generate other sequences from one or more sequences are described. A combination of these operations is also used in developing a discrete-time system. The problem of representing a continuous-time signal by a discrete-time sequence is examined for a simple case. Finally, the time-domain characterization of discrete-time random signals is discussed.

Chapter 3 is devoted to the transform-domain representations of a discrete-time sequence. Specifically discussed are the discrete-time Fourier transform (DTFT), the discrete Fourier transform (DFT), and the z-transform. Properties of each of these transforms are reviewed and a few simple applications outlined.
The chapter ends with a discussion of the transform-domain representation of a random signal. This book concentrates almost exclusively on linear time-invariant discrete-time systems, and Chapter 4 discusses their transform-domain representations. Specific properties of such transform-domain representations are investigated, and several simple applications are considered.

Chapter 5 is concerned primarily with the discrete-time processing of continuous-time signals. The conditions for discrete-time representation of a bandlimited continuous-time signal under ideal sampling and its exact recovery from the sampled version are first derived. Several interface circuits are used for the discrete-time processing of continuous-time signals. Two of these circuits are the anti-aliasing filter and the reconstruction filter, which are analog lowpass filters. As a result, a brief review of the basic theory behind some commonly used analog filter design methods is included, and their use is illustrated with MATLAB. Other interface circuits discussed in this chapter are the sample-and-hold circuit and the analog-to-digital converter.

(MATLAB is a registered trademark of The MathWorks, Inc., 24 Prime Park Way, Natick, MA 01760-1500. http://www.mathworks.com.)

[...]

In such cases, the desired signal can be recovered from the noise-corrupted signal by passing the latter through a lowpass filter with a cutoff frequency $f_c$ chosen above the signal band and below the noise band. A common source of noise is power lines radiating electric and magnetic fields. The noise generated by power lines appears as a 60-Hz sinusoidal signal corrupting the desired signal and can be removed by passing the corrupted signal through a notch filter with a notch frequency at 60 Hz.

1.2.3 Generation of Complex Signals

As indicated earlier, a signal can be real-valued or complex-valued. For convenience, the former is usually called a real signal while the latter is called a complex signal. All naturally generated signals are real-valued.
(In many countries, power lines generate 50-Hz noise.)

Figure 1.2: (a) Input signal, (b) output of a lowpass filter with a cutoff at 80 Hz, (c) output of a highpass filter with a cutoff at 150 Hz, (d) output of a bandpass filter with cutoffs at 80 Hz and 150 Hz, and (e) output of a bandstop filter with cutoffs at 80 Hz and 150 Hz.

In some applications, it is desirable to develop a complex signal from a real signal having more desirable properties. A complex signal can be generated from a real signal by employing a Hilbert transformer that is characterized by an impulse response $h_{HT}(t)$ given by [Fre94], [Opp83]

$h_{HT}(t) = \dfrac{1}{\pi t}$.   (1.2)

To illustrate the method, consider a real analog signal $x(t)$ with a continuous-time Fourier transform (CTFT) $X(j\Omega)$ given by

$X(j\Omega) = \displaystyle\int_{-\infty}^{\infty} x(t)\, e^{-j\Omega t}\, dt$.   (1.3)

$X(j\Omega)$ is called the spectrum of $x(t)$. The magnitude spectrum of a real signal exhibits even symmetry, while the phase spectrum exhibits odd symmetry. Thus, the spectrum $X(j\Omega)$ of a real signal $x(t)$ contains both positive and negative frequencies and can therefore be expressed as

$X(j\Omega) = X_p(j\Omega) + X_n(j\Omega)$,   (1.4)

where $X_p(j\Omega)$ is the portion of $X(j\Omega)$ occupying the positive frequency range and $X_n(j\Omega)$ is the portion of $X(j\Omega)$ occupying the negative frequency range. If $x(t)$ is passed through a Hilbert transformer, its output $\hat{x}(t)$ is given by the linear convolution of $x(t)$ with $h_{HT}(t)$:

$\hat{x}(t) = \displaystyle\int_{-\infty}^{\infty} h_{HT}(t - \tau)\, x(\tau)\, d\tau$.   (1.5)

The spectrum $\hat{X}(j\Omega)$ of $\hat{x}(t)$ is given by the product of the continuous-time Fourier transforms of $x(t)$ and $h_{HT}(t)$. Now the continuous-time Fourier transform $H_{HT}(j\Omega)$ of $h_{HT}(t)$ of Eq. (1.2) is given by

$H_{HT}(j\Omega) = \begin{cases} -j, & \Omega > 0, \\ \phantom{-}j, & \Omega < 0. \end{cases}$   (1.6)

Hence,

$\hat{X}(j\Omega) = H_{HT}(j\Omega)\, X(j\Omega) = -j X_p(j\Omega) + j X_n(j\Omega)$.   (1.7)
As the magnitude and the phase of $X(j\Omega)$ are an even and an odd function, respectively, it follows from Eq. (1.7) that $\hat{x}(t)$ is also a real signal. Consider the complex signal $y(t)$ formed by the sum of $x(t)$ and $j\hat{x}(t)$:

$y(t) = x(t) + j\hat{x}(t)$.   (1.8)

The signals $x(t)$ and $\hat{x}(t)$ are called, respectively, the in-phase and quadrature components of $y(t)$. By making use of Eqs. (1.4) and (1.7) in the continuous-time Fourier transform of $y(t)$, we obtain

$Y(j\Omega) = X(j\Omega) + j\hat{X}(j\Omega) = 2 X_p(j\Omega)$.   (1.9)

In other words, the complex signal $y(t)$, called an analytic signal, has only positive frequency components. A block diagram representation of the scheme for the analytic signal generation from a real signal is sketched in Figure 1.3.

Figure 1.3: Generation of an analytic signal using a Hilbert transformer.

1.2.4 Modulation and Demodulation

For transmission of signals over long distances, a transmission medium such as cable, optical fiber, or the atmosphere is employed. Each such medium has a bandwidth that is more suitable for the efficient transmission of signals in the high-frequency range. As a result, for the transmission of a low-frequency signal over a channel, it is necessary to transform the signal to a high-frequency signal by means of a modulation operation. At the receiving end, the modulated high-frequency signal is demodulated, and the desired low-frequency signal is then extracted by further processing. There are four major types of modulation of analog signals: amplitude modulation, frequency modulation, phase modulation, and pulse amplitude modulation. Of these schemes, amplitude modulation is conceptually simple and is discussed here [Fre94], [Opp83].
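The analytic-signal construction described above, $y(t) = x(t) + j\hat{x}(t)$, can be checked numerically. The book's own examples use MATLAB; the following is a rough NumPy/SciPy sketch, with the 50 Hz tone and sampling rate chosen purely for illustration (they are not from the text). `scipy.signal.hilbert` returns the analytic signal directly, so its imaginary part is the Hilbert transform of the input.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative example (not from the text): a 50 Hz cosine sampled at 1 kHz
fs = 1000.0
n = 1000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 50 * t)

# scipy.signal.hilbert returns the analytic signal y(t) = x(t) + j*x_hat(t)
y = hilbert(x)
x_hat = np.imag(y)            # quadrature component: the Hilbert transform of x

# Per Eq. (1.9), the analytic signal has essentially no negative-frequency content
Y = np.fft.fft(y)
pos = np.abs(Y[1:n // 2])     # positive-frequency bins
neg = np.abs(Y[n // 2 + 1:])  # negative-frequency bins
```

For the cosine input, `x_hat` should closely match `sin(2*pi*50*t)`, and the negative-frequency bins of `Y` should be negligible compared with the positive ones.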
In the amplitude modulation scheme, the amplitude of a high-frequency sinusoidal signal $A\cos(\Omega_o t)$, called the carrier signal, is varied by the low-frequency bandlimited signal $x(t)$, called the modulating signal, generating a high-frequency signal, called the modulated signal, $y(t)$, according to

$y(t) = A\, x(t)\cos(\Omega_o t)$.   (1.10)

Thus, amplitude modulation can be implemented by forming the product of the modulating signal with the carrier signal. To demonstrate the frequency-translation property of the amplitude modulation process, let $x(t) = \cos(\Omega_1 t)$, where $\Omega_1$ is much smaller than the carrier frequency $\Omega_o$, i.e., $\Omega_1 \ll \Omega_o$. From Eq. (1.10) we therefore obtain

$y(t) = A\cos(\Omega_1 t)\cos(\Omega_o t) = \dfrac{A}{2}\cos((\Omega_o + \Omega_1)t) + \dfrac{A}{2}\cos((\Omega_o - \Omega_1)t)$.   (1.11)

Thus, the modulated signal $y(t)$ is composed of two sinusoidal signals of frequencies $\Omega_o + \Omega_1$ and $\Omega_o - \Omega_1$, which are close to $\Omega_o$, as $\Omega_1$ has been assumed to be much smaller than the carrier frequency $\Omega_o$. It is instructive to examine the spectrum of $y(t)$. From the properties of the continuous-time Fourier transform it follows that the spectrum $Y(j\Omega)$ of $y(t)$ is given by

$Y(j\Omega) = \dfrac{A}{2}\left[X(j(\Omega - \Omega_o)) + X(j(\Omega + \Omega_o))\right]$,   (1.12)

where $X(j\Omega)$ is the spectrum of the modulating signal $x(t)$. Figure 1.4 shows the spectra of the modulating signal and of the modulated signal under the assumption that the carrier frequency $\Omega_o$ is greater than $\Omega_m$, the highest frequency contained in $x(t)$. As seen from this figure, $y(t)$ is now a bandlimited high-frequency signal with a bandwidth of $2\Omega_m$ centered at $\Omega_o$.

The portion of the amplitude-modulated signal between $\Omega_o$ and $\Omega_o + \Omega_m$ is called the upper sideband, whereas the portion between $\Omega_o$ and $\Omega_o - \Omega_m$ is called the lower sideband. Because of the generation of two sidebands and the absence of a carrier component in the modulated signal, the process is called double-sideband suppressed carrier (DSB-SC) modulation. The demodulation of $y(t)$, assuming $\Omega_o > \Omega_m$, is carried out in two stages.
First, the product of $y(t)$ with a sinusoidal signal of the same frequency as the carrier is formed. This results in

$r(t) = y(t)\cos(\Omega_o t) = A\, x(t)\cos^2(\Omega_o t)$,   (1.13)

which can be rewritten as

$r(t) = A\, x(t)\cos^2(\Omega_o t) = \dfrac{A}{2}\, x(t) + \dfrac{A}{2}\, x(t)\cos(2\Omega_o t)$.   (1.14)

Figure 1.4: (a) Spectrum of the modulating signal x(t), and (b) spectrum of the modulated signal y(t). For convenience, both spectra are shown as real functions.

Figure 1.5: Spectrum of the product of the modulated signal and the carrier.

This result indicates that the product signal is composed of the original modulating signal scaled by a factor $1/2$ and an amplitude-modulated signal with a carrier frequency $2\Omega_o$. The spectrum $R(j\Omega)$ of $r(t)$ is as indicated in Figure 1.5. The original modulating signal can now be recovered from $r(t)$ by passing it through a lowpass filter with a cutoff frequency $\Omega_c$ satisfying the relation $\Omega_m < \Omega_c < 2\Omega_o - \Omega_m$. The output of the filter is then a scaled replica of the modulating signal. Figure 1.6 shows the block diagram representations of the amplitude modulation and demodulation schemes.

The underlying assumption in the demodulation process outlined above is that a sinusoidal signal identical to the carrier signal can be generated at the receiving end. In general, it is difficult to ensure that the demodulating sinusoidal signal has a frequency identical to that of the carrier all the time. To get around this problem, in the transmission of amplitude-modulated radio signals, the modulation process is modified so that the transmitted signal includes the carrier signal. This is achieved by redefining the amplitude modulation operation as follows:

$y(t) = A\left[1 + m\, x(t)\right]\cos(\Omega_o t)$,   (1.15)

Figure 1.6: Schematic representations of the amplitude modulation and demodulation schemes: (a) modulator, and (b) demodulator.
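The DSB-SC modulation of Eq. (1.10) and the two-stage demodulation of Eqs. (1.13)–(1.14) can be sketched numerically. The book's examples use MATLAB; this is a rough NumPy/SciPy equivalent, with the message and carrier frequencies, sampling rate, and filter order all illustrative choices rather than values from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative parameters: 20 Hz message, 400 Hz carrier (as in Figure 1.7)
fs = 8000.0
t = np.arange(8000) / fs
x = np.cos(2 * np.pi * 20 * t)        # modulating signal
A = 1.0
carrier = np.cos(2 * np.pi * 400 * t)

y = A * x * carrier                   # Eq. (1.10): DSB-SC modulated signal

# Stage 1, Eq. (1.13): multiply by a local carrier of the same frequency
r = y * carrier
# Stage 2, Eq. (1.14): r(t) = (A/2)x(t) + (A/2)x(t)cos(2*W0*t); a lowpass
# filter with cutoff between 20 Hz and 780 Hz removes the 2*W0 component
b, a = butter(4, 100.0 / (fs / 2))    # 4th-order Butterworth, 100 Hz cutoff
x_rec = (2.0 / A) * filtfilt(b, a, r) # rescale to undo the 1/2 factor
```

Away from the edge transients of the filter, `x_rec` should closely track the original modulating signal `x`.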
Figure 1.7: (a) A sinusoidal modulating signal of frequency 20 Hz, and (b) modulated carrier with a carrier frequency of 400 Hz based on the DSB modulation.

where $m$ is a number chosen to ensure that $[1 + m\, x(t)]$ is positive for all $t$. Figure 1.7 shows the waveforms of a modulating sinusoidal signal of frequency 20 Hz and the amplitude-modulated carrier obtained according to Eq. (1.15) for a carrier frequency of 400 Hz and $m = 0.5$. Note that the envelope of the modulated carrier is essentially the waveform of the modulating signal. As here the carrier is also present in the modulated signal, the process is called simply double-sideband (DSB) modulation. At the receiving end, the carrier signal is separated first and then used for demodulation.

1.2.5 Multiplexing and Demultiplexing

For an efficient utilization of a wideband transmission channel, many narrow-bandwidth low-frequency signals are combined to form a composite wideband signal that is transmitted as a single signal. The process of combining these signals is called multiplexing, which is implemented to ensure that a replica of the original narrow-bandwidth low-frequency signals can be recovered at the receiving end. The recovery process is called demultiplexing. One widely used method of combining different voice signals in a telephone communication system is the frequency-division multiplexing (FDM) scheme [Cou83], [Opp83]. Here, each voice signal, typically bandlimited to a low-frequency band of width $2\Omega_m$, is frequency-translated into a higher frequency band using the amplitude modulation method of Eq. (1.10). The carrier frequencies of adjacent amplitude-modulated signals are separated by $\Omega_o$, with $\Omega_o > 2\Omega_m$, to ensure that there is no overlap in the spectra of the individual modulated signals after they are added to form a baseband composite signal.
This signal is then modulated onto the main carrier, developing the FDM signal, and transmitted. Figure 1.8 illustrates the frequency-division multiplexing scheme.

Figure 1.8: Illustration of the frequency-division multiplexing operation. (a) Spectra of three low-frequency signals, and (b) spectrum of the modulated composite signal.

Figure 1.9: Single-sideband modulation scheme employing a Hilbert transformer.

At the receiving end, the composite baseband signal is first derived from the FDM signal by demodulation. Then each individual frequency-translated signal is demultiplexed by passing the composite signal through a bandpass filter with a center frequency of identical value as that of the corresponding carrier frequency and a bandwidth slightly greater than $2\Omega_m$. The output of the bandpass filter is then demodulated using the method of Figure 1.6(b) to recover a scaled replica of the original voice signal.

In the case of the conventional amplitude modulation, as can be seen from Figure 1.4, the modulated signal has a bandwidth of $2\Omega_m$, whereas the bandwidth of the modulating signal is $\Omega_m$. To increase the capacity of the transmission medium, a modified form of the amplitude modulation is often employed in which either the upper sideband or the lower sideband of the modulated signal is transmitted. The corresponding procedure is called single-sideband (SSB) modulation to distinguish it from the double-sideband modulation scheme of Figure 1.6(a). One way to implement single-sideband amplitude modulation is indicated in Figure 1.9, where the Hilbert transformer is defined by Eq. (1.6). The spectra of pertinent signals in Figure 1.9 are shown in Figure 1.10.

Figure 1.10: Spectra of pertinent signals in Figure 1.9.
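The Hilbert-transformer SSB scheme of Figure 1.9 can be sketched numerically: forming $x(t)\cos(\Omega_o t) - \hat{x}(t)\sin(\Omega_o t)$ keeps only the upper sideband. The book's examples use MATLAB; this NumPy/SciPy sketch uses a single 100 Hz tone and a 1 kHz carrier as illustrative values (not from the text), so that all the energy should land at 1100 Hz and none at 900 Hz.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative parameters (not from the text): 100 Hz tone, 1 kHz carrier
fs = 8000.0
n = 8000                               # one second of samples
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 100 * t)        # modulating signal
x_hat = np.imag(hilbert(x))            # quadrature component (Hilbert transform)

f0 = 1000.0
# Upper-sideband SSB: x(t)cos(W0 t) - x_hat(t)sin(W0 t)
y_usb = x * np.cos(2 * np.pi * f0 * t) - x_hat * np.sin(2 * np.pi * f0 * t)

# With fs = n = 8000, FFT bin k corresponds to k Hz: the energy should sit
# at f0 + 100 = 1100 Hz (upper sideband) and not at f0 - 100 = 900 Hz
Y = np.abs(np.fft.rfft(y_usb))
```

Swapping the minus sign for a plus would instead retain the lower sideband at 900 Hz.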
1.2.6 Quadrature Amplitude Modulation

We observed earlier that DSB amplitude modulation is half as efficient as SSB amplitude modulation with regard to utilization of the spectrum. The quadrature amplitude modulation (QAM) method uses DSB modulation to modulate two different signals so that they both occupy the same bandwidth; thus QAM takes up only as much bandwidth as the SSB modulation method. To understand the basic idea behind the QAM approach, let $x_1(t)$ and $x_2(t)$ be two bandlimited low-frequency signals with a bandwidth of $\Omega_m$, as indicated in Figure 1.4(a). The two modulating signals are individually modulated by the two carrier signals $A\cos(\Omega_o t)$ and $A\sin(\Omega_o t)$, respectively, and are summed, resulting in a composite signal $y(t)$ given by

$y(t) = A\, x_1(t)\cos(\Omega_o t) + A\, x_2(t)\sin(\Omega_o t)$.   (1.16)

Note that the two carrier signals have the same carrier frequency $\Omega_o$ but have a phase difference of 90°. In general, the carrier $A\cos(\Omega_o t)$ is called the in-phase component and the carrier $A\sin(\Omega_o t)$ is called the quadrature component. The spectrum $Y(j\Omega)$ of the composite signal $y(t)$ is now given by

$Y(j\Omega) = \dfrac{A}{2}\left[X_1(j(\Omega - \Omega_o)) + X_1(j(\Omega + \Omega_o))\right] + \dfrac{A}{2j}\left[X_2(j(\Omega - \Omega_o)) - X_2(j(\Omega + \Omega_o))\right]$,   (1.17)

and is seen to occupy the same bandwidth as the modulated signal obtained by a DSB modulation. To recover the original modulating signals, the composite signal is multiplied by both the in-phase and the quadrature components of the carrier separately, resulting in two signals:

$r_1(t) = y(t)\cos(\Omega_o t)$,   (1.18)
$r_2(t) = y(t)\sin(\Omega_o t)$.   (1.19)

Substituting $y(t)$ from Eq. (1.16) into Eqs. (1.18) and (1.19), we obtain after some algebra

$r_1(t) = \dfrac{A}{2}\, x_1(t) + \dfrac{A}{2}\, x_1(t)\cos(2\Omega_o t) + \dfrac{A}{2}\, x_2(t)\sin(2\Omega_o t)$,   (1.20)

$r_2(t) = \dfrac{A}{2}\, x_2(t) + \dfrac{A}{2}\, x_1(t)\sin(2\Omega_o t) - \dfrac{A}{2}\, x_2(t)\cos(2\Omega_o t)$.   (1.21)

Lowpass filtering of $r_1(t)$ and $r_2(t)$ by filters with a cutoff at $\Omega_m$ yields the two modulating signals. Figure 1.11 shows the block diagram representations of the quadrature amplitude modulation and demodulation schemes.
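The QAM modulator of Eq. (1.16) and the demodulator of Eqs. (1.18)–(1.21) can be sketched numerically. The book's examples use MATLAB; this is a rough NumPy/SciPy equivalent, with all frequencies and the filter order chosen for illustration only (not from the text).

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative parameters (not from the text)
fs = 8000.0
t = np.arange(8000) / fs
x1 = np.cos(2 * np.pi * 15 * t)        # in-phase message
x2 = np.sin(2 * np.pi * 25 * t)        # quadrature message
A = 1.0
w0t = 2 * np.pi * 500 * t              # 500 Hz carrier phase

# Eq. (1.16): composite QAM signal; both messages share the same band
y = A * x1 * np.cos(w0t) + A * x2 * np.sin(w0t)

# Eqs. (1.18)-(1.21): multiply by each carrier phase; the components at
# 2*W0 (1000 Hz) are then removed by a lowpass filter cutting off at 100 Hz
b, a = butter(4, 100.0 / (fs / 2))
x1_rec = (2.0 / A) * filtfilt(b, a, y * np.cos(w0t))
x2_rec = (2.0 / A) * filtfilt(b, a, y * np.sin(w0t))
```

Away from the filter's edge transients, `x1_rec` and `x2_rec` should closely track the two original messages, illustrating that the two DSB signals in quadrature do not interfere when the receiver's carrier phase is exact.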
Figure 1.11: Schematic representations of the quadrature amplitude modulation and demodulation schemes: (a) modulator, and (b) demodulator.

As in the case of the DSB suppressed carrier modulation method, the QAM method also requires at the receiver an exact replica of the carrier signal employed at the transmitting end for accurate demodulation. It is therefore not employed in the direct transmission of analog signals, but finds applications in the transmission of discrete-time data sequences and in the transmission of analog signals converted into discrete-time sequences by sampling and analog-to-digital conversion.

1.2.7 Signal Generation

An equally important part of signal processing is synthetic signal generation. One of the simplest such signal generators is a device generating a sinusoidal signal, called an oscillator. Such a device is an integral part of the amplitude-modulation and demodulation system described in the previous two sections. It also has various other signal processing applications. There are applications that require the generation of other types of periodic signals, such as square waves and triangular waves. Certain types of random signals with a spectrum of constant amplitude for all frequencies, called white noise, often find applications in practice. One such application is in the generation of discrete-time synthetic speech signals.

1.3 Examples of Typical Signals

To better understand the breadth of the signal processing task, we now examine a number of examples of some typical signals and their subsequent processing in typical applications.

Electrocardiography (ECG) Signal

The electrical activity of the heart is represented by the ECG signal [Sha81]. A typical ECG signal trace is shown in Figure 1.12(a). The ECG trace is essentially a periodic waveform.
One such period of the ECG waveform as depicted in Figure 1.12(b) represents one cycle of the blood transfer process from the heart to the arteries. This part of the waveform is generated by an electrical impulse originating at the sinoatrial node in the right atrium of the heart. The impulse causes contraction of the atria, which forces the blood in each atrium to squeeze into its corresponding ventricle. The resulting signal is called the P-wave. The atrioventricular node delays the excitation impulse until the blood transfer from the atria to the ventricles is completed, resulting in the P-R interval of the ECG waveform. The excitation impulse then causes contraction of the ventricles, which squeezes the blood into the arteries. This generates the QRS part of the ECG waveform. During this phase the atria are relaxed and filled with blood. The T-wave of the waveform represents the relaxation of the ventricles. The complete process is repeated periodically, generating the ECG trace.

(This section has been adapted from Handbook for Digital Signal Processing, Sanjit K. Mitra and James F. Kaiser, Eds., 1993, John Wiley & Sons. Adapted by permission of John Wiley & Sons.)

Figure 1.12: (a) A typical ECG trace, and (b) one cycle of an ECG waveform.

Each portion of the ECG waveform carries various types of information for the physician analyzing a patient's heart condition [Sha81]. For example, the amplitude and timing of the P and QRS portions indicate the condition of the cardiac muscle mass. Loss of amplitude indicates muscle damage, whereas increased amplitude indicates abnormal heart rates. Too long a delay in the atrioventricular node is indicated by a very long P-R interval. Likewise, blockage of some or all of the contraction impulses is reflected by intermittent synchronization between the P- and QRS-waves.
Most of these abnormalities can be treated with various drugs, and the effectiveness of the drugs can again be monitored by observing the new ECG waveforms taken after the drug treatment.

In practice, there are various types of externally produced artifacts that appear in the ECG signal [Tom81]. Unless these interferences are removed, it is difficult for a physician to make a correct diagnosis. A common source of noise is the 60-Hz power lines, whose radiated electric and magnetic fields are coupled to the ECG instrument through capacitive coupling and/or magnetic induction. Other sources of interference are the electromyographic signals, which are the potentials developed by contracting muscles. These and other interferences can be removed with careful shielding and signal processing techniques.

Figure 1.13: Multiple EEG signal traces.

Electroencephalogram (EEG) Signal

The summation of the electrical activity caused by the random firing of billions of individual neurons in the brain is represented by the EEG signal [Coh86], [Tom81]. In multiple EEG recordings, electrodes are placed at various positions on the scalp with two common electrodes placed on the earlobes, and potential differences between the various electrodes are recorded. A typical bandwidth of this type of EEG ranges from 0.5 to about 100 Hz, with the amplitudes ranging from 2 to 100 μV. An example of multiple EEG traces is shown in Figure 1.13.
Both frequency-domain and time-domain analyses of the EEG signal have been used for the diagnosis of epilepsy, sleep disorders, psychiatric malfunctions, etc. To this end, the EEG spectrum is subdivided into the following five bands: (1) the delta range, occupying the band from 0.5 to 4 Hz; (2) the theta range, occupying the band from 4 to 8 Hz; (3) the alpha range, occupying the band from 8 to 13 Hz; (4) the beta range, occupying the band from 13 to 22 Hz; and (5) the gamma range, occupying the band from 22 to 30 Hz. The delta wave is normal in the EEG signals of children and sleeping adults. Since it is not common in alert adults, its presence indicates certain brain diseases. The theta wave is usually found in children, even though it has been observed in alert adults. The alpha wave is common in all normal humans and is more pronounced in a relaxed and awake subject with closed eyes. Likewise, the beta activity is common in normal adults. The EEG exhibits rapid, low-voltage waves, called rapid-eye-movement (REM) waves, in a subject dreaming during sleep. Otherwise, in a sleeping subject, the EEG contains bursts of alpha-like waves, called sleep spindles. The EEG of an epileptic patient exhibits various types of abnormalities, depending on the type of epilepsy that is caused by uncontrolled neural discharges.

Seismic Signals

These types of signals are caused by the movement of rocks resulting from an earthquake, a volcanic eruption, or an underground explosion [Bol93]. The ground movement generates elastic waves that propagate through the body of the earth in all directions from the source of movement. Three basic types of elastic waves are generated by the earth movement. Two of these waves propagate through the body of the earth, one moving faster with respect to the other. The faster moving wave is called the primary or P-wave, while
The third type of wave is known as the surface wave, which moves along the ground surface. These seismic waves are converted into electrical signals by a seismograph and are recorded on a strip chart recorder or a magnetic tape. Because of the three-dimensional nature of ground movement, a seismograph usually consists of three separate recording instruments that provide information about the movements in the two horizontal directions and one vertical direction, and develops three records as indicated in Figure 1.14. Each such record is a one-dimensional signal. From the recorded signals it is possible to determine the magnitude of the earthquake or nuclear explosion and the location of the source of the original earth movement.

Seismic signals also play an important role in the geophysical exploration for oil and gas [Rob80]. In this type of application, linear arrays of seismic sources, such as high-energy explosives, are placed at regular intervals on the ground surface. The explosions cause seismic waves to propagate through the subsurface geological structures and reflect back to the surface from interfaces between geological strata. The reflected waves are converted into electrical signals by a composite array of geophones laid out in certain patterns and displayed as a two-dimensional signal that is a function of time and space, called a trace gather, as indicated in Figure 1.15.
Before these signals are analyzed, some preliminary time and amplitude corrections are made on the data to compensate for different physical phenomena. From the corrected data, the time differences between reflected seismic signals are used to map structural deformations, whereas the amplitude changes usually indicate the presence of hydrocarbons.

Diesel Engine Signal

Signal processing is playing an important role in the precision adjustment of diesel engines during production [Jur81]. Efficient operation of the engine requires the accurate determination of the topmost point of piston travel (called the top dead center) inside the cylinder of the engine. Figure 1.16 shows the signals generated by a dual probe inserted into the combustion chamber of a diesel engine in place of the glow plug. The probe consists of a microwave antenna and a photodiode detector. The microwave probe captures signals reflected from the cylinder cavity caused by the up and down motion of the piston while the engine is running. Interestingly, the waveforms of these signals exhibit a symmetry around the top dead center independent of the engine speed, temperature, cylinder pressure, or air-fuel ratio. The point of symmetry is determined automatically by a microcomputer, and the fuel-injection pump position is then adjusted by the computer accurately to within ±0.5 degree using the luminosity signal sensed by the photodiode detector.

Speech Signals

The linear acoustic theory of speech production has led to mathematical models for the representation of speech signals. A speech signal is formed by exciting the vocal tract and is composed of two types of sounds: voiced and unvoiced [Rab78], [Slu84]. The voiced sound, which includes the vowels and a number of consonants such as B, D, L, M, N, and R, is excited by the pulsatile airflow resulting from the vibration of the vocal folds.
On the other hand, the unvoiced sound is produced downstream in the forward part of the oral cavity (mouth) with the vocal cords at rest and includes sounds like F, S, and SH. Figure 1.17(a) depicts the speech waveform of a male utterance "every salt breeze comes from the sea" [Fla79]. The total duration of the speech waveform is 2.5 seconds. Magnified versions of the "A" and "S" segments in the word "salt" are sketched in Figures 1.17(b) and (c), respectively. The slowly varying low-frequency voiced waveform of "A" and the high-frequency unvoiced fricative waveform of "S" are evident from the magnified waveforms. The voiced waveform in Figure 1.17(b) is seen to be quasi-periodic and can be modeled by a sum of a finite number of sinusoids. The lowest frequency of oscillation in this representation is called the fundamental frequency or pitch frequency. The unvoiced waveform in Figure 1.17(c) has no regular fine structure and is more noise-like.

16 Chapter 1: Signals and Signal Processing

Figure 1.14: Seismograph record of the Northridge aftershock, January 29, 1994. Recorded at Stone Canyon Reservoir, Los Angeles, CA. (Courtesy of Institute for Crustal Research, University of California, Santa Barbara, CA.)

1.3. Examples of Typical Signals 17

Figure 1.15: A typical seismic signal trace gather. (Courtesy of Institute for Crustal Research, University of California, Santa Barbara, CA.)

One of the major applications of digital signal processing techniques is in the general area of speech processing. Problems in this area are usually divided into three groups: (1) speech analysis, (2) speech synthesis, and (3) speech analysis and synthesis [Opp78].
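The sum-of-sinusoids model of the voiced waveform described above can be sketched in a few lines; the pitch frequency, harmonic amplitudes, sampling rate, and frame length used here are illustrative assumptions, not values taken from the text.

```python
import math

def voiced_frame(pitch_hz=100.0, amps=(1.0, 0.6, 0.3, 0.15),
                 fs=8000, n_samples=400):
    """Model a quasi-periodic voiced segment as a finite sum of
    sinusoids at integer multiples of the pitch (fundamental)
    frequency.  All parameter values here are illustrative."""
    frame = []
    for n in range(n_samples):
        t = n / fs
        # Harmonics at k * pitch_hz for k = 1 .. len(amps)
        frame.append(sum(a * math.sin(2 * math.pi * k * pitch_hz * t)
                         for k, a in enumerate(amps, start=1)))
    return frame

x = voiced_frame()
# With fs = 8000 Hz and a 100-Hz pitch, the model repeats every
# fs / pitch_hz = 80 samples, mimicking the quasi-periodicity of "A".
```

Real voiced speech is only approximately periodic, so the amplitudes and pitch would drift slowly from period to period; the fixed values above are the simplest version of the model.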
Digital speech analysis methods are used in automatic speech recognition, speaker verification, and speaker identification. Applications of digital speech synthesis techniques include reading machines for the automatic conversion of written text into speech, and retrieval of data from computers in speech form by remote access through terminals or telephones. One example belonging to the third group is voice scrambling for secure transmission. Speech data compression for an efficient use of the transmission medium is another example of the use of speech analysis followed by synthesis. A typical speech signal after conversion into a digital form contains about 64,000 bits per second (bps). Depending on the desired quality of the synthesized speech, the original data can be compressed considerably, e.g., down to about 1000 bps.

Musical Sound Signal

The electronic synthesizer is an example of the use of modern signal processing techniques [Moo77], [Ler83]. The natural sound generated by most musical instruments is generally produced by mechanical vibrations caused by activating some form of oscillator that then causes other parts of the instrument to vibrate. All these vibrations together in a single instrument generate the musical sound. In a violin the primary oscillator is a stretched piece of string (cat gut).
Its movement is caused by drawing a bow across it; this sets the wooden body of the violin vibrating, which in turn sets up vibrations of the air inside as well as outside the instrument. In a piano the primary oscillator is a stretched steel wire that is set into vibratory motion by the hitting of a hammer, which in turn causes vibrations in the wooden body (sounding board) of the piano. In wind or brass instruments the vibration occurs in a column of air, and a mechanical change in the length of the air column by means of valves or keys regulates the rate of vibration.

18 Chapter 1: Signals and Signal Processing

Figure 1.16: Diesel engine signals. (Reproduced with permission from R. K. Jurgen, Detroit bets on electronics to stymie Japan, IEEE Spectrum, vol. 18, July 1981, pp. 29-32 ©1981 IEEE.)

The sound of orchestral instruments can be classified into two groups: quasi-periodic and aperiodic. Quasi-periodic sounds can be described by a sum of a finite number of sinusoids with independently varying amplitudes and frequencies. The sound waveforms of two different instruments, the cello and the bass drum, are indicated in Figures 1.18(a) and (b), respectively. In each figure, the top waveform is the plot of an entire isolated note, whereas the bottom plot shows an expanded version of a portion of the note: 10 ms for the cello and 80 ms for the bass drum. The waveform of the note from a cello is seen to be quasi-periodic. On the other hand, the bass drum waveform is clearly aperiodic. The tone of an orchestral instrument is commonly divided into three segments called the attack part, the steady-state part, and the decay part. Figure 1.18 illustrates this division for the two tones. Note that the bass drum tone of Figure 1.18(b) shows no steady-state part. A reasonable approximation of many tones is obtained by splicing together these parts. However, high-fidelity reproduction requires a more complex model.
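The three-segment tone division described above can be sketched as a piecewise amplitude envelope applied to a sinusoidal carrier; the segment lengths, the linear envelope shapes, and the carrier frequency below are illustrative assumptions rather than a model taken from the text.

```python
import math

def tone_envelope(n, attack, steady, decay):
    """Piecewise amplitude envelope for one note: a linear rise over
    the attack part, a constant level over the steady-state part, and
    a linear fall over the decay part (lengths in samples)."""
    if n < attack:
        return n / attack
    if n < attack + steady:
        return 1.0
    if n < attack + steady + decay:
        return 1.0 - (n - attack - steady) / decay
    return 0.0

def synth_note(freq=220.0, fs=8000, attack=400, steady=1600, decay=800):
    """Splice the three envelope segments onto a sinusoidal carrier,
    the crude approximation of a tone mentioned in the text."""
    total = attack + steady + decay
    return [tone_envelope(n, attack, steady, decay)
            * math.sin(2 * math.pi * freq * n / fs)
            for n in range(total)]

note = synth_note()
```

A bass-drum-like tone would simply set `steady=0`, matching the observation that its waveform shows no steady-state part.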
Time Series

The signals described thus far are continuous functions with time as the independent variable. In many cases the signals of interest are naturally discrete functions of the independent variables. Often such signals are of finite duration. Examples of such signals are the yearly average number of sunspots, daily stock prices, the value of total monthly exports of a country, the yearly population of animal species in a certain geographical area, the annual yields per acre of crops in a country, and the monthly totals of international airline passengers over certain periods. This type of finite-extent signal, usually called a time series, occurs in business, economics, physical sciences, social sciences, engineering, medicine, and many other fields. Plots of some typical time series are shown in Figures 1.19 to 1.21.

1.3. Examples of Typical Signals 19

Figure 1.17: Speech waveform example: (a) sentence-length segment, (b) magnified version of the voiced segment (the letter A), and (c) magnified version of the unvoiced segment (the letter S). (Reproduced with permission from J. L. Flanagan et al., Speech coding, IEEE Trans. on Communications, vol. COM-27, April 1979, pp. 710-737 ©1979 IEEE.)

There are many reasons for analyzing a particular time series [Box70]. In some applications, there may be a need to develop a model to determine the nature of the dependence of the data on the independent variable and use it to forecast the future behavior of the series. As an example, in business planning, reasonably accurate sales forecasts are necessary. Some types of series possess seasonal or periodic components, and it is important to extract these components. The study of sunspot numbers is important for predicting climate variations. Invariably, the time series data are noisy, and their representations require models based on their statistical properties.
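Extracting a seasonal or trend component from a series, as mentioned above, is often begun with a simple moving average; the synthetic monthly data and the 12-point window below are illustrative assumptions (a window of one full period cancels a purely seasonal component).

```python
import math

def moving_average(series, window=12):
    """Smooth a time series with a length-`window` moving average,
    a common first step for exposing trend in monthly data.
    The output has len(series) - window + 1 points."""
    out = []
    for i in range(len(series) - window + 1):
        out.append(sum(series[i:i + window]) / window)
    return out

# Synthetic monthly series: a period-12 seasonal component riding on a
# constant level of 10 units (for illustration only).
data = [10 + 3 * math.sin(2 * math.pi * m / 12) for m in range(48)]
trend = moving_average(data)
# Averaging over exactly one seasonal period cancels the sinusoidal
# component, leaving the underlying level of 10.
```

Real series such as the gasoline-demand data of Figure 1.21 would of course also contain noise and a time-varying trend, which is why the text points to statistical models rather than this deterministic sketch.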
Images

As indicated earlier, an image is a two-dimensional signal whose intensity at any point is a function of two spatial variables. Common examples are photographs, still video images, radar and sonar images, and chest and dental x-rays. An image sequence, such as that seen on a television, is essentially a three-dimensional signal for which the image intensity at any point is a function of three variables: two spatial variables and time. Figure 1.22(a) shows the photograph of a digital image.

20 Chapter 1: Signals and Signal Processing

Figure 1.18: Waveforms of (a) the cello and (b) the bass drum. (Reproduced with permission from J. A. Moorer, Signal processing aspects of computer music: A survey, Proceedings of the IEEE, vol. 65, August 1977, pp. 1108-1137 ©1977 IEEE.)

Figure 1.19: Seasonally adjusted quarterly Gross National Product of the United States in 1982 dollars from 1976 to 1986. (Adapted from [Liu91].)

The basic problems in image processing are image signal representation and modeling, enhancement, restoration, reconstruction from projections, analysis, and coding [Jai89].

Each picture element in a specific image represents a certain physical quantity; a characterization of the element is called the image representation. For example, a photograph represents the luminances of various objects as seen by the camera. An infrared image taken by a satellite or an airplane represents the temperature profile of a geographical area. Depending on the type of image and its applications, various

1.3. Examples of Typical Signals 21

Figure 1.20: Monthly mean St. Louis, Missouri, temperature in degrees Celsius for the years 1975 to 1978. (Adapted from [Mar87].)

Figure 1.21: Monthly gasoline demand in Ontario, Canada (in millions of gallons) from January 1971 to December 1975. (Adapted from [Abr83].)

types of image models are usually defined. Such models are also based on perception, and on local or global characteristics.
The nature and performance of the image processing algorithms depend on the image model being used. Image enhancement algorithms are used to emphasize specific image features to improve the quality of the image for visual perception or to aid in the analysis of the image for feature extraction. These include methods for contrast enhancement, edge detection, sharpening, linear and nonlinear filtering, zooming, and noise removal. Figure 1.22(b) shows the contrast-enhanced version of the image of Figure 1.22(a), developed using a nonlinear filter [Thu00]. The algorithms used for elimination or reduction of degradations in an image, such as blurring and geometric distortion caused by the imaging system and/or its surroundings, are known as image restoration.

Image reconstruction from projections involves the development of a two-dimensional image slice of a three-dimensional object from a number of planar projections obtained from various angles. By creating a number of contiguous slices, a three-dimensional image giving an inside view of the object is developed. Image analysis methods are employed to develop a quantitative description and classification of one or more desired objects.

For digital processing, an image needs to be sampled and quantized using an analog-to-digital converter. A reasonable size digital image in the original form takes a considerable amount of memory space for storage. For example, an image of size 512 x 512 samples with 8-bit resolution per sample contains over 2 million bits. Image coding methods are used to reduce the total number of bits in an image without any degradation in visual perception quality, as in speech coding, e.g., down to about 1 bit per sample on the average.

22 Chapter 1: Signals and Signal Processing

Figure 1.22: (a) A digital image, and (b) its contrast-enhanced version. (Reproduced with permission from Nonlinear Image Processing, S. K. Mitra and G. Sicuranza, Eds., Academic Press, New York, NY, ©2000.)
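The storage figure quoted above is easy to verify, and the same arithmetic shows the compression factor implied by coding an image down to about 1 bit per sample:

```python
def image_bits(rows, cols, bits_per_sample):
    """Total number of bits in an uncompressed sampled image."""
    return rows * cols * bits_per_sample

raw = image_bits(512, 512, 8)    # 2,097,152 bits -- over 2 million
coded = image_bits(512, 512, 1)  # roughly 1 bit per sample after coding
ratio = raw / coded              # an 8:1 reduction in this case
```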
1.4 Typical Signal Processing Applications*

There are numerous applications of signal processing that we often encounter in our daily life without being aware of them. Due to space limitations, it is not possible to discuss all of these applications. However, an overview of selected applications is presented.

1.4.1 Sound Recording Applications

The recording of most musical programs nowadays is usually made in an acoustically inert studio. The sound from each instrument is picked up by its own microphone closely placed to the instrument and is recorded on a single track in a multitrack tape recorder containing as many as 48 tracks. The signals from individual tracks in the master recording are then edited and combined by the sound engineer in a mix-down system to develop a two-track stereo recording. There are a number of reasons for following this approach. First, the closeness of each individual microphone to its assigned instrument provides a high degree of separation between the instruments and minimizes the background noise in the recording. Second, the sound part of one instrument can be rerecorded later if necessary. Third, during the mix-down process the sound engineer can manipulate individual signals by using a variety of signal processing devices to alter the musical balances between the sounds generated by the instruments, can change the timbre, and can add natural room acoustics effects and other special effects [Ble78], [Ear76].

Various types of signal processing techniques are utilized in the mix-down phase. Some are used to modify the spectral characteristics of the sound signal and to add special effects, whereas others are used to improve the quality of the transmission medium. The signal processing circuits most commonly used are: (1) compressors and limiters, (2) expanders and noise gates, (3) equalizers and filters, (4) noise reduction systems, (5) delay and reverberation systems, and (6) circuits for special effects [Ble78],
[Ear76], [Hub89], [Wor89]. These operations are usually performed on the original analog audio signals and are implemented using analog circuit components. However, there is a growing trend toward all digital implementation and its use in the processing of the digitized versions of the analog audio signals [Ble78]. 2 This section has been adagecd from Handbook for Digiual Signal Processing, Sant K, Mira and James F. Kaiser Eds, ©1993, Jot Wiley & Sons. Adapted by permission of John Wiley & Sons. 1.4. Typical Signal Processing Applications 23 aot cernoresson te 8 E S Sapna vanabie “"P dhneshola Figure 1.23: Transfer characteristic of a typical compressor. | : 2 stat an ae ; i “Time g Ame Rekave Time a & igre 1.28 Pens chanting pi compre Compressors and limiters, These devices are used for the compression of the dynamic range of an audio signal. The compressor can be considered as an amplifier with two gain levels: the gain is unity for inpot signal levels below a certain threshold and less than unity for signals with levels above the threshold, ‘The dhreshotd level is adjustable over a wide range of the input signal. Figure 1.23 shows the transfer characteristic of a typical compressor. ‘The parameters characterizing a compressor are its compression ratio, threshold level, attack time, and release time, which are itlustrated in Figure 1.24, ‘When the input signal level suddenly rises above a presctibed threshold, the time taken by the com- Pressor to adjust its normal unity gain to the lower value is called the attack ime. Because of this effect, the output signal exhibits a slight degree of overshoot before the desired output level is reached. A zero altack Lime is desirable to protect the system from sudden high-level transients. However. in tis case, the iapact of sharp musics! attacks is eliminated, resulting in a dall “lifeless” sound [Wor89]. A longer attack lime causes the output to sound more percussive than normal. 
Similarly, the time taken by the compressor to reach its normal unity gain value when the input level suddenly drops below the threshold is called the release time or recovery time. If the input signal fluctuates rapidly around the threshold in a small region, the compressor gain also fluctuates up and down. In such a situation, the rise and fall of background noise results in an audible effect called breathing or pumping, which can be minimized with a longer release time for the compressor gain.

There are various applications of the compressor unit in musical recording [Ear76]. For example, it can be used to eliminate variations in the peaks of an electric bass output signal by clamping them to a constant level, thus providing an even and solid bass line. To maintain the original character of the instrument, it is

24 Chapter 1: Signals and Signal Processing

Figure 1.33: Block diagram of a complete delay-reverberation system in a monophonic system.

Figure 1.34: Localization of sound source using delay systems and attenuation network.

Special effects. By feeding in the same sound signal through an adjustable delay and gain control, as indicated in Figure 1.34, it is possible to vary the localization of the sound source from the left speaker to the right for a listener located on the plane of symmetry. For example, in Figure 1.34, a 0-dB loss in the left channel and a few milliseconds delay in the right channel give the impression of a localization of the sound source at the left. However, lowering of the left-channel signal level by a few-dB loss results in a phantom image of the sound source moving toward the center. This scheme can be further extended to provide a degree of sound broadening by phase shifting one channel with respect to the other through allpass networks* as shown in Figure 1.35.
*An allpass network is characterized by a magnitude spectrum which is equal to one for all frequencies.

Chapter 1: Signals and Signal Processing

Figure 1.35: Sound broadening using allpass networks.

Figure 1.36: A possible application of delay systems and reverberation in a stereophonic system.

Another application of the delay-reverberation system is in the processing of a single track into a pseudo-stereo format while simulating a natural acoustical environment, as illustrated in Figure 1.36. The delay system can also be used to generate a chorus effect from the sound of a soloist. The basic scheme used is illustrated in Figure 1.37. Each of the delay units has a variable delay controlled by a low-frequency pseudo-random noise source to provide a random pitch variation [Ble78]. It should be pointed out here that additional signal processing is employed to make the stereo submaster developed by the sound engineer more suitable for the record-cutting lathe or the cassette tape duplicator.

1.4.2 Telephone Dialing Applications

Signal processing plays a key role in the detection and generation of signaling tones for push-button telephone dialing [Dar76]. In telephones equipped with TOUCH-TONE® dialing, the pressing of each button generates a unique set of two-tone signals, called dual-tone multifrequency (DTMF) signals, that are processed at the telephone central office to identify the number pressed by determining the two associated tone frequencies. Seven frequencies are used to code the 10 decimal digits and the two special buttons marked "*" and "#". The low-band frequencies are 697 Hz, 770 Hz, 852 Hz, and 941 Hz. The remaining three frequencies belonging to the high band are 1209 Hz, 1336 Hz, and 1477 Hz. The fourth high-band frequency of 1633 Hz is not presently in use and has been assigned for future applications to permit the use of four additional push-buttons for special services.
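The two-tone coding just described can be sketched directly: each button's signal is the sum of one low-group and one high-group sinusoid, using the frequency assignments listed above. The sampling rate and duration are illustrative assumptions.

```python
import math

# Low-group (row) and high-group (column) frequencies in Hz,
# as listed in the text for TOUCH-TONE dialing:
LOW = {'1': 697, '2': 697, '3': 697,
       '4': 770, '5': 770, '6': 770,
       '7': 852, '8': 852, '9': 852,
       '*': 941, '0': 941, '#': 941}
HIGH = {'1': 1209, '2': 1336, '3': 1477,
        '4': 1209, '5': 1336, '6': 1477,
        '7': 1209, '8': 1336, '9': 1477,
        '*': 1209, '0': 1336, '#': 1477}

def dtmf_tone(key, fs=8000, n_samples=800):
    """Generate the dual-tone signal for one push-button: the sum of
    the key's low-group and high-group sinusoids."""
    f_lo, f_hi = LOW[key], HIGH[key]
    return [math.sin(2 * math.pi * f_lo * n / fs)
            + math.sin(2 * math.pi * f_hi * n / fs)
            for n in range(n_samples)]

tone = dtmf_tone('5')   # 770 Hz + 1336 Hz
```

Detection at the central office then amounts to deciding which one of the four low-group and which one of the three (or four) high-group frequencies is present, as the filter-bank scheme described next makes explicit.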
The frequency assignments used in the TOUCH-TONE® dialing scheme are shown in Figure 1.38 [ITU84].

1.4. Typical Signal Processing Applications 31

Figure 1.37: A scheme for implementing the chorus effect.

Figure 1.38: The tone frequency assignments for TOUCH-TONE® dialing.

The scheme used to identify the two frequencies associated with the button that has been pressed is shown in Figure 1.39. Here, the two tones are first separated by a lowpass and a highpass filter. The passband cutoff frequency of the lowpass filter is slightly above 1000 Hz, whereas that of the highpass filter is slightly below 1200 Hz. The output of each filter is next converted into a square wave by a limiter and then processed by a bank of bandpass filters with narrow passbands. The four bandpass filters in the low-frequency channel have center frequencies at 697 Hz, 770 Hz, 852 Hz, and 941 Hz. The four bandpass filters in the high-frequency channel have center frequencies at 1209 Hz, 1336 Hz, 1477 Hz, and 1633 Hz. The detector following each bandpass filter develops the necessary dc switching signal if its input voltage is above a certain threshold. All the signal processing functions described above are usually implemented in practice in the analog domain. However, increasingly, these functions are being implemented using digital techniques. See Section 11.1.

32 Chapter 1: Signals and Signal Processing

Figure 1.39: The tone detection scheme for TOUCH-TONE® dialing.

1.4.3
FM Stereo Applications

For wireless transmission of a signal occupying a low-frequency range, such as an audio signal, it is necessary to transform the signal to a high-frequency range by modulating it onto a high-frequency carrier. At the receiver, the modulated signal is demodulated to recover the low-frequency signal. The signal processing operations used for wireless transmission are modulation, demodulation, and filtering. Two commonly used modulation schemes for radio are amplitude modulation (AM) and frequency modulation (FM). We next review the basic idea behind the FM stereo broadcasting and reception scheme as used in the United States [Cou83]. An important feature of this scheme is that at the receiving end, the signal can be heard over a standard monaural FM radio with a single speaker or over a stereo FM radio with two speakers. The system is based on the frequency-

Figure 1.44: Various signal paths in a telephone network: (a) transmission path from talker A to listener B, (b) echo path for talker A, and (c) echo path for listener B.

The effect of the echo can be annoying to the talker, depending on the amplitude and delay of the echo, i.e., on the length of the trunk circuit. The effect of the echo is worst for telephone networks involving geostationary satellite circuits, where the echo delay is about 540 ms. Several methods are followed to reduce the effect of the echo. In trunk circuits up to 3000 km in length, adequate reduction of the echo is achieved by introducing additional signal loss in both directions of the four-wire circuit. In this scheme, an improvement in the signal-to-echo ratio is realized since the echo undergoes loss in both directions while the signals are attenuated only once. For distances greater than 3000 km, echoes are controlled by means of an echo suppressor inserted in the trunk circuit, as indicated in Figure 1.45.
The device is essentially a voice-activated switch implementing two functions. It first detects the direction of the conversation and then blocks the opposite path in the four-wire circuit. Even though it introduces distortion when both subscribers are talking by clipping parts of the speech signal, the echo suppressor has provided a reasonably acceptable solution for terrestrial transmission.

For telephone conversation involving satellite circuits, an elegant solution is based on the use of an echo canceler. The circuit generates a replica of the echo using the signal in the receive path and subtracts it from the signal in the transmit path, as indicated in Figure 1.46. Basically, it is an adaptive filter structure whose parameters are adjusted using certain adaptation algorithms until the residual signal is satisfactorily

1.5. Why Digital Signal Processing? 37

Figure 1.45: Echo suppression scheme.

Figure 1.46: Echo cancellation scheme.

Figure 1.47: Scheme for the digital processing of an analog signal.

minimized.* Typically, an echo reduction of about 40 dB is considered satisfactory in practice. To eliminate the problem generated when both subscribers are talking, the adaptation algorithm is disabled when the signal in the transmit path contains both the echo and the signal generated by the speaker closer to the hybrid coil.

1.5 Why Digital Signal Processing?†

In some sense, the origin of digital signal processing techniques can be traced back to the seventeenth century when finite difference methods, numerical integration methods, and numerical interpolation methods were developed to solve physical problems involving continuous variables and functions. The more recent interest in digital signal processing arose in the 1950s with the availability of large digital computers.
Initial applications were primarily concerned with the simulation of analog signal processing methods. Around the beginning of the 1960s, researchers began to consider digital signal processing as a separate field by itself. Since then, there have been significant and myriad developments and breakthroughs in both theory and applications of digital signal processing.

Digital processing of an analog signal consists basically of three steps: conversion of the analog signal into a digital form, processing of the digital version, and finally, conversion of the processed digital signal back into an analog form. Figure 1.47 shows the overall scheme in a block diagram form.

*For a review of adaptive filtering methods, see [Cio93].

†This section has been adapted from Handbook for Digital Signal Processing, Sanjit K. Mitra and James F. Kaiser, Eds., ©1993, John Wiley & Sons. Adapted by permission of John Wiley & Sons.

38 Chapter 1: Signals and Signal Processing

Figure 1.48: Typical waveforms of signals appearing at various stages in Figure 1.47: (a) analog input signal, (b) output of the S/H circuit, (c) A/D converter output, (d) output of the digital processor, (e) D/A converter output, and (f) analog output signal. In (c) and (d), the digital HIGH and LOW levels are shown as positive and negative pulses for clarity.

Since the amplitude of the analog input signal varies with time, a sample-and-hold (S/H) circuit is used first to sample the analog input at periodic intervals and hold the sampled value constant at the input of the analog-to-digital (A/D) converter to permit accurate digital conversion. The input to the A/D converter is a staircase-type analog signal if the S/H circuit holds the sampled value until the next sampling instant. The output of the A/D converter is a binary data stream that is next processed by the digital processor implementing the desired signal processing algorithm.
The output of the digital processor, another binary data stream, is then converted into a staircase-type analog signal by the digital-to-analog (D/A) converter. The lowpass filter at the output of the D/A converter then removes all undesired high-frequency components and delivers at its output the desired processed analog signal. Figure 1.48 illustrates the waveforms of the pertinent signals at various stages in the above process, where for clarity the two levels of the binary signals are shown as a positive and a negative pulse, respectively.

In contrast to the above, a direct analog processing of an analog signal is conceptually much simpler since it involves only a single processor, as illustrated in Figure 1.49. It is therefore natural to ask what the advantages are of digital processing of an analog signal. There are of course many advantages in choosing digital signal processing. The most important ones are discussed next [Bel84], [Pro92].

1.5. Why Digital Signal Processing? 39

Figure 1.49: Analog processing of analog signals.

Unlike analog circuits, the operation of digital circuits does not depend on precise values of the digital signals. As a result, a digital circuit is less sensitive to tolerances of component values and is fairly independent of temperature, aging, and most other external parameters. A digital circuit can be reproduced easily in volume quantities and does not require any adjustments either during construction or later while in use. Moreover, it is amenable to full integration, and with the recent advances in very large scale integrated

If N1 ≥ 0, a right-sided sequence is usually called a causal sequence. Likewise, a left-sided sequence x[n] has zero-valued samples for n > N2, i.e.,

x[n] = 0 for n > N2, (2.8)

where N2 is a finite integer which can be positive or negative. If N2 ≤ 0, a left-sided sequence is usually called an anticausal sequence.
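The causal and anticausal conditions above can be sketched as simple checks on a finite-length sequence described by the index of its first sample; this two-number representation (first index plus length) is just an illustrative convention, not one used by the text.

```python
def is_causal(first_index):
    """A right-sided sequence with x[n] = 0 for n < N1 is causal
    when its starting index N1 is nonnegative."""
    return first_index >= 0

def is_anticausal(first_index, length):
    """A left-sided sequence with x[n] = 0 for n > N2 is anticausal
    when its last index N2 = first_index + length - 1 is nonpositive."""
    return first_index + length - 1 <= 0

# A sequence starting at n = 0 is causal; one whose nonzero samples
# all lie at n <= 0 is anticausal.
```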
A general two-sided sequence is defined for all values of n in the range −∞ ≤ n ≤ ∞. A sequence of finite length can also be treated as a two-sided sequence by appending zero-valued samples to it outside its original range of definition.

2.1. Discrete-Time Signals 47

Figure 2.6: Discrete-time system of Example 2.3.

Figure 2.7: Discrete-time system of Example 2.4.

Combination of Basic Operations

In most applications, combinations of the above basic operations are used. Illustrations of such combinations are given in the following two examples.

EXAMPLE 2.3 Figure 2.6 shows a discrete-time system formed by a combination of some of the basic operations. We analyze the system to determine how the output sequence y[n] is generated from the input sequence x[n]. From the definition of the unit delay block, it follows that the left delay block generates the sequence x[n − 1], the middle delay block develops x[n − 2], and finally, the rightmost delay block generates x[n − 3]. The multipliers labeled α1, α2, α3, and α4 generate the sequences α1x[n], α2x[n − 1], α3x[n − 2], and α4x[n − 3], respectively. The sequence y[n] is then obtained by adding these four product sequences:

y[n] = α1x[n] + α2x[n − 1] + α3x[n − 2] + α4x[n − 3].

EXAMPLE 2.4 We next analyze the discrete-time system of Figure 2.7. Observe that the two leftmost delay blocks generate the sequences x[n − 1] and x[n − 2], whereas the two rightmost delay blocks develop the sequences y[n − 1] and y[n − 2]. These delayed sequences, along with x[n], are applied to the multipliers labeled b1, b2, b3, a1, and a2, and the resulting products are added to generate the output:

y[n] = b1x[n] + b2x[n − 1] + b3x[n − 2] + a1y[n − 1] + a2y[n − 2].

Sampling Rate Alteration

Another quite useful operation is the sampling rate alteration that is employed to generate a new sequence with a sampling rate higher or lower than that of a given sequence.
Thus, if x[n] is a sequence with a sampling rate of FT Hz and it is used to generate another sequence y[n] with a desired sampling rate of FT' Hz, then the sampling rate alteration ratio is given by

48 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Figure 2.8: Representation of basic sampling rate alteration devices: (a) up-sampler, and (b) down-sampler.

Figure 2.9: Illustration of the up-sampling process.

FT'/FT = R. (2.16)

If R > 1, the process is called interpolation and results in a sequence with a higher sampling rate. On the other hand, if R < 1, the sampling rate is decreased by a process called decimation.

The basic operations employed in the sampling rate alteration process are called up-sampling and down-sampling. These operations play important roles in multirate discrete-time systems and are considered in Chapter 10.

In up-sampling by an integer factor L > 1, L - 1 equidistant zero-valued samples are inserted by the up-sampler between each two consecutive samples of the input sequence x[n] to develop an output sequence xu[n] according to the relation

xu[n] = { x[n/L],  n = 0, +-L, +-2L, ...,
        { 0,       otherwise.                (2.17)

Note that the sampling rate of xu[n] is L times larger than that of the original sequence x[n]. The block-diagram representation of the up-sampler, also called a sampling rate expander, is shown in Figure 2.8(a). Figure 2.9 illustrates the up-sampling operation for an up-sampling factor of L = 3.

Conversely, the down-sampling operation by an integer factor M > 1 on a sequence x[n] consists of keeping every Mth sample of x[n] and removing M - 1 in-between samples, generating an output sequence y[n] according to the relation

y[n] = x[nM]. (2.18)

This results in a sequence y[n] whose sampling rate is (1/M)th that of x[n]. Basically, all input samples with indices equal to an integer multiple of M are retained at the output and all others are discarded.
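The two relations (2.17) and (2.18) are easy to sketch in code. The Python functions below are illustrative helpers of our own naming, not part of the book's programs:

```python
def upsample(x, L):
    """Insert L-1 zero-valued samples between consecutive samples (Eq. (2.17))."""
    y = [0] * (len(x) * L)
    for n, v in enumerate(x):
        y[n * L] = v          # xu[n] = x[n/L] when n is a multiple of L
    return y

def downsample(x, M):
    """Keep every Mth sample of x[n] (Eq. (2.18)): y[n] = x[nM]."""
    return x[::M]

print(upsample([1, 2, 3], 3))        # [1, 0, 0, 2, 0, 0, 3, 0, 0]
print(downsample([0, 1, 2, 3, 4, 5], 2))  # [0, 2, 4]
```

Note that down-sampling after up-sampling by the same factor recovers the original sequence, but up-sampling after down-sampling does not, since the discarded in-between samples are lost.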
The schematic representation of the down-sampler or sampling rate compressor is shown in Figure 2.8(b). Figure 2.10 illustrates the down-sampling operation for a down-sampling factor of M = 3.

2.1. Discrete-Time Signals 49

Figure 2.10: Illustration of the down-sampling process.

Figure 2.11: (a) An even sequence, and (b) an odd sequence.

2.1.3 Classification of Sequences

A discrete-time signal can be classified in various ways. One classification discussed earlier is in terms of the number of samples defining the sequence. Another classification is with respect to the symmetry exhibited by the samples with respect to the time index n = 0. A discrete-time signal can also be classified in terms of its other properties, such as periodicity, summability, energy, and power.

Classification Based on Symmetry

A sequence x[n] is called a conjugate-symmetric sequence if x[n] = x*[-n]. A real conjugate-symmetric sequence is called an even sequence. A sequence x[n] is called a conjugate-antisymmetric sequence if x[n] = -x*[-n]. A real conjugate-antisymmetric sequence is called an odd sequence. For a conjugate-antisymmetric sequence x[n], the sample value at n = 0 must be purely imaginary. Consequently, for an odd sequence, x[0] = 0. Examples of even and odd sequences are shown in Figure 2.11.

Any complex sequence x[n] can be expressed as a sum of its conjugate-symmetric part xcs[n] and its conjugate-antisymmetric part xca[n]:

x[n] = xcs[n] + xca[n], (2.19)

50 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

where

xcs[n] = (1/2) (x[n] + x*[-n]), (2.20a)
xca[n] = (1/2) (x[n] - x*[-n]). (2.20b)

As indicated by Eqs. (2.20a) and (2.20b), the computation of the conjugate-symmetric and conjugate-antisymmetric parts of a sequence involves conjugation, time-reversal, addition, and multiplication operations. Because of the time-reversal operation,
the decomposition of a finite-length sequence into a sum of a conjugate-symmetric sequence and a conjugate-antisymmetric sequence is possible only if the parent sequence is of odd length defined for a symmetric interval, -M <= n <= M.

EXAMPLE 2.5 Consider the finite-length complex sequence {g[n]} of length 7 defined for -3 <= n <= 3. To determine its conjugate-symmetric part gcs[n] and its conjugate-antisymmetric part gca[n], we first form the conjugate of its time-reversed version, {g*[-n]}. Using Eq. (2.20a), the samples of gcs[n] are then obtained by adding the corresponding samples of {g[n]} and {g*[-n]} and dividing by 2; likewise, using Eq. (2.20b), the samples of gca[n] are obtained from half the difference of the corresponding samples. It can easily be verified that the resulting parts satisfy gcs[n] = gcs*[-n] and gca[n] = -gca*[-n].

Likewise, any real sequence x[n] can be expressed as a sum of its even part xev[n] and its odd part xod[n]:

x[n] = xev[n] + xod[n], (2.21)

where

xev[n] = (1/2) (x[n] + x[-n]), (2.22a)
xod[n] = (1/2) (x[n] - x[-n]). (2.22b)

For a length-N sequence defined for 0 <= n <= N - 1, the above definitions of symmetry are not applicable. The definitions of symmetry in the case of finite-length sequences are given instead using a modulo operation, with all such symmetric and antisymmetric parts of a length-N sequence being also of length N and defined for the same range of values of the time index n. Thus, a length-N sequence x[n] can be expressed as

x[n] = xpcs[n] + xpca[n], 0 <= n <= N - 1, (2.23)

2.1. Discrete-Time Signals 51

where xpcs[n] and xpca[n] denote, respectively, the periodic conjugate-symmetric part and the periodic conjugate-antisymmetric part, defined by

xpcs[n] = (1/2) (x[n] + x*[<-n>N]), 0 <= n <= N - 1, (2.24a)
xpca[n] = (1/2) (x[n] - x*[<-n>N]), 0 <= n <= N - 1. (2.24b)

The real and imaginary parts of a complex exponential sequence x[n] = A*alpha^n of Eq. (2.42), with alpha = e^(sigma0 + j*omega0), are real sinusoidal sequences with constant (sigma0 = 0), growing (sigma0 > 0), or decaying (sigma0 < 0) amplitudes for n > 0. Figure 2.16 depicts a
Figure 2.15: A family of sinusoidal sequences given by x[n] = 1.5 cos(omega0*n) for several values of the angular frequency omega0, with (a) omega0 = 0 and (b) omega0 = 0.1*pi, among others.

Figure 2.16: Real and imaginary parts of a complex exponential sequence with a decaying amplitude.

Figure 2.17: Examples of real exponential sequences: (a) x[n] = 0.2(1.2)^n, (b) x[n] = 20(0.9)^n.

complex exponential sequence with a decaying amplitude. Note that in the display of a complex exponential sequence, its real and imaginary parts are shown separately.

With both A and alpha real, the sequence of Eq. (2.42) reduces to a real exponential sequence. For n > 0, such a sequence with |alpha| < 1 decays exponentially as n increases, and with |alpha| > 1 grows exponentially as n increases. Examples of real exponential sequences obtained for two values of alpha are shown in Figure 2.17.

We shall show later in Section 3.1 that a large class of sequences can be expressed in terms of complex exponential sequences of the form e^(j*omega0*n). Note that the sinusoidal sequence of Eq. (2.39) and the complex exponential sequence of Eq. (2.43a) with sigma0 = 0 are periodic sequences of period N as long as omega0*N is an integer multiple of 2*pi, i.e., omega0*N = 2*pi*r, where N and r are positive integers. The smallest possible N satisfying this condition is the fundamental period of the sequence. To verify this, consider the two sinusoidal sequences x1[n] = cos(omega0*n + phi) and x2[n] = cos(omega0*(n + N) + phi). Now,

x2[n] = cos(omega0*(n + N) + phi) = cos(omega0*n + phi) cos(omega0*N) - sin(omega0*n + phi) sin(omega0*N),

which will be equal to cos(omega0*n + phi) = x1[n] only if sin(omega0*N) = 0 and cos(omega0*N) = 1. These two conditions are satisfied if and only if omega0*N is an integer multiple of 2*pi, i.e.,

omega0*N = 2*pi*r, (2.44a)

58 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

or

N = 2*pi*r/omega0. (2.44b)

If 2*pi/omega0 is a noninteger rational number, then the period will be a multiple of 2*pi/omega0. If 2*pi/omega0 is not a rational number, then the sequence is aperiodic even though it has a sinusoidal envelope. For example, x[n] = cos(sqrt(3)*n + phi) is an aperiodic sequence.
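The condition omega0*N = 2*pi*r can be turned into a small period-finding routine. The Python sketch below (not part of the original text) represents omega0 as a rational multiple 2*pi*(p/q) of 2*pi, so the fundamental period is simply the reduced denominator:

```python
from fractions import Fraction

def fundamental_period(p, q):
    """Fundamental period of cos(w0*n) with w0 = 2*pi*(p/q), p/q rational:
    the smallest integer N such that w0*N is an integer multiple of 2*pi."""
    # Fraction reduces p/q to lowest terms; its denominator is that smallest N
    return Fraction(p, q).denominator

print(fundamental_period(1, 20))  # w0 = 0.1*pi  -> N = 20
print(fundamental_period(2, 5))   # w0 = 0.8*pi  -> N = 5
```

When p/q is irrational no such N exists, which is exactly the aperiodic case discussed above.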
EXAMPLE 2.9 Let us determine the periods of the sinusoidal sequences of Figure 2.15. In Figure 2.15(a), omega0 = 0, and Eq. (2.44a) is satisfied with r = 0 and any value of N; here the smallest value of N is equal to 1. Likewise, in Figure 2.15(b), omega0 = 0.1*pi, and hence, applying Eq. (2.44a) with r = 1, we arrive at N = 20. Following similar steps, we arrive at the periods of the remaining sequences of these figures; for instance, N = 5 for the sequences with omega0 = 0.8*pi and omega0 = 1.2*pi shown in Figures 2.15(d) and (h).

The number omega0 in the two sequences of Eqs. (2.39) and (2.43a) is called the angular frequency. Since the time index n is dimensionless, the unit of the angular frequency omega0 and the phase phi is simply radians. If the unit of n is designated as samples, then the unit of omega0 and phi is radians per sample. Often in practice, the angular frequency omega is expressed as

omega = 2*pi*f, (2.45)

where f is the frequency in cycles per sample.

Two interesting properties of these sequences are discussed next. Consider two complex exponential sequences x1[n] = e^(j*omega1*n) and x2[n] = e^(j*omega2*n) with 0 <= omega1 < 2*pi and 2*pi*k <= omega2 < 2*pi*(k + 1), where k is any positive or negative integer. If

omega2 = omega1 + 2*pi*k, (2.46)

then x2[n] = e^(j*(omega1 + 2*pi*k)*n) = e^(j*omega1*n) = x1[n]. Thus these two sequences are indistinguishable. Likewise, two sinusoidal sequences x1[n] = cos(omega1*n + phi) and x2[n] = cos(omega2*n + phi) with 0 <= omega1 < 2*pi and 2*pi*k <= omega2 < 2*pi*(k + 1), where k is any positive or negative integer, are indistinguishable from one another if omega2 = omega1 + 2*pi*k.

The second interesting feature of discrete-time sinusoidal signals can be seen from Figure 2.15. The frequency of oscillation of the discrete-time sinusoidal sequence x[n] = A cos(omega0*n) increases as omega0 increases from 0 to pi, and then the frequency of oscillation decreases as omega0 increases from pi to 2*pi. As a result of the first property, a frequency omega0 in the neighborhood of omega = 2*pi*k is indistinguishable from a frequency omega0 - 2*pi*k in the neighborhood of omega = 0, and a frequency omega0 in the neighborhood of omega = pi*(2k + 1) is indistinguishable from a frequency omega0 - 2*pi*k in the neighborhood of omega = pi for any integer value of k.
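The indistinguishability expressed by Eq. (2.46) is easy to confirm numerically. In the Python sketch below (not part of the original text), the frequency 2.1*pi differs from 0.1*pi by exactly 2*pi (k = 1), so the two sequences agree sample for sample:

```python
import math

# omega2 = omega1 + 2*pi with omega1 = 0.1*pi (Eq. (2.46) with k = 1)
for n in range(32):
    x1 = math.cos(0.1 * math.pi * n)
    x2 = math.cos(2.1 * math.pi * n)
    # identical at every integer sample index n
    assert abs(x1 - x2) < 1e-9
```

The two cosines are, of course, different functions of a continuous argument; they coincide only at the integer sample instants.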
Therefore, frequencies in the neighborhood of omega = 2*pi*k are usually called low frequencies, and frequencies in the neighborhood of omega = pi*(2k + 1) are called high frequencies. For example, v1[n] = cos(0.1*pi*n) = cos(1.9*pi*n) shown in Figure 2.15(b) is a low-frequency signal, whereas v2[n] = cos(0.8*pi*n) = cos(1.2*pi*n) shown in Figure 2.15(d) and (h) is a high-frequency signal.

Another application of the modulation operation discussed earlier in Section 2.1.2 is to transform a sequence with low-frequency sinusoidal components into a sequence with high-frequency components by modulating the former with a sinusoidal sequence of very high frequency, as illustrated in the following example.

EXAMPLE 2.10 Let x1[n] = cos(omega1*n) and x2[n] = 2 cos(omega2*n), with omega2 > omega1. The product sequence y[n] = x1[n]*x2[n] is then given by

y[n] = 2 cos(omega1*n) cos(omega2*n).

2.2. Typical Sequences and Sequence Representation 59

Using a trigonometric identity, we arrive at

y[n] = cos((omega1 + omega2)*n) + cos((omega2 - omega1)*n).

The new sequence y[n] is thus composed of two sinusoidal sequences of frequencies omega1 + omega2 and omega2 - omega1. The reader should note that, because of the property given by Eq. (2.46), if omega1 + omega2 > pi, the component cos((omega1 + omega2)*n) appears as a sinusoidal sequence of the lower apparent frequency 2*pi - (omega1 + omega2).(1)

2.2.2 Sequence Generation Using MATLAB

MATLAB includes a number of functions that can be used for signal generation. Some of these functions of interest are

exp, sin, cos, square, sawtooth

For example, Program 2.1 given below can be employed to generate a complex exponential sequence of the form shown in Figure 2.16.

% Program 2_1
% Generation of complex exponential sequence
%
a = input('Type in real exponent = ');
b = input('Type in imaginary exponent = ');
c = a + b*i;
K = input('Type in the gain constant = ');
N = input('Type in length of sequence = ');
n = 1:N;
x = K*exp(c*n);
stem(n,real(x));
xlabel('Time index n');ylabel('Amplitude');
title('Real part');
disp('PRESS RETURN for imaginary part');
pause
stem(n,imag(x));
xlabel('Time index n');ylabel('Amplitude');
title('Imaginary part');

Likewise, Program 2.2 listed below can be employed to generate a real exponential sequence of the form shown in Figure 2.17.

% Program 2_2
% Generation of real exponential sequence
%
a = input('Type in argument = ');
K = input('Type in the gain constant = ');
N = input('Type in length of sequence = ');
n = 0:N;
x = K*a.^n;
stem(n,x);
xlabel('Time index n');ylabel('Amplitude');
title(['\alpha = ',num2str(a)]);

(1) The perception of the high-frequency signal cos((omega1 + omega2)*n) as the low-frequency signal cos((2*pi - omega1 - omega2)*n) is called aliasing (see Section 2.3).

60 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Figure 2.18: An arbitrary sequence x[n].

Another type of sequence generation using MATLAB is given later in Example 2.14.

2.2.3 Representation of an Arbitrary Sequence

An arbitrary sequence can be represented in the time-domain as a weighted sum of some basic sequence and its delayed versions. A commonly used basic sequence in the representation is the unit sample sequence. For example, the sequence x[n] of Figure 2.18 can be expressed as

x[n] = 0.5*delta[n + 2] + 1.5*delta[n - 1] - delta[n - 2] + delta[n - 4] + 0.75*delta[n - 6]. (2.47)

An implication of this type of representation is considered later in Section 2.5.1, where we develop the general expression for calculating the output sequence of certain types of discrete-time systems for an arbitrary input sequence. Since the unit step sequence and the unit sample sequence are simply related through Eq.
(2.38), it is also possible to represent an arbitrary sequence as a weighted combination of delayed unit step sequences (Problem 2.24).

2.3 The Sampling Process

We indicated earlier that often the discrete-time sequence is developed by uniformly sampling a continuous-time signal xa(t), as illustrated in Figure 2.2. The relation between the two signals is given by Eq. (2.2), where the time variable t of the continuous-time signal is related to the time variable n of the discrete-time signal only at discrete-time instants tn given by

tn = nT, n = ..., -2, -1, 0, 1, 2, ..., (2.48)

with FT = 1/T denoting the sampling frequency and OmegaT = 2*pi*FT denoting the sampling angular frequency. For example, if the continuous-time signal is

xa(t) = A cos(2*pi*f0*t + phi) = A cos(Omega0*t + phi), (2.49)

the corresponding discrete-time signal is given by

x[n] = A cos(Omega0*n*T + phi) = A cos((2*pi*Omega0/OmegaT)*n + phi) = A cos(omega0*n + phi), (2.50)

2.3. The Sampling Process 61

Figure 2.19: Ambiguity in the discrete-time representation of continuous-time signals. g1(t) is shown with the solid line, g2(t) is shown with the dashed line, g3(t) is shown with the dashed-dot line, and the sequence obtained by sampling is shown with circles.

where

omega0 = 2*pi*Omega0/OmegaT = Omega0*T (2.51)

is the normalized digital angular frequency of the discrete-time signal x[n]. The unit of the normalized digital angular frequency omega0 is radians per sample, while the unit of the normalized analog angular frequency Omega0 is radians per second, and the unit of the analog frequency f0 is hertz if the unit of the sampling period T is in seconds.

EXAMPLE 2.11 Consider the three sequences generated by uniformly sampling the three cosine functions of frequencies 3 Hz, 7 Hz, and 13 Hz, respectively: g1(t) = cos(6*pi*t), g2(t) = cos(14*pi*t), and g3(t) = cos(26*pi*t), with a sampling rate of 10 Hz, i.e., with T = 0.1 sec. The derived sequences are therefore

g1[n] = cos(0.6*pi*n), g2[n] = cos(1.4*pi*n), g3[n] = cos(2.6*pi*n).

Plots of these sequences (shown with circles) and their parent time functions are given in Figure 2.19.
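The coincidence of the three sampled sequences can be confirmed numerically. The Python sketch below (not part of the original text) samples the three cosines of Example 2.11 at FT = 10 Hz:

```python
import math

T = 0.1  # sampling period for FT = 10 Hz
N = 50
g1 = [math.cos(6 * math.pi * n * T) for n in range(N)]   # 3 Hz cosine
g2 = [math.cos(14 * math.pi * n * T) for n in range(N)]  # 7 Hz cosine
g3 = [math.cos(26 * math.pi * n * T) for n in range(N)]  # 13 Hz cosine

# all three sampled sequences coincide sample by sample
assert all(abs(a - b) < 1e-9 for a, b in zip(g1, g2))
assert all(abs(a - b) < 1e-9 for a, b in zip(g1, g3))
```

The agreement holds because 1.4*pi = 2*pi - 0.6*pi and 2.6*pi = 2*pi + 0.6*pi, so at integer n all three reduce to cos(0.6*pi*n).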
Note from the above that each sequence has exactly the same sample value for any given n. This can also be verified by observing that

g2[n] = cos(1.4*pi*n) = cos((2*pi - 0.6*pi)*n) = cos(0.6*pi*n),
g3[n] = cos(2.6*pi*n) = cos((2*pi + 0.6*pi)*n) = cos(0.6*pi*n).

As a result, all three sequences shown are identical, and it is difficult to associate a unique continuous-time function with each of these sequences. In fact, all sinusoidal waveforms of frequencies given by (10k +- 3) Hz, with k any nonnegative integer, lead to the sequence g1[n] = cos(0.6*pi*n) when sampled at a 10-Hz rate.

In the general case, the family of continuous-time sinusoids

xa,k(t) = A cos((Omega0 + k*OmegaT)*t + phi), k = 0, +-1, +-2, ..., (2.52)

leads to identical sampled signals:

xa,k(nT) = A cos((Omega0 + k*OmegaT)*n*T + phi) = A cos((2*pi*(Omega0 + k*OmegaT)/OmegaT)*n + phi)
         = A cos(omega0*n + phi) = x[n]. (2.53)

62 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

The above phenomenon of a continuous-time sinusoidal signal of higher frequency acquiring the identity of a sinusoidal sequence of lower frequency after sampling is called aliasing. Since there are an infinite number of continuous-time functions that can lead to a given sequence when sampled periodically, additional conditions need to be imposed so that the sequence {x[n]} = {xa(nT)} can uniquely represent the parent continuous-time function xa(t). In that case, xa(t) can be fully recovered from a knowledge of {x[n]}.

EXAMPLE 2.12 Determine the discrete-time signal g[n] obtained by uniformly sampling at a sampling rate of 200 Hz a continuous-time signal ga(t) composed of a weighted sum of five sinusoidal signals of frequencies 30 Hz, 150 Hz, 170 Hz, 250 Hz, and 330 Hz, as given below:

ga(t) = 6 cos(60*pi*t) + 3 sin(300*pi*t) + 2 cos(340*pi*t) + 4 cos(500*pi*t) + 10 sin(660*pi*t).

The sampling period is T = 1/200 = 0.005 sec.
Here, the generated discrete-time signal g[n] is given by

g[n] = 6 cos(0.3*pi*n) + 3 sin(1.5*pi*n) + 2 cos(1.7*pi*n) + 4 cos(2.5*pi*n) + 10 sin(3.3*pi*n)
     = 6 cos(0.3*pi*n) + 3 sin((2*pi - 0.5*pi)*n) + 2 cos((2*pi - 0.3*pi)*n) + 4 cos((2*pi + 0.5*pi)*n) + 10 sin((4*pi - 0.7*pi)*n)
     = 6 cos(0.3*pi*n) - 3 sin(0.5*pi*n) + 2 cos(0.3*pi*n) + 4 cos(0.5*pi*n) - 10 sin(0.7*pi*n)
     = 8 cos(0.3*pi*n) + 5 cos(0.5*pi*n + 0.6435) - 10 sin(0.7*pi*n).

The discrete-time signal g[n] is thus composed of a weighted sum of three discrete-time sinusoidal signals of normalized angular frequencies 0.3*pi, 0.5*pi, and 0.7*pi. An identical discrete-time signal is also generated by uniformly sampling at a 200-Hz sampling rate the following continuous-time signal composed of a weighted sum of three sinusoidal signals of frequencies 30 Hz, 50 Hz, and 70 Hz:

ya(t) = 8 cos(60*pi*t) + 5 cos(100*pi*t + 0.6435) - 10 sin(140*pi*t).

A third example of a continuous-time signal generating the same discrete-time sequence is composed of a weighted sum of five sinusoidal continuous-time signals whose frequencies differ from those above by integer multiples of the 200-Hz sampling rate.

It follows from Eq. (2.51) that if OmegaT > 2*Omega0, then the corresponding normalized digital angular frequency omega0 of the discrete-time signal x[n] obtained by sampling the parent continuous-time sinusoidal signal will be in the range -pi < omega < pi, implying no aliasing. On the other hand, if OmegaT < 2*Omega0, the normalized digital angular frequency will fold into a lower digital frequency omega0 = <2*pi*Omega0/OmegaT> modulo 2*pi in the range -pi < omega < pi because of aliasing. Hence, to prevent aliasing, the sampling frequency OmegaT should be greater than 2 times the frequency Omega0 of the sinusoidal signal being sampled.

Generalizing the above result, we observe that if we have an arbitrary continuous-time signal xa(t) that can be represented as a weighted sum of a number of sinusoidal signals, then xa(t) can be represented uniquely by its sampled version {x[n]} if the sampling frequency OmegaT is chosen to be greater than 2 times the highest frequency contained in xa(t). The condition to be satisfied by the sampling frequency to prevent aliasing is called the sampling theorem,
which is formally derived later in Section 5.2.

2.4. Discrete-Time Systems 63

Figure 2.20: Schematic representation of a discrete-time system.

The discrete-time signal obtained by sampling a continuous-time signal xa(t) may be represented as a sequence {xa(nT)}. However, we shall use the more common notation {x[n]} for simplicity (with T assumed to be normalized to 1 sec). It should be noted that when dealing with the sampled version of a continuous-time function, it is essential to know the numerical value of the sampling period T.

2.4 Discrete-Time Systems

The function of a discrete-time system is to process a given input sequence to generate an output sequence. In most applications, the discrete-time system used is a single-input, single-output system, as shown schematically in Figure 2.20. The output sequence is generated sequentially, beginning with a certain value of the time index n, and thereafter progressively increasing the value of n. If the beginning time index is n0, the output y[n0] is first computed, then y[n0 + 1] is computed, and so on. We restrict our attention in this text to this class of discrete-time systems with certain specific properties, as described later in this section.

In a practical discrete-time system, all signals are digital signals, and operations on such signals also lead to digital signals. Such a discrete-time system is usually called a digital filter. However, if there is no ambiguity, we shall refer to a discrete-time system also as a digital filter whether or not it has been implemented using finite precision arithmetic.

Simple Discrete-Time Systems

The devices implementing the basic operations shown in Figures 2.5 and 2.8 can be considered as elementary discrete-time systems. The modulator and the adder are examples of two-input, single-output discrete-time systems.
The remaining devices are examples of single-input, single-output discrete-time systems. More complex discrete-time systems are obtained by combining two or more of these elementary discrete-time systems, as illustrated in Figures 2.6 and 2.7, respectively. Some additional examples of discrete-time systems are given below.

EXAMPLE 2.13 Perhaps the simplest example of a discrete-time system is the accumulator defined by the input-output relation

y[n] = sum_{l=-inf}^{n} x[l]. (2.54)

The output y[n] at time instant n is the sum of the input sample x[n] at time instant n and the previous output y[n - 1] at time instant n - 1, which is the sum of all previous input sample values from -infinity to n - 1. The system therefore cumulatively adds, i.e., it accumulates, all input sample values from -infinity to n.

64 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Figure 2.21: (a) The original uncorrupted sequence s[n], and (b) the noise sequence d[n].

The accumulator equation can also be written in the form

y[n] = sum_{l=-inf}^{-1} x[l] + sum_{l=0}^{n} x[l] = y[-1] + sum_{l=0}^{n} x[l], n >= 0. (2.55)

The second form of the accumulator is used for a causal input sequence, in which case y[-1] is called the initial condition.

We next illustrate the operation of a discrete-time system by two examples.

EXAMPLE 2.14 Another simple example of a discrete-time system is the M-point moving-average filter defined by

y[n] = (1/M) sum_{k=0}^{M-1} x[n - k]. (2.56)

Such a system is often used in smoothing random variations in data. Consider, for example, a signal s[n] corrupted by a noise d[n] for n >= 0, resulting in measured data given by x[n] = s[n] + d[n]. If the variations in the noise d[n] from sample to sample are faster than those of the uncorrupted signal s[n], the moving-average filter of Eq. (2.56) can give reasonably good results [Sch75]. We illustrate its operation in MATLAB, assuming for simplicity that the original uncorrupted signal is given by s[n] = 2[n(0.9)^n]. The MATLAB program generating the sequences s[n] and d[n] is given below; both sequences are shown in Figure 2.21.

% Program 2_4
% Signal smoothing by a moving-average filter
%
R = 50;
d = rand(R,1) - 0.5;  % generate the noise sequence
m = 0:R-1;
s = 2*m.*(0.9.^m);    % generate the uncorrupted signal
x = s + d';           % generate the noise-corrupted signal

2.4. Discrete-Time Systems 65

Figure 2.22: Pertinent signals of Example 2.14: s[n] is the original uncorrupted sequence, d[n] is the noise sequence, x[n] = s[n] + d[n], and y[n] is the output of the moving-average filter.

66 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Figure 2.23: Illustration of the linear interpolation method.

Note that in Figure 2.22(b), the output y[n] of the 3-point moving-average filter is nearly equal to the desired uncorrupted input s[n], except that it is delayed by one sample. We shall show later in Section 4.2 that a delay of (M - 1)/2 samples is inherent in an M-point moving-average filter.

EXAMPLE 2.15 A third example of a discrete-time system is one employed to increase the sampling rate of a sequence by interpolating between pairs of consecutive input samples. The linear interpolation method is often used to estimate such in-between sample values; it is implemented by first passing the input x[n] through an up-sampler and then estimating the zero-valued samples inserted by the up-sampler from the adjacent nonzero-valued samples of the up-sampled sequence xu[n], as illustrated in Figure 2.23. For factor-of-2 interpolation, the output y[n] is obtained using the relation

y[n] = xu[n] + (1/2)(xu[n - 1] + xu[n + 1]). (2.58)

Likewise, for factor-of-3 interpolation, the output is obtained using the relation

y[n] = xu[n] + (2/3)(xu[n - 1] + xu[n + 1]) + (1/3)(xu[n - 2] + xu[n + 2]). (2.59)

2.4. Discrete-Time Systems 67

2.4.1 Classification of Discrete-Time Systems

There are various types of classification of discrete-time systems, which are described next. These classifications are based on the input-output relation of the system.

Linear System

The most widely used discrete-time system, and the one that we shall be exclusively concerned with in this text, is a linear system, for which the superposition principle always holds. More precisely, for a linear discrete-time system, if y1[n] and y2[n] are the responses to the input sequences x1[n] and x2[n], respectively, then for an input

x[n] = alpha*x1[n] + beta*x2[n],

the response is given by

y[n] = alpha*y1[n] + beta*y2[n].

The superposition property must hold for any arbitrary constants alpha and beta
and for all possible inputs x1[n] and x2[n]. The above property makes it very easy to compute the response to a complicated sequence that can be decomposed as a weighted combination of some simple sequences, such as the unit sample sequences or the complex exponential sequences. In this case, the desired output is given by a similarly weighted combination of the outputs to the simple sequences.

EXAMPLE 2.16 Consider the discrete-time accumulator of Example 2.13. From its input-output relation given by Eq. (2.54), we observe that the outputs y1[n] and y2[n] for inputs x1[n] and x2[n] are given by

y1[n] = sum_{l=-inf}^{n} x1[l],   y2[n] = sum_{l=-inf}^{n} x2[l].

The output y[n] due to an input alpha*x1[n] + beta*x2[n] is given by

y[n] = sum_{l=-inf}^{n} (alpha*x1[l] + beta*x2[l]) = alpha sum_{l=-inf}^{n} x1[l] + beta sum_{l=-inf}^{n} x2[l] = alpha*y1[n] + beta*y2[n].

Hence, the accumulator of Eq. (2.54) is a linear system.

Next, consider the causal accumulator of Eq. (2.55) with a causal input applied at n = 0. Now the outputs y1[n] and y2[n] for inputs x1[n] and x2[n] are given by

y1[n] = y1[-1] + sum_{l=0}^{n} x1[l],   y2[n] = y2[-1] + sum_{l=0}^{n} x2[l].

Hence, the output y[n] for an input alpha*x1[n] + beta*x2[n] is of the form

y[n] = y[-1] + sum_{l=0}^{n} (alpha*x1[l] + beta*x2[l]) = y[-1] + alpha sum_{l=0}^{n} x1[l] + beta sum_{l=0}^{n} x2[l].

68 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain
EXAMPLE Comlter jhe acronis wpe aver by L380) phe = 27 fe} aif = fxm ie 1 “The cup yi and yin for jpn xan xi ate ive by nit = afint yf — Hen th pila = afte) ~ apie — teak + ‘Tha bug y fn die fa iat ey Em} Blige heen ey ln Dm fy BD + Bagh DF i —Becxy it — Wf Hayle — 000 fig fo +1 + flaghee =e [feel— ne — tines ul + [dient — nee ey) Sab Ces indent ~ aya ~ Vex] + 1) = eal bend — UE (Op te other nant, enbel + Aeitel =a [afl] — arte — I peste +t] 468 (F001 — zim — ug ee ai] Aa “Therefoce. the syuirat jp.wanlinecr ‘Shift-nvariant System ‘The shift-invariance property is the second condition imposed on most digital filters in practice. For a shift-invariant discrete-time system, if y, [1] is the response to an input x; {(n], then the response to an input alr] = 2y[n — na] is simply yl] = yur — no], 2.4. Discrete-Time Systems 69 where np is any positive or negative integer. This relation belween the input andl outpstt must hold for any arbiteary input sequence and 1s corresponding output. In the case of sequences and systems with indices n telated (o discrete instants of time, the above restriction is more commonly called the time-invariance property. The time-invariance property ensures that for a specified input, the output of the systent Independent of the time the input is being applied, A linear time-invariant (LTD discrete-time system satisfies both the Jinearity and the time-invariance propertics. Such systems are mathemmatically easy to analyze, and characterize, and as a consequence, easy to design. In addition, highly useful signal processing algorithms have been developed utilizing this class of systems over the last several decades. In this text, we consider almost entirely this type of discrete-time system (Q.ATV ia w time-varying: system TY how this. we abeerwe from ‘ra bs given by EXAMPLE218 The ap-aurpler of ey 2.07) tat ite end) fe| e ame lage =| nivty <=9, ° athurwine, ali = Line) al, in = 8, $0, 2b = 10. chess (hi from Big. (2,17) Hin =nolfL1. 
n = n0, n0 +- L, n0 +- 2L, ..., and is zero otherwise. Hence, x1,u[n] is not equal to xu[n - n0], and the up-sampler is a time-varying system.

Likewise, it can be shown that the down-sampler of Eq. (2.18) is a time-varying system.

Causal System

In addition to the above two properties, we impose, for practicality, the additional restrictions of causality and stability on the class of discrete-time systems we deal with in this text. In a causal discrete-time system, the n0th output sample y[n0] depends only on input samples x[n] for n <= n0 and does not depend on input samples for n > n0. Thus, if y1[n] and y2[n] are the responses of a causal discrete-time system to the inputs u1[n] and u2[n], respectively, then u1[n] = u2[n] for n < N implies also that y1[n] = y2[n] for n < N. Simply speaking, for a causal system, changes in output samples do not precede changes in the input samples.

It should be pointed out here that the definition of causality given above can be applied only to discrete-time systems with the same sampling rate for the input and the output.(3)

It can be easily shown that the discrete-time systems of Eqs. (2.14), (2.15), (2.54), (2.55), and (2.56) are causal systems. However, the discrete-time systems defined by Eqs. (2.58) and (2.59) are noncausal systems. It should be noted that these two noncausal systems can be implemented as causal systems by simply delaying the output by one and two samples, respectively.

(3) If the input and output sampling rates are not the same, the definition of causality has to be modified.

70 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Stable System

There are various definitions of stability. We define a discrete-time system to be stable if and only if, for every bounded input, the output is also bounded. This implies that, if the response to x[n] is the sequence y[n], and if |x[n]| < Bx for all values of n, then |y[n]| < By for all values of n, where Bx and By are finite constants. This type of stability is usually referred to as bounded-input, bounded-output (BIBO) stability.
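As a contrasting illustration (a Python sketch, not from the text), the accumulator of Eq. (2.54) fails the BIBO test: the bounded unit step input produces an output that grows without bound.

```python
def accumulator(x):
    """Causal accumulator (Eq. (2.55) with zero initial condition):
    y[n] is the running sum of x[0..n]."""
    y, total = [], 0
    for v in x:
        total += v
        y.append(total)
    return y

# Bounded input: the unit step, |x[n]| <= 1 for all n ...
y = accumulator([1] * 100)
# ... but the output y[n] = n + 1 is unbounded as n grows
assert y[99] == 100
```

No finite By can bound this output for all n, so the accumulator is not BIBO stable even though every input sample is bounded by 1.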
EXAMPLE 2.19 Consider the M-point moving-average filter of Example 2.14. It follows from Eq. (2.56) that, for a bounded input |x[n]| <= Bx, the magnitude of the nth sample of the output is given by

|y[n]| = |(1/M) sum_{k=0}^{M-1} x[n - k]| <= (1/M) sum_{k=0}^{M-1} |x[n - k]| <= (1/M)(M * Bx) = Bx,

indicating that the system is BIBO stable.

Passive and Lossless Systems

A discrete-time system is said to be passive if, for every finite-energy input sequence x[n], the output sequence y[n] has, at most, the same energy, i.e.,

sum_{n=-inf}^{inf} |y[n]|^2 <= sum_{n=-inf}^{inf} |x[n]|^2 < infinity. (2.62)

If the above inequality is satisfied with an equal sign for every input sequence, the discrete-time system is said to be lossless.

EXAMPLE 2.20 Consider the discrete-time system defined by y[n] = alpha*x[n - N], with N a positive integer. Its output energy is given by

sum_{n=-inf}^{inf} |y[n]|^2 = |alpha|^2 sum_{n=-inf}^{inf} |x[n]|^2.

Hence, it is a passive system if |alpha| <= 1 and is a lossless system if |alpha| = 1.

As we shall see later, in Section 9.9, the passivity and the losslessness properties are crucial to the design of discrete-time systems with very low sensitivity to changes in the filter coefficients.

2.4.2 Impulse and Step Responses

The response of a digital filter to a unit sample sequence {delta[n]} is called the unit sample response or, simply, the impulse response, and is denoted as {h[n]}. Correspondingly, the response of a discrete-time system to a unit step sequence {mu[n]}, denoted as {s[n]}, is its unit step response or, simply, the step response. As we show next, a linear time-invariant digital filter is completely characterized in the time-domain by its impulse response or its step response.

2.5. Time-Domain Characterization of LTI Discrete-Time Systems 71
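Returning briefly to Example 2.20, its energy relation can be checked numerically. The Python sketch below uses illustrative values of our own choosing (not from the text); the delay by N only shifts samples and so does not change the energy, leaving only the |alpha|^2 scaling:

```python
def energy(seq):
    """Total energy of a finite-length sequence: sum of |v|^2."""
    return sum(abs(v) ** 2 for v in seq)

x = [3, -1, 2, 5]
# y[n] = alpha * x[n - N]; ignoring the energy-preserving delay,
# the output energy is |alpha|^2 times the input energy
for alpha in (0.5, 1.0):
    y = [alpha * v for v in x]
    assert energy(y) <= energy(x)              # passive for |alpha| <= 1
assert energy([1.0 * v for v in x]) == energy(x)  # lossless for |alpha| = 1
```

With |alpha| > 1 the inequality reverses and the system is no longer passive.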
(2.54) is obtained by setting x[n] = δ[n], resulting in

h[n] = Σ_{ℓ=−∞}^{n} δ[ℓ] = μ[n],

the unit step sequence.

EXAMPLE 2.23  The impulse response {h[n]} of the factor-of-2 interpolator of Eq. (2.58) is obtained by setting x_u[n] = δ[n] and is given by

h[n] = δ[n] + ½(δ[n − 1] + δ[n + 1]).

The impulse response is seen to be a finite-length sequence of length 3 and can be written alternately as

{h[n]} = {0.5, 1, 0.5},  −1 ≤ n ≤ 1.

2.5 Time-Domain Characterization of LTI Discrete-Time Systems

In most cases, an LTI discrete-time system is designed as an interconnection of simple subsystems. Each subsystem in turn is implemented with the aid of the basic building blocks discussed earlier in Section 2.1.2. In order to be able to analyze such systems in the time-domain, we need to develop the pertinent relationships between the input and the output of an LTI discrete-time system, and the characterization of the interconnected system.

2.5.1 Input–Output Relationship

A consequence of the linear, time-invariance property is that an LTI discrete-time system is completely specified by its impulse response; i.e., knowing the impulse response, we can compute the output of the system for any arbitrary input. We develop this relationship now.

Let h[n] denote the impulse response of the LTI discrete-time system of interest, i.e., the response to an input δ[n]. We first compute the response of this filter to the input x[n] of Eq. (2.47). Since the discrete-time system is time-invariant, its response to δ[n − 1] will be h[n − 1]. Likewise, the responses to δ[n + 2], δ[n − 4], and δ[n − 6] will be, respectively, h[n + 2], h[n − 4], and h[n − 6]. Because of linearity, the response of the LTI
discrete-time system to the input

x[n] = 0.5δ[n + 2] + 1.5δ[n − 1] − δ[n − 2] + δ[n − 4] + 0.75δ[n − 6]

will be simply

y[n] = 0.5h[n + 2] + 1.5h[n − 1] − h[n − 2] + h[n − 4] + 0.75h[n − 6].

It follows from the above result that an arbitrary input sequence x[n] can be expressed as a weighted linear combination of delayed and advanced unit sample sequences in the form

x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k],  (2.63)

where the weight x[k] on the right-hand side denotes specifically the kth sample value of the sequence {x[n]}. The response of the LTI discrete-time system to the sequence x[k]δ[n − k] will be x[k]h[n − k]. As a result, the response y[n] of the discrete-time system to x[n] will be given by

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k],  (2.64a)

which can be alternately written as

y[n] = Σ_{k=−∞}^{∞} x[n − k]h[k]  (2.64b)

by a simple change of variables. The above sum in Eqs. (2.64a) and (2.64b) is called the convolution sum of the sequences x[n] and h[n], and is represented compactly as

y[n] = x[n] ⊛ h[n],  (2.65)

where the notation ⊛ denotes the convolution sum.

The convolution sum operation satisfies several useful properties. First, the operation is commutative, i.e.,

x[n] ⊛ h[n] = h[n] ⊛ x[n].  (2.66)

Second, the convolution operation, for stable and single-sided sequences, is associative, i.e.,

(x1[n] ⊛ x2[n]) ⊛ x3[n] = x1[n] ⊛ (x2[n] ⊛ x3[n]),  (2.67)

and, last, the operation is distributive, i.e.,

x1[n] ⊛ (x2[n] + x3[n]) = x1[n] ⊛ x2[n] + x1[n] ⊛ x3[n].  (2.68)

Proof of these properties is left as an exercise (Problems 2.37 to 2.39).

The convolution sum operation of Eq. (2.64a) can be interpreted as follows. We first time-reverse the sequence h[k], arriving at h[−k]. We then shift h[−k] to the right by n sampling periods if n > 0, or to the left by n sampling periods if n < 0, to form the sequence h[n − k]. Next we form the product sequence v[k] = x[k]h[n − k]. Summing all samples of v[k] then yields the nth sample of y[n] of the convolution sum. The process of generating v[k] is illustrated in Figure 2.24.
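The convolution sum of Eq. (2.64a) can be evaluated directly for finite-length sequences with a double loop. A minimal Python sketch (`convolve` is a hypothetical helper written for illustration, and the sequence values are arbitrary numbers, not the text's example):

```python
def convolve(x, h):
    # Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite-length x, h,
    # both assumed to start at index 0.  Output length is len(x)+len(h)-1.
    L = len(x) + len(h) - 1
    y = [0.0] * L
    for n in range(L):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = [-2.0, 0.0, 1.0, -1.0, 3.0]
h = [1.0, 2.0, 0.0, -1.0]
print(convolve(x, h))                      # [-2.0, -4.0, 1.0, 3.0, 1.0, 5.0, 1.0, -3.0]
print(convolve(x, h) == convolve(h, x))    # commutativity, Eq. (2.66) -> True
```

Note that the sum of the indices of x and h in each product is n, the index of the output sample being formed, which is exactly the bookkeeping rule pointed out in the text.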
This process is implemented for each value of n in the range −∞ < n < ∞.

Figure 2.25: Illustration of the convolution process.

The sequence {y[n]} obtained above is sketched in Figure 2.26.

Figure 2.26: Sequence generated by the convolution.

It should be noted that the sum of the indices of each sample product inside the summation of either Eq. (2.64a) or Eq. (2.64b) is equal to the index of the sample being generated by the convolution sum operation. For example, the sum in the computation of y[3] in the above example involves the products x[0]h[3], x[1]h[2], x[2]h[1], and x[3]h[0]. The sum of the indices in each of these four products is equal to 3, which is the index of the sample y[3].

As can be seen from Example 2.24, the convolution of two finite-length sequences results in a finite-length sequence. In this example, the convolution of a sequence {x[n]} of length 5 with a sequence {h[n]} of length 4 resulted in a sequence {y[n]} of length 8. In general, if the lengths of the two sequences being convolved are M and N, then the resulting sequence after convolution is of length M + N − 1 (Problem 2.40).

In MATLAB, the statement c = conv(a,b) implements the convolution of two finite-length sequences a and b, generating the finite-length sequence c. The process is illustrated in the following example.

EXAMPLE 2.25  We repeat the problem of the previous example using Program 2.2 given below.

% Program 2_2
% Illustration of Convolution
%
a = input('Type in the first sequence = ');
b = input('Type in the second sequence = ');
c = conv(a, b);
M = length(c) - 1;
n = 0:1:M;
disp('Output sequence ='); disp(c)
stem(n, c)
xlabel('Time index n'); ylabel('Amplitude')

During execution,
the input data requested are the two sequences to be convolved, each entered inside a pair of square brackets. The program then computes the convolution and displays the resulting sequence along with its plot, shown in Figure 2.27.

Figure 2.27: Sequence generated by convolution using MATLAB.

Figure 2.28: The cascade connection.

2.5.2 Simple Interconnection Schemes

Two widely used schemes for developing complex LTI discrete-time systems from simple LTI discrete-time system sections are described next.

Cascade Connection

In Figure 2.28, the output of one filter is connected to the input of a second filter, and the two filters are said to be connected in cascade. The overall impulse response h[n] of the cascade of two filters with impulse responses h1[n] and h2[n] is given by

h[n] = h1[n] ⊛ h2[n].  (2.69)

Note that, in general, the ordering of the filters in the cascade has no effect on the overall impulse response because of the commutative property of convolution. It can be shown that the cascade connection of two stable systems is stable. Likewise, the cascade connection of two passive (lossless) systems is passive (lossless).

The cascade connection scheme is employed in the development of an inverse system. If the two LTI systems in the cascade connection of Figure 2.28 are such that

h1[n] ⊛ h2[n] = δ[n],  (2.70)

then the LTI system h2[n] is said to be the inverse of the LTI system h1[n], and vice versa. As a result of the above relation, if the input to the cascaded system is x[n], its output is also x[n]. An application of this concept is in the recovery of a signal from its distorted version appearing at the output of a transmission channel.
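The cascade relations can be checked numerically: the overall impulse response of Eq. (2.69) is the convolution of the section responses, the ordering is immaterial, and an inverse pair as in Eq. (2.70) convolves to a unit impulse. A Python sketch (NumPy; the section coefficients are hypothetical values chosen for illustration):

```python
import numpy as np

# Cascade: overall impulse response is the convolution of the sections (Eq. (2.69)).
h1 = np.array([1.0, 0.5])            # hypothetical first section
h2 = np.array([0.5, -0.25])          # hypothetical second section
h_cascade = np.convolve(h1, h2)
print(bool(np.allclose(h_cascade, np.convolve(h2, h1))))  # ordering immaterial -> True

# Inverse pair (Eq. (2.70)): a truncated accumulator u[n] and the backward
# difference delta[n] - delta[n-1] convolve to a unit impulse, up to an
# edge term caused by truncating u[n] to N samples.
N = 8
acc = np.ones(N)                     # u[n], truncated to N samples
diff = np.array([1.0, -1.0])         # delta[n] - delta[n-1]
print(np.convolve(acc, diff))        # 1, 0, ..., 0, -1
```

The leading sample 1 followed by zeros is the unit impulse predicted by Eq. (2.70); the trailing −1 is purely an artifact of working with a finite-length slice of u[n].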
This is accomplished by designing an inverse system if the impulse response of the channel is known. The following example illustrates the development of an inverse system.

Figure 2.29: The parallel connection.

EXAMPLE 2.26  From Example 2.22, the impulse response of the discrete-time accumulator is the unit step sequence μ[n]. Therefore, from Eq. (2.70), its inverse system must satisfy the condition

μ[n] ⊛ h2[n] = δ[n].  (2.71)

It follows from Eq. (2.71) that h2[n] = 0 for n < 0 and

h2[0] = 1,  h2[0] + h2[1] = 0,  Σ_{ℓ=0}^{n} h2[ℓ] = 0 for n ≥ 1.

As a result, h2[1] = −1 and h2[ℓ] = 0 for ℓ ≥ 2. Thus, the impulse response of the inverse system is given by

h2[n] = δ[n] − δ[n − 1],

which is called a backward difference system.

Parallel Connection

The connection scheme of Figure 2.29 is called the parallel connection, and here the outputs of the two filters are added to form the new output while the same input is fed to both filters. The impulse response of the overall filter is given here by

h[n] = h1[n] + h2[n].  (2.72)

It is a simple exercise to show that the parallel connection of two stable systems is stable. However, the parallel connection of two passive (lossless) systems may or may not be passive (lossless).

EXAMPLE 2.27  Consider the discrete-time system of Figure 2.30, composed of an interconnection of four simple discrete-time systems with impulse responses given by

h1[n] = δ[n] + ½δ[n − 1],
h2[n] = ½δ[n] − ¼δ[n − 1],
h3[n] = 2δ[n],
h4[n] = −2(½)ⁿ μ[n].

Figure 2.30: The discrete-time system of Example 2.27.

The overall impulse response h[n] is given by

h[n] = h1[n] + h2[n] ⊛ h3[n] + h2[n] ⊛ h4[n].

Now,

h2[n] ⊛ h3[n] = (½δ[n] − ¼δ[n − 1]) ⊛ 2δ[n] = δ[n] − ½δ[n − 1],

and

h2[n] ⊛ h4[n] = (½δ[n] − ¼δ[n − 1]) ⊛ (−2(½)ⁿ μ[n])
= −(½)ⁿ μ[n] + (½)ⁿ μ[n − 1] = −(½)ⁿ (μ[n] − μ[n − 1]) = −(½)ⁿ δ[n] = −δ[n].

Therefore,

h[n] = δ[n] + ½δ[n − 1] + δ[n] − ½δ[n − 1] − δ[n] = δ[n].

2.5.3 Stability Condition in Terms of the Impulse Response

Recall from Section 2.4.1 that a discrete-time system is defined to be stable, or, precisely, bounded-
input, bounded-output (BIBO) stable, if the output sequence y[n] of the system remains bounded for all bounded input sequences x[n]. We now develop the stability condition for an LTI discrete-time system. We show that an LTI digital filter is BIBO stable if and only if its impulse response sequence {h[n]} is absolutely summable, i.e.,

S = Σ_{k=−∞}^{∞} |h[k]| < ∞.  (2.73)

We prove the above statement for a real impulse response h[n]. The extension of the proof to a complex impulse response sequence is left as an exercise (Problem 2.59). Now, if the input sequence x[n] is bounded, i.e., |x[n]| ≤ Bx < ∞, then the output amplitude, from Eq. (2.64b), is

|y[n]| = |Σ_{k=−∞}^{∞} h[k]x[n − k]| ≤ Σ_{k=−∞}^{∞} |h[k]||x[n − k]| ≤ Bx Σ_{k=−∞}^{∞} |h[k]| = Bx S < ∞.  (2.74)

Thus, S < ∞ implies |y[n]| ≤ By < ∞, indicating that y[n] is also bounded. To prove the converse, assume that y[n] is bounded, i.e., |y[n]| ≤ By, and consider the input given by

x[n] = sgn(h[−n]),  (2.75)

where sgn(c) = +1 if c > 0 and sgn(c) = −1 if c < 0. Note that since |x[n]| ≤ 1, {x[n]} is obviously bounded. For this input, y[n] at n = 0 is

y[0] = Σ_{k=−∞}^{∞} sgn(h[k])h[k] = S ≤ By < ∞.  (2.76)

Therefore, |y[n]| ≤ By implies S < ∞.

EXAMPLE 2.28  Consider a causal LTI discrete-time system with an impulse response given by

h[n] = αⁿ μ[n].

For this system,

S = Σ_{n=−∞}^{∞} |αⁿ μ[n]| = Σ_{n=0}^{∞} |α|ⁿ = 1/(1 − |α|)  for |α| < 1.

Therefore, S < ∞ if |α| < 1, for which the above system is BIBO stable. On the other hand, if |α| ≥ 1, the above system is not BIBO stable.

For an FIR filter, the impulse response has nonzero samples only for finite values of n1 ≤ n ≤ n2. Hence, the impulse response sequence is absolutely summable, independent of the sample values, as long as they are all finite, and the system of Eq. (2.77) is BIBO stable.

2.5.4 Causality Condition in Terms of the Impulse Response

We now develop the condition for an LTI discrete-time system to be causal. Let x1[n] and x2[n] be two input sequences with

x1[n] = x2[n]  for n ≤ n0.  (2.78)

From Eq. (2.64b), the corresponding output samples at n = n0 of an LTI discrete-time system with an impulse response {h[n]} are then given by

y1[n0] = Σ_{k=−∞}^{∞} h[k]x1[n0 − k] = Σ_{k=0}^{∞} h[k]x1[n0 − k] + Σ_{k=−∞}^{−1} h[k]x1[n0 − k],  (2.79a)

y2[n0] = Σ_{k=−∞}^{∞} h[k]x2[n0 − k] = Σ_{k=0}^{∞} h[k]x2[n0 − k] + Σ_{k=−∞}^{−1} h[k]x2[n0 − k].  (2.79b)

If the LTI discrete-time system is also causal, then y1[n0] must be equal to y2[n0].
Now, because of Eq. (2.78), the first sum on the right-hand side of Eq. (2.79a) is equal to the first sum on the right-hand side of Eq. (2.79b). This implies that the second sums on the right-hand side of the above two equations must be equal. As x1[n] may not be equal to x2[n] for n > n0, the only way these two sums will be equal is if they are each equal to zero, which is satisfied if

h[k] = 0  for k < 0.  (2.80)

As a result, an LTI discrete-time system is causal if and only if its impulse response sequence {h[n]} is a causal sequence satisfying the condition of Eq. (2.80).

It follows from Example 2.21 that the discrete-time system of Eq. (2.14) is a causal system since its impulse response satisfies the causality condition of Eq. (2.80). Likewise, from Example 2.22 we observe that the discrete-time accumulator of Eq. (2.54) is also a causal system. On the other hand, from Example 2.23 it can be seen that the factor-of-2 linear interpolator defined by Eq. (2.58) is a noncausal system because its impulse response does not satisfy the causality condition of Eq. (2.80). However, a noncausal discrete-time system with a finite-length impulse response can often be realized as a causal system by inserting a delay of an appropriate amount. For example, a causal version of the discrete-time factor-of-2 linear interpolator is obtained by delaying the output by one sample period, with an input–output relation given by

y[n] = x_u[n − 1] + ½(x_u[n] + x_u[n − 2]).

2.6 Finite-Dimensional LTI Discrete-Time Systems

An important subclass of LTI discrete-time systems is characterized by a linear constant coefficient difference equation of the form

Σ_{k=0}^{N} d_k y[n − k] = Σ_{k=0}^{M} p_k x[n − k],  (2.81)

where x[n] and y[n] are, respectively, the input and the output of the system, and {d_k} and {p_k} are constants. The order of the discrete-time system is given by max(N, M), which is the order of the difference equation characterizing the system. It is possible to implement an LTI system characterized by Eq. (2.81) since the computation here involves two finite sums of products, even though such a system, in general, has an impulse response of infinite length.
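A difference equation of this form can be evaluated sample by sample once y[n] is expressed in terms of past outputs and inputs. A minimal Python sketch (the function name `lccde` and the first-order coefficients are illustrative; zero initial conditions are assumed):

```python
def lccde(p, d, x):
    # Recursively evaluate sum_k d[k] y[n-k] = sum_k p[k] x[n-k]
    # for a causal system with zero initial conditions (requires d[0] != 0).
    y = []
    for n in range(len(x)):
        acc = sum(p[k] * x[n - k] for k in range(len(p)) if n - k >= 0)
        acc -= sum(d[k] * y[n - k] for k in range(1, len(d)) if n - k >= 0)
        y.append(acc / d[0])
    return y

# First-order example: y[n] - 0.5 y[n-1] = x[n]; impulse input gives h[n] = (0.5)^n.
h = lccde(p=[1.0], d=[1.0, -0.5], x=[1.0] + [0.0] * 5)
print(h)   # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The recursion uses only finitely many multiply–adds per output sample, even though the impulse response it generates is infinitely long, which is the point made in the paragraph above.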
It is possible to implement an LTI system characterized by Eq. (2.81) since the computation here involves two finite sum of products even though such a system, in general, has an impulse response of infinite length. ‘Tae ouput »{n] can then be computed recursively from Eq. (2.81). If we assume the system (0 be causal, then we can rewrite Ey. (2.81) to express yf) explicitly as a function of xf]: ” Spun a), 28D = 8 4 via} Lae Yate a (2.82) aa ep provided dp #0. The output y[x] can be computed for ail n > #1, knowing x[n] and the initial conditions Whee = Th ym 2h. vite N]. 2.6.1 Totai Solution Calculation ‘The procedure for compating the solution of the constant coefficient difference equation of Fay. (2.81) is very similar to that employed in solving the constant coefficient «ifferential equation in the case of an LT continuous-time system. Inthe case of the discrete-time system of Eg, (2.81) the output response yf) also consists of two components which are computed independently and then added to yield the total solution: slid = sel + yp lh (2.83) In the above equation the component y,[1t] is the solution of Ey. (2.81) with the input x(n} = the solution of the homogeneous difference equation: Le. itis 284) Sayin a= & 2.6. Finite-Dimensional LT! Discrete-Time Systems 81 and the component yp[7] is a solution of Eq. (2.81) with x[} ¢ 0. ycln] is called the complementary solution, while yp{n] is called the particular solution resulting from the specified input x{7], often called the forcing function. The sum of the complementary and the particular solutions as given by Eq. (2.83) is called the ‘otal solution. We first describe the method of computing the complementary solution yc{rr). To this end we assume that itis of the forms elm] = A". (2.85) Substituting the above in Eq. (2.84) we arrive at a * Vaot = art 8 a = RON aN + da + dy |h+dw) =0. (2.86) The polynomial Sf! gd ~* is called the characteristic polynomial of the discrete-time system of Eq, (2.81). Let A122. 
..., λN denote its N roots. If these roots are all distinct, then the general form of the complementary solution is given by

y_c[n] = α1 λ1ⁿ + α2 λ2ⁿ + ⋯ + αN λNⁿ,  (2.87)

where α1, α2, ..., αN are constants determined from the specified initial conditions of the discrete-time system. The complementary solution takes a different form in the case of multiple roots. For example, if λ1 is of multiplicity L and the remaining N − L roots, λ2, λ3, ..., λ_{N−L}, are distinct, then Eq. (2.87) takes the form

y_c[n] = α1 λ1ⁿ + α2 n λ1ⁿ + ⋯ + α_L n^{L−1} λ1ⁿ + α_{L+1} λ2ⁿ + ⋯ + α_N λ_{N−L}ⁿ.  (2.88)

Next, we consider the determination of the particular solution y_p[n] of the difference equation of Eq. (2.81). Here the procedure is to assume that the particular solution is of the same form as the specified input x[n] if x[n] has the form λ0ⁿ (λ0 ≠ λi, i = 1, 2, ..., N) for all n. Thus, if x[n] is a constant, then y_p[n] is also assumed to be constant. Likewise, if x[n] is a sinusoidal sequence, then y_p[n] is also assumed to be a sinusoidal sequence, and so on. We illustrate below the determination of the total solution by means of an example.

EXAMPLE 2.30  Let us determine the total solution for n ≥ 0 of a discrete-time system characterized by the following difference equation:

y[n] + y[n − 1] − 6y[n − 2] = x[n],  (2.89)

for a step input x[n] = 8μ[n] and with initial conditions y[−1] = 1 and y[−2] = −1.

We first determine the form of the complementary solution. Setting x[n] = 0 and y[n] = λⁿ in Eq. (2.89), we arrive at

λ^{n−2}(λ² + λ − 6) = 0.

Since the roots of the characteristic polynomial λ² + λ − 6 are λ1 = −3 and λ2 = 2, the complementary solution is of the form

y_c[n] = α1(−3)ⁿ + α2(2)ⁿ.

For the particular solution we assume y_p[n] = β, a constant. Substituting this in Eq. (2.89), we get

β + β − 6β = 8,

which for n ≥ 0 yields β = −2.

The total solution is therefore of the form

y[n] = α1(−3)ⁿ + α2(2)ⁿ − 2,  n ≥ 0,  (2.90)

where the constants α1 and α2 are determined from the specified initial conditions. From Eqs. (2.89) and (2.90) we arrive at

−(1/3)α1 + (1/2)α2 − 2 = y[−1] = 1,
(1/9)α1 + (1/4)α2 − 2 = y[−2] = −1.
Ht ‘Solvini thene twotejidivine Weattine at si) = la ey = a “Fuh ual ures by phir bs vig = —b3)" +48" zo [f the input excitation is of the same form as one of the terms in the complementary solution, then it is necessary to modify the form of the particular solution as illustrated in the following example. EENANIPLE 291 Wocesermine snc dol stu fey = Olaf ta einer euiio ig (288) fr i pli Him) = (2) fel with te ave sha sande tthe previo case indistinct compar sei ok ti 97, SAL te aie fore ga the rpesatied unpet, Hence ve! noed 10 select foram Sor the articutar soho which ibis and dex sot coma aaty vies iia vs thowe cmt oh te ceanplommennary Soto, We wana yall = Haat ‘Satin. ve ee oi, (2.993 0 pri” = min — LB" 2 dln 0 ae Fare (Pwd obtiin trom theaterve eset f= '04, The deal soliton imme Of fe Forme sist at eo Fone) 0) asi ‘Tu gdeseramine tine ratte oer) ard rp. me embe nef the pects aa convo: Hom i Maa #0) ‘weamibe at in? ene? +o -Day Hal ewyf3e pant! 0k 1s et Sehr wy yey = =O. or = 01945 Thine th Ul neti Bh vith= Snir dates Oana, me 2.6. Finite-Dimensional Ti Discrete-Time Systems 83 2.6.2 Zero-Input Response and Zero-State Response An alternate approach (o determining the total solution yf) of the difference equation of Eq. (2.81) is by ‘computing its zero-input response yn] and zero-state response y,s(tt]. The component y,;[7] is obtained by solving Eq. (2.81) by setting the input x{] = 0, and the component y-.17] is obtained by solving Bq, (2.81) by applying the specified input with all initial conditions set to zero. ‘The total solution is then given by yin} + vest]. ‘This approach is illustrated by the following example. ine dincrote-tune ryan of Exel EXAMPLE 232 We deturwione the ftal sation 2.30 by parmpatiog: ‘he evo-inpu renpone anit the xeru-etoie teapanne ‘The mer np response r.s[m) Ol Eq. 
(2.89) is given by the complementary solution of Eq. (2.87), where the constants α1 and α2 are chosen to satisfy the specified initial conditions. Now, from Eq. (2.89) with x[n] = 0 we get

y[0] = −y[−1] + 6y[−2] = −1 − 6 = −7,
y[1] = −y[0] + 6y[−1] = 7 + 6 = 13.

Next, from Eq. (2.87) we get

y_zi[0] = α1 + α2 = −7,  y_zi[1] = −3α1 + 2α2 = 13.

Solving these two sets of equations, we arrive at α1 = −5.4 and α2 = −1.6. Thus,

y_zi[n] = −5.4(−3)ⁿ − 1.6(2)ⁿ,  n ≥ 0.

The zero-state response is determined from Eq. (2.90) by evaluating the constants α1 and α2 to satisfy the zero initial conditions y[−1] = y[−2] = 0. From Eq. (2.90) we get

−(1/3)α1 + (1/2)α2 − 2 = 0,  (1/9)α1 + (1/4)α2 − 2 = 0.

Solving this last set of equations, we arrive at α1 = 3.6 and α2 = 6.4. The zero-state response for n ≥ 0 is therefore given by

y_zs[n] = 3.6(−3)ⁿ + 6.4(2)ⁿ − 2,  n ≥ 0.

Hence, the total solution y[n] is given by the sum y_zi[n] + y_zs[n]:

y[n] = −1.8(−3)ⁿ + 4.8(2)ⁿ − 2,  n ≥ 0,

which is identical to that derived in Example 2.30, as expected.

2.6.3 Impulse Response Calculation

The impulse response h[n] of a causal LTI discrete-time system is the output observed with input x[n] = δ[n]. Thus, it is simply the zero-state response with x[n] = δ[n]. Now, for such an input, x[n] = 0 for n > 0, and thus the particular solution is zero, y_p[n] = 0. Hence, the impulse response can be computed from the complementary solution of Eq. (2.87) in the case of simple roots of the characteristic equation by determining the constants αi to satisfy the zero initial conditions. A similar procedure can be followed in the case of multiple roots of the characteristic equation. A system with all zero initial conditions is often called a relaxed system.

EXAMPLE 2.33  In this example, we determine the impulse response h[n] of the causal discrete-time system of Example 2.30. From Eq. (2.89) with x[n] = δ[n] and zero initial conditions, we get

h[0] = 1,  h[1] = −h[0] = −1.

Next, from Eq. (2.87) with n = 0 and n = 1, we get

α1 + α2 = 1,  −3α1 + 2α2 = −1.

Solution of the above two equations yields α1 = 0.6 and α2 = 0.4. Thus, the impulse response is given by

h[n] = 0.6(−3)ⁿ + 0.4(2)ⁿ,  n ≥ 0.

It follows from the form of the complementary solution given by Eq.
(2.88) that the impulse response of a finite-dimensional LTI system characterized by a difference equation of the form of Eq. (2.81) is of infinite length. However, as illustrated by the following example, there exist infinite impulse response LTI discrete-time systems that cannot be characterized by the difference equation form of Eq. (2.81).

EXAMPLE 2.34  The system defined by the impulse response

h[n] = 1/(n + 1),  n ≥ 0,

does not have a representation in the form of a linear constant coefficient difference equation. It should be noted that the above system is causal, but it is not BIBO stable, since its impulse response is not absolutely summable.

Since the impulse response h[n] of a causal discrete-time system is a causal sequence, Eq. (2.82) can also be used to calculate the impulse response recursively for n ≥ 0 by setting the initial conditions to zero values, i.e., by setting y[−1] = y[−2] = ⋯ = y[−N] = 0, and using a unit sample sequence δ[n] as the input x[n]. The step response of a causal LTI system can similarly be computed recursively by setting zero initial conditions and applying a unit step sequence as the input. It should be noted that the causal discrete-time system of Eq. (2.82) is linear only for zero initial conditions (Problem 2.45).

2.6.4 Output Computation Using MATLAB

The causal LTI system of the form of Eq. (2.82) can be simulated in MATLAB using the function filter, already made use of in Program 2.4. In one of its forms, the function

y = filter(p, d, x)

processes the input data vector x using the system characterized by the coefficient vectors p and d to generate the output vector y, assuming zero initial conditions. The length of y is the same as the length of x. Since the function implements Eq. (2.82), the coefficient d_0 (the first element of d) must be nonzero. The following example illustrates the computation of the impulse and step responses of an LTI system described by Eq. (2.82).

Figure 2.31: (a) Impulse response and (b) step response of the system of Eq. (2.93).
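For readers working outside MATLAB, SciPy's `lfilter` plays an analogous role to `filter`, also assuming zero initial conditions by default. A hedged sketch (the first-order system below and its closed-form impulse response (0.5)ⁿ are illustrative choices, not the system of the text's Eq. (2.93)):

```python
import numpy as np
from scipy.signal import lfilter

# Impulse and step responses of y[n] - 0.5 y[n-1] = x[n].
# lfilter(b, a, x) corresponds to MATLAB's filter(p, d, x).
b, a = [1.0], [1.0, -0.5]
N = 8
impulse = np.zeros(N); impulse[0] = 1.0
step = np.ones(N)

h = lfilter(b, a, impulse)                    # impulse response: (0.5)^n
s = lfilter(b, a, step)                       # step response
print(bool(np.allclose(h, 0.5 ** np.arange(N))))  # True
print(bool(np.allclose(s, np.cumsum(h))))         # step response = running sum of h
```

The second check uses the fact that the step response is the convolution of h[n] with the unit step, i.e., the running sum of the impulse response samples.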
EXAMPLE 2.35  Program 2.6 given below can be employed to compute the impulse response of a causal finite-dimensional LTI discrete-time system of the form of Eq. (2.82). The program calls for the following input data: the desired length of the impulse response and the filter coefficient vectors p and d.

[...]

r_xx[ℓ] = Σ_{n=−∞}^{∞} x[n]x[n − ℓ],  (2.105)

obtained by setting y[n] = x[n] in Eq. (2.103). Note from Eq. (2.105) that r_xx[0] = Σ_{n=−∞}^{∞} x²[n] = Ex, the energy of the signal x[n]. From Eq. (2.104) it follows that r_xx[ℓ] = r_xx[−ℓ], implying that r_xx[ℓ] is an even function for real x[n].

An examination of Eq. (2.103) reveals that the expression for the cross-correlation looks quite similar to that of the convolution given by Eq. (2.64a). This similarity is much clearer if we rewrite Eq. (2.103) as

r_yx[ℓ] = Σ_{n=−∞}^{∞} y[n]x[−(ℓ − n)] = y[ℓ] ⊛ x[−ℓ].  (2.106)

The above result implies that the cross-correlation of the sequence y[n] with the reference sequence x[n] can be computed by processing y[n] with an LTI discrete-time system of impulse response x[−n]. Likewise, the autocorrelation of x[n] can be determined by passing it through an LTI discrete-time system of impulse response x[−n].

2.7.2 Properties of Autocorrelation and Cross-correlation Sequences

We next derive some basic properties of the autocorrelation and cross-correlation sequences [Pro92]. Consider two finite-energy sequences x[n] and y[n]. Now, the energy of the combined sequence a·x[n] + y[n − ℓ] is also finite and nonnegative. That is,

Σ_{n=−∞}^{∞} (a·x[n] + y[n − ℓ])² = a² Σ_{n=−∞}^{∞} x²[n] + 2a Σ_{n=−∞}^{∞} x[n]y[n − ℓ] + Σ_{n=−∞}^{∞} y²[n − ℓ]
= a² r_xx[0] + 2a r_xy[ℓ] + r_yy[0] ≥ 0,  (2.107)

where r_xx[0] = Ex > 0 and r_yy[0] = Ey > 0 are the energies of the sequences x[n] and y[n], respectively. We can rewrite Eq. (2.107) as

[a  1] [ r_xx[0]  r_xy[ℓ] ; r_xy[ℓ]  r_yy[0] ] [a ; 1] ≥ 0

for any finite value of a. In other words, the matrix

[ r_xx[0]  r_xy[ℓ] ; r_xy[ℓ]  r_yy[0] ]

is positive semidefinite. This implies

r_xx[0] r_yy[0] − (r_xy[ℓ])² ≥ 0,

or, equivalently,

|r_xy[ℓ]| ≤ √(r_xx[0] r_yy[0]) = √(Ex Ey).  (2.108)

The above inequality provides an upper bound for the cross-correlation sequence samples.
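The convolution form of the correlation computation, Eq. (2.106), can be checked numerically against the defining sum of Eq. (2.103). A minimal Python sketch (NumPy; the sequence values and the helper `xcorr_direct` are arbitrary illustrative choices, not the text's examples):

```python
import numpy as np

def xcorr_direct(x, y, ell):
    # r_xy[l] = sum_n x[n] y[n-l] for finite-length sequences (Eq. (2.103)).
    return sum(x[n] * y[n - ell] for n in range(len(x)) if 0 <= n - ell < len(y))

x = [1.0, -2.0, 3.0, 1.0]   # reference sequence (illustrative values)
y = [2.0, 1.0, -1.0]

# Convolution with the time-reversed sequence: r_xy[l] = x[l] convolved with y[-l].
r = np.convolve(x, y[::-1])
lags = range(-(len(y) - 1), len(x))       # lag axis of the convolution result
for k, ell in enumerate(lags):
    assert np.isclose(r[k], xcorr_direct(x, y, ell))
print(dict(zip(lags, r)))
```

The asserts confirm, lag by lag, that convolving with the time-reversed sequence reproduces the correlation sum, which is also exactly how the MATLAB programs later in this section compute it with `conv` and `fliplr`.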
If we set y[n] = x[n], the above reduces to

|r_xx[ℓ]| ≤ r_xx[0] = Ex.  (2.109)

This is a significant result, as it states that at zero lag (ℓ = 0) the sample value of the autocorrelation sequence has its maximum value.

To derive an additional property of the cross-correlation sequence, consider the case y[n] = ±b·x[n − N], where N is an integer and b > 0 is an arbitrary number. In this case, Ey = b²Ex, and therefore

√(Ex Ey) = b·Ex.

Using the above result in Eq. (2.108), we get

−b·r_xx[0] ≤ r_xy[ℓ] ≤ b·r_xx[0].

2.7.3 Correlation Computation Using MATLAB

The cross-correlation and autocorrelation sequences can easily be computed using MATLAB, as illustrated in the following two examples.

Figure 2.32: (a) Cross-correlation sequence and (b) autocorrelation sequence.

EXAMPLE 2.38  We use Program 2.7 given below to determine and plot the cross-correlation sequence r_xy[ℓ] of two finite-length sequences x[n] and y[n].

% Program 2_7
% Computation of Cross-correlation Sequence
%
x = input('Type in the reference sequence = ');
y = input('Type in the second sequence = ');
% Compute the correlation sequence
n1 = length(y) - 1; n2 = length(x) - 1;
r = conv(x, fliplr(y));
k = (-n1):n2;
stem(k, r)
xlabel('Lag index'); ylabel('Amplitude')

The program first reads in the two sequences, then computes their cross-correlation by convolving x with the time-reversed version of y, and plots it, as shown in Figure 2.32(a).

EXAMPLE 2.39  In this example, we evaluate and plot the autocorrelation of the sequence x[n] of the previous example. To this end, the same sequence is entered twice when the program requests the reference and the second sequence. The plot generated is shown in Figure 2.32(b). As expected, the autocorrelation exhibits its maximum value at zero lag. Next, the second sequence is replaced by a delayed replica of the reference sequence and the program is rerun; the cross-correlation now exhibits a pronounced peak at a lag equal to the delay, demonstrating the fact that the cross-correlation can be employed to estimate the delay between two sequences.
Figure 2.33: (a) Delay estimation from the cross-correlation sequence and (b) autocorrelation sequence of a noise-corrupted aperiodic sequence.

Next, Program 2.7 is modified to generate a sequence formed by adding a random noise, computed using the rand function, to the reference sequence, and the computation of the correlation is repeated. The autocorrelation still exhibits a pronounced peak at zero lag.

It should be noted that the autocorrelation and cross-correlation sequences can also be computed using the MATLAB function xcorr. However, the correlation sequences generated using this function are the time-reversed versions of those generated using Programs 2.7 and 2.8. The cross-correlation r_xy[ℓ] of two sequences x[n] and y[n] can be computed using the statement r = xcorr(x,y), while the autocorrelation r_xx[ℓ] of the sequence x[n] is determined using the statement r = xcorr(x).

2.7.4 Normalized Forms of Correlation

For convenience in comparing and displaying, normalized forms of autocorrelation and cross-correlation given by

ρ_xx[ℓ] = r_xx[ℓ]/r_xx[0],  (2.110)

ρ_xy[ℓ] = r_xy[ℓ]/√(r_xx[0] r_yy[0])  (2.111)

are often used. It follows from Eqs. (2.108) and (2.109) that |ρ_xx[ℓ]| ≤ 1 and |ρ_xy[ℓ]| ≤ 1, independent of the range of values of x[n] and y[n].

2.7.5 Correlation Computation for Power and Periodic Signals

In the case of power and periodic signals, the autocorrelation and cross-correlation sequences are defined slightly differently. For a pair of power signals, x[n] and y[n], the cross-correlation sequence is defined as

r_xy[ℓ] = lim_{K→∞} (1/(2K + 1)) Σ_{n=−K}^{K} x[n]y[n − ℓ],  (2.112)

and the autocorrelation sequence of x[n] is given by

r_xx[ℓ] = lim_{K→∞} (1/(2K + 1)) Σ_{n=−K}^{K} x[n]x[n − ℓ].  (2.113)

Likewise,
if x̃[n] and ỹ[n] are two periodic signals with period N, then their cross-correlation sequence is given by

r_x̃ỹ[ℓ] = (1/N) Σ_{n=0}^{N−1} x̃[n]ỹ[n − ℓ],  (2.114)

and the autocorrelation sequence of x̃[n] is given by

r_x̃x̃[ℓ] = (1/N) Σ_{n=0}^{N−1} x̃[n]x̃[n − ℓ].  (2.115)

It follows from the above definitions that both r_x̃ỹ[ℓ] and r_x̃x̃[ℓ] are also periodic sequences with period N.

The periodicity property of the autocorrelation sequence can be exploited to determine the period N of a periodic signal that may have been corrupted by an additive random disturbance. Let x̃[n] be a periodic signal corrupted by the random noise d[n], resulting in the signal

w[n] = x̃[n] + d[n],

which is observed for a finite length of samples. Since the noise is uncorrelated with the periodic signal and with itself at nonzero lags, the peaks in the autocorrelation of w[n] away from zero lag are essentially due to the peaks of r_x̃x̃[ℓ] and can be used to determine whether x̃[n] is a periodic sequence and to find its period, if the peaks occur at periodic intervals.

2.7.6 Correlation Computation of a Periodic Sequence Using MATLAB

We illustrate next the determination of the period of a noise-corrupted periodic sequence using MATLAB.
ryyln] tows & very von peak oniy ut ate lg "The amplifies ae cider smaller a uttcr-vahuen of the fay an nape Vales of Yh nosse exjuence ace ‘uncometaind with each othe 2.8 Random Signais ‘The underlying assumption on the discrete-time signals we have considered so far is that they can be uniquely determined by well-defined processes such as a mathernatical expression or a rule or a lookup (able. Such a signal is usually called a deterministic signal since all sample values of the sequence are ‘The decaying amplirader ofthe peaks os the 36 dec cof tbe peaks 2s the lag index ¢ increases are due to the Ante Henaths ofthe periodic sequences which ‘eau in reducing the number of nonzero produets to the computation of the convolution sare, 2.8. Random Signals 95 ‘well defined for all values of the time index, For exainpie, the sinusoidal sequence of Eq, (2.39) and the exponential sequence of Eq, (2.42) are deterministic sequences. Signals for which each sample value is generated in a random fashion and cannet be predicted uhead of time comprise another class of signals, Such a signal, called a random signal or 2 stochastic signal, cannot be reproduced at will, not even using the process generating the signal, and therefore needs to be modeled using statistica’ infoanation zbout the signal. Some common examples of random signals ste speccl, music, and seismic signals, The etror signal zonerated by Forming the difference between the ideal sampled version of & continuous-time signal and its quantized version generated by a practical analog-to- digital converter is usuaily modeled as 3 random signal for analysis purposes. The noise sequence «ifr1} ‘of Figure 2.21(b) generated using the va::c function of Ma1zap is also an example of a randora signal The ciserete-time random signal ar process consists of a typically infinite, collection or ensemble of discrete-time sequences {XIn]}. One particular sequence in this collection ix{n}} is called a realization of the random process. 
At a given time index n, the observed sample value x[n] is the value taken by the random variable X[n]. Thus, a random process is a family of random variables {X[n]}. In general, the range of sample values is a continuum. We review in this section the important statistical properties of the random variable and the random process.

2.8.1 Statistical Properties of a Random Variable

The statistical properties of a random variable depend on its probability distribution function or, equivalently, its probability density function, which are defined next. The probability that the random variable X takes a value in the specified range from $-\infty$ to $\alpha$ is given by its probability distribution function

$$P_X(\alpha) = \text{Probability}[X \le \alpha]. \tag{2.117}$$

The probability density function of X is defined by

$$p_X(\alpha) = \frac{dP_X(\alpha)}{d\alpha} \tag{2.118}$$

if X can assume a continuous range of values. From Eq. (2.118) the probability distribution function is therefore given by

$$P_X(\alpha) = \int_{-\infty}^{\alpha} p_X(u)\, du. \tag{2.119}$$

The probability density function satisfies the following two properties:

$$p_X(\alpha) \ge 0, \tag{2.120a}$$

$$\int_{-\infty}^{\infty} p_X(\alpha)\, d\alpha = 1. \tag{2.120b}$$

Likewise, the probability distribution function satisfies the following properties, which follow from Eqs. (2.119), (2.120a), and (2.120b):

$$0 \le P_X(\alpha) \le 1, \tag{2.121a}$$

$$P_X(\alpha_1) \le P_X(\alpha_2), \quad \text{for } \alpha_1 \le \alpha_2, \tag{2.121b}$$

$$P_X(-\infty) = 0, \qquad P_X(+\infty) = 1, \tag{2.121c}$$

$$\text{Probability}[\alpha_1 < X \le \alpha_2] = P_X(\alpha_2) - P_X(\alpha_1). \tag{2.121d}$$

96 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

Figure 2.35: Probability density function of Eq. (2.125).

A random variable is characterized by a number of statistical properties. For example, the rth moments are defined by

$$E(X^r) = \int_{-\infty}^{\infty} \alpha^r p_X(\alpha)\, d\alpha, \tag{2.122}$$

where r is any nonnegative integer and E(·) denotes the expectation operator. A random variable is completely characterized by all its moments. In most cases, all such moments are not known a priori or are difficult to evaluate.
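The properties of Eqs. (2.120a)–(2.121d) can be checked against an empirical distribution function estimated from samples. A minimal Python/numpy sketch (illustrative only; the book's own examples use MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=100_000)  # Gaussian, m_X = 0, sigma_X = 1

def P_X(alpha):
    """Empirical distribution function: fraction of samples with X <= alpha."""
    return np.mean(samples <= alpha)

# 0 <= P_X(alpha) <= 1 and P_X is nondecreasing (Eqs. (2.121a), (2.121b)).
grid = np.linspace(-4.0, 4.0, 81)
values = np.array([P_X(a) for a in grid])
assert np.all((values >= 0) & (values <= 1))
assert np.all(np.diff(values) >= 0)

# Probability[alpha1 < X <= alpha2] = P_X(alpha2) - P_X(alpha1), Eq. (2.121d).
prob = P_X(1.0) - P_X(-1.0)
print(round(prob, 2))  # close to 0.68 for a unit Gaussian
```

The empirical distribution is nondecreasing by construction, and its values at very negative and very positive arguments approach 0 and 1, mirroring Eq. (2.121c).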
Three more commonly used statistical properties characterizing a random variable are the mean or expected value m_X, the mean-square value E(X²), and the variance σ_X², as defined below:

$$m_X = E(X) = \int_{-\infty}^{\infty} \alpha\, p_X(\alpha)\, d\alpha, \tag{2.123a}$$

$$E(X^2) = \int_{-\infty}^{\infty} \alpha^2\, p_X(\alpha)\, d\alpha, \tag{2.123b}$$

$$\sigma_X^2 = E\bigl((X - m_X)^2\bigr) = \int_{-\infty}^{\infty} (\alpha - m_X)^2\, p_X(\alpha)\, d\alpha. \tag{2.123c}$$

These three properties provide adequate information about a random variable in most practical cases. It can be easily shown that

$$\sigma_X^2 = E(X^2) - (m_X)^2. \tag{2.124}$$

The square root of the variance, σ_X, is called the standard deviation of the random variable X. It follows from Eq. (2.124) that the variance and the mean-square value are equal for a random variable with zero mean.

It can be shown that the mean value m_X is the best constant representing a random variable X in a minimum mean-squared error sense; i.e., E([X − α]²) is a minimum for α = m_X, and the minimum mean-square error is given by its variance σ_X² (Problem 2.78). This implies that if the variance is small, then the value assumed by X is likely to be close to m_X, and if the variance is large, the value assumed by X is likely to be far from m_X. We illustrate the concepts introduced so far by means of an example.

EXAMPLE 2.41 Let a random variable X be characterized by the probability density function of Eq. (2.125), sketched in Figure 2.35.

From Eqs. (2.121d) and (2.125) we can compute the probability that X is in a specified range. For example, the probability that X is in the range 0.5 < X ≤ 1.5 is given by Probability{0.5 < X ≤ 1.5} = P_X(1.5) − P_X(0.5), which is the area of the shaded portion in Figure 2.35. From Eq. (2.123a) the mean value is obtained by integrating α p_X(α) over the support of the density of Eq. (2.125), and from Eq. (2.123b) the mean-square value E(X²) is obtained likewise. Substituting these values in Eq. (2.124), we obtain the variance σ_X² = E(X²) − (m_X)².

Two probability density functions commonly encountered in digital signal processing applications are the uniform density function, defined by

$$p_X(\alpha) = \begin{cases} \dfrac{1}{b - a}, & a \le \alpha \le b, \\ 0, & \text{otherwise}, \end{cases} \tag{2.127}$$

and the Gaussian density function, also called the normal density function, defined by

$$p_X(\alpha) = \frac{1}{\sigma_X\sqrt{2\pi}}\, e^{-(\alpha - m_X)^2 / 2\sigma_X^2}, \tag{2.128}$$

where the parameters m_X and σ_X are, respectively, the mean value and the standard deviation of X, and lie in the range −∞ < m_X < ∞ and σ_X > 0. These density functions are plotted in Figure 2.36. Various other density functions are defined in the literature (Problem 2.79).

EXAMPLE 2.42 Determine the mean and variance of a uniformly distributed random variable X defined by Eq. (2.127).

From Eqs. (2.123a) and (2.123c) we arrive at

$$m_X = \frac{a + b}{2}, \tag{2.129}$$

$$\sigma_X^2 = \frac{(b - a)^2}{12}. \tag{2.130}$$

In the case of two random variables X and Y, their joint statistical properties as well as their individual statistical properties are of practical interest. The probability that X takes a value in a specified range from −∞ to α and that Y takes a value in a specified range from −∞ to β is given by their joint probability distribution function

Figure 2.36: (a) Uniform and (b) Gaussian probability density functions.

$$P_{X,Y}(\alpha, \beta) = \int_{-\infty}^{\beta}\int_{-\infty}^{\alpha} p_{X,Y}(u, v)\, du\, dv. \tag{2.133}$$

The joint probability density function satisfies the following two properties:

$$p_{X,Y}(\alpha, \beta) \ge 0, \tag{2.134a}$$

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p_{X,Y}(\alpha, \beta)\, d\alpha\, d\beta = 1. \tag{2.134b}$$

The joint probability distribution function satisfies the following properties, which are a direct consequence of Eqs. (2.133), (2.134a), and (2.134b):

$$0 \le P_{X,Y}(\alpha, \beta) \le 1, \tag{2.135a}$$

$$P_{X,Y}(\alpha_1, \beta_1) \le P_{X,Y}(\alpha_2, \beta_2), \quad \text{for } \alpha_1 \le \alpha_2 \text{ and } \beta_1 \le \beta_2, \tag{2.135b}$$

$$P_{X,Y}(-\infty, -\infty) = 0, \qquad P_{X,Y}(+\infty, +\infty) = 1. \tag{2.135c}$$

The joint statistical properties of two random variables X and Y are described by their cross-correlation and cross-covariance, as defined by

$$\phi_{XY} = E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \alpha\beta\, p_{X,Y}(\alpha, \beta)\, d\alpha\, d\beta, \tag{2.136}$$
Figure 2.37: Range of the random variables with the joint probability density function of Eq. (2.139).

and

$$\gamma_{XY} = E\bigl((X - m_X)(Y - m_Y)\bigr) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (\alpha - m_X)(\beta - m_Y)\, p_{X,Y}(\alpha, \beta)\, d\alpha\, d\beta = \phi_{XY} - m_X m_Y, \tag{2.137}$$

where m_X and m_Y are, respectively, the means of the random variables X and Y. The two random variables X and Y are said to be linearly independent or uncorrelated if

$$E(XY) = E(X)E(Y), \tag{2.138a}$$

and statistically independent if

$$p_{X,Y}(\alpha, \beta) = p_X(\alpha)\, p_Y(\beta). \tag{2.138b}$$

It can be shown that if the random variables X and Y are statistically independent, then they are also linearly independent (Problem 2.80). However, if X and Y are linearly independent, they may not be statistically independent.

The statistical independence property makes it easier to compute the statistical properties of a random variable that is a function of several independent random variables. For example, if X and Y are statistically independent random variables with means m_X and m_Y, respectively, then it can be shown that the mean of the random variable V = aX + bY, where a and b are constants, is given by m_V = a m_X + b m_Y. Likewise, if the variances of X and Y are σ_X² and σ_Y², respectively, the variance of V is given by σ_V² = a²σ_X² + b²σ_Y² (Problem 2.82).

EXAMPLE 2.43 Consider two random variables X and Y described by the uniformly distributed joint probability density function of Eq. (2.139), which takes the constant value A over the shaded region of Figure 2.37 and is zero elsewhere. Determine the value of the constant A and the probability that X and Y lie in a specified portion of this region.

Invoking the property of Eq. (2.134b), the integral of the density of Eq. (2.139) over the shaded region of Figure 2.37 must equal unity, and hence A = 1/2. Next, applying Eq. (2.133), the probability that X and Y lie in a specified portion of the region is obtained by integrating the density over that portion.

Figure 2.38: Sample realizations of the random sinusoidal signal of Eq. (2.140) for ω_o = 0.06π.

2.8.2 Statistical Properties of a Random Discrete-Time Signal

As indicated earlier, the random discrete-time signal is a sequence of random variables and consists of a typically infinite collection or ensemble of discrete-time sequences. Figure 2.38 shows four possible realizations of the random sinusoidal signal

$$\{X[n]\} = \{A\cos(\omega_o n + \Phi)\} \tag{2.140}$$

with ω_o = 0.06π, where the amplitude A and the phase Φ are statistically independent random variables with uniform probability distributions in the range 0 ≤ a ≤ 4 for the amplitude and in the range 0 ≤ φ ≤ 2π for the phase.

The statistical properties of the random signal {X[n]} at time index n are given by the statistical properties of the random variable X[n]. Thus, the mean or expected value of {X[n]} at time index n is given by

$$m_{X[n]} = E(X[n]) = \int_{-\infty}^{\infty} \alpha\, p_{X[n]}(\alpha, n)\, d\alpha. \tag{2.141}$$

The mean-square value of {X[n]} at time index n is given by

$$E\bigl(X^2[n]\bigr) = \int_{-\infty}^{\infty} \alpha^2\, p_{X[n]}(\alpha, n)\, d\alpha. \tag{2.142}$$

The variance σ²_{X[n]} of {X[n]} at time index n is defined by

$$\sigma^2_{X[n]} = E\Bigl(\bigl(X[n] - m_{X[n]}\bigr)^2\Bigr) = E\bigl(X^2[n]\bigr) - \bigl(m_{X[n]}\bigr)^2. \tag{2.143}$$

In general, the mean, mean-square value, and variance of a random discrete-time signal are functions of the time index n, and can be considered as sequences.

So far we have assumed the random variables and the random signals to be real-valued. It is straightforward to generalize the treatment to complex-valued random variables and random signals. For example, the nth sample of a complex-valued random signal {X[n]} is of the form

$$X[n] = X_{re}[n] + jX_{im}[n], \tag{2.144}$$

where {X_re[n]} and {X_im[n]} are real-valued sequences called the real and imaginary parts of {X[n]}, respectively. The mean value of a complex sequence at time index n is thus given by

$$m_{X[n]} = E(X[n]) = E(X_{re}[n]) + jE(X_{im}[n]) = m_{X_{re}[n]} + j\, m_{X_{im}[n]}. \tag{2.145}$$

Likewise, the variance σ²_{X[n]} of {X[n]} at time index n is given by

$$\sigma^2_{X[n]} = E\Bigl(\bigl|X[n] - m_{X[n]}\bigr|^2\Bigr) = E\bigl(|X[n]|^2\bigr) - \bigl|m_{X[n]}\bigr|^2. \tag{2.146}$$

Often, the statistical relation between the samples of a random discrete-time signal at two different time indices m and n is of interest. One such relation is the autocorrelation, which for a complex random discrete-time signal {X[n]} is defined by

$$\phi_{XX}[m, n] = E\bigl(X[m]X^*[n]\bigr), \tag{2.147}$$

where * denotes complex conjugation. Substituting Eq. (2.144) in Eq. (2.147), we obtain the expression for the autocorrelation of X[n]:

$$\phi_{XX}[m, n] = \phi_{X_{re}X_{re}}[m, n] + \phi_{X_{im}X_{im}}[m, n] - j\phi_{X_{re}X_{im}}[m, n] + j\phi_{X_{im}X_{re}}[m, n], \tag{2.148}$$

where

$$\phi_{X_{re}X_{re}}[m, n] = E\bigl(X_{re}[m]X_{re}[n]\bigr), \tag{2.149a}$$

$$\phi_{X_{im}X_{im}}[m, n] = E\bigl(X_{im}[m]X_{im}[n]\bigr), \tag{2.149b}$$

$$\phi_{X_{re}X_{im}}[m, n] = E\bigl(X_{re}[m]X_{im}[n]\bigr), \tag{2.149c}$$

$$\phi_{X_{im}X_{re}}[m, n] = E\bigl(X_{im}[m]X_{re}[n]\bigr). \tag{2.149d}$$

Another relation is the autocovariance of {X[n]}, defined by

$$\gamma_{XX}[m, n] = E\Bigl(\bigl(X[m] - m_{X[m]}\bigr)\bigl(X[n] - m_{X[n]}\bigr)^*\Bigr) = \phi_{XX}[m, n] - m_{X[m]}\bigl(m_{X[n]}\bigr)^*. \tag{2.150}$$

As can be seen from the above, both the autocorrelation and the autocovariance are functions of two time indices m and n, and can be considered as two-dimensional sequences.

EXAMPLE 2.44 We compute the mean, mean-square value, and autocorrelation of the random sinusoidal signal of Eq. (2.140).

The probability density functions of the amplitude A and the phase Φ are given by

$$p_A(\alpha) = \begin{cases} \frac{1}{4}, & 0 \le \alpha \le 4, \\ 0, & \text{otherwise}, \end{cases} \tag{2.151}$$

and

$$p_\Phi(\varphi) = \begin{cases} \frac{1}{2\pi}, & 0 \le \varphi < 2\pi, \\ 0, & \text{otherwise}, \end{cases} \tag{2.152}$$

respectively. Since the two random variables are statistically independent, their joint probability density function is given by

$$p_{A,\Phi}(\alpha, \varphi) = p_A(\alpha)\, p_\Phi(\varphi) = \begin{cases} \frac{1}{8\pi}, & 0 \le \alpha \le 4,\ 0 \le \varphi < 2\pi, \\ 0, & \text{otherwise}. \end{cases} \tag{2.153}$$

The mean value of the random process {X[n]} is then given by

$$m_{X[n]} = E\bigl(A\cos(\omega_o n + \Phi)\bigr) = \frac{1}{8\pi}\left(\int_0^4 \alpha\, d\alpha\right)\left(\int_0^{2\pi} \cos(\omega_o n + \varphi)\, d\varphi\right) = 0. \tag{2.154}$$

The mean-square value is given by

$$E\bigl(X^2[n]\bigr) = \frac{1}{8\pi}\left(\int_0^4 \alpha^2\, d\alpha\right)\left(\int_0^{2\pi} \cos^2(\omega_o n + \varphi)\, d\varphi\right) = \frac{1}{8\pi}\cdot\frac{64}{3}\cdot\pi = \frac{8}{3}, \tag{2.155}$$

which is also the variance, since the random process is of zero mean. The autocorrelation function is given by

$$\phi_{XX}[m, n] = E\bigl(X[m]X[n]\bigr) = \frac{1}{8\pi}\left(\int_0^4 \alpha^2\, d\alpha\right)\left(\int_0^{2\pi} \cos(\omega_o m + \varphi)\cos(\omega_o n + \varphi)\, d\varphi\right) = \frac{8}{3}\cos\bigl(\omega_o(m - n)\bigr).$$
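The closed-form results of this example can be verified by ensemble averaging over many realizations of Eq. (2.140) (compare Exercise M2.14). The Python/numpy sketch below is a stand-in for the book's MATLAB; it draws A uniformly from [0, 4] and Φ uniformly from [0, 2π), as in the example, and compares the sample mean and sample autocorrelation with the values 0 and (8/3) cos(ω_o(m − n)) implied by that setup. The frequency ω_o = 0.06π of Figure 2.38 is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 200_000          # number of realizations in the ensemble
w0 = 0.06 * np.pi    # angular frequency assumed from Figure 2.38
n = np.arange(8)     # a few time indices

A = rng.uniform(0.0, 4.0, size=(R, 1))          # amplitude, uniform in [0, 4]
Phi = rng.uniform(0.0, 2*np.pi, size=(R, 1))    # phase, uniform in [0, 2*pi)
X = A * np.cos(w0 * n + Phi)                    # R realizations of {X[n]}

# Ensemble mean: should be ~0 at every time index n.
mean_est = X.mean(axis=0)

# Ensemble autocorrelation phi_XX[m, n] = E(X[m] X[n]):
# should depend only on the lag m - n.
phi_est = (X.T @ X) / R                         # 8 x 8 matrix of E(X[m]X[n])
m, nn = np.meshgrid(n, n, indexing="ij")
phi_theory = (8.0 / 3.0) * np.cos(w0 * (m - nn))
print(np.max(np.abs(phi_est - phi_theory)))     # small estimation error
```

The estimated autocorrelation matrix is (to within estimation error) constant along its diagonals, a first glimpse of the wide-sense stationarity discussed next.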
The correlation between two different random discrete-time signals {X[n]} and {Y[n]} is described by the cross-correlation function

$$\phi_{XY}[m, n] = E\bigl(X[m]Y^*[n]\bigr) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \alpha\beta^*\, p_{X[m]Y[n]}(\alpha, m, \beta, n)\, d\alpha\, d\beta, \tag{2.157}$$

and the cross-covariance function

$$\gamma_{XY}[m, n] = E\Bigl(\bigl(X[m] - m_{X[m]}\bigr)\bigl(Y[n] - m_{Y[n]}\bigr)^*\Bigr) = \phi_{XY}[m, n] - m_{X[m]}\bigl(m_{Y[n]}\bigr)^*, \tag{2.158}$$

where p_{X[m]Y[n]}(α, m, β, n) is the joint probability density function of X[m] and Y[n]. Both the cross-correlation and the cross-covariance functions can also be considered as two-dimensional sequences. The two random discrete-time signals {X[n]} and {Y[n]} are uncorrelated if γ_XY[m, n] = 0 for all values of the time indices m and n.

2.8.3 Wide-Sense Stationary Random Signal

In general, the statistical properties of a random discrete-time signal {X[n]}, such as the mean and variance of the random variable X[n], and the autocorrelation and the autocovariance functions, are time-varying functions. The class of random signals often encountered in digital signal processing applications comprises the so-called wide-sense stationary (WSS) random processes, for which some of the key statistical properties are either independent of time or of the time origin. More specifically, for a wide-sense stationary random process {X[n]} the mean E(X[n]) has the same constant value m_X for all values of the time index n, and the autocorrelation and the autocovariance functions depend only on the difference of the time indices m and n; i.e.,

$$m_X = E(X[n]), \quad \text{for all } n, \tag{2.159}$$

$$\phi_{XX}[\ell] = \phi_{XX}[n + \ell, n] = E\bigl(X[n + \ell]X^*[n]\bigr), \quad \text{for all } n \text{ and } \ell, \tag{2.160}$$

$$\gamma_{XX}[\ell] = \gamma_{XX}[n + \ell, n] = E\Bigl(\bigl(X[n + \ell] - m_X\bigr)\bigl(X[n] - m_X\bigr)^*\Bigr) = \phi_{XX}[\ell] - |m_X|^2, \quad \text{for all } n \text{ and } \ell. \tag{2.161}$$

Note that in the case of a WSS random process, the autocorrelation and the autocovariance functions are one-dimensional sequences. The mean-square value of a WSS random process {X[n]} is given by

$$E\bigl(|X[n]|^2\bigr) = \phi_{XX}[0], \tag{2.162}$$

and the variance is given by

$$\sigma_X^2 = \gamma_{XX}[0] = \phi_{XX}[0] - |m_X|^2. \tag{2.163}$$

It follows from Eqs. (2.154) and (2.156) that the random process of Eq. (2.140) is a wide-sense stationary signal.

The cross-correlation and cross-covariance functions between two WSS random processes {X[n]} and {Y[n]} are given by

$$\phi_{XY}[\ell] = E\bigl(X[n + \ell]Y^*[n]\bigr), \tag{2.164}$$

$$\gamma_{XY}[\ell] = E\Bigl(\bigl(X[n + \ell] - m_X\bigr)\bigl(Y[n] - m_Y\bigr)^*\Bigr) = \phi_{XY}[\ell] - m_X(m_Y)^*. \tag{2.165}$$

The symmetry properties satisfied by the autocorrelation, autocovariance, cross-correlation, and cross-covariance functions are:

$$\phi_{XX}[-\ell] = \phi_{XX}^*[\ell], \tag{2.166a}$$

$$\gamma_{XX}[-\ell] = \gamma_{XX}^*[\ell], \tag{2.166b}$$

$$\phi_{XY}[-\ell] = \phi_{YX}^*[\ell], \tag{2.166c}$$

$$\gamma_{XY}[-\ell] = \gamma_{YX}^*[\ell]. \tag{2.166d}$$

From the above symmetry properties it can be seen that the sequences φ_XX[ℓ], γ_XX[ℓ], φ_XY[ℓ], and γ_XY[ℓ] are always two-sided sequences. Some additional useful properties concerning these functions are:

$$\phi_{XX}[0]\,\phi_{YY}[0] \ge \bigl|\phi_{XY}[\ell]\bigr|^2, \tag{2.167a}$$

$$\gamma_{XX}[0]\,\gamma_{YY}[0] \ge \bigl|\gamma_{XY}[\ell]\bigr|^2, \tag{2.167b}$$

$$\phi_{XX}[0] \ge \bigl|\phi_{XX}[\ell]\bigr|, \tag{2.167c}$$

$$\gamma_{XX}[0] \ge \bigl|\gamma_{XX}[\ell]\bigr|. \tag{2.167d}$$

A consequence of the above properties is that the autocorrelation and autocovariance functions of a WSS random process assume their maximum values at ℓ = 0. In addition, it can be shown that, for a WSS signal with nonzero mean, i.e., m_X ≠ 0, and with no periodic components,

$$\lim_{|\ell| \to \infty} \phi_{XX}[\ell] = |m_X|^2. \tag{2.168}$$

If X[n] has a periodic component, then φ_XX[ℓ] will contain the same periodic component, as illustrated in Example 2.44.

EXAMPLE 2.45 Determine the mean and variance of a WSS real signal with an autocorrelation function given by Eq. (2.169).

Applying Eq. (2.168), we obtain (m_X)² as the limiting value of φ_XX[ℓ] as |ℓ| → ∞, from which the mean m_X follows to within a sign. Next, from Eqs. (2.162) and (2.163), we arrive at the mean-square value E(X²[n]) = φ_XX[0] and, as a result, the variance σ_X² = φ_XX[0] − (m_X)².

2.8.4 Concept of Power in a Random Signal

The average power of a deterministic sequence x[n] was defined earlier and is given by Eq.
(2.29). To compute the power associated with a random signal {X[n]}, we use instead the following definition:

$$P_X = E\left(\lim_{K \to \infty} \frac{1}{2K + 1}\sum_{n = -K}^{K} |X[n]|^2\right). \tag{2.170}$$

In most practical cases, the expectation and summation operators in Eq. (2.170) can be interchanged, resulting in the simpler expression

$$P_X = \lim_{K \to \infty} \frac{1}{2K + 1}\sum_{n = -K}^{K} E\bigl(|X[n]|^2\bigr). \tag{2.171}$$

In addition, if the random signal has a constant mean-square value for all values of n, as in a WSS signal, then Eq. (2.171) reduces to

$$P_X = E\bigl(|X[n]|^2\bigr). \tag{2.172}$$

From Eqs. (2.162) and (2.163) it follows that for a WSS signal, the average power is given by

$$P_X = \phi_{XX}[0] = \sigma_X^2 + |m_X|^2. \tag{2.173}$$

2.8.5 Ergodic Signal

In many practical situations, the random signal of interest cannot be described in terms of a simple analytical expression, as in Eq. (2.140), to permit computation of its statistical properties, which invariably involves the evaluation of definite integrals or summations. Often a finite portion of a single realization of the random signal is available, from which some estimate of the statistical properties of the ensemble must be made. Such an approach can lead to meaningful results if the ergodicity condition is satisfied. More precisely, a stationary random signal is defined to be an ergodic signal if all its statistical properties can be estimated from a single realization of sufficiently large finite length.

For an ergodic signal, time averages equal ensemble averages derived via the expectation operator in the limit as the length of the realization goes to infinity. For example, for a real ergodic signal we can compute the mean value, variance, and autocovariance as:

$$m_X = \lim_{K \to \infty} \frac{1}{2K + 1}\sum_{n = -K}^{K} x[n], \tag{2.174a}$$

$$\sigma_X^2 = \lim_{K \to \infty} \frac{1}{2K + 1}\sum_{n = -K}^{K} \bigl(x[n] - m_X\bigr)^2, \tag{2.174b}$$

$$\gamma_{XX}[\ell] = \lim_{K \to \infty} \frac{1}{2K + 1}\sum_{n = -K}^{K} \bigl(x[n + \ell] - m_X\bigr)\bigl(x[n] - m_X\bigr). \tag{2.174c}$$

The limiting operation required to compute the ensemble averages by means of time averages is still not practical in most situations and is therefore replaced with a finite sum to provide an estimate of the desired statistical properties. For example, approximations to Eqs. (2.174a)–(2.174c) that are often used are:

$$\hat{m}_X = \frac{1}{M}\sum_{n = 0}^{M - 1} x[n], \tag{2.175a}$$

$$\hat{\sigma}_X^2 = \frac{1}{M}\sum_{n = 0}^{M - 1} \bigl(x[n] - \hat{m}_X\bigr)^2, \tag{2.175b}$$

$$\hat{\gamma}_{XX}[\ell] = \frac{1}{M}\sum_{n = 0}^{M - 1} \bigl(x[n + \ell] - \hat{m}_X\bigr)\bigl(x[n] - \hat{m}_X\bigr). \tag{2.175c}$$

2.9 Summary

In this chapter we introduced some important and fundamental concepts regarding the characterization of discrete-time signals and systems in the time-domain. Certain basic discrete-time signals that play important roles in discrete-time signal processing have been defined, along with basic mathematical operations used for generating more complex signals and systems. The relation between a continuous-time signal and the discrete-time signal generated by sampling the former at uniform time intervals has been examined.

This text deals almost exclusively with linear, time-invariant (LTI) discrete-time systems that find numerous applications in practice. These systems are defined, and their convolution sum representation in the time-domain is derived. The concepts of causality and stability of LTI systems are introduced. Also discussed is an important class of LTI systems described by an input-output relation composed of a linear constant-coefficient difference equation, and the procedure for computing its output for a given input and initial conditions. The LTI discrete-time system is usually classified in terms of the length of its impulse response. The concepts of the autocorrelation of a sequence and the cross-correlation between a pair of sequences are introduced. Finally, the chapter concludes with a review of the time-domain characterization of a discrete-time random signal in terms of some of its statistical properties.

For further details on discrete-time signals and systems, we refer the reader to the texts by Cadzow [Cad73], Gabel and Roberts [Gab87], Haykin and Van Veen [Hay99], Jackson [Jac91], Lathi [Lat98], Oppenheim and Willsky [Opp83], Strum and Kirk [Stu88], and Ziemer et al. [Zie83].
Additional materials on probability theory and statistical properties of random discrete-time signals can be found in Cadzow [Cad87), Papoulis (Pup65), Peebles [Pee87], Stark and Woods (Sta94], and Therrien (The92}. Further insights can often be obtained by considering the frequency-domain representationsof diserete- time signals and LT¥ disezete-stime systems. These arc discussed in the following two chapters. 106 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain 2.10 Problems 2.1 Consider the following length-7 sequences defined for —3 1, 66 (ele = 4 sine, feb [etn = Feast taney 2.8 ta) Show that a cauval real sequence x2} ean be fully recovered from its even part sey{n] for all» = 0. chereas it can be recovered trom its edd part xygltl for al 2 > O. cc {s it possible to fully recover 9 causal complex sequence yl from its conjugate antisymmetric part yeafr!? Can vf] be fully recovered From its conjugate symmetric part yes]? Justify your answers, 2.9 Show that the even and odd ports of a real sequence ace, respectively, even amd udd sequences. 2.10 Show that the periodic conjugate symmetric part tpesirr] and the periodie conjugate antisymmetric part xpealt] of alength-W sequence «{n].0 <= V ~ |, as defined in Bs. (224a) and (2.246) can be aiternacely expressed as Gelt) = sed] +acel" NL OS nS NI (2.1760) calm Pbacin~ V1, OSnS NT (2.1760) seal where tesla] and vu (n|are, respectively, the conjugate symmetric and conjugate antisymmetric pants of x(n) 2.11 Show thal the periodic cont gate symmetric purt pe] and the perindic conjugate antisymmetric part xpealy of a length-V sequence «fa). | A — 1 ns defined in Eqs. (2.24a) and (2.24b) ean also be expressed as SallteiN ap lees Nei 19a} = Rewxi01. 2.17%) fon} CIN =p denen —1 21776) Imgrl0}. 77a) 2.12 Show that an absolotely stummable sequence nas finite energy. 
buta finite energy sequence may not be absolutely sumimabte, 213 Show thatthe square suoumable sequence syle] = 1 of Eq, (2.27) isnot absolutely summable, 244 Show that the sequence xp{u] = 884} va. Note that yi—1] i.a suitable initial approximation to @ 2.1 Develop a general expression for the culput y[t] of an LTT diserete-time system in terms of its inpus efor} and {the unit step response s(t] OF the systera 2.52 A periodic sequence #2] wich period 4 is applied as an input (o an LITI discrete-time system characterized by ‘an impulse response Aln| generating an ougpat yl). Is »[m] a periodic sequence” If its. what is its period? 2.33 Consider the following sequences: (i) x\ [1] = 28)n — 1] ~ 0.58[n ~ 3), (ii) xgln) = ~38ln — 1) + Sle +2), iit) halen] = 2ile| + dla — 1] — 38le — 3], and Gv) gin = =8|n — 2) —0.58[n — 11-4 34le — 3). Determine the following sequences obtained by a linear convolution of a pair of the above sequences: (a) yy [rT = xilml@Ar lal. (b9 vole) = x2iaI@Azlal, (6) yale] = vs feV@heba, and (a) vsfn} = «olm 1694 I. 2.34 Let gf] be a finite Jengih sequence defined for Ny Sm < Nz, with Nz > Ny. Likewise, let Ala} be a finite length sequence defined for Sd) My. Define ylv] = glal@aln]. (a) What is the length of lr]? Co) What is the range of the index m for which pla] is defined” 110 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain 2.35 Let ya} = x) Le 1@2ler| and Pf] = sie — Ny @Dazle — Nz], Express vl) in cerens of yf. 2.36 Let ghet] = sf }@>xol 20h] and hla} ala ifn — Mi}@saln — NABaxsb4 — 5]. Express hfe] in terms of 2.37 Prove that the convolution sum operation is commutative and distributive. 2.38 Consider the following three cequences: ' for =O, fore = 3, 9, otherwise: subi} =A (aconstand, tr) = ale, etn) Show that ral @1NI@ x1 kel ¥ ole Dr loI@xi lo. 
2.39 Prove that the convolution operation is associative for stable and single-sided sequences, 2.40 Show thet the convolution of a lenuth-M sequence with a length-’ sequence leads to 4 sequence of length OfEN 0, 2AL Let xf] be a length. sequence given by 1 Osn a= (0, eierne Determine yln} = x[n}@.cln] and show thatitis «triangular sequence with amaximum sample value of NV. Determine the locations of the samples with the following values: 1/4, N/2 and N 2A Let xin} and hl] be two lengt-N sequences given by 1 OzneN—1 sll {6' Oiervise meno s tt (A ernie. Determine the location and the value of the lugest positive sample wf yin] = xfu}@hbr] without performing the coavelution operation 2.43 Consider two real sequences jin] and sin] expressed as a sum of theit respective even and odd parts, HUA] = heck) + Reali, and e011 = gevin] + goal], Bor each of the following sequences, determine if itis even of oud. heviM@xovln, —(b) toulO@zevlel, CV hoalal@enalal- 2.44 Let y(n] be the sequence obtained by a linear comolution of two eats finite-length sequences Af) and fa) For each pair of ya] and A] listed below, determine x|n}. The test sample in each sequence ts its value at n=. fa toll) = (1, 1. 1, 3, 30, 28, 48), tnt) = 1, 2. 3. 4), ¢b) Lobeald =f. 3, 6, 10, 15. 14. 12. 9, 5), df) =, 2.3, 4, SH, (9) Uy = (14 F8, ~3 — 1, 2475, 9.734 F125, 58+ jS.67), dala) = B+ ID 14H, 24 5h 2.45 Consider a causal discrete-time system characterized by a first-order linear, constant-coefficient difference egu- tuon given by. pin] =ayln = H+ bxlnd, 2 0, where sin] and x(n] are, respectively, the ovtput and inpat sequences, Compute the expression forthe output sample Wn} in terms ofthe initial condition y{—t) and the input samples, 2.10. Problems am 44) Ts the system time-invariant if y{—1] = 17 Ts the system linear if y[—1] = 17 (by Repeat part (a) if vt~1] = 01 (c) Generalize the results of parts (a) and (b) to the case of an Nth-order causal discrete-time system given by Ey. 29%). 
2.46 causal LT? discrete-time system is said to have an overshoot in is step response if the response exhibits an oscillatory bebavior with decaying amplitudes around a final constant value, Shaw that che system has no overshoot in i step response ifthe impulse response h{7] of the system is nonnegative for allln = 0, 2.47 "The sequence of Fibonacci numbers fiir] is a cual sequence defined by Fin)= fin= + fln—2) mer with 10} = Oand #11 = 1 {3} Develop an exact formula to calculate fr directly for any n. ly Show that ff) 1s the impulse zesponse of a causal L:TT system described by the difference equation [Joh89] yf] = yin — Vt yl 2 pate 2.48 Consider a first-order complex digital flter characterized by a difference equation yin} = ayn ~ Wan, where sir} isthe real input sequence, yf] = vel] + f¥imln is the complex output sequence with yreln| and yamin? denoting ts real and imaginary pacts, and = a + jb is u complex constant, Develop an equivalent two-output, single-input real difference equation representation of the abave complex digital filer. Show that the single-input, single-output digital filter colating ¥se(t} to x[/] is described by a second-order difference equation. 2.49 Determine the expression for the impulse response of the faetor-of-3 linear interpolator of Eq, (2.59), 2.50 Detcrmive the expression for the impuise response of the Factor-of-L linear interpotator 2.51 Let ALU], HU], and k{2) denote the fist three impulse response samples of the first-order causal LIT system ‘of Problem 2.56, Show that the coefficients of the difference equation characterizing this system can be uniquely determined fiom these impulse response samples. 2.52 Leta causal HR digitel ter be described by the difference equation Sanya & where v[se] and xr! denote the output and the input sequences, respectively, If Alm) denotes i show that “ YX parts - 4. (2.180) pulse response, Pk= DOM ESO. m= From the sbave result, show that pn = AUnI@dn. 
2.83 Consider « cascade of two causal stable LT] systems characterized by impulse responses a” fn] and B® fn}, whore 0 < a «< Land 0 <8 < 1, Determine the expression for the impalse response hn} of the cascade 2.54 Determine the irapulse cesponse ¢{2t] of the inverse system of the LI discrete-time system of Example 2.28. 112 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain 2,58 Determine the impalse response gla] characterizing the inverse system of the LTT discrete-time system of Problem 2.45. 2.56. Consider the causal LTT systern deseribed by the difference equation yin) = poxtm) + prvi — 1) dyin Ue where x1] and y(s] denote, respectively, its input and output. Detersaine the difference equation representation of its inverse system. 2.87 Determine the expression forthe impulse response of each of the LTT systenis shown in Figure P2. Ayla fain}-@— fan Aled T Ain] hytn ery | = Sian] fran Agta a) ) Figure P22 2.58 Determine the overall impulse response of the sysiern of Figure P2.3, where Ian} = 20h — 38+ ade = Bin — N+ 2a + 2, hab) = Sal — S1-b 75] — 31-4 28 — 1] — Sn] + 38m + 1) aot fit -—fn,tn Ml Ala Figure P2.3 2.89 Prove that the BIRO stability condition of Ba, (2.73) also holds for an LTT digital filter with a complex impulse response. 2.60 Is the cascade connection of two stable LTI systems also stable? Justify your answer, 2.61 Is the parallel connection of two stable LTT systems also stable? Justify your answer, 2.62 Prove that the cascade connection of wo passive (lassless) ETI systems is also passive (lossless). 2.63 Is the parallel connection of two passive (hossless) LTT systems also passive (lossless)? Justify your answer. 2.64 Considera causal FIR filter of length + | with an impulse reyponse given by (glnll.n = 0.1... £. Develop the difference equation representation of the form of Ey. (2.81) where M+ N = L of acaisalfnite-dimensional HR Aigital filter with an impulse response (A(mH} such that fn} = gla] for ms O01... Le 2.10. 
Problems. 13 2.65 Compute the output of the accumulator of Eq. (2.55) for an input z[n] = nuln} and the following initial conditions: (ar y{=1] = 0, and ¢b) »[=1] = ~2. 2.66, In the rectangular method of numerical integration, the integral on the right-hand side of Eq, (2.98) is expressed. m [C1 ptodra rca ony aus bevel te dle qt psn fe mts nea of une neon, 24 Uap neste ne-ayng nedsnt in soem cts yin} = { gue HA n= 0 2.68 Determine the (oral solution for n 2 Oof the difference equation yl] + 05yin — 1) = 2utnh, ‘with the initial condition y[—1} = 2 2.69 Determine the total solution for n 2 O-of the difference equation ya] + O.Lyler = 1] = 8.06y40— 21 = 2" eke}, ‘with the initial condition y{—1] = 1, and y{—2] = 0. 2.20 Detei the total solution for n 2 0 of the difference equation Yin) +0.Lyle ~ 1) —0.06yl0 — 2] = xln] —2efn — 1), with the initial condition y[—1] = 1, and yf-~2] = 0, when the forcing function is xf) = 2"1{n). 2.71 Determine the impulse response h[n] ofthe LI system described by the difference equation yin] + 0.5y[n — 1} = 210} 2.72 Determine the impulse response hn] ofthe LI system described by the differeace equation vin} + O.by[n ~ 1} —0.06ylir — 2] = ln] 241A = I 273 Show that the sum D929 [n¥ (44)"| converges if [Aj] < 1. 2.74 (a) Evaluate the autocorrelation sequence of each of the sequences of Problem 2.1. (b) Evaluate the cross-correlation sequence rxy[é] between the sequences x{] and y[n], and the crost-correlation sequence ray (€} between the sequences x[n] and w{n] of Problem 2.1. 2.78 Determine the autocorrelation sequence of each of the following sequences and show that itis an even sequence ineach case. What is the location of the maximum value of the autocorrelation sequence in each case? 
{a} x) [n] =e" uf], ow xim=(f QErEN oh 2.36 Determine the autocorrelation sequence and its period of each of the following periodic sequences, 414 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain (a) 11m) = costars), where M is a positive integer, () Hin} =n modulo 6, © alm) = (“1 2.77 Let X and ¥ be tworrandom variables. Show that E(X +) = E(X) + EU) and £(cX) = cE(X), where eis 2.78 Determine the valve of the constant « that minimizes the mean-square error E([X — x2), and then find the ‘minimum value of the mean-square error. 2.79 Compute the mean value and the variance of the random variables withthe probability density fonctions listed betow (Pars) (a) Cauchy distribution: py (x)= 72, @) Laplacian distribuion: p(s) = Ge, (©) Binomia! distribution: py) = They (pe — PY — Oy (@) Poisson distribution: py (0) = Pq “PE See— 8). (©) Rayleigh distribution: px (x) Bre A pes), In the above equations, 5() is the Dirac delta function and (x) is the unit step function, 2.80 Show that if ihe ewo random variables X and ¥ are statistically independent, then they are also linearly indepen- dem. 241 Prove Eq. (2.124). 2.82 Let x(n} and v(n] be ewo statistically independent stationary random signals with means mx and my, and variances o? and 9%, respectively. Consider the random sigoal u{rc] obtained by a linear combiaution of x{n} and yin), ie. vf] = axin] + byl], where @ and b ase constants, Show that the mean my and the variance a2 of ula] are sive by my = ame + bmy anda? = a2o2 + 6202, respectively. 2.83 Let cla] and y(n) betwo independent 2er0-mnean WSS random signals with autocorrelations xn and qs fr. respectively. Consider the random signal ulm} obtained hy a linear combination of x(n] and yf), be. vial — xin] + Bylo], where @ and & are constants. Express the autocorrelation and eross-correlations, gyuDa} Gola and ryt], i terms of s(n and dysfre). What would be the results if either x{n or yl] was zero-mean? 2.84 Prove the symmetry properties of Eqs. 
2.1664) through (2.1664). 285 Verify the inequalities of Eqs. (2.167a) through (2.1674) 2.6 Prove Bq. (2.168), 2.87 Determine the mean and variance of a WSS real signal with an autocorrelation function given by 9+ 110 + Laer = tea lO = Teh oF 2.14. Manas Exercises 115 2.11 Mar.as Exercises M21 Write a MATLAB program to generate the following sequences und plot them using the function stem: (a) unit sample sequence |r} (b) unit step sequence fn], and (¢) ramp sequence n(n). The input parameters specified bby the user are the desired length 1 of the sequence and the sarmpling frequency Fin Hz. Using this program generate the fist 100 samples of each of the above sequences with a sampling rate of 20 kHz. M22 The square wave and the sawtooth wave are two periodic sequences as sketched in Figure P2.4, Using the functions sawtooth and square write a MATLAB program to generate the above (wo sequences and plot them uasing the function sem. The input data specified by the user are: desired length Z of the sequence, peak value A, and the period N. For the square wave sequence an additional user-speciied parameter is the duty cycle, which is the percent of the period for which the signal is positive. Using this program geaecate the first 100 samples of each of the above sequences with a sampling rate of 20 kHz, a peak value of 7, a period of 13, and a duty eycle of 60% for the square wave. \ ull ll sil" fe “Te oa” J @ A wl . ot 1 -4 ) Figure P24 M23 (a) Using Program 2.1 generate the sequences shown in Figures 2.16 and 2.17, (b) Generate and plot the complex exponential sequence 2.5¢-©4+/7/9"" for Q 00. M 2.10 Write a Maran program to compute the square root using the algorithm of Eq. (2.179) in Problem 2.30 and. show that the output y{/t] of this system for an input x{a] = azetn) with y{—1] = 1 converges Wo /@ as n —+ 00. Plot th: emor as a function of n for several different values of @. How would you compote the square-root of a number a ‘with ¢ value greater than one? 
M2.11 Using the function impz, write a MATLAB program to compute and plot the impulse response of a causal finite-dimensional discrete-time system characterized by a difference equation of the form of Eq. (2.81). The input data to the program are the desired length of the impulse response, and the constants {p_k} and {d_k} of the difference equation. Generate and plot the first 41 samples of the impulse response of the system of Eq. (2.93).

M2.12 Using Program 2.7, determine the autocorrelation and the cross-correlation sequences of Problem 2.74. Are your results the same as those determined in Problem 2.74?

M2.13 Modify Program 2.7 to determine the autocorrelation sequence of a sequence corrupted with a uniformly distributed random signal generated using the M-function rand. Using the modified program, demonstrate that the autocorrelation sequence of a noise-corrupted signal exhibits a peak at zero lag.

M2.14 (a) Write a MATLAB program to generate the random sinusoidal signal of Eq. (2.140) and plot four possible realizations of the random signal. Comment on your results. (b) Compute the mean and variance of a single realization of the above random signal using Eqs. (2.174a) and (2.174b). How close are your answers to those given in Example 2.44?

M2.15 Using the M-function rand, generate a uniformly distributed length-1000 random sequence. Using Eqs. (2.174a) and (2.174b), compute the mean and variance of the random signal.

Discrete-Time Signals in the Transform Domain

In Section 2.2.3 we pointed out that any arbitrary sequence can be represented in the time-domain as a weighted linear combination of delayed unit sample sequences {δ[n − k]}. An important consequence of this representation, derived in Section 2.5.1, is the input-output characterization of an LTI digital filter in the time-domain by means of the convolution sum describing the output sequence in terms of a weighted linear combination of its delayed impulse responses.
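The convolution sum just mentioned, y[n] = Σ_k h[k] x[n − k], is easy to state in code. The short plain-Python sketch below is an illustration for finite-length sequences, not one of the book's MATLAB programs:

```python
# Direct evaluation of the convolution sum y[n] = sum_k h[k] x[n-k]
# for two finite-length sequences (illustrative sketch).
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)   # output length is N + M - 1
    for k, hk in enumerate(h):          # each h[k] contributes a shifted,
        for n, xn in enumerate(x):      # scaled copy of x[n]
            y[k + n] += hk * xn
    return y

y = convolve([1, 2, 3], [1, 1])
```

Here the output sample index k + n is the sum of the two input indices, exactly the bookkeeping used in Section 2.5.1.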
We consider in this chapter an alternate description of a sequence in terms of complex exponential sequences of the form {e^{jωn}} and {z^{−n}}, where z is a complex variable. This leads to three particularly useful representations of discrete-time sequences and LTI discrete-time systems in a transform domain.¹ These transform-domain representations are reviewed here along with the conditions for their existence and their properties. MATLAB has been used extensively to illustrate various concepts and implement a number of useful algorithms. Applications of these concepts are discussed in the following chapters.

The first transform-domain representation of a discrete-time sequence we discuss is the discrete-time Fourier transform, by which a time-domain sequence is mapped into a continuous function of a frequency variable. Because of the periodicity of the discrete-time Fourier transform, the parent discrete-time sequence can be simply obtained by computing its Fourier series representation. We then show that for a length-N sequence, N equally spaced samples of its discrete-time Fourier transform are sufficient to describe the frequency-domain representation of the sequence, and from these N frequency samples, the original N samples of the discrete-time sequence can be obtained by a simple inverse operation. These N frequency samples constitute the discrete Fourier transform of a length-N sequence, a second transform-domain representation. We next consider a generalization of the discrete-time Fourier transform, called the z-transform, the third type of transform-domain representation of a sequence. Finally, the transform-domain representation of a random signal is discussed. Each of these representations is an important tool in signal processing and is used often in practice. A thorough understanding of these three transforms is therefore very important to make best use of the signal processing algorithms discussed in this book.

3.1
The Discrete-Time Fourier Transform

The discrete-time Fourier transform (DTFT) or, simply, the Fourier transform of a discrete-time sequence x[n] is a representation of the sequence in terms of the complex exponential sequence {e^{−jωn}}, where ω is the real frequency variable. The DTFT representation of a sequence, if it exists, is unique, and the original sequence can be computed from its DTFT by an inverse transform operation. We first define the forward transform and derive its inverse transform. We then describe the condition for its existence and summarize its important properties.

¹Periodic sequences can be represented in the frequency domain by means of a discrete Fourier series (see Problem 3.34).

3.1.1 Definition

The discrete-time Fourier transform X(e^{jω}) of a sequence x[n] is defined by

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.    (3.1)

In general, X(e^{jω}) is a complex function of the real variable ω and can be written in rectangular form as

X(e^{jω}) = X_re(e^{jω}) + jX_im(e^{jω}),    (3.2)

where X_re(e^{jω}) and X_im(e^{jω}) are, respectively, the real and imaginary parts of X(e^{jω}), and are real functions of ω. X(e^{jω}) can alternately be expressed in the polar form as

X(e^{jω}) = |X(e^{jω})| e^{jθ(ω)},    (3.3)

where

θ(ω) = arg{X(e^{jω})}.    (3.4)

The quantity |X(e^{jω})| is called the magnitude function and the quantity θ(ω) is called the phase function, with both functions again being real functions of ω. In many applications, the Fourier transform is called the Fourier spectrum and, likewise, |X(e^{jω})| and θ(ω) are referred to as the magnitude spectrum and phase spectrum, respectively. The complex conjugate of X(e^{jω}) is denoted as X*(e^{jω}). The relations between the rectangular and polar forms of X(e^{jω}) follow from Eqs. (3.2) and (3.3), and are given by

X_re(e^{jω}) = |X(e^{jω})| cos θ(ω),
X_im(e^{jω}) = |X(e^{jω})| sin θ(ω),
|X(e^{jω})|² = X_re²(e^{jω}) + X_im²(e^{jω}),
tan θ(ω) = X_im(e^{jω}) / X_re(e^{jω}).

It can be easily shown that for a real sequence x[n],
|X(e^{jω})| and X_re(e^{jω}) are even functions of ω, whereas θ(ω) and X_im(e^{jω}) are odd functions of ω (Problem 3.1).

Note from Eq. (3.3) that if we replace θ(ω) with θ(ω) + 2πk, where k is any integer, X(e^{jω}) remains unchanged, implying that the phase function cannot be uniquely specified for any Fourier transform. Unless otherwise stated, we will assume that the phase function θ(ω) is restricted to the following range of values, −π ≤ θ(ω) < π, called the principal value.

Table 3.2: General properties of the discrete-time Fourier transform.

Type of Property — Sequence — Discrete-Time Fourier Transform
g[n] ↔ G(e^{jω}); h[n] ↔ H(e^{jω})
Linearity: αg[n] + βh[n] ↔ αG(e^{jω}) + βH(e^{jω})
Time-shifting: g[n − n₀] ↔ e^{−jωn₀} G(e^{jω})
Frequency-shifting: e^{jω₀n} g[n] ↔ G(e^{j(ω−ω₀)})
Differentiation in frequency: n g[n] ↔ j dG(e^{jω})/dω
Convolution: g[n] ⊛ h[n] ↔ G(e^{jω}) H(e^{jω})
Modulation: g[n] h[n] ↔ (1/2π) ∫_{−π}^{π} G(e^{jθ}) H(e^{j(ω−θ)}) dθ
Parseval's relation: Σ_{n=−∞}^{∞} g[n] h*[n] = (1/2π) ∫_{−π}^{π} G(e^{jω}) H*(e^{jω}) dω

Table 3.3: Symmetry relations of the discrete-time Fourier transform of a complex sequence.

Sequence — Discrete-Time Fourier Transform
x[n] ↔ X(e^{jω})
x[−n] ↔ X(e^{−jω})
x*[−n] ↔ X*(e^{jω})
Re{x[n]} ↔ X_cs(e^{jω}) = ½[X(e^{jω}) + X*(e^{−jω})]
j Im{x[n]} ↔ X_ca(e^{jω}) = ½[X(e^{jω}) − X*(e^{−jω})]
x_cs[n] ↔ X_re(e^{jω})
x_ca[n] ↔ j X_im(e^{jω})

Note: X_cs(e^{jω}) and X_ca(e^{jω}) are the conjugate-symmetric and conjugate-antisymmetric parts of X(e^{jω}), respectively. Likewise, x_cs[n] and x_ca[n] are the conjugate-symmetric and conjugate-antisymmetric parts of x[n], respectively.

Table 3.4: Symmetry relations of the discrete-time Fourier transform of a real sequence.

Sequence — Discrete-Time Fourier Transform
x[n] ↔ X(e^{jω}) = X_re(e^{jω}) + jX_im(e^{jω})
x_ev[n] ↔ X_re(e^{jω})
x_od[n] ↔ j X_im(e^{jω})
Symmetry relations: X(e^{jω}) = X*(e^{−jω}), X_re(e^{jω}) = X_re(e^{−jω}), X_im(e^{jω}) = −X_im(e^{−jω}), |X(e^{jω})| = |X(e^{−jω})|, arg{X(e^{jω})} = −arg{X(e^{−jω})}

Note: x_ev[n] and x_od[n] denote the even and odd parts of x[n], respectively.

EXAMPLE 3.7 We compute the energy of the ideal lowpass filter impulse response h_LP[n]. Using Parseval's relation,

Σ_{n=−∞}^{∞} |h_LP[n]|² = (1/2π) ∫_{−π}^{π} |H_LP(e^{jω})|² dω = (1/2π) ∫_{−ω_c}^{ω_c} dω = ω_c/π < ∞.

Hence, h_LP[n] is a finite-energy sequence.

Recall from Eq. (2.105) that the autocorrelation sequence r_gg[ℓ] of g[n] can be expressed as

r_gg[ℓ] = Σ_{n=−∞}^{∞} g[n] g[n − ℓ] = Σ_{n=−∞}^{∞} g[n] g[−(ℓ − n)] = g[ℓ] ⊛ g[−ℓ].    (3.20)

Now from Table 3.3, the DTFT of g[−ℓ] is G(e^{−jω}). Therefore, using the convolution property of the DTFT given in Table 3.2, we observe that the DTFT of g[ℓ] ⊛ g[−ℓ] is given by G(e^{jω})G(e^{−jω}) = |G(e^{jω})|², where we have used the fact that for a real sequence g[n], G(e^{jω}) = G*(e^{−jω}).
As a result, the energy density spectrum S_gg(e^{jω}) of a real sequence g[n] can be computed by taking the DTFT of its autocorrelation sequence r_gg[ℓ], i.e.,

S_gg(e^{jω}) = Σ_{ℓ=−∞}^{∞} r_gg[ℓ] e^{−jωℓ}.    (3.21)

Analogously, the DTFT S_gh(e^{jω}) of the cross-correlation sequence r_gh[ℓ] of two sequences g[n] and h[n] is called the cross-energy density spectrum:

S_gh(e^{jω}) = Σ_{ℓ=−∞}^{∞} r_gh[ℓ] e^{−jωℓ}.    (3.22)

3.1.6 DTFT Computation Using MATLAB

The Signal Processing Toolbox in MATLAB includes a number of M-files to aid in the DTFT-based analysis of discrete-time signals. Specifically, the functions that can be used are freqz, abs, angle, and unwrap. In addition, the built-in MATLAB functions real and imag are also useful in some applications. The function freqz can be used to compute the values of the DTFT of a sequence, described as a rational function in e^{−jω} in the form of Eq. (3.17), at a prescribed set of discrete frequency points ω = ω_r. For a reasonably accurate plot, a fairly large number of frequency points should be selected. There are various forms of this function:

H = freqz(num,den,w),
H = freqz(num,den,f,FT),
[H,w] = freqz(num,den,k),
[H,f] = freqz(num,den,k,FT),
[H,w] = freqz(num,den,k,'whole'),
[H,f] = freqz(num,den,k,'whole',FT),
freqz(num,den).

The function freqz returns the frequency response values as a vector H of a DTFT defined in terms of the vectors num and den containing the coefficients {p_i} and {d_i}, respectively, at a prescribed set of frequency points. In H = freqz(num,den,w), the prescribed set of frequencies between 0 and 2π are given by the vector w. In H = freqz(num,den,f,FT), the vector f is used to provide the prescribed frequency points whose values must be in the range 0 to FT/2, with FT being the sampling frequency.
The total number of frequency points can be specified by k in the argument of freqz. In this case, the DTFT values H are computed at k equally spaced points between 0 and π, and returned along with the output data vector w, or computed at k equally spaced points between 0 and FT/2 and returned along with the output data vector f. For faster computation, it is recommended that the number k be chosen as a power of 2, such as 256 or 512. By including 'whole' in the argument of freqz, the range of frequencies becomes 0 to 2π or 0 to FT, as the case may be. After the DTFT values have been determined, they can be plotted either showing their real and imaginary parts using the functions real and imag, or in terms of their magnitude and phase components using the functions abs and angle. The function angle computes the phase angle in radians. If desired, the phase can be unwrapped using the function unwrap. freqz(num,den) with no output arguments computes and plots the magnitude and phase response values as a function of frequency in the current figure window. We illustrate the DTFT computation using MATLAB in the following example.

EXAMPLE 3.8 Program 3.1 can be employed to determine the values of the DTFT of a real sequence described as a rational function in e^{−jω}.

% Program 3_1
% Evaluation of the DTFT
%
% Read in the input data
k = input('Number of frequency points = ');
num = input('Numerator coefficients = ');
den = input('Denominator coefficients = ');
% Compute the frequency response
w = 0:pi/(k-1):pi;
h = freqz(num, den, w);
% Plot the frequency response
subplot(2,2,1)
plot(w/pi,real(h));grid
title('Real part')
xlabel('\omega/\pi'); ylabel('Amplitude')
subplot(2,2,2)
plot(w/pi,imag(h));grid
title('Imaginary part')
xlabel('\omega/\pi'); ylabel('Amplitude')
subplot(2,2,3)
plot(w/pi,abs(h));grid
title('Magnitude Spectrum')
xlabel('\omega/\pi'); ylabel('Magnitude')
subplot(2,2,4)
plot(w/pi,angle(h));grid
title('Phase Spectrum')
xlabel('\omega/\pi'); ylabel('Phase, radians')

The input data requested by the program are the number of frequency points at which the DTFT is to be evaluated, and the vectors num and den containing the coefficients of the numerator and the denominator of the DTFT, respectively, entered inside square brackets. The program computes the DTFT values at the prescribed frequency points and plots the real and imaginary parts, and the magnitude and phase spectra. It should be noted
that because of the symmetry properties of the DTFT of a real sequence, as indicated in Table 3.4, the DTFT is evaluated only at k equally spaced values of ω between 0 and π. In this example we consider the plotting of the following DTFT:

X(e^{jω}) = (0.008 − 0.033e^{−jω} + 0.05e^{−j2ω} − 0.033e^{−j3ω} + 0.008e^{−j4ω}) / (1 + 2.37e^{−jω} + 2.7e^{−j2ω} + 1.6e^{−j3ω} + 0.41e^{−j4ω}).

The input data used for the DTFT computation above are as follows:

k = 256
num = [0.008 -0.033 0.05 -0.033 0.008]
den = [1 2.37 2.7 1.6 0.41]

The program first computes the DTFT at the specified 256 discrete frequency points equally spaced between 0 and π. It then computes the real and imaginary parts, and the magnitude and phase, of the DTFT at these frequencies, which are then plotted as shown in Figure 3.5. As can be seen from this figure, the phase spectrum displays a discontinuity of 2π at around ω = 0.72. This discontinuity can be removed using the function unwrap, and the modified phase response is as plotted in Figure 3.6.

Figure 3.5: Plots of the real and imaginary parts, and the magnitude and phase spectra, of the DTFT of Example 3.8.

Figure 3.6: Unwrapped phase spectrum of the DTFT of Example 3.8.

3.1.7 Linear Convolution Using the DTFT

An important property of the DTFT is given by the convolution theorem in Table 3.2, which states that the DTFT Y(e^{jω}) of a sequence y[n] generated by the linear convolution of two sequences, g[n] and h[n], is simply given by the product of their respective DTFTs, G(e^{jω}) and H(e^{jω}). This implies that the linear convolution of two sequences, g[n] and h[n], can be implemented by computing first their DTFTs, G(e^{jω}) and H(e^{jω}), forming the product Y(e^{jω}) = G(e^{jω})H(e^{jω}), and then computing the inverse DTFT of the product. In some applications, particularly in the case of infinite-length sequences, this DTFT-based approach may be more convenient to carry out than the direct convolution.
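The convolution theorem can be checked numerically. The plain-Python sketch below (an illustration, not one of the book's MATLAB programs) evaluates the DTFTs of two finite-length sequences at a few frequencies and verifies that their pointwise product equals the DTFT of the convolved sequence:

```python
import cmath

# DTFT of a finite-length sequence evaluated at angular frequency w, Eq. (3.1)
def dtft(x, w):
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

# Direct linear convolution (illustrative helper)
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for k, hk in enumerate(h):
        for n, xn in enumerate(x):
            y[k + n] += hk * xn
    return y

g = [1.0, 2.0, 0.0, 1.0]
h = [2.0, 2.0, 1.0, 1.0]
y = convolve(g, h)

# Check Y(e^{jw}) = G(e^{jw}) H(e^{jw}) at a few test frequencies
max_err = max(abs(dtft(y, w) - dtft(g, w) * dtft(h, w))
              for w in [0.0, 0.3, 1.0, 2.5])
```

The residual max_err is at the level of floating-point round-off, confirming the theorem for these sequences.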
3.2 The Discrete Fourier Transform

In the case of a finite-length sequence x[n], 0 ≤ n ≤ N − 1, … it is truncated to the first N samples, whereas, if R < N, the vector x is zero-padded at the end to make it into a length-N sequence. Likewise, the function ifft(X) computes the R-point IDFT of a vector X, where R is the length of X, while ifft(X,N) computes the IDFT of X, with the size N of the IDFT being specified by the user. As before, if R > N, it is automatically truncated to the first N samples, whereas, if R < N, the DFT vector X is zero-padded at the end by the program to make it into a length-N DFT sequence. In addition, the function dftmtx(N) in the Signal Processing Toolbox of MATLAB can be used to compute the N × N DFT matrix D_N defined in Eq. (3.40). To compute the inverse of the N × N DFT matrix, one can use the function conj(dftmtx(N))/N. We illustrate the application of the above M-files in the following three examples.

EXAMPLE 3.10 Using MATLAB we determine the M-point DFT U[k] of the N-point sequence u[n] of Eq. (3.44). To this end, we can use Program 3.2 given below. During execution, the program requests the input data consisting of the length N of the sequence and the length M of the DFT. To ensure correct DFT values, M must be greater than or equal to N. After the data are entered, it computes the M-point DFT and plots the original N-point sequence, and the magnitude and phase of the DFT sequence, as indicated in Figure 3.7 for N = 8 and M = 16.

% Program 3_2
% M-point DFT Computation
% Read in the input data
…

Figure 3.7: (a) Original length-8 sequence of Eq. (3.44) and (b) its M-point DFT for N = 8 and M = 16.
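The truncation/zero-padding behavior of fft(x,N) described above can be mimicked in a few lines of plain Python (an illustrative sketch, not the toolbox code):

```python
import cmath

# N-point DFT with MATLAB-fft-style length adjustment:
# truncate x to N samples if it is longer, zero-pad at the end if shorter.
def dft(x, N):
    xe = list(x[:N]) + [0.0] * max(0, N - len(x))
    return [sum(xe[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

U = dft([1.0, 1.0, 1.0, 1.0], 8)   # 8-point DFT of a length-4 sequence
```

For this length-4 all-ones input, U[0] equals the sum of the samples, and the even-indexed bins other than k = 0 vanish, as the geometric-sum identity predicts.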
% Compute its N-point IDFT
u = ifft(U,N);
% Plot the DFT and its IDFT
k = 0:K-1;
stem(k,U)
xlabel('Frequency index k'); ylabel('Amplitude')
title('Original DFT samples')
pause
subplot(2,1,1)
n = 0:N-1;
stem(n,real(u))
title('Real part of the time-domain samples')
xlabel('Time index n'); ylabel('Amplitude')
subplot(2,1,2)
stem(n,imag(u))
title('Imaginary part of the time-domain samples')
xlabel('Time index n'); ylabel('Amplitude')

As the program is run, it calls for the input data consisting of the length K of the DFT and the length N of the IDFT. It then computes the IDFT of the ramp DFT sequence of Eq. (3.45) and plots the original DFT sequence and its IDFT, as indicated in Figure 3.8. Note that even though the DFT sequence is real, its IDFT is a complex time-domain sequence, as expected.

EXAMPLE 3.13 Let r = 3 and N = 16 for the finite-length sequence x[n] of Eq. (3.33). From Eq. (3.36) its 16-point DFT is therefore given by

X[k] = { 8, for k = 3; 8, for k = 13; 0, otherwise. }

We determine the DTFT X(e^{jω}) of the length-16 x[n] by computing a 512-point DFT using the MATLAB program indicated below.

% Program 3_4
% Numerical Computation of DTFT Using DFT
%
% Generate the length-16 sinusoidal sequence
k = 0:15;
x = cos(2*pi*k*3/16);
% Compute its 512-point DFT
X = fft(x);
XE = fft(x,512);
% Plot the frequency response
k = 0:511;
plot(k/512,abs(XE))
hold
plot(k/16,abs(X),'o')
xlabel('Normalized angular frequency'); ylabel('Magnitude')

Figure 3.9 shows the plot of the DTFT X(e^{jω}) along with the DFT samples X[k]. As indicated in this figure, the DFT values X[k], shown by circles, are precisely the frequency samples of the DTFT X(e^{jω}) at ω = ω_k.
Figure 3.8: (a) Original DFT sequence of length K = 8, and (b) its 13-point IDFT.

Figure 3.9: The magnitudes of the DTFT X(e^{jω}) and the DFT X[k] of the sequence x[n] of Eq. (3.33) with r = 3 and N = 16. The DTFT is plotted as a solid line and the DFT samples are shown by circles.

3.3 Relation between the DTFT and the DFT, and Their Inverses

We now examine the explicit relation between the DTFT and the N-point DFT of a length-N sequence, and the relation between the DTFT of a length-M sequence and the N-point DFT obtained by sampling the DTFT.

3.3.1 DTFT from DFT by Interpolation

As indicated by Eq. (3.23), the N-point DFT X[k] of a length-N sequence x[n] is simply the frequency samples of its DTFT X(e^{jω}) evaluated at N uniformly spaced frequency points, ω = ω_k = 2πk/N radians, 0 ≤ k ≤ N − 1. Given the N-point DFT X[k] of a length-N sequence, it is also possible to determine its DTFT X(e^{jω}) uniquely. To this end, we first determine x[n] using the IDFT relation of Eq. (3.26) and then compute its DTFT using Eq. (3.1), resulting in

X(e^{jω}) = Σ_{n=0}^{N−1} x[n] e^{−jωn} = Σ_{n=0}^{N−1} [ (1/N) Σ_{k=0}^{N−1} X[k] W_N^{−kn} ] e^{−jωn} = (1/N) Σ_{k=0}^{N−1} X[k] Σ_{n=0}^{N−1} e^{−j(ω − 2πk/N)n},    (3.46)

where we have used Eq. (3.24). Now the right-hand summation in the above expression can be rewritten as

Σ_{n=0}^{N−1} e^{−j(ω − 2πk/N)n} = [1 − e^{−j(ω − 2πk/N)N}] / [1 − e^{−j(ω − 2πk/N)}] = [sin(N(ω − 2πk/N)/2) / sin((ω − 2πk/N)/2)] e^{−j(ω − 2πk/N)(N−1)/2}.    (3.47)

Substituting Eq. (3.47) in Eq. (3.46), we obtain the desired relation expressing X(e^{jω}) in terms of X[k]:

X(e^{jω}) = (1/N) Σ_{k=0}^{N−1} X[k] [sin(N(ω − 2πk/N)/2) / sin((ω − 2πk/N)/2)] e^{−j(ω − 2πk/N)(N−1)/2}.    (3.48)

3.3.2 Sampling the DTFT

Consider a sequence {x[n]} with a discrete-time Fourier transform (DTFT) X(e^{jω}). We sample X(e^{jω}) at N equally spaced points ω_k = 2πk/N, 0 ≤ k ≤ N − 1, developing the N frequency samples {X(e^{jω_k})}.
These N frequency samples can be considered as an N-point DFT Y[k] whose N-point inverse DFT is a length-N sequence {y[n]}, 0 ≤ n ≤ N − 1. Now,

Y[k] = X(e^{jω_k}) = Σ_{ℓ=−∞}^{∞} x[ℓ] e^{−j2πkℓ/N} = Σ_{ℓ=−∞}^{∞} x[ℓ] W_N^{kℓ},    (3.49)

where W_N = e^{−j2π/N}. An inverse DFT of Y[k] yields

y[n] = (1/N) Σ_{k=0}^{N−1} Y[k] W_N^{−kn}.    (3.50)

Substituting Eq. (3.49) in Eq. (3.50), we get

y[n] = (1/N) Σ_{k=0}^{N−1} [ Σ_{ℓ=−∞}^{∞} x[ℓ] W_N^{kℓ} ] W_N^{−kn} = Σ_{ℓ=−∞}^{∞} x[ℓ] [ (1/N) Σ_{k=0}^{N−1} W_N^{−k(n−ℓ)} ].    (3.51)

Recall from Eq. (3.28) that

(1/N) Σ_{k=0}^{N−1} W_N^{−k(n−ℓ)} = { 1, for ℓ = n + mN, m an integer; 0, otherwise. }    (3.52)

Making use of the above identity in Eq. (3.51), we finally arrive at the desired relation

y[n] = Σ_{m=−∞}^{∞} x[n + mN], 0 ≤ n ≤ N − 1.    (3.53)

The above relation indicates that y[n] is obtained from x[n] by adding an infinite number of shifted replicas of x[n], with each replica shifted by an integer multiple of N sampling instants, and observing the sum only for the interval 0 ≤ n ≤ N − 1. To apply Eq. (3.53) to finite-length sequences, we assume that the samples outside the specified range are zeros. Thus, if x[n] is a finite-length sequence of length M less than or equal to N, then y[n] = x[n] for 0 ≤ n ≤ N − 1; however, if M > N, a time-domain aliasing of the samples of x[n] takes place in generating y[n]. Often it is of interest to evaluate the DTFT of a length-N sequence x[n] at M (≥ N) uniformly spaced frequencies ω_k = 2πk/M:

X(e^{jω_k}) = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/M}, 0 ≤ k ≤ M − 1.    (3.54)

Define a new sequence x_e[n] obtained from x[n] by augmenting with M − N zero-valued samples:

x_e[n] = { x[n], 0 ≤ n ≤ N − 1; 0, N ≤ n ≤ M − 1. }    (3.55)

Making use of x_e[n] in Eq. (3.54), we arrive at

X(e^{jω_k}) = Σ_{n=0}^{M−1} x_e[n] W_M^{kn} = X_e[k],    (3.56)

which is seen to be an M-point DFT X_e[k] of the length-M sequence x_e[n]. The DFT X_e[k] can be computed very efficiently using the FFT algorithm if M is an integer power of 2.

The MATLAB function freqz, described in Section 3.1.6, employs the above approach to evaluate the frequency response of a rational DTFT expressed as a rational function in e^{−jω} at a prescribed set of discrete frequencies. It computes the DFTs of the numerator and the denominator separately by considering each as finite-length sequences, and then expresses the ratio of the DFT samples at each frequency point to evaluate the DTFT.

3.4 Discrete Fourier Transform Properties

Like the DTFT,
the DFT also satisfies a number of properties that are useful in signal processing applications. Some of these properties are essentially identical to those of the DTFT, while some others are somewhat different. A summary of the DFT properties is included in Tables 3.5, 3.6, and 3.7. Their proofs are again quite straightforward and have been left as exercises. Most of these properties can also be verified using MATLAB. We discuss next those properties that are different from their counterparts for the DTFT.

3.4.1 Circular Shift of a Sequence

This property is analogous to the time-shifting property of the DTFT as given in Table 3.2, but with a subtle difference. Let us consider length-N sequences defined for 0 ≤ n ≤ N − 1. Such sequences have sample values equal to zero for n < 0 and n ≥ N. If x[n] is such a sequence, then, for any arbitrary integer n₀, the shifted sequence x₁[n] = x[n − n₀] is no longer defined for the range 0 ≤ n ≤ N − 1. We therefore need to define a shift operation that keeps the shifted sequence in this range, the circular shift, given by

x_c[n] = x[⟨n − n₀⟩_N],    (3.57)

where ⟨m⟩_N denotes m modulo N. For n₀ > 0 (right circular shift), the above equation implies

x_c[n] = { x[n − n₀], for n₀ ≤ n ≤ N − 1; x[N − n₀ + n], for 0 ≤ n < n₀. }    (3.58)

The concept of a circular shift of a finite-length sequence is illustrated in Figure 3.10. Figure 3.10(a) shows a length-6 sequence x[n]. Figure 3.10(b) shows its circularly shifted version shifted to the right by

Table 3.5: General properties of the DFT.

Type of Property — Length-N Sequence — N-point DFT
g[n] ↔ G[k]; h[n] ↔ H[k]
Linearity: αg[n] + βh[n] ↔ αG[k] + βH[k]
Circular time-shifting: g[⟨n − n₀⟩_N] ↔ W_N^{kn₀} G[k]
Circular frequency-shifting: W_N^{−k₀n} g[n] ↔ G[⟨k − k₀⟩_N]
Duality: G[n] ↔ N g[⟨−k⟩_N]
N-point circular convolution: Σ_{m=0}^{N−1} g[m] h[⟨n − m⟩_N] ↔ G[k] H[k]
Modulation: g[n] h[n] ↔ (1/N) Σ_{m=0}^{N−1} G[m] H[⟨k − m⟩_N]
Parseval's relation: Σ_{n=0}^{N−1} |x[n]|² = (1/N) Σ_{k=0}^{N−1} |X[k]|²

Table 3.6: Symmetry relations of the DFT of a complex sequence.

Length-N Sequence — N-point DFT
x[n] ↔ X[k]
x[⟨−n⟩_N] ↔ X[⟨−k⟩_N]
x*[n] ↔ X*[⟨−k⟩_N]
x*[⟨−n⟩_N] ↔ X*[k]
Re{x[n]} ↔ X_pcs[k] = ½{X[k] + X*[⟨−k⟩_N]}
j Im{x[n]} ↔ X_pca[k] = ½{X[k] − X*[⟨−k⟩_N]}
x_pcs[n] ↔ Re{X[k]}
x_pca[n] ↔ j Im{X[k]}

Note: x_pcs[n] and x_pca[n] are the periodic conjugate-symmetric and periodic conjugate-antisymmetric parts of x[n], respectively.
Likewise, X_pcs[k] and X_pca[k] are the periodic conjugate-symmetric and periodic conjugate-antisymmetric parts of X[k], respectively.

Table 3.7: Symmetry properties of the DFT of a real sequence.

Length-N Sequence — N-point DFT
x[n] ↔ X[k] = Re{X[k]} + j Im{X[k]}
x_pe[n] ↔ Re{X[k]}
x_po[n] ↔ j Im{X[k]}
Symmetry relations: X[k] = X*[⟨−k⟩_N], Re X[k] = Re X[⟨−k⟩_N], Im X[k] = −Im X[⟨−k⟩_N], |X[k]| = |X[⟨−k⟩_N]|, arg X[k] = −arg X[⟨−k⟩_N]

Note: x_pe[n] and x_po[n] are the periodic even and periodic odd parts of x[n], respectively.

Figure 3.10: Illustration of a circular shift of a finite-length sequence: (a) x[n], (b) x[⟨n − 1⟩₆] = x[⟨n + 5⟩₆], and (c) x[⟨n − 4⟩₆] = x[⟨n + 2⟩₆].

1 sample period or, equivalently, shifted to the left by 5 sample periods. Likewise, Figure 3.10(c) depicts its circularly shifted version shifted to the right by 4 sample periods or, equivalently, shifted to the left by 2 sample periods. As can be seen from Figures 3.10(b) and (c), a right circular shift by n₀ is equivalent to a left circular shift by N − n₀ sample periods. It should be noted that a circular shift by an integer number n₀ greater than N is equivalent to a circular shift by ⟨n₀⟩_N. If we view the length-N sequence displayed on the circumference of a cylinder at N equally spaced points, then the circular shift operation can be considered as a clockwise or anticlockwise rotation of the sequence by n₀ sample spacings on the cylinder.

Figure 3.11: Two length-4 sequences.

3.4.2 Circular Convolution

This property is analogous to the linear convolution of Eq. (2.64), but with a subtle difference. Consider two length-N sequences, g[n] and h[n], respectively.
Their linear convolution results in a length-(2N − 1) sequence y_L[n] given by

y_L[n] = Σ_{m=0}^{2N−2} g[m] h[n − m], 0 ≤ n ≤ 2N − 2,    (3.59)

where we have assumed that both N-length sequences have been zero-padded to extend their lengths to 2N − 1.² The longer length of y_L[n] results from the time-reversal of the sequence h[m] and its linear shifting to the right. The first nonzero value of y_L[n] is y_L[0] = g[0]h[0], and the last nonzero value of y_L[n] is y_L[2N − 2] = g[N − 1]h[N − 1].

To develop a convolution-like operation resulting in a length-N sequence y_C[n], we need to define a circular time-reversal and then apply a circular time-shift. The resulting operation, called a circular convolution, is defined below:³

y_C[n] = Σ_{m=0}^{N−1} g[m] h[⟨n − m⟩_N].    (3.60)

Since the above operation involves two length-N sequences, it is often referred to as an N-point circular convolution, denoted as

y_C[n] = g[n] Ⓝ h[n].    (3.61)

Like the linear convolution, the circular convolution is commutative (Problem 3.65), i.e.,

g[n] Ⓝ h[n] = h[n] Ⓝ g[n].    (3.62)

We illustrate the concept of circular convolution through several examples.
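Before the worked examples, here is a plain-Python sketch of Eq. (3.60) (an illustration, not one of the book's MATLAB programs); it also checks the circular convolution property of Table 3.5 by computing the same result as the inverse DFT of the product of the two DFTs:

```python
import cmath

# N-point circular convolution, Eq. (3.60): y_C[n] = sum_m g[m] h[<n-m>_N]
def circular_convolve(g, h):
    N = len(g)
    return [sum(g[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

g = [1.0, 2.0, 0.0, 1.0]     # the two length-4 sequences of Figure 3.11
h = [2.0, 2.0, 1.0, 1.0]
yc = circular_convolve(g, h)                       # direct, Eq. (3.60)
yc_dft = [v.real for v in idft([G * H for G, H in zip(dft(g), dft(h))])]
```

The direct result yc and the DFT-based result yc_dft agree to floating-point precision, anticipating the DFT-based computation used in the examples that follow.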
EXAMPLE 3.15 Determine the 4-point circular convolution of the two length-4 sequences g[n] and h[n] given by

g[n] = {1, 2, 0, 1}, h[n] = {2, 2, 1, 1},

as sketched in Figure 3.11. The result is a length-4 sequence y_C[n] given by

y_C[n] = g[n] ④ h[n] = Σ_{m=0}^{3} g[m] h[⟨n − m⟩₄], 0 ≤ n ≤ 3.

From the above, y_C[0] is given by

y_C[0] = Σ_{m=0}^{3} g[m] h[⟨−m⟩₄].

²As indicated in Section 2.5.1, the sum of the indices of each sample product inside the summation is equal to the index of the sample being generated by the linear convolution operation.
³Note that here the sum of the indices of each sample product inside the summation, modulo N, is equal to the index of the sample being generated by the circular convolution operation.

Figure 3.12: The circularly time-reversed sequence and its circularly shifted versions: (a) h[⟨−m⟩₄], (b) h[⟨1 − m⟩₄], (c) h[⟨2 − m⟩₄], and (d) h[⟨3 − m⟩₄].

Figure 3.13: Results of convolution of the two sequences of Figure 3.11: (a) circular convolution, and (b) linear convolution.

The circularly time-reversed sequence h[⟨−m⟩₄] is shown in Figure 3.12(a). By forming the products of g[m] with h[⟨−m⟩₄] for each value of m in the range 0 ≤ m ≤ 3 and summing the products, we get

y_C[0] = g[0]h[0] + g[1]h[3] + g[2]h[2] + g[3]h[1] = 2 + 2 + 0 + 2 = 6.

Continuing in this manner, we arrive at the remaining three samples:

y_C[1] = g[0]h[1] + g[1]h[0] + g[2]h[3] + g[3]h[2] = 2 + 4 + 0 + 1 = 7,
y_C[2] = g[0]h[2] + g[1]h[1] + g[2]h[0] + g[3]h[3] = 1 + 4 + 0 + 1 = 6,
y_C[3] = g[0]h[3] + g[1]h[2] + g[2]h[1] + g[3]h[0] = 1 + 2 + 0 + 2 = 5.

EXAMPLE 3.16 The 4-point circular convolution of the two sequences of Example 3.15 can also be computed using the DFT-based approach. The 4-point DFT G[k] of g[n] is given by

[G[0]; G[1]; G[2]; G[3]] = D₄ [1; 2; 0; 1] = [4; 1 − j; −2; 1 + j],

where D₄ is the 4 × 4 DFT matrix

D₄ = [1 1 1 1; 1 −j −1 j; 1 −1 1 −1; 1 j −1 −j].

Likewise, the 4-point DFT H[k] of h[n] is given by

[H[0]; H[1]; H[2]; H[3]] = D₄ [2; 2; 1; 1] = [6; 1 − j; 0; 1 + j].

Forming the product Y_C[k] = G[k]H[k], we obtain

Y_C[k] = {24, −j2, 0, j2},

and a 4-point inverse DFT of Y_C[k] yields the desired circular convolution result

y_C[n] = (1/4) D₄* [24; −j2; 0; j2] = [6; 7; 6; 5],

which is seen to be the same as the results given in Eqs. (3.
65) through (3.70).

EXAMPLE 3.17 Now let us extend the two length-4 sequences of Example 3.15 to length 7 by appending each with three zero-valued samples, i.e.,

g_e[n] = { g[n], 0 ≤ n ≤ 3; 0, 4 ≤ n ≤ 6, }
h_e[n] = { h[n], 0 ≤ n ≤ 3; 0, 4 ≤ n ≤ 6. }

We next determine the 7-point circular convolution of g_e[n] and h_e[n]:

y[n] = Σ_{m=0}^{6} g_e[m] h_e[⟨n − m⟩₇], 0 ≤ n ≤ 6.

From the above,

y[0] = g_e[0]h_e[0] + g_e[1]h_e[6] + g_e[2]h_e[5] + g_e[3]h_e[4] + g_e[4]h_e[3] + g_e[5]h_e[2] + g_e[6]h_e[1] = g[0]h[0] = 2.

Continuing in a similar manner, we arrive at the remaining samples of y[n]:

y[1] = g[0]h[1] + g[1]h[0] = 6,
y[2] = g[0]h[2] + g[1]h[1] + g[2]h[0] = 5,
y[3] = g[0]h[3] + g[1]h[2] + g[2]h[1] + g[3]h[0] = 5,
y[4] = g[1]h[3] + g[2]h[2] + g[3]h[1] = 4,
y[5] = g[2]h[3] + g[3]h[2] = 1,
y[6] = g[3]h[3] = 1.

It is evident from the above that y[n] is precisely the sequence y_L[n] obtained by a linear convolution of g[n] and h[n].

The N-point circular convolution operation of Eq. (3.61) can be written in matrix form as

[y_C[0]; y_C[1]; y_C[2]; …; y_C[N−1]] =
[ h[0] h[N−1] h[N−2] … h[1];
  h[1] h[0] h[N−1] … h[2];
  h[2] h[1] h[0] … h[3];
  ⋮
  h[N−1] h[N−2] h[N−3] … h[0] ] [g[0]; g[1]; g[2]; …; g[N−1]].    (3.82)

The elements in each diagonal of the N × N matrix of Eq. (3.82) are equal. Such a matrix is called a circulant matrix.

3.5 Computation of the DFT of Real Sequences

In most practical applications, sequences of interest are real. In such cases, the symmetry properties of the DFT given in Table 3.7 can be exploited to make the DFT computations more efficient.
These two N-point DFTs can be computed efficiently using a single V-poim DFT X{k] of 2 complex length-/ sequence x(n] defined by ada] = gla] + ala) (3.83) From the above, g[n] = Re{x[a]} and hla From Table 3.6 we arrive at Ln{xln}}, GIkl Ik] [XA] + XU kad} (G84) ay AXUR) — Xk]. G85) Note that X*((—A}w} = XUN — dd BXAMPLICSAN nib exarple we tina ‘of Example 3,15 wing o single 4-point DPT Hane the Sempron of (he Aspect 1 6 the twe6 real exparmicee From By, C3635 the corpbes sence ifn] = ela]-+ jAl bse by sink (t+ 72 Bod: yeh te OFT den io 1 my +e a8 SH poy ee =o || tea r my Ff = St y= “ xe) ay) ee bey en he aba t — ia Thereliorn WEA ye) 72 = oan Sudurinriyy Bye, (3.86) and (3. Td Be on Vey =2) th mie i=p 0 ney 1358) seriFjing the resulte tsi in Kaarigle 3.16, 3.5.2 2N-Point DFT of a Real Sequence Using a Single N-Point DFT Let vfa] be aseal sequence of length 2N with ¥[£] denoting its 2.V-point DFT. Define two real sequences g(t] and [rn] of length N each as sr} = vn], hf) eet, On sfni poant [DFT |Length(N+M—1) ey 9B le = Leng "Lw-1) zeros point DET [H7-1k) Figure 3.14; DF'T-based implementation of the linear convolution of two finite-length sequences. 3.6 Linear Convolution Using the DFT Linear convolution is a key operation in most signal processing applications. Since an N-point DFT can be implemented very efficiently using approximately N (og, NY) arithmetic operations, it is of interest to investigate methods for the implementation ofthe linear convolutionusing the DFT. Earlier in Example 3.17, ‘we have already illustrated for a very specific case how to implement a lineer convolution using a circular convolution. We first generalize this example for the linear convolution of two finite-length sequences of ‘unequal lengths, Later we consider the implementation of the lincar convolution of a finite-length sequence with an infinite-length sequence. 3.6.1. 
Linear Convolution of Two Finite-Length Sequences

Let g[n] and h[n] be finite-length sequences of lengths N and M, respectively. Denote L = N + M − 1. Define two length-L sequences,

g_e[n] = { g[n], 0 ≤ n ≤ N − 1; 0, N ≤ n ≤ L − 1, }    (3.92)
h_e[n] = { h[n], 0 ≤ n ≤ M − 1; 0, M ≤ n ≤ L − 1, }

obtained by appending g[n] and h[n] with zero-valued samples. Then

y_L[n] = g[n] ⊛ h[n] = y_C[n] = g_e[n] Ⓛ h_e[n].    (3.93)

To implement Eq. (3.93) using the DFT, we first zero-pad g[n] with (M − 1) zeros to obtain g_e[n], and zero-pad h[n] with (N − 1) zeros to obtain h_e[n]. Then we compute the (N + M − 1)-point DFTs of g_e[n] and h_e[n], respectively, resulting in G_e[k] and H_e[k]. An (N + M − 1)-point IDFT of the product G_e[k]H_e[k] results in y_L[n]. The process involved is sketched in Figure 3.14.

The following example uses MATLAB to illustrate the above approach.

EXAMPLE 3.20 Program 3.5 can be used to determine the linear convolution of two finite-length sequences via the DFT-based approach and compare the result with that obtained using a direct linear convolution. The input data, to be entered in vector format inside square brackets, are the two sequences to be convolved. The program plots the result of the linear convolution obtained using the DFT-based approach, and the difference between this result and the sequence obtained using a direct linear convolution employing the M-function conv. We use this program to verify the results of Example 3.17, as indicated in Figure 3.15.

% Program 3_5
% Linear Convolution Via the DFT
subplot(2,1,2)
stem(k,abs(error));
xlabel('Time index n'); ylabel('Amplitude');
title('Magnitude of error sequence')

3.6.2 Linear Convolution of a Finite-Length Sequence with an Infinite-Length Sequence

We consider now the DFT-based implementation of

y[n] = Σ_{ℓ=0}^{M−1} h[ℓ] x[n − ℓ] = h[n] ⊛ x[n], (3.94)

where h[n] is a finite-length sequence of length M and x[n] is of infinite length (or a finite-length sequence of length much greater than M). There are two different approaches to solving this problem, as described below [Sto66].

Overlap-Add Method

In this method, we first segment x[n], assumed to be a causal sequence here without any loss of generality, into a set of contiguous finite-length subsequences x_m[n] of length N each:

x[n] = Σ_{m=0}^{∞} x_m[n − mN], (3.95)

where

x_m[n] = { x[n + mN], 0 ≤ n ≤ N − 1,
           0,          otherwise.        (3.96)

Substituting Eq. (3.95) in Eq. (3.94), we get

y[n] = Σ_{m=0}^{∞} y_m[n − mN], (3.97)

where y_m[n] = h[n] ⊛ x_m[n]. Since h[n] is of length M and x_m[n] is of length N, the linear convolution h[n] ⊛ x_m[n] is of length (N + M − 1). As a result, the desired linear convolution of Eq. (3.94) has been broken up into a sum of an infinite number of short-length linear convolutions of length (N + M − 1) each. Each of these short convolutions can be implemented using the method outlined in Figure 3.14, where now the DFTs (and the IDFT) are computed on the basis of (N + M − 1) points.

There is one more subtlety to take care of before we can implement Eq. (3.97) using the DFT-based method. Now the first short convolution in Eq. (3.97), given by h[n] ⊛ x_0[n], which is of length (N + M − 1), is defined for 0 ≤ n ≤ N + M − 2. The second short convolution in Eq. (3.97), given by h[n] ⊛ x_1[n], is also of length (N + M − 1) but is defined for N ≤ n ≤ 2N + M − 2. Thus the last M − 1 samples of each short convolution overlap the first M − 1 samples of the next one, and the overlapped samples are added to form the correct result; hence the name overlap-add method.

Overlap-Save Method

In performing an N-point circular convolution of a length-M sequence h[n] with a length-N sequence x[n], with N > M, the first M − 1 samples of the circular convolution are incorrect and are rejected, while the remaining N − M + 1 samples correspond to the correct samples of the linear convolution of h[n] and x[n].
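As a rough illustration of the segmentation of Eqs. (3.95)–(3.97), the following Python sketch breaks x[n] into length-N blocks, carries out each short convolution through (N + M − 1)-point DFTs as in Figure 3.14 (a naive O(L²) DFT stands in for a fast transform), and adds the overlapped short results. The function names here are ours, not the text's.

```python
import cmath

def dft(x):
    # Naive N-point DFT: X[k] = sum_n x[n] e^{-j 2 pi k n / N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Naive N-point inverse DFT
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def overlap_add(x, h, N):
    # Segment x into length-N blocks (block starting at sample m, i.e., mN in
    # Eq. (3.95)), convolve each block with h via (N+M-1)-point DFTs, and add
    # the short results shifted by the block offset, as in Eq. (3.97).
    M = len(h)
    L = N + M - 1                       # length of each short convolution
    y = [0.0] * (len(x) + M - 1)
    H = dft(h + [0.0] * (L - M))        # zero-padded DFT of h, computed once
    for m in range(0, len(x), N):
        block = x[m:m + N]
        Xm = dft(block + [0.0] * (L - len(block)))
        ym = idft([Xm[k] * H[k] for k in range(L)])
        for n, v in enumerate(ym):      # overlap and add
            if m + n < len(y):
                y[m + n] += v.real
    return y
```

Because every block is zero-padded to length N + M − 1 before the DFT, each product of DFTs yields the short linear (not time-aliased circular) convolution, and the overlapped tails add up to the exact result of Eq. (3.94).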
Now consider an infinitely long or a very long sequence x[n]. We break it up as a collection of smaller length (length-4) sequences x_m[n] as indicated below:

x_m[n] = x[n + 2m], 0 ≤ n ≤ 3, 0 ≤ m ≤ ∞. (3.100)

Next, we form

w_m[n] = h[n] ④ x_m[n],

or equivalently,

w_m[0] = h[0]x_m[0] + h[1]x_m[3] + h[2]x_m[2],
w_m[1] = h[0]x_m[1] + h[1]x_m[0] + h[2]x_m[3],
w_m[2] = h[0]x_m[2] + h[1]x_m[1] + h[2]x_m[0],
w_m[3] = h[0]x_m[3] + h[1]x_m[2] + h[2]x_m[1]. (3.101)

Computing the above for m = 0, 1, 2, 3, ..., and substituting the values of x_m[n] from Eq. (3.100), we arrive at

w_0[0] = h[0]x[0] + h[1]x[3] + h[2]x[2] ← Reject
w_0[1] = h[0]x[1] + h[1]x[0] + h[2]x[3] ← Reject
w_0[2] = h[0]x[2] + h[1]x[1] + h[2]x[0] = y[2] ← Save
w_0[3] = h[0]x[3] + h[1]x[2] + h[2]x[1] = y[3] ← Save
w_1[0] = h[0]x[2] + h[1]x[5] + h[2]x[4] ← Reject
w_1[1] = h[0]x[3] + h[1]x[2] + h[2]x[5] ← Reject
w_1[2] = h[0]x[4] + h[1]x[3] + h[2]x[2] = y[4] ← Save
w_1[3] = h[0]x[5] + h[1]x[4] + h[2]x[3] = y[5] ← Save
w_2[0] = h[0]x[4] + h[1]x[7] + h[2]x[6] ← Reject
w_2[1] = h[0]x[5] + h[1]x[4] + h[2]x[7] ← Reject
w_2[2] = h[0]x[6] + h[1]x[5] + h[2]x[4] = y[6] ← Save
w_2[3] = h[0]x[7] + h[1]x[6] + h[2]x[5] = y[7] ← Save

It should be noted that to determine y[0] and y[1], we need to form

x_{−1}[0] = 0, x_{−1}[1] = 0, x_{−1}[2] = x[0], x_{−1}[3] = x[1],

and compute w_{−1}[n] = h[n] ④ x_{−1}[n] for 0 ≤ n ≤ 3, reject w_{−1}[0] and w_{−1}[1], and save w_{−1}[2] = y[0] and w_{−1}[3] = y[1].

Generalizing the above, let h[n] be a sequence of length M, and x_m[n], the mth section of an infinitely long sequence x[n] defined by

x_m[n] = x[n + m(N − M + 1)], 0 ≤ n ≤ N − 1, (3.102)

be of length N, with M < N. If w_m[n] denotes the N-point circular convolution of h[n] and x_m[n], i.e., w_m[n] = h[n] Ⓝ x_m[n], then we reject the first M − 1 samples of w_m[n] and "abut" the remaining N − M + 1 saved samples of w_m[n] to form y_L[n], the linear convolution of h[n] and x[n]. If we denote the saved portion of w_m[n] as y_m[n], i.e.,
y_m[n] = w_m[n + M − 1], 0 ≤ n ≤ N − M, (3.103)

then

y_L[n + m(N − M + 1)] = y_m[n], 0 ≤ n ≤ N − M. (3.104)

The above process is illustrated in Figure 3.18. The approach is called the overlap-save method since the input is segmented into overlapping sections and part of the results of the circular convolutions are saved and abutted to determine the linear convolution result.

Figure 3.18: Illustration of the overlap-save method: (a) overlapped segments of the sequence x[n] of Figure 3.6, (b) sequences generated by an 11-point circular convolution, and (c) sequence obtained by rejecting the first four samples of w_m[n] and abutting the remaining samples.

3.7 The z-Transform

The discrete-time Fourier transform provides a frequency-domain representation of discrete-time signals and LTI systems. Because of the convergence condition, in many cases, the discrete-time Fourier transform of a sequence may not exist, and as a result, it is not possible to make use of such a frequency-domain characterization in these cases. A generalization of the discrete-time Fourier transform defined in Eq. (3.1) leads to the z-transform, which may exist for many sequences for which the discrete-time Fourier transform
3.7.1 Definition For a given sequence g [1 its z-transform G(z) is defined as Zgial= Do etal’, (3.105) where z = Re(z) + jlm(z) is a complex variable. If we let z = re, then the right-hand side of the above expression reduces to (3.106) which can be interpreted as the discrete-time Fourier transform of the modified sequence {¢[n]r—"). For r= 1G.e. [z| = 1), the z-transform of g[7] reduces to its discrete-time Fourier transform, provided the later exists. The contour |z| = 1 is a circle in the z-plane of unity radius and is called the wnit circle. Like the discrete-time Fourier transform, there are conditions on the convergence of the infinite series of Bq. (3.105), For a given scquence, the set R of values of z for which its z-transform converges is called the rexion of convergence (ROC). It follows from our earlier discussion on the uniform convergence of the discrete-time Fourier transform that the series of Eq. (3.106) converges if gln]r~" is absolutely summable, ie. if 107) Jn general, the region of convergence of a z-transform of a sequence g[n] is an annular region of the =-plane: Rew < lel < Ry (3.108) where 0 < Ry- < Rey % 00. It should be noted that the z-transform as defined by Eq. (3.105) is a form of 2 Laurent series and is an analytic function at every point in the ROC. This in turn implics that the s-transform and all its derivatives are coatinuous functions of the complex variable z in the ROC. EXAMPLE S22 Let us dutermine the ztranstion 22) of the Jar 158 Chapter 3: Discrete-Time Signals in the Transform Domain Table 3.8: Some commonly used z-transform pars. 
Sequence              z-Transform                                                   ROC

δ[n]                  1                                                             All values of z
μ[n]                  1/(1 − z^{−1})                                                |z| > 1
α^n μ[n]              1/(1 − α z^{−1})                                              |z| > |α|
(r^n cos ω₀n) μ[n]    (1 − r cos(ω₀) z^{−1}) / (1 − 2r cos(ω₀) z^{−1} + r² z^{−2})  |z| > r
(r^n sin ω₀n) μ[n]    (r sin(ω₀) z^{−1}) / (1 − 2r cos(ω₀) z^{−1} + r² z^{−2})      |z| > r

The z-transform μ(z) of the unit step sequence μ[n] can be obtained from Eq. (3.110) by setting α = 1:

μ(z) = 1/(1 − z^{−1}), |z| > 1.

The ROC of μ(z) is thus the annular region 1 < |z| ≤ ∞. Note that the unit step sequence is not absolutely summable, and, as a result, its Fourier transform does not converge uniformly.

EXAMPLE 3.23 Consider the anticausal sequence y[n] = −α^n μ[−n − 1]. Its z-transform is found to be Y(z) = 1/(1 − α z^{−1}), where now the ROC is the annular region |z| < |α|.

It should be noted that in both of the above examples, the z-transforms are identical even though their parent sequences are different. The only way a unique sequence can be associated with a z-transform is by specifying its ROC. We shall discuss further the importance of the ROC in the following section.

It follows from the above that the Fourier transform G(e^{jω}) of a sequence g[n] converges uniformly if and only if the ROC of the z-transform G(z) of the sequence includes the unit circle. On the other hand, the existence of the Fourier transform does not always imply the existence of the z-transform. For example, the finite-energy sequence h_LP[n] of Eq. (3.12) has a Fourier transform H_LP(e^{jω}) given by Eq. (3.11), which converges in the mean-square sense. However, this sequence does not have a z-transform, as h_LP[n] r^{−n} is not absolutely summable for any value of r.

Some commonly used z-transform pairs are listed in Table 3.8.

3.7.2 Rational z-Transforms

In the case of LTI discrete-time systems that we are concerned with in this text, all pertinent z-transforms are rational functions of z^{−1}; that is, they are ratios of two polynomials in z^{−1}:

G(z) = P(z)/D(z) = (p₀ + p₁z^{−1} + ⋯ + p_{M−1}z^{−(M−1)} + p_M z^{−M}) / (d₀ + d₁z^{−1} + ⋯ + d_{N−1}z^{−(N−1)} + d_N z^{−N}), (3.113)

where the degree of the numerator polynomial P(z) is M and that of the denominator polynomial D(z) is N. An alternate representation of a rational z-transform is as a ratio of two polynomials in z:

G(z) = z^{(N−M)} (p₀z^M + p₁z^{M−1} + ⋯ + p_{M−1}z + p_M) / (d₀z^N + d₁z^{N−1} + ⋯ + d_{N−1}z + d_N). (3.114)

The above equation can be alternately written in factored form as

G(z) = (p₀ ∏_{ℓ=1}^{M} (1 − ξ_ℓ z^{−1})) / (d₀ ∏_{ℓ=1}^{N} (1 − λ_ℓ z^{−1})) = z^{(N−M)} (p₀ ∏_{ℓ=1}^{M} (z − ξ_ℓ)) / (d₀ ∏_{ℓ=1}^{N} (z − λ_ℓ)). (3.115)

At a root z = ξ_ℓ of the numerator polynomial, G(ξ_ℓ) = 0, and as a result, these values of z are known as the zeros of G(z). Likewise, at a root z = λ_ℓ of the denominator polynomial, G(λ_ℓ) → ∞, and these points in the z-plane are called the poles of G(z). Observe from the expression in Eq. (3.115) that there are M finite zeros and N finite poles of G(z). It also follows from the above expression that there are additional (N − M) zeros at z = 0 (the origin in the z-plane) if N > M, or additional (M − N) poles at z = 0 if N < M. For example, the z-transform of the sequence (−0.6)^n μ[n] has a zero at z = 0 and a pole at z = −0.6, as shown in the pole-zero plot of Figure 3.21.
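The role of the ROC can be checked numerically from the defining series of Eq. (3.105). For x[n] = (−0.6)^n μ[n], partial sums of the series settle to the closed form 1/(1 − αz^{−1}) when |z| > 0.6 and grow without bound when |z| < 0.6. A minimal Python sketch (the helper name is ours, not the book's):

```python
def partial_ztransform(alpha, z, terms):
    # Partial sum of Eq. (3.105) for x[n] = alpha^n mu[n]:
    #   sum_{n=0}^{terms-1} alpha^n z^{-n} = sum_n (alpha/z)^n
    return sum((alpha / z) ** n for n in range(terms))

alpha = -0.6

# Inside the ROC (|z| = 1 > 0.6) the series converges to 1/(1 - alpha z^{-1}).
closed_form = 1.0 / (1.0 - alpha / 1.0)          # evaluated at z = 1
inside = partial_ztransform(alpha, 1.0, 200)

# Outside the ROC (|z| = 0.3 < 0.6) the partial sums keep growing in magnitude.
small = abs(partial_ztransform(alpha, 0.3, 50))
large = abs(partial_ztransform(alpha, 0.3, 100))
```

Doubling the number of terms inside the ROC changes the sum only by a vanishing remainder, whereas outside the ROC it multiplies the magnitude enormously, which is exactly the geometric-series behavior behind the |z| > |α| condition.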
An altemate representation of a rational z-transform is as a ratio of two polynomials in 2: aay Po + pie! -- + pm iz + pw G FN NDE nn 3.414) © yeh + di dy ya dy on ‘The above equation can be alternately written in factored form as po TT O82 _ vay Pe TL 80 G6) =e Se a G5) do Thea) Ree“) are Ata root2 = § of the numerator polynomial, G (gz) = 0, and asa result, these values of z are known as the zeros of Giz). Likewise, at a root z = Ay of the denominator polynomial, G(Ae) — oc, and these points in the z-plane are called the poles of G(z). Observe from the expression in Eq. (3.115) that there are M finite zeros and N finite potes of G(z). It also follows from the above expression that there are additional (K ~ M) zeros at ¢ ~ 0 (the origin in the z-plane) if N > Af of additional (4 — N) poles at 2 = if 4 OT empl thn de AES bs js mie she care going trop he pian 2 = —016 and Soni a the seer Sy mt Wee ot Fajgrre S21 Nome what 4c), haw a sent at c= Oem pole ad z ounity 3.8. Region of Convergence of a Rational z-Transform 161 on Pena ead Figure 3.21: Pote-zero plot of Z((—0.6)" In]. In general, the ROC depends on the type of the sequence of interest as defined earlier in Section 2.1.1 ‘We examine n the next four examples the ROCs of the z-transforms of several different types of sequences. EXAROPLE 325 Chive « fimire-ferath mequence fn] détined Yor Jar Whaneas the wocnnd! term ooawerges for [2] = ber, and tinnce. there ia inn entiag of the bo BEC. For a sequence with a rational z-transform, the ROC of the z-transform cannot contain any poles and is bounded by the poles. ‘This property of such z-transforms can be seen from the z-transform of a unit step sequence given by Eq. (3.116) and illustrated by the pole-zero plot of Figure 3.20. Another example is the sequence defined in Example 3.24 and its z-transform given by Eq. 
(3.117), with the corresponding pole-zero plot given in Figure 3.21.

To show that the ROC is bounded by the poles, assume that the z-transform X(z) has simple poles at z = α and z = β, with |α| < |β|. If the sequence is also assumed to be a right-sided sequence, then it is of the form

x[n] = (r₁(α)^n + r₂(β)^n) μ[n − N₀], (3.123)

where N₀ is a positive or negative integer. Now the z-transform of a right-sided sequence (γ)^n μ[n − N₀] exists if

Σ_{n=N₀}^{∞} |γ|^n |z|^{−n} < ∞

for some z. It can be seen that the above holds for |z| > |γ| but not for |z| ≤ |γ|. The right-sided sequence of Eq. (3.123) has thus an ROC defined by |β| < |z| ≤ ∞. A similar argument shows that if X(z) is the z-transform of a left-sided sequence of the form of Eq. (3.123), with μ[n − N₀] replaced by μ[−n − N₀], then its ROC is defined by 0 ≤ |z| < |α|. Finally, for a two-sided sequence, some of the poles contribute to terms for n < 0 and the others to terms for n ≥ 0. The ROC is thus bounded on the outside by the pole with the smallest magnitude that contributes for n < 0 and on the inside by the pole with the largest magnitude that contributes for n ≥ 0.

Figure 3.22 shows the three possible ROCs of a rational z-transform with poles at z = α and z = β, with each ROC associated with a unique sequence. In general, if the rational z-transform has N poles with R distinct magnitudes, then it has R + 1 ROCs and, as a result, R + 1 distinct sequences having the same
