
Digital signal processing

Lecture notes


Chapter 1 Introduction

• Signal: A signal is defined as a physical quantity that varies


with independent variable/s (time, space or other variable).
• Example of signals around us: electrical voltage signal,
voice, image, and video.
• Signal processing: When certain operations are performed
on the signal and the latter undergoes some change, it is
called signal processing.

1.1 Classification of signals


1.1.1 Continuous Time and Discrete Time Signals:

• A continuous-time (CT) signal is a signal that is specified for every
value of time t (e.g., audio and video recordings).
• A discrete-time (DT) signal is a signal that is specified only at
discrete points of t (e.g., monthly sales of a corporation).
• Signals from physical systems are often functions of
continuous time.
• Sampling is the operation of converting a CT signal into a DT
signal.
• To distinguish between continuous-time and discrete-time
signals, we will use the symbol ‘t’ to denote the continuous-
time independent variable and ‘n’ to denote the discrete-time


independent variable. A CT signal is written as a function x(t), and a
DT signal is written as a sequence x[n].

1.1.2 Continuous-Amplitude versus Discrete-Amplitude signals

• Continuous-Amplitude signal is a signal whose amplitude


can take any value.
• Discrete-Amplitude signal is one whose amplitude can take
on only a finite number of values.
• Converting a continuous-valued signal into a discrete-valued signal
is called quantization; it is an approximation process.


1.1.3 Analog and Digital signals:

• Analog signal is a signal whose amplitude can take any


value for continuous range of time (continuous function v of
continuous variable t).
• Digital signal is one whose amplitude can take on only a
finite number of values at certain time instants (discrete set
of possible function values).

1.1.4 Periodic and Aperiodic (nonperiodic) Signals:

• Periodic signal: a periodic signal repeats itself with some
positive constant period 𝑇0 = 1/𝑓0, where 𝑇0 is the
fundamental period and 𝑓0 is the fundamental frequency:
𝑥(𝑡 + 𝑇0) = 𝑥(𝑡)


• An example of a periodic signal is show in Fig. (a).
• Aperiodic signal (nonperiodic): Aperiodic signal does not
repeat itself, as in Fig. (b).

• A DT signal x[n] is periodic with period N if


𝑥[𝑛 + 𝑁] = 𝑥[𝑛]
1.1.5 Even vs. Odd signals:

• Even signal: signals that are symmetric about the vertical
axis, 𝑓(𝑡) = 𝑓(−𝑡).
• Odd signal: signals that are antisymmetric about the vertical
axis (symmetric about the origin), 𝑓(𝑡) = −𝑓(−𝑡).

1.1.6 Causal, Anticausal, and Noncausal Signals

• Causal signal: signals that are zero for all negative time, as
in Fig. (a).
• Anticausal signal: signals that are zero for all positive time,
as in Fig. (b).
• Noncausal signal: signals that have nonzero values in both
positive and negative time, as in Fig. (c).

1.1.7 Finite and Infinite Length Signals:

• Finite-length signal: a signal that is defined only over a finite
interval of time, 𝑡1 < 𝑡 < 𝑡2.

• Infinite-length signal: a signal that is defined for all possible
values of time, −∞ < 𝑡 < ∞.

1.1.8 Energy and power signals

• The energy of the signal 𝑓(𝑡) is defined as

$$E_f = \int_{-\infty}^{\infty} |f(t)|^2 \, dt$$

• The average power of the signal 𝑓(𝑡) is defined as

$$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2 \, dt$$

Energy signal: a signal with finite energy and zero average
power.
Power signal: a signal with finite average power and infinite
energy.
A signal cannot be both an energy signal and a power signal.
Periodic signals are power signals.
For a periodic signal:

$$E_f = \infty, \qquad P = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} |f(t)|^2 \, dt = \frac{1}{T_0} \int_{0}^{T_0} |f(t)|^2 \, dt$$

Example:
Determine energy and average power of the two signals in
the following figure

Solution:
a) 𝑝𝑎𝑣𝑔 = 0,

b) 𝐸 = ∞

Example:
Determine the power of the following periodic signal.
𝑔(𝑡) = 𝐶 cos(𝜔0 𝑡 + 𝜃)

Solution:
$$P = \frac{1}{T_0} \int_{0}^{T_0} |g(t)|^2 \, dt = \frac{1}{T_0} \int_{0}^{T_0} C^2 \cos^2(\omega_0 t + \theta) \, dt$$

$$P = \frac{C^2}{T_0} \int_{0}^{T_0} \left( \frac{1}{2} + \frac{1}{2}\cos(2\omega_0 t + 2\theta) \right) dt$$

$$P = \frac{C^2}{T_0} \int_{0}^{T_0} \frac{1}{2}\, dt + \frac{C^2}{T_0} \int_{0}^{T_0} \frac{1}{2}\cos(2\omega_0 t + 2\theta)\, dt = \frac{C^2}{2} + 0 = \frac{C^2}{2}$$

1.2 Some useful signal models


1.2.1 Unit impulse function

• The unit impulse function δ(t), first introduced by Dirac
(the "Dirac delta"), is often used to model sampling. It is zero
everywhere except at the origin and has unit area:

$$\delta(t) = 0 \ \text{for } t \neq 0, \qquad \int_{-\infty}^{\infty} \delta(t)\, dt = 1$$


• Multiplication of a function by the Dirac delta and by its
time-shifted version (the sampling property):
𝑓(𝑡)𝛿(𝑡) = 𝑓(0)𝛿(𝑡), 𝑓(𝑡)𝛿(𝑡 − 𝑡0) = 𝑓(𝑡0)𝛿(𝑡 − 𝑡0)

• The discrete-time Dirac delta (unit sample) is defined as follows:
𝛿(𝑛) = 1 for 𝑛 = 0, and 𝛿(𝑛) = 0 for 𝑛 ≠ 0

1.2.2 Unit Step function:

• In much of our discussion, the signals begin at t = 0. Such


signals can be conveniently described in terms of unit step
function u(t) shown in the following figure


$$u(t) = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases}$$

• The DT unit step function is defined as 𝑢(𝑛) = 1 for 𝑛 ≥ 0
and 𝑢(𝑛) = 0 for 𝑛 < 0.

1.2.3 Real exponentials

𝑥(𝑡) = 𝐶𝑒 𝑎𝑡


1.2.4 Complex exponential

$$x(t) = C e^{j\omega_0 t}$$
The complex exponential is a periodic signal with period 𝑇0 = 2𝜋/𝜔0,
$$x(t) = C e^{j\omega_0 t} = C\left(\cos \omega_0 t + j \sin \omega_0 t\right)$$
1.2.5 Sinusoidal signal

𝑥(𝑡) = 𝐴 cos(𝜔0𝑡 + 𝜙)


1.2.6 Sinc function
$$\mathrm{sinc}(x) = \frac{\sin x}{x}, \qquad \mathrm{sinc}(0) = \lim_{x \to 0} \frac{\sin x}{x} = 1$$
Zero crossings at 𝑥 = 𝑛𝜋 (for nonzero integer 𝑛).

1.3 DT Signal Representation using unit impulse


• The amplitudes of the DT signal samples are sketched only
at their corresponding time indices, where 𝑥(𝑛) represents
the amplitude of the 𝑛𝑡ℎ sample and 𝑛 is the time index or
sample number.
• For example, the signal represented in the following figure,


𝑥(0): Zeroth sample amplitude at the sample number 𝑛 = 0.
𝑥(1): First sample amplitude at the sample number 𝑛 = 1.
𝑥(2):Second sample amplitude at the sample number 𝑛 = 2.
𝑥(3): Third sample amplitude at the sample number 𝑛 = 3.
In general, we can represent the DT signal as
$$x(n) = \cdots + x(-2)\delta(n+2) + x(-1)\delta(n+1) + x(0)\delta(n) + x(1)\delta(n-1) + x(2)\delta(n-2) + \cdots$$
or, compactly,
$$x(n) = \sum_{k=-\infty}^{\infty} x(k)\,\delta(n-k)$$

17

1.3.1 Unit impulse and unit step

18

1.4 Some useful signal operations
1.4.1 Time shifting

19

1.4.2 Time reversal

1.4.3 Time scalling and compression

20

1.4.4 Combined time scaling and time shifting

21

1.5 Analog to Digital converter (ADC):
• To transform the analog signal into digital signal, the analog
signal is passed through 3 stages as follows:

Sampling Quantization Encoding

ADC
Sampling:
• The waveform is then sampled at or above the Nyquist rate
𝑓𝑠 to produce a series of analogue samples or pulses.

22

• The sampling is done by multiplying the analog signal by a
periodic train of unit impulses (or narrow pulses) with
sampling period 𝑇𝑠.

• The sampling theorem guarantees that an analog signal can


be in theory perfectly recovered as long as the sampling rate,
𝑓𝑠 , is at least twice of the highest-frequency component of
the analog signal to be sampled, 𝑓𝑚𝑎𝑥 . Otherwise, the signal
cannot be reconstructed (aliasing distortion).

• The condition is described as:


𝑓𝑠 ≥ 𝑁𝑦𝑞𝑢𝑖𝑠𝑡 𝑟𝑎𝑡𝑒,
𝑁𝑦𝑞𝑢𝑖𝑠𝑡 𝑟𝑎𝑡𝑒 = 2𝑓𝑚𝑎𝑥 , 𝑓𝑠 = 1/𝑇𝑠
• Quantizing: each analogue sample is then converted to digital
form by mapping it to the nearest of 𝑀 dedicated discrete
levels.

23

• Encoding: this coding process generates 𝑁 binary bits
for each sample. The minimum number of required bits for
𝑀 quantization levels is:
𝑁 = ⌈𝑙𝑜𝑔2 𝑀⌉
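A minimal sketch of these three ADC stages (assuming NumPy; the 50 Hz test tone, the sampling rate, and the choice of M = 16 uniform levels over [−1, 1] are illustrative assumptions, not values from the text):

```python
import numpy as np

fs = 1000                          # sampling rate in Hz (illustrative)
M = 16                             # number of quantization levels (illustrative)
N = int(np.ceil(np.log2(M)))       # minimum bits per sample: N = ceil(log2 M) = 4

# Sampling: evaluate an example analog signal at t = n*Ts
n = np.arange(0, 32)
x = np.sin(2 * np.pi * 50 * n / fs)          # samples of a 50 Hz sine

# Quantization: map each sample to the nearest of M uniform levels in [-1, 1]
levels = np.linspace(-1, 1, M)
codes = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
xq = levels[codes]                           # quantized amplitudes

# Encoding: represent each level index with N binary bits
bits = [format(int(c), f'0{N}b') for c in codes]
print(N, bits[:4], np.max(np.abs(x - xq)))   # bits/sample, first codewords, max quantization error
```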

1.6 Systems
• A system is any physical device that performs an operation
on a signal. Such operation is referred to as signal processing.
• A system is characterized by its inputs, its outputs (responses),
and a mathematical model relating the outputs to the inputs.
• Differential equations and s-plane are used to model the CT
system, while difference equations and z-plane are used for
DT systems.

24

1.6.1 Classification of DT systems

Discrete-time systems can be classified into:


• Memoryless (instantaneous) and Memory (dynamic)
Systems.
• Linear and Non-linear Systems.
• Time Invariant and Time Variant Systems.
• Causal and Non-Causal Systems.
• Stable and unstable systems.

1.6.1.1 Memoryless (instantaneous) and Memory (dynamic)

Systems.
• A system is referred to as memoryless if the output 𝑦(𝑛) at
every value of 𝑛 depends only on the input 𝑥(𝑛) at the same
value of 𝑛.

25

Example: which of the following systems are memoryless?
𝑦(𝑛) = 𝑥(𝑛)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑥(𝑛 − 1)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2)
Solution:
𝑦(𝑛) = 𝑥(𝑛) (Memoryless)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛) (Memoryless)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑥(𝑛 − 1) (Not memoryless)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2) (Not memoryless)
1.6.1.2 Linear and Non-linear Systems.
• A linear system is a system for which the output due to a
weighted sum of inputs equals the same weighted sum of the
individual outputs produced by those inputs (superposition).
• For example, let
𝑦1(𝑛) be the system output corresponding to the input 𝑥1(𝑛), and
𝑦2(𝑛) be the system output corresponding to the input 𝑥2(𝑛).
 Then the system is linear if 𝛼𝑦1(𝑛) + 𝛽𝑦2(𝑛) is the output
corresponding to the input 𝛼𝑥1(𝑛) + 𝛽𝑥2(𝑛); otherwise the
system is non-linear.

26

Example: which of the following systems are linear?
𝑦(𝑛) = 𝑥(𝑛)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑥(𝑛 − 1)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2)
𝑦(𝑛) = 𝑙𝑜𝑔10 𝑥(𝑛)
Solution:
𝑦(𝑛) = 𝑥(𝑛) (Linear)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛) (Non-linear)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑥(𝑛 − 1) (Linear)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2) (Linear)
𝑦(𝑛) = 𝑙𝑜𝑔10 𝑥(𝑛) (Non-linear)
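The superposition condition above can be spot-checked numerically. The sketch below (assuming NumPy; random finite-length test inputs and arbitrary weights α, β — such a check can expose non-linearity but cannot prove linearity) applies it to y(n) = 3x(n) + x²(n) from the example:

```python
import numpy as np

def system(x):
    # Example system from the text: y(n) = 3*x(n) + x(n)^2
    return 3 * x + x ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
alpha, beta = 2.0, -0.5

lhs = system(alpha * x1 + beta * x2)           # response to the weighted-sum input
rhs = alpha * system(x1) + beta * system(x2)   # weighted sum of the individual responses

print(np.allclose(lhs, rhs))                   # False: the squaring term violates superposition
```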

1.6.1.3 Time Invariant and Time Variant Systems.


• For a time-invariant system, if 𝑦1(𝑛) is the system
output due to the input 𝑥1(𝑛), then the shifted input
𝑥1(𝑛 − 𝑛0) produces the output 𝑦1(𝑛 − 𝑛0), shifted
by the same amount of time 𝑛0, as in Fig. (a); otherwise
the system is time variant, as in Fig. (b).

(a)

(b)
• Example: which of the following systems are time invariant?
A. 𝑦(𝑛) = 2𝑥(𝑛 − 5)
B. 𝑦(𝑛) = 2𝑥(3𝑛)
28

29

Example: which of the following systems are time invariant?
𝑦(𝑛) = 𝑥(𝑛)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥(2𝑛 − 1)
𝑦(𝑛) = 3𝑥(3𝑛) + 4𝑥(𝑛 − 1)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(5𝑛 + 2)
Solution:
𝑦(𝑛) = 𝑥(𝑛) (Time Invariant)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥(2𝑛 − 1) (Time Variant)
𝑦(𝑛) = 3𝑥(3𝑛) + 4𝑥(𝑛 − 1) (Time Variant)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2) (Time Invariant)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(5𝑛 + 2) (Time Variant)
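A similar numerical spot-check for time invariance (assuming NumPy; only a sketch on one random test input, so it can expose time variance but not prove invariance) compares shifting the input before the system with shifting the output after it, here for y(n) = 3x(3n) + 4x(n − 1) from the solution above:

```python
import numpy as np

def make_signal(values, start=0):
    """Return x(n) as a function of the integer index n; zero outside the stored support."""
    def x(n):
        idx = n - start
        return values[idx] if 0 <= idx < len(values) else 0.0
    return x

def system(x):
    # Time-variant example from the text: y(n) = 3*x(3n) + 4*x(n-1)
    return lambda n: 3 * x(3 * n) + 4 * x(n - 1)

rng = np.random.default_rng(1)
x = make_signal(rng.standard_normal(12))
n0 = 2                                           # test delay
x_shifted = lambda n: x(n - n0)                  # delay the input first

ns = np.arange(-5, 20)
y_then_shift = np.array([system(x)(n - n0) for n in ns])     # delay the output
shift_then_y = np.array([system(x_shifted)(n) for n in ns])  # output of the delayed input

print(np.allclose(y_then_shift, shift_then_y))   # False: the x(3n) term makes the system time variant
```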

1.6.1.4 Causal and Non-Causal Systems.


• A causal system is the one in which the output 𝑦(𝑛) at time
𝑛 depends only on the current input 𝑥(𝑛) at time n, and its
past input sample values such as 𝑥(𝑛 − 1), 𝑥(𝑛 −
2), … . , as in Fig. (a).
• Otherwise, if the system output 𝑦(𝑛) depends on future
input values such as 𝑥(𝑛 + 1), 𝑥(𝑛 + 2), …, the system is
noncausal, as in Fig. (b).
• The noncausal systems cannot be realized in real time.
However, they are realizable when the independent variable
is other than time (e.g. space).

30

Example: which of the following systems are causal?
𝑦(𝑛) = 𝑥(𝑛) + 2𝑥(𝑛 − 1) + 4𝑥(𝑛 − 2)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛 + 1) + 𝑥(𝑛 − 2)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑦(𝑛 − 1)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2)
𝑦(𝑛) = 𝑙𝑜𝑔10 𝑥(𝑛 − 2)
Solution:
𝑦(𝑛) = 𝑥(𝑛) + 2𝑥(𝑛 − 1) + 4𝑥(𝑛 − 2) (Causal)
𝑦(𝑛) = 3𝑥(𝑛) + 𝑥²(𝑛 + 1) + 𝑥(𝑛 − 2) (Non-causal)
𝑦(𝑛) = 3𝑥(𝑛) + 4𝑦(𝑛 − 1) (Causal)
𝑦(𝑛) = 2𝑥(𝑛) + 6𝑥(𝑛 + 2) (Non-causal)
𝑦(𝑛) = 𝑙𝑜𝑔10 𝑥(𝑛 − 2) (Causal)

1.6.1.5 Stable and Unstable Systems.


• A system is stable if and only if, each bounded input
produces a bounded output (BIBO).

31

1.7 Digital signal processing
1.7.1 Basic elements of Digital signal processing system

• Analog Filter: The analog signal is fed to an analog filter,


which is applied to limit the frequency range of analog
signals prior to the sampling process, which significantly
attenuates aliasing distortion.
• ADC: The band-limited signal at the output of the analog
filter is then sampled and converted via the ADC unit into
the digital signal, which is discrete both in time and in
amplitude.
• DSP: The digital signal processor then processes the digital
data according to DSP rules such as lowpass, highpass, and
bandpass digital filtering, or other algorithms for different
applications. The DSP unit can be a general-purpose digital
computer, a microprocessor, an advanced microcontroller,
digital circuits such as ASICs or field-programmable gate
arrays (FPGAs), or a specialized digital signal processor (DSP chip).
• DAC: The DAC unit, converts the processed digital signal
to an analog output signal, which is continuous in time and
discrete in amplitude (usually a sample-and-hold signal).
• Reconstruction Filter: The reconstruction filter is designated
as a function to smooth the DAC output voltage levels back
to the analog signal for the real-world applications.
1.7.2 Advantages of digital signal over analog signal.

• Digital communication can withstand channel noise and


distortion much better than analog.
• Digital communication has the advantage of the viability of
regenerative repeaters.
• Digital hardware implementation is flexible and permits the
use of microprocessors, digital switching, and integrated
circuits.
• Digital signals can be coded to yield extremely low error
rates and high fidelity and privacy.
• It is easier and more efficient to multiplex several digital
signals.
• Digital communication is more efficient than analog in
exchanging SNR for bandwidth.
• Digital signal storage is relatively easy and inexpensive. It
also can search and select information from distant
electronic database.

33

1.7.3 Basic Digital Signal Processing Examples

• DIGITAL FILTERING.
• SIGNAL FREQUENCY (SPECTRUM) ANALYSIS.
• Digital Audio and Speech: Digital audio coding such as CD
players, MP3 players, digital crossover, digital audio
equalizers, digital stereo and surround sound, noise
reduction systems, speech coding, data compression and
encryption, speech synthesis and speech recognition.
• Digital telephone: Speech recognition, high-speed modems,
echo cancellation, speech synthesizers, DTMF (dual-tone
multifrequency) generation and detection, answering
machines.
• Automobile Industry: Active noise control systems, active
suspension systems, digital audio and radio, digital controls,
vibration signal analysis.
• Electronic Communications: Cellular phones, digital
telecommunications, wireless LAN (local area networking),
satellite communications.
• Medical Imaging Equipment: ECG analyzers, cardiac
monitoring, medical imaging and image recognition, digital
X-rays and image processing.
• Multimedia: Internet phones, audio and video, hard disk
drive electronics, iPhone, iPad, digital pictures, digital
cameras, text-to-voice, and voice-to-text technologies.

34

1.7.4 Application Areas

35

Problem set #1
1- Consider the signal x(t) shown in Fig.1, illustrate the following :
a) Advance shift x(t +1) b) Delay x(t-1)
c) Time–reversal x(-t ) d) Time Reversal and shift x(-t +1)
e) Time–Scaling x(3/2t) f) Time–Scaling and shift x (3/2 t + 1)

2- Let x(n) = u(n) – u(n-10)


a) Draw X(n) b) Decompose x(n) into even and odd
components
3- Let the signal be defined as:

$$x(n) = \begin{cases} 0, & n < -2 \\ 1, & -2 \le n \le 4 \\ 0, & n > 4 \end{cases}$$

• For each of the following signals, determine the values of n for which the
function is equal to zero:
a) x(n – 3) b) x(n + 4) c) x(–n) d) x(–n + 2) e) x(–n – 2)
• Answer: a) n < 1 and n > 7 b) n < –6 and n > 0 c) n < –4 and n > 2
d) n < –2 and n > 4 e) n < –6 and n > 0
4- Determine whether or not each of the following signals is periodic.
a) x1(n) = u(n) – u(–n)

$$\text{b) } x_2(n) = \sum_{k=-\infty}^{\infty} \left[\, \delta(n - 4k) - \delta(n - 1 - 4k) \,\right]$$

Answer: a) Non-periodic b) Periodic

5- For each of the following periodic signals, specify its fundamental period.

a) x1(n) = e^(jπn) b) x2(n) = 3 e^(jπ(n – ½)/5)
• Answer: a) N = 2 b) N = 10

36

6- Determine the signal energy and power of the following signals :
a) x1(t) = e^(–2t) u(t) b) x2(t) = e^(j(2t + π/4))
c) x3(t) = cos(t) d) x4(n) = (1/2)^n u(n)
e) x5(n) = e^(j(πn/2 + π/8)) f) x6(n) = cos(πn/4)

37

38

Problem set #2

39

2 Linear time-invariant
system
2.1 LTI systems

2.2 Discrete-Time LTI Systems:


• A discrete-time system is defined mathematically as a
transformation or operator that maps an input sequence with
values x(𝑛) into an output sequence with values 𝑦(𝑛). This
can be denoted as:
𝑦(𝑛) = 𝑇{𝑥(𝑛)}

40

2.3 DIFFERENCE EQUATIONS AND IMPULSE
RESPONSES
• A causal linear time-invariant system can be completely
described by its difference equation or unit-impulse response.
2.3.1 FORMAT OF DIFFERENCE EQUATION

41

2.3.2 SYSTEM REPRESENTATION USING ITS IMPULSE
RESPONSE

• A linear time-invariant system can be completely


described by its unit-impulse response.
• Unit-impulse response h[n] is defined as the system response
due to the impulse input δ(n) with zero-initial conditions.

Using the time-invariance property of the system, we can


conclude that when the input of this system is delayed by
time ‘k’ the output is also delayed by ‘k’, as indicated in the
following figure.

42

By applying the linearity property, we get the following
results.

Thus, the general form of the system output, with full
knowledge of its unit impulse response h[n], is

$$y(n) = x(n) * h(n) = \sum_{k=-\infty}^{\infty} x(n-k)\,h(k) = \sum_{k=-\infty}^{\infty} h(n-k)\,x(k)$$

where (∗) denotes convolution.
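A minimal sketch of this convolution sum for finite-length sequences (assuming NumPy; the input is an arbitrary illustration, and h(n) = {0.5, 0.25} matches the impulse response of the example given after the continuous-time case below):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # example input sequence x(n), n = 0..3
h = np.array([0.5, 0.25])               # example impulse response h(n)

# Direct evaluation of y(n) = sum_k h(k) * x(n - k) over the finite supports above
y_direct = np.zeros(len(x) + len(h) - 1)
for n in range(len(y_direct)):
    for k in range(len(h)):
        if 0 <= n - k < len(x):
            y_direct[n] += h[k] * x[n - k]

# Library equivalent of the same sum
y_np = np.convolve(x, h)
print(y_direct, np.allclose(y_direct, y_np))   # [0.5, 1.25, 2.0, 2.75, 1.0] True
```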

For a continuous-time system, we use the convolution
integral to calculate the system output as follows:

$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau = \int_{-\infty}^{\infty} h(t-\tau)\,x(\tau)\,d\tau$$
Example: Given the linear time-invariant system
𝑦(𝑛) = 0.5𝑥(𝑛) + 0.25𝑥(𝑛 − 1),
A. Determine the unit-impulse response ℎ(𝑛).
B. Draw the system block diagram.

44

Example

Solution

45

46

2.3.3 Notes on convolution process

2.3.4 Causality and stability from the unit impulse response of the
system

47

3 Fourier analysis for CT
signals
3.1 Introduction
3.1.1 Sine wave

48

3.1.2 Frequency and phase

• Frequency is the rate of change with respect to time.


• Change over a long span of time means low frequency.
If a signal does not change at all, its frequency is zero.
If a signal changes instantaneously, its frequency is
infinite.
Phase describes the position of the waveform relative to time 0.

49

3.1.3 Time-domain and frequency domain plots of sine wave

3.1.4 Periodic signals

50

The above signal can be decomposed into a summation of 3
sinusoids as shown in the following figure; the amplitude of
each sinusoid is calculated using the Fourier series.

3.2 TRIGONOMETRIC FOURIER SERIES


• Consider a signal with period 𝑇0. The space of periodic signals
of period 𝑇0 has a well-known complete orthogonal basis
formed by real-valued trigonometric functions. Consider the
signal set:

• A sinusoid of angular frequency 𝑛𝜔0 is called the 𝑛𝑡ℎ


harmonic of the sinusoid of angular frequency 𝜔0 = 2𝜋𝑓0
51

when n is an integer. The sinusoid of angular frequency ω0
serves as an anchor in this set, called the fundamental tone
of which all the remaining terms are harmonics.

• The constant term 1 is the zeroth harmonic in this set because


cos (0×ω0t) = 1, is related to the DC component.
• We can show that this set is orthogonal over any continuous
interval of duration T0 = 2π/ω0, which is the period of the
fundamental.
• Therefore, we can express a signal g(t) by a trigonometric
Fourier series over the interval [𝑡1, 𝑡1 + 𝑇0] of duration
𝑇0 as:

• The Fourier series contains real-valued sine and cosine terms


of the same frequency. We can combine the two terms in a
single term of the same frequency using the well-known
trigonometric identity of

52

• For consistency we denote the dc term 𝑎0 by 𝐶0, that is,
𝐶0 = 𝑎0
The trigonometric Fourier series can be expressed in the
compact form of the trigonometric Fourier series as

3.3 Exponential Fourier series:


• The exponential Fourier series is another form of the
trigonometric Fourier series and provides a more convenient
orthogonal representation of periodic signals.
• Each sinusoid of frequency 𝑓 can be expressed as a sum of
two exponentials, 𝑒 𝑗2𝜋𝑓𝑡 and 𝑒 −𝑗2𝜋𝑓𝑡 .
• The exponential Fourier series is also periodic with period
T0.
• A periodic signal 𝑔(𝑡) can be expressed over an interval of
𝑇0 seconds as an exponential Fourier series:
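The series itself is not reproduced in these notes at this point; its standard form, stated here for completeness (consistent with the coefficients 𝐷𝑛 used in the derivation of Section 3.4.1), is

$$g(t) = \sum_{n=-\infty}^{\infty} D_n \, e^{j n \omega_0 t}, \qquad D_n = \frac{1}{T_0} \int_{T_0} g(t)\, e^{-j n \omega_0 t}\, dt, \qquad \omega_0 = \frac{2\pi}{T_0}$$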


Example
Find the exponential Fourier series of the following periodic
wave and plot the spectrum

54

3.4 Fourier Transform of Signals
• Fourier Transform: Extracts the frequencies of general
class of signals that are usually aperiodic. In other words, the
Fourier tool transforms the signal from the time domain to
the frequency domain.
• The Fourier Transform 𝐺(𝑓) of a time-domain signal
𝑔(𝑡) is given by

$$G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-j 2\pi f t}\, dt$$

• The Inverse Fourier Transform transforms the signal from


the frequency domain to the time domain, and given by
$$g(t) = \mathcal{F}^{-1}[G(\omega)] = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,e^{j\omega t}\,d\omega = \int_{-\infty}^{\infty} G(f)\,e^{j2\pi f t}\,df$$

3.4.1 Derivation of Fourier Transform from Fourier Series
Let 𝑔(𝑡) be a non-periodic signal and let us construct a new periodic
signal 𝑔𝑇0 (𝑡) by repeating the signal 𝑔(𝑡) at 𝑇0 seconds, as shown
in the following figure

55

Then, the relation between 𝑔(𝑡) and 𝑔𝑇0(𝑡) is given by

$$g(t) = \lim_{T_0 \to \infty} g_{T_0}(t)$$

where 𝑇0 is the period of the periodic signal 𝑔𝑇0(𝑡). The Fourier
coefficients of the periodic signal 𝑔𝑇0(𝑡) can be expressed as

$$D_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g_{T_0}(t)\, e^{-j n \omega_0 t}\, dt = \frac{1}{T_0} \int_{-\infty}^{\infty} g(t)\, e^{-j n \omega_0 t}\, dt, \qquad g_{T_0}(t) = \sum_{n=-\infty}^{\infty} D_n\, e^{j n \omega_0 t}$$

Let us now define 𝐺(𝜔) as a continuous function of 𝜔 as

$$G(\omega) = \int_{-\infty}^{\infty} g(t)\, e^{-j\omega t}\, dt$$

Then we have $D_n = \frac{1}{T_0} G(n\omega_0)$, and

$$g_{T_0}(t) = \sum_{n=-\infty}^{\infty} \frac{G(n\omega_0)}{T_0}\, e^{j n \omega_0 t}$$

56

Then,

$$g(t) = \lim_{T_0 \to \infty} g_{T_0}(t) = \lim_{\Delta\omega \to 0} \sum_{n=-\infty}^{\infty} \frac{\Delta\omega}{2\pi}\, G(n\Delta\omega)\, e^{j n \Delta\omega t}$$

The sum on the right-hand side of the above equation can be


viewed as the area under the function 𝐺(𝜔)𝑒 𝑗𝜔𝑡 as illustrated in
the following figure.

Then,

$$g(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\, e^{j\omega t}\, d\omega$$

57

58

3.4.2 Fourier transform of some useful signals

Example: Find the Fourier transform of the


unit impulse signal 𝛿(𝑡)
• Solution
• We use the sampling property of the impulse function

• Figures show 𝛿(𝑡) and its spectrum. The bandwidth of this


signal is infinity.

59

Example: Find the Fourier transform of 𝑔(𝑡) =
𝛱(𝑡/𝜏 )

Solution

60

Bandwidth of 𝜫(𝒕/𝝉):
• The spectrum 𝑮(𝒇) in the following Figure peaks at f =0 and
decays at higher frequencies. Therefore, 𝜫(𝒕/𝝉 ) is a
lowpass signal with most of its signal energy in lower
frequency components.
• Signal bandwidth is the difference between the highest
(significant) frequency and the lowest (significant)
frequency in the signal spectrum.
• Because the spectrum extends from 0 to ∞, the bandwidth is
∞ in the present case. However, much of the spectrum is
concentrated within the first lobe (from 𝒇 = 𝟎 to 𝒇 = 𝟏/𝝉 ),
and we may consider 𝒇 = 𝟏/𝝉 to be the highest
(significant) frequency in the spectrum.
• Therefore, a rough estimate of the bandwidth of a
rectangular pulse of width 𝝉 seconds is 𝟐𝝅/𝝉 rad/s, or B =
1/τ Hz.
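For reference, the solution to this example (the standard transform pair, stated here under the convention 𝐺(𝑓) = ∫ 𝑔(𝑡)𝑒^(−𝑗2𝜋𝑓𝑡)𝑑𝑡 and the sinc definition of Section 1.2.6, since the worked solution itself is not reproduced in these notes) is

$$\Pi\!\left(\frac{t}{\tau}\right) \;\Longleftrightarrow\; \tau\,\mathrm{sinc}(\pi f \tau) = \tau\,\frac{\sin(\pi f \tau)}{\pi f \tau}$$

whose first zero crossing at 𝑓 = 1/𝜏 is consistent with the first-lobe bandwidth estimate above.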


Example: Find the inverse Fourier transform
of 𝐺(𝑓) = 𝛿(𝑓 )

Example: Find the inverse Fourier transform


of 𝐺(𝑓) = 𝛿(𝑓 − 𝑓0 )

62

Example: Find the Fourier transforms of the
everlasting sinusoid 𝑐𝑜𝑠 2𝜋𝑓0𝑡.
• The spectrum of 𝒄𝒐𝒔 𝟐𝝅𝒇𝟎𝒕 consists of two impulses at 𝒇𝟎
and −𝒇𝟎 in the 𝒇 −domain.

• The result also follows from qualitative reasoning. An


everlasting sinusoid 𝒄𝒐𝒔𝝎𝟎𝒕 can be synthesized by two
everlasting exponentials, 𝒆𝒋𝝎𝟎𝒕 and 𝒆−𝒋𝝎𝟎𝒕 . Therefore, the
Fourier spectrum consists of only two components, of
frequencies 𝒇𝟎 and −𝒇𝟎.

63

Table1 Fourier transform of some useful signals

64

3.4.3 Properities of Fourier transform

3.4.3.1 Linearity of the Fourier Transform (Superposition


Theorem):
• The Fourier transform is linear; that is, if

• Then, for all constants 𝑎1 and 𝑎2, we have

65

3.4.3.2 Duality Property:
• The duality property states that if
𝑔(𝑡) ⇐⇒ 𝐺( 𝑓 )

• Then
𝐺(𝑡) ⇐⇒ 𝑔(−𝑓 )
• The duality property states that if the Fourier transform of
𝑔(𝑡) is 𝐺( 𝑓 ), then the Fourier transform of 𝐺(𝑡) , with
𝑓 replaced by 𝑡, is 𝑔(−𝑓 ), which is the original time domain
signal with 𝑡 replaced by −𝑓 .
• Example

66

3.4.3.3 Time-Scaling Property:
• The time-scaling property states that time compression of a
signal results in its spectral expansion, and time expansion
of the signal results in its spectral compression. Thus, If
𝑔(𝑡) ⇐⇒ 𝐺( 𝑓 )
• then, for any real constant 𝑎,

• The function 𝑔(𝑎𝑡) represents the function


𝑔(𝑡) compressed in time by a factor 𝑎 (|𝑎| > 1).
• Similarly, a function 𝐺(𝑓/𝑎) represents the function
𝐺(𝑓) expanded in frequency by the same factor 𝑎.

67

• Intuitively, we understand that compression in time by a
factor a means that the signal is varying more rapidly by the
same factor.
• To synthesize such a signal, the frequencies of its sinusoidal
components must be increased by the factor a, implying that
its frequency spectrum is expanded by the factor a.
• Similarly, a signal expanded in time varies more slowly;
hence, the frequencies of its components are lowered,
implying that its frequency spectrum is compressed.

68

3.4.3.4 Time-Shifting Property:

69

3.4.3.5 Frequency-Shifting Property:
• Frequency-shifting property states that multiplication of a
signal by a factor 𝑒 𝑗2𝜋𝑓0 𝑡 shifts the spectrum of that signal
by 𝑓 = 𝑓0. Thus, if
𝑔(𝑡) ⇐⇒ 𝐺( 𝑓 )
• then

Consider the following signal

Using frequency shifting property, we will get

This operation is known as amplitude modulation and


illustrated in the following figure

70

3.4.3.6 Convolution Theorem
• The convolution of two functions 𝑔(𝑡) and 𝑤(𝑡), denoted
by 𝑔(𝑡) ∗ 𝑤(𝑡), is defined by the integral

• The time convolution property and its dual, the frequency


convolution property, state that if

71

• then (time convolution)

• and (frequency convolution)
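The two statements themselves, written here in the notation of this chapter since the original expressions are not reproduced (these are the standard results):

$$g_1(t) * g_2(t) \;\Longleftrightarrow\; G_1(f)\,G_2(f) \qquad \text{and} \qquad g_1(t)\,g_2(t) \;\Longleftrightarrow\; G_1(f) * G_2(f)$$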

• These two relationships of the convolution theorem state that


convolution of two signals in the time domain corresponds
to multiplication in the frequency domain, whereas
multiplication of two signals in the time domain corresponds
to convolution in the frequency domain.
• Bandwidth of the Product of Two Signals:
• If 𝑔1(𝑡) and 𝑔2(𝑡) have bandwidths 𝐵1 and 𝐵2 𝐻𝑧 ,
respectively, the bandwidth of 𝑔1(𝑡)𝑔2(𝑡) is 𝐵1 + 𝐵2 𝐻𝑧.
• This result follows from the application of the width
property of convolution. This property states that the width
of 𝑥(𝑡) ∗ 𝑦(𝑡) is the sum of the widths of 𝑥(𝑡) and 𝑦(𝑡).
• Consequently, if the bandwidth of 𝑔(𝑡) is B Hz, then the
bandwidth of 𝑔2 (𝑡) is 2𝐵 𝐻𝑧, and the bandwidth of 𝑔𝑛 (𝑡) is
𝑛𝐵 𝐻𝑧.

72

Table2 Fourier Transform Properties

73

Problems
1- Calculate the Fourier transform of
a. 𝑥(𝑡) = ∆(𝑡/𝜏)
b. 𝑥(𝑡) = [𝑒^(−𝛼𝑡) cos(𝜔0𝑡)] 𝑢(𝑡), 𝛼 > 0
c. 𝑥(𝑡) = 𝑒^(−𝑎𝑡) 𝑢(𝑡)
d. 𝑥(𝑡) = 𝛼 sinc(𝜋𝛼𝑡)

2- Determine the continuous-time signal corresponding to
each of the following transforms.

a. 𝑋(𝑗𝜔) = 2 sin[3(𝜔 − 2𝜋)] / (𝜔 − 2𝜋)
b. 𝑋(𝑗𝜔) = cos(4𝜔 + 𝜋/3)

3- Show that 𝑔(−𝑡) ⟺ 𝐺 (−𝑓)


Use the result and the fact that 𝑒 −𝑎𝑡 𝑢(𝑡) ⟺ 1⁄(𝑎 + 𝑗2𝜋𝑓)
to find the Fourier transform of 𝑒 𝑎𝑡 𝑢(−𝑡) and 𝑒 −|𝑎|𝑡 for
𝑎 > 0. Then find the Fourier transform of 𝑒 −𝑎|𝑡−𝑡0 | .

4- From the Fourier transform definition show that the Fourier


transform of 𝑟𝑒𝑐𝑡(𝑡 − 5) is 𝑠𝑖𝑛𝑐(𝜋𝑓)𝑒 −𝑗10𝜋𝑓 .

5- Using the inverse Fourier transform definition, show that

𝐹^(−1){𝑟𝑒𝑐𝑡((2𝜋𝑓 − 10)/2𝜋)} = 𝑠𝑖𝑛𝑐(𝜋𝑡) 𝑒^(𝑗10𝑡).

74

6- The Fourier transform of the triangular pulse 𝑔(𝑡) in Fig.
1a is given by

Use this information, and the time-shifting and time-scaling properties, to find the Fourier
transforms of the signals shown in Fig. 1b, c, d, e, and f.

Figure 1

75

4 Fourier analysis for DT
signals
4.1 Sampling theorem
• The sampling theorem guarantees that an analog signal can
be in theory perfectly recovered as long as the sampling rate,
𝒇𝒔 , is at least twice of the bandwidth of the analog signal to
be sampled, 𝑩 . Otherwise, the signal cannot be
reconstructed (aliasing distortion).

𝒇𝒔 ≥ 𝑵𝒚𝒒𝒖𝒊𝒔𝒕 𝒓𝒂𝒕𝒆,
𝑵𝒚𝒒𝒖𝒊𝒔𝒕 𝒓𝒂𝒕𝒆 = 𝟐𝑩, 𝒇𝒔 = 𝟏/𝑻𝒔
4.1.1 Proof of sampling theorem

• A continuous-time signal 𝑔(𝑡) whose frequency spectrum

is bandlimited to B Hz [𝐺(𝜔) = 0 for |𝜔| ≥ 2𝜋𝐵] can be
reconstructed without any error from its samples taken uniformly at
a rate of 𝑓𝑠 ≥ 2𝐵 samples/sec.
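A small numerical illustration of this condition (assuming NumPy; the 100 Hz tone and the two sampling rates are arbitrary illustration values): sampling a 100 Hz sine at 80 Hz, below the 200 Hz Nyquist rate, produces samples identical to those of a 20 Hz alias, while sampling at 400 Hz does not.

```python
import numpy as np

f_signal = 100.0                 # analog tone frequency (Hz), illustrative
n = np.arange(16)                # sample indices

for fs in (80.0, 400.0):         # below and above the Nyquist rate 2*f_signal = 200 Hz
    samples = np.sin(2 * np.pi * f_signal * n / fs)
    alias = np.sin(2 * np.pi * 20.0 * n / fs)     # a 20 Hz tone sampled at the same rate
    print(fs, np.allclose(samples, alias))        # 80.0 True (aliased onto 20 Hz), 400.0 False
```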


4.2 Different forms of the Fourier transform
• From the sampling theory, we can conclude that:
1. Sampling in time domain leads to periodicity in
frequency domain and vice versa.
2. The period in the frequency domain equals the
sampling frequency.
• Moreover, using the time-frequency duality property, we can
generalize the sampling rule to be :
“Sampling in one domain (time or frequency) leads to
periodicity in the other domain “
78

• The following figure illustrates different forms of Fourier
analysis for continuous and discrete time signals

79

• We can conclude that the frequency response of a signal is
the same for all the four cases shown in the above figure. The
difference is in the nature of the signal (discrete or
continuous and periodic or non-periodic).
4.2.1 Discrete Fourier Transform DFT

• In the time domain, the representation of a digital signal describes
the signal amplitude vs. the sampling time instant or the
sample number. However, in some applications the signal
frequency content is more useful than the digital signal
samples. The representation of the digital signal in terms of
its frequency components in the frequency domain, that is, the
signal spectrum, needs to be developed.
• The DFT maps a finite-length sequence of N samples in time
domain as shown in the figure to another finite-length
sequence of N samples in frequency domain.

where 𝑁 is the period of the sequence in time domain.


• The following figure shows the amplitude response |𝑋(𝑘)|
of the periodic discrete time signal x(n). As shown in the
figure, the amplitude response is periodic and even.

81

We note the following points:
• Only the line spectral portion between the frequencies –𝑓𝑠/2
and 𝑓𝑠/2 (the folding frequency) represents the frequency
information of the periodic signal.
• Note that the spectral portion from 𝑓𝑠/2 to 𝑓𝑠 is a copy of the
spectrum in the negative frequency range from –𝑓𝑠/2 to 0 Hz,
due to the spectrum being periodic with period 𝑓𝑠 Hz. For
convenience, we compute the spectrum over the range from
0 to 𝑓𝑠 Hz with nonnegative indices, that is,

• The inverse operation is called the inverse discrete Fourier
transform (IDFT) and maps the N samples in the frequency
domain back to a finite-length sequence of N samples in the
time domain.

82

• The DFT is widely used in many other areas, including
spectral analysis, acoustics, imaging/video, audio,
instrumentation, and communications systems.

• Both the time-domain sequence and the frequency-domain
sequence are assumed to be periodic with period
𝑁; however, the calculations deal with only one period.
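A minimal sketch of the DFT/IDFT pair using NumPy's FFT routines (the 8-point record of a 1 kHz cosine sampled at 8 kHz is an arbitrary illustration; `np.fft.fft` evaluates the same N-point DFT sum discussed above):

```python
import numpy as np

fs = 8000.0                              # sampling rate in Hz (illustrative)
N = 8                                    # number of samples (one assumed period)
n = np.arange(N)
x = 2 * np.cos(2 * np.pi * 1000 * n / fs)     # 1 kHz cosine sampled at 8 kHz

X = np.fft.fft(x)                        # N-point DFT: X(k), k = 0..N-1
x_back = np.fft.ifft(X)                  # IDFT recovers the original samples

freqs = np.arange(N) * fs / N            # frequency of each bin: f_k = k*fs/N
print(np.round(np.abs(X), 3))            # peaks at k=1 (1000 Hz) and its mirror at k=7
print(np.allclose(x, x_back.real))       # True
```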
4.3 Fourier analysis of continuous time signals using
DFT
• We assume that the process acquires data samples by
digitizing the continuous signal of interest for a duration of
T seconds. Next, we assume that a periodic signal x(n) is
obtained by repeating the acquired N data samples of
duration T. Finally, we determine the
Fourier series coefficients from the one-period N data samples
using the DFT formula:

The steps of calculating the Fourier coefficients of a


continuous-time signal is illustrated in the following figure

84

4.3.1 DFT and zero padding

• Zero padding is used to obtain the detailed signal spectrum


with a fine frequency resolution
• Zero padding is to add zeros to the time-domain samples
(𝑁 ≥ 𝐿) to increase the number of samples 𝑁 in frequency
domain then increase resolution as shown in the following
figure.
where, L is the actual number of samples in time domain, N
is the total number of samples after adding zeros.

4.4 Frequency Mapping and Frequency Resolution


• Now we explore the relationship between the frequency bin
k and its associated frequency. Omitting the proof, the
calculated N DFT coefficients X(k) represent frequency
components ranging from 0 Hz (or 0 rad/s) to 𝑓𝑠 Hz (or 𝜔𝑠 rad/s);
hence we can map the frequency bin k to its corresponding
frequency as follows:
85

or in terms of Hz,

• We can define the frequency resolution as the frequency step


between two consecutive DFT coefficients to measure how
fine the frequency domain presentation is and achieve

or in terms of Hz, it follows that

• From the sampling theory introduced earlier, the folding
frequency (the highest observable frequency, or Nyquist
frequency) equals

$$\frac{f_s}{2} = \frac{N}{2}\,\Delta f$$


4.5 Amplitude, Phase, and Power Spectrum
• The amplitude spectrum is defined as:

• The phase spectrum is defined as:

• The power spectrum is defined as:

4.6 Fast Fourier Transform FFT


• FFT is a fast implementation of DFT
• Ideally, want N to be a power of 2 or close to a power of 2

87

• FFT has many applications like:
1. Spectral Estimation
2. Frequency-Domain Filtering
3. Interpolation
4. Implementation of MCM (multi-carrier modulation) and
OFDM (orthogonal frequency division multiplexing)
communication systems.

88

Example 1

89

Example 2

• The frequency resolution: ∆𝑓 = 100/4 = 25 𝐻𝑧

90

Amplitude spectrum
Phase spectrum

Power spectrum

91

Example 3

Example 4

92

4.7 SPECTRAL ESTIMATION USING WINDOW
FUNCTIONS
• When we apply the DFT to the sampled data in the previous
section, we theoretically imply the following assumptions:
first, the sampled data record repeats itself periodically, and
second, the repeated record is continuous at its end points and
band limited to the folding frequency. The second assumption
is often violated, and the resulting discontinuity produces
undesired harmonic frequencies. Consider a pure
1-Hz sine wave with 32 samples shown in the following
figure.

93

• As shown in the figure, if we use a window size of N=16
samples, which spans an integer number of waveform cycles
(two cycles), the second window repeats the first with
continuity. However, when the window size is chosen to be
18 samples, which is not an integer number of waveform
cycles (2.25 cycles), the second window repeats the first
window with a discontinuity. It is this discontinuity that
produces harmonic frequencies that are not present in the
original signal. The following figure shows the spectral plots
for both cases using the DFT/FFT directly.

The first spectral plot contains a single frequency component,
as we expected, while the second spectrum has the expected
frequency component plus many harmonics, which do not exist
in the original signal. We call such an effect spectral
leakage. The amount of spectral leakage shown in the second
plot is due to the amplitude discontinuity in the time domain. The
bigger the discontinuity, the more the leakage. To reduce the
effect of spectral leakage, a window function can be used
whose amplitude tapers smoothly and gradually toward zero at
both ends. Applying the window function w(n) to a data
sequence x(n) to obtain the windowed sequence x_w(n) can be
expressed as:
𝑥_𝑤(𝑛) = 𝑥(𝑛) 𝑤(𝑛)

This is illustrated in the following figure, we can observe the


smooth decrease toward zero.

95

The following figure illustrates the effect of using a window
function on decreasing discontinuity in time domain, then
decreasing the spectral leakage in frequency domain.
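A compact numerical illustration of this effect (assuming NumPy; the 18-sample record containing 2.25 cycles mirrors the discussion above, the Hamming window is one of the tapers listed in the next subsection, and the leakage measure is an ad-hoc illustration, not a formula from the text):

```python
import numpy as np

fs = 8.0                              # samples per second (illustrative)
N = 18                                # record length that does NOT hold an integer number of cycles
n = np.arange(N)
x = np.sin(2 * np.pi * 1.0 * n / fs)  # 1 Hz sine: 2.25 cycles in 18 samples

w = np.hamming(N)                     # taper that decays smoothly toward zero at both ends
X_rect = np.abs(np.fft.rfft(x))       # spectrum with the implicit rectangular window
X_hamm = np.abs(np.fft.rfft(x * w))   # spectrum of the windowed sequence x_w(n) = x(n)w(n)

def leakage(X, main_bins=(1, 2, 3, 4)):
    # Fraction of spectral energy outside the bins around the tone (tone sits near bin 2.25)
    mask = np.ones_like(X, dtype=bool)
    mask[list(main_bins)] = False
    return np.sum(X[mask] ** 2) / np.sum(X ** 2)

print(leakage(X_rect), leakage(X_hamm))   # the windowed record leaks far less outside the main lobe
```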


4.7.1 Example of window functions

97

Example 5

98

99

problems

100

101

5 The Z-Transform
5.1 The Z-Transform:
• The z-transform is a very important tool in the analysis of
discrete-time system (similar to Laplace for the continuous-
time systems). It also offers techniques for digital filter
design and frequency analysis of digital signals.
• The ZT is more general than the FT and applies to a broader
class of signals; it is used in cases where the FT does not
converge (does not exist).

• The z-transform of a causal sequence x(n), designated by


X(z) or Z(x(n)), is defined as:
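The definition is not reproduced in these notes at this point; its standard one-sided form (consistent with the causal-sequence assumption stated below) is

$$X(z) = Z\{x(n)\} = \sum_{n=0}^{\infty} x(n)\, z^{-n}$$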

• where z is the complex variable:


• The above definition assumes that the digital signal x(n) is a
causal sequence, that is, x(n)=0 for n<0. Thus, the above
equation is referred to as one-sided z-transform or a
unilateral transform

102

• The region of convergence (ROC) is the range of values of z for
which X(z) converges to a finite value.

103

Example:

Solution:

104

5.2 PROPERTIES of the Z-TRANSFORM

105

106

5.3 INVERSE Z-TRANSFORM
The z-transform of a sequence x(n) and the inverse z-
transform of a function X(z) are defined as, respectively:

The inverse of the z-transform may be obtained by at least


three methods:
• Partial fraction expansion and look-up table.
• Power series expansion.
• Inversion Formula Method.
5.3.1 Partial fraction expansion and look-up table:

The general procedure is as follows:

107

• Eliminate the negative powers of z for the z-transform
function X(z).
• Determine the rational function X(z)/z (assuming it is
proper), and apply the partial fraction expansion to the
determined rational function X(z)/z using the formula in
Table 3.
• Multiply the expanded function X(z)/z by z on both sides of
the equation to obtain X(z).
• Apply the inverse z-transform using table 1.

108

109

110

111

5.3.2 Power Series Method

• Power series method uses a long division strategy to obtain


the inverse Z-transform.
• Based on the definition of the unilateral z-transform, we
have

• A rational function 𝑋(𝑧) can also be expressed in the form


of power series, as:

• It can be seen that there is a one-to-one coefficient match


between coefficients an and the sequence 𝑥(𝑛), that is,

• If we wish to use the power series to find the sequence


corresponding to a given 𝑋(𝑧) expressed in closed form, we
must expand 𝑋(𝑧) back into a power series.
• For rational z-transforms, a power series expansion can be
obtained by long division.
• It’s an easy method in general. However, it doesn’t give a
closed form solution

112

113

5.4 SOLUTION OF DIFFERENCE EQUATIONS
USING THE Z-TRANSFORM
5.4.1 FORMAT OF DIFFERENCE EQUATION

• A causal, linear, time-invariant system can be described by


a difference equation having the following general form:

• where 𝑎1, … , 𝑎𝑁 and 𝑏0, 𝑏1, … , 𝑏𝑀 are the coefficients of
the difference equation. 𝑀 and 𝑁 are the memory lengths for the
input 𝑥(𝑛) and the output 𝑦(𝑛), respectively.
• The above equation can further be written as:

• Note that 𝑦(𝑛) is the current output, which depends on the
past output samples 𝑦(𝑛 − 1), … , 𝑦(𝑛 − 𝑁), the current
input sample 𝑥(𝑛), and the past input samples 𝑥(𝑛 −
1), … , 𝑥(𝑛 − 𝑀).

5.4.2 Z-Transform Shift Property with Initial condition:

• To solve a difference equation, we have to deal with time-


shifted sequences with initial conditions.
• If the input or output has initial conditions , the general time-
shift property of the z-transform will be defined as:

114

• where 𝑦(−𝑚), 𝑦(−𝑚 + 1), … , 𝑦(−1) are the initial
conditions.
• If all initial conditions are considered to be zero, the z-
transform of a time-shifted sequence can be expressed as:
𝑍{𝑦(𝑛 − 𝑚)} = 𝑧^(−𝑚) 𝑌(𝑧),
𝑍{𝑥(𝑛 − 𝑚)} = 𝑧^(−𝑚) 𝑋(𝑧),
𝐻(𝑧) = 𝑌(𝑧)/𝑋(𝑧).
• By applying the Z-transform to the difference equation, we
get the following relation,

5.4.3 SOLUTION OF DIFFERENCE EQUATIONS USING THE Z-


TRANSFORM

• Solving the difference equation means finding the


appropriate form, in which the current output is function of
the input sequence only (not the previous output)

115

• The procedure is as follows:
1. Apply z-transform to the difference equation.
2. Substitute the initial conditions.
3. Solve the difference equation in z-transform domain.
4. Find the solution in time domain by applying the
inverse z-transform.
Impulse, Step, and System Responses:
• Impulse Responses: The impulse response h(n) can be
obtained by solving its difference equation using a unit
impulse input 𝜹(𝒏).

𝑦(𝑛) = ℎ(𝑛)

• Step Responses: The step response can be obtained by


solving its difference equation using a unit step input 𝑢(𝑛).

• System Responses: The system response can be obtained by


solving its difference equation for arbitrary input x(𝑛).
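A quick numerical sketch of these three responses (assuming SciPy is available; the difference equation y(n) = 0.5x(n) + 0.25x(n−1) + 0.2y(n−1) with zero initial conditions is an illustrative choice, not one taken from the worked examples that follow):

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative system (zero initial conditions assumed):
#   y(n) = 0.5*x(n) + 0.25*x(n-1) + 0.2*y(n-1)
b = [0.5, 0.25]          # input-side coefficients b0, b1
a = [1.0, -0.2]          # output-side coefficients: y(n) - 0.2*y(n-1) = ...

n = np.arange(10)
delta = (n == 0).astype(float)      # unit impulse input
u = np.ones_like(n, dtype=float)    # unit step input
x = np.sin(0.2 * np.pi * n)         # an arbitrary input

h = lfilter(b, a, delta)            # impulse response h(n)
s = lfilter(b, a, u)                # step response
y = lfilter(b, a, x)                # system response to x(n)

print(np.round(h, 4))               # h = 0.5, 0.35, 0.07, 0.014, ...
```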


Example:

Solution:

117

• Example:

118

• Example:

119

120

121

Example

5.6 The Z-Plane Pole-Zero Plot


• The z-plane pole-zero is considered as a graphical technique
that can investigate the characteristics of the digital system
such as stability.

122

• The transfer function H(z) can be factored into the pole-zero
form as:

The z-plane has the following features:


1. The horizontal axis is the real part of the variable z, and the
vertical axis represents the imaginary part of the variable z.
2. The z-plane is divided into two parts by a unit circle.
3. Each pole is marked on z-plane using the cross-symbol x,
while each zero is plotted using the small circle symbol o.

Example

123

Solution

5.7 Z-Plane Pole-Zero Stability


• The following facts apply to bounded-input/bounded-output
(BIBO) stability:
– If the input to the system is bounded, then the output of
the system will also be bounded, or the impulse
response of the system will go to zero in a finite number
of steps.
– An unstable system is one in which the output of the
system will grow without bound due to any bounded
input, initial condition, or noise, or its impulse response
will grow without bound.
– The impulse response of a marginally stable system
stays at a constant level or oscillates between the two
finite values.
• If the outermost poles of the z-transfer function H(z) are
inside the unit circle on the z-plane pole-zero plot, then
the system is stable.
• If the outermost poles of H(z) are outside the unit circle on
the z-plane pole-zero plot, the system is unstable.
• If the outermost poles are first-order poles of H(z) and on
the unit circle on the z-plane pole-zero plot, then the
system is marginally stable.
• If the outermost poles are multiple-order poles of H(z) and
on the unit circle on the z-plane pole-zero plot, then the
system is unstable.
• The zeros do not affect the system stability.
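These rules can be checked numerically from the denominator coefficients. A minimal sketch (assuming NumPy; the transfer function H(z) = (z + 1)/(z² − 0.9z + 0.2) is an illustrative choice, not one of the systems in the figures referenced below):

```python
import numpy as np

# Denominator of H(z) = (z + 1) / (z^2 - 0.9 z + 0.2), in descending powers of z
den = [1.0, -0.9, 0.2]

poles = np.roots(den)                       # poles are the roots of the denominator
radii = np.abs(poles)
print(poles, radii)                         # poles at z = 0.5 and z = 0.4

if np.all(radii < 1.0):
    print("All poles inside the unit circle: stable")
elif np.any(radii > 1.0):
    print("A pole outside the unit circle: unstable")
else:
    print("Outermost pole(s) on the unit circle: check multiplicity "
          "(first-order: marginally stable, repeated: unstable)")
```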

Example of stable system

125

Example of unstable system

Example of marginally stable system

126

Example: Classify the systems in the following figure in terms of stability.

127

Problems

128

129

130

131

6 Digital filter design
6.1 Introduction to Digital Filters
• After ADC, the digitized noisy signal 𝑥(𝑛), where 𝑛 is the
sample number, can be enhanced using digital filtering.
• For example, if our useful signal contains low-frequency
components, the high-frequency components above the
cutoff frequency of our useful signal are considered as noise,
which can be removed by using a digital lowpass filter. Then,
we set up the DSP block to operate as a simple digital
lowpass filter.
• After processing the digitized noisy signal 𝑥(𝑛), the digital
lowpass filter produces a clean digital signal 𝑦(𝑛). We can
apply the cleaned signal 𝑦(𝑛) to another DSP algorithm for
a different application or convert it to analog signal via DAC
and the reconstruction filter.
• The digitized noisy signal and clean digital signal,
respectively, are plotted in the above figure, where the top
plot shows the digitized noisy signal, while the bottom plot
demonstrates the clean digital signal obtained by applying
the digital lowpass filter.
• Typical applications of noise filtering include acquisition of
clean digital audio and biomedical signal and enhancement
of speech recording and others.

132

Example: Two-band digital crossover

• The figure shows a typical two-band digital crossover


system consisting of two speaker drivers: A woofer and a
tweeter.
• The woofer responds to low frequencies,
• The tweeter responds to high frequencies.
• The incoming digital audio signal is split into two bands
using a digital low pass filter and a digital high pass filter in
parallel.
• Then the separated audio signals are amplified.
• Finally, they are sent to their corresponding speaker drivers.

133

Example: Interference Cancellation in ECG

• In ECG recording, there often exists unwanted 60-Hz


interference.
• This interference comes from the power line and includes
magnetic induction.
• Displacement currents in leads or in the body of the patient,
effects from equipment interconnections, and other
imperfections.
• One of the effective choice can be the use of a digital notch
filter, which eliminates the 60-Hz interference while keeping
all the other useful information.
6.2 Difference Equation and Digital Filtering
• Let 𝒙(𝒏) and y(𝒏) be a DSP system’s input and output,
respectively. We can express the relationship between the
input and the output of a DSP system by the following
difference equation.

134

• It can be observed that the DSP system output is the
weighted summation of the current input value 𝑥(𝑛), its
past values 𝑥(𝑛 − 1), … , 𝑥(𝑛 − 𝑀), and the past output
sequence 𝑦(𝑛 − 1), … , 𝑦(𝑛 − 𝑁).
• The system can be verified as linear, time invariant, and
causal.
• If the initial conditions are specified, we can compute system
output (time response) 𝑦(𝑛) recursively. This process is
referred to as digital filtering.
• In the time domain, the filter output equals the convolution
of the input with the system impulse response:
𝑦(𝑛) = 𝑥(𝑛) ∗ ℎ(𝑛)
• This can be mapped to the corresponding difference equation

• However, in the z-domain, the filter output equals the
product of the input transform and the system transfer
function:
𝑌(𝑧) = 𝑋(𝑧) 𝐻(𝑧)
• That can be mapped to the following equations

135

6.3 Digital Filter Frequency Response
• Frequency response means the reaction (response) of the
system over different frequency ranges.
• To find the system frequency response we substitute 𝑧 =
𝑒^(𝑗Ω) into the z-transfer function H(z) to acquire the digital
frequency response, which is converted into the magnitude
frequency response and the phase response, that is

• For the angular frequency 𝛺 equal to

• The magnitude frequency response and phase response, can


be expressed as

• The magnitude frequency response, often expressed in


decibels, is defined as:

136

Frequency Response properties
1. Periodicity.
(a) Frequency response: 𝐻(𝑒 𝑗Ω ) = 𝐻(𝑒 𝑗(Ω+𝑘2𝜋) ).
(b) Magnitude frequency response: |𝐻(𝑒 𝑗Ω )| = |𝐻(𝑒 𝑗(Ω+𝑘2𝜋) )|.
(c) Phase response: ∠𝐻(𝑒 𝑗Ω ) = ∠𝐻(𝑒 𝑗(Ω+𝑘2𝜋) ).
2. Symmetry (for real filter coefficients).
(a) Magnitude frequency response: |𝐻(𝑒^(−𝑗Ω))| = |𝐻(𝑒^(𝑗Ω))|.
(b) Phase response: ∠𝐻(𝑒^(−𝑗Ω)) = −∠𝐻(𝑒^(𝑗Ω)).
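A sketch of how these responses can be evaluated numerically (assuming SciPy; the first-order lowpass example H(z) = z/(z − 0.5) from Section 6.4.1 is used, rewritten as 1/(1 − 0.5z⁻¹) for the coefficient arrays):

```python
import numpy as np
from scipy.signal import freqz

# H(z) = z / (z - 0.5) = 1 / (1 - 0.5 z^-1): coefficients in powers of z^-1
b = [1.0]
a = [1.0, -0.5]

w, H = freqz(b, a, worN=512)            # w: digital frequency Omega in rad/sample, from 0 toward pi
mag_db = 20 * np.log10(np.abs(H))       # magnitude frequency response in dB
phase = np.angle(H)                     # phase response in radians

print(round(mag_db[0], 2), round(mag_db[-1], 2))   # ~6.02 dB near Omega=0, ~-3.5 dB near Omega=pi
```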


6.4 Basics Types of Filtering
6.4.1 Low pass filter (LPF)

• In LPF, the low-frequency components are passed through


the filter while the high-frequency components are
attenuated.
• Ωp and Ωs are the passband cutoff frequency and the
stopband cutoff frequency.

Example of non-ideal LPF Filter


𝑧
𝐻 (𝑧) =
𝑧 − 0.5

139

6.4.2 High pass filter (HPF)

• In HPF, the high-frequency components are passed through


the filter while the low-frequency components are attenuated.
• Ωp and Ωs are the passband cutoff frequency and the
stopband cutoff frequency.

Example of non-ideal HPF Filters


𝐻(𝑧) = 1 − 0.5𝑧 −1

140

6.4.3 Band pass filter (BPF)

• The BPF attenuates both low- and high-frequency


components while keeping the middle frequency
components.
• ΩpL and ΩsL are the lower passband cutoff frequency and
lower stopband cutoff frequency, respectively
• ΩpH and ΩsH are the upper passband cutoff frequency and
upper stopband cutoff frequency, respectively.

Example of non-ideal BPF Filters

141

6.4.4 Band stop filter (BSF)

• The band stop (band reject or notch) filter rejects the middle-
frequency components and accepts both the low- and the
high-frequency components.
• ΩpL and ΩsL are the lower passband cutoff frequency and
lower stopband cutoff frequency, respectively
• ΩpH and ΩsH are the upper passband cutoff frequency and
upper stopband cutoff frequency, respectively.

Examples of non-ideal BSF Filters

142

6.5 FIR & IIR Filter Design
• FIR refers to finite impulse response. FIR filter is
represented by the following input output relationship:

• Thus, the transfer function, which depicts the FIR filter can
be expressed as:

• Designing an FIR filter means finding coefficients


𝑏0, 𝑏1, … . , 𝑏𝑘 that achieve a required frequency response.
• IIR refers to infinite impulse response. IIR filter is described
using the difference equation

• The FIR filter is simple to design and always stable.

143

6.6 Realization of Digital Filters

𝑌(𝑧) = 𝑋(𝑧). 𝐻(𝑧)


• Filters described by 𝐻(𝑧) can be realized using a
combination of adder, multiplier, and delay unit as shift
registers.

144

• Digital filters described by the transfer function 𝐻(𝑧) may
be generally realized into the following forms:
1. Direct-form I
2. Direct-form II
3. Cascade
4. Parallel

6.6.1 Direct form I realization

145

6.6.2 Direct form II realization

146

• Comparison between Direct-form I Realization and
Direct-form II Realization
• Both realizations require the same number of multipliers.
• The direct-form II realization requires two accumulators,
while the direct-form I requires one accumulator.
• The direct-form II structure reduces the number of delay
elements (saving memory).
• The other benefit relates to the fixed-point realization, where
the filtering process involves only integer operations.
Scaling of the filter coefficients in the fixed-point
implementation is required to avoid overflow in the
accumulator.

147

• For the direct-form II structure, the numerator and
denominator filter coefficients are scaled separately so that
the overflow problem for each accumulator can easily be
controlled.

6.6.3 CASCADE (SERIES) REALIZATION

6.6.4 Parallel REALIZATION

148

Example:

149

150

Problems

151

7 FIR digital Filter Design
• FIR refers to finite impulse response. FIR filter is
represented by the following input output relationship:

• Thus, the transfer function, which depicts the FIR filter can
be expressed as:

• Designing an FIR filter means finding coefficients


𝑏0, 𝑏1, … . , 𝑏𝑘 that achieve a required frequency response.
• The FIR filter is simple to design and always stable.

Realization Structures of FIR Filters


• Given the transfer function of an FIR filter in the following
equation:

• we can obtain the difference equation as:

152

Thus, using the direct-I form, the filter realization can be like
the shown figure

7.1 Designing an FIR Filter using the Fourier


Transform Method
• For a given Ω𝑐 , Ωℎ , Ω𝑙 , the impulse response of the filter,
ℎ(𝑛) , can be found from the table below, where Ω𝑐 =
2𝜋𝑓𝑐 𝑇𝑠 .
• The obtained filter by this method is a noncausal z-transfer
function of the FIR filter, since the filter transfer function
contains terms with the positive powers of z, which in turn
means that the filter output depends on the future filter inputs.
• To remedy the noncausal z-transfer function, we delay the
truncated impulse response ℎ(𝑛) by 𝑀 samples to yield the
following causal FIR filter:

• where the delay operation is given by

153

154

Example 1

Solution:

155

156

• The figures display the magnitude and phase responses of the
three-tap (M=1) FIR lowpass filter, in the first figure, and the
17-tap (M=8) FIR lowpass filter, in the second figure.
• The oscillations (ripples) exhibited in the passband (main
lobe) and stopband (side lobes) of the magnitude frequency
response constitute the Gibbs effect.
• The Gibbs oscillation is due to the truncation of the infinite
impulse response.
• Using a larger number of filter coefficients will:
• Produce a sharper roll-off characteristic of the transition
band.
• Increase the time delay and the computational complexity
of implementing the FIR filter.

157

Example [2]:

158

7.2 Designing an FIR Filter using window
• The window method is developed to remedy the undesirable
Gibbs oscillations in the passband and stopband of the
designed FIR filter.
• The FIR filter coefficients for Fourier transform with
Window Method can be expressed as:

Window types:

159

Example [3]

160

Effect of using different types of window on the filter
transfer function

161

Practical FIR design for customer specifications

• The normalized transition band is defined as:

where 𝒇𝒑𝒂𝒔𝒔 and 𝒇𝒔𝒕𝒐𝒑 are the passband frequency edge and
stop frequency edge.
• The filter length using the Hamming window can be
determined by

• the cutoff frequency used for design is determined by:

• The stopband attenuation is defined as:

162

• The passband ripple is defined as:

How to design an FIR filter using window?


• Choose the appropriate window from the following table.
• Obtain the FIR filter coefficients h(n) via the Fourier
transform method.
• Multiply the generated FIR filter coefficients by the selected
window sequence.
• Delay the windowed impulse sequence ℎ𝑤(𝑛) by M samples
to get the windowed FIR filter coefficients
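Following the steps above, here is a minimal sketch for a lowpass design (assuming NumPy; the 2000 Hz cutoff, 8000 Hz sampling rate, and 25 taps echo the Hamming-window example that follows, and the ideal-lowpass impulse response used for the Fourier-transform-method coefficients is the standard windowed-sinc result, assumed here since the coefficient table itself is not reproduced):

```python
import numpy as np

fs = 8000.0                 # sampling rate (Hz), illustrative
fc = 2000.0                 # cutoff frequency (Hz), illustrative
N = 25                      # number of taps (odd), so M = (N - 1) / 2 = 12
M = (N - 1) // 2

# Fourier-transform-method (ideal lowpass) coefficients for n = -M..M
n = np.arange(-M, M + 1)
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n)   # ideal lowpass impulse response, h(0) = 2fc/fs

# Multiply by the chosen window (Hamming here), then delay by M samples
w = np.hamming(N)
b = h * w                   # after the shift, b[k] = h_w(k - M), k = 0..N-1 (causal FIR coefficients)

# Quick check of the realized response at DC and in the stopband
H = lambda f: np.sum(b * np.exp(-2j * np.pi * f * np.arange(N) / fs))
print(round(abs(H(0.0)), 3), round(abs(H(3000.0)), 4))   # near 1 in the passband, small in the stopband
```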


Example

164

Example

• The normalized transition band is defined as:

• Selecting the rectangular window will result in a passband


ripple of 0.74dB and stopband attenuation of 21dB. Thus,
this window selection would satisfy the design requirement.
• Although all the other windows satisfy the requirement as
well but this one results in a small number of coefficients.
Next, we determine the length of the filter as:

• We choose the odd number N=25. The cutoff frequency is


determined by (1850+2150)/2=2000Hz.

165

Example:

166

Problems

167

8 IIR digital filter design
• IIR refers to infinite impulse response. IIR filter is
represented by the following input output difference
equation:

• Thus, the transfer function, which depicts the IIR filter can
be expressed as:

• Designing an IIR filter means finding coefficients


𝑏0, 𝑏1, … . , 𝑏𝑀 and 𝑎1, … . , 𝑎𝑁 that achieve a required
frequency response.
• Realization Structures of IIR Filters
There are different technique for IIR filter realization as
explained before:
1. Direct-form I
2. Direct-form II
3. Cascade
4. Parallel

168

• Design of IIR Filters
There are different techniques for designing IIR filters; we will
study the following:
1 Impulse Invariant Design Method.
2 Frequency Transformation of Lowpass IIR Filter.
3 Bilinear Transformation Design Method.
4 Pole-Zero Placement Method.

169

8.1 Impulse Invariant Method for IIR Filter Design

• The transfer function of an analog filter can be obtained in
the Laplace domain using any of the existing analog design
techniques, such as Butterworth and Chebyshev approximations.
• Given the impulse response of a designed analog filter, an
equivalent digital filter impulse response can be obtained
through the following steps:
1. The analog impulse response can be achieved by taking
the inverse Laplace transform of the analog filter H(s).

2. Then, we sample the analog impulse response with a


sampling interval of T and use T as a scale factor, as

3. Taking the z-transform on both sides of previous


equation yields the digital filter as

• Due to the interval size for approximation in practice, we


cannot guarantee that the digital sum for digital filter has
exactly the same value as the one from the integration for
analog filter unless the sampling interval T approaches zero.
• This means that the higher the sampling rate, that is, the
smaller the sampling interval, the more accurately the digital
filter gain matches the analog filter gain.
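A sketch of this conversion using SciPy (the first-order analog filter H(s) = 1/(s + 2) and the sampling interval T = 0.1 s are illustrative assumptions; `cont2discrete` with `method='impulse'` performs the T-scaled sampling of the analog impulse response described in the steps above):

```python
import numpy as np
from scipy.signal import cont2discrete

# Analog prototype (illustrative): H(s) = 1 / (s + 2), impulse response h(t) = e^{-2t} u(t)
num_s, den_s = [1.0], [1.0, 2.0]
T = 0.1                                        # sampling interval in seconds (illustrative)

num_z, den_z, _ = cont2discrete((num_s, den_s), T, method='impulse')
print(np.squeeze(num_z), den_z)
# Expected form: H(z) = T / (1 - e^{-2T} z^{-1}), i.e. a pole at z = e^{-0.2} ≈ 0.819,
# with the factor T acting as the scale factor mentioned in step 2 above.
```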


• Solution:

8.2 Frequency Transformation of Low-pass IIR Filter


• Analog filter design techniques have focused on the design
of lowpass filters. Other types of frequency-selective filters,
such as highpass, bandpass, bandstop, and multiband filters,
are equally important.
• The traditional approach to the design of many continuous-
time frequency-selective filters is to first design a frequency-
normalized prototype lowpass filter and then, using an
algebraic transformation, derive the desired filter from the
prototype lowpass filter.
• Similarly for discrete-time filter, we can first design a
discrete-time prototype lowpass filter using either the
bilinear transformation or impulse invariance and then
perform an algebraic transformation on it to obtain the
desired frequency-selective discrete-time filter.
• Frequency-selective filters of the lowpass, highpass,
bandpass, and bandstop types can be obtained from a
lowpass discrete-time filter by use of transformations very
similar to the bilinear transformation used to transform
continuous-time system functions into discrete-time system
functions.
• Note that we associate the complex variable Z with the
prototype lowpass filter and the complex variable z with the
transformed filter. Then, we define a mapping from the Z-
plane to the z-plane of the form

• Design steps using Frequency Transformation of Lowpass


IIR Filter

172

1. Design a prototype lowpass filter using bilinear
transformation or impulse invariance method.
2. Applying the mapping transformation from the
prototype lowpass Z-plane to the desired filter of z-
plane using the previous equation and the following
table.
Table1 Transformation Table from a Low Pass Digital Filter Prototype to High pass, Band pass, and Band stop filter

Note: the first transformation is the transform of a lowpass filter into another
lowpass filter with different passband and stopband edge frequencies.

173

174

8.3 Bilinear Transformation (BLT) Design Method
• As we explained the steps of designing IIR filter are
1. transforming digital filter specifications into analog
filter specifications.
2. performing analog filter design.
3. applying BLT.

• BLT converts an analog filter to a digital filter through the


following procedure:
1. Prewarp the digital frequency specifications to the analog
frequency specifications.
For the lowpass filter and the highpass filter:

For the bandpass filter and the bandstop filter :

175

2. Perform the prototype transformation using the lowpass
prototype 𝐻𝑃 (𝑠).

3. Substitute the BLT to obtain the digital filter
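A sketch of this procedure using SciPy's `bilinear` routine (the first-order prototype H(s) = ω_a/(s + ω_a), the 8000 Hz sampling rate, and the 1000 Hz cutoff are illustrative assumptions; `bilinear` substitutes s = 2fs(z − 1)/(z + 1), which is the BLT referred to above):

```python
import numpy as np
from scipy.signal import bilinear, freqz

fs = 8000.0                                   # sampling rate (Hz), illustrative
fd = 1000.0                                   # desired digital cutoff frequency (Hz), illustrative

# Step 1: prewarp the digital cutoff to the analog domain
omega_a = 2 * fs * np.tan(np.pi * fd / fs)    # analog cutoff (rad/s)

# Step 2: first-order lowpass prototype H(s) = omega_a / (s + omega_a)
b_s, a_s = [omega_a], [1.0, omega_a]

# Step 3: bilinear transformation s -> 2*fs*(z - 1)/(z + 1)
b_z, a_z = bilinear(b_s, a_s, fs)

w, H = freqz(b_z, a_z, worN=[2 * np.pi * fd / fs])   # evaluate at the cutoff frequency
print(b_z, a_z, round(abs(H[0]), 3))                 # |H| ≈ 0.707 (-3 dB) at 1000 Hz
```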


8.4 Pole-Zero Placement Method for IIR Filter design
• This technique utilizes the effects of the pole-zero placement
on the magnitude response in the z-plane.
• In the z-plane, when we place a pair of complex conjugate
zeros at a given point on the unit circle with an angle, we
will have a numerator factor of (𝑧 − 𝑒 𝑗𝜃 )(𝑧 − 𝑒 −𝑗𝜃 ) in the
transfer function.
• When a pair of complex conjugate poles are placed at a given
point within the unit circle, we have a denominator factor of
(𝑧 − 𝑟 𝑒 𝑗𝜃 )(𝑧 − 𝑟 𝑒 −𝑗𝜃 ) , where 𝑟 is the radius chosen to be
less than and close to 1 to place the poles inside the unit
circle.
• Therefore, we can reduce the magnitude response using zero
placement, while we increase the magnitude response using
pole placement.

• Placing a combination of poles and zeros will result in


different frequency responses, such as lowpass, high pass,
band pass, and band stop.
• Practically, the pole-zero placement method has a good
performance when the band pass and band stop filters have
very narrow bandwidth requirements.
178

• Practically, the pole-zero placement method has a good
performance when the low pass and high pass filters have
either very low cutoff frequency close to DC or very high
cutoff frequency close to the folding frequency
8.4.1 First-Order Low Pass Filter Design:

• The design equations for a first-order low pass filter using
pole-zero placement are summarized as

• The transfer function is

• where K is a scale factor to adjust the low pass filter to have
a unit passband gain, given by

179

8.4.2 First-Order High Pass Filter Design:

• The design equations for a first-order high pass filter using
pole-zero placement are summarized as

• The transfer function is

• where K is a scale factor to adjust the high pass filter to
have a unit passband gain, given by

8.4.3 Second-Order Band Pass Filter Design:

• The design equations for a bandpass filter using pole-zero


placement are summarized as

180

• where K is a scale factor to adjust the bandpass filter to have
a unit passband gain given by

8.4.4 Second-Order Band stop (Notch) Filter Design:

• The design equations for a band stop filter using pole-zero


placement are summarized as

• where K is a scale factor to adjust the band stop filter to have
a unit passband gain, given by

181

• Solution:

182

Solution

183

Problems

184

185

References
[1] B. P. Lathi, Signal Processing and Linear Systems, 2nd edition.
[2] L. Tan and J. Jiang, Digital Signal Processing: Fundamentals and
Applications. Academic Press, 2018.

