x_d(n) = Σ_{i=1}^{B} 2^{−i} · x_i(n), (1.2)
where x_i(n) are the individual bits of different significance, i.e., 0 or 1, and B is the word length.
In, for example, CD players the word length is typically 16 and the sampling frequency is 44.1 kHz. So, 1 hour of music corresponds to 44100 · 60 · 60 · 2 ≈ 320 MSamples (notice the factor 2 to get stereo sound). Since we are using 16 bits (2 bytes) per sample, it boils down to approximately 640 MBytes. A normal compact disk allows some 720 MByte of storage. Note that this is the full, uncompressed format.
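The storage arithmetic above can be checked directly; a minimal sketch (1 MByte counted as 10^6 bytes here):

```python
fs = 44100         # sampling frequency in Hz
seconds = 60 * 60  # one hour of music
channels = 2       # stereo

samples = fs * seconds * channels  # total number of samples
bytes_total = samples * 2          # 16-bit word length = 2 bytes per sample

print(samples / 1e6)      # 317.52 MSamples, i.e. roughly 320 MSamples
print(bytes_total / 1e6)  # 635.04 MBytes, i.e. approximately 640 MBytes
```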
Now the digital signal, x_d(n), can be fed into the digital system so that some type of signal processing can be performed on it. The output of the digital system is called y_d(n).
The digital signal, y_d(n), is then fed to a D/A converter (discrete-time system) which typically contains a weighting mechanism together with a reconstruction block. The output of the D/A converter is called y_a(t).
To filter out components that occur at multiples of the clock frequency, an analog reconstruction filter (continuous-time system) is added.
The output of the system, y(t), is now once again an analog signal.
1.10.1 Why Digital Systems?
The advantages:
Insensitivity towards component variations, i.e., more reliable performance: no variations in component values over frequency, temperature, age, etc.
Functionality: digital systems can perform tasks that analog components cannot do (or at least cannot do at a reasonable cost).
CHAPTER 1. INTRODUCTION 13
Flexibility, the system can easily be reprogrammed or tuned, etc.
Precision: enables a higher accuracy than corresponding analog circuits.
The disadvantages:
Power consumption: the paradox is that the digital system normally consumes more (often much more) power than the analog one.
Implementation overhead: the A/D and D/A converters are required and they normally do not come for free.
1.11 What to do in This Course?
Study the function and functionality of discrete-time systems, i.e., the relation between input and output sequences, x(n) and y(n).
Mainly study linear and time-invariant systems. (This means that we are going to consider the samples to be continuous-valued (real numbers) and not quantized/discrete values.) In, e.g., matlab implementations one could instead talk about bit-true descriptions, where we look at numbers with a certain word length.
How to implement a physical device that performs the function is studied in other courses, for example how the 0s and 1s are represented in the real, physical world.
1.11.1 What is a Linear System?
Suppose that we have a discrete-time system that performs a function on the input sequence x(n) as x(n) → y(n) and produces an output sequence, y(n).
If it holds that the system scales,
a · x(n) → a · y(n), (1.3)
then we have a homogeneous system (a is a real number).
Further, assume that we apply two (different) sequences to the system and thus get two (different) output sequences, i.e.,
x_1(n) → y_1(n) (1.4)
x_2(n) → y_2(n) (1.5)
If then also
x_1(n) + x_2(n) → y_1(n) + y_2(n) (1.6)
holds, the system is additive.
Combining these two gives us the definition of a linear system. The following equation must hold if the equations above are met:
a_1 · x_1(n) + a_2 · x_2(n) → a_1 · y_1(n) + a_2 · y_2(n) (1.7)
or, generally speaking:
Σ_{k=−∞}^{∞} a_k · x_k(n) → Σ_{k=−∞}^{∞} a_k · y_k(n). (1.8)
Figure 1.8: Illustrating the linear system.
1.11.2 What is a Time-Invariant System?
Suppose
x(n) → y(n), (1.9)
then the system is time-invariant if (and only if)
x(n − k) → y(n − k). (1.10)
Essentially this means that if we apply the same input sequence at different time points, the output sequence should be identical. Pretty logical.
Figure 1.9: Illustrating the time-invariant system.
Chapter 2
Sequences
2.1 Introduction / Scope
Labs
Lectures affected due to late arrival
2.2 Recapture from Previous Lecture
2.2.1 What is a Linear System?
Suppose that we have a discrete-time system that performs a function on the input sequence x(n) as x(n) → y(n) and thereby produces the output sequence, y(n).
If it holds that the system scales, i.e., is homogeneous as well as additive, it is also linear. Mathematically, we say that:
Σ_{k=−∞}^{∞} a_k · x_k(n) → Σ_{k=−∞}^{∞} a_k · y_k(n), (2.1)
where x_k → y_k are the individual input and output signals of our system.
Below are a couple of examples showing how to prove whether a system is linear or not:
Example Yes
Consider the first example where the system has an output given by:
y(n) = x(n) + x(n − 2). (2.2)
Step 1: Assume we have two different input sequences yielding two output sequences:
x_1(n) → y_1(n) : y_1(n) = x_1(n) + x_1(n − 2) (2.3)
x_2(n) → y_2(n) : y_2(n) = x_2(n) + x_2(n − 2) (2.4)
Step 2: Assume the linear combination of the inputs:
x_3(n) = a_1 · x_1(n) + a_2 · x_2(n) (2.5)
and apply it to the system. This yields the following chain of thoughts:
y_3(n) = x_3(n) + x_3(n − 2) =
= [a_1 · x_1(n) + a_2 · x_2(n)] + [a_1 · x_1(n − 2) + a_2 · x_2(n − 2)]
= a_1 · x_1(n) + a_1 · x_1(n − 2) + a_2 · x_2(n) + a_2 · x_2(n − 2)
= a_1 · (x_1(n) + x_1(n − 2)) + a_2 · (x_2(n) + x_2(n − 2))
= a_1 · y_1(n) + a_2 · y_2(n).
Step 3: Verify that the output result is identical to the expected requirement
on linearity.
Example No
Consider the second example where the system has an output given by:
y(n) = x(n) · x(n − 2). (2.6)
Step 1: Assume we have two different input sequences yielding two output sequences:
x_1(n) → y_1(n) : y_1(n) = x_1(n) · x_1(n − 2) (2.7)
x_2(n) → y_2(n) : y_2(n) = x_2(n) · x_2(n − 2) (2.8)
Step 2: Assume the linear combination of the inputs:
x_3(n) = a_1 · x_1(n) + a_2 · x_2(n) (2.9)
and apply it to the system. This yields the following chain of thoughts:
y_3(n) = x_3(n) · x_3(n − 2) =
= (a_1 · x_1(n) + a_2 · x_2(n)) · (a_1 · x_1(n − 2) + a_2 · x_2(n − 2))
= a_1^2 · x_1(n) · x_1(n − 2) + a_1 · a_2 · x_1(n) · x_2(n − 2) + a_1 · a_2 · x_1(n − 2) · x_2(n) + a_2^2 · x_2(n) · x_2(n − 2)
= a_1 · a_1 · y_1(n) + a_2 · a_2 · y_2(n) + (cross terms)
= a_1^2 · y_1(n) + a_2^2 · y_2(n) + . . .
Step 3: And we see that there are several additional terms showing up in
the expression, i.e., it is not a linear system.
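These two examples can also be checked numerically. A minimal sketch (plain Python, with a couple of arbitrary test signals, not part of the original notes) that verifies superposition for y(n) = x(n) + x(n − 2) and shows that it fails for y(n) = x(n) · x(n − 2):

```python
import math

def system_sum(x, n):
    """The linear example: y(n) = x(n) + x(n - 2)."""
    return x(n) + x(n - 2)

def system_prod(x, n):
    """The non-linear example: y(n) = x(n) * x(n - 2)."""
    return x(n) * x(n - 2)

def is_linear(system):
    """Check a1*x1(n) + a2*x2(n) -> a1*y1(n) + a2*y2(n) on test signals."""
    x1 = lambda n: math.sin(0.3 * n)
    x2 = lambda n: math.cos(0.7 * n) + 0.5
    a1, a2 = 2.0, -3.0
    x3 = lambda n: a1 * x1(n) + a2 * x2(n)  # linear combination of the inputs
    return all(abs(system(x3, n) - (a1 * system(x1, n) + a2 * system(x2, n))) < 1e-9
               for n in range(10))

print(is_linear(system_sum))   # True
print(is_linear(system_prod))  # False
```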
2.2.2 What is a Time-Invariant System?
Essentially, we found in the previous lecture that the system is time-invariant if a time-shift of the input sequence results in an identical time-shift of the output sequence. So if x(n) → y(n) is the original relation, we require also that x(n − k) → y(n − k) is valid.
Example Yes
Assume x(n) → y(n) and that the system is the linear system
y(n) = x(n) + x(n − 2). (2.10)
Step 1: Define y(n − k) as it would look if the system were time-invariant:
y(n − k) = x(n − k) + x(n − k − 2). (2.11)
Step 2: Let
x_1(n) = x(n − k) (2.12)
and then we get (following from the definition of a linear system):
y_1(n) = x_1(n) + x_1(n − 2) = x(n − k) + x(n − 2 − k). (2.13)
Step 3: Compare y(n − k) with y_1(n) and check if they are identical:
y_1(n) = x_1(n) + x_1(n − 2) = x(n − k) + x(n − 2 − k) = y(n − k). (2.14)
Figure 2.1: Illustration of a time-variant and a time-invariant system. (Panels: input sequence x(n); output sequence y(n); input sequence x(n − 2); time-invariant output y(n − 2); time-variant output y(n − 2).)
Example No
Assume x(n) → y(n) and that the system is linear as
y(n) = n · x(n). (2.15)
Step 1: Define y(n − k) as it would look if the system were time-invariant:
y(n − k) = (n − k) · x(n − k). (2.16)
Step 2: Let
x_1(n) = x(n − k) (2.17)
and then we get (following from the definition of a linear system):
y_1(n) = n · x_1(n) = n · x(n − k). (2.18)
Step 3: Compare y(n − k) with y_1(n) and check if they are identical:
y_1(n) = n · x(n − k) = (n − k) · x(n − k) + k · x(n − k) = y(n − k) + k · x(n − k) ≠ y(n − k).
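The same numeric style of check works for time-invariance. A small sketch (the test signal is an arbitrary assumption, not from the notes) comparing a shifted input against a shifted output for the two examples above:

```python
import math

def system_ti(x, n):
    """Time-invariant example: y(n) = x(n) + x(n - 2)."""
    return x(n) + x(n - 2)

def system_tv(x, n):
    """Time-variant example: y(n) = n * x(n)."""
    return n * x(n)

def is_time_invariant(system, k=3):
    """Check that feeding x(n - k) into the system gives y(n - k)."""
    x = lambda n: math.sin(0.3 * n) + 0.2 * n
    x_shifted = lambda n: x(n - k)
    return all(abs(system(x_shifted, n) - system(x, n - k)) < 1e-9
               for n in range(10))

print(is_time_invariant(system_ti))  # True
print(is_time_invariant(system_tv))  # False
```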
2.2.3 What is a Causal System?
Nothing happens before the signal is applied. We leave it as an exercise to prove this...
2.3 Sequences
Sequences are used to represent a discrete-time signal (see the example in figure 2.3). Notice that values are not defined between the different bins; there simply does not exist any value there. In the continuous-time domain, the x axis is defined over the real numbers (R), whereas in the discrete-time domain, the x axis is defined over the integers.
The different classes of signals influence the analyses and the methods used to analyze the signal.
Fundamental sequences that are important to know in order to analyze discrete-time systems
A few fundamental operations for discrete-time LTI systems, mainly LDEs.
2.3.1 Finite-length Sequences
Sequences that have a finite length are called finite-length sequences. This means that
x(n) = 0, n < N_1 or n > N_2, (2.19)
i.e., values are only non-zero (or defined) between N_1 and N_2.
Figure 2.2: Example of a sequence representing the signal.
2.3.2 Infinite-length Sequences
Sequences that are defined forever are called infinite-length sequences. This type could essentially be of three different kinds: right-sided, left-sided, or double-sided:
x(n) = 0, n < N_1 (right-sided) (2.20)
x(n) = 0, n > N_2 (left-sided) (2.21)
x(n) ≠ 0 for some n < N_1 and some n > N_2 (double-sided). (2.22)
2.3.3 Periodic Sequences
Periodic sequences are repetitive (and typically infinite-length), and they can be described by the following relation:
x(n) = x(n + N), (2.23)
where N is an integer.
Figure 2.3: Examples of finite-length, infinite-length, right-sided infinite-length, and left-sided infinite-length sequences.
2.3.4 Bounded Sequence
A sequence is defined to be bounded as long as the highest absolute value of the sequence is a finite number:
|x(n)| ≤ M < ∞. (2.24)
This definition makes a lot of sense once we discuss the stability of a system.
2.3.5 Absolute-Summable Sequence
If the sequence is absolute-summable, the properties of the sequence are further limited:
Σ_{n=−∞}^{∞} |x(n)| < ∞. (2.25)
Figure 2.4: Example of a periodic sequence with N = 4.
2.3.6 Absolute-Square Summable Sequence
If the sequence is absolute-square summable, the properties of the sequence are even further limited:
Σ_{n=−∞}^{∞} |x(n)|^2 < ∞. (2.26)
Notice that the term absolute is somewhat pointless in the expression above for real-valued sequences; however, remember that in the digital world there are ways to also represent complex numbers, and then the expression above makes sense.
2.3.7 Even and Odd Sequences
An even sequence is reflected in the y axis and can be described by the following relation:
x(n) = x(−n). (2.27)
An odd sequence is reflected in the y axis too, but then also inverted (reflected in the x axis) and can be described by the following relation:
x(n) = −x(−n). (2.28)
Figure 2.5: Illustration of an even sequence, x_e(n), and an odd sequence, x_o(n).
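Any sequence can be split into an even and an odd part as x_e(n) = (x(n) + x(−n))/2 and x_o(n) = (x(n) − x(−n))/2. This standard decomposition is not spelled out in the text above, but a small sketch of it, using an arbitrary test sequence, is:

```python
def even_part(x, n):
    # x_e(n) = (x(n) + x(-n)) / 2 satisfies x_e(n) = x_e(-n)
    return 0.5 * (x(n) + x(-n))

def odd_part(x, n):
    # x_o(n) = (x(n) - x(-n)) / 2 satisfies x_o(n) = -x_o(-n)
    return 0.5 * (x(n) - x(-n))

x = lambda n: n * n + n  # arbitrary test sequence, neither even nor odd

for n in range(-5, 6):
    assert even_part(x, n) == even_part(x, -n)       # even symmetry, eq. (2.27)
    assert odd_part(x, n) == -odd_part(x, -n)        # odd symmetry, eq. (2.28)
    assert even_part(x, n) + odd_part(x, n) == x(n)  # parts sum back to x
print("decomposition ok")
```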
2.3.8 (Pseudo)Energy
(Pseudo)energy can be defined as
E = Σ_{n=−∞}^{∞} |x(n)|^2. (2.29)
In some cases the energy can be infinite, i.e., E → ∞. Notice also that it does not have any unit/dimension.
2.3.9 Average (Pseudo)Power
The average (pseudo)power can be defined as:
P = lim_{N→∞} 1/(2N + 1) · Σ_{n=−N}^{N} |x(n)|^2. (2.30)
If the sequence is periodic (x(n) = x(n + N)), we can define the average power as
P = 1/N · Σ_{n=0}^{N−1} |x(n)|^2. (2.31)
Some strange results that come out of the above definitions are that:
E < ∞ ⇒ P = 0 and P > 0 ⇒ E = ∞.
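The definitions and the "strange results" are easy to sanity check numerically; a sketch with truncated sums (the example sequences are chosen here purely for illustration):

```python
def energy(x, N=10000):
    """Truncated version of E = sum over all n of |x(n)|^2."""
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1))

def avg_power(x, N=10000):
    """Truncated version of P = lim 1/(2N+1) * sum_{-N..N} |x(n)|^2."""
    return energy(x, N) / (2 * N + 1)

finite = lambda n: 2.0 if 0 <= n <= 3 else 0.0  # finite-length sequence
print(energy(finite))     # 16.0: finite energy
print(avg_power(finite))  # ~0.0008: tends to 0, i.e. E < inf gives P = 0

periodic = lambda n: [1.0, -2.0, 3.0, 2.0][n % 4]  # the figure 2.4 sequence
P = sum(abs(periodic(n)) ** 2 for n in range(4)) / 4  # eq. (2.31)
print(P)  # (1 + 4 + 9 + 4) / 4 = 4.5, while its energy is infinite
```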
2.4 Operations on Sequences
We will now compile a few important operations on sequences that we will
bump into during the course.
2.4.1 Addition
Addition is simply done bin by bin:
z(n) = x(n) +y(n) (2.32)
Addition is performed by an adder.
2.4.2 Scaling and Multiplication
Multiplication is done bin by bin too:
z(n) = x(n) · y(n). (2.33)
Scaling is essentially similar:
y(n) = A · x(n). (2.34)
(One could go a little bit wild and define the constant as a sequence a(n) = A · δ(n) + A · δ(n − 1) + . . . = A · Σ_k δ(n − k).) Scaling and multiplication are realized with a multiplier.
2.4.3 Shift
We also need to take a look at the shift function, where the sequence is shifted to the right or to the left. This is written as:
y(n) = x(n − k). (2.35)
A time shift is typically realized with a D-latch or what we call a T element.
2.4.4 Reection
A reflection of the sequence is sometimes also required and can be written as:
y(n) = x(k − n). (2.36)
This could for example be a FILO (first in, last out) register in a digital system.
2.4.5 Convolution
This is something we will look into during the next lecture, but it should be part of this list:
y(n) = x(n) ∗ h(n) = Σ_k x(k) · h(n − k) = Σ_k x(n − k) · h(k). (2.37)
Figure 2.6: Illustration of shift and reflection. (Panels: sequence x(n); shifted sequence x(n − 2); reflected and shifted x(2 − n).)
2.5 Fundamental Sequences
A (rather small) set of fundamental sequences can be identified when analysing LTI systems.
Figure 2.7: Illustration of an impulse and step function. (Panels: unit impulse sequence; unit step sequence.)
2.5.1 Unit Impulse Sequence
The unit impulse, sometimes also called the delta function, is defined as
δ(n) = { 1 for n = 0; 0 for n ≠ 0 }.
The unit impulse has the property that if it is applied to the input of an LTI system, the system's characteristic system response is generated. The output, y(n), is identical to the system function, h(n), and is referred to as the impulse response of the system.
Representing a Series with Unit Impulse Sequence
It is quite easy to realize that an arbitrary sequence can be written as a weighted sum of shifted unit impulse functions. For example, assume that the sequence is x(0) = 2, x(1) = 3, x(2) = 1, . . . It is seen that we can write this as:
x(n) = 2 · δ(n − 0) + 3 · δ(n − 1) + 1 · δ(n − 2) + . . . (2.38)
or
x(n) = 2δ(n) + 3δ(n − 1) + 1δ(n − 2) + . . . (2.39)
which in general can be expressed as
x(n) = Σ_{k=−∞}^{∞} x(k) · δ(n − k). (2.40)
(It might appear to be a little bit awkward to do this kind of representation of something that simple, but the relation is used in the derivation of the convolution sum that we will look at in the next lecture.)
2.5.2 Unit Step Sequence
The unit step sequence is defined as
u(n) = { 0 for n < 0; 1 for n ≥ 0 }.
If this function is applied to a linear system, the resulting output is g(n) and is referred to as the step response of the system. Notice also that the unit impulse sequence can be expressed in terms of the step sequence as:
δ(n) = u(n) − u(n − 1), (2.41)
which states that, in a very crude way, δ(n) can be seen as the derivative of the step sequence. Or, in terms of output sequences, the impulse response is the derivative of the step response.
2.5.3 Sinusoidal
A sequence can be described by a sinusoid function as
x(n) = A · sin(ω_0 · n + φ). (2.42)
This is important in the sense that it relates to the Fourier series, which states that any (well, under certain circumstances) periodic function can be expressed as a sum of sinusoids. If we then are studying linear systems, we know that if the input can be written as a sum of weighted products, so can also the output. (We will get back to this later on.)
Notice the interesting fact that in the discrete-time world, the sinusoid is not necessarily a periodic function (!). If the sequence is periodic we require that there is a finite number, N, so that x(n) = x(n + N). This means that (ignoring A and φ):
x(n + N) = sin(ω_0 · (n + N)) = sin(ω_0 · n + ω_0 · N) = x(n) ⇒ ω_0 · N = 2π · m ⇒ ω_0 = 2π · m / N
must hold (m is an integer), i.e., the angular frequency must be some rational fraction of 2π.
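The periodicity condition ω_0 = 2π·m/N can be checked by brute force; a sketch (the search ranges and tolerance are arbitrary choices):

```python
import math

def smallest_period(omega0, max_N=1000, tol=1e-9):
    """Smallest N with sin(omega0*(n + N)) == sin(omega0*n) for all n, or None."""
    for N in range(1, max_N + 1):
        if all(abs(math.sin(omega0 * (n + N)) - math.sin(omega0 * n)) < tol
               for n in range(50)):
            return N
    return None

print(smallest_period(2 * math.pi * 3 / 8))  # 8: omega0 = 2*pi*(3/8) is periodic
print(smallest_period(1.0))                  # None: omega0 = 1 rad never repeats
```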
Figure 2.8: Illustration of a sine sequence and an exponential sequence.
The sinusoid is also referred to as an eigenfunction, since it produces a sinusoid at the output too.
2.5.4 (Real) Exponential
The sequence is given by
x(n) = A · a^n, ∀n, (2.43)
where A and a are real numbers. This type of sequence is commonly generated as a transient once a system is turned on, or if there is a dramatic change in the input signal.
2.5.5 Complex Exponential
This is somewhat of a special case, but remember that in the digital world, complex numbers can be defined and handled. The complex exponential can be written as
x(n) = C · e^{(σ + jω)n} = (2.44)
= |C| · e^{σn} · cos(ωn + φ) + j · |C| · e^{σn} · sin(ωn + φ), (2.45)
where C = |C| · e^{jφ} is a complex constant. Effectively, this is also a complex eigenfunction. Applying this type of sequence at the input will produce a similar sequence at the output.
2.6 Discrete-Time Systems
We have already concluded that we will mainly look at linear (additive and homogeneous), time-invariant, and causal systems. We will now also introduce yet another nice and very important property: stability.
2.6.1 Stability
A discrete-time system is stable if every bounded input sequence yields a bounded output sequence. That is:
|x(n)| ≤ M_1 ⇒ |y(n)| ≤ M_2, ∀n. (2.46)
This is referred to as BIBO (bounded input, bounded output), since it does not make sense to talk about a stable system if the input signal is not stable. Of course, one could have limiters, etc., that make the output bounded even though the input is unbounded; then, however, the system is not linear anymore. An LTI system is stable iff
Σ_{n=−∞}^{∞} |h(n)| < ∞. (2.47)
Some systems can be marginally stable, like for example oscillators. In this case the output toggles even though the input is, for example, constant. However, the output values do not go towards infinity.
Example Yes
For example
y(n) = 2x(n) + 3x(n 2) (2.48)
is stable, since
|y(n)| = |2x(n) + 3x(n 2)|
|2x(n)| +|3x(n 2)| =
= 2|x(n)| + 3|x(n 2)|
5M
1
= M
2
assuming that the input sequence is also bounded to |x(n)| M
1
.
Example No
Assume that we have a system that generates the output as
y(n) = n · x(n). (2.49)
Even if x(n) is bounded to M_1, we see that when n → ∞, y(n) also goes to ∞.
2.7 Conclusions
We can conclude that these are the following properties we want our system
to have:
Discrete-time (a sequence of numbers)
Linear (additive and homogeneous)
Causal (system does not react until input is applied)
Time invariant (it does not matter when we apply the signal, it should
be the same output anyway)
Stable (system input and output levels are bounded)
And parts of this course will focus on how to investigate the system to check that these requirements are met.
Chapter 3
Convolution
This lecture mainly focuses on convolution and how to derive the output result for a given system and given input signal. First we need to recapture some things, though.
3.1 Recapture from Previous Lectures
We identified some desirable/required system properties:
Discrete-time (a sequence of numbers, nothing is defined in-between the samples)
Linear (additive and homogeneous properties)
Causal (does not react until input is applied)
Time invariant (consistent regardless of when we apply a signal)
Stable (input and output levels are bounded under all conditions)
We discussed some different definitions and properties:
odd and even signals, x(n) = −x(−n) and x(n) = x(−n)
Pseudo-energy E = Σ_{n=−∞}^{∞} |x(n)|^2
Pseudo-power P = lim_{N→∞} 1/(2N + 1) · Σ_{n=−N}^{N} |x(n)|^2
As well as some different common sequences:
Step
Impulse
Exponential
Sinusoidal
Complex exponential (combining both above)
3.2 LTI Systems
So the big question is: provided an input signal, x(n), how do we calculate the output signal, y(n)? There are three different approaches available:
Convolution
Graphical approach
Analytical approach
Linear Difference Equations (LDE), where the equations must be given in analytical form
Gives a particular (stationary) and a homogeneous (transient) solution
Iterations might be required to solve the equation due to feedback
Frequency-domain analysis
Transform the input to a frequency-domain description
Apply the system transfer function (much easier in the frequency domain: a multiplication instead of a convolution)
Transform the output back to the sequence domain
In this lecture we will look at convolution and its different ways to approach the problem. Next lecture we will look at linear difference equations, and after that we go for the frequency-domain analysis instead.
3.3 Convolution
3.3.1 But First ...
Consider a system as shown in figure 3.3.1 a). There is a delay element (T) in front of the system, and essentially the input to the system as such becomes x(n − 1). If the system is time-invariant, we are now able to push the T through the system and achieve the result shown in figure 3.3.1 b). We know from the definition that if x(n) → y(n), then also x(n − k) → y(n − k). The relation between the global input x(n) and output y(n) is maintained. Let us now
Figure 3.1: Time invariant.
recapture from the last lecture that we were able to write the input signal in a quite cryptic way:
x(n) = Σ_{k=−∞}^{∞} x(k) · δ(n − k). (3.1)
Notice that this formula is a general description of a linear, time-invariant
system. See the graphical approach in gure 3.3.1. First write the sum as a
Figure 3.2: Graphical illustration of the derivation of the convolution formula.
multiple sum of weighted, x(k), time-shifted versions of δ(n). Now, since we assume the system is linear and time invariant, we are able to push the system box through the adders, through the scaling, and through the Ts. All subboxes then also merge into a single box in front of everything. The input to that box is δ(n) and hence the output must be h(n). Then we see that the h(n) is affected by the same shift, scale, and sum network as the δ(n) itself was, i.e., resulting in the following formula:
y(n) = Σ_{k=−∞}^{∞} x(k) · h(n − k). (3.2)
One can easily see that by replacing m = n − k in the sum we get:
y(n) = Σ_{m=−∞}^{∞} x(n − m) · h(m). (3.3)
Of course to prove the concept one can more analytically just go for the
following:
y(n) = f(x(n)) = f( Σ_k x(k) · δ(n − k) ), (3.4)
where f(·) is the transfer function. Since the system is linear, we know that it is additive and therefore:
y(n) = Σ_k f( x(k) · δ(n − k) ) (3.5)
and also homogeneous, and therefore it holds that
y(n) = Σ_k x(k) · f( δ(n − k) ). (3.6)
We have previously defined the impulse response as exactly f(δ(n)) = h(n), and since the system is time invariant, we also know that f(δ(n − k)) = h(n − k), i.e.,
y(n) = Σ_k x(k) · h(n − k). (3.7)
For two systems in cascade, with impulse responses h_1 and h_2, the same reasoning gives
y(n) = Σ_m [ Σ_k x(k) · h_1(m − k) ] · h_2(n − m), (3.12)
where the bracketed inner sum is x_1(m),
but it also comes out quite neatly in a graph using the same approach we did earlier, or we can use the commutative properties above. We leave that as an exercise.
3.3.3 Two Different Types of Systems
One can divide the systems into two different kinds, depending on the behavior of the impulse response.
IIR Systems, Infinite-length Impulse Response
The length of the impulse response is infinite; it kind of exists forever. It might be approaching 0 rapidly, but it is still non-zero. A common response is for example:
h(n) = { A · b^n for n ≥ 0; 0 for n < 0 }. (3.13)
This type of system contains some feedback function, such as:
y(n) = x(n) + b · y(n − 1). (3.14)
In this type of system all terms in the sum must be evaluated (well, since it is a causal system, all positive k up to n; but since n increases with time we need to store all values that have ever occurred, or the accumulated sum thereof):
y(n) = Σ_{k=0}^{n} x(k) · h(n − k). (3.15)
In an implementation we must be careful with parasitic effects (rounding errors) since we cannot integrate forever in hardware.
FIR Systems, Finite-length Impulse Response
The length of the impulse response is finite, i.e., values are non-zero during a limited period. Typically we could have
h(n) = { 1/N for 0 ≤ n ≤ N − 1; 0 for n < 0 or n ≥ N }. (3.16)
This now means that we can evaluate the convolution sum with a limited number of terms:
y(n) = Σ_{k=0}^{N−1} h(k) · x(n − k) (3.17)
and in this particular case it evaluates to:
y(n) = Σ_{k=0}^{N−1} x(n − k) · (1/N) = (1/N) · Σ_{k=0}^{N−1} x(n − k), (3.18)
which actually calculates a so-called moving average of the signal. In fact, we can generalize this observation and state that any linear system has a type of weighted averaging effect on the input signal.
In an implementation we will be able to calculate the output result exactly, since there is a limited number of terms. As long as the number is reasonably low, of course.
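The moving-average FIR above can be sketched directly (samples before n = 0 are treated as zero here, which corresponds to a causal start-up):

```python
def moving_average(x, N):
    """y(n) = (1/N) * sum_{k=0}^{N-1} x(n - k) for a list x, zeros before n = 0."""
    return [sum(x[n - k] for k in range(N) if n - k >= 0) / N
            for n in range(len(x))]

# An alternating input is smoothed towards its average:
print(moving_average([4.0, 8.0, 4.0, 8.0, 4.0, 8.0], 2))
# [2.0, 6.0, 6.0, 6.0, 6.0, 6.0]
```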
3.3.4 Graphical Convolution
We show the way to do graphical convolution in a number of graphs below. There are at least two methods:
draw the x(n) sequence, shift it repeatedly in graphs, and then sum the overlapping bins of the different shifted sequences weighted by h(0), h(1), etc. (or vice versa)
draw the x(n) sequence in the diagram and then also h(−n) below, i.e., the reversed one; then slide it to the right and sum up all the bin-by-bin products.
We use the same example as we will for the analytical convolution. The impulse response of the LTI system is:
h(n) = { n for 0 ≤ n ≤ 3; 0 otherwise }. (3.19)
3.3.5 Analytical Convolution
Another way is to do it analytically by simply solving the sum if a reasonably
simple description of the input and impulse response is given.
An Example...
For example, if the input signal is given by
x(n) = u(n), (3.20)
i.e., an infinite-length sequence, but either 1 or 0. Let the impulse response be identical to the example in the graphical approach:
h(n) = { n for 0 ≤ n ≤ 3; 0 otherwise }, (3.21)
i.e., a finite-length sequence (FIR system).
Before we do the analysis, we make an observation. Consider the following expression:
Σ_{k=m}^{∞} δ(n − k). (3.22)
This is essentially an integration/accumulation of the delta function. We know that δ(n − k) is only non-zero when n = k. This means that the sum above equals 1 only if n ≥ m, otherwise it is 0.
Figure 3.3: Illustration of graphical convolution (approach 1). The shifted input sequence x(n − k) is multiplied by h(k): h(0) = 0, h(1) = 1, h(2) = 2, h(3) = 3, h(4) = 0.
Figure 3.4: Illustration of graphical convolution (approach 2), showing the shifted impulse response h(n − k) for k = 0, . . . , 4.
The convolution sum becomes:
y(n) = Σ_{k≥0} x(k) · h(n − k), (3.23)
but due to the properties of the input signal, this boils down to (u(n) = 1 for n ≥ 0):
y(n) = Σ_{k=0}^{∞} h(n − k), (3.24)
and due to the FIR property, we have
y(n) = Σ_{k=0}^{n} h(n − k). (3.25)
Then we use the trick of writing a signal as a sum of δ:
h(n) = Σ_{k=0}^{n} h(k) · δ(n − k). (3.26)
From the definition of h(n) above, we see that
h(n) = δ(n − 1) + 2 · δ(n − 2) + 3 · δ(n − 3) (3.27)
and we can therefore write the convolution sum as
y(n) = Σ_{k=0}^{n} [ δ(n − k − 1) + 2 · δ(n − k − 2) + 3 · δ(n − k − 3) ]. (3.28)
Reordering the sum gives us
y(n) = Σ_{k=0}^{n} δ(n − k − 1) + 2 · Σ_{k=0}^{n} δ(n − k − 2) + 3 · Σ_{k=0}^{n} δ(n − k − 3). (3.29)
In each sum, we now do a variable substitution, e.g., m = k + 1, etc., i.e.:
y(n) = Σ_{m=1}^{n+1} δ(n − m) + 2 · Σ_{m=2}^{n+2} δ(n − m) + 3 · Σ_{m=3}^{n+3} δ(n − m). (3.30)
We can now use the previous observation and write down the output results:
y(n) = 0, n ≤ 0
y(1) = 1
y(2) = 3
y(n) = 6, n ≥ 3
Notice that this was the step response of our circuit, and notice also how it relates to the impulse response! The differences between consecutive bins in the step response are 0, 1, 2, 3, 0, 0, etc., i.e., identical to the impulse response itself.
The obvious outcome of this is that the end value of the step response, g(∞), equals the sum over all values in the impulse response, since:
g(n) = Σ_{k=0}^{n} h(k), (3.31)
which is quite convenient to have in the back of your head for sanity checks, etc.
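The relation g(n) = Σ h(k), and the fact that differencing the step response recovers the impulse response, can be sketched in a few lines:

```python
h = [0, 1, 2, 3]  # impulse response from the example above

# Step response: running sum of the impulse response, g(n) = sum_{k=0}^{n} h(k).
g = []
acc = 0
for hn in h:
    acc += hn
    g.append(acc)
print(g)  # [0, 1, 3, 6]: settles at 6, matching y(n) = 6 for n >= 3

# Differences between consecutive step-response bins give back h(n).
diffs = [g[0]] + [g[n] - g[n - 1] for n in range(1, len(g))]
print(diffs)  # [0, 1, 2, 3]
```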
And Another Example...
Let's go a little bit more analytical and assume that we have the impulse response
h(n) = { a^n for n ≥ 0; 0 for n < 0 } (3.32)
and the input signal
x(n) = { b^n for n ≥ 0; 0 for n < 0 }. (3.33)
Notice that the impulse response is now given by an infinite series, i.e., we have an IIR system in this example. The output is still given by the convolution sum:
y(n) = Σ_{k=−∞}^{∞} x(k) · h(n − k) = Σ_{k=0}^{n} x(k) · h(n − k). (3.34)
Notice the limits on the sum: we cannot sum further than to n anyway, since it is a causal system. Inserting the expressions for x and h yields:
y(n) = Σ_{k=0}^{n} b^k · a^{n−k} = Σ_{k=0}^{n} b^k · a^n · a^{−k} = Σ_{k=0}^{n} (b/a)^k · a^n, (3.35)
which can be written as:
y(n) = a^n · Σ_{k=0}^{n} (b/a)^k. (3.36)
Elaborate on bounded input and bounded output in this example. State, for example, that the input is bounded; what requirements does that put on a? The sum expresses a geometric series. If n → ∞, the sum converges if the factor |b/a| is less than unity:
y(n) = a^n · (1 − (b/a)^{n+1}) / (1 − b/a). (3.37)
Add also comments on negative and positive values, as well as values where b > a and vice versa. Notice that if b/a is larger than unity, we have to reorder the sum above and write it as:
y(n) = b^n · Σ_{k=0}^{n} (a/b)^k, (3.38)
which now suddenly evolves into
y(n) = b^n · (1 − (a/b)^{n+1}) / (1 − a/b) (3.39)
instead. And we also have the special case when a = b = c, and then it sums to:
y(n) = Σ_{k=0}^{n} b^k · a^{n−k} = Σ_{k=0}^{n} c^k · c^{n−k} = Σ_{k=0}^{n} c^n = c^n · Σ_{k=0}^{n} 1 = (n + 1) · c^n, (3.40)
which converges for large n as long as |c| < 1.
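The closed forms above can be checked against the direct sum; a sketch (the values of a, b, c, and n are picked arbitrarily):

```python
a, b, n = 0.9, 0.5, 12

# Direct evaluation of eq. (3.35): y(n) = sum_{k=0}^{n} b^k * a^(n-k).
direct = sum(b**k * a**(n - k) for k in range(n + 1))

# Closed form, eq. (3.37): y(n) = a^n * (1 - (b/a)^(n+1)) / (1 - b/a).
closed = a**n * (1 - (b / a)**(n + 1)) / (1 - b / a)
print(abs(direct - closed) < 1e-12)  # True

# Special case a = b = c, eq. (3.40): y(n) = (n + 1) * c^n.
c = 0.7
direct_c = sum(c**k * c**(n - k) for k in range(n + 1))
print(abs(direct_c - (n + 1) * c**n) < 1e-12)  # True
```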
3.3.6 Some Special Cases
If the input signal is periodic, x(n) = x(n + N), we get for example:
y(n) = Σ_{m=0}^{N−1} [ Σ_{k=−∞}^{∞} h(m + k·N) ] · x(n − m). (3.41)
Chapter 4
LDEs and Sequence-Domain
Wrap-up
In this lecture we will go through Linear Difference Equations and also wrap up the sequence domain before we go to the frequency-domain analysis. You will notice that there is a little gap in time before the frequency-domain lectures begin; use that time to practice your skills in sequence-domain analysis.
4.1 Recapture the Convolution
Remember how we made some different efforts to convolve two signals. We tried two different graphical approaches and one analytical on an FIR system, and we wrapped up with an analytical approach on an IIR system. Notice that the graphical approach is not possible for IIR systems.
Another potentially simpler way of performing the convolution is to consider a polynomial multiplication. Assume from the previous examples once again that we have the impulse response
h(n) = { n for 0 ≤ n ≤ 3; 0 otherwise } (4.1)
and the input being the unit step,
x(n) = u(n). (4.2)
We will now assume that we can write the signals as polynomial sequences.
The x(n) = u(n) is always unity for n larger than or equal to zero:
x(q) = 1 + q + q^2 + q^3 + q^4 + . . . (4.3)
and the impulse response only exists in three different bins (the others are zero):
h(q) = 0·1 + 1·q + 2·q^2 + 3·q^3 + 0·q^4 + 0·q^5 + . . . (4.4)
or, less messy:
h(q) = q + 2q^2 + 3q^3. (4.5)
Notice that we made a sloppy and mathematically horrible substitution
(but this is essentially the frequency-domain convolution; it boils
down to the same set of operations):

\delta(n - k) \rightarrow q^k    (4.6)
Now we are going to multiply these two polynomials:

y(q) = h(q) \cdot x(q)    (4.7)

i.e.,

y(q) = \left( q + 2q^2 + 3q^3 \right) \left( 1 + q + q^2 + q^3 + q^4 + \ldots \right)    (4.8)

We will now multiply and identify term by term:

y(q) = q \left( 1 + q + q^2 + q^3 + q^4 + \ldots \right) +
     + 2q^2 \left( 1 + q + q^2 + q^3 + q^4 + \ldots \right) +
     + 3q^3 \left( 1 + q + q^2 + q^3 + q^4 + \ldots \right)
Inserting the q's into the parentheses:

y(q) = q + q^2 + q^3 + q^4 + q^5 + q^6 + \ldots +
     + 2q^2 + 2q^3 + 2q^4 + 2q^5 + 2q^6 + \ldots +
     + 3q^3 + 3q^4 + 3q^5 + 3q^6 + \ldots
Now identify the resulting polynomial term by term (cf. bin by bin in the
previous cases):

y(q) = 1 \cdot q + 3 \cdot q^2 + 6 \cdot q^3 + 6 \cdot q^4 + 6 \cdot q^5 + 6 \cdot q^6 + \ldots    (4.9)

This might actually be the most convenient approach. In the real world,
one would obviously use a computer to do the work instead, and that is also
what you are going to do in the labs.
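In NumPy the polynomial method above is a one-liner, since polynomial multiplication of coefficient sequences is exactly convolution; a minimal sketch of the example, with the unit step truncated to 10 samples:

```python
import numpy as np

# h(n) = n for 0 <= n <= 3, i.e. the coefficients of h(q) = q + 2q^2 + 3q^3
h = np.array([0.0, 1.0, 2.0, 3.0])
x = np.ones(10)                    # first 10 samples of the unit step u(n)

y = np.convolve(h, x)[:10]         # y(q) = h(q) * x(q), first 10 terms
print(y)                           # [0. 1. 3. 6. 6. 6. 6. 6. 6. 6.]
```

The printed coefficients match (4.9): 0, 1, 3, 6, 6, 6, ...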
4.2 LDEs
Let's switch to linear difference equations (LDEs) instead. A typical LTI
system can be described by the linear relation between input and output
signals:

\sum_{k=-\infty}^{\infty} b_k y(n-k) = \sum_{k=-\infty}^{\infty} a_k x(n-k)    (4.10)
a_k and b_k must obviously be constants. We will assume that the system is
causal and that the order of the system is limited, which means that:

\sum_{k=0}^{M} b_k y(n-k) = \sum_{k=0}^{M} a_k x(n-k)    (4.11)

This is now an Mth-order system. Notice, however, that the equation alone
does not determine y(n) uniquely. To narrow the family of solutions down to
one we require an additional M conditions. This is quite often done by
specifying the start values, i.e., the initial conditions.
Obviously there is a special case when b_k = 1, 0, 0, \ldots, 0. In this special
case we get an FIR system:

y(n) = \sum_{k=0}^{M} a_k x(n-k).    (4.12)

Notice also that the impulse response is directly visible (!) in this equation,
since a_k = h(k).
4.2.1 So How Do We Solve/Calculate y(n)?
First of all, the solution can be written analytically as the combination of
two terms:

y(n) = y_p(n) + y_h(n),    (4.13)

where y_p(n) is the particular solution and y_h(n) is the homogeneous solution.
The latter is the solution to the following equation:

\sum_{k=0}^{M} b_k y_h(n-k) = 0    (4.14)

and the former can be found by assuming a certain form given the
characteristics of the input signal. For example, if the input x(n) is a
constant, one can often assume that the output y_p(n) is given by some
expression like y_p(n) = A \cdot n + B. Depending on the input signal, one has
to make different assumptions. The equation then needs to be solved together
with the M initial conditions provided.
Iterative Solution
First, assume a causal LTI system and that x(n) = 0 for n < 0. This
means (again) that:

\sum_{k=0}^{M} b_k y(n-k) = \sum_{k=0}^{M} a_k x(n-k)    (4.15)

and it follows from the assumptions that y(n) = 0 for n < 0 as well. The
equation can be rewritten as:

y(n) = \sum_{k=0}^{M} \frac{a_k}{b_0} x(n-k) - \sum_{k=1}^{M} \frac{b_k}{b_0} y(n-k)    (4.16)

or slightly more conveniently:

y(n) = \sum_{k=0}^{M} \left[ c_k x(n-k) - d_k y(n-k) \right],    (4.17)

where c_k = a_k/b_0 and d_k = b_k/b_0 with d_0 = 0. So ... let's start with n = 0.
y(0) = c_0 x(0) + c_1 x(-1) + c_2 x(-2) + \ldots + c_M x(-M)
       - d_0 y(0) - d_1 y(-1) - \ldots - d_M y(-M)

y(1) = c_0 x(1) + c_1 x(0) + c_2 x(-1) + \ldots + c_M x(-M+1)
       - d_0 y(1) - d_1 y(0) - \ldots - d_M y(-M+1)

y(2) = c_0 x(2) + c_1 x(1) + c_2 x(0) + \ldots + c_M x(-M+2)
       - d_0 y(2) - d_1 y(1) - \ldots - d_M y(-M+2)

\ldots

We know from the assumptions that y and x are all-zero for negative n, and
that d_0 = 0, which enables us to simplify the equation system.
y(0) = c_0 x(0)
y(1) = c_0 x(1) + c_1 x(0) - d_1 y(0)
y(2) = c_0 x(2) + c_1 x(1) + c_2 x(0) - d_1 y(1) - d_2 y(0)
\ldots

As we can see from the system above, the previously calculated values can
be inserted in the next iteration. In row (2) we find the y(0) term that was
previously calculated, in row (3) we find y(0) and y(1), etc.
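The iteration above is straightforward to code; a minimal sketch, assuming the normalized coefficient arrays c and d (with d[0] = 0) and a causal system:

```python
import numpy as np

def lde_iterate(c, d, x, n_out):
    """Iterate y(n) = sum_k c_k x(n-k) - sum_{k>=1} d_k y(n-k),
    assuming x(n) = y(n) = 0 for n < 0 (causal system and input)."""
    y = np.zeros(n_out)
    for n in range(n_out):
        acc = 0.0
        for k, ck in enumerate(c):
            if 0 <= n - k < len(x):
                acc += ck * x[n - k]
        for k, dk in enumerate(d):
            if k >= 1 and n - k >= 0:   # d_0 = 0 by construction
                acc -= dk * y[n - k]
        y[n] = acc
    return y

# First-order example: y(n) + 0.5 y(n-1) = x(n), impulse input
y = lde_iterate(c=[1.0], d=[0.0, 0.5], x=[1.0], n_out=5)
print(y)   # the samples of (-0.5)**n: 1, -0.5, 0.25, ...
```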
An Example
Assume that the causal LTI system is described by the following LDE:
y(n) + b_1 y(n-1) = x(n),    (4.18)

i.e., b_0 = a_0 = 1. Let the input signal be given by the unit impulse:

x(n) = \delta(n).    (4.19)
The iterative solution becomes:
y(0) = x(0) - b_1 y(-1) = 1 - b_1 \cdot 0 = 1
y(1) = x(1) - b_1 y(0) = 0 - b_1 \cdot 1 = -b_1
y(2) = x(2) - b_1 y(1) = 0 - b_1 \cdot (-b_1) = b_1^2
y(3) = x(3) - b_1 y(2) = 0 - b_1 \cdot b_1^2 = -b_1^3
\ldots

and we can hopefully predict the continuation of that series; we get

y(n) = (-b_1)^n.    (4.20)
Notice, however, the beauty of this example! We have applied the unit impulse
at the input of the system. This means that the output is the impulse
response, and thereby we are able to use this result in the standard
convolution formula for other input signals, i.e.,

y(n) = \sum_{k=0}^{n} y(k)|_{x(n)=\delta(n)} \cdot x(n-k),    (4.21)

which gives us two handles to calculate the system output (iteration and/or
convolution).
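As a quick sanity check, the iteratively computed output for an impulse input matches the closed form (-b_1)^n; a small sketch with b_1 = 0.8 assumed as an example value:

```python
# Impulse response of y(n) + b1*y(n-1) = x(n), computed two ways.
b1 = 0.8
N = 8

# Iterative solution with x(n) = delta(n)
y = []
y_prev = 0.0
for n in range(N):
    x_n = 1.0 if n == 0 else 0.0
    y_n = x_n - b1 * y_prev
    y.append(y_n)
    y_prev = y_n

# Closed form (4.20): y(n) = (-b1)^n
closed = [(-b1) ** n for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(y, closed))
```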
Same Example - Another Input Signal
Let the system once again be given by
y(n) + b_1 y(n-1) = x(n),    (4.22)

but now assume that x(n) = u(n), the unit step. Therefore we have

y(n) + b_1 y(n-1) = u(n)    (4.23)
and we assume a causal LTI system (as usual). This assumption actually
gives us a lot of information for all the M possible solutions (well, M = 1 in
this case, but anyway ...).
First we solve the homogeneous equation:

y_h(n) + b_1 y_h(n-1) = 0    (4.24)

We identify the characteristic function of the equation using the difference
operator q:

\left( 1 + b_1 q^{-1} \right) y(n) = 0    (4.25)

where q^{-1} essentially describes a shift of the sequence by one step. We see
from the equation above that, for the equation to be met, we require the
first parenthesis to equal 0, since y(n) is sometimes non-zero. This
means that

1 + b_1 q^{-1} = 0 \Leftrightarrow q + b_1 = 0 \Leftrightarrow q_0 = -b_1    (4.26)

The maths now say that the homogeneous solution equals a linear combination
of all the roots of the operator equation, or

y_h(n) = K \cdot (q_0)^n \Rightarrow y_h(n) = K \cdot (-b_1)^n    (4.27)
For the particular solution we can make the assumption that

y_p(n) = An + B    (4.28)

and then we solve

y_p(n) + b_1 y_p(n-1) = u(n),    (4.29)

i.e.,

A n + B + b_1 \left( A (n-1) + B \right) = 1    (4.30)

which reordered becomes

A(1 + b_1) \cdot n + B(1 + b_1) - A b_1 = 1    (4.31)

We see that the first term cannot survive, since the right side of the
equation is constant, i.e., A = 0, and hence

B(1 + b_1) = 1 \Rightarrow B = \frac{1}{1 + b_1}    (4.32)

We can now combine the two solutions:

y(n) = \underbrace{\frac{1}{1 + b_1}}_{y_p(n)} + \underbrace{K \cdot (-b_1)^n}_{y_h(n)}    (4.33)
Now looking at the start values gives us

y(0) + b_1 \underbrace{y(-1)}_{=0} = 1    (4.34)

which means that

y(0) = 1 = \underbrace{\frac{1}{1 + b_1}}_{y_p(0)} + \underbrace{K}_{y_h(0)},  i.e.,    (4.35)

K = 1 - \frac{1}{1 + b_1} = \frac{b_1}{1 + b_1}    (4.36)
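Putting the pieces together, the step response is y(n) = 1/(1+b_1) + (b_1/(1+b_1))(-b_1)^n; a quick numerical check against the iteration, with b_1 = 0.8 assumed as an example value:

```python
b1 = 0.8
N = 20

# Iterate y(n) = u(n) - b1*y(n-1), causal system, all-zero initial state
y, y_prev = [], 0.0
for n in range(N):
    y_n = 1.0 - b1 * y_prev
    y.append(y_n)
    y_prev = y_n

# Closed form: particular solution plus homogeneous solution
closed = [1 / (1 + b1) + (b1 / (1 + b1)) * (-b1) ** n for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(y, closed))
```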
Chapter 5
Frequency Domain Analysis
5.1 Recap from Previous Lectures
5.1.1 Convolution
Analytical
Can become a somewhat tedious exercise, especially for FIR systems, where
the short impulse response is normally not conveniently expressed in
mathematical terms.
Graphical
Impractical for IIR systems, though it can be useful to investigate the first
few samples to understand the system. We also looked at a method that uses
polynomial multiplication to obtain the convolved result, e.g.,
(1 + x + x^3)(1 + 2x + 4x^2).
5.1.2 Realization/Implementation
We looked at three different realizations of FIR and IIR systems. The
FIR system only contained feed-forward components, whereas the IIR system
contained several feedback components. Using the function:

y(n) = \sum_{k=0}^{M} \frac{a_k}{b_0} x(n-k) - \sum_{k=1}^{M} \frac{b_k}{b_0} y(n-k)    (5.1)

we can find different ways of realizing the system. There were two different
methods to implement the IIR system (direct and transposed form). The
transposed version offered fewer memory (T) components. The transposed
version is simply achieved by combining the two sums in the expression above
as:

y(n) = \sum_{k=0}^{M} \frac{a_k}{b_0} x(n-k) - \sum_{k=1}^{M} \frac{b_k}{b_0} y(n-k)    (5.2)
     = \sum_{k=0}^{M} c_k x(n-k) - \sum_{k=1}^{M} d_k y(n-k)    (5.3)
     = \sum_{k=0}^{M} \left[ c_k x(n-k) - d_k y(n-k) \right]    (5.4)

where a_k/b_0 = c_k and b_k/b_0 = d_k, with especially d_0 = 0. We see that we
can apply a time-shift operator on the linear combination of x and y, thus
minimizing the number of memory elements. How to implement this type of system
(e.g., building the physical apparatus) is not covered in this course. Typical
ways of implementing a system would be, e.g., using a computer, a signal
processor, or an ASIC (application-specific integrated circuit) of some kind.
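Since (5.4) is only an algebraic regrouping of (5.2), the two realizations must produce identical outputs; a small sketch comparing them, with arbitrarily chosen example coefficients (b_0 = 1 assumed):

```python
import numpy as np

a = [1.0, 0.5]          # feed-forward coefficients (b0 = 1 assumed)
b = [1.0, -0.3]         # feedback coefficients
x = np.array([1.0, 2.0, 0.0, -1.0, 0.5])

def direct_form(a, b, x):
    """Direct evaluation of eq. (5.2): two separate sums."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        ff = sum(a[k] * x[n - k] for k in range(len(a)) if n - k >= 0)
        fb = sum(b[k] * y[n - k] for k in range(1, len(b)) if n - k >= 0)
        y[n] = ff - fb
    return y

def combined_form(a, b, x):
    """Eq. (5.4): one sum over the combined terms c_k x - d_k y."""
    d = [0.0] + list(b[1:])    # d_0 = 0 by construction
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(a[k] * x[n - k] - d[k] * y[n - k]
                   for k in range(len(a)) if n - k >= 0)
    return y

assert np.allclose(direct_form(a, b, x), combined_form(a, b, x))
```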
5.2 Frequency (Transform) Domain Intro
We will now take a step towards a slightly more convenient way of describing
the signals: the transform domain. The Fourier transform

x(n) \leftrightarrow X\left( e^{j\omega T} \right)    (5.5)

and the z-transform (cf. the s-domain, or Laplace transform, in the
continuous-time domain)

x(n) \leftrightarrow X(z)    (5.6)

are the two major transforms we study in this course. There are several others
that have slightly different mathematical properties, but can essentially be
used for the same purpose.
Notice that the transform is nothing magical as such; it is merely a
mathematical tool used to simplify the analysis and understanding of your
system. Typical steps are (1) sequence, to (2) transform, and then back to (3)
sequence, and actually, as we have seen in previous lectures, the analyses can
be done in the sequence domain too. The implementation (and realization)
in the direct forms is done in the sequence domain.
Some properties of the transforms are:
The convolution in the time/sequence domain corresponds to multiplication
in the transform domain. This simplifies analysis and especially
calculations.
The Fourier transform contains information about the frequency contents
of the signal, i.e., it describes the signal's spectral domain instead
of the temporal domain. One can, for example, transfer information in
different bands. Compare with radio and its different stations, etc.
The two transforms (Fourier and z) are quite similar, and one can sloppily
say that the Fourier transform is a relaxed special case of the
z-transform. This means that the z-transform can be used on a wider
range of signals, but it also puts tighter requirements on the signal in
terms of stability and causality:

X\left( e^{j\omega T} \right) \doteq X(z) |_{z = e^{j\omega T}}    (5.7)

or

X\left( e^{j\omega T} \right) \doteq X\left( z = e^{\sigma + j\omega T} \right) |_{\sigma = 0}    (5.8)

The z-transform is more three-dimensional in nature (due to the \sigma
and \omega). The \sigma is a kind of damping factor in the transform. We will
later on see that for the z-transform one talks more about a plane than
about an axis, as one does with Fourier transforms.
5.3 Fourier Series
We will start with a special case: the Fourier series. The series is very
similar to the Fourier transform, but is applied to periodic signals.
If x(n) is periodic, i.e., x(n) = x(n + N), then x(n) can be written as:
x(n) = \sum_{k=0}^{N-1} C_k \, e^{j 2\pi k n / N}    (5.9)

which is a sum of a finite number of frequency components (e^{j 2\pi k n / N})
weighted by C_k. The coefficients C_k are uniquely given for a specific input
function, and they are derived as

C_k = \frac{1}{N} \sum_{n=n_0}^{N + n_0 - 1} x(n) \, e^{-j 2\pi k n / N}    (5.10)
The coefficients express the frequency contents of the periodic signal. As an
example, assume that we have

x(n) = \cos\left( 2\pi \frac{n}{4} \right)    (5.11)

which is periodic with N = 4. From Euler's formula we already know that the
cosine can be written as

\cos\frac{2\pi n}{4} = \frac{e^{j 2\pi n/4} + e^{-j 2\pi n/4}}{2}
                     = \frac{e^{j 2\pi n/4} + e^{j 2\pi n} \, e^{-j 2\pi n/4}}{2}
                     = \frac{e^{j 2\pi n/4} + e^{3 \cdot j 2\pi n/4}}{2}    (5.12)
Reordering the function slightly gives us:

\cos\frac{2\pi n}{4} = \frac{1}{2} e^{1 \cdot j 2\pi n/4} + \frac{1}{2} e^{3 \cdot j 2\pi n/4}    (5.13)
Let's now compare this expression with the definition of the Fourier series
above. We can identify term by term:

C_0 = 0 \quad C_1 = \frac{1}{2} \quad C_2 = 0 \quad C_3 = \frac{1}{2}    (5.14)
We know from a previous lecture that there are cases when the sinusoid is
mathematically nonperiodic in the discrete-time case. Euler's formula still
lets us rewrite the function, but we will not be able to map the Euler
expression onto the definition sum. One could truncate the sum, but one would
then find so-called leakage around the non-zero coefficients, i.e., the
coefficients close to the (ideally) non-zero terms will also be non-zero.
The coefficients can also be written in polar form:

C_k = |C_k| \cdot e^{j \arg\{C_k\}}    (5.15)

which means that the product inside the sum can be written as

C_k \, e^{j 2\pi k n / N} = |C_k| \cdot e^{j 2\pi k n / N + j \arg\{C_k\}}    (5.16)

From the polar form we see how each frequency component is scaled by
|C_k| and rotated by \arg\{C_k\}.
As long as x(n) is a sequence consisting of real numbers only, the absolute
values of the Fourier series coefficients form an even function, i.e.,
|C_0| = |C_N|, |C_1| = |C_{N-1}|, etc. The phase instead is an odd function,
i.e., \arg C_k = -\arg C_{N-k}, etc.
We introduce a frequency variable, \omega T, which we set equal to

\omega T = \frac{2\pi k}{N},    (5.17)

i.e.,

e^{j \omega T n} = e^{j 2\pi k n / N}.    (5.18)

\omega T is a frequency variable, or actually an angle, but it can be related
to the frequency f when sampling a continuous-time function. Assume for example
x(n) = x_a(nT) = \sin(\omega T n). f has the unit [1/s], \omega has [rad/s] and
\omega T [rad], where T = 1/f_s has [s].
Notice that the Fourier series coefficients are limited to one rotation, i.e.,
\omega T goes from 0 to 2\pi. (Or, if one likes, from -\pi to \pi, since the
exponential functions can be shifted by j2\pi an arbitrary number of times
without changing value.)
For Fourier series one can also refer to the j\omega method in the
continuous-time domain. We have a steady sinusoid as reference signal, and
then |C_k| is the gain for the particular frequency, and
\phi_k = \arg\{C_k\} is the phase shift of the tone.
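The coefficients in (5.14) can be checked directly from the defining sum (5.10); a small sketch for x(n) = cos(2\pi n/4), with n_0 = 0:

```python
import cmath
import math

N = 4
x = [math.cos(2 * math.pi * n / 4) for n in range(N)]

# Fourier series coefficients, eq. (5.10) with n0 = 0
C = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]

print([round(abs(c), 6) for c in C])   # [0.0, 0.5, 0.0, 0.5]
```

As expected, only C_1 and C_3 are non-zero, each with magnitude 1/2.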
5.4 Fourier Transform
If the signal/sequence is nonperiodic, the Fourier series cannot be applied
properly without getting so-called leakage (we will look at this in more detail
when we discuss discrete Fourier transforms and coherent sampling).
The Fourier transform results in a continuous spectrum instead of the
discrete bins of the Fourier series case. (Actually, it is not too complicated
to derive the Fourier series from the Fourier transform ...) It is defined as:
X\left( e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot e^{-j\omega T n}    (5.19)

(Notice that there are no discrete coefficients/bins as such in the
Fourier transform, but instead a continuous function with respect to the
frequency variable \omega T. We can directly see that the function is periodic,
since adding another 2\pi to the exponential in the sum will just result in
another complete rotation around the complex origin.)
The inverse Fourier transform can be calculated as:
x(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X\left( e^{j\omega T} \right) e^{j\omega T n} \, d(\omega T)    (5.20)

Instead of weighted sequences, we now have weighted superimposed frequency
components that express the frequency contents of a signal. As for the Fourier
series, we can write it in polar form:
X\left( e^{j\omega T} \right) = \underbrace{\left| X\left( e^{j\omega T} \right) \right|}_{\text{Amplitude spectrum}} \cdot \underbrace{e^{j \arg X\left( e^{j\omega T} \right)}}_{\text{Phase spectrum}}    (5.21)
Once again, as for the Fourier series, as long as x(n) is a real-valued
function, the amplitude spectrum is an even function and the phase spectrum
is an odd function. The resulting spectrum, X(e^{j\omega T}), is also a
periodic function since

X\left( e^{j(\omega T + 2\pi)} \right) = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j(\omega T + 2\pi)n}
 = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j\omega T n} \underbrace{e^{-j 2\pi n}}_{=1}
 = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j\omega T n} = X\left( e^{j\omega T} \right)

In the continuous-time domain, X(j\omega) is not (necessarily) periodic and
the \omega is also unlimited. In discrete time, only the \omega T = [-\pi, \pi]
region is studied. And since x(n) is normally a real-valued function, it is
sufficient to study the [0, \pi] range instead.
High frequencies: High frequencies in the continuous-time domain are
essentially when \omega \rightarrow \infty, but for the discrete-time case it
is when \omega T \rightarrow \pi. Notice, however, that the highest possible
frequency in the discrete-time domain is in some sense given by the sampling
theorem. (This is not exactly true, since any higher frequency will fold onto
the low-frequency range.) It is, however, the highest frequency that is
ideally reconstructable in a sampled system.
5.4.1 Example of Fourier Transform
Let the signal sequence be given by
x(n) = a^n \cdot u(n)    (5.22)

where we require |a| < 1 to get a bounded sum. Set up the expression
for the Fourier transform:

X\left( e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot e^{-j\omega T n} = \sum_{n=0}^{\infty} a^n \cdot e^{-j\omega T n}.    (5.23)
This can be written as
X\left( e^{j\omega T} \right) = \sum_{n=0}^{\infty} \left( a \cdot e^{-j\omega T} \right)^n = \frac{1}{1 - a \cdot e^{-j\omega T}}.    (5.24)

The amplitude spectrum equals

\left| X\left( e^{j\omega T} \right) \right| = \frac{1}{\left| 1 - a \cdot e^{-j\omega T} \right|} = \frac{1}{\sqrt{1 + a^2 - 2a\cos(\omega T)}}.    (5.25)
We know that

e^{j\omega T} = \cos(\omega T) + j\sin(\omega T)    (5.26)

or

a \cdot e^{j\omega T} = a\cos(\omega T) + j a \sin(\omega T)    (5.27)

which makes the squared absolute value

\left| 1 - a \cdot e^{-j\omega T} \right|^2 = \left( 1 - a \, e^{-j\omega T} \right) \left( 1 - a \, e^{+j\omega T} \right)    (5.28)

which equals

\left( 1 - a \, e^{-j\omega T} \right) \left( 1 - a \, e^{+j\omega T} \right) = 1 + a^2 - a \left( e^{j\omega T} + e^{-j\omega T} \right)    (5.29)

Once again using Euler:

1 + a^2 - 2a \cdot \frac{e^{j\omega T} + e^{-j\omega T}}{2} = 1 + a^2 - 2a\cos(\omega T)    (5.30)
The phase spectrum equals

\arg X\left( e^{j\omega T} \right) = -\arg\left( 1 - a \, e^{-j\omega T} \right) = -\arctan \frac{a \sin(\omega T)}{1 - a \cos(\omega T)}    (5.31)

(the minus sign comes from the fact that taking the reciprocal of a complex
number negates its argument).
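The closed forms (5.24) and (5.25) can be checked against a truncated evaluation of the defining sum (5.23); a sketch with a = 0.5 assumed as an example value:

```python
import numpy as np

a = 0.5
wT = np.linspace(-np.pi, np.pi, 256)

# Truncated Fourier-transform sum (5.23): a^n decays fast, 100 terms suffice
n = np.arange(100)
X_sum = np.array([np.sum(a**n * np.exp(-1j * w * n)) for w in wT])

# Closed forms (5.24) and (5.25)
X_closed = 1.0 / (1.0 - a * np.exp(-1j * wT))
amp_closed = 1.0 / np.sqrt(1 + a**2 - 2 * a * np.cos(wT))

assert np.allclose(X_sum, X_closed)
assert np.allclose(np.abs(X_closed), amp_closed)
```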
5.4.2 Convergence
From the above example we realize that the sum must converge; otherwise
it is impossible to use the transform (this is also the reason for introducing
the z-transform). Consider the definition of the Fourier transform below:

X\left( e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot e^{-j\omega T n}    (5.32)
The sum above is valid as long as it accumulates to a finite number. This
is achieved as long as the input is absolutely summable, i.e.,

\sum_{n=-\infty}^{\infty} |x(n)| < \infty    (5.33)

We know from mathematics that

\left| \sum_{n=-\infty}^{\infty} x(n) \, e^{-j\omega T n} \right|
 \le \sum_{n=-\infty}^{\infty} \left| x(n) \, e^{-j\omega T n} \right|
 \le \sum_{n=-\infty}^{\infty} |x(n)| \cdot \left| e^{-j\omega T n} \right|
 = \sum_{n=-\infty}^{\infty} |x(n)|
This means that as long as the input is absolutely summable, the Fourier
transform exists. However, for periodic inputs the input is infinitely long,
but a valid output signal could still be given by an LTI system. (The sum
does not converge; the transform is not defined.) So what do we do? We
introduce the Dirac impulse, \delta(\omega T). The Dirac impulse is defined as

\delta(\omega T) = \lim_{\Delta \to 0} P_\Delta(\omega T),    (5.34)

where

P_\Delta(\omega T) = \begin{cases} 1/\Delta & |\omega T| < \Delta/2 \\ 0 & \text{otherwise} \end{cases}

The Dirac impulse has some interesting properties. First of all, the area is
unity (see the function above too):

\int \delta(\omega T) \, d(\omega T) = 1    (5.35)

It can be multiplied and integrated with a function/signal f(\omega T):

\int \delta(\omega T - \omega T_0) f(\omega T) \, d(\omega T) = f(\omega T_0).    (5.37)

The Dirac impulse cuts out a section/an infinitesimally small segment of the
function/signal f(\omega T).
So when does this functional show up? Let's look at the inverse Fourier
transform and the case of a cosine signal sequence:

x(n) = \cos(\omega_0 T n)    (5.38)

and

x(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X\left( e^{j\omega T} \right) e^{j\omega T n} \, d(\omega T).    (5.39)

Before we combine these two equations we take a look at Hr. Dr. Euler again:

x(n) = \cos(\omega_0 T n) = \frac{e^{j\omega_0 T n} + e^{-j\omega_0 T n}}{2} = \frac{1}{2} e^{j\omega_0 T n} + \frac{1}{2} e^{-j\omega_0 T n}.    (5.40)
The combined equation becomes
1
2
e
j
0
Tn
+
1
2
e
j
0
Tn
=
1
2
_
X
_
e
jT
_
e
jTn
d(T), (5.41)
i.e.,
e
j
0
Tn
+e
j
0
Tn
=
1
X
_
e
jT
_
e
jTn
d(T), (5.42)
Now compare with the expressions for the Dirac impulse and associate the
function f(\omega T) with the exponential in the integral above, i.e.,

f(\omega T) = e^{j\omega T n}    (5.43)

We then see that

X\left( e^{j\omega T} \right) = \pi \, \delta(\omega T - \omega_0 T) + \pi \, \delta(\omega T + \omega_0 T)    (5.44)

is a solution to the equation above. So, in short, when it comes to periodic
signals and sinusoids, etc., the Dirac impulse typically shows up in the
spectrum too.
5.4.3 Properties of the Fourier Transform
Linearity
The additive and homogeneous properties hold:

a_1 x_1(n) + a_2 x_2(n) \leftrightarrow a_1 X_1\left( e^{j\omega T} \right) + a_2 X_2\left( e^{j\omega T} \right)    (5.45)
Shift
x(n-k) \leftrightarrow e^{-j\omega T k} \cdot X\left( e^{j\omega T} \right)    (5.46)
Convolution
x_1(n) * x_2(n) \leftrightarrow X_1\left( e^{j\omega T} \right) \cdot X_2\left( e^{j\omega T} \right)    (5.47)

Parseval's Formula
The energy is conserved between the two domains:

E = \sum_{n} |x(n)|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| X\left( e^{j\omega T} \right) \right|^2 d(\omega T)    (5.48)
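Parseval's formula is easy to verify numerically for the earlier example x(n) = a^n u(n), whose exact energy is the geometric sum 1/(1 - a^2); a sketch with a = 0.5 assumed as an example value:

```python
import numpy as np

a = 0.5
# Time-domain energy: sum |x(n)|^2 = sum a^(2n) = 1/(1 - a^2)
E_time = 1.0 / (1.0 - a**2)

# Frequency-domain energy: (1/2pi) * integral of |X(e^{jwT})|^2 over [-pi, pi]
wT = np.linspace(-np.pi, np.pi, 100001)
X = 1.0 / (1.0 - a * np.exp(-1j * wT))
dw = wT[1] - wT[0]
E_freq = np.sum(np.abs(X)**2) * dw / (2 * np.pi)   # simple Riemann sum

assert abs(E_time - E_freq) < 1e-4
print(E_time)   # 1.3333333333333333
```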
Chapter 6
z-transform
In this lecture we will continue the frequency-domain analysis and now focus
on the z-transform, which is an extension of the Fourier transform. But
first ...
6.1 Recap from Previous Lectures
In the previous lecture we looked at the Fourier series and the Fourier
transform.
6.1.1 Fourier Series: Why?
Why and when should one use the Fourier series? It is best applicable
to periodic, finite-length sequences. The advantage is that we
also get a limited number of bins in our transform. Thereby it is easier to
calculate the power in a certain frequency component.
Remember from the definition:

x(n) = \sum_{k=0}^{N-1} C_k \, e^{j 2\pi k n / N} = \sum_{k=0}^{N-1} C_k \, e^{j\omega T n},    (6.1)

where we have replaced \frac{2\pi k}{N} with the paired variable \omega T.
(Compare the expression with the inverse Fourier transform...) As k increases,
we also see that \omega increases. The Fourier series is periodic, and it does
not make sense to define it for k > N or k < 0, in the same way as it did not
make sense to define the Fourier transform for values below 0 and above 2\pi.
If the input sequence is real-valued, we also found that the phase spectrum
is odd and the amplitude spectrum is even around 0 and \pi, making it
sufficient to look at the interval 0 to \pi, or k = 0 to k = N/2.
6.1.2 Fourier Transform Example
If the sequence is non-periodic we have to use another transform, since the
Fourier series is then not defined. The first continuous transform we look
at is the Fourier transform, which is defined as:

X\left( e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot e^{-j\omega T n}    (6.2)

The convergence of the sum is most easily studied in the sequence domain.
For the earlier example, with the input x(n) = a^n u(n) and the impulse
response h(n) = b^n u(n), it mostly boils down to requiring that the
convolution sum converges:

y(n) = \sum_{k=0}^{n} x(k) h(n-k) = \sum_{k=0}^{n} a^k b^{n-k} = b^n \sum_{k=0}^{n} \left( \frac{a}{b} \right)^k = b^n \cdot \frac{1 - (a/b)^{n+1}}{1 - a/b},    (6.6)

assuming that |a/b| < 1. Or we could go to the tables! We find from the tables
that

X\left( e^{j\omega T} \right) = \frac{1}{1 - a \cdot e^{-j\omega T}} \qquad H\left( e^{j\omega T} \right) = \frac{1}{1 - b \cdot e^{-j\omega T}}    (6.7)
We get

Y\left( e^{j\omega T} \right) = H\left( e^{j\omega T} \right) \cdot X\left( e^{j\omega T} \right) = \frac{1}{1 - b \cdot e^{-j\omega T}} \cdot \frac{1}{1 - a \cdot e^{-j\omega T}},    (6.8)

which we can rewrite with partial fractions as

Y\left( e^{j\omega T} \right) = \frac{B}{1 - b \, e^{-j\omega T}} + \frac{A}{1 - a \, e^{-j\omega T}}.    (6.9)
We then try to find A and B, and put:

A + B = 1 \qquad A \, b + B \, a = 0,

which has the solutions

A = -\frac{a/b}{1 - a/b} \qquad B = \frac{1}{1 - a/b}

so we get

Y\left( e^{j\omega T} \right) = \frac{1}{1 - a/b} \cdot \frac{1}{1 - b \cdot e^{-j\omega T}} - \frac{a/b}{1 - a/b} \cdot \frac{1}{1 - a \cdot e^{-j\omega T}}.    (6.10)
Then we apply the table again, yielding the output sequence:

y(n) = \frac{1}{1 - a/b} \cdot b^n \, u(n) - \frac{a/b}{1 - a/b} \cdot a^n \, u(n),    (6.11)

which equals

y(n) = \frac{b^n}{1 - a/b} \left[ 1 - \frac{a}{b} \left( \frac{a}{b} \right)^n \right] u(n) = b^n \cdot \frac{1 - (a/b)^{n+1}}{1 - a/b} \, u(n),    (6.12)

corresponding quite well to the convolved result. In this particular example
it was probably easier to go with the convolution sum, but normally the
sequences become more complex.
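The transform-domain product and the direct convolution must agree; a numerical sketch with a = 0.5 and b = 0.8 assumed as example values:

```python
import numpy as np

a, b = 0.5, 0.8
N = 30
n = np.arange(N)

# Direct convolution of x(n) = a^n u(n) with h(n) = b^n u(n)
y_conv = np.convolve(a**n, b**n)[:N]

# Closed form (6.12): y(n) = b^n (1 - (a/b)^(n+1)) / (1 - a/b)
y_closed = b**n * (1 - (a / b)**(n + 1)) / (1 - a / b)

assert np.allclose(y_conv, y_closed)
```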
6.2 The z-transform
The z-transform is a generalization of the Fourier transform. It can handle a
wider class of sequences and converges when the Fourier transform cannot.
The definition of the double-sided z-transform is:

X(z) = \sum_{n=-\infty}^{\infty} x(n) \cdot z^{-n},    (6.13)

where z = a + jb = r \cdot e^{j\varphi} = e^{\sigma + j\omega T}, etc. (There
are several different ways to represent the complex number.)
6.2.1 Inverse Transform
The inverse z-transform is defined as

x(n) = \frac{1}{j 2\pi} \oint X(z) \cdot z^{n-1} \, dz    (6.14)

which is a complex integral over a contour in the region of convergence (see
below). However, in this course we will not spend time on solving this
integral; instead we will rely on two things: tables and reordering of terms.
CHAPTER 6. Z-TRANSFORM 62
6.2.2 Properties
Linearity
The z-transform is linear, i.e., if x_1(n) \leftrightarrow X_1(z) and
x_2(n) \leftrightarrow X_2(z), then

a_1 x_1(n) + a_2 x_2(n) \leftrightarrow a_1 X_1(z) + a_2 X_2(z).
Shift
A time-shift gives a similar result as for the Fourier transform, i.e., if
x(n) \leftrightarrow X(z), then x(n-k) \leftrightarrow z^{-k} X(z).
Convolution
A convolution in the sequence domain corresponds to a multiplication in
the transform domain, i.e., if x_1(n) \leftrightarrow X_1(z) and
x_2(n) \leftrightarrow X_2(z), then

x_1(n) * x_2(n) \leftrightarrow X_1(z) \cdot X_2(z).
6.2.3 Region of Convergence (ROC)
Now one has to be more careful with the convergence requirements. In
the complex z-plane there will be a region of convergence, in which the
z-transform converges, i.e., where it is bounded by a finite number for all
z. The region is very often donut-shaped and can be described by:

R_1 < |z| < R_2    (6.15)

in which

X(z) < \infty \quad \text{for} \quad R_1 < |z| < R_2    (6.16)
Notice that the region of convergence must be given/specified together with
the transform to uniquely define the sequence (!). As we will see shortly,
there are different sequences that give rise to the same transform, but then
for different regions of convergence. Before we head on, we will just
illustrate the connection with the Fourier transform again. Assume for example
that the complex variable can be written as z = r \cdot e^{j\omega T} (which is
always the case...). Write the z-transform as:

X\left( r \, e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot \underbrace{\left( r \, e^{j\omega T} \right)^{-n}}_{z^{-n}} = \sum_{n=-\infty}^{\infty} \left[ x(n) \cdot r^{-n} \right] e^{-j\omega T n}    (6.17)

Notice now that the z-transform is the Fourier transform of the sequence
[x(n) \cdot r^{-n}], where r^{-n} is a convergence function. The convergence function
damps (or amplifies) the sequence so that the transform converges. The r can
be chosen to damp the sequence hard enough that the sum really is forced to
converge. One could simply say that the original sequence is altered so that
it can be transformed: a transform of the transform. For the special case
r = 1 the function becomes

X\left( 1 \cdot e^{j\omega T} \right) = \sum_{n=-\infty}^{\infty} x(n) \cdot e^{-j\omega T n}    (6.18)

and essentially, if the unit circle is inside the region of convergence
(ROC), the Fourier transform equals the z-transform for r = 1. If it is not
inside, the Fourier transform does not converge.
6.2.4 The Examples
To illustrate the fact that two different sequences can give the same result,
we look at the following two cases:

x(n) = \begin{cases} 0 & n < 0 \\ a^n & n \ge 0 \end{cases}

and

x(n) = \begin{cases} -a^n & n \le -1 \\ 0 & n \ge 0 \end{cases}
Calculate the z-transform of the first sequence:

X(z) = \sum_{n} x(n) \cdot z^{-n} = \sum_{n \ge 0} a^n z^{-n} = \sum_{n \ge 0} \left( a \cdot z^{-1} \right)^n = \frac{1}{1 - a z^{-1}}.    (6.19)

The last step, the calculation of the geometric sum, requires that the sum
converges. According to the maths, this is achieved as long as
|a \cdot z^{-1}| < 1, i.e.,

|z| > |a|.    (6.20)

Notice also that the sum does not converge for the special case z = 0,
since z^{-n} then evaluates to \infty. Therefore, one normally also defines a
so-called zero in the origin to illustrate this fact.
The z-transform of the second example becomes:

X(z) = \sum_{n} x(n) \cdot z^{-n} = -\sum_{n \le -1} a^n z^{-n} = -\sum_{m \ge 1} \left( \frac{z}{a} \right)^m = -\frac{z}{a} \sum_{k \ge 0} \left( \frac{z}{a} \right)^k,    (6.21)
which equals

X(z) = -\frac{z}{a} \cdot \frac{1}{1 - \frac{z}{a}} = -\frac{z}{a} \cdot \frac{a}{a - z} = \frac{z}{z - a} = \frac{1}{1 - a \, z^{-1}}    (6.22)

For the transform to converge, i.e., to be valid, we require that the
geometric sum converges and therefore |z/a| < 1, i.e., |z| < |a|.
Notice that the two results are identical as functions of z, but the
definition/specification of the ROCs makes them unique. In our special case
with a we see that there are two regions: one inside the circle with radius
|a| and one outside (!)
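The two ROCs can be illustrated numerically: the right-sided sum converges to 1/(1 - az^{-1}) only for |z| > |a|, and the left-sided sum converges to the very same function only for |z| < |a|; a sketch with a = 0.5 assumed as an example value:

```python
a = 0.5

def z_transform_right(a, z, terms=2000):
    """Truncated sum of a^n z^-n over n >= 0 (right-sided sequence)."""
    return sum((a / z) ** n for n in range(terms))

def z_transform_left(a, z, terms=2000):
    """Truncated sum of -a^n z^-n over n <= -1 (left-sided sequence)."""
    return sum(-(z / a) ** m for m in range(1, terms))

def closed(z):
    return 1.0 / (1.0 - a / z)

# Right-sided sequence: converges outside the circle |z| = |a| ...
assert abs(z_transform_right(a, 2.0) - closed(2.0)) < 1e-9
# ... left-sided sequence: converges inside, to the same function of z
assert abs(z_transform_left(a, 0.2) - closed(0.2)) < 1e-9
```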
6.2.5 Regions of Convergence
Right-sided sequences
If the sequence is right-sided and infinitely long, i.e., x(n) = 0 for all
n < N_L, then the convergence region will be of the type:

|z| > R_R,    (6.23)

i.e., z has to be in the region furthest away from the origin.
Left-sided sequences
If the sequence is left-sided and infinitely long, i.e., x(n) = 0 for all
n > N_R, then the convergence region will be of the type

|z| < R_L,    (6.24)

i.e., z has to be in the region closest to the origin.
Double-sided sequences
If the sequence is double-sided and infinitely long, the convergence region
will be of the type:

R_R < |z| < R_L,    (6.25)

i.e., z must be within a band.
Finite-length sequences
If the sequence is of finite length, i.e., nonzero only over a finite range
of n, the transform converges for all z except z = \infty if there are bins
on the negative side (N_L < 0), and z = 0 if there are bins on the positive
side (N_R > 0).
Sometimes the double-sided transform is denoted X_{II} and the single-sided
transform X_I. But, maybe more importantly: if the system is causal, we
always have the single-sided transform, and that makes life a little bit
more comfortable.
6.2.6 Example
Assume that the transform is given by

X(z) = \frac{z^2}{(z - 1)(z - 0.1)}    (6.26)

and calculate the inverse sequence x(n). Rewrite as

X(z) = z \cdot \frac{z}{(z - 1)(z - 0.1)} = z \left( \frac{A}{z - 1} + \frac{B}{z - 0.1} \right)    (6.27)

and solve the polynomial equation, i.e.,

A + B = 1 \qquad 0.1 A + B = 0

This implies that

A = \frac{1}{0.9} \qquad B = -\frac{0.1}{0.9}

This means that the transform can be written as

X(z) = z \left( \frac{1/0.9}{z - 1} - \frac{0.1/0.9}{z - 0.1} \right) = \frac{1}{0.9} \cdot \frac{z}{z - 1} - \frac{0.1}{0.9} \cdot \frac{z}{z - 0.1}    (6.28)
Going to the tables will give us a plurality of answers. The first one is
simple: look for right-sided sequences only. This means that we can
directly find the inverse as:

x(n) = \frac{10}{9} u(n) - \frac{1}{9} \cdot 0.1^n \, u(n)    (6.29)

The region of convergence is given by |z| > 1 and |z| > 0.1. We need to
take the intersection between the two ROCs, and hence |z| > 1. For the
left-sided case we get (from the tables) that

x(n) = -\frac{10}{9} u(-n-1) + \frac{1}{9} \cdot 0.1^n \, u(-n-1)    (6.30)
The region of convergence is given by |z| < 1 and |z| < 0.1. We need to take
the intersection between the two ROCs, and hence |z| < 0.1.
For the double-sided case we get two subcases: either the first fraction
is right-sided and the other is not, or vice versa.

x(n) = -\frac{10}{9} u(-n-1) - \frac{1}{9} \cdot 0.1^n \, u(n)    (6.31)

The ROCs are now |z| < 1 and |z| > 0.1, and the intersection becomes
0.1 < |z| < 1.0. The second subcase is given by

x(n) = \frac{10}{9} u(n) + \frac{1}{9} \cdot 0.1^n \, u(-n-1)    (6.32)

The ROCs are now |z| > 1 and |z| < 0.1, and since they do not intersect, this
is not a valid case.
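For the right-sided case (6.29) the inverse can be checked by summing x(n) z^{-n} at a test point with |z| > 1 and comparing with (6.26); a small numerical sketch:

```python
def X_closed(z):
    """Eq. (6.26): X(z) = z^2 / ((z - 1)(z - 0.1))."""
    return z**2 / ((z - 1) * (z - 0.1))

def X_from_sequence(z, terms=200):
    """Truncated sum of x(n) z^-n for the right-sided inverse (6.29)."""
    return sum((10 / 9 - (1 / 9) * 0.1**n) * z**(-n) for n in range(terms))

z0 = 2.0            # test point inside the ROC |z| > 1
assert abs(X_closed(z0) - X_from_sequence(z0)) < 1e-9
```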
6.3 A Discussion on Frequency Response
Consider an LTI system with input sequence x(n), output sequence y(n)
and impulse response h(n). We know that the output is the input sequence
convolved with the impulse response, and that in the frequency/transform
domain this is a multiplication. Therefore H(e^{j\omega T}) expresses the
system's frequency properties: how each frequency component is affected by the
system through a phase shift and a gain.
For the z-transform, though, H(z) is not really the frequency response as
such. It is a generalization, and the frequency response as observed by the
surrounding world is derived by traversing the unit circle in the z-plane,
i.e., setting the radius r = 1, provided that the Fourier transform converges
in that region.
This actually implies that the z-transform is not primarily useful to display
frequency properties, but instead to investigate other properties of the
system, such as stability and whether it is causal or not.
Chapter 7
Transfer Functions, etc.
This lecture is about different transfer functions and how to derive them.
7.1 Recap from Previous Lectures
First we will recap the previous lecture, where we talked about the
z-transform.
7.1.1 z-transform
The z-transform was defined as

X(z) = \sum_{n=-\infty}^{\infty} x(n) \cdot z^{-n},    (7.1)

and was designed to handle a wider range of signals, in order to make the
sums converge so that the signals can be analyzed. There is an inherited
damping factor (the absolute value |z|) to pull down the amplitude of
|x(n)| so that the sum converges.
Double-sided
We saw that each transform could have a set of two inverse transforms, and
if we had a more complex case where the output could be written as a sum
of transforms, we had to investigate each one of those cases. For a case with
two embedded transforms there were four different cases to investigate; if
there were three embedded transforms, there would be eight cases, etc.
A helpful fact is that if the system (and input signal) is causal, we know
that there is only one solution.
Region of Convergence (ROC)
The region of convergence must be specified, since that will uniquely define
your inverse transform (and vice versa). Typically the region of convergence
is a donut-shaped area in the z-plane and can be described by the inner
and outer radius of that donut. If the unit circle is inside the ROC, the
Fourier transform can be derived from the z-transform by setting |z| = 1.
No frequency information
Only if the unit circle is inside the ROC are we able to derive true
frequency information from the z-transform. Otherwise the z-transform is
useful only because the maths are sometimes a little bit easier and because we
can use a new concept with poles and zeros instead of frequency
characteristics. So ...
7.2 Transfer Functions
Consider the following LTI system with x(n) as input, h(n) as impulse
response, and y(n) as output. We know that

y(n) = x(n) * h(n) \leftrightarrow Y(z) = H(z) \cdot X(z),    (7.2)

where

H(z) = \sum_{n} h(n) \cdot z^{-n}    (7.3)

is the transform of the impulse response. For stable systems we know
that |z| = 1 is inside the ROC, and then we can also define

H\left( e^{j\omega T} \right) = H(z) \quad \text{for} \quad z = e^{j\omega T}    (7.4)
Revisiting convolution and the polynomial method
Remember from the lectures about convolution when we suggested the
polynomial multiplication method. This originates from the z-transform. In
that case we used q as the time-shift operator, but in fact it was q = z^{-1}.
See the example:
h(n) = \delta(n) + 3\delta(n-1) + 4\delta(n-2)
x(n) = \delta(n) + 2\delta(n-1) + \delta(n-2)
In our previous case, we replaced this with
h(q) = 1 + 3q + 4q^2
x(q) = 1 + 2q + q^2
However, look at the z-transform of the two expressions (\delta(n) \Longleftrightarrow 1):
H(z) = 1 + 3z^{-1} + 4z^{-2}
X(z) = 1 + 2z^{-1} + z^{-2}
and then we multiply them to get the output:
Y(z) = H(z) \cdot X(z) = (1 + 3z^{-1} + 4z^{-2}) \cdot (1 + 2z^{-1} + z^{-2})    (7.5)
which becomes
Y(z) = 1 + 3z^{-1} + 4z^{-2} + 2z^{-1} + 6z^{-2} + 8z^{-3} + z^{-2} + 3z^{-3} + 4z^{-4}    (7.6)
and reordering the terms:
Y(z) = 1 + (3 + 2)z^{-1} + (4 + 6 + 1)z^{-2} + (8 + 3)z^{-3} + 4z^{-4}    (7.7)
and finally
Y(z) = 1 + 5z^{-1} + 11z^{-2} + 11z^{-3} + 4z^{-4}    (7.8)
which can be inverse transformed; both inputs are causal, i.e., there is only one
solution (!!!):
y(n) = \delta(n) + 5\delta(n-1) + 11\delta(n-2) + 11\delta(n-3) + 4\delta(n-4)    (7.9)
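The polynomial method maps directly onto discrete convolution, so the result above can be checked numerically. A minimal sketch using NumPy (the arrays hold the δ-coefficients from the example):

```python
import numpy as np

# Coefficient vectors in powers of z^-1:
# h(n) = δ(n) + 3δ(n-1) + 4δ(n-2), x(n) = δ(n) + 2δ(n-1) + δ(n-2).
h = np.array([1, 3, 4])
x = np.array([1, 2, 1])

# Polynomial multiplication of H(z) and X(z) is exactly convolution.
y = np.convolve(h, x)

print(y)  # [ 1  5 11 11  4]
```

The printed coefficients match (7.8) and hence y(n) in (7.9).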
7.2.1 LDE z-transforms
Consider the following LDE system
\sum_{k=0}^{M} b_k \, y(n-k) = \sum_{k=0}^{N} a_k \, x(n-k)    (7.10)
and apply the z-transform to it:
\sum_{k=0}^{M} b_k z^{-k} Y(z) = \sum_{k=0}^{N} a_k z^{-k} X(z)    (7.11)
or
Y(z) \sum_{k=0}^{M} b_k z^{-k} = X(z) \sum_{k=0}^{N} a_k z^{-k}    (7.12)
which we can write as
H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{N} a_k z^{-k}}{\sum_{k=0}^{M} b_k z^{-k}}    (7.13)
So, very often the z-transform of a sequence can be written as a ratio:
H(z) = \frac{A(z)}{B(z)} = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-N}}{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}},    (7.14)
The expression can be rewritten as:
H(z) = z^{M-N} \, \frac{a_0 z^{N} + a_1 z^{N-1} + a_2 z^{N-2} + \ldots + a_N}{b_0 z^{M} + b_1 z^{M-1} + b_2 z^{M-2} + \ldots + b_M},    (7.15)
and further
H(z) = \underbrace{\frac{a_0}{b_0}}_{G} \, z^{M-N} \, \frac{(z - z_1)(z - z_2)(z - z_3) \cdots (z - z_N)}{(z - p_1)(z - p_2)(z - p_3) \cdots (z - p_M)},    (7.16)
where z_k are the zeros of H(z), i.e., roots of the polynomial A(z) (solutions
to H(z) = 0, with A(z_k) = 0), and p_k are the poles of H(z), i.e., roots of the
polynomial B(z) (solutions to H(z) = \infty, with B(p_k) = 0). If h(n) is real-valued,
then all poles are either real or come in complex-conjugate pairs. G = a_0/b_0
is the level constant of the system. One can for example show that H(z) \to G as z \to \infty.
7.2.2 Poles and Zeros
The system can now be characterized through its poles and zeros (and level
constant). See the picture below, where we have drawn a couple of diagrams.
Notice that we always have complex-conjugate pairs for real-valued
sequences, and thereby the pole-zero placement is always mirrored in the
real axis. Notice also how this correlates with the mirroring around
\omega T = \pi of the amplitude spectrum and the reflection of the phase spectrum.
Example
Assume the following difference equation:
y(n) = -b_1 \, y(n-1) + x(n)    (7.17)
and set the input to be x(n) = \delta(n). Determine the impulse response, h(n). We
use the z-transform on the expression and get:
Y(z) = -b_1 z^{-1} Y(z) + X(z)    (7.18)
The transfer function is:
H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 + b_1 z^{-1}} = \frac{z}{z + b_1}    (7.19)
or, to align with the nomenclature above,
H(z) = 1 \cdot z^{1-0} \cdot \frac{1}{z + b_1}    (7.20)
We have one zero in the origin and one pole in -b_1. The level constant is
unity.
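As a numerical sanity check (with the arbitrary choice b_1 = 0.5), the recursion can be run directly and compared against the closed-form impulse response h(n) = (-b_1)^n u(n):

```python
# Impulse response of y(n) = -b1*y(n-1) + x(n) by direct recursion.
# b1 = 0.5 is an arbitrary choice for illustration; the pole sits at -b1.
b1 = 0.5
N = 8

h = []
y_prev = 0.0
for n in range(N):
    x_n = 1.0 if n == 0 else 0.0   # x(n) = δ(n)
    y_n = -b1 * y_prev + x_n
    h.append(y_n)
    y_prev = y_n

# Closed form: h(n) = (-b1)**n, i.e., the pole at z = -b1 raised to n.
print(all(abs(h[n] - (-b1) ** n) < 1e-12 for n in range(N)))  # True
```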
7.3 Causal Systems
There are two quite obvious facts to be considered when it comes to causal
and stable systems:
1. Causal system: h(n) = 0 for n < 0. The ROC for H(z) is always of
the type |z| > R_1, since the sequence is right-sided.
2. The ROC cannot contain any poles, since H(p_k) \to \infty, i.e., the response
would not be bounded even though the input is bounded.
Combining these two statements implies that the system cannot have any
poles for z \to \infty. This means that a causal system has at least as
many poles as it has zeros! In other words, the order of the numerator
polynomial A(z) must be less than or equal to the order of the denominator
polynomial B(z) as long as
H(z) = \frac{A(z)}{B(z)}    (7.21)
(Do not mix this formula up with the case H(z) = Y(z)/X(z), since Y(z)
is derived from the expression above and not vice versa.) If there were
more zeros than poles, the transfer function would go towards \infty for z \to \infty.
Consider the two examples in the graphs below showing the three (!) different
cases.
This is a required condition for stable and causal systems, but not a sufficient one.
Example No
Consider the transfer function
H(z) = \frac{z^2 - 1}{z + 0.5} = \frac{(z - 1)(z + 1)}{z + 0.5}    (7.22)
The level constant is unity; there are two zeros and one pole. (Draw the
diagram.) If z is a large number, we get the approximation
H(z) = \frac{(z - 1)(z + 1)}{z + 0.5} \approx \frac{z \cdot z}{z} = z \to \infty \text{ as } z \to \infty    (7.23)
This means that this system first of all cannot have an ROC where |z| > R_1,
since z = \infty would then be inside the ROC and the transfer function
goes towards infinity there. This then also implies that the system is non-causal
(it must be a left-sided sequence).
Example Yes
Consider the transfer function with one pole in -0.5 and one zero in -1 (draw
the diagram):
H(z) = \frac{z + 1}{z + 0.5} \approx \frac{z}{z} = 1    (7.24)
which is approximately unity (a finite number) for large z. This
means that the region can be of both types, |z| < R_2 or |z| > R_1, and we
still cannot say whether the system is causal or non-causal in this case.
7.3.1 So let's look at stability
The requirement on the impulse response to guarantee stability is that the
absolute sum must be a finite number, i.e.:
\sum_{n} |h(n)| < \infty    (7.25)
Let's look at the transform, which also needs to be bounded:
|H(z)| = \Big| \sum_{n} h(n) z^{-n} \Big| \le \sum_{n} |h(n) z^{-n}| = \sum_{n} |h(n)| \cdot \underbrace{|z^{-n}|}    (7.26)
Look at the last factor (marked). If |z^{-n}| = 1, i.e., |z| = 1, we know that the expression
is finite. If |z| > 1 (or smaller), we cannot say directly whether it
converges or not. This means that: it is a stable system iff
the unit circle is inside the ROC.
Let's summarize:
Causal system: ROC is of the type |z| > R_1.
Causal and stable: ROC is of the type |z| > R_1 with R_1 < 1 to include the
unit circle, and poles cannot be inside the ROC, since H(p_k) \to \infty.
This eventually means that a causal system is stable iff the poles of its
transfer function are inside the unit circle. (Show diagrams.) This also
means that the Fourier transform is defined.
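The summary can be turned into a small numerical test: for a causal system, compute the roots of the denominator polynomial and check that they all lie inside the unit circle. A sketch using NumPy (coefficients given in descending powers of z):

```python
import numpy as np

def is_stable_causal(b_coeffs):
    """Causal-system stability test: all poles strictly inside the unit circle.

    b_coeffs are the denominator coefficients of H(z), here given in
    descending powers of z (the factored form of B(z) in (7.16))."""
    poles = np.roots(b_coeffs)
    return bool(np.all(np.abs(poles) < 1.0))

# H(z) = z/(z + 0.5): pole at -0.5 -> stable.
print(is_stable_causal([1.0, 0.5]))   # True
# Pole at -1.5, outside the unit circle -> unstable.
print(is_stable_causal([1.0, 1.5]))   # False
```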
7.4 Back to the Frequency Response
The correlation between pole-zero placement and the amplitude/phase spectrum
can be sketched by walking along the unit circle and measuring the
distance to the poles and zeros.
7.4.1 Example with a zero
Assume a simple zero at z_1 (yes, it is not causal):
H(z) = z - z_1    (7.27)
and then set z = e^{j\omega T} to investigate the frequency response. We get
H(e^{j\omega T}) = |H(e^{j\omega T})| \, e^{j \arg\{H(e^{j\omega T})\}} = e^{j\omega T} - z_1    (7.28)
As we wander around the unit circle, we will now get the vector between
the point on the unit circle and the zero. The length of the vector will be
|H(e^{j\omega T})| and the angle from the zero to the point will be \arg\{H(e^{j\omega T})\},
and hence they will generate the amplitude and phase spectra.
Example
Draw diagrams with a zero at z_1 = 0.5.
7.4.2 Example with a pole
Assume a simple pole at p_1 (yes, it is causal):
H(z) = \frac{1}{z - p_1}    (7.29)
and then set z = e^{j\omega T} to investigate the frequency response. We get
H(e^{j\omega T}) = |H(e^{j\omega T})| \, e^{j \arg\{H(e^{j\omega T})\}} = \frac{1}{e^{j\omega T} - p_1}    (7.30)
As we wander around the unit circle, we will now get the vector between the
point on the unit circle and the pole. The length of the vector will be
1/|H(e^{j\omega T})| and the angle from the pole to the point will be -\arg\{H(e^{j\omega T})\},
and hence they will generate the amplitude and phase spectra.
Example
Draw diagrams with a pole at p_1 = 0.5.
7.4.3 Example with poles and zeros
The generalized example:
H(z) = \underbrace{\frac{a_0}{b_0}}_{G} \, z^{M-N} \, \frac{(z - z_1)(z - z_2)(z - z_3) \cdots (z - z_N)}{(z - p_1)(z - p_2)(z - p_3) \cdots (z - p_M)}    (7.31)
can be written as
H(z) = \underbrace{\frac{a_0}{b_0}}_{G} \, z^{M-N} \, \frac{\prod_{n=1}^{N} (z - z_n)}{\prod_{m=1}^{M} (z - p_m)}    (7.32)
If we now wander around the unit circle, we will find that the amplitude
spectrum is the product of all vector lengths to the zeros divided by the product
of all vector lengths to the poles. The phase is the sum of all the angles to the zeros
minus the sum of all angles to the poles.
If there are zeros on the unit circle, the amplitude spectrum touches zero at the
corresponding frequencies. Close to a zero there is a dip and close to a pole a peak,
in both cases together with a strong phase shift.
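The geometric reading can be verified numerically: evaluating H(z) directly on the unit circle must agree with the product/sum of vector lengths and angles. A sketch for the one-zero, one-pole system from the "Example Yes" section:

```python
import numpy as np

# One zero at -1 and one pole at -0.5 (the "Example Yes" system), G = 1.
z1, p1, G = -1.0, -0.5, 1.0

wT = np.linspace(0, 2 * np.pi, 255, endpoint=False)
z = np.exp(1j * wT)              # walk along the unit circle

# Direct evaluation of H(z) = G*(z - z1)/(z - p1) on the circle.
H = G * (z - z1) / (z - p1)

# Geometric evaluation: vector lengths and angles to the zero and pole.
amp_geo = G * np.abs(z - z1) / np.abs(z - p1)
phase_geo = np.angle(z - z1) - np.angle(z - p1)

print(np.allclose(np.abs(H), amp_geo))      # True
print(np.allclose(np.angle(H), phase_geo))  # True
```

The odd number of grid points avoids landing exactly on the zero at z = -1, where the phase is undefined.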
Chapter 8
Sampling and Reconstruction
This lecture is about sampling and reconstruction of signals. We will see that
there is an (ideal) way to actually recreate all information from a sampled
signal, i.e., in the ideal world, the information carried in the signal is not
destroyed once we sample it.
8.1 Recap of Previous Lecture(s)
First, however, some recap of previous lectures; mainly we discussed
poles and zeros last time.
8.1.1 Poles and zeros
We introduced a new concept called poles and zeros that is unique to
the z-transform and not found in e.g. the Fourier transform. We could write
an LDE transfer function as:
H(z) = \frac{A(z)}{B(z)} = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-N}}{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}},    (8.1)
and rewrite it as
H(z) = \underbrace{\frac{a_0}{b_0}}_{G} \, z^{M-N} \, \frac{(z - z_1)(z - z_2)(z - z_3) \cdots (z - z_N)}{(z - p_1)(z - p_2)(z - p_3) \cdots (z - p_M)}    (8.2)
where the z_k are the roots of the numerator polynomial and p_k are the roots
of the denominator polynomial. z_k are referred to as the zeros, and p_k are the
poles. There were some different properties for stable and causal systems:
1. In a causal system (right-sided sequence) all poles need to be outside
the ROC, since otherwise z = p_k would give H(p_k) \to \infty.
2. In a causal system the number of poles must be equal to or larger than
the number of zeros; otherwise the system is not stable.
3. If the input sequence is bounded, the unit circle must be inside the
ROC to guarantee stability.
4. In a causal system the ROC is determined by the pole farthest away from the
origin (in a noncausal system it is the closest pole).
5. (1 + 3): All poles must be inside the unit circle.
Comment on the ROC
Consider the general transfer function
H(z) = \underbrace{\frac{a_0}{b_0}}_{G} \, z^{M-N} \, \frac{\prod_{n=1}^{N} (z - z_n)}{\prod_{m=1}^{M} (z - p_m)}    (8.3)
which can be written as
H(z) = \sum_{m=0}^{M} \frac{A_m z + B_m}{z - p_m} = \frac{A_0 z + B_0}{z - p_0} + \frac{A_1 z + B_1}{z - p_1} + \ldots    (8.4)
As long as all terms are causal, the ROCs will be given by |z| > |p_m|. We
then need to look at the intersection of all these regions:
\mathrm{ROC} = \bigcap_{m} \mathrm{ROC}_m = \{z : |z| > p_{\max}\}    (8.5)
where p_{\max} is the maximum absolute value of all poles, p_m.
8.1.2 z-transform vs Fourier transform
We looked at the connection between the Fourier transform, the frequency response,
and the z-transform, by walking along the unit circle and measuring the
distance to all poles and zeros as well as the angles, i.e., by studying the vectors
from poles and zeros to the point on the unit circle.
When the poles are close to the unit circle there is a peak in the amplitude
spectrum. When the zeros are close to (or even on) the unit circle, there are
dips in the amplitude spectrum.
(Show a graph.)
8.1.3 Transient and Steady-State
The steady-state (stationary) response to the input signal is found as y_s(n)
and the transient response as y_t(n) in the solution to the LDE: y(n) = y_s(n) +
y_t(n). Notice that this was neglected in the formula above. Consider the
system in the z-transform:
Y(z) = H(z) \cdot X(z) = \underbrace{G \, z^{M-N} \, \frac{\prod_{n=1}^{N} (z - z_n)}{\prod_{m=1}^{M} (z - p_m)}}_{H(z)} \cdot \underbrace{K \, z^{Q-R} \, \frac{\prod_{r=1}^{R} (z - z_r)}{\prod_{q=1}^{Q} (z - p_q)}}_{X(z)}    (8.6)
This can (somehow) be rewritten as:
Y(z) = \underbrace{F \, z^{f} \, \frac{\prod_{n=1}^{N} (z - z_n)}{\prod_{m=1}^{M} (z - p_m)}}_{Y_s(z)} + \underbrace{W \, z^{w} \, \frac{\prod_{r=1}^{R} (z - z_r)}{\prod_{q=1}^{Q} (z - p_q)}}_{Y_t(z)}    (8.7)
where we split the expression into two different clusters. One of the clusters
contains the poles from the system response and the other contains the poles
from the input signal. Then we can inverse transform the two parts and
obtain the transient and steady-state results. Notice also the (desired) result
that the transient only depends on the input signal itself.
Example
Assume we have the (bounded) input signal
x(n) = a^n u(n)    (8.8)
and the stable impulse response of our causal LTI system
h(n) = b^n u(n),    (8.9)
and then determine the output sequence. We can quickly find the z-transforms
from the table:
X(z) = \frac{z}{z - a} \qquad H(z) = \frac{z}{z - b}    (8.10)
Obviously we have the system output transform as the product of the two
transforms:
Y(z) = H(z) \cdot X(z) = \frac{z^2}{(z - a)(z - b)} = z \cdot \frac{z}{(z - a)(z - b)}    (8.11)
Then we do the rewriting as
Y(z) = z \, \frac{A}{z - a} + z \, \frac{B}{z - b}    (8.12)
which gives us
A + B = 1
Ab + Ba = 0
which has the solution
A = \frac{a}{a - b} \qquad B = -\frac{b}{a - b},
i.e., we have
Y(z) = \underbrace{\frac{a}{a - b} \cdot \frac{z}{z - a}}_{Y_t(z)} - \underbrace{\frac{b}{a - b} \cdot \frac{z}{z - b}}_{Y_s(z)}    (8.13)
and the inverse (according to the table) becomes:
y(n) = y_t(n) + y_s(n) = \frac{a}{a - b} \, a^n u(n) - \frac{b}{a - b} \, b^n u(n)    (8.14)
Notice that the poles as such are separated, but there is a coupling between
the two terms that cross-depends on the poles; in this particular case it is
the gain.
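The split in (8.14) can be checked numerically (with arbitrary a = 0.9, b = 0.5) by convolving the two sequences and comparing against the closed form:

```python
import numpy as np

# Numerical check of (8.14), with arbitrary a = 0.9, b = 0.5.
a, b, N = 0.9, 0.5, 30
n = np.arange(N)

x = a ** n                    # x(n) = a^n u(n)
h = b ** n                    # h(n) = b^n u(n)
y = np.convolve(x, h)[:N]     # y(n) = (x * h)(n), first N samples are exact

# Closed form from the partial-fraction expansion:
y_closed = (a / (a - b)) * a ** n - (b / (a - b)) * b ** n

print(np.allclose(y, y_closed))  # True
```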
8.2 Sampling
Show the sampling and reconstruction boxes in a graph.
Consider the continuous-time signal, x_a(t), as input to the sampler. It is
(uniformly) sampled with a time period of T, referred to as the sampling period,
where f_s = 1/T is the sampling frequency (i.e., the number of samples
taken every second). The sampling process creates a sequence of discrete-time
data, x(nT) = x(n). The reconstruction block creates a continuous-time
signal, x_r(t).
Poisson's Summation Formula
First, we look at the Poisson Summation Formula. The input signal can
be written as the inverse (continuous-time) Fourier transform
x_a(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X_a(j\Omega) e^{j\Omega t} \, d\Omega    (8.15)
and since we are looking at certain sampled points, this can be written as
x_a(nT) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X_a(j\Omega) e^{j\Omega nT} \, d\Omega    (8.16)
Nothing prevents us from dividing the integration range into an infinite number
of subranges, each one of them 2\pi/T wide:
x_a(nT) = \sum_{k} \frac{1}{2\pi} \int_{(-\pi + 2\pi k)/T}^{(\pi + 2\pi k)/T} X_a(j\Omega) e^{j\Omega nT} \, d\Omega    (8.17)
We can also do a variable substitution inside the sum, \Omega \to \Omega + 2\pi k/T (no
worries, we do not lose any information; the index k is shifted from the
integration range into the expression of X_a and the exponential):
x_a(nT) = \sum_{k} \frac{1}{2\pi} \int_{-\pi/T}^{\pi/T} X_a\!\left(j\left(\Omega + \frac{2\pi k}{T}\right)\right) e^{j(\Omega + 2\pi k/T) nT} \, d\Omega    (8.18)
which equals (since the number of unit-circle rotations is an integer, we
can remove any 2\pi k n factor in the exponential):
x_a(nT) = \frac{1}{2\pi} \sum_{k} \int_{-\pi/T}^{\pi/T} X_a\!\left(j\left(\Omega + \frac{2\pi k}{T}\right)\right) e^{j\Omega nT} \, d\Omega    (8.19)
Rescaling the integration variable to \Omega T yields
x_a(nT) = \frac{1}{2\pi T} \sum_{k} \int_{-\pi}^{\pi} X_a\!\left(j\left(\Omega + \frac{2\pi k}{T}\right)\right) e^{j\Omega nT} \, d(\Omega T).    (8.20)
Changing the order of the sum and integral gives us the expression
x_a(nT) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \underbrace{\frac{1}{T} \sum_{k} X_a\!\left(j\left(\Omega + \frac{2\pi k}{T}\right)\right)}_{X\left(e^{j\Omega T}\right)} e^{j\Omega nT} \, d(\Omega T)    (8.21)
in which we recognize the definition of the inverse discrete-time Fourier
transform. This means that
X\!\left(e^{j\omega T}\right) = \frac{1}{T} \sum_{k} X_a\!\left(j\omega - j\frac{2\pi k}{T}\right).    (8.22)
This is a very important formula/result but what does it say?
It says that the discrete-time spectrum is a weighted (1/T) sum of
frequency-shifted (at k times 2\pi/T) continuous-time spectra. Notice
that the frequency shift (in Hertz) is an integer number times the sampling
frequency. This also means that if the signal is wider in terms
of frequency (bandwidth) than the sampling frequency, there will be
overlaps in the spectra (due to the sum).
Show a set of spectra. In the following three cases we see different types
of impact of the sampling and how Poisson's formula helps us understand
how the spectra can overlap each other. When the spectra are non-overlapping,
we see that the spectrum is not destroyed by or interfering with other
parts of the spectrum.
Show an example with sinusoids, both overlapping and not. Show
that overlapping folds components into the base band and refer to subsampling.
Example
Consider the cosine
x_a(t) = \cos(\Omega_0 t)    (8.23)
which has the Fourier transform
X_a(j\Omega) = \pi\delta(\Omega + \Omega_0) + \pi\delta(\Omega - \Omega_0),    (8.24)
i.e., two infinitely high pulses (with finite area, \pi). Now assume we sample
with a sampling period of T = 1/f_s, or \Omega_s = 2\pi f_s. The sequence then becomes
x(nT) = x_a(t)\big|_{t=nT} = \cos(\Omega_0 nT) = \cos(\Omega_0 n / f_s) = \cos\!\left(2\pi \frac{f_0}{f_s} n\right)    (8.25)
Show a graph of the different cases in the spectrum domain. See it
also from a sequence point of view: If the input frequency can be written as
f_0 = m \cdot f_s + \Delta f_0, where 0 \le \Delta f_0 < f_s, we can write
x(n) = \cos\!\left(2\pi \frac{f_0}{f_s} n\right) = \cos\!\left(2\pi \frac{m f_s + \Delta f_0}{f_s} n\right) = \cos\!\left(2\pi m n + 2\pi \frac{\Delta f_0}{f_s} n\right)    (8.26)
and there is no doubt that the integer number of rotations can be ignored:
x(n) = \cos\!\left(2\pi \frac{\Delta f_0}{f_s} n\right)    (8.27)
which means that the resulting frequency can be written as the remainder
frequency when the signal frequency is divided by the sampling frequency. Look
at the other special case: f_0 = m \cdot f_s/2 + \Delta f_0, where 0 \le \Delta f_0 < f_s/2. Then
we get
x(n) = \cos\!\left(2\pi \frac{m f_s/2 + \Delta f_0}{f_s} n\right) = \cos\!\left(\pi m n + 2\pi \frac{\Delta f_0}{f_s} n\right)    (8.28)
If m is an even number, we are back to the previous case; if m = 2\ell + 1 is an
odd number we can rewrite as
x(n) = \cos\!\left(2\pi \ell n + \pi n + 2\pi \frac{\Delta f_0}{f_s} n\right) = \cos\!\left(\pi n + 2\pi \frac{\Delta f_0}{f_s} n\right)    (8.29)
Since \cos(\pi + \varphi) = \cos(\pi - \varphi) we see that
x(n) = \cos\!\left(\pi n + 2\pi \frac{\Delta f_0}{f_s} n\right) = \cos\!\left(2\pi \frac{f_s/2 - \Delta f_0}{f_s} n\right)    (8.30)
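The remainder-frequency result (8.27) is easy to verify numerically (the frequencies below are arbitrary):

```python
import numpy as np

# A tone at f0 = m*fs + df0 is indistinguishable, after sampling,
# from a tone at the remainder frequency df0.
fs, m, df0 = 1000.0, 3, 130.0      # f0 = 3*1000 + 130 = 3130 Hz
f0 = m * fs + df0

n = np.arange(64)
x_high = np.cos(2 * np.pi * f0 / fs * n)    # sampled 3130 Hz tone
x_low = np.cos(2 * np.pi * df0 / fs * n)    # sampled 130 Hz tone

print(np.allclose(x_high, x_low))  # True
```

The two sampled sequences differ only by the integer-rotation term 2π·3n in the cosine argument, which is invisible after sampling.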
8.2.1 Sampling Theorem
The sampling theorem is a very important theorem underpinning all
sampled systems. It is desirable to reconstruct the continuous-time
waveform so that x_r(t) = x_a(t). What kind of requirements does that put
on our system?
1. How should we select the sampling frequency/period, f_s and T, so that
x_a(t) can be reconstructed from x(n)? The sampling theorem must be
fulfilled. The sampling theorem states that the bandwidth of the signal
must be less than half the sampling frequency:
BW < f_s/2 = \frac{1}{2T}.    (8.31)
Half the sampling frequency is often referred to as the Nyquist frequency.
2. If the sampling theorem is fulfilled, how do we perform the reconstruction
(ideal reconstruction)?
3. If the sampling theorem is not fulfilled, what would be the error/distortion?
These questions are simplest answered by studying the relationship between
X_a(j\Omega), X\left(e^{j\Omega T}\right), and X_r(j\Omega). For example, if ideal reconstruction is
possible, then x_a(t) = x_r(t) \Leftrightarrow X_a(j\Omega) = X_r(j\Omega).
8.3 Reconstruction
A continuous-time signal, x_r(t), can be obtained from the samples, x(n), by
using pulse amplitude modulation (PAM):
x_r(t) = \sum_{n} x(n) \, p(t - nT),    (8.32)
where T is the sampling period. Let's apply the Fourier transform on both
sides:
X_r(j\Omega) = \int_{-\infty}^{\infty} x_r(t) e^{-j\Omega t} \, dt = \int_{-\infty}^{\infty} \sum_{n} x(n) \, p(t - nT) \, e^{-j\Omega t} \, dt    (8.33)
Reordering the sum and integral,
X_r(j\Omega) = \sum_{n} x(n) \underbrace{\int_{-\infty}^{\infty} p(t - nT) e^{-j\Omega t} \, dt}_{P(j\Omega) e^{-j\Omega nT}} = P(j\Omega) \underbrace{\sum_{n} x(n) e^{-j\Omega nT}}_{X\left(e^{j\Omega T}\right)} = P(j\Omega) \, X\!\left(e^{j\Omega T}\right)
So for ideal reconstruction it becomes quite simple: choose P(j\Omega) = T (to
cancel the 1/T in the Poisson formula) for frequencies up to \Omega T = \pi (half the
sampling frequency) and zero otherwise.
(Show graph.)
But ... life is not that simple. Let's look at what that would imply for
the time-domain signal:
p(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} P(j\Omega) e^{j\Omega t} \, d\Omega = \frac{1}{2\pi} \int_{-\pi f_s}^{\pi f_s} T e^{j\Omega t} \, d\Omega = \left[ \frac{e^{j\Omega t}}{j 2\pi f_s t} \right]_{-\pi f_s}^{\pi f_s}    (8.34)
p(t) = \frac{e^{j\pi f_s t} - e^{-j\pi f_s t}}{j 2\pi f_s t} = \frac{\sin(\pi f_s t)}{\pi f_s t} = \mathrm{sinc}(f_s t)    (8.35)
The sinc is a signal that exists for all t, i.e., for all negative time
values too (it is double-sided), so it is not possible to implement. Show
graph. The sinc takes its maximum value at t = 0, is zero at t = n/f_s,
and has local peaks at approximately t = (2m + 1)/(2 f_s).
In practice one quite often uses a pulse function in the time domain
instead:
p(t) = \begin{cases} 1 & 0 \le t < T \\ 0 & \text{otherwise} \end{cases}    (8.36)
The Fourier transform is
P(j\Omega) = \int_{-\infty}^{\infty} p(t) e^{-j\Omega t} \, dt = \int_{0}^{T} e^{-j\Omega t} \, dt = \left[ \frac{e^{-j\Omega t}}{-j\Omega} \right]_{0}^{T} = \frac{1 - e^{-j\Omega T}}{j\Omega}    (8.37)
This can be written as (using Euler)
P(j\Omega) = \frac{T}{2} \, e^{-j\Omega T/2} \, \frac{e^{j\Omega T/2} - e^{-j\Omega T/2}}{j\Omega T/2} = T e^{-j\Omega T/2} \underbrace{\frac{\sin(\Omega T/2)}{\Omega T/2}}_{\mathrm{sinc}(fT)}    (8.38)
Notice now how the spectrum is weighted and also that the images are not fully
attenuated as desired. Show graph. An additional filter can preferably be
used and/or oversampling techniques, i.e., always guarantee a proper spacing.
Show graph. The spectrum has its zero values at \Omega T/2 = \pi m and (local)
peak values at approximately \Omega T/2 = (2m + 1)\pi/2 (m \ne 0). This means zeros at
f = m \cdot f_s and peaks at approximately f = (2m + 1) \cdot f_s/2.
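The sinc weighting in (8.38) can be quantified; a small sketch computing the droop of |P(jΩ)| relative to the ideal flat response (note that np.sinc(x) = sin(πx)/(πx), matching the sinc above):

```python
import numpy as np

# Amplitude of the hold-pulse spectrum |P(jΩ)| = T*|sinc(fT)| relative
# to the ideal flat value T, evaluated at fractions of fs (T = 1, fs = 1).
T = 1.0

def droop_db(f):
    # np.sinc(x) = sin(pi*x)/(pi*x), i.e., the normalized sinc used above.
    return 20 * np.log10(np.abs(np.sinc(f * T)))

for f in (0.25, 0.5):   # f in units of fs
    print(f"f = {f:.2f} fs: {droop_db(f):.2f} dB")
# At half the sampling frequency the droop is about -3.9 dB.
```

This attenuation at the band edge is what the reconstruction filter mentioned below can compensate for.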
8.3.1 Distortion
If the sampling does not fulfill the sampling theorem, i.e., either the
sampling frequency is too low or the signal is not band limited, there will be
distortion due to the folded spectrum. We saw this quite clearly when we
looked at the effect of the Poisson formula. We can write the error as the
difference between input and output (assuming that the system performs a
one-to-one mapping):
e(t) = x_r(t) - x_a(t)    (8.39)
In the transform domain, this becomes
E(j\Omega) = X_r(j\Omega) - X_a(j\Omega)    (8.40)
The power of the error must be
E_e = \int_{-\infty}^{\infty} (x_r(t) - x_a(t))^2 \, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X_r(j\Omega) - X_a(j\Omega)|^2 \, d\Omega    (8.41)
according to Parseval. In fact, the error consists of two types of errors: the
cut-off tail and the range that is overlapped by the tail from the neighbouring
spectrum. See graph. Therefore, if we have a (nearly) band-limited spectrum,
the error must be (once again using Parseval's formula):
E_e = 4 \cdot \frac{1}{2\pi} \int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega    (8.42)
if the neighbouring spectrum only leaks into the adjacent range. To be more
accurate, we need to use the Poisson formula too for wide-band signals:
E_e = \frac{1}{\pi} \int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega + \frac{1}{2\pi} \int_{-\pi/T}^{\pi/T} \Big| \sum_{k \ne 0} X_a\!\left(j\Omega - j\frac{2\pi k}{T}\right) \Big|^2 \, d\Omega    (8.43)
The power of the input signal is
E_a = \int_{-\infty}^{\infty} x_a^2(t) \, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X_a(j\Omega)|^2 \, d\Omega    (8.44)
and therefore the signal-to-noise ratio (input-referred) is given by
\frac{E_a}{E_e} = \frac{\int_{-\infty}^{\infty} |X_a(j\Omega)|^2 \, d\Omega}{2 \int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega + \int_{-\pi/T}^{\pi/T} \big| \sum_{k \ne 0} X_a\!\left(j\Omega - j\frac{2\pi k}{T}\right) \big|^2 \, d\Omega}    (8.45)
We are not going to spend time on an example here; the theory clearly states
the steps of operations and the rest is number crunching.
8.3.2 Anti-Aliasing Filter
The use of an anti-aliasing filter helps remove half of the error. The filter
is typically designed to remove the tail, to prevent overlapping (or folding).
8.3.3 Reconstruction Filter
The reconstruction lter can be used to further suppress images, but also to
compensate for the sinc-weighting of the spectrum.
Chapter 9
Discrete Fourier Transform
The topic of this lecture is the Discrete Fourier Transform (DFT) and its
related Fast Fourier Transform (FFT). But first some recap...
9.1 Recap of Previous Lecture(s)
9.1.1 Poisson's Summation Formula
The Poisson Summation Formula is very important; first of all it states what
the spectrum of the discrete-time sequence looks like given the continuous-time
spectrum. But from the formula, we can also quickly understand the
Sampling Theorem:
X\!\left(e^{j\omega T}\right) = \frac{1}{T} \sum_{k} X_a\!\left(j\omega - j\frac{2\pi k}{T}\right).    (9.1)
Notice the scaling parameter (!).
9.1.2 Reconstruction
Ideal reconstruction is not possible, since the resulting pulse waveform is
double-sided; instead we need to use some type of waveform defined for positive
t which is also implementable in the analog/continuous-time world.
In the time domain the pulse amplitude modulation (PAM) is described by:
x_r(t) = \sum_{n} x(n) \, p(t - nT),    (9.2)
and in the transform domain we could write:
X_r(j\Omega) = P(j\Omega) \, X\!\left(e^{j\Omega T}\right)    (9.3)
If we choose P(j\Omega) to be a brick-wall filter, the corresponding time-domain
signal is described by a so-called sinc function:
p(t) = \mathrm{sinc}(f_s t)    (9.4)
This is not possible to realize, and instead we choose a rectangular pulse in the
time domain:
p(t) = \begin{cases} 1 & 0 \le t < T \\ 0 & \text{otherwise} \end{cases}    (9.5)
which in the transform domain results in a sinc-shaped spectrum. The sinc-shaped
spectrum is not an ideal filtering of the signal, and we will get attenuation
of the desired signal spectrum as well as additional frequency components
from other replica spectra.
9.1.3 Additional Filters
There is a set of lters that can be used: An anti-aliasing lter to lter out
the tail of the spectrum and a reconstruction lter to compensate for the
sinc attenuation of the spectrum and to further suppress the images. Show
graph.
9.1.4 Error / Distortion
If the sampling theorem is not fulfilled, i.e., if there are tails and/or substantial
parts of the spectrum being folded into the signal band (see Poisson's
summation formula), the signal will be distorted. Using Parseval's formula
we can calculate how much error this causes:
\frac{E_a}{E_e} = \frac{\int_{-\infty}^{\infty} |X_a(j\Omega)|^2 \, d\Omega}{2 \int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega + \int_{-\pi/T}^{\pi/T} \big| \sum_{k \ne 0} X_a\!\left(j\Omega - j\frac{2\pi k}{T}\right) \big|^2 \, d\Omega}    (9.6)
An anti-aliasing filter will remove all the upper frequencies and (in the ideal
world) cut off the influence of the folded tails on the signal spectrum.
Thereby, the distortion can be written as:
\frac{E_a}{E_e} = \frac{\int_{-\infty}^{\infty} |X_a(j\Omega)|^2 \, d\Omega}{2 \int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega} = 1 + \frac{\overbrace{\int_{0}^{\pi/T} |X_a(j\Omega)|^2 \, d\Omega}^{\text{In-band}}}{\underbrace{\int_{\pi/T}^{\infty} |X_a(j\Omega)|^2 \, d\Omega}_{\text{Out-of-band}}}    (9.7)
9.2 Discrete Fourier Transform (DFT)
So, back to the Fourier transform:
X\!\left(e^{j\omega T}\right) = \sum_{n} x(n) e^{-j\omega T n}    (9.8)
We see that this is a continuous function of the continuous variable \omega T. In
practice, however, a continuous function cannot be stored in e.g. a computer.
Compare for example with the Fourier series, which has a limited number of
points (for periodic signals) that can be stored in e.g. a memory. What we need
to do is to understand how a Fourier spectrum can be used in a discrete-time
environment. For this purpose we use the Discrete Fourier Transform
(DFT).
9.2.1 Denition of the DFT
The DFT is defined for a finite-length sequence, x(n), where n = 0, 1, 2, \ldots, N-1.
We refer to the N-point DFT as:
X(k) = \sum_{n=0}^{N-1} x(n) e^{-j 2\pi k n / N}, \quad k = 0, 1, \ldots, N-1    (9.9)
and its inverse
x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) e^{j 2\pi k n / N}, \quad n = 0, 1, \ldots, N-1    (9.10)
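The definition (9.9) can be implemented in a few lines and checked against a library FFT:

```python
import numpy as np

def dft(x):
    """Direct N-point DFT per (9.9): X(k) = sum_n x(n) e^{-j2πkn/N}."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(N, 1)
    # (N x N) matrix of e^{-j2πkn/N} applied to the sequence.
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.array([1.0, 2.0, 0.0, -1.0])
print(np.allclose(dft(x), np.fft.fft(x)))  # True
```

The direct form costs O(N²) operations, which is what the FFT later in this chapter improves on.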
Let's look at some properties:
X(k) = \sum_{n=0}^{N-1} x(n) e^{-j 2\pi \frac{n}{N} k}, \quad k = 0, 1, \ldots, N-1    (9.11)
In the exponential, we can add any arbitrary number of (full) rotations
around the unit circle, i.e.,
X(k) = \sum_{n=0}^{N-1} x(n) e^{j 2\pi n - j 2\pi \frac{n}{N} k} = \sum_{n=0}^{N-1} x(n) e^{j 2\pi \frac{n}{N} N - j 2\pi \frac{n}{N} k} = \sum_{n=0}^{N-1} x(n) e^{j 2\pi \frac{n}{N} (N - k)}    (9.12)
Then take the conjugate on both sides:
X^*(k) = \left( \sum_{n=0}^{N-1} x(n) e^{j 2\pi \frac{n}{N} (N - k)} \right)^* = \sum_{n=0}^{N-1} x(n) e^{-j 2\pi \frac{n}{N} (N - k)} = X(N - k),    (9.13)
where we assume that we only consider real-valued sequences x(n). Obviously
the conjugate can be conjugated and we get:
X(k) = X^*(N - k)    (9.14)
In terms of amplitude and phase spectra this means that
|X(k)| = |X(N - k)|, \quad \arg\{X(k)\} = -\arg\{X(N - k)\},    (9.15)
and we recognize the reflection (and inversion for the phase) around \pi from
the Fourier transform. The DFT calculates the Fourier transform at evenly
distributed angles in [0, 2\pi). Show graph.
To correlate this properly with the continuous Fourier transform, we have
to consider three different cases:
1. x(n) is indeed a finite-length sequence consisting of N points. Then
(and only then) the DFT is a true sampling of the continuous Fourier
transform, i.e.,
X(k) = X\!\left(e^{j\omega T}\right)\big|_{\omega T = 2\pi k/N} = X\!\left(e^{j 2\pi k/N}\right)    (9.16)
for k = 0, 1, \ldots, N-1.
2. x(n) is an infinite-length sequence and has to be truncated to N points.
This means that the DFT is not a true sampling of the continuous
Fourier transform, i.e.,
X(k) \ne X\!\left(e^{j\omega T}\right)\big|_{\omega T = 2\pi k/N}    (9.17)
It is, however, quite often likely that they are approximately equal, especially
if N is a large number.
3. x(n) is a finite-length sequence and periodic with a period of N (or
N/m where m is an integer). Then the DFT is identical to the Fourier
series coefficients scaled by the length of the sequence, i.e.,
X(k) = N \cdot C_k    (9.18)
which is a completely true representation of the signal in the frequency
domain (!!!).
Increased Resolution
The resolution of the DFT can be increased by using so-called zero-padding.
This means that instead of N samples we use, say, L samples, where we pad with
a zero-valued part of the sequence:
x_2(n) = \begin{cases} x(n) & 0 \le n \le N-1 \\ 0 & N \le n \le L-1 \\ \text{don't care} & \text{otherwise} \end{cases}    (9.19)
We now get
X(k) = \sum_{n=0}^{L-1} x_2(n) e^{-j 2\pi \frac{n}{L} k} = \sum_{n=0}^{N-1} x(n) e^{-j 2\pi \frac{n}{L} k}, \quad k = 0, 1, \ldots, L-1    (9.20)
Show some examples in a graph.
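Zero-padding can be demonstrated with NumPy: the padded DFT samples the same underlying spectrum on a denser grid, so every (L/N)-th bin reproduces the original N-point DFT:

```python
import numpy as np

# N-point signal, DFT evaluated on N bins vs zero-padded to L bins.
N, L = 16, 128
n = np.arange(N)
x = np.cos(2 * np.pi * 0.23 * n)   # arbitrary test tone

X_N = np.fft.fft(x)                # N frequency samples
X_L = np.fft.fft(x, n=L)           # zero-padded: L samples of the same spectrum

# Zero-padding only interpolates the spectrum: every (L/N)-th bin of
# the L-point DFT equals the corresponding N-point bin.
print(np.allclose(X_L[:: L // N], X_N))  # True
```

The extra bins interpolate between the original samples; they add display resolution but no new information.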
9.2.2 Eects on innite-length sequences
Assume that the sequence is infinitely long and create a new sequence by
windowing it so that it becomes a finite-length sequence:
x_w(n) = x(n) \cdot w(n)    (9.21)
where w(n) is a window created by a window function or weighting function.
(Notice that there is nothing preventing us from using this concept for a
finite-length sequence too ...) In the simplest case we let w(n) be a brick-wall or
rectangular window:
w(n) = \begin{cases} 1 & 0 \le n \le N-1 \\ 0 & \text{otherwise} \end{cases}    (9.22)
Then we get
x_w(n) = \begin{cases} x(n) & 0 \le n \le N-1 \\ 0 & \text{otherwise} \end{cases}    (9.23)
The DFT of the windowed sequence becomes
X_w(k) = X_w\!\left(e^{j\omega T}\right)\big|_{\omega T = 2\pi k/N},    (9.24)
but
X_w(k) \ne X\!\left(e^{j\omega T}\right)\big|_{\omega T = 2\pi k/N},    (9.25)
since X_w(e^{j\omega T}) \ne X(e^{j\omega T}). Actually, since we have a multiplication/product
in the sequence domain, this corresponds to a (cyclic) convolution in the
transform domain, i.e.,
x_w(n) = w(n) \cdot x(n) \;\Longleftrightarrow\; W\!\left(e^{j\omega T}\right) * X\!\left(e^{j\omega T}\right) = X_w\!\left(e^{j\omega T}\right)    (9.26)
Remember the case of ideal reconstruction, when we used a brick-wall
pulse in the time domain with its resulting sinc function (stretching over
all t). In this case we have something similar: a brick wall in the
sequence domain whose corresponding function in the transform domain is
a sinc too. If the signal is unlimited and not periodic within the N samples,
the cyclic convolution will introduce erroneous components that fold into and
accumulate in the signal band. These components are called leakage or ripple.
The leakage/ripple depends on the windowing function and the
sequence itself. One can choose different windowing functions to let the error
have different impact on the spectrum. Different types of sequences are more or less
sensitive to the choice of window function.
9.2.3 Periodic Sequences
For periodic sequences that repeat themselves with a period of M where m \cdot M =
N (the length of the DFT), with m an integer, the DFT corresponds well with
the Fourier series of the signal. Consider
x(n) = \sum_{k=0}^{N-1} C_k \, e^{j 2\pi \frac{n}{N} k},    (9.27)
where
C_k = \frac{1}{N} \sum_{n=0}^{N-1} x(n) \, e^{-j 2\pi \frac{n}{N} k},    (9.28)
and we directly see that X(k) = N \cdot C_k. If we still have an (inherently) periodic
sequence but the sequence is truncated with N not equal to m \cdot M, there will
be leakage and ripple just as for the infinite-length case above.
9.3 Fast Fourier Transform (FFT)
The FFT is a faster way of computing the DFT: if N = 2^M, where M is
an integer, the terms e^{-j 2\pi k n / N} can be subdivided into groups and the DFT can
be computed more efficiently and faster. The FFT is equal to the DFT when
N = 2^M, and if the sequence is shorter, one often uses zero-padding to fill up
the sequence to the closest larger 2^M length. The improvement in speed can
become quite significant: for N = 1024 the number of computations in the
DFT is of the order 10^6, whereas for the FFT it is of the order 10^4.
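The grouping idea can be sketched as a recursive radix-2 (Cooley-Tukey) FFT — a minimal illustration, not an optimized implementation:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])          # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])           # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Butterfly combination of the two half-length DFTs.
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.randn(1024)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # True
```

Each level halves the problem, giving the N log2 N operation count quoted above.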
9.4 Coherent Sampling
In general it is recommended to use a test signal for your system
that gives so-called coherent sampling. For example, if you want to test
your system performance with a sinusoid, the best choice is to pick the
sampling frequency and signal frequency to be relatively prime and to choose
the data capture length (N samples in the DFT/FFT) to be N = 2^M and
equal to a whole number of periods. Let the input signal be given by
x(t) = \sin(2\pi f_0 t)    (9.29)
and the sampled signal (at t = nT) be
x(n) = x(nT) = \sin\!\left(2\pi \frac{f_0}{f_s} n\right).    (9.30)
We now want to capture \lambda full periods of the signal within one DFT/FFT
frame of N samples. This means that
N = \lambda \, \frac{f_s}{f_0},    (9.31)
or
f_0 = \frac{\lambda}{N} \, f_s,    (9.32)
since quite often the sampling frequency and the capture length are fixed
in the system.
The trick now is to choose \lambda to be a prime number and N = 2^m
(so that the FFT can be utilized). The reasoning behind the choice of a prime
number is that then (and only then) we guarantee that the signal does not
repeat itself within the FFT frame, and more importantly, we thereby guarantee
that we sample more amplitude levels than we would otherwise. Secondly,
we also guarantee that we sample the true spectrum, since the FFT is now
essentially equal to the Fourier series (ignoring the N scale factor).
If coherent sampling cannot be guaranteed, we need to look at the window
function.
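A minimal coherent-sampling sketch (N = 1024 and λ = 31 periods are arbitrary choices that follow the rules above): the FFT then concentrates all signal energy in two bins, with no leakage:

```python
import numpy as np

# Coherent sampling: N = 2^M samples containing exactly lam (prime) periods.
N, lam = 1024, 31                       # lam = 31 periods, a prime number
n = np.arange(N)
x = np.sin(2 * np.pi * (lam / N) * n)   # f0 = (lam/N) * fs

mag = np.abs(np.fft.fft(x))

# All energy lands in the two bins k = lam and k = N - lam (|X| = N/2);
# every other bin is numerically zero: no leakage, no window needed.
print(np.allclose([mag[lam], mag[N - lam]], N / 2))   # True
others = np.delete(mag, [lam, N - lam])
print(bool(others.max() < 1e-6 * mag[lam]))           # True
```

With a non-integer number of periods per frame, the same test would show energy smeared across many bins, which is exactly the leakage the window functions below address.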
9.5 Window Functions
There are quite a few different window functions to be considered, and they
have pros and cons depending on what type of signal you want to observe.
The convolution in the frequency domain will have different impact on the
result. See plots.
Chapter 10
Stochastic Processes
Today's lecture is about stochastic processes or probability theory (sort of ...). But first ...
10.1 Recap from Previous Lecture(s)
10.1.1 DFT/FFT
Since it is not possible to store (and operate on) a continuous function, such as the Fourier transform, we need to sample it. An N-point DFT is defined as

X(k) = \sum_{n=0}^{N-1} x(n) e^{-j 2\pi nk/N}   (10.1)

where the inverse is given by

x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) e^{j 2\pi nk/N}   (10.2)

The DFT is defined for k = 0, 1, 2, ..., N - 1 and then it repeats itself. We also know that if x(n) is real-valued, the DFT/FFT has the following property:

X(k) = X^*(N - k)   (10.3)

Another property is that the term X(0) is the sum of all samples in the captured length:

X(0) = \sum_{n=0}^{N-1} x(n)   (10.4)
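A minimal check of properties (10.3) and (10.4), assuming NumPy (the 8-sample random sequence is an arbitrary example):

```python
import numpy as np

x = np.random.randn(8)   # a real-valued sequence
X = np.fft.fft(x)
N = len(x)

# Conjugate symmetry for real input, eq. (10.3): X(k) = X*(N - k)
k = np.arange(1, N)
print(np.allclose(X[k], np.conj(X[N - k])))   # True

# X(0) is the sum of all samples, eq. (10.4)
print(np.isclose(X[0].real, x.sum()))         # True
```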
We have three different cases to look at: (1) infinite-length, (2) finite-length, and (3) periodic sequences. For finite-length and periodic sequences the DFT/FFT are exact samples of the Fourier transform, uniformly spaced around the unit circle. In fact, for the periodic case, we have:

X(k) = N \cdot C_k   (10.5)
Windowing
For the infinite-length sequences though, we need to truncate the signal to make it fit into the memory/DFT length. Therefore, the DFT/FFT is not an exact sampling of the Fourier transform. To reduce effects due to the truncation of the signal, one applies a windowing technique, which weighs the samples to suppress the ripple caused by the sinc-shaped spectral weighting of the truncation.
Coherent Sampling
In the real world, when testing your system with a sinusoid, you should try to use so-called coherent sampling to avoid the windowing effects:

f_0 = \frac{M}{2^m} f_s,   (10.6)

where you select M to be a prime number and ensure that the FFT length is some 2^m, where m is an integer.
10.2 Probability Theory
Remember the density function? For example, the probability function can be found as the integral of the density function:

P(x \le a) = \int_{-\infty}^{a} p_x(x) dx   (10.7)

Draw some examples for the uniform and normal distributions. The uniform distribution from a to b is given by

p(x) = \frac{1}{b - a}   (10.8)

within the limits, otherwise 0. For the normal distribution it is

p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x - m)^2}{2\sigma^2}}   (10.9)
Show some graphs!
We define the expectation value (the average of all values) as

E\{x\} = m_x = \int x p(x) dx   (10.10)

and the variance (the spread of all values) is given by

E\{(x - m_x)^2\} = \sigma_x^2 = E\{x^2\} - m_x^2 = \int (x - m_x)^2 p(x) dx   (10.11)

or in general we can extend this to

E\{g(x)\} = \int g(x) p(x) dx   (10.12)
Example: m_X and \sigma_X
The average value for our uniform distribution becomes

m_x = E\{x\} = \int x p(x) dx = \int_a^b \frac{x}{b - a} dx = \left[ \frac{x^2}{2(b - a)} \right]_a^b = \frac{a + b}{2}   (10.13)

and the variance becomes

\sigma_x^2 = E\{x^2\} - m_x^2 = \int x^2 p(x) dx - m_x^2 = \int_a^b \frac{x^2}{b - a} dx - \frac{(a + b)^2}{4}
           = \left[ \frac{x^3}{3(b - a)} \right]_a^b - \frac{(a + b)^2}{4} = \frac{b^3 - a^3}{3(b - a)} - \frac{(a + b)^2}{4}
           = \frac{b^2 + ab + a^2}{3} - \frac{a^2 + 2ab + b^2}{4} = \frac{(b - a)^2}{12}
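The closed-form mean (a + b)/2 and variance (b - a)^2/12 can be checked by simple Monte Carlo sampling (NumPy assumed; the interval [2, 6] is an arbitrary choice):

```python
import numpy as np

a, b = 2.0, 6.0
rng = np.random.default_rng(0)
x = rng.uniform(a, b, size=1_000_000)

# Compare sample statistics against the closed forms derived above.
print(abs(x.mean() - (a + b) / 2) < 0.01)        # True
print(abs(x.var() - (b - a) ** 2 / 12) < 0.01)   # True
```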
Notice however that these values/properties are only for the amplitude/value of one variable at one time point. We will therefore start to extend our world: first look at a sequence instead, and the expectation value and standard deviation can now be expressed as functions of time, such as:

m_x(n), \quad \sigma_x(n)   (10.14)

which might not come as a big surprise. For every new sample we generate from our box of random values we can define a new distribution and thereby a new mean value and standard deviation/variance. Let's make it a little bit more complex...
10.3 Stochastic Processes
Let us now assume that instead of picking a value from the distribution, we pick a whole sequence of numbers. (Illustrate this!) The sequence selected is now called X(n) and its density function is p_{X(n)}. Then it becomes a little bit more tricky to define the expected value of the variable.
Example
Show the idea with two functions being picked: \delta(n) and u(n). Notice in the \delta(n) case that the expected value is 0 (the effect of \delta(0) = 1 will die out fairly soon). In the u(n) case, m_x is 1 instead. We have two different (individual) distribution functions, but of course, mathematically, we could if we liked define a combined distribution function/density function.
10.3.1 Autocorrelation Function
Before we head on, we also define the autocorrelation function as:

r_{XX}(n_1, n_2) = E\{X(n_1) \cdot X(n_2)\}   (10.15)

which states how a sample in the sequence correlates with another one (later or earlier). Sometimes¹ we also write this as

r_{XX}(k) = E\{X(n) \cdot X(n - k)\}   (10.16)

Notice that the autocorrelation function can also be applied to deterministic sequences (!). This is a property that is for example used in matched filters. The signal power is

r_{XX}(n, n) = r_{XX}(k = 0) = E\{X^2(n)\}   (10.17)
10.3.2 Power Spectrum
The power spectrum describes the frequency characteristics of the stochastic process and is simply the Fourier transform of the autocorrelation function:

R_{XX}\left(e^{j\omega T}\right) = \sum_k r_{XX}(k) e^{-j\omega T k}   (10.18)

¹ Assuming some properties of the stochastic process that we will see shortly ...
which is rather straightforward. The total signal power can be derived using Parseval's formula and we have:

E\{X^2(n)\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} R_{XX}\left(e^{j\omega T}\right) d(\omega T) = r_{XX}(n, n) = r_{XX}(k = 0)   (10.19)
10.3.3 Cross-correlation Function
Similar to the above, but now assume we have two different stochastic processes, X and Y. We can then use the cross-correlation function to check if the two processes are independent or not:

r_{XY}(n_1, n_2) = E\{X(n_1) \cdot Y(n_2)\}   (10.20)

If E\{X(n_1) \cdot Y(n_2)\} = E\{X(n_1)\} \cdot E\{Y(n_2)\}, the two processes X and Y are uncorrelated.
10.3.4 Ensemble Average
We introduce the so-called ensemble average, which is the average over all different sequences coming out of the process. If we go directly for the generalization, we have:

E\{g_n(X(n))\} = \int g_n(x) p_{X(n)}(x) dx   (10.21)

which is an expectation value that depends on the sequence index n (g_n(x) is a function that depends on the index n).
10.3.5 Stationary Processes
A smaller set of the stochastic processes above are called stationary processes, and for them we have that

m_x(n) = m_x = \text{constant}   (10.22)

and

\sigma_x(n) = \sigma_x = \text{constant}   (10.23)

This means that the average and variance are independent of the time point at which they are taken. (Cf. time-invariance.)
10.3.6 Ergodic Processes
Another, even tighter set of stochastic processes are the so-called ergodic processes, for which the expectation value in terms of amplitude is also equal to the average over time. This means:

m_X = E\{X(n)\} = \lim_{N \to \infty} \frac{1}{2N + 1} \sum_{n = -N}^{N} x(n)   (10.24)

The autocorrelation function is now

r_{XX}(k) = E\{X(n) \cdot X(n - k)\} = \lim_{N \to \infty} \frac{1}{2N + 1} \sum_{n = -N}^{N} x(n) x(n - k)   (10.25)

Now we're almost done with the theory. To simplify the maths further, we will therefore look at ergodic, stationary, stochastic processes, with yet another twist:
10.3.7 White Processes
A white process has the nice property that two different samples in the sequence are uncorrelated/independent stochastic variables. This means that:

E\{X(n_1) \cdot X(n_2)\} = E\{X(n_1)\} \cdot E\{X(n_2)\}, \quad n_1 \ne n_2   (10.26)

Thereby the autocorrelation function can be written as

r_{XX}(k) = \begin{cases} \sigma_X^2 + m_X^2 & k = 0 \\ m_X^2 & \text{otherwise} \end{cases}   (10.27)

or more conveniently: r_{XX}(k) = \sigma_X^2 \delta(k) + m_X^2. The power spectrum can therefore be written as:

R_{XX}\left(e^{j\omega T}\right) = \sum_k r_{XX}(k) e^{-j\omega T k} = \sigma_X^2 + 2\pi m_X^2 \sum_p \delta(\omega T - 2\pi p)   (10.28)

and the spectrum repeats itself every 2\pi, hence the sum.
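Under the ergodicity assumption introduced above, r_XX(k) can be estimated by a time average. This sketch (NumPy assumed; the Gaussian white process with m_x = 0.5 and sigma_x = 2 is an arbitrary example) checks eq. (10.27):

```python
import numpy as np

rng = np.random.default_rng(1)
m_x, sigma_x = 0.5, 2.0
x = m_x + sigma_x * rng.standard_normal(200_000)   # a white process

def autocorr(x, k):
    """Time-average estimate of r_XX(k) = E{X(n) X(n-k)} (ergodic assumption)."""
    return np.mean(x[k:] * x[:len(x) - k])

# r_XX(0) = sigma_X^2 + m_X^2, and r_XX(k != 0) = m_X^2
print(abs(autocorr(x, 0) - (sigma_x**2 + m_x**2)) < 0.05)   # True
print(abs(autocorr(x, 5) - m_x**2) < 0.05)                  # True
```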
10.4 Noise
Now we are getting close, and we start by looking at noise. Noise is an unwanted part of the signal, and is nondeterministic, i.e., there is no way for us to predict the next sample by observing old ones. We can say something about the statistical properties and the probability for the next sample to be a value between a and b, but not more than that. In this course we will look at two types of quantization noise, essentially with the same properties:
1. Quantization noise due to sampling
2. Quantization (round-off) noise due to limited-wordlength operations
10.4.1 Model
The quantized output is denoted x_Q(n) and the error is the output minus the input: e(n) = x_Q(n) - x(n). The quantization operation is very nonlinear and hard to properly analyze. A linear model for the noise is illustrated. If the number of bits is high, i.e., the quantization noise is small compared to the signal, the error is close to a stochastic process.
Signal-to-Noise Power Ratio
To be able to analyze properly, we instead go towards using statistical methods (as the ones above) and describe the signal and noise power, especially the ratio between them:

SNR = 10 \log_{10} \left( \frac{\text{average power of } x(n)}{\text{average power of } e(n)} \right)   (10.29)

The autocorrelation function can be used to calculate the power of a stochastic sequence.
10.4.2 Filtering Stochastic Processes
We are not going to go into the details of filtering of stochastic processes in this course, but the important result is that:

r_{YY}(k) = r_{hh}(k) * r_{XX}(k) = \sum_n r_{hh}(n) r_{XX}(k - n)   (10.30)

which states that the autocorrelation function of the output signal from an LTI system can be written as the convolution between the deterministic autocorrelation function of the impulse response and the input autocorrelation function. In the transform domain, this means that the power spectrum of the output can be written as:

R_{YY}\left(e^{j\omega T}\right) = \left|H\left(e^{j\omega T}\right)\right|^2 R_{XX}\left(e^{j\omega T}\right)   (10.31)
This is sometimes also referred to as the super formula. More conveniently, this function can be used to look at how the power is amplified/suppressed at the output of the system. The output mean value equals

m_Y = m_X \sum_k h(k) = m_X H(e^{j0}) = m_X H(1)   (10.32)

i.e., the input DC component scaled by the DC gain of the system.
Filtering White Noise
If the input signal is white noise, we have its autocorrelation function as:

r_{XX}(k) = \sigma_X^2 \delta(k) + m_X^2   (10.33)

Use the convolution formula to find the output from an LTI system:

r_{YY}(k) = \sum_n r_{hh}(n) r_{XX}(k - n) = \sum_n r_{hh}(n) \left( \sigma_X^2 \delta(k - n) + m_X^2 \right)   (10.34)
The delta function cuts out the k-th sample from the sum, and we have

r_{YY}(k) = \sigma_X^2 r_{hh}(k) + m_X^2 \sum_n r_{hh}(n) = \sigma_X^2 r_{hh}(k) + m_X^2 |H(1)|^2   (10.35)

where the last term has been derived by simply assuming an LTI system fed with a DC component. We also know that

E\{Y^2\} = r_{YY}(0) = \sigma_Y^2 + m_Y^2 = \sigma_X^2 r_{hh}(0) + m_X^2 |H(1)|^2   (10.36)

from which we conclude that

\sigma_Y^2 = \sigma_X^2 r_{hh}(0) = \sigma_X^2 \sum_n h^2(n),   (10.37)

i.e., the power of the output signal equals the power of the input signal times the power of the impulse response.
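Equations (10.32) and (10.37) can be verified by filtering a long white sequence through an arbitrary FIR impulse response (the particular h, mean and variance below are made-up values; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.5, 0.3, -0.2, 0.1])   # an arbitrary FIR impulse response
m_x, sigma_x = 1.0, 1.5
x = m_x + sigma_x * rng.standard_normal(500_000)   # white input

y = np.convolve(x, h, mode="valid")   # LTI filtering of the white input

# m_Y = m_X * H(1), eq. (10.32); sigma_Y^2 = sigma_X^2 * sum h^2, eq. (10.37)
print(abs(y.mean() - m_x * h.sum()) < 0.01)              # True
print(abs(y.var() - sigma_x**2 * np.sum(h**2)) < 0.02)   # True
```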
10.4.3 Quantization Noise
We previously said that if the quantization error is small enough, the error due to the nonlinear operation can be considered to be a (white) stochastic process and a linear model can be created. Let's look at ADC sampling. If we ramp the input (continuous-time) function and quantize it along a piece-wise linear staircase function, we see that we make a small error, e(n), centered around each stair step. This sawtooth describes how the error is distributed in the amplitude domain. (Illustrate!)
In fact, the error has a rectangular, uniform distribution. The width of the distribution is equal to one step, centered from -\Delta/2 to \Delta/2, where \Delta equals the amplitude of one step. In an N-bit ADC, with a full range of V_{ref}, the step size must be

\Delta = \frac{V_{ref}}{2^N}   (10.38)
Previously in this lecture, we derived the average and variance for this type of distribution:

m_X = 0, \quad \sigma_X^2 = \frac{\Delta^2}{12} = \frac{V_{ref}^2}{3 \cdot 2^{2N+2}}   (10.39)

The quantization noise power (P_q) is given by the \sigma_X^2 above. Let's see what happens if we apply a peak input sinusoid to the ADC. Then the signal power becomes:

P_s = \frac{1}{2} \left( \frac{V_{ref}}{2} \right)^2   (10.40)
We can form the signal-to-noise ratio (SNR) as:

SNR = 10 \log_{10} \frac{P_s}{P_q} = 10 \log_{10} \frac{\frac{1}{2} \left( \frac{V_{ref}}{2} \right)^2}{\frac{V_{ref}^2}{3 \cdot 2^{2N+2}}} = 10 \log_{10} \frac{3 \cdot 2^{2N}}{2} \approx 1.76 + 6.02 N   (10.41)
For more bits, the quantization noise plays a smaller role. Now, round-off noise inside the digital circuits can be treated in the same way, and this lets us analyze noise throughout e.g. a filter.
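A rough measurement sketch of the 1.76 + 6.02N rule: quantize a coherently sampled full-scale sinusoid with a hypothetical 12-bit rounding quantizer (all parameter values here are assumptions for illustration, and the quantizer is an idealized model, not a real ADC):

```python
import numpy as np

def quantize(x, n_bits, v_ref):
    """Idealized uniform rounding quantizer with step v_ref / 2^n_bits."""
    delta = v_ref / 2**n_bits
    return delta * np.round(x / delta)

fs, N = 44100.0, 2**14
M = 101                         # prime -> coherent sampling
f0 = M / N * fs
v_ref = 2.0

n = np.arange(N)
x = (v_ref / 2) * np.sin(2 * np.pi * f0 / fs * n)   # full-scale sinusoid
e = quantize(x, 12, v_ref) - x                      # quantization error

snr = 10 * np.log10(np.mean(x**2) / np.mean(e**2))
print(abs(snr - (1.76 + 6.02 * 12)) < 1.0)          # close to 74 dB -> True
```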
Example
Consider the following filter. Show the different filter structures in the example from the book, page 278. The difference equation is given by

y(n) = b_1 y(n - 1) + a_0 x(n) + a_1 x(n - 1)   (10.42)

where b_1 = 0.5, and the values of a_0 and a_1 are as given in the book. Do the z-transform:
Y(z) = b_1 z^{-1} Y(z) + a_0 X(z) + a_1 z^{-1} X(z) \;\Rightarrow\; H(z) = \frac{a_0 + a_1 z^{-1}}{1 - b_1 z^{-1}}   (10.43)

We assume a stable system, i.e., |b_1| < 1, and get the Fourier transform as:

H\left(e^{j\omega T}\right) = \frac{a_0 + a_1 e^{-j\omega T}}{1 - b_1 e^{-j\omega T}}   (10.44)
Then let's go to the noise sources. Insert one after each multiplier. Show in a diagram. Then take them step by step and sum their contributions (all others are set to 0) to the output. For e_1 we have a new difference equation:

y(n) = e_1(n) + b_1 y(n - 1)   (10.45)

The transfer function becomes

H_1(z) = \frac{1}{1 - b_1 z^{-1}}   (10.46)

which has the inverse z-transform

h_1(n) = b_1^n u(n)   (10.47)

The noise power at the output due to e_1 then becomes

P_{Ye,1} = P_{e,1} \sum_n h_1^2(n) = \frac{Q_1^2}{12} \sum_n b_1^{2n} = \frac{Q_1^2 / 12}{1 - b_1^2}   (10.48)
Let's look at the next noise source: for e_2 we have a new difference equation,

y(n) = e_2(n) + b_1 y(n - 1),   (10.49)

i.e., h_2 is identical to h_1, so the noise power at the output due to e_2 becomes

P_{Ye,2} = P_{e,2} \sum_n h_2^2(n) = \frac{Q_2^2 / 12}{1 - b_1^2}   (10.50)

For the next noise source, e_3, we likewise have

y(n) = e_3(n) + b_1 y(n - 1)   (10.51)

The transfer function yet again becomes identical to h_1, so the noise power at the output due to e_3 becomes

P_{Ye,3} = P_{e,3} \sum_n h_3^2(n) = \frac{Q_3^2 / 12}{1 - b_1^2}   (10.52)
(10.52)
Now we can sum up all these contributions to form the total noise power at the output:

P_{Ye} = P_{Ye,1} + P_{Ye,2} + P_{Ye,3} = \frac{1}{12} \cdot \frac{Q_1^2 + Q_2^2 + Q_3^2}{1 - b_1^2}   (10.53)

Assuming a sinusoid input at \omega_0 T enables us to calculate the SNR, since

P_Y = P_X \left|H\left(e^{j\omega_0 T}\right)\right|^2   (10.54)
and

SNR = 10 \log_{10} \frac{P_X \left|H\left(e^{j\omega_0 T}\right)\right|^2}{\frac{1}{12} \cdot \frac{Q_1^2 + Q_2^2 + Q_3^2}{1 - b_1^2}}   (10.55)

Assume that all quantization steps are equal, i.e.,

Q_i = \Delta = V_{ref} / 2^N   (10.56)

and that the input signal has a peak power of V_{ref}^2 / 8. Then this gives us:

SNR = 10 \log_{10} \frac{\frac{V_{ref}^2}{8} \left|H\left(e^{j\omega_0 T}\right)\right|^2}{\frac{1}{12} \cdot \frac{3 V_{ref}^2 2^{-2N}}{1 - b_1^2}} = 10 \log_{10} \frac{2^{2N} \left|H\left(e^{j\omega_0 T}\right)\right|^2 (1 - b_1^2)}{2}   (10.57)
Using the properties of the logarithm:

SNR \approx 6.02 N + 20 \log_{10} \left|H\left(e^{j\omega_0 T}\right)\right| + 10 \log_{10} \frac{1 - b_1^2}{2}   (10.58)

etc., etc.
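The noise-gain factor 1/(1 - b_1^2) and the logarithm step from (10.57) to (10.58) can be checked numerically. The magnitude value H_w0 below is a made-up placeholder, since a_0 and a_1 are not given here:

```python
import numpy as np

b1 = 0.5
N_bits = 12

# sum_n h1(n)^2 = 1/(1 - b1^2) for h1(n) = b1^n u(n), cf. eq. (10.48)
h1 = b1 ** np.arange(200)                 # geometric impulse response (truncated)
print(abs(np.sum(h1**2) - 1 / (1 - b1**2)) < 1e-12)   # True

# eq. (10.57) versus the logarithm expansion in eq. (10.58)
H_w0 = 1.2                                # assumed |H(e^{j w0 T})| for illustration
snr_exact = 10 * np.log10(2**(2 * N_bits) * H_w0**2 * (1 - b1**2) / 2)
snr_approx = 6.02 * N_bits + 20 * np.log10(H_w0) + 10 * np.log10((1 - b1**2) / 2)
print(abs(snr_exact - snr_approx) < 0.05)             # True
```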
Chapter 11
Multi-Rate Systems
Chapter 12
Chapter 13
Errata / Comments
13.1 Chapter 11 in the Solutions
Exercise 11.1
Compute the mean, variance, and average power of the quantization errors that occur when truncating real numbers. Assume uniform quantization and two's complement representation with quantization step Q and number range between -1 and 1.
The question states we should truncate the numbers and not round them. This means that all values from e.g. 0.5000...01 to 0.5999...99 should be truncated down to 0.5. (One can then easily realize that the quantization noise will have a mean value, unlike in the case with round-off errors.)
The number range as such does not play any role in this context, since the number of bits is not stated. We could however assume that we have N bits in general, and we get the following relation:

Q = \frac{1 - (-1)}{2^N} = 2^{1-N}   (13.1)
Since we are truncating the values, the density function of the truncation error will be defined from 0 to Q and have a constant value of p(x) = 1/Q, i.e.,

p(x) = \begin{cases} 2^{N-1} & 0 \le x \le 2^{1-N} \\ 0 & \text{otherwise} \end{cases}   (13.2)

The average value (mean value) is calculated as

m_x = \int x p(x) dx = \int_0^{2^{1-N}} x \cdot 2^{N-1} dx = 2^{N-1} \cdot \frac{1}{2} \cdot 2^{2(1-N)} = 2^{-N}   (13.3)
which is equal to Q/2 if we choose not to use N as a parameter.
The variance is given by:

V\{x\} = E\{(x - m_x)^2\} = E\{x^2\} - m_x^2 = \int x^2 p(x) dx - m_x^2   (13.4)

which equals

V\{x\} = \left[ \frac{x^3}{3} \right]_0^{2^{1-N}} \cdot 2^{N-1} - 2^{-2N} = \frac{1}{3} 2^{3(1-N)} \cdot 2^{N-1} - 2^{-2N}   (13.5)

(the first term is E\{x^2\}, with 2^{N-1} = 1/Q, and the second term is m_x^2), which boils down to

V\{x\} = \frac{1}{3} 2^{-2N} \left( = \frac{Q^2}{12} \right)   (13.6)

Eventually we need to calculate the average power, E\{x^2\}, which we in fact already have computed above:

E\{x^2\} = \left[ \frac{x^3}{3} \right]_0^{2^{1-N}} \cdot 2^{N-1} = \frac{1}{3} 2^{3(1-N)} \cdot 2^{N-1} = \frac{4}{3} 2^{-2N} = \frac{Q^2}{3}   (13.7)
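A numerical check of the truncation-error statistics, using the same error convention as above (error in [0, Q); N = 8 bits is an arbitrary choice, NumPy assumed):

```python
import numpy as np

N_bits = 8
Q = 2.0 ** (1 - N_bits)                     # eq. (13.1), range -1..1

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=1_000_000)
x_q = np.floor(x / Q) * Q                   # truncation toward minus infinity
e = x - x_q                                 # error lies in [0, Q)

print(abs(e.mean() - Q / 2) < 1e-5)          # mean = Q/2, eq. (13.3)
print(abs(e.var() - Q**2 / 12) < 1e-7)       # variance = Q^2/12, eq. (13.6)
print(abs(np.mean(e**2) - Q**2 / 3) < 1e-6)  # average power = Q^2/3, eq. (13.7)
```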
Example 11.2
Compute the number of bits required to reach 60 dB and 120 dB SNR in
A/D conversion. Assume uniform quantization (rounding) and a sinusoidal
input.
Compared to the previous case, the quantization noise will now have a distribution which is centered around 0, stretching from -Q/2 to Q/2 with an amplitude of 1/Q. We let the amplitude go from -V_0 to +V_0, so that Q becomes 2V_0/2^N for an N-bit converter. This means that

p(x) = \begin{cases} 2^{N-1}/V_0 & -\frac{V_0}{2^N} \le x \le \frac{V_0}{2^N} \\ 0 & \text{otherwise} \end{cases}   (13.8)

The peak amplitude of a sinusoid will be equal to V_0, and we know from basic theory that the power of such a sinusoid can be expressed as

P_s = \frac{1}{2} V_0^2   (13.9)
The quantization noise power equals

E\{x^2\} = \int x^2 p(x) dx = \frac{2^{N-1}}{V_0} \left[ \frac{x^3}{3} \right]_{-V_0/2^N}^{V_0/2^N} = \frac{2^{N-1}}{V_0} \cdot \frac{2 V_0^3 2^{-3N}}{3}   (13.10)

which equals

P_e = E\{x^2\} = \frac{V_0^2 2^{-2N}}{3}   (13.11)
We can now form the ratio between signal and noise power, the SNR:

SNR = 10 \log_{10} \frac{P_s}{P_e} = 10 \log_{10} \frac{\frac{1}{2} V_0^2}{\frac{V_0^2 2^{-2N}}{3}} = 10 \log_{10} \left( \frac{3}{2} \cdot 2^{2N} \right)   (13.12)

This can be expressed in dB as:

SNR \approx 1.76 + 6.02 N \ [\text{dB}]   (13.13)

Let's get back to the question: how many bits are required to meet a 60-dB and a 100-dB specification on the SNR? So:

N_{60} = \frac{60 - 1.76}{6.02} \approx 9.7 \Rightarrow 10 \ [\text{bits}]   (13.14)

and

N_{100} = \frac{100 - 1.76}{6.02} \approx 16.3 \Rightarrow 17 \ [\text{bits}],   (13.15)

respectively, where the number of bits is rounded up so that the specification is actually met.
respectively.