DSP - 24 10 2022

Digital Signal Processing

Andrzej Kotyra, Ph.D, D.Sc


room: E317, phone: 81-5384317, [email protected]
Books:

1. A. V. Oppenheim, A. S. Willsky, S. H. Nawab: Signals and Systems, Prentice-Hall, 1997


2. S. Haykin: Signals and systems, Wiley, New York,1999
3. R.K. Rao Yarlagadda: Analog and Digital Signals and Systems, Springer

And others

Outline:
1. Signals – classification. LTI – Linear Time-Invariant Systems

2. Properties of LTI systems in the frequency domain. Expansion of a continuous function
in a series of orthogonal functions. Fourier transform (continuous) – properties,
examples of FT calculation.

3. Window functions. Sampling theorem.

5. Short-Time Fourier Transform, time-frequency resolution. Heisenberg uncertainty
principle

6. Wavelet transform – continuous, discrete; multiresolution analysis. Expansion
in wavelet series. Wavelet properties

7. Z-transform – properties, examples, region of convergence

8. Digital filters
Fundamental terms

Signal processing – conversion of one form of a signal into another that is more
desirable (at the given moment)

Discrete systems – systems whose input and output signals are discrete

Digital systems – systems whose input and output signals are digital

Digital signal processing – processing of signals that are discrete in both
value and argument (independent variable)
Basic concepts

A signal (generally) is a function that carries information about the state
or behaviour of a certain physical system. A signal can vary in time (1D
signal, e.g. temperature changes, speech signals) or position (2D, e.g. a still
image).

Mathematical form: a function of one or more variables.

Classification regarding the continuous or discrete nature of the argument and
the value:
continuous argument and value – continuous (analog) signals
discrete argument, continuous value – discrete signals
continuous argument, discrete value
discrete argument and value – digital signals (special case: binary signals –
only two values are allowed, most often "0" and "1")
signal classification

Signals divide into:
deterministic – periodic, almost periodic, modulated, impulse (with limited energy),
infinite duration (time) with limited energy
random – stationary (ergodic or non-ergodic; e.g. with normal or uniform
distribution) and non-stationary

[Figures: continuous time/continuous amplitude; discrete time/continuous amplitude;
continuous time/discrete amplitude; discrete time/discrete amplitude (axes: time in s,
sample number); a random voice signal]
Example signals

[Figures: sin(2π·5t); sin(2π·5t) + sin(2π·10t); sin(2π·5t) + sin(2π·π·t);
sin(2π·5t) + 0.2·sin(2π·25t); amplitude modulation; frequency modulation
(axes: time in s)]
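The example signals above can be generated numerically. A minimal numpy sketch (the sampling grid and modulation depths are illustrative assumptions, not taken from the slides):

```python
import numpy as np

# Sample the example signals on a 1-second grid.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

s1 = np.sin(2 * np.pi * 5 * t)                                   # sin(2π·5t)
s2 = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 10 * t)      # two harmonics
s3 = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * np.pi * t)   # irrational frequency ratio
s4 = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)

# Amplitude modulation: a slow envelope multiplies a fast carrier.
am = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 50 * t)
# Frequency modulation: the instantaneous phase varies with the modulating signal.
fm = np.sin(2 * np.pi * 50 * t + 5 * np.sin(2 * np.pi * 2 * t))
print(s1.shape, am.shape, fm.shape)
```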
Parameters of deterministic signals

mean value

continuous signal x(t):  x̄ = (1/(t2 − t1)) ∫_{t1}^{t2} x(t) dt
discrete signal x[n]:    x̄ = (1/(n2 − n1 + 1)) Σ_{n=n1}^{n2} x[n]

infinite duration:
x̄ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t) dt
x̄ = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} x[n]

periodic, with period T (N); t0 (n0) – any given point:
x̄_T = (1/T) ∫_{t0}^{t0+T} x(t) dt
x̄_N = (1/N) Σ_{n=n0}^{n0+N−1} x[n]
Parameters of deterministic signals

energy of the signal

continuous signal x(t):  E = ∫_{t1}^{t2} |x(t)|² dt
discrete signal x[n]:    E = Σ_{n=n1}^{n2} |x[n]|²

E_∞ = lim_{T→∞} ∫_{−T}^{T} |x(t)|² dt
E_∞ = lim_{N→∞} Σ_{n=−N}^{N} |x[n]|²

over one period; t0 (n0) – any given point:
E_T = ∫_{t0}^{t0+T} |x(t)|² dt
E_N = Σ_{n=n0}^{n0+N} |x[n]|²
Parameters of deterministic signals

mean power of the signal

continuous signal x(t):  P = (1/(t2 − t1)) ∫_{t1}^{t2} |x(t)|² dt
discrete signal x[n]:    P = (1/(n2 − n1 + 1)) Σ_{n=n1}^{n2} |x[n]|²

P_∞ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} |x(t)|² dt
P_∞ = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} |x[n]|²

over one period; t0 (n0) – any given point:
P_T = (1/T) ∫_{t0}^{t0+T} |x(t)|² dt
P_N = (1/N) Σ_{n=n0}^{n0+N} |x[n]|²
Parameters of deterministic signals

RMS of the signal: x_RMS = √P

Level in decibels:
L[dB] = 10 log10(P1/P0), where P0 is the reference power
L[dB] = 20 log10(A1/A0), where A0 is the reference amplitude (voltage, current, etc.)
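These parameters are straightforward to compute for a finite discrete signal. A short numpy sketch (the test signal is an illustrative assumption):

```python
import numpy as np

def signal_params(x):
    """Mean, energy, mean power, and RMS of a finite discrete signal x[n]."""
    x = np.asarray(x, dtype=float)
    mean = x.sum() / len(x)       # (1/N) * sum of x[n]
    energy = np.sum(x ** 2)       # sum of |x[n]|^2
    power = energy / len(x)       # mean power over the observation window
    rms = np.sqrt(power)          # RMS = sqrt(P)
    return mean, energy, power, rms

def power_db(p1, p0):
    """Power ratio in decibels: 10*log10(P1/P0)."""
    return 10.0 * np.log10(p1 / p0)

x = np.array([1.0, -1.0, 1.0, -1.0])
print(signal_params(x))       # mean 0, energy 4, power 1, rms 1
print(power_db(100.0, 1.0))   # 20.0 dB
```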


Transformation of the independent variable

Reflection: y(t) = x(−t) — the signal is mirrored about t = 0 (support [−T1, T2]
maps to [−T2, T1]).

Time scaling: x(2t) compresses the time axis by a factor of 2; x(t/2) expands it
by a factor of 2.

Time shifting: y(t) = x(t − 2) shifts the signal to the right by 2.

Time shifting and scaling — correct way: shift first, then scale:
v(t) = x(t + 3), y(t) = v(2t) = x(2t + 3).

Incorrect way: scaling first, x(2t), and then shifting by the unscaled amount
yields x(2(t + 3)) = x(2t + 6) — a different signal.
(Digital) signal processing chain

analog signal → anti-aliasing filter (LP) → S&H → step-like signal → ADC →
digital signal → signal processor → digital signal → DAC → step-like (analog)
signal → output filter (optional) → analog signal → actuator
Digital vs analog signal processing

Pros

+ Flexible functionality (can be programmed)
+ Lower sensitivity to distortions
+ Better tolerance of element parameters (cost ↓)
+ Tuning is not necessary (cost ↓)
+ Accuracy is defined by word length

Cons

- Hard (expensive) to process high-frequency signals
- Hard to process extremely strong/weak signals
- Must be powered (no passive systems)
Discrete-time signals

x = {x[n]}, −∞ < n < ∞ — discrete signals are represented by sequences of
numbers; x[n] is the n-th sample of the sequence.

A discrete-time signal is usually (but not always) obtained by sampling a
continuous signal at a regular time interval known as the sampling period, which we
will represent by the parameter T:

x[n] = x_a(nT), −∞ < n < ∞, where T is the sampling period, 1/T the sampling
frequency, and x_a the analog signal.

[Figure: samples x[−3], x[−1], x[0], x[1], x[4] of an analog signal plotted against n]

[Figure: an analog signal, 32 ms, and its discrete version, 256 samples]

Discrete signal operations:

• addition, multiplication: x + y = {x[n] + y[n]}, x ⋅ y = {x[n] ⋅ y[n]}

• multiplication by a constant value: α ⋅ x = {α ⋅ x[n]}

• delay: y[n] = x[n − n0], n0 – an integer
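A minimal numpy sketch of these operations, assuming finite sequences that start at n = 0 and a non-negative delay (the sample values are illustrative):

```python
import numpy as np

def delay(x, n0):
    """y[n] = x[n - n0] for a finite sequence: shift right by n0, zero-filled."""
    y = np.zeros_like(x)
    if n0 < len(x):
        y[n0:] = x[:len(x) - n0]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, 0.5, 0.5, 0.5])

print(x + y)        # elementwise addition {x[n] + y[n]}
print(x * y)        # elementwise multiplication {x[n]·y[n]}
print(2.0 * x)      # multiplication by a constant
print(delay(x, 2))  # [0. 0. 1. 2.]
```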


Basic 1-D signals:

Unit sample:
δ[n] = 1 for n = 0, 0 for n ≠ 0;  δ[n] = u[n] − u[n − 1]

Unit step:
u[n] = 1 for n ≥ 0, 0 for n < 0;  u[n] = Σ_{k=0}^{∞} δ[n − k] = Σ_{k=−∞}^{n} δ[k]

Decaying exponential:
x[n] = Aα^n for n ≥ 0, 0 for n < 0, i.e. x[n] = Aα^n u[n]

Sinusoid:
x[n] = A cos(ω0 n + φ)
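The basic sequences and the identity δ[n] = u[n] − u[n − 1] can be checked directly (the index range and the values A = 1, α = 0.8 are illustrative assumptions):

```python
import numpy as np

n = np.arange(-8, 9)

delta = (n == 0).astype(float)      # unit sample δ[n]
u = (n >= 0).astype(float)          # unit step u[n]
expdec = 0.8 ** n * u               # A·α^n·u[n] with A = 1, α = 0.8
sinus = np.cos(0.3 * n + 0.1)       # A·cos(ω0·n + φ)

# δ[n] = u[n] − u[n−1]:
u_shifted = (n >= 1).astype(float)  # u[n−1] on the same index grid
print(np.allclose(delta, u - u_shifted))  # True
```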
Complex exponential signal

x[n] = Aα^n, where A = |A|e^{jφ} and α = |α|e^{jω0} are complex. Then

x[n] = |A||α|^n e^{j(ω0 n + φ)} = |A||α|^n cos(ω0 n + φ) + j|A||α|^n sin(ω0 n + φ)

If |α| = 1, x[n] = |A| e^{j(ω0 n + φ)} is the complex exponential sequence.

Continuous sinusoids and complex exponential signals are always periodic, with
period T = 2π/ω0.

Discrete signals are periodic if and only if there exists N ∈ ℕ such that
x[n] = x[n + k⋅N] for all k, n ∈ ℤ.

Discrete sinusoid:
A cos(ω0 n + φ) = A cos(ω0 n + ω0 N + φ) ⇒ ω0 N = 2πk ⇒ N = 2πk/ω0

Discrete sinusoids (and complex exponentials) are not necessarily periodic, and
when they are, the period is not necessarily 2π/ω0.
[Figure: cos(ω0 n) for different ω0 — ω0 = 0 and ω0 = 2π; ω0 = π/8 and
ω0 = 15π/8; ω0 = π/16 and ω0 = 31π/16; ω0 = π/4 and ω0 = 7π/4. Each pair
produces exactly the same sequence.]
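The plots above illustrate that ω0 and 2π − ω0 give identical discrete sequences, and that the period N exists only when ω0 is a rational multiple of 2π. A quick numerical check:

```python
import numpy as np

n = np.arange(0, 32)
w0 = np.pi / 8

# cos(ω0·n) and cos((2π − ω0)·n) are the *same* discrete sequence:
a = np.cos(w0 * n)
b = np.cos((2 * np.pi - w0) * n)
print(np.allclose(a, b))  # True

# ω0 = π/8 gives the period N = 2π/ω0 = 16:
print(np.allclose(np.cos(w0 * n), np.cos(w0 * (n + 16))))  # True

# An ω0 that is not a rational multiple of 2π (e.g. ω0 = 1) is not periodic:
c = np.cos(1.0 * n)
print(np.allclose(c[:16], np.cos(1.0 * (n[:16] + 16))))  # False
```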
Any discrete signal p[n] with nonzero samples a[−3], a[1], a[2], a[7] can be
expressed as a weighted sum of unit samples:

p[n] = a[−3]δ[n + 3] + a[1]δ[n − 1] + a[2]δ[n − 2] + a[7]δ[n − 7]

Every discrete signal can be expressed in that way:

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]
Systems

y[n] = T{x[n]}

E.g.

delay system: y[n] = x[n − nd], −∞ < n < ∞

moving average system: y[n] = (1/(M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]

Memoryless systems

The output signal depends only on the present value of the input signal, e.g.
the power dissipated on a resistor, p(t) = i²(t)R = u²(t)/R, or y[n] = (x[n])².
Linear systems – systems fulfilling the superposition principle:

T{ax1[n] + bx2[n]} = aT{x1[n]} + bT{x2[n]} for every n (a, b – const.)

T{x1[n] + x2[n]} = T{x1[n]} + T{x2[n]} — additivity

and

T{ax[n]} = aT{x[n]} — homogeneity

Nonlinear systems
It is sufficient to find a single n for which the additivity or homogeneity
principle is not fulfilled – easier to prove than linearity.

Accumulator system: y[n] = Σ_{k=−∞}^{n} x[k]

It is possible to prove that the accumulator system is linear.
Causal systems

A system is causal if the present value of the output depends only on the
present or past values of the input, for every n.

Example equation of a causal system:

y[n] = (1/3)(x[n] + x[n − 1] + x[n − 2])

Example equation of a noncausal system:

y[n] = (1/3)(x[n + 1] + x[n] + x[n − 1])
The compressor system

y[n] = x[Mn], −∞ < n < ∞, where M is a positive integer.

Every M-th sample is taken (the others are skipped). It is not a time-invariant
system.

Stable systems

A system is stable in the BIBO (bounded input, bounded output) sense if and only
if every bounded input produces a bounded output.

The (input) signal x[n] is said to be bounded if there exists B > 0 such that
for every n the following formula is fulfilled:

|x[n]| ≤ B < ∞
Linear Time-Invariant systems
An LTI system is completely defined by its impulse response h[n].

Let h_k[n] denote the system response to the impulse δ[n − k]. Every discrete
signal can be expressed as a weighted sum of individual discrete impulses, so

y[n] = T{x[n]} = T{ Σ_{k=−∞}^{∞} x[k] δ[n − k] }

From the principle of superposition it follows that:

y[n] = Σ_{k=−∞}^{∞} x[k] T{δ[n − k]} = Σ_{k=−∞}^{∞} x[k] h_k[n]

From the property of time invariance it follows that T{δ[n − k]} = h[n − k] = h_k[n],
hence

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]  — the convolution sum

y[n] = x[n] ⋆ h[n]  — discrete convolution
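The convolution sum can be evaluated directly by summing scaled, shifted copies of h. A sketch for finite sequences (both assumed to start at n = 0; the values are illustrative), checked against numpy's built-in convolution:

```python
import numpy as np

def conv_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k]·h[n-k] for finite sequences
    that both start at n = 0."""
    h = np.asarray(h, dtype=float)
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        y[k:k + len(h)] += xk * h   # add x[k]·h[n-k], i.e. h shifted to start at k
    return y

x = [1.0, 2.0, 3.0]
h = [1.0, 1.0]
print(conv_sum(x, h))    # [1. 3. 5. 3.]
print(np.convolve(x, h)) # same result
```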
Representation of the output of an LTI system as the superposition of responses
to separate samples

[Figure: input signal x[n] = x[−2]δ[n + 2] + x[0]δ[n] + x[3]δ[n − 3] and impulse
response h[n]; the scaled, shifted responses x[−2]h[n + 2], x[0]h[n], x[3]h[n − 3]
sum to the output signal]
Example 1: Analytical evaluation of the output signal for an LTI system having
impulse response

h[n] = u[n] − u[n − N] = 1 for 0 ≤ n ≤ N − 1, 0 otherwise.  Input signal: x[n] = a^n u[n]

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k], using Σ_{k=N1}^{N2} a^k = (a^{N1} − a^{N2+1})/(1 − a),
N2 ≥ N1:

a) for n < 0:  y[n] = 0

b) for 0 ≤ n ≤ N − 1:  y[n] = Σ_{k=0}^{n} a^k = (1 − a^{n+1})/(1 − a)

c) for n > N − 1:  y[n] = Σ_{k=n−(N−1)}^{n} a^k = (a^{n−(N−1)} − a^{n+1})/(1 − a)
   = a^{n−N+1} (1 − a^N)/(1 − a)

Hence:
y[n] = 0                           for n < 0
y[n] = (1 − a^{n+1})/(1 − a)       for 0 ≤ n ≤ N − 1
y[n] = a^{n−N+1} (1 − a^N)/(1 − a) for n > N − 1
Sequences involved in computing the discrete convolution in Example 1

[Figure: x[k] and h[n − k] plotted against k for the three cases:
a) n < 0 — no overlap;
b) 0 ≤ n ≤ N − 1 — partial overlap;
c) n − (N − 1) > 0 — the window lies entirely inside the exponential]
Calculation of the output signal, case by case:

a) n < 0:  y[n] = 0

b) 0 ≤ n ≤ N − 1:  y[n] = Σ_{k=0}^{n} a^k = (1 − a^{n+1})/(1 − a)

c) n > N − 1:  y[n] = Σ_{k=n−(N−1)}^{n} a^k = (a^{n−(N−1)} − a^{n+1})/(1 − a)
   = a^{n−(N−1)} (1 − a^N)/(1 − a)
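The closed form derived in Example 1 can be verified numerically against a direct convolution. A sketch, with a = 0.7 and N = 5 chosen purely for illustration:

```python
import numpy as np

# x[n] = a^n u[n] convolved with h[n] = u[n] - u[n-N] (N ones).
a, N, L = 0.7, 5, 30

n = np.arange(L)
x = a ** n                    # a^n u[n] for n = 0..L-1
h = np.ones(N)
y = np.convolve(x, h)[:L]     # convolution sum, truncated to n < L

y_closed = np.where(
    n <= N - 1,
    (1 - a ** (n + 1)) / (1 - a),               # case b: 0 <= n <= N-1
    a ** (n - N + 1) * (1 - a ** N) / (1 - a),  # case c: n > N-1
)
print(np.allclose(y, y_closed))  # True
```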
LTI systems properties

Commutativity: x[n] ⋆ h[n] = h[n] ⋆ x[n]

Distributivity of the (discrete) convolution operation over addition:

x[n] ⋆ (h1[n] + h2[n]) = x[n] ⋆ h1[n] + x[n] ⋆ h2[n]

Cascade connection: h1[n] followed by h2[n] is equivalent to h2[n] followed by
h1[n], and to a single system with impulse response h1[n] ⋆ h2[n].

Parallel connection: h1[n] and h2[n] driven by the same input, with the outputs
summed, are equivalent to a single system with impulse response h1[n] + h2[n].
Stability

An LTI system is stable ⇔ S = Σ_{k=−∞}^{∞} |h[k]| < ∞ — the impulse response is
absolutely summable (the necessary and sufficient condition).

Proof (sufficiency): if |x[n]| < M for every n, then

|y[n]| = |Σ_{k=−∞}^{∞} h[k] x[n − k]| ≤ Σ_{k=−∞}^{∞} |h[k]| |x[n − k]|
< M Σ_{k=−∞}^{∞} |h[k]| < ∞

In order to show that it is also a necessary condition, we must show that if
S = ∞, then a bounded input can be found that will cause an unbounded output.
Suppose that:

x[n] = h*[−n]/|h[−n]| for h[−n] ≠ 0, and x[n] = 0 for h[−n] = 0  (bounded by unity)

y[0] = Σ_{k=−∞}^{∞} x[0 − k] h[k] = Σ_{k=−∞}^{∞} |h[k]|²/|h[k]| = Σ_{k=−∞}^{∞} |h[k]| = S

Hence, if S = ∞, a bounded input sequence produces an unbounded output sequence.
LTI systems with input and output described by an N-th order
linear constant-coefficient difference equation

Σ_{k=0}^{N} a_k y[n − k] = Σ_{r=0}^{N} b_r x[n − r] — in general, this equation
is not necessarily causal.

Without the initial condition (or conditions) one cannot determine the output
signal.

Example 2: Determine the difference equation for the accumulator:

y[n] = Σ_{k=−∞}^{n} x[k] = x[n] + Σ_{k=−∞}^{n−1} x[k]

Since Σ_{k=−∞}^{n−1} x[k] = y[n − 1]:

y[n] = x[n] + y[n − 1]

[Block diagram: x[n] → adder → y[n], with y[n − 1] fed back through a delay z⁻¹]
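The recursive form y[n] = x[n] + y[n−1] is exactly a running sum. A minimal sketch (the input values are illustrative):

```python
import numpy as np

def accumulator(x, y_init=0.0):
    """Recursive accumulator y[n] = x[n] + y[n-1], with y[-1] = y_init."""
    y = np.zeros(len(x))
    prev = y_init
    for n, xn in enumerate(x):
        y[n] = xn + prev
        prev = y[n]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
print(accumulator(x))  # [ 1.  3.  6. 10.] — running sum
print(np.cumsum(x))    # same result when y[-1] = 0
```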
Example 3: Consider a system that is described by the difference equation

y[n] = ay[n − 1] + x[n]

Input signal (B – const.): x[n] = Bδ[n], under the auxiliary condition y[−1] = c.

For n > −1 (running the recursion forward):
y[0] = ac + B
y[1] = ay[0] + 0 = a(ac + B) = a²c + aB
y[2] = ay[1] + 0 = a(a²c + aB) = a³c + a²B
y[3] = ay[2] + 0 = a(a³c + a²B) = a⁴c + a³B
…
y[n] = a^{n+1}c + a^n B

For n < −1 (running the recursion backward):
y[n] = ay[n − 1] + x[n] ⟹ y[n − 1] = a⁻¹(y[n] − x[n]), i.e.
y[n] = a⁻¹(y[n + 1] − x[n + 1])

Because y[−1] = c:
y[−2] = a⁻¹(y[−1] − x[−1]) = a⁻¹c
y[−3] = a⁻¹(y[−2] − x[−2]) = a⁻¹a⁻¹c = a⁻²c
y[−4] = a⁻¹(y[−3] − x[−3]) = a⁻¹a⁻²c = a⁻³c
…
y[n] = a^{n+1}c

Altogether: y[n] = a^{n+1}c + a^n B u[n] for −∞ < n < ∞.

• The system is nonlinear: for B = 0 (zero input) the output y[n] = a^{n+1}c is
not zero.

• The system is not time-invariant: shifting the input by n0 gives

y1[n] = a^{n+1}c + B a^{n−n0} u[n − n0] ≠ y[n − n0]

Summary:
The output for a given input is not uniquely specified; auxiliary information or
conditions are required.
If the auxiliary information is in the form of N sequential values of the output,
later values can be obtained by rearranging the difference equation as a recursive
relation running forward in n, and prior values can be obtained by rearranging the
difference equation as a recursive relation running backward in n.
Linearity, time invariance, and causality of the system will depend on the
auxiliary conditions. If an additional condition is that the system is initially
at rest, then the system will be linear, time-invariant and causal.
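Running the recursion forward from the auxiliary condition reproduces the closed form y[n] = a^{n+1}c + a^n B for n ≥ 0. A quick numerical check (a, c, B chosen for illustration):

```python
import numpy as np

# Simulate y[n] = a*y[n-1] + x[n] forward from y[-1] = c with x[n] = B*δ[n].
a, c, B, L = 0.5, 2.0, 3.0, 10

y = np.zeros(L)
prev = c                       # auxiliary condition y[-1] = c
for n in range(L):
    x_n = B if n == 0 else 0.0
    y[n] = a * prev + x_n
    prev = y[n]

n = np.arange(L)
closed = a ** (n + 1) * c + a ** n * B
print(np.allclose(y, closed))  # True
```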

Finite-duration Impulse Response (FIR) systems

The impulse response has a finite number of nonzero samples.
If every sample is of finite amplitude, a FIR system is always stable.

Infinite-duration Impulse Response (IIR) systems

An IIR system is stable if S is convergent, for example for:

h[n] = a^n u[n], |a| < 1
LTI systems in the frequency domain

If the input signal is given as x[n] = e^{jωn}, n ∈ (−∞, ∞), then the LTI
system response can be calculated:

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = Σ_{k=−∞}^{∞} h[k] x[n − k]
     = Σ_{k=−∞}^{∞} h[k] e^{jω(n−k)} = e^{jωn} Σ_{k=−∞}^{∞} h[k] e^{−jωk}

H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk}

y[n] = H(e^{jω}) e^{jωn}

H(e^{jω}) = H_Re(e^{jω}) + j H_Im(e^{jω}) = |H(e^{jω})| e^{j arg(H(e^{jω}))}
(amplitude and phase)
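The frequency response of a finite impulse response can be evaluated directly from the defining sum. A sketch (the 3-point moving average is an illustrative choice; its zero at ω = 2π/3 is a standard property of that filter):

```python
import numpy as np

def freq_response(h, w):
    """H(e^{jω}) = sum_k h[k]·e^{-jωk} for a finite impulse response h
    whose first sample is taken at k = 0."""
    h = np.asarray(h, dtype=float)
    k = np.arange(len(h))
    return np.array([np.sum(h * np.exp(-1j * wi * k)) for wi in np.atleast_1d(w)])

# 3-point moving average: DC gain 1, and a zero at ω = 2π/3.
h = np.ones(3) / 3
print(abs(freq_response(h, 0.0)[0]))             # 1.0
print(abs(freq_response(h, 2 * np.pi / 3)[0]))   # ~0.0
```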
Example 4: Find the frequency response of the ideal delay system, defined as
y[n] = x[n − nd], where nd – the shift in the time domain – is an integer.

For the input sequence x[n] = e^{jωn}, n ∈ (−∞, ∞):

y[n] = e^{jω(n−nd)} = e^{−jωnd} e^{jωn} ⇒ H(e^{jω}) = e^{−jωnd}

|H(e^{jω})| = 1,  arg(H(e^{jω})) = −ωnd

If an input sequence is in the form x[n] = Σ_k a_k e^{jω_k n}, n ∈ (−∞, ∞),
then, from the principle of superposition:

y[n] = Σ_k a_k H(e^{jω_k}) e^{jω_k n}, n ∈ (−∞, ∞)

If we can find a representation of x[n] as a superposition of complex exponential
sequences, then we can find y[n] from the frequency response of the system.

Example 5: Find the output sequence of the ideal delay system when a sinusoid is
the input signal.

x[n] = A cos(ω0 n + φ) = (A/2) e^{jφ} e^{jω0 n} + (A/2) e^{−jφ} e^{−jω0 n}
     = x1[n] + x2[n]

Applying the superposition principle, one gets:

y[n] = y1[n] + y2[n]
     = (A/2) [H(e^{jω0}) e^{jφ} e^{jω0 n} + H(e^{−jω0}) e^{−jφ} e^{−jω0 n}]

If h[k] is real, H(e^{−jω0}) = H*(e^{jω0}), and

y[n] = A |H(e^{jω0})| cos(ω0 n + φ + θ), where θ = arg(H(e^{jω0})) — the phase
of the system for ω0.

For the ideal delay system (Ex. 4): |H(e^{jω})| = 1, arg(H(e^{jω})) = −ωnd, so

y[n] = A cos(ω0 n + φ − ω0 nd) = A cos(ω0 (n − nd) + φ)

The frequency response of a discrete-time LTI system is always a periodic
function of the frequency variable, with period 2π:

H(e^{j(ω+2π)}) = Σ_{n=−∞}^{∞} h[n] e^{−j(ω+2π)n}
              = Σ_{n=−∞}^{∞} h[n] e^{−jωn} e^{−j2πn} = H(e^{jω}),

since e^{−j2πn} = 1.

Example 6: Frequency response of the moving average system

h[n] = 1/(M1 + M2 + 1) for −M1 ≤ n ≤ M2, 0 otherwise.

Using Σ_{k=N1}^{N2} a^k = (a^{N1} − a^{N2+1})/(1 − a), N2 ≥ N1:

H(e^{jω}) = (1/(M1 + M2 + 1)) Σ_{n=−M1}^{M2} e^{−jωn}
          = (1/(M1 + M2 + 1)) · (e^{jωM1} − e^{−jω(M2+1)})/(1 − e^{−jω})

          = (1/(M1 + M2 + 1)) ·
            (e^{jω(M1+M2+1)/2} − e^{−jω(M1+M2+1)/2})/(e^{jω/2} − e^{−jω/2}) ·
            e^{−jω(M2−M1)/2}

          = (1/(M1 + M2 + 1)) · sin[ω(M1 + M2 + 1)/2]/sin(ω/2) · e^{−jω(M2−M1)/2}

[Figure: |H(e^{jω})| and arg(H(e^{jω})) — a low-pass filter]
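The closed form above can be checked against the direct DTFT sum. A numerical sketch (M1 = 2, M2 = 3 are illustrative values; ω = 0 is avoided because the formula is 0/0 there):

```python
import numpy as np

# h[n] = 1/(M1+M2+1) for -M1 <= n <= M2: compare closed form and direct sum.
M1, M2 = 2, 3
L = M1 + M2 + 1
n = np.arange(-M1, M2 + 1)
w = np.linspace(0.1, np.pi, 50)

direct = np.array([np.sum(np.exp(-1j * wi * n)) / L for wi in w])
closed = (np.sin(w * L / 2) / np.sin(w / 2)) * np.exp(-1j * w * (M2 - M1) / 2) / L

print(np.allclose(direct, closed))  # True
```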
LTI system response to a suddenly applied exponential signal, x[n] = e^{jωn} u[n]
(the signal is applied at n = 0):

y[n] = 0 for n < 0;  y[n] = (Σ_{k=0}^{n} h[k] e^{−jωk}) e^{jωn} for n ≥ 0

For n ≥ 0:

y[n] = (Σ_{k=0}^{∞} h[k] e^{−jωk}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}
     = H(e^{jω}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}

The first term is the steady-state response, the second the transient response.
The steady-state response is identical with the system response when
x[n] = e^{jωn}, n ∈ (−∞, ∞), is the input signal.

The transient response may decay:

|Σ_{k=n+1}^{∞} h[k] e^{−jωk} e^{jωn}| ≤ Σ_{k=n+1}^{∞} |h[k]|

If the impulse response has finite length, i.e. h[n] = 0 except for 0 ≤ n ≤ M, then

y[n] = H(e^{jω}) e^{jωn} for n > M − 1

If the impulse response has infinite length, then:

|Σ_{k=n+1}^{∞} h[k] e^{−jωk} e^{jωn}| ≤ Σ_{k=n+1}^{∞} |h[k]| ≤ Σ_{k=0}^{∞} |h[k]|

If Σ_{k=0}^{∞} |h[k]| < ∞, the system is stable and the transient decays.

The condition for stability is also a sufficient condition for the existence of
the frequency response function:

|H(e^{jω})| = |Σ_{k=−∞}^{∞} h[k] e^{−jωk}| ≤ Σ_{k=−∞}^{∞} |h[k] e^{−jωk}|
            ≤ Σ_{k=−∞}^{∞} |h[k]|

Σ_{k=−∞}^{∞} |h[k]| < ∞ — the general condition for the existence of the
frequency response.
Approximation of a vector by another vector

A vector is an ordered pair of points, characterized both by magnitude and
direction.

There are many ways to approximate a given vector V1 by another vector V2:

V1 = c12 V2 + Ve

where Ve is the approximation error and c12 V2 is the component of V1 along the
vector V2.

…but only one is optimal: the best approximation is when the length of the error
vector is as small as possible — the perpendicular projection onto V2.

V1 ⋅ V2 = |V1| |V2| cos(θ) — the dot product of two vectors

(V1 ⋅ V2)/|V2| = |V1| cos(θ) = c12 |V2|  ⇒  c12 = (V1 ⋅ V2)/|V2|²

Two vectors V1 and V2 are orthogonal to each other if V1 ⋅ V2 = 0;
it means that c12 = 0.

Signal approximation by another signal

Any real function f1(t) can be approximated by another real function f2(t)
within a certain range (t1 < t < t2), expressed in the following way:

f1(t) ≊ c12 f2(t), t ∈ (t1; t2)

Error function: fe(t) = f1(t) − c12 f2(t)

Optimal approximation means minimizing the approximation error function; a proper
criterion should be chosen.

Minimizing the mean error value within the range (t1, t2):

(1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − c12 f2(t)] dt

This criterion is inadequate, for positive and negative error values could
compensate each other, yielding wrong conclusions.

It is better to choose the mean squared error, defined as follows:

ε = (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − c12 f2(t)]² dt

We have to find c12 so as to make the error ε as low as possible: dε/dc12 = 0

dε/dc12 = d/dc12 { (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − c12 f2(t)]² dt } = 0

(1/(t2 − t1)) [ d/dc12 ∫_{t1}^{t2} f1²(t) dt
  − 2 d/dc12 ( c12 ∫_{t1}^{t2} f1(t) f2(t) dt )
  + d/dc12 ( c12² ∫_{t1}^{t2} f2²(t) dt ) ] = 0

(1/(t2 − t1)) [ 0 − 2 ∫_{t1}^{t2} f1(t) f2(t) dt + 2 c12 ∫_{t1}^{t2} f2²(t) dt ] = 0

c12 = ( ∫_{t1}^{t2} f1(t) f2(t) dt ) / ( ∫_{t1}^{t2} f2²(t) dt )  — for signals,

analogous to c12 = (V1 ⋅ V2)/|V2|² for vectors.

Product of two real functions: ⟨f1(t), f2(t)⟩ = ∫_{t1}^{t2} f1(t) f2(t) dt

Real functions f1(t) and f2(t) are orthogonal within the range (t1 < t < t2) if:

∫_{t1}^{t2} f1(t) f2(t) dt = 0

A set of functions {f0(t), f1(t), f2(t), …, fn(t)} is a set of n+1 mutually
orthogonal (real) functions within (t1 < t < t2) if:

∫_{t1}^{t2} fj(t) fk(t) dt = 0, j ≠ k

Let any function x(t) be approximated within the range (t1 < t < t2) by a linear
combination of the n+1 mutually orthogonal functions:

x(t) ≊ c0 f0(t) + c1 f1(t) + … + cn fn(t) = Σ_{r=0}^{n} cr fr(t)

Mean squared error: ε = (1/(t2 − t1)) ∫_{t1}^{t2} [x(t) − Σ_{r=0}^{n} cr fr(t)]² dt
Function expansion into a series of mutually orthogonal
functions. Fourier series

A real function x(t) is given. The goal is to find its approximation within
t ∈ ⟨t0; t0+T⟩ with a set of orthogonal base functions fk(t):

⟨fi(t), fj(t)⟩ = ∫_{t0}^{t0+T} fi(t) fj*(t) dt = 0, i ≠ j

x(t) ≊ Σ_{k=−∞}^{∞} ck fk(t), i.e. x(t) = Σ_{k=−∞}^{∞} ck fk(t) + ε

The goal is to find the coefficients ck so as to minimize the (mean squared) error:

ε = (1/T) ∫_{t0}^{t0+T} [x(t) − Σ_{k=−∞}^{∞} ck fk(t)]² dt

For any cn:  ∂/∂cn { (1/T) ∫_{t0}^{t0+T} [x(t) − Σ_k ck fk(t)]² dt } = 0

Expanding the square gives three terms. The first, (1/T) ∫ x²(t) dt, does not
depend on cn, so its derivative vanishes. The middle term yields

−(1/T) ∫_{t0}^{t0+T} ∂/∂cn ( 2 x(t) Σ_k ck fk(t) ) dt = −(2/T) ∫_{t0}^{t0+T} x(t) fn(t) dt

In the last term, all cross products fj(t) fk(t) with j ≠ k integrate to zero by
the orthogonality of the base functions, so

(1/T) ∫_{t0}^{t0+T} ∂/∂cn ( Σ_k ck fk(t) )² dt = (2cn/T) ∫_{t0}^{t0+T} fn²(t) dt

Eventually:

−(2/T) ∫_{t0}^{t0+T} x(t) fn(t) dt + (2cn/T) ∫_{t0}^{t0+T} fn²(t) dt = 0

hence:  cn = ( ∫_{t0}^{t0+T} x(t) fn(t) dt ) / ( ∫_{t0}^{t0+T} fn²(t) dt )

For complex, orthogonal functions fk(t) and a complex signal x(t):

cn = ( ∫_{t0}^{t0+T} x(t) fn*(t) dt ) / ( ∫_{t0}^{t0+T} fn*(t) fn(t) dt )
Complex, harmonic base functions
Assume that the fn(t) are complex, harmonic signals in the form:

fn(t) = e^{jnω0t} = e^{jn(2πf0)t} = e^{j2πn(1/T)t} = cos(2πnt/T) + j sin(2πnt/T)

Such a function set is orthogonal, because for any m ≠ n:

⟨fm(t), fn(t)⟩ = ∫_{t0}^{t0+T} fm(t) fn*(t) dt = ∫_{t0}^{t0+T} e^{j2π[(m−n)/T]t} dt
  = ( e^{j2π[(m−n)/T]t0} / (j2π(m−n)/T) ) [ e^{j2π(m−n)} − 1 ] = 0

since e^{j2π(m−n)} = cos(2π(m−n)) + j sin(2π(m−n)) = 1.

If m = n:  ⟨fm(t), fm(t)⟩ = ∫_{t0}^{t0+T} fm(t) fm*(t) dt = ∫_{t0}^{t0+T} e⁰ dt = T

Then, for fn(t) = e^{jnω0t}:

cn = ( ∫_{t0}^{t0+T} x(t) e^{−jnω0t} dt ) / ( ∫_{t0}^{t0+T} e^{−jnω0t} e^{jnω0t} dt )
   = (1/T) ∫_{t0}^{t0+T} x(t) e^{−jnω0t} dt,  n = 0, ±1, ±2, …

If n = 0:  c0 = (1/T) ∫_{t0}^{t0+T} x(t) dt — the mean value for t ∈ ⟨t0, t0+T⟩

Let us assume that x(t) has real values; then x*(t) = x(t).

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0t}

If we conjugate both sides of the equation and reverse the summation direction
(k → −k):

x*(t) = Σ_{k=−∞}^{∞} ck* e^{−jkω0t} = Σ_{k=−∞}^{∞} c−k* e^{jkω0t}
⟹ ck = c−k*, i.e. ck* = c−k

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0t}
     = c0 + Σ_{k=1}^{∞} (ck e^{jkω0t} + c−k e^{−jkω0t})
     = c0 + Σ_{k=1}^{∞} (ck e^{jkω0t} + ck* e^{−jkω0t})
     = c0 + Σ_{k=1}^{∞} 2 Re(ck e^{jkω0t}),

because x + x* = a + jb + a − jb = 2 Re(x).

If we write ck = ak − jbk, then |ck| = √(ak² + bk²), φk = arctan(−bk/ak), and

Re(ck e^{jkω0t}) = Re[(ak − jbk)(cos(kω0t) + j sin(kω0t))]
                = ak cos(kω0t) + bk sin(kω0t)

Eventually: x(t) = c0 + 2 Σ_{k=1}^{∞} (ak cos(kω0t) + bk sin(kω0t))

ck = (1/T) ∫_{t0}^{t0+T} x(t) e^{−jkω0t} dt
   = (1/T) ∫_{t0}^{t0+T} x(t) cos(kω0t) dt − j (1/T) ∫_{t0}^{t0+T} x(t) sin(kω0t) dt
   = ak − jbk

ak = (1/T) ∫_{t0}^{t0+T} x(t) cos(kω0t) dt,  bk = (1/T) ∫_{t0}^{t0+T} x(t) sin(kω0t) dt

If the expansion is written in the form x(t) = c0 + Σ_{k=1}^{∞} (ak cos(kω0t) + bk sin(kω0t)),
then

ak = (2/T) ∫_{t0}^{t0+T} x(t) cos(kω0t) dt,  bk = (2/T) ∫_{t0}^{t0+T} x(t) sin(kω0t) dt

(the more common form)


Fourier series expansion – summary

Exponential form:

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0t}

ck = (1/T) ∫_{t0}^{t0+T} x(t) e^{−jkω0t} dt — the amplitude (weight) of the k-th
function e^{jkω0t}

Trigonometric form:

x(t) = c0 + Σ_{k=1}^{∞} (ak cos(kω0t) + bk sin(kω0t))

c0 = (1/T) ∫_{t0}^{t0+T} x(t) dt
ak = (2/T) ∫_{t0}^{t0+T} x(t) cos(kω0t) dt
bk = (2/T) ∫_{t0}^{t0+T} x(t) sin(kω0t) dt

If the function is periodic (with period T), the expansion concerns a range equal
to the function period; t0 is any point.

If the function is even, the trigonometric expansion does not have the bk terms.

If the function is odd, the trigonometric expansion does not have the ak terms.


Fourier expansion – Gibbs effect

A numerical implementation of the Fourier series allows only a finite number of
expansion coefficients, which is a source of error near discontinuities, known
as the Gibbs effect.
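The Gibbs effect is easy to observe on the partial sums of a square wave: the overshoot near the jump does not shrink as more terms are added. A sketch (the unit square wave with period 1 is an illustrative choice; its odd-harmonic series with coefficients 4/(πk) is standard):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)

def square_partial_sum(t, K):
    """Sum of the first K odd harmonics of a unit square wave (period 1)."""
    x = np.zeros_like(t)
    for k in range(1, 2 * K, 2):   # odd harmonics 1, 3, 5, ...
        x += (4 / np.pi) * np.sin(2 * np.pi * k * t) / k
    return x

for K in (5, 50, 500):
    print(K, square_partial_sum(t, K).max())  # peak stays near 1.18, not 1
```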
Fourier transform

Definition:

Fourier transform:          X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
Inverse Fourier transform:  x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω

The Fourier integral measures the "amount of oscillation at ω" within the
function x(t).

The required conditions for the existence of the Fourier transform are that x(t)
fulfils the Dirichlet conditions:

1. ∫_{−∞}^{∞} |x(t)| dt < ∞

2. x(t) has a finite number of both minima and maxima in every finite range
   (a counterexample: sin(1/x), which oscillates infinitely often near x = 0)

3. x(t) has a finite number of discontinuity points in every finite range

The continuous Fourier transform can be derived as a limiting case of the
Fourier series:

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0t}
     = Σ_{k=−∞}^{∞} ( (1/T) ∫_{t0}^{t0+T} x(t) e^{−jkω0t} dt ) e^{jkω0t}
     = Σ_{k=−∞}^{∞} (ω0/2π) ( ∫_{t0}^{t0+T} x(t) e^{−jkω0t} dt ) e^{jkω0t}

If T → ∞, then ω0 = 2π/T → dω: the discrete change of pulsation becomes
continuous and the summation over k turns into integration over ω:

lim_{T→∞} Σ_{k=−∞}^{∞} (ω0/2π) ( ∫_{−T/2}^{T/2} x(t) e^{−jkω0t} dt ) e^{jkω0t}
  = (1/2π) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} x(t) e^{−jωt} dt ) e^{jωt} dω
  = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω

For any real x(t):

X(jω) = ∫_{−∞}^{∞} x(t) cos(ωt) dt − j ∫_{−∞}^{∞} x(t) sin(ωt) dt = XR(jω) + jXI(jω)

X(jω) = |X(jω)| e^{j arg(X(jω))}

x(t) = (1/2π) ∫_{−∞}^{∞} |X(jω)| e^{j[ωt + arg(X(jω))]} dω
Fourier transform properties

Linearity: ax(t) + by(t) ⇔ aX(ω) + bY(ω)

Proof: the property comes out of the linearity of integration.

Symmetry (duality): X(jt) ⇔ 2πx(−ω)

Scaling: x(at) ⇔ (1/|a|) X(ω/a)

Scaling in the time domain reflects inverse scaling in the frequency domain:
compression of the time scale means expansion in the frequency domain, and vice
versa.

Shifting in the time domain: x(t − t0) ⇔ e^{−jωt0} X(jω)

Displacement of a given signal in the time domain results in a phase shift of
its Fourier transform. The modulus (absolute value) of the transform of the
signal and the modulus of the transform of the displaced signal are equal.

Shifting in the frequency domain (complex modulation): e^{±jω0t} x(t) ⇔ X(j(ω ∓ ω0))

Modulation (multiplication) of the signal x(t) with e^{±jω0t} results in a
spectrum displacement by ±ω0.

Real modulation:

x(t) cos(ω0t) ⇔ (1/2) [X(ω − ω0) + X(ω + ω0)]
x(t) sin(ω0t) ⇔ (−j/2) [X(ω − ω0) − X(ω + ω0)]

Proof: comes from Euler's identity, the linearity of integration, and complex
modulation.

Product of signals:

z(t) = x(t) y(t) ⇔ Z(jω) = (1/2π) ∫_{−∞}^{∞} X(jν) Y(j(ω − ν)) dν
                         = (1/2π) X(jω) ⋆ Y(jω)

Convolution of signals:

z(t) = x(t) ⋆ y(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ ⇔ Z(jω) = X(jω) Y(jω)

Derivative of a signal:  dⁿx(t)/dtⁿ ⇔ (jω)ⁿ X(jω)

Integral of a signal:  ∫_{−∞}^{t} x(τ) dτ ⇔ (1/jω) X(jω) + πX(0)δ(ω)

Correlation of signals:

z(t) = ∫_{−∞}^{∞} x(τ) y*(τ − t) dτ ⇔ Z(jω) = X(jω) Y*(jω)

Parseval's equality:

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω
FT calculation for selected cases

Rectangular pulse:

pT(t) = 1 for |t| ≤ T, 0 for |t| > T

∫_{−∞}^{∞} pT(t) e^{−jωt} dt = ∫_{−T}^{T} e^{−jωt} dt
  = [e^{−jωt}/(−jω)]_{−T}^{T} = (e^{−jωT} − e^{jωT})/(−jω)
  = (2/ω) sin(ωT) = 2T sinc(ωT)
Dirac delta:
The Dirac delta can be represented as the limiting case of the rectangular pulse
when T → 0:

δ(t) = lim_{T→0} ( (1/2T) pT(t) )

∫_{−∞}^{∞} lim_{T→0} ( (1/2T) pT(t) ) e^{−jωt} dt
  = lim_{T→0} ∫_{−∞}^{∞} ( (1/2T) pT(t) ) e^{−jωt} dt
  = lim_{T→0} sin(ωT)/(ωT) = 1  (L'Hôpital's rule)

ℱ(δ(t)) = 1 — the duration tends to 0, the bandwidth is infinite.

Series of Dirac impulses:

δT(t) = Σ_{k=−∞}^{∞} δ(t − kT) = Σ_{k=−∞}^{∞} ck e^{jkω0t} — the series of Dirac
impulses is periodic, hence it can be expressed in terms of a Fourier series.

ck = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkω0t} dt = (1/T) ∫_{−∞}^{∞} δ(t) dt = 1/T
⟹ Σ_{k=−∞}^{∞} δ(t − kT) = (1/T) Σ_{k=−∞}^{∞} e^{jkω0t}

ℱ(δT(t)) = ∫_{−∞}^{∞} [ (1/T) Σ_{k=−∞}^{∞} e^{jkω0t} ] e^{−jωt} dt
  = (1/T) Σ_{k=−∞}^{∞} [ ∫_{−∞}^{∞} e^{jkω0t} e^{−jωt} dt ]
  = (1/T) Σ_{k=−∞}^{∞} 2πδ(ω − kω0)
  = (2π/T) Σ_{k=−∞}^{∞} δ(ω − kω0)

Sinusoidal (cosinusoidal) signal:

ℱ(sin(ω0t)) = ∫_{−∞}^{∞} sin(ω0t) e^{−jωt} dt
  = ∫_{−∞}^{∞} ( (e^{jω0t} − e^{−jω0t})/(2j) ) e^{−jωt} dt
  = (1/2j) ( ∫_{−∞}^{∞} e^{−j(ω−ω0)t} dt − ∫_{−∞}^{∞} e^{−j(ω+ω0)t} dt )
  = (1/2j) ( 2πδ(ω − ω0) − 2πδ(ω + ω0) )
  = (π/j) ( δ(ω − ω0) − δ(ω + ω0) )

ℱ(cos(ω0t)) = π ( δ(ω − ω0) + δ(ω + ω0) )

Part of a cosinusoidal signal:

ℱ(cos(ω0t) pT(t)) = ∫_{−∞}^{∞} cos(ω0t) pT(t) e^{−jωt} dt = ∫_{−T}^{T} cos(ω0t) e^{−jωt} dt
  = ∫_{−T}^{T} ( (e^{−j(ω+ω0)t} + e^{−j(ω−ω0)t})/2 ) dt
  = (e^{−j(ω+ω0)T} − e^{j(ω+ω0)T})/(−2j(ω + ω0)) + (e^{−j(ω−ω0)T} − e^{j(ω−ω0)T})/(−2j(ω − ω0))
  = sin[(ω + ω0)T]/(ω + ω0) + sin[(ω − ω0)T]/(ω − ω0)
The spectrum of a cosine signal multiplied with a rectangular window consists of
a main lobe and side lobes. It is important to reduce the side lobes.

The shorter the signal in the time domain, the wider its spectrum, and vice versa.
In order to minimize the side lobes ("frequency leakage"), a window other than
the rectangular one should be applied, e.g. the Hanning window:

W_Hanning(ω) = 0.5 W_rectangular(ω) + 0.25 W_rectangular(ω − π/T) + 0.25 W_rectangular(ω + π/T)

which in the time domain corresponds to

w_Hanning(t) = [0.5 + 0.25 (e^{jπt/T} + e^{−jπt/T})] w_rectangular(t)
             = [0.5 + 0.5 cos(πt/T)] w_rectangular(t)

Hanning window
Time windows (selected) - discrete versions

Rectangular (boxcar) window:

  w[n] = 1,  n = 0,1,2,…, N−1

Bartlett window (triangular window):

  w[n] = { 2n/(N−1),      0 ≤ n ≤ (N−1)/2
         { 2 − 2n/(N−1),  (N−1)/2 ≤ n ≤ N−1

Kaiser window (parametric):

  w[n] = I₀(β √(1 − ((2n − N + 1)/(N−1))²)) / I₀(β),  n = 0,1,2,…, N−1

  where I₀(x) = 1 + Σ_{m=1}^{∞} [(x/2)^m / m!]²  is the modified Bessel function of the 1st kind and 0th order.

Dolph-Chebyshev window (parametric):

  w[m + (M + 1)] = C [γ + 2 Σ_{k=1}^{M} T_{N−1}(β cos(πk/N)) cos(2πkm/N)],  −M ≤ m ≤ M,  M = (N−1)/2

General cosine window:

  w[n] = A − B cos(2πn/(N−1)) + C cos(4πn/(N−1)) + D cos(6πn/(N−1)),  n = 0,1,2,…, N−1

• von Hann window (a.k.a. Hanning window):
  A = 0.5, B = 0.5, C = 0, D = 0
  w[n] = 0.5 − 0.5 cos(2πn/(N−1)),  n = 0,1,2,…, N−1

• Hamming window:
  A = 0.54, B = 0.46, C = 0, D = 0
  w[n] = 0.54 − 0.46 cos(2πn/(N−1)),  n = 0,1,2,…, N−1

• Blackman window (basic):
  A = 0.42, B = 0.5, C = 0.08, D = 0
  w[n] = 0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1)),  n = 0,1,2,…, N−1

• Nuttall window:
  A = 0.355768, B = 0.487396, C = 0.144232, D = 0.012604

• Blackman-Nuttall window:
  A = 0.3635815, B = 0.4891775, C = 0.1365995, D = 0.0106411

Time windows (selected) and their amplitude spectra
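The generalized cosine family above can be generated from a single formula; a minimal NumPy sketch (the function name `cosine_window` is just an illustrative choice):

```python
import numpy as np

def cosine_window(N, A, B, C=0.0, D=0.0):
    """General cosine window:
    w[n] = A - B cos(2πn/(N-1)) + C cos(4πn/(N-1)) + D cos(6πn/(N-1))."""
    n = np.arange(N)
    t = 2 * np.pi * n / (N - 1)
    return A - B * np.cos(t) + C * np.cos(2 * t) + D * np.cos(3 * t)

hann     = cosine_window(64, 0.5, 0.5)           # von Hann window
hamming  = cosine_window(64, 0.54, 0.46)         # Hamming window
blackman = cosine_window(64, 0.42, 0.5, 0.08)    # Blackman window
```

Note that the Hann and Blackman windows start exactly at 0, while the Hamming window keeps a 0.08 pedestal at the edges, which is what lowers its first side lobe.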
Sampling theorem
Uniform sampling of a continuous signal x(t) can be expressed as multiplication of the signal with a series of Dirac impulses:

  x(t₀) = ∫_{−∞}^{∞} x(t) δ(t − t₀) dt          x_δ(t) = x(t) · Σ_{k=−∞}^{∞} δ(t − kT)

A product in the time domain ⇔ a convolution in the frequency domain:

  X_δ(jω) = (1/2π) X(jω) ⋆ [ω_s Σ_{k=−∞}^{∞} δ(ω − kω_s)] = (ω_s/2π) Σ_{k=−∞}^{∞} X(jω) ⋆ δ(ω − kω_s) =

  = (1/T) Σ_{k=−∞}^{∞} X(j(ω − kω_s)),    T = 2π/ω_s

  X(jω) – spectrum of the analogue signal,  X_δ(jω) – spectrum of the sampled signal

  f(x) ⋆ δ(x − x₀) = f(x − x₀)     (one of the properties of the Dirac delta)

The spectrum of the sampled signal X_δ(jω) consists of scaled and shifted "versions" of the original spectrum. Hence, in order to retrieve the original spectrum X(jω) from the spectrum of the sampled signal X_δ(jω), X(jω) must be band-limited.
If f_max denotes the maximum frequency of the sampled (analogue) signal x(t), its reconstruction requires that the sampling frequency f_p be at least two times larger than f_max:

  f_p ≥ 2 f_max     Shannon theorem
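A quick numerical illustration of what happens when the Shannon condition is violated: a 7 kHz cosine sampled at 10 kHz produces exactly the same sample values as a 3 kHz cosine (its alias at f_p − f₀), so after sampling the two signals are indistinguishable. A minimal NumPy sketch:

```python
import numpy as np

# fs = 10 kHz violates fp >= 2*fmax for a 7 kHz tone.
fs = 10_000.0
n = np.arange(64)
t = n / fs

x_high  = np.cos(2 * np.pi * 7_000 * t)   # undersampled 7 kHz cosine
x_alias = np.cos(2 * np.pi * 3_000 * t)   # its alias: 10 kHz - 7 kHz = 3 kHz
```

Since cos(2π·7000·n/fs) = cos(2πn − 2π·3000·n/fs), the two sample sequences coincide exactly.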

[Figures:
 – the signal to be sampled should have a band-limited spectrum;
 – spectrum of a signal sampled with ω_p > 2ω_max: the replicas do not overlap;
 – the ideal reconstructing (low-pass) filter;
 – spectrum of a signal sampled with ω_p < 2ω_max: the replicas overlap – ALIASING;
 – spectrum of the signal after reconstruction when ω_p < 2ω_max: the reconstruction is distorted]
Sampling of bandlimited signals

In the case of bandlimited (bandpass) signals, it is possible to reconstruct the signal at sampling rates lower than those resulting from the Nyquist theorem – case without frequency inversion.

[Figures on the ω axis with marks at ±ω_p, ±2ω_p, ±3ω_p: spectrum X(ω) of an example bandpass analog signal (before sampling); spectrum of the signal sampled with ω_p; spectrum H_p(ω) of the ideal (bandpass) reconstruction filter; the product X(ω)H_p(ω)]

Sampling of bandlimited signals, cont.

In the case of bandlimited (bandpass) signals, it is possible to reconstruct the signal at sampling rates lower than those resulting from the Nyquist theorem – case with frequency inversion.

[Figures on the ω axis with marks at ±ω_p, ±2ω_p, ±3ω_p: spectrum X(ω) of an example bandpass analog signal (before sampling); spectrum of the signal sampled with ω_p (the replicas around the original band appear frequency-inverted); spectrum H_p(ω) of the ideal (bandpass) reconstruction filter; the product X(ω)H_p(ω)]
Image (2D) sampling – spatial domain

  f(x, y) – analog image: irradiance [W/m²], a spatial 2D function

  s_(Δx,Δy)(x, y) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} δ(x − kΔx, y − lΔy)     Sampling function (2D space)

After sampling, the function is nonzero only at specific places.

Sampled image:

  f_s(x, y) = f(x, y) s_(Δx,Δy)(x, y) = f(x, y) Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} δ(x − kΔx, y − lΔy)

Image sampling – spatial domain (2D): sampling in 2 (spatial) dimensions
Sampling in the frequency domain

  f(x, y) – image,  |F(ω_x, ω_y)| – its amplitude spectrum

FT of the sampling function:

  S_(Δω_x,Δω_y)(ω_x, ω_y) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} δ(ω_x − kΔω_x, ω_y − lΔω_y)

Multiplication in the spatial domain ⇔ convolution in the (spatial-)frequency domain:

  F_s(ω_x, ω_y) = F(ω_x, ω_y) ⊗ Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} δ(ω_x − kΔω_x, ω_y − lΔω_y)

Let the spectrum of the analog signal be limited up to some maximum spatial frequencies ω_xmax, ω_ymax for the x and y directions; the replica spacings are Δω_x = 2π/Δx, Δω_y = 2π/Δy.

  f(x) ⊗ δ(x − x₀) = f(x − x₀)     (Dirac delta property; can be generalized to n dimensions)

[Figure: replicas of the base-band spectrum on the (ω_x, ω_y) plane, spaced by Δω_x and Δω_y]

  Δx ↗ ⇒ Δω_x ↘       Δy ↗ ⇒ Δω_y ↘
Δx ↗ ⇒ Δω_x ↘       Δy ↗ ⇒ Δω_y ↘

When Δx, Δy increase (so Δω_x, Δω_y decrease), the spectral replicas on the (ω_x, ω_y) plane move closer together; once they overlap – ALIASING!

[Figures: replica grids on the (ω_x, ω_y) plane for decreasing Δω_x, Δω_y, up to the case where the replicas overlap]

The critical sampling

  Δω_x = 2ω_xmax       Δω_y = 2ω_ymax

[Figure: replicas on the (ω_x, ω_y) plane just touching each other]

The (perfectly) reconstructed signal – an ideal LP 2D filter has been applied:

  Δω_x = 2ω_xmax       Δω_y = 2ω_ymax
2D spatial sampling

The ideal filter for reconstruction/restoration:

  H(ω_x, ω_y) = { 1/(ω_xmax · ω_ymax),  |ω_x| < ω_xmax, |ω_y| < ω_ymax
               { 0                    for the others

[Figure: |H(ω_x, ω_y)| – a rectangular box over the base band]

The restored (or reconstructed) image in the FT domain can be expressed as:

  F̃(ω_x, ω_y) = F_s(ω_x, ω_y) · H(ω_x, ω_y)   ⟹   f̃(x, y) = f_s(x, y) ⊗ h(x, y)

The restored signal:

  f̃(x, y) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} f_s(kΔx, lΔy) · h(x − kΔx, y − lΔy)
2D sampling in the spatial domain

The spectrum of the ideal reconstruction filter:

  H(ω_x, ω_y) = { 1/(ω_xmax · ω_ymax),  |ω_x| < ω_xmax, |ω_y| < ω_ymax
               { 0                    for the others

In the original (spatial, 2D) domain:

  h(x, y) = (sin(πxΔω_x)/(πxΔω_x)) · (sin(πyΔω_y)/(πyΔω_y))

For f̃(x, y) = f_s(x, y) ⊗ h(x, y):

  f̃(x, y) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} f_s(kΔx, lΔy) · (sin(π(xΔω_x − k))/(π(xΔω_x − k))) · (sin(π(yΔω_y − l))/(π(yΔω_y − l)))

  f̃(x, y) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} f_s(kΔx, lΔy) · sinc(xΔω_x − k) · sinc(yΔω_y − l)
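The same interpolation formula works in 1D, where it is easy to check numerically: reconstruct a band-limited tone from its samples as x̃(t) = Σ_k x(kT)·sinc((t − kT)/T). A minimal NumPy sketch over a finite (truncated) sample grid, so the result is only approximate away from the samples:

```python
import numpy as np

T = 1.0 / 50.0                            # sampling period, fs = 50 Hz
k = np.arange(-200, 201)                  # finite sample grid (truncated sum)
xk = np.sin(2 * np.pi * 3.0 * k * T)      # a 3 Hz tone, well below fs/2

t = np.linspace(-0.5, 0.5, 101)           # reconstruction instants between samples
# sinc interpolation: x~(t) = sum_k x(kT) * sinc((t - kT)/T)
x_rec = np.array([np.sum(xk * np.sinc((ti - k * T) / T)) for ti in t])
x_true = np.sin(2 * np.pi * 3.0 * t)
```

The residual error here comes only from truncating the infinite sum; it illustrates the point made below about the slow (1/t) decay of the sinc kernel.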
2D spatial sampling

[Figure: the 1D sinc and 2D sinc functions]

The sinc function decays only slowly towards infinity – an implementation issue.

In practice the signal should be sampled with a (much) higher frequency than the one resulting from the sampling theorem (the Nyquist frequency). In that case real (physically realizable) filters can reconstruct almost the same image (with marginal differences).
Aliasing – 2D signal

Too low a sampling frequency (in the spatial domain) results in visible distortions known as the Moiré effect (pattern).

https://www.intmath.com/math-art-code/moire-effect.php
Short-Time Fourier Transform (STFT)

The Short-Time Fourier Transform (STFT) was introduced by Gabor in 1946:

  S_x(u, ω) = ⟨x, w_{u,ω}⟩ = ∫_{−∞}^{∞} x(t) w(t − u) e^{−jωt} dt

where

  w_{u,ω}(t) = w(t − u) e^{jωt}

  w(t) – a real and symmetric window function, w(t) = w(−t)

w(t) is shifted (continuously) by u in the time domain and modulated with ω.

  P_S x(u, ω) = |S_x(u, ω)|² = |∫_{−∞}^{∞} x(t) w(t − u) e^{−jωt} dt|²

– the energy density localized within the time window w(t) – the spectrogram.
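A discrete spectrogram can be computed directly from the definition: slide a window over the signal and take the FFT of each frame. A minimal NumPy sketch (the function name `stft` and the frame/hop conventions are just one possible choice):

```python
import numpy as np

def stft(x, w, hop):
    """Discrete STFT: window each frame of x with w (hop samples apart)
    and FFT it. Returns a (num_frames, len(w)) array of S_x(u, omega)."""
    N = len(w)
    starts = range(0, len(x) - N + 1, hop)
    return np.array([np.fft.fft(x[s:s + N] * w) for s in starts])

# Test signal: a 64 Hz tone sampled at fs = 1024 Hz.
fs = 1024
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 64 * t)

w = np.hanning(128)          # the window function w(t)
S = stft(x, w, hop=64)
P = np.abs(S) ** 2           # spectrogram: localized energy density
```

With a 128-sample frame the bin spacing is fs/128 = 8 Hz, so the 64 Hz tone concentrates its energy in bin 8 of every frame.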


∫−∞
If norm of the window function = 1:
∥w(t)∥2 = | x(t) |2 dt = 1
2
Then | w(t) | can be interpreted as function of probability distribution of a free
particle around u, - the centre of the window, generally:
1 2
| w(t) | The particle can be matched with a wave, given by function w(t)
∥w(t)∥2
1 2
The probability density of its momentum can be written as: | w(t) |
∥w(t)∥2

∞ ∞
1
∫−∞ ∫
2
Hence: u= t | w(t) |2 dt Generally: u= t | w(t) | dt
∥w(t)∥ −∞
2

A spread around the center point u is measured by a variance


1

2
σt2 = (t − u)2
| w(t) | dt
∥w(t)∥ −∞
2

Then σt2 is a radius of window in time domain, 2 σt2 denotes its width


1
∫−∞
Generally: Δ= σt2 = (t − u)2 | w(t) |2 dt
∥w(t)∥
According to Parseval's theorem, the same calculations can be made in the frequency domain ω.

The probability density of the momentum in the frequency domain:  (1/(2π∥w∥²)) |W(ω)|²

The centre of the window in the frequency domain:

  ξ = (1/(2π∥w∥²)) ∫_{−∞}^{∞} ω |W(ω)|² dω       W(ω) – the FT of the window function w(t)

  σ_ω² = (1/(2π∥w∥²)) ∫_{−∞}^{∞} (ω − ξ)² |W(ω)|² dω     r.m.s. length in the frequency domain

  Δ_ω = √σ_ω² = √( (1/(2π∥w∥²)) ∫_{−∞}^{∞} (ω − ξ)² |W(ω)|² dω )

σ_t² measures the energy concentration around u (in the time domain); similarly, σ_ω² is a measure of the energy concentration around ξ (in the frequency domain).

The joint time-frequency energy localization (energy concentration on the time-frequency plane) is:  σ_t² σ_ω²

Let's assume u = 0, ξ = 0:

  σ_t² σ_ω² = (1/(2π∥w∥⁴)) ∫_{−∞}^{∞} |t·w(t)|² dt ∫_{−∞}^{∞} |ω·W(ω)|² dω

Parseval equality:

  ∫_{−∞}^{∞} |w(t)|² dt = (1/2π) ∫_{−∞}^{∞} |W(ω)|² dω     hence     ∫_{−∞}^{∞} |w′(t)|² dt = (1/2π) ∫_{−∞}^{∞} |jω·W(ω)|² dω

It yields:

  σ_t² σ_ω² = (1/∥w∥⁴) ∫_{−∞}^{∞} |t·w(t)|² dt ∫_{−∞}^{∞} |w′(t)|² dt

Applying the Schwarz inequality  ∫|f(t)|² dt · ∫|g(t)|² dt ≥ |∫ f(t)·g(t) dt|² :

  σ_t² σ_ω² ≥ (1/∥w∥⁴) [∫_{−∞}^{∞} |t·w′(t)·w*(t)| dt]²

  ≥ (1/∥w∥⁴) [∫_{−∞}^{∞} (t/2)(w′(t)w*(t) + w′*(t)w(t)) dt]² = (1/(4∥w∥⁴)) [∫_{−∞}^{∞} t (|w(t)|²)′ dt]²

Since lim_{|t|→+∞} √t w(t) = 0, integration by parts gives:

  σ_t² σ_ω² ≥ (1/(4∥w∥⁴)) [∫_{−∞}^{∞} |w(t)|² dt]² = 1/4

  σ_t² σ_ω² ≥ 1/4     Heisenberg uncertainty principle





Equality in the Heisenberg principle holds if and only if there exists b fulfilling:

  w′(t) = −2bt · w(t)

  ⇓  (a, b – complex)

i.e. there exists a such that:  w(t) = a·exp(−bt²)   (a Gaussian – the Gabor window)

[Figure: Heisenberg boxes of width 2Δt and height 2Δω centred at (b₁, ξ₂) and (b₂, ξ₁) on the time-frequency plane]

• The joint time-frequency resolution is limited.
• The better the resolution in the time domain (thinner window in time), the worse the resolution in the frequency domain (wider window in frequency), and vice versa.
• The best resolution BOTH in time and frequency is given by the Gaussian (Gabor) window – the STFT with this window = the Gabor transform.

STFT for an example signal – rectangular windows

[Spectrograms of the same example signal computed with rectangular windows of length 512, 2048 and 8192: the longer the window, the better the frequency resolution and the worse the time resolution]
Wavelet transform

  𝒲(s, τ) = ∫_{−∞}^{∞} f(t) ψ*_{s,τ}(t) dt     Continuous wavelet transform

f(t) should be square integrable:

  ∫_{−∞}^{∞} |f(t)|² dt < ∞

Hence the signal should be of finite energy. Neither a sinusoid nor a constant signal belongs to this class of signals.

  ψ_{s,τ}(t) = (1/√|s|) ψ((t − τ)/s),    s ∈ ℝ − {0}, τ ∈ ℝ

A set of functions called wavelets is generated from a prototype function, called the "mother wavelet", by its dilation and translation.

  s – scale coefficient
  τ – translation coefficient

  f(t) = (1/c) ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒲(s, τ) (1/√|s|) ψ((t − τ)/s) (1/s²) ds dτ     Inverse continuous W.T.

where

  c = ∫_{−∞}^{∞} (|Ψ(ω)|²/ω) dω     normalizing factor

  Ψ(ω) – the Fourier transform of the wavelet ψ(t).

Substituting the new variable t′ = t/a, one can write:

  𝒲(a, τ) = √a ∫ f(at′) ψ(t′ − τ/a) dt′

Both the wavelet and the analyzed function can be scaled.






Wavelets have certain features:

1. Their mean is 0:

  ∫_{−∞}^{∞} ψ(t) dt = 0     ⇒   Ψ(ω) = 0 for ω = 0

2. Admissibility condition (it allows the invertibility of the wavelet transform):

  ∫_{−∞}^{∞} (|Ψ(ω)|²/|ω|) dω < ∞

3. They should tend to 0 (for localization in time) – this is not a required condition, but it is desirable.
[Figures: the Daubechies wavelet family; the Coiflet2, Meyer and Symlet8 wavelets]
Wavelet transform – time-frequency resolution

Let t*, 2Δ_ψ denote the centre and the width of the window ψ(t), respectively.
Then the centre of the wavelet ψ_{τ,s}(t) is at b + st*, and its width in the time domain is 2sΔ_ψ.
The signal f(t) is analyzed within the time window:

  [b + st* − sΔ_ψ, b + st* + sΔ_ψ]

Following the Parseval equality, it is possible to calculate the dimensions of an equivalent window in the frequency domain – for the Fourier transform of the wavelet ψ_{τ,s}(t):

  [ω*/s − Δ_Ψ/s, ω*/s + Δ_Ψ/s]

where ω*, 2Δ_Ψ denote the centre and the window size for Ψ(ω), respectively.

The time-frequency box (Heisenberg box) of the wavelet ψ_{τ,s}(t) is given as the Cartesian product:

  [b + st* − sΔ_ψ, b + st* + sΔ_ψ] × [ω*/s − Δ_Ψ/s, ω*/s + Δ_Ψ/s]
[Figure: Heisenberg boxes of the wavelets ψ_{s₁,τ} and ψ_{s₂,τ} on the time-frequency plane – at scale s₁ the box has width 2s₁Δ_ψ in time and height 2Δ_Ψ/s₁ in frequency; at the smaller scale s₂ the converse]

Conclusions:
• For small scales s, the Heisenberg box is narrow in the time domain and wide in the frequency domain.
• Conversely – for large scales s, the Heisenberg box is wide in the time domain and narrow in the frequency domain.
• Both high-frequency and low-frequency components of a signal are represented on the time-frequency plane with a resolution determined by the dimensions of the Heisenberg box.
Expansion in wavelet series

The scale and translation coefficients are discretized; the signal remains continuous.
The set of wavelets is (usually) orthogonal.

  c_{j,k} = ∫_{−∞}^{∞} f(t) ψ_{j,k}(t) dt     Wavelet expansion of a continuous signal f(t) of finite energy

  ψ_{j,k}(t) = (1/√(s₀ʲ)) ψ((t − kτ₀s₀ʲ)/s₀ʲ)     Discrete set of wavelets

Usually: s₀ = 2, τ₀ = 1 – dyadic sampling.

A unit change of the scale coefficient then corresponds to an octave change in frequency – integer powers of 2.

Signal reconstruction:

  x(t) = c Σ_j Σ_k c_{j,k} · ψ_{j,k}(t),     c – a constant value

[Figure: dyadic sampling of the time-frequency plane – scales S₀, 2S₀, 4S₀, …]

The required condition for lossless reconstruction of a signal is the so-called stability condition:

  A∥f(t)∥² ≤ Σ_{j,k} |⟨f(t), ψ_{j,k}(t)⟩|² ≤ B∥f(t)∥²

  A, B – positive numbers, 0 < A ≤ B < ∞

A family of wavelets ψ_{j,k}(t) fulfilling the above condition is called a frame with bounds A, B.

If A = B, then the frame is called tight and the set of wavelets forms an orthogonal basis:

  ∫_{−∞}^{∞} ψ_{j,k}(t) ψ*_{m,n}(t) dt = { 1 ⇔ j = m and k = n
                                         { 0 ⇔ j ≠ m or k ≠ n

where ∥ψ_{j,k}(t)∥ = 1.

If A ≠ B, lossless reconstruction is still possible, though for signal decomposition and reconstruction two wavelet sets are used – a dual frame.

Orthogonality ensures that there is no information redundancy in the wavelet representation – this is not a required condition, but it is still desirable.
Wavelet series: lack of shift invariance

[Figure: a signal and its shifted version on the dyadic grid t/(s ʲ τ₀)]

The amplitudes of the wavelet coefficients are different for a signal and its shifted "version".
Discrete wavelet transform

Continuous W.T. – continuous signals; an infinite number of wavelets.
Wavelet expansion – discrete signal; an infinite number of wavelets.

Practical computation is impossible due to the infinite number of additions and multiplications.

The spectrum of a wavelet is a bandpass filter of changing, scale-dependent width.

Changing the scale of a wavelet (in the time domain) by 2 results in shifting and narrowing its frequency counterpart by 2 – reaching zero frequency is practically impossible (possible only in infinity).

The introduction of a so-called scaling function (a low-pass filter) is necessary.

[Figure: spectrum of the scaling function ϕ (LP filter) together with the spectra of the wavelets ψ at levels j = n, n+1, n+2, n+3, covering the bands around ω_n, ω_n/2, ω_n/4, ω_n/8]
Multiresolution analysis

[Diagram: at each stage the signal is split into an approximation (LP filter + downsampling by 2) and details (HP filter + downsampling by 2):
Stage 1 (j = 1): Signal → A1, D1;  Stage 2 (j = 2): A1 → A2, D2;  Stage 3 (j = 3): A2 → A3, D3]
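One stage of the scheme above can be sketched with the simplest orthonormal filter pair – the Haar filters (the function names `haar_analysis`/`haar_synthesis` are illustrative):

```python
import numpy as np

def haar_analysis(x):
    """One MRA stage with orthonormal Haar filters:
    approximation = LP + downsample by 2, details = HP + downsample by 2."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation A1
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # details D1
    return a, d

def haar_synthesis(a, d):
    """Inverse stage: upsample and combine, reconstructing the signal exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a1, d1 = haar_analysis(x)
```

Because the Haar pair is orthonormal, the stage is perfectly invertible and preserves the signal energy; iterating `haar_analysis` on the approximation gives stages 2, 3, … of the tree.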
Overview of selected wavelets

Daubechies wavelets

The Daubechies wavelets are named after their inventor (or discoverer?) – Ingrid Daubechies. They are characterized by a maximal number of vanishing moments for a given support.

The moment of a wavelet (of p-th order) is defined as follows:

  M_p = ∫_{−∞}^{∞} tᵖ ψ(t) dt

The wavelet transform can be expressed as a Taylor expansion using the wavelet moments:

  𝒲(f(0, s)) = (1/√s) [f(0)M₀s + (f⁽¹⁾(0)/1!)M₁s² + (f⁽²⁾(0)/2!)M₂s³ + … + (f⁽ⁿ⁾(0)/n!)M_n s^{n+1} + O(s^{n+2})]

A number of vanishing moments equal to p means that the wavelet coefficients for polynomials up to order p−1 will be zero. That is, any polynomial signal up to order p−1 can be represented completely in the scaling space. In theory, more vanishing moments means that the scaling function can represent more complex signals accurately. Thus p is also called the accuracy of the wavelet.
Daubechies 1 (Haar)

The Daubechies 1 wavelet (1 vanishing moment; the support length equals 1) is also called the Haar wavelet – it has a compact analytic form.

[Figures: the Daubechies 2, 3, 6 and 14 wavelets; Coiflet 4; Symlet 4]

Symlets (symmetrical wavelets) are modified Daubechies wavelets in which symmetry was taken into consideration. Their other features are like those of the Daubechies wavelets.
[Figures: CWT scalograms (coefficients C_{a,b}, scales 1–61, coloration by scale and absolute value) of example analyzed signals of length ≈1000, computed with the Symlet4, Coiflet4, Haar and Daubechies5 wavelets]
[Figure: decomposition at level 5 with the Daubechies5 wavelet: s = a5 + d5 + d4 + d3 + d2 + d1]
Z Transform

Definition:

  X[z] = 𝒵(x[n]) = Σ_{n=−∞}^{∞} x[n] · z^{−n}     Two-sided Z transform (z – a complex variable)

  X[ω] = Σ_{n=−∞}^{∞} x[n] · e^{−jnω}     Fourier transform of x[n]

  X[z] = Σ_{n=0}^{∞} x[n] · z^{−n}     One-sided Z transform

Substitute z = re^{jω}:

  X[re^{jω}] = Σ_n x[n] · (re^{jω})^{−n} = Σ_n (x[n] r^{−n}) e^{−jnω}

If r = 1 → the Fourier transform of x[n]; in general → the Fourier transform of the product of the series x[n] and the sequence r^{−n} (Discrete-Time Fourier Transform).

Thus, the Z transform can be regarded as a generalization of the Fourier transform of the series x[n].
Calculation of the Z transform over the unit circle → the Fourier transform:

  z = 1 → ω = 0,     z = j → ω = π/2,     z = −1 → ω = π

[Figure: the unit circle in the z-plane]

The set of points in the complex plane for which the Z-transform of x[n] is convergent → the Region of Convergence (ROC).

  Σ_{n=−∞}^{∞} |x[n] r^{−n}| < ∞     Convergence condition of the Z.T.

The Z.T. can be convergent for series for which the F.T. is not.

E.g. the unit step x[n] = u[n]: in the general case it is not absolutely summable, but for r > 1 it is!

  Σ_{n=−∞}^{∞} |u[n] r^{−n}|  is convergent for r > 1

From the convergence of Σ_{n=−∞}^{∞} |x[n] z^{−n}| < ∞ it follows that the ROC depends only on |z| and contains all z for which the Z.T. exists.

The ROC has the form of a ring in the complex plane; its outer radius can tend to ∞.
If the ROC contains the unit circle, then the DTFT exists.
The Z.T. is usually expressed in the form of a rational function – a ratio of two polynomials:

  X[z] = P[z]/Q[z] = (p₀ + p₁z^{−1} + p₂z^{−2} + … + p_{M−1}z^{−(M−1)} + p_M z^{−M}) / (q₀ + q₁z^{−1} + q₂z^{−2} + … + q_{N−1}z^{−(N−1)} + q_N z^{−N})

The polynomials are expressed using z^{−1} rather than z – for the sake of compatibility with the definition of the Z.T.

The polynomials in the numerator and denominator can be factored into polynomials of degree 1:

  X[z] = P[z]/Q[z] = p₀ ∏_{l=1}^{M} (1 − γ_l z^{−1}) / (q₀ ∏_{l=1}^{N} (1 − λ_l z^{−1}))

  γ_l, l = 1…M – the roots of the numerator → zeros of the transform
  λ_l, l = 1…N – the roots of the denominator → poles of the transform

[Figure: zeros (○) and poles (×) in the z-plane]
Example 7: Find the Z transform and the ROC for the right-handed series x[n] = aⁿu[n]

  X[z] = Σ_{n=−∞}^{∞} aⁿu[n] · z^{−n} = Σ_{n=0}^{∞} (az^{−1})ⁿ

X[z] is convergent when Σ_{n=0}^{∞} |az^{−1}|ⁿ < ∞   ⇒   ROC: |z| > |a|

  X[z] = Σ_{n=0}^{∞} (az^{−1})ⁿ = 1/(1 − az^{−1}) = z/(z − a)     zero: z = 0, pole: z = a

[Figure: z-plane with the unit circle, the pole at a and the ROC outside the circle |z| = |a|]
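The closed form of Example 7 is easy to check numerically: inside the ROC the truncated series Σ x[n]z^{−n} converges to z/(z − a). A minimal sketch, with an arbitrary test point z chosen so that |z| > |a|:

```python
import numpy as np

a = 0.5
z = 1.2 * np.exp(1j * 0.7)        # |z| = 1.2 > |a| = 0.5, inside the ROC

# Truncated Z-transform sum: sum_{n=0}^{199} a^n z^{-n} = sum (a/z)^n.
n = np.arange(200)
series = np.sum((a / z) ** n)

closed_form = z / (z - a)
```

Outside the ROC (|z| < |a|) the same partial sums would diverge instead of converging.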
Example 8: Find the Z transform and the ROC for the left-handed series x[n] = −aⁿu[−n − 1]

  X[z] = −Σ_{n=−∞}^{∞} aⁿu[−n−1] z^{−n} = −Σ_{n=−∞}^{−1} aⁿz^{−n} = −Σ_{n=1}^{∞} a^{−n}zⁿ = 1 − Σ_{n=0}^{∞} (a^{−1}z)ⁿ

The Z.T. is convergent in this case when |a^{−1}z| < 1   ⇒   |z| < |a|

  X[z] = 1 − Σ_{n=0}^{∞} (a^{−1}z)ⁿ = 1 − 1/(1 − a^{−1}z) = (1 − a^{−1}z − 1)/(1 − a^{−1}z) = z/(z − a)     zero: z = 0, pole: z = a

The same result as in the previous example, but the ROC is different – for the left-handed series it is the inside of the circle |z| = |a|!

[Figure: z-plane with the unit circle, the pole at a and the ROC inside the circle |z| = |a|]
Example 9: Find the Z transform and the ROC for the sum of two series

  x[n] = (1/2)ⁿu[n] + (−1/3)ⁿu[n]

  X[z] = Σ_{n=−∞}^{∞} {(1/2)ⁿu[n] + (−1/3)ⁿu[n]} z^{−n} = Σ_{n=0}^{∞} ((1/2)z^{−1})ⁿ + Σ_{n=0}^{∞} (−(1/3)z^{−1})ⁿ =

  = 1/(1 − (1/2)z^{−1}) + 1/(1 + (1/3)z^{−1}) = (1 + (1/3)z^{−1} + 1 − (1/2)z^{−1}) / ((1 − (1/2)z^{−1})(1 + (1/3)z^{−1})) =

  = 2(1 − (1/12)z^{−1}) / ((1 − (1/2)z^{−1})(1 + (1/3)z^{−1})) = 2z(z − 1/12) / ((z − 1/2)(z + 1/3))

  zeros: 0, 1/12      poles: 1/2, −1/3

ROC:  |(1/2)z^{−1}| < 1 ∧ |−(1/3)z^{−1}| < 1   ⇒   |z| > 1/2 ∧ |z| > 1/3   ⇒   |z| > 1/2

[Figures: ROC for (1/2)ⁿu[n]: |z| > 1/2; ROC for (−1/3)ⁿu[n]: |z| > 1/3; ROC for the sum: |z| > 1/2, with zeros at 0 and 1/12 and poles at 1/2 and −1/3]
Example 10: Find the Z transform and the ROC for the sum of two series

  x[n] = (−1/3)ⁿu[n] − (1/2)ⁿu[−n − 1]

  𝒵{(−1/3)ⁿu[n]} = 1/(1 + (1/3)z^{−1}),  |z| > 1/3
  𝒵{−(1/2)ⁿu[−n − 1]} = 1/(1 − (1/2)z^{−1}),  |z| < 1/2

  𝒵{x[n]} = 1/(1 + (1/3)z^{−1}) + 1/(1 − (1/2)z^{−1}) = 2(1 − (1/12)z^{−1}) / ((1 − (1/2)z^{−1})(1 + (1/3)z^{−1})) = 2z(z − 1/12) / ((z − 1/2)(z + 1/3))

ROC:  |z| > 1/3 ∧ |z| < 1/2   (a ring)

The same result as in Example 9, but the ROC is different.

[Figure: z-plane with the poles at −1/3 and 1/2, the zeros at 0 and 1/12, and the ROC being the ring 1/3 < |z| < 1/2]
Example 11: Find the Z transform and the ROC of the finite series

  x[n] = { aⁿ,  0 ≤ n ≤ N−1
         { 0,   for the others

  X[z] = Σ_{n=0}^{N−1} aⁿz^{−n} = Σ_{n=0}^{N−1} (az^{−1})ⁿ = (1 − (az^{−1})^N)/(1 − az^{−1}) = (1/z^{N−1}) · (z^N − a^N)/(z − a)

ROC:  Σ_{n=0}^{N−1} |az^{−1}|ⁿ < ∞   ⇒   |a| < ∞, z ≠ 0

For N = 16, 0 < a < 1, the zeros of the transform are:

  z_k = a e^{j(2πk/N)},  k = 1, …, N−1     (the root at k = 0, z = a, cancels the pole at z = a)

[Figure: z-plane with a 15-th order pole at z = 0 and the zeros spaced by 2π/N on the circle of radius a]

The ROC is the whole complex plane except the point (0, 0).
Example 12: Find the Z.T. and the ROC of the two-sided series

  x[n] = aⁿ,  −∞ < n < ∞

  x[n] = aⁿ = aⁿu[n] + aⁿu[−n − 1]

  X[z] = Σ_{n=−∞}^{∞} aⁿu[n] z^{−n} + Σ_{n=−∞}^{∞} aⁿu[−n−1] z^{−n} = Σ_{n=0}^{∞} (az^{−1})ⁿ + Σ_{n=−∞}^{−1} (az^{−1})ⁿ = Σ_{n=0}^{∞} (az^{−1})ⁿ + Σ_{n=1}^{∞} (a^{−1}z)ⁿ

ROC of the first sum:  |az^{−1}| < 1 ⇒ |z| > |a|;   ROC of the second sum:  |a^{−1}z| < 1 ⇒ |z| < |a|

The ROC does not exist – there is no Z transform: there is no z for which the transform would be convergent.
Example 13: Find the Z.T. and the ROC for the series x[n] = aⁿcos[ω₀n]u[n]

  x[n] = aⁿcos[ω₀n]u[n] = (1/2) u[n] [(ae^{jω₀})ⁿ + (ae^{−jω₀})ⁿ]

  X[z] = (1/2) Σ_{n=0}^{∞} (ae^{jω₀}z^{−1})ⁿ + (1/2) Σ_{n=0}^{∞} (ae^{−jω₀}z^{−1})ⁿ = (1/2)·1/(1 − ae^{jω₀}z^{−1}) + (1/2)·1/(1 − ae^{−jω₀}z^{−1}) =

  = (1/2) · (2 − az^{−1}[e^{jω₀} + e^{−jω₀}]) / (1 − az^{−1}[e^{jω₀} + e^{−jω₀}] + a²z^{−2}) =

  = (1 − az^{−1}cos(ω₀)) / (1 − 2az^{−1}cos(ω₀) + a²z^{−2})

ROC:  |ae^{±jω₀}z^{−1}| < 1   ⇒   |az^{−1}| < 1   ⇒   |z| > |a|

ROC properties

1. The ROC has the form of a disk or a ring, always centred at the origin of the complex plane.
2. The F.T. of x[n] is absolutely convergent if and only if the ROC of the Z.T. contains the unit circle.
3. The ROC doesn't contain any poles.
4. If x[n] is a finite series, then the ROC is the whole complex plane, except possibly the points z = 0 or z = ∞.
5. If x[n] is an infinite right-handed series, the ROC is located outside of a circle (excluding its edge) whose radius is equal to the modulus of the largest pole of 𝒵{x[n]}.
6. If x[n] is an infinite left-handed series, the ROC is located inside of a circle whose radius is equal to the modulus of the smallest pole of 𝒵{x[n]}.
7. If x[n] is an infinite two-sided series, the ROC has the form of a ring bounded by the largest and the smallest moduli of the poles (of course, it does not contain any poles).
8. The ROC is a connected region.
Z transform properties

1. Linearity

  x₁[n] ↔ X₁[z], ROC: R_{X1}      x₂[n] ↔ X₂[z], ROC: R_{X2}

  ax₁[n] + bx₂[n] ↔ aX₁[z] + bX₂[z]

The ROC is at least equal to R_{X1} ∩ R_{X2}; it can be larger if poles and zeros cancel each other.

Example:

  x[n] = aⁿu[n] − aⁿu[n − N] = { aⁿ, 0 ≤ n ≤ N−1
                               { 0,  for the others

Each term alone has ROC |z| > |a|, but

  X[z] = Σ_{n=0}^{N−1} aⁿz^{−n} = Σ_{n=0}^{N−1} (az^{−1})ⁿ,   with Σ_{n=0}^{N−1} |az^{−1}|ⁿ < ∞   ⇒   |a| < ∞, z ≠ 0

so the ROC of the sum of two (or more) series can be greater than the ROCs of the series taken separately.
2. Translation in the time domain

  x[n − n₀] ↔ z^{−n₀} X[z]     (the ROC can differ at z = 0 or z = ∞)

3. Multiplication by an exponential series

  z₀ⁿ · x[n] ↔ X[z/z₀]

  ROC of 𝒵(x[n]): r_w < |z| < r_z   ⇒   ROC of 𝒵(z₀ⁿ·x[n]): r_w|z₀| < |z| < r_z|z₀|

4. Differentiation of X[z]

  n · x[n] ↔ −z d(X[z])/dz     (the ROC is like that of 𝒵(x[n]))

5. Conjugation of a complex series

  x*[n] ↔ X*[z*]     (the ROC is the same)

6. Conjugation and inversion in the time domain

  x*[−n] ↔ X*[1/z*]

  ROC of 𝒵(x[n]): r_w < |z| < r_z   ⇒   ROC of 𝒵(x*[−n]): 1/r_z < |z| < 1/r_w

7. Convolution

  x₁[n] * x₂[n] ↔ X₁[z] · X₂[z]     (the ROC is at least R_{X1} ∩ R_{X2})
Inverse Z.T.

The series x[n] is calculated from its transform and the ROC:

  x[n] = (1/2πj) ∮_Γ X[z] · z^{n−1} dz     Γ – a closed contour inside the ROC, encircling z = 0

  (1/2πj) ∮_Γ z^{n−1} dz = { 1, n = 0
                           { 0, n ≠ 0     The Cauchy theorem

  (1/2πj) ∮_Γ X[z]·z^{n−1} dz = (1/2πj) ∮_Γ {Σ_{k=−∞}^{∞} x[k]z^{−k}} z^{n−1} dz = Σ_{k=−∞}^{∞} x[k] {(1/2πj) ∮_Γ z^{−k+n−1} dz} = x[n]

The inverse Z transform can be calculated in the following ways:
• the straightforward method
• the "long" division of polynomials method
• the decomposition into simple fractions method
• the residue method
The straightforward method

  X[z] = Σ_{n=n₁}^{n₂} b_n z^{−n},  for example:

  X[z] = 6 + 3z^{−1} − 4z^{−2} + 2z^{−3} + z^{−4} = x[0] + x[1]z^{−1} + x[2]z^{−2} + x[3]z^{−3} + x[4]z^{−4}

Hence:

  x[0] = 6, x[1] = 3, x[2] = −4, x[3] = 2, x[4] = 1;  for the other n: x[n] = 0

Long division method

  X[z] = (2 + 2z^{−1} + z^{−2}) / (1 + z^{−1})

  2 + 2z^{−1} + z^{−2}                 the numerator
  (−)  2 + 2z^{−1}                     the denominator multiplied by 2
  ─────────────
  z^{−2}                               the difference
  (−)  z^{−2} + z^{−3}                 the denominator multiplied by z^{−2}
  ─────────────
  −z^{−3}                              the difference
  (−)  −z^{−3} − z^{−4}                the denominator multiplied by −z^{−3}
  ─────────────
  z^{−4}                               etc.

Hence:  X[z] = 2 + z^{−2} − z^{−3} + z^{−4} − z^{−5} + …,  therefore

  x[n] = { 0, n < 0;  2, n = 0;  0, n = 1;  (−1)ⁿ, n ≥ 2 }     i.e.  x[n] = δ[n] + δ[n − 1] + (−1)ⁿu[n]

Under the assumption that the Z.T. is written in the following form:

  Σ_{k=0}^{∞} c_k z^{−k} = (Σ_{m=0}^{M} b_m z^{−m}) / (Σ_{n=0}^{N} a_n z^{−n})

the general formula for the polynomial coefficients resulting from the division is:

  c₀ = b₀/a₀,   c₁ = (b₁ − c₀a₁)/a₀,   c₂ = (b₂ − c₁a₁ − c₀a₂)/a₀,   …,   c_k = (b_k − Σ_{i=1}^{k} c_{k−i}a_i)/a₀
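The recursion for c_k above translates directly into code; a minimal sketch in plain Python (the function name `longdiv` is illustrative), run on the worked example X[z] = (2 + 2z⁻¹ + z⁻²)/(1 + z⁻¹):

```python
def longdiv(b, a, num_terms):
    """Power-series ('long') division of polynomials in z^-1:
    c_k = (b_k - sum_{i=1..k} c_{k-i} a_i) / a_0."""
    b = list(b) + [0.0] * num_terms      # pad with zeros beyond the given degree
    a = list(a) + [0.0] * num_terms
    c = []
    for k in range(num_terms):
        s = sum(c[k - i] * a[i] for i in range(1, k + 1))
        c.append((b[k] - s) / a[0])
    return c

# X[z] = (2 + 2 z^-1 + z^-2) / (1 + z^-1)
c = longdiv([2, 2, 1], [1, 1], 8)
```

The first coefficients reproduce the hand computation above: 2, 0, 1, −1, 1, −1, … , i.e. x[n] = δ[n] + δ[n−1] + (−1)ⁿu[n].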
Partial fraction decomposition method (transform in the form polynomial/polynomial)

  X[z] = (2 + 2z^{−1} + z^{−2})/(1 + z^{−1}) = 1 + z^{−1} + 1/(1 + z^{−1}),  |z| > 1

The method requires converting the transform to a form to which a table of Z-transforms can be applied.

Generally, if the degree of the polynomial in the numerator is greater than that in the denominator, then:

  X[z] = X₁[z] + X₂[z] = Σ_{k=0}^{K} c_k z^{−k} + (Σ_{m=0}^{M} c_m z^{−m})/(Σ_{n=0}^{N} a_n z^{−n}),   M ≤ N

X₁[z] – directly from the Z-transform table (the "direct" method).

  X₂[z] = (b₀ + b₁z^{−1} + b₂z^{−2} + … + b_M z^{−M})/(a₀ + a₁z^{−1} + a₂z^{−2} + … + a_N z^{−N}) = B[z]/((1 − p₁z^{−1})(1 − p₂z^{−1})…(1 − p_N z^{−1})) =

  = c₀ + c₁/(1 − p₁z^{−1}) + c₂/(1 − p₂z^{−1}) + … + c_N/(1 − p_N z^{−1})

  c₀ = b_N/a_N,    c_k = X[z](1 − p_k z^{−1})|_{z=p_k},  k = 1, 2, …, N

If X₂[z] has a multiple pole p_k (of multiplicity m):

  X₂[z] = c₀ + Σ_k c_k/(1 − p_k z^{−1}) + … + Σ_{l=1}^{m} d_l z^{1−l}/(1 − p_k z^{−1})^l + …

  d_{l,j} = (1/(m − j)!) · [d^{m−j}/dz^{m−j} ((1 − p_k z^{−1})^m X[z])]|_{z=p_k}
Using residues

If X[z] is a rational function (i.e. polynomial/polynomial), then:

  x[n] = Σ_k ρ_k,   ρ_k – the residues of F[z] = z^{n−1}X[z] at its poles

  ρ_k = [(z − p_k) z^{n−1} X[z]]|_{z=p_k}     for a single pole

  ρ_k = (1/(m − 1)!) · [d^{m−1}/dz^{m−1} ((z − p_k)^m F[z])]|_{z=p_k}     for a multiple pole (of order m)

Example:

  X(z) = (2 + 2z^{−1} + z^{−2})/(1 + z^{−1}) = (2z² + 2z + 1)/(z² + z) = (2z² + 2z + 1)/(z(1 + z))

  F(z) = z^{n−1} (2z² + 2z + 1)/(z(1 + z)) = zⁿ(2z² + 2z + 1)/(z²(1 + z))

  n ≥ 2: one single pole p₁ = −1
  n = 1: two single poles p₁ = −1, p₂ = 0
  n = 0: two poles: p₁ = −1 (single), p₂ = 0 (double)
FFT algorithms

The FFT – Fast Fourier Transform – is one of the most commonly used algorithms in digital signal processing (DSP).
The goal – to decrease computation time.

The idea dates back to Carl F. Gauss (1805, in astronomy); it was rediscovered in 1965 by Cooley and Tukey.

Calculating an 8192-point DFT required (in 1967):
– directly using the definition: 30 minutes
– FFT: 5 seconds
DFT algorithms
• Sequential algorithms
  – direct
  – Radix-2
    – DIT - decimation in time
    – DIF - decimation in frequency
  – Radix-4, the radix-2ⁿ class (Radix-8, Radix-16)
  – for any N (e.g. the Bluestein algorithm)
  – others

• Parallel algorithms
  – without inter-processor permutations
  – with inter-processor permutations
  – others
Direct method

  X[k] = (1/N) Σ_{n=0}^{N−1} x[n] · e^{−j(2π/N)kn} = (1/N) Σ_{n=0}^{N−1} x[n] · W_N^{kn},     W_N = e^{−j2π/N}

Matrix form of the equation given above:

  [X[0]  ]        [1  1          1            ⋯  1              ]   [x[0]  ]
  [X[1]  ]        [1  W_N¹       W_N²         ⋯  W_N^{N−1}      ]   [x[1]  ]
  [X[2]  ] = (1/N)[1  W_N²       W_N⁴         ⋯  W_N^{2(N−1)}   ] · [x[2]  ]
  [⋮     ]        [⋮                              ⋮             ]   [⋮     ]
  [X[N−1]]        [1  W_N^{N−1}  W_N^{2(N−1)} ⋯  W_N^{(N−1)²}   ]   [x[N−1]]

Assuming that the signal is complex, this requires N² complex multiplications and N(N − 1) complex additions.

For N = 1024 there are:

1 048 576 complex multiplications and 1 047 552 complex additions
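The matrix form above translates directly into code; a minimal NumPy sketch (the function name `dft_direct` is illustrative), keeping the slide's 1/N normalization, which np.fft.fft does not include:

```python
import numpy as np

def dft_direct(x):
    """Direct O(N^2) DFT: X = (1/N) W x, with W[k, n] = W_N^{kn},
    W_N = e^{-j 2 pi / N}."""
    N = len(x)
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    W = np.exp(-2j * np.pi * k * n / N)
    return (W @ x) / N

x = np.exp(2j * np.pi * 3 * np.arange(16) / 16)   # a single complex tone in bin 3
X = dft_direct(x)
```

A pure tone at bin 3 produces X[3] = 1 and (numerically) zero elsewhere, which also serves as a sanity check against np.fft.fft.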


Radix-2 DIT (Cooley-Tukey),  N = 2ⁿ,  k = 0, 1, 2, …, N−1

  X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn} = Σ_{n=0}^{N/2−1} x[2n] e^{−j(2π/N)k(2n)} + Σ_{n=0}^{N/2−1} x[2n+1] e^{−j(2π/N)k(2n+1)}

It is possible to extract two subseries of the input data: with even and with odd indexes.

  = Σ_{n=0}^{N/2−1} x[2n] e^{−j(2π/N)k(2n)} + e^{−j(2π/N)k} Σ_{n=0}^{N/2−1} x[2n+1] e^{−j(2π/N)k(2n)}

(the factor e^{−j(2π/N)k} does not depend on n)

Let's denote e^{−j2π/N} (for simplicity) as W_N:

  = Σ_{n=0}^{N/2−1} x[2n] W_N^{2nk} + W_N^k Σ_{n=0}^{N/2−1} x[2n+1] W_N^{2nk}

Because  W_N² = e^{−j·2·2π/N} = e^{−j·2π/(N/2)} = W_{N/2}:

  X[k] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{nk} + W_N^k Σ_{n=0}^{N/2−1} x[2n+1] W_{N/2}^{nk}

W_{N/2}^{nk} is present in both sums → it can be calculated only once → decreased computation time.
The calculation of X[k] can be split: independently for k = 0, 1, 2, …, N/2 − 1 and for k = N/2, N/2 + 1, N/2 + 2, …, N − 1. For the second half:

  X[k + N/2] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{n(k+N/2)} + W_N^{k+N/2} Σ_{n=0}^{N/2−1} x[2n+1] W_{N/2}^{n(k+N/2)},   k = 0, 1, 2, …, N/2 − 1

  W_{N/2}^{n(k+N/2)} = W_{N/2}^{nk} · W_{N/2}^{n·N/2} = W_{N/2}^{nk} e^{−j(2π/(N/2))·n·N/2} = W_{N/2}^{nk} e^{−j2πn} = W_{N/2}^{nk} [cos(2πn) − j sin(2πn)] = W_{N/2}^{nk}

  W_N^{k+N/2} = W_N^k · W_N^{N/2} = W_N^k e^{−j(2π/N)·(N/2)} = W_N^k e^{−jπ} = W_N^k [cos(π) − j sin(π)] = −W_N^k

Hence:

  X[k + N/2] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{nk} − W_N^k Σ_{n=0}^{N/2−1} x[2n+1] W_{N/2}^{nk}
N N
Therefore: N2 −1 2 −1
N
−1 −1
( XN[k%] = ∑ x[2n] ⋅ WN/2 2 nk
nk + WN
k 2
k ∑ x[2n + 1] ⋅ WN/2
nk
nk
X &k + # = ∑ n=0
x (2 n )⋅ W N 2 − W ⋅ ∑
N n=0 x (2 n + 1)⋅ W N 2
N ' 2$ $ n =0
!!#!! " $
n =0
!! !#!!! "
k = 0,1,2.…, − 1
2 A[k] B[k]
NN2 −1 NN
2 −1

X &[k + 2 ]# = ∑x(2n )⋅ WNN/22 − WNN ⋅∑ x(2n + 1)⋅ WN/2


N −1
nk k 2
−1
nk
X( k + N % = x[2n] ⋅ Wnk
2
− W k x[2n + 1] ⋅ W nk

' 2$ n=0
∑ n=0
∑ N 2
$
n =0
!!#!!
" $
n =0
!!
!#!!!
"
A[k] B[k]

X[k] = A[k] + WNk ⋅ B[k] N


k = 0,1,2.…, − 1
[ 2]
N 2
X k+ = A[k] − WNk ⋅ B[k]

In order to calculate the second half of the Fourier coefficients, the results already computed for the first half can be reused. Thus, to calculate an N-point DFT it is sufficient to calculate two N/2-point DFTs, A[k] and B[k] for k = 0, 1, 2, …, N/2 − 1, and then combine their results:

  X[k]       = A[k] + W_N^k·B[k]
  X[k + N/2] = A[k] − W_N^k·B[k],    k = 0, 1, 2, …, N/2 − 1
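The even/odd decomposition can be checked numerically. A minimal sketch (the function names `dft` and `dft_via_split` are illustrative, not from the lecture):

```python
import numpy as np

def dft(x):
    """Direct N-point DFT: X[k] = sum_n x[n] * W_N^(nk), W_N = exp(-j*2*pi/N)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

def dft_via_split(x):
    """One DIT step: combine the N/2-point DFTs of even and odd samples."""
    N = len(x)
    A = dft(x[0::2])                       # A[k], DFT of even-indexed samples
    B = dft(x[1::2])                       # B[k], DFT of odd-indexed samples
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)        # twiddle factors W_N^k
    return np.concatenate([A + W * B,      # X[k]       = A[k] + W_N^k B[k]
                           A - W * B])     # X[k + N/2] = A[k] - W_N^k B[k]

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
```

Both paths give the same spectrum as `np.fft.fft`, which uses the same e^(−j2π/N) kernel.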
For example, let N = 8:

  A[k] = Σ_{n=0}^{3} x[2n]·W_4^{nk},      k = 0, 1, 2, 3
  B[k] = Σ_{n=0}^{3} x[2n+1]·W_4^{nk},    k = 0, 1, 2, 3

  X[k]     = A[k] + W_8^k·B[k],    k = 0, 1, 2, 3
  X[k + 4] = A[k] − W_8^k·B[k],    k = 0, 1, 2, 3

[Diagram: the even samples x[0], x[2], x[4], x[6] feed a 4-point DFT producing A[0]…A[3], and the odd samples x[1], x[3], x[5], x[7] feed a 4-point DFT producing B[0]…B[3]; butterflies with twiddle factors W_8^0…W_8^3 combine them into X[0]…X[7].]
The N/2-point DFT can in turn be split into two DFTs of N/4 points:

  A[k] = Σ_{n=0}^{N/2−1} x[2n]·W_{N/2}^{nk}
       = Σ_{n=0}^{N/4−1} x[2(2n)]·W_{N/2}^{2nk} + Σ_{n=0}^{N/4−1} x[2(2n+1)]·W_{N/2}^{(2n+1)k}

For N = 8 this reads:

  A[k] = Σ_{n=0}^{1} x[4n]·W_4^{2nk} + W_4^k · Σ_{n=0}^{1} x[4n+2]·W_4^{2nk},    k = 0, 1, 2, 3

Because: W_4^{2nk} = e^(−j·(2π/4)·2nk) = e^(−j·(2π/2)·nk) = W_2^{nk}

Hence:  A[k] = Σ_{n=0}^{1} x[4n]·W_2^{nk} + W_4^k · Σ_{n=0}^{1} x[4n+2]·W_2^{nk}
Similarly, as in the case of calculating X[k], A[k] can be split into two halves: A[k] for k = 0, 1, 2, …, N/4 − 1 and A[k + N/4] for k = 0, 1, 2, …, N/4 − 1 (i.e. k = N/4, N/4 + 1, …, N/2 − 1); for N = 8: k = 0, 1.

  A[k + N/4] = Σ_{n=0}^{N/4−1} x[4n]·W_{N/4}^{n(k+N/4)} + W_{N/2}^{k+N/4} · Σ_{n=0}^{N/4−1} x[4n+2]·W_{N/4}^{n(k+N/4)}

The twiddle factors simplify as before:

  W_{N/4}^{n(k+N/4)} = W_{N/4}^{nk} · W_{N/4}^{n·N/4} = W_{N/4}^{nk} · e^(−j·(2π/(N/4))·n·N/4) = W_{N/4}^{nk} · e^(−j2πn)
                     = W_{N/4}^{nk} · [cos(2πn) − j·sin(2πn)] = W_{N/4}^{nk}

  W_{N/2}^{k+N/4} = W_{N/2}^k · W_{N/2}^{N/4} = W_{N/2}^k · e^(−j·(2π/(N/2))·(N/4)) = W_{N/2}^k · e^(−jπ)
                  = W_{N/2}^k · [cos(π) − j·sin(π)] = −W_{N/2}^k

Eventually:

  A[k + N/4] = Σ_{n=0}^{N/4−1} x[4n]·W_{N/4}^{nk} − W_{N/2}^k · Σ_{n=0}^{N/4−1} x[4n+2]·W_{N/4}^{nk}

and for N = 8:

  A[k + 2] = Σ_{n=0}^{1} x[4n]·W_2^{nk} − W_4^k · Σ_{n=0}^{1} x[4n+2]·W_2^{nk},    k = 0, 1
Therefore, denoting

  C[k] = Σ_{n=0}^{N/4−1} x[4n]·W_{N/4}^{nk}
  D[k] = Σ_{n=0}^{N/4−1} x[4n+2]·W_{N/4}^{nk}

we obtain, for k = 0, 1, 2, …, N/4 − 1:

  A[k]       = C[k] + W_{N/2}^k·D[k]
  A[k + N/4] = C[k] − W_{N/2}^k·D[k]

For N = 8: A[k] = C[k] + W_4^k·D[k], A[k + 2] = C[k] − W_4^k·D[k], k = 0, 1.
Similarly, B[k] can be calculated following the same pattern, yielding components E[k] and F[k]. For N = 8:

  C[k] = Σ_{n=0}^{1} x[4n]·W_2^{nk},      k = 0, 1      A[k] = C[k] + W_4^k·D[k]
  D[k] = Σ_{n=0}^{1} x[4n+2]·W_2^{nk},    k = 0, 1      A[k + 2] = C[k] − W_4^k·D[k]

  E[k] = Σ_{n=0}^{1} x[4n+1]·W_2^{nk},    k = 0, 1      B[k] = E[k] + W_4^k·F[k]
  F[k] = Σ_{n=0}^{1} x[4n+3]·W_2^{nk},    k = 0, 1      B[k + 2] = E[k] − W_4^k·F[k]

[Diagram: 2-point DFT blocks produce C, D, E, F; butterflies with twiddles W_4^0, W_4^1 combine C with D into A[0]…A[3] and E with F into B[0]…B[3].]
[Diagram: the four 2-point butterflies of the first stage: (x[0], x[4]) → C, (x[2], x[6]) → D, (x[1], x[5]) → E, (x[3], x[7]) → F.]

The single butterfly for N = 2: because W_2^0 = e^(−j·(2π/2)·0) = 1, it reduces to one addition and one subtraction of the pair x[k], x[k + N/2]:

  x′[k]       = x[k] + x[k + N/2]
  x′[k + N/2] = x[k] − x[k + N/2]
8-point FFT (Radix-2 DIT)

[Diagram: the full 8-point signal-flow graph in three stages. Stage I: four 2-point butterflies on the bit-reversed input pairs (x[0], x[4]), (x[2], x[6]), (x[1], x[5]), (x[3], x[7]) produce C[k], D[k], E[k], F[k]. Stage II: butterflies with twiddles W_4^0, W_4^1 combine them into A[0]…A[3] and B[0]…B[3]. Stage III: butterflies with twiddles W_8^0…W_8^3 combine A and B into X[0]…X[7].]
The general case (butterfly at stage m):

  X_m[p] = X_{m−1}[p] + W_N^k · X_{m−1}[q]
  X_m[q] = X_{m−1}[p] − W_N^k · X_{m−1}[q]

where X_{m−1}[·] are results computed in the (m − 1)-th stage and X_m[·] are the results of the m-th stage.

Each butterfly requires one complex multiplication and two complex additions.

At every stage of the Radix-2 algorithm N/2 butterflies are calculated.
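Putting the stages together, the whole radix-2 DIT recursion fits in a few lines. A sketch (the function name `fft_dit` is illustrative):

```python
import numpy as np

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x                              # 1-point DFT is the sample itself
    A = fft_dit(x[0::2])                      # N/2-point DFT of even samples
    B = fft_dit(x[1::2])                      # N/2-point DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddles W_N^k
    # Butterflies: X[k] = A[k] + W^k B[k],  X[k+N/2] = A[k] - W^k B[k]
    return np.concatenate([A + W * B, A - W * B])

x = np.arange(8, dtype=float)
X = fft_dit(x)
```

The recursion depth is log₂N, with N/2 butterflies per stage, matching the operation counts discussed below.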



  W_N^k = e^(−j·(2π/N)·k)  – the rotation (twiddle) coefficient

[Diagram: the points W_8^0…W_8^7 on the unit circle in the complex plane; coefficients of different sizes coincide, e.g. W_8^0 = W_4^0 = W_2^0 = 1, W_8^2 = W_4^1, W_8^4 = W_2^1, W_8^6 = W_4^3.]

Taking into account the redundancy among the rotation coefficients, further simplification is possible, resulting in fewer mathematical operations required.
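The redundancies visible on the unit circle can be checked directly, including the half-period symmetry W_N^(k+N/2) = −W_N^k that every butterfly exploits. A small sketch (the helper name `W` is illustrative):

```python
import numpy as np

def W(N, k):
    """Rotation (twiddle) coefficient W_N^k = exp(-j*2*pi*k/N)."""
    return np.exp(-2j * np.pi * k / N)

# Coefficients coincide across sizes: W_8^2 = W_4^1 and W_8^4 = W_2^1
same_24 = np.isclose(W(8, 2), W(4, 1))
same_84 = np.isclose(W(8, 4), W(2, 1))

# Half-period symmetry used by the butterfly: W_8^(k+4) = -W_8^k
sym = all(np.isclose(W(8, k + 4), -W(8, k)) for k in range(4))
```

Only N/2 distinct twiddles are therefore needed per stage; the other half are their negatives.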
Radix-2 – DIF (Decimation In Frequency)

  X[k] = Σ_{n=0}^{N−1} x[n]·W_N^{kn},    k = 0, 1, 2, …, N − 1

Here the frequency points X[k] are decimated rather than the x[n] points, the even- and the odd-indexed ones independently:

  X[2k]   = Σ_{n=0}^{N−1} x[n]·W_N^{2kn}
  X[2k+1] = Σ_{n=0}^{N−1} x[n]·W_N^{(2k+1)n},    k = 0, 1, 2, …, N/2 − 1

Each sum is split at n = N/2, and the substitution n → n + N/2 is applied to the second half:

  X[2k] = Σ_{n=0}^{N/2−1} x[n]·W_N^{2kn} + Σ_{n=0}^{N/2−1} x[n + N/2]·W_N^{2k(n+N/2)}

Because W_N^{2k(n+N/2)} = W_N^{2kn}·W_N^{kN} = W_N^{2kn} = W_{N/2}^{kn}:

  X[2k] = Σ_{n=0}^{N/2−1} [x[n] + x[n + N/2]]·W_{N/2}^{kn}

For the odd points:

  X[2k+1] = Σ_{n=0}^{N/2−1} x[n]·W_N^{(2k+1)n} + W_N^{(2k+1)(N/2)} · Σ_{n=0}^{N/2−1} x[n + N/2]·W_N^{(2k+1)n}

Because W_N^{(2k+1)(N/2)} = W_N^{kN}·W_N^{N/2} = 1·(−1) = −1:

  X[2k+1] = Σ_{n=0}^{N/2−1} [x[n] − x[n + N/2]]·W_N^{(2k+1)n}
          = Σ_{n=0}^{N/2−1} {[x[n] − x[n + N/2]]·W_N^n}·W_{N/2}^{kn}

The DFT of the original signal can thus be replaced with two N/2-point DFTs of the components:

  x1[n] = x[n] + x[n + N/2]
  x2[n] = [x[n] − x[n + N/2]]·W_N^n
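A numeric check of the DIF decomposition, assuming the forward kernel e^(−j2πkn/N) used above (the function name `dif_step` is illustrative):

```python
import numpy as np

def dif_step(x):
    """One DIF stage: return the two N/2-point sequences x1, x2."""
    N = len(x)
    half, rest = x[:N // 2], x[N // 2:]
    x1 = half + rest                      # its N/2-point DFT gives X[2k]
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # W_N^n
    x2 = (half - rest) * W                # its N/2-point DFT gives X[2k+1]
    return x1, x2

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
x1, x2 = dif_step(x)
X = np.fft.fft(x)
```

The N/2-point DFTs of `x1` and `x2` reproduce the even- and odd-indexed frequency points of the full spectrum.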


8-point FFT (Radix-2 DIF) [Diagram: the mirror image of the DIT graph; the subtract-and-multiply butterflies come first, the outputs appear in bit-reversed order.]
Radix-2 computation gain

At every stage of the FFT algorithm, N/2 complex multiplications and N complex additions are required. Each halving of the transform length adds one stage, so for N = 2^B samples there are always B = log₂N stages in the FFT.

The FFT algorithm therefore requires (N/2)·log₂N complex multiplications and N·log₂N complex additions.
                 Multiplications                         Additions
   N     N-point DFT  Two N/2-pt DFTs  Gain     N-point DFT  Two N/2-pt DFTs  Gain
    8          64            36       43.75%          56            32       42.86%
   16         256           136       46.88%         240           128       46.67%
   32        1024           528       48.44%         992           512       48.39%
   64        4096          2080       49.22%        4032          2048       49.21%
  128       16384          8256       49.61%       16256          8192       49.61%
  256       65536         32896       49.80%       65280         32768       49.80%
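The table entries follow from simple operation counts: a direct N-point DFT needs N² complex multiplications and N(N − 1) additions, while two N/2-point DFTs plus the combining butterflies need 2(N/2)² + N/2 multiplications and 2·(N/2)(N/2 − 1) + N additions. A sketch (the function name `split_gain` is illustrative):

```python
def split_gain(N):
    """Operation counts: direct DFT vs. two N/2-point DFTs + N/2 butterflies."""
    mul_direct = N * N                            # direct DFT multiplications
    add_direct = N * (N - 1)                      # direct DFT additions
    mul_split = 2 * (N // 2) ** 2 + N // 2        # + N/2 twiddle multiplications
    add_split = 2 * (N // 2) * (N // 2 - 1) + N   # + N butterfly additions
    gain_mul = 1 - mul_split / mul_direct
    gain_add = 1 - add_split / add_direct
    return mul_split, add_split, gain_mul, gain_add

row8 = split_gain(8)     # one table row
row64 = split_gain(64)
```

Repeating the split down to 2-point DFTs is what turns the O(N²) cost into O(N·log₂N).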
Various FFT algorithms – comparison

Assumption: every multiplication and addition takes 1 µs.

  Problem      DFT         Radix-2     Radix-4           Radix-8            Radix-16           Split-Radix
  length N                 (N = 2^s)   (N = 2^(2m))      (N = 2^(3m))       (N = 2^(4m))       (N = 2^(2m))
  (N = 2^s)    8N²         5N·log₂N    (4 + 1/4)N·log₂N  (4 + 1/12)N·log₂N  (4 + 1/32)N·log₂N  4N·log₂N
     64        0.033 s     0.0019 s    0.0016 s          0.0016 s           —                  0.0015 s
    128        0.131 s     0.0045 s    —                 —                  —                  —
    256        0.524 s     0.0102 s    0.0087 s          —                  0.0083 s           0.0082 s
    512        2.097 s     0.0230 s    —                 0.0188 s           —                  —
   1024        8.389 s     0.0512 s    0.0435 s          —                  —                  0.0410 s
   2048       33.554 s     0.1126 s    —                 —                  —                  —
   4096        2.233 min   0.2458 s    0.2089 s          0.2007 s           0.1981 s           0.1966 s
   8192        8.933 min   0.5325 s    —                 —                  —                  —
  16384       35.783 min   1.1469 s    0.9748 s          —                  —                  0.9175 s

E. Chu, A. George, Inside the FFT Black Box, CRC Press LLC, 2000
Discrete Cosine Transform

DCT definition:

  DCT:   X[k] = c[k] · Σ_{n=0}^{N−1} x[n]·cos(π(2n+1)k / 2N),    k = 0, 1, 2, …, N − 1

  with   c[0] = √(1/N),    c[k] = √(2/N)  for k = 1, 2, …, N − 1

  IDCT:  x[n] = Σ_{k=0}^{N−1} c[k]·X[k]·cos(π(2n+1)k / 2N),    n = 0, 1, 2, …, N − 1
The DCT can be calculated by separating the odd and even samples. With the reordered sequence x̃[n] = x[2n], x̃[N − n − 1] = x[2n + 1] for n = 0, 1, …, N/2 − 1:

  X[k] = c[k] · { Σ_{n=0}^{N/2−1} x[2n]·cos(π(4n+1)k / 2N) + Σ_{n=0}^{N/2−1} x[2n+1]·cos(π(4n+3)k / 2N) }
       = c[k] · { Σ_{n=0}^{N/2−1} x̃[n]·cos(π(4n+1)k / 2N) + Σ_{n=0}^{N/2−1} x̃[N−n−1]·cos(π(4n+3)k / 2N) }

Changing the summation limits and joining the components of the sum:

  X[k] = c[k] · Σ_{n=0}^{N−1} x̃[n]·cos(π(4n+1)k / 2N) = Re[ c[k]·e^(−jπk/2N) · Σ_{n=0}^{N−1} x̃[n]·e^(−j2πkn/N) ]

Hence:  X[k] = Re[ c[k]·e^(−jπk/2N) · DFT_N(x̃[n]) ]

Similarly, it is possible to calculate the IDCT.
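The reordering trick can be verified against the defining sum. A sketch (the names `dct_direct` and `dct_via_fft` are illustrative):

```python
import numpy as np

def dct_direct(x):
    """DCT by the defining sum, with c[0]=sqrt(1/N), c[k]=sqrt(2/N)."""
    N = len(x)
    n, k = np.arange(N), np.arange(N)[:, None]
    c = np.full(N, np.sqrt(2.0 / N)); c[0] = np.sqrt(1.0 / N)
    return c * np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)), axis=1)

def dct_via_fft(x):
    """DCT via one N-point FFT of the reordered sequence x~."""
    N = len(x)
    # x~[n] = x[2n], x~[N-1-n] = x[2n+1]: evens, then odds in reverse order
    xt = np.concatenate([x[0::2], x[1::2][::-1]])
    c = np.full(N, np.sqrt(2.0 / N)); c[0] = np.sqrt(1.0 / N)
    k = np.arange(N)
    return np.real(c * np.exp(-1j * np.pi * k / (2 * N)) * np.fft.fft(xt))

x = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5])
```

The FFT route replaces the O(N²) cosine sum with one length-N FFT plus a phase correction.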


Calculation of the inverse discrete Fourier transform using the FFT algorithm

  Forward DFT:   X[k] = Σ_{n=0}^{N−1} x[n]·e^(−jk(2π/N)n)

  Inverse DFT:   x[n] = (1/N) · Σ_{k=0}^{N−1} X[k]·e^(jk(2π/N)n)

Using (a + b)* = a* + b* and (a·b)* = a*·b*, conjugating the inverse DFT gives

  x*[n] = (1/N) · ( Σ_{k=0}^{N−1} X[k]·e^(jk(2π/N)n) )* = (1/N) · Σ_{k=0}^{N−1} X*[k]·e^(−jk(2π/N)n)

i.e. x*[n] is 1/N times the forward DFT of the conjugated series X*[k].
1. Calculate the conjugated series X*[k].
2. Compute the forward transform of the conjugated series (using the FFT).
3. Multiply every coefficient by 1/N.
4. Conjugate the results obtained in step 3, giving x[n].
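The four steps collapse into one line of code. A sketch (the name `ifft_via_fft` is illustrative):

```python
import numpy as np

def ifft_via_fft(X):
    """Inverse DFT using only a forward FFT: x = conj(FFT(conj(X))) / N."""
    # Steps: 1) conjugate X, 2) forward FFT, 3) scale by 1/N, 4) conjugate
    return np.conj(np.fft.fft(np.conj(X))) / len(X)

x = np.array([1.0, -2.0, 3.5, 0.0, 2.0, 1.0, -1.0, 4.0])
X = np.fft.fft(x)
x_rec = ifft_via_fft(X)
```

This is convenient on hardware or libraries that provide only a forward transform.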
Other FFT applications

1. Calculation of a product of two or more polynomials,
2. Calculation of convolution,
3. Calculation of products of Toeplitz matrices (matrices of constant values on each diagonal),
4. Calculation of the Discrete Hartley Transform,
5. Approximation with Chebyshev polynomials,
6. Solving of differential equations.
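Application 2 is perhaps the most common: multiplying the FFTs of two zero-padded sequences yields their linear convolution (which also covers polynomial products, application 1). A sketch (the name `conv_via_fft` is illustrative):

```python
import numpy as np

def conv_via_fft(a, b):
    """Linear convolution through the FFT, zero-padded to the full output length."""
    L = len(a) + len(b) - 1                 # length of the linear convolution
    A = np.fft.fft(a, L)                    # fft(·, L) zero-pads to length L
    B = np.fft.fft(b, L)
    return np.real(np.fft.ifft(A * B))      # inputs are real, so take the real part

# (1 + 2z + 3z^2)(1 + z) = 1 + 3z + 5z^2 + 3z^3
y = conv_via_fft([1.0, 2.0, 3.0], [1.0, 1.0])
```

Without the zero-padding the product of spectra would give the circular, not the linear, convolution.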

© Andrzej Kotyra
Filters – classification

(LP – Low Pass)

  1 − δpass ≤ |HLP(jω)| ≤ 1 + δpass,    |ω| ≤ ωpass
  0 ≤ |HLP(jω)| ≤ 1 + δpass,            ωpass < |ω| < ωstop
  0 ≤ |HLP(jω)| ≤ δstop,                ωstop ≤ |ω|

(HP – High Pass)

  0 ≤ |HHP(jω)| ≤ δstop,                |ω| ≤ ωstop
  0 ≤ |HHP(jω)| ≤ 1 + δpass,            ωstop < |ω| < ωpass
  1 − δpass ≤ |HHP(jω)| ≤ 1 + δpass,    ωpass ≤ |ω|
(BP – Band Pass)

  1 − δpass ≤ |HBP(jω)| ≤ 1 + δpass,    ωpass1 ≤ |ω| ≤ ωpass2
  0 ≤ |HBP(jω)| ≤ δstop,                |ω| ≤ ωstop1 or |ω| ≥ ωstop2
  0 ≤ |HBP(jω)| ≤ 1 + δpass,            ωstop1 < |ω| < ωpass1 or ωpass2 < |ω| < ωstop2
(BS – Band Stop)

  1 − δpass ≤ |HBS(jω)| ≤ 1 + δpass,    |ω| ≤ ωpass1 or |ω| ≥ ωpass2
  0 ≤ |HBS(jω)| ≤ δstop,                ωstop1 ≤ |ω| ≤ ωstop2
  0 ≤ |HBS(jω)| ≤ 1 + δpass,            ωpass1 < |ω| < ωstop1 or ωstop2 < |ω| < ωpass2
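The four templates differ only in which bands carry the pass and stop bounds. A small checker for the low-pass template, evaluated on a frequency grid; the function names and the example filter (a 5-tap moving average, a crude low-pass) are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def freq_mag(b, w):
    """|H(e^jw)| of an FIR filter with coefficients b: H = sum_l b[l] e^(-jwl)."""
    l = np.arange(len(b))
    return np.abs(np.exp(-1j * np.outer(w, l)) @ b)

def meets_lp_spec(b, w_pass, w_stop, d_pass, d_stop, n_grid=4096):
    """Check the low-pass template:
       1-d_pass <= |H| <= 1+d_pass  for |w| <= w_pass
       |H| <= 1+d_pass              for w_pass < |w| < w_stop
       |H| <= d_stop                for |w| >= w_stop"""
    w = np.linspace(0.0, np.pi, n_grid)
    H = Hm = freq_mag(b, w)
    ok_pass = np.all((Hm[w <= w_pass] >= 1 - d_pass) & (Hm[w <= w_pass] <= 1 + d_pass))
    ok_trans = np.all(H[(w > w_pass) & (w < w_stop)] <= 1 + d_pass)
    ok_stop = np.all(H[w >= w_stop] <= d_stop)
    return bool(ok_pass and ok_trans and ok_stop)

b = np.ones(5) / 5   # 5-tap moving average
ok = meets_lp_spec(b, w_pass=0.1, w_stop=1.5, d_pass=0.1, d_stop=0.3)
```

Tightening δstop below the filter's largest stopband sidelobe makes the check fail, as expected.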
Every filter can have many different implementations:

Butterworth – no ripples in either passband or stopband; flat and smooth frequency plot.

Chebyshev type I – ripples in passband, no ripples in stopband; nonlinear phase-frequency plot; steepness of slopes better than for Butterworth filters.

Chebyshev type II – ripples in stopband, no ripples in passband; nonlinear phase-frequency plot; steepness of slopes better than for Butterworth filters.

Elliptic – ripples both in passband and stopband; the best possible steepness of the slope; highly nonlinear phase-frequency plot.

Bessel – no ripples in either passband or stopband; low steepness; highly linear phase-frequency plot; low-pass only.

Digital filters

Digital filter – an algorithm (software, hardware) that transforms (converts) an input signal x[n] into an output signal y[n] that possesses the desired properties (depending on the specific application), e.g. noise reduction, compression, changing the frequency/phase properties of the signal.

Digital filters – advantages compared to analog filters:

• Frequency plot parameters: attenuation levels considerably better than for analog counterparts (of the same order), greater steepness, and (possibly) no ripples in the transition band.
• A filter with a linear phase plot in the passband is possible, giving no phase distortions in the usable band.
• Filter parameters are constant in time; no electronic elements (resistors, capacitors, etc.) are required, so the problem of element ageing does not exist.
• Filter coefficients as well as the filter structure can be changed in "real time"; no circuit switching is needed.
• Adaptive filters are possible: their frequency/phase plots can be changed according to defined input-signal properties.
Digital filters – drawbacks:

• The passband is limited by the Nyquist frequency.
• Narrower range of signal amplitude (e.g. the r.m.s. noise of a 16-bit ADC is 10 µV, while for a typical analog filter (operational amplifier) it is 2 µV). The result of a multiplication stored in a register is truncated, which is another source of distortion/noise.
The following digital filters will be regarded as LTI systems:

  y[n] = Σ_{l=0}^{N} b_l·x[n−l] − Σ_{k=1}^{M} a_k·y[n−k]

  y[n] + Σ_{k=1}^{M} a_k·y[n−k] = Σ_{l=0}^{N} b_l·x[n−l]

or, with a_0 = 1:

  Σ_{k=0}^{M} a_k·y[n−k] = Σ_{l=0}^{N} b_l·x[n−l]

Applying the Z transform:

  Σ_{k=0}^{M} a_k·z^{−k} · Y(z) = Σ_{l=0}^{N} b_l·z^{−l} · X(z)

Filter transmittance:

  H(z) = Y(z)/X(z) = ( Σ_{l=0}^{N} b_l·z^{−l} ) / ( 1 + Σ_{k=1}^{M} a_k·z^{−k} )
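The difference equation translates directly into code. A sketch of a straightforward implementation (the name `df_filter` is illustrative; `a` holds [a1, …, aM], matching the convention above):

```python
import numpy as np

def df_filter(b, a, x):
    """y[n] = sum_l b[l] x[n-l] - sum_k a[k] y[n-k], zero initial conditions."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[l] * x[n - l] for l in range(len(b)) if n - l >= 0)
        acc -= sum(a[k - 1] * y[n - k] for k in range(1, len(a) + 1) if n - k >= 0)
        y[n] = acc
    return y

# Impulse response of y[n] = x[n] + 0.5*y[n-1] (a1 = -0.5): h[n] = 0.5^n
x = np.zeros(6); x[0] = 1.0
h = df_filter([1.0], [-0.5], x)
```

The feedback term makes the impulse response infinite in duration, which is the IIR behaviour discussed below.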
Digital filters – classification: recursive D.F. and non-recursive D.F.

Recursive D.F. – properties:

• At least one of the transmittance denominator coefficients a_k is nonzero.
• The output signal of a recursive filter depends on the actual input signal as well as on past input samples and past output samples.
• Usually, the impulse response of a recursive filter has an infinite duration (except when all poles are compensated by zeros) → infinite impulse response (IIR) filters.
• A causal IIR filter is stable if and only if its transmittance poles are placed inside the unit circle.
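The pole condition can be checked numerically from the denominator coefficients. A sketch (the name `is_stable` is illustrative):

```python
import numpy as np

def is_stable(a):
    """Causal IIR stability check for denominator 1 + a1 z^-1 + ... + aM z^-M.

    `a` holds [a1, ..., aM]; in z the denominator is z^M + a1 z^(M-1) + ... + aM,
    so its roots are the poles of H(z).
    """
    poles = np.roots(np.concatenate([[1.0], a]))
    return bool(np.all(np.abs(poles) < 1.0))

stable = is_stable([-0.5])     # 1 - 0.5 z^-1 -> pole at z = 0.5, inside the circle
unstable = is_stable([-1.5])   # 1 - 1.5 z^-1 -> pole at z = 1.5, outside
```

This matches the feedback intuition: y[n] = x[n] + 1.5·y[n−1] grows without bound for a bounded input.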

Properties of non-recursive filters:

• All the coefficients of the transmittance denominator a_k are equal to zero:

    H(z) = Y(z)/X(z) = Σ_{l=0}^{N} b_l·z^{−l}

• The output signal of a non-recursive filter depends only on samples of the input signal.
• The impulse response of a non-recursive filter is always finite in time → finite impulse response (FIR) filters are always stable.
• Not every finite-impulse-response filter is non-recursive.
Digital filter realization → how to convert (transform) a given input signal into the output signal → e.g. using the difference equation.

Filter structure description: block diagram, signal graph, matrix form.

Basic elements of a block diagram:

  adder:           y = x1 + x2
  multiplier:      y = a·x
  (time) delay:    y[n] = x[n−1]   (block labelled z^−1)
Example 12. Draw a block diagram and a graph for a filter described by the following difference equation:

  y[n] = x[n] + b·x[n−1] + a·y[n−1]

[Block diagram: x[n] goes directly to an adder producing y[n]; x[n] also passes through a delay z^−1 and gain b to the adder; y[n] is fed back through a delay z^−1 and gain a to the same adder.]

[Graph: nodes x[n] and y[n]; a unit branch from x[n] to y[n]; a branch z^−1 followed by weight b from x[n] to y[n]; a feedback branch z^−1 followed by weight a around y[n].]
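The structure in Example 12 can be simulated sample by sample, holding one delayed input and one delayed output, exactly as the two z^−1 blocks suggest (an illustrative sketch):

```python
import numpy as np

def example12(x, a, b):
    """y[n] = x[n] + b*x[n-1] + a*y[n-1], zero initial conditions."""
    y = np.zeros(len(x))
    x_prev = 0.0   # contents of the input delay block z^-1
    y_prev = 0.0   # contents of the feedback delay block z^-1
    for n, xn in enumerate(x):
        y[n] = xn + b * x_prev + a * y_prev
        x_prev, y_prev = xn, y[n]
    return y

# Impulse response: h[0] = 1, then h[n] = (a + b) * a^(n-1) for n >= 1
x = np.zeros(5); x[0] = 1.0
h = example12(x, a=0.5, b=2.0)
```

The single feedback coefficient a makes this a first-order recursive (IIR) filter: the response decays geometrically but never terminates.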
