DSP - 24 10 2022
© Andrzej Kotyra
Outline:
1. Signals – classification. LTI – Linear Time Invariant Systems
5. Short-Time Fourier Transform. Time-frequency resolution. Heisenberg uncertainty principle
8. Digital Filters.
Fundamental terms
Signals are classified as deterministic or random; random signals as stationary or non-stationary; deterministic signals as periodic or non-periodic.
Example signals (plots versus time in seconds):
sin(2π5t); sin(2π5t) + sin(2π10t); sin(2π5t) + 0.2 sin(2π25t); sin(2π5t) + sin(2π(π)t)
Amplitude modulation; frequency modulation
Parameters of deterministic signals
Mean value:
continuous signal x(t):  x̄ = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} x(t) dt        discrete signal x[n]:  x̄ = (1/(n₂ − n₁ + 1)) ∑_{n=n₁}^{n₂} x[n]
Over the whole axis:
x̄ = lim_{T→∞} (1/2T) ∫_{−T}^{+T} x(t) dt        x̄ = lim_{N→∞} (1/(2N + 1)) ∑_{n=−N}^{+N} x[n]
For a periodic signal (T – period, N – period):
x_T = (1/T) ∫_{t₀}^{t₀+T} x(t) dt        x_N = (1/N) ∑_{n=n₀}^{n₀+N−1} x[n]
Parameters of deterministic signals
Energy:
E = ∫_{t₁}^{t₂} |x(t)|² dt        E = ∑_{n=n₁}^{n₂} |x[n]|²
E_∞ = lim_{T→∞} ∫_{−T}^{+T} |x(t)|² dt        E_∞ = lim_{N→∞} ∑_{n=−N}^{+N} |x[n]|²
E_T = ∫_{t₀}^{t₀+T} |x(t)|² dt        E_N = ∑_{n=n₀}^{n₀+N−1} |x[n]|²
Parameters of deterministic signals
Power:
P = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} |x(t)|² dt        P = (1/(n₂ − n₁ + 1)) ∑_{n=n₁}^{n₂} |x[n]|²
Over the whole axis:
P_∞ = lim_{T→∞} (1/2T) ∫_{−T}^{+T} |x(t)|² dt        P_∞ = lim_{N→∞} (1/(2N + 1)) ∑_{n=−N}^{+N} |x[n]|²
Logarithmic (decibel) scale:
L_dB = 10 log₁₀(P₁/P₀),   P₀ – reference power
L_dB = 20 log₁₀(A₁/A₀),   A₀ – reference amplitude
Transformation of the independent variable
Time shift: x(t − T₁) shifts the signal to the right by T₁; x(t + T₂) shifts it to the left by T₂.
Time scaling: x(2t) compresses the signal twice in time, x(t/2) expands it twice.
Time reversal: x(−t) mirrors the signal about t = 0.
(Figures: a unit-height pulse and its shifted, scaled and reversed versions.)
(Digital) signal processing chain
Analog signal → anti-aliasing filter (LP) → S&H → ADC → digital signal → signal processor → digital signal → DAC → output (reconstruction) filter → analog signal → actuator (optional)
Digital vs analog signal processing
Pros / Cons
(Figure: an analog signal segment of 32 ms and the corresponding discrete signal of 256 samples, x[n] plotted versus sample index n.)
Unit impulse:
δ[n] = { 1, n = 0;  0, n ≠ 0 }
δ[n] = u[n] − u[n − 1]
Unit step:
u[n] = { 1, n ≥ 0;  0, n < 0 }
u[n] = ∑_{k=0}^{∞} δ[n − k] = ∑_{k=−∞}^{n} δ[k]
Decaying exponential:
x[n] = { Aαⁿ, n ≥ 0;  0, n < 0 } = Aαⁿ u[n]
Sinusoid:
x[n] = A cos(ω₀n + φ)
Complex exponential signal: x[n] = A e^{j(ω₀n + φ)}
A discrete signal is periodic if there exists N ∈ ℕ such that x[n] = x[n + k·N] for every k, n ∈ ℤ.
Discrete sinusoid:  A cos(ω₀n + φ) = A cos(ω₀n + ω₀N + φ)  ⇒  N = k·2π/ω₀ for some integer k
A discrete sinusoid (and a discrete complex exponential) is therefore not necessarily periodic with period 2π/ω₀; it is periodic only if 2π/ω₀ is a rational number.
cos(ω₀n) for different ω₀ (plots versus n):
ω₀ = 0 and ω₀ = 2π give the same (constant) sequence;
ω₀ = π/8 and ω₀ = 15π/8 give the same sequence;
ω₀ = π/16 and ω₀ = 31π/16 give the same sequence;
ω₀ = π/4 and ω₀ = 7π/4 give the same sequence.
Frequencies ω₀ and 2π − ω₀ (and, in general, ω₀ + 2πk) are indistinguishable in discrete time.
Any discrete signal x[n] (in the figure: p[n] with samples a[−3], a[1], a[2], a[7], …) can be represented as a superposition of scaled, shifted unit impulses:
x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k]
Systems
Moving-average system:
y[n] = (1/(M₁ + M₂ + 1)) ∑_{k=−M₁}^{M₂} x[n − k]
Memoryless systems
The output signal depends only on the present value of the input signal, e.g.
p(t) = i²(t)R = u²(t)/R        y[n] = (x[n])²
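A minimal numerical sketch of the moving-average system above (assuming NumPy is available; M1, M2 and the test signal are arbitrary illustration values):

```python
import numpy as np

def moving_average(x, M1, M2):
    """y[n] = 1/(M1+M2+1) * sum_{k=-M1}^{M2} x[n-k], samples outside the record treated as zero."""
    N = len(x)
    y = np.zeros(N)
    for n in range(N):
        acc = 0.0
        for k in range(-M1, M2 + 1):
            if 0 <= n - k < N:
                acc += x[n - k]
        y[n] = acc / (M1 + M2 + 1)
    return y

# illustration: smooth a noisy sinusoid with a 5-point moving average
n = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * n) + 0.3 * np.random.randn(200)
y = moving_average(x, M1=2, M2=2)
```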
Linear systems – systems fulfilling the superposition principle:
T{x₁[n] + x₂[n]} = T{x₁[n]} + T{x₂[n]}   (additivity)
and
T{a·x[n]} = a·T{x[n]}   (homogeneity)
Nonlinear systems
To show that a system is nonlinear it is sufficient to find a single n (and a single pair of inputs) for which additivity or homogeneity is not fulfilled – easier to prove than linearity itself.
Accumulator system:
y[n] = ∑_{k=−∞}^{n} x[k]
It is possible to prove that the accumulator system is linear.
Causal systems
A system is causal if, for every n, the present value of the output depends only on the present and past values of the input.
causal:      y[n] = (1/3)(x[n] + x[n − 1] + x[n − 2])
non-causal:  y[n] = (1/3)(x[n + 1] + x[n] + x[n − 1])
The compressor system
y[n] = x[Mn]: every M-th sample of the input is taken (the others are skipped). It is not a time-invariant system.
Stable systems
A system is stable in the BIBO (bounded input, bounded output) sense if and only if every bounded input produces a bounded output.
An input signal x[n] is said to be bounded if there exists B > 0 such that for every n
|x[n]| ≤ B < ∞
Linear Time-Invariant systems
An LTI system is completely defined by its impulse response h[n].
y[n] = T{x[n]} = T{ ∑_{k=−∞}^{∞} x[k] δ[n − k] }
From linearity:
y[n] = ∑_{k=−∞}^{∞} x[k] T{δ[n − k]} = ∑_{k=−∞}^{∞} x[k] h_k[n]
From the property of time invariance it follows that T{δ[n − k]} = h[n − k] = h_k[n], hence
y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k]   (convolution sum)
y[n] = x[n] ⋆ h[n]   (discrete convolution)
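A direct implementation of the convolution sum (a sketch; numpy.convolve is used only as a cross-check):

```python
import numpy as np

def convolve_sum(x, h):
    """y[n] = sum_k x[k] h[n-k] for finite-length x and h (linear convolution)."""
    Ny = len(x) + len(h) - 1
    y = np.zeros(Ny)
    for n in range(Ny):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 0.0, -1.0])
h = np.array([1.0, 1.0, 1.0])            # 3-point rectangular impulse response
assert np.allclose(convolve_sum(x, h), np.convolve(x, h))
```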
Representation of the output of an LTI system as a superposition of responses to the separate input samples (figure):
the input x[n] = x[−2]δ[n + 2] + x[0]δ[n] + x[3]δ[n − 3] produces the output
y[n] = x[−2]h[n + 2] + x[0]h[n] + x[3]h[n − 3].
Example 1: Analytical evaluation of the output signal of an LTI system with impulse response h[n]
h[n] = u[n] − u[n − N] = { 1 for 0 ≤ n ≤ N − 1;  0 for the others }
Input signal: x[n] = aⁿ u[n]
Useful identity:  ∑_{k=N₁}^{N₂} aᵏ = (a^{N₁} − a^{N₂+1}) / (1 − a),   N₂ ≥ N₁
a) n < 0: the nonzero parts of x[k] and h[n − k] do not overlap, so y[n] = 0.
b) 0 ≤ n ≤ N − 1:
y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=0}^{n} aᵏ = (1 − a^{n+1}) / (1 − a)
c) n > N − 1:
y[n] = ∑_{k=n−(N−1)}^{n} aᵏ = (a^{n−(N−1)} − a^{n+1}) / (1 − a) = a^{n−(N−1)} (1 − a^N) / (1 − a)
Hence:
y[n] = { 0 for n < 0;   (1 − a^{n+1})/(1 − a) for 0 ≤ n ≤ N − 1;   a^{n−N+1}(1 − a^N)/(1 − a) for n > N − 1 }
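A quick numerical check of the piecewise result above against a brute-force convolution (a, N and the evaluation range are arbitrary illustration values):

```python
import numpy as np

a, N = 0.8, 8
n = np.arange(0, 40)
x = a ** n                      # x[n] = a^n u[n] on n >= 0
h = np.ones(N)                  # h[n] = u[n] - u[n-N]
y_conv = np.convolve(x, h)[:len(n)]

def y_closed(n):
    if n < 0:
        return 0.0
    if n <= N - 1:
        return (1 - a ** (n + 1)) / (1 - a)
    return a ** (n - N + 1) * (1 - a ** N) / (1 - a)

assert np.allclose(y_conv, [y_closed(k) for k in n])
```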
Sequences involved in computing the discrete convolution in Example 1 (figures of x[k] and h[n − k] versus k):
a) n < 0 – no overlap;   b) 0 ≤ n ≤ N − 1 – partial overlap from k = 0 to k = n;   c) n − (N − 1) > 0 – full overlap from k = n − (N − 1) to k = n.
(Animation frames: the window h[n − k] slides along x[k] through the successive cases a), b), c), and the corresponding values of y[n] are accumulated, reproducing the piecewise result derived above.)
LTI system properties
Cascade (series) connection: the overall impulse response is h[n] = h₁[n] ⋆ h₂[n]; the order of the subsystems can be interchanged (h₁ then h₂ ≡ h₂ then h₁).
Parallel connection: the overall impulse response is h[n] = h₁[n] + h₂[n].
Stability
An LTI system is stable  ⇔  S = ∑_{k=−∞}^{∞} |h[k]| < ∞   (the necessary and sufficient condition)
Proof (sufficiency): if |x[n]| < M for every n, then
|y[n]| = | ∑_{k=−∞}^{∞} h[k] x[n − k] | ≤ ∑_{k=−∞}^{∞} |h[k]|·|x[n − k]| < M ∑_{k=−∞}^{∞} |h[k]| < ∞
(Necessity): in order to show that it is also a necessary condition we must show that if S = ∞ then a bounded input can be found that will cause an unbounded output. Suppose that
x[n] = { h*[−n]/|h[−n]| for h[−n] ≠ 0;   0 for h[−n] = 0 }   (bounded by unity)
Then  y[0] = ∑_{k=−∞}^{∞} x[0 − k] h[k] = ∑_{k=−∞}^{∞} |h[k]|²/|h[k]| = ∑_{k=−∞}^{∞} |h[k]| = S = ∞.
Systems described by difference equations:
∑_{k=0}^{N} a_k y[n − k] = ∑_{r=0}^{M} b_r x[n − r]
In general, such an equation does not necessarily describe a causal system, and without the initial condition (or conditions) the output signal cannot be determined uniquely.
Accumulator:
y[n] = ∑_{k=−∞}^{n} x[k] = x[n] + ∑_{k=−∞}^{n−1} x[k] = x[n] + y[n − 1],   since y[n − 1] = ∑_{k=−∞}^{n−1} x[k]
(Block diagram: x[n] and the delayed output y[n − 1], taken from a z⁻¹ block, are added to form y[n].)
Example 3: Consider a system described by the difference equation y[n] = a·y[n − 1] + x[n], with the initial condition y[−1] = c and the input x[n] = B·δ[n].
For n > −1:
y[0] = ac + B
y[1] = a·y[0] + 0 = a(ac + B) = a²c + aB
y[2] = a·y[1] + 0 = a(a²c + aB) = a³c + a²B
y[3] = a·y[2] + 0 = a(a³c + a²B) = a⁴c + a³B
…
y[n] = a^{n+1}c + aⁿB
For zero input (B = 0) the output is y[n] = a^{n+1}c ≠ 0 whenever c ≠ 0, so with a nonzero initial condition the system is neither linear nor time-invariant.
Example 4: Find the frequency response of the ideal delay system defined as
y[n] = x[n − n_d],   where n_d (the shift in the time domain) is an integer.
Input sequence:  x[n] = e^{jωn},  n ∈ (−∞, ∞)
y[n] = e^{jω(n − n_d)} = e^{−jωn_d} e^{jωn}   ⇒   H(e^{jω}) = e^{−jωn_d}
⇓
|H(e^{jω})| = 1,   arg(H(e^{jω})) = −ωn_d
If an input sequence is a sum of complex exponentials, x[n] = ∑_k a_k e^{jω_k n}, n ∈ (−∞, ∞),
then, from the principle of superposition, y[n] = ∑_k a_k H(e^{jω_k}) e^{jω_k n}, n ∈ (−∞, ∞).
If we can find a representation of x[n] as a superposition of complex exponential sequences, then we can find y[n] having the frequency response of the system.
Example 5: Find the output sequence of the ideal delay system when a sinusoid is the input signal.
x[n] = A cos(ω₀n + φ)   ⇒   y[n] = A cos(ω₀n + φ − ω₀n_d)
Frequency response of the moving-average system:
h[n] = { 1/(M₁ + M₂ + 1) for −M₁ ≤ n ≤ M₂;   0 otherwise }
H(e^{jω}) = ∑_{k=−∞}^{∞} h[k] e^{−jωk} = (1/(M₁ + M₂ + 1)) ∑_{n=−M₁}^{M₂} e^{−jωn} = (1/(M₁ + M₂ + 1)) · (e^{jωM₁} − e^{−jω(M₂+1)}) / (1 − e^{−jω})
= (1/(M₁ + M₂ + 1)) · (e^{jω(M₁+M₂+1)/2} − e^{−jω(M₁+M₂+1)/2}) / (e^{jω/2} − e^{−jω/2}) · e^{−jω(M₂−M₁)/2}
= (1/(M₁ + M₂ + 1)) · sin[ω(M₁ + M₂ + 1)/2] / sin(ω/2) · e^{−jω(M₂−M₁)/2}
Here the geometric-sum identity ∑_{k=N₁}^{N₂} aᵏ = (a^{N₁} − a^{N₂+1})/(1 − a), N₂ ≥ N₁, was used with a = e^{−jω}.
(Figure: |H(e^{jω})| and arg(H(e^{jω})) of the moving-average filter.)
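A numerical check of the closed-form frequency response above (a sketch; M1 and M2 are arbitrary illustration values):

```python
import numpy as np

M1, M2 = 2, 2
L = M1 + M2 + 1
w = np.linspace(1e-3, np.pi, 512)        # avoid w = 0 where sin(w/2) = 0

# direct evaluation of the DTFT of h[n] = 1/L for -M1 <= n <= M2
n = np.arange(-M1, M2 + 1)
H_direct = (1.0 / L) * np.exp(-1j * np.outer(w, n)).sum(axis=1)

# closed form derived above
H_closed = (np.sin(w * L / 2) / np.sin(w / 2)) * np.exp(-1j * w * (M2 - M1) / 2) / L

assert np.allclose(H_direct, H_closed)
```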
Suddenly applied complex exponential
LTI system response to an input of the form x[n] = e^{jωn} u[n] (the exponential is applied at n = 0):
y[n] = { 0 for n < 0;   ( ∑_{k=0}^{n} h[k] e^{−jωk} ) e^{jωn} for n ≥ 0 }
For n ≥ 0:
y[n] = ( ∑_{k=0}^{∞} h[k] e^{−jωk} ) e^{jωn} − ( ∑_{k=n+1}^{∞} h[k] e^{−jωk} ) e^{jωn} = H(e^{jω}) e^{jωn} − ( ∑_{k=n+1}^{∞} h[k] e^{−jωk} ) e^{jωn}
The first term is the steady-state response, the second the transient response.
| ∑_{k=n+1}^{∞} h[k] e^{−jωk} | ≤ ∑_{k=n+1}^{∞} |h[k]| ≤ ∑_{k=0}^{∞} |h[k]|
so if ∑_{k=0}^{∞} |h[k]| < ∞ (the system is stable), the transient term is bounded and vanishes as n → ∞.
The condition for stability is also a sufficient condition for the existence of the frequency response function:
|H(e^{jω})| = | ∑_{k=−∞}^{∞} h[k] e^{−jωk} | ≤ ∑_{k=−∞}^{∞} |h[k] e^{−jωk}| = ∑_{k=−∞}^{∞} |h[k]| < ∞
∑_{k=−∞}^{∞} |h[k]| < ∞  is the general condition for the existence of the frequency response.
Approximation of a vector by another vector
V₁ = c₁₂V₂ + Vₑ,   where Vₑ is the approximation error and c₁₂V₂ is the vector component of V₁ along the vector V₂.
Many decompositions of this form are possible, but only one is optimal when the length of the error vector is taken into consideration:
V₁·V₂ / |V₂| = |V₁| cos(θ) = c₁₂|V₂|   – the component of V₁ along the vector V₂
Two vectors V₁ and V₂ are orthogonal to each other if V₁·V₂ = 0.
Any real function f₁(t) can be approximated by another real function f₂(t) within a certain range (t₁ < t < t₂): f₁(t) ≈ c₁₂ f₂(t). A first candidate for the error measure is
(1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − c₁₂ f₂(t)] dt
This criterion is inadequate, for positive and negative error values could compensate each other, yielding wrong conclusions.
It is better to choose the mean squared error, defined as follows:
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − c₁₂ f₂(t)]² dt
Minimize with respect to c₁₂:  dε/dc₁₂ = 0
(1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} (d/dc₁₂) f₁²(t) dt − 2 (d/dc₁₂) c₁₂ ∫_{t₁}^{t₂} f₁(t) f₂(t) dt + (d/dc₁₂) c₁₂² ∫_{t₁}^{t₂} f₂²(t) dt ] = 0
(1/(t₂ − t₁)) [ 0 − 2 ∫_{t₁}^{t₂} f₁(t) f₂(t) dt + 2c₁₂ ∫_{t₁}^{t₂} f₂²(t) dt ] = 0
Hence:
c₁₂ = ∫_{t₁}^{t₂} f₁(t) f₂(t) dt / ∫_{t₁}^{t₂} f₂²(t) dt
which is the analogue of the vector result  c₁₂ = V₁·V₂ / |V₂|²   (V₁·V₂/|V₂| = |V₁| cos θ = c₁₂|V₂|).
Product of two real functions:  ⟨f₁(t), f₂(t)⟩ = ∫_{t₁}^{t₂} f₁(t) f₂(t) dt
Two functions are orthogonal on (t₁, t₂) if ∫_{t₁}^{t₂} f₁(t) f₂(t) dt = 0.
A set of functions {f₀(t), f₁(t), f₂(t), …, f_n(t)} is a set of n + 1 mutually orthogonal (real) functions within (t₁ < t < t₂) if
∫_{t₁}^{t₂} f_j(t) f_k(t) dt = 0 for j ≠ k
Let any function x(t) be approximated within the range (t₁ < t < t₂) by a linear combination of the n + 1 mutually orthogonal functions:
x(t) ≈ c₀f₀(t) + c₁f₁(t) + … + c_n f_n(t) = ∑_{r=0}^{n} c_r f_r(t)
Mean squared error:
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [ x(t) − ∑_{r=0}^{n} c_r f_r(t) ]² dt
Function expansion into a series of mutually orthogonal functions. Fourier series
A real function x(t) is given. The goal is to find its approximation within t ∈ ⟨t₀; t₀+T⟩ with a set of orthogonal base functions f_k(t).
Scalar product of f_i, f_j within ⟨t₀; t₀+T⟩:  ⟨f_i(t), f_j(t)⟩ = ∫_{t₀}^{t₀+T} f_i(t) f_j*(t) dt = 0 for i ≠ j
Approximation of x(t) by the set of orthogonal basis functions f_k(t), with error ε:
x(t) ≈ ∑_{k=−∞}^{∞} c_k f_k(t),        x(t) = ∑_{k=−∞}^{∞} c_k f_k(t) + ε
The goal is to find the coefficients c_k so as to minimize the mean squared error
ε = (1/T) ∫_{t₀}^{t₀+T} [ x(t) − ∑_{k=−∞}^{∞} c_k f_k(t) ]² dt
For any c_n:   (∂/∂c_n) (1/T) ∫_{t₀}^{t₀+T} [ x(t) − ∑_k c_k f_k(t) ]² dt = 0
(∂/∂c_n) (1/T) ∫_{t₀}^{t₀+T} [ x²(t) − 2x(t) ∑_k c_k f_k(t) + ( ∑_k c_k f_k(t) )² ] dt = 0
The first term does not depend on c_n:  (1/T) ∫ (∂/∂c_n) x²(t) dt = 0.
The second term gives  −(2/T) ∫_{t₀}^{t₀+T} x(t) f_n(t) dt.
In the third term all the cross products such as 2c₁f₁(t)c₂f₂(t), 2c₁f₁(t)c₃f₃(t), …, 2c₁f₁(t)c_nf_n(t) integrate to zero by orthogonality, and only (∂/∂c_n) c_n² ∫ f_n²(t) dt = 2c_n ∫ f_n²(t) dt remains.
Hence:
c_n = ∫_{t₀}^{t₀+T} x(t) f_n(t) dt / ∫_{t₀}^{t₀+T} f_n²(t) dt
For complex orthogonal functions f_k(t) and a complex signal x(t):
c_n = ∫_{t₀}^{t₀+T} x(t) f_n*(t) dt / ∫_{t₀}^{t₀+T} f_n*(t) f_n(t) dt
Complex, harmonic base functions
Assuming that the f_n(t) are complex harmonic signals of the form
f_n(t) = e^{jnω₀t} = e^{jn(2πf₀)t} = e^{j2πn(1/T)t} = cos(2πn t/T) + j sin(2πn t/T),   n = 0, ±1, ±2, …
Orthogonality check (m ≠ n):
∫_{t₀}^{t₀+T} e^{j2π[(m−n)/T]t} dt = e^{j2π[(m−n)/T]t₀} (e^{j2π(m−n)} − 1) / (j2π(m−n)/T)
= e^{j2π[(m−n)/T]t₀} / (j2π(m−n)/T) · [cos(2π(m−n)) + j sin(2π(m−n)) − 1] = 0
If n = 0:  c₀ = (1/T) ∫_{t₀}^{t₀+T} x(t) dt  – the mean value of x(t) for t ∈ ⟨t₀, t₀+T⟩.
x(t) = ∑_{k=−∞}^{∞} c_k e^{jkω₀t} = c₀ + ∑_{k=1}^{∞} (c_k e^{jkω₀t} + c_{−k} e^{−jkω₀t}) = c₀ + ∑_{k=1}^{∞} (c_k e^{jkω₀t} + c_k* e^{−jkω₀t})
= c₀ + ∑_{k=1}^{∞} 2 Re(c_k e^{jkω₀t}),   because x + x* = a + jb + a − jb = 2 Re(x)
If we write c_k = a_k − jb_k, then |c_k| = √(a_k² + b_k²) and φ_k = arctan(−b_k/a_k).
c_k = (1/T) ∫_{t₀}^{t₀+T} x(t) e^{−jkω₀t} dt = (1/T) ∫_{t₀}^{t₀+T} x(t) cos(kω₀t) dt − j (1/T) ∫_{t₀}^{t₀+T} x(t) sin(kω₀t) dt = a_k − jb_k
a_k = (1/T) ∫_{t₀}^{t₀+T} x(t) cos(kω₀t) dt        b_k = (1/T) ∫_{t₀}^{t₀+T} x(t) sin(kω₀t) dt
If the expansion is written in the trigonometric form x(t) = c₀ + ∑_{k=1}^{∞} (a_k cos(kω₀t) + b_k sin(kω₀t)), then
a_k = (2/T) ∫_{t₀}^{t₀+T} x(t) cos(kω₀t) dt        b_k = (2/T) ∫_{t₀}^{t₀+T} x(t) sin(kω₀t) dt
Summary of the two equivalent forms:
x(t) = ∑_{k=−∞}^{∞} c_k e^{jkω₀t},   c_k = (1/T) ∫_{t₀}^{t₀+T} x(t) e^{−jkω₀t} dt  – the amplitude (weight) of the k-th function e^{jkω₀t},   c₀ = (1/T) ∫_{t₀}^{t₀+T} x(t) dt
x(t) = c₀ + ∑_{k=1}^{∞} (a_k cos(kω₀t) + b_k sin(kω₀t)),   a_k = (2/T) ∫_{t₀}^{t₀+T} x(t) cos(kω₀t) dt,   b_k = (2/T) ∫_{t₀}^{t₀+T} x(t) sin(kω₀t) dt
If the function is periodic (with period T), the expansion computed over one period holds everywhere; t₀ is an arbitrary point.
A numerical implementation of the Fourier series allows only a finite number of expansion coefficients, which is a source of error (known as the Gibbs effect).
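A short sketch of a truncated Fourier series of a square wave, illustrating the Gibbs effect (the number of kept harmonics K is an arbitrary illustration value):

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 2000, endpoint=False)
dt = t[1] - t[0]
x = np.where(t < T / 2, 1.0, -1.0)          # square wave with period T, zero mean

K = 25                                       # number of kept harmonics
xK = np.zeros_like(t)
for k in range(1, K + 1):
    # a_k, b_k from the analysis formulas; integrals approximated by Riemann sums
    ak = (2 / T) * np.sum(x * np.cos(k * w0 * t)) * dt
    bk = (2 / T) * np.sum(x * np.sin(k * w0 * t)) * dt
    xK += ak * np.cos(k * w0 * t) + bk * np.sin(k * w0 * t)
# xK approximates x but overshoots by about 9 % near the discontinuities (Gibbs effect)
```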
Fourier transform
Definition:
Fourier transform:   X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
Inverse Fourier transform:   x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω
Existence condition:  1.  ∫_{−∞}^{∞} |x(t)| dt < ∞
Derivation from the Fourier series as T → ∞:
x(t) = ∑_{k=−∞}^{∞} c_k e^{jkω₀t} = ∑_{k=−∞}^{∞} [ (1/T) ∫_{t₀}^{t₀+T} x(t) e^{−jkω₀t} dt ] e^{jkω₀t} = ∑_{k=−∞}^{∞} (ω₀/2π) [ ∫_{t₀}^{t₀+T} x(t) e^{−jkω₀t} dt ] e^{jkω₀t}
lim_{T→∞} ∑_{k=−∞}^{∞} (2π/T)(1/2π) [ ∫_{−T/2}^{T/2} x(t) e^{−jkω₀t} dt ] e^{jkω₀t} = (1/2π) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} x(t) e^{−jωt} dt ) e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω
Real and imaginary parts:
X(jω) = ∫_{−∞}^{∞} x(t) cos(ωt) dt − j ∫_{−∞}^{∞} x(t) sin(ωt) dt = X_R(jω) + jX_I(jω)
Polar form:
X(jω) = |X(jω)| e^{j arg(X(jω))},        x(t) = (1/2π) ∫_{−∞}^{∞} |X(jω)| e^{j[ωt + arg(X(jω))]} dω
Fourier Transform properties
Symmetry (duality):  X(jt) ⇔ 2π x(−ω)
Scaling:  x(at) ⇔ (1/|a|) X(jω/a)
Compression of the time scale means expansion in the frequency domain, and vice versa.
Shifting in the time domain:  x(t − t₀) ⇔ X(jω) e^{−jωt₀}
Displacement of a signal in the time domain results in a phase shift of its Fourier transform; the modulus of the transform of the original signal and of the displaced signal are equal.
Product of signals:
z(t) = x(t) y(t)  ⇔  Z(jω) = (1/2π) ∫_{−∞}^{∞} X(jν) Y(j(ω − ν)) dν = (1/2π) X(jω) ⋆ Y(jω)
Convolution of signals:
z(t) = x(t) ⋆ y(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ  ⇔  Z(jω) = X(jω) Y(jω)
Derivative of a signal:
dⁿx(t)/dtⁿ ⇔ (jω)ⁿ X(jω)
Integral of a signal:
∫_{−∞}^{t} x(τ) dτ ⇔ (1/jω) X(jω) + π X(0) δ(ω)
Correlation of signals:
z(t) = ∫_{−∞}^{∞} x(τ) y*(τ − t) dτ  ⇔  Z(jω) = X(jω) Y*(jω)
Parseval equality:
∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω
FT calculation for selected cases
Rectangular pulse:
p_T(t) = { 1 for |t| ≤ T;  0 for |t| > T }
∫_{−∞}^{∞} p_T(t) e^{−jωt} dt = ∫_{−T}^{T} e^{−jωt} dt = [e^{−jωt}/(−jω)]_{−T}^{T} = (e^{−jωT} − e^{jωT})/(−jω) = (2/ω) sin(ωT) = 2T sinc(ωT)
Dirac delta:
The Dirac delta can be represented as the limiting case of the rectangular pulse when T → 0:
δ(t) = lim_{T→0} (1/2T) p_T(t)
∫_{−∞}^{∞} lim_{T→0} (1/2T) p_T(t) e^{−jωt} dt = lim_{T→0} ∫_{−∞}^{∞} (1/2T) p_T(t) e^{−jωt} dt = lim_{T→0} sin(ωT)/(ωT) = 1   (L'Hôpital's rule)
Time support tends to 0  →  infinite bandwidth:
ℱ(δ(t)) = 1
Series of Dirac pulses (Dirac comb):
δ_T(t) = ∑_{k=−∞}^{∞} δ(t − kT) = ∑_{k=−∞}^{∞} c_k e^{jkω₀t}
The series of Dirac pulses is periodic, hence it can be expressed in terms of a Fourier series (ω₀ = 2π/T):
c_k = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkω₀t} dt = (1/T) ∫_{−∞}^{∞} δ(t) dt = 1/T   ⟹   ∑_{k=−∞}^{∞} δ(t − kT) = (1/T) ∑_{k=−∞}^{∞} e^{jkω₀t}
ℱ(δ_T(t)) = ∫_{−∞}^{∞} [ ∑_{k=−∞}^{∞} δ(t − kT) ] e^{−jωt} dt = ∫_{−∞}^{∞} [ (1/T) ∑_{k=−∞}^{∞} e^{jkω₀t} ] e^{−jωt} dt
= (1/T) ∑_{k=−∞}^{∞} ∫_{−∞}^{∞} e^{jkω₀t} e^{−jωt} dt = (1/T) ∑_{k=−∞}^{∞} 2π δ(ω − kω₀) = (2π/T) ∑_{k=−∞}^{∞} δ(ω − kω₀)
FT of sine and cosine:
ℱ(sin(ω₀t)) = (1/2j) [ ∫_{−∞}^{∞} e^{−j(ω−ω₀)t} dt − ∫_{−∞}^{∞} e^{−j(ω+ω₀)t} dt ] = (1/2j) [2πδ(ω − ω₀) − 2πδ(ω + ω₀)] = (π/j) (δ(ω − ω₀) − δ(ω + ω₀))
ℱ(cos(ω₀t)) = π (δ(ω − ω₀) + δ(ω + ω₀))
Cosine multiplied with a rectangular window:
ℱ(cos(ω₀t) p_T(t)) = ∫_{−∞}^{∞} cos(ω₀t) p_T(t) e^{−jωt} dt = ∫_{−T}^{T} cos(ω₀t) e^{−jωt} dt = (1/2) ∫_{−T}^{T} (e^{jω₀t} + e^{−jω₀t}) e^{−jωt} dt
= (1/2) [e^{−j(ω+ω₀)t}/(−j(ω + ω₀))]_{−T}^{T} + (1/2) [e^{−j(ω−ω₀)t}/(−j(ω − ω₀))]_{−T}^{T}
= sin[(ω + ω₀)T]/(ω + ω₀) + sin[(ω − ω₀)T]/(ω − ω₀)
The spectrum of a cosine multiplied with a rectangular window consists of a main lobe and side lobes. It is important to reduce the side lobes.
The shorter the signal in the time domain, the wider its spectrum, and vice versa.
It is possible to use a window other than rectangular in order to minimize the side lobes ("frequency leakage"); thus, in practice, a window other than rectangular should be applied, e.g. the Hanning window.
Time windows (selected) – discrete versions
Kaiser window (parametric):
w[n] = I₀( β √(1 − ((2n − N + 1)/(N − 1))²) ) / I₀(β),   n = 0, 1, 2, …, N − 1
where I₀(x) = 1 + ∑_{m=1}^{∞} [ (x/2)ᵐ / m! ]²  is the modified Bessel function of the 1st kind and 0th order.
Chebyshev window (parametric):
w[m + (M + 1)] = (1/N) [ C + 2 ∑_{k=1}^{M} T_{N−1}( β cos(πk/N) ) cos(2πkm/N) ],   −M ≤ m ≤ M,   M = (N − 1)/2
where T_{N−1} is the Chebyshev polynomial of degree N − 1 and C is a normalizing constant.
General cosine window:
w[n] = A − B cos(2πn/(N − 1)) + C cos(4πn/(N − 1)) − D cos(6πn/(N − 1)),   n = 0, 1, 2, …, N − 1
• Hann (Hanning) window:
A = 0.5, B = 0.5, C = 0, D = 0:   w[n] = 0.5 − 0.5 cos(2πn/(N − 1)),   n = 0, 1, 2, …, N − 1
• Hamming window:
A = 0.54, B = 0.46, C = 0, D = 0:   w[n] = 0.54 − 0.46 cos(2πn/(N − 1)),   n = 0, 1, 2, …, N − 1
• Blackman window:
w[n] = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1)),   n = 0, 1, 2, …, N − 1
• Nuttall window:
A = 0.355768, B = 0.487396, C = 0.144232, D = 0.012604
• Blackman–Nuttall window: the same form with slightly different coefficients.
(Figures: time windows (selected) and their amplitude spectra.)
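A sketch generating two of the windows above from the general cosine formula and checking them against NumPy's built-in versions:

```python
import numpy as np

def cosine_window(N, A, B, C=0.0, D=0.0):
    """General cosine window w[n] = A - B cos + C cos - D cos, n = 0..N-1."""
    n = np.arange(N)
    arg = 2 * np.pi * n / (N - 1)
    return A - B * np.cos(arg) + C * np.cos(2 * arg) - D * np.cos(3 * arg)

N = 64
hann = cosine_window(N, 0.5, 0.5)
hamming = cosine_window(N, 0.54, 0.46)

assert np.allclose(hann, np.hanning(N))
assert np.allclose(hamming, np.hamming(N))

# amplitude spectrum of a window (zero-padded for a smoother plot)
W = np.abs(np.fft.rfft(hann, 8 * N))
W_dB = 20 * np.log10(W / W.max())
```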
Sampling theorem
Uniform sampling of a continuous signal x(t) can be expressed as multiplication of the signal with a series of Dirac pulses (using the sifting property ∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)):
x_δ(t) = x(t) · ∑_{k=−∞}^{∞} δ(t − kT)
A product in the time domain ⇔ convolution in the frequency domain:
X_δ(jω) = (1/2π) X(jω) ⋆ [ ω_s ∑_{k=−∞}^{∞} δ(ω − kω_s) ] = (ω_s/2π) ∑_{k=−∞}^{∞} X(j(ω − kω_s)) = (1/T) ∑_{k=−∞}^{∞} X(j(ω − kω_s))
where ω_s = 2π/T is the sampling frequency.
(Figures: the spectrum of the sampled signal consists of replicas of X(jω) centred at multiples of ω_s; if the replicas overlap because the spectrum is too wide for the chosen ω_s, ALIASING occurs.)
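A small sketch showing aliasing: a 9 Hz sine sampled at 10 Hz is indistinguishable from a 1 Hz sine (frequencies chosen only for illustration):

```python
import numpy as np

fs = 10.0                        # sampling frequency, Hz
n = np.arange(0, 20)             # 2 seconds of samples
t = n / fs

x_high = np.sin(2 * np.pi * 9.0 * t)           # 9 Hz sine, above fs/2 = 5 Hz
x_alias = np.sin(2 * np.pi * (9.0 - fs) * t)   # = -sin(2*pi*1*t)

# the two sampled sequences are identical: the 9 Hz component aliases to |9 - 10| = 1 Hz
assert np.allclose(x_high, x_alias)
```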
Sampling of band-limited (band-pass) signals
In the case of band-pass signals it is possible to reconstruct the signal at sampling rates lower than those resulting from the Nyquist theorem applied to the highest frequency. Depending on the position of the band relative to the sampling frequency, the spectral replica in the baseband appears either without frequency inversion or with frequency inversion.
(Figures: X(ω)H_p(ω) for both cases.)
Sampling of 2D signals (images)
f(x, y) – analog image: irradiance [W/m²], a spatial 2D function.
Sampling function (2D space):
s_{(Δx,Δy)}(x, y) = ∑_{k=−∞}^{∞} ∑_{l=−∞}^{∞} δ(x − kΔx, y − lΔy)
Sampling in frequency domain
FT of the sampling function:
S_{(Δω_x,Δω_y)}(ω_x, ω_y) = ∑_{k=−∞}^{∞} ∑_{l=−∞}^{∞} δ(ω_x − kΔω_x, ω_y − lΔω_y),   with  Δω_x = 2π/Δx,  Δω_y = 2π/Δy
Multiplication in the spatial domain ⇔ convolution in the (spatial) frequency domain:
F_s(ω_x, ω_y) = F(ω_x, ω_y) ⊗ ∑_{k=−∞}^{∞} ∑_{l=−∞}^{∞} δ(ω_x − kΔω_x, ω_y − lΔω_y)
Let the spectrum of the analog image be limited up to some maximum spatial frequencies ω_xmax and ω_ymax for the x and y directions.
(Figure: example spatial components A₀ₓ sin(ω_{x0}x), A₀ᵧ sin(ω_{y0}y) and the band-limited spectral support |ω_x| ≤ ω_xmax, |ω_y| ≤ ω_ymax; image: https://qph.fs.quoracdn.net/main-qimg-059582539613d55d31c570d9d9c36ac9-c)
Dirac delta sifting property:  f(x) ⊗ δ(x − x₀) = f(x − x₀)   (can be generalized to n dimensions)
(Figure: the sampled-image spectrum is a periodic replication of the original spectrum on a grid with spacing Δω_x, Δω_y in the (ω_x, ω_y) plane.)
Δx ↗ ⇒ Δω_x ↘        Δy ↗ ⇒ Δω_y ↘
Δx ↗ ⇒ Δω_x ↘        Δy ↗ ⇒ Δω_y ↘
(Figures: as the sampling steps Δx, Δy grow, the spectral replicas move closer together; when they start to overlap, the spectra of neighbouring replicas mix – aliasing!)
The critical sampling
(Figure: the replicas just touch when Δω_x = 2ω_xmax and Δω_y = 2ω_ymax, i.e. at the 2D Nyquist rate.)
The ideal filter for reconstruction/restoration:
H(ω_x, ω_y) = { 1/(ω_xmax·ω_ymax) for |ω_x| < ω_xmax, |ω_y| < ω_ymax;   0 for the others }
(Figure: |H(ω_x, ω_y)| – an ideal 2D low-pass characteristic.)
The restored (or reconstructed) image in the FT domain can be expressed as  F̃(ω_x, ω_y) = F_s(ω_x, ω_y) · H(ω_x, ω_y).
2D sampling in spatial domain
With the spectrum of the ideal reconstruction filter
H(ω_x, ω_y) = { 1/(ω_xmax·ω_ymax) for |ω_x| < ω_xmax, |ω_y| < ω_ymax;   0 for the others }
the reconstructed image is
f̃(x, y) = ∑_{k=−∞}^{∞} ∑_{l=−∞}^{∞} f_s(kΔx, lΔy) · sinc(xΔω_x − k) · sinc(yΔω_y − l)
(Figure: 1D sinc and 2D sinc interpolation kernels.)
In practice the signal should be sampled with a (much) higher frequency than that resulting from the sampling theorem (the Nyquist frequency). In that case real (realizable) filters can reconstruct almost the same image, with marginal differences.
Aliasing – 2D signal
Too low a sampling frequency (in the spatial domain) results in visible distortions known as the Moiré effect (pattern).
https://www.intmath.com/math-art-code/moire-effect.php
Short-Time Fourier Transform (STFT)
S_X(u, ω) = ⟨x, w_{u,ω}⟩ = ∫_{−∞}^{∞} x(t) w(t − u) e^{−jωt} dt
where  w_{u,ω}(t) = w(t − u) e^{jωt}
Spectrogram:
P_S X(u, ω) = |S_X(u, ω)|² = | ∫_{−∞}^{∞} x(t) w(t − u) e^{−jωt} dt |²
If the norm of the window function equals 1:
‖w(t)‖² = ∫_{−∞}^{∞} |w(t)|² dt = 1
then |w(t)|² can be interpreted as the probability distribution of a free particle around u – the centre of the window; in general the density is (1/‖w(t)‖²)|w(t)|². The particle can be matched with a wave given by the function w(t).
Hence:
u = ∫_{−∞}^{∞} t |w(t)|² dt        generally:  u = (1/‖w(t)‖²) ∫_{−∞}^{∞} t |w(t)|² dt
σ_t² = (1/‖w(t)‖²) ∫_{−∞}^{∞} (t − u)² |w(t)|² dt
Then √σ_t² is the radius of the window in the time domain and 2√σ_t² denotes its width:
Δ_t = √σ_t² = √[ (1/‖w(t)‖²) ∫_{−∞}^{∞} (t − u)² |w(t)|² dt ]
According to Parseval's theorem the same calculations can be made in the frequency domain ω.
The probability density of the momentum (the frequency-domain counterpart) is (1/(2π‖w(t)‖²)) |W(ω)|²,  where W(ω) denotes the FT of the window function w(t).
ξ = (1/(2π‖w(t)‖²)) ∫_{−∞}^{∞} ω |W(ω)|² dω
σ_ω² = (1/(2π‖w(t)‖²)) ∫_{−∞}^{∞} (ω − ξ)² |W(ω)|² dω     (r.m.s. length in the frequency domain)
Δ_ω = √σ_ω² = √[ (1/(2π‖w(t)‖²)) ∫_{−∞}^{∞} (ω − ξ)² |W(ω)|² dω ]
σ_t² measures the energy concentration around u (in the time domain); similarly, σ_ω² is a measure of the energy concentration around ξ (in the frequency domain).
Let us assume u = 0, ξ = 0.
σ_t² σ_ω² = (1/(2π‖w(t)‖⁴)) ∫_{−∞}^{∞} |t·w(t)|² dt ∫_{−∞}^{∞} |ω·W(ω)|² dω
Parseval equality:
∫_{−∞}^{∞} |w(t)|² dt = (1/2π) ∫_{−∞}^{∞} |W(ω)|² dω,   hence   ∫_{−∞}^{∞} |w′(t)|² dt = (1/2π) ∫_{−∞}^{∞} |jω·W(ω)|² dω
It yields:
σ_t² σ_ω² = (1/‖w(t)‖⁴) ∫_{−∞}^{∞} |t·w(t)|² dt ∫_{−∞}^{∞} |w′(t)|² dt
By the Cauchy–Schwarz inequality:
σ_t² σ_ω² ≥ (1/‖w(t)‖⁴) [ ∫_{−∞}^{∞} |t·w′(t)·w*(t)| dt ]²
≥ (1/‖w(t)‖⁴) [ ∫_{−∞}^{∞} (t/2) (w′(t)w*(t) + w′*(t)w(t)) dt ]² = (1/(4‖w(t)‖⁴)) [ ∫_{−∞}^{∞} t (|w(t)|²)′ dt ]²
Integration by parts gives
σ_t² σ_ω² ≥ (1/(4‖w(t)‖⁴)) [ ∫_{−∞}^{∞} |w(t)|² dt ]² = 1/4
σ_t² σ_ω² ≥ 1/4     – Heisenberg uncertainty principle
Equality in the Heisenberg principle holds if and only if the window is Gaussian, i.e. there exists b > 0 such that w(t) = a·e^{−bt²}.
Heisenberg boxes
(Figure: two Heisenberg boxes on the time-frequency plane, centred at (b₁, ξ₁) and (b₂, ξ₂), each of width 2Δ_t and height 2Δ_ω.)
• The joint time-frequency resolution is limited.
• The better the resolution in the time domain (narrower window in time), the worse the resolution in the frequency domain (wider window in frequency), and vice versa.
• The best resolution BOTH in time and frequency is obtained for the Gaussian (Gabor) window – the STFT with a Gaussian window is the Gabor transform.
(Figures: STFT of an example signal computed with rectangular windows of different lengths, showing the trade-off between time and frequency resolution.)
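A minimal discrete STFT sketch following the definition above (the window length and hop are arbitrary illustration values):

```python
import numpy as np

def stft(x, win, hop):
    """Frame-by-frame DFT of windowed segments; |S|^2 gives the spectrogram P_S."""
    Nw = len(win)
    frames = [np.fft.rfft(x[start:start + Nw] * win)
              for start in range(0, len(x) - Nw + 1, hop)]
    return np.array(frames)                 # shape: (time frames, frequency bins)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# 50 Hz tone in the first half, 200 Hz tone in the second half
x = np.sin(2 * np.pi * 50 * t) * (t < 0.5) + np.sin(2 * np.pi * 200 * t) * (t >= 0.5)

S = stft(x, win=np.hanning(128), hop=32)
spectrogram = np.abs(S) ** 2
```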
Wavelet transform
𝒲(s, τ) = ∫_{−∞}^{∞} f(t) ψ*_{s,τ}(t) dt     – continuous wavelet transform
The signal must have finite energy, ∫_{−∞}^{∞} |f(t)|² dt < ∞. Neither a sinusoid nor a constant signal belongs to this class of signals.
ψ_{s,τ}(t) = (1/√|s|) ψ((t − τ)/s),   s ∈ ℝ − {0},  τ ∈ ℝ
A set of functions called wavelets is generated from a prototype function, called the "mother wavelet", by its dilation and translation:
s – scale coefficient,  τ – translation coefficient.
Inverse continuous wavelet transform:
f(t) = (1/c) ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒲(s, τ) (1/√|s|) ψ((t − τ)/s) (1/s²) ds dτ
where
c = ∫_{−∞}^{∞} |Ψ(ω)|²/ω dω     – normalizing (admissibility) factor
Substituting for t a new variable t′ = t/s, one could write:
𝒲(s, τ) = √|s| ∫_{−∞}^{∞} f(st′) ψ(t′ − τ/s) dt′
Wavelet properties:
1. Their mean is 0:  ∫_{−∞}^{∞} ψ(t) dt = 0,  i.e.  Ψ(ω) = 0 for ω = 0.
3. They should tend to 0 (for localization in time) – this is not a strictly required condition.
(Figure: the Daubechies wavelet family.)
Wavelet transform – time-frequency resolution
If t* and 2Δ_ψ denote the centre and the width of the window ψ(t), respectively, then for the wavelet ψ_{s,τ}(t) with translation b the centre lies at b + st*, and its width in the time domain is 2sΔ_ψ.
The signal f(t) is therefore analysed within the time window
[ b + st* − sΔ_ψ ,  b + st* + sΔ_ψ ]
Let ω* and 2Δ_Ψ denote the centre and the width of Ψ(ω), respectively; the corresponding frequency window is
[ ω*/s − Δ_Ψ/s ,  ω*/s + Δ_Ψ/s ]
Heisenberg box of the wavelet:
[ b + st* − sΔ_ψ ,  b + st* + sΔ_ψ ] × [ ω*/s − Δ_Ψ/s ,  ω*/s + Δ_Ψ/s ]
(Figure: |Ψ_{s₁,τ}(ω)| and |Ψ_{s₂,τ}(ω)| with their Heisenberg boxes: 2s₁Δ_ψ × 2Δ_Ψ/s₁ around ω*/s₁, and 2s₂Δ_ψ × 2Δ_Ψ/s₂ around ω*/s₂.)
Conclusions:
• For small scales s the Heisenberg box is narrow in the time domain and large in the frequency domain.
• Conversely, for large scales s the Heisenberg box is large in the time domain and small in the frequency domain.
• Both high-frequency and low-frequency components of a signal are represented on the time-frequency plane with a resolution determined by the dimensions of the Heisenberg box.
Expansion in wavelet series
c_{j,k} = ∫_{−∞}^{∞} f(t) ψ_{j,k}(t) dt     – wavelet expansion of a continuous signal f(t) of finite energy
ψ_{j,k}(t) = (1/√(s₀ʲ)) ψ( (t − kτ₀s₀ʲ) / s₀ʲ )     – discrete set of wavelets
A change of the scale coefficient by one step corresponds to an octave change in frequency when the scales are integer powers of 2.
Signal reconstruction:
x(t) = c ∑_j ∑_k c_{j,k} · ψ_{j,k}(t),   c – constant value
(Figure: dyadic sampling of the t-f plane, scales s₀, 2s₀, 4s₀, …)
The required condition for lossless reconstruction of a signal is the so-called stability condition:
A‖f(t)‖² ≤ ∑_{j,k} |⟨f(t), ψ_{j,k}(t)⟩|² ≤ B‖f(t)‖²
where A, B are positive numbers, 0 < A ≤ B < ∞.
A family of wavelets ψ_{j,k}(t) fulfilling the above condition is called a frame with bounds A, B.
If A = B then the frame is called tight and the set of wavelets forms an orthogonal basis:
∫_{−∞}^{∞} ψ_{j,k}(t) ψ*_{m,n}(t) dt = { 1 ⇔ j = m and k = n;   0 ⇔ j ≠ m or k ≠ n }
where ‖ψ_{j,k}(t)‖ = 1.
(Figure: a signal and its shifted copy on the dyadic grid.)
The amplitudes of the wavelet coefficients are different for a signal and its shifted "version" (the wavelet expansion is not shift-invariant).
Discrete wavelet transform
Changing the scale of a wavelet (time domain) by a factor of 2 results in shifting and narrowing its frequency counterpart by a factor of 2 – reaching zero frequency is practically impossible (it is reached only in the limit).
(Figure: the octave bands covered by the scaled wavelets: …, ω_n/8, ω_n/4, ω_n/2, ω_n.)
Multiresolution analysis
The signal is split into an approximation A1 and details D1 (each branch followed by downsampling by 2); the approximation A1 is split again into A2 and details D2; A2 into A3 and details D3, and so on.
Daubechies wavelets
The Daubechies wavelets are named after their inventor (or discoverer?) – Ingrid Daubechies. They are characterized by a maximal number of vanishing moments for a given support.
The moment of a wavelet (of p-th order) is defined as follows:  M_p = ∫_{−∞}^{∞} tᵖ ψ(t) dt
𝒲(f(0, s)) = (1/√s) [ f(0)M₀s + (f⁽¹⁾(0)/1!)M₁s² + (f⁽²⁾(0)/2!)M₂s³ + … + (f⁽ⁿ⁾(0)/n!)M_ns^{n+1} + O(s^{n+2}) ]
Having p vanishing moments means that the wavelet coefficients for p-th order polynomials will be zero; that is, any polynomial signal up to order p − 1 can be represented completely in scaling space. In theory, more vanishing moments means that the scaling function can represent more complex signals accurately. Thus p is also called the accuracy of the wavelet.
Daubechies 1 (Haar)
The Daubechies 1 wavelet (1 vanishing moment; support length 1) is also called the Haar wavelet – it has a compact, closed analytical form.
(Figures: Daubechies 2, Daubechies 3, Daubechies 6, Daubechies 14, Coiflet 4, Symlet 4 wavelets.)
Symlets (symmetrical wavelets) are modified Daubechies wavelets in which symmetry was taken into consideration. Their other features are the same as for the Daubechies wavelets.
(Figure: analyzed signal (length = 1024), Symlet4 wavelet; CWT coefficients C_{a,b} for scales 1–61, coloration mode: init + by scale + abs.)
(Figure: analyzed signal (length = 1000), Coiflet4 wavelet; CWT coefficients C_{a,b} for scales 1–61, coloration mode: init + by scale + abs.)
(Figure: decomposition at level 5, s = a5 + d5 + d4 + d3 + d2 + d1, Daubechies5 wavelet – the signal s, the approximation a5 and the details d5…d1 plotted versus sample index.)
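A sketch of the same level-5 decomposition using the PyWavelets package (assuming it is installed; the test signal is an arbitrary illustration):

```python
import numpy as np
import pywt  # PyWavelets, assumed to be available

# test signal: low-frequency sine with a burst of high-frequency noise
n = np.arange(1000)
x = np.sin(2 * np.pi * 0.01 * n)
x[400:500] += 0.3 * np.random.randn(100)

# 5-level multiresolution decomposition with the Daubechies-5 wavelet
coeffs = pywt.wavedec(x, 'db5', level=5)      # [a5, d5, d4, d3, d2, d1]
a5, d5, d4, d3, d2, d1 = coeffs

# perfect reconstruction of the signal from approximation + details
x_rec = pywt.waverec(coeffs, 'db5')
assert np.allclose(x, x_rec[:len(x)])
```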
Z Transform
Substitute z = re^{jω}:
X(re^{jω}) = ∑_{n=−∞}^{∞} x[n] · (re^{jω})^{−n} = ∑_{n=−∞}^{∞} (x[n] r^{−n}) e^{−jωn}
If r = 1 this is the Fourier transform of x[n]; in general it is the Fourier transform of the product of the sequence x[n] and the sequence r^{−n}.
Thus, the Z transform can be regarded as a generalisation of the Fourier transform of the sequence x[n].
Calculation of the Z transform over the unit circle → Fourier transform:
z = 1 → ω = 0;   z = j → ω = π/2;   z = −1 → ω = π
(Figure: the unit circle in the z-plane, Im(z) versus Re(z), with the angle ω.)
The set of points in the complex plane for which the Z-transform of x[n] is convergent → Region of Convergence (ROC).
∑_{n=−∞}^{∞} |x[n] r^{−n}| < ∞     – convergence condition of the Z.T.
e.g. the unit step x[n] = u[n]: in the general case it is not absolutely summable, but the sequence u[n]r^{−n} is:
∑_{n=−∞}^{∞} |u[n] r^{−n}|   is convergent for r > 1
From the convergence of ∑_{n=−∞}^{∞} |x[n] z^{−n}| it follows that the ROC depends only on |z| and contains all the z for which the Z.T. exists.
(Figure: Im(z) versus Re(z).) The ROC has the form of a ring in the complex plane; its outer radius can extend to ∞.
If the ROC contains the unit circle, then the (discrete-time) Fourier transform of x[n] exists.
The Z.T. is usually expressed in the form of a rational function – a ratio of two polynomials:
X(z) = P(z)/Q(z) = p₀ ∏_{l=1}^{M} (1 − γ_l z^{−1}) / [ q₀ ∏_{l=1}^{N} (1 − λ_l z^{−1}) ] = (z^N/z^M) · p₀ ∏_{l=1}^{M} (z − γ_l) / [ q₀ ∏_{l=1}^{N} (z − λ_l) ]
γ_l – zeros,  λ_l – poles.
(Figure: zeros (○) and poles (×) in the z-plane.)
Example 7: Find the Z transform and the ROC for the right-handed sequence x[n] = aⁿu[n].
X(z) = ∑_{n=−∞}^{∞} aⁿu[n] · z^{−n} = ∑_{n=0}^{∞} (az^{−1})ⁿ
X(z) is convergent when ∑_{n=0}^{∞} |az^{−1}|ⁿ < ∞,  i.e.  ROC: |z| > |a|
X(z) = ∑_{n=0}^{∞} (az^{−1})ⁿ = 1/(1 − az^{−1}) = z/(z − a)        zero: z = 0,  pole: z = a
(Figure: z-plane with the unit circle, the pole at z = a and the ROC outside the circle |z| = |a|.)
Example 8: Find the Z transform and the ROC for the left-handed sequence x[n] = −aⁿu[−n − 1].
X(z) = −∑_{n=−∞}^{∞} aⁿu[−n − 1] z^{−n} = −∑_{n=−∞}^{−1} aⁿz^{−n} = −∑_{n=1}^{∞} a^{−n}zⁿ = 1 − ∑_{n=0}^{∞} (a^{−1}z)ⁿ
The sum converges for |a^{−1}z| < 1, i.e. ROC: |z| < |a|, and then
X(z) = 1 − 1/(1 − a^{−1}z) = 1/(1 − az^{−1}) = z/(z − a)
– the same expression as in Example 7, but with a different ROC (the inside of the circle |z| = |a|).
(Figure: z-plane with the unit circle, the pole at z = a and the ROC inside the circle |z| = |a|.)
Example 9: Find the Z transform and the ROC for the sum of two sequences
x[n] = (1/2)ⁿu[n] + (−1/3)ⁿu[n]
X(z) = ∑_{n=−∞}^{∞} { (1/2)ⁿu[n] + (−1/3)ⁿu[n] } z^{−n} = ∑_{n=0}^{∞} ((1/2)z^{−1})ⁿ + ∑_{n=0}^{∞} (−(1/3)z^{−1})ⁿ
= 1/(1 − (1/2)z^{−1}) + 1/(1 + (1/3)z^{−1}) = [ 1 + (1/3)z^{−1} + 1 − (1/2)z^{−1} ] / [ (1 − (1/2)z^{−1})(1 + (1/3)z^{−1}) ]
= 2(1 − (1/12)z^{−1}) / [ (1 − (1/2)z^{−1})(1 + (1/3)z^{−1}) ] = 2z(z − 1/12) / [ (z − 1/2)(z + 1/3) ]
zeros: 0, 1/12;   poles: 1/2, −1/3
ROC:  |(1/2)z^{−1}| < 1 ∧ |(1/3)z^{−1}| < 1  ⇒  |z| > 1/2 ∧ |z| > 1/3  ⇒  |z| > 1/2
(Figures: the ROC of (1/2)ⁿu[n] is |z| > 1/2, the ROC of (−1/3)ⁿu[n] is |z| > 1/3, and the ROC of the sum is the intersection |z| > 1/2, with poles at 1/2 and −1/3 and zeros at 0 and 1/12.)
Example 10: Find the Z transform and the ROC for the sum of two sequences
x[n] = (−1/3)ⁿu[n] − (1/2)ⁿu[−n − 1]
𝒵{(−1/3)ⁿu[n]} = 1/(1 + (1/3)z^{−1}),  |z| > 1/3        𝒵{−(1/2)ⁿu[−n − 1]} = 1/(1 − (1/2)z^{−1}),  |z| < 1/2
𝒵{(−1/3)ⁿu[n] − (1/2)ⁿu[−n − 1]} = 1/(1 + (1/3)z^{−1}) + 1/(1 − (1/2)z^{−1})
= 2(1 − (1/12)z^{−1}) / [ (1 − (1/2)z^{−1})(1 + (1/3)z^{−1}) ] = 2z(z − 1/12) / [ (z − 1/2)(z + 1/3) ]
ROC:  |z| > 1/3 ∧ |z| < 1/2,  i.e. the ring 1/3 < |z| < 1/2
(Figure: z-plane with poles at −1/3 and 1/2, zeros at 0 and 1/12, and the ring-shaped ROC between the two pole radii.)
Finite-length exponential sequence:
x[n] = { aⁿ for 0 ≤ n ≤ N − 1;   0 for the others }
X(z) = ∑_{n=0}^{N−1} aⁿz^{−n} = ∑_{n=0}^{N−1} (az^{−1})ⁿ = (1 − (az^{−1})^N) / (1 − az^{−1}) = (1/z^{N−1}) · (z^N − a^N)/(z − a)
ROC:  ∑_{n=0}^{N−1} |az^{−1}|ⁿ < ∞  ⇒  |a| < ∞,  z ≠ 0   (the whole z-plane except z = 0)
(Figure: pole-zero plot for N = 16 – the zeros are spread uniformly on the circle of radius |a| (the zero at z = a cancels the pole there), and there is a 15-th order pole at z = 0.)
Two-sided exponential sequence:  x[n] = aⁿ,  −∞ < n < ∞
X(z) = ∑_{n=−∞}^{∞} aⁿu[n] z^{−n} + ∑_{n=−∞}^{∞} aⁿu[−n − 1] z^{−n} = ∑_{n=0}^{∞} (az^{−1})ⁿ + ∑_{n=−∞}^{−1} aⁿz^{−n} = ∑_{n=0}^{∞} (az^{−1})ⁿ + ∑_{n=1}^{∞} (a^{−1}z)ⁿ
The first sum requires |z| > |a| and the second |z| < |a|; the two regions have no common part.
The ROC does not exist – there is no Z transform; there is no z for which the transform would be convergent.
Example 13: Find the Z.T. and the ROC for the sequence x[n] = aⁿ cos[ω₀n] u[n].
x[n] = aⁿ cos[ω₀n] u[n] = (1/2) u[n] [ (ae^{jω₀})ⁿ + (ae^{−jω₀})ⁿ ]
X(z) = (1/2) ∑_{n=0}^{∞} (ae^{jω₀}z^{−1})ⁿ + (1/2) ∑_{n=0}^{∞} (ae^{−jω₀}z^{−1})ⁿ = (1/2) · 1/(1 − ae^{jω₀}z^{−1}) + (1/2) · 1/(1 − ae^{−jω₀}z^{−1})
= (1 − az^{−1}cos(ω₀)) / (1 − 2az^{−1}cos(ω₀) + a²z^{−2}),        ROC: |z| > |a|
Z transform properties
1. Linearity:  a·x₁[n] + b·x₂[n]  ⟷  a·X₁(z) + b·X₂(z)
The ROC is at least equal to R_{X1} ∩ R_{X2}; it can be larger if poles and zeros cancel each other.
Example:
x[n] = aⁿu[n] − aⁿu[n − N] = { aⁿ for 0 ≤ n ≤ N − 1;   0 for the others }
Each term separately has ROC |z| > |a|, but the difference is a finite-length sequence:
X(z) = ∑_{n=0}^{N−1} aⁿz^{−n} = ∑_{n=0}^{N−1} (az^{−1})ⁿ,    ∑_{n=0}^{N−1} |az^{−1}|ⁿ < ∞  ⇒  |a| < ∞,  z ≠ 0
so the ROC of a sum of two (or more) sequences can be greater than the ROCs of the sequences taken separately.
2. Translation in the time domain:  x[n − n₀]  ⟷  z^{−n₀} X(z)     (ROC as for x[n], except possibly z = 0 or z = ∞)
3. Multiplication by an exponential sequence:  z₀ⁿ · x[n]  ⟷  X(z/z₀)
If the ROC of 𝒵{x[n]} is r_w < |z| < r_z, then the ROC of 𝒵{z₀ⁿ·x[n]} is |z₀|r_w < |z| < |z₀|r_z.
4. Differentiation of X(z):  n · x[n]  ⟷  −z dX(z)/dz     (ROC as for x[n])
5. Conjugation and time reversal of a complex sequence:  x*[−n]  ⟷  X*(1/z*)
If the ROC of 𝒵{x[n]} is r_w < |z| < r_z, then the ROC of 𝒵{x*[−n]} is 1/r_z < |z| < 1/r_w.
7. Convolution:  x₁[n] ⋆ x₂[n]  ⟷  X₁(z)·X₂(z)     (ROC at least R_{X1} ∩ R_{X2})
Inverse Z transform:
x[n] = (1/2πj) ∮_Γ X(z) · z^{n−1} dz,     Γ – a closed contour inside the ROC, containing z = 0
The Cauchy integral theorem:  (1/2πj) ∮_Γ z^{k−1} dz = { 1, k = 0;   0, k ≠ 0 }
Substituting the definition of X(z):
(1/2πj) ∮_Γ X(z) z^{n−1} dz = (1/2πj) ∮_Γ { ∑_{k=−∞}^{∞} x[k] z^{−k} } z^{n−1} dz = ∑_{k=−∞}^{∞} x[k] (1/2πj) ∮_Γ z^{−k+n−1} dz = x[n]
In practice the inverse Z transform is calculated in the following ways:
• the straightforward method
• the "long" division of polynomials method
• the decomposition into simple (partial) fractions method
• the residuum method
The straightforward method
X(z) = ∑_{n=n₁}^{n₂} b_n z^{−n},  for example:
X(z) = 6 + 3z^{−1} − 4z^{−2} + 2z^{−3} + z^{−4} = x[0] + x[1]z^{−1} + x[2]z^{−2} + x[3]z^{−3} + x[4]z^{−4}
Hence:
x[0] = 6,  x[1] = 3,  x[2] = −4,  x[3] = 2,  x[4] = 1
The "long" division method
X(z) = (2 + 2z^{−1} + z^{−2}) / (1 + z^{−1})
Dividing the numerator polynomial by the denominator:
2 + 2z^{−1} + z^{−2} = 2·(1 + z^{−1}) + z^{−2};   the difference z^{−2} = z^{−2}(1 + z^{−1}) − z^{−3};   the difference −z^{−3} = −z^{−3}(1 + z^{−1}) + z^{−4};  and so on,
so X(z) = 2 + z^{−2} − z^{−3} + z^{−4} − …
x[n] = { 0, n < 0;  2, n = 0;  0, n = 1;  (−1)ⁿ, n ≥ 2 }        x[n] = δ[n] + δ[n − 1] + (−1)ⁿu[n]
Under the assumption that the Z.T. is written in the form
∑_{k=0}^{∞} c_k z^{−k} = ∑_{m=0}^{∞} b_m z^{−m} / ∑_{n=0}^{∞} a_n z^{−n}
the general formula for the polynomial coefficients that results from the division is:
c₀ = b₀/a₀,   c₁ = (b₁ − c₀a₁)/a₀,   c₂ = (b₂ − c₁a₁ − c₀a₂)/a₀,   …,   c_k = (b_k − ∑_{i=1}^{k} c_{k−i}a_i)/a₀
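A sketch implementing the recursive division formula above and checking it on the example X(z) = (2 + 2z⁻¹ + z⁻²)/(1 + z⁻¹):

```python
def power_series(b, a, K):
    """First K coefficients c_k of the power-series expansion of B(z)/A(z) in z^{-1}."""
    b = list(b) + [0.0] * K
    a = list(a) + [0.0] * K
    c = []
    for k in range(K):
        s = b[k] - sum(c[k - i] * a[i] for i in range(1, k + 1))
        c.append(s / a[0])
    return c

# X(z) = (2 + 2 z^-1 + z^-2)/(1 + z^-1)  ->  x[n] = 2, 0, 1, -1, 1, -1, ...
print(power_series([2, 2, 1], [1, 1], 8))
```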
Partial fraction decomposition method
(the transform in the form of a ratio of two polynomials)
X(z) = (2 + 2z^{−1} + z^{−2}) / (1 + z^{−1}) = [ (1 + z^{−1}) + z^{−1}(1 + z^{−1}) + 1 ] / (1 + z^{−1}) = 1 + z^{−1} + 1/(1 + z^{−1}),   |z| > 1
Generally, if the degree of the polynomial in the numerator is not smaller than that in the denominator, then
X(z) = X₁(z) + X₂(z) = ∑_{k=0}^{K} c_k z^{−k} + ∑_{m=0}^{M} c_m z^{−m} / ∑_{n=0}^{N} a_n z^{−n},   M ≤ N
X₁(z) is inverted directly from the Z-transform table (the "direct" method), X₂(z) by expansion into simple fractions.
For a pole p_l of multiplicity m, the coefficients of the terms 1/(1 − p_l z^{−1})^j, j = 1, …, m, are obtained from
d_{l,j} = (1/[(m − j)! (−p_l)^{m−j}]) · [ d^{m−j}/dw^{m−j} ( (1 − p_l w)^m X(1/w) ) ]_{w = 1/p_l}
Using of residues
If X(z) is a rational function (i.e. a ratio of polynomials), then x[n] = ∑_k ρ_k,
where ρ_k are the residues of F(z) = z^{n−1}X(z) at the poles enclosed by the integration contour.
X(z) = (2 + 2z^{−1} + z^{−2}) / (1 + z^{−1}) = (2z² + 2z + 1) / (z² + z) = (2z² + 2z + 1) / (z(1 + z))
F(z) = z^{n−1} (2z² + 2z + 1) / (z(1 + z)) = zⁿ(2z² + 2z + 1) / (z²(1 + z))
n ≥ 2: one single pole p₁ = −1
n = 1: two single poles p₁ = −1, p₂ = 0
n = 0: two poles: p₁ = −1 (single), p₂ = 0 (double)
FFT algorithms
DFT algorithms
• Sequential algorithms
– direct
– Radix-2
– DIT - decimation in time
– DIF - decimation in frequency
– Radix-4 , radix-2n class (Radix-8, Radix-16)
– for any N (e.g. Bluestein algorithm)
– others
• Parallel algorithms
– without inter-processor permutations
– with inter-processor permutations
– others
Direct (straightforward) method
X[k] = (1/N) ∑_{n=0}^{N−1} x[n] · e^{−j(2π/N)kn} = (1/N) ∑_{n=0}^{N−1} x[n] · W_N^{−kn},      W_N = e^{j2π/N}
In matrix form:
[X[0]; X[1]; X[2]; …; X[N−1]] = (1/N) · [1 1 1 ⋯ 1;  1 W_N^{−1} W_N^{−2} ⋯ W_N^{−(N−1)};  1 W_N^{−2} W_N^{−4} ⋯ W_N^{−2(N−1)};  ⋮;  1 W_N^{−(N−1)} W_N^{−2(N−1)} ⋯ W_N^{−(N−1)(N−1)}] · [x[0]; x[1]; x[2]; …; x[N−1]]
In the FFT derivations below, W_N denotes (for simplicity) e^{−j2π/N}, so that the DFT kernel is written as W_N^{kn}.
Direct computation of all N coefficients requires a computational time proportional to N².
Radix-2 DIT (decimation in time)
The calculation of X[k] can be split and carried out independently for k = 0, 1, 2, …, N/2 − 1 and for k = N/2, N/2 + 1, …, N − 1. Splitting the input into even- and odd-indexed samples:
X[k] = ∑_{n=0}^{N/2−1} x[2n] · W_{N/2}^{nk} + W_N^k ∑_{n=0}^{N/2−1} x[2n + 1] · W_{N/2}^{nk} = A[k] + W_N^k · B[k],    k = 0, 1, 2, …, N/2 − 1
For the second half:
X[k + N/2] = ∑_{n=0}^{N/2−1} x[2n] · W_{N/2}^{n(k+N/2)} + W_N^{k+N/2} ∑_{n=0}^{N/2−1} x[2n + 1] · W_{N/2}^{n(k+N/2)}
W_{N/2}^{n(k+N/2)} = W_{N/2}^{nk} · W_{N/2}^{n·N/2} = W_{N/2}^{nk} e^{−j(2π/(N/2))·n·N/2} = W_{N/2}^{nk} e^{−j2πn} = W_{N/2}^{nk} · [cos(2πn) − j sin(2πn)] = W_{N/2}^{nk}
W_N^{k+N/2} = W_N^k · W_N^{N/2} = W_N^k e^{−jπ} = −W_N^k
Hence:
X[k + N/2] = ∑_{n=0}^{N/2−1} x[2n] · W_{N/2}^{nk} − W_N^k ∑_{n=0}^{N/2−1} x[2n + 1] · W_{N/2}^{nk} = A[k] − W_N^k · B[k],    k = 0, 1, 2, …, N/2 − 1
In order to calculate the second half of the Fourier coefficients, the results already computed for the first half (A[k] and B[k]) can be reused.
In order to calculate an N-point DFT it is therefore sufficient to calculate two N/2-point DFTs for k = 0, 1, 2, …, N/2 − 1 and then combine their results with the twiddle factors W_N^k.
For N = 8 (k = 0, 1, 2, 3):
A[k] = ∑_{n=0}^{3} x[2n] · W_4^{nk},        B[k] = ∑_{n=0}^{3} x[2n + 1] · W_4^{nk}
X[k] = A[k] + W_8^k · B[k],        X[k + 4] = A[k] − W_8^k · B[k]
(Figure: the even samples x[0], x[2], x[4], x[6] feed a 4-point DFT producing A[0]…A[3]; the odd samples x[1], x[3], x[5], x[7] feed a second 4-point DFT producing B[0]…B[3]; the outputs are combined by butterflies with the twiddle factors W_8^0…W_8^3 to give X[0]…X[7].)
An N/2-point DFT can be split into two DFTs of N/4 points in the same way:
A[k] = ∑_{n=0}^{N/2−1} x[2n] · W_{N/2}^{nk} = ∑_{n=0}^{N/4−1} x[2(2n)] · W_{N/2}^{2nk} + ∑_{n=0}^{N/4−1} x[2(2n + 1)] · W_{N/2}^{(2n+1)k}
For N = 8 (so that N/2 = 4, k = 0, 1, 2, 3):
A[k] = ∑_{n=0}^{1} x[4n] · W_4^{2nk} + W_4^k ∑_{n=0}^{1} x[4n + 2] · W_4^{2nk}
Because W_4^{2nk} = e^{−j(2π/4)·2nk} = e^{−j(2π/2)·nk} = W_2^{nk}, hence:
A[k] = ∑_{n=0}^{1} x[4n] · W_2^{nk} + W_4^k ∑_{n=0}^{1} x[4n + 2] · W_2^{nk}
Similarly to the calculation of X[k], A[k] can be split into two subsequences, one for k = 0, 1, …, N/4 − 1 and one for k = N/4, N/4 + 1, …, N/2 − 1:
A[k + N/4] = ∑_{n=0}^{N/4−1} x[4n] · W_{N/4}^{n(k+N/4)} + W_{N/2}^{k+N/4} ∑_{n=0}^{N/4−1} x[4n + 2] · W_{N/4}^{n(k+N/4)}
W_{N/4}^{n(k+N/4)} = W_{N/4}^{nk} · W_{N/4}^{n·N/4} = W_{N/4}^{nk} e^{−j2πn} = W_{N/4}^{nk} · [cos(2πn) − j sin(2πn)] = W_{N/4}^{nk}
W_{N/2}^{k+N/4} = W_{N/2}^k · W_{N/2}^{N/4} = W_{N/2}^k e^{−jπ} = −W_{N/2}^k
Hence:
A[k + N/4] = ∑_{n=0}^{N/4−1} x[4n] · W_{N/4}^{nk} − W_{N/2}^k ∑_{n=0}^{N/4−1} x[4n + 2] · W_{N/4}^{nk} = C[k] − W_{N/2}^k · D[k]
and for N = 8 (k = 0, 1):
A[k] = ∑_{n=0}^{1} x[4n] · W_2^{nk} + W_4^k ∑_{n=0}^{1} x[4n + 2] · W_2^{nk} = C[k] + W_4^k · D[k]
A[k + 2] = ∑_{n=0}^{1} x[4n] · W_2^{nk} − W_4^k ∑_{n=0}^{1} x[4n + 2] · W_2^{nk} = C[k] − W_4^k · D[k]
Similarly, B[k] can be calculated following the same pattern, yielding E[k] and F[k].
For N = 8 (k = 0, 1):
C[k] = ∑_{n=0}^{1} x[4n] · W_2^{nk}        (inputs x[0], x[4]),        A[k] = C[k] + W_4^k · D[k]
D[k] = ∑_{n=0}^{1} x[4n + 2] · W_2^{nk}    (inputs x[2], x[6]),        A[k + 2] = C[k] − W_4^k · D[k]
E[k] = ∑_{n=0}^{1} x[4n + 1] · W_2^{nk}    (inputs x[1], x[5]),        B[k] = E[k] + W_4^k · F[k]
F[k] = ∑_{n=0}^{1} x[4n + 3] · W_2^{nk}    (inputs x[3], x[7]),        B[k + 2] = E[k] − W_4^k · F[k]
(Figure: two stages of 2-point DFTs with twiddle factors W_4^0, W_4^1 combining C, D into A[0]…A[3] and E, F into B[0]…B[3].)
The first stage consists of 2-point DFTs (single butterflies) acting on the sample pairs (x[0], x[4]), (x[2], x[6]), (x[1], x[5]), (x[3], x[7]), producing C[0], C[1], D[0], D[1], E[0], E[1], F[0], F[1].
The single butterfly for N = 2: top output = x[k] + W·x[k + N/2], bottom output = x[k] − W·x[k + N/2].
Because W_N^{r+N/2} = W_N^r · W_N^{N/2} = W_N^r · e^{−jπ} = −W_N^r, only one complex multiplication per butterfly is needed; the factor −1 is absorbed into the subtraction in the lower branch.
8-point FFT (Radix-2 DIT)
(Figure: the complete 8-point flow graph – three butterfly stages transforming x[0]…x[7], taken in bit-reversed order, into X[0]…X[7]; X_{m−1}[p] and X_m[p] denote the node values at the input and output of stage m.)
Each butterfly requires one complex multiplication and two complex additions.
(Figure: the twiddle factors on the unit circle: W_8^0 = W_4^0 = W_2^0 = 1, W_8^2 = W_4^1, W_8^4 = W_4^2 = W_2^1 = −1, W_8^6 = W_4^3, and the intermediate values W_8^1, W_8^3, W_8^5, W_8^7.)
Radix-2 DIF (decimation in frequency)
X[k] = ∑_{n=0}^{N−1} x[n] · W_N^{−kn},    k = 0, 1, 2, …, N − 1
The frequency points X[k] are decimated rather than the time samples x[n] – the even- and the odd-indexed coefficients are computed independently:
X[2k] = ∑_{n=0}^{N−1} x[n] · W_N^{−2kn},        X[2k + 1] = ∑_{n=0}^{N−1} x[n] · W_N^{−(2k+1)n},        k = 0, 1, 2, …, N/2 − 1
For the even-indexed coefficients:
X[2k] = ∑_{n=0}^{N/2−1} [ x[n] + x[n + N/2] ] · W_{N/2}^{−kn}
For the odd-indexed coefficients:
X[2k + 1] = ∑_{n=0}^{N−1} x[n] · W_N^{−(2k+1)n} = ∑_{n=0}^{N/2−1} x[n] · W_N^{−(2k+1)n} + ∑_{n=0}^{N/2−1} x[n + N/2] · W_N^{−(2k+1)(n+N/2)}
= ∑_{n=0}^{N/2−1} x[n] · W_N^{−(2k+1)n} + W_N^{−(2k+1)(N/2)} ∑_{n=0}^{N/2−1} x[n + N/2] · W_N^{−(2k+1)n}
W_N^{−(2k+1)(N/2)} = W_N^{−kN} · W_N^{−N/2} = e^{−j2πk} · e^{−jπ} = 1 · (−1) = −1
X[2k + 1] = ∑_{n=0}^{N/2−1} x[n] · W_N^{−(2k+1)n} − ∑_{n=0}^{N/2−1} x[n + N/2] · W_N^{−(2k+1)n} = ∑_{n=0}^{N/2−1} [ x[n] − x[n + N/2] ] · W_N^{−(2k+1)n}
= ∑_{n=0}^{N/2−1} { [ x[n] − x[n + N/2] ] W_N^{−n} } W_{N/2}^{−kn}
The DFT of the original signal is thus replaced by the DFTs of the two N/2-point components x[n] + x[n + N/2] and (x[n] − x[n + N/2])·W_N^{−n}.
The FFT algorithm requires (N/2)·log₂N complex multiplications and N·log₂N complex additions.
Comparison of a direct N-point DFT with two N/2-point DFTs:
N    | multiplications: N-point DFT | two N/2-point DFTs | gain   | additions: N-point DFT | two N/2-point DFTs | gain
8    | 64     | 36     | 43.75% | 56     | 32     | 42.86%
16   | 256    | 136    | 46.88% | 240    | 128    | 46.67%
32   | 1024   | 528    | 48.44% | 992    | 512    | 48.39%
64   | 4096   | 2080   | 49.22% | 4032   | 2048   | 49.21%
128  | 16384  | 8256   | 49.61% | 16256  | 8192   | 49.61%
256  | 65536  | 32896  | 49.80% | 65280  | 32768  | 49.80%
Various FFT algorithms comparison
E. Chu, A. George, Inside the FFT Black Box, CRC Press LLC, 2000
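A compact recursive radix-2 DIT FFT following the A[k] ± W_N^k·B[k] scheme derived above, checked against numpy.fft.fft (N must be a power of two; the 1/N normalization used in the direct-DFT slide is omitted here, as in NumPy):

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    A = fft_radix2(x[0::2])                  # DFT of even-indexed samples
    B = fft_radix2(x[1::2])                  # DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([A + W * B, A - W * B])     # X[k], X[k + N/2]

x = np.random.randn(256)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```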
Discrete Cosine Transform
DCT definition:
X[k] = c[k] · ∑_{n=0}^{N−1} x[n] · cos( π(2n + 1)k / 2N ),    k = 0, 1, 2, …, N − 1
c[0] = √(1/N),    c[k] = √(2/N) for k = 1, 2, …, N − 1
IDCT:
x[n] = ∑_{k=0}^{N−1} c[k] X[k] · cos( π(2n + 1)k / 2N ),    n = 0, 1, 2, …, N − 1
The DCT can be calculated by separation into even and odd samples:
X[k] = c[k] · { ∑_{n=0}^{N/2−1} x[2n] · cos( π(4n + 1)k / 2N ) + ∑_{n=0}^{N/2−1} x[2n + 1] · cos( π(4n + 3)k / 2N ) }
= c[k] · { ∑_{n=0}^{N/2−1} x̃[n] · cos( π(4n + 1)k / 2N ) + ∑_{n=0}^{N/2−1} x̃[N − n − 1] · cos( π(4n + 3)k / 2N ) }
where x̃[n] = x[2n] and x̃[N − n − 1] = x[2n + 1] for n = 0, 1, …, N/2 − 1 (the reordered sequence). Hence:
X[k] = c[k] · ∑_{n=0}^{N−1} x̃[n] · cos( π(4n + 1)k / 2N ) = Re[ c[k] · e^{−jπk/2N} ∑_{n=0}^{N−1} x̃[n] · e^{−j2πkn/N} ]
so the DCT can be computed with an N-point FFT of the reordered sequence x̃[n].
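A sketch of the FFT-based DCT described above, verified against the direct cosine-sum definition:

```python
import numpy as np

def dct_direct(x):
    """DCT-II straight from the definition X[k] = c[k] sum_n x[n] cos(pi(2n+1)k/(2N))."""
    N = len(x)
    n = np.arange(N)
    c = np.full(N, np.sqrt(2.0 / N)); c[0] = np.sqrt(1.0 / N)
    return np.array([c[k] * np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])

def dct_via_fft(x):
    """Same DCT computed with one FFT of the reordered sequence x~."""
    N = len(x)
    k = np.arange(N)
    c = np.full(N, np.sqrt(2.0 / N)); c[0] = np.sqrt(1.0 / N)
    x_tilde = np.concatenate([x[0::2], x[1::2][::-1]])   # even samples, then reversed odd samples
    return c * np.real(np.exp(-1j * np.pi * k / (2 * N)) * np.fft.fft(x_tilde))

x = np.random.randn(16)
assert np.allclose(dct_direct(x), dct_via_fft(x))
```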
Calculation of the inverse DFT using the forward DFT:
X[k] = (1/N) ∑_{n=0}^{N−1} x[n] · e^{−jk(2π/N)n}        x[n] = ∑_{k=0}^{N−1} X[k] · e^{jk(2π/N)n}
Using (a + b)* = a* + b* and (a·b)* = a*·b*:
x*[n] = ( ∑_{k=0}^{N−1} X[k] · e^{jk(2π/N)n} )* = ∑_{k=0}^{N−1} X*[k] · e^{−jk(2π/N)n}
x[n] = ( ∑_{k=0}^{N−1} X*[k] · e^{−jk(2π/N)n} )*
so the inverse DFT can be obtained by conjugating the coefficients, applying the forward-DFT (FFT) algorithm, and conjugating the result. Typical FFT applications therefore include:
1. calculation of the inverse DFT with the forward algorithm,
2. calculation of convolution.
Filters – classification
(Figures: magnitude responses of low-pass (LP) and high-pass (HP) filters.)
Butterworth – no ripples in either the passband or the stopband; a flat and smooth frequency plot.
Elliptic – ripples both in the passband and in the stopband; the best possible steepness of the slope; highly nonlinear phase-frequency plot.
Bessel – no ripples in either the passband or the stopband; low steepness; highly linear phase-frequency plot; low-pass only.
Digital filters
Digital filter – an algorithm (software or hardware) that transforms (converts) an input signal x[n] into an output signal y[n] that possesses the desired properties (depending on the specific application), e.g. noise reduction, compression, changing the frequency/phase properties of the signal.
Advantages:
• Frequency-response parameters – attenuation levels considerably better than for analogue counterparts of the same order; greater steepness; (possibly) no ripples in the transition band.
• A filter with a linear phase plot in the passband is possible – no phase distortion in the usable band.
• Filter parameters are constant in time; no electronic elements (resistors, capacitors, etc.) are required, so the problem of element ageing does not exist.
• Filter coefficients as well as the filter structure can be changed in "real time" – no circuit switching is needed.
• Adaptive filters are possible – their frequency/phase plots can be changed according to defined input-signal properties.
Digital filters – drawbacks
Description by a difference equation:
y[n] = ∑_{l=0}^{N} b_l x[n − l] − ∑_{k=1}^{M} a_k y[n − k]
y[n] + ∑_{k=1}^{M} a_k y[n − k] = ∑_{l=0}^{N} b_l x[n − l]        ∑_{k=0}^{M} a_k y[n − k] = ∑_{l=0}^{N} b_l x[n − l]   (a₀ = 1)
Z transform:
∑_{k=0}^{M} a_k z^{−k} · Y(z) = ∑_{l=0}^{N} b_l z^{−l} · X(z)
filter transmittance:  H(z) = Y(z)/X(z) = ∑_{l=0}^{N} b_l z^{−l} / ( 1 + ∑_{k=1}^{M} a_k z^{−k} )
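A direct implementation of the difference equation above (a sketch; the first-order coefficients are an arbitrary example), cross-checked with scipy.signal.lfilter:

```python
import numpy as np
from scipy.signal import lfilter

def difference_equation(b, a, x):
    """y[n] = sum_l b[l] x[n-l] - sum_k a[k] y[n-k], with a[0] assumed equal to 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[l] * x[n - l] for l in range(len(b)) if n - l >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

b = [0.2, 0.2]          # numerator (non-recursive) coefficients
a = [1.0, -0.6]         # denominator (recursive) coefficients, a[0] = 1
x = np.random.randn(100)
assert np.allclose(difference_equation(b, a, x), lfilter(b, a, x))
```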
Digital filters – classification: recursive digital filters and non-recursive digital filters.
Properties of non-recursive filters:
• All the coefficients a_k of the transmittance denominator are equal to zero:  H(z) = Y(z)/X(z) = ∑_{l=0}^{N} b_l z^{−l}
• The output signal of a non-recursive filter depends only on samples of the input signal.
• The impulse response of a non-recursive filter is always finite in time → filters of finite impulse response (FIR) are always stable.
• Not every finite-impulse-response filter is non-recursive.
Digital filter realization → how to convert (transform) a given input signal into the output signal → e.g. using the difference equation.
(Figure: basic building blocks – adder (x + y), multiplier (a·x), and (time) delay block z⁻¹ mapping x(n) to x(n − 1).)
Example 12: Draw a block diagram and a signal-flow graph for a filter described by the following difference equation:
(Figure: block diagram and the corresponding signal-flow graph built from z⁻¹ delay blocks and the multiplier coefficients b and a, connecting x[n] to y[n].)