Ch-2
RP Singh
School of Electrical and Computer Engineering
Haramaya Institute of Technology
Our focus in the next few lectures
• A discrete-time signal is a function of an independent integer variable.
• x(n) is not defined at instants between two successive samples.
• Functional representation: x(n) = \begin{cases} 1, & n = 0 \\ 0, & \text{elsewhere} \end{cases}
Figures: the unit sample, unit step, and unit ramp sequences.
Exponential signals: x(n) = a^n for all n
Energy of Signals vs. Power of Signals
E = \sum_{n=-\infty}^{\infty} |x(n)|^2

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x(n)|^2
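As a quick numerical illustration of these definitions, the sketch below (NumPy assumed; the example signal and the window length N are made-up values) computes the energy of a finite-duration signal and its average power over a window of length 2N+1.

```python
import numpy as np

# Hypothetical finite-duration signal: x(n) = 0.9**n for 0 <= n < 50, zero elsewhere
n = np.arange(50)
x = 0.9 ** n

# Energy: sum of |x(n)|^2 over all n (the signal is zero outside 0..49)
E = np.sum(np.abs(x) ** 2)

# Power: average of |x(n)|^2 over the window -N..N (all non-zero samples lie inside it)
N = 1000
P = E / (2 * N + 1)

print(f"Energy E = {E:.4f}")   # finite -> an energy signal
print(f"Power  P = {P:.6f}")   # tends to 0 as N grows -> zero average power
```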
Periodic vs. aperiodic signals
A signal is periodic with period N (N>0) iff
x(n+N)=x(n) for all n
The smallest value of N where this holds is called the
fundamental period.
Symmetric (even) and anti-symmetric (odd) signals:
◦ Even: x(-n) = x(n)
◦ Odd: x(-n) = -x(n)
Any arbitrary signal can be expressed as a sum of two
signal components, one even and the other odd:
x_e(n) = \frac{1}{2}\,[\,x(n) + x(-n)\,]

x_o(n) = \frac{1}{2}\,[\,x(n) - x(-n)\,]

x(n) = x_e(n) + x_o(n)
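A minimal sketch of this decomposition (NumPy assumed; the example sequence and its index range are arbitrary). The signal is placed on a symmetric index grid so that x(-n) is simply a reversal.

```python
import numpy as np

# Example signal on a symmetric index grid n = -3..3 (values are arbitrary)
n = np.arange(-3, 4)
x = np.array([0.0, 1.0, 2.0, 3.0, 1.0, -1.0, 0.5])

x_rev = x[::-1]            # x(-n): reversal about n = 0 on a symmetric grid
xe = 0.5 * (x + x_rev)     # even part:  xe(n) = 1/2 [x(n) + x(-n)]
xo = 0.5 * (x - x_rev)     # odd part:   xo(n) = 1/2 [x(n) - x(-n)]

assert np.allclose(xe, xe[::-1])    # even:  xe(-n) =  xe(n)
assert np.allclose(xo, -xo[::-1])   # odd:   xo(-n) = -xo(n)
assert np.allclose(x, xe + xo)      # reconstruction: x(n) = xe(n) + xo(n)
```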
A discrete-time system is a device that performs some
operation on a discrete-time signal.
A system transforms an input signal x(n) into an output signal y(n), where y(n) = T[x(n)].
Some basic discrete-time systems:
◦ Adders
◦ Constant multipliers
◦ Signal multipliers
◦ Unit delay elements
◦ Unit advance elements
y(n) = x(2n)
◦Addition: y(n) = x1(n) + x2(n)
◦Multiplication: y(n) = x1(n) x2(n)
◦Scaling: y(n) = a x(n)
Moving average filter:

x(n) = \begin{cases} |n|, & -3 \le n \le 3 \\ 0, & \text{elsewhere} \end{cases}
\qquad
y(n) = \frac{1}{3}\,[\,x(n+1) + x(n) + x(n-1)\,]

y(0) = \frac{1}{3}\,[\,x(-1) + x(0) + x(1)\,] = \frac{1}{3}\,[\,1 + 0 + 1\,] = \frac{2}{3}

y(n) = \left\{\, 0,\ 1,\ \tfrac{5}{3},\ 2,\ 1,\ \tfrac{2}{3},\ 1,\ 2,\ \tfrac{5}{3},\ 1,\ 0 \,\right\}, \quad n = -5, \ldots, 5
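The same example can be reproduced numerically. This sketch (NumPy assumed) builds x(n) = |n| on -3..3 with zero padding and applies the three-point moving average directly.

```python
import numpy as np

# x(n) = |n| for -3 <= n <= 3, padded with zeros so the edges are handled correctly
n = np.arange(-5, 6)
x = np.where(np.abs(n) <= 3, np.abs(n), 0).astype(float)

# y(n) = (1/3) [x(n+1) + x(n) + x(n-1)], computed where all three samples exist
y = (x[2:] + x[1:-1] + x[:-2]) / 3.0      # aligned with n = -4..4

for ni, yi in zip(n[1:-1], y):
    print(f"y({ni}) = {yi:.3f}")          # y(0) = 0.667, matching 2/3 above
```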
Accumulator:

x(n) = \begin{cases} |n|, & -3 \le n \le 3 \\ 0, & \text{elsewhere} \end{cases}
\qquad
y(n) = x(n) + x(n-1) + x(n-2) + \cdots = \sum_{k=-\infty}^{n} x(k)
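A sketch of the accumulator on the same input (NumPy assumed); for a finite-length input the running sum from minus infinity reduces to a cumulative sum.

```python
import numpy as np

n = np.arange(-3, 4)
x = np.abs(n).astype(float)   # x(n) = |n| for -3 <= n <= 3, zero elsewhere

# Accumulator: y(n) = sum of x(k) for k <= n; the zeros before n = -3 contribute nothing
y = np.cumsum(x)

print(y)                      # [ 3.  5.  6.  6.  7.  9. 12.]
```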
Memoryless systems: If the output of the system at an
instant n only depends on the input sample at that time
(and not on past or future samples) then the system is
called memoryless or static,
e.g. y(n) = a x(n) + b x²(n)
Otherwise, the system is said to be dynamic or to have
memory,
e.g. y(n)=x(n)−4x(n−2)
In a causal system, the output at any time n only
depends on the present and past inputs.
An example of a causal system:
y(n) = F[x(n), x(n−1), x(n−2), ...]
All other systems are non-causal.
A subset of non-causal systems, where the output at any time n depends only on future inputs, is called anti-causal:
y(n)=F[x(n+1),x(n+2),...]
Unstable systems exhibit erratic and extreme behavior.
BIBO stable systems are those producing a bounded
output for every bounded input:
|x(n)| \le M_x < \infty \;\Longrightarrow\; |y(n)| \le M_y < \infty
Example: y(n) = y²(n−1) + x(n). Stable or unstable?

Solution: take x(n) = C δ(n), a bounded signal. Then

y(0) = C,\quad y(1) = C^2,\quad y(2) = C^4,\ \ldots,\ y(n) = C^{2^n}

For |C| > 1 the output grows without bound, so the system is unstable.
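The blow-up is easy to see numerically. A small simulation of y(n) = y²(n−1) + x(n) with x(n) = C δ(n), where C = 1.5 is chosen only for illustration:

```python
# Simulate y(n) = y(n-1)**2 + x(n) for x(n) = C*delta(n), relaxed system (y(-1) = 0)
C = 1.5
y_prev = 0.0
for n in range(6):
    x_n = C if n == 0 else 0.0
    y_n = y_prev ** 2 + x_n
    print(f"y({n}) = {y_n:.4g}")   # 1.5, 2.25, 5.063, 25.63, 656.8, 4.314e+05, ...
    y_prev = y_n
```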
Superposition principle: T[a x1(n) + b x2(n)] = a T[x1(n)] + b T[x2(n)]
A relaxed linear system with zero input
produces a zero output.
Superposition combines two properties: the scaling (homogeneity) property and the additivity property.
Example: y(n) = x(n^2). Linear or non-linear?

Solution: y1(n) = x1(n^2), y2(n) = x2(n^2)

y3(n) = T[a1 x1(n) + a2 x2(n)] = a1 x1(n^2) + a2 x2(n^2)

a1 y1(n) + a2 y2(n) = a1 x1(n^2) + a2 x2(n^2) = y3(n), so the system is linear.

Example: y(n) = e^{x(n)} is non-linear: superposition fails, since e^{a1 x1(n) + a2 x2(n)} is not a1 e^{x1(n)} + a2 e^{x2(n)} in general.
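Superposition can also be checked numerically. In the sketch below (NumPy assumed; the test signals and coefficients are arbitrary), signals are represented as functions of n so that x(n²) is easy to evaluate; the check passes for y(n) = x(n²) and fails for y(n) = e^{x(n)}.

```python
import numpy as np

n = np.arange(-4, 5)                 # test indices (arbitrary)
x1 = lambda k: np.sin(0.3 * k)       # arbitrary test signals
x2 = lambda k: np.cos(0.7 * k)
a1, a2 = 2.0, -0.5                   # arbitrary coefficients

def is_linear(T):
    """Check T[a1*x1 + a2*x2] == a1*T[x1] + a2*T[x2] on the test indices."""
    x3 = lambda k: a1 * x1(k) + a2 * x2(k)
    return np.allclose(T(x3)(n), a1 * T(x1)(n) + a2 * T(x2)(n))

T_square_index = lambda x: (lambda k: x(k ** 2))      # y(n) = x(n^2)
T_exp          = lambda x: (lambda k: np.exp(x(k)))   # y(n) = e^{x(n)}

print(is_linear(T_square_index))   # True  -> linear
print(is_linear(T_exp))            # False -> non-linear
```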
If the input-output characteristics of a system do not change with time, then it is called time-invariant or shift-invariant. This means that for every input x(n) and every shift k:

x(n) \xrightarrow{T} y(n) \;\Longrightarrow\; x(n-k) \xrightarrow{T} y(n-k)
Time-invariant example: differentiator

x(n) \xrightarrow{T} y(n) = x(n) - x(n-1)

x(n-1) \xrightarrow{T} y(n-1) = x(n-1) - x(n-2)

Time-variant example: modulator

x(n) \xrightarrow{T} y(n) = x(n)\cos(\omega_0 n)
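A numerical check of the shift-invariance definition (NumPy assumed; the input signal, the shift k, and the modulation frequency w0 are arbitrary): shift the input, apply the system, and compare with the shifted output.

```python
import numpy as np

def first_difference(x):
    """Differentiator: y(n) = x(n) - x(n-1), with x(-1) taken as 0."""
    return x - np.concatenate(([0.0], x[:-1]))

def modulator(x, w0=0.5):
    """Modulator: y(n) = x(n) * cos(w0 * n); depends explicitly on n."""
    n = np.arange(len(x))
    return x * np.cos(w0 * n)

def shift(x, k):
    """Delay by k samples, filling with zeros."""
    return np.concatenate((np.zeros(k), x[:-k] if k else x))

x = np.random.randn(64)
k = 5

for name, T in [("differentiator", first_difference), ("modulator", modulator)]:
    time_invariant = np.allclose(T(shift(x, k))[k:], shift(T(x), k)[k:])
    print(f"{name}: time-invariant = {time_invariant}")   # True, then False
```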
LTI systems have two important characteristics:
◦ Time invariance: a system T is called time-invariant or shift-invariant if the input-output characteristics of the system do not change with time:
x(n) \xrightarrow{T} y(n) \;\Longrightarrow\; x(n-k) \xrightarrow{T} y(n-k)
◦ Linearity: the system satisfies the superposition principle:
T[a x1(n) + b x2(n)] = a T[x1(n)] + b T[x2(n)]
h(n): the response of the LTI system to the unit sample input δ(n), i.e. h(n) = T[δ(n)].

The output at any time n_0 is obtained by shifting, multiplying, and summing:

y(n_0) = \sum_{k=-\infty}^{\infty} x(k)\, h(n_0 - k)
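A direct translation of the convolution sum into code (NumPy assumed; the signals are finite-length and indexed from 0), with the result checked against np.convolve.

```python
import numpy as np

def convolve_direct(x, h):
    """y(n) = sum_k x(k) h(n-k) for finite-length x and h starting at n = 0."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):        # h(n-k) exists only inside its support
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0, 0.5])

print(convolve_direct(x, h))   # [ 1.   1.   1.5 -2.   1.5]
print(np.convolve(x, h))       # same result
```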
• Commutative law: x(n) * h(n) = h(n) * x(n)
• Distributive law: x(n) * [h1(n) + h2(n)] = x(n) * h1(n) + x(n) * h2(n)
Associative law: [x(n) * h1(n)] * h2(n) = x(n) * [h1(n) * h2(n)]
Example: h(n) = ? given

h1(n) = \left(\tfrac{1}{2}\right)^n u(n), \qquad h2(n) = \left(\tfrac{1}{4}\right)^n u(n)

• Solution:

h(n) = \sum_{k=-\infty}^{\infty} h1(k)\, h2(n-k)

v_k(n) = h1(k)\, h2(n-k) = \left(\tfrac{1}{2}\right)^k u(k)\,\left(\tfrac{1}{4}\right)^{n-k} u(n-k), non-zero for k \ge 0 and n - k \ge 0

h(n) = 0 for n < 0, and for n \ge 0:

h(n) = \sum_{k=0}^{n} \left(\tfrac{1}{2}\right)^k \left(\tfrac{1}{4}\right)^{n-k} = \left(\tfrac{1}{4}\right)^n \sum_{k=0}^{n} 2^k = \left(\tfrac{1}{4}\right)^n \left(2^{n+1} - 1\right)

using the geometric sum \sum_{k=0}^{n-1} a r^k = a\,\dfrac{r^n - 1}{r - 1}.
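The closed-form answer can be checked numerically against a truncated convolution (NumPy assumed; the truncation length is arbitrary but long enough that the tails of the geometric sequences do not matter for the first N samples).

```python
import numpy as np

N = 30                                   # truncation length (arbitrary)
n = np.arange(N)
h1 = (1 / 2) ** n                        # h1(n) = (1/2)^n u(n)
h2 = (1 / 4) ** n                        # h2(n) = (1/4)^n u(n)

h_numeric = np.convolve(h1, h2)[:N]      # first N samples are exact (causal sequences)
h_closed = (1 / 4) ** n * (2.0 ** (n + 1) - 1)   # (1/4)^n (2^(n+1) - 1), n >= 0

print(np.allclose(h_numeric, h_closed))  # True
```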
Remember that for a causal system, the output at any point in time depends only on the present and past values of the input.
Causality in LTI Systems

Causality condition: h(n) = 0 for n < 0.
This condition is neither necessary nor sufficient for arbitrary systems, but it is necessary and sufficient for LTI systems.

Proof (sufficiency):

y(n) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k) = \sum_{k=-\infty}^{\infty} x(n-k)\, h(k)

If h(k) = 0 for k < 0, this reduces to

y(n) = \sum_{k=0}^{\infty} x(n-k)\, h(k)

But x(n-k) for k ≥ 0 are the present and past values of x(n), so y(n) depends only on the present and past values of x(n) and the system is causal.
Stability: BIBO (bounded-input-bounded-output) stable
Stability of LTI Systems

Stability condition: a linear time-invariant system is stable iff its impulse response is absolutely summable:

S_h = \sum_{k=-\infty}^{\infty} |h(k)| < \infty

a) (Sufficiency): if S_h < \infty and |x(n)| \le B_x, then

|y(n)| = \left| \sum_{k=-\infty}^{\infty} h(k)\, x(n-k) \right| \le \sum_{k=-\infty}^{\infty} |h(k)|\, |x(n-k)| \le B_x \sum_{k=-\infty}^{\infty} |h(k)| < \infty

b) (Necessity): if S_h = \infty, a bounded input can be constructed that produces an unbounded output.
Example: for what values of a and b is the system with

h(n) = \begin{cases} a^n, & n \ge 0 \\ b^n, & n < 0 \end{cases}

stable?

• Solution:

\sum_{n=-\infty}^{\infty} |h(n)| = \sum_{n=0}^{\infty} |a|^n + \sum_{n=-\infty}^{-1} |b|^n

\sum_{n=0}^{\infty} |a|^n = 1 + |a| + |a|^2 + \cdots = \frac{1}{1-|a|} \quad \text{if } |a| < 1

\sum_{n=-\infty}^{-1} |b|^n = \sum_{n=1}^{\infty} \frac{1}{|b|^n} = \frac{1}{|b|}\left(1 + \frac{1}{|b|} + \frac{1}{|b|^2} + \cdots\right) = \frac{1}{|b| - 1} \quad \text{if } |b| > 1

So the system is stable iff |a| < 1 and |b| > 1.
LTI systems can be divided into 2 types based on their impulse response:
An FIR system has finite-duration h(n), i.e. h(n) = 0 for n < 0 and n ≥ M.
y(n) = \sum_{k=0}^{M-1} h(k)\, x(n-k)
This means that the output at any time n is simply a weighted linear
combination of the most recent M input samples (FIR has a finite memory
of length M).
An IIR system has infinite-duration h(n), so its output based on the
convolution formula becomes (causality assumed)
y(n) = \sum_{k=0}^{\infty} h(k)\, x(n-k)
In this case, the weighted sum involves the present and all past input samples; thus the IIR system has infinite memory.
FIR systems can be readily implemented by their convolution
summation (involves additions, multiplications, and a finite
number of memory locations).
Cumulative Average System:
y(n) = \frac{1}{n+1} \sum_{k=0}^{n} x(k), \quad n = 0, 1, \ldots \qquad\qquad y(n-1) = \frac{1}{n} \sum_{k=0}^{n-1} x(k)

(n+1)\, y(n) = \sum_{k=0}^{n-1} x(k) + x(n) = n\, y(n-1) + x(n)

y(n) = \frac{n}{n+1}\, y(n-1) + \frac{1}{n+1}\, x(n) \quad + \text{ initial condition}
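A sketch of the recursive implementation (NumPy assumed; the input sequence is arbitrary), checked against the direct running mean.

```python
import numpy as np

x = np.random.randn(100)                  # arbitrary input sequence, n = 0, 1, ...

# Recursive form: y(n) = n/(n+1) * y(n-1) + 1/(n+1) * x(n), with y(-1) = 0
y = np.zeros_like(x)
y_prev = 0.0                              # initial condition
for n in range(len(x)):
    y[n] = n / (n + 1) * y_prev + x[n] / (n + 1)
    y_prev = y[n]

# Direct form: y(n) = (1/(n+1)) * sum_{k=0}^{n} x(k)
y_direct = np.cumsum(x) / (np.arange(len(x)) + 1)

print(np.allclose(y, y_direct))           # True
```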
Figure: a transmitted waveform, and a shifted version of the transmitted waveform plus noise.
Cross-correlation is an efficient way to measure the degree to
which two signals (one template and the other the test signal)
are similar to each other.
Figure: a template, and a shifted version of the template plus noise.
Test signal → Cross-correlation machine → Output
• The amplitude of each sample in the cross-
correlation signal is a measure of how much
the received signal resembles the target
signal, at that location.
• The value of the cross-correlation is
maximized when the target signal is aligned
with the same features in the received signal.
• Using cross-correlation to detect a known
waveform is frequently called matched filtering.
x(n): transmitted/desired signal

y(n) = α x(n − D) + w(n): received/test signal, where α is an attenuation factor, x(n − D) is a delayed version of the input, and w(n) is additive noise

r_{xy}(l) = \sum_{n=-\infty}^{\infty} x(n)\, y(n-l) = \sum_{n=-\infty}^{\infty} x(n+l)\, y(n), \quad l = 0, \pm 1, \pm 2, \ldots
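A sketch of delay estimation with cross-correlation (NumPy assumed; the template length, delay D, attenuation, and noise level are all made-up values): the lag at which the correlation of the received signal with the transmitted one peaks recovers the delay.

```python
import numpy as np

rng = np.random.default_rng(0)

D, alpha = 37, 0.8                         # made-up delay and attenuation factor
x = rng.standard_normal(200)               # transmitted/desired signal x(n)
w = 0.2 * rng.standard_normal(200 + D)     # additive noise
y = np.concatenate((np.zeros(D), alpha * x)) + w   # received: y(n) = alpha*x(n-D) + w(n)

# np.correlate(y, x, "full")[m] = sum_n y(n + k) x(n) with lag k = m - (len(x) - 1),
# i.e. r_yx(k) = r_xy(-k); for y(n) ~ alpha*x(n - D) this peaks at k = D.
r = np.correlate(y, x, mode="full")
lags = np.arange(-(len(x) - 1), len(y))

print(lags[np.argmax(r)])                  # 37 (= D) for this noise level
```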
• Cross-correlation involves the same
sequence of steps as in convolution except the
folding part, so basically the cross-correlation
of two signals involves:
1. Shifting one of the sequences
2. Multiplication of the two sequences
3. Summing over all values of the product
The cross-correlation machine and convolution machine are
identical, except that in the correlation machine this flip
doesn't take place, and the samples run in the normal
direction.
r_xy(l) = x(l) * y(−l). Cross-correlation is non-commutative.
Convolution is the relationship between a system's input
signal, output signal, and the impulse response. Correlation
is a way to detect a known waveform in a noisy
background.
The similarity of the mathematics is only a convenient coincidence.
Autocorrelation:

r_{xx}(l) = \sum_{n=-\infty}^{\infty} x(n)\, x(n-l) = \sum_{n=-\infty}^{\infty} x(n+l)\, x(n) = r_{xx}(-l)
It can be shown that: |r_{xy}(l)| \le \sqrt{r_{xx}(0)\, r_{yy}(0)} = \sqrt{E_x E_y}
If signals are scaled, the shape of the cross-correlation
sequence does not change. Only the amplitudes are scaled.
It is often desirable to normalize the auto-correlation and
cross-correlation sequences to a range from -1 to 1.
Normalized autocorrelation: \rho_{xx}(l) = \dfrac{r_{xx}(l)}{r_{xx}(0)}

Normalized cross-correlation: \rho_{xy}(l) = \dfrac{r_{xy}(l)}{\sqrt{r_{xx}(0)\, r_{yy}(0)}}
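A sketch of the normalization (NumPy assumed; the signals are arbitrary, in the spirit of the delay example above): scaling either signal leaves the normalized sequence unchanged, and its values stay within [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(128)
y = np.roll(x, 10) + 0.1 * rng.standard_normal(128)   # shifted copy of x plus noise

def cross_corr(a, b):
    """r_ab(l) = sum_n a(n) b(n - l), evaluated at all lags (full length)."""
    return np.correlate(a, b, mode="full")

def normalized_cross_corr(a, b):
    """rho_ab(l) = r_ab(l) / sqrt(r_aa(0) * r_bb(0)); bounded by 1 in magnitude."""
    r = cross_corr(a, b)
    return r / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))   # r_aa(0), r_bb(0) = energies

rho = normalized_cross_corr(x, y)
print(np.max(np.abs(rho)) <= 1.0)                                # True (Cauchy-Schwarz)
print(np.allclose(rho, normalized_cross_corr(5 * x, 0.3 * y)))   # True: scaling drops out
```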
In this lecture, we learned about:
Representations of discrete time signals and common basic DT signals
Classifications of DT systems:
◦ Static vs. dynamic, time-invariant vs. time-variant, linear vs. non-linear, causal vs. non-causal, stable vs. unstable, FIR vs. IIR
LTI systems and their representation
Convolution for determining response to arbitrary inputs
Cross-correlation