Time-Domain Characterization of LTI Discrete-Time Systems

This document discusses discrete-time signals and their representation in the time domain. It defines a discrete-time signal as a sequence of numbers where each number represents the signal value at discrete time instances. A discrete-time signal can be of finite or infinite length. Common operations on discrete-time signals include multiplication, addition, time-shifting, and filtering. Filtering is widely used in signal processing applications to modify the frequency content of a signal. Digital signal processing involves manipulating discrete-time signals using mathematical operations.

Discrete-Time Signals:

Time-Domain Representation
What is a signal ?

A signal is a function of an independent variable such as time, distance, position, temperature, pressure, etc.
For example…
• Electrical Engineering
voltages/currents in a circuit
speech signals
image signals
• Physics
radiation
• Mechanical Engineering
vibration studies
• Astronomy
space photos

• Biomedicine
EEG, ECG, MRI, X-Rays, Ultrasounds
• Seismology
tectonic plate movement, earthquake prediction
• Economics
stock market data
What is DSP?
Mathematical and algorithmic manipulation of discretized and
quantized or naturally digital signals in order to extract the most
relevant and pertinent information that is carried by the signal.

What is a signal?
What is a system?
What is processing?
Signals can be characterized in several ways
Continuous time signals vs. discrete time signals (x(t), x[n]).
Temperature in London / signal on a CD-ROM.
Continuous valued signals vs. discrete signals.
Amount of current drawn by a device / average scores of TOEFL in a
school over years.
–Continuous time and continuous valued : Analog signal.
–Continuous time and discrete valued: Quantized signal.
–Discrete time and continuous valued: Sampled signal.
–Discrete time and discrete values: Digital signal.
Real valued signals vs. complex valued signals.
Residential electric power use / industrial reactive power use.
Scalar signals vs. vector valued (multi-channel) signals.
Blood pressure signal / 128 channel EEG.
Deterministic vs. random signal:
Recorded audio / noise.
One-dimensional vs. two dimensional vs. multidimensional
signals.
Speech / still image / video.
Systems
• For our purposes, a DSP system is one that can mathematically
manipulate (e.g., change, record, transmit, transform) digital
signals.
• Furthermore, DSP does not operate on signals in their analog form, even though most signals in nature are analog signals.
Various Types of Processing
Modulation and demodulation.
Signal security.
Encryption and decryption.
Multiplexing and de-multiplexing.
Data compression.
Signal de-noising.
Filtering for noise reduction.
Speaker/system identification.
Signal enhancement –equalization.
Audio processing.
Image processing –image de-noising, enhancement,
watermarking.
Reconstruction.
Data analysis and feature extraction.
Frequency/spectral analysis.
Filtering
• By far the most commonly used DSP operation
Filtering refers to deliberately changing the frequency content of the signal,
typically, by removing certain frequencies from the signals.
For de-noising applications, the (frequency) filter removes those frequencies
in the signal that correspond to noise.
In various applications, filtering is used to focus on the part of the spectrum that is of interest, that is, the part that carries the information.

• Typically we have the following types of filters


Low-pass (LPF) – removes high frequencies, and retains (passes) low frequencies.
High-pass (HPF) – removes low frequencies, and retains high frequencies.
Band-pass (BPF) – retains an interval of frequencies within a band, removes others.
Band-stop (BSF) – removes an interval of frequencies within a band, retains others.
Notch filter – removes a specific frequency.
A Common Application: Filtering
Components of a DSP System
(figures: block diagram of a typical DSP system)
Analog-to-Digital-to-Analog…?
• Why not just process the signals in continuous time domain? Isn’t it just a waste of time,
money and resources to convert to digital and back to analog?

• Why DSP? We digitally process the signals in discrete domain, because it is


– More flexible, more accurate, easier to mass produce.
– Easier to design.
• System characteristics can easily be changed by programming.
• Any level of accuracy can be obtained by using an appropriate number of bits.
– More deterministic and reproducible: less sensitive to component values, etc.
– Many things that cannot be done using analog processors can be done digitally.
• Allows multiplexing, time sharing, multi-channel processing, adaptive filtering.
• Easy to cascade, no loading effects, signals can be stored indefinitely without loss.
• Allows processing of very low frequency signals, which would require impractical component values in the analog world.
Analog-to-Digital-to-Analog…?

• On the other hand, it can be


– Slower, sampling issues.
– More expensive, increased system complexity, consumes more power.
• Yet, the advantages far outweigh the
disadvantages. Today, most continuous time
signals are in fact processed in discrete time
using digital signal processors.
Analog-Digital

Examples of analog technology


• photocopiers
• telephones
• audio tapes
• televisions (intensity and color info per scan line)
• VCRs (same as TV)
Examples of digital technology
• Digital computers!
In the next few slides you can see some
real-life signals
Electroencephalogram (EEG) Data
Stock Market Data
Satellite image
Volcano Kamchatka Peninsula, Russia
Satellite image
Volcano in Alaska
Medical Images:
MRI of normal brain
Medical Images:
X-ray knee
Medical Images: Ultrasound
Five-month Foetus (lungs, liver and bowel)
Astronomical images
Discrete-Time Signals:
Time-Domain Representation
• Signals represented as sequences of
numbers, called samples
• Sample value of a typical signal or sequence denoted as x[n], with n being an integer in the range $-\infty < n < \infty$
• x[n] defined only for integer values of n and
undefined for noninteger values of n
• Discrete-time signal represented by {x[n]}
Discrete-Time Signals:
Time-Domain Representation
• Here, the n-th sample is given by
$x[n] = x_a(t)\big|_{t=nT} = x_a(nT)$,  $n = \ldots, -2, -1, 0, 1, \ldots$
• The spacing T is called the sampling interval or sampling period
• The inverse of the sampling interval T, denoted as $F_T$, is called the sampling frequency: $F_T = 1/T$
Discrete-Time Signals:
Time-Domain Representation
• Two types of discrete-time signals:
- Sampled-data signals in which samples
are continuous-valued
- Digital signals in which samples are
discrete-valued
• Signals in a practical digital signal
processing system are digital signals
obtained by quantizing the sample values
either by rounding or truncation
2 Dimensions
From Continuous to Discrete: Sampling
(images: 256x256 vs. 64x64 spatial sampling)
Discrete (Sampled) and Digital (Quantized) Image
(images: 256x256 with 256 gray levels vs. 32 levels, and 256 levels vs. 2 levels)
Discrete-Time Signals:
Time-Domain Representation
• A discrete-time signal may be a finite-length
or an infinite-length sequence
• A finite-length (also called finite-duration or finite-extent) sequence is defined only for a finite time interval: $N_1 \le n \le N_2$, where $-\infty < N_1$ and $N_2 < \infty$ with $N_1 \le N_2$
• Length or duration of the above finite-length sequence is $N = N_2 - N_1 + 1$
Discrete-Time Signals:
Time-Domain Representation
• A right-sided sequence x[n] has zero-valued samples for $n < N_1$
(figure: a right-sided sequence)
• If $N_1 \ge 0$, a right-sided sequence is called a causal sequence
Discrete-Time Signals:
Time-Domain Representation
• A left-sided sequence x[n] has zero-valued samples for $n > N_2$
(figure: a left-sided sequence)
• If $N_2 \le 0$, a left-sided sequence is called an anticausal sequence
Operations on Sequences
• A single-input, single-output discrete-time system operates on a sequence, called the input sequence, according to some prescribed rules and develops another sequence, called the output sequence, with more desirable properties
(block diagram: input sequence x[n] → discrete-time system → output sequence y[n])
Example of an Operation on a Sequence:
Noise Removal

• For example, the input may be a signal


corrupted with additive noise
• A discrete-time system may be designed to
generate an output by removing the noise
component from the input
• In most cases, the operation defining a
particular discrete-time system is composed
of some basic operations
Basic Operations
• Product (modulation) operation:
– Modulator: $y[n] = x[n] \cdot w[n]$
• An application is the generation of a finite-length sequence from an infinite-length sequence by multiplying the latter with a finite-length sequence called a window sequence
• This process is called windowing
Basic Operations
• Addition operation:
– Adder: $y[n] = x[n] + w[n]$
• Multiplication operation:
– Multiplier: $y[n] = A \cdot x[n]$
Basic Operations
• Time-shifting operation: $y[n] = x[n - N]$, where N is an integer
• If N > 0, it is a delay operation
– Unit delay ($z^{-1}$): $y[n] = x[n - 1]$
• If N < 0, it is an advance operation
– Unit advance ($z$): $y[n] = x[n + 1]$
Basic Operations
• Time-reversal (folding) operation: $y[n] = x[-n]$
• Branching operation: used to provide multiple copies of a sequence
Combinations of Basic
Operations
• Example -
$y[n] = \alpha_1 x[n] + \alpha_2 x[n-1] + \alpha_3 x[n-2] + \alpha_4 x[n-3]$
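• As an illustration (not part of the original slides), the following Python/NumPy sketch realizes the weighted sum above using only the basic operations of scaling, delaying, and adding; the coefficient values and the input sequence are hypothetical.

```python
import numpy as np

def delay(x, N):
    """Delay a finite-length sequence by N samples (zeros shifted in)."""
    y = np.zeros_like(x)
    if N < len(x):
        y[N:] = x[:len(x) - N]
    return y

# Hypothetical coefficients alpha_1..alpha_4 and a hypothetical input sequence.
alpha = [0.3, 0.2, 0.4, 0.1]
x = np.array([1.0, 2.0, -1.0, 0.5, 3.0, 0.0, -2.0])

# y[n] = a1*x[n] + a2*x[n-1] + a3*x[n-2] + a4*x[n-3]
y = sum(a * delay(x, k) for k, a in enumerate(alpha))
print(y)
```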
Sampling Rate Alteration
• Employed to generate a new sequence y[n] with a sampling rate $F_T'$ higher or lower than the sampling rate $F_T$ of a given sequence x[n]
• Sampling rate alteration ratio is $R = F_T'/F_T$
• If R > 1, the process is called interpolation
• If R < 1, the process is called decimation
Sampling Rate Alteration
• In up-sampling by an integer factor L > 1, L - 1 equidistant zero-valued samples are inserted by the up-sampler between each pair of consecutive samples of the input sequence x[n]:
$x_u[n] = \begin{cases} x[n/L], & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases}$
Sampling Rate Alteration
• An example of the up-sampling operation
(figure: input sequence and output sequence up-sampled by 3; amplitude vs. time index n)
Sampling Rate Alteration
• In down-sampling by an integer factor M > 1, every M-th sample of the input sequence is kept and the M - 1 in-between samples are removed:
$y[n] = x[nM]$
Sampling Rate Alteration
• An example of the down-sampling operation
(figure: input sequence and output sequence down-sampled by 3; amplitude vs. time index n)
Classification of Sequences
Based on Symmetry
• Conjugate-symmetric sequence: $x[n] = x^*[-n]$
If x[n] is real, then it is an even sequence
(figure: an even sequence)
Classification of Sequences
Based on Symmetry
• Conjugate-antisymmetric sequence: $x[n] = -x^*[-n]$
If x[n] is real, then it is an odd sequence
(figure: an odd sequence)
Classification of Sequences
Based on Symmetry
• It follows from the definition that for a
conjugate-symmetric sequence {x[n]}, x[0]
must be a real number
• Likewise, it follows from the definition that
for a conjugate anti-symmetric sequence
{y[n]}, y[0] must be an imaginary number
• From the above, it also follows that for an
odd sequence {w[n]}, w[0] = 0
Classification of Sequences
Based on Symmetry
• Any complex sequence can be expressed as a sum of its conjugate-symmetric part and its conjugate-antisymmetric part:
$x[n] = x_{cs}[n] + x_{ca}[n]$
where
$x_{cs}[n] = \tfrac{1}{2}\left(x[n] + x^*[-n]\right)$
$x_{ca}[n] = \tfrac{1}{2}\left(x[n] - x^*[-n]\right)$
Classification of Sequences
Based on Symmetry
• Any real sequence can be expressed as a sum of its even part and its odd part:
$x[n] = x_{ev}[n] + x_{od}[n]$
where
$x_{ev}[n] = \tfrac{1}{2}\left(x[n] + x[-n]\right)$
$x_{od}[n] = \tfrac{1}{2}\left(x[n] - x[-n]\right)$
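• The decomposition above can be checked numerically; the following Python/NumPy sketch (with a hypothetical real sequence defined on a symmetric index range) splits it into its even and odd parts and verifies that they add back to the original.

```python
import numpy as np

def even_odd_parts(x, n):
    """Split a real sequence defined on an index vector n (symmetric about 0) into
    its even and odd parts: x_ev[n] = (x[n]+x[-n])/2, x_od[n] = (x[n]-x[-n])/2."""
    assert np.array_equal(n, -n[::-1]), "index vector must be symmetric about n = 0"
    x_rev = x[::-1]                       # x[-n] on the same index grid
    return 0.5 * (x + x_rev), 0.5 * (x - x_rev)

n = np.arange(-3, 4)
x = np.array([0., 1., 2., 4., 1., -1., 3.])   # hypothetical real sequence
x_ev, x_od = even_odd_parts(x, n)
print(np.allclose(x_ev + x_od, x))            # True: parts sum back to x
print(np.allclose(x_ev, x_ev[::-1]))          # True: even part is symmetric
```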
Classification of Sequences
Based on Periodicity
• A sequence $\tilde{x}[n]$ satisfying $\tilde{x}[n] = \tilde{x}[n + kN]$ is called a periodic sequence with a period N, where N is a positive integer and k is any integer
• The smallest value of N satisfying $\tilde{x}[n] = \tilde{x}[n + kN]$ is called the fundamental period
• A sequence not satisfying the periodicity condition is called an aperiodic sequence
Classification of Sequences:
Energy and Power Signals
• Total energy of a sequence x[n] is defined by
$\mathcal{E}_x = \sum_{n=-\infty}^{\infty} |x[n]|^2$
• An infinite-length sequence with finite sample values may or may not have finite energy
• A finite-length sequence with finite sample values has finite energy
Classification of Sequences:
Energy and Power Signals
• The average power of an aperiodic sequence is defined by
$P_x = \lim_{K\to\infty} \frac{1}{2K+1} \sum_{n=-K}^{K} |x[n]|^2$
• We define the energy of a sequence x[n] over a finite interval $-K \le n \le K$ as
$\mathcal{E}_{x,K} = \sum_{n=-K}^{K} |x[n]|^2$
Classification of Sequences:
Energy and Power Signals

• The average power of a periodic sequence $\tilde{x}[n]$ with a period N is given by
$P_x = \frac{1}{N}\sum_{n=0}^{N-1} |\tilde{x}[n]|^2$
• The average power of an infinite-length sequence may be finite or infinite
Classification of Sequences:
Energy and Power Signals
• Example - Consider the causal sequence defined by
$x[n] = \begin{cases} 3(-1)^n, & n \ge 0 \\ 0, & n < 0 \end{cases}$
• Note: x[n] has infinite energy
• Its average power is given by
$P_x = \lim_{K\to\infty} \frac{1}{2K+1}\left(\sum_{n=0}^{K} 9\right) = \lim_{K\to\infty} \frac{9(K+1)}{2K+1} = 4.5$
Classification of Sequences:
Energy and Power Signals
• An infinite energy signal with finite average
power is called a power signal
Example - A periodic sequence which has a
finite average power but infinite energy

• A finite energy signal with zero average


power is called an energy signal
Classification of Sequences:
Deterministic-Stochastic
Other Types of Classifications
• A sequence x[n] is said to be bounded if
$|x[n]| \le B_x < \infty$
• Example - The sequence $x[n] = \cos(0.3\pi n)$ is a bounded sequence as
$|x[n]| = |\cos(0.3\pi n)| \le 1$
Other Types of Classifications
• A sequence x[n] is said to be absolutely summable if
$\sum_{n=-\infty}^{\infty} |x[n]| < \infty$
• Example - The sequence
$y[n] = \begin{cases} 0.3^n, & n \ge 0 \\ 0, & n < 0 \end{cases}$
is an absolutely summable sequence as
$\sum_{n=0}^{\infty} 0.3^n = \frac{1}{1 - 0.3} = 1.42857\ldots < \infty$
Other Types of Classifications
• A sequence x[n] is said to be square-summable if
$\sum_{n=-\infty}^{\infty} |x[n]|^2 < \infty$
• Example - The sequence
$h[n] = \frac{\sin(0.4\pi n)}{\pi n}$
is square-summable but not absolutely summable
Basic Sequences
• Unit sample sequence - $\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$
(stem plot over n = -4, ..., 6)
• Unit step sequence - $\mu[n] = \begin{cases} 1, & n \ge 0 \\ 0, & n < 0 \end{cases}$
(stem plot over n = -4, ..., 6)
Basic Sequences
• Real sinusoidal sequence -
$x[n] = A\cos(\omega_o n + \phi)$
where A is the amplitude, $\omega_o$ is the angular frequency, and $\phi$ is the phase of x[n]
• Example (figure): $\omega_o = 0.1\pi$; amplitude vs. time index n
Basic Sequences
• Complex exponential sequence -
$x[n] = A\alpha^n$,  $-\infty < n < \infty$
where A and $\alpha$ are real or complex numbers
• If we write $\alpha = e^{(\sigma_o + j\omega_o)}$, $A = |A|e^{j\phi}$, then we can express
$x[n] = |A|\, e^{j\phi} e^{(\sigma_o + j\omega_o)n} = x_{re}[n] + j\, x_{im}[n]$,
where
$x_{re}[n] = |A|\, e^{\sigma_o n}\cos(\omega_o n + \phi)$,
$x_{im}[n] = |A|\, e^{\sigma_o n}\sin(\omega_o n + \phi)$
Basic Sequences
• $x_{re}[n]$ and $x_{im}[n]$ of a complex exponential sequence are real sinusoidal sequences with constant ($\sigma_o = 0$), growing ($\sigma_o > 0$), or decaying ($\sigma_o < 0$) amplitudes for n > 0
• (figure: real and imaginary parts of $x[n] = \exp\!\left(\left(-\tfrac{1}{12} + j\tfrac{\pi}{6}\right)n\right)$; amplitude vs. time index n)
Basic Sequences
• Real exponential sequence -
$x[n] = A\alpha^n$,  $-\infty < n < \infty$
where A and $\alpha$ are real numbers
• (figure: examples with $\alpha = 1.2$ (growing) and $\alpha = 0.9$ (decaying); amplitude vs. time index n)
Basic Sequences
• The sinusoidal sequence $A\cos(\omega_o n + \phi)$ and the complex exponential sequence $B\exp(j\omega_o n)$ are periodic sequences of period N if $\omega_o N = 2\pi r$, where N and r are positive integers
• The smallest value of N satisfying $\omega_o N = 2\pi r$ is the fundamental period of the sequence
• To verify the above fact, consider
$x_1[n] = \cos(\omega_o n + \phi)$
$x_2[n] = \cos(\omega_o (n + N) + \phi)$
Basic Sequences
• Now $x_2[n] = \cos(\omega_o(n+N) + \phi)$
$= \cos(\omega_o n + \phi)\cos(\omega_o N) - \sin(\omega_o n + \phi)\sin(\omega_o N)$
which will be equal to $\cos(\omega_o n + \phi) = x_1[n]$ only if
$\sin(\omega_o N) = 0$ and $\cos(\omega_o N) = 1$
• These two conditions are met if and only if
$\omega_o N = 2\pi r$  or  $N = \frac{2\pi r}{\omega_o}$
Basic Sequences

• If 2/o is a noninteger rational number, then


the period will be a multiple of 2/o
• Otherwise, the sequence is aperiodic
• Example - x[n]  sin( 3n  ) is an aperiodic
sequence
Basic Sequences
• (figure: $\omega_o = 0$; amplitude vs. time index n)
• Here $\omega_o = 0$
• Hence the period is N = 1 (for r = 0)
Basic Sequences
• (figure: $\omega_o = 0.1\pi$; amplitude vs. time index n)
• Here $\omega_o = 0.1\pi$
• Hence $N = \frac{2\pi r}{0.1\pi} = 20$ for r = 1
Basic Sequences
• Property 1 - Consider $x[n] = \exp(j\omega_1 n)$ and $y[n] = \exp(j\omega_2 n)$ with $0 \le \omega_1 < 2\pi$ and $2\pi k \le \omega_2 < 2\pi(k+1)$, where k is any positive integer
• If $\omega_2 = \omega_1 + 2\pi k$, then x[n] = y[n]
• Thus, x[n] and y[n] are indistinguishable
Basic Sequences
• Property 2 - The frequency of oscillation of $A\cos(\omega_o n)$ increases as $\omega_o$ increases from 0 to $\pi$, and then decreases as $\omega_o$ increases from $\pi$ to $2\pi$
• Thus, frequencies in the neighborhood of $\omega = 0$ are called low frequencies, whereas frequencies in the neighborhood of $\omega = \pi$ are called high frequencies
Basic Sequences
• Because of Property 1, a frequency $\omega_o$ in the neighborhood of $\omega = 2\pi k$ is indistinguishable from the frequency $\omega_o - 2\pi k$ in the neighborhood of $\omega = 0$, and a frequency $\omega_o$ in the neighborhood of $\omega = (2k+1)\pi$ is indistinguishable from the frequency $\omega_o - (2k+1)\pi$ in the neighborhood of $\omega = \pi$
Basic Sequences
• Frequencies in the neighborhood of  = 2k
are usually called low frequencies
• Frequencies in the neighborhood of
 = (2k+1) are usually called high
frequencies
• $v_1[n] = \cos(0.1\pi n) + \cos(1.9\pi n)$ is a low-frequency signal
• $v_2[n] = \cos(0.8\pi n) + \cos(1.2\pi n)$ is a high-frequency signal
Basic Sequences
• An arbitrary sequence can be represented in the time domain as a weighted sum of some basic sequence and its delayed (advanced) versions, e.g.
$x[n] = 0.5\,\delta[n+2] + 1.5\,\delta[n-1] - \delta[n-2] + \delta[n-4] + 0.75\,\delta[n-6]$
The Sampling Process
• Often, a discrete-time sequence x[n] is developed by uniformly sampling a continuous-time signal $x_a(t)$ as indicated below
• The relation between the two signals is
$x[n] = x_a(t)\big|_{t=nT} = x_a(nT)$,  $n = \ldots, -2, -1, 0, 1, 2, \ldots$
The Sampling Process
• The time variable t of $x_a(t)$ is related to the time variable n of x[n] only at discrete-time instants $t_n$ given by
$t_n = nT = \frac{n}{F_T} = \frac{2\pi n}{\Omega_T}$
with $F_T = 1/T$ denoting the sampling frequency (hertz) and $\Omega_T = 2\pi F_T$ denoting the sampling angular frequency
The Sampling Process
• Consider the continuous-time signal
$x(t) = A\cos(2\pi f_o t + \phi) = A\cos(\Omega_o t + \phi)$
• The corresponding discrete-time signal is
$x[n] = A\cos(\Omega_o nT + \phi) = A\cos\!\left(\frac{2\pi\Omega_o}{\Omega_T} n + \phi\right) = A\cos(\omega_o n + \phi)$
where $\omega_o = 2\pi\Omega_o/\Omega_T = \Omega_o T$ is the normalized digital angular frequency of x[n] (in radians per sample)
The Sampling Process
• If the unit of the sampling period T is seconds:
• The unit of the normalized digital angular frequency $\omega_o$ is radians/sample
• The unit of the analog angular frequency $\Omega_o$ is radians/second
• The unit of the analog frequency $f_o$ is hertz (Hz)
The Sampling Process
• The three continuous-time signals
$g_1(t) = \cos(6\pi t)$
$g_2(t) = \cos(14\pi t)$
$g_3(t) = \cos(26\pi t)$
of frequencies 3 Hz, 7 Hz, and 13 Hz are sampled at a sampling rate of 10 Hz, i.e. with T = 0.1 sec, generating the three sequences
$g_1[n] = \cos(0.6\pi n)$,  $g_2[n] = \cos(1.4\pi n)$,  $g_3[n] = \cos(2.6\pi n)$
The Sampling Process
• Plots of these sequences (shown with circles) and their parent time functions are shown below:
(figure: the three sampled cosines overlaid on their parent continuous-time waveforms; amplitude vs. time)
• Note that each sequence has exactly the same sample value for any given n
The Sampling Process
• This fact can also be verified by observing that
$g_2[n] = \cos(1.4\pi n) = \cos((2\pi - 0.6\pi)n) = \cos(0.6\pi n)$
$g_3[n] = \cos(2.6\pi n) = \cos((2\pi + 0.6\pi)n) = \cos(0.6\pi n)$
• As a result, all three sequences are identical and it is difficult to associate a unique continuous-time function with each of these sequences
The Sampling Process
• The above phenomenon of a continuous-
time signal of higher frequency acquiring
the identity of a sinusoidal sequence of
lower frequency after sampling is called
aliasing
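• The aliasing in the 3 Hz / 7 Hz / 13 Hz example above can be reproduced numerically; the following Python/NumPy sketch samples the three cosines at 10 Hz and confirms that the resulting sequences are identical.

```python
import numpy as np

# Sample 3 Hz, 7 Hz and 13 Hz cosines at 10 Hz (T = 0.1 s); the three
# sequences come out identical because 7 = 10 - 3 and 13 = 10 + 3.
T = 0.1
n = np.arange(0, 20)
g1 = np.cos(2 * np.pi * 3 * n * T)    # cos(0.6*pi*n)
g2 = np.cos(2 * np.pi * 7 * n * T)    # cos(1.4*pi*n) -> aliases to cos(0.6*pi*n)
g3 = np.cos(2 * np.pi * 13 * n * T)   # cos(2.6*pi*n) -> aliases to cos(0.6*pi*n)
print(np.allclose(g1, g2), np.allclose(g1, g3))   # True True
```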
The Sampling Process
• Since there are an infinite number of continuous-time signals that can lead to the same sequence when sampled periodically, additional conditions need to be imposed so that the sequence $\{x[n]\} = \{x_a(nT)\}$ can uniquely represent the parent continuous-time signal $x_a(t)$
• In this case, $x_a(t)$ can be fully recovered from $\{x[n]\}$
The Sampling Process
• Example - Determine the discrete-time signal v[n] obtained by uniformly sampling at a sampling rate of 200 Hz the continuous-time signal
$v_a(t) = 6\cos(60\pi t) + 3\sin(300\pi t) + 2\cos(340\pi t) + 4\cos(500\pi t) + 10\sin(660\pi t)$
• Note: $v_a(t)$ is composed of 5 sinusoidal signals of frequencies 30 Hz, 150 Hz, 170 Hz, 250 Hz and 330 Hz
The Sampling Process
• The sampling period is $T = \frac{1}{200} = 0.005$ sec
• The generated discrete-time signal v[n] is thus given by
$v[n] = 6\cos(0.3\pi n) + 3\sin(1.5\pi n) + 2\cos(1.7\pi n) + 4\cos(2.5\pi n) + 10\sin(3.3\pi n)$
$= 6\cos(0.3\pi n) + 3\sin((2\pi - 0.5\pi)n) + 2\cos((2\pi - 0.3\pi)n) + 4\cos((2\pi + 0.5\pi)n) + 10\sin((4\pi - 0.7\pi)n)$
The Sampling Process
$= 6\cos(0.3\pi n) - 3\sin(0.5\pi n) + 2\cos(0.3\pi n) + 4\cos(0.5\pi n) - 10\sin(0.7\pi n)$
$= 8\cos(0.3\pi n) + 5\cos(0.5\pi n + 0.6435) - 10\sin(0.7\pi n)$
• Note: v[n] is composed of 3 discrete-time sinusoidal signals of normalized angular frequencies $0.3\pi$, $0.5\pi$, and $0.7\pi$
The Sampling Process
• Note: An identical discrete-time signal is also generated by uniformly sampling at a 200-Hz sampling rate the following continuous-time signals:
$w_a(t) = 8\cos(60\pi t) + 5\cos(100\pi t + 0.6435) - 10\sin(140\pi t)$
$g_a(t) = 2\cos(60\pi t) + 4\cos(100\pi t) + 10\sin(260\pi t) + 6\cos(460\pi t) + 3\sin(700\pi t)$
The Sampling Process
• Recall $\omega_o = \frac{2\pi\Omega_o}{\Omega_T}$
• Thus if $\Omega_T > 2\Omega_o$, then the corresponding normalized digital angular frequency $\omega_o$ of the discrete-time signal obtained by sampling the parent continuous-time sinusoidal signal will be in the range $-\pi < \omega_o < \pi$
• No aliasing
The Sampling Process
• On the other hand, if $\Omega_T < 2\Omega_o$, the normalized digital angular frequency will fold over into a lower digital frequency $\omega_o = \langle 2\pi\Omega_o/\Omega_T \rangle_{2\pi}$ in the range $-\pi < \omega < \pi$ because of aliasing
• Hence, to prevent aliasing, the sampling frequency $\Omega_T$ should be greater than 2 times the frequency $\Omega_o$ of the sinusoidal signal being sampled
The Sampling Process
• Generalization: Consider an arbitrary continuous-time signal $x_a(t)$ composed of a weighted sum of a number of sinusoidal signals
• $x_a(t)$ can be represented uniquely by its sampled version $\{x[n]\}$ if the sampling frequency $\Omega_T$ is chosen to be greater than 2 times the highest frequency contained in $x_a(t)$
The Sampling Process
• The condition to be satisfied by the
sampling frequency to prevent aliasing is
called the sampling theorem
• A formal proof of this theorem will be
presented later
Discrete-Time Systems
• A discrete-time system processes a given input sequence x[n] to generate an output sequence y[n] with more desirable properties
• In most applications, the discrete-time system is a single-input, single-output system:
(block diagram: input sequence x[n] → discrete-time system → output sequence y[n])
Discrete-Time Systems:
Examples
• 2-input, 1-output discrete-time systems -
Modulator, adder
• 1-input, 1-output discrete-time systems -
Multiplier, unit delay, unit advance
Discrete-Time Systems: Examples
• Accumulator -
$y[n] = \sum_{\ell=-\infty}^{n} x[\ell] = \sum_{\ell=-\infty}^{n-1} x[\ell] + x[n] = y[n-1] + x[n]$
• The output y[n] at time instant n is the sum of the input sample x[n] at time instant n and the previous output y[n-1] at time instant n-1, which is the sum of all previous input sample values from $-\infty$ to n-1
• The system cumulatively adds, i.e., it accumulates all input sample values
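• A minimal Python sketch of the accumulator recursion $y[n] = y[n-1] + x[n]$ (a zero initial condition is assumed); the input values are hypothetical.

```python
import numpy as np

def accumulator(x, y_init=0.0):
    """Causal accumulator y[n] = y[n-1] + x[n] for n >= 0, with initial condition y[-1]."""
    y = np.empty(len(x))
    prev = y_init
    for n, xn in enumerate(x):
        prev = prev + xn
        y[n] = prev
    return y

x = np.array([1.0, 2.0, -1.0, 3.0])   # hypothetical input
print(accumulator(x))                 # [1. 3. 2. 5.] = running sums
```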
Discrete-Time Systems:Examples
• Accumulator - The input-output relation can also be written in the form
$y[n] = \sum_{\ell=-\infty}^{-1} x[\ell] + \sum_{\ell=0}^{n} x[\ell] = y[-1] + \sum_{\ell=0}^{n} x[\ell]$,  $n \ge 0$
• The second form is used for a causal input sequence, in which case y[-1] is called the initial condition
Discrete-Time Systems:Examples
• M-point moving-average system -
$y[n] = \frac{1}{M}\sum_{k=0}^{M-1} x[n-k]$
• Used in smoothing random variations in data
• An application: Consider x[n] = s[n] + d[n], where s[n] is the signal corrupted by a noise d[n]
Discrete-Time Systems:Examples
• $s[n] = 2[n(0.9)^n]$, d[n] - random noise signal
(figures: top - d[n], s[n], and x[n] = s[n] + d[n] vs. time index n; bottom - s[n] and the moving-average output y[n] vs. time index n, showing the smoothing effect)
Discrete-Time Systems:Examples
• Linear interpolation - Employed to estimate sample values between pairs of adjacent sample values of a discrete-time sequence
• Factor-of-4 interpolation
(figure: y[n] vs. n, with interpolated samples inserted between the original samples)
Discrete-Time Systems:
Examples
• Factor-of-2 interpolator -
$y[n] = x_u[n] + \tfrac{1}{2}\left(x_u[n-1] + x_u[n+1]\right)$
• Factor-of-3 interpolator -
$y[n] = x_u[n] + \tfrac{2}{3}\left(x_u[n-1] + x_u[n+1]\right) + \tfrac{1}{3}\left(x_u[n-2] + x_u[n+2]\right)$
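• A minimal Python/NumPy sketch of the factor-of-2 linear interpolator: the input is first up-sampled by 2 and the equation above is then applied (the input values are hypothetical, and samples beyond the ends are treated as zero).

```python
import numpy as np

def interpolate_by_2(x):
    """Factor-of-2 linear interpolator: up-sample by 2, then apply
    y[n] = xu[n] + 0.5*(xu[n-1] + xu[n+1])."""
    xu = np.zeros(2 * len(x))
    xu[::2] = x                                     # up-sampler output
    xu_delayed = np.concatenate(([0.0], xu[:-1]))   # xu[n-1]
    xu_advanced = np.concatenate((xu[1:], [0.0]))   # xu[n+1]
    return xu + 0.5 * (xu_delayed + xu_advanced)

print(interpolate_by_2(np.array([1.0, 3.0, 5.0])))
# [1. 2. 3. 4. 5. 2.5] -- each inserted sample is the average of its neighbours
```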
Discrete-Time Systems:
Classification
• Linear System
• Shift-Invariant System
• Causal System
• Stable System
• Passive and Lossless Systems
Linear Discrete-Time Systems
• Definition - If $y_1[n]$ is the output due to an input $x_1[n]$ and $y_2[n]$ is the output due to an input $x_2[n]$, then for an input
$x[n] = \alpha x_1[n] + \beta x_2[n]$
the output is given by
$y[n] = \alpha y_1[n] + \beta y_2[n]$
• The above property must hold for any arbitrary constants $\alpha$ and $\beta$, and for all possible inputs $x_1[n]$ and $x_2[n]$
Accumulator:
Linear Discrete-Time System?
• Accumulator - $y_1[n] = \sum_{\ell=-\infty}^{n} x_1[\ell]$,  $y_2[n] = \sum_{\ell=-\infty}^{n} x_2[\ell]$
• For an input
$x[n] = \alpha x_1[n] + \beta x_2[n]$
the output is
$y[n] = \sum_{\ell=-\infty}^{n}\left(\alpha x_1[\ell] + \beta x_2[\ell]\right) = \alpha\sum_{\ell=-\infty}^{n} x_1[\ell] + \beta\sum_{\ell=-\infty}^{n} x_2[\ell] = \alpha y_1[n] + \beta y_2[n]$
• Hence, the above system is linear
Causal Accumulator:
Linear Discrete-Time System?

• The outputs y1[n] and y2 [n] for inputs x1[n]


and x2 [n] are given by n
y1[n]  y1[1]   x1[]
0
n
y2 [n]  y2 [1]   x2 []
0
• The output y[n] for an input  x1[n]   x 2 [n]
is given by
n
y[n]  y[1]   ( x1[]   x 2 [])
0
Causal Accumulator cont.:
Linear Discrete-Time System?

• Now  y1[n]   y2 [n]


n n
  ( y1[1]   x1[])   ( y2 [1]   x2 [])
0 0
n n
 ( y1[1]   y2 [1])  (  x1[]    x2 [])
0 0

• Thus y[n]   y1[n]   y2 [n] if


y[1]   y1[1]   y2 [1]
Causal Accumulator cont.:
Linear Discrete-Time System?

• For the causal accumulator to be linear the


condition y[1]   y1[1]   y2 [1]
must hold for all initial conditions y[1],
y1[1] y,2 [1] , and all constants  and 
• This condition cannot be satisfied unless the
accumulator is initially at rest with zero
initial condition
• For nonzero initial condition, the system is
nonlinear
A Nonlinear Discrete-Time System

• Consider
$y[n] = x^2[n] - x[n-1]\,x[n+1]$
• Outputs $y_1[n]$ and $y_2[n]$ for inputs $x_1[n]$ and $x_2[n]$ are given by
$y_1[n] = x_1^2[n] - x_1[n-1]\,x_1[n+1]$
$y_2[n] = x_2^2[n] - x_2[n-1]\,x_2[n+1]$
A Nonlinear Discrete-Time System cont.

• Output y[n] due to an input  x1[n]   x2 [n]


is given by
2
y[n]  { x1[n]   x2 [n]}
 { x1[n  1]   x2 [n  1]}{ x1[n  1]   x2 [n  1]}
22
  {x1 [n]  x1[n  1]x1[n  1]}
2 2
  {x2 [n]  x2 [n  1]x2 [n  1]}
  {2 x1[n]x2 [n]  x1[n  1]x2 [n  1]  x1[n  1]x2 [n  1]}
A Nonlinear Discrete-Time System cont.

• On the other hand


 y1[n]   y2 [n]
2
  {x1 [n]  x1[n  1]x1[n  1]}
2
  {x2 [n]  x2 [n  1]x2 [n  1]}
 y[n]

• Hence, the system is nonlinear


Shift (Time)-Invariant System
• For a shift-invariant system, if $y_1[n]$ is the response to an input $x_1[n]$, then the response to an input $x[n] = x_1[n - n_o]$ is simply $y[n] = y_1[n - n_o]$, where $n_o$ is any positive or negative integer
• The above relation must hold for any arbitrary input and its corresponding output
• If n is discrete time, the above property is called the time-invariance property
Up-Sampler:
Shift-Invariant System?
• Example - Consider the up-sampler with an input-output relation given by
$x_u[n] = \begin{cases} x[n/L], & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases}$
• For an input $x_1[n] = x[n - n_o]$ the output $x_{1,u}[n]$ is given by
$x_{1,u}[n] = \begin{cases} x_1[n/L], & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases} = \begin{cases} x[(n - Ln_o)/L], & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases}$
Up-Sampler:
Shift-Invariant System?
• However, from the definition of the up-sampler,
$x_u[n - n_o] = \begin{cases} x[(n - n_o)/L], & n = n_o, n_o \pm L, n_o \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases} \ne x_{1,u}[n]$
• Hence, the up-sampler is a time-varying system
Linear Time-Invariant System
• Linear Time-Invariant (LTI) System -
A system satisfying both the linearity and
the time-invariance property
• LTI systems are mathematically easy to
analyze and characterize, and consequently,
easy to design
• Highly useful signal processing algorithms
have been developed utilizing this class of
systems over the last several decades
Causal System

• In a causal system, the $n_o$-th output sample $y[n_o]$ depends only on input samples x[n] for $n \le n_o$ and does not depend on input samples for $n > n_o$
• Let $y_1[n]$ and $y_2[n]$ be the responses of a causal discrete-time system to the inputs $x_1[n]$ and $x_2[n]$, respectively
Causal System
• Then
$x_1[n] = x_2[n]$ for n < N
implies also that
$y_1[n] = y_2[n]$ for n < N
• For a causal system, changes in output
samples do not precede changes in the input
samples
Causal System
• Examples of causal systems:
$y[n] = \alpha_1 x[n] + \alpha_2 x[n-1] + \alpha_3 x[n-2] + \alpha_4 x[n-3]$
$y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] + a_1 y[n-1] + a_2 y[n-2]$
$y[n] = y[n-1] + x[n]$
• Example of a noncausal system:
$y[n] = x_u[n] + \tfrac{1}{2}\left(x_u[n-1] + x_u[n+1]\right)$
Causal System
• A noncausal system can be implemented as
a causal system by delaying the output by
an appropriate number of samples
• For example, a causal implementation of the factor-of-2 interpolator is given by
$y[n] = x_u[n-1] + \tfrac{1}{2}\left(x_u[n-2] + x_u[n]\right)$
Stable System
• There are various definitions of stability
• We consider here the bounded-input,
bounded-output (BIBO) stability
• If y[n] is the response to an input x[n] and if
$|x[n]| \le B_x$ for all values of n
then
$|y[n]| \le B_y$ for all values of n
Stable System
• Example - The M-point moving-average filter is BIBO stable:
$y[n] = \frac{1}{M}\sum_{k=0}^{M-1} x[n-k]$
• For a bounded input $|x[n]| \le B_x$ we have
$|y[n]| = \left|\frac{1}{M}\sum_{k=0}^{M-1} x[n-k]\right| \le \frac{1}{M}\sum_{k=0}^{M-1} |x[n-k]| \le \frac{1}{M}(M B_x) = B_x$
Passive and Lossless Systems
• A discrete-time system is defined to be passive if, for every finite-energy input x[n], the output y[n] has, at most, the same energy, i.e.
$\sum_{n=-\infty}^{\infty} |y[n]|^2 \le \sum_{n=-\infty}^{\infty} |x[n]|^2 < \infty$
• For a lossless system, the above inequality is satisfied with an equal sign for every input
Passive and Lossless Systems
• Example - Consider the discrete-time system defined by $y[n] = \alpha x[n - N]$ with N a positive integer
• Its output energy is given by
$\sum_{n=-\infty}^{\infty} |y[n]|^2 = |\alpha|^2 \sum_{n=-\infty}^{\infty} |x[n]|^2$
• Hence, it is a passive system if $|\alpha| \le 1$ and is a lossless system if $|\alpha| = 1$
Impulse and Step Responses
• The response of a discrete-time system to a unit sample sequence $\{\delta[n]\}$ is called the unit sample response or, simply, the impulse response, and is denoted by {h[n]}
• The response of a discrete-time system to a unit step sequence $\{\mu[n]\}$ is called the unit step response or, simply, the step response, and is denoted by {s[n]}
Impulse Response
• Example - The impulse response of the system
$y[n] = \alpha_1 x[n] + \alpha_2 x[n-1] + \alpha_3 x[n-2] + \alpha_4 x[n-3]$
is obtained by setting $x[n] = \delta[n]$, resulting in
$h[n] = \alpha_1\delta[n] + \alpha_2\delta[n-1] + \alpha_3\delta[n-2] + \alpha_4\delta[n-3]$
• The impulse response is thus a finite-length sequence of length 4 given by
$\{h[n]\} = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4\}$
Impulse Response
• Example - The impulse response of the discrete-time accumulator
$y[n] = \sum_{\ell=-\infty}^{n} x[\ell]$
is obtained by setting $x[n] = \delta[n]$, resulting in
$h[n] = \sum_{\ell=-\infty}^{n} \delta[\ell] = \mu[n]$
Impulse Response
• Example - The impulse response {h[n]} of the factor-of-2 interpolator
$y[n] = x_u[n] + \tfrac{1}{2}\left(x_u[n-1] + x_u[n+1]\right)$
is obtained by setting $x_u[n] = \delta[n]$ and is given by
$h[n] = \delta[n] + \tfrac{1}{2}\left(\delta[n-1] + \delta[n+1]\right)$
• The impulse response is thus a finite-length sequence of length 3:
$\{h[n]\} = \{0.5,\ 1,\ 0.5\}$
Time-Domain Characterization
of LTI Discrete-Time System
• Input-Output Relationship -
It can be shown that a consequence of the
linear, time-invariance property is that an
LTI discrete-time system is completely
characterized by its impulse response
• Knowing the impulse response one
can compute the output of the system for
any arbitrary input
Time-Domain Characterization
of LTI Discrete-Time System
• Let h[n] denote the impulse response of an LTI discrete-time system
• We compute its output y[n] for the input:
$x[n] = 0.5\,\delta[n+2] + 1.5\,\delta[n-1] - \delta[n-2] + 0.75\,\delta[n-5]$
• As the system is linear, we can compute its outputs for each member of the input separately and add the individual outputs to determine y[n]
Time-Domain Characterization
of LTI Discrete-Time System
• Since the system is time-invariant:
input → output
$\delta[n+2] \to h[n+2]$
$\delta[n-1] \to h[n-1]$
$\delta[n-2] \to h[n-2]$
$\delta[n-5] \to h[n-5]$
Time-Domain Characterization
of LTI Discrete-Time System
• Likewise, as the system is linear:
input → output
$0.5\,\delta[n+2] \to 0.5\,h[n+2]$
$1.5\,\delta[n-1] \to 1.5\,h[n-1]$
$-\delta[n-2] \to -h[n-2]$
$0.75\,\delta[n-5] \to 0.75\,h[n-5]$
• Hence, because of the linearity property we get
$y[n] = 0.5\,h[n+2] + 1.5\,h[n-1] - h[n-2] + 0.75\,h[n-5]$
Time-Domain Characterization
of LTI Discrete-Time System
• Now, any arbitrary input sequence x[n] can be expressed as a linear combination of delayed and advanced unit sample sequences in the form
$x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k]$
• The response of the LTI system to an input $x[k]\,\delta[n-k]$ will be $x[k]\,h[n-k]$
Time-Domain Characterization
of LTI Discrete-Time System
• Hence, the response y[n] to an input
$x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k]$
will be
$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k]$
which can be alternately written as
$y[n] = \sum_{k=-\infty}^{\infty} x[n-k]\,h[k]$
Convolution Sum
• The summation
$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k] = \sum_{k=-\infty}^{\infty} x[n-k]\,h[k]$
is called the convolution sum of the sequences x[n] and h[n] and is represented compactly as
y[n] = x[n] * h[n]
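• A direct Python/NumPy evaluation of the convolution sum for finite-length sequences is sketched below (the sequences x and h are hypothetical); the result is cross-checked against NumPy's built-in np.convolve.

```python
import numpy as np

def convolve_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite-length x and h."""
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        y[k:k + len(h)] += xk * h        # shift-and-add of scaled impulse responses
    return y

x = np.array([0.5, 1.5, -1.0])
h = np.array([1.0, 2.0, 0.5])
print(convolve_sum(x, h))
print(np.convolve(x, h))                 # library routine gives the same result
```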
Convolution Sum
• Properties -
• Commutative property:
x[n] * h[n] = h[n] * x[n]
• Associative property :
(x[n] * h[n]) * y[n] = x[n] * (h[n] * y[n])
• Distributive property :
x[n] * (h[n] + y[n]) = x[n] * h[n] + x[n] * y[n]
Simple Interconnection
Schemes
• Two simple interconnection schemes are:
• Cascade Connection
• Parallel Connection
Cascade Connection
(block diagram: h1[n] followed by h2[n] is equivalent to h2[n] followed by h1[n], and to the single system h1[n] * h2[n])
• The impulse response h[n] of the cascade of two LTI discrete-time systems with impulse responses h1[n] and h2[n] is given by
h[n] = h1[n] * h2[n]
Cascade Connection
• Note: The ordering of the systems in the
cascade has no effect on the overall impulse
response because of the commutative
property of convolution
• A cascade connection of two stable systems
is stable
• A cascade connection of two passive
(lossless) systems is passive (lossless)
Cascade Connection
• An application is in the development of an
inverse system
• If the cascade connection satisfies the relation
$h_1[n] * h_2[n] = \delta[n]$
then the LTI system h1[n] is said to be the inverse of h2[n] and vice versa
Cascade Connection
• An application of the inverse system concept is in the recovery of a signal x[n] from its distorted version $\hat{x}[n]$ appearing at the output of a transmission channel
• If the impulse response of the channel is known, then x[n] can be recovered by designing an inverse system of the channel
(block diagram: x[n] → channel h1[n] → $\hat{x}[n]$ → inverse system h2[n] → x[n], with $h_1[n] * h_2[n] = \delta[n]$)
Cascade Connection
• Example - Consider the discrete-time accumulator with an impulse response $\mu[n]$
• Its inverse system must satisfy the condition
$\mu[n] * h_2[n] = \delta[n]$
• It follows from the above that $h_2[n] = 0$ for n < 0, $h_2[0] = 1$, and
$\sum_{\ell=0}^{n} h_2[\ell] = 0$ for $n \ge 1$
Cascade Connection
• Thus the impulse response of the inverse system of the discrete-time accumulator is given by
$h_2[n] = \delta[n] - \delta[n-1]$
which is called a backward difference system
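• The inverse-system relation can be verified numerically: the Python/NumPy sketch below passes a hypothetical sequence through the accumulator and then through the backward-difference system, recovering the original input.

```python
import numpy as np

# The backward-difference system h2[n] = delta[n] - delta[n-1] undoes the
# accumulator: cascading the two returns the original input.
x = np.array([2.0, -1.0, 4.0, 0.5, 3.0])        # hypothetical input
acc = np.cumsum(x)                              # accumulator output
diff = acc - np.concatenate(([0.0], acc[:-1]))  # backward difference of acc
print(np.allclose(diff, x))                     # True
```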
Parallel Connection
(block diagram: h1[n] and h2[n] driven by the same input, with their outputs added, is equivalent to the single system h1[n] + h2[n])
• The impulse response h[n] of the parallel connection of two LTI discrete-time systems with impulse responses h1[n] and h2[n] is given by
h[n] = h1[n] + h2[n]
Simple Interconnection Schemes
• Consider the discrete-time system where
$h_1[n] = \delta[n] + 0.5\,\delta[n-1]$,
$h_2[n] = 0.5\,\delta[n] - 0.25\,\delta[n-1]$,
$h_3[n] = 2\,\delta[n]$,
$h_4[n] = -2(0.5)^n\,\mu[n]$
(block diagram: h1[n] in parallel with the cascade of h2[n] and the parallel combination of h3[n] and h4[n])
Simple Interconnection Schemes

• Simplifying the block diagram, we obtain h1[n] in parallel with h2[n] * (h3[n] + h4[n])
Simple Interconnection Schemes

• The overall impulse response h[n] is given by
$h[n] = h_1[n] + h_2[n] * (h_3[n] + h_4[n]) = h_1[n] + h_2[n] * h_3[n] + h_2[n] * h_4[n]$
• Now,
$h_2[n] * h_3[n] = \left(\tfrac{1}{2}\delta[n] - \tfrac{1}{4}\delta[n-1]\right) * 2\delta[n] = \delta[n] - \tfrac{1}{2}\delta[n-1]$
Simple Interconnection Schemes


$h_2[n] * h_4[n] = \left(\tfrac{1}{2}\delta[n] - \tfrac{1}{4}\delta[n-1]\right) * \left(-2\left(\tfrac{1}{2}\right)^n \mu[n]\right)$
$= -\left(\tfrac{1}{2}\right)^n \mu[n] + \tfrac{1}{2}\left(\tfrac{1}{2}\right)^{n-1}\mu[n-1] = -\left(\tfrac{1}{2}\right)^n \mu[n] + \left(\tfrac{1}{2}\right)^n \mu[n-1] = -\left(\tfrac{1}{2}\right)^n \delta[n] = -\delta[n]$
• Therefore
$h[n] = \delta[n] + \tfrac{1}{2}\delta[n-1] + \delta[n] - \tfrac{1}{2}\delta[n-1] - \delta[n] = \delta[n]$
BIBO Stability Condition of an
LTI Discrete-Time System
• BIBO Stability Condition - A discrete-time system is BIBO stable if the output sequence {y[n]} remains bounded for all bounded input sequences {x[n]}
• An LTI discrete-time system is BIBO stable if and only if its impulse response sequence {h[n]} is absolutely summable, i.e.
$S = \sum_{n=-\infty}^{\infty} |h[n]| < \infty$
BIBO Stability Condition of an
LTI Discrete-Time System
• Proof: Assume h[n] is a real sequence
• Since the input sequence x[n] is bounded, we have
$|x[n]| \le B_x < \infty$
• Therefore
$|y[n]| = \left|\sum_{k=-\infty}^{\infty} h[k]\,x[n-k]\right| \le \sum_{k=-\infty}^{\infty} |h[k]|\,|x[n-k]| \le B_x \sum_{k=-\infty}^{\infty} |h[k]| = B_x S$
BIBO Stability Condition of an
LTI Discrete-Time System
• Thus, $S < \infty$ implies $|y[n]| \le B_y$, indicating that y[n] is also bounded
• To prove the converse, assume y[n] is bounded, i.e., $|y[n]| \le B_y$
• Consider the input given by
$x[n] = \begin{cases} \operatorname{sgn}(h[-n]), & \text{if } h[-n] \ne 0 \\ K, & \text{if } h[-n] = 0 \end{cases}$
BIBO Stability Condition of an
LTI Discrete-Time System
where sgn(c) = +1 if c > 0 and sgn(c) = -1 if c < 0, and $K \le 1$
• Note: Since $|x[n]| \le 1$, {x[n]} is obviously bounded
• For this input, y[n] at n = 0 is
$y[0] = \sum_{k=-\infty}^{\infty} \operatorname{sgn}(h[k])\,h[k] = \sum_{k=-\infty}^{\infty} |h[k]| = S \le B_y < \infty$
• Therefore, $|y[n]| \le B_y$ implies $S < \infty$
Stability Condition of an LTI
Discrete-Time System
• Example - Consider a causal LTI discrete-time system with an impulse response
$h[n] = \alpha^n \mu[n]$
• For this system
$S = \sum_{n=-\infty}^{\infty} |\alpha^n \mu[n]| = \sum_{n=0}^{\infty} |\alpha|^n = \frac{1}{1 - |\alpha|}$ if $|\alpha| < 1$
• Therefore $S < \infty$ if $|\alpha| < 1$, for which the system is BIBO stable
• If $|\alpha| \ge 1$, the system is not BIBO stable
Causality Condition of an LTI
Discrete-Time System
• Let $x_1[n]$ and $x_2[n]$ be two input sequences with
$x_1[n] = x_2[n]$ for $n \le n_o$
• The corresponding output samples at $n_o$ of an LTI system with an impulse response {h[n]} are then given by
Causality Condition of an LTI
Discrete-Time System
 
$y_1[n_o] = \sum_{k=-\infty}^{\infty} h[k]\,x_1[n_o-k] = \sum_{k=0}^{\infty} h[k]\,x_1[n_o-k] + \sum_{k=-\infty}^{-1} h[k]\,x_1[n_o-k]$
$y_2[n_o] = \sum_{k=-\infty}^{\infty} h[k]\,x_2[n_o-k] = \sum_{k=0}^{\infty} h[k]\,x_2[n_o-k] + \sum_{k=-\infty}^{-1} h[k]\,x_2[n_o-k]$
Causality Condition of an LTI
Discrete-Time System
• If the LTI system is also causal, then
$y_1[n_o] = y_2[n_o]$
• As $x_1[n] = x_2[n]$ for $n \le n_o$,
$\sum_{k=0}^{\infty} h[k]\,x_1[n_o-k] = \sum_{k=0}^{\infty} h[k]\,x_2[n_o-k]$
• This implies
$\sum_{k=-\infty}^{-1} h[k]\,x_1[n_o-k] = \sum_{k=-\infty}^{-1} h[k]\,x_2[n_o-k]$
Causality Condition of an LTI
Discrete-Time System
• As $x_1[n] \ne x_2[n]$ for $n > n_o$, the only way the condition
$\sum_{k=-\infty}^{-1} h[k]\,x_1[n_o-k] = \sum_{k=-\infty}^{-1} h[k]\,x_2[n_o-k]$
will hold is if both sums are equal to zero, which is satisfied if
$h[k] = 0$ for k < 0
Causality Condition of an LTI
Discrete-Time System
• An LTI discrete-time system is causal if
and only if its impulse response {h[n]} is a
causal sequence
• Example - The discrete-time system defined by
$y[n] = \alpha_1 x[n] + \alpha_2 x[n-1] + \alpha_3 x[n-2] + \alpha_4 x[n-3]$
is a causal system as it has a causal impulse response
$\{h[n]\} = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4\}$
Causality Condition of an LTI
Discrete-Time System
• Example - The discrete-time accumulator defined by
$y[n] = \sum_{\ell=-\infty}^{n} x[\ell]$
is a causal system as it has a causal impulse response given by
$h[n] = \sum_{\ell=-\infty}^{n} \delta[\ell] = \mu[n]$
Causality Condition of an LTI
Discrete-Time System
• Example - The factor-of-2 interpolator defined by
$y[n] = x_u[n] + \tfrac{1}{2}\left(x_u[n-1] + x_u[n+1]\right)$
is noncausal as it has a noncausal impulse response given by
$\{h[n]\} = \{0.5,\ 1,\ 0.5\}$
Causality Condition of an LTI
Discrete-Time System
• Note: A noncausal LTI discrete-time system with a finite-length impulse response can often be realized as a causal system by inserting an appropriate amount of delay
• For example, a causal version of the factor-of-2 interpolator is obtained by delaying the input by one sample period:
$y[n] = x_u[n-1] + \tfrac{1}{2}\left(x_u[n-2] + x_u[n]\right)$
Finite-Dimensional LTI
Discrete-Time Systems
• An important subclass of LTI discrete-time systems is characterized by a linear constant-coefficient difference equation of the form
$\sum_{k=0}^{N} d_k\, y[n-k] = \sum_{k=0}^{M} p_k\, x[n-k]$
• x[n] and y[n] are, respectively, the input and the output of the system
• $\{d_k\}$ and $\{p_k\}$ are constants characterizing the system
Finite-Dimensional LTI
Discrete-Time Systems
• The order of the system is given by
max(N,M), which is the order of the difference
equation
• It is possible to implement an LTI system
characterized by a constant coefficient
difference equation as here the computation
involves two finite sums of products
Finite-Dimensional LTI
Discrete-Time Systems
• If we assume the system to be causal, then the output y[n] can be recursively computed using
$y[n] = -\sum_{k=1}^{N} \frac{d_k}{d_0}\, y[n-k] + \sum_{k=0}^{M} \frac{p_k}{d_0}\, x[n-k]$
provided $d_0 \ne 0$
• y[n] can be computed for all $n \ge n_o$, knowing x[n] and the initial conditions
$y[n_o - 1], y[n_o - 2], \ldots, y[n_o - N]$
Classification of LTI Discrete-
Time Systems
Based on Impulse Response Length -
• If the impulse response h[n] is of finite length, i.e.,
$h[n] = 0$ for $n < N_1$ and $n > N_2$, with $N_1 < N_2$
then it is known as a finite impulse response (FIR) discrete-time system
• The convolution sum description here is
$y[n] = \sum_{k=N_1}^{N_2} h[k]\,x[n-k]$
Classification of LTI Discrete-
Time Systems
• The output y[n] of an FIR LTI discrete-time
system can be computed directly from the
convolution sum as it is a finite sum of
products
• Examples of FIR LTI discrete-time systems
are the moving-average system and the
linear interpolators
Classification of LTI Discrete-
Time Systems
• If the impulse response is of infinite length,
then it is known as an infinite impulse
response (IIR) discrete-time system
• The class of IIR systems we are concerned
with in this course are characterized by
linear constant coefficient difference
equations
Classification of LTI Discrete-
Time Systems
• Example - The discrete-time accumulator
defined by
$y[n] = y[n-1] + x[n]$
is seen to be an IIR system
Classification of LTI Discrete-
Time Systems
• Example - The familiar numerical integration
formulas that are used to numerically solve
integrals of the form
$y(t) = \int_0^t x(\tau)\,d\tau$
can be shown to be characterized by linear
constant coefficient difference equations, and
hence, are examples of IIR systems
Classification of LTI Discrete-
Time Systems
• If we divide the interval of integration into n equal parts of length T, then the previous integral can be rewritten as
$y(nT) = y((n-1)T) + \int_{(n-1)T}^{nT} x(\tau)\,d\tau$
where we have set t = nT and used the notation
$y(nT) = \int_0^{nT} x(\tau)\,d\tau$
Classification of LTI Discrete-
Time Systems
• Using the trapezoidal method we can write
$\int_{(n-1)T}^{nT} x(\tau)\,d\tau \cong \frac{T}{2}\{x((n-1)T) + x(nT)\}$
• Hence, a numerical representation of the definite integral is given by
$y(nT) = y((n-1)T) + \frac{T}{2}\{x((n-1)T) + x(nT)\}$
Classification of LTI Discrete-
Time Systems
• Let y[n] = y(nT) and x[n] = x(nT)
• Then
$y(nT) = y((n-1)T) + \frac{T}{2}\{x((n-1)T) + x(nT)\}$
reduces to
$y[n] = y[n-1] + \frac{T}{2}\{x[n] + x[n-1]\}$
which is recognized as the difference-equation representation of a first-order IIR discrete-time system
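• A minimal Python/NumPy sketch of this first-order IIR trapezoidal integrator, assuming a zero initial condition; the test input 2t (whose running integral is t²) is a hypothetical choice used only to check the approximation.

```python
import numpy as np

def trapezoidal_integrator(x, T):
    """First-order IIR system y[n] = y[n-1] + (T/2)*(x[n] + x[n-1]),
    a discrete-time approximation of running integration (y[0] = 0 assumed)."""
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = y[n - 1] + 0.5 * T * (x[n] + x[n - 1])
    return y

T = 0.01
t = np.arange(0, 1, T)
y = trapezoidal_integrator(2 * t, T)      # running integral of 2t is t^2
print(np.max(np.abs(y - t ** 2)))         # tiny error (trapezoids are exact for lines)
```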
Classification of LTI Discrete-
Time Systems
Based on the Output Calculation Process
• Nonrecursive System - Here the output can
be calculated sequentially, knowing only the
present and past input samples
• Recursive System - Here the output
computation involves past output samples
in addition to the present and past input
samples
Classification of LTI Discrete-
Time Systems
Based on the Coefficients -
• Real Discrete-Time System - The impulse
response samples are real valued
• Complex Discrete-Time System - The
impulse response samples are complex
valued
Correlation of Signals
Definitions
• A measure of similarity between a pair of energy signals, x[n] and y[n], is given by the cross-correlation sequence $r_{xy}[\ell]$ defined by
$r_{xy}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,y[n-\ell]$,  $\ell = 0, \pm 1, \pm 2, \ldots$
• The parameter $\ell$, called the lag, indicates the time shift between the pair of signals
Correlation of Signals
• If y[n] is made the reference signal and we wish to shift x[n] with respect to y[n], then the corresponding cross-correlation sequence is given by
$r_{yx}[\ell] = \sum_{n=-\infty}^{\infty} y[n]\,x[n-\ell] = \sum_{m=-\infty}^{\infty} y[m+\ell]\,x[m] = r_{xy}[-\ell]$
• Thus, $r_{yx}[\ell]$ is obtained by time-reversing $r_{xy}[\ell]$
Correlation of Signals
• The autocorrelation sequence of x[n] is given by
$r_{xx}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,x[n-\ell]$
obtained by setting y[n] = x[n] in the definition of the cross-correlation sequence $r_{xy}[\ell]$
• Note: $r_{xx}[0] = \sum_{n=-\infty}^{\infty} x^2[n] = \mathcal{E}_x$, the energy of the signal x[n]
Correlation of Signals
• From the relation $r_{yx}[\ell] = r_{xy}[-\ell]$ it follows that $r_{xx}[\ell] = r_{xx}[-\ell]$, implying that $r_{xx}[\ell]$ is an even function for real x[n]
• An examination of
$r_{xy}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,y[n-\ell]$
reveals that the expression for the cross-correlation looks quite similar to that of the linear convolution
Correlation of Signals
• This similarity is much clearer if we rewrite the expression for the cross-correlation as
$r_{xy}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,y[-(\ell - n)] = x[\ell] * y[-\ell]$
• The cross-correlation of y[n] with the reference signal x[n] can be computed by processing x[n] with an LTI discrete-time system of impulse response y[-n]
(block diagram: x[n] → y[-n] → $r_{xy}[n]$)
Correlation of Signals
• Likewise, the autocorrelation of x[n] can be computed by processing x[n] with an LTI discrete-time system of impulse response x[-n]
(block diagram: x[n] → x[-n] → $r_{xx}[n]$)
Correlation Computation
Using MATLAB
• The cross-correlation and autocorrelation sequences can easily be computed using MATLAB
• Example - Consider the two finite-length sequences
$x[n] = \{1,\ 3,\ -2,\ 1,\ 2,\ -1,\ 4,\ 4,\ 2\}$
$y[n] = \{2,\ -1,\ 4,\ 1,\ -2,\ 3\}$
Correlation Computation
Using MATLAB
• The cross-correlation sequence $r_{xy}[\ell]$ computed using Program 2_7 of the text is plotted below
(figure: $r_{xy}[\ell]$, amplitude vs. lag index)
Correlation Computation
Using MATLAB
• The autocorrelation sequence $r_{xx}[\ell]$ computed using Program 2_7 is shown below
• Note: At zero lag, $r_{xx}[0]$ is the maximum
(figure: $r_{xx}[\ell]$, amplitude vs. lag index)
Correlation Computation
Using MATLAB
• The plot below shows the cross-correlation of x[n] and $y[n] = x[n - N]$ for N = 4
• Note: The peak of the cross-correlation occurs precisely at the lag given by the delay N
(figure: cross-correlation, amplitude vs. lag index)
