
CHAPTER 1

DISCRETE – TIME SIGNALS AND SYSTEMS


Signals represent information: data, voice, audio, image, video… There are many ways to classify signals, but here we mainly consider signals as either analog (continuous-time) or digital (discrete-time). Signal processing uses circuits and systems (both hardware and software) to act on an input signal so that the output signal differs from the input in the way we would like it to. Digital systems have many advantages over analog ones, such as noise immunity and ease of storage and transmission. To convert an analog signal to a digital equivalent we first sample it at regular intervals, quantize the samples, then code the quantized values into binary numbers. If only the sampling step is used we obtain discrete-time signals, but to process signals in digital systems (such as computers) we must go through all three steps. Usually the last two steps, quantization and binary encoding, are implied, so the terms discrete-time and digital are equivalent and interchangeable.
Besides the signals we want, there are many unwelcome elements, such as noise, interference, and jitter, that we want to eliminate or minimize.
Systems range from simple logic circuits and simple programmes up to complex structures including both hardware and software, such as computers. We will discuss various types of digital systems, of which the linear and time-invariant (LTI) ones are usually assumed. Typical systems are filters.

1.1 CONTINUOUS – TIME SIGNALS


A signal is the variation of an amplitude with time. The amplitude can be voltage, current, power,...
But in circuits and systems the most often used representative is voltage.
Continuous – time (also taken to mean analog) signals have their amplitudes varying
continuously with time. They are generated by electronic circuits, or by natural sources, such as
temperature, voice, video..., and converted to electric signals by sensors or transducers. Signals are
often depicted by their waveforms which are the graphical illustrations for easy visualization.

1.1.1 Mathematical representation of signals


Instead of describing signals by words or by plotting their waveforms, a more objective and concise way is to express them mathematically whenever possible. Mathematical representation of signals in the time domain and the transform domain is needed in the analysis and design of circuits and systems. For example, a simple problem such as that of Fig.1.1 cannot be solved by a verbal description of the signal and the circuit alone.

Sinusoidal signal
The sinusoid or sinewave is the most popular analog signal (Fig.1.2). It is smooth, easy to generate, and has many properties and applications. The mathematical expression is

x(t) = A cos(Ωt + Φ₀)     (1.1)


Fig.1.2: Sinusoidal signal

where A is the peak value, Ω the angular frequency (radians/s), t the time (s), Φ₀ the initial phase (radians), i.e. the phase at t = 0; Ω = 2πF with F the frequency (Hz), and T = 1/F = 2π/Ω the period (s).
The above expression contains all the parameters we need: amplitude (peak, rms, average) and periodicity (period, frequency). Other waveforms, except the constant value, do not have this compactness. For example, for the symmetric square wave (Fig.1.3) the mathematical expression consists of one part for the amplitude and another for the periodicity:

x(t) = −A ,  −T/2 ≤ t ≤ 0
     = +A ,   0 ≤ t ≤ T/2     (1.2)

x(t) = x(t ± nT) ,  n = 1, 2, 3 …
The sinusoid and the square wave are deterministic. Random signals, in general, cannot be represented mathematically; electrical noise and interference are examples of random signals.
Fig.1.3: Symmetric square wave

1.1.2 Some special signals


There are two singular signals often used in circuit analysis and signal processing.

(a) Unit impulse:


The unit impulse (Dirac delta function) can be viewed as the limit of a symmetric rectangular pulse of width τ and amplitude 1/τ as τ → 0 (Fig.1.4). Its mathematical expression is

δ(t) = ∞ ,  t = 0
       0 ,  t ≠ 0

∫_{−∞}^{∞} δ(t) dt = 1     (1.3)
According to this definition,

δ(−t) = δ(t)     (1.4)

Fig.1.4: (a) Rectangular pulse of width τ and amplitude 1/τ; (b) the unit impulse δ(t); (c) the impulse Aδ(t); (d) the delayed impulse δ(t − t₀)
When the impulse has an intensity of A instead of 1 we write Aδ(t) (Fig.1.4c). When the unit impulse is delayed by t₀ we write δ(t − t₀) (Fig.1.4d); then

δ(t − t₀) = ∞ ,  t = t₀     (1.5)
            0 ,  t ≠ t₀

∫_{−∞}^{∞} δ(t − t₀) dt = ∫_{t₁}^{t₂} δ(t − t₀) dt = 1 ,  t₁ < t₀ < t₂

A signal x(t) multiplied by the delayed impulse δ(t − t₀) picks out its value x(t₀) at t₀:

x(t) δ(t − t₀) = x(t₀) δ(t − t₀)     (1.6)

(b) Unit step:


Fig.1.5 shows the unit step. The signal rises suddenly from 0 to 1 at time t = 0 and then remains unchanged, like the closing of an electric switch. Its mathematical definition is

u(t) = 1 ,  t ≥ 0     (1.7)
       0 ,  t < 0

Fig.1.5: Unit step
The unit impulse δ(t) and the unit step u(t) are related as follows:

u(t) = ∫_{−∞}^{t} δ(t′) dt′ = 0 ,  t < 0     (1.8a)
                              1 ,  t ≥ 0

δ(t) = du(t)/dt     (1.8b)

1.1.3 Complex signals


Natural physical quantities, signals included, are real-valued. However, sometimes the imaginary operator j = √−1 is appended to them for mathematical convenience, for example to account for the phase difference between voltages and currents in AC circuits. The following is an example of a complex signal:

x(t) = 5 cos Ωt − j5 sin Ωt


A complex signal comprises a real part and an imaginary part:

x(t) = x_R(t) + jx_I(t)     (1.9)
A complex signal can be expressed in terms of its magnitude and phase in polar coordinates (Fig.1.6):

x(t) = x_R(t) + jx_I(t) = |x(t)| e^{jΦ(t)}     (1.10)

Fig.1.6: Complex signal and polar coordinates

The magnitude or modulus is denoted by |x(t)|, and the phase or phase angle by Φ(t) or arg x(t) or ∠x(t). They are

|x(t)| = √(x_R²(t) + x_I²(t))     (1.11a)

Φ(t) = tan⁻¹ [x_I(t) / x_R(t)]     (1.11b)
A point to note is that magnitude is an absolute value while amplitude is a signed value, but we do
not always need to differentiate the two terms.

Example 1.1.1
A complex signal is x(t) = 5 cos Ωt − j5 sin Ωt. Find its real part, imaginary part, magnitude and phase.

Solution
– Real part: x_R(t) = 5 cos Ωt
– Imaginary part: x_I(t) = −5 sin Ωt
– Magnitude: |x(t)| = [(5 cos Ωt)² + (−5 sin Ωt)²]^{1/2} = 5
– Phase: Φ(t) = tan⁻¹(−5 sin Ωt / 5 cos Ωt) = −Ωt, so the signal is x(t) = 5e^{−jΩt}. ∎
According to this representation we can consider a complex signal as a vector (Fig.1.7).

Fig.1.7: A complex signal x(t) and its complex conjugate x*(t)

Two complex quantities having the same real part but opposite imaginary parts are complex conjugates of each other (Fig.1.7). Thus for a given complex signal x(t), its complex conjugate is

x*(t) = x_R(t) − jx_I(t) = |x(t)| e^{−jΦ(t)}     (1.12)

1.1.4 Complex exponential signals


Equation (1.1) is a real sinusoidal signal. Complex exponentials, also called complex sinusoids, are more often used. The general expression is

x(t) = A e^{j(Ωt + Φ₀)}     (1.13)
The phasor is the vector representation of the signal (Fig.1.8). It is periodic with an angular period of 2π radians.

Fig.1.8: Phasor representing the complex exponential

From a complex exponential, the real sinusoid can be deduced in two ways. The first is to take the real part:

x_R(t) = Re[A cos(Ωt + Φ₀) + jA sin(Ωt + Φ₀)]
       = A cos(Ωt + Φ₀)     (1.14)

Fig.1.9: Adding the phasor x(t) to its complex conjugate x*(t) to form the real part 2x_R(t)

This is just the projection of the phasor onto the real axis. The second way is to use two phasors, x(t) and its complex conjugate x*(t) (Fig.1.9), and take half of their sum:

x_R(t) = (1/2)[x(t) + x*(t)]
       = (1/2)[A e^{j(Ωt + Φ₀)} + A e^{−j(Ωt + Φ₀)}]     (1.15)
Notice that when the two phasors rotate in opposite directions at angular frequencies Ω and −Ω ,
the addition always gives twice the real sinusoid.

1.2 NOISE
All the unwelcome signals of random nature superimposed on our information-carrying signal are collectively called noise. In electronic devices and circuits noise is generated by the random motion of electrons (nonuniform speed, collisions…). This is thermal noise. Active electronic devices, besides thermal noise, also generate shot noise. Some phenomena, such as thunder and the closure of electric switches, cause impulsive noise (high amplitudes but in bursts). The sun generates both thermal and impulsive noise. Noise can be classified as internal (or system) noise, and external noise or interference.
There is a special noise, rather an interference, that we should not forget: the 50 Hz/60 Hz radiation from the electric powerline. This noise is induced onto our bodies and electric circuits by way of electromagnetic waves. The power supply of a circuit is another source of 50 Hz/60 Hz interference.
As for frequency characteristics, one distinguishes white noise, pink noise… The most convenient to model, and also the most frequently mentioned, is white noise, which has a power spectral density S(F) unchanged with frequency F: Fig.1.10 shows S(F) at a fixed value of N₀/2. When white noise passes through a filter, the output noise will no longer be white due to the frequency characteristic of the filter.
Fig.1.10: The power spectral density of white noise

1.2.1 Probability density function and Cumulative distribution function


The preceding concerned the dependence of noise on frequency. Other aspects of noise are even more important. First, noise is modelled as a random variable, denoted x. The probability of occurrence of the noise at different amplitudes is given by the probability density function (PDF), denoted p(x). The probability distribution function, or cumulative distribution function (CDF), denoted P(x), is defined as

P(x) = ∫_{−∞}^{x} p(x′) dx′     (1.16)

The two basic characteristics of a random variable are the mean, denoted m (or μ), which is the first moment taken about the origin, and the variance, denoted σ², which is the second moment taken about the mean:
m = E[x] = ∫_{−∞}^{∞} x p(x) dx     (1.17)

σ² = E[(x − m)²] = ∫_{−∞}^{∞} (x − m)² p(x) dx     (1.18)
where E stands for expectation (or expected value).
The square root of the variance is called the standard deviation, denoted σ.
In a uniform distribution, every value of x within a range is equally likely:

p(x) = 1/(b − a) ,  a ≤ x ≤ b     (1.19)
The PDF and CDF are shown in Fig. 1.11. The mean and variance are, respectively,
Fig.1.11: Uniform distribution having mean m

m = (a + b)/2     (1.20)

σ² = (b − a)²/12     (1.21)
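A short simulation sketch (an assumed illustration, not part of the text) can confirm (1.20) and (1.21) for arbitrary a and b:

import numpy as np

a, b = -1.0, 3.0                                  # assumed interval
x = np.random.default_rng(0).uniform(a, b, 1_000_000)

print(x.mean(), (a + b) / 2)                      # both close to 1.0, Eq. (1.20)
print(x.var(), (b - a) ** 2 / 12)                 # both close to 1.333, Eq. (1.21)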
1.2.2 Gaussian distribution
In reality, many random variables have a Gaussian distribution (also called normal distribution). The Gaussian PDF and CDF are, respectively,

p(x) = [1/(√(2π) σ)] e^{−x²/2σ²}     (1.22)

P(x) = ∫_{−∞}^{x} p(x′) dx′     (1.23)

where σ² is the variance (σ is the standard deviation). The distribution has the shape of a bell (Fig.1.12).
Fig.1.12: Gaussian distribution: PDF p(x) and CDF P(x)

The peak probability (at x = 0) is

p_P = [1/(√(2π) σ)] e^{−0/2σ²} = 1/(√(2π) σ)     (1.24)

At a distance x = ±σ the probability is

p_σ = [1/(√(2π) σ)] e^{−σ²/2σ²} = (1/√e) · [1/(√(2π) σ)] ≈ 0.606 · [1/(√(2π) σ)]     (1.25)
The smaller σ² (hence σ) is, the narrower the bell, that is, the more concentrated the distribution.
When a DC voltage m is corrupted by additive noise, the probability distribution is the Gaussian distribution of the noise but shifted to the new mean m (Fig.1.13). The mean m can be positive or negative. The probability distribution becomes

p(x) = [1/(√(2π) σ)] e^{−(x−m)²/2σ²}     (1.26)
Fig.1.13: Gaussian distribution having positive mean m
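The following sketch (an assumed illustration in plain NumPy, not the book's code) evaluates the Gaussian PDF (1.26) and checks the peak value (1.24) and the ratio 1/√e ≈ 0.606 at x = m ± σ from (1.25):

import numpy as np

def gaussian_pdf(x, m=0.0, sigma=1.0):
    # Eq. (1.26): p(x) = exp(-(x-m)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
    return np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

m, sigma = 1.5, 2.0                       # assumed mean and standard deviation
pP = gaussian_pdf(m, m, sigma)            # peak value, Eq. (1.24)
p_sigma = gaussian_pdf(m + sigma, m, sigma)

print(np.isclose(pP, 1 / (np.sqrt(2 * np.pi) * sigma)))   # True
print(p_sigma / pP)                       # 0.6065... = 1/sqrt(e), Eq. (1.25)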

1.3 SAMPLING OF SIGNALS


Analog signals, in general, are continuous in time. In digital signal processing we do not use the whole analog signal but replace it by its amplitudes taken at regular intervals. This is sampling. The problem is that we must sample the signal so that the samples represent it correctly, i.e. so that from the samples we can reconstruct the original analog signal correctly (though perhaps not perfectly).
1.3.1 Sampling of continuous-time signals
Sampling a continuous-time signal turns it into a corresponding discrete-time signal so that it can be processed in digital systems. Actually, the sampling is followed by two other operations, quantization and binary encoding. In practice, analog-to-digital converters (abbreviated ADC or A/D) perform all three steps.
Fig.1.14: An analog signal x(t) and its samples x(nT)

Fig.1.14 depicts the sampling of a signal at the regular instants t = nT, where n is an integer, positive or negative, that is, n = 0, 1, 2, …, −1, −2, … This is the uniform sampling that we use routinely; nonuniform sampling is rarely mentioned. We denote the samples of the signal x(t) as x(nT) or, sometimes, as x̂(n) for convenience.
How can we sample a signal? Let's look at Fig.1.15. On top is the signal x(t); in the middle is the sampling signal s(t), which is a regular sequence of narrow pulses of width δt and amplitude 1. On multiplying the two signals together we obtain the instantaneous values of x(t), which are the samples x(nT). Thus sampling is a multiplication of the analog signal x(t) and a sampling signal (or sampling function) s(t):

x̂(t) = x(nT) = x(t) s(t)     (1.27)
Fig.1.15: Sampling by a sequence of narrow pulses: (a) the analog signal x(t); (b) the sampling signal s(t); (c) the samples x(nT) (the discrete-time signal)


Fig.1.16a illustrates the process, and Fig.1.16b shows an electric switch as a way to implement the sampling: when the contact closes for a short time the signal passes, and when the contact opens no output signal appears.

Fig.1.16: The principle of sampling: (a) multiplying; (b) switching


The time distance T is called the sampling interval or sampling period, and f_s = 1/T is the sampling frequency (Hz or samples/sec), or sampling rate. The samples are written as x(nT), but T is usually taken as 1; hence the samples will be denoted universally, unless otherwise specified, as x(n). The integer n becomes an index, but sometimes we call it the time index or sample number.
Looking at Fig.1.14 and Fig.1.16 we may ask whether the sampling is appropriate, that is, whether the samples are too close, too far apart, or just right. This is really a big question and will be answered soon. For the time being, let's examine the sampling of a sinewave x(t) having period T_x and frequency F_x = 1/T_x at the sampling rate f_s (Fig.1.17). The figure shows the same sinewave at three different sampling frequencies f_s. In the first case, f_s = 8F_x, the samples are quite close and represent the signal very well (from the samples we can reconstruct the signal). In the second case, f_s = 4F_x, the samples can still represent the signal (imagine that we connect the successive sample values to get a triangular wave which is then passed through an analog lowpass filter to smooth out the waveform). In the last case, f_s = 2F_x, the sampling rate is twice the signal frequency. This is the critical case: the samples may or may not represent the signal, depending on the positions of the sampling points.

0 Tx 0 Tx 0 Tx

(a) fs= 8Fx (b) fs= 4Fx (c) fs= 2Fx

Fig.1.17: Sampling a sinewave of frequency F x =1/T x at different sampling rates


fs
1.3.2 The sampling theorem
Let's consider a continuous-time signal x(t) carrying some information, such as voice. Its magnitude frequency spectrum |X(F)| is assumed to be as in Fig.1.18a, where F_M is its maximum frequency.

Fig.1.18: Two-sided frequency spectrum of (a) the analog signal; (b) the samples when f_s > 2F_M; (c) the samples when f_s = 2F_M; (d) the samples when f_s < 2F_M

The signal is sampled by a sequence of narrow pulses of width δt and amplitude 1, as before. The Fourier series expansion (see Section 3.1) of this sampling function is

s(t) = δt/T + (2δt/T) Σ_{m=1}^{∞} cos 2πm f_s t     (1.28)

where T is the period of the sampling function, i.e. the sampling interval. Hence the samples are
x̂(t) = x(t) s(t) = (δt/T) x(t) + (2δt/T) Σ_{m=1}^{∞} x(t) cos 2πm f_s t     (1.29)

This shows that the frequency spectrum X̂(F) of the sampled signal consists of that of the analog signal (with a multiplying factor δt/T) and its versions shifted to ±f_s, ±2f_s, ±3f_s, … This spectrum can also be obtained using the Fourier transform (see Section 3.2) instead of the Fourier series.
In Fig.1.18b the spectrum bands do not overlap, so we can recover the analog signal by lowpass filtering the central band, or by bandpass filtering any other band. All the frequency bands contain the same information but at different frequencies. In Fig.1.18c we can still recover the signal, but the filter must be very precise. In Fig.1.18d the bands overlap and there is no way to recover the analog signal. So the limiting case is Fig.1.18c. From this observation, the sampling theorem states as follows.
In order that the samples represent correctly the original analog signal, the sampling
frequency must be greater than twice the maximum frequency component of the analog signal:
fs > 2FM (1.30)
The limiting frequency 2F_M is called the Nyquist rate, and the central frequency interval [−f_s/2, f_s/2] is called the Nyquist interval.
For example if a waveform contains the fundamental frequency of 1 kHz and a second
harmonic 2 kHz, then the sampling rate must be greater than 2 x 2 kHz = 4 kHz, say 5 kHz or more.
Another example is for the voice in the telephone system. The voice is limited by a high quality analog
filter at FM = 3.4 kHz, then the sampling frequency must be greater than 2 x 3.4 = 6.8 kHz, say 8 kHz
or more.
In the case of Fig.1.18d there is a phenomenon called aliasing that will be discussed next.

1.3.3 Aliasing
We would like to know what happens when the signal is sampled below the Nyquist rate, i.e. the sampling theorem is not satisfied. Look at Fig.1.19. The low-frequency signal x₁(t) is sampled 4 times, at S₁, S₂, S₃ and S₄, in one period of the signal, that is, f_s = 4F_x1. From these samples we would be able to recover x₁(t). For the high-frequency signal x₂(t) the same 4 samples S₁, S₂, S₃ and S₄ span 9 of its cycles, so the sampling frequency is just (4/9)F_x2, that is, under the Nyquist rate. From these sample points we will recover x₁(t) and not x₂(t). Thus the high-frequency signal, when undersampled, is recovered as a low-frequency signal. This phenomenon is called aliasing, and the recovered low frequency, which is false, is called the alias of the original high frequency.

Fig.1.19: The low-frequency signal x₁(t) and the high-frequency signal x₂(t) are sampled at the same points S₁, S₂, S₃, S₄, S₅
To avoid aliasing there are two approaches: one is to raise the sampling frequency to satisfy the sampling theorem; the other is to filter off the unnecessary high-frequency components of the continuous-time signal. We limit the signal frequency by an effective lowpass filter, called an antialiasing prefilter, so that the remaining highest frequency is less than half the intended sampling rate. If the filter is not perfect we must allow some margin. For example, in voice processing, if the lowpass filter still lets frequencies above 3.4 kHz through, even at small amplitudes, the sampling frequency should be 8 kHz or higher.
The aliasing phenomenon can be shown mathematically. Let's consider a complex exponential signal at frequency F which is sampled at interval T to yield the samples x(nT):

x(t) = e^{j2πFt}  ⇒  x(nT) = e^{j2πFnT}

Now consider other signals at frequencies F ± mf_s, m = 0, 1, 2, …, which are sampled to give x_m(nT):

x_m(t) = e^{j2π(F±mf_s)t}  ⇒  x_m(nT) = e^{j2π(F±mf_s)nT}

Because f_s T = 1 and e^{±j2πmn} = 1, we have

x_m(nT) = e^{j2π(F±mf_s)nT} = e^{j2πFnT} e^{±j2πmn} = e^{j2πFnT} = x(nT)     (1.31)
This result means that the signals x_m(t) and x(t), at different frequencies, have the same samples. When we recover signals from these samples, those lying within the Nyquist interval [−f_s/2, f_s/2] (Fig.1.18b) are recovered correctly, whereas signals with frequencies outside the Nyquist interval may be aliased into it. In general, for an analog signal of frequency F sampled at the rate f_s, we first add and subtract multiples of the sampling frequency:

f₀ = F ± mf_s ,  m = 0, 1, 2, …     (1.32)

and then look for the frequencies lying within the Nyquist interval; they are the reconstructed frequencies.

Example 1.3.1
A signal at frequency 50 Hz is sampled at 80 Hz. What frequency will be recovered? Repeat when it is sampled at 120 Hz.

Solution
With F = 50 Hz and f_s = 80 Hz the signal is undersampled (the sampling theorem is not satisfied). The Nyquist interval is [−40 Hz, 40 Hz]. The samples do not represent only the frequency F = 50 Hz but all the frequencies F ± mf_s = 50 ± m80, m = 0, 1, 2, …, that is,

f₀ = 50, 50 ± 80, 50 ± 160, 50 ± 240 …
   = 50, 130, −30, 210, −110, 290, −190 …

Only the frequency −30 Hz lies within the Nyquist interval, so the recovered signal will be at −30 Hz (30 Hz with phase reversal). This signal is the alias of the original signal at 50 Hz. Notice that 30 Hz is just the difference 80 Hz − 50 Hz.
When the sampling frequency is 120 Hz the sampling theorem is satisfied, and the original frequency of 50 Hz will be recovered. None of the other frequencies f₀ = 50 ± m120 = 50, 170, −70, 290, −190, … lie in the Nyquist interval [−60 Hz, 60 Hz], except the original 50 Hz. ∎
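The folding of frequencies into the Nyquist interval can be captured in a small helper function (an assumed illustration, not from the text); it reproduces the two cases of this example:

def alias(F, fs):
    # Fold the analog frequency F into the Nyquist interval [-fs/2, fs/2),
    # i.e. pick the member of the family F +/- m*fs of Eq. (1.32) inside it.
    return (F + fs / 2) % fs - fs / 2

print(alias(50, 80))     # -30.0 Hz : the alias of 50 Hz sampled at 80 Hz
print(alias(50, 120))    #  50.0 Hz : sampling theorem satisfied, no aliasing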

Example 1.3.2
A DSP system uses the sampling frequency f_s = 20 kHz to process an audio signal band-limited to 10 kHz, but the lowpass filter still allows frequencies up to 30 kHz to pass through, even at small amplitudes. What signal will we get back from the samples?
Solution
For the sampling rate f_s = 20 kHz the Nyquist interval is [−10 kHz, 10 kHz]. Thus the audio frequencies 0–10 kHz will be recovered as they are. The frequencies from 10–20 kHz will be aliased into the range 10–0 kHz, and the frequencies from 20–30 kHz will be aliased into the range 0–10 kHz. The resulting audio will be distorted due to the superposition of the three frequency bands. ∎

Example 1.3.3
Consider the signal

x(t) = 4 + 3cos πt + 2cos 2πt + cos 3πt     (t in ms)

(a) Find the Nyquist rate.
(b) If the signal is sampled at half the Nyquist rate, find the signal x₀(t) that is the alias of x(t).
Solution
(a) Because the unit of time is ms, the given signal has 4 frequencies:

F₁ = 0 Hz, F₂ = 0.5 kHz, F₃ = 1 kHz, F₄ = 1.5 kHz

The highest frequency is F_max = F₄ = 1.5 kHz, so the Nyquist rate is 2 × 1.5 kHz = 3 kHz. When the signal is sampled at rates greater than 3 kHz there will be no aliasing.
(b) When the signal is sampled at 1.5 kHz aliasing will occur. Now the Nyquist interval is [−0.75, 0.75) kHz. The two frequencies F₁ and F₂ lie within this interval and thus will not be aliased, while F₃ and F₄ lie outside the Nyquist interval and thus will be aliased:

F₃₀ = F₃ − f_s = 1 − 1.5 = −0.5 kHz
F₄₀ = F₄ − f_s = 1.5 − 1.5 = 0 kHz

The recovered signal x₀(t) has the frequencies F₁₀ = F₁, F₂₀ = F₂, F₃₀ and F₄₀. Thus the recovered analog signal is

x₀(t) = 4cos 2πF₁t + 3cos 2πF₂t + 2cos 2πF₃₀t + cos 2πF₄₀t
     = 4 + 3cos πt + 2cos(−πt) + cos 0 = 5 + 5cos πt

Fig.1.20: Example 1.3.3: the given signal x(t) and the recovered signal x₀(t)

The signals x(t) and x₀(t) are plotted in Fig.1.20. Notice that x(t) and x₀(t) coincide only at the sample points. x₀(t) lies entirely within the Nyquist interval, that is, it comprises lower-frequency components and is hence smoother. The recovered signal can also be found by taking the spectrum of x(t) and extracting the spectral components lying within the Nyquist interval. ∎
As mentioned previously, an analog lowpass prefilter must be used to eliminate the unnecessary high-frequency content of the input signal so that the sampling frequency need not be too high. When the filter is not ideal (that is, does not have an abrupt cutoff) it will be hard to eliminate the aliasing completely.
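A numerical sketch (assumed, not the book's code) confirms that x(t) of Example 1.3.3 and its alias x₀(t) share the same samples at f_s = 1.5 kHz:

import numpy as np

fs = 1.5                                  # kHz; with t in ms, F*t is in cycles
t = np.arange(10) / fs                    # the sample instants t = nT

x = 4 + 3 * np.cos(np.pi * t) + 2 * np.cos(2 * np.pi * t) + np.cos(3 * np.pi * t)
x0 = 5 + 5 * np.cos(np.pi * t)            # the aliased signal

print(np.allclose(x, x0))                 # True: the two coincide at the samples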

Example 1.3.4
An audio signal consists of the components

x_a(t) = 2Acos 10πt + 2Bcos 30πt + 2Ccos 50πt + 2Dcos 60πt + 2Ecos 90πt + 2Fcos 125πt     (t in ms)

The signal passes through an analog prefilter H(f), is sampled at the rate of 40 kHz, and is then recovered by an ideal analog filter over [−f_s/2, f_s/2] (Fig.1.21a).

Fig.1.21a: Example 1.3.4 (an audio DSP system): analog input x_a(t) → prefilter H(f) → filtered analog signal x(t) → sampler at 40 kHz → digital signal (samples) x(nT) → ideal reconstructor → recovered analog signal x₀(t)

Determine the recovered analog signal x₀(t) in the following situations:
(a) Without the prefilter, that is, H(f) = 1 at all frequencies.
(b) When H(f) is an ideal lowpass filter with cutoff frequency 20 kHz.
(c) When H(f) is a realistic filter with the characteristic shown in Fig.1.21b, that is, flat from 0 to 20 kHz and then attenuating at 60 dB/octave. Ignore the effect of the phase response.
Solution
The audio signal has the frequency components

F_A = 5 kHz, F_B = 15 kHz, F_C = 25 kHz, F_D = 30 kHz, F_E = 45 kHz, F_F = 62.5 kHz

Because only the frequencies F_A and F_B are audible, the audible signal is

x₀(t) = x₀₁(t) = 2Acos 10πt + 2Bcos 30πt

We replace the cosinusoids by complex exponentials and take the Fourier transform. For the first component,

2Acos 2πF_A t = A e^{j2πF_A t} + A e^{−j2πF_A t}  ↔  Aδ(F − F_A) + Aδ(F + F_A)     (1.34)
Fig.1.21b: Example 1.3.4 continued: the realistic prefilter characteristic H(f), flat (0 dB) up to 20 kHz, then rolling off at −60 dB/octave

The sampling process repeats this spectrum at positive and negative multiples of f_s. The components C, D, E, F, lying outside the Nyquist interval [−20, 20] kHz, will give rise to the aliased frequencies

F_C = 25 kHz  →  F_C − f_s = 25 − 40 = −15 kHz
F_D = 30 kHz  →  F_D − f_s = 30 − 40 = −10 kHz
F_E = 45 kHz  →  F_E − f_s = 45 − 40 = 5 kHz
F_F = 62.5 kHz →  F_F − 2f_s = 62.5 − 2×40 = −17.5 kHz

(a) When there is no prefilter, the signal x(t) is the same as the input signal x_a(t). The recovering postfilter recovers the frequency components lying within the Nyquist interval [−20 kHz, 20 kHz] and also the components aliased into this interval:

x₀(t) = 2Acos 10πt + 2Bcos 30πt + 2Ccos(−30πt) + 2Dcos(−20πt) + 2Ecos 10πt + 2Fcos(−35πt)
     = 2(A+E)cos 10πt + 2(B+C)cos 30πt + 2Dcos 20πt + 2Fcos 35πt     (1.35)

The audible signal now consists of the original 5 and 15 kHz components together with the aliased frequencies 10 and 17.5 kHz. Thus the audible output in x₀(t) differs from that of the input signal x_a(t).
(b) When an ideal lowpass filter with a sharp cutoff at f_s/2 = 20 kHz is used, the signal x(t) is just the signal x₀₁(t) comprising the frequencies F_A and F_B as before; all other frequencies are eliminated. Thus there will be no aliasing.
(c) When a realistic filter is used, as in Fig.1.21b, each component at the output of the filter will be altered in magnitude and phase. For example, the F_A component becomes

2Acos 2πF_A t  →(H)  2A|H(F_A)| cos[2πF_A t + Φ(F_A)]

Ignoring the phase change, the output is

x(t) = 2A|H(F_A)|cos 10πt + 2B|H(F_B)|cos 30πt + 2C|H(F_C)|cos 50πt + 2D|H(F_D)|cos 60πt + 2E|H(F_E)|cos 90πt + 2F|H(F_F)|cos 125πt

Since the F_A and F_B components lie within the region where the filter response is 1 (0 dB), |H(F_A)| = |H(F_B)| = 1.

Fig.1.21d: Example 1.3.4 continued: output spectrum |Y(f)| of the realistic prefilter; within the Nyquist interval the C, D, E, F components appear attenuated by −19.3 dB, −35.1 dB, −70.1 dB and −98.6 dB, respectively

We know that the octave number n between two frequencies is defined as

n = log₂(F₂/F₁)  or  F₂/F₁ = 2ⁿ     (1.36)
The octave number of F_C from the cutoff frequency f_s/2 is

log₂[F_C/(f_s/2)] = log₂(25/20) = 0.322 octave

Therefore the attenuation at F_C is

60 dB/octave × 0.322 octave ≈ 19.3 dB
On the other hand, the attenuation A_dB at a frequency F with respect to the cutoff frequency f_s/2 is

A_dB = −20 log₁₀ [|H(F)| / |H(f_s/2)|]  or  |H(F)| / |H(f_s/2)| = 10^{−A/20}     (1.37)

where |H(f_s/2)| = 1 (0 dB). Thus the magnitude response at F_C is

|H(F_C)| = 10^{−19.3/20} = 1/9
The other responses are evaluated similarly. The results are

|H(F_D)| = 10^{−35.1/20} = 1/57
|H(F_E)| = 10^{−70.1/20} = 1/3234
|H(F_F)| = 10^{−98.6/20} = 1/85114
Thus the output x(t) of the filter is

x(t) = 2Acos 10πt + 2Bcos 30πt + (2C/9)cos 50πt + (2D/57)cos 60πt + (2E/3234)cos 90πt + (2F/85114)cos 125πt

Because the frequency components outside the Nyquist interval are attenuated, the aliased signals are also attenuated. Thus the recovered signal is

x₀(t) = 2(A + E/3234)cos 10πt + 2(B + C/9)cos 30πt + (2D/57)cos 20πt + (2F/85114)cos 35πt     (1.38)
The frequency magnitude response of a practical lowpass prefilter is shown in Fig.1.22. The stopband frequency f_sb and the stopband attenuation A_c are chosen such that the aliasing is attenuated to the desired level. The sampling frequency is chosen as

f_s = f_pb + f_sb     (1.39)

so that the Nyquist frequency f_s/2 lies in the middle of the transition region. The attenuation at a frequency f compared to a reference frequency f₀ is

A(f)_dB = −20 log₁₀ [|H(f)| / |H(f₀)|]     (1.40)

The response of a lowpass filter at frequencies F much greater than the cutoff frequency behaves as

|H(F)| = a/F^N ,  F large     (1.41)

where a is a constant depending on the type of filter and N is the order of the filter. The attenuation at high frequencies, taking a as 1, is

Fig.1.22: A practical lowpass antialiasing prefilter: passband up to f_pb, transition region containing f_s/2, stopband from f_sb with attenuation A_c

A(f)_dB = −20 log₁₀ (1/F^N) = 20N log₁₀ F ,  F large     (1.42)
For a realistic filter with frequency response H(f) the output spectrum is

X(f) = H(f) X_a(f)     (1.43a)

where X_a(f) is the spectrum of the analog input signal. The dB attenuation is given by

A_X(f) = A(f) + A_Xa(f)     (1.43b)

where A_X(f) = −20 log₁₀ |X(f)/X(f₀)| and A_Xa(f) = −20 log₁₀ |X_a(f)/X_a(f₀)|. When the signal x(t) is sampled, its spectrum X(f) is repeated at multiples of the sampling frequency, and the attenuation A_X(f) determines the overlap of the repeated spectra, that is, the degree of aliasing. The prefilter is chosen such that the attenuation A(f) of the filter together with the attenuation A_Xa(f) of the input spectrum are high enough to reduce the aliasing to the desired level.

Example 1.3.5
An audio signal has a magnitude spectrum flat up to 4 kHz, then attenuated at 15 dB/octave. The sampling frequency is 12 kHz.
(a) If no prefilter is used, determine the aliasing into the audio frequency band of interest, up to 4 kHz.
(b) Use an appropriate prefilter to reduce the aliased components in the audio frequency band of interest by more than 50 dB.
Solution
(a) Due to the sampling, the central spectral band of X(F) repeats at every multiple of the sampling frequency f_s = 12 kHz (Fig.1.23). Because of the even symmetry of the spectral bands the shaded overlap parts are equal, and hence the attenuations a and b at the frequencies 4 and 8 kHz are equal: a = b = 15 dB. The frequencies in the shaded regions are attenuated by only 15 dB or more. Such an attenuation is not good enough.

Fig.1.23: Example 1.3.5: the central spectral band of X(F) (0 dB, decaying at −15 dB/octave) and its ±1st repetitions at ±12 kHz

(b) If a prefilter with passband edge F_pb = 4 kHz is used, then the stopband frequency is F_sb = f_s − F_pb = 12 − 4 = 8 kHz. The attenuation of the signal itself at this frequency is 15 dB, and the attenuation of the filter there is A_sb (dB). Because a = b and with the requirement a ≥ 50 dB,

b = 15 + A_sb ≥ 50 (dB)
A_sb ≥ 50 − 15 = 35 dB

Thus the prefilter has a flat response up to 4 kHz and a stopband starting at 8 kHz with at least 35 dB attenuation.
Example 1.3.6
An analog signal has a flat spectrum up to frequency F_M, after which the spectrum decays at α dB/octave. The antialiasing prefilter also has a flat response up to frequency F_M, after which the response decays at β dB/octave. It is required that within the frequency band of interest, up to F_M, the aliased components be attenuated by more than A dB. Find the minimum sampling frequency.

Solution
The passband cutoff frequency is F_pb = F_M and the stopband starts at F_sb = f_s − F_M. Above the frequency F_M the attenuation at a frequency F is the sum of the attenuation of the signal and that of the prefilter:

A_x(F) = α log₂(F/F_M) + β log₂(F/F_M) = (α + β) log₂(F/F_M)     (1.44)

Fig.1.24: Example 1.3.6: the central spectral band of X(F) and its ±1st repetitions at ±f_s
The combined attenuation is (α + β) dB/octave. Due to the even symmetry of the spectra we have

a = A_x(F_sb) = A_x(f_s − F_M)

Thus the requirement a ≥ A leads to A_x(f_s − F_M) ≥ A. Substituting into (1.44) we obtain

(α + β) log₂[(f_s − F_M)/F_M] ≥ A

The minimum sampling frequency is therefore

f_s = F_M + 2^{A/(α+β)} F_M
If the attenuations α and β are given in dB/decade instead of dB/octave, log₁₀ is used instead of log₂ and the result becomes

f_s = F_M + 10^{A/(α+β)} F_M

The above examples show that to relax the requirements on the antialiasing prefilter the sampling frequency must be much higher than the Nyquist rate (at least a few times); then filters of order 4 or so are adequate. If the sampling frequency is close to the Nyquist rate, filters of order 10 or so must be used.
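The closing formula translates directly into code; the sketch below (helper name and example numbers assumed, not from the text) computes the minimum sampling frequency for attenuations given in dB/octave:

def min_sampling_rate(FM, A, alpha, beta):
    # f_s = F_M + 2**(A/(alpha+beta)) * F_M, attenuations in dB/octave
    return FM * (1 + 2 ** (A / (alpha + beta)))

# e.g. F_M = 4 kHz, required A = 60 dB, signal 15 dB/oct, prefilter 45 dB/oct:
print(min_sampling_rate(4.0, 60.0, 15.0, 45.0))   # 12.0 kHz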
We end this section with the block diagram of a general, complete DSP system (Fig.1.25). The digital output signal y(n) from the DSP unit is converted by the digital-to-analog converter (DAC or D/A) back to a coarse analog signal, which is then lowpass filtered by the postfilter. The finally reconstructed analog signal x₀(t) may be the same as the original input or different, depending on the processing of the DSP block and the quality of the other blocks.

Fig.1.25: Block diagram of a general complete DSP system: analog input x_a(t) → antialiasing prefilter (lowpass) → ADC (sampling, quantization, coding) → digital signal x(n) → DSP → digital signal y(n) → DAC → postfilter (lowpass) → recovered analog signal x₀(t)

1.4 DISCRETE – TIME SIGNALS


As has been said, the samples of a continuous-time signal form the corresponding discrete-time signal. These samples have to be quantized and binary-encoded to really become a digital signal. However, the last two processes, quantization and coding, are implied; thus when we say discrete-time and digital we usually mean the same thing.

Fig.1.26: Discrete-time signals: (a) infinite duration; (b) finite duration

Fig.1.26 gives an example. The amplitudes of the samples x(n) can be anything: positive or negative, zero or infinite, integer or fractional, real or complex (usually assumed real). The signal may be of infinite duration, that is, existing at all times, or of finite duration, that is, existing for a short span, usually taken around the origin.
For convenience, we can write the two signals in Fig.1.26 as, respectively,

x(n) = [ … 1, −2, 2, 3, 1, −1, 2, −2, 1, 3 … ]
x(n) = [ −2, −1, 2, 2, −1, 3 ]

In this form, a discrete-time signal is usually called a sequence (or discrete-time sequence). Notice that we have to specify the sample at the origin, for example by writing it in boldface, or underlined, or with an arrow.

1.4.1 Basic discrete-time signals


In principle, discrete-time signals are samples of corresponding continuous-time sources. Thus we also have basic discrete-time signals, similar to the continuous-time case (Section 1.1.2).

(a) Unit sample


The unit sample, also called the unit impulse, is a signal having amplitude 1 at the origin and zero elsewhere (Fig.1.27a):

δ(n) = 1 ,  n = 0     (1.45)
       0 ,  n ≠ 0

Notice that this discrete-time signal is not the sampled version of the analog counterpart (Section 1.1.2), but we still have δ(n) = δ(−n), as in (1.4).

Fig.1.27: Three basic signals: (a) the unit sample δ(n); (b) the unit step u(n); (c) the unit ramp r(n)


(b) Unit step
The unit step is defined as (Fig.1.27b)

u(n) = 1 ,  n ≥ 0     (1.46)
       0 ,  n < 0 (or n ≤ −1)

(c) Unit ramp
This is a divergent signal (the amplitude goes to ∞ as n goes to ∞), defined as (Fig.1.27c)

r(n) = n ,  n ≥ 0     (1.47)
       0 ,  n < 0 (or n ≤ −1)

(d) Real exponential
The real exponential is quite a popular signal, defined as

x(n) = aⁿ ,  n ≥ 0     (1.48)
       0 ,  n < 0

where a is a real constant. There are four different cases, as seen in Fig.1.28, of which two are convergent and the other two divergent.

Fig.1.28: Real exponential: (a) 0 < a < 1; (b) a > 1; (c) −1 < a < 0; (d) a < −1
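For concreteness, here is a minimal NumPy sketch (an assumed illustration, not from the text) generating the four basic sequences (1.45)–(1.48) on a finite index range:

import numpy as np

n = np.arange(-5, 10)                            # a finite window of indices
delta = (n == 0).astype(float)                   # unit sample, Eq. (1.45)
u = (n >= 0).astype(float)                       # unit step,   Eq. (1.46)
r = (n * (n >= 0)).astype(float)                 # unit ramp,   Eq. (1.47)
a = 0.8                                          # assumed constant, 0 < a < 1
x = np.where(n >= 0, a ** n.astype(float), 0.0)  # real exponential, Eq. (1.48)

print(delta, u, r, x, sep="\n")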

1.4.2 Sinusoid, digital frequency, periodicity, complex exponential

The cosinusoidal signal (Equation (1.1)) is sampled at period T:

x(t) = A cos(Ωt + Φ₀)  →  x(nT) = A cos(ΩnT + Φ₀)     (1.49)

where A is the amplitude, Ω = 2πF the angular frequency (rad/s), F the frequency (Hz), Φ₀ the initial phase (rad), and T the sampling interval (s).
We write a similar expression for the discrete-time (digital) cosinusoid:

x(n) = A cos(ωn + Φ₀)     (1.50)

where A is the amplitude, n the time index, and the quantity ω will be discussed shortly. For example,

x(n) = A cos(πn/6 + π/3)

is plotted in Fig.1.29, where the sinusoidal waveshape and the periodicity are quite obvious. But this is not always so (see later).
Fig.1.29: The signal x(n) = A cos(πn/6 + π/3)
Comparing (1.49) and (1.50) we have the following very fundamental relation:

ω = ΩT     (1.51)

The unit of ω is (rad/s)(s) = rad, but it is usually interpreted as radians/sample, and ω is called the digital angular frequency. We can also define ω = 2πf with f the digital frequency (cycles/sample). The digital sinusoid completes one cycle when

nω = 2π radians

or

ω = 2π/n radians/sample

Hence ω can be viewed as the angle subtended by two consecutive samples when the samples are uniformly distributed on a circle centered at the origin.
Because Ω = 2πF and T = 1/f_s, we have from Equation (1.51)

ω = 2π F/f_s     (1.52)

or

f = F/f_s     (1.53)

(Remember: F is the analog frequency in Hz, f_s the sampling frequency in samples/sec, f the digital frequency in cycles/sample.) Notice that the digital frequencies ω and f depend on both the analog frequency F and the sampling frequency f_s. Notice also that both ω and f are continuous variables (only the time index n is discrete). We will use the digital angular frequency ω more often and will call it the digital frequency for short. However, some authors prefer using f.
The relation between ω and F is shown in Fig.1.30. The upper part shows the relation on a linear scale, whereas the lower part shows it on a circle. Remember that the analog frequency F is not periodic, that is, it can take any value between −∞ and ∞, while the digital frequency ω is of circular nature: it varies around a circle with periodicity 2π, and with the central period [−π, π] corresponding to the Nyquist interval [−f_s/2, f_s/2] (Fig.1.18). This means that sinusoids with different frequencies ω in that interval are distinct, while sinusoids with frequencies outside that interval are aliased into it.

3 fs fs fs 3 fs
...  fs  0 fs ... F (Hz)
2 2 2


2
... -3 -2 - 0 2 3 ...
Nyquist interval

  f 
F  s 
2  4 

(F=fs/2)
- (F=-fs/2) 0

  f 
 F   s 
2  4 


24

The periodicity of discrete sinusoids

In Fig.1.29 the signal is cos(πn/6 + π/3). The envelope of the samples clearly looks both sinusoidal and periodic. The discrete signal is indeed periodic with a period of 12 samples (by counting the samples, and by computing 2π/(π/6) = 12).
However, there are cases where the discrete signal is periodic but the envelope does not look sinusoidal even if it looks periodic. For example, consider the signal x(n) = cos(5πn/6) plotted in Fig.1.31. The envelope does not bear any resemblance to a sinusoid, but it looks, and is, periodic with a period of 12 samples (by counting; computing 2π/(5π/6) does not give 12).
Also, there are cases where the samples of a discrete-time signal lie on a sinusoidal and periodic envelope but the signal itself is not periodic, that is, the samples do not constitute a periodic sequence.
Fig.1.31: Signal x(n) = cos(5πn/6)

So the periodicity of discrete sinusoids is rather confusing, and we may expect some criterion. For this, let's begin with a discrete sinusoid cos ωn that is periodic with a period of N samples:

cos ωn = cos ω(n + N) = cos(ωn + ωN)

The condition is that ωN must be some integer multiple m of 2π:

ωN = 2πm ,  m integer

or

ω/2π = f = m/N     (1.54)

This means that ω/2π (or f) must be a rational number (a ratio of two integers). The actual period N equals the denominator of m/N after simplification (cancelling the common factors). If ω/2π is not a rational number the signal is not periodic (nonperiodic or aperiodic). As said earlier, even though a discrete sinusoid may or may not be periodic, its envelope of samples is always periodic (though perhaps not sinusoidal). Following are some examples.
• Signal cos(πn/6) is periodic because ω/2π = (π/6)/2π = 1/12 (a rational number), and the period is 12.
• Signal cos(5πn/6) is periodic because ω/2π = (5π/6)/2π = 5/12 (a rational number), and the period is 12 (the period is not 2π/(5π/6) as it would be for an analog sinusoid).
• Signal cos(πn/8) is periodic with period 16, but cos(0.4n) is not (ω/2π = 0.4/2π is not a rational number). We can plot the first 30 samples of these two signals to check.
A point about the periodicity of discrete sinusoids should be added before we leave the topic: a small change in digital frequency can lead to a large change in period. For example, with ω/2π = 51/100 the period is 100, but with ω/2π = 50/100 = 1/2 the period is just 2.
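The criterion (1.54) is easy to mechanize: given ω/2π as a ratio of integers, the period is the reduced denominator. A sketch (helper name assumed, not from the text):

from fractions import Fraction

def period(num, den):
    # Period of cos(w*n) when w/(2*pi) = num/den; Eq. (1.54) says the period
    # is the denominator after cancelling common factors.
    return Fraction(num, den).denominator

print(period(1, 12))     # 12  : cos(pi*n/6),   w/2pi = 1/12
print(period(5, 12))     # 12  : cos(5*pi*n/6), w/2pi = 5/12
print(period(1, 16))     # 16  : cos(pi*n/8)
print(period(51, 100))   # 100 : w/2pi = 51/100
print(period(50, 100))   # 2   : a small frequency change, a large period change
# cos(0.4*n): w/2pi = 0.2/pi is irrational, so that signal is not periodic.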

Complex exponential
We now consider signals of the type x(n) = aⁿ with the constant a complex, called the complex exponential, or complex sinusoid by some authors. Let

a = r e^{jω}     (1.55)

Then the signal is

x(n) = (r e^{jω})ⁿ = rⁿ e^{jωn} = rⁿ (cos ωn + j sin ωn)     (1.56)

whose real and imaginary components are, respectively,

x_R(n) = rⁿ cos ωn
x_I(n) = rⁿ sin ωn

From these we get the magnitude and phase

|x(n)| = √(x_R²(n) + x_I²(n)) = rⁿ

Φ(n) = tan⁻¹ [x_I(n)/x_R(n)] = tan⁻¹(tan ωn) = ωn

Actually, these results can be seen straightaway from Equation (1.56).

Example 1.4.1
Plot x_R(n), x_I(n), |x(n)| and Φ(n) when r = 0.9 and ω = π/10.
Solution
The necessary expressions are

x(n) = 0.9ⁿ e^{jπn/10}
x_R(n) = 0.9ⁿ cos(πn/10)
x_I(n) = 0.9ⁿ sin(πn/10)
|x(n)| = 0.9ⁿ
Φ(n) = πn/10

Fig.1.32 shows the results. Notice, especially, the phase response Φ(n): by convention the phase is limited to the range [−π, π]. At n = 10 the phase is π, which is the same as −π.

Fig.1.32: Example 1.4.1
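The quantities of Example 1.4.1 can be computed directly (a sketch, not the book's code); np.angle wraps the phase into (−π, π], matching the convention mentioned above:

import numpy as np

n = np.arange(31)
x = 0.9 ** n * np.exp(1j * np.pi * n / 10)   # x(n) = 0.9^n e^{j pi n/10}

xR, xI = x.real, x.imag                      # 0.9^n cos(pi n/10), 0.9^n sin(pi n/10)
mag = np.abs(x)                              # 0.9^n
phase = np.angle(x)                          # pi*n/10 wrapped into (-pi, pi]

print(np.allclose(mag, 0.9 ** n))            # True
print(phase[10])                             # approximately pi (equivalently -pi)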

1.5 DISCRETE-TIME SYSTEMS


The previous sections were about discrete-time signals. The rest of this chapter is reserved for discrete-time (or digital) systems. A system acts upon the input signal x(n) to produce the output signal y(n) (Fig.1.33). Denoting by S[.] the action (or reaction) of the system, we write

Input signal Output signal


x(n) SYSTEM y(n)
S
Fig.1.33 : General model of discrete – time
systems
27

x(n) →(S) y(n)     (1.57a)

or

y(n) = S[x(n)]     (1.57b)

For example, if the system is a multiplier by 2, then

y(n) = 2x(n)

1.5.1 Input-output signal difference equation


Usually we pay attention only to the action of the system on the input signal and do not care about the structure (hardware and software) of the system. One way to characterize the system is to specify its input-output signal difference equation (analogous to the differential equations of analog systems). An example is the previous equation y(n) = 2x(n), but in most cases the equation is not so simple.

Example 1.5.1
The system is described by its input-output equation:
(a) y(n) = x(n−1)
(b) y(n) = x(n+1)
(c) y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]
(d) y(n) = max[x(n−1), x(n), x(n+1)]
The common input signal is

x(n) = |n| ,  −3 ≤ n ≤ 3
       0 ,  otherwise
Find the output signal in each case.

Solution
The common input signal is the sequence (Fig.1.34)

x(n) = [ 0, 3, 2, 1, 0, 1, 2, 3, 0 ]

(the middle 0 is the sample at the origin). The crude way to find the output signal is to compute it from the equation for n = 0, 1, 2, … until the output remains zero, then to compute the output at n = −1, −2, … until it remains zero. A better and much quicker way is to study the signal equation to understand its action and then deduce the output straightaway.

(a) y(n) = x(n−1)

y(0) = x(−1) = 1
y(1) = x(0) = 0
y(2) = x(1) = 1
y(3) = x(2) = 2
y(4) = x(3) = 3
y(5) = x(4) = 0
y(−1) = x(−2) = 2
y(−2) = x(−3) = 3
y(−3) = x(−4) = 0

Thus the output is

y(n) = [ 0, 3, 2, 1, 0, 1, 2, 3, 0 ]

The output is just the input delayed by one index (sample); for example, the output y(0) at n = 0 is the sample x(−1) at n = −1. So the system is a unit delay.
(b) y(n) = x(n+1)
Observing the equation, we see that the output y(0) at n = 0 is the input x(1) at n = 1, so the system is a unit advance. The output is the input shifted one index to the left (into the past).
(c) y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]
The output at index n is the average of the last (past) input x(n−1), the present input x(n) and the next (future) input x(n+1). The output is computed as follows:

y(0) = (1/3)[x(−1) + x(0) + x(1)] = (1/3)(1+0+1) = 2/3
y(1) = (1/3)[x(0) + x(1) + x(2)] = (1/3)(0+1+2) = 1
y(2) = (1/3)[x(1) + x(2) + x(3)] = (1/3)(1+2+3) = 2
…

Continuing leads to the complete output sequence

y(n) = [ 0, 1, 5/3, 2, 1, 2/3, 1, 2, 5/3, 1, 0 ]
(d) y(n) = max[x(n-1), x(n), x(n+1)]
The present output is the maximum of the three inputs: last, present, and next. Computation is as
follows.
y(0) = max[x(-1), x(0), x(1)] = max(1,0,1) = 1
y(1) = max[x(0), x(1), x(2)] = max(0,1,2) = 2
y(2) = max[x(1), x(2), x(3)] = max(1,2,3) = 3

Continuing will lead to the complete output sequence


y(n) = [ 0, 3, 3, 3, 2, 1, 2, 3, 3, 3, 0 ] ∎
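The four systems can be checked numerically; the sketch below (array alignment and helper names assumed, not from the text) evaluates them on a zero-padded copy of the input x(n) = |n|, −3 ≤ n ≤ 3:

import numpy as np

n = np.arange(-5, 6)
x = np.where(np.abs(n) <= 3, np.abs(n), 0).astype(float)

xp = np.pad(x, 1)                                  # one extra zero at each end
past, present, future = xp[:-2], xp[1:-1], xp[2:]  # x(n-1), x(n), x(n+1)

y_delay = past                                     # (a) unit delay
y_advance = future                                 # (b) unit advance
y_avg = (past + present + future) / 3              # (c) moving average
y_max = np.maximum(np.maximum(past, present), future)  # (d) moving maximum

print(y_avg)   # approximately [0, 1, 5/3, 2, 1, 2/3, 1, 2, 5/3, 1, 0]
print(y_max)   # [0. 3. 3. 3. 2. 1. 2. 3. 3. 3. 0.]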

Example 1.5.2
For the input sequence

x(n) = [ 0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 0 ]

(origin at the middle sample, 6), find the output sequence of the system
(a) y(n) = x(2n)
(b) y(n) = x(n/2) for n even, and 0 for n odd

Solution
(a) Let’s proceed the evaluation:
y( 0) = x( 0) = 6
y( 1) = x( 2) = 4
y( 2) = x( 4) = 2
...
y(-1) = x(-2) = 4
y(-2) = x(-4) = 2
...
The system keeps one sample then drops the next one. This is the rate compression or down-
sampling or sample decimation.
(b) The evaluation proceeds as follows:
y( 0) = x( 0) = 6
y( 1) = x(1/2) = 0
y( 2) = x( 1) = 5
y( 3) = x(3/2) = 0
...
y(-1) = x(-1/2) = 0
y(-2) = x( -1) = 5
y(-3) = x(-3/2) = 0
...

The system doubles the number of samples by inserting a zero-amplitude sample between every two consecutive samples. This is rate expansion or up-sampling or sample interpolation. ∎
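Both rate changers reduce to NumPy slicing (a sketch assuming the first array element is taken as n = 0; not from the text):

import numpy as np

x = np.array([0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 0])

y_down = x[::2]                      # y(n) = x(2n): keep every other sample
y_up = np.zeros(2 * len(x) - 1, dtype=x.dtype)
y_up[::2] = x                        # y(n) = x(n/2) for even n, 0 for odd n

print(y_down)                        # [0 2 4 6 4 2 0]
print(y_up[:9])                      # [0 0 1 0 2 0 3 0 4]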

1.5.2 Basic building blocks of discrete-time systems


The structure of discrete-time systems consists of various building blocks. A block may be realized (implemented) in hardware (logic circuits), in software (programmes), or in a combination of both. Following are a number of system building blocks (schematic symbols). Using these building blocks we can draw the block diagram (or signal flow graph) describing any system.
• Signal adder (summer): y(n) = x₁(n) + x₂(n)
• Signal subtractor: y(n) = x₁(n) − x₂(n). Instead of using a multiplier by −1 we put a minus sign at the lower entry of the adder to show the subtraction.

• Scalar multiplier (scaler, gain): y(n) = a·x(n). The scalar a is assumed real; since a itself may be positive or negative, all entries to an adder can then be taken as positive (adding).
• Signal multiplier: y(n) = x₁(n)·x₂(n)
• Signal squarer: y(n) = x²(n)
• Unit time delay: y(n) = x(n−1). The symbol z⁻¹ is used since, in the z-transform (see Chapter 4), multiplication by z⁻¹ has the effect of a delay of one unit of time. Some authors use the symbol D or T instead of z⁻¹. A double delay, y(n) = x(n−2), is drawn as two z⁻¹ blocks in cascade or as one z⁻² block.
• Unit time advance: y(n) = x(n+1), with symbol z (or z⁺¹). A double time advance, y(n) = x(n+2), uses z². The plus sign in the exponent is optional.

Example 1.5.3
Draw the block diagram (structure) of the systems
(a) y(n) = (1/5)[4x(n) + 3x(n−1) − 2x(n−2)]
(b) y(n) = 2x₁(n) − 3x₂(n) + 4x₁(n)x₂(n)
(c) y(n) = −5x(n) + 2x(n−2) − 0.8y(n−1) + 0.5y(n−3)
Solution

Fig.1.35: Example 1.5.3: block diagrams of systems (a), (b), (c), and an alternative form (d) of system (a)

In Fig.1.35c we can combine the two adders into just one with 4 inputs.
In the last system there is feedback from the output back into the system, so that the output depends not only on the input at various times but also on some previous outputs. Such a system is called regenerative or recursive.
There are other ways to draw the schematic structure. For example, Fig.1.35a can take the form shown in Fig.1.35d. Chapter 7 will present filter structures in detail. ∎
Conversely, if the schematic diagram of a system is given, we can easily construct its input-output signal equation.

1.6 TYPES OF DISCRETE-TIME SYSTEMS


Discrete-time (digital) systems comprise several basic types with different characteristics. The categorization gives us a deeper understanding of systems and guides the choice of an appropriate analysis method.

1.6.1 Memoryless systems, and systems with memory


A memoryless (or static) system does not need memory. The input and output signals take place at the
same instant. For example
y(n) = 2x(n)
y(n) = 2x(n) - x2(n)
Actually there is a small delay between input and output due to the propagation delay of the system.
A system with memory (or dynamic) needs memory to store past and future values needed for
the processing. For example
y(n) = x(n) + 0.8x(n−1) : one memory cell
y(n) = (1/3)[x(n−1) + x(n) + x(n+1)] : two memory cells
y(n) = Σ_{k=−∞}^{+∞} x(n−k) : infinite memory

1.6.2 Causal and noncausal systems


In a causal system, the result comes after the cause or, at the earliest, at the same time (simultaneously). This is to say that the output at index n depends only on the inputs at n, n−1, n−2, …, and not on those at n+1, n+2, … In noncausal systems, on the other hand, the output also depends on future inputs. Following are some examples.
(a) y(n) = 2x(n) − 3x²(n) : causal
(b) y(n) = (1/3)[x(n−1) + x(n) + x(n+1)] : noncausal due to the last term
(c) y(n) = Σ_{k=0}^{∞} x(n−k) : causal
(d) y(n) = Σ_{k=−∞}^{+∞} x(k) : noncausal
(e) y(n) = Σ_{k=−∞}^{+∞} x(n−k) : noncausal
(f) y(n) = x(−n) : noncausal
(g) y(n) = x(n²) : noncausal
(h) y(n) = x(2n) : noncausal
In real-time processing (or on-line processing) systems must be causal; in off-line processing (or batch processing or block processing) systems can be noncausal, since all samples have been stored in memory and many of them are future values with respect to the chosen time origin.
The concept of causality also applies to signals. Specifically, a signal x(n) can be classified as
- Causal (or right-sided) if x(n) = 0 for n < 0
- Anticausal (or left-sided) if x(n) = 0 for n ≥ 0
- Two-sided (or bilateral) if x(n) exists for all n (< 0 and ≥ 0)
For example, the unit step u(n) is causal, u(−n−1) is anticausal, and a^{|n|} is two-sided. We can plot these signals to really see the difference.

1.6.3 Time-invariant and time-variant systems


The characteristics of a system may change with time, so that the output depends on the input as well as on the instant the input is applied. This is a time-variant system. On the other hand, many systems can be assumed time-invariant, that is, the output does not depend on the time the input is applied. The terms shift-variant and shift-invariant can be used instead of time-variant and time-invariant, respectively.
Time (shift) invariance is judged as follows:

If   x(n) → y(n)
then x(n − k) → y(n − k)

This criterion is illustrated in Fig.1.36.

Input signal Output signal
x(n) SYSTEM y(n)
S

x(n) y(n)
delayed by k delayed by k
Fig.1.36 : Time (shift) invariant system

33

Example 1.6.4
Are the following systems time-invariant?
(a) y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]
(b) y(n) = nx(n)
(c) y(n) = x(−n)

Solution
(a) For the system

y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]

if the present input is delayed by k (that is, replacing x(n) by x(n−k)…) then the output is

y(n−k) = (1/3)[x(n−1−k) + x(n−k) + x(n+1−k)]

and if the present output is delayed by k (that is, replacing n by n−k)

y′(n−k) = (1/3)[x(n−1−k) + x(n−k) + x(n+1−k)]

Since y′(n−k) = y(n−k), the system is time-invariant.
(b) For the system y(n) = nx(n), if the present input is delayed by k the output is

y(n−k) = n x(n−k)

and if the present output is delayed by k

y′(n−k) = (n−k) x(n−k)

Since y′(n−k) ≠ y(n−k), the system is time-variant.
(c) For the system y(n) = x(−n) we have

y(n−k) = x(−n−k)
y′(n−k) = x[−(n−k)] = x(−n+k) ≠ y(n−k)

So the system is time-variant. ∎
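The test of Fig.1.36 can also be run numerically (a sketch with an assumed random test input, not from the text): delay the input by k, separately delay the output by k, and compare.

import numpy as np

x = np.random.default_rng(1).standard_normal(64)
k = 3

def delay(v, k):
    # v(n-k) with zero fill at the left edge
    return np.concatenate([np.zeros(k), v[:-k]])

systems = {"y(n) = n x(n)": lambda v: np.arange(len(v)) * v,
           "y(n) = 2 x(n)": lambda v: 2 * v}
for name, S in systems.items():
    print(name, "time-invariant:", np.allclose(S(delay(x, k)), delay(S(x), k)))
# y(n) = n x(n) time-invariant: False
# y(n) = 2 x(n) time-invariant: True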

1.6.4 Linear and nonlinear systems


The significance of linearity and nonlinearity for discrete-time systems is about the same as for analog
systems. Suppose two input signals x1(n) and x2(n) when applied separately to a system give
corresponding outputs y1(n) and y2(n). Now if a linear combination of the two inputs give the same
linear combination of the outputs then the system is linear, otherwise the system is nonlinear. Thus
linearity implies both scalability (proportionality) and superposition. The definition of linearity is
illustrated in Fig.1.37.
If    x₁(n) → y₁(n)  and  x₂(n) → y₂(n)
then  a₁x₁(n) + a₂x₂(n) → a₁y₁(n) + a₂y₂(n)   (a₁ and a₂ constants)

Fig.1.37: Linear systems

Example 1.6.2
Consider the linearity of the following systems:
(a) y(n) = n²x(n)
(b) y(n) = nx(n²)
(c) y(n) = nx²(n)
(d) y(n) = Ax(n) + B, with A, B constants

Solution
(a) The system is y(n) = n²x(n). The two separate inputs and corresponding outputs are

y₁(n) = n²x₁(n)
y₂(n) = n²x₂(n)

Now for the combined input

x(n) = a₁x₁(n) + a₂x₂(n)

the output is

y(n) = n²[a₁x₁(n) + a₂x₂(n)] = a₁[n²x₁(n)] + a₂[n²x₂(n)] = a₁y₁(n) + a₂y₂(n)

So the system is linear.
(b) The system is
y(n) = nx(n²)
The procedure is summarized as follows.
x1(n) → y1(n) = nx1(n²)
x2(n) → y2(n) = nx2(n²)
x(n) = a1x1(n) + a2x2(n)
then
y(n) = a1nx1(n²) + a2nx2(n²) = a1y1(n) + a2y2(n)
So the system is linear.
(c) The system is
y(n) = x²(n)
The reasoning is
x1(n) → y1(n) = x1²(n)
x2(n) → y2(n) = x2²(n)
x(n) = a1x1(n) + a2x2(n)
then
y(n) = [a1x1(n) + a2x2(n)]² = a1²x1²(n) + a2²x2²(n) + 2a1a2x1(n)x2(n) = a1²y1(n) + a2²y2(n) + 2a1a2x1(n)x2(n)
So the system is nonlinear.
(d) The system is
y(n) = Ax(n) + B, A, B constants
The reasoning is
x1(n) → y1(n) = Ax1(n) + B
x2(n) → y2(n) = Ax2(n) + B
x(n) = a1x1(n) + a2x2(n)
y(n) = A[a1x1(n) + a2x2(n)] + B
= a1Ax1(n) + a2Ax2(n) + B
= a1[Ax1(n)+B] + a2[Ax2(n)+B] + B - a1B - a2B
= a1y1(n) + a2y2(n) + (1- a1 - a2)B
Due to the presence of the last term the system is nonlinear. When B = 0 (the system is then said to be
relaxed) the system becomes linear.
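The superposition test is also easy to run numerically. A minimal Matlab sketch for system (d), y(n) = Ax(n) + B; all the constants and the random test inputs below are illustrative:

    % Linearity check for y(n) = A*x(n) + B
    A = 2; B = 1; a1 = 3; a2 = -1;      % illustrative constants
    x1 = randn(1,16); x2 = randn(1,16); % two arbitrary test inputs
    S = @(x) A*x + B;                   % the system
    lhs = S(a1*x1 + a2*x2);             % response to the combined input
    rhs = a1*S(x1) + a2*S(x2);          % combined individual responses
    max(abs(lhs - rhs))                 % = |(1-a1-a2)*B| = 1: nonlinear; 0 when B = 0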
Hereafter, all systems are assumed to be linear and time (shift) invariant (LTI or LSI), unless
otherwise stated.
There remains one more important characteristic of systems: stability. Systems are either
stable or unstable. Stability will be discussed in the next chapter (section 2.4).
1.7 CHAPTER SUMMARY
1.1 Continuous – Time signals
Two elements of signals are amplitude and time. Continuous-time (or analog) signals vary
continuously with time. Waveforms are graphical illustrations of signals.
Signals are usually represented mathematically by expressions or equations. The sinusoidal signal
(sinusoid) is very popular and has a very concise mathematical expression showing all its parameters
(amplitude, frequency, phase) (1.1). Other waveforms, such as the square wave, cannot be expressed
mathematically so concisely (1.2).
Signals can be deterministic or random (such as electric noise). Besides the sinusoid there are
two special signals, namely the unit impulse (1.3) and the unit step (1.7).
Signals can be real or complex. Concerning a complex signal x(t) we have its magnitude (or
modulus), denoted |x(t)|, its phase (or phase angle), denoted θ(t), arg x(t) or ∠x(t), and its complex
conjugate x*(t) (1.13). The complex exponential (1.14) is a good representation of complex signals; it
can be considered as a phasor (rotating vector). We should know how to obtain the real part from a
complex exponential (1.15) and (1.16).
1.2 Noise
There exist several types of noise, the most common being thermal noise. Noise can be internal or
external (interference). White noise has a power spectral density that does not change with frequency.
When white noise passes through a filter, the output is no longer white. The probability of occurrence
of different noise amplitudes is very important in analysis; this concerns random variables, the
probability density function (PDF), and the cumulative distribution function (CDF). The uniform and
Gaussian distributions are the ones usually considered. The two main parameters of a distribution are
its mean and variance.
1.3 Signal sampling
Sampling a continuous-time signal turns it into a discrete-time one. In most cases we use uniform
sampling. Sampling is a multiplying process, implemented, in principle, just by a switch (Fig.1.16). We
call T the sampling interval (or sampling period), and fs = 1/T the sampling frequency (or sampling
rate). The time index n is an integer, positive or negative. For an analog signal x, we denote by Fx and
Tx its frequency and period, respectively.
In order for the samples to represent the original analog signal correctly, the sampling frequency
must be at least twice its maximum frequency component (fs ≥ 2FM) (1.30). This is the sampling
theorem. The Nyquist rate is 2FM and the Nyquist interval is [-fs/2, fs/2].
When a signal is sampled below the Nyquist rate, aliasing occurs. In most cases this should be
avoided, either by using an analog lowpass antialiasing prefilter or by raising the sampling frequency
to satisfy the sampling theorem. Section 1.3.3 shows how to find the alias (aliased signal). Several
examples illustrate aliasing and the requirement on analog prefilters.
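As a quick numerical illustration of aliasing (the frequencies below are illustrative, not tied to a particular example in the text), a 7 kHz sinusoid sampled at fs = 8 kHz yields exactly the same samples as a 1 kHz sinusoid:

    % Aliasing: F = 7 kHz sampled at fs = 8 kHz looks like 1 kHz
    fs = 8000; F = 7000; n = 0:15;
    x  = cos(2*pi*(F/fs)*n);          % samples of the 7 kHz sinusoid
    xa = cos(2*pi*((F - fs)/fs)*n);   % samples of the -1 kHz (i.e. 1 kHz) alias
    max(abs(x - xa))                  % ~0 (up to rounding): identical samples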
1.4 Discrete-time signals
Discrete-time signals must be quantized and binary encoded to become binary digital signals, but these
two processes are usually understood; hence discrete-time signals and digital signals usually mean the
same thing. Similarly, we don’t usually differentiate between discrete-time signal processing (DTSP)
and digital signal processing (DSP). Example 1.3.2 gives a general DSP system.
Discrete-time signals may be of infinite or finite duration. A discrete-time signal is just a
sequence of numbers (real or complex) and can be written as such.
Basic digital signals are the unit sample or unit impulse (1.45), unit step (1.46), unit ramp (1.47),
real exponential (1.48), sinusoid (1.49) and complex exponential (1.55) and (1.56).
When comparing an analog sinusoid to the corresponding digital sinusoid we obtain the useful
relations (1.52), (1.53)
ω = ΩT = 2π F/fs and f = F/fs
where ω is the digital angular frequency (radians/sample), ω = 2πf with f the digital frequency
(cycles/sample), Ω the analog angular frequency (radians/sec), Ω = 2πF with F the analog
frequency (Hertz or cycles/sec), and fs the sampling frequency (Hertz or samples/sec).
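For instance (an illustrative numerical check of these relations):

    % A 1 kHz analog sinusoid sampled at 8 kHz
    F = 1000; fs = 8000;
    f = F/fs              % digital frequency: 0.125 cycles/sample
    w = 2*pi*f            % digital angular frequency: pi/4 = 0.7854 radians/sample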
A sampled analog sinusoidal signal may be periodic or not (1.54).
1.5 Discrete-time systems
Discrete-time (or digital) systems process the input signal x(n) to give an output signal y(n) that differs
from the input in some aspect (amplitude, frequency, phase…).
A system is described (or characterized) by its input-output difference equation.
In particular, y(n) = x(n - 1) represents a unit delay, and y(n) = x(n + 1) a unit advance.
Section 1.5.2 gives the basic building blocks of discrete-time systems comprising adder
(summer), subtractor, scalar multiplier, signal multiplier, signal squaring, unit delay, and unit advance.
From a given input-output difference equation of a system we can build its structure using the
appropriate basic building blocks, and vice versa. Systems can be nonrecursive (feedforward) or
recursive (with feedback).
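As a small illustration of a recursive system (the coefficient 0.5 is an arbitrary choice), Matlab's built-in filter function implements such difference equations directly:

    % First-order recursive system y(n) = x(n) + 0.5*y(n-1)
    x = [1 zeros(1,9)];          % unit impulse input
    y = filter(1, [1 -0.5], x)   % impulse response: 1, 0.5, 0.25, ...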
1.6 Types of discrete-time systems
First we have memoryless systems and systems with memory.
Next come causal and noncausal systems. Real-time processing systems must be causal.
Time-invariant systems (Fig.1.36) are easier to analyze than time-variant systems.
Linearity of systems (Fig.1.37) includes both proportionality and superposition.
Unless otherwise specified, we assume only linear and time-invariant (LTI) systems. Time-invariant
also means shift-invariant, hence the name linear and shift-invariant (LSI) systems.
APPENDIX CHAPTER 1
GENERATION OF NOISE
One of the functions of digital systems, perhaps the most widely used, is filtering. Digital
filters, like analog ones, extract the desired signal from the background noise, or remove the
background noise from the signal. Actually, digital filters do much more in digital systems than analog
filters do in analog systems. As mentioned briefly in section 1.2, there are several types of noise
generated within, or penetrating into, systems, making the original signals, or data in general, embedded
in a noise background. In various parts of several subsequent chapters, especially in problems, we need
to generate noise for the purpose of illustrating the effectiveness of digital filters.
Noise can be generated by electronic circuits but here we only mention the generation of noise
by software.
In the case of Matlab, successive calls to the function rand, with no input arguments, will
generate a sequence of uniformly distributed random values, that is, uniform white noise, in the
interval [0,1]. More generally, the function rand(m,n) returns a random m×n matrix. From a uniform
white noise r(n), for 0 ≤ n ≤ N, in the interval [0,1], we can generate uniform white noise v(n) in an
arbitrary interval [a,b] by introducing an offset and a scale factor as follows:
v(n) = a + (b - a) r(n), 0 ≤ n ≤ N (1.46)
N is assumed large.
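A minimal Matlab sketch of (1.46), using a = -5, b = 5 and N = 512 (the values of the example that follows):

    % Uniform white noise over [a,b] from rand (uniform over [0,1]), eq. (1.46)
    a = -5; b = 5; N = 512;
    r = rand(1, N+1);            % r(n), 0 <= n <= N
    v = a + (b - a)*r;           % v(n) uniform over [a,b]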
The average power of a uniformly distributed random variable x is the expectation (expected
value) of x²:
Px = E[x²] = ∫_{-∞}^{∞} x² p(x) dx (1.47)
For uniform white noise it is
Pu = (1/(b - a)) ∫_a^b x² dx = (b³ - a³)/(3(b - a)) (1.48)
For example, with a = -5, b = 5, and N = 512, the average power computed from the above expression is
Pu = (5³ - (-5)³)/(3(5 - (-5))) = 250/30 = 8.333
Fig.1.33: Uniform white noise over [-5,5] with N = 512
Fig.1.34: Power spectral density (PSD) SN(k) of uniform white noise
(the horizontal line is the average power Pu)
whereas the actual average power computed from the generated values of v(n) is given by
Pa = (1/N) Σ_{n=0}^{N-1} v²(n) = 8.781 (1.49)
As the length N increases, the difference between the theoretical power Pu and the actual power Pa
decreases. The noise waveform v(n) is shown in Fig. 1.33 and its power spectral density (PSD)
SN(k) in Fig. 1.34.
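A short Matlab sketch comparing the theoretical power (1.48) with the estimate (1.49); the estimate varies from run to run (the text's run gave 8.781) and approaches Pu as N grows:

    % Theoretical vs. estimated average power of uniform noise over [a,b]
    a = -5; b = 5; N = 512;
    v  = a + (b - a)*rand(1, N);        % N noise samples, eq. (1.46)
    Pu = (b^3 - a^3)/(3*(b - a))        % theoretical power, eq. (1.48): 8.333
    Pa = sum(v.^2)/N                    % estimated power, eq. (1.49)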
In Matlab, the function randn generates Gaussian white noise having zero mean and unit standard
deviation (σ = 1). If r(n), for 0 ≤ n ≤ N, is such a white noise sequence, we generate Gaussian white
noise having mean m and standard deviation σ by using the function
v(n) = m + σ r(n), 0 ≤ n ≤ N (1.50)
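A minimal sketch of (1.50); the mean and standard deviation values are illustrative:

    % Gaussian white noise with mean m and standard deviation sigma, eq. (1.50)
    m = 0; sigma = 2; N = 512;
    r = randn(1, N+1);           % zero-mean, unit-variance Gaussian samples
    v = m + sigma*r;             % mean m, standard deviation sigma
    [mean(v) var(v)]             % approximately [m sigma^2]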
The noise has power at all frequencies; its amplitude distribution, however, is Gaussian rather than
uniform. The noise average power is
Pg = ∫_{-∞}^{∞} x² p(x) dx = (2/(√(2π) σ)) ∫_0^{∞} x² e^(-x²/(2σ²)) dx = σ²
Thus the average power of zero-mean Gaussian white noise equals its variance.
Just as in the case of uniform white noise, the actual average power computed from generated
values of v(n) is given by (1.49).