CHAPTER 1
Discrete-Time Signals and Systems
Sinusoidal signal
Sinusoid or sinewave is the most popular analog signal (Fig.1.2). It is smooth, easy to generate, and has many properties and applications. The mathematical expression is

x(t) = A cos(Ωt + Φ0)                                                   (1.1)

Fig.1.2: Sinusoidal signal

where A is the peak value, Ω the angular frequency (radians/s), t the time (s), and Φ0 the initial phase (radians), that is, the phase at t = 0. Here Ω = 2πF with F the frequency (Hz), and T = 1/F = 2π/Ω is the period (s).
The above expression contains all the parameters we need: amplitude (peak, rms, average) and periodicity (period, frequency). Other waveforms, except the constant value, do not have this compactness. For example, for the symmetric square wave (Fig.1.3) the mathematical expression consists of one part for the amplitude and another for the periodicity:
x(t) = –A ,   –T/2 ≤ t ≤ 0
     = +A ,    0 ≤ t ≤ T/2                                              (1.2)

x(t) = x(t ± nT) ,   n = 1, 2, 3, …
The sinusoid and the square wave are deterministic signals. Random signals, in general, cannot be represented mathematically. Electrical noise and interference are examples of random signals.
Fig.1.3: Symmetric square wave
Fig.1.4: The unit impulse δ(t) (a, b), the impulse Aδ(t) of intensity A (c), and the delayed impulse δ(t – t0) (d)
When the impulse has intensity A instead of 1 we write Aδ(t) (Fig.1.4c). When the unit impulse is delayed by t0 we write δ(t – t0) (Fig.1.4d); then

δ(t – t0) = ∞ ,   t = t0
          = 0 ,   t ≠ t0                                                (1.5)
∫_{−∞}^{∞} δ(t − t0) dt = ∫_{t1}^{t2} δ(t − t0) dt = 1 ,   t1 < t0 < t2
A signal x(t) multiplied by the delayed impulse δ(t − t0) retains only its value x(t0) at t = t0:

x(t) δ(t − t0) = x(t0) δ(t − t0)                                        (1.6)
Fig.1.5: (a) The unit step u(t); (b) the step Au(t) of amplitude A; (c) the delayed unit step u(t − t0)
The unit impulse δ(t) and the unit step u(t) are related as follows:

u(t) = ∫_{−∞}^{t} δ(t′) dt′ = 0 ,   t < 0
                            = 1 ,   t ≥ 0                               (1.8a)

δ(t) = du(t)/dt                                                         (1.8b)
Fig.1.6: A complex signal x(t) in the complex plane (polar coordinates)
The magnitude or modulus is denoted by |x(t)|, and the phase or phase angle by Φ(t), arg x(t), or ∠x(t). They are

|x(t)| = [xR²(t) + xI²(t)]^{1/2}
Φ(t) = tan⁻¹[xI(t)/xR(t)]
Example 1.1.1
A complex signal is x(t) = 5cosΩt − j5cosΩt. Find its real part, imaginary part, magnitude and phase.
Solution
– Real part: xR(t) = 5cosΩt
– Imaginary part: xI(t) = −5cosΩt
– Magnitude: |x(t)| = [(5cosΩt)² + (−5cosΩt)²]^{1/2} = 5√2 |cosΩt|
– Phase: Φ(t) = tan⁻¹(−5cosΩt / 5cosΩt) = tan⁻¹(−1) = −45°, independent of t.
According to this representation we can consider a complex signal as a vector and write it as x(t).
Fig.1.7: A complex signal x(t) and its complex conjugate x*(t)

Two complex quantities having the same real part but opposite imaginary parts are complex conjugates of each other (Fig.1.7). Thus, for a given complex signal x(t) in Equation (1.12), its complex conjugate is

x*(t) = xR(t) − jxI(t) = |x(t)| e^{−jΦ(t)}

that is, the magnitude is unchanged while the phase changes sign from Φ(t) to −Φ(t) radians.
Fig.1.8: The complex exponential (phasor) x(t) of amplitude A rotating in the complex plane
From a complex exponential, its real sinusoidal part can be deduced in two ways. The first is to take the real part:

xR(t) = Re[A cos(Ωt + Φ0) + jA sin(Ωt + Φ0)]
      = A cos(Ωt + Φ0)                                                  (1.14)
Fig.1.9: Adding a phasor x(t) to its complex conjugate x*(t) to form the real part

This is just the projection of the phasor onto the real axis. The second way is to use two phasors, x(t) and its complex conjugate x*(t) (Fig.1.9), and then take half the sum:

xR(t) = (1/2)[x(t) + x*(t)]
      = (1/2)[A e^{j(Ωt + Φ0)} + A e^{−j(Ωt + Φ0)}]
      = A cos(Ωt + Φ0)                                                  (1.15)
Notice that when the two phasors rotate in opposite directions at angular frequencies Ω and −Ω ,
the addition always gives twice the real sinusoid.
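As a quick numerical check, the identity in (1.15) can be verified in MATLAB (a minimal sketch; the amplitude, frequency and phase values below are arbitrary illustrative choices, not taken from the text):

% Half-sum of a phasor and its conjugate gives the real sinusoid (Eq. 1.15)
A = 2; Omega = 2*pi*50; Phi0 = pi/3;      % arbitrary illustrative values
t = 0:1e-4:0.02;                          % time axis, 0.1 ms steps
x = A*exp(1j*(Omega*t + Phi0));           % rotating phasor
xR = 0.5*(x + conj(x));                   % half the sum of the two phasors
max(abs(xR - A*cos(Omega*t + Phi0)))      % practically zero (round-off only)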
1.2 NOISE
All the unwelcome signals of random nature that superimpose on the signal carrying information are collectively called noise. In electronic devices and circuits, noise is generated by the random motion of electrons (nonuniform speed, collisions, …). This is thermal noise. Active electronic devices, besides thermal noise, also generate shot noise. Some phenomena, such as thunder and the closure of electric switches, cause impulsive noise (high amplitudes but in bursts). The sun generates both thermal and impulsive noise. Noise can be classified as internal (or system) noise and external noise, or interference.
There is a special noise, rather an interference, that we should not forget: the 50 Hz/60 Hz radiation from the electric power line. This noise is induced onto our bodies and electric circuits by way of electromagnetic waves. The power supply of a circuit is another source of 50 Hz/60 Hz interference.
As for frequency characteristics, one distinguishes white noise, pink noise, and so on. The most convenient to model, and also the most frequently mentioned, is white noise, whose power spectral density S(F) does not change with frequency F. Fig.1.10 shows S(F) with the fixed value N0/2. When white noise passes through a filter, the output noise is no longer white, due to the frequency characteristic of the filter.
Fig.1.10: The power spectral density of white noise (constant level N0/2)
The mean m and the variance σ² of a random variable x having probability density function (PDF) p(x) are

m = E[x] = ∫_{−∞}^{∞} x p(x) dx                                         (1.17)

σ² = E[(x − m)²] = ∫_{−∞}^{∞} (x − m)² p(x) dx                          (1.18)

where E stands for expectation (or expected value). The square root of the variance is called the standard deviation, denoted σ.
In a uniform distribution, every value of x within a range [a, b] is equally likely:

p(x) = 1/(b − a) ,   a ≤ x ≤ b                                          (1.19)
The PDF and CDF are shown in Fig. 1.11. The mean and variance are, respectively,
Fig.1.11: The PDF p(x) and CDF P(x) of the uniform distribution
m = (a + b)/2                                                           (1.20)

σ² = (b − a)²/12                                                        (1.21)
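These two formulas are easy to confirm numerically (a minimal MATLAB sketch; the interval [−5, 5] and the sample size are arbitrary illustrative choices):

% Mean and variance of a uniform distribution on [a,b] (Eqs. 1.20, 1.21),
% compared with sample estimates obtained with rand
a = -5; b = 5; N = 100000;
v = a + (b - a)*rand(1, N);               % uniform samples on [a,b]
theoretical = [(a + b)/2, (b - a)^2/12]   % m and sigma^2
estimated   = [mean(v), var(v)]           % close to the theoretical values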
1.2.1 Gaussian distribution
In reality, many random variables have a Gaussian distribution (also called the normal distribution). The zero-mean Gaussian PDF and CDF are, respectively,

p(x) = (1/(√(2π) σ)) e^{−x²/2σ²}                                        (1.22)

F(x) = ∫_{−∞}^{x} p(u) du                                               (1.23)

where σ² is the variance (σ is the standard deviation). The distribution has the shape of a bell (Fig.1.12). The peak probability density (at x = 0) is
Fig.1.12: The Gaussian PDF p(x) and CDF P(x)
pP = (1/(√(2π) σ)) e^{−0/2σ²} = 1/(√(2π) σ)                             (1.24)

At a distance x = ±σ the probability density is

pσ = (1/(√(2π) σ)) e^{−σ²/2σ²} = (1/√e)·(1/(√(2π) σ)) ≈ 0.606·(1/(√(2π) σ))   (1.25)

The smaller σ (hence σ²) is, the narrower the bell, that is, the more concentrated the distribution.
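A small MATLAB sketch makes these two values concrete (the value of σ below is an arbitrary illustrative choice):

% Gaussian PDF (Eq. 1.22): peak value at x = 0 and value at x = sigma
sigma = 1.5;
p  = @(x) exp(-x.^2/(2*sigma^2)) / (sqrt(2*pi)*sigma);
pP = p(0)                                 % peak value 1/(sqrt(2*pi)*sigma), Eq. (1.24)
pS = p(sigma)                             % value at x = sigma, Eq. (1.25)
pS / pP                                   % = 1/sqrt(e) ~ 0.6065, independent of sigma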
When a DC voltage m is corrupted by additive noise, the probability distribution is the Gaussian distribution of the noise but with the mean shifted to the new value m (Fig.1.13). The mean m can be positive or negative. The probability density becomes

p(x) = (1/(√(2π) σ)) e^{−(x − m)²/2σ²}                                  (1.26)
Fig.1.13: Gaussian distribution with the mean shifted to m
[Figure: sampling of an analog signal — (a) the analog signal x(t); (b) the sampling signal s(t), a train of narrow pulses spaced Tx apart; (c) the samples x(nT) (discrete-time signal)]
Fig.1.18: Two-sided frequency spectrum of (a) the analog signal, (b) the samples when fs > 2FM, (c) the samples when fs = 2FM, (d) the samples when fs < 2FM

The signal is sampled by a sequence of narrow pulses of width δt and amplitude 1 as before. The Fourier series expansion (see section 3.1) of this sampling function is
s(t) = δt/Tx + (2δt/Tx) Σ_{m=1}^{∞} cos 2πm fs t                        (1.28)

where Tx is the period of the sampling function s(t), that is, the sampling interval 1/fs. Hence the samples are

x̂(t) = x(t) s(t) = (δt/Tx) x(t) + (2δt/Tx) Σ_{m=1}^{∞} x(t) cos 2πm fs t   (1.29)
This shows that the frequency spectrum X̂(F) of the sampled signal consists of the spectrum of the analog signal (with a multiplying factor δt/Tx) together with copies shifted to ±fs, ±2fs, ±3fs, … This spectrum can also be obtained using the Fourier transform (see section 3.2) instead of the Fourier series.
In Fig.1.18b the spectrum bands do not overlap, so we can recover the analog signal by lowpass filtering the central band, or by bandpass filtering any of the other bands. All the frequency bands contain the same information, only at different frequencies. In Fig.1.18c we can still recover the signal, but the filter must be very precise. In Fig.1.18d the bands overlap and there is no way to recover the analog signal. So the limiting case is Fig.1.18c. From this observation, the sampling theorem is stated as follows.
In order that the samples represent correctly the original analog signal, the sampling
frequency must be greater than twice the maximum frequency component of the analog signal:
fs > 2FM (1.30)
The limiting frequency 2FM is called the Nyquist rate, and the central frequency interval [−fs/2, fs/2] is called the Nyquist interval.
For example if a waveform contains the fundamental frequency of 1 kHz and a second
harmonic 2 kHz, then the sampling rate must be greater than 2 x 2 kHz = 4 kHz, say 5 kHz or more.
Another example is the voice signal in the telephone system. The voice is limited by a high quality analog filter to FM = 3.4 kHz, so the sampling frequency must be greater than 2 × 3.4 = 6.8 kHz, say 8 kHz or more.
In the case of Fig.1.18d there is a phenomenon called aliasing that will be discussed next.
1.3.3 Aliasing
We would like to know what happens when the signal is sampled below the Nyquist rate, i.e. the
sampling theorem is not satisfied. Look at Fig.1.19. The low frequency signal x1(t) is sampled 4 times
at S1, S2, S3 and S4 in a period of the signal, that is, fs = 4Fx1. From these samples we would be able to
recover x1(t). For the high frequency signal x2(t) there are the same 4 samples S1, S2, S3 and S4 in its 9 cycles, so the sampling frequency is just (4/9)Fx2, that is, under the Nyquist rate. From these sample points of x2(t) we will recover x1(t) and not x2(t). Thus the high frequency signal, when
undersampled will be recovered as a low frequency signal. This phenomenon is called aliasing, and
the recovered low frequency, which is false, is called the alias of the original high frequency signal.
Fig.1.19: The low frequency signal x1(t) and the high frequency signal x2(t) are sampled at the same points S1, S2, S3, S4, S5
To avoid aliasing there are two approaches: one is to raise the sampling frequency to satisfy the sampling theorem; the other is to filter off the unnecessary high-frequency components of the continuous-time signal. We limit the signal frequency by an effective lowpass filter, called an antialiasing prefilter, so that the remaining highest frequency is less than half the intended sampling rate. If the filter is not perfect we must allow some margin. For example, in voice processing, if the lowpass filter still lets frequencies above 3.4 kHz pass through, even at small amplitudes, the sampling frequency should be 8 kHz or higher.
The aliasing phenomenon can also be shown mathematically. Consider a complex exponential signal at frequency F which is sampled at interval T to yield the samples

x(nT) = e^{j2πF nT}

A signal at any of the frequencies F ± mfs, m = 1, 2, …, produces exactly the same samples, since e^{j2π(F ± mfs)nT} = e^{j2πF nT} e^{±j2πmn} = e^{j2πF nT} (recall that fs T = 1). The samples therefore cannot distinguish F from F ± fs, F ± 2fs, …
Example 1.3.1
A signal at frequency 50Hz is sampled at 80Hz. What frequency will be recovered ? Repeat when it is
sampled at 120Hz.
Solution
With F = 50 Hz and fs = 80 Hz, the signal is undersampled (the sampling theorem is not satisfied). The Nyquist interval is [−40 Hz, 40 Hz]. The samples represent not only the frequency F = 50 Hz but all the frequencies F ± mfs = 50 ± m·80 Hz, m = 0, 1, 2, … The one that falls inside the Nyquist interval is 50 − 80 = −30 Hz, so the recovered signal is a 30 Hz sinusoid (the alias of the 50 Hz signal). With fs = 120 Hz the Nyquist interval is [−60 Hz, 60 Hz], which contains F = 50 Hz, so the signal is recovered correctly at 50 Hz.
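The folding into the Nyquist interval can be computed directly; below is a minimal MATLAB sketch (the mod-based folding expression is a standard identity, not taken from the text):

% Fold an analog frequency F into the Nyquist interval [-fs/2, fs/2]
% to find the frequency actually represented by the samples (the alias)
F = 50;
for fs = [80 120]
    Fa = mod(F + fs/2, fs) - fs/2;        % folded (aliased) frequency
    fprintf('fs = %3d Hz: recovered frequency = %g Hz\n', fs, abs(Fa));
end
% prints 30 Hz for fs = 80 Hz (aliased) and 50 Hz for fs = 120 Hz (no aliasing)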
Example 1.3.2
A DSP system uses the sampling frequency fs = 20 kHz to process an audio signal frequency-limited to 10 kHz, but the lowpass filter still allows frequencies up to 30 kHz to pass through, even at small amplitudes. What signal will we get back from the samples?
Solution
For the sampling rate fs = 20 kHz, the Nyquist interval is [−10 kHz, 10 kHz]. Thus the audio frequencies 0 – 10 kHz will be recovered as is. The audio frequencies from 10 – 20 kHz will be aliased into the frequency range 10 – 0 kHz, and the audio frequencies from 20 – 30 kHz will be aliased into the frequency range 0 – 10 kHz. The resulting audio will be distorted due to the superposition of the three frequency bands.
Example 1.3.3
Consider the signal
x(t) = 4 + 3cos πt + 2cos 2πt + cos 3πt   (t in ms)
(a) Find the Nyquist rate.
(b) If the signal is sampled at half the Nyquist rate, find the signal x0(t) that is the alias of x(t).
Solution
(a) Because the unit of time is ms, the given signal has 4 frequencies:
F1 = 0 Hz, F2 = 0.5 kHz, F3 = 1 kHz, F4 = 1.5 kHz
The highest frequency is FM = F4 = 1.5 kHz, so the Nyquist rate is 2 × 1.5 kHz = 3 kHz. If the signal is sampled at a rate greater than 3 kHz there will be no aliasing.
(b) When the signal is sampled at 1.5 kHz, aliasing will occur. Now the Nyquist interval is [−0.75, 0.75) kHz. The two frequencies F1 and F2 lie within this interval and, thus, will not be aliased, while the two frequencies F3 and F4 lie outside the Nyquist interval and will be aliased:
F30 = F3 − fs = 1 − 1.5 = −0.5 kHz
F40 = F4 − fs = 1.5 − 1.5 = 0 kHz
The recovered signal x0(t) has the frequencies F10, F20, F30 and F40. Thus the recovered analog signal is
x0(t) = 4cos 2πF1t + 3cos 2πF2t + 2cos 2πF30t + cos 2πF40t
      = 4 + 3cos πt + 2cos(−πt) + cos 0 = 5 + 5cos πt
Fig.1.20: Example 1.3.3 (given signal x(t) and recovered signal x0(t))
The signals x(t) and x0(t) are plotted in Fig.1.20. Notice that x(t) and x0(t) coincide only at the sample points. x0(t) lies entirely within the Nyquist interval, that is, it comprises lower frequency components and hence is smoother. The recovered signal can also be found by taking the spectrum of x(t) and then extracting the spectral components lying within the Nyquist interval (Fig.1.21).
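The aliasing in this example is easy to reproduce numerically; the MATLAB sketch below folds each component frequency into the Nyquist interval and rebuilds both signals (the folding expression is a standard identity, not from the text):

% Example 1.3.3: alias the four components of x(t) when fs = 1.5 kHz
F  = [0 0.5 1 1.5];                       % component frequencies, kHz
A  = [4 3 2 1];                           % component amplitudes
fs = 1.5;                                 % sampling rate, kHz
Fa = mod(F + fs/2, fs) - fs/2;            % folded frequencies: [0 0.5 -0.5 0]
t  = 0:0.05:8;                            % time in ms
x  = A * cos(2*pi*F'*t);                  % original signal
x0 = A * cos(2*pi*Fa'*t);                 % recovered (aliased) signal, 5 + 5cos(pi*t)
plot(t, x, t, x0)                         % the two coincide at the sample times n/fs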
As mentioned previously, an analog lowpass prefilter must be used to eliminate the unnecessary high frequency content of the input signal so that the sampling frequency does not have to be too high. When the filter is not ideal (that is, does not have an abrupt cutoff), it is hard to eliminate the aliasing completely.
Example 1.3.4
An audio signal consists of the components
xa(t) = 2A cos 10πt + 2B cos 30πt + 2C cos 50πt + 2D cos 60πt + 2E cos 90πt + 2F cos 125πt   (t in ms)
The signal passes through an analog prefilter H(f), is sampled at the rate of 40 kHz, and is then recovered by an ideal analog reconstruction filter limited to [−fs/2, fs/2] (Fig.1.21a).
Fig.1.21a: Block diagram of Example 1.3.4 — analog input xa(t) → prefilter H(f) → filtered analog signal x(t) → sampling at 40 kHz → digital signal (samples) x(nT) → ideal reconstruction → recovered analog signal x0(t)
Determine the recovered analog signal x0(t) in the following situations.
(a) Without the prefilter, that is, H(f) = 1 at all frequencies.
(b) When H(f) is an ideal lowpass filter having its cutoff frequency at 20 kHz.
(c) When H(f) is a realistic filter having the characteristic shown in Fig.1.21b, that is, flat from 0 to 20 kHz and then attenuating at 60 dB/octave. Ignore the effect of the phase response.
Solution
The audio signal has the frequency components
FA = 5 kHz, FB = 15 kHz, FC = 25 kHz, FD = 30 kHz, FE = 45 kHz, FF = 62.5 kHz
Because only the frequencies FA and FB are audible, the audible signal is
x0(t) = x01(t) = 2A cos 10πt + 2B cos 30πt
We replace the cosinusoids by complex exponentials and take the Fourier transform. For the first component,
2A cos 2πFA t = A e^{2πjFA t} + A e^{−2πjFA t} ↔ A δ(F − FA) + A δ(F + FA)   (1.34)
Fig.1.21b: Example 1.3.4 continued — the prefilter characteristics (ideal cutoff at 20 kHz, and realistic responses falling at 30 dB/octave and 60 dB/octave beyond 20 kHz)
The sampling process repeats this spectrum at positive and negative multiples of fs. The components C, D, E, F, lying outside the Nyquist interval [−20, 20] kHz, will give rise to the aliased frequencies
FC = 25 kHz:  FC − fs = 25 − 40 = −15 kHz
FD = 30 kHz:  FD − fs = 30 − 40 = −10 kHz
FE = 45 kHz:  FE − fs = 45 − 40 = 5 kHz
FF = 62.5 kHz:  FF − 2fs = 62.5 − 2 × 40 = −17.5 kHz
(a) When there is no prefilter, the signal x(t) entering the sampler is the same as the input signal xa(t). The reconstruction postfilter recovers the frequency components lying within the Nyquist interval [−20 kHz, 20 kHz] and, also, the components aliased into this interval:
x0(t) = 2A cos 10πt + 2B cos 30πt + 2C cos(−2π·15t) + 2D cos(−2π·10t) + 2E cos(2π·5t) + 2F cos(−2π·17.5t)
      = 2(A + E) cos 10πt + 2(B + C) cos 30πt + 2D cos 20πt + 2F cos 35πt   (1.35)
The audible signal now consists of the original 5 and 15 kHz components together with two aliased frequencies, 10 and 17.5 kHz. Thus the audible output x0(t) differs from that of the input signal xa(t).
(b) When an ideal lowpass prefilter with a sharp cutoff at fs/2 = 20 kHz is used, the signal x(t) is just the signal x01(t) comprising the frequencies FA and FB as before; all other frequencies are eliminated. Thus there will be no aliasing.
(c) When a realistic filter is used, as in Fig.1.21b, each signal component at the output of the filter is altered in magnitude and phase. For example, the FA component becomes
2A cos 2πFA t  →  2A |H(FA)| cos[2πFA t + Φ(FA)]
Ignoring the phase change, the output is
x(t) = 2A H(FA) cos 10πt + 2B H(FB) cos 30πt + 2C H(FC) cos 50πt + 2D H(FD) cos 60πt + 2E H(FE) cos 90πt + 2F H(FF) cos 125πt
Since the FA and FB components lie within the region where the filter response is 1 (0 dB),
H(FA) = H(FB) = 1
Fig.1.21d: Example 1.3.4(c) continued — output spectrum after the realistic prefilter; the C, D, E and F components are attenuated to about −19.3 dB, −35.1 dB, −70.1 dB and −98.6 dB, respectively
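These attenuations follow from the assumed 60 dB/octave roll-off above 20 kHz; a minimal MATLAB check:

% Example 1.3.4(c): attenuation of a filter that is flat to 20 kHz
% and then falls at 60 dB/octave, evaluated at the C, D, E, F frequencies
F   = [25 30 45 62.5];                    % kHz
AdB = 60 * log2(F/20)                     % ~ [19.3 35.1 70.2 98.6] dB, cf. Fig.1.21d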
where a is a constant depending on the type of filter and N is the order of the filter. The attenuation at high frequencies, taking a as 1, is
[Figure: a practical lowpass antialiasing prefilter — flat passband up to fpb, a transition region, and a stopband beginning at fsb; the ideal prefilter would cut off sharply at fs/2]
A(f) dB = −20 log10 (1/f^N) = 20N log10 f ,   f large                   (1.42)
For a realistic filter having frequency response H(F), the output spectrum is
X(F) = H(F) Xa(F)                                                       (1.43a)
where Xa(F) is the spectrum of the analog input signal. The corresponding dB attenuations add:
AX(F) = A(F) + AXa(F)                                                   (1.43b)
where AX(F) = −20 log10 |X(F)/X(0)| and AXa(F) = −20 log10 |Xa(F)/Xa(0)|. When the signal x(t) is sampled, its spectrum X(F) is repeated at multiples of the sampling frequency, and the attenuation AX(F) determines the overlap of the repeated spectra, that is, the degree of aliasing. The prefilter is chosen so that its attenuation, together with the attenuation AXa(F) of the input spectrum, is high enough to reduce the aliasing to the desired level.
Example 1.3.5
An audio signal has a magnitude spectrum that is flat up to 4 kHz and then attenuated at 15 dB/octave. The sampling frequency is 12 kHz.
(a) If no prefilter is used, determine the attenuation of the components aliased into the audio frequency range of interest, up to 4 kHz.
(b) Choose an appropriate prefilter so that the aliased components in this frequency range are attenuated by more than 50 dB.
Solution
(a) Due to the sampling, the central spectral band of X(F) is repeated at multiples of the sampling frequency fs = 12 kHz (Fig.1.23). Because of the even symmetry of the spectral bands the shaded parts are equal, and hence the attenuations a and b at the frequencies 4 and 8 kHz are equal: a = b = 15 dB. The frequencies in the shaded regions are attenuated by 15 dB or more. Such an attenuation is not good enough.
[Fig.1.23: the central band of the spectrum X(F), falling at 15 dB/octave above 4 kHz, and its replicas at ±fs; the attenuations a and b at 4 kHz and 8 kHz are equal]
(b) If a prefilter is used with passband edge Fpb = 4 kHz, then the stopband edge is Fsb = fs − Fpb = 12 − 4 = 8 kHz. The attenuation of the signal itself at this frequency is 15 dB, and the attenuation of the filter there is Asb (dB). Because a = b and the requirement is a ≥ 50 dB,
b = 15 + Asb ≥ 50 (dB)
Asb ≥ 50 − 15 = 35 dB
Thus the prefilter must have a flat response up to 4 kHz and provide at least 35 dB of attenuation at 8 kHz, where the stopband begins.
Example 1.3.6
An analog signal has a flat spectrum up to the frequency FM, after which the spectrum decays at α dB/octave. The antialiasing prefilter also has a flat response up to FM, after which its response decays at β dB/octave. It is required that, within the frequency band of interest up to FM, the aliased components be attenuated by more than A dB. Find the minimum sampling frequency.
Solution
The passband cutoff frequency is Fpb = FM and the stopband starts at the frequency fsb = fs − FM. Above the frequency FM, the attenuation at a frequency F is the sum of the attenuation of the signal and that of the prefilter:
AX(F) = (α + β) log2(F/FM)                                              (1.44)
Fig.1.24: Example 1.3.6 — the central spectral band and its replicas at ±fs
The combined attenuation is (α + β) dB/octave. Due to the even symmetry of the spectrum we have
a = AX(Fsb) = AX(fs − FM)
Thus the requirement a ≥ A leads to
AX(fs − FM) ≥ A
Substituting this into (1.44) we obtain
(α + β) log2((fs − FM)/FM) ≥ A
or
fs ≥ FM + 2^{A/(α+β)} FM
If the attenuations α and β are given in dB/decade instead of dB/octave, log10 is used instead of log2 and the result is
fs ≥ FM + 10^{A/(α+β)} FM
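A short MATLAB sketch of this result (the numerical values of FM, α, β and A below are arbitrary illustrative choices):

% Example 1.3.6: minimum sampling frequency for a required alias attenuation A
FM    = 20;                               % kHz
alpha = 15;  beta = 60;                   % signal and prefilter roll-offs, dB/octave
A     = 60;                               % required attenuation, dB
fs_min = FM + FM * 2^(A/(alpha + beta))   % about 54.8 kHz for these values
% For roll-offs given in dB/decade, replace 2^( ) with 10^( ).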
The above examples show that, to relax the requirements on the antialiasing prefilter, the sampling frequency should be much higher than the Nyquist rate (at least a few times higher); then filters of order about 4 are adequate. If the sampling frequency is close to the Nyquist rate, filters of order 10 or so must be used.
We end this section with the block diagram of a general, complete DSP system (Fig.1.25). The digital output signal y(n) from the DSP unit is converted by the digital-to-analog converter (DAC or D/A) back to a coarse analog signal, which is then lowpass filtered in the postfilter. The finally reconstructed analog signal x0(t) may be the same as the original input or different, depending on the processing done by the DSP block and the quality of the other blocks.
Fig.1.25: A complete DSP system — analog input xa(t) → antialiasing prefilter (lowpass) → ADC (sampling, quantization, coding) → digital signal x(n) → DSP → digital signal y(n) → DAC → postfilter (lowpass) → recovered analog signal x0(t)
Fig.1.26: Discrete-time signals of (a) infinite duration and (b) finite duration
Fig.1.26 gives an example. The amplitudes of the samples x(n) can be anything: positive or negative, zero or infinite, integer or fractional, real or complex (usually assumed real). The signal may be of infinite duration, that is, exist at all times, or of finite duration, that is, exist only over a short interval, usually taken around the origin.
For convenience, we can write the two signals in Fig.1.26 as, respectively,
x(n) = [ … 1, −2, 2, 3, 1, −1, 2, −2, 1, 3 … ]
x(n) = [ −2, −1, 2, 2, −1, 3 ]
In this form, a discrete-time signal is usually called a sequence (or discrete-time sequence). Notice that we have to specify the sample at the origin, for example by writing it in bold face, underlining it, or marking it with an arrow.
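In software, one convenient bookkeeping device is to store the sample values together with their time indices; a minimal MATLAB sketch (the placement of the origin below is assumed for illustration, since it is not marked in the text):

% A finite-duration sequence stored as (index, value) pairs
n = -2:3;                                 % time indices; the origin is the third sample
x = [-2 -1 2 2 -1 3];                     % sample values, so x(0) = 2 here
stem(n, x); xlabel('n'); ylabel('x(n)')   % plot the sequence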
Notice that this discrete-time unit sample is not a sampled version of its analog counterpart (section 1.1.2), but the symmetry δ(n) = δ(−n) still holds, as in (1.4).
[Figure: (a) the unit sample δ(n); (b) the unit step u(n); (c) the unit ramp r(n)]
[Figure: the real exponential signal x(n) = a^n for (a) 0 < a < 1, (b) a > 1, (c) −1 < a < 0, (d) a < −1]
Comparing (1.49) and (1.50) we have the following very fundamental relation:
ω=ΩT (1.51)
The unit of ω is (rad/s)·(s) = rad, but it is usually interpreted as radians per sample; ω is called the digital angular frequency. We can also define ω = 2πf, with f the digital frequency (cycles/sample). The digital sinusoid completes a cycle when
nω = 2π radians
or
ω = 2π/n radians/sample
Hence ω can be considered as the angle subtended by two consecutive samples when the samples are uniformly distributed on a circle centered at the origin.
Because Ω = 2πF and T = 1/fs, we have from Equation (1.51)

ω = 2π F/fs                                                             (1.52)

or

f = F/fs                                                                (1.53)
(Remember: F is the analog frequency in Hz, fs the sampling frequency in samples/second, and f the digital frequency in cycles/sample.) Notice that the digital frequencies ω and f depend on both the analog frequency F and the sampling frequency fs. Notice also that both ω and f are continuous variables (only the time index n is discrete). We will use the digital angular frequency ω more often and will call it the digital frequency for short; however, some authors prefer using f.
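A tiny MATLAB illustration of (1.52) and (1.53) (the analog frequency and sampling rate below are arbitrary illustrative values):

% Digital frequency from analog frequency and sampling rate (Eqs. 1.52, 1.53)
F  = 1000;                                % analog frequency, Hz
fs = 8000;                                % sampling frequency, Hz
w  = 2*pi*F/fs                            % digital frequency, rad/sample (here pi/4)
f  = F/fs                                 % digital frequency, cycles/sample (here 0.125)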
The relation between ω and F is shown in the figure below. The upper part shows the relation on a linear scale, whereas the lower part shows it on a circle. Remember that the analog frequency F is not periodic, that is, it can take any value between −∞ and ∞, while the digital frequency ω is of a circular nature, that is, it varies around a circle with periodicity 2π, the central period [−π, π] corresponding to the Nyquist interval [−fs/2, fs/2] (Fig.1.18). This means that sinusoids having different frequencies ω within that interval are distinct, while sinusoids having frequencies outside that interval will be aliased into it.
[Figure: the relation between the digital frequency ω and the analog frequency F, shown on a linear scale (top) and on a circle (bottom); the central period [−π, π] of ω corresponds to the Nyquist interval [−fs/2, fs/2] of F]
Fig.1.31: Signal x(n) = cos(5πn/6)
So, the periodicity of discrete sinusoids is rather confusing, and we may expect that there should be some criterion. For this, let's begin with a discrete sinusoid cos ωn that is periodic with a period of N samples:
cos ωn = cos ω(n + N) = cos(ωn + ωN)
The condition is that ωN must be some integer multiple m of 2π:
ωN = 2πm ,   m integer
or
ω/2π = f = m/N                                                          (1.54)
This means that ω/2π (or f) must be a rational number (a ratio of two integers). The actual period N equals the denominator of m/N after simplification (cancelling out any common factor). If ω/2π is not a rational number, the signal is not periodic (nonperiodic or aperiodic). As said earlier, even though a discrete sinusoid may or may not be periodic, its envelope of samples is always periodic (but may not be sinusoidal). Following are some examples.
The signal cos(πn/6) is periodic because ω/2π = (π/6)/2π = 1/12 (a rational number), and the period is 12.
The signal cos(5πn/6) is periodic because ω/2π = (5π/6)/2π = 5/12 (a rational number), and the period is 12 (the period is not 2π/(5π/6) as it would be for an analog sinusoid).
The signal cos(πn/8) is periodic with period 16, but cos(0.4n) is not (ω/2π = 0.4/2π is not a rational number). We can plot the first 30 samples of these two signals to check.
One point about the periodicity of discrete sinusoids should be added before we leave the topic: a small change in digital frequency can lead to a large change in period. For example, with ω/2π = 51/100 the period is 100, but with ω/2π = 50/100 = 1/2 the period is just 2.
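The periods can be checked numerically; a minimal MATLAB sketch:

% Checking the period of discrete sinusoids numerically
n  = 0:47;
x1 = cos(5*pi*n/6);                       % omega/(2*pi) = 5/12, so period 12
max(abs(x1(13:24) - x1(1:12)))            % ~0: x1 indeed repeats every 12 samples
x2 = cos(0.4*n);                          % omega/(2*pi) = 0.2/pi, irrational
max(abs(x2(13:24) - x2(1:12)))            % clearly nonzero: no period of 12 (aperiodic)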
Complex exponential
We now consider signals of the type x(n) = a^n with the constant a complex, called the complex exponential, or complex sinusoid by some authors. Let
a = r e^{jω}                                                            (1.55)
Then the signal is
x(n) = (r e^{jω})^n = r^n e^{jωn} = r^n (cos ωn + j sin ωn)             (1.56)
whose real and imaginary components are, respectively,
xR(n) = r^n cos ωn
xI(n) = r^n sin ωn
From these we get the magnitude and phase
|x(n)| = [xR²(n) + xI²(n)]^{1/2} = r^n
Φ(n) = tan⁻¹[xI(n)/xR(n)] = ωn
Example 1.4.1
Plot xR(n), xI(n), |x(n)|, and Φ(n) when r = 0.9 and ω = π/10.
Solution
The necessary expressions are
x(n) = 0.9^n e^{jπn/10}
xR(n) = 0.9^n cos(πn/10)
xI(n) = 0.9^n sin(πn/10)
|x(n)| = 0.9^n
Φ(n) = πn/10
Fig.1.32 shows the results. Notice, especially, the phase Φ(n). By convention the phase is limited to the range [−π, π]; at n = 10 the phase is π, which is also −π.
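The four plots of Fig.1.32 can be generated with a few MATLAB lines (a sketch; the range of n is an arbitrary choice):

% Example 1.4.1: x(n) = (0.9 e^{j*pi/10})^n
n = 0:40;
x = (0.9*exp(1j*pi/10)).^n;
subplot(2,2,1); stem(n, real(x));  title('x_R(n)')
subplot(2,2,2); stem(n, imag(x));  title('x_I(n)')
subplot(2,2,3); stem(n, abs(x));   title('|x(n)| = 0.9^n')
subplot(2,2,4); stem(n, angle(x)); title('\Phi(n), wrapped to [-\pi, \pi]')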
Example 1.5.1
The system is described by its input-output equation
(a) y(n) = x(n−1)
Solution
The common input signal is the sequence (Fig.1.34)
x(n) = [ 0, 3, 2, 1, 0, 1, 2, 3, 0 ]
The crude way to find the output signal is to compute it from the equation for n = 0, 1, 2, … until the output remains zero, and then compute the output at n = −1, −2, … until it again remains zero. A better and much quicker way is to examine the system equation to understand its action and then deduce the output straightaway.
Example 1.5.2
For the input sequence
x(n) = [0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 0]
(with the sample of value 6 at the origin n = 0), find the output of the systems
(a) y(n) = x(2n)
(b) y(n) = x(n/2) for n even, and y(n) = 0 for n odd
Solution
(a) Let's proceed with the evaluation:
y( 0) = x( 0) = 6
y( 1) = x( 2) = 4
y( 2) = x( 4) = 2
...
y(-1) = x(-2) = 4
y(-2) = x(-4) = 2
...
The system keeps one sample and then drops the next one. This is rate compression, or down-sampling, or sample decimation.
(b) The evaluation proceeds as follows:
y( 0) = x( 0) = 6
y( 1) = x(1/2) = 0
y( 2) = x( 1) = 5
y( 3) = x(3/2) = 0
...
y(-1) = x(-1/2) = 0
y(-2) = x( -1) = 5
y(-3) = x(-3/2) = 0
...
The system doubles the number of samples by inserting a zero-amplitude sample between every two consecutive samples. This is rate expansion, or up-sampling, or sample interpolation.
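Both operations are one-liners on a stored sequence; a minimal MATLAB sketch (the origin bookkeeping is ignored here for simplicity):

% Down-sampling y(n) = x(2n) and up-sampling (zero insertion) of Example 1.5.2
x  = [0 1 2 3 4 5 6 5 4 3 2 1 0];         % the input sequence
yd = x(1:2:end)                           % keep every other sample: rate compression
yu = zeros(1, 2*length(x) - 1);
yu(1:2:end) = x                           % insert zeros between samples: rate expansion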
Instead of using a multiplier ×(−1), we put a minus sign at the lower input of the adder to show the subtraction.
[Figure: signal squaring — x(n) → multiplier → y(n) = x²(n)]
Example 1.5.3
Draw the block diagram (or structure) of the systems
(a) y(n) = (1/5)[4x(n) + 3x(n−1) − 2x(n−2)]
(b) y(n) = 2x1(n) - 3x2(n) + 4x1(n)x2(n)
(c) y(n) = -5x(n) + 2x(n-2) – 0.8y(n-1) + 0.5y(n-3)
Solution
[Fig.1.35: Example 1.5.3 — block diagrams of systems (a), (b) and (c), and an alternative structure (d) for system (a)]
In Fig.1.35c we can combine the two adders into just one adder with 4 inputs.
In the last system there is a feedback path from the output back into the system, so that the output depends not only on the input at various times but also on some previous outputs. Such a system is called regenerative or recursive.
There is another way to draw the schematic structure. For example, Fig.1.35a can take the form shown in Fig.1.35d. Chapter 7 will present filter structures in detail.
Conversely, if the schematic diagram of a system is given, we can easily construct its input-output signal equation.
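As an illustration of how such an input-output equation is evaluated, the recursive system (c) of Example 1.5.3 can be run sample by sample (a sketch assuming zero initial conditions and a unit-sample test input; both are assumptions, not from the text):

% Example 1.5.3(c): y(n) = -5x(n) + 2x(n-2) - 0.8y(n-1) + 0.5y(n-3)
x = [1 zeros(1, 9)];                      % test input: a unit sample
y = zeros(size(x));
for n = 1:length(x)
    xm2 = 0; if n > 2, xm2 = x(n-2); end  % x(n-2), zero before the input starts
    ym1 = 0; if n > 1, ym1 = y(n-1); end  % y(n-1)
    ym3 = 0; if n > 3, ym3 = y(n-3); end  % y(n-3)
    y(n) = -5*x(n) + 2*xm2 - 0.8*ym1 + 0.5*ym3;
end
y                                         % equivalently: filter([-5 0 2], [1 0.8 0 -0.5], x)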
y(n) = Σ_{k=−∞}^{+∞} x(n − k) :   infinite memory
Fig.1.36: Time (shift) invariant system — if the input x(n) is delayed by k, the output y(n) is delayed by the same k
Example 1.6.4
Are the following systems time-invariant?
(a) y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]
(b) y(n) = nx(n)
(c) y(n) = x(-n)
Solution
(a) For the system
y(n) = (1/3)[x(n−1) + x(n) + x(n+1)]
if the present input is delayed by k (that is, by replacing x(n) by x(n − k), etc.) then the output is
y(n−k) = (1/3)[x(n−1−k) + x(n−k) + x(n+1−k)]
and if the present output is delayed by k (that is, by replacing n by n − k)
y′(n−k) = (1/3)[x(n−1−k) + x(n−k) + x(n+1−k)]
Since
y’(n-k) = y(n-k)
the system is time-invariant.
(b) For the system
y(n) = nx(n)
if the present input is delayed by k then the output is
y(n-k) = nx(n-k)
and if the present output is delayed by k then the output is
y’(n-k) = (n-k)x(n-k)
Since
y′(n−k) ≠ y(n−k)
the system is time-variant.
(c) For the system
y(n) = x(-n)
we have
y(n-k) = x(-n-k)
y′(n−k) = x[−(n−k)] = x(−n+k) ≠ y(n−k)
So the system is time-variant.
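Such tests are easy to confirm numerically; a minimal MATLAB sketch for system (b) (the random test input and the delay value are arbitrary choices):

% Numerical check of time invariance for y(n) = n*x(n) (Example 1.6.4b)
n  = 0:9;  k = 3;
x  = randn(1, 10);                        % arbitrary test input
y  = n .* x;                              % response to x(n)
xd = [zeros(1, k), x(1:end-k)];           % the delayed input x(n-k)
yd = n .* xd;                             % response to the delayed input
yD = [zeros(1, k), y(1:end-k)];           % the delayed output y(n-k)
max(abs(yd - yD))                         % nonzero, so the system is time-variant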
Fig.1.37: Linear system — if x1(n) → y1(n) and x2(n) → y2(n), then a1x1(n) + a2x2(n) → a1y1(n) + a2y2(n), where a1 and a2 are constants
Example 1.6.2
Consider the linearity of the following systems:
(a) y(n) = n2x(n)
(b) y(n) = nx(n2)
(c) y(n) = nx2(n)
(d) y(n) = Ax(n) + B, A, B constants
Solution
(a) The system is
y(n) = n2x(n)
The two separate inputs and corresponding outputs are
y1(n) = n2x1(n)
y2(n) = n2x2(n)
Now for the combined input
x(n) = a1x1(n) + a2x2(n)
the output is
y(n) = n2[a1x1(n) + a2x2(n)] = a1[n2x1(n)] + a2[n2x2(n)] = a1y1(n) + a2y2(n)
So the system is linear.
(b) The system is
y(n) = nx(n2)
1.2 Noise
There exist several types of noise; the most common is thermal noise. Noise can be internal or external (interference). White noise has a power spectral density that does not change with frequency. When white noise passes through a filter, the output is no longer white. The probability of occurrence of the different noise amplitudes is very important in analysis; this concerns random variables, the probability density function (PDF), and the cumulative distribution function (CDF). The uniform and Gaussian distributions are the ones usually considered. The two main parameters of a distribution are the mean and the variance.
APPENDIX CHAPTER 1
GENERATION OF NOISE
One of the functions of digital systems, perhaps the most widely used, is filtering. Digital filters, like analog ones, extract the desired signal from the background noise, or remove the background noise from the signal. Actually, digital filters do much more in digital systems than analog filters do in analog systems. As mentioned briefly in section 1.2, there are several types of noise generated within, or penetrating into, systems, leaving the original signals, or data in general, embedded in a noise background. In various parts of several subsequent chapters, especially in problems, we need to generate noise for the purpose of illustrating the effectiveness of digital filters.
Noise can be generated by electronic circuits but here we only mention the generation of noise
by software.
In the case of Matlab, successive calls to the function rand, with no arguments, generate a sequence of uniformly distributed random values, that is, uniform white noise, in the interval [0,1]. More generally, the function rand(m,n) returns a random m×n matrix. From a uniform white noise r(n), for 0 ≤ n ≤ N, in the interval [0,1], we can generate uniform white noise v(n) in an arbitrary interval [a,b] by introducing an offset and a scale factor as follows:

v(n) = a + (b − a) r(n) ,   0 ≤ n ≤ N                                   (1.46)

N is assumed large.
The average power of a uniformly distributed random variable x is the expectation (expected value) of x²:

Px = E[x²] = ∫_{−∞}^{∞} x² p(x) dx                                      (1.47)
For uniform white noise it is

Pu = (1/(b − a)) ∫_{a}^{b} x² dx = (b³ − a³)/(3(b − a))                 (1.48)

For example, with a = −5, b = 5, and N = 512, the average power computed from the above expression is

Pu = (5³ − (−5)³)/(3(5 − (−5))) = 8.333
Fig.1.33: Uniform white noise over [−5, 5] with N = 512
Fig.1.34: Power spectral density (PSD) of uniform white noise (the horizontal line is the average power Pu)
whereas the actual average power computed from the generated values of v(n) is

Pa = (1/N) Σ_{n=0}^{N−1} v²(n) = 8.781                                  (1.49)
As the length N increases, the difference between the theoretical power Pu and the actual power Pa decreases. The noise waveform v(n) is shown in Fig.1.33 and its power spectral density (PSD) SN(k) in Fig.1.34.
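A compact MATLAB sketch of this comparison (the result for Pa varies from run to run, since rand is random):

% Uniform white noise on [a,b]: theoretical vs. actual average power (Eqs. 1.48, 1.49)
a = -5;  b = 5;  N = 512;
v  = a + (b - a)*rand(1, N);              % Eq. (1.46)
Pu = (b^3 - a^3) / (3*(b - a))            % theoretical power, 8.333
Pa = sum(v.^2) / N                        % actual power, close to Pu (e.g. ~8.8)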
In Matlab, the function randn generates Gaussian white noise having zero mean and standard deviation σ = 1. If r(n), for 0 ≤ n ≤ N, is such a white noise sequence, we generate Gaussian white noise having mean m and standard deviation σ by using

v(n) = m + σ r(n) ,   0 ≤ n ≤ N                                         (1.50)
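For example (the values of m, σ and N below are arbitrary illustrative choices):

% Gaussian white noise with mean m and standard deviation sigma (Eq. 1.50)
m = 2;  sigma = 0.5;  N = 1000;
r = randn(1, N);                          % zero mean, unit standard deviation
v = m + sigma*r;
[mean(v), std(v)]                         % close to [2, 0.5]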
The noise has power at all frequencies. The noise average power is

Pg = ∫_{−∞}^{∞} x² p(x) dx = (1/(√(2π) σ)) ∫_{−∞}^{∞} x² e^{−x²/2σ²} dx = σ²

Thus the average power of zero-mean Gaussian white noise equals its variance σ².
Just as in the case of uniform white noise, the actual average power computed from generated
values of v(n) is given by (1.49).