Principles of Communication Systems: EEE351, Fall 2021
Principles of Communication Systems
Resource Persons
• Dr. Saleem Akhtar (Power Stream)
• Email: [email protected]
Lecture Outline
❑ 3.4 Signal Transmission through a Linear System
❑ 3.5 Ideal Versus Practical Filters
❑ 3.7 Signal Energy and Energy Spectral Density
❑ 3.8 Signal Power and Power Spectral Density
3.4 Signal Transmission through a Linear System
❑ For a linear time-invariant (LTI) continuous-time system, the input $x(t)$ and output $y(t)$ are related through the convolution
$$y(t) = x(t) * h(t) \qquad \text{(Eq. 1)}$$
where $h(t)$ is the impulse response of the system.
▪ In the frequency domain, Eq. 1 can be expressed as
$$Y(f) = X(f)\,H(f) \qquad \text{(Eq. 2)}$$
▪ Writing each transform in polar form, $y(t) \Longleftrightarrow Y(f) = |Y(f)|\,e^{j\theta_y(f)}$, Eq. 2 gives
$$|Y(f)|\,e^{j\theta_y(f)} = |H(f)|\,|X(f)|\,e^{j\left[\theta_h(f) + \theta_x(f)\right]} \qquad \text{(Eq. 3)}$$
▪ It is clear from these equations that an LTI system modifies the amplitude and phase spectra of the input signal to produce those of the output signal.
▪ Depending on $|H(f)|$, the system may boost the signal amplitude at some frequencies and attenuate it at others, as illustrated in the sketch below.
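To make Eqs. 1 and 2 concrete, here is a minimal Python sketch (illustrative only; the sampling interval, rectangular input pulse, and decaying-exponential impulse response are assumed demo choices, not from the text). It passes the pulse through an LTI system and checks that time-domain convolution and frequency-domain multiplication give the same output.

```python
import numpy as np

# Assumed demo parameters: sampling interval and a decaying-exponential impulse response
dt = 1e-4                          # sampling interval [s]
t = np.arange(0, 0.1, dt)          # time axis
x = np.where(t < 0.02, 1.0, 0.0)   # rectangular input pulse x(t)
h = np.exp(-t / 5e-3)              # impulse response h(t) of the (assumed) LTI system

# Eq. 1: y(t) = x(t) * h(t)  (discrete approximation of the convolution integral)
y_time = np.convolve(x, h)[:len(t)] * dt

# Eq. 2: Y(f) = X(f) H(f) -> inverse transform back to the time domain
N = 2 * len(t)                     # zero-pad so circular convolution matches linear convolution
Y = np.fft.fft(x, N) * np.fft.fft(h, N) * dt
y_freq = np.real(np.fft.ifft(Y))[:len(t)]

# Both routes give (numerically) the same output signal
print(np.allclose(y_time, y_freq, atol=1e-9))   # -> True
```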
3.4.2 Distortionless Transmission
▪ A transmission is said to be distortionless if the input and output signals have identical waveforms to within a multiplicative constant.
▪ A delayed output that retains the input waveform is also considered distortionless. Thus, distortionless transmission requires
$$y(t) = k\,x(t - t_d) \qquad \text{(Eq. 4)}$$
or, in the frequency domain,
$$Y(f) = k\,X(f)\,e^{-j2\pi f t_d} \qquad \text{(Eq. 5)}$$
▪ Comparing Eq. 5 with Eq. 2, the transfer function of a distortionless channel must be
$$H(f) = k\,e^{-j2\pi f t_d} \qquad \text{(Eq. 6)}$$
▪ Amplitude response: $|H(f)| = k$; phase response: $\theta_h(f) = -2\pi f t_d$.
▪ Therefore, for distortionless transmission, the amplitude response $|H(f)|$ must be constant, and
▪ the phase response $\theta_h(f)$ must be linear in $f$, with constant slope $-2\pi t_d$ (equivalently, slope $-t_d$ with respect to $\omega = 2\pi f$), where $t_d$ is the delay of the output with respect to the input, as illustrated in the sketch after this list.
▪ Audio: the human ear is sensitive to amplitude variations/distortions and relatively insensitive to phase variations/distortions.
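As a minimal sketch of such a channel (the pulse shape, gain $k$, delay $t_d$, and sample rate below are assumed demo values), the following Python snippet applies the transfer function of Eq. 6 in the frequency domain and confirms that the output is simply a scaled, delayed copy of the input, as required by Eq. 4.

```python
import numpy as np

fs = 10_000                    # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)  # 1 s of signal
k, td = 0.5, 0.01              # gain and delay of the (assumed) distortionless channel

# Test input: a smooth pulse
x = np.exp(-((t - 0.3) / 0.02) ** 2)

# Eq. 6: H(f) = k exp(-j 2*pi*f*td), applied sample by sample in the frequency domain
f = np.fft.fftfreq(len(t), d=1 / fs)
H = k * np.exp(-1j * 2 * np.pi * f * td)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))

# Eq. 4: y(t) should equal k * x(t - td); compare against a shifted copy of x
shift = int(round(td * fs))
print(np.allclose(y[shift:], k * x[:-shift], atol=1e-6))   # -> True
```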
❑ Problem 3.4.1: Signals $g_1(t) = 10^4\,\Pi(10^4 t)$ and $g_2(t) = \delta(t)$ are applied at the inputs of the ideal low-pass filters $H_1(f) = \Pi(f/20{,}000)$ and $H_2(f) = \Pi(f/10{,}000)$. The outputs $y_1(t)$ and $y_2(t)$ of these filters are multiplied to obtain the signal $y(t) = y_1(t)\,y_2(t)$.
▪ Sketch $G_1(f)$ and $G_2(f)$.
▪ Sketch $H_1(f)$ and $H_2(f)$.
▪ Sketch $Y_1(f)$ and $Y_2(f)$.
Figure P.3.4-1
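As a hedged first step toward the required sketches (an outline only, using the pair $\Pi(t/\tau) \Longleftrightarrow \tau\,\operatorname{sinc}(\pi f \tau)$ with $\operatorname{sinc}(x) = \sin x / x$, and the convention $\Pi(x) = 1$ for $|x| < 1/2$):
$$G_1(f) = \operatorname{sinc}\!\left(\frac{\pi f}{10^4}\right), \qquad G_2(f) = 1 \ \ (\text{transform of } \delta(t))$$
$$Y_1(f) = G_1(f)\,\Pi\!\left(\frac{f}{20{,}000}\right), \qquad Y_2(f) = \Pi\!\left(\frac{f}{10{,}000}\right)$$
so $Y_1(f)$ is the sinc spectrum truncated to $|f| < 10$ kHz, and $Y_2(f)$ is flat over $|f| < 5$ kHz.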
3.5 Ideal Versus Practical Filters
❑ Ideal filters allow distortionless transmission over a certain band of frequencies and completely suppress all other frequencies.
Figure 3.26 (a) Ideal low-pass filter (LPF) frequency response and (b) its impulse response.
Figure 3.27 Ideal high-pass (HPF) and bandpass filter (BPF) frequency responses.
▪ Consider an ideal LPF of bandwidth $B$ Hz. For distortionless transmission with $k = 1$, its output should be of the form
$$y(t) = x(t - t_d)$$
To obtain this distortionless output, the frequency response should be of the form
$$H(f) = \operatorname{rect}\!\left(\frac{f}{2B}\right) e^{-j2\pi f t_d}$$
that is, $|H(f)| = \operatorname{rect}\!\left(\dfrac{f}{2B}\right)$ and $\theta_h(f) = -2\pi f t_d$.
Therefore, the unit impulse response should be of the form [Figure 3.26(b)]
$$h(t) = 2B\,\operatorname{sinc}\!\big(2\pi B (t - t_d)\big)$$
▪ From Figure 3.26(b), it is clear that $h(t) \neq 0$ for $t < 0$: an ideal LPF that transmits the input signal $x(t)$ without distortion but with time delay $t_d$ is noncausal and therefore physically unrealizable in real time. A numerical check of this noncausality is given below.
▪ Similarly, the other ideal filters (such as the ideal high-pass and ideal bandpass filters) are also physically unrealizable.
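A quick numerical look at this noncausality (a sketch with assumed values $B = 100$ Hz and $t_d = 5$ ms):

```python
import numpy as np

B, td = 100.0, 5e-3                    # assumed bandwidth [Hz] and delay [s]
t = np.linspace(-0.05, 0.05, 2001)     # time axis straddling t = 0

# Impulse response of the ideal LPF: h(t) = 2B sinc(2*pi*B*(t - td)), sinc(x) = sin(x)/x
x = 2 * np.pi * B * (t - td)
h = 2 * B * np.sinc(x / np.pi)         # np.sinc(u) = sin(pi*u)/(pi*u), so rescale the argument

# The response is clearly nonzero before t = 0, i.e. before the impulse is applied
print(np.max(np.abs(h[t < 0])))        # a sizeable value, not ~0  -> noncausal
```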
▪ For physically realizable systems operating in real time, $h(t)$ must be causal, i.e., $h(t) = 0$ for $t < 0$. The equivalent frequency-domain condition is the Paley–Wiener criterion:
$$\int_{-\infty}^{\infty} \frac{\big|\ln|H(f)|\big|}{1 + (2\pi f)^2}\, df < \infty$$
▪ According to the Paley–Wiener criterion, $|H(f)|$ cannot be zero over any finite band, because $|H(f)| = 0$ would make $\big|\ln|H(f)|\big| = \infty$ over that band and the condition above would not be fulfilled. Hence the ideal filters of Figs. 3.26 and 3.27, whose gain is exactly zero outside the passband, cannot be realized. A rough numerical check of the criterion follows below.
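To get a feel for the criterion (a rough numerical sketch; the first-order low-pass amplitude response and the finite integration range are assumptions made for the demo), the snippet below evaluates the Paley–Wiener integrand for a simple realizable response, for which the integral stays finite; for an ideal filter with $|H(f)| = 0$ over a band, the integrand would be infinite over that band and the integral could not converge.

```python
import numpy as np

# Realizable example (assumption): first-order lowpass, |H(f)| = 1 / sqrt(1 + (f/fc)^2)
fc = 1_000.0
f = np.linspace(-1e5, 1e5, 200_001)        # wide but finite numerical range [Hz]
H_mag = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)

# Paley-Wiener integrand: |ln|H(f)|| / (1 + (2*pi*f)^2)
integrand = np.abs(np.log(H_mag)) / (1.0 + (2 * np.pi * f) ** 2)
print(np.trapz(integrand, f))              # a small finite number -> criterion satisfied

# For an ideal LPF, |H(f)| = 0 outside |f| < B, so ln|H(f)| = -inf over a whole
# band and the integral cannot be finite -> the ideal LPF violates the criterion.
```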
▪ Therefore, for all ideal filters, whether LPF, BPF, or HPF, one practical approach is to cut off the tail of $h(t)$ for $t < 0$ and to make $t_d$ sufficiently large:
$$\hat{h}(t) = h(t)\,u(t)$$
– $\hat{h}(t)$ is physically realizable in real time because it is causal; $\hat{h}(t)$ closely matches $h(t)$ if $t_d$ is large, since the truncated tail then carries little energy. A short sketch of this truncation is given after the figure caption below.
Figure 3.28 Approximate realization of an ideal low-pass filter by truncating its impulse response.
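A minimal sketch of this truncation (assumed values: $B = 100$ Hz, two candidate delays):

```python
import numpy as np

B = 100.0                                  # assumed LPF bandwidth [Hz]
t = np.linspace(-0.1, 0.1, 4001)

def h_ideal(t, td):
    """Ideal LPF impulse response h(t) = 2B sinc(2*pi*B*(t - td))."""
    x = 2 * np.pi * B * (t - td)
    return 2 * B * np.sinc(x / np.pi)

for td in (2e-3, 20e-3):                   # small vs. large delay
    h = h_ideal(t, td)
    h_hat = h * (t >= 0)                   # truncation: h_hat(t) = h(t) u(t)
    # Fraction of the impulse-response energy lost by cutting off the t < 0 tail
    lost = np.trapz(h[t < 0] ** 2, t[t < 0]) / np.trapz(h ** 2, t)
    print(f"td = {td*1e3:4.0f} ms  ->  energy discarded: {lost:.3%}")

# Larger td pushes the main lobe to the right, so less of h(t) lies at t < 0
# and h_hat(t) approximates the ideal response more closely.
```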
3.7 Signal Energy and Energy Spectral Density
▪ The energy $E_g$ of a signal $g(t)$ is defined as the area under $|g(t)|^2$:
$$E_g = \int_{-\infty}^{\infty} |g(t)|^2\, dt = \int_{-\infty}^{\infty} g(t)\,g^*(t)\, dt, \qquad g^*(t) = \text{complex conjugate of } g(t)$$
▪ We can also determine the signal energy from its Fourier transform $G(f)$ through Parseval's theorem:
$$E_g = \int_{-\infty}^{\infty} |G(f)|^2\, df$$
▪ Therefore, the energy of a signal $g(t)$ can be obtained from either its time-domain specification $g(t)$ or its frequency-domain specification $G(f)$; a numerical check is sketched below.
▪ Example 3.16
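A numerical check of Parseval's theorem (a sketch; the exponential test pulse $g(t) = e^{-t}u(t)$, whose exact energy is $1/2$, is an assumed choice and is not necessarily the signal used in Example 3.16):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 20, dt)          # long enough that the tail of the pulse is negligible
g = np.exp(-t)                    # assumed test signal g(t) = e^{-t} u(t), exact energy = 1/2

# Time-domain energy: integral of |g(t)|^2
E_time = np.trapz(np.abs(g) ** 2, dx=dt)

# Frequency-domain energy via Parseval: integral of |G(f)|^2
G = np.fft.fft(g) * dt            # approximate Fourier transform G(f)
f = np.fft.fftfreq(len(t), d=dt)
E_freq = np.sum(np.abs(G) ** 2) * (f[1] - f[0])

print(E_time, E_freq)             # both ~0.5
```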
❑ The Energy Spectral Density (ESD) is defined as the energy per unit bandwidth (in Hz):
$$\Psi_g(f) = |G(f)|^2$$
Given the ESD, the signal energy is
$$E_g = \int_{-\infty}^{\infty} \Psi_g(f)\, df$$
▪ The spectra of most signals extend to infinity. However, because the energy of a practical
signal is finite, the signal spectrum must approach 0 as 𝑓 → ∞.
▪ Most of the signal energy is contained within a certain band of 𝐵 Hz, and the energy
content of the components of frequencies greater than 𝐵 Hz is negligible.
▪ We can therefore suppress the signal spectrum beyond 𝐵 Hz with little effect on the signal
shape and energy. The bandwidth 𝐵 is called the essential bandwidth of the signal. The
criterion for selecting 𝐵 depends on the error tolerance in a particular application.
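For instance, here is a sketch of how an essential bandwidth could be computed numerically, assuming a 95% energy criterion and the pulse $g(t) = e^{-t}u(t)$ from the previous sketch, whose ESD is $\Psi_g(f) = 1/\big(1 + (2\pi f)^2\big)$:

```python
import numpy as np

# ESD of g(t) = e^{-t} u(t): Psi_g(f) = 1 / (1 + (2*pi*f)^2); total energy E_g = 1/2
f = np.linspace(0, 100, 1_000_001)
psi = 1.0 / (1.0 + (2 * np.pi * f) ** 2)

E_total = 0.5
cum = 2 * np.cumsum(psi) * (f[1] - f[0])       # energy in |frequency| <= f (factor 2: ESD is even)
B_ess = f[np.searchsorted(cum, 0.95 * E_total)]
print(B_ess)                                    # ~2.02 Hz: the 95% essential bandwidth
```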
❑ Energy of Modulated Signals:
▪ Let $g(t)$ be a baseband signal band-limited to $B$ Hz. This signal can be amplitude modulated by multiplying it with a high-frequency sinusoidal carrier $\cos 2\pi f_0 t$:
$$\varphi(t) = g(t)\cos 2\pi f_0 t \;\Longleftrightarrow\; \Psi_\varphi(f) = \tfrac{1}{4}\big[\Psi_g(f + f_0) + \Psi_g(f - f_0)\big]$$
▪ The ESDs of both $g(t)$ and the modulated signal $\varphi(t)$ are shown in Fig. 3.35.
Figure 3.35 Energy spectral densities of (a) modulating and (b) modulated signals.
▪ It is clear that modulation shifts the ESD of $g(t)$ by $\pm f_0$. Observe that the area under $\Psi_\varphi(f)$ is half the area under $\Psi_g(f)$.
▪ Because the energy of a signal is proportional to the area under its ESD, it follows that the energy of $\varphi(t)$ is half the energy of $g(t)$, i.e., $E_\varphi = \tfrac{1}{2}E_g$, as confirmed numerically below.
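A quick numerical confirmation (a sketch; the Gaussian-shaped baseband pulse and the carrier frequency are assumed demo choices, with $f_0$ well above the pulse bandwidth):

```python
import numpy as np

dt = 1e-5
t = np.arange(-0.05, 0.05, dt)
g = np.exp(-(t / 5e-3) ** 2)               # assumed baseband pulse (effectively band-limited)
f0 = 10_000.0                              # carrier frequency, far above the pulse bandwidth

phi = g * np.cos(2 * np.pi * f0 * t)       # amplitude-modulated signal

E_g = np.trapz(g ** 2, dx=dt)
E_phi = np.trapz(phi ** 2, dx=dt)
print(E_phi / E_g)                         # ~0.5, i.e. E_phi = E_g / 2
```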
❑ Time Autocorrelation Function and the Energy Spectral Density
▪ The time autocorrelation function $\psi_g(\tau)$ of a signal and its energy spectral density form a Fourier transform pair:
$$\psi_g(\tau) \Longleftrightarrow \Psi_g(f)$$
▪ If $g(t)$ and $y(t)$ are the input and the corresponding output of a linear time-invariant (LTI) system, then
$$Y(f) = H(f)\,G(f) \quad\Rightarrow\quad \Psi_y(f) = |H(f)|^2\, \Psi_g(f)$$
▪ That is, the output signal ESD is $|H(f)|^2$ times the input signal ESD. A numerical sketch of the autocorrelation–ESD pair is given below.
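The following sketch verifies the autocorrelation–ESD pair numerically (the exponential test signal, sample spacing, and duration are assumed; the autocorrelation is computed by direct correlation and compared with the inverse transform of $|G(f)|^2$):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 2, dt)
g = np.exp(-5 * t)                              # assumed test signal

# Time autocorrelation psi_g(tau) = integral of g(t) g(t + tau) dt
psi = np.correlate(g, g, mode="full") * dt      # lags from -(N-1) to +(N-1)

# ESD Psi_g(f) = |G(f)|^2; its inverse Fourier transform should equal psi_g(tau)
N = len(psi)
G = np.fft.fft(g, N) * dt
psi_from_esd = np.fft.fftshift(np.real(np.fft.ifft(np.abs(G) ** 2))) / dt

print(np.allclose(psi, psi_from_esd, atol=1e-6))   # -> True
```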
3.8 Signal Power and Power Spectral Density
▪ For a power signal, a meaningful measure of its size is its power, defined as the time average of the signal energy over an infinite time interval:
$$P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} g^2(t)\, dt, \qquad g(t) \text{ defined over } -\infty < t < \infty$$
▪ The signal power and the related concepts can be understood by defining a truncated signal
$$g_T(t) = \begin{cases} g(t), & |t| \le \dfrac{T}{2} \\[4pt] 0, & |t| > \dfrac{T}{2} \end{cases}$$
so that
$$P_g = \lim_{T \to \infty} \frac{E_{g_T}}{T}$$
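For example (a sketch using an assumed sinusoid $g(t) = A\cos 2\pi f_0 t$, whose power is $A^2/2 = 2$ for the values below):

```python
import numpy as np

A, f0 = 2.0, 50.3                        # assumed amplitude and frequency
dt = 1e-5

for T in (0.1, 1.0, 10.0):               # increasing truncation windows
    t = np.arange(-T / 2, T / 2, dt)
    gT = A * np.cos(2 * np.pi * f0 * t)  # truncated signal g_T(t)
    E_gT = np.trapz(gT ** 2, dx=dt)      # energy of the truncated signal
    print(T, E_gT / T)                   # approaches A^2 / 2 = 2.0 as T grows
```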
❑ Power Spectral Density (PSD)
▪ The PSD $S_g(f)$ of $g(t)$ is defined from the truncated signal as
$$S_g(f) = \lim_{T \to \infty} \frac{|G_T(f)|^2}{T}, \qquad P_g = \int_{-\infty}^{\infty} S_g(f)\, df$$
▪ As is the case with the ESD, the PSD is also a positive, real, and even function of $f$.
❑ Time Autocorrelation Function of Power Signals
▪ As with the ESD, the time autocorrelation function $R_g(\tau)$ of a power signal and its power spectral density form a Fourier transform pair:
$$R_g(\tau) \;\Longleftrightarrow\; \lim_{T \to \infty} \frac{|G_T(f)|^2}{T} = S_g(f)$$
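A rough numerical illustration of the PSD (a sketch; the sinusoid, observation window, and sample rate are assumed, and $|G_T(f)|^2/T$ for a single finite $T$ only approximates the limiting PSD):

```python
import numpy as np

A, f0, fs, T = 2.0, 50.0, 10_000.0, 20.0   # assumed sinusoid and observation window
dt = 1.0 / fs
t = np.arange(0, T, dt)
g = A * np.cos(2 * np.pi * f0 * t)

# PSD estimate S_g(f) ~ |G_T(f)|^2 / T for the truncated signal g_T(t)
G_T = np.fft.fft(g) * dt
f = np.fft.fftfreq(len(t), d=dt)
S_g = np.abs(G_T) ** 2 / T

# Integrating the PSD over all f should recover the power P_g = A^2/2 = 2.0
P_g = np.sum(S_g) * (f[1] - f[0])
print(P_g)                                  # ~2.0
```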
❑ Power Spectral Density (PSD) and Power of Modulated Signals:
▪ Following arguments similar to those used for the ESD and energy of modulated energy signals, for
$$\varphi(t) = g(t)\cos 2\pi f_0 t$$
we have
$$S_\varphi(f) = \tfrac{1}{4}\big[S_g(f + f_0) + S_g(f - f_0)\big]$$
$$P_\varphi = \tfrac{1}{4}\,(2P_g) = \frac{P_g}{2}, \qquad f_0 \ge B$$
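Finally, a numerical check of $P_\varphi = P_g/2$ (a sketch; the band-limited random baseband signal below is an assumed example, built by low-pass filtering white noise, with $f_0$ well above its bandwidth $B$):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 50_000.0, 2.0                        # assumed sample rate and duration
dt = 1.0 / fs
t = np.arange(0, T, dt)

# Baseband power signal g(t), band-limited to roughly B = 500 Hz by zeroing high frequencies
B, f0 = 500.0, 5_000.0
noise = rng.standard_normal(len(t))
F = np.fft.fft(noise)
f = np.fft.fftfreq(len(t), d=dt)
F[np.abs(f) > B] = 0.0
g = np.real(np.fft.ifft(F))

phi = g * np.cos(2 * np.pi * f0 * t)         # modulated signal, f0 >> B

P_g = np.mean(g ** 2)                        # time-averaged power over the observation window
P_phi = np.mean(phi ** 2)
print(P_phi / P_g)                           # ~0.5
```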