Communication Systems: Course Outline
Resources
⚫ Textbook
[1] B.P. Lathi, Modern Digital and Analog Communication Systems, 4th edition, Oxford University Press, 2010
⚫ Recommended readings
[1] Leon W. Couch II, Digital and Analog Communication Systems, Sixth Edition, Prentice Hall, 2001.
[2] John G. Proakis and Masoud Salehi, Communication Systems Engineering, Second Edition, Prentice Hall, 2002.
⚫ Software:
[1] Matlab

Assignments
⚫ Homework exercises + project (20%)
⚫ Attendance (10%)
⚫ Multiple choice questions (20%)
⚫ Final exam (50%)

Faculty of Electronics & Telecommunications, HCMUS
Projects
⚫ Personal Wireless Networks: Bluetooth
⚫ Wireless Communications: WiFi/LoRa
⚫ Mobile Communications: LTE (4G)/NB-IoT
⚫ Optical communications
⚫ Satellite communications
[Figure: receiver block diagram — received signal → demodulation → A/D → channel decoder → source decoder → reconstructed signal at the output.]
2. Telecommunication
⚫ Telegraph
⚫ Fixed line telephone
⚫ Cable/Wired networks
⚫ Wireless Communications
⚫ Internet
⚫ Fiber communications

Wireless Communications
⚫ Satellite
⚫ TV
⚫ Cordless phone
⚫ Cellular phone
⚫ Wireless LAN, WiFi
⚫ Wireless MAN, WiMAX
⚫ Bluetooth
⚫ Ultra Wide Band
⚫ Wireless Laser
⚫ Microwave
⚫ GPS
⚫ Ad hoc/Sensor Networks
⚫ Analog message: continuous in amplitude and over time
– AM, FM for voice sound
– Traditional TV for analog video
– First generation cellular phone (analog mode)
– Record player
⚫ Digital message: 0 or 1, or discrete values
– VCD, DVD
– 2G/3G cellular phone
– Data on your disk
– Your grade
⚫ Digital age: why digital communication will prevail

➢ A digital communication system transfers information from a digital source to the intended receiver (also called the sink).
➢ An analog communication system transfers information from an analog source to the sink.
➢ A digital waveform is defined as a function of time that can have a discrete set of amplitude values.
➢ An analog waveform is a function that has a continuous range of values.
Eeng360 12
Digital Communication
➢ Advantages
• Relatively inexpensive digital circuits may be used;
• Privacy is preserved by using data encryption;
• Data from voice, video, and data sources may be merged and transmitted over a common digital transmission system;
• In long-distance systems, noise does not accumulate from repeater to repeater; data regeneration is possible;
• Errors in detected data may be small, even when there is a large amount of noise on the received signal;
• Errors may often be corrected by the use of coding.
➢ Disadvantages
• Generally, more bandwidth is required than that for analog systems;
• Synchronization is required.

ADC/DAC
⚫ Analog-to-Digital Conversion (ADC) and Digital-to-Analog Conversion (DAC) are the processes that allow digital computers to interact with these everyday signals.
⚫ Digital information is different from its continuous counterpart in two important respects: it is sampled, and it is quantized.
Sine Wave
The sine wave is the most fundamental form of a periodic analog signal.
[Figure: a sine wave plotted as value versus time, with its peak amplitude marked.]
[Figure: two signals with two different amplitudes.]
[Figure: two signals with the same phase but different amplitudes and frequencies.]

Phase
[Figure: three sine waves with different phases.]
Example
A sine wave is offset 1/6 cycle with respect to time 0. What is its phase in degrees and radians?
Solution
We know that 1 complete cycle is 360°. Therefore, 1/6 cycle is 360°/6 = 60° = 60 × (2π/360) rad = π/3 rad.

Time and Frequency Domains
A sine wave is comprehensively defined by its amplitude, frequency, and phase.
We have been showing a sine wave by using what is called a time-domain plot. The time-domain plot shows changes in signal amplitude with respect to time (it is an amplitude-versus-time plot).
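The cycle-to-angle conversion in the example can be checked numerically; a small Python sketch (not part of the original slides):

```python
import math

cycle_fraction = 1 / 6               # the sine wave is offset 1/6 cycle
phase_deg = 360 * cycle_fraction     # one full cycle is 360 degrees
phase_rad = math.radians(phase_deg)  # convert degrees to radians

print(phase_deg)  # 60.0
print(phase_rad)  # pi/3, about 1.0472
```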
[Figure: the time-domain and frequency-domain plots of a sine wave.]
Example
Figure 3.12 shows a nonperiodic composite signal. It can be the signal created by a microphone or a telephone set when a word or two is pronounced. In this case, the composite signal cannot be periodic, because that implies that we are repeating the same word or words with exactly the same tone.
[Figure: an amplitude-versus-time waveform and an amplitude spectrum with components at f, 3f, and 9f.]
Figure 3.13: The bandwidth of periodic and nonperiodic composite signals
Example 3.10
Figure 3.17: Two digital signals: one with two signal levels and the other with four signal levels
Example 3.16
A digital signal has eight levels. How many bits are needed per level? We calculate the number of bits from the formula: number of bits = log₂(8) = 3. Each signal level is represented by 3 bits.
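The bits-per-level calculation of Example 3.16 generalizes to any number of levels; a quick Python check (not from the slides):

```python
import math

levels = 8                          # number of signal levels in Example 3.16
bits_per_level = math.log2(levels)  # bits needed to label each level

print(bits_per_level)  # 3.0
print(math.log2(16))   # 4 levels -> 2 bits, 16 levels -> 4 bits, and so on
```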
Example 3.19
Example 3.20

Dang Le Khoa
Email: [email protected]
Overview
⚫ Baseband and Carrier Communication
⚫ Different AMs
⚫ DSB-SC
⚫ Conventional AM (DSB-FC)
⚫ SSB
⚫ VSB
⚫ Amplitude Demodulations
⚫ FDM system
⚫ Phase-locked loop

Baseband and Carrier Communication
⚫ Baseband:
− Describes signals and systems whose range of frequencies is measured from 0 to a maximum bandwidth or highest signal frequency
− Voice: telephone 0–3.5 kHz; CD 0–22.05 kHz
− Video: analog TV 4.5 MHz; a TV channel is 0–6 MHz. Digital, depending on the size, movement, frames per second, …
− Example: wire, coaxial cable, optical fiber, PCM phone
⚫ Carrier Communication:
− Carrier: a waveform (usually sinusoidal) that is modulated to represent the information to be transmitted. This carrier wave is usually of much higher frequency than the modulating (baseband) signal.
− Modulation: the process of varying a carrier signal in order to use that signal to convey information.
Multiplying two sinusoids results in two frequencies, the sum and the difference of the frequencies of the sinusoids multiplied:
cos(ω₁t) cos(ω₂t) = ½[cos((ω₁ + ω₂)t) + cos((ω₁ − ω₂)t)]
EXAMPLE: Let m(t) be as shown.
[Figure: m(t), the carrier cos(ω_c t), and the product e₁(t) = m(t) cos(ω_MIX t), together with their spectra M(ω) and F{cos(ω_c t)}.]
⚫ To change the carrier frequency ω_c of a modulated signal to an intermediate frequency ω_I, multiply by cos((ω_c + ω_I)t):
m(t) cos(ω_c t) cos((ω_c + ω_I)t) = ½ m(t)[cos((2ω_c + ω_I)t) + cos(ω_I t)]
⚫ Example 4.2, 4.3
[Figure: a message m(t) with values between −0.4 and 0.7, and its spectrum M(ω).]
Conventional AM:
φ_AM(t) = [A + m(t)] cos(ω_c t)
[Figure: spectrum Φ(ω) of the AM signal, centered at ±ω_c.]
Define the modulation index μ = m_p/A, where m_p is the peak amplitude of m(t). Then we see that for A ≥ m_p, 0 ≤ μ ≤ 1. When μ > 1 (or A < m_p) the signal is overmodulated, and envelope detection cannot be used. (However, we can still use synchronous demodulation.)
Example: m_p = 2, so μ = m_p/A = 2/A.
i) μ = 0.5 → A = 4
ii) μ = 1 → A = 2
iii) μ = 2 → A = 1
[Figure: φ_AM(t) = [A + m(t)] cos(ω_c t) for μ = 0.5, μ = 1, and μ = 2.]
φ_AM(t) = A cos(ω_c t) + m(t) cos(ω_c t)
The first term is the carrier, and the second term is the sidebands, which contain the signal itself.
The total AM signal power is the sum of the carrier power and the sideband power.
Carrier power: P_c = A²/2
Sideband power: P_s = ½ P_m, where P_m is the power of m(t).
The sideband power is the useful power.
Efficiency: η = useful power / total power = P_s/(P_c + P_s) = P_m/(A² + P_m)
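For single-tone modulation m(t) = μA cos(ω_m t) — a standard special case, not worked on this slide — P_m = (μA)²/2 and the efficiency reduces to μ²/(2 + μ²). A quick numerical check in Python:

```python
def am_efficiency(mu, A=1.0):
    """Efficiency of conventional AM for a single-tone message m(t) = mu*A*cos(wm*t)."""
    Pm = (mu * A) ** 2 / 2        # message power
    Pc = A ** 2 / 2               # carrier power
    Ps = Pm / 2                   # sideband power
    return Ps / (Pc + Ps)         # eta = Ps / (Pc + Ps)

print(am_efficiency(1.0))  # best case mu = 1: 1/3, i.e. at most 33% of the power is useful
print(am_efficiency(0.5))  # about 0.111
```

Even at 100% modulation, two thirds of the transmitted power sits in the carrier, which is why AM is power-inefficient compared with DSB-SC.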
[Figure: spectra of a baseband signal occupying −2πB to 2πB, the DSB signal centered at ±ω_c (bandwidth 2B), and the SSB signal with a single sideband.]
SSB Generator
• Selective filtering, using filters with sharp cutoff characteristics. Sharp cutoff filters are difficult to design. The audio signal spectrum has no dc component; therefore, the spectrum of the modulated audio signal has a null around the carrier frequency. This means a less-than-perfect filter can do a reasonably good job of filtering the DSB to produce SSB signals.
• The baseband signal must be bandpass (no low-frequency components); otherwise the filter design is challenging.

SSB Demodulation
Synchronous, SSB-SC demodulation:
φ_SSB(t) cos(ω_c t) = [m(t) cos(ω_c t) ∓ m_h(t) sin(ω_c t)] cos(ω_c t) = ½ m(t)(1 + cos(2ω_c t)) ∓ ½ m_h(t) sin(2ω_c t)
A lowpass filter can be used to get ½ m(t).

SSB+C, envelope detection:
φ_SSB+C(t) = A cos(ω_c t) + m(t) cos(ω_c t) ∓ m_h(t) sin(ω_c t)
An envelope detector can be used to demodulate such SSB signals. What is the envelope of
φ_SSB+C(t) = (A + m(t)) cos(ω_c t) ∓ m_h(t) sin(ω_c t) = E(t) cos(ω_c t + θ)?
E(t) = √[(A + m(t))² + m_h²(t)]. For A ≫ |m(t)|, a binomial expansion gives E(t) ≈ A + m(t), so the envelope follows the message.
[Figure: spectrum of 2m(t) and of the SSB+C signal around ±ω_c.]
Filtering scheme for the generation of a VSB modulated wave — VSB Transceiver
[Figure: transmitter — m(t) with spectrum M(ω) is multiplied by 2cos(ω_c t) and shaped by the filter H_i(ω) to give Φ_VSB(ω); receiver — Φ_VSB(ω) is multiplied by 2cos(ω_c t), then the equalizing filter H_o(ω) and an LPF recover e(t).]
– Input: S_i = A cos(ω_c t + θ₁(t)); VCO output: S_v = A_v cos(ω_c t + θ_c(t))
– Phase-detector product: S_p = ½AA_v[sin(2ω_c t + θ₁ + θ_c) + sin(θ₁ − θ_c)]
– After the loop filter: S_o = ½AA_v sin(θ₁ − θ_c) ≈ ½AA_v(θ₁ − θ_c)
⚫ Capture Range and Lock Range
Carrier Acquisition in DSB-SC
⚫ Signal squaring method
⚫ Costas loop:
v₁(t) = ½ A_c A_l m(t) cos θ,  v₂(t) = ½ A_c A_l m(t) sin θ
v₃(t) = v₁(t) v₂(t) = ⅛ A_c² A_l² m²(t) sin 2θ
v₄(t) = K sin 2θ
⚫ These carrier-acquisition methods do not work for SSB-SC.

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications
Chapter 3: Angle Modulation and Demodulation
Dang Le Khoa
Email: [email protected]
Overview
⚫ Angle modulation
⚫ FM modulation
– Principle
– Signal spectrum
⚫ FM detection
– Frequency discriminator
– Phase-locked loop

FM Basics
⚫ VHF (30 MHz–300 MHz) high-fidelity broadcast
⚫ Wideband FM (FM, TV); narrowband FM (two-way radio)
⚫ 1933: FM and angle modulation proposed by Armstrong, but commercial success only by 1949.
⚫ Digital: Frequency Shift Keying (FSK), Phase Shift Keying (BPSK, QPSK, 8PSK, …)
⚫ AM/FM analogy: transverse wave / longitudinal wave

Angle-modulated signal:
s(t) = A_c cos[2πf_c t + φ(t)], where φ(t) is a function of the message signal m(t).
k_f: frequency sensitivity
Instantaneous frequency: f_i(t) = f_c + k_f m(t)
Angle: θ_i(t) = 2π ∫₀ᵗ f_i(τ) dτ = 2πf_c t + 2πk_f ∫₀ᵗ m(τ) dτ
[9]
FM Characteristics
⚫ Characteristics of FM signals
– Zero-crossings are not regular
– Envelope is constant
– FM and PM signals are similar
[10]
FM of m(t) is equivalent to PM of ∫₀ᵗ m(τ) dτ.
PM of m(t) is equivalent to FM of dm(t)/dt.
Frequency Modulation Example
⚫ FM (frequency modulation) signal: consider m(t), a square wave, as shown; the FM wave for this m(t) is shown below.
Assume m(t) starts at t = 0. For 0 ≤ t ≤ T/2, m(t) = 1 and ∫₀ᵗ m(τ) dτ = t.
k_f: frequency sensitivity
Instantaneous frequency: f_i(t) = f_c + k_f m(t); angle: θ_i(t) = 2π ∫₀ᵗ f_i(τ) dτ = 2πf_c t + 2πk_f ∫₀ᵗ m(τ) dτ
The instantaneous frequency is ω_i(t) = ω_c + k_f m(t) = ω_c + k_f for 0 ≤ t ≤ T/2 (assume zero initial phase), and ω_i(t) = ω_c − k_f for T/2 ≤ t ≤ T, so
ω_i,max = ω_c + k_f and ω_i,min = ω_c − k_f
[Figure: the square wave m(t) and the resulting FM waveform φ_FM(t).]
For a single-tone message, θ(t) = 2πf_c t + 2πk_f ∫₀ᵗ A_m cos(2πf_m λ) dλ, and
f_i = (1/2π) dθ/dt = f_c + k_f A_m cos(2πf_m t)
The FM signal s(t) has an infinite spectrum (with non-zero power):
φ(t) = Re(φ̃(t)) = A[cos(ω_c t) − k_f a(t) sin(ω_c t) − (k_f²/2!) a²(t) cos(ω_c t) + (k_f³/3!) a³(t) sin(ω_c t) − …]
where a(t) = ∫₀ᵗ m(τ) dτ.

Narrow Band Angle Modulation
Definition: k_f |a(t)| ≪ 1
[Figure: block diagram of a method for generating a narrowband FM signal.]

Wide Band FM
⚫ Wideband FM signal for a single-tone message m(t) = A_m cos(2πf_m t):
s(t) = A_c cos[2πf_c t + β sin(2πf_m t)]
⚫ Fourier series representation (sums run over n from −∞ to ∞):
s(t) = A_c Σ J_n(β) cos[2π(f_c + nf_m)t]
S(f) = (A_c/2) Σ J_n(β) [δ(f − f_c − nf_m) + δ(f + f_c + nf_m)]
Properties of the Bessel coefficients:
1. J_n(β) = (−1)ⁿ J_{−n}(β)
2. If β is small, then J₀(β) ≈ 1, J₁(β) ≈ β/2, and J_n(β) ≈ 0 for all n ≥ 2
3. Σ J_n²(β) = 1
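Properties 2 and 3 can be checked numerically. The Python sketch below evaluates J_n(β) from its standard power series (the series itself is textbook material, not from this slide):

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind J_n(x) via its power series (n >= 0)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

beta = 1.0
# Property 3: sum of J_n^2 over all n is 1; property 1 folds negative n onto positive n.
total = bessel_j(0, beta) ** 2 + 2 * sum(bessel_j(n, beta) ** 2 for n in range(1, 15))

print(round(total, 6))       # 1.0
print(bessel_j(1, 0.1))      # ~0.05: for small beta, J_1(beta) is roughly beta/2
```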
Bandwidth of FM — Carson's Rule
⚫ Facts
– FM has side frequencies extending to infinite frequency → theoretically infinite bandwidth
– But side frequencies become negligibly small beyond a point → practically finite bandwidth
– The FM signal bandwidth equals the required transmission (channel) bandwidth
⚫ The bandwidth of an FM signal is given approximately by Carson's rule (which gives a lower bound); nearly all power lies within it:
– For a single-tone message signal with frequency f_m:
B_T = 2Δf + 2f_m = 2(β + 1) f_m
– For a general message signal m(t) with bandwidth (or highest frequency) W:
B_T = 2Δf + 2W = 2(D + 1)W
where D = Δf/W is the deviation ratio (equivalent to β) and Δf = max |k_f m(t)|.
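As a numerical check of Carson's rule, the standard commercial FM broadcast figures (Δf = 75 kHz peak deviation, W = 15 kHz audio bandwidth — well-known values, not taken from this slide) give the familiar ~180 kHz channel:

```python
def carson_bandwidth(delta_f, w):
    """Carson's rule: B_T = 2 * (delta_f + W)."""
    return 2 * (delta_f + w)

bt = carson_bandwidth(75e3, 15e3)
print(bt)  # 180000.0 Hz, i.e. about 180 kHz per FM broadcast channel

# Single-tone form: with delta_f = beta * f_m, B_T = 2 * (beta + 1) * f_m
beta, fm = 5.0, 15e3
print(carson_bandwidth(beta * fm, fm))  # 180000.0
```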
Analysis of Indirect FM — Armstrong FM Modulator
⚫ Invented by E. Armstrong; an indirect FM method.
1. Input: v(t) = A_c cos[2πf₁t + 2πk_f ∫₀ᵗ m(τ) dτ]
After frequency multiplication by n, every instantaneous frequency is scaled, f_i,out = n·f_i,in, so the new f_i(t) = nf₁ + nk_f m(t) and the new deviation ratio is D = max |nk_f m(t)| / W.
Solution (to the numerical example on the slide):
(a) Δf = 14.4 Hz. (b) Δf = 72 × 14.4 Hz ≈ 1.036 kHz. (c) Δf = 1.036 kHz. (d) Δf = 72 × 1.036 kHz ≈ 74.65 kHz.
Let the slope circuit be simply a differentiator; the envelope of its output is approximately
A_c [2πf_c + 2πk_f m(t)]
so the detector output s_o(t) is linear in m(t).
[Figure: magnitude frequency response of the transformer BPF.]
⚫ Input Signal
v_i(t) = A(t) cos θ(t) = A(t) cos[ω_c t + k_f ∫_{−∞}ᵗ m(α) dα]

FM Stereo Multiplexing

Viet Nam National University Ho Chi Minh City
University of Science
Dang Le Khoa
Email: [email protected]
Outline
⚫ Properties of Signals
– Periodic Waveforms
– DC Value, Power
– Energy and Power Waveforms
– Signal-to-noise Ratio
– dB, dBm
⚫ Properties of Noise
– Random Variables
– Cumulative distribution function (CDF)
– Probability density function (PDF)
– Stationarity and Ergodicity
– Gaussian Distribution

Properties of Signals & Noise
➢ In communication systems, the received waveform is usually categorized into two parts:
Signal: the desired part containing the information.
Noise: the undesired part.
➢ Properties of waveforms include: DC value, root-mean-square (rms) value, normalized power, magnitude spectrum, phase spectrum, power spectral density, bandwidth, …
Where does noise come from?
⚫ External sources: e.g., atmospheric noise, galactic noise, interference.
⚫ Internal sources: shot noise and thermal noise (generated by communication devices themselves).
– Thermal noise is caused by the rapid and random motion of electrons within a conductor due to thermal agitation. It has a Gaussian distribution with zero mean.
– Shot noise: the electrons are discrete and are not moving in a continuous steady flow, so the current fluctuates randomly. It has a Gaussian distribution with zero mean.
– The Gaussian distribution follows from the central limit theorem.

Periodic Waveforms
➢ Definition: a waveform w(t) is periodic with period T₀ if it repeats every T₀. A sinusoidal waveform of frequency f₀ = 1/T₀ hertz is periodic.
➢ Theorem: if the waveform involved is periodic, the time average operator can be reduced to
⟨w(t)⟩ = (1/T₀) ∫ₐ^{a+T₀} w(t) dt
where T₀ is the period of the waveform and a is an arbitrary real constant, which may be taken to be zero.
DC Value and Power
➢ Definition: let v(t) denote the voltage across a set of circuit terminals, and let i(t) denote the current into the terminal, as shown. The instantaneous power (incremental work divided by incremental time) associated with the circuit is given by
p(t) = v(t) i(t)
The instantaneous power flows into the circuit when p(t) is positive and flows out of the circuit when p(t) is negative.
➢ The average power is P = ⟨p(t)⟩ = ⟨v(t) i(t)⟩.
Decibel
➢ A base-10 logarithmic measure of power ratios. The ratio of the power level at the output of a circuit compared with that at the input is often specified by the decibel gain instead of the actual ratio.
➢ Decibel measure can be defined in three ways: decibel gain, decibel signal-to-noise ratio, and decibel power level with a milliwatt reference (dBm).

Decibel Gain
dB = 10 log₁₀(P_out / P_in)
➢ If resistive loads are involved and the input and load resistances are equal, the definition of dB may be reduced to dB = 20 log₁₀(V_out / V_in).

Decibel Signal-to-noise Ratio (SNR)
➢ Definition: the decibel signal-to-noise ratio (S/N, SNR) is (S/N)_dB = 10 log₁₀(S/N).

Decibel with Milliwatt Reference (dBm)
➢ Definition: the decibel power level with respect to 1 mW is P_dBm = 10 log₁₀(P / 1 mW).
Let x(t) be a radio broadcast. How useful is it if x(t) is known? Noise is ubiquitous.
[Figure: a noisy received sequence y[n] compared with the transmitted x[n] after a channel h(t).]
Example: a discrete random variable x takes the values −2, −1, 0, and 3.
Event | Value | P(x)
A     |  3    | 0.2
B     | −2    | 0.5
C     |  0    | 0.1
D     | −1    | 0.2
[Figure: the probability mass function P(x) and the cumulative distribution function F_x(a) for this random variable.]
PDF Properties
⚫ f_x(x) is nonnegative: f_x(x) ≥ 0
⚫ The total probability adds up to one:
∫_{−∞}^{∞} f_x(x) dx = F_x(∞) = 1

Calculating Probability
⚫ To calculate the probability for a range of values:
P(a < x ≤ b) = P(x ≤ b) − P(x ≤ a) = F_x(b) − F_x(a) = ∫ₐᵇ f_x(x) dx
[Figure: a PDF f_x(x) and its CDF F_x(a); the shaded area under f_x between a and b equals F(b) − F(a).]
Expectation of a function of x: E[y] = E[h(x)] = ∫_{−∞}^{∞} h(x) f_x(x) dx
The MEAN is the first moment taken about x₀ = 0:
m = E[x] = ∫_{−∞}^{∞} x f_x(x) dx
The VARIANCE σ² is the second moment around the mean:
σ² = E[(x − m)²] = ∫_{−∞}^{∞} (x − m)² f_x(x) dx
For discrete distributions, the integrals become sums over the mass points.
Gaussian Distribution
⚫ The Gaussian distribution, also called the normal distribution, is one of the most common and important distributions.
⚫ PDF:
f_x(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)}, where m is the mean and σ² is the variance.
⚫ CDF:
F(a) = Q((m − a)/σ) = ½ erfc((m − a)/(σ√2))
Q function: Q(z) = (1/√(2π)) ∫_z^∞ e^{−λ²/2} dλ (tail of the standard Gaussian, m = 0, σ = 1)
Error function: erf(z) = (2/√π) ∫₀^z e^{−λ²} dλ
Complementary error function: erfc(z) = (2/√π) ∫_z^∞ e^{−λ²} dλ = 1 − erf(z)
Relation: Q(z) = ½ erfc(z/√2)
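The Q/erfc relation above maps directly onto the standard library; a quick Python check (not from the slides):

```python
import math

def qfunc(z):
    """Gaussian tail probability Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(qfunc(0))  # 0.5: half the probability lies above the mean
print(qfunc(3))  # ~0.00135: the familiar "3-sigma" tail
```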
Gaussian CDF
⚫ Start with the definition of the CDF:
F(a) = ∫_{−∞}ᵃ f_x(x) dx = (1/√(2πσ²)) ∫_{−∞}ᵃ e^{−(x−m)²/(2σ²)} dx
⚫ Substituting y = m − x gives
F(a) = (1/√(2πσ²)) ∫_{m−a}^{∞} e^{−y²/(2σ²)} dy = Q((m − a)/σ)

Ideal Low-Pass Filtered White Noise
⚫ Suppose white noise is applied to an ideal low-pass filter; the output spectrum is flat within the filter band (baseband or passband).
⚫ If the complex (baseband) signal has bandwidth B, the bandwidth of the transmitted (passband) signal is 2B.
Pulse Modulation
⚫ Parallel data are filtered before sampling; the filter limits the voice band to B ≈ 3 kHz.
• A general decision has been made to make B = 4 kHz (the design bandwidth per voice channel).
Filtering
⚫ Aliasing distortion occurs when f_s < 2B.

Encoding
⚫ Binary codes used for PCM are n-bit codes.
[Table: 3-bit PCM code in sign-magnitude form, mapping sign and magnitude bits to decimal levels such as −2 V and −3 V.]
⚫ Overload distortion occurs when the input exceeds the converter range.
⚫ Resolution, or minimum step size, of the ADC: V_lsb
⚫ Quantization error (Q_e), or quantization noise (Q_n): Q_e = V_lsb/2
Encoding
⚫ Increasing the sampling rate represents the PAM signal more precisely, but it does not reduce the quantization error.
Example
An analog signal is fed to an 8-bit ADC; its highest frequency does not exceed 4 kHz.
⚫ a. Determine the cutoff frequency of the anti-aliasing low-pass filter.
⚫ b. If the analog signal is 1 V and the ADC uses Vref− = −5 V and Vref+ = 5 V, determine the output (the bit sequence after PCM).
⚫ Solution:
a) f_cut = 4 kHz
b) The output word has 8 bits (1 sign bit, 7 data bits). Sign bit: 1 (positive). Data bits: (1 V / 5 V) × (2⁷ − 1) = 25.4 ≈ 25 = 001 1001.
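The arithmetic in part (b) can be sketched as follows; the sign-magnitude convention here (sign bit 1 for positive) follows the slide, and other ADCs use different conventions:

```python
v_in = 1.0       # input voltage
v_ref = 5.0      # Vref+ = +5 V, Vref- = -5 V
n_data_bits = 7  # 8-bit word: 1 sign bit + 7 magnitude bits

code = int(abs(v_in) / v_ref * (2 ** n_data_bits - 1))  # truncates 25.4 -> 25
sign_bit = "1" if v_in >= 0 else "0"
word = sign_bit + format(code, "07b")

print(code)  # 25
print(word)  # "10011001": sign bit 1, magnitude 001 1001
```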
Multiplexing
⚫ 4 popular methods:
– Time-division multiplexing (TDM)
– Frequency-division multiplexing (FDM)
– Code-division multiplexing (CDM)
– Wavelength-division multiplexing (WDM)
FDM and TDM
[Figure: FDM divides the band into frequency slots ΔF1, ΔF2, … (frequency-slot channels) separated by guard bands; TDM divides time into slots Δt1, Δt2, … (time-slot channels) separated by guard times.]
Time-Division Multiplexing
⚫ Transmissions from multiple sources are interleaved in the time domain.
⚫ PCM is the most common type of modulation used with TDM (PCM-TDM system).
⚫ In a PCM-TDM system, 2 or more voice-band channels are sampled, converted to PCM codes, and then time-division multiplexed onto a single medium.
⚫ In TDM, each source device occupies a subset of the transmission bandwidth for a slice of time (time slot).
⚫ At the end, the multiplexer returns to the 1st source device and the process continues.
[Figure: four input signals A, B, C, D are sampled in turn; the multiplexer output carries the sample values A, B, C, D interleaved in time.]
⚫ Channels 1 and 2 are alternately selected and connected to the multiplexer output (PCM code of channel 1, then PCM code of channel 2).
⚫ Frame time is the time taken to transmit one sample from each channel:
frame time = 1/f_s = 1/8000 = 125 µs
Digital Carrier System — T1 Physical Layout
⚫ The basic building block begins with a DS-0 channel.
[Figure: sources such as telephone, fax, and digital data feed the multiplexer; each T1 frame carries a framing bit F followed by 24 channel slots of 8 bits each.]
24 channels/frame × 8 bits/sample = 192 bits per frame
Each channel is sampled once per frame, but not at the same time.
192 bits/frame × 8000 frames/sec = 1,536,000 bps = 1.536 Mbps
Adding the framing bit gives 193 bits/frame × 8000 frames/sec = 1.544 Mbps, the DS-1 rate.
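The T1 rate arithmetic above can be sketched in a few lines of Python:

```python
channels = 24
bits_per_sample = 8
frames_per_second = 8000  # one frame per sampling period (f_s = 8 kHz)

payload_bits = channels * bits_per_sample           # 192 bits per frame
payload_rate = payload_bits * frames_per_second     # 1.536 Mbps
ds1_rate = (payload_bits + 1) * frames_per_second   # + 1 framing bit -> 1.544 Mbps

print(payload_rate)  # 1536000
print(ds1_rate)      # 1544000
```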
Digital Carrier System — Frame Synchronization
D1-type channel banks
B = n × 64 kbps; bits per frame = 8n
System clock: n channels/frame × 8 bits/channel × 8000 frames/second
EXAMPLE: n = 32 → B = 2.048 Mbps; bits per frame = 256; clock = 2.048 MHz.
Does not support bit robbing for signalling as in T1.
E1 Carrier System
⚫ One E1 frame has 32 time slots (TS0–TS31); a multiframe consists of 16 frames (0–15).
⚫ TS0 is used for:
– Synchronization
– Alarm transport
– International carrier use
⚫ TS16 is used to transmit CAS (Channel Associated Signaling) information.
[Figure: CAS signalling sequence for a call — conversation, then clear back (on hook) and clear forward (on hook).]
TDM Hierarchy
⚫ North America and Japan standards (AT&T): 24 DS-0 PCM channels are multiplexed into one DS-1 (1st-level multiplexer); 4 DS-1 into one DS-2 (2nd level); 7 DS-2 into one DS-3 (3rd level); 6 DS-3 into one DS-4 (4th level); 2 DS-4 into one DS-5 (5th level, 560.160 Mbps).
⚫ TDM standards for North America:
Digital Signal Number | Bit rate, R (Mbps) | No. of 64-kbps DS-0 channels | Transmission media used
DS-0 | 0.064  | 1   | Wire pairs
DS-1 | 1.544  | 24  | Wire pairs
DS-2 | 6.312  | 96  | Wire pairs, fiber
DS-3 | 44.736 | 672 | Coax., radio, fiber
TDM Hierarchy
⚫ Europe standards: 30 E0 channels are multiplexed into one E1 (2.048 Mbps), then E1 into E2 (8.448 Mbps), E2 into E3 (34.368 Mbps), E3 into E4 (139.264 Mbps), and the final level runs at 565.148 Mbps.

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications
Chapter 6: Basic Digital Modulations
Dang Le Khoa
Email: [email protected]
Outline
– ASK, OOK, MASK
– FSK, MFSK
– BPSK, DBPSK, MPSK
– MQAM
– OQPSK
– Bit error rate

ASK, OOK, MASK
⚫ The amplitude (or height) of the sine wave varies to transmit the ones and zeros.
⚫ Signal set:
s_i(t) = √(2E_i/T) cos(ω_c t + φ) = a_i ψ₁(t), i = 1, …, M
with basis function ψ₁(t) = √(2/T) cos(ω_c t + φ) and a_i = √E_i.
On-off keying (M = 2): "0" ↔ s₂ = 0 and "1" ↔ s₁ = √E₁ on the ψ₁ axis.
⚫ The baseband pulse h(t) = sinc(t/T) has zero crossings at multiples of T and a rectangular spectrum H(f) of width W = 1/(2T).
The Raised-Cosine Filter
⚫ Raised-cosine filter: a Nyquist pulse (no ISI at the sampling time)
H(f) = 1, for |f| < 2W₀ − W
H(f) = cos²[ (π/4) · (|f| + W − 2W₀)/(W − W₀) ], for 2W₀ − W < |f| < W
H(f) = 0, for |f| > W
h(t) = 2W₀ sinc(2W₀t) · cos[2π(W − W₀)t] / (1 − [4(W − W₀)t]²)
Nyquist pulses (filters) have no ISI at the sampling time. Excess bandwidth: W − W₀. Roll-off factor: r = (W − W₀)/W₀, with 0 ≤ r ≤ 1.
The Raised-Cosine Filter (cont'd) — Binary ASK Bandwidth
[Figure: |H_RC(f)| and h_RC(t) for roll-off factors r = 0, r = 0.5, and r = 1.]
Baseband: W_SSB = (1 + r) R_s/2. Passband: W_DSB = (1 + r) R_s, i.e. B_DSB = (1 + r) R_baud = (1 + r) R_b for binary signalling.
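The passband bandwidth formula is a one-liner; a quick Python check (not from the slides):

```python
def ask_bandwidth(symbol_rate, rolloff):
    """Passband (DSB) bandwidth of binary ASK with raised-cosine pulses: (1 + r) * Rs."""
    return (1 + rolloff) * symbol_rate

print(ask_bandwidth(1e6, 0.0))  # 1000000.0 Hz: r = 0 is the Nyquist minimum
print(ask_bandwidth(1e6, 0.5))  # 1500000.0 Hz: 50% excess bandwidth
```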
Implementation of Binary ASK — OOK and MASK
⚫ OOK (On-Off Keying): a binary 0 is sent as silence.
– Sensor networks: battery life, simple implementation.
⚫ MASK: multiple amplitude levels.
⚫ Binary signalling with a phase-reversed carrier:
s(t) = A cos(2πf_c t) for binary 1
s(t) = A cos(2πf_c t + π) for binary 0
QAM
⚫ QAM is a combination of ASK and PSK.
– Two different signals are sent simultaneously on the same carrier frequency:
s(t) = d₁(t) cos(2πf_c t) + d₂(t) sin(2πf_c t)
– M = 4, 16, 32, 64, 128, 256
Bit Error Probability
The conditional densities satisfy p₀(x) = p_N(x − d₀) and p₁(x) = p_N(x − d₁). With decision threshold S, the two conditional error probabilities are
Q₀ = ∫_S^∞ p₀(x) dx and Q₁ = ∫_{−∞}^S p₁(x) dx
When we define P₀ and P₁ as equal a-priori probabilities of d₀ and d₁ (P₀ = P₁ = ½), the bit error probability is
P_b = P₀Q₀ + P₁Q₁ = ½ ∫_S^∞ p₀(x) dx + ½ ∫_{−∞}^S p₁(x) dx
Substituting x′ = x − d₁ (and x′ = x − d₀) and choosing the threshold midway, S = (d₀ + d₁)/2, both terms reduce to the same noise-tail integral:
P_b = ∫_{(d₁−d₀)/2}^{∞} p_N(x) dx = 1 − ∫_{−∞}^{(d₁−d₀)/2} p_N(x) dx
Special Case: Gaussian Distributed Noise
Motivation: many independent interferers; by the central limit theorem the total noise is approximately Gaussian:
p_N(n) = (1/√(2πσ_N²)) e^{−n²/(2σ_N²)}
P_b = 1 − (1/√(2πσ_N²)) ∫_{−∞}^{(d₁−d₀)/2} e^{−x²/(2σ_N²)} dx
There is no closed-form solution; the result is expressed with the error function.

Definition of Error Function and Error Function Complement
erf(x) = (2/√π) ∫₀ˣ e^{−λ²} dλ, erfc(x) = 1 − erf(x)
[Figure: erf(x) and erfc(x) versus x; in Matlab, Q(x) can be computed as y = 0.5*erfc(x/sqrt(2)).]
Bit Error Rate with the Error Function Complement — Unipolar and Antipodal Transmission
P_b = 1 − (1/√(2πσ_N²)) ∫_{−∞}^{(d₁−d₀)/2} e^{−x²/(2σ_N²)} dx = ½ erfc( (d₁ − d₀)/(2√2 σ_N) )

Expressions with E_S and N₀:
Antipodal (d₁ = +d, d₀ = −d):
P_b = ½ erfc( d/(√2 σ_N) ) = ½ erfc( √(d²/(2σ_N²)) ) = ½ erfc( √(SNR/2) )
with SNR = d²/σ_N² = E_S/(N₀/2) for a matched-filter receiver, so
P_b = ½ erfc( √(E_S/N₀) )
Unipolar (d₁ = +d, d₀ = 0):
P_b = ½ erfc( d/(2√2 σ_N) ) = ½ erfc( √(d²/(8σ_N²)) ) = ½ erfc( √(SNR/4) )
with SNR = (d²/2)/σ_N² = E_S/(N₀/2) for a matched-filter receiver, so
P_b = ½ erfc( √(E_S/(2N₀)) )
[Figure: BER versus E_S/N₀ in dB — theory and simulation for unipolar and antipodal transmission; antipodal performs 3 dB better.]
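The two closed-form BER curves map directly onto `math.erfc`; a quick Python check of the expressions above:

```python
import math

def ber_antipodal(es_over_n0):
    """P_b = 0.5 * erfc(sqrt(E_S / N_0)) for antipodal signalling."""
    return 0.5 * math.erfc(math.sqrt(es_over_n0))

def ber_unipolar(es_over_n0):
    """P_b = 0.5 * erfc(sqrt(E_S / (2 N_0))) for unipolar signalling."""
    return 0.5 * math.erfc(math.sqrt(es_over_n0 / 2))

snr_db = 8.0
snr = 10 ** (snr_db / 10)
print(ber_antipodal(snr))  # roughly 2e-4 at 8 dB
print(ber_unipolar(snr))   # larger: unipolar is 3 dB worse than antipodal
```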
Chapter 7: Introduction to Information Theory
Dang Le Khoa
Email: [email protected]

Outline
⚫ Basic concepts
⚫ Information
⚫ Entropy
⚫ Source Encoding
Basic Concepts
⚫ Block diagram of a digital communication system:
Source → Source encoder → Channel encoder → Modulator → Noisy channel → Demodulator → Channel decoder → Source decoder → Destination

What is Information Theory?
⚫ Information theory provides a quantitative measure of source information and of the information capacity of a channel.
⚫ It deals with coding as a means of utilizing channel capacity for information transfer.
⚫ Shannon's coding theorem:
"If the rate of information from a source does not exceed the capacity of a communication channel, then there exists a coding technique such that the information can be transmitted over the channel with an arbitrarily small probability of error, despite the presence of noise."
Information

Example of Source Encoding
Average codeword length: L̄ = Σ_{i=1}^{L} p(x_i) l_i
Minimum possible average length: L̄_min = H(X)/log₂D, where H(X) is the entropy of the source and D is the number of symbols in the coding alphabet.
Code efficiency: eff = H(X)/(L̄ log₂D), or eff = H(X)/L̄ for a binary alphabet.
The symbol rate at the encoder output: r = 3.5 × (0.533) = 1.864 code symbols/second. This rate is accepted by the channel.
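The entropy and efficiency definitions above can be sketched in Python; the 4-symbol source below is a hypothetical illustration, not the example from the slide:

```python
import math

def entropy(probs):
    """Source entropy H(X) = -sum p log2 p, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def efficiency(probs, lengths):
    """Code efficiency for a binary alphabet: H(X) / average codeword length."""
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    return entropy(probs) / avg_len

p = [0.5, 0.25, 0.125, 0.125]  # hypothetical dyadic source
l = [1, 2, 3, 3]               # prefix code with lengths -log2(p)

print(entropy(p))        # 1.75 bits/symbol
print(efficiency(p, l))  # 1.0: the code lengths match -log2(p) exactly
```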
Entropy and Efficiency of an Extended Binary Source
⚫ Entropy of the nth-order extension of a discrete memoryless source: H(Xⁿ) = n·H(X)
⚫ The efficiency of the extended source improves as n grows.
Behavior of L̄_n/n:
▪ L̄_n/n always exceeds the source entropy and converges to the source entropy for large n.
[Figure: L̄_n/n versus n, decreasing from about 1.0 toward the source entropy.]
Shannon-Fano Coding
L̄ = Σ_{i=1}^{7} p_i l_i = 2.41
H(U) = −Σ_{i=1}^{7} p_i log₂ p_i = 2.37
eff = H(U)/L̄ = 2.37/2.41 = 0.98
▪ The code generated is a prefix code due to the equiprobable partitioning.
▪ The procedure does not lead to a unique prefix code; many prefix codes have the same efficiency.

Huffman Coding [1][2][3]
⚫ Procedure: 3 steps
1. List the source symbols in order of decreasing probability. The two source symbols of lowest probability are assigned a 0 and a 1.
2. These two source symbols are combined into a new source symbol with probability equal to the sum of the two original probabilities. The new probability is placed in the list in accordance with its value.
3. Repeat until the final probability of the new combined symbol is 1.0.
⚫ Example:
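The 3-step procedure can be sketched compactly if we only track codeword lengths (each merge adds one bit to every symbol in the merged group). The 5-symbol source below is a hypothetical illustration, not the slide's example:

```python
import heapq
import itertools

def huffman_lengths(probs):
    """Return the Huffman codeword length for each symbol probability."""
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)  # the two least probable groups
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                # every symbol in a merge gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

p = [0.4, 0.2, 0.2, 0.1, 0.1]
lengths = huffman_lengths(p)
avg = sum(pi * li for pi, li in zip(p, lengths))
print(lengths)  # e.g. [2, 2, 2, 3, 3]
print(avg)      # 2.2 bits/symbol, close to the entropy of about 2.12
```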
Dang Le Khoa
[email protected]

[Figure: channel and digital demodulation stages of the receiver chain.]
What is Channel Coding?
◼ Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, …).
◼ Waveform coding: transforming waveforms to better waveforms.
◼ Structured sequences: transforming data sequences into better sequences, having structured redundancy. "Better" in the sense of making the decision process less subject to errors.

Error Control Techniques
⚫ Automatic Repeat reQuest (ARQ)
– Full-duplex connection, error detection codes
– The receiver sends feedback to the transmitter indicating whether an error is detected in the received packet: Not-Acknowledgement (NACK) or Acknowledgement (ACK).
– The transmitter retransmits the previously sent packet if it receives a NACK.
⚫ Forward Error Correction (FEC)
– Simplex connection, error correction codes
– The receiver tries to correct some errors.
⚫ Hybrid ARQ (ARQ+FEC)
– Full-duplex, error detection and correction codes
Linear Block Codes
◼ The information bit stream is chopped into blocks of k bits.
◼ Each block is encoded to a larger block of n bits.
◼ The coded bits are modulated and sent over the channel.
◼ The reverse procedure is done at the receiver.
⚫ The Hamming weight of a vector U, denoted w(U), is the number of non-zero elements in U.
⚫ The Hamming distance between two vectors U and V is the number of elements in which they differ.
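The two definitions translate directly into code; a small Python sketch (not from the slides):

```python
def hamming_weight(u):
    """Number of non-zero elements in u."""
    return sum(1 for bit in u if bit != 0)

def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

u = [1, 1, 0, 1, 0, 0]
v = [0, 1, 1, 0, 1, 0]
print(hamming_weight(u))       # 3
print(hamming_distance(u, v))  # 4
```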
⚫ The error-detecting capability of a code is e = d_min − 1.
⚫ The error-correcting capability t of a code, which is defined as the maximum number of guaranteed correctable errors per codeword, is
t = ⌊(d_min − 1)/2⌋
◼ Encoding: U = mG, where the generator matrix G has rows V₁, V₂, …, V_k:
(u₁, u₂, …, u_n) = (m₁, m₂, …, m_k) [V₁; V₂; …; V_k] = m₁V₁ + m₂V₂ + … + m_kV_k
◼ The rows of G are linearly independent.
Linear Block Codes (cont'd)
⚫ Example: block code (6,3) with generator matrix
G = [V₁; V₂; V₃] =
1 1 0 1 0 0
0 1 1 0 1 0
1 0 1 0 0 1
Message vector → Codeword:
000 → 000000
100 → 110100
010 → 011010
110 → 101110
001 → 101001
101 → 011101
011 → 110011
111 → 000111
⚫ Systematic block code (n, k)
– For a systematic code, the first (or last) k elements in the codeword are information bits.
G = [P  I_k], where I_k is the k×k identity matrix and P is a k×(n−k) matrix.
U = (u₁, u₂, …, u_n) = (p₁, p₂, …, p_{n−k}, m₁, m₂, …, m_k)
(parity bits, then message bits)
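The encoding U = mG for this (6,3) code is a mod-2 matrix product; a Python sketch using the generator matrix from the example:

```python
# Generator matrix of the (6,3) block code from the example, G = [P | I3]
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G=G):
    """Codeword U = m G over GF(2) (mod-2 matrix product)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

print(encode([1, 1, 0]))  # [1, 0, 1, 1, 1, 0], matching the codeword table
print(encode([1, 1, 1]))  # [0, 0, 0, 1, 1, 1]
```

Note the last three bits of each codeword are the message itself — the systematic G = [P | I₃] structure.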
⚫ For any linear code we can find an (n−k)×n matrix H such that its rows are orthogonal to the rows of G:
G Hᵀ = 0
⚫ H is called the parity check matrix, and its rows are linearly independent.
⚫ For systematic linear block codes: H = [I_{n−k}  Pᵀ]
⚫ Standard array
– The first row lists the 2^k codewords U₁ (the all-zero codeword), U₂, …, U_{2^k}.
– For row i = 2, 3, …, 2^{n−k}, find a vector in V_n of minimum weight which is not already listed in the array.
– Call this pattern e_i and form the ith row as the corresponding coset: e_i, e_i ⊕ U₂, …, e_i ⊕ U_{2^k}. The first column contains the coset leaders.
Linear Block Codes (cont'd)
⚫ Example: standard array for the (6,3) code. The first row lists the codewords:
000000 110100 011010 101110 101001 011101 110011 000111
Subsequent rows are cosets built from the coset leaders 000001, 000010, 000100, 001000, 010000, 100000, 010001, … (each leader added to every codeword).
⚫ Decoding chain: data source → format → channel encoding → modulation → channel → demodulation/detection → channel decoding → format → data sink.
r = U + e, where r = (r₁, r₂, …, r_n) is the received codeword (vector) and e = (e₁, e₂, …, e_n) is the error pattern (vector).
◼ Syndrome testing:
S = rHᵀ = eHᵀ
◼ S is the syndrome of r, corresponding to the error pattern e.
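Syndrome testing for the (6,3) example can be sketched with the parity-check matrix H = [I₃ | Pᵀ] derived from the generator matrix above:

```python
# Parity-check matrix for the systematic (6,3) code, H = [I3 | P^T]
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r, H=H):
    """S = r H^T over GF(2); an all-zero syndrome means r is a codeword."""
    return [sum(r[j] * H[i][j] for j in range(len(r))) % 2 for i in range(len(H))]

codeword = [1, 0, 1, 1, 1, 0]   # valid codeword for message 110
corrupted = [1, 0, 1, 1, 1, 1]  # single bit error in the last position

print(syndrome(codeword))   # [0, 0, 0]: no error detected
print(syndrome(corrupted))  # [1, 0, 1]: non-zero, equal to the matching column of H
```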
Hamming Codes
G =
1 1 1 0 0 0 0
1 0 0 1 1 0 0
0 1 0 1 0 1 0
1 1 0 1 0 0 1
Cyclic Block Codes
◼ A linear (n, k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords.
U = (u₀, u₁, u₂, …, u_{n−1}); the ith cyclic shift of U is U⁽ⁱ⁾ = (u_{n−i}, u_{n−i+1}, …, u_{n−1}, u₀, u₁, u₂, …, u_{n−i−1}).
◼ Example: U = (1101); U⁽¹⁾ = (1110), U⁽²⁾ = (0111), U⁽³⁾ = (1011), U⁽⁴⁾ = (1101) = U.
⚫ The algebraic structure of cyclic codes implies expressing codewords in polynomial form:
U(X) = u₀ + u₁X + u₂X² + … + u_{n−1}X^{n−1}, of degree (n − 1).
⚫ Relationship between a codeword and its cyclic shifts:
XU(X) = u₀X + u₁X² + … + u_{n−2}X^{n−1} + u_{n−1}Xⁿ
      = u_{n−1} + u₀X + u₁X² + … + u_{n−2}X^{n−1} + u_{n−1}Xⁿ + u_{n−1}
      = U⁽¹⁾(X) + u_{n−1}(Xⁿ + 1)
– Hence: U⁽¹⁾(X) = XU(X) modulo (Xⁿ + 1), and by extension U⁽ⁱ⁾(X) = XⁱU(X) modulo (Xⁿ + 1).
Cyclic block codes

⚫ Systematic encoding algorithm for an (n,k) cyclic code:

1. Multiply the message polynomial m(X) by X^(n−k)
2. Divide X^(n−k) m(X) by the generator polynomial g(X) to obtain the remainder p(X)
3. Form the codeword U(X) = p(X) + X^(n−k) m(X)

⚫ Example: For the systematic (7,4) cyclic code with generator polynomial g(X) = 1 + X + X^3:

1. Find the codeword for the message m = (1011).
   n = 7, k = 4, n − k = 3

⚫ Syndrome testing:
S(X) = 0: no error
S(X) ≠ 0: error detected → retransmission
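The three encoding steps above can be sketched for the stated example, m = (1011) with g(X) = 1 + X + X^3 (list index j = coefficient of X^j):

```python
# Systematic (7,4) cyclic encoding over GF(2).
def poly_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2)."""
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            # cancel the leading term with a shifted copy of g(X)
            for j, gj in enumerate(g):
                a[i - len(g) + 1 + j] ^= gj
    return a[:len(g) - 1]

g = [1, 1, 0, 1]          # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]          # m(X) = 1 + X^2 + X^3
shifted = [0, 0, 0] + m   # step 1: X^(n-k) m(X) = X^3 m(X)
p = poly_mod(shifted, g)  # step 2: parity polynomial p(X)
U = p + m                 # step 3: U(X) = p(X) + X^3 m(X)
assert U == [1, 0, 0, 1, 0, 1, 1]     # parity (100), message (1011)
# Every codeword is divisible by g(X):
assert poly_mod(U, g) == [0, 0, 0]
```

The final check is the syndrome test from the slide: dividing a received word by g(X) gives S(X), which is zero exactly when no detectable error occurred.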
Example of the block codes

(Figure: bit-error-rate performance versus Eb/N0 [dB] for QPSK.)

Viet Nam National University Ho Chi Minh City
University of Science

Dang Le Khoa
[email protected]
Introduction

◼ In block coding, the encoder accepts a k-bit message block and generates an n-bit codeword on a block-by-block basis
◼ The encoder must buffer an entire message block before generating the codeword

Definitions

◼ A convolutional encoder is a finite-state machine that consists of an M-stage shift register and n modulo-2 adders
◼ An L-bit message sequence produces an output sequence of n(L + M) bits
◼ Code rate:

r = L / (n(L + M))  (bits/symbol)

◼ Constraint length: K = M + 1
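A small sketch of the rate formula, using the numbers of the (2,1,2) example that follows (n = 2, M = 2, message length L = 5):

```python
# Code rate r = L / (n(L + M)) for a convolutional code.
from fractions import Fraction

L, n, M = 5, 2, 2
r = Fraction(L, n * (L + M))
assert r == Fraction(5, 14)        # 14 output bits carry 5 message bits
assert n * (L + M) == 14
```

For L much larger than M the rate approaches 1/n, so the tail overhead of the M flushing zeros becomes negligible.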
Example

◼ Convolutional code (3,2,1)
◼ n = 3: 3 modulo-2 adders, i.e. 3 outputs
◼ k = 2: 2 inputs
◼ M = 1: 1 stage in each shift register (K = 2 each)

(Figure: encoder diagram with Input and Output.)

Generations

◼ A convolutional code is a nonsystematic code
◼ Each path connecting the output to the input can be characterized by its impulse response (gM^(i), …, g2^(i), g1^(i), g0^(i)) or by a generator polynomial
◼ Generator polynomial of the ith path:

g^(i)(D) = gM^(i) D^M + … + g2^(i) D^2 + g1^(i) D + g0^(i)

◼ D denotes the unit-delay variable (different from the X of cyclic codes)
◼ A complete convolutional code is described by a set of polynomials {g^(1)(D), g^(2)(D), …, g^(n)(D)}
Example(1/8)

◼ Consider the case of (2,1,2)
◼ Impulse response of path 1 is (1,1,1)
◼ The corresponding generator polynomial is g^(1)(D) = D^2 + D + 1

Example(2/8)

◼ Output polynomial of path 1:

c^(1)(D) = m(D) g^(1)(D)
         = (D^4 + D^3 + 1)(D^2 + D + 1)
         = D^6 + D^5 + D^4 + D^5 + D^4 + D^3 + D^2 + D + 1
         = D^6 + D^3 + D^2 + D + 1
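The polynomial product above is a carryless (XOR) convolution, sketched here with list index j holding the coefficient of D^j:

```python
# GF(2) polynomial multiplication: coefficients add modulo 2 (XOR).
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m = [1, 0, 0, 1, 1]      # m(D) = D^4 + D^3 + 1
g1 = [1, 1, 1]           # g^(1)(D) = D^2 + D + 1
c1 = poly_mul(m, g1)
assert c1 == [1, 1, 1, 1, 0, 0, 1]   # D^6 + D^3 + D^2 + D + 1
```

Note how the D^5 and D^4 terms appear twice in the expanded product and cancel modulo 2, exactly as in the derivation above.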
Example(3/8)

◼ m = (11001)
◼ c^(1) = (1001111)
◼ c^(2) = (1111101)
◼ Encoded sequence c = (11,01,01,11,11,10,11)
◼ Message length L = 5 bits
◼ Output length n(L+K−1) = 14 bits
◼ A terminating sequence of K−1 = 2 zeros is appended to the last input bit so that the shift register is restored to its zero initial state

Example(4/8)

◼ Another way to calculate the output:
◼ Path 1: slide the taps 111 across the zero-padded message and take the modulo-2 inner product at each step → c^(1) = (1001111)
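The whole (2,1,2) encoder can be sketched as a shift register with the two tap sets from the example (111 for path 1, 101 for path 2) and the K−1 flushing zeros appended:

```python
# Rate-1/2, K = 3 convolutional encoder matching the example above.
def conv_encode(msg, taps=((1, 1, 1), (1, 0, 1))):
    K = len(taps[0])
    bits = list(msg) + [0] * (K - 1)   # flush with K-1 terminating zeros
    reg = [0] * K
    out = []
    for b in bits:
        reg = [b] + reg[:-1]           # shift the new bit in
        for g in taps:                 # one modulo-2 adder per output path
            out.append(sum(gi & ri for gi, ri in zip(g, reg)) % 2)
    return out

c = conv_encode([1, 1, 0, 0, 1])
# Interleaved output c = (11, 01, 01, 11, 11, 10, 11), 14 bits in total
assert c == [1,1, 0,1, 0,1, 1,1, 1,1, 1,0, 1,1]
```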
Example(5/8)

◼ Path 2: impulse response (1,0,1)
◼ Slide the taps 101 across the zero-padded message and take the modulo-2 inner product at each step → c^(2) = (1111101)

Example(6/8)

◼ Consider the case of (3,2,1)

(Figure: encoder with two inputs and three outputs.)

◼ gi^(j) = (gi,M^(j), gi,M−1^(j), …, gi,1^(j), gi,0^(j)) denotes the impulse response of the jth path corresponding to the ith input
Example(7/8)

◼ Assume that:
◼ m^(1) = (101) → m^(1)(D) = D^2 + 1
◼ m^(2) = (011) → m^(2)(D) = D + 1
◼ g1^(3) = (11) → g1^(3)(D) = D + 1
◼ g2^(3) = (10) → g2^(3)(D) = D

Example(8/8)

◼ c^(3)(D) = m^(1)(D) g1^(3)(D) + m^(2)(D) g2^(3)(D)
          = (D^2 + 1)(D + 1) + (D + 1)D
          = D^3 + D^2 + D + 1 + D^2 + D
          = D^3 + 1  →  c^(3) = (1001)
◼ Output c = (101,100,010,011)
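For a multi-input encoder each output path sums the contributions of every input, as a sketch of the c^(3) computation above shows (list index j = coefficient of D^j):

```python
# Third output of the (3,2,1) encoder: c^(3) = m^(1) g1^(3) + m^(2) g2^(3) over GF(2).
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

m1 = [1, 0, 1]        # m^(1)(D) = D^2 + 1
m2 = [1, 1]           # m^(2)(D) = D + 1
g13 = [1, 1]          # g1^(3)(D) = D + 1
g23 = [0, 1]          # g2^(3)(D) = D
c3 = poly_add(poly_mul(m1, g13), poly_mul(m2, g23))
assert c3 == [1, 0, 0, 1]    # c^(3)(D) = D^3 + 1  ->  c^(3) = (1001)
```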
Trellis(1/2)

(Trellis diagram: states a = 00, b = 10, c = 01, d = 11; branches labeled input/output, e.g. 0/00 and 1/10; levels j = 0, 1, 2, …, L+1, L+2.)

◼ The trellis contains (L + K) levels
◼ Labeled as j = 0, 1, …, L, …, L+K−1
◼ The first (K−1) levels correspond to the encoder’s departure from the initial state a
◼ The last (K−1) levels correspond to the encoder’s return to state a
◼ For a level j in the range K−1 ≤ j ≤ L, all the states are reachable
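One trellis section is fully described by a state-transition table; here is a sketch for the rate-1/2, K = 3 encoder with taps 111 and 101, using the state names a = 00, b = 10, c = 01, d = 11 from the trellis (newest register bit written first):

```python
# Build the branch table (state, input bit) -> (next state, output pair).
def step(state, b):
    s1, s2 = state
    out = (b ^ s1 ^ s2, b ^ s2)       # the two modulo-2 adder outputs
    return (b, s1), out               # next state: shift the new bit in

names = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}
table = {}
for state in names:
    for b in (0, 1):
        nxt, out = step(state, b)
        table[(names[state], b)] = (names[nxt], out)

# Branch labels as drawn in the trellis:
assert table[('a', 0)] == ('a', (0, 0))   # a --0/00--> a
assert table[('a', 1)] == ('b', (1, 1))   # a --1/11--> b
assert table[('d', 1)] == ('d', (1, 0))   # d --1/10--> d
```

Each of the four states has exactly two outgoing branches (input 0 or 1), which is why every trellis level between K−1 and L looks identical.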
Example of Convolutional codes

◼ Message 11001
Input:  1 1 0 0 1 0 0
Output: 11 01 01 11 11 10 11

(Trellis diagram: states a = 00, b = 10, c = 01, d = 11; the path traced by the input sequence, levels j = 0, 1, …, 7.)

Maximum Likelihood Decoding

◼ m denotes a message vector
◼ c denotes the corresponding code vector
◼ r denotes the received vector
◼ With a given r, the decoder is required to make an estimate m̂ of the message vector, or equivalently to produce an estimate ĉ of the code vector
◼ m̂ = m only if ĉ = c; otherwise a decoding error happens
◼ A decoding rule is said to be optimum when the probability of decoding error is minimized
◼ The maximum likelihood decoder or decision rule is described as follows:
◼ Choose the estimate ĉ for which the log-likelihood function log p(r|c) is maximum
◼ For a memoryless binary symmetric channel with transition probability p:

log p(r|c) = Σ (i = 1..N) log p(ri|ci)

with p(ri|ci) = p if ri ≠ ci, and 1 − p if ri = ci

◼ If r differs from c in d positions, where d is the Hamming distance between r and c:

log p(r|c) = d log p + (N − d) log(1 − p)
           = d log(p / (1 − p)) + N log(1 − p)

◼ Since log(p/(1 − p)) < 0 for p < 1/2, maximizing the log-likelihood is equivalent to minimizing d: the received vector r is compared with each possible code vector c, and the one closest to r is chosen as the correct transmitted code vector
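A minimal sketch of minimum-distance decoding, brute-forcing over a small codebook (here a subset of the (6,3) codewords used earlier, just for illustration):

```python
# ML decoding over a BSC with p < 1/2 reduces to minimum Hamming distance.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(r, codebook):
    return min(codebook, key=lambda c: hamming(r, c))

codebook = [(0,0,0,0,0,0), (1,1,0,1,0,0), (0,1,1,0,1,0), (1,0,1,0,0,1)]
r = (1, 1, 0, 1, 1, 0)           # codeword 110100 with one bit flipped
assert min_distance_decode(r, codebook) == (1, 1, 0, 1, 0, 0)
```

Brute force is only feasible for tiny codebooks; the point of the Viterbi algorithm below is to find the minimum-distance path through the trellis without enumerating all code sequences.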
The Viterbi algorithm

(Trellis diagram: surviving paths with the accumulated path metric at each state and level; states a = 00, b = 10, c = 01, d = 11; branch labels such as 1/10.)

Code:   11 01 01 11 11 10 11
Output: 1 1 0 0 1 0 0
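As a sketch, hard-decision Viterbi decoding for the rate-1/2, K = 3 encoder (taps 111 and 101) recovers the message 11001 from the code sequence above. Each step keeps, per state, only the best (lowest-metric) path into that state:

```python
# Hard-decision Viterbi decoder for the rate-1/2, K = 3 example code.
def viterbi(received_pairs):
    # state = (s1, s2); encoder starts and (after flushing) ends in (0, 0)
    metrics = {(0, 0): (0, [])}          # state -> (path metric, input bits)
    for r in received_pairs:
        new = {}
        for (s1, s2), (m, bits) in metrics.items():
            for b in (0, 1):
                out = (b ^ s1 ^ s2, b ^ s2)          # branch output
                d = (out[0] != r[0]) + (out[1] != r[1])  # branch metric
                nxt = (b, s1)
                cand = (m + d, bits + [b])
                if nxt not in new or cand[0] < new[nxt][0]:
                    new[nxt] = cand                  # keep the survivor
        metrics = new
    return metrics[(0, 0)]               # best path ending in the zero state

code = [(1,1), (0,1), (0,1), (1,1), (1,1), (1,0), (1,1)]
metric, bits = viterbi(code)
assert metric == 0                       # no channel errors in this example
assert bits[:5] == [1, 1, 0, 0, 1]       # message 11001 (plus 2 flush zeros)
```

Flipping a bit or two in `code` still decodes correctly, since this code has free distance 5 and can correct up to two channel errors along a path.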