Communication Systems: Course Outline

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Communication Systems

Dang Le Khoa
Email: [email protected]

Course Outline
⚫ Chapter 1: Introduction
⚫ Chapter 2: Amplitude Modulations and Demodulations
⚫ Chapter 3: Angle Modulation and Demodulation
⚫ Chapter 4: Noise in Communication Systems
⚫ Chapter 5: Digital Transmission and Multiplexing
⚫ Chapter 6: Basic Digital Modulation
⚫ Chapter 7: Introduction to Information Theory
⚫ Chapter 8: Error Correcting Codes

Resources
⚫ Textbook
[1] B.P. Lathi, Modern Digital and Analog Communication Systems, 4th edition, Oxford University Press, 2010.
⚫ Recommended readings
[1] Leon W. Couch II, Digital and Analog Communication Systems, 6th edition, Prentice Hall, 2001.
[2] John G. Proakis and Masoud Salehi, Communication Systems Engineering, 2nd edition, Prentice Hall, 2002.
⚫ Software
[1] Matlab

Assignments
⚫ Homework exercises + project (20%)
⚫ Attendance (10%)
⚫ Multiple choice questions (20%)
⚫ Final exam (50%)
Projects
⚫ Personal Wireless Networks: Bluetooth
⚫ Wireless Communications: WiFi/LoRa
⚫ Mobile Communications: LTE (4G)/NB-IoT
⚫ Optical communications
⚫ Satellite communications

⚫ 12:30 – 17:00

Chapter 1: Introduction
– Basic Block Diagram
– Typical Communication Systems
– Analog or Digital
– Entropy to Measure the Quantity of Information
– Channels/Spectrum Allocation
– Shannon Capacity
– Modulation
– Communication Networks
– Review of Signals

1. Communication System Components
Communication systems are designed to transmit information.

Basic block diagram:
– Transmitter: source input → source coder → channel coder → modulation (D/A)
– Channel: adds distortion and noise
– Receiver: demodulation (A/D) → channel decoder → source decoder → reconstructed signal output
2. Telecommunication
⚫ Telegraph
⚫ Fixed line telephone
⚫ Cable/Wired networks
⚫ Wireless Communications
⚫ Internet
⚫ Fiber communications

Wireless Communications
⚫ Satellite
⚫ TV
⚫ Cordless phone
⚫ Cellular phone
⚫ Wireless LAN, WiFi
⚫ Wireless MAN, WiMAX
⚫ Bluetooth
⚫ Ultra Wide Band
⚫ Wireless Laser
⚫ Microwave
⚫ GPS
⚫ Ad hoc/Sensor Networks

3. Analog or Digital
⚫ Analog message: continuous in amplitude and over time
– AM, FM for voice sound
– Traditional TV for analog video
– First-generation cellular phone (analog mode)
– Record player
⚫ Digital message: 0 or 1, or discrete values
– VCD, DVD
– 2G/3G cellular phone
– Data on your disk
– Your grade
⚫ Digital age: why digital communication will prevail

Digital and Analog Sources and Systems
➢ A digital communication system transfers information from a digital source to the intended receiver (also called the sink).
➢ An analog communication system transfers information from an analog source to the sink.
➢ A digital waveform is defined as a function of time that can have only a discrete set of amplitude values.
➢ An analog waveform is a function that has a continuous range of values.
Digital Communication
➢ Advantages
• Relatively inexpensive digital circuits may be used;
• Privacy is preserved by using data encryption;
• Data from voice, video, and data sources may be merged and transmitted over a common digital transmission system;
• In long-distance systems, noise does not accumulate from repeater to repeater; data regeneration is possible;
• Errors in detected data may be small, even when there is a large amount of noise on the received signal;
• Errors may often be corrected by the use of coding.
➢ Disadvantages
• Generally, more bandwidth is required than for analog systems;
• Synchronization is required.

ADC/DAC
⚫ Analog-to-Digital Conversion (ADC) and Digital-to-Analog Conversion (DAC) are the processes that allow digital computers to interact with everyday analog signals.
⚫ Digital information is different from its continuous counterpart in two important respects: it is sampled, and it is quantized.
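The two ADC steps named above, sampling and quantization, can be sketched in a few lines of Python (the course software is Matlab; this standalone sketch and its parameter choices are illustrative assumptions, not course code):

```python
import math

def sample_and_quantize(signal_fn, duration, fs, n_bits, full_scale=1.0):
    """Uniformly sample signal_fn on [0, duration) at rate fs (Hz), then
    quantize each sample onto a 2**n_bits-level mid-rise grid spanning
    [-full_scale, +full_scale]."""
    n_levels = 2 ** n_bits
    step = 2.0 * full_scale / n_levels          # quantization step size
    samples, codes = [], []
    for k in range(int(duration * fs)):
        x = signal_fn(k / fs)                   # sampling: discrete time
        idx = int((x + full_scale) / step)      # quantization: nearest level index
        idx = max(0, min(n_levels - 1, idx))    # clip to the valid code range
        samples.append(x)
        codes.append(idx)
    return samples, codes

# 1 kHz tone sampled at 8 kHz with 3-bit quantization (8 levels)
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples, codes = sample_and_quantize(tone, 0.001, 8000, 3)
```

Reconstructing each code as the midpoint of its quantization cell bounds the error by half a step, which is exactly the sampled-and-quantized behavior the slide describes.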

4. Source Coder
⚫ Examples
– Digital camera: encoder; TV/computer: decoder
– Camcorder
– Phone
– Read the book
⚫ Theorem
– How much information there is, is measured by entropy
– More randomness means higher entropy and more information

4. Channel, Bandwidth, Spectrum
⚫ Bandwidth: the number of bits per second is proportional to B
http://www.ntia.doc.gov/osmhome/allochrt.pdf
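The entropy claim above ("more randomness, more information") can be checked numerically; this is a minimal illustrative sketch, not course material:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p), in bits per symbol.
    Higher entropy means more randomness and more information per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a biased coin carries less,
# and a certain outcome carries no information at all.
h_fair = entropy([0.5, 0.5])      # 1.0 bit
h_biased = entropy([0.9, 0.1])    # about 0.47 bits
```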
Power, Channel, Noise
⚫ Transmit power
– Constrained by device, battery, health issues, etc.
⚫ Channel responses differ over frequency and over time
– Satellite: almost flat over frequency, changes slightly over time
– Cable or line: response varies strongly over frequency, changes slightly over time
– Fiber: nearly perfect
– Wireless: worst. Multipath reflection causes fluctuation in the frequency response; Doppler shift causes fluctuation over time
⚫ Noise and interference
– AWGN: additive white Gaussian noise
– Interference: power lines, microwave ovens, other users (e.g., CDMA phones)

5. Shannon Capacity
⚫ Shannon theory
– Given a noisy channel with information capacity C and information transmitted at a rate R, if R < C there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information without error up to a limit, C.
– The converse is also important. If R > C, the probability of error at the receiver increases without bound as the rate is increased, so no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
⚫ Shannon capacity
C = B log2(1 + SNR) bit/s
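The capacity formula above is easy to evaluate; this short Python sketch (illustrative only, with an assumed telephone-channel example) shows it with SNR given in dB:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """C = B log2(1 + SNR), the maximum error-free rate in bit/s.
    snr_linear is a power ratio, not dB."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(snr_db):
    """Convert an SNR in dB to a linear power ratio."""
    return 10 ** (snr_db / 10)

# Assumed example: a telephone channel with about 3 kHz of bandwidth
# at roughly 30 dB SNR supports about 29.9 kbit/s.
c = shannon_capacity(3000, db_to_linear(30))
```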

6. Modulation
⚫ Process of varying a carrier signal in order to use that signal to convey information
– A carrier signal can travel far, but the information alone cannot
– Modem: amplitude, phase, and frequency
– Analog: AM (amplitude), FM (frequency), vestigial sideband modulation (TV)
– Digital: mapping digital information to different constellations, e.g., frequency-shift keying (FSK)

Quality of a Link (Quality of Service, QoS)
⚫ Mean square error
MSE = (1/N) Σᵢ₌₁ᴺ |X̂ᵢ − Xᵢ|²
⚫ Signal-to-noise ratio (SNR)
SNR = Prec/σ² = Ptx·G/σ²
– Bit error rate
– Frame error rate
– Packet drop rate
– Peak SNR (PSNR)
– SINR/SNIR: signal-to-noise-plus-interference ratio
⚫ Human factor
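The two link-quality metrics defined above, MSE and SNR, can be computed directly; this is an illustrative sketch with made-up sample values:

```python
import math

def mse(estimates, targets):
    """Mean square error (1/N) * sum |xhat_i - x_i|**2 between
    reconstructed and original samples."""
    return sum((xh - x) ** 2 for xh, x in zip(estimates, targets)) / len(targets)

def snr_db(signal, noise):
    """SNR = signal power / noise power, reported in dB."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_sig / p_noise)

x = [1.0, -1.0, 1.0, -1.0]             # transmitted samples (assumed)
noise = [0.1, -0.1, 0.1, -0.1]         # additive noise (assumed)
y = [a + b for a, b in zip(x, noise)]  # received samples
```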
Multiplexing
⚫ Space-division multiplexing
⚫ Frequency-division multiplexing
⚫ Time-division multiplexing
⚫ Code-division multiplexing

7. Communication Networks
⚫ Internet: a connection of 2 or more distinct (possibly dissimilar) networks.
⚫ Requires some kind of network device to facilitate the connection: Net A ↔ network device ↔ Net B.

OSI Model

8. Review of Signals
⚫ Data and Signals
⚫ Periodic analog signals
⚫ Digital signals


Analog and Digital Data
⚫ Data can be analog or digital.
– The term analog data refers to information that is continuous;
– Digital data refers to information that has discrete states.
⚫ For example:
– An analog clock that has hour, minute, and second hands gives information in a continuous form; the movements of the hands are continuous.
– A digital clock that reports the hours and the minutes will change suddenly from 8:05 to 8:06.

Analog and Digital Signals
Like the data they represent, signals can be either analog or digital. An analog signal has infinitely many levels of intensity over a period of time. As the wave moves from value A to value B, it passes through and includes an infinite number of values along its path. A digital signal, on the other hand, can have only a limited number of defined values. Although each value can be any number, it is often as simple as 1 and 0.

Comparison of analog and digital signals (figure).

Periodic and Nonperiodic
A periodic signal completes a pattern within a measurable time frame, called a period, and repeats that pattern over subsequent identical periods. The completion of one full pattern is called a cycle. A nonperiodic signal changes without exhibiting a pattern or cycle that repeats over time.
Sine Wave
The sine wave is the most fundamental form of a periodic analog signal.

Figure: two signals with two different peak amplitudes (value-versus-time plots).

Figure: two signals with the same phase but different amplitudes and frequencies.

Example
The period of a signal is 100 ms. What is its frequency in kilohertz?
Solution
First we change 100 ms to seconds: 100 ms = 10⁻¹ s. The frequency is the inverse of the period:
f = 1/T = 1/10⁻¹ Hz = 10 Hz = 10 × 10⁻³ kHz = 10⁻² kHz.
Phase
The term phase, or phase shift, describes the position of the waveform relative to time 0. If we think of the wave as something that can be shifted backward or forward along the time axis, phase describes the amount of that shift. It indicates the status of the first cycle.

Figure: three sine waves with different phases.

Example
A sine wave is offset 1/6 cycle with respect to time 0. What is its phase in degrees and radians?
Solution
We know that 1 complete cycle is 360°. Therefore, 1/6 cycle is
(1/6) × 360° = 60° = 60 × (2π/360) rad = π/3 rad ≈ 1.047 rad.

Time and Frequency Domains
A sine wave is comprehensively defined by its amplitude, frequency, and phase. We have been showing a sine wave by using what is called a time-domain plot. The time-domain plot shows changes in signal amplitude with respect to time (it is an amplitude-versus-time plot).
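The cycle-to-degrees-to-radians conversion in the example above is a one-liner in code; this is a small illustrative sketch, not course material:

```python
import math

def cycles_to_degrees(frac_cycle):
    """One full cycle is 360 degrees."""
    return frac_cycle * 360.0

def degrees_to_radians(deg):
    """360 degrees correspond to 2*pi radians."""
    return deg * math.pi / 180.0

# The example's 1/6-cycle offset: 60 degrees, i.e. pi/3 radians
phase_deg = cycles_to_degrees(1 / 6)
phase_rad = degrees_to_radians(phase_deg)
```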
The time- and frequency-domain plots of a sine wave (figure).

Example
The frequency domain is more compact and useful when we are dealing with more than one sine wave. For example, a figure showing three sine waves, each with a different amplitude and frequency, can be represented by just three spikes in the frequency domain.

Composite Signals: a composite periodic signal
The figure shows a periodic composite signal with frequency f. We can consider it to be three alarm systems, each with a different frequency. The analysis of this signal can give us a good understanding of how to decompose signals. It is very difficult to manually decompose this signal into a series of simple sine waves. However, there are tools, both hardware and software, that can help us do the job. We are not concerned about how it is done; we are only interested in the result. The next figure shows the result of decomposing the above signal in both the time and frequency domains.
Figure 3.11: Decomposition of a composite periodic signal (time-domain waveform; frequency-domain spikes at f, 3f, and 9f).

Example 3.9
Figure 3.12 shows a nonperiodic composite signal. It can be the signal created by a microphone or a telephone set when a word or two is pronounced. In this case, the composite signal cannot be periodic, because that would imply that we are repeating the same word or words with exactly the same tone.

Figure 3.12: Time and frequency domain of a nonperiodic signal.

Bandwidth
The range of frequencies contained in a composite signal is its bandwidth. The bandwidth is normally a difference between two numbers. For example, if a composite signal contains frequencies between 1000 Hz and 5000 Hz, its bandwidth is 5000 − 1000 = 4000 Hz.
Figure 3.13: The bandwidth of periodic and nonperiodic composite signals.

Example 3.10
If a periodic signal is decomposed into five sine waves with frequencies of 100, 300, 500, 700, and 900 Hz, what is its bandwidth? Draw the spectrum, assuming all components have a maximum amplitude of 10 V.
Solution
Let fh be the highest frequency, fl the lowest frequency, and B the bandwidth. Then
B = fh − fl = 900 − 100 = 800 Hz.
The spectrum has only five spikes, at 100, 300, 500, 700, and 900 Hz, each with amplitude 10 V.

Figure 3.14: The bandwidth for Example 3.10.

3-3 DIGITAL SIGNALS
In addition to being represented by an analog signal, information can also be represented by a digital signal.
A digital signal can have more than two levels. In this case, we can send more than 1 bit for each level.
Figure 3.17 shows two signals, one with two levels and the other with four.
Figure 3.17: Two digital signals: one with two signal levels and the other with four signal levels.

Example 3.16
A digital signal has eight levels. How many bits are needed per level? We calculate the number of bits from the formula
number of bits per level = log2(8) = 3.
Each signal level is represented by 3 bits.

Bit Rate
Most digital signals are nonperiodic, and thus period and frequency are not appropriate characteristics. Another term, bit rate (instead of frequency), is used to describe digital signals. The bit rate is the number of bits sent in 1 s, expressed in bits per second (bps). Figure 3.17 shows the bit rate for two signals.

Example 3.18
Assume we need to download text documents at the rate of 100 pages per second. If a page is on average 24 lines with 80 characters in each line and one character requires 8 bits, what is the required bit rate of the channel?
Solution
The bit rate is
100 × 24 × 80 × 8 = 1,536,000 bps = 1.536 Mbps.
Example 3.19
A digitized voice channel, as we will see in Chapter 4, is made by digitizing a 4-kHz bandwidth analog voice signal. We need to sample the signal at twice the highest frequency (two samples per hertz). We assume that each sample requires 8 bits. What is the required bit rate?
Solution
The bit rate can be calculated as
2 × 4000 × 8 = 64,000 bps = 64 kbps.

Example 3.20
What is the bit rate for high-definition TV (HDTV)?
Solution
HDTV uses digital signals to broadcast high-quality video signals. The HDTV screen is normally a ratio of 16:9 (in contrast to 4:3 for regular TV), which means the screen is wider. There are 1920 by 1080 pixels per screen, and the screen is renewed 30 times per second. Twenty-four bits represent one color pixel. We can calculate the bit rate as
1920 × 1080 × 30 × 24 = 1,492,992,000 bps ≈ 1.5 Gbps.
The TV stations reduce this rate to 20 to 40 Mbps through compression.
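The three bit-rate calculations in Examples 3.18 through 3.20 follow the same multiply-out pattern; this illustrative Python sketch reproduces them (the function names are mine, not from the course):

```python
def text_bit_rate(pages_per_s, lines, chars_per_line, bits_per_char):
    """Example 3.18: bit rate for downloading plain text."""
    return pages_per_s * lines * chars_per_line * bits_per_char

def sampled_bit_rate(bandwidth_hz, bits_per_sample):
    """Example 3.19: sample at twice the highest frequency (Nyquist),
    then multiply by bits per sample."""
    return 2 * bandwidth_hz * bits_per_sample

def video_bit_rate(width, height, frames_per_s, bits_per_pixel):
    """Example 3.20: raw (uncompressed) video bit rate."""
    return width * height * frames_per_s * bits_per_pixel

text = text_bit_rate(100, 24, 80, 8)        # 1,536,000 bps = 1.536 Mbps
voice = sampled_bit_rate(4000, 8)           # 64,000 bps = 64 kbps
hdtv = video_bit_rate(1920, 1080, 30, 24)   # 1,492,992,000 bps, about 1.5 Gbps
```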

• A digital signal is a composite analog signal with an infinite bandwidth; in practice it is limited by the bandwidth of the medium.

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 2: Amplitude Modulations and Demodulations
Dang Le Khoa
Email: [email protected]
Overview
⚫ Baseband and Carrier Communication
⚫ Different AMs
⚫ DSB-SC
⚫ Conventional AM (DSB-FC)
⚫ SSB
⚫ VSB
⚫ Amplitude Demodulations
⚫ FDM system
⚫ Phase-locked loop

Baseband and Carrier Communication
⚫ Baseband:
− Describes signals and systems whose range of frequencies is measured from 0 to a maximum bandwidth or highest signal frequency
− Voice: telephone 0–3.5 kHz; CD 0–22.05 kHz
− Video: analog TV 4.5 MHz; a TV channel is 0–6 MHz. Digital rates depend on the frame size, movement, frames per second, etc.
− Examples: wire, coaxial cable, optical fiber, PCM phone
⚫ Carrier Communication:
− Carrier: a waveform (usually sinusoidal) that is modulated to represent the information to be transmitted. This carrier wave is usually of much higher frequency than the modulating (baseband) signal.
− Modulation: the process of varying a carrier signal in order to use that signal to convey information.

Modulation
⚫ Modulation
− A process that causes a shift in the range of frequencies of a signal.
⚫ Gain advantages
− Antenna size: a practical antenna is about half a wavelength; for a baseband signal this would be thousands of miles long
− Better usage of limited bandwidth: fewer side lobes
− Trade bandwidth for SNR: CDMA
− Robust to inter-symbol interference (multipath delay)
− Robust to errors and distortions
⚫ Types
− Analog: AM (DSB, SSB, VSB), FM, delta modulation
− Digital: ASK, FSK, PSK, QAM, ...
− Pulse modulation: PCM, PDM, ... (fiber, phone)
⚫ Advanced: CDMA (3G), OFDM (WLAN, WMAN), ...

Double Sideband
⚫ Modulation: m(t)cos(ωct) ↔ ½[M(ω − ωc) + M(ω + ωc)]
⚫ Lower/upper sideband (LSB/USB); double sideband (DSB)
⚫ DSB-SC: suppressed carrier, no carrier frequency component
⚫ ωc ≥ bandwidth of the signal, to avoid aliasing
⚫ Demodulation: e(t) = m(t)cos²(ωct) = ½[m(t) + m(t)cos(2ωct)]
E(ω) = ½M(ω) + ¼[M(ω + 2ωc) + M(ω − 2ωc)]
A lowpass filter removes the higher-frequency component.
⚫ Coherent and non-coherent detection
– The receiver can recover the frequency and phase of the transmitter with a PLL; timing error can cause a performance error floor
– A non-coherent receiver performs about 3 dB worse than a coherent one
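The DSB-SC demodulation steps above (multiply by a synchronized carrier, then lowpass-filter away the 2ωc term) can be simulated end to end; this is an illustrative pure-Python sketch with assumed frequencies, and the moving average stands in for a proper lowpass filter:

```python
import math

# Assumed parameters: 20 kHz carrier, 1 kHz message tone, 400 kHz sampling
fc, fm, fs = 20_000.0, 1_000.0, 400_000.0
n = 4000                                   # 10 ms of signal
t = [k / fs for k in range(n)]

m = [math.cos(2 * math.pi * fm * tk) for tk in t]                     # message
tx = [mk * math.cos(2 * math.pi * fc * tk) for mk, tk in zip(m, t)]   # DSB-SC

# Coherent detection: multiply by a synchronized local carrier,
# giving e(t) = 0.5*m(t) + 0.5*m(t)*cos(2*wc*t)
e = [xk * math.cos(2 * math.pi * fc * tk) for xk, tk in zip(tx, t)]

# Crude lowpass filter: a moving average over one carrier period
# (20 samples) nulls the 2*fc component and keeps roughly 0.5*m(t)
w = int(fs / fc)
rec = [sum(e[k - w + 1:k + 1]) / w for k in range(w - 1, n)]
```

The recovered waveform swings between about +0.5 and −0.5, i.e. ½m(t), as the spectrum analysis predicts.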
AM-DSB-SC
⚫ φ(t) = m(t)cos(ωct)
⚫ Multiplying two sinusoids results in two frequencies which are the sum and difference of the frequencies of the sinusoids multiplied:
cos(αt)cos(βt) = ½[cos((α + β)t) + cos((α − β)t)]
⚫ Spectra: M(ω) is centered at 0; Φ(ω) has components at ±ωc, each with a lower sideband (LSB) and an upper sideband (USB).
⚫ Example 4.1

Frequency Conversion
⚫ To change the carrier frequency of a modulated signal to an intermediate frequency ωI, use an oscillator to generate a sinusoid of frequency ωMIX such that ωI = ωc − ωMIX, then bandpass-filter at ωI:
m(t)cos(ωct)cos(ωMIXt) = ½m(t)[cos((ωc + ωMIX)t) + cos((ωc − ωMIX)t)]
= ½m(t)[cos((2ωc − ωI)t) + cos(ωIt)]
A bandpass filter at ωI selects e₁(t) = ½m(t)cos(ωIt).
⚫ Examples 4.2, 4.3

Amplitude Modulation (DSB-FC)
⚫ Why DSB-SC alone is hard: the receiver does not know the carrier frequency and phase.
⚫ φAM(t) = [A + m(t)]cos(ωct)
ΦAM(ω) = ½[M(ω − ωc) + M(ω + ωc)] + πA[δ(ω − ωc) + δ(ω + ωc)]
⚫ The impulse functions indicate that the carrier is not suppressed in this case. For the M(ω) shown, the modulated signal spectrum is as shown.
⚫ With this type of AM, demodulation can be performed with or without a local oscillator synchronized with the transmitter.

AM Example
• m(t) has a minimum value of about −0.4. Adding a dc offset of A = 1 results in A + m(t) being always positive. Therefore the positive envelope of φAM(t) = [A + m(t)]cos(ωct) is just A + m(t), and an envelope detector can be used to retrieve it.
AM Example (cont.)
⚫ The dc offset should be chosen such that A + m(t) is always positive. Otherwise an envelope detector cannot be used (coherent detection still works).
⚫ For example, with min m(t) = −0.4 we need A ≥ |min m(t)| = 0.4 for successful envelope detection.
⚫ In the previous example, let A = 0.3 < 0.4: the envelope A + m(t) goes negative, and envelope detection fails.

Modulation Index
• Let mp be the absolute negative peak of m(t) and A the carrier amplitude.
MODULATION INDEX: μ = mp/A
For A ≥ mp, 0 ≤ μ ≤ 1.
When μ > 1 (i.e., A < mp) the signal is overmodulated, and envelope detection cannot be used. (However, we can still use synchronous demodulation.)
⚫ EXAMPLE: single-tone modulation, m(t) = 2sin(20t), so mp = 2 and μ = 2/A.
i) μ = 0.5 → A = 4; ii) μ = 1 → A = 2; for a dc offset of 1, μ = 2 (overmodulated).
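The modulation-index cases above can be checked numerically; this illustrative sketch uses the slide's single-tone example m(t) = 2 sin(20t):

```python
import math

def modulation_index(m_samples, A):
    """mu = m_p / A, where m_p is the absolute negative peak of m(t)."""
    return -min(m_samples) / A

# Single-tone case from the slides: m(t) = 2 sin(20 t), so m_p = 2
t = [k / 1000.0 for k in range(1000)]
m = [2.0 * math.sin(20.0 * tk) for tk in t]

mu_a4 = modulation_index(m, 4.0)   # ~0.5: envelope detection works
mu_a1 = modulation_index(m, 1.0)   # ~2.0: overmodulated

# Envelope detection requires A + m(t) to stay positive
ok_a4 = min(4.0 + mk for mk in m) > 0
ok_a1 = min(1.0 + mk for mk in m) > 0
```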

Figure: coherent detector for demodulating a DSB-SC modulated wave.

Sideband and Carrier Power
φAM(t) = Acos(ωct) + m(t)cos(ωct)
The first term is the carrier; the second term is the sidebands, which contain the signal itself. The total AM signal power is the sum of the carrier power and the sideband power.
Carrier power: Pc = A²/2
Sideband power: Ps = ½Pm, where Pm is the power of m(t).
The sideband power is the useful power.
Efficiency: η = useful power / total power = Ps/(Pc + Ps) = Pm/(A² + Pm)

For example, let m(t) = Bcos(ωmt). Then mp = B, μ = B/A, so B = μA, and
Pm = B²/2 = μ²A²/2
η = μ²/(2 + μ²) × 100%
For μ = 1, ηmax = 1/(2 + 1) × 100% ≈ 33%.
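The efficiency formula just derived is easy to tabulate; this illustrative one-function sketch reproduces the slide's numbers:

```python
def am_efficiency(mu):
    """Tone-modulated AM power efficiency: eta = mu**2 / (2 + mu**2).
    Only the sideband power carries the message."""
    return mu ** 2 / (2.0 + mu ** 2)

eta_full = am_efficiency(1.0)   # 1/3: at most about 33% of power is useful
eta_half = am_efficiency(0.5)   # 1/9: about 11% at 50% modulation
```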
AM Noncoherent Detector
⚫ Rectifier detector: synchronous
⚫ Envelope detector: asynchronous (AM signal → diode → parallel R and C → vc(t))

QAM
⚫ AM signal BANDWIDTH: AM signal bandwidth is twice the bandwidth of the modulating signal. A 5 kHz signal requires 10 kHz of bandwidth for AM transmission. If the carrier frequency is 1000 kHz, the AM signal spectrum occupies 995 kHz to 1005 kHz.
⚫ QUADRATURE AMPLITUDE MODULATION is a scheme that allows two signals to be transmitted over the same frequency range.
⚫ Requires coherence in frequency and phase: expensive.
⚫ Used in analog TV and most modems.

Single Sideband (SSB)
• Purpose: to reduce the bandwidth requirement of AM by one-half. This is achieved by transmitting only the upper sideband or the lower sideband of the DSB AM signal.

SSB spectra:
– Baseband: M(ω), occupying −2πB to 2πB
– DSB: centered at ±ωc
– SSB (upper sideband): ΦSSB(ω)
SSB Generator
• Selective filtering using filters with sharp cutoff characteristics. Sharp-cutoff filters are difficult to design. The audio signal spectrum has no dc component, therefore the spectrum of the modulated audio signal has a null around the carrier frequency. This means a less-than-perfect filter can do a reasonably good job of filtering the DSB to produce SSB signals.
• Filter design challenge: the baseband signal must be bandpass, with no low-frequency components.

SSB Demodulation
Synchronous (SSB-SC) demodulation, with mh(t) the Hilbert transform of m(t):
φSSB(t) = m(t)cos(ωct) ∓ mh(t)sin(ωct)
φSSB(t)cos(ωct) = ½m(t)[1 + cos(2ωct)] ∓ ½mh(t)sin(2ωct)
A lowpass filter can be used to get ½m(t).

SSB+C, envelope detection:
φSSB+C(t) = Acos(ωct) + m(t)cos(ωct) ∓ mh(t)sin(ωct)
An envelope detector can be used to demodulate such SSB signals. What is the envelope of φSSB+C(t) = [A + m(t)]cos(ωct) ∓ mh(t)sin(ωct) = E(t)cos(ωct + θ)?
{Recall Acos(α) + Bsin(α) = (A² + B²)^½ cos(α + θ), θ = −tan⁻¹(B/A).}
E(t) = [(A + m(t))² + mh²(t)]^½ = [A² + m²(t) + mh²(t) + 2Am(t)]^½
= A[1 + 2m(t)/A + m²(t)/A² + mh²(t)/A²]^½
≈ A + m(t) for A ≫ |m(t)|, A ≫ |mh(t)|.
The efficiency of this scheme is very low, since A has to be large.

SSB vs. AM
⚫ Since the carrier is not transmitted, there is a 67% reduction in transmitted power (−4.7 dB). In AM at 100% modulation, 2/3 of the power is in the carrier; the remaining 1/3 is in the two sidebands.
⚫ Because in SSB only one sideband is transmitted, there is a further 50% reduction in transmitted power (−3 dB).
⚫ Finally, because only one sideband is received, the receiver's needed bandwidth is reduced by one half, effectively reducing the required power by another 50% (−3 dB).
⚫ Total: −4.7 dB − 3 dB − 3 dB = −10.7 dB.
⚫ Drawback: a relatively expensive receiver.

Vestigial Sideband (VSB)
• VSB is a compromise between DSB and SSB. To produce an SSB signal from a DSB signal, ideal filters would have to split the spectrum exactly in the middle so that the bandwidth of the bandpass signal is reduced by one half. In a VSB system, one sideband and a vestige of the other sideband are transmitted together. The resulting signal has a bandwidth greater than that of the modulating (baseband) signal but less than the DSB signal bandwidth.
(Spectra: DSB centered at ±ωc; ΦSSB(ω), upper sideband; ΦVSB(ω), VSB spectrum.)
Filtering scheme for the generation of a VSB modulated wave (VSB transceiver):
Transmitter: m(t) → multiply by 2cos(ωct) → shaping filter Hi(ω) → φVSB(t)
Receiver: φVSB(t) → multiply by 2cos(ωct) → e(t) → lowpass equalizer Ho(ω) → m(t)

M(ω) is bandlimited to 2πB rad/s.
ΦVSB(ω) = [M(ω − ωc) + M(ω + ωc)]Hi(ω)
E(ω) = ΦVSB(ω − ωc) + ΦVSB(ω + ωc)
= Hi(ω − ωc)M(ω − 2ωc) + [Hi(ω − ωc) + Hi(ω + ωc)]M(ω) + Hi(ω + ωc)M(ω + 2ωc)
The M(ω ∓ 2ωc) terms are high-frequency terms, removed by the lowpass filter. Recovering
M(ω) = E(ω)Ho(ω)
thus requires
[Hi(ω + ωc) + Hi(ω − ωc)]Ho(ω) = 1 for |ω| ≤ 2πB,
or Ho(ω) = 1 / [Hi(ω + ωc) + Hi(ω − ωc)].

Figures: block diagram of an FDM system; FDMA of SSB for telephone systems; the modulation steps in an FDM system.

AM Broadcasting
⚫ History
⚫ Frequency
– Long wave: 153–270 kHz
– Medium wave: 520–1,710 kHz (AM radio)
– Short wave: 2,300–26,100 kHz, long distance, SSB, VOA
⚫ Limitations
– Susceptibility to atmospheric interference
– Lower-fidelity sound, so mostly news and talk radio
– Better propagation at night, via the ionosphere

Superheterodyne vs. Homodyne
⚫ Move all frequencies of different channels to one intermediate frequency
– In AM receivers, that frequency is 455 kHz
– In FM receivers, it is usually 10.7 MHz
⚫ Eases filter design; accommodates more radio stations
⚫ Invented by Edwin Howard Armstrong

Carrier Recovery Error
⚫ DSB: e(t) = 2m(t)cos(ωct)cos((ωc + Δω)t + δ); after lowpass filtering, e(t) = m(t)cos(Δω·t + δ)
– Phase error: if fixed, an attenuation; if varying, fading (as in shortwave radio)
– Frequency error: a catastrophic beating effect
⚫ SSB: only the frequencies shift; tolerable if Δf < 30 Hz
– The "Donald Duck" effect
⚫ Remedies: crystal oscillators, atomic oscillators, GPS, ...
⚫ Pilot: a signal, usually a single frequency, transmitted over a communications system for supervisory, control, equalization, continuity, synchronization, or reference purposes.
Phase-Locked Loop
⚫ Could be a whole course; the most important part of the receiver.
⚫ Definition: a closed-loop feedback control system that generates and outputs a signal in relation to the frequency and phase of an input ("reference") signal.
⚫ A phase-locked loop circuit responds to both the frequency and the phase of the input signals, automatically raising or lowering the frequency of a controlled oscillator until it is matched to the reference in both frequency and phase.

Ideal Model
⚫ Model: Si → multiplier → Sp → LPF → So, with So → VCO → Sv fed back to the multiplier
– Si = A sin(ωct + θ1(t)), Sv = Av cos(ωct + θc(t))
– Sp = ½AAv[sin(2ωct + θ1 + θc) + sin(θ1 − θc)]
– So = ½AAv sin(θ1 − θc) ≈ ½AAv(θ1 − θc) for a small phase error
⚫ Capture Range and Lock Range
Carrier Acquisition in DSB-SC
⚫ Signal squaring method
⚫ Costas loop:
v1(t) = ½AcAl m(t)cosθ, v2(t) = ½AcAl m(t)sinθ
v3(t) = [½AcAl m(t)]² cosθ sinθ = ½[½AcAl m(t)]² sin2θ
v4(t) = K sin2θ
⚫ These methods do not work for SSB-SC.

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 3: Angle Modulation and Demodulation
Dang Le Khoa
Email: [email protected]
Overview
⚫ Angle modulation
⚫ FM modulation
– Principle
– Signal spectrum
⚫ FM detection
– Frequency discriminator
– Phase-locked loop

FM Basics
⚫ VHF (30–300 MHz) high-fidelity broadcast
⚫ Wideband FM (FM radio, TV sound); narrowband FM (two-way radio)
⚫ 1933: FM and angle modulation proposed by Armstrong, but commercial success only by 1949
⚫ Digital counterparts: frequency-shift keying (FSK), phase-shift keying (BPSK, QPSK, 8PSK, ...)
⚫ AM/FM analogy: transverse wave / longitudinal wave
Angle Modulation vs. AM
⚫ Properties of amplitude modulation
– Amplitude modulation is linear: the spectrum just moves to a new frequency band, its shape does not change, and no new frequencies are generated.
– Spectrum: S(f) is a translated version of M(f)
– Bandwidth ≤ 2W
⚫ Properties of angle modulation
– Nonlinear: the spectrum shape does change, and new frequencies are generated.
– S(f) is not just a translated version of M(f)
– Bandwidth is usually much larger than 2W

Angle Modulation Pros/Cons and Applications
⚫ Why use angle modulation?
– Better noise reduction
– Improved system fidelity
⚫ Disadvantages
– Low bandwidth efficiency
– Complex implementations
⚫ Applications
– FM radio broadcast
– TV sound signal
– Two-way mobile radio
– Cellular radio
– Microwave and satellite communications
Instantaneous Frequency
⚫ Angle modulation has two forms:
– Frequency modulation (FM): the message is represented as the variation of the instantaneous frequency of a carrier
– Phase modulation (PM): the message is represented as the variation of the instantaneous phase of a carrier
s(t) = Ac cos[θi(t)], where Ac is the carrier amplitude and θi(t) is the angle (phase)
instantaneous frequency: fi(t) = (1/2π) dθi(t)/dt
s(t) = Ac cos[2πfct + φ(t)], where φ(t) is a function of the message signal m(t)

Phase Modulation
⚫ PM (phase modulation) signal:
s(t) = Ac cos[2πfct + kp m(t)]
φ(t) = kp m(t), kp: phase sensitivity
instantaneous frequency: fi(t) = fc + (kp/2π) dm(t)/dt
Frequency Modulation
⚫ FM (frequency modulation) signal:
s(t) = Ac cos[2πfct + 2πkf ∫₀ᵗ m(τ)dτ]
kf: frequency sensitivity
instantaneous frequency: fi(t) = fc + kf m(t)
angle: θi(t) = 2π ∫₀ᵗ fi(τ)dτ = 2πfct + 2πkf ∫₀ᵗ m(τ)dτ (assuming zero initial phase)
FM Characteristics (example waveforms)
⚫ Characteristics of FM signals
– Zero-crossings are not regular
– The envelope is constant
– FM and PM signals are similar
Relations between FM and PM
FM of m(t) ⇔ PM of ∫₀ᵗ m(τ)dτ
PM of m(t) ⇔ FM of dm(t)/dt

FM/PM example waveforms (time/frequency).
Example
Consider m(t), a square wave with period T, as shown. The FM wave for this m(t) is
φFM(t) = A cos(ωct + kf ∫₀ᵗ m(τ)dτ).
Assume m(t) starts at t = 0. For 0 ≤ t ≤ T/2, m(t) = 1, so ∫₀ᵗ m(τ)dτ = t; for T/2 ≤ t ≤ T, m(t) = −1, so ∫₀ᵗ m(τ)dτ = T/2 − (t − T/2) = T − t.
The instantaneous frequency is ωi(t) = ωc + kf m(t): ωi(t) = ωc + kf for 0 ≤ t ≤ T/2 and ωi(t) = ωc − kf for T/2 ≤ t ≤ T, so
ωi,max = ωc + kf and ωi,min = ωc − kf.

Single-tone example: for m(t) = Am cos(2πfmt),
fi = (1/2π) dθi/dt = (1/2π) d/dt [2πfct + 2πkf ∫₀ᵗ Am cos(2πfmτ)dτ]
= fc + kf Am cos(2πfmt).
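The square-wave example above can be verified numerically by accumulating the FM phase and differencing it; this illustrative sketch uses assumed values fc = 100 Hz, kf = 10 Hz per unit amplitude:

```python
import math

# Numerically build the FM phase theta(t) = 2*pi*fc*t + 2*pi*kf*integral(m),
# then check that the instantaneous frequency is fc + kf*m(t).
fc, kf, T, fs = 100.0, 10.0, 0.1, 100_000.0
def m(t):                          # square wave with period T
    return 1.0 if (t % T) < T / 2 else -1.0

dt, n = 1.0 / fs, 20_000           # 0.2 s, i.e. two message periods
phase, phases = 0.0, []
for k in range(n):
    phase += 2 * math.pi * (fc + kf * m(k * dt)) * dt   # fi(t) = fc + kf*m(t)
    phases.append(phase)

# Instantaneous frequency estimated from the phase increment
fi_first_half = (phases[100] - phases[99]) / (2 * math.pi * dt)      # fc + kf
fi_second_half = (phases[6000] - phases[5999]) / (2 * math.pi * dt)  # fc - kf
```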
Frequency Deviation
⚫ Frequency deviation Δf
– The difference between the maximum instantaneous frequency and the carrier frequency
– Definition: Δf = kf Am = kf max|m(t)|
– Relationship with instantaneous frequency:
single-tone m(t): fi = fc + Δf cos(2πfmt)
general case: fc − Δf ≤ fi ≤ fc + Δf
– Question: is the bandwidth of s(t) just 2Δf?
No. Instantaneous frequency is not equivalent to spectral frequency: s(t) has spectral components (with non-zero power) extending to infinite frequency.

Modulation Index
⚫ Indicates by how much the modulated variable (instantaneous frequency) varies around its unmodulated level, relative to the message frequency:
AM (envelope): μ = max|ka m(t)|/A
FM (frequency): β = max|kf m(t)|/fm = Δf/fm
⚫ Bandwidth: with a(t) = ∫ m(τ)dτ,
φ(t) = Re(φ̂(t)) = A[cos ωct − kf a(t) sin ωct − (kf²/2!) a²(t) cos ωct + (kf³/3!) a³(t) sin ωct − ...]
Narrowband Angle Modulation
⚫ Definition: kf |a(t)| ≪ 1
⚫ Equation: φ(t) ≈ A[cos ωct − kf a(t) sin ωct]
⚫ Comparison with AM
– Only a phase difference of π/2
– Frequency domain: similar
– Time domain: AM has constant frequency; FM has constant amplitude
⚫ Conclusion: the NBFM signal is similar to an AM signal, and NBFM also has bandwidth 2W (twice the message signal bandwidth).
Wide Band FM
(Figure: block diagram of a method for generating a narrowband FM signal.)
⚫ Wideband FM signal (single-tone message):
m(t) = Am cos(2πfmt)
s(t) = Ac cos[2πfct + β sin(2πfmt)]
⚫ Fourier series representation:
s(t) = Ac Σₙ₌₋∞..∞ Jn(β) cos[2π(fc + nfm)t]
S(f) = (Ac/2) Σₙ₌₋∞..∞ Jn(β)[δ(f − fc − nfm) + δ(f + fc + nfm)]
Jn(β): n-th order Bessel function of the first kind
Example Bessel Function of First Kind

1. J n (  ) = (−1) n J − n (  )
2. If  is small, then J 0 (  )  1,

J1 (  )  ,
2
J n (  )  0 for all n  2

3. J
n =−
2
n ( ) = 1

Bessel Functions of the First Kind

  Modulation   Carrier and side-frequency pairs
  index β      J0     J1     J2     J3     J4     J5     J6
  0            1.00   -      -      -      -      -      -
  0.25         0.98   0.12   -      -      -      -      -
  0.5          0.94   0.24   0.03   -      -      -      -
  1            0.77   0.44   0.11   0.02   -      -      -
  1.5          0.51   0.56   0.23   0.06   0.01   -      -
  2            0.22   0.58   0.35   0.13   0.03   -      -
  2.4          0      0.52   0.43   0.20   0.06   0.02   -

Spectrum of WBFM

⚫ Spectrum when m(t) is single-tone:
      s(t) = Ac cos[2π fc t + β sin(2π fm t)] = Ac Σn Jn(β) cos[2π(fc + n fm)t]
      S(f) = (Ac/2) Σn Jn(β) [δ(f − fc − n fm) + δ(f + fc + n fm)]
⚫ Example 2.2
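The table entries above can be reproduced numerically. A minimal Python sketch (function name `bessel_jn` is mine, not from the slides) evaluates Jn(β) directly from its power series and checks the β = 2.4 row, where the carrier term J0 vanishes:

```python
import math

def bessel_jn(n, beta, terms=40):
    """nth-order Bessel function of the first kind (n >= 0), from the
    power series J_n(x) = sum_{k>=0} (-1)^k (x/2)^(2k+n) / (k! (k+n)!)."""
    return sum((-1) ** k * (beta / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

# beta = 2.4 is near the first zero of J_0 (2.405), so the carrier vanishes
j0 = bessel_jn(0, 2.4)   # ~ 0.0025, rounded to 0 in the table
j1 = bessel_jn(1, 2.4)   # ~ 0.52
```

Property 3 above (ΣJn² = 1, which is why total FM power is independent of β) can be checked the same way by summing J0² + 2·Σn≥1 Jn².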

Bandwidth of FM — Carson's Rule

⚫ Facts
  – For a single-tone message signal at frequency fm, FM has side frequencies
    extending to infinite frequency → theoretically infinite bandwidth
  – But the side frequencies become negligibly small beyond a point →
    practically finite bandwidth
  – The FM signal bandwidth equals the required transmission (channel) bandwidth
⚫ Carson's rule (a lower bound): nearly all the power lies within a bandwidth of
      BT = 2Δf + 2 fm = 2(β + 1) fm
  – For a general message signal m(t) with bandwidth (highest frequency) W:
      BT = 2Δf + 2W = 2(D + 1)W
    where D = Δf / W is the deviation ratio (the counterpart of β) and
    Δf = max|kf m(t)|
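As a quick numerical check of Carson's rule, here is a minimal Python sketch (function names are mine, for illustration); it reproduces the classic FM-broadcast case Δf = 75 kHz, W = 15 kHz → 180 kHz:

```python
def carson_bandwidth(delta_f, w):
    """Carson's rule for a general message: B_T = 2*(delta_f + W)."""
    return 2.0 * (delta_f + w)

def carson_bandwidth_tone(beta, f_m):
    """Single-tone form: B_T = 2*(beta + 1)*f_m, with beta = delta_f / f_m."""
    return 2.0 * (beta + 1.0) * f_m

# FM broadcast: delta_f = 75 kHz, W = 15 kHz (equivalently beta = 5)
bt = carson_bandwidth(75e3, 15e3)       # 180 kHz
```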

FM Modulator and Demodulator

⚫ FM modulator
  – Direct FM
  – Indirect FM
⚫ FM demodulator
  – Direct: frequency discriminator (frequency-to-voltage converter)
  – Ratio detector
  – Zero-crossing detector
  – Indirect: phase-locked loop (PLL)
⚫ Superheterodyne receiver
⚫ FM broadcasting

FM Direct Modulator

⚫ Direct FM
  – The carrier frequency is varied directly by the message through a
    voltage-controlled oscillator (VCO)
  – VCO: output frequency changes linearly with input voltage
  – A simple VCO can be implemented with a variable capacitor
  – Capacitor-microphone FM generator
FM Direct Modulator (cont.)

⚫ The direct method is simple and low cost, but lacks high stability and
  accuracy; it is a low-power approach and the carrier frequency is unstable
⚫ The capacitance changes with the applied voltage: C(t) = C0 + ΔC·m(t)
⚫ LC oscillator frequency:
      fi(t) = 1 / (2π√(LC)) = 1 / (2π√(L C0 + L ΔC m(t)))
            ≈ [1 / (2π√(L C0))] · [1 − (ΔC/2C0) m(t)]   (higher-order terms dropped)
            ≈ f0 − (f0 ΔC / 2C0) m(t) = f0 − Δf·m(t)
⚫ Modern VCOs are usually implemented as PLL ICs
⚫ Why does a VCO generate an FM signal? Its output frequency is a linear
  function of the input voltage, which is exactly the FM definition
  fi = fc + kf m(t)

Indirect FM

⚫ Generate NBFM first; the NBFM signal is then frequency-multiplied to reach
  the targeted Δf
⚫ Good when a stable carrier frequency is required
⚫ Commercial-level FM broadcasting equipment commonly uses indirect FM
⚫ A typical indirect FM implementation: Armstrong FM
⚫ Block diagram of indirect FM (figure)

Indirect FM (cont.)

⚫ First, generate an NBFM signal with a very small β1:
      v(t) = Ac cos(2π f1 t) − β1 Ac sin(2π f1 t) sin(2π fm t)
⚫ Then apply a frequency multiplier to magnify β
  – The instantaneous frequency is multiplied by n
  – So are the carrier frequency, Δf, and β
  – What about the bandwidth?
      fi(output) = n · fi(input)
Analysis of Indirect FM

1. Input:
      v(t) = Ac cos[2π f1 t + 2π kf ∫0..t m(τ) dτ]
   where fi(t) = f1 + kf m(t) and β1 = max|kf m(t)| / W
2. A nonlinear device
      vo(t) = a1 v(t) + a2 v²(t) + … + an v^n(t) + …
   outputs components with instantaneous frequency n f1 + n kf m(t)
3. A bandpass filter selects the new carrier fc = n f1:
      s(t) = Ac cos[2π n f1 t + 2π n kf ∫0..t m(τ) dτ]
   where the new fi(t) = n f1 + n kf m(t) and β = n · max|kf m(t)| / W

Armstrong FM Modulator

⚫ Invented by E. Armstrong; an indirect FM modulator
⚫ A popular implementation of commercial-level FM
⚫ Parameters: message W = 15 kHz; final FM signal s(t): Δf = 74.65 kHz
⚫ Can you find Δf at points (a)–(d) in the block diagram?
  Solution: (a) Δf = 14.4 Hz; (b) Δf = 72 × 14.4 Hz ≈ 1.036 kHz;
  (c) Δf = 1.036 kHz (mixing shifts only the carrier, not the deviation);
  (d) Δf = 72 × 1.036 kHz ≈ 74.65 kHz

FM Demodulator

⚫ Four primary methods
  – Differentiator with envelope detector / slope detector (FM-to-AM conversion)
  – Phase-shift discriminator / ratio detector
  – Zero-crossing detector
  – Frequency feedback: phase-locked loop (PLL)

FM Slope Demodulator

⚫ Principle: use a slope detector (slope circuit) as the frequency
  discriminator, which implements frequency-to-voltage conversion (FVC)
  – Slope circuit: output voltage is proportional to the input frequency.
    Examples: filters, a differentiator
  – A differentiator has H(f) = j2πf — a 10 Hz input gives j20π, a 20 Hz input
    gives j40π; a filter skirt approximates this linear response
FM Slope Demodulator (cont.) — Slope Detector

⚫ Block diagram of the direct method (slope detector = slope circuit + envelope
  detector)
      s(t) = Ac cos[2π fc t + 2π kf ∫0..t m(τ) dτ],  where fi(t) = fc + kf m(t)
⚫ Let the slope circuit be simply a differentiator:
      s1(t) = −Ac [2π fc + 2π kf m(t)] sin[2π fc t + 2π kf ∫0..t m(τ) dτ]
⚫ The envelope detector then yields
      so(t) ∝ Ac [2π fc + 2π kf m(t)]
  so so(t) is linear in m(t)
⚫ (Figure: magnitude frequency response of the transformer BPF)

Bandpass Limiter

⚫ A device that imposes hard limiting on a signal and contains a filter that
  suppresses the unwanted products (harmonics) of the limiting process
⚫ Input signal
      vi(t) = A(t) cos θ(t) = A(t) cos(ωc t + kf ∫−∞..t m(a) da)
⚫ Output of the hard limiter
      vo(t) = (4/π) [cos θ(t) − (1/3) cos 3θ(t) + (1/5) cos 5θ(t) − …]
⚫ After the bandpass filter
      eo(t) = (4/π) cos(ωc t + kf ∫−∞..t m(a) da)
⚫ The amplitude variations are removed

Ratio Detector

⚫ Foster–Seeley / phase-shift discriminator
  – uses a double-tuned transformer to convert the instantaneous frequency
    variations of the FM input signal to instantaneous amplitude variations.
    These amplitude variations are rectified to provide a DC output voltage
    that varies in amplitude and polarity with the input signal frequency.
⚫ Ratio detector
  – a modified Foster–Seeley discriminator that does not respond to amplitude
    variations; its output is about 50% of the Foster–Seeley output
Zero-Crossing Detector / FM Demodulation with a PLL

⚫ Phase-locked loop (PLL)
  – A closed-loop feedback control circuit that keeps a signal in a fixed phase
    (and frequency) relation to a reference signal
  – Tracks frequency (or phase) variations of the input, or changes frequency
    (or phase) according to the input
  – A PLL can be used as both FM modulator and demodulator, just as a balanced
    modulator IC can be used for most amplitude modulations and demodulations

PLL

⚫ Remember the following relations
  – Si = A sin(ωc t + θ1(t)),  Sv = Av cos(ωc t + θc(t))
  – Sp = 0.5·A·Av [sin(2ωc t + θ1 + θc) + sin(θ1 − θc)]
  – So = 0.5·A·Av sin(θ1 − θc) ≈ 0.5·A·Av (θ1 − θc) for a small phase error

FM Superheterodyne Receiver

⚫ A radio receiver's main functions
  – Demodulation → recover the message signal
  – Carrier frequency tuning → select the station
  – Filtering → remove noise/interference
  – Amplification → combat transmission power loss
⚫ Superheterodyne receiver
  – Heterodyning: mixing two signals to produce a new frequency
  – A superheterodyne receiver heterodynes the RF signal with a local
    oscillator, converting it to a common IF
  – Invented by E. Armstrong in 1918
Advantage of the Superheterodyne Receiver

⚫ A single block (of circuitry) can hardly achieve all of: selectivity, signal
  quality, and power amplification
⚫ The superheterodyne receiver handles these with different blocks
  – RF blocks: selectivity only
  – IF blocks: filtering for high signal quality, plus amplification, using
    circuits that work at only one constant IF rather than over a large band

FM Broadcasting

⚫ The frequency of an FM broadcast station is usually an exact multiple of
  100 kHz between 87.5 and 108.0 MHz. In most of the Americas and the
  Caribbean, only odd multiples are used.
⚫ fm = 15 kHz, Δf = 75 kHz, β = 5, B = 2(fm + Δf) = 180 kHz
⚫ Pre-emphasis and de-emphasis
  – Random noise has a "triangular" spectral distribution in an FM system, with
    the effect that noise occurs predominantly at the highest frequencies
    within the baseband. This can be offset, to a limited extent, by boosting
    the high frequencies before transmission and reducing them by a
    corresponding amount in the receiver.
⚫ Block diagram and spectrum
⚫ Relation of stereo transmission to monophonic transmission

FM Stereo Multiplexing

⚫ Pilot tone fc = 19 kHz
⚫ (a) Multiplexer in the transmitter of FM stereo
⚫ (b) Demultiplexer in the receiver of FM stereo
⚫ Backward compatible with non-stereo (monophonic) receivers

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 4:
Noise in Communication Systems

Dang Le Khoa
Email: [email protected]
Outline

⚫ Properties of Signals
  – Periodic Waveforms
  – DC Value, Power
  – Energy and Power Waveforms
  – Signal-to-noise Ratio
  – dB, dBm
⚫ Properties of Noise
  – Random Variables
  – Cumulative distribution function (CDF)
  – Probability density function (PDF)
  – Stationarity and Ergodicity
  – Gaussian Distribution

Properties of Signals & Noise

➢ In communication systems, the received waveform is usually separated into two
  parts:
  – Signal: the desired part containing the information
  – Noise: the undesired part
➢ Properties of waveforms include: DC value, root-mean-square (rms) value,
  normalized power, magnitude spectrum, phase spectrum, power spectral density,
  bandwidth, …

Where does noise come from?

⚫ External sources: e.g., atmospheric and galactic noise, interference
⚫ Internal sources: shot noise and thermal noise (generated by the
  communication devices themselves)
  – Thermal noise is caused by the rapid and random motion of electrons within
    a conductor due to thermal agitation; it has a Gaussian distribution with
    zero mean
  – Shot noise: electrons are discrete and do not move in a continuous steady
    flow, so the current fluctuates randomly; it also has a Gaussian
    distribution with zero mean
  – The Gaussian distribution follows from the central limit theorem

Periodic Waveforms

➢ Definition: a waveform w(t) is periodic with period T0 if w(t) = w(t + T0)
  for all t. A sinusoidal waveform of frequency f0 = 1/T0 hertz is periodic.
➢ Theorem: if the waveform involved is periodic, the time-average operator can
  be reduced to
      ⟨w(t)⟩ = (1/T0) ∫a..a+T0 w(t) dt
  where T0 is the period of the waveform and a is an arbitrary real constant,
  which may be taken to be zero.
DC Value

➢ Definition: the DC (direct "current") value of a waveform w(t) is given by
  its time average ⟨w(t)⟩:
      Wdc = ⟨w(t)⟩ = lim T→∞ (1/T) ∫−T/2..T/2 w(t) dt

Power

➢ Definition: let v(t) denote the voltage across a set of circuit terminals,
  and let i(t) denote the current into the terminal. The instantaneous power
  (incremental work divided by incremental time) associated with the circuit is
      p(t) = v(t) i(t)
  Power flows into the circuit when p(t) is positive and out of the circuit
  when p(t) is negative.
➢ The average power is P = ⟨p(t)⟩ = ⟨v(t) i(t)⟩.

Evaluation of DC Value

➢ A 120 V, 60 Hz fluorescent lamp wired in a high-power-factor configuration.
  Assume the voltage and current are both sinusoids and in phase (unity power
  factor).
➢ DC value of the voltage waveform:
      Vdc = ⟨v(t)⟩ = (1/T0) ∫−T0/2..T0/2 V cos(ω0 t) dt = 0
  where ω0 = 2π/T0 and f0 = 1/T0 = 60 Hz. Similarly, Idc = 0.

Evaluation of Power

➢ The instantaneous power is
      p(t) = (V cos ω0 t)(I cos ω0 t) = (VI/2)(1 + cos 2ω0 t)
➢ The average power is
      P = ⟨(VI/2)(1 + cos 2ω0 t)⟩ = (VI/2T0) ∫−T0/2..T0/2 (1 + cos 2ω0 t) dt = VI/2
➢ The maximum instantaneous power is Pmax = VI.
RMS Value and Normalized Power

➢ Definition: the root-mean-square (rms) value of w(t) is
      Wrms = √⟨w²(t)⟩
  RMS value of a sinusoid: Wrms = √⟨(V cos ω0 t)²⟩ = V/√2
➢ Theorem: if a load is resistive (i.e., with unity power factor), the average
  power is
      P = Vrms²/R = Irms²·R
  where R is the value of the resistive load.
➢ In the concept of normalized power, R is assumed to be 1 Ω, although it may
  be another value in the actual circuit. Another way of expressing this
  concept is to say that the power is given on a per-ohm basis. It can also be
  seen that the square root of the normalized power is the rms value.
➢ Definition: the average normalized power is
      P = lim T→∞ (1/T) ∫−T/2..T/2 w²(t) dt
  where w(t) is the voltage or current waveform.

Energy and Power Waveforms

➢ Definition: w(t) is a power waveform if and only if the normalized average
  power P is finite and nonzero (0 < P < ∞).
➢ Definition: the total normalized energy is
      E = lim T→∞ ∫−T/2..T/2 w²(t) dt
➢ Definition: w(t) is an energy waveform if and only if the total normalized
  energy E is finite and nonzero (0 < E < ∞).
➢ If a waveform is classified as one of these types, it cannot be of the other
  type: if w(t) has finite energy, the power averaged over infinite time is
  zero; if the power (averaged over infinite time) is finite and nonzero, the
  energy is infinite.
➢ However, mathematical functions can be found that have both infinite energy
  and infinite power and, consequently, cannot be classified into either of
  these two categories (e.g., w(t) = e^(−t) over all t).
➢ Physically realizable waveforms are of the energy type; we can find a finite
  power for these.
Decibel

➢ A base-10 logarithmic measure of power ratios. The ratio of the power level
  at the output of a circuit to that at the input is often specified by the
  decibel gain instead of the actual ratio.
➢ Decibel measure can be defined in three ways: decibel gain, decibel
  signal-to-noise ratio, and milliwatt decibel (dBm).
➢ Definition: the decibel gain of a circuit is
      dB = 10 log10(Pout / Pin)
➢ If resistive loads are involved, the definition may be reduced to
      dB = 20 log10(Vout,rms / Vin,rms) + 10 log10(Rin / Rout)
  or
      dB = 20 log10(Iout,rms / Iin,rms) + 10 log10(Rout / Rin)

Decibel Signal-to-Noise Ratio (SNR)

➢ Definition: the decibel signal-to-noise ratio is
      (S/N)dB = 10 log10(S / N)
  where the signal power is S = ⟨s²(t)⟩ and the noise power is N = ⟨n²(t)⟩, so
  the definition is equivalent to (S/N)dB = 20 log10(Srms / Nrms).

Decibel with Milliwatt Reference (dBm)

➢ Definition: the decibel power level with respect to 1 mW is
      dBm = 30 + 10 log10(actual power level in watts)
  – The "m" in dBm denotes a milliwatt reference.
  – When a 1 W reference level is used, the decibel level is denoted dBW; when
    a 1 kW reference level is used, it is denoted dBk.
➢ Example: if an antenna receives a signal power of 0.3 W, what is the received
  power level in dBm?
      dBm = 30 + 10 log10(0.3) = 30 − 5.23 = 24.77 dBm
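The dB and dBm definitions above translate directly into code. A minimal Python sketch (function names are mine) reproduces the 0.3 W → 24.77 dBm example:

```python
import math

def watts_to_dbm(p_watts):
    """dBm = 30 + 10*log10(P in watts) = 10*log10(P in milliwatts)."""
    return 30.0 + 10.0 * math.log10(p_watts)

def db_gain(p_out, p_in):
    """Decibel gain of a circuit: 10*log10(Pout/Pin)."""
    return 10.0 * math.log10(p_out / p_in)

level = watts_to_dbm(0.3)   # ~24.77 dBm
```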
Why Probability in Communications?

⚫ Modeling the effects of noise: quantization noise, channel noise, thermal
  noise
⚫ What happens when noise and signal are filtered, mixed, etc.?
⚫ Making the "best" decision at the receiver

Signals

⚫ Two types of signals
  – Deterministic: known with complete certainty
  – Random: highly uncertain, perturbed by noise
⚫ Which contains the most information? Information content is determined by the
  amount of uncertainty and unpredictability; there is no information in
  deterministic signals.

      Information = Uncertainty

⚫ Let x(t) be a radio broadcast. How useful is it if x(t) is already known?
  Noise is ubiquitous.
⚫ [Figure: a noisy input x[n] passed through a filter h(t) to produce y[n]]

Need for Probabilistic Analysis

⚫ Consider a server process
  – e.g., an internet packet switch, HDTV frame decoder, bank teller line,
    instant-messenger video display, IP phone, multitasking operating system,
    hard disk drive controller, etc.
  – Customers arrive at random times into a queue of length L; the server
    handles one customer per τ seconds; when the queue is full, customers are
    rejected.

Relative Frequency

⚫ nA: the number of elements in a set, e.g., the number of times an event A
  occurs in n trials
⚫ Probability is related to the relative frequency:
      f(A) = nA / n                 (relative frequency)
      P(A) = lim n→∞ (nA / n)       (probability)
⚫ For small n the fraction varies a lot; the estimate usually gets better as n
  increases
⚫ 0 ≤ P(A) ≤ 1;  P(A) = 0: never occurs;  P(A) = 1: always occurs
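The convergence of relative frequency to probability can be demonstrated by simulation. A minimal Python sketch (names and the fair-die example are mine, for illustration):

```python
import random

def relative_frequency(event, trials, seed=0):
    """Estimate P(event) as n_A / n over `trials` random draws.
    `event` is a predicate taking a random.Random instance."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if event(rng))
    return hits / trials

# Probability that a fair die shows a six; converges to 1/6 as n grows
p_hat = relative_frequency(lambda rng: rng.randint(1, 6) == 6, 100_000)
```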
Random Variables and the Cumulative Distribution Function

⚫ Definition: a real-valued random variable (RV) is a real-valued function
  defined on the events of the probability system
⚫ Example mapping of events to values:

      Event   Value x   P(x)
      A       3         0.2
      B       −2        0.5
      C       0         0.1
      D       −1        0.2

⚫ The cumulative distribution function (CDF) of the RV x is Fx(a) = P(x ≤ a);
  for this discrete example it is a staircase rising from 0 to 1 with jumps of
  0.5, 0.2, 0.1, and 0.2 at x = −2, −1, 0, and 3.

Probability Density Function

⚫ The probability density function (PDF) of the RV x is fx(x); it shows how
  probability is distributed along the axis:
      fx(x) = dFx(a)/da |a=x

Types of Distributions

⚫ Discrete: M discrete values at x1, x2, x3, …, xM; the CDF is a staircase and
  the PDF is a set of impulses
⚫ Continuous: the RV can take on any value in a defined interval; the CDF is
  continuous and the PDF is an ordinary function
PDF Properties and Calculating Probability

⚫ fx(x) is nonnegative: fx(x) ≥ 0
⚫ The total probability adds up to one:
      ∫−∞..∞ fx(x) dx = Fx(∞) = 1
⚫ To calculate the probability for a range of values:
      P(a < x ≤ b) = P(x ≤ b) − P(x ≤ a) = Fx(b) − Fx(a) = ∫a..b fx(x) dx
  (the probability is the area under the PDF between a and b)

Discrete Random Variables

⚫ Summations are used instead of integrals for a discrete RV, and discrete
  events are represented using delta functions. If x is discretely distributed
  and xi represents a discrete event:
      fx(x) = Σi=1..M P(xi) δ(x − xi)
      Fx(a) = Σ over xi ≤ a of P(xi)

Stationarity and Ergodicity

⚫ Ensemble averaging: the average of the variable taken across the ensemble of
  sample functions of the random process at one fixed time instant (the time
  variable t is held constant)
⚫ Stationary random process: the statistical averages do not change with time
⚫ Time averaging: the property obtained by averaging each sample function over
  time
⚫ Ergodic process: a stationary process in which the ensemble averages equal
  the time averages of the signal


Ensemble Averages and Moments

⚫ The expected value (ensemble average) of y = h(x) is
      ȳ = E[y] = ∫−∞..∞ h(x) fx(x) dx
  and for discrete distributions
      ȳ = E[h(x)] = Σi h(xi) P(xi)
⚫ The rth moment of the RV x about x = x0 is
      E[(x − x0)^r] = ∫−∞..∞ (x − x0)^r fx(x) dx
⚫ Mean: the first moment taken about x0 = 0
      m = x̄ = ∫−∞..∞ x fx(x) dx
⚫ Variance σ²: the second moment taken about the mean
      σ² = E[(x − x̄)²] = ∫−∞..∞ (x − x̄)² fx(x) dx = E[x²] − (x̄)²
⚫ Standard deviation: σ = √σ²

Gaussian Distribution

⚫ The Gaussian distribution, also called the normal distribution, is one of the
  most common and important distributions
⚫ PDF (m is the mean, σ² is the variance):
      fx(x) = (1/(√(2π) σ)) e^(−(x−m)²/(2σ²))
⚫ CDF:
      F(a) = Q((m − a)/σ) = (1/2) erfc((m − a)/(√2 σ))
⚫ Q function (tail probability for m = 0, σ = 1):
      Q(z) = (1/√(2π)) ∫z..∞ e^(−λ²/2) dλ
⚫ Complementary error function and error function:
      erfc(z) = (2/√π) ∫z..∞ e^(−λ²) dλ = 1 − erf(z)
      erf(z) = (2/√π) ∫0..z e^(−λ²) dλ
      Q(z) = (1/2) erfc(z/√2)
Gaussian CDF

⚫ Start with the definition of the CDF:
      F(a) = ∫−∞..a fx(x) dx = (1/(√(2π) σ)) ∫−∞..a e^(−(x−m)²/(2σ²)) dx
⚫ Change variables, y = (m − x)/σ:
      F(a) = (1/√(2π)) ∫(m−a)/σ..∞ e^(−y²/2) dy = Q((m − a)/σ)

Ideal Low-Pass Filtered White Noise

⚫ Suppose white noise is applied to an ideal low-pass filter; the output noise
  power is confined to the filter bandwidth (baseband or passband)
⚫ If the complex (baseband) signal bandwidth is B, the bandwidth of the
  transmitted (passband) signal is 2B

Signal-to-Noise Power Ratio (SNR)

⚫ Energy per bit (Eb), energy per symbol (Es); noise power spectral density
  N0/2
⚫ Since the noise n(t) has a flat power spectral density N0/2, the total noise
  power in the passband bandwidth 2B is N = (N0/2) × 2B = N0·B. Thus:
      SNR = Pr / (N0·B)
⚫ The SNR is often expressed in terms of the energy per bit Eb or per symbol
  Es:
      SNR = Pr / (N0·B) = Es / (N0·B·Ts) = Eb / (N0·B·Tb)
  where Ts is the symbol duration and Tb is the duration of one bit
⚫ With roll-off factor r = 1 (so that B = 1/Ts), we have SNR = Es/N0
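The relation SNR = Eb/(N0·B·Tb) = (Eb/N0)·(Rb/B), since Pr = Eb·Rb, is a one-liner in code. A minimal Python sketch (names are mine, linear quantities, not dB):

```python
def snr_from_ebn0(eb_n0, bit_rate, bandwidth):
    """SNR = Pr/(N0*B) = (Eb/N0) * (Rb/B), all quantities linear (not dB)."""
    return eb_n0 * bit_rate / bandwidth

# When Rb = B (e.g., roll-off r = 1 binary signaling), SNR equals Eb/N0
snr = snr_from_ebn0(4.0, 1e6, 1e6)
```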
Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 5:
Digital Transmission and Multiplexing

Dang Le Khoa
Email: [email protected]
Outline

⚫ Digital transmission
  – Pulse Code Modulation
⚫ Multiplexing
  – Time-Division Multiplexing
  – T1, E1
⚫ Signaling
⚫ TDM Hierarchy

Digital Transmission

⚫ Digital transmission: the transmittal of digital signals between two or more
  points
⚫ Signals: binary or discrete-level digital pulses
⚫ Source: digital or analog forms
⚫ Media: metallic wire, coaxial cable, optical fiber, wireless

⚫ Advantages (review):
  1. Relatively inexpensive digital circuits may be used
  2. Privacy is preserved by using data encryption
  3. Data from voice, video, and data sources may be merged and transmitted
     over a common digital transmission system
  4. In long-distance systems, noise does not accumulate from repeater to
     repeater; data regeneration is possible
  5. Errors in detected data may be small, even when there is a large amount of
     noise on the received signal
  6. Errors may often be corrected by the use of coding
⚫ Disadvantages (review):
  1. Requires more bandwidth
  2. Requires clock-recovery circuits for time synchronization in receivers
  3. Needs additional encoding and decoding circuitry for A/D–D/A conversion
  4. Incompatible with older analog transmission facilities
Pulse Modulation

⚫ Methods of converting information into pulses for transmission:
  1. PWM (Pulse Width Modulation): pulse width is proportional to the amplitude
     of the analog signal
  2. PPM (Pulse Position Modulation): the position of a constant-width pulse
     within a time slot is varied according to the amplitude of the analog
     signal
  3. PAM (Pulse Amplitude Modulation): the amplitude of a constant-width,
     constant-position pulse is varied according to the amplitude of the analog
     signal
  4. PCM (Pulse Code Modulation): the analog signal is sampled and converted to
     a fixed-length binary code according to its amplitude
⚫ PCM is the most popular method in communication systems, mainly in the PSTN

Pulse Code Modulation

⚫ PCM is not really a modulation form, but rather a form of source coding
⚫ Pulses are of fixed length and fixed amplitude: a pulse represents logic 1,
  the lack of a pulse logic 0
⚫ An integrated circuit performing PCM encoding and decoding is called a codec
⚫ Block diagram of a single-channel, simplex PCM system:
  – Transmitter: analog input → bandpass filter → sample-and-hold (PAM, driven
    by the sample clock) → quantizer and A/D converter (conversion clock) →
    parallel-to-serial converter (line-speed clock) → serial PCM code →
    regenerative repeaters → channel
  – Receiver: serial PCM code → serial-to-parallel converter → D/A converter →
    hold circuit (PAM) → low-pass filter → analog output
Filtering

⚫ Most of the energy of spoken language lies somewhere between 200 or 300 Hz
  and about 3300 or 3400 Hz, so B ≈ 3 kHz
⚫ A general decision has been made to use B = 4 kHz

Filtering (cont.)

⚫ This band-limiting filter is used to prevent aliasing distortion, and is thus
  called an antialiasing filter
⚫ Aliasing distortion would occur in the sampling step of the PCM process when
      fs < 2B

Encoding

⚫ Binary codes used for PCM are n-bit codes; a 3-bit sign-magnitude PCM code:

      Sign  Magnitude   Decimal   Level   Code
      1     11          +3        +3 V    111
      1     10          +2        +2 V    110
      1     01          +1        +1 V    101
      1     00           0         0 V    100
      0     00           0         0 V    000
      0     01          −1        −1 V    001
      0     10          −2        −2 V    010
      0     11          −3        −3 V    011

⚫ Overload distortion: occurs when the input exceeds the coding range
⚫ Resolution: the minimum step size of the ADC (Vlsb)
⚫ Quantization error (Qe), also called quantization noise (Qn):
      Qe = Vlsb / 2
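The resolution and worst-case quantization error follow directly from the reference range and word length. A minimal Python sketch (function names are mine), shown for the ±5 V, 8-bit case used later in the example:

```python
def lsb_step(v_ref_minus, v_ref_plus, n_bits):
    """Resolution (minimum step size Vlsb) of an n-bit ADC over its full range."""
    return (v_ref_plus - v_ref_minus) / (2 ** n_bits)

def max_quant_error(v_ref_minus, v_ref_plus, n_bits):
    """Worst-case quantization error Qe = Vlsb / 2 (rounding quantizer)."""
    return lsb_step(v_ref_minus, v_ref_plus, n_bits) / 2.0

vlsb = lsb_step(-5.0, 5.0, 8)   # 10 V / 256 = 0.0390625 V
```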
Encoding (cont.)

⚫ Increasing the sampling rate represents the PAM signal more precisely in
  time, but does not reduce the quantization error; only more bits per sample
  (a smaller Vlsb) reduces it

Example

An analog signal whose highest frequency does not exceed 4 kHz is applied to an
8-bit ADC.
⚫ a. Determine the cutoff frequency of the antialiasing low-pass filter.
⚫ b. If the analog signal is 1 V and the ADC uses Vref− = −5 V and
  Vref+ = +5 V, determine the output (the bit sequence after PCM).

Solution:
a) fcut = 4 kHz
b) The code word has 8 bits (1 sign bit, 7 data bits). Sign bit: 1; data bits:
   (1/5) × (2^7 − 1) ≈ 25 = 001 1001
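The sign-magnitude encoding used in the solution can be sketched in a few lines of Python. The function name and the exact scaling convention (magnitude = |v|/Vref scaled to 2^(n−1) − 1, truncated) are assumptions for illustration, chosen to match the slide's arithmetic:

```python
def sign_magnitude_pcm(v, v_ref=5.0, n_bits=8):
    """Encode voltage v as 1 sign bit + (n_bits - 1) magnitude bits
    (slide convention: sign bit 1 for non-negative inputs)."""
    sign = '1' if v >= 0 else '0'
    mag = int(abs(v) / v_ref * (2 ** (n_bits - 1) - 1))   # truncate, as in the slide
    return sign + format(mag, '0{}b'.format(n_bits - 1))

code = sign_magnitude_pcm(1.0)   # '1' + '0011001' for the 1 V example
```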



Multiplexing

⚫ Multiplexing is the transmission of information (in any form) from more than
  one source to more than one destination over the same transmission medium
⚫ Transmission media: metallic wire pair, coaxial cable, terrestrial microwave
  radio system, satellite microwave system, optical fiber cable
⚫ Four popular methods:
  – Time-division multiplexing (TDM)
  – Frequency-division multiplexing (FDM)
  – Code-division multiplexing (CDM)
  – Wavelength-division multiplexing (WDM)

FDM and TDM

⚫ Both methods divide the capacity of the transmission facility into slots
⚫ FDM: channels occupy frequency slots ΔF1, ΔF2, …, separated by guard bands
⚫ TDM: channels occupy time slots Δt1, Δt2, …, separated by guard times
Time-Division Multiplexing

⚫ Transmissions from multiple sources are interleaved in the time domain
⚫ PCM is the most common type of modulation used with TDM (PCM-TDM systems)
⚫ In a PCM-TDM system, two or more voice-band channels are sampled, converted
  to PCM codes, and then time-division multiplexed onto a single medium
⚫ In TDM, each source device occupies a subset of the transmission bandwidth
  for a slice of time (a time slot); after the last source, the multiplexer
  returns to the first source device and the process continues
⚫ The basic building block begins with a DS-0 channel
⚫ Output bit rate of one DS-0: 8000 samples/sec × 8 bits/sample = 64 kbps
Time-Division Multiplexing (cont.)

⚫ A PCM-TDM system made of two DS-0 channels: channels 1 and 2 are alternately
  selected and connected to the multiplexer output; the PCM code for each
  channel occupies a fixed time slot within the TDM frame
⚫ Line speed (bandwidth) at the output of the multiplexer:
      2 channels/frame × 8000 samples/sec × 8 bits/sample = 128 kbps
⚫ Frame time is the time taken to transmit one sample from each channel:
      1/fs = 1/8000 = 125 μs

Time-Division Multiplexing (cont.)

⚫ An n-channel system: line speed (bandwidth) at the output of the multiplexer
  is
      n channels/frame × 8000 samples/sec × 8 bits/sample = n × 64 kbps
  Example: n = 32 → 256 bits/frame, B = 2.048 Mbps, clock = 2.048 MHz

Digital Carrier System

⚫ A digital carrier system is a communication system using digital pulses,
  rather than analog signals, to encode information
⚫ The Bell System (AT&T) T1 carrier system is the North American telephone
  standard, recognized by the CCITT (ITU-T) as G.733
⚫ A T1 carrier uses PCM-TDM: it multiplexes 24 voice-band channels into one
  frame; each frame is delimited by a framing bit
⚫ The 24-channel multiplexed bit stream, now a first-level digital signal
  (DS-1), is processed by a T1 line driver
Digital Carrier System — T1 Physical Layout

⚫ A T1 time-division multiplexing scheme: 24 time slots per frame carry voice
  (V) or data (D) samples — e.g., telephone, fax, digital data sources — each
  preceded by the framing bit (F)
⚫ A T1 is physically made up of two balanced pairs of copper wire (commonly
  known as twisted pair). The pairs are used in a full-duplex configuration
  where one pair transmits information and the other pair receives information.
  Customer Premises Equipment (CPE) typically terminates a T1 with an RJ-48C
  jack.

Digital Carrier System (cont.)

⚫ Each channel contains an 8-bit PCM code and is sampled 8000 times per second;
  each channel is sampled once per frame, but not at the same time
⚫ Transmit line speed (payload):
      24 channels/frame × 8 bits/sample = 192 bits per frame
      192 bits/frame × 8000 frames/sec = 1,536,000 bps = 1.536 Mbps
Frame Synchronization

⚫ An additional bit (the framing bit) is added in front of the transmitted bits
  of each frame
⚫ The framing bit occurs once per frame (an 8 kbps rate) and is recovered at
  the receiver
⚫ The framing bits collected at the receiver (the framing channel) are used to
  maintain frame and sample synchronization between the TDM transmitter and
  receiver
⚫ The collected framing bits form a code that the receiver understands
  (multiframe alignment, discussed later)
⚫ (Figure: Bell System T1 digital carrier system)
⚫ A frame contains 192 + 1 = 193 bits, and there are 8000 framing bits per
  second, so the line speed for a T1 digital carrier system is
      193 bits/frame × 8000 frames/sec = 1.544 Mbps
      = 1.536 Mbps + 8000 bps = 1.544 Mbps
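The T1 and E1 rate arithmetic above is easy to verify in code. A minimal Python sketch (function names are mine):

```python
def t1_line_rate(channels=24, bits_per_sample=8, frame_rate=8000, framing_bits=1):
    """T1: (24*8 + 1 framing bit) bits/frame * 8000 frames/s = 1.544 Mbps."""
    return (channels * bits_per_sample + framing_bits) * frame_rate

def e1_line_rate(slots=32, bits_per_slot=8, frame_rate=8000):
    """E1: 32 slots * 8 bits * 8000 frames/s = 2.048 Mbps (no extra framing bit;
    framing lives inside time slot 0)."""
    return slots * bits_per_slot * frame_rate
```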
D1-Type Channel Banks

⚫ D1 framing (early T1 version)
⚫ For each channel, the LSB is used as the S-bit (signaling bit → in-band
  signaling): 7 bits for voice, 1 bit for signaling
⚫ The S-bit is used for flash-hook, on-hook, off-hook, dial pulsing, ringing, …
⚫ Signaling channel rate = 8 kbps (a waste of bandwidth); voice = 56 kbps
  (voice quality suffers: 128 quantization steps vs. 256 with 8 bits)
⚫ Framing bit pattern = 1 0 1 0 1 0 1 0 1 … (poor; no SF, no ESF)

E1 Carrier System

⚫ The European standard uses 32 time slots (channels) per frame:
      B = n × 64 kbps, bits per frame = 8n
      system clock = n channels/frame × 8 bits/channel × 8000 frames/sec
  Example: n = 32 → B = 2.048 Mbps, 256 bits per frame, clock = 2.048 MHz
⚫ E1 does not support bit robbing for signaling, as T1 does
⚫ One frame has 32 time slots (TS0–TS31); a multiframe consists of 16 frames
  (0–15)
⚫ TS0 is used for:
  – Synchronization
  – Alarm transport
  – International carrier use
⚫ TS16 is used to transmit CAS (Channel Associated Signaling) information

What is Signaling?

⚫ Consider the procedure of connection setup for a local call (e.g., dialing
  7305315): A goes off-hook and receives dial tone; A dials the digits; the
  exchange applies ringing tone toward B and ring-back tone toward A; B goes
  off-hook; conversation; both parties go on-hook.
⚫ For a trunk call, the exchanges additionally exchange trunk signaling: seize
  and seize-acknowledge, the dialed digits, ringing/ring-back, answer
  (off-hook), and clear-back / clear-forward on release.
TDM Hierarchy

⚫ North American and Japanese standards (AT&T): 24 DS-0 (64 kbps PCM) channels
  → first-level multiplexer (DS-1, 1.544 Mbps); 4 DS-1 → second level (DS-2,
  6.312 Mbps); 7 DS-2 → third level (DS-3, 44.736 Mbps); 6 DS-3 → fourth level
  (DS-4, 274.176 Mbps); 2 DS-4 → fifth level (DS-5, 560.160 Mbps)

⚫ TDM standards for North America:

      Digital Signal   Bit rate R (Mbps)   64-kbps channels   Transmission media
      DS-0             0.064               1                  Wire pairs
      DS-1             1.544               24                 Wire pairs
      DS-2             6.312               96                 Wire pairs, fiber
      DS-3             44.736              672                Coax, radio, fiber
      DS-4E            139.264             2016               Coax, radio, fiber
      DS-4             274.176             4032               Coax, fiber
      DS-5             560.160             8064               Coax, fiber
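The hierarchy's channel counts can be chained numerically. The sketch below (names are mine) computes only the payload rates; the tabulated DS-n rates are slightly higher because each multiplexing level adds framing and stuffing overhead:

```python
def mux_chain(base_rate, tributaries_per_stage):
    """Cumulative payload rate at each hierarchy level (overhead excluded)."""
    rates, rate = [], base_rate
    for n in tributaries_per_stage:
        rate *= n
        rates.append(rate)
    return rates

# North American payload: 64 kbps * 24 = 1.536 Mbps (DS-1 is 1.544 with framing)
payload = mux_chain(64_000, [24, 4, 7, 6, 2])
```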

TDM Hierarchy (cont.)

⚫ European standards: 30 E0 channels → first-level multiplexer (E1,
  2.048 Mbps); 4 E1 → second level (E2, 8.448 Mbps); 4 E2 → third level (E3,
  34.368 Mbps); 4 E3 → fourth level (E4, 139.264 Mbps); 4 E4 → fifth level
  (565.148 Mbps)

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 6:
Basic Digital Modulations

Dang Le Khoa
Email: [email protected]
Outline

– ASK, OOK, MASK
– FSK, MFSK
– BPSK, DBPSK, MPSK
– MQAM
– OQPSK
– Bit error rate

ASK, OOK, MASK

⚫ The amplitude (or height) of the sine wave varies to transmit the ones and
  zeros
⚫ One amplitude encodes a 0 while another amplitude encodes a 1 (a form of
  amplitude modulation)

On-Off Keying

⚫ Signal set:
      si(t) = √(2Ei/T) cos(ωc t + φ),  i = 1, …, M
  On-off keying (M = 2):
      si(t) = ai ψ1(t),  where ψ1(t) = √(2/T) cos(ωc t + φ) and ai = √Ei
  "0" maps to s2 = 0 and "1" to s1 = √E1 along the basis function ψ1(t)

Ideal Nyquist Pulse (Filter)

⚫ Ideal Nyquist filter: H(f) is a rectangle of height T over |f| ≤ 1/(2T), with
  bandwidth W = 1/(2T)
⚫ Ideal Nyquist pulse: h(t) = sinc(t/T), with zero crossings at
  t = ±T, ±2T, …
The Raised-Cosine Filter

⚫ A Nyquist pulse (no ISI at the sampling times):

      H(f) = 1                                           for |f| < 2W0 − W
      H(f) = cos²[ (π/4) · (|f| + W − 2W0)/(W − W0) ]    for 2W0 − W < |f| < W
      H(f) = 0                                           for |f| > W

      h(t) = 2W0 sinc(2W0 t) · cos[2π(W − W0)t] / (1 − [4(W − W0)t]²)

⚫ Excess bandwidth: W − W0; roll-off factor r = (W − W0)/W0, with 0 ≤ r ≤ 1

The raised cosine filter – cont'd
[Figure: |H(f)| = |H_RC(f)| and h(t) = h_RC(t) for roll-off factors r = 0, 0.5, 1; the spectrum widens from ±1/(2T) to ±1/T as r goes from 0 to 1, while the pulse tails decay faster]

Binary amplitude shift keying – bandwidth
⚫ d ≥ 0 → related to the condition of the line
⚫ Baseband: W_SSB = (1 + r)·Rs/2. Passband: W_DSB = (1 + r)·Rs, i.e. B_DSB = (1 + r)·Rbaud = (1 + r)·Rb for binary ASK
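The zero-ISI property of the raised-cosine pulse above can be checked numerically. A minimal Python sketch (not from the slides; it uses W0 = 1/(2T) and r = (W − W0)/W0 exactly as defined above):

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine(t, T=1.0, r=0.5):
    """Raised-cosine pulse with W0 = 1/(2T):
    h(t) = (1/T) sinc(t/T) cos(pi r t/T) / (1 - (2 r t/T)^2)."""
    x = t / T
    denom = 1.0 - (2.0 * r * x) ** 2
    if abs(denom) < 1e-12:
        # removable singularity at t = +-T/(2r); use the limiting value
        return (math.pi / (4.0 * T)) * sinc(1.0 / (2.0 * r))
    return (1.0 / T) * sinc(x) * math.cos(math.pi * r * x) / denom

# No ISI: h(kT) = 0 for every integer k != 0, while h(0) = 1/T
samples = [raised_cosine(k * 1.0) for k in range(-3, 4)]
```

Sampling at multiples of T confirms the Nyquist property for any roll-off in [0, 1].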
Implementation of binary ASK

OOK and MASK
⚫ OOK (on-off keying): a "0" is sent as silence
 – Sensor networks: long battery life, simple implementation
⚫ MASK: multiple amplitude levels

Pro, con and applications
⚫ Pro
 – Simple implementation
⚫ Con
 – Major disadvantage: telephone lines are very susceptible to variations in transmission quality, which affect the amplitude
 – Susceptible to sudden gain changes
 – Inefficient modulation technique for data
⚫ Applications
 – On voice-grade lines, used up to 1200 bps
 – Used to transmit digital data over optical fiber (laser transmitters)
 – Morse code

Example
⚫ We have an available bandwidth of 100 kHz spanning 200 to 300 kHz. What are the carrier frequency and the bit rate if we modulate our data using binary ASK with r = 1?
⚫ Solution
 – The middle of the band is at 250 kHz, so the carrier frequency can be fc = 250 kHz.
 – Using the bandwidth formula with n = 1 and r = 1: B = (1 + r)·Rbaud = 2·Rb = 100 kHz ⇒ Rb = 50 kbps
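The example's arithmetic can be written as a small Python sketch (hypothetical helper names, not from the slides):

```python
def carrier_frequency(f_low_hz, f_high_hz):
    # carrier placed at the middle of the available band
    return (f_low_hz + f_high_hz) / 2

def ask_bit_rate(bandwidth_hz, r):
    # binary ASK: B = (1 + r) * Rbaud, and Rb = Rbaud since n = 1
    return bandwidth_hz / (1 + r)

fc = carrier_frequency(200e3, 300e3)  # 250 kHz
rb = ask_bit_rate(100e3, r=1)         # 50 kbps
```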
Frequency shift keying (FSK)
⚫ One frequency encodes a 0 while another frequency encodes a 1 (a form of frequency modulation)
⚫ Represent each logical value with another frequency (like FM):
 s(t) = A cos(2πf1·t) for binary 1
 s(t) = A cos(2πf2·t) for binary 0

FSK bandwidth
⚫ Limiting factor: physical capabilities of the carrier
⚫ Not as susceptible to noise as ASK
⚫ Applications
 – On voice-grade lines, used up to 1200 bps
 – Used for high-frequency (3 to 30 MHz) radio transmission
 – Used at higher frequencies on LANs that use coaxial cable
Example
⚫ We have an available bandwidth of 100 kHz spanning 200 to 300 kHz. What should the carrier frequency and the bit rate be if we modulate our data using FSK with r = 1?
⚫ Solution
 – This problem is similar to Example 5.3, but we are modulating by using FSK. The midpoint of the band is at 250 kHz, so fc = 250 kHz. Choosing 2Δf = 50 kHz:
 B = (1 + r)·Rbaud + 2Δf = 100 kHz ⇒ 2·Rb = 2·Rbaud = 50 kHz ⇒ Rb = 25 kbps

Multiple frequency-shift keying (MFSK)
⚫ More than two frequencies are used
⚫ More bandwidth efficient but more susceptible to error
 si(t) = A cos(2πfi·t), 1 ≤ i ≤ M
 fi = fc + (2i − 1 − M)·fd
 fc = the carrier frequency
 fd = the difference frequency
 M = number of different signal elements = 2^n
 n = number of bits per signal element
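The MFSK tone placement fi = fc + (2i − 1 − M)·fd can be sketched directly in Python (illustrative values; fc and fd below are assumptions, not from the slides):

```python
def mfsk_frequencies(fc_hz, fd_hz, M):
    """Tone frequencies f_i = fc + (2i - 1 - M) * fd for i = 1..M."""
    return [fc_hz + (2 * i - 1 - M) * fd_hz for i in range(1, M + 1)]

tones = mfsk_frequencies(fc_hz=250e3, fd_hz=25e3, M=4)
# tones are symmetric about fc and spaced 2*fd = 50 kHz apart
```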
FSK detection

Phase shift keying (PSK)
⚫ One phase change encodes a 0 while another phase change encodes a 1 (a form of phase modulation):
 s(t) = A cos(2πfc·t) for binary 1
 s(t) = A cos(2πfc·t + π) for binary 0
DBPSK, QPSK
⚫ Differential BPSK
 – 0 = same phase as last signal element
 – 1 = 180° shift from last signal element
⚫ Four-level: QPSK
 s(t) = A cos(2πfc·t + π/4)   for 11
 s(t) = A cos(2πfc·t + 3π/4)  for 01
 s(t) = A cos(2πfc·t − 3π/4)  for 00
 s(t) = A cos(2πfc·t − π/4)   for 10

QPSK example
Bandwidth
⚫ Minimum bandwidth requirement: same as ASK!
⚫ Self-clocking (in most cases)
⚫ B_DSB = (1 + r)·Rbaud

Example
⚫ We have an available bandwidth of 100 kHz spanning 200 to 300 kHz. What are the carrier frequency and the bit rate if we modulate our data using QPSK with r = 1?
⚫ Solution
 – The middle of the band is at 250 kHz, so the carrier frequency can be fc = 250 kHz. Using the bandwidth formula with n = 2 and r = 1:
 B = (1 + r)·Rbaud = 2·Rbaud = 100 kHz ⇒ Rbaud = 50 ksps ⇒ Rb = n·Rbaud = 100 kbps

Concept of a constellation diagram

MPSK
⚫ Using multiple phase angles, each possibly with more than one amplitude, multiple signal elements can be achieved:
 Rbaud = R/n = R/log2 M
 – Rbaud = modulation rate, baud
 – R = data rate, bps
 – M = number of different signal elements = 2^n
 – n = number of bits per signal element
QAM – quadrature amplitude modulation
⚫ Modulation technique used in the cable/video networking world
⚫ Instead of a single signal change representing only 1 bit, multiple bits can be represented by a single signal change
⚫ Combination of phase shifting and amplitude shifting (e.g. 8 phases, 2 amplitudes)
⚫ As an example of QAM, 12 different phases are combined with two different amplitudes
 – Since only 4 of the phase angles take both amplitudes, there are a total of 16 combinations
 – With 16 signal combinations, each baud carries 4 bits of information (2^4 = 16)
 – Combine ASK and PSK such that each signal corresponds to multiple bits
 – More phases than amplitudes
 – Minimum bandwidth requirement same as ASK or PSK
⚫ QAM is a combination of ASK and PSK
 – Two different signals are sent simultaneously on the same carrier frequency:
 s(t) = d1(t) cos(2πfc·t) + d2(t) sin(2πfc·t)
 – M = 4, 16, 32, 64, 128, 256
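The quadrature form s(t) = d1(t)cos(2πfc·t) + d2(t)sin(2πfc·t) and a square 16-QAM alphabet can be sketched in Python (the ±1/±3 level set is a common illustrative choice, not taken from the slides):

```python
import math

def qam_signal(d1, d2, fc_hz, t):
    """One QAM sample: s(t) = d1*cos(2*pi*fc*t) + d2*sin(2*pi*fc*t)."""
    w = 2 * math.pi * fc_hz * t
    return d1 * math.cos(w) + d2 * math.sin(w)

# 16-QAM: 4 amplitude levels per quadrature component -> 16 points, 4 bits/symbol
levels = (-3, -1, 1, 3)
constellation = [(a, b) for a in levels for b in levels]
bits_per_symbol = math.log2(len(constellation))
```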


Bit error probability
⚫ Model: the data d(i) passes through the transmit filter gTx(t), channel noise na(t) is added, and the receive filter gRx(t) produces the sampled output r(iT) = d(i) + n(iT)
⚫ We assume:
 – binary transmission with d(i) ∈ {d0, d1}
 – the transmission system fulfills the 1st Nyquist criterion
 – noise n(iT), with probability density function (pdf) pN(n) of given mean and variance, independent of the data source

Conditional pdfs
⚫ The transmission system induces two conditional pdfs depending on d(i):
 – if d(i) = d0: p0(x) = pN(x − d0)
 – if d(i) = d1: p1(x) = pN(x − d1)
Example of samples of matched filter output for some bandpass modulation schemes

Figure 5.8: Illustrating the partitioning of the observation space into decision regions for the case when N = 2 and M = 4; it is assumed that the M transmitted symbols are equally likely.
Probability of wrong decisions
⚫ Placing a threshold S between d0 and d1, the probabilities of wrong decision are
 Q0 = ∫ from S to ∞ of p0(x) dx,  Q1 = ∫ from −∞ to S of p1(x) dx
⚫ With P0 and P1 the a-priori probabilities of d0 and d1, the bit error probability is
 Pb = P0·Q0 + P1·Q1
⚫ For equal a-priori probabilities (P0 = P1 = 1/2):
 Pb = (1/2) ∫ from S to ∞ of p0(x) dx + (1/2) ∫ from −∞ to S of p1(x) dx
    = (1/2) [ 1 + ∫ from −∞ to S of p1(x) dx − ∫ from −∞ to S of p0(x) dx ]

Conditions for illustrative solution
⚫ Assume symmetric noise, pN(−x) = pN(x), with P1 = P0 = 1/2 and threshold S = (d0 + d1)/2
⚫ Substituting x' = x − d1:
 ∫ from −∞ to S of p1(x) dx = ∫ from −∞ to S of pN(x − d1) dx = ∫ from −∞ to (d0 − d1)/2 of pN(x') dx' = 1/2 − ∫ from 0 to (d1 − d0)/2 of pN(x') dx'
⚫ Equivalently, ∫ from −∞ to S of p0(x) dx = 1/2 + ∫ from 0 to (d1 − d0)/2 of pN(x') dx'
⚫ Hence:
 Pb = (1/2) [ 1 − 2 ∫ from 0 to (d1 − d0)/2 of pN(x) dx ]
Special case: Gaussian distributed noise
⚫ Motivation: many independent interferers; central limit theorem → Gaussian distribution
 pN(n) = 1/sqrt(2π·σN^2) · exp(−n^2/(2σN^2))
 Pb = (1/2) [ 1 − 2/sqrt(2π·σN^2) ∫ from 0 to (d1 − d0)/2 of exp(−x^2/(2σN^2)) dx ]
⚫ The integral has no closed-form solution

Error function and its complement
⚫ Definitions:
 erf(x) = 2/sqrt(π) ∫ from 0 to x of exp(−u^2) du,  erfc(x) = 1 − erf(x)
⚫ In Matlab, the Q function can be computed as: function y = Q(x); y = 0.5*erfc(x/sqrt(2));
[Figure: erf(x) and erfc(x) for −3 ≤ x ≤ 3]
Bit error rate with the error function complement
 Pb = (1/2) [ 1 − 2/sqrt(2π·σN^2) ∫ from 0 to (d1 − d0)/2 of exp(−x^2/(2σN^2)) dx ] = (1/2) erfc( (d1 − d0)/(2·sqrt(2)·σN) )

⚫ Expressions with ES and N0:
 – Antipodal: d1 = +d, d0 = −d:
 Pb = (1/2) erfc( d/(sqrt(2)·σN) ) = (1/2) erfc( sqrt(d^2/(2σN^2)) ) = (1/2) erfc( sqrt(SNR/2) ), with SNR = d^2/σN^2 = ES/(N0/2) for a matched filter
 ⇒ Pb = (1/2) erfc( sqrt(ES/N0) )
 – Unipolar: d1 = +d, d0 = 0:
 Pb = (1/2) erfc( d/(2·sqrt(2)·σN) ) = (1/2) erfc( sqrt(d^2/(8σN^2)) ) = (1/2) erfc( sqrt(SNR/4) ), with SNR = (d^2/2)/σN^2 = ES/(N0/2) for a matched filter
 ⇒ Pb = (1/2) erfc( sqrt(ES/(2N0)) )
⚫ Equivalently expressed with the Q function, Q(x) = (1/2) erfc(x/sqrt(2))

Bit error rate for unipolar and antipodal transmission
⚫ BER vs. SNR
[Figure: BER versus ES/N0 in dB, theoretical curves and simulation points; the antipodal curve lies about 3 dB to the left of the unipolar curve]
Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 7: Introduction to Information Theory

Dang Le Khoa
Email: [email protected]

Outline
⚫ Basic concepts
⚫ Information
⚫ Entropy
⚫ Source Encoding

Basic concepts
⚫ Block diagram of a digital communication system: source encoder → channel encoder → modulator → noisy channel → demodulator → channel decoder → source decoder → destination

What is Information Theory?
⚫ Information theory provides a quantitative measure of source information and of the information capacity of a channel
⚫ It deals with coding as a means of utilizing channel capacity for information transfer
⚫ Shannon's coding theorem: "If the rate of information from a source does not exceed the capacity of a communication channel, then there exists a coding technique such that the information can be transmitted over the channel with an arbitrarily small probability of error, despite the presence of noise"

Information measure
⚫ Information theory asks: how much information…
 – is contained in a signal?
 – can a system generate?
 – can a channel transmit?
⚫ Information is the commodity produced by the source for transfer to some user at the destination
⚫ How is information mathematically defined? Consider the three possible results of a football match: win, draw, loss

Case | Event | Information | Probability
1 | Barca wins | No information | ≈1, quite sure
2 | Barca draws with GĐT-LA | More information | Relatively low
3 | Barca loses | A vast amount of information | Very low probability of occurrence in a typical situation

⚫ The less likely the message, the more information it conveys
Information
⚫ Let xj be an event with probability p(xj) of being selected for transmission. If xj occurs, we gain
 I(xj) = log_a(1/p(xj)) = −log_a p(xj)  (1)  units of information
⚫ I(xj) is called self-information:
 – I(xj) ≥ 0 for 0 ≤ p(xj) ≤ 1
 – I(xj) → 0 for p(xj) → 1
 – I(xj) > I(xi) for p(xj) < p(xi)
⚫ The base of the logarithm sets the unit:
 – base 10 → the measure of information is the hartley
 – base e → the nat
 – base 2 → the bit
⚫ Example: a random experiment with 16 equally likely outcomes:
 – The information associated with each outcome is I(xj) = −log2(1/16) = log2 16 = 4 bits
 – The information is greater than one bit, since the probability of each outcome is much less than 1/2

Entropy and information rate
⚫ Consider an information source emitting a sequence of symbols from the set X = {x1, x2, …, xM}
⚫ Each symbol xi is treated as a message with probability p(xi) and self-information I(xi)
⚫ The source has an average rate of r symbols/sec (discrete memoryless source)
⚫ The amount of information produced by the source during an arbitrary symbol interval is a discrete random variable X; the average information per symbol is the entropy:
 H(X) = E{I(xj)} = −Σ from j=1 to M of p(xj) log2 p(xj)  bit/symbol  (2)
⚫ Entropy = information = uncertainty
⚫ If a signal is completely predictable, it has zero entropy and carries no information
⚫ Entropy = average number of bits required to transmit the signal

Example
⚫ Random variable with uniform distribution over 32 outcomes:
 – H(X) = −Σ (1/32)·log2(1/32) = log2 32 = 5
 – Number of bits required = log2 32 = 5 bits → H(X) equals the number of bits required to represent the random event
⚫ How many bits are needed for the outcome of a coin toss?
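The entropy formula (2) is straightforward to compute. A minimal Python sketch covering the examples above:

```python
import math

def entropy(probs):
    """H(X) = -sum p*log2(p) in bits/symbol; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_uniform32 = entropy([1 / 32] * 32)  # 5 bits: 32 equally likely outcomes
h_coin = entropy([0.5, 0.5])          # 1 bit: fair coin toss
h_binary = entropy([0.9, 0.1])        # A/B source used later, ~0.469 bits
```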
Entropy
⚫ The value of H(X) for a given source depends upon the symbol probabilities p(xi) and M. In all cases,
 0 ≤ H(X) ≤ log2 M  (3)
⚫ The lower bound corresponds to no uncertainty; the upper bound corresponds to maximum uncertainty, occurring when all symbols are equally likely
⚫ The proof of this inequality is shown in [2], Chapter 15

Example
⚫ For a binary source (M = 2), let p(1) = α and p(0) = 1 − α
⚫ From (2), the binary entropy is
 H(X) = −α·log2 α − (1 − α)·log2(1 − α)

Source coding theorem
⚫ Information from a source producing different symbols can be described by the entropy H(X)
⚫ Source information rate: Rs = r·H(X) (bit/s)
 – H(X): source entropy (bits/symbol)
 – r: symbol rate (symbols/s)
⚫ Assume this source is the input to a channel with:
 – C: capacity (bits/symbol)
 – S: available symbol rate (symbols/s)
 – S·C: bits/s

Source coding theorem (cont'd)
⚫ Shannon's first theorem (noiseless coding theorem): "Given a channel and a source that generates information at a rate less than the channel capacity, it is possible to encode the source output in such a manner that it can be transmitted through the channel"
⚫ Demonstration of source encoding by an example:
 Discrete binary source → source encoder → binary channel
 Source symbol rate r = 3.5 symbols/s; channel: C = 1 bit/symbol, S = 2 symbols/s, S·C = 2 bits/s
Example of source encoding
⚫ Discrete binary source: A (p = 0.9), B (p = 0.1)
⚫ Source symbol rate (3.5) > channel symbol rate (2) → source symbols cannot be transmitted directly
⚫ Check Shannon's theorem:
 – H(X) = −0.1·log2 0.1 − 0.9·log2 0.9 = 0.469 bits/symbol
 – Rs = r·H(X) = 3.5 × 0.469 = 1.642 bits/s < S·C = 2 bits/s
⚫ Transmission is therefore possible if source encoding decreases the average symbol rate

Example of source encoding (cont'd)
⚫ Codewords are assigned to n-symbol groups of source symbols
⚫ Rules:
 – Shortest codeword for the most probable group
 – Longest codeword for the least probable group
⚫ The n-symbol groups of source symbols form the nth-order extension of the original source

First-order extension

Source symbol | P(·) | Codeword | [P(·)]·[number of code symbols]
A | 0.9 | 0 | 0.9
B | 0.1 | 1 | 0.1
 L̄ = 1.0

⚫ Symbol rate of the encoder = symbol rate of the source → larger than the channel can accommodate

Second-order extension
Grouping 2 source symbols at a time:

Source symbol | P(·) | Codeword | [P(·)]·[number of code symbols]
AA | 0.81 | 0 | 0.81
AB | 0.09 | 10 | 0.18
BA | 0.09 | 110 | 0.27
BB | 0.01 | 111 | 0.03
 L̄ = 1.29

 L̄ = Σ from i=1 to 2^n of p(xi)·li, where L̄ is the average code length, p(xi) the probability of the ith symbol of the extended source, and li the length of the corresponding codeword
Second-order extension (cont'd)
 L̄/n = 1.29/2 = 0.645 code symbols/source symbol
The symbol rate at the encoder output: r·L̄/n = 3.5 × 0.645 = 2.258 code symbols/sec > 2
Still greater than the 2 symbols/second of the channel, so we continue with the third-order extension.

Third-order extension
Grouping 3 source symbols at a time:

Source symbol | P(·) | Codeword | [P(·)]·[number of code symbols]
AAA | 0.729 | 0 | 0.729
AAB | 0.081 | 100 | 0.243
ABA | 0.081 | 101 | 0.243
BAA | 0.081 | 110 | 0.243
ABB | 0.009 | 11100 | 0.045
BAB | 0.009 | 11101 | 0.045
BBA | 0.009 | 11110 | 0.045
BBB | 0.001 | 11111 | 0.005
 L̄ = 1.598
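The average-length bookkeeping for the second-order extension can be sketched in Python directly from the table above:

```python
# second-order extension code taken from the table above: symbol -> (probability, codeword)
code2 = {"AA": (0.81, "0"), "AB": (0.09, "10"), "BA": (0.09, "110"), "BB": (0.01, "111")}

def average_length(code):
    # L = sum over extended-source symbols of p(x_i) * l_i
    return sum(p * len(word) for p, word in code.values())

L2 = average_length(code2)      # 1.29 code symbols per 2-symbol group
encoder_rate = 3.5 * L2 / 2     # 2.2575 code symbols/s, still above the channel's 2
```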

Third-order extension (cont'd)
 L̄/n = 1.598/3 = 0.533 code symbols/source symbol
The symbol rate at the encoder output: r·L̄/n = 3.5 × 0.533 = 1.864 code symbols/second
This rate is accepted by the channel.

Efficiency of a source code
⚫ Efficiency is a useful measure of the goodness of a source code:
 eff = L̄min / L̄, where L̄ = Σ p(xi)·li
 L̄min = H(X)/log2 D, where H(X) is the entropy of the source and D the number of symbols in the coding alphabet
 ⇒ eff = H(X)/(L̄·log2 D), or eff = H(X)/L̄ for a binary alphabet
Entropy and efficiency of an extended binary source
⚫ Entropy of the nth-order extension of a discrete memoryless source: H(X^n) = n·H(X)
⚫ The efficiency of the extended source: eff = n·H(X)/L̄

Behavior of L̄/n
[Figure: L̄/n versus n (n = 0…4), approaching the source entropy H(X) = 0.469 from above]
⚫ L̄/n always exceeds the source entropy and converges to the source entropy for large n
⚫ Decreasing the average codeword length leads to increasing decoding complexity

Shannon-Fano coding [1]
⚫ Procedure: 3 steps
 1. List the source symbols in order of decreasing probability
 2. Partition the set into 2 subsets as close to equiprobable as possible; 0s are assigned to the upper set and 1s to the lower set
 3. Continue to partition the subsets until further partitioning is not possible
⚫ Example:

Example of Shannon-Fano coding

Ui | pi | partitions 1–5 | Codeword
U1 | .34 | 0 0 | 00
U2 | .23 | 0 1 | 01
U3 | .19 | 1 0 | 10
U4 | .1  | 1 1 0 | 110
U5 | .07 | 1 1 1 0 | 1110
U6 | .06 | 1 1 1 1 0 | 11110
U7 | .01 | 1 1 1 1 1 | 11111
Shannon-Fano coding (cont'd)
 L̄ = Σ from i=1 to 7 of pi·li = 2.45
 H(U) = −Σ from i=1 to 7 of pi·log2 pi ≈ 2.38
 eff = H(U)/L̄ ≈ 2.38/2.45 ≈ 0.97
⚫ The code generated is a prefix code due to the equiprobable partitioning
⚫ The procedure does not lead to a unique prefix code; many prefix codes have the same efficiency

Huffman coding [1][2][3]
⚫ Procedure: 3 steps
 1. List the source symbols in order of decreasing probability. The two source symbols of lowest probability are assigned a 0 and a 1.
 2. These two source symbols are combined into a new source symbol with probability equal to the sum of the two original probabilities. The new probability is placed in the list in accordance with its value.
 3. Repeat until the final probability of the combined symbol is 1.0.
⚫ Example:

Example of Huffman coding
[Figure: Huffman merging tree for pi = (.34, .23, .19, .10, .07, .06, .01); successive merges .01+.06=.07, .07+.07=.14, .10+.14=.24, .19+.23=.42, .24+.34=.58, .58+.42=1.0]

Ui | Codeword
U1 | 00
U2 | 10
U3 | 11
U4 | 011
U5 | 0100
U6 | 01010
U7 | 01011

Huffman coding: disadvantages
⚫ When the source has many symbols (outputs/messages), the code becomes bulky → Huffman code + fixed-length code
⚫ Still some redundancy, and the redundancy is large for a small set of messages → grouping multiple independent messages
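The merging procedure above can be sketched in Python. This is an illustrative implementation (not from the slides) that computes codeword lengths rather than the codewords themselves, since several equally efficient 0/1 label assignments exist:

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Binary Huffman coding: return the codeword length for each symbol."""
    tiebreak = count()  # stable ordering for equal probabilities
    heap = [(p, next(tiebreak), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable groups...
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1               # ...each merge adds one bit to members
        heapq.heappush(heap, (p1 + p2, next(tiebreak), s1 + s2))
    return lengths

probs = [0.34, 0.23, 0.19, 0.10, 0.07, 0.06, 0.01]
lengths = huffman_lengths(probs)
L = sum(p * l for p, l in zip(probs, lengths))  # average codeword length
```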
Huffman coding: disadvantages (cont'd)
⚫ Examples 9.8 and 9.9 ([2], pp. 437-438)
⚫ Grouping makes the redundancy small, but the number of codewords grows exponentially; the code becomes more complex and delay is introduced

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 8a: Error Correcting Codes
Linear block codes

Dang Le Khoa
[email protected]

Outline
⚫ Channel coding
⚫ Linear block codes
 – The error detection and correction capability
 – Encoding and decoding
 – Hamming codes
 – Cyclic codes

Block diagram of a DCS
⚫ Transmitter: format → source encode → channel encode → pulse modulate → bandpass modulate (digital modulation) → channel
⚫ Receiver: demodulate & sample → detect → channel decode → source decode → format (digital demodulation)
What is channel coding?
⚫ Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, …)
⚫ Waveform coding: transforming waveforms to better waveforms
⚫ Structured sequences: transforming data sequences into better sequences having structured redundancy
 – "Better" in the sense of making the decision process less subject to errors

Error control techniques
⚫ Automatic Repeat reQuest (ARQ)
 – Full-duplex connection, error detection codes
 – The receiver sends feedback to the transmitter indicating whether any error is detected in the received packet (Acknowledgement (ACK) / Not-Acknowledgement (NACK))
 – The transmitter retransmits the previously sent packet if it receives a NACK
⚫ Forward Error Correction (FEC)
 – Simplex connection, error correction codes
 – The receiver tries to correct some errors
⚫ Hybrid ARQ (ARQ+FEC)
 – Full-duplex, error detection and correction codes

Why use error correction coding?
⚫ Trade-offs: error performance vs. bandwidth; power vs. bandwidth; data rate vs. bandwidth; capacity vs. bandwidth
[Figure: PB versus Eb/N0, coded vs. uncoded curves]
⚫ Coding gain: for a given bit-error probability, the reduction in the Eb/N0 that can be realized through the use of the code:
 G [dB] = (Eb/N0)u [dB] − (Eb/N0)c [dB]

Some definitions
⚫ Binary field: the set {0,1}, under modulo-2 binary addition and multiplication, forms a field

Addition: 0⊕0 = 0, 0⊕1 = 1, 1⊕0 = 1, 1⊕1 = 0
Multiplication: 0·0 = 0, 0·1 = 0, 1·0 = 0, 1·1 = 1

⚫ The binary field is also called a Galois field, GF(2)
Linear block codes
⚫ The information bit stream is chopped into blocks of k bits
⚫ Each block is encoded into a larger block of n bits
⚫ The coded bits are modulated and sent over the channel; the reverse procedure is done at the receiver
 Data block (k bits) → channel encoder → codeword (n bits), with n − k redundant bits
 Code rate: Rc = k/n
⚫ The Hamming weight of a vector U, denoted w(U), is the number of non-zero elements in U
⚫ The Hamming distance between two vectors U and V is the number of elements in which they differ:
 d(U, V) = w(U ⊕ V)
⚫ The minimum distance of a block code is
 dmin = min over i≠j of d(Ui, Uj) = min over i of w(Ui)

Linear block codes – cont'd
⚫ The error detection capability is given by e = dmin − 1
⚫ The error correction capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is
 t = ⌊(dmin − 1)/2⌋
⚫ Encoding in an (n,k) block code: U = mG
 (u1, u2, …, un) = (m1, m2, …, mk) · [V1; V2; …; Vk]
 = m1·V1 + m2·V2 + … + mk·Vk
⚫ The rows of G are linearly independent
Linear block codes – cont'd
⚫ Example: block code (6,3) with
 G = [V1; V2; V3] = [1 1 0 1 0 0; 0 1 1 0 1 0; 1 0 1 0 0 1]

Message vector | Codeword
000 | 000000
100 | 110100
010 | 011010
110 | 101110
001 | 101001
101 | 011101
011 | 110011
111 | 000111

⚫ Systematic block code (n,k): for a systematic code, the first (or last) k elements in the codeword are information bits
 G = [P Ik], where Ik is the k × k identity matrix and P is a k × (n − k) matrix
 U = (u1, u2, …, un) = (p1, p2, …, p(n−k), m1, m2, …, mk)  (parity bits, message bits)
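The encoding U = mG over GF(2) is just a selective XOR of the rows of G. A minimal Python sketch using the (6,3) generator matrix from the example:

```python
# generator matrix of the (6,3) code above, in systematic form G = [P I3]
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G):
    """U = m G over GF(2): XOR together the rows of G selected by message bits."""
    return [sum(m[i] * G[i][j] for i in range(len(G))) % 2
            for j in range(len(G[0]))]

u = encode([1, 1, 0], G)   # message 110 -> codeword 101110
```

Because G is systematic, the last k = 3 bits of every codeword reproduce the message.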

Linear block codes – cont'd
⚫ For any linear code we can find a matrix H of size (n − k) × n whose rows are orthogonal to the rows of G:
 G·Hᵀ = 0
⚫ H is called the parity check matrix, and its rows are linearly independent
⚫ For systematic linear block codes: H = [I(n−k) Pᵀ]

⚫ Standard array
 – The first row contains the codewords U1 (= 0), U2, …, U(2^k)
 – For row i = 2, 3, …, 2^(n−k), find a vector ei in Vn of minimum weight which is not already listed in the array
 – Call this pattern ei and form the ith row as the corresponding coset: ei, ei ⊕ U2, …, ei ⊕ U(2^k)
 – The first column contains the coset leaders
Linear block codes – cont'd
⚫ Example: standard array for the (6,3) code (first rows shown)

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
…
 Coset leaders: 000001, 000010, 000100, 001000, 010000, 100000, 010001

⚫ Decoding chain: data source → format → channel encoding (m → U) → modulation → channel → demodulation/detection → channel decoding (r → m̂) → data sink
 r = U + e
 r = (r1, r2, …, rn): received codeword (vector)
 e = (e1, e2, …, en): error pattern (vector)
⚫ Syndrome testing: S = rHᵀ = eHᵀ
 – S is the syndrome of r, corresponding to the error pattern e

Linear block codes – cont'd
⚫ Standard array and syndrome table decoding:
 1. Calculate S = rHᵀ
 2. Find the coset leader ê = ei corresponding to S
 3. Calculate Û = r + ê and the corresponding m̂
⚫ Note that Û = r + ê = (U + e) + ê = U + (e + ê)
 – If ê = e, the error is corrected
 – If ê ≠ e, an undetectable decoding error occurs

⚫ Example for the (6,3) code:

Error pattern | Syndrome
000000 | 000
000001 | 101
000010 | 011
000100 | 110
001000 | 001
010000 | 010
100000 | 100
010001 | 111

 U = (101110) transmitted, r = (001110) received
 The syndrome of r is computed: S = rHᵀ = (001110)Hᵀ = (100)
 The error pattern corresponding to this syndrome is ê = (100000)
 The corrected vector is estimated as Û = r + ê = (001110) + (100000) = (101110)
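The syndrome decoding steps above can be sketched in Python for the (6,3) code, building the coset-leader table from the single-bit error patterns:

```python
# parity check matrix H = [I3 P^T] of the (6,3) code
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r, H):
    # S = r H^T over GF(2)
    return tuple(sum(r[j] * row[j] for j in range(len(r))) % 2 for row in H)

# syndrome table: map each single-bit-error coset leader to its syndrome
leaders = {}
for pos in range(6):
    e = [0] * 6
    e[pos] = 1
    leaders[syndrome(e, H)] = e

def correct(r, H):
    s = syndrome(r, H)
    if s == (0, 0, 0):
        return r                                  # no error detected
    e = leaders[s]                                # assume a single-bit error
    return [(a + b) % 2 for a, b in zip(r, e)]

u_hat = correct([0, 0, 1, 1, 1, 0], H)  # received 001110 -> corrected 101110
```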
Hamming codes
⚫ Hamming codes are a subclass of linear block codes and belong to the category of perfect codes
⚫ Hamming codes are expressed as a function of a single integer m ≥ 2:
 Code length: n = 2^m − 1
 Number of information bits: k = 2^m − m − 1
 Number of parity bits: n − k = m
 Error correction capability: t = 1
⚫ The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples
⚫ Example: systematic Hamming code (7,4)

 H = [1 0 0 0 1 1 1; 0 1 0 1 0 1 1; 0 0 1 1 1 0 1] = [I(3×3) Pᵀ]

 G = [0 1 1 1 0 0 0; 1 0 1 0 1 0 0; 1 1 0 0 0 1 0; 1 1 1 0 0 0 1] = [P I(4×4)]

An alternative representation of the Hamming code
⚫ Example: systematic Hamming code (7,4)
 G = [P I4] = [0 1 1 1 0 0 0; 1 0 1 0 1 0 0; 1 1 0 0 0 1 0; 1 1 1 0 0 0 1]
⚫ Rearranging the columns of the matrix in the order 3, 2, 4, 1, 5, 6, 7 gives
 G = [1 1 1 0 0 0 0; 1 0 0 1 1 0 0; 0 1 0 1 0 1 0; 1 1 0 1 0 0 1]

Cyclic block codes
⚫ Cyclic codes are a subclass of linear block codes
⚫ Encoding and syndrome calculation are easily performed using feedback shift registers
 – Hence, relatively long block codes can be implemented with reasonable complexity
⚫ BCH and Reed-Solomon codes are cyclic codes
Cyclic block codes
⚫ A linear (n,k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords:
 U = (u0, u1, u2, …, u(n−1)); its ith cyclic shift is
 U^(i) = (u(n−i), u(n−i+1), …, u(n−1), u0, u1, …, u(n−i−1))
⚫ Example: U = (1101):
 U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U
⚫ The algebraic structure of cyclic codes suggests expressing codewords in polynomial form:
 U(X) = u0 + u1·X + u2·X^2 + … + u(n−1)·X^(n−1)  (degree n − 1)
⚫ Relationship between a codeword and its cyclic shifts:
 X·U(X) = u0·X + u1·X^2 + … + u(n−2)·X^(n−1) + u(n−1)·X^n
 = u(n−1) + u0·X + u1·X^2 + … + u(n−2)·X^(n−1) + u(n−1)·(X^n + 1)
 = U^(1)(X) + u(n−1)·(X^n + 1)
 – Hence: U^(1)(X) = X·U(X) modulo (X^n + 1), and by extension U^(i)(X) = X^i·U(X) modulo (X^n + 1)
⚫ Basic properties of cyclic codes — let C be a binary (n,k) linear cyclic code:
 1. Within the set of code polynomials in C, there is a unique monic polynomial g(X) with minimal degree r < n; g(X) is called the generator polynomial:
 g(X) = g0 + g1·X + … + gr·X^r
 2. Every code polynomial U(X) in C can be expressed uniquely as U(X) = m(X)·g(X)
 3. The generator polynomial g(X) is a factor of X^n + 1
⚫ The orthogonality of G and H in polynomial form is expressed as g(X)·h(X) = X^n + 1; this means h(X) is also a factor of X^n + 1
⚫ Row i, i = 1, …, k, of the generator matrix is formed by the coefficients of the (i − 1)th cyclic shift of the generator polynomial:
 G = [g(X); X·g(X); …; X^(k−1)·g(X)]
Cyclic block codes
⚫ Systematic encoding algorithm for an (n,k) cyclic code:
 1. Multiply the message polynomial m(X) by X^(n−k)
 2. Divide the result of step 1 by the generator polynomial g(X); let p(X) be the remainder
 3. Add p(X) to X^(n−k)·m(X) to form the codeword U(X)
⚫ Example: for the systematic (7,4) cyclic code with generator polynomial g(X) = 1 + X + X^3, find the codeword for the message m = (1011):
 n = 7, k = 4, n − k = 3
 m = (1011) → m(X) = 1 + X^2 + X^3
 X^(n−k)·m(X) = X^3·m(X) = X^3·(1 + X^2 + X^3) = X^3 + X^5 + X^6
 Divide X^(n−k)·m(X) by g(X):
 X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)·(1 + X + X^3) + 1
 (quotient q(X), generator g(X), remainder p(X) = 1)
 Form the codeword polynomial:
 U(X) = p(X) + X^3·m(X) = 1 + X^3 + X^5 + X^6
 U = (1 0 0 1 0 1 1)  (parity bits, message bits)
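The three-step systematic encoding above reduces to one GF(2) polynomial division. A Python sketch reproducing the (7,4) example (coefficient lists are lowest degree first):

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; coefficients lowest degree first."""
    r = dividend[:]
    deg_g = len(divisor) - 1
    for i in range(len(r) - 1, deg_g - 1, -1):
        if r[i]:
            for j, c in enumerate(divisor):
                r[i - deg_g + j] ^= c   # subtract (XOR) a shifted copy of g(X)
    return r[:deg_g]

def cyclic_encode(m, g, n):
    """Systematic (n,k) cyclic encoding: U(X) = p(X) + X^(n-k) m(X)."""
    nk = n - len(m)
    shifted = [0] * nk + m              # X^(n-k) * m(X)
    p = poly_mod(shifted, g)            # parity = remainder mod g(X)
    return p + m

g = [1, 1, 0, 1]                        # g(X) = 1 + X + X^3
U = cyclic_encode([1, 0, 1, 1], g, 7)   # m(X) = 1 + X^2 + X^3 -> U = (1001011)
```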

Cyclic block codes Cyclic block codes


– Find the generator and parity check matrices, G and H,
respectively.
◼ Syndrome decoding for Cyclic codes:
g( X ) = 1 + 1  X + 0  X 2 + 1  X 3  ( g 0 , g1, g 2 , g3 ) = (1101). ◼ Received codeword in polynomial form is given by
1 1 0 1 0 0 0
0 1 1 0 1 0 0 
Not in systematic form. Received r ( X ) = U ( X ) + e( X ) Error
G= We do the following: codeword pattern
0 0 1 1 0 1 0 row(1) + row(3) → row(3)
  ◼ The syndrome is the remainder obtained by dividing the
0 0 0 1 1 0 1 row(1) + row(2) + row(4) → row(4)
1 1 0 1 0 0 0 received polynomial by the generator polynomial.
0 0  1 0 0 1 0 1 1 
1 1 0 1 0 r ( X ) = q( X )g( X ) + S( X ).
G= H = 0 1 0 1 1 1 0  .
Syndrome
.
1 1 1 0 0 1 0
  0 0 1 0 1 1 1  With syndrome.
1 0 1 0 0 0 1 ◼

P I 4 4 I 33 PT S( X ) = 0 : no error
S( X )  0 : error , retransmission
30 31
Example of block code performance
[Figure: PB versus Eb/N0 [dB] for 8PSK and QPSK, coded vs. uncoded]

Viet Nam National University Ho Chi Minh City
University of Science
Faculty of Electronics & Telecommunications

Chapter 8b: Error Correcting Codes
Convolutional codes

Dang Le Khoa
[email protected]

Introduction
⚫ In block coding, the encoder accepts a k-bit message block and generates an n-bit codeword → block-by-block basis
⚫ The encoder must buffer an entire message block before generating the codeword
⚫ When the message bits come in serially rather than in large blocks, using a buffer is undesirable → convolutional coding

Definitions
⚫ A convolutional encoder is a finite-state machine that consists of an M-stage shift register and n modulo-2 adders
⚫ An L-bit message sequence produces an output sequence with n(L + M) bits
⚫ Code rate:
 r = L / (n(L + M)) (bits/symbol)
⚫ For L >> M:
 r ≈ 1/n (bits/symbol)
Definitions (cont'd)
⚫ Constraint length (K): the number of shifts over which a single message bit influences the output
⚫ An M-stage shift register needs M + 1 shifts for a message bit to enter the shift register and come out
⚫ K = M + 1

Example
⚫ Convolutional code (2,1,2):
 – n = 2: 2 modulo-2 adders (2 outputs)
 – k = 1: 1 input
 – M = 2: 2 stages of shift register (K = M + 1 = 2 + 1 = 3)
[Figure: input bit feeding a 2-stage shift register; the path-1 and path-2 adders produce the interleaved output]

Example
⚫ Convolutional code (3,2,1):
 – n = 3: 3 modulo-2 adders (3 outputs)
 – k = 2: 2 inputs
 – M = 1: 1 stage in each shift register (K = 2 for each)

Generator polynomials
⚫ A convolutional code is a nonsystematic code
⚫ Each path connecting the output to the input can be characterized by an impulse response or a generator polynomial
⚫ With (gM^(i), …, g2^(i), g1^(i), g0^(i)) denoting the impulse response of the ith path, the generator polynomial of the ith path is
 g^(i)(D) = gM^(i)·D^M + … + g2^(i)·D^2 + g1^(i)·D + g0^(i)
⚫ D denotes the unit-delay variable (different from the X of cyclic codes)
⚫ A complete convolutional code is described by the set of polynomials {g^(1)(D), g^(2)(D), …, g^(n)(D)}
Example (1/8)
⚫ Consider the case of (2,1,2):
 – The impulse response of path 1 is (1,1,1); the corresponding generator polynomial is g^(1)(D) = D^2 + D + 1
 – The impulse response of path 2 is (1,0,1); the corresponding generator polynomial is g^(2)(D) = D^2 + 1
⚫ Message sequence (11001); polynomial representation: m(D) = D^4 + D^3 + 1

Example (2/8)
⚫ Output polynomial of path 1:
 c^(1)(D) = m(D)·g^(1)(D) = (D^4 + D^3 + 1)(D^2 + D + 1)
 = D^6 + D^5 + D^4 + D^5 + D^4 + D^3 + D^2 + D + 1 = D^6 + D^3 + D^2 + D + 1
 → output sequence of path 1: (1001111)
⚫ Output polynomial of path 2:
 c^(2)(D) = m(D)·g^(2)(D) = (D^4 + D^3 + 1)(D^2 + 1)
 = D^6 + D^4 + D^5 + D^3 + D^2 + 1 = D^6 + D^5 + D^4 + D^3 + D^2 + 1
 → output sequence of path 2: (1111101)

Example(3/8)
◼ m = (11001)
◼ c^(1) = (1001111)
◼ c^(2) = (1111101)
◼ Encoded (interleaved) sequence: c = (11,01,01,11,11,10,11)
◼ Message length: L = 5 bits
◼ Output length: n(L + K − 1) = 2 × 7 = 14 bits
◼ A terminating sequence of K − 1 = 2 zeros is appended to the
last input bit so that the shift register is restored to its
zero initial state

Example(4/8)
◼ Another way to calculate the output: write the message in
reverse order, pad it with K − 1 = 2 leading zeros, and slide
the tap pattern of the path across it from the right; each
output bit is the modulo-2 sum of the message bits under the
taps
◼ Path 1 (taps 111), m reversed and padded: 0010011

0010011
      111  →  1
     111   →  0
    111    →  0
   111     →  1
  111      →  1
 111       →  1
111        →  1

c^(1) = (1001111)
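The same 14-bit interleaved sequence, including the two terminating zeros, can be produced by simulating the two-stage shift register directly. A minimal sketch, assuming the tap patterns (1,1,1) and (1,0,1) of this example (the function name and register layout are my own):

```python
def encode_212(msg):
    """Encode a bit list with the (2,1,2) encoder from the slides:
    path 1 taps (1,1,1), path 2 taps (1,0,1)."""
    s1 = s2 = 0                      # shift-register contents (K-1 = 2 cells)
    out = []
    for bit in msg + [0, 0]:         # append K-1 = 2 terminating zeros
        out.append(bit ^ s1 ^ s2)    # path 1 output
        out.append(bit ^ s2)         # path 2 output
        s1, s2 = bit, s1             # shift the register
    return out

print(encode_212([1, 1, 0, 0, 1]))
# -> [1,1, 0,1, 0,1, 1,1, 1,1, 1,0, 1,1]  i.e. c = (11,01,01,11,11,10,11)
```

Reading the output two bits at a time reproduces the encoded sequence on the slide.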
Example(5/8)
◼ Path 2 (taps 101), m reversed and padded: 0010011

0010011
      101  →  1
     101   →  1
    101    →  1
   101     →  1
  101      →  1
 101       →  0
101        →  1

c^(2) = (1111101)

Example(6/8)
◼ Consider the case of (3,2,1)
[Figure: (3,2,1) encoder — two inputs, one-stage shift
registers, three outputs]
◼ g_i^(j) = (g_{i,M}^(j), g_{i,M−1}^(j), ..., g_{i,1}^(j),
g_{i,0}^(j)) denotes the impulse response of the jth path
corresponding to the ith input

Example(7/8)
[Figure: (3,2,1) encoder with the tap connections listed below]
◼ g_1^(1) = (11)  →  g_1^(1)(D) = D + 1
◼ g_2^(1) = (01)  →  g_2^(1)(D) = 1
◼ g_1^(2) = (01)  →  g_1^(2)(D) = 1
◼ g_2^(2) = (10)  →  g_2^(2)(D) = D
◼ g_1^(3) = (11)  →  g_1^(3)(D) = D + 1
◼ g_2^(3) = (10)  →  g_2^(3)(D) = D

Example(8/8)
◼ Assume that:
◼ m^(1) = (101)  →  m^(1)(D) = D^2 + 1
◼ m^(2) = (011)  →  m^(2)(D) = D + 1
◼ Outputs are:
◼ c^(1) = m^(1) g_1^(1) + m^(2) g_2^(1)
        = (D^2 + 1)(D + 1) + (D + 1)(1)
        = D^3 + D^2 + D + 1 + D + 1 = D^3 + D^2  →  c^(1) = (1100)
◼ c^(2) = m^(1) g_1^(2) + m^(2) g_2^(2)
        = (D^2 + 1)(1) + (D + 1)(D)
        = D^2 + 1 + D^2 + D = D + 1  →  c^(2) = (0011)
◼ c^(3) = m^(1) g_1^(3) + m^(2) g_2^(3)
        = (D^2 + 1)(D + 1) + (D + 1)(D)
        = D^3 + D^2 + D + 1 + D^2 + D = D^3 + 1  →  c^(3) = (1001)
◼ Multiplexed output: c = (101,100,010,011)
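These three products can be cross-checked with GF(2) polynomial arithmetic. A minimal sketch (plain Python; the helper names `mul`/`add` and the MSB-first bit-list convention are mine):

```python
def mul(a, b):
    """Multiply two GF(2) polynomials given as bit lists (MSB first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def add(a, b):
    """Add (XOR) two GF(2) polynomials, left-padding to equal length."""
    n = max(len(a), len(b))
    a, b = [0] * (n - len(a)) + a, [0] * (n - len(b)) + b
    return [x ^ y for x, y in zip(a, b)]

m1, m2 = [1, 0, 1], [0, 1, 1]        # m(1)(D) = D^2+1, m(2)(D) = D+1
gens = {1: ([1, 1], [0, 1]),         # g1(1) = D+1, g2(1) = 1
        2: ([0, 1], [1, 0]),         # g1(2) = 1,   g2(2) = D
        3: ([1, 1], [1, 0])}         # g1(3) = D+1, g2(3) = D
c = {j: add(mul(m1, g1), mul(m2, g2)) for j, (g1, g2) in gens.items()}
print(c)  # c[1] = (1100), c[2] = (0011), c[3] = (1001)
```

Interleaving the three sequences bit by bit gives the multiplexed output c = (101,100,010,011) from the slide.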
State diagram
◼ Consider convolutional code (2,1,2)
◼ Binary state description: a = 00, b = 10, c = 01, d = 11
◼ 4 possible states
◼ Each node has 2 incoming branches and 2 outgoing branches
◼ A transition from one state to another on input 0 is
represented by a solid line, and on input 1 by a dashed line
◼ The output is labeled on the transition line
[Figure: state diagram with branches labeled input/output —
a→a 0/00, a→b 1/11, b→c 0/10, b→d 1/01, c→a 0/11, c→b 1/00,
d→c 0/01, d→d 1/10]

Example
◼ Message 11001
◼ Start at state a
◼ Walk through the state diagram in accordance with the message
sequence (with K − 1 = 2 terminating zeros appended):

Input:     1    1    0    0    1    0    0
State:  a=00 b=10 d=11 c=01 a=00 b=10 c=01 a=00
Output:   11   01   01   11   11   10   11
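The state walk above can be reproduced in a few lines. A minimal sketch (the state labels follow the slide; the code itself is my own):

```python
# State is the register pair (s1, s2); slide labels: a=00, b=10, c=01, d=11.
NAMES = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}

def walk(bits):
    """Trace the (2,1,2) state diagram; return visited states and outputs."""
    s1 = s2 = 0
    states, outputs = ['a'], []
    for b in bits:
        outputs.append(f"{b ^ s1 ^ s2}{b ^ s2}")   # output on the branch taken
        s1, s2 = b, s1                             # move to the next state
        states.append(NAMES[(s1, s2)])
    return states, outputs

states, outputs = walk([1, 1, 0, 0, 1, 0, 0])      # message + 2 zeros
print(' '.join(states))    # a b d c a b c a
print(' '.join(outputs))   # 11 01 01 11 11 10 11
```

The printed state and output sequences match the table on the slide.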

Trellis(1/2)
[Figure: trellis diagram — states a=00, b=10, c=01, d=11 drawn
at each level j = 0, 1, ..., L+K−1; solid branches for input 0
(e.g., 0/00 along state a), dashed branches for input 1 (e.g.,
1/10 along state d)]

Trellis(2/2)
◼ The trellis contains L + K levels
◼ Labeled as j = 0, 1, ..., L, ..., L+K−1
◼ The first K − 1 levels correspond to the encoder's departure
from the initial state a
◼ The last K − 1 levels correspond to the encoder's return to
state a
◼ For levels j in the range K−1 ≤ j ≤ L, all the states are
reachable
Example
◼ Message 11001
[Figure: the path through the trellis for input 1 1 0 0 1 0 0,
levels j = 0..7, producing output 11 01 01 11 11 10 11]

Maximum Likelihood Decoding of Convolutional Codes
◼ m denotes a message vector
◼ c denotes the corresponding code vector
◼ r denotes the received vector
◼ With a given r, the decoder is required to make an estimate
m̂ of the message vector, or equivalently to produce an
estimate ĉ of the code vector
◼ m̂ = m only if ĉ = c; otherwise a decoding error happens
◼ The decoding rule is said to be optimum when the probability
of decoding error is minimized
◼ The maximum likelihood decoder or decision rule is described
as follows: choose the estimate ĉ for which the log-likelihood
function log p(r|c) is maximum

Maximum Likelihood Decoding of Convolutional Codes
◼ Binary symmetric channel: both c and r are binary sequences
of length N
◼ p(r|c) = ∏_{i=1..N} p(r_i|c_i)
◼ log p(r|c) = Σ_{i=1..N} log p(r_i|c_i)
◼ with p(r_i|c_i) = p if r_i ≠ c_i, and 1 − p if r_i = c_i
◼ If r differs from c in d positions (d is the Hamming distance
between r and c), then
log p(r|c) = d log p + (N − d) log(1 − p)
           = d log(p/(1 − p)) + N log(1 − p)
◼ Since log(p/(1 − p)) < 0 for p < 1/2, maximizing the
log-likelihood is equivalent to minimizing d, so the decoding
rule is restated as follows:
◼ Choose the estimate ĉ that minimizes the Hamming distance
between the received vector r and the transmitted code vector c
◼ The received vector r is compared with each possible code
vector c, and the one closest to r is chosen as the correct
transmitted code vector
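The equivalence between maximizing log p(r|c) and minimizing the Hamming distance (for p < 1/2) can be checked numerically. A minimal sketch with made-up vectors r, c1, c2 and an assumed crossover probability p = 0.1:

```python
import math

def hamming(x, y):
    """Hamming distance between two equal-length bit lists."""
    return sum(a != b for a, b in zip(x, y))

def log_likelihood(r, c, p):
    """log p(r|c) over a BSC: d*log(p/(1-p)) + N*log(1-p)."""
    d, N = hamming(r, c), len(r)
    return d * math.log(p / (1 - p)) + N * math.log(1 - p)

r  = [1, 1, 0, 0, 0, 1]
c1 = [1, 1, 0, 1, 0, 1]        # distance 1 from r
c2 = [0, 0, 1, 0, 0, 1]        # distance 3 from r
p  = 0.1
# The closer codeword has the larger (less negative) log-likelihood:
print(log_likelihood(r, c1, p) > log_likelihood(r, c2, p))  # True
```

Swapping in any p < 1/2 gives the same ordering, which is why the ML rule reduces to nearest-codeword decoding.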
The Viterbi algorithm
◼ Choose a path in the trellis whose coded sequence differs
from the received sequence in the fewest number of positions
◼ The algorithm operates by computing a metric for every
possible path in the trellis
◼ The metric is the Hamming distance between the coded sequence
represented by that path and the received sequence
◼ For each node, two paths enter the node; the path with the
lower metric survives, and the other is discarded
◼ The computation is repeated at every level j in the range
K−1 ≤ j ≤ L
◼ Number of survivors at each level: 2^(K−1) = 4

The Viterbi algorithm
◼ c = (11,01,01,11,11,10,11), r = (11,00,01,11,10,10,11)
[Figure: trellis with the accumulated path metric written at
each state and level; at every node the lower-metric entering
path survives, and the final survivor, with metric 2, traces
the code sequence below]
◼ Code:   11 01 01 11 11 10 11
◼ Output: 1 1 0 0 1 0 0
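The survivor computation above can be sketched as a short hard-decision Viterbi decoder for this (2,1,2) code. This is my own minimal implementation, not the textbook's: it keeps one survivor per state and, because the trellis is terminated, reads out the path ending in the all-zero state.

```python
def viterbi_212(r_pairs):
    """Hard-decision Viterbi decoding of the (2,1,2) code from the slides.
    r_pairs: list of received (bit, bit) tuples, one per trellis level."""
    # survivors: state (s1, s2) -> (path metric, decoded bits so far)
    survivors = {(0, 0): (0, [])}
    for r1, r2 in r_pairs:
        new = {}
        for (s1, s2), (metric, bits) in survivors.items():
            for b in (0, 1):                     # both branches out of the state
                o1, o2 = b ^ s1 ^ s2, b ^ s2     # branch output bits
                m = metric + (o1 != r1) + (o2 != r2)
                nxt = (b, s1)                    # state after the shift
                if nxt not in new or m < new[nxt][0]:
                    new[nxt] = (m, bits + [b])   # keep the lower-metric path
        survivors = new
    # terminated trellis: return the survivor ending in the all-zero state
    return survivors[(0, 0)][1]

r = [(1, 1), (0, 0), (0, 1), (1, 1), (1, 0), (1, 0), (1, 1)]
print(viterbi_212(r))   # -> [1, 1, 0, 0, 1, 0, 0]
```

The decoded bits (message plus the two terminating zeros) match the slide's output even though r contains two channel errors.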
