Unit04 Coding
Learning Objectives
Describe symbol mapping for QAM constellations
Implement symbol detection for faded symbols
◦ Compute average BER and SER on AWGN and flat channels and compare
Identify whether a system should be modeled as slow vs. fast fading and as frequency-selective vs. flat fading
For slow and flat fading, compute outage probability and capacity under a fading model
For IID fading, compute the ergodic capacity
Create a TX and RX chain for flat and fading channels with given components
◦ Symbol equalization, soft symbol detection, interleaving, channel decoder
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
Uncoded Modulation
[Block diagram: Symbol mapping → TX filter → RX filter → Symbol demodulation]
Mathematical Model
[Block diagram: TX bits → Symbol mapping → TX QAM symbols s[n] → Fading channel → RX QAM symbols r[n] → Symbol demodulation → RX bits]
Review: Bit to Symbol Mapping
b[k] ∈ {0,1} = sequence of bits: b[0], b[1], b[2], b[3], b[4], b[5], …
s[n] ∈ {s_1, …, s_M} = sequence of complex symbols: s[0], s[1], s[2], …
◦ Each symbol has one of M possible values
Modulation rate: R_mod = log_2 M bits per symbol
◦ Each group of R_mod bits is mapped to one symbol
Review: QAM Modulation
M-QAM: The most common bit-to-symbol mapping in wireless systems
◦ R_mod/2 bits are mapped to I and R_mod/2 bits to Q
◦ Each dimension is mapped uniformly

[Constellations: QPSK (R_mod = 2 bits/sym), 16-QAM (R_mod = 4 bits/sym), 64-QAM (R_mod = 6 bits/sym)]
ML Estimation for Symbol Demodulation
Consider a single symbol: r = h s + w,  w ~ CN(0, N_0),  s ∈ {s_1, …, s_M}
◦ Drop the sample index n
◦ s is a QAM symbol
Equalization and Nearest Symbol Detection
Likelihood: p(r|s) = (1/(π N_0)) exp(−|r − h s|² / N_0)

MLE: ŝ = arg max_s p(r|s) = arg min_s |r − h s|² = arg min_s |z − s|²

Here, z = r/h = equalized symbol.

Procedure (see the sketch below):
◦ Step 1: Equalize the symbol: z = r/h
◦ Step 2: Find the s ∈ {s_1, …, s_M} closest to z in the complex plane
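A minimal MATLAB sketch of the two-step procedure for one QPSK symbol (the constellation, channel, and noise values are illustrative assumptions):

```matlab
% Equalize-then-detect for one faded symbol (illustrative sketch).
constellation = [1+1j, -1+1j, -1-1j, 1-1j]/sqrt(2); % unit-energy QPSK
h  = (randn + 1j*randn)/sqrt(2);        % one Rayleigh channel gain
s  = constellation(randi(4));           % random TX symbol
N0 = 0.1;
w  = sqrt(N0/2)*(randn + 1j*randn);     % w ~ CN(0, N0)
r  = h*s + w;                           % received sample

z  = r/h;                               % Step 1: equalize
[~, idx] = min(abs(z - constellation)); % Step 2: nearest constellation point
s_hat = constellation(idx);
```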
Decision Regions
[Figure: QPSK constellation points s1–s4 with the decision region for s1; the equalized symbol z lands closest to s1]

ML estimate is the closest constellation point to z: ŝ = arg min_i |z − s_i|
Error Probabilities on an AWGN Channel
Error probabilities:
◦ Symbol error rate (SER): Probability that a symbol is misdetected
◦ Bit error rate (BER): Probability that a bit is in error
◦ Assume TX symbols are uniformly distributed

[Figure: no error occurs when z stays in the correct decision region around s_m]

First consider the AWGN model: z = s + v
◦ No fading
SER for AWGN Modulation
Error formulas can be derived for most QAM mappings
◦ See, e.g., Proakis
Ex: BER Simulation for 16-QAM
See demo
Easy to do in MATLAB
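A self-contained sketch of such a simulation, using base MATLAB only (the Gray mapping and parameters are our assumptions, not necessarily the demo's):

```matlab
% Monte Carlo BER for Gray-mapped 16-QAM over AWGN (sketch, no toolboxes).
nsym   = 1e5;
lev    = [-3; -1; 1; 3];                   % PAM levels per dimension
gray   = [0 0; 0 1; 1 1; 1 0];             % Gray bits for lev(1..4)
map    = [-3 -1 3 1];                      % level for bit pair 00,01,10,11
Es     = 10;                               % avg energy of the +/-1,+/-3 grid
EbN0dB = 0:2:12;
ber    = zeros(size(EbN0dB));

b  = randi([0 1], nsym, 4);                % 4 bits per 16-QAM symbol
xi = map(2*b(:,1) + b(:,2) + 1);  xi = xi(:);
xq = map(2*b(:,3) + b(:,4) + 1);  xq = xq(:);
s  = (xi + 1j*xq)/sqrt(Es);                % unit average symbol energy

for k = 1:numel(EbN0dB)
    N0 = 1/(4*10^(EbN0dB(k)/10));          % Es = 1, 4 bits/symbol
    r  = s + sqrt(N0/2)*(randn(nsym,1) + 1j*randn(nsym,1));
    z  = r*sqrt(Es);                       % undo the normalization
    [~, iI] = min(abs(real(z) - lev.'), [], 2);  % nearest level per dim
    [~, iQ] = min(abs(imag(z) - lev.'), [], 2);
    bhat   = [gray(iI,:), gray(iQ,:)];
    ber(k) = mean(bhat(:) ~= b(:));
end
semilogy(EbN0dB, ber); grid on; xlabel('E_b/N_0 (dB)'); ylabel('BER');
```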
SNR on a Fading Channel
Now return to a fading channel:  r = h s + w,  w ~ CN(0, N_0)

Equalization: z = r/h = s + v
◦ v = w/h = effective noise after equalization

SNR: γ_s = |h|² E_s / N_0
Average SER on a Fading Channel
Fading channel: r = h s + w

With fading, the SNR is random: γ_s = |h|² E_s / N_0

Average SER: average the conditional SER over the fading distribution, E[SER(γ_s)]
Example: SER on QPSK with Rayleigh Fading
Rayleigh fading: γ_s is exponential with E[γ_s] = γ̄_s

QPSK: SER(γ_s) ≈ 2 Q(√(2 γ_s)) for large γ_s

Lemma: Suppose that γ is exponential with E[γ] = γ̄. Then
  E[Q(√(α γ))] = (1/2) (1 − √(α γ̄ / (2 + α γ̄))) ≈ 1/(2 α γ̄)
◦ Detailed proof below.

Average SER: From the Lemma (with α = 2),
  E[SER(γ_s)] ≈ 2/(2 · 2 γ̄_s) = 1/(2 γ̄_s)

Average SER decays as ∝ 1/γ̄_s
In an AWGN channel, SER decays as Q(√(2 γ_s)) ∝ e^(−γ_s)
◦ Fading gives a much slower decay
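A quick numeric check of this approximation (a sketch in base MATLAB; the SER expression is the slide's large-SNR formula):

```matlab
% Monte Carlo average of SER(g) = 2*Q(sqrt(2g)) over exponential g,
% compared with the lemma-based approximation 1/(2*gbar).
Qf    = @(x) 0.5*erfc(x/sqrt(2));          % Q function via erfc
gbar  = 10.^((5:5:30)/10);                 % average SNRs, linear scale
serMC = zeros(size(gbar));
for k = 1:numel(gbar)
    g        = -gbar(k)*log(rand(1e6,1));  % exponential, mean gbar(k)
    serMC(k) = mean(2*Qf(sqrt(2*g)));      % E[SER(g)] by Monte Carlo
end
disp([serMC; 1./(2*gbar)])                 % rows agree for large gbar
```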
Comparison of Fading vs. AWGN
Error rate with fading is dramatically higher.

[Plot: SER vs. SNR for QPSK, AWGN vs. Rayleigh fading]

Ex. for QPSK:
◦ No fading: SER decays exponentially
◦ With fading: SER decays with inverse SNR
16-QAM Example
See demo
Large gap between AWGN and Rayleigh
Lemma for Average of Q function
Lemma: Suppose that γ is exponential with E[γ] = γ̄. Then
  E[Q(√(α γ))] = (1/2) (1 − √(α γ̄ / (2 + α γ̄))) ≈ 1/(2 α γ̄)

Proof:
◦ E[Q(√(α γ))] = (1/γ̄) ∫_0^∞ Q(√(α γ)) e^(−γ/γ̄) dγ
◦ Q(√(α γ)) = (1/√(2π)) ∫_{√(α γ)}^∞ e^(−u²/2) du
◦ Change the order of integration
In-Class Exercise
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
Coding Over Fading Channels
Lesson from the previous section:
◦ With fading, uncoded modulation cannot provide sufficient reliability
◦ Error rate decays slowly with SNR

Channel coding:
◦ Send data in blocks
◦ Each block contains redundancy
◦ If some parts fade, the block can still be recovered
Time and Frequency Fading
Recall: Channels vary over time and frequency

Variation in time:
◦ Due to Doppler spread, δ_f
◦ Coherence time: T_coh ≈ 1/(2 δ_f)
◦ Time over which the channel changes significantly

Variation in frequency:
◦ Due to delay spread, δ_τ
◦ Coherence bandwidth: W_coh ≈ 1/(2 δ_τ)
◦ Frequency over which the channel changes significantly

[Plot: SNR [dB] vs. time and frequency for a 20-path random channel with δ_f = 100 Hz, δ_τ = 100 ns; T_coh ≈ 5 ms, W_coh ≈ 5 MHz]
Flat vs. Frequency-Selective Fading
Suppose we transmit a coding block:
◦ T in time and W in bandwidth
◦ a T × W region in time and frequency

[Figure: a block with bandwidth W ≪ W_coh sees flat fading]
Slow vs. Fast Fading
Suppose we transmit a coding block:
◦ T in time and W in bandwidth
◦ a T × W region in time and frequency

[Figure: a block with duration T ≪ T_coh sees slow fading]
Summary: Four Regimes
Axes: transmission time T vs. transmission bandwidth W of the coding block.

              T ≪ T_coh                   T ≫ T_coh
W ≫ W_coh     Slow, frequency-selective   Fast, frequency-selective
W ≪ W_coh     Slow, flat                  Fast, flat
Regimes to Model Coding
To analyze fading, consider two extreme cases:

Flat and slow fading over the coding block:
◦ Channel is flat and slow over the coding block
◦ All symbols see approximately the same fading

IID fading in the coding block:
◦ Channel fades in time and/or frequency over the block
◦ Fast and/or frequency-selective
◦ Model as a large number of independent fades
Analysis of Coding with Flat and Slow Fading
The coding block sees an SNR γ
SNR γ varies across blocks but is constant over each block
◦ Transmission time ≪ coherence time
◦ Transmission bandwidth ≪ coherence bandwidth
Outage Probability for Rayleigh Fading
Suppose a channel is Rayleigh fading
SNR γ is exponentially distributed with some mean γ̄

Outage probability: P_out = P(γ < γ_tgt) = 1 − e^(−γ_tgt/γ̄)

Average SNR for a given outage probability: γ̄ = −γ_tgt / ln(1 − P_out)

Fade margin: Additional SNR needed above the target for a given outage probability:
◦ In linear scale: γ̄/γ_tgt = −1/ln(1 − P_out) ≈ 1/P_out
◦ In dB: γ̄ ≈ γ_tgt − 10 log_10(P_out)
Fade Margin Example
Example:
◦ Target SNR is γ_tgt = 10 dB
◦ Outage probability: P_out = 0.01
◦ Required average SNR: γ̄ ≈ 10 − 10 log_10(0.01) = 30 dB, i.e., a 20 dB fade margin
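The same numbers with the exact expression (a small check in base MATLAB):

```matlab
% Fade margin for a 10 dB target SNR at 1% outage, Rayleigh fading.
Pout    = 0.01;
gtgt_dB = 10;
gbar    = -10^(gtgt_dB/10)/log(1 - Pout);  % exact: gbar = -g_tgt/ln(1-P_out)
gbar_dB = 10*log10(gbar);                  % about 30 dB
margin  = gbar_dB - gtgt_dB;               % about 20 dB fade margin
fprintf('avg SNR %.1f dB, fade margin %.1f dB\n', gbar_dB, margin);
```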
Outage Capacity
Suppose we can achieve some rate R(γ) as a function of the SNR γ
When the SNR γ is random, so is the rate R(γ)
Outage capacity: The rate R_out we can achieve with outage probability P_out:
  P_out = P(R(γ) ≤ R_out)

Example:
◦ Suppose the system has 20 MHz bandwidth and the rate is 60% of Shannon capacity
◦ The average SNR is 20 dB
◦ What is the outage capacity for 1% outage, assuming Rayleigh fading?

Solution:
◦ From earlier, for Rayleigh fading, the SNR achievable at the outage probability is
  γ ≈ γ̄ + 10 log_10(P_out) = 20 + 10 log_10(0.01) = 20 − 20 = 0 dB
◦ In linear scale, γ = 1
◦ Outage capacity: R_out = 0.6 × 20 × log_2(1 + 1) = 12 Mbps
◦ At the average SNR, the rate would be R = 0.6 × 20 × log_2(1 + 100) ≈ 80 Mbps
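A sketch verifying the example with the exact outage SNR rather than the dB approximation:

```matlab
% 1% outage capacity: 20 MHz, rate = 60% of Shannon, avg SNR 20 dB, Rayleigh.
W     = 20e6;  Pout = 0.01;  gbar = 10^(20/10);
g_out = -gbar*log(1 - Pout);           % exact 1%-outage SNR (about 1.005)
Rout  = 0.6*W*log2(1 + g_out);         % about 12 Mbps
Ravg  = 0.6*W*log2(1 + gbar);          % about 80 Mbps at the average SNR
fprintf('R_out = %.1f Mbps, R(avg) = %.1f Mbps\n', Rout/1e6, Ravg/1e6);
```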
System Implications for Outage
With flat and slow Rayleigh fading, we need to add a large fade margin

Channel coding does not mitigate the fading
◦ Fading causes all bits to fail together
◦ Coding may still be useful (e.g., against noise across the symbols)

Possible solutions?
◦ If there is motion, perhaps we can retransmit later
◦ Go to a lower rate (which needs less SNR)
◦ Just accept that some locations are in outage
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
IID Fading Model
[Figure: coding blocks with fast and/or frequency-selective fading]

Simple mathematical model:
  r[n] = h[n] s[n] + w[n],  n = 1, …, N
Ergodic Capacity
IID fading model: r[n] = h[n] s[n] + w[n],  w[n] ~ CN(0, N_0)
◦ Channel gains h[n] are i.i.d. with some distribution

Ergodic capacity: Theoretical maximum rate per symbol
◦ Assume an average transmit power limit E|s[n]|² = E_s
◦ Maximum taken over all codes and blocklengths
◦ No computational limits

Theorem: The ergodic capacity of an IID fading channel is:
  C = E[log_2(1 + γ)],  γ = |h|² E_s / N_0
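A minimal Monte Carlo sketch of this expectation for Rayleigh fading (base MATLAB; the comparison curve is the AWGN capacity at the same average SNR):

```matlab
% Ergodic capacity of IID Rayleigh fading vs. AWGN at the same average SNR.
gbar_dB = 0:5:30;
C = zeros(size(gbar_dB));
for k = 1:numel(gbar_dB)
    gbar = 10^(gbar_dB(k)/10);
    h    = (randn(1e6,1) + 1j*randn(1e6,1))/sqrt(2);  % E|h|^2 = 1
    C(k) = mean(log2(1 + gbar*abs(h).^2));            % E[log2(1+g)]
end
plot(gbar_dB, C, gbar_dB, log2(1 + 10.^(gbar_dB/10))); grid on;
legend('ergodic, Rayleigh', 'AWGN, same avg SNR');
xlabel('Average SNR (dB)'); ylabel('bps/Hz');
```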
Shannon Ergodic Capacity Key Remarks
From the previous slide, the ergodic capacity is:
  C = E[log_2(1 + γ)],  γ = |h|² E_s / N_0

Theoretical result: Needs infinite computation and delay
◦ We will look at the performance of real codes next

If the TX knew the channel, it could theoretically get a slightly higher rate
◦ Uses a method called water-filling
◦ Places more power on symbols with better SNR
◦ The gain is typically not large and is rarely used in practical wireless systems
Comparing Ergodic and Flat Capacity
Ergodic fading capacity is always lower than the capacity of a constant channel
◦ Keeping the average SNR the same
◦ This fact follows from Jensen's inequality: E[log_2(1 + γ)] ≤ log_2(1 + E[γ]), since log_2(1 + γ) is concave in γ

Conclusions:
◦ We should try to code over a large number of fading realizations
◦ In this case, the capacity loss is theoretically small
◦ Much better than the case of uncoded modulation
Small-Scale and Large-Scale Fading
Up to now, we have considered variations due to small-scale fading
◦ Variations from constructive or destructive interference of multipath components
◦ May or may not cause variations within a coding block
◦ Ex: Variations within a few wavelengths around one location in an office area

[Figure: SNR map with a small-scale region around one location]
Analysis with Small- and Large-Scale Fading
Suppose the SNR varies as γ(u, v):
◦ u: Vector of large-scale parameters, e.g., distance, angles, shadowing, etc.
◦ v: Small-scale parameters, e.g., the time-frequency location of a degree of freedom

If fading in the coding block can be modeled as a large number of i.i.d. samples of v:
◦ Ergodic capacity is R(u) = E_v[log_2(1 + γ(u, v))]
◦ Take the average over the small-scale, but NOT the large-scale, parameters
◦ Rate is a function of u
◦ The outage probability is then: P(R(u) ≤ R_tgt)
Example: Rate CDF Calculation
Large-scale model:
◦ SNR due to large-scale variations: γ̄(d) = γ_0 (d_0/d)²  [simple model just for this exercise]
◦ γ_0 = 10 dB and d_0 = 100 m
◦ Distances d vary uniformly in [50, 200] m

Small-scale model:
◦ Variation within a location is Rayleigh
◦ SNR at a particular time-frequency DoF is γ = γ̄(d) v
◦ v can be modeled as exponential with unit mean

Plotted:
◦ SE under a constant model (slow and flat fading)
◦ SE under IID fading at each distance d

See demo
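A sketch of this demo (base MATLAB; the sample sizes are our choices):

```matlab
% Rate CDF with large-scale path loss and Rayleigh small-scale fading.
nloc = 1e4;  nss = 1e3;
g0   = 10^(10/10);  d0 = 100;
d    = 50 + 150*rand(nloc,1);           % distances uniform in [50,200] m
gbar = g0*(d0./d).^2;                   % large-scale average SNR per location

v      = -log(rand(nloc,nss));          % unit-mean exponential small-scale SNRs
SEiid  = mean(log2(1 + gbar.*v), 2);    % ergodic SE per location (avg over v)
SEflat = log2(1 + gbar.*v(:,1));        % slow/flat: one fading draw per block

p = (1:nloc)'/nloc;                     % empirical CDFs
plot(sort(SEiid), p, sort(SEflat), p); grid on;
legend('IID fading (ergodic)', 'slow/flat (single draw)');
xlabel('Spectral efficiency (bps/Hz)'); ylabel('CDF');
```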
In-Class Exercise
Indoor environment
Look at large-scale and small-scale fading
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
Coded Communication on an AWGN Channel
[Block diagram: Info bits b[k] → Channel encoder → Coded bits c[k] → Symbol mapping → TX symbols s[n] → Channel with noise → RX symbols r[n] → Soft demod → LLR[k] → Channel decoder → RX info bits b̂[k]]

r[n] = s[n] + w[n],  w[n] ~ CN(0, N_0)

All details can be found in the digital communications class
Uncoded vs. Coded Modulation
Uncoded modulation: [Info bits → Symbol mapping → Complex symbols]
◦ Modulate the raw information bits
◦ One symbol at a time
◦ If any symbol is in error, the data packet is lost!

Coded modulation: [Info bits → Coding (info block + parity) → Coded bit block → Symbol mapping → Block of symbols]
◦ Transmit in blocks (also called frames)
◦ Add extra parity bits to each block for reliability
◦ Decode the entire block together
Key Parameters of Block Codes
◦ Information block length K, coded block length N
◦ Code rate r = K/N
Coded Communication on an AWGN Channel
[Block diagram repeated: encoder → symbol mapping → channel with noise → soft demod → decoder]

r[n] = s[n] + w[n],  w[n] ~ CN(0, N_0)
Soft Symbol Demodulation
[Diagram: coded bits (c_1, …, c_K) → Symbol mapping → s → r = s + w → Soft demod → LLR_1, …, LLR_K]
LLR for QPSK
TX symbol: s = ±A ± jA,  A = √(E_s/2)

[Figure: QPSK constellation showing the mapping of bits (c_0, c_1)]
QPSK LLR Visualized
[Figure: LLR_0 as a function of r_I; the sign of r_I determines which value of c_0 is more likely]

LLR for c_0:  LLR_0 = (4 A / N_0) r_I,  A = √(E_s/2)
LLR for c_1:  LLR_1 = (4 A / N_0) r_Q
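A minimal sketch of this soft demapper (the sign convention, bit value 1 mapping to +A, is our assumption):

```matlab
% Soft demod (LLRs) for QPSK with Gray mapping c0 -> I, c1 -> Q.
Es = 1;  N0 = 0.5;  A = sqrt(Es/2);
c  = randi([0 1], 1, 2);                  % two coded bits
s  = A*(2*c(1) - 1) + 1j*A*(2*c(2) - 1);  % bit 1 -> +A (assumed convention)
r  = s + sqrt(N0/2)*(randn + 1j*randn);   % AWGN

LLR0 = 4*A/N0 * real(r);                  % LLR for c0
LLR1 = 4*A/N0 * imag(r);                  % LLR for c1
```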
High Order Constellations
Higher-order constellations (e.g., 16- or 64-QAM)
Each constellation point is a function of multiple bits.
Example: For 16-QAM,
◦ the in-phase dimension r_I depends on bits (c_0, c_1)

[Figure: 16-QAM constellation, mapping of bits (c_1, c_2, c_3, c_4); one dimension has levels Gray-labeled 00, 01, 11, 10]
High Order Constellations
[Figure: one dimension with points s_00, s_10, s_11, s_01 labeled by two bits (c_1, c_2); received value r = s + n]

To create LLRs for individual bits, use the total probability rule:
  p(r|c_1) = (1/2) [ p(r|c_1, c_2 = 0) + p(r|c_1, c_2 = 1) ]

Resulting bitwise LLR:
  LLR for c_1 = log { [ p(r | (c_1,c_2) = (1,0)) + p(r | (c_1,c_2) = (1,1)) ] / [ p(r | (c_1,c_2) = (0,0)) + p(r | (c_1,c_2) = (0,1)) ] }
High Order Constellation Examples
[Plots: bitwise LLRs for 2 bits/dim (16-QAM) and 3 bits/dim (64-QAM)]
Approximate Bitwise LLR
Exact LLR:
  LLR_k = log(P_1/P_0),  P_1 = Σ_{s: bit k = 1} e^(−|r−s|²/N_0),  P_0 = Σ_{s: bit k = 0} e^(−|r−s|²/N_0)
◦ Can be too computationally expensive
◦ Max-log approximation: LLR_k ≈ (1/N_0) [ min_{s: bit k = 0} |r−s|² − min_{s: bit k = 1} |r−s|² ]
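A sketch comparing the two for one 4-PAM dimension of 16-QAM (the Gray labels match the earlier figure; the values are illustrative):

```matlab
% Exact vs. max-log bitwise LLRs for one 4-PAM dimension of 16-QAM.
lev  = [-3; -1; 1; 3];            % levels of one dimension
bits = [0 0; 0 1; 1 1; 1 0];      % Gray label (c1, c2) of each level
N0   = 1;  r = 0.7;               % received value on this axis
llrExact = zeros(1,2);  llrMaxLog = zeros(1,2);

for k = 1:2
    d1 = (r - lev(bits(:,k) == 1)).^2;     % distances to bit-k = 1 points
    d0 = (r - lev(bits(:,k) == 0)).^2;     % distances to bit-k = 0 points
    llrExact(k)  = log(sum(exp(-d1/N0))) - log(sum(exp(-d0/N0)));
    llrMaxLog(k) = (min(d0) - min(d1))/N0; % keep only the nearest points
end
disp([llrExact; llrMaxLog])
```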
Coded Communication on an AWGN Channel
[Block diagram repeated: encoder → symbol mapping → channel with noise → soft demod → decoder]

r[n] = s[n] + w[n],  w[n] ~ CN(0, N_0)
Maximum Likelihood Channel Decoding
[Block diagram: Info bits b → Channel coding → Coded bits c → Symbol mapping → s → Channel with noise → r → Demod → LLR[k] → Channel decoder → b̂[k]]

Channel coding: An information block b = (b_1, …, b_K) generates a codeword c = (c_1, …, c_N)
The receiver gets a vector r = (r_1, …, r_L), L = number of complex modulation symbols
Channel decoder: The goal is to estimate b (or equivalently c) from r
Ideally, we would use maximum likelihood decoding: ĉ = arg max_c p(r|c)
Decoding from LLRs
[Block diagram: b → Channel coding → c → Symbol mapping → s → Channel → r → Demod → LLRs z → Channel decoder → b̂]

LLRs sent to the decoder:  z_i = log [ P(r|c_i = 1) / P(r|c_i = 0) ]
Decoding from LLRs via Bitwise Approx
Bitwise independence approximation: Assume
  p(z|c) ≈ Π_{i=1}^{N} p(z_i|c_i)
◦ Ignores dependencies between the coded bits

Under the bitwise independence approximation, the optimal decoder is:
  ĉ = arg max_c p(z|c) = arg max_c Σ_{i=1}^{N} c_i z_i
◦ z_i = log[ p(r|c_i = 1) / p(r|c_i = 0) ] = LLR for coded bit i
◦ Proof in the digital communications class
Decoding Complexity
Channel decoding ideally selects the most likely codeword, a search over all 2^K candidates
◦ Exhaustive search is infeasible for practical block lengths, so practical codes impose structure that admits efficient decoders
Quest for the Shannon Limit
Shannon capacity formula and random codes, 1948
◦ Determines the capacity, but gives no practical code to achieve it
Hamming (7,4) code, 1950
Reed-Solomon codes based on polynomials over finite fields, 1960
◦ Used in the Voyager program, 1977, and CD players, 1982
Convolutional codes
◦ Viterbi algorithm, 1969. Widely used in cellular systems. (Viterbi later co-founds Qualcomm, which commercializes CDMA)
◦ Typically within 3-4 dB of capacity
LDPC codes
◦ Originally proposed by Gallager, 1963; re-discovered in 1996
◦ Similar iterative decoding technique as turbo codes
◦ Used in 5G systems
Convolutional Codes
Encode data through parallel binary (usually FIR) filters

[Figure: shift-register encoder producing parity streams, e.g., c1[t]]
Convolutional Code Performance
Performance of convolutional codes:
◦ > 5 dB better than uncoded BPSK at low BER
Simulation in MATLAB
MATLAB has excellent tools:
◦ Convolutional encoder / decoder
◦ LLR computation

See demo

[Plot: BER for a rate-1/2, K = 7 convolutional code with QPSK]
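A minimal sketch of such a simulation (assumes the Communications Toolbox; BPSK and hard-decision decoding keep it short, unlike the QPSK/soft-LLR demo):

```matlab
% Rate-1/2, K = 7 convolutional code with hard-decision Viterbi over AWGN.
trellis = poly2trellis(7, [171 133]);    % standard K = 7 generator polynomials
b    = randi([0 1], 1e4, 1);
c    = convenc(b, trellis);              % rate-1/2 encoding
x    = 1 - 2*c;                          % BPSK: 0 -> +1, 1 -> -1
r    = x + 0.5*randn(size(x));           % AWGN
chat = double(r < 0);                    % hard decisions
bhat = vitdec(chat, trellis, 35, 'trunc', 'hard');
ber  = mean(bhat ~= b)
```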
Turbo Codes
Turbo codes:
◦ Concatenation of two convolutional codes, typically IIR and short (K = 3)
◦ Interleaver: Randomly permutes the input bits

Output:
◦ The input bit, and
◦ Parity bits from each convolutional encoder
◦ With no puncturing, R = 1/3

Discovered in 1993:
◦ Berrou, Glavieux, Thitimajshima, 1993
◦ Able to achieve capacity within a fraction of a dB

Used in 3G and 4G standards
Turbo Code Iterative Decoding
The turbo decoder uses iterative message passing:
◦ Decode each convolutional code one at a time
◦ Use the posterior information of one code as the prior for the other
LDPC Codes
Code defined by a bipartite graph
◦ Connects n coded bits to n − k parity checks
◦ k information bits

[Figure: LDPC Tanner graph linking coded-bit nodes and parity-check nodes]
In-Class Problem
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
Coded Communication on a Fading Channel
[Block diagram: Info bits b[k] → Channel encoder → Interleaver → Symbol mapping → Fading channel → Equalization → De-interleaver → Channel decoder → Info bits b̂[k]]

New blocks relative to the AWGN chain: the interleaver/de-interleaver and the equalizer
Symbol Equalization via Inversion
Received noisy symbol:  r = h s + w,  w ~ CN(0, N_0)

Symbol equalization:
◦ Estimate s from r
◦ Also obtain a noise estimate (needed for the LLRs)

Channel inversion:
◦ Symbol estimate: ŝ = r/h = s + v,  v = w/h
◦ Noise estimate: E|v|² = (1/|h|²) E|w|² = N_0/|h|²
MMSE Symbol Equalization
Received noisy symbol:  r = h s + w,  w ~ CN(0, N_0)

MMSE estimation:
◦ Use a linear estimate ŝ = α r
◦ Select α to minimize E|s − ŝ|² = E|s − α r|²
◦ The standard solution is α = h^* E_s / (|h|² E_s + N_0)
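A sketch contrasting the two equalizers for one symbol (BPSK-like symbols for simplicity; the MMSE coefficient is the standard result quoted above):

```matlab
% Channel inversion vs. MMSE symbol equalization for one fading symbol.
Es = 1;  N0 = 0.5;
h  = (randn + 1j*randn)/sqrt(2);
s  = sqrt(Es)*sign(randn);                 % simple +/-sqrt(Es) symbol
r  = h*s + sqrt(N0/2)*(randn + 1j*randn);

sInv  = r/h;                               % channel inversion
alpha = conj(h)*Es/(abs(h)^2*Es + N0);     % MMSE coefficient
sMMSE = alpha*r;

varInv  = N0/abs(h)^2;                     % post-inversion noise variance
varMMSE = Es*N0/(abs(h)^2*Es + N0);        % MMSE error variance (reused for LLR scaling later)
```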
Interleaving and De-Interleaving
Problem: Fading is correlated in time
◦ This results in many consecutive faded bits
◦ Many codes perform poorly when errors are bunched together

Interleaver:
◦ Shuffles the bits before symbol mapping
◦ De-interleaving is performed on the LLRs
◦ Randomizes the locations of errors
◦ Removes the time correlations
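A minimal sketch of the interleave / LLR de-interleave pair (the placeholder LLRs stand in for the channel and soft demod):

```matlab
% Random interleaving of coded bits; de-interleaving applied to the LLRs.
n    = 1024;
c    = randi([0 1], n, 1);     % coded bits
perm = randperm(n);            % interleaver pattern (shared with the RX)
cInt = c(perm);                % interleave before symbol mapping

% ... map, pass through the fading channel, equalize, soft demod -> llrInt ...
llrInt = randn(n, 1);          % placeholder LLRs for illustration only
llr = zeros(n, 1);
llr(perm) = llrInt;            % de-interleave: llr(i) now corresponds to c(i)
```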
Simulation
Simulation:
◦ Convolutional code, rate 1/2, with QPSK
◦ Constraint length K = 7
◦ Plotted: block error rate (BLER) vs. E_b/N_0
Scaling the LLRs in MATLAB
Noise variance after equalization:
  σ_n² = E_s N_0 / (|h_n|² E_s + N_0)
◦ Changes with the channel gain
◦ Different symbols have different σ_n²

Recall that the approximate LLR scales as: LLR ∝ 1/σ_n²

In MATLAB:
◦ Compute the LLRs with noise variance = 1
◦ Then scale by the per-symbol noise variance
◦ The built-in scaling does not appear to work

[Plots: manual noise scaling works; MATLAB built-in scaling does not]
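A sketch of the manual per-symbol scaling for QPSK with channel inversion (for which the post-equalization variance is N_0/|h|²; the slide's formula is the MMSE variant):

```matlab
% Per-symbol LLR scaling after equalization (QPSK, Gray c0 -> I, c1 -> Q).
Es = 1;  N0 = 0.2;  n = 1000;  A = sqrt(Es/2);
c  = randi([0 1], n, 2);
s  = A*(2*c(:,1) - 1) + 1j*A*(2*c(:,2) - 1);
h  = (randn(n,1) + 1j*randn(n,1))/sqrt(2);
r  = h.*s + sqrt(N0/2)*(randn(n,1) + 1j*randn(n,1));

z    = r./h;                   % channel inversion
sig2 = N0./abs(h).^2;          % per-symbol noise variance after inversion
llr0 = 4*A*real(z)./sig2;      % scale each LLR by 1/sigma_n^2
llr1 = 4*A*imag(z)./sig2;
```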
Simulating in MATLAB
[Code panels: transmitter, fading channel and noise, receiver]
Summary
Fading: Causes variations in SNR
Uncoded modulation:
◦ Dramatically increases error rate
◦ Must add significant fade margin
Outline
Uncoded Modulation over Fading Channels
Capacity with Coding over Fading Channels: Outage Capacity
Capacity with Coding over Fading Channels: Ergodic Capacity
Review: Coding over an AWGN Channel
Coding over Fading Channels
Capacity with Bit-Interleaved Coded Modulation
Practical Fading Channel Capacity
[Block diagram: Info bits b[k] → Channel encoder → Coded bits c[k] → Interleaver → Symbol mapping → Fading channel → Equalization → De-interleaver → Bitwise LLRs z[k] → Channel decoder]
Bitwise Channel
[Block diagram as above; everything between the coded bits and the LLRs is viewed as a single bitwise channel]
BICM Theorem
Consider the channel from the coded bits to the LLRs:
  (c_1, …, c_K) → bitwise channel (noise and channel gain) → (z_1, …, z_K)
◦ Theoretical capacity = I(c; z)
◦ Information bits / coded bit ∈ [0, 1]

Theorem: The mutual information is bounded below:
  I(c; z) ≥ 1 − E[BCE(c, z)]
◦ Binary cross-entropy (in bits): BCE(c, z) = log_2(1 + e^(−z)) + (1 − c) z log_2 e

The bound is exact when:
◦ z_k = log[ P(r_k|c_k = 1) / P(r_k|c_k = 0) ] for some received data r_k
◦ the z_k are independent given c
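A sketch estimating this bound for one QPSK bit on AWGN (exact LLRs, so the bound should be tight):

```matlab
% Estimate 1 - E[BCE(c, z)] from simulated LLRs (BCE computed in bits).
Es = 1;  N0 = 0.5;  n = 1e5;  A = sqrt(Es/2);
c  = randi([0 1], n, 1);
x  = A*(2*c - 1);                          % one bit on one real dimension
r  = x + sqrt(N0/2)*randn(n, 1);
z  = 4*A*r/N0;                             % exact LLR for this bit

bce  = (log(1 + exp(-z)) + (1 - c).*z)/log(2);  % binary cross-entropy, bits
rate = 1 - mean(bce)                            % lower bound on I(c; z)
```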
BICM Capacity
Model: r = h s + w
◦ s = QAM symbol
◦ AWGN: h = 1
◦ Fading: h i.i.d. complex normal

[Plot: BICM capacity vs. SNR for AWGN and Rayleigh fading]

Loss is ~2 dB with the optimal MCS choice