8. Part II - Channel Code Introduction
Channel Codes
1. Introduction
2. Repetition Code
5. Revision
Given a BSC (binary symmetric channel):
❑ Assume error rate p = 10⁻³
Consider a repetition code:
In the transmitter: 111 substitutes for 1, and 000 substitutes for 0.
• Encoding example: 1 . 0 . 1 . → 111 . 000 . 111 .
• Decoding rule: Majority vote!
Examples of received codewords:
110 . 000 . 111 . → 1 . 0 . 1 . Error-free!
111 . 000 . 010 . → 1 . 0 . 0 . Error!
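The encoding and majority-vote decoding above can be sketched in Python (a minimal illustration; the function names `encode` and `decode` are my own, not from the lecture):

```python
# Minimal sketch of a rate-1/3 repetition code with majority-vote decoding.
# Names (encode, decode) are illustrative, not from the lecture.

def encode(bits, n=3):
    """Repeat each bit n times: '101' -> '111000111'."""
    return "".join(b * n for b in bits)

def decode(received, n=3):
    """Majority vote over each block of n received bits."""
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return "".join("1" if b.count("1") > n // 2 else "0" for b in blocks)

print(encode("101"))        # 111000111
print(decode("110000111"))  # 101 -- one flip per block is corrected
print(decode("111000010"))  # 100 -- two flips in the last block cause an error
```

The two `decode` calls reproduce the received-codeword examples above: a single flipped bit per block is corrected, while two flips in one block defeat the majority vote.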
Khoa Điện tử - Viễn thông Digital Communications and Coding
Trường Đại học Công nghệ, ĐHQGHN 10 Truyền thông số và mã hóa
Analysis of Repetition Code
➔ Error rate (probability of incorrectly decoding a bit, assuming 111 was sent):
P_e1 = Pr(001) + Pr(010) + Pr(100) + Pr(000)
     = 3p²(1 − p) + p³ = 3·10⁻⁶·(1 − 10⁻³) + 10⁻⁹ ≈ 3·10⁻⁶
As n gets bigger, the decoding error probability goes down. Here is
what happens with a 10% probability of transmission error (p = 0.1):
Number of repetitions    Probability of incorrect decoding
3                        0.028
5                        0.0086
7                        0.002728
9                        0.000891
It seems that we can push the error probability as close to zero as
we wish by using more repetitions.
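The table values follow from summing the binomial probabilities of more than n/2 bit flips. A sketch (the function name `repetition_error_prob` is illustrative; computing the n = 9 entry this way gives 0.000891):

```python
# Decoding fails when more than n//2 of the n repeated bits are flipped.
# Sketch reproducing the table for p = 0.1; comb is in the stdlib math module.
from math import comb

def repetition_error_prob(n, p):
    """P(majority-vote error) for an n-fold repetition code over a BSC(p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (3, 5, 7, 9):
    print(n, round(repetition_error_prob(n, 0.1), 6))
```

The same function evaluated at p = 10⁻³ and n = 3 recovers the ≈ 3·10⁻⁶ error rate computed above.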
If we let the number of repetitions grow without bound, we can approach
perfect reliability (no errors at all!), although the code rate 1/n shrinks
toward zero at the same time.
The red curve is the signal; the black dots are its samples. The dashed curve is a different
signal that would produce the same sample values at the same points. The plot shows that the
sampling frequency here is insufficient: the original signal cannot be uniquely reconstructed.
(Figure labels: "Overlap balls", "Non-overlap balls", "Perfect!")
❑ The situation we want to avoid is one in which it is ambiguous which codeword the
received signal came from. In the first image the two error balls overlap, introducing
this ambiguity. So we must choose the words of the vocabulary sufficiently far apart,
as in the second image, so there is no ambiguity.
❑ In order to have as large a vocabulary as possible, we clearly want to pack as many
non-overlapping balls as possible into our space of signals.
(Figure annotations: 1-dimensional signal, a ➔ aaaaaaaaaaa, decision boundary)
Figure 1: Repetition coding packs points inefficiently in the high-dimensional signal space.
Figure 2: The number of noise spheres that can be packed into the y-sphere yields
the maximum number of codewords that can be reliably distinguished.
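"Choosing codewords far apart" can be made concrete with the Hamming distance: a code whose minimum distance is d corrects t = ⌊(d − 1)/2⌋ bit errors, which is exactly the non-overlapping-balls condition. A small sketch (illustrative code, not from the slides):

```python
# Sketch: the minimum Hamming distance d of a code determines how many
# bit errors t = (d - 1) // 2 it can correct (non-overlapping error balls).

def hamming_distance(a, b):
    """Number of positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

# The 3-fold repetition code has two codewords at distance 3,
# so its error balls of radius 1 do not overlap.
codewords = ["000", "111"]
d = min(hamming_distance(a, b)
        for i, a in enumerate(codewords)
        for b in codewords[i + 1:])
t = (d - 1) // 2
print(d, t)  # 3 1 -> any single bit error is corrected
```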
I(X; Y) = H(Y) − H(Y ∣ X)
        = H(Y) − Σₓ p(x) H(Y ∣ X = x)
        = H(Y) − Σₓ p(x) H(p)
        = H(Y) − H(p)
        ≤ 1 − H(p)
where the last inequality follows because Y is a binary random variable, so H(Y) ≤ 1,
with equality H(Y) = 1 when the input distribution is uniform. Hence the BSC capacity
is C = 1 − H(p).
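The capacity C = 1 − H(p) is easy to evaluate numerically (a sketch; `binary_entropy` and `bsc_capacity` are illustrative names):

```python
# Sketch: capacity of a BSC(p) is C = 1 - H(p), where H is the binary entropy.
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))  # 1.0 -- noiseless channel carries one bit per use
print(bsc_capacity(0.5))  # 0.0 -- completely noisy channel carries nothing
print(round(bsc_capacity(0.1), 4))
```

Note that capacity is symmetric in p: a channel that flips every bit (p = 1) is as useful as a noiseless one, while p = 0.5 is the worst case.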
The channel coding theorem (No proof)
Capacity in BSC
❑ Hamming Code