TSKS01 Digital Communication: Lecture 3
Digital Communication
Lecture 3
Stochastic Processes, Hypothesis Testing, Introduction to Digital Modulation
Emil Björnson
Unit-energy condition:
∫₋∞^∞ h²(t) dt = 1   and   ∫₋∞^∞ |H(f)|² df = 1
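As a numerical aside (not from the slides): the two unit-energy integrals above agree because of Parseval's theorem, which is easy to check numerically. The pulse shape and step size below are assumed example values.

```python
import numpy as np

# Minimal sketch: check ∫ h²(t) dt = ∫ |H(f)|² df = 1 numerically (Parseval).
dt = 1e-3                                   # time step (assumed)
t = np.arange(0, 1, dt)                     # 1-second observation window
h = np.sqrt(2) * np.sin(2 * np.pi * 5 * t)  # assumed unit-energy example pulse

energy_time = np.sum(h**2) * dt             # ≈ ∫ h²(t) dt

H = np.fft.fft(h) * dt                      # samples of H(f)
df = 1 / (len(h) * dt)                      # frequency resolution
energy_freq = np.sum(np.abs(H)**2) * df     # ≈ ∫ |H(f)|² df

print(energy_time, energy_freq)             # both ≈ 1
```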
Properties: r_{X,Y}(t₁, t₂) = r_{Y,X}(t₂, t₁) and r_{X,X}(t₁, t₂) = r_X(t₁, t₂)
Definition: Sample X(t) and Y(t) at time instants t̄₁ = (t₁,₁, …, t₁,N)
and t̄₂ = (t₂,₁, …, t₂,N). The processes are independent if
F_{X(t̄₁),Y(t̄₂)}(x̄, ȳ) = F_{X(t̄₁)}(x̄) F_{Y(t̄₂)}(ȳ)
for every N, t̄₁, and t̄₂.
Definition: Sample X(t) and Y(t) at time instants t̄₁ = (t₁,₁, …, t₁,N)
and t̄₂ = (t₂,₁, …, t₂,N). The processes are jointly Gaussian if X(t̄₁)
and Y(t̄₂) are jointly Gaussian variables for every N, t̄₁, and t̄₂.
Theorem: If X(t) and Y(t) are jointly Gaussian processes and
uncorrelated, then they are also independent.
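The "jointly Gaussian" assumption in the theorem is essential. A sketch of a counterexample (my own illustration, not on the slide): with a random sign R, the pair X and Y = R·X is marginally Gaussian and uncorrelated, yet clearly dependent, because the pair is not jointly Gaussian.

```python
import numpy as np

# Uncorrelated does NOT imply independent without joint Gaussianity.
# X ~ N(0,1), R = ±1 with equal probability, Y = R*X:
# X and Y are each Gaussian and uncorrelated, but |Y| = |X| always.
rng = np.random.default_rng(2)
n = 1_000_000
x = rng.standard_normal(n)
r = rng.choice([-1.0, 1.0], size=n)
y = r * x

cov = np.mean(x * y)                            # ≈ 0: uncorrelated
lhs = np.mean(np.abs(x) * np.abs(y))            # E[|X||Y|] = E[X²] = 1
rhs = np.mean(np.abs(x)) * np.mean(np.abs(y))   # E[|X|]E[|Y|] = 2/π ≈ 0.64
print(cov, lhs, rhs)                            # lhs ≠ rhs, so dependent
```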
Cross-correlation:

A WSS stochastic process X(t) is fed to two LTI systems:

LTI system h₁(t):  Y₁(t) = ∫₋∞^∞ h₁(τ) X(t − τ) dτ
LTI system h₂(t):  Y₂(t) = ∫₋∞^∞ h₂(τ) X(t − τ) dτ

Definition: h₁(t) and h₂(t) are orthogonal filters if ∫₋∞^∞ h₁(τ) h₂(τ) dτ = 0

Property: If X(t) is WSS with m_X = 0 and R_X(f) = R₀, then

m_{Y_k} = H_k(0) m_X = 0  for k = 1, 2

r_{Y₁,Y₂}(t, t) = E[Y₁(t) Y₂(t)] = ⋯ = R₀ ∫₋∞^∞ h₁(τ) h₂(τ) dτ = 0 = m_{Y₁} m_{Y₂}

i.e., the outputs of orthogonal filters driven by white noise are uncorrelated.
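The uncorrelatedness of orthogonal-filter outputs can be checked empirically. A minimal sketch, assuming a simple orthogonal FIR pair and unit-variance white Gaussian input (all example values, not from the slides):

```python
import numpy as np

# White Gaussian noise through two orthogonal FIR filters:
# the output cross-correlation at equal times should be ≈ 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # WSS white process, m_X = 0, R_X(f) = R₀ = 1

h1 = np.array([1.0, 1.0]) / np.sqrt(2)    # orthogonal pair: sum(h1 * h2) = 0
h2 = np.array([1.0, -1.0]) / np.sqrt(2)

y1 = np.convolve(x, h1, mode="valid")
y2 = np.convolve(x, h2, mode="valid")

cross = np.mean(y1 * y2)                  # estimate of r_{Y1,Y2}(t, t)
print(cross)                              # close to 0
```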
Source → Source encoder → Channel encoder → Modulator

Bit sequence example: 00 10 11 01 11 00 01 10

Represented using time-delimited signals of duration T seconds (T ~ 1/bandwidth).
Mapping: each bit pattern → one signal waveform
Signal energy: E = ∫₀^T s²(t) dt
Source → Vector sender → Vector channel (+ vector noise) → Vector detector → Destination
Hypothesis Testing
§ Goal: Guess properties of a continuous stochastic vector X̄
§ Observe a realization X̄ = x̄
§ M hypotheses H₀, …, H_{M−1} with probabilities Pr(Hᵢ) for i = 0, …, M − 1
§ Select the "best" hypothesis based on x̄

Decision rule Ĥ(x̄): what criterion to use?
§ MAP can also be written as k = argmax_{0 ≤ i ≤ M−1} Pr(Hᵢ) f_{X̄|H}(x̄ | Hᵢ is true)
§ MAP = ML if the hypotheses are equally probable
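The MAP rule is one argmax over the hypotheses. A minimal sketch for scalar Gaussian observations (the means, priors, and noise level are assumed example values):

```python
import numpy as np

def map_decision(x, means, priors, sigma):
    """Return the index k = argmax_i Pr(H_i) f_{X|H_i}(x) for a scalar x,
    with Gaussian likelihoods of common standard deviation sigma."""
    likelihoods = np.exp(-(x - np.asarray(means))**2 / (2 * sigma**2))
    return int(np.argmax(np.asarray(priors) * likelihoods))

# Equal priors make MAP = ML; for means -1 and +1 the threshold is x = 0:
print(map_decision(-0.3, means=[-1, +1], priors=[0.5, 0.5], sigma=1.0))  # 0
print(map_decision(+0.3, means=[-1, +1], priors=[0.5, 0.5], sigma=1.0))  # 1
```

With unequal priors the same observation can flip: a strong prior on one hypothesis moves the effective threshold.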
2015-09-11 TSKS01 Digital Communication - Lecture 3 15
Example: Signal in Noise
Hypothesis 0: S = −1, thus X|S ∼ N(−1, σ_W²)

f_{X|S}(x | −1) = (1/√(2πσ_W²)) e^{−(x+1)²/(2σ_W²)}

Hypothesis 1: S = +1, thus X|S ∼ N(+1, σ_W²)

f_{X|S}(x | +1) = (1/√(2πσ_W²)) e^{−(x−1)²/(2σ_W²)}
Example: Signal in Noise cont’d
H₀ is "best": f_{X|S}(x | −1) > f_{X|S}(x | +1)
⟺ e^{−(x+1)²/(2σ_W²)} > e^{−(x−1)²/(2σ_W²)}
⟺ x < 0
[Figure: the two conditional probability densities vs. observation x. Hypothesis 0 is most likely for x < 0; Hypothesis 1 is most likely for x > 0.]
Error prob.: P_e = Pr(H₀) Pr(Ĥ(x̄) = H₁ | H₀) + Pr(H₁) Pr(Ĥ(x̄) = H₀ | H₁) = Q(1/σ_W)
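The error probability can be checked by Monte Carlo simulation. A sketch assuming equiprobable symbols and σ_W = 0.8 (an arbitrary example value):

```python
import numpy as np
from math import erfc, sqrt

def Q(z):
    """Gaussian tail function Q(z) = Pr(N(0,1) > z)."""
    return 0.5 * erfc(z / sqrt(2))

rng = np.random.default_rng(1)
sigma_w = 0.8
n = 500_000
s = rng.choice([-1.0, 1.0], size=n)        # equiprobable hypotheses S = ±1
x = s + sigma_w * rng.standard_normal(n)   # observation X = S + W
s_hat = np.where(x < 0, -1.0, 1.0)         # ML threshold detector at x = 0
p_e = np.mean(s_hat != s)                  # empirical error probability
print(p_e, Q(1 / sigma_w))                 # both near Q(1/0.8) ≈ 0.106
```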
[Figure: the two conditional densities with the tail areas beyond the threshold x = 0 shaded; each shaded area equals Q(1/σ_W).]
Examples:
Corresponding vectors:

Sender: Vector sender → Modulator
[Figure: signal constellation diagrams showing the vectors s̄₀, s̄₁, s̄₂ in the orthonormal basis (φ₀, φ₁), with angles θ₀, θ₁.]
Vectors:
ON-basis:
Inner products:
Claim:
Claim:
Proof:
Energy of a signal: the squared length of its vector.
Relation: ⇒
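The vector representation and the energy/squared-length relation can be illustrated numerically. A sketch assuming a two-function orthonormal basis of sinusoids (the waveforms and coefficients are example choices, not from the slides):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 1, dt)                         # symbol interval T = 1 (assumed)

phi0 = np.sqrt(2) * np.cos(2 * np.pi * 3 * t)   # orthonormal basis functions
phi1 = np.sqrt(2) * np.sin(2 * np.pi * 3 * t)

s = 2.0 * phi0 - 1.0 * phi1                     # signal s(t) with vector (2, -1)

def inner(a, b):
    """Inner product <a, b> = ∫ a(t) b(t) dt, approximated on the grid."""
    return np.sum(a * b) * dt

s_vec = np.array([inner(s, phi0), inner(s, phi1)])   # coordinates via inner products
energy = inner(s, s)                                 # ∫ s²(t) dt
print(s_vec, energy)       # ≈ [2, -1] and ≈ 5 = ||s_vec||²
```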
[Figure: signal vectors s̄₀, s̄₁, s̄₂ in the orthonormal basis (φ₀, φ₁).]