DC-04: Optimum Receivers for AWGN Channels

Contents
Waveform & Vector Channel Models (1)
Waveform & Vector Channel Models (2)
◊ Based on the observed signal r(t), the receiver decides which message m, 1 ≤ m ≤ M, was transmitted.
◊ Optimum decision: minimize the error probability $P_e = P[\hat{m} \neq m]$.
◊ Any orthonormal basis {φj(t), 1 ≤ j ≤ N} can be used for the expansion of a zero-mean white Gaussian process (Problem 2.8-1).
◊ The resulting coefficients are i.i.d. zero-mean Gaussian random variables with variance N0/2.
◊ Therefore {φj(t), 1 ≤ j ≤ N} can be used for the expansion of the noise n(t).
◊ Using {φj(t), 1 ≤ j ≤ N}, r(t) = sm(t) + n(t) has the vector form r = sm + n.
◊ All vectors are N-dimensional.
◊ The components of n are i.i.d. zero-mean Gaussian with variance N0/2.
Waveform & Vector Channel Models (3)
Waveform & Vector Channel Models (4)
Chapter 4.1-1
Optimal Detection for General Vector Channel
4.1-1 Optimal Detection for General Vector Channel (1)

◊ The noise vector n has i.i.d. zero-mean Gaussian components with variance $\sigma^2 = N_0/2$, so its joint PDF is
$$p(\mathbf{n}) = \left(\frac{1}{\pi N_0}\right)^{N/2} e^{-\sum_{j=1}^{N} n_j^2 / 2\sigma^2} = \left(\frac{1}{\pi N_0}\right)^{N/2} e^{-\|\mathbf{n}\|^2 / N_0}$$
◊ A more general vector channel model:
MAP and ML Receivers
The Decision Region
The Error Probability (1)
The Error Probability (2)

◊ The signal set exhibits a certain symmetry property.
◊ Relation between symbol error probability and bit error probability:
$$\frac{P_e}{k} \le P_b \le P_e \le k P_b$$
Sufficient Statistics (An Example)

◊ Block diagram: sm → Channel → r → G(r) → ρ → Detector → m̂
Chapter 4.2
Waveform and Vector AWGN Channels
Waveform & Vector AWGN Channels (1)

◊ By the Gram-Schmidt procedure, we derive an orthonormal basis {φj(t), 1 ≤ j ≤ N} and the vector representation of the signals {sm, 1 ≤ m ≤ M}.
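◊ A minimal sketch of the Gram-Schmidt step (not from the text; it assumes waveforms sampled on a uniform grid with step dt, and the two example pulses are purely illustrative):

```python
# Sketch: Gram-Schmidt orthonormalization of sampled waveforms.
import numpy as np

def gram_schmidt(signals, dt, tol=1e-12):
    """Return orthonormal basis functions (rows) spanning the given signals."""
    basis = []
    for s in signals:
        r = s.astype(float).copy()
        for phi in basis:
            r -= np.sum(r * phi) * dt * phi      # subtract projection <r, phi> phi
        energy = np.sum(r**2) * dt               # residual energy
        if energy > tol:
            basis.append(r / np.sqrt(energy))    # normalize to unit energy
    return np.array(basis)

# Example: two rectangular pulses on [0, T]
T, dt = 1.0, 1e-3
t = np.arange(0, T, dt)
s1 = np.where(t < T/2, 1.0, 0.0)
s2 = np.ones_like(t)
phis = gram_schmidt([s1, s2], dt)
# Vector representation s_mj = <s_m, phi_j>
S = np.array([[np.sum(s * phi) * dt for phi in phis] for s in (s1, s2)])
print(S)
```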
Waveform & Vector AWGN Channels (2)

◊ $s_m(t) = \sum_{j=1}^{N} s_{mj}\,\varphi_j(t)$, where $s_{mj} = \langle s_m(t), \varphi_j(t)\rangle$
◊ $r(t) = \sum_{j=1}^{N} (s_{mj} + n_j)\,\varphi_j(t) + n'(t)$
◊ Define $r_j = s_{mj} + n_j$, where
$$r_j = \langle r(t), \varphi_j(t)\rangle = \langle s_m(t) + n(t), \varphi_j(t)\rangle = \langle s_m(t), \varphi_j(t)\rangle + \langle n(t), \varphi_j(t)\rangle$$
Waveform & Vector AWGN Channels (3)

◊ Prove that the noise components {nj} are i.i.d. zero-mean Gaussian with variance N0/2:
$$n_j = \int_{-\infty}^{\infty} n(t)\,\varphi_j(t)\,dt$$
◊ Mean:
$$E[n_j] = E\!\left[\int_{-\infty}^{\infty} n(t)\varphi_j(t)\,dt\right] = \int_{-\infty}^{\infty} E[n(t)]\,\varphi_j(t)\,dt = 0 \quad \text{(zero mean)}$$
◊ Covariance (using the fact that n(t) is white):
$$\mathrm{COV}[n_i n_j] = E[n_i n_j] - E[n_i]E[n_j] = E\!\left[\int_{-\infty}^{\infty} n(t)\varphi_i(t)\,dt \int_{-\infty}^{\infty} n(s)\varphi_j(s)\,ds\right]$$
$$= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} E[n(t)n(s)]\,\varphi_i(t)\varphi_j(s)\,dt\,ds = \frac{N_0}{2}\int_{-\infty}^{\infty}\!\left[\int_{-\infty}^{\infty}\delta(t-s)\varphi_i(t)\,dt\right]\varphi_j(s)\,ds$$
$$= \frac{N_0}{2}\int_{-\infty}^{\infty}\varphi_i(s)\varphi_j(s)\,ds = \begin{cases} N_0/2, & i = j \\ 0, & i \neq j \end{cases}$$
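◊ A small numerical check of this property (a sketch; white noise is approximated by discrete samples of variance N0/(2·dt), an assumption of the discretization, not a statement from the text):

```python
# Sketch: project discrete-time "white" Gaussian noise onto an orthonormal basis
# and check that the coefficients are ~ N(0, N0/2) and uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
N0, T, dt = 2.0, 1.0, 1e-2
t = np.arange(0, T, dt)

# Two orthonormal basis functions on [0, T]
phi1 = np.where(t < T/2, np.sqrt(2/T), 0.0)
phi2 = np.where(t >= T/2, np.sqrt(2/T), 0.0)

trials = 20000
n = rng.normal(0.0, np.sqrt(N0/(2*dt)), size=(trials, t.size))  # white-noise samples
n1 = n @ phi1 * dt   # n_j = integral of n(t) phi_j(t) dt
n2 = n @ phi2 * dt

print("var(n1) ~", n1.var(), " var(n2) ~", n2.var(), " (expect N0/2 =", N0/2, ")")
print("cov(n1,n2) ~", np.mean(n1*n2), " (expect 0)")
```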
Waveform & Vector AWGN Channels (4)

◊ The residual noise $n'(t) = n(t) - \sum_{i=1}^{N} n_i\varphi_i(t)$ is uncorrelated with each $n_j$:
$$E[n'(t)\,n_j] = E\!\left[n(t)\int_{-\infty}^{\infty} n(s)\varphi_j(s)\,ds\right] - E\!\left[n_j\sum_{i=1}^{N} n_i\varphi_i(t)\right]$$
$$= \frac{N_0}{2}\int_{-\infty}^{\infty}\delta(t-s)\varphi_j(s)\,ds - \frac{N_0}{2}\varphi_j(t) = \frac{N_0}{2}\varphi_j(t) - \frac{N_0}{2}\varphi_j(t) = 0$$
Waveform & Vector AWGN Channels (5)
Chapter 4.2-1
Optimal Detection for the Vector AWGN Channel
4.2-1 Optimal Detection for the Vector AWGN Channel (1)

$$\hat{m} = \arg\max_{1\le m\le M} \left[ P_m\, e^{-\|\mathbf{r}-\mathbf{s}_m\|^2 / N_0} \right]$$
◊ Since ln(x) is increasing, we can equivalently maximize
$$\ln P_m - \frac{\|\mathbf{r}-\mathbf{s}_m\|^2}{N_0}$$
4.2-1 Optimal Detection for the Vector AWGN Channel (2)

◊ (b) MAP metric expressed in terms of $\|\mathbf{r}-\mathbf{s}_m\|^2$
4.2-1 Optimal Detection for the Vector AWGN Channel (3)

$$\eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$
◊ If $P_m = 1/M$ for all m, the optimal decision becomes
$$\hat{m} = \arg\max_{1\le m\le M}\left[\frac{N_0}{2}\ln P_m - \frac{1}{2}\|\mathbf{r}-\mathbf{s}_m\|^2\right] = \arg\max_{1\le m\le M}\left[-\|\mathbf{r}-\mathbf{s}_m\|^2\right] = \arg\min_{1\le m\le M}\|\mathbf{r}-\mathbf{s}_m\|$$
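◊ A minimal sketch of this minimum-distance rule (the equiprobable 2-D constellation and the noise level are illustrative assumptions, not values from the text):

```python
# Sketch: minimum-distance (ML) detection m_hat = argmin_m ||r - s_m|| over a
# vector AWGN channel, with a Monte-Carlo estimate of the symbol error rate.
import numpy as np

rng = np.random.default_rng(1)
S = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)  # signal vectors s_m
N0 = 0.5
trials = 100000

m = rng.integers(0, len(S), size=trials)                   # transmitted messages
r = S[m] + rng.normal(0, np.sqrt(N0/2), size=(trials, 2))  # r = s_m + n, var N0/2 per dim

d2 = ((r[:, None, :] - S[None, :, :])**2).sum(axis=2)      # squared distances to all s_m
m_hat = d2.argmin(axis=1)                                  # nearest signal point
print("symbol error rate ~", np.mean(m_hat != m))
```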
4.2-1 Optimal Detection for the Vector AWGN Channel (4)
4.2-1 Optimal Detection for the Vector AWGN Channel (5)

◊ Therefore, in the AWGN channel:
$$\text{MAP:}\quad \hat{m} = \arg\max_{1\le m\le M}\left[\frac{N_0}{2}\ln P_m + \int_{-\infty}^{\infty} r(t)s_m(t)\,dt - \frac{1}{2}\int_{-\infty}^{\infty} s_m^2(t)\,dt\right]$$
$$\text{ML:}\quad \hat{m} = \arg\max_{1\le m\le M}\left[\int_{-\infty}^{\infty} r(t)s_m(t)\,dt - \frac{1}{2}\int_{-\infty}^{\infty} s_m^2(t)\,dt\right]$$
4.2-1 Optimal Detection for the Vector AWGN Channel (6)

◊ MAP: $\hat{m} = \arg\max_{1\le m\le M}\left[N_0 \ln P_m + C(\mathbf{r}, \mathbf{s}_m)\right]$
◊ ML: $\hat{m} = \arg\max_{1\le m\le M} C(\mathbf{r}, \mathbf{s}_m)$
Optimal Detection for Binary Antipodal Signaling (1)

$$D_1 = \left\{r : r\sqrt{E_b} + \frac{N_0}{2}\ln p - \frac{1}{2}E_b > -r\sqrt{E_b} + \frac{N_0}{2}\ln(1-p) - \frac{1}{2}E_b\right\}$$
$$= \left\{r : r > \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}\right\} = \{r : r > r_{th}\}, \qquad r_{th} = \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}$$
Optimal Detection for Binary Antipodal Signaling (2)

◊ $D_1 = \left\{r : r > r_{th} \equiv \dfrac{N_0}{4\sqrt{E_b}}\ln\dfrac{1-p}{p}\right\}$
◊ When p → 0, r_{th} → ∞, and the entire real line becomes D2.
$$P_e = p\int_{D_2} p\!\left(r \mid s=\sqrt{E_b}\right) dr + (1-p)\int_{D_1} p\!\left(r \mid s=-\sqrt{E_b}\right) dr$$
$$= p\int_{-\infty}^{r_{th}} p\!\left(r \mid s=\sqrt{E_b}\right) dr + (1-p)\int_{r_{th}}^{\infty} p\!\left(r \mid s=-\sqrt{E_b}\right) dr$$
Optimal Detection for Binary Antipodal Signaling (3)

$$P_e = p\int_{-\infty}^{r_{th}} p\!\left(r \mid s=\sqrt{E_b}\right) dr + (1-p)\int_{r_{th}}^{\infty} p\!\left(r \mid s=-\sqrt{E_b}\right) dr$$
$$= p\,P\!\left[\mathcal{N}\!\left(\sqrt{E_b}, N_0/2\right) < r_{th}\right] + (1-p)\,P\!\left[\mathcal{N}\!\left(-\sqrt{E_b}, N_0/2\right) > r_{th}\right]$$
$$= p\,Q\!\left(\frac{\sqrt{E_b}-r_{th}}{\sqrt{N_0/2}}\right) + (1-p)\,Q\!\left(\frac{r_{th}+\sqrt{E_b}}{\sqrt{N_0/2}}\right)$$
where $Q(x) = P[\mathcal{N}(0,1) > x]$ and $Q(-x) = 1 - Q(x)$. In particular,
$$P\!\left[\mathcal{N}\!\left(\sqrt{E_b}, N_0/2\right) < r_{th}\right] = 1 - P\!\left[\mathcal{N}\!\left(\sqrt{E_b}, N_0/2\right) > r_{th}\right] = 1 - Q\!\left(\frac{r_{th}-\sqrt{E_b}}{\sqrt{N_0/2}}\right) = Q\!\left(\frac{\sqrt{E_b}-r_{th}}{\sqrt{N_0/2}}\right)$$
◊ When p = 1/2, r_{th} = 0, and the error probability simplifies to
$$P_e = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right)$$
◊ Since the system is binary, $P_e = P_b$.
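◊ A quick Monte-Carlo check of $P_b = Q(\sqrt{2E_b/N_0})$ for equiprobable antipodal signaling (a sketch; the parameter values are arbitrary):

```python
# Sketch: simulated BER of binary antipodal signaling with threshold r_th = 0,
# compared with the theoretical Pb = Q(sqrt(2 Eb/N0)).
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(2)
Eb, N0, trials = 1.0, 0.25, 1_000_000

bits = rng.integers(0, 2, trials)
s = np.where(bits == 1, np.sqrt(Eb), -np.sqrt(Eb))     # antipodal mapping
r = s + rng.normal(0, np.sqrt(N0/2), trials)           # AWGN with variance N0/2
bits_hat = (r > 0).astype(int)                         # threshold detector

print("simulated Pb ~", np.mean(bits_hat != bits))
print("theory Q(sqrt(2Eb/N0)) =", qfunc(np.sqrt(2*Eb/N0)))
```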
Error Probability for Equiprobable Binary Signaling Schemes (1)

$$P_b = P\!\left[\frac{\mathbf{n}\cdot(\mathbf{s}_2-\mathbf{s}_1)}{d_{12}} > \frac{d_{12}}{2}\right] = P\!\left[\mathbf{n}\cdot(\mathbf{s}_2-\mathbf{s}_1) > \frac{d_{12}^2}{2}\right]$$
where $\dfrac{\mathbf{s}_2-\mathbf{s}_1}{d_{12}}$ is a unit vector and $\mathbf{n} = \mathbf{r} - \mathbf{s}_1$.
Error Probability for Equiprobable Binary Signaling Schemes (2)
Optimal Detection for Binary Orthogonal Signaling (1)

◊ $\gamma_b = E_b / N_0$
Chapter 4.2-2
Implementation of the Optimal Receiver for AWGN Channels
4.2-2 Implementation of the Optimal Receiver for AWGN Channels
The Correlation Receiver (1)
The Correlation Receiver (2)
The Correlation Receiver (3)

◊ Another implementation:
$$\hat{m} = \arg\max_{1\le m\le M}\left[\eta_m + \int_{-\infty}^{\infty} r(t)s_m(t)\,dt\right], \quad \text{where } \eta_m = \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m$$
◊ Requires M correlators
◊ Usually M > N
◊ Less preferred
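◊ A sketch of this M-correlator form (assumes sampled waveforms on a grid with step dt; the function name and example pulses are illustrative, not from the text):

```python
# Sketch: correlate the received waveform against each signal waveform s_m(t)
# and add the bias terms eta_m = (N0/2) ln P_m - E_m/2.
import numpy as np

def correlation_receiver(r_t, S_t, priors, N0, dt):
    """r_t: received samples (L,); S_t: (M, L) sampled signal waveforms."""
    corr = S_t @ r_t * dt                          # integral r(t) s_m(t) dt for each m
    E = np.sum(S_t**2, axis=1) * dt                # signal energies E_m
    eta = 0.5 * N0 * np.log(priors) - 0.5 * E      # bias terms eta_m
    return int(np.argmax(eta + corr))

# Example with two antipodal rectangular pulses
dt = 1e-3
t = np.arange(0, 1, dt)
S_t = np.vstack([np.ones_like(t), -np.ones_like(t)])
r_t = S_t[0] + np.random.default_rng(5).normal(0, 1, t.size)  # per-sample std 1 <=> N0 = 2*dt
print(correlation_receiver(r_t, S_t, priors=np.array([0.5, 0.5]), N0=2*dt, dt=dt))
```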
The Matched Filter Receiver (1)
The Matched Filter Receiver (2)
The Matched Filter Receiver (3)

◊ $|H(f)| = |S(f)|$
◊ $\angle H(f) = -\angle S(f) - 2\pi f T$
The Matched Filter Receiver (4)
The Matched Filter Receiver (5)

$$y_s^2(T) = \left[\int_{-\infty}^{\infty} H(f)S(f)e^{j2\pi fT}\,df\right]^2 \le \int_{-\infty}^{\infty} |H(f)|^2\,df \cdot \int_{-\infty}^{\infty} \left|S(f)e^{j2\pi fT}\right|^2\,df = E_h E_s$$
Matched Filter

◊ Time-domain property of the matched filter:
◊ If a signal s(t) is corrupted by AWGN, the filter with an impulse response matched to s(t) maximizes the output signal-to-noise ratio (SNR).
◊ Proof: assume the received signal r(t) consists of the signal s(t) and AWGN n(t), which has zero mean and power spectral density $\Phi_{nn}(f) = \frac{1}{2}N_0$ W/Hz.
◊ Suppose the signal r(t) is passed through a filter with impulse response h(t), 0 ≤ t ≤ T, and its output is sampled at time t = T.
Matched Filter

◊ Proof (cont.):
◊ At the sampling instant t = T:
$$y(T) = \int_0^T s(\tau)h(T-\tau)\,d\tau + \int_0^T n(\tau)h(T-\tau)\,d\tau = y_s(T) + y_n(T)$$
◊ The problem is to select the filter impulse response that maximizes the output SNR, defined as
$$\mathrm{SNR}_0 = \frac{y_s^2(T)}{E\!\left[y_n^2(T)\right]}$$
◊ The noise term has power
$$E\!\left[y_n^2(T)\right] = \frac{1}{2}N_0\int_0^T\!\!\int_0^T \delta(t-\tau)\,h(T-\tau)h(T-t)\,dt\,d\tau = \frac{1}{2}N_0\int_0^T h^2(T-t)\,dt$$
Matched Filter

◊ Proof (cont.):
◊ Substituting $y_s(T)$ and $E\!\left[y_n^2(T)\right]$ into SNR₀, and letting $\tau' = T - \tau$:
$$\mathrm{SNR}_0 = \frac{\left[\int_0^T s(\tau)h(T-\tau)\,d\tau\right]^2}{\frac{1}{2}N_0\int_0^T h^2(T-t)\,dt} = \frac{\left[\int_0^T h(\tau')s(T-\tau')\,d\tau'\right]^2}{\frac{1}{2}N_0\int_0^T h^2(T-t)\,dt}$$
Matched Filter

◊ Proof (cont.):
◊ Cauchy-Schwarz inequality: if g1(t) and g2(t) are finite-energy signals, then
$$\left[\int_{-\infty}^{\infty} g_1(t)g_2(t)\,dt\right]^2 \le \int_{-\infty}^{\infty} g_1^2(t)\,dt \cdot \int_{-\infty}^{\infty} g_2^2(t)\,dt$$
with equality when $g_1(t) = C g_2(t)$ for an arbitrary constant C.
◊ If we set $g_1(t) = h(t)$ and $g_2(t) = s(T-t)$, it is clear that the SNR is maximized when $h(t) = C\,s(T-t)$.
Matched Filter

◊ Proof (cont.):
◊ The output (maximum) SNR obtained with the matched filter $h(t) = C\,s(T-t)$ is
$$\mathrm{SNR}_0 = \frac{\left[\int_0^T s(\tau)\,C\,s\big(T-(T-\tau)\big)\,d\tau\right]^2}{\frac{1}{2}N_0\,C^2\int_0^T s^2\big(T-(T-t)\big)\,dt} = \frac{2}{N_0}\int_0^T s^2(t)\,dt = \frac{2\varepsilon}{N_0}$$
using $h(\tau) = s(T-\tau)$, so that $h(T-\tau) = s\big(T-(T-\tau)\big)$.
◊ Note that the output SNR from the matched filter depends on the energy of the waveform s(t) but not on the detailed characteristics of s(t).
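◊ A numerical sketch of this result: the matched filter h(t) = s(T−t) versus a mismatched filter (the example pulse and noise level are arbitrary assumptions made for illustration):

```python
# Sketch: compare the output SNR of the matched filter with a mismatched filter
# for a sampled pulse s(t) on [0, T]; the matched case should give 2E/N0.
import numpy as np

N0, T, dt = 1.0, 1.0, 1e-3
t = np.arange(0, T, dt)
s = np.sin(np.pi * t / T)                 # example pulse
E = np.sum(s**2) * dt                     # pulse energy

def output_snr(h):
    ys_T = np.sum(s * h[::-1]) * dt       # y_s(T) = integral s(tau) h(T - tau) d tau
    noise_power = 0.5 * N0 * np.sum(h**2) * dt
    return ys_T**2 / noise_power

h_matched = s[::-1]                       # h(t) = s(T - t)
h_other = np.ones_like(s)                 # a mismatched rectangular filter
print("matched SNR  =", output_snr(h_matched), " (theory 2E/N0 =", 2*E/N0, ")")
print("mismatched   =", output_snr(h_other))
```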
The Matched Filter Receiver (6)

◊ Impulse responses of the two matched filters (fig. (b)):
$$h_1(t) = \varphi_1(T-t) = \sqrt{2/T}, \quad T/2 \le t \le T$$
$$h_2(t) = \varphi_2(T-t) = \sqrt{2/T}, \quad 0 \le t \le T/2$$
$$y(t) = s(t) * h(t) = \int_{-\infty}^{\infty} s(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} s(\tau)s(T-t+\tau)\,d\tau$$
The Matched Filter Receiver (7)

◊ If s1(t) is transmitted, the noise-free responses of the two matched filters (fig. (c)) sampled at t = T are
$$y_{1s}(T) = \sqrt{A^2T/2} = \sqrt{E}\,; \qquad y_{2s}(T) = 0$$
◊ If s1(t) is transmitted, the received vector formed from the two matched filter outputs at sampling instant t = T is
$$\mathbf{r} = (r_1, r_2) = (\sqrt{E} + n_1,\; n_2)$$
◊ Noise: $n_1 = y_{1n}(T)$ and $n_2 = y_{2n}(T)$, where
$$y_{kn}(T) = \int_0^T n(t)\varphi_k(t)\,dt, \quad k = 1, 2$$
a) $E[n_k] = E[y_{kn}(T)] = 0$
b) $\mathrm{VAR}[n_k] = (N_0/2)E_{\varphi_k} = N_0/2$  (cf. $\mathrm{VAR}[v(T)] = (N_0/2)E_h$, Eq. (4.2-52))
◊ SNR for the first matched filter:
$$\mathrm{SNR}_0 = \frac{(\sqrt{E})^2}{N_0/2} = \frac{2E}{N_0}$$
Chapter 4.2-3
A Union Bound on Probability of Errors of ML Detection
4.2-3 A Union Bound on Probability of Errors of ML Detection (1)

$$P_{e\mid m} \le \sum_{\substack{1\le m'\le M \\ m'\neq m}} \int_{D_{mm'}} p(\mathbf{r}\mid\mathbf{s}_m)\,d\mathbf{r} = \sum_{\substack{1\le m'\le M \\ m'\neq m}} P_{m\to m'}$$
$$\Rightarrow\quad P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} \int_{D_{mm'}} p(\mathbf{r}\mid\mathbf{s}_m)\,d\mathbf{r} = \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} P_{m\to m'}$$
4.2-3 A Union Bound on Probability of Errors of ML Detection (3)

◊ In an AWGN channel:
◊ Pairwise error probability: $P_{m\to m'} = P_b = Q\!\left(\sqrt{d_{mm'}^2/(2N_0)}\right)$  (4.2-37)
◊ Union bound for an AWGN channel:
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} e^{-\frac{d_{mm'}^2}{4N_0}} \qquad \text{using } Q(x) \le \tfrac{1}{2}e^{-x^2/2}$$
◊ Distance enumerator function:
$$T(X) = \sum_{\substack{1\le m,m'\le M \\ m\neq m'}} X^{d_{mm'}^2} = \sum_{\text{all distinct }d\text{'s}} a_d\, X^{d^2}, \qquad d_{mm'} = \|\mathbf{s}_m - \mathbf{s}_{m'}\|$$
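◊ A sketch that evaluates this pairwise union bound for a constellation given as an array of signal vectors (the 4-PAM example and the noise level are illustrative):

```python
# Sketch: Pe <= (1/M) * sum_m sum_{m' != m} Q(d_mm' / sqrt(2 N0)).
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def union_bound(S, N0):
    M = len(S)
    d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)  # pairwise distances
    mask = ~np.eye(M, dtype=bool)                              # exclude m' == m
    return np.sum(qfunc(d[mask] / np.sqrt(2 * N0))) / M

# Example: 4-PAM constellation with unit half-spacing
S = np.array([[-3.0], [-1.0], [1.0], [3.0]])
print("union bound on Pe:", union_bound(S, N0=0.5))
```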
4.2-3 A Union Bound on Probability of Errors of ML Detection (4)

◊ Union bound in terms of the distance enumerator:
$$P_e \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} e^{-\frac{d_{mm'}^2}{4N_0}} = \frac{1}{2M}\,T(X)\Big|_{X = e^{-\frac{1}{4N_0}}}$$
◊ Minimum-distance bound:
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M \\ m'\neq m}} Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le (M-1)\,Q\!\left(\sqrt{\frac{d_{min}^2}{2N_0}}\right) \le \frac{M-1}{2}\,e^{-\frac{d_{min}^2}{4N_0}}$$
◊ A good constellation provides the maximum possible minimum distance.
4.2-3 A Union Bound on Probability of Errors of ML Detection (5)

◊ In total, 16 × 15 = 240 possible distances
◊ Distance enumerator function:
$$T(X) = 48X^{d^2} + 36X^{2d^2} + 32X^{4d^2} + 48X^{5d^2} + 16X^{8d^2} + 16X^{9d^2} + 24X^{10d^2} + 16X^{13d^2} + 4X^{18d^2}$$
◊ Upper bound on the error probability:
$$P_e \le \frac{1}{32}\,T\!\left(e^{-\frac{1}{4N_0}}\right)$$
4.2-3 A Union Bound on Probability of Errors of ML Detection (6)

◊ Minimum-distance bound for this constellation:
$$P_e \le \frac{M-1}{2}\,e^{-\frac{d_{min}^2}{4N_0}} = \frac{15}{2}\,e^{-\frac{2E_{bavg}}{5N_0}}$$
◊ When the SNR is large, $T(X) \approx 48X^{d^2}$, so
$$P_e \le \frac{1}{32}\,T\!\left(e^{-\frac{1}{4N_0}}\right) \approx \frac{48}{32}\,e^{-\frac{d_{min}^2}{4N_0}} = \frac{3}{2}\,e^{-\frac{2E_{bavg}}{5N_0}}$$
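◊ A sketch that builds the distance enumerator for the 16-point rectangular constellation and evaluates the bound Pe ≤ (1/(2M))·T(e^{−1/(4N0)}); it assumes the points lie on {±1, ±3}² (so the minimum spacing is d = 2), an illustrative normalization:

```python
# Sketch: distance enumerator T(X) = sum over ordered pairs of X^{d^2}, and the
# resulting union bound, for a 16-point rectangular (16-QAM-like) grid.
import numpy as np
from collections import Counter

pts = np.array([[x, y] for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)], float)
M = len(pts)

d2 = []
for i in range(M):
    for j in range(M):
        if i != j:
            d2.append(float(np.sum((pts[i] - pts[j])**2)))
enum = Counter(d2)                          # coefficients a_d keyed by squared distance
print("ordered pairs:", sum(enum.values()))  # 16*15 = 240

N0 = 2.0
X = np.exp(-1.0 / (4 * N0))
Pe_bound = sum(a * X**dd for dd, a in enum.items()) / (2 * M)
print("union bound:", Pe_bound)
```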
Lower Bound on Probability of Error (1)

$$P_e = \frac{1}{M}\sum_{m=1}^{M}\int_{D_m^c} p(\mathbf{r}\mid\mathbf{s}_m)\,d\mathbf{r} \ge \frac{1}{M}\sum_{m=1}^{M}\int_{D_{mm'}} p(\mathbf{r}\mid\mathbf{s}_m)\,d\mathbf{r} = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right), \quad \text{for any } m' \neq m$$
◊ (Choose m' such that $d_{mm'}$ is minimized.)
◊ To derive the tightest lower bound, maximize the right-hand side:
$$P_e \ge \frac{1}{M}\sum_{m=1}^{M}\max_{m'\neq m} Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right) = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{min}^{m}}{\sqrt{2N_0}}\right)$$
◊ $d_{min}^{m}$: distance from $\mathbf{s}_m$ to its nearest neighbor
Lower Bound on Probability of Error (2)

◊ Since $d_{min} \ge d_{min}^{m}$ ($d_{min}^{m}$ denotes the distance from $\mathbf{s}_m$ to its nearest neighbor in the constellation),
$$Q\!\left(\frac{d_{min}^{m}}{\sqrt{2N_0}}\right) \ge \begin{cases} Q\!\left(d_{min}/\sqrt{2N_0}\right), & \text{at least one signal at distance } d_{min} \text{ from } \mathbf{s}_m \\ 0, & \text{otherwise} \end{cases}$$
$$\Rightarrow\quad P_e \ge \frac{1}{M}\sum_{\substack{1\le m\le M \\ \exists\, m'\neq m:\; \|\mathbf{s}_m-\mathbf{s}_{m'}\| = d_{min}}} Q\!\left(\frac{d_{min}}{\sqrt{2N_0}}\right)$$
Chapter 4.3: Optimal Detection and Error Probability for Bandlimited Signaling
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ Symbol error probability:
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P\big[\text{error}\mid m \text{ sent}\big] = \frac{1}{M}\left[2(M-2)\,Q\!\left(\frac{d_{min}}{\sqrt{2N_0}}\right) + 2\,Q\!\left(\frac{d_{min}}{\sqrt{2N_0}}\right)\right]$$
$$= \frac{2(M-1)}{M}\,Q\!\left(\frac{d_{min}}{\sqrt{2N_0}}\right) = 2\left(1-\frac{1}{M}\right) Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\,\frac{E_{bavg}}{N_0}}\right), \qquad d_{min} = \sqrt{\frac{12\log_2 M}{M^2-1}E_{bavg}}$$
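◊ A sketch evaluating this M-PAM expression numerically (the Eb/N0 operating point is arbitrary):

```python
# Sketch: Pe = 2(1 - 1/M) Q( sqrt( 6 log2(M) / (M^2 - 1) * Eb/N0 ) ).
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def pam_ser(M, ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    arg = np.sqrt(6.0 * np.log2(M) / (M**2 - 1) * ebn0)
    return 2.0 * (1.0 - 1.0 / M) * qfunc(arg)

for M in (2, 4, 8, 16):
    print(M, "-PAM, Pe at 10 dB:", pam_ser(M, 10.0))
```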
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Joint PDF of the envelope V and phase Θ of the received signal point (r1, r2):
$$p_{V,\Theta}(v,\theta) = \frac{v}{\pi N_0}\exp\!\left(-\frac{v^2 + E - 2\sqrt{E}\,v\cos\theta}{N_0}\right)$$
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Marginal PDF of Θ:
$$p_\Theta(\theta) = \int_0^\infty p_{V,\Theta}(v,\theta)\,dv = \int_0^\infty \frac{v}{\pi N_0}\,e^{-\frac{v^2 + E - 2\sqrt{E}\,v\cos\theta}{N_0}}\,dv = \frac{1}{2\pi}\,e^{-\gamma_s\sin^2\theta}\int_0^\infty v\,e^{-\frac{(v-\sqrt{2\gamma_s}\cos\theta)^2}{2}}\,dv$$
◊ Symbol SNR: $\gamma_s = E/N_0$
◊ As $\gamma_s$ increases, $p_\Theta(\theta)$ becomes more peaked around θ = 0.
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Decision region: $D_1 = \{\theta : -\pi/M < \theta \le \pi/M\}$
◊ Error probability:
$$P_e = 1 - \int_{-\pi/M}^{\pi/M} p_\Theta(\theta)\,d\theta$$
◊ When M = 4, QPSK is equivalent to two binary phase-modulated signals in quadrature, so
$$P_b = Q\!\left(\sqrt{2E_b/N_0}\right)$$
$$P_c = (1-P_b)^2 = \left[1 - Q\!\left(\sqrt{2E_b/N_0}\right)\right]^2$$
$$P_e = 1 - P_c = 2Q\!\left(\sqrt{2E_b/N_0}\right)\left[1 - \tfrac{1}{2}Q\!\left(\sqrt{2E_b/N_0}\right)\right]$$
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large SNR (E/N0 ≫ 1), $p_\Theta(\theta)$ is approximated by
$$p_\Theta(\theta) \approx \sqrt{\gamma_s/\pi}\,\cos\theta\; e^{-\gamma_s\sin^2\theta}, \quad |\theta| \le \pi/2$$
◊ The resulting symbol error probability is
$$P_e \approx 2Q\!\left(\sqrt{2\gamma_s}\,\sin\frac{\pi}{M}\right) = 2Q\!\left(\sqrt{(2\log_2 M)\,\frac{E_b}{N_0}}\,\sin\frac{\pi}{M}\right)$$
◊ $\dfrac{E_b}{N_0} = \dfrac{E}{N_0\log_2 M} = \dfrac{\gamma_s}{\log_2 M}$.  When M = 2 or 4: $P_e \approx 2Q\!\left(\sqrt{2E_b/N_0}\right)$
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large M and large SNR:
◊ sin(π/M) ≈ π/M
◊ The error probability is approximated by
$$P_e \approx 2Q\!\left(\sqrt{\frac{2\pi^2\log_2 M}{M^2}\,\frac{E_b}{N_0}}\right) \quad \text{for large } M$$
◊ For large M, doubling M reduces the effective SNR by 6 dB.
◊ When a Gray code is used in the mapping:
◊ Since the most probable errors are erroneous selections of a phase adjacent to the true phase,
$$P_b \approx \frac{1}{k}P_e$$
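◊ A sketch of the high-SNR M-PSK approximation above, together with the Gray-coded estimate Pb ≈ Pe/k (the operating point is chosen arbitrarily):

```python
# Sketch: Pe ~ 2 Q( sqrt(2 log2(M) Eb/N0) * sin(pi/M) ), Pb ~ Pe / log2(M).
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def psk_ser_approx(M, ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    k = np.log2(M)
    return 2.0 * qfunc(np.sqrt(2.0 * k * ebn0) * np.sin(np.pi / M))

for M in (4, 8, 16, 32):
    pe = psk_ser_approx(M, 12.0)
    print(f"{M}-PSK at 12 dB: Pe~{pe:.3e}, Pb~{pe/np.log2(M):.3e}")
```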
Differentially Encoded PSK Signaling

◊ In practice, the carrier phase is extracted from the received signal by performing a nonlinear operation ⇒ phase ambiguity.
◊ For BPSK:
  • The received signal is first squared.
  • The resulting double-frequency component is filtered.
  • The signal is divided by 2 in frequency to extract an estimate of the carrier frequency and phase φ.
  • This operation results in a phase ambiguity of 180° in the carrier phase.
◊ For QPSK, there are phase ambiguities of ±90° and 180° in the phase estimate.
◊ Consequently, we do not have an absolute estimate of the carrier phase for demodulation.
Differentially Encoded PSK Signaling
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ In the detection of QAM signals, we need two filters matched to
$$\varphi_1(t) = \sqrt{2/E_g}\,g(t)\cos 2\pi f_c t, \qquad \varphi_2(t) = -\sqrt{2/E_g}\,g(t)\sin 2\pi f_c t$$
◊ Average squared amplitude of the constellation:
$$\overline{A^2} = \frac{1}{M}\sum_{m=1}^{M}\left(a_{mc}^2 + a_{ms}^2\right)$$
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Rectangular QAM:
◊ Generated by two PAM signals on the in-phase and quadrature carriers.
◊ Easily demodulated.
◊ For M ≥ 16, it requires only slightly more energy than the best M-ary QAM constellation.
◊ For square constellations,
$$P_{c,\,M\text{-QAM}} = P_{c,\,\sqrt{M}\text{-PAM}}^2 = \left(1 - P_{e,\,\sqrt{M}\text{-PAM}}\right)^2$$
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Probability of error of square M-QAM:
$$P_{e,\,M\text{-QAM}} = 1 - \left(1 - P_{e,\,\sqrt{M}\text{-PAM}}\right)^2 = 2P_{e,\,\sqrt{M}\text{-PAM}}\left(1 - \tfrac{1}{2}P_{e,\,\sqrt{M}\text{-PAM}}\right)$$
◊ The error probability of the constituent √M-PAM is given by (4.3-4) & (4.3-5).
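◊ A sketch of this square-QAM error probability, built from the per-dimension √M-PAM expression (it uses the standard square-QAM per-dimension formula; the Eb/N0 value is arbitrary):

```python
# Sketch: Pe = 1 - (1 - P_pam)^2 for square M-QAM, with the standard
# per-dimension sqrt(M)-PAM error probability.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def qam_ser(M, ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    p_pam = 2.0 * (1.0 - 1.0/np.sqrt(M)) * qfunc(np.sqrt(3.0 * np.log2(M) / (M - 1) * ebn0))
    return 1.0 - (1.0 - p_pam)**2

for M in (4, 16, 64):
    print(f"{M}-QAM, Pe at 12 dB:", qam_ser(M, 12.0))
```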
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ ML decision rule (in terms of the lowpass equivalent signals):
$$\hat{m} = \arg\max_{1\le m\le M}\left(\mathrm{Re}\!\left[\int_{-\infty}^{\infty} r_l(t)\,s_{ml}^*(t)\,dt\right] - \frac{1}{2}\int_{-\infty}^{\infty} \left|s_{ml}(t)\right|^2 dt\right)$$
Chapter 4.3-4 Demodulation and Detection

◊ Figure: complex matched filter; detailed structure of a complex matched filter in terms of its in-phase and quadrature components.
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

$$\mathbf{s}_1 = (\sqrt{E}, 0, \ldots, 0)$$
$$\mathbf{s}_2 = (0, \sqrt{E}, \ldots, 0)$$
$$\vdots$$
$$\mathbf{s}_M = (0, 0, \ldots, \sqrt{E})$$
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Suppose s1 is transmitted; the received vector is
$$\mathbf{r} = (\sqrt{E} + n_1, n_2, \ldots, n_M)$$
◊ E is the symbol energy
◊ (n1, n2, …, nM) are i.i.d. zero-mean Gaussian r.v.s with $\sigma_n^2 = N_0/2$
◊ Define the random variables $R_m = \mathbf{r}\cdot\mathbf{s}_m$, 1 ≤ m ≤ M:
$$R_1 = (\sqrt{E}+n_1, n_2, \ldots, n_M)\cdot(\sqrt{E}, 0, \ldots, 0) = E + \sqrt{E}\,n_1$$
$$R_m = \sqrt{E}\,n_m, \quad 2 \le m \le M$$
◊ A correct decision is made if R1 > Rm for m = 2, 3, …, M:
$$P_c = P[R_1 > R_2, R_1 > R_3, \ldots, R_1 > R_M \mid \mathbf{s}_1 \text{ sent}] = P[\sqrt{E}+n_1 > n_2, \ldots, \sqrt{E}+n_1 > n_M \mid \mathbf{s}_1 \text{ sent}]$$
$$= \int_{-\infty}^{\infty} P[n_2 < n+\sqrt{E}, n_3 < n+\sqrt{E}, \ldots, n_M < n+\sqrt{E} \mid \mathbf{s}_1 \text{ sent}, n_1 = n]\; p_{n_1}(n)\,dn \quad \text{(total probability)}$$
$$= \int_{-\infty}^{\infty} \left(P[n_2 < n+\sqrt{E} \mid \mathbf{s}_1 \text{ sent}, n_1 = n]\right)^{M-1} p_{n_1}(n)\,dn \quad (n_2, n_3, \ldots, n_M \text{ are i.i.d.})$$
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Since $n_2 \sim \mathcal{N}(0, N_0/2)$:
$$P[n_2 < n+\sqrt{E} \mid \mathbf{s}_1 \text{ sent}, n_1 = n] = 1 - Q\!\left(\frac{n+\sqrt{E}}{\sqrt{N_0/2}}\right)$$
◊ With the change of variable $x = \dfrac{n+\sqrt{E}}{\sqrt{N_0/2}}$:
$$P_c = \int_{-\infty}^{\infty} \left[1 - Q\!\left(\frac{n+\sqrt{E}}{\sqrt{N_0/2}}\right)\right]^{M-1} \frac{1}{\sqrt{\pi N_0}}\,e^{-\frac{n^2}{N_0}}\,dn = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \big(1-Q(x)\big)^{M-1} e^{-\frac{(x-\sqrt{2E/N_0})^2}{2}}\,dx$$
$$P_e = 1 - P_c = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \left[1 - \big(1-Q(x)\big)^{M-1}\right] e^{-\frac{(x-\sqrt{2E/N_0})^2}{2}}\,dx \qquad\left(\text{using } \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\frac{(x-\sqrt{2E/N_0})^2}{2}}\,dx = 1\right)$$
◊ By symmetry,
$$P[\mathbf{s}_m \text{ received} \mid \mathbf{s}_1 \text{ sent}] = \frac{P_e}{M-1} = \frac{P_e}{2^k - 1}, \quad 2 \le m \le M$$
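◊ A sketch that evaluates this Pe integral numerically (the integration limits are truncated to ±10 standard deviations around the mean, an assumption made for the numerical evaluation):

```python
# Sketch: Pe = 1/sqrt(2*pi) * Integral [1 - (1-Q(x))^(M-1)] exp(-(x - sqrt(2E/N0))^2 / 2) dx.
import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def orthogonal_pe(M, e_over_n0):
    shift = np.sqrt(2.0 * e_over_n0)
    def integrand(x):
        return (1.0 - (1.0 - qfunc(x))**(M - 1)) * np.exp(-0.5*(x - shift)**2) / np.sqrt(2*np.pi)
    val, _ = quad(integrand, shift - 10, shift + 10)
    return val

for M in (2, 4, 16, 64):
    k = np.log2(M)
    print(f"M={M}: Pe =", orthogonal_pe(M, e_over_n0=k * 4.0))  # E = k*Eb with Eb/N0 = 4
```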
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling
Error Probability in FSK Signaling
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ From Sec. 4.2-3, the union bound in an AWGN channel is
$$P_e \le \frac{M-1}{2}\,e^{-\frac{d_{min}^2}{4N_0}}$$
◊ In orthogonal signaling, $d_{min}^2 = 2E$, so
$$P_e \le \frac{M-1}{2}\,e^{-\frac{E}{2N_0}} < M\,e^{-\frac{E}{2N_0}}$$
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ Tighter bound in the low-SNR regime:
$$P_e \le 2e^{-k\left(\sqrt{E_b/N_0}-\sqrt{\ln 2}\right)^2}, \qquad \ln 2 \le E_b/N_0 \le 4\ln 2$$
Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

◊ A set of M = 2^k biorthogonal signals is obtained from N = M/2 orthogonal signals by including the negatives of these signals
◊ Requires only M/2 cross-correlators or matched filters
◊ Vector representation of biorthogonal signals:
$$\mathbf{s}_1 = -\mathbf{s}_{N+1} = (\sqrt{E}, 0, \ldots, 0)$$
$$\mathbf{s}_2 = -\mathbf{s}_{N+2} = (0, \sqrt{E}, \ldots, 0)$$
$$\vdots$$
$$\mathbf{s}_N = -\mathbf{s}_{2N} = (0, 0, \ldots, \sqrt{E})$$
◊ Assume s1 is transmitted; the received signal vector is
$$\mathbf{r} = (\sqrt{E} + n_1, n_2, \ldots, n_N)$$
◊ {nm} are zero-mean i.i.d. Gaussian r.v.s with $\sigma_n^2 = N_0/2$
Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

◊ Figure: Pe vs. Eb/N0 (with E = k·Eb); the Shannon limit is marked for reference.
Chapter 4.4-3 Optimal Detection & Error Prob. for Simplex Signaling

◊ Simplex signals are obtained by shifting a set of orthogonal signals by the average of these orthogonal signals
  ⇒ the geometry of simplex signals is exactly the same as that of the original orthogonal signals
◊ The error probability equals that of the original orthogonal signals
◊ Since simplex signals have lower energy, the energy in the expression of the error probability is scaled, i.e.,
$$P_e = 1 - P_c = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left[1-\big(1-Q(x)\big)^{M-1}\right]e^{-\frac{1}{2}\left(x-\sqrt{\frac{M}{M-1}\frac{2E}{N_0}}\right)^2}dx$$
◊ A relative gain of 10 log10(M/(M−1)) dB over orthogonal signaling
◊ M = 2 → 3 dB gain; M = 10 → 0.46 dB gain
Chapter 4.5: Optimal Detection in Presence of Uncertainty: Non-coherent Detection
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

$$\hat{m} = \arg\max_{1\le m\le M} P_m \int p(\mathbf{r}\mid m, \boldsymbol{\theta})\,p(\boldsymbol{\theta})\,d\boldsymbol{\theta}$$
$$P_e = \sum_{m=1}^{M} P_m \sum_{\substack{m'=1 \\ m'\neq m}}^{M} \int_{D_{m'}} \left(\int p_{\mathbf{n}}(\mathbf{r}-\mathbf{s}_{m,\boldsymbol{\theta}})\,p(\boldsymbol{\theta})\,d\boldsymbol{\theta}\right) d\mathbf{r} \qquad (4.5\text{-}3)$$
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

$$D_1 = \left\{r : \int_0^\infty e^{-(r-A\sqrt{E_b})^2/N_0}\,p(A)\,dA > \int_0^\infty e^{-(r+A\sqrt{E_b})^2/N_0}\,p(A)\,dA\right\}$$
$$= \left\{r : \int_0^\infty e^{-A^2E_b/N_0}\left(e^{2rA\sqrt{E_b}/N_0} - e^{-2rA\sqrt{E_b}/N_0}\right)p(A)\,dA > 0\right\} = \{r : r > 0\}$$
◊ Since A > 0, $e^{2rA\sqrt{E_b}/N_0} - e^{-2rA\sqrt{E_b}/N_0} > 0 \iff e^{4rA\sqrt{E_b}/N_0} > 1 \iff 4rA\sqrt{E_b}/N_0 > 0 \iff r > 0$.
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

$$P_b = \int_0^\infty \left(\int_0^\infty \frac{1}{\sqrt{\pi N_0}}\,e^{-\frac{(r+A\sqrt{E_b})^2}{N_0}}\,dr\right) p(A)\,dA = \int_0^\infty P\!\left[\mathcal{N}\!\left(-A\sqrt{E_b}, \frac{N_0}{2}\right) > 0\right] p(A)\,dA$$
$$= \int_0^\infty P\!\left[\mathcal{N}(0,1) > \frac{A\sqrt{E_b}}{\sqrt{N_0/2}}\right] p(A)\,dA = \int_0^\infty Q\!\left(A\sqrt{2E_b/N_0}\right) p(A)\,dA = E\!\left[Q\!\left(A\sqrt{2E_b/N_0}\right)\right]$$
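◊ A sketch of evaluating $P_b = E[Q(A\sqrt{2E_b/N_0})]$ (the text leaves p(A) general; a Rayleigh amplitude with E[A²] = 1 is assumed here purely for illustration):

```python
# Sketch: Monte-Carlo average of Q(A*sqrt(2 Eb/N0)) over a Rayleigh amplitude A,
# compared with the known closed form for Rayleigh fading.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(3)
ebn0 = 10.0 ** (10.0 / 10.0)                              # Eb/N0 = 10 dB
A = rng.rayleigh(scale=np.sqrt(0.5), size=1_000_000)      # E[A^2] = 2*scale^2 = 1
pb = np.mean(qfunc(A * np.sqrt(2.0 * ebn0)))
print("Pb ~", pb)
print("closed form:", 0.5 * (1 - np.sqrt(ebn0 / (1 + ebn0))))
```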
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

$$\hat{m} = \arg\max_{1\le m\le M} \frac{P_m}{2\pi}\int_0^{2\pi} p_{\mathbf{n}_l}\!\left(\mathbf{r}_l - e^{j\phi}\mathbf{s}_{ml}\right) d\phi \qquad \text{(from (4.5-3))}$$
$$= \arg\max_{1\le m\le M} \frac{P_m}{2\pi}\int_0^{2\pi} \frac{1}{(4\pi N_0)^N}\,e^{-\left\|\mathbf{r}_l - e^{j\phi}\mathbf{s}_{ml}\right\|^2/(4N_0)}\,d\phi \qquad \text{(from (2.9-13), } \mathbf{n}_l \sim \mathcal{N}(\mathbf{0}_{N\times 1}, 2N_0\mathbf{I}_N)\text{)}$$
4.5-1 Noncoherent Detection of Carrier
Modulated Signals
sm = sml cos 2π f c t
P 1 2π
2
− rl − e jφ s ml /(4 N 0 ) 2
∫ dφ
sml
ˆ = arg
◊ m g max m e
2
→ Em = ∫ sml cos 2π f c t dt= 2
1≤ m ≤ M 2π ( 4π N 0 ) N 0
→ s ml
2
= 2Em
2
{ }
2 2
E 1 rl − e jφ sml
2
= rl − 2 Re rl ⋅ e jφ sml + e jφ sml
P − m 2π Re[ rl ⋅e jφ s ml ]
= arg max m e ∫ dφ
2 N0 2 N0
e R {r ⋅ e } + 2E
2 jφ
= rl − 2 Re sml
2π
l m
1≤ m ≤ M 0
( )
Em 1 H
Re[( rl ⋅s ml ) e − jφ ] ⋅ r =e- jφ ( sml ) ⋅ r
H
Pm − 2π rl ⋅ e jφ sml = e jφ sml
∫ dφ
2 N0 2 N0
= arg max e e
1≤ m ≤ M 2π 0 =e- jφ r ⋅ sml
Em 1
P − 2π Re[|rl ⋅s ml |e− j ( φ −θ ) ]
∫ dφ
2 N0 2 N0
= arg max m e e
1≤ m ≤ M 2π 0 rl ⋅ s ml =| rl ⋅ s ml | e jθ
Em 1
Pm − 2 N0 2π 2 N0 |rl ⋅sml |cos(φ −θ ) θ : phase of rl ⋅ s ml
= arg max
1≤ m ≤ M 2π
e ∫0
e dφ
I 0 ( x) is modified Bessel function
−
Em
⎛| r ⋅s |⎞ of the 1st kind and order zero
= arg max Pm e 2 N0
I 0 ⎜ l ml ⎟ 1 2π
1≤ m ≤ M ⎝ 2 N0 ⎠ I 0 ( x) =
2π ∫0
e x cosφ dφ
105
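◊ A sketch of evaluating this noncoherent metric in the log domain (the two-signal lowpass example and noise level are illustrative assumptions; SciPy's exponentially scaled Bessel function i0e is used so that ln I0(x) = x + ln i0e(x) stays finite at large arguments):

```python
# Sketch: noncoherent decision metric  P_m * exp(-E_m/(2 N0)) * I0(|r_l . s_ml*|/(2 N0)).
import numpy as np
from scipy.special import i0e

def noncoherent_detect(r_l, S_l, priors, N0):
    """r_l: complex received lowpass vector; S_l: (M, N) complex lowpass signals."""
    E = 0.5 * np.sum(np.abs(S_l)**2, axis=1)          # E_m = ||s_ml||^2 / 2
    corr = np.abs(S_l.conj() @ r_l)                   # |r_l . s_ml*|
    x = corr / (2.0 * N0)
    log_metric = np.log(priors) - E/(2.0*N0) + x + np.log(i0e(x))
    return int(np.argmax(log_metric))

# Example: binary orthogonal lowpass signals, unknown carrier phase phi
rng = np.random.default_rng(4)
N0 = 0.5
S_l = np.array([[2.0, 0.0], [0.0, 2.0]], dtype=complex)
phi = rng.uniform(0, 2*np.pi)
noise = rng.normal(0, np.sqrt(N0), 2) + 1j*rng.normal(0, np.sqrt(N0), 2)  # var 2*N0 per dim
r_l = np.exp(1j*phi) * S_l[0] + noise
print("decided:", noncoherent_detect(r_l, S_l, priors=np.array([0.5, 0.5]), N0=N0))
```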
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals
4.6 Comparison of Digital Modulation Methods