DC-04-Optimum Receivers For AWGN Channels

This document discusses optimal detection for additive white Gaussian noise (AWGN) channels. It presents waveform and vector channel models for AWGN channels and describes how the received signal can be represented as a transmitted signal plus noise. The document outlines the optimal maximum a posteriori probability (MAP) receiver and maximum likelihood (ML) receiver for minimizing error probability. It describes how the receiver partitions the output space into decision regions based on the transmitted message and defines symbol and bit error probabilities.

Chapter 4

Optimum Receiver for AWGN Channels

Wireless Information Transmission System Lab.
Institute of Communications Engineering
National Sun Yat-sen University
Contents

◊ 4.1 Waveform and Vector Channel Models
◊ 4.2 Waveform and Vector AWGN Channels
◊ 4.3 Optimal Detection and Error Probability for Band-Limited Signaling
◊ 4.4 Optimal Detection and Error Probability for Power-Limited Signaling
◊ 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection
◊ 4.6 A Comparison of Digital Signaling Methods
◊ 4.10 Performance Analysis for Wireline and Radio Communication Systems
Chapter 4.1
Waveform and Vector Channel Models
Waveform & Vector Channel Models (1)

◊ In this chapter, we study the effect of noise on the reliability of the modulation systems studied in Chapter 3.
◊ We assume that the transmitter sends digital information by use of M signal waveforms {sm(t), m=1,2,…,M}. Each waveform is transmitted within the symbol interval of duration T, i.e. 0 ≤ t ≤ T.
◊ The additive white Gaussian noise (AWGN) channel model:
r(t) = sm(t) + n(t)
◊ sm(t): transmitted signal
◊ n(t): sample function of an AWGN process with PSD Φnn( f ) = N0/2 (W/Hz)
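As a quick numerical illustration of the vector form of this model, r = sm + n, the sketch below adds i.i.d. zero-mean Gaussian noise of variance N0/2 to a signal vector and empirically checks that variance. (The signal vector and N0 value are hypothetical; this is a minimal sketch, not part of the original slides.)

```python
import math
import random

def awgn_channel(s, N0, rng):
    """Pass a signal vector s through the AWGN vector channel r = s + n,
    where each noise component is i.i.d. zero-mean Gaussian, variance N0/2."""
    sigma = math.sqrt(N0 / 2)
    return [sj + rng.gauss(0.0, sigma) for sj in s]

rng = random.Random(0)
N0 = 2.0                      # noise PSD parameter, so variance N0/2 = 1
s = [1.0, -1.0, 0.5]          # hypothetical 3-dimensional signal vector
r = awgn_channel(s, N0, rng)

# Empirical check: the noise variance should be close to N0/2
samples = [awgn_channel([0.0], N0, rng)[0] for _ in range(200_000)]
var = sum(x * x for x in samples) / len(samples)
print(len(r), abs(var - N0 / 2) < 0.05)
```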
Waveform & Vector Channel Models (2)

◊ Based on the observed signal r(t), the receiver makes a decision about which message m, 1 ≤ m ≤ M, was transmitted.
◊ Optimum decision: minimize the error probability Pe = P[m̂ ≠ m]
◊ Any orthonormal basis {φj(t), 1 ≤ j ≤ N} can be used for expansion of a zero-mean white Gaussian process (Problem 2.8-1).
◊ The resulting coefficients are i.i.d. zero-mean Gaussian random variables with variance N0/2.
◊ {φj(t), 1 ≤ j ≤ N} can be used for expansion of the noise n(t).
◊ Using {φj(t), 1 ≤ j ≤ N}, r(t) = sm(t) + n(t) has the vector form
r = sm + n
◊ All vectors are N-dimensional.
◊ Components of n are i.i.d. zero-mean Gaussian with variance N0/2.
Waveform & Vector Channel Models (3)

◊ It is convenient to subdivide the receiver into two parts: the signal demodulator and the detector.
◊ The function of the signal demodulator is to convert the received waveform r(t) into an N-dimensional vector r = [r1 r2 … rN], where N is the dimension of the transmitted signal waveform.
◊ The function of the detector is to decide which of the M possible signal waveforms was transmitted, based on the vector r.
Waveform & Vector Channel Models (4)

◊ Two realizations of the signal demodulator are described in the next two sections:
◊ One is based on the use of signal correlators.
◊ The second is based on the use of matched filters.
◊ The optimum detector that follows the signal demodulator is designed to minimize the probability of error.
Chapter 4.1-1
Optimal Detection for General Vector Channel
4.1-1 Optimal Detection for General Vector Channel (1)

◊ AWGN channel model: r = sm + n
◊ Message m is chosen from the set {1,2,…,M} with probability Pm
◊ Components of n are i.i.d. N(0, N0/2); the PDF of the noise n is
p(n) = (1/√(πN0))^N e^(−Σ_{j=1}^{N} nj² / N0)
◊ A more general vector channel model:
◊ sm is selected from {sm, 1 ≤ m ≤ M} according to a priori probability Pm
◊ The received vector r statistically depends on the transmitted vector through the conditional PDF p(r | sm).
4.1-1 Optimal Detection for General Vector Channel (2)

◊ Based on the observation r, the receiver decides which message was transmitted, m̂ ∈ {1, 2, …, M}
◊ Decision function g(r): a function from R^N into the messages {1, 2, …, M}
◊ Given that r is received, the probability of correct detection:
P[correct decision | r] = P[m̂ sent | r]
◊ The probability of correct detection:
P[correct decision] = ∫ P[correct decision | r] p(r) dr = ∫ P[m̂ sent | r] p(r) dr
◊ Maximizing P[correct decision] = maximizing P[m̂ | r] for each r
◊ Optimal decision rule:
m̂ = g_opt(r) = arg max_{1≤m≤M} P[m | r] = arg max_{1≤m≤M} P[sm | r]
MAP and ML Receivers

◊ Optimum decision rule: m̂ = g_opt(r) = arg max_{1≤m≤M} P[sm | r]
⇒ Maximum a posteriori probability (MAP) rule
◊ The MAP rule can be simplified:
m̂ = g_opt(r) = arg max_{1≤m≤M} p(r, sm)/p(r) = arg max_{1≤m≤M} p(r | sm) p(sm)/p(r) = arg max_{1≤m≤M} Pm p(r | sm)
◊ When the messages are equiprobable a priori, P1 = … = PM = 1/M:
m̂ = g_opt(r) = arg max_{1≤m≤M} p(r | sm)
◊ p(r | sm) is called the likelihood of message m
⇒ Maximum-likelihood (ML) receiver
Note: p(r | sm) is the channel conditional PDF.
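For the AWGN case, where Pm p(r | sm) reduces to maximizing ln Pm − ||r − sm||²/N0, the MAP rule is a few lines of Python. This is a minimal sketch with a hypothetical binary antipodal constellation; with equal priors it reduces to the ML (minimum-distance) rule.

```python
import math

def map_detect(r, signals, priors, N0):
    """MAP decision for the AWGN vector channel:
    maximize ln Pm - ||r - sm||^2 / N0 over messages m."""
    def metric(m):
        d2 = sum((rj - sj) ** 2 for rj, sj in zip(r, signals[m]))
        return math.log(priors[m]) - d2 / N0
    return max(range(len(signals)), key=metric)

signals = [[1.0, 0.0], [-1.0, 0.0]]   # hypothetical antipodal pair
# Equal priors: the decision follows the nearest signal
print(map_detect([0.9, 0.1], signals, [0.5, 0.5], N0=1.0))    # -> 0
# A strong prior on message 1 overrides a small distance advantage
print(map_detect([0.1, 0.0], signals, [0.01, 0.99], N0=1.0))  # -> 1
```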
The Decision Region

◊ Any detector maps R^N → {1, 2, …, M}
⇒ Partition the output space R^N into M regions (D1, D2, …, DM); if r ∈ Dm, then m̂ = g(r) = m
◊ Dm is the decision region for message m
◊ For a MAP detector,
Dm = {r ∈ R^N : P[m | r] > P[m′ | r], for all 1 ≤ m′ ≤ M and m′ ≠ m}
◊ If more than one message achieves the maximum a posteriori probability, arbitrarily assign r to one of the decision regions.
The Error Probability (1)

◊ When sm is transmitted, an error occurs when r is not in Dm
◊ Symbol error probability of a receiver with decision regions {Dm, 1 ≤ m ≤ M}:
Pe = Σ_{m=1}^{M} Pm P[r ∉ Dm | sm sent] = Σ_{m=1}^{M} Pm Pe|m
◊ Pe|m is the error probability when message m is transmitted:
Pe|m = ∫_{Dm^c} p(r | sm) dr = Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
◊ Symbol error probability (or message error probability):
Pe = Σ_{m=1}^{M} Pm Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
The Error Probability (2)

◊ Another type of error probability, the bit error probability Pb: the error probability in transmission of a single bit
◊ Requires knowledge of how different bit sequences are mapped to signal points
◊ Finding the bit error probability is not easy unless the constellation exhibits certain symmetry properties
◊ Relation between symbol error probability and bit error probability, with k bits per symbol:
Pe/k ≤ Pb ≤ Pe ≤ k Pb
Sufficient Statistics (An Example)

◊ Assumption (1): the observation r can be written in terms of r1 and r2, r = (r1, r2)
◊ Assumption (2): p(r1, r2 | sm) = p(r1 | sm) p(r2 | r1)
◊ The MAP detection becomes
m̂ = arg max_{1≤m≤M} Pm p(r | sm) = arg max_{1≤m≤M} Pm p(r1, r2 | sm)
   = arg max_{1≤m≤M} Pm p(r1 | sm) p(r2 | r1) = arg max_{1≤m≤M} Pm p(r1 | sm)
◊ Under these assumptions, the optimal detection is
◊ based only on r1 ⇒ r1: a sufficient statistic for detection of sm
◊ r2 can be ignored ⇒ r2: irrelevant data or irrelevant information
◊ Recognizing sufficient statistics helps reduce the complexity of the detection by ignoring irrelevant data.
Preprocessing at the Receiver (1)

sm → Channel → r → G(r) → ρ → Detector → m̂

◊ Assume that the receiver applies an invertible operation G(r) before detection
◊ The optimal detection is
m̂ = arg max_{1≤m≤M} Pm p(r, ρ | sm) = arg max_{1≤m≤M} Pm p(r | sm) p(ρ | r) = arg max_{1≤m≤M} Pm p(r | sm)
◊ When r is given, ρ does not depend on sm
◊ The optimal detector based on the observation of ρ makes the same decision as the optimal detector based on the observation of r.
◊ The invertible operation does not change the optimality of the receiver.
Preprocessing at the Receiver (2)

◊ Ex. 4.1-3: Assume that the received vector is of the form
r = sm + n
where n is colored noise. Let us further assume that there exists an invertible whitening operator, denoted by W, such that v = Wn is a white vector.
◊ Consider
ρ = Wr = W sm + v
◊ Equivalent to a channel with white noise for detection
◊ No degradation in performance
◊ The linear operator W is called a whitening filter.
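To make the whitening idea concrete: one standard construction (an assumption here, not stated on the slide) takes W = L⁻¹ from the Cholesky factorization C = L Lᵀ of the noise covariance, so that v = Wn has identity covariance. The sketch below does this by hand for an illustrative 2×2 covariance and verifies W C Wᵀ = I.

```python
import math

# Illustrative positive-definite 2x2 noise covariance C
c11, c12, c22 = 2.0, 0.8, 1.5
# Cholesky factor L of C (lower triangular), computed by hand for 2x2
l11 = math.sqrt(c11)
l21 = c12 / l11
l22 = math.sqrt(c22 - l21 ** 2)
# Whitening operator W = L^{-1} (also lower triangular)
w11 = 1 / l11
w21 = -l21 / (l11 * l22)
w22 = 1 / l22

# Check that W C W^T = I, i.e. v = W n is white
m11, m12 = w11 * c11, w11 * c12              # M = W C, row 1
m21, m22 = w21 * c11 + w22 * c12, w21 * c12 + w22 * c22  # row 2
a = m11 * w11                                 # (M W^T)_{11}
b = m11 * w21 + m12 * w22                     # (M W^T)_{12}
c = m21 * w11                                 # (M W^T)_{21}
d = m21 * w21 + m22 * w22                     # (M W^T)_{22}
print(abs(a - 1) < 1e-9, abs(b) < 1e-9, abs(c) < 1e-9, abs(d - 1) < 1e-9)
```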
Chapter 4.2
Waveform and Vector AWGN Channels
Waveform & Vector AWGN Channels (1)

◊ Waveform AWGN channel:
r(t) = sm(t) + n(t)
◊ sm(t) ∈ {s1(t), s2(t), …, sM(t)} with prior probability Pm
◊ n(t): zero-mean white Gaussian with PSD N0/2
◊ By the Gram-Schmidt procedure, we derive an orthonormal basis {φj(t), 1 ≤ j ≤ N} and the vector representation of the signals {sm, 1 ≤ m ≤ M}
◊ The noise process n(t) is decomposed into two components:
◊ n1(t) = Σ_{j=1}^{N} nj φj(t), where nj = ⟨n(t), φj(t)⟩
◊ n2(t) = n(t) − n1(t)
Waveform & Vector AWGN Channels (2)

◊ sm(t) = Σ_{j=1}^{N} smj φj(t), where smj = ⟨sm(t), φj(t)⟩
◊ r(t) = Σ_{j=1}^{N} (smj + nj) φj(t) + n2(t)
◊ Define rj = smj + nj, where
rj = ⟨sm(t), φj(t)⟩ + ⟨n(t), φj(t)⟩ = ⟨sm(t) + n(t), φj(t)⟩ = ⟨r(t), φj(t)⟩
◊ So r(t) = Σ_{j=1}^{N} rj φj(t) + n2(t), where rj = ⟨r(t), φj(t)⟩
◊ The noise components {nj} are i.i.d. zero-mean Gaussian with variance N0/2
Waveform & Vector AWGN Channels (3)

◊ Prove that the noise components {nj} are i.i.d. zero-mean Gaussian with variance N0/2:
nj = ∫_{−∞}^{∞} n(t) φj(t) dt
E[nj] = E[∫_{−∞}^{∞} n(t) φj(t) dt] = ∫_{−∞}^{∞} E[n(t)] φj(t) dt = 0   (zero mean)
COV[ni nj] = E[ni nj] − E[ni] E[nj] = E[∫_{−∞}^{∞} n(t) φi(t) dt ∫_{−∞}^{∞} n(s) φj(s) ds]
           = ∫_{−∞}^{∞} ∫_{−∞}^{∞} E[n(t) n(s)] φi(t) φj(s) dt ds     (n(t) is white)
           = (N0/2) ∫_{−∞}^{∞} [∫_{−∞}^{∞} δ(t − s) φi(t) dt] φj(s) ds
           = (N0/2) ∫_{−∞}^{∞} φi(s) φj(s) ds = { N0/2, i = j ; 0, i ≠ j }
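The statements above (E[nj] = 0, VAR[nj] = N0/2, COV[ni nj] = 0 for i ≠ j) can be checked by Monte Carlo. A standard discretization (assumed here, not on the slide) approximates white noise of PSD N0/2 by independent samples of variance (N0/2)/dt; the basis functions below are illustrative half-interval rectangles.

```python
import math
import random

rng = random.Random(1)
N0, T, n = 2.0, 1.0, 100
dt = T / n
# Two orthonormal rectangular basis functions on [0, T)
phi1 = [math.sqrt(2 / T) if k < n // 2 else 0.0 for k in range(n)]
phi2 = [0.0 if k < n // 2 else math.sqrt(2 / T) for k in range(n)]

trials = 20_000
sigma = math.sqrt((N0 / 2) / dt)   # discrete-time white-noise std dev
n1s, n2s = [], []
for _ in range(trials):
    noise = [rng.gauss(0.0, sigma) for _ in range(n)]
    # Projections n_j = integral n(t) phi_j(t) dt, as Riemann sums
    n1s.append(sum(w * p for w, p in zip(noise, phi1)) * dt)
    n2s.append(sum(w * p for w, p in zip(noise, phi2)) * dt)

var1 = sum(x * x for x in n1s) / trials
cov = sum(a * b for a, b in zip(n1s, n2s)) / trials
print(abs(var1 - N0 / 2) < 0.05, abs(cov) < 0.05)  # both should hold
```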
Waveform & Vector AWGN Channels (4)

◊ COV[nj n2(t)] = E[nj n2(t)] = E[nj n(t)] − E[nj n1(t)]
   = E[n(t) ∫_{−∞}^{∞} n(s) φj(s) ds] − E[nj Σ_{i=1}^{N} ni φi(t)]
   = (N0/2) ∫_{−∞}^{∞} δ(t − s) φj(s) ds − (N0/2) φj(t)
   = (N0/2) φj(t) − (N0/2) φj(t) = 0
◊ n2(t) is uncorrelated with {nj}
⇒ n2(t) and n1(t) are independent
Waveform & Vector AWGN Channels (5)

◊ Since n2(t) is independent of sm(t) and n1(t), and
r(t) = Σ_{j=1}^{N} (smj + nj) φj(t) + n2(t)
◊ only the first component carries information
◊ The second component is irrelevant data and can be ignored
◊ The AWGN waveform channel
r(t) = sm(t) + n(t), 1 ≤ m ≤ M
is equivalent to the N-dimensional vector channel
r = sm + n, 1 ≤ m ≤ M
Chapter 4.2-1
Optimal Detection for the Vector AWGN Channel
4.2-1 Optimal Detection for the Vector AWGN Channel (1)

◊ The MAP detector in the AWGN channel (using r = sm + n, n ~ N(0, (N0/2) I)):
m̂ = arg max_{1≤m≤M} [Pm p(r | sm)]
  = arg max_{1≤m≤M} Pm pn(r − sm)
  = arg max_{1≤m≤M} [Pm (1/√(πN0))^N e^(−||r − sm||²/N0)]
  = arg max_{1≤m≤M} [Pm e^(−||r − sm||²/N0)]        ((1/√(πN0))^N is a constant)
  = arg max_{1≤m≤M} [ln Pm − ||r − sm||²/N0]        (ln x is increasing)
4.2-1 Optimal Detection for the Vector AWGN Channel (2)

m̂ = arg max_{1≤m≤M} [ln Pm − ||r − sm||²/N0]
  = arg max_{1≤m≤M} [(N0/2) ln Pm − (1/2)||r − sm||²]                    (multiply by N0/2)
  = arg max_{1≤m≤M} [(N0/2) ln Pm − (1/2)(||r||² + ||sm||² − 2 r·sm)]    (||sm||² = Em)
  = arg max_{1≤m≤M} [(N0/2) ln Pm − (1/2) Em + r·sm]                     (||r||² is dropped)
  = arg max_{1≤m≤M} [ηm + r·sm]   (MAP)
where ηm = (N0/2) ln Pm − (1/2) Em is a bias term.
4.2-1 Optimal Detection for the Vector AWGN Channel (3)

◊ MAP decision rule for the AWGN vector channel:
m̂ = arg max_{1≤m≤M} [ηm + r·sm],   ηm = (N0/2) ln Pm − (1/2) Em
◊ If Pm = 1/M for all m, the optimal decision becomes
m̂ = arg max_{1≤m≤M} [(N0/2) ln Pm − (1/2)||r − sm||²] = arg max_{1≤m≤M} [−||r − sm||²] = arg min_{1≤m≤M} ||r − sm||
◊ Nearest-neighbor or minimum-distance detector
◊ For equiprobable signals in the AWGN channel:
MAP = ML = minimum distance
4.2-1 Optimal Detection for the Vector AWGN Channel (4)

◊ For the minimum-distance detector, the boundary between decision regions Dm and Dm′ is equidistant from sm and sm′
◊ Figure (not reproduced): a 2-dimensional constellation (N=2) with 4 signal points (M=4)
◊ When the signals are equiprobable and have equal energy:
ηm = (N0/2) ln Pm − (1/2) Em is independent of m
m̂ = arg max_{1≤m≤M} r·sm
4.2-1 Optimal Detection for the Vector AWGN Channel (5)

◊ In general, the decision region is
Dm = {r ∈ R^N : r·sm + ηm > r·sm′ + ηm′, for all 1 ≤ m′ ≤ M and m′ ≠ m}   (4.2-20)
◊ Each region is described in terms of at most M−1 inequalities
◊ Each boundary:
r·(sm − sm′) > ηm′ − ηm   ⇒ the equation of a hyperplane
◊ Since r·sm = ∫_{−∞}^{∞} r(t) sm(t) dt and Em = ||sm||² = ∫_{−∞}^{∞} sm²(t) dt, in the AWGN channel:
MAP: m̂ = arg max_{1≤m≤M} [(N0/2) ln Pm + ∫_{−∞}^{∞} r(t) sm(t) dt − (1/2) ∫_{−∞}^{∞} sm²(t) dt]
ML:  m̂ = arg max_{1≤m≤M} [∫_{−∞}^{∞} r(t) sm(t) dt − (1/2) ∫_{−∞}^{∞} sm²(t) dt]
4.2-1 Optimal Detection for the Vector AWGN Channel (6)

◊ Distance metric: the Euclidean distance between r and sm
D(r, sm) = ||r − sm||² = ∫_{−∞}^{∞} (r(t) − sm(t))² dt
◊ Modified distance metric: the distance when ||r||² is removed
D′(r, sm) = −2 r·sm + ||sm||²
◊ Correlation metric: the negative of the modified distance metric
C(r, sm) = 2 r·sm − ||sm||² = 2 ∫_{−∞}^{∞} r(t) sm(t) dt − ∫_{−∞}^{∞} sm²(t) dt
◊ With these definitions,
MAP: m̂ = arg max_{1≤m≤M} [N0 ln Pm − D(r, sm)] = arg max_{1≤m≤M} [N0 ln Pm + C(r, sm)]
ML:  m̂ = arg max_{1≤m≤M} C(r, sm)
Optimal Detection for Binary Antipodal Signaling (1)

◊ In binary antipodal signaling,
◊ s1(t) = s(t), with p1 = p
◊ s2(t) = −s(t), with p2 = 1 − p
◊ Vector representation (N=1):
s1 = √Es = √Eb;   s2 = −√Es = −√Eb
◊ From (4.2-20):
D1 = {r : r√Eb + (N0/2) ln p − (1/2) Eb > −r√Eb + (N0/2) ln(1 − p) − (1/2) Eb}
   = {r : r > (N0 / (4√Eb)) ln((1 − p)/p)}
   = {r : r > rth},   where rth = (N0 / (4√Eb)) ln((1 − p)/p)
Optimal Detection for Binary Antipodal Signaling (2)

◊ D1 = {r : r > rth ≡ (N0 / (4√Eb)) ln((1 − p)/p)}
◊ When p → 0, rth → ∞: the entire real line becomes D2
◊ When p → 1, rth → −∞: the entire real line becomes D1
◊ When p = 1/2, rth = 0: the minimum-distance rule
◊ Error probability of the MAP receiver:
Pe = Σ_{m=1}^{2} Pm Σ_{1≤m′≤2, m′≠m} ∫_{Dm′} p(r | sm) dr
   = p ∫_{D2} p(r | s = √Eb) dr + (1 − p) ∫_{D1} p(r | s = −√Eb) dr
   = p ∫_{−∞}^{rth} p(r | s = √Eb) dr + (1 − p) ∫_{rth}^{∞} p(r | s = −√Eb) dr
Optimal Detection for Binary Antipodal Signaling (3)

◊ Pe = p ∫_{−∞}^{rth} p(r | s = √Eb) dr + (1 − p) ∫_{rth}^{∞} p(r | s = −√Eb) dr
    = p P[N(√Eb, N0/2) < rth] + (1 − p) P[N(−√Eb, N0/2) > rth]
    = p Q((√Eb − rth)/√(N0/2)) + (1 − p) Q((rth + √Eb)/√(N0/2))
where Q(x) = P[N(0,1) > x] and Q(−x) = 1 − Q(x); the first term uses
P[N(√Eb, N0/2) < rth] = 1 − P[N(√Eb, N0/2) > rth] = 1 − Q((rth − √Eb)/√(N0/2)) = Q((√Eb − rth)/√(N0/2))
◊ When p = 1/2, rth = 0, the error probability simplifies to
Pe = Q(√(2Eb/N0))
◊ Since the system is binary, Pe = Pb.
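The p = 1/2 result Pe = Q(√(2Eb/N0)) is easy to evaluate numerically, since Q(x) = (1/2) erfc(x/√2). A minimal sketch (the 10 dB operating point is just an example):

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Binary antipodal signaling, equiprobable: Pe = Q(sqrt(2 Eb/N0))
Eb_over_N0_dB = 10.0
g = 10 ** (Eb_over_N0_dB / 10)   # Eb/N0 as a linear ratio
Pe = Q(math.sqrt(2 * g))
print(Pe)   # ~ 3.9e-6 at 10 dB
```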
Error Probability for Equiprobable Binary Signaling Schemes (1)

◊ In the AWGN channel, the transmitter transmits either s1(t) or s2(t); assume the two signals are equiprobable
◊ Equiprobable signals in the AWGN channel ⇒ the decision regions are separated by the perpendicular bisector of the line connecting s1 and s2
◊ The error probabilities when s1 or s2 is transmitted are equal
◊ When s1 is transmitted, an error occurs when
◊ r is in D2, i.e.
◊ the projection of n = r − s1 on the unit vector (s2 − s1)/d12 exceeds d12/2, where d12 = ||s2 − s1||:
Pb = P[n·(s2 − s1)/d12 > d12/2] = P[n·(s2 − s1) > d12²/2]
Error Probability for Equiprobable Binary Signaling Schemes (2)

◊ Since n·(s2 − s1) ~ N(0, d12² N0/2),
Pb = P[n·(s2 − s1) > d12²/2] = Q((d12²/2)/√(d12² N0/2)) = Q(d12/√(2N0))
using P[X > α] = Q((α − m)/σ) and P[X < α] = Q((m − α)/σ) from (2.3-12)
◊ Since Q(x) is decreasing, minimizing the error probability = maximizing d12
◊ d12² = ∫_{−∞}^{∞} (s1(t) − s2(t))² dt
◊ When the equiprobable signals have the same energy, Es1 = Es2 = E:
d12² = Es1 + Es2 − 2⟨s1(t), s2(t)⟩ = 2E(1 − ρ)
◊ −1 ≤ ρ ≤ 1 is the cross-correlation coefficient
◊ d12 is maximized when ρ = −1 ⇒ antipodal signals
Optimal Detection for Binary Orthogonal Signaling (1)

◊ For binary orthogonal signals,
∫_{−∞}^{∞} si(t) sj(t) dt = { Eb, i = j ; 0, i ≠ j },   1 ≤ i, j ≤ 2
◊ Choosing φj(t) = sj(t)/√Eb, the vector representation is
s1 = (√Eb, 0);   s2 = (0, √Eb)
◊ When the signals are equiprobable (see figure), the error probability is (with d = √(2Eb)):
Pb = Q(√(d²/(2N0))) = Q(√(Eb/N0))
◊ For the same Pb, binary orthogonal signaling requires twice the energy per bit of a binary antipodal signaling system to provide the same error probability.
Optimal Detection for Binary Orthogonal Signaling (2)

◊ Figure (not reproduced): error probability vs. SNR per bit for binary orthogonal and binary antipodal signaling systems.
◊ Signal-to-noise ratio (SNR) per bit: γb = Eb/N0
Chapter 4.2-2
Implementation of the Optimal Receiver for AWGN Channels
4.2-2 Implementation of the Optimal Receiver for AWGN Channels

◊ We present different implementations of MAP receivers in the AWGN channel:
◊ Correlation receiver
◊ Matched filter receiver
◊ All of these structures are equivalent in performance and result in the minimum error probability.
The Correlation Receiver (1)

◊ The MAP decision in the AWGN channel (from 4.2-17):
m̂ = arg max_{1≤m≤M} [ηm + r·sm],   where ηm = (N0/2) ln Pm − (1/2) Em
◊ r is derived from rj = ∫_{−∞}^{∞} r(t) φj(t) dt ⇒ correlation receiver
1) Find the inner product of r and sm, 1 ≤ m ≤ M
2) Add the bias term ηm
3) Choose the m that maximizes the result
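The projection step rj = ∫ r(t) φj(t) dt can be approximated by a Riemann sum over waveform samples. A minimal sketch, using illustrative half-interval rectangular basis functions (the dimensions and sample counts are hypothetical):

```python
import math

def correlation_demodulator(r, basis, dt):
    """Project received waveform samples onto each orthonormal basis
    function: r_j = integral r(t) phi_j(t) dt, as a Riemann sum."""
    return [sum(ri * pi for ri, pi in zip(r, phi)) * dt for phi in basis]

# Two orthonormal rectangular basis functions on [0, T)
T, n = 1.0, 1000
dt = T / n
phi1 = [math.sqrt(2 / T) if k < n // 2 else 0.0 for k in range(n)]
phi2 = [0.0 if k < n // 2 else math.sqrt(2 / T) for k in range(n)]

# A noiseless waveform s(t) = 2*phi1(t) + 3*phi2(t)
s = [2 * p1 + 3 * p2 for p1, p2 in zip(phi1, phi2)]
coeffs = correlation_demodulator(s, [phi1, phi2], dt)
print(coeffs)   # ~ [2.0, 3.0]: the demodulator recovers the coordinates
```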
The Correlation Receiver (2)

◊ The ηm's and sm's can be computed once and stored in memory.
The Correlation Receiver (3)

◊ Another implementation:
m̂ = arg max_{1≤m≤M} [ηm + ∫_{−∞}^{∞} r(t) sm(t) dt],   where ηm = (N0/2) ln Pm − (1/2) Em
◊ Requires M correlators
◊ Usually M > N
◊ Less preferred
The Matched Filter Receiver (1)

◊ In both correlation receiver implementations, we compute
rx = ∫_{−∞}^{∞} r(t) x(t) dt
◊ x(t) is φj(t) or sm(t)
◊ Define h(t) = x(T − t) for arbitrary T: a filter matched to x(t)
◊ If r(t) is applied to h(t), the output y(t) is
y(t) = r(t) * h(t) = ∫_{−∞}^{∞} r(τ) h(t − τ) dτ = ∫_{−∞}^{∞} r(τ) x(T − t + τ) dτ
since h(t − τ) = x(T − (t − τ)) = x(T − t + τ)
◊ rx = y(T) = ∫_{−∞}^{∞} r(τ) x(τ) dτ
◊ rx can be obtained by sampling the output of the matched filter at t = T.
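The identity rx = y(T) — sampling the matched filter output at t = T reproduces the correlation integral — can be checked in discrete time. The sample values below are arbitrary illustrations:

```python
def conv(a, b):
    """Full discrete linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

r = [0.3, -1.2, 0.8, 2.0]   # received samples (arbitrary)
x = [1.0, 0.5, -0.5, 1.5]   # signal to correlate against (arbitrary)
dt = 0.25

h = x[::-1]                       # matched filter h(t) = x(T - t)
y = [v * dt for v in conv(r, h)]  # filter output y(t) = (r * h)(t)
y_T = y[len(x) - 1]               # sample at t = T

direct = sum(ri * xi for ri, xi in zip(r, x)) * dt  # correlation integral
print(abs(y_T - direct) < 1e-12)  # True: matched filter + sampling = correlator
```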
The Matched Filter Receiver (2)

◊ A matched filter implementation of the optimal receiver (figure not reproduced).
The Matched Filter Receiver (3)

Frequency-domain interpretation:
◊ Property I:
◊ The filter matched to signal s(t) is h(t) = s(T − t). By the properties of the Fourier transform,
H( f ) = S*( f ) e^{−j2πfT}
(the conjugate of the signal spectrum, with a sampling delay of T)
◊ |H( f )| = |S( f )|
◊ ∠H( f ) = −∠S( f ) − 2πfT
The Matched Filter Receiver (4)

◊ Property II: signal-to-noise maximizing property
◊ Assume that r(t) = s(t) + n(t) is passed through a filter h(t), and the output y(t) ≡ ys(t) + v(t) is sampled at time T
◊ Signal part: F{ys(t)} = H( f ) S( f )
⇒ ys(T) = ∫_{−∞}^{∞} H( f ) S( f ) e^{j2πfT} df
◊ Zero-mean Gaussian noise: Sv( f ) = (N0/2)|H( f )|²
⇒ VAR[v(T)] = (N0/2) ∫_{−∞}^{∞} |H( f )|² df = (N0/2) Eh
◊ Eh is the energy in h(t), by Rayleigh's theorem:
∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X( f )|² df = Ex
The Matched Filter Receiver (5)

◊ The SNR at the output of the filter H( f ) is
SNR_O = ys²(T) / VAR[v(T)]
◊ From the Cauchy-Schwarz inequality,
ys²(T) = [∫_{−∞}^{∞} H( f ) S( f ) e^{j2πfT} df]² ≤ ∫_{−∞}^{∞} |H( f )|² df · ∫_{−∞}^{∞} |S( f ) e^{j2πfT}|² df = Eh Es
◊ Equality holds iff H( f ) = α S*( f ) e^{−j2πfT}, α ∈ C
◊ SNR_O ≤ Es Eh / ((N0/2) Eh) = 2Es/N0
◊ The matched filter h(t) = s(T − t), i.e. H( f ) = S*( f ) e^{−j2πfT}, maximizes the SNR.
Matched Filter: Time-Domain Property

◊ If a signal s(t) is corrupted by AWGN, the filter with an impulse response matched to s(t) maximizes the output signal-to-noise ratio (SNR).
◊ Proof: let us assume the received signal r(t) consists of the signal s(t) and AWGN n(t), which has zero mean and Φnn( f ) = N0/2 W/Hz.
◊ Suppose the signal r(t) is passed through a filter with impulse response h(t), 0 ≤ t ≤ T, and its output is sampled at time t = T. The output signal of the filter is:
y(t) = r(t) * h(t) = ∫_0^t r(τ) h(t − τ) dτ = ∫_0^t s(τ) h(t − τ) dτ + ∫_0^t n(τ) h(t − τ) dτ
Matched Filter (Proof, cont.)

◊ At the sampling instant t = T:
y(T) = ∫_0^T s(τ) h(T − τ) dτ + ∫_0^T n(τ) h(T − τ) dτ = ys(T) + yn(T)
◊ The problem is to select the filter impulse response that maximizes the output SNR_0, defined as:
SNR_0 = ys²(T) / E[yn²(T)]
E[yn²(T)] = ∫_0^T ∫_0^T E[n(τ) n(t)] h(T − τ) h(T − t) dt dτ
          = (1/2) N0 ∫_0^T ∫_0^T δ(t − τ) h(T − τ) h(T − t) dt dτ = (1/2) N0 ∫_0^T h²(T − t) dt
Matched Filter (Proof, cont.)

◊ Substituting ys(T) and E[yn²(T)] into SNR_0 (with the change of variable τ′ = T − τ):
SNR_0 = [∫_0^T s(τ) h(T − τ) dτ]² / ((1/2) N0 ∫_0^T h²(T − t) dt)
      = [∫_0^T h(τ′) s(T − τ′) dτ′]² / ((1/2) N0 ∫_0^T h²(T − t) dt)
◊ The denominator of the SNR depends on the energy in h(t).
◊ The maximum output SNR over h(t) is obtained by maximizing the numerator subject to the constraint that the denominator is held constant.
Matched Filter (Proof, cont.)

◊ Cauchy-Schwarz inequality: if g1(t) and g2(t) are finite-energy signals, then
[∫_{−∞}^{∞} g1(t) g2(t) dt]² ≤ ∫_{−∞}^{∞} g1²(t) dt · ∫_{−∞}^{∞} g2²(t) dt
with equality when g1(t) = C g2(t) for an arbitrary constant C.
◊ If we set g1(t) = h(t) and g2(t) = s(T − t), it is clear that the SNR is maximized when h(t) = C s(T − t).
Matched Filter (Proof, cont.)

◊ The output (maximum) SNR obtained with the matched filter h(t) = C s(T − t) is:
SNR_0 = [∫_0^T s(τ) h(T − τ) dτ]² / ((1/2) N0 ∫_0^T h²(T − t) dt)
      = [∫_0^T s(τ) C s(T − (T − τ)) dτ]² / ((1/2) N0 C² ∫_0^T s²(T − (T − t)) dt)
      = (2/N0) ∫_0^T s²(t) dt = 2ε/N0
◊ Note that the output SNR from the matched filter depends on the energy of the waveform s(t) but not on the detailed characteristics of s(t).
The Matched Filter Receiver (6)

◊ Ex. 4.2-1: M=4 biorthogonal signals are constructed from the two signals in fig. (a) for transmission over an AWGN channel. The noise is zero mean with PSD N0/2.
◊ Dimension N=2, and basis functions
φ1(t) = √(2/T), 0 ≤ t ≤ T/2
φ2(t) = √(2/T), T/2 ≤ t ≤ T
◊ Impulse responses of the two matched filters (fig. (b)):
h1(t) = φ1(T − t) = √(2/T), T/2 ≤ t ≤ T
h2(t) = φ2(T − t) = √(2/T), 0 ≤ t ≤ T/2
y(t) = s(t) * h(t) = ∫_{−∞}^{∞} s(τ) h(t − τ) dτ = ∫_{−∞}^{∞} s(τ) s(T − t + τ) dτ
The Matched Filter Receiver (7)

◊ If s1(t) is transmitted, the noise-free responses of the two matched filters (fig. (c)) sampled at t = T are
y1s(T) = √(A²T/2) = √E;   y2s(T) = 0
◊ If s1(t) is transmitted, the received vector formed from the two matched filter outputs at the sampling instant t = T is
r = (r1, r2) = (√E + n1, n2)
◊ Noise: n1 = y1n(T) and n2 = y2n(T), where
ykn(T) = ∫_0^T n(t) φk(t) dt,   k = 1, 2
a) E[nk] = E[ykn(T)] = 0
b) VAR[nk] = (N0/2) Eφk = N0/2   (from VAR[v(T)] = (N0/2) Eh, (4.2-52))
◊ SNR for the first matched filter:
SNR_o = (√E)² / (N0/2) = 2E/N0
Chapter 4.2-3
A Union Bound on the Probability of Error of ML Detection
4.2-3 A Union Bound on the Probability of Error of ML Detection (1)

◊ When the signals are equiprobable, Pm = 1/M, the ML decision is optimal. The error probability becomes
Pe = (1/M) Σ_{m=1}^{M} Pe|m = (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
◊ For the AWGN channel,
Pe|m = Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr = Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} pn(r − sm) dr   (4.2-63)
     = (1/√(πN0))^N Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} e^{−||r − sm||²/N0} dr
◊ For most constellations, the integrals do not have a closed form
◊ It is convenient to have upper bounds for the error probability
◊ The union bound is the simplest and most widely used bound; it is quite tight, particularly at high SNR.
4.2-3 A Union Bound on the Probability of Error of ML Detection (2)

◊ In general, the decision region Dm′ under ML detection is
Dm′ = {r ∈ R^N : p(r | sm′) > p(r | sk), for all 1 ≤ k ≤ M and k ≠ m′}
◊ Define Dmm′ = {r : p(r | sm′) > p(r | sm)}
◊ Dmm′ is the decision region for m′ in a binary test between equiprobable signals sm and sm′
◊ Dm′ ⊆ Dmm′, so
∫_{Dm′} p(r | sm) dr ≤ ∫_{Dmm′} p(r | sm) dr ≡ Pm→m′   (pairwise error probability)   (4.2-67)
⇒ Pe|m = Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr ≤ Σ_{1≤m′≤M, m′≠m} ∫_{Dmm′} p(r | sm) dr = Σ_{1≤m′≤M, m′≠m} Pm→m′
⇒ Pe ≤ (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} ∫_{Dmm′} p(r | sm) dr = (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} Pm→m′
4.2-3 A Union Bound on the Probability of Error of ML Detection (3)

◊ In an AWGN channel,
◊ Pairwise error probability: Pm→m′ = Pb = Q(√(d²mm′/(2N0)))   (4.2-37)
◊ Pe ≤ (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} Q(√(d²mm′/(2N0)))
     ≤ (1/(2M)) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} e^{−d²mm′/(4N0)}   (using Q(x) ≤ (1/2) e^{−x²/2})
This is the union bound for an AWGN channel.
◊ Distance enumerator function for a constellation:
T(X) = Σ_{1≤m,m′≤M, m≠m′} X^{d²mm′} = Σ_{all distinct d's} ad X^{d²},   where dmm′ = ||sm − sm′||
◊ ad: the number of ordered pairs (m, m′) such that m ≠ m′ and ||sm − sm′|| = d
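The union bound in its Q-function form is straightforward to compute for any small constellation. A minimal sketch for an illustrative 4-point (QPSK-like) constellation; the point coordinates and N0 are hypothetical:

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(points, N0):
    """Union bound Pe <= (1/M) sum_m sum_{m' != m} Q(d_mm' / sqrt(2 N0))
    for an equiprobable constellation given as coordinate tuples."""
    M = len(points)
    total = 0.0
    for m in range(M):
        for mp in range(M):
            if mp != m:
                d = math.dist(points[m], points[mp])
                total += Q(d / math.sqrt(2 * N0))
    return total / M

pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # hypothetical unit-energy points
ub = union_bound(pts, N0=0.5)
print(ub)   # ~ 0.18: 2*Q(sqrt(2)) + Q(2) per point at this SNR
```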
4.2-3 A Union Bound on the Probability of Error of ML Detection (4)

◊ Union bound in terms of the distance enumerator:
Pe ≤ (1/(2M)) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} e^{−d²mm′/(4N0)} = (1/(2M)) T(X) |_{X = e^{−1/(4N0)}}
◊ Minimum distance: dmin = min_{1≤m,m′≤M, m≠m′} ||sm − sm′||
◊ Since Q(x) is decreasing,
Q(√(d²mm′/(2N0))) ≤ Q(√(d²min/(2N0)))
◊ The error probability (a looser form of the union bound):
Pe ≤ (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} Q(√(d²mm′/(2N0))) ≤ (M − 1) Q(√(d²min/(2N0))) ≤ ((M − 1)/2) e^{−d²min/(4N0)}
◊ A good constellation provides the maximum possible minimum distance.
4.2-3 A Union Bound on the Probability of Error of ML Detection (5)

◊ Ex. 4.2-2: Consider a 16-QAM constellation (M=16).
◊ From Chapter 3.2 (equation 3.2-44), the minimum distance is
dmin = √(6 log2 M / (M − 1) · Ebavg) = √((8/5) Ebavg)
◊ There are 16×15 = 240 possible ordered distances
◊ Distance enumerator function:
T(X) = 48 X^{d²} + 36 X^{2d²} + 32 X^{4d²} + 48 X^{5d²} + 16 X^{8d²} + 16 X^{9d²} + 24 X^{10d²} + 16 X^{13d²} + 4 X^{18d²}
◊ Upper bound on the error probability:
Pe ≤ (1/32) T(e^{−1/(4N0)})
4.2-3 A Union Bound on the Probability of Error of ML Detection (6)

◊ Looser bound:
Pe ≤ ((M − 1)/2) e^{−d²min/(4N0)} = (15/2) e^{−(2/5) Ebavg/N0}
◊ When SNR is large, T(X) ≈ 48 X^{d²}, so
Pe ≤ (1/32) T(e^{−1/(4N0)}) ≈ (48/32) e^{−d²min/(4N0)} = (3/2) e^{−(2/5) Ebavg/N0}
◊ Exact error probability:
Pe = 3 Q(√(4Ebavg/(5N0))) − (9/4) [Q(√(4Ebavg/(5N0)))]²
(see Example 4.3-1)
Lower Bound on the Probability of Error (1)

◊ In an equiprobable M-ary signaling scheme,
Pe = (1/M) Σ_{m=1}^{M} P[error | m sent] = (1/M) Σ_{m=1}^{M} ∫_{Dm^c} p(r | sm) dr
Since Dmm′ ⊆ Dm^c (because Dm′ ⊆ Dmm′),
Pe ≥ (1/M) Σ_{m=1}^{M} ∫_{Dmm′} p(r | sm) dr = (1/M) Σ_{m=1}^{M} Q(dmm′/√(2N0)),   for any m′ ≠ m
◊ To derive the tightest lower bound, maximize the right-hand side, i.e., for each m choose the m′ minimizing dmm′:
Pe ≥ (1/M) Σ_{m=1}^{M} max_{m′≠m} Q(dmm′/√(2N0)) = (1/M) Σ_{m=1}^{M} Q(d^m_min/√(2N0))
◊ d^m_min: the distance from sm to its nearest neighbor
Lower Bound on the Probability of Error (2)

◊ Since d^m_min ≥ dmin (where d^m_min denotes the distance from sm to its nearest neighbor in the constellation), each term can be lower-bounded:
Q(d^m_min/√(2N0)) ≥ { Q(dmin/√(2N0)), if at least one signal is at distance dmin from sm ; 0, otherwise }
◊ Pe ≥ (1/M) Σ_{1≤m≤M : ∃m′≠m, ||sm−sm′||=dmin} Q(dmin/√(2N0))
◊ Nmin: the number of points sm in the constellation such that ∃ m′ ≠ m with ||sm − sm′|| = dmin
◊ Combining the lower and upper bounds:
(Nmin/M) Q(dmin/√(2N0)) ≤ Pe ≤ (M − 1) Q(dmin/√(2N0))
Chapter 4.3: Optimal Detection and Error Probability for Bandlimited Signaling
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ In this section we study signaling schemes that are mainly characterized by their low bandwidth requirements.
◊ These signaling schemes have low dimensionality, which is independent of the number of transmitted signals, and, as we will see, their power efficiency decreases when the number of messages increases.
◊ This family of signaling schemes includes ASK, PSK, and QAM.
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ For the ASK signaling scheme:
dmin = √(12 log2 M / (M² − 1) · Ebavg)   (3.2-22)
◊ The constellation points: {±dmin/2, ±3dmin/2, …, ±(M−1)dmin/2}
◊ Two types of signal points:
◊ M−2 inner points: a detection error occurs if |n| > dmin/2
◊ 2 outer points: the error probability is half that of the inner points
◊ Let Pei and Peo be the error probabilities of the inner and outer points:
Pei = P[|n| > dmin/2] = 2Q(dmin/√(2N0))
Peo = (1/2) Pei = Q(dmin/√(2N0))
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ Symbol error probability:

      P_e = (1/M) ∑_{m=1}^{M} P[error | m sent]
          = (1/M) [ (M−2)·2Q( d_min / √(2N₀) ) + 2Q( d_min / √(2N₀) ) ]
          = (2(M−1)/M) Q( d_min / √(2N₀) )
          = 2(1 − 1/M) Q( √( 6 log₂M / (M² − 1) · E_bavg/N₀ ) )     using d_min = √( 12 log₂M / (M² − 1) · E_bavg )
          ≈ 2Q( √( 6 log₂M / (M² − 1) · E_bavg/N₀ ) )     for large M

◊ The effective SNR factor 6 log₂M/(M² − 1) decreases with M.
◊ Doubling M ⇒ increasing the rate by 1 bit/transmission ⇒ need about 4 times the SNR/bit to keep the same performance.
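As a quick numerical check, the closed-form M-PAM expression above can be evaluated directly. This is a minimal sketch (the function names are mine, not from the slides), using the identity Q(x) = erfc(x/√2)/2:

```python
from math import erfc, log2, sqrt

def Q(x):
    # Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

def pam_symbol_error(M, ebn0):
    # Symbol error probability of M-ary PAM/ASK; ebn0 = E_bavg/N0 (linear)
    return 2 * (1 - 1 / M) * Q(sqrt(6 * log2(M) / (M**2 - 1) * ebn0))
```

For M = 2 this reduces to Q(√(2E_b/N₀)), the binary antipodal result.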
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ [Figure: symbol error probability for ASK or PAM signaling]
◊ For large M, the gap between the curves for M and 2M is roughly 6 dB.
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ In M-ary PSK signaling, assume the signals are equiprobable.
  ⇒ Minimum-distance decision is optimal.
  ⇒ Error probability = error probability when s₁ is transmitted.
◊ When s₁ = (√E, 0) is transmitted, the received signal is

      r = (r₁, r₂) = (√E + n₁, n₂)

◊ r₁ ~ N(√E, N₀/2) and r₂ ~ N(0, N₀/2) are independent:

      p(r₁, r₂) = (1/(πN₀)) exp( −[(r₁ − √E)² + r₂²] / N₀ )

◊ In polar coordinates, with V = √(r₁² + r₂²) and Θ = arctan(r₂/r₁):

      p_{V,Θ}(v, θ) = (v/(πN₀)) exp( −(v² + E − 2√E v cosθ) / N₀ )
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Marginal PDF of Θ is

      p_Θ(θ) = ∫₀^∞ p_{V,Θ}(v, θ) dv
             = ∫₀^∞ (v/(πN₀)) exp( −(v² + E − 2√E v cosθ) / N₀ ) dv
             = (1/(2π)) e^{−γ_s sin²θ} ∫₀^∞ v e^{−(v − √(2γ_s) cosθ)²/2} dv

◊ Symbol SNR: γ_s = E/N₀
◊ As γ_s increases, p_Θ(θ) becomes more peaked around θ = 0.
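The closed form above still contains an integral over the envelope v; it can be evaluated numerically. The sketch below (a midpoint-rule quadrature, an assumption of mine rather than anything in the slides) lets one verify that p_Θ integrates to 1 over (−π, π] and sharpens around θ = 0 as γ_s grows:

```python
from math import cos, exp, pi, sin, sqrt

def p_theta(theta, gamma_s, steps=1000, vmax=30.0):
    # Marginal phase PDF p_Theta(theta) for M-PSK in AWGN, with the
    # envelope v integrated out numerically (midpoint rule on [0, vmax]).
    mu = sqrt(2 * gamma_s) * cos(theta)
    dv = vmax / steps
    acc = 0.0
    for i in range(steps):
        v = (i + 0.5) * dv
        acc += v * exp(-(v - mu)**2 / 2) * dv
    return exp(-gamma_s * sin(theta)**2) / (2 * pi) * acc
```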
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Decision region: D₁ = {θ : −π/M < θ ≤ π/M}
◊ Error probability is

      P_e = 1 − ∫_{−π/M}^{π/M} p_Θ(θ) dθ

◊ It does not have a simple form except for M = 2 or 4.
◊ When M = 2 ⇒ binary antipodal signaling:

      P_b = Q( √(2E_b/N₀) )

◊ When M = 4 ⇒ two binary phase-modulated signals on quadrature carriers:

      P_c = (1 − P_b)² = [ 1 − Q( √(2E_b/N₀) ) ]²
      P_e = 1 − P_c = 2Q( √(2E_b/N₀) ) [ 1 − (1/2) Q( √(2E_b/N₀) ) ]
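The exact M = 4 result above is easy to evaluate. A minimal sketch (names are my own):

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2))

def qpsk_symbol_error(ebn0):
    # Exact QPSK symbol error probability: two independent binary
    # phase decisions, each with error probability Q(sqrt(2*Eb/N0)).
    pb = Q(sqrt(2 * ebn0))
    return 2 * pb * (1 - pb / 2)
```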
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ [Figure: symbol error probability of M-PSK]
◊ As M increases, the required SNR increases.
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large SNR (E/N₀ >> 1), p_Θ(θ) is approximated by

      p_Θ(θ) ≈ √(γ_s/π) cosθ e^{−γ_s sin²θ},   for |θ| ≤ π/2

◊ Error probability is approximated by (substituting u = √γ_s sinθ, γ_s = E/N₀):

      P_e ≈ 1 − ∫_{−π/M}^{π/M} √(γ_s/π) cosθ e^{−γ_s sin²θ} dθ
          ≈ (2/√π) ∫_{√γ_s sin(π/M)}^{∞} e^{−u²} du
          = 2Q( √(2γ_s) sin(π/M) )
          = 2Q( √(2 log₂M · E_b/N₀) · sin(π/M) )

◊ E_b/N₀ = (E/N₀)/log₂M = γ_s/log₂M. When M = 2 or 4:

      P_e ≈ 2Q( √(2E_b/N₀) )
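The high-SNR approximation above is the usual working formula for M-PSK; a minimal sketch (function name is my own):

```python
from math import erfc, log2, pi, sin, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def psk_symbol_error_approx(M, ebn0):
    # High-SNR approximation 2*Q( sqrt(2*log2(M)*Eb/N0) * sin(pi/M) )
    return 2 * Q(sqrt(2 * log2(M) * ebn0) * sin(pi / M))
```

For M = 4 the argument collapses to √(2E_b/N₀), matching the slide's special case.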
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large M and large SNR:
  ◊ sin(π/M) ≈ π/M
  ◊ Error probability is approximated by

      P_e ≈ 2Q( √( 2π² log₂M / M² · E_b/N₀ ) )   for large M

◊ For large M, doubling M reduces the effective SNR by 6 dB.
◊ When a Gray code is used in the mapping:
  ◊ Since the most probable errors are erroneous selections of a phase adjacent to the true phase,

      P_b ≈ (1/k) P_e
Differentially Encoded PSK Signaling

◊ In practice, the carrier phase is extracted from the received signal by performing a nonlinear operation ⇒ phase ambiguity.
◊ For BPSK:
  • The received signal is first squared.
  • The resulting double-frequency component is filtered.
  • The signal is divided by 2 in frequency to extract an estimate of the carrier frequency and phase φ.
  • This operation results in a phase ambiguity of 180° in the carrier phase.
◊ For QPSK, there are phase ambiguities of ±90° and 180° in the phase estimate.
◊ Consequently, we do not have an absolute estimate of the carrier phase for demodulation.
Differentially Encoded PSK Signaling

◊ The phase ambiguity can be overcome by encoding the information in phase differences between successive signals.
◊ In BPSK:
  ◊ Bit 1 is transmitted by shifting the phase of the carrier by 180°.
  ◊ Bit 0 is transmitted by a zero phase shift.
◊ In QPSK, the phase shifts are 0°, 90°, 180°, and −90°, corresponding to bits 00, 01, 11, and 10, respectively.
◊ The PSK signals resulting from the encoding process are differentially encoded.
◊ The detector is a simple phase comparator that compares the phase of the demodulated signal over two consecutive intervals to extract the information.
Differentially Encoded PSK Signaling

◊ Coherent demodulation of differentially encoded PSK results in a higher error probability than that derived for absolute phase encoding.
◊ With differentially encoded PSK, an error in the demodulated phase of the signal in any given interval usually results in decoding errors over two consecutive signaling intervals.
◊ The error probability of differentially encoded M-ary PSK is approximately twice the error probability of M-ary PSK with absolute phase encoding.
◊ This amounts to only a relatively small loss in SNR.
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ In detection of QAM signals, we need two filters matched to

      φ₁(t) = √(2/E_g) g(t) cos 2πf_c t
      φ₂(t) = −√(2/E_g) g(t) sin 2πf_c t

◊ Output of the matched filters: r = (r₁, r₂)
◊ Compute C(r, s_m) = 2r·s_m − E_m  (see 4.2-28)
◊ Select m̂ = argmax_{1≤m≤M} C(r, s_m)
◊ To determine P_e we must specify the signal constellation.
◊ For M = 4, figures (a) and (b) are possible constellations. Assume both have d_min = 2A:
  • (a): r = √2 A ⇒ E_avg = 2A²
  • (b): A₁ = A, A₂ = √3 A ⇒ E_avg = (1/4)[ 2(3A²) + 2A² ] = 2A²
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ When M = 8, there are four possible constellations, figures (a)–(d).
◊ Signal points (A_mc, A_ms), all with d_min = 2A.
◊ Average energy:

      E_avg = (1/M) ∑_{m=1}^{M} (A_mc² + A_ms²) = (A²/M) ∑_{m=1}^{M} (a_mc² + a_ms²)

◊ (a) and (c): E_avg = 6A²
◊ (b): E_avg = 6.83A²
◊ (d): E_avg = 4.73A²
◊ (d) requires the least energy.
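The average-energy formula above is a plain sum over integer coordinates. A minimal sketch; the 8-point rectangular arrangement used here is an assumption of mine (the slide's figures are not reproduced), chosen so that it reproduces the 6A² value quoted for constellations (a) and (c):

```python
def avg_energy(points, A=1.0):
    # Average symbol energy E_avg = (A^2/M) * sum(a_mc^2 + a_ms^2)
    # for a constellation given as integer coordinate pairs (a_mc, a_ms).
    return A**2 * sum(x * x + y * y for x, y in points) / len(points)

# Hypothetical rectangular 8-point constellation with d_min = 2A:
rect8 = [(x, y) for x in (-3, -1, 1, 3) for y in (-1, 1)]
```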
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Rectangular QAM:
  ◊ Generated by two PAM signals on in-phase and quadrature carriers.
  ◊ Easily demodulated.
  ◊ For M ≥ 16, it requires only slightly more energy than the best M-QAM constellation.
◊ When k is even, the constellation is square, with minimum distance

      d_min = √( 6 log₂M / (M − 1) · E_bavg )

◊ It can be considered as two √M-ary PAM constellations.
◊ An error occurs if either n₁ or n₂ is large enough to cause an error.
◊ Probability of correct decision is

      P_{c,M-QAM} = P²_{c,√M-PAM} = ( 1 − P_{e,√M-PAM} )²
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Probability of error of square M-QAM:

      P_{e,M-QAM} = 1 − (1 − P_{e,√M-PAM})² = 2P_{e,√M-PAM} ( 1 − (1/2) P_{e,√M-PAM} )

◊ The error probability of the √M-ary PAM is (from (4.3-4) & (4.3-5)):

      P_{e,√M-PAM} = 2(1 − 1/√M) Q( d_min / √(2N₀) ) = 2(1 − 1/√M) Q( √( 3 log₂M / (M − 1) · E_bavg/N₀ ) )

◊ Thus, the error probability of square M-QAM is

      P_{e,M-QAM} = 4(1 − 1/√M) Q( √( 3 log₂M / (M − 1) · E_bavg/N₀ ) )
                    × [ 1 − (1 − 1/√M) Q( √( 3 log₂M / (M − 1) · E_bavg/N₀ ) ) ]
                  ≤ 4Q( √( 3 log₂M / (M − 1) · E_bavg/N₀ ) )

  since each of the factors (1 − 1/√M) and the bracketed term is at most 1. The upper bound is quite tight for large M.
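Both the exact square M-QAM expression and its upper bound are one-liners; a minimal sketch (function names are mine):

```python
from math import erfc, log2, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def qam_symbol_error(M, ebn0):
    # Exact symbol error probability of square M-QAM (M = 4, 16, 64, ...),
    # built from two independent sqrt(M)-ary PAM decisions.
    p_pam = 2 * (1 - 1 / sqrt(M)) * Q(sqrt(3 * log2(M) / (M - 1) * ebn0))
    return 1 - (1 - p_pam)**2

def qam_symbol_error_bound(M, ebn0):
    # Upper bound 4*Q(...), quite tight for large M
    return 4 * Q(sqrt(3 * log2(M) / (M - 1) * ebn0))
```

For M = 4 the exact formula collapses to the QPSK result.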
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ The penalty of increasing the transmission rate is 3 dB/bit for QAM.
◊ The penalty of increasing the transmission rate is 6 dB/bit for PAM and PSK.
◊ QAM is more power-efficient than PAM and PSK.
◊ The advantage of PSK is its constant-envelope property.
◊ More comparisons are given in the text (page 200).
Chapter 4.3-4 Demodulation and Detection

◊ ASK, PSK and QAM have one- or two-dimensional constellations.
◊ Basis functions of PSK and QAM:

      φ₁(t) = √(2/E_g) g(t) cos 2πf_c t
      φ₂(t) = −√(2/E_g) g(t) sin 2πf_c t

◊ Basis function of PAM:

      φ₁(t) = √(2/E_g) g(t) cos 2πf_c t

◊ r(t) and the basis functions are bandpass ⇒ high sampling rate required.
◊ To relieve the requirement on sampling rate:
  ⇒ First, demodulate the signal to obtain its lowpass equivalent.
  ⇒ Then, perform signal detection.
Chapter 4.3-4 Demodulation and Detection

◊ From Chap. 2.1 (2.1-21 and 2.1-24):

      E_x = E_xl / 2,    ⟨x(t), y(t)⟩ = Re{ ⟨x_l(t), y_l(t)⟩ } / 2

◊ The optimal (MAP) detection rule becomes

      m̂ = argmax_{1≤m≤M} ( r·s_m + (N₀/2) ln P_m − (1/2) E_m )
        = argmax_{1≤m≤M} ( Re[r_l·s_ml] + N₀ ln P_m − E_ml/2 )
        = argmax_{1≤m≤M} ( Re[ ∫_{−∞}^{∞} r_l(t) s*_ml(t) dt ] + N₀ ln P_m − (1/2) ∫_{−∞}^{∞} |s_ml(t)|² dt )

◊ The ML decision rule is

      m̂ = argmax_{1≤m≤M} ( Re[ ∫_{−∞}^{∞} r_l(t) s*_ml(t) dt ] − (1/2) ∫_{−∞}^{∞} |s_ml(t)|² dt )
Chapter 4.3
4.3--4 Demodulation and Detection

Complex
p matched filter.

Detailed structure of a
complex matched filter
in terms of its in-phase
and quadrature
components..

◊ Throughout this discussion we have assumed that the receiver


has complete knowledge of the carrier frequency and phase.
85
Chapter 4.4: Optimal Detection and Error Probability for Power-Limited Signaling

Wireless Information Transmission System Lab.
Institute of Communications Engineering
National Sun Yat-sen University
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ In an equal-energy orthogonal signaling scheme, N = M and

      s₁ = (√E, 0, …, 0)
      s₂ = (0, √E, …, 0)
      ⋮
      s_M = (0, 0, …, √E)

◊ For equiprobable, equal-energy orthogonal signals, the optimum detector selects the largest cross-correlation between r and s_m:

      m̂ = argmax_{1≤m≤M} r·s_m

◊ Since the constellation is symmetric and the distance between any two signal points is √(2E), the error probability is independent of the transmitted signal.
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Suppose that s₁ is transmitted; the received vector is

      r = (√E + n₁, n₂, …, n_M)

◊ E is the symbol energy; (n₁, n₂, …, n_M) are i.i.d. zero-mean Gaussian r.v.s with σ_n² = N₀/2.
◊ Define random variables R_m = r·s_m, 1 ≤ m ≤ M:

      R₁ = r·s₁ = (√E + n₁, n₂, …, n_M)·(√E, 0, …, 0) = E + √E n₁
      R_m = √E n_m,   2 ≤ m ≤ M

◊ A correct decision is made if R₁ > R_m for m = 2, 3, …, M:

      P_c = P[R₁ > R₂, R₁ > R₃, …, R₁ > R_M | s₁ sent]
          = P[√E + n₁ > n₂, √E + n₁ > n₃, …, √E + n₁ > n_M | s₁ sent]
          = ∫_{−∞}^{∞} P[n₂ < n + √E, …, n_M < n + √E | s₁ sent, n₁ = n] p_{n₁}(n) dn     (total probability, conditioning on n₁ = n)
          = ∫_{−∞}^{∞} ( P[n₂ < n + √E | s₁ sent, n₁ = n] )^{M−1} p_{n₁}(n) dn     (n₂, n₃, …, n_M are i.i.d.)
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Since n₂ ~ N(0, N₀/2):

      P[n₂ < n + √E | s₁ sent, n₁ = n] = 1 − Q( (n + √E) / √(N₀/2) )

◊ Substituting x = (n + √E)/√(N₀/2):

      P_c = ∫_{−∞}^{∞} [ 1 − Q( (n + √E)/√(N₀/2) ) ]^{M−1} (1/√(πN₀)) e^{−n²/N₀} dn
          = (1/√(2π)) ∫_{−∞}^{∞} (1 − Q(x))^{M−1} e^{−(x − √(2E/N₀))²/2} dx

      P_e = 1 − P_c = (1/√(2π)) ∫_{−∞}^{∞} [ 1 − (1 − Q(x))^{M−1} ] e^{−(x − √(2E/N₀))²/2} dx

      (using (1/√(2π)) ∫_{−∞}^{∞} e^{−(x − √(2E/N₀))²/2} dx = 1)

◊ By symmetry,

      P[s_m received | s₁ sent] = P_e/(M−1) = P_e/(2^k − 1),   2 ≤ m ≤ M
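The P_e integral above has no closed form for general M, but it is easy to evaluate numerically. A minimal sketch (a midpoint-rule quadrature of my own choosing, centred on the Gaussian mean):

```python
from math import erfc, exp, pi, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def orthogonal_symbol_error(M, es_n0, steps=4000, span=10.0):
    # Numerically evaluates
    # P_e = 1 - (1/sqrt(2*pi)) * Int (1-Q(x))^(M-1) exp(-(x-mu)^2/2) dx,
    # with mu = sqrt(2*E/N0); es_n0 = E/N0 (linear).
    mu = sqrt(2 * es_n0)
    dx = 2 * span / steps
    pc = 0.0
    for i in range(steps):
        x = mu - span + (i + 0.5) * dx
        pc += (1 - Q(x))**(M - 1) * exp(-(x - mu)**2 / 2) * dx
    return 1 - pc / sqrt(2 * pi)
```

For M = 2 (binary orthogonal signaling) the integral reduces to the known closed form Q(√(E/N₀)), which gives a sanity check.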
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Assume that s₁ corresponds to a data sequence of length k whose first bit is 0.
◊ The probability that the first bit is detected as 1 = the probability of deciding in favor of one of the 2^{k−1} signals {s_m : first bit = 1}:

      P_b = 2^{k−1} · P_e/(2^k − 1) = ( 2^{k−1}/(2^k − 1) ) P_e ≈ (1/2) P_e

◊ The last approximation holds for k >> 1.
◊ [Figure: probability of bit error vs. SNR per bit]
◊ As M increases, the required SNR is reduced ⇒ in contrast with ASK, PSK and QAM.
Error Probability in FSK Signaling

◊ FSK signaling is a special case of orthogonal signaling when

      Δf = l/(2T),   for any positive integer l

◊ In binary FSK, a frequency separation that guarantees orthogonality does not minimize the error probability.
◊ For binary FSK, the error probability is minimized when (see Problem 4.18)

      Δf = 0.715/T
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ From Sec. 4.2-3, the union bound in the AWGN channel is

      P_e ≤ ((M−1)/2) e^{−d²_min/(4N₀)}

◊ In orthogonal signaling, d²_min = 2E:

      P_e ≤ ((M−1)/2) e^{−E/(2N₀)} < M e^{−E/(2N₀)}

◊ Using M = 2^k and E_b = E/k:

      P_e < 2^k e^{−kE_b/(2N₀)} = e^{−(k/2)(E_b/N₀ − 2 ln 2)}

◊ If E_b/N₀ > 2 ln 2 = 1.39 (1.42 dB) ⇒ P_e → 0 as k → ∞
  ⇒ If SNR per bit > 1.42 dB, reliable communication is possible (sufficient, but not necessary).
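The threshold behavior of the bound above is visible numerically: above 2 ln 2 the exponent is negative and the bound shrinks with k, below it the bound blows up and says nothing. A minimal sketch (function name is mine):

```python
from math import exp, log

def orthogonal_union_bound(k, ebn0):
    # Union bound P_e < exp(-(k/2)*(Eb/N0 - 2*ln 2)) for M = 2^k
    # orthogonal signals; vanishes as k grows iff Eb/N0 > 2*ln 2.
    return exp(-(k / 2) * (ebn0 - 2 * log(2)))
```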
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ A necessary and sufficient condition for reliable communication is

      E_b/N₀ > ln 2 = 0.693 (−1.6 dB)

◊ The −1.6 dB bound is obtained from a tighter bound on the error probability:

      P_e ≤ { e^{−(k/2)(E_b/N₀ − 2 ln 2)},          E_b/N₀ ≥ 4 ln 2
            { 2 e^{−k( √(E_b/N₀) − √(ln 2) )²},    ln 2 ≤ E_b/N₀ ≤ 4 ln 2

◊ The minimum value of SNR per bit needed, i.e., −1.6 dB, is the Shannon limit.
Chapter 4.4-2 Optimal Detection & Error Prob. for Biorthogonal Signaling

◊ A set of M = 2^k biorthogonal signals is obtained from N = M/2 orthogonal signals by including the negatives of these signals.
◊ Requires only M/2 cross-correlators or matched filters.
◊ Vector representation of biorthogonal signals:

      s₁ = −s_{N+1} = (√E, 0, …, 0)
      s₂ = −s_{N+2} = (0, √E, …, 0)
      ⋮
      s_N = −s_{2N} = (0, 0, …, √E)

◊ Assume that s₁ is transmitted; the received signal vector is

      r = (√E + n₁, n₂, …, n_N)

◊ {n_m} are zero-mean, i.i.d. Gaussian r.v.s with σ_n² = N₀/2.
Chapter 4.4-2 Optimal Detection & Error Prob. for Biorthogonal Signaling

◊ Since all signals are equiprobable and have equal energy, the optimum detector decides in favor of the m with the largest magnitude of

      C(r, s_m) = r·s_m,   1 ≤ m ≤ M/2

  The sign of r·s_m then decides whether s_m(t) or −s_m(t) was transmitted.
◊ P[correct decision] = P[ r₁ = √E + n₁ > 0,  r₁ > |r_m| = |n_m|,  m = 2, 3, …, M/2 ]
◊ But, for r₁ > 0,

      P[ |n_m| < r₁ | r₁ > 0 ] = ∫_{−r₁}^{r₁} (1/√(πN₀)) e^{−x²/N₀} dx = (1/√(2π)) ∫_{−r₁√(2/N₀)}^{r₁√(2/N₀)} e^{−x²/2} dx

◊ Probability of correct decision (r₁ ~ N(√E, N₀/2); substitute v = (r₁ − √E)√(2/N₀)):

      P_c = ∫₀^∞ ( (1/√(2π)) ∫_{−r₁√(2/N₀)}^{r₁√(2/N₀)} e^{−x²/2} dx )^{(M/2)−1} p(r₁) dr₁
          = (1/√(2π)) ∫_{−√(2E/N₀)}^{∞} ( (1/√(2π)) ∫_{−(v+√(2E/N₀))}^{v+√(2E/N₀)} e^{−x²/2} dx )^{(M/2)−1} e^{−v²/2} dv
Chapter 4.4-2 Optimal Detection & Error Prob. for Biorthogonal Signaling

◊ Symbol error probability: P_e = 1 − P_c
◊ [Figure: P_e vs. E_b/N₀, with E = kE_b; the Shannon limit is indicated]
Chapter 4.4-3 Optimal Detection & Error Prob. for Simplex Signaling

◊ Simplex signals are obtained by shifting a set of orthogonal signals by the average of these orthogonal signals.
  ⇒ The geometry of simplex signals is exactly the same as that of the original orthogonal signals.
◊ The error probability has the same form as for the original orthogonal signals.
◊ Since simplex signals have lower energy, the energy in the expression of the error probability must be scaled, i.e.,

      P_e = 1 − P_c = (1/√(2π)) ∫_{−∞}^{∞} [ 1 − (1 − Q(x))^{M−1} ] e^{−( x − √( (M/(M−1)) · 2E/N₀ ) )²/2} dx

◊ A relative gain of 10 log₁₀( M/(M−1) ) dB over orthogonal signaling.
◊ M = 2 ⇒ 3 dB gain; M = 10 ⇒ 0.46 dB gain.
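The quoted gains follow directly from the 10 log₁₀(M/(M−1)) expression; a minimal sketch (function name is mine):

```python
from math import log10

def simplex_gain_db(M):
    # Energy advantage of simplex over orthogonal signaling:
    # 10*log10(M/(M-1)) dB, from the M/(M-1) scaling of E in P_e.
    return 10 * log10(M / (M - 1))
```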
Chapter 4.5: Optimal Detection in Presence of Uncertainty: Non-coherent Detection

Wireless Information Transmission System Lab.
Institute of Communications Engineering
National Sun Yat-sen University
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

◊ Previous sections assume that the signals {s_m(t)}, or an orthonormal basis {φ_j(t)}, are available at the receiver.
◊ In many cases this assumption is not valid:
  ◊ Transmission over the channel introduces a random attenuation or a random phase shift to the signal.
  ◊ Imperfect knowledge of the signals at the receiver when the transmitter and receiver are not perfectly synchronized.
    ⇒ Although the transmitter uses {s_m(t)}, due to asynchronism the receiver effectively observes {s_m(t − t_d)}; t_d is the random time slip between the transmitter and receiver clocks.
◊ Consider transmission over the AWGN channel with a random parameter θ:

      r(t) = s_m(t; θ) + n(t)

◊ By the K-L expansion theorem (2.8-2), we can find an orthonormal basis such that

      r = s_{m,θ} + n
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

◊ The optimal (MAP) detection rule is (see 4.2-15)

      m̂ = argmax_{1≤m≤M} P_m p(r | m)
        = argmax_{1≤m≤M} P_m ∫ p(r | m, θ) p(θ) dθ
        = argmax_{1≤m≤M} P_m ∫ p_n(r − s_{m,θ}) p(θ) dθ

◊ The decision rule determines the decision regions.
◊ The minimum error probability is

      P_e = ∑_{m=1}^{M} P_m ∫_{D_m^c} ( ∫ p(r | m, θ) p(θ) dθ ) dr
          = ∑_{m=1}^{M} P_m ∑_{m'=1, m'≠m}^{M} ∫_{D_{m'}} ( ∫ p_n(r − s_{m,θ}) p(θ) dθ ) dr     (4.5-3)
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

◊ (Example) Consider a binary antipodal signaling system with equiprobable signals s₁(t) = s(t) and s₂(t) = −s(t) in an AWGN channel with noise PSD N₀/2.
◊ The channel is modeled as

      r(t) = A s_m(t) + n(t)

◊ A > 0: random gain with PDF p(A); p(A) = 0 for A < 0.
◊ p(r | m, A) = p_n(r − A s_m)
◊ Optimal decision region for s₁(t):

      D₁ = { r : ∫₀^∞ e^{−(r − A√E_b)²/N₀} p(A) dA > ∫₀^∞ e^{−(r + A√E_b)²/N₀} p(A) dA }
         = { r : ∫₀^∞ e^{−A²E_b/N₀} ( e^{2rA√E_b/N₀} − e^{−2rA√E_b/N₀} ) p(A) dA > 0 }
         = { r : r > 0 }

  since for A > 0:  e^{2rA√E_b/N₀} − e^{−2rA√E_b/N₀} > 0  ⇔  e^{4rA√E_b/N₀} > 1  ⇔  4rA√E_b/N₀ > 0  ⇔  r > 0.
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-coherent Detection

◊ The error probability is

      P_b = ∫₀^∞ ( ∫₀^∞ (1/√(πN₀)) e^{−(r + A√E_b)²/N₀} dr ) p(A) dA
          = ∫₀^∞ P[ N(−A√E_b, N₀/2) > 0 ] p(A) dA
          = ∫₀^∞ P[ N(0,1) > A√E_b / √(N₀/2) ] p(A) dA
          = ∫₀^∞ Q( A√(2E_b/N₀) ) p(A) dA
          = E[ Q( A√(2E_b/N₀) ) ]
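The expectation E[Q(A√(2E_b/N₀))] can be estimated by Monte Carlo once a gain distribution is chosen. This is an illustrative sketch; the function name and the caller-supplied gain distribution are my own assumptions, not anything fixed by the slides:

```python
import random
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def avg_bit_error(ebn0, draw_gain, trials=50000, seed=1):
    # Monte Carlo estimate of P_b = E[ Q(A * sqrt(2*Eb/N0)) ], where the
    # random gain A is produced by draw_gain(rng) (assumed distribution).
    rng = random.Random(seed)
    acc = sum(Q(draw_gain(rng) * sqrt(2 * ebn0)) for _ in range(trials))
    return acc / trials
```

With a deterministic gain A = 1 this collapses to the fixed-channel result Q(√(2E_b/N₀)); with a spread-out gain the average error is larger, since Q is convex on the positive axis.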
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ For carrier modulated signals, {s_m(t)} are bandpass with lowpass equivalents s_ml(t):

      s_m(t) = Re[ s_ml(t) e^{j2πf_c t} ]

◊ In the AWGN channel,

      r(t) = s_m(t − t_d) + n(t)

◊ t_d: random time asynchronism between transmitter and receiver.

      r(t) = Re[ s_ml(t − t_d) e^{j2πf_c(t − t_d)} ] + n(t)
           = Re[ s_ml(t − t_d) e^{−j2πf_c t_d} e^{j2πf_c t} ] + n(t)

◊ The lowpass equivalent of s_m(t − t_d) is s_ml(t − t_d) e^{−j2πf_c t_d}.
◊ In practice, t_d << T_s ⇒ s_ml(t − t_d) ≈ s_ml(t).
◊ The random phase shift φ = 2πf_c t_d can still be large since f_c is large ⇒ noncoherent detection.
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ In the noncoherent case,

      Re[ r_l(t) e^{j2πf_c t} ] = Re[ ( e^{jφ} s_ml(t) + n_l(t) ) e^{j2πf_c t} ]

◊ The baseband channel model:

      r_l(t) = e^{jφ} s_ml(t) + n_l(t)

◊ Vector equivalent form:

      r_l = e^{jφ} s_ml + n_l

◊ The optimum noncoherent detection rule is (from (4.5-3), with n_l ~ N(0_{N×1}, 2N₀ I_N), see (2.9-13)):

      m̂ = argmax_{1≤m≤M} (P_m/2π) ∫₀^{2π} p_{n_l}( r_l − e^{jφ} s_ml ) dφ
        = argmax_{1≤m≤M} (P_m/2π) (1/(4πN₀)^N) ∫₀^{2π} e^{−||r_l − e^{jφ} s_ml||²/(4N₀)} dφ
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ Since s_m(t) = s_ml(t) cos 2πf_c t gives E_m = ∫ s²_ml(t) cos² 2πf_c t dt = ||s_ml||²/2, we have ||s_ml||² = 2E_m, and

      ||r_l − e^{jφ} s_ml||² = ||r_l||² − 2 Re{ r_l · e^{jφ} s_ml } + ||e^{jφ} s_ml||²
                             = ||r_l||² − 2 Re[ (r_l · s_ml) e^{−jφ} ] + 2E_m

◊ The common factor e^{−||r_l||²/(4N₀)} does not affect the maximization, so

      m̂ = argmax_{1≤m≤M} (P_m/2π) e^{−E_m/(2N₀)} ∫₀^{2π} e^{Re[ (r_l·s_ml) e^{−jφ} ]/(2N₀)} dφ

◊ Writing r_l·s_ml = |r_l·s_ml| e^{jθ} (θ: phase of r_l·s_ml):

      m̂ = argmax_{1≤m≤M} (P_m/2π) e^{−E_m/(2N₀)} ∫₀^{2π} e^{|r_l·s_ml| cos(φ−θ)/(2N₀)} dφ
        = argmax_{1≤m≤M} P_m e^{−E_m/(2N₀)} I₀( |r_l·s_ml|/(2N₀) )

◊ I₀(x) = (1/2π) ∫₀^{2π} e^{x cosφ} dφ is the modified Bessel function of the first kind and order zero.
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ If the signals are equiprobable and have equal energy, then since I₀(x) is monotonically increasing,

      m̂ = argmax_{1≤m≤M} I₀( |r_l·s_ml|/(2N₀) )
        = argmax_{1≤m≤M} |r_l·s_ml|
        = argmax_{1≤m≤M} | ∫_{−∞}^{∞} r_l(t) s*_ml(t) dt |     (envelope detector)
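On sampled baseband data the envelope-detector rule above is just "largest correlation magnitude"; a minimal discrete-time sketch (function name and sample-list interface are my own assumptions):

```python
import cmath

def envelope_detect(r_l, signals_l):
    # Noncoherent detection for equiprobable, equal-energy signals:
    # choose the candidate lowpass signal with the largest |correlation|
    # with the received samples, ignoring the unknown carrier phase.
    def corr_mag(s_ml):
        return abs(sum(r * s.conjugate() for r, s in zip(r_l, s_ml)))
    return max(range(len(signals_l)), key=lambda m: corr_mag(signals_l[m]))
```

A quick check with two orthogonal complex tones shows the decision is unaffected by an arbitrary phase offset on the received signal.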
4.6 Comparison of Digital Modulation Methods

◊ One can compare the digital modulation methods on the basis of the SNR required to achieve a specified probability of error.
◊ However, such a comparison would not be very meaningful unless it were made on the basis of some constraint, such as a fixed data rate of transmission or a fixed bandwidth.
◊ For multiphase signals, the channel bandwidth required is simply the bandwidth of the equivalent lowpass signal pulse g(t) with duration T and bandwidth W, which is approximately equal to the reciprocal of T.
◊ Since T = k/R = (log₂M)/R, it follows that

      W = R / log₂M
4.6 Comparison of Digital Modulation Methods

◊ As M is increased, the channel bandwidth required, when the bit rate R is fixed, decreases. The bandwidth efficiency is measured by the bit-rate-to-bandwidth ratio:

      R/W = log₂M

◊ The bandwidth-efficient method for transmitting PAM is single-sideband. The channel bandwidth required to transmit the signal is approximately equal to 1/2T, and

      R/W = 2 log₂M

  This is a factor of 2 better than PSK.
◊ For QAM, we have two orthogonal carriers, with each carrier carrying a PAM signal.
4.6 Comparison of Digital Modulation Methods

◊ Thus, we double the rate relative to PAM. However, the QAM signal must be transmitted via double-sideband. Consequently, QAM and PAM have the same bandwidth efficiency when the bandwidth is referenced to the bandpass signal.
◊ As for orthogonal signals, if the M = 2^k orthogonal signals are constructed by means of orthogonal carriers with minimum frequency separation of 1/2T, the bandwidth required for transmission of k = log₂M information bits is

      W = M/(2T) = M/(2(k/R)) = ( M/(2 log₂M) ) R

  In this case, the bandwidth increases as M increases.
◊ In the case of biorthogonal signals, the required bandwidth is one-half of that for orthogonal signals.
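The R/W expressions above can be collected into one small helper; a sketch whose scheme names and dispatch-by-string interface are my own convention, not from the text:

```python
from math import log2

def bandwidth_efficiency(scheme, M):
    # R/W in bits/s per Hz for the signaling families compared above.
    k = log2(M)
    if scheme == "psk_qam":       # double-sideband PSK / QAM: R/W = log2 M
        return k
    if scheme == "pam_ssb":       # single-sideband PAM: R/W = 2 log2 M
        return 2 * k
    if scheme == "orthogonal":    # W = M*R/(2 log2 M) => R/W = 2 log2(M)/M
        return 2 * k / M
    if scheme == "biorthogonal":  # half the orthogonal bandwidth
        return 4 * k / M
    raise ValueError(scheme)
```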
4.6 Comparison of Digital Modulation Methods

◊ A compact and meaningful comparison of modulation methods is one based on the normalized data rate R/W (bits per second per hertz of bandwidth) versus the SNR per bit (ε_b/N₀) required to achieve a given error probability.
◊ In the case of PAM, QAM, and PSK, increasing M results in a higher bit-to-bandwidth ratio R/W.
4.6 Comparison of Digital Modulation Methods

◊ However, the cost of achieving the higher data rate is an increase in the SNR per bit.
◊ Consequently, these modulation methods are appropriate for communication channels that are bandwidth-limited, where we desire R/W > 1 and where there is sufficiently high SNR to support increases in M.
◊ Telephone channels and digital microwave radio channels are examples of such band-limited channels.
◊ In contrast, M-ary orthogonal signals yield R/W ≤ 1. As M increases, R/W decreases due to an increase in the required channel bandwidth.
◊ The SNR per bit required to achieve a given error probability decreases as M increases.
4.6 Comparison of Digital Modulation Methods

◊ Consequently, M-ary orthogonal signals are appropriate for power-limited channels that have sufficiently large bandwidth to accommodate a large number of signals.
◊ As M → ∞, the error probability can be made as small as desired, provided that SNR per bit > 0.693 (−1.6 dB). This is the minimum SNR per bit required to achieve reliable transmission in the limit as the channel bandwidth W → ∞ and the corresponding R/W → 0.
◊ The figure above also shows the normalized capacity of the band-limited AWGN channel, which is due to Shannon (1948).
◊ The ratio C/W, where C (= R) is the capacity in bits/s, represents the highest achievable bit-rate-to-bandwidth ratio on this channel.
◊ Hence, it serves as the upper bound on the bandwidth efficiency of any type of modulation.