
Communications II (ECE 461)

Dr.-Ing. Tuan Do-Hong
Department of Telecommunications Engineering
Faculty of Electrical and Electronics Engineering
HoChiMinh City University of Technology
E-mail: [email protected]

Dept. of Telecomm. Eng. Comm II 2014
Faculty of EEE 1 DHT, HCMUT
References

[1] J. G. Proakis, Digital Communications, 5th Edition, McGraw-Hill, 2008.
[2] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 1991 (3rd Ed.), 2001 (4th Ed.).

Outline

Chapter 1: Motivation, Overview, Probability & Stochastic Processes Review ([1]: Chapters 1-2)
Chapter 2: Communication Signals and Systems ([1]: Chapter 2)
Chapter 3: Optimum Receivers for AWGN Channels ([1]: Chapter 4)
Chapter 4: Synchronization ([1]: Chapter 5)
Chapter 5: Signaling on ISI Channels and Equalization ([1]: Chapter 9)
Chapter 6: Multiple-Antenna Systems ([1]: Chapter 15)

Goal of the Course
 A review of deterministic and random signal analysis, including bandpass and lowpass signal representations, probabilities of random variables, limit theorems for sums of random variables, and random processes.
 Optimum receivers for additive white Gaussian noise (AWGN) channels and their error-rate performance. Also included are link budget analyses for wireline and radio communication systems.
 Carrier phase estimation and time synchronization methods based on the maximum-likelihood criterion.
 Digital communication through band-limited channels, including the optimum receiver for channels with intersymbol interference and AWGN, and suboptimum equalization methods, namely linear equalization and decision-feedback equalization.
 A treatment of multiple-antenna systems, called multiple-input, multiple-output (MIMO) systems, which are designed to yield spatial signal diversity and spatial multiplexing.
Grading

 30% for midterm examination
 15% for in-class quizzes
 15% for homework/assignments
 40% for final examination

Chapter 1:
Motivation, Overview, Probability &
Stochastic Processes Review

1. Digital Communication Systems (1)
 Typical digital communication system:

1. Digital Communication Systems (2)
 Measure of performance: Average symbol error probability (or the average bit error probability in the case of binary symbols) at the output of the demodulator. This is the probability that the symbol (or bit) estimate given by the demodulator does not correspond to the transmitted symbol (or bit).
 Our primary design objective is to minimize the average symbol
error probability of a digital communication system.

1. Communication Channels (1)
 The communication channel provides a connection through which the
information-bearing signal propagates.
It is perhaps the most important component of a communication system. The design of all other components in a digital communication system depends heavily on the characteristics of the communication channel.
There are many different types of physical communication channels,
such as:
 wireline channels
 wireless channels
 fiber optic channels
 underwater acoustic channels
 storage channels

1. Communication Channels (2)
 Different kinds of channels can have very different characteristics. In
order to design an “efficient” digital communication system over a
specific communication channel, we need to study the characteristics
of the channel extensively and carefully.
 Unfortunately, this is impractical for our general treatment of digital communication theory. Instead, we adopt a model-based approach here, i.e., we construct a generic mathematical channel model to represent a "typical" communication channel.
For this purpose, our channel model describes the physical communication channel as well as the properties of the RF equipment, such as antennas and amplifiers, necessary to access the channel.

1. Communication Channels (3)
 The major characteristic of a communication channel we are interested in is how the channel distorts the information-bearing signal. We start by listing some common channel defects:
 thermal noise in the electronic devices
 signal attenuation
 amplitude and phase distortion
 multipath distortion
 finite-bandwidth (lowpass filter) distortion
 impulsive noise
 Based on knowledge of these channel defects, we construct the generic channel model. Let s(t) denote the transmitted signal at the output of the modulator; then the following linear filter model (see Figure below) sufficiently approximates the behavior of many typical communication channels:

1. Communication Channels (4)

r(t) = ∫_{−∞}^{+∞} c(τ; t) s(t − τ) dτ + n(t)    (1.1)

where r(t) represents the received signal at the input of the demodulator, n(t) is a random process which models the thermal and impulsive noise, and c(τ; t) is a linear time-varying filter which models the other channel distortions listed above. We note that the above linear (time-varying) channel model in equation (1.1) is very general, and we work with simplifications of this model in many cases.
1. Communication Channels (5)
 Among the various common simplifications of the general model, the additive white Gaussian noise (AWGN) model is perhaps the most studied and important. In the AWGN model, c(τ; t) = α δ(τ) and equation (1.1) reduces to

r(t) = α s(t) + n(t)

where n(t) is a zero-mean wide-sense stationary Gaussian random process with autocorrelation function Rn(τ) = (N0/2) δ(τ). The factor N0/2 is called the two-sided noise spectral density of the noise n(t), and α is the attenuation factor.

1. Gaussian Random Variables (1)
 Definition: A Gaussian random variable is any continuous random variable with a probability density function of the form

f_X(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²))    (1.2)

where µ is a constant, and σ > 0.


 Properties:
Let X be a Gaussian random variable with the density function shown
in (1.2).
 The mean and the variance of X are µ and σ2, respectively. Hence,
a Gaussian random variable is completely specified by its mean
and variance.

1. Gaussian Random Variables (2)
 A zero-mean unit-variance Gaussian random variable has the density function

φ(x) = (1/√(2π)) exp(−x²/2)

The probability distribution function of a zero-mean unit-variance Gaussian random variable is given by

Φ(x) = ∫_{−∞}^{x} (1/√(2π)) exp(−t²/2) dt

Very often, it is convenient to use the Q-function defined by

Q(x) = 1 − Φ(x) = ∫_{x}^{+∞} (1/√(2π)) exp(−t²/2) dt

1. Gaussian Random Variables (3)
 The Gaussian distribution is widely tabulated and is also available in most mathematical software. For example, in Matlab we can use the error function (erf) and the complementary error function (erfc) to find values of Φ(·) and Q(·). The relationships are given by

Φ(x) = (1/2)[1 + erf(x/√2)],    Q(x) = (1/2) erfc(x/√2)
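As a quick numerical sketch of these standard identities (written in Python with the standard library's erf/erfc, though the course uses the equivalent Matlab functions):

```python
import math

def Phi(x):
    """Standard normal CDF via erf: Phi(x) = 0.5*(1 + erf(x/sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Q(x):
    """Gaussian tail probability via erfc: Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Spot checks against well-known values:
print(Phi(1.0))   # ≈ 0.8413
print(Q(0.0))     # = 0.5
```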

1. Gaussian Random Variables (4A)

Note: the tabulated values below are the areas under the standard normal density from 0 to x, i.e. Φ(x) − 1/2 (e.g., the entry 0.34134 at x = 1.00), not the Matlab erf(x).

x      Φ(x)−½    x      Φ(x)−½    x      Φ(x)−½    x      Φ(x)−½
0.05   0.01994   0.80   0.28814   1.55   0.43943   2.30   0.48928
0.10   0.03983   0.85   0.30234   1.60   0.44520   2.35   0.49061
0.15   0.05962   0.90   0.31594   1.65   0.45053   2.40   0.49180
0.20   0.07926   0.95   0.32894   1.70   0.45543   2.45   0.49286
0.25   0.09871   1.00   0.34134   1.75   0.45994   2.50   0.49379
0.30   0.11791   1.05   0.35314   1.80   0.46407   2.55   0.49461
0.35   0.13683   1.10   0.36433   1.85   0.46784   2.60   0.49534
0.40   0.15542   1.15   0.37493   1.90   0.47128   2.65   0.49597
0.45   0.17364   1.20   0.38493   1.95   0.47441   2.70   0.49653
0.50   0.19146   1.25   0.39435   2.00   0.47726   2.75   0.49702
0.55   0.20884   1.30   0.40320   2.05   0.47982   2.80   0.49744
0.60   0.22575   1.35   0.41149   2.10   0.48214   2.85   0.49781
0.65   0.24215   1.40   0.41924   2.15   0.48422   2.90   0.49813
0.70   0.25804   1.45   0.42647   2.20   0.48610   2.95   0.49841
0.75   0.27337   1.50   0.43319   2.25   0.48778   3.00   0.49865

1. Gaussian Random Variables (4B)
 Jointly Gaussian Random Variables
 Two random variables X and Y are said to be jointly Gaussian if their joint density satisfies the equation

f_XY(x, y) = 1/(2π σ_X σ_Y √(1 − ρ²)) · exp{ −1/(2(1 − ρ²)) [ (x − m_X)²/σ_X² − 2ρ(x − m_X)(y − m_Y)/(σ_X σ_Y) + (y − m_Y)²/σ_Y² ] }

 Note that the following properties hold:
• X is Gaussian with mean m_X and variance σ_X²
• Y is Gaussian with mean m_Y and variance σ_Y²
• The conditional densities f_X|Y(x|y) and f_Y|X(y|x) are also Gaussian
• ρ is the correlation coefficient between X and Y. If ρ = 0, then X and Y are independent.
• Z = aX + bY is also Gaussian (what are the mean and variance of Z?)
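A minimal Monte Carlo sketch (illustrative, with assumed parameter values) of the answer to that question: Z = aX + bY has mean a·m_X + b·m_Y and variance a²σ_X² + b²σ_Y² + 2abρσ_Xσ_Y.

```python
import numpy as np

rng = np.random.default_rng(0)
mX, mY, sX, sY, rho = 1.0, -2.0, 2.0, 3.0, 0.5   # assumed example values
a, b = 0.7, -1.3

# Draw correlated jointly Gaussian pairs (X, Y).
cov = [[sX**2, rho*sX*sY], [rho*sX*sY, sY**2]]
X, Y = rng.multivariate_normal([mX, mY], cov, size=200_000).T
Z = a*X + b*Y

mZ_theory = a*mX + b*mY
vZ_theory = a**2*sX**2 + b**2*sY**2 + 2*a*b*rho*sX*sY
print(Z.mean(), mZ_theory)   # sample mean ≈ theoretical mean
print(Z.var(), vZ_theory)    # sample variance ≈ theoretical variance
```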

Quiz 1 & 2

1. Gaussian Random Variables (5)
 Central limit theorem:
Given n independent random variables Xi, we form their sum
X = X1 + … + Xn.
X is a random variable with mean η = η1 + … + ηn and variance σ² = σ1² + … + σn².
The central limit theorem states that, under certain general conditions, the distribution of X approaches a Gaussian distribution with mean η and variance σ² as n increases.
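An illustrative numeric sketch of the theorem, using the uniform-[0, 2] example from the following slides (each Xi has mean 1 and variance 1/3, so the sum of n terms has mean n and variance n/3):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 5000
# Each row is one realization of X = X1 + ... + Xn with Xi ~ U[0, 2]:
X = rng.uniform(0.0, 2.0, size=(trials, n)).sum(axis=1)

eta = n * 1.0        # eta = eta1 + ... + etan, each Xi has mean 1
var = n / 3.0        # sigma^2 = sum of the variances, each 1/3
print(X.mean(), X.var())   # ≈ 1000 and ≈ 333.3
# For a Gaussian, about 68.3% of samples fall within one sigma of the mean:
frac = np.mean(np.abs(X - eta) < np.sqrt(var))
print(frac)
```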

1. Gaussian Random Variables (6)

Uniform distribution [0, 2]

1. Gaussian Random Variables (7)

Central limit theorem with sum of 2 identically distributed r.vs (uniform dis. [0,2])

1. Gaussian Random Variables (8)

Central limit theorem with sum of 1000 identically distributed r.vs (uniform dis. [0,2])

1. Review of Stochastic Processes: Introduction (1)
 Let ζ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ζ) is assigned. The collection of such waveforms forms a stochastic process. The set {ζk} and the time index t can be continuous or discrete (countably infinite or finite) as well. For fixed ζi ∈ S (the set of all experimental outcomes), X(t, ζi) is a specific time function. For fixed t, X1 = X(t1, ζi) is a random variable. The ensemble of all such realizations X(t, ζ) over time represents the stochastic process (or random process) X(t) (see the figure: realizations X(t, ζ1), X(t, ζ2), …, X(t, ζn) plotted against t, with sampling instants t1 and t2 marked).

For example,
X(t) = a cos(ω0 t + φ),
where φ is a uniformly distributed random variable in (0, 2π), represents a stochastic process.

1. Review of Stochastic Processes: Introduction (2)
If X(t) is a stochastic process, then for fixed t, X(t) represents a random variable. Its distribution function is given by
F_X(x, t) = P{X(t) ≤ x}.
Notice that F_X(x, t) depends on t, since for a different t we obtain a different random variable. Further,
f_X(x, t) ≜ dF_X(x, t)/dx
represents the first-order probability density function of the process X(t).

For t = t1 and t = t2, X(t) represents two different random variables X1 = X(t1) and X2 = X(t2), respectively. Their joint distribution is given by
F_X(x1, x2, t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}
and
f_X(x1, x2, t1, t2) ≜ ∂²F_X(x1, x2, t1, t2)/(∂x1 ∂x2)
represents the second-order density function of the process X(t).
1. Review of Stochastic Processes: Introduction (3)
Similarly, f_X(x1, x2, …, xn, t1, t2, …, tn) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires the knowledge of f_X(x1, x2, …, xn, t1, t2, …, tn) for all ti, i = 1, 2, …, n, and for all n (an almost impossible task in reality!).

 Mean of a stochastic process:
µ(t) ≜ E{X(t)} = ∫_{−∞}^{+∞} x f_X(x, t) dx
represents the mean value of the process X(t). In general, the mean of a process can depend on the time index t.

 Autocorrelation function of a process X(t) is defined as
R_XX(t1, t2) ≜ E{X(t1) X*(t2)} = ∫∫ x1 x2* f_X(x1, x2, t1, t2) dx1 dx2
and it represents the interrelationship between the random variables X1 = X(t1) and X2 = X(t2) generated from the process X(t).

1. Review of Stochastic Processes: Introduction (4)
Properties:

(i) R_XX(t1, t2) = R*_XX(t2, t1) = [E{X(t2) X*(t1)}]*

(ii) R_XX(t, t) = E{|X(t)|²} > 0 (average instantaneous power)

(iii) R_XX(t1, t2) represents a nonnegative-definite function, i.e., for any set of constants {a_i}, i = 1, …, n,
Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j* R_XX(t_i, t_j) ≥ 0.
This follows by noticing that E{|Y|²} ≥ 0 for Y = Σ_{i=1}^{n} a_i X(t_i).

The function
C_XX(t1, t2) = R_XX(t1, t2) − µ_X(t1) µ*_X(t2)
represents the autocovariance function of the process X(t).

1. Review of Stochastic Processes: Introduction (5)
Example:
X(t) = a cos(ω0 t + φ),   φ ~ U(0, 2π).
This gives
µ_X(t) = E{X(t)} = a E{cos(ω0 t + φ)}
       = a cos(ω0 t) E{cos φ} − a sin(ω0 t) E{sin φ} = 0,
since E{cos φ} = (1/2π) ∫_{0}^{2π} cos φ dφ = 0 = E{sin φ}.
Similarly,
R_XX(t1, t2) = a² E{cos(ω0 t1 + φ) cos(ω0 t2 + φ)}
             = (a²/2) E{cos ω0(t1 − t2) + cos(ω0(t1 + t2) + 2φ)}
             = (a²/2) cos ω0(t1 − t2).
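A Monte Carlo check (illustrative, with assumed numeric values for a, ω0, t1, t2) of this example: the mean is 0 and R_XX(t1, t2) = (a²/2) cos ω0(t1 − t2).

```python
import numpy as np

rng = np.random.default_rng(2)
a, w0 = 2.0, 3.0           # assumed amplitude and frequency
t1, t2 = 0.8, 0.3          # two sampling instants
phi = rng.uniform(0.0, 2*np.pi, size=400_000)   # phi ~ U(0, 2*pi)

X1 = a*np.cos(w0*t1 + phi)
X2 = a*np.cos(w0*t2 + phi)
R_theory = a**2/2 * np.cos(w0*(t1 - t2))

print(X1.mean())                    # ≈ 0
print((X1*X2).mean(), R_theory)     # both ≈ (a^2/2) cos(w0*(t1-t2))
```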

1. Review of Stochastic Processes: Stationary (1)
 Stationary processes exhibit statistical properties that are invariant to shift
in the time index. Thus, for example, second-order stationarity implies that
the statistical properties of the pairs {X(t1), X(t2)} and {X(t1+c), X(t2+c)} are
the same for any c. Similarly, first-order stationarity implies that the
statistical properties of X(ti) and X(ti+c) are the same for any c.

In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is nth-order Strict-Sense Stationary (S.S.S.) if
f_X(x1, …, xn, t1, …, tn) ≡ f_X(x1, …, xn, t1 + c, …, tn + c)    (*)
for any c, where the left side represents the joint density function of the random variables X1 = X(t1), X2 = X(t2), …, Xn = X(tn), and the right side corresponds to the joint density function of the random variables X′1 = X(t1 + c), X′2 = X(t2 + c), …, X′n = X(tn + c). A process X(t) is said to be strict-sense stationary if (*) is true for all ti, i = 1, 2, …, n; n = 1, 2, …, and any c.

1. Review of Stochastic Processes: Stationary (2)
For a first-order strict-sense stationary process, from (*) we have
f_X(x, t) ≡ f_X(x, t + c)
for any c. In particular, c = −t gives
f_X(x, t) = f_X(x),
i.e., the first-order density of X(t) is independent of t. In that case
E[X(t)] = ∫_{−∞}^{+∞} x f(x) dx = µ, a constant.

Similarly, for a second-order strict-sense stationary process we have from (*)
f_X(x1, x2, t1, t2) ≡ f_X(x1, x2, t1 + c, t2 + c)
for any c. For c = −t2 we get
f_X(x1, x2, t1, t2) ≡ f_X(x1, x2, t1 − t2),
i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices t1 − t2 = τ.

1. Review of Stochastic Processes: Stationary (3)
In that case the autocorrelation function is given by
R_XX(t1, t2) ≜ E{X(t1) X*(t2)}
            = ∫∫ x1 x2* f_X(x1, x2, τ = t1 − t2) dx1 dx2
            = R_XX(t1 − t2) ≜ R_XX(τ) = R*_XX(−τ),
i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices τ.

However, the basic conditions for first- and second-order stationarity are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S.). Thus, a process X(t) is said to be wide-sense stationary if
(i) E{X(t)} = µ
and
(ii) E{X(t1) X*(t2)} = R_XX(t1 − t2),

1. Review of Stochastic Processes: Stationary (4)
i.e., for wide-sense stationary processes, the mean is a constant and the
autocorrelation function depends only on the difference between the time
indices. Strict-sense stationarity always implies wide-sense stationarity.
However, the converse is not true in general, the only exception being the
Gaussian process. If X(t) is a Gaussian process, then
wide-sense stationarity (w.s.s) ⇒ strict-sense stationarity (s.s.s).

1. Review of Stochastic Processes: Systems (1)
 A deterministic system transforms each input waveform X(t, ζi) into an output waveform Y(t, ζi) = T[X(t, ζi)] by operating only on the time variable t. A stochastic system operates on both the variables t and ζ.

Thus, in a deterministic system, a set of realizations at the input corresponding to a process X(t) generates a new set of realizations Y(t, ζ) at the output associated with a new process Y(t):

X(t) → T[·] → Y(t)

Our goal is to study the output process statistics in terms of the input process statistics and the system function.

1. Review of Stochastic Processes: Systems (2)

Deterministic systems are classified as follows:

 Memoryless systems: Y(t) = g[X(t)]
 Systems with memory, which may be time-varying or time-invariant; linear systems satisfy Y(t) = L[X(t)]

A linear time-invariant (LTI) system with impulse response h(t) produces the output
Y(t) = ∫_{−∞}^{+∞} h(t − τ) X(τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ.
1. Review of Stochastic Processes: Systems (3)
 Memoryless Systems:
The output Y(t) in this case depends only on the present value of the input X(t), i.e., Y(t) = g{X(t)}.

 A strict-sense stationary input to a memoryless system produces a strict-sense stationary output.
 A wide-sense stationary input to a memoryless system produces an output that need not be stationary in any sense.
 A stationary Gaussian input X(t) with autocorrelation R_XX(τ) to a memoryless system produces an output Y(t) that is stationary but not Gaussian, with R_XY(τ) = η R_XX(τ).
1. Review of Stochastic Processes: Systems (4)
Theorem: If X(t) is a zero-mean stationary Gaussian process and Y(t) = g[X(t)], where g(·) represents a nonlinear memoryless device, then
R_XY(τ) = η R_XX(τ),   η = E{g′(X)},
where g′(x) is the derivative with respect to x.

 Linear Systems: L[·] represents a linear system if
L{a1 X(t1) + a2 X(t2)} = a1 L{X(t1)} + a2 L{X(t2)}.
Let Y(t) = L{X(t)} represent the output of a linear system.

 Time-Invariant System: L[·] represents a time-invariant system if
Y(t) = L{X(t)} ⇒ L{X(t − t0)} = Y(t − t0),
i.e., a shift in the input results in the same shift in the output.

 If L[·] satisfies both the linear and time-invariant conditions, it corresponds to a linear time-invariant (LTI) system.

1. Review of Stochastic Processes: Systems (5)
LTI systems can be uniquely represented in terms of their output to a delta function: δ(t) → LTI → h(t), where h(t) is the impulse response of the system.

Then, for an arbitrary input X(t), the output is
Y(t) = ∫_{−∞}^{+∞} h(t − τ) X(τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ,
where
X(t) = ∫_{−∞}^{+∞} X(τ) δ(t − τ) dτ.

1. Review of Stochastic Processes: Systems (6)
Thus
Y(t) = L{X(t)} = L{∫_{−∞}^{+∞} X(τ) δ(t − τ) dτ}
     = ∫_{−∞}^{+∞} L{X(τ) δ(t − τ)} dτ            (by linearity)
     = ∫_{−∞}^{+∞} X(τ) L{δ(t − τ)} dτ            (by time-invariance)
     = ∫_{−∞}^{+∞} X(τ) h(t − τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ.

Then, the mean of the output process is given by
µ_Y(t) = E{Y(t)} = ∫_{−∞}^{+∞} E{X(τ)} h(t − τ) dτ
       = ∫_{−∞}^{+∞} µ_X(τ) h(t − τ) dτ = µ_X(t) ∗ h(t).

1. Review of Stochastic Processes: Systems (7)
Similarly, the cross-correlation function between the input and output processes is given by
R_XY(t1, t2) = E{X(t1) Y*(t2)}
             = E{X(t1) ∫_{−∞}^{+∞} X*(t2 − α) h*(α) dα}
             = ∫_{−∞}^{+∞} E{X(t1) X*(t2 − α)} h*(α) dα
             = ∫_{−∞}^{+∞} R_XX(t1, t2 − α) h*(α) dα
             = R_XX(t1, t2) ∗ h*(t2).

1. Review of Stochastic Processes: Systems (8)
Finally, the output autocorrelation function is given by
R_YY(t1, t2) = E{Y(t1) Y*(t2)}
             = E{∫_{−∞}^{+∞} X(t1 − β) h(β) dβ · Y*(t2)}
             = ∫_{−∞}^{+∞} E{X(t1 − β) Y*(t2)} h(β) dβ
             = ∫_{−∞}^{+∞} R_XY(t1 − β, t2) h(β) dβ
             = R_XY(t1, t2) ∗ h(t1),
or
R_YY(t1, t2) = R_XX(t1, t2) ∗ h*(t2) ∗ h(t1).

In particular, if X(t) is wide-sense stationary, then µ_X(t) = µ_X. Also,
R_XY(t1, t2) = ∫_{−∞}^{+∞} R_XX(t1 − t2 + α) h*(α) dα
             = R_XX(τ) ∗ h*(−τ) = R_XY(τ),   τ = t1 − t2.
1. Review of Stochastic Processes: Systems (9)
Thus X(t) and Y(t) are jointly w.s.s. Further, the output autocorrelation simplifies to
R_YY(t1, t2) = ∫_{−∞}^{+∞} R_XY(t1 − β − t2) h(β) dβ,   τ = t1 − t2,
             = R_XY(τ) ∗ h(τ) = R_YY(τ),
or
R_YY(τ) = R_XX(τ) ∗ h*(−τ) ∗ h(τ).

1. Review of Stochastic Processes: Systems (10)

(a) A wide-sense stationary process X(t) through an LTI system h(t) yields a wide-sense stationary process Y(t).
(b) A strict-sense stationary process X(t) through an LTI system h(t) yields a strict-sense stationary process Y(t).
(c) A Gaussian process X(t) (also stationary) through a linear system yields a Gaussian process Y(t) (also stationary).

1. Review of Stochastic Processes: Systems (11)
 White Noise Process: W(t) is said to be a white noise process if
R_WW(t1, t2) = q(t1) δ(t1 − t2),
i.e., E[W(t1) W*(t2)] = 0 unless t1 = t2.

W(t) is said to be wide-sense stationary (w.s.s.) white noise if E[W(t)] = constant, and
R_WW(t1, t2) = q δ(t1 − t2) = q δ(τ).

If W(t) is also a Gaussian process (white Gaussian process), then all of its samples are independent random variables (why?).

For a w.s.s. white noise input W(t), the output N(t) of an LTI system h(t) satisfies
E[N(t)] = µ_W ∫_{−∞}^{+∞} h(τ) dτ, a constant, and
R_NN(τ) = q δ(τ) ∗ h*(−τ) ∗ h(τ) = q h*(−τ) ∗ h(τ) = q ρ(τ),
where
ρ(τ) = h(τ) ∗ h*(−τ) = ∫_{−∞}^{+∞} h(α) h*(α + τ) dα.

1. Review of Stochastic Processes: Systems (12)
Thus the output of a white noise process through an LTI system represents
a (colored) noise process.
Note: White noise need not be Gaussian.
“White” and “Gaussian” are two different concepts!

White noise W(t) → LTI system h(t) → colored noise N(t) = h(t) ∗ W(t)
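A minimal discrete-time sketch of this coloring effect (the FIR filter taps here are assumed for illustration): filtering white Gaussian samples through h produces an output whose autocorrelation matches q·ρ(τ), with ρ the deterministic correlation of h with itself.

```python
import numpy as np

rng = np.random.default_rng(3)
q = 1.0                                   # white-noise strength (sample variance)
W = rng.normal(0.0, np.sqrt(q), 400_000)  # white input samples
h = np.array([1.0, 0.5, 0.25])            # assumed example FIR impulse response
N = np.convolve(W, h, mode="valid")       # colored output N = h * W

rho = np.convolve(h, h[::-1])             # rho(tau) = h(tau) * h(-tau) (real h)
# Empirical autocorrelation of the output at lags 0, 1, 2:
R_emp = [np.mean(N[:len(N)-k] * N[k:]) for k in range(3)]
print(R_emp)                              # ≈ q*rho at lags 0, 1, 2
```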

1. Review of Stochastic Processes: Discrete Time (1)
 A discrete-time stochastic process (DTStP) Xn = X(nT) is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by
µ_n = E{X(nT)}
R(n1, n2) = E{X(n1 T) X*(n2 T)}
C(n1, n2) = R(n1, n2) − µ_{n1} µ*_{n2}
respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, X(nT) is wide-sense stationary if
E{X(nT)} = µ, a constant,
and
E[X{(k + n)T} X*{kT}] ≜ R(n) = r_n = r*_{−n},
i.e., R(n1, n2) = R(n1 − n2) = R*(n2 − n1).

1. Review of Stochastic Processes: Discrete Time (2)
 If X(nT) represents a wide-sense stationary input to a discrete-time system {h(nT)}, and Y(nT) the system output, then as before the cross-correlation function satisfies
R_XY(n) = R_XX(n) ∗ h*(−n)
and the output autocorrelation function is given by
R_YY(n) = R_XY(n) ∗ h(n)
or
R_YY(n) = R_XX(n) ∗ h*(−n) ∗ h(n).

Thus wide-sense stationarity from input to output is preserved for discrete-time systems also.
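These discrete-time relations can be evaluated directly with finite convolutions. The sketch below uses an assumed toy autocorrelation sequence and filter (not from the slides) and checks that the resulting R_YY(n) is a valid autocorrelation, i.e., conjugate-symmetric about lag 0.

```python
import numpy as np

h = np.array([1.0, -0.5])            # assumed system impulse response
# Assumed w.s.s. input autocorrelation, nonzero at lags -1, 0, 1:
R_XX = np.array([0.5, 2.0, 0.5])

h_rev = np.conj(h[::-1])             # h*(-n)
R_XY = np.convolve(R_XX, h_rev)      # R_XY(n) = R_XX(n) * h*(-n)
R_YY = np.convolve(R_XY, h)          # R_YY(n) = R_XY(n) * h(n)

print(R_YY)                          # conjugate-symmetric about the center lag
center = len(R_YY) // 2              # index of lag 0
```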

1. Review of Stochastic Processes: Discrete Time (3)
 The mean (or ensemble average µ) of a stochastic process is obtained by averaging across the process, while the time average is obtained by averaging along the process as
µ̂ = (1/M) Σ_{n=0}^{M−1} X_n
where M is the total number of time samples used in the estimation.

For a wide-sense stationary DTStP Xn, the time average converges to the ensemble average if
lim_{M→∞} E[(µ − µ̂)²] = 0,
in which case the process Xn is said to be mean ergodic (in the mean-square error sense).
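An illustrative sketch of mean ergodicity (the AR(1)-style process and its parameters are assumed, not from the slides): the time average of one long realization approaches the ensemble mean µ.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, a, M = 2.0, 0.9, 100_000   # assumed mean, AR coefficient, sample count
X = np.empty(M)
x = 0.0
for n in range(M):             # X_n - mu = a*(X_{n-1} - mu) + white noise
    x = a * x + rng.normal(0.0, 1.0)
    X[n] = mu + x

mu_hat = X.mean()              # time average (1/M) * sum of X_n
print(mu_hat)                  # ≈ mu = 2
```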

1. Review of Stochastic Processes: Discrete Time (4)
 Define an (M×1) observation vector xn representing elements of the time series:
xn = [Xn, Xn−1, …, Xn−M+1]^T
The (M×M) correlation matrix R (using the wide-sense stationarity condition) can be written as

R = E[xn xn^H] =
[ R(0)      R(1)      …  R(M−1) ]
[ R(−1)     R(0)      …  R(M−2) ]
[   ⋮         ⋮        ⋱    ⋮   ]
[ R(−M+1)   R(−M+2)   …  R(0)   ]

Superscript H denotes Hermitian transposition.
1. Review of Stochastic Processes: Discrete Time (5)
Properties:
(i) The correlation matrix R of a stationary DTStP is Hermitian: R^H = R, or R(−k) = R*(k). Therefore:

R =
[ R(0)      R(1)      …  R(M−1) ]
[ R*(1)     R(0)      …  R(M−2) ]
[   ⋮         ⋮        ⋱    ⋮   ]
[ R*(M−1)   R*(M−2)   …  R(0)   ]

(ii) The matrix R of a stationary DTStP is Toeplitz: all elements on the main diagonal are equal, and the elements on any subdiagonal are also equal.
(iii) Let x be an arbitrary (nonzero) (M×1) complex-valued vector; then x^H R x ≥ 0 (R is nonnegative definite).
(iv) If x^B_n = [x_{n−M+1}, x_{n−M+2}, …, x_n]^T is the backward arrangement of xn, then E[x^B_n (x^B_n)^H] = R^T.
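A minimal sketch of properties (i)-(iii) (the autocorrelation values R(k) are an assumed example, chosen as a valid complex AR(1) autocorrelation): build R from its first row using the Hermitian Toeplitz structure, then check R^H = R and nonnegative definiteness.

```python
import numpy as np

M = 4
# Assumed autocorrelation sequence R(0)..R(M-1): R(k) = 0.9^k * e^{j*0.5*k}
r = np.array([0.9**k * np.exp(1j * 0.5 * k) for k in range(M)])

# Fill R using R[i, j] = R(j - i) above the diagonal and R(-k) = R(k)* below it.
R = np.empty((M, M), dtype=complex)
for i in range(M):
    for j in range(M):
        R[i, j] = r[j - i] if j >= i else np.conj(r[i - j])

print(np.allclose(R, R.conj().T))        # Hermitian: True
print(np.linalg.eigvalsh(R).min() >= 0)  # nonnegative definite
```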
1. Review of Stochastic Processes: Discrete Time (6)
(v) The correlation matrices R_M and R_{M+1}, corresponding to M and M+1 observations of the process, are related by

R_{M+1} = [ R(0)   r^H ]
          [ r      R_M ]
or
R_{M+1} = [ R_M     r^{B*} ]
          [ r^{BT}  R(0)   ]

where r^H = [R(1), R(2), …, R(M)] and r^{BT} = [R(−M), R(−M+1), …, R(−1)].

1. Review of Stochastic Processes: Discrete Time (7)
 Consider a time series consisting of a complex sine wave plus noise:
u_n = u(n) = α exp(jωn) + v(n),   n = 0, …, N − 1.
The sources of the sine wave and the noise are independent. It is assumed that v(n) has zero mean and autocorrelation function given by
E[v(n) v*(n − k)] = σ_v²  for k = 0,  and 0 for k ≠ 0.

For lag k, the autocorrelation function of the process u(n) is
r(k) = E[u(n) u*(n − k)] = |α|² + σ_v²  for k = 0,  and |α|² e^{jωk} for k ≠ 0.
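A Monte Carlo check of r(k) (illustrative, with assumed values of α, ω and σ_v): the lag-0 value is |α|² + σ_v², and the lag-1 value is |α|² e^{jω}.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, w, sigma_v, N = 1.5, 0.7, 0.8, 200_000   # assumed parameters
n = np.arange(N)
v = sigma_v * rng.normal(size=N)                # real white noise, zero mean
u = alpha * np.exp(1j * w * n) + v

r0 = np.mean(u * np.conj(u))                    # lag 0
r1 = np.mean(u[1:] * np.conj(u[:-1]))           # lag 1
r0_theory = abs(alpha)**2 + sigma_v**2
r1_theory = abs(alpha)**2 * np.exp(1j * w)
print(r0.real, r0_theory)
print(r1, r1_theory)
```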

1. Review of Stochastic Processes: Discrete Time (8)
Therefore, the correlation matrix of u(n) is

R = |α|² ×
[ 1 + 1/ρ          exp(jω)          …  exp(jω(M−1)) ]
[ exp(−jω)         1 + 1/ρ          …  exp(jω(M−2)) ]
[    ⋮                ⋮              ⋱      ⋮       ]
[ exp(−jω(M−1))    exp(−jω(M−2))    …  1 + 1/ρ      ]

where ρ = |α|²/σ_v² is the signal-to-noise ratio (SNR).

1. Power Spectrum (1)
 For a deterministic signal x(t), the spectrum is well defined: if X(ω) represents its Fourier transform, i.e., if
X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt,    (3-1)
then |X(ω)|² represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by
∫_{−∞}^{+∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{+∞} |X(ω)|² dω = E.    (3-2)
Thus, |X(ω)|² ∆ω represents the signal energy in the band (ω, ω + ∆ω).

1. Power Spectrum (2)
 However, for stochastic processes a direct application of (3-1) generates a sequence of random variables for every ω. Moreover, for a stochastic process, E{|X(t)|²} represents the ensemble average power (instantaneous energy) at the instant t.
To obtain the spectral distribution of power versus frequency for stochastic processes, it is best to avoid infinite intervals to begin with, and start with a finite interval (−T, T) in (3-1). Formally, the partial Fourier transform of a process X(t) based on (−T, T) is given by
X_T(ω) = ∫_{−T}^{T} X(t) e^{−jωt} dt    (3-3)
so that
|X_T(ω)|²/(2T) = (1/2T) |∫_{−T}^{T} X(t) e^{−jωt} dt|²    (3-4)
represents the power distribution associated with that realization based on (−T, T). Notice that (3-4) represents a random variable for every ω, and its ensemble average gives the average power distribution based on (−T, T). Thus
1. Power Spectrum (3)
P_T(ω) = E{|X_T(ω)|²/(2T)} = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E{X(t1) X*(t2)} e^{−jω(t1 − t2)} dt1 dt2
       = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t1, t2) e^{−jω(t1 − t2)} dt1 dt2    (3-5)

represents the power distribution of X(t) based on (−T, T). If X(t) is assumed to be w.s.s., then R_XX(t1, t2) = R_XX(t1 − t2), and (3-5) simplifies to
P_T(ω) = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t1 − t2) e^{−jω(t1 − t2)} dt1 dt2.
Letting τ = t1 − t2, we get
P_T(ω) = (1/2T) ∫_{−2T}^{2T} R_XX(τ) e^{−jωτ} (2T − |τ|) dτ
       = ∫_{−2T}^{2T} R_XX(τ) e^{−jωτ} (1 − |τ|/2T) dτ ≥ 0    (3-6)
as the power distribution of the w.s.s. process X(t) based on (−T, T). Finally, letting T → ∞ in (3-6), we obtain

1. Power Spectrum (4)
S_XX(ω) = lim_{T→∞} P_T(ω) = ∫_{−∞}^{+∞} R_XX(τ) e^{−jωτ} dτ ≥ 0    (3-7)

as the power spectral density of the w.s.s. process X(t). Notice that
R_XX(τ) ←F.T.→ S_XX(ω) ≥ 0,    (3-8)
i.e., the autocorrelation function and the power spectrum of a w.s.s. process form a Fourier transform pair, a relation known as the Wiener-Khinchin theorem. From (3-8), the inverse formula gives
R_XX(τ) = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) e^{jωτ} dω    (3-9)
and in particular for τ = 0 we get
(1/2π) ∫_{−∞}^{+∞} S_XX(ω) dω = R_XX(0) = E{|X(t)|²} = P, the total power.    (3-10)
From (3-10), the area under S_XX(ω) represents the total power of the process X(t), and hence S_XX(ω) truly represents the power spectrum.
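A numeric sketch of the transform pair (3-8) in discrete time (the autocorrelation R(k) = a^|k| is an assumed example): its spectrum has the known closed form (1 − a²)/(1 − 2a·cos ω + a²), which is nonnegative for all ω as (3-7) requires.

```python
import numpy as np

a = 0.8
k = np.arange(-200, 201)                  # truncated lag axis (a^200 is negligible)
R = a ** np.abs(k)                        # assumed autocorrelation R(k) = a^|k|

w = np.linspace(-np.pi, np.pi, 101)
# Direct sum S(w) = sum_k R(k) e^{-jwk} versus the closed form:
S_sum = np.array([np.sum(R * np.exp(-1j * wk * k)).real for wk in w])
S_closed = (1 - a**2) / (1 - 2*a*np.cos(w) + a**2)

print(np.max(np.abs(S_sum - S_closed)))   # ≈ 0 (truncation error only)
print(S_sum.min() >= 0)                   # the power spectrum is nonnegative
```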

1. Power Spectrum (5)

S XX (ω ) represents the power


S XX ( ω ) ∆ω
in the band (ω , ω + ∆ω )

ω
0 ω ω + ∆ω

The nonnegative-definiteness property of the autocorrelation function


translates into the “nonnegative” property for its Fourier transform
(power spectrum)
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j^*\, R_{XX}(t_i - t_j) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} S_{XX}(\omega)\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j^*\,e^{j\omega(t_i - t_j)}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{+\infty} S_{XX}(\omega)\left|\sum_{i=1}^{n} a_i\,e^{j\omega t_i}\right|^2 d\omega \ \ge 0. \qquad (3\text{-}11)$$

From (3-11), it follows that


$$R_{XX}(\tau) \ \text{nonnegative-definite} \;\Leftrightarrow\; S_{XX}(\omega) \ge 0. \qquad (3\text{-}12)$$

1. Power Spectrum (6)
If X(t) is a real w.s.s process, then RXX(τ ) = RXX(-τ ), so that
$$S_{XX}(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{+\infty} R_{XX}(\tau)\cos\omega\tau\,d\tau = 2\int_{0}^{\infty} R_{XX}(\tau)\cos\omega\tau\,d\tau = S_{XX}(-\omega) \ \ge 0, \qquad (3\text{-}13)$$

so that the power spectrum is an even function (in addition to being real and nonnegative).

1. Power Spectra and Linear Systems (1)
 If a w.s.s. process X(t) with autocorrelation function $R_{XX}(\tau) \leftrightarrow S_{XX}(\omega)$ is applied to a linear system with impulse response h(t) (X(t) → h(t) → Y(t)), then the cross-correlation function $R_{XY}(\tau)$ and the output autocorrelation function $R_{YY}(\tau)$ can be determined. From there,

$$R_{XY}(\tau) = R_{XX}(\tau) * h^*(-\tau), \qquad R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau). \qquad (3\text{-}14)$$
If
f (t ) ↔ F (ω ), g (t ) ↔ G (ω ) (3-15)
then
f (t ) ∗ g (t ) ↔ F (ω )G (ω ) (3-16)
since

$$\mathcal{F}\{f(t) * g(t)\} = \int_{-\infty}^{+\infty}\left\{\int_{-\infty}^{+\infty} f(\tau)\,g(t-\tau)\,d\tau\right\} e^{-j\omega t}\,dt = \int_{-\infty}^{+\infty} f(\tau)\,e^{-j\omega\tau}\,d\tau \int_{-\infty}^{+\infty} g(t-\tau)\,e^{-j\omega(t-\tau)}\,d(t-\tau) = F(\omega)\,G(\omega). \qquad (3\text{-}17)$$
1. Power Spectra and Linear Systems (2)
Using (3-15)-(3-17) in (3-14) we get
$$S_{XY}(\omega) = \mathcal{F}\{R_{XX}(\tau) * h^*(-\tau)\} = S_{XX}(\omega)\,H^*(\omega), \qquad (3\text{-}18)$$

since

$$\int_{-\infty}^{+\infty} h^*(-\tau)\,e^{-j\omega\tau}\,d\tau = \left(\int_{-\infty}^{+\infty} h(t)\,e^{-j\omega t}\,dt\right)^{\!*} = H^*(\omega),$$

where

$$H(\omega) = \int_{-\infty}^{+\infty} h(t)\,e^{-j\omega t}\,dt \qquad (3\text{-}19)$$

represents the transfer function of the system, and

$$S_{YY}(\omega) = \mathcal{F}\{R_{YY}(\tau)\} = S_{XY}(\omega)\,H(\omega) = S_{XX}(\omega)\,|H(\omega)|^2. \qquad (3\text{-}20)$$
From (3-18), the cross spectrum need not be real or nonnegative. However, the output power spectrum is real and nonnegative, and is related to the input spectrum and the system transfer function as in (3-20). Eq. (3-20) can be used for system identification as well.
Example of Thermal noise (Example 11.1, p. 351, [4])
1. Power Spectra and Linear Systems (3)
 W.S.S White Noise Process: If W(t) is a w.s.s white noise process, then
$$R_{WW}(\tau) = q\,\delta(\tau) \;\Rightarrow\; S_{WW}(\omega) = q. \qquad (3\text{-}21)$$

The spectrum of a white noise process is thus flat, justifying its name. Notice that a white noise process is unrealizable, since its total power is indeterminate.

From (3-20), if the input to an unknown system is a white noise process, then the output spectrum is given by

$$S_{YY}(\omega) = q\,|H(\omega)|^2. \qquad (3\text{-}22)$$

Notice that the output spectrum captures the system transfer function
characteristics entirely, and for rational systems Eq (3-22) may be used
to determine the pole/zero locations of the underlying system.

1. Power Spectra and Linear Systems (4)
Example 3.1: A w.s.s white noise process W(t) is passed through a low pass
filter (LPF) with bandwidth B/2. Find the autocorrelation function of the
output process.
Solution: Let X(t) represent the output of the LPF. Then from (3-22)
$$S_{XX}(\omega) = q\,|H(\omega)|^2 = \begin{cases} q, & |\omega| \le B/2 \\ 0, & |\omega| > B/2. \end{cases} \qquad (3\text{-}23)$$
Inverse transform of $S_{XX}(\omega)$, using (3-9), gives the output autocorrelation function:

$$R_{XX}(\tau) = \frac{1}{2\pi}\int_{-B/2}^{B/2} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega = \frac{q}{2\pi}\int_{-B/2}^{B/2} e^{j\omega\tau}\,d\omega = \frac{qB}{2\pi}\,\frac{\sin(B\tau/2)}{B\tau/2} = \frac{qB}{2\pi}\,\mathrm{sinc}(B\tau/2). \qquad (3\text{-}24)$$

(Figures: $|H(\omega)|^2$ is an ideal rectangle of height 1 on $|\omega| \le B/2$; $R_{XX}(\tau)$ is a sinc pulse peaking at $\tau = 0$.)
1. Power Spectra and Linear Systems (5)
Eq. (3-23) represents a colored (bandlimited) noise spectrum, and (3-24) its autocorrelation function.
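A numerical sanity check of this example (illustrative, not from the slides): the sketch below evaluates $R(\tau) = \frac{1}{2\pi}\int S_{XX}(\omega)e^{j\omega\tau}d\omega$, per (3-9), for a flat spectrum of height q on $|\omega| \le B/2$ by a Riemann sum, and compares it with the closed form $(qB/2\pi)\,\mathrm{sinc}(B\tau/2)$; the values of q, B, and the lag grid are arbitrary assumptions.

```python
import numpy as np

q, B = 2.0, 10.0                                   # assumed noise level and bandwidth (rad/s)
omega = np.linspace(-B / 2, B / 2, 20001)
dw = omega[1] - omega[0]
tau = np.array([0.0, 0.3, 1.0, 2.5])               # assumed lag values

# R(tau) = (1/2pi) * integral over |omega| <= B/2 of q e^{j omega tau}
R_numeric = np.array([
    (q * np.exp(1j * omega * t)).sum().real * dw / (2 * np.pi) for t in tau
])

# Closed form (qB/2pi) sinc(B tau/2); note np.sinc(x) = sin(pi x)/(pi x)
R_closed = q * B / (2 * np.pi) * np.sinc(B * tau / (2 * np.pi))
print(np.max(np.abs(R_numeric - R_closed)))
```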

Example 3.2: Let

$$Y(t) = \frac{1}{2T}\int_{t-T}^{t+T} X(\tau)\,d\tau \qquad (3\text{-}25)$$
represent a “smoothing” operation using a moving window on the input
process X(t). Find the spectrum of the output Y(t) in terms of that of X(t).
Solution: If we define an LTI system with impulse response

$$h(t) = \begin{cases} 1/2T, & |t| \le T \\ 0, & \text{otherwise,} \end{cases}$$

then in terms of h(t), (3-25) reduces to

$$Y(t) = \int_{-\infty}^{+\infty} h(t-\tau)\,X(\tau)\,d\tau = h(t) * X(t), \qquad (3\text{-}26)$$

so that

$$S_{YY}(\omega) = S_{XX}(\omega)\,|H(\omega)|^2, \qquad (3\text{-}27)$$

where

$$H(\omega) = \int_{-T}^{+T} \frac{1}{2T}\,e^{-j\omega t}\,dt = \mathrm{sinc}(\omega T), \qquad (3\text{-}28)$$

1. Power Spectra and Linear Systems (6)
so that

$$S_{YY}(\omega) = S_{XX}(\omega)\,\mathrm{sinc}^2(\omega T). \qquad (3\text{-}29)$$

(Figure: $S_{XX}(\omega)$ multiplied by $\mathrm{sinc}^2(\omega T)$, whose first null is at $\omega = \pi/T$, gives $S_{YY}(\omega)$.)
Notice that the effect of the smoothing operation in (3-25) is to suppress


the high frequency components in the input (beyond π / T), and the
equivalent linear system acts as a low-pass filter (continuous-time moving
average) with bandwidth 2π / T in this case.
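The transfer function (3-28) of the moving-average window can be checked numerically; the sketch below (an illustration, with assumed T and test frequencies) discretizes $h(t) = 1/2T$ on $(-T, T)$ and compares the resulting frequency response with $\mathrm{sinc}(\omega T)$, whose first null sits at $\omega = \pi/T$.

```python
import numpy as np

T = 0.5                                   # assumed window half-width
dt = 1e-3
t = np.arange(-T, T, dt) + dt / 2         # midpoints covering (-T, T)
h = np.full(t.size, 1.0 / (2 * T))        # h(t) = 1/2T on (-T, T)

omega = np.array([0.0, 1.0, np.pi / T, 10.0])     # assumed test frequencies (rad/s)
H = np.array([np.sum(h * np.exp(-1j * w * t)) * dt for w in omega])

# Closed form H(omega) = sinc(omega*T); np.sinc(x) = sin(pi x)/(pi x)
H_closed = np.sinc(omega * T / np.pi)
print(np.round(np.abs(H), 4))             # |H| = 1 at dc, ~0 at the first null pi/T
```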

1. Discrete-Time Processes (1)
 For discrete-time w.s.s. stochastic processes X(nT) with autocorrelation sequence $\{r_k\}_{k=-\infty}^{+\infty}$, proceeding as above, or formally defining a continuous-time process $X(t) = \sum_n X(nT)\,\delta(t - nT)$, we get the corresponding autocorrelation function to be

$$R_{XX}(\tau) = \sum_{k=-\infty}^{+\infty} r_k\,\delta(\tau - kT).$$

Its Fourier transform is given by

$$S_{XX}(\omega) = \sum_{k=-\infty}^{+\infty} r_k\,e^{-jk\omega T} \ \ge 0, \qquad (3\text{-}30)$$

and it defines the power spectrum of the discrete-time process X(nT).


From (3-30),

$$S_{XX}(\omega) = S_{XX}(\omega + 2\pi/T), \qquad (3\text{-}31)$$

so that $S_{XX}(\omega)$ is a periodic function with period

$$2B = \frac{2\pi}{T}. \qquad (3\text{-}32)$$
1. Discrete-Time Processes (2)
This gives the inverse relation

$$r_k = \frac{1}{2B}\int_{-B}^{B} S_{XX}(\omega)\,e^{jk\omega T}\,d\omega \qquad (3\text{-}33)$$

and

$$r_0 = E\{|X(nT)|^2\} = \frac{1}{2B}\int_{-B}^{B} S_{XX}(\omega)\,d\omega \qquad (3\text{-}34)$$

represents the total power of the discrete-time process X(nT). The input–output relations for a discrete-time system h(nT) translate into

$$S_{XY}(\omega) = S_{XX}(\omega)\,H^*(e^{j\omega}) \qquad (3\text{-}35)$$

and

$$S_{YY}(\omega) = S_{XX}(\omega)\,|H(e^{j\omega})|^2, \qquad (3\text{-}36)$$

where

$$H(e^{j\omega}) = \sum_{n=-\infty}^{+\infty} h(nT)\,e^{-j\omega nT} \qquad (3\text{-}37)$$

represents the discrete-time system transfer function.
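To illustrate (3-30) with a concrete sequence (an assumed example, not from the slides), take $r_k = a^{|k|}$, whose spectrum has the known closed form $(1 - a^2)/(1 - 2a\cos\omega T + a^2)$; the truncated sum reproduces it and is nonnegative, as (3-30) requires.

```python
import numpy as np

a, T = 0.6, 1.0                           # assumed decay factor and sample spacing
k = np.arange(-200, 201)
r = a ** np.abs(k)                        # autocorrelation sequence r_k = a^{|k|}

omega = np.linspace(-np.pi / T, np.pi / T, 1001)
# S(omega) = sum_k r_k e^{-j k omega T}, per (3-30), truncated at |k| = 200
S = (r[None, :] * np.exp(-1j * omega[:, None] * k[None, :] * T)).sum(axis=1).real

# Known closed form for this geometric sequence
S_closed = (1 - a ** 2) / (1 - 2 * a * np.cos(omega * T) + a ** 2)
print(S.min())                            # strictly positive, as (3-30) requires
```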

1. Matched Filter (1)
 Let r(t) represent a deterministic signal s(t) corrupted by noise. Thus
$$r(t) = s(t) + w(t), \qquad 0 < t < t_0, \qquad (3\text{-}38)$$

where r(t) represents the observed data, which is passed through a receiver with impulse response h(t) whose output is sampled at t = t0 (r(t) → h(t) → y(t), sample at t = t0). The output y(t) is given by

$$y(t) \triangleq y_s(t) + n(t), \qquad (3\text{-}39)$$

where

$$y_s(t) = s(t) * h(t), \qquad n(t) = w(t) * h(t),$$

and it can be used to make a decision about the presence or absence of s(t) in r(t). Towards this, one approach is to require that the receiver output signal-to-noise ratio (SNR)₀ at time instant t0 be maximized. Notice that

1. Matched Filter (2)
$$(SNR)_0 \triangleq \frac{\text{output signal power at } t = t_0}{\text{average output noise power}} = \frac{|y_s(t_0)|^2}{E\{|n(t)|^2\}} = \frac{\left|\frac{1}{2\pi}\int_{-\infty}^{+\infty} S(\omega)H(\omega)\,e^{j\omega t_0}\,d\omega\right|^2}{\frac{1}{2\pi}\int_{-\infty}^{+\infty} S_{nn}(\omega)\,d\omega} = \frac{\left|\frac{1}{2\pi}\int_{-\infty}^{+\infty} S(\omega)H(\omega)\,e^{j\omega t_0}\,d\omega\right|^2}{\frac{1}{2\pi}\int_{-\infty}^{+\infty} S_{WW}(\omega)\,|H(\omega)|^2\,d\omega} \qquad (3\text{-}41)$$
represents the output SNR, where we have made use of (3-20) to
determine the average output noise power, and the problem is to
maximize (SNR)0 by optimally choosing the receiver filter H(ω).

 Optimum Receiver for White Noise Input: The simplest input noise
model assumes w(t) to be white noise in (3-38) with spectral density N0,
so that (3-41) simplifies to
$$(SNR)_0 = \frac{\left|\int_{-\infty}^{+\infty} S(\omega)\,H(\omega)\,e^{j\omega t_0}\,d\omega\right|^2}{2\pi N_0 \int_{-\infty}^{+\infty} |H(\omega)|^2\,d\omega}. \qquad (3\text{-}42)$$
1. Matched Filter (3)
Direct application of the Cauchy–Schwarz inequality in (3-42) gives

$$(SNR)_0 \le \frac{1}{2\pi N_0}\int_{-\infty}^{+\infty} |S(\omega)|^2\,d\omega = \frac{\int_{-\infty}^{+\infty} s^2(t)\,dt}{N_0} = \frac{E_s}{N_0}, \qquad (3\text{-}43)$$

and equality in (3-43) is guaranteed if and only if

$$H(\omega) = S^*(\omega)\,e^{-j\omega t_0} \qquad (3\text{-}44)$$

or

$$h(t) = s(t_0 - t). \qquad (3\text{-}45)$$

From (3-45), the optimum receiver that maximizes the output SNR at t =
t0 is given by (3-44)-(3-45). Notice that (3-45) need not be causal, and the
corresponding SNR is given by (3-43).

1. Matched Filter (4)
(Figure: (a) the pulse s(t) on (0, T); (b) the optimum h(t) for t0 = T/2; (c) the optimum h(t) for t0 = T.)

The figure shows the optimum h(t) for two different values of t0. In figure (b) the receiver is noncausal, whereas in figure (c) the receiver represents a causal waveform.

If the receiver is required to be causal, the optimum causal receiver can be shown to be

$$h_{\mathrm{opt}}(t) = s(t_0 - t)\,u(t), \qquad (3\text{-}46)$$

and the corresponding maximum (SNR)₀ in that case is given by

$$(SNR)_0 = \frac{1}{N_0}\int_{0}^{t_0} s^2(t)\,dt. \qquad (3\text{-}47)$$
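A discrete-time illustration of (3-43)–(3-45) (assumed pulse shape, noise level, and seed, not from the slides): for a sampled pulse s with energy Es and white noise of per-sample power σ² playing the role of N0, the time-reversed filter h(t) = s(t0 − t) attains output SNR Es/σ², and an arbitrary competing filter does worse.

```python
import numpy as np

rng = np.random.default_rng(1)                                # assumed seed
s = np.sin(2 * np.pi * np.arange(64) / 16) * np.hanning(64)   # assumed pulse shape s(t)
Es = np.sum(s ** 2)                                           # pulse energy
sigma2 = 0.5                                                  # noise power per sample (role of N0)

def output_snr(h):
    # deterministic output y_s(t0) at the sampling instant; noise power sigma2*||h||^2
    ys = np.convolve(s, h)[len(s) - 1]
    return ys ** 2 / (sigma2 * np.sum(h ** 2))

h_matched = s[::-1]                        # h(t) = s(t0 - t), as in (3-45)
h_other = rng.standard_normal(64)          # an arbitrary competing filter
print(output_snr(h_matched), Es / sigma2, output_snr(h_other))
```

The matched filter's SNR equals Es/σ² exactly, the discrete counterpart of (3-43) holding with equality.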

Quiz 3 & 4

Chapter 2:
Communication Signals and Systems

 There are 2 approaches:
 Complex baseband (complex envelope) representation.
 Signal space representation.

2. Hilbert Transform (1)
The Hilbert transform of a function s(t) is defined by

$$\hat{s}(t) = \frac{1}{\pi}\int_{-\infty}^{+\infty} \frac{s(\tau)}{t - \tau}\,d\tau.$$

For example, the Hilbert transform of cos(2πfct) is sin(2πfct).

Taking the Fourier transform, we have

$$\hat{S}(f) = -j\,\mathrm{sgn}(f)\,S(f).$$

The inverse Hilbert transform in the frequency domain is given by

$$S(f) = j\,\mathrm{sgn}(f)\,\hat{S}(f).$$

In the time domain, we have

$$s(t) = -\frac{1}{\pi}\int_{-\infty}^{+\infty} \frac{\hat{s}(\tau)}{t - \tau}\,d\tau.$$
2. Hilbert Transform (2)
Hence, we obtain the following two properties of the Hilbert transform: the magnitude spectra of s(t) and ŝ(t) are identical, i.e., $|\hat{S}(f)| = |S(f)|$; and applying the Hilbert transform twice returns the negative of the original signal, i.e., the Hilbert transform of ŝ(t) is −s(t).
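The frequency-domain relation $\hat{S}(f) = -j\,\mathrm{sgn}(f)\,S(f)$ suggests a direct FFT implementation; the sketch below (illustrative, with an assumed tone frequency) verifies that it maps cos(2πfct) to sin(2πfct) and that applying it twice negates the signal.

```python
import numpy as np

def hilbert_transform(x):
    """Hilbert transform via the frequency-domain rule X_hat(f) = -j sgn(f) X(f)."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(-1j * np.sign(f) * X).real

N = 1024
n = np.arange(N)
fc = 8 / N                                  # assumed tone: 8 full cycles, so the DFT is exact
x = np.cos(2 * np.pi * fc * n)
x_hat = hilbert_transform(x)
print(np.max(np.abs(x_hat - np.sin(2 * np.pi * fc * n))))   # ~0: cos -> sin
```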
2. Representations of Bandpass Deterministic Signals (1)
A (real) signal s(t) is called bandpass or narrowband if its frequency
content concentrates around fc.
More precisely, s(t) is bandpass if its Fourier transform satisfies

$$S(f) = 0 \quad \text{for } \big||f| - f_c\big| > W,$$

where W ≤ fc. Recall the Fourier transform property that for any signal s(t) (real or complex),

$$\mathcal{F}\{s^*(t)\} = S^*(-f).$$

The signal s(t) is real if and only if

$$s(t) = s^*(t),$$

if and only if

$$S(f) = S^*(-f).$$
Therefore, the Fourier transform on positive frequencies of a real


bandpass signal completely specifies the signal itself.
2. Representations of Bandpass Deterministic Signals (2)
 Complex analytic representation
Since the bandpass signal s(t) is real, it is uniquely determined by S(f) for f > 0. We define S₊(f) by

$$S_+(f) = 2\,u(f)\,S(f) = \begin{cases} 2S(f), & f > 0 \\ 0, & f < 0. \end{cases}$$

The corresponding time domain signal s₊(t) is the complex analytic representation of s(t). Taking the inverse Fourier transform, we have

$$s_+(t) = s(t) + j\,\hat{s}(t).$$

Conversely,

$$s(t) = \mathrm{Re}\{s_+(t)\}$$

and

$$S(f) = \frac{S_+(f) + S_+^*(-f)}{2}.$$
2. Representations of Bandpass Deterministic Signals (3)
 Complex baseband representation
If the carrier frequency fc is known, one can translate s₊(t) to baseband to obtain the complex baseband representation or complex envelope, defined by

$$\tilde{s}(t) = s_+(t)\,e^{-j2\pi f_c t}.$$

Taking the Fourier transform, we have

$$\tilde{S}(f) = S_+(f + f_c) = 2\,u(f + f_c)\,S(f + f_c).$$

Notice that the complex envelope is bandlimited to W, i.e., $\tilde{S}(f) = 0$ for |f| > W. Conversely,

$$s_+(t) = \tilde{s}(t)\,e^{j2\pi f_c t}$$

and

$$S_+(f) = \tilde{S}(f - f_c).$$

To get back s(t) and S(f), one can use

$$s(t) = \mathrm{Re}\{\tilde{s}(t)\,e^{j2\pi f_c t}\}, \qquad S(f) = \frac{\tilde{S}(f - f_c) + \tilde{S}^*(-f - f_c)}{2}.$$
2. Representations of Bandpass Deterministic Signals (4)
 Bandpass representation
Let sI(t) and sQ(t) be the real and the imaginary parts of $\tilde{s}(t)$, i.e.,

$$\tilde{s}(t) = s_I(t) + j\,s_Q(t).$$

Then

$$s(t) = s_I(t)\cos(2\pi f_c t) - s_Q(t)\sin(2\pi f_c t)$$

and, in the frequency domain,

$$S(f) = \frac{S_I(f - f_c) + S_I(f + f_c)}{2} - \frac{S_Q(f - f_c) - S_Q(f + f_c)}{2j}.$$

This is the bandpass representation of s(t). Notice that

$$s_I(t) = \frac{\tilde{s}(t) + \tilde{s}^*(t)}{2}$$

and hence

$$S_I(f) = \frac{\tilde{S}(f) + \tilde{S}^*(-f)}{2}.$$

Therefore, sI(t) is also bandlimited to W.

2. Representations of Bandpass Deterministic Signals (5)
Similarly,

$$s_Q(t) = \frac{\tilde{s}(t) - \tilde{s}^*(t)}{2j} \quad\Rightarrow\quad S_Q(f) = \frac{\tilde{S}(f) - \tilde{S}^*(-f)}{2j},$$

and sQ(t) is also bandlimited to W. Therefore, the bandpass representation expresses s(t) in terms of a pair of baseband signals modulated onto quadrature carriers. The baseband signals can be obtained directly from the bandpass signal via the following relationships:

$$s_I(t) = \mathrm{LPF}_{f_c}\big[\,2\,s(t)\cos(2\pi f_c t)\,\big], \qquad s_Q(t) = \mathrm{LPF}_{f_c}\big[-2\,s(t)\sin(2\pi f_c t)\,\big],$$

where LPF_{fc}[·] represents ideal lowpass filtering with cut-off at fc.

2. Representations of Bandpass Deterministic Signals (6)
In the time domain, we have

$$2\,s(t)\cos(2\pi f_c t) = s_I(t) + s_I(t)\cos(4\pi f_c t) - s_Q(t)\sin(4\pi f_c t),$$

so the lowpass filter removes the double-frequency terms and leaves sI(t) (and similarly for sQ(t)).
Therefore, we can use the circuit shown in the figure below to convert a real bandpass signal to its complex envelope. Since the complex envelope is a (complex) baseband signal which uniquely represents the real bandpass signal, we can perform all developments of the RF communication system in baseband by treating the complex envelope as our baseband signal.
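The down-conversion just described can be sketched numerically (illustrative parameters; the ideal brick-wall LPF is implemented in the FFT domain): mixing with 2cos(2πfct) and −2sin(2πfct) followed by lowpass filtering recovers sI(t) and sQ(t).

```python
import numpy as np

fs, fc = 1000.0, 100.0                    # assumed sample rate and carrier (Hz)
t = np.arange(0, 1, 1 / fs)

sI = np.cos(2 * np.pi * 3 * t)            # assumed in-phase baseband signal
sQ = np.sin(2 * np.pi * 5 * t)            # assumed quadrature baseband signal
s = sI * np.cos(2 * np.pi * fc * t) - sQ * np.sin(2 * np.pi * fc * t)

def lpf(x, cutoff):
    # ideal lowpass filter realized by zeroing FFT bins above the cutoff
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), 1 / fs)
    X[np.abs(f) > cutoff] = 0
    return np.fft.ifft(X).real

sI_rec = lpf(2 * s * np.cos(2 * np.pi * fc * t), fc)
sQ_rec = lpf(-2 * s * np.sin(2 * np.pi * fc * t), fc)
print(np.max(np.abs(sI_rec - sI)), np.max(np.abs(sQ_rec - sQ)))
```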

2. Representations of Bandpass Deterministic Signals (7)

Digital modulation

2. Representation of Bandpass Random Processes (1)
Suppose n(t) is a wide-sense stationary (WSS) process with zero mean and power spectral density Φn(f). If Φn(f) satisfies the narrowband assumption, i.e.,

$$\Phi_n(f) = 0 \quad \text{for } \big||f| - f_c\big| > W,$$

where W ≤ fc, then n(t) is called a bandpass (narrowband) process. It turns out that n(t) also has the bandpass representation

$$n(t) = n_I(t)\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t),$$

where nI(t) and nQ(t) are zero-mean jointly WSS processes. Moreover, if n(t) is Gaussian, nI(t) and nQ(t) are jointly Gaussian. By employing the stationarity of the random processes involved, we can show that
2. Representation of Bandpass Random Processes (2)
$$R_{n_I}(\tau) = R_{n_Q}(\tau), \qquad R_{n_I n_Q}(\tau) = -R_{n_Q n_I}(\tau), \qquad R_n(\tau) = R_{n_I}(\tau)\cos(2\pi f_c\tau) - R_{n_Q n_I}(\tau)\sin(2\pi f_c\tau),$$

where Rn(τ) = E[n(t + τ)n(t)] is the autocorrelation function of the random process n(t), RnI(τ) = E[nI(t + τ)nI(t)] and RnQ(τ) = E[nQ(t + τ)nQ(t)] are, respectively, the autocorrelation functions of the processes nI(t) and nQ(t), and RnInQ(τ) = E[nI(t + τ)nQ(t)] and RnQnI(τ) = E[nQ(t + τ)nI(t)] are the cross-correlation functions. Now, let us define the complex envelope of the random process n(t),

$$\tilde{n}(t) = n_I(t) + j\,n_Q(t).$$

Obviously, $\tilde{n}(t)$ is a zero-mean WSS complex random process with autocorrelation function

$$R_{\tilde{n}}(\tau) = E[\tilde{n}(t+\tau)\,\tilde{n}^*(t)] = 2\big[R_{n_I}(\tau) + j\,R_{n_Q n_I}(\tau)\big].$$
2. Representation of Bandpass Random Processes (3)
In terms of the complex envelope,

$$R_n(\tau) = \tfrac{1}{2}\,\mathrm{Re}\big\{R_{\tilde{n}}(\tau)\,e^{j2\pi f_c\tau}\big\}.$$

The frequency domain equivalent is

$$\Phi_n(f) = \tfrac{1}{4}\big[\Phi_{\tilde{n}}(f - f_c) + \Phi_{\tilde{n}}(-f - f_c)\big].$$

To obtain ΦnI(f) and ΦnQnI(f) from Φn(f), we can use

$$\Phi_{n_I}(f) = \Phi_{n_Q}(f) = \Phi_n(f - f_c) + \Phi_n(f + f_c), \qquad j\,\Phi_{n_Q n_I}(f) = \Phi_n(f + f_c) - \Phi_n(f - f_c), \qquad |f| \le W.$$
2. Representation of Bandpass Random Processes (4)
 Bandpass additive Gaussian noise
A common example of a narrowband process is the bandpass additive Gaussian noise n(t) with zero mean and power spectral density Φn(f) = N0/2 for |f| < 2fc and Φn(f) = 0 otherwise. Since Φn(f) satisfies the narrowband assumption, n(t) can be written as

$$n(t) = n_I(t)\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t).$$
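A statistical sketch of this representation (assumed sample rate, carrier, bandwidth, and seed, not from the slides): generate independent lowpass Gaussian processes nI(t) and nQ(t), form the bandpass noise n(t), and check that n, nI, and nQ all carry approximately the same power, as the jointly WSS model implies.

```python
import numpy as np

rng = np.random.default_rng(2)                     # assumed seed
fs, fc, W, N = 1000.0, 100.0, 20.0, 200000         # assumed rates, carrier, bandwidth
t = np.arange(N) / fs

def lowpass_gaussian(bandwidth):
    # Gaussian noise confined to |f| < bandwidth by an ideal brick-wall filter
    X = np.fft.fft(rng.standard_normal(N))
    f = np.fft.fftfreq(N, 1 / fs)
    X[np.abs(f) > bandwidth] = 0
    return np.fft.ifft(X).real

nI = lowpass_gaussian(W)
nQ = lowpass_gaussian(W)
n = nI * np.cos(2 * np.pi * fc * t) - nQ * np.sin(2 * np.pi * fc * t)
print(np.var(n), np.var(nI), np.var(nQ))           # all approximately equal
```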
2. Signal Space Representation (1)
For M-ary communications, the transmitter selects one out of a set of M signals, instead of one out of a set of 2 signals, to send. Due to the increased size of the signal set, it is convenient to have a simple way to represent the set of signals.

A common approach is to obtain an orthonormal basis for the signal set


and, then, to represent a signal by its coordinates with respect to the basis.
In other words, we represent the set of signals by a set of vectors. This
method provides us a geometric viewpoint for the set of signals, and is
usually known as the signal space representation.

2. Signal Space Representation (2)
 Represent any set of M energy signals {si(t), i = 1, 2, · · · , M} as linear combinations of N orthonormal basis functions {φj(t), j = 1, 2, · · · , N}, N ≤ M:

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt.$$

 Real-valued basis functions φ1(t), φ2(t), · · · , φN(t) are orthonormal, i.e.,

$$\int_0^T \phi_j(t)\,\phi_k(t)\,dt = \delta_{jk} = \begin{cases} 1, & j = k \\ 0, & j \ne k. \end{cases}$$
2. Signal Space Representation (3)
 Vector to signal (a), and signal to vector (b) mappings:

2. Signal Space Representation (4)

2. Signal Space Representation (5)
 Example: The four signal vectors represented as points in three-
dimensional space.

2. Signal Space Representation (6)
 N-dimensional Euclidean Space
 Length of a vector: absolute value or norm,

$$\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T\mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2 = E_i, \ \text{the energy of } s_i(t).$$
2. Signal Space Representation (7)
 Inner product (dot product, scalar product):

$$(\mathbf{s}_i, \mathbf{s}_k) = \mathbf{s}_i^T\mathbf{s}_k = \sum_{j=1}^{N} s_{ij}\,s_{kj} = \int_0^T s_i(t)\,s_k(t)\,dt$$

 Euclidean distance:

$$d_{ik}^2 = \|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T \big(s_i(t) - s_k(t)\big)^2\,dt$$

 Angle θik subtended between two signal vectors si and sk:

$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T\mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|}$$
2. Signal Space Representation (8)
 Gram-Schmidt Orthogonalization: construct the basis recursively, removing from each signal its projections onto the basis functions already found,

$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t), \qquad s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad \phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}},$$

discarding any gi(t) that is identically zero.
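The procedure can be sketched on sampled waveforms (an illustrative implementation with assumed example signals; inner products are approximated by Riemann sums): three signals, one of them linearly dependent, yield a two-function orthonormal basis and coordinate vectors that reconstruct the originals.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormal basis (rows) for sampled signals; integrals ~ Riemann sums."""
    basis = []
    for s in signals:
        r = np.array(s, dtype=float)
        for phi in basis:
            r = r - np.sum(r * phi) * dt * phi     # remove projection onto phi
        energy = np.sum(r ** 2) * dt
        if energy > 1e-12:                         # keep only independent directions
            basis.append(r / np.sqrt(energy))
    return np.array(basis)

dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.where(t < 0.5, 1.0, 0.0)                   # assumed example signals
s2 = np.ones_like(t)
s3 = np.where(t < 0.5, 1.0, -1.0)                  # s3 = 2*s1 - s2: linearly dependent
phi = gram_schmidt([s1, s2, s3], dt)

coords = np.array([[np.sum(s * p) * dt for p in phi] for s in (s1, s2, s3)])
print(phi.shape)                                   # (2, 1000): only two basis functions
```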
2. Signal Space Representation (9)

2. Signal Space Representation (10)
Example (Example 2.2-3, text book):

s4(t)

2. Signal Space Representation (12)

Quiz
 Find a complete orthonormal
set of basis functions
 Characterize signals by
corresponding vectors

