
Introduction to Digital Communications System
Recommended Books
Digital Communications / Fourth Edition (textbook)
-- John G. Proakis, McGraw Hill
Communication Systems / 4th Edition
-- Simon Haykin, John Wiley & Sons, Inc.
Digital Communications – Fundamentals and Applications /
2nd Edition
-- Bernard Sklar, Prentice Hall
Principles of Communications / Fifth Edition
-- Rodger E. Ziemer and William H. Tranter, John Wiley &
Sons, Inc.
Modern Digital and Analog Communication Systems
-- B.P. Lathi, Holt, Rinehart and Winston, Inc.
2 WITS Lab, NSYSU.
Example of Communications System

[Diagram: an end-to-end example built around the Public Switched Telephone Network (PSTN). Local loops feed switches at central offices, where A/D conversion (digitization) takes place; T1/E1 transmission facilities with regenerators and SONET/SDH multiplexers interconnect the central offices and the mobile switching centers, which serve the base stations.]

3 WITS Lab, NSYSU.


Basic Digital Communication Nomenclature

Textual Message: information comprised of a sequence of


characters.
Binary Digit (Bit): the fundamental information unit for all
digital systems.
Symbol (mi, where i = 1, 2, …, M): for transmission of the bit
stream, groups of k bits are combined to form a new symbol
from a finite set of M such symbols; M = 2^k.
Digital Waveform: voltage or current waveform representing
a digital symbol.
Data Rate: Symbol transmission is associated with a symbol
duration T. Data rate R=k/T [bps].
Baud Rate: number of symbols transmitted per second [baud].

4 WITS Lab, NSYSU.


Nomenclature Examples

5 WITS Lab, NSYSU.


Messages, Characters, and Symbols

6 WITS Lab, NSYSU.


Typical Digital Communications System
[Block diagram: transmit chain (Format, Source Encoding, Encryption, Channel Encoding, Interleaving, Multiplexing, Modulation, Frequency Spreading, Multiple Access, RF/PA) sends the waveform si(t) over the channel; the receive chain applies the inverse operations (RF/IF, Multiple Access, Frequency Despreading, Demodulation, Demultiplexing, Deinterleaving, Channel Decoding, Decryption, Source Decoding, Format) to recover the estimate m̂i from ŝi(t). Formatting and modulation/demodulation are marked essential; the remaining blocks are optional. Synchronization spans both chains; other sources and destinations share the multiplexing and multiple-access stages.]

7 WITS Lab, NSYSU.


Format
Typical Digital Communications System
[Block diagram repeated from slide 7, with the Format blocks highlighted.]

9 WITS Lab, NSYSU.


Formatting and Baseband Transmission

10 WITS Lab, NSYSU.


Sampling Theorem

11 WITS Lab, NSYSU.


Sampling Theorem
Sampling Theorem: A bandlimited signal having no
spectral components above fm hertz can be determined
uniquely by values sampled at uniform intervals of Ts
seconds, where
$$T_s \le \frac{1}{2 f_m} \quad\text{or sampling rate}\quad f_s \ge 2 f_m$$
In sample-and-hold operation, a switch and storage
mechanism form a sequence of samples of the
continuous input waveform. The output of the sampling
process is called pulse amplitude modulation (PAM).
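A minimal numpy sketch of the statement above, under assumed illustration values (a 3 kHz component sampled at 8 kHz, so fs ≥ 2fm holds); it shows only the sampling step that produces the PAM-like sample sequence.

```python
import numpy as np

# Assumed example values: f_m = 3 kHz highest component, f_s = 8 kHz >= 2*f_m.
f_m = 3_000.0          # highest spectral component of the analog signal [Hz]
f_s = 8_000.0          # sampling rate [Hz]; must satisfy f_s >= 2*f_m
T_s = 1.0 / f_s        # uniform sampling interval [s]

assert f_s >= 2 * f_m, "sampling theorem violated: aliasing would occur"

t = np.arange(0, 0.01, T_s)          # 10 ms of sample instants, spaced T_s apart
x = np.cos(2 * np.pi * f_m * t)      # band-limited input evaluated at the sample times
# 'x' is the sample sequence (PAM values) that a sample-and-hold circuit would
# produce; by the sampling theorem these values determine the waveform uniquely.
print(len(x), "samples,", f_s / f_m, "samples per signal period")
```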

12 WITS Lab, NSYSU.


Sampling Theorem


$$X_s(f) = X(f) * X_\delta(f) = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} X(f - n f_s)$$

13 WITS Lab, NSYSU.


Spectra for Various Sampling Rates

14 WITS Lab, NSYSU.


Natural Sampling

15 WITS Lab, NSYSU.


Pulse Code Modulation (PCM)

PCM is the name given to the class of baseband


signals obtained from the quantized PAM signals by
encoding each quantized sample into a digital word.

The source information is sampled and quantized to


one of L levels; then each quantized sample is digitally
encoded into an ℓ-bit (ℓ=log2L) codeword.
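A minimal sketch of this quantize-and-encode step under assumed parameters (L = 8 uniform levels over a ±1 range): each sample is mapped to one of L levels and emitted as an ℓ = log2 L = 3-bit PCM codeword.

```python
import numpy as np

# Assumed parameters: full-scale range +/-1, L = 8 uniform levels -> l = 3 bits/sample.
L = 8
n_bits = int(np.log2(L))                         # l = log2(L) bits per quantized sample
x = np.array([0.92, -0.41, 0.07, -0.83])         # example PAM samples (normalized)

step = 2.0 / L                                                     # quantizer step size
idx = np.clip(np.floor((x + 1.0) / step), 0, L - 1).astype(int)    # level index 0..L-1
codewords = [format(i, f"0{n_bits}b") for i in idx]                # l-bit PCM words

print(codewords)            # -> ['111', '010', '100', '000']
```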

16 WITS Lab, NSYSU.


Example of Constructing PCM Sequence

17 WITS Lab, NSYSU.


Uniform and Non-uniform Quantization

18 WITS Lab, NSYSU.


Statistical Distribution of Single-Talker
Speech Amplitudes
50% of the time, speech voltage is less than ¼ RMS.
Only 15% of the time, voltage exceeds RMS.
Typical voice signal dynamic range is 40 dB.

19 WITS Lab, NSYSU.


Problems with Linear Quantization

Fact: Unacceptable S/N for small signals.


Solution:
Increasing quantization levels – price is too high.
Applying nonlinear quantization – achieved by first
distorting the original signal with a logarithmic
compression characteristic and then using a uniform
quantizer.
At the receiver, an inverse compression characteristic,
called expansion, is applied so that the overall
transmission is not distorted. The processing pair is
referred to as companding.

20 WITS Lab, NSYSU.


Implementation of Non-linear Quantizer

21 WITS Lab, NSYSU.


Companding Characteristics
In North America: μ-law compression:
$$y = y_{\max}\,\frac{\log_e\!\left[1+\mu\left(|x|/x_{\max}\right)\right]}{\log_e(1+\mu)}\,\operatorname{sgn} x,
\qquad \operatorname{sgn} x = \begin{cases} +1 & \text{for } x \ge 0 \\ -1 & \text{for } x < 0 \end{cases}$$
In Europe: A-law compression:
$$y = \begin{cases}
y_{\max}\,\dfrac{A\,(|x|/x_{\max})}{1+\log_e A}\,\operatorname{sgn} x & 0 < \dfrac{|x|}{x_{\max}} \le \dfrac{1}{A}\\[2ex]
y_{\max}\,\dfrac{1+\log_e\!\left[A\,(|x|/x_{\max})\right]}{1+\log_e A}\,\operatorname{sgn} x & \dfrac{1}{A} < \dfrac{|x|}{x_{\max}} \le 1
\end{cases}$$

22 WITS Lab, NSYSU.


Compression Characteristics
Standard values are μ = 255 and A = 87.6.
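A minimal sketch of μ-law companding with the standard μ = 255 (signals normalized so that x_max = y_max = 1, an assumption of this example); the expander at the receiver inverts the compressor, so the overall transmission is not distorted.

```python
import numpy as np

MU = 255.0   # standard North American mu-law parameter (see the slide above)

def mu_compress(x):
    """Compress a normalized sample x in [-1, 1] (x_max and y_max taken as 1)."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse characteristic applied at the receiver (expansion)."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

x = np.array([-0.8, -0.05, 0.01, 0.6])
y = mu_compress(x)                     # small amplitudes are boosted before uniform quantization
print(np.allclose(mu_expand(y), x))    # True: the companding pair is distortion-free end to end
```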

23 WITS Lab, NSYSU.


Source Coding
Typical Digital Communications System
[Block diagram repeated from slide 7, with the Source Encoding/Decoding blocks highlighted.]

25 WITS Lab, NSYSU.


Source Coding
Source coding deals with the task of forming efficient
descriptions of information sources.
For discrete sources, the ability to form reduced data
rate descriptions is related to the information content
and the statistical correlation among the source
symbols.
For analog sources, the ability to form reduced data
rate descriptions, subject to a fixed fidelity criterion, is
related to the amplitude distribution and the temporal
correlation of the source waveforms.

26 WITS Lab, NSYSU.


Huffman Coding
The Huffman code is a source code whose average word
length approaches the fundamental limit set by the
entropy of a discrete memoryless source.

The Huffman code is optimum in the sense that no other


uniquely decodable set of code-words has smaller
average code-word length for a given discrete
memoryless source.

27 WITS Lab, NSYSU.


Huffman Encoding Algorithm
1. The source symbols are listed in order of decreasing
probability. The two source symbols of lowest
probability are assigned a 0 and a 1.
2. These two source symbols are regarded as being
combined into a new source symbol with probability
equal to the sum of the two original probabilities. The
probability of the new symbol is placed in the list in
accordance with its value.
3. The procedure is repeated until we are left with a final
list of source statistics of only two for which a 0 and a 1
are assigned.
4. The code for each (original) source symbol is found by
working backward and tracing the sequence of 0s and 1s
assigned to that symbol as well as its successors.
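A minimal sketch of the algorithm above using Python's heapq, run on the five-symbol source of the example on the next slide. The particular code words may differ from the slide (Huffman encoding is not unique), but the average code-word length is the same.

```python
import heapq

def huffman(probabilities):
    """Return a {symbol: codeword} dict built by repeatedly merging the two
    least-probable entries, as in steps 1-4 above."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)      # lowest probability  -> prepend bit '0'
        p1, _, c1 = heapq.heappop(heap)      # next lowest         -> prepend bit '1'
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

probs = {"S0": 0.4, "S1": 0.2, "S2": 0.2, "S3": 0.1, "S4": 0.1}   # example source
code = huffman(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
print(code, "average length =", avg_len)     # average length = 2.2 bits/symbol
```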
28 WITS Lab, NSYSU.
Example of Huffman Coding
Symbol Probability Code Word
S0 0.4 00
S1 0.2 10
S2 0.2 11
S3 0.1 010
S4 0.1 011
Symbol | Stage 1 | Stage 2 | Stage 3 | Stage 4
S0     | 0.4     | 0.4     | 0.4     | 0.6 (0)
S1     | 0.2     | 0.2     | 0.4 (0) | 0.4 (1)
S2     | 0.2     | 0.2 (0) | 0.2 (1) |
S3     | 0.1 (0) | 0.2 (1) |         |
S4     | 0.1 (1) |         |         |
At each stage the two lowest probabilities are assigned a 0 and a 1 and merged; the code word for each symbol is read off by tracing the assigned bits backward.
29 WITS Lab, NSYSU.
Properties of Huffman Code
Huffman encoding process is not unique.
Code words from different Huffman encoding processes
can have different lengths. However, the average
code-word length is the same.
When a combined symbol is moved as high as
possible, the resulting Huffman code has a
significantly smaller variance than when it is moved
as low as possible.
Huffman code is a prefix code.
A prefix code is defined as a code in which no code-word
is the prefix of any other code-word.
30 WITS Lab, NSYSU.
Bit Compression Technologies for Voice

Differential PCM (DPCM)


Adaptive DPCM
Delta Modulation (DM)
Adaptive DM (ADM)

.
.
.
Speech Encoding

31 WITS Lab, NSYSU.


Differential PCM (DPCM)

32 WITS Lab, NSYSU.


Delta Modulation (DM)
Delta modulation is a one-bit DPCM.
Advantage: bit compression.
Disadvantage: slope overload.

33 WITS Lab, NSYSU.


Speech Coding Objective
Reduce the number of bits needed to be transmitted,
therefore lowering the bandwidth required.

34 WITS Lab, NSYSU.


Speech Properties
Voiced Sound
Arises in generation of vowels and latter portion of some consonants.
Displays long-term repetitive pattern corresponding to the duration of a
pitch interval
Pulse-like waveform.

Unvoiced Sound
Arises in pronunciation of certain consonants such as “s”, “f”, “p”, “j”,
“x”, …, etc.
Noise-like waveform.

35 WITS Lab, NSYSU.


Categories of Speech Encoding
Waveform Encoding
Treats voice as an analog signal and does not exploit the properties of speech.

Source Model Coding or Vocoding


Exploits the properties of speech to preserve the word information.
Hybrid or parametric methods
Combines waveform and vocoding

36 WITS Lab, NSYSU.


Linear Predictive Coder (LPC)

37 WITS Lab, NSYSU.


Multi-Pulse Linear Predictive Coder
(MP-LPC)

38 WITS Lab, NSYSU.


Regular Pulse Excited Long Term Prediction
Coder (RPE-LPT)

39 WITS Lab, NSYSU.


Code-Excited Linear Predictive (CELP)

40 WITS Lab, NSYSU.


Speech Coder Complexity

41 WITS Lab, NSYSU.


Speech Processing for GSM

Composition of the 13 kbps signal:


36 bits for filter parameters every 20 ms.
9 bits for LTP every 5 ms.
47 bits for RPE every 5 ms.
Thus, in a 20 ms interval (a 2080-bit block, or 160 samples),
we need a total of
36 + 9×(20/5) + 47×(20/5) = 36 + 36 + 188 = 260 bits.
Data Rate = 260 / (20 ms) = 13 kbps.
42 WITS Lab, NSYSU.
Speech Processing for IS-54

Composition of the 7.95 kbps signal:


43 bits for filter parameters every 20 ms.
7 bits for LTP every 5 ms.
88 bits for codebook every 20 ms.
Thus, in a 20 ms interval (a 2080-bit block, or 160 samples), we
need a total of:
43 + 7×(20/5) + 88 = 43 + 28 + 88 = 159 bits.
Data Rate = 159 / (20 ms) = 7.95 kbps.
43 WITS Lab, NSYSU.
Channel Coding
Typical Digital Communications System
[Block diagram repeated from slide 7, with the Channel Encoding/Decoding blocks highlighted.]

45 WITS Lab, NSYSU.


Channel Coding
Error detecting coding: Capability of detecting errors so
that re-transmission or dropping can be done.
Cyclic Redundancy Code (CRC)

Error Correcting Coding: Capability of detecting and


correcting errors.
Block Codes: BCH codes, RS codes, … etc.
Convolutional codes.
Turbo codes.

46 WITS Lab, NSYSU.


Linear Block Codes
Encoder transforms block of k successive binary digits
into longer block of n (n>k) binary digits.
Called an (n,k) code.
Redundancy = n − k; Code Rate = k/n.
There are 2^k possible messages.
There are 2^k possible code words corresponding to the
messages.
A code word (or code vector) is an n-tuple from the space
Vn of all n-tuples.
Storing the 2^k code vectors in a dictionary is prohibitive
for large k.
47 WITS Lab, NSYSU.
Vector Spaces
The set of all binary n-tuples, Vn, is called a vector
space over GF (2).
GF: Galois Field.
Two operations are defined:
Addition (componentwise, modulo 2): V + U = (V1 + U1, V2 + U2, ..., Vn + Un)
Scalar Multiplication: aV = (aV1, aV2, ..., aVn)
Example: Vector Space V4
0000 0001 0010 0011 0100 0101 0110 0111
1000 1001 1010 1011 1100 1101 1110 1111
(0101)+(1110)=(0+1, 1+1, 0+1, 1+0)=(1, 0, 1, 1)
1·(1010)=(1·1, 1·0, 1·1, 1·0)=(1, 0, 1, 0)
48 WITS Lab, NSYSU.
Subspaces

A subset S of Vn is a subspace if
The all-zero vector is in S
The sum of any two vectors in S is also in S.

Example of S: V0 = 0000, V1 = 0101, V2 = 1010, V3 = 1111

49 WITS Lab, NSYSU.


Reducing Encoding Complexity
Key feature of linear block codes: the 2^k code vectors
form a k-dimensional subspace of all n-tuples.
Example: k = 3, 2^k = 8, n = 6, a (6, 3) code.
Message | Code Word
000     | 000000
100     | 110100
010     | 011010
110     | 101110
001     | 101001
101     | 011101
011     | 110011
111     | 000111
These eight code words form a 3-dimensional subspace of the vector space of all 6-tuples.

50 WITS Lab, NSYSU.


Reducing Encoding Complexity
It is possible to find a set of k linearly independent n-tuples
v1, v2, ..., vk such that each n-tuple of the subspace
is a linear combination of v1, v2, ..., vk.

Code word: u = m1 v1 + m2 v2 + ... + mk vk,
where mi = 0 or 1, i = 1, ..., k.

51 WITS Lab, NSYSU.


Generator Matrix
$$G = \begin{bmatrix}\mathbf{v}_1\\ \mathbf{v}_2\\ \vdots\\ \mathbf{v}_k\end{bmatrix}
= \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1n}\\ v_{21} & v_{22} & \cdots & v_{2n}\\ \vdots & & & \vdots\\ v_{k1} & v_{k2} & \cdots & v_{kn}\end{bmatrix}
\quad (k \times n \text{ generator matrix})$$

The 2^k code vectors can be described by a set of k linearly
independent code vectors.
Let m = [m1, m2, ..., mk] be a message.
The code word corresponding to message m is obtained by:
$$\mathbf{u} = \mathbf{m}G = [m_1\; m_2\; \cdots\; m_k]\begin{bmatrix}\mathbf{v}_1\\ \mathbf{v}_2\\ \vdots\\ \mathbf{v}_k\end{bmatrix}$$
52 WITS Lab, NSYSU.
Generator Matrix
Storage is greatly reduced.
The encoder needs to store only the k rows of G instead of
the 2^k code vectors of the code.
For example, let
$$G = \begin{bmatrix}\mathbf{v}_1\\ \mathbf{v}_2\\ \mathbf{v}_3\end{bmatrix}
= \begin{bmatrix}1&1&0&1&0&0\\ 0&1&1&0&1&0\\ 1&0&1&0&0&1\end{bmatrix},
\qquad \mathbf{m} = [1\;1\;0]$$
Then
$$\mathbf{u} = \mathbf{m}G = 1\cdot\mathbf{v}_1 + 1\cdot\mathbf{v}_2 + 0\cdot\mathbf{v}_3
= [110100] + [011010] = [1\;0\;1\;1\;1\;0],$$
the code vector for m = [1 1 0].
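A minimal numpy sketch of the (6,3) example above: the encoder stores only the k rows of G and forms u = mG with modulo-2 arithmetic.

```python
import numpy as np

# Rows of the (6,3) generator matrix from the example above.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def encode(m, G):
    """Code word u = mG over GF(2): matrix product reduced modulo 2."""
    return (np.asarray(m) @ G) % 2

print(encode([1, 1, 0], G))        # -> [1 0 1 1 1 0], matching the slide
print([encode(list(map(int, f"{i:03b}")), G) for i in range(8)])  # all 2^k code words
```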
53 WITS Lab, NSYSU.
Systematic Code

54 WITS Lab, NSYSU.


Parity Check Matrix

For each generator matrix G, there exists an (n−k)×n parity check matrix H
such that the rows of G are orthogonal to the rows of H:
$$H = \begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\\ \vdots\\ \mathbf{h}_{n-k}\end{bmatrix}
= \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n}\\ h_{21} & h_{22} & \cdots & h_{2n}\\ \vdots & & & \vdots\\ h_{(n-k)1} & h_{(n-k)2} & \cdots & h_{(n-k)n}\end{bmatrix}$$
For u = (u1, u2, ..., un):
$$\mathbf{u}H^T = \left[\,u_1 h_{i1} + u_2 h_{i2} + \cdots + u_n h_{in}\,\right] = \mathbf{0},
\qquad i = 1, 2, \ldots, n-k$$
u is a code word generated by matrix G if and only if uH^T = 0.

55 WITS Lab, NSYSU.


Parity Check Matrix and Syndrome
In a systematic code with G = [P(k×r) | I(k×k)] (r = n − k), the parity check matrix is
H = [I(r×r) | P^T(r×k)].
Received vector: r = u + e, where u is the transmitted code vector and e is the error vector.
The syndrome of r is used for error detection and correction:
$$\mathbf{s} = \mathbf{r}H^T, \qquad
\mathbf{s} = \mathbf{0} \text{ if } \mathbf{r} \text{ is a code vector}, \quad
\mathbf{s} \ne \mathbf{0} \text{ otherwise}$$
56 WITS Lab, NSYSU.
Example of Syndrome Test
$$G = \begin{bmatrix}1&1&0&1&0&0\\ 0&1&1&0&1&0\\ 1&0&1&0&0&1\end{bmatrix} = [\,P \mid I_k\,],
\qquad H = [\,I_{n-k} \mid P^T\,] = \begin{bmatrix}1&0&0&1&0&1\\ 0&1&0&1&1&0\\ 0&0&1&0&1&1\end{bmatrix}$$
The 6-tuple 1 0 1 1 1 0 is the code vector corresponding to the message 1 1 0:
$$\mathbf{s} = \mathbf{u}H^T = [1\;0\;1\;1\;1\;0]
\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\\ 1&1&0\\ 0&1&1\\ 1&0&1\end{bmatrix} = [0\;0\;0]$$
Compute the syndrome for the non-code-vector 0 0 1 1 1 0:
$$\mathbf{s} = [0\;0\;1\;1\;1\;0]\,H^T = [1\;0\;0]$$
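A minimal numpy sketch of the syndrome test above, reusing G from the earlier example and building H = [I | P^T] for the systematic form.

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)        # systematic: G = [P | I_k]
k, n = G.shape
P = G[:, :n - k]                                      # parity part, k x (n-k)
H = np.hstack([np.eye(n - k, dtype=int), P.T])        # H = [I_(n-k) | P^T]

def syndrome(r):
    """s = r H^T over GF(2); zero iff r is a code vector."""
    return (np.asarray(r) @ H.T) % 2

print(syndrome([1, 0, 1, 1, 1, 0]))   # [0 0 0] -> valid code vector
print(syndrome([0, 0, 1, 1, 1, 0]))   # [1 0 0] -> error detected
```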
57 WITS Lab, NSYSU.
Weight and Distance of Binary Vectors

Hamming Weight of a Vector:


w(v) = Number of non-zero bits in the vector.
Hamming Distance between 2 vectors:
d(u,v) = Number of bits in which they differ.
For example: u=10010110001
v=11001010101
d(u,v) = 5.
d(u,v) =w(u+v)
The Hamming Distance between 2 vectors is equal to the
Hamming Weight of their vector sum.

58 WITS Lab, NSYSU.


Minimum Distance of a Linear Code
The set of all code vectors of a linear code form a
subspace of the n-tuple space.
If u and v are 2 code vectors, then u+v must also be a
code vector.
Therefore, the distance d(u,v) between two code vectors
equals the weight of a third code vector:
d(u,v) = w(u+v) = w(z), where z = u + v is itself a code vector.
Thus, the minimum distance of a linear code equals
the minimum weight of its nonzero code vectors.
A code with minimum distance dmin can be shown to
correct ⌊(dmin−1)/2⌋ erroneous bits and detect (dmin−1)
erroneous bits.
59 WITS Lab, NSYSU.
Example of Minimum Distance

dmin=3
60 WITS Lab, NSYSU.
Example of Error Correction and Detection
Capability

$$d_{\min}(\mathbf{u},\mathbf{v}) = 7$$
$$t_{\max} = \left\lfloor \frac{d_{\min}-1}{2} \right\rfloor = 3 \quad \text{(error correcting strength)}$$
$$m_{\max} = d_{\min} - 1 = 6 \quad \text{(error detecting strength)}$$


61 WITS Lab, NSYSU.
Convolutional Code Structure

[Encoder diagram: K stages of k-bit shift registers; the k input bits are shifted in at each step, and n modulo-2 adders form the n output bits.]

62 WITS Lab, NSYSU.


Convoltuional Code
Convolutional codes
k = number of bits shifted into the encoder at one time
k=1 is usually used!!
n = number of encoder output bits corresponding to the k
information bits
r = k/n = code rate
K = constraint length, encoder memory
Each encoded bit is a function of the present input bits
and their past ones.

63 WITS Lab, NSYSU.


Generator Sequence

[Shift-register encoder: input u, registers r0 r1 r2, output v.]
g0(1) = 1, g1(1) = 0, g2(1) = 1, g3(1) = 1
Generator Sequence: g(1) = (1 0 1 1)

[Shift-register encoder: input u, registers r0 r1 r2 r3, output v.]
g0(2) = 1, g1(2) = 1, g2(2) = 1, g3(2) = 0, g4(2) = 1
Generator Sequence: g(2) = (1 1 1 0 1)
64 WITS Lab, NSYSU.
Convolutional Codes
An Example – (rate=1/2 with K=2)
G1(x) = 1 + x^2
G2(x) = 1 + x + x^2
Encoder state (x1 x2) = the two most recent input bits.

Input | Present state | Next state | Output
0     | 00            | 00         | 00
1     | 00            | 10         | 11
0     | 01            | 00         | 11
1     | 01            | 10         | 00
0     | 10            | 01         | 01
1     | 10            | 11         | 10
0     | 11            | 01         | 10
1     | 11            | 11         | 01

[State diagram: the four states 00, 01, 10, 11 with transitions labeled input(output): 0(00) and 1(11) from state 00; 0(11) and 1(00) from state 01; 0(01) and 1(10) from state 10; 0(10) and 1(01) from state 11.]
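A minimal sketch of this rate-1/2, memory-2 encoder; run on the input 1 0 1 1 1 0 0 it reproduces the output sequence 11 01 00 10 01 10 11 used on the following slides.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with G1(x) = 1 + x^2, G2(x) = 1 + x + x^2.
    State (x1, x2) holds the two most recent input bits."""
    x1 = x2 = 0
    out = []
    for u in bits:
        v1 = u ^ x2              # G1: current bit + bit two steps back
        v2 = u ^ x1 ^ x2         # G2: current bit + both previous bits
        out.append((v1, v2))
        x1, x2 = u, x1           # shift the register
    return out

msg = [1, 0, 1, 1, 1, 0, 0]      # trailing zeros are the tail bits that terminate the trellis
print(conv_encode(msg))          # [(1,1), (0,1), (0,0), (1,0), (0,1), (1,0), (1,1)]
```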
65 WITS Lab, NSYSU.
Trellis Diagram Representation
[Trellis diagram: the four states 00, 01, 10, 11 repeated over successive symbol intervals; branches are labeled input(output), e.g. 0(00) and 1(11) from state 00, 0(11) and 1(00) from state 01, 0(01) and 1(10) from state 10, 0(10) and 1(01) from state 11.]
Trellis termination: K tail bits with value 0 are usually added to the end of the code.
66 WITS Lab, NSYSU.
Encoding Process
Input: 1 0 1 1 1 0 0
Output: 11 01 00 10 01 10 11
[Trellis diagram with the path corresponding to the input sequence highlighted.]
67 WITS Lab, NSYSU.
Viterbi Decoding Algorithm
Maximum Likelihood (ML) decoding rule: given the received
sequence r, select the detected code sequence d that minimizes
the distance d(d, r).

Viterbi Decoding Algorithm


An efficient search algorithm
Performing ML decoding rule.
Reducing the computational complexity.

68 WITS Lab, NSYSU.


Viterbi Decoding Algorithm
Basic concept
Generate the code trellis at the decoder
The decoder penetrates through the code trellis level by level in
search for the transmitted code sequence
At each level of the trellis, the decoder computes and
compares the metrics of all the partial paths entering a node
The decoder stores the partial path with the larger metric and
eliminates all the other partial paths. The stored partial path is
called the survivor.
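A minimal hard-decision sketch of this procedure for the rate-1/2 example code, using accumulated Hamming distance as the metric to be minimized; run on the received sequence 11 11 00 10 01 11 11 from the following slides, it recovers the message 1 0 1 1 1 0 0 (i.e. the transmitted path 11 01 00 10 01 10 11).

```python
def step(state, u):
    """One encoder transition: state (x1, x2) -> next state and output (v1, v2)."""
    x1, x2 = state
    return (u, x1), (u ^ x2, u ^ x1 ^ x2)

def viterbi(received):
    """Hard-decision Viterbi decoding of the rate-1/2 example code, minimizing
    the accumulated Hamming distance between trellis paths and 'received'."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    paths = {s: [] for s in states}
    for r in received:                                  # one received (v1, v2) pair per level
        new_metric = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for u in (0, 1):                            # extend each survivor by input 0 and 1
                ns, out = step(s, u)
                m = metric[s] + (out[0] ^ r[0]) + (out[1] ^ r[1])
                if m < new_metric[ns]:                  # keep the better partial path (survivor)
                    new_metric[ns], new_paths[ns] = m, paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)]                                # tail bits drive the path back to state 00

rx = [(1, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1), (1, 1)]   # received sequence from the slides
print(viterbi(rx))       # -> [1, 0, 1, 1, 1, 0, 0]: both channel errors are corrected
```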

69 WITS Lab, NSYSU.


Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: accumulated path metrics after the first received symbol pair (metric 2 at state 00, 0 at state 10).]
70 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the second received symbol pair.]

71 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the third received symbol pair.]

72 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the fourth received symbol pair.]

73 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the fifth received symbol pair.]

74 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the sixth received symbol pair.]

75 WITS Lab, NSYSU.
Viterbi Decoding Process
Output: 11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: survivor paths and accumulated path metrics after the seventh (final) received symbol pair.]

76 WITS Lab, NSYSU.
Viterbi Decoding Process
Decision:11 01 00 10 01 10 11
Receive: 11 11 00 10 01 11 11
[Trellis diagram: the surviving path with the smallest final metric is traced back to give the decision sequence.]

77 WITS Lab, NSYSU.
Channel Coding in GSM

78 WITS Lab, NSYSU.


Channel Coding in IS-54/136

79 WITS Lab, NSYSU.


Turbo Codes Basic Concepts
Turbo coding uses parallel concatenation of two
recursive systematic convolutional codes joined through
an interleaver.
Information bits are encoded block by block.
Turbo codes use iterative decoding techniques.
A soft-output decoder is necessary for iterative decoding.
Turbo codes can approach the Shannon limit.

80 WITS Lab, NSYSU.


Turbo Codes Encoder – An Example
[Encoder diagram: the systematic bit X(t) is transmitted directly; the first recursive systematic convolutional encoder produces parity Y(t) from X(t), and the second produces parity Y'(t) from the interleaved input X'(t).]
When the switch is placed in the low position, the tail bits are fed back
and the trellis is terminated.
81 WITS Lab, NSYSU.
Turbo Codes Encoding Example
A systematic convolutional encoder with memory 2
The dotted line is for termination code
Test sequence: 1011

[Encoder diagram: the input sequence (written as 1101 entering the register) drives two delay elements D D; X0 is the systematic output and X1 the parity output.]

82 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=1

X1=1

[Encoder register contents and trellis transition for this encoding step.]

83 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=0

X1=1

[Encoder register contents and trellis transition for this encoding step.]

84 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=1

X1=0

[Encoder register contents and trellis transition for this encoding step.]

85 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=1

X1=0

[Encoder register contents and trellis transition for this encoding step.]

86 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=0

X1=1

[Encoder register contents and trellis transition for this encoding step.]

87 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=1

X1=1

[Encoder register contents and trellis transition for this encoding step.]

88 WITS Lab, NSYSU.


Turbo Codes Encoding Example

X0=0

X1=0

[Encoder register contents and trellis transition for this encoding step.]

89 WITS Lab, NSYSU.


Turbo Codes Encoding Example

[Encoder diagram: the input sequence (1101 as written into the register) drives the first constituent encoder, producing X0 (systematic) and X1 (parity); the interleaved systematic bits (X0 reordered, written as 1011) drive the second constituent encoder, producing X2.]

Output sequence: X0, X1, X2, X0, X1, X2, X0, X1, X2,...

90 WITS Lab, NSYSU.


Turbo Codes Encoding Example
The second encoder input is the interleaved
data

[Trellis diagram: the second constituent encoder processes the interleaved sequence.]

91 WITS Lab, NSYSU.


CRC in WCDMA
gCRC24(D) = D^24 + D^23 + D^6 + D^5 + D + 1
gCRC16(D) = D^16 + D^12 + D^5 + 1
gCRC12(D) = D^12 + D^11 + D^3 + D^2 + D + 1
gCRC8(D)  = D^8 + D^7 + D^4 + D^3 + D + 1
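A minimal bitwise sketch of CRC generation with gCRC16(D) = D^16 + D^12 + D^5 + 1: the message polynomial, shifted by 16, is divided by the generator and the remainder is appended as parity, making the whole block divisible by the generator. The message bits below are an assumed example, not taken from the standard.

```python
def crc_remainder(bits, poly_taps, crc_len):
    """Parity bits: remainder of bits * D^crc_len divided by the generator
    polynomial (given by its tap offsets from the leading, implicit D^crc_len term)."""
    reg = list(bits) + [0] * crc_len            # message followed by crc_len zeros
    for i in range(len(bits)):
        if reg[i]:                              # leading 1 -> subtract (XOR) the generator
            for t in poly_taps:
                reg[i + t] ^= 1
    return reg[-crc_len:]

# gCRC16(D) = D^16 + D^12 + D^5 + 1 -> taps at offsets 16-12=4, 16-5=11, 16-0=16.
GCRC16_TAPS = (4, 11, 16)

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]          # example input block (assumed)
parity = crc_remainder(message, GCRC16_TAPS, 16)
print(parity)
# Sanity check: message followed by its parity leaves a zero remainder.
print(crc_remainder(message + parity, GCRC16_TAPS, 16) == [0] * 16)   # True
```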

92 WITS Lab, NSYSU.


Channel Coding Adopted in WCDMA

Type of TrCH           | Coding scheme        | Coding rate
BCH, PCH, RACH         | Convolutional coding | 1/2
CPCH, DCH, DSCH, FACH  | Convolutional coding | 1/3, 1/2
CPCH, DCH, DSCH, FACH  | Turbo coding         | 1/3
CPCH, DCH, DSCH, FACH  | No coding            | -

93 WITS Lab, NSYSU.


Convolutional Coding in WCDMA

(a) Rate 1/2 convolutional coder: input plus 8 delay elements (constraint length 9), generator polynomials G0 = 561 (octal) and G1 = 753 (octal).
(b) Rate 1/3 convolutional coder: input plus 8 delay elements (constraint length 9), generator polynomials G0 = 557 (octal), G1 = 663 (octal), G2 = 711 (octal).

94 WITS Lab, NSYSU.


Turbo Coder in WCDMA
[Encoder diagram: parallel concatenation of two 8-state recursive systematic constituent encoders (three delay elements each). The first encoder receives the input x_k directly and produces parity z_k; the second receives the output x'_k of the turbo code internal interleaver and produces parity z'_k. The systematic bits x_k are transmitted together with the two parity streams.]

95 WITS Lab, NSYSU.


Interleaving
Typical Digital Communications System
[Block diagram repeated from slide 7, with the Interleaving/Deinterleaving blocks highlighted.]

97 WITS Lab, NSYSU.


Bursty Error in Fading Channel

98 WITS Lab, NSYSU.


Interleaving Mechanism (1/2)

[Diagram: the bit interleaver maps the input stream x to the output stream y using j rows of n-bit shift registers, written under the WRITE clock and read under the READ clock.]

Bit Stream before entering bit interleaver:


x=(a11 a12 … a1n a21 a22 … a2n … aj1 aj2 … ajn)

99 WITS Lab, NSYSU.


Interleaving Mechanism (2/2)
Conceptually, the WRITE clock places the bit stream x into the array
by rows while the READ clock takes the bit stream y out by columns:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & & & \vdots\\ a_{j1} & a_{j2} & \cdots & a_{jn} \end{bmatrix}$$
Bit stream at the output of the bit interleaver:
$$y = (a_{11}\, a_{21}\, \ldots\, a_{j1}\;\; a_{12}\, a_{22}\, \ldots\, a_{j2}\;\; \ldots\;\; a_{1n}\, a_{2n}\, \ldots\, a_{jn})$$
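A minimal sketch of this j × n block interleaver (write by rows, read by columns) and the matching deinterleaver; a burst of errors in y is spread across separate rows of x after deinterleaving.

```python
def interleave(x, j, n):
    """Write the length j*n stream into j rows of n bits, then read by columns."""
    assert len(x) == j * n
    rows = [x[r * n:(r + 1) * n] for r in range(j)]
    return [rows[r][c] for c in range(n) for r in range(j)]

def deinterleave(y, j, n):
    """Inverse operation at the receiver (write by columns, read by rows)."""
    return interleave(y, n, j)        # swapping the dimensions inverts the permutation

x = list(range(12))                   # stands in for bits a11..a3n with j = 3 rows, n = 4 columns
y = interleave(x, 3, 4)
print(y)                              # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(y, 3, 4) == x)     # True
```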

100 WITS Lab, NSYSU.


Burst Error Protection with Interleaver

101 WITS Lab, NSYSU.


Modulation
Typical Digital Communications System
[Block diagram repeated from slide 7, with the Modulation/Demodulation blocks highlighted.]

103 WITS Lab, NSYSU.


Modulation
Digital Modulation: digital symbols are transformed into
waveforms that are compatible with the characteristics of the
channel.
In baseband modulation, these waveforms are pulses.
In bandpass modulation, the desired information signal
modulates a sinusoid called a carrier. For radio transmission,
the carrier is converted into an electromagnetic (EM) wave.
Why modulation?
Antenna size should be comparable with the wavelength, so
baseband transmission is not possible.
Modulation may be used to separate the different signals
using a single channel.
104 WITS Lab, NSYSU.
PCM Waveform Representations

105 WITS Lab, NSYSU.


PCM Waveform Representations
PCM waveforms are also called line codes.
Digital baseband signals often use line codes to provide
particular spectral characteristics of a pulse train:
NRZ-L, NRZ-M, NRZ-S, Unipolar-RZ, Polar-RZ, Bi-φ-L, Bi-φ-M, Bi-φ-S,
Dicode-NRZ, Dicode-RZ, Delay Mode, 4B3T, Multi-level, … etc.
106 WITS Lab, NSYSU.
PCM Waveform : NRZ-L

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

NRZ Level (or NRZ Change)


“One” is represented by one level.
“Zero” is represented by the other level.
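A minimal sketch mapping the example bit sequence to NRZ-L levels (and, for comparison, to the Bi-φ-L/Manchester half-bit pairs described a few slides later); the ±E amplitude is assumed to be normalized to 1.

```python
def nrz_l(bits, E=1.0):
    """NRZ-L: 'one' -> +E for the whole bit period, 'zero' -> -E."""
    return [E if b else -E for b in bits]

def biphase_l(bits, E=1.0):
    """Bi-phi-L (Manchester): 'one' -> (+E, -E) half-bit pair, 'zero' -> (-E, +E)."""
    out = []
    for b in bits:
        out += [E, -E] if b else [-E, E]
    return out

bits = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1]     # example sequence from these slides
print(nrz_l(bits))
print(biphase_l(bits))
```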

107 WITS Lab, NSYSU.


PCM Waveform : NRZ-M

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

NRZ Mark (Differential Encoding)


“One” is represented by a change in level.
“Zero” is represented by a no change in level.

108 WITS Lab, NSYSU.


PCM Waveform : NRZ-S

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

NRZ Space (Differential Encoding)


“One” is represented by a no change in level.
“Zero” is represented by a change in level.

109 WITS Lab, NSYSU.


PCM Waveform : Unipolar-RZ

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Unipolar - RZ
“One” is represented by a half-bit width pulse.
“Zero” is represented by a no pulse condition.

110 WITS Lab, NSYSU.


PCM Waveform : Polar-RZ

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Polar - RZ
“One” and “Zero” are represented by opposite
level polar pulses that are one half-bit in width.

111 WITS Lab, NSYSU.


PCM Waveform : Bi-φ-L

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Bi-φ-L (Biphase Level, or Split-Phase Manchester II + 180°)

"One" is represented by a 10 (a half-bit at one level followed by a half-bit at the other).
"Zero" is represented by a 01.
112 WITS Lab, NSYSU.
PCM Waveform : Bi-φ-M

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Bi-φ-M ( Biphase Mark or Manchester 1)


A transition occurs at the beginning of every bit period.
“One” is represented by a second transition one half bit
period later.
“Zero” is represented by no second transition.
113 WITS Lab, NSYSU.
PCM Waveform : Bi-φ-S

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Bi-φ-S ( Biphase Space)


A transition occurs at the beginning of every bit period.
“One” is represented by no second transition.
“Zero” is represented by a second transition one-half bit
period later.
114 WITS Lab, NSYSU.
PCM Waveform : Dicode - NRZ

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Dicode Non-Return-to-Zero
A “One” to “Zero” or “Zero” to “One” changes polarity.
Otherwise, a “Zero” is sent.

115 WITS Lab, NSYSU.


PCM Waveform : Dicode - RZ

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Dicode Return-to-Zero
A “One” to “Zero” or “Zero” to “One” transition produces
a half duration polarity change.
Otherwise, a “Zero” is sent.

116 WITS Lab, NSYSU.


PCM Waveform : Delay Mode

[Waveform for the bit sequence 1 0 1 1 0 0 0 1 1 0 1, levels between +E and -E.]

Delay Mode
A “One” is represented by a transition at the midpoint of
the bit interval.
A “Zero” is represented by a no transition unless it is
followed by another zero. In this case, a transition is
placed at the end of bit period of the first zero.
117 WITS Lab, NSYSU.
PCM Waveform : 4B3T

[4B3T coding table: binary 4-bit words mapped to ternary 3-symbol words in three columns.]

118 WITS Lab, NSYSU.


PCM Waveform : 4B3T
Ternary words in the middle column are balanced in
their DC content.
Code words from the first and third columns are selected
alternately to maintain DC balance.
If more positive pulses than negative pulses have been
transmitted, column 1 is selected.
Notice that the all-zeros code word is not used.

119 WITS Lab, NSYSU.


PCM Waveform : Multilevel Transmission

Multilevel transmission with 3 bits per signal interval.

120 WITS Lab, NSYSU.


Criteria for Selecting PCM Waveform

DC component: eliminating the dc energy from the


signal’s power spectrum.
Self-Clocking: Symbol or bit synchronization is
required for any digital communication system.
Error detection: some schemes provide error detection
without introducing additional error-detection bits.
Bandwidth compression: some schemes increase
bandwidth utilization by allowing a reduction in
required bandwidth for a given data rate.
Noise immunity.
Cost and complexity.

121 WITS Lab, NSYSU.


Spectral Densities of Various PCM Waveforms

122 WITS Lab, NSYSU.


Linear Modulation Techniques
Digital modulation techniques may be broadly classified as linear
and nonlinear.
In linear modulation techniques, the amplitude of the transmitted
signal, s(t), varies linearly with the modulating digital signal, m(t).
Linear modulation techniques are bandwidth efficient, though
they must be transmitted using linear RF amplifiers which have
poor power efficiency.
Using power efficient nonlinear amplifiers leads to the
regeneration of filtered sidelobes which can cause severe adjacent
channel interference, and results in the loss of all the spectral
efficiency gained by linear modulation.
Clever ways have been developed to get around these difficulties:
QPSK, OQPSK, π/4-QPSK.
123 WITS Lab, NSYSU.
Digital Modulations

Basic digital modulated signal:

v(t) = A(t) cos (ωt + θ)

Where A(t) = Amplitude; ω = Frequency; θ = Phase;


124 WITS Lab, NSYSU.
Basic Digital Modulations

125 WITS Lab, NSYSU.


Extended Modulated Signals – M-FSK
Example: 16-FSK
Every 4 bits is encoded as: A ⋅ cos(ω j t ) j = 1,2,…,16

Gray Coding.

126 WITS Lab, NSYSU.


Extended Modulated Signals – M-PSK
Example: 16-PSK
Every 4 bits is encoded as: A ⋅ sin(ω t + θ j ) j = 1, 2,… ,16
Gray Coding.

Dotted lines are decision boundaries.


127 WITS Lab, NSYSU.
Extended Modulated Signals – 16-QAM

Every 4 bits is represented by one point in the signal constellation.


Every point has its unique “amplitude” and “phase”.

128 WITS Lab, NSYSU.


Binary Phase Shift Keying (BPSK)
In BPSK, the phase of a constant amplitude carrier signal is
switched between two values according to the two possible
signals m1 and m2 corresponding to binary 1 and 0. Normally,
the two phases are separated by 180o.
$$s_{\text{BPSK}}(t) = \sqrt{\frac{2E_b}{T_b}}\, m(t) \cos(2\pi f_c t + \theta_c), \qquad 0 \le t \le T_b$$
$$= \operatorname{Re}\{ g_{\text{BPSK}}(t) \exp(j 2\pi f_c t) \}$$
$$g_{\text{BPSK}}(t) = \sqrt{\frac{2E_b}{T_b}}\, m(t)\, e^{j\theta_c}
\;\Rightarrow\; P_{g_{\text{BPSK}}}(f) = 2E_b \left( \frac{\sin \pi f T_b}{\pi f T_b} \right)^2$$
$$P_{\text{BPSK}}(f) = \frac{E_b}{2} \left[ \left( \frac{\sin \pi (f - f_c) T_b}{\pi (f - f_c) T_b} \right)^2 + \left( \frac{\sin \pi (-f - f_c) T_b}{\pi (-f - f_c) T_b} \right)^2 \right]$$
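A minimal numpy sketch of BPSK modulation under assumed illustration parameters (fc = 4/Tb, unit energy per bit, θc = 0): each bit maps to m(t) = ±1 multiplying the carrier.

```python
import numpy as np

# Assumed illustration parameters (not taken from the slides):
Tb = 1e-3                 # bit duration [s]
fc = 4 / Tb               # carrier frequency: an integer number of cycles per bit
Eb = 1.0                  # energy per bit
fs = 40 / Tb              # simulation sampling rate
t = np.arange(0, Tb, 1 / fs)

def bpsk_modulate(bits):
    """Map bits {0,1} to m(t) = {-1,+1} and multiply by the carrier."""
    waveform = []
    for b in bits:
        m = 1.0 if b else -1.0
        waveform.append(np.sqrt(2 * Eb / Tb) * m * np.cos(2 * np.pi * fc * t))
    return np.concatenate(waveform)

s = bpsk_modulate([1, 0, 1, 1, 0])
print(s.shape)            # 5 bits x 40 samples per bit = (200,)
```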
129 WITS Lab, NSYSU.
Power Spectral Density (PSD) of a BPSK
Signal.

130 WITS Lab, NSYSU.


BPSK Receiver
BPSK uses coherent or synchronous demodulation,
which requires that information about the phase and
frequency of the carrier be available at the receiver.
If a low level pilot carrier signal is transmitted along
with the BPSK signal, then the carrier phase and
frequency may be recovered at the receiver using a
phase locked loop (PLL).
If no pilot carrier is transmitted, a Costas loop or
squaring loop may be used to synthesize the carrier
phase and frequency from the received BPSK signal.

131 WITS Lab, NSYSU.


BPSK Receiver with Carrier Recovery
Circuits

132 WITS Lab, NSYSU.


Operations of BPSK Receiver with Carrier
Recovery Circuits
The received signal is squared to generate a DC signal and an
amplitude varying sinusoid at twice the carrier frequency.
The DC signal is filtered out using a bandpass filter with center
frequency tuned to 2fc.
A frequency divider is used to recreate the waveform
cos(2πfct+θ).
The output of the multiplier is applied to an integrate and dump
circuit which forms the low pass filter segment of a BPSK
detector.
If the transmitter and receiver pulse shapes are matched, then the
detection will be optimum.
A bit synchronizer is used to facilitate sampling of the integrator
output precisely at the end of each bit period.

133 WITS Lab, NSYSU.


Differential Phase Shift Keying (DPSK)
Differential PSK is a noncoherent form of phase shift keying
which avoids the need for a coherent reference signal at the
receiver.

$$d_k = m_k \oplus d_{k-1}$$
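A minimal sketch of the differential encoding rule above and the matching receiver rule (comparing adjacent encoded bits), assuming an initial reference bit d0 = 1.

```python
def dpsk_encode(m, d0=1):
    """Differential encoding d_k = m_k XOR d_(k-1), with an assumed reference bit d0."""
    d = [d0]
    for mk in m:
        d.append(mk ^ d[-1])
    return d                      # d[0] is the reference bit, d[1:] carry the data

def dpsk_decode(d):
    """Recover m_k by comparing adjacent encoded bits: m_k = d_k XOR d_(k-1)."""
    return [d[k] ^ d[k - 1] for k in range(1, len(d))]

m = [1, 0, 0, 1, 1, 0]
d = dpsk_encode(m)
print(d)                          # [1, 0, 0, 0, 1, 0, 0]
print(dpsk_decode(d) == m)        # True: no coherent carrier reference is needed
```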

134 WITS Lab, NSYSU.


Block Diagram of DPSK Receiver

135 WITS Lab, NSYSU.


Quadrature Phase Shift Keying (QPSK)

136 WITS Lab, NSYSU.


Spectrum of QPSK Signals
$$P_{\text{QPSK}}(f) = E_b \left[ \left( \frac{\sin \pi (f - f_c) T_s}{\pi (f - f_c) T_s} \right)^2 + \left( \frac{\sin \pi (-f - f_c) T_s}{\pi (-f - f_c) T_s} \right)^2 \right]$$

137 WITS Lab, NSYSU.


Block Diagram of a QPSK Transmitter

138 WITS Lab, NSYSU.


Block Diagram of a QPSK Receiver

139 WITS Lab, NSYSU.


Offset QPSK (OQPSK)
For QPSK, the occasional phase shift of π radians can cause the
signal envelope to pass through zero for just an instant.
The amplification of the zero-crossings brings back the filtered
sidelobes since the fidelity of the signal at small voltage levels is
lost in transmission.
To prevent the regeneration of sidelobes and spectral widening, it
is imperative that QPSK signals that use pulse shaping be
amplified only using linear amplifiers, which are less efficient.
A modified form of QPSK, called offset QPSK (OQPSK) or
staggered QPSK is less susceptible to these deleterious effects
and supports more efficient amplification.
OQPSK ensures there are fewer baseband signal transitions.
Spectrum of an OQPSK signal is identical to that of QPSK.
140 WITS Lab, NSYSU.
Offset QPSK (OQPSK)
The time offset waveforms that are applied to the in-phase and
quadrature arms of an OQPSK modulator. Notice that a half-
symbol offset is used.

141 WITS Lab, NSYSU.


π/4-DQPSK

142 WITS Lab, NSYSU.


Generic π/4-DQPSK Transmitter

143 WITS Lab, NSYSU.


π/4-DQPSK Baseband Differential
Detector

144 WITS Lab, NSYSU.


Detection of Binary Signals in Gaussian
Noise

145 WITS Lab, NSYSU.


Digital Demodulation Techniques

Coherent detection: Exact replicas of the possible arriving


signals are available at the receiver. This means that the
receiver has exact knowledge of the carrier wave’s phase
reference, in which case we say the receiver is phase-locked to
the transmitter. Coherent detection is performed by cross-
correlating the received signal with each one of the replicas,
and then making a decision based on comparisons with pre-
selected thresholds.
Non-coherent detection: Knowledge of the carrier wave’s
phase is not required. The complexity of the receiver is
thereby reduced but at the expense of an inferior error
performance, compared to a coherent system.
146 WITS Lab, NSYSU.
Correlation Demodulator

147 WITS Lab, NSYSU.


Matched Filter Demodulator

148 WITS Lab, NSYSU.


Inter-Symbol Interference (ISI)

149 WITS Lab, NSYSU.


Inter Symbol Interference (ISI)
Inter-Symbol Interference (ISI) arises because of
imperfections in the overall frequency response of the
system. When a short pulse of duration Tb seconds is
transmitted through a band-limited system, the
frequency components constituting the input pulse
are differentially attenuated and differentially delayed
by the system. Consequently, the pulse appearing at
the output of the system is dispersed over an interval
longer than Tb seconds, thereby resulting in inter-
symbol interference.
Even in the absence of noise, imperfect filtering and
system bandwidth constraints lead to ISI.

150 WITS Lab, NSYSU.


Nyquist Channels for Zero ISI
The Nyquist channel is not physically realizable since it
dictates a rectangular bandwidth characteristic and an infinite
time delay.
Detection process would be very sensitive to small timing
errors.
Solution: Raised Cosine Filter.

151 WITS Lab, NSYSU.


Raised Cosine Filter

$$H(f) = \begin{cases}
1 & |f| < 2W_0 - W \\[1ex]
\cos^2\!\left( \dfrac{\pi}{4}\, \dfrac{|f| + W - 2W_0}{W - W_0} \right) & 2W_0 - W < |f| < W \\[1ex]
0 & |f| > W
\end{cases}$$
$$W_0 = \frac{1}{2T}$$
Excess bandwidth: $W - W_0$
Roll-off factor: $r = \dfrac{W - W_0}{W_0}$
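A minimal numpy sketch evaluating the raised-cosine transfer function above, under assumed values: symbol duration T = 1 ms (so W0 = 500 Hz) and roll-off r = 0.5 (so W = 750 Hz).

```python
import numpy as np

T = 1e-3                      # symbol duration [s] (assumed)
W0 = 1 / (2 * T)              # minimum (Nyquist) bandwidth
r = 0.5                       # roll-off factor (assumed)
W = (1 + r) * W0              # absolute bandwidth; excess bandwidth is W - W0

def raised_cosine(f):
    """Piecewise raised-cosine frequency response H(f) from the slide above."""
    f = np.abs(f)
    H = np.zeros_like(f)                                  # zero above W by default
    H[f < 2 * W0 - W] = 1.0                               # flat passband
    band = (f >= 2 * W0 - W) & (f <= W)                   # cosine roll-off region
    H[band] = np.cos(np.pi / 4 * (f[band] + W - 2 * W0) / (W - W0)) ** 2
    return H

freqs = np.linspace(-2 * W0, 2 * W0, 9)
print(np.round(raised_cosine(freqs), 3))   # [0. 0. 0.5 1. 1. 1. 0.5 0. 0.]
```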
152 WITS Lab, NSYSU.
Raised Cosine Filter Characteristics

153 WITS Lab, NSYSU.


Raised Cosine Filter Characteristics

154 WITS Lab, NSYSU.


Equalization
In practical systems, the frequency response of the
channel is not known to allow for a receiver design that
will compensate for the ISI.
The filter for handling ISI at the receiver contains
various parameters that are adjusted with the channel
characteristics.
The process of correcting the channel-induced distortion
is called equalization.

155 WITS Lab, NSYSU.


Equalization

156 WITS Lab, NSYSU.


Introduction to RAKE Receiver
Multiple versions of the transmitted signal are seen at
the receiver through the propagation channels.
CDMA spreading codes have very low correlation between
successive chips.
If these multipath components are delayed in time
by more than a chip duration, they appear like
uncorrelated noise at a CDMA receiver.

Equalization is NOT necessary; the multipath components
can instead be combined coherently.

157 WITS Lab, NSYSU.


Introduction to RAKE Receiver

To utilize the advantages of diversity techniques, the channel
parameters need to be estimated:
the arrival time, amplitude, and phase of each path.

Maximal Ratio Combiner (MRC): The combiner that


achieves the best performance is one in which each
output is multiplied by the corresponding complex-
valued (conjugate) channel gain. The effect of this
multiplication is to compensate for the phase shift in the
channel and to weight the signal by a factor that is
proportional to the signal strength.

158 WITS Lab, NSYSU.


Maximum Ratio Combining (MRC)

MRC weights: G_i = A_i e^{-jθ_i} (coherent combining).
[Diagram: L diversity branches weighted by G_1, G_2, ..., G_L and summed at the receiver; channel estimation provides the weights. MRC gives the best performance.]

159 WITS Lab, NSYSU.


Maximum Ratio Combining (MRC)
Received envelope: $\displaystyle r_\Sigma = \sum_{l=1}^{L} G_l\, r_l$
Total noise power: $\displaystyle \sigma_n^2 = \sum_{l=1}^{L} |G_l|^2 \sigma_{n,l}^2$
$$\text{SNR}_L = \frac{\left| \sum_{l=1}^{L} G_l r_l \right|^2}{2\sigma_n^2}
= \frac{\left| \sum_{l=1}^{L} G_l r_l \right|^2}{2 \sum_{l=1}^{L} |G_l|^2 \sigma_{n,l}^2}$$
Since
$$\sum_{l=1}^{L} G_l r_l = \sum_{l=1}^{L} \left( G_l \sigma_{n,l} \right) \left( \frac{r_l}{\sigma_{n,l}} \right)$$
160 WITS Lab, NSYSU.
Maximum Ratio Combining (MRC)
By the Cauchy-Schwarz inequality:
$$\left| \sum_{l=1}^{L} G_l r_l \right|^2 \le \sum_{l=1}^{L} \left| G_l \sigma_{n,l} \right|^2 \cdot \sum_{l=1}^{L} \left| \frac{r_l}{\sigma_{n,l}} \right|^2$$
so that
$$\text{SNR}_L \le \frac{1}{2} \sum_{l=1}^{L} \frac{|r_l|^2}{\sigma_{n,l}^2} = \sum_{l=1}^{L} \text{SNR}_l$$
Equality holds when $G_l \sigma_{n,l} = k\, \dfrac{r_l^{*}}{\sigma_{n,l}}$.
⇒ Output SNR = sum of the SNRs from all branches when $G_l \propto r_l^{*}/\sigma_{n,l}^{2}$ ($G_l \propto r_l^{*}$ for equal branch noise powers).
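A minimal numpy sketch of MRC under assumed example values (L = 3 branches with made-up channel gains and noise levels): each branch is weighted by the conjugate channel gain scaled by the branch noise power, and the combined SNR equals the sum of the branch SNRs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: L = 3 branches, complex channel gains and noise standard deviations.
h = np.array([0.9 * np.exp(1j * 0.4), 0.5 * np.exp(-1j * 1.1), 0.3 * np.exp(1j * 2.0)])
sigma = np.array([0.4, 0.5, 0.6])           # per-branch noise standard deviations

s = 1.0 + 0.0j                              # transmitted symbol (unit energy)
noise = sigma * (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2)
r = h * s + noise                           # branch observations

G = np.conj(h) / sigma**2                   # MRC weights: conjugate gain / noise power
combined = np.sum(G * r)                    # coherent combining (branch phases aligned)
print("combined sample:", combined)

branch_snr = np.abs(h)**2 / sigma**2
print("sum of branch SNRs :", branch_snr.sum())
print("combined output SNR:", np.abs(np.sum(G * h))**2 / np.sum(np.abs(G)**2 * sigma**2))
# The last two printed values agree: MRC achieves the sum of the per-branch SNRs.
```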
161 WITS Lab, NSYSU.
Example of RAKE Receiver Structure

162 WITS Lab, NSYSU.


Advantages of RAKE Receiver

Consider a receiver with only one finger:


Once the output of a single correlator is corrupted by
fading, large bit error is expected.

Consider a RAKE receiver


If the output of a single correlator is corrupted by fading,
the others may NOT be.
Diversity is provided by combining the outputs
Overcome fading
Improve CDMA reception

163 WITS Lab, NSYSU.
