Wireless Mobile Communications
Chapters 4 and 7 (Ch. 4, Ch. 7)
Super-Intelligence Computing and Communication Networks Laboratory
조성래 (Sungrae Cho)
[email protected]
Source and Channel Coding
General Structure of a Communication System
Source → Transmitter → Channel → Receiver → Destination: the transmitted signal carries the information over the channel, noise is added, and the received signal is converted back into the received information.
Transmitter: Formatter → Source encoder → Channel encoder → Modulator → Multiplexer
Receiver: Demultiplexer → Demodulator → Channel decoder → Source decoder → Formatter
Copyright ⓒ CAU UC Lab, all rights reserved
Source Coding vs. Channel Coding (1/2)
Coding theory is the study of the properties of codes and their
fitness for a specific application. Codes are used for data
compression, cryptography, error-correction and more recently
also for network coding.
This typically involves the removal of redundancy or the
correction (or detection) of errors in the transmitted data.
There are essentially two aspects to coding theory:
Data compression (or, source coding)
Error correction (or channel coding)
Source encoding attempts to compress the data from a source
in order to transmit it more efficiently. This practice is found
every day on the Internet where the common Zip data
compression is used to reduce the network load and make files
smaller.
Source Coding vs. Channel Coding (2/2)
Channel encoding adds extra data bits to make the
transmission of data more robust to disturbances present on the
transmission channel. A typical music CD uses the Reed-
Solomon code to correct for scratches and dust. Cell phones
also use coding techniques to correct for the fading and noise
of high frequency radio transmission. Data modems, telephone
transmissions, and NASA all employ channel coding techniques
to get the bits through, for example the turbo code and LDPC
codes.
Source Coding
Introduction (1/2)
1. Source symbols are encoded in binary
2. The average codelength should be made as small as possible
3. Removing redundancy reduces the bit-rate
Consider a discrete memoryless source on the alphabet S = {s_0, s_1, ..., s_{K−1}}.
Let the corresponding probabilities be {p_0, p_1, ..., p_{K−1}}
and the codelengths be {l_0, l_1, ..., l_{K−1}}.
Then, the average codelength (average number of bits per symbol) of the source is defined as
L̄ = Σ_{k=0}^{K−1} p_k l_k
Introduction (2/2)
If L_min is the minimum possible value of the average codelength L̄, then the coding
efficiency of the source is given by η = L_min / L̄.
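As a quick numerical check of these definitions, the sketch below uses a made-up four-symbol source (the probabilities and codeword lengths are hypothetical, chosen so the arithmetic is easy to follow), taking L_min = H(S) per the source-coding theorem:

```python
import math

# Hypothetical 4-symbol source, for illustration only.
probs   = [0.5, 0.25, 0.125, 0.125]   # symbol probabilities p_k
lengths = [1, 2, 3, 3]                # codeword lengths l_k (bits)

# Average codelength: L_bar = sum of p_k * l_k
L_bar = sum(p * l for p, l in zip(probs, lengths))

# The source entropy H(S) = -sum of p_k log2 p_k lower-bounds the
# average codelength of any code for a discrete memoryless source.
H = -sum(p * math.log2(p) for p in probs)

eta = H / L_bar                        # coding efficiency
print(L_bar, H, eta)                   # 1.75 1.75 1.0
```

Here the lengths happen to satisfy l_k = −log2 p_k exactly, so the efficiency is 1.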
Data Compaction:
1. Removal of redundant information prior to transmission.
2. Lossless data compaction – no information is lost.
3. A source code which represents the output of a discrete
memoryless source should be uniquely decodable.
Source Coding Schemes for Data
Compaction
Prefix Coding
1. A prefix code is a variable-length source coding scheme in which no codeword is the prefix of any other codeword.
2. Every prefix code is uniquely decodable.
3. But the converse is not true, i.e., a uniquely decodable code need not be a prefix code.
Examples: a code in which some codeword is a prefix of another is not a prefix code; a prefix code is uniquely decodable; and a code can be uniquely decodable without being a prefix code.
Source Coding Schemes for Data Compaction
Suppose every symbol s_k is emitted with probability p_k = 2^(−l_k), so that Σ_k 2^(−l_k) = 1.
Therefore, the average codeword length is given by
L̄ = Σ_k p_k l_k = Σ_k l_k 2^(−l_k) = H(S),
i.e., such a prefix code is matched to the source and attains the entropy bound.
Huffman Coding
Step 1: arrange the symbol probabilities in decreasing order and consider them as the leaf nodes of a tree
Step 2: while more than one node remains:
Find the two nodes with the smallest probabilities and assign the one with the lower probability a “0” and the other one a “1” (or the other way around, but be consistent)
Merge the two nodes to form a new node whose probability is the sum of the two merged nodes
Step 3: for each symbol, determine its codeword by tracing the assigned bits from the corresponding leaf node up to the root of the tree. The bit at the leaf node is the last bit of the codeword
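The procedure above can be sketched in Python. The symbol probabilities below are made up for illustration, and ties are broken by insertion order (the slides only require that the 0/1 assignment be consistent):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for a dict {symbol: probability}.

    Returns {symbol: bit string}. Each heap entry carries a partial
    code table; a running counter breaks probability ties so that
    heap entries stay comparable.
    """
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # smallest probability -> prepend "0"
        p1, _, c1 = heapq.heappop(heap)  # next smallest        -> prepend "1"
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))  # merged node
        count += 1
    return heap[0][2]

probs = {"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.10, "e": 0.05}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())  # average codelength
```

Prepending a bit at each merge reproduces Step 3: the bit assigned at the leaf ends up last in the codeword. For these probabilities the code lengths come out as (1, 2, 3, 4, 4) and the average length is 2.0 bits per symbol.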
Huffman Coding
1. The Huffman code is a prefix code
2. The length of the codeword for each symbol is roughly equal to the amount of information it conveys
3. If the probability distribution is known and accurate, Huffman coding is very good
Variance is a measure of the variability in the codeword lengths of a source code and is defined as
σ² = Σ_{k=0}^{K−1} p_k (l_k − L̄)²
When two Huffman trees give the same average length, it is reasonable to choose the one with the smaller variance, since it requires less buffering when code bits are delivered at a fixed rate.
Huffman Coding
A Huffman tree is constructed as shown in Figure 3; (a) and (b) represent two forms of Huffman trees.
We see that both schemes have the same average length but different variances.
Channel Coding
Channel Coding in Digital
Communication Systems
Forward Error Correction (FEC)
The key idea of FEC is to transmit enough
redundant data to allow the receiver to recover
from errors all by itself. No sender
retransmission is required.
The major categories of FEC codes are
Block codes,
Cyclic codes,
Reed-Solomon codes,
Convolutional codes, and
Turbo codes, etc.
Block Codes
Information is divided into blocks of length k
r parity bits (check bits) are added to each
block (total length n = k + r).
Code rate R = k/n
Decoder looks for codeword closest to
received vector (code vector + error vector)
Tradeoffs between
Efficiency
Reliability
Encoding/Decoding complexity
Block Codes: Linear Block Codes
A codeword C of a linear block code is
C = (c_1, c_2, ..., c_k, c_{k+1}, ..., c_{n−1}, c_n) = (m | c_p)
where m = (m_1, m_2, ..., m_k) is the uncoded message vector and c_p = mP is the parity vector.
G = [I | P] is the generator matrix, with
P = [ p_11 p_12 ... p_1,(n−k)
      p_21 p_22 ... p_2,(n−k)
      ...
      p_k1 p_k2 ... p_k,(n−k) ]
g(x) = generator polynomial
Block Codes: Linear Block Codes
The parity-check matrix is H = [Pᵀ | I_{n−k}]; every codeword c satisfies cHᵀ = 0 (equivalently, GHᵀ = 0).
Block Codes: Linear Block Codes
Operations of the generator matrix and the parity check matrix
Block Codes: Example
Example: find the linear block code generator matrix G for a (7, 4) code with code generator polynomial g(x) = 1 + x + x³.
We have n = total number of bits = 7, k = number of information bits = 4, and r = number of parity bits = n − k = 3.
G = [I | P] = [ 1 0 0 0 | p_1
                0 1 0 0 | p_2
                0 0 1 0 | p_3
                0 0 0 1 | p_4 ]
Block Codes: Example (Continued)
G = [ 1 0 0 0 1 1 0
      0 1 0 0 0 1 1
      0 0 1 0 1 1 1
      0 0 0 1 1 0 1 ]
Block Codes: Example (Continued)
Parity-check matrix:
H = [ 1 0 1 1 1 0 0
      1 1 1 0 0 1 0
      0 1 1 1 0 0 1 ]
For the message vector m = [1 0 1 1], we get the codeword
c = mG = [1 0 1 1 1 0 0]
If there are no errors at the receiver, the syndrome vector is
s = cHᵀ = [0 0 0]
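The (7, 4) example can be verified numerically. The sketch below reproduces the G and H of this example and checks the codeword and syndrome computations over GF(2); numpy is used only for the matrix arithmetic:

```python
import numpy as np

# Generator matrix G = [I | P] for the (7, 4) code in the example.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

# Parity-check matrix H = [P^T | I].
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

m = np.array([1, 0, 1, 1])     # message vector from the example
c = m @ G % 2                  # codeword, arithmetic over GF(2)
s = c @ H.T % 2                # syndrome: all zeros when error-free

# A single-bit error produces a syndrome equal to the corresponding
# column of H, which localizes the error.
e = np.zeros(7, dtype=int)
e[2] = 1                       # flip the third bit
s_err = (c + e) % 2 @ H.T % 2
```

Running this gives c = [1 0 1 1 1 0 0] and s = [0 0 0], matching the slide; s_err equals the third column of H.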
Convolutional Codes
Encoding of information stream rather than
information blocks
The value of a given information symbol also
affects the encoding of the next M information
symbols, i.e., the encoder has memory M
Easy implementation using a shift register
Assuming k inputs and n outputs
Decoding is mostly performed by the Viterbi
Algorithm (not covered here)
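A rate-1/2, memory-2 encoder of this kind can be sketched as follows. The slides do not give the shift-register connections, so the tap vectors here are an assumption (the common (7, 5) octal generator pair, i.e., taps 111 and 101):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder with memory M = 2.

    g1 and g2 are the assumed tap vectors applied to the current
    input bit and the two shift-register stages D1 and D2.
    """
    d1 = d2 = 0                      # D1 and D2 are initially 0
    out = []
    for b in bits:
        window = (b, d1, d2)
        out.append(sum(g * x for g, x in zip(g1, window)) % 2)  # output 1
        out.append(sum(g * x for g, x in zip(g2, window)) % 2)  # output 2
        d1, d2 = b, d1               # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])   # two output bits per input bit
```

Each input bit influences the current output pair and the next M = 2 output pairs through D1 and D2, which is exactly the memory described above.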
Convolutional Codes: (n = 2, k = 1, M = 2) Encoder
D1 and D2 are initially 0
Interleaving
Interleaving (Example)
Information Capacity Theorem (Shannon Limit)
The information capacity (or channel capacity) C of a continuous channel of bandwidth B hertz, perturbed by additive white Gaussian noise of power spectral density N₀/2 and limited in bandwidth to B, is given by
C = B log₂(1 + P / (N₀B)) bits per second,
where P is the average transmitted signal power.
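The theorem is easy to evaluate numerically; the bandwidth and SNR below are illustrative assumptions (roughly a voice-band telephone channel), not values from the slides:

```python
import math

def capacity(B, P, N0):
    """Shannon-Hartley capacity C = B * log2(1 + P / (N0 * B)), in bit/s."""
    return B * math.log2(1 + P / (N0 * B))

# Assumed example: B = 3 kHz and a 30 dB signal-to-noise ratio.
B = 3000.0
snr = 10 ** (30 / 10)          # 30 dB -> a power ratio of 1000
C = B * math.log2(1 + snr)     # about 2.99e4 bit/s
```

No coding scheme can exceed this rate; the turbo and LDPC codes mentioned earlier are notable for approaching it closely.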
Shannon Limit
Turbo Codes
Modulation
Signal transmission through linear systems: input → linear system → output
Ideal filters: low-pass, high-pass, and band-pass
Bandwidth of signal
Baseband versus bandpass: a baseband signal is translated into a bandpass signal by mixing it with a local oscillator.
Bandwidth of signal: Approximations
Different definitions of bandwidth:
a) Half-power bandwidth
b) Noise equivalent bandwidth
c) Null-to-null bandwidth
d) Fractional power containment bandwidth
e) Bounded power spectral density (e.g., down 50 dB)
Modulation
Encoding information in a manner suitable for transmission:
- Translate the baseband source signal to a bandpass signal
- The bandpass signal is called the “modulated signal”
Modulation and Demodulation
Transmitter: for M-ary modulation, the transmitted symbol m_i (i = 1, ..., M) is formatted into a pulse g_i(t), which the bandpass modulator turns into the transmitted signal s_i(t).
Channel: impulse response h_c(t) plus additive noise n(t).
Receiver: the received waveform r(t) is demodulated and sampled to give z(T), from which the detector produces the estimated symbol m̂_i.
Major sources of errors:
Thermal noise (AWGN)
disturbs the signal in an additive fashion (Additive)
has a flat spectral density for all frequencies of interest (White)
is modeled by a Gaussian random process (Gaussian Noise)
Inter-Symbol Interference (ISI)
Due to the filtering effects of the transmitter, channel, and receiver, symbols are “smeared” into one another.
Basic Modulation Techniques
Amplitude Modulation (AM)
Frequency Modulation (FM)
Frequency Shift Keying (FSK)
Phase Shift Keying (PSK)
Quadrature Phase Shift Keying (QPSK)
Quadrature Amplitude Modulation (QAM)
Analog and Digital Signals
Hearing, Speech, and Voice-band
Channels
Amplitude Modulation (AM)
Frequency Modulation (FM)
FM embeds the message signal in the carrier signal by varying the instantaneous frequency. The amplitude of the carrier signal is kept constant.
Frequency Shift Keying (FSK)
Phase Shift Keying (PSK)
Receiver job for Demodulation
Demodulation and sampling:
Waveform recovery and preparing the received signal for detection:
Improving the signal-to-noise ratio (SNR) using a matched filter (projection onto the signal space)
Reducing ISI using an equalizer (removing channel distortion)
Sampling the recovered waveform
Detection:
Estimate the transmitted symbol based on the received sample
Receiver structure
Step 1, waveform-to-sample transformation (demodulate and sample): for bandpass signals, frequency down-conversion; then the receiving filter and the equalizing filter (compensation for channel-induced ISI) convert the received waveform r(t), a possibly distorted baseband pulse, into the sample z(T) (the test statistic).
Step 2, decision making (detect): a threshold comparison on z(T) yields the estimated symbol m̂_i.
Signal Space Concept
Signal space: Overview
What is a signal space?
Vector representations of signals in an N-dimensional orthogonal
space
Why do we need a signal space?
It is a means to convert signals to vectors and vice versa.
It is a means to calculate signal energies and Euclidean distances between signals.
Why are we interested in Euclidean distances between signals?
For detection purposes: the received signal is transformed into a received vector.
The signal which has the minimum distance to the received signal is estimated as the transmitted signal.
Schematic example of a signal space
A two-dimensional signal space with basis functions ψ₁(t) and ψ₂(t), signal vectors s₁ = (a₁₁, a₁₂), s₂ = (a₂₁, a₂₂), s₃ = (a₃₁, a₃₂), and received vector z = (z₁, z₂).
Transmitted signal alternatives:
s₁(t) = a₁₁ψ₁(t) + a₁₂ψ₂(t) ↔ s₁ = (a₁₁, a₁₂)
s₂(t) = a₂₁ψ₁(t) + a₂₂ψ₂(t) ↔ s₂ = (a₂₁, a₂₂)
s₃(t) = a₃₁ψ₁(t) + a₃₂ψ₂(t) ↔ s₃ = (a₃₁, a₃₂)
Received signal at matched filter output:
z(t) = z₁ψ₁(t) + z₂ψ₂(t) ↔ z = (z₁, z₂)
Signal space
To form a signal space, first we need to know the inner product between two signals (functions):
Inner (scalar) product:
⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x(t) y*(t) dt
= cross-correlation between x(t) and y(t)
Properties of the inner product:
⟨ax(t), y(t)⟩ = a⟨x(t), y(t)⟩
⟨x(t), ay(t)⟩ = a*⟨x(t), y(t)⟩
⟨x(t) + y(t), z(t)⟩ = ⟨x(t), z(t)⟩ + ⟨y(t), z(t)⟩
Signal space …
The distance in signal space is measured by calculating the norm.
What is a norm?
Norm of a signal:
‖x(t)‖ = √⟨x(t), x(t)⟩ = √(∫_{−∞}^{∞} |x(t)|² dt) = √E_x
= “length” of x(t)
‖ax(t)‖ = |a| ‖x(t)‖
Norm between two signals:
d_{x,y} = ‖x(t) − y(t)‖
We refer to the norm between two signals as the Euclidean distance between them.
Example of distances in signal space
In the two-dimensional signal space, E₁, E₂, E₃ are the energies of s₁, s₂, s₃, and d_{s₁,z}, d_{s₂,z}, d_{s₃,z} are the distances from each signal vector to the received vector z = (z₁, z₂).
The Euclidean distance between signals z(t) and s_i(t):
d_{s_i,z} = ‖s_i(t) − z(t)‖ = √((a_{i1} − z₁)² + (a_{i2} − z₂)²),  i = 1, 2, 3
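Minimum-distance detection based on these distances can be sketched directly; the three signal vectors and the received vector below are hypothetical coordinates standing in for the figure's s1, s2, s3 and z:

```python
import numpy as np

# Hypothetical 2-D signal vectors s1, s2, s3 and a received vector z.
signals = np.array([[1.0, 1.0],     # s1
                    [-1.0, 1.0],    # s2
                    [0.0, -1.0]])   # s3
z = np.array([0.8, 0.9])

# Euclidean distances d_{s_i, z} = || s_i - z ||
d = np.linalg.norm(signals - z, axis=1)

# Detector: decide in favor of the signal closest to z.
i_hat = int(np.argmin(d))            # here: index 0, i.e., s1
```

With these numbers z lies closest to s1, so the detector declares s1 as the transmitted signal.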
Orthogonal signal space
An N-dimensional orthogonal signal space is characterized by N linearly independent functions {ψ_j(t)}, j = 1, ..., N, called basis functions. The basis functions must satisfy the orthogonality condition
⟨ψ_i(t), ψ_j(t)⟩ = ∫₀ᵀ ψ_i(t) ψ_j*(t) dt = K_i δ_{ij},  0 ≤ t ≤ T,  i, j = 1, ..., N
where δ_{ij} = 1 for i = j and δ_{ij} = 0 for i ≠ j.
If all K_i = 1, the signal space is orthonormal.
Constructing an orthonormal basis from a non-orthonormal set of vectors: the Gram-Schmidt procedure
Example of an orthonormal basis
Example: 2-dimensional orthonormal signal space
ψ₁(t) = √(2/T) cos(2πt/T),  0 ≤ t ≤ T
ψ₂(t) = √(2/T) sin(2πt/T),  0 ≤ t ≤ T
⟨ψ₁(t), ψ₂(t)⟩ = ∫₀ᵀ ψ₁(t) ψ₂(t) dt = 0
‖ψ₁(t)‖ = ‖ψ₂(t)‖ = 1
Example: 1-dimensional orthonormal signal space
ψ₁(t) = 1/√T,  0 ≤ t ≤ T,  with ‖ψ₁(t)‖ = 1
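The orthonormality of this cosine/sine pair can be verified numerically. The sketch below takes T = 1 and the usual √(2/T) normalization, and approximates the integrals with the trapezoidal rule:

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 20001)               # dense grid on [0, T]
psi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * t / T)
psi2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * t / T)

def integrate(y, x):
    """Trapezoidal-rule approximation of the integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

inner = integrate(psi1 * psi2, t)            # ~0: the pair is orthogonal
norm1 = np.sqrt(integrate(psi1 * psi1, t))   # ~1: unit energy
norm2 = np.sqrt(integrate(psi2 * psi2, t))   # ~1: unit energy
```

The inner product comes out at numerical zero and both norms at 1, confirming that the pair is an orthonormal basis.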
Example: BPSK
Note: two symbols, but only one dimension in
BPSK.
Signal space …
Any arbitrary finite set of waveforms {s_i(t)}, i = 1, ..., M, where each member of the set is of duration T, can be expressed as a linear combination of N orthogonal waveforms {ψ_j(t)}, j = 1, ..., N, where N ≤ M:
s_i(t) = Σ_{j=1}^{N} a_{ij} ψ_j(t),  i = 1, ..., M
where
a_{ij} = (1/K_j) ⟨s_i(t), ψ_j(t)⟩ = (1/K_j) ∫₀ᵀ s_i(t) ψ_j*(t) dt,  j = 1, ..., N,  i = 1, ..., M,  0 ≤ t ≤ T
Vector representation of the waveform: s_i = (a_{i1}, a_{i2}, ..., a_{iN})
Waveform energy: E_i = Σ_{j=1}^{N} K_j a_{ij}²
Signal space …
s_i(t) = Σ_{j=1}^{N} a_{ij} ψ_j(t)  ↔  s_i = (a_{i1}, a_{i2}, ..., a_{iN})
Waveform-to-vector conversion: correlating s_i(t) with each basis function ψ_j(t), i.e., integrating their product over [0, T], yields the coefficients a_{i1}, ..., a_{iN}.
Vector-to-waveform conversion: weighting each basis function ψ_j(t) by a_{ij} and summing the results reconstructs s_i(t).
Example of projecting signals to an orthonormal signal space
Transmitted signal alternatives:
s₁(t) = a₁₁ψ₁(t) + a₁₂ψ₂(t) ↔ s₁ = (a₁₁, a₁₂)
s₂(t) = a₂₁ψ₁(t) + a₂₂ψ₂(t) ↔ s₂ = (a₂₁, a₂₂)
s₃(t) = a₃₁ψ₁(t) + a₃₂ψ₂(t) ↔ s₃ = (a₃₁, a₃₂)
a_{ij} = ∫₀ᵀ s_i(t) ψ_j(t) dt,  j = 1, ..., N,  i = 1, ..., M,  0 ≤ t ≤ T
QPSK Signal Constellation
Quadrature Amplitude Modulation
(QAM)
Combination of AM and PSK
Two carriers 90° out of phase are each amplitude modulated
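A minimal sketch of this quadrature idea, with illustrative numbers (the carrier frequency, symbol duration, and ±1 amplitude levels are assumptions, not from the slides): the in-phase and quadrature amplitudes modulate a cosine and a sine carrier, and coherent demodulation recovers them.

```python
import numpy as np

fc = 10.0                        # carrier frequency in Hz (illustrative)
T = 1.0                          # symbol duration in s
t = np.linspace(0.0, T, 1000, endpoint=False)   # whole carrier periods

I, Q = 1.0, -1.0                 # in-phase / quadrature amplitudes

# Two carriers 90 degrees out of phase, each amplitude modulated:
# s(t) = I cos(2*pi*fc*t) - Q sin(2*pi*fc*t)
s = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)

# Coherent demodulation: multiply by each carrier and average over the
# symbol (a crude low-pass filter), recovering I and Q.
I_hat = 2 * np.mean(s * np.cos(2 * np.pi * fc * t))
Q_hat = -2 * np.mean(s * np.sin(2 * np.pi * fc * t))
```

Because the two carriers are orthogonal over a whole number of periods, each average isolates one amplitude: I_hat and Q_hat come back as 1 and −1.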