Mobile Communication Engineering: Review on Fundamental Limits on Communications


Mobile Communication Engineering
Review on Fundamental Limits on Communications

Hyuckjae Lee
Fundamental Limits on Performance

Given an information source and a noisy channel:
1) the limit on the minimum number of bits per symbol
2) the limit on the maximum rate for reliable communication
=> Shannon's three theorems
Uncertainty, Information and Entropy
Let the source alphabet be
  S = {s_0, s_1, ..., s_{K-1}}
with probabilities of occurrence
  P(S = s_k) = p_k,   k = 0, 1, ..., K-1,   and   \sum_{k=0}^{K-1} p_k = 1

Assume a discrete memoryless source (DMS).

What is the measure of information?
Uncertainty, Information, and Entropy (cont)

Interrelations between information and uncertainty (surprise):
No surprise -> no information.
If A is a surprise and B is another surprise, what is the total information of observing A and B simultaneously?
  Info(A, B) = Info(A) + Info(B)
The amount of information may be related to the inverse of the probability of occurrence:
  Info ~ 1 / Prob.
Hence
  I(s_k) = log(1 / p_k)
Properties of Information

1) I(s_k) = 0 for p_k = 1
2) I(s_k) ≥ 0 for 0 ≤ p_k ≤ 1
3) I(s_k) > I(s_i) for p_k < p_i
4) I(s_k s_i) = I(s_k) + I(s_i), if s_k and s_i are statistically independent

* The custom is to use the logarithm of base 2.
Entropy
Definition: a measure of the average information content per source symbol, i.e. the mean value of I(s_k) over S:
  H(S) = E[I(s_k)] = \sum_{k=0}^{K-1} p_k I(s_k) = \sum_{k=0}^{K-1} p_k \log_2(1 / p_k)

Properties of H:
  0 ≤ H(S) ≤ \log_2 K, where K is the radix (number of symbols) of S
1) H(S) = 0 iff p_k = 1 for some k and all other probabilities are 0  ->  no uncertainty
2) H(S) = \log_2 K iff p_k = 1/K for all k  ->  maximum uncertainty
Entropy of Binary Memoryless Source
For a binary source with symbol 0 occurring with probability p_0 and symbol 1 with probability p_1 = 1 - p_0:
  H(S) = -p_0 \log_2 p_0 - p_1 \log_2 p_1
       = -p_0 \log_2 p_0 - (1 - p_0) \log_2 (1 - p_0)   (bits)

[Figure: the binary entropy function H(S) versus p_0; H(S) rises from 0 at p_0 = 0 to a maximum of 1.0 at p_0 = 1/2 and falls back to 0 at p_0 = 1.]
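To make the binary entropy curve above concrete, here is a minimal Python sketch (added for illustration, not part of the original slides) that evaluates H(S) = -p_0 log2 p_0 - (1 - p_0) log2(1 - p_0) on a grid and confirms the maximum of 1 bit at p_0 = 1/2.

```python
import math

def binary_entropy(p0: float) -> float:
    """H(S) = -p0*log2(p0) - (1 - p0)*log2(1 - p0), with H = 0 at p0 in {0, 1}."""
    if p0 in (0.0, 1.0):
        return 0.0
    return -p0 * math.log2(p0) - (1 - p0) * math.log2(1 - p0)

# Evaluate on a grid of p0 values and locate the maximum.
grid = [i / 100 for i in range(101)]
values = [(p0, binary_entropy(p0)) for p0 in grid]
p_max, h_max = max(values, key=lambda t: t[1])
print(f"H(0.1) = {binary_entropy(0.1):.4f} bits")   # ~0.4690
print(f"H(0.5) = {binary_entropy(0.5):.4f} bits")   # 1.0000
print(f"maximum {h_max:.4f} bits at p0 = {p_max}")  # 1.0 at p0 = 0.5
```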
Source Coding Theorem
Source encoding
  Efficient representation of the data generated by the source (data compaction)
  Must be uniquely decodable
  Needs the statistics of the source
  (The Lempel-Ziv algorithm handles unknown source statistics; see Proakis's book.)
  (Another frequently used method is run-length coding.)
Source Coding Theorem (cont)
Variable-length code vs. fixed-length code

[Diagram: DMS -> source encoder -> binary sequence; symbol s_k is mapped to codeword b_k of length l_k bits.]

The average codeword length is
  \bar{L} = \sum_{k=0}^{K-1} p_k l_k

The coding efficiency is
  \eta = \bar{L}_min / \bar{L}
where \bar{L}_min is the minimum possible value of \bar{L}.
Shannon's first theorem: the source-coding theorem

Given a DMS of entropy H(S), the average codeword length \bar{L} for any source coding scheme satisfies
  \bar{L} ≥ H(S)
i.e.
  \bar{L}_min = H(S)   and   \eta = H(S) / \bar{L}
Practical Source Coding
Prefix coding
  Definition: a code in which no codeword is the prefix of any other codeword.

Example:

  Symbol   p_k     Code I   Code II   Code III
  s_0      0.5     0        0         0
  s_1      0.25    1        10        01
  s_2      0.125   00       110       011
  s_3      0.125   11       111       0111

(Code II is a prefix code; Code I is not uniquely decodable; Code III is uniquely decodable but not a prefix code.)
Practical Source Coding (cont)
Decoding

[Figure: decision tree for Code II; starting from the initial state, branch 0 reaches terminal state s_0, branches 1-0 reach s_1, branches 1-1-0 reach s_2, and branches 1-1-1 reach s_3.]

The average length of a prefix code is bounded by
  H(S) ≤ \bar{L} < H(S) + 1
Equality (\bar{L} = H(S)) holds under the condition that p_k = 2^{-l_k} for all k.
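The decoding tree and the bound above can be illustrated with a short Python sketch, added here as an assumed example using Code II from the earlier table. It also checks the Kraft sum \sum 2^{-l_k} (a standard property of prefix codes, not spelled out on the slide) and verifies that, because p_k = 2^{-l_k} for this source, \bar{L} = H(S) and the coding efficiency is 1.

```python
import math

probs = {"s0": 0.5, "s1": 0.25, "s2": 0.125, "s3": 0.125}
code2 = {"s0": "0", "s1": "10", "s2": "110", "s3": "111"}  # Code II from the table

# Kraft sum: 2^(-l_k) summed over codewords (equals 1 here since p_k = 2^(-l_k)).
kraft = sum(2 ** -len(cw) for cw in code2.values())

# Average codeword length, source entropy, and efficiency.
L_bar = sum(probs[s] * len(code2[s]) for s in probs)
H = sum(p * math.log2(1 / p) for p in probs.values())
print(f"Kraft sum = {kraft}, L_bar = {L_bar}, H(S) = {H}, efficiency = {H / L_bar}")

# Prefix decoding: accumulate bits until the buffer matches a codeword, then emit.
def decode(bits: str, code: dict) -> list:
    inverse = {cw: sym for sym, cw in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:          # a codeword is complete: emit symbol, reset buffer
            out.append(inverse[buf])
            buf = ""
    return out

print(decode("0101100111", code2))  # ['s0', 's1', 's2', 's0', 's3']
```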
Huffman Coding
Properties:
  a prefix code
  average codeword length \bar{L} approaches the fundamental limit H(S)
  optimum

Algorithm shown by example:

  Stage 1 (p_k): 0.4, 0.2, 0.2, 0.1, 0.1   (symbols s_0, s_1, s_2, s_3, s_4)
  Stage 2:       0.4, 0.2, 0.2, 0.2
  Stage 3:       0.4, 0.4, 0.2
  Stage 4:       0.6, 0.4

(At each stage the two least probable entries are combined and the list is reordered.)
Huffman Coding (cont)
The result is:

  Symbol   p_k    Codeword
  s_0      0.4    00
  s_1      0.2    10
  s_2      0.2    11
  s_3      0.1    010
  s_4      0.1    011

Then \bar{L} = 2.2 bits, while H(S) = 2.12193 bits.

Huffman encoding is not unique:
1) Trivially, the labels 0 and 1 may be interchanged; or
Practical Source Coding (cont)
2) When the probability of a combined symbol equals that of another symbol in the list, the combined symbol may be placed
   a) as high as possible, or
   b) as low as possible.
Both choices give the same average codeword length \bar{L}, but different variances of the codeword lengths.
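The staged construction above can be automated. The following Python sketch is an added illustration, not the slides' own listing: it builds a Huffman code for the five-symbol source with a heap. The particular codewords depend on tie-breaking and may differ from the table above, but with this tie-breaking the codeword lengths come out as {2, 2, 2, 3, 3}, and \bar{L} = 2.2 bits in either case, against H(S) = 2.12193 bits.

```python
import heapq
import math
from itertools import count

def huffman_code(probs: dict) -> dict:
    """Build a Huffman code for a symbol -> probability map (sketch of the textbook algorithm)."""
    tick = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # the two least probable entries
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

probs = {"s0": 0.4, "s1": 0.2, "s2": 0.2, "s3": 0.1, "s4": 0.1}
code = huffman_code(probs)
L_bar = sum(probs[s] * len(code[s]) for s in probs)
H = sum(p * math.log2(1 / p) for p in probs.values())
print(code)                                              # one valid assignment (lengths 2,2,2,3,3)
print(f"L_bar = {L_bar:.2f} bits, H(S) = {H:.5f} bits")  # 2.20 vs 2.12193
```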
Discrete Memoryless Channel

[Figure: a DMC with input alphabet X = {x_0, x_1, ..., x_{J-1}}, output alphabet Y = {y_0, y_1, ..., y_{K-1}}, and transition probabilities P(y_k | x_j).]

Definition of a DMC:
  A channel with input X and output Y, where Y is a noisy version of X.
  Discrete: both alphabets X and Y have finite sizes.
  Memoryless: the current output symbol depends only on the current input symbol, not on previous ones.
Discrete Memoryless Channel (cont)

Channel matrix (transition probability matrix):

  P = [ p(y_0 | x_0)      p(y_1 | x_0)      ...  p(y_{K-1} | x_0)
        p(y_0 | x_1)      p(y_1 | x_1)      ...  p(y_{K-1} | x_1)
        ...
        p(y_0 | x_{J-1})  p(y_1 | x_{J-1})  ...  p(y_{K-1} | x_{J-1}) ]

The size of P is J by K, and each row sums to one:
  \sum_{k=0}^{K-1} p(y_k | x_j) = 1   for all j
The a priori probabilities are p(x_j), j = 0, 1, ..., J-1.
Discrete Memoryless Channel (cont)

Given the a priori probabilities p(x_j) and the channel matrix P, we can find the probabilities of the various output symbols p(y_k).

The joint probability distribution of X and Y:
  p(x_j, y_k) = P(X = x_j, Y = y_k) = P(Y = y_k | X = x_j) P(X = x_j) = p(y_k | x_j) p(x_j)

The marginal probability distribution of the output Y:
  p(y_k) = P(Y = y_k) = \sum_{j=0}^{J-1} P(Y = y_k | X = x_j) P(X = x_j)
         = \sum_{j=0}^{J-1} p(y_k | x_j) p(x_j),   k = 0, 1, ..., K-1
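As a numerical illustration of the joint and marginal formulas above, here is a small Python sketch (added here; the 2-input, 3-output channel matrix and prior are hypothetical values chosen only for the example).

```python
# Hypothetical example: J = 2 inputs, K = 3 outputs (values chosen only for illustration).
p_x = [0.6, 0.4]                       # a priori probabilities p(x_j)
P = [[0.7, 0.2, 0.1],                  # channel matrix, row j = p(y_k | x_j); rows sum to 1
     [0.1, 0.3, 0.6]]

# Joint distribution: p(x_j, y_k) = p(y_k | x_j) * p(x_j)
joint = [[P[j][k] * p_x[j] for k in range(3)] for j in range(2)]

# Output marginal: p(y_k) = sum_j p(y_k | x_j) * p(x_j)
p_y = [sum(joint[j][k] for j in range(2)) for k in range(3)]

print("joint p(x_j, y_k):", joint)
print("output marginal p(y_k):", p_y, "sum =", sum(p_y))  # the marginal sums to 1
```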

Discrete Memoryless Channel (cont)

BSC (Binary Symmetric Channel)

[Figure: the BSC with inputs x_0 = 0, x_1 = 1 and outputs y_0 = 0, y_1 = 1; each input is received correctly with probability 1 - p and flipped with probability p.]
Mutual Information
Conditional entropy:
  H(X | Y = y_k) = \sum_{j=0}^{J-1} p(x_j | y_k) \log_2 [ 1 / p(x_j | y_k) ]

Its mean value over Y:
  H(X | Y) = \sum_{k=0}^{K-1} H(X | Y = y_k) p(y_k)
           = \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j, y_k) \log_2 [ 1 / p(x_j | y_k) ]

-> H(X | Y) is a conditional entropy (the equivocation): the amount of uncertainty remaining about the channel input after the channel output has been observed.

Mutual information: the uncertainty about the input resolved by observing the output,
  I(X; Y) = H(X) - H(X | Y),   and
  I(X; Y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 [ p(y_k | x_j) / p(y_k) ]
Properties of Mutual Information
(A simple example for a 2-by-2 DMC is instructive here.)

Symmetric:      I(X; Y) = I(Y; X)
Non-negative:   I(X; Y) ≥ 0
                I(X; Y) = H(Y) - H(Y | X)
                I(X; Y) = H(X) + H(Y) - H(X, Y)
where
  H(X, Y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 [ 1 / p(x_j, y_k) ]

[Figure: Venn-style diagram relating H(X), H(Y), H(X | Y), H(Y | X), I(X; Y), and H(X, Y).]
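The slide itself suggests a simple 2-by-2 example. The Python sketch below is an added illustration with an assumed crossover probability of 0.1 and equiprobable inputs: it computes H(X), H(Y), H(X, Y), the equivocation H(X | Y) directly from its definition, and checks that the three expressions for I(X; Y) listed above agree.

```python
import math

def H(dist):
    """Entropy in bits of a list of probabilities (zero terms contribute nothing)."""
    return sum(p * math.log2(1 / p) for p in dist if p > 0)

# 2-by-2 DMC: input prior and transition matrix p(y_k | x_j) (illustrative values).
p_x = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.1, 0.9]]

joint = [[P[j][k] * p_x[j] for k in range(2)] for j in range(2)]
p_y = [joint[0][k] + joint[1][k] for k in range(2)]

H_X, H_Y = H(p_x), H(p_y)
H_XY = H([joint[j][k] for j in range(2) for k in range(2)])    # joint entropy H(X,Y)

# Equivocation H(X|Y) and H(Y|X) straight from the definitions, using p(x|y) = p(x,y)/p(y).
H_X_given_Y = sum(joint[j][k] * math.log2(p_y[k] / joint[j][k]) for j in range(2) for k in range(2))
H_Y_given_X = sum(joint[j][k] * math.log2(p_x[j] / joint[j][k]) for j in range(2) for k in range(2))

I_a = H_X - H_X_given_Y
I_b = H_Y - H_Y_given_X
I_c = H_X + H_Y - H_XY
print(f"I(X;Y) = {I_a:.4f} = {I_b:.4f} = {I_c:.4f} bits")  # all three forms agree (~0.5310)
```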
Channel Capacity
For a DMC with input X, output Y, and transition probabilities p(y_k | x_j):
  I(X; Y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 [ p(y_k | x_j) / p(y_k) ]
where
  p(x_j, y_k) = p(y_k | x_j) p(x_j),   p(y_k) = \sum_{j=0}^{J-1} p(y_k | x_j) p(x_j)

I(X; Y) depends only on the input distribution {p(x_j), j = 0, 1, ..., J-1} and the channel.
Since {p(x_j)} is independent of the channel, it is possible to maximize I(X; Y) with respect to {p(x_j)}.

Definition of channel capacity:
  C = max_{ {p(x_j)} } I(X; Y)   (bits per channel use)
Example: for the BSC,
  C = max_{p(x)} I(X; Y) = I(X; Y) |_{p(x_0) = 0.5}
    = 1 + p \log_2 p + (1 - p) \log_2 (1 - p) = 1 - H(p)

[Figure: the capacity C of the BSC versus the crossover probability p; C = 1.0 at p = 0 and p = 1, and C = 0 at p = 0.5.]
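A brief sketch (added for illustration) evaluating C = 1 - H(p) at a few crossover probabilities, matching the curve described above.

```python
import math

def bsc_capacity(p: float) -> float:
    """C = 1 - H(p) bits per channel use for a BSC with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p = {p:<4}  C = {bsc_capacity(p):.4f} bits/channel use")
# C falls from 1 bit at p = 0 to 0 bits at p = 0.5, as in the figure above.
```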
Channel Coding Theorem
For reliable communication, channel encoding and decoding are needed.
Is there a coding scheme that makes the error probability as small as desired while keeping the code rate from being too small?

=> Shannon's second theorem (the noisy coding theorem):

Let a DMS with alphabet X have entropy H(X) and produce symbols once every T_s seconds, and let a DMC have capacity C and be used once every T_c seconds. Then:
 i) if H(X) / T_s ≤ C / T_c, there exists a coding scheme for which the source output can be transmitted over the channel with an arbitrarily small probability of error;
 ii) if H(X) / T_s > C / T_c, it is not possible to transmit with an arbitrarily small error probability.
Example: for the BSC with equiprobable source symbols (p_0 = 0.5), so that H(X) = 1 bit, the condition for reliable communication becomes
  1 / T_s ≤ C / T_c.
Let T_c / T_s = r; then the condition is r ≤ C.

For r ≤ C there exists a code (with code rate less than or equal to C) capable of achieving an arbitrarily low probability of error.
The code rate is r = k / n, where k is the number of input (message) bits and n is the number of coded bits.
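The following Python sketch is an added illustration of the rate-versus-capacity trade-off, not a construction from the slides: for an assumed BSC crossover probability p = 0.1 the capacity is C ≈ 0.53, so the theorem guarantees that codes with rates up to C can achieve arbitrarily small error. The simulated rate-1/3 repetition code is only a crude example; it lowers the error rate but does not approach the theorem's promise.

```python
import math
import random

random.seed(0)
p = 0.1                                                   # BSC crossover probability (assumed)
C = 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)     # capacity, ~0.531 bits per channel use

def bsc(bit: int) -> int:
    """Pass one bit through the BSC (flip with probability p)."""
    return bit ^ (random.random() < p)

# Rate r = k/n = 1/3 repetition code: send each bit 3 times, decode by majority vote.
n_bits, raw_errors, coded_errors = 20000, 0, 0
for _ in range(n_bits):
    b = random.randint(0, 1)
    raw_errors += bsc(b) != b
    votes = sum(bsc(b) for _ in range(3))
    coded_errors += (votes >= 2) != b

print(f"capacity C = {C:.3f}; code rate r = 1/3 satisfies r <= C, so reliable coding exists")
print(f"uncoded bit error rate  ~ {raw_errors / n_bits:.4f}")    # ~0.10
print(f"repetition-code BER     ~ {coded_errors / n_bits:.4f}")  # ~3p^2 - 2p^3 = 0.028
```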
Differential Entropy
Differential entropy:
  h(X) = \int_{-\infty}^{\infty} f_X(x) \log_2 [ 1 / f_X(x) ] dx
where f_X(x) is the p.d.f. of X.

This is the extension of entropy to continuous random variables, and the basis for deriving the channel capacity theorem.
Maximum Differential Entropy for a Specified Variance

Find the p.d.f. f_X(x) for which h(X) is maximum, subject to
  i)  \int_{-\infty}^{\infty} f_X(x) dx = 1
  ii) \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x) dx = \sigma^2 = const
where \mu is the mean and \sigma^2 is the variance.

Since \sigma^2 is a measure of average power, this is a maximization under a constant-power constraint.
Maximum Differential Entropy for a Specified Variance (cont)

The solution is based on the calculus of variations and the use of Lagrange multipliers:
  I = \int_{-\infty}^{\infty} [ f_X(x) \log_2 ( 1 / f_X(x) ) + \lambda_1 f_X(x) + \lambda_2 (x - \mu)^2 f_X(x) ] dx
should be stationary to obtain the maximum entropy.

The desired form of f_X(x) is
  f_X(x) = ( 1 / \sqrt{2\pi\sigma^2} ) exp( -(x - \mu)^2 / (2\sigma^2) )
i.e. the Gaussian p.d.f.

The maximum entropy is
  h(X) = (1/2) \log_2 ( 2\pi e \sigma^2 )

=> This is why the Gaussian channel model is so widely used.
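As a check on the result above, a small Monte Carlo sketch (added here; the mean and variance are arbitrary choices) estimates h(X) = E[log2(1 / f_X(X))] for a Gaussian and compares it with (1/2) log2(2 pi e sigma^2); it also shows that a uniform density with the same variance has smaller differential entropy, consistent with the maximum-entropy property.

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 2.0   # assumed mean and standard deviation

# Closed form: h(X) = 0.5 * log2(2*pi*e*sigma^2) for a Gaussian random variable.
h_closed = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

# Monte Carlo estimate: h(X) = E[log2(1 / f_X(X))] over samples drawn from f_X.
def gauss_pdf(x):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

samples = [random.gauss(mu, sigma) for _ in range(200_000)]
h_mc = sum(math.log2(1 / gauss_pdf(x)) for x in samples) / len(samples)

# A uniform density with the same variance (width sigma*sqrt(12)) has h = log2(sigma*sqrt(12)).
h_uniform = math.log2(sigma * math.sqrt(12))

print(f"Gaussian, closed form : {h_closed:.4f} bits")   # ~3.0471 for sigma = 2
print(f"Gaussian, Monte Carlo : {h_mc:.4f} bits")
print(f"uniform, same variance: {h_uniform:.4f} bits")  # ~2.7925, smaller as expected
```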
Mutual Information for Continuous Random Variables

  I(X; Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y) \log_2 [ f_X(x | y) / f_X(x) ] dx dy
where f_{X,Y}(x, y) is the joint p.d.f. of X and Y, and f_X(x | y) is the conditional p.d.f. of X given that Y = y.

Maximizing I(X; Y) subject to the power constraint E[X_k^2] = P (constant) leads to the most important channel capacity result.
Shannon's Channel Capacity Theorem

For band-limited, power-limited Gaussian channels:
  C = B \log_2 ( 1 + P / N )   (bits/s)

This is the capacity of a channel of bandwidth B, perturbed by additive white Gaussian noise of power spectral density N_0 / 2 and limited in bandwidth to B; P is the average transmitted power and N = N_0 B is the noise power.

- It is not possible to transmit at a rate higher than C reliably by any means.
- The theorem does not say how to find the coding and modulation that achieve the maximum capacity, but it indicates that to approach this limit the transmitted signal should have statistical properties approximating those of Gaussian noise.
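A minimal sketch (added illustration; the bandwidth and SNR values are assumptions, loosely telephone-channel-like) evaluating C = B log2(1 + P / (N_0 B)).

```python
import math

def shannon_capacity(B_hz: float, P_watts: float, N0: float) -> float:
    """C = B * log2(1 + P / (N0 * B)) bits/s for an AWGN channel of bandwidth B."""
    return B_hz * math.log2(1 + P_watts / (N0 * B_hz))

# Illustrative numbers: B = 3 kHz and an SNR of 30 dB (P / (N0*B) = 1000).
B = 3000.0
N0 = 1e-9                 # noise p.s.d. is N0/2; noise power is N = N0 * B
P = 1000 * N0 * B         # choose P so that SNR = P / N = 1000 (30 dB)
print(f"C = {shannon_capacity(B, P, N0):.0f} bits/s")   # ~29,902 bits/s
```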
Bandwidth Efficiency Diagram

Define the ideal system as R_b = C, so that the average transmitted power is P = E_b C, where E_b is the transmitted energy per bit. Then
  C / B = \log_2 ( 1 + (E_b / N_0) (C / B) )
or, equivalently,
  E_b / N_0 = ( 2^{C/B} - 1 ) / ( C / B )

For an infinite-bandwidth channel,
  E_b / N_0 -> ln 2 = 0.693  =>  -1.6 dB   (the Shannon limit)
and
  C_\infty = lim_{B -> \infty} C = ( P / N_0 ) \log_2 e

[Figure: bandwidth-efficiency diagram, C/B (from 0.1 to 20, log scale) versus E_b/N_0 in dB (from -10 to 50); the boundary R_b = C separates the achievable region R_b < C from the unattainable region R_b > C, and the curve approaches the Shannon limit of -1.6 dB as C/B -> 0.]
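To reproduce the boundary of the diagram numerically, the sketch below (an added illustration) evaluates E_b/N_0 = (2^{C/B} - 1) / (C/B) at several spectral efficiencies and confirms the -1.6 dB Shannon limit as C/B -> 0.

```python
import math

def ebno_db(spectral_eff: float) -> float:
    """Required Eb/N0 in dB on the ideal boundary Rb = C, for spectral efficiency C/B."""
    ebno = (2 ** spectral_eff - 1) / spectral_eff
    return 10 * math.log10(ebno)

for eta in (0.1, 1, 2, 4, 8):
    print(f"C/B = {eta:>4}  ->  Eb/N0 = {ebno_db(eta):6.2f} dB")

# As C/B -> 0 (infinite bandwidth), Eb/N0 -> ln 2:
print(f"Shannon limit: 10*log10(ln 2) = {10 * math.log10(math.log(2)):.2f} dB")  # about -1.6 dB
```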
