
ECE458

INFORMATION THEORY AND CODING


LECTURE 7
Associate Prof. Fatma Newagy
[email protected]
LAST LECTURE TOPICS
 Joint entropy and Conditional Entropy
 Mutual Information

 Channel Capacity

TODAY’S TOPICS
 Differential Entropy
 Mutual Information for Continuous R.V.

 Gaussian Channel

 Capacity of Gaussian Channel

 Band limited Gaussian Channel

REMEMBER

 The entropy of a random variable is a function which characterizes the
“unpredictability” of that random variable; it is the average amount of
information produced by a source.

 Mutual information is a quantity that measures a relationship
between two random variables that are sampled simultaneously.
In particular, it measures how much information is
communicated, on average, in one random variable about
another.

 Channel Capacity is the tight upper bound on the rate at which
information can be reliably transmitted over a communication channel.

DIFFERENTIAL ENTROPY
 We introduce the concept of differential entropy, which is the
entropy of a continuous random variable.

 Definition Let X be a random variable with cumulative
distribution function F(x) = Pr(X ≤ x). If F(x) is continuous, the
random variable is said to be continuous.

 Let f(x) = F'(x) when the derivative is defined. If

\int_{-\infty}^{\infty} f(x)\,dx = 1,

f(x) is called the probability density function for X. The set where
f(x) > 0 is called the support set of X.

Differential Entropy

 Definition The differential entropy h(X) of a continuous random
variable X with density f(x) is defined as

h(X) = -\int_S f(x)\,\log f(x)\,dx

 where S is the support set of the random variable.

 As in the discrete case, the differential entropy depends only on
the probability density of the random variable, and therefore the
differential entropy is sometimes written as h(f) rather than h(X).

Example: Uniform Distribution

 Example (Uniform distribution) Consider a random variable
distributed uniformly from 0 to a, so that its density is 1/a from 0
to a and 0 elsewhere. Then its differential entropy is:

h(X) = -\int_0^a \frac{1}{a}\log\frac{1}{a}\,dx = \log a

 Note: For a < 1, \log a < 0, and the differential entropy is negative.
Hence, unlike discrete entropy, differential entropy can be
negative. However, 2^{h(X)} = 2^{\log a} = a is the volume of the support
set, which is always nonnegative, as we expect.

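 As a quick numerical sanity check (a sketch added here, not part of the original slides), the Python snippet below approximates h(X) for the uniform density with a Riemann sum and compares it with log2(a) in bits; the function name and sample count are arbitrary.

```python
import numpy as np

def uniform_diff_entropy(a, n=1_000_000):
    """Approximate h(X) = -integral of f log2 f over [0, a] by a midpoint Riemann sum."""
    dx = a / n
    x = np.linspace(dx / 2, a - dx / 2, n)   # midpoints of n sub-intervals of [0, a]
    f = np.full_like(x, 1.0 / a)             # uniform density: f(x) = 1/a on [0, a]
    return -np.sum(f * np.log2(f)) * dx

for a in (0.5, 1.0, 4.0):
    print(f"a = {a}: numerical = {uniform_diff_entropy(a):+.4f}, exact log2(a) = {np.log2(a):+.4f}")
```

 For a = 0.5 the result is −1 bit, illustrating that differential entropy can indeed be negative.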
Example: Normal Distribution

 Let

X \sim \phi(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/(2\sigma^2)}

 Then, calculating the differential entropy in nats, we obtain

h(\phi) = -\int \phi(x)\ln\phi(x)\,dx = -\int \phi(x)\left[-\frac{x^2}{2\sigma^2} - \ln\sqrt{2\pi\sigma^2}\,\right]dx

        = \frac{E[X^2]}{2\sigma^2} + \frac{1}{2}\ln 2\pi\sigma^2
        = \frac{1}{2} + \frac{1}{2}\ln 2\pi\sigma^2
        = \frac{1}{2}\ln e + \frac{1}{2}\ln 2\pi\sigma^2
        = \frac{1}{2}\ln 2\pi e\sigma^2 \ \text{nats}

 Changing the base of the logarithm:

h(\phi) = \frac{1}{2}\log_2 2\pi e\sigma^2 \ \text{bits}

 Note: The Gaussian distribution maximizes the differential entropy over all
distributions with the same variance.
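 The closed form can also be checked numerically. The following Python sketch (added for illustration; σ = 2 is an arbitrary choice) estimates h(X) = E[−log₂ φ(X)] by Monte Carlo and compares it with ½ log₂ 2πeσ².

```python
import numpy as np

def gaussian_diff_entropy_mc(sigma, n=1_000_000, seed=0):
    """Monte Carlo estimate of h(X) = E[-log2 phi(X)] for X ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)
    log_phi = -x**2 / (2 * sigma**2) - 0.5 * np.log(2 * np.pi * sigma**2)  # ln phi(x)
    return -np.mean(log_phi) / np.log(2)        # convert nats -> bits

sigma = 2.0                                     # illustrative value
closed_form = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)
print(f"Monte Carlo estimate      : {gaussian_diff_entropy_mc(sigma):.4f} bits")
print(f"(1/2) log2(2 pi e sigma^2): {closed_form:.4f} bits")
```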
MUTUAL INFORMATION FOR CONTINUOUS R.V.

 Definition The mutual information I(X; Y) between two random
variables with joint density f(x, y) is defined as:

I(X; Y) = \int\!\!\int f(x, y)\,\log\frac{f(x, y)}{f(x)\,f(y)}\,dx\,dy

 From the definition it is clear that:

I(X; Y) = h(X) − h(X|Y)
        = h(Y) − h(Y|X)
        = h(X) + h(Y) − h(X, Y)

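 To make these identities concrete, here is a small Python sketch (added for illustration) for jointly Gaussian X and Y with correlation ρ. It uses the Gaussian entropy formula from the previous slide together with its standard bivariate extension h(X, Y) = ½ log₂((2πe)² det Σ), which is not derived in these slides.

```python
import numpy as np

def gaussian_mi_bits(var_x, var_y, rho):
    """I(X;Y) = h(X) + h(Y) - h(X,Y) for jointly Gaussian X, Y with correlation rho."""
    cov = rho * np.sqrt(var_x * var_y)
    sigma = np.array([[var_x, cov], [cov, var_y]])          # joint covariance matrix
    h_x = 0.5 * np.log2(2 * np.pi * np.e * var_x)
    h_y = 0.5 * np.log2(2 * np.pi * np.e * var_y)
    h_xy = 0.5 * np.log2((2 * np.pi * np.e) ** 2 * np.linalg.det(sigma))
    return h_x + h_y - h_xy                                  # equals -0.5 * log2(1 - rho^2)

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: I(X;Y) = {gaussian_mi_bits(1.0, 1.0, rho):.4f} bits")
```

 As expected, independent variables (ρ = 0) share zero information, and the mutual information grows without bound as |ρ| → 1.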
GAUSSIAN CHANNEL

 The most important continuous-alphabet channel: the AWGN channel

Y_i = X_i + Z_i,  with noise Z_i \sim N(0, \sigma^2) independent of X_i

 A model for many practical communication channels, e.g., satellite links

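 A minimal simulation sketch of this channel model (added for illustration; the power P = 4, noise variance N = 1, and block length are arbitrary example values, with N playing the role of σ² above):

```python
import numpy as np

P, N, n = 4.0, 1.0, 1_000_000       # illustrative signal power, noise variance, block length
rng = np.random.default_rng(1)

x = rng.normal(0.0, np.sqrt(P), n)  # Gaussian input with E[X^2] = P
z = rng.normal(0.0, np.sqrt(N), n)  # noise Z ~ N(0, N), independent of X
y = x + z                           # channel output Y = X + Z

print(f"empirical input power : {np.mean(x**2):.3f}  (P = {P})")
print(f"empirical output power: {np.mean(y**2):.3f}  (P + N = {P + N})")
```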
CAPACITY OF GAUSSIAN CHANNEL

 The information capacity of the Gaussian channel with power constraint P
and noise variance N is

C = \max_{f(x):\, E[X^2] \le P} I(X; Y)

 Expanding the mutual information with Y = X + Z,

I(X; Y) = h(Y) − h(Y|X) = h(Y) − h(Z) \le \frac{1}{2}\log 2\pi e(P + N) − \frac{1}{2}\log 2\pi e N = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)

since Var(Y) \le P + N and the Gaussian maximizes differential entropy for a
given variance.

 The bound is attained when Y is Gaussian; since Z is Gaussian, the
optimizing input distribution is X \sim N(0, P).

 C is the maximum data rate: one can also show that this C is the maximum
rate achievable for the AWGN channel.

 Definition: A rate R is achievable for the Gaussian channel with power
constraint P if there exists a sequence of (2^{nR}, n) codes whose codewords
satisfy the power constraint \frac{1}{n}\sum_{i=1}^{n} x_i^2 \le P and whose
maximal probability of error tends to 0 as n \to \infty.
GAUSSIAN CHANNEL CAPACITY THEOREM
Theorem. The capacity of a Gaussian channel with power constraint
P and noise variance N is

C = \frac{1}{2}\log_2\left(1 + \frac{P}{N}\right) \ \text{bits per transmission}
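 For example (a numerical illustration added here, not from the original slide), a signal-to-noise ratio of P/N = 15 gives

C = \frac{1}{2}\log_2(1 + 15) = \frac{1}{2}\log_2 16 = 2 \ \text{bits per transmission}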
BAND LIMITED GAUSSIAN CHANNEL

 Consider a channel band-limited to W Hz, with signal power P and additive
white Gaussian noise of power spectral density N_0/2 (noise power N_0 W in
the band).

 Sampling at the Nyquist rate gives 2W independent uses of the discrete-time
Gaussian channel per second, so the capacity is

C = W \log_2\left(1 + \frac{P}{N_0 W}\right) \ \text{bits per second}
EXAMPLE: TELEPHONE LINE

 Telephone signals are band-limited to 3300 Hz

 SNR = 33 dB, i.e., P/(N_0 W) ≈ 2000

 Capacity = W \log_2(1 + P/(N_0 W)) = 3300 \cdot \log_2(2001) ≈ 36 kb/s

 Practical modems achieve transmission rates up to 33.6 kb/s in the uplink
and downlink

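 The slide's numbers can be reproduced with a few lines of Python (a sketch added here for illustration):

```python
import numpy as np

W = 3300.0                      # bandwidth in Hz (from the slide)
snr_db = 33.0                   # SNR in dB (from the slide)
snr = 10 ** (snr_db / 10)       # linear SNR, about 2000
C = W * np.log2(1 + snr)        # capacity of the band-limited Gaussian channel, bits/s

print(f"linear SNR ≈ {snr:.0f}")
print(f"capacity C ≈ {C / 1000:.1f} kb/s")   # ≈ 36 kb/s
```

 The 33 dB figure corresponds to a linear SNR of roughly 2000, and the resulting capacity is about 36 kb/s, matching the slide.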
IMPLICATIONS OF THE INFORMATION CAPACITY THEOREM
