
Solutions to Information Theory Exercise Problems 5–8

Exercise 5

(a) An error-correcting (7,4) Hamming code combines four data bits b3, b5, b6, b7 with three
error-correcting bits: b1 = b3 ⊕ b5 ⊕ b7 , b2 = b3 ⊕ b6 ⊕ b7 , and b4 = b5 ⊕ b6 ⊕ b7 . The 7-bit
block is then sent through a noisy channel, which corrupts one of the seven bits. The following
incorrect bit pattern is received:
b1 b2 b3 b4 b5 b6 b7
1 1 0 1 0 0 0
Evaluate three syndromes that can be derived upon reception of this corrupted 7-bit block:
s1 = b1 ⊕ b3 ⊕ b5 ⊕ b7 , s2 = b2 ⊕ b3 ⊕ b6 ⊕ b7 , s4 = b4 ⊕ b5 ⊕ b6 ⊕ b7 , and provide the corrected
7-bit block that was the original input to this noisy channel.

Solution:

(a) The three syndromes that are derived upon reception of the 7-bit block evaluate to:

s1 = b1 ⊕ b3 ⊕ b5 ⊕ b7 = 1 ⊕ 0 ⊕ 0 ⊕ 0 = 1
s2 = b2 ⊕ b3 ⊕ b6 ⊕ b7 = 1 ⊕ 0 ⊕ 0 ⊕ 0 = 1
s4 = b4 ⊕ b5 ⊕ b6 ⊕ b7 = 1 ⊕ 0 ⊕ 0 ⊕ 0 = 1

Because they are not all 0, they show that a 1-bit error did occur, namely at position b_{s4 s2 s1} = b_{111} = b_7.
So the error-corrected 7-bit block, inferred to be the original bit pattern that was the input
to this noisy channel, is:
b1 b2 b3 b4 b5 b6 b7
1 1 0 1 0 0 1
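
As a quick numerical cross-check (an illustrative sketch, not part of the original solution), a few lines of Python reproduce this syndrome decoding:

# Syndrome decoding of the received (7,4) Hamming block above.
received = [1, 1, 0, 1, 0, 0, 0]            # bits b1..b7 as received
b = lambda k: received[k - 1]                # 1-indexed access to bit b_k

s1 = b(1) ^ b(3) ^ b(5) ^ b(7)               # parity check over positions {1, 3, 5, 7}
s2 = b(2) ^ b(3) ^ b(6) ^ b(7)               # parity check over positions {2, 3, 6, 7}
s4 = b(4) ^ b(5) ^ b(6) ^ b(7)               # parity check over positions {4, 5, 6, 7}

error_position = 4 * s4 + 2 * s2 + s1        # binary number (s4 s2 s1) points at the flipped bit
corrected = list(received)
if error_position:                           # an all-zero syndrome would mean no single-bit error
    corrected[error_position - 1] ^= 1

print(s1, s2, s4, error_position, corrected)
# prints: 1 1 1 7 [1, 1, 0, 1, 0, 0, 1]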

(b) Consider a binary symmetric channel with error probability p that any bit may be flipped.
Two possible error-correcting coding schemes are available: Hamming, or simple repetition.

(i ) Without any error-correcting coding scheme in place, state all the conditions that
would maximise the channel capacity. Include conditions on the error probability p and
also on the probability distribution of the binary source input symbols.
Solution:

(i ) With no error-correcting coding scheme in place, the capacity of this channel would be
maximised if: (1) the binary source had probabilities (0.5, 0.5) for the two input symbols;
and (2) the bit error probability was either p = 0, or p = 1.
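
For reference (a standard result, stated here for completeness rather than taken from the original solution), the capacity of the binary symmetric channel is

C = 1 - H(p) = 1 + p \log_2 p + (1-p)\log_2(1-p) \ \text{bits per symbol},

which attains its maximum of 1 bit per symbol exactly when p = 0 or p = 1, with the maximising input distribution being the uniform one (0.5, 0.5).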

(ii ) If a (7,4) Hamming code is used to deliver error correction for up to one flipped bit
in any block of seven bits, provide an expression for the residual error probability Pe that
such a scheme would fail.

Solution:

(ii ) A (7,4) Hamming code can always correct errors provided that any block of 7 bits
contains no more than 1 flipped bit. It will fail if 2 or more bits have flipped. Thus, its
residual error rate Pe is simply the probability that two or more bits in 7 have flipped
(and the rest have not), summed over all possible “ways” of choosing 2 or more from 7.
Hence we have a combinatorial term times a probability term, in a binomial series:
P_e = \sum_{i=2}^{7} \binom{7}{i} p^i (1-p)^{7-i}
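
As an illustrative numerical evaluation (not part of the original solution), this binomial tail can be computed directly, for example at an assumed bit-flip probability p = 0.01:

# Residual error rate of the (7,4) Hamming scheme: two or more flips in a block of 7.
from math import comb

def hamming_residual_error(p):
    return sum(comb(7, i) * p**i * (1 - p)**(7 - i) for i in range(2, 8))

print(hamming_residual_error(0.01))          # approximately 2.0e-3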

(iii ) If repetition were used to try to achieve error correction by repeating every message an
odd number of times N = 2m + 1 for some integer m, followed by majority voting, provide
an expression for the residual error probability Pe that the repetition scheme would fail.
Solution:

(iii ) If a repetition strategy is used instead, the majority voting scheme will fail if more
than half of the number N = 2m + 1 of repeated transmissions (namely m + 1 of them)
had errors. Thus we again have a binomial remainder series for the residual error rate:
P_e = \sum_{i=m+1}^{2m+1} \binom{2m+1}{i} p^i (1-p)^{2m+1-i}
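
Again as an illustrative check (not part of the original solution), the repetition scheme's failure probability can be evaluated for example values of p and m:

# Majority voting over N = 2m+1 repeats fails when m+1 or more of them arrive flipped.
from math import comb

def repetition_residual_error(p, m):
    n = 2 * m + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1, n + 1))

print(repetition_residual_error(0.01, 1))    # N = 3:  about 3.0e-4
print(repetition_residual_error(0.01, 3))    # N = 7:  about 3.4e-7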

Exercise 6

(a) What class of continuous signals has the greatest possible entropy for a given variance (i.e.
power level)? What probability density function describes the excursions taken by such signals
from their mean value?

Solution:

(a) The family of continuous signals having maximum entropy for a given variance (or power level)
is the Gaussian family. Their probability density function for excursions x around a mean value µ,
when the power level (variance) is σ², is:
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2 / 2\sigma^2}
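
For reference (a standard result, added here for completeness rather than taken from the original solution), the differential entropy attained by this maximising density depends only on the variance:

h(X) = \frac{1}{2}\log_2\!\left(2\pi e \sigma^2\right) \ \text{bits},

and any other probability density with the same variance σ² has strictly smaller differential entropy.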

(b) What does the Fourier power spectrum of this class of signals look like?

Solution:

(b) The Fourier power spectrum of this class of signals is flat (uniform over all frequencies), so
it is called “white” in analogy with light. Hence the term “white noise”.

(c) Consider a noisy continuous communication channel of bandwidth W = 1 MHz, which is
perturbed by additive white Gaussian noise whose total spectral power is N0 W = 1. Continuous
signals are transmitted across such a channel, with average transmitted power P = 1,000. Give
a numerical estimate for the channel capacity, in bits per second, of this noisy channel. Then,
for a channel having the same bandwidth W but whose signal-to-noise ratio P/(N_0 W) is four times
better, repeat your numerical estimate of capacity in bits per second.

Solution:

(c) The channel capacity of a noisy continuous communication channel having bandwidth W in
Hertz, perturbed by additive white Gaussian noise whose power spectral density is N0,
when transmitting a signal with average power P (defined by its expected variance), is:

C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \quad \text{bits per second}

Therefore, for the parameter values given, the channel capacity is about 10^7 bits per second.

If the signal-to-noise ratio P/(N_0 W) of this channel were improved by a factor of four, then its
channel capacity would increase to about 12 million bits per second. (Note that 4,000 ≈ 2^12.)
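
A quick numerical check of both estimates (an illustrative sketch, not part of the original solution):

# Shannon capacity C = W * log2(1 + P/(N0*W)) for the two signal-to-noise ratios in part (c).
from math import log2

def capacity(W, snr):
    return W * log2(1 + snr)

W = 1e6                                      # bandwidth in Hz
print(capacity(W, 1000))                     # ~9.97e6 bit/s, i.e. about 10^7
print(capacity(W, 4000))                     # ~1.20e7 bit/s, i.e. about 12 million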

(d ) Suppose that for such a continuous channel with added white Gaussian noise, the ratio of
signal power to noise power is given as 30 decibels, and the frequency bandwidth W of this
channel is 10 MHz. Roughly what is the information capacity C of this channel, in bits/second?

Solution:

(d ) 30 decibels means that the ratio of signal power to noise power (SNR) is 1,000:1, whose
base-2 logarithm is about 10. Thus with the channel’s frequency bandwidth now W = 10 MHz,
we conclude that this channel’s information capacity is now C = 100 million bits/second.
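
The same formula confirms this rough figure (again only an illustrative check):

from math import log2
W, snr = 10e6, 10**(30 / 10)                 # 10 MHz bandwidth; 30 dB means an SNR of 1000
print(W * log2(1 + snr))                     # ~9.97e7 bit/s, i.e. about 100 million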

(e) With no constraints on the parameters of such a channel, is there any limit to its capacity
if you increase its signal-to-noise ratio P/(N_0 W) without limit? If so, what is that limit?

Solution:

(e) The capacity of such a channel, in bits per second, is

C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right)

Increasing the signal-to-noise ratio P/(N_0 W), the quantity inside the logarithm, without bounds
increases the channel's capacity monotonically without limit (but at a decelerating rate).

(f ) Is there any limit to the capacity of such a channel if you can increase its spectral bandwidth
W (in Hertz) without limit, while not changing N0 or P ? If so, what is that limit?

Solution:

(f ) Increasing the bandwidth W alone causes a monotonic increase in capacity, but only up to
an asymptotic limit. That limit can be evaluated by observing that for small x, the quantity
\log_e(1 + x) is approximately x. Writing

C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right) = W \log_2(e)\, \log_e\!\left(1 + \frac{P}{N_0 W}\right),

setting x = P/(N_0 W) and allowing W to become arbitrarily large, C approaches the limit
\frac{P}{N_0}\log_2(e). Thus there are diminishing returns from endlessly increasing the bandwidth,
unlike the unlimited returns enjoyed from improvement in signal-to-noise ratio.
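
An illustrative numerical check (not part of the original solution) that the capacity saturates at (P/N_0) log_2(e) as W grows, using example values P = 1000 and N0 = 10^-6:

# Capacity C(W) = W*log2(1 + P/(N0*W)) approaches the asymptote (P/N0)*log2(e).
from math import log2, e

P, N0 = 1000.0, 1e-6                         # example signal power and noise spectral density
limit = (P / N0) * log2(e)                   # asymptotic capacity, about 1.44e9 bit/s here
for W in (1e6, 1e8, 1e10, 1e12):
    C = W * log2(1 + P / (N0 * W))
    print(f"W = {W:.0e} Hz  ->  C = {C:.3e} bit/s  (limit {limit:.3e})")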

Exercise 7

Shannon’s Noisy Channel Coding Theorem showed how the capacity C of a continuous commu-
nication channel is limited by added white Gaussian noise; but other colours of noise are available.
Among the “power-law” noise profiles shown in the figure as a function of frequency ω, Brownian
noise has power that attenuates as (ω/ω_0)^{-2}, and pink noise as (ω/ω_0)^{-1}, above some minimum ω_0.

[Figure: "Colours of Noise" — power spectral density (dB) versus frequency (Hz), plotted on a logarithmic frequency axis spanning roughly 10^3 to 10^5 Hz.]

Consider three channels suffering from either white, pink, or Brownian noise. At frequency ω = ω0
all three channels have the same signal-to-noise ratio SNR(ω0 ) and it remains at this level for the
white channel, but at higher frequencies ω it improves as (ω/ω_0) for the pink channel and as (ω/ω_0)^2 for
the Brownian channel. Show that across any frequency band [ω1 , ω2 ] (ω0 < ω1 < ω2 ) the Brownian
and the pink noise channels have higher capacity than the white noise channel, and show that as
frequency grows large the Brownian channel capacity approaches twice that of the pink channel.

Solution:

Shannon’s Noisy Channel Coding Theorem implies that the information capacity of a channel
within a band of frequencies [ω1 , ω2 ] with added noise described by SNR(ω) is:
C = \int_{\omega_1}^{\omega_2} \log_2\big(1 + \mathrm{SNR}(\omega)\big)\, d\omega \quad \text{bits/sec.}

Clearly for any band of frequencies above ω0 the information capacity of the pink and the Brownian
channels exceeds that of the white channel, because their SNR(ω) is larger. As frequency ω grows
large, the “1+” term in the logarithm can be ignored and the capacity of the channel with added
pink noise becomes

C = \int_{\omega_1}^{\omega_2} \log_2\!\left(\frac{\omega}{\omega_0}\right) d\omega \quad \text{bits/sec}

and the capacity of the channel with added Brownian noise becomes

C = \int_{\omega_1}^{\omega_2} \log_2\!\left(\frac{\omega}{\omega_0}\right)^{\!2} d\omega = 2 \int_{\omega_1}^{\omega_2} \log_2\!\left(\frac{\omega}{\omega_0}\right) d\omega \quad \text{bits/sec.}
We see that the capacity of the Brownian channel approaches twice that of the pink channel, and
both are greater than that of the white noise channel having constant SNR.
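
An illustrative numerical check (not part of the original solution), approximating the band integrals by a simple midpoint sum for example values ω_0 = 10^3 and SNR(ω_0) = 1:

# As the band [w1, w2] moves to higher frequencies, the Brownian channel's capacity
# approaches twice the pink channel's, because log2(SNR^2) = 2 * log2(SNR).
from math import log2

def band_capacity(snr, w1, w2, steps=10000):
    dw = (w2 - w1) / steps
    return sum(log2(1 + snr(w1 + (k + 0.5) * dw)) for k in range(steps)) * dw

w0, snr0 = 1e3, 1.0
pink     = lambda w: snr0 * (w / w0)         # SNR grows as (w/w0)   for the pink channel
brownian = lambda w: snr0 * (w / w0)**2      # SNR grows as (w/w0)^2 for the Brownian channel

for (w1, w2) in [(1e4, 2e4), (1e6, 2e6), (1e8, 2e8)]:
    ratio = band_capacity(brownian, w1, w2) / band_capacity(pink, w1, w2)
    print(f"[{w1:.0e}, {w2:.0e}]: Brownian/pink capacity ratio = {ratio:.3f}")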

Exercise 8

(a) An inner product space V is spanned by an orthonormal system of vectors {e_1, e_2, ..., e_n},
so that for all i ≠ j the inner product ⟨e_i, e_j⟩ = 0, but every e_i is a unit vector so that ⟨e_i, e_i⟩ = 1.
We wish to represent a data set consisting of vectors u ∈ span{e_1, e_2, ..., e_n} in this space as a
linear combination of the orthonormal vectors:

u = \sum_{i=1}^{n} a_i e_i.

Derive how the coefficients a_i can be determined for any vector u, and comment on the computational
advantage of representing the data in an orthonormal system.

Solution:
(a) Given that u = \sum_{i=1}^{n} a_i e_i, when we project the vector u onto any basis vector e_i we find
that all of the inner products generated by this sum equal 0 (due to orthogonality) except for one of
them, which equals 1 (due to orthonormality):

\langle u, e_i \rangle = \langle a_1 e_1 + a_2 e_2 + \cdots + a_n e_n,\ e_i \rangle
                       = a_1 \langle e_1, e_i \rangle + a_2 \langle e_2, e_i \rangle + \cdots + a_n \langle e_n, e_i \rangle
                       = a_i.

Thus the required coefficients a_i which represent the data in this space can be obtained simply
by taking the inner product of each data vector u with each of the orthonormal vectors:
a_i = ⟨u, e_i⟩. A major computational advantage of representing data in an orthonormal system
is that the expansion coefficients a_i are just these inner product projection coefficients, so no
system of simultaneous linear equations needs to be solved.
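
A minimal numerical sketch of this point (illustrative only, not from the original solution), using a random orthonormal basis in R^5:

# With an orthonormal basis, the expansion coefficients are just inner products:
# no system of simultaneous equations has to be solved.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # columns of Q form an orthonormal basis e_1..e_5
u = rng.standard_normal(5)                         # an arbitrary data vector

a = Q.T @ u                                        # a_i = <u, e_i>, one dot product per coefficient
print(np.allclose(Q @ a, u))                       # True: u = sum_i a_i e_i is reconstructed exactly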

(b) An inner product space containing complex functions f(x) and g(x) is spanned by a set of
orthonormal basis functions {e_i}. Complex coefficients {α_i} and {β_i} therefore exist such that
f(x) = Σ_i α_i e_i(x) and g(x) = Σ_i β_i e_i(x).

Show that the inner product

\langle f, g \rangle = \sum_i \alpha_i \bar{\beta}_i.
Solution:

(b)

\langle f, g \rangle = \left\langle \sum_i \alpha_i e_i,\ \sum_j \beta_j e_j \right\rangle
                     = \sum_i \alpha_i \left\langle e_i,\ \sum_j \beta_j e_j \right\rangle
                     = \sum_i \alpha_i \sum_j \bar{\beta}_j \langle e_i, e_j \rangle
                     = \sum_i \alpha_i \bar{\beta}_i

where the last step exploits the fact that ⟨e_i, e_j⟩ = 0 for i ≠ j but ⟨e_i, e_j⟩ = 1 if i = j,
because {e_i} is an orthonormal basis. (Pulling the coefficients β_j out of the second argument
of the inner product introduces their complex conjugates, because the inner product is
conjugate-linear in its second argument.)
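
A small numerical sketch of the same identity (illustrative only, assuming the conjugate-on-the-second-argument convention), using complex coefficient vectors in place of the functions:

# With a unitary (orthonormal-column) basis matrix Q, <f, g> = sum_i alpha_i * conj(beta_i).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                       # columns of Q: an orthonormal (complex) basis

alpha = rng.standard_normal(4) + 1j * rng.standard_normal(4)
beta  = rng.standard_normal(4) + 1j * rng.standard_normal(4)
f, g = Q @ alpha, Q @ beta                   # f = sum_i alpha_i e_i,  g = sum_j beta_j e_j

lhs = np.vdot(g, f)                          # <f, g> = sum_k f_k * conj(g_k)
rhs = np.sum(alpha * np.conj(beta))
print(np.allclose(lhs, rhs))                 # True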

(c) Consider a noiseless analog communication channel whose bandwidth is 10,000 Hertz. A
signal of duration 1 second is received over such a channel. We wish to represent this continuous
signal exactly, at all points in its one-second duration, using just a finite list of real numbers
obtained by sampling the values of the signal at discrete, periodic points in time. What is
the length of the shortest list of such discrete samples required in order to guarantee that we
capture all of the information in the signal and can recover it exactly from this list of samples?

Solution:

(c) As the signal duration is T = 1 second and the channel bandwidth is W = 10, 000 Hertz,
the Nyquist Sampling Theorem tells us that 2W T = 20, 000 discrete (regularly spaced) samples
are required for exact recovery of the signal, even at points in between the samples.

(d ) Name, define algebraically, and sketch a plot of the function you would need to use in order
to recover completely the continuous signal transmitted, using just such a finite list of discrete
periodic samples of it.

Solution:

(d ) The sinc function is required to recover the signal from its discrete samples. It is defined as:

\mathrm{sinc}(t) = \frac{\sin(2\pi W t)}{2\pi W t}

Each sample point is replaced by a scaled copy of this function, scaled by the amplitude of the
sample taken, and with its sign. These copies all superimpose to reproduce the signal exactly,
even between the points where it was sampled. (!)

[Figure 6: The sinc function, sin(x)/x.]
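
A short sketch of this interpolation (illustrative only, using an assumed bandwidth and a finite list of samples, so there is a small truncation error):

# Whittaker-Shannon reconstruction: each sample contributes a sinc centred at its sampling instant.
import numpy as np

W = 4.0                                      # assumed bandwidth in Hz; sampling rate is 2W
T = 1.0 / (2 * W)                            # sampling interval
n = np.arange(200)                           # 200 samples of a band-limited test signal
samples = np.sin(2 * np.pi * 3.0 * n * T)    # a 3 Hz sinusoid, safely below W

def reconstruct(t):
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc((t - nT)/T) is the interpolation kernel
    return np.sum(samples * np.sinc((t - n * T) / T))

t = 12.31                                    # a point between sampling instants, away from the ends
print(reconstruct(t), np.sin(2 * np.pi * 3.0 * t))   # the two values agree closely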

(e) Explain why smoothing a signal, by low-pass filtering it before sampling it, can prevent
aliasing. Explain aliasing by a picture in the Fourier domain, and also show in the picture how
smoothing solves the problem. What would be the most effective low-pass filter to use for this
purpose? Draw its spectral sensitivity.

Solution:

(e) The Nyquist Sampling Theorem tells us that aliasing results when the signal contains Fourier
components higher than one-half the sampling frequency. Aliasing can be avoided by removing
such frequency components from the signal, by low-pass filtering it, before sampling the signal.
The ideal low-pass filter for this task would strictly reject all frequencies starting at one-half
the planned sampling rate, as indicated below by the ±W in trace (c).


[Figure 7: Aliasing effect example — Fourier-domain traces over the frequency axis from −W to W: (a) the original band-limited spectrum; (b) overlapping spectral replicas produced by undersampling; (c) the ideal low-pass (brick-wall) filter passing only frequencies between −W and W.]
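
A two-line numerical illustration of the aliasing phenomenon (a sketch, not part of the original solution):

# Sampling a 7 Hz tone at 10 Hz yields exactly the same samples as a 3 Hz tone,
# so the two become indistinguishable after sampling: the 7 Hz component aliases to 3 Hz.
import numpy as np

fs = 10.0                                    # sampling rate; the Nyquist limit is fs/2 = 5 Hz
n = np.arange(20)
print(np.allclose(np.cos(2 * np.pi * 7.0 * n / fs),
                  np.cos(2 * np.pi * 3.0 * n / fs)))   # True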

(f ) If a continuous signal f(t) is modulated by multiplying it with a complex exponential wave
exp(iωt) whose frequency is ω, what happens to the Fourier spectrum of the signal?

Name a very important practical application of this principle, and explain why modulation
is a useful operation. How can the original Fourier spectrum later be recovered?

Solution:

(f ) Modulation of the continuous signal by a complex exponential wave exp(iωt) will shift its
entire frequency spectrum upwards by an amount ω.

Amplitude Modulation communication is based on this principle. It allows many different
communications channels to be multiplexed into a single medium, namely the electromagnetic
spectrum, by shifting different signals up into separate frequency bands.
Each original signal with its original Fourier spectrum can be recovered by demodulating it
back down (this removes each AM carrier). This is equivalent to multiplying the transmitted
signal by the conjugate complex exponential, exp(−iωt), in the band-selecting tuner.
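
An illustrative FFT sketch of this spectrum shift (not from the original solution; the signal and carrier frequencies are arbitrary example values):

# Multiplying a signal by exp(i*2*pi*fc*t) shifts its whole spectrum up by fc;
# multiplying by exp(-i*2*pi*fc*t) shifts it back down, recovering the original spectrum.
import numpy as np

fs, N = 1000.0, 1000                         # 1 kHz sampling rate, 1 second of signal
t = np.arange(N) / fs
f = np.cos(2 * np.pi * 50.0 * t)             # baseband signal: spectral lines at +/- 50 Hz
carrier = np.exp(2j * np.pi * 200.0 * t)     # complex exponential carrier at 200 Hz

freqs = np.fft.fftfreq(N, 1 / fs)
def dominant_freqs(x):
    top = np.argsort(np.abs(np.fft.fft(x)))[-2:]       # bins of the two largest spectral lines
    return sorted(freqs[top].tolist())

print(dominant_freqs(f))                     # [-50.0, 50.0]
print(dominant_freqs(f * carrier))           # [150.0, 250.0]  -- spectrum shifted up by 200 Hz
print(dominant_freqs(f * carrier * np.conj(carrier)))  # [-50.0, 50.0] -- demodulated back down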
