ECEVSP L02 Info Theory


Communication Systems

Technology Embedded in Daily Life

VSP 2019, Dr. Lutz Lampe & Dr. Paul Lusina


The basic element of a communication system is a communication “link” … which connects a “source” and a “sink” over space and/or time.
[Figure: a source and a sink connected by a link, illustrated with Morse code signalling.]

Morse code alphabet:
A •–      J •–––    S •••
B –•••    K –•–     T –
C –•–•    L •–••    U ••–
D –••     M ––      V •••–
E •       N –•      W •––
F ••–•    O –––     X –••–
G ––•     P •––•    Y –•––
H ••••    Q ––•–    Z ––••
I ••      R •–•
Communication system (link) model

[Block diagram: Source → Compression Encoder → Error Correction Encoder → Modulator → Medium → Demodulator → Error Correction Decoder → Compression Decoder → Sink, with labels marking which parts of the chain are digital and which are analog. Modulator, Medium and Demodulator together are bracketed as the “Communication channel”.]

[Communication system block diagram repeated, highlighting the Medium.]

The Medium can be, for example:
optical fibre, cable, wires
air + (moving) objects
paper
solid state memory
paintings

Here the channel includes hardware components such as antennas, couplers (e.g. handwriting style), amplifiers, digital-analog converters, and filters (e.g. circuits or digital processing).

[Block diagram detail: Modulator → Medium → Demodulator, bracketed as the communication channel.]

[Communication system block diagram repeated.]

The communication channel may also include more processing components (e.g. software).

The communication channel is the part of a communication system which cannot or should not be changed (and thus is not optimized).
We will look at a reliable (error-free) channel.

[Communication system block diagram repeated.]

Three questions of compression


1. How much information is contained in a data file?
2. How can we compress this file?
3. What is the smallest file size possible (limit)?
We will look at an unreliable channel.
[Communication system block diagram repeated.]

Three questions of error correction coding


1. How do we express the reliability of the channel?
2. How can we encode (decode) data to achieve reliable communication?
3. What is the smallest packet size possible (limit)?
But …what is information?
And how could we measure it?
Your inner information detector

You’re an editor and three stories come across your desk.


Which one do you print?

A: Clock strikes 12 at noon!
B: Flash flood in the Sahara desert.
C: UBC B-Line filled to capacity.
Your inner information detector

You’re an editor and three stories come across your desk.


Which one do you print?

A: Clock strikes 12 at noon!
B: Flash flood in the Sahara desert.
C: UBC B-Line filled to capacity.

Is ‘information’ somehow related to the probability of an event?
The story worth printing is the least likely of the three events.
Heuristic measure of information

Big surprise = Lots of information!

Information is the resolution of uncertainty.


Claude E. Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electronic engineer, and cryptographer known as "the father of information theory". He was also a juggler.
https://en.wikipedia.org/wiki/Claude_Shannon

See http://eecs.umich.edu/eecs/about/articles/2016/shannonposters/poster10.pdf
Where does randomness exist in
a telecommunication system?

[Communication system block diagram repeated.]
Probabilities and Ensembles
ensemble: $X : (x, \mathcal{A}_X, \mathcal{P}_X)$

$x$: outcome, value of the random variable
$\mathcal{A}_X = \{a_1, a_2, \ldots, a_I\}$: alphabet, the set of possible values (alphabet size $I$)
$\mathcal{P}_X = \{p_1, p_2, \ldots, p_I\}$: set of probabilities for the outcomes


Probabilities and Ensembles
ensemble: $X : (x, \mathcal{A}_X, \mathcal{P}_X)$

Probability: $P(x = a_i) = p_i \;(= p(a_i))$

Facts: $p_i \ge 0 \;\;\forall i$, and $\displaystyle\sum_{i=1}^{I} p_i = 1$
Joint Ensembles
$XY$: the outcome is an ordered pair $(x, y)$ with $x \in \mathcal{A}_X = \{a_1, \ldots, a_I\}$, $y \in \mathcal{A}_Y = \{b_1, \ldots, b_J\}$

Joint probability: $P(x = a_i, y = b_j) = p_{i,j} \;(= p(a_i, b_j))$

Conditional probability: $P(x = a_i \mid y = b_j) = \dfrac{P(x = a_i, y = b_j)}{P(y = b_j)}$

Marginal probability: $P(x = a_i) = \displaystyle\sum_{y \in \mathcal{A}_Y} P(x = a_i, y)$
[Figures 2.1, 2.2 and 2.3 (MacKay)]
Joint Ensembles
1) $\displaystyle\sum_{i=1}^{I} P(x = a_i \mid y = b_j) = \;?$

2) $P(x = a_i, y = b_j) \overset{?}{=} P(x = a_i)\, P(y = b_j)$
only for independent random variables


Examples
1) A fair coin is tossed five times. What is the probability of it landing
(Heads, Heads, Heads, Tails, Tails)?

2) Monty-Hall Problem:
Suppose you are on a game show, and you are given the choice of
three doors: Behind one door is a car; behind the others, goats.
You pick a door, say No. 1, and the host, who knows what is behind
the doors, opens another door, say No. 3, which has a goat.
He then says to you, “Do you want to pick door No. 2?”
Is it to your advantage to switch your choice?

https://en.wikipedia.org/wiki/Monty_Hall_problem
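Both examples can be checked numerically. Below is a minimal Python sketch (not part of the original slides): it computes the probability of the exact sequence (H, H, H, T, T) and estimates the Monty Hall win rates for staying vs. switching by simulation; the function names are illustrative.

```python
import random

# Example 1: probability of the exact sequence (H, H, H, T, T) with a fair coin.
# Each toss is independent with probability 1/2, so the answer is (1/2)**5.
p_sequence = 0.5 ** 5
print(f"P(H,H,H,T,T) = {p_sequence}")  # 0.03125 = 1/32

# Example 2: Monty Hall, estimated by Monte Carlo simulation.
def monty_hall_trial(switch: bool) -> bool:
    """Return True if the player wins the car in one simulated game."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ≈ {wins / trials:.3f}")  # ≈ 1/3 vs. ≈ 2/3
```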
Measure of information
- what is 1 bit?
Claude E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol.
27, pp. 379–423 and 623–656, July and October 1948.
Available: http://moser.cm.nctu.edu.tw/nctu/doc/shannon1948.pdf

http://www.inference.org.uk/itprnn/book.pdf
Information from a message
Message is the outcome of rolling dice:
1) roll with one die: 6 possible outcomes 🎲
2) roll with two dice: 36 possible outcomes 🎲🎲

Which observation resolves more uncertainty about the outcome?

3) you roll one die 3 times 🎲🎲🎲

How much more uncertain should we be about an outcome compared to 1)?
Information from a message
Observations
A) the information measure should increase with the alphabet size $I$
B) the information measure should be additive

Experiment (3) could be seen as rolling three dice once, so $I = 6^3$.
But the message should only contain 3 times more information than the outcome of Experiment (1) (additivity).
Information from a message
Define: $I_H = \log_2(I)$

The logarithm with base 2 gives the unit “bit”.

Example: A village has eight telephones. How long must the phone number be?
Example: How many yes/no questions do you need to find a certain guest (unknown to you) at a party of 32 people?
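A quick sanity check of both examples, as a sketch: it assumes we measure information as $I_H = \log_2(I)$ and round up to a whole number of binary digits or questions.

```python
import math

# Eight telephones: each phone needs a distinct binary number.
phones = 8
print(math.log2(phones))             # 3.0 -> a 3-bit (3 binary digit) phone number suffices

# Party of 32 guests: each yes/no question can halve the remaining candidates.
guests = 32
print(math.ceil(math.log2(guests)))  # 5 -> 5 yes/no questions are enough
```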
Information from a message
Consider the two binary messages:
X: x = 0 if you lost in the lottery, x = 1 if you won
Y: y = 0 if you toss tails, y = 1 if you toss heads with a fair coin

For both, $I_H = \log_2(I) = 1$ bit.

Does this make (intuitive) sense?
Does observing X or Y resolve more uncertainty?
Information from a message
Observation
C) the information measure should take into account the probabilities of the different outcomes

Define: $I_S(x) = \log_2 \dfrac{1}{p(x)}$

Lottery 6/49:
x = 1: you win the jackpot, $P(x=1) = 1/13{,}983{,}816$ ≣ 23.7 bit
x = 0: you did not win it, $P(x=0) = 1 - P(x=1)$ ≣ $10^{-7}$ bit
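A short sketch (not from the slides) reproducing the two Lotto 6/49 values with $I_S(x) = \log_2(1/p(x))$; the helper name `surprisal` is illustrative.

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content in bits: log2(1/p)."""
    return math.log2(1.0 / p)

p_win = 1 / 13_983_816
print(f"I_S(win)  = {surprisal(p_win):.1f} bit")      # ≈ 23.7 bit
print(f"I_S(lose) = {surprisal(1 - p_win):.2e} bit")  # ≈ 1.0e-07 bit
```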
Entropy of a Source
Average information (reduction of uncertainty when observing the outcome) of a message (random experiment)

Define: $H(X) = \displaystyle\sum_{x \in \mathcal{A}_X} p(x) \log_2\!\left(\frac{1}{p(x)}\right)$

This is referred to as the entropy of a message or the entropy of a source.

[Table 2.9 (MacKay)]
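A minimal sketch of the definition above; terms with $p(x) = 0$ are skipped, consistent with the fact below that impossible events do not contribute to the entropy.

```python
import math

def entropy(probs) -> float:
    """H(X) = sum_x p(x) * log2(1/p(x)) in bits; outcomes with p(x) = 0 contribute nothing."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([1/6] * 6))   # fair die: log2(6) ≈ 2.585 bit
print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))  # biased coin: ≈ 0.469 bit
```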
Entropy of a message

Facts:
1) $0 \le H(X) \le \log_2(I)$
2) The entropy of a message is maximized if and only if $p(x) = 1/I \;\;\forall x \in \mathcal{A}_X$
3) Impossible events do not contribute to entropy: $\lim_{p \to 0^{+}} p \log_2(p) = 0$
Entropy of a binary message
Coin toss: $H(X) = -p_1 \log_2(p_1) - p_0 \log_2(p_0)$

[Plot: $H(X)$ in bit versus $P(x = \mathrm{Head})$ from 0 to 1; the curve rises from 0, reaches its maximum of 1 bit at $P(x = \mathrm{Head}) = 0.5$, and falls back to 0.]
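A sketch that reproduces the curve described above (the binary entropy of a coin with $P(x = \mathrm{Head}) = p$), assuming NumPy and matplotlib are available.

```python
import numpy as np
import matplotlib.pyplot as plt

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with 0*log2(0) taken as 0."""
    p = np.asarray(p, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.nan_to_num(h)  # endpoints p = 0 and p = 1 give H = 0

p = np.linspace(0, 1, 201)
plt.plot(p, binary_entropy(p))
plt.xlabel("P(x = Head)")
plt.ylabel("H(X) [bit]")
plt.show()  # maximum of 1 bit at p = 0.5
```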
Entropy of a message

Facts (continued):
4) $H(X, Y) \le H(X) + H(Y)$, with $H(X, Y) = H(X) + H(Y)$ if and only if $X$ and $Y$ are statistically independent
Entropy of a message
Additional details:

joint entropy: $H(X, Y) = \displaystyle\sum_{(x, y) \in \mathcal{A}_X \times \mathcal{A}_Y} p(x, y) \log_2\!\left(\frac{1}{p(x, y)}\right)$

conditional entropy: $H(X \mid Y) = \displaystyle\sum_{(x, y) \in \mathcal{A}_X \times \mathcal{A}_Y} p(x, y) \log_2\!\left(\frac{1}{p(x \mid y)}\right)$

chain rule of entropy: $H(X, Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y)$

$H(X \mid Y) \le H(X)$, with equality if and only if $X$ and $Y$ are statistically independent
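A small numerical check of these identities on a made-up joint distribution $p(x, y)$; the distribution and variable names are illustrative only.

```python
import math

def H(probs):
    """Entropy in bits of an iterable of probabilities."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A made-up joint distribution p(x, y) over AX = {0, 1} and AY = {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginals p(x) and p(y).
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# Conditional entropy H(X|Y) = sum_{x,y} p(x,y) * log2( p(y) / p(x,y) ).
H_x_given_y = sum(p * math.log2(p_y[y] / p) for (x, y), p in p_xy.items() if p > 0)

print(f"H(X,Y)        = {H(p_xy.values()):.4f} bit")
print(f"H(Y) + H(X|Y) = {H(p_y.values()) + H_x_given_y:.4f} bit")          # equal: chain rule
print(f"H(X) + H(Y)   = {H(p_x.values()) + H(p_y.values()):.4f} bit")      # >= H(X,Y); equal iff independent
print(f"H(X|Y) <= H(X)? {H_x_given_y <= H(p_x.values())}")                 # True
```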
Gedankenexperiment: Weighing babies problem

You are given 12 babies, all equal in weight except for one that is either heavier or lighter.

• Design a strategy to determine which is the different baby and whether it is heavier or lighter in as few uses of the balance as possible.

From lecture notes, David MacKay

What is the minimum number of weighings to find the heavier / lighter baby?

# Weighings
A 3-4
B 5-6
C 7-8
D 9-10
E 11-12
What is your ensemble for baby weighing?

Set of outcomes: the balance tips one way, stays level, or tips the other way.
Random variable: maps each outcome to a number (1, 0, -1).
Probabilities: associated with each outcome.

How should you distribute the babies to get the most information in the 1st weighing (trial)?
What is your first weighing distribution?

# on Left vs. Right

A 6v6
B 5v5
C 4v4
D 3v3
E 2v2
Probabilities associated with weighing distributions

C = {# on Left vs. Right} | P(x = '\' = 1) | P(x = '-' = 0) | P(x = '/' = -1) | H(C) [bit]
6v6 | 1/2  | 0   | 1/2  | 1.00
5v5 | 5/12 | 1/6 | 5/12 | 1.48
4v4 | 1/3  | 1/3 | 1/3  | 1.58
2v2 | 1/6  | 2/3 | 1/6  | 1.25
1v1 | 1/12 | 5/6 | 1/12 | 0.82
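A minimal sketch reproducing the H(C) column of the table, assuming the odd baby is equally likely to be any of the 12, so a weighing with c babies per side tips left or right with probability c/12 each and balances otherwise.

```python
import math

def entropy(probs):
    """Entropy in bits; outcomes with probability 0 contribute nothing."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

n_babies = 12
for per_side in (6, 5, 4, 2, 1):
    p_left = p_right = per_side / n_babies             # scale tips left / right
    p_balance = (n_babies - 2 * per_side) / n_babies   # scale balances
    h = entropy([p_left, p_balance, p_right])
    print(f"{per_side}v{per_side}: H = {h:.2f} bit")   # matches the table above
```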
Which weighing distribution do you think is the best and why?

What is the next step?


Gedankenexperiment:
The game of 63
There is a set of 64 numbers: $x \in \{0, 1, 2, \ldots, 63\}$

I secretly pick one number from the set and you have to guess it.
You may ask me any yes/no question you like.
1. What strategy would you use in asking questions?
2. What is the minimum number of questions you need to ask to
guarantee guessing the number?
What is the minimum number of questions you
need to ask to guarantee finding the number?
# Questions
A <=4
B 5
C 6
D 7
E >=8
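A sketch of the bit-by-bit questioning strategy: each yes/no question reveals one bit of the secret number, so $\log_2(64) = 6$ questions always suffice. The oracle below is an illustrative stand-in for the person holding the secret.

```python
def guess_secret(ask) -> int:
    """Recover a secret in {0, ..., 63} with 6 yes/no questions, one per bit."""
    guess = 0
    for bit in (5, 4, 3, 2, 1, 0):  # ask about the most significant bit first
        if ask(f"Is bit {bit} of your number a 1?", bit):
            guess |= 1 << bit
    return guess

# Usage: an oracle that holds the secret number 42 and answers truthfully.
secret = 42
oracle = lambda question, bit: ((secret >> bit) & 1) == 1
print(guess_secret(oracle))  # 42, found after exactly 6 = log2(64) questions
```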
What have we learned from the
Gedankenexperiments …

Information theory can be used to design strategies, such as optimizing searches.
