
This file contains figures from the book:

Information Theory
A Tutorial Introduction
by
Dr James V Stone
2015

Sebtel Press.
Copyright JV Stone.
These figures are released for use under the Creative
Commons License specified on the next slide.

A copy of this file can be obtained from:

http://jim-stone.staff.shef.ac.uk/BookInfoTheory2013/InfoTheoryBookFigures.html
Chapter 1
Figure: a binary tree. Each left/right branching decision contributes one binary digit (0 or 1), so three decisions select one of eight 3-bit codewords: 000 = 0, 001 = 1, 010 = 2, 011 = 3, 100 = 4, 101 = 5, 110 = 6, 111 = 7.
Figure: a decision tree of yes/no questions Q1, Q2, Q3 assigning a 3-bit codeword to each of eight items: Fish 000 = 0, Bird 001 = 1, Dog 010 = 2, Cat 011 = 3, Car 100 = 4, Van 101 = 5, Truck 110 = 6, Bus 111 = 7.
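
The figure's point is that each yes/no answer contributes one binary digit, so n questions distinguish 2^n items. A minimal Python sketch of the mapping (an illustration written for this figure, not code from the book):

items = ["Fish", "Bird", "Dog", "Cat", "Car", "Van", "Truck", "Bus"]

# Three yes/no answers (one bit each) select one of 2**3 = 8 items.
for index, item in enumerate(items):
    codeword = format(index, "03b")   # e.g. 6 -> "110"
    print(item, codeword, "=", index)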
Chapter 2
Figure: a random variable maps experiment outcomes to numbers. For a coin flip, the random variable X gives X(head) = 1 and X(tail) = 0.
Figure: the communication channel. Data s is encoded as input x = g(s); the channel adds noise η to produce output y; the decoder recovers the data s from y.
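
This diagram recurs throughout the book. A rough Python sketch of the pipeline (the linear encoder and Gaussian noise are assumptions chosen for illustration; the figure itself does not specify them):

import random

def g(s):
    # Hypothetical encoder: map the data s to a channel input x.
    return 2.0 * s - 1.0

def channel(x, sigma=0.1):
    # The channel adds noise eta to the input x.
    eta = random.gauss(0.0, sigma)
    return x + eta

def decode(y):
    # Decoder: invert the encoder to estimate the data s from output y.
    return (y + 1.0) / 2.0

s = 0.75          # data s
x = g(s)          # input x = g(s)
y = channel(x)    # output y = x + eta
print(s, decode(y))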


Chapter 3
Figure: the communication channel: data → encoder → input → channel (with noise) → output → decoder → data.


Figure: Huffman coding tree for five symbols with probabilities A 0.87, B 0.04, C 0.04, D 0.03, E 0.02. The least probable nodes are merged in pairs (B + C = 0.08, D + E = 0.05, then 0.08 + 0.05 = 0.13, then 0.13 + A = 1.00), and each branch is labelled 0 or 1 to give the codewords.
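
A compact Huffman construction over the figure's probabilities (a sketch using Python's heapq; tie-breaking, and hence the exact codewords, may differ from the figure):

import heapq

def huffman(probs):
    # Each heap entry: (probability, tie-breaker, {symbol: codeword so far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)   # least probable subtree
        p1, _, codes1 = heapq.heappop(heap)   # next least probable subtree
        merged = {s: "0" + c for s, c in codes0.items()}
        merged.update({s: "1" + c for s, c in codes1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

print(huffman({"A": 0.87, "B": 0.04, "C": 0.04, "D": 0.03, "E": 0.02}))

With this merge order, A gets a 1-bit codeword and the rest get 3 bits, for an expected length of 0.87 × 1 + 0.13 × 3 = 1.26 bits per symbol.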
Figure: from xkcd.com.
Chapter 4
Figure: the communication channel. Data s is encoded as input x = g(s); the channel adds noise η to produce output y; the decoder recovers the data s from y.


Figure: entropy Venn diagrams, panels (a) and (b). The joint entropy H(X,Y) spans both circles; H(X) and H(Y) overlap in the mutual information I(X,Y), leaving the conditional entropies H(X|Y) and H(Y|X) as the non-overlapping regions.
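
The regions of the diagram satisfy the standard identities (standard results, written in the book's notation):

I(X,Y) = H(X) - H(X|Y)
       = H(Y) - H(Y|X)
       = H(X) + H(Y) - H(X,Y)

H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)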


Figure: the channel as a fan diagram. There are 2^H(X) possible inputs X and 2^H(Y) possible outputs Y; each output is consistent with 2^H(X|Y) possible inputs, and each input can produce 2^H(Y|X) possible outputs.
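
These counts give the usual capacity argument: of the 2^H(X) possible inputs, each output is consistent with 2^H(X|Y) of them, so the number of inputs that can be reliably distinguished at the output is

2^H(X) / 2^H(X|Y) = 2^(H(X) - H(X|Y)) = 2^I(X,Y).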
Figure: entropy bar diagrams. (a) The input entropy H(X) splits into the mutual information I(X,Y) and the input noise entropy H(X|Y); (b) the output entropy H(Y) splits into the mutual information I(X,Y) and the output noise entropy H(Y|X).
Figure: inputs A–I map to overlapping sets of outputs; the spaced inputs B, E, and H produce non-overlapping outputs and so can be distinguished reliably.
Chapter 5
Figure: residual uncertainty U. (a) After receiving 2 bits, U = 25%; (b) after receiving 1 bit, U = 50%; (c) after receiving 1/2 a bit, U = 71%.
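
The percentages follow from the residual uncertainty halving with each bit received: after b bits, a fraction U = 2^(-b) of the original uncertainty remains:

b = 2:    U = 2^(-2)   = 25%
b = 1:    U = 2^(-1)   = 50%
b = 1/2:  U = 2^(-1/2) ≈ 71%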
Figure: the communication channel: data → encoder → input → channel (with noise) → output → decoder → data.


Chapter 6
Figure: the communication channel. Data s is encoded as input x = g(s); the channel adds noise η to produce output y; the decoder recovers the data s from y.


Figure: entropy Venn diagram showing H(X,Y), H(X), H(Y), H(X|Y), I(X,Y), and H(Y|X).


Chapter 7
Figure: two panels plotting variables x1 and x2 (vertical axes -10 to 10, horizontal axes 5 to 20).
Chapter 8
Figure: two bodies A and B: both warm (left) versus A cold and B hot (right).
Chapter 9
Figure: encoding contrast. (a) Probability density p(s) of contrast s; (b) contrast s over time (milliseconds); (c) cumulative distribution function of contrast, g(s); (d) encoded contrast x over time (milliseconds); (e) probability density p(x) of the encoded contrast x.
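Panel (c)'s encoder g is the cumulative distribution function of the contrast, so the encoding is histogram equalization: passing each contrast through its own cdf yields a uniformly distributed output, which maximizes the entropy of an output confined to a fixed range. A rough numerical sketch (an illustration assuming numpy and an empirical cdf, not code from the book):

import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(0.0, 0.3, size=10_000)   # contrasts s

# Empirical cdf: rank each contrast, then scale the ranks to (0, 1).
ranks = np.argsort(np.argsort(s))
x = (ranks + 0.5) / len(s)              # encoded contrast x = g(s)

# x is (approximately) uniform on (0, 1), as in panel (e).
print(np.histogram(x, bins=5, range=(0.0, 1.0))[0])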

Figure: the outputs of the S-cone, M-cone, and L-cone combine to form the Blue-Yellow, Luminance, and Red-Green channels.
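A common way to write the opponent transform sketched here (the equal weights are an assumption for illustration; actual models weight the cone outputs differently):

Luminance   = L + M
Red-Green   = L - M
Blue-Yellow = S - (L + M)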

THE END.
