Module-1

Introduction to Information Theory


By
Dr. Mahesh M
Assistant Professor
SENSE
ITC: Applications

Claude Shannon

• The capacity can be computed simply from the noise characteristics of the channel.

• Shannon proposed that certain types of random processes, such as music and speech,
have a fundamental or irreducible complexity. This means there is a lower bound to
how much these signals can be compressed without losing essential information. This
is a cornerstone of source coding theory, which addresses limits to data compression.
DATA and Information
What is Information
Let us consider the following statements:
 Tomorrow, the sun will rise from the East.

 The phone will ring in the next hour.

 It will snow in Vijayawada this winter.

 Next Sunday is a working day.

 Syria's president has left the country.

From the above statements, it is clear that the amount of information has nothing to do with the
length of the sentence.

Also, it is interesting to note that the amount of information carried varies inversely with the
probability of occurrence of the event: the less likely the event, the more information it conveys.

Shannon gave a mathematical definition to quantify the amount of information in his
published work titled “A Mathematical Theory of Communication”.
Uncertainty and Information
Suppose an experiment involves observing an outcome (e.g., tossing a fair die).

Every time we throw the die, the outcome is one of the symbols {1, 2, 3, 4, 5, 6}.

Consider an event S = s_i. Before the event occurs, there is an amount of uncertainty.

During the event, there is an amount of surprise.

After the occurrence of the event, there is a gain in the amount of information (which
may be viewed as the resolution of uncertainty).
Information Sources
• In a communication system, an information source is a device that sends messages; it can be
continuous or discrete.
• Continuous sources can be converted to discrete sources by sampling
and quantization.
• Therefore, a discrete information source is a source which has only a finite
set of symbols as possible outputs.

• Ex: Morse code


Memoryless Sources
We assume that symbols emitted by the source during successive attempts are
statistically independent (each symbol produced is independent of the previous symbols).
Such a source is called a Discrete Memoryless Source (DMS).

A source for which each symbol produced is independent of the previous symbols is
called a DMS.
Ex:

When the symbols are binary 1s and 0s, the source is called a binary memoryless source
(BMS).
Measure of Information (Very Important)
• For a source alphabet S = {s_1, s_2, ..., s_M}, where M is the number of symbols, each with
probability distribution P = {p_1, p_2, ..., p_M}, the information gained for the outcome
S = s_i is

I(S = s_i) = -log_b(p_i)

1. I(S = s_i) = 0 for p_i = 1 (an event that is certain conveys no information).

2. I(s_i, s_j) = I(s_i) + I(s_j) if the random variables s_i and s_j are statistically independent.

Measure of Information
• Units of Information for I(S = s_i) = -log_b(p_i):
• bits, if the base b = 2,
• Hartley or decit, if the base b = 10,
• nat (natural unit), if the base b = e.
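As a minimal sketch (not part of the original slides), the self-information can be computed in any of the three units in Python; the helper name self_information is illustrative only:

```python
import math

def self_information(p, base=2):
    """Self-information I = -log_b(p) of an outcome with probability p."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log(p) / math.log(base)

# Example: an outcome with probability 1/4
p = 0.25
print(self_information(p, base=2))        # 2.0 bits
print(self_information(p, base=10))       # ~0.602 Hartleys (decits)
print(self_information(p, base=math.e))   # ~1.386 nats
```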
Example
You are watching the weather forecast, and the announcer says that it will rain
tomorrow. The probability of rain in your area is known to be P(rain)=0.2, and the
probability of no rain is P(no rain)=0.8.

Calculate the amount of information content (measured in bits) in the weather forecast.
Imagine you are a detective solving a mystery, and your job is to find out which
suspect committed the crime. There are 8 suspects.
Imagine you're trying to guess someone's 4-digit PIN code, and you know the PIN is
randomly chosen. Let’s calculate the information content of finding out the PIN.
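Worked answers for the three prompts above (not on the original slides, computed from the stated probabilities): the rain forecast carries I = -log_2(0.2) ≈ 2.32 bits; identifying one of 8 equally likely suspects carries log_2(8) = 3 bits; learning a uniformly chosen 4-digit PIN (10,000 possibilities) carries log_2(10^4) ≈ 13.29 bits.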
Example on Information content
Question: A source produces one of four possible symbols, having probabilities P(x1) = ½,
P(x2) = ¼, P(x3) = P(x4) = 1/8. Obtain the information content of each of these symbols.

Answer: I(x1) = -log_2(1/2) = 1 bit, I(x2) = -log_2(1/4) = 2 bits, I(x3) = I(x4) = -log_2(1/8) = 3 bits.
Example on Information content
Question: In a binary channel, ‘0’ occurs with probability ¼ and ‘1’ occurs with
probability ¾. Calculate the amount of information carried by each binit.
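A worked answer (not on the original slide): I(0) = -log_2(1/4) = 2 bits and I(1) = -log_2(3/4) ≈ 0.415 bits; the less probable binit carries more information.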
Entropy (Average Information per Symbol)
• In practical communication, we transmit long sequences of symbols.

• Let P = {p_1, p_2, ..., p_M} be the probability distribution of the symbols defined over the
source alphabet S = {s_1, s_2, ..., s_M}. The average information (entropy) is defined as

H(S) = -Σ p_i log_2(p_i)

The units of H(S) are bits/symbol.

Note: H(S) = log_2(M) (maximum uncertainty) if and only if all M symbols are equiprobable.
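A minimal Python sketch (not part of the original slides) of the entropy formula above; it reproduces the four-symbol source from the earlier example:

```python
import math

def entropy(probs, base=2):
    """Average information H = -sum(p * log_b(p)); zero-probability symbols contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Four-symbol source from the earlier example: P = {1/2, 1/4, 1/8, 1/8}
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
```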
3 Properties of Entropy

The units of H(S) are bits/symbol.

1. 0 ≤ H(S) ≤ log_2(M)  (Prove this.)
2. H(S) = 0 if p_i = 1 for some i and p_j = 0 for all j ≠ i (no uncertainty).
3. H(S) = log_2(M) if p_i = 1/M for all i (all symbols equiprobable).

Note: H(S) = log_2(M) (maximum uncertainty) if and only if all M symbols are equiprobable.


Entropy of BMS
• Let S be a BMS that emits the symbols 1 and 0 with probabilities p and 1 - p. Then the entropy of the BMS is:

H(p) = -p log_2(p) - (1 - p) log_2(1 - p)

(Write a MATLAB program to obtain this curve.)

[Figure: H(p) vs p]
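The slide asks for a MATLAB program; as an illustrative alternative (not part of the original material), a short Python/matplotlib sketch that produces the same H(p) vs p curve is given below:

```python
import numpy as np
import matplotlib.pyplot as plt

# Binary entropy function H(p) = -p*log2(p) - (1-p)*log2(1-p)
p = np.linspace(1e-6, 1 - 1e-6, 1000)   # avoid log2(0) at the endpoints
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)

plt.plot(p, H)
plt.xlabel("p")
plt.ylabel("H(p) (bits)")
plt.title("Entropy of a binary memoryless source")
plt.grid(True)
plt.show()
```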
Example
• You have a biased coin where the probability of getting Heads is p, and the probability of getting Tails is
1 − p (worked entropy values for the three cases are given after the list).
1. Case 1: The coin is fair (p=0.5).
2. Case 2: The coin is biased, with p=0.9 for Heads and 1−p=0.1 for Tails.
3. Case 3: The coin is so biased that it always lands on Heads (p=1).
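Worked entropy values for the three cases (not on the original slide): H(0.5) = 1 bit (maximum uncertainty), H(0.9) = -0.9 log_2(0.9) - 0.1 log_2(0.1) ≈ 0.469 bits, and H(1) = 0 bits (the outcome is certain, so no information is gained).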
Example
• A weather forecast predicts the chance of rain or no rain for a given day. The probabilities for different
scenarios are as follows:
1. Case 1: The weather forecast says:
1. Rain: p=0.5,
2. No Rain: 1−p=0.5
2. Case 2: The forecast predicts:
1. Rain: p=0.9
2. No Rain: 1−p=0.1
3. Case 3: The forecast is certain:
1. Rain: p=1
2. No Rain: 1−p=0
Example
• A light bulb can either be ON or OFF. Consider the following three cases:
1. Case 1: The light bulb is almost always ON with probability p=0.9999, and OFF with probability 0.0001.
2. Case 2: The light bulb is equally likely to be ON or OFF, with p=0.5
3. Case 3: The light bulb is always ON with probability p=1
• (a) Information Content:
• For each case, calculate the information (I) for the ON and OFF events.
• (b) Entropy:
• For each case, calculate the entropy (H) of the system. Explain why the entropy is low or high in each
scenario.
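Worked values (not on the original slide): in Case 1, I(ON) = -log_2(0.9999) ≈ 0.00014 bits, I(OFF) = -log_2(0.0001) ≈ 13.29 bits, and H ≈ 0.0015 bits (the state is almost certain, so entropy is very low); in Case 2, I(ON) = I(OFF) = 1 bit and H = 1 bit (maximum uncertainty); in Case 3, I(ON) = 0 bits, I(OFF) is undefined since the event never occurs, and H = 0 bits.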
Example
• A bag contains 16 balls:
• 12 balls are red,
• 4 balls are white.
• One ball is picked randomly.
Calculate the entropy H of the system using the formula H(S) = -Σ p_i log_2(p_i).
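A worked answer (not on the original slide): p(red) = 12/16 = 0.75 and p(white) = 4/16 = 0.25, so H = -0.75 log_2(0.75) - 0.25 log_2(0.25) ≈ 0.311 + 0.5 ≈ 0.811 bits.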
Information Rate
• If the time rate at which the source S emits symbols is r (symbols/sec), then the
information rate of the source is given by

R = r × H(P) bits/second

H(P) is the entropy or average information.

r is the rate at which symbols are generated.


Scenario
• Suppose a keyboard emits 10 characters per second (r = 10
symbols/second). Each character has an entropy H(P) of 5 bits/symbol (e.g.,
considering English characters).

• You’re watching a football match live-streamed in 4K resolution (Ultra HD) on
your smart TV. Compared to standard HD (1080p), 4K offers a much sharper and
more detailed picture, allowing you to see every blade of grass, the stitching on
the players’ jerseys, and even expressions in the crowd. However, delivering
such high-quality video requires significantly more bandwidth due to the
increased entropy and frame size.
Information Rate
Solution: for the keyboard, R = r × H(P) = 10 symbols/second × 5 bits/symbol = 50 bits/second.
Information Rate vs Channel Capacity
 The information rate (R) of the source imposes a requirement on the channel capacity (C)

 Information rate (R = r x H) ≤ Channel capacity (C)

 It is impossible to transmit information at a rate greater than channel capacity

 The capacity of a channel (C) increases with the bandwidth (BW) and the signal-to-noise ratio S/N:

C = BW log_2(1 + S/N) bits/second (this is log base 2, please note)


 S/N = Signal power/ Noise power

 Signal power and noise power are measured in watts; the ratio S/N is more frequently expressed in dB

 So, in summary, entropy decides the rate at which you generate information

 The information rate, in turn, decides the minimum channel capacity required

 This is how “everything” (source and channel) is interlinked


• Determine the channel capacity if the SNR is 30 dB and telephone signals of 4 kHz
bandwidth are transmitted.
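A worked answer (not on the original slide): an SNR of 30 dB corresponds to S/N = 10^(30/10) = 1000, so C = BW log_2(1 + S/N) = 4000 × log_2(1001) ≈ 4000 × 9.97 ≈ 39.87 kbits/second.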
