Digital Communication

Unit-1

INFORMATION THEORY
What is Information?
•The uncertainty (randomness) in the occurrence of an event, which is resolved as news when the event happens, is known as information.
•The information of an event depends only on its probability of occurrence and not on its content.
•A message with lower probability of occurrence carries more information, and a message with higher probability carries less information.
•Information is a non-negative quantity.
•If an event has probability 1, we get no information from its occurrence: I(1) = 0.
Information Source:
•The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.
•Information sources can be classified into:
• Memoryless: A memoryless source is one for which each symbol produced is independent of the previous symbols.
• Memory: A source with memory is one for which the current symbol depends on the previous symbols.
Information Content of a Symbol (i.e. Logarithmic Measure of Information):
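In standard form, the information content (self-information) of a symbol xi emitted with probability P(xi) is measured logarithmically:

\[
I(x_i) \;=\; \log_2 \frac{1}{P(x_i)} \;=\; -\log_2 P(x_i) \quad \text{bits}
\]

For example, a symbol that occurs with probability 1/8 carries log2(8) = 3 bits of information, while a certain event (probability 1) carries 0 bits, consistent with the properties listed above.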
Entropy (i.e. Average Information):
• Entropy (H) is a measure of the average information content per
source symbol.
• Assumptions:
i. The source is stationary, so the symbol probabilities remain constant with time.
ii. The successive symbols are statistically independent and come from the source at an average rate of ‘r’ symbols per second.
• Entropy is calculated as follows:
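In standard form, for a discrete memoryless source emitting M symbols x1, …, xM with probabilities P(xi), the entropy is

\[
H(X) \;=\; \sum_{i=1}^{M} P(x_i)\,\log_2 \frac{1}{P(x_i)} \;=\; -\sum_{i=1}^{M} P(x_i)\,\log_2 P(x_i) \quad \text{bits/symbol}
\]

With the average symbol rate of r symbols per second assumed above, the average information rate of the source is R = r·H(X) bits per second.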
Maximum Entropy for a Binary Source:
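For a binary source emitting two symbols with probabilities p and 1 − p, the entropy reduces to the binary entropy function

\[
H(p) \;=\; -p\,\log_2 p \;-\; (1-p)\,\log_2 (1-p)
\]

which attains its maximum value of 1 bit/symbol when the two symbols are equally likely (p = 1/2), and falls to 0 when p = 0 or p = 1.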
The Discrete Memoryless Channel (DMC):
•A DMC is a statistical model with an input X and an output Y.
•Each possible input-to-output path is labelled with a conditional probability P(yj/xi), where P(yj/xi) is the probability of obtaining output yj given that the input is xi; it is called a channel transition probability.
•A channel is completely specified by the complete set of transition probabilities, arranged as the matrix [P(Y/X)]. This matrix is known as the channel matrix.
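As an illustration (a binary symmetric channel is assumed here purely as an example), a channel with two inputs, two outputs and crossover probability p has the channel matrix

\[
[P(Y/X)] \;=\;
\begin{bmatrix}
P(y_1/x_1) & P(y_2/x_1)\\
P(y_1/x_2) & P(y_2/x_2)
\end{bmatrix}
\;=\;
\begin{bmatrix}
1-p & p\\
p & 1-p
\end{bmatrix}
\]

Each row of a channel matrix sums to 1, since some output symbol must be produced for every input symbol.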
Types of Channels:
Channel Capacity:
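In standard form, the capacity of a discrete memoryless channel is the maximum average mutual information between input and output, taken over all possible input probability distributions:

\[
C \;=\; \max_{\{P(x_i)\}} I(X;Y) \quad \text{bits per channel use}
\]

For the binary symmetric channel used as an illustration above, this maximum occurs for equally likely inputs and gives C = 1 − H(p), where H(p) is the binary entropy function.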
A Few Terms Related to the Source Coding Process:
•Code word Length: the number of binary digits (bits) in the code word.
•Average Code word Length: the average number of bits per source symbol used in the source coding process.
• Code Efficiency: The code efficiency η is defined as η = Lmin / L, where Lmin is the minimum possible value of the average code word length L. As η approaches unity, the code is said to be efficient.
•Code Redundancy: The code redundancy γ is defined as γ = 1 − η.
▪ Fixed-Length Codes:
✔ Code 1 and Code 2 of the above table are fixed-length code words with length 2.

▪ Variable-Length Codes:
✔ All codes of the above table except Code 1 and Code 2 are variable-length codes.

▪ Distinct Codes: All codes of the above table except Code 1 are distinct codes.
▪ Prefix-Free Codes:
✔ A code in which no code word can be formed by adding code symbols to another code word is called a prefix-free code. In a prefix-free code, no code word is a prefix of another.
✔ Codes 2, 4 and 6 of the above table are prefix-free codes.

▪ Uniquely Decodable Codes: A code is uniquely decodable if every encoded bit sequence corresponds to only one source symbol sequence. Since no code word of a prefix-free code is a prefix of another, the prefix-free codes 2, 4 and 6 are uniquely decodable codes.

▪ Instantaneous Codes: A uniquely decodable code is called an instantaneous code if the end of each code word can be recognized without examining subsequent code symbols; prefix-free codes are therefore instantaneous.

▪ Optimal Codes: A code is said to be optimal if it is instantaneous and has the minimum average code word length L for a given source with a given probability assignment for the source symbols.
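A minimal sketch of how the prefix-free (and hence instantaneous) property can be checked programmatically; the code word sets used below are hypothetical examples, not the Codes 1 to 6 of the table above:

def is_prefix_free(codewords):
    """Return True if no code word is a prefix of another code word."""
    for i, a in enumerate(codewords):
        for j, b in enumerate(codewords):
            if i != j and b.startswith(a):
                return False
    return True

# Hypothetical code word sets used only for illustration:
print(is_prefix_free(["0", "10", "110", "111"]))   # True  -> instantaneous code
print(is_prefix_free(["0", "01", "011", "0111"]))  # False -> "0" is a prefix of "01"

A code that passes this check can be decoded symbol by symbol as soon as each code word ends.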
Step-by-step Procedure (Shannon-Fano Coding):
i. List the source symbols in order of decreasing probability.
ii. Partition the set into two groups whose total probabilities are as nearly equal as possible, and assign the digit 0 to one group and the digit 1 to the other.
iii. Repeat the partitioning within each group, appending one more digit to the code words, until every group contains only a single symbol.
Example:
•Encode the symbols a, b, c, d, e and f emitted by a discrete source with probabilities 0.4, 0.3, 0.15, 0.1, 0.03 and 0.02 respectively, using the Shannon-Fano coding algorithm. Also calculate the entropy and the coding efficiency.
The source entropy H(X) is given by H(X) = Σ P(xi) log2(1/P(xi)).
The average code word length (Lavg) is given by Lavg = Σ P(xi) li, where li is the length of the code word assigned to symbol xi.
The coding efficiency (η) is given by η = H(X) / Lavg.
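A minimal Python sketch of this example, assuming the conventional Shannon-Fano rule of splitting each group into two sub-groups of as nearly equal total probability as possible:

import math

def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs.
    Returns a dict mapping each symbol name to its binary code word."""
    symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        running, cut, best_diff = 0.0, 1, float("inf")
        for k in range(1, len(group)):              # try every split point
            running += group[k - 1][1]
            diff = abs(2 * running - total)         # |upper part - lower part|
            if diff < best_diff:
                best_diff, cut = diff, k
        for name, _ in group[:cut]:                 # upper group gets a 0
            codes[name] += "0"
        for name, _ in group[cut:]:                 # lower group gets a 1
            codes[name] += "1"
        split(group[:cut])
        split(group[cut:])

    split(symbols)
    return codes

source = [("a", 0.4), ("b", 0.3), ("c", 0.15), ("d", 0.1), ("e", 0.03), ("f", 0.02)]
codes = shannon_fano(source)
H = sum(p * math.log2(1 / p) for _, p in source)    # source entropy, bits/symbol
Lavg = sum(p * len(codes[s]) for s, p in source)    # average code word length
print(codes)
print(H, Lavg, H / Lavg)                            # entropy, Lavg, coding efficiency

With this split the sketch yields the code a → 0, b → 10, c → 110, d → 1110, e → 11110, f → 11111, giving H(X) ≈ 2.057 bits/symbol, Lavg = 2.10 bits/symbol and a coding efficiency η ≈ 98%.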
Shannon-Fano Coding Example 2 (video example):
Advantages of variable length coding (video example):
✔ Shorter code words are assigned to the more probable symbols, so the average code word length is reduced toward the source entropy, giving a higher coding efficiency than fixed-length coding.
Q&A
Can you decode the encoded sequence?
1) 1110 0 1110 ?
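A short sketch of instantaneous decoding, assuming the prefix-free code obtained in the worked example above (a → 0, b → 10, c → 110, d → 1110, e → 11110, f → 11111):

def decode(bits, codes):
    """Decode a bit string symbol by symbol using a prefix-free code table."""
    inverse = {cw: s for s, cw in codes.items()}
    decoded, buffer = [], ""
    for b in bits:
        buffer += b
        if buffer in inverse:        # a complete code word has just ended
            decoded.append(inverse[buffer])
            buffer = ""
    return decoded

codes = {"a": "0", "b": "10", "c": "110", "d": "1110", "e": "11110", "f": "11111"}
print(decode("111001110", codes))    # ['d', 'a', 'd']

Because no code word is a prefix of another, each code word is recognized the instant its last bit arrives, so the sequence 1110 0 1110 decodes as d a d.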
