
Module-3

Probability Based Source Coding

Dr. Markkandan S

School of Electronics Engineering (SENSE)


Vellore Institute of Technology
Chennai

Outline

1 Source Coding Theorem

2 Huffman Coding

3 Shannon - Fano Coding

4 Arithmetic Coding



Source Coding Theorem
Introduction

Source Coding: Efficient Representation of symbols generated by a source.


1. The primary motivation is to compress the data by efficient representation of the symbols.
2. A code is a set of vectors called codewords.
3. A discrete memoryless source (DMS) outputs a symbol selected from a finite set of L
symbols x_i, i = 1, 2, . . . , L. The number of binary digits (bits) R required for unique coding,
when L is a power of 2, is R = log2 L.
4. When L is not a power of 2, R = ⌊log2 L⌋ + 1.



Fixed Length Code (FLC) and Variable Length Code(VLC)

Let us represent 26 letters in the English alphabet using bits.

R = ⌊log2 26⌋ + 1 = 5 bits

We know 2^5 = 32 > 26.
Hence, each of the letters can be uniquely represented using a fixed length of 5 bits.
Allotting an equal number of bits to frequently used letters and rarely used letters is not
an efficient representation.
We should represent the more frequently occurring letters with fewer bits, using a
Variable Length Code (VLC).

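As a quick numerical check of this fixed-length calculation, a small Python sketch (illustrative only, not part of the slides) evaluates R for any alphabet size L:

import math

def fixed_length_bits(L: int) -> int:
    # Bits needed to uniquely index L symbols with a fixed-length code.
    if L & (L - 1) == 0:                   # L is a power of 2
        return int(math.log2(L))
    return math.floor(math.log2(L)) + 1    # R = floor(log2 L) + 1 otherwise

print(fixed_length_bits(26))  # 5 -> each English letter needs 5 bits
print(fixed_length_bits(8))   # 3 -> exactly log2(8) when L is a power of 2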


Example: Fixed Length Code (FLC) and Variable Length Code (VLC)
Let us code the first 8 letters (A-H) of the English alphabet.

Let us represent "A BAD CAB" using FLC and VLC.



Example: Variable Length Code (VLC)

Let us code the first 8 letters (A-H) of the English alphabet.

Let us represent "A BAD CAB" using both VLCs.



Variable Length Code(VLC ) : Issues

Prefix Condition: No codeword forms the prefix of another codeword (VLC1 satisfies the
prefix condition; VLC2 does not)
Instantaneous Codes: As soon as the sequence of bits corresponding to any one of the
possible codewords is detected, the symbol can be decoded immediately
Uniquely Decodable Codes: The encoded string can be generated by only one possible input
string; if the code is not also instantaneous, one may have to wait until the entire string is received before decoding even the first symbol
VLC2 is not a uniquely decodable code; VLC1 is a uniquely decodable code



Kraft Inequality
A necessary and sufficient condition for the existence of a binary code with codewords
having lengths n_1 ≤ n_2 ≤ . . . ≤ n_L that satisfy the prefix condition is

Σ_{k=1}^{L} 2^(−n_k) ≤ 1

Proof:

Consider a binary tree of order n = n_L. This tree has 2^n terminal
nodes. Let us select any node of order n_1 as the first
codeword C_1. Since no codeword is a prefix of any other
codeword, this choice eliminates 2^(n−n_1) terminal nodes. This
process continues until the last codeword is assigned.

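The inequality is easy to check numerically. The following Python sketch (function names are illustrative) tests whether a given set of binary codeword lengths can belong to a prefix code:

def kraft_sum(lengths, r=2):
    # Sum of r^(-n_k) over all codeword lengths n_k.
    return sum(r ** (-n) for n in lengths)

def prefix_code_possible(lengths, r=2):
    # Kraft inequality: a prefix code with these lengths exists iff the sum is <= 1.
    return kraft_sum(lengths, r) <= 1

print(kraft_sum([1, 2, 3, 3]))             # 1.0   -> a prefix code exists
print(prefix_code_possible([1, 2, 2, 2]))  # False -> sum = 1.25 > 1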



Kraft Inequality: Example
A six-symbol source is encoded into the binary codes shown below. Which of these codes are
instantaneous?

Check for the prefix property and the Kraft inequality:

Code E does not satisfy the Kraft inequality; Code D does not satisfy the prefix property.
Code A, Code B, and Code C satisfy both properties and are instantaneous.
Kraft Inequality: Example
Construction of a prefix code using a binary tree

Σ_{k=1}^{L} 2^(−n_k) = 2^(−1) + 2^(−2) + 2^(−3) + 2^(−3) = 0.5 + 0.25 + 0.125 + 0.125 = 1

Hence the Kraft inequality is satisfied.
Source Coding Theorem

Statement:
Let X be the ensemble of letters from a DMS with finite Entropy H(X) and the output
symbols xk , k = 1, 2, . . . , L occurring with probabilities P(xk ), k = 1, 2, . . . , L.
It is possible to construct a code that satisfies the prefix condition and has an average length
R̄ that satisfies the inequality
H(X ) ≤ R̄ < H(X ) + 1
The efficiency of a prefix code is

η = H(X)/R̄

The redundancy of the code is

E = 1 − η



Example: Source Coding Theorem
Consider a Source X which generates four symbols with probabilities
P(x1 ) = 0.5, P(x2 ) = 0.3, P(x3 ) = 0.1 and P(x4 ) = 0.1
The entropy of the source is
H(X) = − Σ_{k=1}^{4} P(x_k) log2 P(x_k) = 1.685 bits

If we use the prefix code discussed before, {0, 10, 110, 111}:


The average code word length R̄ is
R̄ = Σ_{k=1}^{4} n_k P(x_k) = 1(0.5) + 2(0.3) + 3(0.1) + 3(0.1) = 1.700 bits

The efficiency of the code is


η = 1.685/1.700 = 0.9912
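The numbers in this example can be reproduced with a short Python sketch (the code lengths {1, 2, 3, 3} are those of the prefix code above):

import math

probs   = [0.5, 0.3, 0.1, 0.1]   # P(x1) ... P(x4)
lengths = [1, 2, 3, 3]           # lengths of the prefix code {0, 10, 110, 111}

H    = -sum(p * math.log2(p) for p in probs)       # entropy H(X)
Rbar = sum(n * p for n, p in zip(lengths, probs))  # average codeword length
eta  = H / Rbar                                    # efficiency

print(round(H, 4), round(Rbar, 2), round(eta, 4))
# 1.6855 1.7 0.9915  (the slide rounds H(X) to 1.685 first, giving 0.9912)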
Huffman Coding
Huffman Coding Algorithm

This algorithm is optimal in the sense that the average number of bits required to represent the source
symbols is a minimum, provided the prefix condition is met (a minimal code sketch follows the steps).
Steps:
1. Arrange the source symbols in decreasing order of their probabilities.
2. Take the bottom two symbols and tie them together. Add the probabilities of the two symbols, write the sum
on the combined branch, and label the two branches with a '1' and a '0'.
3. Treat this sum of probabilities as a new probability associated with a new symbol. Again pick the two
smallest probabilities and tie them together. Each time we perform this, the total number of symbols is reduced by
one.
4. Continue this procedure until only one probability is left. This completes the construction of the Huffman tree.
5. To find the prefix codeword for any symbol, follow the branches from the final node back to the symbol.

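A minimal binary Huffman sketch in Python following the steps above (the function name huffman_code is an illustrative choice, not from the slides):

import heapq
from itertools import count

def huffman_code(probs):
    # Build a binary Huffman code. probs: dict symbol -> probability.
    # Returns dict symbol -> codeword (string of '0'/'1').
    tie = count()  # tie-breaker so the heap never compares the code dictionaries
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two smallest probabilities
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

code = huffman_code({"x1": 0.4, "x2": 0.35, "x3": 0.25})
print(code)  # lengths: x1 -> 1 bit, x2 and x3 -> 2 bits (0/1 labels may differ)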


Huffman Coding
Combining probabilities

The number of stages required for the encoding operation is

n = (N − r)/(r − 1)

Here N = total number of symbols in the source alphabet, and r is the code alphabet size:
Binary Huffman coding: r = 2
Ternary Huffman coding: r = 3
Quaternary Huffman coding: r = 4
Example 1: Binary Huffman Coding

Consider a DMS with seven possible symbols xi , i = 1, 2, . . . , 7 and the corresponding


probabilities P(x1 ) = 0.37, P(x2 ) = 0.33, P(x3 ) = 0.16, P(x4 ) = 0.07, P(x5 ) = 0.04, P(x6 ) =
0.02, P(x7 ) = 0.01
Let us construct the Huffman tree.



Example 1: Binary Huffman Coding

The entropy of the source is


H(X) = − Σ_{k=1}^{7} P(x_k) log2 P(x_k) = 2.1152 bits

The average number of binary digits per symbol is

R̄ = Σ_{k=1}^{7} n_k P(x_k) = 1(0.37) + 2(0.33) + 3(0.16) + 4(0.07) + 5(0.04) + 6(0.02) + 6(0.01) = 2.17 bits

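The average length above can be checked by reusing the huffman_code sketch given after the algorithm steps (an illustrative helper, not from the slides):

probs = {"x1": 0.37, "x2": 0.33, "x3": 0.16, "x4": 0.07,
         "x5": 0.04, "x6": 0.02, "x7": 0.01}
code = huffman_code(probs)                         # from the earlier sketch
Rbar = sum(len(code[s]) * p for s, p in probs.items())
print(round(Rbar, 2))                              # 2.17 bits per symbol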


Example 2: Non-Binary Huffman Coding - Quaternary Huffman Coding

Construct a quaternary Huffman code for the following set of message symbols with the respective probabilities:

Symbol:      A    B    C    D    E    F    G    H
Probability: 0.22 0.20 0.18 0.15 0.10 0.08 0.05 0.02

Step 1: Number of stages n = (N − r)/(r − 1) = (8 − 4)/(4 − 1) = 4/3, which is not an integer.

The next value of N that gives an integer is N = 10 (add two dummy symbols of zero probability):

n = (N − r)/(r − 1) = (10 − 4)/(4 − 1) = 2

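The stage calculation, including the number of dummy zero-probability symbols that must be added, can be written as a small sketch (function name illustrative):

def rary_huffman_stages(N, r):
    # Return (dummy symbols to add, combining stages) so that
    # (N' - r) is divisible by (r - 1) for an r-ary Huffman code.
    dummies = 0
    while (N + dummies - r) % (r - 1) != 0:
        dummies += 1
    return dummies, (N + dummies - r) // (r - 1)

print(rary_huffman_stages(8, 4))  # (2, 2): add 2 dummy symbols, giving N = 10 and n = 2
print(rary_huffman_stages(7, 2))  # (0, 5): the binary case never needs dummy symbols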



Example 3: Huffman Coding Based on Frequency


Extended Huffman Coding
Consider a DMS with three possible symbols xi , i = 1, 2, 3 and the corresponding probabilities
P(x1 ) = 0.4, P(x2 ) = 0.35, P(x3 ) = 0.25
Codewords using Huffman Algorithm

The entropy of this source is


H(X) = − Σ_{k=1}^{3} P(x_k) log2 P(x_k) = 1.5589 bits

The average number of binary digits per symbol is

R̄ = Σ_{k=1}^{3} n_k P(x_k) = 1(0.40) + 2(0.35) + 2(0.25) = 1.60 bits

The efficiency of this code is η = 1.5589/1.60 = 0.9743
Extended Huffman Coding
Group the symbols for 2nd order extension

The entropy of this source is


2H(X) = − Σ_{k=1}^{9} P(x_k) log2 P(x_k) = 3.1177 bits ⟹ H(X) = 1.5589 bits

The average number of binary digits per symbol is

R̄_B = Σ_{k=1}^{9} n_k P(x_k) = 3.1775 bits per symbol pair ⟹ R̄ = R̄_B/2 = 1.5888

The Efficiency of this code is η = 1.5589/1.5888 = 0.9812


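The nine pair probabilities and the entropy of the second-order extension can be generated with a short sketch (symbol names are illustrative):

import math
from itertools import product

probs = {"x1": 0.4, "x2": 0.35, "x3": 0.25}

# Second-order extension: every ordered pair of symbols becomes one super-symbol.
pair_probs = {a + b: pa * pb
              for (a, pa), (b, pb) in product(probs.items(), repeat=2)}

H2 = -sum(p * math.log2(p) for p in pair_probs.values())
print(len(pair_probs), round(H2, 4), round(H2 / 2, 4))  # 9 3.1177 1.5589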
Shannon - Fano Coding
Shannon’s First Encoding Algorithm

Codes that use codeword lengths l(x) = ⌈log2(1/P(x))⌉ are called Shannon codes. Shannon
codeword lengths satisfy the Kraft inequality.
Steps:
1. Given the source alphabet S and the corresponding probabilities P for a given information
source
2. Arrange the probabilities in non-increasing order
3. Compute the length l_i of the codeword corresponding to each symbol s_i from its
probability p_i as the smallest integer satisfying

l_i ≥ log2 (1/p_i)



Shannon’s First Encoding Algorithm

4. Define the following parameters from the probability set:

q_1 = 0
q_2 = p_1 = q_1 + p_1
q_3 = p_1 + p_2 = q_2 + p_2
q_4 = p_1 + p_2 + p_3 = q_3 + p_3
...
q_{N+1} = 1

5. Expand q_i in binary to l_i places after the binary point
6. The digits after the binary point in this representation of q_i form the codeword for
the corresponding symbol s_i (a small sketch of these steps follows)

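A compact Python sketch of these steps (the function name shannon_code is illustrative, not from the slides):

import math

def shannon_code(probs):
    # Shannon's first encoding: probs is a list of probabilities.
    # Returns (probability, length, codeword) tuples in non-increasing probability order.
    p = sorted(probs, reverse=True)                        # step 2
    lengths = [math.ceil(math.log2(1 / pi)) for pi in p]   # step 3: smallest l_i >= log2(1/p_i)
    q, out = 0.0, []
    for pi, li in zip(p, lengths):
        frac, bits = q, ""
        for _ in range(li):                                # step 5: expand q_i to l_i binary places
            frac *= 2
            bits += "1" if frac >= 1 else "0"
            frac -= int(frac)
        out.append((pi, li, bits))
        q += pi                                            # step 4: q_{i+1} = q_i + p_i
    return out

print(shannon_code([0.1, 0.2, 0.3, 0.4]))
# [(0.4, 2, '00'), (0.3, 2, '01'), (0.2, 3, '101'), (0.1, 4, '1110')]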


Example : Shannon’s First Encoding Algorithm

Consider a source with alphabet S = {A, B, C, D} and corresponding probabilities
P = (0.1, 0.2, 0.3, 0.4). Find the codewords for the symbols using Shannon's algorithm. Also find
the source efficiency and redundancy.
1. Arrange the probabilities in non-increasing order:

P = (0.4, 0.3, 0.2, 0.1)   S = (D, C, B, A)

2. Find the minimum value of l_i such that l_i ≥ log2 (1/p_i):
l_1 ≥ log2 (1/0.4) ⟹ l_1 = 2
l_2 ≥ log2 (1/0.3) ⟹ l_2 = 2
l_3 ≥ log2 (1/0.2) ⟹ l_3 = 3
l_4 ≥ log2 (1/0.1) ⟹ l_4 = 4



Example: Shannon’s First Encoding Algorithm

3. Calculate the parameters q_i:

q_1 = 0
q_2 = p_1 = 0.4
q_3 = p_1 + p_2 = 0.7
q_4 = p_1 + p_2 + p_3 = 0.9
q_5 = p_1 + p_2 + p_3 + p_4 = 1

4. Represent q_1, q_2, q_3, q_4 in binary up to l_i places after the binary point:
q_1 = (0.0)_10 = (0.00)_2
q_2 = (0.4)_10 = (0.01)_2
q_3 = (0.7)_10 = (0.101)_2
q_4 = (0.9)_10 = (0.1110)_2



Example: Shannon’s First Encoding Algorithm

5. The codewords are:

Symbol   Probability   Codeword   Length
D        0.4           00         2
C        0.3           01         2
B        0.2           101        3
A        0.1           1110       4

6. The entropy is H(X) = Σ_{k=1}^{L} p(x_k) log2 (1/p_k) = 1.8464 bits/symbol
7. The average code length is R̄ = Σ_{k=1}^{L} n_k p(x_k) = 2.4 bits/symbol
8. The efficiency is η = 1.8464/2.4 = 0.7693 ⟹ 76.93%
9. The redundancy is E = 1 − η = 0.2307



Shannon-Fano Encoding Algorithm

This is an improvement over Shannon’s first algorithm. It offers better coding efficiency
compared to Shannon’s algorithm.
Steps:
1. Arrange the probabilities in non-increasing order
2. Group the probabilities into exactly two sets such that the sums of probabilities in the
two groups are approximately equal
3. Assign bit '0' to all elements of the first group and bit '1' to all elements of the second group
4. Repeat Step 2, dividing each group into two subgroups, until no further division is possible
(a minimal sketch of this recursion follows)

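A recursive Python sketch of this procedure (the split criterion and the function name are illustrative choices):

def shannon_fano(symbols):
    # symbols: list of (symbol, probability), already sorted in non-increasing order.
    # Returns dict symbol -> codeword.
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total, running, split, best = sum(p for _, p in symbols), 0.0, 1, float("inf")
    for i in range(1, len(symbols)):      # choose the split that makes the two group sums closest
        running += symbols[i - 1][1]
        if abs(2 * running - total) < best:
            best, split = abs(2 * running - total), i
    codes = {s: "0" + c for s, c in shannon_fano(symbols[:split]).items()}
    codes.update({s: "1" + c for s, c in shannon_fano(symbols[split:]).items()})
    return codes

src = [("D", 0.35), ("C", 0.25), ("B", 0.15), ("A", 0.10), ("E", 0.08), ("F", 0.07)]
print(shannon_fano(src))
# a valid prefix code; where two splits are equally balanced, the tie may be broken
# differently from the slides, so individual codeword lengths can differ from the table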


Example: Shannon-Fano Encoding Algorithm
Consider the following source: S = (A, B, C, D, E, F) with the following probabilities
P = {0.1, 0.15, 0.25, 0.35, 0.08, 0.07}.
Steps:
1. Arrange the given probabilities in non-increasing order. Divide the probabilities into
two almost equiprobable groups.



Example:Shannon-Fano Encoding Algorithm

2. The codewords using the Shannon-Fano algorithm are:



Example:Shannon-Fano Encoding Algorithm

3. The entropy of the source is

H(X) = Σ_{k=1}^{L} p(x_k) log2 (1/p_k) = 2.33 bits/symbol

4. The average code length is

R̄ = Σ_{k=1}^{L} n_k p(x_k)
R̄ = (0.35)2 + (0.25)2 + (0.15)2 + (0.10)3 + (0.08)4 + (0.07)4
R̄ = 2.4 bits/symbol

5. The efficiency is η = 2.33/2.4 = 0.9708 ⟹ 97.08%
6. The redundancy is E = 1 − η = 1 − 0.9708 = 0.0292 ⟹ 2.92%



Shannon-Fano-Elias Coding
Codes that use codeword lengths l(x) = ⌈log2(1/P(x))⌉ are called Shannon codes.
Shannon-Fano-Elias coding uses the cumulative distribution function to allocate codewords.
The cumulative distribution function is defined as

F(x) = Σ_{z≤x} P(z)

where P(z) is the probability of occurrence of z.

The modified cumulative distribution function is

F̄(x) = Σ_{z<x} P(z) + (1/2) P(x)

where F̄(x) represents the sum of the probabilities of all symbols less than x plus half the
probability of the symbol x.
Note: In this code, there is no need to arrange the probabilities in descending order.
Example : Shannon-Fano-Elias Coding

PROBLEM:
Construct the Shannon-Fano-Elias code for the source symbols x_1, x_2, x_3, x_4 with probabilities
1/2, 1/4, 1/8, 1/8.
STEPS:
1. Find F(x) = Σ_{z≤x} P(z) (add the current symbol's probability to all previous probabilities)

Symbol   Probability   F(x)
x_1      1/2           0.5
x_2      1/4           0.75
x_3      1/8           0.875
x_4      1/8           1

For example, F(x_4) = 1/2 + 1/4 + 1/8 + 1/8 = 1



Example : Shannon-Fano-Elias Coding

2. Find F̄(x) = Σ_{z<x} P(z) + (1/2) P(x) (add the probabilities of all symbols before x and half
the probability of the symbol x itself)

Symbol   Probability   F(x)    F̄(x)
x_1      1/2           0.5     0.25
x_2      1/4           0.75    0.625
x_3      1/8           0.875   0.8125
x_4      1/8           1       0.9375

For example, F̄(x_3) = 1/2 + 1/4 + (1/8)/2 = 0.8125



Example : Shannon-Fano-Elias Coding

3. Find F̄(x) in binary form (convert the decimal fractions to binary)

Symbol   Probability   F(x)    F̄(x)     F̄(x) in binary
x_1      1/2           0.5     0.25     0.01
x_2      1/4           0.75    0.625    0.101
x_3      1/8           0.875   0.8125   0.1101
x_4      1/8           1       0.9375   0.1111

For example, to convert F̄(x_3) = 0.8125 into binary:
0.8125 × 2 = 1.625
0.625 × 2 = 1.25
0.25 × 2 = 0.5
0.5 × 2 = 1.0  ⟹ (0.1101)_2



Example : Shannon-Fano-Elias Coding

4. Determine the length of the codeword using l(x) = ⌈log2(1/P(x))⌉ + 1

Symbol   Probability   F(x)    F̄(x)     F̄(x) in binary   l(x)
x_1      1/2           0.5     0.25     0.01             2
x_2      1/4           0.75    0.625    0.101            3
x_3      1/8           0.875   0.8125   0.1101           4
x_4      1/8           1       0.9375   0.1111           4

For example, l_3 = ⌈log2(1/(1/8))⌉ + 1 = 3 + 1 = 4



Example : Shannon-Fano-Elias Coding

5. Write the codeword by taking the first l(x) binary digits of F̄(x)

Symbol   Probability   F(x)    F̄(x)     F̄(x) in binary   l(x)   Code
x_1      1/2           0.5     0.25     0.01             2      01
x_2      1/4           0.75    0.625    0.101            3      101
x_3      1/8           0.875   0.8125   0.1101           4      1101
x_4      1/8           1       0.9375   0.1111           4      1111

For example, for x_3, F̄(x_3) in binary is 0.1101 and l(x_3) = 4, hence the codeword is 1101
6. The entropy of this source is 1.75 bits
7. The average codeword length is 2.75 bits
(a sketch of the whole procedure follows)

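The whole procedure condenses into a short Python sketch (the function name sfe_code is illustrative):

import math

def sfe_code(probs):
    # Shannon-Fano-Elias coding. probs: list of (symbol, probability); no sorting needed.
    # Returns (symbol, F_bar, length, codeword) tuples.
    out, F = [], 0.0
    for sym, p in probs:
        F_bar = F + p / 2                          # modified CDF: previous probabilities + p/2
        length = math.ceil(math.log2(1 / p)) + 1   # l(x) = ceil(log2 1/P(x)) + 1
        frac, bits = F_bar, ""
        for _ in range(length):                    # binary expansion of F_bar to l(x) places
            frac *= 2
            bits += "1" if frac >= 1 else "0"
            frac -= int(frac)
        out.append((sym, F_bar, length, bits))
        F += p                                     # ordinary CDF F(x)
    return out

for row in sfe_code([("x1", 0.5), ("x2", 0.25), ("x3", 0.125), ("x4", 0.125)]):
    print(row)
# ('x1', 0.25, 2, '01'), ('x2', 0.625, 3, '101'), ('x3', 0.8125, 4, '1101'), ('x4', 0.9375, 4, '1111')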


Arithmetic Coding
INTRODUCTION

Huffman codes are optimal only if the probabilities of the symbols are negative powers of two,
because all prefix codes work at the bit level.
1. Prefix codes try to match the self-information of the symbols using codewords whose lengths
are integers. This length matching may make a codeword either longer or shorter than the
self-information.
2. If prefix codes are generated using a binary tree, the decisions between tree branches always
take one bit.
3. Arithmetic coding does not have this restriction: it works by representing the file to be
encoded by an interval of real numbers between 0 and 1. Successive symbols in the
message reduce this interval in accordance with their probabilities; more
likely symbols reduce the range by less and thus add fewer bits to the message.



Example:Arithmetic Coding

Consider a discrete memoryless source with S = {A, B, C} and respective probabilities
P = {0.5, 0.25, 0.25}. Find the codeword for the message 'B A C A' using arithmetic coding.
Step 1: Divide the interval [0, 1) into three intervals proportional to the probabilities. Thus
A corresponds to [0, 0.5), B corresponds to [0.5, 0.75) and C corresponds to [0.75, 1).



Example:Arithmetic Coding

Step 2: The first letter to be encoded is 'B'; the corresponding interval is [0.5, 0.75).



Example:Arithmetic Coding

Step 3: The next letter to be encoded is 'A'.

The interval for B, [0.5, 0.75), is further divided into three as [0.5, 0.625), [0.625, 0.6875),
[0.6875, 0.75).
Step 4: Map A to the interval [0.5, 0.625).



Example:Arithmetic Coding

Step 5: The next letter to be encoded is 'C'.

The interval for A, [0.5, 0.625), is further divided into three as [0.5, 0.5625), [0.5625, 0.59375),
[0.59375, 0.625).
Step 6: Map C to the interval [0.59375, 0.625).



Example:Arithmetic Coding

Step 7: The next letter to be encoded is 'A'.

The interval for C, [0.59375, 0.625), is further divided into three as [0.59375, 0.609375),
[0.609375, 0.6171875), [0.6171875, 0.625).
Step 8: Map A to the interval [0.59375, 0.609375).



Example:Arithmetic Coding

Step 9: Hence the codeword for 'BACA' lies anywhere in the interval [0.59375, 0.609375). We
choose the lower end of the interval, 0.59375 (an encoding sketch follows).

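The interval-narrowing steps above can be written as a short Python sketch (the function name is illustrative; the symbol intervals follow the slides):

def arithmetic_encode(message, probs):
    # probs: dict symbol -> probability. Returns the final interval [low, high);
    # any number inside it (e.g. the lower end) can be transmitted as the codeword.
    cum, c = {}, 0.0
    for s, p in probs.items():              # cumulative boundaries of each symbol in [0, 1)
        cum[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in message:                       # successive symbols narrow the interval
        width = high - low
        lo_frac, hi_frac = cum[s]
        low, high = low + width * lo_frac, low + width * hi_frac
    return low, high

print(arithmetic_encode("BACA", {"A": 0.5, "B": 0.25, "C": 0.25}))
# (0.59375, 0.609375) -> transmit the lower end, 0.59375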


Example: Arithmetic Coding - Decoding
Consider a discrete memoryless source with S = {A, B, C} and respective probabilities
P = {0.5, 0.25, 0.25} used for transmission. The received arithmetic codeword is 0.59375.
Find the message transmitted.
1. Reverse the process of encoding.
2. First, check where the number 0.59375 lies: clearly in [0.5, 0.75). This segment corresponds to B.
3. This segment is then further divided into three sub-intervals, and the sub-interval containing
0.59375 identifies the next symbol; this is repeated at every step (a decoding sketch follows).

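Decoding reverses the process exactly as in the steps above (a sketch; here the number of symbols to decode is assumed to be known and is passed explicitly):

def arithmetic_decode(value, probs, n_symbols):
    # Recover n_symbols symbols from the received codeword value in [0, 1).
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)
        c += p
    out = []
    for _ in range(n_symbols):
        for s, (lo, hi) in cum.items():           # find the segment containing the value
            if lo <= value < hi:
                out.append(s)
                value = (value - lo) / (hi - lo)  # rescale to [0, 1) and repeat
                break
    return "".join(out)

print(arithmetic_decode(0.59375, {"A": 0.5, "B": 0.25, "C": 0.25}, 4))  # 'BACA'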
