Coding-Part 2

Dr. Asmita A. Moghe
Professor, Department of Information Technology
UIT RGPV-Bhopal

 Shannon–Fano Coding
 Examples with M = 2
 Example with M = 3
 Shannon–Fano coding aims to construct reasonably efficient separable binary codes.
 Let [X] be the ensemble of messages to be transmitted.
 Let [P] be their corresponding probabilities.
 Ck, the sequence of binary digits of length nk associated with message xk, must satisfy the following criteria:
 (1) No codeword Ck can be obtained from another by appending binary digits to the shorter sequence (the prefix property).
 (2) The transmission of an encoded message is reasonably efficient, i.e. 1 and 0 appear independently and with equal probabilities.
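Criterion (1) is the prefix-free property. A minimal sketch of a check for it (the function name `is_prefix_free` is illustrative, not from the source):

```python
def is_prefix_free(codewords):
    """Return True if no codeword is a prefix of another (criterion 1)."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_free(["00", "01", "100", "101"]))  # True
print(is_prefix_free(["0", "01"]))                 # False: "0" prefixes "01"
```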
 Messages are written in descending order of probability.
 The message set is then partitioned into the two most nearly equiprobable subsets [X1] and [X2].
 A 0 is assigned to each message in one subset and a 1 to each message in the other.
 The same procedure is repeated for the subsets of [X1] and [X2]: [X1] is partitioned into [X11] and [X12], and [X2] into [X21] and [X22].
 Codewords in [X11] will start with 00, those in [X12] with 01, those in [X21] with 10, and those in [X22] with 11.
 This procedure continues until each subset contains only one message.
 Note: each digit 0 or 1 in each partitioning of the probability space appears with more or less equal probability, independently of previous or subsequent partitionings.
 Hence P(0) and P(1) are more or less equal.
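The recursive partitioning procedure above can be sketched as follows. This is an illustrative implementation, not from the source: the name `shannon_fano` is hypothetical, the input is assumed already sorted in descending probability, and ties in the equiprobable split are broken at the earliest split point.

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs, sorted in descending
    probability. Returns a dict mapping name -> binary codeword."""
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in group)
        # find the split point making the two subsets most nearly equiprobable
        run, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(group)):
            run += group[i - 1][1]
            diff = abs(2 * run - total)
            if diff < best_diff:
                best_diff, best_i = diff, i
        split(group[:best_i], prefix + "0")   # first subset gets a 0
        split(group[best_i:], prefix + "1")   # second subset gets a 1

    split(symbols, "")
    return codes

# The example that follows: eight messages sorted by probability
codes = shannon_fano([("x1", 0.25), ("x6", 0.25), ("x2", 0.125), ("x8", 0.125),
                      ("x3", 0.0625), ("x4", 0.0625), ("x5", 0.0625), ("x7", 0.0625)])
print(codes)
```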
 [X] = [x1 x2 x3 x4 x5 x6 x7 x8]
 [P] = [1/4 1/8 1/16 1/16 1/16 1/4 1/16 1/8]
 Applying Shannon–Fano coding:
Message   Probability (pk)   Encoded Message   Length (nk)
x1        0.25               0 0               2
x6        0.25               0 1               2
x2        0.125              1 0 0             3
x8        0.125              1 0 1             3
x3        0.0625             1 1 0 0           4
x4        0.0625             1 1 0 1           4
x5        0.0625             1 1 1 0           4
x7        0.0625             1 1 1 1           4
L = \sum_{k=1}^{8} p_k n_k
  = (1/4 \times 2) + (1/8 \times 3) + (1/16 \times 4) + (1/16 \times 4)
    + (1/16 \times 4) + (1/4 \times 2) + (1/16 \times 4) + (1/8 \times 3)
  = 2.75 letters/message

H(X) = -\sum_{k=1}^{8} p_k \log p_k
     = -[1/4 \log 1/4 + 1/8 \log 1/8 + 1/16 \log 1/16 + 1/16 \log 1/16
        + 1/16 \log 1/16 + 1/4 \log 1/4 + 1/16 \log 1/16 + 1/8 \log 1/8]
     = 2.75 bits/message

\log M = \log 2 = 1 bit/letter

\eta = \frac{H(X)}{L \log M} = \frac{2.75}{2.75 \times 1} = 100\%
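The figures above can be checked numerically. A short sketch using the probabilities and codeword lengths from the table (all logs base 2):

```python
import math

# Probabilities for x1..x8 and the codeword lengths from the table
p = [1/4, 1/8, 1/16, 1/16, 1/16, 1/4, 1/16, 1/8]
n = [2, 3, 4, 4, 4, 2, 4, 3]

L = sum(pk * nk for pk, nk in zip(p, n))    # average codeword length
H = -sum(pk * math.log2(pk) for pk in p)    # source entropy in bits/message
eta = H / (L * math.log2(2))                # efficiency for M = 2
print(L, H, eta)  # 2.75 2.75 1.0
```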
 [X] = [x1 x2 x3 x4 x5 x6 x7]
 [p] = [0.4 0.2 0.12 0.08 0.08 0.08 0.04]

Message   Probability (pi)   Encoded Message   Length (ni)
x1        0.4                0 0               2
x2        0.2                0 1               2
x3        0.12               1 0 0             3
x4        0.08               1 0 1             3
x5        0.08               1 1 0             3
x6        0.08               1 1 1 0           4
x7        0.04               1 1 1 1           4
L = \sum_{k=1}^{7} p_k n_k
  = (0.4 \times 2) + (0.2 \times 2) + (0.12 \times 3) + (0.08 \times 3)
    + (0.08 \times 3) + (0.08 \times 4) + (0.04 \times 4)
  = 2.52 letters/message

H(X) = -\sum_{k=1}^{7} p_k \log p_k
     = -[(0.4 \log 0.4) + (0.2 \log 0.2) + (0.12 \log 0.12)
        + 3 \times (0.08 \log 0.08) + (0.04 \log 0.04)]
     = 2.42 bits/message

\eta = \frac{H(X)}{L \log M} = \frac{2.42}{2.52 \times \log 2} = 96.03\%
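The efficiency calculation generalizes to any code alphabet size. A small helper sketch (the name `efficiency` is illustrative, not from the source):

```python
import math

def efficiency(p, n, M=2):
    """Code efficiency eta = H(X) / (L * log2(M)) for source probabilities
    p and codeword lengths n, with an M-symbol code alphabet."""
    L = sum(pk * nk for pk, nk in zip(p, n))
    H = -sum(pk * math.log2(pk) for pk in p)
    return H / (L * math.log2(M))

p = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
eta = efficiency(p, [2, 2, 3, 3, 3, 4, 4])
# eta is about 0.9605; the slide's 96.03% uses H rounded to 2.42 first
print(eta)
```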
 [X] = [x1 x2 x3 x4 x5 x6 x7]
 [p] = [0.4 0.2 0.12 0.08 0.08 0.08 0.04]
Message   Probability (pi)   Encoded Message   Length (ni)
x1        0.4                0                 1
x2        0.2                1 0 0             3
x3        0.12               1 0 1             3
x4        0.08               1 1 0 0           4
x5        0.08               1 1 0 1           4
x6        0.08               1 1 1 0           4
x7        0.04               1 1 1 1           4
L = \sum_{k=1}^{7} p_k n_k
  = (0.4 \times 1) + (0.2 \times 3) + (0.12 \times 3) + (0.08 \times 4)
    + (0.08 \times 4) + (0.08 \times 4) + (0.04 \times 4)
  = 2.48 letters/message

H(X) = -\sum_{k=1}^{7} p_k \log p_k
     = -[(0.4 \log 0.4) + (0.2 \log 0.2) + (0.12 \log 0.12)
        + 3 \times (0.08 \log 0.08) + (0.04 \log 0.04)]
     = 2.42 bits/message

\eta = \frac{H(X)}{L \log M} = \frac{2.42}{2.48 \times \log 2} = 97.6\%
 The Shannon–Fano coding method is ambiguous, because more than one partitioning scheme is possible.
 Also, as M increases, forming approximately equiprobable groups becomes rather difficult, leaving little choice in the partitioning.
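The ambiguity can be seen numerically: the two binary partitionings shown above encode the same source with different average lengths. A short sketch:

```python
# Codeword lengths from the two binary Shannon-Fano partitionings of the
# same source shown in the preceding examples
p = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
first_scheme = [2, 2, 3, 3, 3, 4, 4]    # L = 2.52 letters/message
second_scheme = [1, 3, 3, 4, 4, 4, 4]   # L = 2.48 letters/message

for n in (first_scheme, second_scheme):
    L = sum(pk * nk for pk, nk in zip(p, n))
    print(round(L, 2))
```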
 [X] = [x1 x2 x3 x4 x5 x6 x7]
 [p] = [0.4 0.2 0.12 0.08 0.08 0.08 0.04]
Message   Probability (pi)   Encoded Message   Length (ni)
x1        0.4                -1                1
x2        0.2                0 -1              2
x3        0.12               0 0               2
x4        0.08               1 -1              2
x5        0.08               1 0               2
x6        0.08               1 1 -1            3
x7        0.04               1 1 0             3
L = \sum_{k=1}^{7} p_k n_k
  = (0.4 \times 1) + (0.2 \times 2) + (0.12 \times 2) + (0.08 \times 2)
    + (0.08 \times 2) + (0.08 \times 3) + (0.04 \times 3)
  = 1.72 letters/message

\eta = \frac{H(X)}{L \log M} = \frac{2.42}{1.72 \times \log 3} = 88.7\%
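The ternary figures can be checked the same way; note that each ternary letter carries log2(3) ≈ 1.585 bits, which is why the M = 3 code reaches a shorter L but not automatically a higher efficiency. A sketch under that assumption:

```python
import math

# Ternary (M = 3) code from the table above: each letter is one symbol
# from {-1, 0, 1}, carrying log2(3) bits of capacity
p = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
n = [1, 2, 2, 2, 2, 3, 3]

L = sum(pk * nk for pk, nk in zip(p, n))     # about 1.72 letters/message
H = -sum(pk * math.log2(pk) for pk in p)     # about 2.42 bits/message
eta = H / (L * math.log2(3))                 # about 0.888 (slide: 88.7% after rounding)
print(L, eta)
```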
