An Application of Generalized Tsallis-Havrda-Charvat Entropy in Coding Theory Through a Generalization of Kraft Inequality
Dhanesh Garg
Maharishi Markendeshwar University, Mullana, Ambala, Haryana, India
Abstract
A parametric mean length is defined as the quantity
$$L_u=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}-1\right],$$
where $0<\alpha<1$, $\beta>0$, $u_i>0$, $D>1$ is an integer and $\sum_{i=1}^{N} p_i\le 1$. This is the useful mean length of code words weighted by the utilities $u_i$. Lower and upper bounds for $L_u$ are derived in terms of a generalized Tsallis-Havrda-Charvat useful information measure.
1. Introduction
Consider the following model for a random experiment $S$,
$$S_N=[E;\,P;\,U],$$
where $E=(E_1,E_2,\ldots,E_N)$ is a finite system of events happening with respective probabilities $P=(p_1,p_2,\ldots,p_N)$, $p_i\ge 0$, $\sum_{i=1}^{N} p_i\le 1$, and credited with utilities $U=(u_1,u_2,\ldots,u_N)$, $u_i>0$, $i=1,2,\ldots,N$. Denote the model by $S_N$, where
$$S_N=\begin{bmatrix}E_1,&E_2,&\ldots,&E_N\\ p_1,&p_2,&\ldots,&p_N\\ u_1,&u_2,&\ldots,&u_N\end{bmatrix}. \qquad (1.1)$$
We call (1.1) a Utility Information Scheme (UIS). Belis and Guiasu [2] proposed a measure of information, called useful information, for this scheme, given by
$$H(U;P)=-\sum_{i=1}^{N} u_i\, p_i\log(p_i). \qquad (1.2)$$
Guiasu and Picard [4] considered the problem of encoding the outcomes of (1.1) by means of a prefix code with codeword lengths $n_1,n_2,\ldots,n_N$ satisfying the Kraft inequality
$$\sum_{i=1}^{N} D^{-n_i}\le 1, \qquad (1.3)$$
where $D$ is the size of the code alphabet. The useful mean length $L_u$ of the code was defined as
$$L_u=\frac{\sum_{i=1}^{N} u_i\, n_i\, p_i}{\sum_{i=1}^{N} u_i\, p_i}, \qquad (1.4)$$
and the authors obtained bounds for it in terms of $H(U;P)$. Generalized coding theorems obtained by considering different generalized measures under the condition (1.3) of unique decipherability have been investigated by several authors; see, for instance, [4, 8-10, 13].
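To make the classical quantities (1.2)-(1.4) concrete, here is a minimal numerical sketch. The source probabilities, utilities and codeword lengths below are hypothetical, and the logarithm in (1.2) is taken to the base $D$ of the code alphabet.

```python
import math

# Hypothetical utility information scheme (1.1): probabilities, utilities, code lengths.
p = [0.5, 0.25, 0.25]   # p_i
u = [2.0, 1.0, 0.5]     # u_i
n = [1, 2, 2]           # n_i, lengths of a binary prefix code: 0, 10, 11
D = 2                   # size of the code alphabet

# Belis-Guiasu useful information (1.2), with the logarithm taken to base D.
H_useful = -sum(ui * pi * math.log(pi, D) for ui, pi in zip(u, p))

# Kraft inequality (1.3): sum_i D^{-n_i} <= 1.
kraft_sum = sum(D ** -ni for ni in n)

# Useful mean length (1.4).
L_u = sum(ui * ni * pi for ui, ni, pi in zip(u, n, p)) / sum(ui * pi for ui, pi in zip(u, p))

print(f"H(U;P) = {H_useful:.4f}")
print(f"Kraft sum = {kraft_sum:.4f} (must be <= 1)")
print(f"L_u = {L_u:.4f}")
```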
In this paper, we study some coding theorems by considering a new function depending on the parameters $\alpha$, $\beta$ and a utility function. Our motivation for studying this new function is that it generalizes useful information measures already existing in the literature, such as Tsallis entropy [17] and Havrda-Charvat entropy [6].
2. Coding Theorems
In this section, we define a new information measure as
$$H_{\alpha}^{\beta}(U;P)=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}-1\right],\qquad 0<\alpha<1,\ \beta>0. \qquad (2.1)$$
(i) When $\beta=1$, (2.1) reduces to a useful information measure of order $\alpha$,
$$H_{\alpha}(U;P)=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha}}{\sum_{i=1}^{N} u_i\, p_i}-1\right]. \qquad (2.2)$$
(ii) When $u_i=1$ for each $i$, i.e., when the utility aspect is ignored, $\sum_{i=1}^{N} p_i=1$ and $\beta=1$, then (2.1) reduces to the Tsallis-Havrda-Charvat entropy,
$$H_{\alpha}(P)=\frac{1}{1-\alpha}\left[\sum_{i=1}^{N} p_i^{\alpha}-1\right]. \qquad (2.3)$$
(iii) When $\alpha\rightarrow 1$ and $\beta=1$, then (2.1) reduces to a measure of useful information due to Bhaker and Hooda [1],
$$H(U;P)=-\frac{\sum_{i=1}^{N} u_i\, p_i\log(p_i)}{\sum_{i=1}^{N} u_i\, p_i}. \qquad (2.4)$$
(iv) When $u_i=1$ for each $i$, then (2.1) reduces to the entropy studied by Satish and Arun [13],
$$H_{\alpha}^{\beta}(P)=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} p_i^{\beta}}-1\right]. \qquad (2.5)$$
(v) When $u_i=1$ for each $i$, i.e., when the utility aspect is ignored, $\sum_{i=1}^{N} p_i=1$, $\beta=1$ and $\alpha\rightarrow 1$, the measure (2.1) reduces to Shannon's entropy [15],
$$H(P)=-\sum_{i=1}^{N} p_i\log(p_i). \qquad (2.6)$$
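As a quick sanity check of the reductions (ii)-(v), the sketch below evaluates the form of (2.1) reconstructed above and compares it with the Tsallis-Havrda-Charvat, Bhaker-Hooda and Shannon expressions. Natural logarithms are assumed in (2.4) and (2.6), and the probabilities and utilities are illustrative only.

```python
import math

def H_new(p, u, alpha, beta):
    """Measure (2.1) as reconstructed: (1/(1-alpha)) * (sum u_i p_i^(alpha+beta-1) / sum u_i p_i^beta - 1)."""
    num = sum(ui * pi ** (alpha + beta - 1.0) for ui, pi in zip(u, p))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

p = [0.5, 0.3, 0.2]
u = [3.0, 1.0, 2.0]
ones = [1.0] * len(p)
alpha = 0.6

# (ii) u_i = 1, beta = 1: Tsallis-Havrda-Charvat entropy (2.3).
tsallis = (sum(pi ** alpha for pi in p) - 1.0) / (1.0 - alpha)
print(H_new(p, ones, alpha, 1.0), tsallis)

# (iii) alpha -> 1, beta = 1: Bhaker-Hooda useful information (2.4).
useful = -sum(ui * pi * math.log(pi) for ui, pi in zip(u, p)) / sum(ui * pi for ui, pi in zip(u, p))
print(H_new(p, u, 1.0 - 1e-6, 1.0), useful)

# (v) u_i = 1, beta = 1, alpha -> 1: Shannon entropy (2.6).
shannon = -sum(pi * math.log(pi) for pi in p)
print(H_new(p, ones, 1.0 - 1e-6, 1.0), shannon)
```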
Further, consider the following definition.
Definition: The useful mean length $L_u$ with respect to the useful information measure (2.1) is defined as
$$L_u=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}-1\right], \qquad (2.7)$$
under the condition
$$\sum_{i=1}^{N} u_i\, D^{-n_i}\le \sum_{i=1}^{N} u_i\, p_i. \qquad (2.8)$$
Clearly the inequality (2.8) is a generalization of Kraft's inequality (1.3). A code satisfying (2.8) will be termed a useful personal probability code. Here $D$ ($D\ge 2$) is the size of the code alphabet. When $u_i=1$ for each $i$ and $\sum_{i=1}^{N} p_i=1$, (2.8) reduces to (1.3).
We note the following particular cases of (2.7):
(i) For $u_i=1$ for each $i$, $\beta=1$ and $\alpha\rightarrow 1$, $L_u$ becomes the optimal code length defined by Shannon [15].
(ii) For $u_i=1$ for each $i$ and $\beta=1$, (2.7) becomes a new mean codeword length corresponding to the Tsallis entropy,
$$L(\alpha)=\frac{1}{1-\alpha}\left[\sum_{i=1}^{N} p_i\, D^{-n_i(\alpha-1)}-1\right]. \qquad (2.9)$$
(iii) If $\beta=1$, then (2.7) becomes a new mean codeword length corresponding to the entropy (2.2),
$$L_u(\alpha)=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} u_i\, p_i\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i}-1\right].$$
(iv) If $u_i=1$, then (2.7) becomes a mean codeword length corresponding to the entropy (2.5),
$$L(\alpha,\beta)=\frac{1}{1-\alpha}\left[\frac{\sum_{i=1}^{N} p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} p_i^{\beta}}-1\right].$$
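The following sketch evaluates the useful mean length (2.7) for a code satisfying the generalized Kraft inequality (2.8), using the forms reconstructed above; the scheme is again hypothetical, and the last line illustrates that (2.8) collapses to the ordinary Kraft inequality (1.3) when the utilities are ignored and $\sum p_i=1$.

```python
def L_u(p, u, n, alpha, beta, D=2):
    """Useful mean length (2.7) with the reconstructed exponent D^{-n_i(alpha-1)}."""
    num = sum(ui * pi ** beta * D ** (-ni * (alpha - 1.0)) for ui, pi, ni in zip(u, p, n))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

def generalized_kraft_holds(p, u, n, D=2):
    """Condition (2.8): sum_i u_i D^{-n_i} <= sum_i u_i p_i."""
    return sum(ui * D ** -ni for ui, ni in zip(u, n)) <= sum(ui * pi for ui, pi in zip(u, p))

p = [0.5, 0.25, 0.25]
u = [2.0, 1.0, 0.5]
n = [1, 2, 2]                                      # binary prefix code: 0, 10, 11

print(generalized_kraft_holds(p, u, n))            # True: (2.8) is satisfied
print(L_u(p, u, n, alpha=0.7, beta=1.2))           # the corresponding useful mean length
print(generalized_kraft_holds(p, [1.0] * 3, n))    # with u_i = 1 and sum p_i = 1, this is just (1.3)
```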
We now establish a result that, in a sense, provides a characterization of $H_{\alpha}^{\beta}(U;P)$ under the condition of unique decipherability.
Theorem 2.1. If the codeword lengths $n_1,n_2,\ldots,n_N$ satisfy (2.8), then
$$L_u\ge H_{\alpha}^{\beta}(U;P). \qquad (2.10)$$
Proof: The proof uses Hölder's inequality
$$\sum_{i=1}^{N} x_i\, y_i\ge\left(\sum_{i=1}^{N} x_i^{p}\right)^{1/p}\left(\sum_{i=1}^{N} y_i^{q}\right)^{1/q}, \qquad (2.11)$$
valid for $p<1$ ($p\ne 0$), $\frac{1}{p}+\frac{1}{q}=1$ and $x_i, y_i>0$, with a suitable choice of the sequences $x_i$ and $y_i$.
Putting these values in (2.11) and using the inequality (2.8), we get
$$\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\right]^{\frac{1}{\alpha-1}}\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\right]^{\frac{1}{1-\alpha}}\le\frac{\sum_{i=1}^{N} u_i\, p_i}{\sum_{i=1}^{N} u_i\, p_i}=1. \qquad (2.13)$$
It implies
$$\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\right]^{\frac{1}{1-\alpha}}\le\left[\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\right]^{\frac{1}{1-\alpha}}, \qquad (2.14)$$
and hence
$$\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\ge\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}. \qquad (2.15)$$
Since $\frac{1}{1-\alpha}>0$ for $0<\alpha<1$, we get from (2.15) the inequality (2.10).
Now choose the codeword lengths $n_i$ to be the unique integers satisfying
$$D^{-n_i}\le p_i< D^{-n_i+1}, \qquad (2.16)$$
which is equivalent to
$$\frac{1}{p_i}\le D^{n_i}<\frac{D}{p_i}. \qquad (2.17)$$
In the following theorem, we give an upper bound for $L_u$ in terms of $H_{\alpha}^{\beta}(U;P)$.
Theorem 2.2. By properly choosing the lengths $n_1,n_2,\ldots,n_N$ in the code of Theorem 2.1, $L_u$ can be made to satisfy the following inequality:
$$L_u< D^{(1-\alpha)}\, H_{\alpha}^{\beta}(U;P)+\frac{1}{1-\alpha}\left[D^{(1-\alpha)}-1\right]. \qquad (2.18)$$
Proof: Let $n_i$ be the unique integers satisfying (2.16)-(2.17). From the right-hand inequality of (2.17), raising both sides to the power $(1-\alpha)>0$ gives
$$D^{-n_i(\alpha-1)}< D^{(1-\alpha)}\, p_i^{(\alpha-1)}. \qquad (2.19)$$
Multiplying both sides by $u_i\, p_i^{\beta}$ and then summing over $i$, we get
$$\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}< D^{(1-\alpha)}\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}, \qquad (2.20)$$
so that
$$\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}< D^{(1-\alpha)}\,\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}. \qquad (2.21)$$
Subtracting 1 from both sides and multiplying by $\frac{1}{1-\alpha}>0$ yields (2.18).
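A numerical check of the two bounds, under the definitions reconstructed above and with the codeword lengths chosen as in (2.16), i.e. $n_i=\lceil\log_D(1/p_i)\rceil$; the data are illustrative, and the check only confirms Theorems 2.1 and 2.2 for this particular example.

```python
import math

def H_new(p, u, alpha, beta):
    num = sum(ui * pi ** (alpha + beta - 1.0) for ui, pi in zip(u, p))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

def L_u(p, u, n, alpha, beta, D):
    num = sum(ui * pi ** beta * D ** (-ni * (alpha - 1.0)) for ui, pi, ni in zip(u, p, n))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

p = [0.5, 0.3, 0.2]
u = [3.0, 1.0, 2.0]
D, alpha, beta = 2, 0.7, 1.2

# Lengths chosen as in (2.16): D^{-n_i} <= p_i < D^{-n_i+1}.
n = [math.ceil(math.log(1.0 / pi, D)) for pi in p]

H = H_new(p, u, alpha, beta)
L = L_u(p, u, n, alpha, beta, D)
upper = D ** (1.0 - alpha) * H + (D ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

print(n)                   # [1, 2, 3]
print(H <= L < upper)      # True: (2.10) and (2.18) hold for this example
print(H, L, upper)
```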
Theorem 2.3. For arbitrary $N$, $0<\alpha<1$ and $\beta>0$, the codeword lengths $n_i$, $i=1,2,\ldots,N$, of Theorem 2.1 can be chosen so that $L_u$ satisfies the following inequality:
$$H_{\alpha}^{\beta}(U;P)\le L_u< D^{(1-\alpha)}\, H_{\alpha}^{\beta}(U;P)+\frac{1}{1-\alpha}\left[D^{(1-\alpha)}-1\right]. \qquad (2.22)$$
Proof: Suppose
$$n_i=\log_D\!\left(\frac{1}{p_i}\right), \qquad i=1,2,\ldots,N. \qquad (2.23)$$
Clearly $n_i$ and $n_i+1$ satisfy the equality in Hölder's inequality (2.11). Moreover, $n_i$ satisfies (2.8). Suppose $\bar{n}_i$ is the unique integer between $n_i$ and $n_i+1$; then obviously $\bar{n}_i$ also satisfies (2.8).
Since $1-\alpha>0$ and $\beta>0$, we have
$$\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\le\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-\bar{n}_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}< D^{(1-\alpha)}\,\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}. \qquad (2.24)$$
Since, by (2.23), $D^{-n_i(\alpha-1)}=p_i^{(\alpha-1)}$,
$$\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-n_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}=\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}},$$
and the inequality (2.24) becomes
$$\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}\le\frac{\sum_{i=1}^{N} u_i\, p_i^{\beta}\, D^{-\bar{n}_i(\alpha-1)}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}}< D^{(1-\alpha)}\,\frac{\sum_{i=1}^{N} u_i\, p_i^{\alpha+\beta-1}}{\sum_{i=1}^{N} u_i\, p_i^{\beta}},$$
from which (2.22) follows on subtracting 1 throughout and multiplying by $\frac{1}{1-\alpha}>0$.
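Finally, a small parameter sweep suggesting that the sandwich (2.22) behaves as stated when the lengths are the integers $\bar{n}_i$ between $\log_D(1/p_i)$ and $\log_D(1/p_i)+1$; this is only an illustration under the reconstructed definitions, with hypothetical data, not a proof.

```python
import math

def H_new(p, u, alpha, beta):
    num = sum(ui * pi ** (alpha + beta - 1.0) for ui, pi in zip(u, p))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

def L_u(p, u, n, alpha, beta, D):
    num = sum(ui * pi ** beta * D ** (-ni * (alpha - 1.0)) for ui, pi, ni in zip(u, p, n))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

p = [0.4, 0.35, 0.15, 0.1]
u = [1.5, 4.0, 0.5, 2.0]
D = 2
# The integer between log_D(1/p_i) and log_D(1/p_i) + 1.
n_bar = [math.ceil(math.log(1.0 / pi, D)) for pi in p]

for alpha in (0.2, 0.5, 0.8):
    for beta in (0.5, 1.0, 2.0):
        H = H_new(p, u, alpha, beta)
        L = L_u(p, u, n_bar, alpha, beta, D)
        upper = D ** (1.0 - alpha) * H + (D ** (1.0 - alpha) - 1.0) / (1.0 - alpha)
        assert H <= L < upper, (alpha, beta)
print("Inequality (2.22) holds for all tested (alpha, beta).")
```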
3. References
1. Bhaker US, Hooda DS. Mean value Characterization of useful information measures, Tamkang J Math. 1993; 24:283-294.
2. Belis M, Guiasu S. A Quantitative-Qualitative Measure of Information in Cybernetic Systems, IEEE Trans. Information Theory, 1968; IT-14:593-594.
3. Feinstein A. Foundations of Information Theory, McGraw Hill, New York, 1958.
4. Guiasu S, Picard CF. Borne Inférieure de la Longueur Utile de Certains Codes, C.R. Acad. Sci. Paris, 1971; 273A:248-251.
5. Gurdial, Pessoa F. On Useful Information of Order α, J. Comb. Information and Syst. Sci. 1977; 2:158-162.
6. Havrda J, Charvat F. Quantification Method of Classification Processes: Concept of Structural α-Entropy, Kybernetika, 1967; 3:30-35.
7. Kumar S. Some more results on R-Norm information measure, Tamkang Journal of Mathematics, 2009; 40(1):41-58.
8. Kumar S. Some more results on a generalized useful R-Norm information measure, Tamkang Journal of Mathematics,
2009; 40(2):211-216.
9. Kumar S, Choudhary A. Some More Noiseless Coding Theorem on Generalized R-Norm Entropy, Journal of Mathematics
Research. 2011; 3(1):125-130.
10. Kumar S, Choudhary A. Coding Theorem Connected on R-Norm Entropy, International Journal of Contemporary
Mathematical Sciences. 2011; 6(17):825-831.
11. Kumar S, Choudhary A. Some Coding Theorems Based on Three Types of the Exponential Form of Cost Functions, Open
Systems and Information Dynamics, 2012; 19(4):1-14.
12. Kumar S, Kumar R, Choudhary A. Some more results on a generalized parametric R-norm information measure of type
Alpha. Journal of Applied Science and Engg. 2014; 17(4):447-453.
13. Kumar S, Choudhary A. Some Coding Theorems on Generalized Havrda-Charvat and Tsallis Entropy, Tamkang Journal of Mathematics, 2012; 43(3):437-444.
14. Longo G. A Noiseless Coding Theorem for Sources Having Utilities, SIAM J. Appl. Math., 1976; 30(4):732-738.
15. Shannon CE. A Mathematical Theory of Communication, Bell System Tech. J. 1948; 27:379-423, 623-656.
16. Shisha O. Inequalities, Academic Press, New York. 1967.
17. Tsallis C. Possible Generalization of Boltzmann-Gibbs Statistics, J Stat Phys. 1988; 52:479-487.