
International Journal of Statistics and Applied Mathematics 2016; 1(4): 01-05

ISSN: 2456-1452
Maths 2016; 1(4): 01-05
© 2016 Stats & Maths
www.mathsjournal.com
Received: 01-09-2016
Accepted: 02-10-2016

An application of generalized Tsallis-Havrda-Charvat entropy in coding theory through a generalization of Kraft inequality

Dhanesh Garg
Maharishi Markendeshwar University, Mullana, Ambala, Haryana, India

Abstract
A parametric mean length is defined as the quantity

$$L_u = \frac{1}{1-\alpha}\left[\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} - 1\right],$$

where $0 < \alpha \neq 1$, $\beta > 0$, $u_i > 0$, $D > 1$ is an integer and $\sum p_i = 1$. This is the useful mean length of code words weighted by utilities $u_i$. Lower and upper bounds for $L_u$ are derived in terms of a useful Tsallis-Havrda-Charvat information measure for the power probability distribution $p_i^{\beta}$.


Keywords: Tsallis entropy, useful Tsallis entropy, utilities, Kraft inequality, Hölder's inequality
AMS Subject Classification: 94A15, 94A17, 94A24, 26D15.

1. Introduction
Consider the following model for a random experiment $S_N$,
$$S_N = (E;\, P;\, U),$$
where $E = (E_1, E_2, \ldots, E_N)$ is a finite system of events happening with respective probabilities $P = (p_1, p_2, \ldots, p_N)$, $p_i \geq 0$, $\sum p_i = 1$, and credited with utilities $U = (u_1, u_2, \ldots, u_N)$, $u_i > 0$, $i = 1, 2, \ldots, N$. Denote the model by $S_N$, where

$$S_N = \begin{pmatrix} E_1, & E_2, & \ldots, & E_N \\ p_1, & p_2, & \ldots, & p_N \\ u_1, & u_2, & \ldots, & u_N \end{pmatrix}. \qquad (1.1)$$

We call (1.1) a Utility Information Scheme (UIS). Belis and Guiasu [2] proposed a measure of information called useful information for this scheme, given by

$$H(U;P) = -\sum u_i\, p_i \log(p_i), \qquad (1.2)$$

where $H(U;P)$ reduces to Shannon's entropy [15] when the utility aspect of the scheme is ignored, i.e., when $u_i = 1$ for each $i$. Throughout the paper, $\sum$ will stand for $\sum_{i=1}^{N}$ unless otherwise stated, and logarithms are taken to base $D$ ($D > 1$).

Guiasu and Picard [4] considered the problem of encoding the outcomes in (1.1) by means of a prefix code with codewords $w_1, w_2, \ldots, w_N$ having lengths $n_1, n_2, \ldots, n_N$ and satisfying Kraft's inequality [3]:

$$\sum_{i=1}^{N} D^{-n_i} \leq 1, \qquad (1.3)$$

where $D$ is the size of the code alphabet. The useful mean length $L_u$ of the code was defined as

$$L_u = \frac{\sum u_i\, n_i\, p_i}{\sum u_i\, p_i}, \qquad (1.4)$$

and the authors obtained bounds for it in terms of $H(U;P)$. Generalized coding theorems obtained by considering different generalized measures under the condition (1.3) of unique decipherability have been investigated by several authors; see for instance the papers [4, 8-10, 13].
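For concreteness, the quantities (1.2)-(1.4) are straightforward to evaluate numerically. The following is a minimal sketch; the probabilities, utilities and codeword lengths are illustrative choices, not taken from any source.

```python
import math

D = 2                              # size of the code alphabet
p = [0.5, 0.25, 0.125, 0.125]      # probabilities, summing to 1
u = [3.0, 1.0, 2.0, 0.5]           # utilities credited to the events
n = [1, 2, 3, 3]                   # codeword lengths of a binary prefix code

# Kraft's inequality (1.3): sum_i D^(-n_i) <= 1
kraft_sum = sum(D ** -ni for ni in n)

# Belis-Guiasu useful information (1.2): H(U;P) = -sum_i u_i p_i log_D(p_i)
H_useful = -sum(ui * pi * math.log(pi, D) for ui, pi in zip(u, p))

# Useful mean codeword length (1.4): L_u = sum_i u_i n_i p_i / sum_i u_i p_i
L_useful = (sum(ui * ni * pi for ui, ni, pi in zip(u, n, p))
            / sum(ui * pi for ui, pi in zip(u, p)))

print(kraft_sum, H_useful, L_useful)   # kraft_sum is exactly 1.0 here
```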
In this paper, we study some coding theorems by considering a new function depending on the parameters $\alpha$, $\beta$ and a utility function. Our motivation for studying this new function is that it generalizes useful information measures already existing in the literature, such as the Tsallis entropy [17] and the Havrda-Charvat entropy [6].

2. Coding Theorems
In this section, we define a new information measure as:

$$H_{\alpha}^{\beta}(U;P) = \frac{1}{1-\alpha}\left[\frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}} - 1\right], \qquad (2.1)$$

where $\alpha > 0\ (\alpha \neq 1)$, $\beta > 0$, $u_i > 0$, $p_i \geq 0$, $i = 1, 2, \ldots, N$, and $\sum p_i = 1$. (A short numerical sketch of (2.1) is given after the special cases below.)

(i) If $\beta = 1$, then (2.1) becomes a useful information measure, i.e.,

$$H_{\alpha}(U;P) = \frac{1}{1-\alpha}\left[\frac{\sum u_i\, p_i^{\alpha}}{\sum u_i\, p_i} - 1\right]. \qquad (2.2)$$

(ii) When $u_i = 1$ for each $i$, i.e., when the utility aspect is ignored, $\sum p_i = 1$ and $\beta = 1$, then (2.1) reduces to the Tsallis-Havrda-Charvat entropy, i.e.,

$$H_{\alpha}(P) = \frac{1}{1-\alpha}\left[\sum p_i^{\alpha} - 1\right]. \qquad (2.3)$$

(iii) When $\beta = 1$ and $\alpha \to 1$, then (2.1) reduces to a measure of useful information due to Hooda and Bhaker [1], i.e.,

$$H(U;P) = -\frac{\sum u_i\, p_i \log(p_i)}{\sum u_i\, p_i}. \qquad (2.4)$$

(iv) When $u_i = 1$ for each $i$, then (2.1) reduces to the entropy of Satish and Arun [13], i.e.,

$$H_{\alpha}^{\beta}(P) = \frac{1}{1-\alpha}\left[\frac{\sum p_i^{\alpha\beta}}{\sum p_i^{\beta}} - 1\right]. \qquad (2.5)$$

(v) When $u_i = 1$ for each $i$, i.e., when the utility aspect is ignored, $\sum p_i = 1$, $\beta = 1$ and $\alpha \to 1$, the measure (2.1) reduces to Shannon's entropy [15], i.e.,

$$H(P) = -\sum p_i \log(p_i). \qquad (2.6)$$
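The following minimal sketch illustrates (2.1) numerically and checks special cases (ii) and (v); the distribution, the utilities and the helper name `H_alpha_beta` are illustrative choices of ours, and the $\alpha \to 1$ limit is taken with natural logarithms (the base of the logarithm only rescales the Shannon limit).

```python
import math

def H_alpha_beta(u, p, alpha, beta):
    """Useful information measure (2.1):
    (1/(1-alpha)) * [ sum_i u_i p_i^(alpha*beta) / sum_i u_i p_i^beta - 1 ]."""
    num = sum(ui * pi ** (alpha * beta) for ui, pi in zip(u, p))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

p = [0.4, 0.3, 0.2, 0.1]          # probabilities, summing to 1
u = [2.0, 1.0, 4.0, 0.5]          # utilities
ones = [1.0] * len(p)             # u_i = 1: utility aspect ignored

# General case with utilities and both parameters
print(H_alpha_beta(u, p, alpha=0.7, beta=1.5))

# Case (ii): u_i = 1 and beta = 1 give the Tsallis-Havrda-Charvat entropy (2.3)
alpha = 0.7
tsallis = (sum(pi ** alpha for pi in p) - 1.0) / (1.0 - alpha)
print(H_alpha_beta(ones, p, alpha, 1.0), tsallis)          # identical values

# Case (v): as alpha -> 1 (with u_i = 1, beta = 1) the measure approaches
# the Shannon entropy -sum_i p_i ln(p_i) (natural units here)
shannon = -sum(pi * math.log(pi) for pi in p)
print(H_alpha_beta(ones, p, 1.0001, 1.0), shannon)         # approximately equal
```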

Further, consider the following definition.

Definition: The useful mean length $L_u$ with respect to the useful information measure (2.1) is defined as:

$$L_u = \frac{1}{1-\alpha}\left[\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} - 1\right], \qquad (2.7)$$

under the condition

$$\sum u_i D^{-n_i} \leq \sum u_i\, p_i^{\beta}. \qquad (2.8)$$

Clearly the inequality (2.8) is a generalization of Kraft's inequality (1.3). A code satisfying (2.8) will be termed a useful personal probability code. Here $D$ ($D > 1$) is the size of the code alphabet. When $u_i = 1$ for each $i$ and $\beta = 1$, (2.8) reduces to (1.3). (A numerical sketch of (2.7) and (2.8) follows the special cases below.)

(i) For $u_i = 1$ for each $i$, $\beta = 1$ and $\alpha \to 1$, $L_u$ becomes the optimal code length defined by Shannon [15].

(ii) For $u_i = 1$ for each $i$ and $\beta = 1$, (2.7) becomes a new mean codeword length corresponding to the Tsallis entropy, i.e.,

$$L(\alpha) = \frac{1}{1-\alpha}\left[\sum p_i\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)} - 1\right]. \qquad (2.9)$$

(iii) If $\beta = 1$, then (2.7) becomes a new mean codeword length corresponding to the entropy (2.2), i.e.,

$$L_u(\alpha) = \frac{1}{1-\alpha}\left[\frac{\sum u_i\, p_i\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i} - 1\right].$$

(iv) If $u_i = 1$ for each $i$, then (2.7) becomes a mean codeword length corresponding to the entropy (2.5), i.e.,

$$L(\alpha) = \frac{1}{1-\alpha}\left[\frac{\sum p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum p_i^{\beta}} - 1\right].$$
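A minimal sketch of the mean length (2.7) and the generalized Kraft condition (2.8), using the same illustrative values as above; the lengths below are one admissible choice, not an optimal code, and the helper names are ours.

```python
def useful_mean_length(u, p, n, alpha, beta, D=2):
    """Useful mean length (2.7):
    (1/(1-alpha)) * [ sum u_i p_i^beta D^(-n_i (alpha-1)/alpha) / sum u_i p_i^beta - 1 ]."""
    num = sum(ui * pi ** beta * D ** (-ni * (alpha - 1.0) / alpha)
              for ui, pi, ni in zip(u, p, n))
    den = sum(ui * pi ** beta for ui, pi in zip(u, p))
    return (num / den - 1.0) / (1.0 - alpha)

def generalized_kraft_holds(u, p, n, beta, D=2):
    """Condition (2.8): sum u_i D^(-n_i) <= sum u_i p_i^beta."""
    return (sum(ui * D ** -ni for ui, ni in zip(u, n))
            <= sum(ui * pi ** beta for ui, pi in zip(u, p)))

p = [0.4, 0.3, 0.2, 0.1]
u = [2.0, 1.0, 4.0, 0.5]
n = [2, 2, 2, 3]                                   # candidate codeword lengths

print(generalized_kraft_holds(u, p, n, beta=1.0))  # True for this choice
print(useful_mean_length(u, p, n, alpha=0.7, beta=1.0))
```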

We now establish a result that, in a sense, provides a characterization of $H_{\alpha}^{\beta}(U;P)$ under the condition of unique decipherability.

Theorem 2.1. Let $u_i$, $p_i$, $n_i$, $i = 1, 2, \ldots, N$, satisfy the inequality (2.8). Then

$$L_u \geq H_{\alpha}^{\beta}(U;P), \qquad \alpha > 0\ (\alpha \neq 1),\ \beta > 0. \qquad (2.10)$$

Proof: By Hölder's inequality, we have

$$\left(\sum_{i=1}^{N} x_i^{p}\right)^{1/p}\left(\sum_{i=1}^{N} y_i^{q}\right)^{1/q} \leq \sum_{i=1}^{N} x_i y_i, \qquad (2.11)$$

where $\frac{1}{p} + \frac{1}{q} = 1$, $p\ (\neq 0) < 1$, $q < 0$, or $q\ (\neq 0) < 1$, $p < 0$, and $x_i, y_i > 0$ for each $i$.

Set $p = \frac{\alpha-1}{\alpha}$, $q = 1-\alpha$, and

$$x_i = \left(\frac{u_i\, p_i^{\beta}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{\alpha}{\alpha-1}} D^{-n_i}, \qquad y_i = \left(\frac{u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{1}{1-\alpha}}. \qquad (2.12)$$

Putting these values in (2.11) and using the inequality (2.8), we get

$$\left(\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{\alpha}{\alpha-1}}\left(\frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{1}{1-\alpha}} \leq \frac{\sum u_i\, D^{-n_i}}{\sum u_i\, p_i^{\beta}} \leq 1. \qquad (2.13)$$

It implies

$$\left(\frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{1}{1-\alpha}} \leq \left(\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}}\right)^{\frac{\alpha}{1-\alpha}}. \qquad (2.14)$$

Now consider two cases:

Case 1: Let $0 < \alpha < 1$. Raising both sides of (2.14) to the power $(1-\alpha) > 0$, we get

$$\frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}} \leq \left(\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}}\right)^{\alpha}. \qquad (2.15)$$

Since $D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)} \geq 1$ for $0 < \alpha < 1$, the bracketed ratio in (2.15) is at least 1 and its $\alpha$-th power does not exceed the ratio itself. Subtracting 1 throughout and dividing by $1-\alpha > 0$, we get from (2.15) the inequality (2.10).

Case 2: Let $\alpha > 1$. The proof follows on the same lines.


It is clear that the equality in (2.10) holds if and only if

$$D^{-n_i} = p_i^{\alpha\beta},$$

which implies that

$$n_i = \log_D\left(\frac{1}{p_i^{\alpha\beta}}\right). \qquad (2.16)$$

Thus, it is always possible to have codeword lengths satisfying the requirement

$$\log_D\left(\frac{1}{p_i^{\alpha\beta}}\right) \leq n_i < \log_D\left(\frac{1}{p_i^{\alpha\beta}}\right) + 1,$$

which is equivalent to

$$\frac{1}{p_i^{\alpha\beta}} \leq D^{n_i} < \frac{D}{p_i^{\alpha\beta}}. \qquad (2.17)$$
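As a numerical sanity check of Theorem 2.1 (a sketch only, with the quantities written out inline following (2.1), (2.7) and (2.8) as stated above; the values of p, u, n, α, β are arbitrary illustrative choices with 0 < α < 1):

```python
# Check of Theorem 2.1 on one example: if (2.8) holds, then L_u >= H (2.10).
D, alpha, beta = 2, 0.7, 1.2
p = [0.4, 0.3, 0.2, 0.1]
u = [2.0, 1.0, 4.0, 0.5]
n = [2, 2, 3, 4]                                   # codeword lengths

den = sum(ui * pi ** beta for ui, pi in zip(u, p))

# Generalized Kraft condition (2.8)
assert sum(ui * D ** -ni for ui, ni in zip(u, n)) <= den

# Useful entropy (2.1) and useful mean length (2.7)
H = (sum(ui * pi ** (alpha * beta) for ui, pi in zip(u, p)) / den - 1) / (1 - alpha)
L = (sum(ui * pi ** beta * D ** (-ni * (alpha - 1) / alpha)
         for ui, pi, ni in zip(u, p, n)) / den - 1) / (1 - alpha)

assert L >= H                                      # the bound (2.10)
print(H, L)                                        # approx. 1.95 and 3.61
```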
In the following theorem, we give an upper bound for $L_u$ in terms of $H_{\alpha}^{\beta}(U;P)$.

Theorem 2.2. By properly choosing the lengths $n_1, n_2, \ldots, n_N$ in the code of Theorem 2.1, $L_u$ can be made to satisfy the following inequality:

$$L_u < D^{\left(\frac{1-\alpha}{\alpha}\right)} H_{\alpha}^{\beta}(U;P) + \frac{1}{1-\alpha}\left[D^{\left(\frac{1-\alpha}{\alpha}\right)} - 1\right]. \qquad (2.18)$$

Proof: From (2.17), it is clear that

$$D^{-n_i} > D^{-1}\, p_i^{\alpha\beta}. \qquad (2.19)$$

We have again the following two possibilities.

(i) Let $\alpha > 1$. Raising both sides of (2.19) to the power $\left(\frac{\alpha-1}{\alpha}\right) > 0$, we have

$$D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)} > D^{\left(\frac{1-\alpha}{\alpha}\right)}\, p_i^{\beta(\alpha-1)}.$$

Multiplying both sides by $u_i p_i^{\beta}$ and then summing over $i$, we get

$$\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)} > D^{\left(\frac{1-\alpha}{\alpha}\right)} \sum u_i\, p_i^{\alpha\beta}. \qquad (2.20)$$

Obviously (2.20) can be written as

$$\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} > D^{\left(\frac{1-\alpha}{\alpha}\right)}\, \frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}}. \qquad (2.21)$$

Since $1 - \alpha < 0$ for $\alpha > 1$, we get the inequality (2.18) from (2.21).

(ii) If $0 < \alpha < 1$, the proof follows similarly, but the inequality (2.21) is reversed.
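The choice of lengths behind Theorem 2.2 can also be checked numerically. The sketch below takes $n_i$ as the smallest integer satisfying (2.17), i.e. $n_i = \lceil \alpha\beta \log_D(1/p_i) \rceil$, and verifies the bound (2.18) for one illustrative example with $\alpha > 1$ (all values are arbitrary):

```python
import math

# Verify the upper bound (2.18) for lengths chosen according to (2.17).
D, alpha, beta = 2, 1.6, 1.2
p = [0.4, 0.3, 0.2, 0.1]
u = [2.0, 1.0, 4.0, 0.5]

# Shannon-type lengths for the power distribution: n_i = ceil(alpha*beta*log_D(1/p_i))
n = [math.ceil(alpha * beta * math.log(1.0 / pi, D)) for pi in p]   # [3, 4, 5, 7]

den = sum(ui * pi ** beta for ui, pi in zip(u, p))
H = (sum(ui * pi ** (alpha * beta) for ui, pi in zip(u, p)) / den - 1) / (1 - alpha)
L = (sum(ui * pi ** beta * D ** (-ni * (alpha - 1) / alpha)
         for ui, pi, ni in zip(u, p, n)) / den - 1) / (1 - alpha)

c = D ** ((1 - alpha) / alpha)                     # the factor D^((1-alpha)/alpha)
bound = c * H + (c - 1) / (1 - alpha)              # right-hand side of (2.18)

assert L < bound
print(L, bound)                                    # approx. 1.06 and 1.13
```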

Theorem 2.3. For arbitrary $N$, $\alpha > 0\ (\alpha \neq 1)$, $\beta > 0$, and for codeword lengths $n_i$, $i = 1, 2, \ldots, N$, as in Theorem 2.1, $L_u$ can be made to satisfy the following inequality:

$$H_{\alpha}^{\beta}(U;P) \leq L_u < D^{\left(\frac{1-\alpha}{\alpha}\right)} H_{\alpha}^{\beta}(U;P) + \frac{1}{1-\alpha}\left[D^{\left(\frac{1-\alpha}{\alpha}\right)} - 1\right]. \qquad (2.22)$$

Proof: Suppose

$$\bar{n}_i = \log_D\left(\frac{1}{p_i^{\alpha\beta}}\right). \qquad (2.23)$$

Clearly $\bar{n}_i$ and $\bar{n}_i + 1$ satisfy the equality in Hölder's inequality (2.11). Moreover, $\bar{n}_i$ satisfies (2.8). Suppose $n_i$ is the unique integer between $\bar{n}_i$ and $\bar{n}_i + 1$; then $n_i$ also satisfies (2.8).
Since $\alpha > 0\ (\alpha \neq 1)$, $\beta > 0$ and $\bar{n}_i \leq n_i < \bar{n}_i + 1$, we have for $\alpha > 1$

$$\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} > \frac{\sum u_i\, p_i^{\beta}\, D^{-(\bar{n}_i+1)\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} = D^{\left(\frac{1-\alpha}{\alpha}\right)}\, \frac{\sum u_i\, p_i^{\beta}\, D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}}, \qquad (2.24)$$

while for $0 < \alpha < 1$ the inequality in (2.24) is reversed. Since $D^{-\bar{n}_i} = p_i^{\alpha\beta}$,

$$\frac{\sum u_i\, p_i^{\beta}\, D^{-\bar{n}_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} = \frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}}.$$

Hence (2.24) becomes

$$\frac{\sum u_i\, p_i^{\beta}\, D^{-n_i\left(\frac{\alpha-1}{\alpha}\right)}}{\sum u_i\, p_i^{\beta}} > D^{\left(\frac{1-\alpha}{\alpha}\right)}\, \frac{\sum u_i\, p_i^{\alpha\beta}}{\sum u_i\, p_i^{\beta}} \qquad (\alpha > 1),$$

with the reversed inequality for $0 < \alpha < 1$. Subtracting 1 from both sides and dividing by $1 - \alpha$ gives the right-hand inequality of (2.22) in either case, exactly as in Theorem 2.2; the left-hand inequality of (2.22) follows from Theorem 2.1, since the lengths $n_i$ satisfy (2.8). Which gives (2.22).
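Finally, the equality case (2.16) used in the proof above can be seen numerically: with the ideal (generally non-integer) lengths $\bar{n}_i = \alpha\beta \log_D(1/p_i)$ of (2.23), the mean length (2.7) collapses exactly to the entropy (2.1), while rounding up to the integer lengths of (2.17) keeps $L_u$ within the bounds of (2.22) in this example. This is a sketch with arbitrary illustrative values, $\alpha > 1$, and helper names of our own choosing.

```python
import math

D, alpha, beta = 2, 1.6, 1.2
p = [0.4, 0.3, 0.2, 0.1]
u = [2.0, 1.0, 4.0, 0.5]

den = sum(ui * pi ** beta for ui, pi in zip(u, p))
H = (sum(ui * pi ** (alpha * beta) for ui, pi in zip(u, p)) / den - 1) / (1 - alpha)

def mean_length(lengths):
    """Useful mean length (2.7) for the given (possibly fractional) lengths."""
    num = sum(ui * pi ** beta * D ** (-li * (alpha - 1) / alpha)
              for ui, pi, li in zip(u, p, lengths))
    return (num / den - 1) / (1 - alpha)

# Ideal lengths (2.23): n_bar_i = alpha*beta*log_D(1/p_i), so that D^(-n_bar_i) = p_i^(alpha*beta)
n_bar = [alpha * beta * math.log(1.0 / pi, D) for pi in p]
assert math.isclose(mean_length(n_bar), H)         # equality case (2.16)

# Integer lengths as in (2.17): the mean length then sits between the bounds of (2.22)
n = [math.ceil(nb) for nb in n_bar]
c = D ** ((1 - alpha) / alpha)
assert H <= mean_length(n) < c * H + (c - 1) / (1 - alpha)
print(H, mean_length(n))
```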

3. References
1. Bhaker US, Hooda DS. Mean value characterization of useful information measures. Tamkang J. Math. 1993; 24:283-294.
2. Belis M, Guiasu S. A quantitative-qualitative measure of information in cybernetic systems. IEEE Trans. Information Theory. 1968; IT-14:593-594.
3. Feinstein A. Foundations of Information Theory. McGraw-Hill, New York, 1958.
4. Guiasu S, Picard CF. Borne inférieure de la longueur utile de certains codes. C. R. Acad. Sci. Paris. 1971; 273A:248-251.
5. Gurdial, Pessoa F. On useful information of order α. J. Comb. Information and Syst. Sci. 1977; 2:158-162.
6. Havrda J, Charvat F. Quantification method of classification processes: concept of structural α-entropy. Kybernetika. 1967; 3:30-35.
7. Kumar S. Some more results on R-norm information measure. Tamkang Journal of Mathematics. 2009; 40(1):41-58.
8. Kumar S. Some more results on a generalized useful R-norm information measure. Tamkang Journal of Mathematics. 2009; 40(2):211-216.
9. Kumar S, Choudhary A. Some more noiseless coding theorems on generalized R-norm entropy. Journal of Mathematics Research. 2011; 3(1):125-130.
10. Kumar S, Choudhary A. Coding theorem connected on R-norm entropy. International Journal of Contemporary Mathematical Sciences. 2011; 6(17):825-831.
11. Kumar S, Choudhary A. Some coding theorems based on three types of the exponential form of cost functions. Open Systems and Information Dynamics. 2012; 19(4):1-14.
12. Kumar S, Kumar R, Choudhary A. Some more results on a generalized parametric R-norm information measure of type alpha. Journal of Applied Science and Engineering. 2014; 17(4):447-453.
13. Kumar S, Choudhary A. Some coding theorems on generalized Havrda-Charvat and Tsallis entropy. Tamkang Journal of Mathematics. 2012; 43(3):437-444.
14. Longo G. A noiseless coding theorem for sources having utilities. SIAM J. Appl. Math. 1976; 30(4):732-738.
15. Shannon CE. A mathematical theory of communication. Bell System Tech. J. 1948; 27:379-423, 623-656.
16. Shisha O. Inequalities. Academic Press, New York, 1967.
17. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988; 52:479-487.
