Bhattacharyya Bounds

The document discusses the Bhattacharyya system of lower bounds, which generalizes the Cramér-Rao lower bound for unbiased estimators of real-valued functions of parameters. It outlines the necessary regularity conditions and presents a theorem establishing a lower bound for the variance of unbiased estimators. Additionally, it provides proofs and examples to illustrate the application of the Bhattacharyya lower bound in statistical estimation.


Subject: Statistics

Paper: Bhattacharyya lower bound


Shirsendu Mukherjee
Department of Statistics, Asutosh College, Kolkata, India.
shirsendu [email protected]

Bhattacharyya System of Lower Bounds


In this module we shall discuss a generalization of the Cramér-Rao lower bound known as the Bhattacharyya system of lower bounds. Let us consider a family of distributions $F_\theta = \{f_\theta(x);\ \theta \in \Theta\}$ which satisfies the following Bhattacharyya regularity conditions:

1. $\Theta$ is an open interval of the real line.

2. The support $\mathcal{X} = \{x : f_\theta(x) > 0\}$ does not depend on $\theta$.

3. For some integer $K$, $\frac{\partial^i}{\partial\theta^i} f_\theta(x)$ exists for all $x$ and for all $\theta$, for $i = 1, \dots, K$.

4. For any statistic $h(X)$ with $E_\theta(|h(X)|) < \infty$ for all $\theta$,
\[ \frac{\partial^i}{\partial\theta^i} E_\theta(h(X)) = \frac{\partial^i}{\partial\theta^i} \int h(x) f_\theta(x)\,dx = \int h(x) \frac{\partial^i}{\partial\theta^i} f_\theta(x)\,dx \]
for all $\theta$ and for $i = 1, \dots, K$.

5. The matrix $V_K = ((v_{ij}))$ exists for all $\theta$ and is positive definite, where
\[ v_{ij}(\theta) = E_\theta\!\left[ \left( \frac{\partial^i}{\partial\theta^i} \log f_\theta(x) \right) \left( \frac{\partial^j}{\partial\theta^j} \log f_\theta(x) \right) \right] \quad \text{for all } \theta \in \Theta. \]

Theorem. Let the family of distributions $P_\theta = \{f_\theta(x);\ \theta \in \Theta\}$ satisfy the Bhattacharyya regularity conditions, and let $g(\theta)$ be a real-valued function of $\theta$ that is $K$-times differentiable for some integer $K$. Let $T$ be an unbiased estimator of $g(\theta)$ such that $\mathrm{Var}_\theta(T)$ is finite for all $\theta$. Then
\[ \mathrm{Var}_\theta(T) \ \ge\ g' V_K^{-1} g \quad \text{for all } \theta, \]
where $g = (g^{(1)}(\theta), g^{(2)}(\theta), \dots, g^{(K)}(\theta))'$ and $g^{(i)}(\theta) = \frac{\partial^i}{\partial\theta^i} g(\theta)$.

Note: For $K = 1$, the Bhattacharyya lower bound and its regularity conditions reduce to the Cramér-Rao lower bound and the corresponding regularity conditions.
Proof: Define $S_i(\theta, x) = \frac{1}{f_\theta(x)} \frac{\partial^i}{\partial\theta^i} f_\theta(x)$. Hence,
\[ E(S_i) = \int \frac{1}{f_\theta(x)} \frac{\partial^i}{\partial\theta^i} f_\theta(x)\, f_\theta(x)\,dx = \int \frac{\partial^i}{\partial\theta^i} f_\theta(x)\,dx = \frac{\partial^i}{\partial\theta^i} \int f_\theta(x)\,dx = \frac{\partial^i}{\partial\theta^i}\, 1 = 0, \quad \forall\, i. \]
Therefore $V(S_i) = v_{ii}$, $\mathrm{Cov}(S_i, S_j) = v_{ij}$, and
\[ \mathrm{Cov}(S_i, T) = E(S_i T) = \int t(x) \frac{1}{f_\theta(x)} \frac{\partial^i}{\partial\theta^i} f_\theta(x)\, f_\theta(x)\,dx = \frac{\partial^i}{\partial\theta^i} \int t(x) f_\theta(x)\,dx = \frac{\partial^i}{\partial\theta^i} g(\theta) = g^{(i)}(\theta). \]
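The identities $E(S_i) = 0$ and $\mathrm{Cov}(S_i, T) = g^{(i)}(\theta)$ can be spot-checked by simulation. The sketch below uses an illustrative model not taken from the text: $X_1, \dots, X_n$ i.i.d. $N(\theta, 1)$, for which $S_1 = n(\bar{x} - \theta)$, with $T = \bar{X}$ unbiased for $g(\theta) = \theta$, so $g^{(1)}(\theta) = 1$. The constants are arbitrary choices.

```python
import numpy as np

# Spot-check E(S_1) = 0 and Cov(S_1, T) = g^{(1)}(theta) by Monte Carlo.
# Illustrative model (not from the text): X_1,...,X_n iid N(theta, 1), so
# S_1 = n(xbar - theta); T = xbar is unbiased for g(theta) = theta.
rng = np.random.default_rng(0)
theta, n, reps = 1.5, 10, 200_000

x = rng.normal(theta, 1.0, size=(reps, n))
xbar = x.mean(axis=1)
s1 = n * (xbar - theta)             # S_1 for each replication
t = xbar                            # unbiased estimator of theta

print(np.mean(s1))                  # ≈ 0
print(np.mean(s1 * (t - theta)))    # ≈ Cov(S_1, T) = g^{(1)}(theta) = 1
```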
Define
\[ \Sigma_{K+1} = \text{dispersion matrix of } (T, S_1, S_2, \dots, S_K) = \begin{pmatrix} \mathrm{Var}_\theta(T) & g' \\ g & V_K \end{pmatrix}. \]
Being a dispersion matrix, $\Sigma$ is nonnegative definite, so $\det(\Sigma) = |V_K|\left( \mathrm{Var}_\theta(T) - g' V_K^{-1} g \right) \ge 0$, and $|V_K| > 0$ since $V_K$ is positive definite. Hence $\mathrm{Var}_\theta(T) \ge g' V_K^{-1} g$.

Case of equality. Equality holds iff $|\Sigma| = 0$, i.e. $\mathrm{Rank}(\Sigma) < K + 1$, and hence $\mathrm{Rank}(\Sigma) = K$, since $\Sigma$ contains $V_K$, which is non-singular. Hence there exists a non-zero vector $l$ such that
\[ T - g(\theta) = l'S \quad \text{with probability one, where } S = (S_1, S_2, \dots, S_K)', \]
and in fact
\[ T - g(\theta) = g' V_K^{-1} S \quad \text{with probability one.} \]
Proof: Using $\mathrm{Disp}(S) = V_K$, $\mathrm{Cov}(T, S) = g$, and, in the equality case, $\mathrm{Var}_\theta(T) = g' V_K^{-1} g$, we have
\[ \mathrm{Var}_\theta(l'S - g' V_K^{-1} S) = \mathrm{Var}_\theta(T - g(\theta) - g' V_K^{-1} S) = \mathrm{Var}_\theta(T) + g' V_K^{-1}\, \mathrm{Disp}(S)\, V_K^{-1} g - 2\,\mathrm{Cov}(T, g' V_K^{-1} S) = g' V_K^{-1} g + g' V_K^{-1} g - 2\, g' V_K^{-1} g = 0. \]
Hence $l'S - g' V_K^{-1} S = E(l'S - g' V_K^{-1} S) = 0$ with probability one. Thus we get
\[ T - g(\theta) = l'S = g' V_K^{-1} S \quad \text{with probability one.} \]
Now we consider a very important result relating to the Bhattacharyya system of lower bounds.

Result: Let the $n$th lower bound be denoted by $\Delta_n = g_n' V_n^{-1} g_n$, where $g_n = (g^{(1)}(\theta), g^{(2)}(\theta), \dots, g^{(n)}(\theta))'$. The sequence of lower bounds $\{\Delta_n\}$ is non-decreasing, i.e., $\Delta_{n+1} \ge \Delta_n$. Observe that $\Delta_1$ is the Cramér-Rao lower bound.
Proof: Note that $\Delta_{n+1} = g_{n+1}' V_{n+1}^{-1} g_{n+1}$, where the vector $g_{n+1}$ and the matrix $V_{n+1}$ can be partitioned as
\[ g_{n+1}' = (g_n' \,\vdots\, g^{(n+1)}(\theta)), \qquad V_{n+1} = \begin{pmatrix} V_n & u_{n+1} \\ u_{n+1}' & v_{n+1,n+1} \end{pmatrix}. \]
For any non-singular matrix $C$ we can write
\[ \Delta_{n+1} = g_{n+1}' C' (C')^{-1} V_{n+1}^{-1} C^{-1} C g_{n+1} = (C g_{n+1})' \left( C V_{n+1} C' \right)^{-1} (C g_{n+1}). \]
If we choose
\[ C = \begin{pmatrix} I_n & 0 \\ -u_{n+1}' V_n^{-1} & 1 \end{pmatrix}, \]
then $C g_{n+1} = (g_n',\ g^{(n+1)} - u_{n+1}' V_n^{-1} g_n)'$, and
\[ C V_{n+1} C' = \begin{pmatrix} I_n & 0 \\ -u_{n+1}' V_n^{-1} & 1 \end{pmatrix} \begin{pmatrix} V_n & u_{n+1} \\ u_{n+1}' & v_{n+1,n+1} \end{pmatrix} \begin{pmatrix} I_n & -V_n^{-1} u_{n+1} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} V_n & 0 \\ 0 & E_{n+1,n+1} \end{pmatrix}, \]
where $E_{n+1,n+1} = v_{n+1,n+1} - u_{n+1}' V_n^{-1} u_{n+1}$. Since $V_{n+1}$ is a positive definite matrix, $V_n$ is positive definite and $E_{n+1,n+1} > 0$. Therefore
\[ (C V_{n+1} C')^{-1} = \begin{pmatrix} V_n^{-1} & 0 \\ 0 & \frac{1}{E_{n+1,n+1}} \end{pmatrix}. \]
So finally,
\[ \Delta_{n+1} = (C g_{n+1})' \left( C V_{n+1} C' \right)^{-1} (C g_{n+1}) = g_n' V_n^{-1} g_n + \frac{\left( g^{(n+1)} - u_{n+1}' V_n^{-1} g_n \right)^2}{E_{n+1,n+1}} \ \ge\ \Delta_n. \]
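The block-diagonalization step can be verified numerically: for a randomly generated positive definite matrix standing in for $V_{n+1}$ and an arbitrary vector standing in for $g_{n+1}$, the direct value of $\Delta_{n+1}$ agrees with $\Delta_n + (g^{(n+1)} - u_{n+1}' V_n^{-1} g_n)^2 / E_{n+1,n+1}$. This is a sketch with made-up test data, not the statistical example.

```python
import numpy as np

# Numeric check of the identity in the proof: with the Schur complement
# E = v_{n+1,n+1} - u' V_n^{-1} u, we have
# Delta_{n+1} = Delta_n + (g^{(n+1)} - u' V_n^{-1} g_n)^2 / E >= Delta_n.
# V and g below are arbitrary test data.
rng = np.random.default_rng(1)
n = 4
a = rng.normal(size=(n + 1, n + 1))
V = a @ a.T + (n + 1) * np.eye(n + 1)   # random positive definite V_{n+1}
g = rng.normal(size=n + 1)              # stands in for g_{n+1}

Vn, u, v = V[:n, :n], V[:n, n], V[n, n]
gn, g_next = g[:n], g[n]

delta_n = gn @ np.linalg.solve(Vn, gn)
delta_next_direct = g @ np.linalg.solve(V, g)

E = v - u @ np.linalg.solve(Vn, u)      # Schur complement, > 0 here
delta_next_schur = delta_n + (g_next - u @ np.linalg.solve(Vn, gn)) ** 2 / E

print(bool(np.isclose(delta_next_direct, delta_next_schur)))  # True
print(bool(delta_next_direct >= delta_n))                     # True
```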

Note: The implication of the result is that if there is no unbiased estimator (UE) of $g(\theta)$ which attains the $n$th lower bound $\Delta_n$, then a sharper lower bound $\Delta_{n+1}$ can be obtained and examined. If a UE attains the $n$th lower bound $\Delta_n$, then no further improvement can be made, and hence $\Delta_n = \Delta_{n+1}$. However, $\Delta_n = \Delta_{n+1}$ does not imply that there exists a UE of $g(\theta)$ which attains the $n$th lower bound. Consider the following example.

Example: Let $X_1, X_2, \dots, X_n$ be a random sample from $N(\theta, 1)$. In the last module we saw that there does not exist a UE of $\theta^2$ which attains $\Delta_1$, i.e. the Cramér-Rao lower bound. We want to find a UE of $\theta^2$ which attains the Bhattacharyya lower bound $\Delta_2$.
The joint p.d.f. of $X$ is given by
\[ f_\theta(x) = \frac{1}{(\sqrt{2\pi})^{n}} \exp\left\{ -\frac{1}{2} \sum_{i=1}^{n} (x_i - \theta)^2 \right\}. \]

Hence,
\[ S_1 = \frac{1}{f_\theta(x)} \frac{\partial}{\partial\theta} f_\theta(x) = \frac{1}{f_\theta(x)}\, f_\theta(x) \sum_{i=1}^{n} (x_i - \theta) = \sum_{i=1}^{n} (x_i - \theta) = n(\bar{x} - \theta) \]
and
\[ S_2 = \frac{1}{f_\theta(x)} \frac{\partial^2}{\partial\theta^2} f_\theta(x) = \frac{1}{f_\theta(x)} \frac{\partial}{\partial\theta}\!\left( f_\theta(x) \sum_{i=1}^{n} (x_i - \theta) \right) = \frac{1}{f_\theta(x)} \left[ f_\theta(x) \left( \sum_{i=1}^{n} (x_i - \theta) \right)^{2} + f_\theta(x)(-n) \right] = \left( \sum_{i=1}^{n} (x_i - \theta) \right)^{2} - n = n^2 (\bar{x} - \theta)^2 - n. \]

By definition we know $E(S_1) = E(S_2) = 0$. Therefore, using $\bar{X} \sim N(\theta, 1/n)$,
\[ E(S_1^2) = E\left[ \{ n(\bar{X} - \theta) \}^{2} \right] = n^2 \cdot \frac{1}{n} = n, \]
\[ E(S_2^2) = E\left[ n^4 (\bar{X} - \theta)^{4} + n^2 - 2 n^3 (\bar{X} - \theta)^{2} \right] = n^4 \cdot \frac{3}{n^2} + n^2 - 2 n^3 \cdot \frac{1}{n} = 2 n^2, \]
and $E(S_1 S_2) = n^3 E[(\bar{X} - \theta)^{3}] - n^2 E(\bar{X} - \theta) = 0$. Therefore,
\[ V_2 = \begin{pmatrix} n & 0 \\ 0 & 2n^2 \end{pmatrix}. \]
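A quick Monte Carlo check of $V_2$: sampling $\bar{X} \sim N(\theta, 1/n)$ directly, the empirical second moments of $S_1 = n(\bar{X} - \theta)$ and $S_2 = n^2(\bar{X} - \theta)^2 - n$ should come out near $n$, $2n^2$, and $0$. The values of $\theta$ and $n$ below are arbitrary.

```python
import numpy as np

# Monte Carlo check of V_2 = diag(n, 2n^2) for the N(theta, 1) example.
# We sample Xbar ~ N(theta, 1/n) directly; theta and n are arbitrary.
rng = np.random.default_rng(2)
theta, n, reps = 0.7, 5, 500_000

xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)
s1 = n * (xbar - theta)                 # S_1
s2 = n**2 * (xbar - theta) ** 2 - n     # S_2

print(np.mean(s1**2))                   # ≈ v_11 = n = 5
print(np.mean(s2**2))                   # ≈ v_22 = 2n^2 = 50
print(np.mean(s1 * s2))                 # ≈ v_12 = 0
```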

Since the parameter of interest under consideration is $g(\theta) = \theta^2$, we have $g = (2\theta, 2)'$. Therefore the Bhattacharyya lower bound of order 2 for $\theta^2$ is given by
\[ \Delta_2 = g' V_2^{-1} g = (2\theta,\ 2) \begin{pmatrix} \frac{1}{n} & 0 \\ 0 & \frac{1}{2n^2} \end{pmatrix} \begin{pmatrix} 2\theta \\ 2 \end{pmatrix} = \frac{4}{n}\left( \theta^2 + \frac{1}{2n} \right) = \frac{4\theta^2}{n} + \frac{2}{n^2}. \]
By the case of equality, the UE of $\theta^2$ which attains the Bhattacharyya lower bound of order 2 must satisfy $T - g(\theta) = g' V_2^{-1} S$. Now,
\[ g' V_2^{-1} S = (2\theta,\ 2) \begin{pmatrix} \frac{1}{n} & 0 \\ 0 & \frac{1}{2n^2} \end{pmatrix} \begin{pmatrix} n(\bar{x} - \theta) \\ n^2 (\bar{x} - \theta)^2 - n \end{pmatrix} = 2\theta(\bar{x} - \theta) + (\bar{x} - \theta)^2 - \frac{1}{n} = \left( \bar{x}^2 - \frac{1}{n} \right) - \theta^2. \]
Hence $T = \bar{X}^2 - \frac{1}{n}$ is the unbiased estimator of $\theta^2$ which attains the Bhattacharyya lower bound of order 2.
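As a sanity check, a short simulation (with arbitrary $\theta$ and $n$) confirms that $T = \bar{X}^2 - 1/n$ is unbiased for $\theta^2$ and that its variance matches $\Delta_2 = 4\theta^2/n + 2/n^2$, i.e. the bound is attained.

```python
import numpy as np

# Simulation check that T = Xbar^2 - 1/n is unbiased for theta^2 and that
# Var(T) equals Delta_2 = 4*theta^2/n + 2/n^2; theta and n are arbitrary.
rng = np.random.default_rng(3)
theta, n, reps = 1.2, 8, 400_000

xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)  # Xbar ~ N(theta, 1/n)
t = xbar**2 - 1.0 / n

delta2 = 4 * theta**2 / n + 2 / n**2
print(np.mean(t))    # ≈ theta^2 = 1.44
print(np.var(t))     # ≈ delta2 = 0.75125
```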
