
Applied Mathematical Modelling 33 (2009) 1915–1922



journal homepage: www.elsevier.com/locate/apm

E-Bayesian estimation and hierarchical Bayesian estimation of failure rate


Ming Han *
Department of Mathematics and Physics, Fujian University of Technology, Fuzhou, Fujian 350108, China

Article history: Received 14 January 2007; Received in revised form 22 March 2008; Accepted 31 March 2008; Available online 10 April 2008.
© 2008 Elsevier Inc. All rights reserved.

Keywords: Failure rate; Exponential distribution; E-Bayesian estimation; Hierarchical Bayesian estimation; Reliability

Abstract: This paper introduces a new method, the E-Bayesian estimation method, to estimate the failure rate. The method is suitable for censored or truncated data with small sample sizes and high reliability. The definition and properties of E-Bayesian estimation are given. A real data set is discussed, which shows that the method is both efficient and easy to operate.

1. Introduction

In reliability studies of industrial products, life testing of products with high reliability often yields truncated data sets, which are sometimes censored or of small sample size. These features make it difficult to estimate the parameters related to product lifetime using classical statistical techniques.
In the literature, Lindley and Smith [1] first introduced the idea of a hierarchical prior distribution, and Han [2] developed methods to construct hierarchical prior distributions. Recently, further results have been obtained on hierarchical Bayesian methods for lifetime data, but all of these methods involve integration. Although computational methods for integration, such as MCMC (Markov chain Monte Carlo), are available [3], the required integration is still hard to carry out in practice.
Consider type I censored life testing. Let the triplet (n_i, r_i, t_i) be the test data at the ith test time (i = 1, 2, ..., m), where n_i, r_i and t_i are, respectively, the number of products on test, the number of failures, and the censoring time at the ith test.
This paper introduces a new method, named E-Bayesian estimation, to estimate the failure rate. The definition of the E-Bayesian estimate of the failure rate is given in Section 2. In Sections 3 and 4, the formulas of the E-Bayesian estimate and the hierarchical Bayesian estimate of the failure rate are derived, respectively. In Section 5, the properties of E-Bayesian estimation are discussed. Section 6 gives a real example, and Section 7 concludes.

This work was supported partly by the Fujian Province Natural Science Foundation of China, and partly by the Fujian University of Technology.
* Tel.: +86 0591 87723376.
E-mail address: [email protected]
doi:10.1016/j.apm.2008.03.019

2. Definition of E-Bayesian estimation

Suppose the lifetime of a product has an exponential distribution with probability density function
$$ f(t) = \lambda \exp\{-\lambda t\}, \quad t > 0, \qquad (1) $$
where λ > 0 is the failure rate of the exponential distribution.
If we take the conjugate prior of λ, namely the Gamma distribution Γ(a, b), then the prior density function is
$$ \pi(\lambda \mid a, b) = \frac{b^{a}\lambda^{a-1}\exp(-b\lambda)}{\Gamma(a)}, $$
where Γ(a) = ∫_0^∞ x^{a−1} e^{−x} dx is the Gamma function, and a > 0 and b > 0 are hyperparameters.
According to Han [2], a and b should be selected so that π(λ | a, b) is a decreasing function of λ. The derivative of π(λ | a, b) with respect to λ is
$$ \frac{d\,\pi(\lambda \mid a, b)}{d\lambda} = \frac{b^{a}\lambda^{a-2}\exp(-b\lambda)}{\Gamma(a)}\,\bigl[(a-1) - b\lambda\bigr]. $$
Since a > 0, b > 0 and λ > 0, it follows that dπ(λ | a, b)/dλ < 0 when 0 < a ≤ 1, and therefore π(λ | a, b) is then a decreasing function of λ. Given 0 < a ≤ 1, the larger b is, the thinner the tail of the Gamma density function will be. Considering the robustness of Bayesian estimates [4], a thin-tailed prior distribution often reduces the robustness of the Bayesian estimate, so b should not be larger than a given upper bound c, where c > 0 is a constant to be determined. Thereby, the range of the hyperparameter b may be taken as 0 < b < c; how to determine the constant c is described later in the example of Section 6. Taking a = 1, the prior density function of λ becomes
$$ \pi(\lambda \mid b) = b\exp(-b\lambda), \quad \lambda > 0. \qquad (2) $$

Definition 1. With λ̂(b) being continuous,
$$ \hat{\lambda}_{EB} = \int_{D}\hat{\lambda}(b)\,\pi(b)\,db $$
is called the expected Bayesian estimate of λ (briefly, the E-Bayesian estimate), which is assumed to be finite, where D is the domain of b, λ̂(b) is the Bayesian estimate of λ with hyperparameter b, and π(b) is the density function of b over D.
Definition 1 indicates that the E-Bayesian estimate of λ is the expectation of the Bayesian estimate of λ with respect to the hyperparameter b.

3. E-Bayesian estimation

E-Bayesian estimation based on three different prior distributions of the hyperparameter b is used in this section to investigate the influence of these prior distributions on the E-Bayesian estimate of λ.

Theorem 1. For the type I censored test data set {(n_i, r_i, t_i), i = 1, ..., m}, where r_i = 0, 1, 2, ..., n_i, let
$$ M = \sum_{i=1}^{m}(n_i - r_i)t_i, \qquad r = \sum_{i=1}^{m} r_i. $$
If the prior density function π(λ | b) of λ is given by (2), then we have the following two conclusions:

(i) Under the quadratic loss function, the Bayesian estimate of λ is
$$ \hat{\lambda}(b) = \frac{r+1}{M+b}. $$

(ii) For the following priors of b,
$$ \pi_1(b) = \frac{2(c-b)}{c^2}, \quad 0 < b < c, \qquad (3) $$
$$ \pi_2(b) = \frac{1}{c}, \quad 0 < b < c, \qquad (4) $$
$$ \pi_3(b) = \frac{2b}{c^2}, \quad 0 < b < c, \qquad (5) $$
the corresponding E-Bayesian estimates of λ are, respectively,
$$ \hat{\lambda}_{EB1} = \frac{2(r+1)}{c^2}\left[(M+c)\ln\left(\frac{M+c}{M}\right) - c\right], $$
$$ \hat{\lambda}_{EB2} = \frac{r+1}{c}\ln\left(\frac{M+c}{M}\right), $$
$$ \hat{\lambda}_{EB3} = \frac{2(r+1)}{c^2}\left[c - M\ln\left(\frac{M+c}{M}\right)\right]. $$

Proof.

(i) For the test data set {(n_i, r_i, t_i), i = 1, ..., m}, let X_i denote the number of failures observed in the ith test. According to Lawless [5], X_i follows a Poisson distribution with parameter (n_i − r_i)t_i λ, so the likelihood function of λ is
$$ L(r \mid \lambda) = \prod_{i=1}^{m} P\{X_i = r_i\} = \left\{\prod_{i=1}^{m}\frac{[(n_i - r_i)t_i]^{r_i}}{r_i!}\right\}\lambda^{r}\exp\{-M\lambda\}, $$
where M = Σ_{i=1}^{m}(n_i − r_i)t_i and r = Σ_{i=1}^{m} r_i.
If the prior density function π(λ | b) of λ is given by (2), then Bayes' theorem leads to the posterior density function of λ,
$$ h(\lambda \mid r) = \frac{\pi(\lambda \mid b)\,L(r \mid \lambda)}{\int_0^{\infty}\pi(\lambda \mid b)\,L(r \mid \lambda)\,d\lambda} = \frac{\lambda^{r}\exp\{-(M+b)\lambda\}}{\int_0^{\infty}\lambda^{r}\exp\{-(M+b)\lambda\}\,d\lambda} = \frac{(M+b)^{r+1}}{\Gamma(r+1)}\,\lambda^{r}\exp\{-(M+b)\lambda\}, \quad \lambda > 0. $$
Under the quadratic loss function, the Bayesian estimate of λ is the posterior mean,
$$ \hat{\lambda}(b) = \int_0^{\infty}\lambda\,h(\lambda \mid r)\,d\lambda = \frac{(M+b)^{r+1}}{\Gamma(r+1)}\int_0^{\infty}\lambda^{(r+2)-1}\exp\{-(M+b)\lambda\}\,d\lambda = \frac{\Gamma(r+2)\,(M+b)^{r+1}}{\Gamma(r+1)\,(M+b)^{r+2}} = \frac{r+1}{M+b}. $$

(ii) If the prior density function π_1(b) of b is given by (3), then, by Definition 1, the E-Bayesian estimate of λ is
$$ \hat{\lambda}_{EB1} = \int_{D}\hat{\lambda}(b)\,\pi_1(b)\,db = \frac{2(r+1)}{c^2}\int_0^{c}\frac{c-b}{M+b}\,db = \frac{2(r+1)}{c^2}\left[(M+c)\ln\left(\frac{M+c}{M}\right) - c\right]. $$
Similarly, if the prior density function π_2(b) of b is given by (4), the E-Bayesian estimate of λ is
$$ \hat{\lambda}_{EB2} = \int_{D}\hat{\lambda}(b)\,\pi_2(b)\,db = \frac{r+1}{c}\ln\left(\frac{M+c}{M}\right), $$
and if the prior density function π_3(b) of b is given by (5), the E-Bayesian estimate of λ is
$$ \hat{\lambda}_{EB3} = \int_{D}\hat{\lambda}(b)\,\pi_3(b)\,db = \frac{2(r+1)}{c^2}\left[c - M\ln\left(\frac{M+c}{M}\right)\right]. $$
That concludes the proof of Theorem 1. □
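
Since the three E-Bayesian estimates in Theorem 1 are closed-form expressions, they are straightforward to evaluate. The following is a minimal Python sketch; the function name and the sample values of r, M and c are illustrative, not taken from the paper.

```python
import math

def e_bayesian_estimates(r: int, M: float, c: float):
    """E-Bayesian estimates of the failure rate under the
    hyperparameter priors (3)-(5) of Theorem 1 (quadratic loss)."""
    log_term = math.log((M + c) / M)
    eb1 = 2 * (r + 1) / c**2 * ((M + c) * log_term - c)  # prior pi_1(b) = 2(c - b)/c^2
    eb2 = (r + 1) / c * log_term                          # prior pi_2(b) = 1/c (uniform)
    eb3 = 2 * (r + 1) / c**2 * (c - M * log_term)         # prior pi_3(b) = 2b/c^2
    return eb1, eb2, eb3

# Illustrative values only (note 0 < c < M)
print(e_bayesian_estimates(r=4, M=50000.0, c=2000.0))
```

For 0 < c < M the three values differ only slightly, which anticipates the ordering and asymptotic results of Theorem 3 below.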

4. Hierarchical Bayesian estimation

If the prior density function π(λ | b) of λ is given by (2), how should the value of the hyperparameter b be determined? Lindley and Smith [1] proposed the idea of a hierarchical prior distribution: when a prior distribution contains hyperparameters, a further prior distribution may be adopted for those hyperparameters.
If the prior density function π(λ | b) of λ is given by (2), and the prior density functions of the hyperparameter b are given by (3)–(5), then the corresponding hierarchical prior density functions of λ are, respectively,
$$ \pi_4(\lambda) = \int_0^{c}\pi(\lambda \mid b)\,\pi_1(b)\,db = \frac{2}{c^2}\int_0^{c} b(c-b)\exp(-b\lambda)\,db, \quad \lambda > 0, \qquad (6) $$
$$ \pi_5(\lambda) = \int_0^{c}\pi(\lambda \mid b)\,\pi_2(b)\,db = \frac{1}{c}\int_0^{c} b\exp(-b\lambda)\,db, \quad \lambda > 0, \qquad (7) $$
$$ \pi_6(\lambda) = \int_0^{c}\pi(\lambda \mid b)\,\pi_3(b)\,db = \frac{2}{c^2}\int_0^{c} b^{2}\exp(-b\lambda)\,db, \quad \lambda > 0. \qquad (8) $$

Theorem 2. For the type I censored test data set {(n_i, r_i, t_i), i = 1, ..., m}, where r_i = 0, 1, 2, ..., n_i, let M = Σ_{i=1}^{m}(n_i − r_i)t_i and r = Σ_{i=1}^{m} r_i. If the hierarchical prior density functions of λ are given by (6)–(8), then, under the quadratic loss function, the corresponding hierarchical Bayesian estimates of λ are, respectively,
$$ \hat{\lambda}_{HB1} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db}, \qquad \hat{\lambda}_{HB2} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b}{(M+b)^{r+1}}\,db}, \qquad \hat{\lambda}_{HB3} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b^{2}}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b^{2}}{(M+b)^{r+1}}\,db}. $$

Proof. As in the proof of Theorem 1, the likelihood function of λ is
$$ L(r \mid \lambda) = \prod_{i=1}^{m} P\{X_i = r_i\} = \left\{\prod_{i=1}^{m}\frac{[(n_i - r_i)t_i]^{r_i}}{r_i!}\right\}\lambda^{r}\exp\{-M\lambda\}, $$
where M = Σ_{i=1}^{m}(n_i − r_i)t_i and r = Σ_{i=1}^{m} r_i.
If the hierarchical prior density function of λ is given by (6), then, by Bayes' theorem, the hierarchical posterior density function of λ is
$$ h_1(\lambda \mid r) = \frac{\pi_4(\lambda)\,L(r \mid \lambda)}{\int_0^{\infty}\pi_4(\lambda)\,L(r \mid \lambda)\,d\lambda} = \frac{\int_0^{c} b(c-b)\,\lambda^{r}\exp[-(M+b)\lambda]\,db}{\displaystyle\int_0^{c}\frac{b(c-b)\,\Gamma(r+1)}{(M+b)^{r+1}}\,db}, \quad \lambda > 0. $$
Under the quadratic loss function, the corresponding hierarchical Bayesian estimate of λ is
$$ \hat{\lambda}_{HB1} = \int_0^{\infty}\lambda\,h_1(\lambda \mid r)\,d\lambda = \frac{\int_0^{c} b(c-b)\left\{\int_0^{\infty}\lambda^{(r+2)-1}\exp[-(M+b)\lambda]\,d\lambda\right\}db}{\displaystyle\int_0^{c}\frac{b(c-b)\,\Gamma(r+1)}{(M+b)^{r+1}}\,db} = \frac{\displaystyle\int_0^{c}\frac{b(c-b)\,\Gamma(r+2)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)\,\Gamma(r+1)}{(M+b)^{r+1}}\,db} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db}. $$
Similarly, if the hierarchical prior density functions of λ are given by (7) and (8), the corresponding hierarchical Bayesian estimates of λ are, respectively,
$$ \hat{\lambda}_{HB2} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b}{(M+b)^{r+1}}\,db}, \qquad \hat{\lambda}_{HB3} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b^{2}}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b^{2}}{(M+b)^{r+1}}\,db}. \quad \square $$
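
Unlike the closed forms of Theorem 1, the hierarchical Bayesian estimates of Theorem 2 are ratios of integrals and generally require numerical integration. A minimal sketch, assuming SciPy is available; the function name and sample values are illustrative, not from the paper.

```python
from scipy.integrate import quad

def hierarchical_bayes_estimate(r: int, M: float, c: float, weight) -> float:
    """Hierarchical Bayesian estimate of lambda (Theorem 2), computed by
    quadrature; weight(b) is b*(c - b), b, or b**2 for priors (3)-(5)."""
    num, _ = quad(lambda b: weight(b) / (M + b) ** (r + 2), 0.0, c)
    den, _ = quad(lambda b: weight(b) / (M + b) ** (r + 1), 0.0, c)
    return (r + 1) * num / den

r, M, c = 4, 50000.0, 2000.0  # illustrative values only
hb1 = hierarchical_bayes_estimate(r, M, c, lambda b: b * (c - b))
hb2 = hierarchical_bayes_estimate(r, M, c, lambda b: b)
hb3 = hierarchical_bayes_estimate(r, M, c, lambda b: b ** 2)
print(hb1, hb2, hb3)
```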

5. Properties of E-Bayesian estimation

We now discuss the relations among λ̂_EB1, λ̂_EB2 and λ̂_EB3 of Theorem 1, and the relations between λ̂_EBi (i = 1, 2, 3) of Theorem 1 and λ̂_HBi (i = 1, 2, 3) of Theorem 2.

5.1. Relations among λ̂_EB1, λ̂_EB2 and λ̂_EB3

Theorem 3. In Theorem 1, when 0 < c < M, we have:

(i) λ̂_EB3 < λ̂_EB2 < λ̂_EB1;
(ii) lim_{M→∞} λ̂_EB1 = lim_{M→∞} λ̂_EB2 = lim_{M→∞} λ̂_EB3.

Proof.

(i) Based on Theorem 1, we have
$$ \hat{\lambda}_{EB1} - \hat{\lambda}_{EB2} = \hat{\lambda}_{EB2} - \hat{\lambda}_{EB3} = \frac{r+1}{c^2}\left[(2M+c)\ln\left(\frac{M+c}{M}\right) - 2c\right]. \qquad (9) $$
For −1 < x < 1, we have
$$ \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots = \sum_{i=1}^{\infty}(-1)^{i-1}\frac{x^{i}}{i}. $$
Let x = c/M. When 0 < c < M, we have 0 < c/M < 1, and
$$ (2M+c)\ln\left(\frac{M+c}{M}\right) - 2c = (2M+c)\left[\frac{c}{M} - \frac{1}{2}\left(\frac{c}{M}\right)^{2} + \frac{1}{3}\left(\frac{c}{M}\right)^{3} - \frac{1}{4}\left(\frac{c}{M}\right)^{4} + \frac{1}{5}\left(\frac{c}{M}\right)^{5} - \cdots\right] - 2c = \frac{c^{3}}{6M^{2}}\left(1 - \frac{c}{M}\right) + \frac{c^{5}}{60M^{4}}\left(9 - 8\,\frac{c}{M}\right) + \cdots > 0. \qquad (10) $$
According to (9) and (10), we have
$$ \hat{\lambda}_{EB1} - \hat{\lambda}_{EB2} = \hat{\lambda}_{EB2} - \hat{\lambda}_{EB3} > 0, $$
that is, λ̂_EB3 < λ̂_EB2 < λ̂_EB1.

(ii) From (9) and (10), we get
$$ \lim_{M\to\infty}\left(\hat{\lambda}_{EB1} - \hat{\lambda}_{EB2}\right) = \lim_{M\to\infty}\left(\hat{\lambda}_{EB2} - \hat{\lambda}_{EB3}\right) = \frac{r+1}{c^{2}}\lim_{M\to\infty}\left[\frac{c^{3}}{6M^{2}}\left(1 - \frac{c}{M}\right) + \frac{c^{5}}{60M^{4}}\left(9 - 8\,\frac{c}{M}\right) + \cdots\right] = 0, $$
that is, lim_{M→∞} λ̂_EB1 = lim_{M→∞} λ̂_EB2 = lim_{M→∞} λ̂_EB3.
Thus, the proof is completed. □

Part (i) of Theorem 3 shows that, under the different priors (3)–(5) of the hyperparameter b, the corresponding E-Bayesian estimates λ̂_EBi (i = 1, 2, 3) are also different. Part (ii) of Theorem 3 shows that the λ̂_EBi (i = 1, 2, 3) are asymptotically equivalent to each other as M tends to infinity; that is, the λ̂_EBi (i = 1, 2, 3) are all close to each other when M is sufficiently large.
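
The key identity (9), namely that the two gaps λ̂_EB1 − λ̂_EB2 and λ̂_EB2 − λ̂_EB3 coincide, can also be checked symbolically. A small sketch, assuming SymPy is available:

```python
import sympy as sp

r, M, c = sp.symbols('r M c', positive=True)
L = sp.log((M + c) / M)
eb1 = 2 * (r + 1) / c**2 * ((M + c) * L - c)   # E-Bayesian estimate under pi_1
eb2 = (r + 1) / c * L                          # E-Bayesian estimate under pi_2
eb3 = 2 * (r + 1) / c**2 * (c - M * L)         # E-Bayesian estimate under pi_3

# Both differences reduce to the same expression, as claimed in (9).
print(sp.simplify((eb1 - eb2) - (eb2 - eb3)))  # -> 0
```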
5.2. Relations between λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3)

Theorem 4. In Theorems 1 and 2, when 0 < c < M, the estimates λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3) satisfy
$$ \lim_{M\to\infty}\hat{\lambda}_{EBi} = \lim_{M\to\infty}\hat{\lambda}_{HBi}, \quad i = 1, 2, 3. $$

Proof. Based on Theorem 2,
$$ \hat{\lambda}_{HB1} = (r+1)\,\frac{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db}. $$
Since 1/(M+b)^{r+2} is continuous on [0, c] and b(c − b) > 0 for 0 < b < c, by the generalized mean value theorem for integrals there is at least one number b₁ ∈ [0, c] such that
$$ \int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db = \frac{1}{(M+b_1)^{r+2}}\int_0^{c} b(c-b)\,db = \frac{c^{3}}{6}\cdot\frac{1}{(M+b_1)^{r+2}}. \qquad (11) $$
Similarly, there is at least one number b₂ ∈ [0, c] such that
$$ \int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db = \frac{1}{(M+b_2)^{r+1}}\int_0^{c} b(c-b)\,db = \frac{c^{3}}{6}\cdot\frac{1}{(M+b_2)^{r+1}}. \qquad (12) $$
According to (11) and (12), we have
$$ \frac{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db} = \frac{\frac{c^{3}}{6}\cdot\frac{1}{(M+b_1)^{r+2}}}{\frac{c^{3}}{6}\cdot\frac{1}{(M+b_2)^{r+1}}} = \left(\frac{M+b_2}{M+b_1}\right)^{r+1}\frac{1}{M+b_1}. \qquad (13) $$
According to (13) and Theorem 2, we get
$$ \lim_{M\to\infty}\hat{\lambda}_{HB1} = (r+1)\lim_{M\to\infty}\frac{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+2}}\,db}{\displaystyle\int_0^{c}\frac{b(c-b)}{(M+b)^{r+1}}\,db} = (r+1)\lim_{M\to\infty}\left\{\left(\frac{M+b_2}{M+b_1}\right)^{r+1}\frac{1}{M+b_1}\right\} = 0. \qquad (14) $$
According to Theorem 1,
$$ \hat{\lambda}_{EB1} = \frac{2(r+1)}{c^{2}}\left[(M+c)\ln\left(\frac{M+c}{M}\right) - c\right]. \qquad (15) $$
For −1 < x < 1, we have
$$ \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots = \sum_{i=1}^{\infty}(-1)^{i-1}\frac{x^{i}}{i}. $$
Let x = c/M. When 0 < c < M, we have 0 < c/M < 1, and
$$ (M+c)\ln\left(\frac{M+c}{M}\right) - c = (M+c)\left[\frac{c}{M} - \frac{1}{2}\left(\frac{c}{M}\right)^{2} + \frac{1}{3}\left(\frac{c}{M}\right)^{3} - \frac{1}{4}\left(\frac{c}{M}\right)^{4} + \frac{1}{5}\left(\frac{c}{M}\right)^{5} - \cdots\right] - c = \frac{c^{2}}{6M}\left(3 - \frac{c}{M}\right) + \frac{c^{4}}{60M^{3}}\left(5 - 3\,\frac{c}{M}\right) + \cdots. \qquad (16) $$
According to (15) and (16), we have
$$ \lim_{M\to\infty}\hat{\lambda}_{EB1} = \frac{2(r+1)}{c^{2}}\lim_{M\to\infty}\left[\frac{c^{2}}{6M}\left(3 - \frac{c}{M}\right) + \frac{c^{4}}{60M^{3}}\left(5 - 3\,\frac{c}{M}\right) + \cdots\right] = 0. \qquad (17) $$
According to (14) and (17), we get
$$ \lim_{M\to\infty}\hat{\lambda}_{EB1} = \lim_{M\to\infty}\hat{\lambda}_{HB1}. $$
Similarly, we have
$$ \lim_{M\to\infty}\hat{\lambda}_{EBi} = \lim_{M\to\infty}\hat{\lambda}_{HBi}, \quad i = 2, 3. $$
Thus, the proof is completed. □

Theorem 4 shows that the λ̂_EBi and λ̂_HBi (i = 1, 2, 3) are asymptotically equivalent to each other as M tends to infinity; that is, λ̂_EBi and λ̂_HBi (i = 1, 2, 3) are close to each other when M is sufficiently large.
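
The asymptotic equivalence in Theorem 4 can also be illustrated numerically by evaluating λ̂_EB1 and λ̂_HB1 for increasing M. A minimal sketch, assuming SciPy is available; the values of r and c are illustrative.

```python
import math
from scipy.integrate import quad

def eb1(r, M, c):
    """Closed-form E-Bayesian estimate under prior pi_1 (Theorem 1)."""
    L = math.log((M + c) / M)
    return 2 * (r + 1) / c**2 * ((M + c) * L - c)

def hb1(r, M, c):
    """Hierarchical Bayesian estimate under prior pi_1 (Theorem 2), by quadrature."""
    num, _ = quad(lambda b: b * (c - b) / (M + b) ** (r + 2), 0.0, c)
    den, _ = quad(lambda b: b * (c - b) / (M + b) ** (r + 1), 0.0, c)
    return (r + 1) * num / den

r, c = 4, 2000.0
for M in (1e4, 1e5, 1e6):
    # The two estimates approach each other (and zero) as M grows.
    print(M, eb1(r, M, c), hb1(r, M, c))
```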

6. A real example

Consider the test data for a type of electronic product given in Table 1 (time unit: hour); the lifetime of this type of electronic product has an exponential distribution.
By Theorems 1 and 2, we can obtain the values of λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3). Some numerical results are listed in Table 2.
From Table 2, we find that for the same c (100, 500, 1000, 2000, 3000, 4000), the λ̂_EBi (i = 1, 2, 3) are very close to each other and satisfy Theorem 3. For different c (100, 500, 1000, 2000, 3000, 4000), the λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3) are all robust and satisfy Theorem 4.
From Table 2, we can obtain the reliability estimates R̂_EBi(t) = exp{−λ̂_EBi t} (i = 1, 2, 3) and R̂_HBi(t) = exp{−λ̂_HBi t} (i = 1, 2, 3). Some of the results are listed in Table 3.
From Table 3, we find that for the same c (100, 500, 1000, 2000, 3000, 4000), the R̂_EBi(t) (i = 1, 2, 3) are very close to each other. For different c (100, 500, 1000, 2000, 3000, 4000), the R̂_EBi(t) (i = 1, 2, 3) and R̂_HBi(t) (i = 1, 2, 3) are all robust.

Table 1
Test data for the electronic products (time unit: hour)

i        1     2     3     4     5     6     7
t_i    480   680   880  1080  1280  1480  1680
n_i      3     3     5     5     8     8     8
r_i      0     0     0     1     0     2     1

Table 2
Results of λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3)

c           500        1000       2000       3000       4000       Range
λ̂_EB1      1.186E-04  1.181E-04  1.172E-04  1.163E-04  1.154E-04  3.139E-06
λ̂_HB1      1.183E-04  1.177E-04  1.164E-04  1.151E-04  1.139E-04  4.429E-06
Δλ̂_1       2.270E-07  4.465E-07  8.497E-07  1.204E-06  1.517E-06  4.290E-06
λ̂_EB2      1.183E-04  1.176E-04  1.163E-04  1.150E-04  1.137E-04  3.139E-06
λ̂_HB2      1.181E-04  1.172E-04  1.155E-04  1.138E-04  1.123E-04  4.429E-06
Δλ̂_2       2.301E-07  4.421E-07  8.306E-07  1.170E-06  1.455E-06  1.225E-06
λ̂_EB3      1.181E-04  1.172E-04  1.154E-04  1.137E-04  1.120E-04  6.120E-06
λ̂_HB3      1.180E-04  1.170E-04  1.150E-04  1.131E-04  1.113E-04  6.675E-06
Δλ̂_3       1.132E-07  2.189E-07  3.963E-07  5.526E-07  6.682E-07  5.550E-07

Note: 1.132E-07 = 1.132 × 10^{-7}; Δλ̂_i = λ̂_EBi − λ̂_HBi, i = 1, 2, 3.

Table 3
Results of R̂_EBi(500) (i = 1, 2, 3) and R̂_HBi(500) (i = 1, 2, 3)

c              500        1000       2000       3000       4000       Range
R̂_EB1(500)    0.9424343  0.9426531  0.9430831  0.9435035  0.9439148  0.0014805
R̂_HB1(500)    0.9425413  0.9428635  0.9434838  0.9440715  0.9446312  0.0020899
ΔR̂_1          0.0001070  0.0002104  0.0004007  0.0005680  0.0007164  0.0006094
R̂_EB2(500)    0.9425443  0.9426806  0.9435083  0.9441275  0.9447288  0.0021845
R̂_HB2(500)    0.9426528  0.9428891  0.9439003  0.9446795  0.9454167  0.0027639
ΔR̂_2          0.0001085  0.0002085  0.0003920  0.0005520  0.0006879  0.0005794
R̂_EB3(500)    0.9426543  0.9430881  0.9439338  0.9447518  0.9455434  0.0028891
R̂_HB3(500)    0.9427077  0.9431913  0.9441209  0.9450129  0.9458594  0.0031517
ΔR̂_3          0.0000534  0.0001032  0.0001871  0.0002611  0.0003160  0.0002626

Note: ΔR̂_i = R̂_HBi(500) − R̂_EBi(500), i = 1, 2, 3.
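
As a rough illustration of how such values are obtained, the sketch below computes M and r from Table 1 as defined in Theorem 1 and then evaluates λ̂_EB2 and R̂_EB2(500) for c = 2000. Exact agreement with Tables 2 and 3 is not guaranteed, since it depends on rounding and on the details of how M is obtained from the test scheme.

```python
import math

# Table 1 data (time unit: hour)
t = [480, 680, 880, 1080, 1280, 1480, 1680]
n = [3, 3, 5, 5, 8, 8, 8]
fails = [0, 0, 0, 1, 0, 2, 1]

M = sum((ni - ri) * ti for ni, ri, ti in zip(n, fails, t))  # M = sum (n_i - r_i) t_i
r = sum(fails)                                              # r = total number of failures

c = 2000.0
lam_eb2 = (r + 1) / c * math.log((M + c) / M)  # Theorem 1, uniform prior pi_2
rel_500 = math.exp(-lam_eb2 * 500)             # R_EB2(500) = exp(-lambda_EB2 * 500)
print(M, r, lam_eb2, rel_500)
```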

In application, the author suggests selecting a value of c near the middle of the interval (0, 4000], i.e., c = 2000.

7. Conclusion

This paper introduces a new method, the E-Bayesian estimation method, to estimate the failure rate. The author would like to put forward two questions for any new parameter estimation method: (i) how closely is the new method related to existing ones? (ii) in which respects is the new method superior to existing ones?
For the E-Bayesian estimation method, Theorems 3 and 4 give a good answer to question (i). As for question (ii), Theorems 1 and 2 show that the E-Bayesian estimates have simple closed-form expressions, whereas the hierarchical Bayesian estimates rely on integral expressions, which are often not easy to evaluate.
Reviewing the example in Section 6, for the different prior densities π_1(b) (decreasing), π_2(b) (constant) and π_3(b) (increasing) of the hyperparameter b, the corresponding λ̂_EBi (i = 1, 2, 3) and λ̂_HBi (i = 1, 2, 3) are all robust and consistent with Theorem 4, and the λ̂_EBi (i = 1, 2, 3) are consistent with Theorem 3. Thus, the author suggests taking the uniform distribution as the prior of the hyperparameter b (that is, in application, taking π_2(b) as the prior density of b). The above example shows that the E-Bayesian estimation method is both efficient and easy to perform.

Acknowledgement

The author wishes to thank Professors Xizhi Wu and Jiabao Wang, who checked the paper and gave the author very helpful suggestions. The author also thanks the referees for their helpful comments.

References

[1] D.V. Lindley, A.F.M. Smith, Bayes estimates for the linear model, Journal of the Royal Statistical Society, Series B 34 (1972) 1–41.
[2] M. Han, The structure of hierarchical prior distribution and its applications, Chinese Operations Research and Management Science 6 (3) (1997) 31–40.
[3] S.P. Brooks, Markov chain Monte Carlo method and its application, The Statistician 47 (1) (1998) 69–100.
[4] J.O. Berger, Statistical Decision Theory and Bayesian Analysis, second ed., Springer-Verlag, New York, 1985.
[5] J.F. Lawless, Statistical Models and Methods for Lifetime Data, Wiley, New York, 1982.
