Gibbs sampler approach for objective Bayesian inference in elliptical multivariate random effects model

Olha Bodnar(1) and Taras Bodnar(2)

(1) National Institute of Standards and Technology, Gaithersburg, MD 20899-8980, USA, and Unit of Statistics, School of Business, Örebro University, SE-70182 Örebro, Sweden
(2) Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden

arXiv:2305.15983v1 [stat.ME] 25 May 2023

Abstract

In this paper, we present the Bayesian inference procedures for the parameters of
the multivariate random effects model derived under the assumption of an elliptically
contoured distribution when the Berger and Bernardo reference and the Jeffreys priors
are assigned to the model parameters. We develop a new numerical algorithm for drawing
samples from the posterior distribution, which is based on the hybrid Gibbs sampler.
The new approach is compared with the two Metropolis-Hastings algorithms previously derived in the literature via an extensive simulation study. The results are applied in practice to ten studies about the effectiveness of hypertension treatment for reducing blood pressure, where the treatment effects on both the systolic and the diastolic blood pressure are investigated.

Keywords: Gibbs sampler; multivariate random-effects model; noninformative prior; elliptically contoured distribution; multivariate meta-analysis

1 Introduction
The multivariate random effects model is a classical quantitative tool used to combine the measurements of individual studies into a single consensus vector. The approach is widely used in multivariate meta-analysis in medicine, psychology, chemistry, among other fields (see, e.g., Gasparrini et al. (2012), Wei and Higgins (2013), Jackson and Riley (2014), Liu et al. (2015), Noma et al. (2019), Negeri and Beyene (2020), Jackson et al. (2020)) as well as in interlaboratory comparison studies in physics and metrology (see, e.g., Rukhin (2007), Toman et al. (2012), Rukhin (2013), Bodnar and Eriksson (2023)).
In most applications, the random effects model is based on the assumption that the individual measurements are normally distributed with an unknown common mean vector and known covariance matrices, which are provided together with the measurement results by each participant (cf., Lambert et al. (2005), Viechtbauer (2007), Sutton and Higgins (2008), Riley et al. (2010), Strawderman and Rukhin (2010), Guolo (2012), Turner et al. (2015), Bodnar et al. (2017), Wynants et al. (2018), Michael et al. (2019), Veroniki et al. (2019), Bodnar and Eriksson (2023)). In the multivariate case, the model is determined by the following equation:

x_i = \mu + \lambda_i + \varepsilon_i \quad \text{with} \quad \lambda_i \sim N_p(0, \Psi) \ \text{and} \ \varepsilon_i \sim N_p(0, U_i), \qquad (1)

where {λ_i}_{i=1,...,n} and {ε_i}_{i=1,...,n} are mutually independent. Both λ_i and ε_i are normally distributed random vectors in the stochastic representation (1) of x_i.
The aim of a meta-analysis and/or of an interlaboratory comparison study is to use the observation vectors x_i, i = 1, ..., n, to produce a single estimate of the common mean vector µ. Usually, it appears that the constructed estimator is more volatile than one would expect by using the uncertainties U_i, i = 1, ..., n, reported by each individual study. This effect is known as dark uncertainty (cf., Thompson and Ellison (2011)) or between-study variability in the statistical literature, and it is explained by the fact that the individual studies may be performed in different places and at different times, resulting in extra variability. Thus, the random effects λ_i, i = 1, ..., n, are introduced in (1), while their covariance matrix Ψ, known also as the between-study covariance matrix, represents the scope of the dark uncertainty.
An extension of the multivariate random effects model was considered by Bodnar et al. (2016) and
Bodnar (2019) in the univariate case and by Bodnar and Bodnar (2023) in the multivariate case. It
is based on the assumption of elliptically contoured distributions to model the stochastic behaviour
of measurement results, which include the normal distribution as a special case (see, Gupta et al.
(2013)). Let X = (x1 , ..., xn ) denote the observation matrix. Then, the model is given by
p(X|\mu, \Psi) = \frac{1}{\sqrt{\det(\Psi \otimes I + U)}}\, f\Big( \operatorname{vec}(X - \mu 1^\top)^\top (\Psi \otimes I + U)^{-1} \operatorname{vec}(X - \mu 1^\top) \Big), \qquad (2)
where 1 is the vector of ones, I is the identity matrix of an appropriate order, and U = diag(U1 , ..., Un )
is a block-diagonal matrix consisting of the reported covariance matrices Ui , i = 1, ..., n, as its diagonal
blocks. The symbol ⊗ denotes the Kronecker product, while vec stands for the vec operator (see, e.g.,
Harville (1997)). The function f(·) is the density generator, which determines the type of elliptical distribution used in the model assumption.
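
To make the model concrete, the following minimal sketch (in Python, with illustrative values of p, n, µ, Ψ and U_i that are not taken from the paper) simulates one observation matrix X from the stochastic representation (1), i.e., from the normal special case of (2).

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 2, 10                            # dimension and number of studies (illustrative)
mu = np.array([1.0, -0.5])              # common mean vector (illustrative)
Psi = np.array([[1.0, 0.3],             # between-study covariance matrix (illustrative)
                [0.3, 0.5]])
# reported within-study covariance matrices U_1, ..., U_n (illustrative)
U = [np.diag(rng.uniform(0.1, 1.0, size=p)) for _ in range(n)]

# model (1): x_i = mu + lambda_i + eps_i, lambda_i ~ N_p(0, Psi), eps_i ~ N_p(0, U_i)
X = np.column_stack([
    mu
    + rng.multivariate_normal(np.zeros(p), Psi)     # random effect lambda_i
    + rng.multivariate_normal(np.zeros(p), U[i])    # measurement error eps_i
    for i in range(n)
])                                                  # p x n observation matrix X = (x_1, ..., x_n)
```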

Statistical inference procedures for the parameters of the multivariate random effects model were derived from both the frequentist and the Bayesian viewpoints. Under the assumption of normality, Jackson et al. (2010) extended the DerSimonian and Laird approach to multivariate data. Chen et al. (2012) developed a method based on restricted maximum likelihood. These two procedures together with the maximum likelihood estimator constitute the most commonly used methods of frequentist statistics (cf., Jackson et al. (2013), Schwarzer et al. (2015), Jackson et al. (2020)). While Nam et al. (2003) and Paul et al. (2010) derived Bayesian inference procedures under the assumption of the multivariate normal random effects model by assigning informative priors to the model parameters, recently Bodnar and Bodnar (2023) developed a full objective Bayesian analysis of the generalized multivariate random effects model. They also presented two Metropolis-Hastings algorithms for drawing samples from the derived posterior distribution. In the current paper, we extend the previous findings by deriving a Gibbs sampler algorithm, which is compared to the existing approaches on both simulated and real data.
The rest of the paper is structured as follows. In the next section, the objective Bayesian inference procedures for the parameters of the generalized random effects model are presented, which are derived by using the Berger and Bernardo reference prior (see, Berger and Bernardo (1992), Berger et al. (2009)) and the Jeffreys prior (see, Jeffreys (1946)). The Gibbs sampler method for drawing samples from the posterior distribution of the parameters of the generalized random effects model is introduced in Section 3. Within an extensive simulation study we compare the new approach to the existing ones in Section 4, where split-R̂ estimates based on the rank normalization of Vehtari et al. (2021) and the coverage probabilities of the constructed credible intervals are used as performance measures. In Section 5 the results of the empirical illustration are provided, while Section 6 summarizes the findings obtained in the paper.

2 Bayesian inference in the generalized multivariate random effects model

In many practical situations, no information or only vague information is available about the model parameters. In such a case, it is preferable to construct Bayesian inference procedures by assigning a noninformative prior. The Laplace prior, the Jeffreys prior, and the Berger and Bernardo reference prior are the most commonly used noninformative priors in the statistical literature (see, Laplace (1812), Jeffreys (1946), Berger and Bernardo (1992), Berger et al. (2009)).
The Berger and Bernardo reference prior and the Jeffreys prior for the parameters µ and Ψ of the
generalized multivariate random effects model were derived in Bodnar and Bodnar (2023). They are
given by

• The Berger and Bernardo reference prior:

\pi_R(\mu, \Psi) = \pi_R(\Psi) \propto \det\Bigg( G_p^\top \Bigg[ \frac{2J_2}{2pn + p^2 n^2} \sum_{i=1}^n (\Psi + U_i)^{-1} \otimes (\Psi + U_i)^{-1} + \Big( \frac{J_2}{2pn + p^2 n^2} - \frac{1}{4} \Big) \operatorname{vec}\Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big) \Big[ \operatorname{vec}\Big( \sum_{j=1}^n (\Psi + U_j)^{-1} \Big) \Big]^\top \Bigg] G_p \Bigg)^{1/2} \qquad (3)

with

J_2 = E\Bigg[ \Bigg( \frac{f'(R^2)}{f(R^2)} \Bigg)^2 (R^2)^2 \Bigg], \qquad (4)

where R^2 = \operatorname{vec}(Z)^\top \operatorname{vec}(Z) with Z \sim E_{p,n}(O_{p,n}, I_{p \times n}, f) (matrix-variate elliptically contoured distribution with zero location matrix O_{p,n}, identity dispersion matrix I_{p \times n} and density generator f(·)). The symbol G_p denotes the duplication matrix (see, Magnus and Neudecker (2019)).

• The Jeffreys prior:

\pi_J(\mu, \Psi) = \pi_J(\Psi) \propto \pi_R(\Psi) \sqrt{ \det\Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big) }. \qquad (5)

Both priors in (3) and (5) depend only on Ψ and assign the constant prior to µ, which is not surprising since µ is the location parameter of the model. As such, the choice between the Berger and Bernardo reference prior and the Jeffreys prior has an impact on the marginal posterior distribution of Ψ, while the conditional posterior for µ given Ψ is the same and is given by

\pi(\mu | \Psi, X) \propto \sqrt{ \det\Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big) }\; f_{\Psi,X}\Big( (\mu - \tilde{x}(\Psi))^\top \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big) (\mu - \tilde{x}(\Psi)) \Big), \qquad (6)

where

f_{\Psi,X}(u) = f\Big( \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi)) + u \Big), \quad u \ge 0, \qquad (7)

with

\tilde{x}(\Psi) = \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big)^{-1} \sum_{i=1}^n (\Psi + U_i)^{-1} x_i. \qquad (8)

Furthermore, a similar statement for the conditional posterior of µ holds for any prior of µ and Ψ which is a function of Ψ only. Using the generic notation π(Ψ), we get the marginal posterior for Ψ expressed as

\pi(\Psi | X) \propto \frac{\pi(\Psi)}{\sqrt{\det\big( \sum_{i=1}^n (\Psi + U_i)^{-1} \big)}\; \prod_{i=1}^n \sqrt{\det(\Psi + U_i)}} \int_0^\infty u^{p-1} f\Big( u^2 + \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi)) \Big)\, du. \qquad (9)
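
As an illustration of (8), the weighted mean x̃(Ψ) can be computed as in the short sketch below; `Psi`, `U` and `X` are assumed to be a p × p matrix, a list of the n matrices U_i and the p × n observation matrix, as in the simulation sketch above.

```python
import numpy as np

def x_tilde(Psi, U, X):
    """Weighted mean in (8): (sum_i W_i)^{-1} sum_i W_i x_i with W_i = (Psi + U_i)^{-1}."""
    W = [np.linalg.inv(Psi + Ui) for Ui in U]
    A = sum(W)                                          # sum_i (Psi + U_i)^{-1}
    b = sum(Wi @ X[:, i] for i, Wi in enumerate(W))     # sum_i (Psi + U_i)^{-1} x_i
    return np.linalg.solve(A, b)
```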
From (6) one concludes that the conditional posterior of µ belongs to the class of elliptically contoured distributions with density generator f_{Ψ,X}(·), which is related to f(·) as shown in (7). In some special cases, the density generator f_{Ψ,X}(·) can be expressed analytically. For instance, the conditional posterior of µ is a multivariate normal distribution under the normal multivariate random effects model, while µ conditionally on Ψ and X is t-distributed under the assumption of the t multivariate random effects model (see, Section 5 in Bodnar and Bodnar (2023)).
The situation is more challenging in the case of the marginal posterior of Ψ. Even though the integral in (9) can be evaluated analytically for some families of elliptically contoured distributions, like the normal distribution and the t-distribution, the resulting expression for the marginal posterior of Ψ remains a complicated function of Ψ, which makes it impossible to derive closed-form Bayesian inference procedures for either Ψ or µ. As such, two Metropolis-Hastings algorithms for drawing samples from the joint posterior of µ and Ψ were proposed in Bodnar and Bodnar (2023). In the next section, we suggest a further approach that is based on the Gibbs sampler and utilizes the fact that the conditional posterior of µ is an elliptically contoured distribution.
It is also remarkable that the assumption of known covariance matrices Ui , i = 1, ..., n reported
by individual studies can be weakened. Namely, one can extend the generalized random effects model
(2) by assuming that

p(X|\mu, \Psi, U_1, ..., U_n) = \frac{1}{\sqrt{\det(\Psi \otimes I + U)}}\, f\Big( \operatorname{vec}(X - \mu 1^\top)^\top (\Psi \otimes I + U)^{-1} \operatorname{vec}(X - \mu 1^\top) \Big), \qquad (10)

and

p(U1 , ..., Un |Σ1 , ..., Σn ) = g (U1 , ..., Un ; Σ1 , ..., Σn ) , (11)

where g(.) is a density parameterized by Σ1 , ..., Σn . Such a model under the assumption of multivariate
normal distribution was considered in Rukhin (2007) and Zhao and Mathew (2018). If the parameter
space for µ, Ψ and Σ1 , ..., Σn can be presented as a Cartesian product, then the statistics U1 , ..., Un
are ancillary and the conditional principle can be used (Barndorff-Nielsen and Cox (1994), Reid
(1995), Fraser (2004), Ghosh et al. (2010), Sundberg (2019)). As such, the conditional likelihood
p(X|µ, Ψ, U1 , ..., Un ) is used in the derivation of Bayesian inference procedures and the realizations
of U_1, ..., U_n are treated as known quantities. This approach is widely used in Bayesian regression analysis, among other areas, where explanatory variables are usually assumed to be ancillary (cf., Gelman et al. (2013), Norets (2015)).

3 Gibbs sampler
It is shown in (6) that the conditional posterior of µ given Ψ belongs to the family of elliptically contoured distributions with location parameter x̃(Ψ), dispersion matrix \big( \sum_{i=1}^n (\Psi + U_i)^{-1} \big)^{-1}, and density generator f_{Ψ,X}(·). As such, using the last generated value of the between-study covariance matrix Ψ^(b−1), the new µ^(b) is drawn from the conditional distribution

\pi(\mu | \Psi = \Psi^{(b-1)}, X) \propto f_{\Psi^{(b-1)},X}\Big( (\mu - \tilde{x}(\Psi^{(b-1)}))^\top \Big( \sum_{i=1}^n (\Psi^{(b-1)} + U_i)^{-1} \Big) (\mu - \tilde{x}(\Psi^{(b-1)})) \Big).

In some special cases, like under the normal multivariate random effects model and the t multivariate random effects model, this step of the Gibbs sampler algorithm can be further simplified, as presented in detail in Sections 3.1 and 3.2.
Next, we discuss how a new value of Ψ can be drawn from the conditional posterior distribution of Ψ given µ and X. Under the assumption J_2/(2pn + p²n²) − 1/4 ≤ 0, where J_2 is defined in (4), we get from the proof of Theorem 4 in Bodnar and Bodnar (2023) that

\frac{\pi(\Psi)}{\prod_{i=1}^n \sqrt{\det(\Psi + U_i)}} \le \det(\Psi)^{-(n+p+1)/2} \qquad (12)

under the Berger and Bernardo reference prior and

\frac{\pi(\Psi)}{\prod_{i=1}^n \sqrt{\det(\Psi + U_i)}} \le \det(\Psi)^{-(n+p+2)/2} \qquad (13)

under the Jeffreys prior. Furthermore, assuming that the density generator f(u) is a non-increasing function in u ≥ 0, it holds that

f\big( \operatorname{vec}(X - \mu 1^\top)^\top (\Psi \otimes I + U)^{-1} \operatorname{vec}(X - \mu 1^\top) \big) \le f\big( \operatorname{vec}(X - \mu 1^\top)^\top (\Psi \otimes I)^{-1} \operatorname{vec}(X - \mu 1^\top) \big) = f\big( \operatorname{tr}(\Psi^{-1} S(\mu)) \big), \qquad (14)

where

S(\mu) = \sum_{i=1}^n (x_i - \mu)(x_i - \mu)^\top. \qquad (15)

Using (12)-(14) and the fact that the conditional posterior of Ψ is proportional to the joint posterior of µ and Ψ with a proportionality constant that is a function of µ, we get that the conditional posterior for Ψ given µ is bounded by

q_R(\Psi | \mu) = \det(\Psi)^{-(n+p+1)/2} f\big( \operatorname{tr}(\Psi^{-1} S(\mu)) \big) \qquad (16)

under the Berger and Bernardo reference prior and by

q_J(\Psi | \mu) = \det(\Psi)^{-(n+p+2)/2} f\big( \operatorname{tr}(\Psi^{-1} S(\mu)) \big) \qquad (17)

under the Jeffreys prior. The two expressions q_R(Ψ|µ) and q_J(Ψ|µ) are the kernels of the generalized p-dimensional inverse Wishart distribution (see, e.g., Sutradhar and Ali (1989)) with scale matrix S(\mu) = \sum_{i=1}^n (x_i - \mu)(x_i - \mu)^\top, density generator f, and (n + p + 1) degrees of freedom for the Berger and Bernardo reference prior and (n + p + 2) degrees of freedom for the Jeffreys prior. We denote these assertions by Ψ|µ, X ∼ GIW_p(n + p + 1, S(µ), f) and Ψ|µ, X ∼ GIW_p(n + p + 2, S(µ), f), respectively. Finally, it is noted that the assumption J_2/(2pn + p²n²) − 1/4 ≤ 0 and the assumption of a non-increasing density generator f(·), which are used in the derivation of (16) and (17), are quite general, and they are fulfilled for many families of elliptically contoured distributions, like the normal distribution and the t-distribution as discussed in Sections 3.1 and 3.2 below.
The derived results provide a motivation for choosing the generalized inverse Wishart distribution as a proposal used in drawing samples from the conditional posterior of Ψ given µ. It is remarkable that it is simple to draw samples from the generalized inverse Wishart distribution. Also, the generated matrices are positive definite by construction. In the general case, the resulting hybrid Gibbs sampler under the Berger and Bernardo reference prior is performed in the following way:

Algorithm 1 Gibbs sampler for drawing realizations from the joint posterior distribution of µ and Ψ under the generalized multivariate random effects model (2) and the Berger and Bernardo reference prior (3)

(1) Initialization: Choose the initial values µ^(0) and Ψ^(0) for µ and Ψ and set b = 0.

(2) Given the previous value Ψ^(b−1) and data X, generate µ^(b) from the conditional posterior (6):

\mu | \Psi = \Psi^{(b-1)}, X \sim E_{p,1}\Big( \tilde{x}(\Psi^{(b-1)}),\ \Big( \sum_{i=1}^n (\Psi^{(b-1)} + U_i)^{-1} \Big)^{-1},\ f_{\Psi^{(b-1)},X} \Big), \qquad (18)

where x̃(Ψ^(b−1)) is given in (8) and f_{Ψ^(b−1),X} is defined in (7).

(3) Given the previous value µ^(b) and data X, generate a new value Ψ^(b):

(i) Using µ^(b) and X, generate Ψ^(w) from

\Psi | \mu = \mu^{(b)}, X \sim GIW_p\big( n + p + 1, S(\mu^{(b)}), f \big), \qquad (19)

where S(µ^(b)) is defined in (15).

(ii) Compute the Metropolis-Hastings ratio:

MH^{(b)} = \frac{\pi(\Psi^{(w)} | \mu^{(b)}, X)\, q_R(\Psi^{(b-1)} | \mu^{(b)})}{\pi(\Psi^{(b-1)} | \mu^{(b)}, X)\, q_R(\Psi^{(w)} | \mu^{(b)})}, \qquad (20)

where q_R(Ψ|µ) is the conditional density of Ψ given µ, i.e., the density of the generalized inverse Wishart distribution (16).

(iii) Moving to the next state of the Markov chain: Generate U^(b) from the uniform distribution on [0, 1]. If U^(b) < min(1, MH^(b)), then set µ^(b) = µ^(w) and Ψ^(b) = Ψ^(w). Otherwise, set µ^(b) = µ^(b−1) and Ψ^(b) = Ψ^(b−1).

(4) Return to step (2), increase b by 1, and repeat until the sample of size B is accumulated.

Since the same value of µ is used in the conditional posterior for Ψ when the Metropolis-Hastings ratio is computed, the conditional posterior for Ψ can be replaced by the joint posterior for µ and Ψ in (20). Under the Jeffreys prior, step (3.i) in the algorithm should be replaced by

(i) Using µ^(b) and X, generate Ψ^(w) from

\Psi | \mu = \mu^{(b)}, X \sim GIW_p\big( n + p + 2, S(\mu^{(b)}), f \big), \qquad (21)

while q_J(Ψ|µ) should be used instead of q_R(Ψ|µ) in the computation of the Metropolis-Hastings ratio in (20).
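
The structure of one sweep of Algorithm 1 is sketched below. The helpers `draw_mu_conditional`, `sample_giw`, `log_joint_posterior` and `log_q` are assumptions of this sketch (they depend on the chosen density generator f and on the prior, and are not specified here); the sketch only illustrates how steps (2)-(3) and the accept/reject decision in (20) fit together, with the ratio evaluated on the log scale.

```python
import numpy as np

def gibbs_sweep(mu_prev, Psi_prev, X, rng,
                draw_mu_conditional, sample_giw, log_joint_posterior, log_q):
    """One sweep of the hybrid Gibbs sampler (Algorithm 1), written generically.

    draw_mu_conditional(Psi, X, rng) -> draw from the conditional posterior (18)
    sample_giw(dof, S, rng)          -> draw from the GIW proposal (19)
    log_joint_posterior(mu, Psi, X)  -> log kernel of the joint posterior of (mu, Psi)
    log_q(Psi, mu)                   -> log kernel of q_R (or q_J) in (16)/(17)
    All four helpers are placeholders to be supplied by the user.
    """
    p, n = X.shape

    # step (2): exact draw of mu from its conditional posterior (18)
    mu_new = draw_mu_conditional(Psi_prev, X, rng)

    # step (3.i): propose Psi from the generalized inverse Wishart proposal (19)
    S = sum(np.outer(X[:, i] - mu_new, X[:, i] - mu_new) for i in range(n))  # S(mu) in (15)
    Psi_prop = sample_giw(n + p + 1, S, rng)

    # step (3.ii): Metropolis-Hastings ratio (20) on the log scale; the conditional
    # posterior of Psi is replaced by the joint posterior, as noted above
    log_mh = (log_joint_posterior(mu_new, Psi_prop, X) + log_q(Psi_prev, mu_new)
              - log_joint_posterior(mu_new, Psi_prev, X) - log_q(Psi_prop, mu_new))

    # step (3.iii): accept or reject; as in the algorithm, a rejection resets both mu and Psi
    if np.log(rng.uniform()) < min(0.0, log_mh):
        return mu_new, Psi_prop
    return mu_prev, Psi_prev
```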
In the next two subsections we present the Gibbs sampler in the two special cases of the normal distribution and the t-distribution.

3.1 Gibbs sampler under the normal multivariate random effects model

Under the assumption of the normal multivariate random effects model, it holds that

f(u) = K_{p,n} \exp(-u/2), \quad u \ge 0, \quad \text{with} \quad K_{p,n} = (2\pi)^{-pn/2}, \qquad (22)

which is a decreasing function. Moreover, under the assumption of normality it holds that

\frac{J_2}{2pn + p^2 n^2} - \frac{1}{4} = 0.

Thus, the two conditions present in the derivation of the Gibbs sampler algorithm are fulfilled. Using (22), we get that

\mu | \Psi, X \sim N_p\Big( \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big)^{-1} \sum_{i=1}^n (\Psi + U_i)^{-1} x_i,\ \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big)^{-1} \Big), \qquad (23)

and the marginal posterior for Ψ is given by

\pi(\Psi | X) \propto \frac{\pi(\Psi)}{\sqrt{\det\big( \sum_{i=1}^n (\Psi + U_i)^{-1} \big)}\; \prod_{i=1}^n \sqrt{\det(\Psi + U_i)}} \exp\Big( -\frac{1}{2} \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi)) \Big). \qquad (24)

Hence, the hybrid Gibbs sampler algorithm derived under the Berger and Bernardo reference prior
simplifies to

Algorithm 2 Gibbs sampler for drawing realizations from the joint posterior distribution of µ and Ψ under the normal multivariate random effects model and the Berger and Bernardo reference prior (3)

(1) Initialization: Choose the initial values µ^(0) and Ψ^(0) for µ and Ψ and set b = 0.

(2) Given the previous value Ψ^(b−1) and data X, generate µ^(b) from:

\mu | \Psi = \Psi^{(b-1)}, X \sim N_p\Big( \tilde{x}(\Psi^{(b-1)}),\ \Big( \sum_{i=1}^n (\Psi^{(b-1)} + U_i)^{-1} \Big)^{-1} \Big), \qquad (25)

where x̃(Ψ^(b−1)) is given in (8).

(3) Given the previous value µ^(b) and data X, generate a new value Ψ^(b):

(i) Using µ^(b) and X, generate Ψ^(w) from

\Psi | \mu = \mu^{(b)}, X \sim IW_p\big( n + p + 1, S(\mu^{(b)}) \big), \qquad (26)

where S(µ^(b)) is defined in (15) and the symbol IW_p denotes the inverse Wishart distribution (see, e.g., Gupta and Nagar (2000)).

(ii) Compute the Metropolis-Hastings ratio:

MH^{(b)} = \frac{\pi(\Psi^{(w)} | \mu^{(b)}, X)\, q_R(\Psi^{(b-1)} | \mu^{(b)})}{\pi(\Psi^{(b-1)} | \mu^{(b)}, X)\, q_R(\Psi^{(w)} | \mu^{(b)})}, \qquad (27)

where q_R(Ψ|µ) is given in (16) with f(·) as in (22).

(iii) Moving to the next state of the Markov chain: Generate U^(b) from the uniform distribution on [0, 1]. If U^(b) < min(1, MH^(b)), then set µ^(b) = µ^(w) and Ψ^(b) = Ψ^(w). Otherwise, set µ^(b) = µ^(b−1) and Ψ^(b) = Ψ^(b−1).

(4) Return to step (2), increase b by 1, and repeat until the sample of size B is accumulated.

Under the Jeffreys prior, step (3.i) in the algorithm becomes

(i) Using µ^(b) and X, generate Ψ^(w) from

\Psi | \mu = \mu^{(b)}, X \sim IW_p\big( n + p + 2, S(\mu^{(b)}) \big). \qquad (28)

Finally, q_R(Ψ|µ) should be replaced by q_J(Ψ|µ) in the computation of the Metropolis-Hastings ratio in (27).
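
A self-contained sketch of Algorithm 2 under the Berger and Bernardo reference prior is given below, assuming the degrees-of-freedom convention of Gupta and Nagar (2000) cited above, under which the kernel of IW_p(n + p + 1, S) is det(Ψ)^{−(n+p+1)/2} exp(−tr(Ψ^{−1}S)/2) and therefore corresponds to scipy's `invwishart` with `df = n`. Evaluating the joint posterior requires the prior π_R(Ψ) in (3); to keep the sketch short, the helper `log_prior` is left to the user.

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_normal(X, U, log_prior, n_iter, rng):
    """Sketch of Algorithm 2 (normal model, Berger and Bernardo reference prior).

    X         : p x n observation matrix
    U         : list of the n reported p x p covariance matrices U_i
    log_prior : callable Psi -> log pi_R(Psi) up to a constant, eq. (3) (user supplied)
    Returns the lists of generated mu and Psi values (no burn-in removed).
    """
    p, n = X.shape
    mu, Psi = X.mean(axis=1), np.cov(X)               # crude starting values (illustrative)
    mu_draws, Psi_draws = [], []

    def log_joint(mu, Psi):                           # log joint posterior kernel, normal case
        val = log_prior(Psi)
        for i in range(n):
            C = Psi + U[i]
            r = X[:, i] - mu
            val += -0.5 * np.linalg.slogdet(C)[1] - 0.5 * r @ np.linalg.solve(C, r)
        return val

    def log_q(Psi, S):                                # log q_R(Psi | mu) in (16), normal f in (22)
        return (-0.5 * (n + p + 1) * np.linalg.slogdet(Psi)[1]
                - 0.5 * np.trace(np.linalg.solve(Psi, S)))

    for _ in range(n_iter):
        mu_prev, Psi_prev = mu, Psi

        # step (2): draw mu from the normal conditional posterior (25)
        W = [np.linalg.inv(Psi + Ui) for Ui in U]
        A = sum(W)
        x_tl = np.linalg.solve(A, sum(Wi @ X[:, i] for i, Wi in enumerate(W)))
        mu = rng.multivariate_normal(x_tl, np.linalg.inv(A))

        # step (3.i): inverse Wishart proposal (26); scipy's df convention gives the
        # exponent -(df + p + 1)/2, so df = n matches q_R (use df = n + 1 for the Jeffreys prior)
        R = X - mu[:, None]
        S = R @ R.T                                   # S(mu) in (15)
        Psi_prop = invwishart.rvs(df=n, scale=S, random_state=rng)

        # steps (3.ii)-(3.iii): Metropolis-Hastings ratio (27) on the log scale
        log_mh = (log_joint(mu, Psi_prop) + log_q(Psi_prev, S)
                  - log_joint(mu, Psi_prev) - log_q(Psi_prop, S))
        if np.log(rng.uniform()) < min(0.0, log_mh):
            Psi = Psi_prop
        else:                                         # a rejection resets both mu and Psi, as in step (3.iii)
            mu, Psi = mu_prev, Psi_prev

        mu_draws.append(mu)
        Psi_draws.append(Psi)
    return mu_draws, Psi_draws
```

The Jeffreys variant (28) is obtained by supplying the Jeffreys prior kernel (5) in `log_prior`, replacing n + p + 1 by n + p + 2 in `log_q`, and using `df = n + 1` in the proposal.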

3.2 Gibbs sampler under the t multivariate random effects model
Under the assumption of the t multivariate random effects model with d degrees of freedom, we get

f(u) = K_{p,n,d} (1 + u/d)^{-(pn+d)/2} \quad \text{with} \quad K_{p,n,d} = \frac{\Gamma((d+pn)/2)}{\Gamma(d/2)} (\pi d)^{-pn/2}, \qquad (29)

which is a decreasing function. Moreover, J_2 = \frac{pn(pn+2)(pn+d)}{4(pn+2+d)} (see, Section 3.2 in Bodnar (2019)) and, consequently,

\frac{J_2}{2pn + p^2 n^2} = \frac{pn+d}{4(pn+d+2)} < \frac{1}{4}.

Hence, the two conditions used in the derivation of the Gibbs sampler algorithm are fulfilled. Then, it holds that (see, Section 5.2 in Bodnar and Bodnar (2023))

\pi(\mu | \Psi, X) \propto \Big( 1 + \frac{1}{pn+d-p}\, \frac{pn+d-p}{d + \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi))}\, (\mu - \tilde{x}(\Psi))^\top \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big) (\mu - \tilde{x}(\Psi)) \Big)^{-(pn+d)/2}, \qquad (30)

i.e., µ conditionally on Ψ and X has a p-dimensional t-distribution with pn + d − p degrees of freedom, location parameter x̃(Ψ) and dispersion matrix

\frac{d + \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi))}{pn + d - p} \Big( \sum_{i=1}^n (\Psi + U_i)^{-1} \Big)^{-1}. \qquad (31)

The marginal posterior for Ψ is expressed as

\pi(\Psi | X) \propto \frac{\pi(\Psi)}{\sqrt{\det\big( \sum_{i=1}^n (\Psi + U_i)^{-1} \big)}\; \prod_{i=1}^n \sqrt{\det(\Psi + U_i)}} \Big( 1 + \frac{1}{d} \sum_{i=1}^n (x_i - \tilde{x}(\Psi))^\top (\Psi + U_i)^{-1} (x_i - \tilde{x}(\Psi)) \Big)^{-(pn+d)/2}. \qquad (32)

Finally, using the properties of the generalized inverse Wishart distribution with the density generator corresponding to (29), a random matrix from this distribution can be drawn by generating a random matrix from the inverse Wishart distribution and multiplying it by a random draw from the χ²-distribution with d degrees of freedom divided by d. We summarize this approach in the algorithm derived for the t multivariate random effects model when the Berger and Bernardo reference prior is employed.

Algorithm 3 Gibbs sampler for drawing realizations from the joint posterior distribution of µ and Ψ under the t multivariate random effects model and the Berger and Bernardo reference prior (3)

(1) Initialization: Choose the initial values µ^(0) and Ψ^(0) for µ and Ψ and set b = 0.

(2) Given the previous value Ψ^(b−1) and data X, generate µ^(b) from the t-distribution with pn + d − p degrees of freedom, location parameter x̃(Ψ^(b−1)) and dispersion matrix as in (31) with Ψ replaced by Ψ^(b−1).

(3) Given the previous value µ^(b) and data X, generate a new value Ψ^(b):

(i) Using µ^(b) and X, generate Ω^(w) from

\Omega | \mu = \mu^{(b)}, X \sim IW_p\big( n + p + 1, S(\mu^{(b)}) \big), \qquad (33)

where S(µ^(b)) is defined in (15).

(ii) Generate ξ^(w) from the χ²-distribution with d degrees of freedom independently of Ω^(w) and compute

\Psi^{(w)} = \frac{\xi^{(w)}}{d}\, \Omega^{(w)}.

(iii) Compute the Metropolis-Hastings ratio:

MH^{(b)} = \frac{\pi(\Psi^{(w)} | \mu^{(b)}, X)\, q_R(\Psi^{(b-1)} | \mu^{(b)})}{\pi(\Psi^{(b-1)} | \mu^{(b)}, X)\, q_R(\Psi^{(w)} | \mu^{(b)})}, \qquad (34)

where q_R(Ψ|µ) is given in (16) with f(·) as in (29).

(iv) Moving to the next state of the Markov chain: Generate U^(b) from the uniform distribution on [0, 1]. If U^(b) < min(1, MH^(b)), then set µ^(b) = µ^(w) and Ψ^(b) = Ψ^(w). Otherwise, set µ^(b) = µ^(b−1) and Ψ^(b) = Ψ^(b−1).

(4) Return to step (2), increase b by 1, and repeat until the sample of size B is accumulated.

When the Jeffreys prior is used, step (3.i) in the algorithm is changed to

(i) Using µ^(b) and X, generate Ω^(w) from

\Omega | \mu = \mu^{(b)}, X \sim IW_p\big( n + p + 2, S(\mu^{(b)}) \big), \qquad (35)

and q_R(Ψ|µ) should be replaced by q_J(Ψ|µ) in the computation of the Metropolis-Hastings ratio in (34).
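
The proposal steps (3.i)-(3.ii) of Algorithm 3 can be sketched as follows; the same Gupta and Nagar (2000) degrees-of-freedom convention as in the sketch after Algorithm 2 is assumed, so `df = n` in scipy's `invwishart` corresponds to n + p + 1.

```python
import numpy as np
from scipy.stats import chi2, invwishart

def propose_Psi_t(S, n, d, rng):
    """Steps (3.i)-(3.ii) of Algorithm 3: Psi^(w) = (xi/d) * Omega^(w) with
    Omega^(w) ~ IW_p(n + p + 1, S) and xi ~ chi-square with d degrees of freedom,
    drawn independently of Omega^(w)."""
    Omega = invwishart.rvs(df=n, scale=S, random_state=rng)   # draw from (33)
    xi = chi2.rvs(df=d, random_state=rng)
    return (xi / d) * Omega
```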

4 Simulation study
The aim of the simulation study is twofold: (i) first, we study the convergence properties of the Markov chains constructed by the Gibbs sampler algorithm and compare them with those obtained by using the two Metropolis-Hastings algorithms proposed in Bodnar and Bodnar (2023); (ii) second, the coverage properties of the credible intervals constructed for each parameter of the model are assessed.

To study the convergence properties of the Markov chains constructed by the three algorithms, we perform a simulation study for p = 2 and n = 10. The observation matrix X is drawn from the normal multivariate random effects model and from the t multivariate random effects model, where the elements of µ are generated from the uniform distribution on [1, 5]. We further set Ψ = τ²Ξ and U = diag(U_1, ..., U_n). The eigenvalues of Ξ, U_1, ..., U_n are drawn from the uniform distribution on [1, 4], while the eigenvectors are simulated from the Haar distribution (see, e.g., Muirhead (1982)). Finally, several values of τ are considered with τ² ∈ {0.25, 0.5, 0.75, 1, 2}. We refer to the two Metropolis-Hastings algorithms from Bodnar and Bodnar (2023) as Algorithm A and Algorithm B, while Algorithm C corresponds to the Gibbs sampler introduced in this paper. The average split-R̂ estimate based on the rank normalization is used to analyze the convergence properties of the constructed Markov chains, while the empirical coverage probability is employed to assess the coverage properties of the credible intervals determined for each parameter of the model.
Figure 1 presents the average values of the split-R̂ estimates based on the rank normalization as defined in Vehtari et al. (2021). The computations are based on constructing four Markov chains of length 10000 with a burn-in period of 5000 and on B = 5000 independent repetitions. This performance measure is suggested as a generalization of the R̂ coefficient used in Gelman et al. (2013) with the aim to define a measure which is robust to possible outliers that may be present in the constructed chains. In Gelman et al. (2013), it is recommended to conclude that the constructed Markov chain converges to its stationary distribution when the computed R̂ coefficient is smaller than 1.1. In the case of the split-R̂ estimates based on the rank normalization, Vehtari et al. (2021) proposed the usage of 1.01 instead of 1.1.
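
For reference, a minimal numpy/scipy sketch of the split-R̂ estimate based on the rank normalization is given below; it implements only the bulk version of the diagnostic described in Vehtari et al. (2021), and the folded (tail) variant is omitted.

```python
import numpy as np
from scipy.stats import norm, rankdata

def split_rhat_rank(chains):
    """Rank-normalized split-R-hat (bulk version) of Vehtari et al. (2021).

    chains : array of shape (M, N) holding M chains of N draws of a scalar parameter.
    """
    M, N = chains.shape
    half = N // 2
    # split each chain into two halves
    split = np.vstack([chains[:, :half], chains[:, half:2 * half]])       # (2M, half)
    # rank-normalize the pooled draws (average ranks for ties, then normal quantiles)
    ranks = rankdata(split, method="average").reshape(split.shape)
    z = norm.ppf((ranks - 3.0 / 8.0) / (split.size + 1.0 / 4.0))
    # classical split-R-hat computed on the rank-normalized draws
    n_draws = z.shape[1]
    B = n_draws * z.mean(axis=1).var(ddof=1)       # between-chain variance
    W = z.var(axis=1, ddof=1).mean()               # within-chain variance
    return np.sqrt(((n_draws - 1.0) / n_draws * W + B / n_draws) / W)
```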
In Figure 1 we observe that the Berger and Bernardo reference prior leads to smaller average values of the split-R̂ estimates based on the rank normalization in comparison to the Jeffreys prior when the observations are drawn from the normal multivariate random effects model, while the reverse relation is present in the case of the t multivariate random effects model. Moreover, independently of the employed prior, the algorithm used has only a minor impact on the split-R̂ coefficients in the case of the between-study covariance matrix Ψ. In contrast, the application of the Gibbs sampler leads to considerable improvements in the case of µ-estimation, where the split-R̂ coefficients are all close to one, independently of the value of τ and the distributional assumption of the multivariate random effects model. Finally, we note that the values obtained under the t multivariate random effects model are considerably smaller than those computed under the normal multivariate random effects model.

Figure 1: Average split-R̂ estimates based on the rank normalization computed from four Markov chains constructed by using the joint posterior of µ (first and second rows) and Ψ (third to fifth rows) given in Section 2. The observations are drawn from the normal multivariate random effects model (left column) and the t multivariate random effects model (right column) with p = 2 and n = 10.

Figure 2: Empirical coverage probabilities of the 95% probability symmetric univariate credible intervals determined for the parameters µ (first and second rows) and Ψ (third to fifth rows) by employing the Berger and Bernardo reference prior and the Jeffreys prior. The observations are drawn from the normal multivariate random effects model (left column) and the t multivariate random effects model (right column) with p = 2 and n = 10.

Figure 3: Empirical coverage probabilities of the 95% credible intervals as a function of β determined for the parameter Ψ11 by employing the Berger and Bernardo reference prior and the Jeffreys prior. The observations are drawn from the normal multivariate random effects model (left column) and the t multivariate random effects model (right column) with p = 2, n = 10 and τ² = 0.25 (first row), τ² = 0.5 (second row), τ² = 0.75 (third row), τ² = 1 (fourth row), τ² = 2 (fifth row).

Figure 2 presents the empirical coverage probabilities of the probability symmetric credible intervals determined for the parameters of the multivariate random effects model. Despite the small sample size n = 10 used in the construction of the credible intervals, the overall performance is good. Independently of the employed prior and the distributional assumption used in the data-generating model, the empirical coverage probabilities of the credible intervals for both components of the overall mean vector and the between-study covariance are always slightly above the desired level of 95%. Better results are present for the t multivariate random effects model. Furthermore, the application of the Jeffreys prior leads to credible intervals whose coverage probabilities are smaller and, in general, closer to the target value.
The situation is different when the credible intervals are determined for the between-study variances, the diagonal elements of Ψ. Especially when τ is small, the empirical coverage probability is considerably smaller than the desired level of 95%. These findings are in line with those documented in the univariate case. It is known that Bayesian procedures usually tend to overestimate the between-study variance, especially when the true between-study variance is small (see, e.g., Bodnar et al. (2017)). A possible explanation of this effect is the observation that the posterior of the between-study variance is usually skewed to the right and heavy-tailed. Moreover, since the posterior is skewed to the right, the symmetric credible intervals may not be a good choice in such a situation.
The issues with the credible intervals constructed for the between-study variances are further investigated in Figure 3, where the empirical coverage probabilities are reported for the credible intervals of the form [q_β, q_{1−α+β}] for β ∈ [0.0001, 0.05], with q_β being the β-quantile of the posterior distribution. When β = α/2, the probability symmetric credible intervals are obtained. We observe that the coverage probabilities decrease as β increases, independently of the chosen prior, the employed algorithm, or the distributional assumption used in the specification of the multivariate random effects model. The best results are achieved for β close to its lower bound. These findings are again in line with the results obtained in the univariate case, namely that the Bayesian procedures tend to overestimate the true value of the between-study variance. Interestingly, the application of the Berger and Bernardo reference prior leads to smaller values of the coverage probability under the normal multivariate random effects model, while the reverse relationship is present when the data are drawn from the t multivariate random effects model.
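
For reference, the quantile-based intervals [q_β, q_{1−α+β}] studied in Figure 3 can be computed from posterior draws as in the following sketch; β = α/2 recovers the probability symmetric intervals of Figure 2.

```python
import numpy as np

def credible_interval(draws, alpha=0.05, beta=None):
    """Credible interval [q_beta, q_{1-alpha+beta}] from a one-dimensional array of posterior draws."""
    if beta is None:
        beta = alpha / 2.0          # probability symmetric interval
    return np.quantile(draws, [beta, 1.0 - alpha + beta])
```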

5 Empirical illustration

In this section we apply the three numerical algorithms for drawing samples from the posterior distribution of µ and Ψ to real data, which consist of the results documented in ten studies designed to investigate the effectiveness of hypertension treatment for reducing blood pressure. The treatment effects on both the systolic blood pressure and the diastolic blood pressure were analyzed, where negative values indicate a beneficial effect of the treatment. The results of the ten studies together with the reported within-study covariance matrices are provided in Jackson et al. (2013) and they are also summarized in Table 1.

Study   X_{i;1} (SBP)   X_{i;2} (DBP)   √U_{i;11} (SBP)   ρ_{i;12} = U_{i;12}/√(U_{i;11} U_{i;22})   √U_{i;22} (DBP)
1       -6.66           -2.99           0.72              0.78                                        0.27
2       -14.17          -7.87           4.73              0.45                                        1.44
3       -12.88          -6.01           10.31             0.59                                        1.77
4       -8.71           -5.11           0.30              0.77                                        0.10
5       -8.70           -4.64           0.14              0.66                                        0.05
6       -10.60          -5.56           0.58              0.49                                        0.18
7       -11.36          -3.98           0.30              0.50                                        0.27
8       -17.93          -6.54           5.82              0.61                                        1.31
9       -6.55           -2.08           0.41              0.45                                        0.11
10      -10.26          -3.49           0.20              0.51                                        0.04

Table 1: Data from 10 studies about the effectiveness of hypertension treatment with the aim to reduce blood pressure. The variables X_{i;1} and X_{i;2} denote the treatment effects on the systolic blood pressure (SBP) and the diastolic blood pressure (DBP) from the ith study, while U_i = (U_{i;lj})_{l,j=1,2} is the corresponding within-study covariance matrix.
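
Since Table 1 reports the standard deviations √U_{i;11}, √U_{i;22} and the correlation ρ_{i;12} rather than the covariance matrices themselves, the observation matrix X and the matrices U_i can be assembled as in the following sketch (all numerical values are copied from Table 1).

```python
import numpy as np

# treatment effects (X_i;1, X_i;2) and within-study standard deviations/correlations from Table 1
effects = np.array([
    [-6.66, -2.99], [-14.17, -7.87], [-12.88, -6.01], [-8.71, -5.11], [-8.70, -4.64],
    [-10.60, -5.56], [-11.36, -3.98], [-17.93, -6.54], [-6.55, -2.08], [-10.26, -3.49],
])
sd_sbp = np.array([0.72, 4.73, 10.31, 0.30, 0.14, 0.58, 0.30, 5.82, 0.41, 0.20])
rho    = np.array([0.78, 0.45, 0.59, 0.77, 0.66, 0.49, 0.50, 0.61, 0.45, 0.51])
sd_dbp = np.array([0.27, 1.44, 1.77, 0.10, 0.05, 0.18, 0.27, 1.31, 0.11, 0.04])

X = effects.T                                             # p x n observation matrix (p = 2, n = 10)
U = [np.array([[s1**2,       r * s1 * s2],                # U_i;12 = rho_i;12 * sqrt(U_i;11 * U_i;22)
               [r * s1 * s2, s2**2]])
     for s1, r, s2 in zip(sd_sbp, rho, sd_dbp)]
```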

Bayesian multivariate meta-analysis is conducted for the data from Table 1 by assuming the normal multivariate random effects model and the t multivariate random effects model and by assigning the Berger and Bernardo reference prior and the Jeffreys prior to the model parameters. Using the two Metropolis-Hastings algorithms (Algorithm A and Algorithm B) of Bodnar and Bodnar (2023) and the Gibbs sampler (Algorithm C) introduced in Section 3, four Markov chains of length 10^5 are constructed with a burn-in period of the same length for each model assumption and each employed prior.
Figures 4 to 6 present the results obtained under the assumption of the normal multivariate random effects model, while Figures 7 to 9 depict the plots under the t multivariate random effects model. The figures provide the rank plots of posterior draws, which are obtained from the four constructed Markov chains for each algorithm and prior. If good mixing properties of the constructed Markov chains are present, then the plots should be similar to the histograms corresponding to the uniform distribution. As such, the algorithm with the best performance should result in the rank plot which is closest to the histogram of the uniform distribution. In the figures the results are provided in the case of µ1, Ψ11 and Ψ21, while the results for µ2 and Ψ22 are similar; they are available from the authors upon request.

The rank plots of posterior draws created in the case of the parameter µ1 by the hybrid Gibbs sampler are indistinguishable from the histogram corresponding to the uniform distribution, independently of whether the normal multivariate random effects model or the t multivariate random effects
Figure 4: Rank plots of posterior draws from four chains in the case of the parameter µ1 (SBP) of the normal multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Figure 5: Rank plots of posterior draws from four chains in the case of the parameter Ψ11 (SBP) of the normal multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Figure 6: Rank plots of posterior draws from four chains in the case of the parameter Ψ21 (DBP, SBP) of the normal multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Figure 7: Rank plots of posterior draws from four chains in the case of the parameter µ1 (SBP) of the t multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Figure 8: Rank plots of posterior draws from four chains in the case of the parameter Ψ11 (SBP) of the t multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Figure 9: Rank plots of posterior draws from four chains in the case of the parameter Ψ21 (DBP, SBP) of the t multivariate random effects model by employing the Jeffreys prior (first to third rows) and the Berger and Bernardo reference prior (fourth to sixth rows). The samples from the posterior distributions are drawn by Algorithm A (first and fourth rows), Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

model is fitted to the data. It is interesting that the performance of the two Metropolis-Hastings algorithms cannot be clearly ranked. While Algorithm A performs better under the Jeffreys prior, Algorithm B is better under the Berger and Bernardo reference prior. When the rank plots are constructed for the components of the between-study covariance matrix Ψ, then better results are achieved under the Jeffreys prior, while the differences in the plots related to the application of different algorithms to draw samples from the posterior distribution are not large. Finally, we note that better mixing properties of the constructed Markov chains are present under the t multivariate random effects model.

                                                   µ1       µ2       Ψ11      Ψ21      Ψ22
Jeffreys prior, Algorithm A
  normal distribution                              1.0064   1.0096   1.0071   1.0023   1.0027
  t-distribution                                   1.0014   1.0004   1.0053   1.0066   1.0028
Jeffreys prior, Algorithm B
  normal distribution                              1.0781   1.0438   1.0246   1.0168   1.0263
  t-distribution                                   1.0020   1.0023   1.0009   1.0012   1.0027
Jeffreys prior, Algorithm C
  normal distribution                              1.0000   1.0000   1.0097   1.0030   1.0051
  t-distribution                                   1.0002   1.0002   1.0030   1.0024   1.0030
Berger and Bernardo reference prior, Algorithm A
  normal distribution                              1.0058   1.0133   1.0951   1.0680   1.1080
  t-distribution                                   1.0009   1.0005   1.0050   1.0044   1.0026
Berger and Bernardo reference prior, Algorithm B
  normal distribution                              1.0196   1.0307   1.0458   1.0205   1.0308
  t-distribution                                   1.0010   1.0006   1.0017   1.0025   1.0039
Berger and Bernardo reference prior, Algorithm C
  normal distribution                              1.0000   1.0001   1.0379   1.0010   1.0035
  t-distribution                                   1.0004   1.0001   1.0033   1.0023   1.0027
Table 2: Split-R̂ estimates based on the rank normalization (see, Vehtari et al. (2021)) computed
for the normal multivariate random effects model and for the t multivariate random effects
model by employing the Berger and Bernardo reference prior and the Jeffreys prior. The
samples from the posterior distributions are drawn by Algorithm A (first and fourth rows),
Algorithm B (second and fifth rows) and Algorithm C (third and sixth rows).

Table 2 provides the values of the split-R̂ estimates based on the rank normalization, another robust measure for studying the mixing properties of the constructed Markov chains, as suggested in Vehtari et al. (2021). Values close to one indicate better mixing properties and, as such, the algorithm which produces the values of the split-R̂ estimate closest to one might be considered to be the approach with the best performance. The results provided in Table 2 are in line with the rank plots presented in Figures 4 to 9. The smallest values are obtained when the Gibbs sampler (Algorithm C) is used to draw samples from the posterior distribution, independently of the distributional assumption used in the definition of the multivariate random effects model. The application of the t multivariate random effects model reduces the split-R̂ estimates considerably. The exception is the case when Algorithm C is used for the parameter µ1, where very small increases are observed, and some cases corresponding to the parameters Ψ21 and Ψ22. Moreover, the values computed under the t multivariate random effects model are always below 1.01, the target value suggested in Vehtari et al. (2021).
Figure 10: Kernel density estimators of the marginal posterior for µ1 (SBP) of the normal multivariate random effects model (left column) and the t multivariate random effects model (right column), obtained for the data from Table 1 by employing the Berger and Bernardo reference prior and the Jeffreys prior and using Algorithms A, B and C to draw samples from the posterior distribution.

Figures 10-12 depict kernel density estimators of the marginal posteriors of µ1, Ψ11 and Ψ21 com-
puted under the normal multivariate random effects model (left column) and the t multivariate random
effects model (right column) when the Berger and Bernardo reference prior and the Jeffreys prior
are assigned to the model parameters. For each prior and each algorithm, a Markov chain of length
B = 110000 is generated, where the first 10000 observations are used as a burn-in period. From the
remaining 100000 observations, every 50th one is used in the construction of the plots.
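
A minimal sketch of this burn-in, thinning and kernel smoothing step is given below; the Gaussian kernel with its default bandwidth and the grid of 400 evaluation points are illustrative choices and need not coincide with the estimator used for Figures 10 to 12.

import numpy as np
from scipy.stats import gaussian_kde

def thinned_kde(chain, burn_in=10_000, thin=50):
    # chain: hypothetical 1-d array with the B = 110000 draws of one parameter
    kept = np.asarray(chain)[burn_in::thin]   # drop burn-in, keep every 50th draw
    kde = gaussian_kde(kept)                  # Gaussian kernel, default bandwidth
    grid = np.linspace(kept.min(), kept.max(), 400)
    return grid, kde(grid)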
Larger differences between the curves are observed in the case of the normal multivariate random
effects model, while the plots almost coincide under the assumption of the t-distribution. Moreover,
the kernel density estimators obtained under the t-distribution are smoother and less peaked around
the mode. Similarly, independently of the imposed distributional assumption, the application of the
Jeffreys prior leads to posteriors with higher peaks. While the posteriors for µ1 are roughly symmetric,
the marginal posteriors deduced for the elements of the between-study covariance matrix are skewed
to the right. Furthermore, the kernel densities constructed by using Algorithms A and B under the
assumption of normality have several modes in the case of µ1 and Ψ21, which is not the case when the
Markov chain is constructed by employing the Gibbs sampler (Algorithm C). Finally, we note that the
posteriors deduced under the Berger and Bernardo reference prior are flatter than those obtained
when the Jeffreys prior is used.

Figure 11: Kernel density estimators of the marginal posterior for Ψ11 (SBP) of the normal
multivariate random effects model (left column) and t multivariate random effects model (right
column), obtained for the data from Table 1 by employing the Berger and Bernardo reference
prior and the Jeffreys prior and using Algorithms A, B and C to draw a sample from the
posterior distribution.

Finally, the Bayesian point estimators together with the credible intervals obtained under the
three algorithms are compared in Tables 3 and 4 for the overall mean vector µ and the between-study
covariance matrix Ψ. Since the marginal posteriors for Ψ11, Ψ21 and Ψ22 are skewed to the right,
credible intervals of the form [qβ, q1−(α−β)], where qγ denotes the γ-quantile of the corresponding
marginal posterior, are constructed with α = 0.05 and β = 0.0001, motivated by the results presented
in Figure 3 and obtained for the between-study variance Ψ11. The lengths of this type of credible
interval appear to be considerably smaller than the lengths of the probability-symmetric credible
intervals. In the case of µ1 and µ2, probability-symmetric credible intervals are presented. For
completeness of the presentation, the estimators of these parameters obtained by employing the
methods of frequentist statistics are provided as well. The considered methods are the maximum
likelihood estimator, the restricted maximum likelihood estimator described in Gasparrini et al.
(2012), and the method of moments estimators from Jackson et al. (2013).
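
Both types of credible intervals can be computed directly from the retained posterior draws, as in the following sketch; the use of empirical quantiles is an illustrative choice and not necessarily the numerical routine employed for Tables 3 and 4.

import numpy as np

def shifted_credible_interval(draws, alpha=0.05, beta=0.0001):
    # [q_beta, q_{1-(alpha-beta)}]: typically shorter than the probability-
    # symmetric interval for right-skewed posteriors such as the one of Psi_11,
    # while still containing posterior probability 1 - alpha
    return np.quantile(draws, beta), np.quantile(draws, 1 - (alpha - beta))

def symmetric_credible_interval(draws, alpha=0.05):
    # probability-symmetric interval, used here for mu_1 and mu_2
    return np.quantile(draws, alpha / 2), np.quantile(draws, 1 - alpha / 2)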


Figure 12: Kernel density estimators of the marginal posterior for Ψ21 (DBP, SBP) of the normal
multivariate random effects model (left column) and t multivariate random effects model (right
column), obtained for the data from Table 1 by employing the Berger and Bernardo reference
prior and the Jeffreys prior and using Algorithms A, B and C to draw a sample from the
posterior distribution.

The results in Tables 3 and 4 are in line with the findings obtained in Figures 10 to 12. In
particular, wider credible intervals are obtained under the t random effects model when the Berger
and Bernardo reference prior is employed. This phenomenon becomes even more pronounced when
credible intervals for the elements of the between-study covariance matrix Ψ are constructed. Also,
the application of the Berger and Bernardo reference prior leads to larger values of the estimated
elements of the between-study covariance matrix, independently of the distributional assumption used
in the definition of the random effects model. Interestingly, despite the larger differences obtained for
the values of the between-study covariance matrix, the differences between the Bayesian estimators
constructed for the elements of the overall mean vector are not large, which is explained by the fact
that the whole posterior of Ψ is used when Bayesian inference procedures for µ are conducted. Finally,
it is noted that the confidence intervals obtained under the three frequentist approaches are considerably
narrower than the credible intervals delivered by the Bayesian procedures. This is explained by the
fact that the frequentist methods ignore the uncertainty related to the estimation of the between-study
covariance matrix, while the Bayesian approaches take it into account.

Normal random effects model t random effects model
µ1 (SBP) µ2 (DBP) µ1 (SBP) µ2 (DBP)
Jeffreys prior, Algorithm A
post. mean/med. -9.87/-9.93 -4.52/-4.50 -10.15/-10.08 -4.68/-4.65
post. sd. 0.98 0.58 1.10 0.59
cred. inter. [-11.95,-7.96] [-5.74,-3.56] [-12.61,-8.11] [-5.92,-3.64]
Jeffreys prior, Algorithm B
post. mean/med. -9.86/-9.89 -4.54/-4.54 -10.09/-10.05 -4.68/-4.67
post. sd. 0.97 0.55 1.13 0.63
cred. inter. [-11.90,-8.18] [-5.66,-3.48] [-12.42,-8.04] [-5.95,-3.49]
Jeffreys prior, Algorithm C
post. mean/med. -9.63/-9.61 -4.45/-4.44 -10.06/-10.00 -4.64/-4.65
post. sd. 1.03 0.57 1.13 0.62
cred. inter. [-11.74,-7.68] [-5.60,-3.40] [-12.39,-7.92] [-5.85,-3.38]
Berger and Bernardo reference prior, Algorithm A
post. mean/med. -9.84/-9.83 -4.56/-4.52 -10.10/-10.05 -4.66/-4.66
post. sd. 1.14 0.58 1.14 0.64
cred. inter. [-12.06,-7.32] [-5.77,-3.38] [-12.52,-7.87] [-5.91,-3.36]
Berger and Bernardo reference prior, Algorithm B
post. mean/med. -9.77/-9.63 -4.40/-4.39 -10.08/-9.99 -4.66/-4.63
post. sd. 1.07 0.63 1.18 0.67
cred. inter. [-12.18,-7.75] [-5.65,-3.17] [-12.66,-7.79] [-6.08,-3.34]
Berger and Bernardo reference prior, Algorithm C
post. mean/med. -9.66/-9.60 -4.45/-4.40 -10.02/-9.99 -4.64/-4.64
post. sd. 1.10 0.60 1.15 0.63
cred. inter. [-12.03,-7.58] [-5.74,-3.36] [-12.33,-7.86] [-5.93,-3.41]
ML, Gasparrini et al. (2012)
estimator -9.47 -4.41 – –
stand. error 0.68 0.44 – –
conf. inter. [-10.79, -8.14] [-5.26, -3.55] – –
REML, Gasparrini et al. (2012)
estimator -9.51 -4.43 – –
stand. error 0.73 0.47 – –
conf. inter. [-10.95, -8.07] [-5.35, -3.51] – –
Method of moments, Jackson et al. (2013)
estimator -9.17 -4.31 – –
stand. error 0.55 0.36 – –
conf. inter. [-10.26, -8.08] [-5.02, -3.60] – –

Table 3: Posterior mean, posterior median, posterior standard deviation and 95% credible
interval for the parameter µ of the multivariate random effects model obtained for the data
from Table 1 by employing the Berger and Bernardo reference prior and the Jeffreys prior. The
last three panels of the table include the results of the maximum likelihood estimator, the
restricted maximum likelihood estimator described in Gasparrini et al. (2012), and the method
of moments estimators from Jackson et al. (2013).

Normal random effects model t random effects model
Ψ11 (SBP) Ψ21 Ψ22 (DBP) Ψ11 (SBP) Ψ21 Ψ22 (DBP)
Jeffreys prior, Algorithm A
post. mean/med. 8.37/6.52 3.57/2.73 2.76/2.18 11.04/8.37 5.17/3.71 3.88/2.89
post. sd. 6.43 3.06 1.94 10.37 5.41 3.64
cred. inter. [2.56, 19.75] [-1.42,9.28] [0.76,6.43] [0.51,28.44] [-0.61,13.65] [0.23,9.79]
Jeffreys prior, Algorithm B
post. mean/med. 8.68/7.04 3.76/2.96 3.02/2.44 11.06/8.46 5.08/3.69 3.85/2.96
post. sd. 6.31 2.96 2.07 9.62 4.83 3.19
cred. inter. [2.96, 18.79] [-2.70, 8.82] [0.74,7.29] [0.71,27.53] [-2.49,13.63] [0.23,9.58]
Jeffreys prior, Algorithm C
post. mean/med. 7.51/6.11 3.20/2.32 2.60/1.99 11.37/8.42 5.21/3.75 3.91/3.01
post. sd. 5.85 2.81 1.85 10.34 5.07 3.31
cred. inter. [2.78,17.38] [-5.05,8.13] [0.68,5.94] [0.47,29.49] [-1.75,14.28] [0.17,9.93]
Berger and Bernardo reference prior, Algorithm A
post. mean/med. 10.41/7.72 4.47/3.48 3.61/2.76 13.04/9.39 6.09/4.12 4.62/3.31
post. sd. 9.13 4.09 2.88 13.54 6.77 4.33
cred. inter. [3.24,25.38] [-7.10,11.49] [0.82,9.36] [0.57,34.55] [-1.83,17.59] [0.19,12.36]
Berger and Bernardo reference prior, Algorithm B
post. mean/med. 8.95/6.22 4.03/3.60 4.18/3.04 12.77/9.39 5.79/4.20 4.30/3.45
post. sd. 8.36 3.71 3.04 12.03 5.74 3.62
cred. inter. [2.95,22.46] [-2.34,10.21] [0.71,9.14] [0.40,34.86] [-5.06,16.41] [0.17,11.15]
Berger and Bernardo reference prior, Algorithm C
post. mean/med. 9.06/6.39 3.62/2.42 2.96/2.24 12.38/9.05 5.71/4.00 4.40/3.33
post. sd. 8.11 4.04 2.74 13.48 6.64 4.65
cred. inter. [3.00,23.32] [-1.17,10.85] [0.61,7.80] [0.52,33.91] [-3.90,15.85] [0.12,11.43]
ML, Gasparrini et al. (2012)
estimator 3.29 1.51 1.57 – – –
REML, Gasparrini et al. (2012)
estimator 3.92 1.81 1.83 – – –
Method of moments, Jackson et al. (2013)
estimator 2.03 0.2 1.04 – – –

Table 4: Posterior mean, posterior median, posterior standard deviation and 95% credible
interval for the parameter Ψ of the multivariate random effects model obtained for the data
from Table 1 by employing the Berger and Bernardo reference prior and the Jeffreys prior. The
last three panels of the table include the results of the maximum likelihood estimator, the
restricted maximum likelihood estimator described in Gasparrini et al. (2012), and the method
of moments estimators from Jackson et al. (2013).

6 Summary
The multivariate random effects model is a common quantitative tool for pooling the results of
individual studies when several features are measured in each study. The model is widely used in
different fields of science. While the inference procedures for the parameters of the model were first
derived by the methods of frequentist statistics, Bayesian approaches have been gaining popularity.
Even though the joint posterior for the parameters of the multivariate random effects model is
available in analytical form, constructing Bayesian inference procedures is a challenging task due to
the complicated structure of the marginal posterior of the between-study covariance matrix. To deal
with this problem, two Metropolis-Hastings algorithms for drawing samples from the posterior
distribution were suggested in the literature. In the present paper, an alternative approach based on
the hybrid Gibbs sampler is proposed. Via a simulation study and an empirical illustration we showed
that the new numerical approach performs similarly to the existing ones, or even outperforms them,
in terms of the convergence properties of the constructed Markov chains, which are assessed by using
the rank plots and the split-R̂ estimates based on the rank normalization (see Vehtari et al. (2021)).
Moreover, the application of the t multivariate random effects model appears to provide more reliable
results in the conducted empirical study in comparison with the model based on the assumption of
normality.

Acknowledgement
Olha Bodnar acknowledges valuable support from the internal grant (Rörlig resurs) of Örebro
University. This research is a part of the project Statistical Models and Data Reductions to Estimate
Standard Atomic Weights and Isotopic Ratios for the Elements, and to Evaluate the Associated
Uncertainties (No. 2019-024-1-200), IUPAC (International Union of Pure and Applied Chemistry).

References
Barndorff-Nielsen, O. E. and Cox, D. R. (1994). Inference and Asymptotics. Chapman & Hall.
Berger, J. and Bernardo, J. M. (1992). On the development of reference priors. In Bernardo, J. M., Berger, J.,
Dawid, A. P., and Smith, A. F. M., editors, Bayesian Statistics, volume 4, pages 35–60. Oxford: University
Press.
Berger, J. O., Bernardo, J. M., and Sun, D. (2009). The formal definition of reference priors. The Annals of
Statistics, 37(2):905–938.
Bodnar, O. (2019). Non-informative Bayesian inference for heterogeneity in a generalized marginal random
effects meta-analysis. Theory of Probability and Mathematical Statistics, 100:7–23.
Bodnar, O. and Bodnar, T. (2023). Objective Bayesian meta-analysis based on generalized marginal multivariate
random effects model. Bayesian Analysis, (to appear).
Bodnar, O. and Eriksson, V. (2023). Bayesian model selection: Application to the adjustment of fundamental
physical constants. The Annals of Applied Statistics, (to appear).
Bodnar, O., Link, A., Arendacká, B., Possolo, A., and Elster, C. (2017). Bayesian estimation in random effects
meta-analysis using a non-informative prior. Statistics in Medicine, 36(2):378–399.
Bodnar, O., Link, A., and Elster, C. (2016). Objective Bayesian inference for a generalized marginal random
effects model. Bayesian Analysis, 11(1):25–45.
Chen, H., Manning, A. K., and Dupuis, J. (2012). A method of moments estimator for random effect multivariate
meta-analysis. Biometrics, 68(4):1278–1284.
Fraser, D. A. (2004). Ancillaries and conditional inference. Statistical Science, 19(2):333–369.
Gasparrini, A., Armstrong, B., and Kenward, M. (2012). Multivariate meta-analysis for non-linear and other
multi-parameter associations. Statistics in Medicine, 31(29):3821–3839.

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data
Analysis. CRC Press.
Ghosh, M., Reid, N., and Fraser, D. (2010). Ancillary statistics: A review. Statistica Sinica, 20(4):1309–1332.
Guolo, A. (2012). Higher-order likelihood inference in meta-analysis and meta-regression. Statistics in Medicine,
31(4):313–327.
Gupta, A. K. and Nagar, D. K. (2000). Matrix Variate Distributions. Chapman and Hall/CRC.
Gupta, A. K., Varga, T., and Bodnar, T. (2013). Elliptically Contoured Models in Statistics and Portfolio
Theory. Springer, New York.
Harville, D. A. (1997). Matrix Algebra from a Statistician’s Perspective. Springer, New York.
Jackson, D. and Riley, R. D. (2014). A refined method for multivariate meta-analysis and meta-regression.
Statistics in Medicine, 33(4):541–554.
Jackson, D., White, I. R., and Riley, R. D. (2013). A matrix-based method of moments for fitting the multivariate
random effects model for meta-analysis and meta-regression. Biometrical Journal, 55(2):231–245.
Jackson, D., White, I. R., and Riley, R. D. (2020). Multivariate meta-analysis. In Schmid, C. H., Stijnen, T.,
and White, I. R., editors, Handbook of Meta-Analysis, pages 163–186. CRC Press.
Jackson, D., White, I. R., and Thompson, S. G. (2010). Extending DerSimonian and Laird’s methodology to
perform multivariate random effects meta-analyses. Statistics in Medicine, 29(12):1282–1297.
Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proceedings of the Royal
Society A, 186:453–461.
Lambert, P. C., Sutton, A. J., Burton, P. R., Abrams, K. R., and Jones, D. R. (2005). How vague is vague? A
simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS. Statistics in
Medicine, 24(15):2401–2428.
Laplace, P. S. (1812). Théorie Analytique des Probabilités. Paris: Courcier.
Liu, D., Liu, R. Y., and Xie, M. (2015). Multivariate meta-analysis of heterogeneous studies using only summary
statistics: efficiency and robustness. Journal of the American Statistical Association, 110(509):326–340.
Magnus, J. R. and Neudecker, H. (2019). Matrix Differential Calculus with Applications in Statistics and
Econometrics. John Wiley & Sons.
Michael, H., Thornton, S., Xie, M., and Tian, L. (2019). Exact inference on the random-effects model for
meta-analyses with few studies. Biometrics, 75(2):485–493.
Muirhead, R. (1982). Aspects of Multivariate Statistical Theory. New York: Wiley.
Nam, I.-S., Mengersen, K., and Garthwaite, P. (2003). Multivariate meta-analysis. Statistics in Medicine,
22(14):2309–2333.
Negeri, Z. F. and Beyene, J. (2020). Robust bivariate random-effects model for accommodating outlying and in-
fluential studies in meta-analysis of diagnostic test accuracy studies. Statistical Methods in Medical Research,
29(11):3308–3325.
Noma, H., Maruo, K., Gosho, M., Levine, S. Z., Goldberg, Y., Leucht, S., and Furukawa, T. A. (2019). Efficient
two-step multivariate random effects meta-analysis of individual participant data for longitudinal clinical
trials using mixed effects models. BMC Medical Research Methodology, 19(1):33.
Norets, A. (2015). Bayesian regression with nonparametric heteroskedasticity. Journal of Econometrics,
185(2):409–419.
Paul, M., Riebler, A., Bachmann, L., Rue, H., and Held, L. (2010). Bayesian bivariate meta-analysis of diagnostic
test studies using integrated nested Laplace approximations. Statistics in Medicine, 29(12):1325–1339.

Reid, N. (1995). The roles of conditioning in inference. Statistical Science, 10(2):138–157.
Riley, R. D., Lambert, P. C., and Abo-Zaid, G. (2010). Meta-analysis of individual participant data: Rationale,
conduct, and reporting. BMJ, 340:c221.
Rukhin, A. L. (2007). Estimating common vector parameters in interlaboratory studies. Journal of Multivariate
Analysis, 98(3):435–454.
Rukhin, A. L. (2013). Estimating heterogeneity variance in meta-analysis. Journal of the Royal Statistical
Society: Ser. B, 75:451–469.
Schwarzer, G., Carpenter, J. R., and Rücker, G. (2015). Meta-Analysis with R. Springer.
Strawderman, W. E. and Rukhin, A. L. (2010). Simultaneous estimation and reduction of nonconformity in
interlaboratory studies. Journal of the Royal Statistical Society: Ser. B, 72:219–234.
Sundberg, R. (2019). Statistical Modelling by Exponential Families. Cambridge University Press.
Sutradhar, B. C. and Ali, M. M. (1989). A generalization of the Wishart distribution for the elliptical model
and its moments for the multivariate t model. Journal of Multivariate Analysis, 29(1):155–162.
Sutton, A. J. and Higgins, J. (2008). Recent developments in meta-analysis. Statistics in Medicine, 27(5):625–
650.
Thompson, M. and Ellison, S. L. (2011). Dark uncertainty. Accreditation and Quality Assurance, 16:483–487.
Toman, B., Fischer, J., and Elster, C. (2012). Alternative analyses of measurements of the Planck constant.
Metrologia, 49(4):567–571.
Turner, R. M., Jackson, D., Wei, Y., Thompson, S. G., and Higgins, J. (2015). Predictive distributions for
between-study heterogeneity and simple methods for their application in Bayesian meta-analysis. Statistics
in Medicine, 34(6):984–998.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., and Bürkner, P.-C. (2021). Rank-normalization, folding,
and localization: An improved R̂ for assessing convergence of MCMC (with discussion). Bayesian Analysis,
16(2):667–718.
Veroniki, A. A., Jackson, D., Bender, R., Kuss, O., Langan, D., Higgins, J. P., Knapp, G., and Salanti, G.
(2019). Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-
analysis. Research Synthesis Methods, 10(1):23–43.
Viechtbauer, W. (2007). Confidence intervals for the amount of heterogeneity in meta-analysis. Statistics in
Medicine, 26(1):37–52.
Wei, Y. and Higgins, J. P. (2013). Bayesian multivariate meta-analysis with multiple outcomes. Statistics in
Medicine, 32(17):2911–2934.
Wynants, L., Riley, R., Timmerman, D., and Van Calster, B. (2018). Random-effects meta-analysis of the
clinical utility of tests and prediction models. Statistics in Medicine, 37(12):2034–2052.
Zhao, J. and Mathew, T. (2018). Some point estimates and confidence regions for multivariate inter-laboratory
data analysis. Sankhya B, 80(1):147–166.
