November, 2010
Statutory Declaration (Eidesstattliche Erklärung)
Abstract
Keywords: Bayesian variable selection; spike and slab priors; independence prior; Zellner's g-prior; fractional prior; normal mixture of inverse gamma distributions; stochastic search variable selection; inefficiency factor.
Contents
List of Figures
List of Tables
Introduction
3.2 MCMC scheme for Dirac spikes
3.2.1 Marginal likelihood and posterior moments for a g-prior slab
3.2.2 Marginal likelihood and posterior moments for the fractional prior slab
6 Simulation study
6.1 Results for independent regressors
6.1.1 Estimation accuracy
6.1.2 Variable selection
6.1.3 Efficiency of MCMC
6.2 Results for correlated regressors
6.2.1 Estimation accuracy
6.2.2 Variable selection
6.2.3 Efficiency of MCMC
A Derivations
A.1 Derivation of the posterior distribution of β
A.2 Derivation of the full conditionals of β and σ²
A.3 Derivation of the ridge estimator
B R codes
B.1 Estimation and variable selection under the independence prior
B.2 Estimation and variable selection under Zellner's g-prior
B.3 Estimation and variable selection under the fractional prior
B.4 Estimation and variable selection under the SSVS prior
B.5 Estimation and variable selection under the NMIG prior
Literature
List of Figures
4.1 SSVS-prior
5.1 NMIG-prior
6.18 Correlated regressors: Box plot of SE of coefficient estimates, c=1
6.19 Correlated regressors: Box plot of coefficient estimates, c=0.25
6.20 Correlated regressors: Box plot of SE of coefficient estimates, c=0.25
6.21 Correlated regressors: Sum of SE of coefficient estimates
6.22 Correlated regressors: Box plots of the posterior inclusion probabilities, c=100
6.23 Correlated regressors: Box plots of the posterior inclusion probabilities, c=1
6.24 Correlated regressors: Box plots of the posterior inclusion probabilities, c=0.25
6.25 Correlated regressors: Plots of NDR and FDR
6.26 Correlated regressors: Proportion of misclassified effects
6.27 Correlated regressors: ACF of the posterior inclusion probabilities under the independence prior
6.28 Correlated regressors: ACF of the posterior inclusion probabilities under the NMIG prior
List of Tables
Introduction
In the statistical tradition many methods have been proposed for variable selection. Commonly used methods are backward, forward and stepwise selection, where in every step regressors are added to the model or eliminated from the model according to a precisely defined testing schedule. Information criteria like AIC and BIC are also often used to assess the trade-off between model complexity and goodness of fit of the competing models. More recently, penalty approaches have become popular, where coefficient estimation is accomplished by adding a penalty term to the likelihood function to shrink small effects to zero. Well-known methods are the LASSO of Tibshirani (1996) and ridge estimation.
Penalization approaches are very interesting from a Bayesian perspective since adding
a penalty term to the likelihood corresponds to the assignment of an informative prior
to the regression coefficients. A unifying overview of the relationship between Bayesian
regression model analysis and frequentist penalization can be found in Fahrmeir et al.
(2010).
In the Bayesian approach to variable selection, prior distributions representing the subjective beliefs about parameters are assigned to the regression coefficients. By applying Bayes' rule they are updated by the data and converted into the posterior distributions, on which all inference is based. The shape of the prior on the regression coefficients can influence the result of a Bayesian analysis. If the prime interest of the analysis is coefficient estimation, the prior should be located over the a-priori guess of the coefficient value. If however the main interest is in distinguishing between large and small effects, a useful prior concentrates mass around zero and spreads the rest over the parameter space. Such a prior expresses the belief that there are coefficients close to zero on the one hand and larger coefficients on the other hand. These priors can easily be constructed as a mixture of two distributions, one with a "spike" at zero and the other with mass spread over a wide range of plausible values. Priors of this type are called "spike" and "slab" priors. They are particularly useful for variable selection purposes because they allow classifying the regression coefficients into two groups: one group consisting of large, important, influential regressors and the other group of small, negligible, probably noise effects. Thus Bayesian variable selection is performed by classifying regressor coefficients rather than by shrinking small coefficient values to zero.
The aim of this master thesis is to analyze and compare five different spike-and-slab pro-
posals with regard to variable selection. The first three, independence prior, Zellner’s
g-prior and fractional prior, are called ”Dirac” spikes since the spike component consists
of a discrete point mass on zero. The others, SSVS prior and NMIG prior, are mixtures of
two continuous distributions with zero mean and different (a large and a small) variances.
To address the two components of the prior, for each coefficient a latent indicator variable
is introduced into the regression model. It indicates the classification of a coefficient to
one of the two components: the indicator variable has the value 1, if the coefficient is
assigned to the slab component of the prior, and 0 otherwise. To estimate the posterior distributions of the coefficients and the indicator variables, a Gibbs sampling scheme can be implemented for all five priors. Variable selection is then based on the posterior distribution of the indicator variable, which is estimated by the empirical frequencies of the values 1 and 0. The higher the posterior mean of the indicator variable, the stronger the evidence that the coefficient is different from zero and therefore has an impact on the response variable.
The master thesis is organized as follows: In chapter 1 an introduction to the Bayesian analysis of normal regression models using conjugate priors is given. Chapter 2 describes Bayesian variable selection using stochastic search variables and implements a basic Gibbs sampling scheme to perform model selection and parameter estimation. In chapter 3 spike and slab priors for variable selection, namely the independence prior, Zellner's g-prior and the fractional prior, are introduced. In chapters 4 and 5 spike and slab priors with a continuous spike component are studied: the stochastic search variable selection of George and McCulloch (1993), where the prior for a regression coefficient is a mixture of two normal distributions, and variable selection using normal mixtures of inverse gamma priors. Simulation studies in chapter 6 compare the presented approaches to variable selection with regard to the accuracy of coefficient estimation, variable selection properties and efficiency, for both independent and correlated regressors. Finally, results and issues arising during the simulations are discussed in chapter 7. The appendix summarizes derivations of formulas and the R code.
Chapter 1
In this chapter basic results of regression analysis are summarized and an introduction to Bayesian regression is given.
where y_i is the dependent variable and x_i = (x_{i1}, . . . , x_{ik}) is the vector of potentially explanatory covariables. µ is the intercept of the model and α = (α_1, . . . , α_k) are the regression coefficients to be estimated. ε_i is the error term, which captures all other unknown factors influencing the dependent variable y_i. In matrix notation model (1.1) is written as

y = µ1 + Xα + ε = X_1β + ε

where y is the N x 1 vector of the response variable, X is the N x k design matrix, β are the regression effects including the intercept, i.e. β = (µ, α), ε is the N x 1 error vector and X_1 denotes the design matrix (1, X). Without loss of generality we assume that the regressor columns x_1, . . . , x_k are centered.
∑_{i=1}^{N} (y_i − x_iβ̂)² → min    (1.2)

Assuming that X_1'X_1 is of full rank, the solution is given by
The estimator β̂_OLS has many desirable properties: it is unbiased and efficient in the class of linear unbiased estimators, i.e. it has the so-called BLUE property (Gauss-Markov theorem).
y ∼ N(X_1β; σ²I)    (1.4)

In this case the ML estimator β̂_ML of (1.4) coincides with β̂_OLS in (1.3) and is normally distributed:

β̂_ML ∼ N(β; σ²(X_1'X_1)^{-1})    (1.5)
Significance tests on the effects, e.g. whether an effect is significantly different from zero,
are based on (1.5). Further information about maximum likelihood inference in normal
regression models can be found in Fahrmeir et al. (1996).
1.2 The Bayesian normal linear regression model
In the Bayesian approach probability distributions are used to quantify uncertainty. Thus,
in contrast to the frequentist approach, a joint stochastic model for response and pa-
rameters (β, σ 2 ) is specified. The distribution of the dependent variable y is specified
conditional on the parameters β and σ 2 :
The analyst’s certainty or uncertainty about the parameter before the data analysis is
represented by the prior distribution for the parameters (β, σ 2 ). After observing the
sample data (yi , xi ), the prior distribution is updated by the empirical data applying
Bayes’ theorem,
p(β, σ²|y) = p(y|β, σ²) p(β, σ²) / ∫ p(y|β, σ²) p(β, σ²) d(β, σ²)    (1.7)
yielding the so called posterior distribution p(β, σ 2 |y) of the parameters (β, σ 2 ). Since
the denominator of (1.7) acts as a normalizing constant and simply scales the posterior
density, the posterior distribution is proportional to the product of likelihood function
and prior. The posterior distribution usually represents less uncertainty than the prior
distribution, since evidence of the data is taken into account. Bayesian inference is based
only on the posterior distribution. Basic statistics like mean, mode, median, variance and
quantiles are used to characterize the posterior distribution.
One of the most substantial aspects of a Bayesian analysis is the specification of appro-
priate prior distributions for the parameters. If the prior distribution for a parameter is
chosen so that the posterior distribution follows the same distribution family as the prior,
the prior distribution is said to be the conjugate prior of the likelihood. Conjugate priors
ensure that the posterior distribution is a known distribution that can be easily derived.
where the conditional prior for the parameter vector β is the multivariate Gaussian dis-
tribution with mean b0 and covariance matrix σ 2 B0 :
and the prior for σ² is the inverse gamma distribution with hyperparameters s_0 and S_0:
This expression can be simplified, see Appendix A. It turns out that the joint posterior of β and σ² can be split into two factors, being proportional to the product of a multivariate normal distribution and an inverse gamma distribution:

with parameters

B_N = (X_1'X_1 + B_0^{-1})^{-1}    (1.13)

b_N = B_N(X_1'y + B_0^{-1}b_0)    (1.14)

s_N = s_0 + N/2    (1.15)

S_N = S_0 + 1/2 (y'y + b_0'B_0^{-1}b_0 − b_N'B_N^{-1}b_N)    (1.16)
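A minimal R sketch of these posterior moments; the function name and arguments are illustrative, and X1 is assumed to already contain the intercept column.

# Posterior moments (1.13)-(1.16) of the conjugate normal / inverse gamma prior.
posterior_moments <- function(y, X1, b0, B0, s0, S0) {
  BN <- solve(t(X1) %*% X1 + solve(B0))                        # (1.13)
  bN <- BN %*% (t(X1) %*% y + solve(B0) %*% b0)                # (1.14)
  sN <- s0 + length(y) / 2                                     # (1.15)
  SN <- S0 + 0.5 * drop(sum(y^2) + t(b0) %*% solve(B0) %*% b0 -
                        t(bN) %*% solve(BN) %*% bN)            # (1.16)
  list(bN = bN, BN = BN, sN = sN, SN = SN)
}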
If the error variance σ 2 is integrated out from the joint prior distribution of β and σ 2 , the
resulting unconditional prior for β is proportional to a multivariate Student distribution
with 2s0 degrees of freedom, location parameter b0 and dispersion matrix S0 /s0 B0 , see
e.g. Fahrmeir et al. (2007):
βi |σ 2 ∼ N (b0i , σ 2 B0ii )
especially the variance parameter B0ii expresses the scientist’s level of uncertainty about
the parameter’s location b0i . If prior information is scarce, a large value for the variance
parameter B0ii should be chosen, so that the prior distribution is flat. In this case coeffi-
cient values far away from the mean b0i are assigned a reasonable probability and the exact
specification of b0i is of minor significance. If, at the extreme, the variance becomes infinite, every value in the parameter space has the same density and the analyst claims complete ignorance about the coefficient's location. This type of prior is called a "noninformative prior".
On the other hand, if the analyst has considerable information about the coefficient βi ,
he should choose a small value for the variance parameter B0ii . If a high probability is
assigned to values close to the mean b0i , information in the data has to be very large to
result in a posterior mean far away from b0i .
Choice of the prior parameters b0 , B0 of the prior distribution has an impact on the
posterior mean
b_N = B_N(X_1'y + B_0^{-1}b_0),  where  B_N = (X_1'X_1 + B_0^{-1})^{-1}.

If the prior information is vague, the prior covariance matrix B_0 should be a matrix with large values representing the uncertainty about the location b_0. The posterior covariance matrix σ²B_N is then approximately σ²(X_1'X_1)^{-1} and the mean b_N ≈ (X_1'X_1)^{-1}X_1'y, which means that vague prior information leads to a posterior mean close to the OLS or ML estimator. If on the other hand the prior information about the coefficient vector is very strong, the prior covariance matrix B_0 should contain small values. This yields the posterior covariance matrix σ²B_N ≈ σ²B_0 and the mean b_N ≈ b_0, and the Bayesian estimator is close to the prior mean.
quadratic loss (mean squared error, MSE):
The ith diagonal element of MSE is the mean squared error of the estimate of β_i:
As mean squared error is a function of both bias and variance, it seems reasonable to
consider biased estimators which, however, considerably reduce variance.
A case where the variance of β̂ can assume very large values is when columns of the data matrix are collinear. In this case the inverse of X_1'X_1 can have very large entries, leading to high values of β̂ and Var(β̂). To regularise estimation, a penalty function penalizing large values of β can be included in the goal function. If the penalty is λβ'β, the so-called ridge estimator results:

β̂_ridge = argmin_β ((y − X_1β)'(y − X_1β) + λβ'β)    (1.19)
        = (X_1'X_1 + λI)^{-1} X_1'y
See Appendix A for details. The ridge estimator is a biased estimator for β, but its
variance and also its MSE can be smaller than that of the OLS estimator, see Toutenburg
(2003). The ridge estimator depends on a tuning parameter λ which controls the inten-
sity of the penalty term. If λ=0, the common β̂ OLS is obtained. With increasing λ, the
influence of the penalty in the goal function grows: the fit on the data becomes weaker
and the constraint on β dominates estimation.
Imposing a restriction on the parameter β in (1.19) also has a side effect: constraining the parameter vector β to lie around the origin causes a "shrinkage" of the parameter estimates. The amount of shrinkage is controlled by λ: as λ goes to zero, β̂_ridge approaches β̂_OLS; as λ increases, β̂_ridge approaches 0. Shrinkage is of interest for variable selection problems: ideally, small true coefficients are shrunk to zero and the resulting models are simpler, including no regressors with small effects.
The ridge estimator can be interpreted as a Bayes estimator. If the prior for β is specified as

p(β|σ²) = N(0, cσ²I)

i.e. b_0 = 0 and B_0 = cI, the posterior mean is

b_N = (X_1'X_1 + B_0^{-1})^{-1}(X_1'y + B_0^{-1}b_0) = (X_1'X_1 + (1/c)I)^{-1}X_1'y

which is exactly the ridge estimator from (1.19) with λ = 1/c. This means that choosing a prior for β causes regularization and shrinkage of the estimate of β. The tuning parameter c = 1/λ controls the size of the coefficients and the amount of regularisation. However, the covariance of this estimator is σ²B_N = σ²(X_1'X_1 + (1/c)I)^{-1} in the Bayesian interpretation and σ²(X_1'X_1 + (1/c)I)^{-1}X_1'X_1(X_1'X_1 + (1/c)I)^{-1} in the classical interpretation. For more details on the relationship between regularisation and Bayesian analysis see Fahrmeir et al. (2010).
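The equivalence can be verified numerically. The following R sketch uses simulated data and illustrative names; it compares the ridge estimator with penalty λ to the posterior mean under the prior N(0, cσ²I) with b_0 = 0 and c = 1/λ.

# Ridge estimator vs. Bayes posterior mean with B0 = c*I, b0 = 0 and lambda = 1/c.
set.seed(1)
N <- 40; k <- 3
X1     <- cbind(1, matrix(rnorm(N * k), N, k))      # design matrix with intercept column
y      <- X1 %*% c(1, 2, 0.2, 0) + rnorm(N)
lambda <- 2
c_par  <- 1 / lambda
beta_ridge <- solve(t(X1) %*% X1 + lambda * diag(k + 1)) %*% t(X1) %*% y
beta_bayes <- solve(t(X1) %*% X1 + diag(k + 1) / c_par) %*% t(X1) %*% y
all.equal(beta_ridge, beta_bayes)                   # TRUE: identical estimators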
Although under a conjugate prior for (β, σ²) the posterior p(β, σ²|y) is available in closed form, we will describe a Gibbs sampling scheme to sample from the posterior distribution. This Gibbs sampling scheme is the basic algorithm which will be extended later to allow variable selection.
A Gibbs sampler is an MCMC (Markov Chain Monte Carlo) method to generate a se-
quence of samples from the joint posterior distribution by breaking it down into more
manageable univariate or multivariate distributions. To implement a Gibbs sampler for
the posterior distribution p(β, σ 2 |y) the parameter vector is split into two blocks β and
σ 2 . Then random values are drawn from the conditional distributions p(β|σ 2 , y) and
p(σ²|β, y) alternately, where in each step a draw from the conditional posterior given the current value of the other parameter is produced. By Markov chain theory, after a burn-in period the sampled values can be regarded as realisations from the marginal distributions p(β|y) and p(σ²|y). The conditional distribution p(β|σ², y) is the normal distribution N(b_N, B_N σ²). For the conditional distribution of σ² given β and y
we obtain (see appendix A for details)
p(σ²|β, y) = G^{-1}(s*_N, S*_N)    (1.20)
with
Sampling from the posterior is feasible by a two-block Gibbs sampler. After assigning
starting values to the parameters, the following steps are repeated:
(1) sample σ² from G^{-1}(s*_N, S*_N); the parameters s*_N, S*_N are given in (1.21) and (1.22)
(2) sample β from N(b_N, B_N σ²); the parameters b_N, B_N are given in (1.13) and (1.14)
Alternatively, we could sample from the posterior using a one-block Gibbs sampler, where σ² and β are sampled in one step:
(1a) sample σ² from G^{-1}(s_N, S_N) with parameters s_N, S_N given in (1.15) and (1.16)
(1b) sample β|σ² from N(b_N, B_N σ²) with parameters b_N, B_N given in (1.13) and (1.14).
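A minimal R sketch of the two-block sampler above under the conjugate prior; the full conditional of σ² is written out according to the standard conjugate derivation and should be checked against (1.21), (1.22) in Appendix A. All names are illustrative.

# Two-block Gibbs sampler for the conjugate normal linear regression model.
gibbs_two_block <- function(y, X1, b0, B0, s0, S0, M = 1000) {
  N <- length(y); d <- ncol(X1)
  B0inv <- solve(B0)
  BN <- solve(t(X1) %*% X1 + B0inv)                  # (1.13)
  bN <- BN %*% (t(X1) %*% y + B0inv %*% b0)          # (1.14)
  beta <- rep(0, d)
  draws <- matrix(NA, M, d + 1)
  for (m in 1:M) {
    # step (1): sigma^2 | beta, y ~ inverse gamma (assumed form of (1.21), (1.22))
    res    <- y - X1 %*% beta
    sN_st  <- s0 + (N + d) / 2
    SN_st  <- S0 + 0.5 * drop(sum(res^2) + t(beta - b0) %*% B0inv %*% (beta - b0))
    sigma2 <- 1 / rgamma(1, shape = sN_st, rate = SN_st)
    # step (2): beta | sigma^2, y ~ N(bN, BN * sigma^2)
    beta <- drop(bN + t(chol(BN * sigma2)) %*% rnorm(d))
    draws[m, ] <- c(beta, sigma2)
  }
  draws
}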
Chapter 2
The goal of variable selection in regression models is to decide which variables should be
included in the final model. In practical applications often a large number of regressors
could be used to model the response variable. However, usually only a small subset of
these potential regressors actually have an influence on the response variable, whereas
the effect of most covariates is very small or even zero. Variable selection methods try
to identify regressors with nonzero and zero effects. This is a challenging task, see e.g.
O’Hara and Sillanpää (2009) who summarize the variable selection problem as follows:
”In any real data set, it is unlikely, that the true regressor coefficients are either zero
or large; the sizes are more likely to be tapered towards zero. Hence, the problem is
not one of finding the zero coefficients, but of finding those that are small enough to be
insignificant, and shrinking them towards zero”. The stochastic search variable approach
to differentiate between regressors with (almost) zero and nonzero effects proceeds by
introducing auxiliary indicator variables: for each regressor a binary variable is defined
indicating whether the effect of that regressor is zero or nonzero. A Gibbs sampling scheme
can be implemented to generate samples from the joint posterior distribution of indicator
variables and regression coefficients. The posterior probability that an indicator variable
takes the value 1 can be interpreted as the posterior inclusion probability of the corresponding regressor, i.e. the probability that this regressor should be included in the final model.
2.1 Bayesian Model Selection
Variable selection is a special case of model selection. Model selection means to choose
the ”best” model for the data y from a set of candidate models M1 , M2 , M3 .... In the
Bayesian approach a prior probability p(Ml ) is assigned to each candidate model Ml and
the posterior probability of the model Ml is obtained by applying Bayes’ rule:
The probability p(y|Ml ) is called the marginal likelihood of the model Ml and it is the
key quantity in the equation above. It is the density of the sample given only the model
structure without any information on specific parameters. It is computed by integrating
out the parameters of the model:
Z
p(y|Ml ) = p(y|θl , Ml ) p(θl |Ml ) dθl
θl | {z } | {z }
likelihood prior
For every candidate model Ml , the posterior model probability p(Ml |y) has to be com-
puted and the model with the highest posterior probability is the favorite model. For
the normal linear regression model under conjugate priors this integration can be solved
analytically, and the marginal likelihood is given as
p(y|M) = ∫∫ p(y|δ, β, σ²) p(β, σ²|δ) dβ dσ²

where the parameters s_N, S_N and B_N are given in (1.13) to (1.16). If selection among competing regression models is considered, choosing a model M_l means choosing a set of regressor variables, usually a subset of x_1, . . . , x_k, which corresponds to M_l. However, in a regression model with k regressors, considering all models with all possible subsets of regressors is a computational challenge even if k is moderate: for k = 30 already 2^30 ≈ 1.1 · 10^9 marginal likelihoods would be required. Therefore, a different approach is to perform a stochastic search for models with high posterior probability using MCMC methods. This approach is presented in the following section.
2.2 Model selection via stochastic search variables
To perform a stochastic search for models with high posterior probability, an indicator variable δ_j is defined for each regressor coefficient α_j, where δ_j = 1 or δ_j = 0 represents inclusion or exclusion of the regressor in the model:

δ_j = 0 if α_j = 0,  δ_j = 1 otherwise
The vector δ = (δ_1, . . . , δ_k) determines the model, i.e. it contains the information which elements of α are included in the model and which are restricted to be zero. Each of the 2^k candidate models is represented by a realisation of δ, and model selection means choosing a value of δ. Once a value of δ is chosen, the following reduced normal regression model is obtained (in matrix notation, with β = (µ, α)):
where β δ contains only the nonzero elements of β and the design matrix Xδ1 only the
columns of X1 corresponding to nonzero effects. The intercept does not have an indicator
variable as µ is included in the model in any case.
In representation (2.2) model parameters are the model indicator δ, the regressor effects
β and the error variance σ 2 . In the Bayesian framework the goal is to derive the posterior
density of all the parameters δ, β and σ 2 :
As δj is a binary variable, a straightforward choice of a prior for δj is
p(δj = 1) = π, j = 1, . . . , k
where π is a fixed inclusion probability between 0 and 1. For σ 2 and those effects of β
which are not restricted to be zero, β δ , we use conjugate priors:
σ 2 ∼ G −1 (s0 , S0 )
β δ |σ 2 , δ ∼ N (bδ0 , B0 δ σ 2 )
A naive approach for a Gibbs sampler to draw from the joint posterior p(β, σ 2 , δ|y) would
be to add a further step to the algorithm described in section 1.2.3:
However, this scheme does not define an irreducible Markov chain: whenever δj = 0
also αj = 0 and hence the chain has absorbing states. This problem can be avoided
by sampling δ from the marginal posterior distribution p(δ|y). Formally, the marginal
posterior distribution is given as
p(δ|y) ∝ p(y|δ)p(δ)
where p(y|δ) is the marginal likelihood from the likelihood of the regression model with
regressors Xδ1 , where the effects β δ and error variance σ 2 are integrated out:
p(y|δ) = ∫∫ p(y|δ, β, σ²) p(β, σ²|δ) dβ dσ²

s_N = s_0 + N/2    (2.5)

S_N = S_0 + 1/2 (y'y + (b_0^δ)'(B_0^δ)^{-1}b_0^δ − (b_N^δ)'(B_N^δ)^{-1}b_N^δ)    (2.6)
To sample from the posterior p(δ, β, σ 2 |y) a one block Gibbs sampler can be implemented,
which involves the following steps:
(2) Sample σ²|δ from p(σ²|y, δ), which is the inverse gamma distribution G^{-1}(s_N^δ, S_N^δ).

(3) Sample β^δ|σ² in one block from p(β|y, δ, σ²). This conditional posterior is the normal distribution N(b_N^δ, B_N^δ σ²).

In this scheme all model parameters are sampled jointly. However, model selection could be performed without parameter estimation by iterating only step 1.
If the number of regressors k is large, the sampling scheme above is computationally too expensive because the calculation of the marginal likelihoods of all 2^k models is required in each iteration. A faster sampling scheme can be implemented, which updates each component δ_j conditional on the current value δ_{\j} of the other elements of δ. Thus for each component only two marginal likelihoods (for δ_j = 0 and δ_j = 1) have to be computed, and one iteration step therefore requires the evaluation of only 2k marginal likelihoods. This leads to the following sampling scheme (a sketch of the component-wise update is given after the list):

(1) sample each element δ_j of the indicator vector δ separately conditional on δ_{\j}. This step is carried out by first choosing a random permutation of the column numbers 1, . . . , k and updating the elements of δ in this order.

(3) sample the non-zero elements β^δ in one block conditional on σ² from N(b_N^δ, B_N^δ σ²).
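A minimal R sketch of the component-wise update in step (1); it assumes user-supplied functions log_ml(delta), returning the log marginal likelihood p(y|δ) (e.g. one of the expressions of section 3.2), and log_prior(delta) for the model prior. Both names are illustrative.

# One sweep over the indicator vector delta in random order.
update_delta <- function(delta, log_ml, log_prior) {
  k <- length(delta)
  for (j in sample(1:k)) {                 # random permutation of 1,...,k
    d1 <- replace(delta, j, 1)
    d0 <- replace(delta, j, 0)
    l1 <- log_ml(d1) + log_prior(d1)
    l0 <- log_ml(d0) + log_prior(d0)
    p1 <- 1 / (1 + exp(l0 - l1))           # p(delta_j = 1 | delta_-j, y)
    delta[j] <- rbinom(1, 1, p1)
  }
  delta
}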
We have assumed that the columns of the design matrix X are centered and therefore
orthogonal to the 1 vector. In this case the intercept is identical for all models. Whereas
model selection cannot be performed using improper priors on the effects subject to selec-
tion, Kass and Raftery (1995) showed that model comparison can be performed properly
when improper priors are put on parameters common to all models. Therefore, a flat prior for the intercept µ and an improper prior for the error variance σ² can be assumed:
p(µ) ∝ 1    (2.7)

p(σ²) ∝ 1/σ²,  i.e. σ² ∼ G^{-1}(0, 0)    (2.8)
An advantage of the improper prior on the intercept is that in the case of centered covariates the mean µ can be sampled independently of the other regressor effects. If the flat prior on the intercept is combined with a proper prior N(a_0^δ, A_0^δσ²) on the unrestricted elements of α^δ, a so-called partially proper prior is obtained. In chapter 3 different proposals for selecting the prior parameters a_0, A_0 are described.
µ̂ = (1/M) ∑_{m=1}^{M} µ^{(m)}    (2.9)

β̂ = (1/M) ∑_{m=1}^{M} β^{(m)}    (2.10)

σ̂² = (1/M) ∑_{m=1}^{M} (σ²)^{(m)}    (2.11)

δ̂_j = (1/M) ∑_{m=1}^{M} δ_j^{(m)},   j = 1, . . . , k    (2.12)
It should be remarked that the estimates of µ, α and σ 2 are averages over different re-
gression models as during the MCMC procedure they are drawn conditional on different
model indicator variables δ. This is known as ”Bayesian model averaging”.
The posterior probability p(δ_j = 1|y) for the regressor x_j to be included in the model can be estimated by the mean δ̂_j or alternatively by the mean of the conditional inclusion probabilities computed in each iteration. To select the final model one of the following choices is possible:
• Selection of the median probability model: those regressors are included in the final
model which have a posterior probability greater than 0.5.
• Selection of the highest probability model: the highest probability model is the model indicated by that δ which has occurred most often during the MCMC iterations.

It has been shown by Barbieri and Berger (2004) that the median probability model outperforms the highest probability model with regard to predictive optimality.
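Both rules can be read off directly from the sampled indicators. The following R sketch assumes an M x k 0/1 matrix delta_draws (one row per MCMC iteration); all names are illustrative.

# Median probability model and highest probability model from indicator draws.
select_models <- function(delta_draws) {
  incl_prob    <- colMeans(delta_draws)                 # estimated p(delta_j = 1 | y)
  median_model <- which(incl_prob > 0.5)                # median probability model
  labels       <- apply(delta_draws, 1, paste, collapse = "")
  hpm_label    <- names(which.max(table(labels)))       # most frequently visited model
  hpm          <- which(strsplit(hpm_label, "")[[1]] == "1")
  list(inclusion = incl_prob, median_model = median_model, highest_prob_model = hpm)
}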
Chapter 3
Here we consider spikes given by a point mass at zero (Dirac spike) combined with normal slabs:

Depending on the value of δ_j, the coefficient is assumed to belong to either the spike or the slab distribution. Spike and slab priors allow classification of the regressor coefficients into "zero" and "non-zero" effects by analyzing the posterior inclusion probabilities, estimated by the means of the indicator variables. The priors presented in chapters 4 and 5 are spike and slab priors as well, but there the Dirac spike is replaced by an absolutely continuous density with mean zero and small variance.
We consider specifications with different prior parameters a0 and A0 of the N (a0 δ , Aδ0 σ 2 )-
slab:
• fractional prior: a_0^δ = ((X^δ)'(X^δ))^{-1}(X^δ)'y_c,  A_0^δ = (1/b)((X^δ)'(X^δ))^{-1}
where c, g, 1/b are constants and yc is the centered response. All priors considered here
are still conjugate priors and allow straightforward computation of the posterior model
probability.
To achieve more flexibility than using a fixed prior inclusion probability p(δj = 1) = π, we
use a hierarchical prior where the inclusion probability follows a Beta distribution with
parameters c0 and d0 :
π ∼ B(c0 , d0 )
Then the induced prior for the indicator vector δ, obtained by integrating out π, involves the Beta function:

p(δ) = B(p_δ + c_0, k − p_δ + d_0) / B(c_0, d_0)

where p_δ is the number of non-zero elements in δ and k is the number of covariates. If c_0 and d_0 equal 1, the prior is uninformative, but an informative prior could also be used to model prior information. Ley and Steel (2007) have shown that the hierarchical prior outperforms the prior with fixed inclusion probabilities.
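A small R sketch of the induced model prior, using the Beta function (lbeta in base R); the function name is illustrative and c_0 = d_0 = 1 reproduces the uninformative case.

# log p(delta) under the hierarchical Beta prior on the inclusion probability.
log_prior_delta <- function(delta, c0 = 1, d0 = 1) {
  k  <- length(delta)
  pd <- sum(delta)                                  # number of nonzero elements
  lbeta(pd + c0, k - pd + d0) - lbeta(c0, d0)
}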
3.1.1 Independence slab
In the simplest case of a noninformative prior for the coefficients, each regressor effect is assumed to follow the same distribution and to be independent of the other regressors: the resulting prior for a regression coefficient is a mixture of a (flat) normal distribution (for δ_j = 1) and a point mass at zero, the spike (for δ_j = 0). In figure (3.1) the contour plot of the independence prior for 2 regressors is shown; the blue point at 0 marks the discrete point mass.
The g-prior introduced by Zellner (1986) is the most popular prior slab used for model
estimation and selection in regression models, see e.g. in Liang et al. (2008) and Maruyama
and George (2008). Like the independence prior the g-prior assumes that the effects are
a priori centered at zero, but the covariance matrix A0 is a scalar multiple of the Fisher
information matrix, thus taking the dependence structure of the regressors into account:
p(α^δ|σ²) = N(0, g((X^δ)'(X^δ))^{-1}σ²)
Popularity of the g-prior is at least partly due to the fact that it leads to simplifications,
e.g. in evaluating the marginal likelihood, see Liang et al. (2008). The marginal likelihood
can be expressed as a function of the coefficient of determination in the regression model,
evaluation of determinants of prior and posterior covariances for the regressor effects is
not necessary. Marginal likelihood and posterior moments are given in section 3.2.1.
Recently shrinkage properties of the g-prior were discussed, e.g. by George and Maruyama (2010), who criticized that "the variance is put in wrong place" and proposed a modified version of the g-prior. Following Brown et al. (2002), who compare the shrinkage properties of the g-prior and the independence prior, we consider the singular value decomposition of X, X = TΛ^{1/2}V', where T and V are orthonormal matrices and X'X is assumed to have full rank.
The normal regression model

y = Xβ + ε,  ε ∼ N(0, σ²I)

can be transformed to

u = Λ^{1/2}θ + ε̃,  ε̃ = T'ε ∼ N(0, σ²I)

where u = T'y and θ = V'β.
The independence prior β ∼ N(0, cσ²I) induces θ ∼ N(0, cσ²I). As X'X = VΛV', the g-prior β ∼ N(0, gσ²(X'X)^{-1}) induces θ ∼ N(0, gσ²Λ^{-1}). Under a normal prior
θ ∼ N(0, D_0σ²), the posterior mean of θ is given as (see (1.13) and (1.14)):

E(θ|u) = ((Λ^{1/2})'Λ^{1/2} + D_0^{-1})^{-1} Λ^{1/2}u
       = (Λ + D_0^{-1})^{-1} Λ^{1/2}u
       = (I + Λ^{-1}D_0^{-1})^{-1} θ̂_OLS

since θ̂_OLS = ((Λ^{1/2})'Λ^{1/2})^{-1} Λ^{1/2}u. Therefore under the independence prior the mean of θ_i is given as

E(θ_i|u) = λ_i/(λ_i + 1/c) · θ̂_i    (3.3)
whereas under the g-prior the mean of θ_i is given as

E(θ_i|u) = g/(g + 1) · θ̂_i    (3.4)
Comparing the shrinkage factors in (3.3) and (3.4), it can be seen that under the independence prior the amount of shrinkage depends on the eigenvalue λ_i: as the eigenvalue increases, the shrinkage factor increases to 1, meaning that shrinkage disappears and the posterior mean approximates the ML estimate. In directions of small eigenvalues, on the other hand, the ML estimates are shrunk towards zero. In contrast, the posterior mean under the g-prior (3.4) is shrunk equally in the directions of all eigenvalues, which is an undesirable effect. In figure (3.3) the posterior mean under the independence prior and the g-prior is plotted for different eigenvalues.
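The two shrinkage factors are easy to compare numerically; the following R lines use the parameter values of figure (3.3) for illustration.

# Shrinkage factors (3.3) and (3.4) for two eigenvalues.
lambda <- c(0.5, 10)                             # eigenvalues of X'X
c_par  <- 1                                      # independence-prior parameter c
g      <- 40                                     # g-prior parameter
lambda / (lambda + 1 / c_par)                    # (3.3): eigenvalue-dependent shrinkage
g / (g + 1)                                      # (3.4): equal shrinkage in all directions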
Figure 3.3: Left: Independence prior: posterior mean of the coefficient estimate in directions of different eigenvalues, λ_i = 0.5 (dashed line) and λ_i = 10 (dotted line); parameter c=1. Right: G-prior: the posterior mean shrinks the ML estimate equally in all eigenvalue directions; parameter g=40.
The basic idea of the fractional prior introduced by O’Hagan (1995) is to use a fraction b
of the likelihood of the centered data yc to construct a proper prior under the improper
prior p(αδ ) ∝ 1. So the following proper prior on the unrestricted elements αj is obtained:
p(α^δ|σ²) = N( ((X^δ)'(X^δ))^{-1}(X^δ)'y_c , (1/b)((X^δ)'(X^δ))^{-1}σ² )

The fractional prior is centered at the value of the ML (OLS) estimate, with the variance-covariance matrix multiplied by the factor 1/b. For 0 < b < 1, usually b ≪ 1, the prior is considerably more spread out than the sampling distribution. Since information used for constructing the prior should not reappear in the likelihood, the fractional prior is combined with the remaining part of the likelihood and yields the posterior (Frühwirth-Schnatter and Tüchler (2008)):
with parameters

A_N^δ = ((X^δ)'(X^δ))^{-1}    (3.5)
Figure 3.4: Contour plot of the fractional prior for 2 correlated regressors with ρ=0.8, b=1/400, N=40 observations, and δ = (1, 1) (green) and δ = (0, 0) (blue).

In figure (3.4) the contour plot of the fractional prior slab is shown for two correlated regressors with ρ=0.8, b=1/400 and N=40 observations.
(1) sample each element δ_j of δ separately from p(δ_j|δ_{\j}, y) ∝ p(y|δ_j, δ_{\j}) p(δ_j, δ_{\j}), where δ_{\j} denotes the vector δ without element δ_j.

(4) sample the nonzero elements α^δ|σ² in one block from N(a_N^δ, A_N^δσ²)
The marginal likelihood of the data conditioning only on the indicator variables is given
as
p(y|δ) = [ N^{-1/2} |A_N^δ|^{1/2} Γ(s_N) S_0^{s_0} ] / [ (2π)^{(N−1)/2} |A_0^δ|^{1/2} Γ(s_0) S_N^{s_N} ]

with posterior moments

A_N^δ = ((X^δ)'X^δ + (A_0^δ)^{-1})^{-1}    (3.7)

s_N = s_0 + (N − 1)/2    (3.9)

S_N = S_0 + 1/2 (y_c'y_c + (a_0^δ)'(A_0^δ)^{-1}a_0^δ − (a_N^δ)'(A_N^δ)^{-1}a_N^δ)    (3.10)

For the g-prior slab the marginal likelihood depends on

S(X^δ) = ||y_c||²/(1 + g) · (1 + g(1 − R(X^δ)²))    (3.12)

where q_δ is the number of nonzero elements in δ and R(X^δ)² is the coefficient of determination y_c'X^δ((X^δ)'X^δ)^{-1}(X^δ)'y_c / (y_c'y_c).
The posterior moments are given as:

A_N^δ = g/(1 + g) · ((X^δ)'X^δ)^{-1}    (3.13)

a_N^δ = g/(1 + g) · ((X^δ)'X^δ)^{-1}(X^δ)'y_c    (3.14)

s_N = s_0 + (N − 1)/2    (3.15)

S_N = S_0 + 1/2 · S(X^δ)    (3.16)
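A sketch in R of the g-prior quantities above, assuming a centered response yc, the submatrix Xd of the selected (centered) regressors, the improper prior s_0 = S_0 = 0 and the normalized R(X^δ)² as defined above; all names are illustrative.

# Posterior moments (3.13)-(3.16) and S(X^delta) from (3.12) for a g-prior slab.
gprior_moments <- function(yc, Xd, g, s0 = 0, S0 = 0) {
  XtX <- t(Xd) %*% Xd
  R2  <- drop(t(yc) %*% Xd %*% solve(XtX) %*% t(Xd) %*% yc) / sum(yc^2)
  S   <- sum(yc^2) / (1 + g) * (1 + g * (1 - R2))           # (3.12)
  AN  <- g / (1 + g) * solve(XtX)                           # (3.13)
  aN  <- g / (1 + g) * solve(XtX) %*% t(Xd) %*% yc          # (3.14)
  sN  <- s0 + (length(yc) - 1) / 2                          # (3.15)
  SN  <- S0 + S / 2                                         # (3.16)
  list(AN = AN, aN = aN, sN = sN, SN = SN, S = S)
}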
For the fractional prior the marginal likelihood can be expressed as:
p(y|δ) = b^{q_δ/2} Γ(s_N) S_0^{s_0} / ( (2π)^{(N−1)(1−b)/2} Γ(s_0) S_N^{s_N} )

where q_δ is the number of nonzero elements of δ, while a_N^δ and A_N^δ are given in (3.5) and (3.6) and
Chapter 4
• ν_j = c with probability p(c) = 1 − ω, where c is very small (e.g. 0.001), and ν_j = 1 with probability p(1) = ω
• ω ∼ Beta(c_0, d_0)

where ψ² is a fixed value chosen large enough to cover all reasonable coefficient values. In Swartz et al. (2008) ψ² and c are set to 1000 and 1/1000, respectively. Figure (4.1) shows the plot of the two normal distributions.
of the two normal distributions.
Figure 4.1: Left: SSVS prior for a single regressor: mixture of a slab (blue) and a spike (green)
normal distribution. Right: The variances of the slab-and-spike components assume two values, with
mass ω and 1 − ω.
It should be remarked that the prior for αj is a spike and slab prior. In contrast to the
priors presented in chapter 3 where the ”spike” distribution was a discrete point mass,
the prior here is a mixture of two continuous distributions meaning that also the ”spike”
distribution is continuous. This makes it easier to sample the indicator variable νj as
for a continuous spike αj is not exactly zero when νj = c. The indicator can be drawn
conditionally on the regressor coefficient αj and computation of the marginal likelihood
is not required. However, in each iteration the full model is fitted. Nonrelevant regressors
remain in the model instead of being removed as under a Dirac spike. So model complexity
cannot be reduced during the MCMC steps. This can be quite cumbersome for data sets
with many covariables.
4.2 MCMC scheme for the SSVS prior
For MCMC estimation of the parameters (µ, ν, ω, α, σ²) the following Gibbs sampling scheme can be implemented (a sketch of step (2) is given after the list):

(2) sample each ν_j, j = 1, . . . , k, from p(ν_j|α_j, ω) = (1 − ω) f_N(α_j; 0, cψ²) I{ν_j = c} + ω f_N(α_j; 0, ψ²) I{ν_j = 1}

(3) sample ω from B(c_0 + n_1, d_0 + k − n_1), where n_1 = ∑_j I{ν_j = 1}

(4) sample α from its normal full conditional, where the prior covariance matrix of α is D = diag(ν_1ψ², . . . , ν_kψ²)

(5) sample σ² from G^{-1}(s_N, S_N), where s_N = s_0 + (N − 1)/2 and S_N = 1/2 ((y_c − Xα)'(y_c − Xα)).
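A minimal R sketch of step (2), drawing all ν_j from their two-point full conditionals in one vectorized call; names are illustrative, and the spike factor is called c_spike to avoid masking R's c().

# SSVS step (2): sample nu_j given alpha_j and omega.
sample_nu <- function(alpha, omega, psi2 = 1000, c_spike = 1/1000) {
  w_spike <- (1 - omega) * dnorm(alpha, 0, sqrt(c_spike * psi2))
  w_slab  <- omega       * dnorm(alpha, 0, sqrt(psi2))
  slab    <- rbinom(length(alpha), 1, w_slab / (w_slab + w_spike))
  ifelse(slab == 1, 1, c_spike)
}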
Chapter 5
• ω ∼ Beta(c0 , d0 )
As in SSVS c is a fixed value close to zero, νj =1 indicates the slab component and νj =c the
spike component. The resulting prior for the variance parameter φ2j = νj ψj2 is a mixture
of scaled inverse Gamma distributions:
It can be shown that the marginal distribution for the components of αj is a mixture of
scaled t-distributions, (see Konrath et al. (2008) for more detail):
Figure 5.1: Left: NMIG-prior for a single regressor: mixture of two scaled t-distributions with
aψ0 = 5, bψ0 = 50, s0 = 0.000025 (blue line), s1 = 1 (green line). Right: the induced prior for the variance
follows a mixture of two inverse Gamma distributions.
(2) for each j = 1, . . . , k sample ν_j from p(ν_j|α_j, ψ_j², ω, y) = (1 − ω) f_N(α_j; 0, cψ_j²) I{ν_j = c} + ω f_N(α_j; 0, ψ_j²) I{ν_j = 1}

(3) for each j = 1, . . . , k sample ψ_j² from p(ψ_j²|α_j, ν_j) = G^{-1}(a_{ψ0} + 1/2, b_{ψ0} + 0.5 α_j²/ν_j)

(4) sample ω from p(ω|ν) = B(c_0 + n_1, d_0 + k − n_1), where n_1 = ∑_j I{ν_j = 1}

(6) sample the error variance σ² from G^{-1}(s_N, S_N) with parameters s_N = s_0 + (N − 1)/2, S_N = 1/2 ((y_c − Xα)'(y_c − Xα)).
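A hedged R sketch of steps (2)-(4) above; the hyperparameter defaults follow the simulation settings of chapter 6, the spike factor is called c_spike, and all names are illustrative.

# NMIG updates of nu, psi^2 and omega given the current coefficients alpha.
nmig_update <- function(alpha, psi2, omega, c_spike = 1/1000,
                        aps0 = 5, bps0 = 50, c0 = 1, d0 = 1) {
  k <- length(alpha)
  # (2) nu_j | alpha_j, psi_j^2, omega: two-point full conditional
  w_spike <- (1 - omega) * dnorm(alpha, 0, sqrt(c_spike * psi2))
  w_slab  <- omega       * dnorm(alpha, 0, sqrt(psi2))
  nu      <- ifelse(rbinom(k, 1, w_slab / (w_slab + w_spike)) == 1, 1, c_spike)
  # (3) psi_j^2 | alpha_j, nu_j ~ G^-1(aps0 + 1/2, bps0 + 0.5 * alpha_j^2 / nu_j)
  psi2    <- 1 / rgamma(k, shape = aps0 + 0.5, rate = bps0 + 0.5 * alpha^2 / nu)
  # (4) omega | nu ~ Beta(c0 + n1, d0 + k - n1)
  n1      <- sum(nu == 1)
  omega   <- rbeta(1, c0 + n1, d0 + k - n1)
  list(nu = nu, psi2 = psi2, omega = omega)
}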
Chapter 6
Simulation study
To compare the performance of the different spike and slab priors described in chapters
3-5 a simulation study was conducted. Estimation, variable selection and efficiency of the
MCMC draws are investigated. To simplify notation the following abbreviations are used for the different priors: 'p' for the independence prior, 'g' for Zellner's g-prior, 'f' for the fractional prior, 's' for the SSVS prior and 'n' for the NMIG prior. For the independence prior, 'p' is used instead of 'i' because the latter is reserved for indices.
For the error term σ² and the intercept µ the improper priors given in (2.8) and (2.7) are used. Concerning the priors for the regression coefficients, the tuning of the prior variances is substantial. Considering the penalization effect discussed in section 1.2.2, which depends on the size of the prior variance, the magnitude of the prior variance influences the estimation results. Therefore, to make the different priors for the regressor coefficients comparable, the prior variances are specified so as to obtain covariance matrices of approximately the same size, i.e. the constants c, g, b, a_{ψ0}, b_{ψ0}, τ are chosen so that
prior   variance parameter   group 1   group 2   group 3
p       c                    100       1         0.25
g       g                    4000      40        10
f       b                    1/4000    1/40      1/10

Table (6.1) shows the 3 different prior variance scaling groups used for the simulations. Results are compared within each group; prior variance groups are denoted by the value of c.
For each scaling group 100 data sets consisting of N=40 responses with 9 covariates are generated according to the model

y_i = β_0 + x_iβ + ε_i

For all data sets the intercept is set to 1 and the error term is drawn from the N(0, 1) distribution. To obtain independent regressors the covariates x_i = (x_{i1}, . . . , x_{i9}) are drawn from a multivariate Gaussian distribution with covariance matrix equal to the identity matrix. To generate correlated regressors the configuration of Tibshirani (1996) is used, where the covariance matrix Σ is set as Σ_ij = corr(x_i, x_j) = ρ^{|i−j|} with ρ = 0.8.
To study the selection of both 'strong' and 'weak' regressors the coefficient vector is set to

β = (2, 2, 2, 0.2, 0.2, 0.2, 0, 0, 0)    (6.1)

where an effect of 2 is strong and an effect of 0.2 is weak. For the simulations with highly correlated regressors the parameter vector is set to:
Correlations between regressors are highest for "neighbouring" regressors. This setting allows studying different scenarios, e.g. zero effects highly correlated with strong or weak effects.
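A sketch in R of the data-generating design described above (N = 40, nine covariates, intercept 1, standard normal errors); mvrnorm() from the MASS package is used and the seed is arbitrary.

# Simulated data for one replication; use Sigma <- diag(k) for independent regressors.
library(MASS)
set.seed(123)
N <- 40; k <- 9; rho <- 0.8
Sigma <- rho^abs(outer(1:k, 1:k, "-"))            # Sigma_ij = rho^|i-j|
X     <- mvrnorm(N, mu = rep(0, k), Sigma = Sigma)
beta  <- c(2, 2, 2, 0.2, 0.2, 0.2, 0, 0, 0)       # coefficient vector (6.1)
y     <- 1 + X %*% beta + rnorm(N)                # intercept 1, N(0,1) errors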
For each data set, coefficient estimation and variable selection are performed jointly. MCMC is run for M=1000 iterations without burn-in. Additionally, ML estimates of the full model are computed.
In a first step the accuracy of coefficient estimation under the different priors, measured by the squared error (SE)

SE(α_i) = (α_i − α̂_i)²

is compared. Estimated coefficients and squared errors are displayed in box plots in figures (6.1) to (6.6).
Starting with the prior variance group c = 100 for the independence prior (for the prior
variance parameters of the other priors see table (6.1)), the results of coefficient estima-
tions can be seen in figure (6.1). In the first row box plots of the estimated coefficients
of ”strong” regressors (αi = 2) are displayed, in the second one those of weak regressors
(αi = 0.2), and in the third row those of the zero effects (αi = 0). The red lines mark the
true values.
The mean of the estimates is close to the true coefficient values for both strong and zero
effects, but smaller for weak effects, where shrinkage to zero occurs. Considering the
discussion on the shrinkage property in section 1.2.2, it can be concluded that a large
prior variance causes negligible shrinkage and Bayes estimates approximately coincide
with ML-estimates. Bayes estimation with spike and slab priors implies model averag-
ing as the posterior mean of a coefficient is an average over different models. Inclusion
probabilities displayed in figure (6.8) show that strong regressors are included in almost
all data sets; weak and zero regressors however have lower inclusion probabilities which
means they are either set to zero for a Dirac spike or shrunk close to zero for a continuous
spike. Since the posterior mean is a weighted average of the estimate under the slab prior
(approximately equal to the OLS estimator) and the heavily shrunk estimate under the
spike prior, weak regressors are underestimated.
If the prior variance is smaller with c = 1 or c = 0.25, the shrinkage effect of a small
prior variance discussed in section 1.2.2 can be seen in figure (6.5). Although the inclu-
sion probability of strong regressors is still close to 1 (see figure (6.10)), the estimated
coefficients are smaller than the true value. This can be observed in particular for the
fractional prior, g-prior and SSVS prior. Also coefficients of weak regressors are shrunk,
but due to the increased inclusion probability (see figure (6.10)) implying a larger weight
on the almost unshrunk estimates under the slab prior, estimates are higher compared to
a prior variance of c = 100. Also the squared error of weak regressors is reduced and of
comparable size as the squared error of the MLE. Zero effects have an increased inclusion
probability too, but their estimates are still zero or close to zero. Again the squared error of the zero effects is smaller for the Bayes estimators than for the ML estimator.
• For estimation, the shrinkage caused by the slab component itself is not very pronounced. Due to model averaging, what matters is how often a coefficient is sampled from the spike component, which leads to strong shrinkage towards zero; this in turn depends on the inclusion probability, which depends on the variance of the slab component. This leads to the following recommendations:
– To estimate the effect of strong regressors a slab with large prior variance
should be chosen.
– To estimate ( and detect) weak regressors a slab with small prior variance
should be chosen.
– To exclude zero effects from a final model a slab with large prior variance
should be chosen.
• In figure (6.7) the sum of squared errors over all coefficients is shown by box plots for all data sets. For c=100 and c=1 the sums of squared errors of the Bayes estimates under spike and slab priors are smaller than those of the ML estimator.
Figure 6.1: Box plots of coefficient estimates, the red
line marks the true value. Prior variance group c=100.
Figure 6.3: Box plots of coefficient estimates, the red
line marks the true value. Prior parameter group c=1.
Figure 6.5: Box plots of coefficient estimates, the red
line marks the true value. Prior variance group c=0.25.
Figure 6.7: Sum of SE over all coefficients; (a) c=100, (b) c=1.
6.1.2 Variable selection
Variable selection means to decide for each regressor individually whether it should be
included in the final model or not. Following Barbieri and Berger (2004), in a Bayesian
framework the final model should be the median probability model consisting of those
variables whose posterior inclusion probability p(δj = 1|y) is at least 0.5. The inclusion
probability of a regressor is estimated by the posterior mean of the inclusion probability.
The mean corresponds to the proportion of draws of a coefficient from the slab compo-
nent of the prior. A larger posterior inclusion probability indicates that the corresponding
regressor xj has an effect which is not close to zero.
In figures (6.8), (6.9) and (6.10), the inclusion probabilities for each regressor are dis-
played in box plots for different prior variance settings. The first row of the plots shows
the inclusion probabilities of ”strong” regressors (βi = 2), the second row those of ”weak”
regressors (βi = 0.2) and the third row those of the zero effects (βi = 0). For ML esti-
mates the relative frequency of inclusion of a regressor based on a significance test with
significance level α = 0.05 is shown.
For strong regressors the inclusion probability is equal to one for all prior variances and
all priors. That means, that strong coefficients are sampled only from the slab compo-
nent of the priors, no matter what size of prior variance was chosen. For weak regressors,
however, the inclusion probability depends on the prior variance. If the prior variance
is large, the inclusion probability of weak regressors is low. The smaller the variance of
the slab distribution, the higher the inclusion probability. Inclusion probabilities of zero
effects show a similar behavior as those of weak regressors. For large variances the effect
is assigned to the spike component, for smaller slab variances posterior inclusion proba-
bilities increase as the effects are occasionally assigned to the slab component.
In the next step the influence of the size of prior variance on the number of misclassified
regressors is examined. For this purpose, the false discovery rate (FDR) and the non-discovery rate (NDR), defined as

FDR = h(δ_i = 1 | α_i = 0) / h(α_i = 0)

NDR = h(δ_i = 0 | α_i ≠ 0) / h(α_i ≠ 0)
are calculated, where h denotes the absolute frequency. Figure (6.11) shows how FDR
and NDR change by varying the prior variance. As a benchmark line the FDR and NDR
of the classical approach are plotted, which are 0.05 for the FDR (α-error of coefficient
testing) and 0.40 for NDR (β-error of coefficient testing) respectively. Under all priors
FDR and NDR show a similar behavior: if the variance of the slab component becomes
smaller, the NDR decreases and the FDR increases. Therefore, looking at the total sum
of misclassified regressors, defined as

MISS = (1/k) ∑_{i=1}^{k} ( 1{δ_i = 1, α_i = 0} + 1{δ_i = 0, α_i ≠ 0} )

and displayed in figure (6.12), it can be seen that the total proportion of misclassified variables remains roughly constant when the prior variance scaling is varied.
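For reference, the three classification measures can be computed as follows; delta is the 0/1 vector of selected indicators, alpha the true coefficient vector, and the function names are illustrative.

# False discovery rate, non-discovery rate and proportion of misclassified effects.
fdr  <- function(delta, alpha) sum(delta == 1 & alpha == 0) / sum(alpha == 0)
ndr  <- function(delta, alpha) sum(delta == 0 & alpha != 0) / sum(alpha != 0)
miss <- function(delta, alpha) mean((delta == 1 & alpha == 0) | (delta == 0 & alpha != 0))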
Conclusions are:
• The inclusion probability depends on the size of the variance of the slab compo-
nent. By increasing the variance, the inclusion probability of weak and zero effects
decreases. However, the inclusion probability of strong regressors remains close to
one for all variance settings.
• It is almost impossible to distinguish between small and zero effects: either both
small and zero effects are included in the model or both are excluded simultaneously.
The number of misclassified regressors remains roughly constant if the slab variance
is varied.
• The different priors yield similar results for large variances. For a small prior vari-
ance (c = 0.25) the inclusion probability of weak and zero effects is smaller under
the independence prior and g-prior compared to the other priors.
To identify zero and nonzero effects the following recommendations can be given:
• Strong regressors are detected under each prior in each prior variance parameter
setting. Weak regressors are hard to detect in our simulation setup.
• For reasonable prior variances (c = 100 or c = 1) also zero effects are correctly
identified under each prior.
Figure 6.8: Box plots of the posterior inclusion probabilities p(δj = 1|y). Prior variance
group c=100.
Figure 6.9: Box plots of the posterior inclusion probabilities p(δj = 1|y). Prior variance
group c=1.
Figure 6.10: Box plots of the posterior inclusion probabilities p(δj = 1|y). Prior variance
group c=0.25.
Figure 6.11: NDR and FDR for different priors as a function of the prior variance parameters; panels include (a) the independence prior and (b) the fractional prior.
Figure 6.12: Proportion of misclassified effects as a function of the prior
variance scale
6.1.3 Efficiency of MCMC
In this chapter we compare computational effort and MCMC efficiency under different
priors. The independence prior, fractional prior and g-prior require the time-consuming calculation of 2k marginal likelihoods in every iteration. Computation of the marginal likelihood is not necessary for the NMIG prior and the SSVS prior, but model complexity is not reduced during MCMC, as no effects are shrunk exactly to zero. Draws from an MCMC implementation are not independent but in general correlated. To measure the loss of information of a dependent sample compared to an independent sample, the inefficiency factor f, defined as

f = 1 + ∑_{s=1}^{∞} ρ_s

is used, together with the effective sample size

ESS = M / f
The effective sample size is the number of independent draws that correspond to the de-
pendent draws. For practical computation of the inefficiency factor autocorrelations are
summed only up to a lag s. To determine s, Geyer (1992) proposed to calculate the function Γ_m = ρ_{2m} + ρ_{2m+1}, which is the sum of adjacent pairs of autocorrelations. He showed that for an irreducible, reversible Markov chain Γ_m is a strictly positive, strictly decreasing and strictly convex function; s is determined as the lag where these conditions are violated for the first time. Geyer (1992) also showed that the true variance is overestimated with the inefficiency factor obtained in this way. If the ESS is divided by the computation time
needed, the number of effective iterations per second is obtained, which can be compared
for different priors.
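A rough R sketch of these quantities. It keeps the definition f = 1 + ∑ ρ_s used above and truncates the sum at the first lag where Γ_m = ρ_{2m} + ρ_{2m+1} is no longer positive, i.e. only the positivity part of Geyer's conditions is checked, which is a simplification; the maximum lag and names are illustrative.

# Inefficiency factor and effective sample size of a single chain x.
ineff_factor <- function(x, max_lag = 200) {
  rho <- drop(acf(x, lag.max = max_lag, plot = FALSE)$acf)[-1]   # rho_1, ..., rho_max_lag
  s   <- max_lag
  for (m in 1:(floor(max_lag / 2) - 1)) {
    if (rho[2 * m] + rho[2 * m + 1] <= 0) { s <- 2 * m - 1; break }
  }
  1 + sum(rho[1:s])
}
ess <- function(x) length(x) / ineff_factor(x)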
To study the efficiency of MCMC under the presented priors, one data set with 40 obser-
vations is generated as described on page 41. MCMC is performed under each prior with
the variance parameters c = 10 for independence prior, g = 400 for g-prior, b = 1/400
for fractional prior, (aψ0 , bψ0 ) = (5, 50) for NMIG prior and τ 2 = 10 for SSVS prior. The
MCMC algorithms were run for 10000 iterations and the posterior inclusion probability
p^{(m)}(δ_j = 1|y) is saved in each iteration for each regressor. Autocorrelations, inefficiency factors and the ESS of the draws p^{(m)}(δ_j = 1|y) are computed.
Autocorrelations of the posterior inclusion probabilities under the independence prior are
shown in figure (6.13). The autocorrelations decay quickly, and this also is true for the
g-, fractional and SSVS prior. However, autocorrelations are high for the NMIG prior,
see figure (6.14). Inefficiency factors summarized in table (6.2) vary between 1 and 2 under Dirac spikes, between 1 and 5.3 for the SSVS prior and between 1 and 64 for the NMIG prior. In table (6.3) the implementations for the different priors are compared taking computation time into account: ESS per second for the independence prior, g-prior and fractional prior are between 10 and 20, for the SSVS prior between 62 and 335, and for the NMIG prior between 2 and 284. Performance is best for the SSVS prior as its draws are nearly uncorrelated and sampling is fast.
To investigate whether models are sampled according to their model probability, the ob-
served frequencies of the drawn models are compared to the posterior model probabilities.
For the independence prior, g-prior and fractional prior the results are summarized in ta-
bles (6.6), (6.7) and (6.8). The model frequencies from MCMC closely correspond to the
posterior model probabilities: this means that the models are actually sampled according
to their posterior probability.
To determine the sample size necessary to achieve a good approximation to the posterior
model probability, MCMC is run for 100, 1000, 10000 and 20000 iterations under the
independence prior. The results shown in table (6.4) indicate that 1000 draws already yield a good approximation. However, posterior model probabilities can differ considerably under different priors (see tables (6.6), (6.7) and (6.8)).
• Computational time: MCMC for Dirac spikes is time consuming; CPU time is about 10 times higher than for the NMIG and SSVS priors.
• ESS: MCMC for the SSVS prior performs best. MCMC under the NMIG prior can
outperform but also perform worse than MCMC for Dirac spikes.
• Under the independence prior, g-prior and fractional prior models are sampled ac-
cording to their model probability.
• Convergence: For the independence prior 1000 iterations are enough to achieve a
reasonable approximation to the posterior model probability.
Figure 6.13: ACF of the posterior inclusion probabilities p(δ = 1|y) under the indepen-
dence prior, prior variance parameter c=10, M=10000.
Figure 6.14: ACF of the posterior inclusion probabilities p(δ = 1|y) under the NMIG
prior, prior variance parameter (aψ0 , bψ0 )=(5,50), M=10000.
p g f n s
f ms f ms f ms f ms f ms
x1 1.00 1 1.00 1 1.00 1 1.00 1 1.00 1
x2 1.55 3 1.00 1 1.52 5 1.00 1 1.00 1
x3 1.00 1 1.43 1 1.00 1 1.00 1 1.00 1
x4 1.63 3 2.00 7 2.04 5 9.70 17 1.98 5
x5 1.47 1 1.78 3 1.63 3 64.54 87 5.30 9
x6 1.47 1 1.84 3 2.03 9 23.53 57 1.90 9
x7 1.70 3 2.02 3 2.09 5 9.48 19 1.57 7
x8 1.59 1 2.01 3 2.11 9 5.42 9 1.57 3
x9 1.71 5 2.04 3 1.98 3 8.68 19 1.61 3
Table 6.2: Posterior indicator probability p(δj = 1|y): inefficiency factor (f) and number of the
autocorrelations (ms) summed up for computation of the inefficiency factor
p g f n s
ESS iter ESS iter ESS iter ESS iter ESS iter
x1 9999 21 10000 22 10000 22 10000 284 10000 333
x2 6465 14 10000 22 6579 15 10000 284 10000 333
x3 10000 21 6995 15 10000 22 10000 284 10000 333
x4 6144 13 5009 11 4900 11 1031 29 5039 167
x5 6823 14 5623 12 6130 14 154 4 1886 62
x6 6801 14 5445 12 4925 11 424 12 5273 175
x7 5887 12 4944 11 4784 10 1055 30 6360 212
x8 6270 13 4978 11 4729 10 1845 52 6380 212
x9 5860 12 4905 11 5047 11 1152 32 6207 206
Table 6.3: Posterior inclusion probability p(δj = 1|y): effective sample size (ESS) and ESS per second
(iter)
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h 100 h 1000 h 10000 h 20000 p mM
1 1 1 1 0 1 0 0 0 0 0.30 0.36 0.38 0.38 0.38
2 1 1 1 0 0 0 0 0 0 0.45 0.38 0.36 0.35 0.35
3 1 1 1 0 0 1 0 0 0 0.04 0.04 0.04 0.04 0.04
4 1 1 1 1 1 0 0 0 0 0.02 0.04 0.04 0.04 0.04
5 1 1 1 0 1 1 0 0 0 0.01 0.02 0.02 0.03 0.02
6 1 1 1 0 1 0 0 0 1 0.01 0.03 0.02 0.02 0.02
7 1 1 1 0 1 0 1 0 0 0.02 0.03 0.02 0.02 0.02
8 1 1 1 0 1 0 0 1 0 0.01 0.02 0.02 0.02 0.02
9 1 1 1 0 0 0 1 0 0 0.03 0.01 0.02 0.02 0.02
10 1 1 1 1 0 0 0 0 0 0.00 0.02 0.02 0.02 0.02
Table 6.4: Observed frequencies of the models under independence prior; number of MCMC iterations
m=100, 1000,10000, 20000. p mM denotes the model probability under the independence prior.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 hp hg hf hn hs p mM
1 1 1 1 0 1 0 0 0 0 0.369 0.239 0.265 0.362 0.189 0.380
2 1 1 1 0 0 0 0 0 0 0.354 0.158 0.080 0.395 0.565 0.351
3 1 1 1 0 0 1 0 0 0 0.037 0.044 0.027 0.037 0.038 0.037
4 1 1 1 1 1 0 0 0 0 0.036 0.054 0.074 0.028 0.018 0.036
5 1 1 1 0 1 1 0 0 0 0.026 0.043 0.054 0.022 0.011 0.024
6 1 1 1 0 1 0 0 0 1 0.024 0.037 0.044 0.019 0.011 0.024
7 1 1 1 0 1 0 1 0 0 0.021 0.038 0.050 0.017 0.015 0.021
8 1 1 1 0 1 0 0 1 0 0.021 0.037 0.043 0.017 0.010 0.018
9 1 1 1 0 0 0 1 0 0 0.017 0.022 0.012 0.018 0.030 0.017
10 1 1 1 1 0 0 0 0 0 0.018 0.018 0.011 0.015 0.028 0.017
Table 6.5: Observed relative frequencies of the models under the different priors (columns h_p, h_g,
h_f, h_n, h_s); p_mM is the posterior model probability under the independence prior.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_p p_mM_p
1 1 1 1 0 1 0 0 0 0 0.369 0.380
2 1 1 1 0 0 0 0 0 0 0.354 0.351
3 1 1 1 0 0 1 0 0 0 0.037 0.037
4 1 1 1 1 1 0 0 0 0 0.036 0.036
5 1 1 1 0 1 1 0 0 0 0.026 0.024
6 1 1 1 0 1 0 0 0 1 0.024 0.024
7 1 1 1 0 1 0 1 0 0 0.021 0.021
8 1 1 1 0 1 0 0 1 0 0.021 0.018
9 1 1 1 0 0 0 1 0 0 0.017 0.017
10 1 1 1 1 0 0 0 0 0 0.018 0.017
Table 6.6: Independence prior: observed relative frequencies (h_10000_p) and posterior model
probabilities (p_mM_p) of different models.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_g p_mM_g
1 1 1 1 0 1 0 0 0 0 0.239 0.231
2 1 1 1 0 0 0 0 0 0 0.158 0.165
3 1 1 1 1 1 0 0 0 0 0.054 0.054
4 1 1 1 0 0 1 0 0 0 0.044 0.043
5 1 1 1 0 1 1 0 0 0 0.043 0.043
6 1 1 1 0 1 0 1 0 0 0.038 0.039
7 1 1 1 0 1 0 0 0 1 0.037 0.039
8 1 1 1 0 1 0 0 1 0 0.037 0.036
9 1 1 1 0 0 0 1 0 0 0.022 0.023
10 1 1 1 1 0 0 0 0 0 0.018 0.021
Table 6.7: G-prior: observed relative frequencies (h_10000_g) and posterior model probabilities (p_mM_g) of different models.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_f p_mM_f
1 1 1 1 0 1 0 0 0 0 0.265 0.268
2 1 1 1 0 0 0 0 0 0 0.080 0.080
3 1 1 1 1 1 0 0 0 0 0.074 0.073
4 1 1 1 0 1 1 0 0 0 0.054 0.053
5 1 1 1 0 1 0 1 0 0 0.050 0.047
6 1 1 1 0 1 0 0 0 1 0.044 0.047
7 1 1 1 0 1 0 0 1 0 0.043 0.043
8 1 1 1 0 0 1 0 0 0 0.027 0.028
9 1 1 1 1 1 0 0 0 1 0.023 0.022
10 1 1 1 1 1 1 0 0 0 0.021 0.021
Table 6.8: Fractional prior: observed relative frequencies (h_10000_f) and posterior model
probabilities (p_mM_f) of different models.
6.2 Results for correlated regressors
To study the performance of variable selection under the different priors, another simulation study
with highly correlated regressors is carried out. Highly correlated data sets as described on
page 41 are generated and the same simulations as in the uncorrelated case are performed.

6.2.1 Estimation accuracy

As for the uncorrelated regressors, Bayes estimates are investigated under 3 different slab
variance parameters. Box plots of the estimates are given in figures (6.15), (6.17) and (6.19),
and those of the squared errors in figures (6.16), (6.18) and (6.20). Strong regressors (β = 2)
are b1, b2 and b4, weak regressors (β = 0.2) are b5, b8 and b9, and zero effects are b3, b6
and b7. The result is similar to that for independent regressors: the mean of the Bayes
estimates for strong and zero effects is close to the true value, whereas weak effects are
again underestimated under all priors. Compared to independent regressors the estimation
error is increased, see figures (6.2) and (6.6). This is expected, since in the presence of
correlated regressors the matrix X'X tends to be ill-conditioned, yielding unstable coefficient
estimates and large estimation errors. The increase of the estimation error is larger for
the ML estimates than for the Bayes estimates. In detail, for weak and zero effects the
SE of the Bayes estimates is much smaller than for the ML estimates, whereas for strong
effects the Bayes estimates perform similarly to the ML estimates. Comparing figures (6.7) and
(6.21), we see that the total sum of SE for ML estimation is approximately three times
as high as in the independent case, whereas it only doubles for the Bayes estimates.
This illustrates the regularisation property of the Bayes estimates. With regard to SE, Bayes
estimates clearly outperform ML estimates.
As adjacent regressors are highly correlated, the influence of correlation can be assessed by
looking at the position of a regressor within the coefficient vector. Estimation accuracy does
not seem to be much affected by the correlation among the regressors. For example, the squared
estimation error of a strong regressor does not differ much depending on whether its
neighbour is another strong or a weak regressor.
If the prior variances get smaller, the results remain essentially the same, apart from the
fact that the priors begin to act differently. As shown in figure (6.17), the smallest SE
is achieved by the Bayes estimate under the g-prior. Generally, all Bayes estimates still
outperform ML estimation, see figure (6.21).
Conclusions are:
• For correlated regressors the SEs of the effect estimates are higher than for independent
regressors. The increase is smaller for the Bayes estimates than for the MLE.
• For a large slab variance the Bayes estimates under all priors perform similarly; for a very
small slab variance the g-prior performs best.
Figure 6.15: Correlated regressors: Box plot of coefficient estimates, the red line marks the true value. Prior variance group c=100.

Figure 6.17: Correlated regressors: Box plot of coefficient estimates, the red line marks the true value. Prior variance group c=1.

Figure 6.19: Correlated regressors: Box plot of coefficient estimates, the red line marks the true value. Prior variance group c=0.25.

Figure 6.21: Correlated regressors: Sum of SE of coefficient estimates for different prior variances. (a) Sum of SE, c=100; (b) Sum of SE, c=1.
6.2.2 Variable selection
Correlation among regressors might have an impact on variable selection: e.g., the inclusion
probability of a strong effect might differ depending on whether it is highly correlated
with another strong effect or with a zero effect. Box plots of the inclusion probabilities for
the different slab variance parameters are shown in figures (6.22), (6.23) and (6.24).
For strong effects the inclusion probability is actually close to one for all Bayes estimates
as well as for the ML estimates based on F-tests in the full model. Using F-tests for the
ML estimates, weak regressors are included in only 5% of the data sets (compared to 20%
in the case of independent regressors). In contrast, under all priors the inclusion probabilities
of weak effects are slightly higher in the simulation with correlated regressors than with
independent regressors. Zero effects have low inclusion probabilities for both the ML and the
Bayes estimates.
Whereas for a large prior variance the inclusion probabilities are similar under all priors, for
smaller prior variances (c=1, c=0.25) we observe a different behavior under the g-prior and the
fractional prior than under the remaining priors. For the independence prior, the SSVS prior
and the NMIG prior the inclusion probabilities of weak and zero effects increase with smaller
prior variance, whereas the increase is much smaller under the fractional prior and almost
absent under the g-prior.
Figure (6.25) shows the NDR and the FDR for correlated regressors. As for independent
regressors, the NDR decreases and the FDR increases with increasing prior variance. Again
the proportion of misclassified covariates is approximately constant across the different prior
variances. For a large prior variance the proportion of misclassified covariates is slightly
smaller than that based on individual F-tests.
• Strong and zero effects are identified correctly by Bayesian variable selection under
the priors considered here.
• For a large prior variance the inclusion probabilities of correlated regressors do not
differ considerably from those of independent regressors.
• With smaller prior variance the inclusion probabilities of both weak and zero effects
increase faster than for independent regressors for all priors except the g-prior.
Figure 6.22: Correlated regressors: Box plots of the posterior inclusion probabilities
p(δj = 1|y). Prior parameter group c=100.
Figure 6.23: Correlated regressors: Box plots of the posterior inclusion probabilities
p(δj = 1|y). Prior variance group c=1.
Figure 6.24: Correlated regressors: Box plots of the posterior inclusion probabilities
p(δj = 1|y). Prior variance group c=0.25.
Figure 6.25: Correlated regressors: NDR and FDR for different priors as a function of the prior
variance parameters. (a) independence prior; (b) fractional prior.
Figure 6.26: Correlated regressors: Proportion of misclassified effects as a
function of the prior variance scale.
6.2.3 Efficiency of MCMC
Finally we compare MCMC efficiency and computational effort under the different priors
for one data set with correlated regressors (generated as described on page 41). MCMC
was run for 10000 iterations with the prior variance parameters c = 10 for the independence
prior, g = 400 for the g-prior, b = 1/400 for the fractional prior, (aψ0, bψ0) = (5, 50) for the NMIG
prior and τ² = 10 for the SSVS prior. Inefficiency factors and effective sample sizes of the
posterior inclusion probabilities are computed as described in section 6.1.3.
Figures (6.27) and (6.28) show the autocorrelation function of the inclusion probabilities
under the independence prior and NMIG prior. Autocorrelations under the fractional
prior, g-prior and SSVS prior are similar to those under the independence prior and are
not shown. The autocorrelations are small for all priors except the NMIG prior where the
posterior inclusion probabilities are highly autocorrelated. Autocorrelations are similar
to those of independent regressors. This means that correlation among regressors has no
pronounced effect on the autocorrelation of the inclusion probability chains. This is sup-
ported by inefficiency factor and effective sample sizes (ESS), presented in tables (6.9),
which are of comparable order as for independent regressors. Again ESS per second is
similar for inclusion probabilities under independence prior, g-prior and fractional prior
for all regressors (11 to 22). Under the NMIG prior there is a high variation with ESS per
second varying from 1 to 295. Under the SSVS prior ESS per second is highest among all
priors with values from 29 up to 333, thus outperforming MCMC under all other priors
considered.
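The inefficiency factors and effective sample sizes reported in tables (6.9) and (6.10) can be computed along the following lines from a chain of sampled inclusion probabilities. This is only a sketch: the helper name ineffFactor is hypothetical, and the simple cut-off rule (summing the autocorrelations up to the last lag before the ACF turns non-positive) is an assumption in the spirit of Geyer (1992), not necessarily the exact rule used for the tables.

#chain ... vector of M sampled posterior inclusion probabilities, e.g. one column of post_delta_EQ_1
ineffFactor<-function(chain,max.lag=200){
  rho=acf(chain,lag.max=max.lag,plot=FALSE)$acf[-1]   #autocorrelations at lags 1..max.lag
  ms=which(rho<=0)[1]-1                               #last lag before the ACF becomes non-positive
  if(is.na(ms)) ms=max.lag
  f=1+2*sum(rho[seq_len(ms)])                         #inefficiency factor
  c(f=f,ms=ms,ESS=length(chain)/f)                    #effective sample size ESS = M/f
}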
Finally, the number of visits to the different models during MCMC is compared to the posterior model
probabilities under the independence prior, the g-prior and the fractional prior in tables (6.11),
(6.12) and (6.13), respectively. The observed frequency of each model visited during MCMC is a
reasonable approximation to the posterior model probability.
• Correlation between the regressors does not increase the autocorrelations of the sampled
posterior inclusion probabilities; inefficiency factors and effective sample sizes are
of the same order as for independent regressors.
• The relative frequencies of visits to the different models during MCMC are good approximations
of the posterior model probabilities (for the independence prior, g-prior and fractional
prior).
Figure 6.27: Correlated regressors: ACF of the posterior inclusion probabilities p(δ =
1|y) under the independence prior, prior variance parameter c=10, M=10000.
Figure 6.28: Correlated regressors: ACF of the posterior inclusion probabilities p(δ =
1|y) under the NMIG prior, prior variance parameters (aψ0 , bψ0 )=(5,50), M=10000.
p g f n s
regressor f ms f ms f ms f ms f ms
x1 1.32 1 1.37 3 1.00 1 1.00 1 1.00 1
x2 1.44 1 1.60 3 1.56 3 192.01 193 11.47 15
x3 1.51 5 1.88 1 1.61 3 1.00 1 1.00 1
x4 1.49 3 1.54 3 1.48 1 39.68 69 5.62 15
x5 1.49 5 1.61 3 1.52 3 27.22 61 3.03 5
x6 1.61 7 1.59 5 1.51 1 62.95 101 5.62 11
x7 1.69 3 1.81 3 1.64 3 21.25 43 3.13 9
x8 1.84 5 1.85 3 1.93 5 9.72 17 2.71 7
x9 1.74 3 1.86 3 1.95 7 15.82 37 3.15 11
Table 6.9: Posterior inclusion probability p(δj = 1|y) of correlated regressors: inefficiency factor (f)
and number of autocorrelation lags (ms) summed for the computation of the inefficiency factor.
p g f n s
regressor ESS iter ESS iter ESS iter ESS iter ESS iter
x1 7572 17 7298 16 9998 22 10000 295 10000 333
x2 6936 15 6231 14 6390 14 52 1 871 29
x3 6628 14 5305 12 6198 14 10000 295 10000 333
x4 6726 15 6513 14 6770 15 252 7 1780 59
x5 6714 15 6205 14 6597 15 367 10 3303 110
x6 6219 14 6305 14 6623 15 158 4 1777 59
x7 5906 13 5531 12 6099 13 470 13 3192 106
x8 5421 12 5406 12 5187 11 1029 30 3692 123
x9 5740 12 5364 12 5116 11 632 18 3174 105
Table 6.10: Posterior inclusion probability p(δj = 1|y) of correlated regressors: effective sample size
(ESS) and ESS per second (iter)
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_p p_mM_p
1 1 1 1 0 0 0 0 0 0 0.298 0.302
2 1 1 1 1 0 0 0 0 0 0.170 0.165
3 1 1 1 0 0 1 0 0 0 0.153 0.151
4 1 1 1 0 1 0 0 0 0 0.073 0.074
5 1 1 1 0 0 0 1 0 0 0.039 0.038
6 1 1 1 1 0 1 0 0 0 0.026 0.026
7 1 1 1 0 0 1 0 0 1 0.018 0.018
8 1 1 1 1 1 0 0 0 0 0.015 0.018
9 1 1 1 0 0 1 1 0 0 0.017 0.016
10 1 1 1 0 1 1 0 0 0 0.015 0.016
Table 6.11: Correlated regressors under the independence prior: observed relative frequencies
(h_10000_p) and posterior model probabilities (p_mM_p) of different models.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_g p_mM_g
1 1 1 1 0 0 0 0 0 0 0.192 0.196
2 1 1 1 0 0 1 0 0 0 0.109 0.106
3 1 1 1 1 0 0 0 0 0 0.091 0.091
4 1 1 1 0 1 0 0 0 0 0.058 0.059
5 1 1 1 0 0 0 1 0 0 0.043 0.043
6 1 1 1 0 0 1 0 0 1 0.026 0.028
7 1 0 1 0 0 0 0 0 0 0.024 0.026
8 1 1 1 1 0 1 0 0 0 0.023 0.024
9 1 1 1 0 0 0 0 1 0 0.023 0.023
10 1 1 1 0 0 0 0 0 1 0.024 0.023
Table 6.12: Correlated regressors under the g-prior: observed relative frequencies (h_10000_g) and
posterior model probabilities (p_mM_g) of different models.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_10000_f p_mM_f
1 1 1 1 0 0 1 0 0 0 0.127 0.129
2 1 1 1 1 0 0 0 0 0 0.107 0.101
3 1 1 1 0 0 0 0 0 0 0.089 0.093
4 1 1 1 0 1 0 0 0 0 0.047 0.051
5 1 1 1 0 0 1 0 0 1 0.050 0.049
6 1 1 1 1 0 1 0 0 0 0.037 0.037
7 1 1 1 0 0 0 1 0 0 0.029 0.031
8 1 1 1 0 0 1 0 1 0 0.025 0.025
9 1 1 1 0 0 1 1 0 0 0.024 0.022
10 1 1 1 1 0 0 1 0 0 0.022 0.022
Table 6.13: Correlated regressors under the fractional prior: observed relative frequencies (h_10000_f)
and posterior model probabilities (p_mM_f) of different models.
model x1 x2 x3 x4 x5 x6 x7 x8 x9 h_p h_g h_f h_n h_s p_mM_p
1 1 1 1 0 0 0 0 0 0 0.298 0.192 0.089 0.299 0.443 0.302
2 1 1 1 1 0 0 0 0 0 0.170 0.091 0.107 0.158 0.104 0.165
3 1 1 1 0 0 1 0 0 0 0.153 0.109 0.127 0.186 0.114 0.151
4 1 1 1 0 1 0 0 0 0 0.073 0.058 0.047 0.069 0.057 0.074
5 1 1 1 0 0 0 1 0 0 0.039 0.043 0.029 0.034 0.043 0.038
6 1 1 1 1 0 1 0 0 0 0.026 0.023 0.037 0.022 0.019 0.026
7 1 1 1 0 0 1 0 0 1 0.018 0.026 0.050 0.020 0.013 0.018
8 1 1 1 1 1 0 0 0 0 0.015 0.016 0.017 0.015 0.010 0.018
9 1 1 1 0 0 1 1 0 0 0.017 0.018 0.024 0.011 0.010 0.016
10 1 1 1 0 1 1 0 0 0 0.015 0.017 0.021 0.016 0.013 0.016
Table 6.14: Correlated regressors: observed relative frequencies of the models under the different
priors (columns h_p to h_s); p_mM_p is the posterior model probability under the independence prior.
Chapter 7
The hypothesis when starting this work was that the different priors would lead to different results,
since they have different location and variance parameters. It was surprising to find that this was
not the case: the results under the different priors were quite similar whenever the scale of the
prior variance parameters was set at least approximately the same.
During the simulations the problem of the weak regressors turned up: they were apparently chosen
too small to be detected. The question arises whether a threshold exists for the size of influential
regressors. It seems that below a certain value regressors are almost never detected, whereas above
it they are detected in almost all cases. By examining only the posterior inclusion probability of a
single regressor, weak effects and zero effects could not be well distinguished in our setting.
Probably only quite different approaches would be able to achieve this.
In the literature the median probability model is often recommended, e.g. in Barbieri and
Berger (2004). This means that the selected model includes all variables with posterior inclusion
probability exceeding 0.5. In the light of our results, the cut-off point of 0.5 seems at least
questionable. As we have observed, the size of the variance of the slab component controls the
result of variable selection, in the sense that a small prior variance encourages regressor
inclusion: posterior inclusion probabilities increase with smaller prior variance. So by choosing
the prior variance small enough, every small coefficient can achieve a posterior inclusion
probability of more than 0.5 for a given data set.
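Expressed in terms of the output of the code in Appendix B, the median probability model is obtained by thresholding the averaged inclusion probabilities at 0.5; the two lines below are an illustrative sketch using the matrix post_delta_EQ_1 returned by the sampling functions.

#post_delta_EQ_1 ... m x k matrix of sampled inclusion probabilities (see Appendix B)
pip=colMeans(post_delta_EQ_1)   #estimated posterior inclusion probabilities
mpm=which(pip>0.5)              #indices of the regressors in the median probability model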
In our simulation setting relatively small data sets with only 40 observations were used; therefore
it was not reasonable to use many more than 9 regressors. This small setting was chosen because the
influence of the respective priors is more pronounced in small data sets with little information
than in large, informative data sets. However, to compare computational speed, further simulation
studies with considerably larger data sets would be of interest.
Regarding the efficiency of MCMC under the different priors, it was astonishing to observe
how fast the NMIG prior and the SSVS prior work: the CPU time needed was about 1/10 of the
time needed by the independence, fractional and g-prior. However, the high autocorrelations
displayed under the NMIG prior were an unexpected and unfavourable result. In contrast, the
performance of variable selection with the SSVS prior was surprisingly good; it is both fast
and only weakly autocorrelated.
Appendix A
Derivations
Completing the square in $\beta$, the joint posterior of $\beta$ and $\sigma^2$ can be written as
\begin{align*}
p(\beta,\sigma^2|y)
&= \text{const}\cdot\frac{1}{(\sigma^2)^{(k+1)/2}}\,\frac{1}{(\sigma^2)^{N/2+s_0+1}}
   \exp\Big(-\frac{1}{2\sigma^2}\big(y'y+\beta'B_N^{-1}\beta
   -2\beta'B_N^{-1}\underbrace{B_N(X'y+B_0^{-1}b_0)}_{b_N}+b_0'B_0^{-1}b_0\big)\Big)
   \exp\Big(-\frac{S_0}{\sigma^2}\Big)\\
&= \text{const}\cdot\frac{1}{(\sigma^2)^{(k+1)/2}}\,\frac{1}{(\sigma^2)^{N/2+s_0+1}}
   \exp\Big(-\frac{1}{2\sigma^2}\big(y'y+\beta'B_N^{-1}\beta-2\beta'B_N^{-1}b_N+b_0'B_0^{-1}b_0\big)\Big)
   \exp\Big(-\frac{S_0}{\sigma^2}\Big)\\
&= \text{const}\cdot\frac{1}{(\sigma^2)^{(k+1)/2}}\,\frac{1}{(\sigma^2)^{N/2+s_0+1}}
   \exp\Big(-\frac{1}{2\sigma^2}\big(y'y+(\beta-b_N)'B_N^{-1}(\beta-b_N)-b_N'B_N^{-1}b_N+b_0'B_0^{-1}b_0\big)\Big)
   \exp\Big(-\frac{S_0}{\sigma^2}\Big)\\
&= \text{const}\cdot\frac{1}{(\sigma^2)^{(k+1)/2}}\,\frac{1}{(\sigma^2)^{N/2+s_0+1}}
   \exp\Big(-\frac{1}{2\sigma^2}(\beta-b_N)'B_N^{-1}(\beta-b_N)
   -\frac{1}{\sigma^2}\underbrace{\Big(\tfrac{1}{2}\big(y'y-b_N'B_N^{-1}b_N+b_0'B_0^{-1}b_0\big)+S_0\Big)}_{S_N}\Big)\\
&= \text{const}\cdot\frac{1}{(\sigma^2)^{(k+1)/2}}
   \exp\Big(-\frac{1}{2\sigma^2}(\beta-b_N)'B_N^{-1}(\beta-b_N)\Big)
   \underbrace{\frac{1}{(\sigma^2)^{N/2+s_0+1}}}_{1/(\sigma^2)^{s_N+1}}
   \exp\Big(-\frac{S_N}{\sigma^2}\Big)\\
&\propto f_N(\beta;\,b_N,\,B_N\sigma^2)\,f_{G^{-1}}(\sigma^2;\,s_N,\,S_N)
\end{align*}
with parameters
\begin{align*}
B_N &= (X_1'X_1+B_0^{-1})^{-1},\\
b_N &= B_N(X_1'y+B_0^{-1}b_0),\\
s_N &= s_0+N/2,\\
S_N &= S_0+\tfrac{1}{2}\big(y'y+b_0'B_0^{-1}b_0-b_N'B_N^{-1}b_N\big).
\end{align*}
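As a small numerical illustration, the posterior moments above can be evaluated directly in R. The data and prior values below are illustrative only: B0 = c·I with c = 100 mimics a wide slab, the coefficient vector contains strong (β = 2), weak (β = 0.2) and zero effects as in the simulation study, and the sketch ignores the separate treatment of the intercept used in the code of Appendix B.

#illustrative data: 40 observations, 9 regressors
set.seed(1)
N=40; k=9
X=matrix(rnorm(N*k),N,k)
y=X%*%c(2,2,0,2,0.2,0,0,0.2,0.2)+rnorm(N)
b0=rep(0,k); B0=diag(k)*100; s0=0; S0=0        #prior parameters (illustrative)
BN=solve(t(X)%*%X+solve(B0))                   #B_N = (X'X + B_0^-1)^-1
bN=BN%*%(t(X)%*%y+solve(B0)%*%b0)              #b_N
sN=s0+N/2                                      #s_N
SN=S0+0.5*(t(y)%*%y+t(b0)%*%solve(B0)%*%b0-t(bN)%*%solve(BN)%*%bN)  #S_N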
The full conditionals of $\beta$ and $\sigma^2$ follow directly from the joint posterior. For $\beta$,
$$
p(\beta|\sigma^2,y)\;\propto\;p(\beta,\sigma^2|y)\;=\;f_N(\beta;\,b_N,\,B_N\sigma^2)\,f_{G^{-1}}(\sigma^2;\,s_N,\,S_N)\;\propto\;f_N(\beta;\,b_N,\,B_N\sigma^2),
$$
and for $\sigma^2$,
$$
p(\sigma^2|\beta,y)\;\propto\;p(\beta,\sigma^2|y)\;\propto\;\frac{1}{(\sigma^2)^{s_N+1}}
\exp\Big(-\frac{1}{\sigma^2}\underbrace{\Big(\tfrac{1}{2}\big((y-X\beta)'(y-X\beta)+(\beta-b_0)'B_0^{-1}(\beta-b_0)\big)+S_0\Big)}_{S_N}\Big)
\;\propto\;f_{G^{-1}}(\sigma^2|s_N,S_N),
$$
where $s_N$ and $S_N$ now denote the parameters of the full conditional of $\sigma^2$, with $S_N$ given by the underbraced expression.
For the ridge estimator, the penalised residual sum of squares $PRSS(\beta)=(y-X\beta)'(y-X\beta)+\lambda\beta'\beta$ is minimised. Differentiating with respect to $\beta$,
\begin{align*}
\frac{\partial}{\partial\beta}(PRSS)
&= \frac{\partial}{\partial\beta}\big((y-X\beta)'(y-X\beta)+\lambda\beta'\beta\big)\\
&= \frac{\partial}{\partial\beta}\big(y'y-y'X\beta-\beta'X'y+\beta'X'X\beta+\lambda\beta'\beta\big)\\
&= -2X'y+2X'X\beta+2\lambda\beta,
\end{align*}
and setting this derivative to zero yields
$$
\frac{\partial}{\partial\beta}(PRSS)=0 \;\Leftrightarrow\; X'y-(X'X+\lambda I)\beta=0 \;\Leftrightarrow\; \hat{\beta}_{ridge}=(X'X+\lambda I)^{-1}X'y.
$$
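The closed form above translates directly into R; the following helper is only a minimal sketch (the function name ridge and the penalty value in the usage comment are illustrative).

#ridge estimator beta_ridge = (X'X + lambda*I)^-1 X'y for a given penalty lambda
ridge<-function(X,y,lambda){
  solve(t(X)%*%X+lambda*diag(ncol(X)),t(X)%*%y)
}
#e.g. ridge(X,y,lambda=1) for a penalty of 1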
Appendix B
R codes
#de ... matrix with the indicator variables: de[i,j]=1 means that
#       regressor xj is included in the model in iteration i
#post_delta_EQ_1 ... matrix with the posterior inclusion
#       probabilities of the regressors p(delta_j=1|y)
regIndPrior_J<-function(X,y,m,y_orig,c){
N=length(X[,1]) #number of observations
k=length(X[1,]) #number of columns=k (without intercept!)
#prior constants:
A0=diag(k)*c
a0=matrix(0,k)
s0=0 #prior parameter for s2~G-1(s0,S0)
S0=0
#for storing the results: #attention: actually the dimension of the models
# is k(without intercept)
s2=matrix(0,m)
b=matrix(0,m,k)
de=matrix(0,m,k)
post_delta_EQ_1=matrix(0,m,k)
mu=c()
delta=c(rep(FALSE,k)) #starting value for delta
for(i in 1:m){
for(l in 1:k){
log_ml_yd_j=c()
log_p_delta=c() #log(p(delta))
log_post_delta=c()
for(h in 0:1){
delta[sam[l]]=h==1 # delta[sampled order] is set 0 or 1 alternately
log_ml_yd_j[h+1]=marlik_P(a0,A0,s0,S0,y,X,delta)
log_p_delta[h+1]=lbeta(sum(delta)+1,k-sum(delta)+1)
log_post_delta[h+1]=log_ml_yd_j[h+1]+log_p_delta[h+1]
}
max_ml=max(log_post_delta)
e=exp(log_post_delta-max_ml)
post_delta=e/sum(e)
delta[sam[l]]=sample(c(0,1),1,replace=TRUE,post_delta)
delta=delta==TRUE;
post_delta_EQ_1[i,sam[l]]=post_delta[2]
}
s2[i]=1/rgamma(1,sN,SN_d)
b=cbind(mu,b)
Liste=list(b=b,s2=s2,de=de,post_delta_EQ_1=post_delta_EQ_1)
return(Liste)
}
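A call of regIndPrior_J and a summary of its output could look as follows. This is only a sketch: X and y denote a design matrix and response as in the simulation study, the number of iterations and the slab variance c are illustrative, and the argument y_orig is assumed to be the uncentred response.

#illustrative call of the independence-prior sampler defined above
res=regIndPrior_J(X,y,m=10000,y_orig=y,c=100)
colMeans(res$post_delta_EQ_1)   #estimated posterior inclusion probabilities p(delta_j=1|y)
colMeans(res$de)                #relative frequency of inclusion of each regressor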
if(sum(delta)){ #calculating the new constants only if
X_d=X[,delta] #at least one column/row is in the design-matrix
a0_d=a0[delta,]
if(is.matrix(A0[delta,delta])){
A0_d=A0[delta,delta]
}else {
A0_d=matrix(A0[delta,delta])
}
invA0_d=solve(A0_d)
AN_d=solve(t(X_d)%*%X_d+invA0_d)
aN_d=AN_d%*%(t(X_d)%*%y+invA0_d%*%a0_d)
sN=s0+(N-1)/2
SN_d=S0+0.5*(t(y)%*%y+t(a0_d)%*%invA0_d%*%a0_d-t(aN_d)%*%solve(AN_d)%*%aN_d)
log_ml_yd=-0.5*log(N)-(N-1)*0.5*(log(2*pi))+0.5*(log(det(AN_d)))-0.5*(log(det(A0_d)))+
+lgamma(sN)-lgamma(s0+1)+s0*(log(S0+1))-sN*(log(SN_d));
} else{
AN_d=matrix(1,1,1)
A0_d=matrix(1,1,1)
sN=s0+(N-1)/2
SN_d=S0+0.5*(t(y)%*%y)
log_ml_yd=-0.5*log(N)-(N-1)*0.5*(log(2*pi))+0.5*(log(det(AN_d)))-0.5*(log(det(A0_d)))+
+lgamma(sN)-sN*(log(SN_d));
}
return(log_ml_yd)
}
#values returned by the function:
#de ... matrix with the indicator variables: de[i,j]=1 means that
#       regressor xj is included in the model in iteration i
regGPrior_J<-function(X,y,g,m){
N=length(X[,1])
k=length(X[1,])
#prior constants:
s0=0
S0=0
#for storing the results: #Attention:actually the dimension of the models is k(without intercept)
s2=matrix(0,m)
b=matrix(0,m,k)
de=matrix(0,m,k)
post_delta_EQ_1=matrix(0,m,k)
mu=c()
#starting Gibbs-Sampling:
for(i in 1:m){
#cat("g: ",i,"\n")
#step1: sampling each component of delta conditional delta without_j
for(h in 0:1){
delta[sam[l]]=h==1
log_ml_yd_j[h+1]=marlik_Z(y,X,delta,g);
log_p_delta[h+1]=lbeta(sum(delta)+1,k-sum(delta)+1) #for hierarchical prior
log_post_delta[h+1]=log_ml_yd_j[h+1]+log_p_delta[h+1]
}
max_ml=max(log_post_delta)
e=exp(log_post_delta-max_ml)
post_delta=e/sum(e)
delta[sam[l]]=sample(c(0,1),1,replace=TRUE,post_delta)
delta=delta==TRUE;
post_delta_EQ_1[i,sam[l]]=post_delta[2]
}
de[i,]=delta #storing the chosen model
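#log marginal likelihood of model delta under Zellner's g-prior (from marlik_Z);
#S denotes the corresponding sum-of-squares term computed within marlik_Z (assumed here)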
log_ml_yd=-(sum(delta))/2*log(1+g)+lgamma((N-1)/2)-0.5*log(N)-(N-1)/2*log(pi)-(N-1)*0.5*log(S)
return(log_ml_yd)
}
#de ... matrix with the indicator variables: de[i,j]=1 means that
#       regressor xj is included in the model in iteration i
regFracPrior_J<-function(y,X,f,m){
N=length(X[,1]) #number of observations
k=length(X[1,])
#prior constants:
s0=0
S0=0
b=matrix(0,m,k)
de=matrix(0,m,k)
post_delta_EQ_1=matrix(0,m,k)
mu=c()
#starting Gibbs-Sampling:
for(i in 1:m){
#cat("f: ",i,"\n")
#step1: sampling each component of delta conditional delta without_j
for(l in 1:k){
log_ml_yd_j=c()
log_p_delta=c() #log(p(delta))
log_post_delta=c()
for(h in 0:1){
delta[sam[l]]=h==1
log_ml_yd_j[h+1]=marlik_F(s0,S0,y,X,delta,f)
#log_p_delta[h+1]=sum(delta)*log(p)+(k-sum(delta))*log(1-p)
log_p_delta[h+1]=lbeta(sum(delta)+1,k-sum(delta)+1)
log_post_delta[h+1]=log_ml_yd_j[h+1]+log_p_delta[h+1]
}
max_ml=max(log_post_delta)
e=exp(log_post_delta-max_ml)
post_delta=e/sum(e)
delta[sam[l]]=sample(c(0,1),1,replace=TRUE,post_delta)
delta=delta==TRUE;
post_delta_EQ_1[i,sam[l]]=post_delta[2]
}
de[i,]=delta #storing the chosen model
}
s2[i]=1/rgamma(1,sN,SN_d)
#m ... number of iterations
#ny ... matrix with the indicator variables: ny[i,j]=1 means that on
#       iteration i coefficient alpha_j is sampled from the flat
#       distribution N(0,ny_j*psi2)
regSSVS<-function(y,X,m,tau2){
N=length(X[,1])
k=length(X[1,])
#prior constants:
c=1/1000
#starting values:
mu[1]=1
b[1,]=rep(1,k)
ny[1,]=rep(1,k)
w[1]=0.5
s2[1]=5
#MCMC-steps:
for(i in 2:m) {
#print(i)
for(j in 1:k){
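#p0, p1: unnormalised weights of the spike (ny=0) and slab (ny=1) component for coefficient j;
#the slab weight uses the N(0,tau2) density at the current draw of b[i-1,j]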
p1=w[i-1]*dnorm(b[i-1,j],0,sqrt(tau2))
ny[i,j]=sample(c(0,1),1,prob=c(p0,p1))
post_delta_EQ_1[i,j]=p1/(p0+p1)
}
}
b=cbind(mu,b)
Liste=list(b=b,ny=ny,s2=s2,post_delta_EQ_1=post_delta_EQ_1)
return(Liste)
}
#value returned by the function:
regNMIG<-function(y,X,m,apsi0,bpsi0){
N=length(X[,1])
k=length(X[1,])
#prior constants:
s0=0.000025
s1=1
#starting values:
mu[1]=1
b[1,]=rep(1,k)
ny[1,]=rep(1,k)
psi2[1,]=rep(1,k)
w[1]=0.5
s2[1]=5
#MCMC-steps:
for(i in 2:m) {
#cat("nmig: ",i,"\n")
for(j in 1:k){
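#f0, f1: unnormalised weights of the spike (variance s0*psi2) and slab (variance s1*psi2) component;
#ny[i,j] stores the sampled scale factor and f1/(f0+f1) the conditional inclusion probability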
f0=(1-w[i-1])*dnorm(b[i-1,j],0,sqrt(s0*psi2[i-1,j]))
f1=w[i-1]*dnorm(b[i-1,j],0,sqrt(s1*psi2[i-1,j]))
ny[i,j]=sample(c(s0,s1),1,prob=c(f0,f1))
post_delta_EQ_1[i,j]=f1/(f0+f1)
}
b=cbind(mu,b)
ny_w=(ny==1)
Liste=list(b=b,ny_w=ny_w,psi2=psi2,s2=s2, post_delta_EQ_1= post_delta_EQ_1)
return(Liste)
}
Bibliography
Barbieri, M. and J.O. Berger (2004). Optimal predictive model selection. Annals of
Statistics 32 (3), 870–897.
Brown, P.J., M. Vannucci and T. Fearn (2002). Bayes model averaging with selection of
regressors. Journal of the Royal Statistical Society, Series B 64 (3), 519–536.
Fahrmeir, L., T. Kneib and S. Konrath (2010). Bayesian regularisation in structured
additive regression: a unifying perspective on shrinkage, smoothing and predictor
selection. Statistics and Computing 20 (2), 203–219.
Fahrmeir, L., T. Kneib and S. Lang (2007). Regression. Modelle, Methoden und Anwen-
dungen. Springer. Heidelberg.
George, E. and Y. Maruyama (2010). Fully Bayes model selection with a generalized
g-prior. Conference 'Frontiers of Statistical Decision Making and Bayesian Analysis',
University of Texas, San Antonio, March 17–20, 2010.
George, E.I. and R.E. McCulloch (1993). Variable selection via Gibbs sampling. Journal
of the American Statistical Association 88 (423), 881–889.
Geyer, C.J. (1992). Practical Markov Chain Monte Carlo. Statistical Science 7 (4), 473–
511.
Ishwaran, H. and J. Rao (2003). Detecting differentially expressed genes in microarrays
using Bayesian model selection. Journal of the American Statistical Association 98 (1),
438–455.
Kass, R.E. and A.E. Raftery (1995). Bayes factors. Journal of the American Statistical
Association 90 (1), 773–795.
Ley, E. and M.F.J. Steel (2007). Jointness in Bayesian variable selection with application
to growth regression. Journal of Macroeconomics 29 (1), 476–493.
Liang, F., R. Paulo, G. Molina, M.A. Clyde and J.O. Berger (2008). Mixtures of g Priors
for Bayesian Variable Selection. Journal of the American Statistical Association 103 (1),
410–423.
O’Hagan, A. (1995). Fractional Bayes factors for model comparison. Journal of the Royal
Statistical Society B 57 (1), 99–138.
Swartz, M.D., R.K. Yu and S. Shete (2008). Finding factors influencing risk: Comparing
Bayesian stochastic search and standard variable selection methods applied to logistic
regression models of cases and controls. Statistics in Medicine 27 (29), 6158–6174.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society, Series B 58 (1), 267–288.
Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with
g-prior distributions. In: Bayesian Inference and Decision Techniques: Essays in Honor
of Bruno de Finetti, 233–243.