Hierarchical Generalized Additive Models in Ecology: An Introduction with mgcv
ABSTRACT
In this paper, we discuss an extension to two popular approaches to modeling
complex structures in ecological data: the generalized additive model (GAM) and the
hierarchical model (HGLM). The hierarchical GAM (HGAM) allows modeling
of nonlinear functional relationships between covariates and outcomes where the
shape of the function itself varies between different grouping levels. We describe the
theoretical connection between HGAMs, HGLMs, and GAMs, explain how to model
different assumptions about the degree of intergroup variability in functional
response, and show how HGAMs can be readily fitted using existing GAM software,
the mgcv package in R. We also discuss computational and statistical issues with
fitting these models, and demonstrate how to fit HGAMs on example data. All code
and data used to generate this paper are available at: github.com/eric-pedersen/
mixed-effect-gams.
Subjects: Ecology, Statistics, Data Science, Spatial and Geographic Information Science
Keywords: Generalized additive models, Hierarchical models, Time series, Functional regression, Smoothing, Regression, Community ecology, Tutorial, Nonlinear estimation
Submitted 29 October 2018; Accepted 31 March 2019; Published 27 May 2019
Corresponding author: Eric J. Pedersen, [email protected]
Academic editor: Andrew Gray
Additional Information and Declarations can be found on page 39
DOI 10.7717/peerj.6876
Copyright 2019 Pedersen et al. Distributed under Creative Commons CC-BY 4.0
How to cite this article: Pedersen EJ, Miller DL, Simpson GL, Ross N. 2019. Hierarchical generalized additive models in ecology: an introduction with mgcv. PeerJ 7:e6876 DOI 10.7717/peerj.6876

INTRODUCTION
Two of the most popular and powerful modeling techniques currently in use by ecologists are generalized additive models (GAMs; Wood, 2017a) for modeling flexible regression functions, and generalized linear mixed models ("hierarchical generalized linear models" (HGLMs), or simply "hierarchical models"; Bolker et al., 2009; Gelman et al., 2013) for modeling between-group variability in regression relationships.

[Figure 1: Hypothetical example of functional variability between different group levels (abundance vs. temperature). Each dashed line indicates how the abundance for a different species of fish in a community might vary as a function of average water temperature. The orange species shows lower abundance at all temperatures, and the red and blue species differ in the temperature at which they achieve the maximum possible size. However, all three curves are similarly smooth and peak close to one another relative to the entire range of tested temperatures. The solid black line represents an "average abundance curve," the mean abundance across species in the sample. Full-size DOI: 10.7717/peerj.6876/fig-1]

At first glance, GAMs and HGLMs are very different tools used to solve different problems. GAMs are used to estimate smooth functional relationships between predictor variables and the response. HGLMs, on the other hand, are used to estimate linear relationships between predictor variables and the response (although nonlinear relationships can also be modeled through quadratic terms or other transformations of the predictor variables), but impose a structure in which predictors are organized into groups (often referred to as "blocks") and the relationships between predictor and response may vary
across groups. Either the slope or intercept, or both, may be subject to grouping. A typical
example of HGLM use might be to include site-specific effects in a model of population
counts, or to model individual level heterogeneity in a study with repeated observations
of multiple individuals.
However, the connection between HGLMs and GAMs is quite deep, both conceptually
and mathematically (Verbyla et al., 1999). Both HGLMs and GAMs fit highly variable models by "pooling" parameter estimates toward one another, penalizing squared deviations
from some simpler model. In an HGLM, this occurs as group-level effects are pulled
toward global effects (penalizing the squared differences between each group-level
parameter estimate and the global effect). In a GAM, this occurs via the enforcement of a
smoothness criterion on the variability of a functional relationship, pulling parameters
toward some function that is assumed to be totally smooth (such as a straight line) by
penalizing squared deviations from that totally smooth function.
Given this connection, a natural extension to the standard GAM framework is to allow
smooth functional relationships between predictor and response to vary between groups,
but in such a way that the different functions are in some sense pooled toward a
common shape. We often want to know both how functional relationships vary between
groups, and if a relationship holds across groups. We will refer to this type of model as a
hierarchical GAM (HGAM).
There are many potential uses for HGAMs. For example, we can use them to estimate
how the maximum size of different fish species varies along a common temperature
gradient (Fig. 1). Each species will typically have its own response function, but since the
species overlap in range, they should have similar responses over at least some of the
temperature gradient; Fig. 1 shows all three species reach their largest maximum sizes in
the center of the temperature gradient. Estimating a separate function for each species
throws away a lot of shared information and could result in highly noisy function estimates
E(Y) = g^{-1}(β0 + f1(x1) + f2(x2) + ⋯ + fj(xj))

where E(Y) is the expected value of the response Y (with an appropriate distribution and link function g), fj is a smooth function of the covariate xj, β0 is an intercept term, and g^{-1} is the inverse link function. Hereafter, we will refer to these smooth functions as smoothers. In the example equation above, there are j smoothers and each is a function of only one covariate, though it is possible to construct smoothers of multiple variables.
[Figure 2: Effect of different choices of smoothing parameter (λ) on the shape of the resulting smoother (red lines), plotted as y against x. (A) λ estimated using REML; (B) λ set to zero (no smoothing); (C) λ set to a very large value. The blue line in each panel is the known model used to simulate the data. Full-size DOI: 10.7717/peerj.6876/fig-2]
fj(xj) = Σ_{k=1}^{K} βj,k bj,k(xj)

where bj,k is the k-th basis function for smoother fj and βj,k is its coefficient. Wiggliness is controlled by adding a penalty of the form λ βᵀSβ to the model likelihood, where S is a penalty matrix and λ is the smoothing parameter.
Figure 2 shows an example of how different choices of the smoothing parameter (λ) affect the shape of the resulting smoother. Data (points) were generated from the blue function and noise added to them. In Fig. 2A, λ was selected using restricted maximum likelihood (REML) to give a good fit to the data. In Fig. 2B, λ was set to zero, so the penalty has no effect and the function interpolates the data. Figure 2C shows λ set to a very large value, so the penalty removes all terms that have any wiggliness, giving a straight line.
To measure the complexity of a penalized smooth term we use the effective degrees of freedom (EDF), which at a maximum is the number of coefficients to be estimated in the model, minus any constraints. The EDF can take noninteger values, and larger values indicate more wiggly terms (see Wood (2017a, Section 6.1.2) for further details). The number of basis functions, K, sets a maximum for the EDF, as a smoother cannot have more than K EDF. When the EDF is well below K, increasing K generally has very little effect on the shape of the function. In general, K should be set large enough to give the smoother sufficient flexibility to capture the true function.
Figure 3 (A) Examples of the basis functions associated with a six basis function thin plate regression
spline (TPRS, m = 2), calculated for data, x, spread evenly between x = 0 and x = 1. Each line represents
a single basis function. (B) The smoothing penalty matrix for the thin plate smoother. Red entries
indicate positive values and blue indicate negative values. For example, functions F3 and F4 would have
the greatest proportionate effect on the total penalty (as they have the largest values on the diagonal),
whereas functions F5 and F6 would not contribute to the wiggliness penalty at all (all the values in the fifth
and sixth row and column of the penalty matrix are zero). This means these functions are in the null
space of the penalty matrix, and are treated as completely smooth. (C) An example of how the basis
functions add up to create a single smooth function. Thin colored lines represent each basis function
multiplied by a coefficient, and the solid black line is the sum of those basis functions.
Full-size DOI: 10.7717/peerj.6876/fig-3
In a cyclic smoother, the start and end of the smoother are constrained to match in value and first derivative. These are useful for fitting
models with cyclic components such as seasonal effects. We will use these smoothers to
demonstrate how to fit HGAMs to cyclic data.
1. Should each group have its own smoother, or will a common smoother suffice?
2. Do all of the group-specific smoothers have the same wiggliness, or should each group
have its own smoothing parameter?
3. Will the smoothers for each group have a similar shape to one another—a shared global
smoother?
1. A single common smoother for all observations; We will refer to this as model G, as it
only has a Global smoother.
2. A global smoother plus group-level smoothers that have the same wiggliness. We will
refer to this as model GS (for Global smoother with individual effects that have a Shared
penalty).
3. A global smoother plus group-level smoothers with differing wiggliness. We will refer to
this as model GI (for Global smoother with individual effects that have Individual
penalties).
4. Group-specific smoothers without a global smoother, but with all smoothers having the
same wiggliness. We will refer to this as model S.
5. Group-specific smoothers with different wiggliness. We will refer to this as model I.
It is important to note that “similar wiggliness” and “similar shape” are two distinct
concepts; functions can have very similar wiggliness but very different shapes. Wiggliness
measures how quickly a function changes across its range, and it is easy to construct
two functions that differ in shape but have the same wiggliness. For this paper, we consider
two functions to have similar shape if the average squared distance between the functions
is small (assuming the functions have been scaled to have a mean value of zero across
their ranges). This definition is somewhat restricted; for instance, a cyclic function would
not be considered to have the same shape as a phase-shifted version of the same function,
Figure 4 Alternate types of functional variation f(x) that can be fitted with HGAMs. The dashed line
indicates the average function value for all groups, and each solid line indicates the functional value at a
given predictor value for an individual group level. The null model (of no functional relationship between
the covariate and outcome, top right), is not explicitly assigned a model name.
Full-size DOI: 10.7717/peerj.6876/fig-4
nor would two normal distributions with the same mean but different standard
deviations. The benefit of this definition of shape, however, is that it is straightforward to
translate into penalties akin to those described in the section “A Review of Generalized
Additive Models.” Figure 4, model S illustrates the case where models have different
shapes. Similarly, two curves could have very similar overall shape, but differ in their
wiggliness. For instance, one function could be equal to another plus a high-frequency
oscillation term. Figure 4, model GI illustrates this.
We will discuss the trade-offs between different models and guidelines about when each
of these models is appropriate in the section “Computational and statistical issues when
fitting HGAMs”. The remainder of this section will focus on how to specify each of these five
models using mgcv.
A. The CO2 dataset, available in R via the datasets package. This data is from an
experimental study by Potvin, Lechowicz & Tardif (1990) of CO2 uptake in grasses
under varying concentrations of CO2, measuring how concentration-uptake functions
varied between plants from two locations (Mississippi and Quebec) and two
temperature treatments (chilled and warm). A total of 12 plants were used and CO2
uptake measured at seven CO2 concentrations for each plant (Fig. 5A). Here, we
will focus on how to use HGAMs to estimate interplant variation in functional
responses. This data set has been modified from the default version available with R, to
recode the Plant variable as an unordered factor, Plant_uo [1].

B. Data generated from a hypothetical study of bird movement along a migration corridor, sampled throughout the year (see Supplemental Code). This dataset consists of simulated sample records of numbers of observed locations of 100 tagged individuals each from six species of bird, at 10 locations along a latitudinal gradient, with one observation taken every 4 weeks. Counts were simulated randomly for each species in each location and week by creating a species-specific migration curve that gave the probability of finding an individual of a given species in a given location, then simulating the distribution of individuals across sites using a multinomial distribution, and subsampling that using a binomial distribution to simulate observation error (i.e., not every bird present at a location would be detected). The data set (bird_move) consists of the variables count, latitude, week, and species (Fig. 5B). This example allows us to demonstrate how to fit these models with interactions and with non-normal (count) data. The true model used to generate this data was model GS: a single global function plus species-specific deviations around that global function.

[1] Note that mgcv requires that grouping or categorical variables be coded as factors in R; it will raise an error message if passed data coded as characters. It is also important to know whether the factor is coded as ordered or unordered (see ?factor for more details on this). This matters when fitting group-level smoothers using the by= argument (as is used for fitting models GI and I, shown below). If the factor is unordered, mgcv will set up a model with one smoother for each grouping level. If the factor is ordered, mgcv will set any basis functions for the first grouping level to zero. In model GI the ungrouped smoother will then correspond to the first grouping level, rather than the average functional response, and the group-specific smoothers will correspond to deviations from the first group. In model I, using an ordered factor will result in the first group not having a smoother associated with it at all.
Throughout the examples we use REML to estimate model coefficients and smoothing
parameters. We strongly recommend using either REML or marginal likelihood (ML)
rather than the default generalized cross-validation criteria when fitting GAMs, for the
reasons outlined in Wood (2011). In each case some data processing and manipulation has
been done to obtain the graphics and results below. See Supplemental Code for details
on data processing steps. To illustrate plots, we will be using the draw() function from the
gratia package. This package was developed by one of the authors (Simpson, 2018) as a
set of tools to extend plotting and analysis of mgcv models. While mgcv has plotting
capabilities (through plot() methods), gratia expands these by creating ggplot2 objects
(Wickham, 2016) that can be more easily extended and modified.
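For instance, draw() is called directly on a fitted gam object (the object and variable names below are hypothetical placeholders, not from the original analysis):

```r
library(mgcv)
library(gratia)

# dat is a placeholder data frame with response y and covariate x
m <- gam(y ~ s(x), data = dat, method = "REML")
draw(m)  # returns a ggplot2 object with one panel per model term
```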
Figure 5 Example data sets used throughout section “What are Hierarchical GAMs?” (A) Grass CO2
uptake vs. CO2 concentration for 12 individual plants. Color and line type included to distinguish
individual plant trends. (B) Simulated data set of bird migration, with point size corresponding to weekly
counts of six species along a latitudinal gradient (zeros excluded for clarity).
Full-size DOI: 10.7717/peerj.6876/fig-5
For our CO2 data set, we will model loge(uptake) as a function of two smoothers: a
TPRS of loge-concentration, and a random effect for Plant_uo to model plant-specific
intercepts. Mathematically:
loge(uptakei) = f(loge(conci)) + ζPlant_uoi + εi

where ζPlant_uoi is the random effect intercept for the plant that observation i was taken from, and εi is a normally distributed error term.
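In mgcv, this model can be specified along the following lines (a sketch consistent with the description above; the basis dimensions k are illustrative assumptions):

```r
library(mgcv)

# model G: one global smoother of log concentration plus a
# plant-level random effect intercept (bs="re")
CO2_modG <- gam(log(uptake) ~ s(log(conc), k = 5, bs = "tp") +
                  s(Plant_uo, k = 12, bs = "re"),
                data = CO2, method = "REML")
```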
Figure 6 gratia plotting output for model G applied to the CO2 dataset. s(log(conc)): the
smoother of loge concentration. Plant_uo: a quantile–quantile plot of the random effects against
Gaussian quantiles, used to check the appropriateness of the normal random effect assumption.
Full-size DOI: 10.7717/peerj.6876/fig-6
Figure 6 illustrates the output of gratia's draw() function for CO2_modG: the panel
labelled s(log(conc)) shows the estimated smoother for concentration, and the panel
labelled Plant_uo shows a quantile–quantile plot of the estimated random effects vs.
Gaussian quantiles, which can be used to check our model.
Looking at the effects by term is useful, but we are often interested in fitted values or
predictions from our models. Using the built in prediction functions with mgcv, we can
estimate what the fitted function (and uncertainty around it) should look like for each
level, as shown in Fig. 7 (see Supplemental Code for more details on how to generate these
predictions).
Examining these plots, we see that while functional responses among plants are similar,
some patterns are not captured by this model. For instance, for plant Qc2 the model clearly
underestimates CO2 uptake. A model including individual differences in functional
responses may better explain variation.
For our bird example, we model the count of birds as a function of location and time,
including their interaction. For this we structure the model as:
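In mgcv this corresponds to a tensor product smoother of week and latitude with a Poisson response (a sketch; the cyclic basis for week, the knot placement, and the k values are assumptions based on the data description, not code from the original paper):

```r
library(mgcv)

# model G for the bird data: a single global smooth surface over
# week (cyclic, since week 52 wraps to week 1) and latitude
bird_modG <- gam(count ~ te(week, latitude, bs = c("cc", "tp"), k = c(10, 10)),
                 data = bird_move, family = poisson, method = "REML",
                 knots = list(week = c(0, 52)))
```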
Figure 7 Predicted uptake function (±2 s.e.) for each plant, based on model G (a single global
function for uptake plus an individual-level random effect intercept). Model predictions are for log-
uptake, but are transformed here to show the fitted function on the original scale of the data.
Full-size DOI: 10.7717/peerj.6876/fig-7
Figure 8 Plot illustrating the average log-abundance of all bird species at each latitude for each week,
with red colors indicating more individuals and blue colors fewer.
Full-size DOI: 10.7717/peerj.6876/fig-8
Figure 9 Observed counts by species vs. predicted counts from bird_modG (1–1 line added as
reference). If our model fitted well we would expect that all species should show similar patterns of
dispersion around the 1-1 line (and as we are assuming the data is Poisson, the variance around the mean
should equal the mean). Instead we see that variance around the predicted value is much higher for
species 1 and 6. Full-size DOI: 10.7717/peerj.6876/fig-9
However, the plot also indicates a large amount of variability in the timing of migration.
The source of this variability is apparent when we look at the timing of migration of each
species (cf. Fig. 5B).
All six species in Fig. 5B show relatively precise migration patterns, but they differ in the
timing of when they leave their winter grounds and the amount of time they spend at
their summer grounds. Averaging over all of this variation results in a relatively imprecise
(diffuse) estimate of migration timing (Fig. 8), and viewing species-specific plots of
observed vs. predicted values (Fig. 9), it is apparent that the model fits some of the species
better than others. This model could potentially be improved by adding intergroup
variation in migration timing. The rest of this section will focus on how to model this type
of variation.
Figure 10 Global function (s(log(conc))) and group-specific deviations from the global function
(s(log(conc),Plant_uo)) for CO2_modGS. Full-size DOI: 10.7717/peerj.6876/fig-10
using a penalty term that tends to draw these group-level smoothers toward zero. mgcv provides an explicit basis type to do this, the factor-smoother interaction or "fs" basis (see ?mgcv::factor.smooth.interaction for details). This smoother creates a copy of each set of basis functions for each level of the grouping variable, but only estimates one smoothing parameter for all groups. To ensure that all parts of the smoother can be shrunk toward zero effect, each component of the penalty null space is given its own penalty [3].

[3] As part of the penalty construction, each group will also have its own intercept (part of the penalized null space), so there is no need to add a separate term for group-specific intercepts as we did in model G.

We modify the previous CO2 model to incorporate group-level smoothers as follows:

loge(uptakei) = f(loge(conci)) + fPlant_uoi(loge(conci)) + εi
where fPlant_uoi(loge(conci)) is the smoother for concentration for the given plant. In R we then have:

CO2_modGS <- gam(log(uptake) ~ s(log(conc), k=5, m=2) +
                   s(log(conc), Plant_uo, k=5, bs="fs", m=2),
                 data=CO2, method="REML")

Figure 10 shows the fitted smoothers for CO2_modGS. The plots of group-specific smoothers indicate that plants differ not only in average log-uptake (which would correspond to each plant having a straight line at a different level for the group-level smoother), but also differ slightly in the shape of their functional responses. Figure 11 shows how the global and group-specific smoothers combine to predict uptake rates for individual plants. We see that, unlike in the single global smoother case above, none of the curves deviate from the data systematically.

The factor-smoother interaction-based approach mentioned above does not work for higher-dimensional tensor product smoothers (fs() does still work for higher-dimensional isotropic smoothers). Instead, the group-specific term can be specified with a tensor product of the continuous smoothers and a random effect for the grouping parameter [4], e.g.:

y ~ te(x1, x2, bs="tp", m=2) +
    t2(x1, x2, fac, bs=c("tp", "tp", "re"), m=2, full=TRUE)

[4] As mentioned in the section "A Review of Generalized Additive Models," these terms can be specified either with te() or t2() terms. Using t2 as above (with full=TRUE) is essentially a multivariate equivalent of the factor-smoother interaction; it requires more smooth terms than te(), but can be fit using other mixed effects software such as lme4, which is useful when fitting models with a large number of group levels (see the section "Computational and Statistical Issues When Fitting HGAMs" for details). We have generally found that t2(full=TRUE) is the best approach for multidimensional GS models when the goal is to accurately estimate the global smoother in the presence of group-level smoothers; other approaches (using te()) tend to result in the global smoother being overly penalized toward the flat function, and the bulk of the variance being assigned to the group-level smoothers. We discuss this further in the section "Computational and Statistical Issues When Fitting HGAMs," subsection "Estimation issues when fitting both global and group-level smoothers."
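For the bird data, a GS model could be written along these lines (a sketch following the t2() recipe above; the basis dimensions k are illustrative assumptions, not values from the original analysis):

```r
library(mgcv)

# model GS for the bird data: a global week-by-latitude surface plus
# species-level smooth deviations, sharing one set of smoothing parameters
bird_modGS <- gam(count ~ te(week, latitude, bs = c("cc", "tp"),
                             k = c(10, 10), m = 2) +
                    t2(week, latitude, species, bs = c("cc", "tp", "re"),
                       k = c(10, 10, 6), m = 2, full = TRUE),
                  data = bird_move, family = poisson, method = "REML")
```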
Figure 11 Predicted uptake values (lines) vs. observed uptake for each plant, based on model GS.
Full-size DOI: 10.7717/peerj.6876/fig-11
Figure 12 (A) Predicted migration paths for each species based on bird_modGS, with lighter colors
corresponding to higher predicted counts. (B) Observed counts vs. predictions from bird_modGS.
Full-size DOI: 10.7717/peerj.6876/fig-12
Model GI
Fitting a separate smoother for each group (with its own penalties) can be done in mgcv by using the by
argument in the s() and te() (and related) functions. Therefore, we can code the formula
for this model as:
y ∼ s(x, bs="tp") + s(x, by=fac, m=1, bs="tp") + s(fac, bs="re")
Note two major differences here from how model GS was specified:
1. We explicitly include a random effect for the intercept (the bs="re" term), as
group-specific intercepts are not incorporated into factor by variable smoothers
(as would be the case with a factor smoother or a tensor product random effect).
2. We specify m=1 instead of m=2 for the group-level smoothers, which means the marginal TPRS basis for this term will penalize the squared first derivative of the function, rather than the second derivative. This also reduces collinearity between the global smoother and the group-specific terms; such collinearity occasionally leads to high uncertainty around the global smoother (see the section "Computational and Statistical Issues When Fitting HGAMs" for more details). TPRS with m=1 have a more restricted null space than m=2 smoothers, so should not be as collinear with the global smoother (Wieling et al., 2016; Baayen et al., 2018). We have observed that this is much more of an issue when fitting model GI compared to model GS.
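Applied to the CO2 data, this specification looks like the following (a sketch: the model name CO2_modGI follows the naming pattern of the other models in this section, and the k values are illustrative assumptions):

```r
library(mgcv)

# model GI: global smoother (m=2) plus plant-specific smoothers with
# individual penalties (by=Plant_uo, m=1) and a random effect intercept
CO2_modGI <- gam(log(uptake) ~ s(log(conc), k = 5, m = 2, bs = "tp") +
                   s(log(conc), by = Plant_uo, k = 5, m = 1, bs = "tp") +
                   s(Plant_uo, bs = "re", k = 12),
                 data = CO2, method = "REML")
```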
Figure 13 Functional relationships for the CO2 data estimated for model GI. s(log(conc)): the
global smoother; Plant_uo: plant-specific random effect intercepts. The remaining plots are a selected
subset of the plant-specific smoothers, indicating how the functional response of that plant differs from
the global smoother. Full-size DOI: 10.7717/peerj.6876/fig-13
Model S
Model S (shared smoothers) is model GS without the global smoother term; this type of model takes the form y ~ s(x, fac, bs="fs") or y ~ t2(x1, x2, fac, bs=c("tp", "tp", "re")) in mgcv. This model assumes all groups have the same smoothness, but that
the individual shapes of the smooth terms are not related. Here, we do not plot these
models; the model plots are very similar to the plots for model GS. This will not always be
the case. If in a study there are very few data points in each grouping level (relative to
the strength of the functional relationship of interest), estimates from model S will typically
be much more variable than from model GS; there is no way for the model to share
information on function shape between grouping levels without the global smoother. See
section “Computational and Statistical Issues When Fitting HGAMs” on computational
issues for more on how to choose between different models.
CO2_modS <- gam(log(uptake) ∼ s(log(conc), Plant_uo, k=5, bs="fs", m=2),
data=CO2, method="REML")
Model I
Model I is model GI without the first term: y∼fac+s(x, by=fac) or y∼fac+te(x1,x2,
by=fac) (as above, plots are very similar to model GI).
CO2_modI <- gam(log(uptake) ∼ s(log(conc), by=Plant_uo, k=5, bs="tp", m=2) +
s(Plant_uo, bs="re", k=12),
data=CO2, method="REML")
It is unclear how accurate parametric p-values are for comparing these models; there is uncertainty about what
degrees of freedom to assign to models with varying smoothness, and slightly different
model specifications may not result in nested models (See Wood (2017a) Section 6.12.4
and ?mgcv::anova.gam for more discussion on using GLRTs to compare GAMs).
Comparing models based on AIC is a more robust approach to comparing the different
model structures. There is well-developed theory of how to include effects of penalization
and smoothing parameter uncertainty when estimating the model complexity penalty
for AIC (Wood, Pya & Säfken, 2016). We demonstrate this approach in Table 1. Using
AIC, there is strong support for including among-group functional variability for
both the CO2 dataset and the bird_move dataset (compare models G vs. all other models).
For the CO2 dataset (Table 1A), there is relatively strong evidence that there is more
intergroup variability in smoothness than model GS allows, and weaker evidence that
model S or I (separate smoothers for all plants) show the best fit.
For the bird_move dataset (Table 1B), model GS (global smoother plus group-level smoothers with a shared penalty) gives the best fit of all models including a global smoother (which is good, as we simulated the data from a model with this structure!). However, model S (without a global term) still fits this data better than model GS based on AIC. This highlights an issue with AIC for selecting between models with and without a global smoother: as it is possible to fully recreate the global term by just allowing each group-level smoother to have a similar shape to one another (i.e., the global term is totally concurve with the group-level smoothers; see the section "Computational and Statistical Issues When Fitting HGAMs"), model selection criteria such as AIC may indicate that the extra parameters required to fit the global smoother are unnecessary [5].

[5] If it is important for a given study to determine if there is evidence for a significant global smooth effect, we recommend fitting model GS or GI, including the argument select=TRUE in the gam function. This has the effect of adding an extra penalty to each smooth term that penalizes functions in the null space of the penalty matrices for each smooth. By doing this, it is possible for mgcv to penalize all model terms to a zero effect, in effect doing variable selection (Marra & Wood, 2011). When select=TRUE, the significance of the global term can be found by looking at the significance of the term in summary.gam(model). Note that this can significantly increase the amount of time it takes to fit a model for data sets with a large number of penalty terms (such as model GI when the number of groups is high).

Given this issue with selecting global terms, we strongly recommend not selecting models based purely on AIC. Instead, model selection should be based on expert subject knowledge about the system, computational time, and most importantly, the inferential goals of the study. Table 1A indicates that models S and I (which do not have a global function) fit the CO2 data better than models with a global function, and that
EXAMPLES
We now demonstrate two worked examples on one data set to highlight how to use
HGAMs in practice, and to illustrate how to fit, test, and visualize each model. We will
demonstrate how to use these models to fit community data, to show when using a global
trend may or may not be justified, and to illustrate how to use these models to fit
seasonal time series.
For these examples, data are from a long-term study of seasonal dynamics of zooplankton, collected by Richard Lathrop. The data were collected from a chain of lakes in Wisconsin (Mendota, Monona, Kegonsa, and Waubesa) approximately bi-weekly from 1976 to 1994. They consist of samples of the zooplankton communities, taken from the deepest point of each lake via vertical tow. The data are provided by the Wisconsin Department of Natural Resources and their collection and processing are fully described in Lathrop (2000).
Zooplankton in temperate lakes often undergo seasonal cycles, in which the abundance of each species fluctuates over the course of the year, with each species typically showing a distinct seasonal pattern. The inferential aims of these
examples are to (i) estimate variability in seasonality among species in the community in a
single lake (Mendota), and (ii) estimate among-lake variability for the most abundant
taxon in the sample (Daphnia mendotae) across the four lakes. To enable evaluation
of out-of-sample performance, we split the data into testing and training sets. As there are
multiple years of data, we used data from the even years to fit (train) models, and the odd
years to test the fit.
Each record consists of counts of a given zooplankton taxon taken from a subsample
from a single vertical net tow, which was then scaled to account for the relative volume
of subsample vs. the whole net sample and the area of the net tow, giving an adjusted
population density (the density_adj variable used in the models below).
Model S
zoo_comm_modS <- gam(density_adj ~ s(taxon, year_f, bs="re") +
                       s(day, taxon, bs="fs", k=10, xt=list(bs="cc")),
                     data=zoo_train, knots=list(day=c(0, 365)),
                     family=Gamma(link="log"), method="REML",
                     drop.unused.levels=FALSE)
Model I
# Note that s(taxon, bs="re") has to be explicitly included here, as the
# day by taxon smoother does not include an intercept
zoo_comm_modI <- gam(density_adj ~ s(day, by=taxon, k=10, bs="cc") +
                       s(taxon, bs="re") + s(taxon, year_f, bs="re"),
                     data=zoo_train, knots=list(day=c(0, 365)),
                     family=Gamma(link="log"), method="REML",
                     drop.unused.levels=FALSE)
At this stage of the analysis (prior to model comparisons), it is useful to determine if any
of the fitted models adequately describe patterns in the data (i.e., goodness of fit testing).
Figure 14 Diagnostic plots for model I fitted to zooplankton community data in Lake Mendota.
(A) QQ-plot of residuals (black). Red line indicates the 1–1 line and gray bands correspond to the
expected 95% CI for the QQ plot, assuming the distribution is correct. (B) Deviance residuals vs. fitted
values (on the link scale). Full-size DOI: 10.7717/peerj.6876/fig-14
the k-index is a measure of the remaining pattern in the residuals, and the p-value is
calculated based on the distribution of the k-index after randomizing the order of the
residuals. Note that there is no p-value for the random effects smoothers s(taxon) and
s(taxon,year_f) as the p-value is calculated from simulation-based tests for
autocorrelation of the residuals. As taxon and year_f are treated as simple random effects
with no natural ordering, there is no meaningful way of checking for autocorrelation.
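The k-index and its p-value are produced by mgcv's basis-dimension check; a minimal sketch of running it on the community model fitted above:

```r
library(mgcv)

# Simulation-based check that the basis dimension (k) of each smoother is
# large enough; prints k', EDF, k-index, and p-value for each smooth term,
# alongside the diagnostic plots
gam.check(zoo_comm_modI)

# k.check() returns the same table without producing the plots
k.check(zoo_comm_modI)
```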
Differences between models S (shared smoothness between taxa) and I (different
smoothness for each taxon) seem to be driven by the low seasonality of Leptodiaptomus
siciloides relative to the other species, and by how this is captured by the more flexible model
I (Fig. 15). Still, both models show very similar fits to the training data. This implies
that the added complexity of different penalties for each species (model I) is unnecessary
here, which is consistent with the fact that model S has a lower AIC (4667) than model I
(4677), and that model S is somewhat better at predicting out-of-sample fits for all taxa
than model I (Table 3). Both models show significant predictive improvement compared
to the intercept-only model for all species except Keratella cochlearis (Table 3). This may
be driven by changing timing of the spring bloom for this species between training and
out-of-sample years (Fig. 15).
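The AIC comparison described above can be reproduced directly, assuming both community models have been fitted as shown earlier:

```r
# Compare approximate AIC for the shared-smoothness (S) and
# separate-smoothness (I) community models; lower is better
AIC(zoo_comm_modS, zoo_comm_modI)
```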
Next, we look at how to fit interlake variability in dynamics for just Daphnia mendotae.
Here, we will compare models G, GS, and GI to determine if a single global function is
appropriate for all four lakes, or if we can more effectively model variation between lakes
with a shared smoother and lake-specific smoothers.
Model G
zoo_daph_modG <- gam(density_adj ~ s(day, bs="cc", k=10) + s(lake, bs="re") +
                       s(lake, year_f, bs="re"),
                     data=daphnia_train, knots=list(day=c(0, 365)),
                     family=Gamma(link="log"), method="REML",
                     drop.unused.levels=FALSE)
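Model GS
The code for model GS is compared below but does not appear in this excerpt; a plausible specification, following the factor-smooth pattern used for model S above, would be:

```r
# Model GS (sketch): a global seasonal smoother plus lake-specific
# factor-smooth deviations sharing a single smoothness penalty
zoo_daph_modGS <- gam(density_adj ~ s(day, bs="cc", k=10) +
                        s(day, lake, bs="fs", k=10, xt=list(bs="cc")) +
                        s(lake, year_f, bs="re"),
                      data=daphnia_train, knots=list(day=c(0, 365)),
                      family=Gamma(link="log"), method="REML",
                      drop.unused.levels=FALSE)
```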
Model GI
zoo_daph_modGI <- gam(density_adj ~ s(day, bs="cc", k=10) + s(lake, bs="re") +
                        s(day, by=lake, k=10, bs="cc") +
                        s(lake, year_f, bs="re"),
                      data=daphnia_train, knots=list(day=c(0, 365)),
                      family=Gamma(link="log"), method="REML",
                      drop.unused.levels=FALSE)
Diagnostic plots from gam.check() indicate that there are no substantial patterns
comparing residuals to fitted values (not shown), and QQ-plots are similar to those from
the zooplankton community models; the residuals for all three models closely correspond
to the expected (Gamma) distribution, except at small values, where the observed
residuals are generally larger than expected (Fig. 16). As with the community data, this is
likely an artifact of the assumption we made of assigning zero observations a value of 1,000
(the lowest possible value), imposing an artificial lower bound on the observed counts.
There was also some evidence that the largest observed values were smaller than expected
given the theoretical distribution, but these fell within the 95% CI for expected deviations
from the 1–1 line (Fig. 16).
AIC values indicate that both model GS (1,093.71) and model GI (1,085.7) are better fits than
model G (1,097.62), with model GI fitting somewhat better than model GS.⁷ There does
not seem to be a large amount of interlake variability (the EDFs per lake are low in
models GS and GI). Plots for all three models (Fig. 17) show that Mendota, Monona, and
Kegonsa lakes are very close to the average and to one another for both models, but Waubesa
shows evidence of a more pronounced spring bloom and lower winter abundances.

Footnote 7: When comparing models via AIC, we use the standard rule of thumb from Burnham & Anderson (1998), where models that differ by two units or less from the lowest-AIC model have substantial support, and those differing by more than four units have less support.
Model GI is able to predict as well as or better than model G or GS for all lakes (Table 4),
indicating that allowing for interlake variation in seasonal dynamics improved model
prediction. All three models predicted dynamics in Lake Mendota and Lake Monona
significantly better than the intercept-only model (Table 4). None of the models did well at
predicting Lake Waubesa dynamics out-of-sample compared to a simple model
with only a lake-specific intercept and no intra-annual variability, but this was due to
the influence of a single large outlier in the out-of-sample data that occurred after the
spring bloom, at day 243 (Fig. 17; note that the y-axis is log-scaled). However, barring a
more detailed investigation into the cause of this large value, we cannot arbitrarily exclude
this outlier from the goodness-of-fit analysis; it may be due either to measurement error or
to a true high late-season Daphnia density that our model was not able to predict.
[Figure 15: eight panels, one per taxon (including D. thomasi, K. cochlearis, L. siciloides, and M. edax); x-axis: Day of Year; y-axis: population density.]
Figure 15 Species-specific seasonal dynamics for the eight zooplankton species tracked in Lake
Mendota. Black points indicate individual plankton observations in the training data, and gray points
are observations in held-out years used for model validation. Lines indicate predicted average values for
model S (green) and model I (red). Ribbons indicate ±2 standard errors around the mean.
Full-size DOI: 10.7717/peerj.6876/fig-15
Figure 16 QQ-plots for model G (A), GS (B), and GI (C) fitted to Daphnia data across the four lakes.
Red line indicates the 1-1 line, black points are observed model residuals, and gray bands correspond to
the expected 95% CI for the QQ plot, assuming the distribution is correct.
Full-size DOI: 10.7717/peerj.6876/fig-16
Bias-variance trade-offs
The bias-variance trade-off is a fundamental concept in statistics. When trying to estimate
any relationship (in the case of GAMs, a smooth relationship between predictors and data)
bias measures how far, on average, an estimate is from the true value. The variance of
an estimator corresponds to how much that estimator would fluctuate if applied to multiple
different samples of the same size taken from the same population. These two properties
tend to be traded off when fitting models. For instance, rather than estimating a population
mean from data, we could simply use a predetermined fixed value regardless of the observed
data.⁸ This estimate would have no variance (as it is always the same regardless of what
the data look like) but would have high bias unless the true population mean happened to
equal the fixed value we chose. Penalization is useful because using a penalty term slightly
increases model bias, but can substantially decrease variance (Efron & Morris, 1977).

Footnote 8: While this example may seem contrived, this is exactly what happens when we assume a given regression coefficient is equal to zero (and thus exclude it from a model).
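This trade-off can be seen in a small simulation (purely illustrative, not from the paper's code): a predetermined fixed "estimator" has zero variance but nonzero bias, while the sample mean is unbiased but has nonzero variance.

```r
set.seed(1)
true_mean <- 5
# Draw many samples of size 20 and apply two estimators of the mean
ests <- replicate(1000, {
  x <- rnorm(20, mean = true_mean, sd = 2)
  c(fixed = 3,                 # predetermined value, ignores the data
    sample_mean = mean(x))     # classical unbiased estimator
})
rowMeans(ests) - true_mean  # bias: fixed = -2 exactly, sample mean near 0
apply(ests, 1, var)         # variance: fixed = 0, sample mean near 0.2 (= 4/20)
```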
Figure 17 Raw data (points) and fitted models (lines) for D. mendotae data, with one panel per lake
(Kegonsa, Mendota, Monona, Waubesa; x-axis: Day of Year; y-axis: population density, log-scaled).
Black points indicate individual plankton observations in the training data, and gray points are
observations in held-out years used for model validation. Green line: model G (no inter-lake variation
in dynamics); orange line: model GS (interlake variation with similar smoothness); purple line: model
GI (varying smoothness among lakes). Shaded bands are drawn at ±2 standard errors around each
model. Full-size DOI: 10.7717/peerj.6876/fig-17

Table 4 Out-of-sample predictive ability for models G, GS, and GI applied to the D. mendotae dataset.
Columns: Lake; Total deviance of out-of-sample data.

In GAMs, the bias-variance trade-off is managed by the terms of the penalty matrix and,
equivalently, by the random effect variances in HGLMs. Larger penalties correspond to lower
variance, as the estimated function is unable to wiggle a great deal, but also correspond to
higher bias unless the true function is close to the null space for a given smoother (e.g., a
straight line for TPRS with second-derivative penalties, or zero for a random effect). The
computational machinery used by mgcv to fit smooth terms is designed to find penalty
terms that best trade off bias for variance, yielding a smoother that can effectively predict
new data.
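The effect of the penalty on effective wiggliness can be seen by fixing the smoothing parameter by hand (an illustrative sketch; the simulated data and names are assumptions, not from the paper's code):

```r
library(mgcv)
set.seed(2)
dat <- data.frame(x = seq(0, 1, length.out = 200))
dat$y <- sin(2 * pi * dat$x) + rnorm(200, sd = 0.3)

# Small penalty: low bias, high variance (wiggly fit)
m_wiggly <- gam(y ~ s(x, k = 20), data = dat, sp = 1e-6)
# Large penalty: high bias, low variance (fit shrunk toward the null
# space of the TPRS penalty, i.e., a straight line)
m_smooth <- gam(y ~ s(x, k = 20), data = dat, sp = 1e6)

sum(m_wiggly$edf)  # high total effective degrees of freedom
sum(m_smooth$edf)  # close to 2 (intercept plus a near-linear term)
```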
The bias-variance trade-off comes into play with HGAMs when choosing whether to fit
separate penalties for each group level or assign a common penalty for all group levels (i.e.,
deciding between models GS & GI or models S & I). If the functional relationships we are
Figure 18 (A) Illustration of bias that can arise from assuming equal smoothness for all group levels
(model S, red lines) vs. allowing for intergroup variation in smoothness (model I, blue lines) across a
range of signal-to-noise ratios, holding the group-level signals constant. The true function for each
group level is shown in black. (B) Distribution of wiggliness (as measured by the integral of the squared
second derivative) of the estimated function for each replicate for each group level for model S (red) and
model I (blue), vs. the true wiggliness of the function for that grouping level, with the black line indicating
the one-to-one line. Points below (above) the black line indicate that a given model estimated the curve as
less (more) wiggly than the true curve used to generate the data. Estimated wiggliness less than 10−3 was
truncated for visual clarity, as mgcv estimated effectively straight lines for several groups, corresponding
to a wiggliness of 0, which would not appear on a log-scaled plot.
Full-size DOI: 10.7717/peerj.6876/fig-18
Complexity-computation trade-offs
The more flexible a model is, the larger an effective parameter space any fitting software
has to search. It can be surprisingly easy to use massive computational resources trying
to fit models to even small datasets. While we typically want to select models based on
their fit and our inferential goals, computing resources can often act as an effective upper
bound on model complexity. For a given data set, assuming a fixed family and link
function, the time taken to estimate an HGAM will depend (roughly) on four factors:
(i) the number of coefficients to be estimated (and thus the number of basis functions
chosen), (ii) the number of smoothing parameters to be estimated, (iii) whether the model
needs to estimate both a global smoother and group-level smoothers, and (iv) the
algorithm and fitting criteria used to estimate parameters.
The most straightforward factor that will affect the amount of computational resources is
the number of parameters in the model. Adding group-level smoothers (moving from model
G to the other models) means that there will be more regression parameters to estimate.
For a dataset with g different groups and n data points, fitting a model with just a global
smoother, y ~ s(x, k=k), will require k coefficients, and takes O(nk²) operations to evaluate.
Fitting the same data using a group-level smoother (model S, y ~ s(x, fac, bs="fs", k=k))
will require O(nk²g²) operations to evaluate. In effect, adding a group-level smoother
will increase computational cost by an order of the number of groups squared. The effect of
this is visible in the examples we fit in the section “What are Hierarchical GAMs?.” Table 5
compares the relative time it takes to compute model G vs. the other models.
Table 5 Relative time taken to fit each model (compared to model G), with the number of coefficients and penalties estimated, for the two example data sets.

Model    Relative time    Coefficients    Penalties
A. CO2 data
G              1               17              2
GS             7               65              3
GI            14               65             14
S              5               61              3
I             16               61             13
B. Bird movement data
G              1               90              2
GS           510              540              8
GI           390              624             14
S            820              541              6
I             70              535             12

Note:
All times are scaled relative to the length of time model G takes to fit to that data set. The number of
coefficients measures the total number of model parameters (including intercepts). The number of
penalties is the total number of unique penalty values estimated by the model.
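The scaling described above can be checked roughly in practice (an illustrative sketch; the simulated data and object names are not from the paper's code):

```r
library(mgcv)
set.seed(3)
g <- 8; n_per <- 50
dat <- data.frame(x   = runif(g * n_per),
                  fac = factor(rep(1:g, each = n_per)))
dat$y <- sin(2 * pi * dat$x) + rnorm(g * n_per, sd = 0.3)

# Model G: global smoother only
t_G <- system.time(gam(y ~ s(x, k = 10), data = dat, method = "REML"))
# Model S: group-level factor-smooths with a shared penalty
t_S <- system.time(gam(y ~ s(x, fac, bs = "fs", k = 10),
                       data = dat, method = "REML"))
t_S["elapsed"] / t_G["elapsed"]  # relative cost of the group-level smoother
```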
One way to deal with this issue would be to reduce the number of basis functions used
when fitting group-level smoothers when the number of groups is large, limiting the
flexibility of the model. It can also make sense to use more computationally efficient basis
functions when fitting large data sets, such as P-splines (Wood, 2017b) or cubic regression
splines, rather than TPRS, which entail greater computational costs (Wood, 2017a).
Including a global smoother (models GS and GI compared to models S and I) will not
generally substantially affect the number of coefficients that need to be estimated (Table 5).
Adding a global term will add at most k extra terms. It can be substantially less than
that, as mgcv drops basis functions from co-linear smoothers to ensure that the model
matrix is full rank.
Adding additional smoothing parameters (moving from model GS to GI, or moving
from model S to I) is more costly than increasing the number of coefficients to estimate, as
estimating smoothing parameters is computationally intensive (Wood, 2011). This means
that models GS and S will generally be substantially faster than GI and I when the
number of groups is large, as models GI and I fit a separate set of penalties for each group
level. The effect of this is visible in comparing the time it takes to fit model GS to model GI
(which has a smoother for each group) or models S and I for the CO2 example data
(Table 5). Note that this will not hold in all cases. For instance, model GI and I take less
time to fit the bird movement data than models GS or S do (Table 5B).
Figure 19 Elapsed time to estimate the same model using each of the four approaches. Each data set
was generated with 20 observations per group using a unimodal global function and random group-
specific functions consisting of an intercept, a quadratic term, and logistic trend for each group.
Observation error was normally distributed. Models were fit using model GS: y ~ s(x, k=10, bs="cp") +
s(x, fac, k=10, bs="fs", xt=list(bs="cp"), m=1). All models were run on a single core.
Full-size DOI: 10.7717/peerj.6876/fig-19
The first option is the function bam(), which requires the fewest changes to existing
code written using the gam() function. bam() is designed to improve performance
when fitting large data sets via two mechanisms. First, it saves on memory needed to
compute a given model by using a random subset of the data to calculate the basis
functions. It then blocks the data and updates model fit within each block (Wood,
Goude & Shaw, 2015). While this is primarily designed to reduce memory usage, it can
also substantially reduce computation time. Second, when using bam()’s default
method="fREML" (“fast REML”), you can use the discrete=TRUE option:
this first bins continuous covariates into a smaller number of discrete values before
estimating the model, substantially reducing the amount of computation needed (Wood
et al., 2017; see ?mgcv::bam for more details). Setting up the five model types (Fig. 4)
in bam() uses the same code as we have previously covered; the only difference is
that you use the bam() instead of gam() function, and have the additional option of
discretizing your covariates.
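For instance, model S for the zooplankton community data could be refitted with bam() (a sketch reusing the specification from earlier in this section):

```r
library(mgcv)

# Same formula as the gam()-based model S; discrete=TRUE bins continuous
# covariates before fitting to reduce computation (requires method="fREML")
zoo_comm_modS_bam <- bam(density_adj ~ s(taxon, year_f, bs="re") +
                           s(day, taxon, bs="fs", k=10, xt=list(bs="cc")),
                         data=zoo_train, knots=list(day=c(0, 365)),
                         family=Gamma(link="log"), method="fREML",
                         discrete=TRUE, drop.unused.levels=FALSE)
```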
bam() has a larger computational overhead than gam(); for small numbers of groups it
can be slower than gam() (Fig. 19). As the number of groups increases, computational
time for bam() increases more slowly than for gam(); in our simulation tests, when
the number of groups is greater than 16, bam() can be upward of an order of magnitude
faster (Fig. 19). Note that bam() can be somewhat less computationally stable when
estimating these models (i.e., less likely to converge). While base bam() (not fit using
discrete=TRUE) is slower than the other approaches shown in Fig. 19, that does not
imply that bam() is a worse choice in general; it is designed to avoid memory limitations
when working with big data rather than explicitly speeding up model fitting. The bam()
functions would likely show much better relative performance when the number of
Figure 20 Average global function used for simulating bird_move data set (A) compared to the
fitted global function for a GS model estimated with either a te() smoother (B) or a t2()
smoother with full=TRUE (C) for group-level terms. Both group-level smoothers used the same
model specification as in section “What are Hierarchical GAMs?” except for the type of tensor product
used. Colors indicate the value of the linear predictor of bird density at each location in each week.
Full-size DOI: 10.7717/peerj.6876/fig-20
perfect concurvity. For tensor-product smoothers, mgcv will generally drop K terms from
the group-level smoother, where K is the number of basis functions in the global smoother.
The total number of terms dropped will depend on the smoothers used for the global
and group-level terms. This means that some groups will have a different range of potential
deviations from the global smoother than others. This has the effect of also somewhat
altering the shape of the global smooth relative to what it would be based on model G
(the average curve through all the data); this will be a larger issue when the number of basis
functions in the global smooth and the number of group levels are small. We have tested the
effect of this issue on our simulated bird_move data set and did not find that it led to
substantial bias in estimating the shape of the global smoother, relative to the amount of
bias inherent in any smooth estimation method¹¹ (Fig. 20). As noted in the section
“What are Hierarchical GAMs?”, we found that t2() tensor product smoothers with full
penalties (full=TRUE in the t2() function) for group-level smoothers showed the best
performance at recreating the true global function from our simulated bird_move data set,
compared to other possible types of tensor product. Using te() tensor products for the
group-level terms led to the global smoother being heavily smoothed relative to the actual
average function used to simulate the data (Fig. 20). However, more work on when
these models accurately reconstruct global effects is still needed.

Footnote 11: It is also important to consider here that the concept of a “global function” is a bit fuzzy itself, and there are many possible ways to define what a global function is (as we discussed in the section “What are Hierarchical GAMs?”). The global function being fit in all of these models is actually an average function, and its shape will depend on the sampling structure of any given study. In our view, the global function fitted in these models should generally be viewed as a useful summary of an average trend across a wide range of groups, and would only represent an actual average relationship if the grouping levels were drawn at random from some underlying population and if there was scientific reason to believe that individual groups should differ from the mean only via some additive function.

There is currently no way to disable dropping side constraints for these terms in mgcv.
In cases where accurately estimating the global smoother or group-level deviations is
essential, we recommend fitting either model G, model GS with factor-smooth group-level
terms (bs="fs", which can also be used to model multi-dimensional isotropic group-level
smoothers), or model GI. Alternatively, there is specialized functional regression software,
such as the pffr function in the refund package (Scheipl, Gertheiss & Greven, 2016),
which does not impose these side constraints; instead the package uses a modified type
of tensor product to ensure that group-level terms sum to zero at each level of the
CONCLUSION
Hierarchical GAMs are a powerful tool to model intergroup variability, and we have
attempted to illustrate some of the range and possibilities that these models are capable of,
how to fit them, and some issues that may arise during model fitting and testing. Specifying
these models, and techniques for fitting them, are active areas of statistical research,
so this paper should be viewed as a jumping-off point for these models, rather than an
end-point; we refer the reader to the rich literature on GAMs (Wood, 2017a) and
functional regression (Ramsay & Silverman, 2005; Kaufman & Sain, 2010; Scheipl,
Staicu & Greven, 2014) for more on these ideas.
ACKNOWLEDGEMENTS
The authors would like to thank Carly Ziter, Tiago Marques, Jake Walsh, Geoff Evans,
Paul Regular, Laura Wheeland, and Isabella Ghement for their thoughtful feedback on
earlier versions of this manuscript, and the Ecological Society of America for hosting
the mgcv workshops that this work started from. The authors also thank the three
reviewers (Paul Bürkner, Fabian Scheipl, and Matteo Fasiolo) for their insightful and
useful feedback.
All authors contributed to developing the initial idea for this paper, and to writing and
editing the manuscript. Author order after the first author was chosen using the code:
set.seed(11)
sample(c('Miller','Ross','Simpson'))
Funding
This work was funded by Fisheries and Oceans Canada, Natural Science and Engineering
Research Council of Canada (NSERC) Discovery Grant (RGPIN-2014-04032), by OPNAV
N45 and the SURTASS LFA Settlement Agreement, managed by the U.S. Navy’s Living
Marine Resources Program under Contract No. N39430-17-C-1982, and by the USAID
PREDICT-2 Program. There was no additional external funding received for this study. The
funders had no role in study design, data collection and analysis, decision to publish, or
preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors:
Fisheries and Oceans Canada, Natural Science and Engineering Research Council of
Canada (NSERC) Discovery Grant: RGPIN-2014-04032.
OPNAV N45 and the SURTASS LFA Settlement Agreement, managed by the U.S. Navy’s
Living Marine Resources program: N39430-17-C-1982.
USAID PREDICT-2 Program.
Competing Interests
Eric Pedersen is an employee of Fisheries and Oceans Canada, and Noam Ross is employed
by EcoHealth Alliance, a nonprofit organization.
Author Contributions
Eric J. Pedersen conceived and designed the experiments, performed the experiments,
analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the
paper, approved the final draft.
David L. Miller conceived and designed the experiments, prepared figures and/or tables,
authored or reviewed drafts of the paper, approved the final draft.
Gavin L. Simpson conceived and designed the experiments, prepared
figures and/or tables, authored or reviewed drafts of the paper, approved the
final draft.
Noam Ross conceived and designed the experiments, prepared figures and/or tables,
authored or reviewed drafts of the paper, approved the final draft.
Data Availability
The following information was supplied regarding data availability:
All data and code for this article are available via GitHub:
https://github.com/eric-pedersen/mixed-effect-gams
Supplemental Information
Supplemental information for this article can be found online at http://dx.doi.org/10.7717/
peerj.6876#supplemental-information.
REFERENCES
Baayen RH, Van Rij J, De Cat C, Wood S. 2018. Autocorrelated errors in experimental data
in the language sciences: some solutions offered by generalized additive mixed models.
In: Speelman D, Heylen K, Geeraerts D, eds. Mixed-Effects Regression Models in Linguistics.
Switzerland: Springer, 49–69.
Bates D, Mächler M, Bolker B, Walker S. 2015. Fitting linear mixed-effects models using lme4.
Journal of Statistical Software 67:1–48.
Bolker BM, Brooks ME, Clark CJ, Geange SW, Poulsen JR, Stevens MHH, White J-SS. 2009.
Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology
& Evolution 24(3):127–135 DOI 10.1016/j.tree.2008.10.008.
Bürkner P-C. 2017. brms: an R package for Bayesian multilevel models using Stan.
Journal of Statistical Software 80:1–28.
Burnham KP, Anderson DR. 1998. Model selection and inference: a practical information-theoretic
approach. New York: Springer Science & Business Media.
Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J,
Li P, Riddell A. 2017. Stan: a probabilistic programming language. Journal of Statistical
Software 76(1):1–32 DOI 10.18637/jss.v076.i01.
De Boor C. 1978. A Practical Guide to Splines. Switzerland: Springer.
Dormann CF, Calabrese JM, Guillera-Arroita G, Matechou E, Bahn V, Bartoń K, Beale CM,
Ciuti S, Elith J, Gerstner K, Guelat J, Keil P, Lahoz-Monfort JJ, Pollock LJ, Reineking B,
Roberts DR, Schröder B, Thuiller W, Warton DI, Wintle BA, Wood SN, Wüest RO,
Hartig F. 2018. Model averaging in ecology: a review of Bayesian, information-theoretic,
and tactical approaches for predictive inference. Ecological Monographs 88(4):485–504
DOI 10.1002/ecm.1309.
Efron B, Morris C. 1977. Stein’s paradox in statistics. Scientific American 236(5):119–127
DOI 10.1038/scientificamerican0577-119.
Forster M, Sober E. 2011. AIC scores as evidence: a Bayesian interpretation. In: Bandy
opadhyay PS, Forster MR, eds. Philosophy of Statistics. Handbook of the Philosophy of Science.
Boston: Elsevier B.V., 535–549.
Gelman A. 2006. Multilevel (hierarchical) modeling: what it can and cannot do. Technometrics
48(3):432–435 DOI 10.1198/004017005000000661.
Gelman A, Carlin J, Stern H, Dunson D, Vehtari A, Rubin D. 2013. Bayesian data analysis.
Third Edition. New York: Taylor & Francis.
Greven S, Scheipl F. 2017. A general framework for functional regression modelling. Statistical
Modelling: An International Journal 17(1–2):1–35 DOI 10.1177/1471082X16681317.
Hastie TJ, Tibshirani RJ. 1990. Generalized additive models. New York: Taylor & Francis.
Kaufman CG, Sain SR. 2010. Bayesian functional ANOVA modeling using Gaussian process
prior distributions. Bayesian Analysis 5(1):123–149 DOI 10.1214/10-ba505.