
Tutorials in Quantitative Methods for Psychology

2012, Vol. 8(1), p. 52-69.

An introduction to hierarchical linear modeling

Heather Woltman, Andrea Feldstain, J. Christine MacKay, Meredith Rocchi


University of Ottawa

This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis. The first section of the tutorial defines HLM, clarifies its purpose, and states its advantages. The second section explains the mathematical theory, equations, and conditions underlying HLM. HLM hypothesis testing is performed in the third section. Finally, the fourth section provides a practical example of running HLM, with which readers can follow along. Throughout this tutorial, emphasis is placed on providing a straightforward overview of the basic principles of HLM.

* Please note that Heather Woltman, Andrea Feldstain, and J. Christine MacKay all contributed substantially to this manuscript and should all be considered first authors. Heather Woltman, Andrea Feldstain, Meredith Rocchi, School of Psychology, University of Ottawa. J. Christine MacKay, University of Ottawa Institute of Mental Health Research, and School of Psychology, University of Ottawa. Correspondence concerning this paper should be addressed to Heather Woltman, School of Psychology, University of Ottawa, 136 Jean-Jacques Lussier, Room 3002, Ottawa, Ontario, Canada K1N 6N5. Tel: (613) 562-5800 ext. 3946. Email: [email protected]. The authors would like to thank Dr. Sylvain Chartier and Dr. Nicolas Watier for their input in the preparation of this manuscript. As well, the authors would like to thank Dr. Veronika Huta for sharing her expertise in the area of hierarchical linear modeling, as well as for her continued guidance and support throughout the preparation of this manuscript.

Hierarchical levels of grouped data are a commonly occurring phenomenon (Osborne, 2000). For example, in the education sector, data are often organized at student, classroom, school, and school district levels. Perhaps less intuitively, in meta-analytic research, participant, procedure, and results data are nested within each experiment in the analysis. In repeated measures research, data collected at different times and under different conditions are nested within each study participant (Raudenbush & Bryk, 2002; Osborne, 2000). Analysis of hierarchical data is best performed using statistical techniques that account for the hierarchy, such as Hierarchical Linear Modeling.

Hierarchical Linear Modeling (HLM) is a complex form of ordinary least squares (OLS) regression that is used to analyze variance in the outcome variables when the predictor variables are at varying hierarchical levels; for example, students in a classroom share variance according to their common teacher and common classroom. Prior to the development of HLM, hierarchical data was commonly assessed using fixed parameter simple linear regression techniques; however, these techniques were insufficient for such analyses due to their neglect of the shared variance. An algorithm to facilitate covariance component estimation for unbalanced data was introduced in the early 1980s. This development allowed for widespread application of HLM to multilevel data analysis (for development of the algorithm see Dempster, Laird, & Rubin, 1977; for its application to HLM see Dempster, Rubin, & Tsutakawa, 1981). Following this advancement in statistical theory, HLM's popularity flourished (Raudenbush & Bryk, 2002; Lindley & Smith, 1972; Smith, 1973).

HLM accounts for the shared variance in hierarchically structured data: The technique accurately estimates lower-level slopes (e.g., student level) and their implementation in estimating higher-level outcomes (e.g., classroom level; Hofmann, 1997). HLM is prevalent across many domains, and is frequently used in the education, health, social work, and business sectors. Because development of this statistical method occurred simultaneously across many fields, it has come to be known by several names, including multilevel-, mixed level-, mixed linear-, mixed effects-, random effects-, random coefficient (regression)-, and (complex) covariance components-modeling (Raudenbush & Bryk, 2002). These labels all describe the same advanced regression technique that is HLM. HLM simultaneously investigates relationships within and between hierarchical levels of grouped data, thereby making it more efficient at accounting for variance among variables at different levels than other existing analyses.

Example

Throughout this tutorial we will make use of an example to illustrate our explanation of HLM. Imagine a researcher asks the following question: What school-, classroom-, and student-related factors influence students' Grade Point Average? This research question involves a hierarchy with three levels. At the highest level of the hierarchy (level-3) are school-related variables, such as a school's geographic location and annual budget. Situated at the middle level of the hierarchy (level-2) are classroom variables, such as a teacher's homework assignment load, years of teaching experience, and teaching style. Level-2 variables are nested within level-3 groups and are impacted by level-3 variables. For example, schools (level-3) that are in remote geographic locations (level-3 variable) will have smaller class sizes (level-2) than classes in metropolitan areas, thereby affecting the quality of personal attention paid to each student and noise levels in the classroom (level-2 variables).

Variables at the lowest level of the hierarchy (level-1) are nested within level-2 groups and share in common the impact of level-2 variables. In our example, student-level variables such as gender, intelligence quotient (IQ), socioeconomic status, self-esteem rating, behavioural conduct rating, and breakfast consumption are situated at level-1. To summarize, in our example students (level-1) are situated within classrooms (level-2) that are located within schools (level-3; see Table 1). The outcome variable, grade point average (GPA), is also measured at level-1; in HLM, the outcome variable of interest is always situated at the lowest level of the hierarchy (Castro, 2002).

Table 1. Factors at each hierarchical level that affect students' Grade Point Average (GPA)

Hierarchical Level   Example of Hierarchical Level   Example Variables
Level-3              School Level                    School's geographic location; Annual budget
Level-2              Classroom Level                 Class size; Homework assignment load; Teaching experience; Teaching style
Level-1              Student Level                   Gender; Intelligence Quotient (IQ); Socioeconomic status; Self-esteem rating; Behavioural conduct rating; Breakfast consumption; GPAª

ª The outcome variable is always a level-1 variable.

For simplicity, our example supposes that the researcher wants to narrow the research question to two predictor variables: Do student breakfast consumption and teaching style influence student GPA? Although GPA is a single and continuous outcome variable, HLM can accommodate multiple continuous or discrete outcome variables in the same analysis (Raudenbush & Bryk, 2002).

Methods for Dealing with Nested Data

An effective way of explaining HLM is to compare and contrast it to the methods used to analyze nested data prior to HLM's development. These methods, disaggregation and aggregation, were referred to in our introduction as simple linear regression techniques that did not properly account for the shared variance that is inherent when dealing with hierarchical information. While historically the use of disaggregation and aggregation made analysis of hierarchical data possible, these approaches resulted in the incorrect partitioning of variance to variables, dependencies in the data, and an increased risk of making a Type I error (Beaubien, Hamman, Holt, & Boehm-Davis, 2001; Gill, 2003; Osborne, 2000).

Disaggregation

Disaggregation of data deals with hierarchical data issues by ignoring the presence of group differences. It considers all relationships between variables to be context free and situated at level-1 of the hierarchy (i.e., at the individual level). Disaggregation thereby ignores the presence of possible between-group variation (Beaubien et al., 2001; Gill, 2003; Osborne, 2000).

In the example we provided earlier of a researcher investigating whether the level-1 variable breakfast consumption affects student GPA, disaggregation would entail studying level-2 and level-3 variables at level-1. All students in the same class would be assigned the same mean classroom-related scores (e.g., homework assignment load, teaching experience, and teaching style ratings), and all students in the same school would be assigned the same mean school-related scores (e.g., school geographic location and annual budget ratings; see Table 2).

Table 2. Sample dataset using the disaggregation method, with level-2 and level-3 variables excluded from the data (dataset is adapted from an example by Snijders & Bosker, 1999)

Student ID   Classroom ID   School ID   GPA Score   Breakfast Consumption Score
(Level-1)    (Level-2)      (Level-3)   (Level-1)   (Level-1)
1            1              1           5           1
2            1              1           7           3
3            2              1           4           2
4            2              1           6           4
5            3              1           3           3
6            3              1           5           5
7            4              1           2           4
8            4              1           4           6
9            5              1           1           5
10           5              1           3           7

By bringing upper level variables down to level-1, shared variance is no longer accounted for and the assumption of independence of errors is violated. If teaching style influences student breakfast consumption, for example, the effects of the level-1 (student) and level-2 (classroom) variables on the outcome of interest (GPA) cannot be disentangled. In other words, the impact of being taught in the same classroom on students is no longer accounted for when partitioning variance using the disaggregation approach. Dependencies in the data remain uncorrected, the assumption of independence of observations required for simple regression is violated, statistical tests are based only on the level-1 sample size, and the risk of partitioning variance incorrectly and making inaccurate statistical estimates increases (Beaubien et al., 2001; Gill, 2003; Osborne, 2000). As a general rule, HLM is recommended over disaggregation for dealing with nested data because it addresses each of these statistical limitations.

In Figure 1, depicting the relationship between breakfast consumption and student GPA using disaggregation, the predictor variable (breakfast consumption) is negatively related to the outcome variable (GPA). Despite (X, Y) units being situated variably above and below the regression line, this method of analysis indicates that, on average, unit increases in a student's breakfast consumption result in a lowering of that student's GPA.

Figure 1. The relationship between breakfast consumption and student GPA using the disaggregation method. Figure is adapted from an example by Snijders & Bosker (1999) and Stevens (2007).

Aggregation

Aggregation of data deals with the issues of hierarchical data analysis differently than disaggregation: Instead of ignoring higher level group differences, aggregation ignores lower level individual differences. Level-1 variables are raised to higher hierarchical levels (e.g., level-2 or level-3) and information about individual variability is lost. In aggregated statistical models, within-group variation is ignored and individuals are treated as homogenous entities (Beaubien et al., 2001; Gill, 2003; Osborne, 2000). To the researcher investigating the impact of breakfast consumption on student GPA, this approach changes the research question (Osborne, 2000).

Mean classroom GPA becomes the new outcome variable of interest, rather than student GPA. Also, variation in students' breakfast habits is no longer measurable; instead, the researcher must use mean classroom breakfast consumption as the predictor variable (see Table 3 and Figure 2). Up to 80-90% of variability due to individual differences may be lost using aggregation, resulting in dramatic misrepresentations of the relationships between variables (Raudenbush & Bryk, 1992). HLM is generally recommended over aggregation for dealing with nested data because it effectively disentangles individual and group effects on the outcome variable.

Table 3. Sample dataset using the aggregation method, with level-1 variables excluded from the data (dataset is adapted from an example by Snijders & Bosker, 1999)

Teacher ID   Classroom GPA   Classroom Breakfast Consumption
(Level-2)    (Level-2)       (Level-2)
1            6               2
2            5               3
3            4               4
4            3               5
5            2               6

In Figure 2, depicting the relationship between classroom breakfast consumption and classroom GPA using aggregation, the predictor variable (breakfast consumption) is again negatively related to the outcome variable (GPA). In this method of analysis, all (X, Y) units are situated on the regression line, indicating that unit increases in a classroom's mean breakfast consumption perfectly predict a lowering of that classroom's mean GPA. Although a negative relationship between breakfast consumption and GPA is found using both disaggregation and aggregation techniques, breakfast consumption is found to impact GPA more unfavourably using aggregation.

Figure 2. The relationship between classroom breakfast consumption and classroom GPA using the aggregation method. Figure is adapted from an example by Snijders & Bosker (1999) and Stevens (2007).

HLM

Figure 3 depicts the relationship between breakfast consumption and student GPA using HLM. Each level-1 (X, Y) unit (i.e., each student's GPA and breakfast consumption) is identified by its level-2 cluster (i.e., that student's classroom). Each level-2 cluster's slope (i.e., each classroom's slope) is also identified and analyzed separately. Using HLM, both the within- and between-group regressions are taken into account to depict the relationship between breakfast consumption and GPA. The resulting analysis indicates that breakfast consumption is positively related to GPA at level-1 (i.e., at the student level) but that the intercepts for these slope effects are influenced by level-2 factors [i.e., students' breakfast consumption and GPA (X, Y) units are also affected by classroom level factors]. Although disaggregation and aggregation methods indicated a negative relationship between breakfast consumption and GPA, HLM indicates that unit increases in breakfast consumption actually positively impact GPA.

Figure 3. The relationship between breakfast consumption and student GPA using HLM. Figure is adapted from an example by Snijders & Bosker (1999) and Stevens (2007).
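To make this contrast concrete with the toy data from Tables 2 and 3, the short sketch below (our own illustration in Python with pandas and NumPy, not part of the original tutorial) reproduces the pattern described above: the pooled, disaggregated slope and the slope through the classroom means are both negative, while the slope within every single classroom is positive.

```python
import numpy as np
import pandas as pd

# Toy data from Table 2: 10 students nested in 5 classrooms (all in school 1).
data = pd.DataFrame({
    "classroom": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "gpa":       [5, 7, 4, 6, 3, 5, 2, 4, 1, 3],
    "breakfast": [1, 3, 2, 4, 3, 5, 4, 6, 5, 7],
})

# Disaggregation: a single pooled regression that ignores classroom membership.
pooled_slope = np.polyfit(data["breakfast"], data["gpa"], 1)[0]        # about -0.33

# Aggregation: regress classroom mean GPA on classroom mean breakfast (Table 3).
means = data.groupby("classroom").mean()
aggregated_slope = np.polyfit(means["breakfast"], means["gpa"], 1)[0]  # -1.0

# The within-classroom relationship that HLM models at level-1.
within_slopes = data.groupby("classroom").apply(
    lambda g: np.polyfit(g["breakfast"], g["gpa"], 1)[0])              # +1.0 in each class

print(pooled_slope, aggregated_slope, within_slopes.tolist())
```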

As demonstrated, HLM takes into consideration the impact of factors at their respective levels on an outcome of interest. It is the favored technique for analyzing hierarchical data because it shares the advantages of disaggregation and aggregation without introducing the same disadvantages.

As highlighted in this example, HLM can be ideally suited for the analysis of nested data because it identifies the relationship between predictor and outcome variables, by taking both level-1 and level-2 regression relationships into account. Readers who are interested in exploring the differences yielded by aggregation and disaggregation methods of analysis compared to HLM are invited to experiment with the datasets provided. Level-1 and level-2 datasets are provided to allow readers to follow along with the HLM tutorial in section 4 and to practice running an HLM. An aggregated version of these datasets is also provided for readers who would like to compare the results yielded from an HLM to those yielded from a regression.

In addition to HLM's ability to assess cross-level data relationships and accurately disentangle the effects of between- and within-group variance, it is also a preferred method for nested data because it requires fewer assumptions to be met than other statistical methods (Raudenbush & Bryk, 2002). HLM can accommodate non-independence of observations, a lack of sphericity, missing data, small and/or discrepant group sample sizes, and heterogeneity of variance across repeated measures. Effect size estimates and standard errors remain undistorted and the potentially meaningful variance overlooked using disaggregation or aggregation is retained (Beaubien, Hamman, Holt, & Boehm-Davis, 2001; Gill, 2003; Osborne, 2000).

A disadvantage of HLM is that it requires large sample sizes for adequate power. This is especially true when detecting effects at level-1. However, higher-level effects are more sensitive to increases in the number of groups than to increases in observations per group. As well, HLM can only handle missing data at level-1 and removes groups with missing data if they are at level-2 or above. For both of these reasons, it is advantageous to increase the number of groups as opposed to the number of observations per group. A study with thirty groups of thirty observations each (n = 900) can have the same power as one hundred and fifty groups with five observations each (n = 750; Hofmann, 1997).

Equations Underlying Hierarchical Linear Models

We will limit our remaining discussion to two-level hierarchical data structures concerning continuous outcome (dependent) variables as this provides the most thorough, yet simple, demonstration of the statistical features of HLM. We will be using the notation employed by Raudenbush and Bryk (2002; see Raudenbush & Bryk, 2002 for three-level models; see Wong & Mason, 1985 for dichotomous outcome variables). As stated previously, hierarchical linear models allow for the simultaneous investigation of the relationship within a given hierarchical level, as well as the relationship across levels. Two models are developed in order to achieve this: one that reflects the relationship within lower level units, and a second that models how the relationship within lower level units varies between units (thereby correcting for the violations of aggregating or disaggregating data; Hofmann, 1997). This modeling technique can be applied to any situation where there are lower-level units (e.g., the student-level variables) nested within higher-level units (e.g., classroom level variables).

To aid understanding, it helps to conceptualize the lower-level units as individuals and the higher-level units as groups. In two-level hierarchical models, separate level-1 models (e.g., students) are developed for each level-2 unit (e.g., classrooms). These models are also called within-unit models as they describe the effects in the context of a single group (Gill, 2003). They take the form of simple regressions developed for each individual i:

Yij = β0j + β1jXij + rij   (1)

where:
Yij = dependent variable measured for the ith level-1 unit nested within the jth level-2 unit,
Xij = value on the level-1 predictor,
β0j = intercept for the jth level-2 unit,
β1j = regression coefficient associated with Xij for the jth level-2 unit, and
rij = random error associated with the ith level-1 unit nested within the jth level-2 unit.

In the context of our example, these variables can be redefined as follows:
Yij = GPA measured for student i in classroom j,
Xij = breakfast consumption for student i in classroom j,
β0j = GPA for a student i in classroom j who does not eat breakfast,
β1j = regression coefficient associated with breakfast consumption for the jth classroom,
rij = random error associated with student i in classroom j.

As with most statistical models, an important assumption of HLM is that any level-1 errors (rij) follow a normal distribution with a mean of 0 and a variance of σ² (see Equation 2; Sullivan, Dukes, & Losina, 1999). This applies to any level-1 model using continuous outcome variables.

rij ~ N(0, σ²)   (2)
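As a quick illustration of what Equation 1 describes, the sketch below (our own Python example with made-up simulation parameters, not anything reported in the paper) generates GPA-like data in which each classroom has its own intercept β0j and slope β1j, and then fits the separate within-classroom regressions that the level-1 model represents.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classrooms, n_students = 8, 25          # hypothetical sizes

records = []
for j in range(n_classrooms):
    beta_0j = 4.0 + rng.normal(0, 0.8)    # classroom-specific intercept
    beta_1j = 0.5 + rng.normal(0, 0.2)    # classroom-specific slope
    x = rng.integers(0, 8, n_students)    # level-1 predictor X_ij (e.g., breakfast consumption)
    y = beta_0j + beta_1j * x + rng.normal(0, 1.0, n_students)   # Equation 1 with rij ~ N(0, 1)
    records.append((x, y))

# One simple regression per classroom: the "within-unit" level-1 models.
for j, (x, y) in enumerate(records):
    slope, intercept = np.polyfit(x, y, 1)
    print(f"classroom {j}: intercept = {intercept:.2f}, slope = {slope:.2f}")
```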

In the level-2 models, the level-1 regression coefficients (β0j and β1j) are used as outcome variables and are related to each of the level-2 predictors. Level-2 models are also referred to as between-unit models as they describe the variability across multiple groups (Gill, 2003). We will consider the case of a single level-2 predictor that will be modeled using Equations 3 and 4:

β0j = γ00 + γ01Gj + U0j   (3)

β1j = γ10 + γ11Gj + U1j   (4)

where:
β0j = intercept for the jth level-2 unit;
β1j = slope for the jth level-2 unit;
Gj = value on the level-2 predictor;
γ00 = overall mean intercept adjusted for G;
γ10 = overall mean slope adjusted for G;
γ01 = regression coefficient associated with G relative to the level-1 intercept;
γ11 = regression coefficient associated with G relative to the level-1 slope;
U0j = random effects of the jth level-2 unit adjusted for G on the intercept;
U1j = random effects of the jth level-2 unit adjusted for G on the slope.

In the context of our example, these variables can be redefined as follows:
β0j = intercept for the jth classroom;
β1j = slope for the jth classroom;
Gj = teaching style in classroom j;
γ00 = overall mean intercept adjusted for breakfast consumption;
γ10 = overall mean slope adjusted for breakfast consumption;
γ01 = regression coefficient associated with breakfast consumption relative to the level-2 intercept;
γ11 = regression coefficient associated with breakfast consumption relative to the level-2 slope;
U0j = random effects of the jth level-2 unit adjusted for breakfast consumption on the intercept;
U1j = random effects of the jth level-2 unit adjusted for breakfast consumption on the slope.

It is noteworthy that the level-2 model introduces two new terms (U0j and U1j) that are unique to HLM and differentiate it from a normal regression equation. Furthermore, the model developed would depend on the pattern of variance in the level-1 intercepts and slopes (Hofmann, 1997). For example, if there was no variation in the slopes across the level-1 models, U1j would no longer be meaningful given that β1j is equivalent across groups, and it would thus be removed from Equation 4 (Hofmann, 1997). Special cases of the two-level model Equations 1, 3 and 4 can be found in Raudenbush & Bryk (1992).

The assumption in the level-2 model (when errors are homogeneous at both levels) is that β0j and β1j have a normal multivariate distribution with variances defined by τ00 and τ11 and means equal to γ00 and γ10. Furthermore, the covariance between U0j and U1j (defined as τ01) is equal to the covariance between β0j and β1j. As in the level-1 assumptions, the mean of U0j and U1j is assumed to be zero and level-1 and level-2 errors are not correlated. Finally, the covariance between U0j and rij and the covariance of U1j and rij are both zero (Sullivan et al., 1999). The assumptions of level-2 models can be summarized as follows (Raudenbush & Bryk, 2002; Sullivan et al., 1999):

E(U0j) = 0, E(U1j) = 0, Var(U0j) = τ00, Var(U1j) = τ11, Cov(U0j, U1j) = τ01, Cov(U0j, rij) = Cov(U1j, rij) = 0   (5)

In order to allow for the classification of variables and coefficients in terms of the level of hierarchy they affect (Gill, 2003), a combined model (i.e., two-level model; see Equation 6) is created by substituting Equations 3 and 4 into Equation 1:

Yij = γ00 + γ01Gj + γ10Xij + γ11GjXij + U0j + U1jXij + rij   (6)

The combined model incorporates the level-1 and level-2 predictors (Xij or breakfast consumption and Gj or teaching style), a cross-level term (GjXij or teaching style × breakfast consumption) as well as the composite error (U0j + U1jXij + rij). Equation 6 is often termed a mixed model because it includes both fixed and random effects (Gill, 2003). Please note that fixed and random effects will be discussed in subsequent sections.

A comparison between Equation 6 and the equation for a normal regression (see Equation 7) further highlights the uniqueness of HLM.

Yi = β0 + β1Xi + ei   (7)

As stated previously, the HLM model introduces two new terms (U0j and U1j) that allow the model to estimate error that normal regression cannot. In Equation 6, the errors are no longer independent across the level-1 units. The terms U0j and U1j demonstrate that there is dependency among the level-1 units nested within each level-2 unit. Furthermore, U0j and U1j may have different values within level-2 units, leading to heterogeneous variances of the error terms (Sullivan et al., 1999). This dependency of errors has important implications for parameter estimation, which will be discussed in the next section.
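For readers who want to see the combined model in Equation 6 expressed in a general-purpose statistics package, the sketch below fits an analogous mixed model with Python's statsmodels. This is our own illustration, not the software used in the paper, and the column names gpa, breakfast, teach_style, and classroom are hypothetical.

```python
import statsmodels.formula.api as smf
# `students` is assumed to be a pandas DataFrame with one row per student and
# columns: gpa, breakfast (level-1), teach_style (level-2), classroom (grouping).

model = smf.mixedlm(
    "gpa ~ breakfast * teach_style",    # fixed part: γ00, γ10, γ01, and the γ11 cross-level term
    data=students,
    groups=students["classroom"],       # level-2 units
    re_formula="~breakfast",            # random intercept U0j and random slope U1j
)
result = model.fit()
print(result.summary())                 # fixed effects plus variance components (τ00, τ11, τ01, σ²)
```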

Estimation of Effects

Two-level hierarchical models involve the estimation of three types of parameters. The first type of parameter is fixed effects, and these do not vary across groups (Hofmann, 1997). The fixed effects are represented by γ00, γ01, γ10, and γ11 in Equations 3 and 4. While the level-2 fixed effects could be estimated via the Ordinary Least Squares (OLS) approach, it is not an appropriate estimation strategy as it requires the assumption of homoscedasticity to be met. This assumption is violated in hierarchical models as the accuracy of level-1 parameters is likely to vary across groups (e.g., classrooms; Hofmann, 1997). The technique used to estimate fixed effects is called a Generalized Least Squares (GLS) estimate. A GLS yields a weighted level-2 regression which ensures that groups (e.g., classrooms) with more accurate estimates of the outcome variable (i.e., the intercepts and slopes) are allocated more weight in the level-2 regression equation (Hofmann, 1997). Readers seeking further information on the estimation of fixed effects are directed to Raudenbush & Bryk (2002).

The second type of parameter is the random level-1 coefficients (β0j and β1j), which are permitted to vary across groups (e.g., classrooms; Hofmann, 1997). Hierarchical models provide two estimates for the random coefficients of a given group (e.g., classroom): (1) computing an OLS regression for the level-1 equation representing that group (e.g., classroom); and (2) the predicted values of β0j and β1j in the level-2 model [see Equations 3 and 4]. Of importance is which estimation strategy provides the most precise values of the population slope and intercept for the given group (e.g., classroom; Hofmann, 1997). HLM software programs use an empirical Bayes estimation strategy, which takes into consideration both estimation strategies by computing an optimally weighted combination of the two (Raudenbush & Bryk, 2002; Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2006). This strategy provides the best estimate of the level-1 coefficients for a particular group (e.g., classroom) because it results in a smaller mean square error term (Raudenbush, 1988). Readers interested in further information concerning empirical Bayes estimation are directed to Carlin and Louis (1996).

The final type of parameter estimation concerns the variance-covariance components, which include: (1) the covariance between level-2 error terms [i.e., cov(β0j and β1j) or cov(U0j and U1j), defined as τ01]; (2) the variance in the level-1 error term (i.e., the variance of rij, denoted by σ²); and (3) the variance in the level-2 error terms (i.e., the variance in β0j and β1j, or U0j and U1j, defined as τ00 and τ11, respectively). When sample sizes are equal and the distribution of level-1 predictors is the same across all groups (i.e., the design is balanced), closed-form formulas can be used to estimate variance-covariance components (Raudenbush & Bryk, 2002). In reality, however, an unbalanced design is more probable. In such cases, variance-covariance estimates are made using iterative numerical procedures (Raudenbush & Bryk, 2002). Raudenbush & Bryk (2002) suggest the following conceptual approaches to estimating variance-covariance components in unbalanced designs: (1) full maximum likelihood; (2) restricted maximum likelihood; and (3) Bayes estimation. Readers are directed to chapters 13 and 14 in Raudenbush & Bryk (2002) for more detail.

Hypothesis Testing

The previous sections of this paper provided an introduction to the logic, rationale and parameter estimation approaches behind hierarchical linear models. The following section will illustrate how hierarchical linear models can be used to answer questions relevant to research in any sub-field of psychology.

It is prudent to note that, for the sake of explanation, equations in the following section (which we will refer to as sub-models) purposely ignore one or a few facets of the combined model (see Equation 6) and are not ad hoc equations. Through this section we will sequentially show how these sub-models can be used in order to run specific tests that answer hierarchical research questions. Thus the reader is reminded that all analyses presented in this section could be run all at once using the combined model (see Equation 6; Hofmann, 1997; Hofmann, personal communication, April 25, 2010) and an HLM software program. The following example was adapted from the model in Hofmann (1997). For more complex hypothesis testing strategies, please refer to Raudenbush and Bryk (2002).

Suppose that we want to know how GPA can be predicted by breakfast consumption, a student-level predictor, and teaching style, a classroom-level predictor. Recall that the combined model used in HLM is the following:

Yij = γ00 + γ01Gj + γ10Xij + γ11GjXij + U0j + U1jXij + rij   (8)

Substituting in our variables, the combined model would look like this:

GPAij = γ00 + γ01(Teaching style)j + γ10(Breakfast)ij + γ11(Teaching style)j(Breakfast)ij + U0j + U1j(Breakfast)ij + rij   (9)

Our three hypotheses are a) breakfast consumption is related to GPA; b) teaching style is related to GPA, after controlling for breakfast consumption; and c) teaching style moderates the breakfast consumption-GPA relationship. In order to support these hypotheses, HLM models require five conditions to be satisfied. Our hypotheses and the necessary conditions to be satisfied are summarized in Table 4.

Table 4. Hypotheses and necessary conditions: Do student breakfast consumption and teaching style influence student GPA?

Hypotheses
1. Breakfast consumption is related to GPA.
2. Teaching style is related to GPA, after controlling for breakfast consumption.
3. Teaching style moderates the breakfast consumption-GPA relationship.

Conditions
1. There is systematic within- & between-group variance in GPA.
2. There is significant variance at the level-1 intercept.
3. There is significant variance in the level-1 slope.
4. The variance in the level-1 intercept is predicted by teaching style.
5. The variance in the level-1 slope is predicted by teaching style.

Condition 1: There is Systematic Within- and Between-Group Variance in GPA

The first condition provides useful preliminary information and assures that there is appropriate variance to investigate the hypotheses. To begin, HLM applies a one-way analysis of variance (ANOVA) to partition the within- and between-group variance in GPA, which represents breakfast consumption and teaching style, respectively. The relevant sub-models (see Equations 10 and 11), formed using select facets from Equation 9, are as follows:

GPAij = β0j + rij   (10)

β0j = γ00 + U0j   (11)

where:
β0j = mean GPA for classroom j;
γ00 = grand mean GPA;
σ² = within-group variance in GPA;
τ00 = between-group variance in GPA.

The level-1 equation above [see Equation 10] includes only an intercept estimate; there are no predictor variables. In cases such as this, the intercept estimate is determined by regressing the variance in GPA onto a unit vector, which yields the variable's mean (HLM software performs this implicitly when no predictors are specified). Therefore, at level-1, GPA is equal to the classroom's mean plus the classroom's respective error. At level-2 [see Equation 11], each classroom's GPA is regressed onto a unit vector, resulting in a constant (γ00) that is equal to the mean of the classroom means. As a result of this regression, the variance within groups (σ²) is forced into the level-1 residual (rij) while the variance between groups (τ00) is forced into the level-2 residual (U0j).

HLM tests for significance of the between-group variance (τ00) but does not test the significance of the within-group variance (σ²). In the abovementioned model, the total variance in GPA becomes partitioned into its within and between group components; therefore Variance(GPAij) = Variance(U0j + rij) = τ00 + σ². This allows for the calculation of the ratio of the between group variance to the total variance, termed the intra-class correlation (ICC). In other words, the ICC represents the percent of variance in GPA that is between classrooms. Thus by running an initial ANOVA, HLM provides: (1) the amount of variance within groups; (2) the amount of variance between groups; and (3) allows for the calculation of the ICC using Equation 12.

ICC = τ00 / (τ00 + σ²)   (12)

Once this condition is satisfied, HLM can examine the next two conditions to determine whether there are significant differences in intercepts and slopes across classrooms.
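In a general-purpose package, the same unconditional (null) model and ICC can be obtained as in the sketch below (statsmodels in Python, using the hypothetical students DataFrame introduced in the earlier sketch; an illustration only, not the HLM program's own output).

```python
import statsmodels.formula.api as smf

# Unconditional (null) model: intercept only, random intercept per classroom.
null_model = smf.mixedlm("gpa ~ 1", data=students,
                         groups=students["classroom"]).fit()

tau00  = null_model.cov_re.iloc[0, 0]   # between-classroom variance of the intercept
sigma2 = null_model.scale               # within-classroom (level-1) residual variance

icc = tau00 / (tau00 + sigma2)          # Equation 12
print(f"tau00 = {tau00:.2f}, sigma2 = {sigma2:.2f}, ICC = {icc:.2f}")
```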

Conditions 2 and 3: There is Significant Variance in the Level-1 Intercept and Slope

Once within- and between-group variance has been partitioned, HLM applies a random coefficient regression to test the second and third conditions. The second condition supports hypothesis 2 because a significant result would indicate significant variance in GPA due to teaching style when breakfast consumption is held constant. The third condition supports hypothesis 3 by indicating that GPAs differ when students are grouped by the teaching style in their classroom. This regression is also a direct test of hypothesis 1, that breakfast consumption is related to GPA. The following sub-models (see Equations 13-15) are created using select facets of Equation 9:

GPAij = β0j + β1j(Breakfast)ij + rij   (13)

β0j = γ00 + U0j   (14)

β1j = γ10 + U1j   (15)

where:
γ00 = mean of the intercepts across classrooms;
γ10 = mean of the slopes across classrooms (Hypothesis 1);
σ² = Level-1 residual variance;
τ00 = variance in intercepts;
τ11 = variance in slopes.

The γ00 and γ10 parameters are the level-1 coefficients of the intercepts and the slopes, respectively, averaged across classrooms. HLM runs a t-test on these parameters to assess whether they differ significantly from zero, which is a direct test of hypothesis 1 in the case of γ10. This t-test reveals whether the pooled slope between GPA and breakfast consumption differs from zero.

A χ² test is used to assess whether the variance in the intercepts and slopes differs significantly from zero (τ00 and τ11, respectively). At this stage, HLM also estimates the residual level-1 variance and compares it to the estimate from the test of Condition 1. Using both estimates, HLM calculates the percent of variance in GPA that is accounted for by breakfast consumption (see Equation 16).

Percent of variance explained by breakfast consumption = (σ²ANOVA − σ²random regression) / σ²ANOVA   (16)

Of note is that in order for the fourth and fifth conditions to be tested, the second and third conditions must first be met.

Condition 4: The Variance in the Level-1 Intercept is Predicted by Teaching Style

The fourth condition assesses whether the significant variance at the intercepts (found in the second condition) is related to teaching style. It is also known as the intercepts-as-outcomes model. HLM uses another random regression model to assess whether teaching style is significantly related to the intercept while holding breakfast consumption constant. This is accomplished via the following sub-models (see Equations 17-19), created using select variables in Equation 9:

GPAij = β0j + β1j(Breakfast)ij + rij   (17)

β0j = γ00 + γ01(Teaching style)j + U0j   (18)

β1j = γ10 + U1j   (19)

where:
γ00 = Level-2 intercept;
γ01 = Level-2 slope (Hypothesis 2);
γ10 = mean (pooled) slopes;
σ² = Level-1 residual variance;
τ00 = residual intercept variance;
τ11 = variance in slopes.

The intercepts-as-outcomes model is similar to the random coefficient regression used for the second and third conditions except that it includes teaching style as a predictor of the intercepts at level-2. This is a direct test of the second hypothesis, that teaching style is related to GPA after controlling for breakfast consumption. The residual variance (τ00) is assessed for significance using another χ² test. If this test indicates a significant value, other level-2 predictors can be added to account for this variance. To assess how much variance in GPA is accounted for by teaching style, the variance attributable to teaching style is compared to the total intercept variance (see Equation 20).

Percent of intercept variance explained by teaching style = (τ00 random regression − τ00 intercepts-as-outcomes) / τ00 random regression   (20)

Condition 5: The Variance in the Level-1 Slope is Predicted by Teaching Style

The fifth condition assesses whether the difference in slopes is related to teaching style. It is known as the slopes-as-outcomes model. The following sub-models (see Equations 21-23), formed with select variables from Equation 9, are used to determine if condition five is satisfied.

GPAij = β0j + β1j(Breakfast)ij + rij   (21)

β0j = γ00 + γ01(Teaching style)j + U0j   (22)

β1j = γ10 + γ11(Teaching style)j + U1j   (23)

where:
γ00 = Level-2 intercept;
γ01 = Level-2 slope (Hypothesis 2);
γ10 = Level-2 intercept;
γ11 = Level-2 slope (Hypothesis 3);
σ² = Level-1 residual variance;
τ00 = residual intercept variance;
τ11 = residual slope variance.

With teaching style as a predictor of the level-1 slope, τ11 becomes a measure of the residual variance in the averaged level-1 slopes across groups. If a χ² test on τ11 is significant, it indicates that there is systematic variance in the level-1 slopes that is as-of-yet unaccounted for, therefore other level-2 predictors can be added to the model. The slopes-as-outcomes model is a direct test of hypothesis 3, that teaching style moderates the breakfast consumption-GPA relationship. Finally, the percent of variance attributable to teaching style as a moderator of the breakfast consumption-GPA relationship can be computed by comparing its systematic variance with the pooled variance in the slopes (see Equation 24).

Percent of slope variance explained by teaching style = (τ11 random regression − τ11 slopes-as-outcomes) / τ11 random regression   (24)
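The sequence of sub-models above can be mirrored in code. The sketch below (our illustration with statsmodels, reusing the same hypothetical column names as before) fits the random coefficient regression and the intercepts-as-outcomes model and then forms the variance-explained ratios of Equations 16 and 20.

```python
import statsmodels.formula.api as smf

# Condition 1: unconditional model (one-way ANOVA with random effects).
m_null = smf.mixedlm("gpa ~ 1", students, groups=students["classroom"]).fit()

# Conditions 2 and 3: random coefficient regression (Equations 13-15).
m_rc = smf.mixedlm("gpa ~ breakfast", students, groups=students["classroom"],
                   re_formula="~breakfast").fit()

# Equation 16: proportion of level-1 variance accounted for by breakfast.
r2_level1 = (m_null.scale - m_rc.scale) / m_null.scale

# Condition 4: intercepts-as-outcomes model (Equations 17-19).
m_int = smf.mixedlm("gpa ~ breakfast + teach_style", students,
                    groups=students["classroom"], re_formula="~breakfast").fit()

# Equation 20: proportion of intercept variance accounted for by teaching style.
tau00_rc  = m_rc.cov_re.iloc[0, 0]    # first random effect is the intercept
tau00_int = m_int.cov_re.iloc[0, 0]
r2_intercept = (tau00_rc - tau00_int) / tau00_rc

print(r2_level1, r2_intercept)
```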

Model Testing – A Tutorial

To illustrate how models are developed and tested using HLM, a sample data set was created to run the analyses. Analysis was performed using HLM software version 6, which is available for download online (Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2006). For the purposes of the present demonstration, a two-level analysis will be conducted using the logic of HLM.

Sample Data

The sample data contain measures from 300 basketball players, representing 30 basketball teams (10 players per team). Three measures were taken: Player Successful Shots on Net (Shots_On_5), Player Life Satisfaction (Life_Satisfaction), and Coach Years of Experience (Coach_Experience). Scores for Shots_On_5 ranged from 0 shots to 5 shots, where higher scores symbolized more success. Life_Satisfaction scores ranged from 5 to 25, with higher scores representing life satisfaction and lower scores representing life dissatisfaction. Finally, Coach_Experience scores ranged from 1 to 3, with the number representing the coach's years of experience. The level-1 predictor (independent; individual) variable is Shots_On_5; the level-2 predictor (independent; group) variable is Coach_Experience, and the outcome (dependent) variable is Life_Satisfaction. The main hypotheses were as follows: 1) the number of successful shots on net predicts ratings of life satisfaction, and 2) coach years of experience predict variance in life satisfaction.

For the purposes of the present analysis, it is assumed that all assumptions of HLM are adequately met. Specifically, there is no multicollinearity, the Shots_On_5 residuals are independent and normally distributed, and Shots_On_5 and Coach_Experience are independent of their level-related error and their error terms are independent of each other (for discussion on the assumptions of HLM, see Raudenbush and Bryk, 2002).

Preparation

It is essential to prepare the data files using a statistical software package before importing the data structure into the HLM software. The present example uses PASW (Predictive Analytics SoftWare) version 18 (Statistical Package for the Social Sciences; SPSS). A separate file is created for each level of the data in PASW. Each file should contain the participants' scores on the variables for that level, plus an identification code to link the scores between levels. It is important to note that the identification code variable must be in string format, must contain the same number of digits for all levels, and must be given the exact same variable name at all levels. The data file must also be sorted, from lowest value to highest value, by the identification code variable (see Figure 4).

In this example, the level-1 file contains 300 scores for the measures of Shots_on_5 and Life_Satisfaction, where participants were assigned identification codes (range: 01 to 30) based on their team membership. The level-2 file contains 30 scores for the measure of Coach_Experience and identification codes (range: 01 to 30), which were associated with the appropriate players from the level-1 data. Once a data file has been created in this manner for each level, it is possible to import the data files into the HLM software.

Figure 4. Example of SPSS data file as required by HLM. The image on the left represents the data for level-1. The image on the right represents the data for level-2.
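Researchers who prepare their files with a scripting tool instead of the PASW point-and-click interface can mimic the same layout. The sketch below (our own pandas illustration; the file names and starting DataFrame are hypothetical, and CSV is used here for simplicity even though the tutorial itself imports SPSS files) builds a level-1 and a level-2 file sharing a zero-padded string ID, each sorted by that ID, which is the structure HLM expects.

```python
import pandas as pd

# Assume `players` has one row per player with columns:
# Team_ID (1-30), Shots_on_5, Life_Satisfaction, Coach_Experience.

# Identification code: string format, same width and same name at every level.
players["ID"] = players["Team_ID"].astype(int).astype(str).str.zfill(2)

# Level-1 file: one row per player, sorted by the identification code.
level1 = (players[["ID", "Shots_on_5", "Life_Satisfaction"]]
          .sort_values("ID"))
level1.to_csv("level1.csv", index=False)

# Level-2 file: one row per team, same ID variable, also sorted.
level2 = (players.groupby("ID", as_index=False)["Coach_Experience"].first()
          .sort_values("ID"))
level2.to_csv("level2.csv", index=False)
```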

HLM Set-Up

The following procedures were conducted according to those outlined by Raudenbush and Bryk (2002). After launching the HLM program, the analysis can begin by clicking File → Make New MDM File → Stat Package Input. In the dialogue box that appears, select the MDM (Multivariate Data Matrix). We will select HLM2 to continue because our example has two levels. A new dialogue box will open, in which we will specify the file details, as well as load the level-1 and level-2 variables.

First, specify the variables for the analysis by linking the file to the level-1 and level-2 SPSS data sets that were created. Once both have been selected, click Choose Variable to select the desired variables from the data set (check the box next to In MDM) and specify the identification code variables (check the box next to ID). Please note that you are not required to select all of the variables from the list to be in the MDM, but you must specify an ID variable. You must also specify whether there are any missing data and how missing data should be handled during the analyses. If you select Running Analyses for the missing data, HLM will perform a pairwise deletion; if you select Making MDM, HLM will perform a listwise deletion. In the next step, ensure that under the Structure of Data section, Cross sectional is selected. Under MDM File Name, provide a name for the current file, add the extension ".mdm", and ensure that the input file type is set to SPSS/Windows. Finally, in the MDM Template File section, choose a name and location for the template files.

To run the analyses, click Make MDM, and then click Check Stats. Checking the statistics is an invaluable step that should be performed carefully. At this point, the program will indicate any specific missing data. After this process is complete, click Done and a new window will open where it is possible to build the various models and run the required analyses. Before continuing, ensure that the optimal output file type is selected by clicking File → Preferences. In this window, it is possible to make a number of adjustments to the output; however, the most important is the Type of Output. For the clearest and easiest to interpret output file, it is strongly recommended that HTML output is selected, as well as view HTML in default browser.

Unconstrained (null) Model

As a first step, a one-way analysis of variance is performed to confirm that the variability in the outcome variable, by level-2 group, is significantly different than zero. This tests whether there are any differences at the group level on the outcome variable, and confirms whether HLM is necessary. Using the dialogue box, Life_Satisfaction is entered into the model as an "outcome variable" (see Figure 5). The program will also generate the level-2 model required to ensure that the level-1 model is estimated in terms of the level-2 groupings (Coach_Experience). Click Run Analysis, then Run the Model Shown to run the model and view the output screen. The generated output should be identical to Figure 6.

The results of the first model test yield a number of different tables. For this model, the most important result to examine is the chi-square test (χ²) found within the Final Estimation of Variance Components table in Figure 6. If this result is statistically significant, it indicates that there is variance in the outcome variable by the level-2 groupings, and that there is statistical justification for running HLM analyses. The results for the present example indicate that χ²(29) = 326.02, p < .001, which supports the use of HLM.

As an additional step, the ICC can be calculated to determine which percentage of the variance in Life_Satisfaction is attributable to group membership and which percentage is at the individual level. There is no consensus on a cut-off point; however, if the ICC is very low, the HLM analyses may not yield different results from a traditional analysis.

Figure 5. Building the unconstrained (null) model in HLM.

Figure 6. HLM output tables – Unconstrained (null) model. This represents a default output in HLM.

The ICC (see Equation 12) can be calculated using the σ² (level-1) and τ (level-2) terms at the top of the output, under the Summary of the model specified heading in Figure 6 (see Equation 25).

ICC = τ / (τ + σ²)   (25)

In the present example, σ² = 14.61 and τ = 14.96, which results in an ICC of 0.506. This result suggests that 51% of the variance in Life Satisfaction is at the group level and 49% is at the individual level.
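As a quick check of that calculation (a worked example we added, using the values reported above):

```python
sigma2, tau = 14.61, 14.96      # values reported under "Summary of the model specified"
icc = tau / (tau + sigma2)      # Equation 25
print(round(icc, 3))            # 0.506 -> roughly 51% of the variance is between teams
```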

Figure 7. Building the random intercepts model in HLM.

Random Intercepts Model

Next, test the relationship between the level-1 predictor variable and the outcome variable. To test this, return to the dialogue box and add Shots_on_5 as a variable group centered in level-1. In most cases, the level-1 predictor variable is entered as a group centered variable in order to study the effects of the level-1 and level-2 predictor variables independently and to yield more accurate estimates of the intercepts. We would select variable grand centered at level-1 if we were not interested in analyzing the predictor variables separately (e.g., an ANCOVA analysis, which tests one variable while controlling for the other variable, would require grand centering). Leave the outcome variable (Life_Satisfaction) as it was for the first model and ensure that both error terms (u0 and u1) are selected in the "Level 2 Model" (see Figure 7). By selecting both error terms, the analyses include estimates of both the between- and within-group error. Specifically, u0 starts with the assumption that life satisfaction varies from team to team, and u1 starts with the assumption that the strength of the relationship between Shots_on_5 and Life_Satisfaction varies from group to group. Click Run Analysis to run this model and view the output screen. The generated output screen should be identical to Figure 8.

A regression coefficient is estimated and its significance confirms the relationship between the level-1 predictor variable and the outcome variable. To view results of this analysis, consult the significance values for the INTRCPT2 in the Final estimation of fixed effects output table (refer to Figure 8), which is non-standardized. The non-standardized final estimation of fixed effects table will be similar to the standardized table (i.e., with robust standard errors) unless an assumption has been violated (e.g., normality), in which case, use the standardized final estimation. The results of the present analysis support the relationship between Shots_on_5 and Life_Satisfaction, b = 2.89, p < .001. Please note that the direction (positive or negative) of this statistic is interpreted like a regular regression.

To calculate a measure of effect size, calculate the variance (r²) explained by the level-1 predictor variable in the outcome variable using Equation 26.

r² = (σ²null − σ²random) / σ²null   (26)

Note that σ²null is the sigma value obtained in the previous step (null-model testing) under the Summary of the model specified heading in Figure 6 (σ²null = 14.61). The σ²random is the sigma value found at the top of the output under the Summary of the model specified heading in Figure 8 (in the present example, σ²random = 4.61).

Figure 8. HLM output tables – Random intercepts model.

Figure 9. Building the means as outcomes model in HLM.

Using the values and the specified equation, the results indicate that Shots_on_5 explains 71.5% of the variance in Life_Satisfaction.

Means as Outcomes Model

The next step is to test the significance and direction of the relationship between the level-2 predictor variable and the outcome variable. To test this, return to the dialogue box and remove Shots_on_5 as a group centered predictor variable in level-1 by selecting delete variable from model, and leave the outcome variable (Life_Satisfaction). Add Coach_Experience as a grand centered predictor variable at level-2 (see Figure 9). The issue of centering at level-2 is not as important as it is at level-1 and is only necessary when we are interested in controlling for the other predictor variables. When examining the level-1 and level-2 predictor variables separately, centering will not change the regression coefficients but will change the intercept value. When the level-2 predictor variable is centered, the level-2 intercept is equal to the grand mean of the outcome variable. When the level-2 predictor variable is not centered, the level-2 intercept is equal to the mean score of the outcome variable when the level-2 predictor variables equal zero. In the current example, a mean score of zero at level-2 is not of much interest given that coach experience scores ranged from 1 through 3, therefore the grand centered option was appropriate. When interested in the slopes and not the intercepts, centering is not usually an issue at level-2. Click Run Analysis to run the model and view the output screen. The output generated should be identical to Figure 10.

A regression coefficient is estimated and, as before, its significance confirms the relationship between the level-2 predictor variable and the outcome variable (at level-1). To view the results, see COACH_EX γ01 in the output under the Final estimation of fixed effects table in Figure 10.

Figure 10. HLM output tables – Means as outcomes model.

Figure 11. Building the random intercepts and slopes model in HLM. The mixed model must be obtained by clicking on "Mixed".

The results of this analysis support that Coach_Experience predicts Life_Satisfaction, b = 4.78, p < .001. For a measure of effect size, the variance in the outcome variable explained by the level-2 predictor variable can be computed using Equation 27.

r² = (τnull − τmeans) / τnull   (27)

where τnull is the τ value obtained in the first step (null-model testing) under the Summary of the model specified table in Figure 6 (τnull = 14.96). Next, τmeans is the τ value obtained under the Summary of the model specified table in the present analysis (τmeans = 1.68; Figure 10). The results confirm that Coach_Experience explains 88.8% of the between-group variance in Life_Satisfaction.

Random Intercepts and Slopes Model

The final step is to test for interactions between the two predictor variables (level-1 and level-2). Please note that if one is only interested in the main effects of both predictor variables (level-1 and level-2), this final step is not necessary. Alternatively, this final model could be used to test the two previous models instead of running them separately. If you choose to run this final model instead of testing the main effects separately, be aware that the results will differ slightly because of the maximum likelihood estimation methods used to calculate the models.

Figure 12. HLM output tables – Random intercepts and slopes model.
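For comparison, an analogous random intercepts and slopes model with the cross-level interaction can be sketched in a general-purpose package as below. This is our own illustration, not the HLM 6 program used in the tutorial; `df` is a hypothetical long-format DataFrame with one row per player and a Team_ID column, and the centering is done by hand to mirror the choices described above.

```python
import statsmodels.formula.api as smf

# Group-mean center the level-1 predictor, grand-mean center the level-2 predictor.
df["shots_gc"] = df["Shots_on_5"] - df.groupby("Team_ID")["Shots_on_5"].transform("mean")
df["coach_gmc"] = df["Coach_Experience"] - df["Coach_Experience"].mean()

model = smf.mixedlm(
    "Life_Satisfaction ~ shots_gc * coach_gmc",   # cross-level interaction term
    df, groups=df["Team_ID"], re_formula="~shots_gc")
result = model.fit()
print(result.summary())   # the fixed effects table includes the interaction coefficient
```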

To test this final model, return to the dialogue box and add Shots_on_5 as a group centered predictor variable in level-1, leave the remaining terms from the third model, and add the level-2 predictor variable, Coach_Experience, as a grand centered variable to both equations (β0 and β1). By adding it to both equations, the interaction term does not accidentally account for all of the variance. The error terms (u0 and u1) should be selected for both equations (see Figure 11). Finally, click Run Analysis to run the model and view the output screen. The generated output should be identical to Figure 12.

For this output, we will focus on the interaction term only. The results of the interaction can be found under the Final estimation of fixed effects table of Figure 12 (see COACH_EX γ11). HLM results reveal that the interaction was not significant (b = 0.38, p = .169), providing support that there is no cross-level interaction between the level-1 and level-2 predictors.

Reporting the Results

Now that the analyses are complete, it is possible to summarize the results of the HLM analysis. The analyses conducted in the present example can be summarized as follows:

Hierarchical linear modeling (HLM) was used to statistically analyze a data structure where players (level-1) were nested within teams (level-2). Of specific interest was the relationship between players' life satisfaction (level-1 outcome variable) and both the number of shots on the net (level-1 predictor variable) and their coach's experience (level-2 predictor variable). Model testing proceeded in four phases: unconstrained (null) model, random intercepts model, means-as-outcomes model, and intercepts- and slopes-as-outcomes model.

The intercept-only model revealed an ICC of .51. Thus, 51% of the variance in life satisfaction scores is between-team and 49% of the variance in life satisfaction scores is between players within a given team. Because variance existed at both levels of the data structure, predictor variables were individually added at each level. The random-regression coefficients model was tested using players' shots on net as the only predictor variable. The regression coefficient relating player shots on net to life satisfaction was positive and statistically significant (b = 2.89, p < .001). Players' life satisfaction levels were higher when their shots on net levels were also higher (relative to those whose shots on net were lower). Next, the means-as-outcomes model added coaches' experience as a level-2 predictor variable. The regression coefficient relating coaches' experience to player life satisfaction was positive and statistically significant (b = 4.784, p < .001). Life satisfaction levels were higher in teams with coaches who had more experience (relative to coaches who had less experience). Finally, the intercepts- and slopes-as-outcomes models were simultaneously tested with all predictor variables in the model to test for the presence of any interactions between predictor variables. The cross-level interaction between shots on net and coaches' experience was not statistically significant (b = 0.38, p = .169), which means that the degree of coach experience had no influence on the strength of the relationship between shots on net and life satisfaction.

Conclusion

Since its inception in the 1970s, HLM has risen in popularity as the method of choice for analyzing nested data. Reasons for this include the high prevalence of hierarchically organized data in social sciences research, as well as the model's flexible application. Although HLM is generally recommended over disaggregation and aggregation techniques because of these methods' statistical limitations, it is not without its own challenges. HLM is a multi-step, time-consuming process. It can accommodate any number of hierarchical levels, but the workload increases exponentially with each added level. Compared to most other statistical methods commonly used in psychological research, HLM is relatively new and various guidelines for HLM are still in the process of development (Beaubien et al., 2001; Raudenbush & Bryk, 2002). Prior to conducting an HLM analysis, background interaction effects between predictor variables should be accounted for, and sufficient amounts of within- and between-level variance at all levels of the hierarchy should be ensured. HLM presumes that data are normally distributed: When the assumption of normality for the predictor and/or outcome variable(s) is violated, this range restriction biases HLM output. Finally, as previously mentioned, the outcome variable(s) of interest must be situated at the lowest level of analysis in HLM (Beaubien et al., 2001).

Although HLM is relatively new, it is already being used in novel ways across a vast range of research domains.

Examples of research questions analyzed using HLM include the effects of the environment on aspects of youth development (Avan & Kirkwood, 2010; Kotch et al., 2008; Lyons, Terry, Martinovich, Peterson, & Bouska, 2001), longitudinal examinations of symptoms in chronic illness (Connelly et al., 2007; Doorenbos, Given, Given, & Verbitsky, 2006), relationship quality based on sexual orientation (Kurdek, 1998), and interactions between patient and program characteristics in treatment programs (Chou, Hser, & Anglin, 1998).

Throughout this tutorial we have provided an introduction to HLM and methods for dealing with nested data. The mathematical concepts underlying HLM and our theoretical hypothesis testing example represent only a small and simple example of the types of questions researchers can answer via this method. More complex forms of HLM are presented in Hierarchical Linear Models: Applications and Data Analysis Methods, Second Edition (Raudenbush & Bryk, 2002). Readers seeking information on statistical packages available for HLM and how to use them are directed to HLM 6: Hierarchical Linear and Nonlinear Modeling (Raudenbush et al., 2006).

References

Avan, B. I., & Kirkwood, B. (2010). Role of neighbourhoods in child growth and development: Does 'place' matter? Social Science & Medicine, 71, 102-109.
Beaubien, J. M., Hamman, W. R., Holt, R. W., & Boehm-Davis, D. A. (2001). The application of hierarchical linear modeling (HLM) techniques to commercial aviation research. Proceedings of the 11th annual symposium on aviation psychology. Columbus, OH: The Ohio State University Press.
Carlin, B. P., & Louis, T. A. (1996). Bayes and empirical Bayes methods for data analysis. London: CRC Press.
Castro, S. L. (2002). Data analytic methods for the analysis of multilevel questions: A comparison of intraclass correlation coefficients, rwg(j), hierarchical linear modeling, within- and between-analysis, and random group resampling. The Leadership Quarterly, 13, 69-93.
Chou, C.-P., Hser, Y. I., & Anglin, M. D. (1998). Interaction effects of client and treatment program characteristics on retention: An exploratory analysis using hierarchical linear models. Substance Use & Misuse, 33(11), 2281-2301.
Connelly, M., Keefe, F. J., Affleck, G., Lumley, M. A., Anderson, T., & Waters, S. (2007). Effects of day-to-day affect regulation on the pain experience of patients with rheumatoid arthritis. Pain, 131(1-2), 162-170.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1), 1-38.
Dempster, A. P., Rubin, D. B., & Tsutakawa, R. K. (1981). Estimation in covariance components models. Journal of the American Statistical Association, 76(374), 341-353.
Doorenbos, A. Z., Given, C. W., Given, B., & Verbitsky, N. (2006). Symptom experience in the last year of life among individuals with cancer. Journal of Pain & Symptom Management, 32(5), 403-412.
Gill, J. (2003). Hierarchical linear models. In Kimberly Kempf-Leonard (Ed.), Encyclopedia of social measurement. New York: Academic Press.
Hofmann, D. A. (1997). An overview of the logic and rationale of hierarchical linear models. Journal of Management, 23, 723-744.
Kotch, J. B., Lewis, T., Hussey, J. M., English, D., Thompson, R., Litrownik, A. J., … & Dubowitz (2008). Importance of early neglect for childhood aggression. Pediatrics, 121(4), 725-731.
Kurdek, L. (1998). Relationship outcomes and their predictors: Longitudinal evidence from heterosexual married, gay cohabitating, and lesbian cohabitating couples. Journal of Marriage and the Family, 60, 553-568.
Lindley, D. V., & Smith, A. F. M. (1972). Bayes estimates for the linear model. Journal of the Royal Statistical Society, Series B (Methodological), 34(1), 1-41.
Lyons, J. S., Terry, P., Martinovich, Z., Peterson, J., & Bouska, B. (2001). Outcome trajectories for adolescents in residential treatment: A statewide evaluation. Journal of Child and Family Studies, 10(3), 333-345.
Osborne, J. W. (2000). Advantages of hierarchical linear modeling. Practical Assessment, Research, and Evaluation, 7(1), 1-3.
Raudenbush, S. W. (1988). Educational applications of hierarchical linear models: A review. Journal of Educational Statistics, 13, 85-116.
Raudenbush, S. W., & Bryk, A. S. (1992). Hierarchical linear models. Newbury Park, CA: Sage.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.
Raudenbush, S. W., Bryk, A. S., Cheong, Y. F., Congdon, R. T., & du Toit, M. (2006). HLM 6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International, Inc.
Smith, A. F. M. (1973). A general Bayesian linear model. Journal of the Royal Statistical Society, Series B (Methodological), 35(1), 67-75.
Snijders, T., & Bosker, R. (1999). Multilevel analysis. London: Sage Publications.
Stevens, J. S. (2007). Hierarchical linear models. Retrieved April 1, 2010, from http://www.uoregon.edu/~stevensj/HLM/data/.

Sullivan, L. M., Dukes, K. A., & Losina, E. (1999). Tutorial in biostatistics: An introduction to hierarchical linear modeling. Statistics in Medicine, 18, 855-888.
Wong, G. Y., & Mason, W. M. (1985). The hierarchical logistic regression model for multilevel analysis. Journal of the American Statistical Association, 80, 513-524.

Manuscript received 23 September 2010.
Manuscript accepted 20 February 2012.
