
Structural Equation Modeling

Natasha K. Bowen and Shenyang Guo

Print publication date: 2011


Print ISBN-13: 9780195367621
Published to Oxford Scholarship Online: January 2012
DOI: 10.1093/acprof:oso/9780195367621.001.0001

Glossary

DOI: 10.1093/acprof:oso/9780195367621.002.0007

alternative models Alternative models are models that might statistically explain the data as well as (or better than) the model hypothesized by the researcher, but that do so with a different arrangement of relationships among the same variables. Alternative models offer a competing explanation of the data. Researchers should propose and estimate alternative models and justify why their preferred model should be retained over an explanation offered by a statistically equivalent alternative model.

Comparative fit index (CFI) CFI is one of several indices available to assess model fit. A value between 0.90 and 0.95 indicates acceptable fit, and a value above 0.95 indicates good fit.

chi-square (χ²) The most basic and common fit statistic used to evaluate structural equation models; chi-square should always be provided in reports on SEM analyses. Chi-square values resulting in a nonsignificant p-value (i.e., p > 0.05) indicate good model fit. The chi-square statistic is directly affected by the size of the sample being used to test the model. With smaller samples, it is a reasonable measure of fit; for models with more cases, the chi-square is more frequently statistically significant. Chi-square is also affected by the size of the correlations in the model: the larger the correlations, the poorer the fit. For these reasons, alternative measures of fit have been developed. Both the chi-square and alternative fit indices should be reported for SEM analyses.
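
The sample-size sensitivity described above can be sketched numerically. This illustration (not from the text) assumes the common formula for the ML test statistic, T = (N − 1) × F_min, where F_min is the minimized fit function value; consult your software's documentation for the exact variant it reports.

```python
# Holding the model's misfit (F_min) constant, the chi-square statistic
# grows with sample size, so the same degree of misfit is rejected only
# in the larger sample. F_min here is a hypothetical minimized fit value.

def ml_chi_square(f_min, n):
    """Chi-square test statistic from a minimized ML fit function value."""
    return (n - 1) * f_min

f_min = 0.05
print(ml_chi_square(f_min, 101))   # 5.0  (not significant at df = 5)
print(ml_chi_square(f_min, 1001))  # 50.0 (highly significant at df = 5)
```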

constrained parameter Constrained parameters are those where the value of one parameter is set (or constrained) to equal some function of other parameters in the model. The most basic constraint is to set one
Page 1 of 12 Glossary
PRINTED FROM OXFORD SCHOLARSHIP ONLINE (www.oxfordscholarship.com). (c) Copyright Oxford University Press, 2013.
All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a
monograph in OSO for personal use (for details see http://www.oxfordscholarship.com/page/privacy-policy). Subscriber: SUNY
Binghamton University; date: 21 June 2013
parameter equal to another parameter. In this example, the value of the constrained parameter is not estimated by the analysis software; rather, the unconstrained (free) parameter will be estimated by the analysis software, and this value will be applied to both parameters. A parameter is not considered constrained when its value is freely estimated (see free parameters) or when it is set to a specific value (e.g., zero) that is not dependent on the value of any other parameters in the model (see fixed parameters).

control variables Control variables, also known as covariates, are variables included in an analysis because they are known to have some relationship to the outcome variable. The parameter estimates of control variables are not explicitly of interest in the current analysis, but in order to obtain the most accurate estimates of a model’s substantive relationships, it is necessary to “remove,” or “control for,” the control variables’ effects. Gender and race/ethnicity are common control variables. They are often included in models because they are known to be related to outcomes, even if the mechanisms of their effects are unclear.

convergence Convergence is a term that describes the status of estimating model parameters using a maximum likelihood estimator and typically refers to obtaining a stable solution during the modeling process. In model estimation, the program obtains an initial solution and then attempts to improve these estimates through an iterative process of successive calculations. Iterations of model parameter estimation continue until discrepancies between the observed covariances (i.e., the covariances of the sample data) and the covariances predicted, or implied, by the researcher’s model are minimized. Convergence occurs when the incremental amount of improvement in model fit resulting from an iteration falls below a predefined (often default) minimum value. When a model converges, the software provides estimates for each of the model parameters and a residual matrix.

Cook’s distance (Cook’s D) A statistic that reflects how much each of the estimated regression coefficients changes when the ith case is removed. A case having a large Cook’s D (i.e., greater than 1) strongly influences the estimated coefficients. Cook’s D is used as a multivariate nonnormality diagnostic to detect influential cases.
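
As a rough illustration (not from the text), Cook's D can be computed from its leave-one-out definition: refit the regression without case i and measure how far all fitted values move. The simulated data and variable names below are invented for the sketch.

```python
import numpy as np

def cooks_d(X, y):
    """Leave-one-out Cook's D for an OLS regression with design matrix X."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    s2 = resid @ resid / (n - p)          # mean squared error of the full fit
    d = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        shift = fitted - X @ b_i          # change in every fitted value
        d[i] = shift @ shift / (p * s2)
    return d

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2 * x + rng.normal(scale=0.5, size=30)
y[0] += 10                                # plant one influential case
X = np.column_stack([np.ones(30), x])
d = cooks_d(X, y)
print(d.argmax())                          # case 0 dominates
```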

correlation Correlation is a standardized measure of the covariance of two variables. The correlation of two variables can be obtained by dividing their covariance by the product of their standard deviations. Correlation values range from –1 to 1. A value of 0 indicates no correlation. Values of –1 and

1 are equal in terms of magnitude, but a value of –1 indicates that scores
on one variable always go up when values on the other variable go down.
A positive correlation indicates that when scores on one variable increase,
scores on the other tend to increase.
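
A small numeric check (not part of the text) of the definition above, dividing the covariance by the product of the two standard deviations:

```python
# Correlation = covariance / (sd_x * sd_y); population formulas throughout.
from statistics import pstdev

def pcov(xs, ys):
    """Population covariance: mean product of deviations from the means."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    return pcov(xs, ys) / (pstdev(xs) * pstdev(ys))

x = [1, 2, 3, 4, 5]
print(round(corr(x, [2, 4, 6, 8, 10]), 12))  # 1.0: y rises exactly with x
print(round(corr(x, [10, 8, 6, 4, 2]), 12))  # -1.0: y falls exactly as x rises
```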

covariance Covariance is a measure of how much the values of two variables vary together across sample members. The formula for the covariance between two variables looks a little like the formula for variance except that, instead of multiplying the difference from the mean of each sample member’s score on a variable by itself (i.e., squaring the difference), the differences of sample members’ scores from the mean on one variable are multiplied by their difference scores from the mean on the other variable. Covariance is the basic statistic of SEM analyses. Cov is an abbreviation for covariance.
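
A minimal numeric sketch (not from the text) of the formula just described; note that the covariance of a variable with itself is its variance.

```python
# The variance formula squares each deviation; the covariance formula
# multiplies together the deviations on two different variables instead.

def pvar(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pcov(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

x = [2, 4, 6, 8]
y = [1, 3, 2, 6]
print(pcov(x, x), pvar(x))  # 5.0 5.0 -- cov of a variable with itself = variance
print(pcov(x, y))           # 3.5
```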

covariates Covariates, also known as control variables, are variables included in an analysis whose parameter estimates are not of substantive interest but whose effects must be controlled to obtain accurate estimates for the major substantive variables.

cross-sectional data Cross-sectional data are data collected on or about only one point in time. These data can be used to identify associations between variables but do not permit claims about time order of variables.

degrees of freedom The degrees of freedom (df) of an SEM model is the difference between the number of data points and the number of parameters to be estimated. The number of data points (i.e., unique matrix elements or known pieces of data) for an SEM model is the number of unique variances and covariances in the observed data being analyzed.
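
The counting rule above can be sketched directly (an illustration, not from the text): with p observed variables there are p(p + 1)/2 unique variances and covariances.

```python
# df = (number of unique variances and covariances) - (free parameters).

def sem_df(p_observed, n_free_params):
    data_points = p_observed * (p_observed + 1) // 2
    return data_points - n_free_params

print(sem_df(6, 13))  # 8: overidentified (df > 0)
print(sem_df(4, 10))  # 0: just-identified (knowns = unknowns)
```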

direct effect A variable has a direct effect on another variable when the
variable’s influence is not exerted through another endogenous variable.
That is, the effect is not mediated by an intervening variable. In the example
below, discrimination (variable A) has a direct effect on historical loss
(variable B) and a direct effect on alcohol abuse (variable C), represented
by paths BA and CA, respectively. Historical loss (variable B) has a direct
effect on alcohol abuse (variable C), which is represented by path CB. In this
example, discrimination also has an indirect effect on alcohol abuse.

endogenous variable Endogenous variables are variables in a model
that are explained or predicted by one or more other variables within the
model. If a variable serves as a dependent variable in at least one equation
represented in a model, it is considered endogenous and is notated by the
Greek symbol η (eta). It is important to remember that an endogenous
variable may also explain or predict another endogenous variable in the
model (i.e., it may also be the independent variable in one or more equations
represented in the model).

estimation Estimation is the process of analyzing the model by using the known information (e.g., covariances of the sample data) to estimate values for the unknown model parameters. In SEM, the goal of estimation is to obtain the parameter estimates that minimize the discrepancies between the covariance matrix implied by the researcher’s model and the covariance matrix of the observed (i.e., input) data. In SEM, estimation is both simultaneous (i.e., all model parameters are calculated at once) and iterative (i.e., the program obtains an initial solution and then attempts to improve these estimates through successive calculations). Many different estimation procedures are available (e.g., maximum likelihood, WLSMV), and the choice of estimation method is guided by characteristics of the data including sample size, measurement level, and distribution.

exogenous variable Exogenous variables are variables in a model that are not explained or predicted by any other variables in the model. That is, the variable does not serve as a dependent variable in any equations represented in the model. By defining variables as exogenous, the researchers claim that these variables are predetermined, and examining causes or correlates of these variables is not the interest of the current study. Exogenous variables are represented in SEM notation by the Greek symbol ξ (ksee).

factor loading A factor loading is a statistical estimate of the path coefficient depicting the effect of a factor on an item or manifest variable.

Factor loadings may be in standardized or unstandardized form and are
usually interpreted as regression coefficients.

fixed parameter Fixed parameters are parameters represented in a model that the researcher does not allow to be estimated from the observed data. Rather, the value specified by the researcher is used by the analysis software as the obtained value of the parameter. Fixed parameters may be set at any value, but the most common are zero (i.e., to indicate no relationship between variables) and unity (or 1.0, e.g., when establishing the metric for a latent variable by fixing an indicator’s loading).

free parameter Free parameters are those parameters represented in a model that the researcher allows to be estimated from the observed data by the analysis software. That is, the estimated value is not set or constrained to any particular value by the researcher but is left “free” to vary. Free parameters allow hypothesized relationships between variables to be tested.

indicator Indicators, also known as manifest variables or items, are observed variables. In CFA, indicators are the observed variables that are used to infer or indirectly measure latent constructs.

identification See model identification.

implied matrix The implied matrix is the matrix of variances and covariances suggested (i.e., implied) from the relationships represented in a hypothesized SEM model. Model fit is determined by the extent to which the model-implied variance–covariance matrix reproduces the matrix from the observed data (i.e., the input matrix).

indirect effect A variable has an indirect effect on another variable when the effect is partially or fully exerted through at least one intervening variable. This intervening variable is called a mediator. In the example below, the effect of the social environment on children’s school success is mediated by, or explained by, the social environment’s effect on psychological well-being. In this example, the social environment has an indirect effect on school success.

input matrix The input matrix is the variance–covariance or correlation
matrix of the observed (i.e., input) variables. Model fit is determined by the
extent to which the model-implied variance–covariance matrix reproduces
the matrix from the observed data (i.e., the input matrix).

just-identified model An identified model in which the number of free parameters (the “unknowns”) exactly equals the number of known values (the “knowns”). A just-identified model has zero degrees of freedom.

latent variable An important distinction in SEM is between observed variables and latent variables. Latent variables are theoretical, abstract constructs or phenomena of interest in a study, such as attitudes, cognitions, social experiences, and emotions. These variables cannot be observed or measured directly and must be inferred from measured variables. They are also known as factors, constructs, or unobserved variables. Constructs such as intelligence, motivation, neighborhood engagement, depression, math ability, parenting style, organizational culture, and socioeconomic status can all be thought of as latent variables.

linear regression Linear regression is a statistical procedure in which there is a hypothesis about the direction of the relationship between one or more independent variables and a dependent variable. If a dependent variable is regressed on only one independent variable, the standardized regression coefficient (beta) that is obtained will be the same as the correlation between the two variables. The unstandardized regression coefficient for a variable in a linear regression equation indicates the amount of change in the dependent variable that is expected for a one-unit change in the independent variable using the independent variable’s original metric. If variables are standardized, λ “is the expected shift in standard deviation units of the dependent variable that is due to a one-standard deviation shift in the independent variable” (Bollen, 1989, p. 349).
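
The claim that the standardized coefficient equals the correlation in simple regression can be checked numerically. This sketch (not from the text) uses simulated data.

```python
# With one predictor, standardizing the slope recovers the correlation:
# beta = b * sd(x) / sd(y) = cov(x, y) / (sd(x) * sd(y)) = r.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(size=200)                   # simulated data

b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # unstandardized slope
beta = b * np.std(x, ddof=1) / np.std(y, ddof=1)     # standardized slope
r = np.corrcoef(x, y)[0, 1]
print(abs(beta - r) < 1e-12)  # True
```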

longitudinal data Longitudinal data are data that measure people or phenomena over time. Cross-sectional data—data collected on or about only one point in time—can be used to identify associations between variables; longitudinal data also permit claims about time order of variables. Short-term longitudinal studies may include pretest, posttest, and follow-up observations. More traditional longitudinal studies may include data collected at many time points over weeks, months, or years. Both types of longitudinal study can be accommodated in the SEM framework, albeit with different strategies.

Mahalanobis distance A statistic that indicates (in standard deviation units) the distance between a set of scores for an individual case and the sample means for all variables. It is used as a diagnostic to assess for multivariate nonnormality. Mahalanobis distance is distributed on a chi-squared distribution with the degrees of freedom equaling the number of predictor variables used in the calculation. Individual cases with a significant Mahalanobis distance (at the p < 0.001 level) are likely outliers.
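
A sketch (not from the text) of the diagnostic: the squared distance of each case from the vector of sample means, scaled by the inverse sample covariance matrix. The p < 0.001 chi-square cutoff for two variables, about 13.82, is taken as given here; in practice it is read from a chi-square table.

```python
import numpy as np

def mahalanobis_sq(data):
    """Squared Mahalanobis distance of each row from the sample means."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = data - mu
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

rng = np.random.default_rng(2)
data = rng.normal(size=(100, 2))
data[0] = [8.0, -8.0]               # plant a clear multivariate outlier
d2 = mahalanobis_sq(data)
print(d2.argmax())                   # 0: the planted case
print(bool(d2[0] > 13.82))           # True: flagged at the p < 0.001 level
```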

matrix A matrix is a set of elements (i.e., numbers, values, or quantities) organized in rows and columns. Matrices vary in size based on the number of variables included and summarize raw data collected from or about individuals. The simplest matrix is one number, or a scalar. Other simple matrices are vectors, which comprise only a row or column of numbers. Analysis of SEM models relies on the covariance or correlation matrices.

measurement error Measurement error refers to the difference between the actual observed score obtained for an individual and the individual’s “true” (unknowable) score for an indicator. In SEM, measurement error represents two sources of variance in an observed indicator: (a) random variance and (b) systematic error specific to the indicator (i.e., variation in indicator scores that is not caused by the latent variable(s) modeled in the measurement model, but by other unobserved factors not relevant to the current model).

mediation Mediation occurs when one variable explains the effect of an independent variable on a dependent variable. In the example below, the effect of the social environment on children’s school success is mediated by, or explained by, the social environment’s effect on health and well-being. In this example, the effect of the social environment on school success is mediated by psychological well-being.

model fit Model fit refers to how well the hypothesized model explains the
data (i.e., how well the model reproduces the covariance relationships in
the observed data). Many indices to assess model fit are available. It is a
good practice to use and report multiple fit measures to evaluate the fit of

a model because each statistic/index is developed on its own assumptions
about data, aims to address different features of model fit, and has both
advantages and disadvantages. “Good fit” does not guarantee that a model
is valid or that all parameters in a hypothesized model are statistically
significant or of the magnitude expected.

model identification Model identification concerns whether a unique estimate for each parameter can be obtained from the observed data. General requirements for identification are that every latent variable is assigned a scale (e.g., the metric of the variable is set by fixing the factor loading of one of its indicators to one) and that there are enough known pieces of data (observed information) to make all the parameter estimates requested in a model. Models may be described as underidentified, just-identified, or overidentified. SEM models must be overidentified in order to test hypotheses about relationships among variables.

model specification Model specification involves expressing the hypothesized relationships between variables in a structural model format. Models should be based on theory and previous research. Models are commonly expressed in a diagram but can also be expressed in a series of equations or in matrix notation. During model specification, the researcher specifies which parameters are to be fixed to predetermined values and which are to be freely estimated from the observed data.

moderation Moderation occurs when the magnitude or direction of the effect of one variable on another is different for different values of a third variable. Gender, for example, would be a moderator of the relationship between social environment and psychological well-being if the regression coefficient for social environment was significantly higher for boys than for girls. In standard multiple regression models, moderation is identified through significant interaction terms. In the SEM framework, moderation is tested using multiple-group analyses. Multiple-group analyses not only indicate if a variable moderates the effects of one or more independent variables on a dependent variable, but they also provide regression coefficients for each level of the moderator (e.g., for boys and girls). Multiple-group analyses in confirmatory factor analysis indicate if a measurement model differs significantly for one group versus another and, if so, which parameters differ.

modification indices Modification indices are statistics indicating how much model fit can be improved by changing the model to allow additional parameters to be estimated. Modification indices are either provided by

default by the analysis software or requested by the user. Changes to
hypothesized models should not be made based solely on modification
indices; changes must be substantively and theoretically justifiable, not just
statistically justifiable.

multiple-group analysis Multiple-group analysis is a technique to test whether the measurement and structural components of the model operate the same for different groups. In a single-group SEM analysis, the assumption is that parameter estimates are the same for all groups in the sample (e.g., males and females, doctors and nurses, renters and homeowners). In a multiple-group analysis, this assumption is tested to determine if better fit can be attained by allowing some or all parameter estimates to vary across groups. Multiple-group analysis can be used in CFA to assess whether a scale performs equally well for different groups (e.g., high school versus college students).

nested model A nested model is a subset of another model; that is, a nested model contains a subset of the parameters of, but all the same observed variables as, the model in which it is nested. Nested models are commonly used to test alternative explanations of the data through different parameter configurations (e.g., omitting a path between two latent variables; constraining a path, such as a factor loading, to be equal for two groups).

nonconvergence Nonconvergence of a model occurs when the iterative estimation process is unsuccessful in obtaining a stable solution of parameter estimates.

nonrecursive model A nonrecursive model has one or more feedback loops in the structural part of the model or has correlated structural errors. That is, effects between variables may be bidirectional, or there are correlated errors between endogenous variables that have a direct effect between them.

observed variable (manifest variables) An important distinction in SEM is between observed variables and latent variables. Observed variables are variables that are actually measured for a sample of subjects during data collection. Observed variables, which are sometimes referred to as manifest variables, may come from a number of sources, such as answers to items on a questionnaire, performance on a test or assessment, or ratings provided by an observer.

overidentified model An overidentified model is a model for which the number of parameters to be estimated is lower than the number of unique pieces of input data. Overidentification places constraints on the model, allowing hypotheses about relationships among variables to be tested.

parameter A parameter is a property of a population; population parameters are estimated using statistics obtained from sample data. The primary parameters of interest in an SEM are the variances, regression coefficients, and covariances among variables. When specifying a model, the researcher must choose whether a parameter represented in the model will be free, fixed, or constrained based on an a priori hypothesis about the relationships between variables.

power Power refers to the statistical ability to reject a false hypothesis. Power is affected by the probability of making a Type I error (α; i.e., rejecting a hypothesis that in fact is true and should not be rejected), sample size, and effect size. Researchers generally desire a high level of power, such as 0.80.

recursive model A recursive model is a structural model that has no paths that create a feedback loop or reciprocal causation. That is, all effects between variables are one directional, and there are no correlated errors between endogenous variables that have a direct effect between them.

residual matrix The residual matrix is the matrix containing the differences
between corresponding elements in the analyzed and implied matrices.
It is obtained by subtracting each element of the implied matrix from its
counterpart in the input matrix. If the elements of a residual matrix are small
and statistically indistinguishable from zero, then the analyzed model fits the
data well.

RMSEA RMSEA, or the root mean square error of approximation, is one of many model fit indices available to assess how close the implied matrix is to the observed variance–covariance matrix. It is a per-degree-of-freedom measure of discrepancy. RMSEA values ≤ 0.05 indicate close fit, values between 0.05 and 0.08 indicate reasonable fit, and values ≥ 0.10 indicate poor fit.
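
A sketch (not from the text) of one common computing formula for RMSEA; software packages may use a slightly different variant, so treat this as illustrative.

```python
# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))): discrepancy per degree
# of freedom, with chi-square values below df treated as perfect fit.
from math import sqrt

def rmsea(chi2, df, n):
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(36.0, 20, 201), 3))  # 0.063: "reasonable" by the cutoffs above
print(rmsea(18.0, 20, 201))            # 0.0: chi2 below df counts as close fit
```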

simultaneous regression equations Equations in which one variable can serve as both an independent and a dependent variable. The ability to estimate simultaneous regression equations is a critical feature of SEM and one of the key advantages of SEM over other methods.

standard deviation The standard deviation of a variable is the square root
of its variance and is a summary measure of how much scores obtained from
a sample vary around (or deviate from) their mean. Unlike variance, it is in
the same metric as the variable. SD or s may be used to denote standard
deviation.

specification error Specification error occurs when an assumption made in a structural model is false. For example, if a path in a model is set equal to zero (e.g., no line connecting two variables indicates a correlation of zero), but the true value of that path is not exactly zero (e.g., there is in fact some correlation, however small, between the variables), then there is specification error in the model. It is reasonable to expect that all models contain some amount of specification error. One goal of model specification is to propose a model with the least specification error.

standard error A standard error is the standard deviation of the sampling distribution of a statistic. In statistical analysis, researchers use a statistic’s standard error to construct a 95% confidence interval or to conduct a statistical significance test.

structural error The structural error for any dependent variable in a structural model is the variance of the variable that is not explained by its predictor variables. In a general structural model, any variable that is regressed on others in the model has an error term representing structural error. Structural error can also be thought of as the error of prediction because, as in all regression analyses, variance in a dependent variable is unlikely to be completely explained by the variables in the model; rather, it is likely to be influenced, or predicted, by something other than the variables included in a model.

TLI TLI, or the Tucker-Lewis index, is one of many indices available to assess
model fit. TLI values above 0.95 indicate good fit.

total effect Total effect refers to the sum of all effects, both direct and indirect, of one variable on another variable: direct effects + indirect effects = total effect.
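
This sum can be traced through the direct-effect example of discrimination (A), historical loss (B), and alcohol abuse (C). The path coefficients below are hypothetical, invented for the sketch (not from the text).

```python
path_BA = 0.40   # direct effect of A on B
path_CA = 0.25   # direct effect of A on C
path_CB = 0.30   # direct effect of B on C

indirect = path_BA * path_CB        # A -> B -> C, the product of the paths
total = path_CA + indirect          # direct + indirect
print(round(indirect, 2))  # 0.12
print(round(total, 2))     # 0.37
```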

underidentified model An underidentified model is one in which the number of parameters to be estimated exceeds the number of unique pieces of observed data.

unobserved variable See “Latent Variable.”

variance Variance is a summary measure of how much the scores on a variable from a set of individuals (a sample) vary around the mean of the scores. Mathematically, variance is the sum of the squared differences from the mean of all of a sample’s scores on a variable, divided by the number of scores (for the population data) or the number of scores minus 1 (for the sample data). A common symbol for the variance of a variable in a sample is σ² or s².
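
A direct transcription (an illustration, not from the text) of the formula above, showing both the population (divide by n) and sample (divide by n − 1) versions:

```python
def variance(scores, sample=True):
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)   # sum of squared deviations
    return ss / (n - 1) if sample else ss / n

scores = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance(scores, sample=False))           # 4.0 (population)
print(round(variance(scores, sample=True), 2))  # 4.57 (sample)
```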

variance–covariance matrix A variance–covariance matrix contains the variances of each variable along the main diagonal and the covariances between each pair of variables in the other matrix positions. This matrix (or its corresponding correlation matrix plus standard deviation and mean vectors) is central to SEM analysis: it provides the data for the SEM analysis, and it is the foundation for testing the quality of a model. The quality of a model is measured in terms of how closely the variance–covariance matrix implied by the researcher’s model reproduces the observed (i.e., the input) variance–covariance matrix of the sample data.
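
A small sketch (not from the text) building the input matrix for three observed variables from simulated raw data:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(50, 3))        # 50 cases, 3 observed variables
S = np.cov(data, rowvar=False)         # 3 x 3 variance-covariance matrix

print(S.shape)                                            # (3, 3)
print(np.allclose(S, S.T))                                # True: symmetric
print(np.allclose(np.diag(S), data.var(axis=0, ddof=1)))  # True: diagonal holds variances
```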

variance inflation factor (VIF) A statistic that is widely used as a diagnostic to detect multicollinearity. VIF measures how much the variance of an estimated regression coefficient is increased (inflated) because of collinearity. A maximum VIF greater than 10 indicates a potentially harmful multicollinearity problem.
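
A sketch (not from the text) of the usual computation: regress each predictor on the others and take VIF = 1 / (1 − R²). The simulated predictors are invented for the example.

```python
import numpy as np

def vif(X):
    """VIF for each column of predictor matrix X (no intercept column)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(4)
a = rng.normal(size=100)
b = rng.normal(size=100)
c = a + 0.05 * rng.normal(size=100)     # c is nearly a copy of a
X = np.column_stack([a, b, c])
v = vif(X)
print(bool(v[1] < 10 < v[0]))  # True: b is fine; a (and c) cross the cutoff
```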

WRMR Weighted root mean square residual is one of several fit indices
available to assess model fit. WRMR is provided in Mplus output only (not
Amos). WRMR values ≤ 0.90 are suggestive of good model fit.

