
HHS Public Access

Author manuscript
Assess Eff Interv. Author manuscript; available in PMC 2022 September 01.

Published in final edited form as:


Assess Eff Interv. 2021 September 01; 46(4): 281–291. doi:10.1177/1534508420936753.

Establishing a Reading Mindset Measure: A Validation Study


Jamie L. Tock¹, Jamie M. Quinn¹, Stephanie Al Otaiba², Yaacov Petscher¹, Jeanne Wanzek³
¹Florida Center for Reading Research, Florida State University
²Southern Methodist University
³Vanderbilt University
Correspondence concerning this article should be addressed to Jamie Tock, Florida Center for Reading Research, Florida State University, 2010 Levy Ave., Suite 100, Tallahassee, FL 32310, USA. [email protected], Phone: 208-590-1182.
Conflict of Interest: The authors declare they have no conflict of interest.
Dataset: The dataset examined for the current study will be posted on the Open Science Framework website (osf.io) when the current set of manuscripts including this dataset is concluded.
Supplemental Materials: Supplemental materials for this manuscript are included on the SAGE website.

Abstract
Much attention has been given to the development and validation of measures of growth mindset
and its impact on learning, but the previous work has largely been focused on general measures
of growth mindset. The present research was focused on establishing the psychometric properties
of a Reading Mindset (RM) measure among a sample of upper elementary school students
and validating the measure via its relations with standardized measures of word reading and
comprehension. The RM measure was developed to capture student beliefs about their ability,
learning goals, and effort during reading. Item Response Theory (IRT) was used to select
items that optimally measured the RM construct from a pool of existing items from previous
research (Petscher et al., 2017). The final five-item RM measure predicted reading comprehension
outcomes above and beyond the effects of word reading, indicating that this measure may be an
important tool for diagnosing non-cognitive areas of improvement for developing readers. The
implications, limitations, and future directions for expanding upon the measure were discussed.

Keywords
Reading Mindset; Growth Mindset; Reading Comprehension; Word Reading; Upper Elementary

Skilled reading comprehension, or proficient reading of connected text to derive meaning
and understanding, is dependent on underlying component skills such as word decoding and
listening comprehension (Gough & Tunmer, 1986; Kim, 2017; Tunmer & Chapman, 2012).
Studies investigating relations among these skills have provided mixed evidence. Some
research has shown that 99–100% of the variance in reading comprehension is accounted
for by these component skills (e.g., Foorman et al., 2015; Kim, 2017); however, other
research has shown variance accounted for in reading comprehension to be between 50 and
75 percent (Joshi et al., 2012; Ouellette & Beers, 2010). Indeed, a recent large-scale meta-analysis
of 155 studies indicated that reading component skills (e.g., vocabulary, listening
comprehension, word reading, fluency) and cognitive factors (e.g., background knowledge,
reasoning and inference, working memory) accounted for 60 percent of the variance in
reading comprehension, leaving the remaining variance to be explained by unmeasured
non-cognitive variables and measurement-related error (Quinn & Wagner, 2018).

In the present study, the focus was on potential non-cognitive variables that might account
for unexplained variance in reading comprehension for students in upper elementary school.
Comprehension, specifically reading to learn, becomes increasingly important in both
English language arts and across content areas for students in upper elementary grades,
by which time they are expected to have mastered earlier phases of learning how to read
(e.g., Adams, 1990). Yet researchers’ findings from reading intervention studies indicate it
is challenging to improve reading comprehension skills for struggling readers once students
reach the upper elementary grades, particularly when the assessments are standardized
measures of comprehension, rather than near-transfer measures (e.g., Quinn & Wagner,
2018; Wanzek & Roberts, 2012). Thus, this is an important window of time as research has
shown that students who struggle to read in upper elementary school are likely to remain
struggling readers through high school (Brasseur-Hock et al., 2011; Francis et al., 1996;
Moats, 1999; Vaughn et al., 2003), and subsequently are at higher risk for dropping out of
school (Dynarski et al., 2008).

A variety of non-cognitive factors that may affect reading comprehension could include
behaviors and attitudes toward reading, motivation for reading, social-emotional learning,
beliefs about effort, approaches to learning, and implicit theories of intelligence (e.g.,
Duckworth & Yeager, 2015; Sisk et al., 2018). Implicit theories of one’s own intelligence, or
mindset, reflect beliefs about whether intelligence is inherent or malleable and shape the
relations between motivation and achievement or learning goals (Dweck, 1986, 1999; Dweck
et al., 1995). When a child has a fixed, or entity, mindset, they believe their intelligence
and ability cannot be changed because it is outside of their control. These children tend
to (1) hold performance-oriented goal beliefs, (2) be highly susceptible to others’ judgment
of their learning (Baird et al., 2009), (3) be more likely to display a helpless learning or
emotional response, and (4) avoid challenges and make negative comments
about their own abilities (Baird et al., 2009; Dweck, 1999). In turn, a child with a growth, or
incremental, mindset, believes that intelligence and academic ability can be developed and
changed through perseverance, grit, and practice (Dweck, 1999).

Mindset has also been linked to self-regulation through a meta-analysis (e.g., Burnette et al.,
2013). Children with growth mindsets tend to use mastery-oriented learning goals and have
better self-regulation, such that they are more likely to seek out challenges and adapt to poor
performance through additional effort (e.g., Sisk et al., 2018). In their large meta-analysis
(n = 57,155, k = 43), Sisk and colleagues examined the effects of mindset interventions
on academic achievement (i.e., grades or GPA, standardized assessments, school or course
completion) and noted small but significant positive effects across all students (d = .08).
Larger effects were found for high-risk students (d = .19) and for students from low
socioeconomic backgrounds (d = .34); however, caution should be taken in extending these
findings to elementary students as most of the included studies focused on adolescents
or adults. Furthermore, despite growing interest in individual differences in mindset and

other non-cognitive abilities, researchers in the field (e.g., Duckworth & Yeager, 2015) have
noted there is a lack of agreement among scientists and the lay public about terminology,
definition, and measurement. However, they argued that schools need to understand which
sets of brief tasks can predict performance on academic behaviors and lead to school
improvements (e.g., Bryk et al., 2015).

Within the reading domain, students with more negative behaviors towards reading, such
as avoiding reading related schoolwork, typically have worse reading outcomes (r = −.26;
Baker & Wigfield, 1999). Elementary-aged students with more positive attitudes towards
reading, such as students who endeavor to read more and find reasons to enjoy reading,
typically have higher reading achievement outcomes (r = .44; Petscher, 2010). A more
positive attitude towards reading moderates students’ motivation for reading (Petscher, 2010;
Robinson & Weintraub, 1973), and students with higher motivation typically have better
reading outcomes (r = .49 – .51, Taboada et al., 2009). In their study of good and poor
readers, Logan, Medford and Hughes (2011) found that for students with poor reading
comprehension, intrinsic motivation accounted for an additional 21% of the variance in
reading comprehension above the effects of verbal IQ and decoding. However, for good
readers, no additional significant variance was accounted for in their reading comprehension
skills above and beyond the significant effects of verbal IQ and decoding (Logan et al.,
2011). Social-emotional learning is dependent on emotional intelligence, indicated by
self-regulation and emotional expression (Salovey & Mayer, 1990). Better self-regulatory behaviors
are significantly correlated with higher literacy (.18 ≤ r ≤ .23), vocabulary (.27 ≤ r ≤ .35),
and math outcomes (.37 ≤ r ≤ .47; McClelland et al., 2007).

Whereas behavior, attitudes, motivation, and social-emotional learning have concurrent or
predictive relations to reading, implicit theories have only recently been studied in the
area of reading development. For example, Toste and colleagues (2017) tested the effects
of an embedded motivation training within a multisyllabic word training for third- and
fourth-grade students. Students were randomly assigned to the motivation plus word
reading, word reading alone, or a business-as-usual (BAU) control condition. Students in
the motivation plus word reading condition demonstrated stronger sentence-level reading
comprehension skills than students in the word reading only or BAU condition. Authors of
a more recent study of fourth- and fifth-grade students found that mastery and performance-avoidance
achievement goals fully mediated the relation between global mindset¹ and
reading comprehension achievement in struggling readers, even after controlling for word
reading and vocabulary (Cho et al., 2019).

¹Cho and colleagues (2019) utilized three questions regarding fixed mindset adopted from Blackwell and colleagues (2007) as their measure of global mindset.

If a reading-specific mindset exists uniquely from a global mindset, it is important to
discover how this reading mindset manifests, if it can be specifically measured using
reading-related mindset items, and if a reading-specific mindset measure can be used to
predict reading outcomes. Although a reading beliefs inventory was recently created and
validated in an undergraduate population, this measure focused more on a reader’s beliefs on
how to approach the texts rather than measuring a mindset pertaining to their own reading
skills, as exemplified by the epistemological wording of some of the items (e.g., “Different
types of text force one to learn new ways of reading,” “If a text is written correctly, everyone
can understand it;” see Lordán et al., 2017). Petscher and colleagues (2017) developed a
joint model of general and reading specific mindset by administering a mindset survey
including two subsets of items to 195 fourth graders in a low performing sample who
were participating in a larger study. The first subset included 13 items from the general
mindset measure established by Blackwell et al. (2007) related to theory of intelligence,
learning goals, and effort beliefs. The second subset included 13 reading specific mindset
items developed to measure non-cognitive information related to mindset, approaches to
learning, effort beliefs, and attitudes and emotions about reading (Al Otaiba et al., 2015).
This domain specific measure was written particularly to focus on struggling readers in
upper elementary grades. To obtain a final measure, the authors discarded 11 total items
with a negative impact on reliability and then tested a series of competing models including
eight general growth mindset items and seven reading specific mindset items (Petscher et
al., 2017). The best model fit was for a bifactor model indicated by two separate, specific
factors of general mindset and reading mindset, and a single underlying general factor for
global growth mindset (GGM). In a structural equation model (SEM), the authors found
that the specific reading mindset factor uniquely predicted reading comprehension and word
reading outcomes after controlling for general mindset and GGM. In an alternative model,
GGM and reading mindset accounted for statistically significant unique variance in reading
comprehension (15%) beyond the unique variance accounted for by word reading (67%).

Although measures have been developed for domain-specific tasks in math and history
(Buehl et al., 2002), and for broader reading beliefs in undergraduate students (Lordán
et al., 2017), little attention has been given to a reading specific mindset measure for
elementary students. Petscher and colleagues (2017) developed a joint measure of general
and reading specific mindset but did not examine the ability of the items of the reading
specific mindset measure to discriminate between levels of reading mindset across the trait
continuum. Further, there is evidence that including both word reading
and mindset training resulted in better sentence-level reading comprehension than a word
reading intervention alone (e.g., Toste et al., 2017), and poor readers benefited from higher
intrinsic motivation in predicting their reading comprehension outcomes above the effects
of decoding (Logan et al, 2011). However, no existing study has examined the interaction
between reading mindset and word reading, and testing this hypothesis could lead to a more
informative model of reading comprehension, especially in lower-achieving readers.

Present Study

The objective of the present study was to establish a reading mindset measure by examining
the seven items from Petscher and colleagues (2017) that best measured the reading mindset
construct, replicate the statistically significant relations with word reading and reading
comprehension, and propose an alternative model featuring an interactive effect of reading
mindset and word reading on reading comprehension outcomes. To reflect the broader
approach to mindset for reading achievement (i.e. theory of intelligence, learning goals,
and effort beliefs), and distinguish this measure from the traditional conception of growth

mindset, the novel measure is henceforth referred to as Reading Mindset (RM). We sought
to answer the following research questions:

1. Does each of the seven items hypothesized to be related to reading mindset provide unique information for the RM construct with an acceptable level of reliability?

2. Does RM predict significant variance in both word reading and reading comprehension outcomes in a sample of fourth-grade students?

3. Does RM account for significant variance in reading comprehension above and beyond the component skill of word reading?

4. Is there an interactive effect between word reading and reading mindset on reading comprehension outcomes?

Methods

Participants
Sample participants consisted of 430 (Male = 200; Female = 194; no response = 36) fourth-grade
students participating in a larger intervention study who were recruited from two
states in the southern United States. This larger intervention study included two cohorts of
fourth grade students from fifteen public schools located in three school districts (Petscher
et al., 2017). Approximately two thirds of the sample (n = 280) were at or below the
30th percentile on the reading comprehension subtest of the Gates-MacGinitie Reading Test
(MacGinitie & MacGinitie, 2006). School districts provided demographic information for
study participants. For ethnicity, 34.6% (n = 149) identified as Hispanic, 48.6% identified as
non-Hispanic (n = 209), and ethnicity was not available for 16.7% (n = 72) of the sample.
The racial composition of the sample was 37.9% (n = 163) Black, 24.4% (n = 105) White,
2.1% (n = 9) American Indian, and 1.9% (n = 8) Asian or Pacific Islander. Race was
unavailable for approximately 33 percent (n = 145) of the sample. All schools provided
instruction only in English, with about 16 percent (n = 71) of students identified as limited
English proficiency or eligible for language support. Approximately 66 percent (n = 282)
of the sample were eligible for free or reduced cost lunch. About 9 percent (n = 37) of the
students had a disability (e.g., learning disability).

Measures
The RM measure, two measures of word reading, and three measures of reading
comprehension were administered to the sample.

Reading Mindset (RM)—In the RM measure, items were written to reflect students’
beliefs about their ability, learning goals, and effort during reading (Al Otaiba et al., 2015).
Some examples of the items included, “If a book is hard to read, I stop reading it,” and
“I don’t like when my teacher corrects me when I am reading.” The items were originally
scored on a six-point Likert scale representing agreement with the target phrase. Following
the data collection, the items were reverse coded, such that low values corresponded to a
fixed mindset and higher values corresponded to a growth mindset (1 = Agree a lot, 6 =
Disagree a lot).
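
As a concrete illustration of this scoring step, a minimal R sketch is shown below; the data frame rm_raw and the column names RM1–RM7 are hypothetical stand-ins, not the study's actual variable names.

```r
# Minimal sketch of the reverse-coding step described above (hypothetical
# column names). On the final scale, 1 = Agree a lot and 6 = Disagree a lot,
# so higher values reflect a more growth-oriented reading mindset.
rm_cols <- paste0("RM", 1:7)

reverse_code <- function(x, max_option = 6) {
  # Flip a Likert response recorded on a 1..max_option scale.
  (max_option + 1) - x
}

rm_raw[rm_cols] <- lapply(rm_raw[rm_cols], reverse_code)
```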

Word Reading—Word reading was individually measured by two subtests from the
Woodcock-Johnson III (WJ) Tests of Achievement: Letter-Word Identification (WJLW)
and Word Attack (WJWA). In the WJLW subtest, students identified (i.e., read aloud)
a list of letters and words that increased in difficulty. This task determines a student’s
skills in recognizing letters and words. In the WJWA subtest, students read aloud from
a list of increasingly difficult nonsense words, with the results indicating a participant’s
usage of letter-sound correspondences to aid in pronouncing unfamiliar words. Content and
concurrent validity were established using a representative sample; test-retest coefficients
for the two subtests used were .95 for Letter-Word Identification and .83 for Word Attack
(Woodcock et al., 2001).

Reading Comprehension—Reading comprehension was assessed using the Woodcock-Johnson III Passage Comprehension subtest (WJPC), the Gates-MacGinitie Reading Test –
Comprehension subtest (GMRTC; MacGinitie & MacGinitie, 2006), and the Test of Silent
Reading Efficiency and Comprehension (TOSREC; Wagner et al., 2010). The WJPC subtest
is a cloze task requiring a participant to supply missing words to sentences and passages.
Split-half reliability is .88, and the WJPC is correlated highly with the Wechsler Individual
Achievement Test (.70–.79) and the Kaufman Test of Educational Achievement (.62–.81;
Woodcock et al., 2001).

The GMRTC is a multiple-choice assessment of a student’s ability to understand main ideas
and draw inferences from the provided passages. Alternate form reliability ranges from .74
to .92 and test–retest reliability ranges from .88 to .92; validity estimates with other tests of
language and reading range from .60 to .62 (MacGinitie & MacGinitie, 2006).

The TOSREC is a timed reading task (3 min time limit) where a student silently reads
sentences of increasing length and complexity and assesses the truth of the sentence by
answering true or false (Wagner et al., 2010). Alternate form reliability exceeds .85 across
forms and grade levels; correlations with other measures of reading (e.g., Woodcock-Johnson
Tests of Achievement, 3rd Edition) exceed .70 (Wagner et al., 2010).

Procedure
The group-administered reading mindset survey was distributed via SurveyMonkey to all
participating students in the spring of the fourth grade (April/May). Trained research staff
also administered the standardized word reading and reading comprehension assessments
individually to students over a two-week period. The measures were counterbalanced across
students. All assessors were trained and required to demonstrate 100 percent reliability in
administration and in scoring prior to conducting assessments in the field.

Data Analytic Plan


RM Item Selection—In accordance with the results from Petscher and colleagues (2017),
seven items (see Table 1 for item content) that optimally measured the reading mindset
factor were specified in the current study. The items were specified on a unidimensional RM
factor using an IRT graded response model for categorical items with multiple responses. A
one-parameter logistic model (difficulty parameters estimated, slope parameters constrained


to equality; 1PL) and a two-parameter logistic model (difficulty parameters and slope
parameters estimated; 2PL) were compared using a likelihood ratio test to determine which
model best fit the sample data. The difficulty and discrimination parameters (b and a,
respectively) of the best model were then examined to select the fewest number of items
where the selected items each provided unique information and discriminated well between
different levels of the RM construct. Desirable a parameters were those that exceeded 1.0 in
value and b values were examined to identify those that included unique information across
the RM trait continuum. Additionally, a reliability analysis was performed to determine the
impact of the deletion of individual items on coefficient omega (Dunn et al., 2014). All
IRT models were estimated, and parameter estimates obtained, using the mirt² package
(Chalmers, 2012) in R.

²The item intercepts in mirt were converted to item difficulty parameters by dividing the item intercepts by the negative discrimination value (b = d / −a).
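
To make this model comparison concrete, the sketch below shows one way the constrained (slopes equal) and unconstrained graded response models could be fit and compared with mirt; the data object rm_items (e.g., the reverse-coded items from the earlier sketch) and the exact call pattern are illustrative assumptions, not the authors' actual script.

```r
library(mirt)

# rm_items: hypothetical data frame holding the seven RM item responses
# (RM1-RM7), each scored 1-6 after reverse coding.

# Constrained model: a single RM factor with slopes held equal across items.
constrained_spec <- mirt.model("
  RM = 1-7
  CONSTRAIN = (1-7, a1)
")
fit_constrained <- mirt(rm_items, constrained_spec, itemtype = "graded")

# Unconstrained model: each item estimates its own slope (2PL-type).
fit_2pl <- mirt(rm_items, 1, itemtype = "graded")

# Likelihood ratio test (with AIC/BIC) comparing the nested models.
anova(fit_constrained, fit_2pl)

# Discrimination (a) and threshold (b1-b5) parameters in the classic IRT
# metric; this applies the b = d / -a conversion noted in the footnote above.
coef(fit_2pl, IRTpars = TRUE, simplify = TRUE)

# Coefficient omega (Dunn et al., 2014) could be obtained from a separate
# reliability routine, e.g., MBESS::ci.reliability(rm_items, type = "omega").
```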

ICC and TIF—The item characteristic curves (ICCs) and test information function curve
(TIF) were reported for the final set of items. The ICCs visually display the relation between
the ability of an individual and the probability of a response. In a 2PL model, the curves
reflect two specific pieces of information pertinent to the interpretation of the RM items: (1)
the conditional probability of a response, whereby a given threshold (i.e., the b value in a
graded response model) marks the trait level at which the probability of responding at or
above that threshold equals .50; and (2) how well the item discriminates between those who
are higher on the measured attribute versus those who are lower on the measured attribute (i.e., the a parameter; De
Ayala, 2013). The TIF is a sum of item information curves across the complete set of items
and indicates the level of precision (i.e. reliability) in the measured attribute along levels
of the attribute. In this way, the TIF may communicate for whom the scores are precise.
Coefficient alpha thresholds of .70 and .80 were used as indicators of the precision of
measurement for RM across the trait continuum. The ICC and TIF figures were plotted in
the jrt package in R (Myszkowski & Storme, 2019) based on the estimation procedures from
the mirt package (Chalmers, 2012).
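
The published figures were produced with jrt, but equivalent curves can be drawn from the fitted graded response model itself; the sketch below uses mirt's built-in plots and a common information-to-reliability conversion that assumes a unit-variance latent trait (an assumption of this sketch, not necessarily the exact conversion the authors used).

```r
# Category response (item characteristic) curves for the fitted model.
plot(fit_2pl, type = "trace")

# Test information function across the RM trait continuum.
plot(fit_2pl, type = "info")

# With a unit-variance latent trait, conditional reliability is often
# approximated as I(theta) / (I(theta) + 1); reliabilities of .70 and .80
# then correspond to information values of roughly 2.33 and 4.0.
theta_grid <- matrix(seq(-3, 3, by = 0.1))
info <- testinfo(fit_2pl, theta_grid)
approx_rel <- info / (info + 1)
range(theta_grid[approx_rel >= .70])  # trait region measured with rel >= .70
```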

Structural Equation Modeling—The validity of the final RM measurement model was
examined via SEMs. The retained items were specified on a single RM latent factor. Two
latent factors for word reading and reading comprehension were composed by specifying
the WJLW and WJWA observed variables on the word reading (WR) latent variable,
and specifying the WJPC, GMRTC, and TOSREC observed variables on the reading
comprehension (RC) latent variable.

Three structural models were fit to the data. The goal of Model 1 was to determine whether
RM accounted for a significant amount of variance in RC and in WR by regressing WR on
RM and RC on RM. In Model 2, an alternative specification was tested to determine whether
RM could predict variance in RC above and beyond the effects of WR. Finally, in Model
3, an interaction model was performed where RC was regressed on RM, the interaction of
RM and WR (RMWR), and the covariate WR to determine whether the interaction term
accounted for a statistically significant amount of variance in RC above WR and RM. The
interaction term for Model 3 was composed via the product indicator approach whereby

pairs of manifest variables from the latent variables are multiplied to create a series of
products (Kline, 2016). A significant interaction term in the SEM was plotted by using one
standard deviation thresholds on the focal (WR) and moderator (RM) variables to estimate
RC as a function of higher or lower ability. Judgments on the quality of fit for all SEMs
were based on recommendations by Kline (2016; TLI > .90; CFI > .90; root mean square
error of approximation [RMSEA] between .05 and .08). All SEMs were estimated in the
lavaan package in R (Rosseel, 2012) with the WLSMV estimator for categorical variables
with robust standard errors.
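
A minimal lavaan sketch of Models 2 and 3 follows, assuming the five retained items (RM2–RM6) and the reading measures are columns of a data frame dat; the product-indicator pairing shown for the latent interaction is one of several possible matching schemes and, like the variable names, is an illustrative assumption rather than the authors' exact specification.

```r
library(lavaan)

# Model 2: RC regressed on RM and WR; RM items treated as ordered categories.
model2 <- '
  RM =~ RM2 + RM3 + RM4 + RM5 + RM6
  WR =~ WJLW + WJWA
  RC =~ WJPC + GMRTC + TOSREC
  RC ~ RM + WR
'
fit2 <- sem(model2, data = dat,
            ordered = c("RM2", "RM3", "RM4", "RM5", "RM6"),
            estimator = "WLSMV")
summary(fit2, fit.measures = TRUE, standardized = TRUE)

# Model 3: latent interaction via product indicators. Mean-centered item and
# word-reading indicators are multiplied to form manifest indicators of RMWR.
center <- function(x) x - mean(x, na.rm = TRUE)
dat$RMWR1 <- center(dat$RM2) * center(dat$WJLW)
dat$RMWR2 <- center(dat$RM4) * center(dat$WJWA)

model3 <- '
  RM   =~ RM2 + RM3 + RM4 + RM5 + RM6
  WR   =~ WJLW + WJWA
  RMWR =~ RMWR1 + RMWR2
  RC   =~ WJPC + GMRTC + TOSREC
  RC ~ RM + WR + RMWR
'
fit3 <- sem(model3, data = dat,
            ordered = c("RM2", "RM3", "RM4", "RM5", "RM6"),
            estimator = "WLSMV")
fitMeasures(fit3, c("tli", "cfi", "rmsea"))
```

Predicted RC values at one standard deviation above and below the mean of WR and RM can then be computed from the estimated coefficients to produce an interaction plot of the kind described above.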

Results
Descriptive Statistics
The descriptive statistics including information about the means, standard deviations,
distribution, data missingness, and correlation coefficients are listed in Table S1 of the
Online Supplementary Materials (OSM), while the item response proportions for the seven
items are listed in Table S2. Data missingness for the RM items was extremely low, with
a maximum of 2.10 percent missing data for individual items. The items ranged in mean
from 3.47 to 4.87 and indicated no departures from normality for skewness (≤ 2) or kurtosis
(≤ 7) as suggested by Curran, West, and Finch (1996). A closer look at the distribution of
the items in Table S2 indicated five items (RM1, RM2, RM4, RM5, RM6) had responses
that were negatively skewed, with between 65 and 73 percent of the responses being a
five (disagree) or six (disagree a lot), indicating a higher proportion of positive mindset
oriented responses. Among these items there were few low responses, with between 13 and
17 percent of the responses being a one (agree a lot) or two (agree), indicating a lower
proportion of negative mindset responses. Items RM3 and RM7 had more balanced item
responses, with 40 and 55 percent of the responses being a five or six, and 38 and 26 percent
of the responses being a one or two, respectively.

The reading outcome measures indicated no violations of the standards for skewness or
kurtosis and there was no missing data. The correlations between six of the RM items
(RM1-RM6) and the reading outcome measures (WJLW, WJWA, GMRTC, TOSREC,
WJPC) were significant and positive (.12 ≤ r ≤ .40). RM7 was only significantly related
to WJWA (r = .11), indicating a general lack of relations between RM7 and the outcome variables.
The WJLW and WJWA were highly correlated (r = .83), the correlations were moderate
between measures of word reading and reading comprehension (.48 ≤ r ≤ .75), and the
correlations among reading comprehension measures were moderate (.60 ≤ r ≤ .69).
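
For readers reproducing Table 1 from their own data, the descriptives and correlations reported here amount to a few lines of R; dat and the listed column names are assumptions standing in for the study's analysis file.

```r
library(psych)  # describe() reports means, SDs, skewness, and kurtosis

vars <- c(paste0("RM", 1:7), "WJLW", "WJWA", "GMRTC", "TOSREC", "WJPC")

# Means, SDs, skewness, and kurtosis (cf. the Curran et al., 1996 cutoffs
# of 2 for skewness and 7 for kurtosis).
describe(dat[, vars])

# Percentage of missing data per variable.
round(colMeans(is.na(dat[, vars])) * 100, 2)

# Pairwise correlations among the RM items and reading outcome measures.
round(cor(dat[, vars], use = "pairwise.complete.obs"), 2)
```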

Scale Establishment

A graded response model including the seven items was composed to determine which items
optimally measured the RM construct. A comparison of the seven-item graded response
model with equality constraints on the slope parameters (−2LL = 4252.22; AIC = 8576.44;
BIC = 8722.73) to the unconstrained seven-item model (−2LL = 4220.59; AIC = 8525.18;
BIC = 8695.86) indicated a clear preference for the unconstrained model (i.e., 2PL),
suggested by the statistically significant likelihood ratio test, LRT(6) = 63.25, p < .001.

The parameter estimates for the seven-item 2PL graded response model are listed in Table
1. The b parameter estimates ranged from −3.07 to 1.36, indicating considerable range in
the location of responses across the trait continuum, while the a parameter estimates ranged
from .74 to 1.99, indicating a considerable range of ability to discriminate among the set
of items. Of the seven items, RM1 and RM7 were removed given their lack of unique
information along the reading achievement mindset trait continuum, content overlap, poor
discrimination values relative to the other items at the same difficulty level, and that the
reliability became larger when each of these items was dropped. RM3 was considered
for removal given its modest discrimination value (.98), but ultimately retained given
the unique information it provided at the higher end of the reading mindset trait and its
positive contribution to reliability. The RM2, RM4, RM5, and RM6 items were also retained
given good discrimination values and contributions to the overall scale information. The
coefficient omega for the five retained items (RM2, RM3, RM4, RM5, and RM6) was .73.

ICC and TIF—Figures pertaining to the ICCs (see Figure S1) and TIF (see Figure S2)
are included in the OSM. The interpretation of the TIF indicated less information above
zero, and lower precision of measurement above trait levels of 1.25. More specifically,
levels of precision exceeding a reliability of .70 were indicated for the range of RM
from approximately −2.50 to 1.25, corresponding to 89 percent of individuals in a normal
distribution, and levels of precision exceeding .80 were indicated for the range of RM from
−1.75 to .25, corresponding to 56 percent of individuals in a normal distribution. Relatedly,
the ICC indicated a relatively low threshold for the highest response options across the
items, and a heavy concentration of item thresholds in a narrow range of the RM trait score
between −1.50 and −.50, with some non-sequentially ordered item thresholds. The overlap
of item thresholds indicates less value for the response options in the middle of the trait
continuum (i.e. response options 2 through 5).

In response to the item overlap, a five-item post-hoc RM model was composed with four
response categories rather than six and is discussed in detail in the OSM (Alternative
Model Including Four Response Categories), with the parameter estimates listed in Table
S3 in the OSM, and the ICCs and TIF depicted in Figures S3 and S4 respectively of the
OSM. Although the four response model indicated less crowding of item thresholds, the
six-response model was henceforth used to be consistent with the original coding and due to
the lack of clear superiority for the four response model.
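
The post-hoc model can be approximated by collapsing the six response options before refitting the graded response model; the mapping below (1–2 → 1, 3 → 2, 4 → 3, 5–6 → 4) is purely illustrative, since the exact collapsing scheme used for the OSM analysis is not specified here.

```r
# Hypothetical collapse of the six response options into four categories.
collapse_to_four <- function(x) {
  cut(x, breaks = c(0, 2, 3, 4, 6), labels = FALSE)
}

rm_four <- as.data.frame(lapply(rm_items[, c("RM2", "RM3", "RM4", "RM5", "RM6")],
                                collapse_to_four))
fit_graded_four <- mirt(rm_four, 1, itemtype = "graded")
```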

Structural Equation Models


A series of SEMs were fit to investigate the relations between the five-item RM measure
and the reading outcome measures WR and RC. In Model 1, WR and RC were regressed
onto the RM latent factor (see OSM Figure S5). The model fit was acceptable (χ²[32]
= 85.33, p < .001; RMSEA = .06, 90% CI [.05, .08]; TLI = .94; CFI = .96). The five
items loaded significantly on to the RM latent variable (.49 ≤ λ ≤ .84, p < .001). RM was
positively related to WR (β = .44, p < .001) and RC (β = .48, p < .001), and accounted for
approximately 20 percent of the variance in WR and 23 percent of the variance in RC.

In Model 2, RC was regressed on RM and WR (see OSM Figure S6). The model fit was
acceptable (χ²[32] = 85.33, p < .001; RMSEA = .06, 90% CI [.05, .08]; TLI = .94; CFI =
.96). There was a significant path from the covariate WR to RC (β = .78, p < .001), and there
was a small but significant path from RM to RC (β = .13, p < .01). A total of approximately
72 percent of the variance in RC was accounted for by the combination of WR and RM.

In Model 3, RC was regressed on RM, WR, and the interaction term, RMWR (Figure 1).
The model fit approached acceptable fit (χ²[82] = 324.22, p < .001; RMSEA = .08, 90%
CI [.08, .09]; TLI = .90; CFI = .87). The covariate WR was significantly and positively
related to RC (β = .80, p < .001). The path from RM to RC (β = .12, p < .01) and the path
from the interaction term RMWR to RC (β = .12, p < .01) were statistically significant,
indicating a significant effect of RM and the RMWR interaction term after controlling for
variance related to WR. Seventy-six percent of the variance in RC was accounted for by RM,
WR, and RMWR.

The latent interaction plot (see OSM Figure S7) displayed the effect of the interaction on
reading comprehension. For individuals with low word reading (x-axis), the gap in reading
comprehension between students with high (+1SD) or low (−1SD) levels of reading mindset
was moderate (i.e., approximately 0.50 SD). For students with high word reading, the gap in
reading comprehension between those with high and low reading mindset was about 1.0 SD.

Discussion
The purpose of the current study was to use items related to implicit theories of intelligence,
perceived effort, and learning goals, as related to reading, and to examine how these items
were related to reading comprehension and word reading outcomes. The RM measure was
scaled such that students with higher RM scores indicated a belief their reading could
improve with dedication and practice, were not discouraged when reading, and were more
open to reading aloud in class, while students with lower scores on the RM measure
indicated that they believed their reading could not be improved, experienced more anxiety
while reading, or were more discouraged while reading. The final five RM items
covered a good range of difficulty while also discriminating well across the range of
the latent reading mindset trait. After establishing the optimal items for the RM measure
with good psychometric properties, a series of SEMs were performed to examine the
relations between the RM measure, word reading, and reading comprehension. The results
indicated unique predictive value of the RM measure in each of the models tested. Thus, the
findings of the current study were consistent with the meta-analysis conducted by Sisk and
colleagues (2018) while also extending upon those findings by focusing on upper elementary
school students and the content domain of reading.

In the first SEM (Model 1), the RM measure positively predicted reading comprehension
and word reading skills. Students with a more positive mindset and outlook towards reading
had better word reading and reading comprehension outcomes, such that students who were
less anxious and more motivated while reading difficult texts had better word reading and
reading comprehension outcomes. This model accounted for 20 percent and 23 percent of
the variance in word reading and reading comprehension respectively. This finding was

consistent with research describing other non-cognitive constructs, such as the association of
positive attitudes toward reading with higher reading scores (e.g., Petscher, 2010), and that
motivation to read can account for additional variance in comprehension beyond decoding
skills (Logan et al., 2011). As Quinn and Wagner (2018) discovered in their meta-analysis,
up to two-fifths (40%) of the variance in reading comprehension was unexplained after
accounting for component skills and cognitive aspects of reading such as vocabulary, word
reading, and reasoning and inference skills.

Because word reading is one of the main components of reading comprehension (e.g.,
Gough & Tunmer, 1986; Joshi et al., 2012; Quinn & Wagner, 2018), it was also important
to test an alternative model to determine whether the RM measure independently predicted
reading comprehension outcomes while controlling for the effect of word reading. The
results of Model 2 indicated that the RM measure uniquely predicted reading comprehension
outcomes while controlling for the effects of word reading. The combined effects of word
reading and RM accounted for nearly three-fourths of the variance in reading comprehension
outcomes.

In a prior study, students who received word reading training combined with mindset
training had better sentence comprehension outcomes than students who received word
reading training alone (Toste et al., 2017). Further, Logan
et al. (2011) found that intrinsic motivation was particularly important for poor readers
in predicting their reading comprehension outcomes above the effects of decoding and
verbal IQ. Building on these findings, an interactive effect between RM and word reading
was tested. The results of Model 3 indicated there was a significant, positive interaction
between word reading and RM that predicted reading comprehension outcomes above and
beyond the main effects of word reading and RM. For individuals with a high level of
word reading, those with a more positive mindset for reading had reading comprehension
scores one standard deviation higher than students whose mindset was more negative.
This gap in reading comprehension scores was still apparent for children with low word
reading, whereby children who had more positive mindsets towards reading had reading
comprehension scores approximately half a standard deviation higher than children with
more negative mindsets toward reading. These interactive effects aligned with the results of
Toste et al. (2017). Further, students still benefit from having a better reading mindset than
similarly skilled students with a concurrently low reading mindset, a finding that aligned with
that of Logan and colleagues (2011), particularly for poor readers.

Why does reading mindset matter more for students with good word reading than for
students with poor word reading? We speculate that students with poor word reading and
positive reading mindsets believe that they can indeed improve their comprehension,
but perhaps they also understand that they first need to improve their word reading. This
provides a protective effect in motivating them to improve their skills. Students with
good word reading and positive reading mindsets do not have this additional barrier
blocking their reading comprehension improvement and, as such, benefit more from having
a better outlook on their reading. However, it is impossible to tell if this is the case from
the current study. Future work could explore this relation in good and poor readers through

combined interventions that target either children with low reading mindsets or children with
poor word reading. These groups of
children could be paired with either a mindset intervention or a word reading intervention,
modeled similarly to Toste and colleagues’ (2017) study, in an effort to improve their
reading comprehension outcomes.

The findings of the current study may have implications for teacher training, as teachers
who believe students’ mindsets are malleable may provide more support when students
struggle academically in the classroom (e.g., Gutschall, 2013). Scores on this brief RM
measure may provide teachers with a better idea of which of their students have more negative
thoughts about their reading and who might therefore be less responsive or engaged if they
do not believe their effort will lead to better performance and practice. For example, when
students are anxious while reading in the classroom, and therefore are more likely to endorse
“agree a lot” to RM3 (i.e., “If I have to read out loud in class, I feel scared”), teachers could
encourage practice, improve goal setting, and provide positive supports through personalized
instruction. This is important for early intervention, as it is challenging to accelerate reading
comprehension for struggling readers later in school (Wanzek et al., 2019; Wanzek &
Roberts, 2012) and struggling readers are likely to remain poor readers through high school
(e.g., Brasseur-Hock et al., 2011; Francis et al., 1996).

The establishment of the RM measure may be particularly useful as a tool to enhance and
intensify reading interventions for vulnerable populations, such as the sample in the current
study, which included typical readers and students with severe reading difficulties. Notably,
66 percent of these students were eligible for free and reduced lunch, and nine percent had a
disability. In fact, the meta-analysis by Sisk and colleagues (2018) indicated positive effects
for students at academic risk (d = .19) and students from low socioeconomic backgrounds (d
= .34).

Limitations and Future Directions


The current study was limited by the inability to make conclusions about the size of the
relations between RM and reading comprehension outcomes that are independent of other
important cognitive based measures such as vocabulary knowledge, inference, background
knowledge, or working memory (e.g., Quinn et al., 2015, 2018; Cain et al., 2001, 2004). The
relation between RM and reading comprehension is apparent in the current study, but this
relation might not be a direct relation if other predictors were included in the model, such
as vocabulary knowledge. Additionally, although Petscher and colleagues (2017) indicated
the reading mindset measure was independent of general growth mindset and had unique
predictive value, the general growth mindset measure would still share variance with reading
mindset and should be included as a control variable in a future study.

The RM measure may have also been limited by the small number of items in the final
specification. Given that only five items were specified in the final measure and the set of
items lacked information at the higher end of the RM trait continuum, considerations should
be made for writing additional items to enhance the psychometric properties of the measure.
Although individuals most often endorsed the highest two response options of the RM
measure, the results indicated that the least amount of information was available at higher
levels of the RM trait. For example, for item RM4 (“If I make a lot of mistakes while reading, I
quit trying”), 75 percent of individuals endorsed a response of five or six, but individuals only
needed to approach a theta level of zero to be most likely to endorse a response
of six, indicating this item would not be of much value for discriminating between individuals
at high levels of RM. Including additional items targeting higher levels of the RM trait
would allow researchers to measure RM with equal precision across the trait continuum, and
have the secondary benefit of positively contributing to reliability, as the average inter-item
correlation needed to improve scale reliability becomes smaller as the number of items
increases (DeVellis, 2012). Additionally, fewer response options may be better suited to the
RM measure given the concentration of information and non-sequential ordering of some
of the response options in the middle of the RM trait. Four response options (i.e., Strongly
Agree, Agree, Disagree, Strongly Disagree; see OSM) may be better suited to reflect the
results indicated in the current study; however, the reduction in response options came at
a cost of measurement precision. In sum, adaptations to the current measure (e.g., adding
items that measure the higher end of the RM trait continuum; including fewer response
options) may optimize the psychometric properties of the RM measure.

Finally, the sample used to establish the RM measure in the current study was a mixed sample,
consisting of typically developing readers and readers that were below the 30th percentile in
reading comprehension. As a result, it is plausible that the RM measure may have operated
differently across factor loadings or intercepts, depending on student reading comprehension
level. Multiple group testing could test this possibility, and with an adequately sized sample,
differential item functioning (DIF) analysis could also be performed. DIF would determine
whether individual items or the full measure functions differently for different groups (e.g.,
race or gender). For example, girls are more motivated to read and tend to have better
attitudes when reading (e.g., Logan & Johnston, 2009; McGeown et al., 2012), and certain
items on the RM measure therefore might show differential patterns of responses based on
gender of the student.

Conclusion
The RM measure is a reliable measure of a child’s outlook and attitudes related to their
reading. In addition to direct relations between the RM measure and word reading and
reading comprehension, RM also independently predicted reading comprehension above and
beyond the effects of word reading. Finally, there was an interaction between RM and word
reading, and this interaction in turn predicted reading comprehension outcomes. Authors of
future research can use this reliable and valid measure of mindset, particularly as it relates
to reading skills, to investigate whether mindset interventions alone or mindset interventions
embedded in reading interventions can accelerate student reading achievement.

Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Funding:
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education,
through Grant R324A150269 and by the Eunice Kennedy Shriver National Institute of Child Health & Human
Development of the National Institutes of Health under Award Number R01HD091232. The content is solely the
responsibility of the authors and does not necessarily represent the official views of the funding agency.

References
Adams MJ (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: The MIT
Press.
Al Otaiba S, Rivas B, Jones J, Petscher Y, & Wanzek J (2015). Reading Growth Mindset.
Unpublished test, College of Education, Southern Methodist University, Dallas, TX. Retrieved from
psyarxiv.com/38eda. doi: osf.io/38eda
Baird GL, Scott WD, Dearing E, & Hamill SK (2009). Cognitive self-regulation in youth with
and without learning disabilities: Academic self-efficacy, theories of intelligence, learning vs.
performance goal preferences, and effort attributions. Journal of Social and Clinical Psychology,
28, 881–908. doi: 10.1521/jscp.2009.28.7.881


Baker L, & Wigfield A (1999). Dimensions of children’s motivation for reading and their relations to
reading activity and reading achievement. Reading Research Quarterly, 34, 452–477. doi: 10.1598/
RRQ.34.4.4
Blackwell L, Trzesniewski K, & Dweck CS (2007). Implicit theories of intelligence predict
achievement across an adolescent transition: A longitudinal study and an intervention. Child
Development, 78, 246–263. doi: 10.1111/j.1467-8624.2007.00995 [PubMed: 17328703]
Brasseur-Hock IF, Hock MF, Kieffer MJ, Biancarosa G, & Deshler DD (2011). Adolescent struggling
readers in urban schools: Results of a latent class analysis. Learning and Individual Differences, 21,
438–452. doi: 10.1016/j.lindif.2011.01.008
Bryk AS, Gomez LM, Grunow A, & LeMahieu PG (2015). Learning to improve: How America’s
schools get better at getting better. Boston, MA: Harvard Press.
Buehl MM, Alexander PA, & Murphy PK (2002). Beliefs about schooled knowledge: Domain
specific or domain general? Contemporary Educational Psychology, 27(3), 415 – 449. doi:10.1006/
ceps.2001.1103
Burnette J, O’Boyle E, VanEpps E, Pollack J, & Finkel E (2013). Mind-sets matter: A meta-analytic
review of implicit theories and self-regulation. Psychological Bulletin, 139, 655–701. doi: 10.1037/
a0029531 [PubMed: 22866678]
Cain K, Oakhill JV, Barnes MA, & Bryant PE (2001). Comprehension skill, inference-making ability,
and their relation to knowledge. Memory & Cognition, 29, 850–859. doi: 10.3758/BF03196414
[PubMed: 11716058]
Cain K, Oakhill J, & Bryant P (2004). Children’s reading comprehension ability: Concurrent
prediction by working memory, verbal ability, and component skills. Journal of Educational
Psychology, 96, 31–42. doi: 10.1037/0022-0663.96.1.31
Chalmers RP (2012). mirt: A Multidimensional Item Response Theory Package for the R
Environment. Journal of Statistical Software, 48, 1–29. doi: 10.18637/jss.v048.i06
Cho E, Toste JR, Lee M, & Ju U (2019). Motivational predictors of struggling readers’ reading
comprehension: The effects of mindset, achievement goals, and engagement. Reading and Writing,
32, 1219–1242. 10.1007/s11145-018-9908-8
Curran PJ, West SG, & Finch JF (1996). The robustness of test statistics to nonnormality
and specification error in confirmatory factor analysis. Psychological Methods, 1, 16–29.
doi:10.1037/1082-989X.1.1.16
De Ayala RJ (2013). The theory and practice of item response theory. New York, NY: Guilford Press.
DeVellis RF (2012). Scale development: Theory and applications (3rd ed.). Thousand Oaks, CA: Sage
Publications.

Duckworth AL, & Yeager DS (2015). Measurement matters: Assessing personal qualities other
than cognitive ability for educational purposes. Educational Researcher, 44, 237–251. doi:
10.3102/0013189X15584327 [PubMed: 27134288]


Dunn TJ, Baguley T, & Brunsden V (2014). From alpha to omega: A practical solution to the
pervasive problem of internal consistency estimation. British Journal of Psychology, 105, 399–412.
doi:10.1111/bjop.12046 [PubMed: 24844115]
Dweck CS (1986). Motivational processes affecting learning. American Psychologist, 41, 1040–1048.
doi: 10.1037/0003-066X.41.10.1040
Dweck CS (1999). Self-theories: Their role in motivation, personality, and development. Philadelphia,
PA: Psychology Press.
Dweck CS, Chiu C, & Hong Y (1995). Implicit theories and their role in judgments and reactions: A
word from two perspectives. Psychological Inquiry, 6, 267–285. doi: 10.1207/s15327965pli0604_1
Dynarski M, Clarke L, Cobb B, Finn J, Rumberger R, & Smink J (2008). Dropout prevention: IES
practice guide (NCEE 2008–4025). Washington, DC: U.S. Department of Education, Institute of
Education Sciences, National Center for Education Evaluation and Regional Assistance. Retrieved
from https://ies.ed.gov/ncee/wwc/PracticeGuide/9
Foorman BR, Koon S, Petscher Y, Mitchell A, & Truckenmiller A (2015). Examining general and
specific factors in the dimensionality of oral language and reading in 4th–10th grades. Journal of
Educational Psychology, 107, 884–889. doi: 10.1037/edu0000026 [PubMed: 26346839]
Francis DJ, Shaywitz SE, Stuebing KK, Shaywitz BA, & Fletcher JM (1996). Developmental lag
versus deficit models of reading disability: A longitudinal, individual growth curves study. Journal
of Educational Psychology, 88, 3–17. doi: 10.1037/0022-0663.88.1.3
Gough P, & Tunmer W (1986). Decoding, reading, and reading disability. Remedial and Special
Education, 7, 6–10. doi: 10.1177/074193258600700104
Gutschall CA (2013). Teachers’ mindsets for students with and without disabilities. Psychology in the
Schools, 50, 1073–1083. doi: 10.1002/pits.21725
Joshi RM, Tao S, Aaron PG, & Quiroz B (2012). Cognitive component of componential model of
reading applied to different orthographies. Journal of Learning Disabilities, 45, 480–486. doi:
10.1177/0022219411432690 [PubMed: 22293686]
Kim YSG (2017). Why the simple view of reading is not simplistic: Unpacking component skills of
reading using a direct and indirect effect model of reading (DIER). Scientific Studies of Reading,
21, 310–333. doi: 10.1080/10888438.2017.1291643
Kline RB (2016). Principles and practice of structural equation modeling (4th ed.). New York, NY:
Guilford Press.
Logan S, Medford E, & Hughes N (2011). The importance of intrinsic motivation for high and low
ability readers’ reading comprehension performance. Learning and Individual Differences, 21,
124–128. doi: 10.1016/j.lindif.2010.09.011
Logan S, & Johnston R (2009). Gender differences in reading ability and attitudes: Examining
where these differences lie. Journal of Research in Reading, 32, 199–214. doi: 10.1111/
j.1467-9817.2008.01389
Lordán E, Solé I, & Beltran FS (2017). Development and initial validation of a questionnaire to assess
the reading beliefs of undergraduate students: The Cuestionario de Creencias sobre la Lectura.
Journal of Research in Reading, 40(1), 37–56.
MacGinitie WH & MacGinitie RK, (2006). Gates-MacGinitie Reading Tests (4th Ed.). Iowa City, IA:
Houghton Mifflin.
McClelland MM, Cameron CE, Connor CM, Farris CL, Jewkes AM, & Morrison FJ (2007).
Links between behavioral regulation and preschoolers’ literacy, vocabulary, and math skills.
Developmental Psychology, 43, 947–959. doi:10.1037/0012-1649.43.4.947 [PubMed: 17605527]
McGeown S, Goodwin H, Henderson N, & Wright P (2012). Gender differences in reading motivation:
Does sex or gender identity provide a better account? Journal of Research in Reading, 35, 328–
336. 10.1111/j.1467-9817.2010.01481
Moats LC (1999). Teaching reading is rocket science. Washington, DC: American Federation of
Teachers.

Myszkowski N, & Storme M (2019). Judge Response Theory? A call to upgrade our psychometrical
account of creativity judgments. Psychology of Aesthetics, Creativity and the Arts, 13, 167–175.
doi:10.1037/aca0000225
Ouellette G, & Beers A (2010). A not-so-simple view of reading: How oral vocabulary and visual-word
recognition complicate the story. Reading and Writing, 23, 189–208.
Petscher Y (2010). A meta‐analysis of the relationship between student attitudes towards reading
and achievement in reading. Journal of Research in Reading, 33(4), 335–355. doi: 10.1111/
j.1467-9817.2009.01418
Petscher Y, Al Otaiba S, Wanzek J, Rivas B, & Jones F (2017). The relation between global and
specific mindset with reading outcomes for elementary school students. Scientific Studies of
Reading, 21(5), 376–391. doi: 10.1080/10888438.2017.1313846
Quinn JM, & Wagner RK (2018). Using meta‐analytic structural equation modeling to study
developmental change in relations between language and literacy. Child Development, 89(6),
1956–1969. doi:10.1111/cdev.13049 [PubMed: 29484642]
Quinn JM, Wagner RK, Petscher Y, & Lopez D (2015). Developmental relations between
vocabulary knowledge and reading comprehension: A latent change score modeling study. Child
Development, 86(1), 159–175. doi: 10.1111/cdev.12292 [PubMed: 25201552]


Robinson HM & Weintraub S (1973). Research related to children’s interests to developmental values
in reading. Library Trends, 22, 81–108.
Rosseel Y (2012). Lavaan: An R package for structural equation modeling and more. Version 0.5–12
(BETA). Journal of Statistical Software, 48, 1–36.
Salovey P, & Mayer JD (1990). Emotional intelligence. Imagination, Cognition and Personality, 9,
185–211. doi: 10.2190/DUGG-P24E-52WK-6CDG
Sisk VF, Burgoyne AP, Sun J, Butler JL, & Macnamara BN (2018). To what extent and under which
circumstances are growth mind-sets important to academic achievement? Two meta-analyses.
Psychological Science, 29, 549–571. doi:10.1177/0956797617739704 [PubMed: 29505339]
Taboada A, Tonks SM, Wigfield A, & Guthrie JT (2009). Effects of motivational and
cognitive variables on reading comprehension. Reading and Writing, 22, 85–106. doi: 10.1007/
s11145-008-9133-y
Toste JR, Capin P, Vaughn S, Roberts GJ, & Kearns DM (2017). Multisyllabic word-reading
instruction with and without motivational beliefs training for struggling readers in the upper
elementary grades. Elementary School Journal, 117, 593–615. doi: 10.1086/691684
Tunmer WE, & Chapman JW (2012). The simple view of reading redux: Vocabulary knowledge
and the independent components hypothesis. Journal of Learning Disabilities, 45, 453–466.
doi:10.1177/0022219411432685 [PubMed: 22293683]
Vaughn S, Linan-Thompson S, Kouzekanani K, Bryant DP, Dickson S, & Blozis SA (2003). Reading
instruction grouping for students with reading difficulties. Remedial and Special Education, 24,
301–315. 10.1177/07419325030240050501
Wagner RK, Torgesen JK, Rashotte CA, & Pearson NA (2010). Test of Silent Reading Efficiency and
Comprehension (TOSREC) examiner’s manual. Austin, TX: Pro-Ed.
Wanzek J, Petscher Y, Al Otaiba S, & Donegan RE (2019). Retention of reading intervention effects
from fourth to fifth grade for students with reading difficulties. Reading & Writing Quarterly,
35(3), 277–288. doi: 10.1080/10573569.2018.1560379
Wanzek J, & Roberts G (2012). Reading interventions with varying instructional emphases for
fourth graders with reading difficulties. Learning Disability Quarterly, 35(2), 90–101. doi:
10.1177/0731948711434047
Woodcock RW, McGrew KS, & Mather N (2001). Woodcock–Johnson III Tests of Achievement.
Itasca, IL: Riverside.


Figure 1.
RC regressed on RM, WR, and the interaction of RM and WR. RM = Reading Mindset
latent factor; RM2, RM3… = Reading Mindset Item; WR = Word Reading latent factor;
WJWA = Woodcock Johnson III Word Attack; WJLW = Woodcock Johnson III Letter
Word Identification; RMWR = Reading Mindset and Word Reading interaction latent factor;
RMWR1, RMWR2… = Product indicator of RM and WR manifest variables. RC = Reading
Comprehension latent factor; WJPC = Woodcock Johnson III Passage Comprehension;


GMRTC = Gates MacGinitie Reading Test; TOSREC = Test of Silent Reading Efficiency
and Comprehension. $ = Path set to zero to correctly specify latent variable interaction
model. Pathways are standardized.


Table 1.

Means, Standard Deviations, Distribution, Data Missingness, and Correlations of Included Variables (N = 430)

Variable      1       2       3       4       5       6       7       8       9       10      11      12      13      14      15
1. Black
2. White      −.65**
3. Gender     −.07    .05
4. RM1        −.01    .02     −.05
5. RM2†       .03     .00     −.03    .24**
6. RM3†       .05     −.05    −.10    .11*    .30**
7. RM4†       −.02    .03     .03     .23**   .47**   .23**
8. RM5†       .09     −.11    −.02    .24**   .43**   .22**   .41**
9. RM6†       −.07    .03     −.01    .14**   .30**   .34**   .39**   .32**
10. RM7       .01     −.02    −.08    .03     .15**   .14**   .20**   .20**   .33**
11. WJLW      −.14**  .01     −.03    .19**   .40**   .14**   .27**   .26**   .16**   .08
12. WJWA      −.15**  .03     −.04    .18**   .34**   .13**   .18**   .24**   .15**   .11*    .83**
13. GMRTC     −.18**  .21**   −.01    .17**   .26**   .18**   .23**   .23**   .14**   .08     .57**   .48**
14. TOSREC    −.09    .05     −.04    .15**   .29**   .16**   .19**   .20**   .12*    .09     .69**   .59**   .69**
15. WJPC      −.02    −.03    .00     .17**   .37**   .16**   .24**   .29**   .13**   .04     .75**   .64**   .60**   .66**
Mean          .49     .31     .49     4.68    4.82    3.47    4.87    4.57    4.83    4.14    44.45   16.12   15.01   14.92   22.52
SD            .50     .46     .50     1.65    1.68    1.95    1.63    1.69    1.56    1.88    7.87    6.91    8.53    7.40    5.21
Skewness      .04     .81     .03     −1.13   −1.27   .03     −1.37   −.96    −1.25   −.57    −.23    .09     1.33    .69     −.08
Kurtosis      −2.00   −1.36   −2.00   −.07    .20     −1.56   .54     −.45    .34     −1.21   .35     −.84    1.52    1.26    −.25
% missing     .22     .22     .08     .02     .01     .02     .01     .01     .01     .02     .00     .00     .00     .00     .00

Note. Black = non-Black (0), Black (1); White = non-White (0), White (1); Gender = Male (0), Female (1); RM1, RM2… = Reading Mindset Item; † = Item included in final measure; WJWA = Woodcock Johnson Word Attack; GMRTC = Gates MacGinitie Reading Test Comprehension subtest; TOSREC = Test of Silent Reading Efficiency and Comprehension; WJPC = Woodcock-Johnson III Passage Comprehension subtest.

*p < .05.

**p < .01.

Table 2.

Graded Response Model Item Threshold and Discrimination Parameters

Item  Content  a  b1  b2  b3  b4  b5

RM1 If a book is hard to read, I stop reading it. .84 −3.07 −2.24 −1.71 −1.18 .29
RM2 I feel like I am one of the worst readers in my class. 1.89 −1.74 −1.47 −1.06 −.80 −.16
RM3 If I have to read out loud in class, I feel scared. .98 −1.31 −0.63 0.08 .43 1.36
RM4 If I make a lot of mistakes while reading, I quit trying. 1.99 −1.79 −1.50 −1.17 −.85 −.12
RM5 When I have to work hard at reading, it makes me feel like I am not very smart. 1.74 −1.88 −1.37 −0.96 −.56 .23
RM6 When someone reads better than me, I’m jealous. 1.45 −2.38 −1.78 −1.32 −.89 −.02
RM7 I don’t like when my teacher corrects me when I’m reading. .74 −2.42 −1.59 −1.00 −.36 .90

Note. RM1, RM2… = Reading Mindset Item; a = discrimination parameter; b1–b5 = threshold parameters. Items RM2–RM6 were retained for the final RM measure.
