Reviews of teaching methods – which fundamental issues are identified?
Åsa Hirsh, Claes Nilholm, Henrik Roman, Eva Forsberg & Daniel Sundberg
To cite this article: Åsa Hirsh, Claes Nilholm, Henrik Roman, Eva Forsberg & Daniel Sundberg
(2022) Reviews of teaching methods – which fundamental issues are identified?, Education
Inquiry, 13:1, 1-20, DOI: 10.1080/20004508.2020.1839232
Introduction
Given the global emphasis on education as a road to national and individual success, it
is not surprising that a vast amount of research concerns which teaching methods
enable education to fulfil its aims. Although education concerns many areas, such as
educational policy, the organisation of education, financial systems, and school leadership, there seems to be wide agreement that teaching, in the end, is the key factor in
making educational systems successful (e.g. Barber & Mourshed, 2007; Hargreaves &
Fullan, 2012; Hattie, 2003; OECD, 2016; Stigler & Hiebert, 2009).
How teaching should be arranged in the best possible way has been targeted in
a great number of investigations involving different theoretical points of departure
(Hattie, 2009). Consequently, reviews of the effectiveness or appropriateness of teaching
methods have become increasingly available. Producing such reviews is a logical way to
integrate findings and insights from different studies. Systematic research reviews can
contribute in various ways with knowledge that may inform research, practice and
policy decisions (cf. Gough, Thomas, & Oliver, 2012). However, findings from underlying studies often show mixed and sometimes even conflicting results, due to a variety
of factors (e.g. Shute, 2008). Instructional methods and interventions act in complex
systems, and their effects are dependent on various factors in the context as well as the
ways in which and by whom they are implemented and enacted (cf. Cartwright &
Hardie, 2012; Pawson, 2006; Pawson, Greenhalgh, Harvey, & Walshe, 2005; Rycroft-Malone et al., 2012).
Synthesising the results and effects of numerous primary studies inevitably involves
a certain degree of decontextualization. Still, at secondary research level, researchers
recognise, relate to, and/or problematise the meaning and impact of the context in
various ways. In the current study, we develop knowledge on how the tension between
contextuality and generalisability is addressed and elaborated in research reviews of
teaching methods. We are particularly interested in whether and how issues concerning
what works for whom and in what circumstances are problematised (cf. Pawson, 2006).
Thus, we explore those issues that recur across studied methods and over time in research reviews of teaching methods, with relevance to the tension between context
and generalisation. Subsequently, identified issues will be discussed in terms of possible
implications for both primary and secondary level research.
Method
Before the analysis specific to the present study could be carried out, extensive groundwork had already been done: as a first step, the research group identified the 75 most cited research reviews on teaching methods listed in the Web of Science (WoS) between 1980 and 2017 (25 from 1980 to 1999, 25 from 2000 to 2009, and 25 from 2010 to 2017).
The actual analysis in the current study concerned the summaries in the third
through fifth columns of the table.
In the analysis phase, the summaries were regarded as text extracts that were the
subject of qualitative content analysis. Content analysis is a flexible method for analysing text data obtained in various ways, such as interviews, observations, open-ended
survey questions, or print media such as various types of articles, books, or policy
documents (Cavanagh, 1997; Kondracki & Wellman, 2002). The goal of content
analysis is “to provide knowledge and understanding of the phenomenon under
study” (Downe-Wamboldt, 1992, p. 314), through systematic coding and identification
of patterns (Hsieh & Shannon, 2005). Whatever type of text the content analysis takes
its starting point in, the analysis starts at the manifest level. It may then proceed to the
latent level, but not necessarily. The manifest analysis deals with the content aspect and describes the visible, obvious components (Downe-Wamboldt, 1992; Kondracki & Wellman, 2002), whereas the latent analysis deals with underlying meanings of the text (Downe-Wamboldt, 1992; Kondracki & Wellman, 2002).
When summarising the results and implications of each of the reviews in the original coding process, our explicit aim was to do so at a manifest level, that is, with as little abstraction or interpretation as possible. Wherever possible, we took our starting point in the abstracts of the reviews, according to the logic that what is summarised there is what the authors themselves consider to be the most important results and implications. However, the results, discussion, conclusion, and/or implication sections of each review were also read in full, resulting in complementary text and more informative summaries than the very short lines appearing in the article abstracts.
Of these, categories 1 and 3 were so complex that further sorting into subcategories was
carried out.
In the following results chapter, we use the term overview findings for our overarching categories (with associated subcategories). An overview finding can be described as a product of an accumulated analysis of individual review findings describing a phenomenon or aspects of a phenomenon (here teaching methods) (cf. Lewin et al., 2015). Overview findings thus arise in the analysis and involve interpretation. In general, overview findings can be formulated at different abstraction levels,
depending on the degree of interpretation being made. Overview findings at a lower
level of abstraction are often relatively close to underlying studies and formulated with
concepts retrieved directly from them, while findings at a higher abstraction level may
require other terms to be used.
Further, our analysis was partly4 guided by the methodology in the framework
CERQual (which stands for confidence in the evidence from reviews of qualitative
research) described by Lewin et al. (2015). A core purpose of CERQual is to offer
a method for systematically and transparently assessing the weight (in terms of coherence) of findings derived from qualitative research.5 Although our primary interest lies in describing recurrent patterns and in conducting a problematising discussion
about those patterns, we acknowledge the importance of visualising the occurrence and
frequency of different aspects (that together form our overview findings) in the various
underlying reviews as a signal of the “weight” (in terms of coherence) of the overview
findings. For this reason, we have created two tables highlighting the occurrence of
specific aspects in the various included studies (see Appendices C1 and C2).
Results
Before presenting the main results of the current study, i.e. the three overview findings, some overall observations are briefly accounted for regarding the format of the underlying reviews, as well as their temporal and geographical distribution.
The included 75 reviews build on different types of data in the primary studies,
which largely affect the format of the reviews. Quantitative reviews, which are based on
quantitative underlying studies, make up almost half of the sample (35/75). A further 24
reviews in the sample report both quantitative and qualitative data, whereas 16 reviews
are explicitly qualitative. The distribution between the three different types of reviews is
relatively even over the three periods 1980–1999, 2000–2009, and 2010–2017 (Roman
et al., 2018).
The tables in Appendices C1 and C2 visualise the occurrence and frequency of
different aspects in the underlying material. There, the reader can see which reviews
elaborate on which aspects, which year the reviews were published, and the geographical distribution of the reviews in terms of the national affiliations of the review authors. The
Web of Science is based in the US, and there is a clear North American dominance in the national affiliations of the authors. Three-quarters of all authors
are affiliated with institutions in the US or Canada. The final quarter are affiliated with
institutions in nine other countries: the Netherlands, the UK, Germany, Greece,
Taiwan, Israel, Hong Kong, Australia, and Brazil.
Although a more or less explicitly stated goal in several of the reviews is to give some
kind of general answer concerning the impact of a given method, the reservations are
ultimately many. Givers (teachers) as well as receivers (students) of the treatment are
heterogeneous groups in several ways, and, additionally, there is great variation concerning the contextual conditions framing the teaching-learning process. Many moderators or combinations of moderators may potentially affect a method’s impact on students’ learning outcomes.
This fact is also problematised and discussed in several of the included reviews that
together constitute the empirical material underlying this study. An overview finding for which coherence is thus strong (i.e. where a pattern is found across most of the underlying studies) is that a particular method has little or no effect per se; rather, our analysis shows that the effect depends on moderators linked to four (often interrelated) aspects (Table 1).
In an excerpt typical of many of the underlying reviews, Graham and Hebert (2011) conclude the following:
Just because a writing intervention was effective in improving students’ reading in the
studies included in this review does not guarantee that it will be effective in all other
situations. In fact, there is rarely an exact match between the conditions in which the
research was implemented and the conditions in which it is subsequently implemented by
teachers. Mismatches between the conditions where a practice is implemented by a teacher
and its effectiveness as established by researchers can vary widely, including differences
between students (e.g. reading or writing skills, dispositions, previous school success),
instructional arrangements (e.g. number of students, material resources in the classroom),
and the capabilities of those implementing instruction (e.g. beliefs about teaching and
learning, success in managing the classroom, and experience from teaching writing and
reading). (p. 737)
In many reviews, especially those of the past decade, research on the use of technological artefacts in instruction has been synthesised. All these reviews come to conclusions
like that of Smetana and Bell (2012):
Despite the promise that computer simulations have shown in the teaching and learning of
science, success is certainly not guaranteed. Like any other instructional resource, computer
simulations can be effective if they are of high quality and are used appropriately. Therefore,
the appropriate question for researchers is often how teachers and students use simulations,
rather than whether the simulation in itself can achieve desired results. (p. 1362)
Similar conclusions are drawn, for example, in Wu et al.’s (2013) review when it comes to the use of technological artefacts in teaching (in this case augmented reality, or AR).
The importance of the teacher is also underlined by Smetana and Bell (2012):
Even when support is provided by the simulation software and its accompanying materials,
the teacher is critical for the successful implementation of instructional technologies and
computer simulations in particular. There are no teacher-proof simulations. The teacher
plays an important role in aligning the use of computer simulations to curricular objectives
and to student needs. (Smetana & Bell, 2012, p. 1359)
Rutten (2012), who reviewed the use of computer simulation in science education,
argues:
The effects of computer simulations in science education are caused by interplay between
the simulation, the nature of the content, the student and the teacher. A point of interest
for the research agenda in this area, as mentioned by De Jong and van Joolingen (1998) in
their review, is to investigate the place of computer simulations in the curriculum. Most of
the studies we reviewed however, investigated the effects of computer simulations on
learning ceteris paribus, consequently ignoring the influence of the teacher, the curriculum, and other such pedagogical factors. (p. 151)
Among the studies included in each review, the composition of the overall studied
population can range from pre-school children to adult students in higher education in
different disciplines. Local contexts vary (sometimes strongly), due in part to the
heterogeneity of the population but also due to a range of other factors. Additionally,
the content of the studied interventions varies because of the methods’ comprehensiveness. Formative feedback, as an example, can be given in a variety of ways (verbal,
written, modelling, etc.); it can be provided from teacher to student, between students,
or from computer to student. The extent of the feedback given can also vary considerably.
Young et al. (2012), who undertook a review based on the question of how effective
video games are in enhancing students’ learning, conclude by directing criticism at both
themselves and the research community, urging researchers to “stop seeking simple
answers to the wrong questions” (p. 83):
Video games vary widely in their design and related educational affordances: Some have
elaborate and engaging backstories, some require problem solving to complete 5 to 40
multiplayer quests, and some rely heavily on fine motor controller skills. With this range of
attributes, perhaps no single experimental manipulation (independent variable) can ever
be defined to encompass the concept of video games writ large. Furthermore, given the
diversity of student learning goals and abilities, likewise perhaps no singular outcome
(dependent variable) from video games should be anticipated. Instead, applying principles
from situated cognition suggests that research should focus on the complex interaction of
player–game–context and ask the question, “How does a particular video game being used
by a particular student in the context of a particular course curriculum affect the learning
process as well as the products of school (such as test grades, course selection, retention,
and interest)?” No research of this type was identified in our review, suggesting the missing
element may be a more sophisticated approach to understanding learning and game play
in the rich contexts of home and school learning. (pp. 83–84)
Discussion
The purpose of the present study was to identify how issues related to the tension between contextuality and generalisability are elaborated in research reviews on teaching methods. Through careful mapping of the manifest data material, we have been able
to show that such issues are frequently addressed and problematised in the analysed
reviews. Three overview findings have been presented: the abundance of moderating
factors, the need for highly qualified teachers, and the research-practice gap. In this final
section, we will elaborate on our overview findings and discuss some implications for
primary and secondary level research.
The abundance of moderating factors and the need for highly qualified teachers
Claiming that several factors affect the relationship between a teaching method
and student learning is not very controversial. Methodologically, intervention studies deal with a moderator as a third variable affecting the causal relationship
between treatment (teaching method) and treatment outcome (effect on student
learning). The fact that moderators are controlled for is in itself a recognition of
the potential impact of the context. However, there is a difference between
accounting for controlled moderators and explicitly problematising them in
terms of what they may mean for a study’s external and ecological validity.
We identified nearly 30 moderators addressed across the four areas of pupil, teacher,
content, and context. Each of the moderators listed in Table 1 is highly complex, and the
number of possible combinations is almost infinite. Obviously, it is difficult, not to say
impossible, to determine with certainty the effect of a teaching method ceteris paribus.
Simply put, methods do not have the same effect for all students in all situations. While this fact is likely self-evident to most (not least teachers), it seems necessary to repeatedly
emphasise it in an era where the question asked often seems to be What works? rather than
What works for whom and in what circumstances? (cf. Cartwright & Hardie, 2012; Pawson
et al., 2005). Contextual variation and impact need to be clarified and acknowledged. In light
of such recognition, a teacher can examine his/her own practice in relation to research
findings and try to explore what will happen when employing a specific teaching method in
his/her own context.
In the field of social work, Cartwright and Hardie (2017) propose a model
aiming to predict the effect of a certain way of acting in a specific case. Such
predictions, they argue, will require practitioners to draw heavily on their professional experience, causal understanding of their own situation, the proposed
intervention, and its effects. More informed predictions may be made when
intervention studies more fully account for the contextual complexity and circumstances. In a similar vein, Khorsan and Crawford (2014) discuss the importance of
experimental studies in health care being explicit in explaining such aspects of the
studies that are crucial for practitioners (as well as for secondary level researchers)
if they are to be able to judge the external validity of implementation and outcomes. They argue that study quality must be regarded as a multidimensional concept that includes internal, external (population), and ecological (situation
and setting) validity. Moreover, they propose an external validity assessment tool
to measure the extent to which and how well various context and intervention
characteristics are described in experimental studies. Only if such aspects are clearly described is it possible to judge their relevance for other settings.
In the field of teaching and learning, Bernstein (2018) discusses generalisation as
a two-way street, where the possibility to judge the external validity of a study is
a shared responsibility between the author and the reader of a study. The author’s
responsibility is to provide enough information in terms of rich, thick descriptions
of context to make judgements about generalisation possible. However, the
responsibility for discerning useful parts of the study and relating them to other
contexts rests with the reader.
In the overview findings section, we argue that no teaching method or artefact can replace the context-experienced teacher. The effect of methods on
students’ learning is undoubtedly moderated by differences at the student level and other factors, which is why the teacher’s situational awareness and ability to predict or know what may work for whom, how, and in what circumstances is crucial. The teacher definitely needs the method, and the method certainly needs the reflective
teacher. In line with the arguments above from researchers in different fields, we
find it important not only to account for moderating factors, but also to explain
and problematise the complexity of the context in such a way that practitioners
within the field of teaching may assess the external and ecological validity of
a study.
Reviews are crucial for establishing what is known and not known. The reviews we have
analysed are often adequately cautious in their conclusions about what is known. Moreover,
they point out knowledge gaps and how these can or should be addressed in future
research. However, by analysing a sample of research reviews spanning a period of four
decades, it becomes clear that the same types of problems and knowledge gaps are
pointed out repeatedly. The virtuous circle mentioned by Gough et al. (2012) is a metaphor used to underscore that one does not arrive back at the same point, but rather that knowledge develops continuously. The idea of research reviews as an important element in creating virtuous circles presupposes that the conclusions drawn and appeals made in reviews form (at least in part) the starting point for new primary studies. Reasonably, the primary study level has a great deal of responsibility when it
comes to creating more context-specific knowledge about teaching methods. However,
the responsibility also lies with second-order research and how the tension between
contextuality and generalisation is handled there.
To conclude
Through our overview findings, we have highlighted issues that are frequently problematised across high-impact research reviews on teaching methods over a period of four decades. The substantive aspects of the findings are neither surprising nor previously
unknown. The strength of this study lies in how we have been able to show patterns and
coherence in conclusions across studied issues over time and their relevance for the
tension between context and generalisation.
Trying to determine where the effect of a method itself ends and where the impact of the context begins is perhaps an impossible mission. What can be done in
both primary and second-order research is to explicitly recognise (to a greater
extent), explore, and discuss contextual complexity. In line with the other researchers referred to above, we want to underline the importance of viewing validity as a multidimensional concept that includes internal, external, and ecological aspects. Basically, there are two questions research on teaching methods ought
to respond to: whether a particular way of teaching has an impact on students’
learning and performances, and what and how others can learn from completed
studies. Both are equally important, but the internal validity of studies seems to be
more valued than the external and ecological validity. As Bernstein (2018) argues,
foregrounding one at the expense of the other does not help advance the field of knowledge:
If we are unable to determine if what we are doing is working, we exist in an evidence-free
zone in which we are grasping in the dark to find the most effective ways to teach our
content. In addition, if we are unable to generalize our work to other contexts, we are not
building a field, and are not allowing the practice of teaching to advance outside our
individual classrooms. (p. 123)
Thus, richer descriptions and problematisation of context are needed, for both
practitioners and reviewers to be able to determine validity in a multidimensional
way. As for the review level, the realist approach suggested by Pawson et al. (2005)
may well be a viable way forward in research on teaching methods as well. Not least – because many teaching methods are both comprehensive and complex – it is important to emphasise the need for clearly articulated research
questions stating which aspects of an intervention or method are being studied, and to acknowledge that there is a limit to how much territory a review can cover.
Funding
This work was supported by the Vetenskapsrådet [2016-03679].
Notes
1. We are well aware of the fact that the WoS covers far from all educational research;
nevertheless, we restricted our searches to it because of its acknowledged high quality and
its prestigious position among databases.
2. A number of included reviews are based on studies carried out both in K-12 contexts and in higher and/or adult education.
3. The codes are basically those listed as dashes under the four subcategories of overview
finding 1 (Table 1) and under the three subcategories of overview finding 3 (Table 2) in the
results section.
4. The CERQual framework primarily concerns reviews (i.e. secondary level) and involves assessment of the methodological limitations and adequacy of data in underlying empirical qualitative studies. This has not been relevant in our case; our use of CERQual’s starting points concerns the coherence of the overview findings.
5. Underlying studies (in their entirety) cannot in our case be described as “qualitative
research”. However, the data we have analysed are qualitative (i.e. text excerpts).
6. Since our three overview findings are to a certain extent linked to each other, the reader will
notice that some of the excerpts in the results section are in fact illustrative of more than one
overview finding.
Notes on contributors
Åsa Hirsh is Associate Professor in Education at Jönköping University and the University of
Gothenburg, Sweden. Her research focus lies at the intersection between the research fields of
classroom instruction, educational assessment, and school development.
Claes Nilholm is a professor of Education at Uppsala University, Sweden. His research focus is
on inclusive education. He also has an interest in methodological issues in research reviewing
and is currently leading the project “Research about teaching – systematic mapping and analysis
of research topographies” financed by the Swedish Research Council, educational sciences.
Henrik Román is senior lecturer in Education at Uppsala University, Sweden. His research focuses on historical aspects of contemporary educational policy and practice.
Eva Forsberg is professor in Education at Uppsala University, Sweden, and general editor of Nordic Journal of Studies in Educational Policy (NordSTEP). Her research focuses on the interface between educational policy, practice, and research from a curriculum-theoretical perspective.
Daniel Sundberg is Professor of Education, Linnaeus University, Department of Education and
Teachers’ Practice, Sweden. He is co-leader of the SITE research group (Studies in Curriculum,
Teaching and Evaluation) and chief editor of Educational Research in Sweden. His main field of
research is comparative and historical perspectives on education reforms, curriculum and
pedagogy.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Claes Nilholm http://orcid.org/0000-0001-8613-906X
Eva Forsberg http://orcid.org/0000-0002-1768-1450
Daniel Sundberg http://orcid.org/0000-0003-0644-3489
References
Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). Does discovery-based
instruction enhance learning? Journal of Educational Psychology, 103(1), 1–18.
Barber, M., & Mourshed, M. (2007). How the world’s best-performing school systems come out on
top. London: McKinsey & Co.
Bernstein, J. L. (2018). Unifying SoTL methodology: Internal and external validity. Teaching &
Learning Inquiry, 6(2), 115–126.
Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better.
New York: Oxford University Press.
Cartwright, N., & Hardie, J. (2017). Predicting what will happen when you intervene. Clinical
Social Work Journal, 45(1), 270–279.
Cavanagh, S. (1997). Content analysis: Concepts, methods and applications. Nurse Researcher, 4(3), 5–16.
Cobb, B., Lehmann, J., Newman-Gonchar, R., & Alwell, M. (2009). Self-determination for
students with disabilities: A narrative meta-synthesis. Career Development of Exceptional
Individuals, 32(2), 108–114.
Coffey, A., & Atkinson, P. (1996). Making sense of qualitative data. Complementary research
strategies. Thousand Oaks, London, New Delhi: Sage Publications Inc.
De Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with computer
simulations of conceptual domains. Review of Educational Research, 68(2), 179–201.
Dole, J. A., Duffy, G. G., & Pearson, P. D. (1991). Moving from the old to the new:
Research on reading comprehension instruction. Review of Educational Research, 61(2),
239–264.
Downe-Wamboldt, B. (1992). Content analysis: Method, applications, and issues. Health Care for
Women International, 13(3), 313–321.
Driver, R., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation
in classrooms. Science Education, 84(3), 287–312.
Duit, R., & Treagust, D. F. (2003). Conceptual change: A powerful framework for improving science teaching and learning. International Journal of Science Education, 25(6), 671–688.
Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology, 18(1), 7–22. http://dx.doi.org/10.1007/s10956-008-9119-1
Frederiksen, N. (1984). Implications of cognitive theory for instruction in problem solving. Review of Educational Research, 54(3), 363–407.
Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and
quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of
Educational Research, 82(3), 300–329.
Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and
methods. Systematic Reviews, 28(1), 1–9.
Graham, S., & Hebert, M. (2011). Writing to read: A meta-analysis of the impact of writing and
writing instruction on reading. Harvard Educational Review, 81(4), 710–744.
Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research:
Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today, 24(2),
105–112.
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review
types and associated methodologies. Health Information and Libraries Journal, 26(2),
91–108.
Hargreaves, A., & Fullan, M. (2012). Professional capital. Transforming teaching in every school.
New York: Teachers College Press.
Hattie, J. (2003). Teachers make a difference: What is the research evidence? Paper presented
at the Building Teacher Quality: What does the research tell us ACER Research Conference,
Melbourne, Australia. Retrieved from http://research.acer.edu.au/research_conference_
2003/4/
Hattie, J. (2009). Visible learning. A synthesis of over 800 meta-analyses relating to achievement.
London: Routledge.
Hirsh, Å., & Nilholm, C. (2019). Reviews of teaching methods – what are the fundamental
problems? Paper presented at ECER Conference, Hamburg. Retrieved from https://eera-ecer.
de/ecer-programmes/conference/24/contribution/47337/
Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn?
Educational Psychology Review, 16(3), 235–266.
Hsieh, H., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative
Health Research, 15(9), 1277–1288.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1),
153–189.
Smetana, L. K., & Bell, R. L. (2012). Computer simulations to support science instruction and
learning: A critical review of the literature. International Journal of Science Education, 34(9),
1337–1370.
Squire, K., & Jan, M. (2007). Mad City Mystery: Developing scientific argumentation skills with a place-based augmented reality game on handheld computers. Journal of Science Education and Technology, 16(1), 5–29. http://dx.doi.org/10.1007/s10956-006-9037-z
Stigler, J. W., & Hiebert, J. (2009). The teaching gap: Best ideas from the world’s teachers for improving education in the classroom. Updated with a new preface and afterword. New York: Free Press.
Terhart, E. (2011). Has John Hattie really found the holy grail of research on teaching? An extended review of Visible Learning. Journal of Curriculum Studies, 43(3), 425–438.
Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in
systematic reviews. BMC Medical Research Methodology, 8(45), 1–10.
Torgerson, C. J. (2007). The quality of systematic reviews of effectiveness in literacy learning in
English: A ‘tertiary’ review. Journal of Research in Reading, 30(3), 287–315.
van de Pol, J., Volman, M., & Beishuizen, J. (2010). Scaffolding in teacher-student interaction: A decade of research. Educational Psychology Review, 22(3), 271–296.
Wright, E. (1993). The irrelevancy of science education research: Perception or reality? NARST News, 35(1), 1–2.
Wu, H. K., Lee, S. W. Y., Chang, H. Y., & Liang, J. C. (2013). Current status, opportunities and
challenges of augmented reality in education. Computers & Education, 62, 41–49.
Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., … Yukhymenko, M. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of Educational Research, 82(1), 61–89.