Teaching Science as a Process, Not a Set of Facts
https://doi.org/10.1007/s11191-021-00253-8
Gunilla Öberg1 · Alice Campbell2 · Joanne Fox3,4 · Marcia Graves4 · Tara Ivanochko5 ·
Linda Matsuchi6 · Isobel Mouat4 · Ashley Welsh7
Abstract
The widespread misperception of science as a deliverer of irrefutable facts, rather than a
deliberative process, is undermining public trust in science. Science education therefore
needs to better support students’ understanding of the central role that disputes play in the
scientific process. Successfully incorporating scientific disputes into science education is,
however, challenging. The aim of this paper is to identify course components and design
features that develop undergraduate students’ abilities to write a logically coherent argu-
ment that is supported by evidence. First, we assessed student essays from a course that
had gone through a major revision aimed at strengthening students’ reasoning skills. When
comparing pre- and post-revision essays, we found substantial, and significant, improve-
ments across the assessment criteria. We then elicited oral and written feedback from
instructors who taught the course pre- and post-revision. We identified several changes that
instructors felt most impacted students’ reasoning skills, most importantly: streamlining
of learning outcomes and course content emphasizing argumentation skills; stronger scaf-
folding and better utilized peer review; and more detailed rubrics that specifically reference
learning outcomes and course content. The study illustrates the power of iterative course
revisions that incorporate findings from published research and instructors’ reflections on
teaching practices as a way to strengthen student learning.
* Gunilla Öberg
[email protected]
1 Institute for Resources, Environment and Sustainability (IRES), University of British Columbia, Vancouver, BC, Canada
2 Learning Experiences Assessment & Planning Division, Simon Fraser University, Vancouver, BC, Canada
3 Michael Smith Laboratories, University of British Columbia, Vancouver, BC, Canada
4 Department of Microbiology and Immunology, University of British Columbia, Vancouver, BC, Canada
5 Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Vancouver, BC, Canada
6 Department of Zoology, University of British Columbia, Vancouver, BC, Canada
7 Center for Teaching, Learning and Technology, University of British Columbia, Vancouver, BC, Canada
1 Introduction
There is a deep consensus in the science education literature that science students need
to better understand the role of dispute in the scientific process and to strengthen their
reasoning skills (Dagher & Erduran, 2016; Hodson & Wong, 2017; McCain, 2016; Wil-
son & Howitt, 2016). The goal of science is to enhance our knowledge about the physi-
cal world, and the understanding of phenomena and processes in nature is implicitly or
explicitly developed through arguments: claims supported by evidence through reason-
ing (Booth et al., 2003; Osborne, 2010; Clough, 2011). Some or all of the elements mak-
ing up a scientific argument (claims, reasons, evidence, warrants, backings, qualifiers)
may be the subject of rebuttals or counter-arguments. Accordingly, the knowledge-pro-
ducing enterprise is characterized by critical scrutiny and debate: scientists present and
debate their findings at conferences; manuscripts, like research proposals, are sent out for
review by other experts in the field.
The central problem is that science is commonly taught and communicated as if it were
a deliverer of irrefutable facts. The role of debate and dissent is largely absent from sci-
ence education, even though it sits at the heart of science. This is despite the National
Academies of Sciences, Engineering, and Medicine and numerous scholars having
emphasized, for decades, the urgent need to teach science students about the central role of
discursive exploration of ideas in science (Hodson & Wong, 2017; Matthews, 1994; National
Academies of Sciences, Engineering, and Medicine, 2016). This is deeply problematic, not
least because it leads to unrealistic expectations (Osborne, 2010). Also, when contradictory
findings are presented in the media, the general public commonly, and to an increasing
extent, sees this as an indication, or even proof, that science is not trustworthy (Suiter, 2016;
Vernon, 2017). Science students need to understand the central role of argumentation in
science not only because reasoning is central to the making of scientific knowledge, but
also because citizens need to understand the nuts and bolts of scientific argumentation in
order to engage meaningfully in policy debates where scientific claims are referenced or
relied on. The increasing attacks on science by populist politicians make this need more
important than ever, as these attacks turn on the misconception of science as a linear,
fact-producing enterprise (Collins et al., 2020).
A central question for educators is how to teach about the role of dispute in science.
The majority of empirical studies that probe this question focus on K-12 instruction.
While it is likely that many of the challenges facing K-12 teachers are similar to those
faced by university instructors, we cannot presume that what works in a K-12 setting
also works at the university level (Öberg & Campbell, 2019). A central difference is,
for example, that university instructors have direct experience of scientific research
through their graduate studies and many are actively conducting research while teach-
ing. In contrast, few K-12 teachers have direct experience of scientific research, those
who have conducted graduate studies are more likely to have a degree in science educa-
tion than in a scientific discipline, and very few have the opportunity to actively con-
duct research while teaching. Also, many university instructors lack formal training or
degrees in teaching and learning (whereas K-12 teachers hold a BEd), which means
university instructors may need more direct support with curricular and pedagogical
approaches.
Using a case-study approach, we set out to identify curriculum design features/course
components that may facilitate effective university-level teaching of the role of dissent in
science.
1.1 The Case
We use as our case a first-year seminar in science introduced in 2010 at the University
of British Columbia (UBC), which has a student body of over 50,000 students, of whom
approximately half are international. UBC does not track the ethnicity of its students, noting
that the domestic as well as international students are ethnically quite heterogeneous. We
wish to note that the UBC Vancouver campus is located on the traditional, ancestral, and
unceded land of the xʷməθkʷəy̓əm (Musqueam) (https://www.musqueam.bc.ca/). In the
past decade, directed and strategic efforts have been made at UBC to decrease the attrition
of indigenous students and faculty members as well as increase the knowledge, respect,
and application of their cultural heritage, laws, and traditional knowledge (https://isp.ubc.ca/,
2020). Even so, only a small percentage of the undergraduate students are indigenous,
and only a minority are enrolled in the Faculty of Science (source withheld). Accord-
ing to a 2011 survey, the two largest, equally-sized groups are students who identify as
white (European ancestry) and Chinese (approximately 1/3 each), followed by students of
Korean, Indian and Iranian ancestry. While UBC's Black Student Union is comparatively
visible (https://www.ams.ubc.ca/stories/creating-a-space-for-black-students-at-ubc/), Black
students form a minority of a few percent.
The course aims to strengthen first-year students' science literacy, emphasiz-
ing their reasoning skills over their abilities to memorize scientific facts (Fox et al., 2014).
It was designed to challenge students’ preconceived ideas of what science is and how it is
done. The ability to construct and deconstruct a scientific argument is a central tenet of the
course. The course involved a large institutional change, and it was launched successfully,
as evidenced by winning the xx Award in xxxx. Following the common design of first-year
seminars, students meet regularly in small groups (biweekly, 25 students) with a professor
and a TA. It is a writing-intensive course characterized by student-active learning activities.
During the course, students write short essays and carry out several other exercises that are
designed to help students learn how to construct and deconstruct a scientific argument that
is supported by evidence through logical reasoning. Several of the exercises are directly
related to the final assignment: the term paper. This paper should address an unresolved
research question and be based on data from the peer-reviewed literature. Students work on this
term paper throughout the entire course.
The course was popular among students, and analysis of student output demonstrated
that students' ability to develop a written argument improved over the course (Birol et al.,
2013, 2014). Each term, 8–10 sections are run in parallel (one section includes ~25 students,
1 professor, and a TA), with each of the nine departments in the Faculty of Science
required to contribute one instructor per year. The deficit is filled by emeriti professors
and sessional instructors. By 2017, a total of 51 individuals had taught the course, of whom
22 had taught it only once and 10 had taught it both pre- and post-revision.
In 2015, a 5-year review and revision was carried out under the leadership of the first
author of this paper, with the aim of further strengthening the course (Öberg & Campbell,
2019). The review was based on student evaluations and student surveys, in combina-
tion with feedback provided by former instructors, TAs, department heads, and educational
leaders at UBC. The revision led to more systematic use of scaffolding through-
out the course, more strategic use of historic and contemporary cases, the introduction of
face-to-face peer review, an “overhaul” of the vocabulary used to avoid jargon, the intro-
duction of a list of term-paper topics guiding students to suitable topics, development of
rubrics that more clearly described the difference between a weak and a strong paper, and
last, but not least, a more explicit focus on students’ ability to construct and deconstruct a
scientific argument. A more detailed description of the changes made to the course during
the revision process, and the reasons why, can be found in Öberg and Campbell (2019). Assuming that
the revision actually resulted in improved reasoning skills, it provides an opportunity to
identify curriculum design features that contributed to the change.
In this study we set out to:
A. Investigate whether the revision led to a change in students' ability to develop and communicate a logically coherent scientific argument based on evidence. Method: a blind quantitative comparison of term papers from the pre- and post-revision versions of the course.
B. If so, identify factors that instructors believe contributed to the observed change. Method: qualitative interviews and written feedback from instructors, probing their perceptions of the reasons for the differences observed in A.
C. Explore in more detail the course design elements that the interviewees in B identify as having contributed to the changes observed in A. This is done in light of previous research, including literature about science education (Hodson, 2014; Kampourakis, 2016; Leach et al., 2000; McComas, 2006), science studies (Douglas, 2017; Elliott, 2017), scaffolding (Davis, 2015; McNeill et al., 2006), assessment practices (Evans, 2013; Reimann & Sadler, 2017), and student peer review (Li, 2017; Topping, 1998; Trautmann, 2009), as well as literature on the development of reasoning skills and critical thinking in science education and higher education more broadly (Aikenhead, 1996; Howitt & Wilson, 2015; Matthews, 2014; Wilson & Howitt, 2016).
2 Methods
Fig. 1 Rubric used for assessment of term papers from W1 term 2014 and 2017 (total 370 papers). Each criterion is scored on a five-level scale: Fail (0), Poor (4), Acceptable (6), Good (8), Excellent (10).

Claim:
- Fail (0): The paper lacks a claim.
- Poor (4): The claim is unclear and inconsistent. It may not be a claim but, rather, a statement of fact.
- Acceptable (6): The paper has a claim, but it may be too broad. It may be inconsistent (e.g., the paper argues a different claim than that which is in the introduction).
- Good (8): The claim is clear and debatable, but would be improved by being more specific. It is consistent throughout the paper.
- Excellent (10): The claim is clear, specific, and debatable. It is consistent throughout the paper.

Counterevidence/Rebuttal:
- Fail (0): There is no counterargument presented.
- Poor (4): The counterargument and rebuttal are weak and difficult to identify. The counterargument may not refute the argument being made. It is not based in scientific evidence.
- Acceptable (6): The counterargument and rebuttal presented are based in scientific evidence, but address a minor aspect of the argument.
- Good (8): The counterargument and rebuttal presented address a substantive aspect of the argument, but could be strengthened. The counterargument and rebuttal are supported by scientific evidence, but not from primary data.
- Excellent (10): The counterargument and rebuttal are strong, detailed, and address a substantive aspect of the argument. Both the counterargument and the rebuttal are supported by strong scientific evidence.

Reasons (Do the reasons support the claim? Are they warranted, when appropriate?):
- Fail (0): The paper lacks reasons.
- Poor (4): There are reasons provided, but they may be difficult to identify. They do not support the claim. Reasons given may be contradictory.
- Acceptable (6): At least one reason supports the claim. Some need to be warranted. Two or more reasons may be conceptually indistinct.
- Good (8): Reasons provided generally support the claim and are conceptually distinct. They may be vague. There is a clear attempt to use warrants when necessary.
- Excellent (10): Each reason is precise and clearly supports the claim. Reasons are conceptually distinct. Warrants are thoughtful and appropriate.

Evidence (Is the evidence relevant, precise, accurate, representative?):
- Fail (0): The paper provides little evidence. The evidence does not support the reasons or claim. It may be irrelevant, insufficiently detailed, inaccurate, or difficult to identify.
- Poor (4): The evidence provided is difficult to identify and poorly supports the claim and reasons. It may be insufficiently detailed, inaccurate, or not representative. It may be unclear how the evidence supports the claim and reasons.
- Acceptable (6): The evidence is generally relevant, but includes little data from scientific papers reporting original research.
- Good (8): The evidence is well-selected, but includes little data from scientific papers reporting original research. It is generally relevant, precise, accurate, and representative. Evidence generally supports the claims and reasons, but may be uneven (strong for one reason and less strong for another).
- Excellent (10): The evidence is well-selected. It includes data from scientific papers reporting original research. It is consistently relevant, precise, accurate, and representative. It convincingly supports the claim and reasons.
Phase 1: Term papers from the 2014 (pre-revision) and 2017 (post-revision) cohorts were
assessed blind. Term papers were excluded from further assessment for any of three reasons:
the quality of the written expression is too poor to allow assessment; the term paper does
not deal with a scientific topic but a social, political, or ethical one; or the term paper is
descriptive (lacking an argument). Term papers that did not fall into any
of these three categories are further assessed using the rubric shown in Fig. 1. A fourth
exclusion category was subsequently added: term papers that were on a topic that the
assessor could not reliably assess due to not understanding the scientific background
sufficiently well. The data were analyzed statistically (unpaired t test, using GraphPad
Prism software 8.4.2). Results are presented as mean ± 95% CI (Fig. 2) and percentage
(Fig. 3). Statistical tests, significance (p-value), and sample size (n, number of individuals
per group) are stated in the figure legends. No standard deviations were significantly
different between groups. P-values are indicated by asterisks: ****p < 0.0001.
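For readers who want to reproduce this kind of comparison outside GraphPad Prism, a minimal sketch is given below in Python with NumPy and SciPy. The rubric scores in it are invented placeholders, not the study's data (which are available from the corresponding author on request); the per-cohort mean ± 95% CI mirrors how the results are reported in Fig. 2.

```python
# Minimal sketch of the reported analysis (not the study's actual data or
# scripts): an unpaired two-sample t test comparing rubric scores between
# the 2014 and 2017 cohorts, with the mean ± 95% CI reported per cohort.
import numpy as np
from scipy import stats

# Hypothetical rubric scores on the 0-10 scale of Fig. 1; invented values.
scores_2014 = np.array([4, 6, 4, 8, 6, 4, 6, 8], dtype=float)
scores_2017 = np.array([6, 8, 8, 10, 6, 8, 10, 8], dtype=float)

# Unpaired t test, as reported for the GraphPad Prism analysis.
t_stat, p_value = stats.ttest_ind(scores_2014, scores_2017)

def mean_ci95(x: np.ndarray) -> tuple[float, float]:
    """Return the mean and the half-width of its 95% confidence interval."""
    half_width = stats.sem(x) * stats.t.ppf(0.975, df=len(x) - 1)
    return float(x.mean()), float(half_width)

for year, scores in (("2014", scores_2014), ("2017", scores_2017)):
    mean, hw = mean_ci95(scores)
    print(f"{year}: mean = {mean:.2f} ± {hw:.2f} (95% CI), n = {len(scores)}")
print(f"unpaired t test: t = {t_stat:.2f}, p = {p_value:.4f}")
```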
Phase 2: To explore potential reasons for the difference that emerged between the quality
of the papers produced by the two cohorts, we used a simplified form of narrative
inquiry, inspired by the Berkeley Group conversations (Hollingsworth & Dybdahl, 2007).
We asked four of the most experienced instructors among the ten instructors who had
taught the course pre- and post-revision if they would be willing to participate in an
interview about the courses. All four accepted. The visualizations from phase 1 (Table 1
and Figs. 2 and 3) were presented to the interviewees during open-ended, semi-structured
interviews lasting between 15 and 53 min.

Fig. 3 Histograms showing the distribution in quality for each of the argument components that were blind assessed for term papers written by students in SCIE 113 at UBC in the fall terms of 2014 and 2017, using the rubric in Fig. 1.

The interviewees were given time
to take in the information and were then asked to comment on what they saw and what
they thought might have caused the pattern they saw. The interviewer responded to
clarification questions but refrained from interpreting the data for the interviewees, asking
instead for their interpretation. The interviews were transcribed and thematically
coded and analyzed, noting course components and design features that the instructors
mentioned, also noting how they framed these components and features (Creswell &
Creswell, 2017). The components and features that were most prominently emphasized
by the interviewees were selected for further analysis. The outcome of the analysis was
validated by inviting the interviewees plus the other six instructors who had taught the
course pre- and post-revision to comment on a first draft of the manuscript, knowing
that their comments would be used as part of the data collection of instructors' interpretations
of the findings. They were also invited to become co-authors. The invitation to
comment on the draft was also sent to a teaching assistant (TA) who had been engaged
in the revision work and conducted teaching-as-research (TAR) projects in support of
the revision. Three of the interviewees, two of the other instructors, and the TA accepted
the invitation and were sent the manuscript draft. All six provided written feedback, all
wished to be involved as co-authors, and all were engaged in the revision process of the
manuscript until submission. The study also draws on a 1-h long focus group held in
December 2019 with four of the instructors to better understand how instructors used
the rubric for written assignments.
The study was conducted with ethics approval (BREB H17-02074; H1001749).
Table 1 Overview of term papers analyzed in the present study for all eight sections as well as for the two sections with instructors teaching in 2014 as well as in 2017. Columns: year; total number of term papers in the batch; numbers of term papers excluded from further assessment due to written expression, descriptive, or not a scientific topic, or because the assessor did not feel competent to assess (and total number excluded); and total number of papers assessed using the rubric.
3 Results and Discussion
When the interviewees were asked to describe what they thought contributed to the
difference observed between the 2014 and the 2017 term papers, peer review was one of
the factors highlighted.
the term paper version 1 correction is essential plus the peer review /…/ so we change
that in 2017 all the peer review is done in class so that’s … it is, it is, it is the peer
review. It’s the peer review of all the written assignments. (Instructor #1, interview)
Student peer review has been one of the central activities in the course since its incep-
tion in 2010. This is because student peer review is held to enhance students' critical thinking
skills and to help students achieve a stronger understanding of scientific practice
(Li, 2017; Nicol et al., 2014; Trautmann, 2009). Peer review forms a cornerstone of scien-
tific knowledge production as referees are called upon to adjudicate the quality of scientific
work in numerous contexts, such as research proposals, job applications, promotion through
the ranks, conference proposals, and manuscripts submitted for publication. In other words,
peer review influences what gets funded, who gets hired, who gets promoted and tenured,
what gets discussed at conferences, and what gets published in particular journals. In
cases where peer review processes have been developed and implemented in upper-level
and graduate science education, peer review has been introduced to help students understand
that it is integral to the process of scientific knowledge production (Liu et al., 2002;
Trautmann, 2009; Glaser, 2014). It has also been introduced to provide scientists with an
experience of peer review before publishing their first paper.
A student survey conducted in 2012 indicated that the use of peer review in SCIE 113
at UBC was positively received by students (Author), but instructors expressed concerns
with the way it was implemented. Prior to the revision of the course, each student gave
anonymous feedback and graded three other students' papers using peer review software
(CPR) with the grade counting towards the final grade (i.e., summative assessment) (Li,
2017; Nicol et al., 2014; Trautmann, 2009). The explicit goal of the feedback was that it
should help their peers identify their strengths and weaknesses and target areas that needed
work (i.e., formative assessment). Before providing feedback, the students had to do some
trial runs and pass a calibration test. On occasion, instructors expressed concerns over not
having access to the feedback that students provided and that they therefore were unable
to provide feedback on the feedback. Some instructors questioned the validity of the cali-
bration and the grading as they suspected that certain students learned how to “game” the
system, which they felt was driven by students’ assessments counting towards their grades.
The concerns relate to topics that have received attention in previous studies on student
peer review, such as anonymity and formative versus summative feedback (Li, 2017; Nicol
et al., 2014; Trautmann, 2009). During the revision, the implementation of peer review was
changed to face-to-face reciprocal formative assessment in groups of three (Table 2).
Table 2 Changes made to the design of peer review of written output in SCIE 113 at UBC (categories after Topping, 1998)

- Focus (quantitative/summative, qualitative/formative, or both?). CPR: formative and quantitative summative assessment. Face-to-face: formative assessment.
- Product/output (tests/marks/grades, or writing, or oral presentations, or other skilled behaviors?). CPR: participation marks, qualitative written feedback, quantitative peer-grading. Face-to-face: written and oral feedback.
- Official weight (contributing to the final official grade or not?). CPR: yes. Face-to-face: no.
- Directionality (one-way, reciprocal, mutual?). CPR: one-way. Face-to-face: mutual.
- Privacy (anonymous/confidential/public?). CPR: anonymous. Face-to-face: public.
- Contact (distance or face-to-face?). CPR: distance. Face-to-face: face-to-face.
- Constellation assessors (individuals, pairs, or groups?). CPR: individuals. Face-to-face: groups of three.
- Constellation assessed (individuals, pairs, or groups?). CPR: individuals. Face-to-face: groups of three.
- Place (in/out of class?). CPR: out of class. Face-to-face: in class.
- Time (class time/free time/informally?). CPR: out of class. Face-to-face: class time.

Formative assessment refers to assessment that is designed to help students identify areas where they can improve, whereas summative assessment refers to quantitative grading that counts towards the final grade.
It is well documented that increased opportunities for students to revise their work after
feedback can improve the quality of the specific exercise as well as students’ writing and
reasoning skills, among other things (Cho & MacArthur, 2010; Li, 2017; Topping, 1998).
It is also well documented that students find peer review uncomfortable and challenging,
even when they benefit from the process (Walker, 2015). Several authors argue that it is
possible to reduce the level of discomfort and that effective peer review requires an
environment where students feel it is safe to provide feedback without repercussions. Liu
and Carless (2006) argue that the use of grades is one of the major reasons for student
discomfort and that it can also undermine the potential of peer feedback for improving
student learning. These concerns are echoed by Li (2017) who in their study found that the
top concern among students was that the assessment was going to be used towards their
scores on the assignment. In light of these findings, it seems likely that the omission of
the summative element of the peer review in the review process decreased the discomfort,
strengthened learning, and thus contributed to the observed improvement in the argumen-
tative quality of the essays (Figs. 2 and 3).
There are indications that student feedback can actually be more effective than feed-
back from experienced scholars (Cho & MacArthur, 2010; Nicol et al., 2014). This may
be because experts underestimate the challenges faced by novices, and student feedback
is communicated in plain language devoid of expert jargon. Cho and co-workers have con-
ducted a series of studies probing the effectiveness of student and instructor feedback (Cho
et al., 2008; Cho & MacArthur, 2010; Cho & Cho, 2011). When describing factors that
they believed contributed to the improved quality of the essays, the interviewees in the
present study highlighted that students were now required to explain how they used the
feedback they received. Their perception finds indirect support in a study consisting of a
series of experiments where students received written feedback from peers or an instructor
(Cho & MacArthur, 2010). Students were given the opportunity to revise their draft before
submitting the final product. They were also expected to include a description of the feed-
back, the revisions they made, and justifications as to why they chose to accept or ignore
suggestions. The authors found that students who received feedback from peers made
more substantial revisions than those who received feedback from the instructor. Cho and
MacArthur (2010) hypothesize that students might be more effective in providing feedback because
they have a better understanding of the questions that their peers are asking because “peers
share problems, languages, and knowledge”. We are not aware of any studies that explicitly
have tested whether asking students to respond to the feedback impacts the learning gains,
but intuitively, this seems reasonable, as the cognitive skills required to articulate what the
critique was and how it was used are more demanding than those required to simply receive
the feedback. It would be interesting to carry out research to explore whether or not this is the case.
Several studies find that students benefit more from providing than receiving feedback
(Cho & Cho, 2011; Li et al., 2010; Nicol et al., 2014). In his review of the peer assess-
ment literature, Keith Topping (1998) concludes that “For the assessor. Peer assessment is
reflexive. The expression learning by teaching, frequently applied to peer tutoring, might
become learning by assessing in the current context.” (p. 254, emphasis in the original).
When assessing someone else’s work, students need to apply a critical perspective and
learn how to ask good questions. Diagnosing a text to identify misconceptions, logical fal-
lacies, and otherwise weak argumentation is a cognitively demanding skill, and as for most
skills, practice enhances the skill level. The cognitive skills required to respond to feedback
are less demanding, which would explain why providing feedback results in larger learning
gains than receiving it. In summary, it seems reasonable that the shift to face-to-face review,
the omission of the summative assessment, and the increased emphasis on formative assessment
contributed to the increased argumentative quality of the essays.
When reflecting on the changes that might have contributed to the increased quality in the
elements that make up an argument (Figs. 2 and 3), interviewees volunteered a variety of
scaffolds, such as the outline that students are required to use early on in the class, the
feedback on early versions provided by peers, the use of historical and contemporary cases
illustrating science in the making, and better integration and use of mandatory readings:
there is a different format for assignments building up to this more summative term
paper … we are doing a lot more scaffolding assignments towards using all skills
that you’ll use in your term paper. It wouldn’t surprise me that with that increased
scaffolding we’ve increased students’ abilities to do argumentation well. Things like
using an outline. Using the same outline template for your previous essays. Having
your essays be much more targeted. I think in 2014 we did more essays and now
we’ve removed that but increased the scaffolding. So I imagine that would have an
impact. (Interviewee #4)
The curriculum renewal process revealed a strong feeling among the teaching team that
the scaffolding could be improved to better support learning, not only with respect to peer
review, but more generally in relation to the course goals and expected learning outcomes.
In the 2014 version of the course, students wrote 3 argumentative essays in addition
to the term paper. In all cases, a first version was submitted, and students received sum-
mative and formative feedback from peers as well as instructors, with the final version of
the term paper being the last assignment to be submitted. To make room for more scaf-
folding, one argumentative essay was removed. Considerable support was provided for the
first essay: students received help to formulate a claim, they were required to develop an
outline following an instructor-provided format, and they received formative oral and writ-
ten feedback from peers as well as summative written feedback from instructors. The scaf-
folds for the term paper were slightly fewer than for essay 1, but students still received oral
and written feedback on a first version from peers and instructor. For the second essay,
the last assignment to be submitted, the only scaffolding that remained was formative oral
and written feedback provided by peers. In the interviews, this improved scaffolding was
commonly cited as a reason for the observed changes in the quality of the term papers: the
decrease in descriptive papers, the decrease of papers not dealing with a scientific contro-
versy, and the increased quality of the elements of an argument.
We have better system for scaffolding of the course in place now than we did earlier.
I don’t think the student ability has changed. It is simply that they are learning more
effectively. /../ (Instructor #3, interview)
This ties back to the reflections on the design of the peer review, as several authors
caution against introducing peer review without careful scaffolding (Liu et al., 2002; Top-
ping, 1998; Van den Berg et al., 2006). There is, no doubt, broad agreement in the lit-
erature that effective feedback requires that the practice is an integral part of the writing
process—rather than something that happens at the end—with students obligated to not
only revise their work after receiving feedback but also to reflect on the feedback (Walker,
2015; Weaver et al., 2016; Woodward, 2015). Most importantly, instructors and students
must move beyond the perception of feedback as one-way transmission with the reviewer
telling the reviewee what to correct. To be effective, feedback must be dialogic in nature
where both the reviewer and the reviewee play an active role (Nicol et al., 2014). The
receiver needs to formulate and conceptualize the feedback, which means that they need to
ask questions about the feedback, discuss it, and connect it to their preconceived ideas and
prior knowledge. Also, it is held that the practice is more effective if utilized in a series of
exercises rather than as “one-offs”.
Like numerous other authors, Herrington and Cadman (1991) underline the need for
structure and scaffolding: expectations need to be clearly laid out, high-quality feedback
needs to be modelled, and students need to receive feedback on their feedback; otherwise,
there is an imminent risk that students will comment on technicalities and style rather
than content. Strengthening the scaffolding included a revision of the rubric used to assess
the essays, to increase the utility for both students and instructors. The goal was to make
it easier to relate to the rubric throughout the course, thus feeding forward and making
expectations clearer. The interviewees called attention to the revised rubric:
That there is now a rubric a much clearer rubric /…/ the development of the rubric is
probably one of the stronger things that have gone to supporting, allowing students
to think about how we want them to target and hit our targets. /…/ (Instructor #2,
interview)
Assessment has the potential to serve as a powerful tool for scaffolded learning, func-
tioning both as a transmitter of information and a help for students when constructing their
understanding (Handley et al., 2013; Howitt & Wilson, 2015; Reimann & Sadler, 2017).
Scholars increasingly agree that effective assessment needs to find a balance between
formative and summative assessment (Boud, 2000). Wiliam (2011) argues that when done
well, teaching and assessment become inseparable. In a similar vein, Reimann and Sadler
(2017), who conducted a study of higher education staff's perceptions of assessment, conclude
that "What becomes clear from this analysis is the challenge of intricately weaving assessment
for learning into courses in a longitudinal manner" (p. 11). Among education scholars,
it is commonly held that to be effective, feedback needs to be an integrated and ongoing
part of assessment and focus more on “feeding forward”. In her review of over 400 papers
on assessment feedback, Evans (2013) notes that educational scholars are moving away
from the old divide between assessment as a corrective tool vs a challenging tool, increas-
ingly embracing a model that utilizes a combination of both.
From its inception, the rubric used for the written assignments in SCIE 113 has played
this double role: for formative assessment when students receive feedback on early drafts
and for summative assessment on graded assignments. During the review of the course,
concerns were nevertheless expressed related to the rubric used to assess the term paper.
The concerns raised were of a seemingly technical character such as the original rubric
using Likert scales with ten sub-categories, and qualitative descriptions provided for the
end points and the middle of the scale, as illustrated in the example below for the sub-cate-
gory “Ability to structure a scientific argument” (Fig. 4). The rubric was iteratively revised
based on feedback received during workshops with instructors and TAs in light of literature
on rubrics used for formative assessment (Brookhart, 2013; Greenberg, 2015; Panadero &
Jonsson, 2013). This work resulted in a rubric with each field corresponding to the letter
grades used at the university, as illustrated in the example provided in Fig. 5.
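As an illustration of this design idea, rather than of the actual SCIE 113 grading scheme, the Python sketch below represents one rubric row whose descriptor "boxes" map onto letter grades; the descriptors paraphrase the "Claim" sub-category shown in Fig. 5, while the marks assigned to each box are invented placeholders.

```python
# Hypothetical sketch of a rubric row whose fields map onto letter grades.
# Descriptors paraphrase the "Claim" sub-category of Fig. 5; the marks out
# of 10 for each box are invented, not UBC's actual grade boundaries.
CLAIM_DESCRIPTORS = {
    "A": "The claim is clear, specific, and debatable. It is consistent "
         "throughout the paper.",
    "B": "The claim is clear and debatable, but would be improved by "
         "being more specific.",
    "C": "The paper has a claim, but it may be too broad.",
    "D": "The claim is unclear and inconsistent.",
    "F": "The paper lacks a claim.",
}

# Invented mark for each letter-grade box on the 10-point "Claim" row.
CLAIM_MARKS = {"A": 9.0, "B": 7.5, "C": 6.0, "D": 5.0, "F": 2.0}

def grade_claim(letter: str) -> tuple[float, str]:
    """Return the mark and descriptor for a letter grade on the Claim row."""
    return CLAIM_MARKS[letter], CLAIM_DESCRIPTORS[letter]

mark, descriptor = grade_claim("B")
print(f"Claim: {mark}/10 -- {descriptor}")
```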
Interestingly, the revision of what seemed to be technicalities had a major impact on the
utility of the rubric, here illustrated by a quote from one of the interviewees:
The rubric is easier to use and therefore I get what I’m supposed to be teaching. And
that, I think, probably is part of this improvement as well. So it’s easier for the stu-
dents to learn but it’s easier for the teacher to teach. (Instructor #2, interview)
In the revision of the rubric, attention was paid to the vocabulary, explicitly utilizing
terms that were used in mandatory readings (claim supported by evidence through reason-
ing; Booth et al., 2003). The rubric was also used to bring attention to central concepts
such as that a strong claim is debatable and that strong reasons are warranted.

Fig. 4 One of the categories in the previous rubric used to assess written assignments in SCIE 113, as an example of the Likert scale used, with descriptions provided at the end points and the midpoint of the scale (the entire rubric: Appendix 3).

Fig. 5 One of the sub-categories in the rubric developed during the revision process of SCIE 113 at UBC: the "Claim" row (Mark: /10), with five descriptors running from "The claim is clear, specific, and debatable. It is consistent throughout the paper" (corresponding to a letter grade of A) to "The paper lacks a claim" (F). The bottom row, which was not given in the rubric, provides the letter grades used at UBC that correspond to the percentage marks of the "boxes" in the rubric (the entire rubric: Appendix 4).

The text in the boxes was further revised to better align with the actual vocabulary used by instructors.
This was done to make it easier for instructors to explain what, for example, a good claim
looks like and to more clearly communicate the expectations to the students. For example,
instructors felt that “The reasons support/do not support the claim” was more specific and
“more easily digested” than the proposed “The reasons are relevant/not relevant”. In addi-
tion, a series of related terms were used to provide granularity to the rubric, drawing on the
vocabulary used in Bloom’s taxonomy (Table 3).
Table 3 Terms used in the SCIE 113 (UBC) assessment rubric of written output to indicate increasing quality of the elements making up the argument and of the logical progression in an argumentative essay.
The interviewees pointed out that the new rubric not only made it easier for instructors
to grade but also made it easier to help students understand what they were expected to
learn, not least because it made the expectations clearer for both instructors and students.
Articulating the expectations is something that SCIE 113 does better than any other
course I’ve ever taught. (Instructor #1, focus group)
The category "logical progression" was added after the fall term of 2017 because instructors
pointed out that the rubric did not capture whether or not the argument was logically
consistent. This was raised in the final biweekly instructor meeting of the fall of 2017
and also mentioned in a focus group on rubrics that was held in early December that
year with four instructors with varying experience of teaching the course.
the enhanced emphasis on logic, and asking students to really evaluate if their evi-
dence logically supports the claim is key. The fact that we introduce this in Unit 2
and it is now reflected in how we assess the TP [term paper] in the rubric is a big
reason why students have improved their reasoning with evidence. (Instructor #7,
written feedback)
Students were explicitly asked to use the rubric when providing each other with feed-
back, and interviewees underlined that the improved clarity of the revised rubric made it
easier for students to provide each other with constructive feedback when reviewing each
other’s work.
That there is now a rubric a much clearer rubric I think in the term paper. There is
the first draft and the feedback on the first draft and the peer review with the students.
/…/ the rubric has changed. And there’s clear delineation. I think. On how we’re
thinking about these things. And so the students doing the peer review can probably
more clearly think about the claim and about the evidence. And therefore provide
more concrete feedback. (Instructor #2, interview)
Comments made by instructors suggest that many utilized the rubric when students
sought feedback during office hours, as a way to help students understand how to improve
their papers. Instructors said that they would point to the specific wording in the box where
the student had been placed, and the wording in the adjacent boxes, to help them understand
what was lacking.
I talked to a lot of them about their papers individually; and at that point I used
the rubric, and said, ok well the difference, you can see it says here…a lot of them
had to do with the evidence one, “what is evidence?” “Well, you can see here, it’s
including data, so all of those comments that I meant about providing details—is
there more specific information and data you can use?” So that tended to be along
these…”well you’re following this category, but you can see this one, these words
here mean that if you did this you would get into that category”. So that tended
to be much more individual, “let’s look at the rubric and see where you were,
what you would have to do to get higher.” (Focus group/participant #2, emphasis
added)
I did have one person who was like, I think he was at the bottom of the class, or
he’s like bottom two or three, who just won’t accept anything I say. He’s challenged
me on everything, and so this was actually very helpful, because he has a hard time
understanding me when I give him feedback about things. And so I tell by the way
he argues back with me that he doesn’t really understand what I said fully. So having
him read this [was really helpful]. (Focus group/participant #3)
The number of term papers that the assessor did not feel competent to assess increased
from 5 in 2014 to 17 in 2017. A comparison of papers from 2014 and 2017 on similar
topics or research areas suggests that the major difference between the 2 years lies in
a larger number of students going more deeply into the scientific literature and thus
writing more complex papers. Interviewees shared that they thought that the framing
of the term paper assignment had contributed to more advanced essays. While the
2014 cohort was asked to write about “a current controversy in science that interests
you” (term paper instructions 2014, Appendix 4), the 2017 cohort was asked to write
about “an unresolved scientific research question” (see Appendix 4 for full instruc-
tions). This was done in response to instructors expressing frustration over the many
students selecting topics that are presented as scientific controversies in the popular media
but that rarely are topics of contemporary scientific debate, such as "Why did the dinosaurs
die out?".
What I noticed was that a lot of students would Google “scientific controversies”
and then pick from the popular media results that emerged. (Instructor #6, written
feedback)
Conversations held during the revision process also revealed that instructors and TAs
felt that some students interpreted the term "controversy" as implying that one side of
the argument is “controversial”, i.e., wrong, leading to a wish to pick the “right side” and a
resistance to including "controversial" counter-arguments. The change in wording was
universally received as a major improvement, which is illustrated here with quotes from two
of the interviewees reflecting on potential explanations for the results shown in
Table 1 and Figs. 2 and 3.
I think the language now is not even scientific controversy. It’s more like ‘an active
area of scientific research’ or something like that. Suggesting that it is not about one
side or the other but more about ‘as we gain insight through a scientific approach we
will see that evidence will amass’. And so we are asking them to engage in a topic
that is early enough in that process that the evidence on both sides is still equal in
some way. (Instructor #2, interview)
A significant change, I think, is in the wording of the prompt for the assignment of
actually changing that away from ‘a scientific controversy’ to ‘an unresolved scien-
tific question’. /…/ that is a significant change because it validates both sides of the
argument. A controversy, in the frame of controversy, implies a positive side and a
negative side. That there is a negativity to some ‘controversy’ whereas ‘unresolved
question’ means you can debate both sides and that really is the best kind of topic for
students. (Instructor #4, interview)
Interviewees also pointed out that while the 2014 cohort was asked to identify a
topic on their own, the 2017 cohort was given topics to write about, and that this
would also have contributed to the improved quality. This change was also made in
response to instructors expressing concerns that students tended to gravitate towards
term paper topics that had received attention in the popular media. A list of 20–25
topics spanning multiple disciplines that were assessed as suitable in relation to the
aim of the course was iteratively developed in dialogue with instructors, with the
explicit aim of helping students write stronger essays. Students were still allowed
to pick a topic of their own choosing, and a few students did so, but the majority
selected a topic from the list. Below we provide a couple of examples of topics from
the list:
4 Conclusions
The study identified the following course components and design features as most important
for developing students' ability to write a logically coherent argument supported by evidence:
• Face-to-face formative peer review, accompanied by the requirement that students
explain how they used the feedback they received.
• Strategic scaffolding designed to support the ability to develop and communicate a log-
ically coherent scientific argument based on evidence.
• Careful attention to vocabulary: aligning terms and concepts in required readings
on argumentation with the learning objectives, written instructions to students, and
rubrics.
• Detailed assessment rubrics for student writing that are used for both form-
ative and summative assessment and directly linked to specific learning
outcomes.
• Development of a suitable list of term paper topics about unresolved research
questions.
The present study illustrates the power of utilizing iterative course revisions that com-
bine findings from educational research with instructors’ reflections on teaching practices
as a way to strengthen student learning.
Writing Structure:

Thesis statement (states the main idea or claim of the argument). Mark: ___/5. Scale 5–0: clear thesis statement (5); thesis statement lacks clarity (midpoint); thesis statement is missing (0).

Development statement (presents the main reasons that will be developed to support the argument). Mark: ___/5. Scale 5–0: clear development statement (5); development statement lacks clarity (midpoint); development statement is missing (0).

Organization of ideas (same order as stated in thesis and development; each idea moves the argument forward). Mark: ___/10. Scale 10–0: ideas are presented in the same order as stated in thesis and development (10); ideas are generally presented in the same order as stated in the thesis and development (midpoint); ideas are not presented in the same order (0).

Paragraphs (consist of one main idea with supporting evidence and examples). Mark: ___/10. Scale 10–0: the writing is organized into paragraphs (10); 50% of the writing is organized into paragraphs (midpoint); the writing does not contain paragraphs (0).

Sentences. Mark: ___/5. Scale 5–0: the sentences are clear and grammatically correct (5); the sentences lack clarity and/or have grammatical errors that inhibit understanding (0).
Argument:

Idea development (ability to structure a scientific argument). Mark: ___/10. Scale 10–0: scientific argument presented with claims based on logical reasons (10); scientific argument presented with most claims based on logical reasons (midpoint); scientific argument presented with few, if any, claims based on logical reasons (0).

Content. Mark: ___/20. Scale 20–0.

Writing Skills:

Scientific writing skills total (this section for term project only). Mark: ___/15.

Citations. Mark: ___/5. Scale 5–0.

Abstract (provides a summary of the scientific argument that will be presented in the term project). Mark: ___/10. Scale 10–0: abstract provides a clear summary of the argument (10); abstract generally provides a summary of the argument (midpoint); abstract lacks clarity or does not provide a summary of the argument (0).
I: Argument (Mark: ___/50)

Ia: Claim (Mark: ___/10):
- The paper lacks a claim.
- The claim is unclear and inconsistent. It may not be a claim but, rather, a statement of fact.
- The paper has a claim, but it may be too broad. It may be inconsistent (e.g., the paper argues a different claim than that which is in the introduction).
- The claim is clear and debatable, but would be improved by being more specific. It is consistent throughout the paper.
- The claim is clear, specific, and debatable. It is consistent throughout the paper.

Ib: Reasons (the reasons support the claim and are warranted, when appropriate) (Mark: ___/15):
- The paper lacks reasons.
- There are reasons provided, but they may be difficult to identify. They do not support the claim. Reasons given may be contradictory.
- At least one reason supports the claim. Some need to be warranted. Two or more reasons may be conceptually indistinct.
- Reasons provided generally support the claim and are conceptually distinct. They may be vague. There is a clear attempt to use warrants when necessary.
- Each reason is precise and clearly supports the claim. Reasons are conceptually distinct. Warrants are thoughtful and appropriate.
Comments:
Term Project
You will develop an argumentative essay, with feedback throughout the term, on the following:
Identify a Current Controversy in Science that Interests you. State your Opinion
and Present the Evidence that Justifies your Position
Tips and details for working through each part of this assignment are in the term
project module on Connect (all sections).
Term project (revised outline, version 1, and final term project) = 25%
Class 11, M 1 Oct: Paragraph on topic idea for approval due; submit in class
Class 13, F 8 Oct: Outline due; submit in class (not for marks)
Class 16, M 20 Oct: Revised outline (including references) due; submit in class (3%)
Class 23, F 7 Nov: Version 1 due, along with Turnitin report; submit to Connect (7%)
Class 25, W 12 Nov: Bring copy of version 1 to class for peer feedback
Not a class, W 3 Dec: Final term project due; submit to Connect (15%)
Identify a current controversy in science that interests you. State your opinion, and
present the evidence that justifies your position.
The term project is expected to present an evidence-based argument that is motivated
by your own interests. The structure of the term project should reflect how you devel-
oped your position. Begin with broad contextual statements that motivate the topic. In
the main body, discuss different viewpoints (compare, contrast), and explain and pro-
vide support for why you are taking your position. In the conclusion, briefly summarize
the previous discussion and state your position clearly again.
See Fig. 6
Introduction
For your term paper, you will write an argumentative essay in response to this prompt:
Choose an unresolved scientific research question that interests you. State your
claim and present the reasons and evidence that justify your position.
You will conduct research for your term paper using peer-reviewed scientific literature.
Guidelines
Assessment
Feedback
You will receive peer and instructor feedback on version 1 of your term paper. You are
expected to incorporate this feedback into the final version of your term paper. When
you submit the final version of your term paper, you must include a response to feedback
worksheet. Please include this as the last page of your paper, rather than as a
separate file.
If you do not fully understand the feedback that you have received, be sure to speak
with the person providing feedback (whether peer or instructor), so that he or she can
clarify it for you.
Grading
Your term paper outline is worth 2.5% of your final SCIE 113 grade.
Version 1 of your term paper is worth 15% of your final SCIE 113 grade.
The final version of your term paper is worth 20% of your final SCIE 113 grade.
Your instructor will use the term paper version 1 rubric when grading version 1 of
your term paper, and the term paper final version rubric when grading the final version.
All writing must be your original work. Students are expected to be familiar with the
UBC policy on academic integrity and plagiarism [link disabled for review purposes],
and to adhere strictly to it for all work in this course. Academic misconduct of any kind
will not be tolerated; the consequences for academic misconduct will, at minimum, include
a grade of zero for the assignment. Students who engage in academic misconduct may also
face possible expulsion from the course and possible suspension from the university.
We expect you to use proper citations and references in your term paper. Citing your
sources as you develop your outline and your draft, and keeping careful notes about each
paper that you cite, will help to prevent unintentional plagiarism.
You will submit both version 1 and the final version of your paper to Turnitin to verify
that each version is your original writing.
You must submit a Turnitin similarity report with both version 1 and the final version of
your term paper.
Funding This study was funded by UBC’s Development Fund with matching funding from the departments
of Botany, Computer Science, Earth, Ocean and Atmospheric Sciences (EOAS), Microbiology
and Immunology, Statistics, and Zoology at the University of British Columbia.
Data Availability Raw data were generated at the University of British Columbia. Derived data supporting
the findings of this study are available from the corresponding author Gunilla Öberg on request.
Declarations
Conflict of Interest The authors declare that they have no conflict of interest.
References
Aikenhead, G. S. (1996). Science education: Border crossing into the subculture of science. Studies in Sci-
ence Education, 27, 1–52.
Booth, W. C., Colomb, G. G., & Williams, J. M. (2003). The craft of research. University of Chicago Press.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Con-
tinuing Education, 22(2), 151–167.
Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.
Brownell, S. E., Price, J. V., & Steinman, L. (2013). A writing-intensive course improves biology under-
graduates’ perception and confidence of their abilities to read scientific literature and communicate
science. Advances in Physiology Education, 37(1), 70–79.
Cho, K., Chung, T. R., King, W. R., & Schunn, C. (2008). Peer-based computer-supported knowledge
refinement: An empirical investigation. Communications of the ACM, 51(3), 83–88.
Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and
Instruction, 20(4), 328–338.
Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39(5), 629–643.
Clough, M. P. (2011). The story behind the science: Bringing science and scientists to life in post-sec-
ondary science education. Science & Education, 20(7), 701–717.
Collins, H., Evans, R., Durant, D., & Weinel, M. (2020). Experts and the will of the people. Springer.
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods
approaches. Sage publications.
Dagher, Z. R., & Erduran, S. (2016). Reconceptualizing the nature of science for science education. Sci-
ence & Education, 25(1–2), 147–164.
Davis, E. A. (2015). Scaffolding learning. In Encyclopedia of science education (pp. 845–847). Springer.
de Melo-Martín, I., & Intemann, K. (2018). The fight against doubt: How to bridge the gap between scientists and the public. Oxford University Press.
Douglas, H. (2017). Science, values, and citizens (Vol. 81, pp. 83–96). Cham: Springer International Publishing.
Elliott, K. C. (2017). A tapestry of values: An introduction to values in science. Oxford University Press.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational
Research, 83(1), 70–120.
Greenberg, K. P. (2015). Rubric use in formative assessment: A detailed behavioral rubric helps students
improve their scientific writing skills. Teaching of Psychology, 42(3), 211–217.
Handley, K., den Outer, B., & Price, M. (2013). Learning to mark: Exemplars, dialogue and participa-
tion in assessment communities. Higher Education Research & Development, 32(6), 888–900.
Herrington, A. J., & Cadman, D. (1991). Peer review and revising in an anthropology course: Lessons
for learning. College Composition & Communication, 42(2), 184–199.
Hodson, D. (2014). Nature of science in the science curriculum: Origin, development, implications and
shifting emphases. In International handbook of research in history, philosophy and science teach-
ing (pp. 911–970): Springer.
Hodson, D., & Wong, S. L. (2017). Going beyond the consensus view: Broadening and enriching the
scope of NOS-oriented curricula. Canadian Journal of Science, Mathematics and Technology Edu-
cation, 17(1), 3–17.
Howitt, S., & Wilson, A. (2015). Developing, expressing and contesting opinions of science: Encourag-
ing the student voice. Higher Education Research & Development, 34(3), 541–553.
Kampourakis, K. (2016). The “general aspects” conceptualization as a pragmatic and effective means to
introducing students to nature of science. Journal of Research in Science Teaching, 53(5), 667–682.
https://doi.org/10.1002/tea.21305
Kuh, G. D. (2008). Excerpt from high-impact educational practices: What they are, who has access to
them, and why they matter. Association of American Colleges and Universities, 14(3), 28–29.
Leach, J., Millar, R., Ryder, J., & Séré, M. G. (2000). Epistemological understanding in science learn-
ing: The consistency of representations across contexts. Learning and Instruction, 10(6), 497–527.
https://doi.org/10.1016/S0959-4752(00)00013-X
Li, L. (2017). The role of anonymity in peer assessment. Assessment & Evaluation in Higher Education,
42(4), 645–656.
Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by
giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536.
Libarkin, J., & Ording, G. (2012). The utility of writing assignments in undergraduate bioscience. CBE Life Sciences Education, 11(1), 39–46.
Liu, J., Pysarchik, D. T., & Taylor, W. W. (2002). Peer review in the classroom. BioScience, 52(9), 824–829.
Liu, N.-F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in
Higher Education, 11(3), 279–290.
Matthews, M. R. (2014). International handbook of research in history, philosophy and science teaching. Springer.
Matthews, M. (1994). Science teaching: The role of history and philosophy of science. New York: Routledge.
McCain, K. (2016). The Nature of Scientific Knowledge. Springer.
McComas, W. F. (2006). The nature of science in science education: Rationales and strategies. Springer Science & Business Media.
McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students’ construction of scien-
tific explanations by fading scaffolds in instructional materials. The Journal of the Learning Sciences,
15(2), 153–191.
National Academies of Sciences, Engineering, and Medicine. (2016). Science literacy: Concepts, contexts, and consequences. Washington, DC: National Academies Press.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer
review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122.
Öberg, G., & Campbell, A. (2019). Navigating the divide between scientific practice and science studies to support undergraduate teaching of epistemic knowledge. International Journal of Science Education, 41(2), 230–247.
Osborne, J. (2010). Arguing to learn in science: The role of collaborative, critical discourse. Science,
328(5977), 463–466.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited:
A review. Educational Research Review, 9, 129–144.
Reimann, N., & Sadler, I. (2017). Personal understanding of assessment and the link to assessment prac-
tice: The perspectives of higher education staff. Assessment & Evaluation in Higher Education, 42(5),
724–736.
Suiter, J. (2016). Post-truth politics. Political Insight, 7(3), 25–27.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational
Research, 68(3), 249–276.
Trautmann, N. M. (2009). Designing peer review for pedagogical success. Journal of College Science Teaching, March/April, 14–19. Retrieved from http://search.proquest.com/openview/32d73a18584d8d1d90e0176d5f89f9ad/1?pq-origsite=gscholar&cbl=49226
Van den Berg, I., Admiraal, W., & Pilot, A. (2006). Designing student peer assessment in higher education:
Analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135–147.
Vernon, J. L. (2017). Science in the post-truth era. American Scientist, 105(1), 2–3.
Walker, M. (2015). The quality of written peer feedback on undergraduates’ draft answers to an assignment,
and the use made of the feedback. Assessment & Evaluation in Higher Education, 40(2), 232–247.
Weaver, K., Morales, V., Nelson, M., Weaver, P., Toledo, A., & Godde, K. (2016). The benefits of peer
review and a multisemester capstone writing series on inquiry and analysis skills in an undergraduate
thesis. CBE-Life Sciences Education, 15(4), ar51.
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14.
Wilson, A., & Howitt, S. (2016). Developing critical being in an undergraduate science course. Studies in
Higher Education, 1–12.
Woodward, G. M. (2015). Peer review in the classroom: Is it beneficial? Literacy Learning: The Middle Years, 23(1), 40.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.