Nudging Open Science
Robson, S. G.1, Baum, M. A.2, Beaudry, J.3, Beitner, J.4, Brohmer, H.5, Chin, J. M.6, Jasko, K.7,
Kouros, C. D.8, Laukkonen, R. E.9, Moreau, D.10, Searston, R. A.11, Slagter, H. A.9, Steffens, N. K.1, Tangen, J. M.1
Email: [email protected]
Abstract
In this article, we provide a toolbox of resources and nudges for those who are interested in
advancing open scientific practice. Open Science encompasses a range of behaviours that aim to
increase the transparency of scientific research and how widely it is communicated. The paper is
divided into seven sections, each dealing with a different stakeholder in the world of research
(researchers, students, departments and faculties, universities, academic libraries, journals, and
funders). With two frameworks in mind — EAST and the Pyramid of Culture Change — we
describe the influences and incentives that sway behaviour for each of these stakeholders, we
outline changes that can foster Open Science, and suggest actions and resources for individuals
to nudge these changes. In isolation, a small shift in one person’s behaviour may appear to make
little difference, but when combined, these small shifts can lead to radical changes in culture. We
offer this toolbox to assist individuals and institutions in cultivating a more open research
culture.
Nudging Open Science
A spotlight on replication failures spanning many scientific fields has given rise to what
has been called a ‘reproducibility crisis’. Psychology, economics and medicine are just a few
disciplines where the reproducibility of findings has been criticised (Duvendack et al., 2017;
Open Science Collaboration, 2015; Prinz et al., 2011). Questionable research and publication
practices (QRPPs) are partly to blame for this crisis: a norm or bias toward publishing positive
results (the ‘file-drawer problem’; Rosenthal, 1979) incentivises researchers to be especially
liberal when analysing their data (e.g., ‘p-hacking’; Simmons et al., 2011) and generate
hypotheses after the results of an experiment are known as if they were expected from the outset
(‘HARKing’; Kerr, 1998). More than half of researchers in psychology, for instance, have
reportedly peeked at their study’s results before deciding whether to collect more
data, and more than one third have also claimed to have reported unexpected findings as though they
were expected from the outset (John et al., 2012). QRPPs provide fertile ground for further
irreproducibility and result in part from the culture and incentive structures in academia
(Edwards & Roy, 2017; Munafò et al., 2017; Nosek et al., 2012).
An Open Science movement has arisen in response to these issues (Vazire, 2018). The
umbrella term ‘Open Science’ encompasses a range of behaviours that aim to increase the
transparency of scientific research and how widely it is communicated (Fecher & Friesike,
2014). Reforms such as preregistration, publicly sharing data, open review processes, and open-
access publication are designed to make research easier to use, evaluate and reproduce (Corker,
2018; Pontika et al., 2015; Spellman et al., 2018). However, some individuals are not yet
committed to change because they may be unaware of Open Science or its benefits. Individuals
and institutions may be dissuaded by a perception that more transparent science is too laborious
or may not know how to change the way they currently do things. Additionally, the incentive
structures and publication practices are problematic because they tend to reward positive findings
more than negative or null findings. If certain kinds of findings are demanded, then Open
Science can be perceived as a barrier to these goals.
Whether or not researchers and institutions decide to adopt Open Science practices is
largely a behavioural question (Norris & O’Connor, 2019). Insights from psychology and other
behavioural sciences suggest that humans are far from purely rational decision makers. Instead,
people routinely make decisions through automatic, impulsive, and emotional processes — often
driven by social pressures and immediate cues in their environment (Kahneman, 2011; Tversky
& Kahneman, 1974). Everyday decisions are often shaped by surprisingly incidental or
opportunistic factors. When a person chooses what toothpaste to buy, they rarely make a fully
rational choice by weighing up the costs and benefits, or foreseeable utility. Instead, they tend to
opt for the toothpaste presented at eye level (Thaler & Sunstein, 2008), or the one that is the most
familiar or regularly advertised (Pliner, 1982; Zajonc, 1968).
The influence of psycho-social factors, however, is not limited to consumer decision-
making. It extends to decisions that researchers and institutions make when deciding how to
conduct, report, evaluate, publish or fund research. For example, ‘bad’ scientific practices
include QRPPs (e.g., p-hacking, HARKing, or selective reporting) whereas Open Science
includes behaviours such as preregistration, posting preprints, or publicly sharing data. Human
psychology is at the centre of every decision, whether it be buying toothpaste, running a
scientific study, or evaluating a research project.
Theories and findings from across the behavioural sciences can inform practically any
situation where a human decision-maker is involved. There are many examples where simple and
inexpensive changes in choice architecture (i.e., the way that choices are presented to consumers)
have significant impacts on behaviour. When confronted with a decision, for instance, people
tend to choose the option that requires the least effort. In most cases, the status quo — how
things currently are — is preferred over a more effortful change. Sweden, for example, enjoys far
higher rates of organ donation than Denmark not because the Swedes are more compassionate or
because organ donation is a core value of their nationhood, but simply because Sweden requires
people to opt out of donating their organs whereas people in Denmark must opt in (Johnson &
Goldstein, 2003; Davidai et al., 2012).
Highlighting a social norm — the accepted standard of behaviour of most people in a group
that one cares about — can also greatly influence how people act. For example, if people
discover that 90% of fellow group members (rather than 10%) put their rubbish in the bin, they
are more likely to do the same. Social norms messaging is a cost-effective strategy for a range of
behaviours, including conserving energy (Benartzi et al., 2017; Nolan et al., 2008), reducing
over-prescription of antibiotics among general practitioners (Hallsworth et al., 2016) and
increasing tax compliance (Hallsworth, 2014).
These are all examples of ‘nudging’ (Thaler & Sunstein, 2008). They are small, easy-to-
avoid changes to a person’s decision-making environment that alter behaviour in a predictable
way without forbidding any options or using economic incentives. Though we have only outlined
a fraction of the possible interventions that behavioural research can offer, the field has a great
deal of potential to help improve the uptake and maintenance of positive, open scientific
practices. There are at least two key frameworks for effective behaviour change (see Figure 1).
The first framework is the Pyramid of Culture Change (Nosek, 2019) and the second is the
EAST (Easy, Attractive, Social, and Timely) framework (UK Behavioural Insights Team, 2014).
Many of the underlying principles from the Pyramid of Culture Change and EAST are similar,
and both converge with well-established findings from the behavioural change and nudging
literature.
Figure 1. An illustration of the Pyramid of Culture Change and the EAST framework, and how the components of
each model relate to one another.
The first rung on the Pyramid of Culture Change prescribes that desired behaviours be
made possible by providing the necessary infrastructure, while the second rung highlights a need
for desired behaviours to be simple for people to engage in. These two rungs broadly resemble
the first principle of the EAST framework: desired behaviours ought to be made easy. The third
rung on the Pyramid endorses the power of communities in shaping normative behaviour and
instilling social connection. Similarly, the EAST framework emphasises the utility of social
norms, social commitments, and the power of networks to change behaviour. The fourth rung
underscores incentivised reward systems. In other words, desired behaviours ought to be
attractive, as per the ‘A’ in EAST. We will ignore the top rung of the pyramid because
‘requiring’ any behaviour runs counter to the nature of nudging. However, many nudges would
be effective if presented regularly or when people are most receptive (i.e., timely, as per the ‘T’
in EAST).
Inspired by these two frameworks, we offer concrete ways for individuals to nudge a
more open research culture. Each section in this paper is devoted to nudging a different node in
the research ecosystem: researchers, students, departments and faculties, universities, libraries,
journals, and funders. In doing so, we map the psychology of the target audience, outline their
role in the scientific community and suggest actions they can take to improve scientific research.
We then offer ways to bridge the intention-action gap — the void between what people say they
would like to do and what they actually do — using the principles outlined in the Pyramid of
Culture Change and EAST frameworks. We believe that incremental behavioural shifts across
the research ecosystem will greatly improve the way scientific research is conducted, promoted
and disseminated, and that action at each node is essential for bringing about these shifts. In
Table 1, we also provide a list of resources that readers can use to promote Open Science.
Table 1.
Open Science Resources for Nudging Various Stakeholders in the Research Ecosystem.

Resource | URL
Researchers
Grassroots movements | https://www.cos.io/blog/how-build-open-science-network-your-community
 | https://osf.io/t6m9v/
 | https://dunn.psych.ubc.ca/resources/
Student Initiative for Open Science | https://studentinitiativeopenscience.wordpress.com/
OSF Institutions | https://osf.io/institutions and https://www.cos.io/products/osf-institutions
Commitment statements for grant applications | http://www.researchtransparency.org/
Researchers
Preregistration involves specifying a study’s recruitment strategies, stopping rules, exclusion criteria, materials, procedures, predictions, data
analyses, and statistical tests upfront before data are collected (Gelman & Loken, 2013). Shifting
this work to occur earlier in the research process helps to ensure that otherwise unforeseen issues with
an experiment are identified and addressed early. Although some have questioned the utility of preregistration
(see Devezer et al., 2020; Szollosi et al., 2019), the practice can serve as a partial antidote to QRPPs such
as HARKing and p-hacking (Fischhoff & Beyth, 1975; Munafò et al., 2017; Nosek et al., 2018;
Wicherts et al., 2016). Preregistration can also partly address file drawer problems by improving
the visibility of otherwise undetectable null findings. There are now several online registries
dedicated to preregistration (see Table 1).
Registered reports take preregistration to the next level. In a Registered Report, one
details the research questions, methodology, and analysis plan and submits this for review prior
to collecting data. Once a Registered Report is accepted, the journal agrees to publish the study
regardless of the outcomes as long as the quality control criteria are met. Many journals now
accept Registered Reports as a publication format. Preregistration together with Registered
Reports can help to ensure that the scientific value of hypothesis testing is determined by the
quality of research questions and methodology rather than the findings themselves.
A preprint, on the other hand, is a version of a scientific paper that precedes a formal
peer-reviewed publication in a scholarly journal and is often made freely available to the public.
Papers that make it through the peer review process are then copyedited and typeset before
publication. There is no doubt that peer review is an integral aspect of science, but the process
can result in less nuanced findings and interpretations being published. For instance, negative or
‘messy’ findings that do not present a ‘nice story’ can be removed at the recommendation of a
reviewer. Preprints can offer a raw, unedited version of a study for people to consider. They
also enable authors to share openly accessible versions of their papers
before or during the peer review process, as well as following publication in an academic journal,
so that members of the public can access the paper. Sharing scientific manuscripts as preprints is
becoming increasingly popular for many researchers (Narock & Goldstein, 2019).
Nudging preregistration and preprints. There are many means to potentially improve
the transparency and rigour of scientific research (e.g., open code, data analysis, and materials),
but preregistration and preprints are relatively easy first steps. Devoting a minute or two to
preregistration and preprints during a research seminar or workshop can serve as a point of entry
into Open Science. It can also signal a descriptive norm to your colleagues as well as an
injunctive norm if the benefits of preregistration and preprints to the scientific community are
highlighted.
Simply knowing that other researchers preregister their studies and post preprints,
however, does not provide the opportunity for researchers to do it themselves. Organising Open
Science seminars and workshops for colleagues to attend may also be highly effective. So too
could inviting Open Science advocates to present at your department or faculty. These events
allow researchers to learn more about preregistration and preprints — what they entail, how they
can be done, and why they are important, for instance, because of cognitive biases and QRPPs
(e.g., the effects of p-hacking on the interpretability of findings).
Preregistration and preprints are becoming more and more commonplace (Fu & Hughey,
2019; Nosek & Lindsay, 2018). The need to connect can be leveraged at these events by
emphasising that these practices are increasingly normative. Noting the advantages of
preregistration and preprints can also incentivise researchers to engage in these behaviours. For
example, both practices help prevent ‘scooping’ and disputes over intellectual priority because they are associated
with time-stamped digital object identifiers (DOIs). Many journals also offer Open Science
badges that signal the use of practices such as preregistration (McKiernan et al., 2016). Papers
associated with preprints also tend to be cited more than papers with no accompanying preprint
(Fu & Hughey, 2019; Serghiou & Ioannidis, 2018), which may be particularly beneficial to early
career researchers (Allen & Mehler, 2019; Berg et al., 2016; Sarabipour et al., 2019). These
behaviours could also be framed as minor additions to a researcher’s workflow and basic
templates could be used to run through worked examples. Holding these workshops and seminars
at times when colleagues tend to begin new projects is likely to attract even greater
attention, and such times are ideal for researchers to form new habits. In addition, such events offer
the opportunity for researchers to raise reservations and lively debate could help dispel any
misconceptions people may have. In Tables 2 and 3 we outline some common misconceptions about
preregistration and preprinting, and ways to respond to these objections.
Table 2.
Responses to Misconceptions about Preregistration.

Misconception: “Preregistration stifles creativity. If I preregister then I cannot run any more exploratory analyses.”
Reality: Preregistration is not intended to discourage exploratory analyses or creativity. It is simply a mechanism to distinguish confirmatory from exploratory analysis (i.e., prediction from postdiction). It best fits more mature research questions or direct replications where theories and methods are well-developed, effect sizes of interest are well-informed, and where predictions can be likewise specific (McIntosh, 2017). However, it can also fit exploratory research if greater emphasis is placed on parameter estimation and hypothesis generation than on a priori hypotheses and p-values (McIntosh, 2017). As DeHaven (2017) stated it, a preregistration is “a plan, not a prison”.

Misconception: “I don’t need to preregister my work because it isn’t experimental.”
Reality: Open Science means engaging in behaviours that make research more transparent to readers and it can take many forms. Specifying how one will conduct a study and the methodology used can apply to many kinds of analysis, even when conducting secondary data analyses of already existing data (Haven & van Grootel, 2019; Mertens & Krypotos, 2019).

Misconception: “When I preregister, other labs will ‘scoop’ me.”
Reality: If one is worried that their ideas will get scooped, preregistrations can be embargoed until a specified date, until which the project is only viewable by contributors. Moreover, a preregistration is a time-stamped proof of one’s idea early in its conception. Thus, it is actually a precautionary measure against getting scooped.

Misconception: “I won’t get published because I will be constrained by the analysis plan I laid out in my preregistration.”
Reality: It is legitimate to deviate from one’s preregistered plans as long as these deviations are transparently reported. In fact, additional, unplanned analyses that improve the quality of the final analyses will only serve to illuminate the research process more clearly (Nelson, Simmons, & Simonsohn, 2017). Though the results may be messier than if one were to selectively report findings, studies that follow open practices tend to be more impactful in the long term (e.g., they attract more citations; McKiernan et al., 2016).

Misconception: “Preregistration will only slow down my output because it requires far more work up front.”
Reality: Preregistration does mean more work early in the research process, but it can reduce work later on as methodological, theoretical, and statistical flaws are likely to be noticed and ironed out earlier, and the analysis plan is already laid out. Preregistrations can also be as detailed or as short as one prefers, and there are many templates (e.g., on the OSF) that make the process easy. More detailed preregistrations can also be submitted to journals as Registered Reports, which are more likely to be accepted than traditional manuscripts (Chambers, 2019).
Table 3.
Responses to Misconceptions about Preprints.
Misconception: “Preprints are not peer-reviewed and therefore not scientifically sound and less trustworthy. So why bother?”
Reality: Though not peer-reviewed, preprints are often what is submitted to a journal for publication. It is possible for researchers to upload a half-baked manuscript in theory, but most are aware that once something is available on a preprint server, it remains there for all to see and evaluate. Manuscripts are usually accompanied by a permanent individual DOI as well. These mechanisms mean that researchers hold some responsibility for the preprints they post. Researchers also typically add revised versions of their manuscripts after peer review to preprint servers simply to make the work more accessible. Peer-reviewed articles associated with preprints have 36% more citations than those that are not (Fu & Hughey, 2019). Posting preprints also allows researchers to receive feedback, which can help authors improve their manuscript.

Misconception: “When I preprint my work, other labs will scoop me.”
Reality: Preprints receive a time-stamped DOI, which can prevent ideas from being scooped because a citable document is created before the peer-review process, which often takes many months. This DOI can be especially helpful when two labs are working independently on the same scientific problem.

Misconception: “If I preprint my work, I cannot publish it anymore.”
Reality: Many journals today accept preprints prior to submission and publication. Some journals (e.g., eLife) are also beginning to acknowledge the value of preprints to an even greater extent by offering preprint review. This process is similar to the typical review process, but reviews are published alongside the preprint on bioRxiv (eLife, 2020). Resources for determining a journal’s preprint policy can be found in Table 1.

Misconception: “Uploading preprints is difficult and takes so much time.”
Reality: A preprint is simply a manuscript-for-submission in PDF format and uploading a preprint (e.g., on PsyArXiv or bioRxiv) only takes a few minutes. Resources to help with preprinting can be found in Table 1.
offer further opportunity to share tips, links, templates, guides, and papers. Providing links and
materials on departmental or faculty websites will also reduce these barriers and regularly
prompt open behaviours among researchers. Early career researchers and graduate students in
particular may benefit most from these social initiatives in a rapidly changing research landscape
and because their habits are less entrenched.
A sound infrastructure for open practices makes it easier to create a normative culture. A
physical ‘Open Science wall’ in the department or an Open Science section in the departmental
newsletter or website can signal the widespread use of such practices among colleagues. Those
already interested in Open Science can post the latest information on preregistration and preprints
via flyers and infographics, and onlookers will be reminded regularly of these behaviours.
Early adopters can also build ‘open’ norms by highlighting preprints on their websites
and CVs next to peer-reviewed publications. Researchers can create forums and webpages
dedicated to video tutorials, animations, infographics, and other resources to encourage Open
Science. Fostering group cohesion in this way can inspire people to identify with the Open
Science movement. Common goals can also strengthen bonds among researchers and rally more
commitment. Some universities, for example, have taken part in the OSF’s Preregistration
Challenge, which ranked universities on the number of preregistrations within their institution.
Although the challenge has ended, institutions can build in-group solidarity through this sort of
collective action.
Moving from Quantity to Quality and from Data to Theory
Changing how we evaluate research is crucial to promoting open practices as well. The
current ‘publish or perish’ culture not only incentivises novel and positive results, but it can also
motivate unnecessary and costly data collection that may contribute little to scientific progress.
Seeking only results that are positive and novel may encourage QRPPs among otherwise well-
intentioned researchers. Similarly, overweighting the quantity of publications can motivate quick
and easy experiments that produce just enough new data for a new publication. For example, the
fastest way to accumulate publications would be to tweak experiments in minor ways and to
write up a separate article for each of these least publishable units (LPUs; also referred to as
‘salami slicing’).
Above we note that open practices such as preprinting can attract more citations, but
ideally the primary goal of research should not be to increase one’s citation count. If the metric
was quality, or ‘contribution to scientific progress’ (rather than the number of publications), then
researchers would be incentivised to devise careful experiments that aim to falsify or expand key
theories in the field, and to publish single papers with a strong emphasis on how the data
contributes to theory. Some have even argued that preregistration and statistics will not fix
science, because scientific progress critically depends on strong theories that map well onto
statistical models (Fiedler, 2017; Szollosi et al., 2019). Psychology, for example, may have
become too focused on theorising about specific phenomena that are often created by the
experiments themselves, rather than on more general human capacities (Hommel, 2019). While
psychology is a relatively young science, the focus on publication quantity may be partly
responsible for the lack of strong theory.
When there are excessive publications, grasping the core findings of any subfield
becomes increasingly challenging, which in turn discourages interdisciplinary work and creates
scientific siloing. While some scientific siloing is inevitable and perhaps even desirable,
collecting new data for new findings––for the sake of new data and new findings––is not a recipe
for scientific progress, but is currently an ingredient of career progress. In contrast, a culture that
values quality would inspire interdisciplinary work that requires more time and collaboration, but
ultimately moves the field forward.
The growing trend of open data also creates exciting opportunities for researchers to test
hypotheses and larger theoretical questions using highly powered integrated datasets from
multiple paradigms and diverse populations. Recent examples include The Confidence Database
comprising more than four million trials and 8,700 participants (Rahnev et al., 2020). Another is
a recent data-driven analysis of approximately one quarter of the fMRI literature, comprising
approximately 17,000 experiments (Taylor et al., 2015). Such projects require multi-lab, multi-
institution, and multidisciplinary efforts that would only be advantageous under a quality-over-
quantity value scheme.
Nudging Changes to Evaluation. It goes without saying that researchers need some way
to evaluate research, and that citation metrics or publication output can sometimes make for
useful heuristics. But given growing awareness of the drawbacks of citation metrics and
‘publications for the sake of publications’, changes to our schemas for evaluating research are
likely inevitable. Thus, one way we can start to encourage each other to focus on quality over
quantity and theory over data may be to acknowledge that our metrics will inevitably change.
Knowing this fact may encourage researchers to avoid the risk of investing time and resources to
fulfil goals that will not be incentivised in the future (i.e., anticipated regret; Zeelenberg, 1999).
On a more practical level, researchers may opt to hide their citation metrics where
possible. Withdrawing overt advertising of these metrics on personal websites may help to create
a new social norm. For example, the current norm may be to scroll past papers that have few
citations but are otherwise relevant. A preferable norm, if those metrics are not visible, would be
to briefly skim an abstract or a methods section before committing to a full read of a paper.
Given that scientists tend to have relatively good intuitions about which papers will replicate
(Camerer et al., 2018), a brief skim of the abstract or methods will likely provide a much better
estimate regarding the quality of the research than will the citation count. We can encourage this
norm simply by not advertising such metrics; for example, making it easier for the reader to click
the article than to seek out a citation count. With their publications, researchers can also list links
to preregistration documents and/or the open-source data and scripts of published work on their
websites and CVs, as another indication of study quality.
But what other heuristics can we develop for evaluating research? This is a domain ripe
for innovation and there are some straightforward alternatives. For example, there is already a
growing trend of advertising one’s two, five or ten ‘Best Papers’ in job applications. This is a
simple and effective way to avoid arbitrarily counting publications or citations. Another option is
to innovate metrics that quantify a paper’s theoretical contribution and methodological rigour.
Public peer review of papers on open-access platforms such as PsyArXiv or ResearchGate is one
mechanism through which experts in a field could collectively endorse different metrics of a
paper’s quality. Taken together, we can nudge other researchers by a) raising awareness that
metrics are changing, b) changing social norms by advertising the quality of one’s work (e.g.,
‘Ten Best Papers’) rather than citation count, and c) encouraging innovations in evaluating
researchers, such as public peer review. Additional guidelines for evaluating researchers can be
found in Table 1. But perhaps the simplest thing researchers can do is stop celebrating articles or
academics merely for their citation counts, and celebrate them instead for exciting, rigorous,
replicable, and theoretically motivated research.
Students
In contrast to more established academics, good habits can be ingrained in students early in their training. The ability to think
critically about research designs, statistical claims, and everyday events has long been touted as a
core learning outcome of psychology undergraduate programs in particular (Nisbett et al., 1987).
Indeed, psychology training has been shown to produce large gains in statistical and
methodological reasoning compared with other disciplines (Lehman et al., 1988). Examples of
confounding variables and how to deal with them (e.g., random assignment, random sampling,
control groups, blinding, etc.) are common features in undergraduate methods curricula as is
training in probability theory and statistical techniques for dealing with uncertainty.
As training is foundational to good scientific practice, many universities now teach these
concepts in their science degrees. Taking the further step of teaching open methods from the outset
means that these practices can carry on into students’ future research projects. Many students are intrinsically motivated to
learn, but others are looking to get a competitive edge in the job market. Teaching Open Science
tools can provide both an engaging learning environment and a valuable skill set.
Nudging Students
University teachers and lecturers are uniquely placed to foster open research principles
and practices in training the next generation of researchers and professionals. There are a number
of strategies teachers can use to nudge students to adopt open practices. First, they can model
open practices themselves. Authority figures play an important role in our social lives, including
establishing and perpetuating a culture that takes certain values seriously (Robbennolt &
Sternlight, 2013). If students observe their mentors thoughtfully integrate Open Science into their
own work, including preregistration, preprinting, and open data and code, then they may be more
likely to follow suit. Teachers could, for instance, signal their use of these practices, along with examples, when talking
about studies in class.
When coordinating a course, another effective nudge may be to embed Open Science
practices into assignments. It is common for students to be graded on how well they report
experiments. Why not assign them to write a mock preregistration before conducting a class
experiment? They could even post a mock preprint on an internal server for fellow classmates to
read and review each other’s work before submitting the revised version for grading.
Preregistration could also be incorporated into thesis and dissertation projects as part of their
research proposal. Academics might also consider designing courses devoted to Open Science
theory and tools. Educating students about the consequences of the current status quo in research,
18
such as the replication crisis in psychology, could also make students more receptive to open
practices (Chopik et al., 2018). Massive open online courses (MOOCs) can broaden the audience
of these teachings further still. With these learning experiences in place, open practices may
become the default as students venture into research careers and will be central to how they
evaluate research.
Nudging Open Science via practical tools and statistical skills can also instil a culture of
open practices among students. Learning to run analysis scripts at the click of a button and write
detailed commentary alongside code can make analyses easy to interpret and reproduce. In turn,
coding errors or inconsistencies in analyses are easier to discover. Moving toward open-source
statistical programming tools — such as Python and R (alongside R scripts, R Markdown, R
notebooks, the RStudio interface, and other software built on R) — could benefit students as
well because these platforms allow people to document code as they learn, not to mention the
customizability of script-based languages like R and Python relative to GUI-based
statistical software. Familiarising students with platforms like GitHub can also introduce them to
the ways in which researchers can share code.
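To make this concrete, the sketch below shows the kind of fully scripted, commented analysis a class might work through, written here in Python with pandas, SciPy, and matplotlib. The data file and column names (experiment_raw_data.csv, condition, score) are hypothetical placeholders, not drawn from any real study; the point is simply that every step, from exclusions to the final figure, is written down so that a classmate could rerun it unchanged.

# A minimal, hypothetical teaching example of a documented, reproducible analysis script.
# File and variable names are placeholders, not taken from any real study.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Load the raw data exactly as collected (no manual edits in a spreadsheet).
data = pd.read_csv("experiment_raw_data.csv")

# Apply the (preregistered) exclusion rule and record how many rows it removes.
n_before = len(data)
data = data.dropna(subset=["condition", "score"])
print(f"Excluded {n_before - len(data)} incomplete responses.")

# Descriptive statistics per condition, so readers can check the raw patterns.
print(data.groupby("condition")["score"].agg(["mean", "std", "count"]))

# Confirmatory test specified in advance: independent-samples t-test.
control = data.loc[data["condition"] == "control", "score"]
treatment = data.loc[data["condition"] == "treatment", "score"]
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Save the figure to a file rather than pasting output by hand.
data.boxplot(column="score", by="condition")
plt.savefig("scores_by_condition.png", dpi=300)

Because the exclusions, test, and figure are all generated by the same script, any change to the data or analysis is visible in version control rather than buried in a point-and-click history.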
Wrangling data, using sensible variable names, and producing attractive visualisations are
other small nudges that go a long way toward making data analyses more interpretable. Practical
challenges during class and running simulations on existing datasets are desirable difficulties that
can greatly improve student learning. Using these open tools early in research training will
establish them as the status quo and working in groups can ensure that students are supported
socially as they learn these skills.
Departments and Faculties
Grassroots initiatives, such as those described thus far, will not in isolation persuade most
researchers to adopt open practices. Researchers may be well-placed to organize seminars and
online tools, but too often they are busy and under-resourced. Some researchers may find themselves
in a department where everyone is committed to promoting Open Science. Others, however, may
be surrounded by colleagues who are wary of the movement or may be unmotivated to make
significant changes. On the other hand, initiatives organized at the department or faculty level —
for example, by deans, department heads, and academic committees — are more likely to be
well-organised and well-resourced. Action at many nodes of the research ecosystem is more
likely to result in culture change and heads of departments or overseeing committees can greatly
change how academics themselves conduct research.
Regularly curated resources that a faculty provides can make adopting open practices
easier. Useful materials can be regularly sent to researchers via mailing lists. An Open Science
tab could be added to a department’s website where researchers can find FAQs and other support
services (e.g., preregistration guides, how to responsibly share sensitive data). The department
could even fund people to organise Open Science initiatives and workshops in much the same
way that departments have dedicated statistics advisors. Entire graduate or undergraduate courses
with a significant focus on Open Science can also be implemented (e.g., Sarafoglou et al., 2019).
And departments could ask researchers to provide (mock) preregistrations as part of ethical
approval applications (as is the default at the University of Amsterdam’s Psychology
Department).
Symbolic actions can be powerful in creating change as well. A public pre-commitment
can help align one’s future behaviour with their desired goals (Dolan et al., 2010). Commitments
or declarations explicitly communicate the values, norms, and aspirations of an organization.
They affirm the aspirations of those who already share Open Science values and can be a catalyst
for change in departments and faculties that are yet to make significant changes. Departments
at Southern Methodist University, Göttingen University, and Utrecht University are just
some of those that have openly voiced a commitment to open practices (see Table 1).
Departments or faculties could also signal open values by presenting Open Science awards
during a festive meeting attended by all staff.
The way candidates are evaluated for hire and promotion in faculties can also be modified to weight
open practices more heavily. Such changes are likely to encourage researchers to reconsider their
practices and can increase the perception that open practices are not only normative but also
valued (Nosek, 2019). Job listings and interview questions could ask for evidence of open
practices or for the candidate to illustrate commitment to transparent research in their cover
letter. It may also be fruitful to ask candidates to provide an annotated CV detailing whether their
articles are preregistered or openly accessible, what datasets they have openly shared, or
to indicate only five or ten of their ‘best articles’. Gernsbacher (2018) also provides some specific
recommendations for how to reward open practices.
Theses and dissertations could also be evaluated not just on novelty and impact (as they
now often still are), but also according to how well they demonstrate a critical attitude towards
one’s research findings and the steps taken to avoid QRPPs. Research students could describe
how they have made their data, analysis, and materials publicly available, or justify why they
have not. A greater focus on these aspects of research may create reward structures that enable
open practices to flourish.
With this said, the majority of middle and senior academics are on permanent or tenured
contracts (Coates et al., 2009), so relying on new hires and postgraduate students to shift the tide
is an incomplete solution. It is critical to consider open practices and research quality in tenure
and/or promotion decisions as well (League of European Research Universities, 2018; Munafò,
2019). Identifying specific nudges with regard to tenure and promotion criteria is difficult
because there is considerable variety in how these decisions are made and by whom they are
made. A number of Dutch universities and funding agencies have, however, revised their
guidelines to have a broader, holistic approach rather than a metric-based approach
(ScienceGuide, 2019). When distributing internal research funding and conducting yearly
performance reviews, departments could recognise outputs such as open data, preregistration,
open materials, and open-source code more positively (rather than neutrally), and committees
could look beyond journal impact factors when assessing the quality of publications. Review
processes could also reconsider their preference for sole- or few-author publications instead of
large-scale, multi-author collaborations (e.g., Brock, 2020).
Nudging Departments and Faculties
How can we ensure that the initiatives above are adopted? Nudges can come in the form
of emails or conversations involving those in positions of authority (e.g., the deans or department
heads). Beyond those few who hold administrative positions, groups with decision power (e.g.,
academic committees) can also lead to declarations of commitment to open practices, promote
platforms for open resources, and encourage changes to evaluation that value Open Science.
Resistance to change at the departmental or faculty level may have various causes, so it is
crucial to establish resources and information for these initiatives. For example, sample
declarations can be circulated throughout the workplace and to departmental heads or academic
committees. The normative nature of Open Science ought to be highlighted in meetings and in
emails as well, including the sharp upward trend in open practices (Christensen et al., 2019).
Faculties and departments are motivated by rankings, funding, and student intake. It is therefore
important to highlight the benefits (or potential losses) that foregoing Open Science initiatives
can have. As journals and funding bodies call for more open research, failing to keep up may
mean lower research rankings, less desirable prospects for students, and less funding (McKiernan
et al., 2016).
Departments and faculties tend to be responsible for hiring and promoting researchers.
There are strong incentives that can be leveraged to ensure Open Science practices are prioritized
when these decisions are made. Job advertisements typically detail the requisite credentials and
preferred skills and expertise. Researchers can nudge departmental chairs (or equivalent) to
include explicit statements about open practices in the advertisements they post. In line with the
‘make it easy’ principle, researchers can provide committees with a statement rather than leaving
it to the committee to invest time and resources to do this (see useful suggestions offered by
Schönbrodt et al., 2019).
Even if advertisements do not explicitly mention Open Science practices, researchers can
determine a job candidate’s practices by looking at their published work (e.g., open data,
preregistration) or at their presence on online platforms such as OSF, Figshare or GitHub. If not
provided as part of the application, this information can be shared with the hiring committee in
evaluating a candidate’s engagement with Open Science. It may be that some candidates
(particularly those who recently finished a PhD or postdoctoral position) have been discouraged
from engaging in open practices by their supervisors (e.g., Krishna & Peter, 2018); thus, it is
important to give candidates the opportunity to demonstrate and discuss their attitudes toward
Open Science.
Regardless of whether they are formal members of a selection committee or informal
participants, those involved in the interview process can ask about a candidate’s practices.
Similarly, serving on tenure and promotion committees can be a good platform from which to
advocate for Open Science. Committee members can draw attention to research practices that
improve (or threaten) research integrity, offer open evaluation criteria, and push to reward
researchers who engage in Open Science. For example, the co-founder of the UK
Reproducibility Network reported that his institution now requires applicants to demonstrate
open data practices in order to be eligible for promotion (Munafò, 2019).
Universities
For maximum impact, these task forces ought to be university-wide and endorsed by the administration (see
Munafò, 2019).
OSF Institutions. Many universities are beginning to adopt OSF Institutions, which is a
free scholarly web tool that aims to enhance transparency, foster collaboration and increase the
visibility of research outputs at the institutional level (see Table 1). OSF Institutions has single
sign-on authentication, making it easy for users to incorporate the OSF into their existing
research workflow. There is also an option for universities to recommend the OSF as a platform
on which to manage research projects and make materials and data available to others.
The DORA. More than 1,800 organisations and over 15,000 individuals across the world
are now signatories to the San Francisco Declaration on Research Assessment (DORA, 2012).
The DORA makes 18 recommendations to revise how the academic community evaluates
researchers and research outputs, with specific recommendations for researchers,
organisations that supply metrics, publishers, institutions, and funding agencies. Many of these
stakeholders have revised their research assessment guidelines in line with DORA.
One potential challenge is the conflict between the DORA recommendations and the
research assessment frameworks used in different countries. For example, some universities have
financial incentive structures entirely geared to reward research that is published in high impact
journals. The financial reward for publishing a single paper in Science or Nature in China’s top
100 universities, for instance, has reached as high as 165,000 USD (Quan et al., 2017). Financial
incentive structures of this kind are employed in many countries (Abritis & McCook, 2017). A cash-
per-publication reward policy can fuel QRPPs, emphasizes quantity over quality, and is simply
not conducive to truth-seeking. The conflict between how things currently are and the changes
outlined in the DORA will need to be addressed. However, we foresee that, as more individuals
and organisations sign the DORA, institutional frameworks will more closely align with its
recommendations.
Nudging Universities
Those in (recognised) positions of power within the university hierarchy need to approve
of the initiatives suggested above. Researchers can draw the administration’s attention to
emerging Open Science issues and push for new initiatives. One effective nudge may be a well-
organised face-to-face pitch. Like-minded academics can give a 10-minute Open Science
presentation to the Vice-Chancellor or Pro-Vice-Chancellor (or equivalent) advocating for Open
Science initiatives. An effective pitch might explain the problem (i.e., the replication crisis),
demonstrate its insidiousness across many fields, discuss solutions (e.g., preregistration, open
data, etc.), and highlight efforts that other universities have already undertaken to combat these
issues, including the adoption of OSF Institutions, the DORA, and task forces/officers. A pitch of
this kind would highlight that a problem of real concern exists and suggest readily
implementable solutions.
The benefits that result from the proposed initiatives could be highlighted as well. There
is potential for the university to become a leader in the emerging Open Science space.
Universities that adopt official statements on open practices (e.g., UK Reproducibility Network)
signal that they are responding to the replication crisis and that they are committed to producing
high quality, reproducible research rather than solely focused on impact factors.
Following the pitch, it is important to check in on the progress an administrator has made
on said issues, and to suggest easy and actionable steps that they can take next. Any change at
an institutional level can have a snowball effect. Once academics successfully advocate for
change in one area, there can be considerable momentum when leading change in other areas. As
such, rather than waiting for the ‘perfect’ opportunity, there is value in taking any steps (no
matter how small) to initiate change.
Academic Libraries
promoting openness (Ogungbeni et al., 2018). However, administrative and financial status quos,
and a drive to satisfy customers (students and staff), may be preventing widespread changes.
Libraries can promote open practices by establishing guidelines, training and roadmaps
within their institution, which researchers can use to make their work more transparent. Libraries
can also subsidize article publication charges at their institution and promote universal access to
research that the institution generates. The Compact for Open-Access Publishing Equity (COPE)
has pioneered this approach and several institutions have followed suit (Eckman & Weil, 2010).
These changes can help ensure that publications at an institution are freely available online.
Furthermore, libraries can influence how FAIR (findable, accessible, interoperable, and
reusable) the datasets in their repositories are. They can also adopt research data management
strategies so that data are recorded, preserved and widely accessible (Christensen-Dalsgaard et
al., 2012; Cox et al., 2017; Tzanova, 2019; Pinfield et al., 2014). The linking of data and code to
the publications they are associated with can also be facilitated by the right online infrastructure.
Universities are by definition exclusionary; they offer limited positions to prospective
students. As curators of information, however, academic libraries can make educational
resources openly accessible (Open Educational Resources) to other researchers and the broader
public. The recent growth of MOOCs offered by prestigious universities has illustrated the
good that open resources can do. Libraries can move forward in this space by creating and
providing access to educational resources such as textbooks, lecture notes, exams, videos,
presentation slides and other media.
Nudging Libraries
Individuals can nudge academic libraries primarily by liaising and communicating with
librarians either in person or over email. As a first point of contact, researchers can link
preregistrations, preprints, or online datasets related to a study and ask the library how to
cross-link these resources in their online repository. If more researchers do so, libraries may begin to
link these supplementary materials as the default course of action. Researchers could also offer
to coordinate data management workshops to highlight the importance of recording and
generating interpretable data and code for others to use.
Suggesting an open access funding initiative for researchers within the institution may be
another effective nudge. One could pitch this idea and present it along with a survey of
like-minded academics. It is critical to frame any change as an injunctive norm. If the goal of
libraries is to disseminate knowledge and information widely, then appealing to the public good
that results from the initiative would help invoke a sense of moral or civic duty and in turn
motivate change. Providing links to frameworks that other universities have used can help
libraries to easily model their own initiative (see Table 1).
Journals
2016). These badges are also likely to be attractive to editors because they strengthen the
reliability and trustworthiness of the research that is published in their journal.
The Power of Defaults. Changing defaults is another powerful option that editors have at
their disposal if they want to promote Open Science. Editors can set the default option at their
journal to favour open practices: authors must opt out of linking their study to a preregistration
and/or opt out of making their data and materials openly available. An evaluation of the
mandatory open data policy implemented at the journal Cognition revealed that policies of this
kind enhanced the prevalence of data availability statements and the reusability of the data that
they made available (Hardwicke et al., 2018). Prior to the introduction of the policy, a minority
of researchers made their data available (and a minority of these data were reusable), but the vast
majority of researchers did so after its introduction.
Widening Publication Formats. Journals can also easily restructure their editorial and
publication processes to support Open Science. A
simple (and increasingly common) change is offering a variety of publication (submission)
formats that emphasize the process rather than the outcome of research. These new formats may
include Registered Reports (Chambers, 2013; Nosek & Lakens, 2014) or manuscripts that are
sent out for peer review where the results of the study are blinded (Grand et al., 2018).
Manuscript Evaluation. Editors could also invite specialized reviewers onto their editorial
board who evaluate and encourage open practices and reproducible methods and statistics
(Hardwicke et al., 2019). Similarly, when providing evaluation criteria for reviewers, editors
could offer explicit evaluation criteria that emphasise open practices. Journals could even
implement an open peer-review process. Open peer review leads to more tactful and constructive
feedback, and clearer communication between reviewers, editors and authors about how and why a
manuscript may have been accepted or rejected, or why certain changes are necessary (Ross-
Hellauer, 2017).
Nudging Journals
Editors can draw on a range of simple, social, and attractive means and resources to
encourage more transparent science. Even when some editorial teams may see drawbacks or
have reservations about promoting open practices (Hopp & Hoover, 2019), researchers can
nudge journals to make changes. For one, researchers can promote these changes when serving
as editors and reviewers. Researchers are often selected by editors to review papers or are
members of societies with which journals are associated. These are positions from which well-
worded emails can nudge journals to alter the way they operate. There are already how-to
guides, infographics, and other tools available online for many of these initiatives (see Table 1).
These can be linked explicitly in an email so that editors are one click away from the online
infrastructure needed to implement such changes. One could also point to the normative nature
of these practices, highlighting that currently (in early 2021) more than 75 journals have
implemented Open Science badges, more than 250 have registered reports as a publication
format, and more than 1000 have implemented TOP guidelines (https://www.cos.io/).
Outlining what journals can potentially gain from these changes will also make these
initiatives more appealing. Adopting TOP guidelines can increase the quality of submitted
publications by improving the standards of reporting and helping to detect errors before
publication as well as reducing the time that authors and reviewers spend communicating (Nosek
et al., 2015). As previously mentioned, open-access publications also receive more citations than
closed publications (Hajjem et al., 2006; Li et al., 2018; Davis, Lewenstein et al., 2008; Piwowar
et al., 2018). Registered Reports are also less prone to QRPPs than papers published through
conventional means, and Open Science badges promote data sharing (Kidwell et al., 2016).
Benefits such as these are highly attractive to journals and ought to be included in any pitch.
Funders
Research funding is limited and difficult to obtain. Grant applications are typically
written by researchers before being sent to specialists in the field for review. A committee
assesses these applications and then ranks them so that they can prioritise the available funds.
The judgements of reviewers in this anonymous review process are often highly inconsistent
(Mayo et al., 2006) and favour mainstream scientific enterprises. Anonymous review processes
also tend to favour researchers with more citations, placing them in a better position to publish
further research (i.e., the Matthew effect; Bol et al., 2018). In fact, researchers with the highest
citation counts (top 20%) received over 60% of the funding from the National Science
Foundation (Drutman, 2012). Awarding funding to more established, senior academics might be
stifling the uptake of Open Science because they are perhaps less responsive to these reforms
(Rowlands & Nicholas, 2006).
Alternative funding review processes have been proposed, including innovation lotteries,
open review processes and crowd-funded research (Eisfeld-Reschke & Wenzlaff, 2013; Giles,
2012). However, as researchers, it may be prudent to focus on already existing funding
pathways. Some funding bodies have adopted policies that require funded articles to be made
freely available and they may also pay reasonable open access charges (see the Canadian
Institutes of Health Research, the Social Sciences and Humanities Research Council of Canada,
the National Institutes of Health, Research Councils UK, and cOALition S).
Funding bodies, such as the Netherlands Organisation for Scientific Research (NWO) and the
European Research Council (ERC), have also embraced ‘FAIR data principles’ (Wilkinson et al.,
2016). In fact, the ERC has proposed that funded projects need to have openly accessible
research data following publication (European Research Council, 2020). Further, both the ERC
and NWO expect grant applicants to specify a research data management plan that adheres to the
FAIR principles. The Australian Research Council (ARC) and National Health and Medical
Research Council (NHMRC) funding bodies in Australia also have open access policies, where
outputs arising from funded research must be made openly accessible, but these policies are
largely ineffectual (Holcombe & Todd, 2013).
Returning to the review process, funders can more deliberately evaluate investigators on
the basis of the quality, rather than the quantity, of their scientific contributions. They can also
evaluate proposals on the basis of an investigator’s plan to make their research open access, to
preregister and preprint their studies, or to share data on a public repository. The NWO, for
instance, now adheres to DORA when evaluating the grant applications of scientists (NWO,
2019). It has introduced a narrative CV format, which consists of two parts: a narrative academic
profile and a list of no more than five or ten key outputs (depending on seniority and the grant
itself). Researchers are also not allowed to mention h-indices, journal impact factors, or any
other metric that refers to the journal, publisher or publication platform rather than to the
individual output item.
Nudging Funders
Researchers can help nudge Open Science among funders and grant reviewers in several
ways. First, they can include prepared statements on open practices when they write grant
applications. Funders often ask for broader impact statements, and a history of translating
research into action via open practices can be viewed favourably (e.g., National Institutes of
Health, 2011).
Researchers can leverage their track record of open practices by outlining that funded research
will be made available as a preprint and the data will be openly available. Grant reviewers may
also appreciate intentions to provide Open Science training to postgraduate and post-doctoral
researchers. We recommend highlighting past instances of these actions and including these
plans in the project’s budget. At first, such statements might make a particular project stand out
as rigorous and transparent, but as more applications include them, they will become the
expectation rather than the exception.
Grant applications commonly undergo editing at a university’s research office before
they are sent off to the funding body for review. These offices often recommend that traditional
metrics such as h-indices be emphasised. However, researchers ought to highlight less
traditional metrics of research quality, such as the number of open access publications or public
projects on the OSF, or to underscore their ‘Ten Best Papers’. Changing the norms and
highlighting the appeal of these alternative metrics may change what information is ultimately
considered important when grants are evaluated.
Researchers might also consider using open practices and open tools when collaborating
with partnered industry funders. In applied fields, for instance, there is a great deal of
communication between researchers and industry collaborators. Searston et al. (2019) have
outlined ways in which the OSF and other tools can keep partners involved and updated at every
stage of a research project. Transparency while working with industry improves the quality of
the end product and establishes a norm of open collaboration between industry and the academy.
Conclusions
In each section of this paper, we described the roles and goals of the key stakeholders in
the research ecosystem, along with changes that can promote open scientific practices. Using two
frameworks — EAST and the Pyramid of Culture Change — we also offer ways in which
individuals and small groups can nudge these stakeholders to adopt change. It is true that
progress in this space may be ushered in from the top down by larger institutions and
organisations, or via conventional economic solutions (Loewenstein & Chater, 2017). However,
without pressure from researchers themselves there will be little demand driving such change.
Significant improvements in infrastructure, norms and reward structures are needed before policy
change is even possible or seen as necessary. The behaviours of the various agents in the
scientific community ultimately determine the quality of the research that is generated and
disseminated. We hope the resources and nudges we offer will provide a valuable toolbox for
those interested in improving scientific practices.
Contributions
Here we list the authors who contributed to each section: Introduction (SGR), Researchers:
Preregistration (MAB, CDK & DM), Researchers: Preprints (JB & HB), Researchers: Data to
theory and quantity to quality (REL & HAS), Students (RAS), Departments/faculties (JMC &
KJ), Universities (JLB & SGR), Journals (NKS), Libraries (SGR) and Funders (SGR & HAS).
Additionally, MAB, JB, JLB, KJ, CDK, SGR, HAS, NKS and JMT contributed to general
editing. The idea for this paper was conceived by JMT during the Society for the Improvement
of Psychological Science (SIPS) 2019 meeting in Rotterdam.
Acknowledgements
This research was supported by grant No. LP170100086 from the Australian Research Council to
JMT and RAS, and by a grant from the National Science Center (2015/19/B/HS6/01253) to KJ.
Conflict of interest
The authors declare no conflict of interest.
References
Abele-Brehm, A. E., Gollwitzer, M., Steinberg, U., & Schönbrodt, F. D. (2019). Attitudes toward
open science and public data sharing. Social Psychology, 50, 252-260.
https://doi.org/10.1027/1864-9335/a000384.
Abritis, A., & McCook, A., (2017, August 10). Cash bonuses for peer-reviewed papers go global.
Science Magazine. doi:10.1126/science.aan7214
Allen, C., & Mehler, D. M. (2019). Open Science challenges, benefits and tips in early career and
beyond. PLoS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246
Benartzi, S., Beshears, J., Milkman, K. L., Sunstein, C. R., Thaler, R. H., Shankar, M., ... & Galing,
S. (2017). Should governments invest more in nudging?. Psychological science, 28(8),
1041-1055. https://doi.org/10.1177/0956797617702501
Berg, J. M., Bhalla, N., Bourne, P. E., Chalfie, M., Drubin, D. G., Fraser, J. S., ... & King, S.
(2016). Preprints for the life sciences. Science, 352(6288), 899-901.
doi.org/10.1126/science.aaf9133
Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings
of the National Academy of Sciences, 115(19), 4887-4890.
https://doi.org/10.1073/pnas.1719557115
Brock, J. (2020, January 6). Psychology looks to physics to solve replication crisis. Nature Index.
https://www.natureindex.com/news-blog/psychology-looks-physics-to-solve-replication-
crisis
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., ... & Altmejd,
A. (2018). Evaluating the replicability of social science experiments in Nature and Science
between 2010 and 2015. Nature Human Behaviour, 2(9), 637-644.
https://doi.org/10.1038/s41562-018-0399-z
Center for Open Science. “More About the Preregistration Challenge.” The Center for Open
Science. Accessed October 31, 2019. https://cos.io/our-services/prereg-more-
information/?_ga=2.99617502.1140806084.1569228081-2063469120.1563260321.
Center for Open Science. “OSF - Preregistration Challenge: Plan, Test, Discover Wiki.” OSF.
Accessed October 31, 2019.
https://osf.io/x5w7h/wiki/06%20LeaderBoard/?_ga=2.88934363.1140806084.1569228081-
2063469120.1563260321&view.
Center for Open Science. “OSF - Templates of OSF Registration Forms Wiki.” OSF. Accessed
October 31, 2019.
https://osf.io/zab38/wiki/home/?_ga=2.131598031.1140806084.1569228081-
2063469120.1563260321&view.
Center for Open Science. “Preregistration.” The Center for Open Science. Accessed October 31,
2019. https://cos.io/prereg/.
Center for Open Science. “Training Services.” The Center for Open Science. Accessed October 31,
2019. https://cos.io/our-services/training-services/.
Chambers, C. D. (2013). Registered reports: a new publishing initiative at Cortex. Cortex, 49(3),
609-610. doi: 10.1016/j.cortex.2012.12.016.
Chambers, C. D. (2019). What’s next for registered reports? Nature, 573, 187-189.
Chopik, W. J., Bremner, R. H., Defever, A. M., & Keller, V. N. (2018). How (and whether) to teach
undergraduates about the replication crisis in psychological science. Teaching of
Psychology, 45(2), 158-163. https://doi.org/10.1177/0098628318762900
Christensen, G., Wang, Z., Levy Paluck, E., Swanson, N., Birke, D., Miguel, E., & Littman, R.
(2020). Open science practices are on the rise: The State of Social Science (3S) Survey.
Working Paper Series No. WPS-106. Center for Effective Global Action. University of
California, Berkeley.
Christensen-Dalsgaard, B., van den Berg, M., Grim, R., Horstmann, W., Jansen, D., Pollard, T., &
Roos, A. (2012). Ten recommendations for libraries to get started with research data
management. Final report of the LIBER working group on E-Science/Research Data
Management.
Coates, H., Dobson, I., Edwards, D., Friedman, T., Goedegebuure, L., & Meek, L. (2009). The
attractiveness of the Australian academic profession: A comparative analysis. LH Martin
Institute, University of Melbourne & Australian Council for Educational Research &
Educational Policy Institute.
Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2020). The case for formal
methodology in scientific reform. biorxiv. https://doi.org/10.1101/2020.04.26.048306
eLife (2020, May 13) New from eLife: Invitation to submit to Preprint Review.
eLife. Retrieved from: https://elifesciences.org/inside-elife/d0c5d114/new-from-elife-invitation-to-
submit-to-preprint-review
Corker, K. (2018). Open science is a behavior. [Blog post]. Center for Open Science. Retrieved
from https://cos.io/blog/open-science-is-a-behavior/
Cox, A. M., Kennan, M. A., Lyon, L., & Pinfield, S. (2017). Developments in research data
management in academic libraries: Towards an understanding of research data service
maturity. Journal of the Association for Information Science and Technology, 68(9), 2182-
2200. https://doi.org/10.1002/asi.23781
Davidai, S., Gilovich, T., & Ross, L. D. (2012). The meaning of default options for potential organ
donors. Proceedings of the National Academy of Sciences, 109(38), 15201-15205.
10.1073/pnas.1211695109
Davis, P. M., Lewenstein, B. V., Simon, D. H., Booth, J. G., & Connolly, M. J. (2008). Open access
publishing, article downloads, and citations: randomised controlled trial. BMJ, 337.
https://doi.org/10.1136/bmj.a568
DeHaven, A. (2017). Preregistration: A Plan, Not a Prison. Available online at:
https://cos.io/blog/preregistration-plan-not-prison/ (Accessed March 3, 2020)
Dolan, Paul, Hallsworth, Michael, Halpern, David, King, Dominic and Vlaev, Ivo (2010)
MINDSPACE: influencing behaviour for public policy. Institute of Government, London,
UK.
DORA (2012). Declaration on Research Assessment. Retrieved from www.ascb.org/dora/
Drutman, L. (2012). How the NSF allocates billions of federal dollars to top universities. Sunlight
foundation blog. Available at: http://sunlightfoundation.com/blog/2012/09/13/nsf-funding/
Duvendack, M., Palmer-Jones, R., & Reed, W. R. (2017). What Is Meant by "Replication" and Why
Does It Encounter Resistance in Economics? American Economic Review, 107 (5): 46-51.
Doi: 10.1257/aer.p20171031
Eckman, C. D., & Weil, B. T. (2010). Institutional open access funds: Now is the time. PLoS
Biology, 8(5), e1000375. https://doi.org/10.1371/journal.pbio.1000375
Edwards, M. A. & Roy, S. (2017). Maintaining Scientific Integrity in a Climate of Perverse
Incentives and Hypercompetition. Environmental Engineering Science, 34(1) 51-61.
http://doi.org/10.1089/ees.2016.0223.
Eisfeld-Reschke, J., Herb, U., & Wenzlaff, K. (2014). Research funding in open science. In S.
Bartling & S. Friesike (Eds.), Opening Science. Springer, Cham.
https://doi.org/10.1007/978-3-319-00026-8_16
European Data Portal (2020) The Economic Impact of Open Data Opportunities for value creation
in Europe. European Commission. Retrieved from:
https://www.europeandataportal.eu/sites/default/files/the-economic-impact-of-open-data.pdf
European Research Council Scientific Council (2019). Open research data and data management
plans. European Commission.
https://erc.europa.eu/sites/default/files/document/file/ERC_info_document-
Open_Research_Data_and_Data_Management_Plans.pdf
European Research Council (2020). Guidelines on implementation of open access to scientific
publications and research data. European Commission.
https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/oa-pilot/h2020-hi-erc-oa-
guide_en.pdf
Fecher, B., & Friesike, S. (2014). Open science: one term, five schools of thought. In Opening
science (pp. 17-47). Springer, Cham. https://doi.org/10.1007/978-3-319-00026-8_2
Fiedler, K. (2017). What constitutes strong psychological science? The (neglected) role of
diagnosticity and a priori theorizing. Perspectives on Psychological Science, 12(1), 46-61.
https://doi.org/10.1177/1745691616654458
Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once —
future things. Organizational Behavior and Human Performance, 13(1), 1-16.
https://doi.org/10.1016/0030-5073(75)90002-1
Fröhlich, G. (2003). Anonyme Kritik: Peer review auf dem Prüfstand der Wissenschaftsforschung.
Medizin—Bibliothek—Information, 3(2), 33–39.
Fu, D. Y., & Hughey, J. J. (2019). Meta-Research: Releasing a preprint is associated with more
attention and citations for the peer-reviewed article. eLife, 8, e52646.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a
problem, even when there is no “fishing expedition” or “p-hacking” and the research
hypothesis was posited ahead of time. Department of Statistics, Columbia University.
Giles, J. (2012). Finding philanthropy: Like it? Pay for it. Nature, 481, 252–253. doi:
10.1038/481252a
Grand, J. A., Rogelberg, S. G., Banks, G. C., Landis, R. S., & Tonidandel, S. (2018). From outcome
to process focus: Fostering a more robust psychological science through registered reports
and results-blind reviewing. Perspectives on Psychological Science, 13(4), 448-456.
https://doi.org/10.1177/1745691618767883
Gernsbacher, M. A. (2018). Rewarding research transparency. Trends in cognitive sciences, 22(11),
953-956. 10.1016/j.tics.2018.07.002
Hajjem, C., Harnad, S., & Gingras, Y. (2006). Ten-year cross-disciplinary comparison of the
growth of open access and how it increases research citation impact. arXiv preprint
cs/0606079.
Hallsworth, M. (2014). The use of field experiments to increase tax compliance. Oxford Review of
Economic Policy, 30(4), 658-679. https://www.jstor.org/stable/90023228
Hallsworth, M., Chadborn, T., Sallis, A., Sanders, M., Berry, D., Greaves, F., ... & Davies, S. C.
(2016). Provision of social norm feedback to high prescribers of antibiotics in general
practice: a pragmatic national randomised controlled trial. The Lancet, 387(10029), 1743-
1752. https://doi.org/10.1016/S0140-6736(16)00215-4
Hardwicke, T. E., Frank, M. C., Vazire, S., & Goodman, S. N. (2019). Should Psychology Journals
Adopt Specialized Statistical Review?. Advances in Methods and Practices in Psychological
Science. https://doi.org/10.1177/2515245919858428
Hardwicke, T. E., Mathur, M., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., . . .
Tessler, M. H. (2018). Data availability, reusability, and analytic reproducibility: Evaluating
the impact of a mandatory open data policy at the journal Cognition. Royal Society Open
Science, 5. 180448. https://doi.org/10.1098/rsos.180448
Harvey, L. (ed.) (2008). Rankings of higher education institutions: A critical review. Quality in
Higher Education, 14 (3), 187–207. https://doi.org/10.1080/13538320802507711
Haven, L. T., & Van Grootel, D. L. (2019). Preregistering qualitative research. Accountability in
Research, 26(3), 229-244. https://doi.org/10.1080/08989621.2019.1580147
Holcombe, A. O., & Todd, M. (2013, January 9). Free for all: ARC-funded research now open to
the public. The Conversation. https://theconversation.com/free-for-all-arc-funded-research-
now-open-to-the-public-11497
Hommel, B. (2019). Pseudo‐mechanistic explanations in psychology and cognitive neuroscience.
Topics in Cognitive Science, 12(4), 1294-1305. https://doi.org/10.1111/tops.12448
Hopp, C., & Hoover, G. A. (2019). What Crisis? Management Researchers’ Experiences with and
Views of Scholarly Misconduct. Science and engineering ethics, 1-40. 10.1007/s11948-018-
0079-4
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V., Nichols, T. E., & Wagenmakers, E. J.
(2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in
methods and practices in psychological science, 1(1), 70-85.
https://doi.org/10.1177/2515245917751886
Huang, M.H. (2012) Opening the black box of QS World University Rankings. Research
Evaluation, 21(1), 71–78. https://doi.org/10.1093/reseval/rvr003
Johnson, E. J., Goldstein, D. (2003) Do defaults save lives? Science 302(5649) 1338-1339. Doi:
10.1126/science.1091721
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social
Psychology Review, 2(3), 196-217. https://doi.org/10.1207/s15327957pspr0203_4
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.
S., ... & Errington, T. M. (2016). Badges to acknowledge open practices: A simple, low-cost,
effective method for increasing transparency. PLoS biology, 14(5), e1002456.
https://doi.org/10.1371/journal.pbio.1002456
Krishna, A., & Peter, S. M. (2018). Questionable research practices in student final theses–
Prevalence, attitudes, and the role of the supervisor’s perceived attitudes. PloS one, 13(8),
e0203470. https://doi.org/10.1371/journal.pone.0203470
League of European Research Universities (2018). Open science and its role in universities: A
roadmap for culture change. https://www.leru.org/files/LERU-AP24-Open-Science-full-
paper.pdf
Lehman, D. R., Lempert, R. O., & Nisbett, R. E. (1988). The effects of graduate training on
reasoning: Formal discipline and thinking about everyday-life events. American
Psychologist, 43(6), 431-442.
Li, Y., Wu, C., Yan, E., & Li, K. (2018). Will open access increase journal CiteScores? An
empirical investigation over multiple disciplines. PloS one, 13(8), e0201885.
https://doi.org/10.1371/journal.pone.0201885
Lin, J., Murphy, F. L., Taylor, M., & Allen, L. (2017). Building the infrastructure to make science
metrics more scientific. F1000Research, 5.
Loewenstein, G., & Chater, N. (2017). Putting nudges in perspective. Behavioural Public Policy,
1(1), 26-53. doi:10.1017/bpp.2016.7
Martin, B. R. (2011), The research excellence framework and the ‘impact agenda’: Are we creating
a Frankenstein monster?, Research Evaluation, 20(3), 247–254,
https://doi.org/10.3152/095820211X13118583635693
Mayo, N. E., Brophy, J., Goldberg, M. S., Klein, M. B., Miller, S., Platt, R. W., & Ritchie, J.
(2006). Peering at peer review revealed high degree of chance associated with funding of
grant applications. Journal of clinical epidemiology, 59(8), 842-848.
10.1016/j.jclinepi.2005.12.007
McIntosh, R. D. (2017). Exploratory reports: A new article type for Cortex [Editorial]. Cortex 96,
A1–A4. https://doi.org/10.1016/j.cortex.2017.07.014
McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., … Yarkoni, T. (2016).
Point of view: How open science helps researchers succeed. Elife, 5, e16800.
10.7554/eLife.16800
Mertens, G., & Krypotos, A. M. (2019). Preregistration of analyses of preexisting data.
Psychologica Belgica, 59(1), 338. http://doi.org/10.5334/pb.493
Munafò, M. R. (2019). Raising research quality will require collective action. Nature, 576, 183. doi:
10.1038/d41586-019-03750-7
Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., ... &
Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature human behaviour, 1(1),
1-9. https://doi-org.ezproxy.library.uq.edu.au/10.1038/s41562-016-0021
Narock, T., & Goldstein, E. B. (2019). Quantifying the growth of preprint services hosted by the
Center for Open Science. Publications, 7(2), 44. doi.org/10.3390/publications7020044.
National Institutes of Health (2011, December 12) NDAR federation creates largest source of
autism research data to date. NIH. Retrieved from: https://www.nih.gov/news-events/news-
releases/ndar-federation-creates-largest-source-autism-research-data-date
Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science,
238(4827), 625-631. 10.1126/science.3672116
Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008). Normative
social influence is underdetected. Personality and Social Psychology Bulletin, 34(7), 913-
923. 10.1177/0146167208316691
Norris, E., & O’Connor, D. B. (2019) Science as behaviour: Using a behaviour change approach to
increase uptake of open science, Psychology & Health, 34(12), 1397-1406, Doi:
10.1080/08870446.2019.1679373
Nosek, B. (2019, June 11) Strategy for culture change. Center for Open Science.
https://www.cos.io/blog/strategy-for-culture-change
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... &
Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422-
1425. 10.1126/science.aab2374
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration
revolution. Proceedings of the National Academy of Sciences, 115(11), 2600-2606.
https://doi.org/10.1073/pnas.1708274114
Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of
published results. Social Psychology, 45(3), 137-141. http://dx.doi.org/10.1027/1864-
9335/a000192
Nosek, B. A., Lindsay, S. (2018, March) Preregistration becoming the norm in psychological
science. Association for Psychological Science
https://www.psychologicalscience.org/observer/preregistration-becoming-the-norm-in-
psychological-science
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and
practices to promote truth over publishability. Perspectives on Psychological Science, 7(6),
615-631. https://doi.org/10.1177/1745691612459058
Nuzzo, R. (2015). How scientists fool themselves – and how they can stop. Nature 526, 182–185.
doi:10.1038/526182a
NWO (2019, April 18) KNAW, NWO and ZonMw to sign DORA declaration. Dutch Research
Council. Retrieved from: https://www.nwo.nl/en/news/knaw-nwo-and-zonmw-sign-dora-
declaration.
O’Boyle Jr, E. H., Banks, G. C., & Gonzalez-Mulé, E. (2017). The chrysalis effect: How ugly initial
results metamorphosize into beautiful articles. Journal of Management, 43(2), 376-399.
https://doi.org/10.1177/0149206314527133
Ogungbeni, J. I., Obiamalu, A. R., Ssemambo, S., & Bazibu, C. M. (2018). The roles of academic
libraries in propagating open science: a qualitative literature review. Information
Development, 34(2), 113-121. https://doi.org/10.1177/0266666916678444
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science, 349(6251), aac4716. 10.1126/science.aac4716
Open Science Knowledge Base. “Open Science Knowledge Base.” Accessed October 31, 2019.
https://how-to-open.science/.
Open Science Knowledge Base. “Planning Research.” Accessed October 31, 2019. https://how-to-
open.science/#preregistration.
Orben, A. (2019). A journal club to fix science. Nature, 573(7775), 465. doi.org/10.1038/d41586-
019-02842-8.
Pinfield, S., Cox, A. M., & Smith, J. (2014). Research data management and libraries:
Relationships, activities, drivers and influences. PLoS One, 9(12), e114734.
https://doi.org/10.1371/journal.pone.0114734
Pliner, P. (1982). The effects of mere exposure on liking for edible substances. Appetite, 3(3), 283-
290. https://doi.org/10.1016/S0195-6663(82)80026-3
Pontika, N., Knoth, P., Cancellieri, M., & Pearce, S. (2015, October). Fostering open science to
research using a taxonomy and an eLearning portal. In Proceedings of the 15th international
conference on knowledge technologies and data-driven business. Association for Computing
Machinery, New York, NY, USA, Article 11, 1–8. https://doi.org/10.1145/2809563.2809571
Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: how much can we rely on
published data on potential drug targets?. Nature reviews Drug discovery, 10(9), 712-712.
https://doi.org/10.1038/nrd3439-c1
Quan, W., Chen, B. and Shu, F. (2017), Publish or impoverish: An investigation of the monetary
reward system of science in China (1999-2016). Aslib Journal of Information Management,
69(5), 486-502. https://doi.org/10.1108/AJIM-01-2017-0014
Rahnev, D., Desender, K., Lee, A. L. F., et al. (2020). The Confidence Database. Nature Human
Behaviour. https://doi.org/10.1038/s41562-019-0813-1
Robbennolt, J. K., & Sternlight, J. R. (2013). Behavioral legal ethics. Arizona State Law Journal,
45(3), 1107-1182. http://dx.doi.org/10.2139/ssrn.2248137
Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411–
426. https://doi.org/10.1177/1745691612454303
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological bulletin,
86(3), 638. https://doi.org/10.1037/0033-2909.86.3.638
Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6.
10.12688/f1000research.11369.2
Rowlands, I., & Nicholas, D. (2006). The changing scholarly communication landscape: An
international survey of senior researchers. Learned Publishing, 19(1), 31-55.
https://doi.org/10.1087/095315106775122493
Sarabipour, S., Debat, H. J., Emmott, E., Burgess, S. J., Schwessinger, B., & Hensel, Z. (2019). On
the value of preprints: An early career researcher perspective. PLoS Biology, 17(2),
e3000151. doi.org/10.1371/journal.pbio.3000151
Sarafoglou, A., Hoogeveen, S., Matzke, D., & Wagenmakers, E.-J. (2019). Teaching Good
Research Practices: Protocol of a Research Master Course. Psychology Learning &
Teaching, 19(1), 46–59. https://doi.org/10.1177/1475725719858807
Science Guide (2019, November 18). Dutch universities and research funders move away from the
impact factor. https://www.scienceguide.nl/2019/11/dutch-universities-and-research-
funders-move-away-from-the-impact-factor/
Schönbrodt, F. “Our Commitment to Research Transparency and Open Science.” Our Commitment
to Research Transparency and Open Science, 2019. http://www.researchtransparency.org/.
Schönbrodt, F. D., Mellor, D. T., Bergmann, C., & Penfold, N. (2019). Academic job offers that
mentioned open science. https://osf.io/7jbnt
Searston, R. A., Thompson, M. B., Robson, S. G., Corbett, B. J., Ribeiro, G., Edmond, G., &
Tangen, J. (2019). Truth and transparency in expertise research. Journal of Expertise, 2(4),
199-209.
Serghiou, S., & Ioannidis, J. P. (2018). Altmetric scores, citations, and publication of studies posted
as preprints. JAMA, 319(4), 402-404. doi.org/10.1001/jama.2017.21168
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed
flexibility in data collection and analysis allows presenting anything as significant.
Psychological science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632
Spellman, B.A., Gilbert, E.A. and Corker, K.S. (2018). Open Science. In Stevens' Handbook of
Experimental Psychology and Cognitive Neuroscience, J.T. Wixted (Ed.).
doi:10.1002/9781119170174.epcn519
Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C.
(2020). Is Preregistration Worthwhile? Trends in Cognitive Sciences, 24(2), 94–95.
https://doi.org/10.1016/j.tics.2019.11.009
Tacke O. (2010) Open Science 2.0: How Research and Education Can Benefit from Open
Innovation and Web 2.0. In: Bastiaens TJ, Baumöl U., Krämer BJ (eds) On Collective
Intelligence. Advances in Intelligent and Soft Computing, 76, pp. 37-48. Springer, Berlin,
Heidelberg. https://doi.org/10.1007/978-3-642-14481-3_4
Taylor, P., Hobbs, J. N., Burroni, J., & Siegelmann, H. T. (2015). The global landscape of
cognition: hierarchical aggregation as an organizational principle of human cortical
networks and functions. Scientific reports, 5(1), 1-18. https://doi.org/10.1038/srep18112
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and
happiness. Yale University Press.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124-1131. 10.1126/science.185.4157.1124
Tzanova, S. (2019) Changes in academic libraries in the era of Open Science, Education for
Information, 36(8), 1-19. 10.3233/EFI-190259
UK Behavioural Insights Team (2014). EAST: Four simple ways to apply behavioural insights.
Retrieved from https://www.behaviouralinsights.co.uk/wp-content/uploads/2015/07/BIT-
Publication-EAST_FA_WEB.pdf
Utrecht University. “Open Science at Utrecht University.” Home - Utrecht University, 2019.
https://open-science.sites.uu.nl/
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and
progress. Perspectives on Psychological Science, 13(4), 411-417.
https://doi.org/10.1177/1745691617751884
Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M.
A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological
studies: A checklist to avoid p-hacking. Frontiers in psychology, 7, 1832.
10.3389/fpsyg.2016.01832
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., ... &
Bouwman, J. (2016). The FAIR Guiding Principles for scientific data management and
stewardship. Scientific data, 3(1), 1-9. https://doi.org/10.1038/sdata.2016.18
Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social
Psychology, 9(2), 1–27. https://doi.org/10.1037/h0025848
Zeelenberg, M. (1999). Anticipated regret, expected feedback and behavioral decision making.
Journal of behavioral decision making, 12(2), 93-106. https://doi.org/10.1002/(SICI)1099-
0771(199906)12:2<93::AID-BDM311>3.0.CO;2-S