The Tools of Policy Formulation: An Introduction
INTRODUCTION
Policy formulation is arguably among the least understood of all the policy process stages; indeed, there is a growing belief
that it may constitute the final, ‘missing link’ (Hargrove 1975) in policy
analysis. Interest in policy design is also re-awakening, partly because
of the rise to prominence of ever more complex problems such as energy
insecurity and climate change that defy standard policy remedies (Howlett
et al. 2014). And having invested heavily in tools in the past, tool promot-
ers and policy practitioners are eager to understand how – and indeed
if – they perform in practice.
The remainder of this chapter is divided as follows. The second section
takes a step back by examining the main actors, processes and venues of
policy formulation in a very general sense. The third section scours the
various existing literatures to explore in more detail the development of
the various policy formulation tools that could in principle be used in
these venues. It also charts the subsequent turn away from these tools
in mainstream public policy research, and explores some of the reasons
why interest in policy formulation has recently undergone a renaissance.
Section 4 explores the analytical steps that will be needed to re-assemble
the various literatures into a more coherent sub-field of policy research,
revolving around a series of common foci. To that end, we propose a new
definition and typology of tools, and offer a means of re-assembling the
field around an analytical framework focused on actors, venues, capacities
and effects. We conclude by introducing the rest of the book, including
our final, concluding chapter.
One of the most common ways to comprehend the process of policy for-
mulation is to break it down into constituent steps or tasks. For Wolman
(1981), policy formulation comprises several ‘components’, each impacting
heavily on overall policy performance. In his view, the ‘formulating
process’ starts with the ‘conceptualization of the problem’ by policymakers
(Wolman 1981, p. 435). Like Wolman, Thomas (2001, pp. 216‒217) also
identifies an initial ‘[a]ppraisal phase’ of data collection where ‘critical
issues . . . [are] identified’ by stakeholders. However, as many commentators
have observed, ‘problems’ themselves are not self-evident or neutral, with
Wolman (1981, p. 437) arguing that they may be contested, subjective or
socially constructed and may change through time in response to societal
values. Problem characterization could therefore be considered to be an
extension of the agenda-setting process. Policymakers may select certain
forms of evidence to support action on specific issues, or issues themselves
may be productive of certain types of evidence (see for example, Kingdon
2010; Baumgartner and Jones 1991).
Having established the existence of a policy problem (or problems)
through some form of data collection, the various policy-relevant dimen-
sions of the problem are then evaluated to determine their causes and extent,
chiefly as a basis for identifying potential policy solutions. Inadequate
understanding at this stage creates a need for what Wolman (1981, p. 437)
terms ‘[t]heory evaluation and selection’. While the point is often made
that causation tends to be difficult to precisely establish, Wolman observes
that ‘the better the understanding is of the causal process . . . the more
likely . . . we will be able to devise public policy to deal with it success-
fully’ (Wolman 1981, p. 437). Understanding causation, as Wolman puts
it, is also reliant on the generation of adequate theoretical propositions in
addition to relevant data on which to support them. For Wu et al. (2010,
p. 40) ‘[u]nderstanding the source of the problem’ is an unavoidable part of
formulation. They also make the point that rarely is there ‘full agreement
over . . . underlying causes’ (Wu et al. 2010, p. 40). Like initial problem
characterization, evaluation of the causes of a problem may thus involve
political conflict as different actors seek to apportion blame, reduce their
perceived complicity or shape subsequent policy responses in line with
their interests. These characteristics strongly condition the type of tools
used.
Once a broad consensus has been reached on the nature and extent
of the problem(s), policymakers turn to consider appropriate responses.
From the initial information gathering and analysis of causes, formula-
tors engage in the ‘[s]pecification of objectives’ (Wolman 1981, p. 438) or
‘[c]larifying policy objectives’ (Wu et al. 2010, p. 40) stage. Initially, this
third step of objective specification can involve the determination of
the objectives to be met and the timescales for action (Wu et al. 2010).
Again, disagreements over objectives can quickly ensue but once they
are established, as a fourth step, specific policy options can be assessed
and recommendations made on policy design(s). Because any particular
problem may have multiple potential solutions, each with differing costs
and benefits, these options require comparative assessment to guide deci-
sion making. As Howlett (2011, p. 31) puts it, this part of the formula-
tion process ‘sees public officials weighing the evidence on various policy
options and drafting some form of proposal that identifies which of these
options will be advanced to the ratification stage’.
Prior to the adoption of the final policy, it undergoes a fifth step – design.
Having determined objectives, various means are available for selection
from the tool box (for example Howlett 2011; Jordan et al. 2012; Jordan
et al. 2013b). Determining the preferred policy mix is central to design con-
siderations. While typologies also abound in the instruments literature,
four main categories are evident: regulations; market-based instruments;
voluntary approaches; and informational measures (Jordan et al. 2013b).
In addition, the instrument of public spending or budgeting may also be
identified (see for example, Russel and Jordan 2014). Policymakers select
from these instruments according to a range of considerations that are
both internal and external to the instrument. This stage of formulation
could, according to Wolman (1981, pp. 440‒446), consequently involve the
weighing-up of several factors: the ‘causal efficacy’ of the policy; ‘political
feasibility’; ‘technical feasibility’; any ‘secondary consequences’ result-
ing from the design; instrument type (regulations or incentives); and the
capacity of implementation structures.
As above, all the steps including this one may become deeply contested.
After all, the final architecture of the policy could, once implemented,
create winners and losers via processes of positive and negative feedback
(Jordan and Matt 2014). One means of dissipating distributional conflict
throughout the entire formulation process is to engage in what Thomas
(2001, p. 218) terms consensus building or ‘consolidation’, whereby agree-
ment is sought between the various policy formulators and their client
groupings. We shall show that a number of tools have been developed
specifically for this purpose. But while ‘[a]nticipating and addressing the
. . . concerns of the various powerful social groups is essential’, consulta-
tion may create associated transaction costs such as the slowing down of
policy adoption (Wu et al. 2010, p. 41). A decision can be taken – the sub-
sequent stage of the policy process – once agreement has been reached on
the chosen course of action.
These five constitute the standard steps or tasks of policy formula-
tion. During the 1960s and 1970s, when the policy analysis movement was
still in its infancy, policy formulation was depicted as though it were both
analytically and in practice separate from agenda setting and decision
making. It was the stage where policy analysts ‘would explore alternative
approaches to “solve” a policy problem that had gained the attention of
decision makers and had reached the policy agenda’ (Radin 2013, p. 23). In
doing so, policy formulation could be ‘politically deodorized’ (Heclo 1972,
p. 15) in a way that allowed policy specialists to draw on the state of the art
in policy tools and planning philosophies, to ensure that policy remained
on as rationally determined a track as possible (Self 1981, p. 222).
As we saw above, and shall explain more fully below, it soon became
apparent that the politics could not be so easily squeezed out of policy for-
mulation by using tools or indeed any other devices. It also became clear
that some of the formulation tasks could overlap or be missed out entirely.
Indeed, policy formulation may not culminate in the adoption of a discrete
and hence settled ‘policy’: on the contrary, policies may continue to be (re)
formulated throughout their implementation as tool-informed learning
takes place in relation to their operational effectiveness and associated
outcomes (Jordan et al. 2013a). As we shall show, many policy analysts
responded to these discomforting discoveries by offering ever more stri-
dent recommendations on how policy formulation should be conducted
(Vining and Weimer 2010; Dunn 2004); notably fewer have studied how
it is actually practiced (Colebatch and Radin 2006; Noordegraaf 2011). In
the following section we shall explore what a perspective focusing on tools
and venues offers by way of greater insight into the steps and the venues
of policy formulation.
Policy formulation venues have been detected in many different settings, including, inter alia, within federal, state and local governments
plus within international organizations (Pralle 2003), European Union
institutions and national governments (Beyers and Kerremans 2012), and
various trans-governmental co-operation mechanisms (Guiraudon 2002).
Venues can include ‘formal political arenas such as legislatures, executives
and the judiciary, but also the media and the stock market’ and so-called
‘scientific venues such as research institutes, think-tanks and expert com-
mittees’ (Timmermans and Scholten 2006, p. 1105). A particular role is
also ascribed to the use of scientific evidence by actors to achieve agenda-
setting demands in venue shopping strategies (Timmermans and Scholten
2006).
On this basis, any attempt to categorize venues for policy formulation
should be cognizant of the institutional space itself and, significantly,
the type of evidence used. With respect to the former, when examining
formulation we can more neatly divide venues by functional power rather
than institutional level or actor group. Here, in terms of relative power,
it is national government executives that are still arguably dominant glo-
bally, despite increasing shifts towards multi-level governance (Jordan and
Huitema 2014). To give greater analytical purchase to our conceptualiza-
tions we therefore build on Peters and Barker (1993), Baumgartner and
Jones (1993) and Timmermans and Scholten (2006), and define policy
formulation venues as institutional locations, both within and outside gov-
ernments, in which certain policy formulation tasks are performed, with the
aim of informing the design, content and effects of policymaking activities.
Policy formulation venues can in principle exist at different levels of
governance (nation state versus supra/sub-national); and within or outside
the structures of the state. There has been much work (see for example
Barker 1993; Parsons 1995; Halligan 1995) on classifying policy advice
systems, and two dimensions identified therein are particularly important
for understanding policy formulation venues more generally. First, are the
policy formulation tasks conducted externally or internally to the execu-
tive; in other words, where is the task undertaken? For example, internal
venues may be populated wholly or mainly by serving officials or minis-
ters and may include departmental inquiries, government committees and
policy analysis units (for examples of the latter, see Page 2003). External
venues may encompass legislative, governmental or public inquiries and
involve non-executive actors such as elected parliamentarians, scientific
advisors, think tanks, industry representatives and non-governmental
organizations.
Second, are official (executive) or non-official sources of knowledge
employed, that is, what knowledge sources do policy formulators draw
upon? We distinguish between executive-sanctioned or derived knowledge,
and unofficial sources that may include surveys, research which appears as
non-formal reports, and the outputs of research networks and public intel-
lectuals. Rather closed processes of policy formulation can occur within
internal venues using officially derived evidence, in contrast to more open
external venues that draw upon non-official forms of knowledge.
Neither of these two dimensions – well known to scholars of policy advi-
sory systems (Craft and Howlett 2012, p. 87) – is binary. For example,
there are varying degrees to which the entirety of a policy formulation task
is undertaken internally or externally, and varying degrees to which differ-
ent types of evidence are employed at different times or for different pur-
poses. We therefore propose to represent them by means of a 2×2 matrix
(Figure 1.1).
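To make the two dimensions concrete, the following is a minimal illustrative sketch in Python (the class names, example venues and their placement in the matrix are our own shorthand for exposition, not categories drawn from the literature); it simply records where a venue sits on the internal–external and official–unofficial axes of Figure 1.1:

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical labels for the two dimensions of Figure 1.1.
    class Location(Enum):
        INTERNAL = "internal to the executive"
        EXTERNAL = "external to the executive"

    class Knowledge(Enum):
        OFFICIAL = "executive-sanctioned or derived knowledge"
        UNOFFICIAL = "non-official sources (surveys, research networks, public intellectuals)"

    @dataclass
    class Venue:
        name: str
        location: Location
        knowledge: Knowledge

    # Illustrative placements only, drawn loosely from the examples in the text.
    venues = [
        Venue("departmental policy analysis unit", Location.INTERNAL, Knowledge.OFFICIAL),
        Venue("public inquiry hearing expert and NGO evidence", Location.EXTERNAL, Knowledge.UNOFFICIAL),
    ]

    for v in venues:
        print(f"{v.name}: {v.location.name} / {v.knowledge.name}")

In practice, of course, venues occupy positions along a continuum on both dimensions rather than falling neatly into one of the four cells.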
As noted above, tools have always had a special place in the history of
policy analysis. Modern policy analysis is often held to have developed in
earnest from the 1940s onwards (DeLeon 2006). Harold Lasswell’s (1971) vision of the policy sciences gave analytical tools a central role, promising to professionalize an activity that had, as noted above, been dominated by generalists and those with a legal back-
ground (Radin 2013, p. 14). These tools initially drew on techniques from
operational research and economic analysis, including methods for assess-
ing the costs and benefits of different policy alternatives, and analysis of
interacting parts of complex systems. Tools such as cost–benefit analysis
(CBA) and computer models were to be found in the analycentric ‘back-
room’ (Self 1981, p. 222), where political ‘irrationalities’ could be tempered
and policy made more ‘rational’. These tools and tool-utilizing skills had
originally been developed and honed during the Second World War, but
as Radin (2013, p. 14) puts it rather nicely, ‘the energy of Americans that
had been concentrated on making war in a more rational manner now
sought new directions’. The tool specialists found a willing audience
amongst politicians and policymakers who were anxious to embark upon
new endeavours.
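As a purely illustrative sketch of the kind of calculation such tools involve (the policy options, cash flows and the 5 per cent discount rate below are invented for the example, not drawn from any real appraisal), a simple cost–benefit comparison of two alternatives can be expressed as a net present value calculation:

    def net_present_value(net_flows, discount_rate):
        # Discount a stream of yearly (benefit minus cost) values back to year zero.
        return sum(flow / (1 + discount_rate) ** year for year, flow in enumerate(net_flows))

    # Invented net annual flows (benefits minus costs) for two hypothetical options.
    option_a = [-100.0, 30.0, 40.0, 50.0, 60.0]   # large up-front cost, rising benefits
    option_b = [-40.0, 20.0, 20.0, 20.0, 20.0]    # cheaper option with flatter benefits

    rate = 0.05  # assumed discount rate, chosen only for illustration
    for name, flows in (("Option A", option_a), ("Option B", option_b)):
        print(f"{name}: net present value = {net_present_value(flows, rate):.1f}")

The option with the higher net present value would, on this narrow economic logic, be preferred – precisely the kind of apparently apolitical ranking that subsequently attracted criticism.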
In time, however, critiques began to accumulate. First, what was the point of ever more sophisticated analysis if the critical policy decisions had already effectively been made? (Shulock 1999,
p. 241). Politics could also intervene more insidiously, through the values
embodied and reproduced by particular, ostensibly neutral tools. CBA in
particular lost legitimacy in certain policy sectors as a result (Owens et al.
2004), though hung on quite tenaciously thereafter. The very idea that
policy analysis should seek to provide analytical solutions for ‘elites’ was
challenged; rather, claims were made that analysts should concentrate on
understanding the multiple actors that are involved in policy formulation
(Hajer and Wagenaar 2003), and uncover the many meanings that they
bring to the process and the framings they employ (Radin 2013, p. 162). So
while the academic critique of tools and methods was mostly centred on the most positivist, rational variants (in other words, the Planning Programming Budgeting System (PPBS) and CBA) (Self
1985), its effect was eventually much more wide ranging and long lasting.
Second, policymakers also began to turn away from centralized, tool-
driven forms of policy planning. The abolition of PPBS in the 1970s
and of the UK’s Central Policy Review Staff (CPRS) in the early 1980s, coupled with the rise of a much
more explicitly ideological approach to policymaking in the 1980s, led
not to the removal of analysis altogether, but changes in the type and
tools of analysis demanded. Thus, the rise of private sector management
techniques in running public services (in other words, the New Public
Management agenda), coupled with a desire to reduce the power and scope
of bureaucracy, nurtured a demand for a new set of accounting tools for
contracting out public services (Mintrom and Williams 2013).
Third, the mainstream of public policy research had long before turned
to other research questions. These focused more on attempts (of which
Lindblom (1959) is a classic early example) to better understand the policy
process itself, not as a series of stages in which rational analysis could/
should be applied, but as a much more complex, negotiated and above all
deeply political process. Others built on the claim that policy formulation
was actually not especially influential – that policy implementation, not
formulation, was the missing link – and devoted their energies to post-
decisional policymaking processes. Meanwhile, after Salamon’s (1989)
influential intervention, policy instrument scholars increasingly focused
on the selection and effects of the implementing instruments.
Finally, the tool designers and developers became ever more divided
into ‘clusters of functional interest’ (Schick 1977, p. 260). The idea of an
integrated policy analysis for democracy was quietly forgotten in the rush
to design ever more sophisticated tools. Indeed, some have devoted their
entire careers to this task, only later to discover that relatively few policy-
makers routinely use the tools they had designed (Pearce 1998; Hanley et al.
1990). As Schick (1977, p. 262) had earlier predicted, they believed that the
route to usefulness was via ever greater precision and rigour – but it wasn’t.
need new analytical tools that will help them to diagnose and map the external
environments of the public agencies, to recognize the inherent tensions and
dynamics in these environments as they pertain to policy development and
consensus building, and to develop new strategies for ‘working’ in these envi-
ronments in the interests both of their political masters and those of the broader
communities they serve.
In attempting to move the study of policy formulation tools back into the
mainstream of public policy research, we immediately confront a problem –
the relative absence of common definitions and typologies. Without these,
it is difficult to believe that the literatures discussed above can be telescoped
into a new sub-field. We believe that four literatures provide an especially
important source of common terms and concepts, which we now briefly
summarize.
The first literature describes the internal characteristics and functions of
each tool, and/or offers tool kits which seek to assist policy formulators in
selecting ‘the right tool for the job’. On closer inspection, there are in fact
many sub-literatures covering a vast array of different tools; numerous
classic texts like Dunn (2004) and Rossi et al. (2004) introduce some of
the main ones. Generally speaking, this literature is rather fragmented into the main tool subtypes and rather rationalistic in its framing, but it nonetheless remains crucial because it outlines the intrinsic features of each tool.
However (as repeatedly noted above), it does not have a great deal to say
about where, how, why and by whom (in other words, by which actors and
in which venues) they are used, and what effects they (do not) produce.
The second is dominated by typologies. Tools can be typologized in
a number of different ways, for example: by the resources or capacities
they require; by the activity they mainly support (for example, agenda
setting, options appraisal); by the task they perform; and by their spatial
resolution. Radin (2013, p. 145) opts for a more parsimonious framing,
distinguishing between two main types: the more economic tools such
as cost–benefit analysis (CBA) and what she terms the more ‘systematic
approaches’ such as criteria analysis and political mapping. The problem
is that dividing the field into two does not really offer much typological
variation. In an earlier analysis, we elected to subdivide the main tools into
three main types based on their level of technical complexity (Nilsson et
al. 2008):
At the time, we noted that there was no normative ranking implied in this
typology. We also noted the basic difference between tools (such as scenar-
ios and public participation) with more open procedures and purposes, and
those like CBA that follow a set of standard procedural steps. But we did
not relate these to the policy formulation tasks that tools could or should
perform. We return to the matter of typologies below.
The third literature adopts a more critical perspective (Wildavsky
1987; Shulock 1999; Self 1981), offering words of caution about expect-
ing too much from tools. It appears to have left a deep impression on a number of policy analysts, perhaps deep enough to militate against
the development of a new sub-field. However, it is clear that despite these
cautionary words, many tools have been developed and are very heavily
applied in certain venues, routinely producing effects that are not currently well understood.
But what are the main tools of policy formulation and which of the
interlinked formulation tasks mentioned in this definition do they seek
to address? Today, the range of policy formulation tools is considerably
wider and more ‘eclectic’ (Radin 2013, p. 159) than it was in Lasswell’s time.
While keenly aware that typologizing can very easily become an end in
itself, developing some kind of workable taxonomy nonetheless remains a
crucial next step towards enhancing a shared understanding of how policy
formulation tools are used in contemporary public policymaking.
We propose that the five policy formulation tasks outlined above –
problem characterization, problem evaluation, specification of objectives,
policy options assessment and policy design – may be used to structure a
typology of policy formulation tools, based on what might be termed the policy formulation capacities that inhere within each tool (Table 1.1).
An Analytical Framework
In the rest of this book, a number of experts in policy formulation tools and
venues seek to shed new light on the interaction between four key aspects
of these tools, which together constitute our analytical framework: actors,
capacities, venues and effects.
Actors
First, we seek to elucidate those actors who participate in policy formu-
lation, particularly those that develop and/or promote particular policy
formulation tools. The tools literature has often lacked a sense of human
agency and, as noted above, the policy formulation literature tended to
ignore the tools being used. These two aspects need to be brought together.
In this book we therefore seek to know who the actors are and why they
develop and/or promote particular tools. Why were particular tools devel-
oped, when and by whom? And what values do the tools embody?
Venues
Second, we want to know more about by whom and in which policy for-
mulation venues such tools are used, and for what purposes. What factors
shape the selection and deployment of particular tools? Again the broader
question of agency seems to be largely unaddressed in the four existing
literatures summarized above. Tool selection is treated largely as a ‘given’;
indeed many studies seem to ignore entirely the reasons why policymakers
utilize them (or do not). Finally, relatively little is known about how the
various tools and venues intersect, both in theory and, as importantly, in
practice.
Capacities
Third, we wish to examine the relationship between policy capacity and
policy formulation tools. Policy capacity is one of a number of sub-
dimensions of state capacity, which together include the ability to create
and maintain social order and exercise democratic authority (Matthews
2012). Broadly, it is the ability that governments have to identify and
pursue policy goals and achieve certain policy outcomes in a more or less
instrumental fashion, that is, ‘to marshal the necessary resources to make
intelligent collective choices about and set strategic directions for the allo-
cation of scarce resources to public ends’ (Painter and Pierre 2005, p. 2).
It is known to vary between policy systems and even between governance
levels in the same policy system. Policy instruments and tools have long
been assumed to have an important influence on policy capacity – if they
did not, why use them (Howlett et al. 2014, p. 4)? The fact that they are
unevenly used over time, for example, could explain why the policy capac-
ity to get things done also varies across space and time (Bähr 2010; Wurzel
et al. 2013).
The chapters of this book seek to examine the relationship between
policy capacity and tools in three main ways. First, they conceive of the
policy formulation or policy analytic capacities that inhere within each
tool (in other words, Table 1.1). For example, scenarios and foresight
exercises provide policymakers with the capacity to address the problem
characterization and problem evaluation tasks, particularly in situations
of high scientific uncertainty. By contrast, tools such as CBA and multi-
criteria analysis (MCA) provide a means to complete the policy options assessment and policy design stages of the policy formulation process.
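By way of a hedged illustration of how such a tool structures the options assessment task (the criteria, weights and scores below are entirely invented; real MCA exercises elicit them from decision makers or stakeholders and may use more sophisticated aggregation rules), a weighted-sum MCA might rank three generic instrument types as follows:

    # Invented criteria weights (summing to 1) and scores on a 0-10 scale, higher is better.
    weights = {"effectiveness": 0.4, "cost saving": 0.3, "equity": 0.2, "feasibility": 0.1}

    options = {
        "regulation": {"effectiveness": 8, "cost saving": 4, "equity": 6, "feasibility": 5},
        "market-based instrument": {"effectiveness": 6, "cost saving": 7, "equity": 5, "feasibility": 7},
        "information campaign": {"effectiveness": 4, "cost saving": 9, "equity": 7, "feasibility": 9},
    }

    def weighted_score(scores):
        # Simple additive aggregation: multiply each score by its weight and sum.
        return sum(weights[criterion] * scores[criterion] for criterion in weights)

    for option in sorted(options, key=lambda o: weighted_score(options[o]), reverse=True):
        print(f"{option}: {weighted_score(options[option]):.2f}")

Even this toy example makes clear that the resulting ranking is driven by the weights chosen, which is one reason why the capacity to use such tools well matters as much as the tools themselves.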
Second, the chapters also tackle the question of what policy capacities
are in turn required by policymakers to employ – and perhaps even more
fundamentally to select – certain policy formulation tools. For example,
relatively heavily procedural tools such as MCA and CBA arguably
require specialist staff and specific oversight systems. When these are weak
or absent, the use made of tools may tend towards the symbolic. Thus,
several questions may be posed. What capacities do actors have – or need –
to employ specific policy formulation tools? And what factors enable and/
or constrain these capacities?
Finally, the chapters open up the potentially very broad – but equally
important – question of what factors might conceivably enable or con-
strain the availability of these capacities. The fact that critical supporting
Effects
Finally, what effects, both intended and actual, do the various tools gener-
ate when they are employed? As we explained above, our original expecta-
tion was that the tools would produce some quite specific epistemic and
political effects. But while some evidence is available on their wider effects,
much more is required. The policy instruments literature has been strug-
gling to address this question, at least for implementation tools, ever since
Salamon (2002, p. 2) speculated that each tool imparts its own distinctive
spin or twist on policy dynamics. Substantive effects include learning in
relation to new means to achieve given policy goals (a feature which is
predominant amongst the more structured procedural tools such as CBA,
but also computer modelling tools) through to the heuristic-conceptual
effects on problem understandings (see for example Chapters 2 and 3, this
volume). The procedural effects could be similarly wide ranging, including
(re-)channelling political attention, opening up new opportunities for
outsiders to exert influence and uncovering political power relationships.
The chapters examine whether or not these and other effects occurred, and
whether they were, or were not, originally intended.
The chapters are grouped into two main parts. Those in Part II provide – in
some cases, for the very first time – a systematic review of the literature on
particular tools. They are written by tool experts according to a common
template and draw upon examples from across the globe. Given space con-
straints, we elected to focus on six of the most widely known and commonly
advocated tools, which broadly reflect the range of tool types and policy
formulation tasks summarized in Table 1.1. Thus, Matthijs Hisschemöller
and Eefje Cuppen begin by examining participatory tools (Chapter 2),
Marta Pérez-Soba and Rob Maas cover scenarios (Chapter 3) and Markku
Lehtonen reviews indicators (Chapter 4). Then, Martin van Ittersum and
Barbara Sterk summarize what is currently known about computerized
models (Chapter 5), Catherine Gamper and Catrinel Turcanu explore forms
of multi-criteria analysis (Chapter 6) and Giles Atkinson concludes by
reviewing the literature on cost–benefit analysis (Chapter 7).
The chapters in Part II explore the relationship between actors, venues,
capacities and effects from the perspective of each tool. By contrast,
the authors in Part III cut across and re-assemble these four categories
by looking at tool–venue relationships in Europe, North America and
Asia. Some (for example, Chapters 8 and 9) turn the analytical telescope
right around and examine the use made of multiple tools in one venue.
Each chapter employs different theories to interpret freshly collected
empirical information to test explanations and identify pertinent new
research questions. In broad terms, the first two chapters in Part III
examine the use of multiple tools in one or more venues, whereas those
that follow focus on the application of specific tools in one or more
venues. Thus in their chapter, Michael Howlett and colleagues explore
the distribution of all tools across many venues in Canada (Chapter 8),
whereas John Turnpenny and colleagues explore the use of all the tools
in the single venue of policy-level appraisal within Europe (Chapter 9).
Sachin Warghade examines the use of two tools in a number of differ-
ent venues in India (Chapter 10), and Christina Boswell et al. investigate
the use of indicators in the UK (Chapter 11). Finally, Paul Upham and
colleagues explore the application of a particular type of computer-
ized model in a range of different policy formulation venues in the UK
(Chapter 12). In the final chapter (Chapter 13), we draw together the main find-
ings of the book and identify pertinent new policy and analytical research
challenges. Conscious that this still has the look and feel of a sub-field of
policy analysis ‘in the making’, we attempt to draw on these findings to
critically reflect back on our typology, our definition of formulation tools
and our analytical framework.
More generally, in Chapter 13 we seek to explore what a renewed focus
on policy formulation tools adds to our understanding of three impor-
tant matters. First, what stands to be gained in respect of our collective
understanding of the tools themselves, which as we have repeatedly noted
have often been studied in a rather isolated, static and descriptive manner?
Second, what does it reveal in relation to policy formulation and policy-
making more generally? Policy formulation is arguably the most difficult
policy ‘stage’ of all to study since it is often ‘out of the public eye . . . [and]
in the realm of the experts’ (Sidney 2007, p. 79). Howlett has argued that
it is a ‘highly diffuse and often disjointed process whose workings and
results are often very difficult to discern and whose nuances in particular
instances can be fully understood only through careful empirical case
study’ (Howlett 2011, p. 32). Aware of the challenges, in this book we
seek to investigate what a renewed focus on tools is able to add to the
current stock of knowledge. In doing so, we seek to directly challenge
the conventional wisdom about tools as epiphenomenal, that is, wholly
secondary to ideas, interests, power and knowledge. Finally, what does it
add to our collective understanding of the politics of policymaking? This
is an extremely pertinent question because many of the tools were origi-
nally conceived as a means to take the political heat out of policymaking.
NOTES
1. Hood and Margetts’ (2007) concept of ‘detector’ tools for harvesting policy relevant
information corresponds only to one of a number of different policy formulation tasks.
2. Although we regard the terms tool and instrument as being broadly synonymous, hence-
forth we use the term ‘tools’ mainly to differentiate policy formulation tools from policy
implementation instruments.
REFERENCES
Jordan, A.J., D. Benson, R. Wurzel and A.R. Zito (2012), ‘Environmental policy:
governing by multiple policy instruments?’, in J.J. Richardson (ed.), Constructing
a Policy Making State?, Oxford: Oxford University Press, pp. 104‒124.
Kingdon, J.W. (2010), Agendas, Alternatives and Public Policies, Harmondsworth:
Longman.
Lascoumes, P. and P. Le Galès (2007), ‘Introduction: understanding public policy
through its instruments’, Governance, 20 (1), 1‒22.
Lasswell, H. (1971), A Pre-view of Policy Sciences, New York: Elsevier.
Lindblom, C.E. (1959), ‘The science of “muddling through”’, Public Administration
Review, 19 (2), 79‒88.
Linder, S.H. and B.G. Peters (1990), ‘Policy formulation and the challenge of con-
scious design’, Evaluation and Program Planning, 13, 303–311.
Lindquist, E. (1992), ‘Public managers and policy communities’, Canadian Public
Administration, 35, 127‒159.
Matthews, F. (2012), ‘Governance and state capacity’, in D. Levi-Faur (ed.),
The Oxford Handbook of Governance, Oxford: Oxford University Press,
pp. 281‒293.
Meltsner, A.J. (1976), Policy Analysts in the Bureaucracy, Berkeley: University of
California Press.
Mintrom, M. and C. Williams (2013), ‘Public policy debate and the rise of policy
analysis’, in E. Araral, S. Fritzen, M. Howlett, M. Ramesh and X. Wu (eds),
Routledge Handbook of Public Policy, London: Routledge, pp. 3‒16.
Nilsson, M., A. Jordan, J. Turnpenny, J. Hertin, B. Nykvist and D. Russel (2008),
‘The use and non-use of policy appraisal tools in public policy making’, Policy
Sciences, 41 (4), 335‒355.
Noordegraaf, M. (2011), ‘Academic accounts of policy experience’, in
H. Colebatch, R. Hoppe and M. Noordegraaf (eds), Working for Policy,
Amsterdam: University of Amsterdam Press, pp. 45–67.
Owens, S. and R. Cowell (2002), Land and Limits: Interpreting Sustainability in the
Planning Process, London and New York: Routledge.
Owens, S., T. Rayner and O. Bina (2004), ‘New agendas for appraisal: reflections
on theory, practice and research’, Environment and Planning A, 36, 1943‒1959.
Page, E.C. (2003), ‘The civil servant as legislator: law making in British adminis-
tration’, Public Administration, 81 (4), 651–679.
Page, E.C. and B. Jenkins (2005), Policy Bureaucracy: Governing with a Cast of
Thousands, Oxford: Oxford University Press.
Painter, M. and J. Pierre (2005), ‘Unpacking policy capacity: issues and themes’, in
M. Painter and J. Pierre (eds), Challenges to State Policy Capacity, Basingstoke:
Palgrave, pp. 1‒18.
Parsons, W. (1995), Public Policy, Aldershot, UK and Brookfield, VT, USA:
Edward Elgar Publishing.
Pearce, D.W. (1998), ‘Cost–benefit analysis and policy’, Oxford Review of
Economic Policy, 14 (4), 84‒100.
Peters, B.G. and A. Barker (1993), ‘Introduction: governments, information,
advice and policy-making’, in B.G. Peters and A. Barker (eds), Advising West
European Governments: Inquiries, Expertise and Public Policy, Edinburgh:
Edinburgh University Press, pp. 1‒19.
Pralle, S.B. (2003), ‘Venue shopping, political strategy, and policy change: the
internationalization of Canadian forest advocacy’, Journal of Public Policy, 23,
233‒260.