Compton and ’t Hart (2019), Great Policy Successes


to nurturing and protecting elite and public perceptions of the policy’s/programme’s ideology, intent, instruments, implementation, and impact during the
often long and tenuous road from ideas to outcomes. Success must be experienced
and actively communicated, or it will go unnoticed and underappreciated. In this
volume, we aim to shed light on how these two fundamental tasks—programme and
process design; and coalition-building and reputation management—are taken up
and carried out in instances of highly successful public policymaking.
Following in the footsteps of Pressman and Wildavsky and Hall, this volume
contains in-depth case studies of prominent instances of public policymaking and
planning from around the world. By offering insight into occurrences of policy
success across varied contexts, these case studies are designed to increase aware-
ness that government and public policy actually work remarkably well, at least
some of the time, and that we can learn from these practices. Before we get into
these cases, however, it is necessary to equip readers of this book and future
researchers of policy success with a guide on how to go about identifying and
analysing instances of policy success. The chief purpose of this chapter is to offer
researchers, policy-makers, and students a field guide to spotting great policy
successes in the real world—in the wild—so that we can begin to analyse how they
came about and what might be learned from them.

How Do We Know a ‘Great Policy Success’ When We See One?

Policy successes are, like policy failures, in the eye of the beholder. They are not
mere facts but stories. Undoubtedly ‘events’—real impacts on real people—are a
necessary condition for their occurrence. But in the end, policy successes do not so
much occur as they are made. To claim that a public policy, programme, or project
X is a ‘success’ is effectively an act of interpretation, indeed of framing. To say this
in a public capacity and in a public forum makes it an inherently political act: it
amounts to giving a strong vote of confidence to certain acts and practices of
governance. In effect it singles them out, elevates them, validates them.
For such an act to be consequential, it needs to stick: others must be convinced
of its truth and they need to emulate it. The claim ‘X is a success’ needs to become
a more widely accepted and shared narrative. When it does, it becomes performa-
tive: X looks better and better because so many say so, so often. When the
narrative endures, X becomes enshrined in society’s collective memory through
repeated retelling and other rituals. Examples of the latter include the conferral of
awards on people or organizations associated with X, who subsequently get
invitations to come before captive audiences to spread the word; the high place
that X occupies in rankings; the favourable judgements of X by official arbiters of
public value in a society, such as audit agencies or watchdog bodies, not to
mention the court of public opinion. Once they have achieved prominence,
success tales—no matter how selective and biased critics and soft voices may claim
them to be (see Schram and Soss 2001)—come to serve as important artefacts in
the construction of self-images and reputational claims of the policy-makers,
governments, agencies, and societal stakeholders that credibly claim authorship
of their making and preservation (Van Assche et al. 2012).
We must tread carefully in this treacherous terrain. Somehow, we need to arrive
at a transparent and widely applicable conceptualization of ‘policy success’ to be
deployed throughout this volume, and a basic set of research tools allowing us to
spot and characterize the ‘successes’ which will be studied in detail throughout this
book. To get there, we propose that policy assessment is necessarily a multi-
dimensional, multi-perspectivist, and political process. At the most basic level we
distinguish between two dimensions of assessment. First, the programmatic
performance of a policy: success is essentially about designing smart programmes
that will really have an impact on the issues they are supposed to tackle, while
delivering those programmes in a manner to produce social outcomes that are
valuable. There is also the political legitimacy of a policy: success is the extent to
which both the social outcomes of policy interventions and also the manner in
which they are achieved are seen as appropriate by relevant stakeholders and
accountability forums in view of the systemic values in which they are embedded
(Fischer 1995; Hough et al. 2010).
The relation between these two dimensions of policy evaluation is not straight-
forward. There can be (and often are) asymmetries: politically popular policies are
not necessarily programmatically effective or efficient, and vice versa. Moreover,
there is rarely one shared normative and informational basis upon which all actors
in the governance processes assess performance, legitimacy, and endurance
(Bovens et al. 2001). Many factors influence beliefs and practices through which
people form judgements about governance. Heterogeneous stakeholders have
varied vantage points, values, and interests with regard to a policy, and thus
may experience and assess it differently. An appeal to ‘the facts’ does not neces-
sarily help settle these differences. In fact, like policymaking, policy evaluation
occurs in a context of multiple, often competing, cultural and political frames and
narratives, each of which privileges some facts and considerations over others
(Hajer and Wagenaar 2003). It is inherently political in its approach and impli-
cations, no matter how deep the espoused commitment to scientific rigour of
many of its practitioners. This is not something we can get around; it is something
we have to acknowledge and be mindful of without sliding into thinking that it is
all and only political, and that therefore ‘anything goes’ when it comes to assessing
the success or otherwise of a policy (Bovens et al. 2006).
Building upon Bovens and ‘t Hart’s programmatic–political dichotomy,
McConnell (2010) added a third perspective, process success, to produce a
three-dimensional assessment map. We have adapted this three-dimensional
assessment for our purposes (see also Newman 2014) and added an
additional—temporal—dimension. Assessing policy success in this volume thus involves checking cases against the following four criteria families:
Programmatic assessment—This dimension reflects the focus of ‘classic’ evalu-
ation research on policy goals, the theory of change underpinning it, and the
selection of the policy instruments it deploys—all culminating in judgements
about the degree to which a policy achieves valuable social impacts.
Process assessment—The focus here is on how the processes of policy design,
decision-making, and delivery are organized and managed, and whether these
processes contribute to both its technical problem-solving capacity (effectiveness
and efficiency) and to its social appropriateness, and in particular the sense of
procedural justice among key stakeholders and the wider public (Van den Bos
et al. 2014).
Political assessment—This dimension assesses the degree to which policy-
makers and agencies involved in driving and delivering the policy are able to
build and maintain supportive political coalitions, and the degree to which policy-
makers’ association with the policy enhances their reputations. In other words, it
examines both the political requirements for policy success and the distribution of
political costs/benefits among the actors involved in it.
Endurance assessment—The fourth dimension adds a temporal perspective. We
surmise that the success or otherwise of a public policy, programme, or project
should be assessed not through a one-off snapshot but as a multi-shot sequence or
episodic film ascertaining how its performance and legitimacy develop over time.
Contexts change, unintended consequences emerge, history throws up surprises:
robustly successful policies are those that adapt to these dynamics through
institutional learning and flexible adaptation in programme (re)design and delivery,
and through political astuteness in safeguarding supporting coalitions and main-
taining public reputation and legitimacy.
Taking these dimensions into account, we propose the following definition of a
(‘great’) policy success:

A policy is a complete success to the extent that (a) it demonstrably creates widely
valued social outcomes; through (b) design, decision-making, and delivery pro-
cesses that enhance both its problem-solving capacity and its political legitimacy;
and (c) sustains this performance for a considerable period of time, even in the
face of changing circumstances.

Table 1.1 presents an assessment framework that integrates these building blocks.
Articulating specific elements of each dimension of success—programmatic, pro-
cess, political, endurance—in unambiguous and conceptually distinct terms, this
framework lends a structure to both contemporaneous evaluation and dynamic
consideration of policy developments over time. All contributing authors have
drawn upon it in analysing their case studies in this volume.
Table 1.1 A policy success assessment map

Programmatic assessment: Purposeful and valued action
• A well-developed and empirically feasible public value proposition and theory of change (in terms of ends–means relationships) underpins the policy
• Achievement of (or considerable momentum towards) the policy’s intended and/or other beneficial social outcomes
• Costs/benefits associated with the policy are distributed equitably in society

Process assessment: Thoughtful and fair policymaking practices
• The policy process allows for robust deliberation about and thoughtful consideration of: the relevant values and interests; the hierarchy of goals and objectives; contextual constraints; the (mix of) policy instruments; and the institutional arrangements and capacities necessary for effective policy implementation
• Stakeholders overwhelmingly experience the making and/or the delivery of policy as just and fair

Political assessment: Stakeholder and public legitimacy for the policy
• A relatively broad and deep political coalition supports the policy’s value proposition, instruments, and current results
• Association with the policy enhances the political capital of the responsible policy-makers
• Association with the policy enhances the organizational reputation of the relevant public agencies

Temporal Assessment
• Endurance of the policy’s value proposition (i.e. the proposed ‘high-level’ ends–means relationships underpinning its rationale and design, combined with the flexible adaptation of its ‘on-the-ground’ and ‘programmatic’ features to changing circumstances and in relation to performance feedback).
• Degree to which the policy’s programmatic, process, and political performance is maintained over time.
• Degree to which the policy confers legitimacy on the broader political system.
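
For readers who find it helpful to see the assessment map in a more operational form, the following minimal sketch (in Python) shows one way a case analyst might record judgements against the four dimensions. It is an illustration only, not part of the authors’ framework: the class, the three-level rating scale, and the ‘high on every dimension’ decision rule are assumptions introduced here for exposition.

# Illustrative sketch only: field names, rating levels, and the decision rule
# below are hypothetical, not taken from Compton and 't Hart's text.
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    """Coarse, hypothetical rating levels for a single assessment dimension."""
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class PolicySuccessAssessment:
    """Rubric mirroring Table 1.1: programmatic, process, political,
    and temporal (endurance) dimensions of policy success."""
    case_name: str
    programmatic: Level  # valued social outcomes, sound theory of change
    process: Level       # thoughtful, fair design and delivery processes
    political: Level     # coalition support and reputational gains
    endurance: Level     # performance and legitimacy sustained over time
    notes: dict = field(default_factory=dict)  # free-form evidence per dimension

    def is_complete_success(self) -> bool:
        # On a strict reading of the definition above, a 'complete' success
        # rates highly on every dimension, including endurance.
        dims = (self.programmatic, self.process, self.political, self.endurance)
        return all(d is Level.HIGH for d in dims)


# Example usage with an invented case label.
case = PolicySuccessAssessment(
    case_name="hypothetical tobacco-control programme",
    programmatic=Level.HIGH,
    process=Level.HIGH,
    political=Level.MODERATE,
    endurance=Level.HIGH,
)
print(case.is_complete_success())  # False: political support judged only moderate

Any real application would, of course, replace these coarse ratings with the qualitative evidence that the contributing authors marshal in their case studies.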

Studying Policy Success: Methodological Considerations

Now that we have a working method of ‘seeing’ policy success in operational terms, the next step is to apply the concept in studying governance and public policymaking. Before we do so, however, it is important to point out that there is a range of methods which researchers have employed in this task. These efforts can be grouped into three types of approach.
At the macro-level, studies of overall government performance usually take the
form of cross-national and cross-regional comparison of indicators published in
large datasets. Some researchers focus on the inputs and throughput side of
government. A prominent example is the Quality of Government dataset that
captures cross-national difference in the trustworthiness, reliability, impartiality,
incorruptibility, and competence of public institutions (Rothstein 2011). Of more direct relevance from a policy success point of view are datasets and balanced
scorecard exercises focusing on aggregate governance outputs, outcomes, and
productivity in specific domains of government activity, performed and propa-
gated by e.g. the World Bank, the OECD, and many national audit offices and
government think tanks (Goderis 2015).
At the meso-level, social problems, policy domain, and programme evaluation
specialists regularly examine populations of cases to identify cases and areas of
high performance. For example, common areas of focus include crime prevention
programmes, adult literacy programmes, refugee settlement programmes, and
early childhood education programmes. With this method, scholars examine
‘what works’ and assess whether these programmes or key features of them can
be replicated and transferred to other contexts (e.g. Light 2002; Isaacs 2008;
Lundin et al. 2015; Blunch 2017; Weisburd et al. 2017).
Finally, at the micro-level, researchers probe deeply into the context, design,
decision-making, implementation, reception, assessment, and evolution of single
or a limited number of policies or programmes. Both Hall’s and Pressman and
Wildavsky’s seminal studies are examples of micro-level studies.
Each of these three approaches has a distinctive set of potential strengths and
weaknesses. Macro studies offer a view of the big picture, with a helicopter
perspective of linkages between governance activities and social outcomes. They
lend insight into the social and economic consequences of institutional design and
the effect of public spending patterns. This approach generally offers little or no
insight into what occurs in the ‘black box’ in which these linkages take shape.
Meso-level studies, on the other hand, drill down to the level of programmes and
come closer to establishing the nature of the links between their inputs, through-
puts, outputs, and outcomes. Structured and focused comparative case designs
which control for institutional and contextual factors can yield richer pictures of
‘what works’. A limitation of these population-level comparisons is their parsimony, which constrains the depth of attention paid to context, chance, choice, communication, cooperation, and conflict within each unit in the sample.
As a result, it often proves difficult for meso-level studies to convincingly answer
why things work well or not so well.
The latter is the main potential strength of micro-level, single, or low-n case
study designs. This approach offers the greatest leverage in opening the black box,
and examining the stakeholder interests, institutional arrangements, power rela-
tionships, leadership and decision-making processes, and the realities of front-line
service delivery involved. This gives analysts in this tradition a better shot at
reconstructing the constellations of factors and social mechanisms that are at
work in producing policy successes. The chief limitation of micro studies of policy
success lies in the limited possibilities for controlled hypothesis testing and the
impossibility of empirically generalizing their findings. This volume is set in the
