S08 - Sanderson, 2003
To cite this article: Ian Sanderson (2003) Is it 'what works' that matters? Evaluation and evidence-based policy-making, Research Papers in Education, 18:4, 331-345, DOI: 10.1080/0267152032000176846
To link to this article: http://dx.doi.org/10.1080/0267152032000176846
Ian Sanderson
ABSTRACT
The notion of evidence-based policy-making (EBP) has gained renewed currency in the UK
in the context of the current Labour Government's commitment to modernise government.
Thus, a key driver of modernisation is seen as evidence-based policy-making and service
delivery ('what matters is what works') in the context of a performance management
strategy for regulation of public services. The aim of this paper is to critically examine the
assumptions underpinning EBP, asking, in particular, to what extent the increased
emphasis on the role of evidence in policy-making is indicative of instrumental rationality
which erodes the normative basis of policy-making and undermines the capacity for
appropriate practice. The potential for theory-based evaluation to deliver on its evidential
promise is critically examined and, based upon an expanded notion of practical reason, it is
argued that we need to extend the scope of our concern from 'what works' to 'what is
appropriate' in addressing complex and ambiguous social problems, embracing ethical-moral
concerns.
Keywords: evidence; policy; evaluation; theory-based; rationality
Ian Sanderson is Director and Professor of Policy Analysis and Evaluation, Policy Research
Institute, Leeds Metropolitan University.
INTRODUCTION
The role of scientific knowledge in public policy-making is an issue of enduring concern to
both policy makers and researchers, a legacy of the Enlightenment in the seventeenth century
when the focus shifted to the potential for the application of knowledge to change and
improve the world. A key figure in what A. N. Whitehead called 'the century of genius' was
Francis Bacon who, Zagorin (1998, p. 222) argues, '. . . gave science an ethos and social
function, the investigation of nature for human betterment which, if never universally
accepted, continues to be very widely regarded up to the present day as its ultimate rationale'.
John Dewey, in his quest for a philosophical basis for the capacity to turn change in the
direction of our desires, was a passionate advocate of Bacon's systematic critical empiricism
founded upon careful experimentation to '. . . force the apparent facts of nature . . . (to) . . .
tell the truth about themselves, as torture may compel an unwilling witness to reveal what he
(sic.) has been concealing' (Dewey, 1957, p. 32).
In contrast to Dewey's optimism about the role of reason in 'attaining the better and
averting the worse' (Dewey, 1993), the pessimistic diagnosis of some postmodernists about
a world out of control and beyond the reach of redemptive politics (O'Sullivan, 1993)
appears rather bleak. Of course, there are two issues here. The first concerns our capacity to
understand the world: to wrest those facts from the uncooperative witness; the second
concerns our ability to apply those facts to influence the direction of social change in the
context of wider political drivers. Enough has been written about policy-making processes to
show that ideas rarely win the battles, although there remains a deep-seated optimism about
their capacity to help win the wars.
The position adopted in this paper is one of constructive scepticism in a critical appraisal
of the present Government's preoccupation with evidence-based policy-making (EBP). Tony
Blair set the agenda shortly after being elected for his first term of office by outlining the
commitment to modernise the public sector and declaring that 'what counts is what works'.
In other words, the old age of ideologically-driven politics was to be consigned to the dustbin
of history and a new age of modern policy-making would be driven by research evidence of
what was proven to be effective in addressing social problems and achieving the desired
outcomes. In key policy areas such as crime, education and welfare-to-work, we continually
hear the Prime Minister and other ministers talk about their commitment to finding out
'what works'.
Now, for an applied social researcher, this position raises a dilemma. It is music to the ears
of the applied researcher committed to the modernist belief in social progress informed by
reason and immersed in contracted research from government departments and agencies
specifically intended to contribute to the development and improvement of public policies.
However, the music is mixed with distant alarm bells in the ears of the academic social
scientist influenced by the work of, for example, Jürgen Habermas, John Dryzek and Frank
Fischer (Sanderson, 1998, 1999). Is this emphasis on knowledge and expertise in policy-making a form of instrumental rationality, focusing on deriving correct means to given ends
at the expense of consideration of the appropriateness of those ends? Does it signal the
devaluing of democratic debate about the ethical and moral issues raised by policy choices?
How much does 'what works' matter? How much emphasis can and, indeed, should be
placed on scientific evidence of 'what works' in decisions about policies to address social
problems? This paper seeks to address these questions by the following route. First, I will refer
briefly to the historical context of EBP and outline in a little more detail the present
Government's stance. I then focus on the role of evaluation in providing the evidence
required, asking to what extent it can tell us 'what works'. Based upon the conclusion that the
rationalist conception of EBP has some rather shaky foundations, I question just how much
'what works' really does matter in the context of a broader conception of policy-making.
the concluding part, I elaborate briefly a proposition on what does matter in making public
policy choices.
[Earlier pessimism about the policy role of research was] overtaken by a renewed optimism about achieving more direct and instrumental use of
research in policy-making processes. These were ushered in by the Prime Ministerial
declaration that 'what counts is what works' (Powell, 1999, p. 23). In this context, 'what
works' can be taken as referring essentially to types of government intervention that are
effective in addressing the problems at which they are directed and achieving their intended
outcomes and effects. The Government's preoccupation with EBP has developed in the
context of a model of modern, professional policy making that has been propounded by the
Cabinet Office:
Good quality policy-making depends on high quality information and evidence. Modern
policy-making calls for the need to improve Departments capacity to make best use of
evidence, and the need to improve the accessibility of evidence available to policy-makers
(Bullock et al., 2001, p. 25).
'Modern policy-making' lies at the heart of the 'modernising government' agenda, which
is seeking to make government more responsive and effective in achieving results (Cabinet
Office, 1999; Sanderson, 2001, 2002). The emphasis is very much on results, expressed in the
form of measurable targets in government departments' Public Service Agreements (PSAs)
with the Treasury (HM Treasury, 2000). In a strong performance management regime,
departments are accountable for achievement against their targets and considerable emphasis
is given to evaluation '. . . showing what worked well in improving public services and why,
and considering what further practical steps were needed to enhance service delivery and
improve effectiveness' (National Audit Office, 2001). This is indeed now a familiar plea
indicating the predominance of an instrumental view of evaluation in EBP.
The underpinning rationale of the Government's position on evidence-based policy-making was articulated a couple of years ago by David Blunkett, then Secretary of State for
Education and Employment, in a much-quoted lecture to the Economic and Social Research
Council (ESRC) (DfEE, 2000). He argued that '. . . rational thought is impossible without
good evidence . . . social science research is central to the development and evaluation of
policy' (DfEE, 2000, p. 24); emphasised the Government's '. . . clear commitment that we will
be guided not by dogma but by an open-minded approach to understanding what works and
why . . .' (DfEE, 2000, p. 2); and expressed his passionate belief that '. . . having ready access
to the lessons from high quality research can and must vastly improve the quality and
sensitivity of the complex and often constrained decisions we, as politicians, have to make'
(DfEE, 2000, p. 4).
In this rationalist vision, there is little sympathy for the broader 'enlightenment' function
of social research, much of which is seen as '. . . inward looking, too piecemeal . . . too
supplier-driven rather than focusing on the key issues of concern to policy makers . . .'
(DfEE, 2000, p. 8). Blunkett voiced concerns about the focus, relevance and timeliness of
research and indicated a strong desire for more research that is directly accessible, intelligible
and relevant to users in the policy community. The emphasis, therefore, is on enhancing
instrumental use. Whilst acknowledging the place for fundamental 'blue-skies' research, he
placed the major emphasis upon research with practical applications. He highlighted the
need for research that '. . . leads to a coherent picture of how society works: what are the
main forces at work and which of these can be influenced by government . . .'; the need
'. . . to be able to measure the size of the effect of A on B . . .'; and the '. . . huge potential
for quantitative analysis . . .' (DfEE, 2000, p. 22).
However, while experiments can provide a basis for causal inference in the limited
circumstances where a uniform, standardised intervention is provided for a clearly defined
target group under tightly-controlled conditions, this is based solely on the successionist
assumption that effect must follow the intervention if all other possible causes are controlled
out. They do not provide an understanding of how the effects are produced, thus creating
problems of external validity: the ability to generalise from the result to other situations.
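The limits of this successionist logic can be illustrated with a small simulation (a hypothetical sketch in Python; the scenario, names and numbers are illustrative assumptions, not drawn from the paper). A randomised comparison recovers the average effect of a uniform intervention in the trial context, but because it reveals nothing about the mechanism, the estimate transfers poorly to a context where an unobserved moderator differs:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def run_trial(n, moderator):
        """Simulate a randomised trial of a uniform, standardised intervention.

        `moderator` stands for an unobserved contextual factor on which the
        causal mechanism depends; the trial design never measures it.
        """
        treated = rng.integers(0, 2, size=n)    # random assignment
        true_effect = 2.0 * moderator           # context-dependent mechanism
        outcome = true_effect * treated + rng.normal(0.0, 1.0, size=n)
        # Successionist inference: compare mean outcomes, all other
        # possible causes having been randomised out.
        return outcome[treated == 1].mean() - outcome[treated == 0].mean()

    # Trial context: high moderator level, so the intervention appears to 'work'.
    print(f"estimated effect, trial context: {run_trial(5000, moderator=1.0):.2f}")

    # New context: same policy, different moderator level, much weaker effect.
    print(f"estimated effect, new context:   {run_trial(5000, moderator=0.1):.2f}")

The first estimate soundly answers 'did it work here?'; without a theory of how the moderator produces the effect, however, it licenses no inference about 'will it work there?', which is precisely the external validity problem just noted.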
Although there remain matters of philosophical dispute, I would line up behind advocates
of the need to use the full range of methodological approaches to provide the best hope of
capturing various facets of policy interventions and piecing together a picture of how they
produce change in particular contextual circumstances (Sanderson, 2000). This is a stance of
very modest expectations in contrast to the bullish claims of some proponents of theory-based
evaluation (TBE) made in the context of rationalist conceptions of evidence-based
policy-making. Even Carol Weiss (1997) refers to the 'aggressively rationalistic' stance of TBE, but her overall position represents
a cautious and pragmatic response seeking the middle way between the horns of the Cartesian
dilemma. It is worth quoting her at length:
In my most optimistic moments, I succumb to the notion that evaluations may be able to
pin down which links in which theories are generally supported by evidence and that
program designers can make use of such understanding in modifying current programs and
planning new ones . . . Such hopes are no doubt too sunny. Given the astronomical variety
of implementations of even one basic program model, the variety of staffs, clients,
organizational contexts, social and political environments, and funding levels, any hope for
deriving generalizable findings is romantic. Nevertheless, theory-based evaluation can add
to knowledge. Even relatively small increments of knowledge about how and why
programmes work or fail cannot help but improve program effectiveness. And that is what
program evaluation is all about (Weiss, 2000, p. 44).
However, not all reactions to this problem are so pragmatic. I think we can legitimately
question whether the use of public money in resource-intensive theory-based evaluations is
appropriate if they are producing 'relatively small increments of knowledge'. Improving the
knowledge base for policy intervention is clearly important but has to be set against other calls
on scarce public resources in terms of the value added for society. It will be interesting to see
how this issue plays out as we get more experience with TBE and as it becomes clearer just
how far it can deliver robust evidence of 'what works'.
Of course, in the extreme, postmodernists will reject this whole agenda of attempting to
enhance the role of reason in the guidance of human affairs, but many who acknowledge
the need to strive for social improvement reject such a nihilistic position (Trigg, 2001;
Oakley, 2000). Nevertheless, some critics take issue with the technical-rationalist
orientation represented by TBE. This stance is represented by Thomas Schwandt, who
regards evaluation in its dominant guise essentially as a modernist project designed '. . . to
tame the unruly social world, to bring order to our way of thinking about what does and
does not work for improving social life' (Schwandt, 2000a). In this project, rationality is a
matter of correct procedure or method in a context where '. . . policymakers seek to
manage economic and social affairs rationally in an apolitical, scientized manner such that
social policy is more or less an exercise in social technology' (Schwandt, 1997, p. 74). He
sees the problem in the domination of technical rationality and expertise, which subsumes
issues of moral-political judgement. What he finds objectionable is the belief that practice
will somehow be less ambiguous or more rational if we can only find the right ways to
generate and apply evaluation knowledge (Schwandt, 2000a). He argues that the application
of scientific method to contemporary life can lead to the 'deformation of praxis', and
maintains that social scientific knowledge (which is general and theoretical) cannot provide
the primary basis for evaluation judgements under conditions of plurality, uncertainty and
difference (Schwandt, 2000b).
While acknowledging the role to be played by such knowledge, Schwandt argues that
evaluators must recognise the need for practical knowledge or wisdom to help practitioners
understand their practice better and to make wise moral-political judgements:
While of course we would like to have at our disposal the best general, scientific
knowledge we can acquire, the corrigibility, ambiguity, and circumstantiality of everyday
evaluative judgement cannot be eliminated, replaced or refined by relying on scientific
method and its associated rationality (Schwandt, 2000b).
Central to Schwandt's perspective on evaluation is the notion of 'critical intelligence':
Critical intelligence . . . is the ability to question whether the . . . (end) . . . is worth
getting to. It requires not simply knowledge of effects, strategies, procedures and the like
but the willingness and capacity to debate the value of various ends of a practice. It requires
acknowledging and understanding the force of tradition (prejudices) in shaping the
conceptualisation of those ends, the means used to frame those ends, and the practices
employed to assess their effects, and a simultaneous effort to transform that knowledge in
the process of coming to that understanding. This is fundamentally an exercise in practical-moral reasoning (Schwandt, 1997, p. 79).
From this perspective, what matters is not so much 'what works' as 'what is appropriate' in
particular circumstances, and evaluation is not merely a technique involving robust, objective
analysis but rather more of a craft activity involving reasoned judgement of various forms of
knowledge and normative implications. This leads us, therefore, to the issue of the desirability
of basing policy decisions on research evidence: to question just how much 'what works'
does matter.
. . . the choice of policy instrument is not a technical problem that can be safely delegated
to experts. It raises institutional, social and moral issues that must be clarified through a
process of public deliberation and resolved by political means (Majone, 1989, p. 143).
In Majone's view, the notion of EBP places too much emphasis on the potential role of
causal knowledge in improving policy effectiveness and insufficient emphasis on the
normative, institutional and organisational context in which decisions and choices are made
and action is taken.
Alasdair MacIntyre (1984) also draws on Aristotle in his critique of the notion that rational
action is an essentially technical matter of applying morally neutral expertise as the means for
achieving authoritative ends. He argues that this notion neglects the intrinsic virtues (such
standards as fairness, truthfulness, trust and honesty) that are embodied in human practices
and which give such practices an inherently moral and ethical character. He defines a practice
as a cooperative human activity that has its own standards of excellence. In striving to achieve
these standards, practitioners realise internal goods, which are satisfactions intrinsic to the
activity derived from doing it well. A virtue, then, is '. . . an acquired human quality the
possession and exercise of which tends to enable us to achieve those goods which are internal
to practices . . .' (MacIntyre, 1984, p. 191). The implication of this position is that the model
of rationality underpinning evaluation should not be based upon an instrumental notion of
the effectiveness of means to given ends but rather upon a practical notion of the appropriateness
of action from a broader ethical-moral standpoint.
The philosophical basis for such a position is traceable back to Aristotle and the pre-Enlightenment acceptance that scientific reasoning and expertise were but one amongst many
legitimate bases for belief and action: a situation that Stephen Toulmin (2001, p. 29) terms
the 'Balance of Reason'. Toulmin argues that this balance was disrupted in the seventeenth
century as, under the influence of Galileo and Descartes, exact sciences susceptible to
mathematical methods, theoretical abstraction and logical deduction gained ascendancy as
apparent means of overcoming the uncertainties and ambiguities that had previously been
accepted. Thus began what John Dewey called 'the quest for certainty', which culminated in
Newtonian physics being taken as showing that the Solar System was an exemplar of a
rationally intelligible system demonstrating '. . . regularity, uniformity, and above all stability'
(Toulmin, 2001, p. 48). Toulmin argues that the hegemony of Newtonian dynamics, bolstered
by its '. . . intellectual coherence with a respectable picture of God's Material Creation . . .'
(Toulmin, 2001, p. 79), provided the social sciences with an unrealistic and, indeed, distorting
exemplar of 'Serious Science', which has blinded them to matters of political and moral
concern and practical relevance:
. . . (A) traditional reliance on Euclidian and Newtonian models of theory continues to
focus attention on 'doing your sums right' and conceals the equally important task of
making sure that you are 'doing the right sums'; in other words, doing calculations that are
directly relevant to the practical situation in question (Toulmin, 2001, p. 66).
Toulmin calls for the restoration of the 'Balance of Reason' by acknowledging the validity
and role of practical wisdom in assessing what is reasonable or appropriate in dealing with
human and social problems. Practical wisdom is a translation of Aristotle's concept of phronesis
which, in contrast to episteme (theoretical knowledge) and techne (instrumental knowledge),
involves, according to Dryzek (1990, p. 9), '. . . persuasion, reflection upon values, prudential
judgement, and free disclosure of one's ideas'. Aristotle's discussion of phronesis referred to
medical practice and helmsmanship, realms of skilled and experienced practice that rely
heavily on tacit knowledge. Such knowledge is grounded in experience but largely taken for
granted and embedded in routines that 'go without saying'; it is not easily conveyed through
written reports and papers. A closely related concept is that of metis, which translates from
Greek as 'knack', 'wit' or 'cunning'. Toulmin argues for the restoration of these forms of tacit
knowledge in a 'Balance of Reason' which recognises that '. . . all scientific knowledge is a
balance of the theoretical with the practical, the verbal with the non-verbal' (Toulmin, 2001,
p. 183).
I would argue that this position provides a more robust basis for dealing with the major
human and social problems that we face than instrumental forms of rationality that have
become embedded in Western liberal democratic political systems. A major challenge derives
from the increasing complexity of late modern society under the influence of increasingly
globalised processes of economic and technological change (Giddens, 1990). According to
Smart (1999, p. 63), 'We find ourselves abroad in a world in which social theory and analysis
is no longer able, with any credibility, to provide a warrant for political practice and ethical
decision-making'. However, the dilemma is that '. . . questions concerning political
responsibility and ethical decision-making, the difficulties of adjudicating between the
expression and pursuit of self-interest and the promotion and adequate provision of the public
domain, as well as the problems encountered in everyday social life of making a choice or
taking a stand, have if anything become analytically more significant . . .' (Smart, 1999).
Dryzek (1990) argues that dominant responses to the challenge of complexity tend to
reflect the preoccupations of instrumental rationality, emphasising more sophisticated forms
of analysis and technical aids to decision-making in order to maintain control over a
problematical environment. We have seen aspects of just such responses in the
Government's approach to EBP in the context of the performance management control strategy. From the perspective of practical reason, such responses will have limited success for
two main reasons. First, by focusing on formal scientific and technical knowledge, they
neglect the key role played in problem solving by practical wisdom and informal tacit
knowledge. Second, by conceiving of rationality in terms of means to given ends, they neglect
the ethical-moral dimension of problem solving. Indeed, these two aspects should be seen as
inter-related in that practical knowledge or wisdom is necessarily applied in practice within
a normative framework.
We can consider this further by looking briefly at the organisational context in which
professional practice occurs in relation to policy-making and public service delivery. Recent
research on organisations has emphasised the importance of social relations and informal
processes founded upon tacit knowledge (Hatch, 1997; Moingeon and Edmondson, 1996).
New institutionalism also highlights the role of the informal normative order defined in
terms of norms, routines and conventions, which are largely tacit and implicit and deeply
ingrained in organisational life (March and Olsen, 1989; Lowndes, 1997). March and Olsen
(1989) argue that the focus should therefore be on the appropriateness of action and
behaviour, defined in relation to the normative order of obligation and necessity, rather than
on the rational order of preference and calculation and consequence.
Given this conception of the organisational and institutional context in which policy-makers and practitioners encounter evidence, make judgements and take decisions, rational-decisionistic models appear highly simplistic and distorting. For example, with reference to
evidence-based practice in social work, Stephen Webb (2001) argues that professionals
working in public sector organisations employ reasoning strategies which '. . . consistently fail
to respect the canons of rationality assumed by the evidence-based approach' (Webb, 2001, p.
64). People employ cognitive heuristics that are selective in relation to evidence, and decisions
are influenced by factors such as the politics of inter-agency relations and internal
organisational interest groups and '. . . are based upon the reflexive understanding of
contestable beliefs and meanings and not determinate judgements' (Webb, 2001, p. 68).
The role of the normative context of social and political rules, conventions and structures
of power and authority in shaping professional practice and action is emphasised in John
Forester's (1993) analysis of planning practice as communicative action. Not only does such
practice draw on sources of knowledge other than explicit scientific evidence (in particular
on tacit and experiential practitioner knowledge) but it also necessarily addresses normative
considerations: '. . . questions of norms and values are . . . necessarily influential and
constitutive of the very sense of action itself' (Forester, 1993, p. 72). In the context of
community care, Janet Lewis (2001) has recently highlighted the danger, in evidence-based
practice, of excluding both the practical wisdom of the experienced practitioner and the
experience of service users as forms of knowledge that do not constitute valid evidence.
However, the problem here is not limited to the restriction of the cognitive basis of practice.
The exclusion of these perspectives also excludes the value stances that go with them,
resulting in a de facto privileging of the normative commitments of academics, managers and
policy makers (cf. Beresford, 2001).
The argument can be applied to professionals working in other public service contexts.
Scientific evidence tends to be at a relatively high level of generality whereas professionals are
faced with decisions about dealing with particular problems in particular circumstances and
institutional contexts. Research evidence is notoriously slippery even in terms of informing
general policy guidelines to deal with such problems. When it comes to the specifics of
practice, it recedes even further into the background in decisions on appropriate action. To
take an example from the educational field, a school's policy for dealing with bullying should
certainly be informed by evidence of what is generally effective, but responses to bullying
incidents will be dominated by practice wisdom, cautiously teasing out the most appropriate
course of action in the specific circumstances in a context of informal rules, heuristics, norms
and values. The question for teachers is not simply 'what is effective?' but, more broadly,
'what is appropriate for these children in these circumstances?'
[What matters, then, is our capacity for making] reasonable decisions and ensuring that we take action that is appropriate to situations that
are both morally and factually ambiguous (cf. Harmon, 1995). The practical rationality of
this process requires inclusive, open and free debate, ensuring that all those with a stake in
decisions are able to bring to bear their knowledge (in whatever form) and their normative
commitments.
If we accept this position, and acknowledge the need to nurture the application of practical
wisdom in our efforts to improve the world, then we need to broaden the focus of evaluation
beyond the technical concerns of measuring effects, identifying causes and assessing 'what
works'. It can then help in the practical task of identifying what is appropriate or reasonable. To do
this, it must be acknowledged that the ethical and moral implications of policies and the values
and goods (and bads) that they promote are amenable to rational consideration and debate
(cf. Julnes et al., 1998). Broadening the focus of evaluation in this way also involves broadening
its methodologies beyond analytical techniques to include methods and accompanying
institutional frameworks to promote full, free and open normative debate among all those
with a stake in the policies concerned, including service users and citizens.
In this way, evaluation can strengthen the basis for making wise policy choices with
profound ethical and moral implications. It can strengthen our capacity to answer what
Zagorin (1998, p. 224) calls Tolstoy's anguished question: 'what shall we do and how shall
we live?' It is this question that tasked John Dewey, who was passionately committed to the
application of intelligence to the solution of human and social problems and transforming the
world for the better, advocating experimentation and an open-minded will to learn. His is
the appropriate final word in advocating . . .
. . . the necessity of a deliberate control of policies by the method of intelligence, an
intelligence which is not the faculty of intellect honored in text-books and neglected
elsewhere, but which is the sum-total of impulses, habits, emotions, records, and
discoveries which forecast what is desirable and undesirable in future possibilities, and
which contrive ingeniously on behalf of an imagined good (Dewey, 1993, p. 9).
REFERENCES
BERESFORD, P. (2001). Evidence-based care: developing the discussion, Managing Community Care,
9, 3-6.
BRONK, R. (1998). Progress and the Invisible Hand: The Philosophy and Economics of Human Advance.
London: Warner Books.
BULLOCK, H., MOUNTFORD, J. and STANLEY, R. (2001). Better Policy Making. Centre for
Management and Policy Making, London: Cabinet Office.
BULMER, M. (Ed.) (1987). Social Science Research and Government: Comparative Essays on Britain and the
United States. Cambridge: Cambridge University Press.
CABINET OFFICE (1999). Modernising Government. Cm. 4310. London: The Stationery Office.
CHEN, H-T. (1990). Theory-Driven Evaluations. Newbury Park: Sage Publications.
CONNELL, J. P., KUBISCH, A. C., SCHORR, L. B. and WEISS, C. H. (Eds) (1995). New Approaches
to Evaluating Community Initiatives, Volume 1: Concepts, Methods and Contexts. Washington, DC: Aspen
Institute.
COOK, T. D. (2000). The False Choice Between Theory-Based Evaluation and Experimentalism. In:
ROGERS, P. J., PETROSINO, A. J., HEUBNER, T. A. and HASCI, T. A. (Eds) Program Theory
Evaluation: Practice, Promise, and Problems, New Directions in Evaluation, No. 87, 27-34. San Francisco:
Jossey Bass.
DAVIES, H. T. O., NUTLEY, S. M. and SMITH, P. C. (1999). Editorial: what works? The role of
evidence in public sector policy and practice, Public Money and Management, 19, 3-5.
DAVIES, H. T. O., NUTLEY, S. M. and SMITH, P. C. (Eds) (2000). What Works? Evidence-Based Policy
and Practice in Public Services. Bristol: Policy Press.
DEWEY, J. (1957). Reconstruction in Philosophy. Enlarged Edition. Boston, MA: Beacon Press.
DEWEY, J. (1993). The Need for a Recovery of Philosophy. In: MORRIS, D. and SHAPIRO, I.
(Eds) John Dewey: The Political Writings. Indianapolis, IN: Hackett Publishing Co.
DfEE. (2000). Influence or Irrelevance: Can Social Science Improve Government? Secretary of State's ESRC
Lecture Speech, 2 February. London: Department for Education and Employment.
DRYZEK, J. S. (1990). Discursive Democracy: Politics, Policy and Political Science. Cambridge: Cambridge
University Press.
DUPRE, J. (2001). Human Nature and the Limits of Science. Oxford: Clarendon Press.
FISCHER, F. (1990). Technocracy and the Politics of Expertise. Newbury Park: Sage Publications.
FORESTER, J. (1993). Critical Theory, Public Policy and Planning Practice: Toward Critical Pragmatism. New
York: State University of New York Press.
FULBRIGHT-ANDERSON, K., KUBISCH, A. C. and CONNELL, J. P. (Eds) (1998). New
Approaches to Evaluating Community Initiatives, Volume 2: Theory, Measurement and Analysis. Washington,
DC: Aspen Institute.
GIDDENS, A. (1990). The Consequences of Modernity. Cambridge: Polity Press.
GRANGER, R. C. (1998). Establishing Causality in Evaluations of Comprehensive Community
Initiatives. In: FULBRIGHT-ANDERSON, K., KUBISCH, A. C. and CONNELL, J. P. (Eds)
New Approaches to Evaluating Community Initiatives, Volume 2: Theory, Measurement and Analysis.
Washington DC: Aspen Institute.
HATCH, M. J. (1997). Organization Theory: Modern, Symbolic, and Postmodern Perspectives. Oxford:
Oxford University Press.
HARMON, M. M. (1995). Responsibility as Paradox: A Critique of Rational Discourse on Government.
Thousand Oaks, CA: Sage.
HM TREASURY (2000). 2000 Spending Review: Public Service Agreements. Cm. 4808. London: HM
Treasury.
JUDGE, K. and BAULD, L. (2001). Strong theory, flexible methods: evaluating complex community-based initiatives, Critical Public Health, 11, 19-38.
JULNES, G., MARK, M. M. and HENRY, G. T. (1998). Promoting realism in evaluation: realistic
evaluation and the broader context, Evaluation, 4, 483-504.
LEWIS, J. (2001). What works in community care?, Managing Community Care, 9, 3-6.
LOWNDES, V. (1997). Change in public service management: new institutions and new managerial
regimes, Local Government Studies, 23, 42-66.
MACINTYRE, A. (1984). After Virtue: A Study in Moral Theory. Notre Dame, IN: University of Notre
Dame Press.
MAJONE, G. (1989). Evidence, Argument and Persuasion in the Policy Process. New Haven, CT: Yale
University Press.
MARCH, J. G. and OLSEN, J. P. (1989). Rediscovering Institutions: The Organizational Basis of Politics.
New York: The Free Press.
MOINGEON, B. and EDMONDSON, A. (Eds) (1996). Organisational Learning and Competitive
Advantage. London: Sage.
NATIONAL AUDIT OFFICE. (2001). Modern Policy-Making: Ensuring Policies Deliver Value for Money.
London: NAO.
OAKLEY, A. (2000). Experiments in Knowing: Gender and Method in the Social Sciences. Cambridge: Polity
Press.
O'SULLIVAN, N. (1993). Political integration, the limited state, and the philosophy of postmodernism, Political Studies, XLI, 21-42.
PAWSON, R. and TILLEY, N. (1997). Realistic Evaluation. London: Sage Publications.
POWELL, M. (Ed.) (1999). New Labour, New Welfare State? The Third Way in British Social Policy. Bristol:
The Policy Press.
ROGERS, P. J., PETROSINO, A., HEUBNER, T. A. and HASCI, T. A. (Eds) (2000). Program Theory
Evaluation: Practice, Promise, and Problems, New Directions in Evaluation, No. 87. San Francisco, CA:
Jossey Bass.
SANDERSON, I. (1998). Beyond performance measurement? Assessing value in local government,
Local Government Studies, 24, 1-25.
SANDERSON, I. (1999). Participation and democratic renewal: from instrumental to communicative rationality?, Policy and Politics, 27, 325-341.
SANDERSON, I. (2000). Evaluation in complex policy systems, Evaluation, 6, 433-454.
SANDERSON, I. (2001). Performance management, evaluation and learning in modern local
government, Public Administration, 79, 297-313.
SANDERSON, I. (2002). Evaluation, policy learning and evidence-based policy making, Public
Administration, 80, 1-22.
SCHWANDT, T. A. (1997). Evaluation as practical hermeneutics, Evaluation, 3, 69-83.
SCHWANDT, T. A. (2000a). Meta-analysis and everyday life: the good, the bad and the ugly, American
Journal of Evaluation, 21, 213.
SCHWANDT, T. A. (2000b). Further diagnostic thoughts on what ails evaluation practice, American
Journal of Evaluation, 21, 225.
SCRIVEN, M. (1998). Minimalist theory: the least theory that practice requires, American Journal of
Evaluation, 19, 57.
SMART, B. (1999). Facing Modernity: Ambivalence, Reflexivity and Morality. London: Sage Publications.
TOULMIN, S. (2001). Return to Reason. Cambridge, MA: Harvard University Press.
TRIGG, R. (2001). Understanding Social Science: A Philosophical Introduction to the Social Sciences, 2nd Edn.
Oxford: Blackwell Publishers.
WAGENAAR, H. C. (1982). A Cloud of Unknowing: Social Science Research in a Political Context.
In: KALLEN, D. B. P., KOSSE, G. B., WAGENAAR, H. C., KLOPROGGE, J. J. J. and
VORBECK, M. (Eds) Social Science Research and Public Policy-Making. Foundation for Educational
Research in the Netherlands. Windsor: NFER-Nelson.
WALKER, R. (2001). Great expectations: can social science evaluate New Labour's policies?,
Evaluation, 7, 305-330.
WEBB, S. A. (2001). Some considerations on the validity of evidence-based practice in social work,
British Journal of Social Work, 31, 57-79.
WEISS, C. H. (1982). Policy Research in the Context of Diffuse Decision-Making. In: KALLEN, D.
B. P., KOSSE, G. B., WAGENAAR, H. C., KLOPROGGE, J. J. J. and VORBECK, M. (Eds) Social
Science Research and Public Policy-Making. Foundation for Educational Research in the Netherlands.
Windsor: NFER-Nelson.
WEISS, C. H. (1995). Nothing as Practical as Good Theory: Exploring Theory-Based Evaluation for
Comprehensive Community Initiatives for Children and Families. In: CONNELL, J. P.,
KUBISCH, A. C., SCHORR, L. B. and WEISS, C. H. (Eds) New Approaches to Evaluating
Community Initiatives, Volume 1: Concepts, Methods and Contexts. Washington, DC: Aspen Institute.
WEISS, C. H. (1997). How can theory-based evaluation make greater headway?, Evaluation Review, 21,
501-525.
WEISS, C. H. (2000). Which Links in Which Theories Shall We Evaluate?, In: ROGERS, P. J.,
PETROSINO, A., HEUBNER, T. A. and HASCI, T. A. (Eds) Program Theory Evaluation: Practice,
Promise, and Problems, New Directions in Evaluation, No. 87, pp. 35-45. San Francisco: Jossey Bass.
ZAGORIN, P. (1998). Francis Bacon. Princeton, NJ: Princeton University Press.
CORRESPONDENCE
Professor Ian Sanderson, Policy Research Institute, Leeds Metropolitan University, Bronte
Hall, Beckett Park Campus, Leeds LS6 3QS, UK. E-mail: [email protected]