Reviews of the Third Edition

“The book is thorough and comprehensive in its coverage of principles and practices of program evaluation and
performance measurement. The authors are striving to bridge two worlds: contemporary public governance
contexts and an emerging professional role for evaluators, one that is shaped by professional judgement informed
by ethical/moral principles, cultural understandings, and reflection. With this edition the authors successfully
open up the conversation about possible interconnections between conventional evaluation in new public
management governance contexts and evaluation grounded in the discourse of moral-political purpose.”

—J. Bradley Cousins

University of Ottawa

“The multiple references to body-worn-camera evaluation research in this textbook are balanced and interesting,
and a fine addition to the Third Edition of this book. This careful application of internal and external validity for
body-worn cameras will be illustrative for students and researchers alike. The review of research methods is specific
yet broad enough to appeal to the audience of this book, and the various examples are contemporary and topical
to evaluation research.”

—Barak Ariel

University of Cambridge, UK, and Alex Sutherland, RAND Europe, Cambridge, UK

“This book provides a good balance between the topics of measurement and program evaluation, coupled with
ample real-world application examples. The discussion questions and cases are useful in class and for homework
assignments.”

—Mariya Yukhymenko

California State University, Fresno

“Finally, a text that successfully brings together quantitative and qualitative methods for program evaluation.”

—Kerry Freedman

Northern Illinois University

“The Third Edition of Program Evaluation and Performance Measurement: An Introduction to Practice remains an
excellent source book for introductory courses to program evaluation, and a very useful reference guide for
seasoned evaluators. In addition to covering in an in-depth and interesting manner the core areas of program
evaluation, it clearly presents the increasingly complementary relationship between program evaluation and
performance measurement. Moreover, the three chapters devoted to performance measurement are the most
detailed and knowledgeable treatment of the area that I have come across in a textbook. I expect that the updated
book will prove to be a popular choice for instructors training program evaluators to work in the public and not-
for-profit sectors.”

—Tim Aubry

University of Ottawa

“This text guides students through both the philosophical and practical origins of performance measurement and
program evaluation, equipping them with a profound understanding of the abuses, nuances, mysteries, and
successes [of those topics]. Ultimately, the book helps students become the professionals needed to advance not
just the discipline but also the practice of government.”

—Erik DeVries

Treasury Board of Canada Secretariat

Program Evaluation and Performance Measurement

Third Edition

This book is dedicated to our teachers, people who have made our love of learning a life’s work. From Jim McDavid:
Elinor Ostrom, Tom Pocklington, Jim Reynolds, and Bruce Wilkinson. From Irene Huse: David Good, Cosmo Howard,
Evert Lindquist, Thea Vakil. From Laura Hawthorn: Karen Dubinsky, John Langford, Linda Matthews.

Sara Miller McCune founded SAGE Publishing in 1965 to support the dissemination of usable
knowledge and educate a global community. SAGE publishes more than 1000 journals and over 800
new books each year, spanning a wide range of subject areas. Our growing selection of library products
includes archives, data, case studies and video. SAGE remains majority owned by our founder and after
her lifetime will become owned by a charitable trust that secures the company’s continued
independence.

Los Angeles | London | New Delhi | Singapore | Washington DC | Melbourne

Program Evaluation and Performance Measurement

An Introduction to Practice

Third Edition

James C. McDavid
University of Victoria, Canada

Irene Huse
University of Victoria, Canada

Laura R. L. Hawthorn

Copyright © 2019 by SAGE Publications, Inc.

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information storage and retrieval system, without
permission in writing from the publisher.

For Information:

SAGE Publications, Inc.

2455 Teller Road

Thousand Oaks, California 91320

E-mail: [email protected]

SAGE Publications Ltd.

1 Oliver’s Yard

55 City Road

London, EC1Y 1SP

United Kingdom

SAGE Publications India Pvt. Ltd.

B 1/I 1 Mohan Cooperative Industrial Area

Mathura Road, New Delhi 110 044

India

SAGE Publications Asia-Pacific Pte. Ltd.

3 Church Street

#10–04 Samsung Hub

Singapore 049483

Printed in the United States of America.

This book is printed on acid-free paper.

18 19 20 21 22 10 9 8 7 6 5 4 3 2 1
Names: McDavid, James C., author. | Huse, Irene, author. | Hawthorn, Laura R. L.

Title: Program evaluation and performance measurement : an introduction to practice / James C. McDavid, University of Victoria, Canada, Irene
Huse, University of Victoria, Canada, Laura R. L. Hawthorn.

Description: Third Edition. | Thousand Oaks : SAGE Publications, Inc., Corwin, CQ Press, [2019] | Revised edition of the authors' Program
evaluation and performance measurement, c2013. | Includes bibliographical references and index.

Identifiers: LCCN 2018032246 | ISBN 9781506337067 (pbk.)

Subjects: LCSH: Organizational effectiveness–Measurement. | Performance–Measurement. | Project management–Evaluation.

Classification: LCC HD58.9 .M42 2019 | DDC 658.4/013–dc23 LC record available at https://lccn.loc.gov/2018032246

Acquisitions Editor: Helen Salmon

Editorial Assistant: Megan O’Heffernan

Content Development Editor: Chelsea Neve

Production Editor: Andrew Olson

Copy Editors: Jared Leighton and Kimberly Cody

Typesetter: Integra

Proofreader: Laura Webb

Indexer: Sheila Bodell

Cover Designer: Ginkhan Siam

Marketing Manager: Susannah Goldes

Contents
Preface
Acknowledgments
About the Authors
Chapter 1 • Key Concepts and Issues in Program Evaluation and Performance Measurement
Chapter 2 • Understanding and Applying Program Logic Models
Chapter 3 • Research Designs for Program Evaluations
Chapter 4 • Measurement for Program Evaluation and Performance Monitoring
Chapter 5 • Applying Qualitative Evaluation Methods
Chapter 6 • Needs Assessments for Program Development and Adjustment
Chapter 7 • Concepts and Issues in Economic Evaluation
Chapter 8 • Performance Measurement as an Approach to Evaluation
Chapter 9 • Design and Implementation of Performance Measurement Systems
Chapter 10 • Using Performance Measurement for Accountability and Performance Improvement
Chapter 11 • Program Evaluation and Program Management
Chapter 12 • The Nature and Practice of Professional Judgment in Evaluation
Glossary
Index

Preface

The third edition of Program Evaluation and Performance Measurement offers practitioners, students, and other
users of this textbook a contemporary introduction to the theory and practice of program evaluation and
performance measurement for public and nonprofit organizations. Woven into the chapters is the performance
management cycle in organizations, which includes: strategic planning and resource allocation; program and
policy design; implementation and management; and the assessment and reporting of results.

The third edition has been revised to highlight and integrate the current economic, political, and socio-
demographic context within which evaluators are expected to work. We feature more evaluation exemplars,
making it possible to fully explore the implications of the evaluations that have been done. Our main exemplar,
chosen in part because it is an active and dynamic public policy issue, is the evaluation of body-worn cameras
(BWCs) which have been widely deployed in police departments in the United States and internationally. Since
2014, as police departments have deployed BWCs, a growing number of evaluations, some experimental, some
quasi-experimental, and some non-experimental, have addressed questions around the effectiveness of BWCs in
reducing police use of force, citizen complaints and, more broadly, the perceived fairness of the criminal justice
system.

We introduce BWC evaluations in Chapter 1 and follow those studies through Chapter 2 (program logics),
Chapter 3 (research designs), and Chapter 4 (measurement) as well as including examples in other chapters.

We have revised and integrated the chapters that focus on performance measurement (Chapters 8, 9 and 10) to
feature research and practice that addresses the apparent paradox in performance measurement systems: if they are
designed to improve accountability, first and foremost, then over the longer term they often do not further
improve program or organizational performance. Based on a growing body of evidence and scholarship, we argue
for a nuanced approach to performance measurement where managers have incentives to use performance results
to improve their programs, while operating within the enduring requirements to demonstrate accountability
through external performance reporting.

In most chapters, we have featured textboxes that introduce topics or themes in a short, focused way. For example,
we have included a textbox in Chapter 3 that introduces behavioral economics and nudging as approaches to
designing, implementing, and evaluating program and policy changes. As a second example, in Chapter 4, data
analytics is introduced as an emerging field that will affect program evaluation and performance measurement in
the future.

We have updated discussions of important evaluation theory-related issues but in doing so have introduced those
topics with an eye on what is practical and accessible for practitioners. For example, we discuss realist evaluation in
Chapter 2 and connect it to the BWC studies that have been done, to make the point that although realist
evaluation offers us something unique, it is a demanding and resource-intensive approach, if it is to be done well.

Since the second edition was completed in 2012, we have seen more governments and non-profit organizations
face chronic fiscal shortages. One result of the 2008–2009 Great Recession is a shift in the expectations for
governments – doing more with less, or even less with less, now seems to be more the norm. In this third edition,
where appropriate, we have mentioned how this fiscal environment affects the roles and relationships among
evaluators, managers, and other stakeholders. For example, in Chapter 6 (needs assessments), we have included
discussion and examples that describe needs assessment settings where an important question is how to ration
existing funding among competing needs, including cutting lower priority programs. This contrasts with the more
usual focus on the need for new programs (with new funding).

In Chapter 1, we introduce professional judgment as a key feature of the work that evaluators do and come back
to this theme at different points in the textbook. Chapter 12, where we discuss professional judgment in some
depth, has been revised to reflect trends in the field, including evaluation ethics and the growing importance of
professionalization of evaluation as a discipline. Our stance in this textbook is that an understanding of
methodology, including how evaluators approach cause-and-effect relationships in their work, is central to being
competent to evaluate the effectiveness of programs and policies. But being a competent methodologist is not
enough to be a competent evaluator. In Chapter 12 we expand upon practical wisdom as an ethical foundation for
evaluation practice. In our view, evaluation practice has both methodological and moral dimensions to it. We have
updated the summaries and the discussion questions at the end of the chapters.

The third edition of Program Evaluation and Performance Measurement will be useful for senior undergraduate or
introductory graduate courses in program evaluation, performance measurement, and performance management.
The book does not assume a thorough understanding of research methods and design, instead guiding the reader
through a systematic introduction to these topics. Nor does the book assume a working knowledge of statistics,
although there are some sections that do outline the roles that statistics play in evaluations. These features make
the book well suited for students and practitioners in fields such as public administration and management,
sociology, criminology, or social work where research methods may not be a central focus.

A password-protected instructor teaching site, available at www.sagepub.com/mcdavid, features author-provided
resources that have been designed to help instructors plan and teach their courses. These resources include a test
bank, PowerPoint slides, SAGE journal articles, case studies, and all tables and figures from the book. An open-access
student study site is also available at www.sagepub.com/mcdavid. This site features access to recent,
relevant full-text SAGE journal articles.

Acknowledgments

The third edition of Program Evaluation and Performance Measurement was completed substantially because of the
encouragement and patience of Helen Salmon, our main contact at Sage Publications. As a Senior Acquisitions
Editor, Helen has been able to suggest ways of updating our textbook that have sharpened its focus and improved
its contents. We are grateful for her support and her willingness to countenance a year’s delay in completing the
revisions of our book.

Once we started working on the revisions, we realized how much the evaluation field had changed since 2012,
when we completed the second edition. Completing the third edition a year later than planned is substantially
due to our wanting to include new ideas, approaches, and exemplars, where appropriate.

We are grateful for the comments and informal suggestions made by colleagues, instructors, students, and
consultants who have used our textbook in different ways in the past six years. Their suggestions to simplify and in
some cases reorganize the structure of chapters, include more examples, and restate some of the conceptual and
technical parts of the book have improved it in ways that we hope will appeal to users of the third edition.

The School of Public Administration at the University of Victoria provided us with unstinting support as we
completed the third edition of our textbook. For Jim McDavid, being able to arrange several consecutive
semesters with no teaching obligations made it possible to devote all of his time to this project. For Irene
Huse, being able to count on timely technical support for various computer-related needs, and an office for the
textbook-related activities, was critical to being able to complete our revisions.

Research results from grant support provided by the Social Sciences and Humanities Research Council in Canada
continue to be featured in Chapter 10 of our book. What is particularly encouraging is how that research on
legislator uses of public performance reports has been extended and broadened by colleagues in Canada, the
United States, and Europe. In Chapter 10, we have connected our work to this emerging performance
measurement and performance management movement.

The authors and SAGE would like to thank the following reviewers for their feedback:

James Caillier, University of Alabama

Kerry Freedman, Northern Illinois University

Gloria Langat, University of Southampton

Mariya Yukhymenko, California State University Fresno

About the Authors

James C. McDavid

(PhD, Indiana, 1975) is a professor of Public Administration at the University of Victoria in British
Columbia, Canada. He is a specialist in program evaluation, performance measurement, and organizational
performance management. He has conducted extensive research and evaluations focusing on federal, state,
provincial, and local governments in the United States and Canada. His published research has appeared in
the American Journal of Evaluation, the Canadian Journal of Program Evaluation and New Directions for
Evaluation. He is currently a member of the editorial board of the Canadian Journal of Program Evaluation
and New Directions for Evaluation.

In 1993, Dr. McDavid won the prestigious University of Victoria Alumni Association Teaching Award. In
1996, he won the J. E. Hodgetts Award for the best English-language article published in Canadian Public
Administration. From 1990 to 1996, he was Dean of the Faculty of Human and Social Development at the
University of Victoria. In 2004, he was named a Distinguished University Professor at the University of
Victoria and was also Acting Director of the School of Public Administration during that year. He teaches
online courses in the School of Public Administration Graduate Certificate and Diploma in Evaluation
Program.
Irene Huse

holds a Master of Public Administration and is a PhD candidate in the School of Public Administration at
the University of Victoria. She was a recipient of a three-year Joseph-Armand Bombardier Canada Graduate
Scholarship from the Social Sciences and Humanities Research Council. She has worked as an evaluator and
researcher at the University of Northern British Columbia, the University of Victoria, and in the private
sector. She has also worked as a senior policy analyst in several government ministries in British Columbia.
Her published research has appeared in the American Journal of Evaluation, the Canadian Journal of
Program Evaluation, and Canadian Public Administration.
Laura R. L. Hawthorn

holds a Master of Arts degree in Canadian history from Queen’s University in Ontario, Canada and a
Master of Public Administration degree from the University of Victoria. After completing her MPA, she
worked as a manager for several years in the British Columbia public service and in the nonprofit sector
before leaving to raise a family. She is currently living in Vancouver, running a nonprofit organization and
being mom to her two small boys.

1 Key Concepts and Issues in Program Evaluation and
Performance Measurement

Introduction
Integrating Program Evaluation and Performance Measurement
Connecting Evaluation to the Performance Management System
The Performance Management Cycle
Policies and Programs
Key Concepts in Program Evaluation
Causality in Program Evaluations
Formative and Summative Evaluations
Ex Ante and Ex Post Evaluations
The Importance of Professional Judgment in Evaluations
Example: Evaluating a Police Body-Worn Camera Program in Rialto, California
The Context: Growing Concerns With Police Use of Force and Community Relationship
Implementing and Evaluating the Effects of Body-Worn Cameras in the Rialto Police Department
Program Success Versus Understanding the Cause-and-Effect Linkages: The Challenge of Unpacking the Body-Worn Police Cameras “Black Box”
Connecting Body-Worn Camera Evaluations to This Book
Ten Key Evaluation Questions
The Steps in Conducting a Program Evaluation
General Steps in Conducting a Program Evaluation
Assessing the Feasibility of the Evaluation
Doing the Evaluation
Making Changes Based on the Evaluation
Summary
Discussion Questions
References

Introduction
Our main focus in this textbook is on understanding how to evaluate the effectiveness of public-sector policies
and programs. Evaluation is widely used in public, nonprofit, and private-sector organizations to generate
information for policy and program planning, design, implementation, assessment of results,
improvement/learning, accountability, and public communications. It can be viewed as a structured process that
creates and synthesizes information intended to reduce the level of uncertainty for decision makers and
stakeholders about a given program or policy. It is usually intended to answer questions or test hypotheses, the
results of which are then incorporated into the information bases used by those who have a stake in the program
or policy. Evaluations can also uncover unintended effects of programs and policies, which can affect overall
assessments of programs or policies. On a perhaps more subtle level, the process of measuring performance or
conducting program evaluations—that is, aside from the reports and other evaluation products—can also have
impacts on the individuals and organizations involved, including attentive stakeholders and citizens.

The primary goal of this textbook is to provide a solid methodological foundation to evaluative efforts, so that
both the process and the information created offer defensible contributions to political and managerial decision-
making. Program evaluation is a rich and varied combination of theory and practice. This book will introduce a
broad range of evaluation approaches and practices, reflecting the richness of the field. As you read this textbook,
you will notice words and phrases in bold. These bolded terms are defined in a glossary at the end of the book.
These terms are intended to be your reference guide as you learn or review the language of evaluation. Because this
chapter is introductory, it is also appropriate to define a number of terms in the text that will help you get some
sense of the “lay of the land” in the field of evaluation.

In the rest of this chapter, we do the following:

Describe how program evaluation and performance measurement are complementary approaches to creating
information for decision makers and stakeholders in public and nonprofit organizations.
Introduce the concept of the performance management cycle, and show how program evaluation and
performance measurement conceptually fit the performance management cycle.
Introduce key concepts and principles for program evaluations.
Illustrate a program evaluation with a case study.
Introduce 10 general questions that can underpin evaluation projects.
Summarize 10 key steps in assessing the feasibility of conducting a program evaluation.
Finally, present an overview of five key steps in doing and reporting an evaluation.

Integrating Program Evaluation and Performance Measurement
The richness of the evaluation field is reflected in the diversity of its methods. At one end of the spectrum,
students and practitioners of evaluation will encounter randomized experiments (randomized controlled trials,
or RCTs) in which people (or other units of analysis) have been randomly assigned to a group that receives a
program that is being evaluated, and others have been randomly assigned to a control group that does not get the
program. Comparisons of the two groups are usually intended to estimate the incremental effects of programs.
Essentially, that means determining the difference between what occurred as a result of a program and what would
have occurred if the program had not been implemented. Although RCTs are not the most common method used
in the practice of program evaluation, and there is controversy around making them the benchmark or gold
standard for sound evaluations, they are still often considered exemplars of “good” evaluations (Cook, Scriven,
Coryn, & Evergreen, 2010; Donaldson, Christie, & Melvin, 2014).
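The logic of estimating an incremental effect with random assignment can be sketched with simulated data. The sketch below is purely illustrative, not from the text: the population size, baseline scores, and the "true" program effect of 5 points are all hypothetical numbers chosen for the simulation.

```python
import random

random.seed(42)

# Hypothetical outcome model: each person has a baseline outcome score,
# and the program adds a true incremental effect of 5 points for recipients.
TRUE_EFFECT = 5.0

population = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: each person has an equal chance of treatment or control.
treatment, control = [], []
for baseline in population:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)  # receives the program
    else:
        control.append(baseline)                  # does not receive the program


def mean(xs):
    return sum(xs) / len(xs)


# Because assignment is random, the control group approximates the
# counterfactual: what the treated group's outcomes would have been
# without the program. The difference in means estimates the
# incremental effect.
estimated_effect = mean(treatment) - mean(control)
print(round(estimated_effect, 1))  # close to the true effect of 5.0
```

With large randomized groups, the difference in means recovers the built-in effect to within sampling error, which is exactly the counterfactual comparison the paragraph above describes.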

Frequently, program evaluators do not have the resources, time, or control over program design or
implementation situations to conduct experiments. In many cases, an experimental design may not be the most
appropriate for the evaluation at hand. A typical scenario is to be asked to evaluate a policy or program that has
already been implemented, with no real ways to create control groups and usually no baseline (pre-program) data
to construct before–after comparisons. Often, measurement of program outcomes is challenging—there may be
no data readily available, a short timeframe for the need for the information, and/or scarce resources available to
collect information.

Alternatively, data may exist (program records would be a typical situation), but closer scrutiny of these data
indicates that they measure program or client characteristics that only partly overlap with the key questions that
need to be addressed in the evaluation. We will learn about quasi-experimental designs and other quantitative and
qualitative evaluation methods throughout the book.

So how does performance measurement fit into the picture? Evaluation as a field has been transformed in the past
40 years by the broad-based movement in public and nonprofit organizations to construct and implement systems
that measure program and organizational performance. Advances in technology have made it easier and less
expensive to create, track, and share performance measurement data. Performance measures can, in some cases,
productively be incorporated into evaluations. Often, governments or boards of directors have embraced the idea
that increased accountability is a good thing and have mandated performance measurement to that end.
Measuring performance is often accompanied by requirements to publicly report performance results for
programs.

The use of performance measures in evaluative work is, however, seldom straightforward. For example, recent
analysis has shown that in the search for government efficiencies, particularly in times of fiscal restraint,
governments may cut back on evaluation capacity, with expectations that performance measurement systems can
substantially cover the performance management information needs (de Lancer Julnes & Steccolini, 2015). This
trend to lean on performance measurement, particularly in high-stakes accountability situations, is increasingly
seen as being detrimental to learning, policy and program effectiveness, and staff morale (see, for example,
Arnaboldi et al., 2015; Coen & Roberts, 2012; Greiling & Halachmi, 2013; Mahler & Posner, 2014). We will
explore this conundrum in more depth later in the textbook.

This textbook will show how sound performance measurement, regardless of who does it, depends on an
understanding of program evaluation principles and practices. Core skills that evaluators learn can be applied to
performance measurement. Managers and others who are involved in developing and implementing performance
measurement systems for programs or organizations typically encounter problems similar to those encountered by
program evaluators. A scarcity of resources often means that key program outcomes that require specific data
collection efforts are either not measured or are measured with data that may or may not be intended for that
purpose. Questions of the validity of performance measures are important, as are the limitations to the uses of
performance data.

We see performance measurement approaches as complementary to program evaluation, and not as a replacement
for evaluations. The approach of this textbook is that evaluation includes both program evaluation and
performance measurement, and we build a foundation in the early chapters of the textbook that shows how
program evaluation can inform measuring the performance of programs and policies. Consequently, in this
textbook, we integrate performance measurement into evaluation by grounding it in the same core tools and
methods that are essential to assess program processes and effectiveness. We see an important need to balance these
two approaches, and our approach in this textbook is to show how they can be combined in ways that make them
complementary, but without overstretching their real capabilities. Thus, program logic models (Chapter 2),
research designs (Chapter 3), and measurement (Chapter 4) are important for both program evaluation and
performance measurement. After laying the foundations for program evaluation, we turn to performance
measurement as an outgrowth of our understanding of program evaluation (Chapters 8, 9, and 10). Chapter 6 on
needs assessments builds on topics covered in the earlier chapters, including Chapter 1. Needs assessments can
occur in several phases of the performance management cycle: strategic planning, designing effective programs,
implementation, and measuring and reporting performance. As well, cost–benefit analysis and cost–effectiveness
analysis (Chapter 7) build on topics in Chapter 3 (research designs) and can be conducted as part of strategic
planning, or as we design policies or programs, or as we evaluate their outcomes (the assessment and reporting
phase).

Below, we introduce the relationship between organizational management and evaluation activities. We expand on
this issue in Chapter 11, where we examine how evaluation theory and practice are joined with management in
public and nonprofit organizations. Chapter 12 (the nature and practice of professional judgment) emphasizes
that the roles of managers and evaluators depend on developing and exercising sound professional judgment.

Connecting Evaluation to the Performance Management System
Information from program evaluations and performance measurement systems is expected to play a role in the way
managers operate their programs (Hunter & Nielsen, 2013; Newcomer & Brass, 2016). Performance
management, sometimes called results-based management, emerged as part of the broader new public
management (NPM) movement in public administration.
NPM has had significant impacts on governments worldwide since it came onto the scene in the early 1990s. It is
premised on principles that emphasize the importance of stating clear program and policy objectives, measuring
and reporting program and policy outcomes, and holding managers, executives, and politicians accountable for
achieving expected results (Hood, 1991; Osborne & Gaebler, 1992).

While the drive for NPM—particularly the emphasis on explicitly linking funding to targeted outcomes—has
abated somewhat as paradoxes of the approach have come to light (Pollitt & Bouckaert, 2011), particularly in
light of the global financial crisis (Coen & Roberts, 2012; OECD, 2015), the importance of evidence of actual
accomplishments is still considered central to performance management. Performance management systems will
continue to evolve: evidence-based and evidence-informed decision making depends heavily on both evaluation
and performance measurement, and both will adapt as the political and fiscal context of public
administration changes. Recently, there has been discussion of a transition from NPM to a more centralized but networked
New Public Governance (Arnaboldi et al., 2015; Osborne, 2010; Pollitt & Bouckaert, 2011), Digital-Era
Governance (Dunleavy, Margetts, Bastow, & Tinkler, 2006; Lindquist & Huse, 2017), Public Value Governance
(Bryson, Crosby, & Bloomberg, 2014), and potentially a more agile governance (OECD, 2015; Room, 2011). In
any case, evidence-based or evidence-informed policy making will remain an important feature of public
administration and public policy.

Increasingly, there is an expectation that managers will be able to participate in evaluating their own programs and
also be involved in developing, implementing, and publicly reporting the results of performance measurement.
These efforts are part of an organizational architecture designed to pull together the components needed to
achieve organizational goals. Changes to improve program operations, efficiency, and effectiveness are expected to be
driven by evidence of how well programs are doing in relation to stated objectives.

American Government Focus on Program Performance Results

In the United States, successive federal administrations, beginning with the Clinton administration elected in 1992, embraced program
goal setting, performance measurement, and reporting as a regular feature of program accountability (Joyce, 2011; Mahler & Posner, 2014).
The Bush administration, between 2002 and 2009, emphasized the importance of program performance in the budgeting process. The
Office of Management and Budget (OMB) introduced assessments of programs using a methodology called PART (Program
Assessment Rating Tool) (Gilmour, 2007). Essentially, OMB analysts reviewed existing evaluations conducted by departments and
agencies as well as performance measurement results and offered their own overall rating of program performance. Each year, one fifth of
all federal programs were “PARTed,” and the review results were included with the executive branch (presidential) budget requests to
Congress.

The Obama administration, while instituting the 2010 GPRA Modernization Act (see Moynihan, 2013) and departing from top-down
PART assessments of program performance (Joyce, 2011), continued this emphasis on performance by appointing the first federal chief
performance officer, leading the “management side of OMB,” which was expected to work with agencies to “encourage use and
communication of performance information and to improve results and transparency” (OMB archives, 2012). The GPRA Modernization
Act is intended to create a more organized and publicly accessible system for posting performance information on the
www.Performance.gov website, in a common format. There is also currently a clear theme of improving the efficiencies and integration of
evaluative evidence, including making better use of existing data.

At the time of writing this book, it is too early to tell what changes the Trump administration will initiate or will keep from previous
administrations, although there is intent to post performance information on the Performance.gov website, reflecting updated goals and
alignment. OMB's current mission is “to assist the President in meeting his policy, budget, management and regulatory objectives and to fulfill
the agency’s statutory responsibilities” (OMB, 2018, p. 1).

Canadian Government Evaluation Policy

In Canada, there is a long history of requiring program evaluation of federal government programs, dating back to the late 1970s. More
recently, a major update of the federal government’s evaluation policy occurred in 2009, and again in 2016 (TBS, 2016a). The main
plank in that policy is a requirement that federal departments and agencies evaluate the relevance and performance of their programs on a
5-year cycle, with some exemptions for smaller programs and contributions to international organizations (TBS, 2016a, sections 2.5 and
2.6). Performance measurement and program evaluation are explicitly linked to accountability (resource allocation [s. 3.2.3] and reporting
to parliamentarians [s. 3.2.4]) as well as managing and improving departmental programs, policies, and services (s. 3.2.2). There have
been reviews of Canadian provinces (e.g., Gauthier et al., 2009), American states (Melkers & Willoughby, 2004; Moynihan, 2006), and
local governments (Melkers & Willoughby, 2005) on their approaches to evaluation and performance measurement. In later chapters, we
will return to this issue of the challenges of using the same evaluative information for different purposes (see Kroll, 2015; Majone, 1989;
Radin, 2006).

In summary, performance management is now central to public and nonprofit management. What was once an
innovation in the public and nonprofit sectors in the early 1990s has since become an expectation. Central
agencies (including the U.S. federal Office of Management and Budget [OMB], the Government Accountability
Office [GAO], and the Treasury Board of Canada Secretariat [TBS]), as well as state and provincial finance
departments and auditors, develop policies and articulate expectations that shape the ways program managers are
expected to create and use performance information to inform their administrative superiors and other
stakeholders outside the organization about what they are doing and how well they are doing it. It is worthwhile
following the websites of these organizations to understand the subtle and not-so-subtle shifts in expectations and
performance frameworks for the design, conduct, and uses of performance measurement systems and evaluations
over time, especially when there is a change in government.

Fundamental to performance management is the importance of program and policy performance results being
collected, analyzed, compared (sometimes to performance targets), and then used to monitor, learn, and make
decisions. Performance results are also expected to be used to increase the transparency and accountability of
public and nonprofit organizations and even governments, principally through periodic public performance
reporting. Many jurisdictions have embraced mandatory public performance reporting as a visible sign of their
commitment to improved accountability (Van de Walle & Cornelissen, 2014).

The Performance Management Cycle
Organizations typically run through an annual performance management cycle that includes budget
negotiations, announcing budget plans, designing or modifying programs, managing programs, reporting their
financial and nonfinancial results, and making informed adjustments. The performance management cycle is a
useful normative model that includes an iterative planning–implementation–assessment–program adjustments
sequence. The model can help us understand the various points at which program evaluation and performance
measurement can play important roles as ways of providing information to decision makers who are engaged in
leading and managing organizations and programs to achieve results, and reporting the results to legislators and
the public.

In this book, the performance management cycle illustrated in Figure 1.1 is used as a framework for organizing
different evaluation topics and showing how the analytical approaches covered in key chapters map onto the
performance management cycle. Figure 1.1 shows a model of how organizations can integrate strategic planning,
program and policy design, implementation, and assessment of results into a cycle where evaluation and
performance measures can inform all phases of the cycle. The assessment and reporting part of the cycle is central to
this textbook, but we take the view that all phases of the performance management cycle can be informed by
evaluation and performance measurement.

We will use the performance management cycle as a framework within which evaluation and performance
measurement activities can be situated for managers and other stakeholders in public sector and nonprofit
organizations. It is important to reiterate, however, that specific evaluations and performance measures are often
designed to serve a particular informational purpose—that is, a certain phase of the cycle—and may not be
appropriate for other uses.

The four-part performance management cycle begins with formulating and budgeting for clear (strategic)
objectives for organizations and, hence, for programs and policies. Strategic objectives are then translated into
program and policy designs intended to achieve those objectives. This phase involves building or adapting
organizational structures and processes to facilitate implementing and managing policies or programs. Ex ante
evaluations can occur at the stage when options are being considered and compared as candidates for design and
implementation. We will look a bit more closely at ex ante evaluations later in the textbook. For now, think of
them as evaluations that assess program or policy options before any are selected for implementation.

Figure 1.1 The Performance Management Cycle

The third phase in the cycle is about policy and program implementation and management. In this textbook, we
will look at formative evaluations as a type of implementation-related evaluation that typically informs managers
how to improve their programs. Normally, implementation evaluations assess the extent to which intended
program or policy designs are successfully implemented by the organizations that are tasked with doing so.
Implementation is not the same thing as outcomes/results. Weiss (1972) and others have pointed out that
assessing implementation is a necessary condition for evaluating the extent to which a program has
achieved its intended outcomes. Bickman (1996), in his seminal evaluation of the Fort Bragg Continuum of Care
Program, makes a point of assessing how well the program was implemented, as part of his evaluation of the
outcomes. It is possible to have implementation failure, in which case any observed outcomes cannot be attributed
to the program. Implementation evaluations can also examine the ways that existing organizational structures,
processes, cultures, and priorities either facilitate or impede program implementation.

The fourth phase in the cycle is about assessing performance results, and reporting to legislators, the public, and
other (internal or external) stakeholders. This phase is also about summative evaluation, that is, evaluation
aimed at answering whether a program or policy has achieved its intended results, with a view to making
substantial program changes or decisions about the program’s future. We will discuss formative and
summative evaluations more thoroughly later in this chapter.

Performance monitoring is an important way to tell how a program is tracking over time, but, as shown in the
model, performance measures can inform decisions made at any stage of the performance cycle, not just the
assessment stage. Performance data can be useful for strategic planning, program design, and management-related
implementation decisions. At the Assessment and Reporting Results phase, “performance measurement and
reporting” is expected to contribute to accountability for programs. That is, performance measurement can lead to
a number of consequences, from program adjustments to impacts on elections. In the final phase of the cycle,
strategic objectives are revisited, and the evidence from earlier phases in the cycle is among the inputs that may
result in new or revised objectives—usually through another round of strategic planning.
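The mapping between cycle phases and evaluation activities described above can be summarized in a short sketch. The phase labels and evaluation types follow the text, but the data structure itself is our own illustrative summary, not a reproduction of Figure 1.1:

```python
# A sketch (our own summary, not the authors' figure) of the four-phase
# performance management cycle and the evaluation activities the text
# links to each phase.
CYCLE = {
    "1. strategic planning and budgeting": ["strategic objective setting"],
    "2. program and policy design": ["ex ante evaluation"],
    "3. implementation and management": ["formative evaluation",
                                         "implementation evaluation"],
    "4. assessment and reporting of results": ["summative evaluation",
                                               "performance reporting"],
}

# The text stresses that performance measurement can inform decisions
# at every phase of the cycle, not only assessment and reporting.
for activities in CYCLE.values():
    activities.append("performance measurement")

for phase, activities in CYCLE.items():
    print(phase, "->", ", ".join(activities))
```

The point of appending "performance measurement" to every phase is precisely the book's argument: it is not confined to the assessment stage.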

Stepping back from this cycle, we see a strategic management system that encompasses how ideas and evaluative
information are gathered for policy planning and subsequent funding allocation and reallocation. Many
governments have institutionalized their own performance information architecture to formalize how programs
and departments are expected to provide information to be used by the managerial and political decision makers.
Looking at Canada and the United States, we can see that this architecture evolves over time as the governance
context changes and also becomes more complex, with networks of organizations contributing to outcomes. The
respective emphasis on program evaluation and performance measurement can be altered over time. Times of
change in government leadership are especially likely to spark changes in the performance information
architecture. For example, in Canada, the election of the current Liberal Government in the 2015 federal election
after nine years of Conservative Government leadership has resulted in a government-wide focus on implementing
high-priority policies and programs and ensuring that their results are actually delivered (Barber, 2015; Barber,
Moffitt, & Kihn, 2011).

Policies and Programs
As you have been reading this chapter, you will have noticed that we mention both policies and programs as
candidates for performance measurement and evaluation. Our view is that the methodologies that are discussed in
this textbook are generally appropriate for evaluating both policies and programs. Some analysts use the terms
interchangeably—in some countries, policy analysis and evaluation is meant to encompass program evaluation
(Curristine, 2005). We will define them both so that you can see what the essential differences are.

What Is a Policy?

Policies connect means and ends. At its core, a policy states intended outcomes/objectives (ends) and the means by which
government(s) or their agents (perhaps nonprofit organizations or even private-sector companies) will go about achieving those outcomes.
Initially, policy objectives can be expressed in election platforms, political speeches, government responses to questions by the media, or
other announcements (including social media). Ideally, before a policy is created or announced, research and analysis have been done that
establish the feasibility, the estimated effectiveness, or even the anticipated cost-effectiveness of proposed strategies to address a problem
or issue. Often, new policies are modifications of existing policies that expand, refine, or reduce existing governmental activities.

Royal commissions (in Canada), task forces, reports by independent bodies (including think tanks), or even public inquiries
(congressional hearings, for example) are ways that in-depth reviews can set the stage for developing or changing public policies. In other
cases, announcements by elected officials addressing a perceived problem can serve as the impetus to develop a policy—some policies are a
response to a political crisis.

An example of a policy that has significant planned impacts is the British Columbia government’s November 2007 Greenhouse Gas
Reduction Targets Act (Government of British Columbia, 2007) that committed the provincial government to reducing greenhouse gas
emissions in the province by 33% by 2020. From 2007 to 2013, British Columbia reduced its per capita consumption of petroleum
products subject to the carbon tax by 16.1%, as compared with an increase of 3.0% in the rest of Canada (World Bank, 2014).

The legislation states that by 2050, greenhouse gas emissions will be 80% below 2007 levels. Reducing greenhouse gas emissions in
British Columbia will be challenging, particularly given the more recent provincial priority placed on developing liquefied natural gas
facilities to export LNG to Asian countries. In 2014, the BC government passed a Greenhouse Gas Industrial Reporting and Control Act
(Government of British Columbia, 2014) that includes a baseline-and-credit system for which there is no fixed limit on emissions, but
instead, polluters that reduce their emissions by more than specified targets (which can change over time) can earn credits that they can
sell to other emitters who need them to meet their own targets. The World Bank annually tracks international carbon emission data
(World Bank, 2017).

What Is a Program?

Programs are similar to policies—they are means–ends chains that are intended to achieve some agreed-on objective(s). They can vary a
great deal in scale and scope. For example, a nonprofit agency serving seniors in the community might have a volunteer program to make
periodic calls to persons who are disabled or otherwise frail and living alone. Alternatively, a department of social services might have an
income assistance program serving clients across an entire province or state. Likewise, programs can be structured simply—a training
program might just have classroom sessions for its clients—or be complicated—an addiction treatment program might have a range of
activities, from public advertising, through intake and treatment, to referral, and finally to follow-up—or be complex—a
multijurisdictional program to reduce homelessness that involves both governments and nonprofit organizations.

To reduce greenhouse gases in British Columbia, many different programs have been implemented—some targeting the government
itself, others targeting industries, citizens, and other governments (e.g., British Columbia local governments). Programs to reduce
greenhouse gases are concrete expressions of the policy. Policies are usually higher-level statements of intent—they need to be translated
into programs of action to achieve intended outcomes. Policies generally enable programs. In the British Columbia example, a key
program implemented starting in 2008 was a broad-based tax on the carbon content of all fuels used in British Columbia by
both public- and private-sector emitters, including all who drive vehicles in the province. That is, a carbon tax component is added
to per-liter vehicle fuel costs.

Increasingly, programs can involve several levels of government, governmental agencies, and/or nonprofit organizations. A good example
is the Canadian federal government’s initiatives, starting in 2016, to bring all provinces on board with GHG reduction efforts. These kinds
of programs are challenging for evaluators and have prompted some in the field to suggest alternative ways of assessing program processes
and outcomes. Michael Patton (1994, 2011) has introduced developmental evaluation as one approach, and John Mayne (2001, 2011)
has introduced contribution analysis as a way of addressing attribution questions in complex program settings.

In the chapters of this textbook, we will introduce multiple examples of both policies and programs, and the
evaluative approaches that have been used for them. A word on our terminology—although we intend this book
to be useful for both program evaluation and policy evaluation, we will refer mostly to program evaluations.

Key Concepts in Program Evaluation

Causality in Program Evaluations
In this textbook, a key theme is the evaluation of the effectiveness of programs. One aspect of that issue is whether
the program caused the observed outcomes. Our view is that program effectiveness and, in particular, attribution
of observed outcomes are the core issues in evaluations. In fact, that is what distinguishes program evaluation from
other, related professions such as auditing and management consulting. Picciotto (2011) points to the centrality of
program effectiveness as a core issue for evaluation as a discipline/profession:

What distinguishes evaluation from neighboring disciplines is its unique role in bridging social science
theory and policy practice. By focusing on whether a policy, a program or project is working or not (and
unearthing the reasons why by attributing outcomes) evaluation acts as a transmission belt between the
academy and the policy-making. (p. 175)

In Chapter 3, we will describe the logic of research designs and how they can be used to examine causes and effects
in evaluations. Briefly, there are three conditions that are widely accepted as being jointly necessary to establish a
causal relationship between a program and an observed outcome: (1) the program has to precede the observed
outcome, (2) the presence or absence of the program has to be correlated with the presence or absence of the
observed outcome, and (3) there cannot be any plausible rival explanatory factors that could account for the
correlation between the program and the outcome (Cook & Campbell, 1979).
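A small simulated illustration can show why the third condition matters. The scenario below is entirely hypothetical (it is not from the textbook): a rival factor, "motivation", drives both program participation and the outcome, so conditions 1 and 2 hold even though the program's true effect is set to zero:

```python
import random

random.seed(42)

# Hypothetical illustration: a confounder ("motivation") drives both
# program participation and the outcome, while the program's true
# effect is zero.
TRUE_PROGRAM_EFFECT = 0.0

participants, nonparticipants = [], []
for _ in range(10_000):
    motivation = random.random()     # rival explanatory factor
    joins = motivation > 0.5         # more motivated people enroll
    outcome = 10 * motivation + TRUE_PROGRAM_EFFECT * joins
    (participants if joins else nonparticipants).append(outcome)

naive_gap = (sum(participants) / len(participants)
             - sum(nonparticipants) / len(nonparticipants))

# Conditions 1 and 2 are satisfied (the program precedes the outcome,
# and participation is correlated with the outcome), yet the gap of
# roughly 5 reflects motivation, not the program: condition 3 (no
# plausible rival explanation) fails.
print(f"naive participant/non-participant gap: {naive_gap:.2f}")
```

This is why simply comparing participants with non-participants, without ruling out rival explanations, cannot establish that a program caused an observed outcome.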

In the evaluation field, different approaches to assessing causal relationships have been proposed, and the debate
around using experimental designs continues (Cook et al., 2010; Creswell & Creswell, 2017; Donaldson et al.,
2014). Our view is that the logic of causes and effects (the three necessary conditions) is important to understand,
if you are going to do program evaluations. Looking for plausible rival explanations for observed outcomes is
important for any evaluation that claims to be evaluating program effectiveness. But that does not mean that we
have to have experimental designs for every evaluation.

Program evaluations are often conducted under conditions in which data appropriate for ascertaining or even
systematically addressing the attribution question are hard to come by. In these situations, the evaluator or
members of the evaluation team may end up relying, to some extent, on their professional judgment. Indeed, such
judgment calls are familiar to program managers, who rely on their own observations, experiences, and
interactions to detect patterns and make choices on a daily basis. Scriven (2008) suggests that the capacity to
detect causal relationships is built into us: we are hardwired to organize our observations into patterns and to
infer causal relationships from them.

For evaluators, it may seem “second best” to have to rely on their own judgment, but realistically, all program
evaluations entail a substantial number of judgment calls, even when valid and reliable data and appropriate
comparisons are available. As Daniel Krause (1996) has pointed out, “A program evaluation involves human
beings and human interactions. This means that explanations will rarely be simple, and interpretations cannot
often be conclusive” (p. xviii). Clearly, then, systematically gathered evidence is a key part of any good program
evaluation, but evaluators need to be prepared for the responsibility of exercising professional judgment as they do
their work.

One of the key questions that many program evaluations are expected to address can be worded as follows:

To what extent, if any, were the intended objectives met?

Usually, we assume that the program in question is “aimed” at some intended objective(s). Figure 1.2 offers a
picture of this expectation.

Figure 1.2 Linking Programs and Intended Objectives

The program has been depicted in a “box,” which serves as a conceptual boundary between the program and the
program environment. The intended objectives, which we can think of as statements of the program’s intended
outcomes, are shown as occurring outside the program itself; that is, they are results meant to make a
difference beyond the program’s own activities.

The arrow connecting the program and its intended outcomes is a key part of most program evaluations and
performance measurement systems. It shows that the program is intended to cause the outcomes. We can restate
the “objectives achievement” question in words that are a central part of most program evaluations:

Was the program effective (in achieving its intended outcomes)?

Assessing program effectiveness is the most common reason we conduct program evaluations and create
performance measurement systems. We want to know whether, and to what extent, the program’s actual results
are consistent with the outcomes we expected. In fact, there are two evaluation issues related to program
effectiveness. Figure 1.3 separates these two issues, so it is clear what each means.

Figure 1.3 The Two Program Effectiveness Questions Involved in Most Evaluations

The horizontal causal link between the program and its outcomes has been modified in two ways: (1) intended
outcomes have been replaced by the observed outcomes (what we actually observe when we do the evaluation),
and (2) a question mark (?) has been placed over that causal arrow.

We need to restate our original question about achieving intended objectives:

To what extent, if at all, was the program responsible for the observed outcomes?

Notice that we have focused the question on what we actually observe in conducting the evaluation, and that the
“?” above the causal arrow now raises the key question of whether the program (or possibly something else) caused
the outcomes we observe. In other words, we have introduced the attribution question—that is, the extent to
which the program was the cause or a cause of the outcomes we observed in doing the evaluation. Alternatively,
were there factors in the environment of the program that caused the observed outcomes?

We examine the attribution question in some depth in Chapter 3, and refer to it repeatedly throughout this book.
As we will see, it is often challenging to address this question convincingly, given the constraints within which
program evaluators work.

Figure 1.3 also raises a second evaluation question:

To what extent, if at all, are the observed outcomes consistent with the intended outcomes?

Here, we are comparing what we actually find with what the program was expected to accomplish. Notice that
answering that question does not tell us whether the program was responsible for the observed or intended outcomes.

Sometimes, evaluators or persons in organizations doing performance measurement do not distinguish the
attribution question from the “achievement of intended outcomes” question. In implementing performance
measures, for example, managers or analysts spend a lot of effort developing measures of intended outcomes.
When performance data are analyzed, the key issue is often whether the actual results are consistent with intended
outcomes. In Figure 1.3, the dashed arrow connects the program to the intended outcomes, and assessments of
that link are often a focus of performance measurement systems. Where benchmarks or performance targets have
been specified, comparisons between actual outcomes and intended outcomes can also be made, but what is
missing from such comparisons is an assessment of the extent to which observed and intended outcomes are
attributable to the program (McDavid & Huse, 2006).
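The distinction between the two effectiveness questions can be made concrete with a small sketch. All figures below are invented for illustration (a hypothetical job-training program with a made-up target and made-up outcome rates); the point is only that a target comparison and an attribution estimate can give different answers:

```python
# Hypothetical figures for a job-training program (invented for
# illustration; not from the textbook).
intended_employment_rate = 0.60   # performance target
observed_employment_rate = 0.62   # rate among program participants
comparison_group_rate = 0.61      # rate among similar non-participants

# Question 2: are observed outcomes consistent with intended outcomes?
# This is what performance measurement systems typically report.
target_met = observed_employment_rate >= intended_employment_rate

# Question 1 (attribution): how much of the observed outcome can be
# credited to the program? A comparison group supplies the check on
# rival explanations that a target comparison alone cannot.
estimated_program_effect = observed_employment_rate - comparison_group_rate

print(f"target met: {target_met}")                                  # True
print(f"estimated program effect: {estimated_program_effect:.2f}")  # 0.01
```

In this invented case the target is met, yet the estimated program effect is only one percentage point: meeting a target does not, by itself, show that the program produced the outcome.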

29
Exploring the Variety of Random
Documents with Different Content
SAARET

On ulappa aava ja myrskyinen


Elo ihmisten,
Mut muistot, unelmat, toivehet
Ovat saaria sen.
On niissä niin herttaista viivähtää,
Kun sydän on synkkä ja polttavi pää,
Niin unhottain, miten kaukana
On onnen kaivattu valkama.
LAULAJAPOIKA

Olin nuori laulajapoika


Ja kiertelin maailmaa,
Ei painanut huoli, ei köyhyys,
Mun rintaani rohkeaa.

Niin kulkiessani kerran


Tulin loistavaan linnahan,
Näin puistossa linnan neidin,
Tytön viehkeän, hurmaavan.

Yhä ristikkoportilla seisoin,


Hänt' ääneti katsellen,
Niin onnen autuas tunne
Mun valtasi sydämen.

»Jos kuningas korkea oisin,


Ah, impeni armahin!
Niin maan sekä taivaan aarteet
Sun helmahas heittäisin!»

»Mut valtakuntani mulla


On laajoissa lauluissain,
En muuta sulle voi antaa,
Kuin lempeni lahjan vain!»

Sydän sykkien soittimeen tartuin


Ja kieliä kosketin,
Niin tenhoavasti en koskaan
Ole soittanut sittemmin.

Kuin helminä helkytellen


Sävel kumpusi rinnastain,
Minä soitin, soitin ja lauloin — —
Muun maailman unhottain.

Käsiin neitonen päätään painoi,


Ja kyynelin kuunteli,
Mua silmäsi hehkumielin
Ja viipymään viittasi.

Sydän sinne jäämähän käski,


Toki lähdinkin kulkemaan,
Hän ol' ylhäinen linnan neiti,
Minä köyhä laulaja vaan.

Mut ijäksi unholle heitin,


Ilokantelon entisen,
Yhä soitan, kaihoa kerron,
Maat, manteret kiertäen.
KUUTAMOLLA

Rannalla yksin istun,


Mieli on kaihoinen.
Etäistä maata tuolla
Yhäti silmäilen.

Siellähän asut, armas,


Kultani kallihin.
Silta jos sinne veisi,
Luoksesi rientäisin.

Kirkasna kuuhut loistaa,


Pilvistä pilkistää,
Laineilla leikkii, tanssii,
Vedessä väräjää.

Valosta luotu silta,


Siintävä, hohtoinen,
Jospa mun viedä voisit
Rannalle toiveitten!

Vaan ei ne Kuuttaren sillat


Kannata kulkijaa,
Niihin ken luottaa, astuu,
Aaltohon haudan saa.
KAUNIS KATRI

Vaimo vanha tietä astuu. —


Tuvan ikkunassa,
Kaunis Katri, säihkysilmä,
Istuu neulomassa.

»Mitä neulot, Katri kaunis?» —


»Valkopuvun laitan,
Hohtoharsot; seppeleeksi
Myrtin-oksan taitan.

»Eilen orhillansa ohi


Ritar' Arno kulki,
Sisään poikkes, valat vannoi,
Sylihinsä sulki.»

Syksyn tullen vaimo vanha


Taaskin tietä astuu.
Katri neuloo, tumma silmä
Kyynelistä kastuu.

»Mitä neulot, Katri kalvas?» —


»Murhepuvun laitan,
Harsot mustat; seppeleeksi
Tuonen liljan taitan.

»Ritar' Arno valat rikkoi,


Hylkäs tyttö-rukan.
Talven lumi peittäköhön
Kuihtunehen kukan.»
HYLJÄTTY

Sä tieltäs kukan taitat


Ja sitten heität pois.
Se kohta unhoittuupi,
Kuin ollutkaan ei ois.

Tuo hiekka, jota poljet,


Kuin sulle on arvoton!
Maan halpa, musta multa —
Sen huonompaako on?

Mut mielelläin, voi armas,


Tomu oisin jaloissas,
Tai kuihtuva kukka, jonka
Pois heität kulkeissas.

Ei katkerammin konsaan
Se koskea mua vois,
Kuin nyt, kun lemmen poljet
Ja tylysti työnnät pois.
ONNEA TAVOTTAESSA

Erään kuvan johdosta.

Sun säihkyvi silmäs, poika,


Puna polttavi poskillas,
Eteenpäin kuin hurjana kiidät,
Ja kannustat ratsuas.

Mut eelläsi Onnetar liitää,


Rusopilvenä väikkyilee,
Hän kukkia tiellesi kylvää
Ja luoksensa houkuttelee.

Varo, poika! Taaksesi katso,


On Kuolema kumppalinas!
Teräviikate välkkyy, ja vaiti
Hän vartovi vuoroas.

Tuo kiehtova, kaunis Onni


Petollista vain on unelmaa,
On turhia ponnistukses,
Et voi sitä saavuttaa.
Mut kuulla et, poikanen, malta,
On huumaus vallannut sun,
Sa kaiken tahtoisit vaihtaa
Vain yhtehen suuteluhun.

Jo silmihis syttyy into,


Kätes kutreja tavoittaa,
Mut Onni kun lähinnä ompi,
Sun Kuolema saavuttaa.

Se rautaisin kourin painaa


Sun syöntäsi sykkäilevää,
Ja hyyks veren lämpimän muuttaa,
Tulen rintasi jäähdyttää.

Mut Onnetar eellehen liitää,


Rusopilvenä väikkyilee,
Hän kukkia tiellemme kylvää
Ja luoksensa houkuttelee.
AUNE

In memory of a companion.

In the morning Aune, dear girl,
Went gladly down to the spring;
There she wound about her golden curls
A garland of blue cornflowers
And said, a smile upon her lips:
"How will all things stand next year?
Shall I still bloom as the flower of home,
Or stand at the gates of strangers?"

A bird sang from the edge of the grove:
"Next year, Aune, dear girl,
A great bridegroom will come to you;
He will press a kiss upon your brow,
Bring costly betrothal gifts into your hand,
And carry you away from your home."

A year went by; Aune, dear girl,
Sits again at the brink of the spring;
Even now her eyes are shining bright,
A rosy glow plays on her cheeks.
But the sparkle of her eye is fever's fire,
The glow of her cheek the flush of sickness,
And her tender smiling lips are pale.

Aune, dearest, youngest of my maidens!
Now your great bridegroom has come,
Mightiest, perhaps, of the rulers of the world.
Tuoni came and broke the tender flower;
Death passed by and brought the costly gifts,
And with a kiss consecrated his bride.

THE ROAD TO HEAVEN

A desolate desert waste of sand;
All alone a monk journeys on,
Bringing comfort to a sick man.
The night is gloomy and black.

"Turn back now, wanderer, turn back!
You may lose your way on the desert paths."
"Ah, Lord, guide me upon the road!
A sick man waits for me there!"

"Do you hear the roaring of the lion
That echoes through the night-dark wilds?"
"Ah, Lord, hide me in Your keeping!
I walk upon the roads of heaven!"

"Why set out into the rising storm?
It carries the sand-drift in its lap!"
"The road to heaven is always narrow;
Ah, Lord, grant me Your grace!"

He throws himself to the ground in prayer;
The storm sweeps past him, howling, roaring.
In the sand he finds his grave,
But heaven opens above him.

EDVI AND ELGIIVA
A historical ballad cycle in five songs

Edvi, King of England, grandson of Alfred the Great, had fallen in
love with the young Elgiiva, famed throughout the land for her beauty,
whom he later also raised to be his queen. But a powerful church
party, headed by Abbot Dunstan, did not favor this union; at its
instigation Elgiiva was first sent into exile and later put to death.
Edvi too died young, having reigned only a short time, 956–959.

Song 1:

KING EDVI SEES ELGIIVA

Hundreds of silver lamps are glowing,
And the hall's high crystal walls
Reflect the flooding light
A thousandfold again.

There is a rustle of heavy silk,
Talk rolls on like a stream;
Diamonds sparkle, and glances
Flash as if in rivalry.

But suddenly silence falls,
Even the whispers cease:
The king, young Edvi,
Has stepped into the hall.

Proudly he gives his greeting,
Casts his gaze around the hall,
And on the bevy of beauties
Lets it linger for a moment.

But who is that, there by the door?
Could his eyes so deceive him?
Ah, it is as if a holy virgin
Had stepped down from heaven!

Slender as the stem of a flower
Rises her delicate waist.
Oh, the gleam of her golden curls
And the sweetness of her eyes!

"Ah, Dunstan, look over there:
Do you know that maiden?
She stands apart from the others
As the sun from the stars!

"You do not? Well, no matter;
Still, I swear it so:
Were she even a beggar,
She is the very spirit of grace!

"Let a fiery, enchanting tune ring out,
And let the kantele sound,
When Edvi goes to ask
His queen to join the dance!"

And with all eyes upon him
He steps up to the maiden:
"Fair unknown one, grant
That I may lead you to the dance!"

On the girl's pure cheek
A burning blush now glows,
And shyly, lingering,
She raises her blue eyes.

Hundreds of glances follow them,
There is whispering all around,
But silent, spellbound,
They only gaze at one another.

And while the hot music plays,
Edvi leads her to the dance.
Oh, graceful, slender Elgiiva,
How slippery is that road!

Song 2:

EDVI WITH ELGIIVA

Off I throw my hat and gown
And my mantle of purple too.
There now! The king is gone;
I am only your slave.
Ah, it is heavy, my girl,
To be lord of a great land,
And to think of the cares of state
When the heart beats only for love.

Bring the footstool here to my feet!
Lay your head upon my knee.
Thus I could sit whole evenings through
And gaze into your eyes.
In the great world there is only hatred
And intrigue and falsehood;
My world is your heart,
And it is small and bright.

That Dunstan... you turn pale, my girl?
Does that one word fill you with dread?
Have idle tales of gossip
Reached Elgiiva's ears?
Holy Dunstan is haughty, true,
And hides a viper in his breast;
But could he ever humble
A descendant of Alfred?

Still there is a cloud on my girl's brow;
If only I could drive it away!
Take your harp from the wall, my child,
And give us the sound of a song!
Hearing it, fear falls asleep
And foreboding sinks to rest;
In the heart only love runs free
With its abundance of joy.

Song 3:

DUNSTAN

Night has come. The bells of Glastonbury [the monastery]
Have rung the stroke of twelve.
The lights are out. Only in the church,
Before the image of the Holy Virgin,
Does the eternal lamp still shed its light.

But by the high monastery wall,
From a low and wretched hut,
A light is also flickering. On his knees,
Before an iron crucifix,
A man bows low. A coarse monk's cowl
Shrouds his frame, his fingers
Turn the beads of a rosary,
And a taper lights his pallid face.
That face, once seen, is not forgotten:
Passions and sufferings,
Secret cravings, self-torment
Have stamped it with an indelible mark.
In the eyes there burns an inner fire,
So fierce, unquenchable, and vehement
It might burn a man to ashes.

A decade ago, how different was Dunstan!
Then a splendid silken cloak
Wrapped your proud and stately form.
You were the finest man in the whirl of the dance,
Foremost at feasts and drinking bouts.
Haughty you strode through the halls of the court,
Kindling hearts with your glance,
And you brought fair eyes to tears
When your supple fingers played the zither.

Who would remember that time? It has been,
Gone, and long since forgotten.
Now only the "prince of monks" is known;
Now you are "holy Dunstan." Your piety
Is famed through all the land of Britain!

The prayer is ended. Three times
He devoutly makes the sign of the cross,
And then sits down at the table,
A book in hand, on which the taper's light
Falls dimly from the wall.

But this evening the reading will not flow,
So restlessly his thoughts take wing,
And his gaze wanders without peace.

A secret message lately came from the Pope:
"Now, Dunstan, it is time to act!
The time has come to strengthen Rome's power.
The king is young, almost still a child;
Whatever you do for the Church is right."

Ever more dimly glows the taper,
The book falls from his hand unnoticed,
But a strange glitter fills the monk's eyes.

"Yes, now is the time! To work without delay!
My plan, so long in preparation,
Shall not come to nothing for a whim.
Little by little, cautiously, I had
Gathered the reins of power into my hands,
And now should I lose them for a woman's sake?
No, by heaven! As the first task
Elgiiva must go at once. That blue-eyed,
Curly-haired witch has with her wiles
Caught the king as in a net.
She is heeded more than I.
That will not do. The power shall be mine.
Whoever will not yield is marked for ruin!"


Song 4:

THE CONDEMNED