HANDBOOK OF EVALUATION METHODS
FOR HEALTH INFORMATICS

Jytte Brender
University of Aalborg, Denmark

Translated by Lisbeth Carlander

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
ELSEVIER

Academic Press is an imprint of Elsevier


Elsevier Academic Press
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobald's Road, London WC1X 8RR, UK

The front page illustration has been reprinted from Brender J. Methodology for Assessment of Medical IT-based Systems - in an Organisational Context. Amsterdam: IOS Press, Studies in Health Technology and Informatics 1997; 42, with permission.

This book is printed on acid-free paper.

Copyright © 2006, Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, E-mail: permissions@elsevier.com. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."

Library of Congress Cataloging-in-Publication Data


Brender, Jytte.
Handbook of evaluation methods in health informatics / Jytte
Brender.
p. ; cm.
Includes bibliographical references and index.
ISBN-13: 978-0-12-370464-1 (pbk. : alk. paper)
ISBN-10: 0-12-370464-2 (pbk. : alk. paper)
1. Medical informatics--Methodology. 2. Information storage and
retrieval systems--Medicine--Evaluation. I. Title.
[DNLM: 1. Information Systems--standards. 2. Medical Informatics
--methods. 3. Decision Support Techniques. W 26.55.I4 B837h
2006]
R858.B7332 2006
651.5'04261--dc22
2005025192

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library

ISBN-13: 978-0-12-370464-1
ISBN-10: 0-12-370464-2

For all information on all Elsevier Academic Press publications visit our Web site at www.books.elsevier.com

Printed in the United States of America


05 06 07 08 09 10 9 8 7 6 5 4 3 2 1

Contents

NOTE TO THE READER IX


The Aim of the Handbook ix
Target Group x

Criteria for Inclusion xii


Acknowledgments xiii
Additional Comments xiv

PART I: INTRODUCTION 1
1. INTRODUCTION 3
1.1 What Is Evaluation? 3
1.2 Instructions to the Reader 6
1.3 Metaphor for the Handbook 7
2. CONCEPTUAL APPARATUS 9
2.1 Evaluation and Related Concepts 9
2.1.1 Definitions 9
2.1.2 Summative Assessment 11
2.1.3 Constructive Assessment 11
2.2 Methodology, Method, Technique, and Framework 12
2.2.1 Method 13
2.2.2 Technique 13
2.2.3 Measures and Metrics 14
2.2.4 Methodology 14
2.2.5 Framework 15
2.3 Quality Management 16
2.4 Perspective 18
2.4.1 Example: Cultural Dependence on Management Principles 19
2.4.2 Example: Diagramming Techniques 20
2.4.3 Example: Value Norms in Quality Development 20
2.4.4 Example: Assumptions of People's Abilities 21
2.4.5 Example: The User Concept 21
2.4.6 Example: The Administrative Perspective 22
2.4.7 Example: Interaction between Development and Assessment Activities 23
2.4.8 Example: Interaction between Human and Technical Aspects 24
2.5 Evaluation Viewed in the Light of the IT System's Life Cycle 25
3. TYPES OF USER ASSESSMENTS OF IT-BASED SOLUTIONS 29

3.1 Types of User Assessment during the Phases of a System's Life Cycle 31
3.1.1 The Explorative Phase 31
3.1.2 The Technical Development Phase 32
3.1.3 The Adaptation Phase 33
3.1.4 The Evolution Phase 34
3.2 Assessment Activities in a Holistic Perspective 34
4. CHOOSING OR CONSTRUCTING METHODS 37
4.1 How Do You Do It? 37
4.1.1 Where in Its Life Cycle Is the IT Project? 37
4.1.2 What Is the Information Need? 38
4.1.3 Establishing a Methodology 39
4.1.4 Choosing a Method 40
4.1.5 Choosing Metrics and Measures 42
4.1.6 Execution of the Method 42
4.1.7 Interpreting Results 43
4.2 From Strategy to Tactics: An Example 44
4.3 Another (Abstract) Example 45
4.4 A Practical Example of a Procedure 46
4.4.1 Planning at the Strategic Level 46
4.4.2 Planning at the Tactical Level 46
4.4.3 Planning at the Operational Level 47
4.5 Frame of Reference for Assessment 47
4.6 Perils and Pitfalls 48

PART II: METHODS AND TECHNIQUES 51

5. INTRODUCTION 53
5.1 Signature Explanations 54
5.1.1 Application Range within the IT System's Life Cycle 54
5.1.2 Applicability in Different Contexts 56
5.1.3 Type of Assessment 57
5.1.4 Use of Italics in the Method Descriptions 58
5.2 Structure of Methods' Descriptions 58
6. OVERVIEW OF ASSESSMENT METHODS 61
6.1 Overview of Assessment Methods: Explorative Phase 61
6.2 Overview of Assessment Methods: Technical Development Phase 64
6.3 Overview of Assessment Methods: Adaptation Phase 65
6.4 Overview of Assessment Methods: Evolution Phase 68
6.5 Other Useful Information 72
7. DESCRIPTIONS OF METHODS AND TECHNIQUES 73
Analysis of Work Procedures 73
Assessment of Bids 78

Balanced Scorecard 85
BIKVA 88
Clinical/Diagnostic Performance 91
Cognitive Assessment 96
Cognitive Walkthrough 102
Delphi 106
Equity Implementation Model 109
Field Study 111
Focus Group Interview 116
Functionality Assessment 120
Future Workshop 125
Grounded Theory 128
Heuristic Assessment 132
Impact Assessment 135
Interview 142
KUBI 147
Logical Framework Approach 149
Organizational Readiness 154
Pardizipp 156
Prospective Time Series 159
Questionnaire 163
RCT, Randomized Controlled Trial 172
Requirements Assessment 180
Risk Assessment 185
Root Causes Analysis 188
Social Networks Analysis 190
Stakeholder Analysis 192
SWOT 196
Technical Verification 199
Think Aloud 204
Usability 207
User Acceptance and Satisfaction 215
Videorecording 219
WHO: Framework for Assessment of Strategies 222
8. OTHER USEFUL INFORMATION 227
Documentation in a Situation of Accreditation 227
Measures and Metrics 232
Standards 238

PART III: METHODOLOGICAL AND METHODICAL PERILS AND PITFALLS AT ASSESSMENT 243

9. BACKGROUND INFORMATION 245


9.1 Perspectives 246


10. APPROACH TO IDENTIFICATION OF PITFALLS AND PERILS 249

11. FRAMEWORK FOR META-ASSESSMENT OF ASSESSMENT STUDIES 253
11.1 Types of (Design) Strengths 257
11.1.1 Circumscription of Study Objectives 257
11.1.2 Selecting the Methodology/Method 261
11.1.3 Defining Methods and Materials 267
11.1.4 (User) Recruitment 271
11.1.5 (Case) Recruitment 273
11.1.6 The Frame of Reference 276
11.1.7 Outcome Measures or End-Points 284
11.1.8 Aspects of Culture 289
11.2 Types of (Experimental) Weaknesses 290
11.2.1 The Developers' Actual Engagement 291
11.2.2 Intra- and Interperson (or -Case) Variability 293
11.2.3 Illicit Use 295
11.2.4 Feed-back Effect 296
11.2.5 Extra Work 297
11.2.6 Judgmental Biases 298
11.2.7 Postrationalization 300
11.2.8 Verification of Implicit Assumptions 301
11.2.9 Novelty of the Technology - Technophile or Technophobe 302
11.2.10 Spontaneous Regress 303
11.2.11 False Conclusions 303
11.2.12 Incomplete Studies or Study Reports 304
11.2.13 Hypothesis Fixation 307
11.2.14 The Intention to Treat Principle 311
11.2.15 Impact 313
11.3 Types of Opportunities 313
11.3.1 Retrospective Exploration of (Existing) Data Material 314
11.3.2 Remedying Problems Identified - Beyond the Existing Data Material 315
11.4 Types of Threats 317
11.4.1 Compensation for Problems 317
11.4.2 Pitfalls and Perils 318
11.4.3 Validity of the Study Conclusion 319
12. DISCUSSION 321
12.1 A Meta-View on the Study of Pitfalls and Perils 322
LIST OF ABBREVIATIONS 325

LIST OF REFERENCES 327


Annotated, Generally Useful References, Including Case Studies 337
Annotated World Wide Web Links 343
INDEX 347


NOTE TO THE READER
This Handbook of Evaluation Methods is a translated and updated version of a combination of the following two publications:
• Brender J. Handbook of Methods in Technology Assessment of IT-based Solutions within Healthcare. Aalborg: EPJ-Observatoriet, ISBN: 87-91424-04-6, June 2004. 238pp (in Danish).
• Brender J. Methodological and methodical perils and pitfalls within assessment studies performed on IT-based solutions in healthcare. Aalborg: Virtual Centre for Health Informatics; 2003 May. Report No.: 03-1 (ISSN 1397-9507). 69pp.

The Aim of the Handbook

"[W]e view evaluation not as the application o f a set o f tools and techniques, but as
a process to be understood. By which we mean an understanding o f the functions
and nature o f evaluation as well as its limitations and problems. ""
(Symons and Walsham 1988)

The primary aim of this book is to illustrate options for finding appropriate tools within the literature and then to support the user in accomplishing an assessment study without too many disappointments. There are substantial differences between what developers and what users should assess with regard to an IT-based solution. This book deals solely with assessment as seen from the users' point of view.

The literature contains thousands of reports on assessment studies of IT-based systems and solutions, including many specifically for systems within the healthcare sector. However, only a fraction of these are dedicated to a description of the assessment activities. Consequently, from an assessment perspective, most of them must be considered superficial and rather useless as model examples. Only the best and paradigmatic examples of evaluation methods are included in this book.

The problem in assessment of IT-based solutions lies partly in getting an overview of the very complex domain, encompassing topics that range from technical aspects to soft behavioral and organizational aspects, and partly in avoiding pitfalls and perils in order to obtain valid results, as well as in identifying appropriate methods and case studies similar to one's own.

Please note the terminology of the book, which distinguishes the notions of evaluation and assessment in Section 2.1. Briefly, evaluation means measuring characteristics (in a decision-making context), while assessment is used in an overall sense that does not distinguish between the objectives of the study and therefore not whether it is evaluation, verification, or validation.

This book deals mainly with current user-oriented assessment methods. It includes methods that give users a fair chance of accomplishing all or parts of an investigation that leads to a professionally satisfactory answer to an actual information need.

Not all of the methods included were originally developed and presented in the literature as evaluation methods, but they may nevertheless be applicable either directly, as evaluation methods for specific purposes, or indirectly, as support in an evaluation context. An example is the Delphi method, developed for the American military to predict future trends. Another example is diagramming techniques for modelling workflow, which in some instances constitute a practical way of modelling, such as when assessing effect or impact or in field studies.

The book is primarily aimed at the health sector, from which the chosen
illustrative examples are taken. The contents have been collected from many
different specialist areas, and the material has been chosen and put together in
order to cover the needs of assessment for IT-based solutions in the health sector.
However, this does not preclude its use within other sectors as well.

Target Group
It is important to remember that a handbook of methods is not a textbook but a reference book, enabling the reader to get inspiration and support when completing a set task and/or serving as a basis for further self-education. Handbooks, for use in the natural sciences for instance, are typically aimed at advanced users and their level of knowledge. The structure of this handbook has similarly been aimed at the level of users with the profile described below.

The target readers of this book constitute all professionals within the healthcare sector, including IT professionals. However, an evaluation method is not something that one pulls out of a hat, a drawer, or even from books, and then uses without reflection and meticulous care. The desirable competences and personal qualifications are therefore listed in the next section.

Skills Required to Evaluate an IT System Appropriately


In the following list, the personal and professional qualifications needed to
accomplish assessment studies are briefly discussed. Ultimately, this of course
depends on the specific information needs and the complexity of the overall
system. In short, it is important to be responsible and independently minded, have
a good overview of the subject matter, and be both reflective and self-critical.

• It is critical to be able to disregard the political interests of one's own profession and to view the situation in a larger perspective. There is a need to be able to gain an overview, and thereby a need to make abstractions and reflections about issues within their larger context before, during, and after an assessment study. Often, activities may be carried out in more than one way, and it is necessary to be able to judge which one is best given the conditions.
• It is necessary to have the skills needed for dealing with methods. One must have the ability and the courage to form one's own opinion and carry it out in terms of variations of known methods and so on. It is also important that one is capable of capturing, handling, and interpreting deviations within the actual investigation and among the factual observations. Such deviations are more or less inevitable in practice.
• There is a need for stringency and persistence, as assessment studies are usually large and involve a great deal of data, the analysis of which must be carried out methodically, cautiously, and exhaustively.
• One must have a degree of understanding of IT-based solutions, enabling one to see through the technology, its conditions, and incidents, so that one dares set limits (without being stubborn) toward professionals with formal credentials. It is necessary to be able to see through the implications of the interaction between the IT systems and their organizational environment, as well as conditions, interactions, and events for the organization and for its individuals.
• There is a need for project management skills and experience. An assessment study is not something that one does single-handedly or as a desk test. Therefore, it is necessary to be able to coordinate and delegate, to perform critical problem solving and decision making, and to have the ability to predict the consequences of one's decisions and initiatives.
• One must have the motivation and desire to acquaint oneself with new material and to search for information in the literature, on the Internet, or from professionals in other disciplines.
• It is essential to be able to remain constructively critical, verging on the suspicious, toward verbal or written statements, including one's own approach and results.
• There is a need for thorough insight into the healthcare sector, including the conditions under which such organizations work.

In other words, there is a greater emphasis on functional competences than on academic achievements.

Criteria for Inclusion

Methods that require specific qualifications, such as economic and statistical methods, have been excluded, because processing the basis material into the method descriptions of this handbook requires a particular understanding of the area concerned. Apart from common sense, economic methods fall outside the author's personal area of competence, so they have been consciously omitted from this handbook. Assessing IT-based solutions is a multidisciplinary task, so without the necessary competence in a particular area, the assessor must acknowledge this and ask for advice from experts with that knowledge, specifically in areas such as economics and the law.

Statistical methods are not designed to assess IT-based systems. They are general methods used, for instance, to support the processing of results from an assessment activity. Knowledge of basic statistical methods should be applied conscientiously. When such knowledge is lacking, one should get help from professionals or from the vast number of existing statistical textbooks, as early as during the planning stage of the assessment study. Descriptions of these methods are therefore not relevant to this book.

Commercial assessment tools, techniques, and methods based on already implemented software products have been excluded, unless the method or technique has been sufficiently well documented in its own right.

Verification of inference mechanisms, consistency checks of knowledge bases, and so on for knowledge-based systems are subject to the type of verification that specifically falls under development work and is classified as a type of assessment ('debugging'), which normal users are not expected to undertake. Therefore, this aspect has been excluded from this handbook.

There are no distinctions between different types of IT systems, such as EHR, laboratory information systems, hospital information systems, or decision-support systems. The only types of IT systems excluded are embedded IT systems - systems that do not have an independent user interface but that work as integral parts of medico-technical equipment, for example. Some of the methods may be more relevant to one type of IT system than to another; this would be the case in assessment of the precision of clinical diagnostics, for example. The handbook has been organized in such a way that the choice of methods follows a natural pattern. The keys to the methods are the development stage and the actual information need rather than the system type.

Similarly, assessments of knowledge-based IT systems (expert systems and decision-support systems) have not been described separately. In principle these systems are not different from other IT-based systems. The dissimilarity lies in their role within the organization and the consequential variation of the focus (points of measure) in an assessment study. Extraordinary demands are put on the technical correctness (as in diagnostic, prognostic, screening, and monitoring) of (user) assessments of knowledge-based systems, including specificity, sensitivity, and so on. Formally, however, this type of assessment is still classified as "Technical Verification" (see below). Other user assessments (i.e., Usability and User Satisfaction) work in the same way as in other types of IT systems.
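
For reference, and independently of this handbook's own method descriptions, the standard formulation of these two performance measures is: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP), where TP, FN, TN, and FP denote the numbers of true positive, false negative, true negative, and false positive results obtained when comparing the system's output against a frame of reference.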

Methods specific to embedded systems have also been excluded, such as software
components in various monitoring equipment, electronic pumps, and so forth.
This does not preclude that some of the methods can be used for ordinary
assessment purposes, while other specific information needs are referred to
technology assessment approaches within the medico-technical domain.

Acknowledgments
This book is based on the author's knowledge accumulated over a period of thirty years, initially twelve years specializing in Clinical Biochemistry, a medical domain strong in metrology. Subsequently the author gained knowledge from a number of research projects within Health Informatics under various EU Commission Framework Programmes¹ and from a PhD project financed by the Danish Research Council for Technical Science from 1995 to 1996.

All of this has contributed to a process of realization of what evaluation is really all about and what it means in a decision-making context. The author's three most appreciated colleagues during this process - as expert counterparts, collaborators, and coaches - are, in alphabetical order, Marie-Catherine Beuscart-Zéphir, cognitive psychologist from EVALAB, CERIM, Univ. 2, Lille, France; Peter McNair, Project Manager at the Copenhagen Hospital Corporation, Denmark; and Jan Talmon, Associate Professor at the Department of Medical Informatics, Maastricht University, Maastricht, Holland.

¹ The projects KAVAS (A1021), KAVAS-2 (A2019), OpenLabs (A2028), ISAR (A2052), and CANTOR (HC 4003), and the concerted actions COMAG-BME and ATIM.


The funding supporting the present synthesis comes from two primary and equal sources: (1) the CANTOR (HC4003) Healthcare Telematics Project under the European Commission's Fourth Framework Programme, leading to an early version of the framework in Part III, and (2) the MUP-IT project under the Danish Institute for Evaluation and Health Technology Assessment (CEMTV), which enabled the finalization of the framework and the analysis of the literature for sample cases. The author owes both her sincere gratitude.

The author would also like to thank the panel of reviewers of an early Danish
version of this handbook for the effort they have put into reviewing the book. This
handbook would not have been nearly as good had it not been for their extremely
constructive criticisms and suggestions for improvement. The panel of reviewers
consisted of Arne Kverneland, Head of Unit, and Søren Lippert, Consultant, both at the Health Informatics Unit at the National Board of Health; Pia Kopke, IT Project Consultant, the Informatics Department, the Copenhagen Hospital Corporation; Egil Boisen, Assistant Professor, the Department of Health Science
and Technology, Aalborg University; and Hallvard Laerum, PhD Student,
Digimed Senter, Trondheim, Norway. The final and formal review was carried out
by an anonymous reviewer, whom the author wishes to thank for pointing out the
areas where the book may have been unclear.

Furthermore, the author wishes to thank colleagues at the Department of Health Science and Technology and the Department of Social Development and Planning
at Aalborg University, who on many occasions listened to expositions of theories,
conceptual discussions, metaphors of validation, and the like, and who have all
volunteered as professional adversaries. On each occasion they have raised the
level and contributed to ensuring the best possible professional foundation. I also
thank the Danish Society for Medical Informatics and the EPJ Observatory for
giving encouragement and the opportunity to test and discuss some of the ideas of
the contents of the Handbook with its future users and for financing the printing of
the Danish version of the Handbook.

Additional Comments

Descriptions of individual methods and techniques are consciously stated with differing levels of detail: The less accessible the original reference, the more
detail is given for the method described. This gives the reader a chance to better
understand the actual method before deciding whether to disregard it or actively
search for it.

Registered trademarks from companies have been used within this document. It is acknowledged here that these trademarks are recognized, and this document in no way intends to infringe on any of the rights pertaining to these trademarks, especially copyrights. The document therefore does not contain a registered trademark symbol after any instance of a trademark.

Jytte Brender
University of Aalborg
June 2005

Part I: Introduction

1. Introduction
1.1 What Is Evaluation?

Evaluation can be defined as "acts related to measurement or exploration of a system's properties". In short, a system means "all the components, attributes, and relationships needed to accomplish an objective" (Haimes and Schneiter 1996).
Evaluation may be accomplished during planning, development, or operation and maintenance of an IT system. When put to its logical extreme, evaluation simply means to put numbers on some properties of the system, and, consequently, evaluation makes little sense as a self-contained and independent activity. The purpose of evaluation is to provide the basis for a decision about the IT system investigated in some decision-making context, and that decision-making context is also the context of the evaluation:
"Evaluation can be defined as the act of measuring or exploring properties o f a
health information system (in planning, development, implementation, or operation),
the result o f which informs a decision to be made concerning that system in a
specific context. '"
(Ammenwerth et al. 2004)

When evaluation is used in the context of checking whether an IT system - for instance, at delivery - fulfills a previously set agreement, then the activity is called verification. Similarly, the concept of validation is used when the decision-making context is concerned with whether or not the system suffices to fulfill its purpose. The concept of 'assessment' is used as a collective term when it is unnecessary to distinguish between the different types of purpose of an evaluation. For further details on the definitions, see Section 2.1.

It is not yet possible to write a cookbook for the assessment of IT-based systems
with step-by-step recipes of "do it this way". The number of aspects to be
investigated and types of systems are far too large. Consequently, as Symons and
Walsham (1988) express it:
"[W]e view evaluation not as the application o f a set of tools and techniques, but as
a process to be understood. By which we mean an understanding of the functions
and nature o f evaluation as well as its limitations and problems. "

In short: One has to understand what is going on and what is going to take place.

When addressing the issue of assessing an electronic healthcare record (abbreviated EHR) or another IT system within healthcare, the object of the

assessment activity is usually the entire organizational solution and not only the
technical construct. The book distinguishes between an 'IT system' and an 'IT-
based solution'. The term 'IT system' denotes the technical construct of the whole
solution (hardware, software, including basic software, and communication
network), while 'IT-based solution' refers to the IT system plus its surrounding
organization with its mission, conditions, structure, work procedures, and so on.
Thus, assessment of an IT-based solution is concerned not only with the IT
system, but also its interaction with its organizational environment and its mode
of operation within the organization. For instance, it includes actors (physicians,
nurses, and other types of healthcare staff, as well as patients), work procedures
and structured activities, as well as external stakeholders, and, last but not least, a
mandate and a series of internal and external conditions for the organization's
operation. Orthogonal to this, evaluation methods need to cope with aspects
ranging from the technical ones - via social and behavioral ones - to managerial
ones. The assessment activity must act on the basis of this wholeness, but it is of
course limited to what is relevant in the specific decision-making context.

The preceding short introduction to the concept of evaluation indicates that accomplishing an evaluation requires more than picking a standard method and using it. The important part is to understand the process within which the future result is going to be used, as well as the premises for obtaining a good result - before one can choose a method at all. Applying a given method in reality is probably the easiest yet most laborious part of an evaluation.

The first thing to make clear, before starting assessment activities at all, is "Why
do you want to evaluate?", "What is it going to be used for?", and "What will be
the consequence of a given outcome?" The answers to the questions "Why do you
want to evaluate?" and "What is it going to be used for?" are significant
determinants for which direction and approach one may pursue. Similarly, the
intended use of the study results is a significant factor for the commitment and
motivation of the involved parties and thereby also for the quality of the data upon
which the outcome rests.

There are natural limits to how much actual time one can spend on planning,
measuring, documenting, and analyzing, when assessment is an integrated part of
an ongoing implementation process (cf. the concept of 'constructive assessment'
in Section 2.1.3). Furthermore, if one merely needs the results for internal
purposes - for progressing in one decision-making context or another, for
example - then the level of ambition required for scientific publications may not
necessarily be needed. However, after the event, one has to be very careful in case
the option of publication, either as an article in a scientific journal or as a public
technical report, is suggested. The results may not be appropriate for publication
or may not be generalizable and thereby of no value to others. In this day and age,
with strict demands on evidence (cf. 'evidence-based medicine'), one must be
aware that the demands on the quality of a study are different when making one's
results publicly available.

Answers depend on the questions posed, and if one does not fully realize what is
possible and what is not for a given method, there is the risk that the answers will
not be very useful. It is rare that one is allowed to evaluate simply to gain more
knowledge of something.

A problem often encountered in assessment activities is that they lag behind the
overall investment in development or implementation. Some aspects can only be
investigated once the system is in use on a day-to-day basis. However, this is
when the argument "but it works" normally occurs - at least at a certain level -
and "Why invest in an assessment?" when the neighbor will be the one to benefit.
There must be an objective or a gain in assessing, or it is meaningless. Choice of
method should ensure this objective.

Furthermore, one should refrain from assessing just one part of the system or
within narrow premises and then think that the result can be used for an overall
political decision-making process. Similarly, it is too late to start measuring the
baseline against which a quantitative assessment should be evaluated once the
new IT system is installed, or one is totally immersed in the analysis and
installation work. By then it will in general be too late, as the organization has
already moved on.

It is also necessary to understand just what answers a given method can provide.
For instance, a questionnaire study cannot appropriately answer all questions,
albeit they are all posed. Questionnaire studies are constrained by a number of
psychological factors, which only allow one to scratch the surface, but not to
reach valid quantitative results (see Parts II and III). The explanation lies in the
difference between (1) what you do, (2) what you think you do, and (3) how you
actually do things and how you describe it (see Part III and Brender 1997a and
1999). There is a risk that a questionnaire study will give the first as the outcome,
an interview study the second (because you interact with the respondent to
increase mutual understanding), while the third outcome can normally only be
obtained through thorough observation. This difference does not come out of bad
will, but from conditions within the user organization that make it impossible for
the users to express themselves precisely and completely. Part III presents several
articles where the differences between two of the three aspects are shown by
triangulation (Kushniruk et al. 1997; Ostbye et al. 1997; Beuscart-Zéphir et al.
1997). However, the phenomenon is known from knowledge engineering (during
the development of expert systems) and from many other circumstances (Dreyfus
and Dreyfus 1986; Bansler and Havn 1991; Stage 1991; Barry 1995; Dreyfus
1997; and Patel and Kushniruk 1998). For instance, Bansler and Havn (1991)
express it quite plainly as follows:

"Approaches for system development implicitly assumes that documents such as a


functional specification and a system specification contain all relevant information
about the system being developed; developers achieve an operational image (a mental
image, a theory) of the solution as a kind of apprenticeship; the programmers'
knowledge about the program cannot be expressed and therefore communicated by
means of program specifications or other kinds of documentation."

In other words, one has to give the project careful thought before starting. The
essence of an assessment is:
There must be accordance between the aim, the premises, the process, and the actual application of the results - otherwise it might go wrong!

1.2 Instructions to the Reader

The Handbook of Evaluation Methods is divided into three main parts:
I. The Introduction is concerned with the terminology and the conceptualization fundamental to this handbook.
II. Part II contains descriptions of methods.
III. Part III comprises an exhaustive review of known perils and pitfalls for
experimental investigations with sample cases from the literature on
assessment of IT-based solutions in healthcare.

This Handbook of Evaluation Methods is not intended to be read like a normal book. It is an encyclopedia, a work of reference to be used when one needs
support for accomplishing a specific assessment study or when one needs
inspiration for the formulation of candidate themes for investigation. The
Handbook exemplifies available options.

It is recommended that one initially reads Chapters 1-3 of the Introduction to ensure an understanding of the terms and concepts. Then, when one has a feeling
of what one wants to explore and has recognized the state of the IT system within
its life cycle, it may be useful to familiarize oneself with the candidate methods in
Part II. Additionally, one should get acquainted with a number of the book
references, like (van Gennip and Talmon 1995; Friedman and Wyatt 1996; and
Coolican 1999). It is important to be familiar with the terminology, and with the
overall meaning of evaluation. The aforementioned references also give
instructions on how to get off to a sensible start.

When ready to take the next step, proceed from the beginning of Chapter 4 and
onwards, while adjusting the list of candidate methods based on the identified
information needs versus details of the specific methods and attributes of the case.

Get hold of the relevant original references from the literature and search for more
or newer references on the same methods or problem areas as applicable to you.

When a method (or a combination of several) is selected and planning is going on,
look through Part III to verify that everything is up to your needs or even better.
Part III is designed for carrying out an overall analysis and judgment of the
validity of an assessment study. However, it is primarily written for experienced
evaluators, as the information requires prior know-how on the subtlety of
experimental work. Nevertheless, in case the description of a given method in Part
II mentions a specific pitfall, less experienced evaluators should also get
acquainted with these within Part III in order to judge the practical implication
and to correct or compensate for weaknesses in the planning.

1.3 Metaphor for the Handbook

As discussed above, the point of departure for this handbook is that one cannot
make a cookbook with recipes on how to evaluate. Evaluation is fairly difficult; it
depends on one's specific information need (the question to be answered by the
evaluation study), on the demands for accuracy and precision, on the project
development methods (for constructive assessment), on preexisting material, and
so forth.

Descriptions of evaluation methods and their approaches are usually fairly easy to retrieve from the literature, and the target audience is used to making literature searches. This was discussed during meetings with a range of target users at an early stage of the preparation of the handbook. They explicitly stated that they can easily retrieve and read the original literature as long as they have good references.

As a consequence of this, and of the huge number of biases in existing reports on assessment studies demonstrated in the review in Part III, it was decided to exclude exhaustive descriptions of the individual methods. Instead, the emphasis is on aspects like assumptions for application and tacit built-in perspectives of the methods, as well as their perils and pitfalls. Authors of methods rarely describe this kind of information themselves, and it is very difficult for nonexperts to look through methods and cases in the literature to identify potential problems during application.

Consequently, the emphasis of this handbook is on providing the information that ordinary members of the target group do not have the background and experience to recognize. This includes experimental perils and pitfalls.
