HANDBOOK OF EVALUATION METHODS FOR HEALTH INFORMATICS

Jytte Brender
University of Aalborg, Denmark

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

ELSEVIER
The front page illustration has been reprinted from Brender J. Methodology for Assessment of
Medical IT-based Systems - in an Organisational Context. Amsterdam: IOS Press, Studies in
Health Technology and Informatics 1997; 42, with permission.
Permissions may be sought directly from Elsevier's Science & Technology Rights
Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333,
E-mail: permissions@elsevier.com. You may also complete your request on-line
via the Elsevier homepage (http://elsevier.com), by selecting
"Customer Support" and then "Obtaining Permissions."
ISBN 13: 978-0-12-370464-1
ISBN 10: 0-12-370464-2
Contents
PART I: INTRODUCTION 1
INTRODUCTION 3
1.1 What Is Evaluation? 3
1.2 Instructions to the Reader 6
1.3 Metaphor for the Handbook 7
CONCEPTUAL APPARATUS 9
2.1 Evaluation and Related Concepts 9
2.1.1 Definitions 9
2.1.2 Summative Assessment 11
2.1.3 Constructive Assessment 11
2.2 Methodology, Method, Technique, and Framework 12
2.2.1 Method 13
2.2.2 Technique 13
2.2.3 Measures and Metrics 14
2.2.4 Methodology 14
2.2.5 Framework 15
2.3 Quality Management 16
2.4 Perspective 18
2.4.1 Example: Cultural Dependence on Management Principles 19
2.4.2 Example: Diagramming Techniques 20
2.4.3 Example: Value Norms in Quality Development 20
2.4.4 Example: Assumptions of People's Abilities 21
2.4.5 Example: The User Concept 21
2.4.6 Example: The Administrative Perspective 22
2.4.7 Example: Interaction between Development and Assessment Activities 23
2.4.8 Example: Interaction between Human and Technical Aspects 24
2.5 Evaluation Viewed in the Light of the IT System's Life Cycle 25
3. TYPES OF USER ASSESSMENTS OF IT-BASED SOLUTIONS 29
5. INTRODUCTION 53
5.1 Signature Explanations 54
5.1.1 Application Range within the IT System's Life Cycle 54
5.1.2 Applicability in Different Contexts 56
5.1.3 Type of Assessment 57
5.1.4 Use of Italics in the Method Descriptions 58
5.2 Structure of Methods' Descriptions 58
6. OVERVIEW OF ASSESSMENT METHODS 61
6.1 Overview of Assessment Methods: Explorative Phase 61
6.2 Overview of Assessment Methods: Technical Development Phase 64
6.3 Overview of Assessment Methods: Adaptation Phase 65
6.4 Overview of Assessment Methods: Evolution Phase 68
6.5 Other Useful Information 72
7. DESCRIPTIONS OF METHODS AND TECHNIQUES 73
Analysis of Work Procedures 73
Assessment of Bids 78
Balanced Scorecard 85
BIKVA 88
Clinical/Diagnostic Performance 91
Cognitive Assessment 96
Cognitive Walkthrough 102
Delphi 106
Equity Implementation Model 109
Field Study 111
Focus Group Interview 116
Functionality Assessment 120
Future Workshop 125
Grounded Theory 128
Heuristic Assessment 132
Impact Assessment 135
Interview 142
KUBI 147
Logical Framework Approach 149
Organizational Readiness 154
Pardizipp 156
Prospective Time Series 159
Questionnaire 163
RCT, Randomized Controlled Trial 172
Requirements Assessment 180
Risk Assessment 185
Root Causes Analysis 188
Social Networks Analysis 190
Stakeholder Analysis 192
SWOT 196
Technical Verification 199
Think Aloud 204
Usability 207
User Acceptance and Satisfaction 215
Videorecording 219
WHO: Framework for Assessment of Strategies 222
8. OTHER USEFUL INFORMATION 227
Documentation in a Situation of Accreditation 227
Measures and Metrics 232
Standards 238
PART III: METHODOLOGICAL AND METHODICAL PERILS AND PITFALLS AT ASSESSMENT 243
NOTE TO THE READER
This Handbook of Evaluation Methods is a translated and updated version of a
combination of the following two publications:
• Brender J. Handbook of Methods in Technology Assessment of IT-based Solutions within Healthcare. Aalborg: EPJ-Observatoriet, ISBN: 87-91424-04-6, June 2004. 238 pp (in Danish).
• Brender J. Methodological and methodical perils and pitfalls within assessment studies performed on IT-based solutions in healthcare. Aalborg: Virtual Centre for Health Informatics; 2003 May. Report No.: 03-1 (ISSN 1397-9507). 69 pp.
"[W]e view evaluation not as the application o f a set o f tools and techniques, but as
a process to be understood. By which we mean an understanding o f the functions
and nature o f evaluation as well as its limitations and problems. ""
(Symons and Walsham 1988)
The primary aim of this book is to illustrate options for finding appropriate tools
within the literature and then to support the user in accomplishing an assessment
study without too many disappointments. There are substantial differences between
what developers and users should assess with regard to an IT-based solution. This
book deals solely with assessments as seen from the users' point of view.
The literature contains thousands of reports on assessment studies of IT-based
systems and solutions, specifically of systems within the healthcare sector.
However, only a fraction of these are dedicated to describing the assessment
activities themselves. Consequently, from an assessment perspective most of them
have to be considered superficial and rather useless as model examples. Only the
best and paradigmatic examples of evaluation methods are included in this book.
Please note the terminology of the book: the notions of evaluation and assessment
are defined in Section 2.1. Briefly, evaluation means to measure characteristics
(in a decision-making context), while assessment is used in an overall sense that
does not distinguish between the retrospective objectives of the study and
therefore not between evaluation, verification, and validation.
This book deals mainly with current user-oriented assessment methods. It includes
methods that give the users a fair chance of accomplishing all or parts of an
investigation leading to a professionally satisfactory answer to an actual
information need.
Not all of the methods included were originally developed and presented in the
literature as evaluation methods, but nevertheless they may be applicable either
directly, as evaluation methods for specific purposes, or indirectly, as support in
an evaluation context. An example is the Delphi method, developed for the
American military to predict future trends. Another example is diagramming
techniques for modelling workflow, which in some instances constitute a practical
way of modelling, such as in assessing effect or impact or in field studies, and so
forth.
The book is primarily aimed at the health sector, from which the chosen
illustrative examples are taken. The contents have been collected from many
different specialist areas, and the material has been chosen and put together in
order to cover the needs of assessment for IT-based solutions in the health sector.
However, this does not preclude its use within other sectors as well.
Target Group
It is important to remember that a handbook of methods is not a textbook but a
reference book enabling the reader to get inspiration and support when completing
a set task and/or as a basis for further self-education. Handbooks, for use in
natural sciences for instance, are typically aimed at advanced users and their level
of knowledge. The structure of this handbook has similarly been aimed at the
level of user with a profile as described below.
The target readers of this book constitute all professionals within the healthcare
sector, including IT professionals. However, an evaluation method is not
something that one pulls out of a hat, a drawer, or even from books, and then uses
without reflection and meticulous care. Desirable competences and personal
qualifications are therefore listed in the next section.
Statistical methods are not designed to assess IT-based systems. They are general
methods used, for instance, to support the processing of results from an
assessment activity. Knowledge of basic statistical methods should be applied
conscientiously. When such knowledge is lacking, one should seek help from
professionals or from the vast number of existing statistical textbooks, as early as
during the planning stage of the assessment study. Descriptions of these methods
are therefore beyond the scope of this book.
natural pattern. The keys to the methods are the development stage and an actual
need for information rather than for a system type.
Methods specific to embedded systems have also been excluded, such as software
components in various monitoring equipment, electronic pumps, and so forth.
This does not preclude that some of these methods can be used for ordinary
assessment purposes, while for other, more specific information needs the reader
is referred to technology assessment approaches within the medico-technical domain.
Acknowledgments
This book is based on the author's knowledge accumulated over a period of thirty
years, initially twelve years specializing in Clinical Biochemistry, a medical
domain strong in metrology. Subsequently the author gained knowledge from a
number of research projects within Health Informatics under various EU
Commission Framework Programmes¹, and from a PhD project financed by the
Danish Research Council for Technical Science from 1995 to 1996.
¹ The projects KAVAS (A1021), KAVAS-2 (A2019), OpenLabs (A2028), ISAR (A2052), and
CANTOR (HC 4003), and the concerted actions COMAG-BME and ATIM.
The funding supporting the present synthesis comes from two primary and equal
sources: (1) the CANTOR (HC4003) Healthcare Telematics Project under the
European Commission's Fourth Framework Programme, leading to an early
version of the framework in Part III, and (2) the MUP-IT project under the Danish
Institute for Evaluation and Health Technology Assessment (CEMTV) that
enabled the finalization of the framework and the analysis of the literature for
sample cases. The author owes both her sincere gratitude.
The author would also like to thank the panel of reviewers of an early Danish
version of this handbook for the effort they have put into reviewing the book. This
handbook would not have been nearly as good had it not been for their extremely
constructive criticisms and suggestions for improvement. The panel of reviewers
consisted of Arne Kverneland, Head of Unit, and Søren Lippert, Consultant, both
at the Health Informatics Unit at the National Board of Health; Pia Kopke, IT
Project Consultant, the Informatics Department, the Copenhagen Hospital
Cooperation; Egil Boisen, Assistant Professor, the Department of Health Science
and Technology, Aalborg University; and Hallvard Laerum, PhD Student,
Digimed Senter, Trondheim, Norway. The final and formal review was carried out
by an anonymous reviewer, whom the author wishes to thank for pointing out the
areas where the book may have been unclear.
Additional Comments
Registered trademarks from companies have been used within this document. It is
acknowledged here that these trademarks are recognized and that this document in
no way intends to infringe upon them.
Jytte Brender
University of Aalborg
June 2005
Part I: Introduction
1. Introduction
1.1 What Is Evaluation?
It is not yet possible to write a cookbook for the assessment of IT-based systems
with step-by-step recipes of "do it this way". The number of aspects to be
investigated and types of systems are far too large. Consequently, as Symons and
Walsham (1988) express it:
"[W]e view evaluation not as the application o f a set of tools and techniques, but as
a process to be understood. By which we mean an understanding of the functions
and nature o f evaluation as well as its limitations and problems. "
In short: One has to understand what is going on and what is going to take place.
The object of an assessment activity is usually the entire organizational solution and not only the
technical construct. The book distinguishes between an 'IT system' and an 'IT-
based solution'. The term 'IT system' denotes the technical construct of the whole
solution (hardware, software, including basic software, and communication
network), while 'IT-based solution' refers to the IT system plus its surrounding
organization with its mission, conditions, structure, work procedures, and so on.
Thus, assessment of an IT-based solution is concerned not only with the IT
system, but also its interaction with its organizational environment and its mode
of operation within the organization. For instance, it includes actors (physicians,
nurses, and other types of healthcare staff, as well as patients), work procedures
and structured activities, as well as external stakeholders, and, last but not least, a
mandate and a series of internal and external conditions for the organization's
operation. Orthogonal to this, evaluation methods need to cope with aspects
ranging from the technical, through the social and behavioral, to the managerial.
The assessment activity must act on the basis of this wholeness, but it is of
course limited to what is relevant in the specific decision-making context.
The first thing to make clear, before starting assessment activities at all, is "Why
do you want to evaluate?", "What is it going to be used for?", and "What will be
the consequence of a given outcome?" The answers to the questions "Why do you
want to evaluate?" and "What is it going to be used for?" are significant
determinants for which direction and approach one may pursue. Similarly, the
intended use of the study results is a significant factor for the commitment and
motivation of the involved parties and thereby also for the quality of the data upon
which the outcome rests.
There are natural limits to how much actual time one can spend on planning,
measuring, documenting, and analyzing, when assessment is an integrated part of
an ongoing implementation process (cf. the concept of 'constructive assessment'
in Section 2.1.3). Furthermore, if one merely needs the results for internal
purposes - for progressing in one decision-making context or another, for
example - then the level of ambition required for scientific publications may not
necessarily be needed. However, after the event, one has to be very careful in case
the option of publication, either as an article in a scientific journal or as a public
technical report, is suggested. The results may not be appropriate for publication
or may not be generalizable and thereby of no value to others. In this day and age,
with strict demands on evidence (cf. 'evidence-based medicine'), one must be
I-IANE~OOI< O~ EVALUATION METHODS
aware that the demands on the quality of a study are different when making one's
results publicly available.
Answers depend on the questions posed, and if one does not fully realize what is
possible and what is not for a given method, there is the risk that the answers will
not be very useful. It is rare that one is allowed to evaluate simply to gain more
knowledge of something.
A problem often encountered in assessment activities is that they lag behind the
overall investment in development or implementation. Some aspects can only be
investigated once the system is in use on a day-to-day basis. However, this is
when the argument "but it works" normally occurs - at least at a certain level -
and "Why invest in an assessment?" when the neighbor will be the one to benefit.
There must be an objective or a gain in assessing, or it is meaningless. Choice of
method should ensure this objective.
Furthermore, one should refrain from assessing just one part of the system or
within narrow premises and then think that the result can be used for an overall
political decision-making process. Similarly, it is too late to start measuring the
baseline, against which a quantitative assessment should be evaluated, once the
new IT system is installed or one is totally immersed in the analysis and
installation work; by then the organization has already moved on.
It is also necessary to understand just what answers a given method can provide.
For instance, a questionnaire study cannot appropriately answer every question,
even though all of them may be posed. Questionnaire studies are constrained by a
number of psychological factors, which allow one only to scratch the surface but
not to reach valid quantitative results (see Parts II and III). The explanation lies in the
difference between (1) what you do, (2) what you think you do, and (3) how you
actually do things and how you describe it (see Part III and Brender 1997a and
1999). There is a risk that a questionnaire study will give the first as the outcome,
an interview study the second (because you interact with the respondent to
increase mutual understanding), while the third outcome can normally only be
obtained through thorough observation. This difference does not come out of bad
will, but from conditions within the user organization that make it impossible for
the users to express themselves precisely and completely. Part III presents several
articles where the differences between two of the three aspects are shown by
triangulation (Kushniruk et al. 1997; Ostbye et al. 1997; Beuscart-Zéphir et al.
1997). However, the phenomenon is known from knowledge engineering (during
the development of expert systems) and from many other circumstances (Dreyfus
and Dreyfus 1986; Bansler and Havn 1991; Stage 1991; Barry 1995; Dreyfus
1997; and Patel and Kushniruk 1998). For instance, Bansler and Havn (1991)
express it quite plainly as follows:
In other words, one has to give the project careful thought before starting. The
essence of an assessment is:
There must be accordance between the aim, the premises, the process, and
the actual application of the results - otherwise it might go wrong!
When ready to take the next step, proceed from the beginning of Chapter 4 and
onwards, while adjusting the list of candidate methods based on the identified
information needs versus details of the specific methods and attributes of the case.
Get hold of the relevant original references from the literature and search for more
or newer references on the same methods or problem areas as applicable to you.
When a method (or a combination of several) is selected and planning is going on,
look through Part III to verify that everything is up to your needs or even better.
Part III is designed for carrying out an overall analysis and judgment of the
validity of an assessment study. However, it is primarily written for experienced
evaluators, as the information requires prior know-how on the subtlety of
experimental work. Nevertheless, in case the description of a given method in Part
II mentions a specific pitfall, less experienced evaluators should also get
acquainted with these within Part III in order to judge the practical implication
and to correct or compensate for weaknesses in the planning.
As discussed above, the point of departure for this handbook is that one cannot
make a cookbook with recipes on how to evaluate. Evaluation is fairly difficult; it
depends on one's specific information need (the question to be answered by the
evaluation study), on the demands for accuracy and precision, on the project
development methods (for constructive assessment), on preexisting material, and
so forth.
Descriptions of evaluation methods and their approaches are usually fairly easy to
retrieve from the literature, and the target audience is used to making literature
searches. This was discussed during meetings with a range of target users at an
early stage of the preparation of the handbook. They explicitly stated that they can
easily retrieve and read the original literature as long as they have good
references.